Based on Penalty Function Method
Directory of Open Access Journals (Sweden)
Ishaq Baba
2015-01-01
Full Text Available The dual response surface for simultaneously optimizing the mean and variance models as separate functions suffers some deficiencies in handling the tradeoffs between bias and variance components of mean squared error (MSE. In this paper, the accuracy of the predicted response is given a serious attention in the determination of the optimum setting conditions. We consider four different objective functions for the dual response surface optimization approach. The essence of the proposed method is to reduce the influence of variance of the predicted response by minimizing the variability relative to the quality characteristics of interest and at the same time achieving the specific target output. The basic idea is to convert the constraint optimization function into an unconstraint problem by adding the constraint to the original objective function. Numerical examples and simulations study are carried out to compare performance of the proposed method with some existing procedures. Numerical results show that the performance of the proposed method is encouraging and has exhibited clear improvement over the existing approaches.
Dominant partition method. [based on a wave function formalism
Dixon, R. M.; Redish, E. F.
1979-01-01
By use of the L'Huillier, Redish, and Tandy (LRT) wave function formalism, a partially connected method, the dominant partition method (DPM) is developed for obtaining few body reductions of the many body problem in the LRT and Bencze, Redish, and Sloan (BRS) formalisms. The DPM maps the many body problem to a fewer body one by using the criterion that the truncated formalism must be such that consistency with the full Schroedinger equation is preserved. The DPM is based on a class of new forms for the irreducible cluster potential, which is introduced in the LRT formalism. Connectivity is maintained with respect to all partitions containing a given partition, which is referred to as the dominant partition. Degrees of freedom corresponding to the breakup of one or more of the clusters of the dominant partition are treated in a disconnected manner. This approach for simplifying the complicated BRS equations is appropriate for physical problems where a few body reaction mechanism prevails.
Numerical methods for characterization of synchrotron radiation based on the Wigner function method
Directory of Open Access Journals (Sweden)
Takashi Tanaka
2014-06-01
Full Text Available Numerical characterization of synchrotron radiation based on the Wigner function method is explored in order to accurately evaluate the light source performance. A number of numerical methods to compute the Wigner functions for typical synchrotron radiation sources such as bending magnets, undulators and wigglers, are presented, which significantly improve the computation efficiency and reduce the total computation time. As a practical example of the numerical characterization, optimization of betatron functions to maximize the brilliance of undulator radiation is discussed.
GA Based Optimal Feature Extraction Method for Functional Data Classification
Jun Wan; Zehua Chen; Yingwu Chen; Zhidong Bai
2010-01-01
Classification is an interesting problem in functional data analysis (FDA), because many science and application problems end up with classification problems, such as recognition, prediction, control, decision making, management, etc. As the high dimension and high correlation in functional data (FD), it is a key problem to extract features from FD whereas keeping its global characters, which relates to the classification efficiency and precision to heavens. In this paper...
Directory of Open Access Journals (Sweden)
Hailun Wang
2017-01-01
Full Text Available Support vector regression algorithm is widely used in fault diagnosis of rolling bearing. A new model parameter selection method for support vector regression based on adaptive fusion of the mixed kernel function is proposed in this paper. We choose the mixed kernel function as the kernel function of support vector regression. The mixed kernel function of the fusion coefficients, kernel function parameters, and regression parameters are combined together as the parameters of the state vector. Thus, the model selection problem is transformed into a nonlinear system state estimation problem. We use a 5th-degree cubature Kalman filter to estimate the parameters. In this way, we realize the adaptive selection of mixed kernel function weighted coefficients and the kernel parameters, the regression parameters. Compared with a single kernel function, unscented Kalman filter (UKF support vector regression algorithms, and genetic algorithms, the decision regression function obtained by the proposed method has better generalization ability and higher prediction accuracy.
Directory of Open Access Journals (Sweden)
Ayşe Betül Koç
2014-01-01
Full Text Available A pseudospectral method based on the Fibonacci operational matrix is proposed to solve generalized pantograph equations with linear functional arguments. By using this method, approximate solutions of the problems are easily obtained in form of the truncated Fibonacci series. Some illustrative examples are given to verify the efficiency and effectiveness of the proposed method. Then, the numerical results are compared with other methods.
Chen, Zhaoxue; Chen, Hao
2014-01-01
A deconvolution method based on the Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and Gaussian point spread function are expressed as the same continuous GRBF model, thus image degradation is simplified as convolution of two continuous Gaussian functions, and image deconvolution is converted to calculate the weighted coefficients of two-dimensional control points. Compared with Wiener filter and Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of restored images. In order to overcome such a defect of long-time computing, the method of graphic processing unit multithreading or increasing space interval of control points is adopted, respectively, to speed up the implementation of GRBF method. The experiments show that based on the continuous GRBF model, the image deconvolution can be efficiently implemented by the method, which also has a considerable reference value for the study of three-dimensional microscopic image deconvolution.
Directory of Open Access Journals (Sweden)
Zhenxiang Jiang
2016-01-01
Full Text Available The traditional methods of diagnosing dam service status are always suitable for single measuring point. These methods also reflect the local status of dams without merging multisource data effectively, which is not suitable for diagnosing overall service. This study proposes a new method involving multiple points to diagnose dam service status based on joint distribution function. The function, including monitoring data of multiple points, can be established with t-copula function. Therefore, the possibility, which is an important fusing value in different measuring combinations, can be calculated, and the corresponding diagnosing criterion is established with typical small probability theory. Engineering case study indicates that the fusion diagnosis method can be conducted in real time and the abnormal point can be detected, thereby providing a new early warning method for engineering safety.
Feng, Shou; Fu, Ping; Zheng, Wenbin
2018-03-01
Predicting gene function based on biological instrumental data is a complicated and challenging hierarchical multi-label classification (HMC) problem. When using local approach methods to solve this problem, a preliminary results processing method is usually needed. This paper proposed a novel preliminary results processing method called the nodes interaction method. The nodes interaction method revises the preliminary results and guarantees that the predictions are consistent with the hierarchy constraint. This method exploits the label dependency and considers the hierarchical interaction between nodes when making decisions based on the Bayesian network in its first phase. In the second phase, this method further adjusts the results according to the hierarchy constraint. Implementing the nodes interaction method in the HMC framework also enhances the HMC performance for solving the gene function prediction problem based on the Gene Ontology (GO), the hierarchy of which is a directed acyclic graph that is more difficult to tackle. The experimental results validate the promising performance of the proposed method compared to state-of-the-art methods on eight benchmark yeast data sets annotated by the GO.
Cross-Correlation-Function-Based Multipath Mitigation Method for Sine-BOC Signals
Directory of Open Access Journals (Sweden)
H. H. Chen
2012-06-01
Full Text Available Global Navigation Satellite Systems (GNSS positioning accuracy indoor and urban canyons environments are greatly affected by multipath due to distortions in its autocorrelation function. In this paper, a cross-correlation function between the received sine phased Binary Offset Carrier (sine-BOC modulation signal and the local signal is studied firstly, and a new multipath mitigation method based on cross-correlation function for sine-BOC signal is proposed. This method is implemented to create a cross-correlation function by designing the modulated symbols of the local signal. The theoretical analysis and simulation results indicate that the proposed method exhibits better multipath mitigation performance compared with the traditional Double Delta Correlator (DDC techniques, especially the medium/long delay multipath signals, and it is also convenient and flexible to implement by using only one correlator, which is the case of low-cost mass-market receivers.
New Method for Mesh Moving Based on Radial Basis Function Interpolation
De Boer, A.; Van der Schoot, M.S.; Bijl, H.
2006-01-01
A new point-by-point mesh movement algorithm is developed for the deformation of unstructured grids. The method is based on using radial basis function, RBFs, to interpolate the displacements of the boundary nodes to the whole flow mesh. A small system of equations has to be solved, only involving
Modulation transfer function (MTF) measurement method based on support vector machine (SVM)
Zhang, Zheng; Chen, Yueting; Feng, Huajun; Xu, Zhihai; Li, Qi
2016-03-01
An imaging system's spatial quality can be expressed by the system's modulation spread function (MTF) as a function of spatial frequency in terms of the linear response theory. Methods have been proposed to assess the MTF of an imaging system using point, slit or edge techniques. The edge method is widely used for the low requirement of targets. However, the traditional edge methods are limited by the edge angle. Besides, image noise will impair the measurement accuracy, making the measurement result unstable. In this paper, a novel measurement method based on the support vector machine (SVM) is proposed. Image patches with different edge angles and MTF levels are generated as the training set. Parameters related with MTF and image structure are extracted from the edge images. Trained with image parameters and the corresponding MTF, the SVM classifier can assess the MTF of any edge image. The result shows that the proposed method has an excellent performance on measuring accuracy and stability.
Estimation of functional failure probability of passive systems based on subset simulation method
International Nuclear Information System (INIS)
Wang Dongqing; Wang Baosheng; Zhang Jianmin; Jiang Jing
2012-01-01
In order to solve the problem of multi-dimensional epistemic uncertainties and small functional failure probability of passive systems, an innovative reliability analysis algorithm called subset simulation based on Markov chain Monte Carlo was presented. The method is found on the idea that a small failure probability can be expressed as a product of larger conditional failure probabilities by introducing a proper choice of intermediate failure events. Markov chain Monte Carlo simulation was implemented to efficiently generate conditional samples for estimating the conditional failure probabilities. Taking the AP1000 passive residual heat removal system, for example, the uncertainties related to the model of a passive system and the numerical values of its input parameters were considered in this paper. And then the probability of functional failure was estimated with subset simulation method. The numerical results demonstrate that subset simulation method has the high computing efficiency and excellent computing accuracy compared with traditional probability analysis methods. (authors)
Hybrid ICA-Seed-Based Methods for fMRI Functional Connectivity Assessment: A Feasibility Study
Directory of Open Access Journals (Sweden)
Robert E. Kelly
2010-01-01
Full Text Available Brain functional connectivity (FC is often assessed from fMRI data using seed-based methods, such as those of detecting temporal correlation between a predefined region (seed and all other regions in the brain; or using multivariate methods, such as independent component analysis (ICA. ICA is a useful data-driven tool, but reproducibility issues complicate group inferences based on FC maps derived with ICA. These reproducibility issues can be circumvented with hybrid methods that use information from ICA-derived spatial maps as seeds to produce seed-based FC maps. We report results from five experiments to demonstrate the potential advantages of hybrid ICA-seed-based FC methods, comparing results from regressing fMRI data against task-related a priori time courses, with “back-reconstruction” from a group ICA, and with five hybrid ICA-seed-based FC methods: ROI-based with (1 single-voxel, (2 few-voxel, and (3 many-voxel seed; and dual-regression-based with (4 single ICA map and (5 multiple ICA map seed.
A point-value enhanced finite volume method based on approximate delta functions
Xuan, Li-Jun; Majdalani, Joseph
2018-02-01
We revisit the concept of an approximate delta function (ADF), introduced by Huynh (2011) [1], in the form of a finite-order polynomial that holds identical integral properties to the Dirac delta function when used in conjunction with a finite-order polynomial integrand over a finite domain. We show that the use of generic ADF polynomials can be effective at recovering and generalizing several high-order methods, including Taylor-based and nodal-based Discontinuous Galerkin methods, as well as the Correction Procedure via Reconstruction. Based on the ADF concept, we then proceed to formulate a Point-value enhanced Finite Volume (PFV) method, which stores and updates the cell-averaged values inside each element as well as the unknown quantities and, if needed, their derivatives on nodal points. The sharing of nodal information with surrounding elements saves the number of degrees of freedom compared to other compact methods at the same order. To ensure conservation, cell-averaged values are updated using an identical approach to that adopted in the finite volume method. Here, the updating of nodal values and their derivatives is achieved through an ADF concept that leverages all of the elements within the domain of integration that share the same nodal point. The resulting scheme is shown to be very stable at successively increasing orders. Both accuracy and stability of the PFV method are verified using a Fourier analysis and through applications to the linear wave and nonlinear Burgers' equations in one-dimensional space.
Vedadi, Farhang; Shirani, Shahram
2014-01-01
A new method of image resolution up-conversion (image interpolation) based on maximum a posteriori sequence estimation is proposed. Instead of making a hard decision about the value of each missing pixel, we estimate the missing pixels in groups. At each missing pixel of the high resolution (HR) image, we consider an ensemble of candidate interpolation methods (interpolation functions). The interpolation functions are interpreted as states of a Markov model. In other words, the proposed method undergoes state transitions from one missing pixel position to the next. Accordingly, the interpolation problem is translated to the problem of estimating the optimal sequence of interpolation functions corresponding to the sequence of missing HR pixel positions. We derive a parameter-free probabilistic model for this to-be-estimated sequence of interpolation functions. Then, we solve the estimation problem using a trellis representation and the Viterbi algorithm. Using directional interpolation functions and sequence estimation techniques, we classify the new algorithm as an adaptive directional interpolation using soft-decision estimation techniques. Experimental results show that the proposed algorithm yields images with higher or comparable peak signal-to-noise ratios compared with some benchmark interpolation methods in the literature while being efficient in terms of implementation and complexity considerations.
Wave resistance calculation method combining Green functions based on Rankine and Kelvin source
Directory of Open Access Journals (Sweden)
LI Jingyu
2017-12-01
Full Text Available [Ojectives] At present, the Boundary Element Method(BEM of wave-making resistance mostly uses a model in which the velocity distribution near the hull is solved first, and the pressure integral is then calculated using the Bernoulli equation. However,the process of this model of wave-making resistance is complex and has low accuracy.[Methods] To address this problem, the present paper deduces a compound method for the quick calculation of ship wave resistance using the Rankine source Green function to solve the hull surface's source density, and combining the Lagally theorem concerning source point force calculation based on the Kelvin source Green function so as to solve the wave resistance. A case for the Wigley model is given.[Results] The results show that in contrast to the thin ship method of the linear wave resistance theorem, this method has higher precision, and in contrast to the method which completely uses the Kelvin source Green function, this method has better computational efficiency.[Conclusions] In general, the algorithm in this paper provides a compromise between precision and efficiency in wave-making resistance calculation.
Asiri, Sharefa M.
2017-10-08
Partial Differential Equations (PDEs) are commonly used to model complex systems that arise for example in biology, engineering, chemistry, and elsewhere. The parameters (or coefficients) and the source of PDE models are often unknown and are estimated from available measurements. Despite its importance, solving the estimation problem is mathematically and numerically challenging and especially when the measurements are corrupted by noise, which is often the case. Various methods have been proposed to solve estimation problems in PDEs which can be classified into optimization methods and recursive methods. The optimization methods are usually heavy computationally, especially when the number of unknowns is large. In addition, they are sensitive to the initial guess and stop condition, and they suffer from the lack of robustness to noise. Recursive methods, such as observer-based approaches, are limited by their dependence on some structural properties such as observability and identifiability which might be lost when approximating the PDE numerically. Moreover, most of these methods provide asymptotic estimates which might not be useful for control applications for example. An alternative non-asymptotic approach with less computational burden has been proposed in engineering fields based on the so-called modulating functions. In this dissertation, we propose to mathematically and numerically analyze the modulating functions based approaches. We also propose to extend these approaches to different situations. The contributions of this thesis are as follows. (i) Provide a mathematical analysis of the modulating function-based method (MFBM) which includes: its well-posedness, statistical properties, and estimation errors. (ii) Provide a numerical analysis of the MFBM through some estimation problems, and study the sensitivity of the method to the modulating functions\\' parameters. (iii) Propose an effective algorithm for selecting the method\\'s design parameters
A new CFD based non-invasive method for functional diagnosis of coronary stenosis.
Xie, Xinzhou; Zheng, Minwen; Wen, Didi; Li, Yabing; Xie, Songyun
2018-03-22
Accurate functional diagnosis of coronary stenosis is vital for decision making in coronary revascularization. With recent advances in computational fluid dynamics (CFD), fractional flow reserve (FFR) can be derived non-invasively from coronary computed tomography angiography images (FFR CT ) for functional measurement of stenosis. However, the accuracy of FFR CT is limited due to the approximate modeling approach of maximal hyperemia conditions. To overcome this problem, a new CFD based non-invasive method is proposed. Instead of modeling maximal hyperemia condition, a series of boundary conditions are specified and those simulated results are combined to provide a pressure-flow curve for a stenosis. Then, functional diagnosis of stenosis is assessed based on parameters derived from the obtained pressure-flow curve. The proposed method is applied to both idealized and patient-specific models, and validated with invasive FFR in six patients. Results show that additional hemodynamic information about the flow resistances of a stenosis is provided, which cannot be directly obtained from anatomy information. Parameters derived from the simulated pressure-flow curve show a linear and significant correlations with invasive FFR (r > 0.95, P < 0.05). The proposed method can assess flow resistances by the pressure-flow curve derived parameters without modeling of maximal hyperemia condition, which is a new promising approach for non-invasive functional assessment of coronary stenosis.
Zhu, Lingyu; Ji, Shengchang; Shen, Qi; Liu, Yuan; Li, Jinyu; Liu, Hao
2013-01-01
The capacitors in high-voltage direct-current (HVDC) converter stations radiate a lot of audible noise which can reach higher than 100 dB. The existing noise level prediction methods are not satisfying enough. In this paper, a new noise level prediction method is proposed based on a frequency response function considering both electrical and mechanical characteristics of capacitors. The electro-mechanical frequency response function (EMFRF) is defined as the frequency domain quotient of the vibration response and the squared capacitor voltage, and it is obtained from impulse current experiment. Under given excitations, the vibration response of the capacitor tank is the product of EMFRF and the square of the given capacitor voltage in frequency domain, and the radiated audible noise is calculated by structure acoustic coupling formulas. The noise level under the same excitations is also measured in laboratory, and the results are compared with the prediction. The comparison proves that the noise prediction method is effective.
International Nuclear Information System (INIS)
Huh, Jae Sung; Kwak, Byung Man
2011-01-01
Robust optimization or reliability-based design optimization are some of the methodologies that are employed to take into account the uncertainties of a system at the design stage. For applying such methodologies to solve industrial problems, accurate and efficient methods for estimating statistical moments and failure probability are required, and further, the results of sensitivity analysis, which is needed for searching direction during the optimization process, should also be accurate. The aim of this study is to employ the function approximation moment method into the sensitivity analysis formulation, which is expressed as an integral form, to verify the accuracy of the sensitivity results, and to solve a typical problem of reliability-based design optimization. These results are compared with those of other moment methods, and the feasibility of the function approximation moment method is verified. The sensitivity analysis formula with integral form is the efficient formulation for evaluating sensitivity because any additional function calculation is not needed provided the failure probability or statistical moments are calculated
International Nuclear Information System (INIS)
Shimazu, Yoichiro; Tashiro, Shoichi; Tojo, Masayuki
2017-01-01
The performance of two digital reactivity meters, one based on the conventional inverse kinetic method and the other one based on simple feedback theory, are compared analytically using their respective transfer functions. The latter one is proposed by one of the authors. It has been shown that the performance of the two reactivity meters become almost identical when proper system parameters are selected for each reactivity meter. A new correlation between the system parameters of the two reactivity meters is found. With this correlation, filter designers can easily determine the system parameters for the respective reactivity meters to obtain identical performance. (author)
A SVM-based quantitative fMRI method for resting-state functional network detection.
Song, Xiaomu; Chen, Nan-kuei
2014-09-01
Resting-state functional magnetic resonance imaging (fMRI) aims to measure baseline neuronal connectivity independent of specific functional tasks and to capture changes in the connectivity due to neurological diseases. Most existing network detection methods rely on a fixed threshold to identify functionally connected voxels under the resting state. Due to fMRI non-stationarity, the threshold cannot adapt to variation of data characteristics across sessions and subjects, and generates unreliable mapping results. In this study, a new method is presented for resting-state fMRI data analysis. Specifically, the resting-state network mapping is formulated as an outlier detection process that is implemented using one-class support vector machine (SVM). The results are refined by using a spatial-feature domain prototype selection method and two-class SVM reclassification. The final decision on each voxel is made by comparing its probabilities of functionally connected and unconnected instead of a threshold. Multiple features for resting-state analysis were extracted and examined using an SVM-based feature selection method, and the most representative features were identified. The proposed method was evaluated using synthetic and experimental fMRI data. A comparison study was also performed with independent component analysis (ICA) and correlation analysis. The experimental results show that the proposed method can provide comparable or better network detection performance than ICA and correlation analysis. The method is potentially applicable to various resting-state quantitative fMRI studies. Copyright © 2014 Elsevier Inc. All rights reserved.
Frames and other bases in abstract and function spaces novel methods in harmonic analysis
Gia, Quoc; Mayeli, Azita; Mhaskar, Hrushikesh; Zhou, Ding-Xuan
2017-01-01
The first of a two volume set on novel methods in harmonic analysis, this book draws on a number of original research and survey papers from well-known specialists detailing the latest innovations and recently discovered links between various fields. Along with many deep theoretical results, these volumes contain numerous applications to problems in signal processing, medical imaging, geodesy, statistics, and data science. The chapters within cover an impressive range of ideas from both traditional and modern harmonic analysis, such as: the Fourier transform, Shannon sampling, frames, wavelets, functions on Euclidean spaces, analysis on function spaces of Riemannian and sub-Riemannian manifolds, Fourier analysis on manifolds and Lie groups, analysis on combinatorial graphs, sheaves, co-sheaves, and persistent homologies on topological spaces. Volume I is organized around the theme of frames and other bases in abstract and function spaces, covering topics such as: The advanced development of frames, including ...
Zhou, Qiuling; Tang, Chen; Li, Biyuan; Wang, Linlin; Lei, Zhenkun; Tang, Shuwei
2018-01-01
The filtering of discontinuous optical fringe patterns is a challenging problem faced in this area. This paper is concerned with oriented partial differential equations (OPDEs)-based image filtering methods for discontinuous optical fringe patterns. We redefine a new controlling speed function to depend on the orientation coherence. The orientation coherence can be used to distinguish the continuous regions and the discontinuous regions, and can be calculated by utilizing fringe orientation. We introduce the new controlling speed function to the previous OPDEs and propose adaptive OPDEs filtering models. According to our proposed adaptive OPDEs filtering models, the filtering in the continuous and discontinuous regions can be selectively carried out. We demonstrate the performance of the proposed adaptive OPDEs via application to the simulated and experimental fringe patterns, and compare our methods with the previous OPDEs.
A prediction method for the wax deposition rate based on a radial basis function neural network
Directory of Open Access Journals (Sweden)
Ying Xie
2017-06-01
Full Text Available The radial basis function neural network is a popular supervised learning tool based on machinery learning technology. Its high precision having been proven, the radial basis function neural network has been applied in many areas. The accumulation of deposited materials in the pipeline may lead to the need for increased pumping power, a decreased flow rate or even to the total blockage of the line, with losses of production and capital investment, so research on predicting the wax deposition rate is significant for the safe and economical operation of an oil pipeline. This paper adopts the radial basis function neural network to predict the wax deposition rate by considering four main influencing factors, the pipe wall temperature gradient, pipe wall wax crystal solubility coefficient, pipe wall shear stress and crude oil viscosity, by the gray correlational analysis method. MATLAB software is employed to establish the RBF neural network. Compared with the previous literature, favorable consistency exists between the predicted outcomes and the experimental results, with a relative error of 1.5%. It can be concluded that the prediction method of wax deposition rate based on the RBF neural network is feasible.
[Cardiac Synchronization Function Estimation Based on ASM Level Set Segmentation Method].
Zhang, Yaonan; Gao, Yuan; Tang, Liang; He, Ying; Zhang, Huie
At present, there is no accurate and quantitative methods for the determination of cardiac mechanical synchronism, and quantitative determination of the synchronization function of the four cardiac cavities with medical images has a great clinical value. This paper uses the whole heart ultrasound image sequence, and segments the left & right atriums and left & right ventricles of each frame. After the segmentation, the number of pixels in each cavity and in each frame is recorded, and the areas of the four cavities of the image sequence are therefore obtained. The area change curves of the four cavities are further extracted, and the synchronous information of the four cavities is obtained. Because of the low SNR of Ultrasound images, the boundary lines of cardiac cavities are vague, so the extraction of cardiac contours is still a challenging problem. Therefore, the ASM model information is added to the traditional level set method to force the curve evolution process. According to the experimental results, the improved method improves the accuracy of the segmentation. Furthermore, based on the ventricular segmentation, the right and left ventricular systolic functions are evaluated, mainly according to the area changes. The synchronization of the four cavities of the heart is estimated based on the area changes and the volume changes.
Asiri, Sharefa M.; Laleg-Kirati, Taous-Meriem
2016-01-01
In this paper, modulating functions-based method is proposed for estimating space–time-dependent unknowns in one-dimensional partial differential equations. The proposed method simplifies the problem into a system of algebraic equations linear
International Nuclear Information System (INIS)
Olsen, Jeppe
2014-01-01
A novel algorithm is introduced for the transformation of wave functions between the bases of Slater determinants (SD) and configuration state functions (CSF) in the genealogical coupling scheme. By modifying the expansion coefficients as each electron is spin-coupled, rather than performing a single many-electron transformation, the large transformation matrix that plagues previous approaches is avoided and the required number of operations is drastically reduced. As an example of the efficiency of the algorithm, the transformation for a configuration with 30 unpaired electrons and singlet spin is discussed. For this case, the 10 × 10 6 coefficients in the CSF basis is obtained from the 150 × 10 6 coefficients in the SD basis in 1 min, which should be compared with the seven years that the previously employed method is estimated to require
Robinson, Lucy F; Atlas, Lauren Y; Wager, Tor D
2015-03-01
We present a new method, State-based Dynamic Community Structure, that detects time-dependent community structure in networks of brain regions. Most analyses of functional connectivity assume that network behavior is static in time, or differs between task conditions with known timing. Our goal is to determine whether brain network topology remains stationary over time, or if changes in network organization occur at unknown time points. Changes in network organization may be related to shifts in neurological state, such as those associated with learning, drug uptake or experimental conditions. Using a hidden Markov stochastic blockmodel, we define a time-dependent community structure. We apply this approach to data from a functional magnetic resonance imaging experiment examining how contextual factors influence drug-induced analgesia. Results reveal that networks involved in pain, working memory, and emotion show distinct profiles of time-varying connectivity. Copyright © 2014 Elsevier Inc. All rights reserved.
An Interval-Valued Intuitionistic Fuzzy TOPSIS Method Based on an Improved Score Function
Directory of Open Access Journals (Sweden)
Zhi-yong Bai
2013-01-01
Full Text Available This paper proposes an improved score function for the effective ranking order of interval-valued intuitionistic fuzzy sets (IVIFSs and an interval-valued intuitionistic fuzzy TOPSIS method based on the score function to solve multicriteria decision-making problems in which all the preference information provided by decision-makers is expressed as interval-valued intuitionistic fuzzy decision matrices where each of the elements is characterized by IVIFS value and the information about criterion weights is known. We apply the proposed score function to calculate the separation measures of each alternative from the positive and negative ideal solutions to determine the relative closeness coefficients. According to the values of the closeness coefficients, the alternatives can be ranked and the most desirable one(s can be selected in the decision-making process. Finally, two illustrative examples for multicriteria fuzzy decision-making problems of alternatives are used as a demonstration of the applications and the effectiveness of the proposed decision-making method.
Formal Analysis of SET and NSL Protocols Using the Interpretation Functions-Based Method
Directory of Open Access Journals (Sweden)
Hanane Houmani
2012-01-01
Full Text Available Most applications in the Internet such as e-banking and e-commerce use the SET and the NSL protocols to protect the communication channel between the client and the server. Then, it is crucial to ensure that these protocols respect some security properties such as confidentiality, authentication, and integrity. In this paper, we analyze the SET and the NSL protocols with respect to the confidentiality (secrecy property. To perform this analysis, we use the interpretation functions-based method. The main idea behind the interpretation functions-based technique is to give sufficient conditions that allow to guarantee that a cryptographic protocol respects the secrecy property. The flexibility of the proposed conditions allows the verification of daily-life protocols such as SET and NSL. Also, this method could be used under different assumptions such as a variety of intruder abilities including algebraic properties of cryptographic primitives. The NSL protocol, for instance, is analyzed with and without the homomorphism property. We show also, using the SET protocol, the usefulness of this approach to correct weaknesses and problems discovered during the analysis.
A semi-classical treatment of dissipative processes based on Feynman's influence functional method
International Nuclear Information System (INIS)
Moehring, K.; Smilansky, U.
1980-01-01
We develop a semi-classical treatment of dissipative processes based on Feynman's influence functional method. Applying it to deep inelastic collisions of heavy ions we study inclusive transition probabilities corresponding to a situation when only a set of collective variables is specified in the initial and final states. We show that the inclusive probabilities as well as the final energy distributions can be expressed in terms of properly defined classical paths and their corresponding stability fields. We present a uniform approximation for the study of quantal interference and focussing phenomena and discuss the conditions under which they are to be expected. For the dissipation mechanism we study three approximations - the harmonic model for the internal system, the weak coupling (diabatic) and the adiabatic coupling. We show that these three limits can be treated in the same manner. We finally compare the present formalism with other methodes as were introduced for the description of dissipation in deep inelastic collisions. (orig.)
Characteristics and functions for place brands based on a Delphi method
Directory of Open Access Journals (Sweden)
J de San Eugenio Vela
2013-10-01
Full Text Available Introduction. Representation of territories through brands is a recurring issue in today’s modern society. The aim of this article is to establish certain characteristics and functions pertaining to brands linked to geographical areas. Methodology. The decision was made to conduct qualitative research based on a Delphi method comprising a panel of fourteen place branding experts. Results. In relation to commercial brands, it is found that, since they are publicly owned, place brands call for more complex management, preferably on three levels: public administration, private organisations and citizens. Conclusions. Based on the results obtained, it is concluded that management of places centres on the projection of unique, spatial identities on the context of increasing competition between territories.
Directory of Open Access Journals (Sweden)
SW Kang
2015-02-01
Full Text Available This article introduces an improved non-dimensional dynamic influence function method using a sub-domain method for efficiently extracting the eigenvalues and mode shapes of concave membranes with arbitrary shapes. The non-dimensional dynamic influence function method (non-dimensional dynamic influence function method, which was developed by the authors in 1999, gives highly accurate eigenvalues for membranes, plates, and acoustic cavities, compared with the finite element method. However, it needs the inefficient procedure of calculating the singularity of a system matrix in the frequency range of interest for extracting eigenvalues and mode shapes. To overcome the inefficient procedure, this article proposes a practical approach to make the system matrix equation of the concave membrane of interest into a form of algebraic eigenvalue problem. It is shown by several case studies that the proposed method has a good convergence characteristics and yields very accurate eigenvalues, compared with an exact method and finite element method (ANSYS.
Design of New Test Function Model Based on Multi-objective Optimization Method
Directory of Open Access Journals (Sweden)
Zhaoxia Shang
2017-01-01
Full Text Available Space partitioning method, as a new algorism, has been applied to planning and decision-making of investment portfolio more and more often. But currently there are so few testing function for this algorism, which has greatly restrained its further development and application. An innovative test function model is designed in this paper and is used to test the algorism. It is proved that for evaluation of space partitioning method in certain applications, this test function has fairly obvious advantage.
Rotation Matrix Method Based on Ambiguity Function for GNSS Attitude Determination.
Yang, Yingdong; Mao, Xuchu; Tian, Weifeng
2016-06-08
Global navigation satellite systems (GNSS) are well suited for attitude determination. In this study, we use the rotation matrix method to resolve the attitude angle. This method achieves better performance in reducing computational complexity and selecting satellites. The condition of the baseline length is combined with the ambiguity function method (AFM) to search for integer ambiguity, and it is validated in reducing the span of candidates. The noise error is always the key factor to the success rate. It is closely related to the satellite geometry model. In contrast to the AFM, the LAMBDA (Least-squares AMBiguity Decorrelation Adjustment) method gets better results in solving the relationship of the geometric model and the noise error. Although the AFM is more flexible, it is lack of analysis on this aspect. In this study, the influence of the satellite geometry model on the success rate is analyzed in detail. The computation error and the noise error are effectively treated. Not only is the flexibility of the AFM inherited, but the success rate is also increased. An experiment is conducted in a selected campus, and the performance is proved to be effective. Our results are based on simulated and real-time GNSS data and are applied on single-frequency processing, which is known as one of the challenging case of GNSS attitude determination.
Rotation Matrix Method Based on Ambiguity Function for GNSS Attitude Determination
Directory of Open Access Journals (Sweden)
Yingdong Yang
2016-06-01
Full Text Available Global navigation satellite systems (GNSS are well suited for attitude determination. In this study, we use the rotation matrix method to resolve the attitude angle. This method achieves better performance in reducing computational complexity and selecting satellites. The condition of the baseline length is combined with the ambiguity function method (AFM to search for integer ambiguity, and it is validated in reducing the span of candidates. The noise error is always the key factor to the success rate. It is closely related to the satellite geometry model. In contrast to the AFM, the LAMBDA (Least-squares AMBiguity Decorrelation Adjustment method gets better results in solving the relationship of the geometric model and the noise error. Although the AFM is more flexible, it is lack of analysis on this aspect. In this study, the influence of the satellite geometry model on the success rate is analyzed in detail. The computation error and the noise error are effectively treated. Not only is the flexibility of the AFM inherited, but the success rate is also increased. An experiment is conducted in a selected campus, and the performance is proved to be effective. Our results are based on simulated and real-time GNSS data and are applied on single-frequency processing, which is known as one of the challenging case of GNSS attitude determination.
Zou, Ling; Guo, Qian; Xu, Yi; Yang, Biao; Jiao, Zhuqing; Xiang, Jianbo
2016-04-29
Functional magnetic resonance imaging (fMRI) is an important tool in neuroscience for assessing connectivity and interactions between distant areas of the brain. To find and characterize the coherent patterns of brain activity as a means of identifying brain systems for the cognitive reappraisal of the emotion task, both density-based k-means clustering and independent component analysis (ICA) methods can be applied to characterize the interactions between brain regions involved in cognitive reappraisal of emotion. Our results reveal that compared with the ICA method, the density-based k-means clustering method provides a higher sensitivity of polymerization. In addition, it is more sensitive to those relatively weak functional connection regions. Thus, the study concludes that in the process of receiving emotional stimuli, the relatively obvious activation areas are mainly distributed in the frontal lobe, cingulum and near the hypothalamus. Furthermore, density-based k-means clustering method creates a more reliable method for follow-up studies of brain functional connectivity.
Hankel Matrix Correlation Function-Based Subspace Identification Method for UAV Servo System
Directory of Open Access Journals (Sweden)
Minghong She
2018-01-01
Full Text Available For the identification problem of closed-loop subspace model, we propose a zero space projection method based on the estimation of correlation function to fill the block Hankel matrix of identification model by combining the linear algebra with geometry. By using the same projection of related data in time offset set and LQ decomposition, the multiplication operation of projection is achieved and dynamics estimation of the unknown equipment system model is obtained. Consequently, we have solved the problem of biased estimation caused when the open-loop subspace identification algorithm is applied to the closed-loop identification. A simulation example is given to show the effectiveness of the proposed approach. In final, the practicability of the identification algorithm is verified by hardware test of UAV servo system in real environment.
Adaptive and non-adaptive data hiding methods for grayscale images based on modulus function
Directory of Open Access Journals (Sweden)
Najme Maleki
2014-07-01
Full Text Available This paper presents two adaptive and non-adaptive data hiding methods for grayscale images based on modulus function. Our adaptive scheme is based on the concept of human vision sensitivity, so the pixels in edge areas than to smooth areas can tolerate much more changes without making visible distortion for human eyes. In our adaptive scheme, the average differencing value of four neighborhood pixels into a block via a threshold secret key determines whether current block is located in edge or smooth area. Pixels in the edge areas are embedded by Q-bit of secret data with a larger value of Q than that of pixels placed in smooth areas. Also in this scholar, we represent one non-adaptive data hiding algorithm. Our non-adaptive scheme, via an error reduction procedure, produces a high visual quality for stego-image. The proposed schemes present several advantages. 1-of aspects the embedding capacity and visual quality of stego-image are scalable. In other words, the embedding rate as well as the image quality can be scaled for practical applications 2-the high embedding capacity with minimal visual distortion can be achieved, 3-our methods require little memory space for secret data embedding and extracting phases, 4-secret keys have used to protect of the embedded secret data. Thus, level of security is high, 5-the problem of overflow or underflow does not occur. Experimental results indicated that the proposed adaptive scheme significantly is superior to the currently existing scheme, in terms of stego-image visual quality, embedding capacity and level of security and also our non-adaptive method is better than other non-adaptive methods, in view of stego-image quality. Results show which our adaptive algorithm can resist against the RS steganalysis attack.
Directory of Open Access Journals (Sweden)
Kihong Shin
2015-01-01
Full Text Available Most existing techniques for machinery health monitoring that utilize measured vibration signals usually require measurement points to be as close as possible to the expected fault components of interest. This is particularly important for implementing condition-based maintenance since the incipient fault signal power may be too small to be detected if a sensor is located further away from the fault source. However, a measurement sensor is often not attached to the ideal point due to geometric or environmental restrictions. In such a case, many of the conventional diagnostic techniques may not be successfully applicable. In this paper, a two-channel analysis method is proposed to overcome such difficulty. It uses two vibration signals simultaneously measured at arbitrary points in a machine. The proposed method is described theoretically by introducing a fictitious system frequency response function. It is then verified experimentally for bearing fault detection. The results show that the suggested method may be a good alternative when ideal points for measurement sensors are not readily available.
Generic primal-dual interior point methods based on a new kernel function
EL Ghami, M.; Roos, C.
2008-01-01
In this paper we present a generic primal-dual interior point methods (IPMs) for linear optimization in which the search direction depends on a univariate kernel function which is also used as proximity measure in the analysis of the algorithm. The proposed kernel function does not satisfy all the
Energy Technology Data Exchange (ETDEWEB)
Yi Luo; Jian-wei Cheng [West Virginia University, Morgantown, WV (United States). Department of Mining Engineering
2009-09-15
The distribution of the final surface subsidence basin induced by longwall operations in inclined coal seam could be significantly different from that in flat coal seam and demands special prediction methods. Though many empirical prediction methods have been developed, these methods are inflexible for varying geological and mining conditions. An influence function method has been developed to take the advantage of its fundamentally sound nature and flexibility. In developing this method, significant modifications have been made to the original Knothe function to produce an asymmetrical influence function. The empirical equations for final subsidence parameters derived from US subsidence data and Chinese empirical values have been incorporated into the mathematical models to improve the prediction accuracy. A corresponding computer program is developed. A number of subsidence cases for longwall mining operations in coal seams with varying inclination angles have been used to demonstrate the applicability of the developed subsidence prediction model. 9 refs., 8 figs.
Cai, Jianhua
2017-05-01
The time-frequency analysis method represents signal as a function of time and frequency, and it is considered a powerful tool for handling arbitrary non-stationary time series by using instantaneous frequency and instantaneous amplitude. It also provides a possible alternative to the analysis of the non-stationary magnetotelluric (MT) signal. Based on the Hilbert-Huang transform (HHT), a time-frequency analysis method is proposed to obtain stable estimates of the magnetotelluric response function. In contrast to conventional methods, the response function estimation is performed in the time-frequency domain using instantaneous spectra rather than in the frequency domain, which allows for imaging the response parameter content as a function of time and frequency. The theory of the method is presented and the mathematical model and calculation procedure, which are used to estimate response function based on HHT time-frequency spectrum, are discussed. To evaluate the results, response function estimates are compared with estimates from a standard MT data processing method based on the Fourier transform. All results show that apparent resistivities and phases, which are calculated from the HHT time-frequency method, are generally more stable and reliable than those determined from the simple Fourier analysis. The proposed method overcomes the drawbacks of the traditional Fourier methods, and the resulting parameter minimises the estimation bias caused by the non-stationary characteristics of the MT data.
International Nuclear Information System (INIS)
Wang Baosheng; Wang Dongqing; Zhang Jianmin; Jiang Jing
2012-01-01
In order to estimate the functional failure probability of passive systems, an innovative adaptive importance sampling methodology is presented. In the proposed methodology, information of variables is extracted with some pre-sampling of points in the failure region. An important sampling density is then constructed from the sample distribution in the failure region. Taking the AP1000 passive residual heat removal system as an example, the uncertainties related to the model of a passive system and the numerical values of its input parameters are considered in this paper. And then the probability of functional failure is estimated with the combination of the response surface method and adaptive importance sampling method. The numerical results demonstrate the high computed efficiency and excellent computed accuracy of the methodology compared with traditional probability analysis methods. (authors)
Asiri, Sharefa M.
2016-10-20
In this paper, modulating functions-based method is proposed for estimating space–time-dependent unknowns in one-dimensional partial differential equations. The proposed method simplifies the problem into a system of algebraic equations linear in unknown parameters. The well-posedness of the modulating functions-based solution is proved. The wave and the fifth-order KdV equations are used as examples to show the effectiveness of the proposed method in both noise-free and noisy cases.
Introduction to functional methods
International Nuclear Information System (INIS)
Faddeev, L.D.
1976-01-01
The functional integral is considered in relation to Feynman diagrams and phase space. The holomorphic form of the functional integral is then discussed. The main problem of the lectures, viz. the construction of the S-matrix by means of the functional integral, is considered. The functional methods described explicitly take into account the Bose statistics of the fields involved. The different procedure used to treat fermions is discussed. An introduction to the problem of quantization of gauge fields is given. (B.R.H.)
A new diffusion nodal method based on analytic basis function expansion
International Nuclear Information System (INIS)
Noh, J.M.; Cho, N.Z.
1993-01-01
The transverse integration procedure commonly used in most advanced nodal methods results in some limitations. The first is that the transverse leakage term that appears in the transverse integration procedure must be appropriately approximated. In most advanced nodal methods, this term is expanded in a quadratic polynomial. The second arises when reconstructing the pinwise flux distribution within a node. The available one-dimensional flux shapes from nodal calculation in each spatial direction cannot be used directly in the flux reconstruction. Finally, the transverse leakage defined for a hexagonal node becomes so complicated as not to be easily handled and contains nonphysical singular terms. In this paper, a new nodal method called the analytic function expansion nodal (AFEN) method is described for both the rectangular geometry and the hexagonal geometry in order to overcome these limitations. This method does not solve the transverse-integrated one-dimensional diffusion equations but instead solves directly the original multidimensional diffusion equation within a node. This is a accomplished by expanding the solution (or the intranodal homogeneous flux distribution) in terms of nonseparable analytic basis functions satisfying the diffusion equation at any point in the node
Yang, Zhou; Zhu, Yunpeng; Ren, Hongrui; Zhang, Yimin
2015-03-01
Reliability allocation of computerized numerical controlled(CNC) lathes is very important in industry. Traditional allocation methods only focus on high-failure rate components rather than moderate failure rate components, which is not applicable in some conditions. Aiming at solving the problem of CNC lathes reliability allocating, a comprehensive reliability allocation method based on cubic transformed functions of failure modes and effects analysis(FMEA) is presented. Firstly, conventional reliability allocation methods are introduced. Then the limitations of direct combination of comprehensive allocation method with the exponential transformed FMEA method are investigated. Subsequently, a cubic transformed function is established in order to overcome these limitations. Properties of the new transformed functions are discussed by considering the failure severity and the failure occurrence. Designers can choose appropriate transform amplitudes according to their requirements. Finally, a CNC lathe and a spindle system are used as an example to verify the new allocation method. Seven criteria are considered to compare the results of the new method with traditional methods. The allocation results indicate that the new method is more flexible than traditional methods. By employing the new cubic transformed function, the method covers a wider range of problems in CNC reliability allocation without losing the advantages of traditional methods.
Parand, K.; Nikarya, M.
2017-11-01
In this paper a novel method will be introduced to solve a nonlinear partial differential equation (PDE). In the proposed method, we use the spectral collocation method based on Bessel functions of the first kind and the Jacobian free Newton-generalized minimum residual (JFNGMRes) method with adaptive preconditioner. In this work a nonlinear PDE has been converted to a nonlinear system of algebraic equations using the collocation method based on Bessel functions without any linearization, discretization or getting the help of any other methods. Finally, by using JFNGMRes, the solution of the nonlinear algebraic system is achieved. To illustrate the reliability and efficiency of the proposed method, we solve some examples of the famous Fisher equation. We compare our results with other methods.
A novel JPEG steganography method based on modulus function with histogram analysis
Directory of Open Access Journals (Sweden)
V. Banoci
2012-06-01
Full Text Available In this paper, we present a novel steganographic method for embedding of secret data in still grayscale JPEG image. In order to provide large capacity of the proposed method while maintaining good visual quality of stego-image, the embedding process is performed in quantized transform coefficients of Discrete Cosine transform (DCT by modifying coefficients according to modulo function, what gives to the steganography system blind extraction predisposition. After-embedding histogram of proposed Modulo Histogram Fitting (MHF method is analyzed to secure steganography system against steganalysis attacks. In addition, AES ciphering was implemented to increase security and improve histogram after-embedding characteristics of proposed steganography system as experimental results show.
Secure method for biometric-based recognition with integrated cryptographic functions.
Chiou, Shin-Yan
2013-01-01
Biometric systems refer to biometric technologies which can be used to achieve authentication. Unlike cryptography-based technologies, the ratio for certification in biometric systems needs not to achieve 100% accuracy. However, biometric data can only be directly compared through proximal access to the scanning device and cannot be combined with cryptographic techniques. Moreover, repeated use, improper storage, or transmission leaks may compromise security. Prior studies have attempted to combine cryptography and biometrics, but these methods require the synchronization of internal systems and are vulnerable to power analysis attacks, fault-based cryptanalysis, and replay attacks. This paper presents a new secure cryptographic authentication method using biometric features. The proposed system combines the advantages of biometric identification and cryptographic techniques. By adding a subsystem to existing biometric recognition systems, we can simultaneously achieve the security of cryptographic technology and the error tolerance of biometric recognition. This method can be used for biometric data encryption, signatures, and other types of cryptographic computation. The method offers a high degree of security with protection against power analysis attacks, fault-based cryptanalysis, and replay attacks. Moreover, it can be used to improve the confidentiality of biological data storage and biodata identification processes. Remote biometric authentication can also be safely applied.
Secure Method for Biometric-Based Recognition with Integrated Cryptographic Functions
Directory of Open Access Journals (Sweden)
Shin-Yan Chiou
2013-01-01
Full Text Available Biometric systems refer to biometric technologies which can be used to achieve authentication. Unlike cryptography-based technologies, the ratio for certification in biometric systems needs not to achieve 100% accuracy. However, biometric data can only be directly compared through proximal access to the scanning device and cannot be combined with cryptographic techniques. Moreover, repeated use, improper storage, or transmission leaks may compromise security. Prior studies have attempted to combine cryptography and biometrics, but these methods require the synchronization of internal systems and are vulnerable to power analysis attacks, fault-based cryptanalysis, and replay attacks. This paper presents a new secure cryptographic authentication method using biometric features. The proposed system combines the advantages of biometric identification and cryptographic techniques. By adding a subsystem to existing biometric recognition systems, we can simultaneously achieve the security of cryptographic technology and the error tolerance of biometric recognition. This method can be used for biometric data encryption, signatures, and other types of cryptographic computation. The method offers a high degree of security with protection against power analysis attacks, fault-based cryptanalysis, and replay attacks. Moreover, it can be used to improve the confidentiality of biological data storage and biodata identification processes. Remote biometric authentication can also be safely applied.
Analysis of calculating methods for failure distribution function based on maximal entropy principle
International Nuclear Information System (INIS)
Guo Chunying; Lin Yuangen; Jiang Meng; Wu Changli
2009-01-01
The computation of invalidation distribution functions of electronic devices when exposed in gamma rays is discussed here. First, the possible devices failure distribution models are determined through the tests of statistical hypotheses using the test data. The results show that: the devices' failure distribution can obey multi-distributions when the test data is few. In order to decide the optimum failure distribution model, the maximal entropy principle is used and the elementary failure models are determined. Then, the Bootstrap estimation method is used to simulate the intervals estimation of the mean and the standard deviation. On the basis of this, the maximal entropy principle is used again and the simulated annealing method is applied to find the optimum values of the mean and the standard deviation. Accordingly, the electronic devices' optimum failure distributions are finally determined and the survival probabilities are calculated. (authors)
A fast computation method for MUSIC spectrum function based on circular arrays
Du, Zhengdong; Wei, Ping
2015-02-01
The large computation amount of multiple signal classification (MUSIC) spectrum function seriously affects the timeliness of direction finding system using MUSIC algorithm, especially in the two-dimensional directions of arrival (DOA) estimation of azimuth and elevation with a large antenna array. This paper proposes a fast computation method for MUSIC spectrum. It is suitable for any circular array. First, the circular array is transformed into a virtual uniform circular array, in the process of calculating MUSIC spectrum, for the cyclic characteristics of steering vector, the inner product in the calculation of spatial spectrum is realised by cyclic convolution. The computational amount of MUSIC spectrum is obviously less than that of the conventional method. It is a very practical way for MUSIC spectrum computation in circular arrays.
A fuzzy method for improving the functionality of search engines based on user's web interactions
Directory of Open Access Journals (Sweden)
Farzaneh Kabirbeyk
2015-04-01
Full Text Available Web mining has been widely used to discover knowledge from various sources in the web. One of the important tools in web mining is mining of web user’s behavior that is considered as a way to discover the potential knowledge of web user’s interaction. Nowadays, Website personalization is regarded as a popular phenomenon among web users and it plays an important role in facilitating user access and provides information of users’ requirements based on their own interests. Extracting important features about web user behavior plays a significant role in web usage mining. Such features are page visit frequency in each session, visit duration, and dates of visiting a certain pages. This paper presents a method to predict user’s interest and to propose a list of pages based on their interests by identifying user’s behavior based on fuzzy techniques called fuzzy clustering method. Due to the user’s different interests and use of one or more interest at a time, user’s interest may belong to several clusters and fuzzy clustering provide a possible overlap. Using the resulted cluster helps extract fuzzy rules. This helps detecting user’s movement pattern and using neural network a list of suggested pages to the users is provided.
A novel method for one-way hash function construction based on spatiotemporal chaos
International Nuclear Information System (INIS)
Ren Haijun; Wang Yong; Xie Qing; Yang Huaqian
2009-01-01
A novel hash algorithm based on a spatiotemporal chaos is proposed. The original message is first padded with zeros if needed. Then it is divided into a number of blocks each contains 32 bytes. In the hashing process, each block is partitioned into eight 32-bit values and input into the spatiotemporal chaotic system. Then, after iterating the system for four times, the next block is processed by the same way. To enhance the confusion and diffusion effect, the cipher block chaining (CBC) mode is adopted in the algorithm. The hash value is obtained from the final state value of the spatiotemporal chaotic system. Theoretic analyses and numerical simulations both show that the proposed hash algorithm possesses good statistical properties, strong collision resistance and high efficiency, as required by practical keyed hash functions.
A novel method for one-way hash function construction based on spatiotemporal chaos
Energy Technology Data Exchange (ETDEWEB)
Ren Haijun [College of Software Engineering, Chongqing University, Chongqing 400044 (China); State Key Laboratory of Power Transmission Equipment and System Security and New Technology, Chongqing University, Chongqing 400044 (China)], E-mail: jhren@cqu.edu.cn; Wang Yong; Xie Qing [Key Laboratory of Electronic Commerce and Logistics of Chongqing, Chongqing University of Posts and Telecommunications, Chongqing 400065 (China); Yang Huaqian [Department of Computer and Modern Education Technology, Chongqing Education of College, Chongqing 400067 (China)
2009-11-30
A novel hash algorithm based on a spatiotemporal chaos is proposed. The original message is first padded with zeros if needed. Then it is divided into a number of blocks each contains 32 bytes. In the hashing process, each block is partitioned into eight 32-bit values and input into the spatiotemporal chaotic system. Then, after iterating the system for four times, the next block is processed by the same way. To enhance the confusion and diffusion effect, the cipher block chaining (CBC) mode is adopted in the algorithm. The hash value is obtained from the final state value of the spatiotemporal chaotic system. Theoretic analyses and numerical simulations both show that the proposed hash algorithm possesses good statistical properties, strong collision resistance and high efficiency, as required by practical keyed hash functions.
Directory of Open Access Journals (Sweden)
Cai Ligang
2017-01-01
Full Text Available Instead improving the accuracy of machine tool by increasing the precision of key components level blindly in the production process, the method of combination of SNR quality loss function and machine tool geometric error correlation analysis to optimize five-axis machine tool geometric errors will be adopted. Firstly, the homogeneous transformation matrix method will be used to build five-axis machine tool geometric error modeling. Secondly, the SNR quality loss function will be used for cost modeling. And then, machine tool accuracy optimal objective function will be established based on the correlation analysis. Finally, ISIGHT combined with MATLAB will be applied to optimize each error. The results show that this method is reasonable and appropriate to relax the range of tolerance values, so as to reduce the manufacturing cost of machine tools.
Wills, John M.; Mattsson, Ann E.
2012-02-01
Density functional theory (DFT) provides a formally predictive base for equation of state properties. Available approximations to the exchange/correlation functional provide accurate predictions for many materials in the periodic table. For heavy materials however, DFT calculations, using available functionals, fail to provide quantitative predictions, and often fail to be even qualitative. This deficiency is due both to the lack of the appropriate confinement physics in the exchange/correlation functional and to approximations used to evaluate the underlying equations. In order to assess and develop accurate functionals, it is essential to eliminate all other sources of error. In this talk we describe an efficient first-principles electronic structure method based on the Dirac equation and compare the results obtained with this method with other methods generally used. Implications for high-pressure equation of state of relativistic materials are demonstrated in application to Ce and the light actinides. Sandia National Laboratories is a multi-program laboratory managed andoperated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Zhang, Jian-Hua; Peng, Xiao-Di; Liu, Hua; Raisch, Jörg; Wang, Ru-Bin
2013-12-01
The human operator's ability to perform their tasks can fluctuate over time. Because the cognitive demands of the task can also vary it is possible that the capabilities of the operator are not sufficient to satisfy the job demands. This can lead to serious errors when the operator is overwhelmed by the task demands. Psychophysiological measures, such as heart rate and brain activity, can be used to monitor operator cognitive workload. In this paper, the most influential psychophysiological measures are extracted to characterize Operator Functional State (OFS) in automated tasks under a complex form of human-automation interaction. The fuzzy c-mean (FCM) algorithm is used and tested for its OFS classification performance. The results obtained have shown the feasibility and effectiveness of the FCM algorithm as well as the utility of the selected input features for OFS classification. Besides being able to cope with nonlinearity and fuzzy uncertainty in the psychophysiological data it can provide information about the relative importance of the input features as well as the confidence estimate of the classification results. The OFS pattern classification method developed can be incorporated into an adaptive aiding system in order to enhance the overall performance of a large class of safety-critical human-machine cooperative systems.
Directory of Open Access Journals (Sweden)
Chitale Meghana
2013-02-01
Full Text Available Abstract Background Many Automatic Function Prediction (AFP methods were developed to cope with an increasing growth of the number of gene sequences that are available from high throughput sequencing experiments. To support the development of AFP methods, it is essential to have community wide experiments for evaluating performance of existing AFP methods. Critical Assessment of Function Annotation (CAFA is one such community experiment. The meeting of CAFA was held as a Special Interest Group (SIG meeting at the Intelligent Systems in Molecular Biology (ISMB conference in 2011. Here, we perform a detailed analysis of two sequence-based function prediction methods, PFP and ESG, which were developed in our lab, using the predictions submitted to CAFA. Results We evaluate PFP and ESG using four different measures in comparison with BLAST, Prior, and GOtcha. In addition to the predictions submitted to CAFA, we further investigate performance of a different scoring function to rank order predictions by PFP as well as PFP/ESG predictions enriched with Priors that simply adds frequently occurring Gene Ontology terms as a part of predictions. Prediction accuracies of each method were also evaluated separately for different functional categories. Successful and unsuccessful predictions by PFP and ESG are also discussed in comparison with BLAST. Conclusion The in-depth analysis discussed here will complement the overall assessment by the CAFA organizers. Since PFP and ESG are based on sequence database search results, our analyses are not only useful for PFP and ESG users but will also shed light on the relationship of the sequence similarity space and functions that can be inferred from the sequences.
Nakae, Ken; Ikegaya, Yuji; Ishikawa, Tomoe; Oba, Shigeyuki; Urakubo, Hidetoshi; Koyama, Masanori; Ishii, Shin
2014-01-01
Crosstalk between neurons and glia may constitute a significant part of information processing in the brain. We present a novel method of statistically identifying interactions in a neuron–glia network. We attempted to identify neuron–glia interactions from neuronal and glial activities via maximum-a-posteriori (MAP)-based parameter estimation by developing a generalized linear model (GLM) of a neuron–glia network. The interactions in our interest included functional connectivity and response functions. We evaluated the cross-validated likelihood of GLMs that resulted from the addition or removal of connections to confirm the existence of specific neuron-to-glia or glia-to-neuron connections. We only accepted addition or removal when the modification improved the cross-validated likelihood. We applied the method to a high-throughput, multicellular in vitro Ca2+ imaging dataset obtained from the CA3 region of a rat hippocampus, and then evaluated the reliability of connectivity estimates using a statistical test based on a surrogate method. Our findings based on the estimated connectivity were in good agreement with currently available physiological knowledge, suggesting our method can elucidate undiscovered functions of neuron–glia systems. PMID:25393874
Directory of Open Access Journals (Sweden)
Hui Yuan
2017-01-01
Full Text Available Interrupted-sampling repeater jamming (ISRJ is a new kind of coherent jamming to the large time-bandwidth linear frequency modulation (LFM signal. Many jamming modes, such as lifelike multiple false targets and dense false targets, can be made through setting up different parameters. According to the “storage-repeater-storage-repeater” characteristics of the ISRJ and the differences in the time-frequency-energy domain between the ISRJ signal and the target echo signal, one new method based on the energy function detection and band-pass filtering is proposed to suppress the ISRJ. The methods mainly consist of two parts: extracting the signal segments without ISRJ and constructing band-pass filtering function with low sidelobe. The simulation results show that the method is effective in the ISRJ with different parameters.
Baryons with functional methods
International Nuclear Information System (INIS)
Fischer, Christian S.
2017-01-01
We summarise recent results on the spectrum of ground-state and excited baryons and their form factors in the framework of functional methods. As an improvement upon similar approaches we explicitly take into account the underlying momentum-dependent dynamics of the quark-gluon interaction that leads to dynamical chiral symmetry breaking. For light octet and decuplet baryons we find a spectrum in very good agreement with experiment, including the level ordering between the positive- and negative-parity nucleon states. Comparing the three-body framework with the quark-diquark approximation, we do not find significant differences in the spectrum for those states that have been calculated in both frameworks. This situation is different in the electromagnetic form factor of the Δ, which may serve to distinguish both pictures by comparison with experiment and lattice QCD.
Rowe, David K.; Parkyn, Stephanie; Quinn, John; Collier, Kevin; Hatton, Chris; Joy, Michael K.; Maxted, John; Moore, Stephen
2009-06-01
A method was developed to score the ecological condition of first- to third-order stream reaches in the Auckland region of New Zealand based on the performance of their key ecological functions. Such a method is required by consultants and resource managers to quantify the reduction in ecological condition of a modified stream reach relative to its unmodified state. This is a fundamental precursor for the determination of fair environmental compensation for achieving no-net-loss in overall stream ecological value. Field testing and subsequent use of the method indicated that it provides a useful measure of ecological condition related to the performance of stream ecological functions. It is relatively simple to apply compared to a full ecological study, is quick to use, and allows identification of the degree of impairment of each of the key ecological functions. The scoring system was designed so that future improvements in the measurement of stream functions can be incorporated into it. Although the methodology was specifically designed for Auckland streams, the principles can be readily adapted to other regions and stream types.
Directory of Open Access Journals (Sweden)
Yong Ma
2013-01-01
Full Text Available We present one algorithm based on particle swarm optimization (PSO with penalty function to determine the conflict-free path for mobile objects in four-dimension (three spatial and one-time dimensions with obstacles. The shortest path of the mobile object is set as goal function, which is constrained by conflict-free criterion, path smoothness, and velocity and acceleration requirements. This problem is formulated as a calculus of variation problem (CVP. With parametrization method, the CVP is converted to a time-varying nonlinear programming problem (TNLPP. Constraints of TNLPP are transformed to general TNLPP without any constraints through penalty functions. Then, by using a little calculations and applying the algorithm PSO, the solution of the CVP is consequently obtained. Approach efficiency is confirmed by numerical examples.
Directory of Open Access Journals (Sweden)
Mariana Morales-de la Peña
2016-06-01
Full Text Available Eating habits of western populations are changing due to modern lifestyles. As a result, people are becoming more susceptible to chronic and degenerative diseases. This fact has motivated the food industry to develop functional products that could decrease the incidence of those disorders. It is well known that fruit juices, milk and soymilk possess high concentrations of antioxidant and bioactive substances. Hence, the development of these functional beverages is a potential way to take advantage of their nutritional properties and exotic flavors that could attract the interest of consumers. At the same time, application of the right preservation treatment is of high relevance in order to obtain safe products with convenient shelf life and high concentration of health-related compounds. This fact represents a great challenge that scientists and technologists are currently facing. Today, novel preservation processes such as high hydrostatic pressure (HHP, high intensity pulsed electric fields (HIPEF and ultrasound (US, among others, are being evaluated as an alternative to heat pasteurization, obtaining promising results. Hence, this review gathers the most relevant information about the development of mixed beverages containing fruit juices and milk or soymilk. Furthermore, the advantages and drawbacks of the application of non-thermal treatments for functional beverages’ preservation with high content of bioactive compounds are also mentioned.
International Nuclear Information System (INIS)
Mornet, Stephane; Portier, Josik; Duguet, Etienne
2005-01-01
A new generation of susceptibility contrast agents for MRI and based on maghemite cores covalently bonded to dextran stabilizing macromolecules was investigated. The multistep preparation of these versatile ultrasmall superparamagnetic iron oxides (VUSPIO) consisted of colloidal maghemite synthesis, surface modification by aminopropylsilane groups, and coupling of partially oxidized dextran via Schiff's bases and secondary amine bonds. The dextran corona might be easily derivatized, e.g. by PEGylation
A meta-analysis based method for prioritizing candidate genes involved in a pre-specific function
Directory of Open Access Journals (Sweden)
Jingjing Zhai
2016-12-01
Full Text Available The identification of genes associated with a given biological function in plants remains a challenge, although network-based gene prioritization algorithms have been developed for Arabidopsis thaliana and many non-model plant species. Nevertheless, these network-based gene prioritization algorithms have encountered several problems; one in particular is that of unsatisfactory prediction accuracy due to limited network coverage, varying link quality, and/or uncertain network connectivity. Thus a model that integrates complementary biological data may be expected to increase the prediction accuracy of gene prioritization. Towards this goal, we developed a novel gene prioritization method named RafSee, to rank candidate genes using a random forest algorithm that integrates sequence, evolutionary, and epigenetic features of plants. Subsequently, we proposed an integrative approach named RAP (Rank Aggregation-based data fusion for gene Prioritization, in which an order statistics-based meta-analysis was used to aggregate the rank of the network-based gene prioritization method and RafSee, for accurately prioritizing candidate genes involved in a pre-specific biological function. Finally, we showcased the utility of RAP by prioritizing 380 flowering-time genes in Arabidopsis. The ‘leave-one-out’ cross-validation experiment showed that RafSee could work as a complement to a current state-of-art network-based gene prioritization system (AraNet v2. Moreover, RAP ranked 53.68% (204/380 flowering-time genes higher than AraNet v2, resulting in an 39.46% improvement in term of the first quartile rank. Further evaluations also showed that RAP was effective in prioritizing genes-related to different abiotic stresses. To enhance the usability of RAP for Arabidopsis and non-model plant species, an R package implementing the method is freely available at http://bioinfo.nwafu.edu.cn/software.
Coussot, Gaëlle; Le Postollec, Aurélie; Faye, Clément; Dobrijevic, Michel
2018-04-15
The scope of this paper is to present a gold standard method to evaluate functional activity of antibody (Ab)-based materials during the different phases of their development, after their exposure to forced degradations or even during routine quality control. Ab-based materials play a central role in the development of diagnostic devices, for example, for screening or therapeutic target characterization, in formulation development, and in novel micro(nano)technology approaches to develop immunosensors useful for the analysis of trace substances in pharmaceutical and food industries, clinical and environmental fields. A very important aspect in diagnostic device development is the construction of its biofunctional surfaces. These Ab surfaces require biocompatibility, homogeneity, stability, specificity and functionality. Thus, this work describes the validation and applications of a unique ligand binding assay to directly perform the quantitative measurement of functional Ab binding sites immobilized on the solid surfaces. The method called Antibody Anti-HorseRadish Peroxidase (A2HRP) method, uses a covalently coated anti-HRP antibody (anti-HRP Ab) and does not need for a secondary Ab during the detection step. The A2HRP method was validated and gave reliable results over a wide range of absorbance values. Analyzed validation criteria were fulfilled as requested by the food and drug administration (FDA) and European Medicines Agency (EMA) guidance for the validation of bioanalytical methods with 1) an accuracy mean value within +15% of the nominal value; 2) the within-assay precision less than 7.1%, and 3) the inter-day variability under 12.1%. With the A2HRP method, it is then possible to quantify from 0.04 × 10 12 to 2.98 × 10 12 functional Ab binding sites immobilized on the solid surfaces. A2HRP method was validated according to FDA and EMA guidance, allowing the creation of a gold standard method to evaluate Ab surfaces for their resistance under
Hirano, Toshiyuki; Sato, Fumitoshi
2014-07-28
We used grid-free modified Cholesky decomposition (CD) to develop a density-functional-theory (DFT)-based method for calculating the canonical molecular orbitals (CMOs) of large molecules. Our method can be used to calculate standard CMOs, analytically compute exchange-correlation terms, and maximise the capacity of next-generation supercomputers. Cholesky vectors were first analytically downscaled using low-rank pivoted CD and CD with adaptive metric (CDAM). The obtained Cholesky vectors were distributed and stored on each computer node in a parallel computer, and the Coulomb, Fock exchange, and pure exchange-correlation terms were calculated by multiplying the Cholesky vectors without evaluating molecular integrals in self-consistent field iterations. Our method enables DFT and massively distributed memory parallel computers to be used in order to very efficiently calculate the CMOs of large molecules.
International Nuclear Information System (INIS)
Toshio, S.; Kazuo, A.
1983-01-01
A model for calculating the power distribution and the control rod worth in fast reactors has been developed. This model is based on the influence function method. The characteristics of the model are as follows: 1. Influence functions for any changes in the control rod insertion ratio are expressed by using an influence function for an appropriate control rod insertion in order to reduce the computer memory size required for the method. 2. A control rod worth is calculated on the basis of a one-group approximation in which cross sections are generated by bilinear (flux-adjoint) weighting, not the usual flux weighting, in order to reduce the collapse error. 3. An effective neutron multiplication factor is calculated by adjoint weighting in order to reduce the effect of the error in the one-group flux distribution. The results obtained in numerical examinations of a prototype fast reactor indicate that this method is suitable for on-line core performance evaluation because of a short computing time and a small memory size
International Nuclear Information System (INIS)
Sanda, T.; Azekura, K.
1983-01-01
A model for calculating the power distribution and the control rod worth in fast reactors has been developed. This model is based on the influence function method. The characteristics of the model are as follows: Influence functions for any changes in the control rod insertion ratio are expressed by using an influence function for an appropriate control rod insertion in order to reduce the computer memory size required for the method. A control rod worth is calculated on the basis of a one-group approximation in which cross sections are generated by bilinear (flux-adjoint) weighting, not the usual flux weighting, in order to reduce the collapse error. An effective neutron multiplication factor is calculated by adjoint weighting in order to reduce the effect of the error in the one-group flux distribution. The results obtained in numerical examinations of a prototype fast reactor indicate that this method is suitable for on-line core performance evaluation because of a short computing time and a small memory size
International Nuclear Information System (INIS)
Hintermüller, Michael; Rautenberg, Carlos N; Hahn, Jooyoung
2014-01-01
Variable splitting schemes for the function space version of the image reconstruction problem with total variation regularization (TV-problem) in its primal and pre-dual formulations are considered. For the primal splitting formulation, while existence of a solution cannot be guaranteed, it is shown that quasi-minimizers of the penalized problem are asymptotically related to the solution of the original TV-problem. On the other hand, for the pre-dual formulation, a family of parametrized problems is introduced and a parameter dependent contraction of an associated fixed point iteration is established. Moreover, the theory is validated by numerical tests. Additionally, the augmented Lagrangian approach is studied, details on an implementation on a staggered grid are provided and numerical tests are shown. (paper)
International Nuclear Information System (INIS)
Lee, Joo Hee
2006-02-01
There is growing interest in developing pebble bed reactors (PBRs) as a candidate of very high temperature gas-cooled reactors (VHTRs). Until now, most existing methods of nuclear design analysis for this type of reactors are base on old finite-difference solvers or on statistical methods. But for realistic analysis of PBRs, there is strong desire of making available high fidelity nodal codes in three-dimensional (r,θ,z) cylindrical geometry. Recently, the Analytic Function Expansion Nodal (AFEN) method developed quite extensively in Cartesian (x,y,z) geometry and in hexagonal-z geometry was extended to two-group (r,z) cylindrical geometry, and gave very accurate results. In this thesis, we develop a method for the full three-dimensional cylindrical (r,θ,z) geometry and implement the method into a code named TOPS. The AFEN methodology in this geometry as in hexagonal geometry is 'robus' (e.g., no occurrence of singularity), due to the unique feature of the AFEN method that it does not use the transverse integration. The transverse integration in the usual nodal methods, however, leads to an impasse, that is, failure of the azimuthal term to be transverse-integrated over r-z surface. We use 13 nodal unknowns in an outer node and 7 nodal unknowns in an innermost node. The general solution of the node can be expressed in terms of that nodal unknowns, and can be updated using the nodal balance equation and the current continuity condition. For more realistic analysis of PBRs, we implemented em Marshak boundary condition to treat the incoming current zero boundary condition and the partial current translation (PCT) method to treat voids in the core. The TOPS code was verified in the various numerical tests derived from Dodds problem and PBMR-400 benchmark problem. The results of the TOPS code show high accuracy and fast computing time than the VENTURE code that is based on finite difference method (FDM)
Kohei Arai; Tomoko Nishikawa
2013-01-01
Multi-Resolution Analysis: MRA based on the mother wavelet function with which support length differs from the image of the automobile rear under run is performed, and the run characteristic of a car is searched for. Speed, deflection, etc. are analyzed and the method of detecting vehicles with high accident danger is proposed. The experimental results show that vehicles in a dangerous action can be detected by the proposed method.
Secure Method for Biometric-Based Recognition with Integrated Cryptographic Functions
Chiou, Shin-Yan
2013-01-01
Biometric systems refer to biometric technologies which can be used to achieve authentication. Unlike cryptography-based technologies, the ratio for certification in biometric systems needs not to achieve 100% accuracy. However, biometric data can only be directly compared through proximal access to the scanning device and cannot be combined with cryptographic techniques. Moreover, repeated use, improper storage, or transmission leaks may compromise security. Prior studies have attempted to c...
Energy Technology Data Exchange (ETDEWEB)
Lukose, Binit; Supronowicz, Barbara; Kuc, Agnieszka B.; Heine, Thomas [School of Engineering and Science, Jacobs University Bremen (Germany); Petkov, Petko S.; Vayssilov, Georgi N. [Faculty of Chemistry, University of Sofia (Bulgaria); Frenzel, Johannes [Lehrstuhl fuer Theoretische Chemie, Ruhr-Universitaet Bochum (Germany); Seifert, Gotthard [Physikalische Chemie, Technische Universitaet Dresden (Germany)
2012-02-15
Density-functional based tight-binding (DFTB) is a powerful method to describe large molecules and materials. Metal-organic frameworks (MOFs), materials with interesting catalytic properties and with very large surface areas, have been developed and have become commercially available. Unit cells of MOFs typically include hundreds of atoms, which make the application of standard density-functional methods computationally very expensive, sometimes even unfeasible. The aim of this paper is to prepare and to validate the self-consistent charge-DFTB (SCC-DFTB) method for MOFs containing Cu, Zn, and Al metal centers. The method has been validated against full hybrid density-functional calculations for model clusters, against gradient corrected density-functional calculations for supercells, and against experiment. Moreover, the modular concept of MOF chemistry has been discussed on the basis of their electronic properties. We concentrate on MOFs comprising three common connector units: copper paddlewheels (HKUST-1), zinc oxide Zn{sub 4}O tetrahedron (MOF-5, MOF-177, DUT-6 (MOF-205)), and aluminum oxide AlO{sub 4}(OH){sub 2} octahedron (MIL-53). We show that SCC-DFTB predicts structural parameters with a very good accuracy (with less than 5% deviation, even for adsorbed CO and H{sub 2}O on HKUST-1), while adsorption energies differ by 12 kJ mol{sup -1} or less for CO and water compared to DFT benchmark calculations. (Copyright copyright 2012 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
Yan, Yumeng; Wen, Zeyu; Zhang, Di; Huang, Sheng-You
2018-05-18
RNA-RNA interactions play fundamental roles in gene and cell regulation. Therefore, accurate prediction of RNA-RNA interactions is critical to determine their complex structures and understand the molecular mechanism of the interactions. Here, we have developed a physics-based double-iterative strategy to determine the effective potentials for RNA-RNA interactions based on a training set of 97 diverse RNA-RNA complexes. The double-iterative strategy circumvented the reference state problem in knowledge-based scoring functions by updating the potentials through iteration and also overcame the decoy-dependent limitation in previous iterative methods by constructing the decoys iteratively. The derived scoring function, which is referred to as DITScoreRR, was evaluated on an RNA-RNA docking benchmark of 60 test cases and compared with three other scoring functions. It was shown that for bound docking, our scoring function DITScoreRR obtained the excellent success rates of 90% and 98.3% in binding mode predictions when the top 1 and 10 predictions were considered, compared to 63.3% and 71.7% for van der Waals interactions, 45.0% and 65.0% for ITScorePP, and 11.7% and 26.7% for ZDOCK 2.1, respectively. For unbound docking, DITScoreRR achieved the good success rates of 53.3% and 71.7% in binding mode predictions when the top 1 and 10 predictions were considered, compared to 13.3% and 28.3% for van der Waals interactions, 11.7% and 26.7% for our ITScorePP, and 3.3% and 6.7% for ZDOCK 2.1, respectively. DITScoreRR also performed significantly better in ranking decoys and obtained significantly higher score-RMSD correlations than the other three scoring functions. DITScoreRR will be of great value for the prediction and design of RNA structures and RNA-RNA complexes.
Liu, Keqin; Beck, Dominik; Thoms, Julie A I; Liu, Liang; Zhao, Weiling; Pimanda, John E; Zhou, Xiaobo
2017-09-01
Long non-coding RNAs (lncRNAs) have been implicated in the regulation of diverse biological functions. The number of newly identified lncRNAs has increased dramatically in recent years but their expression and function have not yet been described from most diseases. To elucidate lncRNA function in human disease, we have developed a novel network based method (NLCFA) integrating correlations between lncRNA, protein coding genes and noncoding miRNAs. We have also integrated target gene associations and protein-protein interactions and designed our model to provide information on the combined influence of mRNAs, lncRNAs and miRNAs on cellular signal transduction networks. We have generated lncRNA expression profiles from the CD34+ haematopoietic stem and progenitor cells (HSPCs) from patients with Myelodysplastic syndromes (MDS) and healthy donors. We report, for the first time, aberrantly expressed lncRNAs in MDS and further prioritize biologically relevant lncRNAs using the NLCFA. Taken together, our data suggests that aberrant levels of specific lncRNAs are intimately involved in network modules that control multiple cancer-associated signalling pathways and cellular processes. Importantly, our method can be applied to prioritize aberrantly expressed lncRNAs for functional validation in other diseases and biological contexts. The method is implemented in R language and Matlab. xizhou@wakehealth.edu. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
Fujiwara, Takeo; Nishino, Shinya; Yamamoto, Susumu; Suzuki, Takashi; Ikeda, Minoru; Ohtani, Yasuaki
2018-06-01
A novel tight-binding method is developed, based on the extended Hückel approximation and charge self-consistency, with referring the band structure and the total energy of the local density approximation of the density functional theory. The parameters are so adjusted by computer that the result reproduces the band structure and the total energy, and the algorithm for determining parameters is established. The set of determined parameters is applicable to a variety of crystalline compounds and change of lattice constants, and, in other words, it is transferable. Examples are demonstrated for Si crystals of several crystalline structures varying lattice constants. Since the set of parameters is transferable, the present tight-binding method may be applicable also to molecular dynamics simulations of large-scale systems and long-time dynamical processes.
International Nuclear Information System (INIS)
Kim, Hyun Keol; Charette, Andre
2007-01-01
The Sensitivity Function-based Conjugate Gradient Method (SFCGM) is described. This method is used to solve the inverse problems of function estimation, such as the local maps of absorption and scattering coefficients, as applied to optical tomography for biomedical imaging. A highly scattering, absorbing, non-reflecting, non-emitting medium is considered here and simultaneous reconstructions of absorption and scattering coefficients inside the test medium are achieved with the proposed optimization technique, by using the exit intensity measured at boundary surfaces. The forward problem is solved with a discrete-ordinates finite-difference method on the framework of the frequency-domain full equation of radiative transfer. The modulation frequency is set to 600 MHz and the frequency data, obtained with the source modulation, is used as the input data. The inversion results demonstrate that the SFCGM can retrieve simultaneously the spatial distributions of optical properties inside the medium within a reasonable accuracy, by significantly reducing a cross-talk between inter-parameters. It is also observed that the closer-to-detector objects are better retrieved
Zhang, Da; Li, Xinhua; Liu, Bob
2012-03-01
Since the introduction of ASiR, its potential in noise reduction has been reported in various clinical applications. However, the influence of different scan and reconstruction parameters on the trade off between ASiR's blurring effect and noise reduction in low contrast imaging has not been fully studied. Simple measurements on low contrast images, such as CNR or phantom scores could not explore the nuance nature of this problem. We tackled this topic using a method which compares the performance of ASiR in low contrast helical imaging based on an assumed filter layer on top of the FBP reconstruction. Transfer functions of this filter layer were obtained from the noise power spectra (NPS) of corresponding FBP and ASiR images that share the same scan and reconstruction parameters. 2D transfer functions were calculated as sqrt[NPSASiR(u, v)/NPSFBP(u, v)]. Synthesized ACR phantom images were generated by filtering the FBP images with the transfer functions of specific (FBP, ASiR) pairs, and were compared with the ASiR images. It is shown that the transfer functions could predict the deterministic blurring effect of ASiR on low contrast objects, as well as the degree of noise reductions. Using this method, the influence of dose, scan field of view (SFOV), display field of view (DFOV), ASiR level, and Recon Mode on the behavior of ASiR in low contrast imaging was studied. It was found that ASiR level, dose level, and DFOV play more important roles in determining the behavior of ASiR than the other two parameters.
Teichert, M.; Aalst, A. van der; Wit, H. de; Stroo, M.; Smet, P.A.G.M. de
2007-01-01
OBJECTIVES: The objective of the study was to assess the association between the quality of drug prescribing based on three indicator types derived from the DU90% method and different levels of functioning in pharmacotherapy audit meetings (PTAMs). MATERIALS AND METHODS: The level of functioning in
Liao, Stephen Shaoyi; Wang, Huai Qing; Li, Qiu Dan; Liu, Wei Yi
2006-06-01
This paper presents a new method for learning Bayesian networks from functional dependencies (FD) and third normal form (3NF) tables in relational databases. The method sets up a linkage between the theory of relational databases and probabilistic reasoning models, which is interesting and useful especially when data are incomplete and inaccurate. The effectiveness and practicability of the proposed method is demonstrated by its implementation in a mobile commerce system.
Wang, Jun; Zhou, Bi-hua; Zhou, Shu-dao; Sheng, Zheng
2015-01-01
The paper proposes a novel function expression method to forecast chaotic time series, using an improved genetic-simulated annealing (IGSA) algorithm to establish the optimum function expression that describes the behavior of time series. In order to deal with the weakness associated with the genetic algorithm, the proposed algorithm incorporates the simulated annealing operation which has the strong local search ability into the genetic algorithm to enhance the performance of optimization; besides, the fitness function and genetic operators are also improved. Finally, the method is applied to the chaotic time series of Quadratic and Rossler maps for validation. The effect of noise in the chaotic time series is also studied numerically. The numerical results verify that the method can forecast chaotic time series with high precision and effectiveness, and the forecasting precision with certain noise is also satisfactory. It can be concluded that the IGSA algorithm is energy-efficient and superior.
Wang, Peng; Guo, Qiuyan; Gao, Yue; Zhi, Hui; Zhang, Yan; Liu, Yue; Zhang, Jizhou; Yue, Ming; Guo, Maoni; Ning, Shangwei; Zhang, Guangmei; Li, Xia
2017-01-17
Although several computational models that predict disease-associated lncRNAs (long non-coding RNAs) exist, only a limited number of disease-associated lncRNAs are known. In this study, we mapped lncRNAs to their functional genomics context using competing endogenous RNAs (ceRNAs) theory. Based on the criteria that similar lncRNAs are likely involved in similar diseases, we proposed a disease lncRNA prioritization method, DisLncPri, to identify novel disease-lncRNA associations. Using a leave-one-out cross validation (LOOCV) strategy, DisLncPri achieved reliable area under curve (AUC) values of 0.89 and 0.87 for the LncRNADisease and Lnc2Cancer datasets that further improved to 0.90 and 0.89 by integrating a multiple rank fusion strategy. We found that DisLncPri had the highest rank enrichment score and AUC value in comparison to several other methods for case studies of alzheimer's disease, ovarian cancer, pancreatic cancer and gastric cancer. Several novel lncRNAs in the top ranks of these diseases were found to be newly verified by relevant databases or reported in recent studies. Prioritization of lncRNAs from a microarray (GSE53622) of oesophageal cancer patients highlighted ENSG00000226029 (top 2), a previously unidentified lncRNA as a potential prognostic biomarker. Our analysis thus indicates that DisLncPri is an excellent tool for identifying lncRNAs that could be novel biomarkers and therapeutic targets in a variety of human diseases.
Venkatraman, Prasanna; Balakrishnan, Satish; Rao, Shashidhar; Hooda, Yogesh; Pol, Suyog
2009-01-01
Background Proteases play a central role in cellular homeostasis and are responsible for the spatio- temporal regulation of function. Many putative proteases have been recently identified through genomic approaches, leading to a surge in global profiling attempts to characterize their function. Through such efforts and others it has become evident that many proteases play non-traditional roles. Accordingly, the number and the variety of the substrate repertoire of proteases are expected to be much larger than previously assumed. In line with such global profiling attempts, we present here a method for the prediction of natural substrates of endo proteases (human proteases used as an example) by employing short peptide sequences as specificity determinants. Methodology/Principal Findings Our method incorporates specificity determinants unique to individual enzymes and physiologically relevant dual filters namely, solvent accessible surface area-a parameter dependent on protein three-dimensional structure and subcellular localization. By incorporating such hitherto unused principles in prediction methods, a novel ligand docking strategy to mimic substrate binding at the active site of the enzyme, and GO functions, we identify and perform subjective validation on putative substrates of matriptase and highlight new functions of the enzyme. Using relative solvent accessibility to rank order we show how new protease regulatory networks and enzyme cascades can be created. Conclusion We believe that our physiologically relevant computational approach would be a very useful complementary method in the current day attempts to profile proteases (endo proteases in particular) and their substrates. In addition, by using functional annotations, we have demonstrated how normal and unknown functions of a protease can be envisaged. We have developed a network which can be integrated to create a proteolytic world. This network can in turn be extended to integrate other regulatory
Chung, Jun Young; Douglas, Jack F; Stafford, Christopher M
2017-10-21
We investigate the relaxation dynamics of thin polymer films at temperatures below the bulk glass transition T g by first compressing polystyrene films supported on a polydimethylsiloxane substrate to create wrinkling patterns and then observing the slow relaxation of the wrinkled films back to their final equilibrium flat state by small angle light scattering. As with recent relaxation measurements on thin glassy films reported by Fakhraai and co-workers, we find the relaxation time of our wrinkled films to be strongly dependent on film thickness below an onset thickness on the order of 100 nm. By varying the temperature between room temperature and T g (≈100 °C), we find that the relaxation time follows an Arrhenius-type temperature dependence to a good approximation at all film thicknesses investigated, where both the activation energy and the relaxation time pre-factor depend appreciably on film thickness. The wrinkling relaxation curves tend to cross at a common temperature somewhat below T g , indicating an entropy-enthalpy compensation relation between the activation free energy parameters. This compensation effect has also been observed recently in simulated supported polymer films in the high temperature Arrhenius relaxation regime rather than the glassy state. In addition, we find that the film stress relaxation function, as well as the height of the wrinkle ridges, follows a stretched exponential time dependence and the short-time effective Young's modulus derived from our modeling decreases sigmoidally with increasing temperature-both characteristic features of glassy materials. The relatively facile nature of the wrinkling-based measurements in comparison to other film relaxation measurements makes our method attractive for practical materials development, as well as fundamental studies of glass formation.
Directory of Open Access Journals (Sweden)
Denok Lestari
2017-01-01
Full Text Available This research is aimed to analyse language functions in English, specifically those which are used in the context of Food and Beverage Service. The findings of the analysis related to the language functions are then applied in a teaching method which is designed to improve the students’ abilities in speaking English. There are two novelties in this research. The first one is the theory of language functions which is reconstructed in accordance with the Food and Beverage Service context. Those language functions are: permisive (to soften utterances, to avoid repetition, and to adjust intonation; interactive (to greet, to have small talks, and farewell; informative (to introduce, to show, to state, to explain, to ask, to agree, to reject, and to confirm; persuasive (to offer, to promise, to suggest, and to persuade; directive (to tell, to order, and to request; indicative (to praise, to complain, to thank, and to apologize. The second novelty which is more practical is the design of the ASRI method which consists of four basic components, namely: Aims (the purpose in communicating; Sequence (the operational procedure in handling guests in the restaurant; Role play (the simmulation activities in language learning; and Interaction (the interactive communications between participants. The method of ASRI with the application of the language functions in its ABCD procedure, namely Acquire, Brainstorm, Chance and Develop is proven to be effective in improving the students’ abilities in speaking English, specifically in the context of Food and Beverage Service.
International Nuclear Information System (INIS)
Khericha, Soli T.
2000-01-01
One-energy group, two-dimensional computer code was developed to calculate the response of a detector to a vibrating absorber in a reactor core. A concept of local/global components, based on the frequency dependent detector adjoint function, and a nodalization technique were utilized. The frequency dependent detector adjoint functions presented by complex equations were expanded into real and imaginary parts. In the nodalization technique, the flux is expanded into polynomials about the center point of each node. The phase angle and the magnitude of the one-energy group detector adjoint function were calculated for a detector located in the center of a 200x200 cm reactor using a two-dimensional nodalization technique, the computer code EXTERMINATOR, and the analytical solution. The purpose of this research was to investigate the applicability of a polynomial nodal model technique to the calculations of the real and the imaginary parts of the detector adjoint function for one-energy group two-dimensional polynomial nodal model technique. From the results as discussed earlier, it is concluded that the nodal model technique can be used to calculate the detector adjoint function and the phase angle. Using the computer code developed for nodal model technique, the magnitude of one energy group frequency dependent detector adjoint function and the phase angle were calculated for the detector located in the center of a 200x200 cm homogenous reactor. The real part of the detector adjoint function was compared with the results obtained from the EXTERMINATOR computer code as well as the analytical solution based on a double sine series expansion using the classical Green's Function solution. The values were found to be less than 1% greater at 20 cm away from the source region and about 3% greater closer to the source compared to the values obtained from the analytical solution and the EXTERMINATOR code. The currents at the node interface matched within 1% of the average
Computational Methods and Function Theory
Saff, Edward; Salinas, Luis; Varga, Richard
1990-01-01
The volume is devoted to the interaction of modern scientific computation and classical function theory. Many problems in pure and more applied function theory can be tackled using modern computing facilities: numerically as well as in the sense of computer algebra. On the other hand, computer algorithms are often based on complex function theory, and dedicated research on their theoretical foundations can lead to great enhancements in performance. The contributions - original research articles, a survey and a collection of problems - cover a broad range of such problems.
Zaffran, Jeremie; Caspary Toroker, Maytal
2016-08-09
NiOOH has recently been used to catalyze water oxidation by way of electrochemical water splitting. Few experimental data are available to rationalize the successful catalytic capability of NiOOH. Thus, theory has a distinctive role for studying its properties. However, the unique layered structure of NiOOH is associated with the presence of essential dispersion forces within the lattice. Hence, the choice of an appropriate exchange-correlation functional within Density Functional Theory (DFT) is not straightforward. In this work, we will show that standard DFT is sufficient to evaluate the geometry, but DFT+U and hybrid functionals are required to calculate the oxidation states. Notably, the benefit of DFT with van der Waals correction is marginal. Furthermore, only hybrid functionals succeed in opening a bandgap, and such methods are necessary to study NiOOH electronic structure. In this work, we expect to give guidelines to theoreticians dealing with this material and to present a rational approach in the choice of the DFT method of calculation.
The Boundary Function Method. Fundamentals
Kot, V. A.
2017-03-01
The boundary function method is proposed for solving applied problems of mathematical physics in the region defined by a partial differential equation of the general form involving constant or variable coefficients with a Dirichlet, Neumann, or Robin boundary condition. In this method, the desired function is defined by a power polynomial, and a boundary function represented in the form of the desired function or its derivative at one of the boundary points is introduced. Different sequences of boundary equations have been set up with the use of differential operators. Systems of linear algebraic equations constructed on the basis of these sequences allow one to determine the coefficients of a power polynomial. Constitutive equations have been derived for initial boundary-value problems of all the main types. With these equations, an initial boundary-value problem is transformed into the Cauchy problem for the boundary function. The determination of the boundary function by its derivative with respect to the time coordinate completes the solution of the problem.
Liu, Yang; Yang, Linghui; Guo, Yin; Lin, Jiarui; Cui, Pengfei; Zhu, Jigui
2018-02-01
An interferometer technique based on temporal coherence function of femtosecond pulses is demonstrated for practical distance measurement. Here, the pulse-to-pulse alignment is analyzed for large delay distance measurement. Firstly, a temporal coherence function model between two femtosecond pulses is developed in the time domain for the dispersive unbalanced Michelson interferometer. Then, according to this model, the fringes analysis and the envelope extraction process are discussed. Meanwhile, optimization methods of pulse-to-pulse alignment for practical long distance measurement are presented. The order of the curve fitting and the selection of points for envelope extraction are analyzed. Furthermore, an averaging method based on the symmetry of the coherence function is demonstrated. Finally, the performance of the proposed methods is evaluated in the absolute distance measurement of 20 μ m with path length difference of 9 m. The improvement of standard deviation in experimental results shows that these approaches have the potential for practical distance measurement.
Directory of Open Access Journals (Sweden)
Fernando Almeida
2017-12-01
Full Text Available Many clinical patients present to mental health clinics with depressive symptoms, anxiety, psychosomatic complaints, and sleeping problems. These symptoms which originated may originate from marital problems, conflictual interpersonal relationships, problems in securing work, and housing issues, among many others. These issues might interfere which underlie the difficulties that with the ability of the patients face in maintaining faultless logical reasoning (FLR and faultless logical functioning (FLF. FLR implies to assess correctly premises, rules, and conclusions. And FLF implies assessing not only FLR, but also the circumstances, life experience, personality, events that validate a conclusion. Almost always, the symptomatology is accompanied by intense emotional changes. Clinical experience shows that a logic-based psychotherapy (LBP approach is not practiced, and that therapists’ resort to psychopharmacotherapy or other types of psychotherapeutic approaches that are not focused on logical reasoning and, especially, logical functioning. Because of this, patients do not learn to overcome their reasoning and functioning errors. The aim of this work was to investigate how LBP works to improve the patients’ ability to think and function in a faultless logical way. This work describes the case studies of three patients. For this purpose we described the treatment of three patients. With this psychotherapeutic approach, patients gain knowledge that can then be applied not only to the issues that led them to the consultation, but also to other problems they have experienced, thus creating a learning experience and helping to prevent such patients from becoming involved in similar problematic situations. This highlights that LBP is a way of treating symptoms that interfere on some level with daily functioning. This psychotherapeutic approach is relevant for improving patients’ quality of life, and it fills a gap in the literature by describing
Penalty parameter of the penalty function method
DEFF Research Database (Denmark)
Si, Cheng Yong; Lan, Tian; Hu, Junjie
2014-01-01
The penalty parameter of penalty function method is systematically analyzed and discussed. For the problem that Deb's feasibility-based rule doesnot give the detailed instruction as how to rank two solutions when they have the same constraint violation, an improved Deb's feasibility-based rule is...
Energy Technology Data Exchange (ETDEWEB)
Giesel, F.L.; Weber, M.A.; Zechmann, C.; Tengg-Kobligk, H. von; Essig, M.; Kauczor, H.U. [Radiologie, Deutsches Krebsforschungszentrum (DKFZ), Heidelberg (Germany); Wuestenberg, T. [Abt. fuer Medizinische Psychologie, Georg-August-Univ. Goettingen (Germany); Bongers, A.; Baudendistel, K.T. [Medizinische Physik in der Radiologie, Deutsches Krebsforschungszentrum (DKFZ), Heidelberg (Germany); Hahn, H.K. [MeVis, Zentrum fuer Medizinische Diagnosesysteme und Visualisierung, Bremen (Germany)
2005-05-01
This review presents the basic principles of functional imaging of the central nervous system utilizing magnetic resonance imaging. The focus is set on visualization of different functional aspects of the brain and related pathologies. Additionally, clinical cases are presented to illustrate the applications of functional imaging techniques in the clinical setting. The relevant physics and physiology of contrast-enhanced and non-contrast-enhanced methods are discussed. The two main functional MR techniques requiring contrast-enhancement are dynamic T1- and T2{sup *}-MRI to image perfusion. Based on different pharmacokinetic models of contrast enhancement diagnostic applications for neurology and radio-oncology are discussed. The functional non-contrast enhanced imaging techniques are based on ''blood oxygenation level dependent (BOLD)-fMRI and arterial spin labeling (ASL) technique. They have gained clinical impact particularly in the fields of psychiatry and neurosurgery. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Cho, Nam Zin; Lee, Joo Hee; Lee, Jae Jun; Yu, Hui; Lee, Gil Soo [Korea Advanced Institute of Science and Tehcnology, Daejeon (Korea, Republic of)
2006-03-15
There is growing interest in developing Pebble Bed Reactors(PBRs) as a candidate of Very High Temperature gas-cooled Reactors(VHTRs). Until now, most existing methods of nuclear design analysis for this type of reactors are base on old finite-difference solvers or on statistical methods. And other existing nodal cannot be adapted for this kind of reactors because of transverse integration problem. In this project, we developed the TOPS code in three dimensional cylindrical geometry based on Analytic Function Expansion Nodal (AFEN) method developed at KAIST. The TOPS code showed better results in computing time than FDM and MCNP. Also TOPS showed very accurate results in reactor analysis.
International Nuclear Information System (INIS)
Cho, Nam Zin; Lee, Joo Hee; Lee, Jae Jun; Yu, Hui; Lee, Gil Soo
2006-03-01
There is growing interest in developing Pebble Bed Reactors(PBRs) as a candidate of Very High Temperature gas-cooled Reactors(VHTRs). Until now, most existing methods of nuclear design analysis for this type of reactors are base on old finite-difference solvers or on statistical methods. And other existing nodal cannot be adapted for this kind of reactors because of transverse integration problem. In this project, we developed the TOPS code in three dimensional cylindrical geometry based on Analytic Function Expansion Nodal (AFEN) method developed at KAIST. The TOPS code showed better results in computing time than FDM and MCNP. Also TOPS showed very accurate results in reactor analysis
Directory of Open Access Journals (Sweden)
Tian Wu
2014-11-01
Full Text Available This paper presents a model for the projection of Chinese vehicle stocks and road vehicle energy demand through 2050 based on low-, medium-, and high-growth scenarios. To derive a gross-domestic product (GDP-dependent Gompertz function, Chinese GDP is estimated using a recursive dynamic Computable General Equilibrium (CGE model. The Gompertz function is estimated using historical data on vehicle development trends in North America, Pacific Rim and Europe to overcome the problem of insufficient long-running data on Chinese vehicle ownership. Results indicate that the number of projected vehicle stocks for 2050 is 300, 455 and 463 million for low-, medium-, and high-growth scenarios respectively. Furthermore, the growth in China’s vehicle stock will increase beyond the inflection point of Gompertz curve by 2020, but will not reach saturation point during the period 2014–2050. Of major road vehicle categories, cars are the largest energy consumers, followed by trucks and buses. Growth in Chinese vehicle demand is primarily determined by per capita GDP. Vehicle saturation levels solely influence the shape of the Gompertz curve and population growth weakly affects vehicle demand. Projected total energy consumption of road vehicles in 2050 is 380, 575 and 586 million tonnes of oil equivalent for each scenario.
Oyama, Takuro; Ikabata, Yasuhiro; Seino, Junji; Nakai, Hiromi
2017-07-01
This Letter proposes a density functional treatment based on the two-component relativistic scheme at the infinite-order Douglas-Kroll-Hess (IODKH) level. The exchange-correlation energy and potential are calculated using the electron density based on the picture-change corrected density operator transformed by the IODKH method. Numerical assessments indicated that the picture-change uncorrected density functional terms generate significant errors, on the order of hartree for heavy atoms. The present scheme was found to reproduce the energetics in the four-component treatment with high accuracy.
Fraga, María; Vilariño, Natalia; Louzao, M Carmen; Fernández, Diego A; Poli, Mark; Botana, Luis M
2016-01-15
Palytoxin (PLTX) is a complex marine toxin produced by zoanthids (i.e. Palythoa), dinoflagellates (Ostreopsis) and cyanobacteria (Trichodesmium). PLTX outbreaks are usually associated with Indo-Pacific waters, however their recent repeated occurrence in Mediterranean-European Atlantic coasts demonstrate their current worldwide distribution. Human sickness and fatalities have been associated with toxic algal blooms and ingestion of seafood contaminated with PLTX-like molecules. These toxins represent a serious threat to human health. There is an immediate need to develop easy-to-use, rapid detection methods due to the lack of validated protocols for their detection and quantification. We have developed an immuno-detection method for PLTX-like molecules based on the use of microspheres coupled to flow-cytometry detection (Luminex 200™). The assay consisted of the competition between free PLTX-like compounds in solution and PLTX immobilized on the surface of microspheres for binding to a specific monoclonal anti-PLTX antibody. This method displays an IC50 of 1.83 ± 0.21 nM and a dynamic range of 0.47-6.54 nM for PLTX. An easy-to-perform extraction protocol, based on a mixture of methanol and acetate buffer, was applied to spiked mussel samples providing a recovery rate of 104 ± 8% and a range of detection from 374 ± 81 to 4430 ± 150 μg kg(-1) when assayed with this method. Extracts of Ostreopsis cf. siamensis and Palythoa tuberculosa were tested and yielded positive results for PLTX-like molecules. However, the data obtained for the coral sample suggested that this antibody did not detect 42-OH-PLTX efficiently. The same samples were further analyzed using a neuroblastoma cytotoxicity assay and UPLC-IT-TOF spectrometry, which also pointed to the presence of PLTX-like compounds. Therefore, this single detection method for PLTX provides a semi-quantitative tool useful for the screening of PLTX-like molecules in different matrixes. Copyright © 2015
Dronova, I.; Gong, P.; Wang, L.; Clinton, N.; Fu, W.; Qi, S.
2011-12-01
Remote sensing-based vegetation classifications representing plant function such as photosynthesis and productivity are challenging in wetlands with complex cover and difficult field access. Recent advances in object-based image analysis (OBIA) and machine-learning algorithms offer new classification tools; however, few comparisons of different algorithms and spatial scales have been discussed to date. We applied OBIA to delineate wetland plant functional types (PFTs) for Poyang Lake, the largest freshwater lake in China and Ramsar wetland conservation site, from 30-m Landsat TM scene at the peak of spring growing season. We targeted major PFTs (C3 grasses, C3 forbs and different types of C4 grasses and aquatic vegetation) that are both key players in system's biogeochemical cycles and critical providers of waterbird habitat. Classification results were compared among: a) several object segmentation scales (with average object sizes 900-9000 m2); b) several families of statistical classifiers (including Bayesian, Logistic, Neural Network, Decision Trees and Support Vector Machines) and c) two hierarchical levels of vegetation classification, a generalized 3-class set and more detailed 6-class set. We found that classification benefited from object-based approach which allowed including object shape, texture and context descriptors in classification. While a number of classifiers achieved high accuracy at the finest pixel-equivalent segmentation scale, the highest accuracies and best agreement among algorithms occurred at coarser object scales. No single classifier was consistently superior across all scales, although selected algorithms of Neural Network, Logistic and K-Nearest Neighbors families frequently provided the best discrimination of classes at different scales. The choice of vegetation categories also affected classification accuracy. The 6-class set allowed for higher individual class accuracies but lower overall accuracies than the 3-class set because
Kishi, Ryohei; Nakano, Masayoshi
2011-04-21
A novel method for the calculation of the dynamic polarizability (α) of open-shell molecular systems is developed based on the quantum master equation combined with the broken-symmetry (BS) time-dependent density functional theory within the Tamm-Dancoff approximation, referred to as the BS-DFTQME method. We investigate the dynamic α density distribution obtained from BS-DFTQME calculations in order to analyze the spatial contributions of electrons to the field-induced polarization and clarify the contributions of the frontier orbital pair to α and its density. To demonstrate the performance of this method, we examine the real part of dynamic α of singlet 1,3-dipole systems having a variety of diradical characters (y). The frequency dispersion of α, in particular in the resonant region, is shown to strongly depend on the exchange-correlation functional as well as on the diradical character. Under sufficiently off-resonant condition, the dynamic α is found to decrease with increasing y and/or the fraction of Hartree-Fock exchange in the exchange-correlation functional, which enhances the spin polarization, due to the decrease in the delocalization effects of π-diradical electrons in the frontier orbital pair. The BS-DFTQME method with the BHandHLYP exchange-correlation functional also turns out to semiquantitatively reproduce the α spectra calculated by a strongly correlated ab initio molecular orbital method, i.e., the spin-unrestricted coupled-cluster singles and doubles.
White, David J.; Congedo, Marco; Ciorciari, Joseph
2014-01-01
A developing literature explores the use of neurofeedback in the treatment of a range of clinical conditions, particularly ADHD and epilepsy, whilst neurofeedback also provides an experimental tool for studying the functional significance of endogenous brain activity. A critical component of any neurofeedback method is the underlying physiological signal which forms the basis for the feedback. While the past decade has seen the emergence of fMRI-based protocols training spatially confined BOLD activity, traditional neurofeedback has utilized a small number of electrode sites on the scalp. As scalp EEG at a given electrode site reflects a linear mixture of activity from multiple brain sources and artifacts, efforts to successfully acquire some level of control over the signal may be confounded by these extraneous sources. Further, in the event of successful training, these traditional neurofeedback methods are likely influencing multiple brain regions and processes. The present work describes the use of source-based signal processing methods in EEG neurofeedback. The feasibility and potential utility of such methods were explored in an experiment training increased theta oscillatory activity in a source derived from Blind Source Separation (BSS) of EEG data obtained during completion of a complex cognitive task (spatial navigation). Learned increases in theta activity were observed in two of the four participants to complete 20 sessions of neurofeedback targeting this individually defined functional brain source. Source-based EEG neurofeedback methods using BSS may offer important advantages over traditional neurofeedback, by targeting the desired physiological signal in a more functionally and spatially specific manner. Having provided preliminary evidence of the feasibility of these methods, future work may study a range of clinically and experimentally relevant brain processes where individual brain sources may be targeted by source-based EEG neurofeedback. PMID
Network-based functional enrichment
Directory of Open Access Journals (Sweden)
Poirel Christopher L
2011-11-01
Full Text Available Abstract Background Many methods have been developed to infer and reason about molecular interaction networks. These approaches often yield networks with hundreds or thousands of nodes and up to an order of magnitude more edges. It is often desirable to summarize the biological information in such networks. A very common approach is to use gene function enrichment analysis for this task. A major drawback of this method is that it ignores information about the edges in the network being analyzed, i.e., it treats the network simply as a set of genes. In this paper, we introduce a novel method for functional enrichment that explicitly takes network interactions into account. Results Our approach naturally generalizes Fisher’s exact test, a gene set-based technique. Given a function of interest, we compute the subgraph of the network induced by genes annotated to this function. We use the sequence of sizes of the connected components of this sub-network to estimate its connectivity. We estimate the statistical significance of the connectivity empirically by a permutation test. We present three applications of our method: i determine which functions are enriched in a given network, ii given a network and an interesting sub-network of genes within that network, determine which functions are enriched in the sub-network, and iii given two networks, determine the functions for which the connectivity improves when we merge the second network into the first. Through these applications, we show that our approach is a natural alternative to network clustering algorithms. Conclusions We presented a novel approach to functional enrichment that takes into account the pairwise relationships among genes annotated by a particular function. Each of the three applications discovers highly relevant functions. We used our methods to study biological data from three different organisms. Our results demonstrate the wide applicability of our methods. Our algorithms are
Horn, Kevin M [Albuquerque, NM
2008-05-20
A broad-beam laser irradiation apparatus can measure the parametric or functional response of a semiconductor device to exposure to dose-rate equivalent infrared laser light. Comparisons of dose-rate response from before, during, and after accelerated aging of a device, or from periodic sampling of devices from fielded operational systems can determine if aging has affected the device's overall functionality. The dependence of these changes on equivalent dose-rate pulse intensity and/or duration can be measured with the apparatus. The synchronized introduction of external electrical transients into the device under test can be used to simulate the electrical effects of the surrounding circuitry's response to a radiation exposure while exposing the device to dose-rate equivalent infrared laser light.
International Nuclear Information System (INIS)
Zhang Shixun; Yamagia, Shinichi; Yunoki, Seiji
2013-01-01
Models of fermions interacting with classical degrees of freedom are applied to a large variety of systems in condensed matter physics. For this class of models, Weiße [Phys. Rev. Lett. 102, 150604 (2009)] has recently proposed a very efficient numerical method, called O(N) Green-Function-Based Monte Carlo (GFMC) method, where a kernel polynomial expansion technique is used to avoid the full numerical diagonalization of the fermion Hamiltonian matrix of size N, which usually costs O(N 3 ) computational complexity. Motivated by this background, in this paper we apply the GFMC method to the double exchange model in three spatial dimensions. We mainly focus on the implementation of GFMC method using both MPI on a CPU-based cluster and Nvidia's Compute Unified Device Architecture (CUDA) programming techniques on a GPU-based (Graphics Processing Unit based) cluster. The time complexity of the algorithm and the parallel implementation details on the clusters are discussed. We also show the performance scaling for increasing Hamiltonian matrix size and increasing number of nodes, respectively. The performance evaluation indicates that for a 32 3 Hamiltonian a single GPU shows higher performance equivalent to more than 30 CPU cores parallelized using MPI
Directory of Open Access Journals (Sweden)
Roderick M Card
Full Text Available The aim of this study was to screen for the presence of antimicrobial resistance genes within the saliva and faecal microbiomes of healthy adult human volunteers from five European countries. Two non-culture based approaches were employed to obviate potential bias associated with difficult to culture members of the microbiota. In a gene target-based approach, a microarray was employed to screen for the presence of over 70 clinically important resistance genes in the saliva and faecal microbiomes. A total of 14 different resistance genes were detected encoding resistances to six antibiotic classes (aminoglycosides, β-lactams, macrolides, sulphonamides, tetracyclines and trimethoprim. The most commonly detected genes were erm(B, blaTEM, and sul2. In a functional-based approach, DNA prepared from pooled saliva samples was cloned into Escherichia coli and screened for expression of resistance to ampicillin or sulphonamide, two of the most common resistances found by array. The functional ampicillin resistance screen recovered genes encoding components of a predicted AcrRAB efflux pump. In the functional sulphonamide resistance screen, folP genes were recovered encoding mutant dihydropteroate synthase, the target of sulphonamide action. The genes recovered from the functional screens were from the chromosomes of commensal species that are opportunistically pathogenic and capable of exchanging DNA with related pathogenic species. Genes identified by microarray were not recovered in the activity-based screen, indicating that these two methods can be complementary in facilitating the identification of a range of resistance mechanisms present within the human microbiome. It also provides further evidence of the diverse reservoir of resistance mechanisms present in bacterial populations in the human gut and saliva. In future the methods described in this study can be used to monitor changes in the resistome in response to antibiotic therapy.
Finding function: evaluation methods for functional genomic data
Directory of Open Access Journals (Sweden)
Barrett Daniel R
2006-07-01
Full Text Available Abstract Background Accurate evaluation of the quality of genomic or proteomic data and computational methods is vital to our ability to use them for formulating novel biological hypotheses and directing further experiments. There is currently no standard approach to evaluation in functional genomics. Our analysis of existing approaches shows that they are inconsistent and contain substantial functional biases that render the resulting evaluations misleading both quantitatively and qualitatively. These problems make it essentially impossible to compare computational methods or large-scale experimental datasets and also result in conclusions that generalize poorly in most biological applications. Results We reveal issues with current evaluation methods here and suggest new approaches to evaluation that facilitate accurate and representative characterization of genomic methods and data. Specifically, we describe a functional genomics gold standard based on curation by expert biologists and demonstrate its use as an effective means of evaluation of genomic approaches. Our evaluation framework and gold standard are freely available to the community through our website. Conclusion Proper methods for evaluating genomic data and computational approaches will determine how much we, as a community, are able to learn from the wealth of available data. We propose one possible solution to this problem here but emphasize that this topic warrants broader community discussion.
Pinti, Paola; Merla, Arcangelo; Aichelburg, Clarisse; Lind, Frida; Power, Sarah; Swingler, Elizabeth; Hamilton, Antonia; Gilbert, Sam; Burgess, Paul W; Tachtsidis, Ilias
2017-07-15
Recent technological advances have allowed the development of portable functional Near-Infrared Spectroscopy (fNIRS) devices that can be used to perform neuroimaging in the real-world. However, as real-world experiments are designed to mimic everyday life situations, the identification of event onsets can be extremely challenging and time-consuming. Here, we present a novel analysis method based on the general linear model (GLM) least square fit analysis for the Automatic IDentification of functional Events (or AIDE) directly from real-world fNIRS neuroimaging data. In order to investigate the accuracy and feasibility of this method, as a proof-of-principle we applied the algorithm to (i) synthetic fNIRS data simulating both block-, event-related and mixed-design experiments and (ii) experimental fNIRS data recorded during a conventional lab-based task (involving maths). AIDE was able to recover functional events from simulated fNIRS data with an accuracy of 89%, 97% and 91% for the simulated block-, event-related and mixed-design experiments respectively. For the lab-based experiment, AIDE recovered more than the 66.7% of the functional events from the fNIRS experimental measured data. To illustrate the strength of this method, we then applied AIDE to fNIRS data recorded by a wearable system on one participant during a complex real-world prospective memory experiment conducted outside the lab. As part of the experiment, there were four and six events (actions where participants had to interact with a target) for the two different conditions respectively (condition 1: social-interact with a person; condition 2: non-social-interact with an object). AIDE managed to recover 3/4 events and 3/6 events for conditions 1 and 2 respectively. The identified functional events were then corresponded to behavioural data from the video recordings of the movements and actions of the participant. Our results suggest that "brain-first" rather than "behaviour-first" analysis is
Arora, Sanjeevani; Huwe, Peter J.; Sikder, Rahmat; Shah, Manali; Browne, Amanda J.; Lesh, Randy; Nicolas, Emmanuelle; Deshpande, Sanat; Hall, Michael J.; Dunbrack, Roland L.; Golemis, Erica A.
2017-01-01
ABSTRACT The cancer-predisposing Lynch Syndrome (LS) arises from germline mutations in DNA mismatch repair (MMR) genes, predominantly MLH1, MSH2, MSH6, and PMS2. A major challenge for clinical diagnosis of LS is the frequent identification of variants of uncertain significance (VUS) in these genes, as it is often difficult to determine variant pathogenicity, particularly for missense variants. Generic programs such as SIFT and PolyPhen-2, and MMR gene-specific programs such as PON-MMR and MAPP-MMR, are often used to predict deleterious or neutral effects of VUS in MMR genes. We evaluated the performance of multiple predictive programs in the context of functional biologic data for 15 VUS in MLH1, MSH2, and PMS2. Using cell line models, we characterized VUS predicted to range from neutral to pathogenic on mRNA and protein expression, basal cellular viability, viability following treatment with a panel of DNA-damaging agents, and functionality in DNA damage response (DDR) signaling, benchmarking to wild-type MMR proteins. Our results suggest that the MMR gene-specific classifiers do not always align with the experimental phenotypes related to DDR. Our study highlights the importance of complementary experimental and computational assessment to develop future predictors for the assessment of VUS. PMID:28494185
Directory of Open Access Journals (Sweden)
Gaosheng Luo
2014-01-01
Full Text Available A robust adaptive control method with full-state feedback is proposed based on the fact that the elbow joint of a seven-function hydraulic manipulator with double-screw-pair transmission features the following control characteristics: a strongly nonlinear hydraulic system, parameter uncertainties susceptible to temperature and pressure changes of the external environment, and unknown outer disturbances. Combined with the design method of the back-stepping controller, the asymptotic stability of the control system in the presence of disturbances from uncertain systematic parameters and unknown external disturbances was demonstrated using Lyapunov stability theory. Based on the elbow joint of the seven-function master-slave hydraulic manipulator for the 4500 m Deep-Sea Working System as the research subject, a comparative study was conducted using the control method presented in this paper for unknown external disturbances. Simulations and experiments of different unknown outer disturbances showed that (1 the proposed controller could robustly track the desired reference trajectory with satisfactory dynamic performance and steady accuracy and that (2 the modified parameter adaptive laws could also guarantee that the estimated parameters are bounded.
Czech Academy of Sciences Publication Activity Database
Dobeš, Petr; Fanfrlík, Jindřich; Řezáč, Jan; Otyepka, M.; Hobza, Pavel
2011-01-01
Roč. 25, č. 3 (2011), s. 223-235 ISSN 0920-654X R&D Projects: GA MŠk LC512; GA ČR GAP208/11/0295 Grant - others:European Social Fund(XE) CZ.1.05/2.1.00/03.0058 Institutional research plan: CEZ:AV0Z40550506 Keywords : CDK2 * semiempirical quantum mechanical method PM6-DH2 * drug design Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 3.386, year: 2011
O-hydroxy-functionalized diamines, polymides, methods of making each, and methods of use
Ma, Xiaohua; Ghanem, Bader S.; Pinnau, Ingo
2016-01-01
Embodiments of the present disclosure provide for an ortho (o)-hydroxy-functionalized diamine, a method of making an o-hydroxy-functionalized diamine, an o-hydroxy-functionalized diamine-based polyimide, a method of making an o-hydroxy-functionalized diamine imide, methods of gas separation, and the like.
O-hydroxy-functionalized diamines, polymides, methods of making each, and methods of use
Ma, Xiaohua
2016-01-21
Embodiments of the present disclosure provide for an ortho (o)-hydroxy-functionalized diamine, a method of making an o-hydroxy-functionalized diamine, an o-hydroxy-functionalized diamine-based polyimide, a method of making an o-hydroxy-functionalized diamine imide, methods of gas separation, and the like.
Energy Technology Data Exchange (ETDEWEB)
Patriarca, Riccardo, E-mail: riccardo.patriarca@uniroma1.it; Di Gravio, Giulio; Costantino, Francesco; Tronci, Massimo
2017-03-15
Environmental auditing is a main issue for any production plant and assessing environmental performance is crucial to identify risks factors. The complexity of current plants arises from interactions among technological, human and organizational system components, which are often transient and not easily detectable. The auditing thus requires a systemic perspective, rather than focusing on individual behaviors, as emerged in recent research in the safety domain for socio-technical systems. We explore the significance of modeling the interactions of system components in everyday work, by the application of a recent systemic method, i.e. the Functional Resonance Analysis Method (FRAM), in order to define dynamically the system structure. We present also an innovative evolution of traditional FRAM following a semi-quantitative approach based on Monte Carlo simulation. This paper represents the first contribution related to the application of FRAM in the environmental context, moreover considering a consistent evolution based on Monte Carlo simulation. The case study of an environmental risk auditing in a sinter plant validates the research, showing the benefits in terms of identifying potential critical activities, related mitigating actions and comprehensive environmental monitoring indicators. - Highlights: • We discuss the relevance of a systemic risk based environmental audit. • We present FRAM to represent functional interactions of the system. • We develop a semi-quantitative FRAM framework to assess environmental risks. • We apply the semi-quantitative FRAM framework to build a model for a sinter plant.
International Nuclear Information System (INIS)
Patriarca, Riccardo; Di Gravio, Giulio; Costantino, Francesco; Tronci, Massimo
2017-01-01
Environmental auditing is a main issue for any production plant and assessing environmental performance is crucial to identify risks factors. The complexity of current plants arises from interactions among technological, human and organizational system components, which are often transient and not easily detectable. The auditing thus requires a systemic perspective, rather than focusing on individual behaviors, as emerged in recent research in the safety domain for socio-technical systems. We explore the significance of modeling the interactions of system components in everyday work, by the application of a recent systemic method, i.e. the Functional Resonance Analysis Method (FRAM), in order to define dynamically the system structure. We present also an innovative evolution of traditional FRAM following a semi-quantitative approach based on Monte Carlo simulation. This paper represents the first contribution related to the application of FRAM in the environmental context, moreover considering a consistent evolution based on Monte Carlo simulation. The case study of an environmental risk auditing in a sinter plant validates the research, showing the benefits in terms of identifying potential critical activities, related mitigating actions and comprehensive environmental monitoring indicators. - Highlights: • We discuss the relevance of a systemic risk based environmental audit. • We present FRAM to represent functional interactions of the system. • We develop a semi-quantitative FRAM framework to assess environmental risks. • We apply the semi-quantitative FRAM framework to build a model for a sinter plant.
International Nuclear Information System (INIS)
Suhendi, Endi; Syariati, Rifki; Noor, Fatimah A.; Khairurrijal; Kurniasih, Neny
2014-01-01
We modeled a tunneling current in a p-n junction based on armchair graphene nanoribbons (AGNRs) by using an Airy function approach (AFA) and a transfer matrix method (TMM). We used β-type AGNRs, in which its band gap energy and electron effective mass depends on its width as given by the extended Huckel theory. It was shown that the tunneling currents evaluated by employing the AFA are the same as those obtained under the TMM. Moreover, the calculated tunneling current was proportional to the voltage bias and inversely with temperature
Directory of Open Access Journals (Sweden)
Emre Guney
Full Text Available Complex biological systems usually pose a trade-off between robustness and fragility where a small number of perturbations can substantially disrupt the system. Although biological systems are robust against changes in many external and internal conditions, even a single mutation can perturb the system substantially, giving rise to a pathophenotype. Recent advances in identifying and analyzing the sequential variations beneath human disorders help to comprehend a systemic view of the mechanisms underlying various disease phenotypes. Network-based disease-gene prioritization methods rank the relevance of genes in a disease under the hypothesis that genes whose proteins interact with each other tend to exhibit similar phenotypes. In this study, we have tested the robustness of several network-based disease-gene prioritization methods with respect to the perturbations of the system using various disease phenotypes from the Online Mendelian Inheritance in Man database. These perturbations have been introduced either in the protein-protein interaction network or in the set of known disease-gene associations. As the network-based disease-gene prioritization methods are based on the connectivity between known disease-gene associations, we have further used these methods to categorize the pathophenotypes with respect to the recoverability of hidden disease-genes. Our results have suggested that, in general, disease-genes are connected through multiple paths in the human interactome. Moreover, even when these paths are disturbed, network-based prioritization can reveal hidden disease-gene associations in some pathophenotypes such as breast cancer, cardiomyopathy, diabetes, leukemia, parkinson disease and obesity to a greater extend compared to the rest of the pathophenotypes tested in this study. Gene Ontology (GO analysis highlighted the role of functional diversity for such diseases.
Directory of Open Access Journals (Sweden)
Chuanfa Chen
2015-03-01
Full Text Available Remote-sensing-derived elevation data sets often suffer from noise and outliers due to various reasons, such as the physical limitations of sensors, multiple reflectance, occlusions and low contrast of texture. Outliers generally have a seriously negative effect on DEM construction. Some interpolation methods like ordinary kriging (OK are capable of smoothing noise inherent in sample points, but are sensitive to outliers. In this paper, a robust algorithm of multiquadric method (MQ based on an Improved Huber loss function (MQ-IH has been developed to decrease the impact of outliers on DEM construction. Theoretically, the improved Huber loss function is null for outliers, quadratic for small errors, and linear for others. Simulated data sets drawn from a mathematical surface with different error distributions were employed to analyze the robustness of MQ-IH. Results indicate that MQ-IH obtains a good balance between efficiency and robustness. Namely, the performance of MQ-IH is comparative to those of the classical MQ and MQ based on the Classical Huber loss function (MQ-CH when sample points follow a normal distribution, and the former outperforms the latter two when sample points are subject to outliers. For example, for the Cauchy error distribution with the location parameter of 0 and scale parameter of 1, the root mean square errors (RMSEs of MQ-CH and the classical MQ are 0.3916 and 1.4591, respectively, whereas that of MQ-IH is 0.3698. The performance of MQ-IH is further evaluated by qualitative and quantitative analysis through a real-world example of DEM construction with the stereo-images-derived elevation points. Results demonstrate that compared with the classical interpolation methods, including natural neighbor (NN, OK and ANUDEM (a program that calculates regular grid digital elevation models (DEMs with sensible shape and drainage structure from arbitrarily large topographic data sets, and two versions of MQ, including the
Energy Technology Data Exchange (ETDEWEB)
Conte, Elio [Department of Pharmacology and Human Physiology and Tires, Center for Innovative Technologies for Signal Detection and Processing, University of Bari, Bari (Italy); School of Advanced International Studies on Nuclear, Theoretical and Nonlinear Methodologies-Bari (Italy)], E-mail: fisio2@fisiol.uniba.it; Federici, Antonio [Department of Pharmacology and Human Physiology and Tires, Center for Innovative Technologies for Signal Detection and Processing, University of Bari, Bari (Italy); Zbilut, Joseph P. [Department of Molecular Biophysics and Physiology, Rush University Medical Center, 1653W Congress, Chicago, IL 60612 (United States)
2009-08-15
It is known that R-R time series calculated from a recorded ECG, are strongly correlated to sympathetic and vagal regulation of the sinus pacemaker activity. In human physiology it is a crucial question to estimate such components with accuracy. Fourier analysis dominates still to day the data analysis efforts of such data ignoring that FFT is valid under some crucial restrictions that results largely violated in R-R time series data as linearity and stationarity. In order to go over such approach, we introduce a new method, called CZF. It is based on variogram analysis. It is aimed from a profound link with Recurrence Quantification Analysis that is a basic tool for investigation of non linear and non stationary time series. Therefore, a relevant feature of the method is that it finally may be applied also in cases of non linear and non stationary time series analysis. In addition, the method enables also to analyze the fractal variance function, the Generalized Fractal Dimension and, finally, the relative probability density function of the data. The CZF gives very satisfactory results. In the present paper it has been applied to direct experimental cases of normal subjects, patients with hypertension before and after therapy and in children under some different conditions of experimentation.
International Nuclear Information System (INIS)
Conte, Elio; Federici, Antonio; Zbilut, Joseph P.
2009-01-01
It is known that R-R time series calculated from a recorded ECG, are strongly correlated to sympathetic and vagal regulation of the sinus pacemaker activity. In human physiology it is a crucial question to estimate such components with accuracy. Fourier analysis dominates still to day the data analysis efforts of such data ignoring that FFT is valid under some crucial restrictions that results largely violated in R-R time series data as linearity and stationarity. In order to go over such approach, we introduce a new method, called CZF. It is based on variogram analysis. It is aimed from a profound link with Recurrence Quantification Analysis that is a basic tool for investigation of non linear and non stationary time series. Therefore, a relevant feature of the method is that it finally may be applied also in cases of non linear and non stationary time series analysis. In addition, the method enables also to analyze the fractal variance function, the Generalized Fractal Dimension and, finally, the relative probability density function of the data. The CZF gives very satisfactory results. In the present paper it has been applied to direct experimental cases of normal subjects, patients with hypertension before and after therapy and in children under some different conditions of experimentation.
Energy Technology Data Exchange (ETDEWEB)
Barraclough, Brendan; Lebron, Sharon [Department of Radiation Oncology, University of Florida, Gainesville, Florida 32608 and J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, Florida 32611 (United States); Li, Jonathan G.; Fan, Qiyong; Liu, Chihray; Yan, Guanghua, E-mail: yangua@shands.ufl.edu [Department of Radiation Oncology, University of Florida, Gainesville, Florida 32608 (United States)
2016-05-15
Purpose: To investigate the geometry dependence of the detector response function (DRF) of three commonly used scanning ionization chambers and its impact on a convolution-based method to address the volume averaging effect (VAE). Methods: A convolution-based approach has been proposed recently to address the ionization chamber VAE. It simulates the VAE in the treatment planning system (TPS) by iteratively convolving the calculated beam profiles with the DRF while optimizing the beam model. Since the convolved and the measured profiles are subject to the same VAE, the calculated profiles match the implicit “real” ones when the optimization converges. Three DRFs (Gaussian, Lorentzian, and parabolic function) were used for three ionization chambers (CC04, CC13, and SNC125c) in this study. Geometry dependent/independent DRFs were obtained by minimizing the difference between the ionization chamber-measured profiles and the diode-measured profiles convolved with the DRFs. These DRFs were used to obtain eighteen beam models for a commercial TPS. Accuracy of the beam models were evaluated by assessing the 20%–80% penumbra width difference (PWD) between the computed and diode-measured beam profiles. Results: The convolution-based approach was found to be effective for all three ionization chambers with significant improvement for all beam models. Up to 17% geometry dependence of the three DRFs was observed for the studied ionization chambers. With geometry dependent DRFs, the PWD was within 0.80 mm for the parabolic function and CC04 combination and within 0.50 mm for other combinations; with geometry independent DRFs, the PWD was within 1.00 mm for all cases. When using the Gaussian function as the DRF, accounting for geometry dependence led to marginal improvement (PWD < 0.20 mm) for CC04; the improvement ranged from 0.38 to 0.65 mm for CC13; for SNC125c, the improvement was slightly above 0.50 mm. Conclusions: Although all three DRFs were found adequate to
Methods for Functional Connectivity Analyses
2012-12-13
motor , or hand motor function (green, red, or blue shading, respectively). Thus, this work produced the first comprehensive analysis of ECoG...Computer Engineering, University of Texas at El Paso , TX, USA 3Department of Neurology, Albany Medical College, Albany, NY, USA 4Department of Computer...Department of Health, Albany, NY, USA bDepartment of Electrical and Computer Engineering, University of Texas at El Paso , TX, USA cDepartment of Neurology
DEFF Research Database (Denmark)
Madsen, Per Printz
1998-01-01
The purpose of this paper is to describe a neural network (SNN), that is based on Shannons ideas of reconstruction of a real continuous function from its samples. The basic function, used in this network, is the Sinc-function. Two learning algorithms are described. A simple one called IM...
Gómez-Coca, Silvia; Ruiz, Eliseo
2012-03-07
The magnetic properties of a new family of single-molecule magnet Ni(3)Mn(2) complexes were studied using theoretical methods based on Density Functional Theory (DFT). The first part of this study is devoted to analysing the exchange coupling constants, focusing on the intramolecular as well as the intermolecular interactions. The calculated intramolecular J values were in excellent agreement with the experimental data, which show that all the couplings are ferromagnetic, leading to an S = 7 ground state. The intermolecular interactions were investigated because the two complexes studied do not show tunnelling at zero magnetic field. Usually, this exchange-biased quantum tunnelling is attributed to the presence of intermolecular interactions calculated with the help of theoretical methods. The results indicate the presence of weak intermolecular antiferromagnetic couplings that cannot explain the ferromagnetic value found experimentally for one of the systems. In the second part, the goal is to analyse magnetic anisotropy through the calculation of the zero-field splitting parameters (D and E), using DFT methods including the spin-orbit effect.
Thomas P. Holmes; Wiktor L. Adamowicz
2003-01-01
Stated preference methods of environmental valuation have been used by economists for decades where behavioral data have limitations. The contingent valuation method (Chapter 5) is the oldest stated preference approach, and hundreds of contingent valuation studies have been conducted. More recently, and especially over the last decade, a class of stated preference...
International Nuclear Information System (INIS)
Tu, X.; Zhao, Y.; Luo, S.; Luo, X.; Feng, L.
2012-01-01
We report on a novel amperometric glassy carbon biosensing electrode for glucose. It is based on the immobilization of a highly sensitive glucose oxidase (GOx) by affinity interaction on carbon nanotubes (CNTs) functionalized with iminodiacetic acid and metal chelates. The new technique for immobilization is exploiting the affinity of Co(II) ions to the histidine and cysteine moieties on the surface of GOx. The direct electrochemistry of immobilized GOx revealed that the functionalized CNTs greatly improve the direct electron transfer between GOx and the surface of the electrode to give a pair of well-defined and almost reversible redox peaks and undergoes fast heterogeneous electron transfer with a rate constant (k s) of 0. 59 s -1 . The GOx immobilized in this way fully retained its activity for the oxidation of glucose. The resulting biosensor is capable of detecting glucose at levels as low as 0.01 mM, and has excellent operational stability (with no decrease in the activity of enzyme over a 10 days period). The method of immobilizing GOx is easy and also provides a model technique for potential use with other redox enzymes and proteins. (author)
Masoud Ghanbari; Jahan B Ghasemi; Amir M Mortazavian
2017-01-01
Milk can be modified by several processes to yield numerous kinds of food products with specific functional properties besides increasing the food value. This study aimed to evaluate the effect of various concentration of cereal flours (10–16%), inulin (6 and 8%) and sugar (2 and 4%) on sensory characteristic, consumer acceptance and drivers of liking of a new low sugar/fat prebiotic dairy dessert. In this way, descriptive analysis with trained panelists and three consumer profiling technique...
Radrizzani, Marina; Lo Cicero, Viviana; Soncin, Sabrina; Bolis, Sara; Sürder, Daniel; Torre, Tiziano; Siclari, Francesco; Moccetti, Tiziano; Vassalli, Giuseppe; Turchetto, Lucia
2014-09-27
Cardiovascular cell therapy represents a promising field, with several approaches currently being tested. The advanced therapy medicinal product (ATMP) for the ongoing METHOD clinical study ("Bone marrow derived cell therapy in the stable phase of chronic ischemic heart disease") consists of fresh mononuclear cells (MNC) isolated from autologous bone marrow (BM) through density gradient centrifugation on standard Ficoll-Paque. Cells are tested for safety (sterility, endotoxin), identity/potency (cell count, CD45/CD34/CD133, viability) and purity (contaminant granulocytes and platelets). BM-MNC were isolated by density gradient centrifugation on Ficoll-Paque. The following process parameters were optimized throughout the study: gradient medium density; gradient centrifugation speed and duration; washing conditions. A new manufacturing method was set up, based on gradient centrifugation on low density Ficoll-Paque, followed by 2 washing steps, of which the second one at low speed. It led to significantly higher removal of contaminant granulocytes and platelets, improving product purity; the frequencies of CD34+ cells, CD133+ cells and functional hematopoietic and mesenchymal precursors were significantly increased. The methodological optimization described here resulted in a significant improvement of ATMP quality, a crucial issue to clinical applications in cardiovascular cell therapy.
Meiyanti, R.; Subandi, A.; Fuqara, N.; Budiman, M. A.; Siahaan, A. P. U.
2018-03-01
A singer doesn’t just recite the lyrics of a song, but also with the use of particular sound techniques to make it more beautiful. In the singing technique, more female have a diverse sound registers than male. There are so many registers of the human voice, but the voice registers used while singing, among others, Chest Voice, Head Voice, Falsetto, and Vocal fry. Research of speech recognition based on the female’s voice registers in singing technique is built using Borland Delphi 7.0. Speech recognition process performed by the input recorded voice samples and also in real time. Voice input will result in weight energy values based on calculations using Hankel Transformation method and Macdonald Functions. The results showed that the accuracy of the system depends on the accuracy of sound engineering that trained and tested, and obtained an average percentage of the successful introduction of the voice registers record reached 48.75 percent, while the average percentage of the successful introduction of the voice registers in real time to reach 57 percent.
Microscopically Based Nuclear Energy Functionals
International Nuclear Information System (INIS)
Bogner, S. K.
2009-01-01
A major goal of the SciDAC project 'Building a Universal Nuclear Energy Density Functional' is to develop next-generation nuclear energy density functionals that give controlled extrapolations away from stability with improved performance across the mass table. One strategy is to identify missing physics in phenomenological Skyrme functionals based on our understanding of the underlying internucleon interactions and microscopic many-body theory. In this contribution, I describe ongoing efforts to use the density matrix expansion of Negele and Vautherin to incorporate missing finite-range effects from the underlying two- and three-nucleon interactions into phenomenological Skyrme functionals.
Directory of Open Access Journals (Sweden)
Masoud Ghanbari
2017-08-01
Full Text Available Milk can be modified by several processes to yield numerous kinds of food products with specific functional properties besides increasing the food value. This study aimed to evaluate the effect of various concentration of cereal flours (10–16%, inulin (6 and 8% and sugar (2 and 4% on sensory characteristic, consumer acceptance and drivers of liking of a new low sugar/fat prebiotic dairy dessert. In this way, descriptive analysis with trained panelists and three consumer profiling techniques were used and the agreement between them was compared. Nine samples of desserts with different concentration of flour, inulin and sugar were formulated using a mixture design. The samples were evaluated by a panel of 120 consumers, randomly divided into three groups of 40, who evaluated sensory characteristics of the desserts using intensity scale, or a check-all-that-apply (CATA questions or open-ended questions. Results revealed that various concentration of cereal flours, inulin and sugar resulted in significant changes in the sensory properties of the desserts. Adding higher levels of inulin and sugar led to lower intensities in attributes thickness and creaminess. Samples with higher level of flour and lower level of inulin and sugar were liked by consumers and their high intensities in creaminess and thickness drove liking. Results showed that all the three consumer profiling techniques yielded similar information to descriptive analysis with the trained panel. Likewise, sample configurations from the CATA questions were the most similar to those afforded by the panel of trained assessors. These methodologies could be appealing techniques to investigate the relationship between sensory data and consumer description. Moreover, sensory techniques using consumer perception showed to be valuable to develop functional dessert, which is very important in market succession.
Numerical methods for hyperbolic differential functional problems
Directory of Open Access Journals (Sweden)
Roman Ciarski
2008-01-01
Full Text Available The paper deals with the initial boundary value problem for quasilinear first order partial differential functional systems. A general class of difference methods for the problem is constructed. Theorems on the error estimate of approximate solutions for difference functional systems are presented. The convergence results are proved by means of consistency and stability arguments. A numerical example is given.
Temporal quadratic expansion nodal Green's function method
International Nuclear Information System (INIS)
Liu Cong; Jing Xingqing; Xu Xiaolin
2000-01-01
A new approach is presented to efficiently solve the three-dimensional space-time reactor dynamics equation which overcomes the disadvantages of current methods. In the Temporal Quadratic Expansion Nodal Green's Function Method (TQE/NGFM), the Quadratic Expansion Method (QEM) is used for the temporal solution with the Nodal Green's Function Method (NGFM) employed for the spatial solution. Test calculational results using TQE/NGFM show that its time step size can be 5-20 times larger than that of the Fully Implicit Method (FIM) for similar precision. Additionally, the spatial mesh size with NGFM can be nearly 20 times larger than that using the finite difference method. So, TQE/NGFM is proved to be an efficient reactor dynamics analysis method
Sum rules in the response function method
International Nuclear Information System (INIS)
Takayanagi, Kazuo
1990-01-01
Sum rules in the response function method are studied in detail. A sum rule can be obtained theoretically by integrating the imaginary part of the response function over the excitation energy with a corresponding energy weight. Generally, the response function is calculated perturbatively in terms of the residual interaction, and the expansion can be described by diagrammatic methods. In this paper, we present a classification of the diagrams so as to clarify which diagram has what contribution to which sum rule. This will allow us to get insight into the contributions to the sum rules of all the processes expressed by Goldstone diagrams. (orig.)
International Nuclear Information System (INIS)
Leinonen, Heli; Lajunen, Marja
2012-01-01
Reactivity of five-membered, variously substituted, heteroaromatic diazonium salts was studied toward pristine single-walled carbon nanotubes (SWCNTs), prepared by high-pressure CO conversion (HiPCO) method. Average size range of individual HiPCO SWCNTs was 0.8–1.2 nm (diameter) and 100–1,000 nm (length). Functionalizations were performed by a one-pot diazotization–dediazotization method with methyl-2-aminothiophene-3-carboxylate, 2-aminothiophene-3-carbonitrile, 2-aminoimidazole sulfate, or 3-aminopyrazole in acetic acid using sodium nitrite at room temperature or by heating. According to Raman and Fourier transform infrared spectroscopy, all used heterocyclic diazonium salts formed a covalent bond with SWCNTs and yielded new kinds of five-membered heterocycle-functionalized SWCNTs. Methyl-2-thiophenyl-3-carboxylate-functionalized SWCNTs formed a highly soluble, stable dispersion in tetrahydrofuran (THF), 3-pyrazoyl-functionalized SWCNTs in ethanol, and 2-imidazoyl- or 2-thiophenyl-3-carbonitrile-functionalized SWCNTs in ethanol and THF. The thermogravimetric analysis as well as energy-filtered transmission electron microscopy imaging of the products confirmed the successful functionalization of SWCNTs.
Energy Technology Data Exchange (ETDEWEB)
Leinonen, Heli; Lajunen, Marja, E-mail: marja.lajunen@oulu.fi [University of Oulu, Department of Chemistry (Finland)
2012-09-15
Reactivity of five-membered, variously substituted, heteroaromatic diazonium salts was studied toward pristine single-walled carbon nanotubes (SWCNTs), prepared by high-pressure CO conversion (HiPCO) method. Average size range of individual HiPCO SWCNTs was 0.8-1.2 nm (diameter) and 100-1,000 nm (length). Functionalizations were performed by a one-pot diazotization-dediazotization method with methyl-2-aminothiophene-3-carboxylate, 2-aminothiophene-3-carbonitrile, 2-aminoimidazole sulfate, or 3-aminopyrazole in acetic acid using sodium nitrite at room temperature or by heating. According to Raman and Fourier transform infrared spectroscopy, all used heterocyclic diazonium salts formed a covalent bond with SWCNTs and yielded new kinds of five-membered heterocycle-functionalized SWCNTs. Methyl-2-thiophenyl-3-carboxylate-functionalized SWCNTs formed a highly soluble, stable dispersion in tetrahydrofuran (THF), 3-pyrazoyl-functionalized SWCNTs in ethanol, and 2-imidazoyl- or 2-thiophenyl-3-carbonitrile-functionalized SWCNTs in ethanol and THF. The thermogravimetric analysis as well as energy-filtered transmission electron microscopy imaging of the products confirmed the successful functionalization of SWCNTs.
Leinonen, Heli; Lajunen, Marja
2012-09-01
Reactivity of five-membered, variously substituted, heteroaromatic diazonium salts was studied toward pristine single-walled carbon nanotubes (SWCNTs), prepared by high-pressure CO conversion (HiPCO) method. Average size range of individual HiPCO SWCNTs was 0.8-1.2 nm (diameter) and 100-1,000 nm (length). Functionalizations were performed by a one-pot diazotization-dediazotization method with methyl-2-aminothiophene-3-carboxylate, 2-aminothiophene-3-carbonitrile, 2-aminoimidazole sulfate, or 3-aminopyrazole in acetic acid using sodium nitrite at room temperature or by heating. According to Raman and Fourier transform infrared spectroscopy, all used heterocyclic diazonium salts formed a covalent bond with SWCNTs and yielded new kinds of five-membered heterocycle-functionalized SWCNTs. Methyl-2-thiophenyl-3-carboxylate-functionalized SWCNTs formed a highly soluble, stable dispersion in tetrahydrofuran (THF), 3-pyrazoyl-functionalized SWCNTs in ethanol, and 2-imidazoyl- or 2-thiophenyl-3-carbonitrile-functionalized SWCNTs in ethanol and THF. The thermogravimetric analysis as well as energy-filtered transmission electron microscopy imaging of the products confirmed the successful functionalization of SWCNTs.
Determination of resonance parameters in QCD by functional analysis methods
International Nuclear Information System (INIS)
Ciulli, S.; Geniet, F.; Papadopoulos, N.A.; Schilcher, K.
1988-01-01
A mathematically rigorous method based on functional analysis is used to determine resonance parameters of an amplitude from its given asymptotic expression in the space-like region. This method is checked on a model amplitude where both the asymptotic expression and the exact function are known. This method is then applied to the determination of the mass and the width of the ρ-meson from the corresponding space-like asymptotic QCD expression. (orig.)
Doubly stochastic radial basis function methods
Yang, Fenglian; Yan, Liang; Ling, Leevan
2018-06-01
We propose a doubly stochastic radial basis function (DSRBF) method for function recoveries. Instead of a constant, we treat the RBF shape parameters as stochastic variables whose distribution were determined by a stochastic leave-one-out cross validation (LOOCV) estimation. A careful operation count is provided in order to determine the ranges of all the parameters in our methods. The overhead cost for setting up the proposed DSRBF method is O (n2) for function recovery problems with n basis. Numerical experiments confirm that the proposed method not only outperforms constant shape parameter formulation (in terms of accuracy with comparable computational cost) but also the optimal LOOCV formulation (in terms of both accuracy and computational cost).
DEFF Research Database (Denmark)
Hjollund, Niels Henrik Ingvar
2017-01-01
BACKGROUND: Information to the patient about the long-term prognosis of symptom burden and functioning is an integrated part of clinical practice, but relies mostly on the clinician’s personal experience. Relevant prognostic models based on patient-reported outcome (PRO) data with repeated measur...
Introducing trimming and function ranking to Solid Works based on function analysis
Chechurin, Leonid S.; Wits, Wessel Willems; Bakker, Hans M.; Cascini, G.; Vaneker, Thomas H.J.
2011-01-01
TRIZ based Function Analysis models existing products based on functional interactions between product parts. Such a function model description is the ideal starting point for product innovation. Design engineers can apply (TRIZ) methods such as trimming and function ranking to this function model
Introducing Trimming and Function Ranking to SolidWorks based on Function Analysis
Chechurin, L.S.; Wits, Wessel Willems; Bakker, Hans M.; Vaneker, Thomas H.J.
2015-01-01
TRIZ based Function Analysis models existing products based on functional interactions between product parts. Such a function model description is the ideal starting point for product innovation. Design engineers can apply (TRIZ) methods such as trimming and function ranking to this function model
Methods in Logic Based Control
DEFF Research Database (Denmark)
Christensen, Georg Kronborg
1999-01-01
Desing and theory of Logic Based Control systems.Boolean Algebra, Karnaugh Map, Quine McClusky's algorithm. Sequential control design. Logic Based Control Method, Cascade Control Method. Implementation techniques: relay, pneumatic, TTL/CMOS,PAL and PLC- and Soft_PLC implementation. PLC...
BLUES function method in computational physics
Indekeu, Joseph O.; Müller-Nedebock, Kristian K.
2018-04-01
We introduce a computational method in physics that goes ‘beyond linear use of equation superposition’ (BLUES). A BLUES function is defined as a solution of a nonlinear differential equation (DE) with a delta source that is at the same time a Green’s function for a related linear DE. For an arbitrary source, the BLUES function can be used to construct an exact solution to the nonlinear DE with a different, but related source. Alternatively, the BLUES function can be used to construct an approximate piecewise analytical solution to the nonlinear DE with an arbitrary source. For this alternative use the related linear DE need not be known. The method is illustrated in a few examples using analytical calculations and numerical computations. Areas for further applications are suggested.
Activity based costing (ABC Method
Directory of Open Access Journals (Sweden)
Prof. Ph.D. Saveta Tudorache
2008-05-01
Full Text Available In the present paper the need and advantages are presented of using the Activity BasedCosting method, need arising from the need of solving the information pertinence issue. This issue has occurreddue to the limitation of classic methods in this field, limitation also reflected by the disadvantages ofsuch classic methods in establishing complete costs.
Image based rendering of iterated function systems
Wijk, van J.J.; Saupe, D.
2004-01-01
A fast method to generate fractal imagery is presented. Iterated function systems (IFS) are based on repeatedly copying transformed images. We show that this can be directly translated into standard graphics operations: Each image is generated by texture mapping and blending copies of the previous
A Numerical Method for Lane-Emden Equations Using Hybrid Functions and the Collocation Method
Directory of Open Access Journals (Sweden)
Changqing Yang
2012-01-01
Full Text Available A numerical method to solve Lane-Emden equations as singular initial value problems is presented in this work. This method is based on the replacement of unknown functions through a truncated series of hybrid of block-pulse functions and Chebyshev polynomials. The collocation method transforms the differential equation into a system of algebraic equations. It also has application in a wide area of differential equations. Corresponding numerical examples are presented to demonstrate the accuracy of the proposed method.
convergent methods for calculating thermodynamic Green functions
Bowen, S. P.; Williams, C. D.; Mancini, J. D.
1984-01-01
A convergent method of approximating thermodynamic Green functions is outlined briefly. The method constructs a sequence of approximants which converges independently of the strength of the Hamiltonian's coupling constants. Two new concepts associated with the approximants are introduced: the resolving power of the approximation, and conditional creation (annihilation) operators. These ideas are illustrated on an exactly soluble model and a numerical example. A convergent expression for the s...
On some methods of NPP functional diagnostics
International Nuclear Information System (INIS)
Babkin, N.A.
1988-01-01
Methods for NPP functional diagnosis, in which space and time dependences for controlled variable anomalous deviations change are used as characteristic features, are suggested. The methods are oriented for operative recognition of suddenly appearing defects and envelop quite a wide range of possible anomalous effects in an onject under diagnostics. Analysis of transients dynamic properties caused by a failure is realized according to the rules, which do not depend on the character of anomalous situation development
International Nuclear Information System (INIS)
Kovalinska, T.V.; Ostapenko, I.A.; Sakhno, V.I.; Zelinskyy, A.G.
2012-01-01
The ways of the improvement of technical base of the INR of NAS of Ukraine for functional researches, and new technologies of control over the state of the equipment on NPPs are discussed. The scientific work is completed in the department of radiation technologies within the national program of the enhancement of the reliability of nuclear energetic and the prolongation of exploitation terms of nuclear power installations
Directory of Open Access Journals (Sweden)
Shuang Wang
2012-01-01
Full Text Available As an efficient tool, radial basis function (RBF has been widely used for the multivariate approximation, interpolating continuous, and the solution of the particle differential equations. However, ill-conditioned interpolation matrix may be encountered when the interpolation points are very dense or irregularly arranged. To avert this problem, RBFs with variable shape parameters are introduced, and several new variation strategies are proposed. Comparison with the RBF with constant shape parameters are made, and the results show that the condition number of the interpolation matrix grows much slower with our strategies. As an application, an improved collocation meshless method is formulated by employing the new RBF. In addition, the Hermite-type interpolation is implemented to handle the Neumann boundary conditions and an additional sine/cosine basis is introduced for the Helmlholtz equation. Then, two interior acoustic problems are solved with the presented method; the results demonstrate the robustness and effectiveness of the method.
Exponential function method for solving nonlinear ordinary ...
Indian Academy of Sciences (India)
[14] introduced a new system of rational. 79 ..... Also, for k-power of function f (η), by induction, we have ..... reliability and efficiency of the method. .... electric field and the polarization effects are negligible and B(x) is assumed by Chaim [8] as.
Minimizing convex functions by continuous descent methods
Directory of Open Access Journals (Sweden)
Sergiu Aizicovici
2010-01-01
Full Text Available We study continuous descent methods for minimizing convex functions, defined on general Banach spaces, which are associated with an appropriate complete metric space of vector fields. We show that there exists an everywhere dense open set in this space of vector fields such that each of its elements generates strongly convergent trajectories.
Methods for selective functionalization and separation of carbon nanotubes
Strano, Michael S. (Inventor); Usrey, Monica (Inventor); Barone, Paul (Inventor); Dyke, Christopher A. (Inventor); Tour, James M. (Inventor); Kittrell, W. Carter (Inventor); Hauge, Robert H (Inventor); Smalley, Richard E. (Inventor); Marek, legal representative, Irene Marie (Inventor)
2011-01-01
The present invention is directed toward methods of selectively functionalizing carbon nanotubes of a specific type or range of types, based on their electronic properties, using diazonium chemistry. The present invention is also directed toward methods of separating carbon nanotubes into populations of specific types or range(s) of types via selective functionalization and electrophoresis, and also to the novel compositions generated by such separations.
Quality functions for requirements engineering in system development methods.
Johansson, M; Timpka, T
1996-01-01
Based on a grounded theory framework, this paper analyses the quality characteristics for methods to be used for requirements engineering in the development of medical decision support systems (MDSS). The results from a Quality Function Deployment (QFD) used to rank functions connected to user value and a focus group study were presented to a validation focus group. The focus group studies take advantage of a group process to collect data for further analyses. The results describe factors considered by the participants as important in the development of methods for requirements engineering in health care. Based on the findings, the content which, according to the user a MDSS method should support is established.
Georgopoulos, A. P.; Tan, H.-R. M.; Lewis, S. M.; Leuthold, A. C.; Winskowski, A. M.; Lynch, J. K.; Engdahl, B.
2010-02-01
Traumatic experiences can produce post-traumatic stress disorder (PTSD) which is a debilitating condition and for which no biomarker currently exists (Institute of Medicine (US) 2006 Posttraumatic Stress Disorder: Diagnosis and Assessment (Washington, DC: National Academies)). Here we show that the synchronous neural interactions (SNI) test which assesses the functional interactions among neural populations derived from magnetoencephalographic (MEG) recordings (Georgopoulos A P et al 2007 J. Neural Eng. 4 349-55) can successfully differentiate PTSD patients from healthy control subjects. Externally cross-validated, bootstrap-based analyses yielded >90% overall accuracy of classification. In addition, all but one of 18 patients who were not receiving medications for their disease were correctly classified. Altogether, these findings document robust differences in brain function between the PTSD and control groups that can be used for differential diagnosis and which possess the potential for assessing and monitoring disease progression and effects of therapy.
Density-functional expansion methods: Grand challenges.
Giese, Timothy J; York, Darrin M
2012-03-01
We discuss the source of errors in semiempirical density functional expansion (VE) methods. In particular, we show that VE methods are capable of well-reproducing their standard Kohn-Sham density functional method counterparts, but suffer from large errors upon using one or more of these approximations: the limited size of the atomic orbital basis, the Slater monopole auxiliary basis description of the response density, and the one- and two-body treatment of the core-Hamiltonian matrix elements. In the process of discussing these approximations and highlighting their symptoms, we introduce a new model that supplements the second-order density-functional tight-binding model with a self-consistent charge-dependent chemical potential equalization correction; we review our recently reported method for generalizing the auxiliary basis description of the atomic orbital response density; and we decompose the first-order potential into a summation of additive atomic components and many-body corrections, and from this examination, we provide new insights and preliminary results that motivate and inspire new approximate treatments of the core-Hamiltonian.
Developing rapid methods for analyzing upland riparian functions and values.
Hruby, Thomas
2009-06-01
Regulators protecting riparian areas need to understand the integrity, health, beneficial uses, functions, and values of this resource. Up to now most methods providing information about riparian areas are based on analyzing condition or integrity. These methods, however, provide little information about functions and values. Different methods are needed that specifically address this aspect of riparian areas. In addition to information on functions and values, regulators have very specific needs that include: an analysis at the site scale, low cost, usability, and inclusion of policy interpretations. To meet these needs a rapid method has been developed that uses a multi-criteria decision matrix to categorize riparian areas in Washington State, USA. Indicators are used to identify the potential of the site to provide a function, the potential of the landscape to support the function, and the value the function provides to society. To meet legal needs fixed boundaries for assessment units are established based on geomorphology, the distance from "Ordinary High Water Mark" and different categories of land uses. Assessment units are first classified based on ecoregions, geomorphic characteristics, and land uses. This simplifies the data that need to be collected at a site, but it requires developing and calibrating a separate model for each "class." The approach to developing methods is adaptable to other locations as its basic structure is not dependent on local conditions.
Directory of Open Access Journals (Sweden)
Jie LIU
2015-07-01
Full Text Available Objective The objective of this study was to evaluate the basic changes in brain activity of pilots after hypoxic exposure with the use of resting-state functional magnetic resonance imaging (rs-fMRI and regional homogeneity (ReHo method. Methods Thirty healthy male pilots were successively subjected to normal and hypoxic exposure (with an oxygen concentration of 14.5%. Both the fALFF and ReHo methods were adopted to analyze the resting-state functional MRI data before and after hypoxic exposure of the subjects, the areas of the brain with fALFF and ReHo changes after hypoxic exposure were observed. Results After hypoxic exposure, the pulse was 64.0±10.6 beats/min, and the oxygen saturation was 92.4%±3.9% in these 30 pilots, and it was lower than those before exposure (71.4±10.9 beats/min, 96.3%±1.3%, P<0.05. Compared with the condition before hypoxic exposure, the fALFF value was decreased in superior temporal gyri on both sides and the right superior frontal gyrus, and increase in the left precuneus, while the value of ReHo was decreased in the right superior frontal gyrus (P<0.05. No brain area with an increase in ReHo value was found. Conclusions Hypoxic exposure could significantly affect the brain functions of pilots, which may contribute to change in their cognitive ability. DOI: 10.11855/j.issn.0577-7402.2015.06.18
An advanced probabilistic structural analysis method for implicit performance functions
Wu, Y.-T.; Millwater, H. R.; Cruse, T. A.
1989-01-01
In probabilistic structural analysis, the performance or response functions usually are implicitly defined and must be solved by numerical analysis methods such as finite element methods. In such cases, the most commonly used probabilistic analysis tool is the mean-based, second-moment method which provides only the first two statistical moments. This paper presents a generalized advanced mean value (AMV) method which is capable of establishing the distributions to provide additional information for reliability design. The method requires slightly more computations than the second-moment method but is highly efficient relative to the other alternative methods. In particular, the examples show that the AMV method can be used to solve problems involving non-monotonic functions that result in truncated distributions.
Hazard identification based on plant functional modelling
International Nuclear Information System (INIS)
Rasmussen, B.; Whetton, C.
1993-10-01
A major objective of the present work is to provide means for representing a process plant as a socio-technical system, so as to allow hazard identification at a high level. The method includes technical, human and organisational aspects and is intended to be used for plant level hazard identification so as to identify critical areas and the need for further analysis using existing methods. The first part of the method is the preparation of a plant functional model where a set of plant functions link together hardware, software, operations, work organisation and other safety related aspects of the plant. The basic principle of the functional modelling is that any aspect of the plant can be represented by an object (in the sense that this term is used in computer science) based upon an Intent (or goal); associated with each Intent are Methods, by which the Intent is realized, and Constraints, which limit the Intent. The Methods and Constraints can themselves be treated as objects and decomposed into lower-level Intents (hence the procedure is known as functional decomposition) so giving rise to a hierarchical, object-oriented structure. The plant level hazard identification is carried out on the plant functional model using the Concept Hazard Analysis method. In this, the user will be supported by checklists and keywords and the analysis is structured by pre-defined worksheets. The preparation of the plant functional model and the performance of the hazard identification can be carried out manually or with computer support. (au) (4 tabs., 10 ills., 7 refs.)
Reliability analysis of software based safety functions
International Nuclear Information System (INIS)
Pulkkinen, U.
1993-05-01
The methods applicable in the reliability analysis of software based safety functions are described in the report. Although the safety functions also include other components, the main emphasis in the report is on the reliability analysis of software. The check list type qualitative reliability analysis methods, such as failure mode and effects analysis (FMEA), are described, as well as the software fault tree analysis. The safety analysis based on the Petri nets is discussed. The most essential concepts and models of quantitative software reliability analysis are described. The most common software metrics and their combined use with software reliability models are discussed. The application of software reliability models in PSA is evaluated; it is observed that the recent software reliability models do not produce the estimates needed in PSA directly. As a result from the study some recommendations and conclusions are drawn. The need of formal methods in the analysis and development of software based systems, the applicability of qualitative reliability engineering methods in connection to PSA and the need to make more precise the requirements for software based systems and their analyses in the regulatory guides should be mentioned. (orig.). (46 refs., 13 figs., 1 tab.)
Creep analysis by the path function method
International Nuclear Information System (INIS)
Akin, J.E.; Pardue, R.M.
1977-01-01
The finite element method has become a common analysis procedure for the creep analysis of structures. The most recent programs are designed to handle a general class of material properties and are able to calculate elastic, plastic, and creep components of strain under general loading histories. The constant stress approach is too crude a model to accurately represent the actual behaviour of the stress for large time steps. The true path of a point in the effective stress-effective strain (sigmasup(e)-epsilonsup(c)) plane is often one in which the slope is rapidly changing. Thus the stress level quickly moves away from the initial stress level and then gradually approaches the final one. The result is that the assumed constant stress level quickly becomes inaccurate. What is required is a better method of approximation of the true path in the sigmasup(e)-epsilonsup(c) space. The method described here is called the path function approach because it employs an assumed function to estimate the motion of points in the sigmasup(e)-epsilonsup(c) space. (Auth.)
Green close-quote s function method with energy-independent vertex functions
International Nuclear Information System (INIS)
Tsay Tzeng, S.Y.; Kuo, T.T.; Tzeng, Y.; Geyer, H.B.; Navratil, P.
1996-01-01
In conventional Green close-quote s function methods the vertex function Γ is generally energy dependent. However, a model-space Green close-quote s function method where the vertex function is manifestly energy independent can be formulated using energy-independent effective interaction theories based on folded diagrams and/or similarity transformations. This is discussed in general and then illustrated for a 1p1h model-space Green close-quote s function applied to a solvable Lipkin many-fermion model. The poles of the conventional Green close-quote s function are obtained by solving a self-consistent Dyson equation and model space calculations may lead to unphysical poles. For the energy-independent model-space Green close-quote s function only the physical poles of the model problem are reproduced and are in satisfactory agreement with the exact excitation energies. copyright 1996 The American Physical Society
Entropy-based benchmarking methods
Temurshoev, Umed
2012-01-01
We argue that benchmarking sign-volatile series should be based on the principle of movement and sign preservation, which states that a bench-marked series should reproduce the movement and signs in the original series. We show that the widely used variants of Denton (1971) method and the growth
Modulation Based on Probability Density Functions
Williams, Glenn L.
2009-01-01
A proposed method of modulating a sinusoidal carrier signal to convey digital information involves the use of histograms representing probability density functions (PDFs) that characterize samples of the signal waveform. The method is based partly on the observation that when a waveform is sampled (whether by analog or digital means) over a time interval at least as long as one half cycle of the waveform, the samples can be sorted by frequency of occurrence, thereby constructing a histogram representing a PDF of the waveform during that time interval.
International Nuclear Information System (INIS)
Koyama, Shin-ichi; Ozawa, Masaki; Suzuki, Tatsuya; Fujii, Yasuhiko
2006-01-01
A series of separation experiment was performed in order to study a multi-functional spent fuel reprocessing process based on ion-exchange technique. The tertiary pyridine-type anion-exchange resin was used in this experiment and the mixed oxide fuel highly irradiated in the experimental fast reactor ''JOYO'' was used as a reference spent fuel. As the result, 106 Ru + 125 Sb, 137 Cs + 155 Eu + 144 Ce, plutonium, americium and curium could be separated from the irradiated fuel by only three steps of ion-exchange. The decontamination factor of 137 Cs and trivalent lanthanides ( 155 Eu, 144 Ce) in the final americium product exceeded 3.9 x 10 4 and 1.0 x 10 5 , respectively. The decontamination factor for the mutual separation of 243 Cm and 241 Am was larger than 2.2 x 10 3 for the americium product and, moreover, the content of 137 Cs, trivalent lanthanides and 243 Cm included in 241 Am product did not exceed 2 ppm. These results prove that the proposed simplified separation process has a reality as a candidate for future reprocessing process based on the partitioning and transmutation concept. (author)
Arterial endothelial function measurement method and apparatus
Energy Technology Data Exchange (ETDEWEB)
Maltz, Jonathan S; Budinger, Thomas F
2014-03-04
A "relaxoscope" (100) detects the degree of arterial endothelial function. Impairment of arterial endothelial function is an early event in atherosclerosis and correlates with the major risk factors for cardiovascular disease. An artery (115), such as the brachial artery (BA) is measured for diameter before and after several minutes of either vasoconstriction or vasorelaxation. The change in arterial diameter is a measure of flow-mediated vasomodification (FMVM). The relaxoscope induces an artificial pulse (128) at a superficial radial artery (115) via a linear actuator (120). An ultrasonic Doppler stethoscope (130) detects this pulse 10-20 cm proximal to the point of pulse induction (125). The delay between pulse application and detection provides the pulse transit time (PTT). By measuring PTT before (160) and after arterial diameter change (170), FMVM may be measured based on the changes in PTT caused by changes in vessel caliber, smooth muscle tone and wall thickness.
History based batch method preserving tally means
International Nuclear Information System (INIS)
Shim, Hyung Jin; Choi, Sung Hoon
2012-01-01
In the Monte Carlo (MC) eigenvalue calculations, the sample variance of a tally mean calculated from its cycle-wise estimates is biased because of the inter-cycle correlations of the fission source distribution (FSD). Recently, we proposed a new real variance estimation method named the history-based batch method in which a MC run is treated as multiple runs with small number of histories per cycle to generate independent tally estimates. In this paper, the history-based batch method based on the weight correction is presented to preserve the tally mean from the original MC run. The effectiveness of the new method is examined for the weakly coupled fissile array problem as a function of the dominance ratio and the batch size, in comparison with other schemes available
A novel method to solve functional differential equations
International Nuclear Information System (INIS)
Tapia, V.
1990-01-01
A method to solve differential equations containing the variational operator as the derivation operation is presented. They are called variational differential equations (VDE). The solution to a VDE should be a function containing the derivatives, with respect to the base space coordinates, of the fields up to a generic order s: a s-th-order function. The variational operator doubles the order of the function on which it acts. Therefore, in order to make compatible the orders of the different terms appearing in a VDE, the solution should be a function containing the derivatives of the fields at all orders. But this takes us again back to the functional methods. In order to avoid this, one must restrict the considerations, in the case of second-order VDEs, to the space of s-th-order functions on which the variational operator acts transitively. These functions have been characterized for a one-dimensional base space for the first- and second-order cases. These functions turn out to be polynomial in the highest-order derivatives of the fields with functions of the lower-order derivatives as coefficients. Then VDEs reduce to a system of coupled partial differential equations for the coefficients above mentioned. The importance of the method lies on the fact that the solutions to VDEs are in a one-to-one correspondence with the solutions of functional differential equations. The previous method finds direct applications in quantum field theory, where the Schroedinger equation plays a central role. Since the Schroedinger equation is reduced to a system of coupled partial differential equations, this provides a nonperturbative scheme for quantum field theory. As an example, the massless scalar field is considered
Functional discriminant method and neuronal net
International Nuclear Information System (INIS)
Minh-Quan Tran.
1993-02-01
The ZEUS detector at the ep storage ring HERA at DESY is equipped with a 3 level trigger system. This enormous effort is necessary to fight against the high proton beamgas background that was estimated to be at the level of 100 kHz. In this thesis two methods were investigated to calculate a trigger decision from a set of various trigger parameters. The Functional Discriminant Analysis evalutes a decision parameter that is optimized by means of a linear algebra technic. A method is shown how to determine the most important trigger parameters. A 'feed forward' neuralnetwork was analyzed in order to allow none lineare cuts in the n dimensinal configuration space spanned by the trigger parameters. The error back propagation method was used to teach the neural network. It is shown that both decision methods are able to abstract the important characteristics of event samples. As soon as they are tought they will seperate events from these classes even though they were not part of the training sample. (orig.) [de
Trial-Based Functional Analysis and Functional Communication Training in an Early Childhood Setting
Lambert, Joseph M.; Bloom, Sarah E.; Irvin, Jennifer
2012-01-01
Problem behavior is common in early childhood special education classrooms. Functional communication training (FCT; Carr & Durand, 1985) may reduce problem behavior but requires identification of its function. The trial-based functional analysis (FA) is a method that can be used to identify problem behavior function in schools. We conducted…
Generalization of the influence function method in mining subsidence
International Nuclear Information System (INIS)
Bello Garcia, A.; Mendendez Diaz, A.; Ordieres Mere, J.B.; Gonzalez Nicieza, C.
1996-01-01
A generic approach to subsidence prediction based on the influence function method is presented. The changes proposed to the classical approach are the result of a previous analysis stage where a generalization to the 3D problem was made. In addition other hypothesis in order to relax the structural principles of the classical model are suggested. The quantitative results of this process and a brief discussion of its method of employment is presented. 13 refs., 8 figs., 5 tabs
Exp-function method for solving fractional partial differential equations.
Zheng, Bin
2013-01-01
We extend the Exp-function method to fractional partial differential equations in the sense of modified Riemann-Liouville derivative based on nonlinear fractional complex transformation. For illustrating the validity of this method, we apply it to the space-time fractional Fokas equation and the nonlinear fractional Sharma-Tasso-Olver (STO) equation. As a result, some new exact solutions for them are successfully established.
Systems and methods for interpolation-based dynamic programming
Rockwood, Alyn
2013-01-03
Embodiments of systems and methods for interpolation-based dynamic programming. In one embodiment, the method includes receiving an object function and a set of constraints associated with the objective function. The method may also include identifying a solution on the objective function corresponding to intersections of the constraints. Additionally, the method may include generating an interpolated surface that is in constant contact with the solution. The method may also include generating a vector field in response to the interpolated surface.
Systems and methods for interpolation-based dynamic programming
Rockwood, Alyn
2013-01-01
Embodiments of systems and methods for interpolation-based dynamic programming. In one embodiment, the method includes receiving an object function and a set of constraints associated with the objective function. The method may also include identifying a solution on the objective function corresponding to intersections of the constraints. Additionally, the method may include generating an interpolated surface that is in constant contact with the solution. The method may also include generating a vector field in response to the interpolated surface.
New approach to equipment quality evaluation method with distinct functions
Directory of Open Access Journals (Sweden)
Milisavljević Vladimir M.
2016-01-01
Full Text Available The paper presents new approach for improving method for quality evaluation and selection of equipment (devices and machinery by applying distinct functions. Quality evaluation and selection of devices and machinery is a multi-criteria problem which involves the consideration of numerous parameters of various origins. Original selection method with distinct functions is based on technical parameters with arbitrary evaluation of each parameter importance (weighting. Improvement of this method, presented in this paper, addresses the issue of weighting of parameters by using Delphi Method. Finally, two case studies are provided, which included quality evaluation of standard boilers for heating and evaluation of load-haul-dump (LHD machines, to demonstrate applicability of this approach. Analytical Hierarchical Process (AHP is used as a control method.
Approximation of the exponential integral (well function) using sampling methods
Baalousha, Husam Musa
2015-04-01
Exponential integral (also known as well function) is often used in hydrogeology to solve Theis and Hantush equations. Many methods have been developed to approximate the exponential integral. Most of these methods are based on numerical approximations and are valid for a certain range of the argument value. This paper presents a new approach to approximate the exponential integral. The new approach is based on sampling methods. Three different sampling methods; Latin Hypercube Sampling (LHS), Orthogonal Array (OA), and Orthogonal Array-based Latin Hypercube (OA-LH) have been used to approximate the function. Different argument values, covering a wide range, have been used. The results of sampling methods were compared with results obtained by Mathematica software, which was used as a benchmark. All three sampling methods converge to the result obtained by Mathematica, at different rates. It was found that the orthogonal array (OA) method has the fastest convergence rate compared with LHS and OA-LH. The root mean square error RMSE of OA was in the order of 1E-08. This method can be used with any argument value, and can be used to solve other integrals in hydrogeology such as the leaky aquifer integral.
Model-Based Method for Sensor Validation
Vatan, Farrokh
2012-01-01
Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA s key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered to be part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, from which the method based on Bayesian networks is most popular. Therefore, these methods can only predict the most probable faulty sensors, which are subject to the initial probabilities defined for the failures. The method developed in this work is based on a model-based approach and provides the faulty sensors (if any), which can be logically inferred from the model of the system and the sensor readings (observations). The method is also more suitable for the systems when it is hard, or even impossible, to find the probability functions of the system. The method starts by a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. The method builds on the concepts of analytical redundant relations (ARRs).
[Functional methods of the esophagus examination].
Valitova, E R; Bordin, D S; Ianova, O B; Vasnev, O S; Masharova, A A
2010-01-01
Manometry of the esophagus is the "gold standard" in diagnosing diseases of the esophagus associated with motor disorders. The combination of manometry with impedance gives an indication of violation of bolus transport along the esophagus. High resolution manometry is new method that provides the most accurate information about the functional anatomy of the esophagus and its sphincters, as well as accurately characterizes the esophageal-gastric junction. We can increase the diagnostic value of daily pH-monitoring by analyzing communication with reflux symptoms. The combination of pH and impedance can identify different types of reflux (acid, sour, gas, liquid and mixed) in patients with symptoms of GERD and related Ahil, after gastric resection in children and infants, to evaluate the effectiveness of antireflux therapy.
Systems and methods for producing low work function electrodes
Kippelen, Bernard; Fuentes-Hernandez, Canek; Zhou, Yinhua; Kahn, Antoine; Meyer, Jens; Shim, Jae Won; Marder, Seth R.
2015-07-07
According to an exemplary embodiment of the invention, systems and methods are provided for producing low work function electrodes. According to an exemplary embodiment, a method is provided for reducing a work function of an electrode. The method includes applying, to at least a portion of the electrode, a solution comprising a Lewis basic oligomer or polymer; and based at least in part on applying the solution, forming an ultra-thin layer on a surface of the electrode, wherein the ultra-thin layer reduces the work function associated with the electrode by greater than 0.5 eV. According to another exemplary embodiment of the invention, a device is provided. The device includes a semiconductor; at least one electrode disposed adjacent to the semiconductor and configured to transport electrons in or out of the semiconductor.
Ghavami, Raouf; Sadeghi, Faridoon; Rasouli, Zolikha; Djannati, Farhad
2012-12-01
Experimental values for the 13C NMR chemical shifts (ppm, TMS = 0) at 300 K ranging from 96.28 ppm (C4' of indole derivative 17) to 159.93 ppm (C4' of indole derivative 23) relative to deuteride chloroform (CDCl3, 77.0 ppm) or dimethylsulfoxide (DMSO, 39.50 ppm) as internal reference in CDCl3 or DMSO-d6 solutions have been collected from literature for thirty 2-functionalized 5-(methylsulfonyl)-1-phenyl-1H-indole derivatives containing different substituted groups. An effective quantitative structure-property relationship (QSPR) models were built using hybrid method combining genetic algorithm (GA) based on stepwise selection multiple linear regression (SWS-MLR) as feature-selection tools and correlation models between each carbon atom of indole derivative and calculated descriptors. Each compound was depicted by molecular structural descriptors that encode constitutional, topological, geometrical, electrostatic, and quantum chemical features. The accuracy of all developed models were confirmed using different types of internal and external procedures and various statistical tests. Furthermore, the domain of applicability for each model which indicates the area of reliable predictions was defined.
Activity based costing method
Directory of Open Access Journals (Sweden)
Èuchranová Katarína
2001-06-01
Full Text Available Activity based costing is a method of identifying and tracking the operating costs directly associated with processing items. It is the practice of focusing on some unit of output, such as a purchase order or an assembled automobile and attempting to determine its total as precisely as poccible based on the fixed and variable costs of the inputs.You use ABC to identify, quantify and analyze the various cost drivers (such as labor, materials, administrative overhead, rework. and to determine which ones are candidates for reduction.A processes any activity that accepts inputs, adds value to these inputs for customers and produces outputs for these customers. The customer may be either internal or external to the organization. Every activity within an organization comprimes one or more processes. Inputs, controls and resources are all supplied to the process.A process owner is the person responsible for performing and or controlling the activity.The direction of cost through their contact to partial activity and processes is a new modern theme today. Beginning of this method is connected with very important changes in the firm processes.ABC method is a instrument , that bring a competitive advantages for the firm.
Atlas-based functional radiosurgery: Early results
Energy Technology Data Exchange (ETDEWEB)
Stancanello, J.; Romanelli, P.; Pantelis, E.; Sebastiano, F.; Modugno, N. [Politecnico di Milano, Bioengineering Department and NEARlab, Milano, 20133 (Italy) and Siemens AG, Research and Clinical Collaborations, Erlangen, 91052 (Germany); Functional Neurosurgery Deptartment, Neuromed IRCCS, Pozzilli, 86077 (Italy); CyberKnife Center, Iatropolis, Athens, 15231 (Greece); Functional Neurosurgery Deptartment, Neuromed IRCCS, Pozzilli, 86077 (Italy)
2009-02-15
Functional disorders of the brain, such as dystonia and neuropathic pain, may respond poorly to medical therapy. Deep brain stimulation (DBS) of the globus pallidus pars interna (GPi) and the centromedian nucleus of the thalamus (CMN) may alleviate dystonia and neuropathic pain, respectively. A noninvasive alternative to DBS is radiosurgical ablation [internal pallidotomy (IP) and medial thalamotomy (MT)]. The main technical limitation of radiosurgery is that targets are selected only on the basis of MRI anatomy, without electrophysiological confirmation. This means that, to be feasible, image-based targeting must be highly accurate and reproducible. Here, we report on the feasibility of an atlas-based approach to targeting for functional radiosurgery. In this method, masks of the GPi, CMN, and medio-dorsal nucleus were nonrigidly registered to patients' T1-weighted MRI (T1w-MRI) and superimposed on patients' T2-weighted MRI (T2w-MRI). Radiosurgical targets were identified on the T2w-MRI registered to the planning CT by an expert functional neurosurgeon. To assess its feasibility, two patients were treated with the CyberKnife using this method of targeting; a patient with dystonia received an IP (120 Gy prescribed to the 65% isodose) and a patient with neuropathic pain received a MT (120 Gy to the 77% isodose). Six months after treatment, T2w-MRIs and contrast-enhanced T1w-MRIs showed edematous regions around the lesions; target placements were reevaluated by DW-MRIs. At 12 months post-treatment steroids for radiation-induced edema and medications for dystonia and neuropathic pain were suppressed. Both patients experienced significant relief from pain and dystonia-related problems. Fifteen months after treatment edema had disappeared. Thus, this work shows promising feasibility of atlas-based functional radiosurgery to improve patient condition. Further investigations are indicated for optimizing treatment dose.
Functional renormalization group methods in quantum chromodynamics
International Nuclear Information System (INIS)
Braun, J.
2006-01-01
We apply functional Renormalization Group methods to Quantum Chromodynamics (QCD). First we calculate the mass shift for the pion in a finite volume in the framework of the quark-meson model. In particular, we investigate the importance of quark effects. As in lattice gauge theory, we find that the choice of quark boundary conditions has a noticeable effect on the pion mass shift in small volumes. A comparison of our results to chiral perturbation theory and lattice QCD suggests that lattice QCD has not yet reached volume sizes for which chiral perturbation theory can be applied to extrapolate lattice results for low-energy observables. Phase transitions in QCD at finite temperature and density are currently very actively researched. We study the chiral phase transition at finite temperature with two approaches. First, we compute the phase transition temperature in infinite and in finite volume with the quark-meson model. Though qualitatively correct, our results suggest that the model does not describe the dynamics of QCD near the finite-temperature phase boundary accurately. Second, we study the approach to chiral symmetry breaking in terms of quarks and gluons. We compute the running QCD coupling for all temperatures and scales. We use this result to determine quantitatively the phase boundary in the plane of temperature and number of quark flavors and find good agreement with lattice results. (orig.)
Functional renormalization group methods in quantum chromodynamics
Energy Technology Data Exchange (ETDEWEB)
Braun, J.
2006-12-18
We apply functional Renormalization Group methods to Quantum Chromodynamics (QCD). First we calculate the mass shift for the pion in a finite volume in the framework of the quark-meson model. In particular, we investigate the importance of quark effects. As in lattice gauge theory, we find that the choice of quark boundary conditions has a noticeable effect on the pion mass shift in small volumes. A comparison of our results to chiral perturbation theory and lattice QCD suggests that lattice QCD has not yet reached volume sizes for which chiral perturbation theory can be applied to extrapolate lattice results for low-energy observables. Phase transitions in QCD at finite temperature and density are currently very actively researched. We study the chiral phase transition at finite temperature with two approaches. First, we compute the phase transition temperature in infinite and in finite volume with the quark-meson model. Though qualitatively correct, our results suggest that the model does not describe the dynamics of QCD near the finite-temperature phase boundary accurately. Second, we study the approach to chiral symmetry breaking in terms of quarks and gluons. We compute the running QCD coupling for all temperatures and scales. We use this result to determine quantitatively the phase boundary in the plane of temperature and number of quark flavors and find good agreement with lattice results. (orig.)
Method of angular potential functions. Hypernuclei
Energy Technology Data Exchange (ETDEWEB)
Gorbatov, A M [Kalininskij Gosudarstvennyj Univ. USSR
1979-01-01
The method of microscopic calculation of hypernuclei with realistic ..lambda..N interaction is developed. It is shown that the ..lambda..+core model and the model of collective motion of the hypernuclear baryons cannot yield correct values of the B/sub ..lambda../-particle separation energy. The first starting point of the method is introduction of the rho collective variable of nucleons and the distance of the ..lambda.. particle from the center-of-inertia of the rho/sub ..lambda../ nucleons (or a universal collective variable which is the same for all particles). The second starting point is the building of the physical bases for the NN and ..lambda..N interaction in the space of multidimensional angles. The convergence of the ..lambda..N potential harmonic expansion is studied for various amplitudes and radii of the ..lambda..N potential with the /sub ..lambda..//sup 5/ He hypernucleous as an example. The ..lambda..-particle induced excitation probability of collective and single-particle degrees of freedom of the core is estimated. The single-particle excitations of zero orbital momentum nucleons are shown to dominate.
Imaging of brain function based on the analysis of functional ...
African Journals Online (AJOL)
Objective: This Study observed the relevant brain areas activated by acupuncture at the Taichong acupoint (LR3) and analyzed the functional connectivity among brain areas using resting state functional magnetic resonance imaging (fMRI) to explore the acupoint specificity of the Taichong acupoint. Methods: A total of 45 ...
Quantal density functional theory II. Approximation methods and applications
International Nuclear Information System (INIS)
Sahni, Viraht
2010-01-01
This book is on approximation methods and applications of Quantal Density Functional Theory (QDFT), a new local effective-potential-energy theory of electronic structure. What distinguishes the theory from traditional density functional theory is that the electron correlations due to the Pauli exclusion principle, Coulomb repulsion, and the correlation contribution to the kinetic energy -- the Correlation-Kinetic effects -- are separately and explicitly defined. As such it is possible to study each property of interest as a function of the different electron correlations. Approximations methods based on the incorporation of different electron correlations, as well as a many-body perturbation theory within the context of QDFT, are developed. The applications are to the few-electron inhomogeneous electron gas systems in atoms and molecules, as well as to the many-electron inhomogeneity at metallic surfaces. (orig.)
Methods of filtering the graph images of the functions
Directory of Open Access Journals (Sweden)
Олександр Григорович Бурса
2017-06-01
Full Text Available The theoretical aspects of cleaning raster images of scanned graphs of functions from digital, chromatic and luminance distortions by using computer graphics techniques have been considered. The basic types of distortions characteristic of graph images of functions have been stated. To suppress the distortion several methods, providing for high-quality of the resulting images and saving their topological features, were suggested. The paper describes the techniques developed and improved by the authors: the method of cleaning the image of distortions by means of iterative contrasting, based on the step-by-step increase in image contrast in the graph by 1%; the method of small entities distortion restoring, based on the thinning of the known matrix of contrast increase filter (the allowable dimensions of the nucleus dilution radius convolution matrix, which provide for the retention of the graph lines have been established; integration technique of the noise reduction method by means of contrasting and distortion restoring method of small entities with known σ-filter. Each method in the complex has been theoretically substantiated. The developed methods involve treatment of graph images as the entire image (global processing and its fragments (local processing. The metrics assessing the quality of the resulting image with the global and local processing have been chosen, the substantiation of the choice as well as the formulas have been given. The proposed complex methods of cleaning the graphs images of functions from grayscale image distortions is adaptive to the form of an image carrier, the distortion level in the image and its distribution. The presented results of testing the developed complex of methods for a representative sample of images confirm its effectiveness
Bootstrapping conformal field theories with the extremal functional method.
El-Showk, Sheer; Paulos, Miguel F
2013-12-13
The existence of a positive linear functional acting on the space of (differences between) conformal blocks has been shown to rule out regions in the parameter space of conformal field theories (CFTs). We argue that at the boundary of the allowed region the extremal functional contains, in principle, enough information to determine the dimensions and operator product expansion (OPE) coefficients of an infinite number of operators appearing in the correlator under analysis. Based on this idea we develop the extremal functional method (EFM), a numerical procedure for deriving the spectrum and OPE coefficients of CFTs lying on the boundary (of solution space). We test the EFM by using it to rederive the low lying spectrum and OPE coefficients of the two-dimensional Ising model based solely on the dimension of a single scalar quasiprimary--no Virasoro algebra required. Our work serves as a benchmark for applications to more interesting, less known CFTs in the near future.
Approximation of the Doppler broadening function by Frobenius method
International Nuclear Information System (INIS)
Palma, Daniel A.P.; Martinez, Aquilino S.; Silva, Fernando C.
2005-01-01
An analytical approximation of the Doppler broadening function ψ(x,ξ) is proposed. This approximation is based on the solution of the differential equation for ψ(x,ξ) using the methods of Frobenius and the parameters variation. The analytical form derived for ψ(x,ξ) in terms of elementary functions is very simple and precise. It can be useful for applications related to the treatment of nuclear resonances mainly for the calculations of multigroup parameters and self-protection factors of the resonances, being the last used to correct microscopic cross-sections measurements by the activation technique. (author)
The derivation of the Doppler broadening function using Frobenius method
International Nuclear Information System (INIS)
Palma, Daniel A.P.; Martinez, Aquilino S.; Silva, Fernando C.
2006-01-01
An analytical approximation of the Doppler broadening function ψ(ξ,x) is proposed. This approximation is based on the solution of the differential equation for ψ(ξ,x) using the methods of Frobenius and parameters variation. The analytical form derived for ψ(ξ,x) in terms of elementary functions is very simple and precise. It can be useful for applications related to the treatment of nuclear resonances, mainly for calculations of multigroup parameters and resonances self-protection factors, the latter being used to correct microscopic cross section measurements by the activation technique. (author)
Czech Academy of Sciences Publication Activity Database
Duarte-Mermoud, M.A.; Ordonez-Hurtado, R.H.; Zagalak, Petr
2012-01-01
Roč. 43, č. 11 (2012), s. 2015-2029 ISSN 0020-7721 R&D Projects: GA ČR(CZ) GAP103/12/2431 Institutional support: RVO:67985556 Keywords : Switched linear systems * Lyapunov function * particle swarm optimization Subject RIV: BC - Control Systems Theory Impact factor: 1.305, year: 2012 http://library.utia.cas.cz/separaty/2012/AS/zagalak-0382169.pdf
Identification of fractional order systems using modulating functions method
Liu, Dayan
2013-06-01
The modulating functions method has been used for the identification of linear and nonlinear systems. In this paper, we generalize this method to the on-line identification of fractional order systems based on the Riemann-Liouville fractional derivatives. First, a new fractional integration by parts formula involving the fractional derivative of a modulating function is given. Then, we apply this formula to a fractional order system, for which the fractional derivatives of the input and the output can be transferred into the ones of the modulating functions. By choosing a set of modulating functions, a linear system of algebraic equations is obtained. Hence, the unknown parameters of a fractional order system can be estimated by solving a linear system. Using this method, we do not need any initial values which are usually unknown and not equal to zero. Also we do not need to estimate the fractional derivatives of noisy output. Moreover, it is shown that the proposed estimators are robust against high frequency sinusoidal noises and the ones due to a class of stochastic processes. Finally, the efficiency and the stability of the proposed method is confirmed by some numerical simulations.
Strak, Pawel; Kempisty, Pawel; Sakowski, Konrad; Krukowski, Stanislaw
2014-09-01
Density functional theory studies were conducted to determine an influence of the carrier concentration on the optical and electronic properties of InN/GaN superlattice system. The oscillator strength values, energy gaps and the band profiles were obtained. The band profiles were found to be strongly affected for technically possible heavy n-type doping while for p-type doping the carrier influence, both screening and band shift, is negligible. Blue shift of the transition energy between conduction band minima and valence band maxima was observed for high concentrations of both type carriers.
A logic circuit for solving linear function by digital method
International Nuclear Information System (INIS)
Ma Yonghe
1986-01-01
A mathematical method for determining the linear relation of physical quantity with rediation intensity is described. A logic circuit has been designed for solving linear function by digital method. Some applications and the circuit function are discussed
Influence function method for fast estimation of BWR core performance
International Nuclear Information System (INIS)
Rahnema, F.; Martin, C.L.; Parkos, G.R.; Williams, R.D.
1993-01-01
The model, which is based on the influence function method, provides rapid estimate of important quantities such as margins to fuel operating limits, the effective multiplication factor, nodal power and void and bundle flow distributions as well as the traversing in-core probe (TIP) and local power range monitor (LPRM) readings. The fast model has been incorporated into GE's three-dimensional core monitoring system (3D Monicore). In addition to its predicative capability, the model adapts to LPRM readings in the monitoring mode. Comparisons have shown that the agreement between the results of the fast method and those of the standard 3D Monicore is within a few percent. (orig.)
Dai, Honglin; Luo, Yongdao
2013-12-01
In recent years, with the development of the Flat-Field Holographic Concave Grating, they are adopted by all kinds of UV spectrometers. By means of single optical surface, the Flat-Field Holographic Concave Grating can implement dispersion and imaging that make the UV spectrometer system design quite compact. However, the calibration of the Flat-Field Holographic Concave Grating is very difficult. Various factors make its imaging quality difficult to be guaranteed. So we have to process the spectrum signal with signal restoration before using it. Guiding by the theory of signals and systems, and after a series of experiments, we found that our UV spectrometer system is a Linear Space- Variant System. It means that we have to measure PSF of every pixel of the system which contains thousands of pixels. Obviously, that's a large amount of calculation .For dealing with this problem, we proposes a novel signal restoration method. This method divides the system into several Linear Space-Invariant subsystems and then makes signal restoration with PSFs. Our experiments turn out that this method is effective and inexpensive.
Analytic function expansion nodal method for nuclear reactor core design
International Nuclear Information System (INIS)
Noh, Hae Man
1995-02-01
than the analytic function. The second variation of the AFEN method we developed is the AFEN/PEN hybrid method. This method is designed especially for the multigroup reactor analysis. This hybrid method solves the diffusion equations for the fast energy groups by the PEN method, and those for the thermal energy groups by the AFEN method. This method is based on the observation that the fast group neutron flux distributions are generally so smooth that they can be approximated by a high-order polynomial and that, on the other hand, the thermal fluxes require the analytic function expansion for the representation of their strong gradients near the interface between assemblies having different neutronic properties. The results of benchmark problems on which this method was tested indicate that performance of the hybrid method is much better than that of the PEN method and is nearly the same to that of the AFEN method. In order for the AFEN method and its variations to be used in analyzing the neutron behavior in an actual reactor core, we also developed a new burnup correction model to reduce the errors in nodal flux distributions induced by the intranodal burnup gradients. It is essential for the nodal methods to maintain their accuracy in fuel depletion analysis. The burnup correction model developed in this study homogenizes equivalently the node with the burnup-induced cross section variations into the homogeneous node with the equivalent parameters such as the flux-volume-weighted constant cross sections and the discontinuity factors. The results of a benchmark problem show that this model eliminates almost all the errors in the nodal unknowns which are induced by the intranodal burnup gradients
Basic methods of linear functional analysis
Pryce, John D
2011-01-01
Introduction to the themes of mathematical analysis, geared toward advanced undergraduate and graduate students. Topics include operators, function spaces, Hilbert spaces, and elementary Fourier analysis. Numerous exercises and worked examples.1973 edition.
Lung function imaging methods in Cystic Fibrosis pulmonary disease.
Kołodziej, Magdalena; de Veer, Michael J; Cholewa, Marian; Egan, Gary F; Thompson, Bruce R
2017-05-17
Monitoring of pulmonary physiology is fundamental to the clinical management of patients with Cystic Fibrosis. The current standard clinical practise uses spirometry to assess lung function which delivers a clinically relevant functional readout of total lung function, however does not supply any visible or localised information. High Resolution Computed Tomography (HRCT) is a well-established current 'gold standard' method for monitoring lung anatomical changes in Cystic Fibrosis patients. HRCT provides excellent morphological information, however, the X-ray radiation dose can become significant if multiple scans are required to monitor chronic diseases such as cystic fibrosis. X-ray phase-contrast imaging is another emerging X-ray based methodology for Cystic Fibrosis lung assessment which provides dynamic morphological and functional information, albeit with even higher X-ray doses than HRCT. Magnetic Resonance Imaging (MRI) is a non-ionising radiation imaging method that is garnering growing interest among researchers and clinicians working with Cystic Fibrosis patients. Recent advances in MRI have opened up the possibilities to observe lung function in real time to potentially allow sensitive and accurate assessment of disease progression. The use of hyperpolarized gas or non-contrast enhanced MRI can be tailored to clinical needs. While MRI offers significant promise it still suffers from poor spatial resolution and the development of an objective scoring system especially for ventilation assessment.
Methods for evaluation of platelet function.
Lindahl, Tomas L; Ramström, Sofia
2009-10-01
There are a multitude of platelet function tests available, reflecting the complex nature of the platelet in haemostasis. No simple single test will ever cover all aspects of platelet function. Some tests focus on the aggregation of platelets, for example aggregometry, other on the swelling in response to hypotonic solutions, i.e. the well-known hypotonic shock response, or adhesion or coagulation and clot retraction, for example thromboelastography. In general there is a lack of clinical studies showing a predictive value of analysis of platelet concentrates.
Neurocardiology: Structure-Based Function.
Ardell, Jeffrey L; Armour, John Andrew
2016-09-15
Cardiac control is mediated via a series of reflex control networks involving somata in the (i) intrinsic cardiac ganglia (heart), (ii) intrathoracic extracardiac ganglia (stellate, middle cervical), (iii) superior cervical ganglia, (iv) spinal cord, (v) brainstem, and (vi) higher centers. Each of these processing centers contains afferent, efferent, and local circuit neurons, which interact locally and in an interdependent fashion with the other levels to coordinate regional cardiac electrical and mechanical indices on a beat-to-beat basis. This control system is optimized to respond to normal physiological stressors (standing, exercise, and temperature); however, it can be catastrophically disrupted by pathological events such as myocardial ischemia. In fact, it is now recognized that autonomic dysregulation is central to the evolution of heart failure and arrhythmias. Autonomic regulation therapy is an emerging modality in the management of acute and chronic cardiac pathologies. Neuromodulation-based approaches that target select nexus points of this hierarchy for cardiac control offer unique opportunities to positively affect therapeutic outcomes via improved efficacy of cardiovascular reflex control. As such, understanding the anatomical and physiological basis for such control is necessary to implement effectively novel neuromodulation therapies. © 2016 American Physiological Society. Compr Physiol 6:1635-1653, 2016. Copyright © 2016 John Wiley & Sons, Inc.
Zeta function methods and quantum fluctuations
International Nuclear Information System (INIS)
Elizalde, Emilio
2008-01-01
A review of some recent advances in zeta function techniques is given, in problems of pure mathematical nature but also as applied to the computation of quantum vacuum fluctuations in different field theories, and specially with a view to cosmological applications
Efficient pseudospectral methods for density functional calculations
International Nuclear Information System (INIS)
Murphy, R. B.; Cao, Y.; Beachy, M. D.; Ringnalda, M. N.; Friesner, R. A.
2000-01-01
Novel improvements of the pseudospectral method for assembling the Coulomb operator are discussed. These improvements consist of a fast atom centered multipole method and a variation of the Head-Gordan J-engine analytic integral evaluation. The details of the methodology are discussed and performance evaluations presented for larger molecules within the context of DFT energy and gradient calculations. (c) 2000 American Institute of Physics
Learning Methods for Radial Basis Functions Networks
Czech Academy of Sciences Publication Activity Database
Neruda, Roman; Kudová, Petra
2005-01-01
Roč. 21, - (2005), s. 1131-1142 ISSN 0167-739X R&D Projects: GA ČR GP201/03/P163; GA ČR GA201/02/0428 Institutional research plan: CEZ:AV0Z10300504 Keywords : radial basis function networks * hybrid supervised learning * genetic algorithms * benchmarking Subject RIV: BA - General Mathematics Impact factor: 0.555, year: 2005
Exp-function method for solving Maccari's system
International Nuclear Information System (INIS)
Zhang Sheng
2007-01-01
In this Letter, the Exp-function method is used to seek exact solutions of Maccari's system. As a result, single and combined generalized solitonary solutions are obtained, from which some known solutions obtained by extended sine-Gordon equation method and improved hyperbolic function method are recovered as special cases. It is shown that the Exp-function method provides a very effective and powerful mathematical tool for solving nonlinear evolution equations in mathematical physics
A nodal method based on matrix-response method
International Nuclear Information System (INIS)
Rocamora Junior, F.D.; Menezes, A.
1982-01-01
A nodal method based in the matrix-response method, is presented, and its application to spatial gradient problems, such as those that exist in fast reactors, near the core - blanket interface, is investigated. (E.G.) [pt
Pilates Method for Lung Function and Functional Capacity in Obese Adults.
Niehues, Janaina Rocha; Gonzáles, Inês; Lemos, Robson Rodrigues; Haas, Patrícia
2015-01-01
Obesity is defined as the condition in which the body mass index (BMI) is ≥ 30 kg/m2 and is responsible for decreased quality of life and functional limitations. The harmful effects on ventilatory function include reduced lung capacity and volume; diaphragmatic muscle weakness; decreased lung compliance and stiffness; and weakness of the abdominal muscles, among others. Pilates is a method of resistance training that works with low-impact muscle exercises and is based on isometric exercises. The current article is a review of the literature that aims to investigate the hypothesis that the Pilates method, as a complementary method of training, might be beneficial to pulmonary function and functional capacity in obese adults. The intent of the review was to evaluate the use of Pilates as an innovative intervention in the respiratory dysfunctions of obese adults. In studies with other populations, it has been observed that Pilates can be effective in improving chest capacity and expansion and lung volume. That finding is due to the fact that Pilates works through the center of force, made up of the abdominal muscles and gluteus muscles lumbar, which are responsible for the stabilization of the static and dynamic body that is associated with breath control. It has been observed that different Pilates exercises increase the activation and recruitment of the abdominal muscles. Those muscles are important in respiration, both in expiration and inspiration, through the facilitation of diaphragmatic action. In that way, strengthening the abdominal muscles can help improve respiratory function, leading to improvements in lung volume and capacity. The results found in the current literature review support the authors' observations that Pilates promotes the strengthening of the abdominal muscles and that improvements in diaphragmatic function may result in positive outcomes in respiratory function, thereby improving functional capacity. However, the authors did not
Functional Size Measurement applied to UML-based user requirements
van den Berg, Klaas; Dekkers, Ton; Oudshoorn, Rogier; Dekkers, T.
There is a growing interest in applying standardized methods for Functional Size Measurement (FSM) to Functional User Requirements (FUR) based on models in the Unified Modelling Language (UML). No consensus exists on this issue. We analyzed the demands that FSM places on FURs. We propose a
Effective-range function methods for charged particle collisions
Gaspard, David; Sparenberg, Jean-Marc
2018-04-01
Different versions of the effective-range function method for charged particle collisions are studied and compared. In addition, a novel derivation of the standard effective-range function is presented from the analysis of Coulomb wave functions in the complex plane of the energy. The recently proposed effective-range function denoted as Δℓ [Ramírez Suárez and Sparenberg, Phys. Rev. C 96, 034601 (2017), 10.1103/PhysRevC.96.034601] and an earlier variant [Hamilton et al., Nucl. Phys. B 60, 443 (1973), 10.1016/0550-3213(73)90193-4] are related to the standard function. The potential interest of Δℓ for the study of low-energy cross sections and weakly bound states is discussed in the framework of the proton-proton S10 collision. The resonant state of the proton-proton collision is successfully computed from the extrapolation of Δℓ instead of the standard function. It is shown that interpolating Δℓ can lead to useful extrapolation to negative energies, provided scattering data are known below one nuclear Rydberg energy (12.5 keV for the proton-proton system). This property is due to the connection between Δℓ and the effective-range function by Hamilton et al. that is discussed in detail. Nevertheless, such extrapolations to negative energies should be used with caution because Δℓ is not analytic at zero energy. The expected analytic properties of the main functions are verified in the complex energy plane by graphical color-based representations.
Quasihomogeneous function method and Fock's problem
International Nuclear Information System (INIS)
Smyshlyaev, V.P.
1987-01-01
The diffraction of a high-frequency wave by a smooth convex body near the tangency point of the limiting ray to the surface is restated as the scattering problem for the Schrodinger equation with a linear potential on a half-axis. Various prior estimates for the scattering problem are used in order to prove existence, uniqueness, and smoothness theorems. The corresponding solution satisfies the principle of limiting absorption. The formal solution of the corresponding Schrodinger equation in the form of quasihomogeneous functions is essentially used in their constructions
Machine function based control code algebras
Bergstra, J.A.
Machine functions have been introduced by Earley and Sturgis in [6] in order to provide a mathematical foundation of the use of the T-diagrams proposed by Bratman in [5]. Machine functions describe the operation of a machine at a very abstract level. A theory of hardware and software based on
MHCcluster, a method for functional clustering of MHC molecules
DEFF Research Database (Denmark)
Thomsen, Martin Christen Frølund; Lundegaard, Claus; Buus, Søren
2013-01-01
The identification of peptides binding to major histocompatibility complexes (MHC) is a critical step in the understanding of T cell immune responses. The human MHC genomic region (HLA) is extremely polymorphic comprising several thousand alleles, many encoding a distinct molecule. The potentially...... binding specificity. The method has a flexible web interface that allows the user to include any MHC of interest in the analysis. The output consists of a static heat map and graphical tree-based visualizations of the functional relationship between MHC variants and a dynamic TreeViewer interface where...
Method of synchronizing independent functional unit
Kim, Changhoan
2018-03-13
A system for synchronizing parallel processing of a plurality of functional processing units (FPU), a first FPU and a first program counter to control timing of a first stream of program instructions issued to the first FPU by advancement of the first program counter; a second FPU and a second program counter to control timing of a second stream of program instructions issued to the second FPU by advancement of the second program counter, the first FPU is in communication with a second FPU to synchronize the issuance of a first stream of program instructions to the second stream of program instructions and the second FPU is in communication with the first FPU to synchronize the issuance of the second stream program instructions to the first stream of program instructions.
Modeling photonic crystal waveguides with noncircular geometry using green function method
International Nuclear Information System (INIS)
Uvarovaa, I.; Tsyganok, B.; Bashkatov, Y.; Khomenko, V.
2012-01-01
Currently in the field of photonics is an acute problem fast and accurate simulation photonic crystal waveguides with complex geometry. This paper describes an improved method of Green's functions for non-circular geometries. Based on comparison of selected efficient numerical method for finding the eigenvalues for the Green's function method for non-circular holes chosen effective method for our purposes. Simulation is realized in Maple environment. The simulation results confirmed experimentally. Key words: photonic crystal, waveguide, modeling, Green function, complex geometry
[Bases and methods of suturing].
Vogt, P M; Altintas, M A; Radtke, C; Meyer-Marcotty, M
2009-05-01
If pharmaceutic modulation of scar formation does not improve the quality of the healing process over conventional healing, the surgeon must rely on personal skill and experience. Therefore a profound knowledge of wound healing based on experimental and clinical studies supplemented by postsurgical means of scar management and basic techniques of planning incisions, careful tissue handling, and thorough knowledge of suturing remain the most important ways to avoid abnormal scarring. This review summarizes the current experimental and clinical bases of surgical scar management.
GOMA: functional enrichment analysis tool based on GO modules
Institute of Scientific and Technical Information of China (English)
Qiang Huang; Ling-Yun Wu; Yong Wang; Xiang-Sun Zhang
2013-01-01
Analyzing the function of gene sets is a critical step in interpreting the results of high-throughput experiments in systems biology.A variety of enrichment analysis tools have been developed in recent years,but most output a long list of significantly enriched terms that are often redundant,making it difficult to extract the most meaningful functions.In this paper,we present GOMA,a novel enrichment analysis method based on the new concept of enriched functional Gene Ontology (GO) modules.With this method,we systematically revealed functional GO modules,i.e.,groups of functionally similar GO terms,via an optimization model and then ranked them by enrichment scores.Our new method simplifies enrichment analysis results by reducing redundancy,thereby preventing inconsistent enrichment results among functionally similar terms and providing more biologically meaningful results.
Hash function based on chaotic map lattices.
Wang, Shihong; Hu, Gang
2007-06-01
A new hash function system, based on coupled chaotic map dynamics, is suggested. By combining floating point computation of chaos and some simple algebraic operations, the system reaches very high bit confusion and diffusion rates, and this enables the system to have desired statistical properties and strong collision resistance. The chaos-based hash function has its advantages for high security and fast performance, and it serves as one of the most highly competitive candidates for practical applications of hash function for software realization and secure information communications in computer networks.
A valuation method on physiological functionality of food materials
Energy Technology Data Exchange (ETDEWEB)
NONE
2001-10-15
This reports is about valuation method on physiological functionality of food materials. It includes ten reports: maintenance condition of functional foods in Korea by Kim, Byeong Tae, management plan and classification of functional foods by Jung, Myeong Seop, measurement method vitality of functional foods for preventing diabetes, measurement way of aging delayed activation by Lee, Jae Yong, improvement on effectiveness of anti hypertension by functional foods by Park, Jeon Hong, and practice case for the method of test on anti gastritis antiulcer by Lee, Eun Bang.
A valuation method on physiological functionality of food materials
International Nuclear Information System (INIS)
2001-10-01
This reports is about valuation method on physiological functionality of food materials. It includes ten reports: maintenance condition of functional foods in Korea by Kim, Byeong Tae, management plan and classification of functional foods by Jung, Myeong Seop, measurement method vitality of functional foods for preventing diabetes, measurement way of aging delayed activation by Lee, Jae Yong, improvement on effectiveness of anti hypertension by functional foods by Park, Jeon Hong, and practice case for the method of test on anti gastritis antiulcer by Lee, Eun Bang.
Theoretical method for determining particle distribution functions of classical systems
International Nuclear Information System (INIS)
Johnson, E.
1980-01-01
An equation which involves the triplet distribution function and the three-particle direct correlation function is obtained. This equation was derived using an analogue of the Ornstein--Zernike equation. The new equation is used to develop a variational method for obtaining the triplet distribution function of uniform one-component atomic fluids from the pair distribution function. The variational method may be used with the first and second equations in the YBG hierarchy to obtain pair and triplet distribution functions. It should be easy to generalize the results to the n-particle distribution function
COMPANY VALUATION METHODS BASED ON PATRIMONY
Directory of Open Access Journals (Sweden)
SUCIU GHEORGHE
2013-02-01
Full Text Available The methods used for the company valuation can be divided into 3 main groups: methods based on patrimony,methods based on financial performance, methods based both on patrimony and on performance. The companyvaluation methods based on patrimony are implemented taking into account the balance sheet or the financialstatement. The financial statement refers to that type of balance in which the assets are arranged according to liquidity,and the liabilities according to their financial maturity date. The patrimonial methods are based on the principle thatthe value of the company equals that of the patrimony it owns. From a legal point of view, the patrimony refers to allthe rights and obligations of a company. The valuation of companies based on their financial performance can be donein 3 ways: the return value, the yield value, the present value of the cash flows. The mixed methods depend both onpatrimony and on financial performance or can make use of other methods.
Bruni, S.; Llombart Juan, N.; Neto, A.; Gerini, G.; Maci, S.
2004-01-01
A general algorithm for the analysis of microstrip coupled leaky wave slot antennas was discussed. The method was based on the construction of physically appealing entire domain Methods of Moments (MoM) basis function that allowed a consistent reduction of the number of unknowns and of total
Determination of acoustical transfer functions using an impulse method
MacPherson, J.
1985-02-01
The Transfer Function of a system may be defined as the relationship of the output response to the input of a system. Whilst recent advances in digital processing systems have enabled Impulse Transfer Functions to be determined by computation of the Fast Fourier Transform, there has been little work done in applying these techniques to room acoustics. Acoustical Transfer Functions have been determined for auditoria, using an impulse method. The technique is based on the computation of the Fast Fourier Transform (FFT) of a non-ideal impulsive source, both at the source and at the receiver point. The Impulse Transfer Function (ITF) is obtained by dividing the FFT at the receiver position by the FFT of the source. This quantity is presented both as linear frequency scale plots and also as synthesized one-third octave band data. The technique enables a considerable quantity of data to be obtained from a small number of impulsive signals recorded in the field, thereby minimizing the time and effort required on site. As the characteristics of the source are taken into account in the calculation, the choice of impulsive source is non-critical. The digital analysis equipment required for the analysis is readily available commercially.
MIMIC Methods for Assessing Differential Item Functioning in Polytomous Items
Wang, Wen-Chung; Shih, Ching-Lin
2010-01-01
Three multiple indicators-multiple causes (MIMIC) methods, namely, the standard MIMIC method (M-ST), the MIMIC method with scale purification (M-SP), and the MIMIC method with a pure anchor (M-PA), were developed to assess differential item functioning (DIF) in polytomous items. In a series of simulations, it appeared that all three methods…
Geometric optical transfer function and tis computation method
International Nuclear Information System (INIS)
Wang Qi
1992-01-01
Geometric Optical Transfer Function formula is derived after expound some content to be easily ignored, and the computation method is given with Bessel function of order zero and numerical integration and Spline interpolation. The method is of advantage to ensure accuracy and to save calculation
Taylor-series method for four-nucleon wave functions
International Nuclear Information System (INIS)
Sandulescu, A.; Tarnoveanu, I.; Rizea, M.
1977-09-01
Taylor-series method for transforming the infinite or finite well two-nucleon wave functions from individual coordinates to relative and c.m. coordinates, by expanding the single particle shell model wave functions around c.m. of the system, is generalized to four-nucleon wave functions. Also the connections with the Talmi-Moshinsky method for two and four harmonic oscillator wave functions are deduced. For both methods Fortran IV programs for the expansion coefficients have been written and the equivalence of corresponding expressions numerically proved. (author)
Linear regression methods a ccording to objective functions
Yasemin Sisman; Sebahattin Bektas
2012-01-01
The aim of the study is to explain the parameter estimation methods and the regression analysis. The simple linear regressionmethods grouped according to the objective function are introduced. The numerical solution is achieved for the simple linear regressionmethods according to objective function of Least Squares and theLeast Absolute Value adjustment methods. The success of the appliedmethods is analyzed using their objective function values.
HMM-Based Gene Annotation Methods
Energy Technology Data Exchange (ETDEWEB)
Haussler, David; Hughey, Richard; Karplus, Keven
1999-09-20
Development of new statistical methods and computational tools to identify genes in human genomic DNA, and to provide clues to their functions by identifying features such as transcription factor binding sites, tissue, specific expression and splicing patterns, and remove homologies at the protein level with genes of known function.
A hybrid method for the parallel computation of Green's functions
International Nuclear Information System (INIS)
Petersen, Dan Erik; Li Song; Stokbro, Kurt; Sorensen, Hans Henrik B.; Hansen, Per Christian; Skelboe, Stig; Darve, Eric
2009-01-01
Quantum transport models for nanodevices using the non-equilibrium Green's function method require the repeated calculation of the block tridiagonal part of the Green's and lesser Green's function matrices. This problem is related to the calculation of the inverse of a sparse matrix. Because of the large number of times this calculation needs to be performed, this is computationally very expensive even on supercomputers. The classical approach is based on recurrence formulas which cannot be efficiently parallelized. This practically prevents the solution of large problems with hundreds of thousands of atoms. We propose new recurrences for a general class of sparse matrices to calculate Green's and lesser Green's function matrices which extend formulas derived by Takahashi and others. We show that these recurrences may lead to a dramatically reduced computational cost because they only require computing a small number of entries of the inverse matrix. Then, we propose a parallelization strategy for block tridiagonal matrices which involves a combination of Schur complement calculations and cyclic reduction. It achieves good scalability even on problems of modest size.
Protein Function Prediction Based on Sequence and Structure Information
Smaili, Fatima Z.
2016-05-25
The number of available protein sequences in public databases is increasing exponentially. However, a significant fraction of these sequences lack functional annotation which is essential to our understanding of how biological systems and processes operate. In this master thesis project, we worked on inferring protein functions based on the primary protein sequence. In the approach we follow, 3D models are first constructed using I-TASSER. Functions are then deduced by structurally matching these predicted models, using global and local similarities, through three independent enzyme commission (EC) and gene ontology (GO) function libraries. The method was tested on 250 “hard” proteins, which lack homologous templates in both structure and function libraries. The results show that this method outperforms the conventional prediction methods based on sequence similarity or threading. Additionally, our method could be improved even further by incorporating protein-protein interaction information. Overall, the method we use provides an efficient approach for automated functional annotation of non-homologous proteins, starting from their sequence.
A Hybrid Positioning Method Based on Hypothesis Testing
DEFF Research Database (Denmark)
Amiot, Nicolas; Pedersen, Troels; Laaraiedh, Mohamed
2012-01-01
maxima. We propose to first estimate the support region of the two peaks of the likelihood function using a set membership method, and then decide between the two regions using a rule based on the less reliable observations. Monte Carlo simulations show that the performance of the proposed method...
Some Remarks on Exp-Function Method and Its Applications
International Nuclear Information System (INIS)
Aslan Ismail; Marinakis Vangelis
2011-01-01
Recently, many important nonlinear partial differential equations arising in the applied physical and mathematical sciences have been tackled by a popular approach, the so-called Exp-function method. In this paper, we present some shortcomings of this method by analyzing the results of recently published papers. We also discuss the possible improvement of the effectiveness of the method. (general)
Harris functional and related methods for calculating total energies in density-functional theory
International Nuclear Information System (INIS)
Averill, F.W.; Painter, G.S.
1990-01-01
The simplified energy functional of Harris has given results of useful accuracy for systems well outside the limits of weakly interacting fragments for which the method was originally proposed. In the present study, we discuss the source of the frequent good agreement of the Harris energy with full Kohn-Sham self-consistent results. A procedure is described for extending the applicability of the scheme to more strongly interacting systems by going beyond the frozen-atom fragment approximation. A gradient-force expression is derived, based on the Harris functional, which accounts for errors in the fragment charge representation. Results are presented for some diatomic molecules, illustrating the points of this study
Green's function matching method for adjoining regions having different masses
International Nuclear Information System (INIS)
Morgenstern Horing, Norman J
2006-01-01
We present a primer on the method of Green's function matching for the determination of the global Schroedinger Green's function for all space subject to joining conditions at an interface between two (or more) separate parts of the region having different masses. The object of this technique is to determine the full space Schroedinger Green's function in terms of the individual Green's functions of the constituent parts taken as if they were themselves extended to all space. This analytical method has had successful applications in the theory of surface states, and remains of interest for nanostructures
On the Reliability of Source Time Functions Estimated Using Empirical Green's Function Methods
Gallegos, A. C.; Xie, J.; Suarez Salas, L.
2017-12-01
The Empirical Green's Function (EGF) method (Hartzell, 1978) has been widely used to extract source time functions (STFs). In this method, seismograms generated by collocated events with different magnitudes are deconvolved. Under a fundamental assumption that the STF of the small event is a delta function, the deconvolved Relative Source Time Function (RSTF) yields the large event's STF. While this assumption can be empirically justified by examination of differences in event size and frequency content of the seismograms, there can be a lack of rigorous justification of the assumption. In practice, a small event might have a finite duration when the RSTF is retrieved and interpreted as the large event STF with a bias. In this study, we rigorously analyze this bias using synthetic waveforms generated by convolving a realistic Green's function waveform with pairs of finite-duration triangular or parabolic STFs. The RSTFs are found using a time-domain based matrix deconvolution. We find when the STFs of smaller events are finite, the RSTFs are a series of narrow non-physical spikes. Interpreting these RSTFs as a series of high-frequency source radiations would be very misleading. The only reliable and unambiguous information we can retrieve from these RSTFs is the difference in durations and the moment ratio of the two STFs. We can apply a Tikhonov smoothing to obtain a single-pulse RSTF, but its duration is dependent on the choice of weighting, which may be subjective. We then test the Multi-Channel Deconvolution (MCD) method (Plourde & Bostock, 2017) which assumes that both STFs have finite durations to be solved for. A concern about the MCD method is that the number of unknown parameters is larger, which would tend to make the problem rank-deficient. Because the kernel matrix is dependent on the STFs to be solved for under a positivity constraint, we can only estimate the rank-deficiency with a semi-empirical approach. Based on the results so far, we find that the
A G-function-based reliability-based design methodology applied to a cam roller system
International Nuclear Information System (INIS)
Wang, W.; Sui, P.; Wu, Y.T.
1996-01-01
Conventional reliability-based design optimization methods treats the reliability function as an ordinary function and applies existing mathematical programming techniques to solve the design problem. As a result, the conventional approach requires nested loops with respect to g-function, and is very time consuming. A new reliability-based design method is proposed in this paper that deals with the g-function directly instead of the reliability function. This approach has the potential of significantly reducing the number of calls for g-function calculations since it requires only one full reliability analysis in a design iteration. A cam roller system in a typical high pressure fuel injection diesel engine is designed using both the proposed and the conventional approach. The proposed method is much more efficient for this application
The method of images and Green's function for spherical domains
International Nuclear Information System (INIS)
Gutkin, Eugene; Newton, Paul K
2004-01-01
Motivated by problems in electrostatics and vortex dynamics, we develop two general methods for constructing Green's function for simply connected domains on the surface of the unit sphere. We prove a Riemann mapping theorem showing that such domains can be conformally mapped to the upper hemisphere. We then categorize all domains on the sphere for which Green's function can be constructed by an extension of the classical method of images. We illustrate our methods by several examples, such as the upper hemisphere, geodesic triangles, and latitudinal rectangles. We describe the point vortex motion in these domains, which is governed by a Hamiltonian determined by the Dirichlet Green's function
Vibrational Spectroscopic Studies of Tenofovir Using Density Functional Theory Method
Directory of Open Access Journals (Sweden)
G. R. Ramkumaar
2013-01-01
Full Text Available A systematic vibrational spectroscopic assignment and analysis of tenofovir has been carried out by using FTIR and FT-Raman spectral data. The vibrational analysis was aided by electronic structure calculations—hybrid density functional methods (B3LYP/6-311++G(d,p, B3LYP/6-31G(d,p, and B3PW91/6-31G(d,p. Molecular equilibrium geometries, electronic energies, IR intensities, and harmonic vibrational frequencies have been computed. The assignments proposed based on the experimental IR and Raman spectra have been reviewed and complete assignment of the observed spectra have been proposed. UV-visible spectrum of the compound was also recorded and the electronic properties such as HOMO and LUMO energies and were determined by time-dependent DFT (TD-DFT method. The geometrical, thermodynamical parameters, and absorption wavelengths were compared with the experimental data. The B3LYP/6-311++G(d,p-, B3LYP/6-31G(d,p-, and B3PW91/6-31G(d,p-based NMR calculation procedure was also done. It was used to assign the 13C and 1H NMR chemical shift of tenofovir.
A New Filled Function Method with One Parameter for Global Optimization
Directory of Open Access Journals (Sweden)
Fei Wei
2013-01-01
Full Text Available The filled function method is an effective approach to find the global minimizer of multidimensional multimodal functions. The conventional filled functions are numerically unstable due to exponential or logarithmic term and sensitive to parameters. In this paper, a new filled function with only one parameter is proposed, which is continuously differentiable and proved to satisfy all conditions of the filled function definition. Moreover, this filled function is not sensitive to parameter, and the overflow can not happen for this function. Based on these, a new filled function method is proposed, and it is numerically stable to the initial point and the parameter variable. The computer simulations indicate that the proposed filled function method is efficient and effective.
Cross-organism learning method to discover new gene functionalities.
Domeniconi, Giacomo; Masseroli, Marco; Moro, Gianluca; Pinoli, Pietro
2016-04-01
Knowledge of gene and protein functions is paramount for the understanding of physiological and pathological biological processes, as well as in the development of new drugs and therapies. Analyses for biomedical knowledge discovery greatly benefit from the availability of gene and protein functional feature descriptions expressed through controlled terminologies and ontologies, i.e., of gene and protein biomedical controlled annotations. In the last years, several databases of such annotations have become available; yet, these valuable annotations are incomplete, include errors and only some of them represent highly reliable human curated information. Computational techniques able to reliably predict new gene or protein annotations with an associated likelihood value are thus paramount. Here, we propose a novel cross-organisms learning approach to reliably predict new functionalities for the genes of an organism based on the known controlled annotations of the genes of another, evolutionarily related and better studied, organism. We leverage a new representation of the annotation discovery problem and a random perturbation of the available controlled annotations to allow the application of supervised algorithms to predict with good accuracy unknown gene annotations. Taking advantage of the numerous gene annotations available for a well-studied organism, our cross-organisms learning method creates and trains better prediction models, which can then be applied to predict new gene annotations of a target organism. We tested and compared our method with the equivalent single organism approach on different gene annotation datasets of five evolutionarily related organisms (Homo sapiens, Mus musculus, Bos taurus, Gallus gallus and Dictyostelium discoideum). Results show both the usefulness of the perturbation method of available annotations for better prediction model training and a great improvement of the cross-organism models with respect to the single-organism ones
A Matrix Splitting Method for Composite Function Minimization
Yuan, Ganzhao
2016-12-07
Composite function minimization captures a wide spectrum of applications in both computer vision and machine learning. It includes bound constrained optimization and cardinality regularized optimization as special cases. This paper proposes and analyzes a new Matrix Splitting Method (MSM) for minimizing composite functions. It can be viewed as a generalization of the classical Gauss-Seidel method and the Successive Over-Relaxation method for solving linear systems in the literature. Incorporating a new Gaussian elimination procedure, the matrix splitting method achieves state-of-the-art performance. For convex problems, we establish the global convergence, convergence rate, and iteration complexity of MSM, while for non-convex problems, we prove its global convergence. Finally, we validate the performance of our matrix splitting method on two particular applications: nonnegative matrix factorization and cardinality regularized sparse coding. Extensive experiments show that our method outperforms existing composite function minimization techniques in term of both efficiency and efficacy.
A Matrix Splitting Method for Composite Function Minimization
Yuan, Ganzhao; Zheng, Wei-Shi; Ghanem, Bernard
2016-01-01
Composite function minimization captures a wide spectrum of applications in both computer vision and machine learning. It includes bound constrained optimization and cardinality regularized optimization as special cases. This paper proposes and analyzes a new Matrix Splitting Method (MSM) for minimizing composite functions. It can be viewed as a generalization of the classical Gauss-Seidel method and the Successive Over-Relaxation method for solving linear systems in the literature. Incorporating a new Gaussian elimination procedure, the matrix splitting method achieves state-of-the-art performance. For convex problems, we establish the global convergence, convergence rate, and iteration complexity of MSM, while for non-convex problems, we prove its global convergence. Finally, we validate the performance of our matrix splitting method on two particular applications: nonnegative matrix factorization and cardinality regularized sparse coding. Extensive experiments show that our method outperforms existing composite function minimization techniques in term of both efficiency and efficacy.
Gene function prediction based on Gene Ontology Hierarchy Preserving Hashing.
Zhao, Yingwen; Fu, Guangyuan; Wang, Jun; Guo, Maozu; Yu, Guoxian
2018-02-23
Gene Ontology (GO) uses structured vocabularies (or terms) to describe the molecular functions, biological roles, and cellular locations of gene products in a hierarchical ontology. GO annotations associate genes with GO terms and indicate the given gene products carrying out the biological functions described by the relevant terms. However, predicting correct GO annotations for genes from a massive set of GO terms as defined by GO is a difficult challenge. To combat with this challenge, we introduce a Gene Ontology Hierarchy Preserving Hashing (HPHash) based semantic method for gene function prediction. HPHash firstly measures the taxonomic similarity between GO terms. It then uses a hierarchy preserving hashing technique to keep the hierarchical order between GO terms, and to optimize a series of hashing functions to encode massive GO terms via compact binary codes. After that, HPHash utilizes these hashing functions to project the gene-term association matrix into a low-dimensional one and performs semantic similarity based gene function prediction in the low-dimensional space. Experimental results on three model species (Homo sapiens, Mus musculus and Rattus norvegicus) for interspecies gene function prediction show that HPHash performs better than other related approaches and it is robust to the number of hash functions. In addition, we also take HPHash as a plugin for BLAST based gene function prediction. From the experimental results, HPHash again significantly improves the prediction performance. The codes of HPHash are available at: http://mlda.swu.edu.cn/codes.php?name=HPHash. Copyright © 2018 Elsevier Inc. All rights reserved.
Information encryption systems based on Boolean functions
Directory of Open Access Journals (Sweden)
Aureliu Zgureanu
2011-02-01
Full Text Available An information encryption system based on Boolean functions is proposed. Information processing is done using multidimensional matrices, performing logical operations with these matrices. At the basis of ensuring high level security of the system the complexity of solving the problem of building systems of Boolean functions that depend on many variables (tens and hundreds is set. Such systems represent the private key. It varies both during the encryption and decryption of information, and during the transition from one message to another.
[Pituitary function of dysgenesic femal rats. Studies with grafting method].
Vanhems, E; Busquet, J
1975-01-01
Misulban administered to pregnant rats on the 15th day of gestation provoked gonadal dysgenesia in the offspring. Study of the pituitary function of dysgenesic female rats, realized by grafting method, showed gonadotrophic hypersecretion.
The functional variable method for solving the fractional Korteweg ...
Indian Academy of Sciences (India)
The physical and engineering processes have been modelled by means of fractional ... very important role in various fields such as economics, chemistry, notably control the- .... In §3, the functional variable method is applied for finding exact.
Research on Fault Diagnosis Method Based on Rule Base Neural Network
Directory of Open Access Journals (Sweden)
Zheng Ni
2017-01-01
Full Text Available The relationship between fault phenomenon and fault cause is always nonlinear, which influences the accuracy of fault location. And neural network is effective in dealing with nonlinear problem. In order to improve the efficiency of uncertain fault diagnosis based on neural network, a neural network fault diagnosis method based on rule base is put forward. At first, the structure of BP neural network is built and the learning rule is given. Then, the rule base is built by fuzzy theory. An improved fuzzy neural construction model is designed, in which the calculated methods of node function and membership function are also given. Simulation results confirm the effectiveness of this method.
Perturbation methods and the Melnikov functions for slowly varying oscillators
International Nuclear Information System (INIS)
Lakrad, Faouzi; Charafi, Moulay Mustapha
2005-01-01
A new approach to obtaining the Melnikov function for homoclinic orbits in slowly varying oscillators is proposed. The present method applies the Lindstedt-Poincare method to determine an approximation of homoclinic solutions. It is shown that the resultant Melnikov condition is the same as that obtained in the usual way involving distance functions in three dimensions by Wiggins and Holmes [Homoclinic orbits in slowly varying oscillators. SIAM J Math Anal 1987;18(3):612
Directory of Open Access Journals (Sweden)
Jean-Louis Dornstetter
2002-12-01
Full Text Available This paper is devoted to the presentation of a combinatorial approach, based on the theory of symmetric functions, for analyzing the performance of a family of demodulation methods used in mobile telecommunications.
Jean-Louis Dornstetter; Daniel Krob; Jean-Yves Thibon; Ekaterina A. Vassilieva
2002-01-01
This paper is devoted to the presentation of a combinatorial approach, based on the theory of symmetric functions, for analyzing the performance of a family of demodulation methods used in mobile telecommunications.
Methods for deconvolving sparse positive delta function series
International Nuclear Information System (INIS)
Trussell, H.J.; Schwalbe, L.A.
1981-01-01
Sparse delta function series occur as data in many chemical analyses and seismic methods. These original data are often sufficiently degraded by the recording instrument response that the individual delta function peaks are difficult to distinguish and measure. A method, which has been used to measure these peaks, is to fit a parameterized model by a nonlinear least-squares fitting algorithm. The deconvolution approaches described have the advantage of not requiring a parameterized point spread function, nor do they expect a fixed number of peaks. Two new methods are presented. The maximum power technique is reviewed. A maximum a posteriori technique is introduced. Results on both simulated and real data by the two methods are presented. The characteristics of the data can determine which method gives superior results. 5 figures
Methods for assessing the effects of dehydration on cognitive function.
Lieberman, Harris R
2012-11-01
Studying the effects of dehydration on cognitive function presents a variety of unique and difficult challenges to investigators. These challenges, which are addressed in this article, can be divided into three general categories: 1) choosing an appropriate method of generating a consistent level of dehydration; 2) determining and effectively employing appropriate and sensitive measures of cognitive state; and 3) adequately controlling the many confounding factors that interfere with assessment of cognitive function. The design and conduct of studies on the effects of dehydration on cognitive function should carefully consider various methodological issues, and investigators should carefully weigh the benefits and disadvantages of particular methods and procedures. © 2012 International Life Sciences Institute.
[Standardization of the terms for Chinese herbal functions based on functional targeting].
Xiao, Bin; Tao, Ou; Gu, Hao; Wang, Yun; Qiao, Yan-Jiang
2011-03-01
Functional analysis concisely summarizes and concentrates on the therapeutic characteristics and features of Chinese herbal medicine. Standardization of the terms for Chinese herbal functions not only plays a key role in modern research and development of Chinese herbal medicine, but also has far-reaching clinical applications. In this paper, a new method for standardizing the terms for Chinese herbal function was proposed. Firstly, functional targets were collected. Secondly, the pathological conditions and the mode of action of every functional target were determined by analyzing the references. Thirdly, the relationships between the pathological condition and the mode of action were determined based on Chinese medicine theory and data. This three-step approach allows for standardization of the terms for Chinese herbal functions. Promoting the standardization of Chinese medicine terms will benefit the overall clinical application of Chinese herbal medicine.
Impact of Base Functional Component Types on Software Functional Size based Effort Estimation
Gencel, Cigdem; Buglione, Luigi
2008-01-01
Software effort estimation is still a significant challenge for software management. Although Functional Size Measurement (FSM) methods have been standardized and have become widely used by the software organizations, the relationship between functional size and development effort still needs further investigation. Most of the studies focus on the project cost drivers and consider total software functional size as the primary input to estimation models. In this study, we investigate whether u...
Functional geometric method for solving free boundary problems for harmonic functions
Energy Technology Data Exchange (ETDEWEB)
Demidov, Aleksander S [M. V. Lomonosov Moscow State University, Moscow (Russian Federation)
2010-01-01
A survey is given of results and approaches for a broad spectrum of free boundary problems for harmonic functions of two variables. The main results are obtained by the functional geometric method. The core of these methods is an interrelated analysis of the functional and geometric characteristics of the problems under consideration and of the corresponding non-linear Riemann-Hilbert problems. An extensive list of open questions is presented. Bibliography: 124 titles.
Modified Strum functions method in the nuclear three body problem
International Nuclear Information System (INIS)
Nasyrov, M.; Abdurakhmanov, A.; Yunusova, M.
1997-01-01
Fadeev-Hahn equations in the nuclear three-body problem were solved by modified Sturm functions method. Numerical calculations were carried out the square well potential. It was shown that the convergence of the method is high and the binding energy value is in agreement with experimental one (A.A.D.)
Score Function of Distribution and Revival of the Moment Method
Czech Academy of Sciences Publication Activity Database
Fabián, Zdeněk
2016-01-01
Roč. 45, č. 4 (2016), s. 1118-1136 ISSN 0361-0926 R&D Projects: GA MŠk(CZ) LG12020 Institutional support: RVO:67985807 Keywords : characteristics of distributions * data characteristics * general moment method * Huber moment estimator * parametric methods * score function Subject RIV: BB - Applied Statistics , Operational Research Impact factor: 0.311, year: 2016
The functional variable method for finding exact solutions of some ...
Indian Academy of Sciences (India)
Abstract. In this paper, we implemented the functional variable method and the modified. Riemann–Liouville derivative for the exact solitary wave solutions and periodic wave solutions of the time-fractional Klein–Gordon equation, and the time-fractional Hirota–Satsuma coupled. KdV system. This method is extremely simple ...
A nodal method based on the response-matrix method
International Nuclear Information System (INIS)
Cunha Menezes Filho, A. da; Rocamora Junior, F.D.
1983-02-01
A nodal approach based on the Response-Matrix method is presented with the purpose of investigating the possibility of mixing two different allocations in the same problem. It is found that the use of allocation of albedo combined with allocation of direct reflection produces good results for homogeneous fast reactor configurations. (Author) [pt
Linear density response function in the projector augmented wave method
DEFF Research Database (Denmark)
Yan, Jun; Mortensen, Jens Jørgen; Jacobsen, Karsten Wedel
2011-01-01
We present an implementation of the linear density response function within the projector-augmented wave method with applications to the linear optical and dielectric properties of both solids, surfaces, and interfaces. The response function is represented in plane waves while the single...... functions of Si, C, SiC, AlP, and GaAs compare well with previous calculations. While optical properties of semiconductors, in particular excitonic effects, are generally not well described by ALDA, we obtain excellent agreement with experiments for the surface loss function of graphene and the Mg(0001...
García de la Vega, J M; Omar, S; San Fabián, J
2017-04-01
Spin-spin coupling constants in water monomer and dimer have been calculated using several wave function and density functional-based methods. CCSD, MCSCF, and SOPPA wave functions methods yield similar results, specially when an additive approach is used with the MCSCF. Several functionals have been used to analyze their performance with the Jacob's ladder and a set of functionals with different HF exchange were tested. Functionals with large HF exchange appropriately predict 1 J O H , 2 J H H and 2h J O O couplings, while 1h J O H is better calculated with functionals that include a reduced fraction of HF exchange. Accurate functionals for 1 J O H and 2 J H H have been tested in a tetramer water model. The hydrogen bond effects on these intramolecular couplings are additive when they are calculated by SOPPA(CCSD) wave function and DFT methods. Graphical Abstract Evaluation of the additive effect of the hydrogen bond on spin-spin coupling constants of water using WF and DFT methods.
Lagrange polynomial interpolation method applied in the calculation of the J({xi},{beta}) function
Energy Technology Data Exchange (ETDEWEB)
Fraga, Vinicius Munhoz; Palma, Daniel Artur Pinheiro [Centro Federal de Educacao Tecnologica de Quimica de Nilopolis, RJ (Brazil)]. E-mails: munhoz.vf@gmail.com; dpalma@cefeteq.br; Martinez, Aquilino Senra [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia (COPPE) (COPPE). Programa de Engenharia Nuclear]. E-mail: aquilino@lmp.ufrj.br
2008-07-01
The explicit dependence of the Doppler broadening function creates difficulties in the obtaining an analytical expression for J function . The objective of this paper is to present a method for the quick and accurate calculation of J function based on the recent advances in the calculation of the Doppler broadening function and on a systematic analysis of its integrand. The methodology proposed, of a semi-analytical nature, uses the Lagrange polynomial interpolation method and the Frobenius formulation in the calculation of Doppler broadening function . The results have proven satisfactory from the standpoint of accuracy and processing time. (author)
Lagrange polynomial interpolation method applied in the calculation of the J(ξ,β) function
International Nuclear Information System (INIS)
Fraga, Vinicius Munhoz; Palma, Daniel Artur Pinheiro; Martinez, Aquilino Senra
2008-01-01
The explicit dependence of the Doppler broadening function creates difficulties in the obtaining an analytical expression for J function . The objective of this paper is to present a method for the quick and accurate calculation of J function based on the recent advances in the calculation of the Doppler broadening function and on a systematic analysis of its integrand. The methodology proposed, of a semi-analytical nature, uses the Lagrange polynomial interpolation method and the Frobenius formulation in the calculation of Doppler broadening function . The results have proven satisfactory from the standpoint of accuracy and processing time. (author)
Improved quasi-static nodal green's function method
International Nuclear Information System (INIS)
Li Junli; Jing Xingqing; Hu Dapu
1997-01-01
Improved Quasi-Static Green's Function Method (IQS/NGFM) is presented, as an new kinetic method. To solve the three-dimensional transient problem, improved Quasi-Static Method is adopted to deal with the temporal problem, which will increase the time step as long as possible so as to decrease the number of times of space calculation. The time step of IQS/NGFM can be increased to 5∼10 times longer than that of Full Implicit Differential Method. In spatial calculation, the NGFM is used to get the distribution of shape function, and it's spatial mesh can be nearly 20 times larger than that of Definite Differential Method. So the IQS/NGFM is considered as an efficient kinetic method
FUSION SEGMENTATION METHOD BASED ON FUZZY THEORY FOR COLOR IMAGES
Directory of Open Access Journals (Sweden)
J. Zhao
2017-09-01
Full Text Available The image segmentation method based on two-dimensional histogram segments the image according to the thresholds of the intensity of the target pixel and the average intensity of its neighborhood. This method is essentially a hard-decision method. Due to the uncertainties when labeling the pixels around the threshold, the hard-decision method can easily get the wrong segmentation result. Therefore, a fusion segmentation method based on fuzzy theory is proposed in this paper. We use membership function to model the uncertainties on each color channel of the color image. Then, we segment the color image according to the fuzzy reasoning. The experiment results show that our proposed method can get better segmentation results both on the natural scene images and optical remote sensing images compared with the traditional thresholding method. The fusion method in this paper can provide new ideas for the information extraction of optical remote sensing images and polarization SAR images.
Application of the Characteristic Basis Function Method Using CUDA
Directory of Open Access Journals (Sweden)
Juan Ignacio Pérez
2014-01-01
Full Text Available The characteristic basis function method (CBFM is a popular technique for efficiently solving the method of moments (MoM matrix equations. In this work, we address the adaptation of this method to a relatively new computing infrastructure provided by NVIDIA, the Compute Unified Device Architecture (CUDA, and take into account some of the limitations which appear when the geometry under analysis becomes too big to fit into the Graphics Processing Unit’s (GPU’s memory.
Green's functions in quantum chemistry - I. The Σ perturbation method
International Nuclear Information System (INIS)
Sebastian, K.L.
1978-01-01
As an improvement over the Hartree-Fock approximation, a Green's Function method - the Σ perturbation method - is investigated for molecular calculations. The method is applied to the hydrogen molecule and to the π-electron system of ethylene under PPP approximation. It is found that when the algebraic approximation is used, the energy obtained is better than that of the HF approach, but is not as good as that of the configuration-interaction method. The main advantage of this procedure is that it is devoid of the most serious defect of HF method, viz. incorrect dissociation limits. (K.B.)
Reduced density matrix functional theory via a wave function based approach
Energy Technology Data Exchange (ETDEWEB)
Schade, Robert; Bloechl, Peter [Institute for Theoretical Physics, Clausthal University of Technology, Clausthal (Germany); Pruschke, Thomas [Institute for Theoretical Physics, University of Goettingen, Goettingen (Germany)
2016-07-01
We propose a new method for the calculation of the electronic and atomic structure of correlated electron systems based on reduced density matrix functional theory (rDMFT). The density-matrix functional is evaluated on the fly using Levy's constrained search formalism. The present implementation rests on a local approximation of the interaction reminiscent to that of dynamical mean field theory (DMFT). We focus here on additional approximations to the exact density-matrix functional in the local approximation and evaluate their performance.
Point Set Denoising Using Bootstrap-Based Radial Basis Function.
Directory of Open Access Journals (Sweden)
Khang Jie Liew
Full Text Available This paper examines the application of a bootstrap test error estimation of radial basis functions, specifically thin-plate spline fitting, in surface smoothing. The presence of noisy data is a common issue of the point set model that is generated from 3D scanning devices, and hence, point set denoising is one of the main concerns in point set modelling. Bootstrap test error estimation, which is applied when searching for the smoothing parameters of radial basis functions, is revisited. The main contribution of this paper is a smoothing algorithm that relies on a bootstrap-based radial basis function. The proposed method incorporates a k-nearest neighbour search and then projects the point set to the approximated thin-plate spline surface. Therefore, the denoising process is achieved, and the features are well preserved. A comparison of the proposed method with other smoothing methods is also carried out in this study.
Point Set Denoising Using Bootstrap-Based Radial Basis Function.
Liew, Khang Jie; Ramli, Ahmad; Abd Majid, Ahmad
2016-01-01
This paper examines the application of a bootstrap test error estimation of radial basis functions, specifically thin-plate spline fitting, in surface smoothing. The presence of noisy data is a common issue of the point set model that is generated from 3D scanning devices, and hence, point set denoising is one of the main concerns in point set modelling. Bootstrap test error estimation, which is applied when searching for the smoothing parameters of radial basis functions, is revisited. The main contribution of this paper is a smoothing algorithm that relies on a bootstrap-based radial basis function. The proposed method incorporates a k-nearest neighbour search and then projects the point set to the approximated thin-plate spline surface. Therefore, the denoising process is achieved, and the features are well preserved. A comparison of the proposed method with other smoothing methods is also carried out in this study.
Bortz, John; Shatz, Narkis
2011-04-01
The recently developed generalized functional method provides a means of designing nonimaging concentrators and luminaires for use with extended sources and receivers. We explore the mathematical relationships between optical designs produced using the generalized functional method and edge-ray, aplanatic, and simultaneous multiple surface (SMS) designs. Edge-ray and dual-surface aplanatic designs are shown to be special cases of generalized functional designs. In addition, it is shown that dual-surface SMS designs are closely related to generalized functional designs and that certain computational advantages accrue when the two design methods are combined. A number of examples are provided. © 2011 Optical Society of America
Investigation of MLE in nonparametric estimation methods of reliability function
International Nuclear Information System (INIS)
Ahn, Kwang Won; Kim, Yoon Ik; Chung, Chang Hyun; Kim, Kil Yoo
2001-01-01
There have been lots of trials to estimate a reliability function. In the ESReDA 20 th seminar, a new method in nonparametric way was proposed. The major point of that paper is how to use censored data efficiently. Generally there are three kinds of approach to estimate a reliability function in nonparametric way, i.e., Reduced Sample Method, Actuarial Method and Product-Limit (PL) Method. The above three methods have some limits. So we suggest an advanced method that reflects censored information more efficiently. In many instances there will be a unique maximum likelihood estimator (MLE) of an unknown parameter, and often it may be obtained by the process of differentiation. It is well known that the three methods generally used to estimate a reliability function in nonparametric way have maximum likelihood estimators that are uniquely exist. So, MLE of the new method is derived in this study. The procedure to calculate a MLE is similar just like that of PL-estimator. The difference of the two is that in the new method, the mass (or weight) of each has an influence of the others but the mass in PL-estimator not
[Cognitive functions, their development and modern diagnostic methods].
Klasik, Adam; Janas-Kozik, Małgorzata; Krupka-Matuszczyk, Irena; Augustyniak, Ewa
2006-01-01
provided a theory. The psychometric approach concentrates on studying the differences in intelligence. The aim of this approach is to test intelligence by means of standardized tests (e.g. WISC-R, WAIS-R) used to show the individual differences among humans. Human cognitive functions determine individuals' adaptation capabilities and disturbances in this area indicate a number of psychopathological changes and are a symptom enabling to differentiate or diagnose one with a disorder. That is why the psychological assessment of cognitive functions is an important part of patients' diagnosis. Contemporary neuropsychological studies are to a great extent based computer tests. The use of computer methods has a number of measurement-related advantages. It allows for standardized testing environment, increasing therefore its reliability and standardizes the patient assessment process. Special attention should be paid to the neuropsychological tests included in the Vienna Test System (Cognitron, SIGNAL, RT, VIGIL, DAUF), which are used to assess the operational memory span, learning processes, reaction time, attention selective function, attention continuity as well as attention interference resistance. It also seems justified to present the CPT id test (Continuous Performance Test) as well as Free Recall. CPT is a diagnostic tool used to assess the attention selective function, attention continuity of attention, attention interference resistance as well as attention alertness. The Free Recall test is used in the memory processes diagnostics to assess patients' operational memory as well as the information organization degree in operational memory. The above mentioned neuropsychological tests are tools used in clinical assessment of cognitive function disorders.
Inferring biological functions of guanylyl cyclases with computational methods
Alquraishi, May Majed; Meier, Stuart Kurt
2013-01-01
A number of studies have shown that functionally related genes are often co-expressed and that computational based co-expression analysis can be used to accurately identify functional relationships between genes and by inference, their encoded proteins. Here we describe how a computational based co-expression analysis can be used to link the function of a specific gene of interest to a defined cellular response. Using a worked example we demonstrate how this methodology is used to link the function of the Arabidopsis Wall-Associated Kinase-Like 10 gene, which encodes a functional guanylyl cyclase, to host responses to pathogens. © Springer Science+Business Media New York 2013.
Inferring biological functions of guanylyl cyclases with computational methods
Alquraishi, May Majed
2013-09-03
A number of studies have shown that functionally related genes are often co-expressed and that computational based co-expression analysis can be used to accurately identify functional relationships between genes and by inference, their encoded proteins. Here we describe how a computational based co-expression analysis can be used to link the function of a specific gene of interest to a defined cellular response. Using a worked example we demonstrate how this methodology is used to link the function of the Arabidopsis Wall-Associated Kinase-Like 10 gene, which encodes a functional guanylyl cyclase, to host responses to pathogens. © Springer Science+Business Media New York 2013.
Mindfulness-Based Cognitive Therapy for severe Functional Disorders
DEFF Research Database (Denmark)
Fjorback, Lone Overby
MINDFULNESS-BASED COGNITIVE THERAPY FOR FUNCTIONAL DISORDERS- A RANDOMISED CONTROLLED TRIAL Background: Mindfulness-Based Stress Reduction (MBSR) is a group skills-training program developed by Kabat-Zinn. It is designed to teach patients to become more aware of and relate differently...... to their thoughts, feelings, and bodily sensations. Randomised controlled studies of MBSR have shown mitigation of stress, anxiety, and dysphoria in general population and reduction in total mood disturbance and stress symptoms in a medical population. In Mindfulness Based Cognitive Therapy MBSR is recombined...... with cognitive therapy. Aim: To examine the efficacy of Mindfulness-Based Cognitive Therapy in severe Functional disorders, defined as severe Bodily Distress Disorder. Method: 120 patients are randomised to either Mindfulness Based Cognitive Therapy: a manualized programme with eight weekly 3 ½ hour group...
Mindfulness-Based Cognitive Therapy for severe Functional Disorders
DEFF Research Database (Denmark)
Fjorback, Lone Overby
with cognitive therapy. Aim: To examine the efficacy of Mindfulness-Based Cognitive Therapy in severe Functional disorders, defined as severe Bodily Distress Disorder. Method: 120 patients are randomised to either Mindfulness Based Cognitive Therapy: a manualized programme with eight weekly 3 ½ hour group......MINDFULNESS-BASED COGNITIVE THERAPY FOR FUNCTIONAL DISORDERS- A RANDOMISED CONTROLLED TRIAL Background: Mindfulness-Based Stress Reduction (MBSR) is a group skills-training program developed by Kabat-Zinn. It is designed to teach patients to become more aware of and relate differently...... to their thoughts, feelings, and bodily sensations. Randomised controlled studies of MBSR have shown mitigation of stress, anxiety, and dysphoria in general population and reduction in total mood disturbance and stress symptoms in a medical population. In Mindfulness Based Cognitive Therapy MBSR is recombined...
Information filtering via a scaling-based function.
Qiu, Tian; Zhang, Zi-Ke; Chen, Guang
2013-01-01
Finding a universal description of the algorithm optimization is one of the key challenges in personalized recommendation. In this article, for the first time, we introduce a scaling-based algorithm (SCL) independent of recommendation list length based on a hybrid algorithm of heat conduction and mass diffusion, by finding out the scaling function for the tunable parameter and object average degree. The optimal value of the tunable parameter can be abstracted from the scaling function, which is heterogeneous for the individual object. Experimental results obtained from three real datasets, Netflix, MovieLens and RYM, show that the SCL is highly accurate in recommendation. More importantly, compared with a number of excellent algorithms, including the mass diffusion method, the original hybrid method, and even an improved version of the hybrid method, the SCL algorithm remarkably promotes the personalized recommendation in three other aspects: solving the accuracy-diversity dilemma, presenting a high novelty, and solving the key challenge of cold start problem.
Color image definition evaluation method based on deep learning method
Liu, Di; Li, YingChun
2018-01-01
In order to evaluate different blurring levels of color image and improve the method of image definition evaluation, this paper proposed a method based on the depth learning framework and BP neural network classification model, and presents a non-reference color image clarity evaluation method. Firstly, using VGG16 net as the feature extractor to extract 4,096 dimensions features of the images, then the extracted features and labeled images are employed in BP neural network to train. And finally achieve the color image definition evaluation. The method in this paper are experimented by using images from the CSIQ database. The images are blurred at different levels. There are 4,000 images after the processing. Dividing the 4,000 images into three categories, each category represents a blur level. 300 out of 400 high-dimensional features are trained in VGG16 net and BP neural network, and the rest of 100 samples are tested. The experimental results show that the method can take full advantage of the learning and characterization capability of deep learning. Referring to the current shortcomings of the major existing image clarity evaluation methods, which manually design and extract features. The method in this paper can extract the images features automatically, and has got excellent image quality classification accuracy for the test data set. The accuracy rate is 96%. Moreover, the predicted quality levels of original color images are similar to the perception of the human visual system.
Heuristic method for searching global maximum of multimodal unknown function
Energy Technology Data Exchange (ETDEWEB)
Kamei, K; Araki, Y; Inoue, K
1983-06-01
The method is composed of three kinds of searches. They are called g (grasping)-mode search, f (finding)-mode search and c (confirming)-mode search. In the g-mode search and the c-mode search, a heuristic method is used which was extracted from search behaviors of human subjects. In f-mode search, the simplex method is used which is well known as a search method for unimodal unknown function. Each mode search and its transitions are shown in the form of flowchart. The numerical results for one-dimensional through six-dimensional multimodal functions prove the proposed search method to be an effective one. 11 references.
Approximation methods for the partition functions of anharmonic systems
International Nuclear Information System (INIS)
Lew, P.; Ishida, T.
1979-07-01
The analytical approximations for the classical, quantum mechanical and reduced partition functions of the diatomic molecule oscillating internally under the influence of the Morse potential have been derived and their convergences have been tested numerically. This successful analytical method is used in the treatment of anharmonic systems. Using Schwinger perturbation method in the framework of second quantization formulism, the reduced partition function of polyatomic systems can be put into an expression which consists separately of contributions from the harmonic terms, Morse potential correction terms and interaction terms due to the off-diagonal potential coefficients. The calculated results of the reduced partition function from the approximation method on the 2-D and 3-D model systems agree well with the numerical exact calculations
Functionalized Media and Methods of Making and Using Therefor
Huang, Yongsong (Inventor); Dillon, James (Inventor)
2017-01-01
Methods, compositions, devices and kits are provided herein for separating, scavenging, capturing or identifying a metal from a target using a medium or scaffold with a selenium-containing functional group. The medium or the scaffold including the selenium-containing functional group has affinity and specificity to metal ions or compounds having one or more metals, and efficiently separates, recovers, and scavenges of the metals from a target such as a sample, solution, suspension, or mixture.
[Soil carbohydrates: their determination methods and indication functions].
Zhang, Wei; Xie, Hongtu; He, Hongbo; Zheng, Lichen; Wang, Ge
2006-08-01
Soil carbohydrates are the important component of soil organic matter, and play an important role in soil aggregation formation. Their hydrolysis methods involve sulfur acid (H2SO4), hydrochloric acid (HCl), and trifluoroacetic acid (TFA) hydrolysis, and their determination methods include colorimetry, gas-liquid chromatography (GLC) , high performance liquid chromatography (HPLC), and high performance anion-exchange chromatography with pulsed amperometric detection (HPAE-PAD). This paper summarized the methods of carbohydrates' hydrolysis, purification and detection, with focus on the derived methods of GLC, and briefly introduced the indication functions of carbohydrates in soil organic matter turnover.
Function combined method for design innovation of children's bike
Wu, Xiaoli; Qiu, Tingting; Chen, Huijuan
2013-03-01
As children mature, bike products for children in the market develop at the same time, and the conditions are frequently updated. Certain problems occur when using a bike, such as cycle overlapping, repeating function, and short life cycle, which go against the principles of energy conservation and the environmental protection intensive design concept. In this paper, a rational multi-function method of design through functional superposition, transformation, and technical implementation is proposed. An organic combination of frog-style scooter and children's tricycle is developed using the multi-function method. From the ergonomic perspective, the paper elaborates on the body size of children aged 5 to 12 and effectively extracts data for a multi-function children's bike, which can be used for gliding and riding. By inverting the body, parts can be interchanged between the handles and the pedals of the bike. Finally, the paper provides a detailed analysis of the components and structural design, body material, and processing technology of the bike. The study of Industrial Product Innovation Design provides an effective design method to solve the bicycle problems, extends the function problems, improves the product market situation, and enhances the energy saving feature while implementing intensive product development effectively at the same time.
Recent advances in radial basis function collocation methods
Chen, Wen; Chen, C S
2014-01-01
This book surveys the latest advances in radial basis function (RBF) meshless collocation methods which emphasis on recent novel kernel RBFs and new numerical schemes for solving partial differential equations. The RBF collocation methods are inherently free of integration and mesh, and avoid tedious mesh generation involved in standard finite element and boundary element methods. This book focuses primarily on the numerical algorithms, engineering applications, and highlights a large class of novel boundary-type RBF meshless collocation methods. These methods have shown a clear edge over the traditional numerical techniques especially for problems involving infinite domain, moving boundary, thin-walled structures, and inverse problems. Due to the rapid development in RBF meshless collocation methods, there is a need to summarize all these new materials so that they are available to scientists, engineers, and graduate students who are interest to apply these newly developed methods for solving real world’s ...
An improved method for estimating the frequency correlation function
Chelli, Ali; Pä tzold, Matthias
2012-01-01
For time-invariant frequency-selective channels, the transfer function is a superposition of waves having different propagation delays and path gains. In order to estimate the frequency correlation function (FCF) of such channels, the frequency averaging technique can be utilized. The obtained FCF can be expressed as a sum of auto-terms (ATs) and cross-terms (CTs). The ATs are caused by the autocorrelation of individual path components. The CTs are due to the cross-correlation of different path components. These CTs have no physical meaning and leads to an estimation error. We propose a new estimation method aiming to improve the estimation accuracy of the FCF of a band-limited transfer function. The basic idea behind the proposed method is to introduce a kernel function aiming to reduce the CT effect, while preserving the ATs. In this way, we can improve the estimation of the FCF. The performance of the proposed method and the frequency averaging technique is analyzed using a synthetically generated transfer function. We show that the proposed method is more accurate than the frequency averaging technique. The accurate estimation of the FCF is crucial for the system design. In fact, we can determine the coherence bandwidth from the FCF. The exact knowledge of the coherence bandwidth is beneficial in both the design as well as optimization of frequency interleaving and pilot arrangement schemes. © 2012 IEEE.
An improved method for estimating the frequency correlation function
Chelli, Ali
2012-04-01
For time-invariant frequency-selective channels, the transfer function is a superposition of waves having different propagation delays and path gains. In order to estimate the frequency correlation function (FCF) of such channels, the frequency averaging technique can be utilized. The obtained FCF can be expressed as a sum of auto-terms (ATs) and cross-terms (CTs). The ATs are caused by the autocorrelation of individual path components. The CTs are due to the cross-correlation of different path components. These CTs have no physical meaning and leads to an estimation error. We propose a new estimation method aiming to improve the estimation accuracy of the FCF of a band-limited transfer function. The basic idea behind the proposed method is to introduce a kernel function aiming to reduce the CT effect, while preserving the ATs. In this way, we can improve the estimation of the FCF. The performance of the proposed method and the frequency averaging technique is analyzed using a synthetically generated transfer function. We show that the proposed method is more accurate than the frequency averaging technique. The accurate estimation of the FCF is crucial for the system design. In fact, we can determine the coherence bandwidth from the FCF. The exact knowledge of the coherence bandwidth is beneficial in both the design as well as optimization of frequency interleaving and pilot arrangement schemes. © 2012 IEEE.
Directory of Open Access Journals (Sweden)
Salih Yalcinbas
2016-01-01
Full Text Available In this paper, a new collocation method based on the Fibonacci polynomials is introduced to solve the high-order linear Volterra integro-differential equations under the conditions. Numerical examples are included to demonstrate the applicability and validity of the proposed method and comparisons are made with the existing results. In addition, an error estimation based on the residual functions is presented for this method. The approximate solutions are improved by using this error estimation.
Platelet function testing: methods of assessment and clinical utility.
LENUS (Irish Health Repository)
Mylotte, Darren
2012-02-01
Platelets play a central role in the regulation of both thrombosis and haemostasis yet tests of platelet function have, until recently, been exclusively used in the diagnosis and management of bleeding disorders. Recent advances have demonstrated the clinical utility of platelet function testing in patients with cardiovascular disease. The ex vivo measurement of response to antiplatelet therapies (aspirin and clopidogrel), by an ever-increasing array of platelet function tests, is with some assays, predictive of adverse clinical events and thus, represents an emerging area of interest for both the clinician and basic scientist. This review article will describe the advantages and disadvantages of the currently available methods of measuring platelet function and discuss both the limitations and emerging data supporting the role of platelet function studies in clinical practice.
Platelet function testing: methods of assessment and clinical utility.
LENUS (Irish Health Repository)
Mylotte, Darren
2011-01-01
Platelets play a central role in the regulation of both thrombosis and haemostasis yet tests of platelet function have, until recently, been exclusively used in the diagnosis and management of bleeding disorders. Recent advances have demonstrated the clinical utility of platelet function testing in patients with cardiovascular disease. The ex vivo measurement of response to antiplatelet therapies (aspirin and clopidogrel), by an ever-increasing array of platelet function tests, is with some assays, predictive of adverse clinical events and thus, represents an emerging area of interest for both the clinician and basic scientist. This review article will describe the advantages and disadvantages of the currently available methods of measuring platelet function and discuss both the limitations and emerging data supporting the role of platelet function studies in clinical practice.
On the trial functions in nested element method
International Nuclear Information System (INIS)
Altiparmakov, D.V.
1985-01-01
The R-function method is applied to the multidimensional steady-state neutron diffusion equation. Using a variational principle the nested element approximation is formulated. Trial functions taking into account the geometrical shape of material regions are constructed. The influence of both the surrounding regions and the corner singularities at the external boundary is incorporated into the approximate solution. Benchmark calculations show that such an approximation can yield satisfactory results. Moreover, in the case of complex geometry, the presented approach would result in a significant reduction of the number of unknowns compared to other methods
DESCRIBING FUNCTION METHOD FOR PI-FUZZY CONTROLLED SYSTEMS STABILITY ANALYSIS
Directory of Open Access Journals (Sweden)
Stefan PREITL
2004-12-01
Full Text Available The paper proposes a global stability analysis method dedicated to fuzzy control systems containing Mamdani PI-fuzzy controllers with output integration to control SISO linear / linearized plants. The method is expressed in terms of relatively simple steps, and it is based on: the generalization of the describing function method for the considered fuzzy control systems to the MIMO case, the approximation of the describing functions by applying the least squares method. The method is applied to the stability analysis of a class of PI-fuzzy controlled servo-systems, and validated by considering a case study.
Spectrum estimation method based on marginal spectrum
International Nuclear Information System (INIS)
Cai Jianhua; Hu Weiwen; Wang Xianchun
2011-01-01
FFT method can not meet the basic requirements of power spectrum for non-stationary signal and short signal. A new spectrum estimation method based on marginal spectrum from Hilbert-Huang transform (HHT) was proposed. The procession of obtaining marginal spectrum in HHT method was given and the linear property of marginal spectrum was demonstrated. Compared with the FFT method, the physical meaning and the frequency resolution of marginal spectrum were further analyzed. Then the Hilbert spectrum estimation algorithm was discussed in detail, and the simulation results were given at last. The theory and simulation shows that under the condition of short data signal and non-stationary signal, the frequency resolution and estimation precision of HHT method is better than that of FFT method. (authors)
Razavi, Sayed Ali Akbar; Masoomi, Mohammad Yaser; Morsali, Ali
2017-08-21
To design a robust, π-conjugated, low-cost, and easy to synthesize metal-organic framework (MOF) for cation sensing by the photoluminescence (PL) method, 4,4'-oxybis(benzoic acid) (H 2 OBA) has been used in combination with 3,6-di(pyridin-4-yl)-1,2,4,5-tetrazine (DPT) as a tetrazine-functionalized spacer to construct [Zn(OBA)(DPT) 0.5 ]·DMF (TMU-34(-2H)). The tetrazine motif is a π-conjugated, water-soluble/stable fluorophore with relatively weak σ-donating Lewis basic sites. These characteristics of tetrazine make TMU-34(-2H) a good candidate for cation sensing. Because of hydrogen bonding between tetrazine moieties and water molecules, TMU-34(-2H) shows different PL emissions in water and acetonitrile. Cation sensing in these two solvents revealed that TMU-34(-2H) can selectively detect Hg 2+ in water (by 243% enhancement) and in acetonitrile (by 90% quenching). The contribution of electron-donating/accepting characteristics along with solvation effects on secondary interactions of the tetrazine motifs inside the TMU-34(-2H) framework results in different signal transductions. Improved sensitivity and accuracy of detection were obtained using the double solvent sensing method (DSSM), in which different signal transductions of TMU-34(-2H) in water and acetonitrile were combined simultaneously to construct a double solvent sensing curve and formulate a sensitivity factor. Calculation of sensitivity factors for all of the tested cations demonstrated that it is possible to detect Hg 2+ by DSSM with ultrahigh sensitivity. Such a tremendous distinction in the Hg 2+ sensitivity factor is visualizable in the double solvent sensing curve. Thus, by application of DSSM instead of one-dimensional sensing, the interfering effects of other cations are completely eliminated and the sensitivity toward Hg(II) is highly improved. Strong interactions between Hg 2+ and the nitrogen atoms of the tetrazine groups along with easy accessibility of Hg 2+ to the tetrazine groups lead
Evaluation of snubber functional test methods: Tier 1
International Nuclear Information System (INIS)
Brown, D.P.
1993-07-01
The objective of the research is to establish technical bases in support of efforts on the part of the Snubber Utility Group (SNUG) and the Subsection ISTD Working Group of the ASME O ampersand M Code in developing guidelines and methodologies for snubber functional testing to ensure that snubbers are tested in a manner that ensures reliable and meaningful test results. The methodology used in this research includes both a review of available industry information as well as the testing of different snubber models using various test machines. Information is provided pertaining to current industry practices in regard to snubber testing including recommended test procedures, technical description of various test machines, and the number and types of snubbers used in the nuclear power industry. A review of previous test methodology research conducted by the Snubber Utility Group is also included. The effects of variations in controllable test parameters on snubber test results are discussed. Also included are the results of confirmatory tests in which various snubber models were tested using various test machines. Recommendations are provided for standard test methods to be included in Subsection ISTD of the ASME O ampersand M Code [4]. General information and recommendations are provided that may be used by utility personnel in specifying snubber test equipment that is most suited for plant-specific needs as well as information that may be effectively used in the review and interpretation of test results
Multiquark masses and wave functions through modified Green's function Monte Carlo method
International Nuclear Information System (INIS)
Kerbikov, B.O.; Polikarpov, M.I.; Shevchenko, L.V.
1987-01-01
The Modified Green's function Monte Carlo method (MGFMC) is used to calculate the masses and ground-state wave functions of multiquark systems in the potential model. The previously developed MGFMC is generalized in order to treat systems containing quarks with inequal masses. The obtained results are presented with the Cornell potential for the masses and the wave functions of light and heavy flavoured baryons and multiquark states (N=6, 9, 12) made of light quarks
Exact density functional and wave function embedding schemes based on orbital localization
International Nuclear Information System (INIS)
Hégely, Bence; Nagy, Péter R.; Kállay, Mihály; Ferenczy, György G.
2016-01-01
Exact schemes for the embedding of density functional theory (DFT) and wave function theory (WFT) methods into lower-level DFT or WFT approaches are introduced utilizing orbital localization. First, a simple modification of the projector-based embedding scheme of Manby and co-workers [J. Chem. Phys. 140, 18A507 (2014)] is proposed. We also use localized orbitals to partition the system, but instead of augmenting the Fock operator with a somewhat arbitrary level-shift projector we solve the Huzinaga-equation, which strictly enforces the Pauli exclusion principle. Second, the embedding of WFT methods in local correlation approaches is studied. Since the latter methods split up the system into local domains, very simple embedding theories can be defined if the domains of the active subsystem and the environment are treated at a different level. The considered embedding schemes are benchmarked for reaction energies and compared to quantum mechanics (QM)/molecular mechanics (MM) and vacuum embedding. We conclude that for DFT-in-DFT embedding, the Huzinaga-equation-based scheme is more efficient than the other approaches, but QM/MM or even simple vacuum embedding is still competitive in particular cases. Concerning the embedding of wave function methods, the clear winner is the embedding of WFT into low-level local correlation approaches, and WFT-in-DFT embedding can only be more advantageous if a non-hybrid density functional is employed.
Exact density functional and wave function embedding schemes based on orbital localization
Hégely, Bence; Nagy, Péter R.; Ferenczy, György G.; Kállay, Mihály
2016-08-01
Exact schemes for the embedding of density functional theory (DFT) and wave function theory (WFT) methods into lower-level DFT or WFT approaches are introduced utilizing orbital localization. First, a simple modification of the projector-based embedding scheme of Manby and co-workers [J. Chem. Phys. 140, 18A507 (2014)] is proposed. We also use localized orbitals to partition the system, but instead of augmenting the Fock operator with a somewhat arbitrary level-shift projector we solve the Huzinaga-equation, which strictly enforces the Pauli exclusion principle. Second, the embedding of WFT methods in local correlation approaches is studied. Since the latter methods split up the system into local domains, very simple embedding theories can be defined if the domains of the active subsystem and the environment are treated at a different level. The considered embedding schemes are benchmarked for reaction energies and compared to quantum mechanics (QM)/molecular mechanics (MM) and vacuum embedding. We conclude that for DFT-in-DFT embedding, the Huzinaga-equation-based scheme is more efficient than the other approaches, but QM/MM or even simple vacuum embedding is still competitive in particular cases. Concerning the embedding of wave function methods, the clear winner is the embedding of WFT into low-level local correlation approaches, and WFT-in-DFT embedding can only be more advantageous if a non-hybrid density functional is employed.
Exact density functional and wave function embedding schemes based on orbital localization
Energy Technology Data Exchange (ETDEWEB)
Hégely, Bence; Nagy, Péter R.; Kállay, Mihály, E-mail: kallay@mail.bme.hu [MTA-BME Lendület Quantum Chemistry Research Group, Department of Physical Chemistry and Materials Science, Budapest University of Technology and Economics, P.O. Box 91, H-1521 Budapest (Hungary); Ferenczy, György G. [Medicinal Chemistry Research Group, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Magyar tudósok körútja 2, H-1117 Budapest (Hungary); Department of Biophysics and Radiation Biology, Semmelweis University, Tűzoltó u. 37-47, H-1094 Budapest (Hungary)
2016-08-14
Exact schemes for the embedding of density functional theory (DFT) and wave function theory (WFT) methods into lower-level DFT or WFT approaches are introduced utilizing orbital localization. First, a simple modification of the projector-based embedding scheme of Manby and co-workers [J. Chem. Phys. 140, 18A507 (2014)] is proposed. We also use localized orbitals to partition the system, but instead of augmenting the Fock operator with a somewhat arbitrary level-shift projector we solve the Huzinaga-equation, which strictly enforces the Pauli exclusion principle. Second, the embedding of WFT methods in local correlation approaches is studied. Since the latter methods split up the system into local domains, very simple embedding theories can be defined if the domains of the active subsystem and the environment are treated at a different level. The considered embedding schemes are benchmarked for reaction energies and compared to quantum mechanics (QM)/molecular mechanics (MM) and vacuum embedding. We conclude that for DFT-in-DFT embedding, the Huzinaga-equation-based scheme is more efficient than the other approaches, but QM/MM or even simple vacuum embedding is still competitive in particular cases. Concerning the embedding of wave function methods, the clear winner is the embedding of WFT into low-level local correlation approaches, and WFT-in-DFT embedding can only be more advantageous if a non-hybrid density functional is employed.
Managerial Methods Based on Analysis, Recommended to a Boarding House
Directory of Open Access Journals (Sweden)
Solomia Andreş
2015-06-01
Full Text Available The paper presents a few theoretical and practical contributions regarding the implementing of analysis based methods, respectively a SWOT and an economic analysis, from the perspective and the demands of a firm management which functions with profits due to the activity of a boarding house. The two types of managerial methods recommended to the firm offer real and complex information necessary for the knowledge of the firm status and the elaboration of prediction for the maintaining of business viability.
Interchange Recognition Method Based on CNN
Directory of Open Access Journals (Sweden)
HE Haiwei
2018-03-01
Full Text Available The identification and classification of interchange structures in OSM data can provide important information for the construction of multi-scale model, navigation and location services, congestion analysis, etc. The traditional method of interchange identification relies on the low-level characteristics of artificial design, and cannot distinguish the complex interchange structure with interference section effectively. In this paper, a new method based on convolutional neural network for identification of the interchange is proposed. The method combines vector data with raster image, and uses neural network to learn the fuzzy characteristics of the interchange, and classifies the complex interchange structure in OSM. Experiments show that this method has strong anti-interference, and has achieved good results in the classification of complex interchange shape, and there is room for further improvement with the expansion of the case base and the optimization of neural network model.
Recommendation advertising method based on behavior retargeting
Zhao, Yao; YIN, Xin-Chun; CHEN, Zhi-Min
2011-10-01
Online advertising has become an important business in e-commerce. Ad recommended algorithms are the most critical part in recommendation systems. We propose a recommendation advertising method based on behavior retargeting which can avoid leakage click of advertising due to objective reasons and can observe the changes of the user's interest in time. Experiments show that our new method can have a significant effect and can be further to apply to online system.
Personnel Selection Based on Fuzzy Methods
Directory of Open Access Journals (Sweden)
Lourdes Cañós
2011-03-01
Full Text Available The decisions of managers regarding the selection of staff strongly determine the success of the company. A correct choice of employees is a source of competitive advantage. We propose a fuzzy method for staff selection, based on competence management and the comparison with the valuation that the company considers the best in each competence (ideal candidate. Our method is based on the Hamming distance and a Matching Level Index. The algorithms, implemented in the software StaffDesigner, allow us to rank the candidates, even when the competences of the ideal candidate have been evaluated only in part. Our approach is applied in a numerical example.
Deterministic and fuzzy-based methods to evaluate community resilience
Kammouh, Omar; Noori, Ali Zamani; Taurino, Veronica; Mahin, Stephen A.; Cimellaro, Gian Paolo
2018-04-01
Community resilience is becoming a growing concern for authorities and decision makers. This paper introduces two indicator-based methods to evaluate the resilience of communities based on the PEOPLES framework. PEOPLES is a multi-layered framework that defines community resilience using seven dimensions. Each of the dimensions is described through a set of resilience indicators collected from literature and they are linked to a measure allowing the analytical computation of the indicator's performance. The first method proposed in this paper requires data on previous disasters as an input and returns as output a performance function for each indicator and a performance function for the whole community. The second method exploits a knowledge-based fuzzy modeling for its implementation. This method allows a quantitative evaluation of the PEOPLES indicators using descriptive knowledge rather than deterministic data including the uncertainty involved in the analysis. The output of the fuzzy-based method is a resilience index for each indicator as well as a resilience index for the community. The paper also introduces an open source online tool in which the first method is implemented. A case study illustrating the application of the first method and the usage of the tool is also provided in the paper.
The continuous, desingularized Newton method for meromorphic functions
Jongen, H.Th.; Jonker, P.; Twilt, F.
For any (nonconstant) meromorphic function, we present a real analytic dynamical system, which may be interpreted as an infinitesimal version of Newton's method for finding its zeros. A fairly complete description of the local and global features of the phase portrait of such a system is obtained
Further Stable methods for the calculation of partition functions
International Nuclear Information System (INIS)
Wilson, B G; Gilleron, F; Pain, J
2007-01-01
The extension to recursion over holes of the Gilleron and Pain method for calculating partition functions of a canonical ensemble of non-interacting bound electrons is presented as well as a generalization for the efficient computation of collisional line broadening
The Functions and Methods of Mental Training on Competitive Sports
Xiong, Jianshe
Mental training is the major training method of the competitive sports and the main factor of athletes skill and tactics level.By combining the psychological factor with the current competitive sports characteristics, this paper presents the function of mental training forward athletes, and how to improve the comprehensive psychological quality by using mental training.
Fast methods for spatially correlated multilevel functional data
Staicu, A.-M.
2010-01-19
We propose a new methodological framework for the analysis of hierarchical functional data when the functions at the lowest level of the hierarchy are correlated. For small data sets, our methodology leads to a computational algorithm that is orders of magnitude more efficient than its closest competitor (seconds versus hours). For large data sets, our algorithm remains fast and has no current competitors. Thus, in contrast to published methods, we can now conduct routine simulations, leave-one-out analyses, and nonparametric bootstrap sampling. Our methods are inspired by and applied to data obtained from a state-of-the-art colon carcinogenesis scientific experiment. However, our models are general and will be relevant to many new data sets where the object of inference are functions or images that remain dependent even after conditioning on the subject on which they are measured. Supplementary materials are available at Biostatistics online.
Method of vacuum correlation functions: Results and prospects
International Nuclear Information System (INIS)
Badalian, A. M.; Simonov, Yu. A.; Shevchenko, V. I.
2006-01-01
Basic results obtained within the QCD method of vacuum correlation functions over the past 20 years in the context of investigations into strong-interaction physics at the Institute of Theoretical and Experimental Physics (ITEP, Moscow) are formulated Emphasis is placed primarily on the prospects of the general theory developed within QCD by employing both nonperturbative and perturbative methods. On the basis of ab initio arguments, it is shown that the lowest two field correlation functions play a dominant role in QCD dynamics. A quantitative theory of confinement and deconfinement, as well as of the spectra of light and heavy quarkonia, glueballs, and hybrids, is given in terms of these two correlation functions. Perturbation theory in a nonperturbative vacuum (background perturbation theory) plays a significant role, not possessing drawbacks of conventional perturbation theory and leading to the infrared freezing of the coupling constant α s
Energy Technology Data Exchange (ETDEWEB)
Kim, Sung-Hou; Shin, Dong Hae; Hou, Jingtong; Chandonia, John-Marc; Das, Debanu; Choi, In-Geol; Kim, Rosalind; Kim, Sung-Hou
2007-09-02
Advances in sequence genomics have resulted in an accumulation of a huge number of protein sequences derived from genome sequences. However, the functions of a large portion of them cannot be inferred based on the current methods of sequence homology detection to proteins of known functions. Three-dimensional structure can have an important impact in providing inference of molecular function (physical and chemical function) of a protein of unknown function. Structural genomics centers worldwide have been determining many 3-D structures of the proteins of unknown functions, and possible molecular functions of them have been inferred based on their structures. Combined with bioinformatics and enzymatic assay tools, the successful acceleration of the process of protein structure determination through high throughput pipelines enables the rapid functional annotation of a large fraction of hypothetical proteins. We present a brief summary of the process we used at the Berkeley Structural Genomics Center to infer molecular functions of proteins of unknown function.
Exact solitary wave solutions for some nonlinear evolution equations via Exp-function method
International Nuclear Information System (INIS)
Ebaid, A.
2007-01-01
Based on the Exp-function method, exact solutions for some nonlinear evolution equations are obtained. The KdV equation, Burgers' equation and the combined KdV-mKdV equation are chosen to illustrate the effectiveness of the method
A Survey of Functional Behavior Assessment Methods Used by Behavior Analysts in Practice
Oliver, Anthony C.; Pratt, Leigh A.; Normand, Matthew P.
2015-01-01
To gather information about the functional behavior assessment (FBA) methods behavior analysts use in practice, we sent a web-based survey to 12,431 behavior analysts certified by the Behavior Analyst Certification Board. Ultimately, 724 surveys were returned, with the results suggesting that most respondents regularly use FBA methods, especially…
Improving protein function prediction methods with integrated literature data
Directory of Open Access Journals (Sweden)
Gabow Aaron P
2008-04-01
Full Text Available Abstract Background Determining the function of uncharacterized proteins is a major challenge in the post-genomic era due to the problem's complexity and scale. Identifying a protein's function contributes to an understanding of its role in the involved pathways, its suitability as a drug target, and its potential for protein modifications. Several graph-theoretic approaches predict unidentified functions of proteins by using the functional annotations of better-characterized proteins in protein-protein interaction networks. We systematically consider the use of literature co-occurrence data, introduce a new method for quantifying the reliability of co-occurrence and test how performance differs across species. We also quantify changes in performance as the prediction algorithms annotate with increased specificity. Results We find that including information on the co-occurrence of proteins within an abstract greatly boosts performance in the Functional Flow graph-theoretic function prediction algorithm in yeast, fly and worm. This increase in performance is not simply due to the presence of additional edges since supplementing protein-protein interactions with co-occurrence data outperforms supplementing with a comparably-sized genetic interaction dataset. Through the combination of protein-protein interactions and co-occurrence data, the neighborhood around unknown proteins is quickly connected to well-characterized nodes which global prediction algorithms can exploit. Our method for quantifying co-occurrence reliability shows superior performance to the other methods, particularly at threshold values around 10% which yield the best trade off between coverage and accuracy. In contrast, the traditional way of asserting co-occurrence when at least one abstract mentions both proteins proves to be the worst method for generating co-occurrence data, introducing too many false positives. Annotating the functions with greater specificity is harder
A spray based method for biofilm removal
Cense, A.W.
2005-01-01
Biofilm growth on human teeth is the cause of oral diseases such as caries (tooth decay), gingivitis (inflammation of the gums) and periodontitis (inflammation of the tooth bone). In this thesis, a water based cleaning method is designed for removal of oral biofilms, or dental plaque. The first part
Arts-Based Methods in Education
DEFF Research Database (Denmark)
Chemi, Tatiana; Du, Xiangyun
2017-01-01
This chapter introduces the field of arts-based methods in education with a general theoretical perspective, reviewing the journey of learning in connection to the arts, and the contribution of the arts to societies from an educational perspective. Also presented is the rationale and structure...
Formal methods in design and verification of functional specifications
International Nuclear Information System (INIS)
Vaelisuo, H.
1995-01-01
It is claimed that formal methods should be applied already when specifying the functioning of the control/monitoring system, i.e. when planning how to implement the desired operation of the plant. Formal methods are seen as a way to mechanize and thus automate part of the planning. All mathematical methods which can be applied on related problem solving should be considered as formal methods. Because formal methods can only support the designer, not replace him/her, they must be integrated into a design support tool. Such a tool must also aid the designer in getting the correct conception of the plant and its behaviour. The use of a hypothetic design support tool is illustrated to clarify the requirements such a tool should fulfill. (author). 3 refs, 5 figs
Optimising Job-Shop Functions Utilising the Score-Function Method
DEFF Research Database (Denmark)
Nielsen, Erland Hejn
2000-01-01
During the last 1-2 decades, simulation optimisation of discrete event dynamic systems (DEDS) has made considerable theoretical progress with respect to computational efficiency. The score-function (SF) method and the infinitesimal perturbation analysis (IPA) are two candidates belonging to this ......During the last 1-2 decades, simulation optimisation of discrete event dynamic systems (DEDS) has made considerable theoretical progress with respect to computational efficiency. The score-function (SF) method and the infinitesimal perturbation analysis (IPA) are two candidates belonging...... of a Job-Shop can be handled by the SF method....
Development of thermal stress screening method. Application of green function method
International Nuclear Information System (INIS)
Furuhashi, Ichiro; Shibamoto, Hiroshi; Kasahara, Naoto
2004-01-01
This work was achieved for the development of the screening method of thermal transient stresses in FBR components. We proposed an approximation method for evaluations of thermal stress under variable heat transfer coefficients (non-linear problems) using the Green functions of thermal stresses with constant heat transfer coefficients (linear problems). Detailed thermal stress analyses provided Green functions for a skirt structure and a tube-sheet of Intermediate Heat Exchanger. The upper bound Green functions were obtained by the analyses using those upper bound heat transfer coefficients. The medium and the lower bound Green functions were got by the analyses of those under medium and the lower bound heat transfer coefficients. Conventional evaluations utilized the upper bound Green functions. On the other hand, we proposed a new evaluation method by using the upper bound, medium and the lower bound Green functions. The comparison of above results gave the results as follows. The conventional evaluations were conservative and appropriate for the cases under one fluid thermal transient structure such as the skirt. The conventional evaluations were generally conservative for the complicated structures under two or more fluids thermal transients such as the tube-sheet. But the danger locations could exists for the complicated structures under two or more fluids transients, namely the conventional evaluations were non-conservative. The proposed evaluations gave good estimations for these complicated structures. Though above results, we have made the basic documents of the screening method of thermal transient stresses using the conventional method and the new method. (author)
Computer Animation Based on Particle Methods
Directory of Open Access Journals (Sweden)
Rafal Wcislo
1999-01-01
Full Text Available The paper presents the main issues of a computer animation of a set of elastic macroscopic objects based on the particle method. The main assumption of the generated animations is to achieve very realistic movements in a scene observed on the computer display. The objects (solid bodies interact mechanically with each other, The movements and deformations of solids are calculated using the particle method. Phenomena connected with the behaviour of solids in the gravitational field, their defomtations caused by collisions and interactions with the optional liquid medium are simulated. The simulation ofthe liquid is performed using the cellular automata method. The paper presents both simulation schemes (particle method and cellular automata rules an the method of combining them in the single animation program. ln order to speed up the execution of the program the parallel version based on the network of workstation was developed. The paper describes the methods of the parallelization and it considers problems of load-balancing, collision detection, process synchronization and distributed control of the animation.
Ankle-brachial index by automated method and renal function
Directory of Open Access Journals (Sweden)
Ricardo Pereira Silva
2017-05-01
Full Text Available Background The Ankle-brachial index (ABI is a non-invasive method used for the diagnosis of peripheral arterial occlusive disease (PAOD. Aims To determine the clinical features of patients submitted to ABI measurement by automatic method. To investigate association between ABI and renal function. Methods The present is a cross-sectional study. The study was performed in a private clinic in the city of Fortaleza (Ce- Brazil. For ABI analysis, we utilized automatic methodology using a Microlife device. Data collection took place from March 2012 to January 2016. During this period, ABI was measured in 375 patients aged >50 years, who had a diagnosis of hypertension, diabetes or vascular disease. Results Of the 375 patients, 18 were categorized as having abnormal ABI (4.8 per cent and 357 were normal ABI (95.2 per cent. Patients with abnormal ABI showed older mean age when compared to patients with normal ABI. Among patients with normal renal function, only 0.95 per cent showed abnormal ABI; among patients with abnormal renal function, 6 per cent showed abnormal ABI. Conclusion 1 No differences were observed when comparing the groups regarding gender or the prevalence of hypertension, diabetes, dyslipidaemia or CAD. 2 Group with abnormal ABI had renal function greater impairment.
Optimizing distance-based methods for large data sets
Scholl, Tobias; Brenner, Thomas
2015-10-01
Distance-based methods for measuring spatial concentration of industries have received an increasing popularity in the spatial econometrics community. However, a limiting factor for using these methods is their computational complexity since both their memory requirements and running times are in {{O}}(n^2). In this paper, we present an algorithm with constant memory requirements and shorter running time, enabling distance-based methods to deal with large data sets. We discuss three recent distance-based methods in spatial econometrics: the D&O-Index by Duranton and Overman (Rev Econ Stud 72(4):1077-1106, 2005), the M-function by Marcon and Puech (J Econ Geogr 10(5):745-762, 2010) and the Cluster-Index by Scholl and Brenner (Reg Stud (ahead-of-print):1-15, 2014). Finally, we present an alternative calculation for the latter index that allows the use of data sets with millions of firms.
International Nuclear Information System (INIS)
Nazareth, J. L.
1979-01-01
1 - Description of problem or function: OCOPTR and DRVOCR are computer programs designed to find minima of non-linear differentiable functions f: R n →R with n dimensional domains. OCOPTR requires that the user only provide function values (i.e. it is a derivative-free routine). DRVOCR requires the user to supply both function and gradient information. 2 - Method of solution: OCOPTR and DRVOCR use the variable metric (or quasi-Newton) method of Davidon (1975). For OCOPTR, the derivatives are estimated by finite differences along a suitable set of linearly independent directions. For DRVOCR, the derivatives are user- supplied. Some features of the codes are the storage of the approximation to the inverse Hessian matrix in lower trapezoidal factored form and the use of an optimally-conditioned updating method. Linear equality constraints are permitted subject to the initial Hessian factor being chosen correctly. 3 - Restrictions on the complexity of the problem: The functions to which the routine is applied are assumed to be differentiable. The routine also requires (n 2 /2) + 0(n) storage locations where n is the problem dimension
Dhage Iteration Method for Generalized Quadratic Functional Integral Equations
Directory of Open Access Journals (Sweden)
Bapurao C. Dhage
2015-01-01
Full Text Available In this paper we prove the existence as well as approximations of the solutions for a certain nonlinear generalized quadratic functional integral equation. An algorithm for the solutions is developed and it is shown that the sequence of successive approximations starting at a lower or upper solution converges monotonically to the solutions of related quadratic functional integral equation under some suitable mixed hybrid conditions. We rely our main result on Dhage iteration method embodied in a recent hybrid fixed point theorem of Dhage (2014 in partially ordered normed linear spaces. An example is also provided to illustrate the abstract theory developed in the paper.
Methods library of embedded R functions at Statistics Norway
Directory of Open Access Journals (Sweden)
Øyvind Langsrud
2017-11-01
Full Text Available Statistics Norway is modernising the production processes. An important element in this work is a library of functions for statistical computations. In principle, the functions in such a methods library can be programmed in several languages. A modernised production environment demand that these functions can be reused for different statistics products, and that they are embedded within a common IT system. The embedding should be done in such a way that the users of the methods do not need to know the underlying programming language. As a proof of concept, Statistics Norway soon has established a methods library offering a limited number of methods for macro-editing, imputation and confidentiality. This is done within an area of municipal statistics with R as the only programming language. This paper presents the details and experiences from this work. The problem of fitting real word applications to simple and strict standards is discussed and exemplified by the development of solutions to regression imputation and table suppression.
Image Inpainting Based on Coherence Transport with Adapted Distance Functions
März, Thomas
2011-01-01
We discuss an extension of our method image inpainting based on coherence transport. For the latter method the pixels of the inpainting domain have to be serialized into an ordered list. Until now, to induce the serialization we have used the distance to boundary map. But there are inpainting problems where the distance to boundary serialization causes unsatisfactory inpainting results. In the present work we demonstrate cases where we can resolve the difficulties by employing other distance functions which better suit the problem at hand. © 2011 Society for Industrial and Applied Mathematics.
SYNTHESIS METHODS OF ALGEBRAIC NORMAL FORM OF MANY-VALUED LOGIC FUNCTIONS
Directory of Open Access Journals (Sweden)
A. V. Sokolov
2016-01-01
Full Text Available The rapid development of methods of error-correcting coding, cryptography, and signal synthesis theory based on the principles of many-valued logic determines the need for a more detailed study of the forms of representation of functions of many-valued logic. In particular the algebraic normal form of Boolean functions, also known as Zhegalkin polynomial, that well describe many of the cryptographic properties of Boolean functions is widely used. In this article, we formalized the notion of algebraic normal form for many-valued logic functions. We developed a fast method of synthesis of algebraic normal form of 3-functions and 5-functions that work similarly to the Reed-Muller transform for Boolean functions: on the basis of recurrently synthesized transform matrices. We propose the hypothesis, which determines the rules of the synthesis of these matrices for the transformation from the truth table to the coefficients of the algebraic normal form and the inverse transform for any given number of variables of 3-functions or 5-functions. The article also introduces the definition of algebraic degree of nonlinearity of the functions of many-valued logic and the S-box, based on the principles of many-valued logic. Thus, the methods of synthesis of algebraic normal form of 3-functions applied to the known construction of recurrent synthesis of S-boxes of length N = 3k, whereby their algebraic degrees of nonlinearity are computed. The results could be the basis for further theoretical research and practical applications such as: the development of new cryptographic primitives, error-correcting codes, algorithms of data compression, signal structures, and algorithms of block and stream encryption, all based on the perspective principles of many-valued logic. In addition, the fast method of synthesis of algebraic normal form of many-valued logic functions is the basis for their software and hardware implementation.
A nonlinear analytic function expansion nodal method for transient calculations
Energy Technology Data Exchange (ETDEWEB)
Joo, Han Gyn; Park, Sang Yoon; Cho, Byung Oh; Zee, Sung Quun [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)
1998-12-31
The nonlinear analytic function expansion nodal (AFEN) method is applied to the solution of the time-dependent neutron diffusion equation. Since the AFEN method requires both the particular solution and the homogeneous solution to the transient fixed source problem, the derivation of the solution method is focused on finding the particular solution efficiently. To avoid complicated particular solutions, the source distribution is approximated by quadratic polynomials and the transient source is constructed such that the error due to the quadratic approximation is minimized, In addition, this paper presents a new two-node solution scheme that is derived by imposing the constraint of current continuity at the interface corner points. The method is verified through a series of application to the NEACRP PWR rod ejection benchmark problems. 6 refs., 2 figs., 1 tab. (Author)
A nonlinear analytic function expansion nodal method for transient calculations
Energy Technology Data Exchange (ETDEWEB)
Joo, Han Gyn; Park, Sang Yoon; Cho, Byung Oh; Zee, Sung Quun [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)
1999-12-31
The nonlinear analytic function expansion nodal (AFEN) method is applied to the solution of the time-dependent neutron diffusion equation. Since the AFEN method requires both the particular solution and the homogeneous solution to the transient fixed source problem, the derivation of the solution method is focused on finding the particular solution efficiently. To avoid complicated particular solutions, the source distribution is approximated by quadratic polynomials and the transient source is constructed such that the error due to the quadratic approximation is minimized, In addition, this paper presents a new two-node solution scheme that is derived by imposing the constraint of current continuity at the interface corner points. The method is verified through a series of application to the NEACRP PWR rod ejection benchmark problems. 6 refs., 2 figs., 1 tab. (Author)
Performance analysis, quality function deployment and structured methods
Maier, M. W.
Quality function deployment, (QFD), an approach to synthesizing several elements of system modeling and design into a single unit, is presented. Behavioral, physical, and performance modeling are usually considered as separate aspects of system design without explicit linkages. Structured methodologies have developed linkages between behavioral and physical models before, but have not considered the integration of performance models. QFD integrates performance models with traditional structured models. In this method, performance requirements such as cost, weight, and detection range are partitioned into matrices. Partitioning is done by developing a performance model, preferably quantitative, for each requirement. The parameters of the model become the engineering objectives in a QFD analysis and the models are embedded in a spreadsheet version of the traditional QFD matrices. The performance model and its parameters are used to derive part of the functional model by recognizing that a given performance model implies some structure to the functionality of the system.
The analytic regularization ζ function method and the cut-off method in Casimir effect
International Nuclear Information System (INIS)
Svaiter, N.F.; Svaiter, B.F.
1990-01-01
The zero point energy associated to a hermitian massless scalar field in the presence of perfectly reflecting plates in a three dimensional flat space-time is discussed. A new technique to unify two different methods - the ζ function and a variant of the cut-off method - used to obtain the so called Casimir energy is presented, and the proof of the analytic equivalence between both methods is given. (author)
Application of Influence Function Method to the Fretting Wear Problems
Energy Technology Data Exchange (ETDEWEB)
Lee, Choon Yeol; Tian, Li Si; Bae, Joon Woo; Chai, Young Suck [Yeungnam University, Gyongsan (Korea, Republic of)
2006-07-01
Numerical analysis by influence function method (IFM) is demonstrated in this study in order to investigate the fretting wear problems on the secondary side of the steam generator, caused by flow induced vibration. Two-dimensional numerical contact model in terms of Cauchy integral equation is developed. The distributions of normal pressures, shear stresses and displacement fields are derived between two contact bodies which have similar elastic properties. The work rate model is adopted to find the wear amounts between two materials. The results are compared with the solutions by finite element analyses, which show the utilization of the present method to the fretting wear problems.
Application of Influence Function Method to the Fretting Wear Problems
International Nuclear Information System (INIS)
Lee, Choon Yeol; Tian, Li Si; Bae, Joon Woo; Chai, Young Suck
2006-01-01
Numerical analysis by influence function method (IFM) is demonstrated in this study in order to investigate the fretting wear problems on the secondary side of the steam generator, caused by flow induced vibration. Two-dimensional numerical contact model in terms of Cauchy integral equation is developed. The distributions of normal pressures, shear stresses and displacement fields are derived between two contact bodies which have similar elastic properties. The work rate model is adopted to find the wear amounts between two materials. The results are compared with the solutions by finite element analyses, which show the utilization of the present method to the fretting wear problems
Robust fractional order differentiators using generalized modulating functions method
Liu, Dayan; Laleg-Kirati, Taous-Meriem
2015-01-01
This paper aims at designing a fractional order differentiator for a class of signals satisfying a linear differential equation with unknown parameters. A generalized modulating functions method is proposed first to estimate the unknown parameters, then to derive accurate integral formulae for the left-sided Riemann-Liouville fractional derivatives of the studied signal. Unlike the improper integral in the definition of the left-sided Riemann-Liouville fractional derivative, the integrals in the proposed formulae can be proper and be considered as a low-pass filter by choosing appropriate modulating functions. Hence, digital fractional order differentiators applicable for on-line applications are deduced using a numerical integration method in discrete noisy case. Moreover, some error analysis are given for noise error contributions due to a class of stochastic processes. Finally, numerical examples are given to show the accuracy and robustness of the proposed fractional order differentiators.
Application of Patterson-function direct methods to materials characterization.
Rius, Jordi
2014-09-01
The aim of this article is a general description of the so-called Patterson-function direct methods (PFDM), from their origin to their present state. It covers a 20-year period of methodological contributions to crystal structure solution, most of them published in Acta Crystallographica Section A. The common feature of these variants of direct methods is the introduction of the experimental intensities in the form of the Fourier coefficients of origin-free Patterson-type functions, which allows the active use of both strong and weak reflections. The different optimization algorithms are discussed and their performances compared. This review focuses not only on those PFDM applications related to powder diffraction data but also on some recent results obtained with electron diffraction tomography data.
Application of Patterson-function direct methods to materials characterization
Directory of Open Access Journals (Sweden)
Jordi Rius
2014-09-01
Full Text Available The aim of this article is a general description of the so-called Patterson-function direct methods (PFDM, from their origin to their present state. It covers a 20-year period of methodological contributions to crystal structure solution, most of them published in Acta Crystallographica Section A. The common feature of these variants of direct methods is the introduction of the experimental intensities in the form of the Fourier coefficients of origin-free Patterson-type functions, which allows the active use of both strong and weak reflections. The different optimization algorithms are discussed and their performances compared. This review focuses not only on those PFDM applications related to powder diffraction data but also on some recent results obtained with electron diffraction tomography data.
Robust fractional order differentiators using generalized modulating functions method
Liu, Dayan
2015-02-01
This paper aims at designing a fractional order differentiator for a class of signals satisfying a linear differential equation with unknown parameters. A generalized modulating functions method is proposed first to estimate the unknown parameters, then to derive accurate integral formulae for the left-sided Riemann-Liouville fractional derivatives of the studied signal. Unlike the improper integral in the definition of the left-sided Riemann-Liouville fractional derivative, the integrals in the proposed formulae can be proper and be considered as a low-pass filter by choosing appropriate modulating functions. Hence, digital fractional order differentiators applicable for on-line applications are deduced using a numerical integration method in discrete noisy case. Moreover, some error analysis are given for noise error contributions due to a class of stochastic processes. Finally, numerical examples are given to show the accuracy and robustness of the proposed fractional order differentiators.
Water hammer prediction and control: the Green's function method
Xuan, Li-Jun; Mao, Feng; Wu, Jie-Zhi
2012-04-01
By Green's function method we show that the water hammer (WH) can be analytically predicted for both laminar and turbulent flows (for the latter, with an eddy viscosity depending solely on the space coordinates), and thus its hazardous effect can be rationally controlled and minimized. To this end, we generalize a laminar water hammer equation of Wang et al. (J. Hydrodynamics, B2, 51, 1995) to include arbitrary initial condition and variable viscosity, and obtain its solution by Green's function method. The predicted characteristic WH behaviors by the solutions are in excellent agreement with both direct numerical simulation of the original governing equations and, by adjusting the eddy viscosity coefficient, experimentally measured turbulent flow data. Optimal WH control principle is thereby constructed and demonstrated.
An efficient method for hybrid density functional calculation with spin-orbit coupling
Wang, Maoyuan; Liu, Gui-Bin; Guo, Hong; Yao, Yugui
2018-03-01
In first-principles calculations, hybrid functional is often used to improve accuracy from local exchange correlation functionals. A drawback is that evaluating the hybrid functional needs significantly more computing effort. When spin-orbit coupling (SOC) is taken into account, the non-collinear spin structure increases computing effort by at least eight times. As a result, hybrid functional calculations with SOC are intractable in most cases. In this paper, we present an approximate solution to this problem by developing an efficient method based on a mixed linear combination of atomic orbital (LCAO) scheme. We demonstrate the power of this method using several examples and we show that the results compare very well with those of direct hybrid functional calculations with SOC, yet the method only requires a computing effort similar to that without SOC. The presented technique provides a good balance between computing efficiency and accuracy, and it can be extended to magnetic materials.
Green's function method for perturbed Korteweg-de Vries equation
International Nuclear Information System (INIS)
Cai Hao; Huang Nianning
2003-01-01
The x-derivatives of squared Jost solution are the eigenfunctions with the zero eigenvalue of the linearized equation derived from the perturbed Korteweg-de Vries equation. A method similar to Green's function formalism is introduced to show the completeness of the squared Jost solutions in multi-soliton cases. It is not related to Lax equations directly, and thus it is beneficial to deal with the nonlinear equations with complicated Lax pair
Large deviations and queueing networks: Methods for rate function identification
Atar, Rami; Dupuis, Paul
1999-01-01
This paper considers the problem of rate function identification for multidimensional queueing models with feedback. A set of techniques are introduced which allow this identification when the model possesses certain structural properties. The main tools used are representation formulas for exponential integrals, weak convergence methods, and the regularity properties of associated Skorokhod Problems. Two examples are treated as special cases of the general theory: the classical Jackson netwo...
The Innovative Bike Conceptual Design by Using Modified Functional Element Design Method
Directory of Open Access Journals (Sweden)
Nien-Te Liu
2016-11-01
Full Text Available The purpose of the study is to propose a new design process by modifying functional element design approach which can commence a large amount of innovative concepts within a short period of time. Firstly, the original creative functional elements design method is analyzed and the drawbacks are discussed. Then, the modified is proposed and is divided into 6 steps. The creative functional element representations, generalization, specialization, and particularization are used in this method. Every step is described clearly, and users could design by following the process easily. In this paper, a clear and accurate design process is proposed based on the creative functional element design method. By following this method, a lot of innovative bicycles will be created quickly.
Comparison of lists of genes based on functional profiles
Directory of Open Access Journals (Sweden)
Salicrú Miquel
2011-10-01
Full Text Available Abstract Background How to compare studies on the basis of their biological significance is a problem of central importance in high-throughput genomics. Many methods for performing such comparisons are based on the information in databases of functional annotation, such as those that form the Gene Ontology (GO. Typically, they consist of analyzing gene annotation frequencies in some pre-specified GO classes, in a class-by-class way, followed by p-value adjustment for multiple testing. Enrichment analysis, where a list of genes is compared against a wider universe of genes, is the most common example. Results A new global testing procedure and a method incorporating it are presented. Instead of testing separately for each GO class, a single global test for all classes under consideration is performed. The test is based on the distance between the functional profiles, defined as the joint frequencies of annotation in a given set of GO classes. These classes may be chosen at one or more GO levels. The new global test is more powerful and accurate with respect to type I errors than the usual class-by-class approach. When applied to some real datasets, the results suggest that the method may also provide useful information that complements the tests performed using a class-by-class approach if gene counts are sparse in some classes. An R library, goProfiles, implements these methods and is available from Bioconductor, http://bioconductor.org/packages/release/bioc/html/goProfiles.html. Conclusions The method provides an inferential basis for deciding whether two lists are functionally different. For global comparisons it is preferable to the global chi-square test of homogeneity. Furthermore, it may provide additional information if used in conjunction with class-by-class methods.
New technology-based recruitment methods
Oksanen, Reija
2018-01-01
The transformation that recruitment might encounter due to big data analytics and artificial intelligence (AI) is particularly fascinating which is why this thesis focuses on the changes recruitment processes are and will be facing as new technological solutions are emerging. The aim and main objective of this study is to widen knowledge about new technology-based recruitment methods, focusing on how they are utilized by Finnish recruitment professionals and how the opportunities and risks th...
The Green's function method for critical heterogeneous slabs
International Nuclear Information System (INIS)
Kornreich, D.E.
1996-01-01
Recently, the Green's Function Method (GFM) has been employed to obtain benchmark-quality results for nuclear engineering and radiative transfer calculations. This was possible because of fast and accurate calculations of the Green's function and the associated Fourier and Laplace transform inversions. Calculations have been provided in one-dimensional slab geometries for both homogeneous and heterogeneous media. A heterogeneous medium is analyzed as a series of homogeneous slabs, and Placzek's lemma is used to extend each slab to infinity. This allows use of the infinite medium Green's function (the anisotropic plane source in an infinite homogeneous medium) in the solution. To this point, a drawback of the GFM has been the limitation to media with c 1; however, mathematical solutions exist which result in oscillating Green's functions. Such calculations are briefly discussing. The limitation to media with c < 1 has been relaxed so that the Green's function may also be calculated for media with c ≥ 1. Thus, materials that contain fissionable isotopes may be modeled
The Reliasep method used for the functional modeling of complex systems
International Nuclear Information System (INIS)
Dubiez, P.; Gaufreteau, P.; Pitton, J.P.
1997-07-01
The RELIASEP R method and its support tool have been recommended to carry out the functional analysis of large systems within the framework of the design of new power units. Let us first recall the principles of the method based on the breakdown of functions into tree(s). These functions are characterised by their performance and constraints. Then the main modifications made under EDF requirement and in particular the 'viewpoints' analyses are presented. The knowledge obtained from the first studies carried out are discussed. (author)
Solution of the generalized Emden-Fowler equations by the hybrid functions method
International Nuclear Information System (INIS)
Tabrizidooz, H R; Marzban, H R; Razzaghi, M
2009-01-01
In this paper, we present a numerical algorithm for solving the generalized Emden-Fowler equations, which have many applications in mathematical physics and astrophysics. The method is based on hybrid functions approximations. The properties of hybrid functions, which consist of block-pulse functions and Lagrange interpolating polynomials, are presented. These properties are then utilized to reduce the computation of the generalized Emden-Fowler equations to a system of nonlinear equations. The method is easy to implement and yields very accurate results.
The Reliasep method used for the functional modeling of complex systems
Energy Technology Data Exchange (ETDEWEB)
Dubiez, P.; Gaufreteau, P.; Pitton, J.P
1997-07-01
The RELIASEP{sup R} method and its support tool have been recommended to carry out the functional analysis of large systems within the framework of the design of new power units. Let us first recall the principles of the method based on the breakdown of functions into tree(s). These functions are characterised by their performance and constraints. Then the main modifications made under EDF requirement and in particular the `viewpoints` analyses are presented. The knowledge obtained from the first studies carried out are discussed. (author)
EPC: A Provably Secure Permutation Based Compression Function
DEFF Research Database (Denmark)
Bagheri, Nasour; Gauravaram, Praveen; Naderi, Majid
2010-01-01
The security of permutation-based hash functions in the ideal permutation model has been studied when the input-length of compression function is larger than the input-length of the permutation function. In this paper, we consider permutation based compression functions that have input lengths sh...
Method of hyperspherical functions in a few-body quantum mechanics
International Nuclear Information System (INIS)
Dzhibuti, R.I.; Krupennikova, N.B.
1984-01-01
A new method for solving a few-body problem in quantum mechanics based on the expansion of the wave function of many-particle system in terms of basis hyperspherical functions is outlined in the monograph. This method gives the possibility to obtain important results in nuclear physics. A materials of general character is presented which can be useful when considering a few-body problem in atomic and molecular physics as well as in elementary particle physics. The paper deals with the theory of hyperspherical functions and the method of expansion in terms of hyperspherical functions basis can be formally considered as a certain generalization of the partial expansion method in the two-body problem. The Raynal-Revai theory is stated for the three-body problem and coe-- fficients of unitary transformations for four-particle hyperspherical function coefficients are introduced. Five-particle hyperspherical functions are introduced and an attempt of generalization of the theory for the systems With any number of particles has been made. The rules of plotting symmetrized hyperspherical functions for three and four identical particles are given. Also described is the method of expansion in terms of hyperspherical functions basis in the coordinate and impulse representations for discrete and continuous spectrum, respectively
Level set method for image segmentation based on moment competition
Min, Hai; Wang, Xiao-Feng; Huang, De-Shuang; Jin, Jing; Wang, Hong-Zhi; Li, Hai
2015-05-01
We propose a level set method for image segmentation which introduces the moment competition and weakly supervised information into the energy functional construction. Different from the region-based level set methods which use force competition, the moment competition is adopted to drive the contour evolution. Here, a so-called three-point labeling scheme is proposed to manually label three independent points (weakly supervised information) on the image. Then the intensity differences between the three points and the unlabeled pixels are used to construct the force arms for each image pixel. The corresponding force is generated from the global statistical information of a region-based method and weighted by the force arm. As a result, the moment can be constructed and incorporated into the energy functional to drive the evolving contour to approach the object boundary. In our method, the force arm can take full advantage of the three-point labeling scheme to constrain the moment competition. Additionally, the global statistical information and weakly supervised information are successfully integrated, which makes the proposed method more robust than traditional methods for initial contour placement and parameter setting. Experimental results with performance analysis also show the superiority of the proposed method on segmenting different types of complicated images, such as noisy images, three-phase images, images with intensity inhomogeneity, and texture images.
An overview of modal-based damage identification methods
Energy Technology Data Exchange (ETDEWEB)
Farrar, C.R.; Doebling, S.W. [Los Alamos National Lab., NM (United States). Engineering Analysis Group
1997-09-01
This paper provides an overview of methods that examine changes in measured vibration response to detect, locate, and characterize damage in structural and mechanical systems. The basic idea behind this technology is that modal parameters (notably frequencies, mode shapes, and modal damping) are functions of the physical properties of the structure (mass, damping, and stiffness). Therefore, changes in the physical properties will cause detectable changes in the modal properties. The motivation for the development of this technology is first provided. The methods are then categorized according to various criteria such as the level of damage detection provided, model-based vs. non-model-based methods and linear vs. nonlinear methods. This overview is limited to methods that can be adapted to a wide range of structures (i.e., are not dependent on a particular assumed model form for the system such as beam-bending behavior and methods and that are not based on updating finite element models). Next, the methods are described in general terms including difficulties associated with their implementation and their fidelity. Past, current and future-planned applications of this technology to actual engineering systems are summarized. The paper concludes with a discussion of critical issues for future research in the area of modal-based damage identification.
A multicore based parallel image registration method.
Yang, Lin; Gong, Leiguang; Zhang, Hong; Nosher, John L; Foran, David J
2009-01-01
Image registration is a crucial step for many image-assisted clinical applications such as surgery planning and treatment evaluation. In this paper we proposed a landmark based nonlinear image registration algorithm for matching 2D image pairs. The algorithm was shown to be effective and robust under conditions of large deformations. In landmark based registration, the most important step is establishing the correspondence among the selected landmark points. This usually requires an extensive search which is often computationally expensive. We introduced a nonregular data partition algorithm using the K-means clustering algorithm to group the landmarks based on the number of available processing cores. The step optimizes the memory usage and data transfer. We have tested our method using IBM Cell Broadband Engine (Cell/B.E.) platform.
Quasiaverages, symmetry breaking and irreducible Green functions method
Directory of Open Access Journals (Sweden)
A.L.Kuzemsky
2010-01-01
Full Text Available The development and applications of the method of quasiaverages to quantum statistical physics and to quantum solid state theory and, in particular, to quantum theory of magnetism, were considered. It was shown that the role of symmetry (and the breaking of symmetries in combination with the degeneracy of the system was reanalyzed and essentially clarified within the framework of the method of quasiaverages. The problem of finding the ferromagnetic, antiferromagnetic and superconducting "symmetry broken" solutions of the correlated lattice fermion models was discussed within the irreducible Green functions method. A unified scheme for the construction of generalized mean fields (elastic scattering corrections and self-energy (inelastic scattering in terms of the equations of motion and Dyson equation was generalized in order to include the "source fields". This approach complements previous studies of microscopic theory of antiferromagnetism and clarifies the concepts of Neel sublattices for localized and itinerant antiferromagnetism and "spin-aligning fields" of correlated lattice fermions.
Lagrangian based methods for coherent structure detection
Energy Technology Data Exchange (ETDEWEB)
Allshouse, Michael R., E-mail: mallshouse@chaos.utexas.edu [Center for Nonlinear Dynamics and Department of Physics, University of Texas at Austin, Austin, Texas 78712 (United States); Peacock, Thomas, E-mail: tomp@mit.edu [Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139 (United States)
2015-09-15
There has been a proliferation in the development of Lagrangian analytical methods for detecting coherent structures in fluid flow transport, yielding a variety of qualitatively different approaches. We present a review of four approaches and demonstrate the utility of these methods via their application to the same sample analytic model, the canonical double-gyre flow, highlighting the pros and cons of each approach. Two of the methods, the geometric and probabilistic approaches, are well established and require velocity field data over the time interval of interest to identify particularly important material lines and surfaces, and influential regions, respectively. The other two approaches, implementing tools from cluster and braid theory, seek coherent structures based on limited trajectory data, attempting to partition the flow transport into distinct regions. All four of these approaches share the common trait that they are objective methods, meaning that their results do not depend on the frame of reference used. For each method, we also present a number of example applications ranging from blood flow and chemical reactions to ocean and atmospheric flows.
Evaluation of left ventricular function by invasive and noninvasive methods
Energy Technology Data Exchange (ETDEWEB)
Kusukawa, R [Yamaguchi Univ., Ube (Japan). School of Medicine
1982-06-01
Noninvasive methods in cardiology have progressed very rapidly in recent years. Cardiac catheterization and angiocardiography are the standard methods for evaluating of cardiac performance, however, they need expensive apparatus and are time-consuming, arduous procedures which do not permit to repeat frequently, and sometimes risky. In this article, the indices of pump and muscle function of the heart obtained by invasive methods were compared to those indices obtained by noninvasive methods, and correlation between two groups and usefulness and limitation were discussed. Systolic time intervals are convenient and repeatable measures of left ventricular performance in clinical cardiology. There are significant correlations of PEP/LVET with stroke volume, ejection fraction and mean circumferential shortening velocity. Although some limitations are present in application of this method to certain diseases, these measures are useful in the evaluation of left ventricular performance. Echocardiography has made an era of the noninvasive cardiology. Left ventricular volume, ejection fraction, mean circumferential shortening velocity and PSP/ESVI are accurately calculated by echocardiographic measurement. Nuclear cardiology is also accurate noninvasive method in evaluation of cardiac performance. With this tremendous growth in this field, it will make next era of noninvasive cardiology.
International Nuclear Information System (INIS)
Conte, Elio; Khrennikov, Andrei; Federici, Antonio; Zbilut, Joseph P.
2009-01-01
We develop a new method for analysis of fundamental brain waves as recorded by the EEG. To this purpose we introduce a Fractal Variance Function that is based on the calculation of the variogram. The method is completed by using Random Matrix Theory. Some examples are given. We also discuss the link of such formulation with H. Weiss and V. Weiss golden ratio found in the brain, and with El Naschie fractal Cantorian space-time theory.
Walker, Mathew W; Lloyd-Evans, Emyr
2015-01-01
Lysosomes are an emerging and increasingly important cellular organelle. With every passing year, more novel proteins and key cellular functions are associated with lysosomes. Despite this, the methodologies for their purification have largely remained unchanged since the days of their discovery. With little advancement in this area, it is no surprise that analysis of lysosomal function has been somewhat stymied, largely in part by the change in buoyant densities that occur under conditions where lysosomes accumulate macromolecules. Such phenotypes are often associated with the lysosomal storage diseases but are increasingly being observed under conditions where lysosomal proteins or, in some cases, cellular functions associated with lysosomal proteins are being manipulated. These altered lysosomes poise a problem to the classical methods to purify lysosomes that are reliant largely on their correct sedimentation by density gradient centrifugation. Building upon a technique developed by others to purify lysosomes magnetically, we have developed a unique assay using superparamagnetic iron oxide nanoparticles (SPIONs) to purify high yields of ultrapure functional lysosomes from multiple cell types including the lysosomal storage disorders. Here we describe this method in detail, including the rationale behind using SPIONs, the potential pitfalls that can be avoided and the potential functional assays these lysosomes can be used for. Finally we also summarize the other methodologies and the exact reasons why magnetic purification of lysosomes is now the method of choice for lysosomal researchers. Copyright © 2015 Elsevier Inc. All rights reserved.
The multifacet graphically contracted function method. I. Formulation and implementation
International Nuclear Information System (INIS)
Shepard, Ron; Brozell, Scott R.; Gidofalvi, Gergely
2014-01-01
The basic formulation for the multifacet generalization of the graphically contracted function (MFGCF) electronic structure method is presented. The analysis includes the discussion of linear dependency and redundancy of the arc factor parameters, the computation of reduced density matrices, Hamiltonian matrix construction, spin-density matrix construction, the computation of optimization gradients for single-state and state-averaged calculations, graphical wave function analysis, and the efficient computation of configuration state function and Slater determinant expansion coefficients. Timings are given for Hamiltonian matrix element and analytic optimization gradient computations for a range of model problems for full-CI Shavitt graphs, and it is observed that both the energy and the gradient computation scale as O(N 2 n 4 ) for N electrons and n orbitals. The important arithmetic operations are within dense matrix-matrix product computational kernels, resulting in a computationally efficient procedure. An initial implementation of the method is used to present applications to several challenging chemical systems, including N 2 dissociation, cubic H 8 dissociation, the symmetric dissociation of H 2 O, and the insertion of Be into H 2 . The results are compared to the exact full-CI values and also to those of the previous single-facet GCF expansion form
The multifacet graphically contracted function method. I. Formulation and implementation
Shepard, Ron; Gidofalvi, Gergely; Brozell, Scott R.
2014-08-01
The basic formulation for the multifacet generalization of the graphically contracted function (MFGCF) electronic structure method is presented. The analysis includes the discussion of linear dependency and redundancy of the arc factor parameters, the computation of reduced density matrices, Hamiltonian matrix construction, spin-density matrix construction, the computation of optimization gradients for single-state and state-averaged calculations, graphical wave function analysis, and the efficient computation of configuration state function and Slater determinant expansion coefficients. Timings are given for Hamiltonian matrix element and analytic optimization gradient computations for a range of model problems for full-CI Shavitt graphs, and it is observed that both the energy and the gradient computation scale as O(N2n4) for N electrons and n orbitals. The important arithmetic operations are within dense matrix-matrix product computational kernels, resulting in a computationally efficient procedure. An initial implementation of the method is used to present applications to several challenging chemical systems, including N2 dissociation, cubic H8 dissociation, the symmetric dissociation of H2O, and the insertion of Be into H2. The results are compared to the exact full-CI values and also to those of the previous single-facet GCF expansion form.
Piret, Cécile
2012-05-01
Much work has been done on reconstructing arbitrary surfaces using the radial basis function (RBF) method, but one can hardly find any work done on the use of RBFs to solve partial differential equations (PDEs) on arbitrary surfaces. In this paper, we investigate methods to solve PDEs on arbitrary stationary surfaces embedded in . R3 using the RBF method. We present three RBF-based methods that easily discretize surface differential operators. We take advantage of the meshfree character of RBFs, which give us a high accuracy and the flexibility to represent the most complex geometries in any dimension. Two out of the three methods, which we call the orthogonal gradients (OGr) methods are the result of our work and are hereby presented for the first time. © 2012 Elsevier Inc.
Chapter 11. Community analysis-based methods
Energy Technology Data Exchange (ETDEWEB)
Cao, Y.; Wu, C.H.; Andersen, G.L.; Holden, P.A.
2010-05-01
Microbial communities are each a composite of populations whose presence and relative abundance in water or other environmental samples are a direct manifestation of environmental conditions, including the introduction of microbe-rich fecal material and factors promoting persistence of the microbes therein. As shown by culture-independent methods, different animal-host fecal microbial communities appear distinctive, suggesting that their community profiles can be used to differentiate fecal samples and to potentially reveal the presence of host fecal material in environmental waters. Cross-comparisons of microbial communities from different hosts also reveal relative abundances of genetic groups that can be used to distinguish sources. In increasing order of their information richness, several community analysis methods hold promise for MST applications: phospholipid fatty acid (PLFA) analysis, denaturing gradient gel electrophoresis (DGGE), terminal restriction fragment length polymorphism (TRFLP), cloning/sequencing, and PhyloChip. Specific case studies involving TRFLP and PhyloChip approaches demonstrate the ability of community-based analyses of contaminated waters to confirm a diagnosis of water quality based on host-specific marker(s). The success of community-based MST for comprehensively confirming fecal sources relies extensively upon using appropriate multivariate statistical approaches. While community-based MST is still under evaluation and development as a primary diagnostic tool, results presented herein demonstrate its promise. Coupled with its inherently comprehensive ability to capture an unprecedented amount of microbiological data that is relevant to water quality, the tools for microbial community analysis are increasingly accessible, and community-based approaches have unparalleled potential for translation into rapid, perhaps real-time, monitoring platforms.
Convex functions and optimization methods on Riemannian manifolds
Udrişte, Constantin
1994-01-01
This unique monograph discusses the interaction between Riemannian geometry, convex programming, numerical analysis, dynamical systems and mathematical modelling. The book is the first account of the development of this subject as it emerged at the beginning of the 'seventies. A unified theory of convexity of functions, dynamical systems and optimization methods on Riemannian manifolds is also presented. Topics covered include geodesics and completeness of Riemannian manifolds, variations of the p-energy of a curve and Jacobi fields, convex programs on Riemannian manifolds, geometrical constructions of convex functions, flows and energies, applications of convexity, descent algorithms on Riemannian manifolds, TC and TP programs for calculations and plots, all allowing the user to explore and experiment interactively with real life problems in the language of Riemannian geometry. An appendix is devoted to convexity and completeness in Finsler manifolds. For students and researchers in such diverse fields as pu...
Meshfree Local Radial Basis Function Collocation Method with Image Nodes
Energy Technology Data Exchange (ETDEWEB)
Baek, Seung Ki; Kim, Minjae [Pukyong National University, Busan (Korea, Republic of)
2017-07-15
We numerically solve two-dimensional heat diffusion problems by using a simple variant of the meshfree local radial-basis function (RBF) collocation method. The main idea is to include an additional set of sample nodes outside the problem domain, similarly to the method of images in electrostatics, to perform collocation on the domain boundaries. We can thereby take into account the temperature profile as well as its gradients specified by boundary conditions at the same time, which holds true even for a node where two or more boundaries meet with different boundary conditions. We argue that the image method is computationally efficient when combined with the local RBF collocation method, whereas the addition of image nodes becomes very costly in case of the global collocation. We apply our modified method to a benchmark test of a boundary value problem, and find that this simple modification reduces the maximum error from the analytic solution significantly. The reduction is small for an initial value problem with simpler boundary conditions. We observe increased numerical instability, which has to be compensated for by a sufficient number of sample nodes and/or more careful parameter choices for time integration.
Validating the JobFit system functional assessment method
Energy Technology Data Exchange (ETDEWEB)
Jenny Legge; Robin Burgess-Limerick
2007-05-15
Workplace injuries are costing the Australian coal mining industry and its communities $410 Million a year. This ACARP study aims to meet those demands by developing a safe, reliable and valid pre-employment functional assessment tool. All JobFit System Pre-Employment Functional Assessments (PEFAs) consist of a musculoskeletal screen, balance test, aerobic fitness test and job-specific postural tolerances and material handling tasks. The results of each component are compared to the applicant's job demands and an overall PEFA score between 1 and 4 is given with 1 being the better score. The reliability study and validity study were conducted concurrently. The reliability study examined test-retest, intra-tester and inter-tester reliability of the JobFit System Functional Assessment Method. Overall, good to excellent reliability was found, which was sufficient to be used for comparison with injury data for determining the validity of the assessment. The overall assessment score and material handling tasks had the greatest reliability. The validity study compared the assessment results of 336 records from a Queensland underground and open cut coal mine with their injury records. A predictive relationship was found between PEFA score and the risk of a back/trunk/shoulder injury from manual handling. An association was also found between PEFA score of 1 and increased length of employment. Lower aerobic fitness test results had an inverse relationship with injury rates. The study found that underground workers, regardless of PEFA score, were more likely to have an injury when compared to other departments. No relationship was found between age and risk of injury. These results confirm the validity of the JobFit System Functional Assessment method.
Membrane mimetic surface functionalization of nanoparticles: Methods and applications
Weingart, Jacob; Vabbilisetty, Pratima; Sun, Xue-Long
2013-01-01
Nanoparticles (NPs), due to their size-dependent physical and chemical properties, have shown remarkable potential for a wide range of applications over the past decades. Particularly, the biological compatibilities and functions of NPs have been extensively studied for expanding their potential in areas of biomedical application such as bioimaging, biosensing, and drug delivery. In doing so, surface functionalization of NPs by introducing synthetic ligands and/or natural biomolecules has become a critical component in regards to the overall performance of the NP system for its intended use. Among known examples of surface functionalization, the construction of an artificial cell membrane structure, based on phospholipids, has proven effective in enhancing biocompatibility and has become a viable alternative to more traditional modifications, such as direct polymer conjugation. Furthermore, certain bioactive molecules can be immobilized onto the surface of phospholipid platforms to generate displays more reminiscent of cellular surface components. Thus, NPs with membrane-mimetic displays have found use in a range of bioimaging, biosensing, and drug delivery applications. This review herein describes recent advances in the preparations and characterization of integrated functional NPs covered by artificial cell membrane structures and their use in various biomedical applications. PMID:23688632
International Nuclear Information System (INIS)
Kowalski, Karol; Valiev, Marat
2009-01-01
The recently introduced energy expansion based on the use of generating functional (GF) [K. Kowalski and P. D. Fan, J. Chem. Phys. 130, 084112 (2009)] provides a way of constructing size-consistent noniterative coupled cluster (CC) corrections in terms of moments of the CC equations. To take advantage of this expansion in a strongly interacting regime, the regularization of the cluster amplitudes is required in order to counteract the effect of excessive growth of the norm of the CC wave function. Although proven to be efficient, the previously discussed form of the regularization does not lead to rigorously size-consistent corrections. In this paper we address the issue of size-consistent regularization of the GF expansion by redefining the equations for the cluster amplitudes. The performance and basic features of proposed methodology are illustrated on several gas-phase benchmark systems. Moreover, the regularized GF approaches are combined with quantum mechanical molecular mechanics module and applied to describe the S N 2 reaction of CHCl 3 and OH - in aqueous solution.
Recent Advances in the Korringa-Kohn-Rostoker Green Function Method
Directory of Open Access Journals (Sweden)
Zeller Rudolf
2014-01-01
Full Text Available The Korringa-Kohn-Rostoker (KKR Green function (GF method is a technique for all-electron full-potential density-functional calculations. Similar to the historical Wigner-Seitz cellular method, the KKR-GF method uses a partitioning of space into atomic Wigner-Seitz cells. However, the numerically demanding wave-function matching at the cell boundaries is avoided by use of an integral equation formalism based on the concept of reference Green functions. The advantage of this formalism will be illustrated by the recent progress made for very large systems with thousands of inequivalent atoms and for very accurate calculations of atomic forces and total energies.
Clinical evaluation of right ventricular function using radionuclide method
International Nuclear Information System (INIS)
Ishii, Yasushi; Tamaki, Nagayoshi; Mukai, Takao; Motohara, Seiichito; Ikekubo, Katsuji.
1982-01-01
Essential thing to evaluate the right ventricular (RV) function is its volumetric measurement. However, its geometrical complexity has hampered this even with the contrast ventriculography, unlike left ventricle (LV). Meanwhile, the radionuclide time-activity curve from the first pass of a tracer through RV in the RAO view provides the most reliable data of the RV ejection fraction (RVEF), same data from the multigated equilibrium study in the LAO view is necessitated for a repeated intervention study, but the latter imposes a critical problem to locate ROI separately in the LAO view. Finally, three dimensional location of RV should be mandatory using new method as the dynamic SPECT in the future. (author)
Parametric potential determination by the canonical function method
International Nuclear Information System (INIS)
Tannous, C.; Fakhreddine, K.; Langlois, J.
1999-01-01
The canonical function method (CFM) is a powerful means for solving the radial Schroedinger equation (RSE). The mathematical difficulty of the RSE lies in the fact it is a singular boundary value problem. The CFM turns it into a regular initial value problem and allows the full determination of the spectrum of the Schroedinger operator without calculating the eigenfunctions. Following the parametrisation suggested by Klapisch and Green-Sellin-Zachor we develop a CFM to optimise the potential parameters in order to reproduce the experimental quantum defect results for various Rydberg series of He, Ne and Ar as evaluated from Moore's data. (orig.)
Simulation of ecological processes using response functions method
International Nuclear Information System (INIS)
Malkina-Pykh, I.G.; Pykh, Yu. A.
1998-01-01
The article describes further development and applications of the already well-known response functions method (MRF). The method is used as a basis for the development of mathematical models of a wide set of ecological processes. The model of radioactive contamination of the ecosystems is chosen as an example. The mathematical model was elaborated for the description of 90 Sr dynamics in the elementary ecosystems of various geographical zones. The model includes the blocks corresponding with the main units of any elementary ecosystem: lower atmosphere, soil, vegetation, surface water. Parameters' evaluation was provided on a wide set of experimental data. A set of computer simulations was done on the model to prove the possibility of the model's use for ecological forecasting
Model of coupling with core in the Green function method
International Nuclear Information System (INIS)
Kamerdzhiev, S.P.; Tselyaev, V.I.
1983-01-01
Models of coupling with core in the method of the Green functions, presenting generalization of conventional method of chaotic phases, i.e. account of configurations of more complex than monoparticle-monohole (1p1h) configurations, have been considered. Odd nuclei are studied only to the extent when the task of odd nucleus is solved for even-even nucleus. Microscopic model of the account of delay effects in mass operator M=M(epsilon), which corresponds to the account of the effects influence only on the change of quasiparticle behaviour in magic nucleus as compared with their behaviour, described by pure model of cores, has been considered. The change results in fragmentation of monoparticle levels, which is the main effect, and in the necessity to use new basis as compared with the shell one, corresponding to inoculative quasiparticles. When formulas have been devived concrete type of mass operator M(epsilon) is not used
Atlas-based identification of targets for functional radiosurgery
International Nuclear Information System (INIS)
Stancanello, Joseph; Romanelli, Pantaleo; Modugno, Nicola; Cerveri, Pietro; Ferrigno, Giancarlo; Uggeri, Fulvio; Cantore, Giampaolo
2006-01-01
Functional disorders of the brain, such as Parkinson's disease, dystonia, epilepsy, and neuropathic pain, may exhibit poor response to medical therapy. In such cases, surgical intervention may become necessary. Modern surgical approaches to such disorders include radio-frequency lesioning and deep brain stimulation (DBS). The subthalamic nucleus (STN) is one of the most useful stereotactic targets available: STN DBS is known to induce substantial improvement in patients with end-stage Parkinson's disease. Other targets include the Globus Pallidus pars interna (GPi) for dystonia and Parkinson's disease, and the centromedian nucleus of the thalamus (CMN) for neuropathic pain. Radiosurgery is an attractive noninvasive alternative to treat some functional brain disorders. The main technical limitation to radiosurgery is that the target can be selected only on the basis of magnetic resonance anatomy without electrophysiological confirmation. The aim of this work is to provide a method for the correct atlas-based identification of the target to be used in functional neurosurgery treatment planning. The coordinates of STN, CMN, and GPi were identified in the Talairach and Tournoux atlas and transformed to the corresponding regions of the Montreal Neurological Institute (MNI) electronic atlas. Binary masks describing the target nuclei were created. The MNI electronic atlas was deformed onto the patient magnetic resonance imaging-T1 scan by applying an affine transformation followed by a local nonrigid registration. The first transformation was based on normalized cross correlation and the second on optimization of a two-part objective function consisting of similarity criteria and weighted regularization. The obtained deformation field was then applied to the target masks. The minimum distance between the surface of an implanted electrode and the surface of the deformed mask was calculated. The validation of the method consisted of comparing the electrode-mask distance to
Methods for transient assay of gene function in floral tissues
Directory of Open Access Journals (Sweden)
Pathirana Nilangani N
2007-01-01
Full Text Available Abstract Background There is considerable interest in rapid assays or screening systems for assigning gene function. However, analysis of gene function in the flowers of some species is restricted due to the difficulty of producing stably transformed transgenic plants. As a result, experimental approaches based on transient gene expression assays are frequently used. Biolistics has long been used for transient over-expression of genes of interest, but has not been exploited for gene silencing studies. Agrobacterium-infiltration has also been used, but the focus primarily has been on the transient transformation of leaf tissue. Results Two constructs, one expressing an inverted repeat of the Antirrhinum majus (Antirrhinum chalcone synthase gene (CHS and the other an inverted repeat of the Antirrhinum transcription factor gene Rosea1, were shown to effectively induce CHS and Rosea1 gene silencing, respectively, when introduced biolistically into petal tissue of Antirrhinum flowers developing in vitro. A high-throughput vector expressing the Antirrhinum CHS gene attached to an inverted repeat of the nos terminator was also shown to be effective. Silencing spread systemically to create large zones of petal tissue lacking pigmentation, with transmission of the silenced state spreading both laterally within the affected epidermal cell layer and into lower cell layers, including the epidermis of the other petal surface. Transient Agrobacterium-mediated transformation of petal tissue of tobacco and petunia flowers in situ or detached was also achieved, using expression of the reporter genes GUS and GFP to visualise transgene expression. Conclusion We demonstrate the feasibility of using biolistics-based transient RNAi, and transient transformation of petal tissue via Agrobacterium infiltration to study gene function in petals. We have also produced a vector for high throughput gene silencing studies, incorporating the option of using T-A cloning to
Information filtering via a scaling-based function.
Directory of Open Access Journals (Sweden)
Tian Qiu
Full Text Available Finding a universal description of the algorithm optimization is one of the key challenges in personalized recommendation. In this article, for the first time, we introduce a scaling-based algorithm (SCL independent of recommendation list length based on a hybrid algorithm of heat conduction and mass diffusion, by finding out the scaling function for the tunable parameter and object average degree. The optimal value of the tunable parameter can be abstracted from the scaling function, which is heterogeneous for the individual object. Experimental results obtained from three real datasets, Netflix, MovieLens and RYM, show that the SCL is highly accurate in recommendation. More importantly, compared with a number of excellent algorithms, including the mass diffusion method, the original hybrid method, and even an improved version of the hybrid method, the SCL algorithm remarkably promotes the personalized recommendation in three other aspects: solving the accuracy-diversity dilemma, presenting a high novelty, and solving the key challenge of cold start problem.
Functional networks inference from rule-based machine learning models.
Lazzarini, Nicola; Widera, Paweł; Williamson, Stuart; Heer, Rakesh; Krasnogor, Natalio; Bacardit, Jaume
2016-01-01
Functional networks play an important role in the analysis of biological processes and systems. The inference of these networks from high-throughput (-omics) data is an area of intense research. So far, the similarity-based inference paradigm (e.g. gene co-expression) has been the most popular approach. It assumes a functional relationship between genes which are expressed at similar levels across different samples. An alternative to this paradigm is the inference of relationships from the structure of machine learning models. These models are able to capture complex relationships between variables, that often are different/complementary to the similarity-based methods. We propose a protocol to infer functional networks from machine learning models, called FuNeL. It assumes, that genes used together within a rule-based machine learning model to classify the samples, might also be functionally related at a biological level. The protocol is first tested on synthetic datasets and then evaluated on a test suite of 8 real-world datasets related to human cancer. The networks inferred from the real-world data are compared against gene co-expression networks of equal size, generated with 3 different methods. The comparison is performed from two different points of view. We analyse the enriched biological terms in the set of network nodes and the relationships between known disease-associated genes in a context of the network topology. The comparison confirms both the biological relevance and the complementary character of the knowledge captured by the FuNeL networks in relation to similarity-based methods and demonstrates its potential to identify known disease associations as core elements of the network. Finally, using a prostate cancer dataset as a case study, we confirm that the biological knowledge captured by our method is relevant to the disease and consistent with the specialised literature and with an independent dataset not used in the inference process. The
Bus Based Synchronization Method for CHIPPER Based NoC
Directory of Open Access Journals (Sweden)
D. Muralidharan
2016-01-01
Full Text Available Network on Chip (NoC reduces the communication delay of System on Chip (SoC. The main limitation of NoC is power consumption and area overhead. Bufferless NoC reduces the area complexity and power consumption by eliminating buffers in the traditional routers. The bufferless NoC design should include live lock freeness since they use hot potato routing. This increases the complexity of bufferless NoC design. Among the available propositions to reduce this complexity, CHIPPER based bufferless NoC is considered as one of the best options. Live lock freeness is provided in CHIPPER through golden epoch and golden packet. All routers follow some synchronization method to identify a golden packet. Clock based method is intuitively followed for synchronization in CHIPPER based NoCs. It is shown in this work that the worst-case latency of packets is unbearably high when the above synchronization is followed. To alleviate this problem, broadcast bus NoC (BBus NoC approach is proposed in this work. The proposed method decreases the worst-case latency of packets by increasing the golden epoch rate of CHIPPER.
Curvelet-domain multiple matching method combined with cubic B-spline function
Wang, Tong; Wang, Deli; Tian, Mi; Hu, Bin; Liu, Chengming
2018-05-01
Since the large amount of surface-related multiple existed in the marine data would influence the results of data processing and interpretation seriously, many researchers had attempted to develop effective methods to remove them. The most successful surface-related multiple elimination method was proposed based on data-driven theory. However, the elimination effect was unsatisfactory due to the existence of amplitude and phase errors. Although the subsequent curvelet-domain multiple-primary separation method achieved better results, poor computational efficiency prevented its application. In this paper, we adopt the cubic B-spline function to improve the traditional curvelet multiple matching method. First, select a little number of unknowns as the basis points of the matching coefficient; second, apply the cubic B-spline function on these basis points to reconstruct the matching array; third, build constraint solving equation based on the relationships of predicted multiple, matching coefficients, and actual data; finally, use the BFGS algorithm to iterate and realize the fast-solving sparse constraint of multiple matching algorithm. Moreover, the soft-threshold method is used to make the method perform better. With the cubic B-spline function, the differences between predicted multiple and original data diminish, which results in less processing time to obtain optimal solutions and fewer iterative loops in the solving procedure based on the L1 norm constraint. The applications to synthetic and field-derived data both validate the practicability and validity of the method.
METHODICAL BASES OF MANAGEMENT OF INSURANCE PORTFOLIO
Directory of Open Access Journals (Sweden)
Serdechna Yulia
2018-01-01
Full Text Available Introduction. Despite the considerable arsenal of developments in the issues of assessing the management of the insurance portfolio remains unresolved. In order to detail, specify and further systematize the indicators for the indicated evaluation, the publications of scientists are analyzed. The purpose of the study is to analyze existing methods by which it is possible to formulate and manage the insurance portfolio in order to achieve its balance, which will contribute to ensuring the financial reliability of the insurance company. Results. The description of the essence of the concept of “management of insurance portfolio”, as the application of actuarial methods and techniques to the combination of various insurance risks offered for insurance or are already part of the insurance portfolio, allowing to adjust the size and structure of the portfolio in order to ensure its financial stability, achievement the maximum level of income of an insurance organization, preservation of the value of its equity and financial security of insurance liabilities. It is determined that the main methods by which the insurer’s insurance portfolio can be formed and managed is the selection of risks; reinsurance operations that ensure diversification of risks; formation and placement of insurance reserves, which form the financial basis of insurance activities. The method of managing an insurance portfolio, which can be both active and passive, is considered. Conclusions. It is determined that the insurance portfolio is the basis on which all the activities of the insurer are based and which determines its financial stability. The combination of methods and technologies applied to the insurance portfolio is a management method that can be both active and passive and has a number of specific methods through which the insurer’s insurance portfolio can be formed and managed. It is substantiated that each insurance company aims to form an efficient and
An integrated miRNA functional screening and target validation method for organ morphogenesis.
Rebustini, Ivan T; Vlahos, Maryann; Packer, Trevor; Kukuruzinska, Maria A; Maas, Richard L
2016-03-16
The relative ease of identifying microRNAs and their increasing recognition as important regulators of organogenesis motivate the development of methods to efficiently assess microRNA function during organ morphogenesis. In this context, embryonic organ explants provide a reliable and reproducible system that recapitulates some of the important early morphogenetic processes during organ development. Here we present a method to target microRNA function in explanted mouse embryonic organs. Our method combines the use of peptide-based nanoparticles to transfect specific microRNA inhibitors or activators into embryonic organ explants, with a microRNA pulldown assay that allows direct identification of microRNA targets. This method provides effective assessment of microRNA function during organ morphogenesis, allows prioritization of multiple microRNAs in parallel for subsequent genetic approaches, and can be applied to a variety of embryonic organs.
Statistical Method to Overcome Overfitting Issue in Rational Function Models
Alizadeh Moghaddam, S. H.; Mokhtarzade, M.; Alizadeh Naeini, A.; Alizadeh Moghaddam, S. A.
2017-09-01
Rational function models (RFMs) are known as one of the most appealing models which are extensively applied in geometric correction of satellite images and map production. Overfitting is a common issue, in the case of terrain dependent RFMs, that degrades the accuracy of RFMs-derived geospatial products. This issue, resulting from the high number of RFMs' parameters, leads to ill-posedness of the RFMs. To tackle this problem, in this study, a fast and robust statistical approach is proposed and compared to Tikhonov regularization (TR) method, as a frequently-used solution to RFMs' overfitting. In the proposed method, a statistical test, namely, significance test is applied to search for the RFMs' parameters that are resistant against overfitting issue. The performance of the proposed method was evaluated for two real data sets of Cartosat-1 satellite images. The obtained results demonstrate the efficiency of the proposed method in term of the achievable level of accuracy. This technique, indeed, shows an improvement of 50-80% over the TR.
Cut Based Method for Comparing Complex Networks.
Liu, Qun; Dong, Zhishan; Wang, En
2018-03-23
Revealing the underlying similarity of various complex networks has become both a popular and interdisciplinary topic, with a plethora of relevant application domains. The essence of the similarity here is that network features of the same network type are highly similar, while the features of different kinds of networks present low similarity. In this paper, we introduce and explore a new method for comparing various complex networks based on the cut distance. We show correspondence between the cut distance and the similarity of two networks. This correspondence allows us to consider a broad range of complex networks and explicitly compare various networks with high accuracy. Various machine learning technologies such as genetic algorithms, nearest neighbor classification, and model selection are employed during the comparison process. Our cut method is shown to be suited for comparisons of undirected networks and directed networks, as well as weighted networks. In the model selection process, the results demonstrate that our approach outperforms other state-of-the-art methods with respect to accuracy.
Preequilibrium decay models and the quantum Green function method
International Nuclear Information System (INIS)
Zhivopistsev, F.A.; Rzhevskij, E.S.; Gosudarstvennyj Komitet po Ispol'zovaniyu Atomnoj Ehnergii SSSR, Moscow. Inst. Teoreticheskoj i Ehksperimental'noj Fiziki)
1977-01-01
The nuclear process mechanism and preequilibrium decay involving complex particles are expounded on the basis of the Green function formalism without the weak interaction assumptions. The Green function method is generalized to a general nuclear reaction: A+α → B+β+γ+...rho, where A is the target nucleus, α is a complex particle in the initial state, B is the final nucleus, and β, γ, ... rho are nuclear fragments in the final state. The relationship between the generalized Green function and Ssub(fi)-matrix is established. The resultant equations account for: 1) direct and quasi-direct processes responsible for the angular distribution asymmetry of the preequilibrium component; 2) the appearance of addends corresponding to the excitation of complex states of final nucleus; and 3) the relationship between the preequilibrium decay model and the general models of nuclear reaction theories (Lippman-Schwinger formalism). The formulation of preequilibrium emission via the S(T) matrix allows to account for all the differential terms in succession important to an investigation of the angular distribution assymetry of emitted particles
Method for estimating modulation transfer function from sample images.
Saiga, Rino; Takeuchi, Akihisa; Uesugi, Kentaro; Terada, Yasuko; Suzuki, Yoshio; Mizutani, Ryuta
2018-02-01
The modulation transfer function (MTF) represents the frequency domain response of imaging modalities. Here, we report a method for estimating the MTF from sample images. Test images were generated from a number of images, including those taken with an electron microscope and with an observation satellite. These original images were convolved with point spread functions (PSFs) including those of circular apertures. The resultant test images were subjected to a Fourier transformation. The logarithm of the squared norm of the Fourier transform was plotted against the squared distance from the origin. Linear correlations were observed in the logarithmic plots, indicating that the PSF of the test images can be approximated with a Gaussian. The MTF was then calculated from the Gaussian-approximated PSF. The obtained MTF closely coincided with the MTF predicted from the original PSF. The MTF of an x-ray microtomographic section of a fly brain was also estimated with this method. The obtained MTF showed good agreement with the MTF determined from an edge profile of an aluminum test object. We suggest that this approach is an alternative way of estimating the MTF, independently of the image type. Copyright © 2017 Elsevier Ltd. All rights reserved.
Method of applying single higher order polynomial basis function over multiple domains
CSIR Research Space (South Africa)
Lysko, AA
2010-03-01
Full Text Available A novel method has been devised where one set of higher order polynomial-based basis functions can be applied over several wire segments, thus permitting to decouple the number of unknowns from the number of segments, and so from the geometrical...
Calculation of neutron importance function in fissionable assemblies using Monte Carlo method
International Nuclear Information System (INIS)
Feghhi, S.A.H.; Shahriari, M.; Afarideh, H.
2007-01-01
The purpose of the present work is to develop an efficient solution method for the calculation of neutron importance function in fissionable assemblies for all criticality conditions, based on Monte Carlo calculations. The neutron importance function has an important role in perturbation theory and reactor dynamic calculations. Usually this function can be determined by calculating the adjoint flux while solving the adjoint weighted transport equation based on deterministic methods. However, in complex geometries these calculations are very complicated. In this article, considering the capabilities of MCNP code in solving problems with complex geometries and its closeness to physical concepts, a comprehensive method based on the physical concept of neutron importance has been introduced for calculating the neutron importance function in sub-critical, critical and super-critical conditions. For this propose a computer program has been developed. The results of the method have been benchmarked with ANISN code calculations in 1 and 2 group modes for simple geometries. The correctness of these results has been confirmed for all three criticality conditions. Finally, the efficiency of the method for complex geometries has been shown by the calculation of neutron importance in Miniature Neutron Source Reactor (MNSR) research reactor
Accelerometer method and apparatus for integral display and control functions
Bozeman, Richard J., Jr.
1992-06-01
Vibration analysis has been used for years to provide a determination of the proper functioning of different types of machinery, including rotating machinery and rocket engines. A determination of a malfunction, if detected at a relatively early stage in its development, will allow changes in operating mode or a sequenced shutdown of the machinery prior to a total failure. Such preventative measures result in less extensive and/or less expensive repairs, and can also prevent a sometimes catastrophic failure of equipment. Standard vibration analyzers are generally rather complex, expensive, and of limited portability. They also usually result in displays and controls being located remotely from the machinery being monitored. Consequently, a need exists for improvements in accelerometer electronic display and control functions which are more suitable for operation directly on machines and which are not so expensive and complex. The invention includes methods and apparatus for detecting mechanical vibrations and outputting a signal in response thereto. The apparatus includes an accelerometer package having integral display and control functions. The accelerometer package is suitable for mounting upon the machinery to be monitored. Display circuitry provides signals to a bar graph display which may be used to monitor machine condition over a period of time. Control switches may be set which correspond to elements in the bar graph to provide an alert if vibration signals increase over the selected trip point. The circuitry is shock mounted within the accelerometer housing. The method provides for outputting a broadband analog accelerometer signal, integrating this signal to produce a velocity signal, integrating and calibrating the velocity signal before application to a display driver, and selecting a trip point at which a digitally compatible output signal is generated. The benefits of a vibration recording and monitoring system with controls and displays readily
Novel axolotl cardiac function analysis method using magnetic resonance imaging.
Directory of Open Access Journals (Sweden)
Pedro Gomes Sanches
Full Text Available The salamander axolotl is capable of complete regeneration of amputated heart tissue. However, non-invasive imaging tools for assessing its cardiac function were so far not employed. In this study, cardiac magnetic resonance imaging is introduced as a non-invasive technique to image heart function of axolotls. Three axolotls were imaged with magnetic resonance imaging using a retrospectively gated Fast Low Angle Shot cine sequence. Within one scanning session the axolotl heart was imaged three times in all planes, consecutively. Heart rate, ejection fraction, stroke volume and cardiac output were calculated using three techniques: (1 combined long-axis, (2 short-axis series, and (3 ultrasound (control for heart rate only. All values are presented as mean ± standard deviation. Heart rate (beats per minute among different animals was 32.2±6.0 (long axis, 30.4±5.5 (short axis and 32.7±4.9 (ultrasound and statistically similar regardless of the imaging method (p > 0.05. Ejection fraction (% was 59.6±10.8 (long axis and 48.1±11.3 (short axis and it differed significantly (p = 0.019. Stroke volume (μl/beat was 133.7±33.7 (long axis and 93.2±31.2 (short axis, also differed significantly (p = 0.015. Calculations were consistent among the animals and over three repeated measurements. The heart rate varied depending on depth of anaesthesia. We described a new method for defining and imaging the anatomical planes of the axolotl heart and propose one of our techniques (long axis analysis may prove useful in defining cardiac function in regenerating axolotl hearts.
Li, Q; He, Y L; Wang, Y; Tao, W Q
2007-11-01
A coupled double-distribution-function lattice Boltzmann method is developed for the compressible Navier-Stokes equations. Different from existing thermal lattice Boltzmann methods, this method can recover the compressible Navier-Stokes equations with a flexible specific-heat ratio and Prandtl number. In the method, a density distribution function based on a multispeed lattice is used to recover the compressible continuity and momentum equations, while the compressible energy equation is recovered by an energy distribution function. The energy distribution function is then coupled to the density distribution function via the thermal equation of state. In order to obtain an adjustable specific-heat ratio, a constant related to the specific-heat ratio is introduced into the equilibrium energy distribution function. Two different coupled double-distribution-function lattice Boltzmann models are also proposed in the paper. Numerical simulations are performed for the Riemann problem, the double-Mach-reflection problem, and the Couette flow with a range of specific-heat ratios and Prandtl numbers. The numerical results are found to be in excellent agreement with analytical and/or other solutions.
Phases of strongly-interacting matter with functional methods
International Nuclear Information System (INIS)
Mitter, M.
2012-01-01
Non-perturbative aspects of strongly-interacting matter, in particular at non-vanishing temperatures, are investigated with functional methods. The consequences of confinement in terms of a linearly rising static quark potential arising from an infrared singular quark 4-point function are studied. Such a singularity is only consistent for a specific color structure and implies the existence of similar singularities in special color structures of n-point functions with n>3. A simple explanation for Casimir scaling is found within this mechanism of confinement.The deconfinement transition of fundamentally charged scalar and quark matter is investigated in terms of center symmetry. Novel dual order parameters are introduced that can be obtained from the corresponding matter propagators. In the case of quark matter the new order parameter compares well with the dual chiral condensate, with the advantage that no regularization is necessary even at non-vanishing quark masses.The influence of the axial anomaly on the chiral transition is studied in terms of a 't Hooft determinant with quarks and mesons as effective degrees of freedom in the functional renormalization group. In the case of two quark flavors, the calculated temperature dependent determinant results in a decrease of the anomalous eta'-mass close to the chiral transition temperature. This is connected to a partial Z(2) restoration at the chiral transition instead of the restoration of full axial U(1). With 2+1 quark flavors and a temperature independent 't Hooft term, the chiral transition is found to be of second order with three dimensional O(4) critical exponents in the limit of vanishing up and down quark mass, whereas a first-order transition is seen without U(1) violation. (author) [de
Dynamic Sensor Management Algorithm Based on Improved Efficacy Function
Directory of Open Access Journals (Sweden)
TANG Shujuan
2016-01-01
Full Text Available A dynamic sensor management algorithm based on improved efficacy function is proposed to solve the multi-target and multi-sensory management problem. The tracking task precision requirements (TPR, target priority and sensor use cost were considered to establish the efficacy function by weighted sum the normalized value of the three factors. The dynamic sensor management algorithm was accomplished through control the diversities of the desired covariance matrix (DCM and the filtering covariance matrix (FCM. The DCM was preassigned in terms of TPR and the FCM was obtained by the centralized sequential Kalman filtering algorithm. The simulation results prove that the proposed method could meet the requirements of desired tracking precision and adjust sensor selection according to target priority and cost of sensor source usage. This makes sensor management scheme more reasonable and effective.
A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions
Pan, Guang; Ye, Pengcheng; Yang, Zhidong
2014-01-01
Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling methods. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels can be constructed repeatedly through the addition of sampling points, namely, extrema points of metamodels and minimum points of density function. Afterwards, the more accurate metamodels would be constructed by the procedure above. The validity and effectiveness of proposed sampling method are examined by studying typical numerical examples. PMID:25133206
Nonequilibrium Green's function method for quantum thermal transport
Wang, Jian-Sheng; Agarwalla, Bijay Kumar; Li, Huanan; Thingna, Juzar
2014-12-01
This review deals with the nonequilibrium Green's function (NEGF) method applied to the problems of energy transport due to atomic vibrations (phonons), primarily for small junction systems. We present a pedagogical introduction to the subject, deriving some of the well-known results such as the Laudauer-like formula for heat current in ballistic systems. The main aim of the review is to build the machinery of the method so that it can be applied to other situations, which are not directly treated here. In addition to the above, we consider a number of applications of NEGF, not in routine model system calculations, but in a few new aspects showing the power and usefulness of the formalism. In particular, we discuss the problems of multiple leads, coupled left-right-lead system, and system without a center. We also apply the method to the problem of full counting statistics. In the case of nonlinear systems, we make general comments on the thermal expansion effect, phonon relaxation time, and a certain class of mean-field approximations. Lastly, we examine the relationship between NEGF, reduced density matrix, and master equation approaches to thermal transport.
Probability Density Function Method for Observing Reconstructed Attractor Structure
Institute of Scientific and Technical Information of China (English)
陆宏伟; 陈亚珠; 卫青
2004-01-01
Probability density function (PDF) method is proposed for analysing the structure of the reconstructed attractor in computing the correlation dimensions of RR intervals of ten normal old men. PDF contains important information about the spatial distribution of the phase points in the reconstructed attractor. To the best of our knowledge, it is the first time that the PDF method is put forward for the analysis of the reconstructed attractor structure. Numerical simulations demonstrate that the cardiac systems of healthy old men are about 6 - 6.5 dimensional complex dynamical systems. It is found that PDF is not symmetrically distributed when time delay is small, while PDF satisfies Gaussian distribution when time delay is big enough. A cluster effect mechanism is presented to explain this phenomenon. By studying the shape of PDFs, that the roles played by time delay are more important than embedding dimension in the reconstruction is clearly indicated. Results have demonstrated that the PDF method represents a promising numerical approach for the observation of the reconstructed attractor structure and may provide more information and new diagnostic potential of the analyzed cardiac system.
Effects of Computer-Based Training on Procedural Modifications to Standard Functional Analyses
Schnell, Lauren K.; Sidener, Tina M.; DeBar, Ruth M.; Vladescu, Jason C.; Kahng, SungWoo
2018-01-01
Few studies have evaluated methods for training decision-making when functional analysis data are undifferentiated. The current study evaluated computer-based training to teach 20 graduate students to arrange functional analysis conditions, analyze functional analysis data, and implement procedural modifications. Participants were exposed to…
Protein Function Prediction Based on Sequence and Structure Information
Smaili, Fatima Z.
2016-01-01
operate. In this master thesis project, we worked on inferring protein functions based on the primary protein sequence. In the approach we follow, 3D models are first constructed using I-TASSER. Functions are then deduced by structurally matching
A flocking based method for brain tractography.
Aranda, Ramon; Rivera, Mariano; Ramirez-Manzanares, Alonso
2014-04-01
We propose a new method to estimate axonal fiber pathways from Multiple Intra-Voxel Diffusion Orientations. Our method uses the multiple local orientation information for leading stochastic walks of particles. These stochastic particles are modeled with mass and thus they are subject to gravitational and inertial forces. As result, we obtain smooth, filtered and compact trajectory bundles. This gravitational interaction can be seen as a flocking behavior among particles that promotes better and robust axon fiber estimations because they use collective information to move. However, the stochastic walks may generate paths with low support (outliers), generally associated to incorrect brain connections. In order to eliminate the outlier pathways, we propose a filtering procedure based on principal component analysis and spectral clustering. The performance of the proposal is evaluated on Multiple Intra-Voxel Diffusion Orientations from two realistic numeric diffusion phantoms and a physical diffusion phantom. Additionally, we qualitatively demonstrate the performance on in vivo human brain data. Copyright © 2014 Elsevier B.V. All rights reserved.
Forced Ignition Study Based On Wavelet Method
Martelli, E.; Valorani, M.; Paolucci, S.; Zikoski, Z.
2011-05-01
The control of ignition in a rocket engine is a critical problem for combustion chamber design. Therefore it is essential to fully understand the mechanism of ignition during its earliest stages. In this paper the characteristics of flame kernel formation and initial propagation in a hydrogen-argon-oxygen mixing layer are studied using 2D direct numerical simulations with detailed chemistry and transport properties. The flame kernel is initiated by adding an energy deposition source term in the energy equation. The effect of unsteady strain rate is studied by imposing a 2D turbulence velocity field, which is initialized by means of a synthetic field. An adaptive wavelet method, based on interpolating wavelets is used in this study to solve the compressible reactive Navier- Stokes equations. This method provides an alternative means to refine the computational grid points according to local demands of the physical solution. The present simulations show that in the very early instants the kernel perturbed by the turbulent field is characterized by an increased burning area and a slightly increased rad- ical formation. In addition, the calculations show that the wavelet technique yields a significant reduction in the number of degrees of freedom necessary to achieve a pre- scribed solution accuracy.
Garrido-Martín, Diego; Pazos, Florencio
2018-02-27
The exponential accumulation of new sequences in public databases is expected to improve the performance of all the approaches for predicting protein structural and functional features. Nevertheless, this was never assessed or quantified for some widely used methodologies, such as those aimed at detecting functional sites and functional subfamilies in protein multiple sequence alignments. Using raw protein sequences as only input, these approaches can detect fully conserved positions, as well as those with a family-dependent conservation pattern. Both types of residues are routinely used as predictors of functional sites and, consequently, understanding how the sequence content of the databases affects them is relevant and timely. In this work we evaluate how the growth and change with time in the content of sequence databases affect five sequence-based approaches for detecting functional sites and subfamilies. We do that by recreating historical versions of the multiple sequence alignments that would have been obtained in the past based on the database contents at different time points, covering a period of 20 years. Applying the methods to these historical alignments allows quantifying the temporal variation in their performance. Our results show that the number of families to which these methods can be applied sharply increases with time, while their ability to detect potentially functional residues remains almost constant. These results are informative for the methods' developers and final users, and may have implications in the design of new sequencing initiatives.
DEFF Research Database (Denmark)
Paidarová, Ivana; Sauer, Stephan P. A.
2012-01-01
We have compared the performance of density functional theory (DFT) using five different exchange-correlation functionals with four coupled cluster theory based wave function methods in the calculation of geometrical derivatives of the polarizability tensor of methane. The polarizability gradient...
Synthesis of dye/fluorescent functionalized dendrons based on cyclotriphosphazene
Directory of Open Access Journals (Sweden)
Aurélien Hameau
2011-11-01
Full Text Available Functionalized phenols based on tyramine were synthesized in order to be selectively grafted on to hexachlorocyclotriphosphazene, affording a variety of functionalized dendrons of type AB5. The B functions comprised fluorescent groups (dansyl or dyes (dabsyl, whereas the A function was provided by either an aldehyde or an amine. The characterization of these dendrons is reported. An unexpected behaviour of a fluorescent and water-soluble dendron based on dansyl groups in mixtures of dioxane/water was observed.
Differential quadrature method of nonlinear bending of functionally graded beam
Gangnian, Xu; Liansheng, Ma; Wang, Youzhi; Quan, Yuan; Weijie, You
2018-02-01
Using the third-order shear deflection beam theory (TBT), nonlinear bending of functionally graded (FG) beams composed with various amounts of ceramic and metal is analyzed utilizing the differential quadrature method (DQM). The properties of beam material are supposed to accord with the power law index along to thickness. First, according to the principle of stationary potential energy, the partial differential control formulae of the FG beams subjected to a distributed lateral force are derived. To obtain numerical results of the nonlinear bending, non-dimensional boundary conditions and control formulae are dispersed by applying the DQM. To verify the present solution, several examples are analyzed for nonlinear bending of homogeneous beams with various edges. A minute parametric research is in progress about the effect of the law index, transverse shear deformation, distributed lateral force and boundary conditions.
The Gaussian radial basis function method for plasma kinetic theory
Energy Technology Data Exchange (ETDEWEB)
Hirvijoki, E., E-mail: eero.hirvijoki@chalmers.se [Department of Applied Physics, Chalmers University of Technology, SE-41296 Gothenburg (Sweden); Candy, J.; Belli, E. [General Atomics, PO Box 85608, San Diego, CA 92186-5608 (United States); Embréus, O. [Department of Applied Physics, Chalmers University of Technology, SE-41296 Gothenburg (Sweden)
2015-10-30
Description of a magnetized plasma involves the Vlasov equation supplemented with the non-linear Fokker–Planck collision operator. For non-Maxwellian distributions, the collision operator, however, is difficult to compute. In this Letter, we introduce Gaussian Radial Basis Functions (RBFs) to discretize the velocity space of the entire kinetic system, and give the corresponding analytical expressions for the Vlasov and collision operator. Outlining the general theory, we also highlight the connection to plasma fluid theories, and give 2D and 3D numerical solutions of the non-linear Fokker–Planck equation. Applications are anticipated in both astrophysical and laboratory plasmas. - Highlights: • A radically new method to address the velocity space discretization of the non-linear kinetic equation of plasmas. • Elegant and physically intuitive, flexible and mesh-free. • Demonstration of numerical solution of both 2-D and 3-D non-linear Fokker–Planck relaxation problem.
Method of making gold thiolate and photochemically functionalized microcantilevers
Boiadjiev, Vassil I [Knoxville, TN; Brown, Gilbert M [Knoxville, TN; Pinnaduwage, Lal A [Knoxville, TN; Thundat, Thomas G [Knoxville, TN; Bonnesen, Peter V [Knoxville, TN; Goretzki, Gudrun [Nottingham, GB
2009-08-25
Highly sensitive sensor platforms for the detection of specific reagents, such as chromate, gasoline and biological species, using microcantilevers and other microelectromechanical systems (MEMS) whose surfaces have been modified with photochemically attached organic monolayers, such as self-assembled monolayers (SAM), or gold-thiol surface linkage are taught. The microcantilever sensors use photochemical hydrosilylation to modify silicon surfaces and gold-thiol chemistry to modify metallic surfaces thereby enabling individual microcantilevers in multicantilever array chips to be modified separately. Terminal vinyl substituted hydrocarbons with a variety of molecular recognition sites can be attached to the surface of silicon via the photochemical hydrosilylation process. By focusing the activating UV light sequentially on selected silicon or silicon nitride hydrogen terminated surfaces and soaking or spotting selected metallic surfaces with organic thiols, sulfides, or disulfides, the microcantilevers are functionalized. The device and photochemical method are intended to be integrated into systems for detecting specific agents including chromate groundwater contamination, gasoline, and biological species.
Functional methods and mappings of dissipative quantum systems
International Nuclear Information System (INIS)
Baur, H.
2006-01-01
In the first part of this work we extract the algebraic structure behind the method of the influence functional in the context of dissipative quantum mechanics. Special emphasis was put on the transition from a quantum mechanical description to a classical one, since it allows a deeper understanding of the measurement-process. This is tightly connected with the transition from a microscopic to a macroscopic world where the former one is described by the rules of quantum mechanics whereas the latter follows the rules of classical mechanics. In addition we show how the results of the influence functional method can be interpreted as a stochastical process, which in turn allows an easy comparison with the well known time development of a quantum mechanical system by use of the Schroedinger equation. In the following we examine the tight-binding approximation of models of which their hamiltionian shows discrete eigenstates in position space and where transitions between those states are suppressed so that propagation either is described by tunneling or by thermal activation. In the framework of dissipative quantum mechanics this leads to a tremendous simplification of the effective description of the system since instead of looking at the full history of all paths in the path integral description, we only have to look at all possible jump times and the possible corresponding set of weights for the jump direction, which is much easier to handle both analytically and numerically. In addition we deal with the mapping and the connection of dissipative quantum mechanical models with ones in quantum field theory and in particular models in statistical field theory. As an example we mention conformal invariance in two dimensions which always becomes relevant if a statistical system only has local interaction and is invariant under scaling. (orig.)
Determination of differential pulmonary function by the radioisotopic method
International Nuclear Information System (INIS)
Molinari, J.F.; Chatkin, J.M.; Barreto, S.M.
1991-01-01
A study of twenty-one patients with bronchogenic carcinoma which were submitted to lobectomy or pneumonectomy has been done, with the purpose of evaluation of regional and differential function of the lungs or parts of them. To accomplish this subject the patients underwent simple spirometry with FEV (forced expiratory volume in the first second) and FVC (forced vital capacity) measurements plus quantitative perfusional scintigraphy using 99 Tc-MAA (aggregated albumin). The relationship between these tests allowed the calculation of predictive values of FEV and FVC for the post-operative period through proposed equations. From the third month on after the operation, the patients were again submitted to spirometry with measurement of FEV and FVC to attest the hypothesis that these values were similar to those calculated. The statistical study of these results, utilizing the Student's t test, has demonstrated that the values of FEV and FVC were similar to those found in the postoperative period. These results allowed the conclusion that the radioisotopic method had predictive capacity of FEV and FVC in the lobectomized and pneumonectomized patients and it is a contribution in the evaluation of the differential pulmonary function. (author)
Aminopropyl-Functionalized Silica CO2 Adsorbents via Sonochemical Methods
Directory of Open Access Journals (Sweden)
Gregory P. Knowles
2016-01-01
Full Text Available Aminopropyl-functionalized hexagonal mesoporous silica (HMS products, as are of interest for CO2 capture applications, were separately prepared by mixing aminopropyltrimethoxysilane (APTS and HMS in toluene via a conventional stirred reactor and via sonication assisted methods, to investigate the potential of sonication to facilitate the preparation of products with higher tether loadings and correspondingly higher CO2 sorption capacities. Sonication was expected to improve both the dispersion of the substrate in the solvent and the diffusion of the silane throughout the mesoporous substrate. Structural properties of the products were determined by X-ray diffraction, N2 adsorption/desorption (77 K, helium pycnometry, and elemental analysis, and CO2 adsorption/desorption properties were determined via thermogravimetric and differential thermal analysis. The tether loadings of the sonication products (up to 1.8 tethers·nm−2 were found to increase with sonication time and in each case were greater than the corresponding product prepared by the conventional approach. It was also found that the concentration of the reagent mixture influenced the extent of functionalization, that the crude products cured effectively under N2 flow as under vacuum, and that rinsing the crude products prior to curing was not essential. Sonication products with higher tether loadings were found to exhibit higher CO2 sorption capacities as expected.
Normalization methods in time series of platelet function assays
Van Poucke, Sven; Zhang, Zhongheng; Roest, Mark; Vukicevic, Milan; Beran, Maud; Lauwereins, Bart; Zheng, Ming-Hua; Henskens, Yvonne; Lancé, Marcus; Marcus, Abraham
2016-01-01
Abstract Platelet function can be quantitatively assessed by specific assays such as light-transmission aggregometry, multiple-electrode aggregometry measuring the response to adenosine diphosphate (ADP), arachidonic acid, collagen, and thrombin-receptor activating peptide and viscoelastic tests such as rotational thromboelastometry (ROTEM). The task of extracting meaningful statistical and clinical information from high-dimensional data spaces in temporal multivariate clinical data represented in multivariate time series is complex. Building insightful visualizations for multivariate time series demands adequate usage of normalization techniques. In this article, various methods for data normalization (z-transformation, range transformation, proportion transformation, and interquartile range) are presented and visualized discussing the most suited approach for platelet function data series. Normalization was calculated per assay (test) for all time points and per time point for all tests. Interquartile range, range transformation, and z-transformation demonstrated the correlation as calculated by the Spearman correlation test, when normalized per assay (test) for all time points. When normalizing per time point for all tests, no correlation could be abstracted from the charts as was the case when using all data as 1 dataset for normalization. PMID:27428217
A second-order unconstrained optimization method for canonical-ensemble density-functional methods
Nygaard, Cecilie R.; Olsen, Jeppe
2013-03-01
A second order converging method of ensemble optimization (SOEO) in the framework of Kohn-Sham Density-Functional Theory is presented, where the energy is minimized with respect to an ensemble density matrix. It is general in the sense that the number of fractionally occupied orbitals is not predefined, but rather it is optimized by the algorithm. SOEO is a second order Newton-Raphson method of optimization, where both the form of the orbitals and the occupation numbers are optimized simultaneously. To keep the occupation numbers between zero and two, a set of occupation angles is defined, from which the occupation numbers are expressed as trigonometric functions. The total number of electrons is controlled by a built-in second order restriction of the Newton-Raphson equations, which can be deactivated in the case of a grand-canonical ensemble (where the total number of electrons is allowed to change). To test the optimization method, dissociation curves for diatomic carbon are produced using different functionals for the exchange-correlation energy. These curves show that SOEO favors symmetry broken pure-state solutions when using functionals with exact exchange such as Hartree-Fock and Becke three-parameter Lee-Yang-Parr. This is explained by an unphysical contribution to the exact exchange energy from interactions between fractional occupations. For functionals without exact exchange, such as local density approximation or Becke Lee-Yang-Parr, ensemble solutions are favored at interatomic distances larger than the equilibrium distance. Calculations on the chromium dimer are also discussed. They show that SOEO is able to converge to ensemble solutions for systems that are more complicated than diatomic carbon.
Green's function method and its application to verification of diffusion models of GASFLOW code
International Nuclear Information System (INIS)
Xu, Z.; Travis, J.R.; Breitung, W.
2007-07-01
To validate the diffusion model and the aerosol particle model of the GASFLOW computer code, theoretical solutions of advection diffusion problems are developed by using the Green's function method. The work consists of a theory part and an application part. In the first part, the Green's functions of one-dimensional advection diffusion problems are solved in infinite, semi-infinite and finite domains with the Dirichlet, the Neumann and/or the Robin boundary conditions. Novel and effective image systems especially for the advection diffusion problems are made to find the Green's functions in a semi-infinite domain. Eigenfunction method is utilized to find the Green's functions in a bounded domain. In the case, key steps of a coordinate transform based on a concept of reversed time scale, a Laplace transform and an exponential transform are proposed to solve the Green's functions. Then the product rule of the multi-dimensional Green's functions is discussed in a Cartesian coordinate system. Based on the building blocks of one-dimensional Green's functions, the multi-dimensional Green's function solution can be constructed by applying the product rule. Green's function tables are summarized to facilitate the application of the Green's function. In the second part, the obtained Green's function solutions benchmark a series of validations to the diffusion model of gas species in continuous phase and the diffusion model of discrete aerosol particles in the GASFLOW code. Perfect agreements are obtained between the GASFLOW simulations and the Green's function solutions in case of the gas diffusion. Very good consistencies are found between the theoretical solutions of the advection diffusion equations and the numerical particle distributions in advective flows, when the drag force between the micron-sized particles and the conveying gas flow meets the Stokes' law about resistance. This situation is corresponding to a very small Reynolds number based on the particle
Stand diameter distribution modelling and prediction based on Richards function.
Directory of Open Access Journals (Sweden)
Ai-guo Duan
Full Text Available The objective of this study was to introduce application of the Richards equation on modelling and prediction of stand diameter distribution. The long-term repeated measurement data sets, consisted of 309 diameter frequency distributions from Chinese fir (Cunninghamia lanceolata plantations in the southern China, were used. Also, 150 stands were used as fitting data, the other 159 stands were used for testing. Nonlinear regression method (NRM or maximum likelihood estimates method (MLEM were applied to estimate the parameters of models, and the parameter prediction method (PPM and parameter recovery method (PRM were used to predict the diameter distributions of unknown stands. Four main conclusions were obtained: (1 R distribution presented a more accurate simulation than three-parametric Weibull function; (2 the parameters p, q and r of R distribution proved to be its scale, location and shape parameters, and have a deep relationship with stand characteristics, which means the parameters of R distribution have good theoretical interpretation; (3 the ordinate of inflection point of R distribution has significant relativity with its skewness and kurtosis, and the fitted main distribution range for the cumulative diameter distribution of Chinese fir plantations was 0.4∼0.6; (4 the goodness-of-fit test showed diameter distributions of unknown stands can be well estimated by applying R distribution based on PRM or the combination of PPM and PRM under the condition that only quadratic mean DBH or plus stand age are known, and the non-rejection rates were near 80%, which are higher than the 72.33% non-rejection rate of three-parametric Weibull function based on the combination of PPM and PRM.
International Nuclear Information System (INIS)
Shang Yadong
2008-01-01
The extended hyperbolic functions method for nonlinear wave equations is presented. Based on this method, we obtain a multiple exact explicit solutions for the nonlinear evolution equations which describe the resonance interaction between the long wave and the short wave. The solutions obtained in this paper include (a) the solitary wave solutions of bell-type for S and L, (b) the solitary wave solutions of kink-type for S and bell-type for L, (c) the solitary wave solutions of a compound of the bell-type and the kink-type for S and L, (d) the singular travelling wave solutions, (e) periodic travelling wave solutions of triangle function types, and solitary wave solutions of rational function types. The variety of structure to the exact solutions of the long-short wave equation is illustrated. The methods presented here can also be used to obtain exact solutions of nonlinear wave equations in n dimensions
Real reproduction and evaluation of color based on BRDF method
Qin, Feng; Yang, Weiping; Yang, Jia; Li, Hongning; Luo, Yanlin; Long, Hongli
2013-12-01
It is difficult to reproduce the original color of targets really in different illuminating environment using the traditional methods. So a function which can reconstruct the characteristics of reflection about every point on the surface of target is required urgently to improve the authenticity of color reproduction, which known as the Bidirectional Reflectance Distribution Function(BRDF). A method of color reproduction based on the BRDF measurement is introduced in this paper. Radiometry is combined with the colorimetric theories to measure the irradiance and radiance of GretagMacbeth 24 ColorChecker by using PR-715 Radiation Spectrophotometer of PHOTO RESEARCH, Inc, USA. The BRDF and BRF (Bidirectional Reflectance Factor) values of every color piece corresponding to the reference area are calculated according to irradiance and radiance, thus color tristimulus values of 24 ColorChecker are reconstructed. The results reconstructed by BRDF method are compared with values calculated by the reflectance using PR-715, at last, the chromaticity coordinates in color space and color difference between each other are analyzed. The experimental result shows average color difference and sample standard deviation between the method proposed in this paper and traditional reconstruction method depended on reflectance are 2.567 and 1.3049 respectively. The conclusion indicates that the method of color reproduction based on BRDF has the more obvious advantages to describe the color information of object than the reflectance in hemisphere space through the theoretical and experimental analysis. This method proposed in this paper is effective and feasible during the research of reproducing the chromaticity.
Math-Based Simulation Tools and Methods
National Research Council Canada - National Science Library
Arepally, Sudhakar
2007-01-01
.... The following methods are reviewed: matrix operations, ordinary and partial differential system of equations, Lagrangian operations, Fourier transforms, Taylor Series, Finite Difference Methods, implicit and explicit finite element...
Institute of Scientific and Technical Information of China (English)
梁立俊; 莫洁玲
2012-01-01
近年来，伊斯兰银行发展迅速，并在金融危机愈演愈烈的情况下经营稳定，它与传统商业银行业务最大的区别是禁止利息。本文通过剖析伊斯兰银行无息贷款与商业银行有息贷款运作的特点，运用熵权法对它们之间的绩效进行分析比较，结果表明伊斯兰银行的经营效益显然低于传统商业银行。本文在结论中指出了伊斯兰银行无息运作机制的弊端，分析了其存在和发展的合理性，并提出其对我国银行业发展的借鉴意义。%In recent years, the Islamic Bank features rapid development and stable performance under the increasingly intense financial crisis. The most distinctive feature of the Islamic Bank is the prohibition of interest charging. Thus by a comparison of no-interest loan transactions in the Islamic Bank and conventional loan transactions, this paper intends to appraise the difference in banking performance with the entropy method, and eventually finds that the Islamic Bank generates much lower operating benefits than the traditional commercial banks. In the conclusion, the paper points out the drawbacks of no-interest loan transactions as well as illustrates its social functions and future development, serving as a reference to China＇s improvement of her banking development.
A hybrid method for the parallel computation of Green's functions
DEFF Research Database (Denmark)
Petersen, Dan Erik; Li, Song; Stokbro, Kurt
2009-01-01
of the large number of times this calculation needs to be performed, this is computationally very expensive even on supercomputers. The classical approach is based on recurrence formulas which cannot be efficiently parallelized. This practically prevents the solution of large problems with hundreds...... of thousands of atoms. We propose new recurrences for a general class of sparse matrices to calculate Green's and lesser Green's function matrices which extend formulas derived by Takahashi and others. We show that these recurrences may lead to a dramatically reduced computational cost because they only...... require computing a small number of entries of the inverse matrix. Then. we propose a parallelization strategy for block tridiagonal matrices which involves a combination of Schur complement calculations and cyclic reduction. It achieves good scalability even on problems of modest size....
International Nuclear Information System (INIS)
Tsujita, K.; Endo, T.; Yamamoto, A.
2013-01-01
An efficient numerical method for time-dependent transport equation, the mutigrid amplitude function (MAF) method, is proposed. The method of characteristics (MOC) is being widely used for reactor analysis thanks to the advances of numerical algorithms and computer hardware. However, efficient kinetic calculation method for MOC is still desirable since it requires significant computation time. Various efficient numerical methods for solving the space-dependent kinetic equation, e.g., the improved quasi-static (IQS) and the frequency transform methods, have been developed so far mainly for diffusion calculation. These calculation methods are known as effective numerical methods and they offer a way for faster computation. However, they have not been applied to the kinetic calculation method using MOC as the authors' knowledge. Thus, the MAF method is applied to the kinetic calculation using MOC aiming to reduce computation time. The MAF method is a unified numerical framework of conventional kinetic calculation methods, e.g., the IQS, the frequency transform, and the theta methods. Although the MAF method is originally developed for the space-dependent kinetic calculation based on the diffusion theory, it is extended to transport theory in the present study. The accuracy and computational time are evaluated though the TWIGL benchmark problem. The calculation results show the effectiveness of the MAF method. (authors)
Appreciation the damage of kidney function with RIA method
International Nuclear Information System (INIS)
Wang Haodan
1992-01-01
Using RIA method, the authors took 4 kinds of urine specimen from 100 normal persons which were taken in the morning 1 h after drinking voluntary and all- 24 h, and stored at 4 C deg and -30 C deg respectively, in order to detect the concentration of the urine protein Β 2 -MG, ALb, IgG and THP. The results are as follows: for 3-days-storage at 4 C deg and 2-weeks-storage at -30 C deg, P > 0.05; for the ALb, IgG and THP between voluntary urine and 24 h urine, α = 0.7565, 0.7865 and 0.7537 respectively; for Β 2 -MG, between the 1h-urine after drinking and voluntary urine, α = 0.7238. The urinary levels were measured of Β 2 -MG, ALb, IgG and THP with voluntary urine specimen in 177 cases of various types of nephropathy, urino-infection, and diabetic nephrosis, hypertesion-nephrosis, systemic lupus erythematosus. It is considered that the method of testing urine protein with voluntary urine specimen is not only accurate for collecting but also convenient for the patient. It is more accurate and sensitive than the traditional BUN and Cr for the appreciation of kidney function damage. And it gives a early stage index of kidney damage
Collision analysis of one kind of chaos-based hash function
International Nuclear Information System (INIS)
Xiao Di; Peng Wenbing; Liao Xiaofeng; Xiang Tao
2010-01-01
In the last decade, various chaos-based hash functions have been proposed. Nevertheless, the corresponding analyses of them lag far behind. In this Letter, we firstly take a chaos-based hash function proposed very recently in Amin, Faragallah and Abd El-Latif (2009) as a sample to analyze its computational collision problem, and then generalize the construction method of one kind of chaos-based hash function and summarize some attentions to avoid the collision problem. It is beneficial to the hash function design based on chaos in the future.
Functional properties of edible agar-based and starch-based films for food quality preservation.
Phan, The D; Debeaufort, F; Luu, D; Voilley, A
2005-02-23
Edible films made of agar (AG), cassava starch (CAS), normal rice starch (NRS), and waxy (glutinous) rice starch (WRS) were elaborated and tested for a potential use as edible packaging or coating. Their water vapor permeabilities (WVP) were comparable with those of most of the polysaccharide-based films and with some protein-based films. Depending on the environmental moisture pressure, the WVP of the films varies and remains constant when the relative humidity (RH) is >84%. Equilibrium sorption isotherms of these films have been measured; the Guggenheim-Anderson-de Boer (GAB) model was used to describe the sorption isotherm and contributed to a better knowledge of hydration properties. Surface hydrophobicity and wettability of these films were also investigated using the sessile drop contact angle method. The results obtained suggested the migration of the lipid fraction toward evaporation surface during film drying. Among these polysaccharide-based films, AG-based film and CAS-based film displayed more interesting mechanical properties: they are transparent, clear, homogeneous, flexible, and easily handled. NRS- and WRS-based films were relatively brittle and have a low tension resistance. Microstructure of film cross section was observed by environmental scanning electron microscopy to better understand the effect of the structure on the functional properties. The results suggest that AG-based film and CAS-based films, which show better functional properties, are promising systems to be used as food packaging or coating instead of NRS- and WRS-based films.
Filter-based reconstruction methods for tomography
Pelt, D.M.
2016-01-01
In X-ray tomography, a three-dimensional image of the interior of an object is computed from multiple X-ray images, acquired over a range of angles. Two types of methods are commonly used to compute such an image: analytical methods and iterative methods. Analytical methods are computationally
A Layered Searchable Encryption Scheme with Functional Components Independent of Encryption Methods
Luo, Guangchun; Qin, Ke
2014-01-01
Searchable encryption technique enables the users to securely store and search their documents over the remote semitrusted server, which is especially suitable for protecting sensitive data in the cloud. However, various settings (based on symmetric or asymmetric encryption) and functionalities (ranked keyword query, range query, phrase query, etc.) are often realized by different methods with different searchable structures that are generally not compatible with each other, which limits the scope of application and hinders the functional extensions. We prove that asymmetric searchable structure could be converted to symmetric structure, and functions could be modeled separately apart from the core searchable structure. Based on this observation, we propose a layered searchable encryption (LSE) scheme, which provides compatibility, flexibility, and security for various settings and functionalities. In this scheme, the outputs of the core searchable component based on either symmetric or asymmetric setting are converted to some uniform mappings, which are then transmitted to loosely coupled functional components to further filter the results. In such a way, all functional components could directly support both symmetric and asymmetric settings. Based on LSE, we propose two representative and novel constructions for ranked keyword query (previously only available in symmetric scheme) and range query (previously only available in asymmetric scheme). PMID:24719565
Research on image complexity evaluation method based on color information
Wang, Hao; Duan, Jin; Han, Xue-hui; Xiao, Bo
2017-11-01
In order to evaluate the complexity of a color image more effectively and find the connection between image complexity and image information, this paper presents a method to compute the complexity of image based on color information.Under the complexity ,the theoretical analysis first divides the complexity from the subjective level, divides into three levels: low complexity, medium complexity and high complexity, and then carries on the image feature extraction, finally establishes the function between the complexity value and the color characteristic model. The experimental results show that this kind of evaluation method can objectively reconstruct the complexity of the image from the image feature research. The experimental results obtained by the method of this paper are in good agreement with the results of human visual perception complexity,Color image complexity has a certain reference value.
Adaptive Functional-Based Neuro-Fuzzy-PID Incremental Controller Structure
Directory of Open Access Journals (Sweden)
Ashraf Ahmed Fahmy
2014-03-01
Full Text Available This paper presents an adaptive functional-based Neuro-fuzzy-PID incremental (NFPID controller structure that can be tuned either offline or online according to required controller performance. First, differential membership functions are used to represent the fuzzy membership functions of the input-output space of the three term controller. Second, controller rules are generated based on the discrete proportional, derivative, and integral function for the fuzzy space. Finally, a fully differentiable fuzzy neural network is constructed to represent the developed controller for either offline or online controller parameter adaptation. Two different adaptation methods are used for controller tuning, offline method based on controller transient performance cost function optimization using Bees Algorithm, and online method based on tracking error minimization using back-propagation with momentum algorithm. The proposed control system was tested to show the validity of the controller structure over a fixed PID controller gains to control SCARA type robot arm.
Calculation of neutron importance function in fissionable assemblies using Monte Carlo method
International Nuclear Information System (INIS)
Feghhi, S. A. H.; Afarideh, H.; Shahriari, M.
2007-01-01
The purpose of the present work is to develop an efficient solution method to calculate neutron importance function in fissionable assemblies for all criticality conditions, using Monte Carlo Method. The neutron importance function has a well important role in perturbation theory and reactor dynamic calculations. Usually this function can be determined by calculating adjoint flux through out solving the Adjoint weighted transport equation with deterministic methods. However, in complex geometries these calculations are very difficult. In this article, considering the capabilities of MCNP code in solving problems with complex geometries and its closeness to physical concepts, a comprehensive method based on physical concept of neutron importance has been introduced for calculating neutron importance function in sub-critical, critical and supercritical conditions. For this means a computer program has been developed. The results of the method has been benchmarked with ANISN code calculations in 1 and 2 group modes for simple geometries and their correctness has been approved for all three criticality conditions. Ultimately, the efficiency of the method for complex geometries has been shown by calculation of neutron importance in MNSR research reactor
Digital Resonant Controller based on Modified Tustin Discretization Method
Directory of Open Access Journals (Sweden)
STOJIC, D.
2016-11-01
Full Text Available Resonant controllers are used in power converter voltage and current control due to their simplicity and accuracy. However, digital implementation of resonant controllers introduces problems related to zero and pole mapping from the continuous to the discrete time domain. Namely, some discretization methods introduce significant errors in the digital controller resonant frequency, resulting in the loss of the asymptotic AC reference tracking, especially at high resonant frequencies. The delay compensation typical for resonant controllers can also be compromised. Based on the existing analysis, it can be concluded that the Tustin discretization with frequency prewarping represents a preferable choice from the point of view of the resonant frequency accuracy. However, this discretization method has a shortcoming in applications that require real-time frequency adaptation, since complex trigonometric evaluation is required for each frequency change. In order to overcome this problem, in this paper the modified Tustin discretization method is proposed based on the Taylor series approximation of the frequency prewarping function. By comparing the novel discretization method with commonly used two-integrator-based proportional-resonant (PR digital controllers, it is shown that the resulting digital controller resonant frequency and time delay compensation errors are significantly reduced for the novel controller.
DNA-based methods of geochemical prospecting
Energy Technology Data Exchange (ETDEWEB)
Ashby, Matthew [Mill Valley, CA
2011-12-06
The present invention relates to methods for performing surveys of the genetic diversity of a population. The invention also relates to methods for performing genetic analyses of a population. The invention further relates to methods for the creation of databases comprising the survey information and the databases created by these methods. The invention also relates to methods for analyzing the information to correlate the presence of nucleic acid markers with desired parameters in a sample. These methods have application in the fields of geochemical exploration, agriculture, bioremediation, environmental analysis, clinical microbiology, forensic science and medicine.
Gradient-based methods for production optimization of oil reservoirs
Energy Technology Data Exchange (ETDEWEB)
Suwartadi, Eka
2012-07-01
Production optimization for water flooding in the secondary phase of oil recovery is the main topic in this thesis. The emphasis has been on numerical optimization algorithms, tested on case examples using simple hypothetical oil reservoirs. Gradientbased optimization, which utilizes adjoint-based gradient computation, is used to solve the optimization problems. The first contribution of this thesis is to address output constraint problems. These kinds of constraints are natural in production optimization. Limiting total water production and water cut at producer wells are examples of such constraints. To maintain the feasibility of an optimization solution, a Lagrangian barrier method is proposed to handle the output constraints. This method incorporates the output constraints into the objective function, thus avoiding additional computations for the constraints gradient (Jacobian) which may be detrimental to the efficiency of the adjoint method. The second contribution is the study of the use of second-order adjoint-gradient information for production optimization. In order to speedup convergence rate in the optimization, one usually uses quasi-Newton approaches such as BFGS and SR1 methods. These methods compute an approximation of the inverse of the Hessian matrix given the first-order gradient from the adjoint method. The methods may not give significant speedup if the Hessian is ill-conditioned. We have developed and implemented the Hessian matrix computation using the adjoint method. Due to high computational cost of the Newton method itself, we instead compute the Hessian-timesvector product which is used in a conjugate gradient algorithm. Finally, the last contribution of this thesis is on surrogate optimization for water flooding in the presence of the output constraints. Two kinds of model order reduction techniques are applied to build surrogate models. These are proper orthogonal decomposition (POD) and the discrete empirical interpolation method (DEIM
Springback Compensation Based on FDM-DTF Method
International Nuclear Information System (INIS)
Liu Qiang; Kang Lan
2010-01-01
Stamping part error caused by springback is usually considered to be a tooling defect in sheet metal forming process. This problem can be corrected by adjusting the tooling shape to appropriate shape. In this paper, springback compensation based on FDM-DTF method is proposed to be used for design and modification of the tooling shape. Firstly, based on FDM method, the tooling shape is designed by reversing inner force's direction at the end of forming simulation, the required tooling shape can be got through some iterations. Secondly actual tooling is produced based on results got in the first step. When the tooling and part surface discrete data are investigated, the transfer function between numerical springback error and real springback error can be calculated based on wavelet transform results, which can be used in predicting the tooling shape for the desired product. Finally the FDM-DTF method is proved to control springback effectively after it has been applied in the 2D irregular product springback control.
Zero Field Splitting of the chalcogen diatomics using relativistic correlated wave-function methods
DEFF Research Database (Denmark)
Rota, Jean-Baptiste; Knecht, Stefan; Fleig, Timo
2011-01-01
The spectrum arising from the (π*)2 configuration of the chalcogen dimers, namely the X21, a2 and b0+ states, is calculated using Wave-Function Theory (WFT) based methods. Two-component (2c) and four-component (4c) MultiReference Configuration Interaction (MRCI) and Fock-Space Coupled Cluster (FSCC......) methods are used as well as two-step methods Spin-Orbit Complete Active Space Perturbation Theory at 2nd order (SO-CASPT2) and Spin-Orbit Difference Dedicated Configuration Interaction (SODDCI). The energy of the X21 state corresponds to the Zero-Field Splitting (ZFS) of the ground state spin triplet...
Asiri, Sharefa M.; Laleg-Kirati, Taous-Meriem
2017-01-01
In this paper, a method based on modulating functions is proposed to estimate the Cerebral Blood Flow (CBF). The problem is written in an input estimation problem for a damped wave equation which is used to model the spatiotemporal variations
Development of redesign method of production system based on QFD
Kondoh, Shinsuke; Umeda, Yasusi; Togawa, Hisashi
In order to catch up with rapidly changing market environment, rapid and flexible redesign of production system is quite important. For effective and rapid redesign of production system, a redesign support system is eagerly needed. To this end, this paper proposes a redesign method of production system based on Quality Function Deployment (QFD). This method represents a designer's intention in the form of QFD, collects experts' knowledge as “Production Method (PM) modules,” and formulates redesign guidelines as seven redesign operations so as to support a designer to find out improvement ideas in a systematical manner. This paper also illustrates a redesign support tool of a production system we have developed based on this method, and demonstrates its feasibility with a practical example of a production system of a contact probe. A result from this example shows that comparable cost reduction to those of veteran designers can be achieved by a novice designer. From this result, we conclude our redesign method is effective and feasible for supporting redesign of a production system.
Process identification method based on the Z transformation
International Nuclear Information System (INIS)
Zwingelstein, G.
1968-01-01
A simple method is described for identifying the transfer function of a linear retard-less system, based on the inversion of the Z transformation of the transmittance using a computer. It is assumed in this study that the signals at the entrance and at the exit of the circuit considered are of the deterministic type. The study includes: the theoretical principle of the inversion of the Z transformation, details about programming simulation, and identification of filters whose degrees vary from the first to the fifth order. (authors) [fr
Failure Probability Calculation Method Using Kriging Metamodel-based Importance Sampling Method
Energy Technology Data Exchange (ETDEWEB)
Lee, Seunggyu [Korea Aerospace Research Institue, Daejeon (Korea, Republic of); Kim, Jae Hoon [Chungnam Nat’l Univ., Daejeon (Korea, Republic of)
2017-05-15
The kernel density was determined based on sampling points obtained in a Markov chain simulation and was assumed to be an important sampling function. A Kriging metamodel was constructed in more detail in the vicinity of a limit state. The failure probability was calculated based on importance sampling, which was performed for the Kriging metamodel. A pre-existing method was modified to obtain more sampling points for a kernel density in the vicinity of a limit state. A stable numerical method was proposed to find a parameter of the kernel density. To assess the completeness of the Kriging metamodel, the possibility of changes in the calculated failure probability due to the uncertainty of the Kriging metamodel was calculated.
Convex-based void filling method for CAD-based Monte Carlo geometry modeling
International Nuclear Information System (INIS)
Yu, Shengpeng; Cheng, Mengyun; Song, Jing; Long, Pengcheng; Hu, Liqin
2015-01-01
Highlights: • We present a new void filling method named CVF for CAD based MC geometry modeling. • We describe convex based void description based and quality-based space subdivision. • The results showed improvements provided by CVF for both modeling and MC calculation efficiency. - Abstract: CAD based automatic geometry modeling tools have been widely applied to generate Monte Carlo (MC) calculation geometry for complex systems according to CAD models. Automatic void filling is one of the main functions in the CAD based MC geometry modeling tools, because the void space between parts in CAD models is traditionally not modeled while MC codes such as MCNP need all the problem space to be described. A dedicated void filling method, named Convex-based Void Filling (CVF), is proposed in this study for efficient void filling and concise void descriptions. The method subdivides all the problem space into disjointed regions using Quality based Subdivision (QS) and describes the void space in each region with complementary descriptions of the convex volumes intersecting with that region. It has been implemented in SuperMC/MCAM, the Multiple-Physics Coupling Analysis Modeling Program, and tested on International Thermonuclear Experimental Reactor (ITER) Alite model. The results showed that the new method reduced both automatic modeling time and MC calculation time
Larkin, Wallace; Hawkins, Renee O.; Collins, Tai
2016-01-01
Functional behavior assessments and function-based interventions are effective methods for addressing the challenging behaviors of children; however, traditional functional analysis has limitations that impact usability in applied settings. Trial-based functional analysis addresses concerns relating to the length of time, level of expertise…
A Method to Measure the Bracelet Based on Feature Energy
Liu, Hongmin; Li, Lu; Wang, Zhiheng; Huo, Zhanqiang
2017-12-01
To measure the bracelet automatically, a novel method based on feature energy is proposed. Firstly, the morphological method is utilized to preprocess the image, and the contour consisting of a concentric circle is extracted. Then, a feature energy function, which is relevant to the distances from one pixel to the edge points, is defined taking into account the geometric properties of the concentric circle. The input image is subsequently transformed to the feature energy distribution map (FEDM) by computing the feature energy of each pixel. The center of the concentric circle is thus located by detecting the maximum on the FEDM; meanwhile, the radii of the concentric circle are determined according to the feature energy function of the center pixel. Finally, with the use of a calibration template, the internal diameter and thickness of the bracelet are measured. The experimental results show that the proposed method can measure the true sizes of the bracelet accurately with the simplicity, directness and robustness compared to the existing methods.
Automated Functional Testing based on the Navigation of Web Applications
Directory of Open Access Journals (Sweden)
Boni García
2011-08-01
Full Text Available Web applications are becoming more and more complex. Testing such applications is an intricate hard and time-consuming activity. Therefore, testing is often poorly performed or skipped by practitioners. Test automation can help to avoid this situation. Hence, this paper presents a novel approach to perform automated software testing for web applications based on its navigation. On the one hand, web navigation is the process of traversing a web application using a browser. On the other hand, functional requirements are actions that an application must do. Therefore, the evaluation of the correct navigation of web applications results in the assessment of the specified functional requirements. The proposed method to perform the automation is done in four levels: test case generation, test data derivation, test case execution, and test case reporting. This method is driven by three kinds of inputs: i UML models; ii Selenium scripts; iii XML files. We have implemented our approach in an open-source testing framework named Automatic Testing Platform. The validation of this work has been carried out by means of a case study, in which the target is a real invoice management system developed using a model-driven approach.
Classroom Application of a Trial-Based Functional Analysis
Bloom, Sarah E.; Iwata, Brian A.; Fritz, Jennifer N.; Roscoe, Eileen M.; Carreau, Abbey B.
2011-01-01
We evaluated a trial-based approach to conducting functional analyses in classroom settings. Ten students referred for problem behavior were exposed to a series of assessment trials, which were interspersed among classroom activities throughout the day. Results of these trial-based functional analyses were compared to those of more traditional…
A recursive Monte Carlo method for estimating importance functions in deep penetration problems
International Nuclear Information System (INIS)
Goldstein, M.
1980-04-01
A pratical recursive Monte Carlo method for estimating the importance function distribution, aimed at importance sampling for the solution of deep penetration problems in three-dimensional systems, was developed. The efficiency of the recursive method was investigated for sample problems including one- and two-dimensional, monoenergetic and and multigroup problems, as well as for a practical deep-penetration problem with streaming. The results of the recursive Monte Carlo calculations agree fairly well with Ssub(n) results. It is concluded that the recursive Monte Carlo method promises to become a universal method for estimating the importance function distribution for the solution of deep-penetration problems, in all kinds of systems: for many systems the recursive method is likely to be more efficient than previously existing methods; for three-dimensional systems it is the first method that can estimate the importance function with the accuracy required for an efficient solution based on importance sampling of neutron deep-penetration problems in those systems
Xie, J.; Schaff, D. P.; Chen, Y.; Schult, F.
2013-12-01
Reliably estimated source time functions (STFs) from high-frequency regional waveforms, such as Lg, Pn and Pg, provide important input for seismic source studies, explosion detection and discrimination, and minimization of parameter trade-off in attenuation studies. We have searched for candidate pairs of larger and small earthquakes in and around China that share the same focal mechanism but significantly differ in magnitudes, so that the empirical Green's function (EGF) method can be applied to study the STFs of the larger events. We conducted about a million deconvolutions using waveforms from 925 earthquakes, and screened the deconvolved traces to exclude those that are from event pairs that involved different mechanisms. Only 2,700 traces passed this screening and could be further analyzed using the EGF method. We have developed a series of codes for speeding up the final EGF analysis by implementing automations and user-graphic interface procedures. The codes have been fully tested with a subset of screened data and we are currently applying them to all the screened data. We will present a large number of deconvolved STFs retrieved using various phases (Lg, Pn, Sn and Pg and coda) with information on any directivities, any possible dependence of pulse durations on the wave types, on scaling relations for the pulse durations and event sizes, and on the estimated source static stress drops.
A Formal Verification Method of Function Block Diagram
International Nuclear Information System (INIS)
Koh, Kwang Yong; Seong, Poong Hyun; Jee, Eun Kyoung; Jeon, Seung Jae; Park, Gee Yong; Kwon, Kee Choon
2007-01-01
Programmable Logic Controller (PLC), an industrial computer specialized for real-time applications, is widely used in diverse control systems in chemical processing plants, nuclear power plants or traffic control systems. As a PLC is often used to implement safety, critical embedded software, rigorous safety demonstration of PLC code is necessary. Function block diagram (FBD) is a standard application programming language for the PLC and currently being used in the development of a fully-digitalized reactor protection system (RPS), which is called the IDiPS, under the KNICS project. Therefore, verification issue of FBD programs is a pressing problem, and hence is of great importance. In this paper, we propose a formal verification method of FBD programs; we defined FBD programs formally in compliance with IEC 61131-3, and then translate the programs into Verilog model, and finally the model is verified using a model checker SMV. To demonstrate the feasibility and effective of this approach, we applied it to IDiPS which currently being developed under KNICS project. The remainder of this paper is organized as follows. Section 2 briefly describes Verilog and Cadence SMV. In Section 3, we introduce FBD2V which is a tool implemented to support the proposed FBD verification framework. A summary and conclusion are provided in Section 4
Estimation of Cumulative Absolute Velocity using Empirical Green's Function Method
International Nuclear Information System (INIS)
Park, Dong Hee; Yun, Kwan Hee; Chang, Chun Joong; Park, Se Moon
2009-01-01
In recognition of the needs to develop a new criterion for determining when the OBE (Operating Basis Earthquake) has been exceeded at nuclear power plants, Cumulative Absolute Velocity (CAV) was introduced by EPRI. The concept of CAV is the area accumulation with the values more than 0.025g occurred during every one second. The equation of the CAV is as follows. CAV = ∫ 0 max |a(t)|dt (1) t max = duration of record, a(t) = acceleration (>0.025g) Currently, the OBE exceedance criteria in Korea is Peak Ground Acceleration (PGA, PGA>0.1g). When Odesan earthquake (M L =4.8, January 20th, 2007) and Gyeongju earthquake (M L =3.4, June 2nd, 1999) were occurred, we have had already experiences of PGA greater than 0.1g that did not even cause any damage to the poorly-designed structures nearby. This moderate earthquake has motivated Korea to begin the use of the CAV for OBE exceedance criteria for NPPs. Because the present OBE level has proved itself to be a poor indicator for small-to-moderate earthquakes, for which the low OBE level can cause an inappropriate shut down the plant. A more serious possibility is that this scenario will become a reality at a very high level. Empirical Green's Function method was a simulation technique which can estimate the CAV value and it is hereby introduced
Methods for integrating a functional component into a microfluidic device
Simmons, Blake; Domeier, Linda; Woo, Noble; Shepodd, Timothy; Renzi, Ronald F.
2014-08-19
Injection molding is used to form microfluidic devices with integrated functional components. One or more functional components are placed in a mold cavity, which is then closed. Molten thermoplastic resin is injected into the mold and then cooled, thereby forming a solid substrate including the functional component(s). The solid substrate including the functional component(s) is then bonded to a second substrate, which may include microchannels or other features.
Plato: A localised orbital based density functional theory code
Kenny, S. D.; Horsfield, A. P.
2009-12-01
The Plato package allows both orthogonal and non-orthogonal tight-binding as well as density functional theory (DFT) calculations to be performed within a single framework. The package also provides extensive tools for analysing the results of simulations as well as a number of tools for creating input files. The code is based upon the ideas first discussed in Sankey and Niklewski (1989) [1] with extensions to allow high-quality DFT calculations to be performed. DFT calculations can utilise either the local density approximation or the generalised gradient approximation. Basis sets from minimal basis through to ones containing multiple radial functions per angular momenta and polarisation functions can be used. Illustrations of how the package has been employed are given along with instructions for its utilisation. Program summaryProgram title: Plato Catalogue identifier: AEFC_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEFC_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 219 974 No. of bytes in distributed program, including test data, etc.: 1 821 493 Distribution format: tar.gz Programming language: C/MPI and PERL Computer: Apple Macintosh, PC, Unix machines Operating system: Unix, Linux and Mac OS X Has the code been vectorised or parallelised?: Yes, up to 256 processors tested RAM: Up to 2 Gbytes per processor Classification: 7.3 External routines: LAPACK, BLAS and optionally ScaLAPACK, BLACS, PBLAS, FFTW Nature of problem: Density functional theory study of electronic structure and total energies of molecules, crystals and surfaces. Solution method: Localised orbital based density functional theory. Restrictions: Tight-binding and density functional theory only, no exact exchange. Unusual features: Both atom centred and uniform meshes available
Triptycene-based ladder monomers and polymers, methods of making each, and methods of use
Pinnau, Ingo
2015-02-05
Embodiments of the present disclosure provide for a triptycene-based A-B monomer, a method of making a triptycene-based A-B monomer, a triptycene-based ladder polymer, a method of making a triptycene-based ladder polymers, a method of using triptycene-based ladder polymers, a structure incorporating triptycene-based ladder polymers, a method of gas separation, and the like.
Performance of density functional theory methods to describe ...
Indian Academy of Sciences (India)
Fukui function shows a small dependence with both the exchange and correlation functional and the basis set. Evolution of the Fukui function along the reaction path describes important changes in the basic sites of the corresponding molecules. These results are in agreement with the chemical behavior of those species.
Generalized perturbation theory based on the method of cyclic characteristics
Energy Technology Data Exchange (ETDEWEB)
Assawaroongruengchot, M.; Marleau, G. [Institut de Genie Nucleaire, Departement de Genie Physique, Ecole Polytechnique de Montreal, 2900 Boul. Edouard-Montpetit, Montreal, Que. H3T 1J4 (Canada)
2006-07-01
A GPT algorithm for estimation of eigenvalues and reaction-rate ratios is developed for the neutron transport problems in 2D fuel assemblies with isotropic scattering. In our study the GPT formulation is based on the integral transport equations. The mathematical relationship between the generalized flux importance and generalized source importance functions is applied to transform the generalized flux importance transport equations into the integro-differential forms. The resulting adjoint and generalized adjoint transport equations are then solved using the method of cyclic characteristics (MOCC). Because of the presence of negative adjoint sources, a biasing/decontamination scheme is applied to make the generalized adjoint functions positive in such a way that it can be used for the multigroup re-balance technique. To demonstrate the efficiency of the algorithms, perturbative calculations are performed on a 17 x 17 PWR lattice. (authors)
Generalized perturbation theory based on the method of cyclic characteristics
International Nuclear Information System (INIS)
Assawaroongruengchot, M.; Marleau, G.
2006-01-01
A GPT algorithm for estimation of eigenvalues and reaction-rate ratios is developed for the neutron transport problems in 2D fuel assemblies with isotropic scattering. In our study the GPT formulation is based on the integral transport equations. The mathematical relationship between the generalized flux importance and generalized source importance functions is applied to transform the generalized flux importance transport equations into the integro-differential forms. The resulting adjoint and generalized adjoint transport equations are then solved using the method of cyclic characteristics (MOCC). Because of the presence of negative adjoint sources, a biasing/decontamination scheme is applied to make the generalized adjoint functions positive in such a way that it can be used for the multigroup re-balance technique. To demonstrate the efficiency of the algorithms, perturbative calculations are performed on a 17 x 17 PWR lattice. (authors)
Alternative methods of flexible base compaction acceptance.
2012-11-01
"This report presents the results from the second year of research work investigating issues with flexible base acceptance testing within the Texas Department of Transportation. This second year of work focused on shadow testing non-density-based acc...
Assessment of soil microbial diversity with functional multi-endpoint methods
DEFF Research Database (Denmark)
Winding, Anne; Creamer, R. E.; Rutgers, M.
on CO2 development by the microbes such as substrate induced respiration (SIR) on specific substrates have lead to the development of MicroResp™ and Community Level Physiological Profile (CLPP) with Biolog™ plates, and soil enzymatic activity assayed by Extracellular Enzyme Activity (EEA) based on MUF......Soil microbial diversity provides the cornerstone for support of soil ecosystem services by key roles in soil organic matter turnover, carbon sequestration and water infiltration. However, standardized methods to quantify the multitude of microbial functions in soils are lacking. Methods based...... to the lack of principle methods, the data obtained from these substitute methods are currently not used in classification and assessment schemes, making quantification of natural capital and ecosystems services of the soil a difficult venture. In this contribution, we compare and contrast the three...
Method of trial distribution function for quantum turbulence
International Nuclear Information System (INIS)
Nemirovskii, Sergey K.
2012-01-01
Studying quantum turbulence the necessity of calculation the various characteristics of the vortex tangle (VT) appears. Some of 'crude' quantities can be expressed directly via the total length of vortex lines (per unit of volume) or the vortex line density L(t) and the structure parameters of the VT. Other more 'subtle' quantities require knowledge of the vortex line configurations {s(xi,t) }. Usually, the corresponding calculations are carried out with the use of more or less truthful speculations concerning arrangement of the VT. In this paper we review other way to solution of this problem. It is based on the trial distribution functional (TDF) in space of vortex loop configurations. The TDF is constructed on the basis of well established properties of the vortex tangle. It is designed to calculate various averages taken over stochastic vortex loop configurations. In this paper we also review several applications of the use this model to calculate some important characteristics of the vortex tangle. In particular we discussed the average superfluid mass current J induced by vortices and its dynamics. We also describe the diffusion-like processes in the nonuniform vortex tangle and propagation of turbulent fronts.
Method for recycling radioactive noble gases for functional pulmonary imaging
International Nuclear Information System (INIS)
Forouzan-Rad, M.
1976-05-01
A theoretical treatment of the dynamic adsorption and desorption processes in the adsorption column is developed. The results of this analysis are compared with the space-time measurements of 133 Xe activity distribution in a charcoal column, when trace amounts of this gas in exponentially decreasing concentrations are fed into the column. Based on these investigations, a recycling apparatus is designed for use with xenon isotopes, especially 127 Xe, in studies of pulmonary function. The apparatus takes advantage of the high adsorbability of activated coconut charcoal for xenon a low temperature (-78 0 C) in order to trap the radioactive xenon gas that is exhaled during each ventilation-perfusion study. The trapped xenon is then recovered by passing low-pressure steam through the charcoal column. It is found that steam removes xenon from the surface of the charcoal more effectively than does heating and evacuation of the charcoal bed. As a result, an average xenon recovery of 96 percent has been achieved. Improved design parameters are discussed
Method for recycling radioactive noble gases for functional pulmonary imaging
Energy Technology Data Exchange (ETDEWEB)
Forouzan-Rad, M.
1976-05-01
A theoretical treatment of the dynamic adsorption and desorption processes in the adsorption column is developed. The results of this analysis are compared with the space-time measurements of /sup 133/Xe activity distribution in a charcoal column, when trace amounts of this gas in exponentially decreasing concentrations are fed into the column. Based on these investigations, a recycling apparatus is designed for use with xenon isotopes, especially /sup 127/Xe, in studies of pulmonary function. The apparatus takes advantage of the high adsorbability of activated coconut charcoal for xenon a low temperature (-78/sup 0/C) in order to trap the radioactive xenon gas that is exhaled during each ventilation-perfusion study. The trapped xenon is then recovered by passing low-pressure steam through the charcoal column. It is found that steam removes xenon from the surface of the charcoal more effectively than does heating and evacuation of the charcoal bed. As a result, an average xenon recovery of 96 percent has been achieved. Improved design parameters are discussed. (auth)
Carbon nanotube based functional superhydrophobic coatings
Sethi, Sunny
The main objective of this dissertation is synthesis of carbon nanotube (CNT) based superhydrophobic materials. The materials were designed such that electrical and mechanical properties of CNTs could be combined with superhydrophobicity to create materials with unique properties, such as self-cleaning adhesives, miniature flotation devices, ice-repellant coatings, and coatings for heat transfer furnaces. The coatings were divided into two broad categories based on CNT structure: Vertically aligned CNT arrays (VA coatings) and mesh-like (non-aligned) carbon nanotube arrays (NA coatings). VA coatings were used to create self-cleaning adhesives and flexible field emission devices. Coatings with self cleaning property along with high adhesiveness were inspired from structure found on gecko foot. Gecko foot is covered with thousands of microscopic hairs called setae; these setae are further divided into hundreds of nanometer sized hairs called spatulas. When gecko presses its foot against any surface, these hairs bend and conform to the topology of the surface resulting into very large area of contact. Such large area of intimate contact allows geckos to adhere to surfaces using van der Waals (vdW) interactions alone. VA-CNTs adhere to a variety of surfaces using a similar mechanism. CNTs of suitable diameter could withstand four times higher adhesion force than gecko foot. We found that upon soiling these CNT based adhesives (gecko tape) could be cleaned using a water droplet (lotus effect) or by applying vibrations. These materials could be used for applications requiring reversible adhesion. VA coatings were also used for developing field emission devices. A single CNT can emit electrons at very low threshold voltages. Achieving efficient electron emission on large scale has a lot of challenges such as screening effect, pull-off and lower current efficiency. We have explored the use of polymer-CNT composite structures to overcome these challenges in this work. NA
International Nuclear Information System (INIS)
Freed, K.F.; Herman, M.F.; Yeager, D.L.
1980-01-01
A description is provided of the common conceptual origins of many-body equations of motion and Green's function methods in Liouville operator formulations of the quantum mechanics of atomic and molecular electronic structure. Numerical evidence is provided to show the inadequacies of the traditional strictly perturbative approaches to these methods. Nonperturbative methods are introduced by analogy with techniques developed for handling large configuration interaction calculations and by evaluating individual matrix elements to higher accuracy. The important role of higher excitations is exhibited by the numerical calculations, and explicit comparisons are made between converged equations of motion and configuration interaction calculations for systems where a fundamental theorem requires the equality of the energy differences produced by these different approaches. (Auth.)
Concomitant prediction of function and fold at the domain level with GO-based profiles.
Lopez, Daniel; Pazos, Florencio
2013-01-01
Predicting the function of newly sequenced proteins is crucial due to the pace at which these raw sequences are being obtained. Almost all resources for predicting protein function assign functional terms to whole chains, and do not distinguish which particular domain is responsible for the allocated function. This is not a limitation of the methodologies themselves but it is due to the fact that in the databases of functional annotations these methods use for transferring functional terms to new proteins, these annotations are done on a whole-chain basis. Nevertheless, domains are the basic evolutionary and often functional units of proteins. In many cases, the domains of a protein chain have distinct molecular functions, independent from each other. For that reason resources with functional annotations at the domain level, as well as methodologies for predicting function for individual domains adapted to these resources are required.We present a methodology for predicting the molecular function of individual domains, based on a previously developed database of functional annotations at the domain level. The approach, which we show outperforms a standard method based on sequence searches in assigning function, concomitantly predicts the structural fold of the domains and can give hints on the functionally important residues associated to the predicted function.
Math-Based Simulation Tools and Methods
National Research Council Canada - National Science Library
Arepally, Sudhakar
2007-01-01
...: HMMWV 30-mph Rollover Test, Soldier Gear Effects, Occupant Performance in Blast Effects, Anthropomorphic Test Device, Human Models, Rigid Body Modeling, Finite Element Methods, Injury Criteria...
Adaptive endpoint detection of seismic signal based on auto-correlated function
International Nuclear Information System (INIS)
Fan Wanchun; Shi Ren
2001-01-01
Based on the analysis of auto-correlation function, the notion of the distance between auto-correlation function was quoted, and the characterization of the noise and the signal with noise were discussed by using the distance. Then, the method of auto- adaptable endpoint detection of seismic signal based on auto-correlated similarity was summed up. The steps of implementation and determining of the thresholds were presented in detail. The experimental results that were compared with the methods based on artificial detecting show that this method has higher sensitivity even in a low signal with noise ratio circumstance
Function-based Biosensor for Hazardous Waste Toxin Detection
Energy Technology Data Exchange (ETDEWEB)
James J Hickman
2008-07-09
There is a need for new types of toxicity sensors in the DOE and other agencies that are based on biological function as the toxins encountered during decontamination or waste remediation may be previously unknown or their effects subtle. Many times the contents of the environmental waste, especially the minor components, have not been fully identified and characterized. New sensors of this type could target unknown toxins that cause death as well as intermediate levels of toxicity that impair function or cause long term impairment that may eventually lead to death. The primary question posed in this grant was to create an electronically coupled neuronal cellular circuit to be used as sensor elements for a hybrid non-biological/biological toxin sensor system. A sensor based on the electrical signals transmitted between two mammalian neurons would allow the marriage of advances in solid state electronics with a functioning biological system to develop a new type of biosensor. Sensors of this type would be a unique addition to the field of sensor technology but would also be complementary to existing sensor technology that depends on knowledge of what is to be detected beforehand. We integrated physics, electronics, surface chemistry, biotechnology, and fundamental neuroscience in the development of this biosensor. Methods were developed to create artificial surfaces that enabled the patterning of discrete cells, and networks of cells, in culture; the networks were then aligned with transducers. The transducers were designed to measure electromagnetic fields (EMF) at low field strength. We have achieved all of the primary goals of the project. We can now pattern neurons routinely in our labs as well as align them with transducers. We have also shown the signals between neurons can be modulated by different biochemicals. In addition, we have made another significant advance where we have repeated the patterning results with adult hippocampal cells. Finally, we
DEFF Research Database (Denmark)
Kim, Oleksiy S.; Jørgensen, Erik; Meincke, Peter
2004-01-01
An efficient higher-order method of moments (MoM) solution of volume integral equations is presented. The higher-order MoM solution is based on higher-order hierarchical Legendre basis functions and higher-order geometry modeling. An unstructured mesh composed of 8-node trilinear and/or curved 27...... of magnitude in comparison to existing higher-order hierarchical basis functions. Consequently, an iterative solver can be applied even for high expansion orders. Numerical results demonstrate excellent agreement with the analytical Mie series solution for a dielectric sphere as well as with results obtained...
Functional Mobility Testing: A Novel Method to Create Suit Design Requirements
England, Scott A.; Benson, Elizabeth A.; Rajulu, Sudhakar L.
2008-01-01
This study was performed to aide in the creation of design requirements for the next generation of space suits that more accurately describe the level of mobility necessary for a suited crewmember through the use of an innovative methodology utilizing functional mobility. A novel method was utilized involving the collection of kinematic data while 20 subjects (10 male, 10 female) performed pertinent functional tasks that will be required of a suited crewmember during various phases of a lunar mission. These tasks were selected based on relevance and criticality from a larger list of tasks that may be carried out by the crew. Kinematic data was processed through Vicon BodyBuilder software to calculate joint angles for the ankle, knee, hip, torso, shoulder, elbow, and wrist. Maximum functional mobility was consistently lower than maximum isolated mobility. This study suggests that conventional methods for establishing design requirements for human-systems interfaces based on maximal isolated joint capabilities may overestimate the required mobility. Additionally, this method provides a valuable means of evaluating systems created from these requirements by comparing the mobility available in a new spacesuit, or the mobility required to use a new piece of hardware, to this newly established database of functional mobility.
Hybrid Fundamental Solution Based Finite Element Method: Theory and Applications
Directory of Open Access Journals (Sweden)
Changyong Cao
2015-01-01
Full Text Available An overview on the development of hybrid fundamental solution based finite element method (HFS-FEM and its application in engineering problems is presented in this paper. The framework and formulations of HFS-FEM for potential problem, plane elasticity, three-dimensional elasticity, thermoelasticity, anisotropic elasticity, and plane piezoelectricity are presented. In this method, two independent assumed fields (intraelement filed and auxiliary frame field are employed. The formulations for all cases are derived from the modified variational functionals and the fundamental solutions to a given problem. Generation of elemental stiffness equations from the modified variational principle is also described. Typical numerical examples are given to demonstrate the validity and performance of the HFS-FEM. Finally, a brief summary of the approach is provided and future trends in this field are identified.
HAM-Based Adaptive Multiscale Meshless Method for Burgers Equation
Directory of Open Access Journals (Sweden)
Shu-Li Mei
2013-01-01
Full Text Available Based on the multilevel interpolation theory, we constructed a meshless adaptive multiscale interpolation operator (MAMIO with the radial basis function. Using this operator, any nonlinear partial differential equations such as Burgers equation can be discretized adaptively in physical spaces as a nonlinear matrix ordinary differential equation. In order to obtain the analytical solution of the system of ODEs, the homotopy analysis method (HAM proposed by Shijun Liao was developed to solve the system of ODEs by combining the precise integration method (PIM which can be employed to get the analytical solution of linear system of ODEs. The numerical experiences show that HAM is not sensitive to the time step, and so the arithmetic error is mainly derived from the discrete in physical space.
Neurophysiological Based Methods of Guided Image Search
National Research Council Canada - National Science Library
Marchak, Frank
2003-01-01
.... We developed a model of visual feature detection, the Neuronal Synchrony Model, based on neurophysiological models of temporal neuronal processing, to improve the accuracy of automatic detection...
Xu, Yonghong; Gao, Xiaohuan; Wang, Zhengxi
2014-04-01
Missing data represent a general problem in many scientific fields, especially in medical survival analysis. Dealing with censored data, interpolation method is one of important methods. However, most of the interpolation methods replace the censored data with the exact data, which will distort the real distribution of the censored data and reduce the probability of the real data falling into the interpolation data. In order to solve this problem, we in this paper propose a nonparametric method of estimating the survival function of right-censored and interval-censored data and compare its performance to SC (self-consistent) algorithm. Comparing to the average interpolation and the nearest neighbor interpolation method, the proposed method in this paper replaces the right-censored data with the interval-censored data, and greatly improves the probability of the real data falling into imputation interval. Then it bases on the empirical distribution theory to estimate the survival function of right-censored and interval-censored data. The results of numerical examples and a real breast cancer data set demonstrated that the proposed method had higher accuracy and better robustness for the different proportion of the censored data. This paper provides a good method to compare the clinical treatments performance with estimation of the survival data of the patients. This pro vides some help to the medical survival data analysis.
Triptycene-based ladder monomers and polymers, methods of making each, and methods of use
Pinnau, Ingo; Ghanem, Bader; Swaidan, Raja
2015-01-01
Embodiments of the present disclosure provide for a triptycene-based A-B monomer, a method of making a triptycene-based A-B monomer, a triptycene-based ladder polymer, a method of making a triptycene-based ladder polymers, a method of using
Explicit appropriate basis function method for numerical solution of stiff systems
International Nuclear Information System (INIS)
Chen, Wenzhen; Xiao, Hongguang; Li, Haofeng; Chen, Ling
2015-01-01
Highlights: • An explicit numerical method called the appropriate basis function method is presented. • The method differs from the power series method for obtaining approximate numerical solutions. • Two cases show the method is fit for linear and nonlinear stiff systems. • The method is very simple and effective for most of differential equation systems. - Abstract: In this paper, an explicit numerical method, called the appropriate basis function method, is presented. The explicit appropriate basis function method differs from the power series method because it employs an appropriate basis function such as the exponential function, or periodic function, other than a polynomial, to obtain approximate numerical solutions. The method is successful and effective for the numerical solution of the first order ordinary differential equations. Two examples are presented to show the ability of the method for dealing with linear and nonlinear systems of differential equations
Directory of Open Access Journals (Sweden)
Hai-peng Wang
2017-01-01
Full Text Available Voluntary participation of hemiplegic patients is crucial for functional electrical stimulation therapy. A wearable functional electrical stimulation system has been proposed for real-time volitional hand motor function control using the electromyography bridge method. Through a series of novel design concepts, including the integration of a detecting circuit and an analog-to-digital converter, a miniaturized functional electrical stimulation circuit technique, a low-power super-regeneration chip for wireless receiving, and two wearable armbands, a prototype system has been established with reduced size, power, and overall cost. Based on wrist joint torque reproduction and classification experiments performed on six healthy subjects, the optimized surface electromyography thresholds and trained logistic regression classifier parameters were statistically chosen to establish wrist and hand motion control with high accuracy. Test results showed that wrist flexion/extension, hand grasp, and finger extension could be reproduced with high accuracy and low latency. This system can build a bridge of information transmission between healthy limbs and paralyzed limbs, effectively improve voluntary participation of hemiplegic patients, and elevate efficiency of rehabilitation training.
OCL-BASED TEST CASE GENERATION USING CATEGORY PARTITIONING METHOD
Directory of Open Access Journals (Sweden)
A. Jalila
2015-10-01
Full Text Available The adoption of fault detection techniques during initial stages of software development life cycle urges to improve reliability of a software product. Specification-based testing is one of the major criterions to detect faults in the requirement specification or design of a software system. However, due to the non-availability of implementation details, test case generation from formal specifications become a challenging task. As a novel approach, the proposed work presents a methodology to generate test cases from OCL (Object constraint Language formal specification using Category Partitioning Method (CPM. The experiment results indicate that the proposed methodology is more effective in revealing specification based faults. Furthermore, it has been observed that OCL and CPM form an excellent combination for performing functional testing at the earliest to improve software quality with reduced cost.
A PBOM configuration and management method based on templates
Guo, Kai; Qiao, Lihong; Qie, Yifan
2018-03-01
The design of Process Bill of Materials (PBOM) holds a hinge position in the process of product development. The requirements of PBOM configuration design and management for complex products are analysed in this paper, which include the reuse technique of configuration procedure and urgent management need of huge quantity of product family PBOM data. Based on the analysis, the function framework of PBOM configuration and management has been established. Configuration templates and modules are defined in the framework to support the customization and the reuse of configuration process. The configuration process of a detection sensor PBOM is shown as an illustration case in the end. The rapid and agile PBOM configuration and management can be achieved utilizing template-based method, which has a vital significance to improve the development efficiency for complex products.
Improved Saturated Hydraulic Conductivity Pedotransfer Functions Using Machine Learning Methods
Araya, S. N.; Ghezzehei, T. A.
2017-12-01
Saturated hydraulic conductivity (Ks) is one of the fundamental hydraulic properties of soils. Its measurement, however, is cumbersome and instead pedotransfer functions (PTFs) are often used to estimate it. Despite a lot of progress over the years, generic PTFs that estimate hydraulic conductivity generally don't have a good performance. We develop significantly improved PTFs by applying state of the art machine learning techniques coupled with high-performance computing on a large database of over 20,000 soils—USKSAT and the Florida Soil Characterization databases. We compared the performance of four machine learning algorithms (k-nearest neighbors, gradient boosted model, support vector machine, and relevance vector machine) and evaluated the relative importance of several soil properties in explaining Ks. An attempt is also made to better account for soil structural properties; we evaluated the importance of variables derived from transformations of soil water retention characteristics and other soil properties. The gradient boosted models gave the best performance with root mean square errors less than 0.7 and mean errors in the order of 0.01 on a log scale of Ks [cm/h]. The effective particle size, D10, was found to be the single most important predictor. Other important predictors included percent clay, bulk density, organic carbon percent, coefficient of uniformity and values derived from water retention characteristics. Model performances were consistently better for Ks values greater than 10 cm/h. This study maximizes the extraction of information from a large database to develop generic machine learning based PTFs to estimate Ks. The study also evaluates the importance of various soil properties and their transformations in explaining Ks.
The Method of a Standalone Functional Verifying Operability of Sonar Control Systems
Directory of Open Access Journals (Sweden)
A. A. Sotnikov
2014-01-01
Full Text Available This article describes a method of standalone verifying sonar control system, which is based on functional checking of control system operability.The main features of realized method are a development of the valid mathematic model for simulation of sonar signals at the point of hydroacoustic antenna, a valid representation of the sonar control system modes as a discrete Markov model, providing functional object verification in real time mode.Some ways are proposed to control computational complexity in case of insufficient computing resources of the simulation equipment, namely the way of model functionality reduction and the way of adequacy reduction.Experiments were made using testing equipment, which was developed by department of Research Institute of Information Control System at Bauman Moscow State Technical University to verify technical validity of industrial sonar complexes.On-board software was artificially changed to create malfunctions in functionality of sonar control systems during the verifying process in order to estimate verifying system performances.The method efficiency was proved by the theory and experiment results in comparison with the basic methodology of verifying technical systems.This method could be also used in debugging of on-board software of sonar complexes and in development of new promising algorithms of sonar signal processing.
Developing safety performance functions incorporating reliability-based risk measures.
Ibrahim, Shewkar El-Bassiouni; Sayed, Tarek
2011-11-01
Current geometric design guides provide deterministic standards where the safety margin of the design output is generally unknown and there is little knowledge of the safety implications of deviating from these standards. Several studies have advocated probabilistic geometric design where reliability analysis can be used to account for the uncertainty in the design parameters and to provide a risk measure of the implication of deviation from design standards. However, there is currently no link between measures of design reliability and the quantification of safety using collision frequency. The analysis presented in this paper attempts to bridge this gap by incorporating a reliability-based quantitative risk measure such as the probability of non-compliance (P(nc)) in safety performance functions (SPFs). Establishing this link will allow admitting reliability-based design into traditional benefit-cost analysis and should lead to a wider application of the reliability technique in road design. The present application is concerned with the design of horizontal curves, where the limit state function is defined in terms of the available (supply) and stopping (demand) sight distances. A comprehensive collision and geometric design database of two-lane rural highways is used to investigate the effect of the probability of non-compliance on safety. The reliability analysis was carried out using the First Order Reliability Method (FORM). Two Negative Binomial (NB) SPFs were developed to compare models with and without the reliability-based risk measures. It was found that models incorporating the P(nc) provided a better fit to the data set than the traditional (without risk) NB SPFs for total, injury and fatality (I+F) and property damage only (PDO) collisions. Copyright © 2011 Elsevier Ltd. All rights reserved.
portfolio optimization based on nonparametric estimation methods
Directory of Open Access Journals (Sweden)
mahsa ghandehari
2017-03-01
Full Text Available One of the major issues investors are facing with in capital markets is decision making about select an appropriate stock exchange for investing and selecting an optimal portfolio. This process is done through the risk and expected return assessment. On the other hand in portfolio selection problem if the assets expected returns are normally distributed, variance and standard deviation are used as a risk measure. But, the expected returns on assets are not necessarily normal and sometimes have dramatic differences from normal distribution. This paper with the introduction of conditional value at risk ( CVaR, as a measure of risk in a nonparametric framework, for a given expected return, offers the optimal portfolio and this method is compared with the linear programming method. The data used in this study consists of monthly returns of 15 companies selected from the top 50 companies in Tehran Stock Exchange during the winter of 1392 which is considered from April of 1388 to June of 1393. The results of this study show the superiority of nonparametric method over the linear programming method and the nonparametric method is much faster than the linear programming method.
Image Inpainting Based on Coherence Transport with Adapted Distance Functions
Mä rz, Thomas
2011-01-01
We discuss an extension of our method image inpainting based on coherence transport. For the latter method the pixels of the inpainting domain have to be serialized into an ordered list. Until now, to induce the serialization we have used
Annotation and retrieval system of CAD models based on functional semantics
Wang, Zhansong; Tian, Ling; Duan, Wenrui
2014-11-01
CAD model retrieval based on functional semantics is more significant than content-based 3D model retrieval during the mechanical conceptual design phase. However, relevant research is still not fully discussed. Therefore, a functional semantic-based CAD model annotation and retrieval method is proposed to support mechanical conceptual design and design reuse, inspire designer creativity through existing CAD models, shorten design cycle, and reduce costs. Firstly, the CAD model functional semantic ontology is constructed to formally represent the functional semantics of CAD models and describe the mechanical conceptual design space comprehensively and consistently. Secondly, an approach to represent CAD models as attributed adjacency graphs(AAG) is proposed. In this method, the geometry and topology data are extracted from STEP models. On the basis of AAG, the functional semantics of CAD models are annotated semi-automatically by matching CAD models that contain the partial features of which functional semantics have been annotated manually, thereby constructing CAD Model Repository that supports model retrieval based on functional semantics. Thirdly, a CAD model retrieval algorithm that supports multi-function extended retrieval is proposed to explore more potential creative design knowledge in the semantic level. Finally, a prototype system, called Functional Semantic-based CAD Model Annotation and Retrieval System(FSMARS), is implemented. A case demonstrates that FSMARS can successfully botain multiple potential CAD models that conform to the desired function. The proposed research addresses actual needs and presents a new way to acquire CAD models in the mechanical conceptual design phase.
Identification of fractional order systems using modulating functions method
Liu, Dayan; Laleg-Kirati, Taous-Meriem; Gibaru, O.; Perruquetti, Wilfrid
2013-01-01
can be transferred into the ones of the modulating functions. By choosing a set of modulating functions, a linear system of algebraic equations is obtained. Hence, the unknown parameters of a fractional order system can be estimated by solving a linear
Novel axolotl cardiac function analysis method using magnetic resonance imaging
Sanches, Pedro Gomes; Op 't Veld, Roel C.; de Graaf, Wolter; Strijkers, Gustav J.; Grüll, Holger
2017-01-01
The salamander axolotl is capable of complete regeneration of amputated heart tissue. However, non-invasive imaging tools for assessing its cardiac function were so far not employed. In this study, cardiac magnetic resonance imaging is introduced as a non-invasive technique to image heart function
Novel axolotl cardiac function analysis method using magnetic resonance imaging
Sanches, P.G.; Op ‘t Veld, R.C.; de Graaf, W.; Strijkers, G.J.; Grüll, H.
2017-01-01
The salamander axolotl is capable of complete regeneration of amputated heart tissue. However, non-invasive imaging tools for assessing its cardiac function were so far not employed. In this study, cardiac magnetic resonance imaging is introduced as a noninvasive technique to image heart function of
STRUCTURE OF THE METHOD OF EDUCATION AND ITS PROGNOSTIC FUNCTION
Directory of Open Access Journals (Sweden)
Azat Minabutdinovich Gaifutdinov
2018-05-01
Full Text Available Aim. The article is devoted to a poorly studied issue in the theory of upbringing – the structure of methods of upbringing. The subject of analysis is the content of methods of upbringing. The authors aim to determine the method of strict fixation and description of the method, the main elements of the structure of the method of upbringing and its species. Methodology. The basis of the research is historical and pedagogical analysis, theoretical generalization and interpretation of the results of pedagogical and historical pedagogical research, the method of analogy. Results. Authors, on the basis of studying the content of methods of upbringing, determine the basic elements of their structure: the nature of the actions, the sequence of their implementation, the result of applying the method. A unified approach to the description of methods is proposed, which includes: the goal (the planned results of applying this method of upbringing; initial data (age of students and other features; nature of actions; actions (methods of the method and the sequence of their execution. One of the three main types of the structure of the method of education is determined on the basis of the nature of the method’s actions in practice: linear, cyclic or branching. Practical implications. The results of the research can be applied in the field of pedagogical design and organization of the educational process, scientific and pedagogical search and training of pedagogical personnel.
Graph-based Operational Semantics of a Lazy Functional Languages
DEFF Research Database (Denmark)
Rose, Kristoffer Høgsbro
1992-01-01
Presents Graph Operational Semantics (GOS): a semantic specification formalism based on structural operational semantics and term graph rewriting. Demonstrates the method by specifying the dynamic ...
Directory of Open Access Journals (Sweden)
Yonghan Choi
2014-01-01
Full Text Available An adjoint sensitivity-based data assimilation (ASDA method is proposed and applied to a heavy rainfall case over the Korean Peninsula. The heavy rainfall case, which occurred on 26 July 2006, caused torrential rainfall over the central part of the Korean Peninsula. The mesoscale convective system (MCS related to the heavy rainfall was classified as training line/adjoining stratiform (TL/AS-type for the earlier period, and back building (BB-type for the later period. In the ASDA method, an adjoint model is run backwards with forecast-error gradient as input, and the adjoint sensitivity of the forecast error to the initial condition is scaled by an optimal scaling factor. The optimal scaling factor is determined by minimising the observational cost function of the four-dimensional variational (4D-Var method, and the scaled sensitivity is added to the original first guess. Finally, the observations at the analysis time are assimilated using a 3D-Var method with the improved first guess. The simulated rainfall distribution is shifted northeastward compared to the observations when no radar data are assimilated or when radar data are assimilated using the 3D-Var method. The rainfall forecasts are improved when radar data are assimilated using the 4D-Var or ASDA method. Simulated atmospheric fields such as horizontal winds, temperature, and water vapour mixing ratio are also improved via the 4D-Var or ASDA method. Due to the improvement in the analysis, subsequent forecasts appropriately simulate the observed features of the TL/AS- and BB-type MCSs and the corresponding heavy rainfall. The computational cost associated with the ASDA method is significantly lower than that of the 4D-Var method.
A method for synthesizing response functions of NaI detectors to gamma rays
International Nuclear Information System (INIS)
Sie, S.H.
1978-08-01
A simple method of parametrizing the response function of NaI detectors to gamma rays is described, based on decomposition of the pulse-height spectrum into components associated with the actual detection processes. Smooth dependence of the derived parameters on the gamma-ray energy made it possible to generate a lineshape for any gamma-ray energy by suitable interpolation techniques. The method is applied in analysis of spectra measured with a 7.6 x 7.6 cm NaI detector in continuum gamma-ray study following (HI,xn) reaction
A calculation method for finite depth free-surface green function
Directory of Open Access Journals (Sweden)
Yingyi Liu
2015-03-01
Full Text Available An improved boundary element method is presented for numerical analysis of hydrodynamic behavior of marine structures. A new algorithm for numerical solution of the finite depth free-surface Green function in three dimensions is developed based on multiple series representations. The whole range of the key parameter R/h is divided into four regions, within which different representation is used to achieve fast convergence. The well-known epsilon algorithm is also adopted to accelerate the convergence. The critical convergence criteria for each representation are investigated and provided. The proposed method is validated by several well-documented benchmark problems.
[The method of quality function deployment --QFD-- in nursing services planning].
Matsuda, L M; Evora, Y D; Boan, F S
2000-10-01
"Focus on the client" is the posture that must be adopted in order to offer quality products. Based on the Total Quality Management approach, the Quality Function Deployment method (QFD) is a tool to achieve this goal. The purpose of this study is to create a proposal for planning the nursing services following the steps and actions of this methodology. The basic procedure was to survey the necessity of 106 hospitalized patients. Data were deployed using the seventeen steps proposed. Results showed that the interaction is more important than the technique according to the clients and also that this method enables the implementation of quality in nursing care.
Decreasing Multicollinearity: A Method for Models with Multiplicative Functions.
Smith, Kent W.; Sasaki, M. S.
1979-01-01
A method is proposed for overcoming the problem of multicollinearity in multiple regression equations where multiplicative independent terms are entered. The method is not a ridge regression solution. (JKS)
Topology-Based Methods in Visualization 2015
Garth, Christoph; Weinkauf, Tino
2017-01-01
This book presents contributions on topics ranging from novel applications of topological analysis for particular problems, through studies of the effectiveness of modern topological methods, algorithmic improvements on existing methods, and parallel computation of topological structures, all the way to mathematical topologies not previously applied to data analysis. Topological methods are broadly recognized as valuable tools for analyzing the ever-increasing flood of data generated by simulation or acquisition. This is particularly the case in scientific visualization, where the data sets have long since surpassed the ability of the human mind to absorb every single byte of data. The biannual TopoInVis workshop has supported researchers in this area for a decade, and continues to serve as a vital forum for the presentation and discussion of novel results in applications in the area, creating a platform to disseminate knowledge about such implementations throughout and beyond the community. The present volum...
Some functional properties of composite material based on scrap tires
Plesuma, Renate; Malers, Laimonis
2013-09-01
The utilization of scrap tires still obtains a remarkable importance from the aspect of unloading the environment from non-degradable waste [1]. One of the most prospective ways for scrap tires reuse is a production of composite materials [2] This research must be considered as a continuation of previous investigations [3, 4]. It is devoted to the clarification of some functional properties, which are considered important for the view of practical applications, of the composite material. Some functional properties of the material were investigated, for instance, the compressive stress at different extent of deformation of sample (till 67% of initial thickness) (LVS EN 826) [5] and the resistance to UV radiation (modified method based on LVS EN 14836) [6]. Experiments were realized on the purposefully selected samples. The results were evaluated in the correlation with potential changes of Shore C hardness (Shore scale, ISO 7619-1, ISO 868) [7, 8]. The results showed noticeable resistance of the composite material against the mechanical influence and ultraviolet (UV) radiation. The correlation with the composition of the material, activity of binder, definite technological parameters, and the conditions supported during the production, were determined. It was estimated that selected properties and characteristics of the material are strongly dependent from the composition and technological parameters used in production of the composite material, and from the size of rubber crumb. Obtained results show possibility to attain desirable changes in the composite material properties by changing both the composition and technological parameters of examined material.
Computational Methods for Large Spatio-temporal Datasets and Functional Data Ranking
Huang, Huang
2017-07-16
This thesis focuses on two topics, computational methods for large spatial datasets and functional data ranking. Both are tackling the challenges of big and high-dimensional data. The first topic is motivated by the prohibitive computational burden in fitting Gaussian process models to large and irregularly spaced spatial datasets. Various approximation methods have been introduced to reduce the computational cost, but many rely on unrealistic assumptions about the process and retaining statistical efficiency remains an issue. We propose a new scheme to approximate the maximum likelihood estimator and the kriging predictor when the exact computation is infeasible. The proposed method provides different types of hierarchical low-rank approximations that are both computationally and statistically efficient. We explore the improvement of the approximation theoretically and investigate the performance by simulations. For real applications, we analyze a soil moisture dataset with 2 million measurements with the hierarchical low-rank approximation and apply the proposed fast kriging to fill gaps for satellite images. The second topic is motivated by rank-based outlier detection methods for functional data. Compared to magnitude outliers, it is more challenging to detect shape outliers as they are often masked among samples. We develop a new notion of functional data depth by taking the integration of a univariate depth function. Having a form of the integrated depth, it shares many desirable features. Furthermore, the novel formation leads to a useful decomposition for detecting both shape and magnitude outliers. Our simulation studies show the proposed outlier detection procedure outperforms competitors in various outlier models. We also illustrate our methodology using real datasets of curves, images, and video frames. Finally, we introduce the functional data ranking technique to spatio-temporal statistics for visualizing and assessing covariance properties, such as
Psychophysical "blinding" methods reveal a functional hierarchy of unconscious visual processing.
Breitmeyer, Bruno G
2015-09-01
Numerous non-invasive experimental "blinding" methods exist for suppressing the phenomenal awareness of visual stimuli. Not all of these suppressive methods occur at, and thus index, the same level of unconscious visual processing. This suggests that a functional hierarchy of unconscious visual processing can in principle be established. The empirical results of extant studies that have used a number of different methods and additional reasonable theoretical considerations suggest the following tentative hierarchy. At the highest levels in this hierarchy is unconscious processing indexed by object-substitution masking. The functional levels indexed by crowding, the attentional blink (and other attentional blinding methods), backward pattern masking, metacontrast masking, continuous flash suppression, sandwich masking, and single-flash interocular suppression, fall at progressively lower levels, while unconscious processing at the lowest levels is indexed by eye-based binocular-rivalry suppression. Although unconscious processing levels indexed by additional blinding methods is yet to be determined, a tentative placement at lower levels in the hierarchy is also given for unconscious processing indexed by Troxler fading and adaptation-induced blindness, and at higher levels in the hierarchy indexed by attentional blinding effects in addition to the level indexed by the attentional blink. The full mapping of levels in the functional hierarchy onto cortical activation sites and levels is yet to be determined. The existence of such a hierarchy bears importantly on the search for, and the distinctions between, neural correlates of conscious and unconscious vision. Copyright © 2015 Elsevier Inc. All rights reserved.
A Tomographic method based on genetic algorithms
International Nuclear Information System (INIS)
Turcanu, C.; Alecu, L.; Craciunescu, T.; Niculae, C.
1997-01-01
Computerized tomography being a non-destructive and non-evasive technique is frequently used in medical application to generate three dimensional images of objects. Genetic algorithms are efficient, domain independent for a large variety of problems. The proposed method produces good quality reconstructions even in case of very small number of projection angles. It requests no a priori knowledge about the solution and takes into account the statistical uncertainties. The main drawback of the method is the amount of computer memory and time needed. (author)