WorldWideScience

Sample records for proposed method compares

  1. Evaluation of a proposed optimization method for discrete-event simulation models

    Directory of Open Access Journals (Sweden)

    Alexandre Ferreira de Pinho

    2012-12-01

    Optimization methods combined with computer-based simulation are utilized in a wide range of manufacturing applications. However, with current technology these methods perform poorly, as they are only able to manipulate a single decision variable at a time. The objective of this article is therefore to evaluate a proposed optimization method for discrete-event simulation models, based on genetic algorithms, which is more efficient in terms of computational time when compared to software packages on the market. It should be emphasized that the quality of the response variable is not altered; that is, the proposed method maintains the effectiveness of the solutions. The study draws a comparison between the proposed method and a simulation tool that is already available on the market and has been examined in the academic literature. Conclusions are presented, confirming the proposed optimization method's efficiency.
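
    A minimal sketch of the idea may help: a genetic algorithm evolves the whole decision vector at once, rather than one variable at a time. Here `simulate()` is a hypothetical stand-in objective for the discrete-event model, and all GA settings are illustrative assumptions, not the paper's implementation.

```python
import random

def simulate(x):
    # Hypothetical stand-in for a discrete-event simulation run that
    # returns the performance of decision-variable vector x.
    return -((x[0] - 7) ** 2 + (x[1] - 3) ** 2)

def ga(pop_size=20, generations=50, bounds=(0, 10), n_vars=2):
    pop = [[random.randint(*bounds) for _ in range(n_vars)]
           for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=simulate, reverse=True)[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_vars)      # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:              # mutation
                child[random.randrange(n_vars)] = random.randint(*bounds)
            children.append(child)
        pop = parents + children
    return max(pop, key=simulate)

print(ga())  # all decision variables evolve simultaneously
```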

  2. Comparative law as method and the method of comparative law

    NARCIS (Netherlands)

    Hage, J.C.; Adams, M.; Heirbaut, D.

    2014-01-01

    This article addresses both the justificatory role of comparative law within legal research (comparative law as method) and the method of comparative law itself. In this connection two questions will be answered: 1. Is comparative law a method, or a set of methods, for legal research? 2. Does

  3. Comparative Analysis of Kernel Methods for Statistical Shape Learning

    National Research Council Canada - National Science Library

    Rathi, Yogesh; Dambreville, Samuel; Tannenbaum, Allen

    2006-01-01

    .... In this work, we perform a comparative analysis of shape learning techniques such as linear PCA, kernel PCA, locally linear embedding and propose a new method, kernelized locally linear embedding...

  4. Precision of glucose measurements in control sera by isotope dilution/mass spectrometry: proposed definitive method compared with a reference method

    International Nuclear Information System (INIS)

    Pelletier, O.; Arratoon, C.

    1987-01-01

    This improved isotope-dilution gas chromatographic/mass spectrometric (GC/MS) method, in which [13C]glucose is the internal standard, meets the requirements of a Definitive Method. In a first study with five reconstituted lyophilized sera, a nested analysis of variance of GC/MS values indicated considerable among-vial variation. The CV for 32 measurements per serum ranged from 0.5 to 0.9%. However, the concentration and uncertainty values (mmol/L per gram of serum) assigned to one serum by the NBS Definitive Method (7.56 +/- 0.28) were practically identical to those obtained with the proposed method (7.57 +/- 0.20). In the second study, we used twice as much [13C]glucose diluent to assay four serum pools and two lyophilized sera. The CV ranged from 0.26 to 0.5% for the serum pools and from 0.28 to 0.59% for the lyophilized sera. In comparison, results by the hexokinase/glucose-6-phosphate dehydrogenase reference method agreed within acceptable limits with those by the Definitive Method but tended to be slightly higher (up to 3%) for lyophilized serum samples or slightly lower (up to 2.5%) for serum pools.

  5. Comparative Evaluations of Four Specification Methods for Real-Time Systems

    Science.gov (United States)

    1989-12-01

    December 1989. Comparative Evaluations of Four Specification Methods for Real-Time Systems. David P. Wood, William G. Wood. Specification and Design Methods... Methods for Real-Time Systems. Abstract: A number of methods have been proposed in the last decade for the specification of system and software requirements... and software specification for real-time systems. Our process for the identification of methods that meet the above criteria is described in greater...

  6. A Proposal of Operational Risk Management Method Using FMEA for Drug Manufacturing Computerized System

    Science.gov (United States)

    Takahashi, Masakazu; Nanba, Reiji; Fukue, Yoshinori

    This paper proposes an operational Risk Management (RM) method using Failure Mode and Effects Analysis (FMEA) for drug manufacturing computerized systems (DMCS). The quality of a drug must not be affected by failures or operational mistakes of the DMCS. To avoid such situations, sufficient risk assessment has to be conducted on the DMCS and precautions taken. We propose an operational RM method using FMEA for DMCS. To develop the method, we gathered and compared FMEA results for DMCS and built a list of failure modes, failures, and countermeasures. Using this list, we can conduct RM in the design phase, find failures, and implement countermeasures efficiently. Additionally, we can find some failures that had not been identified before.

  7. Statistical Methods for Comparative Phenomics Using High-Throughput Phenotype Microarrays

    KAUST Repository

    Sturino, Joseph

    2010-01-24

    We propose statistical methods for comparing phenomics data generated by the Biolog Phenotype Microarray (PM) platform for high-throughput phenotyping. Instead of the routinely used visual inspection of data with no sound inferential basis, we develop two approaches. The first approach is based on quantifying the distance between mean or median curves from two treatments and then applying a permutation test; we also consider a permutation test applied to areas under mean curves. The second approach employs functional principal component analysis. Properties of the proposed methods are investigated on both simulated data and data sets from the PM platform.
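
    The first approach lends itself to a compact sketch: a permutation test on the distance between treatment mean curves. The synthetic data and the choice of L2 distance below are illustrative assumptions, not the authors' exact statistic.

```python
import numpy as np

def l2_distance(a, b):
    # Distance between the two treatments' mean curves (rows = replicates).
    return np.sqrt(np.sum((a.mean(axis=0) - b.mean(axis=0)) ** 2))

def permutation_test(group1, group2, n_perm=10000, seed=0):
    rng = np.random.default_rng(seed)
    pooled = np.vstack([group1, group2])
    n1 = len(group1)
    observed = l2_distance(group1, group2)
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))       # shuffle treatment labels
        if l2_distance(pooled[idx[:n1]], pooled[idx[n1:]]) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)            # permutation p-value

# Hypothetical kinetic curves: 12 wells x 48 time points per treatment.
curves_a = np.random.default_rng(1).normal(0.0, 1.0, (12, 48))
curves_b = np.random.default_rng(2).normal(0.5, 1.0, (12, 48))
print(permutation_test(curves_a, curves_b))
```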

  8. Instrumental variable methods in comparative safety and effectiveness research.

    Science.gov (United States)

    Brookhart, M Alan; Rassen, Jeremy A; Schneeweiss, Sebastian

    2010-06-01

    Instrumental variable (IV) methods have been proposed as a potential approach to the common problem of uncontrolled confounding in comparative studies of medical interventions, but IV methods are unfamiliar to many researchers. The goal of this article is to provide a non-technical, practical introduction to IV methods for comparative safety and effectiveness research. We outline the principles and basic assumptions necessary for valid IV estimation, discuss how to interpret the results of an IV study, provide a review of instruments that have been used in comparative effectiveness research, and suggest some minimal reporting standards for an IV analysis. Finally, we offer our perspective of the role of IV estimation vis-à-vis more traditional approaches based on statistical modeling of the exposure or outcome. We anticipate that IV methods will often be underpowered for drug safety studies of very rare outcomes, but may be potentially useful in studies of intended effects where uncontrolled confounding may be substantial.
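
    For readers unfamiliar with IV estimation, a minimal two-stage least squares (2SLS) sketch on simulated data shows the core idea; the variable roles (instrument, exposure, confounder) and effect sizes are illustrative, not drawn from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
u = rng.normal(size=n)                        # unmeasured confounder
z = rng.binomial(1, 0.5, size=n)              # instrument (e.g., prescribing preference)
x = 0.8 * z + 0.5 * u + rng.normal(size=n)    # exposure driven by instrument and confounder
y = 1.0 * x + 0.7 * u + rng.normal(size=n)    # outcome; true exposure effect = 1.0

# Stage 1: regress exposure on instrument; Stage 2: regress outcome on fitted exposure.
x_hat = np.polyval(np.polyfit(z, x, 1), z)
beta_iv = np.polyfit(x_hat, y, 1)[0]
beta_ols = np.polyfit(x, y, 1)[0]             # naive estimate, biased by confounding
print(f"IV estimate {beta_iv:.2f} vs naive OLS {beta_ols:.2f}")
```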

  9. Instrumental variable methods in comparative safety and effectiveness research†

    Science.gov (United States)

    Brookhart, M. Alan; Rassen, Jeremy A.; Schneeweiss, Sebastian

    2010-01-01

    Instrumental variable (IV) methods have been proposed as a potential approach to the common problem of uncontrolled confounding in comparative studies of medical interventions, but IV methods are unfamiliar to many researchers. The goal of this article is to provide a non-technical, practical introduction to IV methods for comparative safety and effectiveness research. We outline the principles and basic assumptions necessary for valid IV estimation, discuss how to interpret the results of an IV study, provide a review of instruments that have been used in comparative effectiveness research, and suggest some minimal reporting standards for an IV analysis. Finally, we offer our perspective of the role of IV estimation vis-à-vis more traditional approaches based on statistical modeling of the exposure or outcome. We anticipate that IV methods will often be underpowered for drug safety studies of very rare outcomes, but may be potentially useful in studies of intended effects where uncontrolled confounding may be substantial. PMID:20354968

  10. A proposal on alternative sampling-based modeling method of spherical particles in stochastic media for Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Song Hyun; Lee, Jae Yong; Kim, Do Hyun; Kim, Jong Kyung [Dept. of Nuclear Engineering, Hanyang University, Seoul (Korea, Republic of); Noh, Jae Man [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-08-15

    The chord length sampling method is used in Monte Carlo simulations to model spherical particles in stochastic media with a random sampling technique. It has received attention due to its high calculation efficiency and user convenience; however, a technical issue regarding the boundary effect has been noted. In this study, after analyzing the distribution characteristics of spherical particles using an explicit method, an alternative chord length sampling method is proposed. In addition, a correction method for the boundary effect is proposed for modeling in finite media. Using the proposed method, sample probability distributions and relative errors were estimated and compared with those calculated by the explicit method. The results show that the reconstruction ability and modeling accuracy of the particle probability distribution with the proposed method are considerably high. Also, the local packing fraction results show that the proposed method successfully solves the boundary effect problem. It is expected that the proposed method can contribute to increasing the modeling accuracy in stochastic media.
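
    The underlying idea of chord length sampling can be sketched compactly: a ray's flight distance through the matrix to the next sphere is sampled from an exponential whose mean follows from the sphere radius and packing fraction. This is the standard CLS kernel under assumed parameters, not the paper's boundary-corrected variant.

```python
import numpy as np

R, f = 0.1, 0.05                      # sphere radius and packing fraction (assumed)
mean_chord = 4 * R * (1 - f) / (3 * f)  # mean matrix chord between spheres

def track_to_boundary(L, rng):
    """Count spheres crossed by a ray over a slab of thickness L."""
    x, hits = 0.0, 0
    while True:
        x += rng.exponential(mean_chord)      # matrix flight to next sphere
        if x >= L:
            return hits
        x += 2 * R * np.sqrt(rng.random())    # random chord in sphere, mean 4R/3
        hits += 1

rng = np.random.default_rng(0)
print(np.mean([track_to_boundary(5.0, rng) for _ in range(10000)]))
```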

  11. A proposal on alternative sampling-based modeling method of spherical particles in stochastic media for Monte Carlo simulation

    International Nuclear Information System (INIS)

    Kim, Song Hyun; Lee, Jae Yong; Kim, Do Hyun; Kim, Jong Kyung; Noh, Jae Man

    2015-01-01

    The chord length sampling method is used in Monte Carlo simulations to model spherical particles in stochastic media with a random sampling technique. It has received attention due to its high calculation efficiency and user convenience; however, a technical issue regarding the boundary effect has been noted. In this study, after analyzing the distribution characteristics of spherical particles using an explicit method, an alternative chord length sampling method is proposed. In addition, a correction method for the boundary effect is proposed for modeling in finite media. Using the proposed method, sample probability distributions and relative errors were estimated and compared with those calculated by the explicit method. The results show that the reconstruction ability and modeling accuracy of the particle probability distribution with the proposed method are considerably high. Also, the local packing fraction results show that the proposed method successfully solves the boundary effect problem. It is expected that the proposed method can contribute to increasing the modeling accuracy in stochastic media.

  12. A proposal on evaluation method of neutron absorption performance to substitute conventional neutron attenuation test

    International Nuclear Information System (INIS)

    Kim, Je Hyun; Shim, Chang Ho; Kim, Sung Hyun; Choe, Jung Hun; Cho, In Hak; Park, Hwan Seo; Park, Hyun Seo; Kim, Jung Ho; Kim, Yoon Ho

    2016-01-01

    For the verification of newly developed neutron absorbers, one of the guidelines on the qualification and acceptance of neutron absorbers is the neutron attenuation test. However, this approach poses a problem for qualification in that it cannot distinguish how neutrons are attenuated by the material. In this study, an estimation method for the neutron absorption performance of materials is proposed that detects both directly penetrating and back-scattered neutrons. For verification of the proposed method, MCNP simulations with the experimental system designed in this study were performed using polyethylene, iron, normal glass, and a vitrified form. The results show that the neutron absorption ability can easily be tested using a single-absorber model. Also, the simulation results for the single-absorber and double-absorber models verify that the proposed method can evaluate not only the direct thermal neutrons passing through the material, but also the scattered neutrons reflected by it. Therefore, the neutron absorption performance can be estimated more accurately with the proposed method than with the conventional neutron attenuation test. It is expected that the proposed method can contribute to increasing the reliability of neutron absorber performance assessment.

  13. A proposal on evaluation method of neutron absorption performance to substitute conventional neutron attenuation test

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Je Hyun; Shim, Chang Ho [Dept. of Nuclear Engineering, Hanyang University, Seoul (Korea, Republic of); Kim, Sung Hyun [Nuclear Fuel Cycle Waste Treatment Research Division, Research Reactor Institute, Kyoto University, Osaka (Japan); Choe, Jung Hun; Cho, In Hak; Park, Hwan Seo [Ionizing Radiation Center, Nuclear Fuel Cycle Waste Treatment Research Division, Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Park, Hyun Seo; Kim, Jung Ho; Kim, Yoon Ho [Ionizing Radiation Center, Korea Research Institute of Standards and Science, Daejeon (Korea, Republic of)

    2016-12-15

    For the verification of newly developed neutron absorbers, one of the guidelines on the qualification and acceptance of neutron absorbers is the neutron attenuation test. However, this approach poses a problem for qualification in that it cannot distinguish how neutrons are attenuated by the material. In this study, an estimation method for the neutron absorption performance of materials is proposed that detects both directly penetrating and back-scattered neutrons. For verification of the proposed method, MCNP simulations with the experimental system designed in this study were performed using polyethylene, iron, normal glass, and a vitrified form. The results show that the neutron absorption ability can easily be tested using a single-absorber model. Also, the simulation results for the single-absorber and double-absorber models verify that the proposed method can evaluate not only the direct thermal neutrons passing through the material, but also the scattered neutrons reflected by it. Therefore, the neutron absorption performance can be estimated more accurately with the proposed method than with the conventional neutron attenuation test. It is expected that the proposed method can contribute to increasing the reliability of neutron absorber performance assessment.

  14. A comparative study of the maximum power point tracking methods for PV systems

    International Nuclear Information System (INIS)

    Liu, Yali; Li, Ming; Ji, Xu; Luo, Xi; Wang, Meidi; Zhang, Ying

    2014-01-01

    Highlights: • An improved maximum power point tracking method for PV systems is proposed. • The theoretical derivation procedure of the proposed method is provided. • Simulation models of MPPT trackers were established in MATLAB/Simulink. • Experiments were conducted to verify the effectiveness of the proposed MPPT method. - Abstract: Maximum power point tracking (MPPT) algorithms play an important role in optimizing the power and efficiency of a photovoltaic (PV) generation system. To address the trade-off in the classical Perturb and Observe (P&Oa) method between response speed and steady-state tracking accuracy, an improved P&O (P&Ob) method is put forward in this paper using the Aitken interpolation algorithm. To validate the correctness and performance of the proposed method, simulation and experimental studies were carried out. Simulation models of the classical P&Oa method and the improved P&Ob method were established in MATLAB/Simulink to analyze each technique under varying solar irradiation and temperature. The experimental results show that the tracking efficiency of the P&Ob method averages 93%, compared to 72% for the P&Oa method; this conclusion basically agrees with the simulation study. Finally, we propose the applicable conditions and scope of these MPPT methods in practical applications.
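
    The classical P&O loop the paper improves on fits in a few lines: perturb the operating voltage, observe the power, and reverse direction when power drops. The PV power curve below is a toy model, and the paper's Aitken-interpolation enhancement is not reproduced.

```python
def pv_power(v):
    # Toy PV model: current falls off toward the open-circuit voltage (40 V).
    return v * max(0.0, 8.0 * (1 - (v / 40.0) ** 5))

def perturb_and_observe(v=20.0, dv=0.5, steps=100):
    p_prev = pv_power(v)
    direction = +1
    for _ in range(steps):
        v += direction * dv
        p = pv_power(v)
        if p < p_prev:              # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v, p_prev

print(perturb_and_observe())        # oscillates near the maximum power point
```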

  15. Comparative analysis of methods for classification in predicting the quality of bread

    OpenAIRE

    E. A. Balashova; V. K. Bitjukov; E. A. Savvina

    2013-01-01

    A comparative analysis of classification methods (two-step cluster analysis, discriminant analysis, and neural networks) was performed. A system of informative features that classifies with a minimum of errors has been proposed.

  16. Comparative analysis of methods for classification in predicting the quality of bread

    Directory of Open Access Journals (Sweden)

    E. A. Balashova

    2013-01-01

    A comparative analysis of classification methods (two-step cluster analysis, discriminant analysis, and neural networks) was performed. A system of informative features that classifies with a minimum of errors has been proposed.

  17. Comparative Analysis of Hydrogen Production Methods with Nuclear Reactors

    International Nuclear Information System (INIS)

    Morozov, Andrey

    2008-01-01

    Hydrogen is a highly effective and ecologically clean fuel. It can be produced by a variety of methods; presently the most common are electrolysis of water and steam reforming of natural gas. It is evident that the leading energy source for future hydrogen production is nuclear energy. Several types of reactors are being considered for hydrogen production, and several methods exist to produce hydrogen, including thermochemical cycles and high-temperature electrolysis. This article presents a comparative analysis of various hydrogen production methods. It considers the possibility of hydrogen production with nuclear reactors and proposes the implementation of a research program in this field at the IPPE high-temperature experimental facility cooled by sodium-potassium eutectic (VTS rig). (authors)

  18. Assessment of proposed electromagnetic quantum vacuum energy extraction methods

    OpenAIRE

    Moddel, Garret

    2009-01-01

    In research articles and patents several methods have been proposed for the extraction of zero-point energy from the vacuum. None has been reliably demonstrated, but the proposals remain largely unchallenged. In this paper the feasibility of these methods is assessed in terms of underlying thermodynamics principles of equilibrium, detailed balance, and conservation laws. The methods are separated into three classes: nonlinear processing of the zero-point field, mechanical extraction using Cas...

  19. Large-scale Comparative Study of Hi-C-based Chromatin 3D Structure Modeling Methods

    KAUST Repository

    Wang, Cheng

    2018-05-17

    Chromatin is a complex polymer molecule in eukaryotic cells, primarily consisting of DNA and histones. Many works have shown that the 3D folding of chromatin structure plays an important role in DNA expression. The recently proposed Chromosome Conformation Capture technologies, especially the Hi-C assays, provide us with an opportunity to study how the 3D structures of the chromatin are organized. Based on the data from Hi-C experiments, many chromatin 3D structure modeling methods have been proposed. However, there is limited ground truth to validate these methods and no robust chromatin structure alignment algorithms to evaluate their performance. In our work, we first made a thorough literature review of 25 publicly available population Hi-C-based chromatin 3D structure modeling methods. Furthermore, to evaluate and compare the performance of these methods, we proposed a novel data simulation method, which combines population Hi-C and single-cell Hi-C data without ad hoc parameters. Also, we designed global and local alignment algorithms to measure the similarity between the templates and the chromatin structures predicted by different modeling methods. Finally, the results from large-scale comparative tests indicate that our alignment algorithms significantly outperform those in the literature.

  20. A comparative study of two stochastic mode reduction methods

    Energy Technology Data Exchange (ETDEWEB)

    Stinis, Panagiotis

    2005-09-01

    We present a comparative study of two methods for the reduction of the dimensionality of a system of ordinary differential equations that exhibits time-scale separation. Both methods lead to a reduced system of stochastic differential equations. The novel feature of these methods is that they allow the use, in the reduced system, of higher order terms in the resolved variables. The first method, proposed by Majda, Timofeyev and Vanden-Eijnden, is based on an asymptotic strategy developed by Kurtz. The second method is a short-memory approximation of the Mori-Zwanzig projection formalism of irreversible statistical mechanics, as proposed by Chorin, Hald and Kupferman. We present conditions under which the reduced models arising from the two methods should have similar predictive ability. We apply the two methods to test cases that satisfy these conditions. The form of the reduced models and the numerical simulations show that the two methods have similar predictive ability as expected.

  1. Proposal of Evolutionary Simplex Method for Global Optimization Problem

    Science.gov (United States)

    Shimizu, Yoshiaki

    To make agile decisions in a rational manner, the role of optimization engineering has become increasingly important under diversified customer demand. With this point of view, in this paper we propose a new evolutionary method serving as an optimization technique in the paradigm of optimization engineering. The developed method is promising for globally solving the various complicated problems that appear in real-world applications. It evolves the conventional Nelder-Mead simplex method using ideas borrowed from recent meta-heuristic methods such as PSO. After presenting an algorithm to handle linear inequality constraints effectively, we validate the effectiveness of the proposed method through comparison with other methods on several benchmark problems.
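
    As a rough sketch of the baseline being evolved, a conventional Nelder-Mead simplex search can handle a linear inequality constraint through a simple penalty term. The objective and constraint below are assumptions for illustration; the paper's evolutionary and PSO-like extensions are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

A = np.array([[1.0, 1.0]])        # assumed constraint: x0 + x1 <= 4
b = np.array([4.0])

def objective(x):
    return (x[0] - 3) ** 2 + (x[1] - 2) ** 2

def penalized(x):
    # Quadratic penalty for violating the linear inequality constraints.
    violation = np.maximum(A @ x - b, 0.0).sum()
    return objective(x) + 1e6 * violation ** 2

res = minimize(penalized, x0=[0.0, 0.0], method="Nelder-Mead")
print(res.x)                      # ~ [2.5, 1.5], on the constraint boundary
```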

  2. A Proposed Arabic Handwritten Text Normalization Method

    Directory of Open Access Journals (Sweden)

    Tarik Abu-Ain

    2014-11-01

    Text normalization is an important technique in document image analysis and recognition. It consists of many preprocessing stages, including slope correction, text padding, skew correction, and straightening of the writing line. As such, text normalization plays an important role in many procedures such as text segmentation, feature extraction, and character recognition. In the present article, a new method for text baseline detection, straightening, and slant correction for Arabic handwritten texts is proposed. The method comprises a set of sequential steps: first, component segmentation is performed, followed by component thinning; then, the direction features of the skeletons are extracted and the candidate baseline regions are determined. After that, the correct baseline region is selected, and finally, the baselines of all components are aligned with the writing line. The experiments are conducted on the IFN/ENIT benchmark Arabic dataset. The results show that the proposed method has a promising and encouraging performance.

  3. Gene set analysis: limitations in popular existing methods and proposed improvements.

    Science.gov (United States)

    Mishra, Pashupati; Törönen, Petri; Leino, Yrjö; Holm, Liisa

    2014-10-01

    Gene set analysis is the analysis of a set of genes that collectively contribute to a biological process. Most popular gene set analysis methods are based on empirical P-values that require a large number of permutations. Despite the numerous gene set analysis methods developed in the past decade, the most popular methods still suffer from serious limitations. We present a gene set analysis method (mGSZ) based on the Gene Set Z-scoring function (GSZ) and asymptotic P-values. Asymptotic P-value calculation requires fewer permutations and thus speeds up the gene set analysis process. We compare the GSZ scoring function with seven popular gene set scoring functions and show that GSZ stands out as the best scoring function. In addition, we show improved performance of the GSA method when the max-mean statistic is replaced by the GSZ scoring function. We demonstrate the importance of both gene and sample permutations by showing the consequences in the absence of one or the other. A comparison of asymptotic and empirical methods of P-value estimation demonstrates a clear advantage of asymptotic P-values over empirical ones. We show that mGSZ outperforms the state-of-the-art methods based on two different evaluations. We compared mGSZ results with permutation and rotation tests and show that rotation does not improve our asymptotic P-values. We also propose well-known asymptotic distribution models for three of the compared methods. mGSZ is available as an R package from cran.r-project.org.
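
    A simplified stand-in for the gene-permutation side of such methods is sketched below. The scoring function is a generic set z-score, not the exact GSZ statistic, and mGSZ's fitting of an asymptotic null distribution is not reproduced; all data are simulated.

```python
import numpy as np

def set_zscore(gene_scores, in_set):
    # z-score of the set's mean differential-expression score against
    # the genome-wide mean, scaled for the set size.
    m = in_set.sum()
    mu, sd = gene_scores.mean(), gene_scores.std(ddof=1)
    return (gene_scores[in_set].mean() - mu) / (sd / np.sqrt(m))

rng = np.random.default_rng(0)
scores = rng.normal(size=2000)            # per-gene differential expression
scores[:50] += 0.8                        # an enriched set of 50 genes
member = np.zeros(2000, dtype=bool)
member[:50] = True

obs = set_zscore(scores, member)
null = [set_zscore(scores, rng.permutation(member)) for _ in range(2000)]
p_emp = (np.sum(np.abs(null) >= abs(obs)) + 1) / (2000 + 1)
print(f"z = {obs:.2f}, empirical p = {p_emp:.4f}")
```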

  4. Comments and Remarks over Classic Linear Loop-Gain Method for Oscillator Design and Analysis. New Proposed Method Based on NDF/RRT

    Directory of Open Access Journals (Sweden)

    J. L. Jimenez-Martin

    2012-04-01

    This paper describes a new method for designing oscillators based on the Normalized Determinant Function (NDF) and Return Relations (RRT). First, a review of the loop-gain method is performed, showing its pros and cons and including some examples that explore the wrong solutions this method can provide; they are wrong because certain conditions have to be fulfilled beforehand in order to obtain right ones. These conditions are described, and it is demonstrated that NDF analysis is necessary and that the Return Relations (RRT) are useful because they are in fact related to the true loop gain. The paper concludes with steps for oscillator design and analysis using the proposed NDF/RRT method, compared with previous wrong solutions, pointing out the new accuracy achieved in predicting the oscillation frequency and QL. Further examples of reference-plane oscillators (Z/Y/rho), for which the loop-gain method is clearly difficult or even impossible to apply, are solved with the new proposed NDF/RRT method.

  5. A Comparative Study of Distribution System Parameter Estimation Methods

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup

    2016-07-17

    In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems, and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems. Therefore, estimating parameters is much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is a better approach for parameter estimation in distribution systems. Simulation studies are done on a modified version of IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero error in the other system model parameters.
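
    The state-vector augmentation idea can be sketched on a toy scalar system: append the unknown parameter to the state and estimate both with an extended Kalman filter. The system, noise levels, and initial guesses below are assumptions; the paper applies the same idea to feeder parameters using combined measurement snapshots.

```python
import numpy as np

rng = np.random.default_rng(0)
a_true, x = 0.95, 1.0
zs = []
for _ in range(200):                         # simulate x_{k+1} = a*x_k + w
    x = a_true * x + rng.normal(0, 0.1)
    zs.append(x + rng.normal(0, 0.05))       # noisy measurements of x

s = np.array([0.5, 0.5])                     # augmented state [x, a], rough guess
P = np.eye(2)
Q = np.diag([0.1 ** 2, 1e-6])                # parameter modeled as near-constant
Rm = 0.05 ** 2
H = np.array([[1.0, 0.0]])                   # only x is measured
for z in zs:
    F = np.array([[s[1], s[0]], [0.0, 1.0]])  # Jacobian of f(x, a) = (a*x, a)
    s = np.array([s[1] * s[0], s[1]])         # predict
    P = F @ P @ F.T + Q
    K = P @ H.T / (H @ P @ H.T + Rm)          # Kalman gain
    s = s + (K * (z - s[0])).ravel()          # update with measurement z
    P = (np.eye(2) - K @ H) @ P
print(f"estimated a = {s[1]:.3f} (true {a_true})")
```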

  6. Creep-fatigue evaluation method for weld joint of Mod.9Cr-1Mo steel Part II: Plate bending test and proposal of a simplified evaluation method

    Energy Technology Data Exchange (ETDEWEB)

    Ando, Masanori, E-mail: ando.masanori@jaea.go.jp; Takaya, Shigeru, E-mail: takaya.shigeru@jaea.go.jp

    2016-12-15

    Highlights: • A creep-fatigue evaluation method for weld joints of Mod.9Cr-1Mo steel is proposed. • A simplified evaluation method is also proposed for codification. • Both proposed evaluation methods were validated by the plate bending test. • For codification, the local stress and strain behavior was analyzed. - Abstract: In the present study, to develop an evaluation procedure and design rules for Mod.9Cr-1Mo steel weld joints, a method for evaluating the creep-fatigue life of Mod.9Cr-1Mo steel weld joints was proposed based on finite element analysis (FEA) and a series of cyclic plate bending tests of longitudinally and horizontally seamed plates. The strain concentration and redistribution behaviors were evaluated, and the failure cycles were estimated by FEA considering the test conditions and the metallurgical discontinuities in the weld joints. Inelastic FEA models consisting of the base metal, heat-affected zone, and weld metal were employed to estimate the elastic follow-up behavior caused by the metallurgical discontinuities. The elastic follow-up factors determined by comparing the elastic and inelastic FEA results were less than 1.5. Based on the elastic follow-up factors estimated via inelastic FEA, a simplified technique using elastic FEA was proposed for evaluating the creep-fatigue life of Mod.9Cr-1Mo steel weld joints. The creep-fatigue life obtained from the plate bending test was compared to those estimated from the results of inelastic FEA and by the simplified evaluation method.

  7. Creep-fatigue evaluation method for weld joint of Mod.9Cr-1Mo steel Part II: Plate bending test and proposal of a simplified evaluation method

    International Nuclear Information System (INIS)

    Ando, Masanori; Takaya, Shigeru

    2016-01-01

    Highlights: • A creep-fatigue evaluation method for weld joints of Mod.9Cr-1Mo steel is proposed. • A simplified evaluation method is also proposed for codification. • Both proposed evaluation methods were validated by the plate bending test. • For codification, the local stress and strain behavior was analyzed. - Abstract: In the present study, to develop an evaluation procedure and design rules for Mod.9Cr-1Mo steel weld joints, a method for evaluating the creep-fatigue life of Mod.9Cr-1Mo steel weld joints was proposed based on finite element analysis (FEA) and a series of cyclic plate bending tests of longitudinally and horizontally seamed plates. The strain concentration and redistribution behaviors were evaluated, and the failure cycles were estimated by FEA considering the test conditions and the metallurgical discontinuities in the weld joints. Inelastic FEA models consisting of the base metal, heat-affected zone, and weld metal were employed to estimate the elastic follow-up behavior caused by the metallurgical discontinuities. The elastic follow-up factors determined by comparing the elastic and inelastic FEA results were less than 1.5. Based on the elastic follow-up factors estimated via inelastic FEA, a simplified technique using elastic FEA was proposed for evaluating the creep-fatigue life of Mod.9Cr-1Mo steel weld joints. The creep-fatigue life obtained from the plate bending test was compared to those estimated from the results of inelastic FEA and by the simplified evaluation method.

  8. A Proposed Method for Solving Fuzzy System of Linear Equations

    Directory of Open Access Journals (Sweden)

    Reza Kargar

    2014-01-01

    This paper proposes a new method for solving a fuzzy system of linear equations with a crisp coefficient matrix and a fuzzy or interval right-hand side. Some conditions for the existence of a fuzzy or interval solution of an m×n linear system are derived, and a practical algorithm is introduced in detail. The method is based on a linear programming formulation. Finally, the applicability of the proposed method is illustrated by some numerical examples.
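
    One generic way to realize the linear-programming flavor of such a method is to bound each solution component with a pair of LPs over the interval right-hand side. The matrix and intervals below are illustrative, and this is a generic interval formulation rather than necessarily the authors' exact algorithm.

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[2.0, 1.0], [1.0, 3.0]])   # crisp coefficient matrix (assumed)
b_lo = np.array([3.8, 6.8])              # interval right-hand side [b_lo, b_hi]
b_hi = np.array([4.2, 7.2])

A_ub = np.vstack([A, -A])                # encodes b_lo <= A x <= b_hi
b_ub = np.concatenate([b_hi, -b_lo])

hull = []
for i in range(2):
    c = np.zeros(2)
    c[i] = 1.0                           # min/max each component x_i
    lo = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 2).fun
    hi = -linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 2).fun
    hull.append((lo, hi))
print(hull)                              # interval hull of each solution component
```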

  9. Share Valuation Using the Comparative Method

    Directory of Open Access Journals (Sweden)

    Jana Marková

    2016-12-01

    The comparative method is one of the methods for the valuation of equity securities in theory and practice. The practical application of the comparative method is regulated by MS Decree No. 492/2004 Coll. on the establishment of the universal value of property and by Act No. 431/2002 Coll. on accounting, as amended by later regulations. According to this method, the internal (general, real) value is derived from information on specific prices or values of shares of similar companies. The comparative method can be applied without serious problems only provided that the differences between the companies are small; otherwise, its use is problematic. Finding a comparable public limited company is not a problem on a mature capital market, where the number of traded comparable companies is high, but it is very difficult on a small market such as Slovakia's stock market. This paper discusses the application of comparative methods to the non-standard Slovak capital market.

  10. Space-partition method for the variance-based sensitivity analysis: Optimal partition scheme and comparative study

    International Nuclear Information System (INIS)

    Zhai, Qingqing; Yang, Jun; Zhao, Yu

    2014-01-01

    Variance-based sensitivity analysis has been widely studied and has asserted itself among practitioners. Monte Carlo simulation methods are well developed for the calculation of variance-based sensitivity indices, but they do not make full use of each model run. Recently, several works mentioned a scatter-plot partitioning method to estimate the variance-based sensitivity indices from given data, where a single bunch of samples is sufficient to estimate all the sensitivity indices. This paper focuses on the space-partition method for the estimation of variance-based sensitivity indices, and its convergence and other performance characteristics are investigated. Since the method heavily depends on the partition scheme, the influence of the partition scheme is discussed and an optimal partition scheme is proposed based on minimizing the estimator's variance. A decomposition and integration procedure is proposed to improve the estimation quality of higher-order sensitivity indices. The proposed space-partition method is compared with the more traditional method, and test cases show that it outperforms the traditional one.
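
    The basic scatter-plot partition estimator for a first-order index is easy to sketch: bin the samples of one input and compare the variance of the within-bin means of the output with the total variance. The equal-probability binning below is one possible scheme of the kind the paper optimizes; the model is a toy.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_bins = 100000, 50
x1, x2 = rng.uniform(size=n), rng.uniform(size=n)
y = x1 + 0.3 * x2 + 0.1 * rng.normal(size=n)     # toy model

# Equal-probability partition of x1, all from a single batch of samples.
edges = np.quantile(x1, np.linspace(0, 1, n_bins + 1))
idx = np.clip(np.searchsorted(edges, x1, side="right") - 1, 0, n_bins - 1)
bin_means = np.array([y[idx == k].mean() for k in range(n_bins)])
bin_counts = np.array([(idx == k).sum() for k in range(n_bins)])

# First-order index: Var(E[Y | X1]) / Var(Y).
S1 = np.average((bin_means - y.mean()) ** 2, weights=bin_counts) / y.var()
print(f"estimated S1 = {S1:.3f}")                # analytic value ~ 0.83
```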

  11. Proposing a sequential comparative analysis for assessing multilateral health agency transformation and sustainable capacity: exploring the advantages of institutional theory.

    Science.gov (United States)

    Gómez, Eduardo J

    2014-05-20

    This article proposes an approach for comparing and assessing the adaptive capacity of multilateral health agencies in meeting country and individual healthcare needs. Most studies comparing multilateral health agencies have failed to clearly propose a method for conducting agency comparisons. This study used a qualitative case study methodology, in which secondary and primary case study literature was used to conduct comparisons of multilateral health agencies. Through the proposed Sequential Comparative Analysis (SCA), the author found a more effective way to justify the selection of cases, compare and assess organizational transformative capacity, and learn from agency success in policy sustainability processes. To more effectively understand and explain why some multilateral health agencies are more capable of adapting to country and individual healthcare needs, SCA provides a methodological approach that may help to better understand why these agencies are so different and what we can learn from successful reform processes. As funding challenges continue to hamper these agencies' adaptive capacity, learning from each other will become increasingly important.

  12. Comparative analysis of stress in a new proposal of dental implants.

    Science.gov (United States)

    Valente, Mariana Lima da Costa; de Castro, Denise Tornavoi; Macedo, Ana Paula; Shimano, Antonio Carlos; Dos Reis, Andréa Cândido

    2017-08-01

    The purpose of this study was to compare, through photoelastic analysis, the stress distribution around conventional and modified external hexagon (EH) and morse taper (MT) dental implant connections. Four photoelastic models were prepared (n=1): Model 1, a conventional EH cylindrical implant (Ø 4.0mm×11mm - Neodent®); Model 2, a modified EH cylindrical implant; Model 3, a conventional MT conical implant (Ø 4.3mm×10mm - Neodent®); and Model 4, a modified MT conical implant. Axial and oblique (30° tilt) loads of 100 and 150N were applied to the devices coupled to the implants. A plane transmission polariscope was used in the analysis of fringes, and each position of interest was recorded by a digital camera. The Tardy method was used to quantify the fringe order (n), from which the maximum shear stress (τ) value at each selected point was calculated. The results showed lower stress concentration in the modified cylindrical implant (EH) compared to the conventional model under the 150N axial and 100N oblique loads. Lower stress was observed for the modified conical (MT) implant under the 100 and 150N oblique loads, which was not observed for the conventional implant model. The comparative analysis of the models showed that the new design proposal generates good stress distribution, especially in the cervical third, suggesting the preservation of bone tissue in the bone crest region.

  13. Construction Method of Display Proposal for Commodities in Sales Promotion by Genetic Algorithm

    Science.gov (United States)

    Yumoto, Masaki

    In a sales promotion task, a wholesaler prepares and presents a display proposal for commodities in order to negotiate with a retailer's buyers about which commodities they should sell. To automate sales promotion tasks, the proposal has to be constructed according to the target retailer's buyer. However, it is difficult to construct a proposal suitable for the target retail store because there are too many combinations of commodities. This paper proposes a construction method based on a genetic algorithm (GA). The proposed method represents initial display proposals as genes, improves them according to an evaluation value using the GA, and rearranges the proposal with the highest evaluation value according to the commodity classification. Through a practical experiment, we confirmed that the display proposal produced by the proposed method is similar to one constructed by a wholesaler.

  14. Visual assessment of BIPV retrofit design proposals for selected historical buildings using the saliency map method

    Directory of Open Access Journals (Sweden)

    Ran Xu

    2015-06-01

    With increasing awareness of energy efficiency, many old buildings have to undergo massive facade energy retrofits. How to predict the visual impact that solar installations have on the aesthetic and cultural value of these buildings has been a heated debate in Switzerland (and throughout the world). The usual evaluation method for describing the visual impact of BIPV is based on semantic and qualitative descriptors and is strongly dependent on personal preferences. The evaluation scale is therefore relative, flexible, and imprecise. This paper proposes a new method to accurately measure the visual impact that BIPV installations have on a historical building by using the saliency map method. By imitating the working principles of the human eye, the method measures how much the BIPV design proposals differ from the original building facade in terms of attracting human visual attention. The result is directly presented in a quantitative manner and can be used to compare the fitness of different BIPV design proposals. The measuring process is numeric, objective, and more precise.

  15. Comparative study of methods for recognition of an unknown person's action from a video sequence

    Science.gov (United States)

    Hori, Takayuki; Ohya, Jun; Kurumisawa, Jun

    2009-02-01

    This paper proposes a tensor-decomposition-based method that can recognize an unknown person's action from a video sequence, where the unknown person is not included in the database (tensor) used for the recognition. The tensor consists of persons, actions, and time-series image features. For the observed unknown person's action, one of the actions stored in the tensor is assumed. Using the motion signature obtained from this assumption, the unknown person's actions are synthesized. The actions of one of the persons in the tensor are replaced by the synthesized actions, and the core tensor for the replaced tensor is computed. This process is repeated over the actions and persons. For each iteration, the difference between the replaced and original core tensors is computed, and the assumption that gives the minimal difference is the action recognition result. The time-series image features stored in the tensor and extracted from the observed video sequence are based on the contour shape of the human body silhouette. To show its validity, our proposed method is experimentally compared with the Nearest Neighbor rule and a Principal Component Analysis based method. Experiments on seven kinds of actions performed by 33 persons show that our proposed method achieves better recognition accuracies for the seven actions than the other methods.

  16. Proposal for Requirement Validation Criteria and Method Based on Actor Interaction

    Science.gov (United States)

    Hattori, Noboru; Yamamoto, Shuichiro; Ajisaka, Tsuneo; Kitani, Tsuyoshi

    We propose requirement validation criteria and a method based on the interaction between actors in an information system. We focus on the cyclical transitions of one actor's situation against another and clarify observable stimuli and responses based on these transitions. Both actors' situations can be listed in a state transition table, which describes the observable stimuli or responses they send or receive. Examination of the interaction between both actors in the state transition tables enables us to detect missing or defective observable stimuli or responses. Typically, this method can be applied to the examination of the interaction between a resource managed by the information system and its user. As a case study, we analyzed 332 requirement defect reports of an actual system development project in Japan. We found that there was a certain number of defects regarding missing or defective stimuli and responses, which could have been detected if our proposed method had been used in the requirement definition phase. This means that we can reach a more complete requirement definition with our proposed method.

  17. Collaborative framework for PIV uncertainty quantification: comparative assessment of methods

    International Nuclear Information System (INIS)

    Sciacchitano, Andrea; Scarano, Fulvio; Neal, Douglas R; Smith, Barton L; Warner, Scott O; Vlachos, Pavlos P; Wieneke, Bernhard

    2015-01-01

    A posteriori uncertainty quantification of particle image velocimetry (PIV) data is essential to obtain accurate estimates of the uncertainty associated with a given experiment. This is particularly relevant when measurements are used to validate computational models or in design and decision processes. In spite of the importance of the subject, the first PIV uncertainty quantification (PIV-UQ) methods have been developed only in the last three years. The present work is a comparative assessment of four approaches recently proposed in the literature: the uncertainty surface method (Timmins et al 2012), the particle disparity approach (Sciacchitano et al 2013), the peak ratio criterion (Charonko and Vlachos 2013) and the correlation statistics method (Wieneke 2015). The analysis is based upon experiments conducted for this specific purpose, where several measurement techniques are employed simultaneously. The performances of the above approaches are surveyed across different measurement conditions and flow regimes. (paper)

  18. A Comparative Survey of Methods for Remote Heart Rate Detection From Frontal Face Videos

    Directory of Open Access Journals (Sweden)

    Chen Wang

    2018-05-01

    Remotely measuring physiological activity can provide substantial benefits for both medical and affective computing applications. Recent research has proposed different methodologies for the unobtrusive detection of heart rate (HR) using human face recordings. These methods are based on subtle color changes or motions of the face due to cardiovascular activity, which are invisible to human eyes but can be captured by digital cameras. Several approaches have been proposed, based on signal processing or machine learning. However, these methods have been evaluated on different datasets, and there is consequently no consensus on method performance. In this article, we describe and evaluate several methods defined in the literature, from 2008 until the present day, for the remote detection of HR using human face recordings. The general HR processing pipeline is divided into three stages: face video processing, face blood volume pulse (BVP) signal extraction, and HR computation. Approaches presented in the paper are classified and grouped according to each stage. At each stage, algorithms are analyzed and compared based on their performance using the public database MAHNOB-HCI; the results reported in this article are therefore limited to that dataset. The results show that the extracted face skin area contains more BVP information. Blind source separation and peak detection methods are more robust to head motions when estimating HR.
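
    The last two pipeline stages reduce to a few lines: band-pass a BVP surrogate trace and read HR from the dominant spectral peak. The synthetic trace and frame rate below are assumptions standing in for the face video processing stage.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 30.0                                   # camera frame rate (assumed)
t = np.arange(0, 30, 1 / fs)
# Hypothetical mean green-channel trace: 1.2 Hz pulse buried in noise.
trace = (0.05 * np.sin(2 * np.pi * 1.2 * t)
         + np.random.default_rng(0).normal(0, 0.1, t.size))

# Band-pass to the physiological range 0.7-4.0 Hz (42-240 bpm).
b, a = butter(3, [0.7 / (fs / 2), 4.0 / (fs / 2)], btype="band")
bvp = filtfilt(b, a, trace)

freqs = np.fft.rfftfreq(bvp.size, 1 / fs)
spectrum = np.abs(np.fft.rfft(bvp))
band = (freqs >= 0.7) & (freqs <= 4.0)
hr_bpm = 60 * freqs[band][np.argmax(spectrum[band])]
print(f"estimated HR ~ {hr_bpm:.0f} bpm")   # ground truth: 72 bpm (1.2 Hz)
```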

  19. Determination of the oxidizing property: proposal of an alternative method based on differential scanning calorimetry

    International Nuclear Information System (INIS)

    Gigante, L.; Dellavedova, M.; Pasturenzi, C.; Lunghi, A.; Mattarella, M.; Cardillo, P.

    2008-01-01

    The determination of the chemical-physical and hazard properties of substances is a very important matter in the chemical industry, considering the growing attention of public opinion to the safety and eco-compatibility of products. In the present work, attention was focused on the characterization of oxidizing properties. In the case of solid compounds, the current method (Dir 84/449/CEE) compares the maximum combustion rate of the examined substance to the maximum combustion rate of a reference mixture. This method has many disadvantages and does not provide a quantitative result. In this work an alternative method, based on DSC measurements, is proposed for the determination of oxidizing properties.

  20. Comparative analysis of assessment methods for operational and anesthetic risks in ulcerative gastroduodenal bleeding

    Directory of Open Access Journals (Sweden)

    Potakhin S.N.

    2015-09-01

    Aim of the investigation: to conduct a comparative analysis of methods for evaluating surgical and anesthetic risks in ulcerative gastroduodenal bleeding. Materials and methods. A retrospective analysis of the extent of the surgical and anesthetic risks and of the results of treatment of 71 patients with peptic ulcer bleeding was conducted in the study. To evaluate the surgical and anesthetic risks, classification trees, the T.A. Rockall scale, and the system for prognosis of rebleeding (SPRK) proposed by N. V. Lebedev et al. in 2009 were used, enabling evaluation of the probability of a fatal outcome. To compare the efficacy of the methods the following indicators were used: sensitivity, specificity, and prediction of a positive result. Results. The study compared the risk assessments for emergency surgery obtained with these methods against the outcome of the operation. The comparison of prognosis sensitivity leads to the conclusion that the T.A. Rockall and SPRK scales are worse than the developed classification tree method at recognizing patients with a poor outcome of surgery. Conclusion. The classification tree method can be considered the most accurate method for evaluating surgical and anesthetic risks in ulcerative gastroduodenal bleeding.
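
    The three comparison indicators are standard 2x2 confusion-matrix quantities; the counts below are hypothetical, not the study's data.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)          # poor outcomes correctly flagged
    specificity = tn / (tn + fp)          # good outcomes correctly cleared
    ppv = tp / (tp + fp)                  # "prediction of a positive result"
    return sensitivity, specificity, ppv

# Hypothetical counts for illustration only.
sens, spec, ppv = diagnostic_metrics(tp=9, fp=4, fn=2, tn=56)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} PPV={ppv:.2f}")
```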

  1. Applicability of the proposed evaluation method for social infrastructures to nuclear power plants

    International Nuclear Information System (INIS)

    Ichimura, Tomiyasu

    2015-01-01

    This study proposes an evaluation method for social infrastructures and verifies its applicability by applying it to nuclear power plants, which belong to social infrastructures. In the proposed evaluation method, the authors chose four evaluation viewpoints and proposed common evaluation standards for the evaluation indexes obtained from each viewpoint. By applying this system to the evaluation of nuclear power plants, example evaluation indexes were obtained for each viewpoint. Furthermore, when the levels of the common evaluation standards were applied to the evaluation of the activities of nuclear power plants based on the regulations, it was confirmed that these activities are at the highest level. Through this application validation, it was clarified that the proposed evaluation method for social infrastructures has a certain effectiveness. The four evaluation viewpoints are 'service,' 'environment,' 'action factor,' and 'operation and management.' Some application examples for a nuclear power plant are as follows: (1) from the viewpoint of service: the operation rate of the power plant and operation costs; and (2) from the viewpoint of environment: external influences related to nuclear waste and radioactivity, and external effects related to cooling water. (A.O.)

  2. A Proposal of Estimation Methodology to Improve Calculation Efficiency of Sampling-based Method in Nuclear Data Sensitivity and Uncertainty Analysis

    International Nuclear Information System (INIS)

    Song, Myung Sub; Kim, Song Hyun; Kim, Jong Kyung; Noh, Jae Man

    2014-01-01

    With the sampling-based method, uncertainty is evaluated by repeating transport calculations with a number of cross section data sets sampled from the covariance uncertainty data. In transport calculations with the sampling-based method, the transport equation is not modified; therefore, all uncertainties of the responses, such as k_eff, reaction rates, flux, and power distribution, can be obtained directly, all at one time, without code modification. However, a major drawback of the sampling-based method is that it requires an expensive computational load to obtain statistically reliable results (within a 0.95 confidence level) in the uncertainty analysis. The purpose of this study is to develop a method for improving computational efficiency and obtaining highly reliable uncertainty results when using the sampling-based method with Monte Carlo simulation. The proposed method reduces the convergence time of the response uncertainty by using multiple sets of sampled group cross sections in a single Monte Carlo simulation. The proposed method was verified on the GODIVA benchmark problem, and the results were compared with those of the conventional sampling-based method. In this study, a sampling-based method based on the central limit theorem is proposed to improve calculation efficiency by reducing the number of repetitive Monte Carlo transport calculations required to obtain reliable uncertainty analysis results. Each set of sampled group cross sections is assigned to an active cycle group in a single Monte Carlo simulation. The criticality uncertainty for the GODIVA problem is evaluated by the proposed and previous methods. The results show that the proposed sampling-based method can efficiently decrease the number of Monte Carlo simulations required to evaluate the uncertainty of k_eff. It is expected that the proposed method will improve the computational efficiency of uncertainty analysis with the sampling-based method.
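
    For contrast, the conventional sampling-based procedure that the proposed method accelerates can be sketched as follows; the cross sections, covariance data, and the stand-in "transport solver" are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
mean_xs = np.array([1.90, 0.070])                # hypothetical group cross sections
cov = np.array([[4e-4, 1e-5],                    # hypothetical covariance data
                [1e-5, 1e-6]])

def transport_keff(xs, stat_sigma=5e-4):
    # Stand-in for a Monte Carlo transport run (adds statistical noise).
    return xs[0] / (1.0 + 10.0 * xs[1]) + rng.normal(0, stat_sigma)

# Conventional approach: one full transport run per sampled cross section set.
keffs = [transport_keff(rng.multivariate_normal(mean_xs, cov))
         for _ in range(300)]
print(f"k_eff = {np.mean(keffs):.4f} +/- {np.std(keffs, ddof=1):.4f}")
```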

  3. Methodological proposal for environmental impact evaluation since different specific methods

    International Nuclear Information System (INIS)

    Leon Pelaez, Juan Diego; Lopera Arango Gabriel Jaime

    1999-01-01

    Some conceptual and practical elements related to environmental impact evaluation are described in relation to the preparation of technical reports (environmental impact studies and environmental management plans) to be presented to environmental authorities for obtaining the environmental permits for development projects. In the first part of the document, a summary is given of the main normative aspects that support environmental impact studies in Colombia. We then propose a scheme for approaching and elaborating the evaluation of environmental impact, which begins with the description of the project and of the environmental conditions in its area, proceeds to identify the impacts through a matrix method, and continues with their quantitative evaluation, for which we propose the use of the method developed by Arboleda (1994). We also propose to rate the activities of the project and the components of the environment by their relative importance, by means of a method here called agglomerate evaluation, which allows identifying the most impacting activities and the most impacted components. Lastly, some models are presented for the elaboration and presentation of environmental management plans, follow-up programs, and environmental supervision programs.

  4. Comparative analysis of methods for detecting interacting loci.

    Science.gov (United States)

    Chen, Li; Yu, Guoqiang; Langefeld, Carl D; Miller, David J; Guy, Richard T; Raghuram, Jayaram; Yuan, Xiguo; Herrington, David M; Wang, Yue

    2011-07-05

    Interactions among genetic loci are believed to play an important role in disease risk. While many methods have been proposed for detecting such interactions, their relative performance remains largely unclear, mainly because different data sources, detection performance criteria, and experimental protocols were used in the papers introducing these methods and in subsequent studies. Moreover, there have been very few studies strictly focused on comparison of existing methods. Given the importance of detecting gene-gene and gene-environment interactions, a rigorous, comprehensive comparison of performance and limitations of available interaction detection methods is warranted. We report a comparison of eight representative methods, of which seven were specifically designed to detect interactions among single nucleotide polymorphisms (SNPs), with the last a popular main-effect testing method used as a baseline for performance evaluation. The selected methods, multifactor dimensionality reduction (MDR), full interaction model (FIM), information gain (IG), Bayesian epistasis association mapping (BEAM), SNP harvester (SH), maximum entropy conditional probability modeling (MECPM), logistic regression with an interaction term (LRIT), and logistic regression (LR) were compared on a large number of simulated data sets, each consistent with complex disease models and embedding multiple sets of interacting SNPs, under different interaction models. The assessment criteria included several relevant detection power measures, family-wise type I error rate, and computational complexity. There are several important results from this study. First, while some SNPs in interactions with strong effects are successfully detected, most of the methods miss many interacting SNPs at an acceptable rate of false positives. In this study, the best-performing method was MECPM. Second, the statistical significance assessment criteria, used by some of the methods to control the type I error rate

  5. Comparative analysis of methods for detecting interacting loci

    Directory of Open Access Journals (Sweden)

    Yuan Xiguo

    2011-07-01

    Full Text Available Abstract Background Interactions among genetic loci are believed to play an important role in disease risk. While many methods have been proposed for detecting such interactions, their relative performance remains largely unclear, mainly because different data sources, detection performance criteria, and experimental protocols were used in the papers introducing these methods and in subsequent studies. Moreover, there have been very few studies strictly focused on comparison of existing methods. Given the importance of detecting gene-gene and gene-environment interactions, a rigorous, comprehensive comparison of performance and limitations of available interaction detection methods is warranted. Results We report a comparison of eight representative methods, of which seven were specifically designed to detect interactions among single nucleotide polymorphisms (SNPs), with the last a popular main-effect testing method used as a baseline for performance evaluation. The selected methods, multifactor dimensionality reduction (MDR), full interaction model (FIM), information gain (IG), Bayesian epistasis association mapping (BEAM), SNP harvester (SH), maximum entropy conditional probability modeling (MECPM), logistic regression with an interaction term (LRIT), and logistic regression (LR), were compared on a large number of simulated data sets, each consistent with complex disease models and embedding multiple sets of interacting SNPs under different interaction models. The assessment criteria included several relevant detection power measures, family-wise type I error rate, and computational complexity. There are several important results from this study. First, while some SNPs in interactions with strong effects are successfully detected, most of the methods miss many interacting SNPs at an acceptable rate of false positives. In this study, the best-performing method was MECPM. Second, the statistical significance assessment criteria, used by some of the methods to control the type I error rate …

  6. A proposed assessment method for image of regional educational institutions

    Directory of Open Access Journals (Sweden)

    Kataeva Natalya

    2017-01-01

    Full Text Available In the current Russian economic conditions, the market of educational services comprises a huge variety of educational institutions, and it is already experiencing a significant influence from the demographic situation in Russia. This means that higher education institutions are forced into tough competition for high school students. Increased competition in the educational market forces universities to find new methods of non-price competition in attracting potential students and throughout their own educational and economic activities. The commercialization of education places universities in the same plane as commercial companies, for which a positive perception of image and reputation is a competitive advantage, a view that is quite applicable to the strategic and current activities of higher education institutions in ensuring the competitiveness of educational services and of the educational institution as a whole. Nevertheless, due to the lack of evidence-based proposals in this area, there is a need for scientific research in terms of justifying the organizational and methodological aspects of using image as a factor in the competitiveness of a higher education institution. Theoretically and practically, there are different methods and ways of evaluating a company's image. The article provides a comparative assessment of existing valuation methods for corporate image and the author's method of estimating the image of higher education institutions based on the key influencing factors. The method has been tested on the Vyatka State Agricultural Academy (Russia). The results also indicate the strengths and weaknesses of the institution, highlight ways of improving, and help adjust the efforts for image improvement.

  7. Comparison among four proposed direct blood culture microbial identification methods using MALDI-TOF MS.

    Science.gov (United States)

    Bazzi, Ali M; Rabaan, Ali A; El Edaily, Zeyad; John, Susan; Fawarah, Mahmoud M; Al-Tawfiq, Jaffar A

    Matrix-assisted laser desorption-ionization time-of-flight (MALDI-TOF) mass spectrometry facilitates rapid and accurate identification of pathogens, which is critical for sepsis patients. In this study, we assessed the accuracy in identification of both Gram-negative and Gram-positive bacteria, except for Streptococcus viridans, using four rapid blood culture methods with Vitek MALDI-TOF-MS. We compared our proposed lysis centrifugation followed by washing and 30% acetic acid treatment method (method 2) with two other lysis centrifugation methods (washing and 30% formic acid treatment (method 1); 100% ethanol treatment (method 3)), and picking colonies from 90 to 180 min subculture plates (method 4). Methods 1 and 2 identified all organisms down to species level with 100% accuracy, except for Streptococcus viridans, Streptococcus pyogenes, Enterobacter cloacae and Proteus vulgaris. The latter two were identified to genus level with 100% accuracy. Each method exhibited excellent accuracy and precision in terms of identification to genus level, with certain limitations. Copyright © 2016 King Saud Bin Abdulaziz University for Health Sciences. Published by Elsevier Ltd. All rights reserved.

  8. Comparison among four proposed direct blood culture microbial identification methods using MALDI-TOF MS

    Directory of Open Access Journals (Sweden)

    Ali M. Bazzi

    2017-05-01

    Full Text Available Summary: Matrix-assisted laser desorption-ionization time-of-flight (MALDI-TOF) mass spectrometry facilitates rapid and accurate identification of pathogens, which is critical for sepsis patients. In this study, we assessed the accuracy in identification of both Gram-negative and Gram-positive bacteria, except for Streptococcus viridans, using four rapid blood culture methods with Vitek MALDI-TOF-MS. We compared our proposed lysis centrifugation followed by washing and 30% acetic acid treatment method (method 2) with two other lysis centrifugation methods (washing and 30% formic acid treatment (method 1); 100% ethanol treatment (method 3)), and picking colonies from 90 to 180 min subculture plates (method 4). Methods 1 and 2 identified all organisms down to species level with 100% accuracy, except for Streptococcus viridans, Streptococcus pyogenes, Enterobacter cloacae and Proteus vulgaris. The latter two were identified to genus level with 100% accuracy. Each method exhibited excellent accuracy and precision in terms of identification to genus level, with certain limitations. Keywords: MALDI-TOF, Gram-negative, Gram-positive, Sepsis, Blood culture

  9. Proposed method for regulating major materials licensees

    International Nuclear Information System (INIS)

    1992-02-01

    The Director, Office of Nuclear Material Safety and Safeguards, US Nuclear Regulatory Commission, appointed a Materials Regulatory Review Task Force to conduct a broad-based review of the Commission's current licensing and oversight programs for fuel cycle and large materials plants. The task force, as requested, defined the components and subcomponents of an ideal regulatory evaluation system for these types of licensed plants and compared them to the components and subcomponents of the existing regulatory evaluation system. This report discusses the findings from this comparison and proposes recommendations on the basis of these findings.

  10. On the use of shape spaces to compare morphometric methods

    Directory of Open Access Journals (Sweden)

    F. James Rohlf

    2000-06-01

    Full Text Available Abstract Several methods have been proposed to use differences in configurations of landmark points to measure the amount of shape difference between two structures. Shape difference coefficients ignore differences in the configurations that could be due to the effects of translation, rotation, and scale. One way to understand the differences between these methods is to compare the multidimensional shape spaces corresponding to each coefficient. This paper compares Kendall's shape space, Kendall tangent space, the shape spaces implied by the EDMA-I and EDMA-II test statistics, the shape space of log size-scaled inter-landmark distances, and the shape space implied by differences in angles of lines connecting pairs of landmarks. The case of three points in the plane (i.e., landmarks at the vertices of a triangle) is given special emphasis because the various shape spaces can be illustrated in just 2 or 3 dimensions. The results of simulations are shown both for random samples of all possible triangles and for normally distributed independent variation at each landmark. Generalizations to studies of more than three landmarks are suggested. It is shown that methods other than those based on Procrustes distances strongly constrain the possible results obtained by ordination analyses and can give misleading results when used in studies of growth and evolutionary trajectories.
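
    A small illustration of the Procrustes-style shape comparison underlying several of these coefficients: two landmark configurations are superimposed by removing translation, scale, and rotation, and the residual disparity is reported. The triangle coordinates are arbitrary, and scipy's `procrustes` is used here as a stand-in for the paper's specific shape-space constructions.

```python
import numpy as np
from scipy.spatial import procrustes

# Two triangles given as 3 landmarks in the plane.
tri_a = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.9]])
tri_b = np.array([[0.1, 0.2], [1.3, 0.1], [0.6, 1.4]])

# procrustes() removes translation, scale and rotation, then reports the
# residual sum of squared differences ("disparity") between the shapes.
_, _, disparity = procrustes(tri_a, tri_b)
print("Procrustes shape difference:", disparity)
```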

  11. The Method of Immersion the Problem of Comparing Technical Objects in an Expert Shell in the Class of Artificial Intelligence Algorithms

    Science.gov (United States)

    Sergey Vasilievich, Buharin; Aleksandr Vladimirovich, Melnikov; Svetlana Nikolaevna, Chernyaeva; Lyudmila Anatolievna, Korobova

    2017-08-01

    The method of immersing the underlying computational problem of comparing technical objects into an expert shell, within the class of data mining methods, is examined. An example of using the proposed method is given.

  12. Fracture toughness of glasses and hydroxyapatite: a comparative study of 7 methods by using Vickers indenter

    OpenAIRE

    HERVAS , Isabel; MONTAGNE , Alex; Van Gorp , Adrien; BENTOUMI , M.; THUAULT , A.; IOST , Alain

    2016-01-01

    International audience; Numerous methods have been proposed to estimate the indentation fracture toughness Kic of brittle materials. These methods generally use formulae established from empirical correlations between the critical applied force, or the average crack length, and classical fracture mechanics tests. This study compares several models of fracture toughness calculation obtained by using Vickers indenters. Two optical glasses (Crown and Flint), one vitroceramic (Zerodur) and one ceramic (…
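
    One widely cited model of the kind compared here is the Anstis et al. (1981) relation Kic = ξ·(E/H)^0.5·P/c^1.5. The sketch below evaluates it for illustrative material values; the inputs are assumptions, not the paper's measurements, and the study compares several such formulae.

```python
import math

def kic_anstis(E_gpa, H_gpa, load_n, crack_len_m, xi=0.016):
    """One widely used indentation-toughness model (Anstis et al. 1981):
    Kic = xi * sqrt(E/H) * P / c**1.5, with c the radial crack length
    measured from the indent centre."""
    E, H = E_gpa * 1e9, H_gpa * 1e9          # convert GPa -> Pa
    return xi * math.sqrt(E / H) * load_n / crack_len_m ** 1.5

# Illustrative glass-like values only (not the paper's measurements).
kic = kic_anstis(E_gpa=70.0, H_gpa=6.0, load_n=9.81, crack_len_m=100e-6)
print(kic / 1e6, "MPa*sqrt(m)")
```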

  13. Proposed frustrated-total-reflection acoustic sensing method

    International Nuclear Information System (INIS)

    Hull, J.R.

    1981-01-01

    Modulation of electromagnetic energy transmission through a frustrated-total-reflection device by pressure-induced changes in the index of refraction is proposed for use as an acoustic detector. Maximum sensitivity occurs for angles of incidence near the critical angle. The minimum detectable pressure in air is limited by Brownian noise. Acoustic propagation losses and diffraction of the optical beam by the acoustic signal limit the minimum acoustic wavelength to lengths of the order of the spatial extent of the optical beam. The response time of the method is fast enough to follow individual acoustic waves

  14. Comparative study of discretization methods of microarray data for inferring transcriptional regulatory networks

    Directory of Open Access Journals (Sweden)

    Ji Wei

    2010-10-01

    Full Text Available Abstract Background Microarray data discretization is a basic preprocessing step for many algorithms of gene regulatory network inference. Some discretization methods common in informatics are used to discretize microarray data. The selection of the discretization method is often arbitrary, and no systematic comparison of different discretization methods has been conducted in the context of gene regulatory network inference from time series gene expression data. Results In this study, we propose a new discretization method, "bikmeans", and compare its performance with four other widely used discretization methods using different datasets, modeling algorithms and numbers of intervals. Sensitivities, specificities and total accuracies were calculated and statistical analysis was carried out. The bikmeans method always gave high total accuracies. Conclusions Our results indicate that proper discretization methods can consistently improve gene regulatory network inference independent of network modeling algorithms and datasets. Our new method, bikmeans, resulted in significantly better total accuracies than the other methods.
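
    As a hedged illustration of k-means-based discretization (a generic variant, not necessarily the authors' exact bikmeans algorithm), the sketch below bins one gene's expression profile into ordered levels:

```python
import numpy as np
from sklearn.cluster import KMeans

def discretize_kmeans(profile, k=3, seed=0):
    """Discretize one gene's time series into k levels by 1-D k-means,
    relabelling the clusters in order of their centre values."""
    km = KMeans(n_clusters=k, n_init=10, random_state=seed)
    labels = km.fit_predict(profile.reshape(-1, 1))
    order = np.argsort(km.cluster_centers_.ravel())
    remap = {old: new for new, old in enumerate(order)}
    return np.array([remap[lab] for lab in labels])

rng = np.random.default_rng(1)
expression = rng.normal(size=20).cumsum()      # toy time series
print(discretize_kmeans(expression))
```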

  15. Using a fuzzy comprehensive evaluation method to determine product usability: A proposed theoretical framework.

    Science.gov (United States)

    Zhou, Ronggang; Chan, Alan H S

    2017-01-01

    In order to compare existing usability data to ideal goals or to those for other products, usability practitioners have tried to develop a framework for deriving an integrated metric. However, most current usability methods with this aim rely heavily on human judgment about the various attributes of a product, but often fail to take into account the inherent uncertainties in these judgments in the evaluation process. This paper presents a universal method of usability evaluation combining the analytic hierarchy process (AHP) and the fuzzy evaluation method. By integrating multiple sources of uncertain information during product usability evaluation, the method proposed here aims to derive an index that is structured hierarchically in terms of the three usability components of effectiveness, efficiency, and user satisfaction. With consideration of the theoretical basis of fuzzy evaluation, a two-layer comprehensive evaluation index was first constructed. After the membership functions were determined by an expert panel, the evaluation appraisals were computed using the fuzzy comprehensive evaluation technique to characterize fuzzy human judgments. Then, with the use of AHP, the weights of the usability components were elicited from these experts. Compared to traditional usability evaluation methods, the major strength of the fuzzy method is that it captures the fuzziness and uncertainties in human judgments and provides an integrated framework that combines the vague judgments from multiple stages of a product evaluation process.
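
    A minimal numerical sketch of the fuzzy comprehensive evaluation step described above: AHP-style weights are combined with a membership matrix over appraisal grades. The weights, grades, and membership values below are illustrative assumptions, not the paper's elicited data.

```python
import numpy as np

# AHP-derived weights for the three usability components (assumed values):
# effectiveness, efficiency, user satisfaction.
w = np.array([0.5, 0.3, 0.2])

# Membership of each component in four appraisal grades
# (poor, fair, good, excellent), e.g. elicited from an expert panel.
R = np.array([
    [0.0, 0.2, 0.5, 0.3],
    [0.1, 0.3, 0.4, 0.2],
    [0.0, 0.1, 0.6, 0.3],
])

# Weighted-average fuzzy operator: B = w . R
B = w @ R
grades = ["poor", "fair", "good", "excellent"]
print(dict(zip(grades, B.round(3))))
print("Overall appraisal:", grades[int(B.argmax())])   # maximum membership
```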

  16. The Method of Adaptive Comparative Judgement

    Science.gov (United States)

    Pollitt, Alastair

    2012-01-01

    Adaptive Comparative Judgement (ACJ) is a modification of Thurstone's method of comparative judgement that exploits the power of adaptivity, but in scoring rather than testing. Professional judgement by teachers replaces the marking of tests; a judge is asked to compare the work of two students and simply to decide which of them is the better.…

  17. Proposal of a method for evaluating tsunami risk using response-surface methodology

    Science.gov (United States)

    Fukutani, Y.

    2017-12-01

    Information on probabilistic tsunami inundation hazards is needed to define and evaluate tsunami risk. Several methods for calculating these hazards have been proposed (e.g. Løvholt et al. (2012), Thio (2012), Fukutani et al. (2014), Goda et al. (2015)). However, these methods are inefficient and their calculation cost is high, since they require multiple tsunami numerical simulations, and they therefore lack versatility. In this study, we propose a simpler method for tsunami risk evaluation using response-surface methodology. Kotani et al. (2016) proposed an evaluation method for the probabilistic distribution of tsunami wave height using response-surface methodology. We expanded their study and developed a probabilistic distribution of tsunami inundation depth. We set the depth (x1) and the slip (x2) of an earthquake fault as explanatory variables and tsunami inundation depth (y) as the object variable. Subsequently, tsunami risk could be evaluated by conducting a Monte Carlo simulation, assuming that the generation probability of an earthquake follows a Poisson distribution, the probability distribution of tsunami inundation depth follows the distribution derived from a response surface, and the damage probability of a target follows a log-normal distribution. We applied the proposed method to a wood building located on the coast of Tokyo Bay. We implemented a regression analysis based on the results of 25 tsunami numerical calculations and developed a response surface, defined as y = a*x1 + b*x2 + c (a = 0.2615, b = 3.1763, c = -1.1802). We assumed proper probabilistic distributions for earthquake generation, inundation height, and vulnerability. Based on these probabilistic distributions, we conducted Monte Carlo simulations of 1,000,000 years. We clarified that the expected damage probability of the studied wood building is 22.5%, assuming that an earthquake occurs. The proposed method is therefore a useful and simple way to evaluate tsunami risk using a response surface.
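
    The abstract gives enough detail to sketch the Monte Carlo step: sample earthquake occurrences from a Poisson process, push fault depth and slip through the reported response surface, and evaluate a log-normal fragility. The parameter ranges, event rate, and fragility median/dispersion below are assumptions; only the regression coefficients come from the abstract.

```python
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(42)
years, rate = 1_000_000, 1 / 500           # assumed rate: 1 event per 500 yr

# Response surface from the abstract: y = a*x1 + b*x2 + c.
a, b, c = 0.2615, 3.1763, -1.1802

n_events = rng.poisson(rate * years)       # Poisson earthquake generation
x1 = rng.uniform(0.0, 20.0, n_events)      # fault depth; assumed range
x2 = rng.uniform(0.5, 5.0, n_events)       # fault slip (m); assumed range
depth = np.clip(a * x1 + b * x2 + c, 0.0, None)   # inundation depth (m)

# Assumed log-normal fragility for a wood building (median 1.5 m, beta 0.5).
p_damage = lognorm.cdf(depth, s=0.5, scale=1.5)
print("Damage probability given an event:", p_damage.mean())
```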

  18. A Penalized Likelihood Framework For High-Dimensional Phylogenetic Comparative Methods And An Application To New-World Monkeys Brain Evolution.

    Science.gov (United States)

    Julien, Clavel; Leandro, Aristide; Hélène, Morlon

    2018-06-19

    Working with high-dimensional phylogenetic comparative datasets is challenging because likelihood-based multivariate methods suffer from low statistical performance as the number of traits p approaches the number of species n, and because some computational complications occur when p exceeds n. Alternative phylogenetic comparative methods have recently been proposed to deal with the large-p, small-n scenario, but their use and performance are limited. Here we develop a penalized likelihood framework to deal with high-dimensional comparative datasets. We propose various penalizations and methods for selecting the intensity of the penalties. We apply this general framework to the estimation of parameters (the evolutionary trait covariance matrix and parameters of the evolutionary model) and to model comparison for the high-dimensional multivariate Brownian (BM), Early-burst (EB), Ornstein-Uhlenbeck (OU) and Pagel's lambda models. We show using simulations that our penalized likelihood approach dramatically improves the estimation of evolutionary trait covariance matrices and model parameters when p approaches n, and allows for their accurate estimation when p equals or exceeds n. In addition, we show that penalized likelihood models can be efficiently compared using the Generalized Information Criterion (GIC). We implement these methods, as well as the related estimation of ancestral states and the computation of phylogenetic PCA, in the R packages RPANDA and mvMORPH. Finally, we illustrate the utility of the newly proposed framework by evaluating evolutionary model fit, analyzing integration patterns, and reconstructing evolutionary trajectories for a high-dimensional 3-D dataset of brain shape in the New World monkeys. We find clear support for an Early-burst model, suggesting an early diversification of brain morphology during the ecological radiation of the clade. Penalized likelihood offers an efficient way to deal with high-dimensional multivariate comparative data.
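
    The penalized-likelihood machinery itself is implemented in the R packages RPANDA and mvMORPH; as a language-agnostic illustration of why penalization matters when p approaches or exceeds n, the sketch below contrasts a singular sample covariance with a linear-shrinkage (Ledoit-Wolf) estimate, one standard penalty of the family discussed:

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(0)
n, p = 30, 100                    # fewer species than traits (p > n)
X = rng.normal(size=(n, p))

S = np.cov(X, rowvar=False)       # sample covariance: singular when p > n
print("rank of S:", np.linalg.matrix_rank(S))

lw = LedoitWolf().fit(X)          # shrinkage towards a scaled identity
print("shrinkage intensity:", lw.shrinkage_)
print("penalized estimate positive definite:",
      bool(np.all(np.linalg.eigvalsh(lw.covariance_) > 0)))
```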

  19. Estimation of body fluids with bioimpedance spectroscopy: state of the art methods and proposal of novel methods

    International Nuclear Information System (INIS)

    Buendia, R; Seoane, F; Lindecrantz, K; Bosaeus, I; Gil-Pita, R; Johannsson, G; Ellegård, L; Ward, L C

    2015-01-01

    Determination of body fluids is a common and useful practice in the study of disease mechanisms and treatments. Bioimpedance spectroscopy (BIS) methods are non-invasive, inexpensive and rapid alternatives to reference methods such as tracer dilution. However, they are indirect, and their robustness and validity are unclear. In this article, state-of-the-art methods are reviewed, their drawbacks identified, and new methods proposed. All methods were tested on a clinical database of patients receiving growth hormone replacement therapy. Results indicated that most BIS methods are similarly accurate (e.g. < 0.5 ± 3.0% mean percentage difference for total body water) for estimation of body fluids. A new model for calculation is proposed that performs equally well for all fluid compartments (total body water, extra- and intracellular water). It is suggested that the main source of error in extracellular water estimation is anisotropy, in total body water estimation the uncertainty associated with intracellular resistivity, and in determination of intracellular water a combination of both. (paper)

  20. Toward comparability of coronary magnetic resonance angiography: proposal for a standardized quantitative assessment

    International Nuclear Information System (INIS)

    Dirksen, Martijn S.; Lamb, Hildo J.; Geest, Rob van der; Roos, Albert de

    2003-01-01

    A method is proposed for the quantitative assessment of coronary magnetic resonance angiography (MRA) acquisitions. The method is based on four parameters: signal-to-noise ratio (SNR); contrast-to-noise ratio (CNR); vessel length; and vessel-edge definition. A pig model (n=7) was used to illustrate the proposed quantitative analysis method. Three-dimensional gradient-echo coronary MRA was performed with and without exogenous contrast enhancement using a gadolinium-based blood-pool contrast agent (Vistarem, Guerbet, Aulnay-Sous-Bois, France). The acquired images could be well differentiated based on the four parameters. The SNR was calculated as 9.0±1.4 vs 10.4±2.1, the CNR as 6.2±0.8 vs 8.2±0.9, the vessel length as 48.2±11.6 vs 86.5±13.8 mm, and the vessel-edge definition as 4.9±1.5 vs 7.7±3.4. Different coronary MRA techniques can be evaluated objectively with the combined use of SNR, CNR, vessel length, and vessel-edge parameters. (orig.)
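
    A minimal sketch of two of the four proposed parameters, computed from region-of-interest statistics in the usual way (SNR = mean signal / SD of noise; CNR = signal difference / SD of noise). The ROI definitions and pixel values below are assumptions; the paper's exact measurement protocol may differ.

```python
import numpy as np

def snr_cnr(blood_roi, myocardium_roi, noise_roi):
    """ROI-based image quality measures: SNR = mean(signal)/SD(noise),
    CNR = (mean(blood) - mean(myocardium))/SD(noise)."""
    sd_noise = np.std(noise_roi)
    snr = np.mean(blood_roi) / sd_noise
    cnr = (np.mean(blood_roi) - np.mean(myocardium_roi)) / sd_noise
    return snr, cnr

rng = np.random.default_rng(3)
blood = rng.normal(180, 12, 500)        # toy pixel intensities
myo = rng.normal(100, 12, 500)
air = rng.normal(0, 15, 500)            # background/noise ROI
print(snr_cnr(blood, myo, air))
```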

  1. Expanding Comparative Literature into Comparative Sciences Clusters with Neutrosophy and Quad-stage Method

    Directory of Open Access Journals (Sweden)

    Fu Yuhua

    2016-08-01

    Full Text Available By using Neutrosophy and Quad-stage Method, the expansions of comparative literature include: comparative social sciences clusters, comparative natural sciences clusters, comparative interdisciplinary sciences clusters, and so on. Among them, comparative social sciences clusters include: comparative literature, comparative history, comparative philosophy, and so on; comparative natural sciences clusters include: comparative mathematics, comparative physics, comparative chemistry, comparative medicine, comparative biology, and so on.

  2. Comparative study on flow rate measurement by nuclear and conventional methods in selected rivers at Ulu Langat district, Selangor

    Energy Technology Data Exchange (ETDEWEB)

    Wan Mohamad Tahir, Wan Zakaria; Mohamad, Daud; Hamzah, Abdul Razak; Yusuf, Johari Mohamad; Aziz Wan Mohamad, Wan Abdul

    1986-06-01

    A radiotracer technique using Tc-99 to measure flows of small rivers was introduced in Malaysia. Three rivers in the Ulu Langat District were selected for a comparative study of flow rate determination by conventional and radioisotope methods. The radioisotope approach, comprising injection procedures, calibration, mixing length and safety aspects, is discussed. The results measured by the radioisotope method are compared to the Drainage and Irrigation Department's (DID's) discharge curve data collected from 1980 to 1982, which are calibrated using a current meter. The results are comparable and fall within the range obtained by the conventional method. Following on from this study, comprehensive work on stream gauging of moderate and high flow rates using both methods is proposed.

  3. Proposed Sandia frequency shift for anti-islanding detection method based on artificial immune system

    Directory of Open Access Journals (Sweden)

    A.Y. Hatata

    2018-03-01

    Full Text Available Sandia frequency shift (SFS) is one of the active anti-islanding detection methods that depend on frequency drift to detect an islanding condition for inverter-based distributed generation. The non-detection zone (NDZ) of the SFS method depends to a great extent on its parameters. Improper adjustment of these parameters may result in failure of the method. This paper presents a proposed artificial immune system (AIS)-based technique to obtain optimal parameters of the SFS anti-islanding detection method. The immune system is highly distributed, highly adaptive, and self-organizing in nature; it maintains a memory of past encounters and has the ability to continually learn about new encounters. The proposed method generates less total harmonic distortion (THD) than the conventional SFS, which results in faster island detection and a better non-detection zone. The performance of the proposed method is derived analytically and simulated using Matlab/Simulink. Two case studies are used to verify the proposed method. The first case includes a photovoltaic (PV) system connected to the grid and the second includes a wind turbine connected to the grid. The deduced optimized parameter setting helps to achieve a "non-islanding inverter" as well as the least potential adverse impact on power quality. Keywords: Anti-islanding detection, Sandia frequency shift (SFS), Non-detection zone (NDZ), Total harmonic distortion (THD), Artificial immune system (AIS), Clonal selection algorithm
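
    A minimal sketch of the SFS principle that the optimizer tunes: the inverter current frequency is offset by a chopping fraction cf = cf0 + k*(f - f_grid), so any islanded frequency deviation is positively fed back until it leaves the NDZ and a frequency relay trips. The cf0 and k values below are illustrative, not the AIS-optimized parameters:

```python
# Minimal sketch of the Sandia frequency shift principle (not the paper's
# AIS optimizer). Assumed nominal frequency and gains for illustration.
F_GRID = 50.0   # nominal grid frequency (Hz)

def sfs_frequency(f_meas, cf0=0.01, k=0.02):
    """cf = cf0 + k*(f - f_grid); the inverter injects current whose
    frequency leads the measured one, creating positive feedback that
    drives an islanded system's frequency out of the NDZ."""
    cf = cf0 + k * (f_meas - F_GRID)
    return f_meas / (1.0 - cf)

f = 50.0
for step in range(5):          # islanded: no stiff grid to pull f back
    f = sfs_frequency(f)
    print(f"step {step}: {f:.3f} Hz")
```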

  4. Comparative analysis of the quality of biometric methods

    OpenAIRE

    Filipčík, Jan

    2010-01-01

    The main objective is to describe and analyze the types of biometric identification and selected biometric methods, to identify their strengths and weaknesses compared to current document-based identification and verification of persons and compared to other biometric methods, and then to focus on the relationships and support of biometric methods in terms of IS/ICT services. The work covers five types of biometric methods, namely dactyloscopy (fingerprint recognition), hand geometry scanning, facial scanning...

  5. Comparative Analysis of Fuzzy Set Defuzzification Methods in the Context of Ecological Risk Assessment

    Directory of Open Access Journals (Sweden)

    Užga-Rebrovs Oļegs

    2017-12-01

    Full Text Available Fuzzy inference systems are widely used in various areas of human activity. Their most widespread use lies in the field of fuzzy control of technical devices of different kinds. Another direction of using fuzzy inference systems is the modelling and assessment of different kinds of risks under insufficient or missing objective initial data. Fuzzy inference concludes with the procedure of defuzzification of the resulting fuzzy sets. A large number of techniques for implementing the defuzzification procedure are available nowadays. The paper presents a comparative analysis of some widespread methods of fuzzy set defuzzification and proposes the most appropriate methods in the context of ecological risk assessment.
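
    A small numerical comparison of three common defuzzification techniques of the kind analysed in the paper (centroid, bisector, and mean of maximum), applied to an assumed skewed triangular output set on a risk scale; on an asymmetric set the three methods give visibly different crisp values:

```python
import numpy as np

x = np.linspace(0.0, 10.0, 1001)                          # risk scale
mu = np.clip(np.minimum((x - 2) / 5, (8 - x) / 1), 0, 1)  # skewed triangle

# Centroid (centre of gravity) of the output fuzzy set.
centroid = np.trapz(mu * x, x) / np.trapz(mu, x)

# Bisector: the point splitting the area under mu into equal halves.
cum = np.cumsum(mu) / mu.sum()
bisector = x[np.searchsorted(cum, 0.5)]

# Mean of maximum: average of the points with maximal membership.
mom = x[mu == mu.max()].mean()

print(f"centroid={centroid:.2f}  bisector={bisector:.2f}  MOM={mom:.2f}")
```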

  6. Disintegration of sublingual tablets: proposal for a validated test method and acceptance criterion.

    Science.gov (United States)

    Weda, M; van Riet-Nales, D A; van Aalst, P; de Kaste, D; Lekkerkerker, J F F

    2006-12-01

    In the Netherlands the market share of isosorbide dinitrate 5 mg sublingual tablets is dominated by 2 products (A and B). In the last few years complaints have been received from health care professionals on product B: during patient use the disintegration of the tablet was reported to be slow and/or incomplete, and ineffectiveness was experienced. In the European Pharmacopoeia (Ph. Eur.) no requirement is present for the disintegration time of sublingual tablets. The purpose of this study was to compare the in vitro disintegration time of products A and B, and to establish a suitable test method and acceptance criterion. A and B were tested with the Ph. Eur. method described in the monograph on disintegration of tablets and capsules as well as with 3 modified tests using the same Ph. Eur. apparatus, but without movement of the basket-rack assembly. In modified test 1 and modified test 2 water was used as medium (900 ml and 50 ml respectively), whereas in modified test 3 artificial saliva was used (50 ml). In addition, disintegration was tested in Nessler tubes with 0.5 and 2 ml of water. Finally, the Ph. Eur. method was also applied to other sublingual tablets with other drug substances on the Dutch market. With modified test 3 no disintegration could be achieved within 20 min. With the Ph. Eur. method and modified tests 1 and 2, products A and B differed significantly in disintegration times. These 3 methods were capable of discriminating between products and between batches. The time measured with the Ph. Eur. method was significantly lower than that measured with modified tests 1 and 2. For sublingual tablets the disintegration time should be tested, and the Ph. Eur. method is considered suitable for this test. In view of the products currently on the market and taking into consideration requirements in the United States Pharmacopeia and Japanese Pharmacopoeia, an acceptance criterion of not more than 2 min is proposed.

  7. Validation of a method for assessing resident physicians' quality improvement proposals.

    Science.gov (United States)

    Leenstra, James L; Beckman, Thomas J; Reed, Darcy A; Mundell, William C; Thomas, Kris G; Krajicek, Bryan J; Cha, Stephen S; Kolars, Joseph C; McDonald, Furman S

    2007-09-01

    Residency programs involve trainees in quality improvement (QI) projects to evaluate competency in systems-based practice and practice-based learning and improvement. Valid approaches to assess QI proposals are lacking. We developed an instrument for assessing resident QI proposals, the Quality Improvement Proposal Assessment Tool (QIPAT-7), and determined its validity and reliability. QIPAT-7 content was initially obtained from a national panel of QI experts. Through an iterative process, the instrument was refined, pilot-tested, and revised. Seven raters used the instrument to assess 45 resident QI proposals. Principal factor analysis was used to explore the dimensionality of instrument scores. Cronbach's alpha and intraclass correlations were calculated to determine internal consistency and interrater reliability, respectively. QIPAT-7 items comprised a single factor (eigenvalue = 3.4), suggesting a single assessment dimension. Interrater reliability for each item (range 0.79 to 0.93) and internal consistency reliability among the items (Cronbach's alpha = 0.87) were high. This method for assessing resident physician QI proposals is supported by content and internal structure validity evidence. QIPAT-7 is a useful tool for assessing resident QI proposals. Future research should determine the reliability of QIPAT-7 scores in other residency and fellowship training programs. Correlations should also be made between assessment scores and criteria for QI proposal success such as implementation of QI proposals, resident scholarly productivity, and improved patient outcomes.

  8. Proposal of Constraints Analysis Method Based on Network Model for Task Planning

    Science.gov (United States)

    Tomiyama, Tomoe; Sato, Tatsuhiro; Morita, Toyohisa; Sasaki, Toshiro

    Deregulation has been accelerating several activities toward reengineering business processes, such as railway through-service and modal shift in logistics. To make those activities successful, business entities have to establish new business rules or know-how (we call them 'constraints'). According to the new constraints, they need to manage business resources such as instruments, materials, workers and so on. In this paper, we propose a constraint analysis method to define constraints for task planning of the new business processes. To visualize each constraint's influence on planning, we propose a network model which represents allocation relations between tasks and resources. The network can also represent task ordering relations and resource grouping relations. The proposed method formalizes the way of defining constraints manually as repeatedly checking the network structure and finding conflicts between constraints. Application to crew scheduling problems shows that the method can adequately represent and define the constraints of some task planning problems with the following fundamental features: (1) specifying work patterns for some resources, (2) restricting the number of resources for some works, (3) requiring multiple resources for some works, (4) prior allocation of some resources to some works and (5) considering the workload balance between resources.

  9. Comparing biological networks via graph compression

    Directory of Open Access Journals (Sweden)

    Hayashida Morihiro

    2010-09-01

    Full Text Available Abstract Background Comparison of various kinds of biological data is one of the main problems in bioinformatics and systems biology. Data compression methods have been applied to comparison of large sequence data and protein structure data. Since it is still difficult to compare global structures of large biological networks, it is reasonable to try to apply data compression methods to comparison of biological networks. In existing compression methods, the uniqueness of compression results is not guaranteed because there is some ambiguity in selection of overlapping edges. Results This paper proposes novel efficient methods, CompressEdge and CompressVertices, for comparing large biological networks. In the proposed methods, an original network structure is compressed by iteratively contracting identical edges and sets of connected edges. Then, the similarity of two networks is measured by a compression ratio of the concatenated networks. The proposed methods are applied to comparison of metabolic networks of several organisms, H. sapiens, M. musculus, A. thaliana, D. melanogaster, C. elegans, E. coli, S. cerevisiae, and B. subtilis, and are compared with an existing method. These results suggest that our methods can efficiently measure the similarities between metabolic networks. Conclusions Our proposed algorithms, which compress node-labeled networks, are useful for measuring the similarity of large biological networks.
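
    As a hedged stand-in for the paper's CompressEdge and CompressVertices algorithms, the sketch below uses a generic compression-based similarity (normalized compression distance over serialized edge lists) to show how a compression ratio of concatenated networks can measure similarity; the toy edge lists are assumptions:

```python
import zlib

def c(data: bytes) -> int:
    """Compressed size in bytes, used as a complexity proxy."""
    return len(zlib.compress(data, 9))

def ncd(a: bytes, b: bytes) -> float:
    """Normalized compression distance: a generic compression-based
    similarity, standing in for the paper's edge-contraction ratio."""
    return (c(a + b) - min(c(a), c(b))) / max(c(a), c(b))

# Edge lists serialized as sorted text (toy metabolic-network fragments).
net1 = b"A-B\nB-C\nC-D\nD-E\n" * 10
net2 = b"A-B\nB-C\nC-D\nD-F\n" * 10
net3 = b"P-Q\nQ-R\nR-S\nS-T\n" * 10
print("similar networks:   ", ncd(net1, net2))
print("dissimilar networks:", ncd(net1, net3))
```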

  10. Comparing subjective image quality measurement methods for the creation of public databases

    Science.gov (United States)

    Redi, Judith; Liu, Hantao; Alers, Hani; Zunino, Rodolfo; Heynderickx, Ingrid

    2010-01-01

    The Single Stimulus (SS) method is often chosen to collect subjective data for testing no-reference objective metrics, as it is straightforward to implement and well standardized. At the same time, it exhibits some drawbacks: the spread between different assessors is relatively large, and the measured ratings depend on the quality range spanned by the test samples, hence the results from different experiments cannot easily be merged. The Quality Ruler (QR) method has been proposed to overcome these inconveniences. This paper compares the performance of the SS and QR methods for pictures impaired by Gaussian blur. The research goal is, on one hand, to analyze the advantages and disadvantages of both methods for quality assessment and, on the other, to make quality data of blur-impaired images publicly available. The obtained results show that the confidence intervals of the QR scores are narrower than those of the SS scores. This indicates that the QR method enhances consistency across assessors. Moreover, QR scores exhibit a higher linear correlation with the distortion applied. In summary, for the purpose of building datasets of subjective quality, the QR approach seems promising from the viewpoint of both consistency and repeatability.

  11. Proposal for an Evaluation Method for the Performance of Work Procedures.

    Science.gov (United States)

    Mohammed, Mouda; Mébarek, Djebabra; Wafa, Boulagouas; Makhlouf, Chati

    2016-12-01

    Noncompliance of operators with work procedures is a recurrent problem. This human behavior has been said to be situational and has been studied by many different approaches (ergonomic and others), which take the noncompliance with work procedures as given and seek to analyze its causes as well as its consequences. The object of the proposed method is to address this problem by focusing on the performance of work procedures and ensuring improved performance on a continuous basis. This study has multiple results: (1) assessment of the work procedures' performance by a multicriteria approach; (2) the use of a continuous improvement approach as a framework for the sustainability of the assessment method of work procedures' performance; and (3) adaptation of the Stop-Card as a facilitating support for continuous improvement of work procedures. The proposed method emphasizes the inputs of continuous improvement of the work procedures, in contrast with conventional approaches, which accept the noncompliance with working procedures as obvious and seek to analyze the cause-effect relationships related to this unacceptable phenomenon, especially in strategic industries.

  12. A comparative study on medical image segmentation methods

    Directory of Open Access Journals (Sweden)

    Praylin Selva Blessy SELVARAJ ASSLEY

    2014-03-01

    Full Text Available Image segmentation plays an important role in medical imaging. It has been a relevant research area in computer vision and image analysis, and many segmentation algorithms have been proposed for medical images. This paper reviews segmentation methods for medical images. In this survey, segmentation methods are divided into five categories: region based, boundary based, model based, hybrid based and atlas based. The five categories, with their principal ideas, advantages and disadvantages in segmenting different medical images, are discussed.

  13. Comparing registration methods for mapping brain change using tensor-based morphometry.

    Science.gov (United States)

    Yanovsky, Igor; Leow, Alex D; Lee, Suh; Osher, Stanley J; Thompson, Paul M

    2009-10-01

    Measures of brain changes can be computed from sequential MRI scans, providing valuable information on disease progression for neuroscientific studies and clinical trials. Tensor-based morphometry (TBM) creates maps of these brain changes, visualizing the 3D profile and rates of tissue growth or atrophy. In this paper, we examine the power of different nonrigid registration models to detect changes in TBM, and their stability when no real changes are present. Specifically, we investigate an asymmetric version of a recently proposed Unbiased registration method, using mutual information as the matching criterion. We compare matching functionals (sum of squared differences and mutual information), as well as large-deformation registration schemes (viscous fluid and inverse-consistent linear elastic registration methods versus Symmetric and Asymmetric Unbiased registration) for detecting changes in serial MRI scans of 10 elderly normal subjects and 10 patients with Alzheimer's Disease scanned at 2-week and 1-year intervals. We also analyzed registration results when matching images corrupted with artificial noise. We demonstrated that the unbiased methods, both symmetric and asymmetric, have higher reproducibility. The unbiased methods were also less likely to detect changes in the absence of any real physiological change. Moreover, they measured biological deformations more accurately by penalizing bias in the corresponding statistical maps.

  14. New clinical validation method for automated sphygmomanometer: a proposal by Japan ISO-WG for sphygmomanometer standard.

    Science.gov (United States)

    Shirasaki, Osamu; Asou, Yosuke; Takahashi, Yukio

    2007-12-01

    Owing to fast or stepwise cuff deflation, or measuring at places other than the upper arm, the clinical accuracy of most recent automated sphygmomanometers (auto-BPMs) cannot be validated by one-arm simultaneous comparison, which would be the only accurate validation method based on auscultation. Two main alternative methods are provided by current standards, that is, two-arm simultaneous comparison (method 1) and one-arm sequential comparison (method 2); however, the accuracy of these validation methods might not be sufficient to compensate for lateral blood pressure (BP) differences (LD) and/or BP variations (BPV) between the device and reference readings. Thus, the Japan ISO-WG for sphygmomanometer standards has been studying a new method that might improve validation accuracy (method 3). The purpose of this study is to determine the appropriateness of method 3 by comparing immunity to LD and BPV with those of the current validation methods (methods 1 and 2). The validation accuracy of the above three methods was assessed in human participants [N = 120, 45 ± 15.3 years (mean ± SD)]. An oscillometric automated monitor, Omron HEM-762, was used as the tested device. When compared with the others, methods 1 and 3 showed a smaller intra-individual standard deviation of device error (SD1), suggesting their higher reproducibility of validation. The SD1 by method 2 significantly correlated with the participant's BP (P = 0.004), supporting our hypothesis that the increased SD of device error by method 2 is at least partially caused by essential BPV. Method 3 showed a significantly smaller interparticipant SD of device error (SD2) (P = 0.0044), suggesting its higher interparticipant consistency of validation. Among the methods of validation of the clinical accuracy of auto-BPMs, method 3, which showed the highest reproducibility and highest interparticipant consistency, can be proposed as being the most appropriate.

  15. Proposed method to calculate FRMAC intervention levels for the assessment of radiologically contaminated food and comparison of the proposed method to the U.S. FDA's method to calculate derived intervention levels

    Energy Technology Data Exchange (ETDEWEB)

    Kraus, Terrence D.; Hunt, Brian D.

    2014-02-01

    This report reviews the method recommended by the U.S. Food and Drug Administration for calculating Derived Intervention Levels (DILs) and identifies potential improvements to the DIL calculation method to support more accurate ingestion pathway analyses and protective action decisions. Further, this report proposes an alternate method for use by the Federal Emergency Radiological Assessment Center (FRMAC) to calculate FRMAC Intervention Levels (FILs). The default approach of the FRMAC during an emergency response is to use the FDA recommended methods. However, FRMAC recommends implementing the FIL method because we believe it to be more technically accurate. FRMAC will only implement the FIL method when approved by the FDA representative on the Federal Advisory Team for Environment, Food, and Health.

  16. Proposed method for assigning metric tons of heavy metal values to defense high-level waste forms to be disposed of in a geologic repository

    International Nuclear Information System (INIS)

    1987-08-01

    A proposed method is described for assigning an equivalent metric ton heavy metal (eMTHM) value to defense high-level waste forms to be disposed of in a geologic repository. This method for establishing a curie equivalency between defense high-level waste and irradiated commercial fuel is based on the ratio of defense fuel exposure to typical commercial fuel exposure, in MWd/MTHM. Application of this technique to defense high-level wastes is described. Additionally, the proposed technique is compared to several alternative calculations of eMTHM. 15 refs., 2 figs., 10 tabs

  17. Comparative Study of Daylighting Calculation Methods

    Directory of Open Access Journals (Sweden)

    Mandala Ariani

    2018-01-01

    Full Text Available The aim of this study is to assess five daylighting calculation methods commonly used in architectural study. The methods used include hand calculation methods (the SNI/DPMB method and BRE Daylighting Protractors), scale models studied in an artificial sky simulator, and computer programs using the Dialux and Velux lighting software. The test room is conditioned by uniform sky conditions and simple room geometry, with variations of the room reflectance (black, grey, and white). The analyses compare the results (including daylight factor, illuminance, and coefficient of uniformity values) and examine the similarities and differences between them. The reflectance variations are used to analyse the contribution of the internally reflected component to the result.

  18. Comparative assessment of cyclic J-R curve determination by different methods in a pressure vessel steel

    Energy Technology Data Exchange (ETDEWEB)

    Chowdhury, Tamshuk, E-mail: tamshuk@gmail.com [Deep Sea Technologies, National Institute of Ocean Technology, Chennai, 600100 (India); Sivaprasad, S.; Bar, H.N.; Tarafder, S. [Fatigue & Fracture Group, Materials Science and Technology Division, CSIR-National Metallurgical Laboratory, Jamshedpur, 831007 (India); Bandyopadhyay, N.R. [School of Materials Science and Engineering, Indian Institute of Engineering, Science and Technology, Shibpur, Howrah, 711103 (India)

    2016-04-15

    Cyclic J-R behaviour of a reactor pressure vessel steel has been examined using different methods available in the literature, to identify the method best suited to cyclic fracture problems. The crack opening point was determined by a moving average method. The η factor was experimentally determined for cyclic loading conditions and found to be similar to the ASTM value. Analyses showed that adopting a procedure analogous to the ASTM standard for monotonic fracture is reasonable for cyclic fracture problems and makes the comparison to monotonic fracture results straightforward. - Highlights: • Different methods of cyclic J-R evaluation compared. • A moving average method for the closure point proposed. • η factor for cyclic J experimentally validated. • Method 1 is easier, provides a lower bound and allows direct comparison to monotonic fracture.

  19. A Comparative Study of Applying Active-Set and Interior Point Methods in MPC for Controlling Nonlinear pH Process

    Directory of Open Access Journals (Sweden)

    Syam Syafiie

    2014-06-01

    Full Text Available A comparative study of Model Predictive Control (MPC) using the active-set method and interior point methods is proposed as a control technique for a highly non-linear pH process. The process is a strong acid-strong base system. A strong acid, hydrochloric acid (HCl), and a strong base, sodium hydroxide (NaOH), with the presence of the buffer solution sodium bicarbonate (NaHCO3), are used in a neutralization process flowing into a reactor. The non-linear pH neutralization model governing this process is represented by multi-linear models. The performance of both controllers is studied by evaluating their set-point tracking and disturbance rejection. In addition, the optimization time is compared between these two methods; both MPC controllers show similar performance with no overshoot, offset, or oscillation. However, the conventional active-set method gives a shorter control action time for small-scale optimization problems compared to MPC using the interior point method for pH control.

  20. Comparing the index-flood and multiple-regression methods using L-moments

    Science.gov (United States)

    Malekinezhad, H.; Nachtnebel, H. P.; Klik, A.

    In arid and semi-arid regions, the length of records is usually too short to ensure reliable quantile estimates. Comparing index-flood and multiple-regression analyses based on L-moments was the main objective of this study. Factor analysis was applied to determine the main variables influencing flood magnitude. Ward's cluster and L-moments approaches were applied to several sites in the Namak-Lake basin in central Iran to delineate homogeneous regions based on site characteristics. The homogeneity test was done using L-moments-based measures. Several distributions were fitted to the regional flood data, and the index-flood and multiple-regression methods were compared as two regional flood frequency methods. The results of factor analysis showed that length of main waterway, compactness coefficient, mean annual precipitation, and mean annual temperature were the main variables affecting flood magnitude. The study area was divided into three regions based on Ward's clustering approach. The homogeneity test based on L-moments showed that all three regions were acceptably homogeneous. Five distributions were fitted to the annual peak flood data of the three homogeneous regions. Using the L-moment ratios and the Z-statistic criteria, the GEV distribution was identified as the most robust of the five candidate distributions for all the proposed sub-regions of the study area; in general, the generalised extreme value distribution was the best-fit distribution for all three regions. The relative root mean square error (RRMSE) measure was applied to evaluate the performance of the index-flood and multiple-regression methods in comparison with the curve-fitting (plotting position) method. In general, the index-flood method gives more reliable estimations for various flood magnitudes of different recurrence intervals. Therefore, this method should be adopted as the regional flood frequency method for the study area and the Namak-Lake basin.
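
    A minimal sketch of the index-flood building blocks: sample L-moments are computed from probability-weighted moments, the at-site mean (l1) serves as the index flood, and a site quantile is the index flood times a regional growth factor. The flood record and the growth factor below are simulated/assumed values, not the Namak-Lake results:

```python
import numpy as np

def sample_lmoments(x):
    """First two sample L-moments via probability-weighted moments:
    b0 = mean, b1 = n^-1 * sum((j-1)/(n-1) * x_(j)); l1 = b0, l2 = 2*b1 - b0."""
    x = np.sort(x)
    n = len(x)
    j = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((j - 1) / (n - 1) * x) / n
    return b0, 2 * b1 - b0            # l1 (mean), l2 (L-scale)

rng = np.random.default_rng(7)
site_floods = rng.gumbel(100.0, 30.0, size=25)   # short annual-max record

l1, l2 = sample_lmoments(site_floods)
print("index flood (l1):", round(l1, 1), " L-CV:", round(l2 / l1, 3))

# Index-flood estimate: site quantile = index flood * regional growth factor.
growth_100yr = 2.3        # assumed regional GEV growth factor for T = 100
print("100-yr flood estimate:", round(l1 * growth_100yr, 1))
```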

  1. Comparative Study of Different Processing Methods for the ...

    African Journals Online (AJOL)

    The two processing methods reduced the cyanide concentration to below the maximum level permitted by the World Health Organization (10 mg/kg). The mechanical pressing-fermentation method removed more cyanide than the fermentation processing method. Keywords: Cyanide, Fermentation, Manihot ...

  2. Comparing early design methods for children

    NARCIS (Netherlands)

    Sluis - Thiescheffer, R.J.W.; Bekker, M.M.; Eggen, J.H.; Robertson, J.; Skov, M.B.; Bekker, M.M.

    2007-01-01

    This paper describes a study which compares the outcome of two early design methods for children: brainstorming and prototyping. The hypothesis is that children will uncover more design ideas when prototyping than when brainstorming, because prototyping requires the use of a wider range of …

  3. Studies on the instrumental neutron activation analysis by cadmium ratio method and pair comparator method

    Energy Technology Data Exchange (ETDEWEB)

    Chao, H E; Lu, W D; Wu, S C

    1977-12-01

    The cadmium ratio method and pair comparator method provide a solution for the effects on the effective activation factors resulting from the variation of the neutron spectrum at different irradiation positions, as usually encountered in the single comparator method. The relations between the activation factors and the neutron spectrum, in terms of the cadmium ratio of the comparator Au or of the activation factor of the Co-Au pair, have been determined for the elements Sc, Cr, Mn, Co, La, Ce, Sm, and Th. The activation factors of the elements at any irradiation position can then be obtained from the cadmium ratio of the comparator and/or the activation factor of the comparator pair. The relations determined should be applicable to different reactors and/or different positions in a reactor. It is shown that, for the isotopes Sc-46, Cr-51, Mn-56, Co-60, La-140, Ce-141, Sm-153 and Pa-233, the thermal neutron activation factors determined by these two methods were generally in agreement with theoretical values. Their I0/σth values appeared to agree with literature values also. The methods were applied to determine the contents of the elements Sc, Cr, Mn, La, Ce, Sm, and Th in U.S.G.S. Standard Rock G-2, and the results were also in agreement with literature values. The cadmium ratio method and pair comparator method improve on the single comparator method and are more suitable for multi-element analysis of a large number of samples.

  4. Optimal plot size in the evaluation of papaya scions: proposal and comparison of methods

    Directory of Open Access Journals (Sweden)

    Humberto Felipe Celanti

    Full Text Available ABSTRACT Evaluating the quality of scions is extremely important, and it can be done through characteristics of shoots and roots. This experiment evaluated height of the aerial part, stem diameter, number of leaves, petiole length, and length of roots of papaya seedlings. Analyses were performed on a blank trial with 240 seedlings of "Golden Pecíolo Curto". The optimum plot size was determined by applying the method of maximum curvature, the method of maximum curvature of the coefficient of variation, and a newly proposed method that incorporates bootstrap resampling simulation into the maximum curvature method. According to the results obtained, five is the optimal number of seedlings of papaya "Golden Pecíolo Curto" per plot. The proposed method of bootstrap simulation with replacement provides optimal plot sizes equal to or larger than those of the maximum curvature method, and the same plot size as the maximum curvature method of the coefficient of variation.
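
    A generic numerical version of the proposed idea (bootstrap resampling combined with maximum curvature of the CV-versus-plot-size curve), sketched under assumed blank-trial data; the authors' exact resampling scheme and curvature criterion may differ:

```python
import numpy as np

rng = np.random.default_rng(11)
trial = rng.normal(20.0, 4.0, 240)        # assumed blank-trial trait values

def cv_for_plot_size(data, k, n_boot=500):
    """Bootstrap-resampled CV of plot means for plots of k seedlings."""
    cvs = []
    for _ in range(n_boot):
        plots = rng.choice(data, size=(len(data) // k, k), replace=True)
        means = plots.mean(axis=1)
        cvs.append(100 * means.std() / means.mean())
    return np.mean(cvs)

sizes = np.arange(1, 13)
cv = np.array([cv_for_plot_size(trial, int(k)) for k in sizes])

# Numerical maximum-curvature point of the CV(k) curve.
d1 = np.gradient(cv, sizes)
d2 = np.gradient(d1, sizes)
curvature = np.abs(d2) / (1 + d1**2) ** 1.5
print("optimal plot size ~", sizes[int(curvature.argmax())])
```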

  5. Proposed Project Selection Method for Human Support Research and Technology Development (HSR&TD)

    Science.gov (United States)

    Jones, Harry

    2005-01-01

    The purpose of HSR&TD is to deliver human support technologies to the Exploration Systems Mission Directorate (ESMD) that will be selected for future missions. This requires identifying promising candidate technologies and advancing them in technology readiness until they are acceptable. HSR&TD must select an array of technology development projects, guide them, and either terminate or continue them, so as to maximize the resulting number of usable advanced human support technologies. This paper proposes an effective project scoring methodology to support managing the HSR&TD project portfolio. Researchers strongly disagree as to what the best technology project selection methods are, or even whether there are any proven ones. Technology development is risky and outstanding achievements are rare and unpredictable. There is no simple formula for success. Organizations that are satisfied with their project selection approach typically use a mix of financial, strategic, and scoring methods in an open, established, explicit, formal process. This approach helps to build consensus and develop management insight. It encourages better project proposals by clarifying the desired project attributes. We propose a project scoring technique based on a method previously used in a federal laboratory and supported by recent research. Projects are ranked by their perceived relevance, risk, and return, a new 3 R's. Relevance is the degree to which the project objective supports the HSR&TD goal of developing usable advanced human support technologies. Risk is the estimated probability that the project will achieve its specific objective. Return is the reduction in mission life cycle cost obtained if the project is successful. If the project objective technology performs a new function with no current cost, its return is the estimated cash value of performing the new function. The proposed project selection scoring method includes definitions of the criteria, a project evaluation …

  6. The clinically-integrated randomized trial: proposed novel method for conducting large trials at low cost

    Directory of Open Access Journals (Sweden)

    Scardino Peter T

    2009-03-01

    Full Text Available Abstract Introduction Randomized controlled trials provide the best method of determining which of two comparable treatments is preferable. Unfortunately, contemporary randomized trials have become increasingly expensive, complex and burdened by regulation, so much so that many trials are of doubtful feasibility. Discussion Here we present a proposal for a novel, streamlined approach to randomized trials: the "clinically-integrated randomized trial". The key aspect of our methodology is that the clinical experience of the patient and doctor is virtually indistinguishable whether or not the patient is randomized, primarily because outcome data are obtained from routine clinical data, or from short, web-based questionnaires. Integration of a randomized trial into routine clinical practice also implies that there should be an attempt to randomize every patient, a corollary of which is that eligibility criteria are minimized. The similar clinical experience of patients on- and off-study also entails that the marginal cost of putting an additional patient on trial is negligible. We propose examples of how the clinically-integrated randomized trial might be applied in four distinct areas of medicine: comparisons of surgical techniques, "me too" drugs, rare diseases and lifestyle interventions. Barriers to implementing clinically-integrated randomized trials are discussed. Conclusion The proposed clinically-integrated randomized trial may allow us to enlarge dramatically the number of clinical questions that can be addressed by randomization.

  7. Comparing Methods for Cardiac Output

    DEFF Research Database (Denmark)

    Graeser, Karin; Zemtsovski, Mikhail; Kofoed, Klaus F

    2018-01-01

    METHODS: The primary aim was a systematic comparison of cardiac output (CO) measured with Doppler-derived 3D TEE and CO measured by thermodilution in a broad population of patients undergoing cardiac surgery. A subanalysis was performed comparing the cross-sectional area of the left ventricular outflow tract measured by TEE with cardiac computed tomography (CT) angiography. Sixty-two patients scheduled for elective heart surgery were included; 1 was subsequently excluded for logistic reasons. Inclusion criteria were coronary artery bypass surgery (N = 42) and aortic valve replacement (N = 19). Exclusion criteria were chronic atrial fibrillation, left ventricular ejection fraction below 0.40 and intracardiac shunts. Nineteen randomly selected patients had a cardiac CT the day before surgery. All images were stored for blinded post hoc analyses, and Bland-Altman plots were used to assess agreement between measurement methods, defined as the bias.
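
    The agreement analysis named here is standard; the following minimal sketch computes Bland-Altman bias and 95% limits of agreement on synthetic paired cardiac output values (the study's data are not reproduced).

```python
import numpy as np

def bland_altman(a: np.ndarray, b: np.ndarray):
    """Return bias and 95% limits of agreement for paired measurements."""
    diff = a - b
    bias = float(diff.mean())
    sd = float(diff.std(ddof=1))
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

rng = np.random.default_rng(0)
co_tee = rng.normal(5.0, 1.0, 61)            # Doppler-derived 3D TEE (L/min)
co_td = co_tee + rng.normal(0.3, 0.5, 61)    # thermodilution: offset + noise
bias, loa = bland_altman(co_tee, co_td)
print(f"bias = {bias:.2f} L/min, limits of agreement = ({loa[0]:.2f}, {loa[1]:.2f})")
```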

  8. Dimensionality Reduction Methods: Comparative Analysis of methods PCA, PPCA and KPCA

    Directory of Open Access Journals (Sweden)

    Jorge Arroyo-Hernández

    2016-01-01

    Dimensionality reduction methods are algorithms that map a data set into subspaces derived from the original space, of fewer dimensions, allowing a description of the data at a lower cost. Because of their importance, they are widely used in machine learning processes. This article presents a comparative analysis of the PCA, PPCA and KPCA dimensionality reduction methods. A reconstruction experiment on worm-shape data was performed using structures of landmarks located on the body contour, with each method using different numbers of principal components. The results showed that all the methods can be seen as alternative processes. Nevertheless, thanks to its potential for analysis in the feature space and the method presented for calculating its preimage, KPCA offers a better method for recognition processes and pattern extraction.
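
    A minimal sketch of such a comparison on toy data with scikit-learn, assuming an RBF kernel and illustrative parameters; KernelPCA's preimage computation (fit_inverse_transform=True) supplies the reconstruction discussed in the abstract. No claim is made about which method wins on this toy set.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.decomposition import PCA, KernelPCA

X, _ = make_moons(n_samples=200, noise=0.05, random_state=0)

pca = PCA(n_components=1).fit(X)
X_pca = pca.inverse_transform(pca.transform(X))

kpca = KernelPCA(n_components=1, kernel="rbf", gamma=10.0,
                 fit_inverse_transform=True).fit(X)   # learns a preimage map
X_kpca = kpca.inverse_transform(kpca.transform(X))

for name, Xr in [("PCA", X_pca), ("KPCA", X_kpca)]:
    print(name, "mean reconstruction error:",
          np.mean(np.sum((X - Xr) ** 2, axis=1)))
```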

  9. Comparative study of the geostatistical ore reserve estimation method over the conventional methods

    International Nuclear Information System (INIS)

    Kim, Y.C.; Knudsen, H.P.

    1975-01-01

    Part I contains a comprehensive treatment of the comparative study of the geostatistical ore reserve estimation method over the conventional methods. The conventional methods chosen for comparison were: (a) the polygon method, (b) the inverse of the distance squared method, and (c) a method similar to (b) but allowing different weights in different directions. Briefly, the overall result from this comparative study is in favor of the use of geostatistics in most cases because the method has lived up to its theoretical claims. A good exposition on the theory of geostatistics, the adopted study procedures, conclusions and recommended future research are given in Part I. Part II of this report contains the results of the second and the third study objectives, which are to assess the potential benefits that can be derived by the introduction of the geostatistical method to the current state-of-the-art in uranium reserve estimation method and to be instrumental in generating the acceptance of the new method by practitioners through illustrative examples, assuming its superiority and practicality. These are given in the form of illustrative examples on the use of geostatistics and the accompanying computer program user's guide
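
    The inverse-of-the-distance-squared method mentioned above is easy to state in code. A pure-numpy sketch of that conventional estimator follows (sample coordinates and grades are invented; the report's geostatistical comparison is not reproduced).

```python
import numpy as np

def idw2(xy_samples: np.ndarray, grades: np.ndarray, xy_target: np.ndarray) -> float:
    """Estimate the grade at xy_target from sampled grades, weights ~ 1/d^2."""
    d2 = np.sum((xy_samples - xy_target) ** 2, axis=1)
    if np.any(d2 == 0):                      # target coincides with a sample
        return float(grades[np.argmin(d2)])
    w = 1.0 / d2
    return float(np.sum(w * grades) / np.sum(w))

samples = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
grades = np.array([0.12, 0.30, 0.21])        # e.g., % U3O8
print(idw2(samples, grades, np.array([4.0, 4.0])))
```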

  10. Urea ammoniation compared to urea supplementation as a method ...

    African Journals Online (AJOL)

    Urea ammoniation compared to urea supplementation as a method of improving the nutritive value of wheat straw for sheep. S.W.P. Cloete, N.M. Kritzinger. Winter Rainfall Region, Eisenburg. The ammoniation of wheat straw by urea in a stack method was investigated and compared to urea supplemented and untreated ...

  11. Book Review: Comparative Education Research: Approaches and Methods

    Directory of Open Access Journals (Sweden)

    Noel Mcginn

    2014-10-01

    Book Review: Comparative Education Research: Approaches and Methods (2nd edition), by Mark Bray, Bob Adamson and Mark Mason (Eds.) (2014, 453p). ISBN: 978-988-17852-8-2. Hong Kong: Comparative Education Research Centre and Springer.

  12. A novel method for rapid comparative quantitative analysis of nuclear fuel cycles

    International Nuclear Information System (INIS)

    Eastham, Sebastian D.; Coates, David J.; Parks, Geoffrey T.

    2012-01-01

    Highlights: ► Metric framework determined to compare nuclear fuel cycles. ► Fast and thermal reactors simulated using MATLAB models, including thorium. ► Modelling uses deterministic methods instead of Monte Carlo for speed. ► Method rapidly identifies relative cycle strengths and weaknesses. ► Significant scope for use in project planning and cycle optimisation. - Abstract: One of the greatest obstacles facing the nuclear industry is that of sustainability, both in terms of the finite reserves of uranium ore and the production of highly radiotoxic spent fuel which presents proliferation and environmental hazards. Alternative nuclear technologies have been suggested as a means of delivering enhanced sustainability with proposals including fast reactors, the use of thorium fuel and tiered fuel cycles. The debate as to which is the most appropriate technology continues, with each fuel system and reactor type delivering specific advantages and disadvantages which can be difficult to compare fairly. This paper demonstrates a framework of performance metrics which, coupled with a first-order lumped reactor model to determine nuclide population balances, can be used to quantify the aforementioned pros and cons for a range of different fuel and reactor combinations. The framework includes metrics such as fuel efficiency, spent fuel toxicity and proliferation resistance, and relative cycle performance is analysed through parallel coordinate plots, yielding a quantitative comparison of disparate cycles.
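
    A sketch of the parallel-coordinate comparison described, using pandas; the cycles and normalized metric values below are placeholders, not results from the paper.

```python
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

cycles = pd.DataFrame({
    "cycle": ["once-through", "fast reactor", "thorium tiered"],
    "fuel efficiency": [0.2, 0.9, 0.7],        # normalized 0-1, higher is better
    "spent-fuel toxicity": [0.3, 0.8, 0.6],    # normalized, higher = less toxic
    "proliferation resistance": [0.6, 0.4, 0.8],
})
parallel_coordinates(cycles, "cycle")
plt.ylabel("normalized metric score")
plt.show()
```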

  13. Comparative Method for Indirect Sensitivity Measurement of UHF RFID Reader with Respect to Interoperability and Conformance Requirements

    Directory of Open Access Journals (Sweden)

    Lukas Kypus

    2014-01-01

    There is a never-ending race for competitive advantage that forces RFID technology service integrators to focus more on the qualitative aspects of the technology used and their impacts inside the RFID ecosystem. This paper contributes to the evaluation and assessment of UHF RFID reader qualitative parameters. It presents and describes in detail an indirect method and procedure for sensitivity measurement created for UHF RFID readers. We applied this method to RFID readers within a prepared test environment, confirming our long-term intention and the recognized trend. Because of regulatory limitations, it is not possible to increase output power over defined limits, but there are possibilities to influence reader sensitivity. Our proposal is to use a customized comparative measurement method with insertion loss compensation for the return link. Besides achieving the main goal, the results also show the qualitative status of the development snapshot of the reader. The method and the subsequent experiment helped us gain an external view and current values of important parameters, provided motivation to follow up on, and compared the developed reader with its commercial competitors.

  14. Proposal for a Five-Step Method to Elicit Expert Judgment

    Directory of Open Access Journals (Sweden)

    Duco Veen

    2017-12-01

    Elicitation is a commonly used tool to extract viable information from experts. The information held by the expert is extracted and a probabilistic representation of this knowledge is constructed. A promising avenue in psychological research is to incorporate experts' prior knowledge in the statistical analysis. Systematic reviews of the elicitation literature, however, suggest that it might be inappropriate to directly obtain distributional representations from experts. The literature qualifies experts' performance in estimating elements of a distribution as unsatisfactory, so reliably specifying the essential elements of the parameters of interest in one elicitation step seems implausible. Providing feedback within the elicitation process can enhance the quality of the elicitation, and interactive software can be used to facilitate the feedback. Therefore, we propose to decompose the elicitation procedure into smaller steps with adjustable outcomes. We represent the tacit knowledge of experts as a location parameter and their uncertainty concerning this knowledge by a scale and shape parameter. Using a feedback procedure, experts can accept the representation of their beliefs or adjust their input. We propose a Five-Step Method which consists of (1) eliciting the location parameter using the trial roulette method; (2) providing feedback on the location parameter and asking for confirmation or adjustment; (3) eliciting the scale and shape parameter; (4) providing feedback on the scale and shape parameter and asking for confirmation or adjustment; and (5) using the elicited and calibrated probability distribution in a statistical analysis and updating it with data or computing a prior-data conflict within a Bayesian framework. User feasibility and internal validity of the Five-Step Method are investigated using three elicitation studies.
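
    Step (1), the trial roulette, can be summarized in a few lines: the expert's chip allocation over bins of the parameter range is converted into a location and a scale. The sketch below uses a simple weighted mean and standard deviation as stand-ins; the authors' interactive software fits richer distributional shapes.

```python
import numpy as np

bin_midpoints = np.array([10, 20, 30, 40, 50, 60], dtype=float)
chips = np.array([0, 2, 7, 6, 3, 2], dtype=float)   # 20 chips placed by the expert

probs = chips / chips.sum()
location = float(np.sum(probs * bin_midpoints))                      # elicited mean
scale = float(np.sqrt(np.sum(probs * (bin_midpoints - location) ** 2)))
print(f"elicited location = {location:.1f}, scale = {scale:.1f}")
```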

  15. Cut Based Method for Comparing Complex Networks.

    Science.gov (United States)

    Liu, Qun; Dong, Zhishan; Wang, En

    2018-03-23

    Revealing the underlying similarity of various complex networks has become both a popular and interdisciplinary topic, with a plethora of relevant application domains. The essence of the similarity here is that network features of the same network type are highly similar, while the features of different kinds of networks present low similarity. In this paper, we introduce and explore a new method for comparing various complex networks based on the cut distance. We show correspondence between the cut distance and the similarity of two networks. This correspondence allows us to consider a broad range of complex networks and explicitly compare various networks with high accuracy. Various machine learning technologies such as genetic algorithms, nearest neighbor classification, and model selection are employed during the comparison process. Our cut method is shown to be suited for comparisons of undirected networks and directed networks, as well as weighted networks. In the model selection process, the results demonstrate that our approach outperforms other state-of-the-art methods with respect to accuracy.
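
    For intuition, a brute-force cut distance between two small graphs on the same labeled vertex set can be computed directly: the maximum, over vertex subsets S and T, of the normalized difference in edge mass. This is exponential-time illustration only; the paper relies on machine learning techniques such as genetic algorithms to scale.

```python
from itertools import combinations
import numpy as np

def cut_distance(A: np.ndarray, B: np.ndarray) -> float:
    """Max over subsets S, T of |e_A(S,T) - e_B(S,T)| / n^2 (aligned vertices)."""
    n = A.shape[0]
    subsets = [list(c) for r in range(n + 1)
               for c in combinations(range(n), r)]
    best = 0.0
    for S in subsets:
        for T in subsets:
            d = abs(A[np.ix_(S, T)].sum() - B[np.ix_(S, T)].sum()) / n ** 2
            best = max(best, d)
    return best

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # path graph
B = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)  # triangle
print(cut_distance(A, B))
```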

  16. Comparative methods for PET image segmentation in pharyngolaryngeal squamous cell carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Zaidi, Habib [Geneva University Hospital, Division of Nuclear Medicine and Molecular Imaging, Geneva (Switzerland); Geneva University, Geneva Neuroscience Center, Geneva (Switzerland); University of Groningen, Department of Nuclear Medicine and Molecular Imaging, University Medical Center Groningen, Groningen (Netherlands); Abdoli, Mehrsima [University of Groningen, Department of Nuclear Medicine and Molecular Imaging, University Medical Center Groningen, Groningen (Netherlands); Fuentes, Carolina Llina [Geneva University Hospital, Division of Nuclear Medicine and Molecular Imaging, Geneva (Switzerland); Naqa, Issam M.El [McGill University, Department of Medical Physics, Montreal (Canada)

    2012-05-15

    Several methods have been proposed for the segmentation of 18F-FDG uptake in PET. In this study, we assessed the performance of four categories of 18F-FDG PET image segmentation techniques in pharyngolaryngeal squamous cell carcinoma using clinical studies where the surgical specimen served as the benchmark. Nine PET image segmentation techniques were compared including: five thresholding methods; the level set technique (active contour); the stochastic expectation-maximization approach; fuzzy clustering-based segmentation (FCM); and a variant of FCM, the spatial wavelet-based algorithm (FCM-SW), which incorporates spatial information during the segmentation process, thus allowing the handling of uptake in heterogeneous lesions. These algorithms were evaluated using clinical studies in which the segmentation results were compared to the 3-D biological tumour volume (BTV) defined by histology in PET images of seven patients with T3-T4 laryngeal squamous cell carcinoma who underwent a total laryngectomy. The macroscopic tumour specimens were collected "en bloc", frozen and cut into 1.7- to 2-mm thick slices, then digitized for use as reference. The clinical results suggested that four of the thresholding methods and expectation-maximization overestimated the average tumour volume, while a contrast-oriented thresholding method, the level set technique and the FCM-SW algorithm underestimated it, with the FCM-SW algorithm providing relatively the highest accuracy in terms of volume determination (-5.9 ± 11.9%) and overlap index. The mean overlap index varied between 0.27 and 0.54 for the different image segmentation techniques. The FCM-SW segmentation technique showed the best compromise in terms of 3-D overlap index and statistical analysis results, with values of 0.54 (0.26-0.72) for the overlap index. The BTVs delineated using the FCM-SW segmentation technique were seemingly the most accurate and approximated the 3-D BTVs most closely.
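
    As a minimal illustration of the simplest category compared (thresholding), the sketch below keeps voxels above a fixed fraction of the maximum uptake in a synthetic volume; the best-performing FCM-SW algorithm is considerably more involved.

```python
import numpy as np

def threshold_segment(suv: np.ndarray, fraction: float = 0.4) -> np.ndarray:
    """Binary tumour mask: voxels above `fraction` of the maximum uptake."""
    return suv >= fraction * suv.max()

rng = np.random.default_rng(1)
suv = rng.gamma(2.0, 0.5, size=(16, 16, 8))   # synthetic uptake volume
suv[6:10, 6:10, 3:5] += 6.0                   # hot lesion
mask = threshold_segment(suv, 0.4)
print("segmented volume (voxels):", int(mask.sum()))
```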

  17. A Proposal of New Spherical Particle Modeling Method Based on Stochastic Sampling of Particle Locations in Monte Carlo Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Song Hyun; Kim, Do Hyun; Kim, Jong Kyung [Hanyang Univ., Seoul (Korea, Republic of); Noh, Jea Man [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

    Owing to its high computational efficiency and user convenience, the implicit method has received attention; however, the implicit methods of previous studies have low accuracy at high packing fractions. In this study, a new implicit method, which can be used at any packing fraction with high accuracy, is proposed: an implicit modeling method for spherical-particle-distributed media in MC simulation. A new concept for spherical particle sampling was developed to solve the problems of the previous implicit methods. The sampling method was verified by simulating it in infinite and finite media. The results show that particle implicit modeling with the proposed method was performed accurately at all packing fractions. It is expected that the proposed method can be efficiently utilized for spherical-particle-distributed media such as fusion reactor blankets, VHTR reactors, and shielding analyses.
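
    A naive illustration of verifying a sphere-centre sampling routine: estimate the packing fraction realized in a box by Monte Carlo point tests. This is a generic stand-in, not the authors' implicit sampling scheme (overlaps between spheres are not prevented here).

```python
import numpy as np

rng = np.random.default_rng(2)
L, r, n_spheres = 10.0, 0.5, 200
centres = rng.uniform(r, L - r, size=(n_spheres, 3))     # sampled sphere centres

points = rng.uniform(0.0, L, size=(10_000, 3))           # MC test points
d2 = ((points[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
inside = (d2 <= r * r).any(axis=1)

print("estimated packing fraction:", inside.mean())
print("nominal value if no overlap:", n_spheres * (4 / 3) * np.pi * r ** 3 / L ** 3)
```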

  18. Comparative Analysis of Reduced-Rule Compressed Fuzzy Logic Control and Incremental Conductance MPPT Methods

    Science.gov (United States)

    Kandemir, Ekrem; Borekci, Selim; Cetin, Numan S.

    2018-04-01

    Photovoltaic (PV) power generation has been widely used in recent years, with techniques for increasing power efficiency representing one of the most important issues. The available maximum power of a PV panel depends on environmental conditions such as solar irradiance and temperature. To extract the maximum available power from a PV panel, various maximum-power-point tracking (MPPT) methods are used. In this work, two different MPPT methods were implemented for a 150-W PV panel. The first method, known as incremental conductance (Inc. Cond.) MPPT, determines the maximum power by measuring the derivative of the PV voltage and current. The other method is based on reduced-rule compressed fuzzy logic control (RR-FLC), with which it is relatively easier to determine the maximum power because a single input variable is used to reduce computing loads. In this study, a 150-W PV panel system model was realized using these MPPT methods in MATLAB and the results were compared. According to the simulation results, the proposed RR-FLC-based MPPT could increase the response rate and tracking accuracy by 4.66% under standard test conditions.
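
    The incremental-conductance rule itself is compact: at the maximum power point, dI/dV = -I/V. A sketch of one update step follows, assuming a converter whose duty cycle controls the panel operating voltage; the step size and actuator convention are illustrative, not the paper's implementation.

```python
def inc_cond_step(v: float, i: float, v_prev: float, i_prev: float,
                  duty: float, step: float = 0.005) -> float:
    """Return the new converter duty cycle after one Inc. Cond. decision."""
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        if di > 0:            # irradiance rose: move toward higher voltage
            duty -= step
        elif di < 0:
            duty += step
    else:
        g = di / dv           # incremental conductance
        if g > -i / v:        # left of the MPP: increase voltage
            duty -= step
        elif g < -i / v:      # right of the MPP: decrease voltage
            duty += step
    return min(max(duty, 0.0), 1.0)
```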

  19. Operational auditing versus traditional method: A comparative investigation

    Directory of Open Access Journals (Sweden)

    Reza Tehrani

    2013-06-01

    Operational auditing is one of the management consultancy services whose significance is rising day by day. It is a systematic and methodical process used to evaluate the economic savings of financial processes in organizations, with the results of the evaluations reported to interested parties along with comments for improving operational processes. Accordingly, it appears that proper employment of the rationale underlying operational auditing can be a significant step towards improving financial efficiency in the Iranian public and private banking sector. This paper studies the effects of operational auditing on the improvement of the economic savings of financial processes in Iranian private banks compared with traditional approaches, where operations are based on financial statements. The population of this survey includes 15 private and public Iranian banks, from which 78 branches were selected randomly. Cronbach's alpha was used to test the reliability of a questionnaire employed to collect the needed data. The results obtained with SPSS software indicated that the reliability of the instruments ranged between 0.752 and 0.867, an acceptable level of reliability for the questionnaire. Besides, content validity was used to confirm the validity of the instrument. The results of the study indicated that operational auditing, as a useful approach influencing the financial efficiency of public and private banks, has significantly transformed traditional thinking in the field of management auditing. Operational auditing has a number of significant advantages, including better control of financial operations within Iranian banks, efficient planning for the future, facilitation of efficient, appropriate, and accurate management decision making, and sound evaluation of managers' financial operations.
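
    Cronbach's alpha, used above to test questionnaire reliability, follows the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of totals). A sketch on synthetic item scores (not the study's survey data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x questionnaire items."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(3)
latent = rng.normal(size=(78, 1))                          # 78 branches surveyed
responses = latent + rng.normal(scale=0.6, size=(78, 10))  # 10 Likert-style items
print(f"alpha = {cronbach_alpha(responses):.3f}")
```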

  20. A comparative study between three stability indicating spectrophotometric methods for the determination of diatrizoate sodium in presence of its cytotoxic degradation product based on two-wavelength selection

    Science.gov (United States)

    Riad, Safaa M.; El-Rahman, Mohamed K. Abd; Fawaz, Esraa M.; Shehata, Mostafa A.

    2015-06-01

    Three sensitive, selective, and precise stability indicating spectrophotometric methods for the determination of the X-ray contrast agent, diatrizoate sodium (DTA) in the presence of its acidic degradation product (highly cytotoxic 3,5-diamino metabolite) and in pharmaceutical formulation, were developed and validated. The first method is ratio difference, the second one is the bivariate method, and the third one is the dual wavelength method. The calibration curves for the three proposed methods are linear over a concentration range of 2-24 μg/mL. The selectivity of the proposed methods was tested using laboratory prepared mixtures. The proposed methods have been successfully applied to the analysis of DTA in pharmaceutical dosage forms without interference from other dosage form additives. The results were statistically compared with the official US pharmacopeial method. No significant difference for either accuracy or precision was observed.
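
    The principle behind the dual wavelength method can be shown numerically: two wavelengths are chosen at which the degradation product absorbs equally, so the absorbance difference depends only on the intact drug. The spectra, wavelengths and concentrations below are synthetic illustrations, not DTA data.

```python
import numpy as np

wl = np.linspace(220.0, 320.0, 201)                  # wavelengths (nm)
eps_drug = np.exp(-(((wl - 260.0) / 18.0) ** 2))     # absorptivity shapes
eps_deg = np.exp(-(((wl - 285.0) / 25.0) ** 2))

i1, i2 = 90, 170                                     # 265 nm and 305 nm
assert abs(eps_deg[i1] - eps_deg[i2]) < 1e-6         # degradant absorbs equally

c_drug, c_deg = 12.0, 5.0                            # concentrations (ug/mL)
A = c_drug * eps_drug + c_deg * eps_deg              # mixture spectrum (Beer's law)
dA = A[i1] - A[i2]                                   # degradant contribution cancels
print("recovered drug concentration:", dA / (eps_drug[i1] - eps_drug[i2]))
```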

  1. A proposed safety assurance method and its application to the fusion experimental reactor

    International Nuclear Information System (INIS)

    Okazaki, T.; Seki, Y.; Inabe, T.; Aoki, I.

    1995-01-01

    Importance categorization and hazard identification methods have been proposed for a fusion experimental reactor. A parameter, the system index, is introduced in the categorization method. The relative importance of systems with safety functions can be classified by the magnitude of the system index and by whether or not the system acts as a boundary for radioactive materials. This categorization can be used as the basic principle in determining structural design assessment, seismic design criteria, etc. For hazard identification, the system time energy matrix is proposed, in which the time and spatial distributions of hazard energies are used. This approach is formulated more systematically than an ad-hoc identification of hazard events, and it is useful for selecting the design basis events employed in the assessment of safety designs. (orig.)

  2. A Proposal on the Advanced Sampling Based Sensitivity and Uncertainty Analysis Method for the Eigenvalue Uncertainty Analysis

    International Nuclear Information System (INIS)

    Kim, Song Hyun; Song, Myung Sub; Shin, Chang Ho; Noh, Jae Man

    2014-01-01

    In using the perturbation theory, the uncertainty of the response can be estimated with a single transport simulation, and therefore it requires a small computational load. However, it has the disadvantage that the computational methodology must be modified whenever a different response type is estimated, such as the multiplication factor, flux, or power distribution. Hence, it is suitable for analyzing few responses with many perturbed parameters. The statistical approach is a sampling-based method which uses cross sections randomly sampled from covariance data to analyze the uncertainty of the response. XSUSA is a code based on the statistical approach. The cross sections are only modified with the sampling-based method; thus, general transport codes can be directly utilized for the S/U analysis without any code modifications. However, to calculate the uncertainty distribution from the result, the code simulation must be repeated many times with randomly sampled cross sections. This inefficiency is a known disadvantage of the stochastic method. In this study, an advanced sampling method for the cross sections is proposed and verified to increase the estimation efficiency of the sampling-based method. The main feature of the proposed method is that the cross section averaged from each single sampled cross section is used. The proposed method was validated using the perturbation theory.
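
    The basic sampling-based loop that the proposed method refines looks as follows: draw cross-section sets from a covariance matrix, evaluate the response for each, and summarize the spread. The toy response and covariance below stand in for a transport calculation and real covariance data.

```python
import numpy as np

mean_xs = np.array([1.20, 0.35, 0.08])          # toy group cross sections
cov = np.array([[4.0e-4, 1.0e-4, 0.0],
                [1.0e-4, 2.5e-4, 0.0],
                [0.0,    0.0,    1.0e-5]])

def response(xs: np.ndarray) -> float:
    """Stand-in for k-eff from a transport calculation."""
    return xs[0] / (xs[1] + xs[2] + 0.9)

rng = np.random.default_rng(4)
samples = rng.multivariate_normal(mean_xs, cov, size=500)
k = np.array([response(x) for x in samples])
print(f"k mean = {k.mean():.5f}, rel. std = {k.std(ddof=1) / k.mean():.2%}")
```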

  3. In support of the importance of development comparative method in sociology

    Directory of Open Access Journals (Sweden)

    Baščarević Ivan M.

    2012-01-01

    The importance of comparative methods in social research is now almost beyond debate. In contemporary sociological literature there are many papers that indicate the use of this method in sociological (and not only sociological) research. In addition, in international surveys carried out in comparative-historical studies, there is a long tradition of developing issues of the comparative method in the study of social phenomena. This text is designed as a contribution to understanding the important contribution of the comparative method to contemporary social research. The usefulness of the comparative method became apparent to the classics of sociology - K. Marx, E. Durkheim and M. Weber, and even much earlier to Auguste Comte. Their work (with the exception of Comte), despite many criticisms often directed at a lack of systematic and reliable source material, made an outstanding contribution in terms of highlighting the importance and application of the comparative method. E. Durkheim was among the first to try to determine its epistemological significance. Critics of their work, however, did not sufficiently take into account the limitations of that age, especially the underdeveloped standardization and classification of the collected data. The sixties marked, after a short delay, a re-affirmation and development of comparative methods in social research. This was largely driven by criticism of classical ideas related to this method, together with the 'fake controversy' between supporters of quantitative and qualitative methodology. A lot has been done in terms of standardization of the comparative method and empirical information by means of training in classification and measurement, and by modern technical means that make simple collection and processing of comparable data possible. By improvement of these methods, the shortcomings that accompanied comparative research in the past would

  4. Comparative Analysis of Three Proposed Federal Renewable Electricity Standards

    Energy Technology Data Exchange (ETDEWEB)

    Sullivan, P.; Logan, J.; Bird, L.; Short, W.

    2009-05-01

    This paper analyzes potential impacts of proposed national renewable electricity standard (RES) legislation. An RES is a mandate requiring certain electricity retailers to provide a minimum share of their electricity sales from qualifying renewable power generation. The analysis focuses on draft bills introduced individually by Senator Jeff Bingaman and Representative Edward Markey, and jointly by Representative Henry Waxman and Markey. The analysis uses NREL's Regional Energy Deployment System (ReEDS) model to evaluate the impacts of the proposed RES requirements on the U.S. energy sector in four scenarios.

  5. Comparative analysis of different methods for image enhancement

    Institute of Scientific and Technical Information of China (English)

    吴笑峰; 胡仕刚; 赵瑾; 李志明; 李劲; 唐志军; 席在芳

    2014-01-01

    Image enhancement technology plays a very important role in improving image quality in image processing. By selectively enhancing some information and restraining other information, it can improve the visual effect of an image. The objective of this work is to implement image enhancement of grayscale images using different techniques. After the fundamental methods of image enhancement processing are demonstrated, image enhancement algorithms based on the spatial and frequency domains are systematically investigated and compared, and the advantages and defects of the above-mentioned algorithms are analyzed. The algorithms for wavelet-based image enhancement are also derived and generalized. Wavelet transform modulus maxima (WTMM) is a method for detecting the fractal dimension of a signal and is well suited for image enhancement. The techniques are compared using the mean (μ), standard deviation (s), mean square error (MSE) and peak signal-to-noise ratio (PSNR). A group of experimental results demonstrates that the image enhancement algorithm based on the wavelet transform is effective for image de-noising and enhancement, and that the wavelet transform modulus maxima method is one of the best methods for image enhancement.
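
    As a concrete example of the spatial-domain techniques compared, histogram equalization takes only a few lines of numpy (8-bit grayscale input assumed; the wavelet-based methods discussed above are substantially more elaborate).

```python
import numpy as np

def hist_equalize(img: np.ndarray) -> np.ndarray:
    """Spread the intensity histogram of an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalize to [0, 1]
    return (cdf[img] * 255).astype(np.uint8)

rng = np.random.default_rng(5)
dark = (rng.beta(2, 8, size=(64, 64)) * 255).astype(np.uint8)  # low-contrast image
print("std before:", dark.std(), " std after:", hist_equalize(dark).std())
```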

  6. A framework for comparative evaluation of dosimetric methods to triage a large population following a radiological event

    International Nuclear Information System (INIS)

    Flood, Ann Barry; Nicolalde, Roberto J.; Demidenko, Eugene; Williams, Benjamin B.; Shapiro, Alla; Wiley, Albert L.; Swartz, Harold M.

    2011-01-01

    Background: To prepare for a possible major radiation disaster involving large numbers of potentially exposed people, it is important to be able to rapidly and accurately triage people for treatment or not, factoring in the likely conditions and available resources. To date, planners have had to create guidelines for triage based on methods for estimating dose that are clinically available and which use evidence extrapolated from unrelated conditions. Current guidelines consequently focus on measuring clinical symptoms (e.g., time-to-vomiting), which may not be subject to the same verification of standard methods and validation processes required for governmental approval processes of new and modified procedures. Biodosimeters under development have not yet been formally approved for this use. Neither set of methods has been tested in settings involving large-scale populations at risk for exposure. Objective: To propose a framework for comparative evaluation of methods for such triage and to evaluate biodosimetric methods that are currently recommended and new methods as they are developed. Methods: We adapt the NIH model of scientific evaluations and sciences needed for effective translational research to apply to biodosimetry for triaging very large populations following a radiation event. We detail criteria for translating basic science about dosimetry into effective multi-stage triage of large populations and illustrate it by analyzing 3 current guidelines and 3 advanced methods for biodosimetry. Conclusions: This framework for evaluating dosimetry in large populations is a useful technique to compare the strengths and weaknesses of different dosimetry methods. It can help policy-makers and planners not only to compare the methods' strengths and weaknesses for their intended use but also to develop an integrated approach to maximize their effectiveness. It also reveals weaknesses in methods that would benefit from further research and evaluation.

  8. Proposal and Implementation of a Robust Sensing Method for DVB-T Signal

    Science.gov (United States)

    Song, Chunyi; Rahman, Mohammad Azizur; Harada, Hiroshi

    This paper proposes a sensing method for TV signals of the DVB-T standard to realize effective TV White Space (TVWS) communication. In the TVWS technology trial organized by the Infocomm Development Authority (iDA) of Singapore, detection of a DVB-T signal at a level of -120 dBm over an 8 MHz channel with a sensing time below 1 second is required. To fulfill such a strict sensing requirement, we propose a smart sensing method which combines feature detection and energy detection (CFED) and is characterized by dynamic threshold selection (DTS) based on a threshold table to improve sensing robustness against noise uncertainty. The DTS-based CFED (DTS-CFED) is evaluated by computer simulations and is also implemented in a hardware sensing prototype. The results show that the DTS-CFED achieves a detection probability above 0.9 for a target false alarm probability of 0.1 for DVB-T signals at a level of -120 dBm over an 8 MHz channel with a sensing time of 0.1 second.
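
    A sketch of the energy-detection half of such a detector with a mock threshold table: the decision threshold is selected from an estimated noise level rather than fixed. The signal model and table values are illustrative assumptions, not those of the DTS-CFED.

```python
import numpy as np

THRESHOLD_TABLE = {0.5e-3: 1.8, 1.0e-3: 1.5, 2.0e-3: 1.3}  # noise power -> factor

def detect(samples: np.ndarray, noise_power_est: float) -> bool:
    """Energy detector with a table-driven (dynamic) threshold."""
    key = min(THRESHOLD_TABLE, key=lambda p: abs(p - noise_power_est))
    threshold = THRESHOLD_TABLE[key] * noise_power_est
    return float(np.mean(samples ** 2)) > threshold

rng = np.random.default_rng(6)
noise = rng.normal(scale=np.sqrt(1e-3), size=8192)
weak_signal = 0.05 * np.sin(2 * np.pi * 0.01 * np.arange(8192))
print("noise only:    ", detect(noise, 1e-3))                  # expect False
print("signal present:", detect(noise + weak_signal, 1e-3))    # expect True
```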

  9. A comparative study on effective dynamic modeling methods for flexible pipe

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Chang Ho; Hong, Sup; Kim, Hyung Woo [Korea Research Institute of Ships and Ocean Engineering, Daejeon (Korea, Republic of); Kim, Sung Soo [Chungnam National University, Daejeon (Korea, Republic of)

    2015-07-15

    In this paper, in order to select a suitable method for the large-deflection, small-strain problem of pipe systems in the deep seabed mining system, the finite difference method with lumped mass from the field of cable dynamics and the substructure method from the field of flexible multibody dynamics were compared. Because of the difficulty of obtaining experimental results from an actual pipe system in the deep seabed mining system, a thin cantilever beam model with experimental results was employed for the comparative study. The accuracy of the methods was investigated by comparing the experimental results and the simulation results from the cantilever beam model with different numbers of elements. The efficiency of the methods was also examined by comparing the operation counts required for solving the equations of motion. Finally, this cantilever beam model and the comparative study results can serve as a benchmark problem for flexible multibody dynamics.

  10. A Comparative Evaluation of Methods for the Determination of ...

    African Journals Online (AJOL)

    Ultraviolet/Visible (UV/Vis) methods, a normal phase High Pressure Liquid Chromatographic (HPLC) method and a reverse phase HPLC method for vitamin A were compared and subsequently used to analyze samples of margarine, edible oil milk and milk drinks purchased from the Abule Egba and Oke Odo market in ...

  11. Comparative analysis of minor histocompatibility antigens genotyping methods

    Directory of Open Access Journals (Sweden)

    A. S. Vdovin

    2016-01-01

    A wide range of techniques can be employed to find mismatches in minor histocompatibility antigens between transplant recipients and their donors. In the current study we compared genotyping methods based on the polymerase chain reaction (PCR) for four minor antigens. Three of the tested methods (allele-specific PCR, restriction fragment length polymorphism, and real-time PCR with TaqMan probes) demonstrated 100% reliability when compared to Sanger sequencing for all of the studied polymorphisms. High resolution melting analysis was unsuitable for genotyping one of the tested minor antigens (HA-1), as it has a linked synonymous polymorphism. The data obtained could be used to select a strategy for large-scale clinical genotyping.

  12. Comparing methods for involving users in ideation

    DEFF Research Database (Denmark)

    Nicolajsen, Hanne Westh; Scupola, Ada; Sørensen, Flemming

    2015-01-01

    In this paper we discuss how users may be involved in the ideation phase of innovation. The study compares the use of a blog and three future workshops (students, employees and a mix of the two) in a library. Our study shows that the blog is efficient in giving the users voice, whereas the mixed workshop method (involving users and employees) is especially good at qualifying and further developing ideas. The findings suggest that methods for involving users in ideation should be carefully selected and combined to achieve optimum benefits and avoid potential disadvantages.

  13. Proposal and Evaluation of Management Method for College Mechatronics Education Applying the Project Management

    Science.gov (United States)

    Ando, Yoshinobu; Eguchi, Yuya; Mizukawa, Makoto

    In this research, we proposed and evaluated a management method for college mechatronics education that applies project management. We practiced our management method in the seminar "Microcomputer Seminar" for third-year students in the Department of Electrical Engineering, Shibaura Institute of Technology. We succeeded in the management of the Microcomputer Seminar in 2006 and obtained a good evaluation of our management method by means of a questionnaire.

  14. Comparative Analysis for Robust Penalized Spline Smoothing Methods

    Directory of Open Access Journals (Sweden)

    Bin Wang

    2014-01-01

    Smoothing noisy data is commonly encountered in the engineering domain, and robust penalized regression spline models are currently perceived to be the most promising methods for coping with this issue, due to their flexibility in capturing the nonlinear trends in the data and effectively alleviating the disturbance from outliers. Against this background, this paper conducts a thorough comparative analysis of two popular robust smoothing techniques, the M-type estimator and S-estimation for penalized regression splines, both of which are re-elaborated starting from their origins, with their derivation processes reformulated and the corresponding algorithms reorganized under a unified framework. The performance of the two estimators is thoroughly evaluated in terms of fitting accuracy, robustness, and execution time on the MATLAB platform. Elaborate comparative experiments demonstrate that robust penalized spline smoothing methods resist the noise effect better than the non-robust penalized LS spline regression method. Furthermore, the M-estimator performs stably only for observations with moderate perturbation error, whereas the S-estimator behaves fairly well even for heavily contaminated observations, at the cost of more execution time. These findings can serve as guidance for selecting the appropriate approach for smoothing noisy data.
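
    A compact robust penalized smoother in the spirit of the M-type estimator: a Whittaker-style second-difference penalty combined with Huber reweighting. This is a simplified numpy stand-in for the paper's penalized regression splines and MATLAB experiments, with illustrative tuning constants.

```python
import numpy as np

def robust_smooth(y: np.ndarray, lam: float = 50.0, c: float = 1.345,
                  n_iter: int = 20) -> np.ndarray:
    n = len(y)
    D = np.diff(np.eye(n), 2, axis=0)                  # second-difference operator
    P = lam * D.T @ D                                  # roughness penalty
    w = np.ones(n)
    z = y.copy()
    for _ in range(n_iter):                            # iteratively reweighted LS
        z = np.linalg.solve(np.diag(w) + P, w * y)
        r = y - z
        s = np.median(np.abs(r)) / 0.6745 + 1e-12      # robust scale (MAD)
        w = np.minimum(1.0, c / (np.abs(r) / s + 1e-12))   # Huber weights
    return z

rng = np.random.default_rng(7)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.1, 200)
y[[30, 90, 150]] += 4.0                                # gross outliers
print("max abs error:", np.abs(robust_smooth(y) - np.sin(2 * np.pi * x)).max())
```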

  15. Consensus of recommendations guiding comparative effectiveness research methods.

    Science.gov (United States)

    Morton, Jacob B; McConeghy, Robert; Heinrich, Kirstin; Gatto, Nicolle M; Caffrey, Aisling R

    2016-12-01

    Because of an increasing demand for quality comparative effectiveness research (CER), methods guidance documents have been published, such as those from the Agency for Healthcare Research and Quality (AHRQ) and the Patient-Centered Outcomes Research Institute (PCORI). Our objective was to identify CER methods guidance documents and compare them to produce a summary of important recommendations which could serve as a consensus of CER method recommendations. We conducted a systematic literature review to identify CER methods guidance documents published through 2014. Identified documents were analyzed for methods guidance recommendations. Individual recommendations were categorized to determine the degree of overlap. We identified nine methods guidance documents, which contained a total of 312 recommendations, 97% of which were present in two or more documents. All nine documents recommended transparency and adaptation for relevant stakeholders in the interpretation and dissemination of results. Other frequently shared CER methods recommendations included: study design and operational definitions should be developed a priori and allow for replication (n = 8 documents); focus on areas with gaps in current clinical knowledge that are relevant to decision-makers (n = 7); validity of measures, instruments, and data should be assessed and discussed (n = 7); outcomes, including benefits and harms, should be clinically meaningful, and objectively measured (n = 7). Assessment for and strategies to minimize bias (n = 6 documents), confounding (n = 6), and heterogeneity (n = 4) were also commonly shared recommendations between documents. We offer a field-consensus guide based on nine CER methods guidance documents that will aid researchers in designing CER studies and applying CER methods. Copyright © 2016 John Wiley & Sons, Ltd.

  16. COMPARING PARTICLE SIZE DISTRIBUTION ANALYSIS BY SEDIMENTATION AND LASER DIFFRACTION METHOD

    Directory of Open Access Journals (Sweden)

    Vito Ferro

    2009-06-01

    In this paper, a brief review of the laser diffraction (LD) method is first carried out. Then, for 30 soil samples with different texture classifications collected in a Sicilian basin, a comparison between the two techniques is developed. The analysis demonstrated that the sand content measured by the sieve-hydrometer (SH) method can be assumed equal to that determined by the LD technique, while the clay fraction measured by the SH method was overestimated with respect to the LD technique. Finally, a set of equations useful for relating LD measurements to the SH method is proposed.

  17. Method Points: towards a metric for method complexity

    Directory of Open Access Journals (Sweden)

    Graham McLeod

    1998-11-01

    A metric for method complexity is proposed as an aid to choosing between competing methods, as well as to validating the effects of method integration or the products of method engineering work. It is based upon a generic method representation model previously developed by the author and an adaptation of concepts used in the popular Function Point metric for system size. The proposed technique is illustrated by comparing two popular I.E. deliverables with counterparts in the object-oriented Unified Modeling Language (UML). The paper recommends ways to improve the practical adoption of new methods.

  18. A comparative study of sensor fault diagnosis methods based on observer for ECAS system

    Science.gov (United States)

    Xu, Xing; Wang, Wei; Zou, Nannan; Chen, Long; Cui, Xiaoli

    2017-03-01

    The performance and practicality of an electronically controlled air suspension (ECAS) system are highly dependent on the state information supplied by various sensors, but sensor faults occur frequently. Based on a non-linearized 3-DOF 1/4 vehicle model, different methods of fault detection and isolation (FDI) are used to diagnose sensor faults in the ECAS system. The considered approaches include an extended Kalman filter (EKF) with a concise algorithm, a strong tracking filter (STF) with robust tracking ability, and the cubature Kalman filter (CKF) with high numerical precision. We use the three filters, EKF, STF, and CKF, to design state observers for the ECAS system under typical sensor faults and noise. Results show that all three approaches can successfully detect and isolate faults despite the presence of environmental noise, although the FDI time delay and fault sensitivity of the algorithms differ; compared with EKF and STF, the CKF method performs best in FDI of sensor faults for the ECAS system.
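
    Observer-based sensor FDI in miniature: a linear Kalman filter tracks a toy constant-velocity state, and a fault is flagged when the normalized innovation exceeds a 3-sigma bound. This stands in for the EKF/STF/CKF observers on the 3-DOF suspension model, which are substantially more elaborate.

```python
import numpy as np

F = np.array([[1.0, 0.1], [0.0, 1.0]])    # state transition (position, velocity)
H = np.array([[1.0, 0.0]])                # position sensor
Q = 1e-4 * np.eye(2)                      # process noise covariance
R = np.array([[1e-2]])                    # measurement noise covariance

rng = np.random.default_rng(8)
x, P = np.zeros(2), np.eye(2)             # filter state and covariance
truth = np.zeros(2)
for k in range(100):
    truth = F @ truth + rng.normal(0.0, 1e-2, 2)
    z = H @ truth + rng.normal(0.0, 0.1, 1)
    if k == 60:
        z = z + 2.0                       # injected sensor spike fault
    x, P = F @ x, F @ P @ F.T + Q         # predict
    nu = z - H @ x                        # innovation (residual)
    S = H @ P @ H.T + R                   # innovation covariance
    if nu[0] ** 2 / S[0, 0] > 9.0:        # ~3-sigma residual test
        print(f"step {k}: sensor fault flagged")
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain and update
    x = x + K @ nu
    P = (np.eye(2) - K @ H) @ P
```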

  19. Comparative study on the welded structure fatigue strength assessment method

    Science.gov (United States)

    Hu, Tao

    2018-04-01

    Because welded structures are widely applied in various industries, especially pressure vessels, motorcycles, automobiles, aviation and shipbuilding, as well as large crane steel structures, the evaluation of welded structure fatigue strength is particularly important. There are four main methods for evaluating the fatigue strength of welded structures; the two most frequently used, the nominal stress method and the hot spot stress evaluation method, are compared here. Their principles and calculation procedures are analyzed and studied, their similarities are identified, and their advantages and disadvantages are compared, to provide a reference for every profession and trade and an outlook on future methods for evaluating the fatigue strength and life of welded structures.

  20. Comparability of river suspended-sediment sampling and laboratory analysis methods

    Science.gov (United States)

    Groten, Joel T.; Johnson, Gregory D.

    2018-03-06

    Accurate measurements of suspended sediment, a leading water-quality impairment in many Minnesota rivers, are important for managing and protecting water resources; however, water-quality standards for suspended sediment in Minnesota are based on grab field sampling and total suspended solids (TSS) laboratory analysis methods that have underrepresented concentrations of suspended sediment in rivers compared to U.S. Geological Survey equal-width-increment or equal-discharge-increment (EWDI) field sampling and suspended sediment concentration (SSC) laboratory analysis methods. Because of this underrepresentation, the U.S. Geological Survey, in collaboration with the Minnesota Pollution Control Agency, collected concurrent grab and EWDI samples at eight sites to compare results obtained using different combinations of field sampling and laboratory analysis methods. Study results determined that grab field sampling and TSS laboratory analysis results were biased substantially low compared to EWDI sampling and SSC laboratory analysis results, respectively. Differences in both field sampling and laboratory analysis methods caused grab and TSS methods to be biased substantially low. The difference in laboratory analysis methods was slightly greater than field sampling methods. Sand-sized particles had a strong effect on the comparability of the field sampling and laboratory analysis methods. These results indicated that grab field sampling and TSS laboratory analysis methods fail to capture most of the sand being transported by the stream. The results indicate there is less of a difference among samples collected with grab field sampling and analyzed for TSS and concentration of fines in SSC. Even though differences are present, the presence of strong correlations between SSC and TSS concentrations provides the opportunity to develop site specific relations to address transport processes not captured by grab field sampling and TSS laboratory analysis methods.

  1. Alignment methods: strategies, challenges, benchmarking, and comparative overview.

    Science.gov (United States)

    Löytynoja, Ari

    2012-01-01

    Comparative evolutionary analyses of molecular sequences are solely based on the identities and differences detected between homologous characters. Errors in this homology statement, that is errors in the alignment of the sequences, are likely to lead to errors in the downstream analyses. Sequence alignment and phylogenetic inference are tightly connected and many popular alignment programs use the phylogeny to divide the alignment problem into smaller tasks. They then neglect the phylogenetic tree, however, and produce alignments that are not evolutionarily meaningful. The use of phylogeny-aware methods reduces the error but the resulting alignments, with evolutionarily correct representation of homology, can challenge the existing practices and methods for viewing and visualising the sequences. The inter-dependency of alignment and phylogeny can be resolved by joint estimation of the two; methods based on statistical models allow for inferring the alignment parameters from the data and correctly take into account the uncertainty of the solution but remain computationally challenging. Widely used alignment methods are based on heuristic algorithms and unlikely to find globally optimal solutions. The whole concept of one correct alignment for the sequences is questionable, however, as there typically exist vast numbers of alternative, roughly equally good alignments that should also be considered. This uncertainty is hidden by many popular alignment programs and is rarely correctly taken into account in the downstream analyses. The quest for finding and improving the alignment solution is complicated by the lack of suitable measures of alignment goodness. The difficulty of comparing alternative solutions also affects benchmarks of alignment methods and the results strongly depend on the measure used. As the effects of alignment error cannot be predicted, comparing the alignments' performance in downstream analyses is recommended.
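
    The dynamic-programming core that progressive and phylogeny-aware aligners build on is classic global pairwise alignment (Needleman-Wunsch). A self-contained sketch with illustrative scores (match +1, mismatch -1, gap -2) follows; real aligners use substitution matrices and affine gap penalties.

```python
def needleman_wunsch(a: str, b: str, match=1, mismatch=-1, gap=-2):
    n, m = len(a), len(b)
    S = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        S[i][0] = i * gap
    for j in range(1, m + 1):
        S[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            S[i][j] = max(S[i - 1][j - 1] + sub,
                          S[i - 1][j] + gap, S[i][j - 1] + gap)
    # traceback from the bottom-right corner
    out_a, out_b, i, j = [], [], n, m
    while i > 0 or j > 0:
        sub = match if i > 0 and j > 0 and a[i - 1] == b[j - 1] else mismatch
        if i > 0 and j > 0 and S[i][j] == S[i - 1][j - 1] + sub:
            out_a.append(a[i - 1]); out_b.append(b[j - 1]); i -= 1; j -= 1
        elif i > 0 and S[i][j] == S[i - 1][j] + gap:
            out_a.append(a[i - 1]); out_b.append("-"); i -= 1
        else:
            out_a.append("-"); out_b.append(b[j - 1]); j -= 1
    return "".join(reversed(out_a)), "".join(reversed(out_b)), S[n][m]

print(*needleman_wunsch("GATTACA", "GCATGCU"))
```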

  2. Proposal for outline of training and evaluation method for non-technical skills

    International Nuclear Information System (INIS)

    Nagasaka, Akihiko; Shibue, Hisao

    2015-01-01

    The purpose of this study is to systematize measures for the improvement of emergency response capability with a focus on non-technical skills. As a result of investigating emergency training at nuclear power plants and referring to CRM training, the following two issues were identified: 1) lack of a practical training method for the improvement of non-technical skills; 2) lack of an evaluation method for non-technical skills. Then, on the premise that the seven non-technical skills 'situational awareness', 'decision making', 'communication', 'teamworking', 'leadership', 'managing stress' and 'coping with fatigue' are factors that promote emergency response capability, we propose a practical training method for each non-technical skill. We also give examples of behavioral markers as evaluation factors and indicate approaches to introducing the evaluation method for non-technical skills. (author)

  3. Characterisation of PV CIS module by artificial neural networks. A comparative study with other methods

    International Nuclear Information System (INIS)

    Almonacid, F.; Rus, C.; Hontoria, L.; Munoz, F.J.

    2010-01-01

    The presence of PV modules made with new technologies and materials is increasing in the PV market, especially thin film solar modules (TFSM). They are ready to make a substantial contribution to the world's electricity generation. Although Si wafer-based cells account for most of the increase, thin film technologies have shown the greatest growth in the last three years; during 2007 they grew by 133%. On the other hand, manufacturers provide ratings for PV modules for conditions referred to as Standard Test Conditions (STC). However, these conditions rarely occur outdoors, so the usefulness and applicability of indoor characterisation of PV modules under standard test conditions is a controversial issue. Therefore, to carry out correct photovoltaic engineering, a suitable characterisation of PV module electrical behaviour is necessary. The IDEA Research Group of Jaen University has developed a method based on artificial neural networks (ANNs) for the electrical characterisation of PV modules. An ANN was able to generate V-I curves of Si-crystalline PV modules for any irradiance and module cell temperature. The results show that the proposed ANN provides accurate predictions of Si-crystalline PV module performance when compared with measured values. This method is now being applied to the electrical characterisation of PV CIS modules. Finally, a comparative study with other electrical characterisation methods is carried out. (author)
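
    A sketch of the ANN idea with scikit-learn: learn module current from voltage, irradiance and cell temperature on synthetic single-diode data. The network architecture, diode constants and training data are assumptions for illustration, not the IDEA group's trained model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(9)
G = rng.uniform(200.0, 1000.0, 2000)     # irradiance (W/m2)
T = rng.uniform(15.0, 65.0, 2000)        # cell temperature (C)
V = rng.uniform(0.0, 22.0, 2000)         # module voltage (V)
i_ph = 5.0 * G / 1000.0 * (1.0 + 0.0005 * (T - 25.0))          # photocurrent
I = np.clip(i_ph - 1e-6 * (np.exp(V / 1.8) - 1.0), 0.0, None)  # crude diode model

X = np.column_stack([V, G, T])
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(20, 20),
                                 max_iter=2000, random_state=0))
ann.fit(X, I)
print("predicted I at V=10 V, G=800 W/m2, T=40 C:",
      ann.predict([[10.0, 800.0, 40.0]])[0])
```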

  4. A proposed impact assessment method for genetically modified plants (AS-GMP Method)

    International Nuclear Information System (INIS)

    Jesus-Hitzschky, Katia Regina Evaristo de; Silveira, Jose Maria F.J. da

    2009-01-01

    An essential step in the development of products based on biotechnology is an assessment of their potential economic impacts and safety, including an evaluation of the potential impact of transgenic crops and practices related to their cultivation on the environment and human or animal health. The purpose of this paper is to provide an assessment method to evaluate the impact of biotechnologies that uses quantifiable parameters and allows a comparative analysis between conventional technology and technologies using GMOs. This paper introduces a method to perform an impact analysis associated with the commercial release and use of genetically modified plants, the Assessment System GMP Method. The assessment is performed through indicators that are arranged according to their dimension criterion: environmental, economic, social, capability and institutional approach. To perform an accurate evaluation of the GMP, specific indicators related to genetic modification are grouped into common fields: genetic insert features, GM plant features, gene flow, food/feed field, introduction of the GMP, unexpected occurrences and specific indicators. The novelty is the possibility of including specific parameters for the biotechnology under assessment. In this case-by-case analysis, the moderation factors and the indexes are parameterized to make the assessment feasible.

  5. Comparative study of age estimation using dentinal translucency by digital and conventional methods

    Science.gov (United States)

    Bommannavar, Sushma; Kulkarni, Meena

    2015-01-01

    Introduction: Estimating age using the dentition plays a significant role in the identification of individuals in forensic cases. Teeth are among the most durable and strongest structures in the human body. The morphology and arrangement of teeth vary from person to person and are unique to an individual, as are fingerprints. Therefore, the use of the dentition is the method of choice in the identification of the unknown. Root dentin translucency is considered to be one of the best parameters for dental age estimation. Traditionally, root dentin translucency was measured using calipers; recently, the use of software programs has been proposed for the same purpose. Objectives: The present study describes a method to measure root dentin translucency on sectioned teeth using the software program Adobe Photoshop 7.0 (Adobe Systems Inc., Mountain View, California). Materials and Methods: A total of 50 single-rooted teeth were sectioned longitudinally to a uniform thickness of 0.25 mm, and root dentin translucency was measured using the digital and caliper methods and compared, following the Gustafson morphohistologic approach. Results: Correlation coefficients of translucency measurements to age were statistically significant for both methods (P < 0.125), and linear regression equations derived from both methods revealed a better ability of the digital method to assess age. Conclusion: The software program used in the present study is commercially available and widely used image-editing software. Furthermore, this method is easy to use and less time consuming, and the measurements obtained are more precise, thus helping in more accurate age estimation. Considering these benefits, the present study recommends the use of the digital method to assess translucency for age estimation. PMID:25709325

  6. Comparing DIF methods for data with dual dependency

    Directory of Open Access Journals (Sweden)

    Ying Jin

    2016-09-01

    Full Text Available Abstract Background The current study compared four differential item functioning (DIF) methods to examine their performance in accounting for dual dependency (i.e., person and item clustering effects simultaneously) by means of a simulation study, an issue not sufficiently studied in the current DIF literature. The four methods compared are logistic regression accounting for neither person nor item clustering effects, hierarchical logistic regression accounting for the person clustering effect, the testlet model accounting for the item clustering effect, and the multilevel testlet model accounting for both person and item clustering effects. The secondary goal of the current study was to evaluate the trade-off between simple models and complex models for the accuracy of DIF detection. An empirical example analyzing the 2011 TIMSS Mathematics data was also included to demonstrate the differential performances of the four DIF methods. A number of DIF analyses have been done on the TIMSS data, and rarely have these analyses accounted for the dual dependence of the data. Results Results indicated the complex models did not outperform simple models under certain conditions, especially when DIF parameters were considered in addition to significance tests. Conclusions Results of the current study could provide supporting evidence for applied researchers in selecting the appropriate DIF methods under various conditions.
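
    As a concrete illustration of the simplest of the four approaches, a likelihood-ratio test for uniform DIF with ordinary logistic regression (ignoring both clustering effects), on simulated data with assumed effect sizes:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)

        # Simulated data: 500 examinees, binary response to one studied item,
        # 'ability' proxied by a rest-score, 'group' is the focal/reference flag.
        n = 500
        ability = rng.normal(size=n)
        group = rng.integers(0, 2, size=n)          # 0 = reference, 1 = focal
        # Inject uniform DIF: the focal group finds the item harder by 0.6 logits.
        logit = 1.2 * ability - 0.6 * group
        response = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

        # Compare a no-DIF model against a model with a group main effect;
        # a significant likelihood-ratio statistic flags uniform DIF.
        X0 = sm.add_constant(np.column_stack([ability]))
        X1 = sm.add_constant(np.column_stack([ability, group]))
        m0 = sm.Logit(response, X0).fit(disp=False)
        m1 = sm.Logit(response, X1).fit(disp=False)

        lr_stat = 2 * (m1.llf - m0.llf)   # ~ chi-square with 1 df under H0
        print(f"LR statistic for uniform DIF: {lr_stat:.2f}")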

  7. The comparative method of language acquisition research: a Mayan case study.

    Science.gov (United States)

    Pye, Clifton; Pfeiler, Barbara

    2014-03-01

    This article demonstrates how the Comparative Method can be applied to cross-linguistic research on language acquisition. The Comparative Method provides a systematic procedure for organizing and interpreting acquisition data from different languages. The Comparative Method controls for cross-linguistic differences at all levels of the grammar and is especially useful in drawing attention to variation in contexts of use across languages. This article uses the Comparative Method to analyze the acquisition of verb suffixes in two Mayan languages: K'iche' and Yucatec. Mayan status suffixes simultaneously mark distinctions in verb transitivity, verb class, mood, and clause position. Two-year-old children acquiring K'iche' and Yucatec Maya accurately produce the status suffixes on verbs, in marked distinction to the verbal prefixes for aspect and agreement. We find evidence that the contexts of use for the suffixes differentially promote the children's production of cognate status suffixes in K'iche' and Yucatec.

  8. A method proposal for cumulative environmental impact assessment based on the landscape vulnerability evaluation

    International Nuclear Information System (INIS)

    Pavlickova, Katarina; Vyskupova, Monika

    2015-01-01

    Cumulative environmental impact assessment is only occasionally used in practical environmental impact assessment processes. The main reasons are the difficulty of cumulative impact identification caused by lack of data, the inability to measure the intensity and spatial effect of all types of impacts, and the uncertainty of their future evolution. This work presents a method proposal to predict cumulative impacts on the basis of landscape vulnerability evaluation. For this purpose, a qualitative assessment of landscape ecological stability is conducted and major vulnerability indicators of environmental and socio-economic receptors are specified and valuated. Potential cumulative impacts and the overall impact significance are predicted quantitatively in modified Argonne multiple matrixes while considering the vulnerability of affected landscape receptors and the significance of impacts identified individually. The method was employed in a concrete environmental impact assessment process conducted in Slovakia. The results obtained in this case study reflect that this methodology is simple to apply, valid for all types of impacts and projects, inexpensive and not time-consuming. The objectivity of the partial methods used in this procedure is improved by quantitative landscape ecological stability evaluation, assignment of weights to vulnerability indicators based on the detailed characteristics of affected factors, and grading of impact significance. - Highlights: • This paper suggests a method proposal for cumulative impact prediction. • The method includes landscape vulnerability evaluation. • The vulnerability of affected receptors is determined by their sensitivity. • This method can increase the objectivity of impact prediction in the EIA process
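
    The vulnerability-weighted significance calculation described above can be illustrated with a small matrix computation. A minimal sketch; all scores, receptors and weights are hypothetical, not the paper's calibrated values:

        import numpy as np

        # Hypothetical significance scores of individual impacts (rows: project
        # activities, columns: affected landscape receptors), graded e.g. 1-5.
        impacts = np.array([
            [2, 3, 1],   # site preparation
            [4, 2, 2],   # construction
            [1, 1, 3],   # operation
        ])

        # Vulnerability weights of the receptors (e.g. soil, water, fauna),
        # derived here from an assumed sensitivity grading, normalised to sum to 1.
        vulnerability = np.array([0.5, 0.3, 0.2])

        # Vulnerability-weighted cumulative impact per receptor and overall.
        weighted = impacts * vulnerability          # element-wise scaling
        per_receptor = weighted.sum(axis=0)
        overall_significance = per_receptor.sum()

        print("cumulative impact per receptor:", per_receptor)
        print(f"overall significance index: {overall_significance:.2f}")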

  9. Development of multilateral comparative evaluation method for fuel cycle system

    International Nuclear Information System (INIS)

    Tamaki, Hitoshi; Ikushima, Takeshi; Nomura, Yasushi; Nakajima, Kiyoshi.

    1998-03-01

    In the near future, the Japanese nuclear fuel cycle system will be promoted by national nuclear energy policy, and its options, i.e. once-through, thermal cycle and fast breeder cycle, must be selected by a multilateral comparative evaluation method from various aspects such as safety, society and economy. Such a problem can therefore be recognized as a social decision-making problem and addressed with AHP (Analytic Hierarchy Process), which can evaluate the problem multilaterally and comparatively. For comparative evaluation, much information is needed for decision making, so two databases holding this information have been constructed. The multilateral comparative evaluation method, consisting of the two databases and AHP, has then been developed for optimum selection of the fuel cycle system option. (author)
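
    As an illustration of the AHP step, a minimal sketch computing priority weights and a consistency ratio from one hypothetical pairwise comparison matrix (the option names and judgments are assumptions, not the study's data):

        import numpy as np

        # Hypothetical pairwise comparison matrix for three fuel cycle options
        # (once-through, thermal recycle, fast breeder) under one criterion,
        # using Saaty's 1-9 scale; A[i, j] is how strongly option i beats j.
        A = np.array([
            [1.0, 3.0, 5.0],
            [1/3, 1.0, 2.0],
            [1/5, 1/2, 1.0],
        ])

        # AHP priorities are the principal eigenvector of A, normalised to sum to 1.
        eigvals, eigvecs = np.linalg.eig(A)
        principal = np.argmax(eigvals.real)
        weights = np.abs(eigvecs[:, principal].real)
        weights /= weights.sum()

        # Consistency ratio: CR = (lambda_max - n) / (n - 1) / RI, with Saaty's
        # random index RI = 0.58 for n = 3; CR < 0.1 is conventionally acceptable.
        n = A.shape[0]
        lambda_max = eigvals.real[principal]
        cr = (lambda_max - n) / (n - 1) / 0.58

        print("priority weights:", np.round(weights, 3))
        print(f"consistency ratio: {cr:.3f}")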

  10. Comparative methods for PET image segmentation in pharyngolaryngeal squamous cell carcinoma

    NARCIS (Netherlands)

    Zaidi, Habib; Abdoli, Mehrsima; Fuentes, Carolina Llina; El Naqa, Issam M.

    Several methods have been proposed for the segmentation of F-18-FDG uptake in PET. In this study, we assessed the performance of four categories of F-18-FDG PET image segmentation techniques in pharyngolaryngeal squamous cell carcinoma using clinical studies where the surgical specimen served as the

  11. Proposed waste form performance criteria and testing methods for low-level mixed waste

    International Nuclear Information System (INIS)

    Franz, E.M.; Fuhrmann, M.; Bowerman, B.

    1995-01-01

    Proposed waste form performance criteria and testing methods were developed as guidance in judging the suitability of solidified waste as a physico-chemical barrier to releases of radionuclides and RCRA regulated hazardous components. The criteria follow from the assumption that release of contaminants by leaching is the single most important property for judging the effectiveness of a waste form. A two-tier regimen is proposed. The first tier consists of a leach test designed to determine the net, forward leach rate of the solidified waste and a leach test required by the Environmental Protection Agency (EPA). The second tier of tests is to determine if a set of stresses (i.e., radiation, freeze-thaw, wet-dry cycling) on the waste form adversely impacts its ability to retain contaminants and remain physically intact. In the absence of site-specific performance assessments (PA), two generic modeling exercises are described which were used to calculate proposed acceptable leachates

  12. Comparative analysis of gradient-field-based orientation estimation methods and regularized singular-value decomposition for fringe pattern processing.

    Science.gov (United States)

    Sun, Qi; Fu, Shujun

    2017-09-20

    Fringe orientation is an important feature of fringe patterns and has a wide range of applications such as guiding fringe pattern filtering, phase unwrapping, and abstraction. Estimating fringe orientation is a basic task for subsequent processing of fringe patterns. However, various noise, singular and obscure points, and orientation data degeneration lead to inaccurate calculations of fringe orientation. Thus, to deepen the understanding of orientation estimation and to better guide orientation estimation in fringe pattern processing, some advanced gradient-field-based orientation estimation methods are compared and analyzed. At the same time, following the ideas of smoothing regularization and computing of bigger gradient fields, a regularized singular-value decomposition (RSVD) technique is proposed for fringe orientation estimation. To compare the performance of these gradient-field-based methods, quantitative results and visual effect maps of orientation estimation are given on simulated and real fringe patterns that demonstrate that the RSVD produces the best estimation results at a cost of relatively less time.
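
    As a baseline illustration of gradient-field-based orientation estimation, a minimal structure-tensor sketch (this is the classic estimator, not the paper's RSVD variant):

        import numpy as np

        def fringe_orientation(img: np.ndarray, win: int = 8) -> np.ndarray:
            """Estimate local fringe orientation from the image gradient field.

            Classic structure-tensor estimator: the dominant gradient direction
            in each window is the leading eigenvector of the summed outer
            products of the gradients; the fringe orientation is perpendicular
            to it. Angles are folded into [0, pi).
            """
            gy, gx = np.gradient(img.astype(float))
            h, w = img.shape
            theta = np.zeros((h // win, w // win))
            for i in range(0, h - win + 1, win):
                for j in range(0, w - win + 1, win):
                    gxw = gx[i:i + win, j:j + win].ravel()
                    gyw = gy[i:i + win, j:j + win].ravel()
                    # 2x2 structure tensor accumulated over the window
                    J = np.array([[gxw @ gxw, gxw @ gyw],
                                  [gxw @ gyw, gyw @ gyw]])
                    _, vecs = np.linalg.eigh(J)
                    vx, vy = vecs[:, 1]            # dominant gradient direction
                    theta[i // win, j // win] = (np.arctan2(vy, vx) + np.pi / 2) % np.pi
            return theta

        # Synthetic vertical fringes: orientation should come out near pi/2.
        x = np.linspace(0, 8 * np.pi, 128)
        pattern = np.cos(np.tile(x, (128, 1)))
        print(fringe_orientation(pattern)[0, 0])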

  13. Comparative study of methods for potential and actual evapotranspiration determination

    International Nuclear Information System (INIS)

    Kolev, B.

    2004-01-01

    Two types of methods for determining potential and actual evapotranspiration were compared. The first type includes neutron gauge, tensiometers, gypsum blocks and lysimeters; the actual and potential evapotranspiration were calculated by the water balance equation. The second type of methods used a simulation model for all calculations. The aim of this study was not only to compare and evaluate the methods used; it was mainly aimed at calculating water use efficiency and the transpiration coefficient in a potential production situation. This makes it possible to choose the best way to optimize water consumption for a given crop. The final results obtained with the best of the methods could be used for applying the principles of sustainable agriculture in any region of Bulgarian territory. (author)

  14. Early detection of pharmacovigilance signals with automated methods based on false discovery rates: a comparative study.

    Science.gov (United States)

    Ahmed, Ismaïl; Thiessard, Frantz; Miremont-Salamé, Ghada; Haramburu, Françoise; Kreft-Jais, Carmen; Bégaud, Bernard; Tubert-Bitter, Pascale

    2012-06-01

    Improving the detection of drug safety signals has led several pharmacovigilance regulatory agencies to incorporate automated quantitative methods into their spontaneous reporting management systems. The three largest worldwide pharmacovigilance databases are routinely screened by the lower bound of the 95% confidence interval of proportional reporting ratio (PRR₀₂.₅), the 2.5% quantile of the Information Component (IC₀₂.₅) or the 5% quantile of the Gamma Poisson Shrinker (GPS₀₅). More recently, Bayesian and non-Bayesian False Discovery Rate (FDR)-based methods were proposed that address the arbitrariness of thresholds and allow for a built-in estimate of the FDR. These methods were also shown through simulation studies to be interesting alternatives to the currently used methods. The objective of this work was twofold. Based on an extensive retrospective study, we compared PRR₀₂.₅, GPS₀₅ and IC₀₂.₅ with two FDR-based methods derived from the Fisher's exact test and the GPS model (GPS(pH0) [posterior probability of the null hypothesis H₀ calculated from the Gamma Poisson Shrinker model]). Secondly, restricting the analysis to GPS(pH0), we aimed to evaluate the added value of using automated signal detection tools compared with 'traditional' methods, i.e. non-automated surveillance operated by pharmacovigilance experts. The analysis was performed sequentially, i.e. every month, and retrospectively on the whole French pharmacovigilance database over the period 1 January 1996-1 July 2002. Evaluation was based on a list of 243 reference signals (RSs) corresponding to investigations launched by the French Pharmacovigilance Technical Committee (PhVTC) during the same period. The comparison of detection methods was made on the basis of the number of RSs detected as well as the time to detection. Results comparing the five automated quantitative methods were in favour of GPS(pH0) in terms of both number of detections of true signals and
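
    The FDR-based detection idea can be illustrated with a minimal sketch combining Fisher's exact test with Benjamini-Hochberg control over hypothetical drug-event contingency tables (the GPS-based variants used in the study are not reproduced):

        import numpy as np
        from scipy.stats import fisher_exact

        # Hypothetical spontaneous-report counts for a set of drug-event pairs.
        # Each row is (a, b, c, d) of the 2x2 table: reports with the drug and
        # the event, drug without the event, event without the drug, neither.
        tables = [
            (20, 480, 100, 99400),
            (3, 997, 300, 98700),
            (15, 185, 500, 99300),
        ]

        # One-sided Fisher exact p-value per pair (testing over-reporting).
        pvals = np.array([
            fisher_exact([[a, b], [c, d]], alternative="greater")[1]
            for a, b, c, d in tables
        ])

        # Benjamini-Hochberg step-up procedure: flag pairs at FDR <= 5%.
        order = np.argsort(pvals)
        m = len(pvals)
        thresholds = 0.05 * (np.arange(1, m + 1) / m)
        passed = pvals[order] <= thresholds
        k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
        signals = order[:k]

        print("p-values:", np.round(pvals, 5))
        print("pairs flagged as signals:", signals)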

  15. Comparative Study on Two Melting Simulation Methods: Melting Curve of Gold

    International Nuclear Information System (INIS)

    Liu Zhong-Li; Li Rui; Sun Jun-Sheng; Zhang Xiu-Lu; Cai Ling-Cang

    2016-01-01

    Melting simulation methods are of crucial importance to determining the melting temperature of materials efficiently. A high-efficiency melting simulation method saves much simulation time and computational resources. To compare the efficiency of our newly developed shock melting (SM) method with that of the well-established two-phase (TP) method, we calculate the high-pressure melting curve of Au using the two methods based on the optimally selected interatomic potentials. Although we only use 640 atoms to determine the melting temperature of Au in the SM method, the resulting melting curve accords very well with the results from the TP method using many more atoms. This shows that a much smaller system size in the SM method can still achieve a fully converged melting curve compared with the TP method, implying the robustness and efficiency of the SM method. (paper)

  16. Proposal of evaluation method of tsunami wave pressure using 2D depth-integrated flow simulation

    International Nuclear Information System (INIS)

    Arimitsu, Tsuyoshi; Ooe, Kazuya; Kawasaki, Koji

    2012-01-01

    To design and construct land structures resistant to tsunami forces, it is essential to evaluate tsunami pressure quantitatively. The existing hydrostatic formula, in general, tends to underestimate tsunami wave pressure under inundation flows with large Froude numbers. An estimation method for the tsunami pressure acting on a land structure was proposed using the inundation depth and horizontal velocity at the front of the structure, which were calculated employing a 2D depth-integrated flow model based on an unstructured grid system. The comparison between the numerical and experimental results revealed that the proposed method could reasonably reproduce the vertical distribution of the maximum tsunami pressure as well as the time variation of the tsunami pressure exerted on the structure. (author)
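
    The record does not give the calibrated pressure formula, so the following sketch only illustrates the generic idea of estimating a wall pressure profile from inundation depth and front velocity: a hydrostatic part plus a stagnation-type dynamic term. The form of the dynamic term and all coefficients are assumptions, not the authors' method:

        import numpy as np

        RHO = 1030.0   # seawater density, kg/m^3
        G = 9.81       # gravity, m/s^2

        def tsunami_pressure(z: np.ndarray, depth: float, velocity: float) -> np.ndarray:
            """Illustrative pressure profile on a wall at heights z (m):
            hydrostatic part plus a dynamic (stagnation) term where submerged.
            A generic combination, not the paper's calibrated formula."""
            hydrostatic = RHO * G * np.clip(depth - z, 0.0, None)
            dynamic = 0.5 * RHO * velocity**2
            return hydrostatic + np.where(z <= depth, dynamic, 0.0)

        # Example: 2 m inundation depth, 4 m/s front velocity, heights 0-3 m.
        z = np.linspace(0.0, 3.0, 7)
        print(np.round(tsunami_pressure(z, depth=2.0, velocity=4.0) / 1000, 1), "kPa")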

  17. A Comparative Analysis of Three Proposed Federal Renewable Electricity Standards

    Energy Technology Data Exchange (ETDEWEB)

    Sullivan, Patrick [National Renewable Energy Lab. (NREL), Golden, CO (United States); Logan, Jeffrey [National Renewable Energy Lab. (NREL), Golden, CO (United States); Bird, Lori [National Renewable Energy Lab. (NREL), Golden, CO (United States); Short, Walter [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2009-05-01

    This paper analyzes potential impacts of proposed national renewable electricity standard (RES) legislation. An RES is a mandate requiring certain electricity retailers to provide a minimum share of their electricity sales from qualifying renewable power generation. The analysis focuses on draft bills introduced individually by Senator Jeff Bingaman and Representative Edward Markey, and jointly by Representative Henry Waxman and Markey. The analysis uses NREL's Regional Energy Deployment System (ReEDS) model to evaluate the impacts of the proposed RES requirements on the U.S. energy sector in four scenarios.

  18. Comparing the normalization methods for the differential analysis of Illumina high-throughput RNA-Seq data.

    Science.gov (United States)

    Li, Peipei; Piao, Yongjun; Shon, Ho Sun; Ryu, Keun Ho

    2015-10-28

    Recently, rapid improvements in technology and decreases in sequencing costs have made RNA-Seq a widely used technique to quantify gene expression levels. Various normalization approaches have been proposed, owing to the importance of normalization in the analysis of RNA-Seq data. A comparison of recently proposed normalization methods is required to generate suitable guidelines for the selection of the most appropriate approach for future experiments. In this paper, we compared eight non-abundance (RC, UQ, Med, TMM, DESeq, Q, RPKM, and ERPKM) and two abundance estimation normalization methods (RSEM and Sailfish). The experiments were based on real Illumina high-throughput RNA-Seq of 35- and 76-nucleotide sequences produced in the MAQC project and simulation reads. Reads were mapped to the human genome obtained from the UCSC Genome Browser Database. For precise evaluation, we investigated the Spearman correlation between the normalization results from RNA-Seq and MAQC qRT-PCR values for 996 genes. Based on this work, we showed that out of the eight non-abundance estimation normalization methods, RC, UQ, Med, TMM, DESeq, and Q gave similar normalization results for all data sets. For RNA-Seq of a 35-nucleotide sequence, RPKM showed the highest correlation results, but for RNA-Seq of a 76-nucleotide sequence it showed the lowest correlation among the methods. ERPKM did not improve on RPKM. Between the two abundance estimation normalization methods, for RNA-Seq of a 35-nucleotide sequence, higher correlation was obtained with Sailfish than with RSEM, which was better than not using abundance estimation methods. However, for RNA-Seq of a 76-nucleotide sequence, the results achieved by RSEM were similar to those without abundance estimation, and were much better than those with Sailfish. Furthermore, we found that adding a poly-A tail increased alignment numbers, but did not improve normalization results. Spearman correlation analysis revealed that RC, UQ
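
    Of the methods compared above, RPKM is the simplest to state. A minimal sketch of the standard RPKM computation on toy counts:

        import numpy as np

        def rpkm(counts: np.ndarray, gene_lengths_bp: np.ndarray) -> np.ndarray:
            """Reads Per Kilobase of transcript per Million mapped reads.

            counts: raw read counts per gene for one sample.
            gene_lengths_bp: gene (or transcript) lengths in base pairs.
            """
            per_million = counts.sum() / 1e6           # sequencing-depth scaling
            per_kb = gene_lengths_bp / 1e3             # gene-length scaling
            return counts / per_million / per_kb

        # Toy example: three genes, one sample.
        counts = np.array([500, 1200, 300], dtype=float)
        lengths = np.array([2000, 4000, 1000], dtype=float)
        print(np.round(rpkm(counts, lengths), 2))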

  19. Comparative study on gene set and pathway topology-based enrichment methods.

    Science.gov (United States)

    Bayerlová, Michaela; Jung, Klaus; Kramer, Frank; Klemm, Florian; Bleckmann, Annalen; Beißbarth, Tim

    2015-10-22

    Enrichment analysis is a popular approach to identify pathways or sets of genes which are significantly enriched in the context of differentially expressed genes. The traditional gene set enrichment approach considers a pathway as a simple gene list, disregarding any knowledge of gene or protein interactions. In contrast, the new group of so-called pathway topology-based methods integrates the topological structure of a pathway into the analysis. We comparatively investigated gene set and pathway topology-based enrichment approaches, considering three gene set and four topological methods. These methods were compared in two extensive simulation studies and on a benchmark of 36 real datasets, providing the same pathway input data for all methods. In the benchmark data analysis both types of methods showed a comparable ability to detect enriched pathways. The first simulation study was conducted with KEGG pathways, which showed considerable gene overlaps between each other. In this study with original KEGG pathways, none of the topology-based methods outperformed the gene set approach. Therefore, a second simulation study was performed on non-overlapping pathways created by unique gene IDs. Here, methods accounting for pathway topology reached higher accuracy than the gene set methods; however, their sensitivity was lower. We conducted one of the first comprehensive comparative studies evaluating gene set against pathway topology-based enrichment methods. The topological methods showed better performance in the simulation scenarios with non-overlapping pathways; however, they were not conclusively better in the other scenarios. This suggests that the simple gene set approach might be sufficient to detect an enriched pathway under realistic circumstances. Nevertheless, more extensive studies and further benchmark data are needed to systematically evaluate these methods and to assess what gain and cost pathway topology information introduces into enrichment analysis. Both
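
    The traditional gene set approach referred to above is typically an over-representation test. A minimal sketch using the hypergeometric tail probability, with toy numbers:

        from scipy.stats import hypergeom

        def gene_set_pvalue(n_genome: int, n_in_set: int,
                            n_de: int, n_de_in_set: int) -> float:
            """Over-representation p-value for one gene set: probability of seeing
            at least n_de_in_set differentially expressed genes in the set, given
            n_de DE genes drawn from a genome of n_genome genes."""
            return hypergeom.sf(n_de_in_set - 1, n_genome, n_in_set, n_de)

        # Toy example: 20,000 genes, a 150-gene pathway, 800 DE genes,
        # 18 of which fall inside the pathway.
        p = gene_set_pvalue(20000, 150, 800, 18)
        print(f"enrichment p-value: {p:.4g}")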

  20. Comparative study on diagonal equivalent methods of masonry infill panel

    Science.gov (United States)

    Amalia, Aniendhita Rizki; Iranata, Data

    2017-06-01

    Infrastructure construction in earthquake-prone areas needs a good design process, including modeling a structure in a correct way to reduce the damage caused by an earthquake. Earthquakes cause extensive and dangerous damage, e.g. collapsed buildings. Incorrect modeling in the design process certainly affects the structure's ability to respond to a load, i.e. an earthquake load, and needs attention in order to reduce damage and fatalities. A correct model considers every aspect that affects the strength of a building, including the stiffness resisting lateral loads caused by an earthquake. Most structural analyses still use the open frame method, which does not consider the effect of the stiffness of the masonry panel on the stiffness and strength of the whole structure. The effect of the masonry panel is usually not included in the design process, but the presence of this panel greatly affects the behavior of the building in responding to an earthquake. In the worst case it can even cause the building to collapse, as has been reported after great earthquakes worldwide. Modeling a structure with a masonry panel can be performed by designing the panel as a compression brace or a shell element. For designing the masonry panel as a compression brace, fourteen methods are popular among structural designers, formulated by Saneinejad-Hobbs, Holmes, Stafford-Smith, Mainstones, Mainstones-Weeks, Bazan-Meli, Liauw Kwan, Paulay and Priestley, FEMA 356, Durani Luo, Hendry, Al-Chaar, Papia and Chen-Iranata. Every method has its own equation and parameters to use; therefore the model of every method was compared to the results of an experimental test to see which one gives closer values. Moreover, those methods also need to be compared to the open frame to see if they give values within limits. The experimental test used in comparing all methods was taken from Mehrabi's research (Fig. 1), which was a prototype of a frame in a structure with 0.5 scale and the
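
    One of the fourteen strut formulations listed above, Mainstone's, reduces to a closed-form width estimate. A minimal sketch with illustrative frame dimensions (all numbers hypothetical, not Mehrabi's specimen):

        import numpy as np

        def mainstone_strut_width(Em, t, theta, Ec, Ic, h_inf, d_diag, H_col):
            """Equivalent diagonal strut width after Mainstone.

            Em: masonry elastic modulus (MPa), t: panel thickness (mm),
            theta: strut inclination (rad), Ec, Ic: column modulus (MPa) and
            second moment of area (mm^4), h_inf: infill height (mm),
            d_diag: panel diagonal length (mm), H_col: column height (mm).
            """
            # Relative frame-to-infill stiffness parameter (1/mm)
            lam = (Em * t * np.sin(2 * theta) / (4 * Ec * Ic * h_inf)) ** 0.25
            # Mainstone (1971): w/d = 0.175 * (lambda * H)^-0.4
            return 0.175 * (lam * H_col) ** -0.4 * d_diag

        # Illustrative single-bay frame (units: N, mm).
        w = mainstone_strut_width(Em=4000.0, t=100.0, theta=np.arctan(2800 / 4000),
                                  Ec=25000.0, Ic=1.0e9, h_inf=2800.0,
                                  d_diag=np.hypot(2800.0, 4000.0), H_col=3000.0)
        print(f"equivalent strut width: {w:.0f} mm")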

  1. Mean protein evolutionary distance: a method for comparative protein evolution and its application.

    Directory of Open Access Journals (Sweden)

    Michael J Wise

    Full Text Available Proteins are under tight evolutionary constraints, so if a protein changes it can only do so in ways that do not compromise its function. In addition, the proteins in an organism evolve at different rates. Leveraging the history of patristic distance methods, a new method for analysing comparative protein evolution, called Mean Protein Evolutionary Distance (MeaPED), measures differential resistance to evolutionary pressure across viral proteomes and is thereby able to point to the proteins' roles. Different species' proteomes can also be compared because the results, consistent across virus subtypes, concisely reflect the very different lifestyles of the viruses. The MeaPED method is here applied to influenza A virus, hepatitis C virus, human immunodeficiency virus (HIV), dengue virus, rotavirus A, polyomavirus BK and measles, which span the positive and negative single-stranded, double-stranded and reverse transcribing RNA viruses, and double-stranded DNA viruses. From this analysis, host interaction proteins including hemagglutinin (influenza), and viroporins agnoprotein (polyomavirus), p7 (hepatitis C) and VPU (HIV) emerge as evolutionary hot-spots. By contrast, RNA-directed RNA polymerase proteins including L (measles), PB1/PB2 (influenza) and VP1 (rotavirus), and internal serine proteases such as NS3 (dengue and hepatitis C virus) emerge as evolutionary cold-spots. The hot-spot influenza hemagglutinin protein is contrasted with the related cold-spot H protein from measles. It is proposed that evolutionary cold-spot proteins can become significant targets for second-line anti-viral therapeutics, in cases where front-line vaccines are not available or have become ineffective due to mutations in the hot-spot, generally more antigenically exposed proteins. The MeaPED package is available from www.pam1.bcs.uwa.edu.au/~michaelw/ftp/src/meaped.tar.gz.

  3. Comparative Study of Three Data Assimilation Methods for Ice Sheet Model Initialisation

    Science.gov (United States)

    Mosbeux, Cyrille; Gillet-Chaulet, Fabien; Gagliardini, Olivier

    2015-04-01

    The current global warming has direct consequences for ice-sheet mass loss, contributing to sea level rise. This loss is generally driven by an acceleration of some coastal outlet glaciers, and reproducing these mechanisms is one of the major issues in ice-sheet and ice flow modelling. The construction of an initial state, as close as possible to current observations, is required as a prerequisite before producing any reliable projection of the evolution of ice-sheets. For this step, inverse methods are often used to infer badly known or unknown parameters. For instance, the adjoint inverse method has been implemented and applied with success by different authors in different ice flow models in order to infer the basal drag [Schafer et al., 2012; Gillet-Chaulet et al., 2012; Morlighem et al., 2010]. Other data fields, such as ice surface and bedrock topography, are measurable with more or less uncertainty, but only locally along tracks, and must be interpolated onto the finer model grid. All these approximations lead to errors in the data elevation model and give rise to an ill-posed problem inducing non-physical anomalies in flux divergence [Seroussi et al., 2011]. A solution to dissipate these divergences of flux is to conduct a surface relaxation step at the expense of the accuracy of the modelled surface [Gillet-Chaulet et al., 2012]. Other solutions, based on the inversion of ice thickness and basal drag, were proposed [Perego et al., 2014; Pralong & Gudmundsson, 2011]. In this study, we create a twin experiment to compare three different assimilation algorithms based on inverse methods and nudging to constrain the bedrock friction and the bedrock elevation: (i) cyclic inversion of the friction parameter and bedrock topography using the adjoint method, (ii) cycles coupling inversion of the friction parameter using the adjoint method and nudging of the bedrock topography, (iii) one-step inversion of both parameters with the adjoint method. The three methods show a clear improvement in parameters

  4. Comparing transformation methods for DNA microarray data

    Directory of Open Access Journals (Sweden)

    Zwinderman Aeilko H

    2004-06-01

    Full Text Available Abstract Background When DNA microarray data are used for gene clustering, genotype/phenotype correlation studies, or tissue classification, the signal intensities are usually transformed and normalized in several steps in order to improve comparability and the signal/noise ratio. These steps may include subtraction of an estimated background signal, subtracting the reference signal, smoothing (to account for nonlinear measurement effects), and more. Different authors use different approaches, and it is generally not clear to users which method they should prefer. Results We used the ratio between biological variance and measurement variance (which is an F-like statistic) as a quality measure for transformation methods, and we demonstrate a method for maximizing that variance ratio on real data. We explore a number of transformation issues, including Box-Cox transformation, baseline shift, partial subtraction of the log-reference signal and smoothing. It appears that the optimal choice of parameters for the transformation methods depends on the data. Further, the behavior of the variance ratio, under the null hypothesis of zero biological variance, appears to depend on the choice of parameters. Conclusions The use of replicates in microarray experiments is important. Adjustment for the null-hypothesis behavior of the variance ratio is critical to the selection of transformation method.
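
    The variance-ratio criterion described above is easy to sketch: simulate replicated intensities, apply a Box-Cox family of transformations, and pick the parameter that maximizes the F-like ratio. All distributions and parameters below are illustrative assumptions:

        import numpy as np

        def variance_ratio(data: np.ndarray) -> float:
            """F-like quality measure: biological variance across genes divided
            by measurement variance across replicates.
            data has shape (n_genes, n_replicates)."""
            gene_means = data.mean(axis=1)
            biological = gene_means.var(ddof=1)
            measurement = data.var(axis=1, ddof=1).mean()
            return biological / measurement

        def boxcox(x: np.ndarray, lam: float) -> np.ndarray:
            return np.log(x) if abs(lam) < 1e-12 else (x**lam - 1) / lam

        # Simulated positive intensities: 200 genes x 4 replicates.
        rng = np.random.default_rng(1)
        true_level = rng.lognormal(mean=6, sigma=1, size=(200, 1))
        data = true_level * rng.lognormal(mean=0, sigma=0.2, size=(200, 4))

        # Grid search for the Box-Cox parameter maximising the variance ratio.
        lams = np.linspace(-1, 1, 21)
        ratios = [variance_ratio(boxcox(data, l)) for l in lams]
        best = lams[int(np.argmax(ratios))]
        print(f"best lambda: {best:.2f}, ratio: {max(ratios):.1f}")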

  5. Comparative study of two commercially pure titanium casting methods

    Directory of Open Access Journals (Sweden)

    Renata Cristina Silveira Rodrigues

    2010-10-01

    Full Text Available The interest in using titanium to fabricate removable partial denture (RPD) frameworks has increased, but there are few studies evaluating the effects of casting methods on clasp behavior. OBJECTIVE: This study compared the occurrence of porosities and the retentive force of commercially pure titanium (CP Ti) and cobalt-chromium (Co-Cr) removable partial denture circumferential clasps cast by induction/centrifugation and plasma/vacuum-pressure. MATERIAL AND METHODS: 72 frameworks were cast from CP Ti (n=36) and Co-Cr alloy (n=36; control group). For each material, 18 frameworks were cast by electromagnetic induction and injected by centrifugation, whereas the other 18 were cast by plasma and injected by vacuum-pressure. For each casting method, three subgroups (n=6) were formed: 0.25 mm, 0.50 mm, and 0.75 mm undercuts. The specimens were radiographed and subjected to an insertion/removal test simulating 5 years of framework use. Data were analyzed by ANOVA and Tukey's test to compare materials and casting methods (α=0.05). RESULTS: Three of 18 specimens of the induction/centrifugation group and 9 of 18 specimens of the plasma/vacuum-pressure cast presented porosities, but only 1 and 7 specimens, respectively, were rejected for the simulation test. For the Co-Cr alloy, no defects were found. Comparing the casting methods, statistically significant differences (p<0.05) were observed only for the Co-Cr alloy with 0.25 mm and 0.50 mm undercuts. Significant differences were found for the 0.25 mm and 0.75 mm undercuts depending on the material used. For the 0.50 mm undercut, significant differences were found when the materials were induction cast. CONCLUSION: Although both casting methods produced satisfactory CP Ti RPD frameworks, the occurrence of porosities was greater in the plasma/vacuum-pressure than in the induction/centrifugation method, the latter resulting in higher clasp rigidity, generating higher retention force values.

  6. Comparing different methods to assess weaver ant abundance in plantation trees

    DEFF Research Database (Denmark)

    Wargui, Rosine; Offenberg, Joachim; Sinzogan, Antonio

    2015-01-01

    Weaver ants (Oecophylla spp.) are widely used as effective biological control agents. In order to optimize their use, ant abundance needs to be tracked. As several methods have been used to estimate ant abundance on plantation trees, abundances are not comparable between studies and no guideline is available on which method to apply in a particular study. This study compared four existing methods: three methods based on the number of ant trails on the main branches of a tree (called the Peng 1, Peng 2 and Offenberg index) and one method based on the number of ant nests per tree. Branch indices did not produce equal scores and cannot be compared directly. The Peng 1 index was the fastest to assess, but showed only limited seasonal fluctuations when ant abundance was high, because it approached its upper limit. The Peng 2 and Offenberg indices were lower and not close to the upper limit and therefore...

  7. Determination of Matric Suction and Saturation Degree for Unsaturated Soils, Comparative Study - Numerical Method versus Analytical Method

    Science.gov (United States)

    Chiorean, Vasile-Florin

    2017-10-01

    Matric suction is a soil parameter which influences the behaviour of unsaturated soils in terms of both shear strength and permeability. Knowing the variation of matric suction in the unsaturated soil zone is necessary for solving geotechnical issues like the stability of unsaturated soil slopes or the bearing capacity of unsaturated foundation ground. The mathematical expression of the dependency between soil moisture content and matric suction (the soil water characteristic curve) is strongly nonlinear. This paper presents two methods to determine the variation of matric suction along the depth between the groundwater level and the soil surface. The first method is an analytical approach describing the one-dimensional steady-state unsaturated infiltration that occurs between the groundwater level and the soil surface. Three different situations were simulated in terms of boundary conditions: precipitation (inflow conditions at the ground surface), evaporation (outflow conditions at the ground surface), and perfect equilibrium (no flow at the ground surface). The numerical method is a finite element analysis of steady-state, two-dimensional unsaturated infiltration. Regarding boundary conditions, situations identical to those of the analytical approach were simulated. For both methods, the equation proposed by van Genuchten-Mualem (1980) was adopted for the mathematical expression of the soil water characteristic curve. The equation proposed by van Genuchten-Mualem was also adopted for the unsaturated soil permeability prediction model. The fitting parameters of these models were adopted according to the RETC 6.02 software as a function of soil type. The analyses were performed with both methods for three major soil types: clay, silt and sand. For each soil type, analyses were conducted for three situations in terms of boundary conditions applied at the soil surface: inflow, outflow, and no flow. The obtained results are presented in order to highlight the differences
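
    The van Genuchten-Mualem retention curve adopted in the paper has a compact closed form. A minimal sketch, with illustrative parameters in the range RETC reports for a silt loam (the paper's fitted values are not reproduced):

        import numpy as np

        def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
            """Van Genuchten (1980) water retention curve.

            h: matric suction head (positive, same length unit as 1/alpha).
            Returns volumetric water content; m = 1 - 1/n (Mualem constraint).
            """
            m = 1.0 - 1.0 / n
            se = (1.0 + (alpha * np.asarray(h)) ** n) ** -m   # effective saturation
            return theta_r + (theta_s - theta_r) * se

        # Illustrative silt loam parameters (alpha in 1/cm, suction in cm).
        suction_cm = np.array([1.0, 10.0, 100.0, 1000.0])
        theta = van_genuchten_theta(suction_cm, theta_r=0.067, theta_s=0.45,
                                    alpha=0.02, n=1.41)
        print(np.round(theta, 3))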

  8. A proposed method for accurate 3D analysis of cochlear implant migration using fusion of cone beam CT

    Directory of Open Access Journals (Sweden)

    Guido Dees

    2016-01-01

    Full Text Available Introduction: The goal of this investigation was to compare the fusion of sequential cone beam CT volumes to the gold standard (fiducial registration) in order to be able to analyze clinical CI migration with high accuracy in three dimensions. Materials and Methods: Paired time-lapsed cone beam CT volumes were acquired from five human cadaver temporal bones and one human subject. These volumes were fused using 3D Slicer 4 and BRAINSFit software. Using a gold-standard fiducial technique, the accuracy, robustness and performance time of the fusion process were assessed. Results: This proposed fusion protocol achieves a sub-voxel mean Euclidean distance of 0.05 millimeter in human cadaver temporal bones and 0.16 millimeter when applied to the described in vivo human synthetic data set in over 95% of all fusions. Performance times are less than two minutes. Conclusion: Here a new and validated method based on existing techniques is described which could be used to accurately quantify migration of cochlear implant electrodes.

  9. 77 FR 24684 - Proposed Information Collection; Comment Request; 2013-2015 American Community Survey Methods...

    Science.gov (United States)

    2012-04-25

    ... proposed content changes. Thus, we need to test an alternative questionnaire design to accommodate additional content on the ACS mail questionnaire. In the 2013 ACS Questionnaire Design Test, we will study... in Puerto Rico. II. Method of Collection Questionnaire Design Test--Data collection for this test...

  10. Comparative analysis of clustering methods for gene expression time course data

    Directory of Open Access Journals (Sweden)

    Ivan G. Costa

    2004-01-01

    Full Text Available This work performs a data-driven comparative study of clustering methods used in the analysis of gene expression time courses (or time series). Five clustering methods found in the literature of gene expression analysis are compared: agglomerative hierarchical clustering, CLICK, dynamical clustering, k-means and self-organizing maps. In order to evaluate the methods, a k-fold cross-validation procedure adapted to unsupervised methods is applied. The accuracy of the results is assessed by the comparison of the partitions obtained in these experiments with gene annotation, such as protein function and series classification.

  11. Theoretical and methodological basis of the comparative historical and legal method development

    Directory of Open Access Journals (Sweden)

    Д. А. Шигаль

    2015-05-01

    Full Text Available Problem setting. The development of any scientific method always raises questions both about its structural and functional characteristics and its place in the system of scientific methods, and about the practicability of such methodological work. This paper attempts to give a detailed response to the major comments and objections arising with respect to the separation of the comparative historical and legal method as an independent means of special scientific knowledge. Recent research and publications analysis. Analyzing research and publications within the theme of this scientific article, it should be noted that attention to methodological issues of both general and legal science has been paid by such prominent foreign and domestic scholars as I. D. Andreev, Yu. Ya. Baskin, O. L. Bygych, M. A. Damirli, V. V. Ivanov, I. D. Koval'chenko, V. F. Kolomyitsev, D. V. Lukyanov, L. A. Luts, J. Maida, B. G. Mogilnytsky, N. M. Onishchenko, N. M. Parkhomenko, O. V. Petryshyn, S. P. Pogrebnyak, V. I. Synaisky, V. M. Syryh, O. F. Skakun, A. O. Tille, D. I. Feldman and others. It should be noted that, despite a large number of scientific papers in this field, interest within the research community in the methodology of the history of state and law still unfairly remains very low. Paper objective. The purpose of this scientific paper is the theoretical and methodological rationale for the need to separate and develop the comparative historical and legal method, in the form of answers to the most common questions and objections that arise in the scientific community in this regard. Paper main body. The development of comparative historical and legal means of knowledge is quite justified because it meets the requirements of scientific method efficiency, whose criteria are the speed of achieving the goal, the ease of use of one or another way of scientific knowledge, the universality of research methods, the convenience of the techniques that are used and so on. Combining the

  12. Comparing methods for measuring the rate of spread of invading populations

    Science.gov (United States)

    Marius Gilbert; Andrew. Liebhold

    2010-01-01

    Measuring rates of spread during biological invasions is important for predicting where and when invading organisms will spread in the future as well as for quantifying the influence of environmental conditions on invasion speed. While several methods have been proposed in the literature to measure spread rates, a comprehensive comparison of their accuracy when applied...

  13. Comparing Traditional and Crowdsourcing Methods for Pretesting Survey Questions

    Directory of Open Access Journals (Sweden)

    Jennifer Edgar

    2016-10-01

    Full Text Available Cognitive interviewing is a common method used to evaluate survey questions. This study compares traditional cognitive interviewing methods with crowdsourcing, or "tapping into the collective intelligence of the public to complete a task." Crowdsourcing may provide researchers with access to a diverse pool of potential participants in a very timely and cost-efficient way. Exploratory work found that crowdsourcing participants, with self-administered data collection, may be a viable alternative, or addition, to traditional pretesting methods. Using three crowdsourcing designs (TryMyUI, Amazon Mechanical Turk, and Facebook), we compared the participant characteristics, costs, and quantity and quality of data with traditional laboratory-based cognitive interviews. Results suggest that crowdsourcing and self-administered protocols may be a viable way to collect survey pretesting information, as participants were able to complete the tasks and provide useful information; however, complex tasks may require the skills of an interviewer to administer unscripted probes.

  14. Comparing four methods to estimate usual intake distributions

    NARCIS (Netherlands)

    Souverein, O.W.; Dekkers, A.L.; Geelen, A.; Haubrock, J.; Vries, de J.H.M.; Ocke, M.C.; Harttig, U.; Boeing, H.; Veer, van 't P.

    2011-01-01

    Background/Objectives: The aim of this paper was to compare methods to estimate usual intake distributions of nutrients and foods. As ‘true’ usual intake distributions are not known in practice, the comparison was carried out through a simulation study, as well as empirically, by application to data

  15. A Method for Proposing Valued-Adding Attributes in Customized Housing

    Directory of Open Access Journals (Sweden)

    Cynthia S. Hentschke

    2014-12-01

    Full Text Available In most emerging economies, there have been many incentives and high availability of funding for low-cost housing projects. This has encouraged product standardization and the application of mass production ideas, based on the assumption that this is the most effective strategy for reducing costs. However, the delivery of highly standardized housing units to customers with different needs, without considering their lifestyle and perception of value, often results in inadequate products. Mass customization has been pointed out as an effective strategy to improve value generation in low-cost housing projects, and to avoid the waste caused by renovations done in dwellings soon after occupancy. However, one of the main challenges for the implementation of mass customization is the definition of a set of relevant options based on users' perceived value. The aim of this paper is to propose a method for defining value-adding attributes in customized housing projects, which can support decision-making in product development. The means-end chain theory was used as a theoretical framework to connect product attributes and customers' values, through the application of the laddering technique. The method was tested in two house-building projects delivered by a company from Brazil. The main contribution of this method is to indicate the customization units that are most important for users, along with the explanation of why those units are the most relevant ones.

  16. Transfer Pricing: Is the Comparable Uncontrolled Price Method the Best Method in all Cases?

    Directory of Open Access Journals (Sweden)

    Pranvera Dalloshi

    2012-12-01

    Full Text Available The scope of transfer pricing is becoming a very important issue for all companies that consist of different departments or have a network of branches. These companies are obliged to present the way prices are determined for transactions that they have with their branches or other relevant members of their network. The establishment of multinational companies that develop their activities in various countries is increasing, which has increased the need to supervise their transactions and to approve laws and administrative orders that leave no room for misuse. The paper focuses on the question of whether the Comparable Uncontrolled Price Method is the best method to be used in all cases. It is presented through a concrete example that shows how the price of a product determined through the Comparable Uncontrolled Price Method, or market price, has an impact on the profit of the mother company and other subsidiaries.

  17. Bronchial histamine challenge. A combined interrupter-dosimeter method compared with a standard method

    DEFF Research Database (Denmark)

    Pavlovic, M; Holstein-Rathlou, N H; Madsen, F

    1985-01-01

    We compared the provocative concentration (PC) values obtained by two different methods of performing bronchial histamine challenge. One test was done on an APTA, an apparatus which allows simultaneous provocation with histamine and measurement of airway resistance (Rtot) by the interrupter method ...

  18. Comparative study of durability test methods for pellets and briquettes

    Energy Technology Data Exchange (ETDEWEB)

    Temmerman, Michaeel; Rabier, Fabienne [Centre wallon de Recherches agronomiques (CRA-W), 146, chaussee de Namur, B-5030, Gembloux (Belgium); Jensen, Peter Daugbjerg [Forest and Landscape, The Royal Veterinary and Agricultural University, Rolighedsvej 23, DK-1958 Frederiksberg C (Denmark); Hartmann, Hans; Boehm, Thorsten [Technologie- und Foerderzentrum fuer Nachwachsende Rohstoffe-TFZ, Schulgasse 18, D-94315 Straubing (Germany)

    2006-11-15

    Different methods for the determination of the mechanical durability (DU) of pellets and briquettes were compared by international round robin tests involving different laboratories. The DUs of five briquette and 26 pellet types were determined. For briquettes, different rotation numbers of a prototype tumbler and a calculated DU index are compared. For pellet testing, the study compares two standard methods, a tumbling device according to ASAE S 269.4 and the Lignotester according to ONORM M 7135, as well as a second tumbling method with a prototype tumbler. For the tested methods, the repeatability, the reproducibility and the required minimum number of replications to achieve given accuracy levels were calculated. Additionally, this study evaluates the relation between DU and particle density. The results show, for both pellets and briquettes, that the measured DU values and their variability are influenced by the applied method. Moreover, the variability of the results depends on the biofuel itself. For briquettes of DU above 90%, five replications lead to an accuracy of 2%, while 39 replications are needed to achieve an accuracy of 10% when briquettes of DU below 90% are tested. For pellets, the tumbling device described by the ASAE standard allows acceptable accuracy levels (1%) to be reached with a limited number of replications. Finally, for the tested pellets and briquettes no relation between DU and particle density was found. (author)

  19. Comparing performance of standard and iterative linear unmixing methods for hyperspectral signatures

    Science.gov (United States)

    Gault, Travis R.; Jansen, Melissa E.; DeCoster, Mallory E.; Jansing, E. David; Rodriguez, Benjamin M.

    2016-05-01

    Linear unmixing is a method of decomposing a mixed signature to determine the component materials that are present in a sensor's field of view, along with the abundances at which they occur. Linear unmixing assumes that energy from the materials in the field of view is mixed in a linear fashion across the spectrum of interest. Traditional unmixing methods can take advantage of adjacent pixels in the decomposition algorithm, but this is not the case for point sensors. This paper explores several iterative and non-iterative methods for linear unmixing, and examines their effectiveness at identifying the individual signatures that make up simulated single-pixel mixed signatures, along with their corresponding abundances. The major hurdle addressed in the proposed method is that no neighboring pixel information is available for the spectral signature of interest. Testing is performed using two collections of spectral signatures from the Johns Hopkins University Applied Physics Laboratory's Signatures Database software (SigDB): a hand-selected small dataset of 25 distinct signatures drawn from a larger dataset of approximately 1600 pure visible/near-infrared/short-wave-infrared (VIS/NIR/SWIR) spectra. Simulated spectra are created from three and four material mixtures randomly drawn from a dataset originating from SigDB, where the abundance of one material is swept in 10% increments from 10% to 90% with the abundances of the other materials equally divided amongst the remainder. For the smaller dataset of 25 signatures, all combinations of three or four materials are used to create simulated spectra, from which the accuracy of the materials returned, as well as the correctness of the abundances, is compared to the inputs. The experiment is expanded to include the signatures from the larger dataset of almost 1600 signatures evaluated using a Monte Carlo scheme with 5000 draws of three or four materials to create the simulated mixed signatures. The spectral similarity of the inputs to the
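
    As a concrete illustration of a non-iterative linear unmixing step for a single pixel, a minimal non-negative least-squares sketch with a synthetic endmember library (all spectra and abundances are hypothetical; SigDB data are not reproduced):

        import numpy as np
        from scipy.optimize import nnls

        # Hypothetical endmember library: 3 pure spectra over 50 bands.
        rng = np.random.default_rng(2)
        bands = np.linspace(0.4, 2.5, 50)                  # wavelengths, microns
        E = np.stack([np.exp(-(bands - c) ** 2 / 0.3) for c in (0.8, 1.4, 2.0)],
                     axis=1)

        # Build a mixed pixel with known abundances and a little noise.
        true_abund = np.array([0.6, 0.3, 0.1])
        pixel = E @ true_abund + rng.normal(scale=0.005, size=bands.size)

        # Non-negative least squares solves min ||E a - pixel|| with a >= 0;
        # a final normalisation imposes the sum-to-one abundance constraint.
        abund, residual = nnls(E, pixel)
        abund /= abund.sum()

        print("estimated abundances:", np.round(abund, 3))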

  20. A comparative study of three different gene expression analysis methods.

    Science.gov (United States)

    Choe, Jae Young; Han, Hyung Soo; Lee, Seon Duk; Lee, Hanna; Lee, Dong Eun; Ahn, Jae Yun; Ryoo, Hyun Wook; Seo, Kang Suk; Kim, Jong Kun

    2017-12-04

    TNF-α regulates immune cells and acts as an endogenous pyrogen. Reverse transcription polymerase chain reaction (RT-PCR) is one of the most commonly used methods for gene expression analysis. Among the alternatives to PCR, loop-mediated isothermal amplification (LAMP) shows good potential in terms of specificity and sensitivity. However, few studies have compared RT-PCR and LAMP for human gene expression analysis. Therefore, in the present study, we compared one-step RT-PCR, two-step RT-LAMP and one-step RT-LAMP for human gene expression analysis. We compared the three gene expression analysis methods using the human TNF-α gene as a biomarker from peripheral blood cells. Total RNA from the three selected febrile patients was subjected to the three different methods of gene expression analysis. In the comparison of the three methods, the detection limits of one-step RT-PCR and one-step RT-LAMP were the same, while that of two-step RT-LAMP was inferior. One-step RT-LAMP takes less time, and the experimental result is easy to determine. One-step RT-LAMP is a potentially useful and complementary tool that is fast and reasonably sensitive. In addition, one-step RT-LAMP could be useful in environments lacking specialized equipment or expertise.

  1. Comparative analysis of various methods for modelling permanent magnet machines

    NARCIS (Netherlands)

    Ramakrishnan, K.; Curti, M.; Zarko, D.; Mastinu, G.; Paulides, J.J.H.; Lomonova, E.A.

    2017-01-01

    In this paper, six different modelling methods for permanent magnet (PM) electric machines are compared in terms of their computational complexity and accuracy. The methods are based primarily on conformal mapping, mode matching, and harmonic modelling. In the case of conformal mapping, slotted air

  2. Comparing concentration methods: parasitrap® versus Kato-Katz for ...

    African Journals Online (AJOL)

    Comparing concentration methods: parasitrap® versus Kato-Katz for studying the prevalence of Helminths in Bengo province, Angola. Clara Mirante, Isabel Clemente, Graciette Zambu, Catarina Alexandre, Teresa Ganga, Carlos Mayer, Miguel Brito ...

  3. Proposed hybrid-classifier ensemble algorithm to map snow cover area

    Science.gov (United States)

    Nijhawan, Rahul; Raman, Balasubramanian; Das, Josodhir

    2018-01-01

    The metaclassification ensemble approach is known to improve the prediction performance for snow-covered area. The methodology adopted here is based on a neural network along with four state-of-the-art machine learning algorithms: support vector machines, artificial neural networks, spectral angle mapper, and K-means clustering, plus a snow index, the normalized difference snow index. An AdaBoost ensemble algorithm based on decision trees for snow-cover mapping is also proposed. According to the available literature, these methods have rarely been used for snow-cover mapping. Employing the above techniques, a study was conducted for the Raktavarn and Chaturangi Bamak glaciers, Uttarakhand, Himalaya, using a multispectral Landsat 7 ETM+ (enhanced thematic mapper) image. The study also compares the results with those obtained from statistical combination methods (majority rule and belief functions) and the accuracies of the individual classifiers. Accuracy assessment is performed by computing the quantity and allocation disagreement, analyzing statistical measures (accuracy, precision, specificity, AUC, and sensitivity) and receiver operating characteristic curves. A total of 225 combinations of parameters for the individual classifiers were trained and tested on the dataset and the results were compared with the proposed approach. It was observed that the proposed methodology produced the highest classification accuracy (95.21%), close to that (94.01%) produced by the proposed AdaBoost ensemble algorithm. From the sets of observations, it was concluded that the ensemble of classifiers produced better results than the individual classifiers.

  4. Direct risk standardisation: a new method for comparing casemix adjusted event rates using complex models.

    Science.gov (United States)

    Nicholl, Jon; Jacques, Richard M; Campbell, Michael J

    2013-10-29

    Comparison of outcomes between populations or centres may be confounded by any casemix differences and standardisation is carried out to avoid this. However, when the casemix adjustment models are large and complex, direct standardisation has been described as "practically impossible", and indirect standardisation may lead to unfair comparisons. We propose a new method of directly standardising for risk rather than standardising for casemix which overcomes these problems. Using a casemix model which is the same model as would be used in indirect standardisation, the risk in individuals is estimated. Risk categories are defined, and event rates in each category for each centre to be compared are calculated. A weighted sum of the risk category specific event rates is then calculated. We have illustrated this method using data on 6 million admissions to 146 hospitals in England in 2007/8 and an existing model with over 5000 casemix combinations, and a second dataset of 18,668 adult emergency admissions to 9 centres in the UK and overseas and a published model with over 20,000 casemix combinations and a continuous covariate. Substantial differences between conventional directly casemix standardised rates and rates from direct risk standardisation (DRS) were found. Results based on DRS were very similar to Standardised Mortality Ratios (SMRs) obtained from indirect standardisation, with similar standard errors. Direct risk standardisation using our proposed method is as straightforward as using conventional direct or indirect standardisation, always enables fair comparisons of performance to be made, can use continuous casemix covariates, and was found in our examples to have similar standard errors to the SMR. It should be preferred when there is a risk that conventional direct or indirect standardisation will lead to unfair comparisons.
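
    The weighted sum of risk-category-specific event rates described above can be sketched directly. A minimal illustration on simulated admissions; the risk model, bin count and data are assumptions, not the paper's casemix model:

        import numpy as np

        def direct_risk_standardised_rate(risk, event, centre, n_bins=10):
            """Directly risk-standardised event rate per centre.

            risk: model-predicted risk per admission; event: observed 0/1
            outcome; centre: centre label per admission. Rates within risk
            categories are combined with weights from the pooled risk
            distribution."""
            risk, event, centre = map(np.asarray, (risk, event, centre))
            edges = np.quantile(risk, np.linspace(0, 1, n_bins + 1))
            cat = np.clip(np.digitize(risk, edges[1:-1]), 0, n_bins - 1)
            weights = np.bincount(cat, minlength=n_bins) / len(risk)

            rates = {}
            for c in np.unique(centre):
                mask = centre == c
                cat_rates = np.array([
                    event[mask & (cat == k)].mean()
                    if np.any(mask & (cat == k)) else 0.0
                    for k in range(n_bins)
                ])
                rates[c] = float(np.sum(weights * cat_rates))
            return rates

        # Toy example: two centres, risk from some casemix model, 0/1 outcomes.
        rng = np.random.default_rng(3)
        risk = rng.beta(2, 8, size=2000)
        centre = np.repeat(["A", "B"], 1000)
        event = rng.random(2000) < risk * np.where(centre == "A", 1.0, 1.3)
        print(direct_risk_standardised_rate(risk, event, centre))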

  5. Comparing parametric and nonparametric regression methods for panel data

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    We investigate and compare the suitability of parametric and non-parametric stochastic regression methods for analysing production technologies and the optimal firm size. Our theoretical analysis shows that the most commonly used functional forms in empirical production analysis, Cobb-Douglas and Translog, are unsuitable for analysing the optimal firm size. We show that the Translog functional form implies an implausible linear relationship between the (logarithmic) firm size and the elasticity of scale, where the slope is artificially related to the substitutability between the inputs. The practical applicability of the parametric and non-parametric regression methods is scrutinised and compared by an empirical example: we analyse the production technology and investigate the optimal size of Polish crop farms based on a firm-level balanced panel data set. A nonparametric specification test

  6. An overview of methods for comparative effectiveness research.

    Science.gov (United States)

    Meyer, Anne-Marie; Wheeler, Stephanie B; Weinberger, Morris; Chen, Ronald C; Carpenter, William R

    2014-01-01

    Comparative effectiveness research (CER) is a broad category of outcomes research encompassing many different methods employed by researchers and clinicians from numerous disciplines. The goal of cancer-focused CER is to generate new knowledge to assist cancer stakeholders in making informed decisions that will improve health care and outcomes of both individuals and populations. There are numerous CER methods that may be used to examine specific questions, including randomized controlled trials, observational studies, systematic literature reviews, and decision sciences modeling. Each has its strengths and weaknesses. To both inform and serve as a reference for readers of this issue of Seminars in Radiation Oncology as well as the broader oncology community, we describe CER and several of the more commonly used approaches and analytical methods. © 2013 Published by Elsevier Inc.

  7. K0-INAA method accuracy using Zn as comparator

    International Nuclear Information System (INIS)

    Bedregal, P.; Mendoza, P.; Ubillus, M.; Montoya, E.

    2010-01-01

    An evaluation of the accuracy of the k0-INAA method using a Zn foil as comparator is presented. Good agreement was found in the precision within and between analysts, as well as in the assessment of trueness for most elements. The determination of important experimental parameters such as gamma peak counting efficiency, γ-γ true coincidence, comparator preparation, and quality assurance/quality control is also described and discussed.

  8. A new ART iterative method and a comparison of performance among various ART methods

    International Nuclear Information System (INIS)

    Tan, Yufeng; Sato, Shunsuke

    1993-01-01

    Many algebraic reconstruction technique (ART) image reconstruction algorithms, for instance the simultaneous iterative reconstruction technique (SIRT), the relaxation method, and multiplicative ART (MART), have been proposed and their convergence properties have been studied. SIRT and the underrelaxed relaxation method converge to the least-squares solution, but their convergence is very slow. The Kaczmarz method converges very quickly, but the reconstructed images contain a lot of noise. Comparative studies of these algorithms have been carried out by Gilbert and others, but they are not adequate. In this paper, we (1) propose a new method, a modified Kaczmarz method, and prove its convergence property, and (2) study the performance of seven algorithms, including the one proposed here, by computer simulation for three kinds of typical phantoms. The method proposed here does not give the least-squares solution, but the root mean square errors of its reconstructed images decrease very quickly after a few iterations. The results show that the method proposed here gives a better reconstructed image. (author)
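    The Kaczmarz update mentioned above has a compact form; the sketch below is a generic textbook implementation on a toy system, not the authors' modified variant, and the relaxation parameter lam and the test system are illustrative choices.

    ```python
    # Sketch of the classical Kaczmarz (ART) update for a linear system A x = b:
    # each step projects the current iterate onto one row's hyperplane.
    import numpy as np

    def kaczmarz(A, b, sweeps=50, lam=1.0):
        x = np.zeros(A.shape[1])
        for _ in range(sweeps):
            for i in range(A.shape[0]):
                a = A[i]
                x += lam * (b[i] - a @ x) / (a @ a) * a  # row projection
        return x

    A = np.array([[2.0, 1.0], [1.0, 3.0], [1.0, -1.0]])
    x_true = np.array([1.0, 2.0])
    print(kaczmarz(A, A @ x_true))   # should approach x_true
    ```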

  9. Substoichiometric method in the simple radiometric analysis

    International Nuclear Information System (INIS)

    Ikeda, N.; Noguchi, K.

    1979-01-01

    The substoichiometric method is applied to simple radiometric analysis. Two methods - the standard reagent method and the standard sample method - are proposed. The validity of the principle of the methods is verified experimentally in the determination of silver by the precipitation method, or of zinc by the ion-exchange or solvent-extraction method. The proposed methods are simple and rapid compared with the conventional superstoichiometric method. (author)

  10. Comparative study between EDXRF and ASTM E572 methods using two-way ANOVA

    Science.gov (United States)

    Krummenauer, A.; Veit, H. M.; Zoppas-Ferreira, J.

    2018-03-01

    Comparison with a reference method is one of the necessary requirements for the validation of non-standard methods. This comparison was made using an experiment planned with two-way ANOVA. In the ANOVA, the results obtained using the EDXRF method to be validated were compared with the results obtained using the ASTM E572-13 standard test method. Fisher's tests (F-tests) were used for the comparative study of the elements molybdenum, niobium, copper, nickel, manganese, chromium, and vanadium. For every element, the F-test indicates that the null hypothesis (H0) is not rejected. As a result, there is no significant difference between the methods compared. Therefore, according to this study, it is concluded that the EDXRF method satisfies this method comparison requirement.
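    A minimal sketch of a two-way ANOVA of method versus element, in the spirit of the design described; the concentrations below are fabricated, and statsmodels is used for the F-tests.

    ```python
    # Two-way ANOVA comparing two measurement methods across elements.
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    data = pd.DataFrame({
        "method":  ["EDXRF"] * 4 + ["ASTM"] * 4,
        "element": ["Mo", "Nb", "Cu", "Ni"] * 2,
        "conc":    [0.21, 0.05, 0.30, 1.10, 0.20, 0.05, 0.31, 1.08],
    })
    model = ols("conc ~ C(method) + C(element)", data=data).fit()
    print(sm.stats.anova_lm(model, typ=2))  # F-test on C(method): H0 = no method effect
    ```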

  11. A qualitative method proposal to improve environmental impact assessment

    International Nuclear Information System (INIS)

    Toro, Javier; Requena, Ignacio; Duarte, Oscar; Zamorano, Montserrat

    2013-01-01

    In environmental impact assessment, qualitative methods are used because they are versatile and easy to apply. This methodology is based on evaluating the strength of the impact by grading a series of qualitative attributes that can be manipulated by the evaluator. The results thus obtained are not objective, and all too often impacts are eliminated that should be mitigated with corrective measures. However, qualitative methodology can be improved if the calculation of Impact Importance is based on the characteristics of environmental factors and project activities instead of on indicators assessed by evaluators. In this sense, this paper proposes the inclusion of the vulnerability of environmental factors and the potential environmental impact of project activities. For this purpose, the study described in this paper defined Total Impact Importance and specified a quantification procedure. The results obtained in the case study of oil drilling in Colombia reflect greater objectivity in the evaluation of impacts as well as a positive correlation between impact values, the environmental characteristics at and near the project location, and the technical characteristics of project activities. -- Highlights: • The concept of vulnerability has been used to calculate impact importance in the assessment. • This paper defined Total Impact Importance and specified a quantification procedure. • The method includes the characteristics of environmental factors and project activities. • The application has shown greater objectivity in the evaluation of impacts. • A better correlation between impact values, the environment, and the project has been shown.

  13. Methods and models used in comparative risk studies

    International Nuclear Information System (INIS)

    Devooght, J.

    1983-01-01

    Comparative risk studies make use of a large number of methods and models based upon incompletely formulated assumptions or value judgements. Owing to the multidimensionality of risks and benefits, the economic and social context may notably influence the final result. Five classes of models are briefly reviewed: accounting of fluxes of effluents, radiation and energy; transport models and health effects; systems reliability and Bayesian analysis; economic analysis of reliability and cost-risk-benefit analysis; and decision theory in the presence of uncertainty and multiple objectives. The purpose and prospects of comparative studies are assessed in view of probable diminishing returns for large generic comparisons [fr]

  14. Correlation based method for comparing and reconstructing quasi-identical two-dimensional structures

    International Nuclear Information System (INIS)

    Mejia-Barbosa, Y.

    2000-03-01

    We show a method for comparing and reconstructing two similar amplitude-only structures composed of the same number of identical apertures. The structures are two-dimensional and differ only in the location of one of the apertures. The method is based on a subtraction algorithm involving the auto-correlation and cross-correlation functions of the compared structures. Experimental results illustrate the feasibility of the method. (author)
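    A rough sketch of a correlation-based comparison on toy binary masks; the point-like apertures and the specific subtraction shown (autocorrelation minus cross-correlation) are simplifying assumptions for illustration, not the authors' exact algorithm.

    ```python
    # Comparing two aperture masks that differ in one aperture position:
    # the difference of correlation functions highlights the displacement.
    import numpy as np
    from scipy.signal import correlate2d

    def mask(positions, shape=(32, 32)):
        m = np.zeros(shape)
        for r, c in positions:
            m[r, c] = 1.0                  # point-like identical apertures
        return m

    s1 = mask([(8, 8), (8, 20), (20, 8)])
    s2 = mask([(8, 8), (8, 20), (22, 10)])  # one aperture displaced

    diff = correlate2d(s1, s1, mode="full") - correlate2d(s1, s2, mode="full")
    print("peak of correlation difference:", np.unravel_index(diff.argmax(), diff.shape))
    ```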

  15. Proposed waste form performance criteria and testing methods for low-level mixed waste

    International Nuclear Information System (INIS)

    Franz, E.M.; Fuhrmann, M.; Bowerman, B.; Bates, S.; Peters, R.

    1994-08-01

    This document describes proposed waste form performance criteria and testing methods that could be used as guidance in judging the viability of a waste form as a physico-chemical barrier to releases of radionuclides and RCRA-regulated hazardous components. It is assumed that release of contaminants by leaching is the single most important property by which the effectiveness of a waste form is judged. A two-tier regimen is proposed. The first tier includes a leach test required by the Environmental Protection Agency and a leach test designed to determine the net forward leach rate for a variety of materials. The second tier of tests determines whether a set of stresses (i.e., radiation, freeze-thaw, wet-dry cycling) on the waste form adversely impacts its ability to retain contaminants and remain physically intact. It is recommended that the first-tier tests be performed first to determine acceptability. Only on passing the given specifications for the leach tests should the other tests be performed. In the absence of site-specific performance assessments (PA), two generic modeling exercises are described which were used to calculate proposed acceptable leach rates

  16. Robust sleep quality quantification method for a personal handheld device.

    Science.gov (United States)

    Shin, Hangsik; Choi, Byunghun; Kim, Doyoon; Cho, Jaegeol

    2014-06-01

    The purpose of this study was to develop and validate a novel method for sleep quality quantification using personal handheld devices. The proposed method used 3- or 6-axis signals, including acceleration and angular velocity, obtained from built-in sensors in a smartphone, and applied a real-time wavelet denoising technique to minimize the nonstationary noise. Sleep or wake status was decided on each axis, and the totals were finally summed to calculate sleep efficiency (SE), regarded as sleep quality in general. A sleep experiment with 14 participating subjects was carried out to evaluate the performance of the proposed method. An experimental protocol was designed for comparative analysis. The activity during sleep was recorded not only by the proposed method but also simultaneously by well-known commercial applications; moreover, activity was recorded on different mattresses and in different locations to verify reliability in practical use. Every calculated SE was compared with the SE of a clinically certified medical device, the Philips (Amsterdam, The Netherlands) Actiwatch. In these experiments, the proposed method proved its reliability in quantifying sleep quality. Compared with the Actiwatch, the accuracy and average bias error of the SE calculated by the proposed method were 96.50% and -1.91%, respectively. The proposed method outperformed the comparative applications by at least 11.41% in average accuracy and at least 6.10% in average bias; the average accuracy and average absolute bias error of the comparative applications were 76.33% and 17.52%, respectively.
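    A minimal sketch of such a pipeline, assuming synthetic accelerometer data, a db4 wavelet, and an arbitrary movement threshold; none of these values come from the paper.

    ```python
    # Wavelet denoising of accelerometer axes, a per-axis wake decision,
    # and a sleep-efficiency (SE) figure, on fabricated data.
    import numpy as np
    import pywt

    rng = np.random.default_rng(2)
    acc = rng.normal(0, 0.02, size=(3, 60 * 60))      # 3 axes, 1 Hz, one hour

    def denoise(sig, wavelet="db4", level=3):
        coeffs = pywt.wavedec(sig, wavelet, level=level)
        thr = np.median(np.abs(coeffs[-1])) / 0.6745  # robust noise estimate
        coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[: len(sig)]

    den = np.array([denoise(a) for a in acc])
    wake = (np.abs(den) > 0.05).any(axis=0)           # any-axis movement => wake epoch
    se = 100.0 * (1 - wake.mean())                    # sleep efficiency, %
    print(f"sleep efficiency: {se:.1f}%")
    ```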

  17. European experiences of the proposed ASTM test method for crack arrest toughness of ferritic materials

    International Nuclear Information System (INIS)

    Jutla, T.; Lidbury, D.P.G.; Ziebs, J.; Zimmermann, C.

    1986-01-01

    The proposed ASTM test method for measuring the crack arrest toughness of ferritic materials using wedge-loaded, side-grooved, compact specimens was applied to three steels: A514 bridge steel, A588 bridge steel, and A533B pressure vessel steel. Five sets of results from different laboratories are discussed here. Notches were prepared by spark erosion, although root radii varied from approximately 0.1 to 1.5 mm. Although fast fractures were successfully initiated, arrest did not occur in a significant number of cases. The results showed no obvious dependence of the crack arrest toughness, Ka (determined by a static analysis), on the crack initiation toughness, K0. It was found that Ka decreases markedly with increasing crack jump distance. A limited amount of further work on smaller specimens of the A533B steel showed that lower Ka values tended to be recorded. It is concluded that a number of points relating to the proposed test method and notch preparation are worthy of further consideration. It is pointed out that the proposed validity criteria may screen out lower-bound data. Nevertheless, for present practical purposes, Ka values may be regarded as useful in providing an estimate of arrest toughness - although not necessarily a conservative estimate. (orig./HP)

  18. A Proposal on the Quantitative Homogeneity Analysis Method of SEM Images for Material Inspections

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Song Hyun; Kim, Jong Woo; Shin, Chang Ho [Hanyang University, Seoul (Korea, Republic of); Choi, Jung-Hoon; Cho, In-Hak; Park, Hwan Seo [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-05-15

    A scanning electron microscope (SEM) is an instrument for inspecting the surface microstructure of materials. The SEM uses electron beams to image material surfaces at high magnification, so various chemical analyses can be performed from the SEM images. It is therefore widely used for material inspection, chemical characteristic analysis, and biological analysis. In the field of nuclear criticality analysis, the homogeneity of a compound material is an important parameter to check before the material is used in a nuclear system. In our previous study, we attempted to use the SEM for homogeneity analysis of materials. In this study, a quantitative homogeneity analysis method for SEM images is proposed for material inspections. The method is based on a stochastic analysis of the grayscale information in the SEM images.
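    One plausible reading of a grayscale-based stochastic homogeneity check, sketched on a synthetic image: tile the image and compare the tile histograms with a chi-square test. The tile size, bin count, and the specific test are assumptions for illustration, not the authors' procedure.

    ```python
    # Tile an SEM-like grayscale image and test whether the tiles share
    # the same grayscale distribution.
    import numpy as np
    from scipy.stats import chi2_contingency

    rng = np.random.default_rng(3)
    img = rng.integers(0, 256, size=(256, 256))   # stand-in SEM grayscale image

    tiles = [img[r:r + 64, c:c + 64].ravel()
             for r in range(0, 256, 64) for c in range(0, 256, 64)]
    hists = np.array([np.histogram(t, bins=16, range=(0, 256))[0] for t in tiles])

    chi2, p, dof, _ = chi2_contingency(hists)     # H0: same grayscale distribution
    print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3f}")
    ```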

  20. Finding the magnetic center of a quadrupole to high resolution: A draft proposal

    International Nuclear Information System (INIS)

    Fischer, G.E.; Cobb, J.K.; Jensen, D.R.

    1989-03-01

    In a companion proposal it is proposed to align the quadrupoles of a transport line to within transverse tolerances of 5 to 10 micrometers. Such a proposal is meaningful only if the effective magnetic center of such lenses can in fact be repeatably located with respect to some external mechanical tooling to comparable accuracy. It is the purpose of this note to describe some new methods and procedures that will accomplish this aim. It will be shown that these methods are capable of yielding greater sensitivity than the more traditional methods used in the past. The notion of the "nodal" point is exploited. 4 refs., 5 figs., 1 tab

  1. Comparing different methods for estimating radiation dose to the conceptus

    Energy Technology Data Exchange (ETDEWEB)

    Lopez-Rendon, X.; Dedulle, A. [KU Leuven, Department of Imaging and Pathology, Division of Medical Physics and Quality Assessment, Herestraat 49, box 7003, Leuven (Belgium); Walgraeve, M.S.; Woussen, S.; Zhang, G. [University Hospitals Leuven, Department of Radiology, Leuven (Belgium); Bosmans, H. [KU Leuven, Department of Imaging and Pathology, Division of Medical Physics and Quality Assessment, Herestraat 49, box 7003, Leuven (Belgium); University Hospitals Leuven, Department of Radiology, Leuven (Belgium); Zanca, F. [KU Leuven, Department of Imaging and Pathology, Division of Medical Physics and Quality Assessment, Herestraat 49, box 7003, Leuven (Belgium); GE Healthcare, Buc (France)

    2017-02-15

    To compare different methods available in the literature for estimating radiation dose to the conceptus (D_conceptus) against a patient-specific Monte Carlo (MC) simulation and a commercial software package (CSP). Eight voxel models from abdominopelvic CT exams of pregnant patients were generated. D_conceptus was calculated with an MC framework including patient-specific longitudinal tube current modulation (TCM). For the same patients, dose to the uterus, D_uterus, was calculated as an alternative for D_conceptus, with a CSP that uses a standard-size, non-pregnant phantom and a generic TCM curve. The percentage error between D_uterus and D_conceptus was studied. Dose to the conceptus and the percent error with respect to D_conceptus was also estimated for three methods in the literature. The percentage error ranged from -15.9% to 40.0% when comparing MC to CSP. When comparing the TCM profiles with the generic TCM profile from the CSP, differences were observed due to patient habitus and conceptus position. For the other methods, the percentage error ranged from -30.1% to 13.5% but applicability was limited. Estimating an accurate D_conceptus requires a patient-specific approach that the CSP investigated cannot provide. Available methods in the literature can provide a better estimation if applicable to patient-specific cases. (orig.)

  2. A comparative assessment of the value of imaging methods in diagnostic of osteonecrosis of the femoral head in adults

    International Nuclear Information System (INIS)

    Peshev, A.; Mlachkova, D.; Mlachkov, N.

    2006-01-01

    Full text: The aim of the presentation is to study the possibilities of contemporary methods for early diagnosis of osteonecrosis of the femoral head in adults, with a view to making a suitable diagnostic protocol. 156 hip joints were examined. Conventional radiography was performed in all of them, CT in 112, bone scan in 123, and MRI in 42. The findings of the imaging methods were compared, and the results were set against the extent of the clinical complaints. The findings of conventional radiography were classified according to Ficat and Arlet, and the CT findings according to Magit. The size and location of the necrotic area were evaluated as a prognostic factor. MRI was the most sensitive method in the early stages of necrosis of the femoral head, followed by bone scan and CT. Conventional radiography is suitable for the late stages of osteonecrosis. On the basis of our investigations we propose a diagnostic protocol for early diagnosis of osteonecrosis of the femoral head in adults

  3. Comparing a recursive digital filter with the moving-average and sequential probability-ratio detection methods for SNM portal monitors

    International Nuclear Information System (INIS)

    Fehlau, P.E.

    1993-01-01

    The author compared a recursive digital filter, proposed as a detection method for French special nuclear material monitors, with the author's detection methods, which employ a moving-average scaler or a sequential probability-ratio test. Each of nine test subjects repeatedly carried a test source through a walk-through portal monitor that had the same nuisance-alarm rate with each method. He found that the average detection probability for the test source is also the same for each method. However, the recursive digital filter may have one drawback: its exponentially decreasing response to past radiation intensity prolongs the impact of any interference from radiation sources or radiation-producing machinery. He also examined the influence of each test subject on the monitor's operation by measuring individual attenuation factors for background and source radiation, then ranked the subjects' attenuation factors against their individual probabilities of detecting the test source. The one inconsistent ranking was probably caused by that subject's unusually long stride when passing through the portal
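    The contrast between the two statistics can be sketched directly; the count rates, window length, and smoothing constant below are illustrative, and the recursive filter is written as a simple exponentially weighted average whose response to past intensity decays geometrically.

    ```python
    # Moving-average scaler versus a recursive (exponentially weighted)
    # filter on a fabricated count-rate record with a brief source passage.
    import numpy as np

    rng = np.random.default_rng(4)
    counts = rng.poisson(100, size=120).astype(float)  # background counts per interval
    counts[60:63] += 40                                # brief source passage

    window = np.convolve(counts, np.ones(8) / 8, mode="same")  # moving average

    ewma = np.empty_like(counts)
    ewma[0] = counts[0]
    alpha = 0.2
    for t in range(1, len(counts)):
        ewma[t] = alpha * counts[t] + (1 - alpha) * ewma[t - 1]  # recursive filter

    print("moving-average peak:", window.max(), " recursive-filter peak:", ewma.max())
    ```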

  4. Soil hydrophobicity: comparative study of usual determination methods

    Directory of Open Access Journals (Sweden)

    Eduardo Saldanha Vogelmann

    2015-02-01

    Full Text Available Hydrophobic or water-repellent soils absorb water slowly because of the low wettability of the soil particles, which are coated with hydrophobic organic substances. This has significant effects on plant growth, water infiltration and retention, surface runoff and erosion. The objective of this study was to compare the performance of the tension micro-infiltrometer (TMI) and water drop penetration time (WDPT) methods in determining the hydrophobicity index of eighteen soils from southern Brazil. Soil samples were collected from the 0-5 cm soil layer to determine particle size distribution, organic matter content, the hydrophobicity index of soil aggregates, and the droplet penetration time of disaggregated and sieved soil samples. For the TMI method the soil samples were subjected to only minor changes, owing to the use of macroaggregates, which preserve the distribution of solid constituents in the soil. Due to the homogeneity of the soil samples, the WDPT method gave smaller coefficients of variation, unlike the TMI method, where the soil structure is preserved. However, both methods had low coefficients of variation and are thus effective for determining soil hydrophobicity, especially when the log hydrophobicity index or log WDPT is >1.

  5. Proposal of a segmentation procedure for skid resistance data

    International Nuclear Information System (INIS)

    Tejeda, S. V.; Tampier, Hernan de Solominihac; Navarro, T.E.

    2008-01-01

    Skid resistance of pavements presents a high spatial variability along a road. This pavement characteristic is directly related to wet-weather accidents; therefore, it is important to identify and characterize homogeneous segments of skid resistance along a road in order to implement proper road safety management. Several data segmentation methods have been applied to other pavement characteristics (e.g. roughness); however, no application to skid resistance data was found during the literature review for this study. Typical segmentation methods are either too general or too specific to ensure a detailed segmentation of skid resistance data that can be used for managing pavement performance. The main objective of this paper is to propose a procedure for segmenting skid resistance data, based on existing data segmentation methods, that is efficient and fulfills road management requirements. The proposed procedure uses the leverage method to identify outlier data, the CUSUM method to accomplish the initial data segmentation, and a statistical method to group consecutive segments that are statistically similar. The statistical method applies Student's t-test for the equality of means, along with analysis of variance and the Tukey test for the multiple comparison of means. The proposed procedure was applied to a sample of skid resistance data measured with SCRIM (Side Force Coefficient Routine Investigatory Machine) on a 4.2 km section of Chilean road and was compared to conventional segmentation methods. Results showed that the proposed procedure is more efficient than the conventional segmentation procedures, achieving the minimum weighted sum of square errors (SSEp) with all the identified segments statistically different. Owing to its mathematical basis, the proposed procedure can be easily adapted and programmed for use in road safety management. (author)
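    A minimal sketch of the CUSUM step on synthetic skid-resistance data; the segment means and the simple slope-change heuristic are illustrative, not the paper's full procedure.

    ```python
    # The cumulative sum of deviations from the overall mean changes slope
    # at boundaries between homogeneous segments.
    import numpy as np

    rng = np.random.default_rng(5)
    sfc = np.concatenate([rng.normal(55, 2, 150),   # segment 1
                          rng.normal(45, 2, 100),   # segment 2
                          rng.normal(60, 2, 170)])  # segment 3

    cusum = np.cumsum(sfc - sfc.mean())
    breaks = [int(np.argmax(cusum)), int(np.argmin(cusum))]  # slope-change candidates
    print("candidate segment boundaries at indices:", sorted(breaks))
    ```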

  6. R package imputeTestbench to compare imputations methods for univariate time series

    OpenAIRE

    Bokde, Neeraj; Kulat, Kishore; Beck, Marcus W; Asencio-Cortés, Gualberto

    2016-01-01

    This paper describes the R package imputeTestbench that provides a testbench for comparing imputation methods for missing data in univariate time series. The imputeTestbench package can be used to simulate the amount and type of missing data in a complete dataset and compare filled data using different imputation methods. The user has the option to simulate missing data by removing observations completely at random or in blocks of different sizes. Several default imputation methods are includ...

  7. Q-Method Extended Kalman Filter

    Science.gov (United States)

    Zanetti, Renato; Ainscough, Thomas; Christian, John; Spanos, Pol D.

    2012-01-01

    A new algorithm is proposed that smoothly integrates non-linear estimation of the attitude quaternion using Davenport's q-method and estimation of non-attitude states through an extended Kalman filter. The new method is compared to a similar existing algorithm, showing its similarities and differences. The validity of the proposed approach is confirmed through numerical simulations.
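    Davenport's q-method itself is standard and compact; the sketch below builds the 4x4 Davenport matrix K from weighted vector-observation pairs and takes the dominant eigenvector as the attitude quaternion. The observation vectors and weights are toy values, and this shows only the q-method step, not the paper's integrated filter.

    ```python
    # Davenport's q-method: the optimal quaternion is the eigenvector of K
    # associated with its largest eigenvalue.
    import numpy as np

    def q_method(body, ref, w):
        B = sum(wi * np.outer(bi, ri) for wi, bi, ri in zip(w, body, ref))
        S = B + B.T
        sigma = np.trace(B)
        z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
        K = np.zeros((4, 4))
        K[:3, :3] = S - sigma * np.eye(3)
        K[:3, 3] = z
        K[3, :3] = z
        K[3, 3] = sigma
        vals, vecs = np.linalg.eigh(K)
        return vecs[:, np.argmax(vals)]   # quaternion (x, y, z, w), unit norm, up to sign

    ref = [np.array([1.0, 0, 0]), np.array([0, 1.0, 0])]
    body = [np.array([0, 1.0, 0]), np.array([-1.0, 0, 0])]  # 90 deg rotation about z
    print(q_method(body, ref, w=[0.5, 0.5]))
    ```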

  8. A proposed method of measuring the electric-dipole moment of the neutron by ultracold neutron interferometry

    International Nuclear Information System (INIS)

    Freedman, M.S.; Peshkin, M.; Ringo, G.R.; Dombeck, T.W.

    1989-08-01

    The use of an ultracold neutron interferometer incorporating an electrostatic accelerator having a strong electric field gradient to accelerate neutrons by their possible electric dipole moment is proposed as a method of measuring the neutron electric dipole moment. The method appears to have the possibility of extending the sensitivity of the measurement by several orders of magnitude, perhaps to 10^-30 e-cm. 9 refs., 3 figs.

  9. Generalized empirical likelihood methods for analyzing longitudinal data

    KAUST Repository

    Wang, S.

    2010-02-16

    Efficient estimation of parameters is a major objective in analyzing longitudinal data. We propose two generalized empirical likelihood based methods that take into consideration within-subject correlations. A nonparametric version of the Wilks theorem for the limiting distributions of the empirical likelihood ratios is derived. It is shown that one of the proposed methods is locally efficient among a class of within-subject variance-covariance matrices. A simulation study is conducted to investigate the finite sample properties of the proposed methods and compare them with the block empirical likelihood method by You et al. (2006) and the normal approximation with a correctly estimated variance-covariance. The results suggest that the proposed methods are generally more efficient than existing methods which ignore the correlation structure, and better in coverage compared to the normal approximation with correctly specified within-subject correlation. An application illustrating our methods and supporting the simulation study results is also presented.
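    For readers unfamiliar with the machinery, the standard empirical likelihood ratio for a parameter defined by an estimating equation, and its Wilks-type limit, are as follows (textbook notation, not the authors'):

    ```latex
    % Empirical likelihood ratio for E[g(X, theta)] = 0 and its chi-square limit.
    \begin{align*}
    R(\theta) &= \max\Big\{ \prod_{i=1}^{n} n p_i \;:\;
      p_i \ge 0,\ \sum_{i=1}^{n} p_i = 1,\ \sum_{i=1}^{n} p_i\, g(X_i,\theta) = 0 \Big\}, \\
    -2 \log R(\theta_0) &\xrightarrow{d} \chi^2_{q}, \qquad q = \dim g .
    \end{align*}
    ```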

  10. A Proposal of a Method to Measure and Evaluate the Effect to Apply External Support Measures for Owners by Construction Management Method, etc

    Science.gov (United States)

    Tada, Hiroshi; Miyatake, Ichiro; Mouri, Junji; Ajiki, Norihiko; Fueta, Toshiharu

    In Japan, various approaches have been taken to ensure the quality of public works and to support the procurement regimes of government agencies by utilizing external resources, including procurement support services and the construction management (CM) method. Although these measures to utilize external resources (hereinafter, external support measures) have been discussed, and follow-up surveys have shown their positive effects, the surveys only deal with the overall effect of an external support measure as a whole; the effect of each task item has not been addressed, and the extent to which the measure met the client's expectations is unknown. However, the effective use of external support measures in future cannot be achieved without knowing the purpose for which a measure was introduced, the effect expected for each task item, and the extent to which those expectations were fulfilled. Furthermore, it is important to clarify not only the effect relative to the client's expectations (performance), but also the public benefit of the measure (value improvement). From this point of view, there is no established method for determining the effect of a client's measure to utilize external resources. Against this background, this study takes the CM method as an example of an external support measure, proposes a method to measure and evaluate its effect for each task item, and suggests future issues and possible responses, with the aim of contributing to the promotion, improvement, and proper implementation of external support measures in future.

  11. Proposal of Innovative Approaches of Relationship Marketing in Business

    Directory of Open Access Journals (Sweden)

    Viliam Lendel

    2015-03-01

    Full Text Available The aim of this paper is to propose innovative approaches to relationship marketing that affect the process of building relationships with customers, based on a detailed analysis of the literature and our research. The proposal is supported by the information technologies e-CRM and social CRM. The paper contains a detailed description of a procedure for successfully implementing innovative approaches to relationship marketing in business. It should serve marketing managers mainly as a valuable tool for using innovative approaches to relationship marketing, especially in the process of obtaining innovative ideas from customers in order to identify their needs and requirements. Furthermore, the paper contains the main results of our research aimed at identifying the extent to which innovative approaches to relationship marketing are used in Slovak businesses. A total of 207 respondents from medium and large businesses were involved in the research, and the following methods were used: comparison, qualitative evaluation, structured interviews, observation, document analysis (content analysis), and a questionnaire.

  12. An experimental-numerical method for comparative analysis of joint prosthesis

    International Nuclear Information System (INIS)

    Claramunt, R.; Rincon, E.; Zubizarreta, V.; Ros, A.

    2001-01-01

    Analysing mechanical stresses in bones is highly difficult because of their complex mechanical and morphological characteristics. This complexity makes generalist modelling and conclusions derived from prototype tests very questionable. In this article, a relatively simple, systematic comparative analysis method that allows us to establish some behavioural differences between different kinds of prostheses is presented. The method, applicable in principle to any joint problem, is based on analysing the perturbations produced in the natural stress state of a bone after insertion of a joint prosthesis, and combines numerical analysis using a 3-D finite element model with experimental studies based on photoelastic coating and electric extensometry. The experimental method is applied to compare two cement-free femoral stems for total hip prostheses of different philosophies: one anatomic and of a new generation, set obliquely on cancellous bone, and the other madreporic, with trochantero-diaphyseal support on cortical bone. (Author) 4 refs.

  13. Proposal for a method to estimate nutrient shock effects in bacteria

    Directory of Open Access Journals (Sweden)

    Azevedo Nuno F

    2012-08-01

    Full Text Available Abstract Background Plating methods are still the gold standard in microbiology; however, some studies have shown that these techniques can underestimate microbial concentrations and diversity. A nutrient shock is one of the mechanisms proposed to explain this phenomenon. In this study, a tentative method to assess nutrient shock effects was tested. Findings To estimate the extent of nutrient shock effects, two strains isolated from tap water (Sphingomonas capsulata and Methylobacterium sp.) and two culture collection strains (E. coli CECT 434 and Pseudomonas fluorescens ATCC 13525) were exposed both to low and high nutrient conditions for different times and then plated on a low-nutrient medium (R2A) and a rich nutrient medium (TSA). The average improvement (A.I.) of recovery between R2A and TSA for the different times was calculated to assess more simply the difference in culturability obtained between the media. As expected, the A.I. was higher when cells were plated after exposure to water than when they were recovered from high-nutrient medium, showing the existence of a nutrient shock for the diverse bacteria used. S. capsulata was the species most affected by this phenomenon. Conclusions This work provides a method to consistently determine the extent of nutrient shock effects on different microorganisms and hence quantify the ability of each species to deal with sudden increases in substrate concentration.

  14. Contribution for an Urban Geomorphoheritage Assessment Method: Proposal from Three Geomorphosites in Rome (Italy

    Directory of Open Access Journals (Sweden)

    Pica Alessia

    2017-09-01

    Full Text Available Urban geomorphology has important implications for the spatial planning of human activities, and it also has geotouristic potential due to the relationship between cultural and geomorphological heritage. Despite the introduction of the term Anthropocene to describe the deep influence that human activities have had in recent times on Earth's evolution, urban geomorphological heritage studies are relatively rare and limited, and urban geotourism development is recent. The analysis of complex urban landscapes often needs the integration of multidisciplinary data. This study aims to propose the first urban geomorphoheritage assessment method, which originates from long-lasting previous geomorphological and geotouristic studies of the Rome city centre; it presents rare examples of the geomorphological mapping of a metropolis and, at the same time, of an inventory of urban geomorphosites. The proposal is applied to geomorphosites in the Esquilino neighbourhood of Rome, whose analysis confirms the need for an ad hoc method for assessing urban geomorphosites, as already highlighted in the most recent literature on the topic. The urban geomorphoheritage assessment method is based on: (i) urban geomorphological analysis by means of multitemporal and multidisciplinary data; (ii) the geomorphosite inventory; and (iii) geomorphoheritage assessment and enhancement. One challenge is to assess the invisible geomorphosites that are widespread in the urban context. To this aim, we reworked the attributes describing the Value of a site for Geotourism in order to build up a specific methodology for the analysis of urban geomorphological heritage.

  15. Comparative characteristic of the methods of protein antigens epitope mapping

    Directory of Open Access Journals (Sweden)

    O. Yu. Galkin

    2014-08-01

    Full Text Available A comparative analysis of experimental methods for the epitope mapping of protein antigens has been carried out. The vast majority of known techniques involve immunochemical study of the interaction of protein molecules or peptides with antibodies of the corresponding specificity. The most effective and widely applicable methodological techniques are those that use synthetic and genetically engineered peptides. Over the past 30 years, these groups of methods have travelled a notable evolutionary path towards maximum automation and the detection of antigenic determinants of various types (linear and conformational epitopes, and mimotopes). Most epitope-searching algorithms have been integrated into computer programs, which greatly facilitates the analysis of experimental data and makes it possible to create spatial models. It is possible to use comparative epitope mapping for solving applied problems; this less time-consuming method is based on the analysis of competition between the interactions of different antibodies with the same antigen. The physical methods for studying antigenic structure are X-ray analysis of antigen-antibody complexes, which may be applied only to crystallizable proteins, and nuclear magnetic resonance.

  16. COMPARATIVE ANALYSIS OF ESTIMATION METHODS OF PHARMACY ORGANIZATION BANKRUPTCY PROBABILITY

    Directory of Open Access Journals (Sweden)

    V. L. Adzhienko

    2014-01-01

    Full Text Available The purpose of this study was to determine the probability of bankruptcy by various methods in order to predict the financial crisis of a pharmacy organization. The probability of pharmacy organization bankruptcy was estimated using W. Beaver's method as adopted in the Russian Federation, together with an integrated assessment of financial stability based on scoring analysis. The results obtained by the different methods are comparable and show that the risk of bankruptcy of the pharmacy organization is small.

  17. Comparability and repeatability of three commonly used methods for measuring endurance capacity.

    Science.gov (United States)

    Baxter-Gilbert, James; Mühlenhaupt, Max; Whiting, Martin J

    2017-12-01

    Measures of endurance (time to exhaustion) have been used to address a wide range of questions in ecomorphological and physiological research, as well as being used as a proxy for survival and fitness. Swimming, stationary (circular) track running, and treadmill running are all commonly used methods for measuring endurance. Despite the use of these methods across a broad range of taxa, how comparable these methods are to one another, and whether they are biologically relevant, is rarely examined. We used Australian water dragons (Intellagama lesueurii), a species that is morphologically adept at climbing, swimming, and running, to compare these three methods of endurance and examined if there is repeatability within and between trial methods. We found that time to exhaustion was not highly repeatable within a method, suggesting that single measures or a mean time to exhaustion across trials are not appropriate. Furthermore, we compared mean maximal endurance times among the three methods, and found that the two running methods (i.e., stationary track and treadmill) were similar, but swimming was distinctly different, resulting in lower mean maximal endurance times. Finally, an individual's endurance rank was not repeatable across methods, suggesting that the three endurance trial methods are not providing similar information about an individual's performance capacity. Overall, these results highlight the need to carefully match a measure of performance capacity with the study species and the research questions being asked so that the methods being used are behaviorally, ecologically, and physiologically relevant. © 2018 Wiley Periodicals, Inc.

  18. Same Content, Different Methods: Comparing Lecture, Engaged Classroom, and Simulation.

    Science.gov (United States)

    Raleigh, Meghan F; Wilson, Garland Anthony; Moss, David Alan; Reineke-Piper, Kristen A; Walden, Jeffrey; Fisher, Daniel J; Williams, Tracy; Alexander, Christienne; Niceler, Brock; Viera, Anthony J; Zakrajsek, Todd

    2018-02-01

    There is a push to use classroom technology and active teaching methods to replace didactic lectures as the most prevalent format for resident education. This multisite collaborative cohort study involving nine residency programs across the United States compared a standard slide-based didactic lecture, a facilitated group discussion via an engaged classroom, and a high-fidelity, hands-on simulation scenario for teaching the topic of acute dyspnea. The primary outcome was knowledge retention at 2 to 4 weeks. Each teaching method was assigned to three different residency programs in the collaborative according to local resources. Learning objectives were determined by faculty. Pre- and posttest questions were validated and utilized as a measurement of knowledge retention. Each site administered the pretest, taught the topic of acute dyspnea utilizing their assigned method, and administered a posttest 2 to 4 weeks later. Differences between the groups were compared using paired t-tests. A total of 146 residents completed the posttest, and scores increased from baseline across all groups. The average score increased 6% in the standard lecture group (n=47), 11% in the engaged classroom (n=53), and 9% in the simulation group (n=56). The differences in improvement between engaged classroom and simulation were not statistically significant. Compared to standard lecture, both engaged classroom and high-fidelity simulation were associated with a statistically significant improvement in knowledge retention. Knowledge retention after engaged classroom and high-fidelity simulation did not significantly differ. More research is necessary to determine if different teaching methods result in different levels of comfort and skill with actual patient care.

  19. Mixed-Methods for Comparing Tobacco Cessation Interventions.

    Science.gov (United States)

    Momin, Behnoosh; Neri, Antonio; Zhang, Lei; Kahende, Jennifer; Duke, Jennifer; Green, Sonya Goode; Malarcher, Ann; Stewart, Sherri L

    2017-03-01

    The National Comprehensive Cancer Control Program (NCCCP) and National Tobacco Control Program (NTCP) are both well-positioned to promote the use of population-based tobacco cessation interventions, such as state quitlines and Web-based interventions. This paper outlines the methodology used to conduct a comparative effectiveness research study of traditional and Web-based tobacco cessation and quitline promotion approaches. A mixed-methods study with three components was designed to address the effect of promotional activities on service usage and the comparative effectiveness of population-based smoking cessation activities across multiple states. The cessation intervention component followed 7,902 smokers (4,307 quitline users and 3,595 Web intervention users) to ascertain the prevalence of 30-day abstinence 7 months after registering for smoking cessation services. User characteristics and quit success were compared across the two modalities. In the promotions component, the reach and use of traditional and innovative promotion strategies were assessed for 24 states, including online advertising, state Web sites, social media, and mobile applications, and their effects on quitline call volume. The partnership intervention component studied the extent of collaboration among six selected NCCCPs and NTCPs. This study will guide program staff and clinicians with evidence-based recommendations and best practices for implementing tobacco cessation within their patient and community populations and establish an evidence base that can be used for decision making.

  20. A proposal of parameter determination method in the residual strength degradation model for the prediction of fatigue life (I)

    International Nuclear Information System (INIS)

    Kim, Sang Tae; Jang, Seong Soo

    2001-01-01

    Static and fatigue tests have been carried out to verify the validity of a generalized residual strength degradation model, and a new method of parameter determination in the model is verified experimentally to account for the effect of tension-compression fatigue loading on spheroidal graphite cast iron. It is shown that the correlation between the experimental results and the theoretical prediction of the statistical distribution of fatigue life using the proposed method is very reasonable. Furthermore, the correlation between the theoretical prediction and the experimental fatigue life results for tension-tension fatigue data in a composite material also appears reasonable. Therefore, the proposed method is more adaptable for determining the parameters than the maximum likelihood method and the minimization technique

  1. Methods for the comparative evaluation of pharmaceuticals

    Directory of Open Access Journals (Sweden)

    Busse, Reinhard

    2005-11-01

    Full Text Available Political background: As a German novelty, the Institute for Quality and Efficiency in Health Care (Institut für Qualität und Wirtschaftlichkeit im Gesundheitswesen; IQWiG) was established in 2004 to, among other tasks, evaluate the benefit of pharmaceuticals. In this context it is of importance that patented pharmaceuticals are only excluded from the reference pricing system if they offer a therapeutic improvement. The institute is commissioned by the Federal Joint Committee (Gemeinsamer Bundesausschuss, G-BA) or by the Ministry of Health and Social Security. The German policy objective expressed by the latest health care reform (Gesetz zur Modernisierung der Gesetzlichen Krankenversicherung, GMG) is to base decisions on a scientific assessment of pharmaceuticals in comparison to already available treatments. However, procedures and methods are still to be established. Research questions and methods: This health technology assessment (HTA) report was commissioned by the German Agency for HTA at the Institute for Medical Documentation and Information (DAHTA@DIMDI). It analysed the criteria, procedures, and methods of comparative drug assessment in other EU/OECD countries. The research question was the following: How do national public institutions compare medicines in connection with pharmaceutical regulation, i.e. the licensing, reimbursement and pricing of drugs? Institutions as well as documents concerning comparative drug evaluation (e.g. regulations, guidelines) were identified through internet, systematic literature, and hand searches. Publications were selected according to pre-defined inclusion and exclusion criteria. Documents were analysed in a qualitative manner following an analytic framework that had been developed in advance. Results were summarised narratively and presented in evidence tables. Results and discussion: Currently licensing agencies do not systematically assess a new drug's added value for patients and society. This is why many

  2. Comparative study of in-situ filter test methods

    International Nuclear Information System (INIS)

    Marshall, M.; Stevens, D.C.

    1981-01-01

    Available methods of testing high efficiency particulate aerosol (HEPA) filters in-situ have been reviewed. In order to understand the relationship between the results produced by different methods, a selection has been compared. Various pieces of equipment for generating and detecting aerosols have been tested and their suitability assessed. Condensation-nuclei, DOP (di-octyl phthalate) and sodium-flame in-situ filter test methods have been studied, using the 500 cfm (9000 m3/h) filter test rig at Harwell and in the field. Both the sodium-flame and DOP methods measure the penetration through leaks and filter material. However, the measured penetration through filtered leaks depends on the aerosol size distribution and the detection method. Condensation-nuclei test methods can only be used to measure unfiltered leaks, since condensation nuclei have a very low penetration through filtered leaks. A combination of methods would enable filtered and unfiltered leaks to be measured. A condensation-nucleus counter using n-butyl alcohol as the working fluid has the advantage of being able to detect any particle up to 1 μm in diameter, including DOP, and so could be used for this purpose. A single-particle counter has not been satisfactory because of interference from particles leaking into systems under extract, particularly downstream of filters, and because the concentration of the input aerosol has to be severely limited. The sodium-flame method requires a skilled operator and may cause safety and corrosion problems. The DOP method using a total light-scattering detector has so far been the most satisfactory. It is fairly easy to use, measures reasonably low values of penetration and gives rapid results. DOP has had no adverse effect on HEPA filters over a long series of tests

  3. Principles, Methods of Participatory Research: Proposal for Draft Animal Power

    Directory of Open Access Journals (Sweden)

    E. Chia

    2004-03-01

    Full Text Available The meeting of researchers who question the efficiency of their actions when they accompany stakeholders during change processes provides an opportunity to reflect on the research methods to develop when working together with stakeholders: participative research, research-action, research-intervention… The author proposes to present the research-action approach as new. While all three phases of research-action are important, the negotiation phase is essential, because it enables the formalization of the contract among partners (the ethical aspect), the development of a common language, and the formalization of structuring efforts between researchers of various specialties and stakeholders. In the research-action approach, the managing set-ups (scientific committees…) play a major role: they guarantee at the same time a solution to problems, production, and the legitimacy of the scientific knowledge produced. In conclusion, the author suggests ways to develop research-action in the field of animal traction in order to conceive new socio-technical and organizational innovations that will make the use of this technique easier.

  4. Proposal for a new detection method of substance abuse risk in Croatian adolescents

    Directory of Open Access Journals (Sweden)

    Sanja Tatalovic Vorkapic

    2011-01-01

    Full Text Available One of the most important factors in successful substance abuse treatment is an early start to treatment. The current method for identifying Croatian adolescents at risk of substance abuse, drug testing of urine samples, is simple and exact, but it is applied rarely and usually under pressure from parents or the courts. Besides, this method has been a source of legal and ethical questions. The application of standardized psychological tests during the systematic medical exams of Croatian adolescents aged 15-22 years is therefore proposed, as it could help with the early detection of adolescents who are at risk of substance abuse or have already developed an addiction.

  5. Comparing the Efficacy of Excitatory Transcranial Stimulation Methods Measuring Motor Evoked Potentials

    Directory of Open Access Journals (Sweden)

    Vera Moliadze

    2014-01-01

    Full Text Available The common aim of transcranial stimulation methods is the induction or alteration of cortical excitability in a controlled way. Significant effects of each individual stimulation method have been published; however, conclusive direct comparisons of many of these methods are rare. The aim of the present study was to compare, within one subject group, the efficacy of three widely applied stimulation methods that induce excitability enhancement in the motor cortex: 1 mA anodal transcranial direct current stimulation (atDCS), intermittent theta burst stimulation (iTBS), and 1 mA transcranial random noise stimulation (tRNS). The effect of each stimulation condition was quantified by evaluating motor evoked potential (MEP) amplitudes in a fixed time sequence after stimulation. The analyses confirmed a significant enhancement of M1 excitability caused by all three types of active stimulation compared to sham stimulation. There was no significant difference between the types of active stimulation, although the time courses of the excitatory effects differed slightly. Among the stimulation methods, tRNS resulted in the strongest MEP increase, and atDCS in the significantly longest, compared to sham. The different time courses of the applied stimulation methods suggest different underlying mechanisms of action. Better understanding may be useful for better targeting of the different transcranial stimulation techniques.

  6. The Constant Comparative Analysis Method Outside of Grounded Theory

    Science.gov (United States)

    Fram, Sheila M.

    2013-01-01

    This commentary addresses the gap in the literature regarding discussion of the legitimate use of Constant Comparative Analysis Method (CCA) outside of Grounded Theory. The purpose is to show the strength of using CCA to maintain the emic perspective and how theoretical frameworks can maintain the etic perspective throughout the analysis. My…

  7. A Method to Compare the Descriptive Power of Different Types of Petri Nets

    DEFF Research Database (Denmark)

    Jensen, Kurt

    1980-01-01

    The purpose of this paper is to show how the descriptive power of different types of Petri nets can be compared without the use of Petri net languages. Moreover, the paper proposes an extension of condition/event-nets, and it is shown that this extension has the same descriptive power as condition/event-nets.

  8. FIFRA Peer Review: Proposed Risk Assessment Methods Process

    Science.gov (United States)

    From September 11-14, 2012, EPA participated in a Federal Insecticide, Fungicide and Rodenticide Act Scientific Advisory Panel (SAP) meeting on a proposed pollinator risk assessment framework for determining the potential risks of pesticides to honey bees.

  9. Comparing a novel automatic 3D method for LGE-CMR quantification of scar size with established methods.

    Science.gov (United States)

    Woie, Leik; Måløy, Frode; Eftestøl, Trygve; Engan, Kjersti; Edvardsen, Thor; Kvaløy, Jan Terje; Ørn, Stein

    2014-02-01

    Current methods for the estimation of infarct size by late gadolinium enhancement cardiac magnetic resonance (LGE-CMR) imaging are based upon 2D analysis, which first determines the size of the infarction in each slice and then adds the infarct sizes from each slice to generate a volume. We present a novel, automatic 3D method that estimates infarct size by a simultaneous analysis of all pixels from all slices. In a population of 54 patients with ischemic scars, the infarct size estimated by the automatic 3D method was compared with four established 2D methods. The new 3D method defined scar as the sum of all pixels with signal intensity (SI) ≥35 % of max SI from the complete myocardium, the border zone as SI 35-50 % of max SI, and the core as SI ≥50 % of max SI. The 3D method yielded smaller infarct size (-2.8 ± 2.3 %) and core size (-3.0 ± 1.7 %) than the 2D method most similar to ours. There was no difference in the size of the border zone (0.2 ± 1.4 %). The 3D method demonstrated stronger correlations between scar size and left ventricular (LV) remodelling parameters (e.g. LV ejection fraction: r = -0.71). The 3D automatic method requires no manual demarcation of the scar; it is less time-consuming and correlates more strongly with remodelling parameters than existing methods.
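
    A minimal sketch of this 3D thresholding rule in Python with NumPy, assuming the signal intensities and a myocardial segmentation mask are already available as arrays (the array names are hypothetical; the segmentation itself is not shown):

      import numpy as np

      def classify_scar_3d(si, myo_mask):
          """Classify myocardial pixels from all slices simultaneously (3D method).

          si       : 3D array of signal intensities (slices x rows x cols)
          myo_mask : boolean array of the same shape marking the myocardium
          Returns boolean masks for total scar, border zone, and core.
          """
          max_si = si[myo_mask].max()                # global max over the whole myocardium
          scar   = myo_mask & (si >= 0.35 * max_si)  # scar: SI >= 35 % of max SI
          core   = myo_mask & (si >= 0.50 * max_si)  # core: SI >= 50 % of max SI
          border = scar & ~core                      # border zone: SI 35-50 % of max SI
          return scar, border, core

    Because the maximum is taken over the complete myocardium rather than slice by slice, all slices are analyzed simultaneously, which is the essential difference from the 2D methods.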

  10. Maxillary sinusitis - a comparative study of different imaging diagnosis methods

    International Nuclear Information System (INIS)

    Hueb, Marcelo Miguel; Borges, Fabiano de Almeida; Pulcinelli, Emilte; Souza, Wandir Ferreira; Borges, Luiz Marcondes

    1999-01-01

    We conducted a prospective study comparing different methods (plain X-rays, computed tomography and A-mode ultrasonography) for the initial diagnosis of maxillary sinusitis. Twenty patients (40 maxillary sinuses) with a clinical history suggestive of sinusitis were included in this study. The results were classified as abnormal or normal, using computed tomography as the gold standard. The sensitivity of ultrasonography and plain X-rays was 84.6% and 69.2%, respectively. The specificity of both methods was 92.6%. This study suggests that ultrasonography can be used as a good follow-up method for patients with maxillary sinusitis. (author)

  11. Proposed efficient method for ticket booking (PEMTB) | Ahmed ...

    African Journals Online (AJOL)

    Journal of Fundamental and Applied Sciences. Journal Home ... We used AngularJS and Ionic for the front end, Node.js and Express.js for the back end, and MongoDB for the database. ... Our proposed system is entirely digital (softcopy).

  12. Proposal of Screening Method of Sleep Disordered Breathing Using Fiber Grating Vision Sensor

    Science.gov (United States)

    Aoki, Hirooki; Nakamura, Hidetoshi; Nakajima, Masato

    Every conventional respiration monitoring technique requires at least one sensor to be attached to the body of the subject during measurement, thereby imposing a sense of restraint that results in aversion to measurements lasting over consecutive days. To solve this problem, we developed a respiration monitoring system for sleepers that uses a fiber-grating vision sensor, a type of active image sensor, to achieve non-contact respiration monitoring. In this paper, we verify the effectiveness of the system and propose a screening method for sleep disordered breathing. It is shown that our system measures respiration equivalently to a thermistor and an accelerograph. Moreover, the respiratory condition of sleepers can be grasped at a glance with our screening method, which appears useful for supporting the screening of sleep disordered breathing.

  13. A Comparative Numerical Study of the Spectral Theory Approach of Nishimura and the Roots Method Based on the Analysis of BDMMAP/G/1 Queue

    Directory of Open Access Journals (Sweden)

    Arunava Maity

    2015-01-01

    Full Text Available This paper considers an infinite-buffer queueing system with a birth-death modulated Markovian arrival process (BDMMAP) and arbitrary service time distribution. BDMMAP is an excellent representation of the arrival process where fractal behavior such as burstiness, correlation, and self-similarity is observed, for example, in Ethernet LAN traffic systems. This model was first analyzed by Nishimura (2003), who proposed a twofold spectral theory approach for it. It appears from the investigations that Nishimura’s approach is tedious and difficult to employ for practical purposes. The objective of this paper is to analyze the same model with an alternative methodology proposed by Chaudhry et al. (2013) (to be referred to as the CGG method). The CGG method appears to be rather simple, mathematically tractable, and easy to implement compared to Nishimura’s approach. The crux of the CGG method is the roots of the characteristic equation associated with the probability generating function (pgf) of the queue length distribution, which dispenses with any eigenvalue algebra and iterative analysis. Both methods are presented in a stepwise manner for easy accessibility, followed by some illustrative examples in accordance with the context.
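
    The roots idea at the heart of the CGG method can be illustrated with a hedged sketch: once the denominator of the pgf is written as a polynomial, the roots inside the unit disk fix the unknown boundary probabilities. The coefficients below are invented for illustration and do not come from the BDMMAP/G/1 model itself:

      import numpy as np

      # Hypothetical characteristic polynomial c0 + c1*z + c2*z**2 + c3*z**3 = 0
      coeffs = [0.3, -1.0, 0.5, 0.2]             # ascending order, made-up values

      roots = np.roots(coeffs[::-1])             # np.roots expects descending order
      # z = 1 is always a root for a proper pgf and is excluded; only roots
      # strictly inside the unit disk are used to fix boundary probabilities.
      inside = roots[np.abs(roots) < 1 - 1e-9]
      print("roots inside the unit disk:", inside)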

  14. Accounting comparability and the accuracy of peer-based valuation models

    NARCIS (Netherlands)

    Young, S.; Zeng, Y.

    2015-01-01

    We examine the link between enhanced accounting comparability and the valuation performance of pricing multiples. Using the warranted multiple method proposed by Bhojraj and Lee (2002, Journal of Accounting Research), we demonstrate how enhanced accounting comparability leads to better peer-based

  15. Comparative analysis of methods for concentrating venom from jellyfish Rhopilema esculentum Kishinouye

    Science.gov (United States)

    Li, Cuiping; Yu, Huahua; Feng, Jinhua; Chen, Xiaolin; Li, Pengcheng

    2009-02-01

    In this study, several methods were compared for their efficiency in concentrating venom from the tentacles of the jellyfish Rhopilema esculentum Kishinouye. The results show that methods using either freeze-drying or gel absorption to remove water are not applicable due to the low concentration of the dissolved compounds. Although the recovery efficiency and the total venom obtained using the dialysis dehydration method are high, some proteins can be lost during the concentrating process. Compared to the lyophilization method, ultrafiltration is a simple way to concentrate the compounds at a high percentage, but the hemolytic activities of the proteins obtained by ultrafiltration appear to be lower. Our results suggest that, overall, lyophilization is the best and recommended method to concentrate venom from the tentacles of jellyfish, showing not only high recovery efficiency for the venom but also high hemolytic activity.

  16. PHARMACOPOEIA METHODS FOR ELEMENTAL ANALYSIS OF MEDICINES: A COMPARATIVE STUDY

    Directory of Open Access Journals (Sweden)

    Tetiana M. Derkach

    2018-01-01

    Full Text Available The article is devoted to the problem of quality assurance of medicinal products, namely the determination of elemental impurity concentrations relative to permitted daily exposures, and the correct choice of analytical methods adequate to the formulated tasks. The paper's goal is to compare the characteristics of four analytical methods recommended by the Pharmacopoeias of various countries to control the content of elemental impurities in medicines, including medicinal plant raw materials and herbal medicines. Both advantages and disadvantages are described for atomic absorption spectroscopy with various atomising techniques, as well as for atomic emission spectroscopy and mass spectrometry with inductively coupled plasma. The choice of the most rational analysis method depends on the research task and is reasoned from the viewpoint of analytical objectives, possible complications, performance attributes, and economic considerations. The methods of ICP-MS and GFAAS were shown to provide the greatest potential for determining low and ultra-low concentrations of chemical elements in medicinal plants and herbal medicinal products. The other two methods, FAAS and ICP-AES, are limited to the analysis of the main essential elements and the largest impurities. ICP-MS is the most efficient method for determining ultra-low concentrations. However, the interference of mass peaks is typical for ICP-MS; it is formed not only by impurities but also by polyatomic ions involving argon, atoms of gases from the air (C, N and O), or the matrix (O, N, H, P, S and Cl). Therefore, correct sample preparation, which guarantees minimal impurity contamination and loss of analytes, becomes the most crucial stage of analytical applications of ICP-MS. The detection limits for some chemical elements, whose content is regulated in modern Pharmacopoeias, were estimated for each method and analysis conditions of medicinal plant raw

  17. Comparing groups randomization and bootstrap methods using R

    CERN Document Server

    Zieffler, Andrew S; Long, Jeffrey D

    2011-01-01

    A hands-on guide to using R to carry out key statistical practices in educational and behavioral sciences research. Computing has become an essential part of the day-to-day practice of statistical work, broadening the types of questions that can now be addressed by research scientists applying newly derived data analytic techniques. Comparing Groups: Randomization and Bootstrap Methods Using R emphasizes the direct link between scientific research questions and data analysis. Rather than relying on mathematical calculations, this book focuses on conceptual explanations and

  18. Comparison of power curve monitoring methods

    Directory of Open Access Journals (Sweden)

    Cambron Philippe

    2017-01-01

    Full Text Available Performance monitoring is an important aspect of operating wind farms. This can be done through power curve monitoring (PCM) of wind turbines (WT). In recent years, important work has been conducted on PCM. Various methodologies have been proposed, each with interesting results. However, it is difficult to compare these methods because they have been developed using their respective data sets. The objective of the present work is to compare some of the proposed PCM methods using common data sets. The metric used to compare the PCM methods is the time needed to detect a change in the power curve. Two power curve models are covered to establish the effect the model type has on the monitoring outcomes. Each model was tested with two control charts. Other methodologies and metrics proposed in the literature for power curve monitoring, such as areas under the power curve and the use of statistical copulas, have also been covered. Results demonstrate that model-based PCM methods are more reliable at detecting a performance change than other methodologies and that the effectiveness of the control chart depends on the type of shift observed.
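
    As a hedged illustration of a model-based PCM scheme paired with a control chart, the sketch below fits a binned reference power curve and runs an EWMA chart on the residuals of newly observed data; the bin width, chart constants, and data arrays are assumptions rather than the specific models tested in the paper:

      import numpy as np

      def detect_power_shift(wind, power, wind_new, power_new, lam=0.1, L=3.0):
          """Binned reference power curve + EWMA control chart on residuals.

          Returns the index of the first out-of-control sample, or None.
          Assumes wind speeds in [0, 25) m/s and every bin populated by reference data.
          """
          bins = np.arange(0.0, 25.0, 1.0)                       # 1 m/s wind-speed bins
          idx = np.digitize(wind, bins)
          ref = np.array([power[idx == i].mean() for i in range(1, len(bins) + 1)])

          resid = power_new - ref[np.digitize(wind_new, bins) - 1]
          sigma = resid.std()

          z = 0.0
          for t, r in enumerate(resid):                          # EWMA recursion
              z = lam * r + (1 - lam) * z
              limit = L * sigma * np.sqrt(lam / (2 - lam))       # asymptotic EWMA limit
              if abs(z) > limit:
                  return t
          return None

    The detection time returned by such a routine corresponds directly to the comparison metric used in the study.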

  19. A Generalized Method for the Comparable and Rigorous Calculation of the Polytropic Efficiencies of Turbocompressors

    Science.gov (United States)

    Dimitrakopoulos, Panagiotis

    2018-03-01

    The calculation of polytropic efficiencies is a very important task, especially during the development of new compression units, like compressor impellers, stages and stage groups. Such calculations are also crucial for the determination of the performance of a whole compressor. As processors and computational capacities have substantially improved in recent years, the need emerged for a new, rigorous, robust, accurate and at the same time standardized method for the computation of polytropic efficiencies, especially one based on the thermodynamics of real gases. The proposed method is based on the rigorous definition of the polytropic efficiency. The input consists of pressure and temperature values at the end points of the compression path (suction and discharge) for a given working fluid. The average relative error for the studied cases was 0.536 %. Thus, this high-accuracy method is proposed for efficiency calculations related to turbocompressors and their compression units, especially when they are operating at high power levels, for example in jet engines and high-power plants.
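
    The paper's real-gas procedure is not reproduced in this record; as an orientation only, the ideal-gas limit of the rigorous definition reduces to a closed form in the suction and discharge states, sketched below (the function name and example numbers are illustrative):

      import math

      def polytropic_efficiency_ideal_gas(p1, T1, p2, T2, kappa=1.4):
          """Ideal-gas limit of the polytropic efficiency for compression.

          p1, p2 in any consistent pressure unit; T1, T2 in kelvin; kappa = cp/cv.
          The method discussed above generalizes this to real-gas thermodynamics.
          """
          return (kappa - 1.0) / kappa * math.log(p2 / p1) / math.log(T2 / T1)

      # Example: air compressed from 1 bar / 293 K to 4 bar / 480 K -> about 0.80
      print(polytropic_efficiency_ideal_gas(1e5, 293.0, 4e5, 480.0))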

  20. Comparing risk profiles of individuals diagnosed with diabetes by OGTT and HbA1c

    DEFF Research Database (Denmark)

    Borg, R.; Vistisen, D.; Witte, D.R.

    2010-01-01

    Glycated haemoglobin (HbA(1c)) has been proposed as an alternative to the oral glucose tolerance test for diagnosing diabetes. We compared the cardiovascular risk profile of individuals identified by these two alternative methods.

  1. A CTSA Agenda to Advance Methods for Comparative Effectiveness Research

    Science.gov (United States)

    Helfand, Mark; Tunis, Sean; Whitlock, Evelyn P.; Pauker, Stephen G.; Basu, Anirban; Chilingerian, Jon; Harrell Jr., Frank E.; Meltzer, David O.; Montori, Victor M.; Shepard, Donald S.; Kent, David M.

    2011-01-01

    Clinical research needs to be more useful to patients, clinicians, and other decision makers. To meet this need, more research should focus on patient‐centered outcomes, compare viable alternatives, and be responsive to individual patients’ preferences, needs, pathobiology, settings, and values. These features, which make comparative effectiveness research (CER) fundamentally patient‐centered, challenge researchers to adopt or develop methods that improve the timeliness, relevance, and practical application of clinical studies. In this paper, we describe 10 priority areas that address 3 critical needs for research on patient‐centered outcomes (PCOR): (1) developing and testing trustworthy methods to identify and prioritize important questions for research; (2) improving the design, conduct, and analysis of clinical research studies; and (3) linking the process and outcomes of actual practice to priorities for research on patient‐centered outcomes. We argue that the National Institutes of Health, through its clinical and translational research program, should accelerate the development and refinement of methods for CER by linking a program of methods research to the broader portfolio of large, prospective clinical and health system studies it supports. Insights generated by this work should be of enormous value to PCORI and to the broad range of organizations that will be funding and implementing CER. Clin Trans Sci 2011; Volume 4: 188–198 PMID:21707950

  2. Relating two proposed methods for speedup of algorithms for fitting two- and three-way principal component and related multilinear models

    NARCIS (Netherlands)

    Kiers, Henk A.L.; Harshman, Richard A.

    Multilinear analysis methods such as component (and three-way component) analysis of very large data sets can become very computationally demanding and even infeasible unless some method is used to compress the data and/or speed up the algorithms. We discuss two previously proposed speedup methods.

  3. Comparative study of protoporphyrin IX fluorescence image enhancement methods to improve an optical imaging system for oral cancer detection

    Science.gov (United States)

    Jiang, Ching-Fen; Wang, Chih-Yu; Chiang, Chun-Ping

    2011-07-01

    Optoelectronics techniques to induce protoporphyrin IX fluorescence with topically applied 5-aminolevulinic acid on the oral mucosa have been developed to noninvasively detect oral cancer. Fluorescence imaging enables wide-area screening for oral premalignancy, but the lack of an adequate fluorescence enhancement method restricts the clinical imaging application of these techniques. This study aimed to develop a reliable fluorescence enhancement method to improve PpIX fluorescence imaging systems for oral cancer detection. Three contrast features, red-green-blue reflectance difference, R/B ratio, and R/G ratio, were developed first based on the optical properties of the fluorescence images. A comparative study was then carried out with one negative control and four biopsy confirmed clinical cases to validate the optimal image processing method for the detection of the distribution of malignancy. The results showed the superiority of the R/G ratio in terms of yielding a better contrast between normal and neoplastic tissue, and this method was less prone to errors in detection. Quantitative comparison with the clinical diagnoses in the four neoplastic cases showed that the regions of premalignancy obtained using the proposed method accorded with the expert's determination, suggesting the potential clinical application of this method for the detection of oral cancer.

  4. Gradient matching methods for computational inference in mechanistic models for systems biology: a review and comparative analysis

    Directory of Open Access Journals (Sweden)

    Benn eMacdonald

    2015-11-01

    Full Text Available Parameter inference in mathematical models of biological pathways, expressed as coupled ordinary differential equations (ODEs), is a challenging problem in contemporary systems biology. Conventional methods involve repeatedly solving the ODEs by numerical integration, which is computationally onerous and does not scale up to complex systems. Aimed at reducing the computational costs, new concepts based on gradient matching have recently been proposed in the computational statistics and machine learning literature. In a preliminary smoothing step, the time series data are interpolated; then, in a second step, the parameters of the ODEs are optimised so as to minimise some metric measuring the difference between the slopes of the tangents to the interpolants and the time derivatives from the ODEs. In this way, the ODEs never have to be solved explicitly. This review provides a concise methodological overview of the current state-of-the-art methods for gradient matching in ODEs, followed by an empirical comparative evaluation based on a set of widely used and representative benchmark data.
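
    The two-step idea can be sketched in a few lines: interpolate the data, then tune the ODE parameters so that the interpolant's slopes match the ODE right-hand side. The toy ODE dx/dt = -k*x, the smoothing factor, and the noise level below are illustrative assumptions:

      import numpy as np
      from scipy.interpolate import UnivariateSpline
      from scipy.optimize import minimize

      t = np.linspace(0.0, 5.0, 40)
      x = np.exp(-0.7 * t) + 0.01 * np.random.randn(t.size)   # noisy data, true k = 0.7

      spline = UnivariateSpline(t, x, s=0.01)   # step 1: smooth/interpolate the data
      slopes = spline.derivative()(t)           # slopes of tangents to the interpolant

      def mismatch(theta):
          k = theta[0]
          rhs = -k * spline(t)                  # time derivatives implied by the ODE
          return np.sum((slopes - rhs) ** 2)    # metric between slopes and derivatives

      k_hat = minimize(mismatch, x0=[1.0]).x[0]
      print("estimated k:", k_hat)              # the ODE is never solved explicitly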

  5. Comparing and improving reconstruction methods for proxies based on compositional data

    Science.gov (United States)

    Nolan, C.; Tipton, J.; Booth, R.; Jackson, S. T.; Hooten, M.

    2017-12-01

    Many types of studies in paleoclimatology and paleoecology involve compositional data. Often, these studies aim to use compositional data to reconstruct an environmental variable of interest; the reconstruction is usually done via the development of a transfer function. Transfer functions have been developed using many different methods. Existing methods tend to relate the compositional data and the reconstruction target in very simple ways. Additionally, the results from different methods are rarely compared. Here we seek to address these two issues. First, we introduce a new hierarchical Bayesian multivariate Gaussian process model; this model allows the relationship between each species in the compositional dataset and the environmental variable to be modeled in a way that captures the underlying complexities. Then, we compare this new method to machine learning techniques and commonly used existing methods. The comparisons are based on reconstructing the water table depth history of Caribou Bog (an ombrotrophic Sphagnum peat bog in Old Town, Maine, USA) from a new 7500 year long record of testate amoebae assemblages. The resulting reconstructions from different methods diverge in both their means and their uncertainties. In particular, uncertainty tends to be drastically underestimated by some common methods. These results will help to improve inference of water table depth from testate amoebae. Furthermore, this approach can be applied to test and improve inferences of past environmental conditions from a broad array of paleo-proxies based on compositional data.

  6. Comparative study on γ-ray spectrum by several filtering method

    International Nuclear Information System (INIS)

    Yuan Xinyu; Liu Liangjun; Zhou Jianliang

    2011-01-01

    A comparative study was conducted on the results of gamma-ray spectrum processing by several widely used smoothing methods in order to show their filtering effects. The results showed that peaks were widened and overlapping peaks increased with energy-domain filtering of the γ-ray spectrum. In the frequency domain, the filter and its parameters should be chosen with care. Wavelet transformation can preserve the signal in the high-frequency region well. An improved threshold method combined the advantages of the hard and soft threshold methods, making it suitable for the detection of weak peaks. A new filter based on the gravity model approach was also put forward, whose denoising level was assessed by the standard deviation. This method not only preserved the signal and the net peak area well, but also attained better results with a simple computer program. (authors)

  7. Probabilistic Power Flow Method Considering Continuous and Discrete Variables

    Directory of Open Access Journals (Sweden)

    Xuexia Zhang

    2017-04-01

    Full Text Available This paper proposes a probabilistic power flow (PPF) method considering continuous and discrete variables (continuous and discrete power flow, CDPF) for power systems. The proposed method—based on the cumulant method (CM) and multiple deterministic power flow (MDPF) calculations—can deal with continuous variables such as wind power generation (WPG) and loads, and discrete variables such as fuel cell generation (FCG). In this paper, continuous variables follow a normal distribution (loads) or a non-normal distribution (WPG), and discrete variables follow a binomial distribution (FCG). Through testing on IEEE 14-bus and IEEE 118-bus power systems, the proposed method (CDPF) has better accuracy compared with the CM, and higher efficiency compared with the Monte Carlo simulation method (MCSM).
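
    A hedged sketch of the cumulant mechanics underlying such a method: for a linearized flow written as a weighted sum of independent injections, the order-n cumulant of a scaled input is the weight to the nth power times the input cumulant, and cumulants of independent inputs add. The weights and distribution parameters below are invented for illustration:

      import numpy as np

      # Cumulants of orders 1-3 for each independent input: (kappa1, kappa2, kappa3).
      load = np.array([50.0, 4.0, 0.0])           # normal load: higher cumulants are 0
      n, p = 10, 0.8                              # fuel cell units in service (binomial)
      fcg  = np.array([n * p, n * p * (1 - p), n * p * (1 - p) * (1 - 2 * p)])

      # Linearized flow y = a1*load + a2*fcg: kappa_n(a*X) = a**n * kappa_n(X), and
      # cumulants of independent variables add order by order.
      a1, a2 = 0.6, -0.4
      orders = np.array([1, 2, 3])
      kappa_y = (a1 ** orders) * load + (a2 ** orders) * fcg
      print("flow cumulants:", kappa_y)   # feed into e.g. a Cornish-Fisher expansion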

  8. A comparative analysis of meta-heuristic methods for power management of a dual energy storage system for electric vehicles

    International Nuclear Information System (INIS)

    Trovão, João P.; Antunes, Carlos Henggeler

    2015-01-01

    Highlights: • Two meta-heuristic approaches are evaluated for multi-ESS management in electric vehicles. • An online global energy management strategy with two different layers is studied. • Meta-heuristic techniques are used to define optimized energy sharing mechanisms. • A comparative analysis for the ARTEMIS driving cycle is addressed. • The effectiveness of the double-layer management with meta-heuristics is presented. - Abstract: This work is focused on the performance evaluation of two meta-heuristic approaches, simulated annealing and particle swarm optimization, to deal with power management of a dual energy storage system for electric vehicles. The proposed strategy is based on a global energy management system with two layers: long-term (energy) and short-term (power) management. A rule-based system deals with the long-term (strategic) layer, and for the short-term (action) layer, meta-heuristic techniques are developed to define optimized online energy sharing mechanisms. Simulations have been made for several driving cycles to validate the proposed strategy. A comparative analysis for the ARTEMIS driving cycle is presented, evaluating three performance indicators (computation time, final value of battery state of charge, and minimum value of supercapacitors state of charge) as a function of input parameters. The results show the effectiveness of an implementation based on a double-layer management system using meta-heuristic methods for online power management supported by a rule set that restricts the search space.

  9. Novel Fingertip Image-Based Heart Rate Detection Methods for a Smartphone

    Directory of Open Access Journals (Sweden)

    Rifat Zaman

    2017-02-01

    Full Text Available We hypothesize that our fingertip image-based heart rate detection methods using a smartphone reliably detect the heart rhythm and rate of subjects. We propose fingertip curve line movement-based and fingertip image intensity-based detection methods, both of which use the movement of successive fingertip images obtained from smartphone cameras. To investigate the performance of the proposed methods, their heart rhythm and rate estimates are compared to those of the conventional method, which is based on average image pixel intensity. Using a smartphone, we collected 120 s of pulsatile time series data from each recruited subject. The results show that the proposed fingertip curve line movement-based method detects heart rate with a maximum deviation of 0.0832 Hz and 0.124 Hz using time- and frequency-domain based estimation, respectively, compared to the conventional method. Moreover, the proposed fingertip image intensity-based method detects heart rate with a maximum deviation of 0.125 Hz and 0.03 Hz using time- and frequency-based estimation, respectively.
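
    A minimal sketch of the frequency-domain estimation step for an intensity-based variant, assuming the mean fingertip intensity per frame has already been extracted; the search band is a plausible assumption, not a value from the paper:

      import numpy as np

      def heart_rate_from_intensity(intensity, fs):
          """Dominant pulsatile frequency of a fingertip intensity trace.

          intensity : 1D array of mean frame intensities
          fs        : camera frame rate in Hz
          Returns the peak frequency (Hz) within a plausible heart-rate band.
          """
          sig = intensity - intensity.mean()            # remove the DC component
          spec = np.abs(np.fft.rfft(sig))
          freqs = np.fft.rfftfreq(sig.size, d=1.0 / fs)
          band = (freqs >= 0.7) & (freqs <= 3.5)        # about 42-210 beats per minute
          return freqs[band][np.argmax(spec[band])]

      # e.g. 60 * heart_rate_from_intensity(series, 30.0) gives beats per minute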

  10. Cross-Cultural Adaptation and Validation of the MPAM-R to Brazilian Portuguese and Proposal of a New Method to Calculate Factor Scores

    Science.gov (United States)

    Albuquerque, Maicon R.; Lopes, Mariana C.; de Paula, Jonas J.; Faria, Larissa O.; Pereira, Eveline T.; da Costa, Varley T.

    2017-01-01

    In order to understand the reasons that lead individuals to practice physical activity, researchers developed the Motives for Physical Activity Measure-Revised (MPAM-R) scale. In 2010, a translation of the MPAM-R to Portuguese and its validation were performed. However, the psychometric measures were not acceptable. In addition, factor scores in some sports psychology scales are calculated as the mean of the item scores of the factor. Nevertheless, it seems appropriate that items with higher factor loadings, extracted by Factor Analysis, have greater weight in the factor score, while items with lower factor loadings have less weight. The aims of the present study are to translate and validate the Portuguese version of the MPAM-R and to investigate the agreement between two methods used to calculate factor scores. Three hundred volunteers who had been involved in physical activity programs for at least 6 months were recruited. Confirmatory Factor Analysis of the 30 items indicated that the version did not fit the model. After excluding four items, the final model with 26 items showed acceptable model fit measures by Exploratory Factor Analysis, and it conceptually supports the five factors of the original proposal. When the two methods for calculating factor scores are compared, our results show that only the “Enjoyment” and “Appearance” factors showed agreement between methods. So, the Portuguese version of the MPAM-R can be used in a Brazilian context, and the new proposal for the calculation of factor scores seems promising. PMID:28293203
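
    One straightforward way to realize a loading-weighted factor score, as opposed to the plain mean of the items, is sketched below; the exact weighting scheme adopted by the authors may differ:

      import numpy as np

      def factor_score(items, loadings):
          """Loading-weighted factor score for one factor.

          items    : array (n_subjects, n_items) of item responses for the factor
          loadings : array (n_items,) of factor loadings from the factor analysis
          """
          weights = loadings / loadings.sum()   # higher-loading items weigh more
          return items @ weights

      # The conventional score for comparison is simply items.mean(axis=1).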

  11. Quantifying and Comparing Effects of Climate Engineering Methods on the Earth System

    Science.gov (United States)

    Sonntag, Sebastian; Ferrer González, Miriam; Ilyina, Tatiana; Kracher, Daniela; Nabel, Julia E. M. S.; Niemeier, Ulrike; Pongratz, Julia; Reick, Christian H.; Schmidt, Hauke

    2018-02-01

    To contribute to a quantitative comparison of climate engineering (CE) methods, we assess atmosphere-, ocean-, and land-based CE measures with respect to Earth system effects consistently within one comprehensive model. We use the Max Planck Institute Earth System Model (MPI-ESM) with prognostic carbon cycle to compare solar radiation management (SRM) by stratospheric sulfur injection and two carbon dioxide removal methods: afforestation and ocean alkalinization. The CE model experiments are designed to offset the effect of fossil-fuel burning on global mean surface air temperature under the RCP8.5 scenario to follow or get closer to the RCP4.5 scenario. Our results show the importance of feedbacks in the CE effects. For example, as a response to SRM the land carbon uptake is enhanced by 92 Gt by the year 2100 compared to the reference RCP8.5 scenario due to reduced soil respiration thus reducing atmospheric CO2. Furthermore, we show that normalizations allow for a better comparability of different CE methods. For example, we find that due to compensating processes such as biogeophysical effects of afforestation more carbon needs to be removed from the atmosphere by afforestation than by alkalinization to reach the same global warming reduction. Overall, we illustrate how different CE methods affect the components of the Earth system; we identify challenges arising in a CE comparison, and thereby contribute to developing a framework for a comparative assessment of CE.

  12. Proposal for a Method for Business Model Performance Assessment: Toward an Experimentation Tool for Business Model Innovation

    Directory of Open Access Journals (Sweden)

    Antonio Batocchio

    2017-04-01

    Full Text Available The representation of business models has recently become widespread, especially in the pursuit of innovation. However, defining a company’s business model is sometimes limited to discussion and debate. This study observes the need for performance measurement so that business models can be data-driven. To meet this goal, the work proposes, as a hypothesis, the creation of a method that combines the practices of the Balanced Scorecard with a method of business model representation – the Business Model Canvas. Such a combination was based on a study of conceptual adaptation, resulting in an application roadmap. A case study application was performed to check the functionality of the proposition, focusing on startup organizations. It was concluded that, based on the performance assessment of the business model, it is possible to propose the search for change through experimentation, a path that can lead to business model innovation.

  13. Doubly stochastic radial basis function methods

    Science.gov (United States)

    Yang, Fenglian; Yan, Liang; Ling, Leevan

    2018-06-01

    We propose a doubly stochastic radial basis function (DSRBF) method for function recoveries. Instead of a constant, we treat the RBF shape parameters as stochastic variables whose distribution is determined by a stochastic leave-one-out cross validation (LOOCV) estimation. A careful operation count is provided in order to determine the ranges of all the parameters in our methods. The overhead cost for setting up the proposed DSRBF method is O(n²) for function recovery problems with n basis functions. Numerical experiments confirm that the proposed method outperforms not only the constant shape parameter formulation (in terms of accuracy with comparable computational cost) but also the optimal LOOCV formulation (in terms of both accuracy and computational cost).
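
    A hedged sketch of the core ingredient, an RBF interpolant whose shape parameters are drawn stochastically, one per basis function; the distribution below is fixed by hand rather than calibrated by the stochastic LOOCV step of the actual DSRBF method:

      import numpy as np

      rng = np.random.default_rng(0)

      # Function to recover: f(x) = sin(2*pi*x), sampled at n centres.
      x = np.linspace(0.0, 1.0, 20)
      y = np.sin(2 * np.pi * x)

      # One random shape parameter per basis function (hand-picked range).
      eps = rng.uniform(2.0, 8.0, size=x.size)

      def kernel(xe, xc, e):
          # Gaussian RBF with a different epsilon for each centre (column-wise)
          return np.exp(-(e * (xe[:, None] - xc[None, :])) ** 2)

      coef = np.linalg.solve(kernel(x, x, eps), y)      # collocation system

      x_test = np.linspace(0.0, 1.0, 101)
      y_test = kernel(x_test, x, eps) @ coef            # recovered function
      print("max error:", np.abs(y_test - np.sin(2 * np.pi * x_test)).max())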

  14. Comparing and combining biomarkers as principal surrogates for time-to-event clinical endpoints.

    Science.gov (United States)

    Gabriel, Erin E; Sachs, Michael C; Gilbert, Peter B

    2015-02-10

    Principal surrogate endpoints are useful as targets for phase I and II trials. In many recent trials, multiple post-randomization biomarkers are measured. However, few statistical methods exist for comparison of or combination of biomarkers as principal surrogates, and none of these methods to our knowledge utilize time-to-event clinical endpoint information. We propose a Weibull model extension of the semi-parametric estimated maximum likelihood method that allows for the inclusion of multiple biomarkers in the same risk model as multivariate candidate principal surrogates. We propose several methods for comparing candidate principal surrogates and evaluating multivariate principal surrogates. These include the time-dependent and surrogate-dependent true and false positive fraction, the time-dependent and the integrated standardized total gain, and the cumulative distribution function of the risk difference. We illustrate the operating characteristics of our proposed methods in simulations and outline how these statistics can be used to evaluate and compare candidate principal surrogates. We use these methods to investigate candidate surrogates in the Diabetes Control and Complications Trial. Copyright © 2014 John Wiley & Sons, Ltd.

  15. A Bayesian method for comparing and combining binary classifiers in the absence of a gold standard

    Directory of Open Access Journals (Sweden)

    Keith Jonathan M

    2012-07-01

    Full Text Available Background: Many problems in bioinformatics involve classification based on features such as sequence, structure or morphology. Given multiple classifiers, two crucial questions arise: how does their performance compare, and how can they best be combined to produce a better classifier? A classifier can be evaluated in terms of sensitivity and specificity using benchmark, or gold standard, data, that is, data for which the true classification is known. However, a gold standard is not always available. Here we demonstrate that a Bayesian model for comparing medical diagnostics without a gold standard can be successfully applied in the bioinformatics domain, to genomic-scale data sets. We present a new implementation, which unlike previous implementations is applicable to any number of classifiers. We apply this model, for the first time, to the problem of finding the globally optimal logical combination of classifiers. Results: We compared three classifiers of protein subcellular localisation, and evaluated our estimates of sensitivity and specificity against estimates obtained using a gold standard. The method overestimated sensitivity and specificity with only a small discrepancy, and correctly ranked the classifiers. Diagnostic tests for swine flu were then compared on a small data set. Lastly, classifiers for a genome-wide association study of macular degeneration with 541094 SNPs were analysed. In all cases, run times were feasible, and results precise. The optimal logical combination of classifiers was also determined for all three data sets. Code and data are available from http://bioinformatics.monash.edu.au/downloads/. Conclusions: The examples demonstrate the methods are suitable for both small and large data sets, applicable to the wide range of bioinformatics classification problems, and robust to dependence between classifiers. In all three test cases, the globally optimal logical combination of the classifiers was found to be

  16. Comparative Study of Inference Methods for Bayesian Nonnegative Matrix Factorisation

    DEFF Research Database (Denmark)

    Brouwer, Thomas; Frellsen, Jes; Liò, Pietro

    2017-01-01

    In this paper, we study the trade-offs of different inference approaches for Bayesian matrix factorisation methods, which are commonly used for predicting missing values, and for finding patterns in the data. In particular, we consider Bayesian nonnegative variants of matrix factorisation and tri-factorisation, and compare non-probabilistic inference, Gibbs sampling, variational Bayesian inference, and a maximum-a-posteriori approach. The variational approach is new for the Bayesian nonnegative models. We compare their convergence, and robustness to noise and sparsity of the data, on both synthetic and real...

  17. Resolution enhancement of holographic printer using a hogel overlapping method.

    Science.gov (United States)

    Hong, Keehoon; Park, Soon-gi; Yeom, Jiwoon; Kim, Jonghyun; Chen, Ni; Pyun, Kyungsuk; Choi, Chilsung; Kim, Sunil; An, Jungkwuen; Lee, Hong-Seok; Chung, U-in; Lee, Byoungho

    2013-06-17

    We propose a hogel overlapping method for the holographic printer to enhance the lateral resolution of holographic stereograms. The hogel size is directly related to the lateral resolution of the holographic stereogram. Our computer-simulation analysis shows that there is a limit to decreasing the hogel size when printing holographic stereograms. Instead of reducing the hogel size, the lateral resolution of holographic stereograms can be enhanced by printing overlapped hogels, which takes advantage of the multiplexing property of the volume hologram. We built a holographic printer and recorded two holographic stereograms using the conventional and proposed overlapping methods. Experimentally captured images and movies of the holographic stereograms were compared between the conventional and proposed methods. The experimental results confirm that the proposed hogel overlapping method improves the lateral resolution of holographic stereograms compared to the conventional holographic printing method.

  18. Comparative analysis of methods for the microcircuit assembly on flexible polyimide carriers

    Directory of Open Access Journals (Sweden)

    Verbitskiy V. G.

    2013-10-01

    Full Text Available The article presents a classification of methods for the microcircuit assembly with the use of flexible polyimide carriers of different types, and their comparative analysis. The most appropriate method for the manufacturing of flexible dual-layer carriers is singled out.

  19. Proposals of counting method for bubble detectors and their intercomparisons

    International Nuclear Information System (INIS)

    Ramalho, Eduardo; Silva, Ademir X.; Bellido, Luis F.; Facure, Alessandro; Pereira, Mario

    2009-01-01

    The study of neutron spectrometry and dosimetry has become significantly easier due to relatively new devices called bubble detectors. Insensitive to gamma rays and composed of superheated emulsions, they are still the subject of much research in radiation physics and nuclear engineering. In bubble detectors exposed either to more intense neutron fields or for a longer time, when more bubbles are produced, the statistical uncertainty of the dosimetric and spectrometric processes is reduced. A proposal of this nature is set out in this work, which presents ways to perform counting for bubble detectors and an updated procedure for acquiring images of the irradiated detectors in order to make manual counting easier. Twelve BDS detectors were irradiated by the RDS111 cyclotron of IEN (Instituto de Engenharia Nuclear) and photographed using an assembly specially designed for this experiment. Counting was first performed manually; simultaneously, ImagePro was used to perform counting automatically. The bubble counts, whether manual or automatic, were compared, as were the time needed to obtain them and their levels of difficulty. After the bubble counting, the detectors' standardized responses were calculated in both cases according to the BDS manual, and they were also compared. Among the results, counting with these devices becomes very hard at a large number of bubbles, with higher variation in counts of many bubbles. Because of the good agreement between manual counting and the custom program, the latter proved a good alternative in practical and economic terms. Despite the good results, the custom program needs further adjustment in order to achieve higher accuracy in counting larger numbers of bubbles for neutron measurement applications. (author)

  20. Comparing methods of classifying life courses: Sequence analysis and latent class analysis

    NARCIS (Netherlands)

    Elzinga, C.H.; Liefbroer, Aart C.; Han, Sapphire

    2017-01-01

    We compare life course typology solutions generated by sequence analysis (SA) and latent class analysis (LCA). First, we construct an analytic protocol to arrive at typology solutions for both methodologies and present methods to compare the empirical quality of alternative typologies. We apply this

  1. Comparing methods of classifying life courses: sequence analysis and latent class analysis

    NARCIS (Netherlands)

    Han, Y.; Liefbroer, A.C.; Elzinga, C.

    2017-01-01

    We compare life course typology solutions generated by sequence analysis (SA) and latent class analysis (LCA). First, we construct an analytic protocol to arrive at typology solutions for both methodologies and present methods to compare the empirical quality of alternative typologies. We apply this

  2. Parameter estimation method that directly compares gravitational wave observations to numerical relativity

    Science.gov (United States)

    Lange, J.; O'Shaughnessy, R.; Boyle, M.; Calderón Bustillo, J.; Campanelli, M.; Chu, T.; Clark, J. A.; Demos, N.; Fong, H.; Healy, J.; Hemberger, D. A.; Hinder, I.; Jani, K.; Khamesra, B.; Kidder, L. E.; Kumar, P.; Laguna, P.; Lousto, C. O.; Lovelace, G.; Ossokine, S.; Pfeiffer, H.; Scheel, M. A.; Shoemaker, D. M.; Szilagyi, B.; Teukolsky, S.; Zlochower, Y.

    2017-11-01

    We present and assess a Bayesian method to interpret gravitational wave signals from binary black holes. Our method directly compares gravitational wave data to numerical relativity (NR) simulations. In this study, we present a detailed investigation of the systematic and statistical parameter estimation errors of this method. This procedure bypasses approximations used in semianalytical models for compact binary coalescence. In this work, we use the full posterior parameter distribution for only generic nonprecessing binaries, drawing inferences away from the set of NR simulations used, via interpolation of a single scalar quantity (the marginalized log likelihood, ln L) evaluated by comparing data to nonprecessing binary black hole simulations. We also compare the data to generic simulations, and discuss the effectiveness of this procedure for generic sources. We specifically assess the impact of higher order modes, repeating our interpretation with both l ≤ 2 as well as l ≤ 3 harmonic modes. Using the l ≤ 3 higher modes, we gain more information from the signal and can better constrain the parameters of the gravitational wave signal. We assess and quantify several sources of systematic error that our procedure could introduce, including simulation resolution and duration; most are negligible. We show through examples that our method can recover the parameters for equal mass, zero spin, GW150914-like, and unequal mass, precessing spin sources. Our study of this new parameter estimation method demonstrates that we can quantify and understand the systematic and statistical error. This method allows us to use higher order modes from numerical relativity simulations to better constrain the black hole binary parameters.

  3. Comparative study between the hand-wrist method and cervical vertebral maturation method for evaluation skeletal maturity in cleft patients.

    Science.gov (United States)

    Manosudprasit, Montian; Wangsrimongkol, Tasanee; Pisek, Poonsak; Chantaramungkorn, Melissa

    2013-09-01

    To test the measure of agreement between the Skeletal Maturation Index (SMI) method of Fishman, using hand-wrist radiographs, and the Cervical Vertebral Maturation Index (CVMI) method for assessing the skeletal maturity of cleft patients. Hand-wrist and lateral cephalometric radiographs of 60 cleft subjects (35 females and 25 males, age range: 7-16 years) were used. Skeletal age was assessed using an adjustment to the SMI method of Fishman for comparison with the CVMI method of Hassel and Farman. Agreement between skeletal age assessed by both methods and the intra- and inter-examiner reliability of both methods were tested by weighted kappa analysis. There was good agreement between the two methods, with a kappa value of 0.80 (95% CI = 0.66-0.88, p-value <0.001). Intra- and inter-examiner reliability of both methods was very good, with kappa values ranging from 0.91 to 0.99. The CVMI method can be used as an alternative to the SMI method for skeletal age assessment in cleft patients, with the benefit of not requiring an additional radiograph and thereby avoiding extra radiation exposure. Comparing the two methods, the present study found better agreement from the peak of adolescence onwards.

  4. The Use of Laser Microdissection in Forensic Sexual Assault Casework: Pros and Cons Compared to Standard Methods.

    Science.gov (United States)

    Costa, Sergio; Correia-de-Sá, Paulo; Porto, Maria J; Cainé, Laura

    2017-07-01

    Sexual assault samples are among the most frequently analyzed in a forensic laboratory. They account for almost half of all samples processed routinely, and a large portion of these cases remain unsolved. These samples often pose problems to traditional analytic methods of identification because they most frequently consist of cell mixtures from at least two contributors: the victim (usually female) and the perpetrator (usually male). In this study, we propose the use of current preliminary testing for sperm detection to determine the chances of success with samples that are good candidates for analysis with laser microdissection technology. Also, we used laser microdissection technology to capture fluorescently stained cells of interest, differentiated by gender. The collected material was then used for DNA genotyping with commercially available amplification kits such as Minifiler, Identifiler Plus, NGM, and Y-Filer. Both the methodology and the quality of the results were evaluated to assess the pros and cons of laser microdissection compared with standard methods. Overall, the combination of fluorescent staining with the Minifiler amplification kit provided the best results for autosomal markers, whereas the Y-Filer kit returned the expected results regardless of the method used. © 2017 American Academy of Forensic Sciences.

  5. Comparative numerical study of kaolin clay with three drying methods: Convective, convective–microwave and convective infrared modes

    International Nuclear Information System (INIS)

    Hammouda, I.; Mihoubi, D.

    2014-01-01

    Highlights: • Modelling of the drying of deformable media. • Theoretical study of kaolin clay with three drying methods: convective, convective–microwave and convective–infrared modes. • The stresses generated during convective, microwave/convective and infrared/convective drying. • Combined drying decreases the intensity of the stresses developed during drying. - Abstract: A mathematical model is developed to simulate the response of a kaolin clay sample subjected to convective, convective–microwave and convective–infrared drying. The model describes heat, mass, and momentum transfers in a viscoelastic medium represented by a Maxwell model with two branches. The combined drying methods were investigated to examine whether they can minimize the cracking that may be generated in the product, and to determine whether the greater improvement comes from infrared or microwave radiation. The numerical code allowed us to determine, and thus compare, the effect of the drying mode on the evolution of drying rate, temperature, moisture content and mechanical stress during drying. The numerical results show that combined drying decreases the intensity of the stresses developed during drying and that convective–microwave drying is the best method, giving a good quality of dried product.

  6. Proposed method to produce a highly polarized e+ beam for future linear colliders

    International Nuclear Information System (INIS)

    Okugi, Toshiyuki; Chiba, Masami; Kurihara, Yoshimasa

    1996-01-01

    We propose a method to produce a spin-polarized e+ beam using e+e- pair-creation by circularly polarized photons. Assuming Compton scattering of an unpolarized e- beam and circularly polarized laser light, scattered γ-rays at the high end of the energy spectrum are also circularly polarized. If those γ-rays are utilized to create e± pairs on a thin target, the spin-polarization is preserved for e+'s at the high end of their energy spectrum. By using the injector linac of the Accelerator Test Facility at KEK and a commercially available Nd:YAG pulse laser, we can expect about 10^5 polarized e+'s per second with a degree of polarization of 80% and a kinetic energy of 35-80 MeV. The apparatus for the creation and measurement of polarized e+'s is being constructed. We present a new idea for a possible application of our method to future linear colliders by utilizing a high-power CO2 laser. (author)

  7. Improved non-dimensional dynamic influence function method based on two-domain method for vibration analysis of membranes

    Directory of Open Access Journals (Sweden)

    SW Kang

    2015-02-01

    Full Text Available This article introduces an improved non-dimensional dynamic influence function method using a sub-domain method for efficiently extracting the eigenvalues and mode shapes of concave membranes with arbitrary shapes. The non-dimensional dynamic influence function (NDIF) method, which was developed by the authors in 1999, gives highly accurate eigenvalues for membranes, plates, and acoustic cavities compared with the finite element method. However, it requires the inefficient procedure of searching for singularities of a system matrix over the frequency range of interest in order to extract eigenvalues and mode shapes. To overcome this inefficient procedure, this article proposes a practical approach that converts the system matrix equation of the concave membrane of interest into an algebraic eigenvalue problem. Several case studies show that the proposed method has good convergence characteristics and yields very accurate eigenvalues compared with an exact method and the finite element method (ANSYS).

  8. Proposed method for determining the thickness of glass in solar collector panels

    Science.gov (United States)

    Moore, D. M.

    1980-01-01

    An analytical method was developed for determining the minimum thickness for simply supported, rectangular glass plates subjected to uniform normal pressure environmental loads such as wind, earthquake, snow, and deadweight. The method consists of comparing an analytical prediction of the stress in the glass panel to a glass breakage stress determined from fracture mechanics considerations. Based on extensive analysis using the nonlinear finite element structural analysis program ARGUS, design curves for the structural analysis of simply supported rectangular plates were developed. These curves yield the center deflection, center stress and corner stress as a function of a dimensionless parameter describing the load intensity. A method of estimating the glass breakage stress as a function of a specified failure rate, degree of glass temper, design life, load duration time, and panel size is also presented.

  9. Sensitivity analysis of a complex, proposed geologic waste disposal system using the Fourier Amplitude Sensitivity Test method

    International Nuclear Information System (INIS)

    Lu Yichi; Mohanty, Sitakanta

    2001-01-01

    The Fourier Amplitude Sensitivity Test (FAST) method has been used to perform a sensitivity analysis of a computer model developed for conducting total system performance assessment of the proposed high-level nuclear waste repository at Yucca Mountain, Nevada, USA. The computer model has a large number of random input parameters with assigned probability density functions, which may or may not be uniform, for representing data uncertainty. The FAST method, which was previously applied only to models with parameters represented by the uniform probability distribution function, has been modified to be applied to models with nonuniform probability distribution functions. Using an example problem with a small input parameter set, several aspects of the FAST method have been investigated, such as the effects of integer frequency sets, of random phase shifts in the functional transformations, and of the number of discrete sampling points (equivalent to the number of model executions) on the ranking of the input parameters. Because the number of input parameters of the computer model under investigation is too large to be handled by the FAST method, less important input parameters were first screened out using the Morris method. The FAST method was then used to rank the remaining parameters. The validity of the parameter ranking by the FAST method was verified using the conditional complementary cumulative distribution function (CCDF) of the output. The CCDF results revealed that the introduction of random phase shifts into the functional transformations, proposed by previous investigators to disrupt the repetitiveness of search curves, does not necessarily improve the sensitivity analysis results because it destroys the orthogonality of the trigonometric functions, which is required for Fourier analysis.
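
    A minimal sketch of the FAST machinery for the uniform case without random phase shifts: each parameter is driven along a search curve at its own integer frequency, and first-order sensitivities are read off the Fourier power at the harmonics of that frequency. The frequencies, harmonic count, and toy model are illustrative choices:

      import numpy as np

      def fast_first_order(model, freqs, n=1025):
          """First-order FAST sensitivity indices (uniform inputs on [0, 1]).

          model : function mapping an (n, k) array of inputs to an (n,) output
          freqs : one distinct integer frequency per parameter
          """
          s = np.linspace(-np.pi, np.pi, n, endpoint=False)
          # Search-curve transformation onto [0, 1], without phase shifts.
          x = 0.5 + np.arcsin(np.sin(np.outer(s, freqs))) / np.pi
          y = model(x)

          c = np.fft.rfft(y) / n
          power = 2 * np.abs(c[1:]) ** 2            # spectrum, DC term excluded
          total = power.sum()
          M = 4                                     # number of harmonics summed
          return [power[[m * w - 1 for m in range(1, M + 1)]].sum() / total
                  for w in freqs]

      # Toy model Y = X1 + 0.5*X2: X1 should receive roughly four times the
      # sensitivity of X2 (variance contributions 1 : 0.25).
      print(fast_first_order(lambda x: x[:, 0] + 0.5 * x[:, 1], [11, 21]))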

  10. Comparative analysis of methods for integrating various environmental impacts as a single index in life cycle assessment

    International Nuclear Information System (INIS)

    Ji, Changyoon; Hong, Taehoon

    2016-01-01

    Previous studies have proposed several methods for integrating characterized environmental impacts as a single index in life cycle assessment. Each of them, however, may lead to different results. This study presents internal and external normalization methods, weighting factors proposed by panel methods, and a monetary valuation based on an endpoint life cycle impact assessment method as the integration methods. Furthermore, this study investigates the differences among the integration methods and identifies the causes of the differences through a case study in which five elementary school buildings were used. As a result, when using internal normalization with weighting factors, the weighting factors had a significant influence on the total environmental impacts whereas the normalization had little influence on the total environmental impacts. When using external normalization with weighting factors, the normalization had more significant influence on the total environmental impacts than weighing factors. Due to such differences, the ranking of the five buildings varied depending on the integration methods. The ranking calculated by the monetary valuation method was significantly different from that calculated by the normalization and weighting process. The results aid decision makers in understanding the differences among these integration methods, and, finally, help them select the method most appropriate for the goal at hand.

  11. Comparative analysis of methods for integrating various environmental impacts as a single index in life cycle assessment

    Energy Technology Data Exchange (ETDEWEB)

    Ji, Changyoon, E-mail: changyoon@yonsei.ac.kr; Hong, Taehoon, E-mail: hong7@yonsei.ac.kr

    2016-02-15

    Previous studies have proposed several methods for integrating characterized environmental impacts as a single index in life cycle assessment. Each of them, however, may lead to different results. This study presents internal and external normalization methods, weighting factors proposed by panel methods, and a monetary valuation based on an endpoint life cycle impact assessment method as the integration methods. Furthermore, this study investigates the differences among the integration methods and identifies the causes of the differences through a case study in which five elementary school buildings were used. As a result, when using internal normalization with weighting factors, the weighting factors had a significant influence on the total environmental impacts whereas the normalization had little influence on the total environmental impacts. When using external normalization with weighting factors, the normalization had more significant influence on the total environmental impacts than weighing factors. Due to such differences, the ranking of the five buildings varied depending on the integration methods. The ranking calculated by the monetary valuation method was significantly different from that calculated by the normalization and weighting process. The results aid decision makers in understanding the differences among these integration methods, and, finally, help them select the method most appropriate for the goal at hand.

  12. Comparative study of fracture mechanical test methods for concrete

    DEFF Research Database (Denmark)

    Østergaard, Lennart; Olesen, John Forbes

    2004-01-01

    This paper describes and compares three different fracture mechanical test methods: the uniaxial tension test (UTT), the three point bending test (TPBT) and the wedge splitting test (WST). Potentials and problems with the test methods are described with regard to the experiment and the interpretation, i.e. the analysis needed to extract the stress-crack opening relationship, the fracture energy etc. Experiments are carried out with each test configuration using mature, high performance concrete. The results show that the UTT is a highly complicated test, which only under very well controlled circumstances will yield the true fracture mechanical properties. It is also shown that both the three point bending test and the WST are well-suited substitutes for the uniaxial tension test.

  13. Comparing sensitivity analysis methods to advance lumped watershed model identification and evaluation

    Directory of Open Access Journals (Sweden)

    Y. Tang

    2007-01-01

    This study seeks to identify sensitivity tools that will advance our understanding of lumped hydrologic models for the purposes of model improvement, calibration efficiency and improved measurement schemes. Four sensitivity analysis methods were tested: (1) local analysis using parameter estimation software (PEST), (2) regional sensitivity analysis (RSA), (3) analysis of variance (ANOVA), and (4) Sobol's method. The methods' relative efficiencies and effectiveness have been analyzed and compared. These four sensitivity methods were applied to the lumped Sacramento soil moisture accounting model (SAC-SMA) coupled with SNOW-17. Results from this study characterize model sensitivities for two medium-sized watersheds within the Juniata River Basin in Pennsylvania, USA. Comparative results for the four sensitivity methods are presented for a 3-year time series with 1 h, 6 h, and 24 h time intervals. The results of this study show that model parameter sensitivities are heavily impacted by the choice of analysis method as well as the model time interval. Differences between the two adjacent watersheds also suggest strong influences of local physical characteristics on the sensitivity methods' results. This study also contributes a comprehensive assessment of the repeatability, robustness, efficiency, and ease-of-implementation of the four sensitivity methods. Overall, ANOVA and Sobol's method were shown to be superior to RSA and PEST. Relative to one another, ANOVA has reduced computational requirements and Sobol's method yielded more robust sensitivity rankings.
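
    Of the four methods above, Sobol's method is the most computation-heavy. A minimal Monte Carlo sketch of its first-order index, using the Saltelli pick-and-freeze estimator on a toy function standing in for the hydrologic model (the function, sample size, and input ranges are all assumptions), looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Toy stand-in for the hydrologic model (Ishigami-like test function).
    return (np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2
            + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0]))

n, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))
B = rng.uniform(-np.pi, np.pi, (n, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                          # pick-and-freeze column i
    Si = np.mean(fB * (model(ABi) - fA)) / var   # Saltelli (2010) estimator
    print(f"first-order S{i + 1} ~= {Si:.3f}")
```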

  14. Comparative Analysis of Volatile Defensive Secretions of Three Species of Pyrrhocoridae (Insecta: Heteroptera by Gas Chromatography-Mass Spectrometric Method.

    Directory of Open Access Journals (Sweden)

    Jan Krajicek

    The true bugs (Hemiptera: Heteroptera) have evolved a system of well-developed scent glands that produce diverse and frequently strongly odorous compounds that act mainly as chemical protection against predators. A new method of non-lethal sampling with subsequent separation using gas chromatography with mass spectrometric detection was proposed for the analysis of these volatile defensive secretions. Separation was performed on an Rtx-200 column containing a fluorinated polysiloxane stationary phase. Various mechanical irritation methods (ultrasonics, shaking, pressing bugs with the plunger of a syringe) were tested for secretion sampling, with a special focus on non-lethal irritation. The preconcentration step was performed by sorption on solid phase microextraction (SPME) fibers of different polarity. For optimization of the sampling procedure, Pyrrhocoris apterus was selected. The entire multi-parameter optimization of secretion sampling was performed using response surface methodology. Irritation of the bugs by pressing them with the plunger of a syringe was shown to be the most suitable. The developed method was applied to the analysis of secretions produced by adult males and females of Pyrrhocoris apterus, Pyrrhocoris tibialis and Scantius aegyptius (all Heteroptera: Pyrrhocoridae). The chemical composition of the secretion, particularly that of alcohols, aldehydes and esters, is species-specific in all three pyrrhocorid species studied. The sexual dimorphism in the occurrence of particular compounds is largely limited to alcohols and suggests an epigamic intraspecific function. The phenetic overall similarities in the composition of the secretion reflect neither the relationship of the species nor similarities in antipredatory color pattern. The similarities of the secretions may be linked with antipredatory strategies. The proposed method requires only a few individuals, which remain alive after the procedure. Thus secretions of a number of species including even the rare

  15. Comparative analysis among several methods used to solve the point kinetic equations

    International Nuclear Information System (INIS)

    Nunes, Anderson L.; Goncalves, Alessandro da C.; Martinez, Aquilino S.; Silva, Fernando Carvalho da

    2007-01-01

    The main objective of this work is the development of a methodology for comparing several methods for solving the point kinetics equations. The evaluated methods are: the finite differences method, the stiffness confinement method, the improved stiffness confinement method and the piecewise constant approximations method. These methods were implemented and compared through a systematic analysis that consists basically of determining which method consumes the least computational time with the highest precision. A relative performance factor, whose function is to combine both criteria, was calculated in order to reach this goal. Through the analysis of the performance factor it is possible to choose the best method for the solution of the point kinetics equations. (author)

  16. Comparative analysis among several methods used to solve the point kinetic equations

    Energy Technology Data Exchange (ETDEWEB)

    Nunes, Anderson L.; Goncalves, Alessandro da C.; Martinez, Aquilino S.; Silva, Fernando Carvalho da [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Nuclear; E-mails: alupo@if.ufrj.br; agoncalves@con.ufrj.br; aquilino@lmp.ufrj.br; fernando@con.ufrj.br

    2007-07-01

    The main objective of this work is the development of a methodology for comparing several methods for solving the point kinetics equations. The evaluated methods are: the finite differences method, the stiffness confinement method, the improved stiffness confinement method and the piecewise constant approximations method. These methods were implemented and compared through a systematic analysis that consists basically of determining which method consumes the least computational time with the highest precision. A relative performance factor, whose function is to combine both criteria, was calculated in order to reach this goal. Through the analysis of the performance factor it is possible to choose the best method for the solution of the point kinetics equations. (author)
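
    As an illustration of the finite differences approach named in the record, the sketch below integrates the one-delayed-group point kinetics equations with an implicit Euler scheme, which copes with the stiffness that the other listed methods are designed around. The kinetics parameters and reactivity step are hypothetical, not taken from the paper:

```python
import numpy as np

# One-delayed-group point kinetics, integrated with implicit (backward)
# Euler finite differences; all parameters below are hypothetical.
beta, lam, Lambda = 0.0065, 0.08, 1.0e-4   # delayed fraction, decay const, generation time
rho = 0.001                                # step reactivity insertion
dt, t_end = 1.0e-4, 1.0

A = np.array([[(rho - beta) / Lambda, lam],
              [beta / Lambda,        -lam]])
M = np.eye(2) - dt * A                     # implicit Euler: (I - dt*A) y_{k+1} = y_k

y = np.array([1.0, beta / (Lambda * lam)]) # equilibrium n(0), c(0)
for _ in range(int(t_end / dt)):
    y = np.linalg.solve(M, y)
print("n(1 s) ~=", y[0])
```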

  17. Circuit and method for comparator offset error detection and correction in ADC

    NARCIS (Netherlands)

    2017-01-01

    PROBLEM TO BE SOLVED: To provide a method for calibrating an analog-to-digital converter (ADC). SOLUTION: The method comprises: sampling an input voltage signal; comparing the sampled input voltage signal with an output signal of a feedback digital-to-analog converter (DAC) 40; determining in a

  18. Genetic Synthesis of New Reversible/Quantum Ternary Comparator

    Directory of Open Access Journals (Sweden)

    DEIBUK, V.

    2015-08-01

    Methods of quantum/reversible logic synthesis are based on the use of the binary nature of quantum computing. However, multiple-valued logic is a promising choice for future quantum computer technology due to a number of advantages over binary circuits. In this paper we have developed a synthesis of ternary reversible circuits based on Muthukrishnan-Stroud gates using a genetic algorithm. The chromosome coding method is presented, and a well-grounded choice of algorithm parameters allowed better circuit schemes of one- and n-qutrit ternary comparators to be obtained than with other methods. These parameters are the quantum cost of the resulting reversible devices, the delay time, and the number of constant input (ancilla) lines. The proposed implementation of the genetic algorithm has reduced the device delay time and the number of ancilla qutrits to 1 and 2n-1 for one- and n-qutrit full comparators, respectively. For the design of the n-qutrit comparator we have introduced a complementary device which compares the output functions of 1-qutrit comparators.

  19. A Comparative Analysis of Method Books for Class Jazz Instruction

    Science.gov (United States)

    Watson, Kevin E.

    2017-01-01

    The purpose of this study was to analyze and compare instructional topics and teaching approaches included in selected class method books for jazz pedagogy through content analysis methodology. Frequency counts for the number of pages devoted to each defined instructional content category were compiled and percentages of pages allotted to each…

  20. A proposed architecture and method of operation for improving the protection of privacy and confidentiality in disease registers

    Directory of Open Access Journals (Sweden)

    Churches Tim

    2003-01-01

    Background: Disease registers aim to collect information about all instances of a disease or condition in a defined population of individuals. Traditionally, methods of operating disease registers have required that notifications of cases be identified by unique identifiers such as social security number or national identification number, or by ensembles of non-unique identifying data items, such as name, sex and date of birth. However, growing concern over the privacy and confidentiality aspects of disease registers may hinder their future operation. Technical solutions to these legitimate concerns are needed. Discussion: An alternative method of operation is proposed which involves splitting the personal identifiers from the medical details at the source of notification, and separately encrypting each part using asymmetrical (public key) cryptographic methods. The identifying information is sent to a single Population Register, and the medical details to the relevant disease register. The Population Register uses probabilistic record linkage to assign a unique personal identification (UPI) number to each person notified to it, although not necessarily everyone in the entire population. This UPI is shared only with a single trusted third party whose sole function is to translate between this UPI and separate series of personal identification numbers which are specific to each disease register. Summary: The system proposed would significantly improve the protection of privacy and confidentiality, while still allowing the efficient linkage of records between disease registers, under the control and supervision of the trusted third party and independent ethics committees. The proposed architecture could accommodate genetic databases and tissue banks as well as a wide range of other health and social data collections. It is important that proposals such as this are subject to widespread scrutiny by information security experts, researchers and

  1. Proposal of a simple screening method for a rapid preliminary evaluation of ''heavy metals'' mobility in soils of contaminated sites

    Energy Technology Data Exchange (ETDEWEB)

    Pinto, Valentina; Chiusolo, Francesca; Cremisini, Carlo [ENEA - Italian Agency for New Technologies, Energy and Environment, Rome (Italy). Section PROTCHIM

    2010-09-15

    Risks associated with "heavy metals" (HM) soil contamination depend not only on their total content but, mostly, on their mobility. Many extraction procedures have been developed to evaluate HM mobility in contaminated soils, but they are generally time consuming (especially the sequential extraction procedures (SEPs)) and consequently applicable to only a limited number of samples. For this reason, a simple screening method, applicable even "in field", has been proposed in order to obtain a rapid evaluation of HM mobility in polluted soils, mainly focused on the fraction associated with Fe and Mn oxides/hydroxides. A buffer solution of trisodium citrate and hydroxylamine hydrochloride was used as extractant for a single-step leaching test. The choice of this buffered solution was strictly related to the possibility of directly determining, via titration with dithizone (DZ), the content of Zn, Cu, Pb and Cd, which are among the most representative contaminants in highly mineralised soils. Moreover, the extraction solution is similar, apart from the pH value, to the one used in the second step of the BCR SEP. The analysis of bivalent ions through DZ titration was exploited in order to further simplify and quicken the whole procedure. The proposed method generically measures, in a few minutes, the concentration of total extractable "heavy metals" expressed as mol/L without distinguishing between elements. The proposed screening method has been developed and applied on soil samples collected from rural, urban and mining areas, representing different situations of soil contamination. Results were compared with data obtained from the BCR procedure. The screening method proved to be a reliable tool for a rapid evaluation of metal mobility. Therefore, it could be very useful, even "in field", both to guide the sampling activity on site and to monitor the efficacy of the subsequent

  2. Methods for the comparative evaluation of pharmaceuticals.

    Science.gov (United States)

    Zentner, Annette; Velasco-Garrido, Marcial; Busse, Reinhard

    2005-11-15

    POLITICAL BACKGROUND: As a German novelty, the Institute for Quality and Efficiency in Health Care (Institut für Qualität und Wirtschaftlichkeit im Gesundheitswesen; IGWiG) was established in 2004 to, among other tasks, evaluate the benefit of pharmaceuticals. In this context it is of importance that patented pharmaceuticals are only excluded from the reference pricing system if they offer a therapeutic improvement. The institute is commissioned by the Federal Joint Committee (Gemeinsamer Bundesausschuss, G-BA) or by the Ministry of Health and Social Security. The German policy objective expressed by the latest health care reform (Gesetz zur Modernisierung der Gesetzlichen Krankenversicherung, GMG) is to base decisions on a scientific assessment of pharmaceuticals in comparison to already available treatments. However, procedures and methods are still to be established. This health technology assessment (HTA) report was commissioned by the German Agency for HTA at the Institute for Medical Documentation and Information (DAHTA@DIMDI). It analysed criteria, procedures, and methods of comparative drug assessment in other EU-/OECD-countries. The research question was the following: How do national public institutions compare medicines in connection with pharmaceutical regulation, i.e. licensing, reimbursement and pricing of drugs? Institutions as well as documents concerning comparative drug evaluation (e.g. regulations, guidelines) were identified through internet, systematic literature, and hand searches. Publications were selected according to pre-defined inclusion and exclusion criteria. Documents were analysed in a qualitative manner following an analytic framework that had been developed in advance. Results were summarised narratively and presented in evidence tables. Currently licensing agencies do not systematically assess a new drug's added value for patients and society. This is why many countries made post-licensing evaluation of pharmaceuticals a

  3. Comparative reading support system for lung cancer CT screening

    International Nuclear Information System (INIS)

    Kubo, Mitsuru; Saita, Shinsuke; Kawata, Yoshiki; Niki, Noboru; Suzuki, Hidenobu; Ohmatsu, Hironobu; Eguchi, Kenji; Kaneko, Masahiro; Moriyama, Noriyuki

    2010-01-01

    Comparative reading is performed using current and past images of the same case obtained from lung cancer CT screening. The result is useful for the early detection of lung cancer. Our paper describes an efficiency improvement of comparative reading using 10 mm slice thickness CT images, achieved by developing a system that consists of a slice registration method, a pulmonary nodule registration method, and a quantitative evaluation method for a pulmonary nodule's degree of change. The proposed system is applied to CT images from 1107 scans of 85 cases with 198 pulmonary nodules and is evaluated by comparing it with the reading results of the doctors. We show the effectiveness of the system. (author)

  4. Generalized Truncated Methods for an Efficient Solution of Retrial Systems

    Directory of Open Access Journals (Sweden)

    Ma Jose Domenech-Benlloch

    2008-01-01

    We are concerned with the analytic solution of multiserver retrial queues including the impatience phenomenon. As there are no closed-form solutions to these systems, approximate methods are required. We propose two different generalized truncated methods to effectively solve this type of system. The proposed methods are based on the homogenization of the state space beyond a given number of users in the retrial orbit. We compare the proposed methods with the most well-known methods that have appeared in the literature, in a wide range of scenarios. We conclude that the proposed methods generally outperform previous proposals in terms of accuracy for the most common performance parameters used in retrial systems, with a moderate growth in the computational cost.

  5. How Good Are Trainers' Personal Methods Compared to Two Structured Training Strategies?

    Science.gov (United States)

    Walls, Richard T.; And Others

    Training methods naturally employed by trainers were analyzed and compared to systematic structured training procedures. Trainers were observed teaching retarded subjects how to assemble a bicycle brake, roller skate, carburetor, and lawn mower engine. Trainers first taught using their own (personal) method, which was recorded in terms of types of…

  6. Comparative effectiveness of instructional methods: oral and pharyngeal cancer examination.

    Science.gov (United States)

    Clark, Nereyda P; Marks, John G; Sandow, Pamela R; Seleski, Christine E; Logan, Henrietta L

    2014-04-01

    This study compared the effectiveness of different methods of instruction for the oral and pharyngeal cancer examination. A group of thirty sophomore students at the University of Florida College of Dentistry were randomly assigned to three training groups: video instruction, a faculty-led hands-on instruction, or both video and hands-on instruction. The training intervention involved attending two sessions spaced two weeks apart. The first session used a pretest to assess students' baseline didactic knowledge and clinical examination technique. The second session utilized two posttests to assess the comparative effectiveness of the training methods on didactic knowledge and clinical technique. The key findings were that students performed the clinical examination significantly better with the combination of video and faculty-led hands-on instruction (p<0.01). All students improved their clinical exam skills, knowledge, and confidence in performing the oral and pharyngeal cancer examination independent of which training group they were assigned. Utilizing both video and interactive practice promoted greater performance of the clinical technique on the oral and pharyngeal cancer examination.

  7. Cutaneous blood flow. A comparative study between the thermal recovery method and the radioxenon clearance method

    Energy Technology Data Exchange (ETDEWEB)

    Tavares, C M; Ferreira, J M; Fernandes, F V

    1975-01-01

    Since 1968 a thermal recovery method for studying the cutaneous circulation has been utilized in the detection of skin circulation changes caused by certain pharmacological agents or by some pathological conditions. This method is based on the determination of the thermal recovery of a small area of skin that has previously been cooled. In this work, we present the results of a comparative analysis between the thermal recovery method and the clearance of radioactive xenon injected intracutaneously. The study was performed on the distal extremity of the lower limbs in 16 normal subjects, 16 hyperthyroid patients with increased cutaneous temperature and 11 patients with presumably low cutaneous blood flow (3 patients with hypothyroidism and 8 with obstructive arteriosclerosis).

  8. Comparing three methods for teaching Newton's third law

    Science.gov (United States)

    Smith, Trevor I.; Wittmann, Michael C.

    2007-12-01

    Although guided-inquiry methods for teaching introductory physics have been individually shown to be more effective at improving conceptual understanding than traditional lecture-style instruction, researchers in physics education have not studied differences among reform-based curricula in much detail. Several researchers have developed University of Washington style tutorial materials, but the different curricula have not been compared against each other. Our study examines three tutorials designed to improve student understanding of Newton’s third law: the University of Washington’s Tutorials in Introductory Physics (TIP), the University of Maryland’s Activity-Based Tutorials (ABT), and the Open Source Tutorials (OST) also developed at the University of Maryland. Each tutorial was designed with different goals and agendas, and each employs different methods to help students understand the physics. We analyzed pretest and post-test data, including course examinations and data from the Force and Motion Conceptual Evaluation (FMCE). Using both FMCE and course data, we find that students using the OST version of the tutorial perform better than students using either of the other two.

  9. GenoSets: visual analytic methods for comparative genomics.

    Directory of Open Access Journals (Sweden)

    Aurora A Cain

    Many important questions in biology are, fundamentally, comparative, and this extends to our analysis of a growing number of sequenced genomes. Existing genomic analysis tools are often organized around literal views of genomes as linear strings. Even when information is highly condensed, these views grow cumbersome as larger numbers of genomes are added. Data aggregation and summarization methods from the field of visual analytics can provide abstracted comparative views, suitable for sifting large multi-genome datasets to identify critical similarities and differences. We introduce a software system for visual analysis of comparative genomics data. The system automates the process of data integration, and provides the analysis platform to identify and explore features of interest within these large datasets. GenoSets borrows techniques from business intelligence and visual analytics to provide a rich interface of interactive visualizations supported by a multi-dimensional data warehouse. In GenoSets, visual analytic approaches are used to enable querying based on orthology, functional assignment, and taxonomic or user-defined groupings of genomes. GenoSets links this information together with coordinated, interactive visualizations for both detailed and high-level categorical analysis of summarized data. GenoSets has been designed to simplify the exploration of multiple genome datasets and to facilitate reasoning about genomic comparisons. Case examples are included showing the use of this system in the analysis of 12 Brucella genomes. GenoSets software and the case study dataset are freely available at http://genosets.uncc.edu. We demonstrate that the integration of genomic data using a coordinated multiple view approach can simplify the exploration of large comparative genomic data sets, and facilitate reasoning about comparisons and features of interest.

  10. Multicriteria Personnel Selection by the Modified Fuzzy VIKOR Method

    Directory of Open Access Journals (Sweden)

    Rasim M. Alguliyev

    2015-01-01

    Personnel evaluation is an important process in human resource management. Its multicriteria nature and the presence of both qualitative and quantitative factors make it considerably more complex. In this study, a fuzzy hybrid multicriteria decision-making (MCDM) model is proposed for personnel evaluation. This model solves the personnel evaluation problem in a fuzzy environment where both criteria and weights may be fuzzy sets. Triangular fuzzy numbers are used to evaluate the suitability of personnel and the approximate reasoning of linguistic values. For evaluation, we have selected five information culture criteria. The weights of the criteria were calculated using the worst-case method. After that, a modified fuzzy VIKOR is proposed to rank the alternatives. The outcome of this research is the ranking and selection of the best alternative with the help of the fuzzy VIKOR and modified fuzzy VIKOR techniques. A comparative analysis of the results of the fuzzy VIKOR and modified fuzzy VIKOR methods is presented. Experiments showed that the proposed modified fuzzy VIKOR method has some advantages over the fuzzy VIKOR method. Firstly, from a computational complexity point of view, the presented model is effective. Secondly, it has a higher acceptable advantage compared to the fuzzy VIKOR method.
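
    For readers unfamiliar with the underlying ranking steps, here is a minimal crisp VIKOR sketch; the paper's modified fuzzy VIKOR replaces the crisp arithmetic with triangular-fuzzy operations, and the decision matrix and weights below are hypothetical:

```python
import numpy as np

# Crisp VIKOR sketch on hypothetical candidate scores over five benefit
# criteria (alternatives x criteria).
X = np.array([[7, 8, 6, 9, 5],
              [8, 6, 7, 7, 8],
              [6, 9, 8, 6, 7]], dtype=float)
w = np.array([0.3, 0.25, 0.2, 0.15, 0.1])      # criteria weights

f_star, f_minus = X.max(axis=0), X.min(axis=0) # best and worst per criterion
d = (f_star - X) / (f_star - f_minus)          # normalized regret per cell
S = (w * d).sum(axis=1)                        # group utility
R = (w * d).max(axis=1)                        # individual regret
v = 0.5                                        # strategy weight
Q = (v * (S - S.min()) / (S.max() - S.min())
     + (1 - v) * (R - R.min()) / (R.max() - R.min()))
print("ranking (best first):", Q.argsort() + 1)
```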

  11. A Learning Method for Neural Networks Based on a Pseudoinverse Technique

    Directory of Open Access Journals (Sweden)

    Chinmoy Pal

    1996-01-01

    A theoretical formulation of a fast learning method based on a pseudoinverse technique is presented. The efficiency and robustness of the method are verified with the help of an Exclusive OR problem and a dynamic system identification of a linear single degree of freedom mass–spring problem. It is observed that, compared with the conventional backpropagation method, the proposed method has a better convergence rate and a higher degree of learning accuracy with a lower equivalent learning coefficient. It is also found that unlike the steepest descent method, the learning capability of which is dependent on the value of the learning coefficient ν, the proposed pseudoinverse based backpropagation algorithm is comparatively robust with respect to its equivalent variable learning coefficient. A combination of the pseudoinverse method and the steepest descent method is proposed for a faster, more accurate learning capability.
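
    A minimal sketch of the pseudoinverse idea on the Exclusive OR problem mentioned in the record: the output-layer weights are solved in one shot with a pseudoinverse rather than iterated by backpropagation. The fixed random hidden layer and its size are assumptions for illustration, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

# Exclusive OR problem: 4 patterns, 1 output.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0.0], [1.0], [1.0], [0.0]])

# Fixed random hidden layer (an assumption for illustration); the output
# weights then form a linear least-squares problem solved by pseudoinverse.
W = rng.normal(size=(2, 8))
b = rng.normal(size=8)
H = np.tanh(X @ W + b)             # hidden activations, shape (4, 8)
beta = np.linalg.pinv(H) @ T       # one-shot output weights

print(np.round(H @ beta, 2))       # ~= [[0], [1], [1], [0]]
```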

  12. A Gradient Taguchi Method for Engineering Optimization

    Science.gov (United States)

    Hwang, Shun-Fa; Wu, Jen-Chih; He, Rong-Song

    2017-10-01

    To balance the robustness and the convergence speed of optimization, a novel hybrid algorithm consisting of the Taguchi method and the steepest descent method is proposed in this work. The Taguchi method, using orthogonal arrays, can quickly find the optimum combination of the levels of various factors, even when the number of levels and/or factors is quite large. This algorithm is applied to the inverse determination of elastic constants of three composite plates by combining a numerical method and vibration testing. For these problems, the proposed algorithm found better elastic constants at less computational cost. Therefore, the proposed algorithm has good robustness and fast convergence speed compared to some hybrid genetic algorithms.
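
    A toy sketch of the two-stage idea: a Taguchi-style screening over factor levels picks a starting point, and steepest descent refines it. The objective function, levels, and step size are invented, and a full factorial stands in for the orthogonal array; a real application would use the composite-plate vibration model:

```python
import numpy as np
from itertools import product

def loss(x):
    # Hypothetical objective standing in for the inverse identification fit.
    return (x[0] - 1.3) ** 2 + 5.0 * (x[1] + 0.7) ** 2

# Stage 1: Taguchi-style screening over discrete factor levels (a full
# factorial here; a real orthogonal array tests far fewer combinations).
levels = (-2.0, 0.0, 2.0)
x = np.array(min(product(levels, repeat=2), key=lambda c: loss(c)))

# Stage 2: steepest descent refinement from the best screened combination.
for _ in range(200):
    grad = np.array([2.0 * (x[0] - 1.3), 10.0 * (x[1] + 0.7)])
    x -= 0.05 * grad
print(x)   # converges to ~[1.3, -0.7]
```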

  13. The Proposal to “Snapshot” Raim Method for Gnss Vessel Receivers Working in Poor Space Segment Geometry

    Directory of Open Access Journals (Sweden)

    Nowak Aleksander

    2015-12-01

    Nowadays, we can observe an increase in research on the use of small unmanned autonomous vessels (SUAVs) to patrol and guide critical areas, including harbours. A proposal for a "snapshot" RAIM (Receiver Autonomous Integrity Monitoring) method for GNSS receivers mounted on SUAVs operating in poor space segment geometry is presented in the paper. Existing "snapshot" RAIM methods and algorithms used in practical applications have been developed for airborne receivers, and thus two main assumptions have been made. The first is that the geometry of visible satellites is strong, meaning that the exclusion of any satellite from the positioning solution does not cause significant deterioration of the Dilution of Precision (DOP) coefficients. The second is that only one outlier can appear in the pseudorange measurements. In the case of a SUAV operating in a harbour, these two assumptions cannot be accepted. Because of the vessels' small dimensions, the GNSS antenna is only a few decimetres above sea level, and regular ships, buildings and harbour facilities block and reflect satellite signals. Thus, a different approach to "snapshot" RAIM is necessary. A method based on analysis of the maximum allowable separation of positioning sub-solutions, using some information from EGNOS messages, is described in the paper. Theoretical assumptions and the results of numerical experiments are presented.
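
    The core of a solution-separation "snapshot" RAIM check can be sketched in a few lines: compute the full least-squares position solution, recompute it with each satellite excluded in turn, and flag the epoch if any sub-solution separates too far from the full one. The geometry, residuals, and threshold below are hypothetical placeholders (the paper derives the allowable separation from EGNOS data):

```python
import numpy as np

rng = np.random.default_rng(2)

def ls_solution(G, rho):
    # Linearized least-squares position/clock solution.
    return np.linalg.lstsq(G, rho, rcond=None)[0]

# Hypothetical geometry: unit line-of-sight vectors plus a clock column,
# and a pseudorange residual vector, for 6 visible satellites.
los = rng.normal(size=(6, 3))
los /= np.linalg.norm(los, axis=1, keepdims=True)
G = np.hstack([los, np.ones((6, 1))])
rho = rng.normal(scale=2.0, size=6)

full = ls_solution(G, rho)
separations = [
    np.linalg.norm(ls_solution(np.delete(G, i, 0), np.delete(rho, i))[:3] - full[:3])
    for i in range(G.shape[0])
]

threshold = 10.0   # placeholder; the paper derives this from EGNOS data
print("max separation:", max(separations), "alarm:", max(separations) > threshold)
```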

  14. Comparative Study of Different Methods for the Prediction of Drug-Polymer Solubility

    DEFF Research Database (Denmark)

    Knopp, Matthias Manne; Tajber, Lidia; Tian, Yiwei

    2015-01-01

    monomer weight ratios. The drug–polymer solubility at 25 °C was predicted using the Flory–Huggins model, from data obtained at elevated temperature using thermal analysis methods based on the recrystallization of a supersaturated amorphous solid dispersion and two variations of the melting point......, which suggests that this method can be used as an initial screening tool if a liquid analogue is available. The learnings of this important comparative study provided general guidance for the selection of the most suitable method(s) for the screening of drug–polymer solubility....

  15. Anatomically-aided PET reconstruction using the kernel method.

    Science.gov (United States)

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T; Catana, Ciprian; Qi, Jinyi

    2016-09-21

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.

  16. The Independent Evolution Method Is Not a Viable Phylogenetic Comparative Method.

    Directory of Open Access Journals (Sweden)

    Randi H Griffin

    Phylogenetic comparative methods (PCMs) use data on species traits and phylogenetic relationships to shed light on evolutionary questions. Recently, Smaers and Vinicius suggested a new PCM, Independent Evolution (IE), which purportedly employs a novel model of evolution based on Felsenstein's Adaptive Peak Model. The authors found that IE improves upon previous PCMs by producing more accurate estimates of ancestral states, as well as separate estimates of evolutionary rates for each branch of a phylogenetic tree. Here, we document substantial theoretical and computational issues with IE. When data are simulated under a simple Brownian motion model of evolution, IE produces severely biased estimates of ancestral states and changes along individual branches. We show that these branch-specific changes are essentially ancestor-descendant or "directional" contrasts, and draw parallels between IE and previous PCMs such as "minimum evolution". Additionally, while comparisons of branch-specific changes between variables have been interpreted as reflecting the relative strength of selection on those traits, we demonstrate through simulations that regressing IE estimated branch-specific changes against one another gives a biased estimate of the scaling relationship between these variables, and provides no advantages or insights beyond established PCMs such as phylogenetically independent contrasts. In light of our findings, we discuss the results of previous papers that employed IE. We conclude that Independent Evolution is not a viable PCM, and should not be used in comparative analyses.

  17. Numerical solution of sixth-order boundary-value problems using Legendre wavelet collocation method

    Science.gov (United States)

    Sohaib, Muhammad; Haq, Sirajul; Mukhtar, Safyan; Khan, Imad

    2018-03-01

    An efficient method is proposed to approximate sixth-order boundary value problems. The proposed method is based on Legendre wavelets, in which Legendre polynomials are used. The mechanism of the method is to use collocation points to convert the differential equation into a system of algebraic equations. For validation, two test problems are discussed. The results obtained from the proposed method are quite accurate, close to the exact solution, and compare well with other methods. The proposed method is computationally more effective and leads to more accurate results than other methods from the literature.
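
    The collocation mechanism described in the record, i.e. turning a boundary value problem into an algebraic system by enforcing the equation at collocation points, can be sketched for a second-order toy problem with a plain Legendre basis (the paper uses Legendre wavelets and sixth-order problems; this simplification only illustrates the mechanism):

```python
import numpy as np
from numpy.polynomial import legendre as L

# Collocation for the toy BVP u'' = 12 x^2, u(-1) = u(1) = 0,
# whose exact solution is u = x^4 - 1.
N = 8                                        # highest Legendre degree
basis = np.eye(N + 1)                        # row k = coefficients of P_k
xc = np.cos(np.pi * np.arange(1, N) / N)     # interior collocation points

rows, rhs = [], []
for xi in xc:                                # enforce u''(xi) = f(xi)
    rows.append([L.legval(xi, L.legder(basis[k], 2)) for k in range(N + 1)])
    rhs.append(12.0 * xi ** 2)
for xb in (-1.0, 1.0):                       # boundary conditions
    rows.append([L.legval(xb, basis[k]) for k in range(N + 1)])
    rhs.append(0.0)

c = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0]
print("error at x=0.5:", abs(L.legval(0.5, c) - (0.5 ** 4 - 1)))
```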

  18. The effective atomic numbers of some biomolecules calculated by two methods: A comparative study

    Energy Technology Data Exchange (ETDEWEB)

    Manohara, S. R.; Hanagodimath, S. M.; Gerward, L. [Department of Physics, Gulbarga University, Gulbarga, Karnataka 585 106 (India); Department of Physics, Technical University of Denmark, Lyngby DK-2800 (Denmark)

    2009-01-15

    The effective atomic numbers Z_eff of some fatty acids and amino acids have been calculated by two numerical methods, a direct method and an interpolation method, in the energy range of 1 keV-20 MeV. The notion of Z_eff is given a new meaning by using a modern database of photon interaction cross sections (WinXCom). The results of the two methods are compared and discussed. It is shown that for all biomolecules the direct method gives larger values of Z_eff than the interpolation method, in particular at low energies (1-100 keV). At medium energies (0.1-5 MeV), Z_eff for both methods is about constant and equal to the mean atomic number of the material. Wherever possible, the calculated values of Z_eff are compared with experimental data.

  19. The effective atomic numbers of some biomolecules calculated by two methods: A comparative study

    International Nuclear Information System (INIS)

    Manohara, S. R.; Hanagodimath, S. M.; Gerward, L.

    2009-01-01

    The effective atomic numbers Z_eff of some fatty acids and amino acids have been calculated by two numerical methods, a direct method and an interpolation method, in the energy range of 1 keV-20 MeV. The notion of Z_eff is given a new meaning by using a modern database of photon interaction cross sections (WinXCom). The results of the two methods are compared and discussed. It is shown that for all biomolecules the direct method gives larger values of Z_eff than the interpolation method, in particular at low energies (1-100 keV). At medium energies (0.1-5 MeV), Z_eff for both methods is about constant and equal to the mean atomic number of the material. Wherever possible, the calculated values of Z_eff are compared with experimental data.
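
    As a sketch of the direct method referred to above, the effective atomic number can be computed from elemental fractions and mass attenuation coefficients. The composition and μ/ρ values below are rough placeholders for water near 60 keV; real work would pull the cross sections from WinXCom/XCOM at each energy of interest:

```python
# Direct-method effective atomic number, sketched for water.
# element: (atom fraction, Z, A); mu/rho in cm^2/g are rough values
# near 60 keV -- real work would take them from WinXCom/XCOM.
composition = {
    "H": (2.0 / 3.0, 1, 1.008),
    "O": (1.0 / 3.0, 8, 15.999),
}
mu_rho = {"H": 0.326, "O": 0.190}

num = sum(f * A * mu_rho[el] for el, (f, Z, A) in composition.items())
den = sum(f * (A / Z) * mu_rho[el] for el, (f, Z, A) in composition.items())
print("Z_eff ~=", round(num / den, 2))   # close to the mean atomic number here
```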

  20. Comparing Symptoms of Autism Spectrum Disorders Using the Current "DSM-IV-TR" Diagnostic Criteria and the Proposed "DSM-V" Diagnostic Criteria

    Science.gov (United States)

    Worley, Julie A.; Matson, Johnny L.

    2012-01-01

    The American Psychiatric Association has proposed major revisions for the diagnostic category encompassing Autism Spectrum Disorders (ASD), which will reportedly increase the specificity and maintain the sensitivity of diagnoses. As a result, the aim of the current study was to compare symptoms of ASD in children and adolescents (N = 208) who met…

  1. Enhancing the Social Network Dimension of Lifelong Competence Development and Management Systems: A Proposal of Methods and Tools

    NARCIS (Netherlands)

    Cheak, Alicia; Angehrn, Albert; Sloep, Peter

    2006-01-01

    Cheak, A. M., Angehrn, A. A., & Sloep, P. (2006). Enhancing the social network dimension of lifelong competence development and management systems: A proposal of methods and tools. In R. Koper & K. Stefanov (Eds.). Proceedings of International Workshop in Learning Networks for Lifelong Competence

  2. Comparing Methods of Calculating Expected Annual Damage in Urban Pluvial Flood Risk Assessments

    DEFF Research Database (Denmark)

    Skovgård Olsen, Anders; Zhou, Qianqian; Linde, Jens Jørgen

    2015-01-01

    Estimating the expected annual damage (EAD) due to flooding in an urban area is of great interest for urban water managers and other stakeholders. It is a strong indicator for a given area showing how vulnerable it is to flood risk and how much can be gained by implementing e.g. climate change adaptation measures. This study identifies and compares three different methods for estimating the EAD based on unit costs of flooding of urban assets. One of these methods was used in previous studies and calculates the EAD based on a few extreme events by assuming a log-linear relationship between the cost of an event and the corresponding return period. This method is compared to methods that are either more complicated or require more calculations. The choice of method by which the EAD is calculated appears to be of minor importance. At all three case study areas it seems more important that there is a shift
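
    A minimal sketch of the log-linear variant described above: fit damage as a linear function of the log return period from a few extreme events, then integrate damage over exceedance probability to get the EAD. The return periods and damage figures are hypothetical:

```python
import numpy as np

# Damage figures for a few extreme events (hypothetical).
T = np.array([2.0, 10.0, 100.0])          # return periods in years
damage = np.array([0.5e6, 2.0e6, 8.0e6])  # cost per event

# Log-linear model: damage = a + b * ln(T); polyfit returns [b, a].
b, a = np.polyfit(np.log(T), damage, 1)

# EAD = integral of damage over exceedance probability p = 1/T.
p = np.linspace(1e-4, 1.0 / T[0], 10_000)
D = np.clip(a + b * np.log(1.0 / p), 0.0, None)
ead = np.sum(0.5 * (D[1:] + D[:-1]) * np.diff(p))  # trapezoidal rule
print(f"EAD ~= {ead:,.0f} per year")
```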

  3. Statistical method to compare massive parallel sequencing pipelines.

    Science.gov (United States)

    Elsensohn, M H; Leblay, N; Dimassi, S; Campan-Fournier, A; Labalme, A; Roucher-Boulez, F; Sanlaville, D; Lesca, G; Bardel, C; Roy, P

    2017-03-01

    Today, sequencing is frequently carried out by Massive Parallel Sequencing (MPS), which drastically cuts sequencing time and expense. Nevertheless, Sanger sequencing remains the main validation method to confirm the presence of variants. The analysis of MPS data involves the development of several bioinformatic tools, academic or commercial. We present here a statistical method to compare MPS pipelines and test it in a comparison between an academic (BWA-GATK) and a commercial pipeline (TMAP-NextGENe®), with and without reference to a gold standard (here, Sanger sequencing), on a panel of 41 genes in 43 epileptic patients. This method used the number of variants to fit log-linear models for pairwise agreements between pipelines. To assess the heterogeneity of the margins and the odds ratios of agreement, four log-linear models were used: a full model, a homogeneous-margin model, a model with a single odds ratio for all patients, and a model with a single intercept. Then a log-linear mixed model was fitted considering the biological variability as a random effect. Among the 390,339 base-pairs sequenced, TMAP-NextGENe® and BWA-GATK found, on average, 2253.49 and 1857.14 variants (single nucleotide variants and indels), respectively. Against the gold standard, the pipelines had similar sensitivities (63.47% vs. 63.42%) and close but significantly different specificities (99.57% vs. 99.65%; p < 0.001). Same-trend results were obtained when only single nucleotide variants were considered (99.98% specificity and 76.81% sensitivity for both pipelines). The method thus allows pipeline comparison and selection. It is generalizable to all types of MPS data and all pipelines.
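
    Before fitting the log-linear agreement models, the study's headline quantities (sensitivity and specificity against the Sanger gold standard) reduce to simple set arithmetic over called sites. A sketch with simulated call matrices; the rates are loosely modeled on the figures above, not real data:

```python
import numpy as np

rng = np.random.default_rng(3)
n_sites = 10_000
truth = rng.random(n_sites) < 0.01     # Sanger-confirmed variant sites

def simulate_pipeline(sens, fpr):
    """Simulated calls: detect true variants with probability `sens`,
    call false positives on non-variant sites with probability `fpr`."""
    hit = truth & (rng.random(n_sites) < sens)
    fp = ~truth & (rng.random(n_sites) < fpr)
    return hit | fp

pipeA = simulate_pipeline(0.63, 0.004)   # rates loosely echo the record
pipeB = simulate_pipeline(0.63, 0.003)

def sens_spec(calls):
    sens = (calls & truth).sum() / truth.sum()
    spec = (~calls & ~truth).sum() / (~truth).sum()
    return round(sens, 3), round(spec, 3)

print("pipeline A (sens, spec):", sens_spec(pipeA))
print("pipeline B (sens, spec):", sens_spec(pipeB))
```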

  4. Comparative study to develop a single method for retrieving wide class of recombinant proteins from classical inclusion bodies.

    Science.gov (United States)

    Padhiar, Arshad Ahmed; Chanda, Warren; Joseph, Thomson Patrick; Guo, Xuefang; Liu, Min; Sha, Li; Batool, Samana; Gao, Yifan; Zhang, Wei; Huang, Min; Zhong, Mintao

    2018-03-01

    The formation of inclusion bodies (IBs) is considered an Achilles heel of heterologous protein expression in bacterial hosts. A wide array of techniques has been developed to recover biochemically challenging proteins from IBs. However, recovering the active state, even within the same protein family, was found not to depend on any single established method. Here, we present a new strategy for the recovery of a wide range of sub-classes of recombinant proteins from harsh IBs. We found that numerous methods, and their combinations, for reducing IB formation and producing soluble proteins were not effective if the inclusion bodies were harsh in nature. On the other hand, different practices with mild solubilization buffers were able to solubilize IBs completely, yet the recovery of active protein requires a large screening of refolding buffers. Integrating previously reported mild solubilization techniques, we propose an improved method comprising a low sarkosyl concentration, ranging from 0.05 to 0.1%, coupled with slow freezing (-1 °C/min) and fast thawing (room temperature), resulting in greater solubility and integrity of the solubilized protein. A dilution method was employed with a single buffer to restore activity for every sub-class of recombinant protein. Results showed that the recovered protein's activity was significantly higher compared with the traditional solubilization/refolding approach. Solubilization of IBs by the described method proved milder in nature, restoring a native-like conformation of the proteins within IBs.

  5. Innovative spectrophotometric methods for simultaneous estimation of the novel two-drug combination: Sacubitril/Valsartan through two manipulation approaches and a comparative statistical study

    Science.gov (United States)

    Eissa, Maya S.; Abou Al Alamein, Amal M.

    2018-03-01

    Different innovative spectrophotometric methods were introduced for the first time for the simultaneous quantification of sacubitril/valsartan in their binary mixture and in their combined dosage form, without prior separation, through two manipulation approaches. These approaches were based either on two-wavelength selection in the zero-order absorption spectra, namely the dual wavelength method (DWL) at 226 nm and 275 nm for valsartan, the induced dual wavelength method (IDW) at 226 nm and 254 nm for sacubitril, and advanced absorbance subtraction (AAS) based on their iso-absorptive point at 246 nm (λiso) and 261 nm (where sacubitril shows equal absorbance values at the two selected wavelengths); or on ratio spectra using their normalized spectra, namely the ratio difference spectrophotometric method (RD) at 225 nm and 264 nm for both drugs in their ratio spectra, the first derivative of ratio spectra (DR1) at 232 nm for valsartan and 239 nm for sacubitril, and mean centering of ratio spectra (MCR) at 260 nm for both of them. Both sacubitril and valsartan showed linearity upon application of these methods in the range of 2.5-25.0 μg/mL. The developed spectrophotometric methods were successfully applied to the analysis of their combined tablet dosage form ENTRESTO™. The adopted spectrophotometric methods were also validated according to ICH guidelines. The results obtained from the proposed methods were statistically compared to a reported HPLC method using Student's t-test and F-test, and a comparative study was also carried out with one-way ANOVA, showing no statistical difference in terms of precision and accuracy.

  6. A method based on moving least squares for XRII image distortion correction

    International Nuclear Information System (INIS)

    Yan Shiju; Wang Chengtao; Ye Ming

    2007-01-01

    This paper presents a novel integrated method to correct geometric distortions of XRII (x-ray image intensifier) images. The method has been compared, in terms of mean-squared residual error measured at control and intermediate points, with two traditional local methods and a traditional global method. The proposed method is based on the methods of moving least squares (MLS) and polynomial fitting. Extensive experiments were performed on simulated and real XRII images. In simulation, the effects of pincushion distortion, sigmoidal distortion, local distortion, noise, and the number of control points were tested. The traditional local methods were sensitive to pincushion and sigmoidal distortion. The traditional global method was only sensitive to sigmoidal distortion. The proposed method was found to be sensitive neither to pincushion distortion nor to sigmoidal distortion. The sensitivity of the proposed method to local distortion was lower than or comparable with that of the traditional global method. The sensitivity of the proposed method to noise was higher than that of all three traditional methods. Nevertheless, provided the standard deviation of the noise was not greater than 0.1 pixels, the accuracy of the proposed method was still higher than that of the traditional methods. The sensitivity of the proposed method to the number of control points was much lower than that of the traditional methods. Provided that a proper cutoff radius is chosen, the accuracy of the proposed method is higher than that of the traditional methods. Experiments on real images, carried out using a 9 in. XRII, showed that the residual error of the proposed method (0.2544±0.2479 pixels) is lower than that of the traditional global method (0.4223±0.3879 pixels) and the local methods (0.4555±0.3518 pixels and 0.3696±0.4019 pixels, respectively)
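
    The MLS ingredient of the method can be sketched as a locally weighted least-squares fit of an affine map from distorted to true control points, evaluated per query pixel. This is a simplified illustration, not the paper's exact MLS-plus-polynomial formulation; the control points and bandwidth are invented:

```python
import numpy as np

def mls_affine(p, src, dst, sigma=50.0):
    """Correct point p given control pairs src (distorted) -> dst (true),
    by fitting a locally weighted affine map around p."""
    w = np.exp(-np.sum((src - p) ** 2, axis=1) / (2.0 * sigma ** 2))  # MLS weights
    A = np.hstack([src, np.ones((len(src), 1))])                      # affine basis
    W = np.diag(w)
    M = np.linalg.solve(A.T @ W @ A, A.T @ W @ dst)  # weighted least squares
    return np.array([p[0], p[1], 1.0]) @ M

src = np.array([[0, 0], [100, 0], [0, 100], [100, 100]], dtype=float)
dst = src + np.array([[2, 1], [-1, 2], [1, -2], [-2, -1]], dtype=float)
print(mls_affine(np.array([50.0, 50.0]), src, dst))
```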

  7. Comparing Methods for Estimating Direct Costs of Adverse Drug Events.

    Science.gov (United States)

    Gyllensten, Hanna; Jönsson, Anna K; Hakkarainen, Katja M; Svensson, Staffan; Hägg, Staffan; Rehnberg, Clas

    2017-12-01

    To estimate how direct health care costs resulting from adverse drug events (ADEs) and cost distribution are affected by methodological decisions regarding identification of ADEs, assigning relevant resource use to ADEs, and estimating costs for the assigned resources. ADEs were identified from medical records and diagnostic codes for a random sample of 4970 Swedish adults during a 3-month study period in 2008 and were assessed for causality. Results were compared for five cost evaluation methods, including different methods for identifying ADEs, assigning resource use to ADEs, and for estimating costs for the assigned resources (resource use method, proportion of registered cost method, unit cost method, diagnostic code method, and main diagnosis method). Different levels of causality for ADEs and ADEs' contribution to health care resource use were considered. Using the five methods, the maximum estimated overall direct health care costs resulting from ADEs ranged from Sk10,000 (Sk = Swedish krona; ~€1,500 in 2016 values) using the diagnostic code method to more than Sk3,000,000 (~€414,000) using the unit cost method in our study population. The most conservative definitions for ADEs' contribution to health care resource use and the causality of ADEs resulted in average costs per patient ranging from Sk0 using the diagnostic code method to Sk4066 (~€500) using the unit cost method. The estimated costs resulting from ADEs varied considerably depending on the methodological choices. The results indicate that costs for ADEs need to be identified through medical record review and by using detailed unit cost data. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  8. Proposed Model for Integrating RAMS Method in the Design Process in Construction

    Directory of Open Access Journals (Sweden)

    Saad Al-Jibouri

    2010-05-01

    There is a growing trend in the Netherlands for outsourcing public construction activities to the private sector through the use of integrated contracts. There is also an increasing emphasis from public clients on the use of RAMS and life cycle costing (LCC) in the design process of infrastructural projects to improve the performance of designed systems and optimize the project cost. RAMS is an acronym for 'reliability, availability, maintainability and safety' and represents a collection of techniques to provide predictions of the performance targets of the required system. Increasingly, RAMS targets are being specified in invitation-to-tender or contract documents, and the parties responsible for the design are required to provide evidence of its application in their design. Recent evidence from practice, complemented by a literature study, has shown that the knowledge and application of RAMS in infrastructural designs are in their infancy compared with other industrial sectors, and many designers in construction do not have the necessary knowledge and experience to apply it. This paper describes a proposed model for the integration of RAMS and LCC into the design process in construction. A variation of the model for the application of RAMS in 'design, build, finance and maintain' (DBFM) contracts that include maintenance requirements is also proposed. The two models involve providing guidelines to simplify the application of RAMS by the designers. The model has been validated for its practicality and usefulness during a workshop by experienced designers. DOI: 10.3763/aedm.2008.0100. Published in the Journal AEDM, Volume 5, Number 4, 2009, pp. 179-192(14)

  9. Enhancing the Social Network Dimension of Lifelong Competence Development and Management Systems: A proposal of methods and tools

    NARCIS (Netherlands)

    Cheak, Alicia; Angehrn, Albert; Sloep, Peter

    2006-01-01

    Cheak, A. M., Angehrn, A. A., & Sloep, P. B. (2006). Enhancing the social network dimension of lifelong competence development and management systems: A proposal of methods and tools. In E. J. R. Koper & K. Stefanov (Eds.), Proceedings of International Workshop on Learning Networks for Lifelong

  10. Familiarity Vs Trust: A Comparative Study of Domain Scientists' Trust in Visual Analytics and Conventional Analysis Methods.

    Science.gov (United States)

    Dasgupta, Aritra; Lee, Joon-Yong; Wilson, Ryan; Lafrance, Robert A; Cramer, Nick; Cook, Kristin; Payne, Samuel

    2017-01-01

    Combining interactive visualization with automated analytical methods like statistics and data mining facilitates data-driven discovery. These visual analytic methods are beginning to be instantiated within mixed-initiative systems, where humans and machines collaboratively influence evidence-gathering and decision-making. But an open research question is that, when domain experts analyze their data, can they completely trust the outputs and operations on the machine-side? Visualization potentially leads to a transparent analysis process, but do domain experts always trust what they see? To address these questions, we present results from the design and evaluation of a mixed-initiative, visual analytics system for biologists, focusing on analyzing the relationships between familiarity of an analysis medium and domain experts' trust. We propose a trust-augmented design of the visual analytics system, that explicitly takes into account domain-specific tasks, conventions, and preferences. For evaluating the system, we present the results of a controlled user study with 34 biologists where we compare the variation of the level of trust across conventional and visual analytic mediums and explore the influence of familiarity and task complexity on trust. We find that despite being unfamiliar with a visual analytic medium, scientists seem to have an average level of trust that is comparable with the same in conventional analysis medium. In fact, for complex sense-making tasks, we find that the visual analytic system is able to inspire greater trust than other mediums. We summarize the implications of our findings with directions for future research on trustworthiness of visual analytic systems.

  11. Hybrid recommendation methods in complex networks.

    Science.gov (United States)

    Fiasconaro, A; Tumminello, M; Nicosia, V; Latora, V; Mantegna, R N

    2015-07-01

    We propose two recommendation methods, based on the appropriate normalization of already existing similarity measures, and on the convex combination of the recommendation scores derived from similarity between users and between objects. We validate the proposed measures on three data sets, and we compare the performance of our methods to other recommendation systems recently proposed in the literature. We show that the proposed similarity measures allow us to attain an improvement in performance of up to 20% with respect to existing nonparametric methods, and that the accuracy of a recommendation can vary widely from one specific bipartite network to another, which suggests that a careful choice of the most suitable method is highly relevant for effective recommendation on a given system. Finally, we study how an increasing presence of random links in the network affects the recommendation scores, finding that one of the two recommendation algorithms introduced here can systematically outperform the others on noisy data sets.
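
    The convex-combination idea reduces to a one-line blend of user-based and object-based scores. A sketch on a tiny hypothetical binary user-object matrix; the specific normalization used in the paper is not reproduced here:

```python
import numpy as np

# Tiny binary user-object adoption matrix (hypothetical data).
R = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1],
              [1, 1, 0, 1]], dtype=float)

def cosine_sim(M):
    norms = np.linalg.norm(M, axis=1, keepdims=True)
    return (M @ M.T) / np.clip(norms * norms.T, 1e-12, None)

score_user = cosine_sim(R) @ R          # user-similarity based scores
score_obj = R @ cosine_sim(R.T)         # object-similarity based scores

lam = 0.5                               # convex-combination parameter
score = lam * score_user + (1.0 - lam) * score_obj
score[R > 0] = -np.inf                  # never re-recommend adopted objects
print("top recommendation per user:", score.argmax(axis=1))
```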

  12. Comparative study of electromagnetic compatibility methods in printed circuit board design tools

    International Nuclear Information System (INIS)

    Marinova, Galia

    2002-01-01

    The paper considers the state of the art in electromagnetic compatibility (EMC) oriented printed circuit board (PCB) design. A general methodology of EMC oriented PCB design is synthesized. The main CAD tools available today are assessed and compared for their ability to support EMC oriented design. To help non-experts, a knowledge base containing more than 50 basic rules for EMC-oriented PCB design is proposed. It can be applied in PCB design CAD tools that possess rule builders, or it can support interactive design. Trends in the area of EMC-oriented PCB design are identified. (Author)

  13. Test Capability of Comparative NAA Method in Analysis of Long Lived Element in SRM 1648

    International Nuclear Information System (INIS)

    Sri-Wardani

    2005-01-01

    The comparative NAA method was examined in the analysis of long-lived element content in the NIST SRM 1648 air particulate sample, in order to evaluate the capability of the comparative NAA method used at P2TRR. The analysis determined the elements contained in the sample, namely Sc, Co, Zn, Br, Rb, Sb, Hf and Th, with optimum results within a bias of 10%. The optimum results for the long-lived elements were obtained with good accuracy and precision. The data obtained show that the comparative NAA method with Gamma Trac and APTEC software is capable of analyzing several kinds of elements in environmental samples. Therefore, this method could be implemented for biological and health samples. (author)

  14. Are three methods better than one? A comparative assessment of usability evaluation methods in an EHR.

    Science.gov (United States)

    Walji, Muhammad F; Kalenderian, Elsbeth; Piotrowski, Mark; Tran, Duong; Kookal, Krishna K; Tokede, Oluwabunmi; White, Joel M; Vaderhobli, Ram; Ramoni, Rachel; Stark, Paul C; Kimmes, Nicole S; Lagerweij, Maxim; Patel, Vimla L

    2014-05-01

    To comparatively evaluate the effectiveness of three different methods involving end-users for detecting usability problems in an EHR: user testing, semi-structured interviews and surveys. Data were collected at two major urban dental schools from faculty, residents and dental students to assess the usability of a dental EHR for developing a treatment plan. These included user testing (N=32), semi-structured interviews (N=36), and surveys (N=35). The three methods together identified a total of 187 usability violations: 54% via user testing, 28% via the semi-structured interview and 18% from the survey method, with modest overlap. These usability problems were classified into 24 problem themes in 3 broad categories. User testing covered the broadest range of themes (83%), followed by the interview (63%) and survey (29%) methods. Multiple evaluation methods provide a comprehensive approach to identifying EHR usability challenges and specific problems. The three methods were found to be complementary, and thus each can provide unique insights for software enhancement. Interview and survey methods were found not to be sufficient by themselves, but when used in conjunction with the user testing method, they provided a comprehensive evaluation of the EHR. We recommend using a multi-method approach when testing the usability of health information technology because it provides a more comprehensive picture of usability challenges. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  15. Toward cost-efficient sampling methods

    Science.gov (United States)

    Luo, Peng; Li, Yongli; Wu, Chong; Zhang, Guijie

    2015-09-01

    Sampling methods have received much attention in the field of complex networks in general and statistical physics in particular. This paper proposes two new sampling methods based on the idea that a small number of vertices with high node degree can capture most of the structural information of a complex network. The two proposed sampling methods are efficient in sampling high-degree nodes, so they remain useful even when the sampling rate is low, i.e., they are cost-efficient. The first new sampling method builds on the widely used stratified random sampling (SRS) method, and the second improves the well-known snowball sampling (SBS) method. In order to demonstrate the validity and accuracy of the two new sampling methods, we compare them with existing sampling methods on three commonly used simulated networks (a scale-free network, a random network and a small-world network) and on two real networks. The experimental results illustrate that the two proposed sampling methods perform much better than the existing sampling methods in recovering the true network structure characteristics reflected by the clustering coefficient, Bonacich centrality and average path length, especially when the sampling rate is low.
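
    The paper's exact algorithms are not reproduced here; a minimal sketch of the underlying idea, snowballing through the highest-degree unvisited neighbours first, could look as follows (toy graph, hypothetical function name):

```python
def degree_biased_snowball(adj, seed, budget):
    """Snowball sampling that expands through the highest-degree unvisited
    neighbours first, so hubs enter the sample early even at low sampling
    rates. A sketch of the idea, not the paper's exact SBS variant."""
    sampled, frontier = {seed}, [seed]
    while frontier and len(sampled) < budget:
        node = frontier.pop(0)
        neighbours = [v for v in adj[node] if v not in sampled]
        neighbours.sort(key=lambda v: len(adj[v]), reverse=True)
        for v in neighbours:
            if len(sampled) >= budget:
                break
            sampled.add(v)
            frontier.append(v)
    return sampled

# Toy hub-and-spoke graph: node 0 is the hub and is found immediately.
adj = {0: [1, 2, 3, 4], 1: [0, 5], 2: [0], 3: [0], 4: [0], 5: [1]}
print(degree_biased_snowball(adj, seed=1, budget=4))
```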

  16. Weber, Durkheim, and the comparative method.

    Science.gov (United States)

    Kapsis, R E

    1977-10-01

    This essay compares and contrasts the means by which Durkheim and Weber dealt with methodological issues peculiar to the comparative study of societies, what Smelser has called "the problem of sociocultural variability and complexity." More specifically, it examines how Weber and Durkheim chose appropriate comparative units for their empirical studies. The approaches that Weber and Durkheim brought to the problem of cross-cultural comparison have critical implications for more current procedures used in the comparative study of contemporary and historical societies.

  17. Comparative study of heuristic evaluation and usability testing methods.

    Science.gov (United States)

    Thyvalikakath, Thankam Paul; Monaco, Valerie; Thambuganipalle, Himabindu; Schleyer, Titus

    2009-01-01

    Usability methods, such as heuristic evaluation, cognitive walk-throughs and user testing, are increasingly used to evaluate and improve the design of clinical software applications. There is still some uncertainty, however, as to how those methods can be used to support the development process and evaluation in the most meaningful manner. In this study, we compared the results of a heuristic evaluation with those of formal user tests in order to determine which usability problems were detected by both methods. We conducted heuristic evaluation and usability testing on four major commercial dental computer-based patient records (CPRs), which together cover 80% of the market for chairside computer systems among general dentists. Both methods yielded strong evidence that the dental CPRs have significant usability problems. An average of 50% of empirically-determined usability problems were identified by the preceding heuristic evaluation. Some statements of heuristic violations were specific enough to precisely identify the actual usability problem that study participants encountered. Other violations were less specific, but still manifested themselves in usability problems and poor task outcomes. In this study, heuristic evaluation identified a significant portion of problems found during usability testing. While we make no assumptions about the generalizability of the results to other domains and software systems, heuristic evaluation may, under certain circumstances, be a useful tool to determine design problems early in the development cycle.

  18. Bulk Electric Load Cost Calculation Methods: Iraqi Network Comparative Study

    Directory of Open Access Journals (Sweden)

    Qais M. Alias

    2016-09-01

    It is vital in any industry to regain the spent capital plus running costs and a margin of profit for the industry to flourish. The electricity industry, which touches everyday life, follows the same financial-economic strategy. Cost allocation is a major issue in all sectors of the electric industry, viz. generation, transmission and distribution. Generation and distribution service costing is well documented in the literature, while the transmission share is still in need of research. In this work, the cost of supplying a bulk electric load connected to the EHV system is calculated. A simple lump-average method is used to provide a rough costing guide. In addition, two transmission pricing methods are employed, namely the postage-stamp method and the load-flow-based MW-distance method, to calculate the transmission share in the total cost of each individual bulk load. The results of the three costing methods are then analyzed and compared for the 400 kV Iraqi power grid considered as a case study.
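
    Of the three, the postage-stamp method is the simplest to state: each bulk load pays a share of the total transmission cost proportional to its MW demand, regardless of location or actual network usage (the MW-distance method additionally requires a load-flow solution). A minimal sketch with illustrative numbers:

```python
def postage_stamp_charges(total_transmission_cost, loads_mw):
    """Postage-stamp transmission pricing: charge each bulk load in
    proportion to its share of total MW demand."""
    total_mw = sum(loads_mw.values())
    return {name: total_transmission_cost * mw / total_mw
            for name, mw in loads_mw.items()}

# Illustrative figures only, not data from the Iraqi 400 kV case study.
print(postage_stamp_charges(1_000_000.0, {"Load A": 300.0, "Load B": 700.0}))
```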

  19. Real-time hybrid simulation using the convolution integral method

    International Nuclear Information System (INIS)

    Kim, Sung Jig; Christenson, Richard E; Wojtkiewicz, Steven F; Johnson, Erik A

    2011-01-01

    This paper proposes a real-time hybrid simulation method that will allow complex systems to be tested within the hybrid test framework by employing the convolution integral (CI) method. The proposed CI method is potentially transformative for real-time hybrid simulation: it can allow real-time hybrid simulation to be conducted regardless of the size and complexity of the numerical model, and it can ensure numerical stability in the presence of high frequency responses in the simulation. This paper presents the general theory behind the proposed CI method and provides experimental verification by comparing the CI method to the current integration time-stepping (ITS) method. Real-time hybrid simulation is conducted in the Advanced Hazard Mitigation Laboratory at the University of Connecticut. A seismically excited two-story shear frame building with a magneto-rheological (MR) fluid damper is selected as the test structure to experimentally validate the proposed method. The building structure is numerically modeled and simulated, while the MR damper is physically tested. Real-time hybrid simulation using the proposed CI method is shown to provide accurate results.
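
    At its core, the CI method replaces step-by-step integration of the numerical substructure with a convolution of its impulse response against the measured interface force history. A minimal offline sketch for a unit-mass damped SDOF substructure (the paper's contribution is carrying this out within the real-time loop):

```python
import numpy as np

def ci_response(impulse_response, force_history, dt):
    """Discrete convolution integral y(t) = sum_k h(t - t_k) f(t_k) dt,
    evaluated at the latest time step."""
    n = len(force_history)
    h = impulse_response[:n][::-1]        # align h(t - t_k) with f(t_k)
    return float(np.dot(h, force_history)) * dt

# Unit-mass SDOF oscillator: h(t) = exp(-zeta*wn*t) * sin(wd*t) / wd
dt, wn, zeta = 0.001, 2 * np.pi * 2.0, 0.05
wd = wn * np.sqrt(1 - zeta**2)
t = np.arange(0.0, 5.0, dt)
h = np.exp(-zeta * wn * t) * np.sin(wd * t) / wd
f = np.ones_like(t)                       # step interface force
print([round(ci_response(h, f[:k + 1], dt), 4) for k in (100, 500, 2500)])
```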

  20. Proposing New Methods to Enhance the Low-Resolution Simulated GPR Responses in the Frequency and Wavelet Domains

    Directory of Open Access Journals (Sweden)

    Reza Ahmadi

    2014-12-01

    To date, a number of numerical methods, including the popular Finite-Difference Time Domain (FDTD) technique, have been proposed to simulate Ground-Penetrating Radar (GPR) responses. Despite its advantages, the finite-difference method has pitfalls, notably very long run times when simulating the common case of media with high dielectric permittivity, which makes the forward modelling process lengthy even on modern high-speed computers. In the present study the well-known hyperbolic pattern response of horizontal cylinders, usually found in GPR B-Scan images, is used as a basic model to examine the possibility of reducing the forward modelling execution time. In general, the simulated GPR traces of common reflected objects are time shifted, as with the Normal Moveout (NMO) traces encountered in seismic reflection responses. This suggests applying the Fourier transform to the GPR traces and employing the time-shifting property of the transformation to interpolate traces between the adjacent simulated traces in the frequency domain (FD). Therefore, in the present study two post-processing algorithms have been adopted to increase the speed of forward modelling while maintaining the required precision. The first approach is based on linear interpolation in the Fourier domain, which allows an increased lateral trace-to-trace interval at an appropriate sampling frequency of the signal, preventing any aliasing. In the second approach, a super-resolution algorithm based on the 2D wavelet transform is developed to increase both the vertical and horizontal resolution of the GPR B-Scan images while preserving the scale and shape of hidden hyperbola features. Through comparing outputs from both methods with the corresponding actual high-resolution forward response, it is shown that both approaches can perform satisfactorily, although the wavelet-based approach outperforms the frequency-domain approach noticeably, both in amplitude and
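
    The frequency-domain interpolation rests on the Fourier shift theorem: delaying a trace by d samples multiplies its spectrum by exp(-2*pi*i*f*d). A minimal sketch of synthesizing a trace midway between two recorded traces under this idea (generic, not the authors' full algorithm):

```python
import numpy as np

def fractional_time_shift(trace, shift_samples):
    """Shift a trace by a possibly fractional number of samples via the
    shift theorem: x(t - d) <-> X(f) * exp(-2j*pi*f*d)."""
    freqs = np.fft.fftfreq(len(trace))
    spectrum = np.fft.fft(trace) * np.exp(-2j * np.pi * freqs * shift_samples)
    return np.fft.ifft(spectrum).real

def midpoint_trace(trace_a, trace_b, arrival_a, arrival_b):
    """Interpolated trace between two simulated traces: shift each one to
    the midpoint arrival time (in samples) and average."""
    mid = 0.5 * (arrival_a + arrival_b)
    return 0.5 * (fractional_time_shift(trace_a, mid - arrival_a)
                  + fractional_time_shift(trace_b, mid - arrival_b))

t = np.linspace(0.0, 1.0, 256, endpoint=False)
pulse = np.exp(-((t - 0.3) / 0.02) ** 2)            # reflected wavelet
interp = midpoint_trace(pulse, fractional_time_shift(pulse, 12.0), 0.0, 12.0)
```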

  1. Multi-Objective Optimization for Energy Performance Improvement of Residential Buildings: A Comparative Study

    Directory of Open Access Journals (Sweden)

    Kangji Li

    2017-02-01

    Numerous conflicting criteria exist in building design optimization, such as energy consumption, greenhouse gas emission and indoor thermal performance. Different simulation-based optimization strategies and various optimization algorithms have been developed, and a few of them are analyzed and compared here in solving building design problems. This paper presents an efficient optimization framework to facilitate optimization designs with the aid of commercial simulation software and MATLAB. The performances of three optimization strategies, including the proposed approach, the GenOpt method and an artificial neural network (ANN) method, are investigated using a case study of a simple building energy model. Results show that the proposed optimization framework has competitive performance compared with the GenOpt method. Further, in another practical case, four popular multi-objective algorithms, namely the non-dominated sorting genetic algorithm (NSGA-II), multi-objective particle swarm optimization (MOPSO), the multi-objective genetic algorithm (MOGA) and multi-objective differential evolution (MODE), are implemented using the proposed optimization framework and compared on three criteria. Results indicate that MODE achieves close-to-optimal solutions with the best diversity and execution time, while MOPSO achieves an uncompetitive result in this case study.
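
    All four multi-objective algorithms ultimately rank candidate designs by Pareto dominance between the conflicting objectives. A minimal sketch of the dominance test and non-dominated filtering, with illustrative objective values (both objectives minimized):

```python
def dominates(a, b):
    """Design a Pareto-dominates design b if it is no worse in every
    objective and strictly better in at least one (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(designs):
    """Return the non-dominated subset of a list of objective vectors."""
    return [d for d in designs
            if not any(dominates(other, d) for other in designs if other != d)]

# (annual energy use in kWh/m^2, discomfort hours) -- illustrative only
designs = [(120.0, 300.0), (150.0, 180.0), (130.0, 260.0), (160.0, 400.0)]
print(pareto_front(designs))   # the fourth design is dominated and dropped
```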

  2. A comparative experimental evaluation of uncertainty estimation methods for two-component PIV

    Science.gov (United States)

    Boomsma, Aaron; Bhattacharya, Sayantan; Troolin, Dan; Pothos, Stamatios; Vlachos, Pavlos

    2016-09-01

    Uncertainty quantification in planar particle image velocimetry (PIV) measurement is critical for proper assessment of the quality and significance of reported results. New uncertainty estimation methods have been recently introduced generating interest about their applicability and utility. The present study compares and contrasts current methods, across two separate experiments and three software packages in order to provide a diversified assessment of the methods. We evaluated the performance of four uncertainty estimation methods, primary peak ratio (PPR), mutual information (MI), image matching (IM) and correlation statistics (CS). The PPR method was implemented and tested in two processing codes, using in-house open source PIV processing software (PRANA, Purdue University) and Insight4G (TSI, Inc.). The MI method was evaluated in PRANA, as was the IM method. The CS method was evaluated using DaVis (LaVision, GmbH). Utilizing two PIV systems for high and low-resolution measurements and a laser doppler velocimetry (LDV) system, data were acquired in a total of three cases: a jet flow and a cylinder in cross flow at two Reynolds numbers. LDV measurements were used to establish a point validation against which the high-resolution PIV measurements were validated. Subsequently, the high-resolution PIV measurements were used as a reference against which the low-resolution PIV data were assessed for error and uncertainty. We compared error and uncertainty distributions, spatially varying RMS error and RMS uncertainty, and standard uncertainty coverages. We observed that qualitatively, each method responded to spatially varying error (i.e. higher error regions resulted in higher uncertainty predictions in that region). However, the PPR and MI methods demonstrated reduced uncertainty dynamic range response. In contrast, the IM and CS methods showed better response, but under-predicted the uncertainty ranges. The standard coverages (68% confidence interval) ranged from
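
    Among the four metrics, the PPR is the simplest to state: the ratio of the tallest to the second-tallest local maximum of the cross-correlation plane; converting that ratio into a velocity uncertainty then relies on the empirical calibration described in the PPR literature. A sketch of the ratio itself:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def primary_peak_ratio(corr_plane):
    """Ratio of the primary to the secondary local maximum of a PIV
    cross-correlation plane; higher values indicate a more trustworthy
    displacement estimate."""
    is_local_max = corr_plane == maximum_filter(corr_plane, size=3)
    peaks = np.sort(corr_plane[is_local_max])[::-1]
    return np.inf if len(peaks) < 2 or peaks[1] <= 0 else peaks[0] / peaks[1]

rng = np.random.default_rng(0)
plane = 0.2 * rng.random((32, 32))   # noise peaks
plane[16, 16] = 1.0                  # strong displacement peak
print(primary_peak_ratio(plane))
```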

  3. A comparative experimental evaluation of uncertainty estimation methods for two-component PIV

    International Nuclear Information System (INIS)

    Boomsma, Aaron; Troolin, Dan; Pothos, Stamatios; Bhattacharya, Sayantan; Vlachos, Pavlos

    2016-01-01

    Uncertainty quantification in planar particle image velocimetry (PIV) measurement is critical for proper assessment of the quality and significance of reported results. New uncertainty estimation methods have been recently introduced generating interest about their applicability and utility. The present study compares and contrasts current methods, across two separate experiments and three software packages in order to provide a diversified assessment of the methods. We evaluated the performance of four uncertainty estimation methods, primary peak ratio (PPR), mutual information (MI), image matching (IM) and correlation statistics (CS). The PPR method was implemented and tested in two processing codes, using in-house open source PIV processing software (PRANA, Purdue University) and Insight4G (TSI, Inc.). The MI method was evaluated in PRANA, as was the IM method. The CS method was evaluated using DaVis (LaVision, GmbH). Utilizing two PIV systems for high and low-resolution measurements and a laser doppler velocimetry (LDV) system, data were acquired in a total of three cases: a jet flow and a cylinder in cross flow at two Reynolds numbers. LDV measurements were used to establish a point validation against which the high-resolution PIV measurements were validated. Subsequently, the high-resolution PIV measurements were used as a reference against which the low-resolution PIV data were assessed for error and uncertainty. We compared error and uncertainty distributions, spatially varying RMS error and RMS uncertainty, and standard uncertainty coverages. We observed that qualitatively, each method responded to spatially varying error (i.e. higher error regions resulted in higher uncertainty predictions in that region). However, the PPR and MI methods demonstrated reduced uncertainty dynamic range response. In contrast, the IM and CS methods showed better response, but under-predicted the uncertainty ranges. The standard coverages (68% confidence interval) ranged from

  4. A comparative study of three cytotoxicity test methods for nanomaterials using sodium lauryl sulfate.

    Science.gov (United States)

    Kwon, Jae-Sung; Kim, Kwang-Mahn; Kim, Kyoung-Nam

    2014-10-01

    The biocompatibility evaluation of nanomaterials is essential for their medical diagnostic and therapeutic usage, where a cytotoxicity test is the simplest form of biocompatibility evaluation. Three methods have been commonly used in previous studies for the cytotoxicity testing of nanomaterials: trypan blue exclusion, colorimetric assay using water soluble tetrazolium (WST), and imaging under a microscope following calcein AM/ethidium homodimer-1 staining. However, there has yet to be a study to compare each method. Therefore, in this study three methods were compared using the standard reference material of sodium lauryl sulfate (SLS). Each method of the cytotoxicity test was carried out using mouse fibroblasts of L-929 exposed to different concentrations of SLS. Compared to the gold standard trypan blue exclusion test, both colorimetric assay using water soluble tetrazolium (WST) and imaging under microscope with calcein AM/ethidium homodimer-1 staining showed results that were not statistically different. Also, each method exhibited various advantages and disadvantages, which included the need of equipment, time taken for the experiment, and provision of additional information such as cell morphology. Therefore, this study concludes that all three methods of cytotoxicity testing may be valid, though careful consideration will be needed when selecting tests with regard to time, finances, and the amount of information required by the researcher(s).

  5. A comparative study of different aspects of manipulating ratio spectra applied for ternary mixtures: derivative spectrophotometry versus wavelet transform.

    Science.gov (United States)

    Salem, Hesham; Lotfy, Hayam M; Hassan, Nagiba Y; El-Zeiny, Mohamed B; Saleh, Sarah S

    2015-01-25

    This work represents a comparative study of different aspects of manipulating ratio spectra, which are: double divisor ratio spectra derivative (DR-DD), area under curve of derivative ratio (DR-AUC) and its novel approach, namely area under the curve correction method (AUCCM) applied for overlapped spectra; successive derivative of ratio spectra (SDR) and continuous wavelet transform (CWT) methods. The proposed methods represent different aspects of manipulating ratio spectra of the ternary mixture of Ofloxacin (OFX), Prednisolone acetate (PA) and Tetryzoline HCl (TZH) combined in eye drops in the presence of benzalkonium chloride as a preservative. The proposed methods were checked using laboratory-prepared mixtures and were successfully applied for the analysis of pharmaceutical formulation containing the cited drugs. The proposed methods were validated according to the ICH guidelines. A comparative study was conducted between those methods regarding simplicity, limitation and sensitivity. The obtained results were statistically compared with those obtained from the reported HPLC method, showing no significant difference with respect to accuracy and precision. Copyright © 2014 Elsevier B.V. All rights reserved.
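
    The common core of these manipulations is division of the mixture spectrum by a divisor spectrum followed by differentiation, which cancels the divisor component's contribution (it becomes a constant after division). A minimal numpy sketch of a first-derivative ratio spectrum, generic rather than any one of the compared methods in full:

```python
import numpy as np

def derivative_ratio_spectrum(mixture, divisor, wavelengths):
    """Divide the mixture spectrum by the divisor spectrum, then take the
    first derivative so the divisor's (now constant) contribution vanishes."""
    return np.gradient(mixture / divisor, wavelengths)

# Synthetic two-component mixture over a range where the divisor is non-negligible.
wl = np.linspace(240.0, 300.0, 121)
analyte = np.exp(-((wl - 260.0) / 10.0) ** 2)
interferent = np.exp(-((wl - 280.0) / 15.0) ** 2)
mixture = analyte + 0.8 * interferent
d_ratio = derivative_ratio_spectrum(mixture, interferent, wl)
```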

  6. COMPARATIVE ANALYSIS OF EXISTING INTENSIVE METHODS OF TEACHING FOREIGN LANGUAGES

    Directory of Open Access Journals (Sweden)

    Maria Mytnyk

    2016-12-01

    The article deals with the study and comparative analysis of existing intensive methods of teaching foreign languages, carried out to identify their positive and negative aspects. The author traces the idea of rational organization and intensification of foreign language teaching from its inception to its integration into a unified system. The advantages and disadvantages of the most popular intensive methods, characteristic of different historical periods, are analyzed: the suggestopedic method of G. Lozanov, the method of activation of students' reserve capacities of G. Kitaygorodskaya, the emotional-semantic method of I. Schechter, the intensive foreign language course of L. Gegechkori, the suggestocybernetic integral method of accelerated foreign language learning of V. Petrusinsky, and the short immersion course in spoken language of A. Plesnevich. The principles of learning and the role of each method in the development of intensive foreign language teaching are also analyzed. The author identifies a number of advantages of the intensive methods: (1) assimilation of a large number of lexical and grammatical units; (2) active use of the acquired knowledge, skills and abilities in oral speech communication in a foreign language; (3) the ability to use the acquired language material not only in one's own speech but also in understanding the interlocutor; (4) overcoming psychological barriers, including the fear of making a mistake; (5) high efficiency and fast learning; and a number of disadvantages: (6) the excessive amount of new language material presented; (7) training of oral forms of communication only; (8) neglect of grammatical units and models.

  7. Comparative Visualization of Vector Field Ensembles Based on Longest Common Subsequence

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Richen; Guo, Hanqi; Zhang, Jiang; Yuan, Xiaoru

    2016-04-19

    We propose a longest common subsequence (LCS) based approach to compute the distance among vector field ensembles. By measuring how many common blocks the ensemble pathlines pass through, the LCS distance defines the similarity among vector field ensembles by counting the number of shared domain data blocks. Compared to traditional methods (e.g. point-wise Euclidean distance or dynamic time warping distance), the proposed approach is robust to outliers, missing data, and the sampling rate of pathline timesteps. Taking advantage of its smaller, reusable intermediate output, visualization based on the proposed LCS approach reveals temporal trends in the data at low storage cost and avoids tracing pathlines repeatedly. Finally, we evaluate our method on both synthetic data and simulation data, which demonstrates the robustness of the proposed approach.
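
    The LCS distance itself reduces to classical dynamic programming over the sequences of block IDs that two pathlines visit. A minimal sketch (the normalization is a plausible choice, not necessarily the paper's exact definition):

```python
def lcs_length(blocks_a, blocks_b):
    """Longest common subsequence of two block-ID sequences, by standard
    O(len(a) * len(b)) dynamic programming."""
    m, n = len(blocks_a), len(blocks_b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            dp[i][j] = (dp[i - 1][j - 1] + 1 if blocks_a[i - 1] == blocks_b[j - 1]
                        else max(dp[i - 1][j], dp[i][j - 1]))
    return dp[m][n]

def lcs_distance(blocks_a, blocks_b):
    """Dissimilarity of two pathlines: 0 when one block sequence is a
    subsequence of the other, 1 when they share no blocks in order."""
    return 1.0 - lcs_length(blocks_a, blocks_b) / min(len(blocks_a), len(blocks_b))

print(lcs_distance([1, 2, 5, 7, 8], [1, 5, 6, 8]))  # shares 1, 5, 8 -> 0.25
```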

  8. A Systematic Identification Method for Thermodynamic Property Modelling

    DEFF Research Database (Denmark)

    Ana Perederic, Olivia; Cunico, Larissa; Sarup, Bent

    2017-01-01

    In this work, a systematic identification method for thermodynamic property modelling is proposed. The aim of the method is to improve the quality of phase equilibria prediction by group contribution based property prediction models. The method is applied to lipid systems, where the Original UNIFAC model is used. Using the proposed method to estimate the interaction parameters from VLE data only, a better phase equilibria prediction for both VLE and SLE was obtained. The results were validated and compared with the original model's performance.

  9. Underground Mining Method Selection Using WPM and PROMETHEE

    Science.gov (United States)

    Balusa, Bhanu Chander; Singam, Jayanthu

    2018-04-01

    The aim of this paper is to present a solution to the problem of selecting a suitable underground mining method for the mining industry. This is achieved using two multi-attribute decision-making techniques: the weighted product method (WPM) and the preference ranking organization method for enrichment evaluation (PROMETHEE). The analytic hierarchy process is used to calculate the weights of the attributes (i.e., the parameters used in this paper). Mining method selection depends on physical, mechanical, economical and technical parameters. The WPM and PROMETHEE techniques have the ability to consider the relationships between the parameters and the mining methods, and they give higher accuracy and faster computation when compared with other decision-making techniques. The proposed techniques are applied to determine the effective mining method for a bauxite mine, and their results are compared with methods used in earlier research works. The results show that the conventional cut-and-fill method is the most suitable mining method.
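
    The WPM score is easy to sketch: each candidate method's normalized attribute ratings are raised to the AHP weights and multiplied together. Illustrative ratings only, not the bauxite case-study data:

```python
from math import prod

def weighted_product_scores(ratings, weights):
    """Weighted product method: score each alternative as the product of
    its max-normalized attribute ratings raised to the attribute weights
    (weights sum to 1, e.g. from AHP). Benefit-type attributes assumed."""
    col_max = [max(col) for col in zip(*ratings)]
    return [prod((x / m) ** w for x, m, w in zip(row, col_max, weights))
            for row in ratings]

# Rows: candidate mining methods; columns: attribute ratings on a 1-10 scale.
ratings = [[7, 8, 6], [9, 5, 7], [6, 9, 8]]
weights = [0.5, 0.3, 0.2]
print(weighted_product_scores(ratings, weights))   # higher = more suitable
```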

  10. Approximate Method for Solving the Linear Fuzzy Delay Differential Equations

    Directory of Open Access Journals (Sweden)

    S. Narayanamoorthy

    2015-01-01

    We propose an algorithm for the approximate solution of linear fuzzy delay differential equations using the Adomian decomposition method. The detailed algorithm of the approach is provided. The approximate solution is compared with the exact solution to confirm the validity and efficiency of the method in handling linear fuzzy delay differential equations. To demonstrate the features of the proposed method, a numerical example is presented.

  11. comparative analysis of some existing kinetic models with proposed

    African Journals Online (AJOL)

    IGNATIUS NWIDI

    two statistical parameters, namely the linear regression coefficient of correlation (R2) and ... Keywords: Heavy metals, Biosorption, Kinetics Models, Comparative analysis, Average Relative Error. 1. ... If the flow rate is low, a simple manual batch.

  12. 48 CFR 315.605 - Content of unsolicited proposals.

    Science.gov (United States)

    2010-10-01

    ... CONTRACTING METHODS AND CONTRACT TYPES CONTRACTING BY NEGOTIATION Unsolicited Proposals 315.605 Content of... prepared under Government supervision; (b) The methods and approaches stated in the proposal were developed... Title (This certification shall be signed by a responsible management official of the proposing...

  13. The comparative evaluation of patients′ body dry weight under hemodialysis using two methods: Bioelectrical impedance analysis and conventional method

    Directory of Open Access Journals (Sweden)

    Neda Alijanian

    2012-01-01

    Background: Dry weight (DW) is an important concept for patients undergoing hemodialysis. The conventional method of estimating it is time consuming and operator dependent. Bioelectrical impedance analysis (BIA) is a new and simple method reported to be an accurate way of estimating DW. In this study, we aimed to compare the conventional estimation of DW with DW measured by BIA. Materials and Methods: This study, performed in Isfahan, Iran, involved 130 uremic patients. DW was calculated by both the conventional (CDW) and BIA (BIADW) methods, and the results were compared based on different grouping factors, including sex, underlying cause of renal failure (RF) (diabetic and non-diabetic RF), body mass index (BMI) status, and sessions of hemodialysis. We also calculated the difference between the DWs of the two methods (DWdiff = CDW - BIADW). Results: The mean BIADW was significantly lower than the mean CDW (57.20 ± 1.82 vs. 59.36 ± 1.77, P < 0.001). After grouping cases according to underlying cause, BMI, sex and dialysis sessions, BIADW remained significantly lower than CDW. Conclusion: Given the problems with CDW measurement that are corrected by BIA, and the greater clinical reliability of CDW, we conclude that although the conventional method is a time-consuming and operator-dependent way to assess DW, DW could be estimated by combining the two methods through the mathematical correlation between them.

  14. Comparison of model reference and map based control method for vehicle stability enhancement

    NARCIS (Netherlands)

    Baek, S.; Son, M.; Song, J.; Boo, K.; Kim, H.

    2012-01-01

    A map-based control method to improve vehicle lateral stability is proposed in this study and compared with the conventional method, a model-reference controller. The model-reference controller determines the compensated yaw moment using the sliding-mode method, whereas the proposed map-based

  15. Automatic path proposal computation for CT-guided percutaneous liver biopsy.

    Science.gov (United States)

    Helck, A; Schumann, C; Aumann, J; Thierfelder, K; Strobl, F F; Braunagel, M; Niethammer, M; Clevert, D A; Hoffmann, R T; Reiser, M; Sandner, T; Trumm, C

    2016-12-01

    To evaluate the feasibility of automatic software-based path proposals for CT-guided percutaneous biopsies. Thirty-three patients (60 ± 12 years) referred for CT-guided biopsy of focal liver lesions were consecutively included. Pre-interventional CT and dedicated software (FraunhoferMeVis Pathfinder) were used for (semi)automatic segmentation of relevant structures. The software subsequently generated three path proposals of decreasing quality for CT-guided biopsy. Proposed needle paths were compared with the consensus proposal of two experts (comparable, less suitable, not feasible). In case of comparable results, an approach equivalent to the software-based path proposal was used. The quality of the segmentation process was evaluated (Likert scale, 1 = best, 6 = worst), and the processing time was registered. All biopsies were performed successfully without complications. In 91 % one of the three automatic path proposals was rated comparable to the experts' proposal. None of the first proposals was rated not feasible, and 76 % were rated comparable to the experts' proposal. 7 % of the automatic path proposals were rated not feasible, all being second or third choices. In 79 %, segmentation was at least good. The average total time for establishing an automatic path proposal was 42 ± 9 s. Automatic software-based path proposal for CT-guided liver biopsies in the majority of cases provides path proposals that are easy to establish and comparable to experts' insertion trajectories.

  16. The next GUM and its proposals: a comparison study

    Science.gov (United States)

    Damasceno, J. C.; Couto, P. R. G.

    2018-03-01

    The Guide to the Expression of Uncertainty in Measurement (GUM) is currently under revision. New proposals for its implementation were circulated in the form of a draft document. Two of the main changes are explored in this work using a Brinell hardness model example. Changes in the evaluation of uncertainty for repeated indications and in the construction of coverage intervals are compared with the classic GUM and with Monte Carlo simulation method.
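
    The Monte Carlo method of the GUM supplements propagates the input distributions through the model directly and reads the coverage interval from the output sample. A minimal sketch for a Brinell hardness model, HB = 2F / (pi * D * (D - sqrt(D^2 - d^2))), with illustrative input estimates rather than the paper's values:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000

# Input quantities drawn as (mean, standard uncertainty); values illustrative.
F = rng.normal(187.5, 0.5, N)     # applied force, kgf
D = rng.normal(2.5, 0.001, N)     # indenter ball diameter, mm
d = rng.normal(1.0, 0.005, N)     # indentation diameter, mm

HB = 2 * F / (np.pi * D * (D - np.sqrt(D**2 - d**2)))

print(f"HB = {HB.mean():.2f}, u(HB) = {HB.std(ddof=1):.2f}")
print("95 % coverage interval:", np.percentile(HB, [2.5, 97.5]).round(2))
```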

  17. Greater Prevalence of Proposed ICD-11 Alcohol and Cannabis Dependence Compared to ICD-10, DSM-IV, and DSM-5 in Treated Adolescents.

    Science.gov (United States)

    Chung, Tammy; Cornelius, Jack; Clark, Duncan; Martin, Christopher

    2017-09-01

    Proposed International Classification of Diseases, 11th edition (ICD-11), criteria for substance use disorder (SUD) radically simplify the algorithm used to diagnose substance dependence. Major differences in case identification across DSM and ICD impact determinations of treatment need and conceptualizations of substance dependence. This study compared the draft algorithm for ICD-11 SUD against DSM-IV, DSM-5, and ICD-10, for alcohol and cannabis. Adolescents (n = 339, ages 14 to 18) admitted to intensive outpatient addictions treatment completed, as part of a research study, a Structured Clinical Interview for DSM SUDs adapted for use with adolescents and which has been used to assess DSM and ICD SUD diagnoses. Analyses examined prevalence across classification systems, diagnostic concordance, and sources of diagnostic disagreement. Prevalence of any past-year proposed ICD-11 alcohol or cannabis use disorder was significantly lower compared to DSM-IV and DSM-5, whereas prevalence of proposed ICD-11 dependence was greater than that of dependence under DSM-IV and ICD-10 and of moderate/severe use disorder under DSM-5. ICD-11 and DSM-5 SUD diagnoses showed only moderate concordance. For both alcohol and cannabis, youth typically met criteria for an ICD-11 dependence diagnosis by reporting tolerance and much time spent using or recovering from the substance, rather than symptoms indicating impaired control over use. The proposed ICD-11 dependence algorithm appears to "overdiagnose" dependence on alcohol and cannabis relative to DSM-IV and ICD-10 dependence, and DSM-5 moderate/severe use disorder, generating potential "false-positive" cases of dependence. Among youth who met criteria for proposed ICD-11 dependence, few reported impaired control over substance use, highlighting ongoing issues in the conceptualization and diagnosis of SUD. Copyright © 2017 by the Research Society on Alcoholism.

  18. Detailed characterizations of a Comparative Reactivity Method (CRM) instrument: experiments vs. modelling

    Science.gov (United States)

    Michoud, V.; Hansen, R. F.; Locoge, N.; Stevens, P. S.; Dusanter, S.

    2015-04-01

    The hydroxyl radical (OH) is an important oxidant in the daytime troposphere that controls the lifetime of most trace gases, whose oxidation leads to the formation of harmful secondary pollutants such as ozone (O3) and Secondary Organic Aerosols (SOA). In spite of the importance of OH, uncertainties remain concerning its atmospheric budget, and integrated measurements of the total sink of OH can help reduce these uncertainties. In this context, several methods have been developed to measure the first-order loss rate of ambient OH, called total OH reactivity. Among these techniques, the Comparative Reactivity Method (CRM) is promising and has already been widely used in the field and in atmospheric simulation chambers. This technique relies on monitoring competitive OH reactions between a reference molecule (pyrrole) and compounds present in ambient air inside a sampling reactor. However, artefacts and interferences exist for this method and a thorough characterization of the CRM technique is needed. In this study, we present a detailed characterization of a CRM instrument, assessing the corrections that need to be applied to ambient measurements. The main corrections are, in the order of their integration in the data processing: (1) a correction for the change in relative humidity between zero air and ambient air, (2) a correction for the formation of spurious OH when artificially produced HO2 reacts with NO in the sampling reactor, and (3) a correction for the deviation from pseudo first-order kinetics. The dependences of these artefacts on various measurable parameters, such as the pyrrole-to-OH ratio or the bimolecular reaction rate constants of ambient trace gases with OH, are also studied. From these dependences, parameterizations are proposed to correct the OH reactivity measurements for the abovementioned artefacts. A comparison of experimental and simulation results is then discussed. The simulations were performed using a 0-D box model including either (1) a
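
    Before any of the corrections studied here, the basic CRM expression (Sinha et al., 2008) derives the total reactivity from three measured pyrrole levels: C1 with no OH present, C2 with OH in zero air, and C3 with OH in ambient air. A sketch with illustrative mixing ratios:

```python
def crm_reactivity(c1, c2, c3, k_pyr_oh=1.2e-10, n_air=2.46e19):
    """Uncorrected CRM equation: R = (C3 - C2) / (C1 - C3) * k_pyr * [C1].
    c1..c3 in ppbv, k in cm^3 molecule^-1 s^-1; returns reactivity in s^-1.
    The paper's corrections (humidity, HO2 + NO, kinetics) come on top."""
    c1_molec = c1 * 1e-9 * n_air            # ppbv -> molecules cm^-3
    return (c3 - c2) / (c1 - c3) * k_pyr_oh * c1_molec

print(f"{crm_reactivity(c1=70.0, c2=25.0, c3=35.0):.1f} s^-1")
```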

  19. A Comparative Study on the Architecture Internet of Things and its’ Implementation method

    Science.gov (United States)

    Xiao, Zhiliang

    2017-08-01

    With the rapid development of science and technology, the Internet-based Internet of Things has emerged and achieved good results. To further build a complete Internet of Things system and realize its design, a comparative study of the structural indicators of candidate network architectures is needed; on that basis, the connection methods of the Internet of Things can be examined in more depth so as to unify the architecture with its implementation methods. This paper analyzes two types of Internet of Things systems, makes a brief comparative study of their important indicators, and then introduces the connection and realization methods of the Internet of Things based on its concept and architecture.

  20. Fundamental Frequency Estimation using Polynomial Rooting of a Subspace-Based Method

    DEFF Research Database (Denmark)

    Jensen, Jesper Rindom; Christensen, Mads Græsbøll; Jensen, Søren Holdt

    2010-01-01

    The proposed method offers two improvements compared to HMUSIC. First, by using the proposed method we can obtain an estimate of the fundamental frequency without doing a grid search as in HMUSIC. This is because the fundamental frequency is estimated as the argument of the root lying closest to the unit circle. Second, we obtain a higher spectral resolution compared to HMUSIC, which is a property of polynomial rooting methods. Our simulation results show that the proposed method is applicable to real-life signals, and that we in most cases obtain a higher spectral resolution than HMUSIC.
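
    The root-selection step is easy to illustrate: among the roots of the subspace-derived polynomial, the fundamental frequency is the argument of the root lying closest to the unit circle. Building that polynomial from the noise subspace follows the paper and is omitted; the toy polynomial below simply plants a root at a known frequency:

```python
import numpy as np

def f0_from_roots(poly_coeffs, fs):
    """Fundamental frequency as the argument of the polynomial root closest
    to the unit circle (positive-frequency roots only)."""
    roots = np.roots(poly_coeffs)
    roots = roots[np.imag(roots) >= 0.0]
    best = roots[np.argmin(np.abs(np.abs(roots) - 1.0))]
    return np.angle(best) / (2.0 * np.pi) * fs

# Plant a root near exp(2*pi*i*0.05) plus a spurious real root at 0.5.
z0 = 0.999 * np.exp(2j * np.pi * 0.05)
coeffs = np.poly([z0, np.conj(z0), 0.5])
print(f0_from_roots(coeffs, fs=8000.0))   # ~400 Hz, no grid search needed
```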

  1. Comparative performance evaluation of automated segmentation methods of hippocampus from magnetic resonance images of temporal lobe epilepsy patients.

    Science.gov (United States)

    Hosseini, Mohammad-Parsa; Nazem-Zadeh, Mohammad-Reza; Pompili, Dario; Jafari-Khouzani, Kourosh; Elisevich, Kost; Soltanian-Zadeh, Hamid

    2016-01-01

    Segmentation of the hippocampus from magnetic resonance (MR) images is a key task in the evaluation of mesial temporal lobe epilepsy (mTLE) patients. Several automated algorithms have been proposed, although manual segmentation remains the benchmark. Choosing a reliable algorithm is problematic since structural definition pertaining to multiple edges, missing and fuzzy boundaries, and shape changes varies among mTLE subjects. Lack of statistical references and guidance for quantifying the reliability and reproducibility of automated techniques has further detracted from automated approaches. The purpose of this study was to develop a systematic and statistical approach using a large dataset for the evaluation of automated methods and to establish a method that would achieve results better approximating those attained by manual tracing in the epileptogenic hippocampus. A template database of 195 (81 males, 114 females; age range 32-67 yr, mean 49.16 yr) MR images of mTLE patients was used in this study. Hippocampal segmentation was accomplished manually and by two well-known tools (FreeSurfer and HAMMER) and two previously published methods developed at the authors' institution [automatic brain structure segmentation (ABSS) and LocalInfo]. To establish which method performed better for mTLE cases, several voxel-based, distance-based, and volume-based performance metrics were considered. Statistical validations of the results using automated techniques were compared with the results of benchmark manual segmentation. Extracted metrics were analyzed to find the method that provided a result more similar to the benchmark. Among the four automated methods, ABSS generated the most accurate results. For this method, the Dice coefficient was 5.13%, 14.10%, and 16.67% higher, Hausdorff was 22.65%, 86.73%, and 69.58% lower, precision was 4.94%, -4.94%, and 12.35% higher, and the root mean square (RMS) was 19.05%, 61.90%, and 65.08% lower than LocalInfo, FreeSurfer, and
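
    Among the voxel-based metrics reported, the Dice coefficient is the standard overlap score between an automated segmentation and the manual benchmark:

```python
import numpy as np

def dice_coefficient(seg_auto, seg_manual):
    """Dice overlap 2*|A ∩ B| / (|A| + |B|) between two binary label
    volumes; 1.0 means perfect agreement with the manual benchmark."""
    a = np.asarray(seg_auto, dtype=bool)
    b = np.asarray(seg_manual, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto = np.zeros((4, 4), dtype=bool);   auto[1:3, 1:3] = True
manual = np.zeros((4, 4), dtype=bool); manual[1:3, 1:4] = True
print(dice_coefficient(auto, manual))  # 2*4 / (4 + 6) = 0.8
```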

  2. Communicative Competence of the Fourth Year Students: Basis for Proposed English Language Program

    Science.gov (United States)

    Tuan, Vu Van

    2017-01-01

    This study on the level of communicative competence, covering linguistic/grammatical and discourse competence, aimed at constructing a proposed English language program for 5 key universities in Vietnam. The descriptive method was employed, together with comparative techniques and correlational analysis. The researcher treated the surveyed data…

  3. Highly comparative time-series analysis: the empirical structure of time series and their methods.

    Science.gov (United States)

    Fulcher, Ben D; Little, Max A; Jones, Nick S

    2013-06-06

    The process of collecting and organizing sets of observations represents a common theme throughout the history of science. However, despite the ubiquity of scientists measuring, recording and analysing the dynamics of different processes, an extensive organization of scientific time-series data and analysis methods has never been performed. Addressing this, annotated collections of over 35 000 real-world and model-generated time series, and over 9000 time-series analysis algorithms are analysed in this work. We introduce reduced representations of both time series, in terms of their properties measured by diverse scientific methods, and of time-series analysis methods, in terms of their behaviour on empirical time series, and use them to organize these interdisciplinary resources. This new approach to comparing across diverse scientific data and methods allows us to organize time-series datasets automatically according to their properties, retrieve alternatives to particular analysis methods developed in other scientific disciplines and automate the selection of useful methods for time-series classification and regression tasks. The broad scientific utility of these tools is demonstrated on datasets of electroencephalograms, self-affine time series, heartbeat intervals, speech signals and others, in each case contributing novel analysis techniques to the existing literature. Highly comparative techniques that compare across an interdisciplinary literature can thus be used to guide more focused research in time-series analysis for applications across the scientific disciplines.
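
    The reduced representation of a time series is simply a vector of summary statistics returned by many analysis operations; the work uses thousands of such operations, and the sketch below computes four illustrative ones:

```python
import numpy as np

def feature_vector(x):
    """A tiny feature-based representation of a time series (mean, spread,
    lag-1 autocorrelation, zero-crossing rate of the z-scored series)."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return np.array([x.mean(),
                     x.std(),
                     np.corrcoef(z[:-1], z[1:])[0, 1],
                     float((z[:-1] * z[1:] < 0).mean())])

rng = np.random.default_rng(0)
white_noise = rng.standard_normal(500)
random_walk = np.cumsum(rng.standard_normal(500))
print(feature_vector(white_noise).round(2))   # lag-1 autocorrelation near 0
print(feature_vector(random_walk).round(2))   # lag-1 autocorrelation near 1
```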

  4. Comparing the Costs and Acceptability of Three Fidelity Assessment Methods for Assertive Community Treatment.

    Science.gov (United States)

    Rollins, Angela L; Kukla, Marina; Salyers, Michelle P; McGrew, John H; Flanagan, Mindy E; Leslie, Doug L; Hunt, Marcia G; McGuire, Alan B

    2017-09-01

    Successful implementation of evidence-based practices requires valid, yet practical fidelity monitoring. This study compared the costs and acceptability of three fidelity assessment methods: on-site, phone, and expert-scored self-report. Thirty-two randomly selected VA mental health intensive case management teams completed all fidelity assessments using a standardized scale and provided feedback on each. Personnel and travel costs across the three methods were compared for statistical differences. Both phone and expert-scored self-report methods demonstrated significantly lower costs than on-site assessments, even when excluding travel costs. However, participants preferred on-site assessments. Remote fidelity assessments hold promise in monitoring large scale program fidelity with limited resources.

  5. Conceptual design of covering method for the proposed LILW near-surface repository at Cernavoda

    International Nuclear Information System (INIS)

    Diaconu, Daniela

    2003-01-01

    The disposal concept for the low and intermediate level (LIL) wastes resulting from NPP operation combines natural and engineered barriers in order to ensure the safety of the environment and population. The Saligny site has been proposed for LIL waste disposal. Preliminary performance assessments indicate that the loess and clay layers are efficient natural barriers against water flow and radionuclide migration through the vadose zone to the local aquifers. At present, site characterization studies are concentrated on the potential factors affecting the long-term integrity of the disposal facility. This analysis showed that surface erosion by wind and water, and bio-intrusion by plant roots and burrowing animals, could affect the long-term disposal safety. Based on the preliminary erosion results, as well as on the high probability of bio-intrusion by plant roots and burrowing animals (i.e. moles, mice), different covering systems able to ensure the long-term safety of the repository have been proposed and analyzed. FEHM and HYDRUS 2D water flow simulations have been performed in order to compare their efficiency in reducing the infiltration rate into the repository. From this point of view, the covering system combining a capillary barrier and a resistive layer proved to have the best behavior.

  6. Annotated Computer Output for Illustrative Examples of Clustering Using the Mixture Method and Two Comparable Methods from SAS.

    Science.gov (United States)

    1987-06-26

    Report AD-A184 687, Mathematical Sciences Institute, Cornell University. Companion technical reports: An introduction to the use of mixture models in clustering, Cornell University Biometrics Unit Technical Report BU-920-M and Mathematical Sciences Institute; Annotated computer output for illustrative examples of clustering using the mixture method and two comparable methods from SAS, Cornell University Biometrics Unit Technical Report BU-921-M and Mathematical Sciences Institute.

  7. Proposed method for reconstructing velocity profiles using a multi-electrode electromagnetic flow meter

    International Nuclear Information System (INIS)

    Kollár, László E; Lucas, Gary P; Zhang, Zhichao

    2014-01-01

    An analytical method is developed for the reconstruction of velocity profiles using measured potential distributions obtained around the boundary of a multi-electrode electromagnetic flow meter (EMFM). The method is based on the discrete Fourier transform (DFT), and is implemented in Matlab. The method assumes the velocity profile in a section of a pipe as a superposition of polynomials up to sixth order. Each polynomial component is defined along a specific direction in the plane of the pipe section. For a potential distribution obtained in a uniform magnetic field, this direction is not unique for quadratic and higher-order components; thus, multiple possible solutions exist for the reconstructed velocity profile. A procedure for choosing the optimum velocity profile is proposed. It is applicable for single-phase or two-phase flows, and requires measurement of the potential distribution in a non-uniform magnetic field. The potential distribution in this non-uniform magnetic field is also calculated for the possible solutions using weight values. Then, the velocity profile with the calculated potential distribution which is closest to the measured one provides the optimum solution. The reliability of the method is first demonstrated by reconstructing an artificial velocity profile defined by polynomial functions. Next, velocity profiles in different two-phase flows, based on results from the literature, are used to define the input velocity fields. In all cases, COMSOL Multiphysics is used to model the physical specifications of the EMFM and to simulate the measurements; thus, COMSOL simulations produce the potential distributions on the internal circumference of the flow pipe. These potential distributions serve as inputs for the analytical method. The reconstructed velocity profiles show satisfactory agreement with the input velocity profiles. The method described in this paper is most suitable for stratified flows and is not applicable to axisymmetric flows in

  8. Human Detection System by Fusing Depth Map-Based Method and Convolutional Neural Network-Based Method

    Directory of Open Access Journals (Sweden)

    Anh Vu Le

    2017-01-01

    In this paper, the depth images and the colour images provided by Kinect sensors are used to enhance the accuracy of human detection. The depth-based human detection method is fast but less accurate, while the faster region convolutional neural network-based method is accurate but requires a rather complex hardware configuration. To simultaneously leverage the advantages and relieve the drawbacks of each method, a system with one master and one client is proposed. The final goal is a novel Robot Operating System (ROS)-based Perception Sensor Network (PSN) system, which is more accurate and ready for real-time application. The experimental results demonstrate that the proposed method outperforms other conventional methods in challenging scenarios.

  9. Deciding the way. Comparing energy risks: methodologies and issues

    International Nuclear Information System (INIS)

    Matsuki, Yoshio; Lee, R.

    1999-01-01

    The following major issues in the comparative assessment of energy systems are discussed: target users; the decision-making process; the policy-making context; setting boundaries; aggregated health indicators; monetary valuation; long-term health effects; global warming; and methods to reflect uncertainties. Approaches to the study of these issues are suggested.

  10. [Titration comparative study of TOPINA Tablets in patients with localization related epilepsy: double-blind comparative study by rapid and slow titration methods].

    Science.gov (United States)

    Kaneko, Sunao; Inoue, Yushi; Sasagawa, Mutsuo; Kato, Masaaki

    2012-04-01

    To compare the tolerability and efficacy of two titration methods (rapid and slow titration) for TOPINA Tablets with different dosages and escalation periods, a double-blind comparative study was conducted in patients with localization-related epilepsy. A total of 183 patients were randomized to either rapid titration (initial dosage 100 mg/day increased by 100-200 mg at weekly intervals) or slow titration (initial dosage 50 mg/day increased in 50 mg/day increments at weekly intervals). TOPINA Tablets were administered for 12 weeks up to a maximum dosage of 400 mg/day. The incidence of adverse events leading to treatment interruption or withdrawal was 18.9% with rapid titration and 14.8% with slow titration, a difference that was not statistically significant (p = 0.554). The incidence of adverse events and adverse reactions with slow titration was slightly lower than with rapid titration. The common adverse events and adverse reactions reported with the two titration methods were comparable and well tolerated. On the other hand, the efficacy of slow titration (percent reduction in seizure rate and responder rate) was comparable with that of rapid titration. In conclusion, there were no significant differences in therapeutic response to TOPINA Tablets between the two titration methods.

  11. The discrete ordinate method in association with the finite-volume method in non-structured mesh; Methode des ordonnees discretes associee a la methode des volumes finis en maillage non structure

    Energy Technology Data Exchange (ETDEWEB)

    Le Dez, V; Lallemand, M [Ecole Nationale Superieure de Mecanique et d'Aerotechnique (ENSMA), 86 - Poitiers (France); Sakami, M; Charette, A [Quebec Univ., Chicoutimi, PQ (Canada). Dept. des Sciences Appliquees

    1997-12-31

    The description of an efficient method of radiant heat transfer field determination in a grey semi-transparent environment included in a 2-D polygonal cavity with surface boundaries that reflect the radiation in a purely diffusive manner is proposed, at the equilibrium and in radiation-conduction coupling situation. The technique uses simultaneously the finite-volume method in non-structured triangular mesh, the discrete ordinate method and the ray shooting method. The main mathematical developments and comparative results with the discrete ordinate method in orthogonal curvilinear coordinates are included. (J.S.) 10 refs.

  12. The discrete ordinate method in association with the finite-volume method in non-structured mesh; Methode des ordonnees discretes associee a la methode des volumes finis en maillage non structure

    Energy Technology Data Exchange (ETDEWEB)

    Le Dez, V.; Lallemand, M. [Ecole Nationale Superieure de Mecanique et d'Aerotechnique (ENSMA), 86 - Poitiers (France); Sakami, M.; Charette, A. [Quebec Univ., Chicoutimi, PQ (Canada). Dept. des Sciences Appliquees

    1996-12-31

    The description of an efficient method of radiant heat transfer field determination in a grey semi-transparent environment included in a 2-D polygonal cavity with surface boundaries that reflect the radiation in a purely diffusive manner is proposed, at the equilibrium and in radiation-conduction coupling situation. The technique uses simultaneously the finite-volume method in non-structured triangular mesh, the discrete ordinate method and the ray shooting method. The main mathematical developments and comparative results with the discrete ordinate method in orthogonal curvilinear coordinates are included. (J.S.) 10 refs.

  13. A Comparative Study of Feature Selection and Classification Methods for Gene Expression Data

    KAUST Repository

    Abusamra, Heba

    2013-05-01

    Microarray technology has enriched the study of gene expression in such a way that scientists are now able to measure the expression levels of thousands of genes in a single experiment. Microarray gene expression data have gained great importance in recent years due to their role in disease diagnosis and prognosis, which helps to choose the appropriate treatment plan for patients. While this technology has ushered in a new era in molecular classification, interpreting gene expression data remains a difficult problem and an active research area due to its native "high dimensional, low sample size" nature. Such problems pose great challenges to existing classification methods. Thus, effective feature selection techniques are often needed in this case to aid in correctly classifying different tumor types and consequently lead to a better understanding of genetic signatures as well as improved treatment strategies. This thesis presents a comparative study of state-of-the-art feature selection methods, classification methods, and combinations of them, based on gene expression data. We compared the efficiency of three classification methods: support vector machines, k-nearest neighbor and random forest, and eight feature selection methods: information gain, twoing rule, sum minority, max minority, gini index, sum of variances, t-statistics, and one-dimension support vector machine. Five-fold cross validation was used to evaluate the classification performance. Two publicly available gene expression data sets of glioma were used for this study. Different experiments have been applied to compare the performance of the classification methods with and without performing feature selection. Results revealed the important role of feature selection in classifying gene expression data. By performing feature selection, the classification accuracy can be significantly boosted by using a small number of genes. The relationship of features selected in
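
    As one concrete example of the filter-type selectors compared, ranking genes by the absolute two-sample t-statistic is easily sketched (synthetic data in which the first five genes are informative):

```python
import numpy as np

def t_statistic_ranking(X, y):
    """Rank features (genes) by |t| between the two class labels 0/1;
    a simple filter selector like the t-statistics option in the thesis."""
    a, b = X[y == 0], X[y == 1]
    t = (a.mean(axis=0) - b.mean(axis=0)) / np.sqrt(
        a.var(axis=0, ddof=1) / len(a) + b.var(axis=0, ddof=1) / len(b))
    return np.argsort(-np.abs(t))

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 1000))   # 40 samples, 1000 genes
y = np.repeat([0, 1], 20)
X[y == 1, :5] += 2.0                  # shift the first 5 genes in class 1
print(t_statistic_ranking(X, y)[:5])  # should recover genes 0..4
```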

  14. CERES: A new cerebellum lobule segmentation method.

    Science.gov (United States)

    Romero, Jose E; Coupé, Pierrick; Giraud, Rémi; Ta, Vinh-Thong; Fonov, Vladimir; Park, Min Tae M; Chakravarty, M Mallar; Voineskos, Aristotle N; Manjón, Jose V

    2017-02-15

    The human cerebellum is involved in language, motor tasks and cognitive processes such as attention or emotional processing. Therefore, an automatic and accurate segmentation method is highly desirable to measure and understand the cerebellum role in normal and pathological brain development. In this work, we propose a patch-based multi-atlas segmentation tool called CERES (CEREbellum Segmentation) that is able to automatically parcellate the cerebellum lobules. The proposed method works with standard resolution magnetic resonance T1-weighted images and uses the Optimized PatchMatch algorithm to speed up the patch matching process. The proposed method was compared with related recent state-of-the-art methods showing competitive results in both accuracy (average DICE of 0.7729) and execution time (around 5 minutes). Copyright © 2016 Elsevier Inc. All rights reserved.

  15. An accurate and efficient reliability-based design optimization using the second order reliability method and improved stability transformation method

    Science.gov (United States)

    Meng, Zeng; Yang, Dixiong; Zhou, Huanlin; Yu, Bo

    2018-05-01

    The first order reliability method has been extensively adopted for reliability-based design optimization (RBDO), but it is inaccurate in calculating the failure probability for highly nonlinear performance functions. Thus, the second order reliability method is required to evaluate the reliability accurately. However, its application to RBDO is quite challenging owing to the high computational cost of repeated reliability evaluations and Hessian calculations of the probabilistic constraints. In this article, a new improved stability transformation method is proposed to search for the most probable point efficiently, and the Hessian matrix is calculated by the symmetric rank-one update. The computational capability of the proposed method is illustrated and compared to existing RBDO approaches through three mathematical and two engineering examples. The comparison results indicate that the proposed method is very efficient and accurate, providing an alternative tool for RBDO of engineering structures.
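
    The symmetric rank-one update sidesteps recomputing the Hessian at each step: with step s and gradient difference y, it sets B+ = B + r r^T / (r^T s), where r = y - B s, skipping the update when the denominator is too small. A minimal sketch, checked on a quadratic where the update must reproduce the curvature along the step:

```python
import numpy as np

def sr1_update(B, s, y, eps=1e-8):
    """Symmetric rank-one quasi-Newton update with the standard safeguard
    of skipping near-singular denominators."""
    r = y - B @ s
    denom = r @ s
    if abs(denom) < eps * np.linalg.norm(r) * np.linalg.norm(s):
        return B
    return B + np.outer(r, r) / denom

H = np.array([[4.0, 1.0], [1.0, 3.0]])   # true Hessian of a quadratic
B = np.eye(2)
s = np.array([1.0, 0.0])                 # step taken
y = H @ s                                # gradient difference along the step
B_new = sr1_update(B, s, y)
print(B_new @ s, H @ s)                  # secant condition: both equal [4, 1]
```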

  16. [Comparative data regarding two HPLC methods for determination of isoniazid].

    Science.gov (United States)

    Gârbuleţ, Daniela; Spac, A F; Dorneanu, V

    2009-01-01

    For the determination of isoniazid (isonicotinic acid hydrazide, HIN), two different HPLC methods were developed and validated. Both experiments were performed using a Waters 2695 liquid chromatograph and a UV Waters 2489 detector. The first method (I) used a Nucleosil 100-10 C18 column (250 x 4.6 mm), a mobile phase formed by a mixture of acetonitrile/10(-2) M oxalic acid (80/20) and a flow of 1.5 mL/min; detection was done at 230 nm. The second method (II) used a Luna 100-5 C18 column (250 x 4.6 mm), a mobile phase formed by a mixture of methanol/acetate buffer, pH = 5.0 (20/80), and a flow of 1 mL/min; detection was done at 270 nm. Both methods were validated: the correlation coefficients were 0.9998 (I) and 0.9999 (II), the detection limits were 0.6 microg/mL (I) and 0.055 microg/mL (II), and the quantitation limits were 1.9 microg/mL (I) and 0.2 microg/mL (II). The system precision (RSD = 0.1692% (I) and 0.2000% (II)), the method precision (RSD = 1.1844% (I) and 0.6170% (II)) and the intermediate precision (RSD = 1.8058% (I) and 0.5970% (II)) were also studied. The accuracy was good; the calculated recoveries were 102.66% (I) and 101.36% (II). Both validated methods were applied to HIN determination in tablets with good and comparable results.

  17. Numerical Solution of Nonlinear Fredholm Integro-Differential Equations Using Spectral Homotopy Analysis Method

    Directory of Open Access Journals (Sweden)

    Z. Pashazadeh Atabakan

    2013-01-01

    The spectral homotopy analysis method (SHAM), a modification of the homotopy analysis method (HAM), is applied to obtain solutions of high-order nonlinear Fredholm integro-differential problems. The existence and uniqueness of the solution and the convergence of the proposed method are proved. Some examples are given to demonstrate the efficiency and accuracy of the proposed method. The SHAM results show that the proposed approach is quite reasonable when compared to homotopy analysis method solutions, Lagrange interpolation solutions, and exact solutions.

  18. Further comments on the sequential probability ratio testing methods

    Energy Technology Data Exchange (ETDEWEB)

    Kulacsy, K. [Hungarian Academy of Sciences, Budapest (Hungary). Central Research Inst. for Physics

    1997-05-23

    The Bayesian method for belief updating proposed in Racz (1996) is examined. An interpretation of the belief function introduced therein is found, and the method is compared to the classical binary Sequential Probability Ratio Testing method (SPRT). (author).
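
    For reference, the classical binary SPRT against which the Bayesian scheme is compared can be sketched in a few lines. The thresholds follow Wald's standard approximations; the Bernoulli setting and all parameter values below are illustrative assumptions, not taken from the report.

```python
import math

def sprt(samples, p0, p1, alpha=0.05, beta=0.05):
    """Classical binary SPRT for Bernoulli observations.

    Tests H0: p = p0 against H1: p = p1 by accumulating the
    log-likelihood ratio and comparing it with Wald's thresholds.
    """
    a = math.log(beta / (1 - alpha))       # lower (accept H0) threshold
    b = math.log((1 - beta) / alpha)       # upper (accept H1) threshold
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr <= a:
            return "accept H0", n
        if llr >= b:
            return "accept H1", n
    return "continue sampling", len(samples)

print(sprt([1, 1, 0, 1, 1, 1, 1, 1], p0=0.3, p1=0.7))
```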

  19. Novel Crosstalk Measurement Method for Multi-Core Fiber Fan-In/Fan-Out Devices

    DEFF Research Database (Denmark)

    Ye, Feihong; Ono, Hirotaka; Abe, Yoshiteru

    2016-01-01

    We propose a new crosstalk measurement method for multi-core fiber fan-in/fan-out devices utilizing the Fresnel reflection. Compared with the traditional method using core-to-core coupling between a multi-core fiber and a single-mode fiber, the proposed method has the advantages of high reliability...

  20. Biclustering via optimal re-ordering of data matrices in systems biology: rigorous methods and comparative studies

    Directory of Open Access Journals (Sweden)

    Feng Xiao-Jiang

    2008-10-01

    Background: The analysis of large-scale data sets via clustering techniques is utilized in a number of applications. Biclustering in particular has emerged as an important problem in the analysis of gene expression data, since genes may only jointly respond over a subset of conditions. Biclustering algorithms also have important applications in sample classification where, for instance, tissue samples can be classified as cancerous or normal. Many of the methods for biclustering, and clustering algorithms in general, utilize simplified models or heuristic strategies for identifying the "best" grouping of elements according to some metric and cluster definition, and thus result in suboptimal clusters. Results: In this article, we present a rigorous approach to biclustering, OREO, which is based on the Optimal RE-Ordering of the rows and columns of a data matrix so as to globally minimize the dissimilarity metric. The physical permutations of the rows and columns of the data matrix can be modeled as either a network flow problem or a traveling salesman problem. Cluster boundaries in one dimension are used to partition and re-order the other dimensions of the corresponding submatrices to generate biclusters. The performance of OREO is tested on (a) metabolite concentration data, (b) an image reconstruction matrix, (c) synthetic data with implanted biclusters, and gene expression data for (d) colon cancer data, (e) breast cancer data, as well as (f) yeast segregant data to validate the ability of the proposed method and compare it to existing biclustering and clustering methods. Conclusion: We demonstrate that this rigorous global optimization method for biclustering produces clusters with more insightful groupings of similar entities, such as genes or metabolites sharing common functions, than other clustering and biclustering algorithms and can reconstruct underlying fundamental patterns in the data for several distinct sets of data matrices arising
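
    To give a feel for the re-ordering idea, the sketch below greedily orders rows so that neighbours are similar. This is only a cheap stand-in for OREO's exact network-flow/TSP optimization, which is solved to global optimality; the greedy heuristic and the toy matrix are illustrative.

```python
import numpy as np

def greedy_reorder(X):
    """Greedily re-order the rows of a data matrix so that adjacent rows
    are similar: a nearest-neighbour approximation of the optimal
    re-ordering that OREO obtains via network-flow/TSP formulations."""
    n = X.shape[0]
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    order = [0]
    remaining = set(range(1, n))
    while remaining:
        last = order[-1]
        nxt = min(remaining, key=lambda j: dist[last, j])
        order.append(nxt)
        remaining.remove(nxt)
    return order

X = np.array([[0.0, 0.1], [5.0, 5.2], [0.2, 0.0], [5.1, 4.9]])
print(greedy_reorder(X))   # similar rows (0,2) and (3,1) end up adjacent
```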

  1. Falsification Testing of Instrumental Variables Methods for Comparative Effectiveness Research.

    Science.gov (United States)

    Pizer, Steven D

    2016-04-01

    To demonstrate how falsification tests can be used to evaluate instrumental variables methods applicable to a wide variety of comparative effectiveness research questions. Brief conceptual review of instrumental variables and falsification testing principles and techniques accompanied by an empirical application. Sample STATA code related to the empirical application is provided in the Appendix. Comparative long-term risks of sulfonylureas and thiazolidinediones for management of type 2 diabetes. Outcomes include mortality and hospitalization for an ambulatory care-sensitive condition. Prescribing pattern variations are used as instrumental variables. Falsification testing is an easily computed and powerful way to evaluate the validity of the key assumption underlying instrumental variables analysis. If falsification tests are used, instrumental variables techniques can help answer a multitude of important clinical questions. © Health Research and Educational Trust.

  2. Assessment of SKB's proposal for encapsulation

    International Nuclear Information System (INIS)

    Lundin, M.; Gustafsson, Oskar; Broemsen, B. von; Troell, E.

    2001-01-01

    This report presents an independent assessment of a proposal regarding the manufacturing of copper canisters, which has been presented by SKB (Swedish Nuclear Fuel and Waste Management Co) in cooperation with MABU Consulting. IVF (The Swedish Institute for Production Engineering Research) has performed the assessment on commission from SKI (Swedish Nuclear Power Inspectorate). IVF generally believes that the proposed method, the recommended manufacturing equipment and the organisation will most likely mean that a functioning manufacturing of canisters can be realised. No significant deficiencies have been identified that would mean serious problems during the manufacturing process. In some cases IVF recommends a further evaluation of the proposed methods and/or equipment; these concern mainly the welding processes. However, it should be stressed that SKB has emphasised that further investigations will be performed on this subject. Furthermore, IVF recommends that the proposed methods and equipment for machining of copper cylinders and for blasting of inserts be further evaluated.

  3. Small Private Online Research: A Proposal for A Numerical Methods Course Based on Technology Use and Blended Learning

    Science.gov (United States)

    Cepeda, Francisco Javier Delgado

    2017-01-01

    This work presents a proposed blended-learning model for a numerical methods course that evolved from traditional teaching into a research lab in scientific visualization. The blended learning approach sets a differentiated and flexible scheme based on a mobile setup and face-to-face sessions centered on a net of research challenges. Model is…

  4. Enhanced Sensitivity to Detection Nanomolar Level of Cu2 + Compared to Spectrophotometry Method by Functionalized Gold Nanoparticles: Design of Sensor Assisted by Exploiting First-order Data with Chemometrics

    Science.gov (United States)

    Rasouli, Zolaikha; Ghavami, Raouf

    2018-02-01

    A simple, sensitive and efficient colorimetric assay platform for the determination of Cu2+ was proposed, with the aim of developing sensitive detection based on the aggregation of AuNPs in the presence of a histamine H2-receptor antagonist (famotidine, FAM) as the recognition site. This study is the first to demonstrate that the molar extinction coefficients of the complexes formed by FAM and Cu2+ are very low (by applying chemometrics methods to the first-order data arising from the different metal-to-ligand ratio method), leading to the undesirable sensitivity of FAM-based assays. To resolve the problem of low sensitivity, a colorimetric method based on the Cu2+-induced aggregation of AuNPs functionalized with FAM was introduced. This procedure is accompanied by a color change from bright red to blue which can be observed with the naked eye. The detection sensitivity obtained by the developed method increased about 100-fold compared with the spectrophotometric method. The sensor exhibited a good linear relation between the absorbance ratio at 670 to 520 nm (A670/520) and the concentration in the range 2-110 nM, with LOD = 0.76 nM. The satisfactory analytical performance of the proposed sensor facilitates the development of simple and affordable UV-Vis chemosensors for environmental applications.
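
    A sketch of how such a ratiometric calibration and its detection limit might be computed. The data points are invented stand-ins for the reported 2-110 nM linear range, and the 3.3*sigma/slope rule is a common convention, not necessarily the paper's exact procedure.

```python
import numpy as np

# Hypothetical calibration data: absorbance ratio A670/A520 versus
# Cu2+ concentration (nM), mimicking the reported linear range.
conc = np.array([2, 20, 40, 60, 80, 110], dtype=float)
ratio = np.array([0.12, 0.21, 0.32, 0.41, 0.52, 0.67])

slope, intercept = np.polyfit(conc, ratio, 1)

# LOD estimated with the common 3.3*sigma/slope rule, where sigma is
# the standard deviation of the regression residuals.
residuals = ratio - (slope * conc + intercept)
sigma = residuals.std(ddof=2)          # two fitted parameters
lod = 3.3 * sigma / slope
print(f"slope={slope:.4f} per nM, LOD~{lod:.2f} nM")
```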

  5. Comparative study of the coprecipitation methods for the preparation of Layered Double Hydroxides

    Directory of Open Access Journals (Sweden)

    Crepaldi Eduardo L.

    2000-01-01

    Coprecipitation is the method most frequently applied to prepare Layered Double Hydroxides (LDHs). Two variations of this method can be used, depending on the pH control conditions during the precipitation step. In one case the pH values are allowed to vary, while in the other they are kept constant throughout coprecipitation. Although research groups have their preferences, no systematic comparison of the two variations of the coprecipitation method is available in the literature. On this basis, the objective of the present study was to compare the properties of LDHs prepared using the two forms of pH control in the coprecipitation method. The results showed that even though coprecipitation is easier to perform under conditions of variable pH, materials with more interesting properties, from the point of view of technological applications, are obtained at constant pH. Higher crystallinity, smaller particle size, higher specific surface area and higher average pore diameter were found for materials obtained by coprecipitation at constant pH, when compared to the materials obtained at variable pH.

  6. A hardenability test proposal

    Energy Technology Data Exchange (ETDEWEB)

    Murthy, N.V.S.N. [Ingersoll-Rand (I) Ltd., Bangalore (India)

    1996-12-31

    A new approach for hardenability evaluation and its application to heat-treatable steels will be discussed. This will include an overview of, and the deficiencies in, the current methods and a discussion of the necessity for a new approach. Hardenability terminology will be expanded to avoid the ambiguity and over-simplification encountered with the current system. A new hardenability definition is proposed. Hardenability specification methods are simplified and rationalized. The new hardenability evaluation system proposed here utilizes a test specimen with varying diameter as an alternative to the cylindrical Jominy hardenability test specimen and is readily applicable to the evaluation of a wide variety of steels with different cross-section sizes.

  7. Generating region proposals for histopathological whole slide image retrieval.

    Science.gov (United States)

    Ma, Yibing; Jiang, Zhiguo; Zhang, Haopeng; Xie, Fengying; Zheng, Yushan; Shi, Huaqiang; Zhao, Yu; Shi, Jun

    2018-06-01

    Content-based image retrieval is an effective method for histopathological image analysis. However, given a database of huge whole slide images (WSIs), acquiring appropriate regions of interest (ROIs) for training is significant and difficult. Moreover, histopathological images can only be annotated by pathologists, resulting in a lack of labeling information. Therefore, it is an important and challenging task to generate ROIs from WSIs and retrieve images with few labels. This paper presents a novel unsupervised region proposing method for histopathological WSIs based on Selective Search. Specifically, the WSI is over-segmented into regions which are hierarchically merged until the WSI becomes a single region. Nucleus-oriented similarity measures for region mergence and a Nucleus-Cytoplasm color space for histopathological images are specially defined to generate accurate region proposals. Additionally, we propose a new semi-supervised hashing method for image retrieval. The semantic features of images are extracted with Latent Dirichlet Allocation and transformed into binary hashing codes with Supervised Hashing. The methods are tested on a large-scale multi-class database of breast histopathological WSIs. The results demonstrate that for one WSI, our region proposing method can generate 7.3 thousand contoured regions which fit well with 95.8% of the ROIs annotated by pathologists. The proposed hashing method can retrieve a query image among 136 thousand images in 0.29 s and reach a precision of 91% with only 10% of images labeled. The unsupervised region proposing method can generate regions as predictions of lesions in histopathological WSIs, and the region proposals can also serve as training samples for machine-learning models for image retrieval. The proposed hashing method can achieve fast and precise image retrieval with a small amount of labels. Furthermore, the proposed methods can potentially be applied in online computer-aided-diagnosis systems. Copyright
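
    Once binary hashing codes exist, the retrieval step reduces to nearest-neighbour search in Hamming space, which is why sub-second search over 136 thousand images is feasible. A minimal sketch, assuming 64-bit codes and a random database; none of the names below come from the paper.

```python
import numpy as np

def hamming_search(query_code, database_codes, k=5):
    """Return the indices of the k database images whose binary hashing
    codes are closest to the query in Hamming distance."""
    dists = np.count_nonzero(database_codes != query_code, axis=1)
    return np.argsort(dists)[:k]

rng = np.random.default_rng(0)
db = rng.integers(0, 2, size=(136_000, 64), dtype=np.uint8)   # 64-bit codes
q = db[42] ^ (rng.random(64) < 0.05).astype(np.uint8)         # noisy copy
print(hamming_search(q, db))                                  # 42 ranks first
```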

  8. Comparative analysis of methods for real-time analytical control of chemotherapies preparations.

    Science.gov (United States)

    Bazin, Christophe; Cassard, Bruno; Caudron, Eric; Prognon, Patrice; Havard, Laurent

    2015-10-15

    Control of chemotherapy preparations is now an obligation in France, though no analytical control is compulsory. Several methods are available and none of them can be presumed ideal. We wanted to compare them so as to determine which one could be the best choice. We compared non-analytical (visual and video-assisted, gravimetric) and analytical (HPLC/FIA, UV/FT-IR, UV/Raman, Raman) methods on the basis of our experience and a SWOT analysis. The results of the analysis show great differences between the techniques, but as expected none of them is without defects. However, they can probably be used in synergy. Overall, for the pharmacist willing to get involved, the implementation of the control of chemotherapy preparations must be thoroughly anticipated, with the listing of every parameter, and remains, in our view, an analyst's job. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. An Improved Pansharpening Method for Misaligned Panchromatic and Multispectral Data.

    Science.gov (United States)

    Li, Hui; Jing, Linhai; Tang, Yunwei; Ding, Haifeng

    2018-02-11

    Numerous pansharpening methods were proposed in recent decades for fusing low-spatial-resolution multispectral (MS) images with high-spatial-resolution (HSR) panchromatic (PAN) bands to produce fused HSR MS images, which are widely used in various remote sensing tasks. The effect of misregistration between MS and PAN bands on the quality of fused products has gained much attention in recent years. An improved method for misaligned MS and PAN imagery is proposed, based on two improvements made to a previously published method named RMI (reduce misalignment impact). The performance of the proposed method was assessed by comparing it with some outstanding fusion methods, such as adaptive Gram-Schmidt and the generalized Laplacian pyramid. Experimental results show that the improved version can reduce the spectral distortion of fused dark pixels and sharpen boundaries between different image objects, while obtaining quality indexes similar to those of the original RMI method. In addition, the proposed method was evaluated with respect to its sensitivity to misalignments between MS and PAN bands. It is verified that the proposed method is more robust to misalignments between MS and PAN bands than the other methods.

  10. A volume of fluid method based on multidimensional advection and spline interface reconstruction

    International Nuclear Information System (INIS)

    Lopez, J.; Hernandez, J.; Gomez, P.; Faura, F.

    2004-01-01

    A new volume of fluid method for tracking two-dimensional interfaces is presented. The method involves a multidimensional advection algorithm based on the use of edge-matched flux polygons to integrate the volume fraction evolution equation, and a spline-based reconstruction algorithm. The accuracy and efficiency of the proposed method are analyzed using different tests, and the results are compared with those obtained recently by other authors. Despite its simplicity, the proposed method represents a significant improvement, and compares favorably with other volume of fluid methods as regards the accuracy and efficiency of both the advection and reconstruction steps

  11. A comparative evaluation of emerging methods for errors of commission based on applications to the Davis-Besse (1985) event

    International Nuclear Information System (INIS)

    Reer, B.; Dang, V.N.; Hirschberg, S.; Straeter, O.

    1999-12-01

    In considering the human role in accidents, the classical PSA methodology applied today focuses primarily on omissions of actions required of the operators at specific points in the scenario models. A practical, proven methodology is not available for systematically identifying and analyzing the scenario contexts in which the operators might perform inappropriate actions that aggravate the scenario. As a result, typical PSAs do not comprehensively treat these actions, referred to as errors of commission (EOCs). This report presents the results of a joint project of the Paul Scherrer Institut (PSI, Villigen, Switzerland) and the Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS, Garching, Germany) that examined some methods recently proposed for addressing the EOC issue. Five methods were investigated: 1) ATHEANA, 2) the Borssele screening methodology, 3) CREAM, 4) CAHR, and 5) CODA. In addition to a comparison of their scope, basic assumptions, and analytical approach, the methods were each applied in the analysis of PWR Loss of Feedwater scenarios based on the 1985 Davis-Besse event, in which the operator response included actions that can be categorized as EOCs. The aim was to compare how the methods consider a concrete scenario in which EOCs have in fact been observed. These case applications show how the methods are used in practical terms and constitute a common basis for comparing the methods and the insights that they provide. The identification of the potentially significant EOCs to be analysed in the PSA is currently the central problem for their treatment. The identification or search scheme has to consider an extensive set of potential actions that the operators may take. These actions may take place instead of required actions, for example, because the operators fail to assess the plant state correctly, or they may occur even when no action is required. As a result of this broad search space, most methodologies apply multiple schemes to

  12. Power quality events recognition using a SVM-based method

    Energy Technology Data Exchange (ETDEWEB)

    Cerqueira, Augusto Santiago; Ferreira, Danton Diego; Ribeiro, Moises Vidal; Duque, Carlos Augusto [Department of Electrical Circuits, Federal University of Juiz de Fora, Campus Universitario, 36036 900, Juiz de Fora MG (Brazil)

    2008-09-15

    In this paper, a novel SVM-based method for power quality event classification is proposed. A simple approach for feature extraction is introduced, based on the subtraction of the fundamental component from the acquired voltage signal. The resulting signal is presented to a support vector machine for event classification. Results from simulation are presented and compared with two other methods, the OTFR and the LCEC. The proposed method showed improved performance at a reasonable computational cost. (author)
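
    A minimal sketch of the pipeline described here: estimate and subtract the fundamental, then feed simple residual features to an SVM. The sampling rate, the least-squares fundamental estimate and the three features are assumptions made for illustration, not the paper's exact choices (the OTFR/LCEC baselines are omitted).

```python
import numpy as np
from sklearn.svm import SVC

FS, F0 = 15_360, 60            # assumed sampling rate (Hz) and fundamental

def remove_fundamental(v):
    """Subtract the fundamental by least-squares fitting a 60 Hz
    sine/cosine pair; the residual carries the disturbance."""
    t = np.arange(len(v)) / FS
    A = np.column_stack([np.sin(2*np.pi*F0*t), np.cos(2*np.pi*F0*t)])
    coef, *_ = np.linalg.lstsq(A, v, rcond=None)
    return v - A @ coef

def features(v):
    """Illustrative residual features: RMS, peak, crest factor."""
    r = remove_fundamental(v)
    rms = np.sqrt(np.mean(r**2))
    return [rms, np.abs(r).max(), np.abs(r).max() / (rms + 1e-12)]

# With labelled voltage windows X_train and event labels y_train:
# clf = SVC(kernel="rbf").fit([features(v) for v in X_train], y_train)
```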

  13. A Comparative Study of Feature Selection and Classification Methods for Gene Expression Data

    KAUST Repository

    Abusamra, Heba

    2013-01-01

    Different experiments were carried out to compare the performance of the classification methods with and without feature selection. Results revealed the important role of feature selection in classifying gene expression data: by performing feature selection, the classification accuracy can be significantly boosted using a small number of genes. The relationship among the features selected by different feature selection methods is investigated, and the most frequently selected features in each fold among all methods for both datasets are evaluated.

  14. A calibration method for proposed XRF measurements of arsenic and selenium in nail clippings

    International Nuclear Information System (INIS)

    Gherase, Mihai R; Fleming, David E B

    2011-01-01

    A calibration method for proposed x-ray fluorescence (XRF) measurements of arsenic and selenium in nail clippings is demonstrated. Phantom nail clippings were produced from a whole nail phantom (0.7 mm thickness, 25 x 25 mm^2 area) and contained equal concentrations of arsenic and selenium ranging from 0 to 20 μg g^-1 in increments of 5 μg g^-1. The phantom nail clippings were then grouped into samples of five different masses (20, 40, 60, 80 and 100 mg) for each concentration. Experimental x-ray spectra were acquired for each of the sample masses using a portable x-ray tube and detector unit. Calibration lines (XRF signal in number of counts versus stoichiometric elemental concentration) were produced for each of the two elements. A semi-empirical relationship between the mass of the nail phantoms (m) and the slope of the calibration line (s) was determined separately for arsenic and selenium. Using this calibration method, one can estimate elemental concentrations and their uncertainties from the XRF spectra of human nail clippings. (note)
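
    A sketch of how such a mass-dependent calibration could be applied. The numbers and the saturating functional form s(m) = a(1 - exp(-m/b)) are invented for illustration; the note's actual semi-empirical relationship may differ.

```python
import numpy as np

# Hypothetical calibration: slope of the counts-vs-concentration line
# (counts per ug/g) fitted at each phantom sample mass (mg).
masses = np.array([20, 40, 60, 80, 100], dtype=float)
slopes = np.array([3.1, 5.9, 8.2, 10.1, 11.6])

# Fit s(m) = a * (1 - exp(-m / b)) by a crude grid search (illustrative).
grid_a = np.linspace(5, 30, 200)
grid_b = np.linspace(10, 200, 200)
a, b = min(((ga, gb) for ga in grid_a for gb in grid_b),
           key=lambda ab: np.sum((ab[0]*(1-np.exp(-masses/ab[1])) - slopes)**2))

def concentration(counts, mass_mg):
    """Estimate elemental concentration (ug/g) from an XRF signal,
    correcting for the mass dependence of the calibration slope."""
    return counts / (a * (1 - np.exp(-mass_mg / b)))

print(concentration(counts=58.0, mass_mg=50.0))
```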

  15. Lessons from comparative effectiveness research methods development projects funded under the Recovery Act.

    Science.gov (United States)

    Zurovac, Jelena; Esposito, Dominick

    2014-11-01

    The American Recovery and Reinvestment Act of 2009 (ARRA) directed nearly US$29.2 million to comparative effectiveness research (CER) methods development. To help inform future CER methods investments, we describe the ARRA CER methods projects, identify barriers to this research, and discuss the alignment of topics with published methods-development priorities. We used several existing resources and held discussions with ARRA CER methods investigators. Although the funded projects explored many identified priority topics, investigators noted that much work remains. For example, given the considerable investments in CER data infrastructure, the methods-development field can benefit from additional efforts to educate researchers about the availability of new data sources and about how best to apply methods matched to their research questions and data.

  16. Window Material Daylighting Performance Assessment Algorithm: Comparing Radiosity and Split-Flux Methods

    Directory of Open Access Journals (Sweden)

    Yeo Beom Yoon

    2014-04-01

    Windows are the primary aperture through which solar radiation enters the interior space of a building. This experiment explores the use of EnergyPlus software for analyzing the illuminance level on the floor of a room as a function of distance from the window. For this experiment, a double clear glass window has been used. The preliminary modelling in EnergyPlus showed results consistent with experimentally monitored data in real time. EnergyPlus has two commonly used daylighting algorithms: the DElight method, employing the radiosity technique, and the Detailed method, employing the split-flux technique. Further analysis of illuminance using the DElight and Detailed methods showed a significant difference in the results. Finally, we compared the algorithms of the two analysis methods in EnergyPlus.

  17. Comparative studies of uranium analysis methods using spectrophotometer and voltammeter

    International Nuclear Information System (INIS)

    Sugeng Pomomo

    2013-01-01

    Comparative studies of uranium analysis methods using a spectrophotometer and a voltammeter were carried out. The objective of the experiment was to examine the reliability of the analysis methods and the instrument performance by evaluating the parameters linearity, accuracy, precision and detection limit. Uranyl nitrate hexahydrate was used as the standard, and the sample was a solvent mixture of tributyl phosphate and kerosene containing uranium (from the phosphoric acid purification unit of Petrokimia Gresik). Uranium (U) was stripped from the sample using 0.5 N HNO3 and then analyzed with both instruments. Analysis of the standard showed that both methods give good linearity, with correlation coefficients > 0.999. Spectrophotometry gave an accuracy of 99.34-101.05% with a relative standard deviation (RSD) of 1.03% and a detection limit (DL) of 0.05 ppm. Voltammetry gave an accuracy of 95.63-101.49% with an RSD of 3.91% and a detection limit (DL) of 0.509 ppm. The analysis of sludge samples gave significantly different results: spectrophotometry gave a U concentration of 4.445 ppm with an RSD of 6.74%, while voltammetry gave a U concentration of 7.693 ppm with an RSD of 19.53%. (author)

  18. Density Functional Theory versus the Hartree-Fock Method: Comparative Assessment

    International Nuclear Information System (INIS)

    Amusia, M.Ya.; Shaginyan, V.R.; Msezane, A.Z.

    2003-01-01

    We compare two different approaches to the investigation of many-electron systems. The first is the Hartree-Fock (HF) method and the second is Density Functional Theory (DFT). An overview of the main features and peculiar properties of the HF method is presented. A way to realize the HF method within the Kohn-Sham (KS) approach of DFT is discussed. We show that this is impossible without including a specific correlation energy, which is defined by the difference between the sum of the kinetic and exchange energies of a system considered within KS and within HF, respectively. It is the nonlocal exchange potential entering the HF equations that generates this correlation energy. We show that the total correlation energy of a finite electron system, which has to include this correlation energy, cannot be obtained from considerations of uniform electron systems. The single-particle excitation spectrum of many-electron systems is related to the eigenvalues of the corresponding KS equations. We demonstrate that this spectrum does not coincide in general with the eigenvalues of the KS or HF equations.
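
    In symbols, the correlation energy singled out here can be written as follows. This is a paraphrase of the verbal definition above in generic notation, not the authors' own formula.

```latex
% Correlation energy generated by the nonlocal HF exchange:
% difference of (kinetic + exchange) energies evaluated in KS and HF.
\Delta E_c = \left( T + E_x \right)_{\mathrm{KS}} - \left( T + E_x \right)_{\mathrm{HF}}
```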

  20. Comparative analysis of methods and sources of financing of the transport organizations activity

    Science.gov (United States)

    Gorshkov, Roman

    2017-10-01

    The article analyzes methods of financing transport organizations under conditions of limited investment resources. A comparative analysis of these methods is carried out, and a classification of investments and of the methods and sources of financial support for projects currently being implemented is presented. In order to select the optimal sources of financing for the projects, various methods of financial management and financial support for the activities of a transport organization were analyzed from the perspective of their advantages and limitations. The result of the study is a set of recommendations on the selection of optimal sources and methods of financing for transport organizations.

  1. An Efficient Method for Detection of Outliers in Tracer Curves Derived from Dynamic Contrast-Enhanced Imaging

    Directory of Open Access Journals (Sweden)

    Linning Ye

    2018-01-01

    The presence of outliers in tracer concentration-time curves derived from dynamic contrast-enhanced imaging can adversely affect the analysis of the tracer curves by model fitting. A computationally efficient method for detecting outliers in tracer concentration-time curves is presented in this study. The proposed method is based on a piecewise linear model and implemented using a robust clustering algorithm. The method is noniterative and all the parameters are automatically estimated. To compare the proposed method with existing Gaussian-model-based and robust-regression-based methods, simulation studies were performed by simulating tracer concentration-time curves using the generalized Tofts model and kinetic parameters derived from different tissue types. Results show that the proposed method and the robust-regression-based method achieve better detection performance than the Gaussian-model-based method. Compared with the robust-regression-based method, the proposed method can achieve similar detection performance with much faster computation.
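
    The flavour of robust, locally linear outlier flagging can be shown with a Hampel-style filter. This is a simple stand-in for the paper's piecewise-linear robust clustering, not a reproduction of it; the curve, spike and thresholds are invented.

```python
import numpy as np

def hampel_flags(c, window=5, thresh=3.0):
    """Flag outliers in a tracer concentration-time curve: each sample is
    compared with the median of its neighbourhood, using the local MAD
    as a robust scale estimate."""
    n = len(c)
    flags = np.zeros(n, dtype=bool)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        med = np.median(c[lo:hi])
        mad = 1.4826 * np.median(np.abs(c[lo:hi] - med))
        if np.abs(c[i] - med) > thresh * mad + 1e-12:
            flags[i] = True
    return flags

t = np.linspace(0, 10, 50)
c = 5 * (1 - np.exp(-t))                      # smooth uptake curve
c[20] += 2.0                                  # injected spike
print(np.where(hampel_flags(c))[0])           # -> [20]
```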

  2. A Comparative Study of Feature Selection Methods for the Discriminative Analysis of Temporal Lobe Epilepsy

    Directory of Open Access Journals (Sweden)

    Chunren Lai

    2017-12-01

    It is crucial to differentiate patients with temporal lobe epilepsy (TLE) from the healthy population and to determine abnormal brain regions in TLE. Cortical features and changes can reveal the unique anatomical patterns of brain regions in structural magnetic resonance (MR) images. In this study, structural MR images from 41 patients with left TLE, 34 patients with right TLE, and 58 normal controls (NC) were acquired, and four kinds of cortical measures, namely cortical thickness, cortical surface area, gray matter volume (GMV), and mean curvature, were explored for discriminative analysis. Three feature selection methods, including independent-sample t-test filtering, the sparse-constrained dimensionality reduction model (SCDRM), and support vector machine-recursive feature elimination (SVM-RFE), were investigated to extract dominant features among the compared groups for classification using the support vector machine (SVM) classifier. The results showed that SVM-RFE achieved the highest performance (most classifications with more than 84% accuracy), followed by the SCDRM and the t-test. In particular, the surface area and GMV exhibited prominent discriminative ability, and the performance of the SVM improved significantly when the four cortical measures were combined. Additionally, the dominant regions with higher classification weights were mainly located in the temporal and frontal lobes, including the entorhinal cortex, rostral middle frontal, parahippocampal cortex, superior frontal, insula, and cuneus. This study concluded that the cortical features provided effective information for the recognition of abnormal anatomical patterns and that the proposed methods have the potential to improve the clinical diagnosis of TLE.
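
    SVM-RFE itself is easy to sketch with scikit-learn's RFE wrapper: a linear SVM is fitted repeatedly and the features with the smallest weights are dropped. The synthetic matrix below merely mimics the shape of the cortical-feature data (133 subjects); all parameters are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for the cortical-measure feature matrix
# (rows = subjects, columns = thickness/area/GMV/curvature features).
X, y = make_classification(n_samples=133, n_features=120,
                           n_informative=15, random_state=0)

# SVM-RFE: recursively eliminate the 10% of features with the smallest
# linear-SVM weights until 20 features remain.
selector = RFE(SVC(kernel="linear", C=1.0), n_features_to_select=20, step=0.1)
selector.fit(X, y)

acc = cross_val_score(SVC(kernel="linear"), X[:, selector.support_], y,
                      cv=5).mean()
print(f"CV accuracy with 20 selected features: {acc:.2f}")
```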

  3. COMPARATIVE EFFECTIVENESS OF DIFFERENT METHODS OF CANDIDAL DYSBACTERIOSIS THERAPY

    Directory of Open Access Journals (Sweden)

    S.V. Nikolaeva

    2009-01-01

    A study of the effectiveness of different methods of correcting microbiological disorders in children over 3 years old with candidal dysbacteriosis is presented in this article. The study compared a probiotic fermented milk product («Actimel») with fermented milk products not fortified with probiotic cultures («Rastishka» and traditional kefir). It was shown that the inclusion of the probiotic fermented milk product in the diet of children with candidal dysbacteriosis results in normalization of lacto- and bifidobacteria levels and a decrease of the Candida level. Key words: children, candidal dysbacteriosis, probiotics. (Voprosy sovremennoi pediatrii — Current Pediatrics. 2009;8(6):31-35)

  4. Qualitative methods in radiography research: a proposed framework

    International Nuclear Information System (INIS)

    Adams, J.; Smith, T.

    2003-01-01

    Introduction: While radiography is currently developing a research base, which is important in terms of professional development and informing practice and policy issues in the field, the amount of research published by radiographers remains limited. However, a range of qualitative methods offer further opportunities for radiography research. Purpose: This paper briefly introduces a number of key qualitative methods (qualitative interviews, focus groups, observational methods, diary methods and document/text analysis) and sketches one possible framework for future qualitative work in radiography research. The framework focuses upon three areas for study: intra-professional issues; inter-professional issues; and clinical practice, patient and health delivery issues. While the paper outlines broad areas for future focus rather than providing a detailed protocol for how individual pieces of research should be conducted, a few research questions have been chosen and examples of possible qualitative methods required to answer such questions are outlined for each area. Conclusion: Given the challenges and opportunities currently facing the development of a research base within radiography, the outline of key qualitative methods and broad areas suitable for their application is offered as a useful tool for those within the profession looking to embark upon or enhance their research career

  5. Improved Object Proposals with Geometrical Features for Autonomous Driving

    Directory of Open Access Journals (Sweden)

    Yiliu Feng

    2017-01-01

    This paper aims at generating high-quality object proposals for object detection in autonomous driving. Most existing proposal generation methods are designed for general object detection and may not perform well in a particular scene. We propose several geometrical features suited to autonomous driving and integrate them into state-of-the-art general proposal generation methods. In particular, we formulate the integration as a feature fusion problem, fusing the geometrical features with existing proposal generation methods in a Bayesian framework. Experiments on the challenging KITTI benchmark demonstrate that our approach improves the existing methods significantly. Combined with a convolutional neural net detector, our approach achieves state-of-the-art performance on all three KITTI object classes.

  6. Comparative study of on-line response time measurement methods for platinum resistance thermometer

    International Nuclear Information System (INIS)

    Zwingelstein, G.; Gopal, R.

    1979-01-01

    This study deals with the in situ determination of the response time of platinum resistance sensors. In the first part of this work, two methods furnishing the reference response time of the sensors are studied. In the second part, two methods that obtain the response time without dismounting the sensor are studied. A comparative study of the performance of these methods is included, for fluid velocities varying from 0 to 10 m/sec, in both laboratory and plant conditions.

  7. PROCESS CAPABILITY ESTIMATION FOR NON-NORMALLY DISTRIBUTED DATA USING ROBUST METHODS - A COMPARATIVE STUDY

    Directory of Open Access Journals (Sweden)

    Yerriswamy Wooluru

    2016-06-01

    Process capability indices are very important process quality assessment tools in the automotive industry. The common process capability indices (PCIs) Cp, Cpk, and Cpm are widely used in practice. The use of these PCIs is based on the assumption that the process is in control and its output is normally distributed. In practice, normality is not always fulfilled. Indices developed based on the normality assumption are very sensitive to non-normal processes. When the distribution of a product quality characteristic is non-normal, Cp and Cpk indices calculated using conventional methods often lead to erroneous interpretation of process capability. In the literature, various methods have been proposed for surrogate process capability indices under non-normality, but few literature sources offer a comprehensive evaluation and comparison of their ability to capture true capability in non-normal situations. In this paper, five methods are reviewed and capability evaluation is carried out for data pertaining to the resistivity of silicon wafers. The final results revealed that the Burr-based percentile method is better than the Clements method. Modelling of non-normal data and the Box-Cox transformation method using statistical software (Minitab 14) provide reasonably good results, as they are very promising methods for non-normal and moderately skewed data (skewness <= 1.5).
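
    A small sketch of the conventional indices and of the Box-Cox route on skewed data. The lognormal sample and the specification limits are invented, and Minitab's exact treatment may differ; transforming the limits with the fitted lambda is one common convention.

```python
import numpy as np
from scipy import stats

def cp_cpk(x, lsl, usl):
    """Conventional (normal-theory) capability indices."""
    mu, sigma = x.mean(), x.std(ddof=1)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

rng = np.random.default_rng(1)
x = rng.lognormal(mean=0.0, sigma=0.4, size=500)   # skewed "resistivity" data
lsl, usl = 0.3, 3.5

print("raw (normal assumption):", cp_cpk(x, lsl, usl))

# Box-Cox route: compute the indices on transformed data, with the
# specification limits transformed by the same fitted lambda.
xt, lam = stats.boxcox(x)
lsl_t = stats.boxcox(np.array([lsl]), lmbda=lam)[0]
usl_t = stats.boxcox(np.array([usl]), lmbda=lam)[0]
print("after Box-Cox:", cp_cpk(xt, lsl_t, usl_t))
```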

  8. Screening of Plant Extracts for Antioxidant Activity: a Comparative Study on Three Testing Methods

    NARCIS (Netherlands)

    Koleva, I.; Beek, van T.A.; Linssen, J.P.H.; Groot, de Æ.; Evstatieva, L.N.

    2002-01-01

    Three methods widely employed in the evaluation of antioxidant activity, namely the 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical scavenging method, static headspace gas chromatography (HS-GC) and the β-carotene bleaching test (BCBT), have been compared with regard to their application in the screening of

  9. Comparative study of different application methods of 14C-Fosthiazate in tomato plants

    International Nuclear Information System (INIS)

    Nitesh Sharma; Surendra Kumar

    2011-01-01

    A comparative study of different application methods of the nematicide 14C-Fosthiazate was carried out for uptake in two tomato varieties, Pusa Ruby and Pusa Early Dwarf. The application methods used were seed treatment, soil application and drip application, in the presence and absence of a surfactant (Tween-80). It was found that percent absorption was highest for the drip irrigation method in the presence of surfactant. The percent uptake of 14C-Fosthiazate in the two tomato varieties was found to be higher in Pusa Early Dwarf for all treatment methods. (author)

  10. Evaluation of methods to compare consequences from hazardous materials transportation accidents

    International Nuclear Information System (INIS)

    Rhoads, R.E.; Franklin, A.L.; Lavender, J.C.

    1986-10-01

    This report presents the results of a project to develop a framework for making meaningful comparisons of the consequences of transportation accidents involving hazardous materials. The project was conducted in two phases. In Phase I, methods that could potentially be used to develop the consequence comparisons for hazardous material transportation accidents were identified and reviewed. Potential improvements were identified and an evaluation of the improved methods was performed. Based on this evaluation, several methods were selected for detailed evaluation in Phase II of the project. The methods selected were location-dependent scenarios, figure of merit, and risk assessment. This evaluation included application of the methods to a sample problem comparing the consequences of four representative hazardous materials: chlorine, propane, spent nuclear fuel and class A explosives. These materials were selected because they represent a broad range of hazardous material properties and consequence mechanisms. The sample case application relied extensively on consequence calculations performed in previous transportation risk assessment studies. A consultant was employed to assist in developing consequence models for explosives. The results of the detailed evaluation of the three consequence comparison methods indicate that methods are available to perform technically defensible comparisons of the consequences of a wide variety of hazardous materials. Location-dependent scenario and risk assessment methods are available now, and the figure of merit method could be developed with additional effort. All of the methods require substantial effort to implement. Methods that would require substantially less effort were identified in the preliminary evaluation, but questions of technical accuracy preclude their application on a large scale. These methods may have application to specific cases, however.

  11. Extraction of Protein Interaction Data: A Comparative Analysis of Methods in Use

    Directory of Open Access Journals (Sweden)

    Jose Hena

    2007-01-01

    Several natural language processing (NLP) tools, both commercial and freely available, are used to extract protein interactions from publications. The methods used by these tools range from pattern matching to dynamic programming, with individual recall and precision rates. A methodical survey of these tools, keeping in mind the minimum interaction information a researcher would need, in comparison to manual analysis, has not been carried out. We compared data generated using some of the selected NLP tools with manually curated protein interaction data (PathArt and IMaps) to comparatively determine the recall and precision rates. The rates were found to be lower than the published scores when a normalized definition of interaction is considered. Each data point captured wrongly or not picked up by a tool was analyzed. Our evaluation brings forth critical failures of NLP tools and provides pointers for the development of an ideal NLP tool.

  12. Assessing the Goodness of Fit of Phylogenetic Comparative Methods: A Meta-Analysis and Simulation Study.

    Directory of Open Access Journals (Sweden)

    Dwueng-Chwuan Jhwueng

    Phylogenetic comparative methods (PCMs) have been applied widely in analyzing data from related species, but their fit to data is rarely assessed. Can one determine whether any particular comparative method is typically more appropriate than others by examining comparative data sets? I conducted a meta-analysis of 122 phylogenetic data sets found by searching all papers in JEB, Blackwell Synergy and JSTOR published in 2002-2005, for the purpose of assessing the fit of PCMs. The number of species in these data sets ranged from 9 to 117. I used the Akaike information criterion to compare PCMs, and then fit PCMs to bivariate data sets through REML analysis. Correlation estimates between two traits and bootstrapped confidence intervals of correlations from each model were also compared. For phylogenies of less than one hundred taxa, the Independent Contrasts method and the independent, non-phylogenetic models provide the best fit. For bivariate analysis, correlations from different PCMs are qualitatively similar, so actual correlations from real data seem to be robust to the PCM chosen for the analysis. Therefore, researchers might apply the PCM they believe best describes the evolutionary mechanisms underlying their data.

  13. A Novel Feature-Level Data Fusion Method for Indoor Autonomous Localization

    Directory of Open Access Journals (Sweden)

    Minxiang Liu

    2013-01-01

    We present a novel feature-level data fusion method for autonomous localization in an inactive multiple-reference unknown indoor environment. Since monocular sensors cannot provide depth information directly, the proposed method incorporates the edge information of images from a camera with homologous depth information received from an infrared sensor. Real-time experimental results demonstrate that the accuracy of position and orientation is greatly improved by using the proposed fusion method in an unknown complex indoor environment. Compared to monocular localization, the proposed method is found to give up to 70 percent improvement in accuracy.

  14. A comparative critical study between FMEA and FTA risk analysis methods

    Science.gov (United States)

    Cristea, G.; Constantinescu, DM

    2017-10-01

    An overwhelming number of different risk analysis techniques are in use today, with acronyms such as FMEA (Failure Modes and Effects Analysis) and its extension FMECA (Failure Mode, Effects, and Criticality Analysis), DRBFM (Design Review by Failure Mode), FTA (Fault Tree Analysis) and its extension ETA (Event Tree Analysis), HAZOP (Hazard & Operability Studies), HACCP (Hazard Analysis and Critical Control Points) and What-if/Checklist. However, the most used analysis techniques in the mechanical and electrical industry are FMEA and FTA. In FMEA, which is an inductive method, information about the consequences and effects of failures is usually collected through interviews with experienced people with different knowledge, i.e., cross-functional groups. The FMEA is used to capture potential failures/risks and their impacts and to prioritize them on a numeric scale called the Risk Priority Number (RPN), which ranges from 1 to 1000. FTA is a deductive method, i.e., a general system state is decomposed into chains of more basic events of components. The logical interrelationship of how such basic events depend on and affect each other is often described analytically in a reliability structure which can be visualized as a tree. Both methods are very time-consuming to apply thoroughly, and this is why it is often not done. As a consequence, possible failure modes may not be identified. To address these shortcomings, it is proposed to use a combination of FTA and FMEA.
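
    The RPN arithmetic is simple enough to state in code. The failure modes and ratings below are invented examples for illustration, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int    # 1..10
    occurrence: int  # 1..10
    detection: int   # 1..10 (10 = hardest to detect)

    @property
    def rpn(self) -> int:
        # Risk Priority Number as used in FMEA: S x O x D, range 1..1000
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("connector corrosion", severity=7, occurrence=4, detection=6),
    FailureMode("winding short",       severity=9, occurrence=2, detection=8),
    FailureMode("loose fastener",      severity=4, occurrence=6, detection=3),
]

# Rank failure modes by RPN, highest priority first
for fm in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{fm.name:20s} RPN={fm.rpn}")
```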

  15. A Comparative Study between a Pseudo-Forward Equation (PFE) and Intelligence Methods for the Characterization of the North Sea Reservoir

    Directory of Open Access Journals (Sweden)

    Saeed Mojeddifar

    2014-12-01

    This paper presents a comparative study between three versions of adaptive neuro-fuzzy inference system (ANFIS) algorithms and a pseudo-forward equation (PFE) for characterizing the North Sea reservoir (F3 block) based on seismic data. According to statistical studies, four attributes (energy, envelope, spectral decomposition and similarity) are known to be useful as fundamental attributes in porosity estimation. Different ANFIS models were constructed using three clustering methods: grid partitioning (GP), the subtractive clustering method (SCM) and fuzzy c-means clustering (FCM). An experimental equation, called PFE and based on similarity attributes, was also proposed to estimate porosity values of the reservoir. When the validation set derived from training wells was used, the R-square coefficient between the two variables (actual and predicted values) was 0.7935 for the ANFIS algorithm and 0.7404 for the PFE model. But when the testing set derived from testing wells was used, the same coefficients decreased to 0.252 and 0.5133 for the ANFIS algorithm and the PFE model, respectively. According to these results, and the geological characteristics observed in the F3 block, it seems that the ANFIS algorithms cannot estimate the porosity acceptably. By contrast, in the outputs of the PFE, the ability to detect geological structures such as faults (gas chimney), folds (salt dome), and bright spots, alongside the porosity estimation of sandstone reservoirs, could help in determining drilling target locations. Finally, this work proposes that the developed PFE could be a good technique for characterizing the reservoir of the F3 block.

  16. Analysis of Vibration Diagnostics Methods for Induction Motors

    Directory of Open Access Journals (Sweden)

    A. P. Kalinov

    2012-01-01

    The paper presents an analysis of existing vibration diagnostics methods. In order to evaluate the efficiency of a method's application, the following criteria have been proposed: the volume of input data required for establishing a diagnosis, data content, software and hardware level, and execution time of the vibration diagnostics. According to the mentioned criteria, a classification of vibration diagnostics methods is presented for the determination of their advantages and disadvantages and the search for directions of their development and improvement. The paper contains a comparative estimation of the methods in accordance with the proposed criteria. According to this estimation, the most efficient methods are spectral analysis and spectral analysis of the vibration signal envelope.
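
    The envelope-spectrum idea can be sketched with a Hilbert transform: the magnitude of the analytic signal gives the envelope, and its spectrum exposes modulation (fault) frequencies. The toy bearing-like signal and all constants below are illustrative assumptions.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_spectrum(x, fs):
    """Amplitude spectrum of a vibration signal's envelope."""
    env = np.abs(hilbert(x))          # envelope via the analytic signal
    env -= env.mean()                 # drop the DC component
    spec = np.abs(np.fft.rfft(env)) / len(env)
    freqs = np.fft.rfftfreq(len(env), d=1 / fs)
    return freqs, spec

# Toy signal: 1 kHz carrier amplitude-modulated at 37 Hz
fs = 10_000
t = np.arange(0, 1, 1 / fs)
x = (1 + 0.5 * np.sin(2 * np.pi * 37 * t)) * np.sin(2 * np.pi * 1_000 * t)
freqs, spec = envelope_spectrum(x, fs)
print(freqs[np.argmax(spec)])         # ~ 37.0 Hz
```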

  17. Settlement behavior of the container for high-level nuclear waste disposal. Centrifuge model tests and proposal for simple evaluation method for settlement behavior

    International Nuclear Information System (INIS)

    Nakamura, Kunihiko; Tanaka, Yukihisa

    2004-01-01

    In Japan, bentonite will be used as the buffer material in high-level nuclear waste disposal. If the container sinks deeply into buffer material softened by the infiltration of ground water of various properties, the resulting decrease in the thickness of the buffer material may compromise its required functions. Therefore, it is very important to consider the settlement of the container. In this study, the influences of distilled water and artificial seawater on the settlement of the container were investigated and a simple evaluation method for the settlement of the container was proposed. The following findings were obtained from this study. (1) Under distilled water, the amount of settlement decreases exponentially as the dry density becomes larger. (2) While the amount of settlement of the container under 10% artificial seawater was almost equal to that in distilled water, the container was floating under 100% artificial seawater. (3) A simple evaluation method for the settlement of the container was proposed based on the diffuse double layer theory, and the effectiveness of the proposed method was demonstrated by the results of several experiments. (author)

  18. Challenging a court settlement: Concept, legal nature and methods of challenging in domestic and comparative law

    Directory of Open Access Journals (Sweden)

    Salma Marija

    2011-01-01

    In this paper the author offers an analysis of the rules regulating the challenging of a court settlement, in light of the evolution and legal nature of the court settlement in domestic and comparative law (Austrian, German, and Hungarian law). The method of the procedural challenge depended on the understanding of whether the settlement is an agreement (contract) between the parties before the court or a decision of the court (on acceptance or rejection of the proposal of the parties to reach a settlement). In the former instance the method of challenge is the filing of an action, and in the latter it takes the form of a legal remedy, most often an extraordinary legal remedy - a request for repetition of the trial - against the final and binding decision of the court by which the settlement was either accepted or rejected. The theoretical dilemma about the legal nature of the court settlement had an effect on normative regulation, as well as on court practice. In Serbian law, this dilemma was resolved by enactment of the Civil Procedure Code, which explicitly regulates that a court settlement is challenged by an action before the court. As a result, the idea of the court settlement as a form of agreement prevailed in the legal system. However, the considerable procedural effects of the court settlement cannot be ignored. The principal procedural effect is that the litigation is terminated. Further, the court settlement represents a form of an executive title.

  19. Teaching Comparative Law in the 21st Century: Beyond the Civil/Common Law Dichotomy.

    Science.gov (United States)

    Waxman, Michael P.

    2001-01-01

    Asserts that the inexorable shift to transnational and global legal practice demands a comparable shift in methods of teaching comparative law to move it beyond its current American common law/European civil law myopia. Proposes an introductory course, Law in Comparative Cultures, which exposes students to a panoply of international legal systems.…

  20. Towards an efficient protocol for the determination of triterpenic acids in olive fruit: a comparative study of drying and extraction methods.

    Science.gov (United States)

    Goulas, Vlasios; Manganaris, George A

    2012-01-01

    Triterpenic acids, such as maslinic acid and oleanolic acid, are commonly found in olive fruits and have been associated with many health benefits. The drying and extraction methods, as well as the solvents used, are critical factors in the determination of their concentration in plant tissues. Thus, there is an emerging need for standardisation of an efficient extraction protocol for determining triterpenic acid content in olive fruits. The aims were to evaluate common extraction methods of triterpenic acids from olive fruits and to determine the effect of the drying method on their content, in order to propose an optimum protocol for their quantification. The efficacy of different drying and extraction methods was evaluated through the quantification of maslinic acid and oleanolic acid contents using the reversed-phase HPLC technique. Data showed that ultrasonic-assisted extraction with ethanol or a mixture of ethanol:methanol (1:1, v/v) resulted in the recovery of significantly higher amounts of triterpenic acids than the other methods used. The drying method also affected the estimated triterpenic acid content; frozen or lyophilised olive fruit material gave higher yields of triterpenic acids compared with material air-dried at either 35°C or 105°C. This study provides a rapid and low-cost extraction method, i.e. ultrasonic-assisted extraction with an eco-friendly solvent such as ethanol, from frozen or lyophilised olive fruit, for the accurate determination of the triterpenic acid content of olive fruit. Copyright © 2011 John Wiley & Sons, Ltd.

  1. Proposed Suitable Methods to Detect Transient Regime Switching to Improve Power Quality with Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Javad Safaee Kuchaksaraee

    2016-10-01

    The consumption of electrical energy and the use of non-linear loads that create transient states in distribution networks are increasing day by day, which is why the analysis of power quality for energy sustainability in power networks has become more important. Transients are often created by energy injection through switching or lightning and cause changes in the voltage and nominal current; a sudden increase or decrease in voltage or current is characteristic of a transient state. This paper sheds some light on capacitor bank switching, one of the main causes of oscillatory transient states in the distribution network, using the wavelet transform. To identify the switching current of the capacitor bank and the internal fault current of the transformer, and thus prevent unnecessary outage of the differential relay, a new smart method is proposed. The accurate performance of this method is shown by simulation in EMTP and MATLAB software.
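
    One common wavelet-based detection scheme, sketched below with the PyWavelets library, thresholds the finest-scale detail coefficients of the voltage window. This is an illustrative sketch, not the paper's algorithm; the wavelet, threshold and toy burst are assumptions.

```python
import numpy as np
import pywt

def detect_transients(v, wavelet="db4", level=4, thresh=5.0):
    """Flag a voltage window as containing a transient when the peak of
    the finest-scale detail coefficients exceeds a multiple of their
    robust noise estimate."""
    coeffs = pywt.wavedec(v, wavelet, level=level)
    d1 = coeffs[-1]                              # finest detail coefficients
    sigma = np.median(np.abs(d1)) / 0.6745       # robust noise scale
    return np.abs(d1).max() > thresh * sigma

fs = 6_400
t = np.arange(0, 0.2, 1 / fs)
v = np.sin(2 * np.pi * 50 * t)                   # clean 50 Hz waveform
v[640:660] += 0.4 * np.sin(2 * np.pi * 2_000 * t[640:660])  # switching burst
print(detect_transients(v))                      # -> True
```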

  2. Visual field examination method using virtual reality glasses compared with the Humphrey perimeter

    Directory of Open Access Journals (Sweden)

    Tsapakis S

    2017-08-01

    Stylianos Tsapakis, Dimitrios Papaconstantinou, Andreas Diagourtas, Konstantinos Droutsas, Konstantinos Andreanos, Marilita M Moschos, Dimitrios Brouzas; 1st Department of Ophthalmology, National and Kapodistrian University of Athens, Athens, Greece. Purpose: To present a visual field examination method using virtual reality glasses and evaluate the reliability of the method by comparing the results with those of the Humphrey perimeter. Materials and methods: Virtual reality glasses, a smartphone with a 6-inch display, and software that implements a fast-threshold 3 dB step staircase algorithm for the central 24° of the visual field (52 points) were used to test 20 eyes of 10 patients, who were tested in a random and consecutive order as they appeared in our glaucoma department. The results were compared with those obtained from the same patients using the Humphrey perimeter. Results: A high correlation coefficient (r=0.808, P<0.0001) was found between the virtual reality visual field test and the Humphrey perimeter visual field. Conclusion: Visual field examination results using virtual reality glasses have a high correlation with the Humphrey perimeter, making the method suitable for possible clinical use. Keywords: visual fields, virtual reality glasses, perimetry, visual fields software, smartphone

  3. A comparative analysis on computational methods for fitting an ERGM to biological network data

    Directory of Open Access Journals (Sweden)

    Sudipta Saha

    2015-03-01

    Full Text Available Exponential random graph models (ERGMs) based on graph theory are useful in studying global biological network structure using its local properties. However, computational methods for fitting such models are sensitive to the type, structure and number of the local features of a network under study. In this paper, we compared computational methods for fitting an ERGM with local features of different types and structures. Two commonly used methods, Markov Chain Monte Carlo Maximum Likelihood Estimation and Maximum Pseudo-Likelihood Estimation, are considered for estimating the coefficients of network attributes. We compared the estimates of an observed network with those of a randomly simulated network using both methods under the ERGM. The motivation was to ascertain the extent to which an observed network would deviate from a randomly simulated network if the physical numbers of attributes were approximately the same. Cut-off points for some common attributes of interest for different orders of nodes were determined through simulations. We applied our method to a known regulatory network database of Escherichia coli (E. coli).

  4. A comparative study of boar semen extenders with different proposed preservation times and their effect on semen quality and fertility

    OpenAIRE

    Marina Anastasia Karageorgiou; Georgios Tsousis; Constantin M. Boscos; Eleni D. Tzika; Panagiotis D. Tassis; Ioannis A. Tsakmakidis

    2016-01-01

    The present study compared the quality characteristics of boar semen diluted with three extenders of different proposed preservation times (short-term, medium-term and long-term). Part of the extended semen was used for artificial insemination on the farm (30 sows/extender), while the remaining part was stored for three days (16–18 °C). The stored and used semen was also assessed in the laboratory at insemination time and on days 1 and 2 after the collection (day 0). The long-term extender was used for a sho...

  5. A diagram retrieval method with multi-label learning

    Science.gov (United States)

    Fu, Songping; Lu, Xiaoqing; Liu, Lu; Qu, Jingwei; Tang, Zhi

    2015-01-01

    In recent years, the retrieval of plane geometry figures (PGFs) has attracted increasing attention in the fields of mathematics education and computer science. However, the high cost of matching complex PGF features leads to the low efficiency of most retrieval systems. This paper proposes an indirect classification method based on multi-label learning, which improves retrieval efficiency by reducing the scope of the comparison operation from the whole database to small candidate groups. Label correlations among PGFs are taken into account for the multi-label classification task. The primitive feature selection for multi-label learning and the feature description of visual geometric elements are conducted individually to match similar PGFs. The experimental results show the competitive performance of the proposed method compared with existing PGF retrieval methods in terms of both time consumption and retrieval quality.
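
    The indirect-classification idea (predict a label set first, then compare only within the matching candidate group) can be sketched with any generic multi-label classifier. The scikit-learn sketch below uses random stand-in features and labels, and it ignores the label-correlation modeling the paper adds.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

# Toy stand-ins: rows are PGF feature vectors; Y columns are binary labels
# such as "contains circle" / "contains triangle" (names are illustrative).
rng = np.random.default_rng(0)
X = rng.random((200, 16))
Y = (rng.random((200, 3)) > 0.5).astype(int)

clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
db_labels = clf.predict(X)   # predicted label sets for the whole database

def candidate_group(query_vec):
    """Restrict the expensive feature matching to items that share the
    query's predicted label set, instead of scanning the whole database."""
    q = clf.predict(query_vec.reshape(1, -1))[0]
    return np.where((db_labels == q).all(axis=1))[0]

print(len(candidate_group(rng.random(16))), "candidates out of", len(X))
```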

  6. Comparing three methods for participatory simulation of hospital work systems

    DEFF Research Database (Denmark)

    Broberg, Ole; Andersen, Simone Nyholm

    Summative Statement: This study compared three participatory simulation methods using different simulation objects: a low-resolution table-top setup using Lego figures, full-scale mock-ups, and blueprints using Lego figures. It was concluded that the three objects, through differences in fidelity and affordance... scenarios using the objects. Results: Full-scale mock-ups significantly addressed the local space and technology/tool elements of a work system. In contrast, the table-top simulation object addressed the organizational issues of the future work system. The blueprint-based simulation addressed...

  7. Improvement of vector compensation method for vehicle magnetic distortion field

    Energy Technology Data Exchange (ETDEWEB)

    Pang, Hongfeng, E-mail: panghongfeng@126.com; Zhang, Qi; Li, Ji; Luo, Shitu; Chen, Dixiang; Pan, Mengchun; Luo, Feilu

    2014-03-15

    Magnetic distortions such as the eddy-current field and low-frequency magnetic fields have not been considered in vector compensation methods. A new compensation method is proposed to suppress these magnetic distortions and improve compensation performance, in which the magnetic distortions related to the measurement vectors and time are considered. The experimental system mainly consists of a three-axis fluxgate magnetometer (DM-050), an underwater vehicle and a proton magnetometer, in which the scalar value of the magnetic field is obtained with the proton magnetometer and considered to be the true value. Compared with traditional compensation methods, experimental results show that the magnetic distortions can be further reduced by a factor of two. After compensation, the error intensity and RMS error are reduced from 11684.013 nT and 7794.604 nT to 16.219 nT and 5.907 nT, respectively. This suggests an effective way to improve the compensation performance for magnetic distortions. - Highlights: • A new vector compensation method is proposed for vehicle magnetic distortion. • The proposed model not only includes magnetometer error but also considers magnetic distortion. • Compensation parameters are computed directly by solving nonlinear equations. • Compared with traditional methods, the proposed method does not depend on the rotation angle rate. • Error intensity and RMS error can be reduced to 1/2 of the error with traditional methods.
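
    The parameter-solving step can be illustrated with a simplified attitude-independent calibration: fit a correction matrix and offset so that the magnitude of the corrected vector matches the proton-magnetometer scalar reading. This least-squares sketch is only a stand-in for the paper's fuller model (which also carries eddy-current and low-frequency terms), and all data below are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(params, B_meas, B_true):
    """Residuals of the scalar constraint |A (B_m - b)| = |B_true|,
    a simplified stand-in for the paper's distortion model."""
    A, b = params[:9].reshape(3, 3), params[9:]
    corrected = (B_meas - b) @ A.T
    return np.linalg.norm(corrected, axis=1) - B_true

# Synthetic demo: 100 attitudes of a 50,000 nT field seen through a distortion.
rng = np.random.default_rng(1)
field = rng.normal(size=(100, 3))
field *= 50_000 / np.linalg.norm(field, axis=1, keepdims=True)
D = np.eye(3) + 0.05 * rng.normal(size=(3, 3))    # soft-iron-like distortion
off = rng.normal(0.0, 200.0, 3)                   # hard-iron-like offset
B_meas = field @ D.T + off

x0 = np.concatenate([np.eye(3).ravel(), np.zeros(3)])
truth = np.full(100, 50_000.0)
sol = least_squares(residuals, x0, args=(B_meas, truth))
print("max residual after fit:", np.abs(residuals(sol.x, B_meas, truth)).max())
```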

  8. Improvement of vector compensation method for vehicle magnetic distortion field

    International Nuclear Information System (INIS)

    Pang, Hongfeng; Zhang, Qi; Li, Ji; Luo, Shitu; Chen, Dixiang; Pan, Mengchun; Luo, Feilu

    2014-01-01

    Magnetic distortions such as the eddy-current field and low-frequency magnetic fields have not been considered in vector compensation methods. A new compensation method is proposed to suppress these magnetic distortions and improve compensation performance, in which the magnetic distortions related to the measurement vectors and time are considered. The experimental system mainly consists of a three-axis fluxgate magnetometer (DM-050), an underwater vehicle and a proton magnetometer, in which the scalar value of the magnetic field is obtained with the proton magnetometer and considered to be the true value. Compared with traditional compensation methods, experimental results show that the magnetic distortions can be further reduced by a factor of two. After compensation, the error intensity and RMS error are reduced from 11684.013 nT and 7794.604 nT to 16.219 nT and 5.907 nT, respectively. This suggests an effective way to improve the compensation performance for magnetic distortions. - Highlights: • A new vector compensation method is proposed for vehicle magnetic distortion. • The proposed model not only includes magnetometer error but also considers magnetic distortion. • Compensation parameters are computed directly by solving nonlinear equations. • Compared with traditional methods, the proposed method does not depend on the rotation angle rate. • Error intensity and RMS error can be reduced to 1/2 of the error with traditional methods.

  9. Adjusting the general growth balance method for migration

    OpenAIRE

    Hill, Kenneth; Queiroz, Bernardo

    2010-01-01

    Death distribution methods proposed for estimating death registration coverage by comparison with census age distributions assume no net migration. This assumption makes it problematic to apply these methods to sub-national and national populations affected by substantial net migration. In this paper, we propose and explore a two-step process in which the Growth Balance Equation is first used to estimate net migration rates, using a model of age-specific migration, and then it is used to compare the obs...

  10. A proposal of a three-dimensional CT measurement method of maxillofacial structure

    International Nuclear Information System (INIS)

    Tanaka, Ray; Hayashi, Takafumi

    2007-01-01

    Three-dimensional CT measurement is put into practice in order to grasp the pathological condition in diseases such as temporomandibular joint disorder, maxillofacial anomaly, jaw deformity, or fracture, which cause morphologic changes of the maxillofacial bones. For the 3D measurement, a unique system obtained by volume rendering 3D images with simultaneous reference to axial images combined with coronal and sagittal multi-planar reconstruction (MPR) images (we call this the MPR referential method) is employed in order to define the measurement points. Our purpose in this report is to indicate the usefulness of this unique method by comparing it with the common way of defining the measurement points on 3D reconstruction images alone, without consulting MPR images. Clinical CT data obtained from a male patient with skeletal malocclusion were used. Contiguous axial images were reconstructed at 4 times magnification, with a reconstruction interval of 0.5 mm, focused on the temporomandibular joint region on his left side. After these images were converted to Digital Imaging and Communications in Medicine (DICOM) format and sent to a personal computer (PC), a 3D reconstruction image was created using a free 3D DICOM medical image viewer. The coordinates of 3 measurement points (the lateral and medial poles of the mandibular condyle, and the left foramen ovale) were defined with MPR images (MPR coordinates) as reference coordinates, and then the coordinates defined on the 3D reconstruction image alone, without consulting MPR images (3D coordinates), were compared with the MPR coordinates. Three examiners independently repeated the procedure 10 times for every measurement point. In our results, there was no correspondence between the 3D coordinates and the MPR coordinates, and the deviation of the 3D coordinates varied with every measurement point and every observer. We deemed that the "MPR referential method" is useful to assess the location of the target point of anatomical structures.

  11. Sources of variability for the single-comparator method in a heavy-water reactor

    International Nuclear Information System (INIS)

    Damsgaard, E.; Heydorn, K.

    1978-11-01

    The well-thermalized flux in the heavy-water-moderated DR 3 reactor at Risoe prompted us to investigate to what extent a single comparator could be used for multi-element determination instead of multiple comparators. The reliability of the single-comparator method is limited by the thermal-to-epithermal flux ratio, and experiments were designed to determine the variations in this ratio throughout a reactor operating period (4 weeks including a shut-down period of 4-5 days). The bi-isotopic method using zirconium as monitor was chosen, because 94Zr and 96Zr exhibit a large difference in their I0/σth values, and would permit determination of the flux ratio with a precision sufficient to determine variations. One of the irradiation facilities comprises a rotating magazine with 3 channels, each of which can hold five aluminium cans. In this rig, five cans, each holding a polyvial with 1 ml of aqueous zirconium solution, were irradiated simultaneously in one channel. Irradiations were carried out in the first and the third week of 4 periods. In another facility, consisting of a pneumatic tube system, two samples were simultaneously irradiated on top of each other in a polyethylene rabbit. Experiments were carried out once a week for 4 periods. All samples were counted on a Ge(Li) detector for 95Zr, 97mNb and 97Nb. The thermal-to-epithermal flux ratio was calculated from the induced activity, the nuclear data for the two zirconium isotopes, and the detector efficiency. By analysis of variance, the total variation of the flux ratio was separated into a random variation between reactor periods and systematic differences between the positions, as well as the weeks in the operating period. If the variations are in statistical control, the error resulting from use of the single-comparator method in multi-element determination can be estimated for any combination of irradiation position and day in the operating period. With the measured flux ratio variations in DR

  12. Modified method for bronchial suture by Ramirez Gama compared to separate stitches suture: experimental study

    Directory of Open Access Journals (Sweden)

    Vitor Mayer de Moura

    Full Text Available OBJECTIVE: To experimentally compare two classic techniques described for manual suture of the bronchial stump. METHODS: We used organs of pigs, with isolated trachea and lungs, preserved by refrigeration. We dissected 30 bronchi, which were divided into three groups of ten bronchi each, of 3 mm, 5 mm, and 7 mm, respectively. In each group, five bronchi were sutured with simple, separated, extramucosal stitches, and the other five with the technique proposed by Ramirez and modified by Santos et al. Once the sutures were finished, the anastomoses were tested using compressed air ventilation, applying an endotracheal pressure of 20 mmHg. RESULTS: The Ramirez Gama suture was more effective in the bronchi of 3, 5 and 7 mm, and there was no air leak even after subjecting them to a tracheal pressure of 20 mmHg. The simple interrupted sutures were less effective, with extravasation in six of the 15 tested bronchi, especially at the angles of the sutures. These figures were not significant (p = 0.08). CONCLUSION: Manual sutures of the bronchial stumps were more effective when the modified Ramirez Gama suture was used in bronchi of these calibers when tested with increased endotracheal pressure.

  13. Facade Proposals for Urban Augmented Reality

    OpenAIRE

    Fond , Antoine; Berger , Marie-Odile; Simon , Gilles

    2017-01-01

    We introduce a novel object proposals method specific to building facades. We define new image cues that measure typical facade characteristics such as semantics, symmetry, and repetitions. They are combined to generate a few facade candidates in urban environments quickly. We show that our method outperforms state-of-the-art object proposal techniques for this task on the 1000 images of the Zurich Building Database. We demonstrate the interest of this procedure for augment...

  14. Price forecasting of day-ahead electricity markets using a hybrid forecast method

    International Nuclear Information System (INIS)

    Shafie-khah, M.; Moghaddam, M. Parsa; Sheikh-El-Eslami, M.K.

    2011-01-01

    Research highlights: → A hybrid method is proposed to forecast the day-ahead prices in the electricity market. → The method combines Wavelet-ARIMA and RBFN network models. → The PSO method is applied to obtain the optimum RBFN structure and avoid overfitting. → One of the merits of the proposed method is its lower need for input data. → The proposed method behaves more accurately compared with previous methods. -- Abstract: Energy price forecasting in a competitive electricity market is crucial for the market participants in planning their operations and managing their risk, and it is also the key information in the economic optimization of the electric power industry. However, price series usually exhibit complex behavior due to their nonlinearity, nonstationarity, and time variance. In this paper, a novel hybrid method to forecast the day-ahead electricity price is proposed. This hybrid method is based on the wavelet transform, Auto-Regressive Integrated Moving Average (ARIMA) models and Radial Basis Function Neural Networks (RBFN). The wavelet transform provides a set of better-behaved constitutive series than the price series for prediction. The ARIMA model is used to generate a linear forecast, and then an RBFN is developed as a tool for nonlinear pattern recognition to correct the estimation error in the wavelet-ARIMA forecast. Particle Swarm Optimization (PSO) is used to optimize the network structure, which adapts the RBFN to the specified training set, reducing computational complexity and avoiding overfitting. The proposed method is examined on the electricity market of mainland Spain and the results are compared with some of the most recent price forecasting methods. The results show that the proposed hybrid method can provide a considerable improvement in forecasting accuracy.
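
    A condensed sketch of the hybrid pipeline follows: wavelet denoising to obtain a better-behaved series, ARIMA for the linear part, and a small neural network trained on the in-sample errors as the nonlinear correction. An MLP stands in for the paper's PSO-tuned RBF network, and the orders, lags, and threshold are assumptions.

```python
import numpy as np
import pywt
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

def hybrid_forecast(prices, horizon=24, lags=24):
    # 1) Wavelet denoising: soft-threshold the detail coefficients.
    coeffs = pywt.wavedec(prices, "db4", level=2)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(prices)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, "soft") for c in coeffs[1:]]
    smooth = pywt.waverec(coeffs, "db4")[: len(prices)]

    # 2) Linear forecast from ARIMA on the denoised series.
    model = ARIMA(smooth, order=(2, 1, 2)).fit()
    linear = np.asarray(model.forecast(horizon))

    # 3) Nonlinear correction learned from lagged in-sample errors
    #    (a single scalar correction here, for brevity).
    err = np.asarray(prices) - np.asarray(model.fittedvalues)
    X = np.array([err[i - lags:i] for i in range(lags, len(err))])
    nn = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                      random_state=0).fit(X, err[lags:])
    return linear + nn.predict(err[-lags:].reshape(1, -1))[0]

# Demo on a synthetic daily-shaped series (illustrative only).
t = np.arange(24 * 60)
series = 50 + 10 * np.sin(2 * np.pi * t / 24)
series += np.random.default_rng(0).normal(0, 2, t.size)
print(hybrid_forecast(series)[:3])
```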

  15. Price forecasting of day-ahead electricity markets using a hybrid forecast method

    Energy Technology Data Exchange (ETDEWEB)

    Shafie-khah, M., E-mail: miadreza@gmail.co [Tarbiat Modares University, Tehran (Iran, Islamic Republic of); Moghaddam, M. Parsa, E-mail: parsa@modares.ac.i [Tarbiat Modares University, Tehran (Iran, Islamic Republic of); Sheikh-El-Eslami, M.K., E-mail: aleslam@modares.ac.i [Tarbiat Modares University, Tehran (Iran, Islamic Republic of)

    2011-05-15

    Research highlights: → A hybrid method is proposed to forecast the day-ahead prices in the electricity market. → The method combines Wavelet-ARIMA and RBFN network models. → The PSO method is applied to obtain the optimum RBFN structure and avoid overfitting. → One of the merits of the proposed method is its lower need for input data. → The proposed method behaves more accurately compared with previous methods. -- Abstract: Energy price forecasting in a competitive electricity market is crucial for the market participants in planning their operations and managing their risk, and it is also the key information in the economic optimization of the electric power industry. However, price series usually exhibit complex behavior due to their nonlinearity, nonstationarity, and time variance. In this paper, a novel hybrid method to forecast the day-ahead electricity price is proposed. This hybrid method is based on the wavelet transform, Auto-Regressive Integrated Moving Average (ARIMA) models and Radial Basis Function Neural Networks (RBFN). The wavelet transform provides a set of better-behaved constitutive series than the price series for prediction. The ARIMA model is used to generate a linear forecast, and then an RBFN is developed as a tool for nonlinear pattern recognition to correct the estimation error in the wavelet-ARIMA forecast. Particle Swarm Optimization (PSO) is used to optimize the network structure, which adapts the RBFN to the specified training set, reducing computational complexity and avoiding overfitting. The proposed method is examined on the electricity market of mainland Spain and the results are compared with some of the most recent price forecasting methods. The results show that the proposed hybrid method can provide a considerable improvement in forecasting accuracy.

  16. A measurement fusion method for nonlinear system identification using a cooperative learning algorithm.

    Science.gov (United States)

    Xia, Youshen; Kamel, Mohamed S

    2007-06-01

    Identification of a general nonlinear noisy system, viewed as the estimation of a predictor function, is studied in this article. A measurement fusion method for the predictor function estimate is proposed. In the proposed scheme, observed data are first fused by using an optimal fusion technique, and then the optimal fused data are incorporated in a nonlinear function estimator based on a robust least squares support vector machine (LS-SVM). A cooperative learning algorithm is proposed to implement the proposed measurement fusion method. Compared with related identification methods, the proposed method can minimize both the approximation error and the noise error. The performance analysis shows that the proposed optimal measurement fusion function estimate has a smaller mean square error than the LS-SVM function estimate. Moreover, the proposed cooperative learning algorithm converges globally to the optimal measurement fusion function estimate. Finally, the proposed measurement fusion method is applied to ARMA signal and spatio-temporal signal modeling. Experimental results show that the proposed measurement fusion method can provide a more accurate model.

  17. [Comparative study of two treatment methods for acute periodontal abscess].

    Science.gov (United States)

    Jin, Dong-mei; Wang, Wei-qian

    2012-10-01

    The aim of this short-term study was to compare the clinical efficacy of 2 different methods to treat acute periodontal abscesses. After patient selection, 100 cases of acute periodontal abscess were randomly divided into two groups. The experimental group was treated by supra- and subgingival scaling, while the control group was treated by incision and drainage. A clinical examination was carried out to record the following variables: subjective clinical variables including pain, edema, redness and swelling; objective clinical variables including gingival index (GI), bleeding index (BI), probing depth (PD), suppuration, lymphadenopathy and tooth mobility. The data were analyzed with the SPSS 19.0 software package. RESULTS: Subjective clinical variables demonstrated statistically significant improvements with both methods from the first day after treatment, lasting for at least 30 days (P<0.05), but the experimental group showed more improvement in edema and redness than the control group (P<0.05) in the treatment of acute periodontal abscesses.

  18. New proposal of moderator temperature coefficient estimation method using gray-box model in NPP, (1)

    International Nuclear Information System (INIS)

    Mori, Michitsugu; Kagami, Yuichi; Kanemoto, Shigeru; Enomoto, Mitsuhiro; Tamaoki, Tetsuo; Kawamura, Shinichiro

    2004-01-01

    The purpose of the present paper is to establish a new moderator temperature coefficient (MTC) estimation method based on the gray-box modeling concept. The gray-box model consists of a point kinetics model as the first-principles model and a fitting model of moderator temperature kinetics. Applying Kalman filter and maximum likelihood estimation algorithms to the gray-box model, the MTC can be estimated. The verification test is done by Monte Carlo simulation, and it is shown that the present method gives the best estimation results compared with the conventional methods, from the viewpoint of unbiased estimates with the smallest scatter. Furthermore, the method is verified via real plant data analysis. The good performance of the present method is explained by the proper definition of the likelihood function based on an explicit expression of observation and system noise in the gray-box model. (author)

  19. Investigating the Efficacy of Practical Skill Teaching: A Pilot-Study Comparing Three Educational Methods

    Science.gov (United States)

    Maloney, Stephen; Storr, Michael; Paynter, Sophie; Morgan, Prue; Ilic, Dragan

    2013-01-01

    Effective education of practical skills can alter clinician behaviour, positively influence patient outcomes, and reduce the risk of patient harm. This study compares the efficacy of two innovative practical skill teaching methods, against a traditional teaching method. Year three pre-clinical physiotherapy students consented to participate in a…

  20. Adaptation to Climate Change: A Comparative Analysis of Modeling Methods for Heat-Related Mortality.

    Science.gov (United States)

    Gosling, Simon N; Hondula, David M; Bunker, Aditi; Ibarreta, Dolores; Liu, Junguo; Zhang, Xinxin; Sauerborn, Rainer

    2017-08-16

    Multiple methods are employed for modeling adaptation when projecting the impact of climate change on heat-related mortality. The sensitivity of impacts to each is unknown because they have never been systematically compared. In addition, little is known about the relative sensitivity of impacts to "adaptation uncertainty" (i.e., the inclusion/exclusion of adaptation modeling) relative to using multiple climate models and emissions scenarios. This study had three aims: a) compare the range in projected impacts that arises from using different adaptation modeling methods; b) compare the range in impacts that arises from adaptation uncertainty with ranges from using multiple climate models and emissions scenarios; c) recommend modeling method(s) to use in future impact assessments. We estimated impacts for 2070-2099 for 14 European cities, applying six different methods for modeling adaptation; we also estimated impacts with five climate models run under two emissions scenarios to explore the relative effects of climate modeling and emissions uncertainty. The range of the difference (percent) in impacts between including and excluding adaptation, irrespective of climate modeling and emissions uncertainty, can be as low as 28% with one method and up to 103% with another (mean across 14 cities). In 13 of 14 cities, the ranges in projected impacts due to adaptation uncertainty are larger than those associated with climate modeling and emissions uncertainty. Researchers should carefully consider how to model adaptation because it is a source of uncertainty that can be greater than the uncertainty in emissions and climate modeling. We recommend absolute threshold shifts and reductions in slope. https://doi.org/10.1289/EHP634.

  1. Security analysis and improvements to the PsychoPass method.

    Science.gov (United States)

    Brumen, Bostjan; Heričko, Marjan; Rozman, Ivan; Hölbl, Marko

    2013-08-13

    In a recent paper, Pietro Cipresso et al proposed the PsychoPass method, a simple way to create strong passwords that are easy to remember. However, the method has some security issues that need to be addressed. The aim was to perform a security analysis of the PsychoPass method and outline its limitations and possible improvements. We used brute-force analysis and dictionary-attack analysis of the PsychoPass method to outline its weaknesses. The first issue with the PsychoPass method is that it requires the password to be reproduced on the same keyboard layout as was used to generate it. The second issue is a security weakness: although the produced password is 24 characters long, the password is still weak. We elaborate on the weakness and propose a solution that produces strong passwords. The proposed version first requires the use of the SHIFT and ALT-GR keys in combination with other keys, and second, the keys need to be 1-2 key distances apart. The improved PsychoPass method yields passwords that can be broken only in hundreds of years based on current computing power. The proposed PsychoPass method requires 10 keys, as opposed to 20 keys in the original method, for comparable password strength.

  2. Online solving of economic dispatch problem using neural network approach and comparing it with classical method

    International Nuclear Information System (INIS)

    Mohammadi, A.; Varahram, M.H.

    2007-01-01

    In this study, two methods for solving economic dispatch problems, namely the Hopfield neural network and the lambda iteration method, are compared. Three sample power systems with 3, 6 and 20 units have been considered. The CPU time required for solving the economic dispatch of these systems has been calculated. It has been shown that for on-line economic dispatch, the Hopfield neural network is more efficient and the time required for convergence is considerably smaller compared to classical methods. (author)
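
    The classical lambda iteration the study benchmarks against fits in a few lines. Below is a minimal Python version for quadratic cost curves; generator limits are omitted for brevity and the coefficients are illustrative, not taken from the paper.

```python
import numpy as np

def lambda_iteration(a, b, c, demand, tol=1e-6):
    """Economic dispatch for costs C_i(P) = a_i + b_i P + c_i P^2.
    At the optimum every unit runs where marginal cost equals lambda:
    b_i + 2 c_i P_i = lambda  =>  P_i = (lambda - b_i) / (2 c_i).
    Bisect on lambda until total output matches demand."""
    lo, hi = b.min(), b.max() + 2 * c.max() * demand
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        P = (lam - b) / (2 * c)
        if P.sum() > demand:
            hi = lam
        else:
            lo = lam
    return P, lam

# Three-unit example with illustrative coefficients.
a = np.array([500.0, 400.0, 600.0])
b = np.array([5.3, 5.5, 5.8])
c = np.array([0.004, 0.006, 0.009])
P, lam = lambda_iteration(a, b, c, demand=800.0)
print(P.round(2), "lambda:", round(lam, 4),
      "cost:", round((a + b * P + c * P**2).sum(), 1))
```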

  3. Comparative analysis of solution methods for the point kinetics equations

    International Nuclear Information System (INIS)

    Hernandez S, A.

    2003-01-01

    This paper presents a comparative analysis of different analytical solutions of the point kinetics equations, which involve two variables of interest: a) the temporal behavior of the neutron population, and b) the temporal behavior of the different groups of delayed neutron precursors. The first solution is based on a method that solves the transfer function of the differential equation for the neutron population, in which the aim is to obtain the poles that determine the stability of this transfer function. In this section it is demonstrated that the temporal variation of the reactivity of the system can be handled as required, since the integration time for this method does not affect the result. The second solution, however, is based on an iterative method such as Runge-Kutta or the Euler method, where the algorithm solves first-order differential equations, thereby giving a solution to each of the differential equations that make up the point kinetics equations. In this section it is demonstrated that a correct temporal behavior of the neutron population can only be obtained when the integration is performed over very short time intervals, forcing the temporal variation of the reactivity to change very quickly without any control over the time. In both methods the same changes are applied both to the reactivity of the system and to the integration times, validating the results through graphs of the temporal behavior of the neutron population vs. time. (Author)
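
    The iterative family of solutions described above can be illustrated by integrating the point kinetics equations with an adaptive Runge-Kutta scheme. The sketch below uses a single effective precursor group and illustrative constants rather than the paper's data; the stiffness of these equations is exactly why very short integration steps are needed.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, lam, Lambda = 0.0065, 0.08, 1e-4   # delayed fraction, decay const, gen. time

def rho(t):
    return 0.001 if t >= 1.0 else 0.0    # +0.1% step reactivity at t = 1 s

def rhs(t, y):
    n, C = y                             # neutron population, precursors
    dn = (rho(t) - beta) / Lambda * n + lam * C
    dC = beta / Lambda * n - lam * C
    return [dn, dC]

y0 = [1.0, beta / (lam * Lambda)]        # steady state: dn/dt = dC/dt = 0
sol = solve_ivp(rhs, (0.0, 10.0), y0, method="RK45", max_step=0.01)
print(f"n(10 s) = {sol.y[0, -1]:.3f}")
```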

  4. GHM method for obtaining rational solutions of nonlinear differential equations.

    Science.gov (United States)

    Vazquez-Leal, Hector; Sarmiento-Reyes, Arturo

    2015-01-01

    In this paper, we propose the application of the general homotopy method (GHM) to obtain rational solutions of nonlinear differential equations. It delivers a high-precision representation of the nonlinear differential equation using a few linear algebraic terms. In order to assess the benefits of this proposal, three nonlinear problems are solved and compared against other semi-analytic or numerical methods. The obtained results show that GHM is a powerful tool, capable of generating highly accurate rational solutions. AMS subject classification 34L30.

  5. Developing an Agent-Based Simulation System for Post-Earthquake Operations in Uncertainty Conditions: A Proposed Method for Collaboration among Agents

    Directory of Open Access Journals (Sweden)

    Navid Hooshangi

    2018-01-01

    Full Text Available Agent-based modeling is a promising approach for developing simulation tools for natural hazards in different areas, such as urban search and rescue (USAR) operations. The present study aimed to develop a dynamic agent-based simulation model for post-earthquake USAR operations using a geospatial information system and multi-agent systems (GIS and MAS, respectively). We also propose an approach for dynamic task allocation and establishing collaboration among agents based on the contract net protocol (CNP) and the interval-based Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), which considers uncertainty in natural-hazard information during agents' decision-making. The decision-making weights were calculated by the analytic hierarchy process (AHP). In order to implement the system, an earthquake environment was simulated and the damage to buildings and the number of injuries were calculated for Tehran's District 3: 23%, 37%, 24% and 16% of buildings were in the slight, moderate, extensive and complete vulnerability classes, respectively. The number of injured persons was calculated to be 17,238. Numerical results in 27 scenarios showed that the proposed method is more accurate than the CNP method in terms of USAR operational time (at least a 13% decrease) and the number of human fatalities (at least a 9% decrease). In the interval uncertainty analysis of our proposed simulated system, the lower and upper bounds of the uncertain responses are evaluated. The overall results showed that considering uncertainty in task allocation can be highly advantageous in a disaster environment. Such systems can be used to manage and prepare for natural hazards.
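
    The ranking step can be illustrated with a crisp TOPSIS implementation; the interval-valued extension the authors use to carry uncertainty is omitted here, and the weights and scores are purely illustrative.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Crisp TOPSIS. matrix: alternatives x criteria; weights: AHP-style
    weights summing to 1; benefit: True to maximize a criterion,
    False to minimize it. Returns closeness coefficients (higher is better)."""
    v = matrix / np.linalg.norm(matrix, axis=0) * weights   # weighted, normalised
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    worst = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - worst, axis=1)
    return d_neg / (d_pos + d_neg)

# Rank three rescue teams on (travel time, capability): lower time is
# better, higher capability is better (numbers are illustrative).
scores = topsis(np.array([[12.0, 0.7], [8.0, 0.5], [15.0, 0.9]]),
                weights=np.array([0.6, 0.4]),
                benefit=np.array([False, True]))
print("ranking, best first:", scores.argsort()[::-1])
```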

  6. Monocyte-mediated erythrocyte destruction. A comparative study of current methods

    International Nuclear Information System (INIS)

    Hunt, J.S.; Beck, M.L.; Wood, G.W.

    1981-01-01

    Three assay systems (EAIgG rosette formation, 51Cr release, and erythrophagocytosis) were used to quantitate the interaction between antibody-coated human erythrocytes and normal blood monocytes. The three methods were compared in terms of time requirements and sensitivity. Erythrophagocytosis required more time to perform (2 hours) than the rosette tests (30 minutes) but less than the minimum for 51Cr release assays (5.5 hours). Erythrophagocytosis was 20-fold more sensitive than either of the other two procedures. Results obtained with purified IgG anti-D and with antibodies induced by transfusion or pregnancy were similar.

  7. Comparing energy technology alternatives from an environmental perspective

    International Nuclear Information System (INIS)

    House, P.W.; Coleman, J.A.; Shull, R.D.; Matheny, R.W.; Hock, J.C.

    1981-02-01

    A number of individuals and organizations advocate the use of comparative, formal analysis to determine which are the safest methods for producing and using energy. Some have suggested that the findings of such analyses should be the basis upon which final decisions are made about whether to actually deploy energy technologies. Some of those who support formal comparative analysis are in a position to shape the policy debate on energy and environment. An opposing viewpoint is presented here, arguing that, for technical reasons, analysis can provide no definitive or rationally credible answers to the question of overall safety. Analysis has not determined, and cannot determine, the sum total of damage to human welfare and ecological communities from energy technologies. Analysis has produced estimates of particular types of damage; however, it is impossible to make such estimates comparable and commensurate across different classes of technologies and environmental effects. As a result of these deficiencies, comparative analysis cannot form the basis of a credible, viable energy policy. Yet, without formal comparative analysis, how can health, safety, and the natural environment be protected? This paper proposes a method for improving the Nation's approach to this problem. The proposal essentially is that health and the environment should be considered as constraints on the deployment of energy technologies, constraints that are embodied in Government regulations. Whichever technologies can function within these constraints should then compete among themselves. This competition should be based on market factors like cost and efficiency and on political factors like national security and questions of equity.

  8. Carbapenem inactivation: a very affordable and highly specific method for phenotypic detection of carbapenemase-producing Pseudomonas aeruginosa isolates compared with other methods.

    Science.gov (United States)

    Akhi, Mohammad Taghi; Khalili, Younes; Ghotaslou, Reza; Kafil, Hossein Samadi; Yousefi, Saber; Nagili, Behroz; Goli, Hamid Reza

    2017-06-01

    This investigation was undertaken to compare phenotypic and molecular methods for the detection of carbapenemase-producing Pseudomonas aeruginosa. A total of 245 non-duplicated isolates of P. aeruginosa were collected from hospitalized patients. The disc diffusion method was used to identify carbapenem-resistant bacteria. Three phenotypic methods, including the Modified Hodge Test (MHT), the Modified Carba NP (MCNP) test and the Carbapenem Inactivation Method (CIM), were used for investigation of carbapenemase production. In addition, polymerase chain reaction (PCR) was used to detect carbapenemase-encoding genes. Of the 245 P. aeruginosa isolates investigated, 121 were carbapenem-resistant. Among the carbapenem-resistant isolates, 40, 39 and 35 isolates exhibited positive results using MHT, the MCNP test and CIM, respectively. PCR indicated the presence of carbapenemase genes in 35 of the carbapenem-resistant isolates. MHT showed low sensitivity and specificity for carbapenemase detection among P. aeruginosa isolates in comparison to PCR. CIM was the most affordable method and more specific than the MCNP test when compared with the molecular method.

  9. Proposal on concept of security of energy supply with nuclear energy

    International Nuclear Information System (INIS)

    Ujita, Hiroshi; Matsui, Kazuaki; Yamada, Eiji

    2009-01-01

    Security of energy supply (SoS) was a major concern for OECD governments in the early 1970s. Since then, successive oil crises, volatility of hydrocarbon prices, as well as terrorist risks and natural disasters, have brought the issue back to the centre stage of policy agendas. An SoS concept is proposed here which is defined by a time frame as well as a space frame. SoS in the wide sense consists of SoS in the narrow sense, the short-term energy crisis, which is the traditional concept, and the long-term global energy problem, which has become important recently. Three models have been proposed here for evaluating SoS. A method to estimate the energy security level in a quantitative manner by comparison with various measures has also been proposed, in which the contribution of nuclear energy to SoS can be further measured. (author)

  10. Comparative analysis of approximations used in the methods of Faddeev equations and hyperspherical harmonics

    International Nuclear Information System (INIS)

    Mukhtarova, M.I.

    1988-01-01

    A comparative analysis of the approximations used in the methods of Faddeev equations and hyperspherical harmonics (MHH) was conducted. The differences between the solutions of these methods, related to the introduction of the approximation of a sufficient number of partial states into the three-nucleon problem, are shown. The MHH method is preferred. It is shown that the advantage of the MHH method can be manifested clearly when studying new classes of interactions: three-particle, Δ-isobar, nonlocal and other interactions.

  11. Which missing value imputation method to use in expression profiles: a comparative study and two selection schemes

    Directory of Open Access Journals (Sweden)

    Lotz Meredith J

    2008-01-01

    Full Text Available Abstract. Background: Gene expression data frequently contain missing values; however, most down-stream analyses for microarray experiments require complete data. In the literature many methods have been proposed to estimate missing values via information of the correlation patterns within the gene expression matrix. Each method has its own advantages, but the specific conditions for which each method is preferred remain largely unclear. In this report we describe an extensive evaluation of eight current imputation methods on multiple types of microarray experiments, including time series, multiple exposures, and multiple exposures × time series data. We then introduce two complementary selection schemes for determining the most appropriate imputation method for any given data set. Results: We found that the optimal imputation algorithms (LSA, LLS, and BPCA) are all highly competitive with each other, and that no method is uniformly superior in all the data sets we examined. The success of each method can also depend on the underlying "complexity" of the expression data, where we take complexity to indicate the difficulty in mapping the gene expression matrix to a lower-dimensional subspace. We developed an entropy measure to quantify the complexity of expression matrixes and found that, by incorporating this information, the entropy-based selection (EBS) scheme is useful for selecting an appropriate imputation algorithm. We further propose a simulation-based self-training selection (STS) scheme. This technique has been used previously for microarray data imputation, but for different purposes. The scheme selects the optimal or near-optimal method with high accuracy but at an increased computational cost. Conclusion: Our findings provide insight into the problem of which imputation method is optimal for a given data set. Three top-performing methods (LSA, LLS and BPCA) are competitive with each other. Global-based imputation methods (PLS, SVD, BPCA

  12. Which missing value imputation method to use in expression profiles: a comparative study and two selection schemes.

    Science.gov (United States)

    Brock, Guy N; Shaffer, John R; Blakesley, Richard E; Lotz, Meredith J; Tseng, George C

    2008-01-10

    Gene expression data frequently contain missing values; however, most down-stream analyses for microarray experiments require complete data. In the literature many methods have been proposed to estimate missing values via information of the correlation patterns within the gene expression matrix. Each method has its own advantages, but the specific conditions for which each method is preferred remain largely unclear. In this report we describe an extensive evaluation of eight current imputation methods on multiple types of microarray experiments, including time series, multiple exposures, and multiple exposures × time series data. We then introduce two complementary selection schemes for determining the most appropriate imputation method for any given data set. We found that the optimal imputation algorithms (LSA, LLS, and BPCA) are all highly competitive with each other, and that no method is uniformly superior in all the data sets we examined. The success of each method can also depend on the underlying "complexity" of the expression data, where we take complexity to indicate the difficulty in mapping the gene expression matrix to a lower-dimensional subspace. We developed an entropy measure to quantify the complexity of expression matrixes and found that, by incorporating this information, the entropy-based selection (EBS) scheme is useful for selecting an appropriate imputation algorithm. We further propose a simulation-based self-training selection (STS) scheme. This technique has been used previously for microarray data imputation, but for different purposes. The scheme selects the optimal or near-optimal method with high accuracy but at an increased computational cost. Our findings provide insight into the problem of which imputation method is optimal for a given data set. Three top-performing methods (LSA, LLS and BPCA) are competitive with each other. Global-based imputation methods (PLS, SVD, BPCA) performed better on microarray data with lower complexity
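
    The evaluation loop used to score imputation methods (hide known entries, impute them, and measure the error) is easy to reproduce. The sketch below uses scikit-learn's KNNImputer purely as a stand-in, since LSA, LLS, and BPCA have no stock scikit-learn implementations; the data and missingness pattern are synthetic.

```python
import numpy as np
from sklearn.impute import KNNImputer

rng = np.random.default_rng(0)
expr = rng.normal(size=(50, 20))          # genes x arrays (toy data)
mask = rng.random(expr.shape) < 0.05      # ~5% entries hidden as "missing"
observed = expr.copy()
observed[mask] = np.nan

imputed = KNNImputer(n_neighbors=10).fit_transform(observed)
rmse = np.sqrt(np.mean((imputed[mask] - expr[mask]) ** 2))
print(f"RMSE on held-out entries: {rmse:.3f}")
```

    Masking known entries and scoring candidate methods this way is essentially the simulation-based self-training selection (STS) idea described above.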

  13. Comparative Studies of Core Thermal Hydraulic Design Methods for the Prototype Sodium Cooled Fast Reactor

    International Nuclear Information System (INIS)

    Choi, Sun Rock; Lim, Jae Yong; Kim, Sang Ji

    2013-01-01

    In this work, various core thermal-hydraulic design methods, which have arisen during the development of a prototype SFR, are compared to establish a proper design procedure. Comparative studies have been performed to determine the appropriate design method for the prototype SFR. The results show that the minimization method yields a lower cladding midwall temperature than the fixed outlet temperature methods and a superior thermal safety margin with the same coolant flow. The Korea Atomic Energy Research Institute (KAERI) has performed a conceptual SFR design with the final goal of constructing a prototype plant by 2028. The main objective of the SFR prototype plant is to verify the TRU metal fuel performance, reactor operation, and transmutation ability of high-level wastes. The core thermal-hydraulic design is used to ensure safe fuel performance during the whole plant operation. In contrast to the critical heat flux in typical light water reactors, nuclear fuel damage in SFR subassemblies arises from creep-induced failure. The creep limit is evaluated based on both the maximum cladding temperature and the uncertainties of the design parameters. Therefore, the core thermal-hydraulic design method, which eventually determines the cladding temperature, is highly important for assuring safe and reliable operation of the reactor systems.

  14. Thresholding methods for PET imaging: A review

    International Nuclear Information System (INIS)

    Dewalle-Vignion, A.S.; Betrouni, N.; Huglo, D.; Vermandel, M.; Dewalle-Vignion, A.S.; Hossein-Foucher, C.; Huglo, D.; Vermandel, M.; Dewalle-Vignion, A.S.; Hossein-Foucher, C.; Huglo, D.; Vermandel, M.; El Abiad, A.

    2010-01-01

    This work deals with positron emission tomography segmentation methods for tumor volume determination. We present a state of the art of techniques based on fixed or adaptive thresholds. Methods found in the literature are analysed from an objective point of view with respect to their methodology, advantages and limitations. Finally, a comparative study is presented. (authors)
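
    A fixed threshold of this kind reduces to a comparison over the uptake volume, with adaptive variants adjusting the level for background. The sketch below is an illustrative rule only; published adaptive methods differ in their exact formulas.

```python
import numpy as np

def threshold_segment(volume, fraction=0.40, background=None):
    """Keep voxels above a fraction of the lesion's maximum uptake.
    If `background` is given, threshold at background plus a fraction
    of (max - background) instead (a simple adaptive variant)."""
    peak = volume.max()
    if background is None:
        level = fraction * peak
    else:
        level = background + fraction * (peak - background)
    return volume >= level

# Toy uptake map: a hot "lesion" on a warm background.
img = np.full((64, 64), 2.0)
yy, xx = np.mgrid[:64, :64]
img[(yy - 32) ** 2 + (xx - 32) ** 2 < 36] = 10.0
print(threshold_segment(img).sum(), "voxels in the fixed-threshold mask")
```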

  15. Comparison between Two Linear Supervised Learning Machines' Methods with Principle Component Based Methods for the Spectrofluorimetric Determination of Agomelatine and Its Degradants.

    Science.gov (United States)

    Elkhoudary, Mahmoud M; Naguib, Ibrahim A; Abdel Salam, Randa A; Hadad, Ghada M

    2017-05-01

    Four accurate, sensitive and reliable stability-indicating chemometric methods were developed for the quantitative determination of Agomelatine (AGM), whether in pure form or in pharmaceutical formulations. Two supervised learning machine methods, linear artificial neural networks preceded by principal component analysis (PC-linANN) and linear support vector regression (linSVR), were compared with two principal-component-based methods, principal component regression (PCR) and partial least squares (PLS), for the spectrofluorimetric determination of AGM and its degradants. The results showed the benefits of using linear learning machine methods and the inherent merits of their algorithms in handling overlapped noisy spectral data, especially during the challenging determination of AGM alkaline and acidic degradants (DG1 and DG2). Relative mean squared errors of prediction (RMSEP) for the proposed models in the determination of AGM were 1.68, 1.72, 0.68 and 0.22 for PCR, PLS, linSVR and PC-linANN, respectively. The results showed the superiority of supervised learning machine methods over principal-component-based methods. Besides, the results suggested that linANN is the method of choice for determination of components in low amounts with similar overlapped spectra and a narrow linearity range. Comparison between the proposed chemometric models and a reported HPLC method revealed the comparable performance and quantification power of the proposed models.

  16. On an image reconstruction method for ECT

    Science.gov (United States)

    Sasamoto, Akira; Suzuki, Takayuki; Nishimura, Yoshihiro

    2007-04-01

    An image produced by Eddy Current Testing (ECT) is a blurred image of the original flaw shape. In order to reconstruct a fine flaw image, a new image reconstruction method has been proposed. This method is based on the assumption that a very simple relationship between the measured data and the source can be described by a convolution of a response function and the flaw shape. This assumption leads to a simple inverse analysis method using deconvolution. In this method, the Point Spread Function (PSF) and Line Spread Function (LSF) play a key role in the deconvolution processing. This study proposes a simple data processing procedure to determine the PSF and LSF from ECT data of a machined hole and a line flaw. In order to verify its validity, ECT data for a SUS316 plate (200x200x10 mm) with an artificial machined hole and a notch flaw were acquired by differential coil type sensors (produced by ZETEC Inc). Those data were analyzed by the proposed method. The proposed method restored a sharp image of discrete multiple holes from data in which the responses of the holes interfered, and the estimated width of the line flaw was much improved compared with the original experimental data. Although the proposed inverse analysis strategy is simple and easy to implement, its validity for holes and line flaws has been shown by many results in which a much finer image than the original was reconstructed.
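
    Under the stated convolution model, the flaw image can be restored by regularized (Wiener-type) deconvolution. The numpy sketch below demonstrates the principle on a synthetic 1-D scan; the regularization constant and the Gaussian LSF are assumptions, not the paper's measured spread functions.

```python
import numpy as np

def wiener_deconvolve(measured, psf, k=1e-2):
    """Frequency-domain deconvolution of measured = psf (*) flaw.
    The constant k regularizes the inverse filter against noise."""
    n = len(measured)
    H = np.fft.rfft(psf, n)
    G = np.conj(H) / (np.abs(H) ** 2 + k)
    return np.fft.irfft(np.fft.rfft(measured) * G, n)

# Demo: a two-notch "flaw" blurred by a Gaussian LSF, then restored.
x = np.arange(256)
flaw = ((x > 80) & (x < 90)).astype(float) + ((x > 110) & (x < 120))
lsf = np.exp(-0.5 * ((x - 12) / 6.0) ** 2)
lsf /= lsf.sum()
blurred = np.fft.irfft(np.fft.rfft(flaw) * np.fft.rfft(lsf, 256), 256)
restored = wiener_deconvolve(blurred, lsf)
print("peak of restored flaw:", round(float(restored.max()), 3))
```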

  17. Methods to establish flaw tolerances

    International Nuclear Information System (INIS)

    Varga, T.

    1978-01-01

    Three conventional methods used to establish flaw tolerances are compared with new approaches using fracture mechanics. The conventional methods are those based on (a) non-destructive testing methods; (b) fabrication and quality assurance experience; and (c) service and damage experience. Prerequisites of fracture mechanics methods are outlined, and summaries are given of linear elastic fracture mechanics (LEFM) and elastoplastic fracture mechanics (EPFM). The latter includes discussion of C.O.D. (crack opening displacement), the J-integral, and equivalent energy. Proposals are made for establishing flaw tolerances. (U.K.)

  18. [A study for testing the antifungal susceptibility of yeast by the Japanese Society for Medical Mycology (JSMM) method. The proposal of the modified JSMM method 2009].

    Science.gov (United States)

    Nishiyama, Yayoi; Abe, Michiko; Ikeda, Reiko; Uno, Jun; Oguri, Toyoko; Shibuya, Kazutoshi; Maesaki, Shigefumi; Mohri, Shinobu; Yamada, Tsuyoshi; Ishibashi, Hiroko; Hasumi, Yayoi; Abe, Shigeru

    2010-01-01

    In the Japanese Society for Medical Mycology (JSMM) method for testing the antifungal susceptibility of yeast, the MIC end point for azole antifungal agents is currently set at IC80. It was recently shown, however, that there is an inconsistency in MIC values between the JSMM method and the CLSI M27-A2 (CLSI) method, in which the end point is read as IC50. To resolve this discrepancy and reassess the JSMM method, the MICs of three azoles (fluconazole, itraconazole and voriconazole) were compared for 5 strains of each of the following Candida species: C. albicans, C. glabrata, C. tropicalis, C. parapsilosis and C. krusei, for a total of 25 comparisons, using the JSMM method, a modified JSMM method, and the CLSI method. The results showed that when the MIC end point criterion of the JSMM method was changed from IC80 to IC50 (the modified JSMM method), the MIC values were consistent and compatible with the CLSI method. Finally, it should be emphasized that the JSMM method, which uses a spectrophotometer for MIC measurement, was superior in both stability and reproducibility compared to the CLSI method, in which growth is assessed by visual observation.

  19. New component-based normalization method to correct PET system models

    International Nuclear Information System (INIS)

    Kinouchi, Shoko; Miyoshi, Yuji; Suga, Mikio; Yamaya, Taiga; Yoshida, Eiji; Nishikido, Fumihiko; Tashima, Hideaki

    2011-01-01

    Normalization correction is necessary to obtain high-quality reconstructed images in positron emission tomography (PET). There are two basic types of normalization methods: the direct method and component-based methods. The former suffers from the problem that a huge count number in the blank scan data is required, so the latter methods have been proposed to obtain normalization coefficients of high statistical accuracy with a small count number in the blank scan data. In iterative image reconstruction methods, on the other hand, the quality of the obtained reconstructed images depends on the accuracy of the system model. Therefore, the normalization weighting approach, in which normalization coefficients are applied directly to the system matrix instead of to a sinogram, has been proposed. In this paper, we propose a new component-based normalization method to correct system model accuracy. In the proposed method, two components are defined and calculated iteratively in such a way as to minimize the errors of system modeling. To compare the proposed method and the direct method, we applied both methods to our small OpenPET prototype system. We achieved acceptable statistical accuracy of the normalization coefficients while reducing the count number of the blank scan data to one-fortieth of that required in the direct method. (author)

  20. Proposal for evaluation methodology on impact resistant performance and construction method of tornado missile protection net structure

    International Nuclear Information System (INIS)

    Namba, Kosuke; Shirai, Koji

    2014-01-01

    In nuclear power plants, the need for tornado missile protection structures is becoming a key technical issue. Utilization of a net structure seems to be one of the realistic countermeasures from the point of view of mitigating wind and seismic loads. However, the methodology for selecting suitable net materials, the energy absorption design method, and the construction method are not sufficiently established. In this report, three materials (high-strength metal mesh, super-strong polyethylene fiber net, and steel grating) were selected as candidate materials and subjected to material screening tests, energy absorption tests by free drop of a heavy weight, and impact tests with a small-diameter missile. As a result, high-strength metal mesh was selected as a suitable material for a tornado missile protection net structure. Moreover, a construction method to obtain good energy absorption performance of the material and a practical design method to estimate the energy absorption of the high-strength metal mesh under tornado missile impact load were proposed. (author)

  1. Truss Structure Optimization with Subset Simulation and Augmented Lagrangian Multiplier Method

    Directory of Open Access Journals (Sweden)

    Feng Du

    2017-11-01

    Full Text Available This paper presents a global optimization method for structural design optimization, which integrates subset simulation optimization (SSO and the dynamic augmented Lagrangian multiplier method (DALMM. The proposed method formulates the structural design optimization as a series of unconstrained optimization sub-problems using DALMM and makes use of SSO to find the global optimum. The combined strategy guarantees that the proposed method can automatically detect active constraints and provide global optimal solutions with finite penalty parameters. The accuracy and robustness of the proposed method are demonstrated by four classical truss sizing problems. The results are compared with those reported in the literature, and show a remarkable statistical performance based on 30 independent runs.
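
    The decomposition into unconstrained sub-problems can be sketched as follows. A local optimizer stands in for the subset-simulation global search, the multiplier update is the standard rule for an inequality constraint, and the toy sizing problem is illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, g, x0, lam=0.0, r=10.0, iters=20):
    """Augmented-Lagrangian loop for one inequality constraint g(x) <= 0:
    L(x) = f(x) + (1/(2r)) * (max(0, lam + r*g(x))^2 - lam^2),
    with the update lam <- max(0, lam + r*g(x)) after each sub-problem."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        L = lambda z: f(z) + 0.5 / r * (max(0.0, lam + r * g(z)) ** 2 - lam ** 2)
        x = minimize(L, x, method="Nelder-Mead").x
        lam = max(0.0, lam + r * g(x))
    return x, lam

# Toy sizing problem: minimize "weight" x1 + x2 subject to x1 * x2 >= 1.
f = lambda z: z[0] + z[1]
g = lambda z: 1.0 - z[0] * z[1]
x, lam = augmented_lagrangian(f, g, x0=[2.0, 2.0])
print(x.round(3), "multiplier:", round(lam, 3))   # expect x ~ [1, 1], lam ~ 1
```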

  2. Comparing methods of determining Legionella spp. in complex water matrices.

    Science.gov (United States)

    Díaz-Flores, Álvaro; Montero, Juan Carlos; Castro, Francisco Javier; Alejandres, Eva María; Bayón, Carmen; Solís, Inmaculada; Fernández-Lafuente, Roberto; Rodríguez, Guillermo

    2015-04-29

    Legionella testing conducted at environmental laboratories plays an essential role in assessing the risk of disease transmission associated with water systems. However, drawbacks of the culture-based methodology used for Legionella enumeration can have a great impact on the results and their interpretation, which together can lead to underestimation of the actual risk. Up to 20% of the samples analysed by these laboratories produce inconclusive results, making effective risk management impossible. Overgrowth of competing microbiota has been reported as an important factor in culture failure. For quantitative polymerase chain reaction (qPCR), the interpretation of results from environmental samples still remains a challenge; inhibitors may cause up to 10% of inconclusive results. This study compared a quantitative method based on immunomagnetic separation (IMS method) with culture and qPCR as a new approach to routine monitoring of Legionella. First, pilot studies evaluated the recovery and detectability of Legionella spp. using the IMS method in the presence of microbiota and biocides. The IMS method results were not affected by microbiota, while culture counts were significantly reduced (1.4 log) or negative in the same samples. Damage by biocides to viable Legionella was detected by the IMS method. Secondly, a total of 65 water samples were assayed by all three techniques (culture, qPCR and the IMS method). Of these, 27 (41.5%) were recorded as positive by at least one test. Legionella spp. was detected by culture in 7 (25.9%) of the 27 samples. Eighteen (66.7%) of the 27 samples were positive by the IMS method, thirteen of them reporting counts below 10^3 colony forming units per liter (CFU l^-1), six presented interfering microbiota and three presented PCR inhibition. Of the 65 water samples, 24 presented interfering microbiota by culture and 8 presented partial or complete inhibition of the PCR reaction. So the rate of inconclusive results of culture and PCR was 36

  3. A Method for Automatic Image Rectification and Stitching for Vehicle Yaw Marks Trajectory Estimation

    Directory of Open Access Journals (Sweden)

    Vidas Žuraulis

    2016-02-01

    Full Text Available The aim of this study has been to propose a new method for automatic rectification and stitching of images taken at the accident site. The proposed method does not require any measurements to be performed on the accident site and is thus free of measurement errors. An experimental investigation was performed in order to compare the vehicle trajectory estimated from the yaw marks in the stitched image with the trajectory reconstructed using GPS data. The overall mean error of the trajectory reconstruction produced by the method proposed in this paper was 0.086 m, which is only 0.18% of the whole trajectory length.
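
    The stitching core rests on estimating a homography between overlapping photographs. The OpenCV sketch below shows a generic ORB + RANSAC pipeline; the automatic rectification step from the paper is not reproduced, and the file names in the usage line are hypothetical.

```python
import cv2
import numpy as np

def stitch_pair(img1, img2):
    """Warp img2 into img1's frame with an ORB + RANSAC homography and
    overlay it (naive blending, for illustration only)."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]
    src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = img1.shape[:2]
    canvas = cv2.warpPerspective(img2, H, (2 * w, h))
    canvas[:h, :w] = img1
    return canvas

# Usage (hypothetical file names):
# pano = stitch_pair(cv2.imread("scene_a.jpg"), cv2.imread("scene_b.jpg"))
```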

  4. Comparative Study of Two Daylighting Analysis Methods with Regard to Window Orientation and Interior Wall Reflectance

    Directory of Open Access Journals (Sweden)

    Yeo Beom Yoon

    2014-09-01

    Full Text Available The accuracy and speed of the daylighting analysis developed for use in EnergyPlus are better than those of its predecessors. In EnergyPlus, the detailed method uses the Split-flux algorithm, whereas the DElight method uses the Radiosity algorithm. Many existing studies have addressed the two methods, either individually or in comparison with other daylight analysis methods like ray tracing, but there is still a lack of detailed comparative study of these two methods. Our previous studies show that the Split-flux method overestimates the illuminance, especially for areas away from the window. The Radiosity method has the advantage of accurately predicting this illuminance because of how it deals with diffuse light. For this study, the EnergyPlus model, which had been calibrated using data measured in a real building in previous studies, has also been used. The calibrated model has a south-oriented window only. This model is then used to analyze the interior illuminance inside the room for north, west and east orientations of the window by rotating the model, and by changing the wall reflectance of the model with the south-oriented window. The direct and diffuse components of the illuminance, as well as the algorithms, have been compared for a detailed analysis.

  5. A proposed through-flow inverse method for the design of mixed-flow pumps

    Science.gov (United States)

    Borges, Joao Eduardo

    1991-01-01

    A through-flow (hub-to-shroud) truly inverse method is proposed and described. It uses an imposition of mean swirl, i.e., radius times mean tangential velocity, given throughout the meridional section of the turbomachine as an initial design specification. In the present implementation, it is assumed that the fluid is inviscid, incompressible, and irrotational at inlet, and that the blades have zero thickness. Only blade rows that impart a spatially constant work to the fluid are considered. An application of this procedure to the design of the rotor of a mixed-flow pump is described in detail. The strategy used to find a suitable mean swirl distribution and the other design inputs is also described. The final blade shape and the pressure distributions on the blade surface are presented, showing that it is possible to obtain feasible designs using this technique. Another advantage of this technique is that it does not require large amounts of CPU time.

  6. An Enhanced Run-Length Encoding Compression Method for Telemetry Data

    Directory of Open Access Journals (Sweden)

    Shan Yanhu

    2017-09-01

    Full Text Available Telemetry data are essential in evaluating the performance of an aircraft and diagnosing its failures. This work combines oversampling technology with a run-length encoding compression algorithm incorporating an error factor to further enhance the compression performance of telemetry data in a multichannel acquisition system. Compression of the telemetry data is carried out with the use of FPGAs. Pulse signals and vibration signals are used in the experiments. The proposed method is compared with two existing methods. The experimental results indicate that the compression ratio, precision, and distortion degree of the telemetry data are improved significantly compared with those obtained by the existing methods. The implementation and measurement of the proposed telemetry data compression method show its effectiveness when used in a high-precision, high-capacity multichannel acquisition system.
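    As a rough illustration of the error-factor idea in run-length encoding, the sketch below merges consecutive samples that stay within a tolerance of the run's reference value; the function names and the drifting test signal are illustrative assumptions, not the paper's FPGA implementation.

```python
def rle_compress(samples, eps=0.0):
    """Run-length encode a sequence, merging samples that differ from the
    run's reference value by at most eps (the 'error factor')."""
    if not samples:
        return []
    runs = []
    ref, count = samples[0], 1
    for x in samples[1:]:
        if abs(x - ref) <= eps:
            count += 1                 # lossy merge: x is close enough to the run
        else:
            runs.append((ref, count))
            ref, count = x, 1
    runs.append((ref, count))
    return runs

def rle_decompress(runs):
    return [v for v, n in runs for _ in range(n)]

# A slowly drifting channel: exact RLE (eps=0) finds no runs at all,
# while a small error factor collapses it to a handful of pairs.
signal = [100 + 0.01 * i for i in range(1000)]
print(len(rle_compress(signal, eps=0.0)))   # 1000 runs
print(len(rle_compress(signal, eps=0.5)))   # roughly 10 runs
ratio = len(signal) / len(rle_compress(signal, eps=0.5))
print(f"compression ratio ~ {ratio:.1f}")
```

    The error factor trades reconstruction precision for compression ratio, which matches the paper's reported trade-off between compression ratio and distortion degree.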

  7. Comparing interactive videodisc training effectiveness to traditional training methods

    International Nuclear Information System (INIS)

    Kenworthy, N.W.

    1987-01-01

    Videodisc skills training programs developed by Industrial Training Corporation are being used and evaluated by major industrial facilities. In one such study, interactive videodisc training programs were compared to videotape and instructor-based training to determine the effectiveness of videodisc in terms of performance, training time and trainee attitudes. Results showed that when initial training was done using the interactive videodisc system, trainee performance was superior to the performance of trainees using videotape, and approximately equal to the performance of those trained by an instructor. When each method was used in follow-up training, interactive videodisc was definitely the most effective. Results also indicate that training time can be reduced using interactive videodisc. Attitudes of both trainees and instructors toward the interactive videodisc training were positive

  8. Measuring larval nematode contamination on cattle pastures: Comparing two herbage sampling methods.

    Science.gov (United States)

    Verschave, S H; Levecke, B; Duchateau, L; Vercruysse, J; Charlier, J

    2015-06-15

    Assessing levels of pasture larval contamination is frequently used to study the population dynamics of the free-living stages of parasitic nematodes of livestock. Direct quantification of infective larvae (L3) on herbage is the most widely applied method to measure pasture larval contamination. However, herbage collection remains labour intensive and there is a lack of studies addressing the variation induced by the sampling method and the required sample size. The aims of this study were (1) to compare two different sampling methods in terms of pasture larval count results and the time required to sample, (2) to assess the amount of variation in larval counts at the level of sample plot, pasture and season, respectively, and (3) to calculate the sample size required to assess pasture larval contamination with a predefined precision using random plots across pasture. Eight young stock pastures of different commercial dairy herds were sampled in three consecutive seasons during the grazing season (spring, summer and autumn). On each pasture, herbage samples were collected both through a double-crossed W-transect with samples taken every 10 steps (method 1) and through four randomly located plots of 0.16 m² with collection of all herbage within the plot (method 2). The average (± standard deviation (SD)) pasture larval contamination using sampling methods 1 and 2 was 325 (± 479) and 305 (± 444) L3/kg dry herbage (DH), respectively. Large discrepancies in pasture larval counts of the same pasture and season were often seen between methods, but no significant difference (P = 0.38) in larval counts between methods was found. Less time was required to collect samples with method 2. This difference in collection time between methods was most pronounced for pastures with a surface area larger than 1 ha. The variation in pasture larval counts from samples generated by random plot sampling was mainly due to the repeated measurements on the same pasture in the same season (residual variance

  9. Comparative study among calibration methods of clinical applicators of beta radiation

    International Nuclear Information System (INIS)

    Antonio, Patricia de Lara

    2009-01-01

    ⁹⁰Sr+⁹⁰Y clinical applicators are instruments used in brachytherapy procedures, and they have to be calibrated periodically, according to international standards and recommendations. In this work, four calibration methods for dermatological and ophthalmic applicators were studied, comparing the results with those given by the calibration certificates of the manufacturers. The methods included the use of the standard applicator of the Calibration Laboratory (LCI), calibrated by the National Institute of Standards and Technology; an Amersham applicator (LCI) as reference; a mini-extrapolation chamber developed at LCI as an absolute standard; and thermoluminescent dosimetry. The mini-extrapolation chamber and a PTW commercial extrapolation chamber were studied in relation to their performance through quality control tests of their response, such as leakage current, repeatability and reproducibility. The depth-dose distribution in water, which is of great importance in the dosimetry of clinical applicators, was determined using the mini-extrapolation chamber and the thermoluminescent dosimeters. The results obtained were considered satisfactory in both cases, and comparable to the data of the IAEA (2002) standard. Furthermore, a postal dosimetry kit was developed for the calibration of clinical applicators using the thermoluminescent technique, to be sent to clinics and hospitals, without the need to transport the sources to IPEN for calibration. (author)

  10. Comparative efficiency of different methods of gluten extraction in indigenous varieties of wheat

    OpenAIRE

    Imran, Samra; Hussain, Zaib; Ghafoor, Farkhanda; Ahmad Nagra, Saeed; Ashbeal Ziai, Naheeda

    2013-01-01

    The present study investigated six varieties of locally grown wheat (Lasani, Sehar, Miraj-08, Chakwal-50, Faisalabad-08 and Inqlab) procured from Punjab Seed Corporation, Lahore, Pakistan for their proximate contents. On the basis of protein content and ready availability, Faisalabad-08 (FD-08) was selected to be used for the assessment of comparative efficiency of various methods used for gluten extraction. Three methods, mechanical, chemical and microbiological were used for the extraction ...

  11. Calculating regional tissue volume for hyperthermic isolated limb perfusion: Four methods compared.

    Science.gov (United States)

    Cecchin, D; Negri, A; Frigo, A C; Bui, F; Zucchetta, P; Bodanza, V; Gregianin, M; Campana, L G; Rossi, C R; Rastrelli, M

    2016-12-01

    Hyperthermic isolated limb perfusion (HILP) can be performed as an alternative to amputation for soft tissue sarcomas and melanomas of the extremities. Melphalan and tumor necrosis factor-alpha are used at a dosage that depends on the volume of the limb. For the purposes of HILP, regional tissue volume is traditionally measured using water displacement volumetry (WDV). Although this technique is considered the gold standard, it is time-consuming and complicated to implement, especially in obese and elderly patients. The aim of the present study was to compare the different methods described in the literature for calculating regional tissue volume in the HILP setting, and to validate an open source software. We reviewed the charts of 22 patients (11 males and 11 females) who had non-disseminated melanoma with in-transit metastases or sarcoma of the lower limb. We calculated the volume of the limb using four different methods: WDV, tape measurements, and segmentation of computed tomography images using the Osirix and Oncentra Masterplan software packages. The overall comparison provided a concordance correlation coefficient (CCC) of 0.92 for the calculations of whole limb volume. In particular, when Osirix was compared with Oncentra (validated for volume measures and used in radiotherapy), the concordance was near-perfect for the calculation of the whole limb volume (CCC = 0.99). With CT-based methods the user can choose a reliable plane for segmentation purposes. CT-based methods also provide the opportunity to separate the whole limb volume into defined tissue volumes (cortical bone, fat and water).

  12. Statistical methods of evaluating and comparing imaging techniques

    International Nuclear Information System (INIS)

    Freedman, L.S.

    1987-01-01

    Over the past 20 years several new methods of generating images of internal organs and the anatomy of the body have been developed and used to enhance the accuracy of diagnosis and treatment. These include ultrasonic scanning, radioisotope scanning, computerised X-ray tomography (CT) and magnetic resonance imaging (MRI). The new techniques have made a considerable impact on radiological practice in hospital departments, not least on the investigational process for patients suspected or known to have malignant disease. As a consequence of the increased range of imaging techniques now available, there has developed a need to evaluate and compare their usefulness. Over the past 10 years formal studies of the application of imaging technology have been conducted and many reports have appeared in the literature. These studies cover a range of clinical situations. Likewise, the methodologies employed for evaluating and comparing the techniques in question have differed widely. While not attempting an exhaustive review of the clinical studies which have been reported, this paper aims to examine the statistical designs and analyses which have been used. First a brief review of the different types of study is given. Examples of each type are then chosen to illustrate statistical issues related to their design and analysis. In the final sections it is argued that a form of classification for these different types of study might be helpful in clarifying relationships between them and bringing a perspective to the field. A classification based upon a limited analogy with clinical trials is suggested

  13. Phylogenetic comparative methods on phylogenetic networks with reticulations.

    Science.gov (United States)

    Bastide, Paul; Solís-Lemus, Claudia; Kriebel, Ricardo; Sparks, K William; Ané, Cécile

    2018-04-25

    The goal of Phylogenetic Comparative Methods (PCMs) is to study the distribution of quantitative traits among related species. The observed traits are often seen as the result of a Brownian Motion (BM) along the branches of a phylogenetic tree. Reticulation events such as hybridization, gene flow or horizontal gene transfer can substantially affect a species' traits, but are not modeled by a tree. Phylogenetic networks have been designed to represent reticulate evolution. As they become available for downstream analyses, new models of trait evolution are needed, applicable to networks. One natural extension of the BM is to use a weighted average model for the trait of a hybrid at a reticulation point. We develop here an efficient recursive algorithm to compute the phylogenetic variance matrix of a trait on a network, in only one preorder traversal of the network. We then extend the standard PCM tools to this new framework, including phylogenetic regression with covariates (or phylogenetic ANOVA), ancestral trait reconstruction, and Pagel's λ test of phylogenetic signal. The trait of a hybrid is sometimes outside the range of its two parents, for instance because of hybrid vigor or hybrid depression. These two phenomena are rather commonly observed in present-day hybrids. Transgressive evolution can be modeled as a shift in the trait value following a reticulation point. We develop a general framework to handle such shifts, and take advantage of the phylogenetic regression view of the problem to design statistical tests for ancestral transgressive evolution in the evolutionary history of a group of species. We study the power of these tests in several scenarios, and show that recent events indeed have the strongest impact on the trait distribution of present-day taxa. We apply these methods to a dataset of Xiphophorus fishes, to confirm and complete previous analyses of this group. All the methods developed here are available in the Julia package PhyloNetworks.
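    A minimal sketch of the weighted-average idea behind the one-traversal variance computation, assuming a hand-built toy network in preorder and zero-length hybrid edges for brevity; this mirrors the recursion described above but is not the PhyloNetworks implementation.

```python
import numpy as np

# Nodes listed in preorder: (name, parents, branch lengths, inheritance gammas).
# Tree nodes have one parent with gamma = 1; the hybrid node "H" blends two
# parents with gammas 0.6 / 0.4 over (assumed) zero-length hybrid edges.
network = [
    ("root", [],         [],         []),
    ("A",    ["root"],   [1.0],      [1.0]),
    ("B",    ["root"],   [1.0],      [1.0]),
    ("H",    ["A", "B"], [0.0, 0.0], [0.6, 0.4]),   # reticulation point
    ("tip1", ["A"],      [0.5],      [1.0]),
    ("tip2", ["H"],      [0.5],      [1.0]),
    ("tip3", ["B"],      [0.5],      [1.0]),
]

idx = {name: k for k, (name, _, _, _) in enumerate(network)}
n = len(network)
V = np.zeros((n, n))  # BM variance matrix over all nodes

for k, (name, parents, lengths, gammas) in enumerate(network):
    if not parents:
        continue  # root has variance 0
    ps = [idx[p] for p in parents]
    # Covariance with every earlier node (preorder puts parents first):
    # a gamma-weighted average over the parent lineages.
    for j in range(k):
        V[k, j] = V[j, k] = sum(g * V[p, j] for g, p in zip(gammas, ps))
    # Variance: quadratic form in the parents, plus BM noise on the new edges.
    V[k, k] = sum(gi * gj * V[pi, pj]
                  for gi, pi in zip(gammas, ps)
                  for gj, pj in zip(gammas, ps))
    V[k, k] += sum(g**2 * t for g, t in zip(gammas, lengths))

tips = [idx[t] for t in ("tip1", "tip2", "tip3")]
print(V[np.ix_(tips, tips)])  # trait covariance among the tips
```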

  14. Proposal for an alignment method of the CLIC linear accelerator - From geodesic networks to the active pre-alignment

    International Nuclear Information System (INIS)

    Touze, T.

    2011-01-01

    The Compact Linear Collider (CLIC) is the particle accelerator project proposed by the European Organization for Nuclear Research (CERN) for high energy physics after the Large Hadron Collider (LHC). Because of the nanometric scale of the CLIC lepton beams, the emittance growth budget is very tight. It induces alignment tolerances on the positions of the CLIC components that have never been achieved before. The last step of the CLIC alignment will be done according to the beam itself; it falls within the competence of the physicists. However, in order to implement the beam-based feedback, a challenging pre-alignment is required: 10 μm at 3σ along a 200 m sliding window. For such a precision, the proposed solution must be compatible with a feedback between the measurement and repositioning systems: the CLIC pre-alignment will have to be active. This thesis does not demonstrate the feasibility of the CLIC active pre-alignment, but shows the way to the last developments that have to be made for that purpose. A method is proposed. Based on the management of the Helmert transformations between Euclidean coordinate systems, from the geodetic networks to the metrological measurements, this method is likely to solve the CLIC pre-alignment problem. Large scale facilities have been built and Monte-Carlo simulations have been made in order to validate the mathematical modeling of the measurement systems and of the alignment references. When this is done, it will be possible to extrapolate the modeling to the entire CLIC length. It will be the last step towards the demonstration of the CLIC pre-alignment feasibility. (author)

  15. Comparing a single-stage geocoding method to a multi-stage geocoding method: how much and where do they disagree?

    Directory of Open Access Journals (Sweden)

    Rice Kenneth

    2007-03-01

    Full Text Available Background Geocoding methods vary among spatial epidemiology studies. Errors in the geocoding process and differential match rates may reduce study validity. We compared two geocoding methods using 8,157 Washington State addresses. The multi-stage geocoding method implemented by the state health department used a sequence of local and national reference files; the single-stage method used a single national reference file. For each address geocoded by both methods, we measured the distance between the locations assigned by each method. Area-level characteristics were collected from census data and modeled as predictors of the discordance between geocoded address coordinates. Results The multi-stage method had a higher match rate than the single-stage method: 99% versus 95%. Of the 7,686 addresses geocoded by both methods, 96% were geocoded to the same census tract by both methods and 98% were geocoded to locations within 1 km of each other. The distance between geocoded coordinates for the same address was higher in sparsely populated and low-poverty areas, and in counties with local reference files. Conclusion The multi-stage geocoding method had a higher match rate than the single-stage method. An examination of differences in the location assigned to the same address suggested that study results may be most sensitive to the choice of geocoding method in sparsely populated or low-poverty areas.
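    The core comparison step, measuring how far apart the two locations assigned to the same address are, can be sketched as below. The coordinates are made up, and the haversine formula is a standard stand-in for whatever projected-distance computation the study actually used.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude points."""
    r = 6371000.0  # mean Earth radius, metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# (multi-stage lat/lon, single-stage lat/lon) per address -- synthetic data
pairs = [((47.6097, -122.3331), (47.6099, -122.3340)),
         ((47.2529, -122.4443), (47.2601, -122.4300))]
dists = [haversine_m(*a, *b) for a, b in pairs]
within_1km = sum(d <= 1000 for d in dists) / len(dists)
print([round(d, 1) for d in dists], f"{within_1km:.0%} within 1 km")
```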

  16. Comparing the methods plot and point-centered quarter to describe a woody community from typical Cerrado

    Directory of Open Access Journals (Sweden)

    Firmino Cardoso Pereira

    2015-05-01

    Full Text Available This article evaluates the effectiveness of the fixed area plot (AP) and point-centered quarter (PQ) methods for describing a woody community of typical Cerrado. We used 10 APs and 140 PQs, distributed into 5 transects. We compared the density of individuals, floristic composition, richness of families, genera, and species, and the vertical and horizontal vegetation structure. The AP method was more effective for sampling the density of individuals. The PQ method was more effective for characterizing species richness, the vertical vegetation structure, and recording species with low abundance. The composition of families, genera, and species, as well as the species with the highest importance value index in the community, were determined similarly by the 2 methods. The methods compared are complementary. We suggest that the choice of AP, PQ, or both methods should be guided by the vegetation parameter under study.

  17. COMPARING IMAGE-BASED METHODS FOR ASSESSING VISUAL CLUTTER IN GENERALIZED MAPS

    Directory of Open Access Journals (Sweden)

    G. Touya

    2015-08-01

    Full Text Available Map generalization abstracts and simplifies geographic information to derive maps at smaller scales. The automation of map generalization requires techniques to evaluate the global quality of a generalized map. The quality and legibility of a generalized map are related to the complexity of the map, or the amount of clutter in the map, i.e. the excessive amount of information and its disorganization. Computer vision research is highly interested in measuring clutter in images, and this paper compares some of the existing techniques from computer vision when applied to the evaluation of generalized maps. Four techniques from the literature are described and tested on a large set of maps generalized at different scales: edge density, subband entropy, quad tree complexity, and segmentation clutter. The results are analyzed against several criteria related to generalized maps: the identification of cluttered areas, the preservation of the global amount of information, the handling of occlusions and overlaps, foreground vs background, and blank space reduction.
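    Of the four techniques, edge density is the simplest to sketch: the fraction of pixels whose gradient magnitude exceeds a threshold. The version below uses a hand-rolled Sobel filter and a made-up threshold purely to illustrate the measure; it is not the paper's exact implementation.

```python
import numpy as np

def edge_density(img, thresh=0.2):
    """Fraction of pixels whose normalized Sobel gradient magnitude exceeds
    `thresh` -- one of the simplest image-clutter measures."""
    img = img.astype(float)
    img = (img - img.min()) / (np.ptp(img) + 1e-12)  # scale to [0, 1]
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    for i in range(3):          # 3x3 convolution written out explicitly
        for j in range(3):
            block = pad[i:i + img.shape[0], j:j + img.shape[1]]
            gx += kx[i, j] * block
            gy += ky[i, j] * block
    mag = np.hypot(gx, gy)
    mag /= mag.max() + 1e-12
    return float((mag > thresh).mean())

# A busy checkerboard "map" scores much higher than a flat one.
busy = np.indices((64, 64)).sum(0) % 2 * 255
flat = np.full((64, 64), 128)
print(edge_density(busy), edge_density(flat))
```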

  18. An Overview and Comparison of Online Implementable SOC Estimation Methods for Lithium-ion Battery

    DEFF Research Database (Denmark)

    Meng, Jinhao; Ricco, Mattia; Luo, Guangzhao

    2018-01-01

    Many SOC estimation methods have been proposed in the literature. However, only a few of them consider real-time applicability. This paper reviews recently proposed online SOC estimation methods and classifies them into five categories. Their principal features are illustrated, and the main pros and cons are provided. The SOC estimation methods are compared and discussed in terms of accuracy, robustness, and computation burden. Afterward, as the most popular type of model-based SOC estimation algorithms, seven nonlinear filters existing in the literature are compared in terms of their accuracy...

  19. Comparative research of finite element methods for perforated structures of nuclear power plant primary equipment

    International Nuclear Information System (INIS)

    Xiong Guangming; Deng Xiaoyun; Jin Ting

    2013-01-01

    Many perforated structures are used in nuclear power plant primary equipment; they are complex and take various forms. In order to explore analysis and evaluation methods, this paper used the finite element method and an equivalent analytic method to perform a comparative analysis of perforated structures. The paper considered the main influencing factors (including perforation forms, arrangements, etc.), obtaining systematic analysis methods for perforated structures. (authors)

  20. Comparing methods of targeting obesity interventions in populations: An agent-based simulation.

    Science.gov (United States)

    Beheshti, Rahmatollah; Jalalpour, Mehdi; Glass, Thomas A

    2017-12-01

    Social networks as well as neighborhood environments have been shown to affect obesity-related behaviors, including energy intake and physical activity. Accordingly, harnessing social networks to improve the targeting of obesity interventions may be promising, to the extent that this leads to social multiplier effects and wider diffusion of intervention impact on populations. However, the literature evaluating network-based interventions has been inconsistent. Computational methods like agent-based models (ABMs) provide researchers with tools to experiment in a simulated environment. We develop an ABM to compare conventional targeting methods (random selection, selection based on individual obesity risk, and selection of vulnerable areas) with network-based targeting methods. We adapt a previously published and validated model of network diffusion of obesity-related behavior, and then build social networks among agents using a more realistic approach. We first calibrate our model against national-level data. Our results show that network-based targeting may lead to greater population impact. We also present a new targeting method that outperforms the other methods in terms of intervention effectiveness at the population level.
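    A toy version of the targeting comparison, assuming a Barabási-Albert contact network and a crude independent-cascade-style diffusion in place of the authors' calibrated ABM; it only illustrates why network-based (here, degree-based) targeting can outperform random targeting.

```python
import random
import networkx as nx

def spread(g, seeds, p=0.15, steps=20, seed=0):
    """Each step, every active agent activates each inactive neighbour
    with probability p. Returns the final fraction of active agents."""
    rng = random.Random(seed)
    active = set(seeds)
    for _ in range(steps):
        newly = {m for n in active for m in g.neighbors(n)
                 if m not in active and rng.random() < p}
        if not newly:
            break
        active |= newly
    return len(active) / g.number_of_nodes()

# Scale-free contact network; same intervention budget, two targeting rules.
g = nx.barabasi_albert_graph(1000, 3, seed=1)
budget = 20
random_seeds = random.Random(2).sample(list(g), budget)
hub_seeds = sorted(g, key=g.degree, reverse=True)[:budget]
print("random targeting:", spread(g, random_seeds))
print("degree targeting:", spread(g, hub_seeds))
```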

  1. A Comparative study of two RVE modelling methods for chopped carbon fiber SMC

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Zhangxing; Li, Yi; Shao, Yimin; Huang, Tianyu; Xu, Hongyi; Li, Yang; Chen, Wei; Zeng, Danielle; Avery, Katherine; Kang, HongTae; Su, Xuming

    2017-04-06

    To achieve vehicle light-weighting, chopped carbon fiber sheet molding compound (SMC) has been identified as a promising material to replace metals. However, there are no effective tools and methods to predict the mechanical properties of chopped carbon fiber SMC, owing to the high complexity of its microstructure features and its anisotropic properties. In this paper, the Representative Volume Element (RVE) approach is used to model the SMC microstructure. Two modeling methods, the Voronoi diagram-based method and the chip packing method, are developed for material RVE property prediction. The two methods are compared in terms of the predicted elastic modulus, and the predicted results are validated using Digital Image Correlation (DIC) tensile test results. Furthermore, the advantages and shortcomings of these two methods are discussed in terms of the required input information and the convenience of use in the integrated processing-microstructure-property analysis.

  2. The qualitative research proposal

    Directory of Open Access Journals (Sweden)

    H Klopper

    2008-09-01

    Full Text Available Qualitative research in the health sciences has had to overcome many prejudices and a number of misunderstandings, but today qualitative research is as acceptable as quantitative research designs and is widely funded and published. Writing the proposal of a qualitative study, however, can be a challenging feat, due to the emergent nature of the qualitative research design and the description of the methodology as a process. Even today, many sub-standard proposals are still seen by post-graduate evaluation committees and funding bodies. This problem has led the researcher to develop a framework to guide the qualitative researcher in writing the proposal of a qualitative study, based on the following research questions: (i) What is the process of writing a qualitative research proposal? and (ii) What do the structure and layout of a qualitative proposal look like? The purpose of this article is to discuss the process of writing the qualitative research proposal, as well as to describe the structure and layout of a qualitative research proposal. The process of writing a qualitative research proposal is discussed with regard to the most important questions that need to be answered in the proposal, with consideration of the guidelines of being practical, being persuasive, making broader links, aiming for crystal clarity and planning before you write. The structure of the qualitative research proposal is discussed with regard to the key sections of the proposal, namely the cover page, abstract, introduction, review of the literature, research problem and research questions, research purpose and objectives, research paradigm, research design, research method, ethical considerations, dissemination plan, budget and appendices.

  3. Steam leak detection method in pipeline using histogram analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Se Oh; Jeon, Hyeong Seop; Son, Ki Sung; Chae, Gyung Sun [Saean Engineering Corp, Seoul (Korea, Republic of); Park, Jong Won [Dept. of Information Communications Engineering, Chungnam NationalUnversity, Daejeon (Korea, Republic of)

    2015-10-15

    Leak detection in a pipeline usually involves acoustic emission sensors, typically of the contact type. These contact-type sensors pose difficulties for installation and cannot operate in areas with high temperature and radiation. Therefore, many researchers have recently studied leak detection using a camera. Leak detection using a camera has the advantages of long-distance monitoring and wide-area surveillance. However, the conventional leak detection method based on difference images often mistakes the vibration of a structure for a leak. In this paper, we propose a method for steam leakage detection using the moving average of difference images and histogram analysis. The proposed method can separate leakage from the vibration of a structure. The working performance of the proposed method is verified by comparison with experimental results.
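    A rough sketch of the separation idea, assuming grayscale frames as NumPy arrays: structural vibration produces frame-to-frame differences that average out over a moving window, whereas a steam plume keeps adding energy to the same pixels, which shows up in the upper bins of the histogram. All names, thresholds, and the synthetic frames are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def leak_score(frames, window=10, thresh=15):
    """Moving average of successive absolute difference images, followed by
    a histogram-based score: the fraction of pixels whose time-averaged
    difference stays large (persistent change, as a plume would cause)."""
    frames = np.asarray(frames, dtype=float)
    diffs = np.abs(np.diff(frames, axis=0))          # (t-1, h, w)
    kernel = np.ones(window) / window
    avg = np.apply_along_axis(                        # moving average in time
        lambda t: np.convolve(t, kernel, mode="valid"), 0, diffs)
    hist, _ = np.histogram(avg[-1], bins=32, range=(0, 255))
    cut = int(thresh / 255 * 32)                      # first "large" bin
    return hist[cut:].sum() / avg[-1].size

# Synthetic frames: sensor noise everywhere plus a slowly drifting bright blob.
rng = np.random.default_rng(0)
frames = rng.normal(100, 2, (30, 64, 64))
for t in range(30):
    frames[t, 20:30, 20 + t // 3 : 30 + t // 3] += 60  # plume-like region
print(f"leak score: {leak_score(frames):.3f}")
```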

  4. A Legendre Wavelet Spectral Collocation Method for Solving Oscillatory Initial Value Problems

    Directory of Open Access Journals (Sweden)

    A. Karimi Dizicheh

    2013-01-01

    wavelet suitable for large intervals, and then the Legendre-Gauss collocation points of the Legendre wavelet are derived. Using this strategy, the iterative spectral method converts the differential equation into a set of algebraic equations. Solving these algebraic equations yields an approximate solution of the differential equation. The proposed method is illustrated by some numerical examples, and the results are compared with the exponentially fitted Runge-Kutta method. Our proposed method is simple and highly accurate.

  5. New knowledge network evaluation method for design rationale management

    Science.gov (United States)

    Jing, Shikai; Zhan, Hongfei; Liu, Jihong; Wang, Kuan; Jiang, Hao; Zhou, Jingtao

    2015-01-01

    Current design rationale (DR) systems have not demonstrated the value of the approach in practice, since little attention has been paid to methods for evaluating DR knowledge. To systematize the knowledge management process for future computer-aided DR applications, a prerequisite is to provide a measure for DR knowledge. In this paper, a new knowledge network evaluation method for DR management is presented. The method characterizes the value of DR knowledge from four perspectives, namely, the design rationale structure scale, association knowledge and reasoning ability, degree of design justification support, and degree of knowledge representation conciseness. The comprehensive value of DR knowledge is also measured by the proposed method. To validate the proposed method, different styles of DR knowledge networks and the performance of the proposed measure are discussed. The evaluation method has been applied in two realistic design cases and compared with structural measures. The research proposes a DR knowledge evaluation method which can provide an objective metric and a selection basis for DR knowledge reuse during the product design process. In addition, the method is shown to provide more effective guidance and support for the application and management of DR knowledge.

  6. Comparative assessment of supervision and decision-making procedures regarding sustainable development; Evaluation comparee de methodes de controle et de decision en matiere de developpement durable

    Energy Technology Data Exchange (ETDEWEB)

    Carlevaro, F.; Garbely, M.; Genoud, S.

    2002-07-01

    This final report for the Swiss Federal Office of Energy (SFOE) presents the results of a study on the possibilities of establishing a system of indicators that allows the monitoring of sustainable development and its effects, as stipulated in Agenda 21. The report presents the findings of the study on criteria and indicators for sustainability in the energy area. The challenge posed by the synthesis of information from a system of indicators is discussed, and four general approaches are proposed, compared and tested for the monitoring of sustainability in the energy area. These include the calculation of a composite index from several indicators, a similar process that uses statistical methods of dimensional reduction, methods for the measurement of productivity borrowed from economics, and a method for multi-criteria decision-making. Examples of the four approaches are given, and experience gained in their use - partly in other countries and in United Nations agencies - is discussed.

  7. Advances in a framework to compare bio-dosimetry methods for triage in large-scale radiation events

    International Nuclear Information System (INIS)

    Flood, Ann Barry; Boyle, Holly K.; Du, Gaixin; Demidenko, Eugene; Williams, Benjamin B.; Swartz, Harold M.; Nicolalde, Roberto J.

    2014-01-01

    Planning and preparation for a large-scale nuclear event would be advanced by assessing the applicability of potentially available bio-dosimetry methods. Using an updated comparative framework, the performance of six bio-dosimetry methods was compared for five different population sizes (100-1 000 000) and two rates for initiating processing of the marker (15 or 15 000 people per hour), with four additional time windows. These updated factors are extrinsic to the bio-dosimetry methods themselves but have direct effects on each method's ability to begin processing individuals and on the size of the population that can be accommodated. The results indicate that increased population size, along with severely compromised infrastructure, increases the time needed for triage, which decreases the usefulness of many time-intensive dosimetry methods. This framework and model for evaluating bio-dosimetry provide important information for policy-makers and response planners to facilitate evaluation of each method, and should advance coordination of these methods into effective triage plans. (authors)

  8. An Expectation-Maximization Method for Calibrating Synchronous Machine Models

    Energy Technology Data Exchange (ETDEWEB)

    Meng, Da; Zhou, Ning; Lu, Shuai; Lin, Guang

    2013-07-21

    The accuracy of a power system dynamic model is essential to its secure and efficient operation. Lower confidence in model accuracy usually leads to conservative operation and lower asset usage. To improve model accuracy, this paper proposes an expectation-maximization (EM) method to calibrate the synchronous machine model using phasor measurement unit (PMU) data. First, an extended Kalman filter (EKF) is applied to estimate the dynamic states using measurement data. Then, the parameters are calculated based on the estimated states using the maximum likelihood estimation (MLE) method. The EM method iterates over the preceding two steps to improve estimation accuracy. The proposed EM method's performance is evaluated using a single-machine infinite bus system and compared with a method where both states and parameters are estimated using an EKF. Sensitivity studies of the parameter calibration using the EM method are also presented to show the robustness of the proposed method for different levels of measurement noise and initial parameter uncertainty.
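    The iteration can be sketched on a scalar linear-Gaussian stand-in for the machine model: a plain Kalman filter serves as an approximate E-step (the paper's EKF handles the nonlinear machine equations, and a full EM would use a smoother), and a closed-form MLE of the dynamics parameter is the M-step. Everything below is a simplified illustration, not the paper's calibration code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in dynamics: x[k+1] = a*x[k] + w,  measurement y[k] = x[k] + v.
a_true, q, r, n = 0.95, 0.05, 0.2, 500
x = np.empty(n)
x[0] = 1.0
for k in range(1, n):
    x[k] = a_true * x[k - 1] + rng.normal(0, np.sqrt(q))
y = x + rng.normal(0, np.sqrt(r), n)

a_hat = 0.5  # deliberately poor initial parameter guess
for _ in range(20):
    # E-step: Kalman filter under the current parameter estimate
    m, p = y[0], r
    ms = [m]
    for k in range(1, n):
        m_pred, p_pred = a_hat * m, a_hat**2 * p + q
        gain = p_pred / (p_pred + r)
        m = m_pred + gain * (y[k] - m_pred)
        p = (1 - gain) * p_pred
        ms.append(m)
    ms = np.array(ms)
    # M-step: closed-form MLE of 'a' from the estimated state sequence
    a_hat = (ms[1:] @ ms[:-1]) / (ms[:-1] @ ms[:-1])

print(f"estimated a = {a_hat:.3f} (true {a_true})")
```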

  9. A matrix structured LED backlight system with 2D-DHT local dimming method

    Science.gov (United States)

    Liu, Jia; Li, Yang; Du, Sidan

    To reduce the number of drivers required by the conventional local dimming method for LCDs, a novel LED backlight local dimming system is proposed in this paper. The backlight of this system is generated by a 2D discrete Hadamard transform and its matrix-structured LED modules. Compared with the conventional 2D local dimming method, the proposed method requires far fewer drivers with little degradation.
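    The driver-count saving can be illustrated with a normalized 2D Hadamard transform, which is its own inverse: a target backlight profile is approximated from a handful of Hadamard coefficients instead of one drive value per LED. The 8x8 module size and the coefficient budget below are assumptions for illustration, not the paper's exact driver scheme.

```python
import numpy as np
from scipy.linalg import hadamard

def dht2(block):
    """2D discrete Hadamard transform of a square block (size = power of 2)."""
    h = hadamard(block.shape[0]) / np.sqrt(block.shape[0])
    return h @ block @ h

def idht2(coeffs):
    return dht2(coeffs)  # the normalized transform is its own inverse

# Desired backlight profile for an 8x8 LED module, reconstructed from a few
# Hadamard coefficients instead of 64 individual LED drive values.
target = np.outer(np.hanning(8), np.hanning(8)) * 255
coeffs = dht2(target)
k = 8  # coefficient budget (number of "drivers")
keep = np.abs(coeffs) >= np.sort(np.abs(coeffs), axis=None)[-k]
approx = idht2(coeffs * keep)
print(f"max error with {keep.sum()} coefficients: "
      f"{np.abs(approx - target).max():.1f} (of 255)")
```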

  10. Comparing treatment effects after adjustment with multivariable Cox proportional hazards regression and propensity score methods

    NARCIS (Netherlands)

    Martens, Edwin P; de Boer, Anthonius; Pestman, Wiebe R; Belitser, Svetlana V; Stricker, Bruno H Ch; Klungel, Olaf H

    PURPOSE: To compare adjusted effects of drug treatment for hypertension on the risk of stroke from propensity score (PS) methods with a multivariable Cox proportional hazards (Cox PH) regression in an observational study with censored data. METHODS: From two prospective population-based cohort

  11. Comparative evaluation of ultrasound scanner accuracy in distance measurement

    Science.gov (United States)

    Branca, F. P.; Sciuto, S. A.; Scorza, A.

    2012-10-01

    The aim of the present study is to develop and compare two different automatic methods for accuracy evaluation in ultrasound phantom measurements on B-mode images: both yield the relative error e between the distances measured by 14 brand-new ultrasound medical scanners and the nominal distances among nylon wires embedded in a reference test object. The first method is based on a least squares estimation, while the second applies the mean value of the same distance evaluated at different locations in the ultrasound image (same-distance method). Results for both are presented and explained.

  12. New adaptive sampling method in particle image velocimetry

    International Nuclear Information System (INIS)

    Yu, Kaikai; Xu, Jinglei; Tang, Lan; Mo, Jianwei

    2015-01-01

    This study proposes a new adaptive method that enables the number of interrogation windows and their positions in a particle image velocimetry (PIV) image interrogation algorithm to adapt automatically to the seeding density. The proposed method relaxes the constraint of uniform sampling rate and uniform window size commonly adopted in the traditional PIV algorithm. In addition, the positions of the sampling points are redistributed on the basis of the spring force generated by the sampling points. The advantages include control of the number of interrogation windows according to the local seeding density and a smoother distribution of sampling points. The reliability of the adaptive sampling method is illustrated by processing synthetic and experimental images. The synthetic example attests to the advantages of the sampling method. Compared with the uniform interrogation technique in the experimental application, the spatial resolution is locally enhanced when using the proposed sampling method. (technical design note)

  13. Based on Penalty Function Method

    Directory of Open Access Journals (Sweden)

    Ishaq Baba

    2015-01-01

    Full Text Available The dual response surface approach for simultaneously optimizing the mean and variance models as separate functions suffers from some deficiencies in handling the tradeoffs between the bias and variance components of the mean squared error (MSE). In this paper, the accuracy of the predicted response is given serious attention in the determination of the optimum setting conditions. We consider four different objective functions for the dual response surface optimization approach. The essence of the proposed method is to reduce the influence of the variance of the predicted response by minimizing the variability relative to the quality characteristics of interest while at the same time achieving the specified target output. The basic idea is to convert the constrained optimization function into an unconstrained problem by adding the constraint to the original objective function. Numerical examples and a simulation study are carried out to compare the performance of the proposed method with some existing procedures. Numerical results show that the performance of the proposed method is encouraging and exhibits clear improvement over the existing approaches.
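    A minimal sketch of the conversion on a toy dual-response problem: the constraint that the mean response hit its target is folded into the objective as a quadratic penalty, leaving an unconstrained problem. The response models and penalty weight are made up for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Toy dual-response problem: hit a target mean while minimizing variance.
# mean(x) and var(x) stand in for fitted response-surface models.
target = 10.0
mean = lambda x: 5 + 2 * x[0] + 3 * x[1]
var = lambda x: 1 + (x[0] - 0.5) ** 2 + (x[1] - 0.5) ** 2

def penalized(x, r=1e3):
    # The equality constraint mean(x) = target becomes a quadratic penalty,
    # so an unconstrained optimizer can be applied directly.
    return var(x) + r * (mean(x) - target) ** 2

res = minimize(penalized, x0=np.zeros(2), method="Nelder-Mead")
print("x* =", np.round(res.x, 3),
      "mean =", round(mean(res.x), 3),
      "var =", round(var(res.x), 3))
```

    Larger penalty weights enforce the target more strictly at the cost of a harder optimization landscape, which is the usual penalty-method trade-off.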

  14. Minimum-Voltage Vector Injection Method for Sensorless Control of PMSM for Low-Speed Operations

    DEFF Research Database (Denmark)

    Xie, Ge; Lu, Kaiyuan; Kumar, Dwivedi Sanjeet

    2016-01-01

    In this paper, a simple signal injection method is proposed for sensorless control of PMSM at low speed, which ideally requires only one voltage vector for position estimation. The proposed method is easy to implement, resulting in a low computation burden. No filters are needed for extracting the position information. The method may also be further developed to inject two opposite voltage vectors to reduce the effects of inverter voltage error on the position estimation accuracy. The effectiveness of the proposed method is demonstrated by comparison with another sensorless control method. Theoretical analysis and experimental...

  15. A comparative evaluation of emerging methods for errors of commission based on applications to the Davis-Besse (1985) event

    Energy Technology Data Exchange (ETDEWEB)

    Reer, B.; Dang, V.N.; Hirschberg, S. [Paul Scherrer Inst., Nuclear Energy and Safety Research Dept., CH-5232 Villigen PSI (Switzerland); Straeter, O. [Gesellschaft fur Anlagen- und Reaktorsicherheit (Germany)

    1999-12-01

    In considering the human role in accidents, the classical PSA methodology applied today focuses primarily on the omissions of actions required of the operators at specific points in the scenario models. A practical, proven methodology is not available for systematically identifying and analyzing the scenario contexts in which the operators might perform inappropriate actions that aggravate the scenario. As a result, typical PSAs do not comprehensively treat these actions, referred to as errors of commission (EOCs). This report presents the results of a joint project of the Paul Scherrer Institut (PSI, Villigen, Switzerland) and the Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS, Garching, Germany) that examined some methods recently proposed for addressing the EOC issue. Five methods were investigated: 1) ATHEANA, 2) the Borssele screening methodology, 3) CREAM, 4) CAHR, and 5) CODA. In addition to a comparison of their scope, basic assumptions, and analytical approach, the methods were each applied in the analysis of PWR Loss of Feedwater scenarios based on the 1985 Davis-Besse event, in which the operator response included actions that can be categorized as EOCs. The aim was to compare how the methods consider a concrete scenario in which EOCs have in fact been observed. These case applications show how the methods are used in practical terms and constitute a common basis for comparing the methods and the insights that they provide. The identification of the potentially significant EOCs to be analysed in the PSA is currently the central problem for their treatment. The identification or search scheme has to consider an extensive set of potential actions that the operators may take. These actions may take place instead of required actions, for example, because the operators fail to assess the plant state correctly, or they may occur even when no action is required. As a result of this broad search space, most methodologies apply multiple schemes to

  16. Compensation of kinematic geometric parameters error and comparative study of accuracy testing for robot

    Science.gov (United States)

    Du, Liang; Shi, Guangming; Guan, Weibin; Zhong, Yuansheng; Li, Jin

    2014-12-01

    Geometric error is the main error source of an industrial robot and plays a more important role than other error factors. A compensation model for kinematic error is proposed in this article. Many methods can be used to test robot accuracy; the question is therefore how to determine which method is the better one. In this article, an approach is used to compare two methods for robot accuracy testing: a Laser Tracker System (LTS) and a Three Coordinate Measuring instrument (TCM) were used to test the robot accuracy according to the standard. Based on the compensation results, the method that more clearly improves the robot accuracy is identified as the better one.

  17. [A retrieval method of drug molecules based on graph collapsing].

    Science.gov (United States)

    Qu, J W; Lv, X Q; Liu, Z M; Liao, Y; Sun, P H; Wang, B; Tang, Z

    2018-04-18

    To establish a compact and efficient hypergraph representation and a graph-similarity-based retrieval method for molecules, in order to achieve effective and efficient medicine information retrieval. The chemical structural formula (CSF) is a primary search target, as a unique and precise identifier for each compound at the molecular level, in the research field of medicine information retrieval. To retrieve medicine information effectively and efficiently, a complete workflow of the graph-based CSF retrieval system was introduced. This system accepts photos taken with smartphones and sketches drawn on tablet personal computers as CSF inputs, and formalizes the CSFs with corresponding graphs. This paper then proposes a compact and efficient hypergraph representation for molecules on the basis of analyzing the factors that directly affect the efficiency of graph matching. According to the characteristics of CSFs, a hierarchical collapsing method combining graph isomorphism and frequent subgraph mining was adopted. There was yet a fundamental challenge, subgraph overlapping during the collapsing procedure, which hindered the method from establishing the correct compact hypergraph of an original CSF graph. Therefore, a graph-isomorphism-based algorithm was proposed to select dominant acyclic subgraphs on the basis of overlapping analysis. Finally, the spatial similarity among graphical CSFs was evaluated by multi-dimensional measures of similarity. To evaluate the performance of the proposed method, the proposed system was first compared with Wikipedia Chemical Structure Explorer (WCSE), the state-of-the-art system that allows CSF similarity searching within the Wikipedia molecules dataset, on retrieval accuracy. The system achieved higher values on mean average precision, discounted cumulative gain, rank-biased precision, and expected reciprocal rank than WCSE from the top-2 to the top-10 retrieved results. Specifically, the system achieved 10%, 1.41, 6.42%, and 1

  18. The challenges facing ethnographic design research: A proposed methodological solution

    DEFF Research Database (Denmark)

    Cash, Philip; Hicks, Ben; Culley, Steve

    2009-01-01

    Central to improving and maintaining high levels of performance in emerging ethnographic design research is a fundamental requirement to address some of the problems associated with the subject. In particular, seven core issues are identified, including the complexity of test development, variability of methods, resource intensiveness, subjectivity, comparability, common metrics and industrial acceptance. To address these problems, this paper describes a structured methodological approach in which three main areas are proposed: the modularisation of the research process, the standardisation of the dataset, and the stratification of the research context. The paper then examines the fundamental requirements of this scheme and how these relate to a Design Observatory approach. Following this, the proposed solution is related back to the initial problem set and potential issues are discussed. Finally...

  19. Comparative study of two methods for determining the diffusible hydrogen content in welds

    International Nuclear Information System (INIS)

    Celio de Abreu, L.; Modenesi, P.J.; Villani-Marques, P.

    1994-01-01

    This work presents a comparative study of methods for measuring the amount of diffusible hydrogen in welds: glycerin, mercury and gas chromatography. The effects of the collecting temperature and time variables were analyzed. Basic electrodes of type AWS E 9018-M were humidified and dried at different times and temperatures in order to obtain a large variation in diffusible hydrogen contents. The results showed that the collecting time can be reduced when the collecting temperature is raised; that the mercury and chromatography methods present similar results, higher than those obtained by the glycerin method; and that the use of liquid nitrogen in the preparation of the test specimens is unnecessary. The chromatography method presents the lowest dispersion and is the method whose collecting time can be reduced most by raising the collecting temperature. The use of equations for comparison between results obtained by the various methods encountered in the literature is also discussed. (Author) 16 refs

  20. Laplacian manifold regularization method for fluorescence molecular tomography

    Science.gov (United States)

    He, Xuelei; Wang, Xiaodong; Yi, Huangjian; Chen, Yanrong; Zhang, Xu; Yu, Jingjing; He, Xiaowei

    2017-04-01

    Sparse regularization methods have been widely used in fluorescence molecular tomography (FMT) for stable three-dimensional reconstruction. Generally, ℓ1-regularization-based methods allow for utilizing the sparse nature of the target distribution. However, in addition to sparsity, the spatial structure information should be exploited as well. A joint ℓ1 and Laplacian manifold regularization model is proposed to improve the reconstruction performance, and two algorithms (with and without the Barzilai-Borwein strategy) are presented to solve the regularization model. Numerical studies and an in vivo experiment demonstrate that the proposed gradient-projection-resolved Laplacian manifold regularization method for the joint model performed better than the comparative ℓ1 minimization method in both spatial aggregation and location accuracy.
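    One simple way to attack the joint model, not necessarily the authors' algorithm, is a proximal-gradient iteration with a nonnegativity projection: the smooth part (data fit plus the Laplacian term) is handled by a gradient step, and the ℓ1 term plus the constraint by a soft-threshold-and-project step. The problem sizes and the chain-graph Laplacian below are illustrative.

```python
import numpy as np

def solve_l1_laplacian(A, b, L, lam1=0.01, lam2=0.1, iters=500):
    """Proximal-gradient sketch for
        min_x 0.5*||Ax-b||^2 + lam1*||x||_1 + lam2*x'Lx,  x >= 0,
    combining an l1 term (sparsity) with a graph-Laplacian term (spatial
    smoothness), projected onto the nonnegative orthant."""
    # step size from the Lipschitz constant of the smooth part's gradient
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 + 2 * lam2 * np.linalg.norm(L, 2))
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b) + 2 * lam2 * (L @ x)
        x = x - step * grad
        x = np.maximum(x - step * lam1, 0.0)  # soft-threshold + projection
    return x

# Tiny example: 10 measurements of 20 unknowns on a chain graph.
rng = np.random.default_rng(0)
n, m = 20, 10
A = rng.normal(size=(m, n))
x_true = np.zeros(n)
x_true[8:11] = 1.0                    # one compact "fluorophore"
b = A @ x_true + rng.normal(0, 0.01, m)
D = np.diff(np.eye(n), axis=0)        # first-difference operator
L = D.T @ D                           # chain-graph Laplacian
print(np.round(solve_l1_laplacian(A, b, L), 2))
```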

  1. Comparing Methods of Calculating Expected Annual Damage in Urban Pluvial Flood Risk Assessments

    Directory of Open Access Journals (Sweden)

    Anders Skovgård Olsen

    2015-01-01

    Full Text Available Estimating the expected annual damage (EAD) due to flooding in an urban area is of great interest to urban water managers and other stakeholders. It is a strong indicator of how vulnerable a given area is to flood risk and how much can be gained by implementing e.g., climate change adaptation measures. This study identifies and compares three different methods for estimating the EAD based on unit costs of flooding of urban assets. One of these methods was used in previous studies and calculates the EAD from a few extreme events by assuming a log-linear relationship between the cost of an event and the corresponding return period. This method is compared to methods that are either more complicated or require more calculations. The choice of method by which the EAD is calculated appears to be of minor importance. At all three case study areas it seems more important that there is a shift in the damage costs as a function of the return period. The shift occurs approximately at the 10-year return period and can perhaps be related to the design criteria for sewer systems. Further, it was tested whether the EAD estimation could be simplified by assuming a single unit cost per flooded area. The results indicate that within each catchment this may be a feasible approach. However, the unit costs vary substantially between different case study areas. Hence it is not feasible to develop unit costs that can be used generally to calculate the EAD, most likely because the urban landscape is too heterogeneous.
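    Whatever interpolation is assumed between events (log-linear or otherwise), the EAD is the integral of damage over annual exceedance probability p = 1/T. A trapezoidal sketch over a few simulated events, with made-up numbers, looks like this; the paper's three methods differ mainly in how many events feed this integral and how damage is interpolated between them.

```python
import numpy as np

# Simulated damage (e.g. million EUR) for events at a few return periods.
T = np.array([2.0, 5, 10, 20, 50, 100])            # return periods, years
damage = np.array([0.1, 0.4, 1.2, 3.0, 6.5, 9.0])  # damage per event

# EAD = integral of damage over annual exceedance probability p = 1/T.
p = 1.0 / T
order = np.argsort(p)                 # integrate with p ascending
ps, ds = p[order], damage[order]
ead = np.sum(0.5 * (ds[1:] + ds[:-1]) * np.diff(ps))  # trapezoidal rule
print(f"EAD ~ {ead:.2f} per year")
```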

  2. An AC Resistance Optimization Method Applicable for Inductor and Transformer Windings with Full Layers and Partial Layers

    DEFF Research Database (Denmark)

    Shen, Zhan; Li, Zhiguang; Jin, Long

    2017-01-01

    This paper proposes an ac resistance optimization method applicable for both inductor and transformer windings with full layers and partial layers. The proposed method treats the number of layers of the windings as a design variable instead of as a predefined parameter, compared to existing methods...

  3. A novel method for unsteady flow field segmentation based on stochastic similarity of direction

    Science.gov (United States)

    Omata, Noriyasu; Shirayama, Susumu

    2018-04-01

    Recent developments in fluid dynamics research have opened up the possibility for the detailed quantitative understanding of unsteady flow fields. However, the visualization techniques currently in use generally provide only qualitative insights. A method for dividing the flow field into physically relevant regions of interest can help researchers quantify unsteady fluid behaviors. Most methods at present compare the trajectories of virtual Lagrangian particles. The time-invariant features of an unsteady flow are also frequently of interest, but the Lagrangian specification only reveals time-variant features. To address these challenges, we propose a novel method for the time-invariant spatial segmentation of an unsteady flow field. This segmentation method does not require Lagrangian particle tracking but instead quantitatively compares the stochastic models of the direction of the flow at each observed point. The proposed method is validated with several clustering tests for 3D flows past a sphere. Results show that the proposed method reveals the time-invariant, physically relevant structures of an unsteady flow.
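    One way to sketch the idea, though not with the authors' exact similarity measure, is to summarize each grid point by a histogram of its flow direction over time and then cluster those histograms: the segmentation depends only on the time-invariant statistics of direction, with no particle tracking. The synthetic field and the use of plain k-means below are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def direction_histograms(u, v, bins=16):
    """Per-grid-point histogram of flow direction over time: a crude
    stand-in for the stochastic model of direction at each observed point."""
    theta = np.arctan2(v, u)                      # shape (time, ny, nx)
    edges = np.linspace(-np.pi, np.pi, bins + 1)
    flat = theta.reshape(theta.shape[0], -1)
    return np.stack([np.histogram(flat[:, i], bins=edges, density=True)[0]
                     for i in range(flat.shape[1])])

# Synthetic field: steady rightward flow on the left half, oscillating on the right.
t = np.linspace(0, 4 * np.pi, 60)[:, None, None]
u = np.ones((60, 8, 16))
v = np.zeros((60, 8, 16))
v[:, :, 8:] = np.sin(t) * np.ones((1, 8, 8))

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    direction_histograms(u, v))
print(labels.reshape(8, 16))   # the two halves fall into different clusters
```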

  4. COMPARATIVE STUDY ON MILK CASEIN ASSAY METHODS

    Directory of Open Access Journals (Sweden)

    RODICA CĂPRIŢĂ

    2008-05-01

    Full Text Available Casein, the main milk protein, was determined by different assay methods: the gravimetric method, a method based on the neutralization of the NaOH excess used for dissolving the casein precipitate, and a method based on the titration of the acetic acid used for the casein precipitation. The last method is the simplest one, with the fewest steps and also the lowest degree of error. The results of the experiment revealed that casein represents between 72.6 and 81.3% of the whole milk protein in experiment 1, between 73.6 and 81.3% in experiment 2, and between 74.3 and 81% in experiment 3.

  5. Free vibration analysis of multi-span pipe conveying fluid with dynamic stiffness method

    International Nuclear Information System (INIS)

    Li Baohui; Gao Hangshan; Zhai Hongbo; Liu Yongshou; Yue Zhufeng

    2011-01-01

    Research highlights: → The dynamic stiffness method is proposed to analyze the free vibration of a multi-span pipe conveying fluid. → The main advantage of the proposed method is that it maintains high precision even when the element size is large. → The flowing fluid can weaken the pipe stiffness: when the fluid velocity increases, the natural frequencies of the pipe decrease. - Abstract: By taking the pipe as a Timoshenko beam, in this paper the original 4-equation model of pipe conveying fluid was modified by taking the dynamic effects of the fluid into account. The shape function usually used in the finite element method was replaced by the exact wave solution of the modified four equations. The dynamic stiffness was then deduced for the free vibration of pipe conveying fluid. The proposed method was validated by comparing the critical velocity results with the analytical solution for a pipe simply supported at both ends. In the example, the proposed method was applied to calculate the first three natural frequencies of a twelve-meter-long, three-span pipe in three different cases. The natural frequency results for the pipe conveying stationary fluid fitted well with those calculated by the finite element software Abaqus. It was shown that the dynamic stiffness method can still maintain high precision even when the element size is quite large, and this is the predominant advantage of the proposed method compared with the conventional finite element method.

  6. The intervals method: a new approach to analyse finite element outputs using multivariate statistics

    Directory of Open Access Journals (Sweden)

    Jordi Marcé-Nogué

    2017-10-01

    Full Text Available Background In this paper, we propose a new method, named the intervals’ method, to analyse data from finite element models in a comparative multivariate framework. As a case study, several armadillo mandibles are analysed, showing that the proposed method is useful to distinguish and characterise biomechanical differences related to diet/ecomorphology. Methods The intervals’ method consists of generating a set of variables, each one defined by an interval of stress values. Each variable is expressed as a percentage of the area of the mandible occupied by those stress values. Afterwards these newly generated variables can be analysed using multivariate methods. Results Applying this novel method to the biological case study of whether armadillo mandibles differ according to dietary groups, we show that the intervals’ method is a powerful tool to characterize biomechanical performance and how this relates to different diets. This allows us to positively discriminate between specialist and generalist species. Discussion We show that the proposed approach is a useful methodology not affected by the characteristics of the finite element mesh. Additionally, the positive discriminating results obtained when analysing a difficult case study suggest that the proposed method could be a very useful tool for comparative studies in finite element analysis using multivariate statistical approaches.

  7. The intervals method: a new approach to analyse finite element outputs using multivariate statistics

    Science.gov (United States)

    De Esteban-Trivigno, Soledad; Püschel, Thomas A.; Fortuny, Josep

    2017-01-01

    Background In this paper, we propose a new method, named the intervals’ method, to analyse data from finite element models in a comparative multivariate framework. As a case study, several armadillo mandibles are analysed, showing that the proposed method is useful to distinguish and characterise biomechanical differences related to diet/ecomorphology. Methods The intervals’ method consists of generating a set of variables, each one defined by an interval of stress values. Each variable is expressed as a percentage of the area of the mandible occupied by those stress values. Afterwards these newly generated variables can be analysed using multivariate methods. Results Applying this novel method to the biological case study of whether armadillo mandibles differ according to dietary groups, we show that the intervals’ method is a powerful tool to characterize biomechanical performance and how this relates to different diets. This allows us to positively discriminate between specialist and generalist species. Discussion We show that the proposed approach is a useful methodology not affected by the characteristics of the finite element mesh. Additionally, the positive discriminating results obtained when analysing a difficult case study suggest that the proposed method could be a very useful tool for comparative studies in finite element analysis using multivariate statistical approaches. PMID:29043107

  8. Evaluation of T cell subsets by an immunocytochemical method compared to flow cytometry in four countries

    DEFF Research Database (Denmark)

    Lisse, I M; Böttiger, B; Christensen, L B

    1997-01-01

    The authors tested an alternative method for CD4 and CD8 T lymphocytes enumeration, the immunoalkaline phosphatase method (IA), in three African countries and in Denmark. The IA determinations from 136 HIV antibody positive and 105 HIV antibody negative individuals were compared...... by the two methods are not interchangeable as IA compared to FC consistently gives higher percentage of CD4 T lymphocytes, and lower percentage of CD8 T lymphocytes. Mean differences between the two methods did not differ between the three African countries indicating that the IA method provides systematic...... results. Replicate measurements suggested good correspondence between results obtained by IA. By using an IA level of 4 T lymphocytes/microliter, the sensitivity was 81% and specificity 96% for detecting an FC level of 4 T lymphocytes/microliter. Using an IA level of 4 T...

  9. Evaluation of Different Methods for Considering Bar-Concrete ...

    African Journals Online (AJOL)

    theory, but the perfect bond assumption has been removed. The precision of the proposed method in considering the real nonlinear behavior of reinforced concrete frames has been compared to the precision of two other suggested methods for considering bond-slip effect in layer model. Among the capabilities of this ...

  10. Comparative analysis of accelerogram processing methods

    International Nuclear Information System (INIS)

    Goula, X.; Mohammadioun, B.

    1986-01-01

    The work described hereinafter is a short account of an on-going research project concerning high-quality processing of strong-motion earthquake recordings. Several processing procedures were tested on synthetic signals simulating ground motion, designed for this purpose. The correction methods operating in the time domain are found to depend strongly on the sampling rate. Two methods of low-frequency filtering followed by integration of the accelerations yielded satisfactory results [fr

  11. Researching Civil Remedies for International Corruption: The Choice of the Functional Comparative Method

    NARCIS (Netherlands)

    A.O. Makinwa (Abiola)

    2009-01-01

    This paper motivates the choice of the functional comparative method to research the issue of civil remedies for international corruption. It shows how the social, economic and political factors that have shaped the normative context of the research question point to the functional

  12. A method to compare calculated and experimental 14 MeV neutron attenuation coefficient and to determine the total removal cross-section

    International Nuclear Information System (INIS)

    Elay, A.G.

    1978-01-01

    A method is proposed for comparing calculated and experimental neutron attenuation coefficients (χ) when samples are of different geometries but the same material. The best Σ (total removal cross-section) is determined by using the fact that the logarithm of the attenuation coefficient varies linearly with Σ, i.e. lg χ = lg χ₀ + a_s Σ, where a_s is a parameter that characterises all the geometrical experimental conditions of the neutron source, the sample, and the relative source-to-sample geometry. In order to increase the precision, samples of different geometries but the same material were used. Values of χ were determined experimentally and a_s was calculated for these geometries. The graph of lg χ as a function of a_s, together with a simple fit to a straight line, is sufficient to determine Σ (the slope of the line). (T.G.)
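
    The fitting step is a one-line least-squares problem. A minimal sketch follows, with hypothetical attenuation coefficients χ and geometry parameters a_s; the sign of the recovered slope depends on the convention chosen for Σ.

    ```python
    import numpy as np

    # Hypothetical measurements: geometry parameter a_s per sample and the
    # 14 MeV neutron attenuation coefficient chi measured for that sample.
    a_s = np.array([1.2, 1.8, 2.5, 3.1, 4.0])          # cm
    chi = np.array([0.86, 0.79, 0.71, 0.66, 0.58])

    # lg chi = lg chi0 + a_s * Sigma  ->  straight-line fit; with an
    # attenuating sample chi falls as a_s grows, so the slope is negative
    # and Sigma is its magnitude under that sign convention.
    slope, intercept = np.polyfit(a_s, np.log10(chi), 1)
    print(f"slope = {slope:.4f} cm^-1  (Sigma = {abs(slope):.4f} cm^-1)")
    print(f"lg chi0 = {intercept:.4f}")
    ```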

  13. A finite volume method for cylindrical heat conduction problems based on local analytical solution

    KAUST Repository

    Li, Wang

    2012-10-01

    A new finite volume method for cylindrical heat conduction problems based on a local analytical solution is proposed in this paper with a detailed derivation. The calculation results of this new method are compared with those of the traditional second-order finite volume method. The newly proposed method is more accurate than conventional ones, although its discretized expression is slightly more complex than that of the second-order central finite volume method, making it cost more computation time on the same grid. Numerical results show that the total CPU time of the new method is nevertheless significantly less than that of conventional methods for achieving the same level of accuracy. © 2012 Elsevier Ltd. All rights reserved.

  14. A finite volume method for cylindrical heat conduction problems based on local analytical solution

    KAUST Repository

    Li, Wang; Yu, Bo; Wang, Xinran; Wang, Peng; Sun, Shuyu

    2012-01-01

    A new finite volume method for cylindrical heat conduction problems based on a local analytical solution is proposed in this paper with a detailed derivation. The calculation results of this new method are compared with those of the traditional second-order finite volume method. The newly proposed method is more accurate than conventional ones, although its discretized expression is slightly more complex than that of the second-order central finite volume method, making it cost more computation time on the same grid. Numerical results show that the total CPU time of the new method is nevertheless significantly less than that of conventional methods for achieving the same level of accuracy. © 2012 Elsevier Ltd. All rights reserved.
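
    The record's local-analytical scheme is not reproduced here, but the conventional second-order finite volume baseline it is compared against can be sketched. The following solves steady radial conduction through a cylinder wall, d/dr(r dT/dr) = 0, with hypothetical geometry and boundary temperatures, and checks against the exact logarithmic profile.

    ```python
    import numpy as np

    def radial_fv_error(r_in=0.01, r_out=0.05, T_in=400.0, T_out=300.0, n=20):
        """Second-order finite volume solution of d/dr(r dT/dr) = 0
        (steady conduction, constant conductivity), checked against the
        exact logarithmic temperature profile."""
        dr = (r_out - r_in) / n
        rc = r_in + (np.arange(n) + 0.5) * dr        # cell centres
        rf = r_in + np.arange(n + 1) * dr            # cell faces
        A = np.zeros((n, n))
        b = np.zeros(n)
        for i in range(n):
            west, east = rf[i] / dr, rf[i + 1] / dr  # face conductances r_f/dr
            if i == 0:                               # Dirichlet wall at r_in
                west = rf[0] / (0.5 * dr)
                b[i] -= west * T_in
            else:
                A[i, i - 1] = west
            if i == n - 1:                           # Dirichlet wall at r_out
                east = rf[n] / (0.5 * dr)
                b[i] -= east * T_out
            else:
                A[i, i + 1] = east
            A[i, i] = -(west + east)
        T = np.linalg.solve(A, b)
        exact = T_in + (T_out - T_in) * np.log(rc / r_in) / np.log(r_out / r_in)
        return np.abs(T - exact).max()

    print(f"max error on 20 cells: {radial_fv_error():.3e} K")
    ```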

  15. Time-trends in method-specific suicide rates compared with the availability of specific compounds

    DEFF Research Database (Denmark)

    Nordentoft, Merete; Qin, Ping; Helweg-Larsen, Karin

    2006-01-01

    Restriction of means for suicide is an important part of suicide prevention strategies in different countries. All suicides in Denmark between 1970 and 2000 were examined with regard to the method used. Overall suicide mortality and method-specific suicide mortality were compared...... in the number of suicides by self-poisoning with these compounds. Restricted access occurred concomitantly with a 55% decrease in the suicide rate....

  16. Innovative methods for calculation of freeway travel time using limited data : final report.

    Science.gov (United States)

    2008-01-01

    Description: Travel time estimates created by processing simulated freeway loop detector data with the proposed method were compared with travel times reported by the VISSIM model. An improved methodology was proposed to estimate freeway corrido...

  17. An axisymmetric method of creep analysis for primary and secondary creep

    International Nuclear Information System (INIS)

    Jahed, Hamid; Bidabadi, Jalal

    2003-01-01

    A general axisymmetric method for elastic-plastic analysis was previously proposed by Jahed and Dubey [ASME J Pressure Vessels Technol 119 (1997) 264]. In the present work the method is extended to the time domain. General rate-type governing equations are derived and solved in terms of the rate of change of displacement as a function of the rate of change in loading. Different types of loading, such as internal and external pressure, centrifugal loading and temperature gradient, are considered. To derive specific equations and employ the proposed formulation, the problem of an inhomogeneous, non-uniform rotating disc is worked out. Primary and secondary creep behaviour is predicted using the proposed method and the results are compared with FEM results. The problem of creep in pressurized vessels is also solved. Several numerical examples show the effectiveness and robustness of the proposed method
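
    As a sketch of the primary-plus-secondary behaviour such an analysis predicts, the following evaluates a textbook creep law (a time-hardening primary term plus a steady secondary term) at constant stress. The material constants are made up for illustration and are not those of the paper.

    ```python
    # Hypothetical constants: primary (time-hardening) plus secondary creep,
    #   eps_c(t) = A * sigma**n1 * t**m  +  B * sigma**n2 * t
    A, n1, m = 1.0e-12, 4.0, 0.4      # primary term
    B, n2    = 2.0e-16, 4.5           # secondary (steady-state) term
    sigma = 80.0                      # constant stress at a disc radius, MPa

    for t in (10.0, 100.0, 1000.0, 10000.0):   # hours
        eps_p = A * sigma**n1 * t**m           # decaying-rate primary stage
        eps_s = B * sigma**n2 * t              # constant-rate secondary stage
        print(f"t = {t:7.0f} h   primary = {eps_p:.3e}   secondary = {eps_s:.3e}")
    ```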

  18. Acoustic Source Localization via Subspace Based Method Using Small Aperture MEMS Arrays

    Directory of Open Access Journals (Sweden)

    Xin Zhang

    2014-01-01

    Full Text Available Small aperture microphone arrays offer many advantages for portable devices and hearing aid equipment. In this paper, a subspace-based localization method is proposed for acoustic sources using small aperture arrays. The effects of array aperture on localization are analyzed by using the array response (array manifold). Besides array aperture, the frequency of the acoustic source and the variance of the signal power are simulated to demonstrate how to optimize localization performance, which is carried out by introducing frequency error into the proposed method. The proposed method is validated for a 5 mm array aperture by simulations and experiments with MEMS microphone arrays. Different types of acoustic sources can be localized with a precision as high as 6 degrees, even in the presence of wind noise and other noise. Furthermore, the proposed method reduces the computational complexity compared with other methods.
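
    The record does not name the specific subspace algorithm, so the following sketch uses narrowband MUSIC, the classic subspace method, on a simulated uniform linear array. Array size, source angles and noise level are hypothetical.

    ```python
    import numpy as np

    def music_doa(X, n_src, d_over_lambda=0.5):
        """Narrowband MUSIC spectrum for a uniform linear array.
        X: complex snapshot matrix, shape (sensors, snapshots)."""
        m = X.shape[0]
        R = X @ X.conj().T / X.shape[1]              # sample covariance
        _, vecs = np.linalg.eigh(R)                  # eigenvalues ascending
        En = vecs[:, : m - n_src]                    # noise subspace
        grid = np.linspace(-90.0, 90.0, 721)
        k = np.arange(m)
        P = np.array([1.0 / np.linalg.norm(
                En.conj().T @ np.exp(2j * np.pi * d_over_lambda * k
                                     * np.sin(np.radians(th)))) ** 2
              for th in grid])
        return grid, P

    # Hypothetical test: two sources at -20 and +35 degrees, 8-sensor ULA
    rng = np.random.default_rng(1)
    m, n_snap = 8, 400
    angles = np.radians([-20.0, 35.0])
    A = np.exp(2j * np.pi * 0.5 * np.outer(np.arange(m), np.sin(angles)))
    S = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
    N = rng.standard_normal((m, n_snap)) + 1j * rng.standard_normal((m, n_snap))
    X = A @ S + 0.1 * N

    grid, P = music_doa(X, n_src=2)
    peaks = np.where((P[1:-1] > P[:-2]) & (P[1:-1] > P[2:]))[0] + 1
    best = peaks[np.argsort(P[peaks])[-2:]]
    print(np.sort(grid[best]))                       # approximately [-20, 35]
    ```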

  19. Joint Pitch and DOA Estimation Using the ESPRIT method

    DEFF Research Database (Denmark)

    Wu, Yuntao; Amir, Leshem; Jensen, Jesper Rindom

    2015-01-01

    In this paper, the problem of joint multi-pitch and direction-of-arrival (DOA) estimation for multi-channel harmonic sinusoidal signals is considered. A spatio-temporal matrix signal model for a uniform linear array is defined, and then the ESPRIT method based on subspace techniques that exploits...... the invariance property in the time domain is first used to estimate the multi-pitch frequencies of multiple harmonic signals. Using the estimated pitch frequencies, the DOA estimates based on the ESPRIT method are then obtained from the shift-invariance structure in the spatial domain. Compared...... to the existing state-of-the-art algorithms, the proposed method based on ESPRIT without 2-D searching is computationally more efficient but performs similarly. An asymptotic performance analysis of the DOA and pitch estimation of the proposed method is also presented. Finally, the effectiveness of the proposed...
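
    The temporal half of the approach — estimating frequencies from the rotational invariance of the signal subspace — can be sketched compactly. The following is 1-D temporal ESPRIT on a synthetic harmonic signal; the joint pitch/DOA machinery of the paper is not reproduced.

    ```python
    import numpy as np

    def esprit_freqs(x, n_freqs, m=40):
        """Temporal ESPRIT: recover sinusoid frequencies (cycles/sample)
        from the rotational invariance of the signal subspace."""
        N = len(x)
        H = np.array([x[i:i + m] for i in range(N - m + 1)]).T   # Hankel-type
        U, _, _ = np.linalg.svd(H, full_matrices=False)
        Us = U[:, :n_freqs]                                      # signal subspace
        phi = np.linalg.lstsq(Us[:-1], Us[1:], rcond=None)[0]    # invariance fit
        return np.sort(np.angle(np.linalg.eigvals(phi)) / (2 * np.pi))

    # Hypothetical harmonic signal: fundamental 0.05 plus two harmonics
    rng = np.random.default_rng(2)
    n = np.arange(512)
    x = (np.exp(2j * np.pi * 0.05 * n)
         + 0.8 * np.exp(2j * np.pi * 0.10 * n)
         + 0.6 * np.exp(2j * np.pi * 0.15 * n))
    x = x + 0.05 * (rng.standard_normal(n.size) + 1j * rng.standard_normal(n.size))
    print(esprit_freqs(x, 3))        # approximately [0.05, 0.10, 0.15]
    ```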

  20. A vibration correction method for free-fall absolute gravimeters

    Science.gov (United States)

    Qian, J.; Wang, G.; Wu, K.; Wang, L. J.

    2018-02-01

    An accurate determination of gravitational acceleration, usually approximated as 9.8 m s⁻², plays an important role in metrology, geophysics, and geodesy. Absolute gravimetry has been experiencing rapid development in recent years. Most absolute gravimeters today employ a free-fall method to measure gravitational acceleration. Noise from ground vibration has become one of the most serious factors limiting measurement precision. Compared to vibration isolators, the vibration correction method is a simple and feasible way to reduce the influence of ground vibrations. A modified vibration correction method is proposed and demonstrated. A two-dimensional golden section search algorithm is used to search for the best parameters of the hypothetical transfer function. Experiments using a T-1 absolute gravimeter are performed. It is verified that, for an identical group of drop data, the modified method proposed in this paper achieves better correction effects with much less computation than previous methods. Compared to vibration isolators, the correction method applies to more hostile environments and even dynamic platforms, and is expected to be used in a wider range of applications.
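
    The search step is a standard golden-section minimization. A minimal 1-D sketch follows; the gravimeter correction uses a two-dimensional variant, and applying this 1-D step coordinate-wise is one simple way to extend it. The residual function below is hypothetical.

    ```python
    import math

    def golden_section_min(f, a, b, tol=1e-8):
        """Minimise a unimodal f on [a, b] by golden-section search."""
        invphi = (math.sqrt(5.0) - 1.0) / 2.0          # 1/phi ~ 0.618
        c, d = b - invphi * (b - a), a + invphi * (b - a)
        fc, fd = f(c), f(d)
        while b - a > tol:
            if fc < fd:                 # minimum lies in [a, d]
                b, d, fd = d, c, fc
                c = b - invphi * (b - a)
                fc = f(c)
            else:                       # minimum lies in [c, b]
                a, c, fc = c, d, fd
                d = a + invphi * (b - a)
                fd = f(d)
        return 0.5 * (a + b)

    # Hypothetical residual: misfit of corrected drops vs. a transfer-function gain
    residual = lambda g: (g - 1.37) ** 2 + 0.2
    print(golden_section_min(residual, 0.0, 5.0))      # ~1.37
    ```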

  1. Comparative study of construction schemes for proposed LINAC tunnel for ADSS

    International Nuclear Information System (INIS)

    Parchani, G.; Suresh, N.

    2003-01-01

    Radiation-shielded structures involve architectural, structural and radiation shielding design. In order to attenuate radiation to permissible limits, concrete has been recognized as a most versatile shielding material and is extensively used. In addition to its radiation shielding properties, concrete possesses very good mechanical properties, which enable its use as a structural member. The high-energy linac laboratory, which will generate radiation, needs a very large thickness of concrete for shielding. The length of the tunnel (1.00 km) is one of the most important factors in finalizing the construction scheme. In view of this, it becomes essential to explore alternative construction schemes for such structures to optimize the cost of construction. In this paper, various alternatives for the construction of the proposed linac tunnel have been studied.

  2. A method for statistically comparing spatial distribution maps

    Directory of Open Access Journals (Sweden)

    Reynolds Mary G

    2009-01-01

    Full Text Available Abstract Background Ecological niche modeling is a method for estimating species distributions based on certain ecological parameters. Thus far, empirical determination of significant differences between independently generated distribution maps for a single species (maps created through equivalent processes, but with different ecological input parameters) has been challenging. Results We describe a method for comparing model outcomes which allows a statistical evaluation of whether the strength of prediction and the breadth of predicted areas are measurably different between projected distributions. To create ecological niche models for statistical comparison, we utilized GARP (Genetic Algorithm for Rule-Set Production) software to generate ecological niche models of human monkeypox in Africa. We created several models, keeping the case location input records constant for each model but varying the ecological input data. In order to assess the relative importance of each ecological parameter included in the development of the individual predicted distributions, we performed pixel-to-pixel comparisons between model outcomes and calculated the mean difference in pixel scores. We used a two-sample Student's t-test (taking as null hypothesis that both maps were identical to each other regardless of which input parameters were used) to examine whether the mean difference in corresponding pixel scores from one map to another was greater than would be expected by chance alone. We also utilized weighted kappa statistics, frequency distributions, and percent difference to look at the disparities in pixel scores. Multiple independent statistical tests indicated precipitation as the single most important independent ecological parameter in the niche model for human monkeypox disease. Conclusion In addition to improving our understanding of the natural factors influencing the distribution of human monkeypox disease, such pixel-to-pixel comparison
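
    The pixel-to-pixel comparison described above reduces to a mean difference plus a two-sample t-test on the pooled pixel scores. A minimal sketch with hypothetical suitability maps follows; in practice masked or no-data pixels would be removed first, and spatial autocorrelation tempers the nominal p-value.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical pixel scores (0-100 suitability) from two model runs on
    # the same grid; run B is run A plus a systematic shift and noise.
    rng = np.random.default_rng(3)
    map_a = rng.uniform(0, 100, size=(120, 90))
    map_b = map_a + rng.normal(2.0, 8.0, size=map_a.shape)

    diff = (map_b - map_a).ravel()
    print(f"mean pixel difference: {diff.mean():.2f}")

    # Two-sample Student's t-test on the pooled pixel scores, as in the record
    t, p = stats.ttest_ind(map_a.ravel(), map_b.ravel())
    print(f"t = {t:.2f}, p = {p:.3g}")
    ```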

  3. A Proposal on the Geometry Splitting Strategy to Enhance the Calculation Efficiency in Monte Carlo Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Han, Gi Yeong; Kim, Song Hyun; Kim, Do Hyun; Shin, Chang Ho; Kim, Jong Kyung [Hanyang Univ., Seoul (Korea, Republic of)

    2014-05-15

    In this study, how the geometry splitting strategy affects calculation efficiency was analyzed, and a geometry splitting method was proposed to increase the calculation efficiency of Monte Carlo simulation. First, the neutron distribution characteristics in a deep penetration problem were analyzed. Then, considering the neutron population distribution, a geometry splitting method was devised. Using the proposed method, FOMs for benchmark problems were estimated and compared with those of the conventional geometry splitting strategy. The results show that the proposed method can considerably increase the calculation efficiency of the geometry splitting method. It is expected that the proposed method will contribute to optimizing the computational cost as well as reducing human errors in Monte Carlo simulation. Geometry splitting in Monte Carlo (MC) calculation is one of the most popular variance reduction techniques due to its simplicity, reliability and efficiency. To use geometry splitting, the user should determine the locations of the splitting surfaces and assign the relative importance of each region. Generally, the splitting parameters are decided by the user's experience; in this process, however, the splitting parameters can be selected ineffectively or erroneously. To prevent this, a common recommendation that eliminates guesswork is to split the geometry evenly, after which the importances are estimated by a few iterations so as to preserve the population of particles penetrating each region. However, splitting the geometry evenly can make the calculation inefficient due to the change in mean free path (MFP) of the particles.

  4. A Proposal on the Geometry Splitting Strategy to Enhance the Calculation Efficiency in Monte Carlo Simulation

    International Nuclear Information System (INIS)

    Han, Gi Yeong; Kim, Song Hyun; Kim, Do Hyun; Shin, Chang Ho; Kim, Jong Kyung

    2014-01-01

    In this study, how the geometry splitting strategy affects calculation efficiency was analyzed, and a geometry splitting method was proposed to increase the calculation efficiency of Monte Carlo simulation. First, the neutron distribution characteristics in a deep penetration problem were analyzed. Then, considering the neutron population distribution, a geometry splitting method was devised. Using the proposed method, FOMs for benchmark problems were estimated and compared with those of the conventional geometry splitting strategy. The results show that the proposed method can considerably increase the calculation efficiency of the geometry splitting method. It is expected that the proposed method will contribute to optimizing the computational cost as well as reducing human errors in Monte Carlo simulation. Geometry splitting in Monte Carlo (MC) calculation is one of the most popular variance reduction techniques due to its simplicity, reliability and efficiency. To use geometry splitting, the user should determine the locations of the splitting surfaces and assign the relative importance of each region. Generally, the splitting parameters are decided by the user's experience; in this process, however, the splitting parameters can be selected ineffectively or erroneously. To prevent this, a common recommendation that eliminates guesswork is to split the geometry evenly, after which the importances are estimated by a few iterations so as to preserve the population of particles penetrating each region. However, splitting the geometry evenly can make the calculation inefficient due to the change in mean free path (MFP) of the particles.
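
    The population-preserving importance assignment can be sketched in a few lines: if each region attenuates the flux by roughly exp(width/MFP), scaling the importances by the same factor keeps the tracked population flat. The one-group data below are hypothetical; a real problem would take the MFP from the material and energy spectrum.

    ```python
    import numpy as np

    # Hypothetical slab shield split into equal-width regions
    width_cm, n_regions, mfp_cm = 100.0, 10, 8.0
    region_width = width_cm / n_regions
    attenuation = np.exp(region_width / mfp_cm)   # population loss per region

    # Importance grows by the attenuation factor region by region, so the
    # surviving-weight population stays approximately constant.
    importances = attenuation ** np.arange(n_regions)
    print(np.round(importances, 2))

    # Importance cards are often rounded to powers of two in practice:
    print(2.0 ** np.round(np.log2(importances)))
    ```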

  5. A parallel orbital-updating based plane-wave basis method for electronic structure calculations

    International Nuclear Information System (INIS)

    Pan, Yan; Dai, Xiaoying; Gironcoli, Stefano de; Gong, Xin-Gao; Rignanese, Gian-Marco; Zhou, Aihui

    2017-01-01

    Highlights: • Three parallel orbital-updating based plane-wave basis methods are proposed for electronic structure calculations. • The new methods avoid generating large-scale eigenvalue problems and thereby reduce the computational cost. • The new methods allow for two-level parallelization, which is particularly interesting for large-scale parallelization. • Numerical experiments show that the new methods are reliable and efficient for large-scale calculations on modern supercomputers. - Abstract: Motivated by the recently proposed parallel orbital-updating approach in the real-space method, we propose a parallel orbital-updating based plane-wave basis method for electronic structure calculations, for solving the corresponding eigenvalue problems. In addition, we propose two new modified parallel orbital-updating methods. Compared to traditional plane-wave methods, our methods allow for two-level parallelization, which is particularly interesting for large-scale parallelization. Numerical experiments show that these new methods are more reliable and efficient for large-scale calculations on modern supercomputers.

  6. A New Method to Solve Numeric Solution of Nonlinear Dynamic System

    Directory of Open Access Journals (Sweden)

    Min Hu

    2016-01-01

    Full Text Available It is well known that the cubic spline function has the advantages of a simple form, good convergence and approximation properties, and second-order smoothness. A particular class of cubic spline function is constructed, and an effective method for the numerical solution of nonlinear dynamic systems is proposed based on it. Compared with existing methods, this method not only has high approximation precision, but also avoids the Runge phenomenon. An error analysis of several methods is given via two numerical examples, which show that the proposed method is a more feasible tool for engineering practice.
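
    The Runge-phenomenon claim is easy to demonstrate. The following compares a degree-10 polynomial interpolant against a cubic spline on Runge's function; it uses SciPy's generic CubicSpline rather than the particular spline class constructed in the paper.

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    # Runge's function on [-1, 1]: high-degree polynomial interpolation on
    # equispaced nodes oscillates near the ends; a cubic spline does not.
    f = lambda x: 1.0 / (1.0 + 25.0 * x**2)
    nodes = np.linspace(-1, 1, 11)
    dense = np.linspace(-1, 1, 1001)

    poly = np.polyfit(nodes, f(nodes), 10)        # degree-10 interpolant
    spline = CubicSpline(nodes, f(nodes))

    err_poly = np.abs(np.polyval(poly, dense) - f(dense)).max()
    err_spline = np.abs(spline(dense) - f(dense)).max()
    print(f"max error, degree-10 polynomial: {err_poly:.3f}")   # ~1.9 (Runge)
    print(f"max error, cubic spline       : {err_spline:.3f}")  # ~0.02
    ```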

  7. A comparative study of cultural methods for the detection of Salmonella in feed and feed ingredients

    Directory of Open Access Journals (Sweden)

    Haggblom Per

    2009-02-01

    Full Text Available Abstract Background Animal feed as a source of infection to food-producing animals is much debated. In order to increase our present knowledge about possible feed transmission, it is important to know that the present isolation methods for Salmonella are reliable also for feed materials. In a comparative study, the standard method used for isolation of Salmonella in feed in the Nordic countries, the NMKL71 method (Nordic Committee on Food Analysis), was compared to the Modified Semisolid Rappaport Vassiliadis method (MSRV) and the international standard method (EN ISO 6579:2002). Five different feed materials were investigated, namely wheat grain, soybean meal, rape seed meal, palm kernel meal and pellets of pig feed, as well as scrapings from a feed mill elevator. Four different levels of the Salmonella serotypes S. Typhimurium, S. Cubana and S. Yoruba were added to each feed material. For all methods, pre-enrichment in Buffered Peptone Water (BPW) was carried out, followed by enrichment in the different selective media and finally plating on selective agar media. Results The results obtained with all three methods showed no differences in detection levels, with an accuracy and sensitivity of 65% and 56%, respectively. However, Müller-Kauffmann tetrathionate-novobiocin broth (MKTTn) performed less well due to many false-negative results on Brilliant Green agar (BGA) plates. Compared to the other feed materials, palm kernel meal showed a higher detection level with all serotypes and methods tested. Conclusion The results of this study showed that the accuracy, sensitivity and specificity of the investigated cultural methods were equivalent. However, the detection levels for different feeds and feed ingredients varied considerably.

  8. A Proposed Method for Improving the Performance of P-Type GaAs IMPATTs

    Directory of Open Access Journals (Sweden)

    H. A. El-Motaafy

    2012-07-01

    Full Text Available A special waveform is proposed and assumed to be the optimum waveform for p-type GaAs IMPATTs. This waveform is deduced after a careful and extensive study of the performance of these devices. The results presented here indicate that the performance of IMPATTs driven by the proposed waveform is superior to that obtained when the same IMPATTs are driven by the conventional sinusoidal waveform. These results are obtained using a full-scale computer simulation program that fully takes into account all the physical effects pertinent to IMPATT operation. In this paper, it is shown that the superiority of the proposed waveform is attributable to its ability to reduce the effects that usually degrade IMPATT performance, such as the space-charge effect and the drift velocity dropping below saturation, and to its ability to improve the phase relationship between the terminal voltage and the induced current. Key Words: Computer-Aided Design, GaAs IMPATT, Microwave Engineering

  9. Comparative Study of Gas Reconstruction Robust Methods for Multicomponent Gas Mixtures

    Directory of Open Access Journals (Sweden)

    V. A. Gorodnichev

    2015-01-01

    determining the concentrations of the gas mixture components as compared to averaging, the least-squares method and other options of the function ( x .

  10. [Nonparametric method of estimating survival functions containing right-censored and interval-censored data].

    Science.gov (United States)

    Xu, Yonghong; Gao, Xiaohuan; Wang, Zhengxi

    2014-04-01

    Missing data represent a general problem in many scientific fields, especially in medical survival analysis. Interpolation is one of the important methods for dealing with censored data. However, most interpolation methods replace the censored data with exact data, which distorts the real distribution of the censored data and reduces the probability of the real data falling into the interpolation data. In order to solve this problem, we propose in this paper a nonparametric method of estimating the survival function of right-censored and interval-censored data, and compare its performance to the SC (self-consistent) algorithm. Compared to the average interpolation and nearest-neighbor interpolation methods, the proposed method replaces the right-censored data with interval-censored data, greatly improving the probability of the real data falling into the imputation interval. It then uses empirical distribution theory to estimate the survival function of the right-censored and interval-censored data. The results of numerical examples and a real breast cancer data set demonstrated that the proposed method had higher accuracy and better robustness for different proportions of censored data. This paper provides a good method for comparing the performance of clinical treatments by estimating the survival data of the patients. This provides some help to medical survival data analysis.
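
    The record's interval-imputation scheme is not fully specified here; as the familiar nonparametric baseline for right-censored survival estimation, a minimal Kaplan–Meier product-limit sketch follows, with hypothetical follow-up data.

    ```python
    import numpy as np

    def kaplan_meier(time, event):
        """Product-limit survival estimate for right-censored data
        (event=1 observed, event=0 right-censored)."""
        time, event = np.asarray(time), np.asarray(event)
        surv, s = [], 1.0
        for t in np.unique(time):                    # unique times, sorted
            d = np.sum((time == t) & (event == 1))   # events at t
            n = np.sum(time >= t)                    # at risk just before t
            if d > 0:
                s *= 1.0 - d / n
                surv.append((t, s))
        return surv

    # Hypothetical follow-up times in months; event=0 marks censoring
    times  = [3, 5, 5, 8, 12, 12, 15, 20]
    events = [1, 1, 0, 1, 1, 0, 0, 1]
    for t, s in kaplan_meier(times, events):
        print(f"S({t:2d}) = {s:.3f}")
    ```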

  11. Comparative-historical method in Slavic linguistics and Alexander Vostokov’s philological intuitions

    Directory of Open Access Journals (Sweden)

    Melkov Andrey Sergeevich

    2015-04-01

    Full Text Available The article is devoted to assessing Alexander Vostokov's (1781-1864) contribution to the formation and development of Slavic philology as a scientific discipline. The research is founded on an analysis of Vostokov's work “Judgement about the Slavic language”, which resulted from the scientist's study of the oldest Russian manuscript, the “Ostromir Gospels”. Vostokov devised a new method for Slavic philology, now called comparative-historical in modern science. He initiated the scientific study of Old Church Slavonic and Old Russian written monuments. Thanks to Vostokov's discoveries, the basis of Russian comparative-historical linguistics was formed.

  12. Maximum-likelihood methods for array processing based on time-frequency distributions

    Science.gov (United States)

    Zhang, Yimin; Mu, Weifeng; Amin, Moeness G.

    1999-11-01

    This paper proposes a novel time-frequency maximum likelihood (t-f ML) method for direction-of-arrival (DOA) estimation for non-stationary signals, and compares this method with conventional maximum likelihood DOA estimation techniques. Time-frequency distributions localize the signal power in the time-frequency domain, and as such enhance the effective SNR, leading to improved DOA estimation. The localization of signals with different t-f signatures permits the division of the time-frequency domain into smaller regions, each containing fewer signals than those incident on the array. The reduction of the number of signals within the different time-frequency regions not only reduces the required number of sensors, but also decreases the computational load in multidimensional optimizations. Compared to the recently proposed time-frequency MUSIC (t-f MUSIC), the proposed t-f ML method can be applied in coherent environments, without the need to perform any type of preprocessing that is subject to both array geometry and array aperture.
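
    The region-selection idea — localize the signal power in the time-frequency plane, then estimate DOAs only from high-energy t-f points — can be sketched with an ordinary spectrogram standing in for the time-frequency distributions used in the paper. The signal and threshold below are hypothetical.

    ```python
    import numpy as np
    from scipy.signal import stft

    # Sketch: compute a spectrogram, keep only high-energy time-frequency
    # points, and pass the snapshots from that region to a DOA estimator.
    fs = 1000.0
    t = np.arange(0, 1, 1 / fs)
    chirp = np.exp(2j * np.pi * (100 * t + 150 * t**2))   # non-stationary source

    f, tt, Z = stft(chirp, fs=fs, nperseg=128, return_onesided=False)
    power = np.abs(Z) ** 2
    mask = power > 0.1 * power.max()                      # high-energy region
    print(f"selected {mask.sum()} of {mask.size} t-f points")
    ```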

  13. Comparative evaluation of different methods of treatment of miners with vibration-noise pathology

    Energy Technology Data Exchange (ETDEWEB)

    Bel' skaya, M.L.; Nekhorosheva, M.A.; Konovalova, S.I.; Kukhtina, G.V.; Gonchar, I.G.; Terent' eva, D.P.; Grishchenko, L.A.; Soboleva, N.P.; Kharitonov, S.A.; Priklonskii, I.V.

    1984-10-01

    Two new therapeutic methods of treating vibration-noise pathology, needle acupuncture and hyperbaric oxygenation, are compared with established methods of medical and physical therapy. Four therapy complexes are recommended: complex I (control), medication and physical therapy; complex II, acupuncture and medical therapy; complex III, acupuncture, medical and physical therapy; complex IV, hyperbaric oxygenation, medical and physical therapy. The four complexes were tested on a selected group of miners. Complexes II, III and IV were compared with the control (I) on the basis of subjective signs, objective changes in the nervous system, and the functional state of the vegetative and peripheral nervous systems. A table compares the effectiveness of complexes II, III and IV with complex I. The results confirm the effectiveness of medical and physical therapy. The application of acupuncture increases the benefits to the cardiovascular system, and hyperbaric therapy aids neurosensory hearing impairment. As a result of the investigation, acupuncture and hyperbaric therapy are recommended for the treatment of patients suffering from vibration-noise pathology, with a differential approach to their purpose. 8 references.

  14. Estimation Methods of the Point Spread Function Axial Position: A Comparative Computational Study

    Directory of Open Access Journals (Sweden)

    Javier Eduardo Diaz Zamboni

    2017-01-01

    Full Text Available Precise knowledge of the point spread function is central to any imaging system characterization. In fluorescence microscopy, point spread function (PSF) determination has become a common and obligatory task for each new experimental device, mainly due to its strong dependence on acquisition conditions. During the last decade, algorithms have been developed for the precise calculation of the PSF, which fit model parameters describing image formation on the microscope to experimental data. In order to contribute to this subject, a comparative study of three parameter estimation methods is reported, namely I-divergence minimization (MIDIV), maximum likelihood (ML) and non-linear least squares (LSQR). They were applied to the estimation of the point source position on the optical axis, using a physical model. The methods' performance was evaluated under different conditions and noise levels using synthetic images, considering success percentage, iteration number, computation time, accuracy and precision. The main results showed that axial position estimation requires a high SNR to achieve an acceptable success level, and a higher one still to approach the estimation error lower bound. ML achieved a higher success percentage at lower SNR compared to MIDIV and LSQR with an intrinsic noise source. Only the ML and MIDIV methods achieved the error lower bound, and only with data belonging to the optical axis and high SNR. Extrinsic noise sources worsened the success percentage, but no difference between noise sources was found for any of the methods studied.
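
    The LSQR variant reduces to fitting a parametric axial profile to noisy intensity samples. In the sketch below a Gaussian axial profile stands in for the physical PSF model of the paper, and the data are synthetic.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def axial_model(z, amp, z0, sigma, bg):
        """Assumed Gaussian axial intensity profile with background."""
        return amp * np.exp(-0.5 * ((z - z0) / sigma) ** 2) + bg

    # Synthetic data: true source at z0 = 0.35 um with shot-noise-like noise
    rng = np.random.default_rng(4)
    z = np.linspace(-2.0, 2.0, 81)                       # microns
    truth = axial_model(z, 1.0, 0.35, 0.6, 0.05)
    data = rng.poisson(truth * 500) / 500.0

    # Non-linear least-squares fit recovers the axial position z0
    popt, _ = curve_fit(axial_model, z, data, p0=(1.0, 0.0, 1.0, 0.0))
    print(f"estimated z0 = {popt[1]:.3f} um (true 0.35)")
    ```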

  15. Estimation of effective brain connectivity with dual Kalman filter and EEG source localization methods.

    Science.gov (United States)

    Rajabioun, Mehdi; Nasrabadi, Ali Motie; Shamsollahi, Mohammad Bagher

    2017-09-01

    Effective connectivity is one of the most important considerations in brain functional mapping via EEG. It demonstrates the effects of a particular active brain region on others. In this paper, a new method is proposed based on the dual Kalman filter. In this method, a brain activity localization method (standardized low-resolution brain electromagnetic tomography) is first applied to the EEG signal to extract the active regions, and an appropriate temporal model (a multivariate autoregressive model) is fitted to the extracted active sources to evaluate the activity and the time dependence between sources. Then, the dual Kalman filter is used to estimate the model parameters, i.e. the effective connectivity between active regions. The advantage of this method is the estimation of the activity of different brain parts simultaneously with the calculation of the effective connectivity between active regions. By combining dual Kalman filter w