WorldWideScience

Sample records for multiple methods approach

  1. Adjusted permutation method for multiple attribute decision making with meta-heuristic solution approaches

    Directory of Open Access Journals (Sweden)

    Hossein Karimi

    2011-04-01

    The permutation method of multiple attribute decision making has two significant deficiencies: high computational time and wrong priority output in some problem instances. In this paper, a novel permutation method called the adjusted permutation method (APM) is proposed to compensate for the deficiencies of the conventional permutation method. We propose Tabu search (TS) and particle swarm optimization (PSO) to find suitable solutions at a reasonable computational time for large problem instances. The proposed method is examined on several numerical examples to evaluate its performance. The preliminary results show that both approaches provide competent solutions in relatively reasonable amounts of time, while TS performs better in solving APM.

  2. MANGO: a new approach to multiple sequence alignment.

    Science.gov (United States)

    Zhang, Zefeng; Lin, Hao; Li, Ming

    2007-01-01

    Multiple sequence alignment is a classical and challenging task for biological sequence analysis. The problem is NP-hard. The full dynamic programming takes too much time. The progressive alignment heuristics adopted by most state-of-the-art multiple sequence alignment programs suffer from the 'once a gap, always a gap' phenomenon. Is there a radically new way to do multiple sequence alignment? This paper introduces a novel and orthogonal multiple sequence alignment method, using multiple optimized spaced seeds and new algorithms to handle these seeds efficiently. Our new algorithm processes information of all sequences as a whole, avoiding problems caused by the popular progressive approaches. Because the optimized spaced seeds are provably significantly more sensitive than the consecutive k-mers, the new approach promises to be more accurate and reliable. To validate our new approach, we have implemented MANGO: Multiple Alignment with N Gapped Oligos. Experiments were carried out on large 16S RNA benchmarks showing that MANGO compares favorably, in both accuracy and speed, against state-of-the-art multiple sequence alignment methods, including ClustalW 1.83, MUSCLE 3.6, MAFFT 5.861, ProbConsRNA 1.11, Dialign 2.2.1, DIALIGN-T 0.2.1, T-Coffee 4.85, POA 2.0 and Kalign 2.0.
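
    To make the spaced-seed idea concrete, here is a minimal Python sketch of spaced-seed hit detection between two sequences; the seed pattern and toy sequences are our own illustrative choices, not MANGO's optimized seeds.

      # Minimal sketch of spaced-seed matching (illustrative seed, not MANGO's).
      # A '1' position must match; a '0' position is a wildcard, which is what
      # makes spaced seeds more sensitive than consecutive k-mers.
      def seed_hits(seq_a, seq_b, seed="1101011"):
          care = [k for k, c in enumerate(seed) if c == "1"]
          span = len(seed)
          return [(i, j)
                  for i in range(len(seq_a) - span + 1)
                  for j in range(len(seq_b) - span + 1)
                  if all(seq_a[i + k] == seq_b[j + k] for k in care)]

      print(seed_hits("ACGTACGT", "ACCTACGT"))  # hit at (0, 0): the mismatch falls on a '0'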

  3. Multiple flood vulnerability assessment approach based on fuzzy comprehensive evaluation method and coordinated development degree model.

    Science.gov (United States)

    Yang, Weichao; Xu, Kui; Lian, Jijian; Bin, Lingling; Ma, Chao

    2018-05-01

    Flooding is a serious challenge that increasingly affects residents as well as policymakers, and flood vulnerability assessment is becoming increasingly relevant worldwide. The purpose of this study is to develop an approach that reveals the relationship between exposure, sensitivity and adaptive capacity for better flood vulnerability assessment, based on the fuzzy comprehensive evaluation method (FCEM) and the coordinated development degree model (CDDM). The approach is organized into three parts: establishment of the index system; assessment of exposure, sensitivity and adaptive capacity; and multiple flood vulnerability assessment. A hydrodynamic model and statistical data are employed for the establishment of the index system; FCEM is used to evaluate exposure, sensitivity and adaptive capacity; and CDDM is applied to express the relationship of the three components of vulnerability. Six multiple flood vulnerability types and four levels are proposed to assess flood vulnerability from multiple perspectives. The approach is then applied to assess the spatial distribution of flood vulnerability in eastern Hainan, China. Based on the results of the multiple flood vulnerability assessment, a decision-making process for the rational allocation of limited resources is proposed and applied to the study area. The study shows that multiple flood vulnerability assessment can evaluate vulnerability more completely and help decision makers make more comprehensive, better-informed decisions. In summary, this study provides a new way for flood vulnerability assessment and disaster prevention decision making. Copyright © 2018 Elsevier Ltd. All rights reserved.
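
    The core FCEM computation is a weighted aggregation of a fuzzy membership matrix; the toy numbers below are invented purely to illustrate the arithmetic, not taken from the study.

      import numpy as np

      # Fuzzy comprehensive evaluation, weighted-average form: B = W . R
      W = np.array([0.5, 0.3, 0.2])        # weights of three indicators
      R = np.array([[0.1, 0.6, 0.3],       # memberships of indicator 1
                    [0.4, 0.4, 0.2],       # ... in three vulnerability levels
                    [0.7, 0.2, 0.1]])
      B = W @ R                            # composite membership vector
      print(B, "-> level", B.argmax())     # [0.31 0.46 0.23] -> level 1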

  4. Multiple Shooting and Time Domain Decomposition Methods

    CERN Document Server

    Geiger, Michael; Körkel, Stefan; Rannacher, Rolf

    2015-01-01

    This book offers a comprehensive collection of the most advanced numerical techniques for the efficient and effective solution of simulation and optimization problems governed by systems of time-dependent differential equations. The contributions present various approaches to time domain decomposition, focusing on multiple shooting and parareal algorithms. The range of topics covers theoretical analysis of the methods, as well as their algorithmic formulation and guidelines for practical implementation. Selected examples show that the discussed approaches are mandatory for the solution of challenging practical problems. The practicability and efficiency of the presented methods are illustrated by several case studies from fluid dynamics, data compression, image processing and computational biology, giving rise to possible new research topics. This volume, resulting from the workshop Multiple Shooting and Time Domain Decomposition Methods, held in Heidelberg in May 2013, will be of great interest to applied...
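
    As a concrete illustration of the multiple shooting idea around which the volume is organized, the sketch below solves a simple two-point boundary value problem by splitting the time domain into segments and enforcing continuity at the joints; the test problem and tolerances are our own choices, not an example from the book.

      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import fsolve

      # Multiple shooting for y'' = -y, y(0) = 0, y(pi/2) = 1 (exact: sin t).
      T, n_seg = np.pi / 2, 4
      nodes = np.linspace(0.0, T, n_seg + 1)

      def f(t, y):                      # first-order form: y = (y, y')
          return [y[1], -y[0]]

      def shoot(y0, t0, t1):            # integrate one segment
          return solve_ivp(f, (t0, t1), y0, rtol=1e-10).y[:, -1]

      def residual(u):
          s = u.reshape(n_seg, 2)       # unknown states at segment starts
          ends = [shoot(s[k], nodes[k], nodes[k + 1]) for k in range(n_seg)]
          res = [s[0, 0] - 0.0]         # left boundary condition
          for k in range(n_seg - 1):    # continuity at interior joints
              res.extend(ends[k] - s[k + 1])
          res.append(ends[-1][0] - 1.0) # right boundary condition
          return res

      u = fsolve(residual, np.ones(2 * n_seg))
      print("y'(0) =", u[1])            # ~1.0, i.e. cos(0)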

  5. Multiple histogram method and static Monte Carlo sampling

    NARCIS (Netherlands)

    Inda, M.A.; Frenkel, D.

    2004-01-01

    We describe an approach to use multiple-histogram methods in combination with static, biased Monte Carlo simulations. To illustrate this, we computed the force-extension curve of an athermal polymer from multiple histograms constructed in a series of static Rosenbluth Monte Carlo simulations. From

  6. Feedback structure based entropy approach for multiple-model estimation

    Institute of Scientific and Technical Information of China (English)

    Shen-tu Han; Xue Anke; Guo Yunfei

    2013-01-01

    The variable-structure multiple-model (VSMM) approach, one of the multiple-model (MM) methods, is a popular and effective approach for handling problems with mode uncertainties. Model sequence set adaptation (MSA) is the key to designing a better VSMM algorithm. However, MSA methods in the literature leave considerable room for improvement, both theoretically and practically. To this end, we propose a feedback-structure-based entropy approach that can find the model sequence sets with the smallest size under certain conditions. The filtered data are fed back in real time and can be used by the minimum entropy (ME) based VSMM algorithms, i.e., MEVSMM. Firstly, full Markov chains are used to achieve optimal solutions. Secondly, the myopic method together with a particle filter (PF) and the challenge match algorithm are used to achieve sub-optimal solutions, a trade-off between practicability and optimality. The numerical results show that the proposed algorithm provides not only refined model sets but also a good robustness margin and very high accuracy.

  7. A multiple regression method for genomewide association studies ...

    Indian Academy of Sciences (India)

    Bujun Mei

    2018-06-07

    Jun 7, 2018 ... Similar to the typical genomewide association tests using LD ... new approach performed validly when the multiple regression based on linkage method was employed. .... the model, two groups of scenarios were simulated.

  8. Optimal planning approaches with multiple impulses for rendezvous based on hybrid genetic algorithm and control method

    Directory of Open Access Journals (Sweden)

    JingRui Zhang

    2015-03-01

    In this article, we focus on the safe and effective completion of a rendezvous and docking task, studying planning approaches and control for fuel-optimal rendezvous with a target spacecraft on a near-circular reference orbit. A variety of existing practical path constraints are considered, including constraints on the field of view, impulses, and passive safety. A rendezvous approach is calculated by using a hybrid genetic algorithm under those constraints. Furthermore, a trajectory-tracking control method is adopted to overcome external disturbances. Based on the Clohessy–Wiltshire equations, we first construct the mathematical model of optimal planning approaches of multiple impulses with path constraints. Second, we introduce the principle of the hybrid genetic algorithm, which combines strong global searching ability with strong local searching ability, and explain its application to the trajectory planning problem. Then, we give three-impulse simulation examples to acquire an optimal rendezvous trajectory with the path constraints presented in this article. The effectiveness and applicability of the tracking control method are verified through numerical simulation, with the optimal trajectory above as the control objective.

  9. Approaches to Mixed Methods Dissemination and Implementation Research: Methods, Strengths, Caveats, and Opportunities.

    Science.gov (United States)

    Green, Carla A; Duan, Naihua; Gibbons, Robert D; Hoagwood, Kimberly E; Palinkas, Lawrence A; Wisdom, Jennifer P

    2015-09-01

    Limited translation of research into practice has prompted study of diffusion and implementation, and development of effective methods of encouraging adoption, dissemination and implementation. Mixed methods techniques offer approaches for assessing and addressing processes affecting implementation of evidence-based interventions. We describe common mixed methods approaches used in dissemination and implementation research, discuss strengths and limitations of mixed methods approaches to data collection, and suggest promising methods not yet widely used in implementation research. We review qualitative, quantitative, and hybrid approaches to mixed methods dissemination and implementation studies, and describe methods for integrating multiple methods to increase depth of understanding while improving reliability and validity of findings.

  10. A Collaborative Neurodynamic Approach to Multiple-Objective Distributed Optimization.

    Science.gov (United States)

    Yang, Shaofu; Liu, Qingshan; Wang, Jun

    2018-04-01

    This paper is concerned with multiple-objective distributed optimization. Based on objective weighting and decision space decomposition, a collaborative neurodynamic approach to multiobjective distributed optimization is presented. In the approach, a system of collaborative neural networks is developed to search for Pareto optimal solutions, where each neural network is associated with one objective function and given constraints. Sufficient conditions are derived for ascertaining the convergence to a Pareto optimal solution of the collaborative neurodynamic system. In addition, it is proved that each connected subsystem can generate a Pareto optimal solution when the communication topology is disconnected. Then, a switching-topology-based method is proposed to compute multiple Pareto optimal solutions for discretized approximation of Pareto front. Finally, simulation results are discussed to substantiate the performance of the collaborative neurodynamic approach. A portfolio selection application is also given.

  11. Novel Approach to Tourism Analysis with Multiple Outcome Capability Using Rough Set Theory

    Directory of Open Access Journals (Sweden)

    Chun-Che Huang

    2016-12-01

    Exploring the relationship between tourist characteristics and decision-making outcomes is critical to keeping a tourism business competitive. In the investigation of tourism development, most existing studies lack a systematic approach to analyzing qualitative data. Although the traditional Rough Set (RS) based approach is an excellent classification method in qualitative modeling, it cannot deal with the case of multiple outcomes, which is a common situation in tourism. Consequently, the Multiple Outcome Reduct Generation (MORG) and Multiple Outcome Rule Extraction (MORE) approaches based on RS are proposed to handle multiple outcomes. This study proposes a ranking-based approach to induct meaningful reducts and ensure the strength and robustness of decision rules, which helps decision makers understand tourists' characteristics in a tourism case.

  12. The Multiple Intelligences Teaching Method and Mathematics ...

    African Journals Online (AJOL)

    The Multiple Intelligences teaching approach has evolved and been embraced widely especially in the United States. The approach has been found to be very effective in changing situations for the better, in the teaching and learning of any subject especially mathematics. Multiple Intelligences teaching approach proposes ...

  13. Electromagnetic imaging of multiple-scattering small objects: non-iterative analytical approach

    International Nuclear Information System (INIS)

    Chen, X; Zhong, Y

    2008-01-01

    The multiple signal classification (MUSIC) imaging method and the least squares method are applied to solve the electromagnetic inverse scattering problem of determining the locations and polarization tensors of a collection of small objects embedded in a known background medium. Based on the analysis of induced electric and magnetic dipoles, the proposed MUSIC method is able to deal with special scenarios, arising from the shapes and materials of the objects, to which standard MUSIC does not apply. After the locations of the objects are obtained, the nonlinear inverse problem of determining the polarization tensors of the objects, accounting for multiple scattering between them, is solved by a non-iterative analytical approach based on the least squares method.

  14. A retrospective likelihood approach for efficient integration of multiple omics factors in case-control association studies.

    Science.gov (United States)

    Balliu, Brunilda; Tsonaka, Roula; Boehringer, Stefan; Houwing-Duistermaat, Jeanine

    2015-03-01

    Integrative omics, the joint analysis of outcome and multiple types of omics data, such as genomics, epigenomics, and transcriptomics data, constitutes a promising approach for powerful and biologically relevant association studies. These studies often employ a case-control design, and often include nonomics covariates, such as age and gender, that may modify the underlying omics risk factors. An open question is how to best integrate multiple omics and nonomics information to maximize statistical power in case-control studies that ascertain individuals based on the phenotype. Recent work on integrative omics has used prospective approaches, modeling case-control status conditional on omics and nonomics risk factors. Compared to univariate approaches, jointly analyzing multiple risk factors with a prospective approach increases power in nonascertained cohorts. However, these prospective approaches often lose power in case-control studies. In this article, we propose a novel statistical method for integrating multiple omics and nonomics factors in case-control association studies. Our method is based on a retrospective likelihood function that models the joint distribution of omics and nonomics factors conditional on case-control status. The new method provides accurate control of the Type I error rate and has increased efficiency over prospective approaches in both simulated and real data. © 2015 Wiley Periodicals, Inc.

  15. Application of algorithms and artificial-intelligence approach for locating multiple harmonics in distribution systems

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Y.-Y.; Chen, Y.-C. [Chung Yuan University (China). Dept. of Electrical Engineering

    1999-05-01

    A new method is proposed for locating multiple harmonic sources in distribution systems. The proposed method first determines the proper locations for metering measurement using fuzzy clustering. Next, an artificial neural network based on the back-propagation approach is used to identify the most likely location for multiple harmonic sources. A set of systematic algorithmic steps is developed until all harmonic locations are identified. The simulation results for an 18-busbar system show that the proposed method is very efficient in locating the multiple harmonics in a distribution system. (author)

  16. A feature point identification method for positron emission particle tracking with multiple tracers

    Energy Technology Data Exchange (ETDEWEB)

    Wiggins, Cody, E-mail: cwiggin2@vols.utk.edu [University of Tennessee-Knoxville, Department of Physics and Astronomy, 1408 Circle Drive, Knoxville, TN 37996 (United States); Santos, Roque [University of Tennessee-Knoxville, Department of Nuclear Engineering (United States); Escuela Politécnica Nacional, Departamento de Ciencias Nucleares (Ecuador); Ruggles, Arthur [University of Tennessee-Knoxville, Department of Nuclear Engineering (United States)

    2017-01-21

    A novel detection algorithm for Positron Emission Particle Tracking (PEPT) with multiple tracers based on optical feature point identification (FPI) methods is presented. This new method, the FPI method, is compared to a previous multiple PEPT method via analyses of experimental and simulated data. The FPI method outperforms the older method in cases of large particle numbers and fine time resolution. Simulated data show the FPI method to be capable of identifying 100 particles at 0.5 mm average spatial error. Detection error is seen to vary with the inverse square root of the number of lines of response (LORs) used for detection and increases as particle separation decreases. - Highlights: • A new approach to positron emission particle tracking is presented. • Using optical feature point identification analogs, multiple particle tracking is achieved. • Method is compared to previous multiple particle method. • Accuracy and applicability of method is explored.

  17. A data-driven multiplicative fault diagnosis approach for automation processes.

    Science.gov (United States)

    Hao, Haiyang; Zhang, Kai; Ding, Steven X; Chen, Zhiwen; Lei, Yaguo

    2014-09-01

    This paper presents a new data-driven method for diagnosing multiplicative key performance degradation in automation processes. Different from the well-established additive fault diagnosis approaches, the proposed method aims at identifying those low-level components which increase the variability of process variables and cause performance degradation. Based on process data, features of multiplicative fault are extracted. To identify the root cause, the impact of fault on each process variable is evaluated in the sense of contribution to performance degradation. Then, a numerical example is used to illustrate the functionalities of the method and Monte-Carlo simulation is performed to demonstrate the effectiveness from the statistical viewpoint. Finally, to show the practical applicability, a case study on the Tennessee Eastman process is presented. Copyright © 2013. Published by Elsevier Ltd.

  18. Systematic approach to optimize a pretreatment method for ultrasensitive liquid chromatography with tandem mass spectrometry analysis of multiple target compounds in biological samples.

    Science.gov (United States)

    Togashi, Kazutaka; Mutaguchi, Kuninori; Komuro, Setsuko; Kataoka, Makoto; Yamazaki, Hiroshi; Yamashita, Shinji

    2016-08-01

    In current approaches for new drug development, highly sensitive and robust analytical methods for the determination of test compounds in biological samples are essential. These analytical methods should be optimized for every target compound. However, for biological samples that contain multiple compounds as new drug candidates obtained by cassette dosing tests, it would be preferable to develop a single method that allows the determination of all compounds at once. This study aims to establish a systematic approach that enables a selection of the most appropriate pretreatment method for multiple target compounds without the use of their chemical information. We investigated the retention times of 27 known compounds under different mobile phase conditions and determined the required pretreatment of human plasma samples using several solid-phase and liquid-liquid extractions. From the relationship between retention time and recovery in a principal component analysis, appropriate pretreatments were categorized into several types. Based on the category, we have optimized a pretreatment method for the identification of three calcium channel blockers in human plasma. Plasma concentrations of these drugs in a cassette-dose clinical study at microdose level were successfully determined with a lower limit of quantitation of 0.2 pg/mL for diltiazem, 1 pg/mL for nicardipine, and 2 pg/mL for nifedipine. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Per-Sample Multiple Kernel Approach for Visual Concept Learning

    Directory of Open Access Journals (Sweden)

    Ling-Yu Duan

    2010-01-01

    Learning visual concepts from images is an important yet challenging problem in computer vision and multimedia research areas. Multiple kernel learning (MKL) methods have shown great advantages in visual concept learning. As a visual concept often exhibits great appearance variance, a canonical MKL approach may not generate satisfactory results when a uniform kernel combination is applied over the input space. In this paper, we propose a per-sample multiple kernel learning (PS-MKL) approach that takes intraclass diversity into account to improve discrimination. PS-MKL determines sample-wise kernel weights according to kernel functions and training samples. Kernel weights as well as kernel-based classifiers are jointly learned. For efficient learning, PS-MKL employs a sample selection strategy. Extensive experiments are carried out over three benchmark datasets of different characteristics: Caltech101, WikipediaMM, and Pascal VOC'07. PS-MKL has achieved encouraging performance, comparable to the state of the art, and has outperformed a canonical MKL.

  20. Per-Sample Multiple Kernel Approach for Visual Concept Learning

    Directory of Open Access Journals (Sweden)

    Tian Yonghong

    2010-01-01

    Learning visual concepts from images is an important yet challenging problem in computer vision and multimedia research areas. Multiple kernel learning (MKL) methods have shown great advantages in visual concept learning. As a visual concept often exhibits great appearance variance, a canonical MKL approach may not generate satisfactory results when a uniform kernel combination is applied over the input space. In this paper, we propose a per-sample multiple kernel learning (PS-MKL) approach that takes intraclass diversity into account to improve discrimination. PS-MKL determines sample-wise kernel weights according to kernel functions and training samples. Kernel weights as well as kernel-based classifiers are jointly learned. For efficient learning, PS-MKL employs a sample selection strategy. Extensive experiments are carried out over three benchmark datasets of different characteristics: Caltech101, WikipediaMM, and Pascal VOC'07. PS-MKL has achieved encouraging performance, comparable to the state of the art, and has outperformed a canonical MKL.

  1. A linear multiple balance method for discrete ordinates neutron transport equations

    International Nuclear Information System (INIS)

    Park, Chang Je; Cho, Nam Zin

    2000-01-01

    A linear multiple balance method (LMB) is developed to provide more accurate and positive solutions for the discrete ordinates neutron transport equations. In this multiple balance approach, one mesh cell is divided into two subcells with a quadratic approximation of the angular flux distribution. Four multiple balance equations are used to relate the center angular flux with the average angular flux by Simpson's rule. From the analysis of the spatial truncation error, the accuracy of the linear multiple balance scheme is O(Δ^4), whereas that of diamond differencing is O(Δ^2). To accelerate the linear multiple balance method, we also describe a simplified additive angular dependent rebalance factor scheme which combines a modified boundary projection acceleration scheme and the angular dependent rebalance factor acceleration scheme. It is demonstrated, via Fourier analysis of a simple model problem as well as numerical calculations, that the additive angular dependent rebalance factor acceleration scheme is unconditionally stable with spectral radius < 0.2069c (c being the scattering ratio). The numerical results tested so far on slab-geometry discrete ordinates transport problems show that the linear multiple balance solution method is effective and sufficiently efficient.
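
    For orientation, the Simpson-rule relation mentioned above ties the cell-average angular flux to the incoming, center, and outgoing fluxes; a minimal sketch of the relation, assuming the standard three-point Simpson weights over the two subcells, is

      \bar{\psi}_m \;=\; \frac{1}{6}\left(\psi_{m,\mathrm{in}} + 4\,\psi_{m,C} + \psi_{m,\mathrm{out}}\right),

    with one such relation per discrete direction m; the multiple balance equations then close the system for the subcell unknowns.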

  2. MFAM: Multiple Frequency Adaptive Model-Based Indoor Localization Method.

    Science.gov (United States)

    Tuta, Jure; Juric, Matjaz B

    2018-03-24

    This paper presents MFAM (Multiple Frequency Adaptive Model-based localization method), a novel model-based indoor localization method that is capable of using multiple wireless signal frequencies simultaneously. It utilizes an indoor architectural model and physical properties of wireless signal propagation through objects and space. The motivation for developing a multiple-frequency localization method lies in the future Wi-Fi standards (e.g., 802.11ah) and the growing number of various wireless signals present in buildings (e.g., Wi-Fi, Bluetooth, ZigBee, etc.). Current indoor localization methods mostly rely on a single wireless signal type and often require many devices to achieve the necessary accuracy. MFAM utilizes multiple wireless signal types and improves the localization accuracy over the usage of a single frequency. It continuously monitors signal propagation through space and adapts the model according to the changes indoors. Using multiple signal sources lowers the required number of access points for a specific signal type while utilizing signals already present indoors. Due to the unavailability of 802.11ah hardware, we have evaluated the proposed method with similar signals: 2.4 GHz Wi-Fi and 868 MHz HomeMatic home automation signals. We performed the evaluation in a modern two-bedroom apartment and measured a mean localization error of 2.0 to 2.3 m and a median error of 2.0 to 2.2 m. Based on our evaluation results, using two different signals improves the localization accuracy by 18% in comparison to a 2.4 GHz Wi-Fi-only approach. Additional signals would improve the accuracy even further. We have shown that MFAM provides better accuracy than competing methods, while having several advantages for real-world usage.

  3. MFAM: Multiple Frequency Adaptive Model-Based Indoor Localization Method

    Directory of Open Access Journals (Sweden)

    Jure Tuta

    2018-03-01

    This paper presents MFAM (Multiple Frequency Adaptive Model-based localization method), a novel model-based indoor localization method that is capable of using multiple wireless signal frequencies simultaneously. It utilizes an indoor architectural model and physical properties of wireless signal propagation through objects and space. The motivation for developing a multiple-frequency localization method lies in the future Wi-Fi standards (e.g., 802.11ah) and the growing number of various wireless signals present in buildings (e.g., Wi-Fi, Bluetooth, ZigBee, etc.). Current indoor localization methods mostly rely on a single wireless signal type and often require many devices to achieve the necessary accuracy. MFAM utilizes multiple wireless signal types and improves the localization accuracy over the usage of a single frequency. It continuously monitors signal propagation through space and adapts the model according to the changes indoors. Using multiple signal sources lowers the required number of access points for a specific signal type while utilizing signals already present indoors. Due to the unavailability of 802.11ah hardware, we have evaluated the proposed method with similar signals: 2.4 GHz Wi-Fi and 868 MHz HomeMatic home automation signals. We performed the evaluation in a modern two-bedroom apartment and measured a mean localization error of 2.0 to 2.3 m and a median error of 2.0 to 2.2 m. Based on our evaluation results, using two different signals improves the localization accuracy by 18% in comparison to a 2.4 GHz Wi-Fi-only approach. Additional signals would improve the accuracy even further. We have shown that MFAM provides better accuracy than competing methods, while having several advantages for real-world usage.

  4. A data fusion approach for track monitoring from multiple in-service trains

    Science.gov (United States)

    Lederman, George; Chen, Siheng; Garrett, James H.; Kovačević, Jelena; Noh, Hae Young; Bielak, Jacobo

    2017-10-01

    We present a data fusion approach for enabling data-driven rail-infrastructure monitoring from multiple in-service trains. A number of researchers have proposed using vibration data collected from in-service trains as a low-cost method to monitor track geometry. The majority of this work has focused on developing novel features to extract information about the tracks from data produced by individual sensors on individual trains. We extend this work by presenting a technique to combine extracted features from multiple passes over the tracks from multiple sensors aboard multiple vehicles. There are a number of challenges in combining multiple data sources, like different relative position coordinates depending on the location of the sensor within the train. Furthermore, as the number of sensors increases, the likelihood that some will malfunction also increases. We use a two-step approach that first minimizes position offset errors through data alignment, then fuses the data with a novel adaptive Kalman filter that weights data according to its estimated reliability. We show the efficacy of this approach both through simulations and on a data-set collected from two instrumented trains operating over a one-year period. Combining data from numerous in-service trains allows for more continuous and more reliable data-driven monitoring than analyzing data from any one train alone; as the number of instrumented trains increases, the proposed fusion approach could facilitate track monitoring of entire rail-networks.
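
    The heart of the reliability-weighted fusion step can be illustrated with inverse-variance weighting, the static special case of a Kalman update; the numbers below are invented, and the paper's actual filter is adaptive rather than this one-shot combination.

      import numpy as np

      # Fuse three estimates of the same track feature from three passes;
      # the unreliable third pass (large error variance) gets a tiny weight.
      z = np.array([1.02, 0.95, 3.40])     # per-pass estimates
      var = np.array([0.01, 0.02, 4.00])   # estimated error variances
      w = (1.0 / var) / np.sum(1.0 / var)  # inverse-variance weights
      print(np.sum(w * z))                 # fused estimate, ~1.0
      print(1.0 / np.sum(1.0 / var))       # fused variance < min(var)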

  5. The slice balance approach (SBA): a characteristic-based, multiple balance SN approach on unstructured polyhedral meshes

    International Nuclear Information System (INIS)

    Grove, R.E.

    2005-01-01

    The Slice Balance Approach (SBA) is an approach for solving geometrically-complex, neutral-particle transport problems within a multi-group discrete ordinates (SN) framework. The salient feature is an angle-dependent spatial decomposition. We approximate general surfaces with arbitrary polygonal faces and mesh the geometry with arbitrarily-shaped polyhedral cells. A cell-local spatial decomposition divides cells into angle-dependent slices for each SN direction. This subdivision follows from a characteristic-based view of the transport problem. Most balance-based characteristic methods use it implicitly; we use it explicitly and exploit its properties. Our mathematical approach is a multiple balance approach using exact spatial moments balance equations on cells and slices along with auxiliary relations on slices. We call this the slice balance approach; it is a characteristic-based multiple balance approach. The SBA is intentionally general and can extend differencing schemes to arbitrary 2-D and 3-D meshes. This work contributes to the development of general-geometry deterministic transport capability to complement Monte Carlo capability for large, geometrically-complex transport problems. The purpose of this paper is to describe the SBA. We describe the spatial decomposition and mathematical framework and highlight a few interesting properties. We sketch the derivation of two solution schemes, a step characteristic scheme and a diamond-difference-like scheme, to illustrate the approach, and we present interesting results for a 2-D problem. (author)

  6. Neutron source multiplication method

    International Nuclear Information System (INIS)

    Clayton, E.D.

    1985-01-01

    Extensive use has been made of neutron source multiplication in thousands of measurements of critical masses and configurations and in subcritical neutron-multiplication measurements in situ that provide data for criticality prevention and control in nuclear materials operations. There is continuing interest in developing reliable methods for monitoring the reactivity, or k_eff, of plant operations, but the required measurements are difficult to carry out and interpret on the far subcritical configurations usually encountered. The relationship between neutron multiplication and reactivity is briefly discussed and data presented to illustrate problems associated with the absolute measurement of neutron multiplication and reactivity in subcritical systems. A number of curves of inverse multiplication have been selected from a variety of experiments showing variations observed in multiplication during the course of critical and subcritical experiments where different methods of reactivity addition were used, with different neutron source detector position locations. Concern is raised regarding the meaning and interpretation of k_eff as might be measured in a far subcritical system because of the modal effects and spectrum differences that exist between the subcritical and critical systems. Because of this, the calculation of k_eff identical with unity for the critical assembly, although necessary, may not be sufficient to assure safety margins in calculations pertaining to far subcritical systems. Further study is needed on the interpretation and meaning of k_eff in the far subcritical system.
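
    The relationship between multiplication and reactivity that the discussion refers to is, in the simplest point-source model, given by the relation below; real configurations depart from it for exactly the modal and spectral reasons the abstract raises.

      M \;=\; \frac{1}{1 - k_{\mathrm{eff}}}, \qquad \frac{1}{M} \;=\; 1 - k_{\mathrm{eff}} \;\longrightarrow\; 0 \quad \text{as } k_{\mathrm{eff}} \to 1,

    which is why experimenters plot inverse multiplication 1/M against the reactivity-addition variable and extrapolate to zero to estimate the critical configuration.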

  7. Combining morphometric evidence from multiple registration methods using dempster-shafer theory

    Science.gov (United States)

    Rajagopalan, Vidya; Wyatt, Christopher

    2010-03-01

    In tensor-based morphometry (TBM), group-wise differences in brain structure are measured using high degree-of-freedom registration and some form of statistical test. However, it is known that TBM results are sensitive to both the registration method and the statistical test used. Given the lack of an objective model of group variation, it is difficult to determine a best registration method for TBM. The use of statistical tests is also problematic given the corrections required for multiple testing and the notorious difficulty of selecting and interpreting significance values. This paper presents an approach to address both of these issues by combining multiple registration methods using Dempster-Shafer evidence theory to produce belief maps of categorical changes between groups. This approach is applied to the comparison of brain morphometry in aging, a typical application of TBM, using the determinant of the Jacobian as a measure of volume change. We show that the Dempster-Shafer combination produces a unique and easy-to-interpret belief map of regional changes between and within groups without the complications associated with hypothesis testing.
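
    The combination step uses Dempster's rule; the sketch below combines two mass functions over a toy frame of discernment (per-voxel change categories), with the mass values invented for illustration.

      from itertools import product

      # Dempster's rule of combination for two evidence sources (e.g. two
      # registration methods voting on per-voxel change categories).
      def combine(m1, m2):
          raw, conflict = {}, 0.0
          for (a, x), (b, y) in product(m1.items(), m2.items()):
              inter = a & b
              if inter:
                  raw[inter] = raw.get(inter, 0.0) + x * y
              else:
                  conflict += x * y           # mass on disjoint hypotheses
          return {k: v / (1.0 - conflict) for k, v in raw.items()}

      m1 = {frozenset({"atrophy"}): 0.6, frozenset({"atrophy", "none"}): 0.4}
      m2 = {frozenset({"atrophy"}): 0.5, frozenset({"none"}): 0.3,
            frozenset({"atrophy", "none"}): 0.2}
      print(combine(m1, m2))   # belief concentrates on {'atrophy'}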

  8. A nonparametric multiple imputation approach for missing categorical data

    Directory of Open Access Journals (Sweden)

    Muhan Zhou

    2017-06-01

    Background: Incomplete categorical variables with more than two categories are common in public health data. However, most of the existing missing-data methods do not use the information from nonresponse (missingness) probabilities. Methods: We propose a nearest-neighbour multiple imputation approach to impute a missing at random categorical outcome and to estimate the proportion of each category. The donor set for imputation is formed by measuring distances between each missing value and the non-missing values. The distance function is calculated based on a predictive score, which is derived from two working models: one fits a multinomial logistic regression for predicting the missing categorical outcome (the outcome model) and the other fits a logistic regression for predicting missingness probabilities (the missingness model). A weighting scheme is used to accommodate contributions from the two working models when generating the predictive score. A missing value is imputed by randomly selecting one of the non-missing values with the smallest distances. We conduct a simulation to evaluate the performance of the proposed method and compare it with several alternative methods. A real-data application is also presented. Results: The simulation study suggests that the proposed method performs well when missingness probabilities are not extreme under some misspecifications of the working models. However, the calibration estimator, which is also based on two working models, can be highly unstable when missingness probabilities for some observations are extremely high. In this scenario, the proposed method produces more stable and better estimates. In addition, proper weights need to be chosen to balance the contributions from the two working models and achieve optimal results for the proposed method. Conclusions: We conclude that the proposed multiple imputation method is a reasonable approach to dealing with missing categorical outcome data with

  9. Integrated Markov-neural reliability computation method: A case for multiple automated guided vehicle system

    International Nuclear Information System (INIS)

    Fazlollahtabar, Hamed; Saidi-Mehrabad, Mohammad; Balakrishnan, Jaydeep

    2015-01-01

    This paper proposes an integrated Markovian and back-propagation neural network approach to compute the reliability of a system. Since the states of failure occurrences are significant elements for accurate reliability computation, a Markovian-based reliability assessment method is designed. Due to the drawbacks shown by the Markovian model for steady-state reliability computations and by the neural network for the initial training pattern, an integration called Markov-neural is developed and evaluated. To show the efficiency of the proposed approach, comparative analyses are performed. Also, for managerial implication purposes, an application case for multiple automated guided vehicles (AGVs) in manufacturing networks is conducted. - Highlights: • Integrated Markovian and back propagation neural network approach to compute reliability. • Markovian based reliability assessment method. • Managerial implication is shown in an application case for multiple automated guided vehicles (AGVs) in manufacturing networks

  10. Filter multiplexing by use of spatial Code Division Multiple Access approach.

    Science.gov (United States)

    Solomon, Jonathan; Zalevsky, Zeev; Mendlovic, David; Monreal, Javier Garcia

    2003-02-10

    The increasing popularity of optical communication has also brought a demand for a broader bandwidth. The trend, naturally, was to implement methods from traditional electronic communication. One of the most effective traditional methods is Code Division Multiple Access. In this research, we suggest the use of this approach for spatial coding applied to images. The approach is to multiplex several filters into one plane while keeping their mutual orthogonality. It is shown that if the filters are limited by their bandwidth, the output of all the filters can be sampled in the original image resolution and fully recovered through an all-optical setup. The theoretical analysis of such a setup is verified in an experimental demonstration.
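
    The orthogonality that makes the multiplexing work is the same as in electronic CDMA; a one-dimensional numeric analogue with Walsh-Hadamard codes, using invented coefficients, is sketched below.

      import numpy as np
      from scipy.linalg import hadamard

      # Spread two "filter" values with orthogonal Walsh codes, sum them
      # into one plane, and recover each by correlation with its code.
      H = hadamard(4)                        # rows: mutually orthogonal codes
      c1, c2 = H[1], H[2]
      plane = 3.0 * c1 + (-2.0) * c2         # single multiplexed plane
      print(plane @ c1 / 4, plane @ c2 / 4)  # -> 3.0 -2.0 recovered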

  11. A time warping approach to multiple sequence alignment.

    Science.gov (United States)

    Arribas-Gil, Ana; Matias, Catherine

    2017-04-25

    We propose an approach for multiple sequence alignment (MSA) derived from the dynamic time warping viewpoint and recent techniques of curve synchronization developed in the context of functional data analysis. Starting from pairwise alignments of all the sequences (viewed as paths in a certain space), we construct a median path that represents the MSA we are looking for. We establish a proof of concept that our method could be an interesting ingredient to include in refined MSA techniques. We present a simple synthetic experiment as well as the study of a benchmark dataset, together with comparisons with two widely used MSA programs.
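
    The pairwise alignments that feed the median-path construction rest on the classic dynamic time warping recurrence; a minimal sketch over toy numeric sequences (not biological data) follows.

      import numpy as np

      # Dynamic time warping: D[i, j] is the cheapest warped alignment cost
      # of x[:i] and y[:j]; steps are insertion, deletion, or match.
      def dtw(x, y):
          D = np.full((len(x) + 1, len(y) + 1), np.inf)
          D[0, 0] = 0.0
          for i in range(1, len(x) + 1):
              for j in range(1, len(y) + 1):
                  cost = abs(x[i - 1] - y[j - 1])
                  D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
          return D[len(x), len(y)]

      print(dtw([0, 1, 2, 3], [0, 1, 1, 2, 3]))  # 0.0: pure time warping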

  12. Dynamic reflexivity in action: an armchair walkthrough of a qualitatively driven mixed-method and multiple methods study of mindfulness training in schoolchildren.

    Science.gov (United States)

    Cheek, Julianne; Lipschitz, David L; Abrams, Elizabeth M; Vago, David R; Nakamura, Yoshio

    2015-06-01

    Dynamic reflexivity is central to enabling flexible and emergent qualitatively driven inductive mixed-method and multiple methods research designs. Yet too often, such reflexivity, and how it is used at various points of a study, is absent when we write our research reports. Instead, reports of mixed-method and multiple methods research focus on what was done rather than how it came to be done. This article seeks to redress this absence of emphasis on the reflexive thinking underpinning the way that mixed- and multiple methods, qualitatively driven research approaches are thought about and subsequently used throughout a project. Using Morse's notion of an armchair walkthrough, we excavate and explore the layers of decisions we made about how, and why, to use qualitatively driven mixed-method and multiple methods research in a study of mindfulness training (MT) in schoolchildren. © The Author(s) 2015.

  13. Comparison of Methods to Trace Multiple Subskills: Is LR-DBN Best?

    Science.gov (United States)

    Xu, Yanbo; Mostow, Jack

    2012-01-01

    A long-standing challenge for knowledge tracing is how to update estimates of multiple subskills that underlie a single observable step. We characterize approaches to this problem by how they model knowledge tracing, fit its parameters, predict performance, and update subskill estimates. Previous methods allocated blame or credit among subskills…

  14. Searching for intermediate-mass black holes in galaxies with low-luminosity AGN: a multiple-method approach

    Science.gov (United States)

    Koliopanos, F.; Ciambur, B.; Graham, A.; Webb, N.; Coriat, M.; Mutlu-Pakdil, B.; Davis, B.; Godet, O.; Barret, D.; Seigar, M.

    2017-10-01

    Intermediate Mass Black Holes (IMBHs) are predicted by a variety of models and are the likely seeds for supermassive BHs (SMBHs). However, we have yet to establish their existence. One method by which we can discover IMBHs is to measure the mass of an accreting BH using X-ray and radio observations, drawing on the correlation between radio luminosity, X-ray luminosity and BH mass known as the fundamental plane of BH activity (FP-BH). Furthermore, the mass of BHs in the centers of galaxies can be estimated using scaling relations between BH mass and galactic properties. We are initiating a campaign to search for IMBH candidates in dwarf galaxies with low-luminosity AGN, using - for the first time - three different scaling relations and the FP-BH simultaneously. In this first stage of our campaign, we measure the masses of seven LLAGN that have previously been suggested to host central IMBHs, investigate the consistency between the predictions of the BH scaling relations and the FP-BH in the low-mass regime, and demonstrate that this multiple-method approach provides a robust average mass prediction. In my talk, I will discuss our methodology, the results and the next steps of this campaign.
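
    The FP-BH is an empirical plane in (log L_R, log L_X, log M_BH) space; schematically, and with coefficients that depend on the fit adopted (the values often quoted from Merloni et al. 2003 are roughly xi_X ≈ 0.60 and xi_M ≈ 0.78),

      \log L_R \;=\; \xi_X \,\log L_X \;+\; \xi_M \,\log M_{\mathrm{BH}} \;+\; b,

    so a measured pair (L_R, L_X) can be inverted for an estimate of M_BH.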

  15. Resampling-based methods in single and multiple testing for equality of covariance/correlation matrices.

    Science.gov (United States)

    Yang, Yang; DeGruttola, Victor

    2012-06-22

    Traditional resampling-based tests for homogeneity in covariance matrices across multiple groups resample residuals, that is, data centered by group means. These residuals do not share the same second moments when the null hypothesis is false, which makes them difficult to use in the setting of multiple testing. An alternative approach is to resample standardized residuals, data centered by group sample means and standardized by group sample covariance matrices. This approach, however, has been observed to inflate type I error when sample size is small or data are generated from heavy-tailed distributions. We propose to improve this approach by using robust estimation for the first and second moments. We discuss two statistics: the Bartlett statistic and a statistic based on eigen-decomposition of sample covariance matrices. Both statistics can be expressed in terms of standardized errors under the null hypothesis. These methods are extended to test homogeneity in correlation matrices. Using simulation studies, we demonstrate that the robust resampling approach provides comparable or superior performance, relative to traditional approaches, for single testing and reasonable performance for multiple testing. The proposed methods are applied to data collected in an HIV vaccine trial to investigate possible determinants, including vaccine status, vaccine-induced immune response level and viral genotype, of unusual correlation pattern between HIV viral load and CD4 count in newly infected patients.

  16. A Bayesian trans-dimensional approach for the fusion of multiple geophysical datasets

    Science.gov (United States)

    JafarGandomi, Arash; Binley, Andrew

    2013-09-01

    We propose a Bayesian fusion approach to integrate multiple geophysical datasets with different coverage and sensitivity. The fusion strategy is based on the capability of various geophysical methods to provide enough resolution to identify either subsurface material parameters or subsurface structure, or both. We focus on electrical resistivity as the target material parameter and electrical resistivity tomography (ERT), electromagnetic induction (EMI), and ground penetrating radar (GPR) as the set of geophysical methods. However, extending the approach to different sets of geophysical parameters and methods is straightforward. Different geophysical datasets are entered into a trans-dimensional Markov chain Monte Carlo (McMC) search-based joint inversion algorithm. The trans-dimensional property of the McMC algorithm allows dynamic parameterisation of the model space, which in turn helps to avoid bias of the post-inversion results towards a particular model. Given that we are attempting to develop an approach that has practical potential, we discretize the subsurface into an array of one-dimensional earth-models. Accordingly, the ERT data that are collected by using two-dimensional acquisition geometry are recast into a set of equivalent vertical electric soundings. Different data are inverted either individually or jointly to estimate one-dimensional subsurface models at discrete locations. We use Shannon's information measure to quantify the information obtained from the inversion of different combinations of geophysical datasets. Information from multiple methods is brought together via introducing a joint likelihood function and/or constraining the prior information. A Bayesian maximum entropy approach is used for spatial fusion of the spatially dispersed estimated one-dimensional models and mapping of the target parameter. We illustrate the approach with a synthetic dataset and then apply it to a field dataset. We show that the proposed fusion strategy is

  17. Ensemble approach combining multiple methods improves human transcription start site prediction

    LENUS (Irish Health Repository)

    Dineen, David G

    2010-11-30

    Background: The computational prediction of transcription start sites is an important unsolved problem. Some recent progress has been made, but many promoters, particularly those not associated with CpG islands, are still difficult to locate using current methods. These methods use different features and training sets, along with a variety of machine learning techniques, and result in different prediction sets. Results: We demonstrate the heterogeneity of current prediction sets, and take advantage of this heterogeneity to construct a two-level classifier ('Profisi Ensemble') using predictions from 7 programs, along with 2 other data sources. Support vector machines using 'full' and 'reduced' data sets are combined in an either/or approach. We achieve a 14% increase in performance over the current state-of-the-art, as benchmarked by a third-party tool. Conclusions: Supervised learning methods are a useful way to combine predictions from diverse sources.

  18. Multiple sequential failure model: A probabilistic approach to quantifying human error dependency

    International Nuclear Information System (INIS)

    Samanta

    1985-01-01

    This paper presents a probabilistic approach to quantifying human error dependency when multiple tasks are performed. Dependent human failures are dominant contributors to risks from nuclear power plants. An overview of the Multiple Sequential Failure (MSF) model developed and its use in probabilistic risk assessments (PRAs), depending on the available data, is discussed. A small-scale psychological experiment was conducted on the nature of human dependency, and the interpretation of the experimental data by the MSF model shows remarkable accommodation of the dependent failure data. The model, which provides a unique method for the quantification of dependent failures in human reliability analysis, can be used in conjunction with any of the general methods currently used for performing the human reliability aspect of PRAs.

  19. Analytic Methods for Evaluating Patterns of Multiple Congenital Anomalies in Birth Defect Registries.

    Science.gov (United States)

    Agopian, A J; Evans, Jane A; Lupo, Philip J

    2018-01-15

    It is estimated that 20 to 30% of infants with birth defects have two or more birth defects. Among these infants with multiple congenital anomalies (MCA), co-occurring anomalies may represent either chance (i.e., unrelated etiologies) or pathogenically associated patterns of anomalies. While some MCA patterns have been recognized and described (e.g., known syndromes), others have not been identified or characterized. Elucidating these patterns may result in a better understanding of the etiologies of these MCAs. This article reviews the literature with regard to analytic methods that have been used to evaluate patterns of MCAs, in particular those using birth defect registry data. A popular method for MCA assessment involves a comparison of the observed to expected ratio for a given combination of MCAs, or one of several modified versions of this comparison. Other methods include the use of numerical taxonomy or other clustering techniques, multiple regression analysis, and log-linear analysis. Advantages and disadvantages of these approaches, as well as specific applications, are outlined. Despite the availability of multiple analytic approaches, relatively few MCA combinations have been assessed. The availability of large birth defects registries and computing resources that allow for automated, big-data strategies for prioritizing MCA patterns may provide new avenues for better understanding the co-occurrence of birth defects. Thus, the selection of an analytic approach may depend on several considerations. Birth Defects Research 110:5-11, 2018. © 2017 Wiley Periodicals, Inc.
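
    The observed-to-expected comparison mentioned above is simple to state; in the toy computation below the registry counts are invented, and the expected count assumes the two defects occur independently.

      # Observed-to-expected ratio for a two-defect combination A+B.
      N = 100_000            # infants with birth defects in the registry
      n_a, n_b = 2_000, 500  # infants with defect A, with defect B
      n_ab = 40              # infants observed with both A and B
      expected = N * (n_a / N) * (n_b / N)  # independence: 10.0
      print(n_ab / expected)                # O/E = 4.0 -> candidate pattern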

  20. Determination of 226Ra contamination depth in soil using the multiple photopeaks method

    International Nuclear Information System (INIS)

    Haddad, Kh.; Al-Masri, M.S.; Doubal, A.W.

    2014-01-01

    Radioactive contamination presents a diverse range of challenges in many industries. Determination of radioactive contamination depth plays a vital role in the assessment of contaminated sites, because it can be used to estimate the activity content. It is traditionally determined by measuring the activity distribution along the depth. This approach gives accurate results, but it is time consuming, lengthy and costly. In this work, the multiple photopeaks method was developed for 226Ra contamination depth determination in NORM-contaminated soil using in-situ gamma spectrometry. The developed method is based on a linear correlation between the attenuation ratio of different gamma lines emitted by 214Bi and the 226Ra contamination depth. Although this method is approximate, it is much simpler, faster and cheaper than the traditional one. The method can be applied to any multiple-gamma-emitter contaminant. -- Highlights: • The multiple photopeaks method was developed for 226Ra contamination depth determination using in-situ gamma spectrometry. • The method is based on a linear correlation between the attenuation ratio of 214Bi gamma lines and the 226Ra contamination depth. • This method is simpler, faster and cheaper than the traditional one; it can be applied to any multiple-gamma contaminant
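
    A minimal sketch of the physics behind the correlation, assuming a 214Bi source buried at depth d under uniform soil with linear attenuation coefficients mu_1 and mu_2 at the two gamma energies, is

      \frac{I_1}{I_2} \;=\; \frac{I_1^{0}}{I_2^{0}}\, e^{-(\mu_1-\mu_2)\,d} \quad\Longrightarrow\quad d \;=\; \frac{1}{\mu_2-\mu_1}\,\ln\!\left(\frac{I_1/I_2}{I_1^{0}/I_2^{0}}\right),

    so the log of the measured two-line ratio is linear in depth, which is the correlation the method exploits.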

  1. EMUDRA: Ensemble of Multiple Drug Repositioning Approaches to Improve Prediction Accuracy.

    Science.gov (United States)

    Zhou, Xianxiao; Wang, Minghui; Katsyv, Igor; Irie, Hanna; Zhang, Bin

    2018-04-24

    Availability of large-scale genomic, epigenetic and proteomic data in complex diseases makes it possible to objectively and comprehensively identify therapeutic targets that can lead to new therapies. The Connectivity Map has been widely used to explore novel indications of existing drugs. However, the prediction accuracy of the existing methods, such as the Kolmogorov-Smirnov statistic, remains low. Here we present a novel high-performance drug repositioning approach that improves over the state-of-the-art methods. We first designed an expression weighted cosine method (EWCos) to minimize the influence of uninformative expression changes and then developed an ensemble approach termed EMUDRA (Ensemble of Multiple Drug Repositioning Approaches) to integrate EWCos and three existing state-of-the-art methods. EMUDRA significantly outperformed the individual drug repositioning methods when applied to simulated and independent evaluation datasets. Using EMUDRA, we predicted and experimentally validated the antibiotic rifabutin as an inhibitor of cell growth in triple negative breast cancer. EMUDRA can identify drugs that more effectively target disease gene signatures and will thus be a useful tool for identifying novel therapies for complex diseases and predicting new indications for existing drugs. The EMUDRA R package is available at doi:10.7303/syn11510888. bin.zhang@mssm.edu or zhangb@hotmail.com. Supplementary data are available at Bioinformatics online.
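
    The expression-weighted cosine idea can be sketched as below; the specific weighting (by signature magnitude) and the toy profiles are our assumptions for illustration, not the paper's exact formulation.

      import numpy as np

      # Weighted cosine between a disease signature and a drug profile;
      # weights down-weight genes with uninformative (small) changes.
      def weighted_cosine(sig, drug, w):
          num = np.sum(w * sig * drug)
          den = np.sqrt(np.sum(w * sig**2) * np.sum(w * drug**2))
          return num / den

      sig = np.array([2.1, -1.8, 0.1, 0.05])    # disease log-fold changes
      drug = np.array([-1.9, 1.5, 0.2, -0.1])   # drug log-fold changes
      print(weighted_cosine(sig, drug, np.abs(sig)))  # ~ -1: reversal candidate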

  2. A Multiple Criteria Decision Making Method Based on Relative Value Distances

    Directory of Open Access Journals (Sweden)

    Shyur Huan-jyh

    2015-12-01

    This paper proposes a new multiple criteria decision-making method called ERVD (election based on relative value distances). An S-shaped value function is adopted to replace the expected utility function to describe the risk-averse and risk-seeking behavior of decision makers. Comparisons and experiments contrasting it with the TOPSIS (Technique for Order Preference by Similarity to the Ideal Solution) method are carried out to verify the feasibility of using the proposed method to represent decision makers' preferences in the decision-making process. Our experimental results show that the proposed approach is an appropriate and effective MCDM method.
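
    S-shaped value functions of this kind usually take the prospect-theory form below, concave for gains and convex (and steeper, via loss aversion lambda > 1) for losses relative to a reference point r; whether ERVD uses exactly this parameterization is an assumption on our part.

      v(x) \;=\; \begin{cases} (x - r)^{\alpha}, & x \ge r,\\ -\lambda\,(r - x)^{\beta}, & x < r, \end{cases} \qquad 0 < \alpha,\beta \le 1,\; \lambda > 1.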

  3. A new fast method for inferring multiple consensus trees using k-medoids.

    Science.gov (United States)

    Tahiri, Nadia; Willems, Matthieu; Makarenkov, Vladimir

    2018-04-05

    Gene trees carry important information about specific evolutionary patterns which characterize the evolution of the corresponding gene families. However, a reliable species consensus tree cannot be inferred from a multiple sequence alignment of a single gene family or from the concatenation of alignments corresponding to gene families having different evolutionary histories. These evolutionary histories can be quite different due to horizontal transfer events or to ancient gene duplications which cause the emergence of paralogs within a genome. Many methods have been proposed to infer a single consensus tree from a collection of gene trees. Still, the application of these tree merging methods can lead to the loss of specific evolutionary patterns which characterize some gene families or some groups of gene families. Thus, the problem of inferring multiple consensus trees from a given set of gene trees becomes relevant. We describe a new fast method for inferring multiple consensus trees from a given set of phylogenetic trees (i.e. additive trees or X-trees) defined on the same set of species (i.e. objects or taxa). The traditional consensus approach yields a single consensus tree. We use the popular k-medoids partitioning algorithm to divide a given set of trees into several clusters of trees. We propose novel versions of the well-known Silhouette and Caliński-Harabasz cluster validity indices that are adapted for tree clustering with k-medoids. The efficiency of the new method was assessed using both synthetic and real data, such as a well-known phylogenetic dataset consisting of 47 gene trees inferred for 14 archaeal organisms. The method described here allows inference of multiple consensus trees from a given set of gene trees. It can be used to identify groups of gene trees having similar intragroup and different intergroup evolutionary histories. The main advantage of our method is that it is much faster than the existing tree clustering approaches, while

  4. Approaches to data analysis of multiple-choice questions

    OpenAIRE

    Lin Ding; Robert Beichner

    2009-01-01

    This paper introduces five commonly used approaches to analyzing multiple-choice test data. They are classical test theory, factor analysis, cluster analysis, item response theory, and model analysis. Brief descriptions of the goals and algorithms of these approaches are provided, together with examples illustrating their applications in physics education research. We minimize mathematics, instead placing emphasis on data interpretation using these approaches.

  5. A novel approach for multiple mobile objects path planning: Parametrization method and conflict resolution strategy

    International Nuclear Information System (INIS)

    Ma, Yong; Wang, Hongwei; Zamirian, M.

    2012-01-01

    We present a new approach containing two steps to determine conflict-free paths for mobile objects in two and three dimensions with moving obstacles. Firstly, the shortest path of each object is set as the goal function, subject to collision-avoidance criteria, path smoothness, and velocity and acceleration constraints. This problem is formulated as a calculus of variations problem (CVP). Using the parametrization method, the CVP is converted into time-varying nonlinear programming problems (TNLPP) and then solved. Secondly, the move sequence of the objects is assigned by a priority scheme, and conflicts are resolved by a multilevel conflict resolution strategy. The efficiency of the approach is confirmed by numerical examples. -- Highlights: ► An approach with a parametrization method and conflict resolution strategy is proposed. ► The approach fits multi-object path planning in two and three dimensions. ► Single-object path planning and multi-object conflict resolution are applied in order. ► The path of each object is obtained with the parametrization method in the first phase. ► Conflict-free paths are obtained by multi-object conflict resolution in the second phase.

  6. Whole-body voxel-based personalized dosimetry: Multiple voxel S-value approach for heterogeneous media with non-uniform activity distributions.

    Science.gov (United States)

    Lee, Min Sun; Kim, Joong Hyun; Paeng, Jin Chul; Kang, Keon Wook; Jeong, Jae Min; Lee, Dong Soo; Lee, Jae Sung

    2017-12-14

    Personalized dosimetry with high accuracy is becoming more important because of the growing interest in personalized medicine and targeted radionuclide therapy. Voxel-based dosimetry using dose point kernel or voxel S-value (VSV) convolution is available. However, these approaches do not consider medium heterogeneity. Here, we propose a new method for whole-body voxel-based personalized dosimetry for heterogeneous media with non-uniform activity distributions, referred to as the multiple VSV approach. Methods: Multiple (N) VSVs for media with different densities covering the whole-body density range were used, instead of only a single VSV for water. The VSVs were pre-calculated using GATE Monte Carlo simulation and convolved with the time-integrated activity to generate density-specific dose maps. Computed tomography-based segmentation was conducted to generate binary maps for each density region. The final dose map was acquired by summation of the N segmented density-specific dose maps. We tested several sets of VSVs with different densities: N = 1 (single water VSV), 4, 6, 8, 10, and 20. To validate the proposed method, phantom and patient studies were conducted and compared with direct Monte Carlo, which was considered the ground truth. Finally, patient dosimetry (10 subjects) was conducted using the multiple VSV approach and compared with the single VSV and organ-based dosimetry approaches. Errors at the voxel and organ levels were reported for eight organs. Results: In the phantom and patient studies, the multiple VSV approach showed significant improvements in voxel-level errors, especially for the lung and bone regions. As N increased, voxel-level errors decreased, although some overestimations were observed at lung boundaries. In the case of multiple VSVs (N = 8), we achieved voxel-level errors of 2.06%. In the dosimetry study, our proposed method showed much improved results compared to the single VSV and
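
    The summation at the heart of the multiple VSV approach can be sketched in a few lines: convolve the time-integrated activity with each density-specific kernel and keep the result only inside the matching CT density bin. In the Python sketch below the kernels and bin edges are placeholder inputs, not validated dosimetry data.

        import numpy as np
        from scipy.signal import fftconvolve

        def multiple_vsv_dose(tia, density, vsv_kernels, density_edges):
            # tia: time-integrated activity map; density: CT density map.
            dose = np.zeros_like(tia, dtype=float)
            for kernel, (lo, hi) in zip(vsv_kernels, density_edges):
                mask = (density >= lo) & (density < hi)  # binary density map
                dose += mask * fftconvolve(tia, kernel, mode="same")
            return dose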

  7. Fault diagnosis of sensor networked structures with multiple faults using a virtual beam based approach

    Science.gov (United States)

    Wang, H.; Jing, X. J.

    2017-07-01

    This paper presents a virtual beam based approach suitable for conducting diagnosis of multiple faults in complex structures with limited prior knowledge of the faults involved. The "virtual beam", a recently-proposed concept for fault detection in complex structures, is applied, which consists of a chain of sensors representing a vibration energy transmission path embedded in the complex structure. Statistical tests and adaptive threshold are particularly adopted for fault detection due to limited prior knowledge of normal operational conditions and fault conditions. To isolate the multiple faults within a specific structure or substructure of a more complex one, a 'biased running' strategy is developed and embedded within the bacterial-based optimization method to construct effective virtual beams and thus to improve the accuracy of localization. The proposed method is easy and efficient to implement for multiple fault localization with limited prior knowledge of normal conditions and faults. With extensive experimental results, it is validated that the proposed method can localize both single fault and multiple faults more effectively than the classical trust index subtract on negative add on positive (TI-SNAP) method.

  8. Predicting Speech Intelligibility with a Multiple Speech Subsystems Approach in Children with Cerebral Palsy

    Science.gov (United States)

    Lee, Jimin; Hustad, Katherine C.; Weismer, Gary

    2014-01-01

    Purpose: Speech acoustic characteristics of children with cerebral palsy (CP) were examined with a multiple speech subsystems approach; speech intelligibility was evaluated using a prediction model in which acoustic measures were selected to represent three speech subsystems. Method: Nine acoustic variables reflecting different subsystems, and…

  9. On multiple level-set regularization methods for inverse problems

    International Nuclear Information System (INIS)

    DeCezaro, A; Leitão, A; Tai, X-C

    2009-01-01

    We analyze a multiple level-set method for solving inverse problems with piecewise constant solutions. This method corresponds to an iterated Tikhonov method for a particular Tikhonov functional G_α based on TV–H¹ penalization. We define generalized minimizers for our Tikhonov functional and establish an existence result. Moreover, we prove convergence and stability results for the proposed Tikhonov method. A multiple level-set algorithm is derived from the first-order optimality conditions for the Tikhonov functional G_α, similarly to the iterated Tikhonov method. The proposed multiple level-set method is tested on an inverse potential problem. Numerical experiments show that the method is able to recover multiple objects as well as multiple contrast levels.
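
    The parameterization behind such methods can be illustrated with two smoothed level-set functions encoding up to four constant levels, which is how multiple objects and contrast levels are represented. The Python sketch below shows only this representation step, with illustrative coefficients; it is not the paper's full iterated Tikhonov algorithm.

        import numpy as np

        def smoothed_heaviside(phi, eps=0.1):
            # Smooth approximation of the Heaviside step function.
            return 0.5 * (1.0 + np.tanh(phi / eps))

        def piecewise_constant(phi1, phi2, c=(0.0, 1.0, 2.0, 3.0)):
            # Two level-set functions partition the domain into four
            # regions, each mapped to one constant contrast level.
            H1, H2 = smoothed_heaviside(phi1), smoothed_heaviside(phi2)
            return (c[0] * H1 * H2 + c[1] * H1 * (1 - H2)
                    + c[2] * (1 - H1) * H2 + c[3] * (1 - H1) * (1 - H2))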

  10. Approaches to data analysis of multiple-choice questions

    Directory of Open Access Journals (Sweden)

    Lin Ding

    2009-09-01

    Full Text Available This paper introduces five commonly used approaches to analyzing multiple-choice test data. They are classical test theory, factor analysis, cluster analysis, item response theory, and model analysis. Brief descriptions of the goals and algorithms of these approaches are provided, together with examples illustrating their applications in physics education research. We minimize mathematics, instead placing emphasis on data interpretation using these approaches.

  11. Approaches to Data Analysis of Multiple-Choice Questions

    Science.gov (United States)

    Ding, Lin; Beichner, Robert

    2009-01-01

    This paper introduces five commonly used approaches to analyzing multiple-choice test data. They are classical test theory, factor analysis, cluster analysis, item response theory, and model analysis. Brief descriptions of the goals and algorithms of these approaches are provided, together with examples illustrating their applications in physics…

  12. Multiple-scale approach for the expansion scaling of superfluid quantum gases

    International Nuclear Information System (INIS)

    Egusquiza, I. L.; Valle Basagoiti, M. A.; Modugno, M.

    2011-01-01

    We present a general method, based on a multiple-scale approach, for deriving the perturbative solutions of the scaling equations governing the expansion of superfluid ultracold quantum gases released from elongated harmonic traps. We discuss how to treat the secular terms appearing in the usual naive expansion in the trap asymmetry parameter ε and calculate the next-to-leading correction for the asymptotic aspect ratio, with significant improvement over the previous proposals.

  13. Multiple attenuation to reflection seismic data using Radon filter and Wave Equation Multiple Rejection (WEMR) method

    Energy Technology Data Exchange (ETDEWEB)

    Erlangga, Mokhammad Puput [Geophysical Engineering, Institut Teknologi Bandung, Ganesha Street no.10 Basic Science B Buliding fl.2-3 Bandung, 40132, West Java Indonesia puput.erlangga@gmail.com (Indonesia)

    2015-04-16

    Separation between signal and noise, whether incoherent or coherent, is important in seismic data processing. Even after processing, coherent noise can remain mixed with the primary signal, and multiple reflections are one kind of coherent noise. In this research, we processed seismic data to attenuate multiple reflections in both synthetic and real seismic data from Mentawai. There are several methods to attenuate multiple reflections; one of them is the Radon filter method, which discriminates between primary and multiple reflections in the τ-p domain based on the moveout difference between them. However, in cases where the moveout difference is too small, the Radon filter is not sufficient to attenuate the multiple reflections, and it also produces artifacts on the gathers. In addition to the Radon filter, we used the Wave Equation Multiple Rejection (WEMR) method to attenuate long-period multiple reflections based on wave equation inversion. From the inversion of the wave equation and the magnitude of the seismic wave amplitude observed at the free surface, we obtain the water-bottom reflectivity, which is used to eliminate the multiple reflections. The WEMR method does not depend on the moveout difference to attenuate long-period multiple reflections and can therefore be applied to seismic data with small moveout differences, such as the Mentawai seismic data, whose small moveout difference is caused by the limited far offset of only 705 m. We compared the real multiple-free stacked data after processing with the Radon filter and with the WEMR process. The conclusion is that the WEMR method attenuates long-period multiple reflections on the real (Mentawai) seismic data better than the Radon filter method.

  14. Creative Approaches to Teaching Graduate Research Methods Workshops

    OpenAIRE

    Peter Reilly

    2017-01-01

    Engagement and deeper learning were enhanced by developing several innovative teaching strategies delivered in Research Methods workshops to Graduate Business Students, focusing primarily on students adopting a creative approach to formulating a valid research question for undertaking a dissertation successfully. These techniques are applicable to most subject domains to ensure student engagement, addressing the various multiple intelligences and learning styles existing within groups while...

  15. Comparing the index-flood and multiple-regression methods using L-moments

    Science.gov (United States)

    Malekinezhad, H.; Nachtnebel, H. P.; Klik, A.

    In arid and semi-arid regions, the length of records is usually too short to ensure reliable quantile estimates. Comparing index-flood and multiple-regression analyses based on L-moments was the main objective of this study. Factor analysis was applied to determine the main variables influencing flood magnitude. Ward’s cluster and L-moments approaches were applied to several sites in the Namak-Lake basin in central Iran to delineate homogeneous regions based on site characteristics. The homogeneity test was done using L-moments-based measures. Several distributions were fitted to the regional flood data, and the index-flood and multiple-regression methods were compared as two regional flood frequency methods. The results of factor analysis showed that length of main waterway, compactness coefficient, mean annual precipitation, and mean annual temperature were the main variables affecting flood magnitude. The study area was divided into three regions based on the Ward’s clustering approach. The homogeneity test based on L-moments showed that all three regions were acceptably homogeneous. Five distributions were fitted to the annual peak flood data of the three homogeneous regions. Using the L-moment ratios and the Z-statistic criteria, the GEV distribution was identified as the most robust among the five candidate distributions for all the proposed sub-regions of the study area, and in general, it was concluded that the generalised extreme value distribution was the best-fit distribution for all three regions. The relative root mean square error (RRMSE) measure was applied to evaluate the performance of the index-flood and multiple-regression methods in comparison with the curve fitting (plotting position) method. In general, the index-flood method gives more reliable estimations for various flood magnitudes of different recurrence intervals. Therefore, this method should be adopted as the regional flood frequency method for the study area and the Namak-Lake basin
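
    The L-moment machinery used in such regional analyses starts from sample probability-weighted moments (Hosking, 1990). The Python sketch below computes the first three sample L-moments and the L-skewness ratio on toy annual peak flows; the regional averaging, homogeneity measures and Z-statistic are omitted.

        import numpy as np

        def sample_l_moments(x):
            x = np.sort(np.asarray(x, dtype=float))
            n = x.size
            i = np.arange(1, n + 1)
            b0 = x.mean()  # probability-weighted moments b0, b1, b2
            b1 = np.sum((i - 1) / (n - 1) * x) / n
            b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
            l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
            return l1, l2, l3 / l2  # mean, L-scale, L-skewness (t3)

        peaks = np.array([120.0, 95.0, 210.0, 160.0, 80.0,
                          140.0, 175.0, 300.0, 110.0, 130.0])
        print(sample_l_moments(peaks))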

  16. Hesitant fuzzy methods for multiple criteria decision analysis

    CERN Document Server

    Zhang, Xiaolu

    2017-01-01

    The book offers a comprehensive introduction to methods for solving multiple criteria decision making and group decision making problems with hesitant fuzzy information. It reports on the authors’ latest research, as well as on others’ research, providing readers with a complete set of decision making tools, such as hesitant fuzzy TOPSIS, hesitant fuzzy TODIM, hesitant fuzzy LINMAP, hesitant fuzzy QUALIFEX, and the deviation modeling approach with heterogeneous fuzzy information. The main focus is on decision making problems in which the criteria values and/or the weights of criteria are not expressed in crisp numbers but are more suitable to be denoted as hesitant fuzzy elements. The largest part of the book is devoted to new methods recently developed by the authors to solve decision making problems in situations where the available information is vague or hesitant. These methods are presented in detail, together with their application to different type of decision-making problems. All in all, the book ...

  17. Multiple scattering approach to X-ray absorption spectroscopy

    International Nuclear Information System (INIS)

    Benfatto, M.; Wu Ziyu

    2003-01-01

    In this paper authors present the state of the art of the theoretical background needed for analyzing X-ray absorption spectra in the whole energy range. The multiple-scattering (MS) theory is presented in detail with some applications on real systems. Authors also describe recent progress in performing geometrical fitting of the XANES (X-ray absorption near-edge structure) energy region and beyond using a full multiple-scattering approach

  18. Fuzzy multiple attribute decision making methods and applications

    CERN Document Server

    Chen, Shu-Jen

    1992-01-01

    This monograph is intended for an advanced undergraduate or graduate course as well as for researchers, who want a compilation of developments in this rapidly growing field of operations research. This is a sequel to our previous works: "Multiple Objective Decision Making--Methods and Applications: A state-of-the-Art Survey" (No.164 of the Lecture Notes); "Multiple Attribute Decision Making--Methods and Applications: A State-of-the-Art Survey" (No.186 of the Lecture Notes); and "Group Decision Making under Multiple Criteria--Methods and Applications" (No.281 of the Lecture Notes). In this monograph, the literature on methods of fuzzy Multiple Attribute Decision Making (MADM) has been reviewed thoroughly and critically, and classified systematically. This study provides readers with a capsule look into the existing methods, their characteristics, and applicability to the analysis of fuzzy MADM problems. The basic concepts and algorithms from the classical MADM methods have been used in the development of the f...

  19. CURRENT APPROACHES FOR RESEARCH OF MULTIPLE SCLEROSIS BIOMARKERS

    Directory of Open Access Journals (Sweden)

    Kolyada T.I

    2016-12-01

    Full Text Available Current data concerning the features of multiple sclerosis (MS) etiology, pathogenesis, clinical course and treatment indicate the necessity of a personalized approach to the management of MS patients. These features are the variety of possible etiological factors and mechanisms that trigger the development of MS, the different courses of the disease, and significant differences in treatment efficiency. The phenotypic and pathogenetic heterogeneity of MS requires, on the one hand, the stratification of patients into groups with different treatment depending on a number of criteria, including genetic characteristics, disease course, stage of the pathological process, and form of the disease. On the other hand, it requires the use of modern methods for the assessment of individual risk of developing MS, its early diagnosis, and the evaluation and prognosis of the disease course and treatment efficiency. This approach is based on the identification and determination of biomarkers of MS, including the use of systems biology technology platforms such as genomics, proteomics, metabolomics and bioinformatics. Research and practical use of biomarkers of MS in clinical and laboratory practice requires a wide range of modern medical, biological, mathematical and physicochemical methods. The group of "classical" methods used to study MS biomarkers includes physicochemical and immunological methods aimed at the selection and identification of single molecular biomarkers, as well as methods of molecular genetic analysis. This group of methods includes ELISA, western blotting, isoelectric focusing, immunohistochemical methods, flow cytometry, and spectrophotometric and nephelometric methods. These techniques make it possible to carry out both qualitative and quantitative assays of molecular biomarkers. The group of "classical" methods can also include methods based on the polymerase chain reaction (including multiplex and allele-specific PCR) and genome sequencing

  20. Method and Excel VBA Algorithm for Modeling Master Recession Curve Using Trigonometry Approach.

    Science.gov (United States)

    Posavec, Kristijan; Giacopetti, Marco; Materazzi, Marco; Birk, Steffen

    2017-11-01

    A new method was developed and implemented into an Excel Visual Basic for Applications (VBA) algorithm utilizing trigonometry laws in an innovative way to overlap recession segments of time series and create master recession curves (MRCs). Based on a trigonometry approach, the algorithm horizontally translates succeeding recession segments of time series, placing their vertex, that is, the highest recorded value of each recession segment, directly onto the appropriate connection line defined by measurement points of a preceding recession segment. The new method and algorithm continues the development of methods and algorithms for the generation of MRC, where the first published method was based on a multiple linear/nonlinear regression model approach (Posavec et al. 2006). The newly developed trigonometry-based method was tested on real case study examples and compared with the previously published multiple linear/nonlinear regression model-based method. The results show that in some cases, that is, for some time series, the trigonometry-based method creates narrower overlaps of the recession segments, resulting in higher coefficients of determination R², while in other cases the multiple linear/nonlinear regression model-based method remains superior. The Excel VBA algorithm for modeling MRC using the trigonometry approach is implemented into a spreadsheet tool (MRCTools v3.0 written by and available from Kristijan Posavec, Zagreb, Croatia) containing the previously published VBA algorithms for MRC generation and separation. All algorithms within the MRCTools v3.0 are open access and available free of charge, supporting the idea of running science on available, open, and free of charge software. © 2017, National Ground Water Association.
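
    The translation step itself is simple to sketch: each succeeding recession segment is shifted horizontally so that its vertex lands on the curve of the preceding segment. The Python sketch below interpolates linearly between the preceding segment's points rather than using the published trigonometric construction, so it only illustrates the overlap idea.

        import numpy as np

        def shift_segment(prev_t, prev_q, seg_t, seg_q):
            # Find the time on the previous segment where discharge equals
            # the new segment's vertex discharge (discharge is decreasing,
            # so arrays are reversed for np.interp), then translate.
            t_target = np.interp(seg_q[0], prev_q[::-1], prev_t[::-1])
            return seg_t + (t_target - seg_t[0]), seg_q

        prev_t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
        prev_q = np.array([10.0, 7.0, 5.0, 3.5, 2.5])
        seg_t = np.array([20.0, 21.0, 22.0])
        seg_q = np.array([6.0, 4.2, 3.0])
        print(shift_segment(prev_t, prev_q, seg_t, seg_q)[0])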

  1. Simple and effective method of determining multiplicity distribution law of neutrons emitted by fissionable material with significant self -multiplication effect

    International Nuclear Information System (INIS)

    Yanjushkin, V.A.

    1991-01-01

    In developing new methods for the non-destructive determination of the full plutonium mass in nuclear materials and products involved in the uranium-plutonium fuel cycle from its intrinsic neutron radiation, it may be useful to know not only separate moments but the multiplicity distribution law itself of neutrons leaving the material surface, using the following as parameters: firstly, the unconditional multiplicity distribution laws of neutrons formed in spontaneous and induced fission acts of the corresponding nuclei of the given fissionable material, and the unconditional multiplicity distribution law of neutrons produced by (α,n) reactions on light nuclei of the elements composing the material's chemical structure; secondly, the probability of induced fission of the material's nuclei by an incident neutron of any origin formed during previous fissions or (α,n) reactions. An attempt to develop such a theory has been undertaken. Here the author proposes his approach to this problem. The main advantage of this approach, in our view, is its mathematical simplicity and easy implementation on a computer. In principle, the given model guarantees good accuracy at any real value of the induced fission probability, without limitations on the physico-chemical composition of the nuclear material

  2. Continuum multiple-scattering approach to electron-molecule scattering and molecular photoionization

    International Nuclear Information System (INIS)

    Dehmer, J.L.; Dill, D.

    1979-01-01

    The multiple-scattering approach to the electronic continuum of molecules is described. The continuum multiple-scattering model (CMSM) was developed as a survey tool and, as such, was required to satisfy two requirements. First, it had to have a very broad scope, which means (i) molecules of arbitrary geometry and complexity containing any atom in the periodic system, (ii) continuum electron energies from 0-1000 eV, and (iii) capability to treat a large range of processes involving both photoionization and electron scattering. Second, the structure of the theory was required to lend itself to transparent, physical interpretation of major spectral features such as shape resonances. A comprehensive theoretical framework for the continuum multiple scattering method is presented, as well as its applications to electron-molecule scattering and molecular photoionization. Highlights of recent applications in these two areas are reviewed. The major impact of the resulting studies over the last few years has been to establish the importance of shape resonances in electron collisions and photoionization of practically all (non-hydride) molecules

  3. Basic thinking patterns and working methods for multiple DFX

    DEFF Research Database (Denmark)

    Andreasen, Mogens Myrup; Mortensen, Niels Henrik

    1997-01-01

    This paper attempts to describe the theory and methodologies behind DFX and linking multiple DFX's together. The contribution is an articulation of basic thinking patterns and description of some working methods for handling multiple DFX.

  4. Decreasing Multicollinearity: A Method for Models with Multiplicative Functions.

    Science.gov (United States)

    Smith, Kent W.; Sasaki, M. S.

    1979-01-01

    A method is proposed for overcoming the problem of multicollinearity in multiple regression equations where multiplicative independent terms are entered. The method is not a ridge regression solution. (JKS)
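
    A remedy commonly associated with this literature is mean-centering the component variables before forming the product term; whether this matches the paper's exact proposal is an assumption here. The Python demo below shows how centering sharply reduces the correlation between a variable and its product term.

        import numpy as np

        rng = np.random.default_rng(1)
        x = rng.normal(5, 1, 1000)  # non-zero means inflate corr(x, x*z)
        z = rng.normal(3, 1, 1000)

        raw = np.corrcoef(x, x * z)[0, 1]
        xc, zc = x - x.mean(), z - z.mean()  # center before multiplying
        cen = np.corrcoef(xc, xc * zc)[0, 1]
        print(f"corr(x, xz) raw: {raw:.2f}, centered: {cen:.2f}")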

  5. A Hybrid One-Way ANOVA Approach for the Robust and Efficient Estimation of Differential Gene Expression with Multiple Patterns.

    Directory of Open Access Journals (Sweden)

    Mohammad Manir Hossain Mollah

    Full Text Available Identifying genes that are differentially expressed (DE) between two or more conditions with multiple patterns of expression is one of the primary objectives of gene expression data analysis. Several statistical approaches, including one-way analysis of variance (ANOVA), are used to identify DE genes. However, most of these methods provide misleading results for two or more conditions with multiple patterns of expression in the presence of outlying genes. In this paper, an attempt is made to develop a hybrid one-way ANOVA approach that unifies the robustness and efficiency of estimation using the minimum β-divergence method to overcome some problems that arise in the existing robust methods for both small- and large-sample cases with multiple patterns of expression. The proposed method relies on a β-weight function, which produces values between 0 and 1. The β-weight function with β = 0.2 is used as a measure of outlier detection. It assigns smaller weights (≥ 0) to outlying expressions and larger weights (≤ 1) to typical expressions. The distribution of the β-weights is used to calculate the cut-off point, which is compared to the observed β-weight of an expression to determine whether that gene expression is an outlier. This weight function plays a key role in unifying the robustness and efficiency of estimation in one-way ANOVA. Analyses of simulated gene expression profiles revealed that all eight methods (ANOVA, SAM, LIMMA, EBarrays, eLNN, KW, robust BetaEB and the proposed method) perform almost identically for m = 2 conditions in the absence of outliers. However, the robust BetaEB method and the proposed method exhibited considerably better performance than the other six methods in the presence of outliers. In this case, the BetaEB method exhibited slightly better performance than the proposed method for the small-sample cases, but the proposed method exhibited much better performance than the BetaEB method for both the small- and large-sample cases.
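
    The down-weighting idea can be sketched with the exponential weight used in minimum β-divergence estimation; the paper's exact form and scale estimate may differ, so the Python snippet below is illustrative only.

        import numpy as np

        def beta_weights(x, mu, sigma, beta=0.2):
            # Weights near 1 for typical expressions, near 0 for outliers.
            z = (np.asarray(x, dtype=float) - mu) / sigma
            return np.exp(-0.5 * beta * z**2)

        expr = np.array([1.0, 1.2, 0.9, 1.1, 8.0])  # last value is an outlier
        med = np.median(expr)
        mad = 1.4826 * np.median(np.abs(expr - med))  # robust scale estimate
        print(np.round(beta_weights(expr, med, mad), 3))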

  6. Strongly and weakly directed approaches to teaching multiple representation use in physics

    Directory of Open Access Journals (Sweden)

    Patrick B. Kohl

    2007-06-01

    Full Text Available Good use of multiple representations is considered key to learning physics, and so there is considerable motivation both to learn how students use multiple representations when solving problems and to learn how best to teach problem solving using multiple representations. In this study of two large-lecture algebra-based physics courses at the University of Colorado (CU) and Rutgers, the State University of New Jersey, we address both issues. Students in each of the two courses solved five common electrostatics problems of varying difficulty, and we examine their solutions to clarify the relationship between multiple representation use and performance on problems involving free-body diagrams. We also compare our data across the courses, since the two physics-education-research-based courses take substantially different approaches to teaching the use of multiple representations. The course at Rutgers takes a strongly directed approach, emphasizing specific heuristics and problem-solving strategies. The course at CU takes a weakly directed approach, modeling good problem solving without teaching a specific strategy. We find that, in both courses, students make extensive use of multiple representations, and that this use (when both complete and correct) is associated with significantly increased performance. Some minor differences in representation use exist, and are consistent with the types of instruction given. Most significant are the strong and broad similarities in the results, suggesting that either instructional approach or a combination thereof can be useful for helping students learn to use multiple representations for problem solving and concept development.

  7. Variational Approaches for the Existence of Multiple Periodic Solutions of Differential Delay Equations

    Directory of Open Access Journals (Sweden)

    Rong Cheng

    2010-01-01

    Full Text Available The existence of multiple periodic solutions of the following differential delay equation x′(t) = −f(x(t−τ)) is established by applying variational approaches directly, where x ∈ ℝ, f ∈ C(ℝ, ℝ) and τ > 0 is a given constant. This means that we do not need to use Kaplan and Yorke's reduction technique to reduce the existence problem of the above equation to an existence problem for a related coupled system. Such a reduction method, first introduced by Kaplan and Yorke in 1974, is often employed in previous papers to study the existence of periodic solutions for the above equation and similar ones by variational approaches.

  8. A Memory/Immunology-Based Control Approach with Applications to Multiple Spacecraft Formation Flying

    Directory of Open Access Journals (Sweden)

    Liguo Weng

    2013-01-01

    Full Text Available This paper addresses the problem of formation control for multiple spacecraft in the Planetary Orbital Environment (POE). Due to the presence of diverse interferences and uncertainties in outer space, such as changing spacecraft mass, unavailable space parameters, and varying gravity forces, traditional control methods encounter great difficulties in this area. A new control approach inspired by human memory and the immune system is proposed, and this approach is shown to be capable of learning from past control experience and current behavior to improve its performance. It demands much less system dynamic information than traditional controls. Both theoretical analysis and computer simulation verify its effectiveness.

  9. Measuring multiple residual-stress components using the contour method and multiple cuts

    Energy Technology Data Exchange (ETDEWEB)

    Prime, Michael B [Los Alamos National Laboratory; Swenson, Hunter [Los Alamos National Laboratory; Pagliaro, Pierluigi [U. PALERMO; Zuccarello, Bernardo [U. PALERMO

    2009-01-01

    The conventional contour method determines one component of stress over the cross section of a part. The part is cut into two, the contour of the exposed surface is measured, and Bueckner's superposition principle is analytically applied to calculate stresses. In this paper, the contour method is extended to the measurement of multiple stress components by making multiple cuts with subsequent applications of superposition. The theory and limitations are described. The theory is experimentally tested on a 316L stainless steel disk with residual stresses induced by plastically indenting the central portion of the disk. The stress results are validated against independent measurements using neutron diffraction. The theory has implications beyond just multiple cuts. The contour method measurements and calculations for the first cut reveal how the residual stresses have changed throughout the part. Subsequent measurements of partially relaxed stresses by other techniques, such as laboratory x-rays, hole drilling, or neutron or synchrotron diffraction, can be superimposed back to the original state of the body.

  10. An agent-based negotiation approach for balancing multiple coupled control domains

    DEFF Research Database (Denmark)

    Umair, Aisha; Clausen, Anders; Jørgensen, Bo Nørregaard

    2015-01-01

    Solving multi-objective multi-issue negotiation problems involving interdependent issues distributed among multiple control domains is inherent to most non-trivial cyber-physical systems. In these systems, the coordinated operation of interconnected subsystems performing autonomous control… The proposed approach can solve negotiation problems with interdependent issues across multiple coupled control domains. We demonstrate our approach by solving a coordination problem where a Combined Heat and Power Plant must allocate electricity for three commercial greenhouses to ensure the required plant

  11. Approach to Multi-Criteria Group Decision-Making Problems Based on the Best-Worst-Method and ELECTRE Method

    Directory of Open Access Journals (Sweden)

    Xinshang You

    2016-09-01

    Full Text Available This paper proposes a novel approach to cope with multi-criteria group decision-making problems. We give the pairwise comparisons based on the best-worst method (BWM), which can decrease the number of comparisons. Additionally, our comparison results are determined with the positive and negative aspects. In order to deal with the decision matrices effectively, we consider the elimination and choice translating reality (ELECTRE III) method under the intuitionistic multiplicative preference relations environment. The ELECTRE III method is designed for a double-automatic system. Under a certain limitation, without bothering the decision-makers to reevaluate the alternatives, this system can adjust some special elements that have the most influence on the group’s satisfaction degree. Moreover, the proposed method is suitable for both intuitionistic multiplicative preference relations and interval-valued fuzzy preference relations through the transformation formula. An illustrative example follows to demonstrate the rationality and availability of the novel method.

  12. Power-efficient method for IM-DD optical transmission of multiple OFDM signals.

    Science.gov (United States)

    Effenberger, Frank; Liu, Xiang

    2015-05-18

    We propose a power-efficient method for transmitting multiple frequency-division multiplexed (FDM) orthogonal frequency-division multiplexing (OFDM) signals in intensity-modulation direct-detection (IM-DD) optical systems. This method is based on quadratic soft clipping in combination with odd-only channel mapping. We show, both analytically and experimentally, that the proposed approach is capable of improving the power efficiency by about 3 dB as compared to conventional FDM OFDM signals under practical bias conditions, making it a viable solution in applications such as optical fiber-wireless integrated systems where both IM-DD optical transmission and OFDM signaling are important.
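
    A generic quadratic soft clipper, linear for small inputs with a quadratic bend toward the limit, illustrates the kind of nonlinearity involved; the paper's exact transfer function and the odd-only channel mapping details are not reproduced here. The Python sketch shows the peak-to-average power reduction on an OFDM-like Gaussian waveform.

        import numpy as np

        def quadratic_soft_clip(x, lim=1.0):
            # y = x - sign(x) * x^2 / (4*lim) for |x| < 2*lim, else +/-lim;
            # continuous, with zero slope at the clipping level.
            return np.where(np.abs(x) < 2 * lim,
                            x - np.sign(x) * x**2 / (4 * lim),
                            np.sign(x) * lim)

        rng = np.random.default_rng(0)
        ofdm = rng.normal(0, 1, 4096)  # OFDM-like Gaussian waveform
        papr = lambda s: 10 * np.log10(np.max(s**2) / np.mean(s**2))
        print(f"PAPR before: {papr(ofdm):.1f} dB, "
              f"after: {papr(quadratic_soft_clip(ofdm, 1.5)):.1f} dB")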

  13. Optimal Route Searching with Multiple Dynamical Constraints—A Geometric Algebra Approach

    Directory of Open Access Journals (Sweden)

    Dongshuang Li

    2018-05-01

    Full Text Available The process of searching for a dynamic constrained optimal path has received increasing attention in traffic planning, evacuation, and personalized or collaborative traffic services. As most existing multiple constrained optimal path (MCOP) methods cannot search for a path given various types of constraints that dynamically change during the search, few approaches for the dynamic multiple constrained optimal path (DMCOP) problem with type II dynamics are available for practical use. In this study, we develop a method to solve the DMCOP problem with type II dynamics based on the unification of various types of constraints under a geometric algebra (GA) framework. In our method, the network topology and three different types of constraints are represented by using algebraic base coding. With a parameterized optimization of the MCOP algorithm based on a greedy search strategy under the generation-refinement paradigm, this algorithm is found to accurately support the discovery of optimal paths as the constraints of numerical values, nodes, and route structure types are dynamically added to the network. The algorithm was tested with simulated cases of optimal tourism route searches in China’s road networks with various combinations of constraints. The case study indicates that our algorithm can not only solve the DMCOP problem with different types of constraints but also use constraints to speed up the route filtering.

  14. Hybrid MCDA Methods to Integrate Multiple Ecosystem Services in Forest Management Planning: A Critical Review.

    Science.gov (United States)

    Uhde, Britta; Hahn, W Andreas; Griess, Verena C; Knoke, Thomas

    2015-08-01

    Multi-criteria decision analysis (MCDA) is a decision aid frequently used in the field of forest management planning. It includes the evaluation of multiple criteria such as the production of timber and non-timber forest products and tangible as well as intangible values of ecosystem services (ES). Hence, it is beneficial compared to those methods that take a purely financial perspective. Accordingly, MCDA methods are increasingly popular in the wide field of sustainability assessment. Hybrid approaches allow aggregating MCDA and, potentially, other decision-making techniques to make use of their individual benefits and leading to a more holistic view of the actual consequences that come with certain decisions. This review is providing a comprehensive overview of hybrid approaches that are used in forest management planning. Today, the scientific world is facing increasing challenges regarding the evaluation of ES and the trade-offs between them, for example between provisioning and regulating services. As the preferences of multiple stakeholders are essential to improve the decision process in multi-purpose forestry, participatory and hybrid approaches turn out to be of particular importance. Accordingly, hybrid methods show great potential for becoming most relevant in future decision making. Based on the review presented here, the development of models for the use in planning processes should focus on participatory modeling and the consideration of uncertainty regarding available information.

  15. Hybrid MCDA Methods to Integrate Multiple Ecosystem Services in Forest Management Planning: A Critical Review

    Science.gov (United States)

    Uhde, Britta; Andreas Hahn, W.; Griess, Verena C.; Knoke, Thomas

    2015-08-01

    Multi-criteria decision analysis (MCDA) is a decision aid frequently used in the field of forest management planning. It includes the evaluation of multiple criteria such as the production of timber and non-timber forest products and tangible as well as intangible values of ecosystem services (ES). Hence, it is beneficial compared to those methods that take a purely financial perspective. Accordingly, MCDA methods are increasingly popular in the wide field of sustainability assessment. Hybrid approaches allow aggregating MCDA and, potentially, other decision-making techniques to make use of their individual benefits and leading to a more holistic view of the actual consequences that come with certain decisions. This review is providing a comprehensive overview of hybrid approaches that are used in forest management planning. Today, the scientific world is facing increasing challenges regarding the evaluation of ES and the trade-offs between them, for example between provisioning and regulating services. As the preferences of multiple stakeholders are essential to improve the decision process in multi-purpose forestry, participatory and hybrid approaches turn out to be of particular importance. Accordingly, hybrid methods show great potential for becoming most relevant in future decision making. Based on the review presented here, the development of models for the use in planning processes should focus on participatory modeling and the consideration of uncertainty regarding available information.

  16. Non-animal methods to predict skin sensitization (II): an assessment of defined approaches *.

    Science.gov (United States)

    Kleinstreuer, Nicole C; Hoffmann, Sebastian; Alépée, Nathalie; Allen, David; Ashikaga, Takao; Casey, Warren; Clouet, Elodie; Cluzel, Magalie; Desprez, Bertrand; Gellatly, Nichola; Göbel, Carsten; Kern, Petra S; Klaric, Martina; Kühnl, Jochen; Martinozzi-Teissier, Silvia; Mewes, Karsten; Miyazawa, Masaaki; Strickland, Judy; van Vliet, Erwin; Zang, Qingda; Petersohn, Dirk

    2018-05-01

    Skin sensitization is a toxicity endpoint of widespread concern, for which the mechanistic understanding and concurrent necessity for non-animal testing approaches have evolved to a critical juncture, with many available options for predicting sensitization without using animals. Cosmetics Europe and the National Toxicology Program Interagency Center for the Evaluation of Alternative Toxicological Methods collaborated to analyze the performance of multiple non-animal data integration approaches for the skin sensitization safety assessment of cosmetics ingredients. The Cosmetics Europe Skin Tolerance Task Force (STTF) collected and generated data on 128 substances in multiple in vitro and in chemico skin sensitization assays selected based on a systematic assessment by the STTF. These assays, together with certain in silico predictions, are key components of various non-animal testing strategies that have been submitted to the Organization for Economic Cooperation and Development as case studies for skin sensitization. Curated murine local lymph node assay (LLNA) and human skin sensitization data were used to evaluate the performance of six defined approaches, comprising eight non-animal testing strategies, for both hazard and potency characterization. Defined approaches examined included consensus methods, artificial neural networks, support vector machine models, Bayesian networks, and decision trees, most of which were reproduced using open source software tools. Multiple non-animal testing strategies incorporating in vitro, in chemico, and in silico inputs demonstrated equivalent or superior performance to the LLNA when compared to both animal and human data for skin sensitization.

  17. Multiple scattering approach to the vibrational excitation of molecules by slow electrons

    International Nuclear Information System (INIS)

    Drukarev, G.

    1976-01-01

    Another approach to the problem of vibrational excitation of homonuclear diatomic molecules by slow electrons, possibly accompanied by rotational transitions, is presented, based on the picture of multiple scattering of an electron inside the molecule. Scattering by two fixed centers in the zero-range potential model is considered. The results indicate that multiple scattering determines the order of magnitude of the vibrational excitation cross sections in the energy region under consideration, even if the zero-range potential model is used. Also, the connection between the multiple scattering approach and the quasi-stationary molecular ion picture is established. 9 refs

  18. A General Method for QTL Mapping in Multiple Related Populations Derived from Multiple Parents

    Directory of Open Access Journals (Sweden)

    Yan AO

    2009-03-01

    Full Text Available It's well known that incorporating some existing populations derived from multiple parents may improve QTL mapping and QTL-based breeding programs. However, no general maximum likelihood method has been available for this strategy. Based on QTL mapping in multiple related populations derived from two parents, a maximum likelihood estimation method was proposed that can incorporate several populations derived from three or more parents and can also be used to handle different mating designs. Taking a circle design as an example, we conducted simulation studies to study the effect of QTL heritability and sample size on the proposed method. The results showed that under the same heritability, enhanced power of QTL detection and more precise and accurate estimation of parameters could be obtained when three F2 populations were jointly analyzed, compared with the joint analysis of any two F2 populations. Higher heritability, especially with larger sample sizes, would increase the ability of QTL detection and improve the estimation of parameters. Potential advantages of the method are as follows: firstly, the existing results of QTL mapping in single populations can be compared and integrated with each other with the proposed method, so the ability of QTL detection and the precision of QTL mapping can be improved. Secondly, owing to multiple alleles in multiple parents, the method can exploit gene resources more adequately, which will lay an important genetic groundwork for plant improvement.

  19. Beating Heart Motion Accurate Prediction Method Based on Interactive Multiple Model: An Information Fusion Approach

    Science.gov (United States)

    Xie, Weihong; Yu, Yang

    2017-01-01

    Robot-assisted motion compensated beating heart surgery has the advantage over the conventional Coronary Artery Bypass Graft (CABG) in terms of reduced trauma to the surrounding structures that leads to shortened recovery time. The severe nonlinear and diverse nature of irregular heart rhythm causes enormous difficulty for the robot to realize the clinic requirements, especially under arrhythmias. In this paper, we propose a fusion prediction framework based on the Interactive Multiple Model (IMM) estimator, allowing each model to cover a distinguishing feature of the heart motion in underlying dynamics. We find that, at normal state, the nonlinearity of the heart motion with slow time-variant changing dominates the beating process. When an arrhythmia occurs, the irregularity mode, the fast uncertainties with random patterns, becomes the leading factor of the heart motion. We deal with the prediction problem in the case of arrhythmias by estimating the state with two behavior modes which can adaptively “switch” from one to the other. Also, we employed the signal quality index to adaptively determine the switch transition probability in the framework of IMM. We conduct comparative experiments to evaluate the proposed approach with four distinguished datasets. The test results indicate that the new proposed approach reduces prediction errors significantly. PMID:29124062
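
    The mode-mixing step that lets an IMM estimator "switch" between behavior modes can be sketched compactly. The Python code below implements the standard IMM interaction (mixing) step for two modes; the mode models, transition matrix and numbers here are illustrative assumptions, not the paper's signal-quality-driven tracker.

        import numpy as np

        def imm_mix(mu, P_trans, means, covs):
            # mu: mode probabilities; P_trans[i, j]: i -> j transition prob.
            c = P_trans.T @ mu  # predicted mode probabilities
            w = (P_trans * mu[:, None]) / c[None, :]  # mixing weights w[i, j]
            m_mix, P_mix = [], []
            for j in range(len(mu)):
                mj = sum(w[i, j] * means[i] for i in range(len(mu)))
                Pj = sum(w[i, j] * (covs[i]
                         + np.outer(means[i] - mj, means[i] - mj))
                         for i in range(len(mu)))
                m_mix.append(mj)
                P_mix.append(Pj)
            return c, m_mix, P_mix

        mu = np.array([0.7, 0.3])  # normal rhythm vs arrhythmia mode
        P = np.array([[0.95, 0.05], [0.10, 0.90]])
        means = [np.array([0.0, 1.0]), np.array([0.2, 0.5])]
        covs = [np.eye(2) * 0.1, np.eye(2) * 0.5]
        print(imm_mix(mu, P, means, covs)[0])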

  20. Beating Heart Motion Accurate Prediction Method Based on Interactive Multiple Model: An Information Fusion Approach

    Directory of Open Access Journals (Sweden)

    Fan Liang

    2017-01-01

    Full Text Available Robot-assisted motion compensated beating heart surgery has the advantage over the conventional Coronary Artery Bypass Graft (CABG) in terms of reduced trauma to the surrounding structures that leads to shortened recovery time. The severe nonlinear and diverse nature of irregular heart rhythm causes enormous difficulty for the robot to realize the clinic requirements, especially under arrhythmias. In this paper, we propose a fusion prediction framework based on the Interactive Multiple Model (IMM) estimator, allowing each model to cover a distinguishing feature of the heart motion in underlying dynamics. We find that, at normal state, the nonlinearity of the heart motion with slow time-variant changing dominates the beating process. When an arrhythmia occurs, the irregularity mode, the fast uncertainties with random patterns, becomes the leading factor of the heart motion. We deal with the prediction problem in the case of arrhythmias by estimating the state with two behavior modes which can adaptively “switch” from one to the other. Also, we employed the signal quality index to adaptively determine the switch transition probability in the framework of IMM. We conduct comparative experiments to evaluate the proposed approach with four distinguished datasets. The test results indicate that the new proposed approach reduces prediction errors significantly.

  1. The importance of neurophysiological-Bobath method in multiple sclerosis

    Directory of Open Access Journals (Sweden)

    Adrian Miler

    2018-02-01

    Full Text Available Rehabilitation treatment in multiple sclerosis should be carried out continuously and can take place in hospital, ambulatory, as well as environmental conditions. In the traditional approach, it focuses on reducing the symptoms of the disease, such as paresis, spasticity, ataxia, pain, sensory disturbances, speech disorders, blurred vision, fatigue, neurogenic bladder dysfunction, and cognitive impairment. In kinesiotherapy for people with paresis, the Bobath method is among the most commonly used. Improvement can be achieved by developing the ability to maintain a correct posture in various positions (so-called postural alignment) and patterns based on corrective and equivalent responses. During the therapy, various techniques are used to inhibit pathological motor patterns and stimulate correct reactions. The creators of the method believe that each movement pattern has its own postural system, from which it can be initiated, carried out and effectively controlled. Correct movement cannot take place in a wrong position of the body. The physiotherapist discusses with the patient how to perform individual movement patterns, which protects the patient against spontaneous pathological compensation. The aim of the work is to determine the meaning and application of the Bobath method in the therapy of people with MS.

  2. HARMONIC ANALYSIS OF SVPWM INVERTER USING MULTIPLE-PULSES METHOD

    Directory of Open Access Journals (Sweden)

    Mehmet YUMURTACI

    2009-01-01

    Full Text Available Space Vector Modulation (SVM) is a popular and important PWM technique for three-phase voltage source inverters in the control of induction motors. In this study, harmonic analysis of Space Vector PWM (SVPWM) is investigated using the multiple-pulses method. The multiple-pulses method calculates the Fourier coefficients of the individual positive and negative pulses of the output PWM waveform and adds them together using the principle of superposition to calculate the Fourier coefficients of the whole PWM output signal. Harmonic magnitudes can be calculated directly by this method without linearization, look-up tables or Bessel functions. In this study, the results obtained in the application of SVPWM for values of variable parameters are compared with the results obtained with the multiple-pulses method.
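
    The superposition principle is easy to sketch: each rectangular pulse of amplitude A over [θ1, θ2] has closed-form Fourier coefficients, and the coefficients of the whole waveform are their sums. In the Python sketch below the pulse edge angles are illustrative, not actual SVPWM switching instants.

        import numpy as np

        def pwm_fourier(pulses, n_max=50):
            # pulses: list of (A, th1, th2) with angles in radians over one
            # 2*pi period; returns harmonic magnitudes for n = 1..n_max.
            mags = []
            for n in range(1, n_max + 1):
                a = sum(A / (n * np.pi) * (np.sin(n * t2) - np.sin(n * t1))
                        for A, t1, t2 in pulses)
                b = sum(A / (n * np.pi) * (np.cos(n * t1) - np.cos(n * t2))
                        for A, t1, t2 in pulses)
                mags.append(np.hypot(a, b))
            return np.array(mags)

        pulses = [(1.0, 0.2, 0.9), (1.0, 1.3, 2.6),
                  (-1.0, np.pi + 0.2, np.pi + 0.9),
                  (-1.0, np.pi + 1.3, np.pi + 2.6)]
        print(np.round(pwm_fourier(pulses)[:7], 3))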

  3. Heuristic Solution Approaches to the Double TSP with Multiple Stacks

    DEFF Research Database (Denmark)

    Petersen, Hanne Løhmann

    This paper introduces the Double Travelling Salesman Problem with Multiple Stacks and presents three different metaheuristic approaches to its solution. The Double Travelling Salesman Problem with Multiple Stacks is concerned with finding the shortest route performing pickups and deliveries in … are developed for the problem and used with each of the heuristics. Finally, some computational results are given along with lower bounds on the objective value.

  4. Heuristic Solution Approaches to the Double TSP with Multiple Stacks

    DEFF Research Database (Denmark)

    Petersen, Hanne Løhmann

    2006-01-01

    This paper introduces the Double Travelling Salesman Problem with Multiple Stacks and presents three different metaheuristic approaches to its solution. The Double Travelling Salesman Problem with Multiple Stacks is concerned with finding the shortest route performing pickups and deliveries in … are developed for the problem and used with each of the heuristics. Finally, some computational results are given along with lower bounds on the objective value.

  5. Creative Approaches to Teaching Graduate Research Methods Workshops

    Directory of Open Access Journals (Sweden)

    Peter Reilly

    2017-06-01

    Full Text Available Engagement and deeper learning were enhanced by developing several innovative teaching strategies delivered in Research Methods workshops to Graduate Business Students, focusing primarily on students adopting a creative approach to formulating a valid research question for undertaking a dissertation successfully. These techniques are applicable to most subject domains to ensure student engagement, addressing the various multiple intelligences and learning styles existing within groups while ensuring these sessions are student-centred and conducive to a collaborative learning environment. Blogs, interactive tutorials, online videos, games and posters are used to develop students’ cognitive and metacognitive abilities. Using novelty images appeals to a group’s intellectual curiosity, acting as an interpretive device to explain the value of adopting a holistic rather than analytic approach towards a topic.

  6. The concurrent multiplicative-additive approach for gauge-radar/satellite multisensor precipitation estimates

    Science.gov (United States)

    Garcia-Pintado, J.; Barberá, G. G.; Erena Arrabal, M.; Castillo, V. M.

    2010-12-01

    Objective analysis schemes (OAS), also called "successive correction methods" or "observation nudging", have been proposed for multisensor precipitation estimation combining remote sensing data (meteorological radar or satellite) with data from ground-based raingauge networks. However, unlike the more complex geostatistical approaches, the OAS techniques for this use are not optimized. On the other hand, geostatistical techniques ideally require, at the least, modelling the covariance from the rain gauge data at every time step evaluated, which commonly cannot be soundly done. Here, we propose a new procedure (concurrent multiplicative-additive objective analysis scheme [CMA-OAS]) for operational rainfall estimation using rain gauges and meteorological radar, which does not require explicit modelling of spatial covariances. On the basis of a concurrent multiplicative-additive (CMA) decomposition of the spatially nonuniform radar bias, within-storm variability of rainfall and fractional coverage of rainfall are taken into account. Thus both spatially nonuniform radar bias, given that rainfall is detected, and bias in radar detection of rainfall are handled. The interpolation procedure of CMA-OAS is built on the OAS, whose purpose is to estimate a filtered spatial field of the variable of interest through a successive correction of residuals resulting from a Gaussian kernel smoother applied on spatial samples. The CMA-OAS, first, poses an optimization problem at each gauge-radar support point to obtain both a local multiplicative-additive radar bias decomposition and a regionalization parameter. Second, local biases and regionalization parameters are integrated into an OAS to estimate the multisensor rainfall at the ground level. The approach considers radar estimates as background a priori information (first guess), so that nudging to observations (gauges) may be relaxed smoothly to the first guess, and the relaxation shape is obtained from the sequential
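
    The successive-correction core of an OAS can be sketched in one dimension: start from a first guess (e.g. the radar field), smooth the gauge residuals with a Gaussian kernel, add the correction, and repeat with a tighter kernel. The Python sketch below shows only this interpolation loop; the concurrent multiplicative-additive bias decomposition of the paper is omitted.

        import numpy as np

        def oas_1d(x_obs, obs, x_grid, bg_obs, bg_grid, sigma=20.0, passes=3):
            ana_grid = np.asarray(bg_grid, dtype=float).copy()
            ana_obs = np.asarray(bg_obs, dtype=float).copy()
            for p in range(passes):
                s = sigma / 2**p  # shrink the kernel on each pass
                r = obs - ana_obs  # residuals at the gauges
                wg = np.exp(-(x_grid[:, None] - x_obs[None, :])**2 / (2 * s * s))
                ana_grid += (wg @ r) / wg.sum(axis=1)
                wo = np.exp(-(x_obs[:, None] - x_obs[None, :])**2 / (2 * s * s))
                ana_obs += (wo @ r) / wo.sum(axis=1)
            return ana_grid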

  7. Effectiveness of Cognitive Existential Approach on Decreasing Demoralization in Women with Multiple Sclerosis

    Directory of Open Access Journals (Sweden)

    Nasim Pakniya

    2015-12-01

    Full Text Available Objectives: Multiple sclerosis is one of the most prevalent central nervous system diseases; because it is chronic, frequently recurrent, uncertain in its progress, and disabling, it can lead to various distresses as well as demoralization. A rehabilitation method based on Cognitive-Existential therapy is an integrated approach which can help to decrease demoralization syndrome in these patients. This study aimed to explore the effectiveness of a rehabilitation method based on the Cognitive-Existential approach in decreasing demoralization syndrome in patients with MS. Methods: A single-subject design was used in this study. Among women who had referred to the Tehran MS Association, 3 women (aged 20-40) were selected through purposeful sampling and separately participated in 10 sessions (90 minutes each). Participants were assessed during 7 phases of intervention (2 baselines, 3 measurements during intervention, 2 follow-ups) through the Demoralization Syndrome Scale (2004) and the Cognitive Distortion Scale (2010). Data were analyzed by calculating the process variation index and visual analysis. Results: Comparing the scores of patients with MS on the diagram across the 7 measurements and calculating the recovery percentage showed a decrease in demoralization syndrome scores. Discussion: The findings showed that a rehabilitation method based on the Cognitive-Existential approach can decrease demoralization syndrome in patients with MS.

  8. Isothermal multiple displacement amplification: a methodical approach enhancing molecular routine diagnostics of microcarcinomas and small biopsies

    Directory of Open Access Journals (Sweden)

    Mairinger FD

    2014-08-01

    Full Text Available Fabian D Mairinger,1 Robert FH Walter,2 Claudia Vollbrecht,3 Thomas Hager,1 Karl Worm,1 Saskia Ting,1 Jeremias Wohlschläger,1 Paul Zarogoulidis,4 Konstantinos Zarogoulidis,4 Kurt W Schmid1 1Institute of Pathology, 2Ruhrlandklinik, West German Lung Center, University Hospital Essen, Essen, 3Institute of Pathology, University Hospital Cologne, Cologne, Germany; 4Pulmonary Department, Oncology Unit, G Papanikolaou General Hospital, Aristotle University of Thessaloniki, Thessaloniki, Greece Background and methods: Isothermal multiple displacement amplification (IMDA) can be a powerful tool in molecular routine diagnostics for homogeneous and sequence-independent whole-genome amplification of notably small tumor samples, e.g., microcarcinomas and biopsies containing a small amount of tumor. Currently, this method is not well established in pathology laboratories. We designed a study to confirm the feasibility and convenience of this method for routine diagnostics with formalin-fixed, paraffin-embedded samples prepared by laser-capture microdissection. Results: A total of 250 µg DNA (concentration 5 µg/µL) was generated by amplification over a period of 8 hours from a material input of approximately 25 cells, approximately equivalent to 175 pg of genomic DNA. In the generated DNA, a representation of all chromosomes could be shown, and the presence of selected genes relevant for diagnosis in clinical samples could be proven. Mutational analysis of clinical samples could be performed without difficulty and showed concordance with earlier diagnostic findings. Conclusion: We established the feasibility and convenience of IMDA for routine diagnostics. We also showed that small amounts of DNA, which were not analyzable with current molecular methods, can be sufficient for a wide field of applications in molecular routine diagnostics when preamplified with IMDA. Keywords: isothermal multiple displacement amplification, isothermal, whole

  9. A Nonparametric, Multiple Imputation-Based Method for the Retrospective Integration of Data Sets

    Science.gov (United States)

    Carrig, Madeline M.; Manrique-Vallier, Daniel; Ranby, Krista W.; Reiter, Jerome P.; Hoyle, Rick H.

    2015-01-01

    Complex research questions often cannot be addressed adequately with a single data set. One sensible alternative to the high cost and effort associated with the creation of large new data sets is to combine existing data sets containing variables related to the constructs of interest. The goal of the present research was to develop a flexible, broadly applicable approach to the integration of disparate data sets that is based on nonparametric multiple imputation and the collection of data from a convenient, de novo calibration sample. We demonstrate proof of concept for the approach by integrating three existing data sets containing items related to the extent of problematic alcohol use and associations with deviant peers. We discuss both necessary conditions for the approach to work well and potential strengths and weaknesses of the method compared to other data set integration approaches. PMID:26257437

  10. Multiple sclerosis: general features and pharmacologic approach

    International Nuclear Information System (INIS)

    Nielsen Lagumersindez, Denis; Martinez Sanchez, Gregorio

    2009-01-01

    Multiple sclerosis is an autoimmune, inflammatory, demyelinating disease of the central nervous system (CNS) of unknown etiology and unpredictable evolution. Different etiological hypotheses point to a close interplay between predisposing genetic factors and diverse environmental factors capable of giving rise to an autoimmune response at the central nervous system level. The hypothesis of an autoimmune pathogeny is based on the study of experimental models and on findings in biopsies of patients affected by the disease. Accumulating data indicate that oxidative stress plays a major role in the pathogenesis of multiple sclerosis. Reactive oxygen species generated by macrophages have been implicated as mediators of demyelination and axon damage, both in experimental autoimmune encephalomyelitis and in multiple sclerosis itself. Diagnosis of the disease is difficult because there is no single confirmatory test. Its management covers the treatment of acute relapses, disease modification, and symptom management. These features require an individualized approach, based on the evolution of the disease and the tolerability of treatments. In addition to diet, physical therapy is recommended among the non-pharmacologic treatments for multiple sclerosis. Moreover, some clinical trials using natural extracts, nutritional supplements, and other agents have been performed with promising results. Pharmacology has provided neurologists with a broad array of drugs of proven effectiveness; moreover, research results of recent years make it likely that therapeutic possibilities will increase notably in the future. (Author)

  11. A novel EMD selecting thresholding method based on multiple iteration for denoising LIDAR signal

    Science.gov (United States)

    Li, Meng; Jiang, Li-hui; Xiong, Xing-long

    2015-06-01

    Empirical mode decomposition (EMD) is believed to be potentially useful for processing nonlinear and non-stationary LIDAR signals. To shed further light on its performance, we propose the EMD selecting thresholding method based on multiple iteration, which essentially extends EMD interval thresholding (EMD-IT): the samples of the noisy parts of all corrupted intrinsic mode functions are randomly altered so that each iteration operates on a different noise realization, producing a better overall denoising effect. Simulations on both synthetic signals and real-world LIDAR signals support this method.
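
    To make the EMD-IT baseline concrete, here is a simplified interval-thresholding denoiser (a sketch assuming the third-party PyEMD package; the multiple-iteration refinement proposed in the record is not reproduced):

    ```python
    import numpy as np
    from PyEMD import EMD  # assumed dependency: pip install EMD-signal

    def emd_interval_threshold(signal):
        """Simplified EMD interval thresholding (EMD-IT).

        Each intrinsic mode function (IMF) is split at its zero crossings,
        and an interval is kept only if its peak amplitude exceeds a
        universal threshold estimated from that IMF's median absolute
        deviation.
        """
        signal = np.asarray(signal, dtype=float)
        imfs = EMD().emd(signal)
        out = np.zeros(len(signal))
        for imf in imfs:
            sigma = np.median(np.abs(imf)) / 0.6745         # robust noise scale
            thr = sigma * np.sqrt(2.0 * np.log(len(imf)))   # universal threshold
            zc = np.where(np.diff(np.signbit(imf)))[0] + 1  # zero-crossing splits
            edges = np.r_[0, zc, len(imf)]
            for a, b in zip(edges[:-1], edges[1:]):
                if np.max(np.abs(imf[a:b])) > thr:          # keep loud intervals
                    out[a:b] += imf[a:b]
        return out
    ```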

  12. Hybrid multiple criteria decision-making methods

    DEFF Research Database (Denmark)

    Zavadskas, Edmundas Kazimieras; Govindan, K.; Antucheviciene, Jurgita

    2016-01-01

    Formal decision-making methods can be used to help improve the overall sustainability of industries and organisations. Recently, there has been a great proliferation of works aggregating sustainability criteria by using diverse multiple criteria decision-making (MCDM) techniques. A number of revi...

  13. Group decision-making approach for flood vulnerability identification using the fuzzy VIKOR method

    Science.gov (United States)

    Lee, G.; Jun, K. S.; Chung, E.-S.

    2015-04-01

    This study proposes an improved group decision making (GDM) framework that combines the VIKOR method with data fuzzification to quantify spatial flood vulnerability including multiple criteria. In general, the GDM method is an effective tool for formulating a compromise solution involving various decision makers, since different stakeholders may have different perspectives on their flood risk/vulnerability management responses. The GDM approach is designed to achieve consensus building that reflects the viewpoints of each participant. The fuzzy VIKOR method was developed to solve multi-criteria decision making (MCDM) problems with conflicting and noncommensurable criteria. This compromise-oriented method can be used to obtain a nearly ideal solution according to all established criteria. By combining the GDM method and the fuzzy VIKOR method, this approach can effectively propose compromise decisions. The spatial flood vulnerability of the southern Han River obtained using the GDM approach combined with the fuzzy VIKOR method was compared with the spatial flood vulnerability obtained using general MCDM methods, such as fuzzy TOPSIS, and classical GDM methods (i.e., Borda, Condorcet, and Copeland). As a result, the proposed fuzzy GDM approach can reduce the uncertainty in data confidence and weight derivation techniques. Thus, the combination of the GDM approach with the fuzzy VIKOR method can provide robust prioritization because it actively reflects the opinions of various groups and considers uncertainty in the input data.
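
    The crisp VIKOR core on which the fuzzy/GDM extensions build is short enough to sketch (benefit-type criteria assumed; weights and the compromise parameter v are illustrative, not the study's values):

    ```python
    import numpy as np

    def vikor(X, w, v=0.5):
        """Classical (crisp) VIKOR ranking.

        X : (alternatives x criteria) matrix of benefit-type scores
        w : criteria weights summing to 1
        v : weight of the 'group utility' strategy (0..1)
        """
        f_star, f_minus = X.max(0), X.min(0)             # ideal / anti-ideal
        span = np.where(f_star == f_minus, 1.0, f_star - f_minus)
        D = w * (f_star - X) / span                      # weighted distances
        S, R = D.sum(1), D.max(1)                        # utility and regret
        Q = v * (S - S.min()) / max(S.max() - S.min(), 1e-12) \
            + (1 - v) * (R - R.min()) / max(R.max() - R.min(), 1e-12)
        return np.argsort(Q)                             # best (smallest Q) first

    # toy usage: four regions scored on three vulnerability criteria
    X = np.array([[0.2, 0.7, 0.5], [0.9, 0.4, 0.6], [0.5, 0.5, 0.9], [0.3, 0.8, 0.2]])
    print(vikor(X, w=np.array([0.5, 0.3, 0.2])))
    ```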

  14. A global calibration method for multiple vision sensors based on multiple targets

    International Nuclear Information System (INIS)

    Liu, Zhen; Zhang, Guangjun; Wei, Zhenzhong; Sun, Junhua

    2011-01-01

    The global calibration of multiple vision sensors (MVS) has been widely studied in the last two decades. In this paper, we present a global calibration method for MVS with non-overlapping fields of view (FOVs) using multiple targets (MT). An MT is constructed by fixing several targets, called sub-targets, together. The mutual coordinate transformations between sub-targets need not be known. The main procedure of the proposed method is as follows: one vision sensor is selected from the MVS to establish the global coordinate frame (GCF). The MT is placed in front of the vision sensors several (at least four) times. Using the constraint that the relative positions of all sub-targets are invariant, the transformation matrix from the coordinate frame of each vision sensor to the GCF can be solved. Both synthetic and real experiments are carried out and good results are obtained. The proposed method has been applied to several real measurement systems and shown to be both flexible and accurate. It can serve as an attractive alternative to existing global calibration methods

  15. A multiple multicomponent approach to chimeric peptide-peptoid podands.

    Science.gov (United States)

    Rivera, Daniel G; León, Fredy; Concepción, Odette; Morales, Fidel E; Wessjohann, Ludger A

    2013-05-10

    The success of multi-armed, peptide-based receptors in supramolecular chemistry traditionally rests not only on the sequence but equally on an appropriate positioning of the various peptidic chains to create a multivalent array of binding elements. As a faster and more versatile alternative access to (pseudo)peptidic receptors, a new approach based on multiple Ugi four-component reactions (Ugi-4CRs) is proposed as a means of simultaneously incorporating several binding and catalytic elements into organizing scaffolds. By employing α-amino acids as either the amino or the acid components of the Ugi-4CRs, this multiple multicomponent process allows the one-pot assembly of podands bearing chimeric peptide-peptoid chains as appended arms. Tripodal, bowl-shaped, and concave polyfunctional skeletons are employed as topologically varied platforms for positioning the multiple peptidic chains formed by the Ugi-4CRs. In a similar approach, steroidal building blocks with several axially oriented isocyano groups are synthesized and utilized to align the chimeric chains under conformational constraints, thus providing an alternative to classical peptido-steroidal receptors. The branched, hybrid peptide-peptoid appendages open new possibilities for both the rational design and the combinatorial production of synthetic receptors. The concept is also expandable to other multicomponent reactions. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. A Bayesian joint probability modeling approach for seasonal forecasting of streamflows at multiple sites

    Science.gov (United States)

    Wang, Q. J.; Robertson, D. E.; Chiew, F. H. S.

    2009-05-01

    Seasonal forecasting of streamflows can be highly valuable for water resources management. In this paper, a Bayesian joint probability (BJP) modeling approach for seasonal forecasting of streamflows at multiple sites is presented. A Box-Cox transformed multivariate normal distribution is proposed to model the joint distribution of future streamflows and their predictors such as antecedent streamflows and El Niño-Southern Oscillation indices and other climate indicators. Bayesian inference of model parameters and uncertainties is implemented using Markov chain Monte Carlo sampling, leading to joint probabilistic forecasts of streamflows at multiple sites. The model provides a parametric structure for quantifying relationships between variables, including intersite correlations. The Box-Cox transformed multivariate normal distribution has considerable flexibility for modeling a wide range of predictors and predictands. The Bayesian inference formulated allows the use of data that contain nonconcurrent and missing records. The model flexibility and data-handling ability means that the BJP modeling approach is potentially of wide practical application. The paper also presents a number of statistical measures and graphical methods for verification of probabilistic forecasts of continuous variables. Results for streamflows at three river gauges in the Murrumbidgee River catchment in southeast Australia show that the BJP modeling approach has good forecast quality and that the fitted model is consistent with observed data.
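
    A simplified, non-Bayesian stand-in for the model's two key ingredients, per-variable Box-Cox transforms and a joint normal conditioned on known predictors, can be sketched as follows (SciPy/NumPy; no MCMC over parameter uncertainty, which the actual BJP approach includes):

    ```python
    import numpy as np
    from scipy import stats

    def fit_bjp_like(data):
        """Fit a Box-Cox transformed multivariate normal to the columns of
        `data` (positive, streamflow-like; predictors and predictands stacked)."""
        lams, Z = [], []
        for col in data.T:
            z, lam = stats.boxcox(col)     # per-variable Box-Cox transform
            lams.append(lam)
            Z.append(z)
        Z = np.column_stack(Z)
        return lams, Z.mean(0), np.cov(Z, rowvar=False)

    def conditional_normal(mu, cov, idx_obs, z_obs):
        """Condition the joint normal on observed (transformed) predictors to
        get the forecast distribution of the remaining variables."""
        idx_new = np.setdiff1d(np.arange(len(mu)), idx_obs)
        S11 = cov[np.ix_(idx_new, idx_new)]
        S12 = cov[np.ix_(idx_new, idx_obs)]
        S22 = cov[np.ix_(idx_obs, idx_obs)]
        K = S12 @ np.linalg.inv(S22)
        return mu[idx_new] + K @ (z_obs - mu[idx_obs]), S11 - K @ S12.T
    ```

    Forecasts in original units are obtained by sampling the conditional normal and inverting each Box-Cox transform with its fitted lambda.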

  17. Numerical Computation of Underground Inundation in Multiple Layers Using the Adaptive Transfer Method

    Directory of Open Access Journals (Sweden)

    Hyung-Jun Kim

    2018-01-01

    Full Text Available Extreme rainfall causes surface runoff to flow towards lowlands and subterranean facilities, such as subway stations and buildings with underground spaces in densely packed urban areas. These facilities and areas are therefore vulnerable to catastrophic submergence. However, flood modeling of underground space has not yet been adequately studied because there are difficulties in reproducing the associated multiple horizontal layers connected with staircases or elevators. This study proposes a convenient approach to simulate underground inundation when two layers are connected. The main facet of this approach is to compute the flow flux passing through staircases in an upper layer and to transfer the equivalent quantity to a lower layer. This is defined as the ‘adaptive transfer method’. This method overcomes the limitations of 2D modeling by introducing layers connecting concepts to prevent large variations in mesh sizes caused by complicated underlying obstacles or local details. Consequently, this study aims to contribute to the numerical analysis of flow in inundated underground spaces with multiple floors.

  18. A minimally invasive multiple marker approach allows highly efficient detection of meningioma tumors

    Directory of Open Access Journals (Sweden)

    Meese Eckart

    2006-12-01

    Full Text Available Abstract Background The development of effective frameworks that permit an accurate diagnosis of tumors, especially in their early stages, remains a grand challenge in the field of bioinformatics. Our approach uses statistical learning techniques applied to multiple tumor antigen markers, utilizing the immune system as a very sensitive marker of molecular pathological processes. For validation purposes we chose intracranial meningioma tumors as a model system, since they occur very frequently, are mostly benign, and are genetically stable. Results A total of 183 blood samples from 93 meningioma patients (WHO stages I-III) and 90 healthy controls were screened for seroreactivity with a set of 57 meningioma-associated antigens. We tested several established statistical learning methods on the resulting reactivity patterns using 10-fold cross validation. The best performance was achieved by Naïve Bayes classifiers. With this classification method, our framework, called the Minimally Invasive Multiple Marker (MIMM) approach, yielded a specificity of 96.2%, a sensitivity of 84.5%, and an accuracy of 90.3%; the respective area under the ROC curve was 0.957. Detailed analysis revealed that prediction performs particularly well on low-grade (WHO I) tumors, consistent with our goal of early stage tumor detection. For these tumors the best classification result, with a specificity of 97.5%, a sensitivity of 91.3%, an accuracy of 95.6%, and an area under the ROC curve of 0.971, was achieved using a set of only 12 antigen markers. This antigen set was detected by a subset selection method based on mutual information. Remarkably, our study proves that the inclusion of non-specific antigens, detected not only in tumor but also in normal sera, increases the performance significantly, since non-specific antigens contribute additional diagnostic information. Conclusion Our approach offers the possibility to screen members of risk groups as a matter of routine
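
    As a sketch of the classification step, the following reproduces the 10-fold cross-validated Naïve Bayes setup on a placeholder seroreactivity matrix (random 0/1 data standing in for the real antigen panel, which is not available here):

    ```python
    import numpy as np
    from sklearn.naive_bayes import BernoulliNB
    from sklearn.model_selection import cross_val_score

    # Hypothetical seroreactivity matrix: rows = sera, columns = 57 antigens,
    # entries 1/0 for reactive / non-reactive; y = 1 meningioma, 0 control.
    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(183, 57))
    y = rng.integers(0, 2, size=183)

    clf = BernoulliNB()                                   # Naive Bayes for binary features
    acc = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
    print(f"10-fold CV accuracy: {acc.mean():.3f}")
    ```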

  19. An Extended TOPSIS Method for the Multiple Attribute Decision Making Problems Based on Interval Neutrosophic Set

    Directory of Open Access Journals (Sweden)

    Pingping Chi

    2013-03-01

    Full Text Available The interval neutrosophic set (INS) can more easily express incomplete, indeterminate and inconsistent information, and TOPSIS is one of the most commonly used and effective methods for multiple attribute decision making; in general, however, it can only process attribute values given as crisp numbers. In this paper, we extend TOPSIS to INSs and, with respect to multiple attribute decision making problems in which the attribute weights are unknown and the attribute values take the form of INSs, propose an extended TOPSIS method. First, the definition of an INS and its operational laws are given, and the distance between INSs is defined. Then, the attribute weights are determined based on the maximizing deviation method, and an extended TOPSIS method is developed to rank the alternatives. Finally, an illustrative example is given to verify the developed approach and to demonstrate its practicality and effectiveness.
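
    For reference, the crisp TOPSIS ranking that the paper extends to interval neutrosophic values looks like this (benefit criteria assumed; weights illustrative):

    ```python
    import numpy as np

    def topsis(X, w):
        """Classical TOPSIS on crisp benefit criteria.

        X : (alternatives x criteria) decision matrix
        w : criteria weights summing to 1
        """
        R = X / np.linalg.norm(X, axis=0)         # vector-normalize each criterion
        V = R * w                                 # weighted normalized matrix
        ideal, anti = V.max(0), V.min(0)          # positive / negative ideal points
        d_pos = np.linalg.norm(V - ideal, axis=1)
        d_neg = np.linalg.norm(V - anti, axis=1)
        closeness = d_neg / (d_pos + d_neg)       # relative closeness to the ideal
        return np.argsort(-closeness)             # best alternative first
    ```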

  20. Optimization of breeding methods when introducing multiple ...

    African Journals Online (AJOL)

    Optimization of breeding methods when introducing multiple resistance genes from American to Chinese wheat. JN Qi, X Zhang, C Yin, H Li, F Lin. Abstract. Stripe rust is one of the most destructive diseases of wheat worldwide. Growing resistant cultivars with resistance genes is the most effective method to control this ...

  1. Curvelet-domain multiple matching method combined with cubic B-spline function

    Science.gov (United States)

    Wang, Tong; Wang, Deli; Tian, Mi; Hu, Bin; Liu, Chengming

    2018-05-01

    Because the large number of surface-related multiples in marine data can seriously affect the results of data processing and interpretation, many researchers have attempted to develop effective methods to remove them. The most successful surface-related multiple elimination method was proposed on the basis of data-driven theory. However, its elimination effect is unsatisfactory due to amplitude and phase errors. Although the subsequent curvelet-domain multiple-primary separation method achieved better results, poor computational efficiency prevented its application. In this paper, we adopt the cubic B-spline function to improve the traditional curvelet multiple matching method. First, we select a small number of unknowns as the basis points of the matching coefficient; second, we apply the cubic B-spline function to these basis points to reconstruct the matching array; third, we build the constrained solving equations based on the relationships between the predicted multiples, the matching coefficients, and the actual data; finally, we use the BFGS algorithm to iterate and realize fast solving of the sparsity-constrained multiple matching algorithm. Moreover, a soft-threshold method is used to make the method perform better. With the cubic B-spline function, the differences between the predicted multiples and the original data diminish, which results in less processing time to obtain optimal solutions and fewer iterative loops in the solving procedure based on the L1-norm constraint. Applications to synthetic and field data both validate the practicability and validity of the method.
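
    A toy version of the matching step can be sketched as follows (SciPy; the curvelet transform is omitted and a smooth surrogate replaces the exact L1 term, so this only illustrates the B-spline parameterization and BFGS iteration, not the published algorithm):

    ```python
    import numpy as np
    from scipy.interpolate import BSpline
    from scipy.optimize import minimize

    def spline_matching_filter(data, pred, n_knots=8):
        """Match predicted multiples to recorded data with a smoothly varying
        scale factor parameterized by a cubic B-spline over a few basis points
        (far fewer unknowns than one coefficient per sample)."""
        n = len(data)
        # clamped cubic knot vector giving exactly n_knots control coefficients
        knots = np.r_[[0.0] * 3, np.linspace(0.0, n - 1.0, n_knots - 2), [n - 1.0] * 3]
        x = np.arange(n, dtype=float)

        def coeffs(c):
            return BSpline(knots, c, 3)(x)            # per-sample match coefficients

        def misfit(c):
            r = data - coeffs(c) * pred
            sparsity = np.sum(np.sqrt(c * c + 1e-8))  # smooth surrogate of the L1 norm
            return 0.5 * np.dot(r, r) + 1e-2 * sparsity

        res = minimize(misfit, np.ones(n_knots), method="BFGS")
        return data - coeffs(res.x) * pred            # estimated primaries
    ```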

  2. Drug induced mortality: a multiple cause approach on Italian causes of death Register

    Directory of Open Access Journals (Sweden)

    Francesco Grippo

    2015-04-01

    Full Text Available Background: Drug-related mortality is a complex phenomenon that has several health, social and economic effects. In this paper, trends in drug-induced mortality in Italy are analysed. Two approaches have been followed: the traditional analysis of the underlying cause of death (UC) (data refer to the Istat mortality database from 1980 to 2011), and the multiple cause (MC) analysis, that is, the analysis of all conditions reported on the death certificate (data for the 2003-2011 period). Methods: Data presented in this paper are based on the Italian mortality register. The selection of ICD codes used for the analysis follows the definition of the European Monitoring Centre for Drugs and Drug Addiction. Using different indicators (crude and standardized rates, ratio of multiple to underlying), the results obtained from the two approaches (UC and MC) have been compared. Moreover, as a measure of association between drug-related causes and specific conditions on the death certificate, an estimate of the age-standardized relative risk (RR) has been used. Results: In the years 2009-2011, the total number of certificates with mention of drug use was 1,293, 60% higher than the UC-based number. The groups of conditions most strongly associated with drug-related causes are mental and behavioural disorders (especially from alcohol consumption), viral hepatitis, cirrhosis and fibrosis of the liver, AIDS and endocarditis. Conclusions: The analysis based on the multiple cause approach shows, for the first time, a more detailed picture of drug-related deaths; it allows the mortality profiles to be described better and the contribution of a specific cause to death to be re-evaluated.

  3. Development of an asymmetric multiple-position neutron source (AMPNS) method to monitor the criticality of a degraded reactor core

    International Nuclear Information System (INIS)

    Kim, S.S.; Levine, S.H.

    1985-01-01

    An analytical/experimental method has been developed to monitor the subcritical reactivity and unfold the k∞ distribution of a degraded reactor core. The method uses several fixed neutron detectors and a Cf-252 neutron source placed sequentially in multiple positions in the core. Therefore, it is called the Asymmetric Multiple Position Neutron Source (AMPNS) method. The AMPNS method employs nucleonic codes to analyze the neutron multiplication of a Cf-252 neutron source. An optimization program, GPM, is utilized to unfold the k∞ distribution of the degraded core, in which the desired performance measure minimizes the error between the calculated and the measured count rates of the degraded reactor core. The analytical/experimental approach is validated by performing experiments using the Penn State Breazeale TRIGA Reactor (PSBR). A significant result of this study is that it provides a method to monitor the criticality of a damaged core during the recovery period

  4. Pediatric Multiple Sclerosis: Genes, Environment, and a Comprehensive Therapeutic Approach.

    Science.gov (United States)

    Cappa, Ryan; Theroux, Liana; Brenton, J Nicholas

    2017-10-01

    Pediatric multiple sclerosis is an increasingly recognized and studied disorder that accounts for 3% to 10% of all patients with multiple sclerosis. The risk for pediatric multiple sclerosis is thought to reflect a complex interplay between environmental and genetic risk factors. Environmental exposures, including sunlight (ultraviolet radiation, vitamin D levels), infections (Epstein-Barr virus), passive smoking, and obesity, have been identified as potential risk factors in youth. Genetic predisposition contributes to the risk of multiple sclerosis, and the major histocompatibility complex on chromosome 6 makes the single largest contribution to susceptibility to multiple sclerosis. With the use of large-scale genome-wide association studies, other non-major histocompatibility complex alleles have been identified as independent risk factors for the disease. The bridge between environment and genes likely lies in the study of epigenetic processes, which are environmentally-influenced mechanisms through which gene expression may be modified. This article will review these topics to provide a framework for discussion of a comprehensive approach to counseling and ultimately treating the pediatric patient with multiple sclerosis. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. A New Conflict Resolution Method for Multiple Mobile Robots in Cluttered Environments With Motion-Liveness.

    Science.gov (United States)

    Shahriari, Mohammadali; Biglarbegian, Mohammad

    2018-01-01

    This paper presents a new conflict resolution methodology for multiple mobile robots that ensures their motion-liveness, especially in cluttered and dynamic environments. Our method constructs a mathematical formulation in the form of an optimization problem, minimizing the overall travel times of the robots subject to resolving all conflicts in their motion. This optimization problem can be solved easily by coordinating only the robots' speeds. To overcome the computational cost of executing the algorithm in very cluttered environments, we develop an innovative method that clusters the environment into independent subproblems which can be solved using parallel programming techniques. We demonstrate the scalability of our approach through extensive simulations. Simulation results showed that our proposed method is capable of resolving the conflicts of 100 robots in less than 1.23 s in a cluttered environment that has 4357 intersections in the paths of the robots. We also developed an experimental testbed and demonstrated that our approach can be implemented in real time. We finally compared our approach with other existing methods in the literature, both quantitatively and qualitatively. This comparison shows that, while our approach is mathematically sound, it is also more computationally efficient, scalable to very large numbers of robots, and guarantees the live and smooth motion of robots.

  6. MR-based conductivity imaging using multiple receiver coils.

    Science.gov (United States)

    Lee, Joonsung; Shin, Jaewook; Kim, Dong-Hyun

    2016-08-01

    To propose a signal combination method for MR-based tissue conductivity mapping using a standard clinical scanner with multiple receiver coils. The theory of the proposed method is presented with two practical approaches, a coil-specific approach and a subject-specific approach. Conductivity maps were reconstructed using the transceive phase of the combined signal. The sensitivities of the coefficients used for signal combination were analyzed, and the method was compared with other signal combination methods. For validation, multiple receiver brain coils and multiple receiver breast coils were used in phantom, in vivo brain, and in vivo breast studies. The variation among the conductivity estimates was small. MR-based tissue conductivity mapping is feasible when using a standard clinical MR scanner with multiple receiver coils. The proposed method reduces systematic errors in phase-based conductivity mapping that can occur due to the inhomogeneous magnitude of the combined receive profile. Magn Reson Med 76:530-539, 2016. © 2015 Wiley Periodicals, Inc.

  7. Rapid descriptive sensory methods – Comparison of Free Multiple Sorting, Partial Napping, Napping, Flash Profiling and conventional profiling

    DEFF Research Database (Denmark)

    Dehlholm, Christian; Brockhoff, Per B.; Meinert, Lene

    2012-01-01

    Two new rapid descriptive sensory evaluation methods are introduced to the field of food sensory evaluation. The first method, free multiple sorting, allows subjects to perform ad libitum free sortings, until they feel that no more relevant dissimilarities among products remain. The second method is a modal restriction of Napping to specific sensory modalities, directing sensation and still allowing a holistic approach to products. The new methods are compared to Flash Profiling, Napping and conventional descriptive sensory profiling. Evaluations are performed by several panels of expert assessors... are applied for the graphical validation and comparisons. This allows similar comparisons and is applicable to single-block evaluation designs such as Napping. The partial Napping allows repetitions on multiple sensory modalities, e.g. appearance, taste and mouthfeel, and shows the average

  8. An integrated modeling approach to support management decisions of coupled groundwater-agricultural systems under multiple uncertainties

    Science.gov (United States)

    Hagos Subagadis, Yohannes; Schütze, Niels; Grundmann, Jens

    2015-04-01

    The planning and implementation of effective water resources management strategies need an assessment of multiple (physical, environmental, and socio-economic) issues, and often require new research in which knowledge of diverse disciplines is combined in unified methodological and operational frameworks. Such integrative research to link different knowledge domains faces several practical challenges, and its complexities are further compounded by multiple actors frequently with conflicting interests and by multiple uncertainties about the consequences of potential management decisions. A fuzzy-stochastic multiple criteria decision analysis tool was developed in this study to systematically quantify both probabilistic and fuzzy uncertainties associated with complex hydrosystems management. It integrates physical process-based models, fuzzy logic, expert involvement and stochastic simulation within a general framework. Subsequently, the proposed new approach is applied to a water-scarce coastal arid region water management problem in northern Oman, where saltwater intrusion into a coastal aquifer due to excessive groundwater extraction for irrigated agriculture has affected the aquifer's sustainability, endangering associated socio-economic conditions as well as the traditional social structure. Results from the developed method have provided key decision alternatives which can serve as a platform for negotiation and further exploration. In addition, this approach has made it possible to systematically quantify both probabilistic and fuzzy uncertainties associated with the decision problem. Sensitivity analysis applied within the developed tool has shown that the decision makers' risk aversion and risk taking attitudes may yield different rankings of decision alternatives. The developed approach can be applied to address the complexities and uncertainties inherent in water resources systems to support management decisions, while serving as a platform for stakeholder participation.

  9. The double travelling salesman problem with multiple stacks - Formulation and heuristic solution approaches

    DEFF Research Database (Denmark)

    Petersen, Hanne Løhmann; Madsen, Oli B.G.

    2009-01-01

    This paper introduces the double travelling salesman problem with multiple stacks and presents four different metaheuristic approaches to its solution. The double TSP with multiple stacks is concerned with determining the shortest route performing pickups and deliveries in two separated networks...

  10. Research on neutron source multiplication method in nuclear critical safety

    International Nuclear Information System (INIS)

    Zhu Qingfu; Shi Yongqian; Hu Dingsheng

    2005-01-01

    The paper concerns research on the neutron source multiplication method in nuclear critical safety. Based on the neutron diffusion equation with an external neutron source, the effective sub-critical multiplication factor k_s is deduced; k_s differs from the effective neutron multiplication factor k_eff in the case of a sub-critical system with an external neutron source. The verification experiment on the sub-critical system indicates that the parameter measured with the neutron source multiplication method is k_s, and that k_s is related to the external neutron source position in the sub-critical system and to the external neutron source spectrum. The relation between k_s and k_eff, and their effect on nuclear critical safety, is discussed. (author)
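
    In textbook form (our notation, assumed consistent with the record), the method rests on the fact that a steady external source S in a subcritical system is multiplied by 1/(1 - k_s), so count-rate ratios against a reference state give the unknown factor:

    ```latex
    % Count rate C from a steady external source S in a subcritical system:
    \[
      C \;\propto\; M S, \qquad M \;=\; \frac{1}{1 - k_s},
    \]
    % so the ratio of count rates between a reference state (known k_{s,0})
    % and the measured state yields the unknown subcritical factor:
    \[
      \frac{C}{C_0} \;=\; \frac{1 - k_{s,0}}{1 - k_s}
      \quad\Longrightarrow\quad
      k_s \;=\; 1 - (1 - k_{s,0})\,\frac{C_0}{C}.
    \]
    ```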

  11. Closed-Loop Surface Related Multiple Estimation

    NARCIS (Netherlands)

    Lopez Angarita, G.A.

    2016-01-01

    Surface-related multiple elimination (SRME) is one of the most commonly used methods for suppressing surface multiples. However, in order to obtain an accurate surface multiple estimation, dense source and receiver sampling is required. The traditional approach to this problem is performing data

  12. Optimization of Inventories for Multiple Companies by Fuzzy Control Method

    OpenAIRE

    Kawase, Koichi; Konishi, Masami; Imai, Jun

    2008-01-01

    In this research, fuzzy control theory is applied to the inventory control of the supply chain between multiple companies. The proposed control method deals with the amount of inventories expressing the supply chain between multiple companies. Referring to past demand and tardiness, inventory amounts of raw materials are determined by fuzzy inference. Appropriate inventory control becomes possible by optimizing the fuzzy control gain using the SA method. The variation of ...

  13. Unplanned Complex Suicide-A Consideration of Multiple Methods.

    Science.gov (United States)

    Ateriya, Navneet; Kanchan, Tanuj; Shekhawat, Raghvendra Singh; Setia, Puneet; Saraf, Ashish

    2018-05-01

    Detailed death investigations are mandatory to find out the exact cause and manner of non-natural deaths. In this regard, the use of multiple methods in suicide poses a challenge for investigators, especially when the choice of methods to cause death is unplanned. There is an increased likelihood that suspicions of homicide are raised in cases of unplanned complex suicides. A case of complex suicide is reported in which the victim resorted to multiple methods to end his life, in what appeared, based on the death scene investigation, to be an unplanned variant. A meticulous crime scene examination, interviews of the victim's relatives and other witnesses, and a thorough autopsy are warranted to conclude on the cause and manner of death in all such cases. © 2017 American Academy of Forensic Sciences.

  14. A platform analytical quality by design (AQbD) approach for multiple UHPLC-UV and UHPLC-MS methods development for protein analysis.

    Science.gov (United States)

    Kochling, Jianmei; Wu, Wei; Hua, Yimin; Guan, Qian; Castaneda-Merced, Juan

    2016-06-05

    A platform analytical quality by design (AQbD) approach to method development is presented in this paper. This approach is not limited to method development following the same logical AQbD process; it is also exploited across a range of applications in method development with commonality in equipment and procedures. As demonstrated by the development process of three methods, the systematic approach offers a thorough understanding of each method's scientific strength. The knowledge gained from the UHPLC-UV peptide mapping method can be easily transferred to the UHPLC-MS oxidation method and the UHPLC-UV C-terminal heterogeneity method for the same protein. In addition, the platform AQbD method development strategy ensures that method robustness is built in during development. In early phases, a good method can generate reliable data for product development, allowing confident decision making. Methods generated following the AQbD approach have great potential for avoiding extensive post-approval analytical method changes, while in the commercial phase, high-quality data ensure timely data release, reduced regulatory risk, and lowered lab operational cost. Moreover, the large, reliable database and the knowledge gained during AQbD method development provide strong justifications during regulatory filing for the selection of important parameters or for parameter change needs in method validation, and help justify the removal of unnecessary tests from product specifications. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Single-electron multiplication statistics as a combination of Poissonian pulse height distributions using constraint regression methods

    International Nuclear Information System (INIS)

    Ballini, J.-P.; Cazes, P.; Turpin, P.-Y.

    1976-01-01

    Analysing the histogram of anode pulse amplitudes allows a discussion of the hypothesis that has been proposed to account for the statistical processes of secondary multiplication in a photomultiplier. In an earlier work, good agreement was obtained between experimental and reconstructed spectra, assuming a first-dynode distribution composed of two Poisson distributions with distinct mean values. This first approximation led to a search for a method which could give the weights of several Poisson distributions of distinct mean values. Three methods are briefly presented: classical linear regression, constraint regression (d'Esopo's method), and regression on variables subject to error. The use of these methods gives an approximation of the frequency function representing the dispersion of the punctual mean gain around the overall first-dynode mean gain value. Comparison between this function and the one employed in the Polya distribution shows that the latter is inadequate to describe the statistical process of secondary multiplication. Numerous spectra obtained with two kinds of photomultiplier working under different physical conditions have been analysed. Two points are then discussed: does the frequency function represent the dynode structure and the interdynode collection process; and is the model (the multiplication process of all dynodes but the first is Poissonian) valid whatever the photomultiplier and the operating conditions. (Auth.)
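
    Recovering the weights of several fixed-mean Poisson distributions from a measured pulse-height histogram is a linear problem under a nonnegativity constraint; below is a sketch using nonnegative least squares as a stand-in for d'Esopo's constraint regression (candidate means are illustrative inputs):

    ```python
    import numpy as np
    from scipy.optimize import nnls
    from scipy.stats import poisson

    def poisson_mixture_weights(histogram, means):
        """Estimate nonnegative weights of a combination of Poisson pulse-height
        distributions from an observed single-electron spectrum."""
        channels = np.arange(len(histogram))
        # design matrix: one Poisson pmf column per candidate mean gain
        A = np.column_stack([poisson.pmf(channels, m) for m in means])
        h = histogram / histogram.sum()      # normalize to a probability mass
        w, residual = nnls(A, h)             # constrained (w >= 0) least squares
        return w / w.sum(), residual
    ```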

  16. Performance evaluation of 2D and 3D deep learning approaches for automatic segmentation of multiple organs on CT images

    Science.gov (United States)

    Zhou, Xiangrong; Yamada, Kazuma; Kojima, Takuya; Takayama, Ryosuke; Wang, Song; Zhou, Xinxin; Hara, Takeshi; Fujita, Hiroshi

    2018-02-01

    The purpose of this study is to evaluate and compare the performance of modern deep learning techniques for automatically recognizing and segmenting multiple organ regions in 3D CT images. CT image segmentation is one of the important tasks in medical image analysis and is still very challenging. Deep learning approaches have demonstrated the capability of scene recognition and semantic segmentation on natural images and have been used to address segmentation problems in medical images. Although several works have shown promising results for CT image segmentation using deep learning approaches, there has been no comprehensive evaluation of the segmentation performance of deep learning on multiple organs across different portions of CT scans. In this paper, we evaluated and compared the segmentation performance of two deep learning approaches using 2D and 3D deep convolutional neural networks (CNNs), with and without a pre-processing step. A conventional approach representing the state of the art in CT image segmentation without deep learning was also used for comparison. A dataset of 240 CT images scanned over different portions of human bodies was used for performance evaluation. Up to 17 types of organ regions in each CT scan were segmented automatically and compared to human annotations using the ratio of intersection over union (IU) as the criterion. The experimental results demonstrated that the IUs of the segmentation results had mean values of 79% and 67%, averaged over the 17 organ types, for the 3D and 2D deep CNNs, respectively. All results of the deep learning approaches showed better accuracy and robustness than the conventional segmentation method based on probabilistic atlas and graph-cut methods. The effectiveness and usefulness of deep learning approaches were demonstrated for solving the multiple-organ segmentation problem on 3D CT images.
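
    The scoring criterion is straightforward to state in code; here is a sketch of per-organ intersection over union on two label volumes (the label convention, 0 for background and 1..n for organs, is our assumption):

    ```python
    import numpy as np

    def iou_per_organ(pred, truth, n_labels):
        """Intersection-over-union for each labeled organ in two integer label
        volumes of identical shape (0 = background)."""
        scores = {}
        for k in range(1, n_labels + 1):
            p, t = pred == k, truth == k
            union = np.logical_or(p, t).sum()
            if union:                        # skip organs absent from both volumes
                scores[k] = np.logical_and(p, t).sum() / union
        return scores
    ```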

  17. Multiple and mixed methods in formative evaluation: Is more better? Reflections from a South African study

    Directory of Open Access Journals (Sweden)

    Willem Odendaal

    2016-12-01

    Full Text Available Abstract Background Formative programme evaluations assess intervention implementation processes and are widely seen as a way of unlocking the 'black box' of any programme in order to explore and understand why a programme functions as it does. However, few critical assessments of the methods used in such evaluations are available, and especially few that reflect on how well the evaluation achieved its objectives. This paper describes a formative evaluation of a community-based lay health worker programme for TB and HIV/AIDS clients across three low-income communities in South Africa. It assesses each of the methods used in relation to the evaluation objectives and offers suggestions on ways of optimising the use of multiple, mixed methods within formative evaluations of complex health system interventions. Methods The evaluation's qualitative methods comprised interviews, focus groups, observations and diary keeping. Quantitative methods included a time-and-motion study of the lay health workers' scope of practice and a client survey. The authors conceptualised and conducted the evaluation and, through iterative discussions, assessed the methods used and their results. Results Overall, the evaluation highlighted programme issues and insights beyond the reach of traditional single-method evaluations. The strengths of the multiple, mixed methods in this evaluation included a detailed description and nuanced understanding of the programme and its implementation, and triangulation of the perspectives and experiences of clients, lay health workers, and programme managers. However, the use of multiple methods needs to be carefully planned and implemented, as this approach can overstretch the logistic and analytic resources of an evaluation. Conclusions For complex interventions, formative evaluation designs including multiple qualitative and quantitative methods hold distinct advantages over single-method evaluations. However

  18. A Semiparametric Bayesian Approach for Analyzing Longitudinal Data from Multiple Related Groups.

    Science.gov (United States)

    Das, Kiranmoy; Afriyie, Prince; Spirko, Lauren

    2015-11-01

    Biological and/or clinical experiments often result in longitudinal data from multiple related groups. The analysis of such data is quite challenging because the groups might share information on the mean and/or covariance functions. In this article, we consider a Bayesian semiparametric approach to modeling the mean trajectories of longitudinal responses coming from multiple related groups. We place matrix stick-breaking process priors on the group mean parameters, which allows information sharing on the mean trajectories across the groups. Simulation studies are performed to demonstrate the effectiveness of the proposed approach compared to more traditional approaches. We analyze data from a one-year follow-up of nutrition education for hypercholesterolemic children with three different treatments, where the children are from different age groups. Our analysis provides more clinically useful information than previous analyses of the same dataset. The proposed approach will be a very powerful tool for analyzing data from clinical trials and other medical experiments.

  19. Characterizing lentic freshwater fish assemblages using multiple sampling methods

    Science.gov (United States)

    Fischer, Jesse R.; Quist, Michael C.

    2014-01-01

    Characterizing fish assemblages in lentic ecosystems is difficult, and multiple sampling methods are almost always necessary to gain reliable estimates of indices such as species richness. However, most research focused on lentic fish sampling methodology has targeted recreationally important species, and little to no information is available regarding the influence of multiple methods and timing (i.e., temporal variation) on characterizing entire fish assemblages. Therefore, six lakes and impoundments (48–1,557 ha surface area) were sampled seasonally with seven gear types to evaluate the combined influence of sampling methods and timing on the number of species and individuals sampled. Probabilities of detection for species indicated strong selectivities and seasonal trends that provide guidance on optimal seasons to use gears when targeting multiple species. The evaluation of species richness and number of individuals sampled using multiple gear combinations demonstrated that appreciable benefits over relatively few gears (e.g., to four) used in optimal seasons were not present. Specifically, over 90 % of the species encountered with all gear types and season combinations (N = 19) from six lakes and reservoirs were sampled with nighttime boat electrofishing in the fall and benthic trawling, modified-fyke, and mini-fyke netting during the summer. Our results indicated that the characterization of lentic fish assemblages was highly influenced by the selection of sampling gears and seasons, but did not appear to be influenced by waterbody type (i.e., natural lake, impoundment). The standardization of data collected with multiple methods and seasons to account for bias is imperative to monitoring of lentic ecosystems and will provide researchers with increased reliability in their interpretations and decisions made using information on lentic fish assemblages.

  20. Multiple sclerosis: general features and pharmacologic approach; Esclerosis multiple: aspectos generales y abordaje farmacologico

    Energy Technology Data Exchange (ETDEWEB)

    Nielsen Lagumersindez, Denis; Martinez Sanchez, Gregorio [Instituto de Farmacia y Alimentos, Universidad de La Habana, La Habana (Cuba)

    2009-07-01

    Multiple sclerosis is an autoimmune, inflammatory, demyelinating disease of the central nervous system (CNS) of unknown etiology and unpredictable evolution. Different etiological hypotheses point to a close interplay between predisposing genetic factors and diverse environmental factors capable of giving rise to an autoimmune response at the central nervous system level. The hypothesis of an autoimmune pathogeny is based on the study of experimental models and on findings in biopsies of patients affected by the disease. Accumulating data indicate that oxidative stress plays a major role in the pathogenesis of multiple sclerosis. Reactive oxygen species generated by macrophages have been implicated as mediators of demyelination and axon damage, both in experimental autoimmune encephalomyelitis and in multiple sclerosis itself. Diagnosis of the disease is difficult because there is no single confirmatory test. Its management covers the treatment of acute relapses, disease modification, and symptom management. These features require an individualized approach, based on the evolution of the disease and the tolerability of treatments. In addition to diet, physical therapy is recommended among the non-pharmacologic treatments for multiple sclerosis. Moreover, some clinical trials using natural extracts, nutritional supplements, and other agents have been performed with promising results. Pharmacology has provided neurologists with a broad array of drugs of proven effectiveness; moreover, research results of recent years make it likely that therapeutic possibilities will increase notably in the future. (Author)

  1. Quantifying submarine groundwater discharge in the coastal zone via multiple methods

    International Nuclear Information System (INIS)

    Burnett, W.C.; Aggarwal, P.K.; Aureli, A.; Bokuniewicz, H.; Cable, J.E.; Charette, M.A.; Kontar, E.; Krupa, S.; Kulkarni, K.M.; Loveless, A.; Moore, W.S.; Oberdorfer, J.A.; Oliveira, J.; Ozyurt, N.; Povinec, P.; Privitera, A.M.G.; Rajar, R.; Ramessur, R.T.; Scholten, J.; Stieglitz, T.; Taniguchi, M.; Turner, J.V.

    2006-01-01

    Submarine groundwater discharge (SGD) is now recognized as an important pathway between land and sea. As such, this flow may contribute to the biogeochemical and other marine budgets of near-shore waters. These discharges typically display significant spatial and temporal variability making assessments difficult. Groundwater seepage is patchy, diffuse, temporally variable, and may involve multiple aquifers. Thus, the measurement of its magnitude and associated chemical fluxes is a challenging enterprise. A joint project of UNESCO and the International Atomic Energy Agency (IAEA) has examined several methods of SGD assessment and carried out a series of five intercomparison experiments in different hydrogeologic environments (coastal plain, karst, glacial till, fractured crystalline rock, and volcanic terrains). This report reviews the scientific and management significance of SGD, measurement approaches, and the results of the intercomparison experiments. We conclude that while the process is essentially ubiquitous in coastal areas, the assessment of its magnitude at any one location is subject to enough variability that measurements should be made by a variety of techniques and over large enough spatial and temporal scales to capture the majority of these changing conditions. We feel that all the measurement techniques described here are valid although they each have their own advantages and disadvantages. It is recommended that multiple approaches be applied whenever possible. In addition, a continuing effort is required in order to capture long-period tidal fluctuations, storm effects, and seasonal variations

  2. Quantifying submarine groundwater discharge in the coastal zone via multiple methods

    Energy Technology Data Exchange (ETDEWEB)

    Burnett, W.C. [Department of Oceanography, Florida State University, Tallahassee, FL 32306 (United States); Aggarwal, P.K.; Kulkarni, K.M. [Isotope Hydrology Section, International Atomic Energy Agency (Austria); Aureli, A. [Department Water Resources Management, University of Palermo, Catania (Italy); Bokuniewicz, H. [Marine Science Research Center, Stony Brook University (United States); Cable, J.E. [Department Oceanography, Louisiana State University (United States); Charette, M.A. [Department Marine Chemistry, Woods Hole Oceanographic Institution (United States); Kontar, E. [Shirshov Institute of Oceanology (Russian Federation); Krupa, S. [South Florida Water Management District (United States); Loveless, A. [University of Western Australia (Australia); Moore, W.S. [Department Geological Sciences, University of South Carolina (United States); Oberdorfer, J.A. [Department Geology, San Jose State University (United States); Oliveira, J. [Instituto de Pesquisas Energeticas e Nucleares (Brazil); Ozyurt, N. [Department Geological Engineering, Hacettepe (Turkey); Povinec, P.; Scholten, J. [Marine Environment Laboratory, International Atomic Energy Agency (Monaco); Privitera, A.M.G. [U.O. 4.17 of the G.N.D.C.I., National Research Council (Italy); Rajar, R. [Faculty of Civil and Geodetic Engineering, University of Ljubljana (Slovenia); Ramessur, R.T. [Department Chemistry, University of Mauritius (Mauritius); Stieglitz, T. [Mathematical and Physical Sciences, James Cook University (Australia); Taniguchi, M. [Research Institute for Humanity and Nature (Japan); Turner, J.V. [CSIRO, Land and Water, Perth (Australia)

    2006-08-31

    Submarine groundwater discharge (SGD) is now recognized as an important pathway between land and sea. As such, this flow may contribute to the biogeochemical and other marine budgets of near-shore waters. These discharges typically display significant spatial and temporal variability making assessments difficult. Groundwater seepage is patchy, diffuse, temporally variable, and may involve multiple aquifers. Thus, the measurement of its magnitude and associated chemical fluxes is a challenging enterprise. A joint project of UNESCO and the International Atomic Energy Agency (IAEA) has examined several methods of SGD assessment and carried out a series of five intercomparison experiments in different hydrogeologic environments (coastal plain, karst, glacial till, fractured crystalline rock, and volcanic terrains). This report reviews the scientific and management significance of SGD, measurement approaches, and the results of the intercomparison experiments. We conclude that while the process is essentially ubiquitous in coastal areas, the assessment of its magnitude at any one location is subject to enough variability that measurements should be made by a variety of techniques and over large enough spatial and temporal scales to capture the majority of these changing conditions. We feel that all the measurement techniques described here are valid although they each have their own advantages and disadvantages. It is recommended that multiple approaches be applied whenever possible. In addition, a continuing effort is required in order to capture long-period tidal fluctuations, storm effects, and seasonal variations. (author)

  3. An Intuitionistic Multiplicative ORESTE Method for Patients’ Prioritization of Hospitalization

    Directory of Open Access Journals (Sweden)

    Cheng Zhang

    2018-04-01

    Full Text Available The pressure on sickbeds is a common and intractable issue in public hospitals in China due to the large population. Assigning the order of hospitalization of patients is difficult because of complex patient information such as disease type, emergency degree, and severity. It is critical to rank the patients taking full account of various factors. However, most of the evaluation criteria for hospitalization are qualitative, and classical ranking methods cannot derive the detailed relations between patients based on these criteria. Motivated by this, a comprehensive multiple criteria decision making method named the intuitionistic multiplicative ORESTE (organisation, rangement et synthèse de données relationnelles, in French) was proposed to handle the problem. The subjective and objective weights of criteria were considered in the proposed method. To do so, first, considering the vagueness of human perceptions of the alternatives, an intuitionistic multiplicative preference relation model is applied to represent the experts' preferences over the pairwise alternatives with respect to the predetermined criteria. Then, a correlation coefficient-based weight determining method is developed to derive the objective weights of criteria. This method can overcome the biased results caused by highly-correlated criteria. Afterwards, we improved the general ranking method, ORESTE, by introducing a new score function which considers both the subjective and objective weights of criteria. An intuitionistic multiplicative ORESTE method was then developed and further illustrated by a case study concerning the prioritization of patients.

  4. Normalization method for metabolomics data using optimal selection of multiple internal standards

    Directory of Open Access Journals (Sweden)

    Yetukuri Laxman

    2007-03-01

    Full Text Available Abstract Background Success of metabolomics as a phenotyping platform largely depends on its ability to detect various sources of biological variability. Removal of platform-specific sources of variability such as systematic error is therefore one of the foremost priorities in data preprocessing. However, the chemical diversity of molecular species included in typical metabolic profiling experiments leads to different responses to variations in experimental conditions, making normalization a very demanding task. Results With the aim of removing unwanted systematic variation, we present an approach that utilizes variability information from multiple internal standard compounds to find an optimal normalization factor for each individual molecular species detected by the metabolomics approach (NOMIS). We demonstrate the method on mouse liver lipidomic profiles using Ultra Performance Liquid Chromatography coupled to high resolution mass spectrometry, and compare its performance to two commonly utilized normalization methods: normalization by l2 norm and by retention time region specific standard compound profiles. The NOMIS method proved superior in its ability to reduce the effect of systematic error across the full spectrum of metabolite peaks. We also demonstrate that the method can be used to select the best combinations of standard compounds for normalization. Conclusion Depending on experiment design and biological matrix, the NOMIS method is applicable either as a one-step normalization method or as a two-step method where the normalization parameters, influenced by the variability of internal standard compounds and their correlation to metabolites, are first calculated from a study conducted under repeatability conditions. The method can also be used in the analytical development of metabolomics methods by helping to select the best combinations of standard compounds for a particular biological matrix and analytical platform.
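
    A stripped-down version of the idea, removing for each metabolite the between-sample variation that is linearly predictable from multiple internal standards, can be sketched as follows (ordinary least squares in place of NOMIS's actual estimation procedure; inputs are our assumed log-intensity matrices):

    ```python
    import numpy as np

    def nomis_like_normalize(metabolites, standards):
        """Remove the component of between-sample variation in each metabolite
        that is predictable from the internal standard profiles.

        metabolites : (samples x metabolites) log-intensity matrix
        standards   : (samples x standards) log-intensity matrix
        """
        S = standards - standards.mean(0)            # centred standard profiles
        X = np.column_stack([np.ones(len(S)), S])    # design with intercept
        beta, *_ = np.linalg.lstsq(X, metabolites, rcond=None)
        fitted = X @ beta                            # systematic part per sample
        # keep each metabolite's overall level, strip the systematic variation
        return metabolites - fitted + metabolites.mean(0)
    ```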

  5. Visualizing Matrix Multiplication

    Science.gov (United States)

    Daugulis, Peteris; Sondore, Anita

    2018-01-01

    Efficient visualizations of computational algorithms are important tools for students, educators, and researchers. In this article, we point out an innovative visualization technique for matrix multiplication. This method differs from the standard, formal approach by using block matrices to make computations more visual. We find this method a…
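
    The block rule the article visualizes can be checked in a few lines; the following is a minimal numpy sketch of block-partitioned multiplication, not the authors' visualization itself.

      import numpy as np

      # Block rule: C[i][j] = sum_k A[i][k] @ B[k][j], formally identical to
      # the scalar entry rule c_ij = sum_k a_ik * b_kj.
      A = np.arange(16, dtype=float).reshape(4, 4)
      B = np.eye(4) + np.ones((4, 4))

      def blocks(M, s):
          # split a square matrix into an s x s grid of equal square blocks
          n = M.shape[0] // s
          return [[M[i*n:(i+1)*n, j*n:(j+1)*n] for j in range(s)] for i in range(s)]

      Ab, Bb = blocks(A, 2), blocks(B, 2)
      Cb = [[sum(Ab[i][k] @ Bb[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

      assert np.allclose(np.block(Cb), A @ B)   # block product equals direct product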

  6. Using the Direct Sampling Multiple-Point Geostatistical Method for Filling Gaps in Landsat 7 ETM+ SLC-off Imagery

    KAUST Repository

    Yin, Gaohong

    2016-05-01

    Since the failure of the Scan Line Corrector (SLC) instrument on Landsat 7, observable gaps occur in the acquired Landsat 7 imagery, impacting the spatial continuity of the observed imagery. Due to the high geometric and radiometric accuracy provided by Landsat 7, a number of approaches have been proposed to fill the gaps. However, all proposed approaches have evident constraints for universal application. The main issues in gap-filling are an inability to describe continuity features such as meandering streams or roads, or to maintain the shape of small objects when filling gaps in heterogeneous areas. The aim of the study is to validate the feasibility of using the Direct Sampling multiple-point geostatistical method, which has been shown to reconstruct complicated geological structures satisfactorily, to fill Landsat 7 gaps. The Direct Sampling method uses a conditional stochastic resampling of known locations within a target image to fill gaps and can generate multiple reconstructions for one simulation case. The method was examined across a range of land cover types including deserts, sparse rural areas, dense farmlands, urban areas, braided rivers and coastal areas to demonstrate its capacity to recover gaps accurately for various land cover types. The prediction accuracy of the Direct Sampling method was also compared with other gap-filling approaches, which have previously been demonstrated to offer satisfactory results, under both homogeneous and heterogeneous area situations. The results show that the Direct Sampling method provides sufficiently accurate predictions for a variety of land cover types, from homogeneous to heterogeneous areas. Likewise, it exhibits superior performance when used to fill gaps in heterogeneous land cover types without an input image, or with an input image that is temporally far from the target image, in comparison with other gap-filling approaches.
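
    A toy sketch of the core Direct Sampling loop (conditional resampling of known pixels by neighborhood matching) is given below; the patch size, candidate count and acceptance threshold are illustrative choices, not the study's settings.

      import numpy as np

      rng = np.random.default_rng(0)

      def patch(img, known, i, j, half):
          # neighborhood around (i, j), NaN outside the grid or where unknown
          h, w = img.shape
          p = np.full((2*half + 1, 2*half + 1), np.nan)
          for di in range(-half, half + 1):
              for dj in range(-half, half + 1):
                  r, c = i + di, j + dj
                  if 0 <= r < h and 0 <= c < w and known[r, c]:
                      p[di + half, dj + half] = img[r, c]
          return p

      def ds_fill(img, gaps, half=2, n_cand=200, accept=1e-3):
          # For each gap pixel, randomly probe known pixels and copy the value
          # whose surrounding pattern best matches the pattern around the gap;
          # stop the scan early once a candidate is closer than `accept`.
          out, known = img.copy(), ~gaps
          pool = np.argwhere(known)
          for i, j in np.argwhere(gaps):
              target = patch(out, known, i, j, half)
              best_val, best_d = out[tuple(pool[0])], np.inf
              for r, c in pool[rng.integers(0, len(pool), n_cand)]:
                  cand = patch(out, known, r, c, half)
                  overlap = ~np.isnan(target) & ~np.isnan(cand)
                  if overlap.any():
                      d = np.mean((target[overlap] - cand[overlap]) ** 2)
                      if d < best_d:
                          best_val, best_d = out[r, c], d
                          if d <= accept:
                              break
              out[i, j], known[i, j] = best_val, True
          return out

      # toy image with a smooth gradient and a square gap
      img = np.add.outer(np.linspace(0, 1, 40), np.linspace(0, 1, 40))
      gaps = np.zeros_like(img, dtype=bool)
      gaps[15:22, 15:22] = True
      filled = ds_fill(np.where(gaps, np.nan, img), gaps)
      print(np.abs(filled[gaps] - img[gaps]).mean())   # mean absolute fill error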

  7. The optimal approach of detecting stochastic gravitational wave from string cosmology using multiple detectors

    International Nuclear Information System (INIS)

    Fan Xilong; Zhu Zonghong

    2008-01-01

    String cosmology models predict a relic background of gravitational waves produced during the dilaton-driven inflation. Its spectrum is most likely to be detected by ground-based gravitational wave laser interferometers (IFOs), like LIGO, Virgo and GEO, as the energy density grows rapidly with frequency. We show the ranges of the parameters underlying the string cosmology model using two approaches, associated with a 5% false alarm rate and a 95% detection rate. The results show that the approach of combining multiple pairs of IFOs is better than the approach of directly combining the outputs of multiple IFOs for LIGO-H, LIGO-L, Virgo and GEO.

  8. Method for measuring multiple scattering corrections between liquid scintillators

    Energy Technology Data Exchange (ETDEWEB)

    Verbeke, J.M., E-mail: verbeke2@llnl.gov; Glenn, A.M., E-mail: glenn22@llnl.gov; Keefer, G.J., E-mail: keefer1@llnl.gov; Wurtz, R.E., E-mail: wurtz1@llnl.gov

    2016-07-21

    A time-of-flight method is proposed to experimentally quantify the fractions of neutrons scattering between scintillators. An array of scintillators is characterized in terms of crosstalk with this method by measuring a californium source, for different neutron energy thresholds. The spectral information recorded by the scintillators can be used to estimate the fractions of neutrons multiple scattering. With the help of a correction to Feynman's point model theory to account for multiple scattering, these fractions can in turn improve the mass reconstruction of fissile materials under investigation.

  9. Search Strategy of Detector Position For Neutron Source Multiplication Method by Using Detected-Neutron Multiplication Factor

    International Nuclear Information System (INIS)

    Endo, Tomohiro

    2011-01-01

    In this paper, an alternative definition of a neutron multiplication factor, the detected-neutron multiplication factor k_det, is introduced for the neutron source multiplication method (NSM). By using k_det, a search strategy for an appropriate detector position for NSM is also proposed. The NSM is one of the practical subcritical measurement techniques, i.e., the NSM does not require any special equipment other than a stationary external neutron source and an ordinary neutron detector. Additionally, the NSM is based on steady-state analysis, so this technique is very suitable for quasi real-time measurement. It is noted that correction factors play important roles in accurately estimating subcriticality from the measured neutron count rates. The present paper aims to clarify how to correct the subcriticality measured by the NSM, the physical meaning of the correction factors, and how to reduce the impact of the correction factors by setting a neutron detector at an appropriate position.

  10. Design of multiple representations e-learning resources based on a contextual approach for the basic physics course

    Science.gov (United States)

    Bakri, F.; Muliyati, D.

    2018-05-01

    This research aims to design e-learning resources with multiple representations based on a contextual approach for the Basic Physics Course. The research uses research and development methods following the Dick & Carey strategy. The development was carried out in the digital laboratory of the Physics Education Department, Mathematics and Science Faculty, Universitas Negeri Jakarta. The product development process with the Dick & Carey strategy produced an e-learning design for the Basic Physics Course that is presented in multiple representations within a contextual learning syntax. The representations used in the design of basic physics learning include: concept maps, video, figures, data tables of experiment results, charts of data tables, verbal explanations, mathematical equations, example problems and solutions, and exercises. The multiple representations are presented in the form of contextual learning through the stages: relating, experiencing, applying, transferring, and cooperating.

  11. Multiple independent identification decisions: a method of calibrating eyewitness identifications.

    Science.gov (United States)

    Pryke, Sean; Lindsay, R C L; Dysart, Jennifer E; Dupuis, Paul

    2004-02-01

    Two experiments (N = 147 and N = 90) explored the use of multiple independent lineups to identify a target seen live. In Experiment 1, simultaneous face, body, and sequential voice lineups were used. In Experiment 2, sequential face, body, voice, and clothing lineups were used. Both studies demonstrated that multiple identifications (by the same witness) from independent lineups of different features are highly diagnostic of suspect guilt (G. L. Wells & R. C. L. Lindsay, 1980). The number of suspect and foil selections from multiple independent lineups provides a powerful method of calibrating the accuracy of eyewitness identification. Implications for use of current methods are discussed. ((c) 2004 APA, all rights reserved)

  12. The multiple imputation method: a case study involving secondary data analysis.

    Science.gov (United States)

    Walani, Salimah R; Cleland, Charles M

    2015-05-01

    To illustrate, with the example of a secondary data analysis study, the use of the multiple imputation method to replace missing data. Most large public datasets have missing data, which need to be handled by researchers conducting secondary data analysis studies. Multiple imputation is a technique widely used to replace missing values while preserving the sample size and sampling variability of the data. The 2004 National Sample Survey of Registered Nurses. The authors created a model to impute missing values using the chained equation method. They used imputation diagnostics procedures and conducted regression analysis of imputed data to determine the differences between the log hourly wages of internationally educated and US-educated registered nurses. The authors used multiple imputation procedures to replace missing values in a large dataset with 29,059 observations. Five multiply imputed datasets were created. Imputation diagnostics using time series and density plots showed that imputation was successful. The authors also present an example of the use of multiply imputed datasets to conduct regression analysis to answer a substantive research question. Multiple imputation is a powerful technique for imputing missing values in large datasets while preserving the sample size and variance of the data. Even though the chained equation method involves complex statistical computations, recent innovations in software and computation have made it possible for researchers to conduct this technique on large datasets. The authors recommend nurse researchers use multiple imputation methods for handling missing data to improve the statistical power and external validity of their studies.
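
    A minimal Python analogue of the workflow (the study used chained equations on survey data; here sklearn's IterativeImputer with posterior sampling stands in, and the data are synthetic):

      import numpy as np
      from sklearn.experimental import enable_iterative_imputer  # noqa: F401
      from sklearn.impute import IterativeImputer
      from sklearn.linear_model import LinearRegression

      # Synthetic data: column 1 is missing for roughly 30% of rows.
      rng = np.random.default_rng(42)
      X = rng.normal(size=(500, 2))
      X[:, 1] += 0.8 * X[:, 0]
      y = 1.5 * X[:, 1] + rng.normal(scale=0.5, size=500)
      X[rng.random(500) < 0.3, 1] = np.nan

      # Draw m imputed datasets from the predictive distribution, fit the
      # analysis model on each, and pool the estimates (Rubin-style pooling).
      m, coefs = 5, []
      for seed in range(m):
          imputed = IterativeImputer(sample_posterior=True,
                                     random_state=seed).fit_transform(X)
          coefs.append(LinearRegression().fit(imputed, y).coef_[1])

      print("pooled estimate:", np.mean(coefs))
      print("between-imputation variance:", np.var(coefs, ddof=1))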

  13. Gaussian Multiple Instance Learning Approach for Mapping the Slums of the World Using Very High Resolution Imagery

    Energy Technology Data Exchange (ETDEWEB)

    Vatsavai, Raju [ORNL

    2013-01-01

    In this paper, we present a computationally efficient algorithm based on multiple instance learning for mapping informal settlements (slums) using very high-resolution remote sensing imagery. From a remote sensing perspective, informal settlements share unique spatial characteristics that distinguish them from other urban structures like industrial, commercial, and formal residential settlements. However, regular pattern recognition and machine learning methods, which are predominantly single-instance or per-pixel classifiers, often fail to accurately map the informal settlements as they do not capture the complex spatial patterns. To overcome these limitations we employed a multiple instance based machine learning approach, where groups of contiguous pixels (image patches) are modeled as generated by a Gaussian distribution. We have conducted several experiments on very high-resolution satellite imagery, representing four unique geographic regions across the world. Our method showed consistent improvement in accurately identifying informal settlements.
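
    A toy sketch of the underlying idea, with synthetic data and hypothetical class names: each patch is a bag of pixel feature vectors, each class is summarized by a fitted Gaussian, and a patch is assigned to the class with the highest average per-pixel log-likelihood. This illustrates the modeling assumption, not the paper's algorithm.

      import numpy as np
      from scipy.stats import multivariate_normal

      def fit_class(patches):
          # pool all pixels of a class and fit one Gaussian (mean, covariance)
          pixels = np.vstack([p.reshape(-1, p.shape[-1]) for p in patches])
          cov = np.cov(pixels, rowvar=False) + 1e-6 * np.eye(pixels.shape[1])
          return pixels.mean(axis=0), cov

      def classify(patch, models):
          # the bag (patch) goes to the class with the best mean log-likelihood
          pix = patch.reshape(-1, patch.shape[-1])
          scores = {c: multivariate_normal(mean=m, cov=S).logpdf(pix).mean()
                    for c, (m, S) in models.items()}
          return max(scores, key=scores.get)

      rng = np.random.default_rng(1)
      informal = [rng.normal(0.0, 1.0, (8, 8, 3)) for _ in range(20)]   # hypothetical
      formal = [rng.normal(2.0, 0.5, (8, 8, 3)) for _ in range(20)]     # classes
      models = {"informal": fit_class(informal), "formal": fit_class(formal)}
      print(classify(rng.normal(2.0, 0.5, (8, 8, 3)), models))          # -> formal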

  14. Receptivity to Kinetic Fluctuations: A Multiple Scales Approach

    Science.gov (United States)

    Edwards, Luke; Tumin, Anatoli

    2017-11-01

    The receptivity of high-speed compressible boundary layers to kinetic fluctuations (KF) is considered within the framework of fluctuating hydrodynamics. The formulation is based on the idea that KF-induced dissipative fluxes may lead to the generation of unstable modes in the boundary layer. Fedorov and Tumin solved the receptivity problem using an asymptotic matching approach which utilized a resonant inner solution in the vicinity of the generation point of the second Mack mode. Here we take a slightly more general approach based on a multiple scales WKB ansatz which requires fewer assumptions about the behavior of the stability spectrum. The approach is modeled after the one taken by Luchini to study low speed incompressible boundary layers over a swept wing. The new framework is used to study examples of high-enthalpy, flat plate boundary layers whose spectra exhibit nuanced behavior near the generation point, such as first mode instabilities and near-neutral evolution over moderate length scales. The configurations considered exhibit supersonic unstable second Mack modes despite the temperature ratio Tw/Te > 1, contrary to prior expectations. Supported by AFOSR and ONR.

  15. A hybrid approach to parameter identification of linear delay differential equations involving multiple delays

    Science.gov (United States)

    Marzban, Hamid Reza

    2018-05-01

    In this paper, we are concerned with the parameter identification of linear time-invariant systems containing multiple delays. The approach is based upon a hybrid of block-pulse functions and Legendre polynomials. The convergence of the proposed procedure is established and an upper error bound with respect to the L2-norm associated with the hybrid functions is derived. The problem under consideration is first transformed into a system of algebraic equations. The least squares technique is then employed for identification of the desired parameters. Several multi-delay systems of varying complexity are investigated to evaluate the performance and capability of the proposed approximation method. It is shown that the proposed approach is also applicable to a class of nonlinear multi-delay systems. It is demonstrated that the suggested procedure provides accurate results for the desired parameters.

  16. A Fisher Kernel Approach for Multiple Instance Based Object Retrieval in Video Surveillance

    Directory of Open Access Journals (Sweden)

    MIRONICA, I.

    2015-11-01

    Full Text Available This paper presents an automated surveillance system that exploits the Fisher Kernel representation in the context of a multiple-instance object retrieval task. The proposed algorithm has the main purpose of tracking a list of persons in several video sources, using only a few training examples. In the first step, the Fisher Kernel representation describes a set of features as the gradient of the log-likelihood of the generative probability distribution that models the feature distribution. Then, we learn the generative probability distribution over all features extracted from a reduced set of relevant frames. The proposed approach shows significant improvements and we demonstrate that Fisher kernels are well suited for this task. We demonstrate the generality of our approach in terms of features by conducting an extensive evaluation with a broad range of keypoint features. Also, we evaluate our method on two standard video surveillance datasets, attaining superior results compared to state-of-the-art object recognition algorithms.

  17. Using Combinatorial Approach to Improve Students' Learning of the Distributive Law and Multiplicative Identities

    Science.gov (United States)

    Tsai, Yu-Ling; Chang, Ching-Kuch

    2009-01-01

    This article reports an alternative approach, called the combinatorial model, to learning multiplicative identities, and investigates the effects of implementing results for this alternative approach. Based on realistic mathematics education theory, the new instructional materials or modules of the new approach were developed by the authors. From…

  18. Computing multiple periodic solutions of nonlinear vibration problems using the harmonic balance method and Groebner bases

    Science.gov (United States)

    Grolet, Aurelien; Thouverez, Fabrice

    2015-02-01

    This paper is devoted to the study of vibration of mechanical systems with geometric nonlinearities. The harmonic balance method is used to derive systems of polynomial equations whose solutions give the frequency components of the possible steady states. Groebner basis methods are used for computing all solutions of the polynomial systems. This approach allows one to reduce the complete system to a unique polynomial equation in one variable driving all solutions of the problem. In addition, in order to decrease the number of variables, we propose to first work on the undamped system, and recover solutions of the damped system using a continuation on the damping parameter. The search for multiple solutions is illustrated on a simple system, where the influence of the number of retained harmonics is studied. Finally, the procedure is applied to a simple cyclic system and we give a representation of the multiple states versus frequency.
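
    A minimal sympy sketch of the approach on the smallest possible example, an undamped Duffing oscillator with a one-harmonic ansatz (parameter values are illustrative, not from the paper); the Groebner basis step is trivial here, but it is what reduces larger harmonic-balance systems to a single univariate polynomial.

      import sympy as sp

      # Duffing oscillator x'' + x + eps*x**3 = F*cos(w*t), ansatz x = a*cos(w*t).
      # Balancing the cos(w*t) terms gives one polynomial equation in the
      # amplitude a, whose real roots are the coexisting steady states.
      a = sp.symbols('a', real=True)
      eps, F, w = sp.Rational(1, 10), sp.Rational(1, 5), sp.Rational(6, 5)

      balance = (1 - w**2)*a + sp.Rational(3, 4)*eps*a**3 - F
      G = sp.groebner([balance], a, order='lex')

      amplitudes = sp.real_roots(sp.Poly(G.exprs[0], a))
      print([r.evalf(5) for r in amplitudes])   # three coexisting amplitudes here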

  19. Galerkin projection methods for solving multiple related linear systems

    Energy Technology Data Exchange (ETDEWEB)

    Chan, T.F.; Ng, M.; Wan, W.L.

    1996-12-31

    We consider using Galerkin projection methods for solving multiple related linear systems A^(i) x^(i) = b^(i) for 1 ≤ i ≤ s, where the A^(i) and b^(i) are different in general. We start with the special case where A^(i) = A and A is symmetric positive definite. The method generates a Krylov subspace from a set of direction vectors obtained by solving one of the systems, called the seed system, by the CG method and then projects the residuals of other systems orthogonally onto the generated Krylov subspace to get the approximate solutions. The whole process is repeated with another unsolved system as a seed until all the systems are solved. We observe in practice a super-convergence behaviour of the CG process of the seed system when compared with the usual CG process. We also observe that only a small number of restarts is required to solve all the systems if the right-hand sides are close to each other. These two features together make the method particularly effective. In this talk, we give theoretical proof to justify these observations. Furthermore, we combine the advantages of this method and the block CG method and propose a block extension of this single seed method. The above procedure can actually be modified for solving multiple linear systems A^(i) x^(i) = b^(i), where the A^(i) are now different. We can also extend the previous analytical results to this more general case. Applications of this method to multiple related linear systems arising from image restoration and recursive least squares computations are considered as examples.
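
    A numpy sketch of the projection step under simplifying assumptions (one shared SPD matrix, a Krylov basis built by explicit orthogonalization rather than the CG recurrence); sizes and data are illustrative.

      import numpy as np

      rng = np.random.default_rng(0)
      n, k = 200, 30
      M = rng.normal(size=(n, n))
      A = M @ M.T + n * np.eye(n)                    # SPD matrix shared by all systems
      b_seed = rng.normal(size=n)
      b_other = b_seed + 0.01 * rng.normal(size=n)   # a nearby right-hand side

      # Orthonormal Krylov basis generated from the seed right-hand side.
      Q = np.zeros((n, k))
      Q[:, 0] = b_seed / np.linalg.norm(b_seed)
      for j in range(1, k):
          v = A @ Q[:, j - 1]
          v -= Q[:, :j] @ (Q[:, :j].T @ v)           # full re-orthogonalization
          Q[:, j] = v / np.linalg.norm(v)

      # Galerkin projection: the second system is solved only in the subspace.
      H = Q.T @ A @ Q
      x_proj = Q @ np.linalg.solve(H, Q.T @ b_other)
      x_true = np.linalg.solve(A, b_other)
      print(np.linalg.norm(x_proj - x_true) / np.linalg.norm(x_true))  # small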

  20. Users in the Driver's Seat: A New Approach to Classifying Teaching Methods in a University Repository

    NARCIS (Netherlands)

    Neumann, Susanne; Oberhuemer, Petra; Koper, Rob

    2009-01-01

    Neumann, S., Oberhuemer, P., & Koper, R. (2009). Users in the Driver's Seat: A New Approach to Classifying Teaching Methods in a University Repository. In U. Cress, V. Dimitrova & M. Specht (Eds.), Learning in the Synergy of Multiple Disciplines. Proceedings of the Fourth European Conference on

  1. A Monte Carlo Study on Multiple Output Stochastic Frontiers: Comparison of Two Approaches

    DEFF Research Database (Denmark)

    Henningsen, Geraldine; Henningsen, Arne; Jensen, Uwe

    In the estimation of multiple output technologies in a primal approach, the main question is how to handle the multiple outputs. Often an output distance function is used, where the classical approach is to exploit its homogeneity property by selecting one output quantity as the dependent variable, dividing all other output quantities by the selected output quantity, and using these ratios as regressors (OD). Another approach is the stochastic ray production frontier (SR), which transforms the output quantities into their Euclidean distance as the dependent variable and their polar coordinates as additional regressors. We compare the performance of both specifications for the case of a Translog output distance function with respect to different common statistical problems as well as problems arising as a consequence of zero values in the output quantities. Although our results partly show clear reactions to statistical misspecifications

  2. A Multiple Mobility Support Approach (MMSA Based on PEAS for NCW in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Bong-Joo Koo

    2011-01-01

    Full Text Available Wireless Sensor Networks (WSNs) can be implemented as one of the sensor systems in Network Centric Warfare (NCW). Mobility support and energy efficiency are key concerns for this application, due to multiple mobile users and stimuli in a real combat field. However, mobility support approaches that can be adopted in this circumstance are rare. This paper proposes a Multiple Mobility Support Approach (MMSA) based on Probing Environment and Adaptive Sleeping (PEAS) to support the simultaneous mobility of both multiple users and stimuli by sharing the information of stimuli in WSNs. Simulations using Qualnet are conducted, showing that MMSA can support multiple mobile users and stimuli with good energy efficiency. It is expected that the proposed MMSA can be applied to a real combat field.

  3. Sensitivity studies on the approaches for addressing multiple initiating events in fire events PSA

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Dae Il; Lim, Ho Gon [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    A single fire event within a fire compartment or a fire scenario can cause multiple initiating events (IEs). As an example, a fire in a turbine building fire area can cause a loss of main feed-water (LOMF) and loss of off-site power (LOOP) IEs. Previous domestic fire events PSA had considered only the most severe initiating event among multiple initiating events. NUREG/CR-6850 and the ANS/ASME PRA Standard require that multiple IEs be addressed in fire events PSA. In this paper, sensitivity studies on the approaches for addressing multiple IEs in fire events PSA for Hanul Unit 3 were performed and their results are presented. From the sensitivity analysis results, we find that incorporating multiple IEs into the fire events PSA model results in an increase of the core damage frequency (CDF) and may lead to the generation of duplicate cutsets. Multiple IEs can also occur in internal flooding events or other external events such as seismic events. They should be considered in the construction of PSA models in order to realistically estimate the risk due to flooding or seismic events.

  4. Isothermal multiple displacement amplification: a methodical approach enhancing molecular routine diagnostics of microcarcinomas and small biopsies.

    Science.gov (United States)

    Mairinger, Fabian D; Walter, Robert Fh; Vollbrecht, Claudia; Hager, Thomas; Worm, Karl; Ting, Saskia; Wohlschläger, Jeremias; Zarogoulidis, Paul; Zarogoulidis, Konstantinos; Schmid, Kurt W

    2014-01-01

    Isothermal multiple displacement amplification (IMDA) can be a powerful tool in molecular routine diagnostics for homogeneous and sequence-independent whole-genome amplification of notably small tumor samples, eg, microcarcinomas and biopsies containing a small amount of tumor. Currently, this method is not well established in pathology laboratories. We designed a study to confirm the feasibility and convenience of this method for routine diagnostics with formalin-fixed, paraffin-embedded samples prepared by laser-capture microdissection. A total of 250 μg DNA (concentration 5 μg/μL) was generated by amplification over a period of 8 hours with a material input of approximately 25 cells, approximately equivalent to 175 pg of genomic DNA. In the generated DNA, a representation of all chromosomes could be shown and the presence of elected genes relevant for diagnosis in clinical samples could be proven. Mutational analysis of clinical samples could be performed without any difficulty and showed concordance with earlier diagnostic findings. We established the feasibility and convenience of IMDA for routine diagnostics. We also showed that small amounts of DNA, which were not analyzable with current molecular methods, could be sufficient for a wide field of applications in molecular routine diagnostics when they are preamplified with IMDA.

  5. Nonlinear coupled mode approach for modeling counterpropagating solitons in the presence of disorder-induced multiple scattering in photonic crystal waveguides

    Science.gov (United States)

    Mann, Nishan; Hughes, Stephen

    2018-02-01

    We present the analytical and numerical details behind our recently published article [Phys. Rev. Lett. 118, 253901 (2017), 10.1103/PhysRevLett.118.253901], describing the impact of disorder-induced multiple scattering on counterpropagating solitons in photonic crystal waveguides. Unlike current nonlinear approaches using the coupled mode formalism, we account for the effects of intraunit cell multiple scattering. To solve the resulting system of coupled semilinear partial differential equations, we introduce a modified Crank-Nicolson-type norm-preserving implicit finite difference scheme inspired by the transfer matrix method. We provide estimates of the numerical dispersion characteristics of our scheme so that optimal step sizes can be chosen to either minimize numerical dispersion or to mimic the exact dispersion. We then show numerical results of a fundamental soliton propagating in the presence of multiple scattering to demonstrate that choosing a subunit cell spatial step size is critical in accurately capturing the effects of multiple scattering, and illustrate the stochastic nature of disorder by simulating soliton propagation in various instances of disordered photonic crystal waveguides. Our approach is easily extended to include a wide range of optical nonlinearities and is applicable to various photonic nanostructures where power propagation is bidirectional, either by choice, or as a result of multiple scattering.

  6. Symbolic interactionism as a theoretical perspective for multiple method research.

    Science.gov (United States)

    Benzies, K M; Allen, M N

    2001-02-01

    Qualitative and quantitative research rely on different epistemological assumptions about the nature of knowledge. However, the majority of nurse researchers who use multiple method designs do not address the problem of differing theoretical perspectives. Traditionally, symbolic interactionism has been viewed as one perspective underpinning qualitative research, but it is also the basis for quantitative studies. Rooted in social psychology, symbolic interactionism has a rich intellectual heritage that spans more than a century. Underlying symbolic interactionism is the major assumption that individuals act on the basis of the meaning that things have for them. The purpose of this paper is to present symbolic interactionism as a theoretical perspective for multiple method designs with the aim of expanding the dialogue about new methodologies. Symbolic interactionism can serve as a theoretical perspective for conceptually clear and soundly implemented multiple method research that will expand the understanding of human health behaviour.

  7. An Efficient Implementation of Track-Oriented Multiple Hypothesis Tracker Using Graphical Model Approaches

    Directory of Open Access Journals (Sweden)

    Jinping Sun

    2017-01-01

    Full Text Available The multiple hypothesis tracker (MHT) is currently the preferred method for addressing the data association problem in multitarget tracking (MTT) applications. MHT seeks the most likely global hypothesis by enumerating all possible associations over time, which is equal to calculating the maximum a posteriori (MAP) estimate over the report data. Despite being a well-studied method, MHT remains challenging mostly because of the computational complexity of data association. In this paper, we describe an efficient method for solving the data association problem using graphical model approaches. The proposed method uses the graph representation to model the global hypothesis formation and subsequently applies an efficient message passing algorithm to obtain the MAP solution. Specifically, the graph representation of the data association problem is formulated as a maximum weight independent set problem (MWISP), which translates the best global hypothesis formation into finding the maximum weight independent set on the graph. Then, a max-product belief propagation (MPBP) inference algorithm is applied to seek the most likely global hypotheses with the purpose of avoiding a brute force hypothesis enumeration procedure. The simulation results show that the proposed MPBP-MHT method can achieve better tracking performance than other algorithms in challenging tracking situations.
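
    To illustrate the MWISP formulation (the paper solves it with max-product belief propagation; the greedy selection below is only a baseline sketch with made-up hypothesis scores):

      # Each node is a candidate track hypothesis with a weight (its score);
      # an edge joins two hypotheses that share a measurement and therefore
      # cannot both appear in the global hypothesis.
      def greedy_mwis(weights, conflicts):
          # weights: {node: score}; conflicts: set of frozenset({u, v}) edges
          chosen, banned = [], set()
          for node in sorted(weights, key=weights.get, reverse=True):
              if node not in banned:
                  chosen.append(node)
                  banned |= {v for e in conflicts if node in e for v in e}
          return chosen

      weights = {"T1": 4.2, "T2": 3.9, "T3": 3.1, "T4": 1.0}
      conflicts = {frozenset({"T1", "T2"}), frozenset({"T2", "T3"})}  # shared detections
      print(greedy_mwis(weights, conflicts))   # -> ['T1', 'T3', 'T4']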

  8. Improving automated multiple sclerosis lesion segmentation with a cascaded 3D convolutional neural network approach.

    Science.gov (United States)

    Valverde, Sergi; Cabezas, Mariano; Roura, Eloy; González-Villà, Sandra; Pareto, Deborah; Vilanova, Joan C; Ramió-Torrentà, Lluís; Rovira, Àlex; Oliver, Arnau; Lladó, Xavier

    2017-07-15

    In this paper, we present a novel automated method for White Matter (WM) lesion segmentation of Multiple Sclerosis (MS) patient images. Our approach is based on a cascade of two 3D patch-wise convolutional neural networks (CNN). The first network is trained to be more sensitive, revealing possible candidate lesion voxels, while the second network is trained to reduce the number of misclassified voxels coming from the first network. This cascaded CNN architecture tends to learn well from a small (n≤35) set of labeled data of the same MRI contrast, which can be very interesting in practice, given the difficulty of obtaining manual label annotations and the large amount of available unlabeled Magnetic Resonance Imaging (MRI) data. We evaluate the accuracy of the proposed method on the public MS lesion segmentation challenge MICCAI2008 dataset, comparing it with other state-of-the-art MS lesion segmentation tools. Furthermore, the proposed method is also evaluated on two private MS clinical datasets, where the performance of our method is compared with different recent publicly available state-of-the-art MS lesion segmentation methods. At the time of writing this paper, our method is the best ranked approach on the MICCAI2008 challenge, outperforming the rest of the 60 participant methods when using all the available input modalities (T1-w, T2-w and FLAIR), while still in the top rank (3rd position) when using only the T1-w and FLAIR modalities. On clinical MS data, our approach exhibits a significant increase in the accuracy of WM lesion segmentation when compared with the rest of the evaluated methods, also correlating highly (r≥0.97) with the expected lesion volume. Copyright © 2017 Elsevier Inc. All rights reserved.
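
    A minimal PyTorch sketch of the cascade idea; the tiny networks, patch size and thresholds below are illustrative stand-ins, not the authors' architecture.

      import torch
      import torch.nn as nn

      def patch_cnn(in_ch):
          # tiny 3D patch classifier standing in for each stage of the cascade
          return nn.Sequential(
              nn.Conv3d(in_ch, 32, 3), nn.ReLU(),
              nn.Conv3d(32, 64, 3), nn.ReLU(),
              nn.AdaptiveAvgPool3d(1), nn.Flatten(),
              nn.Linear(64, 2),
          )

      net1, net2 = patch_cnn(2), patch_cnn(2)          # e.g. T1-w + FLAIR channels

      patches = torch.randn(16, 2, 11, 11, 11)         # candidate 3D patches
      with torch.no_grad():
          p1 = net1(patches).softmax(dim=1)[:, 1]      # stage 1: high sensitivity
          candidates = patches[p1 > 0.1]               # low threshold keeps candidates
          if len(candidates):
              p2 = net2(candidates).softmax(dim=1)[:, 1]
              lesion_mask = p2 > 0.5                   # stage 2 prunes false positives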

  9. A Hybrid Fuzzy Time Series Approach Based on Fuzzy Clustering and Artificial Neural Network with Single Multiplicative Neuron Model

    Directory of Open Access Journals (Sweden)

    Ozge Cagcag Yolcu

    2013-01-01

    Full Text Available Particularly in recent years, artificial intelligence optimization techniques have been used to make fuzzy time series approaches more systematic and to improve forecasting performance. Besides, some fuzzy clustering methods and artificial neural networks with different structures are used in the fuzzification of observations and the determination of fuzzy relationships, respectively. In approaches considering the membership values, the membership values are determined subjectively, or fuzzy outputs of the system are obtained by assuming that there is a relation between membership values in the identification of the relation. This necessitates a defuzzification step and increases the model error. In this study, membership values were obtained more systematically by using the Gustafson-Kessel fuzzy clustering technique. The use of an artificial neural network with a single multiplicative neuron model in the identification of the fuzzy relation eliminated the architecture selection problem as well as the necessity for a defuzzification step, by constituting target values from real observations of the time series. The training of the artificial neural network with the single multiplicative neuron model, which is used in the fuzzy relation identification step, is carried out with particle swarm optimization. The proposed method is implemented using various time series and the results are compared with those of previous studies to demonstrate the performance of the proposed method.
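
    The single multiplicative neuron model itself is compact: the neuron aggregates its inputs by a product rather than a sum. A sketch with random (untrained) weights follows; in the paper the weights are trained by particle swarm optimization.

      import numpy as np

      def smn_forward(x, w, b):
          # single multiplicative neuron: y = f( prod_i (w_i * x_i + b_i) )
          net = np.prod(w * x + b)
          return 1.0 / (1.0 + np.exp(-net))            # logistic activation

      rng = np.random.default_rng(9)
      lags = 4                                          # lagged memberships as inputs
      w, b = rng.normal(size=lags), rng.normal(size=lags)
      x = rng.random(lags)                              # fuzzy memberships in [0, 1]
      print(smn_forward(x, w, b))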

  10. Multiple Feature Fusion Based on Co-Training Approach and Time Regularization for Place Classification in Wearable Video

    Directory of Open Access Journals (Sweden)

    Vladislavs Dovgalecs

    2013-01-01

    Full Text Available The analysis of video acquired with a wearable camera is a challenge that the multimedia community is facing with the proliferation of such sensors in various applications. In this paper, we focus on the problem of automatic visual place recognition in a weakly constrained environment, targeting the indexing of video streams by topological place recognition. We propose to combine several machine learning approaches in a time-regularized framework for image-based place recognition indoors. The framework combines the power of multiple visual cues and integrates the temporal continuity information of video. We extend it with a computationally efficient semisupervised method leveraging unlabeled video sequences for improved indexing performance. The proposed approach was applied to challenging video corpora. Experiments on a public and a real-world video sequence database show the gain brought by the different stages of the method.

  11. New approach to invariant-embedding methods in reactor physics calculations

    International Nuclear Information System (INIS)

    Forsbacka, M.J.; Rydin, R.A.

    1997-01-01

    Invariant-embedding methods offer an alternative approach to modeling physical phenomena and solving mathematical problems. Invariant embedding allows one to express traditional boundary-value problems as initial-value problems. In doing this, one effectively reformulates a problem to be solved in terms of an embedding parameter. In this paper, a hybrid method is presented in which Monte Carlo-generated response functions that describe the neutronic properties of local spatial cells are coupled together in a global reactor model using the invariant embedding methodology, where the system multiplication factor k_eff is used as the embedding parameter. Thus, k_eff is computed directly rather than as the result of a secondary eigenvalue calculation. Because the response functions can represent any arbitrary material distribution within a local cell, this method shows promise to accurately assess the change in reactivity due to core disruptive accidents and other changes in system configuration such as changing control rod positions. This paper reports a series of proof-of-concept calculations that assess this method.

  12. MULTIPLE OBJECTS

    Directory of Open Access Journals (Sweden)

    A. A. Bosov

    2015-04-01

    Full Text Available Purpose. The development of complicated techniques of production and management processes, information systems, computer science, applied objects of systems theory and others requires the improvement of mathematical methods and new approaches for research on application systems. The variety and diversity of subject systems make necessary the development of a model that generalizes the classical sets and their development, sets of sets. Multiple objects, unlike sets, are constructed from multiple structures and represented by structure and content. The aim of the work is the analysis of multiple structures generating multiple objects, and the further development of operations on these objects in application systems. Methodology. To achieve the objectives of the research, the structure of multiple objects is represented as a constructive trio consisting of media, signatures and axiomatics. A multiple object is determined by structure and content, and is represented by a hybrid superposition composed of sets, multisets, ordered sets (lists) and heterogeneous sets (sequences, corteges). Findings. In this paper we study the properties and characteristics of the components of hybrid multiple objects of complex systems, propose assessments of their complexity, and show the rules of internal and external operations on the objects. We introduce a relation of arbitrary order over multiple objects, and we define the description of functions and mappings on objects of multiple structures. Originality. In this paper we consider the development of multiple structures generating multiple objects. Practical value. The transition from abstract to subject multiple structures requires the transformation of the system and multiple objects. Transformation involves three successive stages: specification (binding to the domain), interpretation (multiple sites) and particularization (goals). The proposed approach to describing systems is based on hybrid sets

  13. Solution of Constrained Optimal Control Problems Using Multiple Shooting and ESDIRK Methods

    DEFF Research Database (Denmark)

    Capolei, Andrea; Jørgensen, John Bagterp

    2012-01-01

    As we consider stiff systems, implicit solvers with sensitivity computation capabilities for initial value problems must be used in the multiple shooting algorithm. Traditionally, multi-step methods based on the BDF algorithm have been used for such problems. The main novel contribution of this paper is the use of ESDIRK integration methods for solution of the initial value problems and the corresponding sensitivity equations arising in the multiple shooting algorithm. Compared to BDF-methods, ESDIRK-methods are advantageous in multiple shooting algorithms in which restarts and frequent discontinuities on each shooting interval are present. The ESDIRK methods are implemented using an inexact Newton method that reuses the factorization of the iteration matrix for the integration as well as the sensitivity computation. Numerical experiments are provided to demonstrate the algorithm.

  14. Multiple regression approach to predict turbine-generator output for Chinshan nuclear power plant

    International Nuclear Information System (INIS)

    Chan, Yea-Kuang; Tsai, Yu-Ching

    2017-01-01

    The objective of this study is to develop a turbine cycle model using the multiple regression approach to estimate the turbine-generator output for the Chinshan Nuclear Power Plant (NPP). The plant operating data was verified using a linear regression model with a corresponding 95% confidence interval for the operating data. In this study, the key parameters were selected as inputs for the multiple regression based turbine cycle model. The proposed model was used to estimate the turbine-generator output. The effectiveness of the proposed turbine cycle model was demonstrated by using plant operating data obtained from the Chinshan NPP Unit 2. The results show that this multiple regression based turbine cycle model can be used to accurately estimate the turbine-generator output. In addition, this study also provides an alternative approach with simple and easy features to evaluate the thermal performance for nuclear power plants.
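
    A hedged sketch of such a model with sklearn (feature names, values and the synthetic output relation are invented for illustration; the paper's key parameters and coefficients are not reproduced):

      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(7)
      n = 300
      throttle_p = rng.normal(6.6, 0.1, n)       # MPa, hypothetical input
      condenser_vac = rng.normal(96.0, 0.5, n)   # percent, hypothetical input
      feedwater_t = rng.normal(220.0, 2.0, n)    # deg C, hypothetical input
      X = np.column_stack([throttle_p, condenser_vac, feedwater_t])
      mw = (600 + 25*(throttle_p - 6.6) + 8*(condenser_vac - 96.0)
            + 0.5*(feedwater_t - 220.0) + rng.normal(0, 1.5, n))   # synthetic MWe

      model = LinearRegression().fit(X, mw)
      resid = mw - model.predict(X)
      band = 1.96 * resid.std(ddof=X.shape[1] + 1)   # rough 95% interval half-width
      print(model.coef_, model.intercept_, band)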

  15. Multiple regression approach to predict turbine-generator output for Chinshan nuclear power plant

    Energy Technology Data Exchange (ETDEWEB)

    Chan, Yea-Kuang; Tsai, Yu-Ching [Institute of Nuclear Energy Research, Taoyuan City, Taiwan (China). Nuclear Engineering Division

    2017-03-15

    The objective of this study is to develop a turbine cycle model using the multiple regression approach to estimate the turbine-generator output for the Chinshan Nuclear Power Plant (NPP). The plant operating data was verified using a linear regression model with a corresponding 95% confidence interval for the operating data. In this study, the key parameters were selected as inputs for the multiple regression based turbine cycle model. The proposed model was used to estimate the turbine-generator output. The effectiveness of the proposed turbine cycle model was demonstrated by using plant operating data obtained from the Chinshan NPP Unit 2. The results show that this multiple regression based turbine cycle model can be used to accurately estimate the turbine-generator output. In addition, this study also provides an alternative approach with simple and easy features to evaluate the thermal performance for nuclear power plants.

  16. Combining qualitative and quantitative operational research methods to inform quality improvement in pathways that span multiple settings

    Science.gov (United States)

    Crowe, Sonya; Brown, Katherine; Tregay, Jenifer; Wray, Jo; Knowles, Rachel; Ridout, Deborah A; Bull, Catherine; Utley, Martin

    2017-01-01

    Background Improving integration and continuity of care across sectors within resource constraints is a priority in many health systems. Qualitative operational research methods of problem structuring have been used to address quality improvement in services involving multiple sectors but not in combination with quantitative operational research methods that enable targeting of interventions according to patient risk. We aimed to combine these methods to augment and inform an improvement initiative concerning infants with congenital heart disease (CHD) whose complex care pathway spans multiple sectors. Methods Soft systems methodology was used to consider systematically changes to services from the perspectives of community, primary, secondary and tertiary care professionals and a patient group, incorporating relevant evidence. Classification and regression tree (CART) analysis of national audit datasets was conducted along with data visualisation designed to inform service improvement within the context of limited resources. Results A ‘Rich Picture’ was developed capturing the main features of services for infants with CHD pertinent to service improvement. This was used, along with a graphical summary of the CART analysis, to guide discussions about targeting interventions at specific patient risk groups. Agreement was reached across representatives of relevant health professions and patients on a coherent set of targeted recommendations for quality improvement. These fed into national decisions about service provision and commissioning. Conclusions When tackling complex problems in service provision across multiple settings, it is important to acknowledge and work with multiple perspectives systematically and to consider targeting service improvements in response to confined resources. Our research demonstrates that applying a combination of qualitative and quantitative operational research methods is one approach to doing so that warrants further
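
    As an illustration of the CART component (with synthetic data and hypothetical clinical features, not the national audit variables):

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier, export_text

      rng = np.random.default_rng(3)
      n = 1000
      weight_kg = rng.normal(3.3, 0.6, n)               # hypothetical features
      preterm = (rng.random(n) < 0.15).astype(int)
      comorbidity = (rng.random(n) < 0.2).astype(int)
      risk = 0.03 + 0.10*(weight_kg < 2.5) + 0.08*preterm + 0.12*comorbidity
      outcome = (rng.random(n) < risk).astype(int)      # adverse outcome indicator

      # shallow tree: each leaf is an interpretable patient risk group
      X = np.column_stack([weight_kg, preterm, comorbidity])
      tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=50).fit(X, outcome)
      print(export_text(tree, feature_names=["weight_kg", "preterm", "comorbidity"]))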

  17. Multiple Site-Directed and Saturation Mutagenesis by the Patch Cloning Method.

    Science.gov (United States)

    Taniguchi, Naohiro; Murakami, Hiroshi

    2017-01-01

    Constructing protein-coding genes with desired mutations is a basic step for protein engineering. Herein, we describe a multiple site-directed and saturation mutagenesis method, termed MUPAC. This method has been used to introduce multiple site-directed mutations in the green fluorescent protein gene and in the moloney murine leukemia virus reverse transcriptase gene. Moreover, this method was also successfully used to introduce randomized codons at five desired positions in the green fluorescent protein gene, and for simple DNA assembly for cloning.

  18. Application of neutron multiplicity counting to waste assay

    Energy Technology Data Exchange (ETDEWEB)

    Pickrell, M.M.; Ensslin, N. [Los Alamos National Lab., NM (United States); Sharpe, T.J. [North Carolina State Univ., Raleigh, NC (United States)

    1997-11-01

    This paper describes the use of a new figure of merit code that calculates both bias and precision for coincidence and multiplicity counting, and determines the optimum regions for each in waste assay applications. A "tunable multiplicity" approach is developed that uses a combination of coincidence and multiplicity counting to minimize the total assay error. An example is shown where multiplicity analysis is used to solve for mass, alpha, and multiplication, and tunable multiplicity is shown to work well. The approach provides a method for selecting coincidence, multiplicity, or tunable multiplicity counting to give the best assay with the lowest total error over a broad spectrum of assay conditions. 9 refs., 6 figs.

  19. Approaches and Methods of Periodization in Literary History

    Directory of Open Access Journals (Sweden)

    Naser Gholi Sarli

    2013-10-01

    Full Text Available Abstract One of the most fundamental acts of historiography is to classify historical information in diachronic axis. The method of this classification or periodization shows the theoretical approach of the historian and determines the structure and the form of his history. Because of multiple criteria of analysis and various literary genres, periodization in literary history is more complicated than that of general history. We can distinguish two approaches in periodization of literary history, although these can be used together: extrinsic or social-cultural approach (based on criteria extrinsic to literature) and intrinsic or formalist approach (based on criteria intrinsic to literature). Then periodization in literary history can be formulated in different methods and may be based upon various criteria: chronological such as century, decade and year; organic patterns of evolution; great poets and writers; literary emblems and evaluations of every period; events, concepts and periods of general or political history; analogy of literary history and history of ideas or history of arts; approaches and styles of language; dominant literary norms. These methods actually are used together and every one has adequacy for a special kind of literary history. In periodization of Persian contemporary literature, some methods and models current in periodization of poetry have been applied identically to periodization of prose. Periodization based upon century, decade and year is the simplest and most mechanical method but sometimes certain centuries in some countries have symbolic and stylistic meaning, and decades were used often for subdivisions of literary history, especially nowadays with fast rhythm of literary change. Periodization according to organic patterns of evolution equates the changes of literary history with the life phases of an organism, and offers an account of birth, maturity and death (and sometimes re-birth) of literary genres, but this method have

  20. Approaches and Methods of Periodization in Literary History

    Directory of Open Access Journals (Sweden)

    Dr. N. Gh. Sarli

    Full Text Available One of the most fundamental acts of historiography is to classify historical information in diachronic axis. The method of this classification or periodization shows the theoretical approach of the historian and determines the structure and the form of his history. Because of multiple criteria of analysis and various literary genres, periodization in literary history is more complicated than that of general history. We can distinguish two approaches in periodization of literary history, although these can be used together: extrinsic or social-cultural approach (based on criteria extrinsic to literature) and intrinsic or formalist approach (based on criteria intrinsic to literature). Then periodization in literary history can be formulated in different methods and may be based upon various criteria: chronological such as century, decade and year; organic patterns of evolution; great poets and writers; literary emblems and evaluations of every period; events, concepts and periods of general or political history; analogy of literary history and history of ideas or history of arts; approaches and styles of language; dominant literary norms. These methods actually are used together and every one has adequacy for a special kind of literary history. In periodization of Persian contemporary literature, some methods and models current in periodization of poetry have been applied identically to periodization of prose. Periodization based upon century, decade and year is the simplest and most mechanical method but sometimes certain centuries in some countries have symbolic and stylistic meaning, and decades were used often for subdivisions of literary history, especially nowadays with fast rhythm of literary change. Periodization according to organic patterns of evolution equates the changes of literary history with the life phases of an organism, and offers an account of birth, maturity and death (and sometimes re-birth) of literary genres, but this method have

  1. Approaches and Methods of Periodization in Literary History

    Directory of Open Access Journals (Sweden)

    Naser Gholi Sarli

    2013-11-01

    Full Text Available Abstract One of the most fundamental acts of historiography is to classify historical information in diachronic axis. The method of this classification or periodization shows the theoretical approach of the historian and determines the structure and the form of his history. Because of multiple criteria of analysis and various literary genres, periodization in literary history is more complicated than that of general history. We can distinguish two approaches in periodization of literary history, although these can be used together: extrinsic or social-cultural approach (based on criteria extrinsic to literature) and intrinsic or formalist approach (based on criteria intrinsic to literature). Then periodization in literary history can be formulated in different methods and may be based upon various criteria: chronological such as century, decade and year; organic patterns of evolution; great poets and writers; literary emblems and evaluations of every period; events, concepts and periods of general or political history; analogy of literary history and history of ideas or history of arts; approaches and styles of language; dominant literary norms. These methods actually are used together and every one has adequacy for a special kind of literary history. In periodization of Persian contemporary literature, some methods and models current in periodization of poetry have been applied identically to periodization of prose. Periodization based upon century, decade and year is the simplest and most mechanical method but sometimes certain centuries in some countries have symbolic and stylistic meaning, and decades were used often for subdivisions of literary history, especially nowadays with fast rhythm of literary change. Periodization according to organic patterns of evolution equates the changes of literary history with the life phases of an organism, and offers an account of birth, maturity and death (and sometimes re-birth) of literary genres, but this method have

  2. Multiple-Features-Based Semisupervised Clustering DDoS Detection Method

    Directory of Open Access Journals (Sweden)

    Yonghao Gu

    2017-01-01

    Full Text Available The DDoS attack stream from different agent hosts converging at the victim host becomes very large, which will lead to system halt or network congestion. Therefore, it is necessary to propose an effective method to detect DDoS attack behavior in the massive data stream. In order to solve the problems that large amounts of labeled data are not available for supervised learning methods, and that the unsupervised k-means algorithm has relatively low detection accuracy and convergence speed, this paper presents a semisupervised clustering detection method using multiple features. In this detection method, we first select three features according to the characteristics of DDoS attacks to form the detection feature vector. Then, the Multiple-Features-Based Constrained-K-Means (MF-CKM) algorithm is proposed based on semisupervised clustering. Finally, using the MIT Laboratory Scenario (DDoS) 1.0 data set, we verify that the proposed method can improve the convergence speed and accuracy of the algorithm under the condition of using only a small amount of labeled data.
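
    A sketch of the seeding idea behind such semisupervised clustering (plain seeded k-means on synthetic flows; the three columns stand in for the paper's detection features, and this is not the MF-CKM algorithm itself):

      import numpy as np

      def seeded_kmeans(X, X_seed, y_seed, k=2, iters=20):
          # a few labeled flows fix the initial centroids, then ordinary
          # k-means iterations refine them on the unlabeled traffic
          centroids = np.array([X_seed[y_seed == c].mean(axis=0) for c in range(k)])
          for _ in range(iters):
              d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
              labels = d.argmin(axis=1)
              for c in range(k):
                  if (labels == c).any():
                      centroids[c] = X[labels == c].mean(axis=0)
          return labels, centroids

      rng = np.random.default_rng(5)
      normal = rng.normal([1.0, 0.2, 0.1], 0.2, size=(500, 3))
      attack = rng.normal([5.0, 3.0, 2.5], 0.4, size=(60, 3))
      X = np.vstack([normal, attack])
      X_seed = np.vstack([normal[:10], attack[:5]])       # small labeled sample
      y_seed = np.array([0]*10 + [1]*5)
      labels, _ = seeded_kmeans(X, X_seed, y_seed)
      print("flagged as attack:", (labels == 1).sum())    # roughly the 60 attack flows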

  3. A crack growth evaluation method for interacting multiple cracks

    International Nuclear Information System (INIS)

    Kamaya, Masayuki

    2003-01-01

    When stress corrosion cracking or corrosion fatigue occurs, multiple cracks are frequently initiated in the same area. According to Section XI of the ASME Boiler and Pressure Vessel Code, multiple cracks are considered as a single combined crack in crack growth analysis if the specified conditions are satisfied. In crack growth processes, however, no prescription for the interference between multiple cracks is given in this code. The JSME Post-Construction Code, issued in May 2000, prescribes the conditions of crack coalescence in the crack growth process. This study aimed to extend this prescription to more general cases. A simulation model was applied to simulate the crack growth process, taking into account the interference between two cracks. This model made it possible to analyze multiple crack growth behaviors for many cases (e.g. different relative position and length) that could not be studied by experiment only. Based on these analyses, a new crack growth analysis method was suggested for taking into account the interference between multiple cracks. (author)

  4. Assessment of Different Metal Screw Joint Parameters by Using Multiple Criteria Analysis Methods

    Directory of Open Access Journals (Sweden)

    Audrius Čereška

    2018-05-01

    Full Text Available This study compares screw joints made of different materials, including screws of different diameters. For that purpose, 8, 10, 12, 14, 16 mm diameter steel screws and various parts made of aluminum (Al), steel (Stl), bronze (Brz), cast iron (CI), copper (Cu) and brass (Br) are considered. Multiple criteria decision making (MCDM) methods such as evaluation based on distance from average solution (EDAS), simple additive weighting (SAW), technique for order of preference by similarity to ideal solution (TOPSIS) and complex proportional assessment (COPRAS) are utilized to assess reliability of screw joints also considering cost issues. The entropy, criterion impact loss (CILOS) and integrated determination of objective criteria weights (IDOCRIW) methods are utilized to assess weights of decision criteria and find the best design alternative. Numerical results confirm the validity of the proposed approach.
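
    For concreteness, a minimal implementation of one of the compared methods, TOPSIS, on a made-up decision matrix (not the paper's screw-joint data):

      import numpy as np

      def topsis(D, w, benefit):
          R = D / np.linalg.norm(D, axis=0)          # vector-normalize columns
          V = R * w                                  # apply criteria weights
          ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
          worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
          d_pos = np.linalg.norm(V - ideal, axis=1)
          d_neg = np.linalg.norm(V - worst, axis=1)
          return d_neg / (d_pos + d_neg)             # closeness: higher is better

      D = np.array([[250., 12., 3.2],    # rows: alternatives (e.g. joint designs)
                    [310., 10., 4.1],    # cols: strength, cost, assembly time
                    [280., 14., 2.9]])   # (all values invented for illustration)
      w = np.array([0.5, 0.3, 0.2])
      scores = topsis(D, w, benefit=np.array([True, False, False]))
      print(scores.argsort()[::-1])      # ranking, best alternative first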

  5. Upscaling permeability for three-dimensional fractured porous rocks with the multiple boundary method

    Science.gov (United States)

    Chen, Tao; Clauser, Christoph; Marquart, Gabriele; Willbrand, Karen; Hiller, Thomas

    2018-02-01

    Upscaling permeability of grid blocks is crucial for groundwater models. A novel upscaling method for three-dimensional fractured porous rocks is presented. The objective of the study was to compare this method with the commonly used Oda upscaling method and the volume averaging method. First, the multiple boundary method and its computational framework were defined for three-dimensional stochastic fracture networks. Then, the different upscaling methods were compared for a set of rotated fractures, for tortuous fractures, and for two discrete fracture networks. The results computed by the multiple boundary method are comparable with those of the other two methods and fit best the analytical solution for a set of rotated fractures. The errors in flow rate of the equivalent fracture model decrease when using the multiple boundary method. Furthermore, the errors of the equivalent fracture models increase from well-connected fracture networks to poorly connected ones. Finally, the diagonal components of the equivalent permeability tensors tend to follow a normal or log-normal distribution for the well-connected fracture network model with infinite fracture size. By contrast, they exhibit a power-law distribution for the poorly connected fracture network with multiple scale fractures. The study demonstrates the accuracy and the flexibility of the multiple boundary upscaling concept. This makes it attractive for being incorporated into any existing flow-based upscaling procedures, which helps in reducing the uncertainty of groundwater models.

  6. Compositional mining of multiple object API protocols through state abstraction.

    Science.gov (United States)

    Dai, Ziying; Mao, Xiaoguang; Lei, Yan; Qi, Yuhua; Wang, Rui; Gu, Bin

    2013-01-01

    API protocols specify correct sequences of method invocations. Despite their usefulness, API protocols are often unavailable in practice because writing them is cumbersome and error prone. Multiple object API protocols are more expressive than single object API protocols. However, the huge number of objects of typical object-oriented programs poses a major challenge to the automatic mining of multiple object API protocols: besides maintaining scalability, it is important to capture various object interactions. Current approaches utilize various heuristics to focus on small sets of methods. In this paper, we present a general, scalable, multiple object API protocols mining approach that can capture all object interactions. Our approach uses abstract field values to label object states during the mining process. We first mine single object typestates as finite state automata whose transitions are annotated with states of interacting objects before and after the execution of the corresponding method and then construct multiple object API protocols by composing these annotated single object typestates. We implement our approach for Java and evaluate it through a series of experiments.
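
    A toy sketch of the state-abstraction idea on a single object (the paper mines multiple-object protocols and annotates transitions with interacting objects; here abstract states are simply derived from the call history, for illustration):

      from collections import defaultdict

      # Build a finite-state automaton from observed method-call traces,
      # with states identified by an abstract predicate over the object
      # (a stand-in for the paper's abstract field values).
      traces = [
          ["open", "read", "read", "close"],
          ["open", "write", "close"],
      ]

      def abstract_state(history):
          return "closed" if (not history or history[-1] == "close") else "open"

      fsa = defaultdict(set)
      for trace in traces:
          history = []
          for call in trace:
              fsa[abstract_state(history)].add((call, abstract_state(history + [call])))
              history.append(call)

      for state, edges in fsa.items():
          for call, nxt in sorted(edges):
              print(f"{state} --{call}--> {nxt}")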

  7. A heuristic approach using multiple criteria for environmentally benign 3PLs selection

    Science.gov (United States)

    Kongar, Elif

    2005-11-01

    Maintaining competitiveness in an environment where price and quality differences between competing products are disappearing depends on the company's ability to reduce costs and supply time. Timely responses to rapidly changing market conditions require an efficient Supply Chain Management (SCM). Outsourcing logistics to third-party logistics service providers (3PLs) is one commonly used way of increasing the efficiency of logistics operations, while creating a more "core competency focused" business environment. However, this alone may not be sufficient. Due to recent environmental regulations and growing public awareness regarding environmental issues, 3PLs need to be not only efficient but also environmentally benign to maintain companies' competitiveness. Even though an efficient and environmentally benign combination of 3PLs can theoretically be obtained using exhaustive search algorithms, heuristics approaches to the selection process may be superior in terms of the computational complexity. In this paper, a hybrid approach that combines a multiple criteria Genetic Algorithm (GA) with Linear Physical Weighting Algorithm (LPPW) to be used in efficient and environmentally benign 3PLs is proposed. A numerical example is also provided to illustrate the method and the analyses.

  8. Comparison between Two Assessment Methods; Modified Essay Questions and Multiple Choice Questions

    Directory of Open Access Journals (Sweden)

    Assadi S.N.* MD

    2015-09-01

    Full Text Available Aims Using the best assessment methods is an important factor in the educational development of health students. Modified essay questions and multiple choice questions are two prevalent methods of assessing students. The aim of this study was to compare the modified essay question and multiple choice question methods in occupational health engineering and work laws courses. Materials & Methods This semi-experimental study was performed during 2013 to 2014 on occupational health students of Mashhad University of Medical Sciences. The class of the occupational health and work laws course in 2013 was considered as group A and the class of 2014 as group B. Each group had 50 students. The group A students were assessed by the modified essay questions method and the group B students by the multiple choice questions method. Data were analyzed in SPSS 16 software by paired t-test and odds ratio. Findings The mean grade of the occupational health and work laws course was 18.68±0.91 in group A (modified essay questions) and 18.78±0.86 in group B (multiple choice questions), which was not significantly different (t=-0.41; p=0.684). The mean grades of the chemical chapter (p<0.001) in occupational health engineering and of the harmful work law (p<0.001) and other (p=0.015) chapters in work laws were significantly different between the two groups. Conclusion The modified essay questions and multiple choice questions methods have nearly the same value for assessing students in the occupational health engineering and work laws courses.
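
    For readers unfamiliar with the reported test, the snippet below runs the same kind of paired t-test on synthetic grade vectors drawn with the quoted means and standard deviations; it is a stand-in, and the study's actual data are of course not reproduced.

```python
# Paired t-test on synthetic grade vectors (means/SDs taken from the abstract).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(18.68, 0.91, 50)   # modified essay questions
group_b = rng.normal(18.78, 0.86, 50)   # multiple choice questions

t, p = stats.ttest_rel(group_a, group_b)
print(f"t = {t:.2f}, p = {p:.3f}")      # a non-significant difference expected
```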

  9. Breeding approaches in simultaneous selection for multiple stress tolerance of maize in tropical environments

    Directory of Open Access Journals (Sweden)

    Denić M.

    2007-01-01

    Full Text Available Maize is the principal crop and major staple food in most countries of Sub-Saharan Africa. However, due to the influence of abiotic and biotic stress factors, maize production faces serious constraints. Among the agro-ecological conditions, the main constraints are: lack and poor distribution of rainfall; low soil fertility; diseases (maize streak virus, downy mildew, leaf blights, rusts, gray leaf spot, stem/cob rots); and pests (borers and storage pests). Among the socio-economic production constraints are: a poor economy; a serious shortage of trained manpower; insufficient management expertise; lack of use of improved varieties; and poor cultivation practices. To develop desirable varieties, and thus alleviate some of these constraints, appropriate breeding approaches and field-based methodologies for selection for multiple stress tolerance were implemented. These approaches are mainly based on: (a) crossing selected genotypes with more desirable stress-tolerant and other agronomic traits; (b) using the disease/pest spreader row method, combined with testing and selection of created progenies under strong to intermediate pressure of drought and low soil fertility in nurseries; and (c) evaluating the varieties developed in multi-location trials under low and "normal" inputs. These approaches provide testing and selection of the large number of progenies required for simultaneous selection for multiple stress tolerance. The data obtained revealed that remarkable improvement of the traits under selection was achieved. The biggest progress was obtained in selection for maize streak virus and downy mildew resistance, flintiness and earliness. In the case of drought stress, statistical analyses revealed significant negative correlations between yield and anthesis-silking interval and between yield and days to silk, but a positive correlation between yield and grain weight per ear.

  10. Combining multiple FDG-PET radiotherapy target segmentation methods to reduce the effect of variable performance of individual segmentation methods

    Energy Technology Data Exchange (ETDEWEB)

    McGurk, Ross J. [Medical Physics Graduate Program, Duke University, Durham, North Carolina 27705 (United States); Bowsher, James; Das, Shiva K. [Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina 27705 (United States); Lee, John A [Molecular Imaging and Experimental Radiotherapy Unit, Universite Catholique de Louvain, 1200 Brussels (Belgium)

    2013-04-15

    different between 128 × 128 and 256 × 256 grid sizes for either method (MJV, p = 0.0519; STAPLE, p = 0.5672) but was for SMASD values (MJV, p < 0.0001; STAPLE, p = 0.0164). The best individual method varied depending on object characteristics. However, both MJV and STAPLE provided essentially equivalent accuracy to using the best independent method in every situation, with mean differences in DSC of 0.01-0.03, and 0.05-0.12 mm for SMASD. Conclusions: Combining segmentations offers a robust approach to object segmentation in PET. Both MJV and STAPLE improved accuracy and were robust against the widely varying performance of individual segmentation methods. Differences between MJV and STAPLE are such that either offers good performance when combining volumes. Neither method requires a training dataset, but MJV is simpler to interpret, easy to implement and fast.
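
    A minimal sketch of the majority-vote (MJV) combination evaluated in this study: binary masks from several segmentation methods are stacked, and a voxel is kept when more than half of the methods select it. The toy masks and the Dice helper below are illustrative only; STAPLE is not shown.

```python
# Majority-vote combination of binary segmentation masks, plus a Dice helper.
import numpy as np

def majority_vote(masks):
    """Combine binary segmentations; masks is an iterable of 0/1 arrays."""
    stack = np.stack(list(masks))
    return (stack.sum(axis=0) > stack.shape[0] / 2).astype(np.uint8)

def dice(a, b):
    """Dice similarity coefficient (DSC) between two binary masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

m1 = np.array([[1, 1, 0], [0, 1, 0]])
m2 = np.array([[1, 0, 0], [0, 1, 1]])
m3 = np.array([[1, 1, 0], [0, 0, 1]])
combined = majority_vote([m1, m2, m3])
print(combined, dice(combined, m1))
```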

  11. Use of Multiple Imputation Method to Improve Estimation of Missing Baseline Serum Creatinine in Acute Kidney Injury Research

    Science.gov (United States)

    Peterson, Josh F.; Eden, Svetlana K.; Moons, Karel G.; Ikizler, T. Alp; Matheny, Michael E.

    2013-01-01

    Summary Background and objectives Baseline creatinine (BCr) is frequently missing in AKI studies. Common surrogate estimates can misclassify AKI and adversely affect the study of related outcomes. This study examined whether multiple imputation improved accuracy of estimating missing BCr beyond current recommendations to apply an assumed estimated GFR (eGFR) of 75 ml/min per 1.73 m2 (eGFR 75). Design, setting, participants, & measurements From 41,114 unique adult admissions (13,003 with and 28,111 without BCr data) at Vanderbilt University Hospital between 2006 and 2008, a propensity score model was developed to predict the likelihood of missing BCr. Propensity scoring identified 6502 patients with the highest likelihood of missing BCr among the 13,003 patients with known BCr to simulate a “missing” data scenario while preserving actual reference BCr. Within this cohort (n=6502), the ability of various multiple-imputation approaches to estimate BCr and classify AKI was compared with that of eGFR 75. Results All multiple-imputation methods except the basic one more closely approximated actual BCr than did eGFR 75. Total AKI misclassification was lower with multiple imputation (full multiple imputation + serum creatinine) (9.0%) than with eGFR 75 (12.3%; P<0.001), as was misclassification of AKI staging with multiple imputation (full multiple imputation + serum creatinine) (15.3%) versus eGFR 75 (40.5%; P<0.001). Multiple imputation improved specificity and positive predictive value for detecting AKI at the expense of modestly decreasing sensitivity relative to eGFR 75. Conclusions Multiple imputation can improve accuracy in estimating missing BCr and reduce misclassification of AKI beyond currently proposed methods. PMID:23037980
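
    As a rough, hedged analogue of imputing a missing baseline creatinine from other covariates, the sketch below uses scikit-learn's IterativeImputer; the columns and values are invented, and the study's propensity-score design and full imputation models are not reproduced.

```python
# Model-based imputation of missing values with scikit-learn's IterativeImputer.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# columns: age, admission creatinine, baseline creatinine (some missing)
X = np.array([
    [64.0, 1.9, 1.1],
    [52.0, 1.2, 0.9],
    [71.0, 2.4, np.nan],   # baseline creatinine unknown
    [58.0, 1.0, np.nan],
])
imputer = IterativeImputer(sample_posterior=True, random_state=0)
print(imputer.fit_transform(X)[:, 2])  # imputed baseline creatinine values
```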

  12. Method for Collision Avoidance Motion Coordination of Multiple Mobile Robots Using Central Observation

    Energy Technology Data Exchange (ETDEWEB)

    Ko, N.Y.; Seo, D.J. [Chosun University, Kwangju (Korea)

    2003-04-01

    This paper presents a new method for driving multiple robots to their goal positions without collision. Each robot adjusts its motion based on information on the goal locations, its own velocity and position, and the velocities and positions of the other robots. To consider the movement of the robots in a work area, we adopt the concept of an avoidability measure. The avoidability measure quantifies how easily a robot can avoid other robots, considering the following factors: the distance from the robot to the other robots, and the velocities of the robot and the other robots. To implement the concept in moving-robot avoidance, the relative distance between the robots is derived. Our method combines this relative distance with an artificial potential field method. The proposed method is simulated for several cases. The results show that the proposed method steers robots into open space, anticipating the approach of other robots. In contrast, the usual potential field method sometimes fails to prevent collision or causes hasty motion, because it initiates avoidance motion later than the proposed method. The proposed method can be used to move robots in a robot soccer team to their appropriate positions without collision as fast as possible. (author). 21 refs., 10 figs., 13 tabs.
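
    The sketch below is a toy version of the combined scheme: an attractive pull toward the goal plus a repulsive push away from another robot, with the repulsion scaled up when the other robot is closing in (a crude stand-in for the paper's avoidability measure). Gains and geometry are invented.

```python
# One velocity-command step of a toy anticipatory potential-field controller.
import numpy as np

def velocity_command(pos, goal, other_pos, other_vel, k_att=1.0, k_rep=2.0):
    """Attraction to the goal plus repulsion that grows with closing speed."""
    attract = k_att * (goal - pos)                 # pull toward the goal
    rel = pos - other_pos
    dist = np.linalg.norm(rel)
    unit = rel / dist                              # points from other robot to us
    closing = max(0.0, np.dot(other_vel, unit))    # >0 if the other approaches
    repulse = k_rep * (1.0 + closing) / dist**2 * unit
    return attract + repulse

pos, goal = np.array([0.0, 0.0]), np.array([5.0, 0.0])
other_pos, other_vel = np.array([2.0, 0.5]), np.array([-1.0, 0.0])
print(velocity_command(pos, goal, other_pos, other_vel))
```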

  13. Graphical approach for multiple values logic minimization

    Science.gov (United States)

    Awwal, Abdul Ahad S.; Iftekharuddin, Khan M.

    1999-03-01

    Multiple valued logic (MVL) is sought for designing high complexity, highly compact, parallel digital circuits. However, the practical realization of an MVL-based system is dependent on optimization of cost, which directly affects the optical setup. We propose a minimization technique for MVL logic optimization based on graphical visualization, such as a Karnaugh map. The proposed method is utilized to solve signed-digit binary and trinary logic minimization problems. The usefulness of the minimization technique is demonstrated for the optical implementation of MVL circuits.

  14. Walking path-planning method for multiple radiation areas

    International Nuclear Information System (INIS)

    Liu, Yong-kuo; Li, Meng-kun; Peng, Min-jun; Xie, Chun-li; Yuan, Cheng-qian; Wang, Shuang-yu; Chao, Nan

    2016-01-01

    Highlights: • Radiation environment modeling method is designed. • Path-evaluating method and segmented path-planning method are proposed. • Path-planning simulation platform for radiation environment is built. • The method avoids being misled by the minimum-dose path in a single area. - Abstract: Based on a minimum-dose path-searching method, a walking path-planning method for multiple radiation areas was designed in this paper to overcome the minimum-dose path problem in a single area and to find the minimum-dose path in the whole space. A path-planning simulation platform was built using the C# programming language and the DirectX engine. The simulation platform was used in simulations dealing with virtual nuclear facilities. Simulation results indicated that the walking path-planning method is effective in providing safety for people walking in nuclear facilities.
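
    A minimal sketch of the single-area building block, minimum-dose path search, on a 2D grid of dose rates using Dijkstra's algorithm; the paper's segmented planning across multiple radiation areas and its 3D facility model are not reproduced.

```python
# Dijkstra search for the path that accumulates the least dose on a toy grid.
import heapq

def min_dose_path(dose, start, goal):
    """dose: 2D list of dose rates; the dose of every entered cell is summed."""
    rows, cols = len(dose), len(dose[0])
    best = {start: dose[start[0]][start[1]]}
    heap = [(best[start], start, (start,))]
    while heap:
        acc, (r, c), path = heapq.heappop(heap)
        if (r, c) == goal:
            return acc, path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nacc = acc + dose[nr][nc]
                if nacc < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = nacc
                    heapq.heappush(heap, (nacc, (nr, nc), path + ((nr, nc),)))
    return None

grid = [[1, 9, 1],
        [1, 9, 1],
        [1, 1, 1]]
print(min_dose_path(grid, (0, 0), (0, 2)))  # detours around the high-dose cells
```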

  15. Multidisciplinary approaches to managing osteoarthritis in multiple joint sites: a systematic review.

    Science.gov (United States)

    Finney, Andrew; Healey, Emma; Jordan, Joanne L; Ryan, Sarah; Dziedzic, Krysia S

    2016-07-08

    The National Institute for Health and Care Excellence's Osteoarthritis (OA) guidelines recommended that future research should consider the benefits of combination therapies in people with OA across multiple joint sites. However, the clinical effectiveness of such approaches to OA management is unknown. This systematic review therefore aimed to identify the clinical and cost effectiveness of multidisciplinary approaches targeting multiple joint sites for OA in primary care. A systematic review of randomised controlled trials. Computerised bibliographic databases were searched (MEDLINE, EMBASE, CINAHL, PsychINFO, BNI, HBE, HMIC, AMED, Web of Science and Cochrane). Studies were included if they met the following criteria: a randomised controlled trial (RCT), a primary care population with OA across at least two different peripheral joint sites (multiple joint sites), and interventions undertaken by at least two different health disciplines (multidisciplinary). The Cochrane 'Risk of Bias' tool and PEDro were used for quality assessment of eligible studies. Clinical and cost effectiveness was determined by extracting and examining self-reported outcomes for pain, function, quality of life (QoL) and health care utilisation. The date range for the search was from database inception until August 2015. The search identified 1148 individual titles, of which four were included in the review. A narrative review was conducted due to the heterogeneity of the included trials. Each of the four trials used either educational or exercise interventions facilitated by a range of different health disciplines. Moderate clinical benefits on pain, function and QoL were reported across the studies. The beneficial effects of exercise generally decreased over time within all studies. Two studies were able to show a reduction in healthcare utilisation due to a reduction in visits to a physiotherapist or a reduction in x-rays and orthopaedic referrals. The intervention that showed the most

  16. Mediation Analysis with Multiple Mediators.

    Science.gov (United States)

    VanderWeele, T J; Vansteelandt, S

    2014-01-01

    Recent advances in the causal inference literature on mediation have extended traditional approaches to direct and indirect effects to settings that allow for interactions and non-linearities. In this paper, these approaches from causal inference are further extended to settings in which multiple mediators may be of interest. Two analytic approaches, one based on regression and one based on weighting, are proposed to estimate the effect mediated through multiple mediators and the effects through other pathways. The approaches proposed here accommodate exposure-mediator interactions and, to a certain extent, mediator-mediator interactions as well. The methods handle binary or continuous mediators and binary, continuous or count outcomes. When the mediators affect one another, the strategy of trying to assess direct and indirect effects one mediator at a time will in general fail; the approach given in this paper can still be used. A characterization is moreover given as to when the sum of the mediated effects for multiple mediators considered separately will be equal to the mediated effect of all of the mediators considered jointly. The approach proposed in this paper is robust to unmeasured common causes of two or more mediators.
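
    A hedged numerical sketch of the regression-based idea in the linear, no-interaction special case: the effect mediated jointly through two mediators is the sum of products of exposure-to-mediator and mediator-to-outcome coefficients. The data are simulated, and the paper's general counterfactual estimators (interactions, weighting) are not shown.

```python
# Product-of-coefficients estimate of a jointly mediated effect (linear case).
import numpy as np

rng = np.random.default_rng(1)
n = 5000
a = rng.binomial(1, 0.5, n).astype(float)      # exposure
m1 = 0.8 * a + rng.normal(size=n)              # mediator 1
m2 = 0.5 * a + rng.normal(size=n)              # mediator 2
y = 1.0 * a + 0.6 * m1 + 0.4 * m2 + rng.normal(size=n)

def ols(cols, target):
    """Least-squares fit with intercept; returns [intercept, coefficients...]."""
    design = np.column_stack([np.ones(len(target))] + cols)
    return np.linalg.lstsq(design, target, rcond=None)[0]

a1 = ols([a], m1)[1]                 # exposure -> M1
a2 = ols([a], m2)[1]                 # exposure -> M2
b = ols([a, m1, m2], y)              # outcome model with both mediators
indirect = a1 * b[2] + a2 * b[3]     # jointly mediated effect
print(f"direct = {b[1]:.2f}, indirect = {indirect:.2f}")  # approx 1.00 and 0.68
```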

  17. Statistical Methods for Magnetic Resonance Image Analysis with Applications to Multiple Sclerosis

    Science.gov (United States)

    Pomann, Gina-Maria

    Multiple sclerosis (MS) is an immune-mediated neurological disease that causes disability and morbidity. In patients with MS, the accumulation of lesions in the white matter of the brain is associated with disease progression and worse clinical outcomes. In the first part of the dissertation, we present methodology to compare the brain anatomy of patients with MS and controls. A nonparametric testing procedure is proposed for testing the null hypothesis that two samples of curves observed at discrete grids and with noise have the same underlying distribution. We propose to decompose the curves using functional principal component analysis of an appropriate mixture process, which we refer to as marginal functional principal component analysis. This approach reduces the dimension of the testing problem in a way that enables the use of traditional nonparametric univariate testing procedures. The procedure is computationally efficient and accommodates different sampling designs. Numerical studies are presented to validate the size and power properties of the test in many realistic scenarios. In these cases, the proposed test is more powerful than its primary competitor. The proposed methodology is illustrated on a state-of-the art diffusion tensor imaging study, where the objective is to compare white matter tract profiles in healthy individuals and MS patients. In the second part of the thesis, we present methods to study the behavior of MS in the white matter of the brain. Breakdown of the blood-brain barrier in newer lesions is indicative of more active disease-related processes and is a primary outcome considered in clinical trials of treatments for MS. Such abnormalities in active MS lesions are evaluated in vivo using contrast-enhanced structural magnetic resonance imaging (MRI), during which patients receive an intravenous infusion of a costly magnetic contrast agent. In some instances, the contrast agents can have toxic effects. Recently, local

  18. Freestyle multiple propeller flap reconstruction (jigsaw puzzle approach) for complicated back defects.

    Science.gov (United States)

    Park, Sung Woo; Oh, Tae Suk; Eom, Jin Sup; Sun, Yoon Chi; Suh, Hyun Suk; Hong, Joon Pio

    2015-05-01

    The reconstruction of the posterior trunk remains a challenge, as defects can be extensive, with deep dead space and fixation devices exposed. Our goal was to achieve a tension-free closure for complex defects on the posterior trunk. From August 2006 to May 2013, 18 cases were reconstructed with multiple flaps combining perforator(s) and local skin flaps. The reconstructions were performed using a freestyle approach, starting with propeller flap(s) in a single or multilobed design and sequentially adding adjacent random-pattern flaps, like fitting a puzzle. All defects achieved tensionless primary closure. The final appearance resembled a jigsaw puzzle. The average size of the defects was 139.6 cm(2) (range, 36-345 cm(2)). A total of 26 perforator flaps were used in addition to 19 random-pattern flaps for the 18 cases. In all cases, a single perforator was used for each propeller flap. The defect and the donor site all achieved tension-free closure. The reconstruction was 100% successful without flap loss. One case of late infection was noted at 12 months after surgery. Using multiple-lobe-designed propeller flaps in conjunction with random-pattern flaps in a freestyle approach, resembling putting a jigsaw puzzle together, we can achieve a tension-free closure by distributing the tension to multiple flaps, supplying sufficient volume to obliterate dead space, and maintaining reliable vascularity, as the flaps do not need to be oversized. This can be a viable approach to reconstructing extensive defects on the posterior trunk.

  19. A simple method for combining genetic mapping data from multiple crosses and experimental designs.

    Directory of Open Access Journals (Sweden)

    Jeremy L Peirce

    Full Text Available BACKGROUND: Over the past decade many linkage studies have defined chromosomal intervals containing polymorphisms that modulate a variety of traits. Many phenotypes are now associated with enough mapping data that meta-analysis could help refine locations of known QTLs and detect many novel QTLs. METHODOLOGY/PRINCIPAL FINDINGS: We describe a simple approach to combining QTL mapping results for multiple studies and demonstrate its utility using two hippocampus weight loci. Using data taken from two populations, a recombinant inbred strain set and an advanced intercross population, we demonstrate considerable improvements in significance and resolution for both loci. 1-LOD support intervals were improved by 51% for Hipp1a and 37% for Hipp9a. We first generate locus-wise permuted P-values for association with the phenotype from multiple maps, which can be done using a permutation method appropriate to each population. These results are then assigned to defined physical positions by interpolation between markers with known physical and genetic positions. We then use Fisher's combination test to combine position-by-position probabilities among experiments. Finally, we calculate genome-wide combined P-values by generating locus-specific P-values for each permuted map for each experiment. These permuted maps are then sampled with replacement and combined. The distribution of best locus-specific P-values for each combined map is the null distribution of genome-wide adjusted P-values. CONCLUSIONS/SIGNIFICANCE: Our approach is applicable to a wide variety of segregating and non-segregating mapping populations, facilitates rapid refinement of physical QTL position, is complementary to other QTL fine mapping methods, and provides an appropriate genome-wide criterion of significance for combined mapping results.
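
    The Fisher combination step described above is small enough to show directly: position-wise P-values from k independent experiments are combined via X = -2 Σ log p, which under the null follows a chi-squared distribution with 2k degrees of freedom. The example P-values below are invented.

```python
# Fisher's combination test for P-values from independent experiments.
import numpy as np
from scipy.stats import chi2

def fisher_combine(pvals):
    pvals = np.asarray(pvals, dtype=float)
    stat = -2.0 * np.log(pvals).sum()      # X = -2 * sum(log p)
    return chi2.sf(stat, df=2 * len(pvals))

# locus-wise permuted P-values at one physical position, one per experiment
print(fisher_combine([0.04, 0.11]))        # combined P-value
```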

  20. A permutation-based multiple testing method for time-course microarray experiments

    Directory of Open Access Journals (Sweden)

    George Stephen L

    2009-10-01

    Full Text Available Abstract Background Time-course microarray experiments are widely used to study the temporal profiles of gene expression. Storey et al. (2005) developed a method for analyzing time-course microarray studies that can be applied to discovering genes whose expression trajectories change over time within a single biological group, or those that follow different time trajectories among multiple groups. They estimated the expression trajectories of each gene using natural cubic splines under the null (no time-course) and alternative (time-course) hypotheses, and used a goodness-of-fit test statistic to quantify the discrepancy. The null distribution of the statistic was approximated through a bootstrap method. Gene expression levels in microarray data are often correlated in complicated ways. An accurate type I error control adjusting for multiple testing requires the joint null distribution of test statistics for a large number of genes. For this purpose, permutation methods have been widely used because of computational ease and their intuitive interpretation. Results In this paper, we propose a permutation-based multiple testing procedure based on the test statistic used by Storey et al. (2005). We also propose an efficient computation algorithm. Extensive simulations are conducted to investigate the performance of the permutation-based multiple testing procedure. The application of the proposed method is illustrated using the Caenorhabditis elegans dauer developmental data. Conclusion Our method is computationally efficient and applicable for identifying genes whose expression levels are time-dependent in a single biological group and for identifying the genes for which the time-profile depends on the group in a multi-group setting.
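
    A generic sketch of permutation-based multiple-testing adjustment in the max-statistic (Westfall-Young) style, which preserves the joint null distribution across correlated genes; for brevity, a simple mean-difference statistic replaces the spline goodness-of-fit statistic of Storey et al.

```python
# Max-statistic permutation adjustment for many simultaneous gene-level tests.
import numpy as np

def maxT_adjusted_pvalues(X, labels, n_perm=1000, seed=0):
    """X: genes x samples matrix; labels: binary group label per sample."""
    rng = np.random.default_rng(seed)

    def stats(lab):
        a, b = X[:, lab == 0], X[:, lab == 1]
        return np.abs(a.mean(axis=1) - b.mean(axis=1))

    obs = stats(labels)
    null_max = np.array([stats(rng.permutation(labels)).max()
                         for _ in range(n_perm)])
    # adjusted P-value: fraction of permutations whose maximum exceeds obs
    return (null_max[None, :] >= obs[:, None]).mean(axis=1)

X = np.random.default_rng(1).normal(size=(100, 10))
X[0, 5:] += 3.0                          # one truly differential gene
labels = np.array([0] * 5 + [1] * 5)
print(maxT_adjusted_pvalues(X, labels)[:3])
```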

  1. Multiple time-scale methods in particle simulations of plasmas

    International Nuclear Information System (INIS)

    Cohen, B.I.

    1985-01-01

    This paper surveys recent advances in the application of multiple time-scale methods to particle simulation of collective phenomena in plasmas. These methods dramatically improve the efficiency of simulating low-frequency kinetic behavior by allowing the use of a large timestep, while retaining accuracy. The numerical schemes surveyed provide selective damping of unwanted high-frequency waves and preserve numerical stability in a variety of physics models: electrostatic, magneto-inductive, Darwin and fully electromagnetic. The paper reviews hybrid simulation models, the implicit moment-equation method, the direct implicit method, orbit averaging, and subcycling

  2. Combining qualitative and quantitative operational research methods to inform quality improvement in pathways that span multiple settings.

    Science.gov (United States)

    Crowe, Sonya; Brown, Katherine; Tregay, Jenifer; Wray, Jo; Knowles, Rachel; Ridout, Deborah A; Bull, Catherine; Utley, Martin

    2017-08-01

    Improving integration and continuity of care across sectors within resource constraints is a priority in many health systems. Qualitative operational research methods of problem structuring have been used to address quality improvement in services involving multiple sectors but not in combination with quantitative operational research methods that enable targeting of interventions according to patient risk. We aimed to combine these methods to augment and inform an improvement initiative concerning infants with congenital heart disease (CHD) whose complex care pathway spans multiple sectors. Soft systems methodology was used to consider systematically changes to services from the perspectives of community, primary, secondary and tertiary care professionals and a patient group, incorporating relevant evidence. Classification and regression tree (CART) analysis of national audit datasets was conducted along with data visualisation designed to inform service improvement within the context of limited resources. A 'Rich Picture' was developed capturing the main features of services for infants with CHD pertinent to service improvement. This was used, along with a graphical summary of the CART analysis, to guide discussions about targeting interventions at specific patient risk groups. Agreement was reached across representatives of relevant health professions and patients on a coherent set of targeted recommendations for quality improvement. These fed into national decisions about service provision and commissioning. When tackling complex problems in service provision across multiple settings, it is important to acknowledge and work with multiple perspectives systematically and to consider targeting service improvements in response to confined resources. Our research demonstrates that applying a combination of qualitative and quantitative operational research methods is one approach to doing so that warrants further consideration.

  3. Multiple Learning Approaches in the Professional Development of School Leaders -- Theoretical Perspectives and Empirical Findings on Self-assessment and Feedback

    Science.gov (United States)

    Huber, Stephan Gerhard

    2013-01-01

    This article investigates the use of multiple learning approaches and different modes and types of learning in the (continuous) professional development (PD) of school leaders, particularly the use of self-assessment and feedback. First, formats and multiple approaches to professional learning are described. Second, a possible approach to…

  4. The application of multiple intelligence approach to the learning of human circulatory system

    Science.gov (United States)

    Kumalasari, Lita; Yusuf Hilmi, A.; Priyandoko, Didik

    2017-11-01

    The purpose of this study is to offer an alternative teaching approach, or strategies, able to accommodate students' different abilities, intelligences and learning styles. It also gives the teacher, as a facilitator, new ideas for exploring how to teach students in creative ways and with more student-centered activities, for a lesson such as the circulatory system. This study was carried out at one private school in Bandung and involved eight students, to see their responses to a lesson delivered using the Multiple Intelligence approach, which includes the Linguistic, Logical-Mathematical, Visual-Spatial, Musical, Bodily-Kinesthetic, Interpersonal, Intrapersonal, and Naturalistic intelligences. Students were tested using an MI test based on Howard Gardner's MI model to identify their dominant intelligences. The results showed that the top three intelligences were Bodily-Kinesthetic (73%), Visual-Spatial (68%), and Logical-Mathematical (61%). The lesson was delivered using several different multimedia and activities to engage their learning styles and intelligences, such as a mini experiment, a short clip, and questions. Student responses were collected by self-assessment, and all students said that the lesson gave them knowledge and skills useful for their lives, that they were clear about the explanations given, that they did not find it difficult to understand the lesson, and that they could complete the assignments given. The study reveals that students taught with the Multiple Intelligence instructional approach engaged more with the lesson. It also found that the students who participated in the learning process in which the Multiple Intelligence approach was applied enjoyed the activities and had great fun.

  5. Trace element analysis of environmental samples by multiple prompt gamma-ray analysis method

    International Nuclear Information System (INIS)

    Oshima, Masumi; Matsuo, Motoyuki; Shozugawa, Katsumi

    2011-01-01

    The multiple γ-ray detection method has proved to be a high-resolution and high-sensitivity method for nuclide quantification. The neutron prompt γ-ray analysis method is successfully extended by combining it with the multiple γ-ray detection method; the combined technique is called multiple prompt γ-ray analysis (MPGA). In this review we show the principle of this method and its characteristics. Several examples of its application to environmental samples, especially river sediments in urban areas and sea sediment samples, are also described. (author)

  6. Monitoring and identification of spatiotemporal landscape changes in multiple remote sensing images by using a stratified conditional Latin hypercube sampling approach and geostatistical simulation.

    Science.gov (United States)

    Lin, Yu-Pin; Chu, Hone-Jay; Huang, Yu-Long; Tang, Chia-Hsi; Rouhani, Shahrokh

    2011-06-01

    This study develops a stratified conditional Latin hypercube sampling (scLHS) approach for multiple, remotely sensed, normalized difference vegetation index (NDVI) images. The objective is to sample, monitor, and delineate spatiotemporal landscape changes, including spatial heterogeneity and variability, in a given area. The scLHS approach, which is based on the variance quadtree technique (VQT) and the conditional Latin hypercube sampling (cLHS) method, selects samples in order to delineate landscape changes from multiple NDVI images. The images are then mapped for calibration and validation by using sequential Gaussian simulation (SGS) with the scLHS selected samples. Spatial statistical results indicate that in terms of their statistical distribution, spatial distribution, and spatial variation, the statistics and variograms of the scLHS samples resemble those of multiple NDVI images more closely than those of cLHS and VQT samples. Moreover, the accuracy of simulated NDVI images based on SGS with scLHS samples is significantly better than that of simulated NDVI images based on SGS with cLHS samples and VQT samples, respectively. Overall, the proposed approach efficiently monitors the spatial characteristics of landscape changes, including the statistics, spatial variability, and heterogeneity of NDVI images. In addition, SGS with the scLHS samples effectively reproduces spatial patterns and landscape changes in multiple NDVI images.
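
    Basic Latin hypercube sampling, the building block of the scLHS scheme, can be sketched with SciPy's qmc module; the conditioning on covariates (cLHS) and the variance-quadtree stratification are beyond this illustration, and the bounds below are invented.

```python
# Latin hypercube sampling of a 2-D covariate space with SciPy.
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=2, seed=0)   # 2 covariates, e.g., two NDVI dates
unit = sampler.random(n=8)                  # 8 samples in the unit square
# rescale from [0, 1]^2 to the (assumed) covariate ranges of the images
samples = qmc.scale(unit, l_bounds=[0.0, 0.0], u_bounds=[1.0, 0.9])
print(samples)
```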

  7. Secure Multiparty Quantum Computation for Summation and Multiplication.

    Science.gov (United States)

    Shi, Run-hua; Mu, Yi; Zhong, Hong; Cui, Jie; Zhang, Shun

    2016-01-21

    As a fundamental primitive, Secure Multiparty Summation and Multiplication can be used to build complex secure protocols for other multiparty computations, especially numerical computations. However, there is still a lack of systematic and efficient quantum methods to compute Secure Multiparty Summation and Multiplication. In this paper, we present a novel and efficient quantum approach to securely compute the summation and multiplication of multiparty private inputs, respectively. Compared to classical solutions, our proposed approach can ensure unconditional security and perfect privacy protection based on the physical principles of quantum mechanics.
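
    Since the paper's protocol is quantum, no faithful snippet is possible here; the sketch below shows only the classical additive-secret-sharing analogue of secure multiparty summation, to make the primitive itself concrete.

```python
# Classical additive secret sharing: parties learn the sum, not the inputs.
import secrets

P = 2**61 - 1  # public prime modulus

def share(x, n):
    """Split private input x into n additive shares mod P."""
    parts = [secrets.randbelow(P) for _ in range(n - 1)]
    return parts + [(x - sum(parts)) % P]

inputs = [13, 7, 22]                          # each party's private value
shares = [share(x, 3) for x in inputs]        # each party shares its input
# party j locally sums the j-th share of every input; partials are combined
partial = [sum(s[j] for s in shares) % P for j in range(3)]
print(sum(partial) % P)                       # 42, with no input revealed
```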

  8. The strategic selecting criteria and performance by using the multiple criteria method

    Directory of Open Access Journals (Sweden)

    Lisa Y. Chen

    2008-02-01

    Full Text Available As competitive intensity increases in the current service market, organizational capabilities have been recognized as important for sustaining competitive advantage. The pursuit of profitable growth has fueled a need for firms to systematically assess and renew the organization. The purpose of this study is to analyze the financial performance of firms in order to create an effective evaluation structure for Taiwan's service industry. This study utilized the TOPSIS (technique for order preference by similarity to ideal solution) method to evaluate the operating performance of 12 companies. TOPSIS is a multiple criteria decision making method that identifies solutions from a finite set of alternatives based upon simultaneous minimization of distance from an ideal point and maximization of distance from a nadir point. Using this approach, the study measures the financial performance of firms through two aspects and ten indicators. The results indicated that e-life had outstanding performance among the 12 retailers. The findings of this study help managers better understand their market position, competition, and profitability for future strategic planning and operational management.

  9. A Unified Approach to Functional Principal Component Analysis and Functional Multiple-Set Canonical Correlation.

    Science.gov (United States)

    Choi, Ji Yeh; Hwang, Heungsun; Yamamoto, Michio; Jung, Kwanghee; Woodward, Todd S

    2017-06-01

    Functional principal component analysis (FPCA) and functional multiple-set canonical correlation analysis (FMCCA) are data reduction techniques for functional data that are collected in the form of smooth curves or functions over a continuum such as time or space. In FPCA, low-dimensional components are extracted from a single functional dataset such that they explain the most variance of the dataset, whereas in FMCCA, low-dimensional components are obtained from each of multiple functional datasets in such a way that the associations among the components are maximized across the different sets. In this paper, we propose a unified approach to FPCA and FMCCA. The proposed approach subsumes both techniques as special cases. Furthermore, it permits a compromise between the techniques, such that components are obtained from each set of functional data to maximize their associations across different datasets, while accounting for the variance of the data well. We propose a single optimization criterion for the proposed approach, and develop an alternating regularized least squares algorithm to minimize the criterion in combination with basis function approximations to functions. We conduct a simulation study to investigate the performance of the proposed approach based on synthetic data. We also apply the approach for the analysis of multiple-subject functional magnetic resonance imaging data to obtain low-dimensional components of blood-oxygen level-dependent signal changes of the brain over time, which are highly correlated across the subjects as well as representative of the data. The extracted components are used to identify networks of neural activity that are commonly activated across the subjects while carrying out a working memory task.

  10. A novel method for producing multiple ionization of noble gas

    International Nuclear Information System (INIS)

    Wang Li; Li Haiyang; Dai Dongxu; Bai Jiling; Lu Richang

    1997-01-01

    We introduce a novel method for producing multiple ionization of He, Ne, Ar, Kr and Xe. A nanosecond pulsed electron beam with a large number density, whose energy could be controlled, was produced by directing a focused 308 nm laser beam onto a stainless steel grid. Using this electron beam on a time-of-flight mass spectrometer, we obtained multiple ionization of the noble gases He, Ne, Ar and Xe. Time-of-flight mass spectra of these ions are given. These ions are thought to be produced by stepwise ionization of the gas atoms under electron beam impact. This method may be used as an ideal soft-ionizing point ion source in a time-of-flight mass spectrometer.

  11. Forest soil mineral weathering rates: use of multiple approaches

    Science.gov (United States)

    Randy K. Kolka; D.F. Grigal; E.A. Nater

    1996-01-01

    Knowledge of rates of release of base cations from mineral dissolution (weathering) is essential to understand ecosystem elemental cycling. Although much studied, rates remain enigmatic. We compared the results of four methods to determine cation (Ca + Mg + K) release rates at five forested soils/sites in the northcentral U.S.A. Our premise was that multiple...

  12. Some problems of neutron source multiplication method for site measurement technology in nuclear critical safety

    International Nuclear Information System (INIS)

    Shi Yongqian; Zhu Qingfu; Hu Dingsheng; He Tao; Yao Shigui; Lin Shenghuo

    2004-01-01

    The paper presents the experimental theory and method of the neutron source multiplication method for site measurement technology in nuclear criticality safety. The parameter actually measured by the source multiplication method is the sub-critical effective multiplication factor with source neutrons, k_s, not the neutron effective multiplication factor k_eff. The experimental research was carried out on a uranium solution nuclear criticality safety experiment assembly. The k_s at different sub-criticalities was measured by the neutron source multiplication method. To obtain k_eff at different sub-criticalities, the reactivity coefficient per unit solution level was first measured by the period method and then multiplied by the difference between the critical solution level and the sub-critical solution level, yielding the reactivity at the sub-critical solution level; k_eff can finally be extracted from the reactivity formula. The effects on nuclear criticality safety and the difference between k_eff and k_s are discussed.
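
    Worked arithmetic for the quantities discussed above, under the standard point-kinetics relations M = 1/(1 - k_s) for the measured multiplication and rho = (k_eff - 1)/k_eff for reactivity; all numbers (count rates, level coefficient, level deficit) are illustrative assumptions, not the experiment's data.

```python
# Illustrative source-multiplication arithmetic (assumed numbers throughout).
count_rate_source_only = 120.0    # detector counts/s with source alone (assumed)
count_rate_subcritical = 2400.0   # counts/s at the subcritical level (assumed)

M = count_rate_subcritical / count_rate_source_only   # measured multiplication
k_s = 1.0 - 1.0 / M                                   # k_s = 0.95

rho_per_cm = -0.0004     # reactivity coefficient per cm of solution level (assumed)
level_deficit_cm = 25.0  # critical level minus subcritical level (assumed)
rho = rho_per_cm * level_deficit_cm                   # rho = -0.01
k_eff = 1.0 / (1.0 - rho)                             # from rho = (k - 1) / k

print(k_s, k_eff)   # note that k_s and k_eff differ, as the experiment shows
```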

  13. INTEGRATED FUSION METHOD FOR MULTIPLE TEMPORAL-SPATIAL-SPECTRAL IMAGES

    Directory of Open Access Journals (Sweden)

    H. Shen

    2012-08-01

    Full Text Available Data fusion techniques have been widely researched and applied in the remote sensing field. In this paper, an integrated fusion method for remotely sensed images is presented. Unlike existing methods, the proposed method is able to integrate the complementary information in multiple temporal-spatial-spectral images. In order to represent and process the images in one unified framework, two general image observation models are first presented, and then the maximum a posteriori (MAP) framework is used to set up the fusion model. The gradient descent method is employed to solve for the fused image. The efficacy of the proposed method is validated using simulated images.
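
    A schematic 1-D toy of MAP fusion by gradient descent: two observations of the same scene (one noisy at full resolution, one clean but 2x-downsampled) are fused by minimizing a quadratic data-fidelity plus smoothness objective. The paper's temporal-spatial-spectral observation models are far more general than this sketch.

```python
# Gradient descent on a quadratic MAP objective fusing two toy observations.
import numpy as np

rng = np.random.default_rng(0)
truth = np.sin(np.linspace(0, 3, 64))
y1 = truth + 0.3 * rng.normal(size=64)        # noisy full-resolution observation
y2 = truth.reshape(32, 2).mean(axis=1)        # clean but 2x-downsampled one

x, lam, step = y1.copy(), 1.0, 0.2
for _ in range(300):
    g_fine = x - y1                                     # d/dx 0.5*||x - y1||^2
    r = x.reshape(32, 2).mean(axis=1) - y2              # downsampling residual
    g_coarse = np.repeat(r, 2) / 2                      # adjoint of pair-averaging
    lap = np.zeros_like(x)
    lap[1:-1] = x[2:] - 2 * x[1:-1] + x[:-2]            # discrete Laplacian
    x -= step * (g_fine + g_coarse - lam * lap)         # smoothness prior term

print(np.mean((x - truth) ** 2) < np.mean((y1 - truth) ** 2))  # fused beats noisy
```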

  14. Resampling Approach for Determination of the Method for Reference Interval Calculation in Clinical Laboratory Practice

    Science.gov (United States)

    Pavlov, Igor Y.; Wilson, Andrew R.; Delgado, Julio C.

    2010-01-01

    Reference intervals (RI) play a key role in the clinical interpretation of laboratory test results. Numerous articles are devoted to analyzing and discussing various methods of RI determination. The two most widely used approaches are the parametric method, which assumes data normality, and a nonparametric, rank-based procedure. The decision about which method to use is usually made arbitrarily. The goal of this study was to demonstrate that using a resampling approach for the comparison of RI determination techniques could help researchers select the right procedure. Three methods of RI calculation (parametric, transformed parametric, and quantile-based bootstrapping) were applied to multiple random samples drawn from 81 values of complement factor B observations and from a computer-simulated normally distributed population. It was shown that differences in RI between legitimate methods could reach 20% or more. The transformed parametric method was found to be the best method for the calculation of RI of non-normally distributed factor B estimations, producing an unbiased RI and the lowest confidence limits and interquartile ranges. For a simulated Gaussian population, parametric calculations, as expected, were the best; quantile-based bootstrapping produced biased results at low sample sizes, and the transformed parametric method generated heavily biased RI. The resampling approach could help compare different RI calculation methods. An algorithm showing a resampling procedure for choosing the appropriate method for RI calculations is included. PMID:20554803
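
    A minimal sketch of the quantile-based bootstrap RI calculation compared in the study: resample the observations with replacement and average the 2.5th and 97.5th percentiles across replicates. The data below are synthetic stand-ins for the factor B measurements.

```python
# Quantile-based bootstrap estimate of a 95% reference interval.
import numpy as np

rng = np.random.default_rng(0)
values = rng.lognormal(mean=3.0, sigma=0.3, size=81)   # stand-in lab values

boots = rng.choice(values, size=(2000, values.size), replace=True)
lower = np.percentile(boots, 2.5, axis=1).mean()       # averaged over replicates
upper = np.percentile(boots, 97.5, axis=1).mean()
print(f"reference interval: {lower:.1f} - {upper:.1f}")
```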

  15. Mango: multiple alignment with N gapped oligos.

    Science.gov (United States)

    Zhang, Zefeng; Lin, Hao; Li, Ming

    2008-06-01

    Multiple sequence alignment is a classical and challenging task. The problem is NP-hard. The full dynamic programming takes too much time. The progressive alignment heuristics adopted by most state-of-the-art works suffer from the "once a gap, always a gap" phenomenon. Is there a radically new way to do multiple sequence alignment? In this paper, we introduce a novel and orthogonal multiple sequence alignment method, using both multiple optimized spaced seeds and new algorithms to handle these seeds efficiently. Our new algorithm processes information of all sequences as a whole and tries to build the alignment vertically, avoiding problems caused by the popular progressive approaches. Because the optimized spaced seeds have proved significantly more sensitive than the consecutive k-mers, the new approach promises to be more accurate and reliable. To validate our new approach, we have implemented MANGO: Multiple Alignment with N Gapped Oligos. Experiments were carried out on large 16S RNA benchmarks, showing that MANGO compares favorably, in both accuracy and speed, against state-of-the-art multiple sequence alignment methods, including ClustalW 1.83, MUSCLE 3.6, MAFFT 5.861, ProbConsRNA 1.11, Dialign 2.2.1, DIALIGN-T 0.2.1, T-Coffee 4.85, POA 2.0, and Kalign 2.0. We have further demonstrated the scalability of MANGO on very large datasets of repeat elements. MANGO can be downloaded at http://www.bioinfo.org.cn/mango/ and is free for academic usage.

  16. Sustainable Assessment of Aerosol Pollution Decrease Applying Multiple Attribute Decision-Making Methods

    Directory of Open Access Journals (Sweden)

    Audrius Čereška

    2016-06-01

    Full Text Available Air pollution with various materials, particularly with aerosols, increases with advances in technological development. This is a complicated global problem. One of the priorities in achieving sustainable development is the reduction of harmful technological effects on the environment and human health, and it is a responsibility of researchers to search for effective methods of reducing pollution. Reliable results can be obtained by combining the approaches used in various fields of science and technology. This paper aims to demonstrate the effectiveness of multiple attribute decision-making (MADM) methods in investigating and solving environmental pollution problems. The paper presents a study of the process of evaporation of a toxic liquid based on the MADM methods. A schematic view of the test setup is presented. The density, viscosity, and rate of the released vapor flow are measured, and the dependence of the variation of the solution concentration on its temperature is determined in the experimental study. The concentration of the hydrochloric acid solution (HAS) varies in the range from 28% to 34%, while the liquid is heated from 50 to 80 °C. The variations in the parameters are analyzed using the well-known VIKOR and COPRAS MADM methods. For determining the criteria weights, a new CILOS (Criterion Impact LOSs) method is used. The experimental results are arranged in priority order using the MADM methods. Based on the obtained data, the technological parameters of production ensuring minimum environmental pollution can be chosen.

  17. Quantifying cause-related mortality by weighting multiple causes of death

    Science.gov (United States)

    Moreno-Betancur, Margarita; Lamarche-Vadel, Agathe; Rey, Grégoire

    2016-01-01

    Abstract Objective To investigate a new approach to calculating cause-related standardized mortality rates that involves assigning weights to each cause of death reported on death certificates. Methods We derived cause-related standardized mortality rates from death certificate data for France in 2010 using: (i) the classic method, which considered only the underlying cause of death; and (ii) three novel multiple-cause-of-death weighting methods, which assigned weights to multiple causes of death mentioned on death certificates: the first two multiple-cause-of-death methods assigned non-zero weights to all causes mentioned and the third assigned non-zero weights to only the underlying cause and other contributing causes that were not part of the main morbid process. As the sum of the weights for each death certificate was 1, each death had an equal influence on mortality estimates and the total number of deaths was unchanged. Mortality rates derived using the different methods were compared. Findings On average, 3.4 causes per death were listed on each certificate. The standardized mortality rate calculated using the third multiple-cause-of-death weighting method was more than 20% higher than that calculated using the classic method for five disease categories: skin diseases, mental disorders, endocrine and nutritional diseases, blood diseases and genitourinary diseases. Moreover, this method highlighted the mortality burden associated with certain diseases in specific age groups. Conclusion A multiple-cause-of-death weighting approach to calculating cause-related standardized mortality rates from death certificate data identified conditions that contributed more to mortality than indicated by the classic method. This new approach holds promise for identifying underrecognized contributors to mortality. PMID:27994280
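
    The weighting idea is easy to make concrete: each certificate's cause weights sum to 1, so every death contributes exactly one death to the totals no matter how many causes are listed. The certificates, causes and weights below are invented.

```python
# Cause-specific death counts from weighted multiple causes of death.
from collections import defaultdict

certificates = [
    {"heart disease": 0.6, "diabetes": 0.4},
    {"skin disease": 1.0},                      # underlying cause only
    {"heart disease": 0.5, "renal failure": 0.5},
]

deaths = defaultdict(float)
for cert in certificates:
    assert abs(sum(cert.values()) - 1.0) < 1e-9   # one death per certificate
    for cause, weight in cert.items():
        deaths[cause] += weight

print(dict(deaths))   # cause-specific counts; the grand total is still 3 deaths
```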

  18. A multi-method approach toward de novo glycan characterization: a Man-5 case study.

    Science.gov (United States)

    Prien, Justin M; Prater, Bradley D; Cockrill, Steven L

    2010-05-01

    Regulatory agencies' expectations for biotherapeutic approval are becoming more stringent with regard to product characterization, where minor species as low as 0.1% of a given profile are typically identified. The mission of this manuscript is to demonstrate a multi-method approach toward de novo glycan characterization and quantitation, including minor species at or approaching the 0.1% benchmark. Recently, unexpected isomers of Man(5)GlcNAc(2) (M(5)) were reported (Prien JM, Ashline DJ, Lapadula AJ, Zhang H, Reinhold VN. 2009. The high mannose glycans from bovine ribonuclease B isomer characterization by ion trap mass spectrometry (MS). J Am Soc Mass Spectrom. 20:539-556). In the current study, quantitative analysis of these isomers found in commercial M(5) standard demonstrated that they are in low abundance; glycans labeled with 2-aminobenzoic acid were used to detect and chromatographically resolve multiple M(5) isomers in bovine ribonuclease B. With this multi-method approach, we have the capability to comprehensively characterize a biotherapeutic's glycan array in a de novo manner, including structural isomers at ≥0.1% of the total chromatographic peak area.

  19. Hybrid Optimization-Based Approach for Multiple Intelligent Vehicles Requests Allocation

    Directory of Open Access Journals (Sweden)

    Ahmed Hussein

    2018-01-01

    Full Text Available Self-driving cars have attracted significant attention during the last few years, and rapid technological advances have reached the point of having a number of automated vehicles on the roads. Therefore, the need for cooperative driving among these automated vehicles is increasing rapidly. One of the main issues in the cooperative driving world is the Multirobot Task Allocation (MRTA) problem. This paper addresses the MRTA problem, specifically the problem of allocating vehicles to requests. The objective is to introduce a hybrid optimization-based approach to solve the problem of multiple intelligent vehicles requests allocation as an instance of the MRTA problem, finding not only a feasible solution but also an optimized one as per the objective function. Several test scenarios were implemented in order to evaluate the efficiency of the proposed approach. These scenarios are based on well-known benchmarks; thus a comparative study is conducted between the obtained results and the suboptimal results. The analysis of the experimental results shows that the proposed approach was successful in handling various scenarios, especially with an increasing number of vehicles and requests, which demonstrates its efficiency and performance.
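
    As a baseline for the vehicles-to-requests allocation, the Hungarian algorithm on a cost matrix yields an optimal one-to-one assignment; it is a reference point only, the paper's hybrid meta-heuristic is not reproduced, and the cost values below are invented.

```python
# Optimal one-to-one vehicle-to-request assignment via the Hungarian algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i][j] = e.g., travel time for vehicle i to serve request j (invented)
cost = np.array([[4.0, 1.0, 3.0],
                 [2.0, 0.5, 5.0],
                 [3.0, 2.0, 2.0]])
vehicles, requests = linear_sum_assignment(cost)
print(list(zip(vehicles, requests)), cost[vehicles, requests].sum())
```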

  20. Statistics of electron multiplication in multiplier phototube: iterative method

    International Nuclear Information System (INIS)

    Grau Malonda, A.; Ortiz Sanchez, J.F.

    1985-01-01

    An iterative method is applied to study the variation of dynode response in the multiplier phototube. Three different situations are considered, corresponding to the following ways in which electrons strike the first dynode: incidence of exactly one electron, incidence of exactly r electrons, and incidence of an average of r̄ electrons. The responses are given for a number of steps between 1 and 5, and for values of the multiplication factor of 2.1, 2.5, 3 and 5. We also study the variance, the skewness and the excess kurtosis for different multiplication factors. (author)
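
    A Monte Carlo companion to the iterative calculation described above, assuming Poisson secondary-emission statistics at each dynode (an assumption of this sketch, not necessarily the paper's model): each electron releases an independent Poisson number of secondaries with mean g, repeated over k stages.

```python
# Simulated electron-multiplication cascade through k dynode stages.
import numpy as np

def cascade(n0, gain, stages, trials, seed=0):
    rng = np.random.default_rng(seed)
    n = np.full(trials, n0)
    for _ in range(stages):
        n = rng.poisson(gain * n)   # each electron multiplies independently
    return n

out = cascade(n0=1, gain=2.5, stages=5, trials=100_000)
print(out.mean(), out.var())        # mean ~ 2.5**5; variance grows much faster
```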

  1. Multiple centroid method to evaluate the adaptability of alfalfa genotypes

    Directory of Open Access Journals (Sweden)

    Moysés Nascimento

    2015-02-01

    Full Text Available This study aimed to evaluate the efficiency of the multiple centroid method to study the adaptability of alfalfa genotypes (Medicago sativa L.). In this method, the genotypes are compared with ideotypes defined by the bi-segmented regression model, according to the researcher's interest. Thus, genotype classification is carried out as determined by the objective of the researcher and the proposed recommendation strategy. Despite the great potential of the method, it needs to be evaluated in a biological context (with real data). In this context, we used data on the evaluation of dry matter production of 92 alfalfa cultivars, with 20 cuttings, from an experiment in randomized blocks with two repetitions carried out from November 2004 to June 2006. The multiple centroid method proved efficient for classifying alfalfa genotypes. Moreover, it produced no ambiguous indications, provided that ideotypes were defined according to the researcher's interest, facilitating data interpretation.

  2. Multiple triangulation and collaborative research using qualitative methods to explore decision making in pre-hospital emergency care

    Directory of Open Access Journals (Sweden)

    Maxine Johnson

    2017-01-01

    Full Text Available Abstract Background Paramedics make important and increasingly complex decisions at scene about patient care. Patient safety implications of influences on decision making in the pre-hospital setting were previously under-researched. Cutting-edge perspectives advocate exploring the whole system rather than individual influences on patient safety. Ethnography (the study of people and cultures) has been acknowledged as a suitable method for identifying health care issues as they occur within the natural context. In this paper we compare multiple methods used in a multi-site, qualitative study that aimed to identify system influences on decision making. Methods The study was conducted in three NHS Ambulance Trusts in England and involved researchers from each Trust working alongside academic researchers. Exploratory interviews with key informants, e.g. managers (n = 16), and document review provided contextual information. Between October 2012 and July 2013 researchers observed 34 paramedic shifts, and ten paramedics provided additional accounts via audio-recorded ‘digital diaries’ (155 events). Three staff focus groups (total n = 21) and three service user focus groups (total n = 23) explored a range of experiences and perceptions. Data collection and analysis was carried out by academic and ambulance service researchers as well as service users. Workshops were held at each site to elicit feedback on the findings and facilitate prioritisation of issues identified. Results The use of a multi-method qualitative approach allowed cross-validation of important issues for ambulance service staff and service users. A key factor in successful implementation of the study was establishing good working relationships with academic and ambulance service teams. Enrolling at least one research lead at each site facilitated the recruitment process as well as study progress. Active involvement with the study allowed ambulance service researchers and service

  3. Maintenance Approaches for Different Production Methods

    Directory of Open Access Journals (Sweden)

    Mungani, Dzivhuluwani Simon

    2013-11-01

    Full Text Available Various production methods are used in industry to manufacture or produce a variety of products needed by industry and consumers. The nature of a product determines which production method is most suitable or cost-effective. A continuous process is typically used to produce large volumes of liquids or gases. Batch processing is often used for small volumes, such as pharmaceutical products. This paper discusses a research project to determine the relationship between maintenance approaches and production methods. A survey was done to determine to what extent three maintenance approaches, reliability-centred maintenance (RCM), total productive maintenance (TPM), and business-centred maintenance (BCM), are used for three different processing methods (continuous process, batch process, and a production line method).

  4. A Multiple Identity Approach to Gender: Identification with Women, Identification with Feminists, and Their Interaction

    Directory of Open Access Journals (Sweden)

    Jolien A. van Breen

    2017-06-01

    Full Text Available Across four studies, we examine multiple identities in the context of gender and propose that women's attitudes toward gender group membership are governed by two largely orthogonal dimensions of gender identity: identification with women and identification with feminists. We argue that identification with women reflects attitudes toward the content society gives to group membership: what does it mean to be a woman in terms of group characteristics, interests and values? Identification with feminists, on the other hand, is a politicized identity dimension reflecting attitudes toward the social position of the group: what does it mean to be a woman in terms of disadvantage, inequality, and relative status? We examine the utility of this multiple identity approach in four studies. Study 1 showed that identification with women reflects attitudes toward group characteristics, such as femininity and self-stereotyping, while identification with feminists reflects attitudes toward the group's social position, such as perceived sexism. The two dimensions are shown to be largely independent, and as such provide support for the multiple identity approach. In Studies 2–4, we examine the utility of this multiple identity approach in predicting qualitative differences in gender attitudes. Results show that specific combinations of identification with women and feminists predicted attitudes toward collective action and gender stereotypes. Higher identification with feminists led to endorsement of radical collective action (Study 2) and critical attitudes toward gender stereotypes (Studies 3–4), especially at lower levels of identification with women. The different combinations of high vs. low identification with women and feminists can be thought of as reflecting four theoretical identity “types.” A woman can be (1) strongly identified with neither women nor feminists (“low identifier”), (2) strongly identified with women but less so with feminists (“traditional identifier”), (3

  5. A Multiple Identity Approach to Gender: Identification with Women, Identification with Feminists, and Their Interaction

    Science.gov (United States)

    van Breen, Jolien A.; Spears, Russell; Kuppens, Toon; de Lemus, Soledad

    2017-01-01

    Across four studies, we examine multiple identities in the context of gender and propose that women's attitudes toward gender group membership are governed by two largely orthogonal dimensions of gender identity: identification with women and identification with feminists. We argue that identification with women reflects attitudes toward the content society gives to group membership: what does it mean to be a woman in terms of group characteristics, interests and values? Identification with feminists, on the other hand, is a politicized identity dimension reflecting attitudes toward the social position of the group: what does it mean to be a woman in terms of disadvantage, inequality, and relative status? We examine the utility of this multiple identity approach in four studies. Study 1 showed that identification with women reflects attitudes toward group characteristics, such as femininity and self-stereotyping, while identification with feminists reflects attitudes toward the group's social position, such as perceived sexism. The two dimensions are shown to be largely independent, and as such provide support for the multiple identity approach. In Studies 2–4, we examine the utility of this multiple identity approach in predicting qualitative differences in gender attitudes. Results show that specific combinations of identification with women and feminists predicted attitudes toward collective action and gender stereotypes. Higher identification with feminists led to endorsement of radical collective action (Study 2) and critical attitudes toward gender stereotypes (Studies 3–4), especially at lower levels of identification with women. The different combinations of high vs. low identification with women and feminists can be thought of as reflecting four theoretical identity “types.” A woman can be (1) strongly identified with neither women nor feminists (“low identifier”), (2) strongly identified with women but less so with feminists (“traditional identifier”)

  6. A Multiple Identity Approach to Gender: Identification with Women, Identification with Feminists, and Their Interaction.

    Science.gov (United States)

    van Breen, Jolien A; Spears, Russell; Kuppens, Toon; de Lemus, Soledad

    2017-01-01

    Across four studies, we examine multiple identities in the context of gender and propose that women's attitudes toward gender group membership are governed by two largely orthogonal dimensions of gender identity: identification with women and identification with feminists. We argue that identification with women reflects attitudes toward the content society gives to group membership: what does it mean to be a woman in terms of group characteristics, interests and values? Identification with feminists, on the other hand, is a politicized identity dimension reflecting attitudes toward the social position of the group: what does it mean to be a woman in terms of disadvantage, inequality, and relative status? We examine the utility of this multiple identity approach in four studies. Study 1 showed that identification with women reflects attitudes toward group characteristics, such as femininity and self-stereotyping, while identification with feminists reflects attitudes toward the group's social position, such as perceived sexism. The two dimensions are shown to be largely independent, and as such provide support for the multiple identity approach. In Studies 2-4, we examine the utility of this multiple identity approach in predicting qualitative differences in gender attitudes. Results show that specific combinations of identification with women and feminists predicted attitudes toward collective action and gender stereotypes. Higher identification with feminists led to endorsement of radical collective action (Study 2) and critical attitudes toward gender stereotypes (Studies 3-4), especially at lower levels of identification with women. The different combinations of high vs. low identification with women and feminists can be thought of as reflecting four theoretical identity "types." A woman can be (1) strongly identified with neither women nor feminists ("low identifier"), (2) strongly identified with women but less so with feminists ("traditional identifier"), (3

  7. Use of ultrasonic array method for positioning multiple partial discharge sources in transformer oil.

    Science.gov (United States)

    Xie, Qing; Tao, Junhan; Wang, Yongqiang; Geng, Jianghai; Cheng, Shuyi; Lü, Fangcheng

    2014-08-01

    Fast and accurate positioning of partial discharge (PD) sources in transformer oil is very important for the safe, stable operation of power systems because it allows timely elimination of insulation faults. There is usually more than one PD source once an insulation fault occurs in the transformer oil. This study, which has both theoretical and practical significance, proposes a method of identifying multiple PD sources in the transformer oil. The method combines the two-sided correlation transformation algorithm for broadband signal focusing with the modified Gerschgorin disk estimator. The multiple signal classification method is used to determine the directions of arrival of signals from multiple PD sources. The ultrasonic array positioning method is based on multi-platform direction finding and global optimization searching. Both the 4 × 4 square planar ultrasonic sensor array and the ultrasonic array detection platform are built to test the method of identifying and positioning multiple PD sources. The obtained results verify the validity and the engineering practicability of this method.
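
    The core direction-finding step named in this record is a subspace method ("classification of multiple signals", i.e. a MUSIC-style estimator). As a loose, hedged illustration only, and not the paper's broadband two-sided correlation transform or Gerschgorin source-number estimator, the sketch below runs narrowband MUSIC on a simulated uniform linear array; the array size, spacing, noise level and source angles are all invented.

```python
import numpy as np
from scipy.signal import find_peaks

# Narrowband MUSIC sketch on a simulated uniform linear array (illustrative only).
rng = np.random.default_rng(0)
M, d, snapshots = 8, 0.5, 200            # sensors, spacing in wavelengths, samples
true_deg = np.array([-20.0, 35.0])       # assumed source directions

def steering(theta_deg):
    theta = np.deg2rad(theta_deg)
    return np.exp(2j * np.pi * d * np.arange(M)[:, None] * np.sin(theta))

A = steering(true_deg)                                   # M x K steering matrix
S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
N = rng.standard_normal((M, snapshots)) + 1j * rng.standard_normal((M, snapshots))
X = A @ S + 0.1 * N                                      # noisy array snapshots

R = X @ X.conj().T / snapshots                           # sample covariance
_, eigvecs = np.linalg.eigh(R)                           # eigenvalues ascending
En = eigvecs[:, : M - 2]                                 # noise subspace (K = 2 assumed known)

grid = np.linspace(-90.0, 90.0, 1801)
spectrum = 1.0 / np.linalg.norm(En.conj().T @ steering(grid), axis=0) ** 2
peaks, _ = find_peaks(spectrum)
best = peaks[np.argsort(spectrum[peaks])[-2:]]           # two largest local maxima
print("estimated DOAs (deg):", np.sort(grid[best]))
```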

  8. Ensemble approach combining multiple methods improves human transcription start site prediction.

    LENUS (Irish Health Repository)

    Dineen, David G

    2010-01-01

    The computational prediction of transcription start sites is an important unsolved problem. Some recent progress has been made, but many promoters, particularly those not associated with CpG islands, are still difficult to locate using current methods. These methods use different features and training sets, along with a variety of machine learning techniques and result in different prediction sets.

  9. Regularization methods for ill-posed problems in multiple Hilbert scales

    International Nuclear Information System (INIS)

    Mazzieri, Gisela L; Spies, Ruben D

    2012-01-01

    Several convergence results in Hilbert scales under different source conditions are proved and orders of convergence and optimal orders of convergence are derived. Also, relations between those source conditions are proved. The concept of a multiple Hilbert scale on a product space is introduced, and regularization methods on these scales are defined, both for the case of a single observation and for the case of multiple observations. In the latter case, it is shown how vector-valued regularization functions in these multiple Hilbert scales can be used. In all cases, convergence is proved and orders and optimal orders of convergence are shown. Finally, some potential applications and open problems are discussed. (paper)

  10. Multiple player tracking in sports video: a dual-mode two-way bayesian inference approach with progressive observation modeling.

    Science.gov (United States)

    Xing, Junliang; Ai, Haizhou; Liu, Liwei; Lao, Shihong

    2011-06-01

    Multiple object tracking (MOT) is a very challenging task yet of fundamental importance for many practical applications. In this paper, we focus on the problem of tracking multiple players in sports video which is even more difficult due to the abrupt movements of players and their complex interactions. To handle the difficulties in this problem, we present a new MOT algorithm which contributes both in the observation modeling level and in the tracking strategy level. For the observation modeling, we develop a progressive observation modeling process that is able to provide strong tracking observations and greatly facilitate the tracking task. For the tracking strategy, we propose a dual-mode two-way Bayesian inference approach which dynamically switches between an offline general model and an online dedicated model to deal with single isolated object tracking and multiple occluded object tracking integrally by forward filtering and backward smoothing. Extensive experiments on different kinds of sports videos, including football, basketball, as well as hockey, demonstrate the effectiveness and efficiency of the proposed method.

  11. Optimal planning of multiple distributed generation sources in distribution networks: A new approach

    Energy Technology Data Exchange (ETDEWEB)

    AlRashidi, M.R., E-mail: malrash2002@yahoo.com [Department of Electrical Engineering, College of Technological Studies, Public Authority for Applied Education and Training (PAAET) (Kuwait); AlHajri, M.F., E-mail: mfalhajri@yahoo.com [Department of Electrical Engineering, College of Technological Studies, Public Authority for Applied Education and Training (PAAET) (Kuwait)

    2011-10-15

    Highlights: → A new hybrid PSO for optimal DGs placement and sizing. → Statistical analysis to fine tune PSO parameters. → Novel constraint handling mechanism to handle different constraint types. - Abstract: An improved particle swarm optimization algorithm (PSO) is presented for optimal planning of multiple distributed generation sources (DG). This problem can be divided into two sub-problems: the DG optimal size (continuous optimization) and location (discrete optimization) to minimize real power losses. The proposed approach addresses the two sub-problems simultaneously using an enhanced PSO algorithm capable of handling multiple DG planning in a single run. A design of experiment is used to fine tune the proposed approach via proper analysis of PSO parameters interaction. The proposed algorithm treats the problem constraints differently by adopting a radial power flow algorithm to satisfy the equality constraints, i.e. power flows in distribution networks, while the inequality constraints are handled by making use of some of the PSO features. The proposed algorithm was tested on the practical 69-bus power distribution system. Different test cases were considered to validate the consistency of the proposed approach in detecting optimal or near-optimal solutions. Results are compared with those of Sequential Quadratic Programming.
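
    To make the swarm mechanics concrete, a stripped-down continuous PSO is sketched below; it minimizes a toy objective rather than network losses, and omits the paper's discrete siting variables, radial power flow and constraint handling. All parameter values are common textbook defaults, not the authors' tuned settings.

```python
import numpy as np

# Minimal continuous PSO sketch on a toy objective (sphere function).
rng = np.random.default_rng(1)

def loss(x):                       # stand-in for real power losses
    return np.sum(x ** 2, axis=-1)

n_particles, dim, iters = 30, 5, 200
w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration coefficients

x = rng.uniform(-5, 5, (n_particles, dim))
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), loss(x)
gbest = pbest[np.argmin(pbest_val)]

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    val = loss(x)
    better = val < pbest_val
    pbest[better], pbest_val[better] = x[better], val[better]
    gbest = pbest[np.argmin(pbest_val)]   # best position found by any particle

print("best objective value:", pbest_val.min())
```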

  12. Optimal planning of multiple distributed generation sources in distribution networks: A new approach

    International Nuclear Information System (INIS)

    AlRashidi, M.R.; AlHajri, M.F.

    2011-01-01

    Highlights: → A new hybrid PSO for optimal DGs placement and sizing. → Statistical analysis to fine tune PSO parameters. → Novel constraint handling mechanism to handle different constraint types. - Abstract: An improved particle swarm optimization algorithm (PSO) is presented for optimal planning of multiple distributed generation sources (DG). This problem can be divided into two sub-problems: the DG optimal size (continuous optimization) and location (discrete optimization) to minimize real power losses. The proposed approach addresses the two sub-problems simultaneously using an enhanced PSO algorithm capable of handling multiple DG planning in a single run. A design of experiment is used to fine tune the proposed approach via proper analysis of PSO parameters interaction. The proposed algorithm treats the problem constraints differently by adopting a radial power flow algorithm to satisfy the equality constraints, i.e. power flows in distribution networks, while the inequality constraints are handled by making use of some of the PSO features. The proposed algorithm was tested on the practical 69-bus power distribution system. Different test cases were considered to validate the consistency of the proposed approach in detecting optimal or near-optimal solutions. Results are compared with those of Sequential Quadratic Programming.

  13. Risks of multiple sclerosis in relatives of patients in Flanders, Belgium

    NARCIS (Netherlands)

    Carton, H; Vlietinck, R; Debruyne, J; DeKeyser, J; DHooghe, MB; Loos, R; Medaer, R; Truyen, L; Yee, IML; Sadovnick, AD

    Objectives - To calculate age adjusted risks for multiple sclerosis in relatives of Flemish patients with multiple sclerosis. Methods - Lifetime risks were calculated using the maximum likelihood approach. Results - Vital information was obtained on 674 probands with multiple sclerosis in Flanders

  14. Multiple-target method for sputtering amorphous films for bubble-domain devices

    International Nuclear Information System (INIS)

    Burilla, C.T.; Bekebrede, W.R.; Smith, A.B.

    1976-01-01

    Previously, sputtered amorphous metal alloys for bubble applications have ordinarily been prepared by standard sputtering techniques using a single target electrode. The deposition of these alloys is reported using a multiple target rf technique in which a separate target is used for each element contained in the alloy. One of the main advantages of this multiple-target approach is that the film composition can be easily changed by simply varying the voltages applied to the elemental targets. In the apparatus, the centers of the targets are positioned on a 15 cm-radius circle. The platform holding the film substrate is on a 15 cm-long arm which can rotate about the center, thus bringing the sample successively under each target. The platform rotation rate is adjustable from 0 to 190 rpm. That this latter speed is sufficient to homogenize the alloys produced is demonstrated by measurements made of the uniaxial anisotropy constant in Gd0.12Co0.59Cu0.29 films. The anisotropy is 6.0 × 10^5 ergs/cm^3 and independent of rotation rate above approximately 25 rpm, but it drops rapidly for slower rotation rates, reaching 1.8 × 10^5 ergs/cm^3 for 7 rpm. The film quality is equal to that of films made by conventional methods. Coercivities of a few oersteds in samples with stripe widths of 1 to 2 μm and magnetizations of 800 to 2800 G were observed

  15. Multiple-scattering formalism for correlated systems: A KKR-DMFT approach

    International Nuclear Information System (INIS)

    Minar, J.; Perlov, A.; Ebert, H.; Chioncel, L.; Katsnelson, M. I.; Lichtenstein, A.I.

    2005-01-01

    We present a charge and self-energy self-consistent computational scheme for correlated systems based on the Korringa-Kohn-Rostoker (KKR) multiple scattering theory with the many-body effects described by means of dynamical mean field theory (DMFT). The corresponding local multiorbital and energy dependent self-energy is included into the set of radial differential equations for the single-site wave functions. The KKR Green's function is written in terms of the multiple scattering path operator, the latter being evaluated using the single-site solution for the t-matrix that in turn is determined by the wave functions. An appealing feature of this approach is that it allows local quantum and disorder fluctuations to be considered on the same footing. Within the coherent potential approximation (CPA) the correlated atoms are placed into a combined effective medium determined by the DMFT self-consistency condition. Results of corresponding calculations for pure Fe, Ni, and FexNi1-x alloys are presented

  16. On Thermally Interacting Multiple Boreholes with Variable Heating Strength: Comparison between Analytical and Numerical Approaches

    Directory of Open Access Journals (Sweden)

    Marc A. Rosen

    2012-08-01

    Full Text Available The temperature response in the soil surrounding multiple boreholes is evaluated analytically and numerically. The assumption of constant heat flux along the borehole wall is examined by coupling the problem to the heat transfer problem inside the borehole and presenting a model with variable heat flux along the borehole length. In the analytical approach, a line source of heat with a finite length is used to model the conduction of heat in the soil surrounding the boreholes. In the numerical method, a finite volume method in a three dimensional meshed domain is used. In order to determine the heat flux boundary condition, the analytical quasi-three-dimensional solution to the heat transfer problem of the U-tube configuration inside the borehole is used. This solution takes into account the variation in heating strength along the borehole length due to the temperature variation of the fluid running in the U-tube. Thus, critical depths at which thermal interaction occurs can be determined. Finally, in order to examine the validity of the numerical method, a comparison is made with the results of the line source method.
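
    As a rough numerical companion to the analytical approach, the snippet below evaluates the simpler infinite line-source response, ΔT = (q / 4πk) E1(r² / 4αt), to gauge when a neighbouring borehole starts to feel a heat load; the finite-length, variable-strength model of the paper is not reproduced, and the soil properties and load are illustrative values only.

```python
import numpy as np
from scipy.special import exp1

# Infinite line-source ground temperature response (simplified sketch).
q = 50.0       # heat extraction rate per unit length, W/m (assumed)
k = 2.0        # soil thermal conductivity, W/(m K) (assumed)
alpha = 1e-6   # soil thermal diffusivity, m^2/s (assumed)

def delta_T(r, t):
    """Temperature change (K) at radius r (m) after time t (s)."""
    return q / (4 * np.pi * k) * exp1(r ** 2 / (4 * alpha * t))

year = 365 * 24 * 3600.0
for r in (0.05, 1.0, 3.0):   # borehole wall and two neighbour distances
    print(f"r = {r:4.2f} m: dT after 10 years = {delta_T(r, 10 * year):.2f} K")
```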

  17. Multiple Moving Obstacles Avoidance of Service Robot using Stereo Vision

    Directory of Open Access Journals (Sweden)

    Achmad Jazidie

    2011-12-01

    Full Text Available In this paper, we propose a multiple moving obstacle avoidance method using stereo vision for service robots in indoor environments. We assume that this model of service robot is used to deliver a cup to a recognized customer from the starting point to the destination. The contribution of this research is a new method for multiple moving obstacle avoidance with a Bayesian approach using a stereo camera. We have developed and introduced three main modules: to recognize faces, to identify multiple moving obstacles, and to maneuver the robot. A group of walking people is tracked as a multiple moving obstacle, and the speed, direction, and distance of the moving obstacles are estimated by a stereo camera so that the robot can maneuver to avoid collisions. To overcome the inaccuracies of the vision sensor, a Bayesian approach is used to estimate the absence and direction of obstacles. We present the results of experiments with the service robot, called Srikandi III, which uses our proposed method, and we also evaluate its performance. The experiments showed that our proposed method works well, and that the Bayesian approach improved the estimation performance for the absence and direction of moving obstacles.

  18. A note on the relationships between multiple imputation, maximum likelihood and fully Bayesian methods for missing responses in linear regression models.

    Science.gov (United States)

    Chen, Qingxia; Ibrahim, Joseph G

    2014-07-01

    Multiple Imputation, Maximum Likelihood and Fully Bayesian methods are the three most commonly used model-based approaches in missing data problems. Although it is easy to show that when the responses are missing at random (MAR), the complete case analysis is unbiased and efficient, the aforementioned methods are still commonly used in practice for this setting. To examine the performance of and relationships between these three methods in this setting, we derive and investigate small sample and asymptotic expressions of the estimates and standard errors, and fully examine how these estimates are related for the three approaches in the linear regression model when the responses are MAR. We show that when the responses are MAR in the linear model, the estimates of the regression coefficients using these three methods are asymptotically equivalent to the complete case estimates under general conditions. One simulation and a real data set from a liver cancer clinical trial are given to compare the properties of these methods when the responses are MAR.

  19. Simultaneous Two-Way Clustering of Multiple Correspondence Analysis

    Science.gov (United States)

    Hwang, Heungsun; Dillon, William R.

    2010-01-01

    A 2-way clustering approach to multiple correspondence analysis is proposed to account for cluster-level heterogeneity of both respondents and variable categories in multivariate categorical data. Specifically, in the proposed method, multiple correspondence analysis is combined with k-means in a unified framework in which "k"-means is…

  20. A Pattern-Oriented Approach to a Methodical Evaluation of Modeling Methods

    Directory of Open Access Journals (Sweden)

    Michael Amberg

    1996-11-01

    Full Text Available The paper describes a pattern-oriented approach to evaluate modeling methods and to compare various methods with each other from a methodical viewpoint. A specific set of principles (the patterns) is defined by investigating the notations and the documentation of comparable modeling methods. Each principle helps to examine some parts of the methods from a specific point of view. All principles together lead to an overall picture of the method under examination. First the core ("method-neutral") meaning of each principle is described. Then the methods are examined regarding the principle. Afterwards the method specific interpretations are compared with each other and with the core meaning of the principle. By this procedure, the strengths and weaknesses of modeling methods regarding methodical aspects are identified. The principles are described uniformly using a principle description template according to descriptions of object oriented design patterns. The approach is demonstrated by evaluating a business process modeling method.

  1. Multiple Response Regression for Gaussian Mixture Models with Known Labels.

    Science.gov (United States)

    Lee, Wonyul; Du, Ying; Sun, Wei; Hayes, D Neil; Liu, Yufeng

    2012-12-01

    Multiple response regression is a useful regression technique to model multiple response variables using the same set of predictor variables. Most existing methods for multiple response regression are designed for modeling homogeneous data. In many applications, however, one may have heterogeneous data where the samples are divided into multiple groups. Our motivating example is a cancer dataset where the samples belong to multiple cancer subtypes. In this paper, we consider modeling the data coming from a mixture of several Gaussian distributions with known group labels. A naive approach is to split the data into several groups according to the labels and model each group separately. Although it is simple, this approach ignores potential common structures across different groups. We propose new penalized methods to model all groups jointly in which the common and unique structures can be identified. The proposed methods estimate the regression coefficient matrix, as well as the conditional inverse covariance matrix of response variables. Asymptotic properties of the proposed methods are explored. Through numerical examples, we demonstrate that both estimation and prediction can be improved by modeling all groups jointly using the proposed methods. An application to a glioblastoma cancer dataset reveals some interesting common and unique gene relationships across different cancer subtypes.

  2. INCLUSION OF CHILDREN WITH INTELLECTUAL AND MULTIPLE DISABILITIES: A COMMUNITY-BASED REHABILITATION APPROACH, INDIA

    Directory of Open Access Journals (Sweden)

    Ram LAKHAN

    2013-03-01

    Full Text Available Background: Inclusion of children with intellectual disabilities (ID) and multiple disabilities (MD) in regular schools in India is extremely poor. One of the key objectives of community-based rehabilitation (CBR) is to include ID & MD children in regular schools. This study attempted to find out the association of age, ID severity, poverty, gender, parent education, population, and multiple disabilities comprising one or more of the disorders cerebral palsy, epilepsy and psychiatric disorders with inclusion among 259 children in Barwani Block of Barwani District in the state of Madhya Pradesh, India. Aim: Inclusion of children with intellectual and multiple disabilities in regular schools through a CBR approach in India. Method: A chi-square test was conducted to investigate the association between inclusion and the predictor variables ID categories, age, gender, poverty level, parent education, population type and multiple disabilities. Result: Inclusion was possible for borderline 2 (66.4%), mild 54 (68.3%), moderate 18 (18.2%), and age range from 5 to 12 years 63 (43%). Children living in poor families 63 (30.6%), not poor 11 (18.9%), parental education none 52 (26%), primary level 11 (65%), middle school 10 (48%), high school 0 (0%) and bachelor degree 1 (7%), female 34 (27.9%), male 40 (29.2%), tribal 40 (28.7%), non-tribal 34 (28.3%) and multiple disabled with cerebral palsy 1 (1.2%), epilepsy 3 (4.8%) and psychiatric disorders 12 (22.6%) were able to receive inclusive education. Significant differences in inclusion among ID categories (χ² = 99.8, p < 0.001), poverty (χ² = 3.37, p = 0.044), parental education (χ² = 23.7, p < 0.001), MD CP (χ² = 43.9, p < 0.001) and epilepsy (χ² = 22.4, p < 0.001) were seen. Conclusion: Inclusion through CBR is feasible and acceptable in poor rural settings in India. CBR can facilitate the inclusion of children in the borderline, mild and moderate categories by involving their parents, teachers and community members.

  3. Segmenting Multiple Sclerosis Lesions using a Spatially Constrained K-Nearest Neighbour approach

    DEFF Research Database (Denmark)

    Lyksborg, Mark; Larsen, Rasmus; Sørensen, Per Soelberg

    2012-01-01

    We propose a method for the segmentation of Multiple Sclerosis lesions. The method is based on probability maps derived from a K-Nearest Neighbours classification. These are used as a non-parametric likelihood in a Bayesian formulation with a prior that assumes connectivity of neighbouring voxels. …

  4. Multiple triangulation and collaborative research using qualitative methods to explore decision making in pre-hospital emergency care.

    Science.gov (United States)

    Johnson, Maxine; O'Hara, Rachel; Hirst, Enid; Weyman, Andrew; Turner, Janette; Mason, Suzanne; Quinn, Tom; Shewan, Jane; Siriwardena, A Niroshan

    2017-01-24

    Paramedics make important and increasingly complex decisions at scene about patient care. Patient safety implications of influences on decision making in the pre-hospital setting were previously under-researched. Cutting edge perspectives advocate exploring the whole system rather than individual influences on patient safety. Ethnography (the study of people and cultures) has been acknowledged as a suitable method for identifying health care issues as they occur within the natural context. In this paper we compare multiple methods used in a multi-site, qualitative study that aimed to identify system influences on decision making. The study was conducted in three NHS Ambulance Trusts in England and involved researchers from each Trust working alongside academic researchers. Exploratory interviews with key informants e.g. managers (n = 16) and document review provided contextual information. Between October 2012 and July 2013 researchers observed 34 paramedic shifts and ten paramedics provided additional accounts via audio-recorded 'digital diaries' (155 events). Three staff focus groups (total n = 21) and three service user focus groups (total n = 23) explored a range of experiences and perceptions. Data collection and analysis was carried out by academic and ambulance service researchers as well as service users. Workshops were held at each site to elicit feedback on the findings and facilitate prioritisation of issues identified. The use of a multi-method qualitative approach allowed cross-validation of important issues for ambulance service staff and service users. A key factor in successful implementation of the study was establishing good working relationships with academic and ambulance service teams. Enrolling at least one research lead at each site facilitated the recruitment process as well as study progress. Active involvement with the study allowed ambulance service researchers and service users to gain a better understanding of the research

  5. A method for the generation of random multiple Coulomb scattering angles

    International Nuclear Information System (INIS)

    Campbell, J.R.

    1995-06-01

    A method for the random generation of spatial angles drawn from non-Gaussian multiple Coulomb scattering distributions is presented. The method employs direct numerical inversion of cumulative probability distributions computed from the universal non-Gaussian angular distributions of Marion and Zimmerman. (author). 12 refs., 3 figs
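
    The essential mechanism here, direct numerical inversion of a tabulated cumulative distribution, is easy to sketch. Below, a toy heavy-tailed angular density stands in for the Marion and Zimmerman universal distributions, which are not reproduced; only the inversion machinery is illustrated.

```python
import numpy as np

# Inverse-CDF sampling sketch for a tabulated, non-Gaussian angular density.
rng = np.random.default_rng(2)

theta = np.linspace(0.0, 0.2, 2001)                          # angle grid (rad)
pdf = np.exp(-theta / 0.01) + 1e-3 * np.exp(-theta / 0.05)   # toy heavy-tailed shape

# Cumulative distribution by trapezoidal accumulation, normalised to 1.
cdf = np.concatenate(([0.0],
                      np.cumsum(0.5 * (pdf[1:] + pdf[:-1]) * np.diff(theta))))
cdf /= cdf[-1]

u = rng.random(100_000)                  # uniform deviates
samples = np.interp(u, cdf, theta)       # numerical inversion of the CDF
print("mean scattering angle (rad):", samples.mean())
```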

  6. Methods of fast, multiple-point in vivo T1 determination

    International Nuclear Information System (INIS)

    Zhang, Y.; Spigarelli, M.; Fencil, L.E.; Yeung, H.N.

    1989-01-01

    Two methods of rapid, multiple-point determination of T1 in vivo have been evaluated with a phantom consisting of vials of gel in different Mn++ concentrations. The first method was an inversion-recovery-on-the-fly technique, and the second method used a variable-tip-angle (α) progressive saturation with two sub-sequences of different repetition times. In the first method, 1/T1 was evaluated by an exponential fit. In the second method, 1/T1 was obtained iteratively with a linear fit and then readjusted together with α to a model equation until self-consistency was reached.
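
    To illustrate the first approach, the sketch below fits T1 from inversion-recovery data using the standard signal model S(TI) = S0·(1 − 2·exp(−TI/T1)); the inversion times, noise level and ground truth are invented, and the on-the-fly acquisition details are not modelled.

```python
import numpy as np
from scipy.optimize import curve_fit

# Inversion-recovery T1 fit sketch: S(TI) = S0 * (1 - 2*exp(-TI/T1)).
rng = np.random.default_rng(3)
T1_true, S0_true = 0.8, 100.0                          # assumed ground truth (s, a.u.)
TI = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.6, 3.2])    # inversion times, s

def model(TI, S0, T1):
    return S0 * (1 - 2 * np.exp(-TI / T1))

signal = model(TI, S0_true, T1_true) + rng.normal(0, 1.0, TI.size)
popt, _ = curve_fit(model, TI, signal, p0=[signal.max(), 1.0])
print(f"fitted T1 = {popt[1]:.3f} s (true {T1_true} s)")
```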

  7. VIKOR Method for Interval Neutrosophic Multiple Attribute Group Decision-Making

    Directory of Open Access Journals (Sweden)

    Yu-Han Huang

    2017-11-01

    Full Text Available In this paper, we will extend the VIKOR (VIsekriterijumska optimizacija i KOmpromisno Resenje) method to multiple attribute group decision-making (MAGDM) with interval neutrosophic numbers (INNs). Firstly, the basic concepts of INNs are briefly presented. The method first aggregates all individual decision-makers’ assessment information based on an interval neutrosophic weighted averaging (INWA) operator, and then employs the extended classical VIKOR method to solve MAGDM problems with INNs. The validity and stability of this method are verified by example analysis and sensitivity analysis, and its superiority is illustrated by a comparison with the existing methods.
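
    For orientation, the crisp VIKOR core (before any interval neutrosophic aggregation) can be written in a few lines; the decision matrix, weights and the choice v = 0.5 below are invented, and all criteria are treated as benefit criteria.

```python
import numpy as np

# Crisp VIKOR sketch: rank alternatives from a benefit-criteria decision matrix.
F = np.array([[7.0, 5.0, 8.0],     # hypothetical alternatives x criteria
              [6.0, 9.0, 5.0],
              [8.0, 6.0, 6.0]])
w = np.array([0.4, 0.35, 0.25])    # assumed criterion weights
v = 0.5                            # "majority rule" weight

f_best, f_worst = F.max(axis=0), F.min(axis=0)     # ideal / worst values
D = w * (f_best - F) / (f_best - f_worst)          # weighted normalised regret
S, R = D.sum(axis=1), D.max(axis=1)                # group utility / individual regret

Q = (v * (S - S.min()) / (S.max() - S.min())
     + (1 - v) * (R - R.min()) / (R.max() - R.min()))
print("Q values:", np.round(Q, 3), "-> best alternative:", int(np.argmin(Q)))
```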

  8. The Effectiveness of Using a Multiple Gating Approach to Discriminate among ADHD Subtypes

    Science.gov (United States)

    Simonsen, Brandi M.; Bullis, Michael D.

    2007-01-01

    This study explored the ability of Systematically Progressive Assessment (SPA), a multiple gating approach for assessing students with attention-deficit/hyperactivity disorder (ADHD), to discriminate between subtypes of ADHD. A total of 48 students with ADHD (ages 6-11) were evaluated with three "gates" of assessment. Logistic regression analysis…

  9. Approach and landing guidance design for reusable launch vehicle using multiple sliding surfaces technique

    Directory of Open Access Journals (Sweden)

    Xiangdong LIU

    2017-08-01

    Full Text Available An autonomous approach and landing (A&L) guidance law is presented in this paper for landing an unpowered reusable launch vehicle (RLV) at the designated runway touchdown. Considering the full nonlinear point-mass dynamics, a guidance scheme is developed in three-dimensional space. In order to guarantee a successful A&L movement, the multiple sliding surfaces guidance (MSSG) technique is applied to derive the closed-loop guidance law, which stems from higher-order sliding mode control theory and has the advantage of a finite-time reaching property. The global stability of the proposed guidance approach is proved by the Lyapunov-based method. The designed guidance law can generate new trajectories on-line without any specific requirement on off-line analysis except for the information on the boundary conditions of the A&L phase and instantaneous states of the RLV. Therefore, the designed guidance law is flexible enough to target different touchdown points on the runway and is capable of dealing with large initial condition errors resulting from the previous flight phase. Finally, simulation results show the effectiveness of the proposed guidance law in different scenarios.

  10. Public Transportation Hub Location with Stochastic Demand: An Improved Approach Based on Multiple Attribute Group Decision-Making

    Directory of Open Access Journals (Sweden)

    Sen Liu

    2015-01-01

    Full Text Available Urban public transportation hubs are the key nodes of the public transportation system. The location of such hubs is a combinatorial problem. Many factors can affect the decision-making of location, including both quantitative and qualitative factors; however, most current research focuses solely on either the quantitative or the qualitative factors. Little has been done to combine these two approaches. To fill this gap in the research, this paper proposes a novel approach to the public transportation hub location problem, which takes both quantitative and qualitative factors into account. In this paper, an improved multiple attribute group decision-making (MAGDM) method based on TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) and deviation is proposed to convert the qualitative factors of each hub into quantitative evaluation values. A location model with stochastic passenger flows is then established based on the above evaluation values. Finally, stochastic programming theory is applied to solve the model and to determine the location result. A numerical study shows that this approach is applicable and effective.
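
    The TOPSIS step that turns scored alternatives into a ranking is compact enough to sketch; the candidate-hub scores and weights below are invented, the deviation-based weighting and the stochastic flow model are omitted, and all criteria are assumed to be benefit criteria.

```python
import numpy as np

# Minimal TOPSIS sketch for benefit criteria with fixed weights.
X = np.array([[8.0, 7.0, 6.0],     # hypothetical candidate hubs x criteria
              [6.0, 8.0, 8.0],
              [7.0, 6.0, 9.0]])
w = np.array([0.5, 0.3, 0.2])      # assumed weights

R = X / np.linalg.norm(X, axis=0)  # vector-normalised decision matrix
V = R * w                          # weighted normalised matrix
ideal, anti = V.max(axis=0), V.min(axis=0)

d_pos = np.linalg.norm(V - ideal, axis=1)   # distance to the ideal solution
d_neg = np.linalg.norm(V - anti, axis=1)    # distance to the anti-ideal solution
closeness = d_neg / (d_pos + d_neg)
print("closeness:", np.round(closeness, 3),
      "-> best hub:", int(np.argmax(closeness)))
```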

  11. Multiple-scattering theory. New developments and applications

    Energy Technology Data Exchange (ETDEWEB)

    Ernst, Arthur

    2007-12-04

    Multiple-scattering theory (MST) is a very efficient technique for calculating the electronic properties of an assembly of atoms. It provides explicitly the Green function, which can be used in many applications such as magnetism, transport and spectroscopy. This work gives an overview on recent developments of multiple-scattering theory. One of the important innovations is the multiple scattering implementation of the self-interaction correction approach, which enables realistic electronic structure calculations of systems with localized electrons. Combined with the coherent potential approximation (CPA), this method can be applied for studying the electronic structure of alloys and as well as pseudo-alloys representing charge and spin disorder. This formalism is extended to finite temperatures which allows to investigate phase transitions and thermal fluctuations in correlated materials. Another novel development is the implementation of the self-consistent non-local CPA approach, which takes into account charge correlations around the CPA average and chemical short range order. This formalism is generalized to the relativistic treatment of magnetically ordered systems. Furthermore, several improvements are implemented to optimize the computational performance and to increase the accuracy of the KKR Green function method. The versatility of the approach is illustrated in numerous applications. (orig.)

  12. Multiple-scattering theory. New developments and applications

    International Nuclear Information System (INIS)

    Ernst, Arthur

    2007-01-01

    Multiple-scattering theory (MST) is a very efficient technique for calculating the electronic properties of an assembly of atoms. It provides explicitly the Green function, which can be used in many applications such as magnetism, transport and spectroscopy. This work gives an overview on recent developments of multiple-scattering theory. One of the important innovations is the multiple scattering implementation of the self-interaction correction approach, which enables realistic electronic structure calculations of systems with localized electrons. Combined with the coherent potential approximation (CPA), this method can be applied for studying the electronic structure of alloys and as well as pseudo-alloys representing charge and spin disorder. This formalism is extended to finite temperatures which allows to investigate phase transitions and thermal fluctuations in correlated materials. Another novel development is the implementation of the self-consistent non-local CPA approach, which takes into account charge correlations around the CPA average and chemical short range order. This formalism is generalized to the relativistic treatment of magnetically ordered systems. Furthermore, several improvements are implemented to optimize the computational performance and to increase the accuracy of the KKR Green function method. The versatility of the approach is illustrated in numerous applications. (orig.)

  13. Application of multiple timestep integration method in SSC

    International Nuclear Information System (INIS)

    Guppy, J.G.

    1979-01-01

    The thermohydraulic transient simulation of an entire LMFBR system is, by its very nature, complex. Physically, the entire plant consists of many subsystems which are coupled by various processes and/or components. The characteristic integration timesteps for these processes/components can vary over a wide range. To improve computing efficiency, a multiple timestep scheme (MTS) approach has been used in the development of the Super System Code (SSC). In this paper: (1) the partitioning of the system and the timestep control are described, and (2) results are presented showing a savings in computer running time using the MTS of as much as five times the time required using a single timestep scheme
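
    The gain from a multiple-timestep scheme comes from substepping only the stiff, fast subsystem while the slow subsystem takes large steps. The toy two-timescale system below (explicit Euler with generic subcycling) is only meant to show the mechanics, not SSC's actual plant partitioning or timestep control.

```python
# Multiple-timestep (subcycling) sketch: fast variable substepped inside slow steps.
def f_slow(y_s, y_f):   # slow subsystem, e.g. a bulk thermal mass
    return -0.1 * y_s + 0.05 * y_f

def f_fast(y_f, y_s):   # fast subsystem, e.g. a coolant channel
    return -10.0 * (y_f - y_s)

dt_slow, n_sub, t_end = 0.1, 20, 5.0      # fast step = dt_slow / n_sub
y_s, y_f = 1.0, 0.0
for _ in range(int(t_end / dt_slow)):
    y_s_frozen = y_s                      # hold the slow state over the substeps
    for _ in range(n_sub):
        y_f += (dt_slow / n_sub) * f_fast(y_f, y_s_frozen)
    y_s += dt_slow * f_slow(y_s, y_f)     # one slow step with the updated fast state

print(f"t = {t_end}: slow = {y_s:.4f}, fast = {y_f:.4f}")
```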

  14. A mesh-free approach to acoustic scattering from multiple spheres nested inside a large sphere by using diagonal translation operators.

    Science.gov (United States)

    Hesford, Andrew J; Astheimer, Jeffrey P; Greengard, Leslie F; Waag, Robert C

    2010-02-01

    A multiple-scattering approach is presented to compute the solution of the Helmholtz equation when a number of spherical scatterers are nested in the interior of an acoustically large enclosing sphere. The solution is represented in terms of partial-wave expansions, and a linear system of equations is derived to enforce continuity of pressure and normal particle velocity across all material interfaces. This approach yields high-order accuracy and avoids some of the difficulties encountered when using integral equations that apply to surfaces of arbitrary shape. Calculations are accelerated by using diagonal translation operators to compute the interactions between spheres when the operators are numerically stable. Numerical results are presented to demonstrate the accuracy and efficiency of the method.

  15. Multiple predictor smoothing methods for sensitivity analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Helton, Jon Craig; Storlie, Curtis B.

    2006-08-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.
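
    A minimal version of the first smoothing step, a LOESS fit of a model output against one sampled input, can be written with statsmodels; the "model" below is a toy function rather than the WIPP performance assessment, and the variance-explained measure is one simple way to turn the smooth into a sensitivity indicator.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

# LOESS-based sensitivity sketch: smooth model output against one sampled input.
rng = np.random.default_rng(4)
x1 = rng.uniform(0, 1, 300)                 # sampled uncertain input of interest
x2 = rng.uniform(0, 1, 300)                 # second input (acts as noise here)
y = np.sin(2 * np.pi * x1) + 0.3 * x2       # toy nonlinear model output

fit = lowess(y, x1, frac=0.3)               # returns (x, smoothed y), sorted by x
resid_var = np.var(y - np.interp(x1, fit[:, 0], fit[:, 1]))
print("fraction of output variance explained by x1:",
      round(1 - resid_var / np.var(y), 3))
```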

  16. Multiple predictor smoothing methods for sensitivity analysis

    International Nuclear Information System (INIS)

    Helton, Jon Craig; Storlie, Curtis B.

    2006-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present

  17. Correction of measured multiplicity distributions by the simulated annealing method

    International Nuclear Information System (INIS)

    Hafidouni, M.

    1993-01-01

    Simulated annealing is a method used to solve combinatorial optimization problems. It is used here for the correction of the observed multiplicity distribution from S-Pb collisions at 200 GeV/c per nucleon. (author) 11 refs., 2 figs
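
    Since the record gives no algorithmic detail beyond naming simulated annealing, here is only the generic annealing loop on a toy least-squares objective; the actual unfolding objective for the S-Pb multiplicity distributions is not reproduced.

```python
import numpy as np

# Generic simulated-annealing sketch on a toy least-squares objective.
rng = np.random.default_rng(5)
target = np.array([3.0, -1.0, 2.0])

def cost(x):                                  # stand-in for the unfolding objective
    return np.sum((x - target) ** 2)

x = np.zeros(3)
c = cost(x)
for step in range(20_000):
    T = 1.0 * 0.9997 ** step                  # geometric cooling schedule
    x_new = x + rng.normal(0, 0.1, x.size)    # random local move
    c_new = cost(x_new)
    # Metropolis criterion: always accept improvements, sometimes accept worse.
    if c_new < c or rng.random() < np.exp(-(c_new - c) / T):
        x, c = x_new, c_new

print("solution:", np.round(x, 3), "cost:", round(c, 6))
```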

  18. A multiple-scale power series method for solving nonlinear ordinary differential equations

    Directory of Open Access Journals (Sweden)

    Chein-Shan Liu

    2016-02-01

    Full Text Available The power series solution is a cheap and effective method to solve nonlinear problems, like the Duffing-van der Pol oscillator, the Volterra population model and the nonlinear boundary value problems. A novel power series method is developed by considering multiple scales $R_k$ in the power term $(t/R_k)^k$, which are derived explicitly to reduce the ill-conditioned behavior in the data interpolation. In the method a huge value times a tiny value is avoided, such that we can decrease the numerical instability, which is the main reason for the failure of the conventional power series method. The multiple scales derived from an integral can be used in the power series expansion, which provides very accurate numerical solutions of the problems considered in this paper.
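
    The numerical point, that scaled terms $(t/R_k)^k$ stay of order one while raw monomials $t^k$ explode, shows up directly in the conditioning of the collocation (Vandermonde-type) matrix. The sketch below uses a single common scale $R$ in place of the paper's per-term scales $R_k$, so it only illustrates the direction of the effect.

```python
import numpy as np

# Conditioning of the power basis with and without scaling (sketch).
t = np.linspace(0.0, 10.0, 12)          # collocation points on [0, 10]
k = np.arange(12)

V_raw = t[:, None] ** k                 # naive monomials t^k
R = t.max()                             # one common scale (simplified R_k)
V_scaled = (t[:, None] / R) ** k        # scaled terms (t/R)^k stay in [0, 1]

print("condition number, raw basis:    %.3e" % np.linalg.cond(V_raw))
print("condition number, scaled basis: %.3e" % np.linalg.cond(V_scaled))
```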

  19. Bootstrap inference when using multiple imputation.

    Science.gov (United States)

    Schomaker, Michael; Heumann, Christian

    2018-04-16

    Many modern estimators require bootstrapping to calculate confidence intervals because either no analytic standard error is available or the distribution of the parameter of interest is nonsymmetric. It remains however unclear how to obtain valid bootstrap inference when dealing with multiple imputation to address missing data. We present 4 methods that are intuitively appealing, easy to implement, and combine bootstrap estimation with multiple imputation. We show that 3 of the 4 approaches yield valid inference, but that the performance of the methods varies with respect to the number of imputed data sets and the extent of missingness. Simulation studies reveal the behavior of our approaches in finite samples. A topical analysis from HIV treatment research, which determines the optimal timing of antiretroviral treatment initiation in young children, demonstrates the practical implications of the 4 methods in a sophisticated and realistic setting. This analysis suffers from missing data and uses the g-formula for inference, a method for which no standard errors are available. Copyright © 2018 John Wiley & Sons, Ltd.
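
    One of the intuitive combinations the authors study, resample first, then impute within each bootstrap sample and pool, can be sketched as follows. For brevity a crude normal-draw imputation stands in for a proper multiple-imputation model, and the estimand is just a mean, so this shows the bookkeeping rather than a recommended analysis.

```python
import numpy as np

# "Bootstrap, then impute" sketch for the mean of a variable with missing values.
rng = np.random.default_rng(6)
y = rng.normal(10, 2, 200)
y[rng.random(200) < 0.3] = np.nan          # ~30% missing completely at random

def impute_then_mean(sample, m=5):
    obs = sample[~np.isnan(sample)]
    means = []
    for _ in range(m):                      # m crude stochastic imputations
        filled = sample.copy()
        n_miss = np.isnan(filled).sum()
        filled[np.isnan(filled)] = rng.normal(obs.mean(), obs.std(), n_miss)
        means.append(filled.mean())
    return np.mean(means)                   # pool the m point estimates

boot = [impute_then_mean(rng.choice(y, size=y.size, replace=True))
        for _ in range(500)]                # bootstrap replicates
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"bootstrap-MI 95% CI for the mean: ({lo:.2f}, {hi:.2f})")
```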

  20. Generalized internal multiple imaging

    KAUST Repository

    Zuberi, M. A. H.

    2014-08-05

    Internal multiples deteriorate the image when the imaging procedure assumes only single scattering, especially if the velocity model does not have sharp contrasts to reproduce such scattering in the Green’s function through forward modeling. If properly imaged, internal multiples (internally scattered energy) can enhance the seismic image. Conventionally, to image internal multiples, accurate, sharp contrasts in the velocity model are required to construct a Green’s function with all the scattered energy. As an alternative, we have developed a generalized internal multiple imaging procedure that images any order internal scattering using the background Green’s function (from the surface to each image point), constructed from a smooth velocity model, usually used for conventional imaging. For the first-order internal multiples, the approach consisted of three steps, in which we first back propagated the recorded surface seismic data using the background Green’s function, then crosscorrelated the back-propagated data with the recorded data, and finally crosscorrelated the result with the original background Green’s function. This procedure images the contribution of the recorded first-order internal multiples, and it is almost free of the single-scattering recorded energy. The cost includes one additional crosscorrelation over the conventional single-scattering imaging application. We generalized this method to image internal multiples of any order separately. The resulting images can be added to the conventional single-scattering image, obtained, e.g., from Kirchhoff or reverse-time migration, to enhance the image. Application to synthetic data with reflectors illuminated by multiple scattering (double scattering) demonstrated the effectiveness of the approach.

  1. Analysis and performance estimation of the Conjugate Gradient method on multiple GPUs

    NARCIS (Netherlands)

    Verschoor, M.; Jalba, A.C.

    2012-01-01

    The Conjugate Gradient (CG) method is a widely-used iterative method for solving linear systems described by a (sparse) matrix. The method requires a large amount of Sparse-Matrix Vector (SpMV) multiplications, vector reductions and other vector operations to be performed. We present a number of
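
    For reference, the serial CG loop that such GPU implementations parallelise is short: one matrix-vector product, two dot-product reductions and a few vector updates per iteration. The sketch below uses a dense NumPy matrix where a GPU implementation would use an SpMV kernel.

```python
import numpy as np

# Plain conjugate gradient sketch for a symmetric positive definite system.
def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p                   # the (Sp)MV product to parallelise
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r               # vector reduction
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

n = 100
M = np.random.default_rng(7).standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # make the matrix SPD
b = np.ones(n)
x = conjugate_gradient(A, b)
print("residual norm:", np.linalg.norm(b - A @ x))
```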

  2. Multiple Contexts, Multiple Methods: A Study of Academic and Cultural Identity among Children of Immigrant Parents

    Science.gov (United States)

    Urdan, Tim; Munoz, Chantico

    2012-01-01

    Multiple methods were used to examine the academic motivation and cultural identity of a sample of college undergraduates. The children of immigrant parents (CIPs, n = 52) and the children of non-immigrant parents (non-CIPs, n = 42) completed surveys assessing core cultural identity, valuing of cultural accomplishments, academic self-concept,…

  3. Field evaluation of personal sampling methods for multiple bioaerosols.

    Science.gov (United States)

    Wang, Chi-Hsun; Chen, Bean T; Han, Bor-Cheng; Liu, Andrew Chi-Yeu; Hung, Po-Chen; Chen, Chih-Yong; Chao, Hsing Jasmine

    2015-01-01

    Ambient bioaerosols are ubiquitous in the daily environment and can affect health in various ways. However, few studies have been conducted to comprehensively evaluate personal bioaerosol exposure in occupational and indoor environments because of the complex composition of bioaerosols and the lack of standardized sampling/analysis methods. We conducted a study to determine the most efficient collection/analysis method for the personal exposure assessment of multiple bioaerosols. The sampling efficiencies of three filters and four samplers were compared. According to our results, polycarbonate (PC) filters had the highest relative efficiency, particularly for bacteria. Side-by-side sampling was conducted to evaluate the three filter samplers (with PC filters) and the NIOSH Personal Bioaerosol Cyclone Sampler. According to the results, the Button Aerosol Sampler and the IOM Inhalable Dust Sampler had the highest relative efficiencies for fungi and bacteria, followed by the NIOSH sampler. Personal sampling was performed in a pig farm to assess occupational bioaerosol exposure and to evaluate the sampling/analysis methods. The Button and IOM samplers yielded a similar performance for personal bioaerosol sampling at the pig farm. However, the Button sampler is more likely to be clogged at high airborne dust concentrations because of its higher flow rate (4 L/min). Therefore, the IOM sampler is a more appropriate choice for performing personal sampling in environments with high dust levels. In summary, the Button and IOM samplers with PC filters are efficient sampling/analysis methods for the personal exposure assessment of multiple bioaerosols.

  4. Hybrid approaches for multiple-species stochastic reaction–diffusion models

    International Nuclear Information System (INIS)

    Spill, Fabian; Guerrero, Pilar; Alarcon, Tomas; Maini, Philip K.; Byrne, Helen

    2015-01-01

    Reaction–diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction–diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model. - Highlights: • A novel hybrid stochastic/deterministic reaction–diffusion simulation method is given. • Can massively speed up stochastic simulations while preserving stochastic effects. • Can handle multiple reacting species. • Can handle moving boundaries

  5. Hybrid approaches for multiple-species stochastic reaction–diffusion models

    Energy Technology Data Exchange (ETDEWEB)

    Spill, Fabian, E-mail: fspill@bu.edu [Department of Biomedical Engineering, Boston University, 44 Cummington Street, Boston, MA 02215 (United States); Department of Mechanical Engineering, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139 (United States); Guerrero, Pilar [Department of Mathematics, University College London, Gower Street, London WC1E 6BT (United Kingdom); Alarcon, Tomas [Centre de Recerca Matematica, Campus de Bellaterra, Edifici C, 08193 Bellaterra (Barcelona) (Spain); Departament de Matemàtiques, Universitat Atonòma de Barcelona, 08193 Bellaterra (Barcelona) (Spain); Maini, Philip K. [Wolfson Centre for Mathematical Biology, Mathematical Institute, University of Oxford, Oxford OX2 6GG (United Kingdom); Byrne, Helen [Wolfson Centre for Mathematical Biology, Mathematical Institute, University of Oxford, Oxford OX2 6GG (United Kingdom); Computational Biology Group, Department of Computer Science, University of Oxford, Oxford OX1 3QD (United Kingdom)

    2015-10-15

    Reaction–diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction–diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model. - Highlights: • A novel hybrid stochastic/deterministic reaction–diffusion simulation method is given. • Can massively speed up stochastic simulations while preserving stochastic effects. • Can handle multiple reacting species. • Can handle moving boundaries.

  6. The initial rise method extended to multiple trapping levels in thermoluminescent materials

    Energy Technology Data Exchange (ETDEWEB)

    Furetta, C. [CICATA-Legaria, Instituto Politecnico Nacional, 11500 Mexico D.F. (Mexico); Guzman, S. [Instituto de Ciencias Nucleares, Universidad Nacional Autonoma de Mexico, A.P. 70-543, 04510 Mexico D.F. (Mexico); Ruiz, B. [Instituto de Ciencias Nucleares, Universidad Nacional Autonoma de Mexico, A.P. 70-543, 04510 Mexico D.F. (Mexico); Departamento de Agricultura y Ganaderia, Universidad de Sonora, A.P. 305, 83190 Hermosillo, Sonora (Mexico); Cruz-Zaragoza, E., E-mail: ecruz@nucleares.unam.m [Instituto de Ciencias Nucleares, Universidad Nacional Autonoma de Mexico, A.P. 70-543, 04510 Mexico D.F. (Mexico)

    2011-02-15

    The well known Initial Rise Method (IR) is commonly used to determine the activation energy when only one glow peak is present and analysed in the phosphor materials. However, when the glow curve is more complex, a wide peak with some shoulders appears in the structure, multiple trapping levels have to be considered, and the straightforward application of the Initial Rise Method is no longer valid, so the thermoluminescent analysis becomes difficult to perform. This paper takes a complex glow curve structure as an example and shows that the calculation is still possible using the IR method. The aim of the paper is to extend the well known Initial Rise Method (IR) to the case of multiple trapping levels. The IR method is applied to minerals extracted from Nopal cactus and Oregano spices because the thermoluminescent glow curve's shape suggests a trap distribution instead of a single trapping level.
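
    The IR calculation itself is a one-line fit: on the low-temperature rising edge the intensity follows I(T) ≈ C·exp(−E/kT), so ln I versus 1/T is a straight line of slope −E/k. A sketch with synthetic single-trap data (not the Nopal or Oregano measurements):

```python
import numpy as np

# Initial Rise sketch: activation energy from the rising edge of a glow peak.
k_B = 8.617e-5                      # Boltzmann constant, eV/K
E_true = 1.0                        # assumed trap depth, eV

T = np.linspace(300.0, 330.0, 30)   # low-temperature rising edge only (K)
I = 1e12 * np.exp(-E_true / (k_B * T))                      # I(T) ~ C * exp(-E/kT)
I *= 1 + np.random.default_rng(8).normal(0, 0.01, T.size)   # 1% noise

slope, _ = np.polyfit(1.0 / T, np.log(I), 1)    # ln I = ln C - (E/k_B)(1/T)
print(f"estimated E = {-slope * k_B:.3f} eV (true {E_true} eV)")
```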

  7. The initial rise method extended to multiple trapping levels in thermoluminescent materials

    International Nuclear Information System (INIS)

    Furetta, C.; Guzman, S.; Ruiz, B.; Cruz-Zaragoza, E.

    2011-01-01

    The well known Initial Rise Method (IR) is commonly used to determine the activation energy when only one glow peak is present and analysed in the phosphor materials. However, when the glow curve is more complex, a wide peak with some shoulders appears in the structure, multiple trapping levels have to be considered, and the straightforward application of the Initial Rise Method is no longer valid, so the thermoluminescent analysis becomes difficult to perform. This paper takes a complex glow curve structure as an example and shows that the calculation is still possible using the IR method. The aim of the paper is to extend the well known Initial Rise Method (IR) to the case of multiple trapping levels. The IR method is applied to minerals extracted from Nopal cactus and Oregano spices because the thermoluminescent glow curve's shape suggests a trap distribution instead of a single trapping level.

  8. Participant perceptions of a novel physiotherapy approach ("Blue Prescription") for increasing levels of physical activity in people with multiple sclerosis: a qualitative study following intervention.

    Science.gov (United States)

    Smith, Catherine M; Hale, Leigh A; Mulligan, Hilda F; Treharne, Gareth J

    2013-07-01

    The aim of this study was to investigate experiences of participating in a feasibility trial of a novel physiotherapy intervention (Blue Prescription). The trial was designed to increase participation in physical activity for people with multiple sclerosis living in the community. We individually interviewed 27 volunteers from two New Zealand metropolitan areas at the conclusion of their participation in Blue Prescription. We asked volunteers about what participation in Blue Prescription had meant to them; how participants intended to continue with their physical activity; how the approach differed from previous experiences of physiotherapy encounters; and how Blue Prescription could be improved. Interviews were semi-structured, audio-recorded, transcribed verbatim, and analysed using a General Inductive Approach. 'Support' was identified as a key theme with three sub-themes: 'The therapeutic relationship'; 'The Blue Prescription approach'; and 'Supporting themselves'. We identified two additional themes 'Motivation to participate' and 'Improving the Blue Prescription approach'. A novel approach (Blue Prescription) which facilitates engagement in higher levels of desirable physical activity was perceived by participants to be supportive, motivating and enabling. This approach might be particularly useful for people with multiple sclerosis ready to adopt new health-related behaviours. For future studies, this approach requires further refinement, particularly with regards to methods of communication and evaluation.

  9. Hierarchical screening for multiple mental disorders.

    Science.gov (United States)

    Batterham, Philip J; Calear, Alison L; Sunderland, Matthew; Carragher, Natacha; Christensen, Helen; Mackinnon, Andrew J

    2013-10-01

    There is a need for brief, accurate screening when assessing multiple mental disorders. Two-stage hierarchical screening, consisting of brief pre-screening followed by a battery of disorder-specific scales for those who meet diagnostic criteria, may increase the efficiency of screening without sacrificing precision. This study tested whether more efficient screening could be gained using two-stage hierarchical screening than by administering multiple separate tests. Two Australian adult samples (N=1990) with high rates of psychopathology were recruited using Facebook advertising to examine four methods of hierarchical screening for four mental disorders: major depressive disorder, generalised anxiety disorder, panic disorder and social phobia. Using K6 scores to determine whether full screening was required did not increase screening efficiency. However, pre-screening based on two decision tree approaches or item gating led to considerable reductions in the mean number of items presented per disorder screened, with estimated item reductions of up to 54%. The sensitivity of these hierarchical methods approached 100% relative to the full screening battery. Further testing of the hierarchical screening approach based on clinical criteria and in other samples is warranted. The results demonstrate that a two-phase hierarchical approach to screening multiple mental disorders leads to considerable efficiency gains without reducing accuracy. Screening programs should take advantage of pre-screeners based on gating items or decision trees to reduce the burden on respondents. © 2013 Elsevier B.V. All rights reserved.
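    A small simulation sketch of item-gated two-stage screening, with invented item counts and cut scores rather than the instruments used in the study: a brief gate decides who receives the full scale, and efficiency and sensitivity are judged against the full battery.

```python
import numpy as np

rng = np.random.default_rng(1)

n, n_items, gate_k = 2000, 10, 2      # respondents, scale length, gate items
cutoff, gate_cut = 12, 3              # full-scale and pre-screener cut scores

severity = rng.normal(size=n)                          # latent severity
prob = 1.0 / (1.0 + np.exp(-severity))                 # endorsement probability
items = rng.binomial(3, prob[:, None], (n, n_items))   # ordinal 0-3 responses

full_total = items.sum(axis=1)
positive = full_total >= cutoff                    # caseness from the full battery

gate_total = items[:, :gate_k].sum(axis=1)         # brief pre-screener score
gated = gate_total >= gate_cut                     # only these get the full scale

mean_items = np.where(gated, n_items, gate_k).mean()
sensitivity = (positive & gated).sum() / positive.sum()
print(f"mean items administered: {mean_items:.1f} of {n_items}")
print(f"sensitivity vs. the full battery: {sensitivity:.3f}")
```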

  10. Multiple regression and beyond an introduction to multiple regression and structural equation modeling

    CERN Document Server

    Keith, Timothy Z

    2014-01-01

    Multiple Regression and Beyond offers a conceptually oriented introduction to multiple regression (MR) analysis and structural equation modeling (SEM), along with analyses that flow naturally from those methods. By focusing on the concepts and purposes of MR and related methods, rather than the derivation and calculation of formulae, this book introduces material to students more clearly, and in a less threatening way. In addition to illuminating content necessary for coursework, the accessibility of this approach means students are more likely to be able to conduct research using MR or SEM--and more likely to use the methods wisely. The book covers both MR and SEM, while explaining their relevance to one another; it also includes path analysis, confirmatory factor analysis, and latent growth modeling. Figures and tables throughout provide examples and illustrate key concepts and techniques. For additional resources, please visit: http://tzkeith.com/.

  11. An Approach for Predicting Essential Genes Using Multiple Homology Mapping and Machine Learning Algorithms.

    Science.gov (United States)

    Hua, Hong-Li; Zhang, Fa-Zhan; Labena, Abraham Alemayehu; Dong, Chuan; Jin, Yan-Ting; Guo, Feng-Biao

    Investigation of essential genes is significant for comprehending the minimal gene sets of cells and for discovering potential drug targets. In this study, a novel approach based on multiple homology mapping and machine learning methods was introduced to predict essential genes. We focused on 25 bacteria which have characterized essential genes. The predictions yielded the highest area under the receiver operating characteristic (ROC) curve (AUC) of 0.9716 in a tenfold cross-validation test. Proper features were utilized to construct models to make predictions in distantly related bacteria. The accuracy of predictions was evaluated via the consistency between the predictions and the known essential genes of target species. The highest AUC of 0.9552 and an average AUC of 0.8314 were achieved when making predictions across organisms. An independent dataset from Synechococcus elongatus, which was released recently, was obtained for further assessment of the performance of our model. The AUC score of these predictions is 0.7855, which is higher than that of other methods. This research shows that features obtained by homology mapping alone can achieve results as good as, or even better than, integrated features. Meanwhile, the work indicates that a machine learning-based method can assign more efficient weight coefficients than empirical formulae based on biological knowledge.
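    A generic sketch of the evaluation step, with synthetic stand-ins for the homology-mapping features (the study's actual features and 25-organism data are not reproduced here): a classifier is scored by ten-fold cross-validated AUC.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(7)

# placeholder feature matrix: rows = genes, columns = homology scores against
# reference organisms (all names and values here are synthetic stand-ins)
n_genes, n_refs = 1000, 25
essential = rng.random(n_genes) < 0.2
X = rng.normal(essential[:, None] * 1.0, 1.0, (n_genes, n_refs))
y = essential.astype(int)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
auc = cross_val_score(RandomForestClassifier(n_estimators=200, random_state=0),
                      X, y, cv=cv, scoring="roc_auc")
print(f"ten-fold AUC: {auc.mean():.3f} +/- {auc.std():.3f}")
```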

  12. SuperTRI: A new approach based on branch support analyses of multiple independent data sets for assessing reliability of phylogenetic inferences.

    Science.gov (United States)

    Ropiquet, Anne; Li, Blaise; Hassanin, Alexandre

    2009-09-01

    Supermatrix and supertree are two methods for constructing a phylogenetic tree from multiple data sets. However, these methods are not a panacea, as conflicting signals between data sets can lead to misinterpretation of the evolutionary history of taxa. In particular, the supermatrix approach is expected to be misleading if the species-tree signal is not dominant after the combination of the data sets. Moreover, most current supertree methods suffer from two limitations: (i) they ignore or misinterpret secondary (non-dominant) phylogenetic signals of the different data sets; and (ii) the logical basis of node robustness measures is unclear. To overcome these limitations, we propose a new approach, called SuperTRI, which is based on the branch support analyses of the independent data sets, and in which the reliability of the nodes is assessed using three measures: the supertree Bootstrap percentage and two other values calculated from the separate analyses: the mean branch support (mean Bootstrap percentage or mean posterior probability) and the reproducibility index. The SuperTRI approach is tested on a data matrix including seven genes for 82 taxa of the family Bovidae (Mammalia, Ruminantia), and the results are compared to those found with the supermatrix approach. The phylogenetic analyses of the supermatrix and independent data sets were done using four methods of tree reconstruction: Bayesian inference, maximum likelihood, and unweighted and weighted maximum parsimony. The results indicate, firstly, that the SuperTRI approach is less sensitive than the supermatrix approach to the choice among the four phylogenetic methods; secondly, that it interprets the relationships among taxa more accurately; and thirdly, that interesting conclusions on introgression and radiation can be drawn from the comparisons between the SuperTRI and supermatrix analyses.

  13. Optimized simultaneous inversion of primary and multiple reflections; Inversion linearisee simultanee des reflexions primaires et des reflexions multiples

    Energy Technology Data Exchange (ETDEWEB)

    Pelle, L.

    2003-12-01

    The removal of multiple reflections remains a real problem in seismic imaging. Many preprocessing methods have been developed to attenuate multiples in seismic data, but none of them is satisfactory in 3D. The objective of this thesis is to develop a new method for removing multiples that is extensible to 3D. Contrary to the existing methods, our approach is not a preprocessing step: we include the multiple removal directly in the imaging process by means of a simultaneous inversion of primaries and multiples, improving the standard linearized inversion so as to make it insensitive to the presence of multiples in the data. We exploit the kinematic differences between primaries and multiples, and propose to pick in the data the kinematics of the multiples we want to remove. The wave field is decomposed into primaries and multiples. Primaries are modeled by the Ray+Born operator from perturbations of the logarithm of impedance, given the velocity field. Multiples are modeled by the Transport operator from an initial trace, given the picking. The inverse problem simultaneously fits primaries and multiples to the data. To solve this problem with two unknowns, we take advantage of the isometric nature of the Transport operator, which drastically reduces the CPU time: the simultaneous inversion is thus almost as fast as the standard linearized inversion. This gain in time opens the way to different applications of multiple removal and, in particular, makes the straightforward 3D extension foreseeable. (author)

  14. Multiple Model-Based Synchronization Approaches for Time Delayed Slaving Data in a Space Launch Vehicle Tracking System

    Directory of Open Access Journals (Sweden)

    Haryong Song

    2016-01-01

    Due to the inherent characteristics of the flight mission of a space launch vehicle (SLV), which is required to fly over very large distances and have very high fault tolerance, SLV tracking systems (TSs) generally comprise multiple heterogeneous sensors such as radars, GPS, INS, and electro-optical targeting systems installed over widespread areas. To track an SLV without interruption and to hand over the measurement coverage between TSs properly, the mission control system (MCS) transfers slaving data to each TS through mission networks. When serious network delays occur, however, the slaving data from the MCS can lead to the failure of the TS. To address this problem, in this paper, we propose multiple model-based synchronization (MMS) approaches, which take advantage of the multiple motion models of an SLV. Cubic spline extrapolation, prediction through an α-β-γ filter, and a single-model Kalman filter are presented as benchmark approaches. We demonstrate the synchronization accuracy and effectiveness of the proposed MMS approaches using Monte Carlo simulation with the nominal trajectory data of Korea Space Launch Vehicle-I.
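    One of the benchmark predictors is compact enough to sketch. Below is a plain one-dimensional alpha-beta-gamma filter with invented gains and trajectory, not the paper's implementation; in the synchronization setting, the constant-acceleration prediction step is what bridges network-delayed slaving samples.

```python
import numpy as np

def abg_filter(z, dt, alpha=0.5, beta=0.4, gamma=0.1):
    """One-dimensional alpha-beta-gamma filter: smooths a measurement stream
    with a constant-acceleration motion model (gains here are illustrative)."""
    x = v = a = 0.0
    out = []
    for zk in z:
        # predict one step ahead with the constant-acceleration model
        xp = x + v * dt + 0.5 * a * dt**2
        vp = v + a * dt
        r = zk - xp                      # innovation (measurement residual)
        x = xp + alpha * r
        v = vp + beta * r / dt
        a = a + 2.0 * gamma * r / dt**2
        out.append(x)
    return np.array(out)

# illustrative accelerating trajectory with measurement noise
t = np.arange(0.0, 10.0, 0.1)
truth = 0.5 * 9.8 * t**2
z = truth + np.random.default_rng(3).normal(0.0, 5.0, t.size)
est = abg_filter(z, dt=0.1)
print(f"rms error: {np.sqrt(np.mean((est - truth)**2)):.2f}")
```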

  15. A neutron multiplicity analysis method for uranium samples with liquid scintillators

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Hao, E-mail: zhouhao_ciae@126.com [China Institute of Atomic Energy, P.O.BOX 275-8, Beijing 102413 (China); Lin, Hongtao [Xi' an Reasearch Institute of High-tech, Xi' an, Shaanxi 710025 (China); Liu, Guorong; Li, Jinghuai; Liang, Qinglei; Zhao, Yonggang [China Institute of Atomic Energy, P.O.BOX 275-8, Beijing 102413 (China)

    2015-10-11

    A new neutron multiplicity analysis method for uranium samples with liquid scintillators is introduced. An active well-type fast neutron multiplicity counter has been built, which consists of four BC501A liquid scintillators, an n/γ discrimination module MPD-4, a multi-stop time-to-digital converter MCS6A, and two Am–Li sources. A mathematical model is built to describe the detection processes of fission neutrons. Based on this model, equations of the form R = F·P·Q·T are obtained, where F indicates the fission rate induced by the interrogation sources, P the transfer matrix determined by the multiplication process, Q the transfer matrix determined by the detection efficiency, and T the transfer matrix determined by the signal recording process and crosstalk in the counter. The unknown parameters of the item are determined from the solutions of these equations. A ²⁵²Cf source and some low-enriched uranium items have been measured, and the feasibility of the method is proven by its application to the data analysis of the experiments.

  16. Statistics of electron multiplication in a multiplier phototube; Iterative method

    International Nuclear Information System (INIS)

    Ortiz, J. F.; Grau, A.

    1985-01-01

    In the present paper an iterative method is applied to study the variation of the dynode response in a multiplier phototube. Three different situations are considered, corresponding to the following modes of electron incidence on the first dynode: incidence of exactly one electron, incidence of exactly r electrons, and incidence of an average of r electrons. The responses are given for a number of stages between 1 and 5, and for values of the multiplication factor of 2.1, 2.5, 3 and 5. We also study the variance, the skewness and the excess kurtosis for different multiplication factors. (Author) 11 refs
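    The same statistics can be reproduced by brute force with a branching-process simulation, a sketch assuming Poisson secondary-electron statistics at each dynode (the paper's iterative method computes the distribution directly rather than by sampling).

```python
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(0)

def multiplier_output(n_stages, gain, trials=200_000, first=1):
    """Simulate electron multiplication as a branching process: each electron
    arriving at a dynode releases a Poisson(gain) number of secondaries."""
    n = np.full(trials, first)
    for _ in range(n_stages):
        n = rng.poisson(gain * n)   # sum of n iid Poisson(gain) draws per trial
    return n

for g in (2.1, 2.5, 3.0, 5.0):
    out = multiplier_output(n_stages=5, gain=g)
    print(f"gain {g}: mean {out.mean():.1f}, var {out.var():.1f}, "
          f"skew {skew(out):.3f}, excess kurtosis {kurtosis(out):.3f}")
```

    Setting `first=r` covers incidence of exactly r electrons; averaging over a Poisson-distributed `first` would cover the mean-r case.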

  17. Maternal Smoking During Pregnancy and Offspring Birth Weight: A Genetically-Informed Approach Comparing Multiple Raters

    Science.gov (United States)

    Knopik, Valerie S.; Marceau, Kristine; Palmer, Rohan H. C.; Smith, Taylor F.; Heath, Andrew C.

    2016-01-01

    Maternal smoking during pregnancy (SDP) is a significant public health concern with adverse consequences to the health and well-being of the fetus. There is considerable debate about the best method of assessing SDP, including birth/medical records, timeline follow-back approaches, multiple reporters, and biological verification (e.g., cotinine). This is particularly salient for genetically-informed approaches where it is not always possible or practical to do a prospective study starting during the prenatal period when concurrent biological specimen samples can be collected with ease. In a sample of families (N = 173) specifically selected for sibling pairs discordant for prenatal smoking exposure, we: (1) compare rates of agreement across different types of report—maternal report of SDP, paternal report of maternal SDP, and SDP contained on birth records from the Department of Vital Statistics; (2) examine whether SDP is predictive of birth weight outcomes using our best SDP report as identified via step (1); and (3) use a sibling-comparison approach that controls for genetic and familial influences that siblings share in order to assess the effects of SDP on birth weight. Results show high agreement between reporters and support the utility of retrospective report of SDP. Further, we replicate a causal association between SDP and birth weight, wherein SDP results in reduced birth weight even when accounting for genetic and familial confounding factors via a sibling comparison approach. PMID:26494459

  18. Review of life-cycle approaches coupled with data envelopment analysis: launching the CFP + DEA method for energy policy making.

    Science.gov (United States)

    Vázquez-Rowe, Ian; Iribarren, Diego

    2015-01-01

    Life-cycle (LC) approaches play a significant role in energy policy making to determine the environmental impacts associated with the choice of energy source. Data envelopment analysis (DEA) can be combined with LC approaches to provide quantitative benchmarks that orientate the performance of energy systems towards environmental sustainability, with different implications depending on the selected LC + DEA method. The present paper examines currently available LC + DEA methods and develops a novel method combining carbon footprinting (CFP) and DEA. Thus, the CFP + DEA method is proposed, a five-step structure including data collection for multiple homogenous entities, calculation of target operating points, evaluation of current and target carbon footprints, and result interpretation. As the current context for energy policy implies an anthropocentric perspective with focus on the global warming impact of energy systems, the CFP + DEA method is foreseen to be the most consistent LC + DEA approach to provide benchmarks for energy policy making. The fact that this method relies on the definition of operating points with optimised resource intensity helps to moderate the concerns about the omission of other environmental impacts. Moreover, the CFP + DEA method benefits from CFP specifications in terms of flexibility, understanding, and reporting.
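    To make the DEA side concrete, here is a minimal input-oriented CCR model solved as a linear program, a generic textbook formulation rather than the paper's exact specification. In a CFP + DEA setting, each decision-making unit (DMU) would be an energy system, its input the carbon footprint, and its outputs the services delivered; all numbers below are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR efficiency for each DMU.
    X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs)."""
    n = X.shape[0]
    scores = []
    for o in range(n):
        # variables: [theta, lambda_1..lambda_n]; minimise theta
        c = np.r_[1.0, np.zeros(n)]
        # inputs:  X^T lambda - theta * x_o <= 0
        A_in = np.c_[-X[o][:, None], X.T]
        b_in = np.zeros(X.shape[1])
        # outputs: -Y^T lambda <= -y_o   (i.e. Y^T lambda >= y_o)
        A_out = np.c_[np.zeros((Y.shape[1], 1)), -Y.T]
        b_out = -Y[o]
        res = linprog(c, A_ub=np.r_[A_in, A_out], b_ub=np.r_[b_in, b_out],
                      bounds=[(0, None)] * (n + 1))
        scores.append(res.fun)
    return np.array(scores)

X = np.array([[12.0], [8.0], [15.0], [6.0]])    # illustrative CFP per system
Y = np.array([[100.0], [90.0], [120.0], [70.0]])  # illustrative energy delivered
print(dea_ccr_input(X, Y))                      # 1.0 marks the efficient frontier
```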

  19. The Initial Rise Method in the case of multiple trapping levels

    International Nuclear Information System (INIS)

    Furetta, C.; Guzman, S.; Cruz Z, E.

    2009-10-01

    The aim of the paper is to extend the well-known Initial Rise Method (IR) to the case of multiple trapping levels. The IR method is applied to the minerals extracted from Nopal herb and Oregano spice because the shape of the thermoluminescent glow curves suggests a trap distribution instead of a single trapping level. (Author)

  20. The Initial Rise Method in the case of multiple trapping levels

    Energy Technology Data Exchange (ETDEWEB)

    Furetta, C. [Centro de Investigacion en Ciencia Aplicada y Tecnologia Avanzada, IPN, Av. Legaria 694, Col. Irrigacion, 11500 Mexico D. F. (Mexico); Guzman, S.; Cruz Z, E. [Instituto de Ciencias Nucleares, UNAM, A. P. 70-543, 04510 Mexico D. F. (Mexico)

    2009-10-15

    The aim of the paper is to extend the well-known Initial Rise Method (IR) to the case of multiple trapping levels. The IR method is applied to the minerals extracted from Nopal herb and Oregano spice because the shape of the thermoluminescent glow curves suggests a trap distribution instead of a single trapping level. (Author)

  1. The initial rise method extended to multiple trapping levels in thermoluminescent materials.

    Science.gov (United States)

    Furetta, C; Guzmán, S; Ruiz, B; Cruz-Zaragoza, E

    2011-02-01

    The well-known Initial Rise Method (IR) is commonly used to determine the activation energy when only a single glow peak is present and analysed in the phosphor material. When the glow curve is more complex, however, a wide peak and some shoulders appear in the structure; the Initial Rise Method is then not directly applicable, because multiple trapping levels are involved and the thermoluminescent analysis becomes difficult to perform. The aim of the paper is to extend the well-known Initial Rise Method (IR) to the case of multiple trapping levels, and a complex glow curve structure is used as an example to show that the calculation is still possible with the IR method. The IR method is applied to minerals extracted from Nopal cactus and Oregano spices because the shape of the thermoluminescent glow curve suggests a trap distribution instead of a single trapping level. Copyright © 2010 Elsevier Ltd. All rights reserved.

  2. Field theoretical approach to proton-nucleus reactions: II-Multiple-step excitation process

    International Nuclear Information System (INIS)

    Eiras, A.; Kodama, T.; Nemes, M.

    1989-01-01

    A field theoretical formulation of the multiple-step excitation process in proton-nucleus collisions within the context of a relativistic eikonal approach is presented. A closed-form expression for the double differential cross section can be obtained, whose structure is very simple and makes the physics transparent. Glauber's formulation of the same process is obtained as a limit of ours, and the necessary approximations are studied and discussed. (author)

  3. Field evaluation of personal sampling methods for multiple bioaerosols.

    Directory of Open Access Journals (Sweden)

    Chi-Hsun Wang

    Ambient bioaerosols are ubiquitous in the daily environment and can affect health in various ways. However, few studies have been conducted to comprehensively evaluate personal bioaerosol exposure in occupational and indoor environments because of the complex composition of bioaerosols and the lack of standardized sampling/analysis methods. We conducted a study to determine the most efficient collection/analysis method for the personal exposure assessment of multiple bioaerosols. The sampling efficiencies of three filters and four samplers were compared. According to our results, polycarbonate (PC) filters had the highest relative efficiency, particularly for bacteria. Side-by-side sampling was conducted to evaluate the three filter samplers (with PC filters) and the NIOSH Personal Bioaerosol Cyclone Sampler. According to the results, the Button Aerosol Sampler and the IOM Inhalable Dust Sampler had the highest relative efficiencies for fungi and bacteria, followed by the NIOSH sampler. Personal sampling was performed in a pig farm to assess occupational bioaerosol exposure and to evaluate the sampling/analysis methods. The Button and IOM samplers yielded a similar performance for personal bioaerosol sampling at the pig farm. However, the Button sampler is more likely to be clogged at high airborne dust concentrations because of its higher flow rate (4 L/min). Therefore, the IOM sampler is a more appropriate choice for performing personal sampling in environments with high dust levels. In summary, the Button and IOM samplers with PC filters are efficient sampling/analysis methods for the personal exposure assessment of multiple bioaerosols.

  4. Closed-loop surface-related multiple elimination and its application to simultaneous data reconstruction

    NARCIS (Netherlands)

    Lopez Angarita, G.A.; Verschuur, D.J.

    2015-01-01

    Surface-related multiple elimination (SRME) is one of the most commonly used methods for suppressing surface multiples. However, to obtain an accurate surface multiple estimation, dense source and receiver sampling is required. The traditional approach to this problem is performing data

  5. Measurement of subcritical multiplication by the interval distribution method

    International Nuclear Information System (INIS)

    Nelson, G.W.

    1985-01-01

    The prompt decay constant or the subcritical neutron multiplication may be determined by measuring the distribution of the time intervals between successive neutron counts. The distribution data is analyzed by least-squares fitting to a theoretical distribution function derived from a point reactor probability model. Published results of measurements with one- and two-detector systems are discussed. Data collection times are shorter, and statistical errors are smaller the nearer the system is to delayed critical. Several of the measurements indicate that a shorter data collection time and higher accuracy are possible with the interval distribution method than with the Feynman variance method

  6. A frequency domain global parameter estimation method for multiple reference frequency response measurements

    Science.gov (United States)

    Shih, C. Y.; Tsuei, Y. G.; Allemang, R. J.; Brown, D. L.

    1988-10-01

    A method of using the matrix Auto-Regressive Moving Average (ARMA) model in the Laplace domain for multiple-reference global parameter identification is presented. This method is particularly applicable to the area of modal analysis where high modal density exists. The method is also applicable when multiple reference frequency response functions are used to characterise linear systems. In order to facilitate the mathematical solution, the Forsythe orthogonal polynomial is used to reduce the ill-conditioning of the formulated equations and to decouple the normal matrix into two reduced matrix blocks. A Complex Mode Indicator Function (CMIF) is introduced, which can be used to determine the proper order of the rational polynomials.
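    The CMIF mentioned above has a compact standard definition: the singular values of the multi-reference FRF matrix at every spectral line, with peaks in the leading curve indicating modes and a second curve rising at the same frequency indicating repeated or closely spaced modes. A sketch with a synthetic two-mode FRF (all values illustrative):

```python
import numpy as np

def cmif(H):
    """Complex Mode Indicator Function: singular values of the FRF matrix at
    each spectral line; H has shape (n_freqs, n_outputs, n_references)."""
    return np.array([np.linalg.svd(Hf, compute_uv=False) for Hf in H])

# synthetic FRF with two closely spaced modes (illustrative values)
rng = np.random.default_rng(0)
w = np.linspace(1.0, 20.0, 400)
H = np.zeros((w.size, 3, 2), dtype=complex)
for wn, zeta in [(9.8, 0.01), (10.2, 0.01)]:
    residue = np.outer(rng.normal(size=3), rng.normal(size=2))  # rank one per mode
    H += residue / (wn**2 - w[:, None, None]**2 + 2j * zeta * wn * w[:, None, None])

curves = cmif(H)                       # shape (400, 2): two singular-value curves
peak = w[curves[:, 0].argmax()]
print(f"primary CMIF curve peaks near {peak:.1f} rad/s")
```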

  7. History, rare, and multiple events of mechanical unfolding of repeat proteins

    Science.gov (United States)

    Sumbul, Fidan; Marchesi, Arin; Rico, Felix

    2018-03-01

    Mechanical unfolding of proteins consisting of repeat domains is an excellent tool to obtain large statistics. Force spectroscopy experiments using atomic force microscopy on proteins presenting multiple domains have revealed that unfolding forces depend on the number of folded domains (history) and have reported intermediate states and rare events. However, the common use of unspecific attachment approaches to pull the protein of interest imposes important limitations on studying unfolding history, and may lead to discarding rare and multiple probing events due to the presence of unspecific adhesion and uncertainty about the pulling site. Site-specific methods that have recently emerged minimize this uncertainty and would be excellent tools to probe unfolding history and rare events. However, detailed characterization of these approaches is required to identify their advantages and limitations. Here, we characterize a site-specific binding approach based on the ultrastable complex dockerin/cohesin III, revealing its advantages and limitations for assessing the unfolding history and investigating rare and multiple events during the unfolding of repeated domains. We show that this approach is more robust, more reproducible, and provides larger statistics than conventional unspecific methods. We show that the method is optimal to reveal the history of unfolding from the very first domain and to detect rare events, while being more limited in assessing intermediate states. Finally, we quantify the forces required to unfold two molecules pulled in parallel, which is difficult when using unspecific approaches. The proposed method represents a step forward toward more reproducible measurements to probe protein unfolding history and opens the door to systematic probing of rare and multiple molecule unfolding mechanisms.

  8. A level set method for multiple sclerosis lesion segmentation.

    Science.gov (United States)

    Zhao, Yue; Guo, Shuxu; Luo, Min; Shi, Xue; Bilello, Michel; Zhang, Shaoxiang; Li, Chunming

    2018-06-01

    In this paper, we present a level set method for multiple sclerosis (MS) lesion segmentation from FLAIR images in the presence of intensity inhomogeneities. We use a three-phase level set formulation of segmentation and bias field estimation to segment MS lesions, the normal tissue region (including GM and WM), the CSF, and the background from FLAIR images. To save computational load, we derive a two-phase formulation from the original multi-phase level set formulation to segment the MS lesions and normal tissue regions. The derived method inherits the desirable ability of the original level set method to precisely locate object boundaries, while simultaneously performing segmentation and estimation of the bias field to deal with intensity inhomogeneity. Experimental results demonstrate the advantages of our method over other state-of-the-art methods in terms of segmentation accuracy. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Using automatic item generation to create multiple-choice test items.

    Science.gov (United States)

    Gierl, Mark J; Lai, Hollis; Turner, Simon R

    2012-08-01

    Many tests of medical knowledge, from the undergraduate level to the level of certification and licensure, contain multiple-choice items. Although these are efficient in measuring examinees' knowledge and skills across diverse content areas, multiple-choice items are time-consuming and expensive to create. Changes in student assessment brought about by new forms of computer-based testing have created the demand for large numbers of multiple-choice items. Our current approaches to item development cannot meet this demand. We present a methodology for developing multiple-choice items based on automatic item generation (AIG) concepts and procedures. We describe a three-stage approach to AIG and we illustrate this approach by generating multiple-choice items for a medical licensure test in the content area of surgery. To generate multiple-choice items, our method requires a three-stage process. Firstly, a cognitive model is created by content specialists. Secondly, item models are developed using the content from the cognitive model. Thirdly, items are generated from the item models using computer software. Using this methodology, we generated 1248 multiple-choice items from one item model. Automatic item generation is a process that involves using models to generate items using computer technology. With our method, content specialists identify and structure the content for the test items, and computer technology systematically combines the content to generate new test items. By combining these outcomes, items can be generated automatically. © Blackwell Publishing Ltd 2012.
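    The generation step itself can be as simple as slot filling over a constrained template. The sketch below is a toy illustration of that idea, with an invented stem and slot values, not the licensure content or software used in the study:

```python
from itertools import product

# a toy "item model": a stem with slots, whose admissible values would be
# constrained by the cognitive model (all content here is invented)
stem = ("A {age}-year-old patient presents with {finding} after {context}. "
        "What is the most appropriate next step?")
slots = {
    "age": ["25", "45", "70"],
    "finding": ["right lower quadrant pain", "rebound tenderness"],
    "context": ["24 hours of nausea", "a fall from standing height"],
}

# systematically combine slot values to generate item variants
items = []
for values in product(*slots.values()):
    filled = dict(zip(slots.keys(), values))
    items.append(stem.format(**filled))

print(len(items), "generated items")   # 3 * 2 * 2 = 12 variants from one model
print(items[0])
```

    Large yields such as the 1248 items reported above come from item models with many slots and many admissible values per slot, since the variant count is the product of the slot sizes.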

  10. System and method for image registration of multiple video streams

    Science.gov (United States)

    Dillavou, Marcus W.; Shum, Phillip Corey; Guthrie, Baron L.; Shenai, Mahesh B.; Deaton, Drew Steven; May, Matthew Benton

    2018-02-06

    Provided herein are methods and systems for image registration from multiple sources. A method for image registration includes rendering a common field of interest that reflects a presence of a plurality of elements, wherein at least one of the elements is a remote element located remotely from another of the elements and updating the common field of interest such that the presence of the at least one of the elements is registered relative to another of the elements.

  11. New weighting methods for phylogenetic tree reconstruction using multiple loci.

    Science.gov (United States)

    Misawa, Kazuharu; Tajima, Fumio

    2012-08-01

    Efficient determination of evolutionary distances is important for the correct reconstruction of phylogenetic trees. The performance of the pooled distance required for reconstructing a phylogenetic tree can be improved by applying large weights to appropriate distances for reconstructing phylogenetic trees and small weights to inappropriate distances. We developed two weighting methods, the modified Tajima-Takezaki method and the modified least-squares method, for reconstructing phylogenetic trees from multiple loci. By computer simulations, we found that both of the new methods were more efficient in reconstructing correct topologies than the no-weight method. Hence, we reconstructed hominoid phylogenetic trees from mitochondrial DNA using our new methods, and found that the levels of bootstrap support were significantly increased by the modified Tajima-Takezaki and by the modified least-squares method.

  12. A multiple objective test assembly approach for exposure control problems in Computerized Adaptive Testing

    Directory of Open Access Journals (Sweden)

    Theo J.H.M. Eggen

    2010-01-01

    Overexposure and underexposure of items in the bank are serious problems in operational computerized adaptive testing (CAT) systems. These exposure problems might result in item compromise, or point to a waste of investment. The exposure control problem can be viewed as a test assembly problem with multiple objectives: information in the test has to be maximized, item compromise has to be minimized, and pool usage has to be optimized. In this paper, a multiple objectives method is developed to deal with both types of exposure problems. In this method, exposure control parameters based on observed exposure rates are implemented as weights for the information in the item selection procedure. The method does not need time-consuming simulation studies, and it can be implemented conditional on ability level. The method is compared with the Sympson-Hetter method for exposure control, with the Progressive method, and with alpha-stratified testing. The results show that the method is successful in dealing with both kinds of exposure problems.

  13. Exploiting Multiple Detections for Person Re-Identification

    Directory of Open Access Journals (Sweden)

    Amran Bhuiyan

    2018-01-01

    Re-identification systems aim at recognizing the same individuals in multiple cameras, and one of the most relevant problems is that the appearance of the same individual varies across cameras due to illumination and viewpoint changes. This paper proposes the use of cumulative weighted brightness transfer functions (CWBTFs) to model these appearance variations. Different from recently proposed methods which only consider pairs of images to learn a brightness transfer function, we exploit a multiple-frame-based learning approach that leverages consecutive detections of each individual to transfer the appearance. We first present a CWBTF framework for the task of transforming appearance from one camera to another. We then present a re-identification framework where we segment the pedestrian images into meaningful parts and extract features from such parts, as well as from the whole body. Jointly, both of these frameworks contribute to modelling the appearance variations more robustly. We tested our approach on standard multi-camera surveillance datasets, showing consistent and significant improvements over existing methods on three different datasets without any additional cost. Our approach is general and can be applied to any appearance-based method.
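    The core of a brightness transfer function is cumulative histogram matching, and a cumulative weighted variant can average one such function per detection pair. A sketch with synthetic brightness samples; the weighting scheme and data are invented, so this illustrates the idea rather than the paper's exact formulation.

```python
import numpy as np

def btf(src, dst, bins=256):
    """Brightness transfer function from camera A to camera B via cumulative
    histogram matching: f = H_B^{-1}(H_A(b)). Assumes non-degenerate histograms."""
    h_a, _ = np.histogram(src, bins=bins, range=(0, 256))
    h_b, _ = np.histogram(dst, bins=bins, range=(0, 256))
    H_a = np.cumsum(h_a) / h_a.sum()
    H_b = np.cumsum(h_b) / h_b.sum()
    levels = np.arange(bins)
    return np.interp(H_a, H_b, levels)     # inverse of H_B applied to H_A

def cumulative_weighted_btf(pairs, weights):
    """Average one BTF per detection pair with weights, so that consecutive
    detections of the same person all contribute."""
    fs = np.array([btf(a, b) for a, b in pairs])
    w = np.asarray(weights, dtype=float)
    return (w[:, None] * fs).sum(axis=0) / w.sum()

# illustrative use: several detections of one person seen by two cameras
rng = np.random.default_rng(0)
pairs = [(rng.normal(100, 30, 5000).clip(0, 255),
          rng.normal(140, 25, 5000).clip(0, 255)) for _ in range(4)]
f = cumulative_weighted_btf(pairs, weights=[1, 1, 2, 2])
print(f[100])   # where brightness level 100 in camera A maps to in camera B
```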

  14. Geometric calibration method for multiple head cone beam SPECT systems

    International Nuclear Information System (INIS)

    Rizo, Ph.; Grangeat, P.; Guillemaud, R.; Sauze, R.

    1993-01-01

    A method is presented for performing geometric calibration on Single Photon Emission Computed Tomography (SPECT) cone beam systems with multiple cone beam collimators, each having its own orientation parameters. This calibration method relies on the fact that, in tomography, for each head, the relative position of the rotation axis and of the collimator does not change during the acquisition. In order to ensure the stability of the method, the parameters to be estimated are separated into intrinsic parameters, which describe the acquisition geometry, and extrinsic parameters, which describe the position of the detection system with respect to the rotation axis. (authors) 3 refs

  15. Two approaches to incorporate clinical data uncertainty into multiple criteria decision analysis for benefit-risk assessment of medicinal products.

    Science.gov (United States)

    Wen, Shihua; Zhang, Lanju; Yang, Bo

    2014-07-01

    The Problem formulation, Objectives, Alternatives, Consequences, Trade-offs, Uncertainties, Risk attitude, and Linked decisions (PrOACT-URL) framework and multiple criteria decision analysis (MCDA) have been recommended by the European Medicines Agency for structured benefit-risk assessment of medicinal products undergoing regulatory review. The objective of this article was to provide solutions for incorporating the uncertainty from clinical data into the MCDA model when evaluating the overall benefit-risk profiles of different treatment options. Two statistical approaches, the δ-method approach and the Monte-Carlo approach, were proposed to construct the confidence interval of the overall benefit-risk score from the MCDA model, as well as other probabilistic measures for comparing the benefit-risk profiles between treatment options. Both approaches can incorporate the correlation structure between the clinical parameters (criteria) in the MCDA model and are straightforward to implement. The two proposed approaches were applied to a case study evaluating the benefit-risk profile of an add-on therapy for rheumatoid arthritis (drug X) relative to placebo, demonstrating a straightforward way to quantify the impact of the uncertainty from clinical data on the benefit-risk assessment and enabling statistical inference when evaluating the overall benefit-risk profiles of different treatment options. The δ-method approach provides a closed form for quantifying the variability of the overall benefit-risk score in the MCDA model, whereas the Monte-Carlo approach is more computationally intensive but can yield the true sampling distribution for statistical inference. The confidence intervals and other probabilistic measures obtained from the two approaches enhance the benefit-risk decision making for medicinal products. Copyright © 2014 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
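    For a linear (weighted-sum) MCDA score S = w'u, the two approaches are easy to sketch side by side: the δ-method gives var(S) = w'Σw exactly, while Monte Carlo propagates the joint sampling distribution of the criteria. All weights, utilities, and covariances below are invented for illustration.

```python
import numpy as np

w = np.array([0.4, 0.3, 0.3])                 # criterion weights (sum to 1)
u = np.array([0.62, 0.55, 0.71])              # estimated utilities, drug vs placebo
Sigma = np.array([[0.0040, 0.0010, 0.0005],
                  [0.0010, 0.0036, 0.0008],
                  [0.0005, 0.0008, 0.0049]])  # correlated clinical uncertainty

# delta method: for a linear score the variance is exact, var(S) = w' Sigma w
s = w @ u
se = np.sqrt(w @ Sigma @ w)
print(f"delta method: {s:.3f} +/- {1.96 * se:.3f}")

# Monte Carlo: propagate the joint sampling distribution of the criteria
rng = np.random.default_rng(0)
draws = rng.multivariate_normal(u, Sigma, size=100_000) @ w
lo, hi = np.percentile(draws, [2.5, 97.5])
print(f"Monte Carlo:  {draws.mean():.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```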

  16. Comparison of multiple-criteria decision-making methods - results of simulation study

    Directory of Open Access Journals (Sweden)

    Michał Adamczak

    2016-12-01

    Background: Today, both researchers and practitioners have many methods for supporting the decision-making process. Due to the conditions in which supply chains function, the most interesting are multi-criteria methods. The use of sophisticated methods for supporting decisions requires parameterization and the execution of calculations that are often complex. So is it efficient to use sophisticated methods? Methods: The authors of the publication compared two popular multi-criteria decision-making methods: the Weighted Sum Model (WSM) and the Analytic Hierarchy Process (AHP). A simulation study reflects these two decision-making methods. Input data for this study was a set of criteria weights and the value of each alternative in terms of each criterion. Results: The iGrafx Process for Six Sigma simulation software recreated how both multiple-criteria decision-making methods (WSM and AHP) function. The result of the simulation was a numerical value defining the preference of each of the alternatives according to the WSM and AHP methods. The alternative producing a result of higher numerical value was considered preferred, according to the selected method. In the analysis of the results, the relationship between the values of the parameters and the difference in the results produced by both methods was investigated. Statistical methods, including hypothesis testing, were used for this purpose. Conclusions: The simulation study findings prove that the results obtained with the two multiple-criteria decision-making methods are very similar. Differences occurred more frequently for lower-value parameters from the "value of each alternative" group and higher-value parameters from the "weight of criteria" group.
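    The two aggregation rules being compared are short enough to state in code. A sketch with an invented decision matrix; note that a full AHP would also derive the weights and priorities from pairwise comparison matrices, whereas only the synthesis step is shown here.

```python
import numpy as np

# decision matrix: rows = alternatives, columns = criteria (illustrative values)
V = np.array([[7.0, 9.0, 9.0],
              [8.0, 7.0, 8.0],
              [9.0, 6.0, 8.0]])
w = np.array([0.5, 0.3, 0.2])                 # criteria weights

# WSM: weighted sum of the raw criterion values
wsm = V @ w

# AHP-style synthesis: normalise each criterion column so priorities sum to 1,
# then aggregate with the same weights
ahp = (V / V.sum(axis=0)) @ w

print("WSM scores:", np.round(wsm, 3), "-> preferred:", wsm.argmax())
print("AHP scores:", np.round(ahp, 3), "-> preferred:", ahp.argmax())
```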

  17. Method for Multiple Targets Tracking in Cognitive Radar Based on Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Yang Jun

    2016-02-01

    A multiple-target cognitive radar tracking method based on Compressed Sensing (CS) is proposed. In this method, the theory of CS is introduced to the cognitive radar tracking process in a multiple-target scenario. The echo signal is expressed sparsely; the sparse matrix and the measurement matrix are designed accordingly, and the reconstruction of the measurement signal under the down-sampling condition is then realized. On the receiving end, considering that the traditional particle filter suffers from degeneracy and requires a large number of particles, the particle swarm optimization particle filter is used to track the targets. On the transmitting end, the Posterior Cramér-Rao Bound (PCRB) on the tracking accuracy is deduced, and the radar waveform parameters are further cognitively designed using the PCRB. Simulation results show that the proposed method can not only reduce the data quantity, but also provide better tracking performance compared with the traditional method.
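    As a generic stand-in for the CS reconstruction step (the abstract does not specify the paper's solver), here is orthogonal matching pursuit recovering a sparse echo from down-sampled measurements; all sizes are illustrative.

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: recover a k-sparse vector x from
    compressed measurements y = Phi @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))   # most correlated atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

# illustrative down-sampled acquisition of a sparse echo signal
rng = np.random.default_rng(0)
n, m, k = 256, 64, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
Phi = rng.normal(size=(m, n)) / np.sqrt(m)             # measurement matrix
x_hat = omp(Phi, Phi @ x_true, k)
print(f"reconstruction error: {np.linalg.norm(x_hat - x_true):.2e}")
```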

  18. Methodological issues underlying multiple decrement life table analysis.

    Science.gov (United States)

    Mode, C J; Avery, R C; Littman, G S; Potter, R G

    1977-02-01

    In this paper, the actuarial method of multiple decrement life table analysis of censored, longitudinal data is examined. The discussion is organized in terms of the first segment of usage of an intrauterine device. Weaknesses of the actuarial approach are pointed out, and an alternative approach, based on the classical model of competing risks, is proposed. Finally, the actuarial and the alternative method of analyzing censored data are compared, using data from the Taichung Medical Study on Intrauterine Devices.

  19. The reconstruction of late Holocene environmental change at Redhead Lagoon, NSW, using a multiple-method approach

    International Nuclear Information System (INIS)

    Franklin, N.; Gale, S.

    1999-01-01

    However, since the 1960s, urbanisation has also contributed to accelerated sedimentation and urban pollution within the basin. The sedimentary record also illustrates dramatic and sudden changes in sediment chemistry. In particular, atmospheric pollution from industrial activities has affected lake sediment quality. Increases in heavy metal trace elements such as lead, zinc, arsenic, nickel and copper have been attributed to fallout of atmospheric particulate matter from the nearby smelter at Cockle Creek and the coal-fired power stations around Lake Macquarie. This study shows that a multiple-method approach is capable of yielding important insights into the history of environmental conditions within a single catchment. A combination of analyses together with documented records of land use changes can improve the reliability of the dates obtained by the more established chronological techniques.

  20. Sampling plans in attribute mode with multiple levels of precision

    International Nuclear Information System (INIS)

    Franklin, M.

    1986-01-01

    This paper describes a method for deriving sampling plans for nuclear material inventory verification. The method presented is different from the classical approach which envisages two levels of measurement precision corresponding to NDA and DA. In the classical approach the precisions of the two measurement methods are taken as fixed parameters. The new approach is based on multiple levels of measurement precision. The design of the sampling plan consists of choosing the number of measurement levels, the measurement precision to be used at each level and the sample size to be used at each level

  1. Multiple Intelligences within the Cross-Curricular Approach

    Directory of Open Access Journals (Sweden)

    Anthoula Vaiou

    2010-02-01

    The present study was conducted in a Greek 6th-grade State Primary School class and was based on Howard Gardner's theory of multiple intelligences, first introduced in 1983. In particular, the extent to which the young learners possess multiple intelligences was explored through a specially-designed questionnaire and a series of interviews. The findings served as the basis for constructing a project work built on students' learning preferences within a cross-curricular framework, easily applicable to the Greek State School curriculum. All learners were encouraged to participate within a school environment that traditionally promotes linguistic and mathematical skills, by matching dominant multiple intelligences, or a combination of some of them, to thematic units already taught by Greek teachers. The suggested project was assessed through observation and student portfolios, showing that the young learners' multiple intelligences were exploited to a great extent, promoting the learning process satisfactorily. The results of this study can contribute to the literature on multiple intelligences in the Greek context and suggest a need for further consideration and exploration in the field. Finally, the researcher hopes the present work can function as a springboard for more elaborate studies in the future.

  2. Weighted least-square approach for simultaneous measurement of multiple reflective surfaces

    Science.gov (United States)

    Tang, Shouhong; Bills, Richard E.; Freischlad, Klaus

    2007-09-01

    Phase-shifting interferometry (PSI) is a highly accurate method for measuring the nanometer-scale relative surface height of a semi-reflective test surface. PSI is effectively used in conjunction with Fizeau interferometers for optical testing, hard disk inspection, and semiconductor wafer flatness measurement. However, commonly used PSI algorithms are unable to produce an accurate phase measurement if more than one reflective surface is present in the Fizeau interferometer test cavity. Examples of test parts that fall into this category include lithography mask blanks and their protective pellicles, and plane parallel optical beam splitters. The plane parallel surfaces of these parts generate multiple interferograms that are superimposed in the recording plane of the Fizeau interferometer. When using wavelength shifting in PSI, the phase-shifting speed of each interferogram is proportional to the optical path difference (OPD) between the two reflective surfaces, so the proposed method is able to differentiate the underlying interferograms from one another in an optimal manner. In this paper, we present a method for simultaneously measuring the multiple test surfaces by extracting all of the underlying interferograms from the superimposed interferograms through the use of a weighted least-squares fitting technique. The theoretical analysis of the weighted least-squares technique and the measurement results are described.
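    The separation idea can be sketched at a single pixel: each surface pair contributes a sinusoid whose phase-stepping rate is fixed by its known OPD, so a (weighted) linear least-squares fit recovers every cavity's phase at once. A minimal sketch with invented rates and phases, using an identity weight matrix where the real method would down-weight unreliable frames.

```python
import numpy as np

rng = np.random.default_rng(0)
frames = np.arange(32)
omegas = np.array([0.5, 1.3])                 # rad/frame, one per cavity (known OPDs)
phis_true = np.array([0.7, -1.9])             # unknown surface phases

I = 10.0 + sum(2.0 * np.cos(w * frames + p) for w, p in zip(omegas, phis_true))
I += rng.normal(0.0, 0.05, frames.size)       # detector noise

# design matrix: DC term plus cosine/sine columns at each known rate
cols = [np.ones(frames.size)]
for w in omegas:
    cols += [np.cos(w * frames), np.sin(w * frames)]
G = np.column_stack(cols)
W = np.eye(frames.size)                       # plug in per-frame weights if needed
coef = np.linalg.solve(G.T @ W @ G, G.T @ W @ I)

# A cos(wt + phi) = (A cos phi) cos(wt) - (A sin phi) sin(wt)
phis = [np.arctan2(-coef[2 + 2 * i], coef[1 + 2 * i]) for i in range(len(omegas))]
print(np.round(phis, 3), "vs true", phis_true)
```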

  3. Towards Multi-Method Research Approach in Empirical Software Engineering

    Science.gov (United States)

    Mandić, Vladimir; Markkula, Jouni; Oivo, Markku

    This paper presents the results of a literature analysis on Empirical Research Approaches in Software Engineering (SE). The analysis explores reasons why traditional methods, such as statistical hypothesis testing and experiment replication, are weakly utilized in the field of SE. It appears that the basic assumptions and preconditions of the traditional methods contradict the actual situation in SE. Furthermore, we have identified the main issues that should be considered by the researcher when selecting the research approach. In view of the reasons for the weak utilization of traditional methods, we propose stronger use of the Multi-Method approach with Pragmatism as the philosophical standpoint.

  4. [A factor analysis method for contingency table data with unlimited multiple choice questions].

    Science.gov (United States)

    Toyoda, Hideki; Haiden, Reina; Kubo, Saori; Ikehara, Kazuya; Isobe, Yurie

    2016-02-01

    The purpose of this study is to propose a factor analysis method for analyzing contingency tables developed from the data of unlimited multiple-choice questions. The method assumes that each cell of the contingency table follows a binomial distribution, and a factor analysis model is applied to the logit of the selection probability. A scree plot and WAIC are used to decide the number of factors, and the standardized residual (the standardized difference between the sample proportion and the estimated selection probability) is used to select items. The proposed method was applied to real product-impression research data on advertised chips and energy drinks. The results showed that this method can be used in conjunction with the conventional factor analysis model and that the extracted factors were fully interpretable, suggesting the usefulness of the proposed method in psychological studies using unlimited multiple-choice questions.
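    A rough classical analogue of the model, on invented count data: take the empirical logits of the selection probabilities and apply an ordinary factor analysis. The paper itself fits a binomial-likelihood Bayesian model, which this sketch does not reproduce.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# contingency table from an unlimited multiple-choice question:
# rows = respondent groups (or products), columns = selectable impressions,
# cell = number of respondents in the group who ticked that choice (invented)
counts = rng.integers(5, 80, size=(12, 8))
n_per_group = 100                                     # respondents per group

p = counts / n_per_group                              # selection probabilities
logit = np.log(p / (1.0 - p))                         # the quantity modelled

fa = FactorAnalysis(n_components=2, random_state=0)
scores = fa.fit_transform(logit)
print("loadings:\n", np.round(fa.components_.T, 2))   # choices x factors
```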

  5. Single- versus multiple-sample method to measure glomerular filtration rate.

    Science.gov (United States)

    Delanaye, Pierre; Flamant, Martin; Dubourg, Laurence; Vidal-Petiot, Emmanuelle; Lemoine, Sandrine; Cavalier, Etienne; Schaeffner, Elke; Ebert, Natalie; Pottel, Hans

    2018-01-08

    There are many different ways to measure glomerular filtration rate (GFR) using various exogenous filtration markers, each having their own strengths and limitations. However, not only the marker, but also the methodology may vary in many ways, including the use of urinary or plasma clearance and, in the case of plasma clearance, the number of time points used to calculate the area under the concentration-time curve, ranging from only one (Jacobsson method) to eight (or more) blood samples. We collected the results obtained from 5106 plasma clearances (iohexol or 51Cr-ethylenediaminetetraacetic acid (EDTA)) using three to four time points, allowing GFR calculation using the slope-intercept method and the Bröchner-Mortensen correction. For each time point, the Jacobsson formula was applied to obtain the single-sample GFR. We used Bland-Altman plots to determine the accuracy of the Jacobsson method at each time point. The single-sample method showed within-10% concordance with the multiple-sample method of 66.4%, 83.6%, 91.4% and 96.0% at the time points 120, 180, 240 and ≥300 min, respectively. Concordance was poorer at lower GFR levels, and this trend parallels increasing age. Results were similar in males and females. Some discordance was found in obese subjects. Single-sample GFR is highly concordant with a multiple-sample strategy, except in the low GFR range (<30 mL/min). © The Author 2018. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
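    The within-10% concordance statistic used above, together with Bland-Altman bias and limits of agreement, is straightforward to compute; a sketch with simulated paired clearances, not the study data.

```python
import numpy as np

rng = np.random.default_rng(0)
multi = rng.uniform(20.0, 120.0, 5000)             # multiple-sample GFR, mL/min
single = multi + rng.normal(0.0, 4.0, multi.size)  # simulated single-sample values

# share of measurements where the single-sample value lies within 10%
concord = np.mean(np.abs(single - multi) / multi <= 0.10)

# Bland-Altman summary: mean difference and 95% limits of agreement
d = single - multi
bias, sd = d.mean(), d.std(ddof=1)
print(f"within-10% concordance: {100 * concord:.1f}%")
print(f"Bland-Altman bias {bias:.2f}, limits of agreement "
      f"[{bias - 1.96 * sd:.2f}, {bias + 1.96 * sd:.2f}] mL/min")
```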

  6. The impact of a multiple intelligences teaching approach drug education programme on drug refusal skills of Nigerian pupils.

    Science.gov (United States)

    Nwagu, Evelyn N; Ezedum, Chuks E; Nwagu, Eric K N

    2015-09-01

    The rising incidence of drug abuse among youths in Nigeria is a source of concern for health educators. This study was carried out on primary six pupils to determine the effect of a Multiple Intelligences Teaching Approach Drug Education Programme (MITA-DEP) on pupils' acquisition of drug refusal skills. A programme of drug education based on the Multiple Intelligences Teaching Approach (MITA) was developed. An experimental group was taught using this programme while a control group was taught using the same programme but developed based on the Traditional Teaching Approach. Pupils taught with the MITA acquired more drug refusal skills than those taught with the Traditional Teaching Approach. Urban pupils taught with the MITA acquired more skills than rural pupils. There was no statistically significant difference in the mean refusal skills of male and female pupils taught with the MITA. © The Author(s) 2014.

  7. An Automated Approach to Reasoning Under Multiple Perspectives

    Science.gov (United States)

    deBessonet, Cary

    2004-01-01

    This is the final report, with emphasis on research during the last term. The context for the research has been the development of an automated reasoning technology for use in SMS (Symbolic Manipulation System), a system used to build and query knowledge bases (KBs) using a special knowledge representation language SL (Symbolic Language). SMS interprets assertive SL input and enters the results as components of its universe. The system operates in two basic modes: 1) constructive mode (for building KBs); and 2) query/search mode (for querying KBs). Query satisfaction consists of matching query components with KB components. The system allows "penumbral matches," that is, matches that do not exactly meet the specifications of the query, but which are deemed relevant for the conversational context. If the user wants to know whether SMS has information that holds, say, for "any chow," the scope of relevancy might be set so that the system would respond based on a finding that it has information that holds for "most dogs," although this is not exactly what was called for by the query. The response would be qualified accordingly, as would normally be the case in ordinary human conversation. The general goal of the research was to develop an approach by which assertive content could be interpreted from multiple perspectives so that reasoning operations could be successfully conducted over the results. The interpretation of an SL statement such as, "{person believes [captain (asserted (perhaps)) (astronaut saw (comet (bright)))]}," which in English would amount to asserting something to the effect that, "Some person believes that a captain perhaps asserted that an astronaut saw a bright comet," would require the recognition of multiple perspectives, including some that are: a) epistemically-based (focusing on "believes"); b) assertion-based (focusing on "asserted"); c) perception-based (focusing on "saw"); d) adjectivally-based (focusing on "bright"); and e) modally

  8. Multiple Family Group Therapy: An Interpersonal/Postmodern Approach.

    Science.gov (United States)

    Thorngren, Jill M.; Kleist, David M.

    2002-01-01

    Multiple Family Group Therapy has been identified as a viable treatment model for a variety of client populations. A combination of family systems theories and therapeutic group factors provide the opportunity to explore multiple levels of intrapersonal and interpersonal relationships between families. This article depicts a Multiple Family Group…

  9. A mixed methods study of multiple health behaviors among individuals with stroke

    Directory of Open Access Journals (Sweden)

    Matthew Plow

    2017-05-01

    Background: Individuals with stroke often have multiple cardiovascular risk factors that necessitate promoting engagement in multiple health behaviors. However, observational studies of individuals with stroke have typically focused on promoting a single health behavior. Thus, there is a poor understanding of linkages between healthy behaviors and the circumstances in which factors, such as stroke impairments, may influence a single or multiple health behaviors. Methods: We conducted a mixed methods convergent parallel study of 25 individuals with stroke to examine the relationships between stroke impairments and physical activity, sleep, and nutrition. Our goal was to gain further insight into possible strategies to promote multiple health behaviors among individuals with stroke. This study focused on physical activity, sleep, and nutrition because of their importance in achieving energy balance, maintaining a healthy weight, and reducing cardiovascular risks. Qualitative and quantitative data were collected concurrently, with the former prioritized over the latter in order to develop a conceptual model of engagement in multiple health behaviors among individuals with stroke. Qualitative and quantitative data were analyzed independently and then integrated during the inference stage to develop meta-inferences. The 25 individuals with stroke completed closed-ended questionnaires on healthy behaviors and physical function. They also participated in face-to-face focus groups and one-to-one phone interviews. Results: We found statistically significant and moderate correlations between hand function and healthy eating habits (r = 0.45), sleep disturbances and limitations in activities of daily living (r = −0.55), BMI and limitations in activities of daily living (r = −0.49), physical activity and limitations in activities of daily living (r = 0.41), mobility impairments and BMI (r

  10. Modified multiple time scale method for solving strongly nonlinear damped forced vibration systems

    Science.gov (United States)

    Razzak, M. A.; Alam, M. Z.; Sharif, M. N.

    2018-03-01

    In this paper, the modified multiple time scale (MTS) method is employed to solve strongly nonlinear forced vibration systems. Only the first-order approximation is considered in order to avoid complexity. The formulation and the determination of the solution procedure are very easy and straightforward. The classical multiple time scale (MS) method and the multiple scales Lindstedt-Poincare (MSLP) method do not give the desired results for strongly damped forced vibration systems. The main aim of this paper is to remove these limitations. Two examples are considered to illustrate the effectiveness and convenience of the present procedure. The approximate external frequencies and the corresponding approximate solutions are determined by the present method. The results agree well with the corresponding numerical solutions (considered to be exact) and are better than other existing results. For weak nonlinearities with a weak damping effect, the absolute relative error of the first-order approximate external frequency in this paper is only 0.07% when the amplitude A = 1.5, while the relative error given by the MSLP method is surprisingly 28.81%. Furthermore, for strong nonlinearities with a strong damping effect, the absolute relative error found in this article is only 0.02%, whereas the relative error obtained by the MSLP method is 24.18%. Therefore, the present method is not only valid for weakly nonlinear damped forced systems, but also gives better results for strongly nonlinear systems with both small and strong damping effects.

  11. The Propagation of Movement Variability in Time: A Methodological Approach for Discrete Movements with Multiple Degrees of Freedom

    Science.gov (United States)

    Krüger, Melanie; Straube, Andreas; Eggert, Thomas

    2017-01-01

    In recent years, theory-building in motor neuroscience and our understanding of the synergistic control of the redundant human motor system have significantly profited from the emergence of a range of different mathematical approaches to analyze the structure of movement variability. Approaches such as the Uncontrolled Manifold method or the Noise-Tolerance-Covariance decomposition method make it possible to detect and interpret changes in movement coordination due to, e.g., learning, external task constraints or disease, by analyzing the structure of within-subject, inter-trial movement variability. Whereas, for cyclical movements (e.g., locomotion), mathematical approaches exist to investigate the propagation of movement variability in time (e.g., time series analysis), similar approaches are missing for discrete, goal-directed movements, such as reaching. Here, we propose canonical correlation analysis as a suitable method to analyze the propagation of within-subject variability across different time points during the execution of discrete movements. While similar analyses have already been applied to discrete movements with only one degree of freedom (DoF; e.g., Pearson's product-moment correlation), canonical correlation analysis makes it possible to evaluate the coupling of inter-trial variability across different time points along the movement trajectory for multiple-DoF effector systems, such as the arm. The theoretical analysis is illustrated by empirical data from a study on reaching movements under normal and disturbed proprioception. The results show increased movement duration, decreased movement amplitude, as well as altered movement coordination under ischemia, which results in a reduced complexity of movement control. Movement endpoint variability is not increased under ischemia. This suggests that healthy adults are able to immediately and efficiently adjust the control of complex reaching movements to compensate for the loss of proprioceptive information. Further, it is
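
    To make the proposed analysis concrete, the core computation can be sketched with scikit-learn's CCA on simulated inter-trial data; the array shapes, coupling matrix and variable names below are illustrative assumptions, not the authors' data or code.

    ```python
    # Canonical correlation between inter-trial variability at two time points
    # of a multi-DoF movement. The trial data are simulated for illustration.
    import numpy as np
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(0)
    n_trials, n_dof = 60, 4               # e.g., 4 arm joint angles (assumed)
    X = rng.standard_normal((n_trials, n_dof))          # joint state at time t1
    Y = 0.7 * X @ rng.standard_normal((n_dof, n_dof)) \
        + 0.3 * rng.standard_normal((n_trials, n_dof))  # coupled state at t2

    cca = CCA(n_components=n_dof)
    Xc, Yc = cca.fit_transform(X, Y)
    canon_corrs = [np.corrcoef(Xc[:, i], Yc[:, i])[0, 1] for i in range(n_dof)]
    print("canonical correlations t1 -> t2:", np.round(canon_corrs, 3))
    ```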

  12. Efficient multiple-trait association and estimation of genetic correlation using the matrix-variate linear mixed model.

    Science.gov (United States)

    Furlotte, Nicholas A; Eskin, Eleazar

    2015-05-01

    Multiple-trait association mapping, in which multiple traits are used simultaneously in the identification of genetic variants affecting those traits, has recently attracted interest. One class of approaches for this problem builds on classical variance component methodology, utilizing a multitrait version of a linear mixed model. These approaches both increase power and provide insights into the genetic architecture of multiple traits. In particular, it is possible to estimate the genetic correlation, which is a measure of the portion of the total correlation between traits that is due to additive genetic effects. Unfortunately, the practical utility of these methods is limited since they are computationally intractable for large sample sizes. In this article, we introduce a reformulation of the multiple-trait association mapping approach by defining the matrix-variate linear mixed model. Our approach reduces the computational time necessary to perform maximum-likelihood inference in a multiple-trait model by utilizing a data transformation. By utilizing a well-studied human cohort, we show that our approach provides more than a 10-fold speedup, making multiple-trait association feasible in a large population cohort on the genome-wide scale. We take advantage of the efficiency of our approach to analyze gene expression data. By decomposing gene coexpression into a genetic and environmental component, we show that our method provides fundamental insights into the nature of coexpressed genes. An implementation of this method is available at http://genetics.cs.ucla.edu/mvLMM. Copyright © 2015 by the Genetics Society of America.

  13. Estimation of Multiple Pitches in Stereophonic Mixtures using a Codebook-based Approach

    DEFF Research Database (Denmark)

    Hansen, Martin Weiss; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2017-01-01

    In this paper, a method for multi-pitch estimation of stereophonic mixtures of multiple harmonic signals is presented. The method is based on a signal model which takes the amplitude and delay panning parameters of the sources in a stereophonic mixture into account. Furthermore, the method is based on the extended invariance principle (EXIP) and a codebook of realistic amplitude vectors. For each fundamental frequency candidate in each of the sources, the amplitude estimates are mapped to entries in the codebook, and the pitch and model order are estimated jointly. The performance of the proposed method...

  14. Approaches to greenhouse gas accounting methods for biomass carbon

    International Nuclear Information System (INIS)

    Downie, Adriana; Lau, David; Cowie, Annette; Munroe, Paul

    2014-01-01

    This investigation examines different approaches for the GHG flux accounting of activities within a tight boundary of biomass C cycling, with scope limited to exclude all other aspects of the lifecycle. Alternative approaches are examined that a) account for all emissions including biogenic CO2 cycling – the biogenic method; b) account for the quantity of C that is moved to and maintained in the non-atmospheric pool – the stock method; and c) assume that the net balance of C taken up by biomass is neutral over the short term and hence there is no requirement to include this C in the calculation – the simplified method. This investigation demonstrates the inaccuracies in both emissions forecasting and abatement calculations that result from the use of the simplified method, which is commonly accepted for use. It has been found that the stock method is the most accurate and appropriate approach for use in calculating GHG inventories; however, shortcomings of this approach emerge when applied to abatement projects, as it does not account for the increase in biogenic CO2 emissions that are generated when non-CO2 GHG emissions in the business-as-usual case are offset. Therefore the biogenic method or a modified version of the stock method should be used to accurately estimate the GHG emissions abatement achieved by a project. This investigation uses both the derivation of methodology equations from first principles and worked examples to explore the fundamental differences between the alternative approaches. Examples are developed for three project scenarios including landfill, combustion and slow-pyrolysis (biochar) of biomass. -- Highlights: • Different approaches can be taken to account for the GHG emissions from biomass. • Simplification of GHG accounting methods is useful; however, it can lead to inaccuracies. • Approaches used currently are often inadequate for practices that store carbon. • Accounting methods for emissions forecasting can be inadequate for
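
    The difference between the three accounting approaches can be made concrete with a toy flux balance. A sketch using assumed round numbers, not values from the paper:

    ```python
    # Toy comparison of the three biomass-C accounting approaches described
    # above. All quantities are illustrative assumptions (t CO2-e per t biomass).
    uptake   = 1.80   # CO2 absorbed during biomass growth (assumed)
    biogenic = 1.30   # biogenic CO2 released by the project (assumed)
    non_co2  = 0.15   # CH4/N2O released, expressed as CO2-e (assumed)
    stored_c = 0.50   # CO2-e retained in a non-atmospheric pool, e.g. biochar

    biogenic_method   = biogenic + non_co2 - uptake   # full flux accounting
    stock_method      = non_co2 - stored_c            # credit C kept out of the air
    simplified_method = non_co2                       # biogenic CO2 assumed neutral

    for name, val in [("biogenic", biogenic_method),
                      ("stock", stock_method),
                      ("simplified", simplified_method)]:
        print(f"{name:>10s} method: net {val:+.2f} t CO2-e")
    ```

    With these numbers the biogenic and stock methods agree (−0.35 t CO2-e each, since uptake equals biogenic release plus storage), while the simplified method misses the stored carbon entirely.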

  15. Comparison of two methods of surface profile extraction from multiple ultrasonic range measurements

    NARCIS (Netherlands)

    Barshan, B; Baskent, D

    Two novel methods for surface profile extraction based on multiple ultrasonic range measurements are described and compared. One of the methods employs morphological processing techniques, whereas the other employs a spatial voting scheme followed by simple thresholding. Morphological processing

  16. Non-Abelian Kubo formula and the multiple time-scale method

    International Nuclear Information System (INIS)

    Zhang, X.; Li, J.

    1996-01-01

    The non-Abelian Kubo formula is derived from the kinetic theory. That expression is compared with the one obtained using the eikonal for a Chern–Simons theory. The multiple time-scale method is used to study the non-Abelian Kubo formula, and the damping rate for longitudinal color waves is computed. copyright 1996 Academic Press, Inc

  17. Fuzzy multiple objective decision making methods and applications

    CERN Document Server

    Lai, Young-Jou

    1994-01-01

    In the last 25 years, the fuzzy set theory has been applied in many disciplines such as operations research, management science, control theory, artificial intelligence/expert system, etc. In this volume, methods and applications of crisp, fuzzy and possibilistic multiple objective decision making are first systematically and thoroughly reviewed and classified. This state-of-the-art survey provides readers with a capsule look into the existing methods, and their characteristics and applicability to analysis of fuzzy and possibilistic programming problems. To realize practical fuzzy modelling, it presents solutions for real-world problems including production/manufacturing, location, logistics, environment management, banking/finance, personnel, marketing, accounting, agriculture economics and data analysis. This book is a guided tour through the literature in the rapidly growing fields of operations research and decision making and includes the most up-to-date bibliographical listing of literature on the topi...

  18. MSblender: A probabilistic approach for integrating peptide identifications from multiple database search engines.

    Science.gov (United States)

    Kwon, Taejoon; Choi, Hyungwon; Vogel, Christine; Nesvizhskii, Alexey I; Marcotte, Edward M

    2011-07-01

    Shotgun proteomics using mass spectrometry is a powerful method for protein identification but suffers from limited sensitivity in complex samples. Integrating peptide identifications from multiple database search engines is a promising strategy to increase the number of peptide identifications and reduce the volume of unassigned tandem mass spectra. Existing methods pool statistical significance scores, such as p-values or posterior probabilities of peptide-spectrum matches (PSMs), from multiple search engines after high-scoring peptides have been assigned to spectra, but these methods lack reliable control of identification error rates as data are integrated from different search engines. We developed a statistically coherent method for integrative analysis, termed MSblender. MSblender converts raw search scores from search engines into a probability score for every possible PSM and properly accounts for the correlation between search scores. The method reliably estimates false discovery rates and identifies more PSMs than any single search engine at the same false discovery rate. Increased identifications increment spectral counts for most proteins and allow quantification of proteins that would not have been quantified by individual search engines. We also demonstrate that enhanced quantification contributes to improved sensitivity in differential expression analyses.
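
    As a runnable illustration of the kind of error-rate control at stake, here is a generic target-decoy FDR estimate at a score threshold. This is a simplified stand-in for the general notion, not MSblender's actual probability model, and all scores are simulated.

    ```python
    # Generic target-decoy FDR estimation for scored PSMs -- the kind of
    # error-rate control an integrative method must preserve.
    import numpy as np

    def fdr_at_threshold(scores, is_decoy, threshold):
        """Estimate FDR among PSMs scoring >= threshold as decoys/targets."""
        accepted = scores >= threshold
        n_decoy = np.sum(is_decoy & accepted)
        n_target = np.sum(~is_decoy & accepted)
        return n_decoy / max(n_target, 1)

    rng = np.random.default_rng(1)
    scores = np.concatenate([rng.normal(2.0, 1.0, 900),    # target PSMs (assumed)
                             rng.normal(0.0, 1.0, 100)])   # decoy PSMs (assumed)
    is_decoy = np.array([False] * 900 + [True] * 100)
    for t in (1.0, 2.0, 3.0):
        print(f"threshold {t:.1f}: est. FDR = {fdr_at_threshold(scores, is_decoy, t):.3f}")
    ```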

  19. Comparing Multiple Intelligences Approach with Traditional Teaching on Eight Grade Students' Achievement in and Attitudes toward Science

    Science.gov (United States)

    Kaya, Osman Nafiz; Dogan, Alev; Gokcek, Nur; Kilic, Ziya; Kilic, Esma

    2007-01-01

    The purpose of this study was to investigate the effects of the multiple intelligences (MI) teaching approach on 8th grade students' achievement in and attitudes toward science. This study used a pretest-posttest control group experimental design. While the experimental group (n=30) was taught a unit on acids and bases using the MI teaching approach, the…

  20. An exploratory trial exploring the use of a multiple intelligences teaching approach (MITA) for teaching clinical skills to first year undergraduate nursing students.

    Science.gov (United States)

    Sheahan, Linda; While, Alison; Bloomfield, Jacqueline

    2015-12-01

    The teaching and learning of clinical skills is a key component of nurse education programmes. The clinical competency of pre-registration nursing students has raised questions about the proficiency of teaching strategies for clinical skill acquisition within pre-registration education. This study aimed to test the effectiveness of teaching clinical skills using a multiple intelligences teaching approach (MITA) compared with the conventional teaching approach. A randomised controlled trial was conducted. Participants were randomly allocated to an experimental group (MITA intervention) (n=46) and a control group (conventional teaching) (n=44) to learn clinical skills. The setting was one Irish third-level educational institution. Participants were all first year nursing students (n=90) in one institution. The experimental group was taught using MITA delivered by the researcher while the control group was taught by a team of six experienced lecturers. Participant preference for learning was measured by the Index of Learning Styles (ILS). Participants' multiple intelligence (MI) preferences were measured with a multiple intelligences development assessment scale (MIDAS). All participants were assessed using the same objective structured clinical examination (OSCE) at the end of semester one and semester two. MI assessment preferences were measured by a multiple intelligences assessment preferences questionnaire. The MITA intervention was evaluated using a questionnaire. The strongest preference on the ILS for both groups was the sensing style. The highest MI was interpersonal intelligence. Participants in the experimental group had higher scores in all three OSCEs (p… multiple choice questions as methods of assessment. MITA was evaluated positively. The study findings support the use of MITA for clinical skills teaching and advance the understanding of how MI teaching approaches may be used in nursing education. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Socratic Method as an Approach to Teaching

    Directory of Open Access Journals (Sweden)

    Haris Delić

    2016-10-01

    Full Text Available In this article we present a theoretical view of Socrates' life and his method of teaching. After the biographical facts about Socrates and his life, we explain the method he used in teaching and its two main types, the Classic and the Modern Socratic Method. Since the core of Socrates' approach is dialogue as a form of teaching, we explain how exactly a Socratic dialogue proceeds. Besides that, we present two examples of dialogues that Socrates led, Meno and Gorgias. The Socratic circle is also one of the aspects presented in this paper: a form of seminar that is crucial for group discussions of a given theme. At the end, some disadvantages of the method are explained. With this paper, the reader can get a conception of this approach to teaching and can use Socrates as an example of how a successful teacher leads students towards the goal.

  2. Integrating Multiple Teaching Methods into a General Chemistry Classroom

    Science.gov (United States)

    Francisco, Joseph S.; Nicoll, Gayle; Trautmann, Marcella

    1998-02-01

    In addition to the traditional lecture format, three other teaching strategies (class discussions, concept maps, and cooperative learning) were incorporated into a freshman level general chemistry course. Student perceptions of their involvement in each of the teaching methods, as well as their perceptions of the utility of each method were used to assess the effectiveness of the integration of the teaching strategies as received by the students. Results suggest that each strategy serves a unique purpose for the students and increased student involvement in the course. These results indicate that the multiple teaching strategies were well received by the students and that all teaching strategies are necessary for students to get the most out of the course.

  3. Human-centered approaches in geovisualization design: investigating multiple methods through a long-term case study.

    Science.gov (United States)

    Lloyd, David; Dykes, Jason

    2011-12-01

    Working with three domain specialists we investigate human-centered approaches to geovisualization following an ISO13407 taxonomy covering context of use, requirements and early stages of design. Our case study, undertaken over three years, draws attention to repeating trends: that generic approaches fail to elicit adequate requirements for geovis application design; that the use of real data is key to understanding needs and possibilities; that trust and knowledge must be built and developed with collaborators. These processes take time but modified human-centred approaches can be effective. A scenario developed through contextual inquiry but supplemented with domain data and graphics is useful to geovis designers. Wireframe, paper and digital prototypes enable successful communication between specialist and geovis domains when incorporating real and interesting data, prompting exploratory behaviour and eliciting previously unconsidered requirements. Paper prototypes are particularly successful at eliciting suggestions, especially for novel visualization. Enabling specialists to explore their data freely with a digital prototype is as effective as using a structured task protocol and is easier to administer. Autoethnography has potential for framing the design process. We conclude that a common understanding of context of use, domain data and visualization possibilities are essential to successful geovis design and develop as this progresses. HC approaches can make a significant contribution here. However, modified approaches, applied with flexibility, are most promising. We advise early, collaborative engagement with data – through simple, transient visual artefacts supported by data sketches and existing designs – before moving to successively more sophisticated data wireframes and data prototypes. © 2011 IEEE

  4. An Application of Graphical Approach to Construct Multiple Testing Procedure in a Hypothetical Phase III Design

    Directory of Open Access Journals (Sweden)

    Naitee eTing

    2014-01-01

    Full Text Available Many multiple testing procedures (MTPs) have been developed in recent years. Among these new procedures, the graphical approach is flexible and easy to communicate to non-statisticians. A hypothetical Phase III clinical trial design is introduced in this manuscript to demonstrate how the graphical approach can be applied in clinical product development. In this design, an active comparator is used. It is thought that the test drug under development could potentially be superior to this comparator. For comparison of efficacy, the primary endpoint is well established and widely accepted by regulatory agencies. However, an important secondary endpoint based on Phase II findings looks very promising. The target dose may have a good opportunity to deliver superiority to the comparator. Furthermore, a lower dose is included in case the target dose demonstrates potential safety concerns. This Phase III study is designed as a non-inferiority trial with two doses and two endpoints. This manuscript illustrates how the graphical approach is applied to this design in handling multiple testing issues.

  5. Should methods of correction for multiple comparisons be applied in pharmacovigilance?

    Directory of Open Access Journals (Sweden)

    Lorenza Scotti

    2015-12-01

    Full Text Available Purpose. In pharmacovigilance, spontaneous reporting databases are devoted to the early detection of adverse event 'signals' for marketed drugs. A common limitation of these systems is the large number of concurrently investigated associations, implying a high probability of generating positive signals simply by chance. However, it is not clear whether methods that adjust for the multiple testing problem are needed when at least some of the drug–outcome relationships under study are known. To this aim we applied a robust estimation method for the FDR (rFDR) particularly suitable in the pharmacovigilance context. Methods. We exploited the data available for the SAFEGUARD project to apply the rFDR estimation method to detect potential false positive signals of adverse reactions attributable to the use of non-insulin blood glucose lowering drugs. Specifically, the number of signals generated from the conventional disproportionality measures was compared before and after the application of the rFDR adjustment method. Results. Among the 311 evaluable pairs (i.e., drug–event pairs with at least one adverse event report), 106 (34%) signals were considered significant in the conventional analysis. Among them, 1 was classified as a false positive signal according to the rFDR method. Conclusions. The results of this study suggest that when a restricted number of drug–outcome pairs is considered and warnings about some of them are known, multiple comparison methods for recognizing false positive signals are not as useful as theoretical considerations would suggest.
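
    For a runnable illustration of the general idea, standard Benjamini–Hochberg FDR control can be applied to a vector of disproportionality p-values. BH is used here only as a stand-in for the paper's rFDR estimator, and the p-values are simulated.

    ```python
    # FDR-style multiplicity adjustment of signal p-values from a spontaneous
    # reporting database. Benjamini-Hochberg stands in for the rFDR method;
    # the mix of true and null signals is an assumption.
    import numpy as np
    from statsmodels.stats.multitest import multipletests

    rng = np.random.default_rng(7)
    # 311 drug-event pairs: a few strong true signals, the rest null (assumed).
    pvals = np.concatenate([rng.uniform(0, 1e-4, 10), rng.uniform(0, 1, 301)])

    reject_raw = pvals < 0.05
    reject_fdr, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
    print(f"signals at p<0.05 (unadjusted): {reject_raw.sum()}")
    print(f"signals after FDR adjustment : {reject_fdr.sum()}")
    ```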

  6. Parallelised Krylov subspace method for reactor kinetics by IQS approach

    International Nuclear Information System (INIS)

    Gupta, Anurag; Modak, R.S.; Gupta, H.P.; Kumar, Vinod; Bhatt, K.

    2005-01-01

    Nuclear reactor kinetics involves numerical solution of space-time-dependent multi-group neutron diffusion equation. Two distinct approaches exist for this purpose: the direct (implicit time differencing) approach and the improved quasi-static (IQS) approach. Both the approaches need solution of static space-energy-dependent diffusion equations at successive time-steps; the step being relatively smaller for the direct approach. These solutions are usually obtained by Gauss-Seidel type iterative methods. For a faster solution, the Krylov sub-space methods have been tried and also parallelised by many investigators. However, these studies seem to have been done only for the direct approach. In the present paper, parallelised Krylov methods are applied to the IQS approach in addition to the direct approach. It is shown that the speed-up obtained for IQS is higher than that for the direct approach. The reasons for this are also discussed. Thus, the use of IQS approach along with parallelised Krylov solvers seems to be a promising scheme
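
    The inner static solves that dominate both approaches are ordinary sparse linear systems, so a Krylov solver applies directly. A minimal sketch on a toy 1-D one-group diffusion operator; the mesh, cross sections and source are assumed, and the multigroup and IQS machinery is omitted.

    ```python
    # Krylov (conjugate gradient) solve of a 1-D one-group diffusion operator,
    # a toy stand-in for the static solves performed at each kinetics time step.
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import cg

    n, h = 200, 0.05            # mesh size and spacing (assumed)
    D, sigma_a = 1.0, 0.20      # diffusion coefficient, absorption (assumed)

    # Discretized -D d2(phi)/dx2 + sigma_a*phi = S with Dirichlet boundaries.
    main = 2.0 * D / h**2 + sigma_a
    A = sp.diags([-D / h**2, main, -D / h**2], offsets=[-1, 0, 1],
                 shape=(n, n), format="csr")
    S = np.ones(n)              # uniform source (assumed)

    phi, info = cg(A, S)
    print("cg exit code:", info, "| max flux:", round(float(phi.max()), 4))
    ```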

  7. Optimal load allocation of multiple fuel boilers.

    Science.gov (United States)

    Dunn, Alex C; Du, Yan Yi

    2009-04-01

    This paper presents a new methodology for optimally allocating a set of multiple industrial boilers that each simultaneously consumes multiple fuel types. Unlike recent similar approaches in the utility industry that use soft computing techniques, this approach is based on a second-order gradient search method that is easy to implement without any specialized optimization software. The algorithm converges rapidly and the application yields significant savings benefits, up to 3% of the overall operating cost of industrial boiler systems in the examples given and potentially higher in other cases, depending on the plant circumstances. Given today's energy prices, this can yield significant savings benefits to manufacturers that raise steam for plant operations.
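
    The optimization itself can be sketched with a generic constrained solver. The example below uses SciPy's SLSQP rather than the paper's second-order gradient search, with assumed quadratic fuel-cost curves and turndown limits.

    ```python
    # Constrained allocation of steam load across boilers with convex
    # fuel-cost curves. All cost coefficients and limits are assumptions.
    import numpy as np
    from scipy.optimize import minimize

    # cost_i(x) = a_i + b_i*x + c_i*x^2 ($/h), x in t/h of steam
    a = np.array([10.0, 12.0, 8.0])
    b = np.array([2.0, 1.8, 2.2])
    c = np.array([0.020, 0.035, 0.015])
    demand = 120.0                       # total steam demand, t/h (assumed)
    bounds = [(10.0, 60.0)] * 3          # boiler turndown limits (assumed)

    total_cost = lambda x: float(np.sum(a + b * x + c * x**2))
    cons = {"type": "eq", "fun": lambda x: np.sum(x) - demand}
    res = minimize(total_cost, x0=np.full(3, demand / 3), bounds=bounds,
                   constraints=cons, method="SLSQP")
    print("optimal loads (t/h):", np.round(res.x, 2), "| cost $/h:", round(res.fun, 2))
    ```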

  8. A mixed-methods approach to systematic reviews.

    Science.gov (United States)

    Pearson, Alan; White, Heath; Bath-Hextall, Fiona; Salmond, Susan; Apostolo, Joao; Kirkpatrick, Pamela

    2015-09-01

    There are an increasing number of published single-method systematic reviews that focus on different types of evidence related to a particular topic. As policy makers and practitioners seek clear directions for decision-making from systematic reviews, it is likely to become increasingly difficult for them to identify 'what to do' if they are required to find and understand a plethora of syntheses related to a particular topic. Mixed-methods systematic reviews are designed to address this issue and have the potential to produce systematic reviews of direct relevance to policy makers and practitioners. On the basis of the recommendations of the Joanna Briggs Institute International Mixed Methods Reviews Methodology Group in 2012, the Institute adopted a segregated approach to mixed-methods synthesis as described by Sandelowski et al., which consists of separate syntheses of each component method of the review. The Joanna Briggs Institute's mixed-methods synthesis then uses a Bayesian approach to translate the findings of the initial quantitative synthesis into qualitative themes and pool these with the findings of the initial qualitative synthesis.

  9. Use of multiple methods to determine factors affecting quality of care of patients with diabetes.

    Science.gov (United States)

    Khunti, K

    1999-10-01

    The process of care of patients with diabetes is complex; however, GPs are playing a greater role in its management. Despite the research evidence, the quality of care of patients with diabetes is variable. In order to improve care, information is required on the obstacles faced by practices in improving care. Qualitative and quantitative methods can be used for formation of hypotheses and the development of survey procedures. However, to date few examples exist in general practice research on the use of multiple methods using both quantitative and qualitative techniques for hypothesis generation. We aimed to determine information on all factors that may be associated with delivery of care to patients with diabetes. Factors for consideration on delivery of diabetes care were generated by multiple qualitative methods including brainstorming with health professionals and patients, a focus group and interviews with key informants which included GPs and practice nurses. Audit data showing variations in care of patients with diabetes were used to stimulate the brainstorming session. A systematic literature search focusing on quality of care of patients with diabetes in primary care was also conducted. Fifty-four potential factors were identified by multiple methods. Twenty (37.0%) were practice-related factors, 14 (25.9%) were patient-related factors and 20 (37.0%) were organizational factors. A combination of brainstorming and the literature review identified 51 (94.4%) factors. Patients did not identify factors in addition to those identified by other methods. The complexity of delivery of care to patients with diabetes is reflected in the large number of potential factors identified in this study. This study shows the feasibility of using multiple methods for hypothesis generation. Each evaluation method provided unique data which could not otherwise be easily obtained. This study highlights a way of combining various traditional methods in an attempt to overcome the

  10. A simplified approach for evaluating multiple test outcomes and multiple disease states in relation to the exercise thallium-201 stress test in suspected coronary artery disease

    International Nuclear Information System (INIS)

    Pollock, S.G.; Watson, D.D.; Gibson, R.S.; Beller, G.A.; Kaul, S.

    1989-01-01

    This study describes a simplified approach for the interpretation of electrocardiographic and thallium-201 imaging data derived from the same patient during exercise. The 383 patients in this study had also undergone selective coronary arteriography within 3 months of the exercise test. This matrix approach allows for multiple test outcomes (both tests positive, both negative, 1 test positive and 1 negative) and multiple disease states (no coronary artery disease vs 1-vessel vs multivessel coronary artery disease). Because this approach analyzes the results of 2 test outcomes simultaneously rather than serially, it also negates the lack of test independence, if such an effect is present. It is also demonstrated that ST-segment depression on the electrocardiogram and defects on initial thallium-201 images provide conditionally independent information regarding the presence of coronary artery disease in patients without prior myocardial infarction. In contrast, ST-segment depression on the electrocardiogram and redistribution on the delayed thallium-201 images may not provide totally independent information regarding the presence of exercise-induced ischemia in patients with or without myocardial infarction
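
    The value of conditional independence in this matrix approach can be illustrated with a small Bayes computation over the four joint test outcomes. The sensitivities, specificities and pre-test probability below are assumed for illustration, not the study's values.

    ```python
    # Post-test probability of CAD for each joint outcome of two conditionally
    # independent tests (exercise ECG ST-depression, thallium-201 defect).
    sens_ecg, spec_ecg = 0.65, 0.85    # assumed test characteristics
    sens_tl,  spec_tl  = 0.80, 0.90
    prior = 0.50                       # pre-test probability (assumed)

    for ecg_pos in (True, False):
        for tl_pos in (True, False):
            # Conditional independence lets per-test likelihoods multiply.
            p_dis = (sens_ecg if ecg_pos else 1 - sens_ecg) * \
                    (sens_tl if tl_pos else 1 - sens_tl) * prior
            p_nod = ((1 - spec_ecg) if ecg_pos else spec_ecg) * \
                    ((1 - spec_tl) if tl_pos else spec_tl) * (1 - prior)
            post = p_dis / (p_dis + p_nod)
            print(f"ECG+={ecg_pos!s:5} TL+={tl_pos!s:5} -> P(CAD) = {post:.2f}")
    ```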

  11. A Two-Dimensional Helmholtz Equation Solution for the Multiple Cavity Scattering Problem

    Science.gov (United States)

    2013-02-01

    obtained by using the block Gauss–Seidel iterative method. To show the convergence of the iterative method, we define the error between two... models to the general multiple cavity setting. Numerical examples indicate that the convergence of the Gauss–Seidel iterative method depends on the... variational approach. A block Gauss–Seidel iterative method is introduced to solve the coupled system of the multiple cavity scattering problem, where
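
    The solver structure referred to in this abstract can be sketched generically: a block Gauss–Seidel sweep solves each diagonal block exactly while feeding the newest iterate into the coupling terms. A toy two-block example follows (assumed matrices, not the cavity discretization).

    ```python
    # Block Gauss-Seidel iteration for a 2x2-block coupled linear system.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 4
    A11 = np.eye(n) * 5 + rng.standard_normal((n, n)) * 0.1  # dominant blocks
    A22 = np.eye(n) * 5 + rng.standard_normal((n, n)) * 0.1
    A12 = rng.standard_normal((n, n)) * 0.1                  # weak coupling
    A21 = rng.standard_normal((n, n)) * 0.1
    b1, b2 = rng.standard_normal(n), rng.standard_normal(n)

    x1, x2 = np.zeros(n), np.zeros(n)
    for k in range(50):
        x1_new = np.linalg.solve(A11, b1 - A12 @ x2)      # block 1, latest x2
        x2_new = np.linalg.solve(A22, b2 - A21 @ x1_new)  # block 2 reuses x1_new
        err = max(np.abs(x1_new - x1).max(), np.abs(x2_new - x2).max())
        x1, x2 = x1_new, x2_new
        if err < 1e-12:
            break
    print(f"converged in {k + 1} iterations")
    ```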

  12. A hybrid approach to simulate multiple photon scattering in X-ray imaging

    International Nuclear Information System (INIS)

    Freud, N.; Letang, J.-M.; Babot, D.

    2005-01-01

    A hybrid simulation approach is proposed to compute the contribution of scattered radiation in X- or γ-ray imaging. This approach takes advantage of the complementarity between the deterministic and probabilistic simulation methods. The proposed hybrid method consists of two stages. Firstly, a set of scattering events occurring in the inspected object is determined by means of classical Monte Carlo simulation. Secondly, this set of scattering events is used as a starting point to compute the energy imparted to the detector, with a deterministic algorithm based on a 'forced detection' scheme. For each scattering event, the probability for the scattered photon to reach each pixel of the detector is calculated using well-known physical models (form factor and incoherent scattering function approximations, in the case of Rayleigh and Compton scattering respectively). The results of the proposed hybrid approach are compared to those obtained with the Monte Carlo method alone (Geant4 code) and found to be in excellent agreement. The convergence of the results when the number of scattering events increases is studied. The proposed hybrid approach makes it possible to simulate the contribution of each type (Compton or Rayleigh) and order of scattering, separately or together, with a single PC, within reasonable computation times (from minutes to hours, depending on the number of pixels of the detector). This constitutes a substantial benefit, compared to classical simulation methods (Monte Carlo or deterministic approaches), which usually require a parallel computing architecture to obtain comparable results.

  13. A hybrid approach to simulate multiple photon scattering in X-ray imaging

    Energy Technology Data Exchange (ETDEWEB)

    Freud, N. [CNDRI, Laboratory of Nondestructive Testing using Ionizing Radiations, INSA-Lyon Scientific and Technical University, Bat. Antoine de Saint-Exupery, 20, avenue Albert Einstein, 69621 Villeurbanne Cedex (France)]. E-mail: nicolas.freud@insa-lyon.fr; Letang, J.-M. [CNDRI, Laboratory of Nondestructive Testing using Ionizing Radiations, INSA-Lyon Scientific and Technical University, Bat. Antoine de Saint-Exupery, 20, avenue Albert Einstein, 69621 Villeurbanne Cedex (France); Babot, D. [CNDRI, Laboratory of Nondestructive Testing using Ionizing Radiations, INSA-Lyon Scientific and Technical University, Bat. Antoine de Saint-Exupery, 20, avenue Albert Einstein, 69621 Villeurbanne Cedex (France)

    2005-01-01

    A hybrid simulation approach is proposed to compute the contribution of scattered radiation in X- or γ-ray imaging. This approach takes advantage of the complementarity between the deterministic and probabilistic simulation methods. The proposed hybrid method consists of two stages. Firstly, a set of scattering events occurring in the inspected object is determined by means of classical Monte Carlo simulation. Secondly, this set of scattering events is used as a starting point to compute the energy imparted to the detector, with a deterministic algorithm based on a 'forced detection' scheme. For each scattering event, the probability for the scattered photon to reach each pixel of the detector is calculated using well-known physical models (form factor and incoherent scattering function approximations, in the case of Rayleigh and Compton scattering respectively). The results of the proposed hybrid approach are compared to those obtained with the Monte Carlo method alone (Geant4 code) and found to be in excellent agreement. The convergence of the results when the number of scattering events increases is studied. The proposed hybrid approach makes it possible to simulate the contribution of each type (Compton or Rayleigh) and order of scattering, separately or together, with a single PC, within reasonable computation times (from minutes to hours, depending on the number of pixels of the detector). This constitutes a substantial benefit, compared to classical simulation methods (Monte Carlo or deterministic approaches), which usually require a parallel computing architecture to obtain comparable results.
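
    The second, deterministic stage can be caricatured in a few lines: every sampled scattering event contributes to every detector pixel through a distance- and attenuation-weighted kernel. The geometry, isotropic-scatter kernel and constants below are simplifications for illustration only, not the physical models the authors use.

    ```python
    # Toy 'forced detection' step: each sampled scattering event adds a
    # deterministic contribution to every detector pixel.
    import numpy as np

    rng = np.random.default_rng(5)
    events = rng.uniform([0, 0, 0], [1, 1, 1], size=(1000, 3))  # scatter sites
    weights = rng.uniform(0.5, 1.0, 1000)                       # photon weights
    mu = 0.3                                                    # attenuation, 1/cm (assumed)

    # Detector: 32x32 pixels on the plane z = 2 (assumed layout).
    px = np.linspace(0, 1, 32)
    X, Y = np.meshgrid(px, px)
    pixels = np.stack([X.ravel(), Y.ravel(), np.full(X.size, 2.0)], axis=1)

    image = np.zeros(pixels.shape[0])
    for e, w in zip(events, weights):
        r = np.linalg.norm(pixels - e, axis=1)
        image += w * np.exp(-mu * r) / (4 * np.pi * r**2)  # attenuated 1/r^2 kernel
    print("detector image min/max:", image.min(), image.max())
    ```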

  14. Performance evaluation and ranking of direct sales stores using BSC approach and fuzzy multiple attribute decision-making methods

    Directory of Open Access Journals (Sweden)

    Mojtaba Soltannezhad Dizaji

    2017-07-01

    Full Text Available In an environment where markets go through a volatile process and rapid fundamental changes occur due to technological advances, it is important to ensure and maintain good performance measurement. Organizations, in their performance evaluation, should consider different types of financial and non-financial indicators. In systems like direct sales stores, in which decision units have multiple inputs and outputs, all criteria influencing performance must be combined and examined in one system simultaneously. The purpose of this study is to evaluate the performance of different products sold through the direct sales stores of a firm named Shirin Asal with a combination of the Balanced Scorecard, fuzzy AHP and TOPSIS, so that the weaknesses of subjectivity and selective consideration by evaluators in evaluating the performance indicators are reduced, and evaluation integration is provided by considering the contribution of each indicator and each indicator group of the balanced scorecard. This is an applied case study. Data were collected through a questionnaire drawn from previous studies, the use of experts' opinions and the study of documents in the organization. MATLAB and SPSS were used to analyze the data. In this study, the customer and financial perspectives are of the utmost importance for assessing the company branches. Among the sub-criteria, the rate of new customer acquisition in the customer dimension and the net income to sales ratio in the financial dimension are of the utmost importance.
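
    The TOPSIS step at the end of such a pipeline is compact enough to sketch directly. The decision matrix, the weights (which in the paper would come from fuzzy AHP) and the criteria directions below are all assumed.

    ```python
    # Minimal TOPSIS ranking of store branches on BSC-style criteria.
    import numpy as np

    D = np.array([[7.0, 0.12, 3.0],      # branch A: score, income/sales, complaints
                  [9.0, 0.10, 5.0],      # branch B
                  [6.0, 0.15, 2.0]])     # branch C (all values assumed)
    w = np.array([0.5, 0.3, 0.2])        # criterion weights (assumed)
    benefit = np.array([True, True, False])  # complaints are a cost criterion

    V = w * D / np.linalg.norm(D, axis=0)          # normalized, weighted matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    closeness = d_neg / (d_pos + d_neg)            # higher = better
    print("ranking (best first):", np.argsort(-closeness))
    ```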

  15. Methods of counting ribs on chest CT: the modified sternomanubrial approach

    Energy Technology Data Exchange (ETDEWEB)

    Yi, Kyung Sik; Kim, Sung Jin; Jeon, Min Hee; Lee, Seung Young; Bae, Il Hun [Chungbuk National University, Cheongju (Korea, Republic of)

    2007-08-15

    The purpose of this study was to evaluate the accuracy of each method of counting ribs on chest CT and to propose a new method: the anterior approach with using the sternocostal joints. CT scans of 38 rib lesions of 27 patients were analyzed (fracture: 25, metastasis: 11, benign bone disease: 2). Each lesion was independently counted by three radiologists with using three different methods for counting ribs: the sternoclavicular approach, the xiphisternal approach and the modified sternomanubrial approach. The rib lesions were divided into three parts of evaluation of each method according to the location of the lesion as follows: the upper part (between the first and fourth thoracic vertebra), the middle part (between the fifth and eighth) and the lower part (between the ninth and twelfth). The most accurate method was a modified sternomanubrial approach (99.1%). The accuracies of a xiphisternal approach and a sternoclavicular approach were 95.6% and 88.6%, respectively. A modified sternomanubrial approach showed the highest accuracies in all three parts (100%, 100% and 97.9%, respectively). We propose a new method for counting ribs, the modified sternomanubrial approach, which was more accurate than the known methods in any parts of the bony thorax, and it may be an easier and quicker method than the others in clinical practice.

  16. Cumulative health risk assessment: integrated approaches for multiple contaminants, exposures, and effects

    International Nuclear Information System (INIS)

    Rice, Glenn; Teuschler, Linda; MacDonel, Margaret; Butler, Jim; Finster, Molly; Hertzberg, Rick; Harou, Lynne

    2007-01-01

    Available in abstract form only. Full text of publication follows: As information about environmental contamination has increased in recent years, so has public interest in the combined effects of multiple contaminants. This interest has been highlighted by recent tragedies such as the World Trade Center disaster and hurricane Katrina. In fact, assessing multiple contaminants, exposures, and effects has long been an issue for contaminated sites, including U.S. Department of Energy (DOE) legacy waste sites. Local citizens have explicitly asked the federal government to account for cumulative risks, with contaminants moving offsite via groundwater flow, surface runoff, and air dispersal being a common emphasis. Multiple exposures range from ingestion and inhalation to dermal absorption and external gamma irradiation. Three types of concerns can lead to cumulative assessments: (1) specific sources or releases - e.g., industrial facilities or accidental discharges; (2) contaminant levels - in environmental media or human tissues; and (3) elevated rates of disease - e.g., asthma or cancer. The specific initiator frames the assessment strategy, including a determination of appropriate models to be used. Approaches are being developed to better integrate a variety of data, extending from environmental to internal co-location of contaminants and combined effects, to support more practical assessments of cumulative health risks. (authors)

  17. Full Body Pose Estimation During Occlusion using Multiple Cameras

    DEFF Research Database (Denmark)

    Fihl, Preben; Cosar, Serhan

    people is a very challenging problem for methods based on pictorial structures, as for any other monocular pose estimation method. In this report we present work on a multi-view approach based on pictorial structures that integrates low-level information from multiple calibrated cameras to improve the 2D

  18. An ensemble-based dynamic Bayesian averaging approach for discharge simulations using multiple global precipitation products and hydrological models

    Science.gov (United States)

    Qi, Wei; Liu, Junguo; Yang, Hong; Sweetapple, Chris

    2018-03-01

    Global precipitation products are very important datasets in flow simulations, especially in poorly gauged regions. Uncertainties resulting from precipitation products, hydrological models and their combinations vary with time and data magnitude, and undermine their application to flow simulations. However, previous studies have not quantified these uncertainties individually and explicitly. This study developed an ensemble-based dynamic Bayesian averaging approach (e-Bay) for deterministic discharge simulations using multiple global precipitation products and hydrological models. In this approach, the joint probability of precipitation products and hydrological models being correct is quantified based on uncertainties in maximum and mean estimation, posterior probability is quantified as functions of the magnitude and timing of discharges, and the law of total probability is implemented to calculate expected discharges. Six global fine-resolution precipitation products and two hydrological models of different complexities are included in an illustrative application. e-Bay can effectively quantify uncertainties and therefore generate better deterministic discharges than traditional approaches (weighted average methods with equal and varying weights and maximum likelihood approach). The mean Nash-Sutcliffe Efficiency values of e-Bay are up to 0.97 and 0.85 in training and validation periods respectively, which are at least 0.06 and 0.13 higher than traditional approaches. In addition, with increased training data, assessment criteria values of e-Bay show smaller fluctuations than traditional approaches and its performance becomes outstanding. The proposed e-Bay approach bridges the gap between global precipitation products and their pragmatic applications to discharge simulations, and is beneficial to water resources management in ungauged or poorly gauged regions across the world.
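
    A static Bayesian-model-averaging step, the simplest relative of the dynamic e-Bay scheme, can be sketched as follows. The discharges are simulated and a Gaussian error likelihood is assumed; the magnitude- and timing-dependent posterior of the actual method is not reproduced.

    ```python
    # Static Bayesian model averaging over (precipitation product, model) pairs,
    # a simplified stand-in for the dynamic e-Bay scheme described above.
    import numpy as np

    rng = np.random.default_rng(11)
    obs = rng.gamma(2.0, 50.0, 365)                     # observed discharge, m3/s
    n_members = 12                                      # 6 products x 2 models
    sims = obs + rng.normal(0, 20, (n_members, 365)) \
               + rng.normal(0, 15, (n_members, 1))      # member-specific bias

    # Posterior weight of each member from a Gaussian likelihood of its errors.
    sigma = 25.0                                        # error scale (assumed)
    loglik = -0.5 * np.sum((sims - obs) ** 2, axis=1) / sigma**2
    w = np.exp(loglik - loglik.max())
    w /= w.sum()

    expected = w @ sims                                 # averaged discharge
    nse = 1 - np.sum((expected - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
    print("weights:", np.round(w, 3), "| NSE of averaged discharge:", round(nse, 3))
    ```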

  19. Multiple imputation of rainfall missing data in the Iberian Mediterranean context

    Science.gov (United States)

    Miró, Juan Javier; Caselles, Vicente; Estrela, María José

    2017-11-01

    Given the increasing need for complete rainfall data networks, diverse methods have been proposed in recent years for filling gaps in observed precipitation series, progressively more advanced than traditional approaches to the problem. The present study consisted of validating 10 methods (6 linear, 2 non-linear and 2 hybrid) that allow multiple imputation, i.e., filling at the same time the missing data of multiple incomplete series in a dense network of neighboring stations. These were applied to daily and monthly rainfall in two sectors of the Júcar River Basin Authority (east Iberian Peninsula), which is characterized by high spatial irregularity and difficulty of rainfall estimation. A classification of precipitation according to its genetic origin was applied as pre-processing, and quantile-mapping adjustment as a post-processing technique. The results showed in general a better performance for the non-linear and hybrid methods, highlighting that the non-linear PCA (NLPCA) method considerably outperforms the Self Organizing Maps (SOM) method among non-linear approaches. Among linear methods, the Regularized Expectation Maximization method (RegEM) was the best, but far behind NLPCA. Applying EOF filtering as post-processing of NLPCA (the hybrid approach) yielded the best results.
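
    One of the linear, RegEM-like approaches can be approximated with scikit-learn's chained-equations imputer. The station network below is simulated, and running the imputer with several random seeds yields the multiple imputations.

    ```python
    # Multiple imputation of gaps in daily rainfall at neighboring stations.
    # IterativeImputer is shown as a generic linear approach, not the paper's
    # exact RegEM implementation; the station data are simulated.
    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer

    rng = np.random.default_rng(2)
    n_days, n_stations = 1000, 8
    common = rng.gamma(0.4, 8.0, (n_days, 1))               # shared rain signal
    rain = common * rng.uniform(0.6, 1.4, (1, n_stations)) \
           + rng.gamma(0.1, 2.0, (n_days, n_stations))      # local noise

    rain_obs = rain.copy()
    rain_obs[rng.random(rain.shape) < 0.15] = np.nan        # 15% missing

    # sample_posterior=True draws one stochastic imputation; vary random_state
    # across runs to obtain the M imputations of a multiple-imputation scheme.
    imputer = IterativeImputer(max_iter=10, sample_posterior=True, random_state=0)
    filled = imputer.fit_transform(rain_obs)
    mask = np.isnan(rain_obs)
    rmse = float(np.sqrt(np.mean((filled[mask] - rain[mask]) ** 2)))
    print("RMSE on imputed cells:", round(rmse, 3))
    ```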

  20. A Bayesian Hierarchical Model for Relating Multiple SNPs within Multiple Genes to Disease Risk

    Directory of Open Access Journals (Sweden)

    Lewei Duan

    2013-01-01

    Full Text Available A variety of methods have been proposed for studying the association of multiple genes thought to be involved in a common pathway for a particular disease. Here, we present an extension of a Bayesian hierarchical modeling strategy that allows for multiple SNPs within each gene, with external prior information at either the SNP or gene level. The model involves variable selection at the SNP level through latent indicator variables and Bayesian shrinkage at the gene level towards a prior mean vector and covariance matrix that depend on external information. The entire model is fitted using Markov chain Monte Carlo methods. Simulation studies show that the approach is capable of recovering many of the truly causal SNPs and genes, depending upon their frequency and size of their effects. The method is applied to data on 504 SNPs in 38 candidate genes involved in DNA damage response in the WECARE study of second breast cancers in relation to radiotherapy exposure.

  1. OPTIMAL TOUR CONSTRUCTIONS FOR MULTIPLE MOBILE ROBOTS

    Directory of Open Access Journals (Sweden)

    AMIR A. SHAFIE

    2011-04-01

    Full Text Available Attempts to use mobile robots in a variety of environments are currently limited by their navigational capability, and thus a set of robots must be configured for one specific environment. The problem of navigating an environment is the fundamental problem in mobile robotics, where various methods, including exact and heuristic approaches, have been proposed to solve it. This paper proposes a solution to the navigation problem via the use of multiple robots to explore the environment, employing heuristic methods to navigate the environment using a variant of the Traveling Salesman Problem (TSP) known as the Multiple Traveling Salesman Problem (M-TSP).
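
    A minimal greedy construction for the M-TSP conveys the flavor of such heuristics. This is a generic nearest-neighbor sketch with an assumed depot and city layout, not the specific method evaluated in the paper.

    ```python
    # Greedy nearest-neighbor tour construction for m robots (M-TSP heuristic).
    import numpy as np

    rng = np.random.default_rng(4)
    cities = rng.uniform(0, 100, (30, 2))        # city coordinates (assumed)
    m = 3                                        # number of robots
    depot = np.array([50.0, 50.0])               # common start point (assumed)

    unvisited = set(range(len(cities)))
    pos = [depot.copy() for _ in range(m)]
    tours = [[] for _ in range(m)]
    robot = 0
    while unvisited:
        # Each robot, in turn, claims its nearest unvisited city.
        nxt = min(unvisited, key=lambda i: np.linalg.norm(cities[i] - pos[robot]))
        tours[robot].append(nxt)
        pos[robot] = cities[nxt]
        unvisited.remove(nxt)
        robot = (robot + 1) % m
    for r, tour in enumerate(tours):
        print(f"robot {r}: {len(tour)} cities, tour = {tour}")
    ```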

  2. MULTIPLE RETAINED TEETH IN MANDIBLE: A Case Report

    Directory of Open Access Journals (Sweden)

    Cvetan Cvetanov

    2010-07-01

    Full Text Available Purpose: The aim of this report is to present a rare case of multiple impacted teeth in an adult patient and our proposed clinical approach. Materials and methods: The case is that of a 64-year-old man with multiple impacted teeth (6 impacted teeth in the anterior region of the mandible) that were not suggestive of any syndrome or metabolic disorder. Extraction of the impacted teeth was performed in two stages with a piezosurgery unit under local anaesthesia. For prevention of postsurgical complications, such as swelling, and prevention of postsurgical resorption, cone-shaped plugs of pressed xenogeneic collagen were used. Based on the clinical and radiological examination, we discuss the differential diagnosis and propose a clinical approach for managing the case. Result and Conclusion: The incidence of multiple retained teeth reported in the literature ranges from 10.9% to 40.4%, with retention of the third molars being the most frequent. Clinical reports of multiple retained teeth other than third molars in adult patients are rare in the literature. The case we present is very illustrative, and the treatment approach we used gave an excellent result.

  3. Control of multiple robots using vision sensors

    CERN Document Server

    Aranda, Miguel; Sagüés, Carlos

    2017-01-01

    This monograph introduces novel methods for the control and navigation of mobile robots using multiple-1-d-view models obtained from omni-directional cameras. This approach overcomes field-of-view and robustness limitations, simultaneously enhancing accuracy and simplifying application on real platforms. The authors also address coordinated motion tasks for multiple robots, exploring different system architectures, particularly the use of multiple aerial cameras in driving robot formations on the ground. Again, this has benefits of simplicity, scalability and flexibility. Coverage includes details of: a method for visual robot homing based on a memory of omni-directional images a novel vision-based pose stabilization methodology for non-holonomic ground robots based on sinusoidal-varying control inputs an algorithm to recover a generic motion between two 1-d views and which does not require a third view a novel multi-robot setup where multiple camera-carrying unmanned aerial vehicles are used to observe and c...

  4. Representations of Multiple-Valued Logic Functions

    CERN Document Server

    Stankovic, Radomir S

    2012-01-01

    Compared to binary switching functions, multiple-valued functions offer more compact representations of the information content of signals modeled by logic functions and, therefore, their use fits very well in the general settings of data compression attempts and approaches. The first task in dealing with such signals is to provide mathematical methods for their representation in a way that will make their application in practice feasible.Representation of Multiple-Valued Logic Functions is aimed at providing an accessible introduction to these mathematical techniques that are necessary for ap

  5. Multiple Imputation of a Randomly Censored Covariate Improves Logistic Regression Analysis.

    Science.gov (United States)

    Atem, Folefac D; Qian, Jing; Maye, Jacqueline E; Johnson, Keith A; Betensky, Rebecca A

    2016-01-01

    Randomly censored covariates arise frequently in epidemiologic studies. The most commonly used methods, including complete case and single imputation or substitution, suffer from inefficiency and bias. They make strong parametric assumptions or they consider limit of detection censoring only. We employ multiple imputation, in conjunction with semi-parametric modeling of the censored covariate, to overcome these shortcomings and to facilitate robust estimation. We develop a multiple imputation approach for randomly censored covariates within the framework of a logistic regression model. We use the non-parametric estimate of the covariate distribution or the semiparametric Cox model estimate in the presence of additional covariates in the model. We evaluate this procedure in simulations, and compare its operating characteristics to those from the complete case analysis and a survival regression approach. We apply the procedures to an Alzheimer's study of the association between amyloid positivity and maternal age of onset of dementia. Multiple imputation achieves lower standard errors and higher power than the complete case approach under heavy and moderate censoring and is comparable under light censoring. The survival regression approach achieves the highest power among all procedures, but does not produce interpretable estimates of association. Multiple imputation offers a favorable alternative to complete case analysis and ad hoc substitution methods in the presence of randomly censored covariates within the framework of logistic regression.
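
    The pooling step of any such multiple-imputation procedure follows Rubin's rules, which are easy to sketch. The per-imputation estimates below are illustrative values, not results from the study.

    ```python
    # Rubin's rules for pooling estimates across M imputed datasets -- the
    # final step of a multiple-imputation analysis like the one described.
    import numpy as np

    def pool_rubin(estimates, variances):
        """Pool M point estimates and their within-imputation variances."""
        estimates, variances = np.asarray(estimates), np.asarray(variances)
        M = len(estimates)
        qbar = estimates.mean()                  # pooled point estimate
        ubar = variances.mean()                  # within-imputation variance
        b = estimates.var(ddof=1)                # between-imputation variance
        total_var = ubar + (1 + 1 / M) * b
        return qbar, np.sqrt(total_var)

    # e.g., log-odds ratios from a logistic model fitted to 5 imputed datasets
    betas = [0.42, 0.38, 0.45, 0.40, 0.44]       # illustrative values
    ses = [0.11, 0.12, 0.11, 0.12, 0.11]
    est, se = pool_rubin(betas, np.square(ses))
    print(f"pooled log-OR = {est:.3f} (SE {se:.3f})")
    ```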

  6. Correlation expansion: a powerful alternative multiple scattering calculation method

    International Nuclear Information System (INIS)

    Zhao Haifeng; Wu Ziyu; Sebilleau, Didier

    2008-01-01

    We introduce a powerful alternative expansion method to perform multiple scattering calculations. In contrast to standard MS series expansion, where the scattering contributions are grouped in terms of scattering order and may diverge in the low energy region, this expansion, called correlation expansion, partitions the scattering process into contributions from different small atom groups and converges at all energies. It converges faster than MS series expansion when the latter is convergent. Furthermore, it takes less memory than the full MS method so it can be used in the near edge region without any divergence problem, even for large clusters. The correlation expansion framework we derive here is very general and can serve to calculate all the elements of the scattering path operator matrix. Photoelectron diffraction calculations in a cluster containing 23 atoms are presented to test the method and compare it to full MS and standard MS series expansion

  7. Study of the multiple scattering effect in TEBENE using the Monte Carlo method

    International Nuclear Information System (INIS)

    Singkarat, Somsorn.

    1990-01-01

    The neutron time-of-flight and energy spectra, from the TEBENE set-up, have been calculated by a computer program using the Monte Carlo method. The neutron multiple scattering within the polyethylene scatterer ring is closely investigated. The results show that multiple scattering has a significant effect on the detected neutron yield. They also indicate that the thickness of the scatterer ring has to be carefully chosen. (author)

  8. Probabilistic atlas-guided eigen-organ method for simultaneous bounding box estimation of multiple organs in volumetric CT images

    International Nuclear Information System (INIS)

    Yao, Cong; Wada, Takashige; Shimizu, Akinobu; Kobatake, Hidefumi; Nawano, Shigeru

    2006-01-01

    We propose an approach for the simultaneous bounding box estimation of multiple organs in volumetric CT images. Local eigen-organ spaces are constructed for different types of training organs, and a global eigen-space, which describes the spatial relationships between the organs, is also constructed. Each volume of interest in the abdominal CT image is projected into the local eigen-organ spaces, and several candidate locations are determined. The final selection of the organ locations is made by projecting the set of candidate locations into the global eigen-space. A probabilistic atlas of organs is used to eliminate locations with low probability and to guide the selection of candidate locations. Evaluation by the leave-one-out method using 10 volumetric abdominal CT images showed that the proposed method provided an average accuracy of 80.38% for 11 different organ types. (author)

  9. Improved modified energy ratio method using a multi-window approach for accurate arrival picking

    Science.gov (United States)

    Lee, Minho; Byun, Joongmoo; Kim, Dowan; Choi, Jihun; Kim, Myungsun

    2017-04-01

    To identify accurately the location of microseismic events generated during hydraulic fracture stimulation, it is necessary to detect the first break of the P- and S-wave arrival times recorded at multiple receivers. These microseismic data often contain high-amplitude noise, which makes it difficult to identify the P- and S-wave arrival times. The short-term-average to long-term-average (STA/LTA) and modified energy ratio (MER) methods are based on the differences in the energy densities of the noise and signal, and are widely used to identify the P-wave arrival times. The MER method yields more consistent results than the STA/LTA method for data with a low signal-to-noise (S/N) ratio. However, although the MER method shows good results regardless of the delay of the signal wavelet for signals with a high S/N ratio, it may yield poor results if the signal is contaminated by high-amplitude noise and does not have the minimum delay. Here we describe an improved MER (IMER) method, whereby we apply a multiple-windowing approach to overcome the limitations of the MER method. The IMER method contains calculations of an additional MER value using a third window (in addition to the original MER window), as well as the application of a moving average filter to each MER data point to eliminate high-frequency fluctuations in the original MER distributions. The resulting distribution makes it easier to apply thresholding. The proposed IMER method was applied to synthetic and real datasets with various S/N ratios and mixed-delay wavelets. The results show that the IMER method yields a high accuracy rate of around 80% within five sample errors for the synthetic datasets. Likewise, in the case of real datasets, 94.56% of the P-wave picking results obtained by the IMER method had a deviation of less than 0.5 ms (corresponding to 2 samples) from the manual picks.
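
    The MER statistic underlying both methods is short enough to sketch. The version below is the basic single-window MER on a synthetic trace; the IMER's third window and moving-average smoothing are omitted, and the trace parameters are assumed.

    ```python
    # MER-style first-break picking on a noisy synthetic trace.
    import numpy as np

    def mer(trace, win):
        """Modified energy ratio: (post/pre energy ratio * |x|)^3 per sample."""
        n = len(trace)
        e = trace.astype(float) ** 2
        out = np.zeros(n)
        for i in range(win, n - win):
            pre = e[i - win:i].sum() + 1e-12
            post = e[i:i + win].sum()
            out[i] = (post / pre * abs(trace[i])) ** 3
        return out

    rng = np.random.default_rng(6)
    n, onset = 2000, 800
    trace = rng.normal(0, 0.2, n)                # pre-arrival noise
    t = np.arange(n - onset)
    trace[onset:] += np.sin(2 * np.pi * 0.02 * t) * np.exp(-t / 400.0)  # arrival

    scores = mer(trace, win=100)
    print("true onset:", onset, "| MER pick:", int(np.argmax(scores)))
    ```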

  10. Detection-Discrimination Method for Multiple Repeater False Targets Based on Radar Polarization Echoes

    Directory of Open Access Journals (Sweden)

    Z. W. ZONG

    2014-04-01

    Full Text Available Multiple repeater false targets (RFTs), created by the digital radio frequency memory (DRFM) system of a jammer, are widely used in practice to exhaust the limited tracking and discrimination resources of a defence radar. In this paper, a common characteristic of the radar polarization echoes of multiple RFTs is used for target recognition. Based on the echoes from two receiving polarization channels, the instantaneous polarization ratio (IPR) is defined and its variance is derived by employing a Taylor series expansion. A detection-discrimination method is designed based on probability grids. Using data from a microwave anechoic chamber, the detection threshold of the method is confirmed. Theoretical analysis and simulations indicate that the method is valid and feasible. Furthermore, the influence of the signal-to-noise ratio (SNR) on the estimation performance of the IPRs of RFTs is also covered.

  11. Synthesizing monochromatic 3-D images by multiple-exposure rainbow holography with vertical area-partition approach

    Institute of Scientific and Technical Information of China (English)

    翟宏琛; 王明伟; 刘福民; 母国光

    2002-01-01

    We report for the first time the theoretical analysis and experimental results of a white-light reconstructed monochromatic 3-D image synthesizing tomograms by multiple rainbow holography with a vertical-area partition (VAP) approach. The theoretical and experimental results show that a 3-D monochromatic image can be synthesized by recording the master hologram with the VAP approach without any distortions either in gray scale or in geometrical position. A 3-D monochromatic image synthesized from a series of medical tomograms is presented in this paper for the first time.

  12. An Improved Clutter Suppression Method for Weather Radars Using Multiple Pulse Repetition Time Technique

    Directory of Open Access Journals (Sweden)

    Yingjie Yu

    2017-01-01

    Full Text Available This paper describes the implementation of an improved clutter suppression method for the multiple pulse repetition time (PRT) technique based on simulated radar data. The suppression method is constructed using maximum likelihood methodology in the time domain and is called the parametric time domain method (PTDM). The procedure relies on the assumption that precipitation and clutter signal spectra follow a Gaussian functional form. Four interleaved pulse repetition frequencies (PRFs) are used in this work (952, 833, 667, and 513 Hz). Based on radar simulation, it is shown that the new method provides accurate retrieval of Doppler velocity even in the case of strong clutter contamination. The obtained velocity is nearly unbiased over the entire Nyquist velocity interval. The performance of the method is also illustrated on simulated radar data for a plan position indicator (PPI) scan. Compared with staggered 2-PRT transmission schemes with PTDM, the proposed method presents better estimation accuracy under certain clutter situations.

  13. Multiple Signal Classification Algorithm Based Electric Dipole Source Localization Method in an Underwater Environment

    Directory of Open Access Journals (Sweden)

    Yidong Xu

    2017-10-01

    Full Text Available A novel localization method based on the multiple signal classification (MUSIC) algorithm is proposed for positioning an electric dipole source in a confined underwater environment using an electric-dipole receiving antenna array. In this method, the boundary element method (BEM) is introduced to analyze the boundary of the confined region by use of a matrix equation. The voltage of each dipole pair is used as spatial-temporal localization data; unlike the conventional field-based localization method, it does not require the field component in each direction, so it can be easily implemented in practical engineering applications. A global/multiple-region/conjugate gradient (CG) hybrid search method is then used to reduce the computational burden and improve the operation speed. Two localization simulation models and a physical experiment are conducted. Both the simulation results and the physical experiment show accurate positioning performance, which helps to verify the effectiveness of the proposed localization method in underwater environments.
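
    A minimal MUSIC sketch follows. It uses a uniform linear array and an angular grid purely for illustration; in the paper, the steering vectors would be built from the BEM model of the confined region, and the grid scan would be replaced by the global/multiple-region/CG hybrid search.

```python
import numpy as np

def music_spectrum(snapshots, steering, n_sources):
    # snapshots: (n_sensors, n_samples); steering: (n_sensors, n_grid)
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # sample covariance
    _, eigvecs = np.linalg.eigh(R)                            # ascending eigenvalues
    En = eigvecs[:, : R.shape[0] - n_sources]                 # noise subspace
    proj = np.abs(steering.conj().T @ En) ** 2
    return 1.0 / proj.sum(axis=1)                             # peaks at source locations

n_sensors, true_angle = 8, 20.0
grid = np.linspace(-90, 90, 361)
a = lambda deg: np.exp(1j * np.pi * np.arange(n_sensors)[:, None]
                       * np.sin(np.deg2rad(deg)))             # ULA steering (assumed)
rng = np.random.default_rng(0)
sig = (a(np.array([true_angle])) @ rng.normal(size=(1, 200))
       + 0.1 * (rng.normal(size=(n_sensors, 200))
                + 1j * rng.normal(size=(n_sensors, 200))))
spec = music_spectrum(sig, a(grid), n_sources=1)
print(grid[np.argmax(spec)])                                  # ~20 degrees
```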

  14. Applied immuno-epidemiological research: an approach for integrating existing knowledge into the statistical analysis of multiple immune markers.

    Science.gov (United States)

    Genser, Bernd; Fischer, Joachim E; Figueiredo, Camila A; Alcântara-Neves, Neuza; Barreto, Mauricio L; Cooper, Philip J; Amorim, Leila D; Saemann, Marcus D; Weichhart, Thomas; Rodrigues, Laura C

    2016-05-20

    Immunologists often measure several correlated immunological markers, such as concentrations of different cytokines produced by different immune cells and/or measured under different conditions, to draw insights from complex immunological mechanisms. Although there have been recent methodological efforts to improve the statistical analysis of immunological data, a framework is still needed for the simultaneous analysis of multiple, often correlated, immune markers. This framework would allow the immunologists' hypotheses about the underlying biological mechanisms to be integrated. We present an analytical approach for statistical analysis of correlated immune markers, such as those commonly collected in modern immuno-epidemiological studies. We demonstrate i) how to deal with interdependencies among multiple measurements of the same immune marker, ii) how to analyse association patterns among different markers, iii) how to aggregate different measures and/or markers to immunological summary scores, iv) how to model the inter-relationships among these scores, and v) how to use these scores in epidemiological association analyses. We illustrate the application of our approach to multiple cytokine measurements from 818 children enrolled in a large immuno-epidemiological study (SCAALA Salvador), which aimed to quantify the major immunological mechanisms underlying atopic diseases or asthma. We demonstrate how to aggregate systematically the information captured in multiple cytokine measurements to immunological summary scores aimed at reflecting the presumed underlying immunological mechanisms (Th1/Th2 balance and immune regulatory network). We show how these aggregated immune scores can be used as predictors in regression models with outcomes of immunological studies (e.g. specific IgE) and compare the results to those obtained by a traditional multivariate regression approach. The proposed analytical approach may be especially useful to quantify complex immune…
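
    A toy sketch of the score-aggregation steps iii)-v) is given below. All marker names, effect sizes, and data are invented for illustration; the authors' actual aggregation rules are more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 818

def zscore(a):
    return (a - a.mean(0)) / a.std(0)

# z-standardize repeated measurements of each cytokine (3 replicates assumed)
ifng = zscore(rng.lognormal(size=(n, 3)))      # repeated IFN-gamma measures (Th1)
il13 = zscore(rng.lognormal(size=(n, 3)))      # repeated IL-13 measures (Th2)

th1, th2 = ifng.mean(1), il13.mean(1)          # marker-level summary scores
balance = th1 - th2                            # summary score: Th1/Th2 balance

log_ige = -0.4 * balance + rng.normal(size=n)  # synthetic outcome (e.g. log specific IgE)
slope, intercept = np.polyfit(balance, log_ige, 1)
print(f"score-outcome association: {slope:.2f}")
```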

  15. Seismic PSA method for multiple nuclear power plants in a site

    Energy Technology Data Exchange (ETDEWEB)

    Hakata, Tadakuni [Nuclear Safety Commission, Tokyo (Japan)

    2007-07-15

    The maximum number of nuclear power plants at a single site is eight, and about 50% of the world's power plants are built at sites with three or more plants. Such sites carry a potential risk of simultaneous damage to multiple plants, especially during external events. A seismic probabilistic safety assessment method (Level-1 PSA) for multi-unit sites with up to nine units has been developed. The models include fault-tree-linked Monte Carlo computation, taking into consideration multivariate correlations of components and systems, from partial to complete, within and across units. The models were implemented in a computer program, CORAL reef. Sample analyses and sensitivity studies were performed to verify the models and algorithms and to understand some of the risk insights and risk metrics, such as the site core damage frequency (CDF per site-year) for multiple reactor plants. This study will contribute to realistic, state-of-the-art seismic PSA that takes multiple-reactor sites into consideration, and to the enhancement of seismic safety. (author)
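
    The core idea, correlated seismic failures across co-located units, can be conveyed with a few lines of Monte Carlo. The fragility parameters and correlation value below are assumptions for illustration, not values from the CORAL reef code.

```python
import numpy as np

rng = np.random.default_rng(1)
n_units, n_trials, rho = 4, 100_000, 0.6                 # assumed site and correlation
# lognormal seismic capacities, correlated across units (common design/soil conditions)
cov = np.full((n_units, n_units), rho) + (1 - rho) * np.eye(n_units)
ln_capacity = rng.multivariate_normal(np.zeros(n_units), cov, size=n_trials)
# one common ground-motion demand per seismic event, shared by all units
ln_demand = rng.normal(-0.5, 1.0, size=(n_trials, 1))
damaged = ln_demand > ln_capacity                        # per-unit damage indicator
multi = (damaged.sum(axis=1) >= 2).mean()
print(f"P(>=2 units damaged | event) ~ {multi:.3f}")     # driven up by correlation
```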

  16. Multiple external hazards compound level 3 PSA methods research of nuclear power plant

    Science.gov (United States)

    Wang, Handing; Liang, Xiaoyu; Zhang, Xiaoming; Yang, Jianfeng; Liu, Weidong; Lei, Dina

    2017-01-01

    The 2011 Fukushima nuclear power plant severe accident was caused by both an earthquake and a tsunami, and resulted in a large release of radioactive nuclides that contaminated the surrounding environment. Although the probability of such an accident is extremely small, once it happens it is likely to release large amounts of radioactive material into the environment and cause radioactive contamination. Therefore, studying accident consequences is important and essential for improving nuclear power plant design and management. Level 3 PSA methods for nuclear power plants can be used to analyze radiological consequences and to quantify the risk to public health around nuclear power plants. Based on studies of multiple-external-hazard compound Level 3 PSA methods for nuclear power plants, a description of the corresponding technology roadmap and its important technical elements, and taking a coastal nuclear power plant as the reference site, we analyzed the off-site consequences of severe accidents caused by multiple external hazards. Finally, we discuss probabilistic risk studies of off-site consequences and their applications under multiple-external-hazard compound conditions, and explain the feasibility and reasonableness of implementing emergency plans.

  17. 29 CFR 4010.12 - Alternative method of compliance for certain sponsors of multiple employer plans.

    Science.gov (United States)

    2010-07-01

    ... BENEFIT GUARANTY CORPORATION CERTAIN REPORTING AND DISCLOSURE REQUIREMENTS ANNUAL FINANCIAL AND ACTUARIAL INFORMATION REPORTING § 4010.12 Alternative method of compliance for certain sponsors of multiple employer... part for an information year if any contributing sponsor of the multiple employer plan provides a...

  18. Multiple-walled BN nanotubes obtained with a mechanical alloying technique

    International Nuclear Information System (INIS)

    Rosas, G.; Sistos, J.; Ascencio, J.A.; Medina, A.; Perez, R.

    2005-01-01

    An experimental method to obtain multiple-walled BN nanotubes using low energy is presented. The method is based on mechanical alloying techniques, with elemental boron powders and nitrogen gas mixed in an autoclave at room temperature. The chemical and structural characteristics of the multiple-walled nanotubes were obtained using different techniques, such as X-ray diffraction, transmission electron microscopy, EELS microanalysis, high-resolution electron microscopy images, and theoretical simulations based on the multislice approach of electron diffraction theory. This investigation clearly illustrates the production of multiple-walled BN nanotubes at room temperature. These results open up a new, low-cost kind of synthesis method with important prospects for large-quantity production. (orig.)

  19. Paper Prototyping: The Surplus Merit of a Multi-Method Approach

    Directory of Open Access Journals (Sweden)

    Stephanie Bettina Linek

    2015-07-01

    Full Text Available This article describes a multi-method approach for usability testing. The approach combines paper prototyping and think-aloud with two supplemental methods: advanced scribbling and a handicraft task. In advanced scribbling, participants use different colors to mark important, unnecessary, and confusing elements in a paper prototype. In the handicraft task, participants build a paper prototype of their desired version. Both methods deliver additional information on the needs and expectations of potential users and provide helpful indicators for clarifying complex or contradictory findings. The multi-method approach and its surplus benefit are illustrated by a pilot study on the redesign of the homepage of a library 2.0. The findings provide positive evidence for the applicability of advanced scribbling and the handicraft task, as well as for the surplus merit of the multi-method approach. The article closes with a discussion and outlook. URN: http://nbn-resolving.de/urn:nbn:de:0114-fqs150379

  20. Phylo: a citizen science approach for improving multiple sequence alignment.

    Directory of Open Access Journals (Sweden)

    Alexander Kawrykow

    Full Text Available BACKGROUND: Comparative genomics, or the study of the relationships of genome structure and function across different species, offers a powerful tool for studying evolution, annotating genomes, and understanding the causes of various genetic disorders. However, aligning multiple sequences of DNA, an essential intermediate step for most types of analyses, is a difficult computational task. In parallel, citizen science, an approach that takes advantage of the fact that the human brain is exquisitely tuned to solving specific types of problems, is becoming increasingly popular. There, instances of hard computational problems are dispatched to a crowd of non-expert human game players and solutions are sent back to a central server. METHODOLOGY/PRINCIPAL FINDINGS: We introduce Phylo, a human-based computing framework applying "crowd sourcing" techniques to solve the Multiple Sequence Alignment (MSA) problem. The key idea of Phylo is to convert the MSA problem into a casual game that can be played by ordinary web users with minimal prior knowledge of the biological context. We applied this strategy to improve the alignment of the promoters of disease-related genes from up to 44 vertebrate species. Since the launch in November 2010, we have received more than 350,000 solutions submitted by more than 12,000 registered users. Our results show that the submitted solutions contributed to improving the accuracy of up to 70% of the alignment blocks considered. CONCLUSIONS/SIGNIFICANCE: We demonstrate that, combined with classical algorithms, crowd computing techniques can be successfully used to help improve the accuracy of MSA. More importantly, we show that an NP-hard computational problem can be embedded in a casual game that can be easily played by people without significant scientific training. This suggests that citizen science approaches can be used to exploit the billions of "human-brain peta-flops" of computation that are spent every day playing games.

  1. Total System Performance Assessment-License Application Methods and Approach

    Energy Technology Data Exchange (ETDEWEB)

    J. McNeish

    2002-09-13

    ''Total System Performance Assessment-License Application (TSPA-LA) Methods and Approach'' provides the top-level method and approach for conducting the TSPA-LA model development and analyses. The method and approach is responsive to the criteria set forth in Total System Performance Assessment Integration (TSPAI) Key Technical Issue (KTI) agreements, the ''Yucca Mountain Review Plan'' (CNWRA 2002 [158449]), and 10 CFR Part 63. This introductory section provides an overview of the TSPA-LA, the projected TSPA-LA documentation structure, and the goals of the document. It also provides a brief discussion of the regulatory framework, the approach to risk management of the development and analysis of the model, and the overall organization of the document. The section closes with some important conventions that are utilized in this document.

  2. Practice-oriented optical thin film growth simulation via multiple scale approach

    Energy Technology Data Exchange (ETDEWEB)

    Turowski, Marcus, E-mail: m.turowski@lzh.de [Laser Zentrum Hannover e.V., Hollerithallee 8, Hannover 30419 (Germany); Jupé, Marco [Laser Zentrum Hannover e.V., Hollerithallee 8, Hannover 30419 (Germany); QUEST: Centre of Quantum Engineering and Space-Time Research, Leibniz Universität Hannover (Germany); Melzig, Thomas [Fraunhofer Institute for Surface Engineering and Thin Films IST, Bienroder Weg 54e, Braunschweig 30108 (Germany); Moskovkin, Pavel [Research Centre for Physics of Matter and Radiation (PMR-LARN), University of Namur (FUNDP), 61 rue de Bruxelles, Namur 5000 (Belgium); Daniel, Alain [Centre for Research in Metallurgy, CRM, 21 Avenue du bois Saint Jean, Liège 4000 (Belgium); Pflug, Andreas [Fraunhofer Institute for Surface Engineering and Thin Films IST, Bienroder Weg 54e, Braunschweig 30108 (Germany); Lucas, Stéphane [Research Centre for Physics of Matter and Radiation (PMR-LARN), University of Namur (FUNDP), 61 rue de Bruxelles, Namur 5000 (Belgium); Ristau, Detlev [Laser Zentrum Hannover e.V., Hollerithallee 8, Hannover 30419 (Germany); QUEST: Centre of Quantum Engineering and Space-Time Research, Leibniz Universität Hannover (Germany)

    2015-10-01

    Simulation of the coating process is a very promising approach to understanding thin film formation. Nevertheless, this complex matter cannot be covered by a single simulation technique. To consider all mechanisms and processes influencing the optical properties of the growing thin films, various common theoretical methods have been combined into a multi-scale model approach. The simulation techniques have been selected to describe all processes in the coating chamber, especially the various mechanisms of thin film growth, and to enable analysis of the resulting structural as well as optical and electronic layer properties. All methods are merged with adapted communication interfaces to achieve optimum compatibility of the different approaches and to generate physically meaningful results. The present contribution offers an approach for the full simulation of an Ion Beam Sputtering (IBS) coating process, combining direct simulation Monte Carlo, classical molecular dynamics, kinetic Monte Carlo, and density functional theory. The simulation is performed, as an example, for an existing IBS coating plant to validate the developed multi-scale approach. Finally, the modeled results are compared to experimental data. - Highlights: • A model approach for simulating an Ion Beam Sputtering (IBS) process is presented. • In order to combine the different techniques, optimized interfaces are developed. • The transport of atomic species in the coating chamber is calculated. • We modeled structural and optical film properties based on simulated IBS parameters. • The modeled and the experimental refractive index data fit very well.

  3. A mixed methods study of multiple health behaviors among individuals with stroke.

    Science.gov (United States)

    Plow, Matthew; Moore, Shirley M; Sajatovic, Martha; Katzan, Irene

    2017-01-01

    Individuals with stroke often have multiple cardiovascular risk factors that necessitate promoting engagement in multiple health behaviors. However, observational studies of individuals with stroke have typically focused on promoting a single health behavior. Thus, there is a poor understanding of linkages between healthy behaviors and the circumstances in which factors, such as stroke impairments, may influence a single or multiple health behaviors. We conducted a mixed methods convergent parallel study of 25 individuals with stroke to examine the relationships between stroke impairments and physical activity, sleep, and nutrition. Our goal was to gain further insight into possible strategies to promote multiple health behaviors among individuals with stroke. This study focused on physical activity, sleep, and nutrition because of their importance in achieving energy balance, maintaining a healthy weight, and reducing cardiovascular risks. Qualitative and quantitative data were collected concurrently, with the former being prioritized over the latter. Qualitative data was prioritized in order to develop a conceptual model of engagement in multiple health behaviors among individuals with stroke. Qualitative and quantitative data were analyzed independently and then were integrated during the inference stage to develop meta-inferences. The 25 individuals with stroke completed closed-ended questionnaires on healthy behaviors and physical function. They also participated in face-to-face focus groups and one-to-one phone interviews. We found statistically significant and moderate correlations between hand function and healthy eating habits (r = 0.45), sleep disturbances and limitations in activities of daily living (r = -0.55), BMI and limitations in activities of daily living (r = -0.49), physical activity and limitations in activities of daily living (r = 0.41), mobility impairments and BMI (r = -0.41), sleep disturbances and physical…

  4. Magic Finger Teaching Method in Learning Multiplication Facts among Deaf Students

    Science.gov (United States)

    Thai, Liong; Yasin, Mohd. Hanafi Mohd

    2016-01-01

    Deaf students face problems in mastering multiplication facts. This study aims to identify the effectiveness of the Magic Finger Teaching Method (MFTM) and students' perception of MFTM. The research employs a quasi-experimental, non-equivalent pre-test and post-test control group design. Pre-tests, post-tests and questionnaires were used. As…

  5. Multiple Criteria and Multiple Periods Performance Analysis: The Comparison of North African Railways

    Science.gov (United States)

    Sabri, Karim; Colson, Gérard E.; Mbangala, Augustin M.

    2008-10-01

    Multi-period differences in technical and financial performance are analysed by comparing five North African railways over the period 1990-2004. A first approach is based on the Malmquist DEA TFP index for measuring total factor productivity change, decomposed into technical efficiency change and technological change. A multiple criteria analysis is also performed using the PROMETHEE II method and the software ARGOS. These methods provide complementary detailed information, especially by discriminating technological and management progress (Malmquist) and two dimensions of performance (PROMETHEE), namely service to the community and enterprise performance, which are often in conflict.

  6. Hierarchical approach to optimization of parallel matrix multiplication on large-scale platforms

    KAUST Repository

    Hasanov, Khalid

    2014-03-04

    © 2014, Springer Science+Business Media New York. Many state-of-the-art parallel algorithms, which are widely used in scientific applications executed on high-end computing systems, were designed in the twentieth century with relatively small-scale parallelism in mind. Indeed, while in the 1990s a system with a few hundred cores was considered a powerful supercomputer, modern top supercomputers have millions of cores. In this paper, we present a hierarchical approach to the optimization of message-passing parallel algorithms for execution on large-scale distributed-memory systems. The idea is to reduce the communication cost by introducing hierarchy, and hence more parallelism, in the communication scheme. We apply this approach to SUMMA, the state-of-the-art parallel algorithm for matrix–matrix multiplication, and demonstrate both theoretically and experimentally that the modified Hierarchical SUMMA significantly improves the communication cost and the overall performance on large-scale platforms.
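
    A serial stand-in for the hierarchical blocking idea is sketched below: an extra level of blocking groups the inner products so that, on a real distributed machine, broadcasts would stay inside small process groups. This toy numpy version only demonstrates the two-level loop structure, not the MPI communication itself; the block counts are arbitrary.

```python
import numpy as np

def hblock_matmul(A, B, outer=4, inner=2):
    """Two-level blocked matrix multiply: outer blocks model process groups,
    inner blocks model processes within a group (serial illustration only)."""
    n = A.shape[0]
    C = np.zeros_like(A)
    bo = n // outer                      # outer (group-level) block size
    bi = bo // inner                     # inner (process-level) block size
    for I in range(outer):
        for J in range(outer):
            for K in range(outer):       # group-level SUMMA step
                for k in range(inner):   # inner step: reuses the group's panel
                    ks = K * bo + k * bi
                    C[I*bo:(I+1)*bo, J*bo:(J+1)*bo] += (
                        A[I*bo:(I+1)*bo, ks:ks+bi] @ B[ks:ks+bi, J*bo:(J+1)*bo]
                    )
    return C

A = np.random.rand(16, 16); B = np.random.rand(16, 16)
assert np.allclose(hblock_matmul(A, B), A @ B)   # same result as a flat multiply
```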

  7. A Fast Multiple-Kernel Method With Applications to Detect Gene-Environment Interaction.

    Science.gov (United States)

    Marceau, Rachel; Lu, Wenbin; Holloway, Shannon; Sale, Michèle M; Worrall, Bradford B; Williams, Stephen R; Hsu, Fang-Chi; Tzeng, Jung-Ying

    2015-09-01

    Kernel machine (KM) models are a powerful tool for exploring associations between sets of genetic variants and complex traits. Although most KM methods use a single kernel function to assess the marginal effect of a variable set, KM analyses involving multiple kernels have become increasingly popular. Multikernel analysis allows researchers to study more complex problems, such as assessing gene-gene or gene-environment interactions, incorporating variance-component based methods for population substructure into rare-variant association testing, and assessing the conditional effects of a variable set adjusting for other variable sets. The KM framework is robust, powerful, and provides efficient dimension reduction for multifactor analyses, but requires the estimation of high dimensional nuisance parameters. Traditional estimation techniques, including regularization and the "expectation-maximization (EM)" algorithm, have a large computational cost and are not scalable to large sample sizes needed for rare variant analysis. Therefore, under the context of gene-environment interaction, we propose a computationally efficient and statistically rigorous "fastKM" algorithm for multikernel analysis that is based on a low-rank approximation to the nuisance effect kernel matrices. Our algorithm is applicable to various trait types (e.g., continuous, binary, and survival traits) and can be implemented using any existing single-kernel analysis software. Through extensive simulation studies, we show that our algorithm has similar performance to an EM-based KM approach for quantitative traits while running much faster. We also apply our method to the Vitamin Intervention for Stroke Prevention (VISP) clinical trial, examining gene-by-vitamin effects on recurrent stroke risk and gene-by-age effects on change in homocysteine level. © 2015 WILEY PERIODICALS, INC.
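
    The low-rank ingredient of fastKM can be illustrated with a Nyström factorization of a kernel matrix; the sketch below is a generic example, not the authors' implementation, and the RBF kernel, landmark count, and data are assumptions.

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom(X, m, gamma=0.5, seed=0):
    """Rank-m Nystrom factor L with K ~ L @ L.T, built from m landmark points."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)   # landmark subset
    C = rbf(X, X[idx], gamma)                         # n x m cross-kernel
    W = C[idx, :]                                     # m x m landmark kernel
    evals, evecs = np.linalg.eigh(W)                  # K ~ C W^{-1} C^T
    evals = np.clip(evals, 1e-10, None)
    return C @ evecs / np.sqrt(evals)

X = np.random.default_rng(1).normal(size=(200, 5))
L = nystrom(X, m=20)
err = np.linalg.norm(rbf(X, X) - L @ L.T) / np.linalg.norm(rbf(X, X))
print(f"relative approximation error: {err:.3f}")     # small for smooth kernels
```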

  8. Total System Performance Assessment - License Application Methods and Approach

    International Nuclear Information System (INIS)

    McNeish, J.

    2003-01-01

    ''Total System Performance Assessment-License Application (TSPA-LA) Methods and Approach'' provides the top-level method and approach for conducting the TSPA-LA model development and analyses. The method and approach is responsive to the criteria set forth in Total System Performance Assessment Integration (TSPAI) Key Technical Issues (KTIs) identified in agreements with the U.S. Nuclear Regulatory Commission, the ''Yucca Mountain Review Plan'' (YMRP), ''Final Report'' (NRC 2003 [163274]), and the NRC final rule 10 CFR Part 63 (NRC 2002 [156605]). This introductory section provides an overview of the TSPA-LA, the projected TSPA-LA documentation structure, and the goals of the document. It also provides a brief discussion of the regulatory framework, the approach to risk management of the development and analysis of the model, and the overall organization of the document. The section closes with some important conventions that are used in this document

  9. Approach to proliferation risk assessment based on multiple objective analysis framework

    Energy Technology Data Exchange (ETDEWEB)

    Andrianov, A.; Kuptsov, I. [Obninsk Institute for Nuclear Power Engineering of NNRU MEPhI (Russian Federation); Studgorodok 1, Obninsk, Kaluga region, 249030 (Russian Federation)

    2013-07-01

    The approach to the assessment of proliferation risk using methods of multi-criteria decision making and multi-objective optimization is presented. The approach allows the specific features of the national nuclear infrastructure and possible proliferation strategies (motivations, intentions, and capabilities) to be taken into account. Three examples of applying the approach are shown. First, the approach has been used to evaluate the attractiveness of HEU (highly enriched uranium) production scenarios at a clandestine enrichment facility using centrifuge enrichment technology. Secondly, the approach has been applied to assess the attractiveness of scenarios for undeclared production of plutonium or HEU by theft of materials circulating in nuclear fuel cycle facilities and thermal reactors. Thirdly, the approach has been used to perform a comparative analysis of the structures of developing nuclear power systems based on different types of nuclear fuel cycles, the analysis being based on indicators of proliferation risk.

  10. Approach to proliferation risk assessment based on multiple objective analysis framework

    International Nuclear Information System (INIS)

    Andrianov, A.; Kuptsov, I.

    2013-01-01

    The approach to the assessment of proliferation risk using methods of multi-criteria decision making and multi-objective optimization is presented. The approach allows the specific features of the national nuclear infrastructure and possible proliferation strategies (motivations, intentions, and capabilities) to be taken into account. Three examples of applying the approach are shown. First, the approach has been used to evaluate the attractiveness of HEU (highly enriched uranium) production scenarios at a clandestine enrichment facility using centrifuge enrichment technology. Secondly, the approach has been applied to assess the attractiveness of scenarios for undeclared production of plutonium or HEU by theft of materials circulating in nuclear fuel cycle facilities and thermal reactors. Thirdly, the approach has been used to perform a comparative analysis of the structures of developing nuclear power systems based on different types of nuclear fuel cycles, the analysis being based on indicators of proliferation risk.

  11. Dual worth trade-off method and its application for solving multiple criteria decision making problems

    Institute of Scientific and Technical Information of China (English)

    Feng Junwen

    2006-01-01

    To overcome the limitations of the traditional surrogate worth trade-off (SWT) method and to solve multiple criteria decision making problems more efficiently and interactively, a new method, labeled the dual worth trade-off (DWT) method, is proposed. The DWT method dynamically uses duality theory related to the multiple criteria decision making problem and the analytic hierarchy process technique to obtain the decision maker's solution preference information and finally find the decision maker's satisfactory compromise solution. Through the interactive process between the analyst and the decision maker, trade-off information is solicited and treated properly, and the representative subset of efficient solutions and the satisfactory solution to the problem are found. The implementation procedure for the DWT method is presented. The effectiveness and applicability of the DWT method are shown by a practical case study in the field of production scheduling.

  12. Analyzing Statistical Mediation with Multiple Informants: A New Approach with an Application in Clinical Psychology.

    Science.gov (United States)

    Papa, Lesther A; Litson, Kaylee; Lockhart, Ginger; Chassin, Laurie; Geiser, Christian

    2015-01-01

    Testing mediation models is critical for identifying potential variables that need to be targeted to effectively change one or more outcome variables. In addition, it is now common practice for clinicians to use multiple informant (MI) data in studies of statistical mediation. By coupling the use of MI data with statistical mediation analysis, clinical researchers can combine the benefits of both techniques. Integrating the information from MIs into a statistical mediation model creates various methodological and practical challenges. The authors review prior methodological approaches to MI mediation analysis in clinical research and propose a new latent variable approach that overcomes some limitations of prior approaches. An application of the new approach to mother, father, and child reports of impulsivity, frustration tolerance, and externalizing problems (N = 454) is presented. The results showed that frustration tolerance mediated the relationship between impulsivity and externalizing problems. The new approach allows for a more comprehensive and effective use of MI data when testing mediation models.
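
    For readers unfamiliar with the indirect effect being modeled, here is a toy mediation example (X -> M -> Y) with a percentile-bootstrap confidence interval. The mediator is a plain average of informant reports, a deliberate simplification of the latent-variable treatment in the article; all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 454
x = rng.normal(size=n)                                         # impulsivity
m = (0.5 * x[:, None] + rng.normal(size=(n, 3))).mean(axis=1)  # informant-averaged mediator
y = 0.6 * m + 0.1 * x + rng.normal(size=n)                     # externalizing problems

def indirect(idx):
    a = np.polyfit(x[idx], m[idx], 1)[0]                 # path X -> M
    Z = np.c_[m[idx], x[idx], np.ones(idx.size)]
    b = np.linalg.lstsq(Z, y[idx], rcond=None)[0][0]     # path M -> Y, controlling X
    return a * b                                         # indirect effect a*b

boot = np.array([indirect(rng.integers(0, n, n)) for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"ab = {indirect(np.arange(n)):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```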

  13. [Comorbidity in multiple sclerosis and its therapeutic approach].

    Science.gov (United States)

    Estruch, Bonaventura Casanova

    2014-12-01

    Multiple sclerosis (MS) is a long-term chronic disease, in which intercurrent processes develop three times more frequently in affected individuals than in persons without MS. Knowledge of the comorbidity of MS, its definition and measurement (Charlson index) improves patient management. Acting on comorbid conditions delays the progression of disability, which is intimately linked to the number of concurrent processes and with health states and habits. Moreover, the presence of comorbidities delays the diagnosis of MS, which in turn delays the start of treatment. The main comorbidity found in MS includes other autoimmune diseases (thyroiditis, systemic lupus erythematosus, or pemphigus) but can also include general diseases, such as asthma or osteomuscular alterations, and, in particular, psychiatric disturbances. All these alterations should be evaluated with multidimensional scales (Disability Expectancy Table, DET), which allow more accurate determination of the patient's real clinical course and quality of life. These scales also allow identification of how MS, concurrent and intercurrent processes occurring during the clinical course, and the treatment provided affect patients with MS. An overall approach to patients' health status helps to improve quality of life. Copyright © 2014 Elsevier España, S.L.U. All rights reserved.

  14. TODIM Method for Single-Valued Neutrosophic Multiple Attribute Decision Making

    Directory of Open Access Journals (Sweden)

    Dong-Sheng Xu

    2017-10-01

    Full Text Available Recently, the TODIM method has been used to solve multiple attribute decision making (MADM) problems. Single-valued neutrosophic sets (SVNSs) are useful tools to depict the uncertainty of MADM. In this paper, we extend the TODIM method to MADM with single-valued neutrosophic numbers (SVNNs). Firstly, the definition, comparison, and distance of SVNNs are briefly presented, and the steps of the classical TODIM method for MADM problems are introduced. Then, the extended classical TODIM method is proposed to deal with MADM problems with SVNNs; its significant characteristic is that it can fully consider the decision makers' bounded rationality, a real feature of decision making. Furthermore, we extend the proposed model to interval neutrosophic sets (INSs). Finally, a numerical example is presented.
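
    For context, the classical crisp TODIM procedure that the paper extends looks roughly as follows; the neutrosophic version replaces the criterion-wise difference with a distance between SVNNs. The data and weights below are invented for illustration.

```python
import numpy as np

def todim(X, w, theta=1.0):
    """Classical TODIM: prospect-theory-style dominance with loss attenuation theta."""
    wr = w / w.max()                  # weights relative to the reference criterion
    sw = wr.sum()
    n_alt, n_crit = X.shape
    phi = np.zeros((n_alt, n_alt))    # overall dominance of alternative i over j
    for c in range(n_crit):
        for i in range(n_alt):
            for j in range(n_alt):
                d = X[i, c] - X[j, c]
                if d > 0:                                   # gain
                    phi[i, j] += np.sqrt(wr[c] * d / sw)
                elif d < 0:                                 # loss, felt more strongly
                    phi[i, j] -= np.sqrt(sw * -d / wr[c]) / theta
    xi = phi.sum(axis=1)
    return (xi - xi.min()) / (xi.max() - xi.min())          # global values in [0, 1]

X = np.array([[0.7, 0.5], [0.6, 0.8], [0.9, 0.4]])          # alternatives x criteria
print(todim(X, np.array([0.6, 0.4])))                       # ranking scores
```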

  15. Traffic Management by Using Admission Control Methods in Multiple Node IMS Network

    Directory of Open Access Journals (Sweden)

    Filip Chamraz

    2016-01-01

    Full Text Available The paper deals with Admission Control (AC) methods as a possible solution for traffic management in IMS networks (IP Multimedia Subsystem), from the point of view of efficient redistribution of the available network resources and keeping the parameters of Quality of Service (QoS). The paper specifically aims at the selection of the most appropriate method for a specific type of traffic and at a traffic management concept using AC methods on multiple nodes. The potential benefits and disadvantages of the used solution are evaluated.

  16. Multiple-linac approach for tritium production and other applications

    International Nuclear Information System (INIS)

    Ruggiero, A.G.

    1995-01-01

    This report describes an approach to tritium production based on the use of multiple proton linear accelerators. Features of a single APT Linac as proposed by the Los Alamos National Laboratory are presented and discussed. An alternative approach to attaining the same total proton beam power of 200 MW with several lower-performance superconducting Linacs is proposed and discussed. Although each of these accelerators is a considerable extrapolation of present technology, the latter can nevertheless be built at less technical risk than the single high-current APT Linac, particularly concerning the design and performance of the low-energy front-end. The use of superconducting cavities is also proposed as a way of optimizing the accelerating gradient, the overall length, and the operational costs. The superconducting technology has already been successfully demonstrated in a number of large-size projects and should be seriously considered for the acceleration of intense low-energy proton beams. Finally, each linear accelerator would represent an ideal source of very intense proton beams for a variety of applications, such as weapons and waste actinide transmutation processes, isotopes for medical applications, spallation neutron sources, and the generation of intense beams of neutrinos and muons for nuclear and high-energy physics research. The research community at large obviously has an interest in providing expertise for, and in having access to, the demonstration, construction, operation, and exploitation of these top-performance accelerators

  17. Multiple Stressors and Ecological Complexity Require A New Approach to Coral Reef Research

    Directory of Open Access Journals (Sweden)

    Linwood Hagan Pendleton

    2016-03-01

    Full Text Available Ocean acidification, climate change, and other environmental stressors threaten coral reef ecosystems and the people who depend upon them. New science reveals that these multiple stressors interact and may affect a multitude of physiological and ecological processes in complex ways. The interaction of multiple stressors and ecological complexity may mean that the negative effects on coral reef ecosystems will happen sooner and be more severe than previously thought. Yet most research on the effects of global change on coral reefs focuses on one or a few stressors, pathways, or outcomes (e.g., bleaching). Based on a critical review of the literature, we call for a regionally targeted strategy of mesocosm-level research that addresses this complexity and provides more realistic projections about coral reef impacts in the face of global environmental change. We believe similar approaches are needed for other ecosystems that face global environmental change.

  18. Combining multiple decisions: applications to bioinformatics

    International Nuclear Information System (INIS)

    Yukinawa, N; Ishii, S; Takenouchi, T; Oba, S

    2008-01-01

    Multi-class classification is one of the fundamental tasks in bioinformatics and typically arises in cancer diagnosis studies by gene expression profiling. This article reviews two recent approaches to multi-class classification by combining multiple binary classifiers, which are formulated within a unified framework of error-correcting output coding (ECOC). The first approach constructs a multi-class classifier in which each binary classifier to be aggregated has a weight value that is optimally tuned based on the observed data. In the second approach, misclassification by each binary classifier is formulated as a bit-inversion error with a probabilistic model, by analogy with information transmission theory. Experimental studies using various real-world datasets, including cancer classification problems, reveal that both of the new methods are superior or comparable to other multi-class classification methods
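
    A minimal ECOC example follows: one binary learner per code bit, and decoding by nearest codeword in Hamming distance. The codebook and the trivial centroid learners are ours, for illustration only; the reviewed methods replace hard Hamming decoding with weighted or probabilistic decoding.

```python
import numpy as np

def train_centroids(X, y_bits):
    """One trivial binary learner per code column: a pair of class centroids."""
    return [(X[y_bits[:, b] == 0].mean(0), X[y_bits[:, b] == 1].mean(0))
            for b in range(y_bits.shape[1])]

def predict(x, models, codebook):
    bits = np.array([np.linalg.norm(x - c1) < np.linalg.norm(x - c0)
                     for c0, c1 in models], dtype=int)   # predicted code bits
    ham = (codebook != bits).sum(axis=1)                 # Hamming distance per class
    return np.argmin(ham)                                # nearest codeword wins

codebook = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]])   # 3 classes, 3 bits (assumed)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, size=(30, 2)) for c in [(0, 0), (2, 0), (0, 2)]])
y = np.repeat([0, 1, 2], 30)
models = train_centroids(X, codebook[y])
print(predict(np.array([1.9, 0.1]), models, codebook))   # expect class 1
```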

  19. MULTIPLE CRITERIA DECISION MAKING APPROACH FOR INDUSTRIAL ENGINEER SELECTION USING FUZZY AHP-FUZZY TOPSIS

    OpenAIRE

    Deliktaş, Derya; ÜSTÜN, Özden

    2018-01-01

    In this study, a fuzzy multiple criteria decision-making approach is proposed to select an industrial engineer from among ten candidates in a manufacturing environment. The industrial engineer selection problem is a special case of the personnel selection problem. The problem has a hierarchical structure of criteria, involves many decision makers, and contains many criteria. The decision makers' evaluation process also includes ambiguous parameters. The fuzzy AHP is used to determin...

  20. A quantitative approach to choose among multiple mutually exclusive decisions: comparative expected utility theory

    OpenAIRE

    Zhu, Pengyu

    2018-01-01

    Mutually exclusive decisions have been studied for decades. Many well-known decision theories have been defined to help people either to make rational decisions or to interpret people's behaviors, such as expected utility theory, regret theory, prospect theory, and so on. The paper argues that none of these decision theories are designed to provide practical, normative and quantitative approaches for multiple mutually exclusive decisions. Different decision-makers should naturally make differ...

  1. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy.

  2. An efficient method to transcription factor binding sites imputation via simultaneous completion of multiple matrices with positional consistency.

    Science.gov (United States)

    Guo, Wei-Li; Huang, De-Shuang

    2017-08-22

    Transcription factors (TFs) are DNA-binding proteins that have a central role in regulating gene expression. Identification of the DNA-binding sites of TFs is a key task in understanding transcriptional regulation, cellular processes and disease. Chromatin immunoprecipitation followed by high-throughput sequencing (ChIP-seq) enables genome-wide identification of in vivo TF binding sites. However, it is still difficult to map every TF in every cell line owing to cost and biological material availability, which poses an enormous obstacle for integrated analysis of gene regulation. To address this problem, we propose a novel computational approach, TFBSImpute, for predicting additional TF binding profiles by leveraging information from available ChIP-seq TF binding data. TFBSImpute fuses the dataset to a 3-mode tensor and imputes missing TF binding signals via simultaneous completion of multiple TF binding matrices with positional consistency. We show that signals predicted by our method achieve overall similarity with experimental data and that TFBSImpute significantly outperforms baseline approaches, by assessing the performance of imputation methods against observed ChIP-seq TF binding profiles. Besides, motif analysis shows that TFBSImpute performs better in capturing binding motifs enriched in observed data compared with baselines, indicating that the higher performance of TFBSImpute is not simply due to averaging related samples. We anticipate that our approach will constitute a useful complement to experimental mapping of TF binding, which is beneficial for further study of regulation mechanisms and disease.
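
    The flavor of the imputation can be conveyed with a plain low-rank matrix completion by iterative truncated SVD; TFBSImpute itself completes a 3-mode tensor with a positional-consistency constraint, which this sketch omits. All data and the rank are synthetic assumptions.

```python
import numpy as np

def complete(M, mask, rank=5, iters=200):
    """Hard-impute style completion: alternate a rank-r SVD projection with
    re-imposing the observed entries."""
    X = np.where(mask, M, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]     # best rank-r approximation
        X[mask] = M[mask]                            # keep observed entries fixed
    return X

rng = np.random.default_rng(0)
true = rng.normal(size=(30, 4)) @ rng.normal(size=(4, 20))   # rank-4 ground truth
mask = rng.random(true.shape) < 0.6                          # 60% of entries observed
est = complete(true, mask, rank=4)
print(np.abs(est[~mask] - true[~mask]).mean())               # small recovery error
```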

  3. Analyzing Statistical Mediation with Multiple Informants: A New Approach with an Application in Clinical Psychology

    Directory of Open Access Journals (Sweden)

    Lesther ePapa

    2015-11-01

    Full Text Available Testing mediation models is critical for identifying potential variables that need to be targeted to effectively change one or more outcome variables. In addition, it is now common practice for clinicians to use multiple informant (MI) data in studies of statistical mediation. By coupling the use of MI data with statistical mediation analysis, clinical researchers can combine the benefits of both techniques. Integrating the information from MIs into a statistical mediation model creates various methodological and practical challenges. The authors review prior methodological approaches to MI mediation analysis in clinical research and propose a new latent variable approach that overcomes some limitations of prior approaches. An application of the new approach to mother, father, and child reports of impulsivity, frustration tolerance, and externalizing problems (N = 454) is presented. The results showed that frustration tolerance mediated the relationship between impulsivity and externalizing problems. Advantages and limitations of the new approach are discussed. The new approach can help clinical researchers overcome limitations of prior techniques. It allows for a more comprehensive and effective use of MI data when testing mediation models.

  4. Multiple point statistical simulation using uncertain (soft) conditional data

    Science.gov (United States)

    Hansen, Thomas Mejer; Vu, Le Thanh; Mosegaard, Klaus; Cordua, Knud Skou

    2018-05-01

    Geostatistical simulation methods have been used to quantify the spatial variability of reservoir models since the 1980s. In the last two decades, state-of-the-art simulation methods have changed from being based on covariance-based two-point statistics to multiple-point statistics (MPS), which allow simulation of more realistic Earth structures. In addition, increasing amounts of geo-information (geophysical, geological, etc.) from multiple sources are being collected. This poses the problem of integrating these different sources of information, such that decisions related to reservoir models can be taken on as informed a basis as possible. In principle, though difficult in practice, this can be achieved using computationally expensive Monte Carlo methods. Here we investigate the use of sequential-simulation-based MPS methods conditioned to uncertain (soft) data as a computationally efficient alternative. First, it is demonstrated that current implementations of sequential simulation based on MPS (e.g. SNESIM, ENESIM and Direct Sampling) do not properly account for uncertain conditional information, due to a combination of using only co-located information and a random simulation path. We then suggest two approaches that better account for the available uncertain information. The first makes use of a preferential simulation path, where more informed model parameters are visited before less informed ones. The second approach involves using non-co-located uncertain information. For different types of available data, these approaches are demonstrated to produce simulation results similar to those obtained by the general Monte Carlo based approach. These methods allow MPS simulation to condition properly to uncertain (soft) data, and hence provide a computationally attractive approach for integrating information about a reservoir model.
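
    The preferential-path idea can be sketched in a few lines: rank grid cells by how informative their soft data are and simulate the best-constrained cells first. Measuring informativeness by the entropy of a two-facies probability is our assumption for illustration, not necessarily the authors' exact criterion.

```python
import numpy as np

def preferential_path(p_soft):
    """Visit order for sequential simulation: most informed (lowest-entropy) cells
    first. p_soft holds per-cell probabilities of facies 1 (0.5 = least informed)."""
    p = np.clip(p_soft.ravel(), 1e-6, 1 - 1e-6)
    entropy = -(p * np.log(p) + (1 - p) * np.log(1 - p))
    return np.argsort(entropy)            # ascending entropy

p_soft = np.random.default_rng(0).random((5, 5))
path = preferential_path(p_soft)
print(path[:5])                           # first cells to be simulated
```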

  5. An integrated lean-methods approach to hospital facilities redesign.

    Science.gov (United States)

    Nicholas, John

    2012-01-01

    Lean production methods for eliminating waste and improving processes in manufacturing are now being applied in healthcare. As the author shows, the methods are appropriate for redesigning hospital facilities. When used in an integrated manner and employing teams of mostly clinicians, the methods produce facility designs that are custom-fit to patient needs and caregiver work processes, and reduce operational costs. The author reviews lean methods and an approach for integrating them in the redesign of hospital facilities. A case example of the redesign of an emergency department shows the feasibility and benefits of the approach.

  6. Total System Performance Assessment - License Application Methods and Approach

    Energy Technology Data Exchange (ETDEWEB)

    J. McNeish

    2003-12-08

    ''Total System Performance Assessment-License Application (TSPA-LA) Methods and Approach'' provides the top-level method and approach for conducting the TSPA-LA model development and analyses. The method and approach is responsive to the criteria set forth in Total System Performance Assessment Integration (TSPAI) Key Technical Issues (KTIs) identified in agreements with the U.S. Nuclear Regulatory Commission, the ''Yucca Mountain Review Plan'' (YMRP), ''Final Report'' (NRC 2003 [163274]), and the NRC final rule 10 CFR Part 63 (NRC 2002 [156605]). This introductory section provides an overview of the TSPA-LA, the projected TSPA-LA documentation structure, and the goals of the document. It also provides a brief discussion of the regulatory framework, the approach to risk management of the development and analysis of the model, and the overall organization of the document. The section closes with some important conventions that are used in this document.

  7. A Method to Construct Plasma with Nonlinear Density Enhancement Effect in Multiple Internal Inductively Coupled Plasmas

    International Nuclear Information System (INIS)

    Chen Zhipeng; Li Hong; Liu Qiuyan; Luo Chen; Xie Jinlin; Liu Wandong

    2011-01-01

    A method is proposed to build up plasma based on a nonlinear enhancement phenomenon of plasma density when discharging with multiple internal antennas simultaneously. It turns out that the plasma density under multiple sources is higher than the linear summation of the densities under each source. This effect helps to offset the fast exponential decay of plasma density of a single internal inductively coupled plasma source, and to generate a larger-area plasma with multiple internal inductively coupled plasma sources. After a careful study of the balance between the enhancement and the decay of plasma density in experiments, a plasma is built up by four sources, which proves the feasibility of this method. According to the method, more sources and a more intensive enhancement effect can be employed to further build up a high-density, large-area plasma for different applications. (low temperature plasma)

  8. Improving Students' Creative Thinking and Achievement through the Implementation of Multiple Intelligence Approach with Mind Mapping

    Science.gov (United States)

    Widiana, I. Wayan; Jampel, I. Nyoman

    2016-01-01

    This classroom action research aimed to improve students' creative thinking and achievement in learning science. It was conducted through the implementation of a multiple intelligences with mind mapping approach, and describes the students' responses. The subjects of this research were the fifth grade students of SD 8 Tianyar Barat, Kubu, and…

  9. Multi-method and innovative approaches to researching the learning and social practices of young digital users

    DEFF Research Database (Denmark)

    Vittadini, Nicoletta; Carlo, Simone; Gilje, Øystein

    2014-01-01

    One of the most significant challenges in researching the social aspects of contemporary societies is to adapt the methodological approach to complex digital media environments. Learning processes take place in this complex environment, and they include formal and informal experiences (learning in school, home, and real-virtual communities), peer cultures and inter-generational connections, production and creation as relevant activities, and personal interests as a focal point. Methods used in the study of learning and the social practices of young people must take into account four key issues: boundaries between online and offline experiences are blurring; young people act performatively, knowingly, or reflexively; and their activities cannot be understood through the use of a single method, but require the use of multiple tools of investigation. The article discusses three methodological issues…

  10. Multi-method and innovative approaches to researching the learning and social practices of young digital users

    DEFF Research Database (Denmark)

    Vittadini, Nicoletta; Carlo, Simone; Gilje, Øystein

    2012-01-01

    One of the most significant challenges in researching the social aspects of contemporary societies is to adapt the methodological approach to complex digital media environments. Learning processes take place in this complex environment, and they include formal and informal experiences (learning in school, home, and real-virtual communities), peer cultures and intergenerational connections, production and creation as relevant activities, and personal interests as a focal point. Methods used in the study of learning and the social practices of young people must take into account four key issues: boundaries between online and offline experiences are blurring; young people act performatively; young people act knowingly or reflexively; and the activities of young people cannot be understood through the use of a single method but require the use of multiple tools of investigation. The article discusses…

  11. Comparison of multiple gene assembly methods for metabolic engineering

    Science.gov (United States)

    Chenfeng Lu; Karen Mansoorabadi; Thomas Jeffries

    2007-01-01

    A universal, rapid DNA assembly method for efficient multigene plasmid construction is important for biological research and for optimizing gene expression in industrial microbes. Three different approaches to achieve this goal were evaluated. These included creating long complementary extensions using a uracil-DNA glycosylase technique, overlap extension polymerase…

  12. Optimization and modeling of spot welding parameters with simultaneous multiple response consideration using multi objective Taguchi method and RSM

    Energy Technology Data Exchange (ETDEWEB)

    Muhammad, Norasiah; Manurung, Yupiter HP; Hafidzi, Mohammad; Abas, Sunhaji Kiyai; Tham, Ghalib; Haruman, Esa [Universiti Teknologi MARA (UiTM), Selangor (Malaysia)]

    2012-08-15

    This paper presents an alternative method to optimize the process parameters of resistance spot welding (RSW) with respect to weld zone development. The optimization approach considers simultaneously the multiple quality characteristics, namely the weld nugget and the heat-affected zone (HAZ), using the multi-objective Taguchi method (MTM). The experimental study was conducted for a plate thickness of 1.5 mm under different welding currents, weld times and hold times. The optimum welding parameters were investigated using the Taguchi method with an L9 orthogonal array. The optimum value was analyzed by means of MTM, which involves the calculation of the total normalized quality loss (TNQL) and the multi signal-to-noise ratio (MSNR). The significance levels of the welding parameters were further obtained using analysis of variance (ANOVA). Furthermore, a first-order model for predicting weld zone development is derived using response surface methodology (RSM). Based on the experimental confirmation test, the proposed method can be effectively applied to estimate the size of the weld zone, which can be used to enhance and optimize the welding performance in RSW or other applications.
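
    The MTM bookkeeping described above can be sketched as follows. The S/N, quality-loss, TNQL, and MSNR formulas follow standard multi-objective Taguchi practice; the response data and the equal weights are synthetic assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# 9 L9 runs x 2 responses (nugget, HAZ) x 3 replicates -- synthetic measurements
y = rng.uniform(3, 7, size=(9, 2, 3))

def sn_larger_better(reps):
    """Larger-the-better S/N ratio in dB for one run's replicates."""
    return -10 * np.log10(np.mean(1.0 / reps ** 2))

sn = np.apply_along_axis(sn_larger_better, 2, y)   # (9 runs, 2 responses)
loss = 10 ** (-sn / 10)                            # quality loss per response
nql = loss / loss.max(axis=0)                      # normalized quality loss
w = np.array([0.5, 0.5])                           # assumed response weights
tnql = nql @ w                                     # total normalized quality loss
msnr = -10 * np.log10(tnql)                        # multi signal-to-noise ratio
print("best run:", np.argmax(msnr) + 1)            # run with the highest MSNR
```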

  13. Optimization and modeling of spot welding parameters with simultaneous multiple response consideration using multi objective Taguchi method and RSM

    International Nuclear Information System (INIS)

    Muhammad, Norasiah; Manurung, Yupiter HP; Hafidzi, Mohammad; Abas, Sunhaji Kiyai; Tham, Ghalib; Haruman, Esa

    2012-01-01

    This paper presents an alternative method to optimize the process parameters of resistance spot welding (RSW) with respect to weld zone development. The optimization approach considers simultaneously the multiple quality characteristics, namely the weld nugget and the heat-affected zone (HAZ), using the multi-objective Taguchi method (MTM). The experimental study was conducted for a plate thickness of 1.5 mm under different welding currents, weld times and hold times. The optimum welding parameters were investigated using the Taguchi method with an L9 orthogonal array. The optimum value was analyzed by means of MTM, which involves the calculation of the total normalized quality loss (TNQL) and the multi signal-to-noise ratio (MSNR). The significance levels of the welding parameters were further obtained using analysis of variance (ANOVA). Furthermore, a first-order model for predicting weld zone development is derived using response surface methodology (RSM). Based on the experimental confirmation test, the proposed method can be effectively applied to estimate the size of the weld zone, which can be used to enhance and optimize the welding performance in RSW or other applications.

  14. Clustering Multiple Sclerosis Subgroups with Multifractal Methods and Self-Organizing Map Algorithm

    Science.gov (United States)

    Karaca, Yeliz; Cattani, Carlo

    Magnetic resonance imaging (MRI) is the most sensitive method to detect chronic nervous system diseases such as multiple sclerosis (MS). In this paper, multifractal methods based on 2-D Brownian motion Hölder regularity functions (polynomial, periodic (sine), and exponential) were applied to MR brain images, aiming to easily identify distressed regions in MS patients. With these regions, we propose an MS classification based on the multifractal method using the Self-Organizing Map (SOM) algorithm. Thus, we obtained a cluster analysis by identifying pixels from distressed regions in MR images through multifractal methods and by diagnosing subgroups of MS patients through artificial neural networks.
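
    A minimal self-organizing map, the clustering stage of the pipeline, is sketched below on synthetic feature vectors standing in for per-region multifractal descriptors (e.g. estimated Hölder exponents). The map size and learning schedule are illustrative assumptions.

```python
import numpy as np

def train_som(X, rows=3, cols=3, iters=2000, lr0=0.5, sigma0=1.5):
    """Plain SOM: pull the best-matching unit and its grid neighborhood toward
    each sample, with learning rate and neighborhood width decaying over time."""
    rng = np.random.default_rng(0)
    W = rng.normal(size=(rows * cols, X.shape[1]))
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    for t in range(iters):
        x = X[rng.integers(len(X))]
        bmu = np.argmin(((W - x) ** 2).sum(1))            # best-matching unit
        frac = 1 - t / iters
        h = np.exp(-((grid - grid[bmu]) ** 2).sum(1) / (2 * (sigma0 * frac) ** 2))
        W += (lr0 * frac) * h[:, None] * (x - W)
    return W

# three synthetic "patient subgroups" in a 4-D multifractal feature space
X = np.vstack([np.random.default_rng(i).normal(i, 0.2, size=(50, 4)) for i in range(3)])
W = train_som(X)
labels = np.argmin(((X[:, None, :] - W[None]) ** 2).sum(-1), axis=1)
print(np.unique(labels))                                  # map units used as clusters
```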

  15. Tailor-made rehabilitation approach using multiple types of hybrid assistive limb robots for acute stroke patients: A pilot study.

    Science.gov (United States)

    Fukuda, Hiroyuki; Morishita, Takashi; Ogata, Toshiyasu; Saita, Kazuya; Hyakutake, Koichi; Watanabe, Junko; Shiota, Etsuji; Inoue, Tooru

    2016-01-01

    This article investigated the feasibility of a tailor-made neurorehabilitation approach using multiple types of hybrid assistive limb (HAL) robots for acute stroke patients. We investigated the clinical outcomes of patients who underwent rehabilitation using the HAL robots. The Brunnstrom stage, Barthel index (BI), and functional independence measure (FIM) were evaluated at baseline and when patients were transferred to a rehabilitation facility. Scores were compared between the multiple-robot rehabilitation and single-robot rehabilitation groups. Nine hemiplegic acute stroke patients (five men and four women; mean age 59.4 ± 12.5 years; four hemorrhagic stroke and five ischemic stroke) underwent rehabilitation using multiple types of HAL robots for 19.4 ± 12.5 days, and 14 patients (six men and eight women; mean age 63.2 ± 13.9 years; nine hemorrhagic stroke and five ischemic stroke) underwent rehabilitation using a single type of HAL robot for 14.9 ± 8.9 days. The multiple-robot rehabilitation group showed significantly better outcomes in the Brunnstrom stage of the upper extremity, BI, and FIM scores. To the best of the authors' knowledge, this is the first pilot study demonstrating the feasibility of rehabilitation using multiple exoskeleton robots. The tailor-made rehabilitation approach may be useful for the treatment of acute stroke.

  16. Case studies: Soil mapping using multiple methods

    Science.gov (United States)

    Petersen, Hauke; Wunderlich, Tina; Hagrey, Said A. Al; Rabbel, Wolfgang; Stümpel, Harald

    2010-05-01

    Soil is a non-renewable resource with fundamental functions such as filtering (e.g. water), storing (e.g. carbon), transforming (e.g. nutrients) and buffering (e.g. contamination). Degradation of soils is by now a well-known fact not only to scientists; decision makers in politics have also accepted it as a serious problem for several environmental aspects. National and international authorities have already worked out preservation and restoration strategies for soil degradation, though how to put these strategies into practice is still a matter of active research. Common to all strategies, however, is the requirement to describe soil state and dynamics as a base step. This includes collecting information from soils with methods ranging from direct soil sampling to remote applications. At an intermediate scale, mobile geophysical methods are applied, with the advantage of fast working progress but the disadvantage of site-specific calibration and interpretation issues. In the framework of the iSOIL project we present here some case studies of soil mapping performed using multiple geophysical methods. We present examples of combined field measurements with EMI, GPR, magnetic and gamma-spectrometric techniques carried out with the mobile multi-sensor system of Kiel University (GER). Depending on soil type and actual environmental conditions, different methods yield information of different quality. By applying diverse methods we want to figure out which method, or combination of methods, gives the most reliable information concerning soil state and properties. To investigate the influence of varying material we performed mapping campaigns on field sites with sandy, loamy and loessy soils. Classification of measured or derived attributes shows not only the lateral variability but also gives hints to a variation in the vertical distribution of soil material. For all soils, of course, soil water content can be a critical factor for a successful

  17. A multiparameter chaos control method based on OGY approach

    International Nuclear Information System (INIS)

    Souza de Paula, Aline; Amorim Savi, Marcelo

    2009-01-01

    Chaos control is based on the richness of responses of chaotic behavior and may be understood as the use of tiny perturbations for the stabilization of an unstable periodic orbit (UPO) embedded in a chaotic attractor. Since one of these UPOs can provide better performance than others in a particular situation, chaos control can make this kind of behavior desirable in a variety of applications. The OGY method is a discrete technique that considers small perturbations promoted in the neighborhood of the desired orbit when the trajectory crosses a specific surface, such as a Poincaré section. This contribution proposes a multiparameter semi-continuous method based on the OGY approach in order to control chaotic behavior. Two different approaches are possible with this method: a coupled approach, where all control parameters influence the system dynamics even when they are not active; and an uncoupled approach, a particular case where control parameters return to their reference values when they become passive. As an application of the general formulation, a two-parameter actuation for the control of a nonlinear pendulum is investigated, employing both the coupled and uncoupled approaches. Analyses are carried out considering signals generated by numerical integration of the mathematical model using experimentally identified parameters. Results show that the procedure can be a good alternative for chaos control, since it provides more effective UPO stabilization than the classical single-parameter approach.
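
    For intuition about the underlying mechanics, here is a minimal sketch of the classical single-parameter OGY scheme that the multiparameter method generalizes, applied to the unstable fixed point of the logistic map rather than the pendulum of the study; the map, the activation window and the perturbation bound are illustrative assumptions.

    ```python
    import numpy as np

    # Classical single-parameter OGY control of the logistic map x -> r*x*(1 - x).
    r0 = 3.9                        # nominal parameter, chaotic regime
    x_star = 1.0 - 1.0 / r0         # unstable period-1 orbit (fixed point)
    lam = 2.0 - r0                  # df/dx at x_star
    g = x_star * (1.0 - x_star)     # df/dr at x_star
    dr_max = 0.05                   # bound on the tiny admissible perturbation

    x, trace = 0.4, []
    for _ in range(5000):
        dx = x - x_star
        # Linearized one-step target lam*dx + g*dr = 0 gives dr = -lam*dx/g;
        # control is active only in a small window around the orbit.
        dr = np.clip(-lam * dx / g, -dr_max, dr_max) if abs(dx) < 0.02 else 0.0
        x = (r0 + dr) * x * (1.0 - x)
        trace.append(x)
    print("last values:", np.round(trace[-5:], 6))  # pinned near x_star ~ 0.7436
    ```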

  18. A comparison of approaches for simultaneous inference of fixed effects for multiple outcomes using linear mixed models

    DEFF Research Database (Denmark)

    Jensen, Signe Marie; Ritz, Christian

    2018-01-01

    Longitudinal studies with multiple outcomes often pose challenges for the statistical analysis. A joint model including all outcomes has the advantage of incorporating their simultaneous behavior, but is often difficult to fit due to computational challenges. We consider two alternative approaches to ..., pairwise fitting shows a larger loss in efficiency than the marginal models approach. Using an alternative to the joint modelling strategy will lead to some, but not necessarily a large, loss of efficiency for small sample sizes.

  19. Permutation statistical methods an integrated approach

    CERN Document Server

    Berry, Kenneth J; Johnston, Janis E

    2016-01-01

    This research monograph provides a synthesis of a number of statistical tests and measures, which, at first consideration, appear disjoint and unrelated. Numerous comparisons of permutation and classical statistical methods are presented, and the two methods are compared via probability values and, where appropriate, measures of effect size. Permutation statistical methods, compared to classical statistical methods, do not rely on theoretical distributions, avoid the usual assumptions of normality and homogeneity of variance, and depend only on the data at hand. This text takes a unique approach to explaining statistics by integrating a large variety of statistical methods, and establishing the rigor of a topic that to many may seem to be a nascent field in statistics. This topic is new in that it took modern computing power to make permutation methods available to people working in the mainstream of research. This research monograph addresses a statistically-informed audience, and can also easily serve as a ...
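
    As a minimal concrete example of the distribution-free logic described above, the following sketch runs a two-sample permutation test of a difference in means, using only rearrangements of the data at hand (synthetic data, illustration only).

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    a = rng.normal(0.0, 1.0, 20)     # illustrative sample A
    b = rng.normal(0.5, 1.0, 20)     # illustrative sample B

    observed = a.mean() - b.mean()
    pooled = np.concatenate([a, b])
    n_perm, count = 10_000, 0
    for _ in range(n_perm):
        rng.shuffle(pooled)          # random relabeling of the pooled data
        diff = pooled[:20].mean() - pooled[20:].mean()
        count += abs(diff) >= abs(observed)
    p_value = count / n_perm         # two-sided, distribution-free p-value
    print(f"observed diff = {observed:.3f}, permutation p = {p_value:.4f}")
    ```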

  20. Unsupervised multiple kernel learning for heterogeneous data integration.

    Science.gov (United States)

    Mariette, Jérôme; Villa-Vialaneix, Nathalie

    2018-03-15

    Recent high-throughput sequencing advances have expanded the breadth of available omics datasets, and the integrated analysis of multiple datasets obtained on the same samples has made it possible to gain important insights in a wide range of applications. However, the integration of various sources of information remains a challenge for systems biology, since the produced datasets are often of heterogeneous types, creating a need for generic methods that take their different specificities into account. We propose a multiple kernel framework that allows multiple datasets of various types to be integrated into a single exploratory analysis. Several solutions are provided to learn either a consensus meta-kernel or a meta-kernel that preserves the original topology of the datasets. We applied our framework to analyse two public multi-omics datasets. First, the multiple metagenomic datasets collected during the TARA Oceans expedition were explored to demonstrate that our method is able to retrieve previous findings in a single kernel PCA as well as to provide a new image of the sample structures when a larger number of datasets are included in the analysis. To perform this analysis, a generic procedure is also proposed to improve the interpretability of the kernel PCA with regard to the original data. Second, the multi-omics breast cancer datasets provided by The Cancer Genome Atlas are analysed using kernel Self-Organizing Maps with both single- and multi-omics strategies. The comparison of these two approaches demonstrates the benefit of our integration method in improving the representation of the studied biological system. The proposed methods are available in the R package mixKernel, released on CRAN. It is fully compatible with the mixOmics package and a tutorial describing the approach can be found on the mixOmics web site http://mixomics.org/mixkernel/. jerome.mariette@inra.fr or nathalie.villa-vialaneix@inra.fr. Supplementary data are available at Bioinformatics online.
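
    The consensus meta-kernel idea can be imitated in a few lines on synthetic data: kernels computed on two heterogeneous datasets for the same samples are normalized, averaged and fed to a kernel PCA. This is a generic sketch of the workflow, not the mixKernel implementation; the plain average stands in for the learned consensus combination.

    ```python
    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel, linear_kernel
    from sklearn.decomposition import KernelPCA

    rng = np.random.default_rng(1)
    X1 = rng.normal(size=(50, 30))   # stand-in omics dataset 1, 50 samples
    X2 = rng.normal(size=(50, 10))   # stand-in omics dataset 2, same samples

    def cosine_normalize(K):
        d = np.sqrt(np.diag(K))
        return K / np.outer(d, d)    # unit self-similarity for each sample

    K1 = cosine_normalize(rbf_kernel(X1))
    K2 = cosine_normalize(linear_kernel(X2))
    K_meta = 0.5 * (K1 + K2)         # naive consensus meta-kernel

    embedding = KernelPCA(n_components=2, kernel="precomputed").fit_transform(K_meta)
    print(embedding.shape)           # (50, 2) exploratory coordinates
    ```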

  1. Comparison of Deep Learning With Multiple Machine Learning Methods and Metrics Using Diverse Drug Discovery Data Sets.

    Science.gov (United States)

    Korotcov, Alexandru; Tkachenko, Valery; Russo, Daniel P; Ekins, Sean

    2017-12-04

    Machine learning methods have been applied to many data sets in pharmaceutical research for several decades. The relative ease and availability of fingerprint-type molecular descriptors paired with Bayesian methods resulted in the widespread use of this approach for a diverse array of end points relevant to drug discovery. Deep learning is the latest machine learning algorithm attracting attention for many pharmaceutical applications, from docking to virtual screening. Deep learning is based on an artificial neural network with multiple hidden layers and has found considerable traction for many artificial intelligence applications. We have previously suggested the need for a comparison of different machine learning methods with deep learning across an array of varying data sets that is applicable to pharmaceutical research. End points relevant to pharmaceutical research include absorption, distribution, metabolism, excretion, and toxicity (ADME/Tox) properties, as well as activity against pathogens and drug discovery data sets. In this study, we have used data sets for solubility, probe-likeness, hERG, KCNQ1, bubonic plague, Chagas, tuberculosis, and malaria to compare different machine learning methods using FCFP6 fingerprints. These data sets represent whole-cell screens, individual proteins, physicochemical properties, as well as a data set with a complex end point. Our aim was to assess whether deep learning offered any improvement in testing when assessed using an array of metrics including AUC, F1 score, Cohen's kappa, Matthews correlation coefficient and others. Based on ranked normalized scores for the metrics or data sets, Deep Neural Networks (DNN) ranked higher than SVM, which in turn ranked higher than all the other machine learning methods. Visualizing these properties for training and test sets using radar-type plots indicates when models are inferior or perhaps overtrained. These results also suggest the need for assessing deep learning further
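
    For readers who want to reproduce this kind of metric panel, the snippet below scores a toy set of predictions with four of the cited metrics via scikit-learn; the labels and scores are fabricated.

    ```python
    import numpy as np
    from sklearn.metrics import (roc_auc_score, f1_score, cohen_kappa_score,
                                 matthews_corrcoef)

    y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1, 0, 0])        # fabricated labels
    y_score = np.array([0.1, 0.8, 0.6, 0.3, 0.9, 0.4, 0.7, 0.2, 0.2, 0.5])
    y_pred = (y_score >= 0.5).astype(int)

    print("AUC   :", roc_auc_score(y_true, y_score))
    print("F1    :", f1_score(y_true, y_pred))
    print("kappa :", cohen_kappa_score(y_true, y_pred))
    print("MCC   :", matthews_corrcoef(y_true, y_pred))
    ```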

  2. Approaches and methods for econometric analysis of market power

    DEFF Research Database (Denmark)

    Perekhozhuk, Oleksandr; Glauben, Thomas; Grings, Michael

    2017-01-01

    This study discusses two widely used approaches in the New Empirical Industrial Organization (NEIO) literature and examines the strengths and weaknesses of the Production-Theoretic Approach (PTA) and the General Identification Method (GIM) for the econometric analysis of market power in agricultural and food markets. We provide a framework that may help researchers to evaluate and improve structural models of market power. Starting with the specification of the approaches in question, we compare published empirical studies of market power with respect to the choice of the applied approach, functional forms, estimation methods and derived estimates of the degree of market power. Thereafter, we use our framework to evaluate several structural models based on PTA and GIM to measure oligopsony power in the Ukrainian dairy industry. The PTA-based results suggest that the estimated parameters...

  3. Multiple emotions: a person-centered approach to the relationship between intergroup emotion and action orientation.

    Science.gov (United States)

    Fernando, Julian W; Kashima, Yoshihisa; Laham, Simon M

    2014-08-01

    Although a great deal of research has investigated the relationship between emotions and action orientations, most studies to date have used variable-centered techniques to identify the best emotion predictor(s) of a particular action. Given that people frequently report multiple or blended emotions, a profitable area of research may be to adopt person-centered approaches to examine the action orientations elicited by a particular combination of emotions or "emotion profile." In two studies, across instances of intergroup inequality in Australia and Canada, we examined participants' experiences of six intergroup emotions: sympathy, anger directed at three targets, shame, and pride. In both studies, five groups of participants with similar emotion profiles were identified by cluster analysis and their action orientations were compared; clusters indicated that the majority of participants experienced multiple emotions. Each action orientation was also regressed on the six emotions. There were a number of differences in the results obtained from the person-centered and variable-centered approaches. This was most apparent for sympathy: the group of participants experiencing only sympathy showed little inclination to perform prosocial actions, yet sympathy was a significant predictor of numerous action orientations in regression analyses. These results imply that sympathy may only prompt a desire for action when experienced in combination with other emotions. We suggest that the use of person-centered and variable-centered approaches as complementary analytic strategies may enrich research into not only the affective predictors of action, but emotion research in general.

  4. Multiple Paths to Mathematics Practice in Al-Kashi's Key to Arithmetic

    Science.gov (United States)

    Taani, Osama

    2014-01-01

    In this paper, I discuss one of the most distinguishing features of Jamshid al-Kashi's pedagogy from his Key to Arithmetic, a well-known Arabic mathematics textbook from the fifteenth century. This feature is the multiple paths that he includes to find a desired result. In the first section light is shed on al-Kashi's life and his contributions to mathematics and astronomy. Section 2 starts with a brief discussion of the contents and pedagogy of the Key to Arithmetic. Al-Kashi's multiple approaches are discussed through four different examples of his versatility in presenting a topic from multiple perspectives. These examples are multiple definitions, multiple algorithms, multiple formulas, and multiple methods for solving word problems. Section 3 is devoted to some benefits that can be gained by implementing al-Kashi's multiple paths approach in modern curricula. For this discussion, examples from two teaching modules taken from the Key to Arithmetic and implemented in Pre-Calculus and mathematics courses for preservice teachers are discussed. Also, the conclusions are supported by some aspects of these modules. This paper is an attempt to help mathematics educators explore more benefits from reading from original sources.

  5. Data matching for free-surface multiple attenuation by multidimensional deconvolution

    Science.gov (United States)

    van der Neut, Joost; Frijlink, Martijn; van Borselen, Roald

    2012-09-01

    A common strategy for surface-related multiple elimination of seismic data is to predict multiples by a convolutional model and subtract these adaptively from the input gathers. Problems can be posed by interfering multiples and primaries. Removing multiples by multidimensional deconvolution (MDD) (inversion) does not suffer from these problems. However, this approach requires data to be consistent, which is often not the case, especially not at interpolated near-offsets. A novel method is proposed to improve data consistency prior to inversion. This is done by backpropagating first-order multiples with a time-gated reference primary event and matching these with early primaries in the input gather. After data matching, multiple elimination by MDD can be applied with a deterministic inversion scheme.

  6. A method of risk assessment for a multi-plant site

    International Nuclear Information System (INIS)

    White, R.F.

    1983-06-01

    A model is presented which can be used in conjunction with probabilistic risk assessment to estimate whether a site on which there are several plants (reactors or chemical plants containing radioactive materials) meets whatever risk acceptance criteria or numerical risk guidelines are applied at the time of the assessment in relation to various groups of people and for various sources of risk. The application of the multi-plant site model to the direct and inverse methods of risk assessment is described. A method is proposed by which the potential hazard rating associated with a given plant can be quantified so that an appropriate allocation can be made when assessing the risks associated with each of the plants on a site. (author)

  7. Multiple biopsies during colposcopy

    Science.gov (United States)

    Performing multiple biopsies during a procedure known as colposcopy—visual inspection of the cervix—is more effective than performing only a single biopsy of the worst-appearing area for detecting cervical cancer precursors. This multiple biopsy approach

  8. Using a Forensic Research Method for Establishing an Alternative Method for Audience Measurement in Print Advertising

    DEFF Research Database (Denmark)

    Schmidt, Marcus; Krause, Niels; Solgaard, Hans Stubbe

    2012-01-01

    Advantages and disadvantages of the survey approach are discussed. It is hypothesized that observational methods sometimes constitute a reasonable and powerful substitute for traditional survey methods. Under certain circumstances, unobtrusive methods may even outperform traditional techniques. Non... amount of pages, the method appears applicable to flyers with multiple pages.

  9. Parametric optimization of multiple quality characteristics in laser cutting of Inconel-718 by using hybrid approach of multiple regression analysis and genetic algorithm

    Science.gov (United States)

    Shrivastava, Prashant Kumar; Pandey, Arun Kumar

    2018-06-01

    Inconel-718 has found high demand in different industries due to its superior mechanical properties. Traditional cutting methods face difficulties in cutting this alloy due to its low thermal potential, lower elasticity and high chemical compatibility at elevated temperatures. The challenges of machining and/or finishing unusual shapes and/or sizes in these materials are also faced by traditional machining. Laser beam cutting may be applied for miniaturization and ultra-precision cutting and/or finishing through appropriate control of the different process parameters. This paper presents the multi-objective optimization of kerf deviation, kerf width and kerf taper in the laser cutting of Inconel-718 sheet. Second-order regression models have been developed for the different quality characteristics using the data obtained through experimentation. The regression models have been used as objective functions for multi-objective optimization based on a hybrid approach of multiple regression analysis and genetic algorithm. The comparison of the optimization results to the experimental results shows improvements of 88%, 10.63% and 42.15% in kerf deviation, kerf width and kerf taper, respectively. Finally, the effects of the different process parameters on the quality characteristics are also discussed.
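
    The hybrid step can be sketched as follows: assumed second-order response models for the three kerf characteristics (with invented coefficients, not the paper's fitted models) are combined into a weighted objective and minimized with SciPy's differential evolution, an evolutionary optimizer standing in for the genetic algorithm.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    # Assumed second-order response models for kerf deviation (KD), kerf width (KW)
    # and kerf taper (KT) in coded process variables x1, x2, x3; the coefficients
    # are invented placeholders, not the paper's fitted values.
    def kd(x): return 0.08 + 0.03*x[0] - 0.02*x[1] + 0.01*x[0]*x[1] + 0.02*x[2]**2
    def kw(x): return 0.45 + 0.10*x[0] - 0.05*x[1] + 0.02*x[2] + 0.03*x[0]**2
    def kt(x): return 1.20 + 0.30*x[0] - 0.20*x[1] + 0.10*x[1]*x[2]

    weights = (1/3, 1/3, 1/3)   # equal importance of the three characteristics

    def combined(x):
        return weights[0]*kd(x) + weights[1]*kw(x) + weights[2]*kt(x)

    bounds = [(-1, 1)] * 3      # coded factor ranges
    result = differential_evolution(combined, bounds, seed=0)
    print("optimum (coded):", result.x, "objective:", result.fun)
    ```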

  10. Improving discrimination of savanna tree species through a multiple endmember spectral-angle-mapper (SAM) approach: canopy level analysis

    CSIR Research Space (South Africa)

    Cho, Moses A

    2010-11-01

    Full Text Available sensing. The objectives of this paper were to (i) evaluate the classification performance of a multiple-endmember spectral angle mapper (SAM) classification approach (conventionally known as the nearest neighbour) in discriminating ten common African...
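
    The core of a SAM classifier is compact enough to sketch: each pixel spectrum is assigned the label of the endmember with the smallest spectral angle, and the multiple-endmember variant simply takes the minimum over several endmembers per class. The spectra below are synthetic placeholders.

    ```python
    import numpy as np

    def spectral_angle(a, b):
        # angle (radians) between a pixel spectrum and an endmember spectrum
        cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return np.arccos(np.clip(cos, -1.0, 1.0))

    def sam_classify(pixel, endmembers):
        # nearest-neighbour rule: label of the endmember with the smallest angle
        angles = [spectral_angle(pixel, e) for e in endmembers]
        return int(np.argmin(angles)), min(angles)

    rng = np.random.default_rng(2)
    endmembers = rng.random((3, 100))      # 3 classes, 100 bands, synthetic
    pixel = endmembers[1] + 0.05 * rng.normal(size=100)
    label, angle = sam_classify(pixel, endmembers)
    print(f"assigned class {label} at angle {angle:.3f} rad")
    ```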

  11. Part 2. Development of Enhanced Statistical Methods for Assessing Health Effects Associated with an Unknown Number of Major Sources of Multiple Air Pollutants.

    Science.gov (United States)

    Park, Eun Sug; Symanski, Elaine; Han, Daikwon; Spiegelman, Clifford

    2015-06-01

    A major difficulty with assessing source-specific health effects is that source-specific exposures cannot be measured directly; rather, they need to be estimated by a source-apportionment method such as multivariate receptor modeling. The uncertainty in source apportionment (uncertainty in source-specific exposure estimates and model uncertainty due to the unknown number of sources and identifiability conditions) has been largely ignored in previous studies. Also, spatial dependence of multipollutant data collected from multiple monitoring sites has not yet been incorporated into multivariate receptor modeling. The objectives of this project are (1) to develop a multipollutant approach that incorporates both sources of uncertainty in source apportionment into the assessment of source-specific health effects and (2) to develop enhanced multivariate receptor models that can account for spatial correlations in the multipollutant data collected from multiple sites. We employed a Bayesian hierarchical modeling framework consisting of multivariate receptor models, health-effects models, and a hierarchical model on latent source contributions. For the health model, we focused on the time-series design in this project. Each combination of number of sources and identifiability conditions (additional constraints on model parameters) defines a different model. We built a set of plausible models with extensive exploratory data analyses and with information from previous studies, and then computed posterior model probability to estimate model uncertainty. Parameter estimation and model uncertainty estimation were implemented simultaneously by Markov chain Monte Carlo (MCMC) methods. We validated the methods using simulated data. We illustrated the methods using PM2.5 (particulate matter ≤ 2.5 μm in aerodynamic diameter) speciation data and mortality data from Phoenix, Arizona, and Houston, Texas. The Phoenix data included counts of cardiovascular deaths and daily PM2.5

  12. SU-F-T-637: Single-Isocenter Versus Multiple-Isocenter VMAT SRS for Unusual Multiple Metastasis Case with Two Widely Separated Lesions

    Energy Technology Data Exchange (ETDEWEB)

    Thomas, EM; Popple, RA; Fiveash, JB [The University of Alabama at Birmingham, Birmingham, AL (United States)

    2016-06-15

    Purpose: Single-isocenter (SI) volumetric modulated arc therapy has been shown to be an effective and efficient approach to multiple metastasis radiosurgery. However, certain extreme cases raise the question of whether multiple-isocenter (MI) approaches can still generate superior plans. In this study, we ask this question with respect to a clinical case with two very widely separated lesions. Methods: A patient with two widely separated (d = 12 cm) tumors was treated with SI-VMAT SRS using a 10 MV flattening-filter-free (FFF) beam with a high-definition multi-leaf collimator (HD-MLC, 2.5/5 mm) in two non-coplanar arcs, using concentric rings to enforce a steep gradient. Because of the lesion positioning with respect to the selected collimator angle, the lesions were treated by the 5 mm leaves. We re-planned the case with a congruent arc arrangement but a separate isocenter for each lesion. In this manner, the lesions were treated by the 2.5 mm leaves. Conformity index (CI), V50%, and mean brain dose were compared. Results: Neither conformity (CI-SI = 1.12, CI-MI = 1.08) nor V50% (V50%-SI = 8.82 cc, V50%-MI = 8.81 cc) was improved by utilizing a separate isocenter for each lesion. Mean brain dose was slightly reduced (dmean-SI = 118.4 cGy, dmean-MI = 88.7 cGy) by using multiple isocenters. Conclusion: For this case with a lesion at the apex of the brain and another distantly located at the base of the skull, employing a separate isocenter for each target did not meaningfully improve plan quality. Single-isocenter VMAT has been shown feasible and equivalent to multiple-isocenter VMAT for multiple metastasis cases in general. In this extreme case, single- and multiple-isocenter VMAT were also equivalent. If rotational setup errors are appropriately corrected, the increased delivery efficiency of the single-isocenter approach renders it preferable to the multiple-isocenter approach. Drs. Thomas, Popple, and Fiveash have all received honoraria from Varian Medical Systems for discussing their experiences with

  13. An implementation of the multiple multipole method in the analysis of elliptical objects to enhance backscattered light

    Science.gov (United States)

    Jalali, T.

    2015-07-01

    In this paper, we present the modelling of dielectric elliptical shapes with respect to a highly confined power distribution in the resulting nanojet, parameterized according to the beam waist and its beam divergence. The method uses spherical Bessel functions as basis functions, adapted to the standard multiple multipole method. The method can handle elliptically shaped particles across changes of size and refractive index, which have been studied under plane-wave illumination with the two- and three-dimensional multiple multipole method. Because of its fast and good convergence, the results obtained from the simulation are highly accurate and reliable. The simulation time is less than a minute in both two and three dimensions. The proposed method is therefore found to be computationally efficient, fast and accurate.

  14. Method of Fusion Diagnosis for Dam Service Status Based on Joint Distribution Function of Multiple Points

    Directory of Open Access Journals (Sweden)

    Zhenxiang Jiang

    2016-01-01

    Full Text Available The traditional methods of diagnosing dam service status are generally suited to a single measuring point. Such methods reflect the local status of a dam without merging multisource data effectively, and are therefore not suitable for diagnosing overall service. This study proposes a new method involving multiple points to diagnose dam service status based on a joint distribution function. The function, which incorporates monitoring data from multiple points, can be established with the t-copula function. The possibility, an important fusion value for different measuring combinations, can then be calculated, and the corresponding diagnostic criterion is established with classical small-probability theory. An engineering case study indicates that the fusion diagnosis method can be conducted in real time and that abnormal points can be detected, thereby providing a new early-warning method for engineering safety.

  15. Assessing Neurocognition via Gamified Experimental Logic: A novel approach to simultaneous acquisition of multiple ERPs

    Directory of Open Access Journals (Sweden)

    Ajay Kumar eNair

    2016-01-01

    Full Text Available The present study describes the development of a neurocognitive paradigm: ‘Assessing Neurocognition via Gamified Experimental Logic’ (ANGEL), for performing the parametric evaluation of multiple neurocognitive functions simultaneously. ANGEL employs an audiovisual sensory motor design for the acquisition of multiple event-related potentials (ERPs) - the C1, P50, MMN, N1, N170, P2, N2pc, LRP, P300 and ERN. The ANGEL paradigm allows assessment of ten neurocognitive variables over the course of three ‘game’ levels of increasing complexity, ranging from simple passive observation to complex discrimination and response in the presence of multiple distractors. The paradigm allows assessment of several levels of rapid decision making: speeded response vs response inhibition; responses to easy vs difficult tasks; responses based on gestalt perception of clear vs ambiguous stimuli; and finally, responses with set shifting during challenging tasks. The paradigm has been tested using 18 healthy participants from both sexes, and the possibilities of varied data analyses are presented in this paper. The ANGEL approach provides an ecologically valid assessment (as compared to existing tools) that quickly yields a very rich dataset and helps to assess multiple ERPs that can be studied extensively to assess cognitive functions in health and disease conditions.

  16. Assessing Neurocognition via Gamified Experimental Logic: A Novel Approach to Simultaneous Acquisition of Multiple ERPs.

    Science.gov (United States)

    Nair, Ajay K; Sasidharan, Arun; John, John P; Mehrotra, Seema; Kutty, Bindu M

    2016-01-01

    The present study describes the development of a neurocognitive paradigm: "Assessing Neurocognition via Gamified Experimental Logic" (ANGEL), for performing the parametric evaluation of multiple neurocognitive functions simultaneously. ANGEL employs an audiovisual sensory motor design for the acquisition of multiple event related potentials (ERPs)-the C1, P50, MMN, N1, N170, P2, N2pc, LRP, P300, and ERN. The ANGEL paradigm allows assessment of 10 neurocognitive variables over the course of three "game" levels of increasing complexity ranging from simple passive observation to complex discrimination and response in the presence of multiple distractors. The paradigm allows assessment of several levels of rapid decision making: speeded up response vs. response-inhibition; responses to easy vs. difficult tasks; responses based on gestalt perception of clear vs. ambiguous stimuli; and finally, responses with set shifting during challenging tasks. The paradigm has been tested using 18 healthy participants from both sexes and the possibilities of varied data analyses have been presented in this paper. The ANGEL approach provides an ecologically valid assessment (as compared to existing tools) that quickly yields a very rich dataset and helps to assess multiple ERPs that can be studied extensively to assess cognitive functions in health and disease conditions.

  17. MULTIPLE CRITERIA METHODS WITH FOCUS ON ANALYTIC HIERARCHY PROCESS AND GROUP DECISION MAKING

    Directory of Open Access Journals (Sweden)

    Lidija Zadnik-Stirn

    2010-12-01

    Full Text Available Managing natural resources is a group multiple criteria decision making problem. In this paper the analytic hierarchy process is the chosen method for handling natural resource problems. The single-decision-maker problem is discussed, and three methods for the derivation of the priority vector are presented: the eigenvector method, the data envelopment analysis method, and the logarithmic least squares method. Further, the group analytic hierarchy process is discussed, and six methods for the aggregation of individual judgments or priorities are compared: the weighted arithmetic mean method, the weighted geometric mean method, and four methods based on data envelopment analysis. A case study on land use in Slovenia is presented. The conclusions review consistency, sensitivity analyses, and some future directions of research.
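
    As a compact illustration of the eigenvector method mentioned above, the sketch below derives the priority vector as the normalized principal eigenvector of a reciprocal pairwise comparison matrix and reports Saaty's consistency ratio; the comparison matrix is hypothetical.

    ```python
    import numpy as np

    # Hypothetical reciprocal pairwise comparison matrix (Saaty 1-9 scale).
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)            # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                        # priority vector (eigenvector method)

    n = A.shape[0]
    CI = (eigvals.real[k] - n) / (n - 1)   # consistency index
    CR = CI / 0.58                         # 0.58 = Saaty's random index for n = 3
    print("priorities:", np.round(w, 3), " CR =", round(CR, 3))
    ```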

  18. A Promising Approach to Integrally Evaluate the Disease Outcome of Cerebral Ischemic Rats Based on Multiple-Biomarker Crosstalk

    Directory of Open Access Journals (Sweden)

    Guimei Ran

    2017-01-01

    Full Text Available Purpose. The study was designed to evaluate the disease outcome based on multiple biomarkers related to cerebral ischemia. Methods. Rats were randomly divided into sham, permanent middle cerebral artery occlusion, and edaravone-treated groups. Cerebral ischemia was induced by permanent middle cerebral artery occlusion surgery in rats. To form a simplified crosstalk network, the related multiple biomarkers were chosen as S100β, HIF-1α, IL-1β, PGI2, TXA2, and GSH-Px. The levels or activities of these biomarkers in plasma were detected before and after ischemia. Concurrently, neurological deficit scores and cerebral infarct volumes were assessed. Based on a mathematic model, network balance maps and three integral disruption parameters (k, φ, and u of the simplified crosstalk network were achieved. Results. The levels or activities of the related biomarkers and neurological deficit scores were significantly impacted by cerebral ischemia. The balance maps intuitively displayed the network disruption, and the integral disruption parameters quantitatively depicted the disruption state of the simplified network after cerebral ischemia. The integral disruption parameter u values correlated significantly with neurological deficit scores and infarct volumes. Conclusion. Our results indicate that the approach based on crosstalk network may provide a new promising way to integrally evaluate the outcome of cerebral ischemia.

  19. A Signal Detection Approach in a Multiple Cohort Study: Different Admission Tools Uniquely Select Different Successful Students

    Directory of Open Access Journals (Sweden)

    Linda van Ooijen-van der Linden

    2018-05-01

    Full Text Available Using multiple admission tools in university admission procedures is common practice. This is particularly useful if different admission tools uniquely select different subgroups of students who will be successful in university programs. A signal-detection approach was used to investigate the accuracy of secondary school grade point average (SSGPA), an admission test score (ACS), and a non-cognitive score (NCS) in uniquely selecting successful students. This was done for three consecutive first-year cohorts of a broad psychology program. Each applicant's score on SSGPA, ACS, or NCS alone, and on seven combinations of these scores, all considered separate "admission tools", was compared at two different cut-off scores (medium and high criterion levels). Each of the tools selected successful students who were not selected by any of the other tools. Both sensitivity and specificity were enhanced by implementing multiple tools. The signal-detection approach distinctively provided useful information for decisions on admission instruments and cut-off scores.
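
    To make the signal-detection framing concrete, the snippet below computes sensitivity, specificity and the d' statistic for one hypothetical tool at one cut-off, treating later study success as the signal; the counts are invented.

    ```python
    from scipy.stats import norm

    # Fabricated 2x2 outcome for one tool at one cut-off: "signal" means the
    # student later succeeds, "response" means the tool admits the student.
    hits, misses = 60, 20                  # successful students admitted/rejected
    false_alarms, correct_rej = 30, 90     # unsuccessful students admitted/rejected

    sensitivity = hits / (hits + misses)
    specificity = correct_rej / (correct_rej + false_alarms)
    fa_rate = false_alarms / (false_alarms + correct_rej)
    d_prime = norm.ppf(sensitivity) - norm.ppf(fa_rate)
    print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, d'={d_prime:.2f}")
    ```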

  20. Multiple constant multiplication optimizations for field programmable gate arrays

    CERN Document Server

    Kumm, Martin

    2016-01-01

    This work covers field programmable gate array (FPGA)-specific optimizations of circuits computing the multiplication of a variable by several constants, commonly denoted as multiple constant multiplication (MCM). These optimizations focus on low resource usage but high performance. They comprise the use of fast carry-chains in adder-based constant multiplications including ternary (3-input) adders as well as the integration of look-up table-based constant multipliers and embedded multipliers to get the optimal mapping to modern FPGAs. The proposed methods can be used for the efficient implementation of digital filters, discrete transforms and many other circuits in the domain of digital signal processing, communication and image processing. Contents Heuristic and ILP-Based Optimal Solutions for the Pipelined Multiple Constant Multiplication Problem Methods to Integrate Embedded Multipliers, LUT-Based Constant Multipliers and Ternary (3-Input) Adders An Optimized Multiple Constant Multiplication Architecture ...
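
    The flavor of MCM optimization can be conveyed in a few lines: multiplying one input by the constants 53 and 11 using only shifts and adds, with the partial product 3x shared between both outputs. This is a hand-derived toy decomposition, not the book's heuristic or ILP solutions.

    ```python
    # Toy shift-and-add realization of y1 = 53*x and y2 = 11*x that shares the
    # partial product 3x, the kind of reuse MCM optimization searches for.
    def mcm_53_11(x: int):
        x3 = (x << 1) + x      # 3x, shared intermediate term
        x5 = (x << 2) + x      # 5x
        y53 = (x3 << 4) + x5   # 48x + 5x = 53x
        y11 = (x << 3) + x3    # 8x + 3x = 11x
        return y53, y11

    assert mcm_53_11(7) == (7 * 53, 7 * 11)
    print(mcm_53_11(7))        # (371, 77)
    ```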

  1. Simultaneous Calibration: A Joint Optimization Approach for Multiple Kinect and External Cameras

    Directory of Open Access Journals (Sweden)

    Yajie Liao

    2017-06-01

    Full Text Available Camera calibration is a crucial problem in many applications, such as 3D reconstruction, structure from motion, object tracking and face alignment. Numerous methods have been proposed to solve the above problem with good performance in the last few decades. However, few methods are targeted at the joint calibration of multi-sensor setups (more than four devices), which is a practical issue in real-time systems. In this paper, we propose a novel method and a corresponding workflow framework to simultaneously calibrate the relative poses of a Kinect and three external cameras. By optimizing the final cost function and adding corresponding weights for the external cameras in different locations, an effective joint calibration of multiple devices is constructed. Furthermore, the method is tested on a practical platform, and experimental results show that the proposed joint calibration method achieves a satisfactory performance in a practical real-time system, with accuracy higher than the manufacturer’s calibration.

  2. Simultaneous Calibration: A Joint Optimization Approach for Multiple Kinect and External Cameras.

    Science.gov (United States)

    Liao, Yajie; Sun, Ying; Li, Gongfa; Kong, Jianyi; Jiang, Guozhang; Jiang, Du; Cai, Haibin; Ju, Zhaojie; Yu, Hui; Liu, Honghai

    2017-06-24

    Camera calibration is a crucial problem in many applications, such as 3D reconstruction, structure from motion, object tracking and face alignment. Numerous methods have been proposed to solve the above problem with good performance in the last few decades. However, few methods are targeted at the joint calibration of multi-sensor setups (more than four devices), which is a practical issue in real-time systems. In this paper, we propose a novel method and a corresponding workflow framework to simultaneously calibrate the relative poses of a Kinect and three external cameras. By optimizing the final cost function and adding corresponding weights for the external cameras in different locations, an effective joint calibration of multiple devices is constructed. Furthermore, the method is tested on a practical platform, and experimental results show that the proposed joint calibration method can achieve a satisfactory performance in a practical real-time system and its accuracy is higher than the manufacturer's calibration.

  3. A Multiple Model Prediction Algorithm for CNC Machine Wear PHM

    Directory of Open Access Journals (Sweden)

    Huimin Chen

    2011-01-01

    Full Text Available The 2010 PHM data challenge focuses on the remaining useful life (RUL) estimation for cutters of a high-speed CNC milling machine using measurements from dynamometer, accelerometer, and acoustic emission sensors. We present a multiple-model approach for wear depth estimation of milling machine cutters using the provided data. The feature selection, initial wear estimation and multiple-model fusion components of the proposed algorithm are explained in detail and compared with several alternative methods using the training data. The final submission ranked #2 among professional and student participants, and the method is applicable to other data-driven PHM problems.

  4. An efficient method for generalized linear multiplicative programming problem with multiplicative constraints.

    Science.gov (United States)

    Zhao, Yingfeng; Liu, Sanyang

    2016-01-01

    We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation programming problem, which is equivalent to a linear program, is proposed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving some linear relaxation programming problems. Global convergence has been proved, and the results of some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient.

  5. A Variational Method to Retrieve the Extinction Profile in Liquid Clouds Using Multiple Field-of-View Lidar

    Science.gov (United States)

    Pounder, Nicola L.; Hogan, Robin J.; Varnai, Tamas; Battaglia, Alessandro; Cahalan, Robert F.

    2011-01-01

    While liquid clouds play a very important role in the global radiation budget, it has been very difficult to remotely determine their internal cloud structure. Ordinary lidar instruments (similar to radars but using visible light pulses) receive strong signals from such clouds, but the information is limited to a thin layer near the cloud boundary. Multiple field-of-view (FOV) lidars offer some new hope, as they are able to isolate photons that were scattered many times by cloud droplets and penetrated deep into a cloud before returning to the instrument. Their data contain new information on cloud structure, although the lack of fast simulation methods has made it challenging to interpret the observations. This paper describes a fast new technique that can simulate multiple-FOV lidar signals and can even estimate the way the signals would change in response to changes in cloud properties, an ability that allows quick refinements of our initial guesses of cloud structure. Results for a hypothetical airborne three-FOV lidar suggest that this approach can help determine cloud structure for a deeper layer in clouds, and can reliably determine the optical thickness of even fairly thick liquid clouds. The algorithm is also applied to stratocumulus observations by the 8-FOV airborne "THOR" lidar. These tests demonstrate that the new method can determine the depth to which a lidar provides useful information on vertical cloud structure. This work opens the way to exploiting data from spaceborne lidar and radar more rigorously than has been possible up to now.

  6. Multiple predictor smoothing methods for sensitivity analysis: Example results

    International Nuclear Information System (INIS)

    Storlie, Curtis B.; Helton, Jon C.

    2008-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described in the first part of this presentation: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. In this, the second and concluding part of the presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present
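
    As a small taste of the first technique in the list, the snippet below fits a LOWESS curve to a synthetic nonlinear input-output sample and uses the variance explained by the smooth as a crude sensitivity measure; the data and the R^2-style summary are illustrative choices, not the WIPP analysis.

    ```python
    import numpy as np
    from statsmodels.nonparametric.smoothers_lowess import lowess

    rng = np.random.default_rng(3)
    x = rng.uniform(0, 1, 200)                           # one model input
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 200)  # nonlinear response

    fitted = lowess(y, x, frac=0.3, return_sorted=True)  # columns: x, smoothed y
    yhat = np.interp(x, fitted[:, 0], fitted[:, 1])
    r2 = 1 - np.var(y - yhat) / np.var(y)                # variance explained
    print(f"LOWESS sensitivity measure R^2 = {r2:.2f}")
    ```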

  7. An efficient multiple particle filter based on the variational Bayesian approach

    KAUST Repository

    Ait-El-Fquih, Boujemaa

    2015-12-07

    This paper addresses the filtering problem in large-dimensional systems, in which conventional particle filters (PFs) remain computationally prohibitive owing to the large number of particles needed to obtain reasonable performance. To overcome this drawback, a class of multiple particle filters (MPFs) has recently been introduced, in which the state-space is split into low-dimensional subspaces and a separate PF is then applied to each subspace. In this paper, we adopt the variational Bayesian (VB) approach to propose a new MPF, the VBMPF. The proposed filter is computationally more efficient, since the propagation of each particle requires generating only one (new) particle, while in the standard MPFs a set of (children) particles needs to be generated. In a numerical test, the proposed VBMPF behaves better than the PF and MPF.

  8. The impact of farmers’ participation in field trials in creating awareness and stimulating compliance with the World Health Organization’s farm-based multiple-barrier approach

    DEFF Research Database (Denmark)

    Amponsah, Owusu; Vigre, Håkan; Schou, Torben Wilde

    2016-01-01

    The results of a study aimed at assessing the extent to which urban vegetable farmers' participation in field trials can impact their awareness of, and engender compliance with, the World Health Organization's farm-based multiple-barrier approach are presented in this paper. Both qualitative and quantitative approaches have been used. One hundred vegetable farmers and four vegetable farmers' associations in the Kumasi Metropolis in Ghana were covered. The individual farmers were grouped into two, namely: (1) participants and (2) non-participants of the farm-based multiple-barrier approach field trials. The results of the study show that participation in the field trials has statistically significant effects on farmers' awareness of the farm-based multiple-barrier approach. Compliance has, however, been undermined by the farmers' perception that the cost of compliance is more...

  9. A characteristic based multiple balance approach for SN on arbitrary polygonal meshes

    International Nuclear Information System (INIS)

    Grove, R.E.; Pevey, R.E.

    1995-01-01

    The authors introduce a new approach for characteristic-based SN transport on arbitrary polygonal meshes in XY geometry. They approximate a general surface as an arbitrary polygon and rotate to a coordinate system aligned with the direction of particle travel. They use exact moment balance equations on whole cells and on subregions called slices, and close the system by analytically solving the characteristic equation. The authors assume spatial functions for the boundary conditions and cell sources, and formulate analogous functions for the outgoing edge and cell angular fluxes which exactly preserve the spatial moments of the analytic solution. In principle, their approach provides the framework to extend characteristic methods formulated on rectangular grids to arbitrary polygonal meshes. The authors derive schemes based on step and linear spatial approximations. Their step characteristic scheme is mathematically equivalent to the Extended Step Characteristic (ESC) method, but the approaches and schemes differ in the geometry rotation and in the solution form. Their solutions are simple and permit edge-based transport sweep ordering.

  10. Approaching complexity by stochastic methods: From biological systems to turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Friedrich, Rudolf [Institute for Theoretical Physics, University of Muenster, D-48149 Muenster (Germany); Peinke, Joachim [Institute of Physics, Carl von Ossietzky University, D-26111 Oldenburg (Germany); Sahimi, Muhammad [Mork Family Department of Chemical Engineering and Materials Science, University of Southern California, Los Angeles, CA 90089-1211 (United States); Reza Rahimi Tabar, M., E-mail: mohammed.r.rahimi.tabar@uni-oldenburg.de [Department of Physics, Sharif University of Technology, Tehran 11155-9161 (Iran, Islamic Republic of); Institute of Physics, Carl von Ossietzky University, D-26111 Oldenburg (Germany); Fachbereich Physik, Universitaet Osnabrueck, Barbarastrasse 7, 49076 Osnabrueck (Germany)

    2011-09-15

    This review addresses a central question in the field of complex systems: given a fluctuating (in time or space), sequentially measured set of experimental data, how should one analyze the data, assess their underlying trends, and discover the characteristics of the fluctuations that generate the experimental traces? In recent years, significant progress has been made in addressing this question for a class of stochastic processes that can be modeled by Langevin equations, including additive as well as multiplicative fluctuations or noise. Important results have emerged from the analysis of temporal data for such diverse fields as neuroscience, cardiology, finance, economy, surface science, turbulence, seismic time series and epileptic brain dynamics, to name but a few. Furthermore, it has been recognized that a similar approach can be applied to the data that depend on a length scale, such as velocity increments in fully developed turbulent flow, or height increments that characterize rough surfaces. A basic ingredient of the approach to the analysis of fluctuating data is the presence of a Markovian property, which can be detected in real systems above a certain time or length scale. This scale is referred to as the Markov-Einstein (ME) scale, and has turned out to be a useful characteristic of complex systems. We provide a review of the operational methods that have been developed for analyzing stochastic data in time and scale. We address in detail the following issues: (i) reconstruction of stochastic evolution equations from data in terms of the Langevin equations or the corresponding Fokker-Planck equations and (ii) intermittency, cascades, and multiscale correlation functions.
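
    A minimal numerical sketch of the reconstruction task in point (i): simulate an Ornstein-Uhlenbeck process and recover its drift and diffusion functions from the data via binned conditional moments (the first two Kramers-Moyal coefficients). All parameters are arbitrary illustrations.

    ```python
    import numpy as np

    # Ground truth: Ornstein-Uhlenbeck process dx = -x dt + sqrt(2D) dW.
    rng = np.random.default_rng(4)
    dt, D, n = 1e-3, 0.5, 200_000
    x = np.empty(n); x[0] = 0.0
    for i in range(n - 1):
        x[i + 1] = x[i] - x[i] * dt + np.sqrt(2 * D * dt) * rng.normal()

    # Binned conditional moments give the first two Kramers-Moyal coefficients.
    bins = np.linspace(-1.5, 1.5, 31)
    idx = np.digitize(x[:-1], bins)
    dx = np.diff(x)
    for b in range(5, 26, 5):
        sel = idx == b
        if sel.sum() > 100:
            x_mid = 0.5 * (bins[b - 1] + bins[b])
            drift = dx[sel].mean() / dt                # estimate of D1(x), ~ -x
            diff2 = (dx[sel] ** 2).mean() / (2 * dt)   # estimate of D2(x), ~ D
            print(f"x~{x_mid:+.2f}: D1~{drift:+.2f}, D2~{diff2:.2f}")
    ```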

  11. Approaching complexity by stochastic methods: From biological systems to turbulence

    International Nuclear Information System (INIS)

    Friedrich, Rudolf; Peinke, Joachim; Sahimi, Muhammad; Reza Rahimi Tabar, M.

    2011-01-01

    This review addresses a central question in the field of complex systems: given a fluctuating (in time or space), sequentially measured set of experimental data, how should one analyze the data, assess their underlying trends, and discover the characteristics of the fluctuations that generate the experimental traces? In recent years, significant progress has been made in addressing this question for a class of stochastic processes that can be modeled by Langevin equations, including additive as well as multiplicative fluctuations or noise. Important results have emerged from the analysis of temporal data for such diverse fields as neuroscience, cardiology, finance, economy, surface science, turbulence, seismic time series and epileptic brain dynamics, to name but a few. Furthermore, it has been recognized that a similar approach can be applied to the data that depend on a length scale, such as velocity increments in fully developed turbulent flow, or height increments that characterize rough surfaces. A basic ingredient of the approach to the analysis of fluctuating data is the presence of a Markovian property, which can be detected in real systems above a certain time or length scale. This scale is referred to as the Markov-Einstein (ME) scale, and has turned out to be a useful characteristic of complex systems. We provide a review of the operational methods that have been developed for analyzing stochastic data in time and scale. We address in detail the following issues: (i) reconstruction of stochastic evolution equations from data in terms of the Langevin equations or the corresponding Fokker-Planck equations and (ii) intermittency, cascades, and multiscale correlation functions.

  12. Efficient surrogate models for reliability analysis of systems with multiple failure modes

    International Nuclear Information System (INIS)

    Bichon, Barron J.; McFarland, John M.; Mahadevan, Sankaran

    2011-01-01

    Despite many advances in the field of computational reliability analysis, the efficient estimation of the reliability of a system with multiple failure modes remains a persistent challenge. Various sampling and analytical methods are available, but they typically require accepting a tradeoff between accuracy and computational efficiency. In this work, a surrogate-based approach is presented that simultaneously addresses the issues of accuracy, efficiency, and unimportant failure modes. The method is based on the creation of Gaussian process surrogate models that are required to be locally accurate only in the regions of the component limit states that contribute to system failure. This approach to constructing surrogate models is demonstrated to be both an efficient and accurate method for system-level reliability analysis. Highlights: extends efficient global reliability analysis to systems with multiple failure modes; constructs locally accurate Gaussian process models of each response; provides a highly efficient and accurate method for assessing system reliability; effectiveness is demonstrated on several test problems from the literature.

  13. Fuzzy multiple linear regression: A computational approach

    Science.gov (United States)

    Juang, C. H.; Huang, X. H.; Fleming, J. W.

    1992-01-01

    This paper presents a new computational approach for performing fuzzy regression. In contrast to Bardossy's approach, the new approach, while dealing with fuzzy variables, closely follows the conventional regression technique. In this approach, treatment of fuzzy input is more 'computational' than 'symbolic.' The following sections first outline the formulation of the new approach, then deal with the implementation and computational scheme, and this is followed by examples to illustrate the new procedure.

  14. Multiple-Parameter Estimation Method Based on Spatio-Temporal 2-D Processing for Bistatic MIMO Radar

    Directory of Open Access Journals (Sweden)

    Shouguo Yang

    2015-12-01

    Full Text Available A novel spatio-temporal two-dimensional (2-D) processing method that can jointly estimate the transmitting-receiving azimuth and Doppler frequency for bistatic multiple-input multiple-output (MIMO) radar in the presence of spatial colored noise and an unknown number of targets is proposed. In the temporal domain, the cross-correlation of the matched filters' outputs for different time-delay samplings is used to eliminate the spatial colored noise. In the spatial domain, the proposed method uses a diagonal loading method and subspace theory to estimate the direction of departure (DOD) and direction of arrival (DOA); the Doppler frequency can then be accurately estimated through the estimation of the DOD and DOA. By skipping target-number estimation and the eigenvalue decomposition (EVD) of the data covariance matrix, and requiring only a one-dimensional search, the proposed method achieves low computational complexity. Furthermore, the proposed method is suitable for bistatic MIMO radar with an arbitrary transmitted and received geometrical configuration. The correctness and efficiency of the proposed method are verified by computer simulation results.

  15. A sequential mixed methods research approach to investigating HIV ...

    African Journals Online (AJOL)

    2016-09-03

    Sequential mixed methods research is an effective approach for ... show the effectiveness of the research method. ... qualitative data before quantitative datasets ... whereby both types of data are collected simultaneously.

  16. Using Module Analysis for Multiple Choice Responses: A New Method Applied to Force Concept Inventory Data

    Science.gov (United States)

    Brewe, Eric; Bruun, Jesper; Bearden, Ian G.

    2016-01-01

    We describe "Module Analysis for Multiple Choice Responses" (MAMCR), a new methodology for carrying out network analysis on responses to multiple choice assessments. This method is used to identify modules of non-normative responses which can then be interpreted as an alternative to factor analysis. MAMCR allows us to identify conceptual…

  17. Analysis of Genome-Wide Association Studies with Multiple Outcomes Using Penalization

    Science.gov (United States)

    Liu, Jin; Huang, Jian; Ma, Shuangge

    2012-01-01

    Genome-wide association studies have been extensively conducted, searching for markers for biologically meaningful outcomes and phenotypes. Penalization methods have been adopted in the analysis of the joint effects of a large number of SNPs (single nucleotide polymorphisms) and in marker identification. This study is partly motivated by the analysis of the heterogeneous stock mice dataset, in which multiple correlated phenotypes and a large number of SNPs are available. Existing penalization methods designed to analyze a single response variable cannot accommodate the correlation among multiple response variables. With multiple response variables sharing the same set of markers, joint modeling is first employed to accommodate the correlation. The group Lasso approach is adopted to select markers associated with all the outcome variables. An efficient computational algorithm is developed. A simulation study and the analysis of the heterogeneous stock mice dataset show that the proposed method can outperform existing penalization methods. PMID:23272092
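
    scikit-learn's MultiTaskLasso, whose L2,1 penalty is the closest off-the-shelf analogue to the group Lasso across outcomes described above, can reproduce the all-or-none selection behavior on synthetic data; the genotypes, effect sizes and regularization strength below are fabricated.

    ```python
    import numpy as np
    from sklearn.linear_model import MultiTaskLasso

    rng = np.random.default_rng(5)
    n, p, q = 200, 500, 3                 # samples, SNPs, correlated phenotypes
    X = rng.binomial(2, 0.3, size=(n, p)).astype(float)   # additive 0/1/2 coding
    beta = np.zeros((p, q))
    beta[:5] = rng.normal(1.0, 0.2, size=(5, q))          # 5 causal SNPs, all traits
    Y = X @ beta + rng.normal(0.0, 1.0, size=(n, q))

    # The L2,1 penalty couples one SNP's coefficients across all phenotypes,
    # so each SNP is selected for every outcome or for none.
    model = MultiTaskLasso(alpha=0.2).fit(X, Y)
    selected = np.where(np.linalg.norm(model.coef_.T, axis=1) > 1e-8)[0]
    print("selected SNPs:", selected)
    ```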

  18. State of the art in non-animal approaches for skin sensitization testing: from individual test methods towards testing strategies.

    Science.gov (United States)

    Ezendam, Janine; Braakhuis, Hedwig M; Vandebriel, Rob J

    2016-12-01

    The hazard assessment of skin sensitizers relies mainly on animal testing, but much progress is made in the development, validation and regulatory acceptance and implementation of non-animal predictive approaches. In this review, we provide an update on the available computational tools and animal-free test methods for the prediction of skin sensitization hazard. These individual test methods address mostly one mechanistic step of the process of skin sensitization induction. The adverse outcome pathway (AOP) for skin sensitization describes the key events (KEs) that lead to skin sensitization. In our review, we have clustered the available test methods according to the KE they inform: the molecular initiating event (MIE/KE1)-protein binding, KE2-keratinocyte activation, KE3-dendritic cell activation and KE4-T cell activation and proliferation. In recent years, most progress has been made in the development and validation of in vitro assays that address KE2 and KE3. No standardized in vitro assays for T cell activation are available; thus, KE4 cannot be measured in vitro. Three non-animal test methods, addressing either the MIE, KE2 or KE3, are accepted as OECD test guidelines, and this has accelerated the development of integrated or defined approaches for testing and assessment (e.g. testing strategies). The majority of these approaches are mechanism-based, since they combine results from multiple test methods and/or computational tools that address different KEs of the AOP to estimate skin sensitization potential and sometimes potency. Other approaches are based on statistical tools. Until now, eleven different testing strategies have been published, the majority using the same individual information sources. Our review shows that some of the defined approaches to testing and assessment are able to accurately predict skin sensitization hazard, sometimes even more accurate than the currently used animal test. A few defined approaches are developed to provide an

  19. Detection of bifurcations in noisy coupled systems from multiple time series

    International Nuclear Information System (INIS)

    Williamson, Mark S.; Lenton, Timothy M.

    2015-01-01

    We generalize a method of detecting an approaching bifurcation in a time series of a noisy system from the special case of one dynamical variable to multiple dynamical variables. For a system described by a stochastic differential equation consisting of an autonomous deterministic part with one dynamical variable and an additive white noise term, small perturbations away from the system's fixed point will decay slower the closer the system is to a bifurcation. This phenomenon is known as critical slowing down and all such systems exhibit this decay-type behaviour. However, when the deterministic part has multiple coupled dynamical variables, the possible dynamics can be much richer, exhibiting oscillatory and chaotic behaviour. In our generalization to the multi-variable case, we find additional indicators to decay rate, such as frequency of oscillation. In the case of approaching a homoclinic bifurcation, there is no change in decay rate but there is a decrease in frequency of oscillations. The expanded method therefore adds extra tools to help detect and classify approaching bifurcations given multiple time series, where the underlying dynamics are not fully known. Our generalisation also allows bifurcation detection to be applied spatially if one treats each spatial location as a new dynamical variable. One may then determine the unstable spatial mode(s). This is also something that has not been possible with the single variable method. The method is applicable to any set of time series regardless of its origin, but may be particularly useful when anticipating abrupt changes in the multi-dimensional climate system.

  20. Detection of bifurcations in noisy coupled systems from multiple time series

    Science.gov (United States)

    Williamson, Mark S.; Lenton, Timothy M.

    2015-03-01

    We generalize a method of detecting an approaching bifurcation in a time series of a noisy system from the special case of one dynamical variable to multiple dynamical variables. For a system described by a stochastic differential equation consisting of an autonomous deterministic part with one dynamical variable and an additive white noise term, small perturbations away from the system's fixed point will decay slower the closer the system is to a bifurcation. This phenomenon is known as critical slowing down and all such systems exhibit this decay-type behaviour. However, when the deterministic part has multiple coupled dynamical variables, the possible dynamics can be much richer, exhibiting oscillatory and chaotic behaviour. In our generalization to the multi-variable case, we find additional indicators to decay rate, such as frequency of oscillation. In the case of approaching a homoclinic bifurcation, there is no change in decay rate but there is a decrease in frequency of oscillations. The expanded method therefore adds extra tools to help detect and classify approaching bifurcations given multiple time series, where the underlying dynamics are not fully known. Our generalisation also allows bifurcation detection to be applied spatially if one treats each spatial location as a new dynamical variable. One may then determine the unstable spatial mode(s). This is also something that has not been possible with the single variable method. The method is applicable to any set of time series regardless of its origin, but may be particularly useful when anticipating abrupt changes in the multi-dimensional climate system.

  1. Detection of bifurcations in noisy coupled systems from multiple time series

    Energy Technology Data Exchange (ETDEWEB)

    Williamson, Mark S., E-mail: m.s.williamson@exeter.ac.uk; Lenton, Timothy M. [Earth System Science Group, College of Life and Environmental Sciences, University of Exeter, Laver Building, North Park Road, Exeter EX4 4QE (United Kingdom)

    2015-03-15

    We generalize a method of detecting an approaching bifurcation in a time series of a noisy system from the special case of one dynamical variable to multiple dynamical variables. For a system described by a stochastic differential equation consisting of an autonomous deterministic part with one dynamical variable and an additive white noise term, small perturbations away from the system's fixed point will decay slower the closer the system is to a bifurcation. This phenomenon is known as critical slowing down and all such systems exhibit this decay-type behaviour. However, when the deterministic part has multiple coupled dynamical variables, the possible dynamics can be much richer, exhibiting oscillatory and chaotic behaviour. In our generalization to the multi-variable case, we find additional indicators to decay rate, such as frequency of oscillation. In the case of approaching a homoclinic bifurcation, there is no change in decay rate but there is a decrease in frequency of oscillations. The expanded method therefore adds extra tools to help detect and classify approaching bifurcations given multiple time series, where the underlying dynamics are not fully known. Our generalisation also allows bifurcation detection to be applied spatially if one treats each spatial location as a new dynamical variable. One may then determine the unstable spatial mode(s). This is also something that has not been possible with the single variable method. The method is applicable to any set of time series regardless of its origin, but may be particularly useful when anticipating abrupt changes in the multi-dimensional climate system.
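
    A minimal sketch of the multivariate indicator idea is given below, assuming linearized dynamics near a fixed point and evenly sampled data: the lag-1 propagator matrix is fitted by least squares, its eigenvalue moduli play the role of the decay-rate indicator (drifting towards 1 near a bifurcation), and the eigenvalue phases give the oscillation frequencies. This is in the spirit of the method described, not the authors' exact estimator.

        import numpy as np

        def propagator_indicators(X, dt=1.0):
            """Fit x_{t+1} ~ A x_t to a multivariate time series X (T x n) and
            return eigenvalue moduli (decay indicator) and angular frequencies."""
            X0, X1 = X[:-1] - X.mean(0), X[1:] - X.mean(0)
            A, *_ = np.linalg.lstsq(X0, X1, rcond=None)   # least-squares propagator
            lam = np.linalg.eigvals(A.T)
            return np.abs(lam), np.angle(lam) / dt        # |lambda| -> 1 near bifurcation

        # Toy example: noisy damped oscillator, Euler-Maruyama with step dt.
        rng = np.random.default_rng(1)
        dt, T = 0.1, 20000
        J = np.array([[0.0, 1.0], [-1.0, -0.2]])          # Jacobian at the fixed point
        x, xs = np.zeros(2), []
        for _ in range(T):
            x = x + dt * (J @ x) + np.sqrt(dt) * rng.normal(scale=0.05, size=2)
            xs.append(x.copy())
        mods, freqs = propagator_indicators(np.array(xs), dt)
        print(mods, freqs)   # moduli near exp(-0.1*dt), |freq| near 1 rad per unit time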

  2. Merging for Particle-Mesh Complex Particle Kinetic Modeling of the Multiple Plasma Beams

    Science.gov (United States)

    Lipatov, Alexander S.

    2011-01-01

    We suggest a merging procedure for the Particle-Mesh Complex Particle Kinetic (PMCPK) method in the case of inter-penetrating flows (multiple plasma beams). We examine the standard particle-in-cell (PIC) and the PMCPK methods in the case of particle acceleration by shock surfing for a wide range of the controlling numerical parameters. The plasma dynamics is described by a hybrid (particle-ion, fluid-electron) model. Note that a mesh may be needed when the modeling includes computation of an electromagnetic field. Our calculations use specified, time-independent electromagnetic fields for the shock, rather than self-consistently generated fields. While the particle-mesh method is a well-verified approach, the CPK method seems to be a good approach for multiscale modeling that includes multiple regions with various particle/fluid plasma behavior. However, the CPK method is still in need of verification for studying basic plasma phenomena: particle heating and acceleration by collisionless shocks, magnetic field reconnection, beam dynamics, etc.

  3. Performance of a novel multiple-signal luminescence sediment tracing method

    Science.gov (United States)

    Reimann, Tony

    2014-05-01

    transport. The EET increases with increasing distance from the nourishment source, indicating that our method is capable of quantifying sediment transport distances. We furthermore observed that the EET of an aeolian analogue is orders of magnitude higher than those of the water-lain transported Zandmotor samples, suggesting that our approach is also able to differentiate between different modes of coastal sediment transport. This new luminescence approach offers new possibilities to decipher the sedimentation history of palaeo-environmental archives, e.g. in coastal, fluvial or aeolian settings. References: Reimann, T. et al. Quantifying the degree of bleaching during sediment transport using a polymineral multiple-signal luminescence approach. Submitted. Stive, M.J.F. et al. 2013. A New Alternative to Saving Our Beaches from Sea-Level Rise: The Sand Engine. Journal of Coastal Research 29, 1001-1008.

  4. Visualization of a City Sustainability Index (CSI): Towards Transdisciplinary Approaches Involving Multiple Stakeholders

    Directory of Open Access Journals (Sweden)

    Koichiro Mori

    2015-09-01

    Full Text Available We have developed a visualized 3-D model of a City Sustainability Index (CSI) based on our original concept of city sustainability, in which a sustainable city is defined as one that maximizes socio-economic benefits while meeting constraint conditions of the environment and socio-economic equity on a permanent basis. The CSI is based on constraint and maximization indicators. Constraint indicators assess whether a city meets the necessary minimum conditions for city sustainability. Maximization indicators measure the benefits that a city generates in socio-economic aspects. When used in the policy-making process, the choice of constraint indicators should be implemented using a top-down approach. In contrast, a bottom-up approach is more suitable for defining maximization indicators because this technique involves multiple stakeholders (in a transdisciplinary approach). Using different materials of various colors, shapes, and sizes, we designed and constructed the visualized physical model of the CSI to help people evaluate and compare the performance of different cities in terms of sustainability. The visualized model of the CSI can convey complicated information in a simple and straightforward manner to diverse stakeholders so that the sustainability analysis can be understood intuitively by ordinary citizens as well as experts. Thus, the CSI model helps stakeholders to develop critical thinking about city sustainability and enables policymakers to make informed decisions for sustainability through a transdisciplinary approach.

  5. Integrated health messaging for multiple neglected zoonoses: Approaches, challenges and opportunities in Morocco.

    Science.gov (United States)

    Ducrotoy, M J; Yahyaoui Azami, H; El Berbri, I; Bouslikhane, M; Fassi Fihri, O; Boué, F; Petavy, A F; Dakkak, A; Welburn, S; Bardosh, K L

    2015-12-01

    Integrating the control of multiple neglected zoonoses at the community level holds great potential, but critical data is missing to inform the design and implementation of different interventions. In this paper we present an evaluation of an integrated health messaging intervention, using PowerPoint presentations, for five bacterial (brucellosis and bovine tuberculosis) and dog-associated (rabies, cystic echinococcosis and leishmaniasis) zoonotic diseases in Sidi Kacem Province, northwest Morocco. Conducted by veterinary and epidemiology students between 2013 and 2014, this followed a process-based approach that encouraged sequential adaptation of images, key messages, and delivery strategies using auto-evaluation and end-user feedback. We describe the challenges and opportunities of this approach, reflecting on who was targeted, how education was conducted, and what tools and approaches were used. Our results showed that: (1) replacing words with local pictures and using "hands-on" activities improved receptivity; (2) information "overload" easily occurred when disease transmission pathways did not overlap; (3) access and receptivity at schools was greater than at the community level; and (4) piggy-backing on high-priority diseases like rabies offered an important avenue to increase knowledge of other zoonoses. We conclude by discussing the merits of incorporating our validated education approach into the school curriculum in order to influence long-term behaviour change. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Raster-based outranking method: a new approach for municipal solid waste landfill (MSW) siting.

    Science.gov (United States)

    Hamzeh, Mohamad; Abbaspour, Rahim Ali; Davalou, Romina

    2015-08-01

    MSW landfill siting is a complicated process because it requires integration of several factors. In this paper, geographic information system (GIS) and multiple criteria decision analysis (MCDA) were combined to handle the municipal solid waste (MSW) landfill siting. For this purpose, first, 16 input data layers were prepared in the GIS environment. Then, the exclusionary lands were eliminated and potentially suitable areas for MSW disposal were identified. These potentially suitable areas, in an innovative approach, were further examined by deploying the Preference Ranking Organization Method for Enrichment Evaluations (PROMETHEE) II and the analytic network process (ANP), which are two of the most recent MCDA methods, in order to determine land suitability for landfilling. PROMETHEE II was used to determine a complete ranking of the alternatives, while ANP was employed to quantify the subjective judgments of evaluators as criteria weights. The resulting land suitability was reported on a grading scale from 1 to 5, from the least to the most suitable areas. Finally, three optimal sites were selected by taking into consideration the local conditions of 15 candidate sites for MSW landfilling. Research findings show that the raster-based method yields effective results.
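
    Since the abstract names PROMETHEE II as the ranking engine, a compact sketch of its net-flow computation is given below, using the simple "usual" preference function and made-up site scores and weights; in the paper's workflow the weights would come from ANP and the alternatives from the GIS screening.

        import numpy as np

        def promethee_ii(scores, weights, maximize):
            """Net outranking flows for alternatives (rows) over criteria (cols),
            using the 'usual' preference function P(d) = 1 if d > 0 else 0."""
            m = scores.shape[0]
            signed = np.where(maximize, scores, -scores)   # turn costs into benefits
            pi = np.zeros((m, m))
            for a in range(m):
                for b in range(m):
                    if a != b:
                        pref = (signed[a] > signed[b]).astype(float)
                        pi[a, b] = np.dot(weights, pref)   # weighted preference index
            phi_plus = pi.sum(axis=1) / (m - 1)            # how much a outranks others
            phi_minus = pi.sum(axis=0) / (m - 1)           # how much a is outranked
            return phi_plus - phi_minus                    # net flow: higher is better

        # Hypothetical candidate sites scored on 3 criteria (e.g. distance to
        # water, slope, land cost); weights are illustrative, not from the paper.
        scores = np.array([[3.0, 2.0, 40.0],
                           [5.0, 4.0, 55.0],
                           [4.0, 1.0, 30.0]])
        weights = np.array([0.5, 0.3, 0.2])
        print(promethee_ii(scores, weights, maximize=np.array([True, False, False])))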

  7. Dynamical properties of the growing continuum using multiple-scale method

    Directory of Open Access Journals (Sweden)

    Hynčík L.

    2008-12-01

    Full Text Available The theory of growth and remodeling is applied to a 1D continuum. This can serve, e.g., as a model of a muscle fibre or a piezo-electric stack. A hyperelastic material described by the free-energy potential suggested by Fung is used, and the change of stiffness is taken into account. The corresponding equations define a dynamical system with two degrees of freedom. Its stability and the properties of its bifurcations are studied using the multiple-scale method. Conditions are given under which a degenerate Hopf bifurcation occurs.

  8. A Unified Spatiotemporal Modeling Approach for Predicting Concentrations of Multiple Air Pollutants in the Multi-Ethnic Study of Atherosclerosis and Air Pollution

    Science.gov (United States)

    Olives, Casey; Kim, Sun-Young; Sheppard, Lianne; Sampson, Paul D.; Szpiro, Adam A.; Oron, Assaf P.; Lindström, Johan; Vedal, Sverre; Kaufman, Joel D.

    2014-01-01

    Background: Cohort studies of the relationship between air pollution exposure and chronic health effects require predictions of exposure over long periods of time. Objectives: We developed a unified modeling approach for predicting fine particulate matter, nitrogen dioxide, oxides of nitrogen, and black carbon (as measured by light absorption coefficient) in six U.S. metropolitan regions from 1999 through early 2012 as part of the Multi-Ethnic Study of Atherosclerosis and Air Pollution (MESA Air). Methods: We obtained monitoring data from regulatory networks and supplemented those data with study-specific measurements collected from MESA Air community locations and participants’ homes. In each region, we applied a spatiotemporal model that included a long-term spatial mean, time trends with spatially varying coefficients, and a spatiotemporal residual. The mean structure was derived from a large set of geographic covariates that was reduced using partial least-squares regression. We estimated time trends from observed time series and used spatial smoothing methods to borrow strength between observations. Results: Prediction accuracy was high for most models, with cross-validation R^2 (R^2_CV) > 0.80 at regulatory and fixed sites for most regions and pollutants. At home sites, overall R^2_CV ranged from 0.45 to 0.92, and temporally adjusted R^2_CV ranged from 0.23 to 0.92. Conclusions: This novel spatiotemporal modeling approach provides accurate fine-scale predictions in multiple regions for four pollutants. We have generated participant-specific predictions for MESA Air to investigate health effects of long-term air pollution exposures. These successes highlight modeling advances that can be adopted more widely in modern cohort studies. Citation: Keller JP, Olives C, Kim SY, Sheppard L, Sampson PD, Szpiro AA, Oron AP, Lindström J, Vedal S, Kaufman JD. 2015. A unified spatiotemporal modeling approach for predicting concentrations of multiple air pollutants in the Multi
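
    The dimension-reduction step described (compressing a large set of geographic covariates with partial least squares before predicting long-term means) can be sketched with scikit-learn as below; the covariates, sizes, and number of components are illustrative assumptions, not MESA Air settings.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(2)
        n_sites, n_cov = 120, 300                  # monitors x geographic covariates
        G = rng.normal(size=(n_sites, n_cov))      # e.g. road density, land use, ...
        beta = np.zeros(n_cov); beta[:5] = [1.0, -0.8, 0.6, 0.5, -0.4]
        mean_conc = G @ beta + rng.normal(scale=0.5, size=n_sites)

        # PLS compresses hundreds of collinear covariates into a few scores that
        # are maximally covariant with the observed long-term means.
        pls = PLSRegression(n_components=3).fit(G, mean_conc)
        print("train R^2:", pls.score(G, mean_conc))
        pred = pls.predict(rng.normal(size=(1, n_cov)))   # prediction at a new site
        print(pred.ravel())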

  9. A trace ratio maximization approach to multiple kernel-based dimensionality reduction.

    Science.gov (United States)

    Jiang, Wenhao; Chung, Fu-lai

    2014-01-01

    Most dimensionality reduction techniques are based on one metric or one kernel, hence it is necessary to select an appropriate kernel for kernel-based dimensionality reduction. Multiple kernel learning for dimensionality reduction (MKL-DR) has been recently proposed to learn a kernel from a set of base kernels which are seen as different descriptions of data. As MKL-DR does not involve regularization, it might be ill-posed under some conditions and consequently its applications are hindered. This paper proposes a multiple kernel learning framework for dimensionality reduction based on regularized trace ratio, termed as MKL-TR. Our method aims at learning a transformation into a space of lower dimension and a corresponding kernel from the given base kernels among which some may not be suitable for the given data. The solutions for the proposed framework can be found based on trace ratio maximization. The experimental results demonstrate its effectiveness in benchmark datasets, which include text, image and sound datasets, for supervised, unsupervised as well as semi-supervised settings. Copyright © 2013 Elsevier Ltd. All rights reserved.
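
    The abstract states that solutions are found by trace ratio maximization. One standard scheme for maximizing tr(V'AV)/tr(V'BV) over orthonormal V is the iteration sketched below, which alternately updates the ratio and takes the top eigenvectors of A - lambda*B; the toy matrices are illustrative, and this is the generic trace-ratio solver rather than the full MKL-TR framework.

        import numpy as np

        def trace_ratio(A, B, d, iters=50, tol=1e-10):
            """Maximize tr(V'AV)/tr(V'BV) over orthonormal V (n x d) by the
            classic iteration: V <- top-d eigenvectors of A - lambda*B."""
            n = A.shape[0]
            V = np.linalg.qr(np.random.default_rng(0).normal(size=(n, d)))[0]
            lam = 0.0
            for _ in range(iters):
                lam_new = np.trace(V.T @ A @ V) / np.trace(V.T @ B @ V)
                if abs(lam_new - lam) < tol:
                    break
                lam = lam_new
                w, U = np.linalg.eigh(A - lam * B)    # symmetric eigendecomposition
                V = U[:, np.argsort(w)[-d:]]          # eigenvectors of largest eigenvalues
            return V, lam

        # Toy symmetric matrices standing in for between/within scatter.
        rng = np.random.default_rng(3)
        M = rng.normal(size=(6, 6)); A = M @ M.T
        N = rng.normal(size=(6, 6)); B = N @ N.T + 6 * np.eye(6)  # keep B positive definite
        V, lam = trace_ratio(A, B, d=2)
        print("optimal trace ratio:", lam)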

  10. A coagulation-powdered activated carbon-ultrafiltration - Multiple barrier approach for removing toxins from two Australian cyanobacterial blooms

    International Nuclear Information System (INIS)

    Dixon, Mike B.; Richard, Yann; Ho, Lionel; Chow, Christopher W.K.; O'Neill, Brian K.; Newcombe, Gayle

    2011-01-01

    Cyanobacteria are a major problem for the worldwide water industry as they can produce metabolites toxic to humans in addition to taste and odour compounds that make drinking water aesthetically displeasing. Removal of cyanobacterial toxins from drinking water is important to avoid serious illness in consumers. This objective can be confidently achieved through the application of the multiple barrier approach to drinking water quality and safety. In this study the use of a multiple barrier approach incorporating coagulation, powdered activated carbon (PAC) and ultrafiltration (UF) was investigated for the removal of intracellular and extracellular cyanobacterial toxins from two naturally occurring blooms in South Australia. Also investigated was the impact of these treatments on the UF flux. In this multibarrier approach, coagulation was used to remove the cells and thus the intracellular toxin, while PAC was used for extracellular toxin adsorption, and finally the UF was used for floc, PAC and cell removal. Cyanobacterial cells were completely removed using the UF membrane alone and when used in conjunction with coagulation. Extracellular toxins were removed to varying degrees by PAC addition. UF flux deteriorated dramatically during a trial with a very high cell concentration; however, the flux was improved by coagulation and PAC addition.

  11. A versatile method for confirmatory evaluation of the effects of a covariate in multiple models

    DEFF Research Database (Denmark)

    Pipper, Christian Bressen; Ritz, Christian; Bisgaard, Hans

    2012-01-01

    Modern epidemiology often requires testing of the effect of a covariate on multiple end points from the same study. However, popular state of the art methods for multiple testing require the tests to be evaluated within the framework of a single model unifying all end points. This severely limits ... The proposed methodology is shown to provide a fine-tuned control of the overall type I error in a wide range of epidemiological experiments where in reality no other useful alternative exists. It is applied to a multiple-end-point study of the effect of neonatal bacterial colonization on development of childhood asthma.

  12. Development and application of a unified balancing approach with multiple constraints

    Science.gov (United States)

    Zorzi, E. S.; Lee, C. C.; Giordano, J. C.

    1985-01-01

    The development of a general analytic approach to constrained balancing that is consistent with past influence coefficient methods is described. The approach uses Lagrange multipliers to impose orbit and/or weight constraints; these constraints are combined with the least squares minimization process to provide a set of coupled equations that result in a single solution form for determining correction weights. Proper selection of constraints results in the capability to: (1) balance higher speeds without disturbing previously balanced modes, through the use of modal trial weight sets; (2) balance off-critical speeds; and (3) balance decoupled modes by use of a single balance plane. If no constraints are imposed, this solution form reduces to the general weighted least squares influence coefficient method. A test facility used to examine the use of the general constrained balancing procedure and application of modal trial weight ratios is also described.
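
    The structure described, least-squares minimization coupled with Lagrange-multiplier constraints in a single solution form, can be sketched as the bordered (KKT) linear system below. The complex influence coefficients and the single constraint are made-up values, and inserting a weighting matrix into the normal-equation block would recover the weighted variant.

        import numpy as np

        def constrained_lsq(A, b, C, d):
            """Minimize ||A w - b||^2 subject to C w = d by solving the bordered
            (KKT) system produced by the Lagrange-multiplier conditions."""
            n, k = A.shape[1], C.shape[0]
            Ah, Ch = A.conj().T, C.conj().T
            KKT = np.block([[Ah @ A, Ch],
                            [C, np.zeros((k, k), dtype=complex)]])
            rhs = np.concatenate([Ah @ b, d])
            sol = np.linalg.solve(KKT, rhs)
            return sol[:n], sol[n:]          # correction weights, multipliers

        # Hypothetical complex influence coefficients (2 readings x 2 planes);
        # the constraint pins the first plane's correction to zero, mimicking
        # "balance without disturbing" a previously balanced mode.
        A = np.array([[1.0 + 0.2j, 0.5 - 0.1j],
                      [0.3 - 0.4j, 0.9 + 0.3j]])
        b = -np.array([0.8 + 0.1j, 0.2 - 0.6j])   # negated measured vibration
        C = np.array([[1.0 + 0j, 0.0 + 0j]])      # w_1 = 0
        w, mult = constrained_lsq(A, b, C, np.array([0.0 + 0j]))
        print("correction weights:", w)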

  13. More basic approach to the analysis of multiple specimen R-curves for determination of J/sub c/

    International Nuclear Information System (INIS)

    Carlson, K.W.; Williams, J.A.

    1980-02-01

    Multiple specimen J-R curves were developed for groups of 1T compact specimens with different a/W values and depths of side grooving. The purpose of this investigation was to determine J_c (J at onset of crack extension) for each group. Judicious selection of points on the load versus load-line deflection record at which to unload and heat tint specimens permitted direct observation of the approximate onset of crack extension. It was found that the present recommended procedure for determining J_c from multiple specimen R-curves, which is being considered for standardization, consistently yielded nonconservative J_c values. A more basic approach to analyzing multiple specimen R-curves is presented, applied, and discussed. This analysis determined J_c values that closely corresponded to the actual observed onset of crack extension.

  14. Iterative approach as alternative to S-matrix in modal methods

    Science.gov (United States)

    Semenikhin, Igor; Zanuccoli, Mauro

    2014-12-01

    The continuously increasing complexity of opto-electronic devices and the rising demands on simulation accuracy lead to the need to solve very large systems of linear equations, making iterative methods promising and attractive from the computational point of view with respect to direct methods. In particular, the iterative approach potentially enables a reduction of the computational time required to solve Maxwell's equations by Eigenmode Expansion algorithms. Regardless of the particular eigenmode-finding method used, the expansion coefficients are as a rule computed by the scattering matrix (S-matrix) approach or similar techniques requiring on the order of M^3 operations. In this work we consider alternatives to the S-matrix technique which are based on pure iterative or mixed direct-iterative approaches. The possibility of diminishing the impact of the M^3-order calculations on the overall time, and in some cases even of reducing the number of arithmetic operations to M^2 by applying iterative techniques, is discussed. Numerical results are presented to illustrate the validity and potential of the proposed approaches.

  15. A Fast Multiple Sampling Method for Low-Noise CMOS Image Sensors With Column-Parallel 12-bit SAR ADCs

    Directory of Open Access Journals (Sweden)

    Min-Kyu Kim

    2015-12-01

    Full Text Available This paper presents a fast multiple sampling method for low-noise CMOS image sensor (CIS) applications with column-parallel successive approximation register analog-to-digital converters (SAR ADCs). The 12-bit SAR ADC using the proposed multiple sampling method decreases the A/D conversion time by repeatedly converting a pixel output to 4-bit resolution after the first 12-bit A/D conversion, reducing the noise of the CIS by one over the square root of the number of samplings. The area of the 12-bit SAR ADC is reduced by using a 10-bit capacitor digital-to-analog converter (DAC) with four scaled reference voltages. In addition, a simple up/down counter-based digital processing logic is proposed to perform the complex calculations needed for multiple sampling and digital correlated double sampling. To verify the proposed multiple sampling method, a 256 × 128 pixel array CIS with 12-bit SAR ADCs was fabricated using a 0.18 μm CMOS process. The measurement results show that the proposed multiple sampling method reduces each A/D conversion time from 1.2 μs to 0.45 μs and random noise from 848.3 μV to 270.4 μV, achieving a dynamic range of 68.1 dB and an SNR of 39.2 dB.

  16. Mediation Analysis with Multiple Mediators

    OpenAIRE

    VanderWeele, T.J.; Vansteelandt, S.

    2014-01-01

    Recent advances in the causal inference literature on mediation have extended traditional approaches to direct and indirect effects to settings that allow for interactions and non-linearities. In this paper, these approaches from causal inference are further extended to settings in which multiple mediators may be of interest. Two analytic approaches, one based on regression and one based on weighting, are proposed to estimate the effect mediated through multiple mediators and the effects throu...

  17. NEWTONIAN IMPERIALIST COMPETITIVE APPROACH TO OPTIMIZING OBSERVATION OF MULTIPLE TARGET POINTS IN MULTISENSOR SURVEILLANCE SYSTEMS

    Directory of Open Access Journals (Sweden)

    A. Afghan-Toloee

    2013-09-01

    Full Text Available The problem of specifying the minimum number of sensors to deploy in a certain area to face multiple targets has been widely studied in the literature. In this paper, we address the multi-sensor deployment problem (MDP). The multi-sensor placement problem can be stated as minimizing the cost required to cover the multiple target points in the area. We propose a more feasible method for the multi-sensor placement problem: our method provides the high coverage of grid-based placements while minimizing the cost, as found in perimeter placement techniques. The NICA algorithm, an improved ICA (Imperialist Competitive Algorithm), is used to decrease the time needed to find an adequate solution compared to other meta-heuristic schemes such as GA, PSO and ICA. A three-dimensional area is used to represent the multiple target and placement points, allowing for x, y, and z computations in the observation algorithm. A model for the multi-sensor placement problem is proposed: the problem is constructed as an optimization problem with the objective of minimizing the cost while covering all multiple target points subject to a given probability of observation tolerance.

  18. Optimization of Multiple Responses of Ultrasonic Machining (USM) Process: A Comparative Study

    Directory of Open Access Journals (Sweden)

    Rina Chakravorty

    2013-04-01

    Full Text Available The ultrasonic machining (USM) process has multiple performance measures, e.g. material removal rate (MRR), tool wear rate (TWR), surface roughness (SR), etc., which are affected by several process parameters. Researchers have commonly attempted to optimize the USM process with respect to individual responses separately. In the recent past, several systematic procedures for dealing with multi-response optimization problems have been proposed in the literature. Although most of these methods use complex mathematics or statistics, there are some simple methods which can be comprehended and implemented by engineers to optimize the multiple responses of USM processes. However, the relative optimization performance of these approaches is unknown because the effectiveness of different methods has been demonstrated using different sets of process data. In this paper, the computational requirements of four simple methods are presented, and two sets of past experimental data on USM processes are analysed using these methods. The relative performances of these methods are then compared. The results show that the weighted signal-to-noise (WSN) ratio method and the utility theory (UT) method usually give better overall optimization performance for the USM process than the other approaches.

  19. Decision Making in Manufacturing Environment Using Graph Theory and Fuzzy Multiple Attribute Decision Making Methods Volume 2

    CERN Document Server

    Rao, R Venkata

    2013-01-01

    Decision Making in Manufacturing Environment Using Graph Theory and Fuzzy Multiple Attribute Decision Making Methods presents the concepts and details of applications of MADM methods. A range of methods are covered, including Analytic Hierarchy Process (AHP), Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), VIšekriterijumsko KOmpromisno Rangiranje (VIKOR), Data Envelopment Analysis (DEA), Preference Ranking Organization METHod for Enrichment Evaluations (PROMETHEE), ELimination Et Choix Traduisant la Realité (ELECTRE), COmplex PRoportional ASsessment (COPRAS), Grey Relational Analysis (GRA), UTility Additive (UTA), and Ordered Weighted Averaging (OWA). The existing MADM methods are improved upon and three novel multiple attribute decision making methods for solving the decision making problems of the manufacturing environment are proposed. The concept of integrated weights is introduced in the proposed subjective and objective integrated weights (SOIW) method and the weighted Euclidean distance ba...
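
    As a concrete taste of the MADM methods the book covers, here is a minimal TOPSIS sketch on an invented machine-selection matrix; the weights and data are illustrative only, not drawn from the book.

        import numpy as np

        def topsis(X, w, benefit):
            """Rank alternatives (rows of X) by closeness to the ideal solution."""
            R = X / np.linalg.norm(X, axis=0)       # vector-normalize each criterion
            V = R * w                               # weighted normalized matrix
            ideal = np.where(benefit, V.max(0), V.min(0))
            anti  = np.where(benefit, V.min(0), V.max(0))
            d_pos = np.linalg.norm(V - ideal, axis=1)
            d_neg = np.linalg.norm(V - anti, axis=1)
            return d_neg / (d_pos + d_neg)          # closeness coefficient in [0, 1]

        # Illustrative problem: 3 alternatives scored on throughput, precision
        # (benefits) and cost (non-benefit).
        X = np.array([[250.0, 0.80, 45.0],
                      [220.0, 0.92, 50.0],
                      [300.0, 0.75, 60.0]])
        c = topsis(X, w=np.array([0.4, 0.4, 0.2]), benefit=np.array([True, True, False]))
        print(np.argsort(-c))                       # best alternative first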

  20. Multiple and sequential data acquisition method: an improved method for fragmentation and detection of cross-linked peptides on a hybrid linear trap quadrupole Orbitrap Velos mass spectrometer.

    Science.gov (United States)

    Rudashevskaya, Elena L; Breitwieser, Florian P; Huber, Marie L; Colinge, Jacques; Müller, André C; Bennett, Keiryn L

    2013-02-05

    The identification and validation of cross-linked peptides by mass spectrometry remains a daunting challenge for protein-protein cross-linking approaches when investigating protein interactions. This includes the fragmentation of cross-linked peptides in the mass spectrometer per se and, following database searching, the matching of the molecular masses of the fragment ions to the correct cross-linked peptides. The hybrid linear trap quadrupole (LTQ) Orbitrap Velos combines the speed of the tandem mass spectrometry (MS/MS) duty cycle with high mass accuracy, and these features were utilized in the current study to substantially improve the confidence in the identification of cross-linked peptides. An MS/MS method termed multiple and sequential data acquisition method (MSDAM) was developed. Preliminary optimization of the MS/MS settings was performed with a synthetic peptide (TP1) cross-linked with bis[sulfosuccinimidyl] suberate (BS(3)). On the basis of these results, MSDAM was created and assessed on the BS(3)-cross-linked bovine serum albumin (BSA) homodimer. MSDAM applies a series of multiple sequential fragmentation events with a range of different normalized collision energies (NCE) to the same precursor ion. The combination of a series of NCE enabled a considerable improvement in the quality of the fragmentation spectra for cross-linked peptides, and ultimately aided in the identification of the sequences of the cross-linked peptides. Concurrently, MSDAM provides confirmatory evidence from the formation of reporter ion fragments, which reduces the false positive rate of incorrectly assigned cross-linked peptides.

  1. Self-calibrated multiple-echo acquisition with radial trajectories using the conjugate gradient method (SMART-CG).

    Science.gov (United States)

    Jung, Youngkyoo; Samsonov, Alexey A; Bydder, Mark; Block, Walter F

    2011-04-01

    To remove phase inconsistencies between multiple echoes, an algorithm using a radial acquisition to provide inherent phase and magnitude information for self-correction was developed. The information also allows simultaneous support for parallel imaging for multiple coil acquisitions. Without a separate field map acquisition, a phase estimate from each echo in a multiple-echo train was generated. When using a multiple-channel coil, magnitude and phase estimates from each echo provide in vivo coil sensitivities. An algorithm based on the conjugate gradient method uses these estimates to simultaneously remove phase inconsistencies between echoes and, in the case of multiple coil acquisition, simultaneously provides parallel imaging benefits. The algorithm is demonstrated on single channel, multiple channel, and undersampled data. Substantial image quality improvements were demonstrated. Signal dropouts were completely removed and undersampling artifacts were well suppressed. The suggested algorithm is able to remove phase cancellation and undersampling artifacts simultaneously and to improve the image quality of multiecho radial imaging, an important technique for fast three-dimensional MRI data acquisition. Copyright © 2011 Wiley-Liss, Inc.
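
    A minimal sketch of the conjugate-gradient core of such a reconstruction is shown below: CG applied to the Hermitian normal equations A^H A x = A^H b through a matrix-free operator. The dense random encoding matrix merely stands in for the radial, multi-coil forward model; none of this is the published implementation.

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, cg

        rng = np.random.default_rng(4)
        n, m = 256, 512                   # image unknowns, k-space samples
        A = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))  # toy encoding
        x_true = rng.normal(size=n)
        b = A @ x_true + 0.01 * (rng.normal(size=m) + 1j * rng.normal(size=m))

        # CG solves the Hermitian positive-definite normal equations A^H A x = A^H b;
        # a real reconstruction would apply coil sensitivities and a radial FT here.
        normal_op = LinearOperator((n, n), matvec=lambda v: A.conj().T @ (A @ v),
                                   dtype=complex)
        x_hat, info = cg(normal_op, A.conj().T @ b)
        print(info, np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))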

  2. A New Approach to Adaptive Control of Multiple Scales in Plasma Simulations

    Science.gov (United States)

    Omelchenko, Yuri

    2007-04-01

    A new approach to temporal refinement of kinetic (Particle-in-Cell, Vlasov) and fluid (MHD, two-fluid) simulations of plasmas is presented: Discrete-Event Simulation (DES). DES adaptively distributes CPU resources in accordance with local time scales and enables asynchronous integration of inhomogeneous nonlinear systems with multiple time scales on meshes of arbitrary topologies. This removes computational penalties usually incurred in explicit codes due to the global Courant-Friedrich-Levy (CFL) restriction on a time-step size. DES stands apart from multiple time-stepping algorithms in that it requires neither selecting a global synchronization time step nor pre-determining a sequence of time-integration operations for individual parts of the system (local time increments need not bear any integer multiple relations). Instead, elements of a mesh-distributed solution self-adaptively predict and synchronize their temporal trajectories by directly enforcing local causality (accuracy) constraints, which are formulated in terms of incremental changes to the evolving solution. Together with flux-conservative propagation of information, this new paradigm ensures stable and fast asynchronous runs, where idle computation is automatically eliminated. DES is parallelized via a novel Preemptive Event Processing (PEP) technique, which automatically synchronizes elements with similar update rates. In this mode, events with close execution times are projected onto time levels, which are adaptively determined by the program. PEP allows reuse of standard message-passing algorithms on distributed architectures. For optimum accuracy, DES can be combined with adaptive mesh refinement (AMR) techniques for structured and unstructured meshes. Current examples of event-driven models range from electrostatic, hybrid particle-in-cell plasma systems to reactive fluid dynamics simulations. They demonstrate the superior performance of DES in terms of accuracy, speed and robustness.

  3. An engineering method to estimate the junction temperatures of light-emitting diodes in multiple LED application

    International Nuclear Information System (INIS)

    Fu, Xing; Hu, Run; Luo, Xiaobing

    2014-01-01

    Acquiring the junction temperature of a light-emitting diode (LED) is essential for performance evaluation, but it is hard to obtain in multiple-LED applications. In this paper, an engineering method is presented to estimate the junction temperatures of LEDs in multiple-LED applications. This method is mainly based on an analytical model, and it can be easily applied with some simple measurements. Simulations and experiments were conducted to prove the feasibility of the method, and the deviations between the results obtained by the present method and those from simulation and experiments are less than 2% and 3%, respectively. In the final part of this study, the engineering method was used to analyze the thermal resistances of a street lamp. The material of the lead frame was found to affect the system thermal resistance the most, and the choice of solder material strongly depended on the material of the lead frame.
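
    The analytical model behind such engineering estimates is typically a thermal-resistance network. The toy sketch below propagates made-up resistances for several LEDs sharing one heat sink, so all numbers are illustrative assumptions rather than values from the paper.

        import numpy as np

        # Hypothetical network: each LED conducts its heat through its own
        # junction-to-board path onto a shared heat sink.
        P = np.array([1.2, 1.2, 0.9, 0.9])         # heat dissipated per LED (W)
        R_jb = np.array([12.0, 12.0, 15.0, 15.0])  # junction-to-board resistance (K/W)
        R_sink = 2.5                               # shared sink-to-ambient (K/W)
        T_amb = 25.0                               # ambient temperature (deg C)

        T_sink = T_amb + R_sink * P.sum()          # the sink heats up with total power
        T_junction = T_sink + R_jb * P             # each junction adds its own rise
        print(T_sink, T_junction)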

  4. The Green Function cellular method and its relation to multiple scattering theory

    International Nuclear Information System (INIS)

    Butler, W.H.; Zhang, X.G.; Gonis, A.

    1992-01-01

    This paper investigates techniques for solving the wave equation which are based on the idea of obtaining exact local solutions within each potential cell, which are then joined to form a global solution. The authors derive full-potential multiple scattering theory (MST) from the Lippmann-Schwinger equation and show that it, as well as a closely related cellular method, is a technique of this type. This cellular method appears to have all of the advantages of MST and the added advantage of having a secular matrix with only nearest-neighbor interactions. Since this cellular method is easily linearized, one can rigorously reduce electronic structure calculations to the problem of solving a nearest-neighbor tight-binding problem.

  5. Design of an Evolutionary Approach for Intrusion Detection

    Directory of Open Access Journals (Sweden)

    Gulshan Kumar

    2013-01-01

    ensemble methods like bagging and boosting. In addition, the proposed approach is a generalized classification approach that is applicable to the problem of any field having multiple conflicting objectives, and a dataset can be represented in the form of labelled instances in terms of its features.

  6. Multiple-time-stepping generalized hybrid Monte Carlo methods

    Energy Technology Data Exchange (ETDEWEB)

    Escribano, Bruno, E-mail: bescribano@bcamath.org [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); Akhmatskaya, Elena [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); IKERBASQUE, Basque Foundation for Science, E-48013 Bilbao (Spain); Reich, Sebastian [Universität Potsdam, Institut für Mathematik, D-14469 Potsdam (Germany); Azpiroz, Jon M. [Kimika Fakultatea, Euskal Herriko Unibertsitatea (UPV/EHU) and Donostia International Physics Center (DIPC), P.K. 1072, Donostia (Spain)

    2015-01-01

    Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multi-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications of the method not only lead to better performance of GSHMC itself but also allow it to beat the best-performing methods that use similar force-splitting schemes. In addition we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on a water and a protein system. Results were compared to those obtained using the Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo and using the natural stochasticity offered by the generalized hybrid Monte Carlo lead to improved stability of MTS and allow for achieving larger step sizes in the simulation of complex systems.
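
    The force-splitting idea underlying MTS can be sketched with a two-level r-RESPA-style leapfrog, where cheap fast forces are integrated with a small inner step inside each slow-force step; this illustrates the MTS ingredient only, not GSHMC's momentum update or shadow Hamiltonians, and the toy forces are invented.

        import numpy as np

        def mts_leapfrog(q, p, fast_force, slow_force, dt, n_inner, n_steps, mass=1.0):
            """r-RESPA-style integrator: slow forces kick with step dt, fast forces
            are integrated with the smaller step dt/n_inner in between."""
            h = dt / n_inner
            for _ in range(n_steps):
                p += 0.5 * dt * slow_force(q)       # outer half-kick (slow forces)
                for _ in range(n_inner):            # inner loop (fast forces)
                    p += 0.5 * h * fast_force(q)
                    q += h * p / mass
                    p += 0.5 * h * fast_force(q)
                p += 0.5 * dt * slow_force(q)       # outer half-kick
            return q, p

        # Toy system: stiff bond (fast) plus a weak external spring (slow).
        fast = lambda q: -100.0 * q
        slow = lambda q: -0.5 * q
        q, p = mts_leapfrog(np.array([1.0]), np.array([0.0]), fast, slow,
                            dt=0.05, n_inner=10, n_steps=200)
        print(q, p)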

  7. Methods for meta-analysis of multiple traits using GWAS summary statistics.

    Science.gov (United States)

    Ray, Debashree; Boehnke, Michael

    2018-03-01

    Genome-wide association studies (GWAS) for complex diseases have focused primarily on single-trait analyses for disease status and disease-related quantitative traits. For example, GWAS on risk factors for coronary artery disease analyze genetic associations of plasma lipids such as total cholesterol, LDL-cholesterol, HDL-cholesterol, and triglycerides (TGs) separately. However, traits are often correlated and a joint analysis may yield increased statistical power for association over multiple univariate analyses. Recently several multivariate methods have been proposed that require individual-level data. Here, we develop metaUSAT (where USAT is unified score-based association test), a novel unified association test of a single genetic variant with multiple traits that uses only summary statistics from existing GWAS. Although the existing methods either perform well when most correlated traits are affected by the genetic variant in the same direction or are powerful when only a few of the correlated traits are associated, metaUSAT is designed to be robust to the association structure of correlated traits. metaUSAT does not require individual-level data and can test genetic associations of categorical and/or continuous traits. One can also use metaUSAT to analyze a single trait over multiple studies, appropriately accounting for overlapping samples, if any. metaUSAT provides an approximate asymptotic P-value for association and is computationally efficient for implementation at a genome-wide level. Simulation experiments show that metaUSAT maintains proper type-I error at low error levels. It has similar and sometimes greater power to detect association across a wide array of scenarios compared to existing methods, which are usually powerful for some specific association scenarios only. When applied to plasma lipids summary data from the METSIM and the T2D-GENES studies, metaUSAT detected genome-wide significant loci beyond the ones identified by univariate analyses

  8. A sensitive mass spectrometric method for hypothesis-driven detection of peptide post-translational modifications: multiple reaction monitoring-initiated detection and sequencing (MIDAS).

    Science.gov (United States)

    Unwin, Richard D; Griffiths, John R; Whetton, Anthony D

    2009-01-01

    The application of a targeted mass spectrometric workflow to the sensitive identification of post-translational modifications is described. This protocol employs multiple reaction monitoring (MRM) to search for all putative peptides specifically modified in a target protein. Positive MRMs trigger an MS/MS experiment to confirm the nature and site of the modification. This approach, termed MIDAS (MRM-initiated detection and sequencing), is more sensitive than approaches using neutral loss scanning or precursor ion scanning methodologies, due to a more efficient use of duty cycle along with a decreased background signal associated with MRM. We describe the use of MIDAS for the identification of phosphorylation, with a typical experiment taking just a couple of hours from obtaining a peptide sample. With minor modifications, the MIDAS method can be applied to other protein modifications or unmodified peptides can be used as a MIDAS target.

  9. Hybrid approaches for multiple-species stochastic reaction-diffusion models

    Science.gov (United States)

    Spill, Fabian; Guerrero, Pilar; Alarcon, Tomas; Maini, Philip K.; Byrne, Helen

    2015-10-01

    Reaction-diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction-diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model.

  10. Hybrid approaches for multiple-species stochastic reaction-diffusion models.

    KAUST Repository

    Spill, Fabian; Guerrero, Pilar; Alarcon, Tomas; Maini, Philip K; Byrne, Helen

    2015-01-01

    Reaction-diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction-diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model.

  11. Hybrid approaches for multiple-species stochastic reaction-diffusion models.

    KAUST Repository

    Spill, Fabian

    2015-10-01

    Reaction-diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction-diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model.
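
    A deliberately simplified, pure-diffusion toy of the coupling idea is sketched below: random-walk particles on the left half of a 1D lattice, an explicit finite-difference PDE on the right half, and a single interface cell where every unit of mass exchanged is added on the other side, so the total is conserved exactly. The authors' scheme additionally handles reactions and places the interface where the mean-field description is accurate; this sketch does not attempt that.

        import numpy as np

        rng = np.random.default_rng(5)
        K, D, dx, dt = 40, 1.0, 1.0, 0.2
        r = D * dt / dx**2                   # hop probability / diffusion number
        I = K // 2                           # first continuum cell; cells < I are discrete

        parts = np.zeros(I, dtype=np.int64)
        parts[I - 2] = 500                   # particles start near the interface
        u = np.zeros(K - I)                  # continuum density (expected counts)

        for _ in range(500):
            # Stochastic half: each particle hops left or right with probability r.
            left = rng.binomial(parts, r)
            right = rng.binomial(parts - left, r / (1 - r))
            parts = parts - left - right
            parts[:-1] += left[1:]
            parts[1:] += right[:-1]
            parts[0] += left[0]              # reflecting outer (left) wall
            u[0] += right[-1]                # hops across the interface become density

            # Continuum half: stochastic return flux, then explicit diffusion.
            flux_back = min(rng.poisson(r * u[0]), int(u[0]))
            u[0] -= flux_back
            parts[-1] += flux_back
            lap = np.zeros_like(u)
            lap[1:-1] = u[2:] - 2 * u[1:-1] + u[:-2]
            lap[0] = u[1] - u[0]             # interface side: exchange handled above
            lap[-1] = u[-2] - u[-1]          # reflecting outer (right) wall
            u += r * lap

        print(parts.sum() + u.sum())         # stays 500: the coupling conserves mass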

  12. ATTENUATION OF DIFFRACTED MULTIPLES WITH AN APEX-SHIFTED TANGENT-SQUARED RADON TRANSFORM IN IMAGE SPACE

    Directory of Open Access Journals (Sweden)

    Alvarez Gabriel

    2006-12-01

    Full Text Available In this paper, we propose a method to attenuate diffracted multiples with an apex-shifted tangent-squared Radon transform in angle-domain common image gathers (ADCIGs). Usually, where diffracted multiples are a problem, the wave field propagation is complex and the moveout of primaries and multiples in data space is irregular. The method handles the complexity of the wave field propagation by wave-equation migration, provided that migration velocities are reasonably accurate. As a result, the moveout of the multiples is well behaved in the ADCIGs. For 2D data, the apex-shifted tangent-squared Radon transform maps the 2D space image into a 3D space-cube model whose dimensions are depth, curvature and apex-shift distance.
    Well-corrected primaries map to or near the zero curvature plane and specularly-reflected multiples map to or near the zero apex-shift plane. Diffracted multiples map elsewhere in the cube according to their curvature and apex-shift distance. Thus, specularly reflected as well as diffracted multiples can be attenuated simultaneously. This approach is illustrated with a segment of a 2D seismic line over a large salt body in the Gulf of Mexico. It is shown that ignoring the apex shift compromises the attenuation of the diffracted multiples, whereas the approach proposed attenuates both the specularly-reflected and the diffracted multiples without compromising the primaries.

  13. A modular method to handle multiple time-dependent quantities in Monte Carlo simulations

    International Nuclear Information System (INIS)

    Shin, J; Faddegon, B A; Perl, J; Schümann, J; Paganetti, H

    2012-01-01

    A general method for handling time-dependent quantities in Monte Carlo simulations was developed to make such simulations more accessible to the medical community for a wide range of applications in radiotherapy, including fluence and dose calculation. To describe time-dependent changes in the most general way, we developed a grammar of functions that we call ‘Time Features’. When a simulation quantity, such as the position of a geometrical object, an angle, a magnetic field, a current, etc, takes its value from a Time Feature, that quantity varies over time. The operation of time-dependent simulation was separated into distinct parts: the Sequence samples time values either sequentially at equal increments or randomly from a uniform distribution (allowing quantities to vary continuously in time), and then each time-dependent quantity is calculated according to its Time Feature. Due to this modular structure, time-dependent simulations, even in the presence of multiple time-dependent quantities, can be efficiently performed in a single simulation with any given time resolution. This approach has been implemented in TOPAS (TOol for PArticle Simulation), designed to make Monte Carlo simulations with Geant4 more accessible to both clinical and research physicists. To demonstrate the method, three clinical situations were simulated: a variable water column used to verify constancy of the Bragg peak of the Crocker Lab eye treatment facility of the University of California, the double-scattering treatment mode of the passive beam scattering system at Massachusetts General Hospital (MGH), where a spinning range modulator wheel accompanied by beam current modulation produces a spread-out Bragg peak, and the scanning mode at MGH, where time-dependent pulse shape, energy distribution and magnetic fields control Bragg peak positions. Results confirm the clinical applicability of the method. (paper)
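
    The modular structure described (a grammar of time-dependent "Time Features" plus a Sequence that samples times sequentially or randomly and then evaluates every time-dependent quantity) can be illustrated with the hypothetical mini-grammar below; the names and functional forms are invented for illustration and are not TOPAS parameter syntax.

        import math, random

        # Hypothetical mini-grammar: any simulation quantity may take its
        # value from a function of time.
        time_features = {
            "WheelAngle": lambda t: (360.0 * t / 0.1) % 360.0,  # spinning modulator wheel
            "BeamCurrent": lambda t: 1.0 + 0.5 * math.sin(2 * math.pi * t / 0.1),
            "WaterColumn_mm": lambda t: 100.0 + 20.0 * t,       # slowly extending column
        }

        def sequence(t_end, n, mode="sequential", seed=0):
            """The 'Sequence' samples time values at equal increments or uniformly
            at random; each time-dependent quantity is then evaluated at each t."""
            rng = random.Random(seed)
            for i in range(n):
                t = (i + 0.5) * t_end / n if mode == "sequential" else rng.uniform(0, t_end)
                yield t, {name: f(t) for name, f in time_features.items()}

        for t, state in sequence(t_end=0.2, n=4):
            print(f"t={t:.3f}s", {k: round(v, 2) for k, v in state.items()})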

  14. Flutter analysis of an airfoil with multiple nonlinearities and uncertainties

    Directory of Open Access Journals (Sweden)

    Haitao Liao

    2013-09-01

    Full Text Available An original method for calculating the limit cycle oscillations of a nonlinear aero-elastic system is presented. The problem of determining the maximum vibration amplitude of the limit cycle is transformed into a nonlinear optimization problem. The harmonic balance method and the Floquet theory are selected to construct the general nonlinear equality and inequality constraints. The resulting constrained maximization problem is then solved by using the MultiStart algorithm. Finally, the proposed approach is validated and used to analyse the limit cycle oscillations of an airfoil with multiple nonlinearities and uncertainties. Numerical examples show that the coexistence of multiple nonlinearities may lead to low-amplitude limit cycle oscillations.

  15. Accelerated Enveloping Distribution Sampling: Enabling Sampling of Multiple End States while Preserving Local Energy Minima.

    Science.gov (United States)

    Perthold, Jan Walther; Oostenbrink, Chris

    2018-05-17

    Enveloping distribution sampling (EDS) is an efficient approach to calculate multiple free-energy differences from a single molecular dynamics (MD) simulation. However, the construction of an appropriate reference-state Hamiltonian that samples all states efficiently is not straightforward. We propose a novel approach for the construction of the EDS reference-state Hamiltonian, related to a previously described procedure to smoothen energy landscapes. In contrast to previously suggested EDS approaches, our reference-state Hamiltonian preserves local energy minima of the combined end-states. Moreover, we propose an intuitive, robust and efficient parameter optimization scheme to tune EDS Hamiltonian parameters. We demonstrate the proposed method with established and novel test systems and conclude that our approach allows for the automated calculation of multiple free-energy differences from a single simulation. Accelerated EDS promises to be a robust and user-friendly method to compute free-energy differences based on solid statistical mechanics.

  16. Automatic identification approach for high-performance liquid chromatography-multiple reaction monitoring fatty acid global profiling.

    Science.gov (United States)

    Tie, Cai; Hu, Ting; Jia, Zhi-Xin; Zhang, Jin-Lan

    2015-08-18

    Fatty acids (FAs) are a group of lipid molecules that are essential to organisms. As potential biomarkers for different diseases, FAs have attracted increasing attention from both biological researchers and the pharmaceutical industry. A sensitive and accurate method for globally profiling and identifying FAs is required for biomarker discovery. The high selectivity and sensitivity of high-performance liquid chromatography-multiple reaction monitoring (HPLC-MRM) gives it great potential to fulfill the need to identify FAs from complicated matrices. This paper developed a new approach for global FA profiling and identification based on HPLC-MRM FA data mining. Mathematical models for identifying FAs were constructed using the isotope-induced retention time (RT) shift (IRS) and peak area ratios between parallel isotope peaks for a series of FA standards. The FA structures were predicted using another model based on the RT and molecular weight. Fully automated FA identification software was coded using the Qt platform based on these mathematical models. Different samples were used to verify the software. A high identification efficiency (greater than 75%) was observed when 96 FA species were identified in plasma. This FA identification strategy promises to accelerate FA research and applications.

  17. An Efficient Approach for Solving Mesh Optimization Problems Using Newton’s Method

    Directory of Open Access Journals (Sweden)

    Jibum Kim

    2014-01-01

    Full Text Available We present an efficient approach for solving various mesh optimization problems. Our approach is based on Newton's method, which uses both first-order (gradient) and second-order (Hessian) derivatives of the nonlinear objective function. The volume and surface mesh optimization algorithms are developed such that mesh validity and surface constraints are satisfied. We also propose several Hessian modification methods for the case when the Hessian matrix is not positive definite. We demonstrate our approach by comparing our method with nonlinear conjugate gradient and steepest descent methods in terms of both efficiency and mesh quality.
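
    One common Hessian modification, adding multiples of the identity until a Cholesky factorization succeeds, illustrates the idea. This is a generic textbook strategy in the spirit of the abstract, not necessarily one of the specific modifications the paper proposes, and every name below is illustrative.

```python
import numpy as np

def modified_newton_step(grad, hess, tau0=1e-3):
    """Newton step p solving (H + tau*I) p = -g, where tau is increased
    until the shifted Hessian is positive definite (Cholesky succeeds)."""
    n = len(grad)
    tau = 0.0
    while True:
        try:
            L = np.linalg.cholesky(hess + tau * np.eye(n))
            break
        except np.linalg.LinAlgError:
            tau = max(2.0 * tau, tau0)
    # forward/back substitution with the Cholesky factor L (H + tau*I = L L^T)
    return np.linalg.solve(L.T, np.linalg.solve(L, -grad))

# toy indefinite Hessian: the unmodified Newton step would not be a descent step
g = np.array([1.0, -2.0])
H = np.array([[1.0, 0.0], [0.0, -0.5]])
print(modified_newton_step(g, H))
```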

  18. A study of single multiplicative neuron model with nonlinear filters for hourly wind speed prediction

    International Nuclear Information System (INIS)

    Wu, Xuedong; Zhu, Zhiyu; Su, Xunliang; Fan, Shaosheng; Du, Zhaoping; Chang, Yanchao; Zeng, Qingjun

    2015-01-01

    Wind speed prediction is an important means of guaranteeing that wind energy is integrated into the whole power system smoothly. However, wind power has a non-schedulable nature due to the strongly stochastic and dynamically uncertain nature of wind speed. Therefore, wind speed prediction is an indispensable requirement for power system operators. Two new approaches for hourly wind speed prediction are developed in this study by integrating the single multiplicative neuron model and iterated nonlinear filters for updating the wind speed sequence accurately. In the presented methods, a nonlinear state-space model is first formed based on the single multiplicative neuron model, and then the iterated nonlinear filters are employed to perform dynamic state estimation on the wind speed sequence with stochastic uncertainty. The suggested approaches are demonstrated using wind speed data from three cases and are compared with autoregressive moving average, artificial neural network, kernel ridge regression based residual active learning and single multiplicative neuron model methods. Three types of prediction errors, the mean absolute error improvement ratio and the running time are employed for comparing the performance of the different models. Comparison results from Tables 1–3 indicate that the presented strategies perform much better for hourly wind speed prediction than the other technologies. - Highlights: • Developed two novel hybrid modeling methods for hourly wind speed prediction. • Uncertainty and fluctuations of wind speed can be better explained by the novel methods. • The proposed strategies have online adaptive learning ability. • The proposed approaches show better performance than existing approaches. • Comparison and analysis of the two proposed novel models for three cases are provided
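
    The single multiplicative neuron at the core of the state-space model admits a very short forward map: inputs are aggregated by a product rather than a sum and then squashed. The sketch below shows only that map with illustrative weights; the paper's contribution, embedding the neuron in a nonlinear state-space model updated by iterated nonlinear filters, is omitted here.

```python
import numpy as np

def smn_predict(x, w, b):
    """Single multiplicative neuron: y = sigmoid( prod_i (w_i*x_i + b_i) ).
    x holds lagged wind-speed observations scaled to [0, 1]."""
    u = np.prod(w * x + b)
    return 1.0 / (1.0 + np.exp(-u))

# illustrative one-step-ahead prediction from three lagged observations
x = np.array([0.42, 0.47, 0.51])
w = np.array([0.9, 1.1, 0.8])
b = np.array([0.1, -0.2, 0.05])
print(smn_predict(x, w, b))
```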

  19. Orchestrating Multiple Intelligences

    Science.gov (United States)

    Moran, Seana; Kornhaber, Mindy; Gardner, Howard

    2006-01-01

    Education policymakers often go astray when they attempt to integrate multiple intelligences theory into schools, according to the originator of the theory, Howard Gardner, and his colleagues. The greatest potential of a multiple intelligences approach to education grows from the concept of a profile of intelligences. Each learner's intelligence…

  20. Multiple Object Tracking Using the Shortest Path Faster Association Algorithm

    Directory of Open Access Journals (Sweden)

    Zhenghao Xi

    2014-01-01

    Full Text Available To solve the problem of persistent multiple object tracking in cluttered environments, this paper presents a novel tracking association approach based on the shortest path faster algorithm. First, multiple object tracking is formulated as an integer programming problem on a flow network. Then we relax the integer program to a standard linear programming problem. Therefore, the global optimum can be quickly obtained using the shortest path faster algorithm. The proposed method avoids the difficulties of integer programming, and it has a lower worst-case complexity than competing methods but better robustness and tracking accuracy in complex environments. Simulation results show that the proposed algorithm takes less time than other state-of-the-art methods and can operate in real time.
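
    The relaxation step can be made concrete with a toy association problem. The sketch below builds the LP relaxation of a 2-track, 3-detection assignment and solves it with a generic LP solver; the paper instead solves the relaxed flow problem with the shortest path faster algorithm, so the solver, the costs and the problem size here are purely illustrative. Because the assignment constraint matrix is totally unimodular, the relaxed optimum is already integral.

```python
import numpy as np
from scipy.optimize import linprog

# toy frame-to-frame association: 2 tracks x 3 detections, cost = distance
cost = np.array([[1.0, 4.0, 2.5],
                 [3.0, 0.5, 2.0]])
n_t, n_d = cost.shape

# LP relaxation of the integer assignment problem, x[i, j] in [0, 1]:
# each track is assigned exactly one detection (equalities), and each
# detection is used by at most one track (inequalities).
A_eq = np.zeros((n_t, n_t * n_d))
for i in range(n_t):
    A_eq[i, i * n_d:(i + 1) * n_d] = 1.0
A_ub = np.zeros((n_d, n_t * n_d))
for j in range(n_d):
    A_ub[j, j::n_d] = 1.0

res = linprog(cost.ravel(), A_ub=A_ub, b_ub=np.ones(n_d),
              A_eq=A_eq, b_eq=np.ones(n_t), bounds=(0, 1))
print(res.x.reshape(n_t, n_d).round(2))  # integral optimum: tracks 0->0, 1->1
```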

  1. Extending Data Worth Analyses to Select Multiple Observations Targeting Multiple Forecasts

    DEFF Research Database (Denmark)

    Vilhelmsen, Troels Norvin; Ferre, Ty Paul

    2017-01-01

    In the present study, we extend previous data worth analyses to include: simultaneous selection of multiple new measurements and consideration of multiple forecasts of interest. We show how the suggested approach can be used to optimize data collection. This can be used in a manner that suggests specific ... measurement sets or that produces probability maps indicating areas likely to be informative for specific forecasts. Moreover, we provide examples documenting that sequential measurement selection approaches often lead to suboptimal designs and that estimates of data covariance should be included when...

  2. Multiple resonance compensation for betatron coupling and its equivalence with matrix method

    CERN Document Server

    De Ninno, G

    1999-01-01

    Analyses of betatron coupling can be broadly divided into two categories: the matrix approach, which decouples the single-turn matrix to reveal the normal modes, and the Hamiltonian approach, which evaluates the coupling in terms of the action of resonances in perturbation theory. The latter is often regarded as being less exact but good for physical insight. The common opinion is that the correction of the two sum and difference resonances closest to the working point is sufficient to reduce the off-axis terms in the 4×4 single-turn matrix, but this is only partially true. The reason for this is explained, and a method is developed that sums to infinity all coupling resonances and, in this way, obtains results equivalent to the matrix approach. The two approaches are discussed with reference to the dynamic aperture. Finally, the extension of the summation method to resonances of all orders is outlined, and the relative importance of a single resonance compared to all resonances of a given order is analytically described.

  3. New approach to equipment quality evaluation method with distinct functions

    Directory of Open Access Journals (Sweden)

    Milisavljević Vladimir M.

    2016-01-01

    Full Text Available The paper presents a new approach for improving a method for the quality evaluation and selection of equipment (devices and machinery) by applying distinct functions. Quality evaluation and selection of devices and machinery is a multi-criteria problem which involves the consideration of numerous parameters of various origins. The original selection method with distinct functions is based on technical parameters, with arbitrary evaluation of the importance of each parameter (weighting). The improvement of this method presented in this paper addresses the issue of parameter weighting by using the Delphi method. Finally, two case studies are provided, covering the quality evaluation of standard boilers for heating and the evaluation of load-haul-dump (LHD) machines, to demonstrate the applicability of this approach. The Analytic Hierarchy Process (AHP) is used as a control method.

  4. Direct integration multiple collision integral transport analysis method for high energy fusion neutronics

    International Nuclear Information System (INIS)

    Koch, K.R.

    1985-01-01

    A new analysis method specially suited for the inherent difficulties of fusion neutronics was developed to provide detailed studies of the fusion neutron transport physics. These studies should provide a better understanding of the limitations and accuracies of typical fusion neutronics calculations. The new analysis method is based on the direct integration of the integral form of the neutron transport equation and employs a continuous energy formulation with the exact treatment of the energy angle kinematics of the scattering process. In addition, the overall solution is analyzed in terms of uncollided, once-collided, and multi-collided solution components based on a multiple collision treatment. Furthermore, the numerical evaluations of integrals use quadrature schemes that are based on the actual dependencies exhibited in the integrands. The new DITRAN computer code was developed on the Cyber 205 vector supercomputer to implement this direct integration multiple-collision fusion neutronics analysis. Three representative fusion reactor models were devised and the solutions to these problems were studied to provide suitable choices for the numerical quadrature orders as well as the discretized solution grid and to understand the limitations of the new analysis method. As further verification and as a first step in assessing the accuracy of existing fusion-neutronics calculations, solutions obtained using the new analysis method were compared to typical multigroup discrete ordinates calculations
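
    The multiple-collision decomposition underlying the uncollided/once-collided/multi-collided split can be written, in generic integral-transport notation (the symbols below are standard textbook notation, not taken from the DITRAN documentation), as

```latex
\[
  \phi \;=\; \sum_{n=0}^{\infty} \phi_n , \qquad
  \phi_0 = \text{uncollided flux from the source},
\]
\[
  \phi_{n+1}(\mathbf{r},E,\boldsymbol{\Omega})
  \;=\; \int_0^{\infty} e^{-\tau(\mathbf{r}-s\boldsymbol{\Omega},\,\mathbf{r};\,E)}
  \int\!\!\int \Sigma_s(E'\!\to\!E,\ \boldsymbol{\Omega}'\!\to\!\boldsymbol{\Omega})\,
  \phi_n(\mathbf{r}-s\boldsymbol{\Omega},E',\boldsymbol{\Omega}')\;
  \mathrm{d}E'\,\mathrm{d}\boldsymbol{\Omega}'\,\mathrm{d}s ,
\]
```

    where τ is the optical depth along the flight path; each collided component is obtained by applying the integral transport kernel to the previous one, which is what the direct integration evaluates with continuous-energy quadrature.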

  5. DEWA: A Multiaspect Approach for Multiple Face Detection in Complex Scene Digital Image

    Directory of Open Access Journals (Sweden)

    Setiawan Hadi

    2013-09-01

    Full Text Available A new approach for detecting faces in a digital image with unconstrained background has been developed. The approach is composed of three phases: a segmentation phase, a filtering phase and a localization phase. In the segmentation phase, we utilized both training and non-training methods, implemented in a user-selectable color space. In the filtering phase, Minkowski addition-based object removal has been used for image cleaning. In the last phase, an image processing method and a data mining method are employed for grouping and localizing objects, combined with geometric-based image analysis. Several experiments have been conducted using our special face database, which consists of simple and complex objects. The experimental results demonstrate that the detection accuracy is around 90% and the detection speed is less than 1 second on average.

  6. A new approach for peat inventory methods; Turvetutkimusten menetelmaekehitystarkastelu

    Energy Technology Data Exchange (ETDEWEB)

    Laatikainen, M.; Leino, J.; Lerssi, J.; Torppa, J.; Turunen, J. Email: jukka.turunen@gtk.fi

    2011-07-01

    Development of the new peatland inventory method started in 2009. There was a need to investigate whether new methods and tools could be developed cost-effectively, so that field inventory work would more completely cover the whole peatland area while the quality and reliability of the final results remained at a high level. The old inventory method in place at the Geological Survey of Finland (GTK) is based on the main-transect and cross-transect approach across a peatland area. The goal of this study was to find a practical grid-based method, linked to the geographic information system, suitable for field conditions. The triangle-grid method, with an even distance between the study points, was found to be the most suitable approach. A new Ramac ground penetrating radar was obtained by the GTK in 2009 and was included in the study of new peatland inventory methods. This radar model is relatively light and very suitable, for example, for forestry-drained peatlands, which are often difficult to cross because of the intensive ditch network. The goal was to investigate the best working methods for the ground penetrating radar to optimize its use in large-scale peatland inventory. Together with the new field inventory methods, a novel interpolation-based method (MITTI) for modelling peat depths was developed. MITTI makes it possible to take advantage of all the available peat-depth data, including, at the moment, aerogeophysical and ground penetrating radar measurements, drilling data and the mire outline. The characteristic uncertainties of each data type are taken into account and, in addition to the depth model itself, an uncertainty map of the model is computed. Combined with the grid-based field inventory method, this multi-method approach provides better tools to more accurately estimate peat depths, peat amounts and peat type distributions. The development of the new peatland inventory method was divided into four separate sections: (1) Development of new field

  7. Parallelism measurement for base plate of standard artifact with multiple tactile approaches

    Science.gov (United States)

    Ye, Xiuling; Zhao, Yan; Wang, Yiwen; Wang, Zhong; Fu, Luhua; Liu, Changjie

    2018-01-01

    Nowadays, workpieces are becoming more precise and more specialized, which results in more sophisticated structures and higher accuracy requirements for artifacts; accordingly, higher demands are placed on measurement accuracy and measurement methods. As an important means of obtaining workpiece dimensions, the coordinate measuring machine (CMM) has been widely used in many industries. In the course of calibrating a self-developed CMM with a self-made high-precision standard artifact, the parallelism of the base plate used for fixing the standard artifact was found to be an important factor affecting measurement accuracy. To measure the parallelism of the base plate, three tactile methods were employed, using an existing high-precision CMM, gauge blocks, a dial gauge and a marble platform, and the measurement results were compared. The experiments show that the final accuracy of all three methods reaches the micron level and meets the measurement requirements. Moreover, the three approaches suit different measurement conditions, which provides a basis for rapid and high-precision measurement under different equipment conditions.

  8. Integrating multiple programme and policy approaches to hepatitis C prevention and care for injection drug users: a comprehensive approach.

    Science.gov (United States)

    Birkhead, Guthrie S; Klein, Susan J; Candelas, Alma R; O'Connell, Daniel A; Rothman, Jeffrey R; Feldman, Ira S; Tsui, Dennis S; Cotroneo, Richard A; Flanigan, Colleen A

    2007-10-01

    New York State is home to an estimated 230,000 individuals chronically infected with hepatitis C virus (HCV) and roughly 171,500 active injection drug users (IDUs). HCV/HIV co-infection is common, and models of service delivery that effectively meet IDUs' needs are required. An HCV strategic plan has stressed integration. HCV prevention and care are integrated within health and human service settings, including HIV/AIDS organisations and drug treatment programmes. Other measures that support comprehensive HCV services for IDUs include reimbursement, clinical guidelines, training and HCV prevention education. Community and provider collaborations inform programme and policy development. IDUs access 5 million syringes annually through harm reduction/syringe exchange programmes (SEPs) and a statewide syringe access programme. Declines in HCV prevalence amongst IDUs in New York City coincided with improved syringe availability. New models of care successfully link IDUs at SEPs and in drug treatment to health care. Over 7000 Medicaid recipients with HCV/HIV co-infection had health care encounters related to their HCV in a 12-month period, and 10,547 claims for HCV-related medications were paid. The success rate of transitional case management referrals to drug treatment is over 90%. Training and clinical guidelines promote provider knowledge about HCV and contribute to quality HCV care for IDUs. Chart reviews of 2570 patients with HIV in 2004 documented HCV status 97.4% of the time, overall, in various settings. New HCV surveillance systems are operational. Despite this progress, significant challenges remain. A comprehensive, public health approach, using multiple strategies across systems and mobilizing multiple sectors, can enhance IDUs' access to HCV prevention and care. A holistic approach with integrated services, including for HCV/HIV co-infected IDUs, is needed. Leadership, collaboration and resources are essential.

  9. Seismic reflector imaging using internal multiples with Marchenko-type equations

    NARCIS (Netherlands)

    Slob, E.C.; Wapenaar, C.P.A.; Broggini, F.; Snieder, R.

    2014-01-01

    We present an imaging method that creates a map of reflection coefficients in correct one-way time with no contamination from internal multiples using purely a filtering approach. The filter is computed from the measured reflection response and does not require a background model. We demonstrate

  10. Relative accuracy of spatial predictive models for lynx Lynx canadensis derived using logistic regression-AIC, multiple criteria evaluation and Bayesian approaches

    Directory of Open Access Journals (Sweden)

    Shelley M. ALEXANDER

    2009-02-01

    Full Text Available We compared probability surfaces derived using one set of environmental variables in three Geographic Information Systems (GIS)-based approaches: logistic regression with Akaike's Information Criterion (AIC), Multiple Criteria Evaluation (MCE), and Bayesian analysis (specifically Dempster-Shafer theory). We used lynx Lynx canadensis as our focal species, and developed our environment relationship model using track data collected in Banff National Park, Alberta, Canada, during winters from 1997 to 2000. The accuracy of the three spatial models was compared using a contingency table method. We determined the percentage of cases in which both presence and absence points were correctly classified (overall accuracy), the failure to predict a species where it occurred (omission error) and the prediction of presence where there was absence (commission error). Our overall accuracy showed the logistic regression approach was the most accurate (74.51%). The multiple criteria evaluation was intermediate (39.22%), while the Dempster-Shafer (D-S) theory model was the poorest (29.90%). However, omission and commission error tell us a different story: logistic regression had the lowest commission error, while D-S theory produced the lowest omission error. Our results provide evidence that habitat modellers should evaluate all three error measures when ascribing confidence to their model. We suggest that for our study area at least, the logistic regression model is optimal. However, where sample size is small or the species is very rare, it may also be useful to explore and/or use a more ecologically cautious modelling approach (e.g. Dempster-Shafer) that would over-predict, protect more sites, and thereby minimize the risk of missing critical habitat in conservation plans [Current Zoology 55(1): 28–40, 2009].
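
    The three error measures reported above follow directly from the contingency table of predicted versus observed presence/absence. A minimal sketch of those generic definitions (not the authors' GIS workflow):

```python
import numpy as np

def accuracy_measures(predicted, observed):
    """Overall accuracy, omission error and commission error computed from
    presence/absence vectors (1 = presence, 0 = absence)."""
    predicted, observed = np.asarray(predicted), np.asarray(observed)
    overall = np.mean(predicted == observed)
    omission = np.mean(predicted[observed == 1] == 0)    # missed presences
    commission = np.mean(predicted[observed == 0] == 1)  # predicted where absent
    return overall, omission, commission

print(accuracy_measures([1, 0, 1, 1, 0], [1, 1, 0, 1, 0]))
```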

  11. Interconnection blocks: a method for providing reusable, rapid, multiple, aligned and planar microfluidic interconnections

    DEFF Research Database (Denmark)

    Sabourin, David; Snakenborg, Detlef; Dufva, Hans Martin

    2009-01-01

    In this paper a method is presented for creating 'interconnection blocks' that are re-usable and provide multiple, aligned and planar microfluidic interconnections. Interconnection blocks made from polydimethylsiloxane allow rapid testing of microfluidic chips and unobstructed microfluidic observation.

  12. Approach to evaluation and management of a patient with multiple food allergies.

    Science.gov (United States)

    Bird, J Andrew

    2016-01-01

    Diagnosing food allergy is often challenging, and validated testing modalities are mostly limited to immunoglobulin E (IgE)-mediated reactions to foods. Use of food-specific IgE tests and skin prick tests in individuals without a history that supports an IgE-mediated reaction to the specific food being tested diminishes the predictive capabilities of the test. To review the literature regarding evaluation of patients with a concern for multiple food allergies and to demonstrate an evidence-based approach to diagnosis and management. A literature search was performed and articles identified as relevant based on the search terms "food allergy," "food allergy diagnosis," "skin prick test," "serum IgE test," "oral food challenge", and "food allergy management." Patients at risk of food allergy are often misdiagnosed and appropriate evaluation of patients with concern for food allergy includes taking a thorough diet history and reaction history, performing specific tests intentionally and when indicated, and conducting an oral food challenge in a safe environment by an experienced provider when test results are inconclusive. An evidence-based approach to diagnosing and managing a patient at risk of having a life-threatening food allergy is reviewed.

  13. Stepwise approach to establishing multiple outreach laboratory information system-electronic medical record interfaces.

    Science.gov (United States)

    Pantanowitz, Liron; Labranche, Wayne; Lareau, William

    2010-05-26

    Clinical laboratory outreach business is changing as more physician practices adopt an electronic medical record (EMR). Physician connectivity with the laboratory information system (LIS) is consequently becoming more important. However, there are no reports available to assist the informatician with establishing and maintaining outreach LIS-EMR connectivity. A four-stage scheme is presented that was successfully employed to establish unidirectional and bidirectional interfaces with multiple physician EMRs. This approach involves planning (step 1), followed by interface building (step 2) with subsequent testing (step 3), and finally ongoing maintenance (step 4). The role of organized project management, software as a service (SaaS), and alternate solutions for outreach connectivity are discussed.

  14. Experimental design and multiple response optimization. Using the desirability function in analytical methods development.

    Science.gov (United States)

    Candioti, Luciana Vera; De Zan, María M; Cámara, María S; Goicoechea, Héctor C

    2014-06-01

    A review about the application of response surface methodology (RSM) when several responses have to be simultaneously optimized in the field of analytical methods development is presented. Several critical issues like response transformation, multiple response optimization and modeling with least squares and artificial neural networks are discussed. Most recent analytical applications are presented in the context of analytical methods development, especially in multiple response optimization procedures using the desirability function. Copyright © 2014 Elsevier B.V. All rights reserved.
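
    As a concrete illustration of the desirability approach discussed in the review, the sketch below combines two responses with Derringer-Suich-style individual desirabilities and a geometric-mean overall score; the responses, bounds and weights are invented for illustration.

```python
import numpy as np

def desirability_max(y, low, high, weight=1.0):
    """'Larger-is-better' desirability: 0 below `low`, 1 above `high`,
    a power ramp in between (Derringer-Suich form)."""
    d = np.clip((y - low) / (high - low), 0.0, 1.0)
    return d ** weight

def overall_desirability(ds):
    """Overall desirability D: geometric mean of individual desirabilities."""
    ds = np.asarray(ds, dtype=float)
    return ds.prod() ** (1.0 / len(ds))

# toy two-response example: maximize resolution, minimize run time
# (run time is folded into 'larger-is-better' form by negating the scale)
d1 = desirability_max(1.8, low=1.0, high=2.0)       # chromatographic resolution
d2 = desirability_max(-12.0, low=-20.0, high=-8.0)  # minus run time (min)
print(overall_desirability([d1, d2]))
```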

  15. Statistical Analysis of a Class: Monte Carlo and Multiple Imputation Spreadsheet Methods for Estimation and Extrapolation

    Science.gov (United States)

    Fish, Laurel J.; Halcoussis, Dennis; Phillips, G. Michael

    2017-01-01

    The Monte Carlo method and related multiple imputation methods are traditionally used in math, physics and science to estimate and analyze data and are now becoming standard tools in analyzing business and financial problems. However, few sources explain the application of the Monte Carlo method for individuals and business professionals who are…

  16. A differential transformation approach for solving functional differential equations with multiple delays

    Science.gov (United States)

    Rebenda, Josef; Šmarda, Zdeněk

    2017-07-01

    In the paper an efficient semi-analytical approach based on the method of steps and the differential transformation is proposed for the numerical approximation of solutions of functional differential models of delayed and neutral type on a finite interval of arbitrary length, including models with several constant delays. Algorithms for both commensurate and non-commensurate delays are described, and applications are shown in examples. The validity and efficiency of the presented algorithms are compared numerically with the variational iteration method, the Adomian decomposition method and the polynomial least squares method. The Matlab package DDE23 is used to produce reference numerical values.

  17. Multiple rotation assessment through isothetic fringes in speckle photography

    International Nuclear Information System (INIS)

    Angel, Luciano; Tebaldi, Myrian; Bolognini, Nestor

    2007-01-01

    The use of different pupils for storing each speckled image in speckle photography is employed to determine multiple in-plane rotations. The method consists of recording a four-exposure specklegram where the rotations are done between exposures. This specklegram is then optically processed in a whole field approach rendering isothetic fringes, which give detailed information about the multiple rotations. It is experimentally demonstrated that the proposed arrangement permits the depiction of six isothetics in order to measure either six different angles or three nonparallel components for two local general in-plane displacements

  18. Cross-species multiple environmental stress responses: An integrated approach to identify candidate genes for multiple stress tolerance in sorghum (Sorghum bicolor (L.) Moench) and related model species.

    Directory of Open Access Journals (Sweden)

    Adugna Abdi Woldesemayat

    Full Text Available Crop response to the changing climate and the unpredictable effects of global warming, with adverse conditions such as drought stress, has brought concerns about food security to the fore; crop yield loss is a major cause of concern in this regard. Identification of genes with multiple responses across environmental stresses is the genetic foundation that leads to crop adaptation to environmental perturbations. In this paper, we introduce an integrated approach to assess candidate genes for multiple stress responses across species. The approach combines ontology-based semantic data integration with expression profiling, comparative genomics, phylogenomics, functional gene enrichment and gene enrichment network analysis to identify genes associated with plant stress phenotypes. Five different ontologies, viz., Gene Ontology (GO), Trait Ontology (TO), Plant Ontology (PO), Growth Ontology (GRO) and Environment Ontology (EO), were used to semantically integrate drought-related information. Target genes linked to Quantitative Trait Loci (QTLs) controlling yield and stress tolerance in sorghum (Sorghum bicolor (L.) Moench) and closely related species were identified. Based on the enriched GO terms of the biological processes, 1116 sorghum genes with potential responses to 5 different stresses, such as drought (18%), salt (32%), cold (20%), heat (8%) and oxidative stress (25%), were identified as over-expressed. Out of 169 sorghum drought-responsive QTL-associated genes that were identified based on expression datasets, 56% were shown to have multiple stress responses. On the other hand, out of 168 additional genes that were evaluated for orthologous pairs, 90% were conserved across species for drought tolerance. Over 50% of the identified maize and rice genes were responsive to drought and salt stresses and were co-located within multifunctional QTLs. Among the total identified multi-stress responsive genes, 272 targets were shown to be co-localized within QTLs

  19. Perceived Intrafamilial Connectedness and Autonomy in Families with and without an Anxious Family Member: A Multiple Informant Approach

    Science.gov (United States)

    de Albuquerque, Jiske E. G.; Schneider, Silvia

    2012-01-01

    Perceived intrafamilial "emotional connectedness" and "autonomy" were investigated within families with and without an anxious family member using a multiple informant approach. The sample consisted of 32 mothers with a current anxiety disorder and 56 controls, their partners, and their anxious and nonanxious teenage children. No differences were…

  20. Algebraic Verification Method for SEREs Properties via Groebner Bases Approaches

    Directory of Open Access Journals (Sweden)

    Ning Zhou

    2013-01-01

    Full Text Available This work presents an efficient solution using a computer algebra system to perform verification of linear temporal properties for synchronous digital systems. The method is essentially based on both Groebner bases approaches and symbolic simulation. A mechanism for constructing canonical polynomial-set-based symbolic representations for both circuit descriptions and assertions is studied. We then present a complete checking algorithm framework based on these algebraic representations by using Groebner bases. The computational experience reported in this work shows that the algebraic approach is quite a competitive checking method and will be a useful supplement to existing verification methods based on simulation.
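
    The ideal-membership flavor of such checks can be shown in a few lines with a computer algebra system: a property holds on every satisfying assignment exactly when its polynomial reduces to zero modulo a Groebner basis of the circuit's polynomial constraints. The toy encoding below, using SymPy rather than the authors' toolchain, is illustrative only.

```python
from sympy import symbols, groebner

# toy "circuit": two Boolean signals with the gate equation x2 = NOT x1
x1, x2 = symbols('x1 x2')
constraints = [x1**2 - x1,      # x1 is Boolean
               x2**2 - x2,      # x2 is Boolean
               x2 - (1 - x1)]   # gate equation x2 = 1 - x1
G = groebner(constraints, x1, x2, order='lex')

# the assertion "x1 AND x2 is always false" holds iff x1*x2 reduces to 0
quotients, remainder = G.reduce(x1 * x2)
print(remainder)  # 0 -> the property holds on every satisfying assignment
```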

  1. Compensation Methods for Non-uniform and Incomplete Data Sampling in High Resolution PET with Multiple Scintillation Crystal Layers

    International Nuclear Information System (INIS)

    Lee, Jae Sung; Kim, Soo Mee; Lee, Dong Soo; Hong, Jong Hong; Sim, Kwang Souk; Rhee, June Tak

    2008-01-01

    To establish methods for sinogram formation and correction in order to appropriately apply the filtered backprojection (FBP) reconstruction algorithm to data acquired using a PET scanner with multiple scintillation crystal layers. Methods for raw PET data storage and for conversion from list-mode data to histograms and sinograms were optimized. To solve the various problems that occurred while the raw histogram was converted into a sinogram, an optimal sampling strategy and a sampling efficiency correction method were investigated. Gap compensation methods that are unique to this system were also investigated. All the sinogram data were reconstructed using a 2D filtered backprojection algorithm and compared to estimate the improvements achieved by the correction algorithms. The optimal radial sampling interval and number of angular samples, in terms of the sampling theorem and the sampling efficiency correction algorithm, were pitch/2 and 120, respectively. By applying the sampling efficiency correction and gap compensation, artifacts and background noise on the reconstructed image could be reduced. A conversion method from histogram to sinogram was established for the FBP reconstruction of data acquired using multiple scintillation crystal layers. This method will be useful for fast 2D reconstruction of multiple crystal layer PET data.

  2. Multiple predictor smoothing methods for sensitivity analysis: Description of techniques

    International Nuclear Information System (INIS)

    Storlie, Curtis B.; Helton, Jon C.

    2008-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. Then, in the second and concluding part of this presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present
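
    A small numerical illustration of the first of those techniques: for a response that depends on an input through an even, nonlinear function, a LOESS fit recovers the dependence that a linear or rank correlation would miss. This uses statsmodels' lowess as a stand-in for the stepwise procedure described in the paper; the data and the crude variance-explained score are invented for illustration.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

# toy model: even, nonlinear response to x1 (near-zero linear correlation)
rng = np.random.default_rng(1)
x1 = rng.uniform(-3, 3, 500)
y = np.sin(x1) ** 2 + rng.normal(0, 0.1, 500)

smooth = lowess(y, x1, frac=0.3)          # returns sorted (x, fitted) pairs
y_hat = np.interp(x1, smooth[:, 0], smooth[:, 1])

print(np.corrcoef(x1, y)[0, 1])           # ~0: a linear measure sees nothing
print(1 - np.var(y - y_hat) / np.var(y))  # high: LOESS captures the effect
```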

  3. Systematic approaches to data analysis from the Critical Decision Method

    Directory of Open Access Journals (Sweden)

    Martin Sedlár

    2015-01-01

    Full Text Available The aim of the present paper is to introduce how to analyse qualitative data from the Critical Decision Method. First, a characterization of the method provides a meaningful introduction to the issue. This method, used in naturalistic decision making research, is one of the cognitive task analysis methods; it is based on a retrospective semi-structured interview about a critical incident from work and may be applied in various domains such as emergency services, the military, transport, sport or industry. Researchers can make two types of methodological adaptation. Within-method adaptations modify the way the interviews are conducted, and cross-method adaptations combine this method with other related methods. There are many descriptions of how to conduct the interview, but descriptions of how the data should be analysed are rare. Some researchers use conventional approaches like content analysis, grounded theory or individual procedures chosen with reference to the objectives of the research project. Wong (2004) describes two approaches to data analysis proposed for this method of data collection, which are described and reviewed here in detail. They enable systematic work with a large amount of data. The structured approach organizes the data according to an a priori analysis framework and is suitable for a clearly defined object of research. Each incident is studied separately. First, a decision chart showing the main decision points is made, and then the incident summary. These decision points are used to identify the relevant statements from the transcript, which are analysed in terms of the Recognition-Primed Decision Model. Finally, the results from all the analysed incidents are integrated. The limitation of the structured approach is that it may not reveal some interesting concepts. The emergent themes approach helps to identify these concepts while maintaining a systematic framework for analysis, and it is used for exploratory research designs. It

  4. Why is the Arkavathy River drying? A multiple hypothesis approach in a data scarce region

    Science.gov (United States)

    Srinivasan, V.; Thompson, S.; Madhyastha, K.; Penny, G.; Jeremiah, K.; Lele, S.

    2015-01-01

    The developing world faces unique challenges in achieving water security as it is disproportionately exposed to stressors such as climate change while also undergoing demographic growth, agricultural intensification and industrialization. Investigative approaches are needed that can inform sound policy development and planning to address the water security challenge in the context of data scarcity. We investigated the "predictions under change" problem in the Thippagondanahalli (TG Halli) catchment of the Arkavathy sub-basin in South India. River inflows into the TG Halli reservoir have declined since the 1970s, and the reservoir is currently operating at only 20% of its built capacity. The mechanisms responsible for the drying of the river are not understood, resulting in uncoordinated and potentially counter-productive management responses. The objective of this study was to investigate potential explanations of the drying trend and thus obtain predictive insight. We used a multiple working hypothesis approach to investigate the decline in inflow into TG Halli reservoir. Five hypotheses were tested using data from field surveys and reliable secondary sources: (1) changes in rainfall amount, timing and storm intensity, (2) rising temperatures, (3) increased groundwater extraction, (4) expansion of eucalyptus plantations, and (5) increased fragmentation of the river channel. Our results indicate that proximate anthropogenic drivers of change such as groundwater pumping, expansion of eucalyptus plantations, and to a lesser extent channel fragmentation, are much more likely to have caused the decline in surface flows in the TG Halli catchment than changing climate. The case study shows that direct human interventions play a significant role in altering the hydrology of watersheds. The multiple working hypotheses approach presents a systematic way to quantify the relative contributions of anthropogenic drivers to hydrologic change. The approach not only yields a

  5. Prediction Approach of Critical Node Based on Multiple Attribute Decision Making for Opportunistic Sensor Networks

    Directory of Open Access Journals (Sweden)

    Qifan Chen

    2016-01-01

    Full Text Available Predicting the critical nodes of an Opportunistic Sensor Network (OSN) can help us not only to improve network performance but also to decrease the cost of network maintenance. However, existing ways of predicting critical nodes in static networks are not suitable for OSNs. In this paper, the concepts of critical nodes, region contribution, and cut-vertex in a multiregion OSN are defined. We propose an approach to predict critical nodes for OSNs that is based on multiple attribute decision making (MADM). It uses region contribution (RC) to represent the dependence of regions on Ferry nodes. The TOPSIS algorithm is employed to find the Ferry node with the maximum comprehensive contribution, which is a critical node. The experimental results show that, in different scenarios, this approach can better predict the critical nodes of an OSN.
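
    TOPSIS itself is compact enough to sketch: alternatives are scored by their relative closeness to the ideal and anti-ideal solutions of a weighted, normalized decision matrix. The attributes, weights and node data below are invented; only the TOPSIS steps follow the standard algorithm.

```python
import numpy as np

def topsis(X, w, benefit):
    """Score alternatives (rows of X) by relative closeness to the ideal
    solution; `benefit` marks attributes where larger is better."""
    R = X / np.linalg.norm(X, axis=0)           # vector normalization
    V = R * w                                   # weighted normalized matrix
    ideal = np.where(benefit, V.max(0), V.min(0))
    anti = np.where(benefit, V.min(0), V.max(0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)              # higher = closer to ideal

# toy: three Ferry nodes scored on region contribution, energy cost, link quality
X = np.array([[0.8, 30.0, 0.6],
              [0.5, 55.0, 0.9],
              [0.9, 20.0, 0.7]])
w = np.array([0.5, 0.2, 0.3])
scores = topsis(X, w, benefit=np.array([True, False, True]))
print(scores.argmax())  # index of the node predicted to be critical
```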

  6. Matrix-type multiple reciprocity boundary element method for solving three-dimensional two-group neutron diffusion equations

    International Nuclear Information System (INIS)

    Itagaki, Masafumi; Sahashi, Naoki.

    1997-01-01

    The multiple reciprocity boundary element method has been applied to three-dimensional two-group neutron diffusion problems. A matrix-type boundary integral equation has been derived to solve the first and the second group neutron diffusion equations simultaneously. The matrix-type fundamental solutions used here satisfy the equation which has a point source term and is adjoint to the neutron diffusion equations. A multiple reciprocity method has been employed to transform the matrix-type domain integral related to the fission source into an equivalent boundary one. The higher order fundamental solutions required for this formulation are composed of a series of two types of analytic functions. The eigenvalue itself is also calculated using only boundary integrals. Three-dimensional test calculations indicate that the present method provides stable and accurate solutions for criticality problems. (author)

  7. Expansion methods for solving integral equations with multiple time lags using Bernstein polynomial of the second kind

    Directory of Open Access Journals (Sweden)

    Mahmoud Paripour

    2014-08-01

    Full Text Available In this paper, Bernstein polynomials are used to approximate the solutions of linear integral equations with multiple time lags (IEMTL) through expansion methods (the collocation method, the partition method, and the Galerkin method). The methods are discussed in detail and illustrated by solving some numerical examples. A comparison between the exact and approximated results obtained from these methods is carried out
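
    A collocation sketch conveys the mechanics of such expansion methods: expand the unknown in the Bernstein basis, enforce the equation at collocation points, and solve the resulting linear system. The example below treats a plain Fredholm equation with a kernel chosen so the exact solution is e^t; a lagged kernel K(t, s − τ) would only change the quadrature arguments. All details are illustrative, not taken from the paper.

```python
import numpy as np
from math import comb

def bernstein_basis(n, t):
    """Matrix of Bernstein polynomials B_{k,n}(t), k = 0..n, on [0, 1]."""
    t = np.atleast_1d(t)[:, None]
    k = np.arange(n + 1)
    binom = np.array([comb(n, int(j)) for j in k], dtype=float)
    return binom * t**k * (1.0 - t)**(n - k)

# collocation for y(t) = f(t) + int_0^1 K(t,s) y(s) ds with y ~ sum c_k B_{k,n}
n = 8
t = np.linspace(0.0, 1.0, n + 1)                 # collocation points
s, ws = np.polynomial.legendre.leggauss(20)      # Gauss nodes on [-1, 1]
s, ws = 0.5 * (s + 1.0), 0.5 * ws                # mapped to [0, 1]
K = lambda t, s: t * s                           # toy separable kernel
f = lambda t: np.exp(t) - t                      # chosen so y(t) = e^t exactly

B_t, B_s = bernstein_basis(n, t), bernstein_basis(n, s)
A = B_t - (K(t[:, None], s[None, :]) * ws) @ B_s
c = np.linalg.solve(A, f(t))
print(np.abs(B_t @ c - np.exp(t)).max())         # tiny nodal error of the fit
```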

  8. Identifying multiple outliers in linear regression: robust fit and clustering approach

    International Nuclear Information System (INIS)

    Robiah Adnan; Mohd Nor Mohamad; Halim Setan

    2001-01-01

    This research provides a clustering-based approach for determining potential candidates for outliers. It is a modification of the method proposed by Serbert et al. (1988) and is based on using the single-linkage clustering algorithm to group the standardized predicted and residual values of a data set fitted by least trimmed squares (LTS). (Author)
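
    A rough sketch of the two-stage idea, a robust fit followed by single-linkage clustering of the standardized fitted and residual values, is given below. Least trimmed squares is replaced by scikit-learn's Theil-Sen estimator (a robust stand-in, since LTS is not in that library), and the two-cluster cut is a simplification of the tree-height criterion used in this line of work.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.linear_model import TheilSenRegressor  # robust stand-in for LTS

# toy data with three planted outliers
rng = np.random.default_rng(2)
X = rng.normal(size=(60, 1))
y = 2.0 * X[:, 0] + rng.normal(0, 0.2, 60)
y[:3] += 8.0

fit = TheilSenRegressor(random_state=0).fit(X, y)
pred = fit.predict(X)
resid = y - pred
Z = np.column_stack([(pred - pred.mean()) / pred.std(),
                     resid / resid.std()])       # standardized fit/residuals
tree = linkage(Z, method='single')               # single-linkage clustering
labels = fcluster(tree, t=2, criterion='maxclust')
majority = np.bincount(labels).argmax()
print(np.flatnonzero(labels != majority))        # flags the planted outliers
```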

  9. Newton’s method an updated approach of Kantorovich’s theory

    CERN Document Server

    Ezquerro Fernández, José Antonio

    2017-01-01

    This book shows the importance of studying semilocal convergence in iterative methods through Newton's method and addresses the most important aspects of Kantorovich's theory, including related studies. Kantorovich's theory for Newton's method used techniques of functional analysis to prove the semilocal convergence of the method by means of the well-known majorant principle. To gain a deeper understanding of these techniques, the authors return to the beginning and present a detailed treatment of Kantorovich's theory for Newton's method: they include old results, for historical perspective and for comparison with new results, refine the old results, and prove their most relevant results, giving alternative approaches that lead to new sufficient semilocal convergence criteria for Newton's method. The book contains many numerical examples involving nonlinear integral equations, two boundary value problems and systems of nonlinear equations related to numerous physical phenomena. The book i...

  10. METHOD FOR SELECTION OF PROJECT MANAGEMENT APPROACH BASED ON FUZZY CONCEPTS

    Directory of Open Access Journals (Sweden)

    Igor V. KONONENKO

    2017-03-01

    Full Text Available An analysis of the literature devoted to research on the selection of a project management approach and the development of effective methods for solving this problem is given. A mathematical model and a method for the selection of a project management approach with fuzzy concepts of the applicability of existing approaches are proposed. The selection is made among such approaches as the PMBOK Guide, the ISO 21500 standard, the PRINCE2 methodology, the SWEBOK Guide, and the agile methodologies Scrum, XP, and Kanban. The number of project parameters which have a great impact on the result of the selection, and the measure of their impact, are determined. The project parameters relate to information about the project, the team, communication, and critical project risks. They include the number of people involved in the project, the customer's experience with this project team, the project team's experience in this field, the project team's understanding of the requirements, adaptability, initiative, and others. The suggested method is illustrated by the example of its application to selecting a project management approach for a software development project.

  11. Climate change, livelihoods and the multiple determinants of water adequacy: two approaches at regional to global scale

    Science.gov (United States)

    Lissner, Tabea; Reusser, Dominik

    2015-04-01

    Inadequate access to water is already a problem in many regions of the world, and processes of global change are expected to further exacerbate the situation. Many aspects determine the adequacy of water resources: besides actual physical water stress, where the resource itself is limited, economic and social water stress can be experienced if access to the resource is limited by inadequate infrastructure or by political or financial constraints. To assess the adequacy of water availability for human use, integrated approaches are needed that allow the multiple determinants to be viewed in conjunction and provide sound results as a basis for informed decisions. This contribution proposes two parts of an integrated approach to look at the multiple dimensions of water scarcity at regional to global scale. These were developed in a joint project with the German Development Agency (GIZ). It first outlines the AHEAD approach to measure Adequate Human livelihood conditions for wEll-being And Development, implemented at global scale and at national resolution. This first approach allows viewing impacts of climate change, e.g. changes in water availability, within the wider context of AHEAD conditions. A specific focus lies on the uncertainties in projections of climate change and future water availability. As adequate water access is not determined by water availability alone, in a second step we develop an approach to assess the water requirements of different sectors in more detail, including aspects of quantity, quality and access, in an integrated way. This more detailed approach is exemplified at regional scale in Indonesia and South Africa. Our results show that water scarcity limits AHEAD conditions in many countries, regardless of differing modelling outputs. The more detailed assessments highlight the relevance of additional aspects to assess the adequacy of water for human use, showing that in many regions, quality and

  12. BIOFEEDBACK: A NEW METHOD FOR CORRECTION OF MOTOR DISORDERS IN PATIENTS WITH MULTIPLE SCLEROSIS

    Directory of Open Access Journals (Sweden)

    Ya. S. Pekker

    2014-01-01

    Full Text Available Motor disorders are among the major disabling factors in multiple sclerosis. Rehabilitation of such disorders is one of the most important medical and social problems. Currently, a major role is given to the development of methods for the correction of motor disorders that draw on the body's own resources. One such method is adaptive control with biofeedback (BFB). The aim of our study was the correction of motor disorders in multiple sclerosis patients using biofeedback training. In the study, we developed training scenarios for a computer-based EMG biofeedback rehabilitation program aimed at the correction of motor disorders in patients with multiple sclerosis (MS). The method was tested in the neurological clinic of SSMU. The study included 9 patients with a definite diagnosis of MS and a clinical picture of combined pyramidal and cerebellar symptoms. The effectiveness of the biofeedback training procedures was assessed using specialized scales (the Kurtzke Functional Systems score; the SF-36 quality-of-life questionnaire; the Sickness Impact Profile, SIP; and the Fatigue Severity Scale, FSS). In the studied group of patients, the fatigue score (FSS) decreased, while motor control (SIP2) and the physical and mental components of health (SF-36) improved. There was a tendency toward a reduction of the neurological deficit, reflected in lower scores for pyramidal impairment on the Kurtzke scale. Analysis of the course dynamics of EMG biofeedback training for the trained muscles indicates an increase in the recorded EMG signal from session to session. A tendency toward increased strength and coordination of the trained muscles was demonstrated in the studied patients. The positive results of biofeedback therapy in patients with MS support the use of this method within comprehensive rehabilitation measures to correct motor and psycho-emotional disorders.

  13. An improved early detection method of type-2 diabetes mellitus using multiple classifier system

    KAUST Repository

    Zhu, Jia

    2015-01-01

    The specific causes of complex diseases such as Type-2 Diabetes Mellitus (T2DM) have not yet been identified. Nevertheless, many medical science researchers believe that complex diseases are caused by a combination of genetic, environmental, and lifestyle factors. Detection of such diseases becomes an issue because it is not free from false presumptions and is accompanied by unpredictable effects. Given the greatly increased amount of data gathered in medical databases, data mining has been used widely in recent years to detect and improve the diagnosis of complex diseases. However, past research showed that no single classifier can be considered optimal for all problems. Therefore, in this paper, we focus on employing multiple classifier systems to improve the accuracy of detection for complex diseases, such as T2DM. We propose a dynamic weighted voting scheme, called multiple factors weighted combination, for combining classifiers' decisions. This method considers not only local and global accuracy but also the diversity among classifiers and the localized generalization error of each classifier. We evaluated our method on two real T2DM data sets and other medical data sets. The favorable results indicated that our proposed method significantly outperforms individual classifiers and other fusion methods.
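
    The combination step of such a scheme can be sketched as weighted soft voting over the classifiers' class-probability outputs. The sketch below uses one fixed weight per classifier, whereas the paper's weights additionally depend on local accuracy, diversity and localized generalization error; all numbers are invented.

```python
import numpy as np

def weighted_vote(probas, weights):
    """Weighted soft voting: combine class-probability outputs of several
    classifiers and pick the class with the largest weighted sum."""
    probas = np.asarray(probas)                # (n_clf, n_samples, n_classes)
    w = np.asarray(weights)[:, None, None]
    return (w * probas).sum(axis=0).argmax(axis=-1)

# toy: three classifiers scoring two patients for T2DM (class 1 = diabetic)
p = [[[0.7, 0.3], [0.4, 0.6]],
     [[0.2, 0.8], [0.3, 0.7]],
     [[0.6, 0.4], [0.9, 0.1]]]
print(weighted_vote(p, weights=[0.5, 0.3, 0.2]))  # -> [0 1]
```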

  14. Structured plant metabolomics for the simultaneous exploration of multiple factors.

    Science.gov (United States)

    Vasilev, Nikolay; Boccard, Julien; Lang, Gerhard; Grömping, Ulrike; Fischer, Rainer; Goepfert, Simon; Rudaz, Serge; Schillberg, Stefan

    2016-11-17

    Multiple factors act simultaneously on plants to establish complex interaction networks involving nutrients, elicitors and metabolites. Metabolomics offers a better understanding of complex biological systems, but evaluating the simultaneous impact of different parameters on metabolic pathways that have many components is a challenging task. We therefore developed a novel approach that combines experimental design, untargeted metabolic profiling based on multiple chromatography systems and ionization modes, and multiblock data analysis, facilitating the systematic analysis of metabolic changes in plants caused by different factors acting at the same time. Using this method, target geraniol compounds produced in transgenic tobacco cell cultures were grouped into clusters based on their response to different factors. We hypothesized that our novel approach may provide more robust data for process optimization in plant cell cultures producing any target secondary metabolite, based on the simultaneous exploration of multiple factors rather than varying one factor each time. The suitability of our approach was verified by confirming several previously reported examples of elicitor-metabolite crosstalk. However, unravelling all factor-metabolite networks remains challenging because it requires the identification of all biochemically significant metabolites in the metabolomics dataset.

  15. Systematic Analysis of the Multiple Bioactivities of Green Tea through a Network Pharmacology Approach

    Directory of Open Access Journals (Sweden)

    Shoude Zhang

    2014-01-01

    Full Text Available During the past decades, a number of studies have demonstrated multiple beneficial health effects of green tea. Polyphenolics are the most biologically active components of green tea. Many targets can be targeted or affected by polyphenolics. In this study, we mined all of the targets of green tea polyphenolics (GTPs) through literature mining and target calculation, and analyzed the multiple pharmacological actions of green tea comprehensively through a network pharmacology approach. In the end, a total of 200 Homo sapiens targets were identified for fifteen GTPs. These targets were classified into six groups according to their related diseases, which included cancer, diabetes, neurodegenerative disease, cardiovascular disease, muscular disease, and inflammation. Moreover, these targets mapped into 143 KEGG pathways, 26 of which were more enriched, as determined through pathway enrichment analysis and target-pathway network analysis. Among the identified pathways, 20 pathways were selected for analyzing the mechanisms of green tea in these diseases. Overall, this study systematically illustrated the mechanisms of the pleiotropic activity of green tea by analyzing the corresponding "drug-target-pathway-disease" interaction network.

  16. A multi-disciplinary approach for the integrated assessment of multiple risks in delta areas.

    Science.gov (United States)

    Sperotto, Anna; Torresan, Silvia; Critto, Andrea; Marcomini, Antonio

    2016-04-01

    The assessment of climate-change-related risks is notoriously difficult due to the complex and uncertain combinations of hazardous events that might happen, the multiplicity of physical processes involved, and the continuous changes and interactions of environmental and socio-economic systems. One important challenge lies in predicting and modelling cascades of natural and man-made hazard events which can be triggered by climate change, encompassing different spatial and temporal scales. Another regards the potentially difficult integration of environmental, social and economic disciplines into the multi-risk concept. Finally, effective interaction between scientists and stakeholders is essential to ensure that multi-risk knowledge is translated into efficient adaptation and management strategies. The assessment is even more complex at the scale of deltaic systems, which are particularly vulnerable to global environmental changes due to the fragile equilibrium between the presence of valuable natural ecosystems and relevant economic activities. Improving our capacity to assess the combined effects of multiple hazards (e.g. sea-level rise, storm surges, reduction in sediment load, local subsidence, saltwater intrusion) is therefore essential to identify timely opportunities for adaptation. A holistic multi-risk approach is here proposed to integrate terminology, metrics and methodologies from different research fields (i.e. environmental, social and economic sciences), thus creating shared knowledge areas to advance multi-risk assessment and management in delta regions. A first test of the approach, including the application of Bayesian network analysis for the assessment of impacts of climate change on key natural systems (e.g. wetlands, protected areas, beaches) and socio-economic activities (e.g. agriculture, tourism), is applied in the Po river delta in Northern Italy. The approach is based on a bottom-up process involving local stakeholders early in different

  17. Stepwise approach to establishing multiple outreach laboratory information system-electronic medical record interfaces

    Directory of Open Access Journals (Sweden)

    Liron Pantanowitz

    2010-01-01

    Full Text Available Clinical laboratory outreach business is changing as more physician practices adopt an electronic medical record (EMR). Physician connectivity with the laboratory information system (LIS) is consequently becoming more important. However, there are no reports available to assist the informatician with establishing and maintaining outreach LIS-EMR connectivity. A four-stage scheme is presented that was successfully employed to establish unidirectional and bidirectional interfaces with multiple physician EMRs. This approach involves planning (step 1), followed by interface building (step 2) with subsequent testing (step 3), and finally ongoing maintenance (step 4). The role of organized project management, software as a service (SaaS), and alternate solutions for outreach connectivity are discussed.

  18. A novel sampling method for multiple multiscale targets from scattering amplitudes at a fixed frequency

    Science.gov (United States)

    Liu, Xiaodong

    2017-08-01

    A sampling method using the scattering amplitude is proposed for shape and location reconstruction in inverse acoustic scattering problems. Only matrix multiplication is involved in the computation; thus the novel sampling method is very easy and simple to implement. With the help of the factorization of the far field operator, we establish an inf-criterion for the characterization of underlying scatterers. This result is then used to give a lower bound of the proposed indicator functional for sampling points inside the scatterers. For sampling points outside the scatterers, we show that the indicator functional decays like the Bessel functions as the sampling point moves away from the boundary of the scatterers. We also show that the proposed indicator functional depends continuously on the scattering amplitude, which further implies that the novel sampling method is extremely stable with respect to errors in the data. Unlike classical sampling methods such as the linear sampling method or the factorization method, from the numerical point of view the novel indicator takes its maximum near the boundary of the underlying target and decays like the Bessel functions as the sampling points move away from the boundary. The numerical simulations also show that the proposed sampling method can deal with the multiple multiscale case, even when the different components are close to each other.

  19. Addressing the targeting range of the ABILHAND-56 in relapsing-remitting multiple sclerosis: A mixed methods psychometric study.

    Science.gov (United States)

    Cleanthous, Sophie; Strzok, Sara; Pompilus, Farrah; Cano, Stefan; Marquis, Patrick; Cohan, Stanley; Goldman, Myla D; Kresa-Reahl, Kiren; Petrillo, Jennifer; Castrillo-Viguera, Carmen; Cadavid, Diego; Chen, Shih-Yin

    2018-01-01

    ABILHAND, a manual ability patient-reported outcome instrument originally developed for stroke patients, has been used in multiple sclerosis clinical trials; however, psychometric analyses indicated the measure's limited measurement range and precision in higher-functioning multiple sclerosis patients. The purpose of this study was to identify candidate items to expand the measurement range of the ABILHAND-56, thus improving its ability to detect differences in manual ability in higher-functioning multiple sclerosis patients. A step-wise mixed methods design strategy was used, comprising two waves of patient interviews, a combination of qualitative (concept elicitation and cognitive debriefing) and quantitative (Rasch measurement theory) analytic techniques, and consultation interviews with three clinical neurologists specializing in multiple sclerosis. The original ABILHAND was well understood in this context of use. Eighty-two new manual ability concepts were identified. Draft supplementary items were generated and refined with patient and neurologist input. Rasch measurement theory psychometric analysis indicated that the supplementary items improved targeting to higher-functioning multiple sclerosis patients and measurement precision. The final pool of Early Multiple Sclerosis Manual Ability items comprises 20 items. The synthesis of qualitative and quantitative methods used in this study improves the ABILHAND's content validity to more effectively identify manual ability changes in early multiple sclerosis and potentially help determine treatment effect in higher-functioning patients in clinical trials.

  20. Methods for significance testing of categorical covariates in logistic regression models after multiple imputation: power and applicability analysis

    NARCIS (Netherlands)

    Eekhout, I.; Wiel, M.A. van de; Heymans, M.W.

    2017-01-01

    Background. Multiple imputation is a recommended method to handle missing data. For significance testing after multiple imputation, Rubin’s Rules (RR) are easily applied to pool parameter estimates. In a logistic regression model, to consider whether a categorical covariate with more than two levels
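
    For reference, the pooling step that Rubin's Rules prescribe is short enough to sketch; the estimates and within-imputation variances below are invented, and the degrees-of-freedom correction for the subsequent Wald test is omitted.

```python
import numpy as np

def rubins_rules(estimates, variances):
    """Pool one parameter over m imputed datasets: the pooled estimate is
    the mean; total variance T = W + (1 + 1/m) * B, with W the mean
    within-imputation variance and B the between-imputation variance."""
    q, u = np.asarray(estimates), np.asarray(variances)
    m = len(q)
    q_bar = q.mean()
    W, B = u.mean(), q.var(ddof=1)
    return q_bar, W + (1.0 + 1.0 / m) * B

# toy: a logistic-regression coefficient estimated on m = 5 imputations
print(rubins_rules([0.42, 0.47, 0.39, 0.45, 0.41], [0.02] * 5))
```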