WorldWideScience

Sample records for multiple methods approach

  1. Adjusted permutation method for multiple attribute decision making with meta-heuristic solution approaches

    Directory of Open Access Journals (Sweden)

    Hossein Karimi

    2011-04-01

Full Text Available The permutation method of multiple attribute decision making has two significant deficiencies: high computational time and wrong priority output in some problem instances. In this paper, a novel permutation method called the adjusted permutation method (APM) is proposed to compensate for the deficiencies of the conventional permutation method. We propose Tabu search (TS) and particle swarm optimization (PSO) to find suitable solutions at a reasonable computational time for large problem instances. The proposed method is examined on several numerical examples to evaluate its performance. The preliminary results show that both approaches provide competent solutions in relatively reasonable amounts of time, while TS performs better at solving APM.
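The classical permutation method the abstract refers to can be sketched in a few lines: every candidate ranking of the alternatives is scored by net concordance, and the best-scoring permutation wins. The alternatives, scores and weights below are invented for illustration, and this shows the conventional method, not the paper's adjusted variant (APM).

```python
from itertools import permutations

# Invented data: 3 alternatives scored on 3 weighted attributes.
scores = {
    "A1": [7, 5, 9],
    "A2": [8, 6, 4],
    "A3": [6, 9, 7],
}
weights = [0.5, 0.3, 0.2]

def net_concordance(order):
    """Net concordance of a candidate ranking: for every pair ranked
    i-before-j, add the weights of attributes where i outscores j and
    subtract the weights where j outscores i."""
    total = 0.0
    for p, i in enumerate(order):
        for j in order[p + 1:]:
            for w, si, sj in zip(weights, scores[i], scores[j]):
                if si > sj:
                    total += w
                elif sj > si:
                    total -= w
    return total

# Exhaustive enumeration is O(n!) -- exactly the cost that drives the
# paper toward Tabu search and PSO for large instances.
best = max(permutations(scores), key=net_concordance)
print(best)
```

Here the winning ranking places A2 first because its advantages fall on the most heavily weighted attribute.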

  2. Multiple flood vulnerability assessment approach based on fuzzy comprehensive evaluation method and coordinated development degree model.

    Science.gov (United States)

    Yang, Weichao; Xu, Kui; Lian, Jijian; Bin, Lingling; Ma, Chao

    2018-05-01

Flooding is a serious challenge that increasingly affects residents as well as policymakers, and flood vulnerability assessment is becoming increasingly relevant worldwide. The purpose of this study is to develop an approach that reveals the relationship between exposure, sensitivity and adaptive capacity for better flood vulnerability assessment, based on the fuzzy comprehensive evaluation method (FCEM) and the coordinated development degree model (CDDM). The approach is organized into three parts: establishment of an index system; assessment of exposure, sensitivity and adaptive capacity; and multiple flood vulnerability assessment. A hydrodynamic model and statistical data are employed to establish the index system; FCEM is used to evaluate exposure, sensitivity and adaptive capacity; and CDDM is applied to express the relationship between the three components of vulnerability. Six multiple flood vulnerability types and four levels are proposed to assess flood vulnerability from multiple perspectives. The approach is then applied to assess the spatial distribution of flood vulnerability in the eastern area of Hainan, China. Based on the results of the multiple flood vulnerability assessment, a decision-making process for the rational allocation of limited resources is proposed and applied to the study area. The study shows that multiple flood vulnerability assessment can evaluate vulnerability more completely and help decision makers reach more comprehensive, better-informed decisions. In summary, this study provides a new way to approach flood vulnerability assessment and disaster prevention decisions. Copyright © 2018 Elsevier Ltd. All rights reserved.
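A minimal sketch of the FCEM half of the approach: a weight vector is composed with a fuzzy membership matrix to grade one vulnerability component. All index weights and membership values below are invented for illustration.

```python
import numpy as np

# Invented weights for 3 indices of one component (e.g. exposure).
weights = np.array([0.4, 0.35, 0.25])

# R[i][j]: membership of index i in vulnerability level j
# (levels: low, medium, high, very high). Values invented.
R = np.array([
    [0.1, 0.3, 0.4, 0.2],
    [0.0, 0.2, 0.5, 0.3],
    [0.3, 0.4, 0.2, 0.1],
])

# Weighted-average fuzzy operator: B = W . R
B = weights @ R
level = ["low", "medium", "high", "very high"][int(np.argmax(B))]
print(B, level)
```

The resulting vector B grades the component across all four levels at once; taking the maximum membership is one common defuzzification choice.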

  3. Optimal planning approaches with multiple impulses for rendezvous based on hybrid genetic algorithm and control method

    Directory of Open Access Journals (Sweden)

    JingRui Zhang

    2015-03-01

Full Text Available In this article, we focus on the safe and effective completion of a rendezvous and docking task, looking at planning approaches and control for fuel-optimal rendezvous with a target spacecraft running on a near-circular reference orbit. A variety of practical path constraints are considered, including constraints on the field of view, impulses, and passive safety. A rendezvous approach is calculated using a hybrid genetic algorithm under those constraints. Furthermore, a trajectory-tracking control method is adopted to overcome external disturbances. Based on the Clohessy–Wiltshire equations, we first construct the mathematical model of optimal planning with multiple impulses under path constraints. Second, we introduce the principle of the hybrid genetic algorithm, which combines strong global and local searching ability, and explain its application to the trajectory planning problem. Then, we give three-impulse simulation examples that acquire an optimal rendezvous trajectory under the path constraints presented in this article. The effectiveness and applicability of the tracking control method are verified through numerical simulation, with the optimal trajectory above as the control objective.
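The relative-motion model named in the abstract, the Clohessy–Wiltshire equations, has a standard closed-form solution; a sketch of its in-plane part (one common convention: x radial, y along-track) is below. The mean motion and initial conditions are invented, and this is only the dynamics model, not the paper's optimization.

```python
import math

def cw_state(x0, y0, vx0, vy0, n, t):
    """In-plane relative position (x, y) at time t under the
    Clohessy-Wiltshire equations; n is the target's mean motion."""
    s, c = math.sin(n * t), math.cos(n * t)
    x = (4 - 3 * c) * x0 + (s / n) * vx0 + (2 / n) * (1 - c) * vy0
    y = (6 * (s - n * t)) * x0 + y0 - (2 / n) * (1 - c) * vx0 \
        + (4 * s - 3 * n * t) / n * vy0
    return x, y

n = 0.0011   # rad/s, roughly a low-Earth-orbit mean motion (invented)
x, y = cw_state(100.0, -200.0, 0.0, 0.0, n, 0.0)
print(x, y)  # at t = 0 the state reproduces the initial conditions
```

An impulse simply resets (vx0, vy0) at the burn time, which is why multiple-impulse planning reduces to choosing burn times and velocity increments over this closed form.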

  4. A novel approach for multiple mobile objects path planning: Parametrization method and conflict resolution strategy

    International Nuclear Information System (INIS)

    Ma, Yong; Wang, Hongwei; Zamirian, M.

    2012-01-01

We present a new two-step approach to determine conflict-free paths for mobile objects in two and three dimensions with moving obstacles. First, the shortest path of each object is set as the goal function, subject to a collision-avoidance criterion, path smoothness, and velocity and acceleration constraints. This problem is formulated as a calculus of variations problem (CVP). Using the parametrization method, the CVP is converted into time-varying nonlinear programming problems (TNLPP), which are then solved. Second, the move sequence of the objects is assigned by a priority scheme, and conflicts are resolved by a multilevel conflict resolution strategy. The efficiency of the approach is confirmed by numerical examples. -- Highlights: ► An approach with a parametrization method and conflict resolution strategy is proposed. ► The approach fits multi-object path planning in two and three dimensions. ► Single-object path planning and multi-object conflict resolution are applied in order. ► The path of each object is obtained with the parametrization method in the first phase. ► Conflict-free paths are gained by multi-object conflict resolution in the second phase.

  5. Isothermal multiple displacement amplification: a methodical approach enhancing molecular routine diagnostics of microcarcinomas and small biopsies

    Directory of Open Access Journals (Sweden)

    Mairinger FD

    2014-08-01

Full Text Available Fabian D Mairinger,1 Robert FH Walter,2 Claudia Vollbrecht,3 Thomas Hager,1 Karl Worm,1 Saskia Ting,1 Jeremias Wohlschläger,1 Paul Zarogoulidis,4 Konstantinos Zarogoulidis,4 Kurt W Schmid1 1Institute of Pathology, 2Ruhrlandklinik, West German Lung Center, University Hospital Essen, Essen, 3Institute of Pathology, University Hospital Cologne, Cologne, Germany; 4Pulmonary Department, Oncology Unit, G Papanikolaou General Hospital, Aristotle University of Thessaloniki, Thessaloniki, Greece Background and methods: Isothermal multiple displacement amplification (IMDA) can be a powerful tool in molecular routine diagnostics for homogeneous and sequence-independent whole-genome amplification of notably small tumor samples, eg, microcarcinomas and biopsies containing a small amount of tumor. Currently, this method is not well established in pathology laboratories. We designed a study to confirm the feasibility and convenience of this method for routine diagnostics with formalin-fixed, paraffin-embedded samples prepared by laser-capture microdissection. Results: A total of 250 µg DNA (concentration 5 µg/µL) was generated by amplification over a period of 8 hours with a material input of approximately 25 cells, approximately equivalent to 175 pg of genomic DNA. In the generated DNA, a representation of all chromosomes could be shown and the presence of selected genes relevant for diagnosis in clinical samples could be proven. Mutational analysis of clinical samples could be performed without any difficulty and showed concordance with earlier diagnostic findings. Conclusion: We established the feasibility and convenience of IMDA for routine diagnostics. We also showed that small amounts of DNA, which were not analyzable with current molecular methods, could be sufficient for a wide field of applications in molecular routine diagnostics when they are preamplified with IMDA. Keywords: isothermal multiple displacement amplification, isothermal, whole

  6. Beating Heart Motion Accurate Prediction Method Based on Interactive Multiple Model: An Information Fusion Approach

    Science.gov (United States)

    Xie, Weihong; Yu, Yang

    2017-01-01

Robot-assisted motion-compensated beating heart surgery has the advantage over conventional Coronary Artery Bypass Grafting (CABG) of reduced trauma to the surrounding structures, which leads to shortened recovery time. The severely nonlinear and diverse nature of irregular heart rhythm makes it enormously difficult for the robot to meet clinical requirements, especially under arrhythmias. In this paper, we propose a fusion prediction framework based on an Interactive Multiple Model (IMM) estimator, allowing each model to cover a distinguishing feature of the heart motion's underlying dynamics. We find that, in the normal state, the nonlinearity of the heart motion, with its slow time-variant changes, dominates the beating process. When an arrhythmia occurs, the irregularity mode, characterized by fast uncertainties with random patterns, becomes the leading factor of the heart motion. We deal with the prediction problem in the case of arrhythmias by estimating the state with two behavior modes which can adaptively “switch” from one to the other. We also employ the signal quality index to adaptively determine the switch transition probability within the IMM framework. We conduct comparative experiments to evaluate the proposed approach on four distinct datasets. The test results indicate that the newly proposed approach reduces prediction errors significantly. PMID:29124062

  7. Beating Heart Motion Accurate Prediction Method Based on Interactive Multiple Model: An Information Fusion Approach

    Directory of Open Access Journals (Sweden)

    Fan Liang

    2017-01-01

Full Text Available Robot-assisted motion-compensated beating heart surgery has the advantage over conventional Coronary Artery Bypass Grafting (CABG) of reduced trauma to the surrounding structures, which leads to shortened recovery time. The severely nonlinear and diverse nature of irregular heart rhythm makes it enormously difficult for the robot to meet clinical requirements, especially under arrhythmias. In this paper, we propose a fusion prediction framework based on an Interactive Multiple Model (IMM) estimator, allowing each model to cover a distinguishing feature of the heart motion's underlying dynamics. We find that, in the normal state, the nonlinearity of the heart motion, with its slow time-variant changes, dominates the beating process. When an arrhythmia occurs, the irregularity mode, characterized by fast uncertainties with random patterns, becomes the leading factor of the heart motion. We deal with the prediction problem in the case of arrhythmias by estimating the state with two behavior modes which can adaptively “switch” from one to the other. We also employ the signal quality index to adaptively determine the switch transition probability within the IMM framework. We conduct comparative experiments to evaluate the proposed approach on four distinct datasets. The test results indicate that the newly proposed approach reduces prediction errors significantly.
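The core of any IMM estimator is the mode-probability bookkeeping: a Markov matrix mixes the modes, and measurement likelihoods re-weight them. A toy sketch with two modes (normal beating vs arrhythmia) and invented numbers:

```python
import numpy as np

# P[i][j]: probability of switching from mode i to mode j (invented).
P = np.array([[0.95, 0.05],
              [0.30, 0.70]])
mu = np.array([0.9, 0.1])    # current mode probabilities (invented)

# Predicted mode probabilities after one Markov transition step.
mu_pred = mu @ P

# Suppose the arrhythmia model explains the new measurement far better:
likelihood = np.array([0.2, 2.0])   # invented model likelihoods
mu_new = mu_pred * likelihood
mu_new /= mu_new.sum()
print(mu_new)   # probability mass shifts toward the arrhythmia mode
```

The paper's signal-quality-index idea corresponds to making the matrix P itself adaptive rather than fixed as in this sketch.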

  8. Ensemble approach combining multiple methods improves human transcription start site prediction

    LENUS (Irish Health Repository)

    Dineen, David G

    2010-11-30

Abstract Background The computational prediction of transcription start sites is an important unsolved problem. Some recent progress has been made, but many promoters, particularly those not associated with CpG islands, are still difficult to locate using current methods. These methods use different features and training sets, along with a variety of machine learning techniques, and result in different prediction sets. Results We demonstrate the heterogeneity of current prediction sets, and take advantage of this heterogeneity to construct a two-level classifier ('Profisi Ensemble') using predictions from 7 programs, along with 2 other data sources. Support vector machines using 'full' and 'reduced' data sets are combined in an either/or approach. We achieve a 14% increase in performance over the current state-of-the-art, as benchmarked by a third-party tool. Conclusions Supervised learning methods are a useful way to combine predictions from diverse sources.
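One loose reading of the "either/or" combination described above can be sketched as follows; the positions and calls are invented, and the real Profisi Ensemble combines SVM outputs rather than boolean dictionaries.

```python
# Invented first-level predictions at candidate genomic positions:
# a confident, sparse "reduced" model and a broader "full" model.
reduced = {100: True, 250: False, 400: False}
full    = {100: True, 250: True,  400: False, 600: True}

def combine(pos):
    # Either the confident model claims the site, or we fall back to
    # the broader model's call.
    return reduced.get(pos, False) or full.get(pos, False)

calls = {p: combine(p) for p in sorted(set(reduced) | set(full))}
print(calls)
```

The point of the two-level design is that the precise model sets the floor while the sensitive model recovers sites the precise one never scores.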

  9. Isothermal multiple displacement amplification: a methodical approach enhancing molecular routine diagnostics of microcarcinomas and small biopsies.

    Science.gov (United States)

    Mairinger, Fabian D; Walter, Robert Fh; Vollbrecht, Claudia; Hager, Thomas; Worm, Karl; Ting, Saskia; Wohlschläger, Jeremias; Zarogoulidis, Paul; Zarogoulidis, Konstantinos; Schmid, Kurt W

    2014-01-01

Isothermal multiple displacement amplification (IMDA) can be a powerful tool in molecular routine diagnostics for homogeneous and sequence-independent whole-genome amplification of notably small tumor samples, eg, microcarcinomas and biopsies containing a small amount of tumor. Currently, this method is not well established in pathology laboratories. We designed a study to confirm the feasibility and convenience of this method for routine diagnostics with formalin-fixed, paraffin-embedded samples prepared by laser-capture microdissection. A total of 250 μg DNA (concentration 5 μg/μL) was generated by amplification over a period of 8 hours with a material input of approximately 25 cells, approximately equivalent to 175 pg of genomic DNA. In the generated DNA, a representation of all chromosomes could be shown and the presence of selected genes relevant for diagnosis in clinical samples could be proven. Mutational analysis of clinical samples could be performed without any difficulty and showed concordance with earlier diagnostic findings. We established the feasibility and convenience of IMDA for routine diagnostics. We also showed that small amounts of DNA, which were not analyzable with current molecular methods, could be sufficient for a wide field of applications in molecular routine diagnostics when they are preamplified with IMDA.

  10. Neutron source multiplication method

    International Nuclear Information System (INIS)

    Clayton, E.D.

    1985-01-01

Extensive use has been made of neutron source multiplication in thousands of measurements of critical masses and configurations, and in subcritical neutron-multiplication measurements in situ that provide data for criticality prevention and control in nuclear materials operations. There is continuing interest in developing reliable methods for monitoring the reactivity, or k_eff, of plant operations, but the required measurements are difficult to carry out and interpret on the far-subcritical configurations usually encountered. The relationship between neutron multiplication and reactivity is briefly discussed, and data are presented to illustrate problems associated with the absolute measurement of neutron multiplication and reactivity in subcritical systems. A number of curves of inverse multiplication have been selected from a variety of experiments, showing the variations observed in multiplication during the course of critical and subcritical experiments where different methods of reactivity addition were used, with different neutron source and detector positions. Concern is raised regarding the meaning and interpretation of k_eff as measured in a far-subcritical system, because of the modal effects and spectrum differences that exist between the subcritical and critical systems. Because of this, the calculation of k_eff = 1 for the critical assembly, although necessary, may not be sufficient to assure safety margins in calculations pertaining to far-subcritical systems. Further study is needed on the interpretation and meaning of k_eff in the far-subcritical system.
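The inverse-multiplication technique discussed above is often reduced to a 1/M extrapolation: as k_eff approaches 1, 1/M approaches 0, so plotting 1/M against fuel loading and extrapolating to zero predicts the critical loading. A toy sketch with invented count data (and, as the abstract cautions, real 1/M curves are rarely this linear):

```python
# Invented measurements: fissile loading vs observed inverse multiplication.
masses = [2.0, 4.0, 6.0]      # kg of fissile material added
inv_m = [0.60, 0.40, 0.20]    # measured 1/M at each loading

# Linear extrapolation through the last two points to 1/M = 0.
slope = (inv_m[-1] - inv_m[-2]) / (masses[-1] - masses[-2])
critical_mass = masses[-1] - inv_m[-1] / slope
print(critical_mass)          # predicted loading where 1/M = 0
```

In practice the extrapolation is redone after every loading step, and the curvature of the 1/M plot (which this linear sketch ignores) is itself safety-relevant information.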

  11. Ensemble approach combining multiple methods improves human transcription start site prediction.

    LENUS (Irish Health Repository)

    Dineen, David G

    2010-01-01

    The computational prediction of transcription start sites is an important unsolved problem. Some recent progress has been made, but many promoters, particularly those not associated with CpG islands, are still difficult to locate using current methods. These methods use different features and training sets, along with a variety of machine learning techniques and result in different prediction sets.

  12. Searching for intermediate-mass black holes in galaxies with low-luminosity AGN: a multiple-method approach

    Science.gov (United States)

    Koliopanos, F.; Ciambur, B.; Graham, A.; Webb, N.; Coriat, M.; Mutlu-Pakdil, B.; Davis, B.; Godet, O.; Barret, D.; Seigar, M.

    2017-10-01

Intermediate-mass black holes (IMBHs) are predicted by a variety of models and are the likely seeds of supermassive BHs (SMBHs); however, we have yet to establish their existence. One method by which we can discover IMBHs is to measure the mass of an accreting BH using X-ray and radio observations, drawing on the correlation between radio luminosity, X-ray luminosity and BH mass known as the fundamental plane of BH activity (FP-BH). Furthermore, the mass of BHs in the centers of galaxies can be estimated using scaling relations between BH mass and galactic properties. We are initiating a campaign to search for IMBH candidates in dwarf galaxies with low-luminosity AGN, using - for the first time - three different scaling relations and the FP-BH simultaneously. In this first stage of our campaign, we measure the masses of seven LLAGN that have previously been suggested to host central IMBHs, investigate the consistency between the predictions of the BH scaling relations and the FP-BH in the low-mass regime, and demonstrate that this multiple-method approach provides a robust average mass prediction. In my talk, I will discuss our methodology, results and the next steps of this campaign.
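To make the FP-BH route concrete, here is a sketch of a mass estimate from the plane, using the widely cited Merloni et al. (2003) calibration (log L_R = 0.60 log L_X + 0.78 log M + 7.33); the campaign described above may use a different fit, and the luminosities below are invented.

```python
def fp_bh_log_mass(log_lr, log_lx):
    """Black hole mass (log10 solar masses) solved from the FP-BH
    relation, Merloni et al. (2003) coefficients assumed."""
    return (log_lr - 0.60 * log_lx - 7.33) / 0.78

# Invented luminosities (erg/s, log10) for a low-luminosity AGN.
log_m = fp_bh_log_mass(log_lr=34.0, log_lx=39.0)
print(log_m)   # ~4.2, i.e. ~10^4 solar masses: IMBH-candidate territory
```

Because the plane has substantial scatter, a single FP-BH mass is only indicative, which is why the abstract pairs it with three independent scaling relations.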

  13. The reconstruction of late Holocene environmental change at Redhead Lagoon, NSW, using a multiple-method approach

    International Nuclear Information System (INIS)

    Franklin, N.; Gale, S.

    1999-01-01

… However, since the 1960s, urbanisation has also contributed to accelerated sedimentation and urban pollution within the basin. The sedimentary record also illustrates dramatic and sudden changes in sediment chemistry. In particular, atmospheric pollution from industrial activities has affected lake sediment quality. Increases in heavy metal trace elements such as lead, zinc, arsenic, nickel and copper have been attributed to fallout of atmospheric particulate matter from the nearby smelter at Cockle Creek and the coal-fired power stations around Lake Macquarie. This study shows that a multiple-method approach is capable of yielding important insights into the history of environmental conditions within a single catchment. A combination of analyses, together with documented records of land-use changes, can improve the reliability of the dates obtained by the more established chronological techniques.

  14. Performance evaluation and ranking of direct sales stores using BSC approach and fuzzy multiple attribute decision-making methods

    Directory of Open Access Journals (Sweden)

    Mojtaba Soltannezhad Dizaji

    2017-07-01

Full Text Available In an environment where markets go through a volatile process and rapid fundamental changes occur due to technological advances, it is important to ensure and maintain good performance measurement. Organizations, in their performance evaluation, should consider different types of financial and non-financial indicators. In systems like direct sales stores, in which decision units have multiple inputs and outputs, all criteria influencing performance must be combined and examined in a single system simultaneously. The purpose of this study is to evaluate the performance of different direct-sales products of a firm named Shirin Asal with a combination of the Balanced Scorecard, fuzzy AHP and TOPSIS, so that the weaknesses of subjectivity and selective consideration by evaluators in assessing the performance indicators are reduced, and integrated evaluation is provided by considering the contribution of each indicator and each indicator group of the balanced scorecard. This case study uses an applied research method. Data were collected via a questionnaire drawn from previous studies, expert opinions, and the study of documents in the organization. MATLAB and SPSS were used to analyze the data. In this study, the customer and financial perspectives proved most important for assessing the company's branches. Among the sub-criteria, the rate of new customer acquisition in the customer dimension and the net-income-to-sales ratio in the financial dimension are of the utmost importance.
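The TOPSIS ranking stage can be sketched with crisp numbers (the paper uses fuzzy AHP-derived weights; the decision matrix and weights below are invented, and all criteria are treated as benefit criteria):

```python
import numpy as np

# Invented decision matrix: rows are stores, columns are benefit criteria.
X = np.array([
    [7.0, 9.0, 6.0],
    [8.0, 7.0, 8.0],
    [6.0, 8.0, 9.0],
])
w = np.array([0.5, 0.3, 0.2])   # invented criterion weights

V = X / np.linalg.norm(X, axis=0) * w       # weighted normalized matrix
ideal, anti = V.max(axis=0), V.min(axis=0)  # ideal and anti-ideal points
d_pos = np.linalg.norm(V - ideal, axis=1)   # distance to the ideal
d_neg = np.linalg.norm(V - anti, axis=1)    # distance to the anti-ideal
closeness = d_neg / (d_pos + d_neg)         # higher = closer to the ideal
ranking = np.argsort(-closeness)
print(closeness, ranking)
```

With cost criteria the anti-ideal/ideal roles per column would flip; the fuzzy variant replaces the crisp entries with fuzzy numbers before this step.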

  15. Systematic approach to optimize a pretreatment method for ultrasensitive liquid chromatography with tandem mass spectrometry analysis of multiple target compounds in biological samples.

    Science.gov (United States)

    Togashi, Kazutaka; Mutaguchi, Kuninori; Komuro, Setsuko; Kataoka, Makoto; Yamazaki, Hiroshi; Yamashita, Shinji

    2016-08-01

In current approaches to new drug development, highly sensitive and robust analytical methods for the determination of test compounds in biological samples are essential. These analytical methods should be optimized for every target compound. However, for biological samples that contain multiple compounds, as with new drug candidates obtained by cassette dosing tests, it would be preferable to develop a single method that allows the determination of all compounds at once. This study aims to establish a systematic approach that enables selection of the most appropriate pretreatment method for multiple target compounds without using their chemical information. We investigated the retention times of 27 known compounds under different mobile phase conditions and determined the required pretreatment of human plasma samples using several solid-phase and liquid-liquid extractions. From the relationship between retention time and recovery in a principal component analysis, appropriate pretreatments were categorized into several types. Based on these categories, we optimized a pretreatment method for the identification of three calcium channel blockers in human plasma. Plasma concentrations of these drugs in a cassette-dose clinical study at the microdose level were successfully determined, with a lower limit of quantitation of 0.2 pg/mL for diltiazem, 1 pg/mL for nicardipine, and 2 pg/mL for nifedipine. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Performance evaluation and ranking of direct sales stores using BSC approach and fuzzy multiple attribute decision-making methods

    OpenAIRE

    Mojtaba Soltannezhad Dizaji; Mohammad Mahdavi Mazdeh; Ahmad Makui

    2017-01-01

    In an environment where markets go through a volatile process, and rapid fundamental changes occur due to technological advances, it is important to ensure and maintain a good performance measurement. Organizations, in their performance evaluation, should consider different types of financial and non-financial indicators. In systems like direct sales stores in which decision units have multiple inputs and outputs, all criteria influencing on performance must be combined and examined in a syst...

  17. Human-centered approaches in geovisualization design: investigating multiple methods through a long-term case study.

    Science.gov (United States)

    Lloyd, David; Dykes, Jason

    2011-12-01

Working with three domain specialists, we investigate human-centered approaches to geovisualization following an ISO 13407 taxonomy covering context of use, requirements and early stages of design. Our case study, undertaken over three years, draws attention to repeating trends: generic approaches fail to elicit adequate requirements for geovis application design; the use of real data is key to understanding needs and possibilities; trust and knowledge must be built and developed with collaborators. These processes take time, but modified human-centered approaches can be effective. A scenario developed through contextual inquiry but supplemented with domain data and graphics is useful to geovis designers. Wireframe, paper and digital prototypes enable successful communication between the specialist and geovis domains when they incorporate real and interesting data, prompting exploratory behaviour and eliciting previously unconsidered requirements. Paper prototypes are particularly successful at eliciting suggestions, especially for novel visualization. Enabling specialists to explore their data freely with a digital prototype is as effective as using a structured task protocol and is easier to administer. Autoethnography has potential for framing the design process. We conclude that a common understanding of context of use, domain data and visualization possibilities is essential to successful geovis design and develops as the design progresses. Human-centered approaches can make a significant contribution here. However, modified approaches, applied with flexibility, are most promising. We advise early, collaborative engagement with data – through simple, transient visual artefacts supported by data sketches and existing designs – before moving to successively more sophisticated data wireframes and data prototypes. © 2011 IEEE

  18. Multiple Shooting and Time Domain Decomposition Methods

    CERN Document Server

    Geiger, Michael; Körkel, Stefan; Rannacher, Rolf

    2015-01-01

    This book offers a comprehensive collection of the most advanced numerical techniques for the efficient and effective solution of simulation and optimization problems governed by systems of time-dependent differential equations. The contributions present various approaches to time domain decomposition, focusing on multiple shooting and parareal algorithms.  The range of topics covers theoretical analysis of the methods, as well as their algorithmic formulation and guidelines for practical implementation. Selected examples show that the discussed approaches are mandatory for the solution of challenging practical problems. The practicability and efficiency of the presented methods is illustrated by several case studies from fluid dynamics, data compression, image processing and computational biology, giving rise to possible new research topics.  This volume, resulting from the workshop Multiple Shooting and Time Domain Decomposition Methods, held in Heidelberg in May 2013, will be of great interest to applied...

  19. The Multiple Intelligences Teaching Method and Mathematics ...

    African Journals Online (AJOL)

The Multiple Intelligences teaching approach has evolved and been embraced widely, especially in the United States. The approach has been found to be very effective in changing situations for the better in the teaching and learning of any subject, especially mathematics. The Multiple Intelligences teaching approach proposes ...

  20. A platform analytical quality by design (AQbD) approach for multiple UHPLC-UV and UHPLC-MS methods development for protein analysis.

    Science.gov (United States)

    Kochling, Jianmei; Wu, Wei; Hua, Yimin; Guan, Qian; Castaneda-Merced, Juan

    2016-06-05

A platform analytical quality by design (AQbD) approach to method development is presented in this paper. This approach is not limited to developing a single method through the logical AQbD process; it can also be exploited across a range of applications in method development that share equipment and procedures. As demonstrated by the development of three methods, the systematic strategy offers a thorough understanding of each method's scientific strength. The knowledge gained from the UHPLC-UV peptide mapping method can be easily transferred to the UHPLC-MS oxidation method and the UHPLC-UV C-terminal heterogeneity method for the same protein. In addition, the platform AQbD strategy ensures that method robustness is built in during development. In early phases, a good method can generate reliable data for product development, allowing confident decision making. Methods developed following the AQbD approach have great potential to avoid extensive post-approval analytical method changes. In the commercial phase, high-quality data ensure timely data release, reduced regulatory risk, and lower lab operational cost. Moreover, the large, reliable database and knowledge gained during AQbD method development provide strong justification during regulatory filing for the selection of important parameters or parameter changes for method validation, and help justify the removal of unnecessary tests from product specifications. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. A multiple regression method for genomewide association studies ...

    Indian Academy of Sciences (India)

    Bujun Mei

    2018-06-07

Jun 7, 2018 ... Similar to the typical genomewide association tests using LD ... the new approach performed validly when the multiple regression based on the linkage method was employed ... the model, two groups of scenarios were simulated.

  2. Multiple histogram method and static Monte Carlo sampling

    NARCIS (Netherlands)

    Inda, M.A.; Frenkel, D.

    2004-01-01

    We describe an approach to use multiple-histogram methods in combination with static, biased Monte Carlo simulations. To illustrate this, we computed the force-extension curve of an athermal polymer from multiple histograms constructed in a series of static Rosenbluth Monte Carlo simulations. From

  3. MANGO: a new approach to multiple sequence alignment.

    Science.gov (United States)

    Zhang, Zefeng; Lin, Hao; Li, Ming

    2007-01-01

Multiple sequence alignment is a classical and challenging task in biological sequence analysis. The problem is NP-hard, and full dynamic programming takes too much time. The progressive alignment heuristics adopted by most state-of-the-art multiple sequence alignment programs suffer from the 'once a gap, always a gap' phenomenon. Is there a radically new way to do multiple sequence alignment? This paper introduces a novel and orthogonal multiple sequence alignment method, using multiple optimized spaced seeds and new algorithms to handle these seeds efficiently. Our new algorithm processes the information of all sequences as a whole, avoiding problems caused by the popular progressive approaches. Because optimized spaced seeds are provably significantly more sensitive than consecutive k-mers, the new approach promises to be more accurate and reliable. To validate the new approach, we have implemented MANGO: Multiple Alignment with N Gapped Oligos. Experiments carried out on large 16S RNA benchmarks show that MANGO compares favorably, in both accuracy and speed, against state-of-the-art multiple sequence alignment methods, including ClustalW 1.83, MUSCLE 3.6, MAFFT 5.861, ProbConsRNA 1.11, Dialign 2.2.1, DIALIGN-T 0.2.1, T-Coffee 4.85, POA 2.0 and Kalign 2.0.
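The spaced-seed idea MANGO builds on can be sketched directly: a seed such as 110101 requires matches only at its '1' positions, so a single mismatch falling under a '0' does not destroy the anchor, which is the source of the sensitivity advantage over contiguous k-mers. The sequences and seed below are invented, not MANGO's optimized seeds.

```python
def seed_hits(s, t, seed="110101"):
    """Positions (i, j) where s and t agree at every '1' of the seed."""
    idx = [k for k, ch in enumerate(seed) if ch == "1"]
    span = len(seed)
    hits = []
    for i in range(len(s) - span + 1):
        for j in range(len(t) - span + 1):
            if all(s[i + k] == t[j + k] for k in idx):
                hits.append((i, j))
    return hits

# The two sequences differ at one position (T vs A), which falls under a
# '0' of the seed, so the spaced seed still finds an anchor there.
print(seed_hits("ACGTACGT", "ACGAACGT"))
```

A contiguous seed of the same weight would demand four consecutive matches and miss any anchor spanning that mismatch.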

  4. Multiple sclerosis: general features and pharmacologic approach

    International Nuclear Information System (INIS)

    Nielsen Lagumersindez, Denis; Martinez Sanchez, Gregorio

    2009-01-01

Multiple sclerosis is an autoimmune, inflammatory and demyelinating disease of the central nervous system (CNS) of unknown etiology and chronic evolution. Different etiological hypotheses describe a close interrelation between predisposing genetic factors and various environmental factors that can give rise to an autoimmune response at the central nervous system level. The hypothesis of an autoimmune pathogeny is based on the study of experimental models and on findings in biopsies of patients affected by the disease. Accumulating data indicate that oxidative stress plays a major role in the pathogenesis of multiple sclerosis. Reactive oxygen species generated by macrophages have been implicated as mediators of demyelination and axonal damage in experimental autoimmune encephalomyelitis and in multiple sclerosis itself. Diagnosis of the disease is difficult because there is no single confirmatory test. Its management covers the treatment of acute relapses, disease modification, and symptom management. These features require an individualized approach, based on the evolution of the disease and the tolerability of treatments. In addition to diet, physical therapy is recommended among the non-pharmacologic treatments for multiple sclerosis. Furthermore, some clinical trials have been performed using natural extracts, nutrition supplements, and other agents, with promising results. Pharmacology has provided neurologists with a broad array of drugs of proven effectiveness; moreover, research laboratory results of recent years make it likely that therapeutic possibilities will increase notably in the future. (Author)

  5. Optimization of breeding methods when introducing multiple ...

    African Journals Online (AJOL)

    Optimization of breeding methods when introducing multiple resistance genes from American to Chinese wheat. JN Qi, X Zhang, C Yin, H Li, F Lin. Abstract. Stripe rust is one of the most destructive diseases of wheat worldwide. Growing resistant cultivars with resistance genes is the most effective method to control this ...

  6. Graphical approach for multiple values logic minimization

    Science.gov (United States)

    Awwal, Abdul Ahad S.; Iftekharuddin, Khan M.

    1999-03-01

    Multiple valued logic (MVL) is sought for designing high complexity, highly compact, parallel digital circuits. However, the practical realization of an MVL-based system is dependent on optimization of cost, which directly affects the optical setup. We propose a minimization technique for MVL logic optimization based on graphical visualization, such as a Karnaugh map. The proposed method is utilized to solve signed-digit binary and trinary logic minimization problems. The usefulness of the minimization technique is demonstrated for the optical implementation of MVL circuits.

  7. A Collaborative Neurodynamic Approach to Multiple-Objective Distributed Optimization.

    Science.gov (United States)

    Yang, Shaofu; Liu, Qingshan; Wang, Jun

    2018-04-01

    This paper is concerned with multiple-objective distributed optimization. Based on objective weighting and decision space decomposition, a collaborative neurodynamic approach to multiobjective distributed optimization is presented. In the approach, a system of collaborative neural networks is developed to search for Pareto optimal solutions, where each neural network is associated with one objective function and given constraints. Sufficient conditions are derived for ascertaining the convergence to a Pareto optimal solution of the collaborative neurodynamic system. In addition, it is proved that each connected subsystem can generate a Pareto optimal solution when the communication topology is disconnected. Then, a switching-topology-based method is proposed to compute multiple Pareto optimal solutions for discretized approximation of Pareto front. Finally, simulation results are discussed to substantiate the performance of the collaborative neurodynamic approach. A portfolio selection application is also given.

  8. Hybrid multiple criteria decision-making methods

    DEFF Research Database (Denmark)

    Zavadskas, Edmundas Kazimieras; Govindan, K.; Antucheviciene, Jurgita

    2016-01-01

    Formal decision-making methods can be used to help improve the overall sustainability of industries and organisations. Recently, there has been a great proliferation of works aggregating sustainability criteria by using diverse multiple criteria decision-making (MCDM) techniques. A number of revi...

  9. Development of a universal psycho-educational intervention to prevent common postpartum mental disorders in primiparous women: a multiple method approach

    Directory of Open Access Journals (Sweden)

    Rowe Heather J

    2010-08-01

    Full Text Available Abstract Background Prevention of postnatal mental disorders in women is an important component of comprehensive health service delivery because of the substantial potential benefits for population health. However, diverse approaches to prevention of postnatal depression have had limited success, possibly because anxiety and adjustment disorders are also problematic, mental health problems are multifactorially determined, and because relationships amongst psychosocial risk factors are complex and difficult to modify. The aim of this paper is to describe the development of a novel psycho-educational intervention to prevent postnatal mental disorders in mothers of firstborn infants. Methods Data from a variety of sources were synthesised: a literature review summarised epidemiological evidence about neglected modifiable risk factors; clinical research evidence identified successful psychosocial treatments for postnatal mental health problems; consultations with clinicians, health professionals, policy makers and consumers informed the proposed program; and psychological and health promotion theories underpinned the proposed mechanisms of effect. The intervention was pilot-tested with small groups of mothers and fathers and their first newborn infants. Results What Were We Thinking! is a psycho-educational intervention, designed for universal implementation, that addresses heightened learning needs of parents of first newborns. It re-conceptualises mental health problems in mothers of infants as reflecting unmet needs for adaptations in the intimate partner relationship after the birth of a baby, and skills to promote settled infant behaviour. It addresses these two risk factors in half-day seminars, facilitated by trained maternal and child health nurses using non-psychiatric language, in groups of up to five couples and their four-week-old infants in primary care. 
It is designed to promote confidence and reduce mental disorders by providing skills

  10. Fuzzy multiple linear regression: A computational approach

    Science.gov (United States)

    Juang, C. H.; Huang, X. H.; Fleming, J. W.

    1992-01-01

    This paper presents a new computational approach for performing fuzzy regression. In contrast to Bardossy's approach, the new approach, while dealing with fuzzy variables, closely follows the conventional regression technique. In this approach, treatment of fuzzy input is more 'computational' than 'symbolic.' The following sections first outline the formulation of the new approach, then deal with the implementation and computational scheme, and this is followed by examples to illustrate the new procedure.

  11. Feedback structure based entropy approach for multiple-model estimation

    Institute of Scientific and Technical Information of China (English)

    Shen-tu Han; Xue Anke; Guo Yunfei

    2013-01-01

    The variable-structure multiple-model (VSMM) approach, one of the multiple-model (MM) methods, is a popular and effective approach for handling problems with mode uncertainties. Model sequence set adaptation (MSA) is the key to designing a better VSMM. However, MSA methods in the literature leave considerable room for improvement, both theoretically and practically. To this end, we propose a feedback-structure-based entropy approach that can find the model sequence sets with the smallest size under certain conditions. The filtered data are fed back in real time and can be used by the minimum entropy (ME) based VSMM algorithms, i.e., MEVSMM. Firstly, full Markov chains are used to achieve optimal solutions. Secondly, the myopic method, together with a particle filter (PF) and the challenge match algorithm, is used to achieve sub-optimal solutions, a trade-off between practicability and optimality. The numerical results show that the proposed algorithm provides not only refined model sets but also a good robustness margin and very high accuracy.

  12. Is the Evaluation of the Students' Values Possible? An Integrated Approach to Determining the Weights of Students' Personal Goals Using Multiple-Criteria Methods

    Science.gov (United States)

    Dadelo, Stanislav; Turskis, Zenonas; Zavadskas, Edmundas Kazimieras; Kacerauskas, Tomas; Dadeliene, Ruta

    2016-01-01

    To maximize the effectiveness of a decision, it is necessary to support decision-making with integrated methods. It can be assumed that subjective evaluation (considering only absolute values) is only remotely connected with the evaluation of real processes. Therefore, relying solely on these values in process management decision-making would be a…

  13. An approach to study of methods for urban analysis and urban fabric renewal in observation of a city as a multiple fractal structure

    Directory of Open Access Journals (Sweden)

    Bogdanov Ana

    2007-01-01

    Full Text Available Urban forms and processes can be observed as fractal structures, since an internal order and regularity can be discerned in their seemingly chaotic development and complexity, and this order can be quantified and described by the methods of fractal analysis. By determining the fractal dimension it is possible to quantify the level of irregularity, the complexity and hierarchy of urban structures, as well as the level of urban transformation at various time intersections. The fractal geometry method has been used in analyses of the spatial distribution of population, networks and utilities because it corresponds better than deterministic methods to the nature of urban settlements as open, non-linear and dynamic systems. In that sense, fractal geometry becomes a means to grasp the complex morphological structure of urban settlements, the interrelationships between their inner spatial elements, and to predict future development possibilities. Moreover, on the basis of urban pattern analysis by means of fractal geometry, it is possible to evaluate growth and development processes and to perform a comparative analysis of development in spatially and temporally different settlement settings. Given that a complex urban fabric presumes tight connections and diversity, in contrast to the sprawl and monotony that increasingly characterize urban growth and development, this paper is a contribution to research on the potential of modern urban settlements to regain the spirit of spontaneity and human dimension through the application of development models based on fractal geometry.

  14. A time warping approach to multiple sequence alignment.

    Science.gov (United States)

    Arribas-Gil, Ana; Matias, Catherine

    2017-04-25

    We propose an approach for multiple sequence alignment (MSA) derived from the dynamic time warping viewpoint and recent techniques of curve synchronization developed in the context of functional data analysis. Starting from pairwise alignments of all the sequences (viewed as paths in a certain space), we construct a median path that represents the MSA we are looking for. We establish a proof of concept that our method could be an interesting ingredient to include in refined MSA techniques. We present a simple synthetic experiment as well as the study of a benchmark dataset, together with comparisons with two widely used MSA software packages.
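The dynamic-programming core of dynamic time warping, which the abstract takes as its starting point, can be sketched generically (this is textbook DTW, not the authors' curve-synchronization code):

```python
# Sketch of the dynamic time warping (DTW) distance: the minimal cumulative
# cost of aligning two sequences when elements may be stretched (repeated)
# to absorb local timing differences.
import numpy as np

def dtw(a, b, cost=lambda x, y: abs(x - y)):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = cost(a[i - 1], b[j - 1]) + min(
                D[i - 1, j],      # skip in a
                D[i, j - 1],      # skip in b
                D[i - 1, j - 1],  # match
            )
    return D[n, m]

print(dtw([1, 2, 3], [1, 2, 2, 3]))  # → 0.0 (the repeated 2 is absorbed)
```

In the paper's view, each pairwise alignment corresponds to such a warping path, and the MSA is read off from a median of those paths.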

  15. Multiple predictor smoothing methods for sensitivity analysis

    International Nuclear Information System (INIS)

    Helton, Jon Craig; Storlie, Curtis B.

    2006-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present
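A toy illustration of the abstract's central claim (not the authors' procedure; a crude binned local mean stands in for LOESS): with a nonmonotonic input-output relation, a linear measure of sensitivity sees almost nothing, while a nonparametric smoother recovers the dependence.

```python
# Linear vs. nonparametric sensitivity on a nonmonotonic relation y = x^2 + noise.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 2000)
y = x**2 + rng.normal(0, 0.05, 2000)           # nonmonotonic dependence on x

lin_r2 = np.corrcoef(x, y)[0, 1] ** 2           # what linear regression can see

bins = np.digitize(x, np.linspace(-1, 1, 21))   # 20 equal-width bins
smooth = np.array([y[bins == b].mean() for b in bins])  # local-mean "smoother"
smooth_r2 = 1 - np.var(y - smooth) / np.var(y)  # variance explained by smoother

print(round(lin_r2, 3), round(smooth_r2, 3))    # near 0 vs. near 1
```

The linear R² is essentially zero because x and x² are uncorrelated over a symmetric range, yet the smoother explains most of the variance, which is exactly why nonparametric regression yields more informative sensitivity results here.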

  16. Multiple predictor smoothing methods for sensitivity analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Helton, Jon Craig; Storlie, Curtis B.

    2006-08-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.

  17. Localized Multiple Kernel Learning A Convex Approach

    Science.gov (United States)

    2016-11-22

    data. All the aforementioned approaches to localized MKL are formulated in terms of non-convex optimization problems, and deep theoretical...

  18. Case studies: Soil mapping using multiple methods

    Science.gov (United States)

    Petersen, Hauke; Wunderlich, Tina; Hagrey, Said A. Al; Rabbel, Wolfgang; Stümpel, Harald

    2010-05-01

    Soil is a non-renewable resource with fundamental functions like filtering (e.g. water), storing (e.g. carbon), transforming (e.g. nutrients) and buffering (e.g. contamination). Degradation of soils is by now a well-known fact not only to scientists; decision makers in politics have also accepted it as a serious problem for several environmental aspects. National and international authorities have already worked out preservation and restoration strategies for soil degradation, though how to put these strategies into practice is still a matter of active research. Common to all strategies, a description of soil state and dynamics is required as a basic step. This includes collecting information from soils with methods ranging from direct soil sampling to remote applications. At an intermediate scale, mobile geophysical methods are applied, with the advantage of rapid progress but the disadvantage of site-specific calibration and interpretation issues. In the framework of the iSOIL project we present here some case studies of soil mapping performed using multiple geophysical methods. We will present examples of combined field measurements with EMI, GPR, magnetic and gamma-spectrometric techniques carried out with the mobile multi-sensor system of Kiel University (GER). Depending on soil type and actual environmental conditions, different methods show a different quality of information. By applying diverse methods we want to figure out which methods or combinations of methods will give the most reliable information concerning soil state and properties. To investigate the influence of varying material we performed mapping campaigns on field sites with sandy, loamy and loessy soils. Classification of measured or derived attributes shows not only the lateral variability but also hints at variation in the vertical distribution of soil material. For all soils, of course, soil water content can be a critical factor for a successful

  19. MFAM: Multiple Frequency Adaptive Model-Based Indoor Localization Method.

    Science.gov (United States)

    Tuta, Jure; Juric, Matjaz B

    2018-03-24

    This paper presents MFAM (Multiple Frequency Adaptive Model-based localization method), a novel model-based indoor localization method that is capable of using multiple wireless signal frequencies simultaneously. It utilizes an indoor architectural model and the physical properties of wireless signal propagation through objects and space. The motivation for developing a multiple frequency localization method lies in the future Wi-Fi standards (e.g., 802.11ah) and the growing number of various wireless signals present in buildings (e.g., Wi-Fi, Bluetooth, ZigBee, etc.). Current indoor localization methods mostly rely on a single wireless signal type and often require many devices to achieve the necessary accuracy. MFAM utilizes multiple wireless signal types and improves the localization accuracy over the usage of a single frequency. It continuously monitors signal propagation through space and adapts the model according to changes indoors. Using multiple signal sources lowers the required number of access points for a specific signal type while utilizing signals already present indoors. Due to the unavailability of 802.11ah hardware, we have evaluated the proposed method with similar signals; we have used 2.4 GHz Wi-Fi and 868 MHz HomeMatic home automation signals. We have performed the evaluation in a modern two-bedroom apartment and measured a mean localization error of 2.0 to 2.3 m and a median error of 2.0 to 2.2 m. Based on our evaluation results, using two different signals improves the localization accuracy by 18% in comparison to the 2.4 GHz Wi-Fi-only approach. Additional signals would improve the accuracy even further. We have shown that MFAM provides better accuracy than competing methods, while having several advantages for real-world usage.
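For intuition only, a generic log-distance path-loss ranging sketch (a standard textbook model, not MFAM's calibrated propagation model; all RSSI and power numbers below are hypothetical):

```python
# Generic model-based ranging: invert the log-distance path-loss model
# RSSI(d) = P0 - 10*n*log10(d/d0) to estimate distance from a signal reading,
# once per frequency, then fuse the per-frequency estimates.

def distance_from_rssi(rssi_dbm, tx_power_dbm, path_loss_exp=2.5, d0=1.0):
    """Distance estimate (in units of d0) from an RSSI reading."""
    return d0 * 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

# Hypothetical readings at one point: 2.4 GHz Wi-Fi and an 868 MHz signal.
d_wifi = distance_from_rssi(-60, tx_power_dbm=-35)   # → 10.0
d_868 = distance_from_rssi(-52, tx_power_dbm=-30)
d_fused = (d_wifi + d_868) / 2                       # naive two-signal fusion
print(round(d_wifi, 2), round(d_868, 2), round(d_fused, 2))
```

The point of the sketch is only that each frequency yields an independent distance estimate, so combining frequencies can reduce the error of either signal alone, which is the effect the paper quantifies with its 18% improvement.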

  20. MFAM: Multiple Frequency Adaptive Model-Based Indoor Localization Method

    Directory of Open Access Journals (Sweden)

    Jure Tuta

    2018-03-01

    Full Text Available This paper presents MFAM (Multiple Frequency Adaptive Model-based localization method), a novel model-based indoor localization method that is capable of using multiple wireless signal frequencies simultaneously. It utilizes an indoor architectural model and the physical properties of wireless signal propagation through objects and space. The motivation for developing a multiple frequency localization method lies in the future Wi-Fi standards (e.g., 802.11ah) and the growing number of various wireless signals present in buildings (e.g., Wi-Fi, Bluetooth, ZigBee, etc.). Current indoor localization methods mostly rely on a single wireless signal type and often require many devices to achieve the necessary accuracy. MFAM utilizes multiple wireless signal types and improves the localization accuracy over the usage of a single frequency. It continuously monitors signal propagation through space and adapts the model according to changes indoors. Using multiple signal sources lowers the required number of access points for a specific signal type while utilizing signals already present indoors. Due to the unavailability of 802.11ah hardware, we have evaluated the proposed method with similar signals; we have used 2.4 GHz Wi-Fi and 868 MHz HomeMatic home automation signals. We have performed the evaluation in a modern two-bedroom apartment and measured a mean localization error of 2.0 to 2.3 m and a median error of 2.0 to 2.2 m. Based on our evaluation results, using two different signals improves the localization accuracy by 18% in comparison to the 2.4 GHz Wi-Fi-only approach. Additional signals would improve the accuracy even further. We have shown that MFAM provides better accuracy than competing methods, while having several advantages for real-world usage.

  1. Approaches to data analysis of multiple-choice questions

    OpenAIRE

    Lin Ding; Robert Beichner

    2009-01-01

    This paper introduces five commonly used approaches to analyzing multiple-choice test data. They are classical test theory, factor analysis, cluster analysis, item response theory, and model analysis. Brief descriptions of the goals and algorithms of these approaches are provided, together with examples illustrating their applications in physics education research. We minimize mathematics, instead placing emphasis on data interpretation using these approaches.
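The first of the five approaches, classical test theory, reduces to simple per-item statistics; a minimal sketch with made-up response data:

```python
# Classical test theory in miniature: item difficulty (proportion correct)
# and point-biserial discrimination (correlation of each item with the
# total score). The response matrix here is invented for illustration.
import numpy as np

scores = np.array([            # rows = students, cols = items (1 = correct)
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
])

difficulty = scores.mean(axis=0)        # higher value = easier item
total = scores.sum(axis=1)              # each student's total score
discrimination = np.array([np.corrcoef(scores[:, i], total)[0, 1]
                           for i in range(scores.shape[1])])
print(difficulty)                        # → [0.8 0.6 0.2 0.8]
print(discrimination.round(2))
```

Items with high difficulty values are easy; items with low or negative discrimination fail to separate strong from weak students, which is the kind of diagnostic the other four approaches refine.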

  2. Per-Sample Multiple Kernel Approach for Visual Concept Learning

    Directory of Open Access Journals (Sweden)

    Ling-Yu Duan

    2010-01-01

    Full Text Available Learning visual concepts from images is an important yet challenging problem in computer vision and multimedia research areas. Multiple kernel learning (MKL) methods have shown great advantages in visual concept learning. As a visual concept often exhibits great appearance variance, a canonical MKL approach may not generate satisfactory results when a uniform kernel combination is applied over the input space. In this paper, we propose a per-sample multiple kernel learning (PS-MKL) approach to take intraclass diversity into account for improving discrimination. PS-MKL determines sample-wise kernel weights according to kernel functions and training samples. Kernel weights as well as kernel-based classifiers are jointly learned. For efficient learning, PS-MKL employs a sample selection strategy. Extensive experiments are carried out over three benchmark datasets of different characteristics, including Caltech101, WikipediaMM, and Pascal VOC'07. PS-MKL has achieved encouraging performance, comparable to the state of the art, and has outperformed a canonical MKL.

  3. Per-Sample Multiple Kernel Approach for Visual Concept Learning

    Directory of Open Access Journals (Sweden)

    Tian Yonghong

    2010-01-01

    Full Text Available Abstract Learning visual concepts from images is an important yet challenging problem in computer vision and multimedia research areas. Multiple kernel learning (MKL) methods have shown great advantages in visual concept learning. As a visual concept often exhibits great appearance variance, a canonical MKL approach may not generate satisfactory results when a uniform kernel combination is applied over the input space. In this paper, we propose a per-sample multiple kernel learning (PS-MKL) approach to take intraclass diversity into account for improving discrimination. PS-MKL determines sample-wise kernel weights according to kernel functions and training samples. Kernel weights as well as kernel-based classifiers are jointly learned. For efficient learning, PS-MKL employs a sample selection strategy. Extensive experiments are carried out over three benchmark datasets of different characteristics, including Caltech101, WikipediaMM, and Pascal VOC'07. PS-MKL has achieved encouraging performance, comparable to the state of the art, and has outperformed a canonical MKL.

  4. Multiple Model Approaches to Modelling and Control,

    DEFF Research Database (Denmark)

    on the ease with which prior knowledge can be incorporated. It is interesting to note that researchers in Control Theory, Neural Networks, Statistics, Artificial Intelligence and Fuzzy Logic have more or less independently developed very similar modelling methods, calling them Local Model Networks, Operating......, and allows direct incorporation of high-level and qualitative plant knowledge into the model. These advantages have proven to be very appealing for industrial applications, and the practical, intuitively appealing nature of the framework is demonstrated in chapters describing applications of local methods...... to problems in the process industries, biomedical applications and autonomous systems. The successful application of the ideas to demanding problems is already encouraging, but creative development of the basic framework is needed to better allow the integration of human knowledge with automated learning...

  5. Multiple diagnostic approaches to palpable breast mass

    Energy Technology Data Exchange (ETDEWEB)

    Chin, Soo Yil; Kim, Kie Hwan; Moon, Nan Mo; Kim, Yong Kyu; Jang, Ja June [Korea Cancer Center Hospital, Seoul (Korea, Republic of)

    1985-12-15

    The combination of various diagnostic methods for palpable breast masses has improved diagnostic accuracy. From September 1983 to August 1985, 85 pathologically proven patients with palpable breast masses examined with X-ray mammography, ultrasonography, pneumomammography and aspiration cytology at Korea Cancer Center Hospital were analyzed. The diagnostic accuracies of each method were 77.6% for mammography, 74.1% for ultrasonography, 90.5% for pneumomammography and 92.4% for aspiration cytology. Pneumomammography was accomplished without difficulty or complication and depicted more clearly delineated masses with various pathognomonic findings: air-ductal pattern in fibroadenoma (90.4%) and cystosarcoma phylloides (100%), air-halo in fibrocystic disease (14.2%), fibroadenoma (100%) and cystosarcoma phylloides (100%), air-cystogram in the cystic type of fibrocystic disease (100%), and vacuolar pattern or irregular air collection without retained peripheral gas in carcinoma.

  6. Multiple diagnostic approaches to palpable breast mass

    International Nuclear Information System (INIS)

    Chin, Soo Yil; Kim, Kie Hwan; Moon, Nan Mo; Kim, Yong Kyu; Jang, Ja June

    1985-01-01

    The combination of various diagnostic methods for palpable breast masses has improved diagnostic accuracy. From September 1983 to August 1985, 85 pathologically proven patients with palpable breast masses examined with X-ray mammography, ultrasonography, pneumomammography and aspiration cytology at Korea Cancer Center Hospital were analyzed. The diagnostic accuracies of each method were 77.6% for mammography, 74.1% for ultrasonography, 90.5% for pneumomammography and 92.4% for aspiration cytology. Pneumomammography was accomplished without difficulty or complication and depicted more clearly delineated masses with various pathognomonic findings: air-ductal pattern in fibroadenoma (90.4%) and cystosarcoma phylloides (100%), air-halo in fibrocystic disease (14.2%), fibroadenoma (100%) and cystosarcoma phylloides (100%), air-cystogram in the cystic type of fibrocystic disease (100%), and vacuolar pattern or irregular air collection without retained peripheral gas in carcinoma

  7. Decreasing Multicollinearity: A Method for Models with Multiplicative Functions.

    Science.gov (United States)

    Smith, Kent W.; Sasaki, M. S.

    1979-01-01

    A method is proposed for overcoming the problem of multicollinearity in multiple regression equations where multiplicative independent terms are entered. The method is not a ridge regression solution. (JKS)
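The remedy most often associated with this problem is mean-centering the component variables before forming the product term; the sketch below shows that effect (hedged: it illustrates the general centering idea, not necessarily Smith and Sasaki's exact derivation):

```python
# When predictors x and z sit far from zero, the product term x*z is highly
# correlated with x and z themselves (multicollinearity). Centering x and z
# before multiplying removes most of that correlation without changing the
# fitted interaction effect.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(5, 1, 5000)   # predictors with large means -> collinearity
z = rng.normal(3, 1, 5000)

raw_corr = np.corrcoef(x, x * z)[0, 1]          # correlation of x with x*z
xc, zc = x - x.mean(), z - z.mean()
centered_corr = np.corrcoef(xc, xc * zc)[0, 1]  # after mean-centering
print(round(raw_corr, 2), round(abs(centered_corr), 2))
```

With these means, the raw correlation is around 0.5 while the centered one is near zero, which is why centered product terms give far more stable regression coefficients.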

  8. A nonparametric multiple imputation approach for missing categorical data

    Directory of Open Access Journals (Sweden)

    Muhan Zhou

    2017-06-01

    Full Text Available Abstract Background Incomplete categorical variables with more than two categories are common in public health data. However, most of the existing missing-data methods do not use the information from nonresponse (missingness) probabilities. Methods We propose a nearest-neighbour multiple imputation approach to impute a missing-at-random categorical outcome and to estimate the proportion of each category. The donor set for imputation is formed by measuring distances between each missing value and the non-missing values. The distance function is calculated based on a predictive score, which is derived from two working models: one fits a multinomial logistic regression for predicting the missing categorical outcome (the outcome model) and the other fits a logistic regression for predicting missingness probabilities (the missingness model). A weighting scheme is used to accommodate contributions from the two working models when generating the predictive score. A missing value is imputed by randomly selecting one of the non-missing values with the smallest distances. We conduct a simulation to evaluate the performance of the proposed method and compare it with several alternative methods. A real-data application is also presented. Results The simulation study suggests that the proposed method performs well when missingness probabilities are not extreme under some misspecifications of the working models. However, the calibration estimator, which is also based on two working models, can be highly unstable when missingness probabilities for some observations are extremely high. In this scenario, the proposed method produces more stable and better estimates. In addition, proper weights need to be chosen to balance the contributions from the two working models and achieve optimal results for the proposed method. 
Conclusions We conclude that the proposed multiple imputation method is a reasonable approach to dealing with missing categorical outcome data with
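The two-working-model recipe described in the abstract can be sketched end to end (a simplified illustration with a binary outcome and hand-rolled logistic fits, not the authors' implementation; the weight 0.8 is an arbitrary choice):

```python
# Nearest-neighbour imputation driven by a predictive score that mixes an
# outcome model and a missingness model, as in the abstract's recipe.
import numpy as np

def fit_logistic(X, t, iters=500, lr=0.5):
    """Plain gradient-ascent logistic regression (stands in for both working models)."""
    X1 = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(X1.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X1 @ w))
        w += lr * X1.T @ (t - p) / len(t)
    return w

def predict(w, X):
    return 1 / (1 + np.exp(-np.column_stack([np.ones(len(X)), X]) @ w))

rng = np.random.default_rng(0)
n = 600
x = rng.normal(size=(n, 2))
y = (x[:, 0] + rng.normal(0, 0.5, n) > 0).astype(int)   # categorical outcome
miss = rng.random(n) < 1 / (1 + np.exp(-x[:, 1]))       # missing at random given x
obs = ~miss

w_out = fit_logistic(x[obs], y[obs])          # outcome model (observed rows only)
w_mis = fit_logistic(x, miss.astype(float))   # missingness model (all rows)

wt = 0.8                                      # mixing weight between the models
score = wt * predict(w_out, x) + (1 - wt) * predict(w_mis, x)

imputed = y.copy()
for i in np.where(miss)[0]:
    donors = np.argsort(np.abs(score[obs] - score[i]))[:5]  # 5 nearest donors
    imputed[i] = rng.choice(y[obs][donors])                  # random draw from donors

acc = np.mean(imputed[miss] == y[miss])   # agreement with the held-out truth
print(round(acc, 2))
```

Drawing randomly from the donor set, rather than taking a majority vote, preserves the between-imputation variability that multiple imputation relies on.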

  9. Approaches to Data Analysis of Multiple-Choice Questions

    Science.gov (United States)

    Ding, Lin; Beichner, Robert

    2009-01-01

    This paper introduces five commonly used approaches to analyzing multiple-choice test data. They are classical test theory, factor analysis, cluster analysis, item response theory, and model analysis. Brief descriptions of the goals and algorithms of these approaches are provided, together with examples illustrating their applications in physics…

  10. CURRENT APPROACHES FOR RESEARCH OF MULTIPLE SCLEROSIS BIOMARKERS

    Directory of Open Access Journals (Sweden)

    Kolyada T.I

    2016-12-01

    Full Text Available Current data concerning features of multiple sclerosis (MS) etiology, pathogenesis, clinical course and treatment of the disease indicate the necessity of a personalized approach to the management of MS patients. These features are the variety of possible etiological factors and mechanisms that trigger the development of MS, the different courses of the disease, and significant differences in treatment efficiency. The phenotypic and pathogenetic heterogeneity of MS requires, on the one hand, the stratification of patients into groups with different treatment depending on a number of criteria, including genetic characteristics, disease course, stage of the pathological process, and form of the disease. On the other hand, it requires the use of modern methods for assessment of the individual risk of developing MS, its early diagnosis, and evaluation and prognosis of the disease course and treatment efficiency. This approach is based on the identification and determination of biomarkers of MS, including the use of systems biology technology platforms such as genomics, proteomics, metabolomics and bioinformatics. Research and practical use of MS biomarkers in clinical and laboratory practice requires a wide range of modern medical-biological, mathematical and physicochemical methods. The group of "classical" methods used to study MS biomarkers includes physicochemical and immunological methods aimed at the selection and identification of single molecular biomarkers, as well as methods of molecular genetic analysis. This group of methods includes ELISA, western blotting, isoelectric focusing, immunohistochemical methods, flow cytometry, and spectrophotometric and nephelometric methods. These techniques make it possible to carry out both qualitative and quantitative assays of molecular biomarkers. The group of "classical" methods can also include methods based on the polymerase chain reaction (including multiplex and allele-specific PCR) and genome sequencing

  11. Approaches to data analysis of multiple-choice questions

    Directory of Open Access Journals (Sweden)

    Lin Ding

    2009-09-01

    Full Text Available This paper introduces five commonly used approaches to analyzing multiple-choice test data. They are classical test theory, factor analysis, cluster analysis, item response theory, and model analysis. Brief descriptions of the goals and algorithms of these approaches are provided, together with examples illustrating their applications in physics education research. We minimize mathematics, instead placing emphasis on data interpretation using these approaches.

  12. Hesitant fuzzy methods for multiple criteria decision analysis

    CERN Document Server

    Zhang, Xiaolu

    2017-01-01

    The book offers a comprehensive introduction to methods for solving multiple criteria decision making and group decision making problems with hesitant fuzzy information. It reports on the authors’ latest research, as well as on others’ research, providing readers with a complete set of decision making tools, such as hesitant fuzzy TOPSIS, hesitant fuzzy TODIM, hesitant fuzzy LINMAP, hesitant fuzzy QUALIFLEX, and the deviation modeling approach with heterogeneous fuzzy information. The main focus is on decision making problems in which the criteria values and/or the weights of criteria are not expressed in crisp numbers but are more suitably denoted as hesitant fuzzy elements. The largest part of the book is devoted to new methods recently developed by the authors to solve decision making problems in situations where the available information is vague or hesitant. These methods are presented in detail, together with their application to different types of decision-making problems. All in all, the book ...

  13. Approaches to Mixed Methods Dissemination and Implementation Research: Methods, Strengths, Caveats, and Opportunities.

    Science.gov (United States)

    Green, Carla A; Duan, Naihua; Gibbons, Robert D; Hoagwood, Kimberly E; Palinkas, Lawrence A; Wisdom, Jennifer P

    2015-09-01

    Limited translation of research into practice has prompted study of diffusion and implementation, and development of effective methods of encouraging adoption, dissemination and implementation. Mixed methods techniques offer approaches for assessing and addressing processes affecting implementation of evidence-based interventions. We describe common mixed methods approaches used in dissemination and implementation research, discuss strengths and limitations of mixed methods approaches to data collection, and suggest promising methods not yet widely used in implementation research. We review qualitative, quantitative, and hybrid approaches to mixed methods dissemination and implementation studies, and describe methods for integrating multiple methods to increase depth of understanding while improving reliability and validity of findings.

  14. Basic thinking patterns and working methods for multiple DFX

    DEFF Research Database (Denmark)

    Andreasen, Mogens Myrup; Mortensen, Niels Henrik

    1997-01-01

    This paper attempts to describe the theory and methodologies behind DFX and linking multiple DFX's together. The contribution is an articulation of basic thinking patterns and description of some working methods for handling multiple DFX.

  15. Heuristic Solution Approaches to the Double TSP with Multiple Stacks

    DEFF Research Database (Denmark)

    Petersen, Hanne Løhmann

    This paper introduces the Double Travelling Salesman Problem with Multiple Stacks and presents three different metaheuristic approaches to its solution. The Double Travelling Salesman Problem with Multiple Stacks is concerned with finding the shortest route performing pickups and deliveries in ...... are developed for the problem and used with each of the heuristics. Finally some computational results are given along with lower bounds on the objective value....

  17. Multiple scattering approach to X-ray absorption spectroscopy

    International Nuclear Information System (INIS)

    Benfatto, M.; Wu Ziyu

    2003-01-01

    In this paper the authors present the state of the art of the theoretical background needed for analyzing X-ray absorption spectra in the whole energy range. The multiple-scattering (MS) theory is presented in detail with some applications to real systems. The authors also describe recent progress in performing geometrical fitting of the XANES (X-ray absorption near-edge structure) energy region and beyond using a full multiple-scattering approach.

  18. Multiple attenuation to reflection seismic data using Radon filter and Wave Equation Multiple Rejection (WEMR) method

    Energy Technology Data Exchange (ETDEWEB)

    Erlangga, Mokhammad Puput [Geophysical Engineering, Institut Teknologi Bandung, Ganesha Street no.10 Basic Science B Buliding fl.2-3 Bandung, 40132, West Java Indonesia puput.erlangga@gmail.com (Indonesia)

    2015-04-16

    Separation between signal and noise, whether incoherent or coherent, is important in seismic data processing. Even after processing, coherent noise may remain mixed with the primary signal. Multiple reflections are a kind of coherent noise. In this research, we processed seismic data to attenuate multiple reflections in both synthetic and real seismic data from Mentawai. There are several methods to attenuate multiple reflections; one of them is the Radon filter method, which discriminates between primary and multiple reflections in the τ-p domain based on the move-out difference between them. However, in cases where the move-out difference is too small, the Radon filter method is not sufficient to attenuate the multiple reflections. The Radon filter also produces artifacts in the gathers. Besides the Radon filter method, we also use the Wave Equation Multiple Elimination (WEMR) method to attenuate long-period multiple reflections. The WEMR method can attenuate long-period multiple reflections based on wave equation inversion. From the inversion of the wave equation and the magnitude of the seismic wave amplitude observed at the free surface, we obtain the water-bottom reflectivity, which is used to eliminate the multiple reflections. The WEMR method does not depend on the move-out difference to attenuate long-period multiple reflections; therefore, it can be applied to seismic data with small move-out differences such as the Mentawai seismic data. The small move-out difference in the Mentawai data is caused by the limited far offset, which is only 705 meters. We compared the multiple-free stacked data after processing with the Radon filter and with the WEMR process. The conclusion is that the WEMR method attenuates long-period multiple reflections more effectively than the Radon filter method on the real (Mentawai) seismic data.
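
    The τ-p discrimination that the Radon filter relies on can be illustrated with a minimal slant-stack (linear Radon) sketch; the gather, unit sampling, and slownesses below are hypothetical, not the authors' processing code:

    ```python
    # Hypothetical illustration of the tau-p (slant-stack) transform behind
    # Radon-domain discrimination -- not the authors' code. An event with
    # linear moveout t = tau + p*x stacks coherently at its own (tau, p)
    # cell, so primaries and multiples separate when their moveouts differ.
    def slant_stack(data, offsets, slownesses):
        """data[t][ix]: gather with unit time sampling; returns panel[tau][ip]."""
        n_t = len(data)
        panel = [[0.0] * len(slownesses) for _ in range(n_t)]
        for ip, p in enumerate(slownesses):
            for ix, x in enumerate(offsets):
                shift = int(round(p * x))          # moveout in time samples
                for tau in range(max(0, n_t - shift)):
                    panel[tau][ip] += data[tau + shift][ix]
        return panel

    # Synthetic gather: one event with slowness 2 samples/offset-unit, tau0 = 10.
    n_t, offsets = 50, [0, 1, 2, 3, 4]
    data = [[0.0] * len(offsets) for _ in range(n_t)]
    for ix, x in enumerate(offsets):
        data[10 + 2 * x][ix] = 1.0

    panel = slant_stack(data, offsets, slownesses=[0, 1, 2, 3])
    # The event stacks to amplitude 5 at tau = 10, p = 2 and smears elsewhere.
    ```

    When two events have nearly the same slowness p, their energy lands in adjacent τ-p cells and cannot be muted separately, which is the small-move-out limitation the abstract describes.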

  19. On multiple level-set regularization methods for inverse problems

    International Nuclear Information System (INIS)

    DeCezaro, A; Leitão, A; Tai, X-C

    2009-01-01

    We analyze a multiple level-set method for solving inverse problems with piecewise constant solutions. This method corresponds to an iterated Tikhonov method for a particular Tikhonov functional G_α based on TV–H¹ penalization. We define generalized minimizers for our Tikhonov functional and establish an existence result. Moreover, we prove convergence and stability results for the proposed Tikhonov method. A multiple level-set algorithm is derived from the first-order optimality conditions for the Tikhonov functional G_α, similarly to the iterated Tikhonov method. The proposed multiple level-set method is tested on an inverse potential problem. Numerical experiments show that the method is able to recover multiple objects as well as multiple contrast levels.

  20. Application of algorithms and artificial-intelligence approach for locating multiple harmonics in distribution systems

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Y.-Y.; Chen, Y.-C. [Chung Yuan University (China). Dept. of Electrical Engineering

    1999-05-01

    A new method is proposed for locating multiple harmonic sources in distribution systems. The proposed method first determines the proper locations for metering measurement using fuzzy clustering. Next, an artificial neural network based on the back-propagation approach is used to identify the most likely location for multiple harmonic sources. A set of systematic algorithmic steps is developed until all harmonic locations are identified. The simulation results for an 18-busbar system show that the proposed method is very efficient in locating the multiple harmonics in a distribution system. (author)

  1. Multiple network interface core apparatus and method

    Science.gov (United States)

    Underwood, Keith D [Albuquerque, NM; Hemmert, Karl Scott [Albuquerque, NM

    2011-04-26

    A network interface controller and network interface control method comprising providing a single integrated circuit as a network interface controller and employing a plurality of network interface cores on the single integrated circuit.

  2. Multiple tag labeling method for DNA sequencing

    Science.gov (United States)

    Mathies, R.A.; Huang, X.C.; Quesada, M.A.

    1995-07-25

    A DNA sequencing method is described which uses single lane or channel electrophoresis. Sequencing fragments are separated in the lane and detected using a laser-excited, confocal fluorescence scanner. Each set of DNA sequencing fragments is separated in the same lane and then distinguished using a binary coding scheme employing only two different fluorescent labels. Also described is a method of using radioisotope labels. 5 figs.
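
    The binary coding scheme mentioned above, four base-specific fragment sets distinguished with only two fluorescent labels, can be sketched as a presence/absence code. The particular base-to-code assignment below is illustrative, not necessarily the one used in the patent:

    ```python
    # Sketch of a two-label binary coding idea: four base-specific fragment
    # sets are distinguished with only two dyes by giving each set a unique
    # presence/absence pattern. This assignment is hypothetical.
    CODE = {
        "A": (1, 0),  # dye 1 only
        "C": (0, 1),  # dye 2 only
        "G": (1, 1),  # both dyes
        "T": (0, 0),  # neither dye
    }
    DECODE = {code: base for base, code in CODE.items()}

    def call_base(dye1_present, dye2_present):
        """Recover which base's fragment set a band belongs to from its dyes."""
        return DECODE[(int(dye1_present), int(dye2_present))]
    ```

    Two labels give 2² = 4 distinguishable patterns, exactly enough for the four sequencing reactions run in a single lane.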

  3. Multiple time scale methods in tokamak magnetohydrodynamics

    International Nuclear Information System (INIS)

    Jardin, S.C.

    1984-01-01

    Several methods are discussed for integrating the magnetohydrodynamic (MHD) equations in tokamak systems on other than the fastest time scale. The dynamical grid method for simulating ideal MHD instabilities utilizes a natural nonorthogonal time-dependent coordinate transformation based on the magnetic field lines. The coordinate transformation is chosen to be free of the fast time scale motion itself, and to yield a relatively simple scalar equation for the total pressure, P = p + B²/2μ₀, which can be integrated implicitly to average over the fast time scale oscillations. Two methods are described for the resistive time scale. The zero-mass method uses a reduced set of two-fluid transport equations obtained by expanding in the inverse magnetic Reynolds number, and in the small ratio of perpendicular to parallel mobilities and thermal conductivities. The momentum equation becomes a constraint equation that forces the pressure and magnetic fields and currents to remain in force balance equilibrium as they evolve. The large mass method artificially scales up the ion mass and viscosity, thereby reducing the severe time scale disparity between wavelike and diffusionlike phenomena, but not changing the resistive time scale behavior. Other methods addressing the intermediate time scales are discussed.

  4. Novel Approach to Tourism Analysis with Multiple Outcome Capability Using Rough Set Theory

    Directory of Open Access Journals (Sweden)

    Chun-Che Huang

    2016-12-01

    Full Text Available Exploring the relationship between the characteristics and the decision-making outcomes of tourists is critical to keeping a tourism business competitive. In the investigation of tourism development, most existing studies lack a systematic approach to analyzing qualitative data. Although the traditional Rough Set (RS) based approach is an excellent classification method in qualitative modeling, it cannot deal with the case of multiple outcomes, which is a common situation in tourism. Consequently, the Multiple Outcome Reduct Generation (MORG) and Multiple Outcome Rule Extraction (MORE) approaches based on RS are proposed to handle multiple outcomes. This study proposes a ranking based approach to induct meaningful reducts and ensure the strength and robustness of decision rules, which helps decision makers understand tourists' characteristics in a tourism case.
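
    For orientation, the sketch below shows the basic rough-set machinery (indiscernibility classes and lower/upper approximations) that reduct and rule generation build on; the toy tourist data are hypothetical, and this is not the MORG/MORE algorithm itself:

    ```python
    # Minimal rough-set sketch (illustrative, not the MORG/MORE approaches):
    # objects described by attribute values are grouped into indiscernibility
    # blocks, and a decision class gets lower/upper approximations.
    from collections import defaultdict

    def approximations(objects, attrs, target):
        """objects: {name: {attr: value, ..., 'decision': value}};
        attrs: attributes defining indiscernibility; target: decision value."""
        blocks = defaultdict(set)
        for name, desc in objects.items():
            blocks[tuple(desc[a] for a in attrs)].add(name)
        target_set = {n for n, d in objects.items() if d["decision"] == target}
        lower, upper = set(), set()
        for block in blocks.values():
            if block <= target_set:
                lower |= block      # certainly in the class
            if block & target_set:
                upper |= block      # possibly in the class
        return lower, upper

    tourists = {
        "t1": {"age": "young", "income": "high", "decision": "revisit"},
        "t2": {"age": "young", "income": "high", "decision": "no"},
        "t3": {"age": "old", "income": "low", "decision": "revisit"},
    }
    lower, upper = approximations(tourists, ["age", "income"], "revisit")
    ```

    Objects in the boundary region (upper minus lower) are exactly those whose attributes do not determine the outcome, which is where handling multiple outcomes becomes necessary.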

  5. A feature point identification method for positron emission particle tracking with multiple tracers

    Energy Technology Data Exchange (ETDEWEB)

    Wiggins, Cody, E-mail: cwiggin2@vols.utk.edu [University of Tennessee-Knoxville, Department of Physics and Astronomy, 1408 Circle Drive, Knoxville, TN 37996 (United States); Santos, Roque [University of Tennessee-Knoxville, Department of Nuclear Engineering (United States); Escuela Politécnica Nacional, Departamento de Ciencias Nucleares (Ecuador); Ruggles, Arthur [University of Tennessee-Knoxville, Department of Nuclear Engineering (United States)

    2017-01-21

    A novel detection algorithm for Positron Emission Particle Tracking (PEPT) with multiple tracers based on optical feature point identification (FPI) methods is presented. This new method, the FPI method, is compared to a previous multiple PEPT method via analyses of experimental and simulated data. The FPI method outperforms the older method in cases of large particle numbers and fine time resolution. Simulated data show the FPI method to be capable of identifying 100 particles at 0.5 mm average spatial error. Detection error is seen to vary with the inverse square root of the number of lines of response (LORs) used for detection and increases as particle separation decreases. - Highlights: • A new approach to positron emission particle tracking is presented. • Using optical feature point identification analogs, multiple particle tracking is achieved. • Method is compared to previous multiple particle method. • Accuracy and applicability of method is explored.

  6. Methods for monitoring multiple gene expression

    Energy Technology Data Exchange (ETDEWEB)

    Berka, Randy [Davis, CA; Bachkirova, Elena [Davis, CA; Rey, Michael [Davis, CA

    2012-05-01

    The present invention relates to methods for monitoring differential expression of a plurality of genes in a first filamentous fungal cell relative to expression of the same genes in one or more second filamentous fungal cells using microarrays containing Trichoderma reesei ESTs or SSH clones, or a combination thereof. The present invention also relates to computer readable media and substrates containing such array features for monitoring expression of a plurality of genes in filamentous fungal cells.

  7. Methods for monitoring multiple gene expression

    Energy Technology Data Exchange (ETDEWEB)

    Berka, Randy; Bachkirova, Elena; Rey, Michael

    2013-10-01

    The present invention relates to methods for monitoring differential expression of a plurality of genes in a first filamentous fungal cell relative to expression of the same genes in one or more second filamentous fungal cells using microarrays containing Trichoderma reesei ESTs or SSH clones, or a combination thereof. The present invention also relates to computer readable media and substrates containing such array features for monitoring expression of a plurality of genes in filamentous fungal cells.

  8. An Interactive Signed Distance Approach for Multiple Criteria Group Decision-Making Based on Simple Additive Weighting Method with Incomplete Preference Information Defined by Interval Type-2 Fuzzy Sets

    OpenAIRE

    Ting-Yu Chen

    2014-01-01

    Interval type-2 fuzzy sets (T2FSs) with interval membership grades are suitable for dealing with imprecision or uncertainties in many real-world problems. In the Interval type-2 fuzzy context, the aim of this paper is to develop an interactive signed distance-based simple additive weighting (SAW) method for solving multiple criteria group decision-making problems with linguistic ratings and incomplete preference information. This paper first formulates a group decision-making problem with unc...

  9. Pediatric Multiple Sclerosis: Genes, Environment, and a Comprehensive Therapeutic Approach.

    Science.gov (United States)

    Cappa, Ryan; Theroux, Liana; Brenton, J Nicholas

    2017-10-01

    Pediatric multiple sclerosis is an increasingly recognized and studied disorder that accounts for 3% to 10% of all patients with multiple sclerosis. The risk for pediatric multiple sclerosis is thought to reflect a complex interplay between environmental and genetic risk factors. Environmental exposures, including sunlight (ultraviolet radiation, vitamin D levels), infections (Epstein-Barr virus), passive smoking, and obesity, have been identified as potential risk factors in youth. Genetic predisposition contributes to the risk of multiple sclerosis, and the major histocompatibility complex on chromosome 6 makes the single largest contribution to susceptibility to multiple sclerosis. With the use of large-scale genome-wide association studies, other non-major histocompatibility complex alleles have been identified as independent risk factors for the disease. The bridge between environment and genes likely lies in the study of epigenetic processes, which are environmentally-influenced mechanisms through which gene expression may be modified. This article will review these topics to provide a framework for discussion of a comprehensive approach to counseling and ultimately treating the pediatric patient with multiple sclerosis. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. The importance of neurophysiological-Bobath method in multiple sclerosis

    Directory of Open Access Journals (Sweden)

    Adrian Miler

    2018-02-01

    Full Text Available Rehabilitation treatment in multiple sclerosis should be carried out continuously and can take place in hospital, ambulatory, as well as environmental conditions. In the traditional approach, it focuses on reducing the symptoms of the disease, such as paresis, spasticity, ataxia, pain, sensory disturbances, speech disorders, blurred vision, fatigue, neurogenic bladder dysfunction, and cognitive impairment. In kinesiotherapy for people with paresis, the most commonly used method is the Bobath method. Improvement can be achieved by developing the ability to maintain a correct posture in various positions (so-called postural alignment), with patterns based on corrective and equivalent responses. During the therapy, various techniques are used to inhibit pathological motor patterns and stimulate the reaction. The creators of the method believe that each movement pattern has its own postural system, from which it can be initiated, carried out and effectively controlled. Correct movement cannot take place in the wrong position of the body. The physiotherapist discusses with the patient how to perform individual movement patterns, which protects the patient against spontaneous pathological compensation. The aim of the work is to determine the meaning and application of the Bobath method in the therapy of people with MS.

  11. Fuzzy multiple attribute decision making methods and applications

    CERN Document Server

    Chen, Shu-Jen

    1992-01-01

    This monograph is intended for an advanced undergraduate or graduate course as well as for researchers, who want a compilation of developments in this rapidly growing field of operations research. This is a sequel to our previous works: "Multiple Objective Decision Making--Methods and Applications: A state-of-the-Art Survey" (No.164 of the Lecture Notes); "Multiple Attribute Decision Making--Methods and Applications: A State-of-the-Art Survey" (No.186 of the Lecture Notes); and "Group Decision Making under Multiple Criteria--Methods and Applications" (No.281 of the Lecture Notes). In this monograph, the literature on methods of fuzzy Multiple Attribute Decision Making (MADM) has been reviewed thoroughly and critically, and classified systematically. This study provides readers with a capsule look into the existing methods, their characteristics, and applicability to the analysis of fuzzy MADM problems. The basic concepts and algorithms from the classical MADM methods have been used in the development of the f...

  12. Optimization of Inventories for Multiple Companies by Fuzzy Control Method

    OpenAIRE

    Kawase, Koichi; Konishi, Masami; Imai, Jun

    2008-01-01

    In this research, Fuzzy control theory is applied to the inventory control of the supply chain between multiple companies. The proposed control method deals with the amount of inventories in the supply chain between multiple companies. Referring to past demand and tardiness, inventory amounts of raw materials are determined by Fuzzy inference. Appropriate inventory control becomes possible by optimizing the fuzzy control gain using the SA method. The variation of ...

  13. Forest soil mineral weathering rates: use of multiple approaches

    Science.gov (United States)

    Randy K. Kolka; D.F. Grigal; E.A. Nater

    1996-01-01

    Knowledge of rates of release of base cations from mineral dissolution (weathering) is essential to understand ecosystem elemental cycling. Although much studied, rates remain enigmatic. We compared the results of four methods to determine cation (Ca + Mg + K) release rates at five forested soils/sites in the northcentral U.S.A. Our premise was that multiple...

  14. A multiple multicomponent approach to chimeric peptide-peptoid podands.

    Science.gov (United States)

    Rivera, Daniel G; León, Fredy; Concepción, Odette; Morales, Fidel E; Wessjohann, Ludger A

    2013-05-10

    The success of multi-armed, peptide-based receptors in supramolecular chemistry traditionally is not only based on the sequence but equally on an appropriate positioning of various peptidic chains to create a multivalent array of binding elements. As a faster, more versatile and alternative access toward (pseudo)peptidic receptors, a new approach based on multiple Ugi four-component reactions (Ugi-4CR) is proposed as a means of simultaneously incorporating several binding and catalytic elements into organizing scaffolds. By employing α-amino acids either as the amino or acid components of the Ugi-4CRs, this multiple multicomponent process allows for the one-pot assembly of podands bearing chimeric peptide-peptoid chains as appended arms. Tripodal, bowl-shaped, and concave polyfunctional skeletons are employed as topologically varied platforms for positioning the multiple peptidic chains formed by Ugi-4CRs. In a similar approach, steroidal building blocks with several axially-oriented isocyano groups are synthesized and utilized to align the chimeric chains with conformational constrains, thus providing an alternative to the classical peptido-steroidal receptors. The branched and hybrid peptide-peptoid appendages allow new possibilities for both rational design and combinatorial production of synthetic receptors. The concept is also expandable to other multicomponent reactions. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Multiple independent identification decisions: a method of calibrating eyewitness identifications.

    Science.gov (United States)

    Pryke, Sean; Lindsay, R C L; Dysart, Jennifer E; Dupuis, Paul

    2004-02-01

    Two experiments (N = 147 and N = 90) explored the use of multiple independent lineups to identify a target seen live. In Experiment 1, simultaneous face, body, and sequential voice lineups were used. In Experiment 2, sequential face, body, voice, and clothing lineups were used. Both studies demonstrated that multiple identifications (by the same witness) from independent lineups of different features are highly diagnostic of suspect guilt (G. L. Wells & R. C. L. Lindsay, 1980). The number of suspect and foil selections from multiple independent lineups provides a powerful method of calibrating the accuracy of eyewitness identification. Implications for use of current methods are discussed. ((c) 2004 APA, all rights reserved)
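
    The calibration idea can be sketched numerically: under an independence assumption, per-lineup diagnosticity ratios multiply. The rates below are hypothetical, not the experiments' data:

    ```python
    # Illustrative diagnosticity computation with hypothetical rates (not the
    # experiments' data). Diagnosticity = rate of identifying guilty suspects
    # divided by rate of identifying innocent suspects (Wells & Lindsay,
    # 1980); for independent lineups on different features the ratios multiply.
    def diagnosticity(p_id_guilty, p_id_innocent):
        return p_id_guilty / p_id_innocent

    def combined_diagnosticity(lineups):
        """lineups: iterable of (p_id_guilty, p_id_innocent) pairs."""
        result = 1.0
        for pg, pi in lineups:
            result *= diagnosticity(pg, pi)
        return result

    # Face, body, and voice lineups, each individually modest (ratio 2.0):
    combined = combined_diagnosticity([(0.6, 0.3), (0.4, 0.2), (0.3, 0.15)])
    # combined is about 8, far more diagnostic than any single lineup.
    ```

    This is why multiple identifications by the same witness from independent lineups can be highly diagnostic even when each individual lineup is only weakly so.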

  16. Predicting Speech Intelligibility with a Multiple Speech Subsystems Approach in Children with Cerebral Palsy

    Science.gov (United States)

    Lee, Jimin; Hustad, Katherine C.; Weismer, Gary

    2014-01-01

    Purpose: Speech acoustic characteristics of children with cerebral palsy (CP) were examined with a multiple speech subsystems approach; speech intelligibility was evaluated using a prediction model in which acoustic measures were selected to represent three speech subsystems. Method: Nine acoustic variables reflecting different subsystems, and…

  17. Multiple sclerosis: general features and pharmacologic approach; Esclerosis multiple: aspectos generales y abordaje farmacologico

    Energy Technology Data Exchange (ETDEWEB)

    Nielsen Lagumersindez, Denis; Martinez Sanchez, Gregorio [Instituto de Farmacia y Alimentos, Universidad de La Habana, La Habana (Cuba)

    2009-07-01

    Multiple sclerosis is an autoimmune, inflammatory, demyelinating disease of the central nervous system (CNS) of unknown etiology and critical evolution. Different etiological hypotheses speak of a close interrelation among predisposing genetic factors and dissimilar environmental factors able to give rise to an autoimmune response at the central nervous system level. The hypothesis of autoimmune pathogeny is based on the study of experimental models and on findings in biopsies of patients affected by the disease. Accumulating data indicate that oxidative stress plays a main role in the pathogenesis of multiple sclerosis. Reactive oxygen species generated by macrophages have been implicated as mediators of demyelination and of axon damage, both in experimental autoimmune encephalomyelitis and in multiple sclerosis itself. Diagnosis of the disease is difficult because there is no single confirmatory test. Management covers the treatment of acute relapses, disease modification, and symptom management. These features require an individualized approach, based on the evolution of this affection and the tolerability of treatments. In addition to diet, physical therapy is recommended among the non-pharmacologic treatments for multiple sclerosis. Besides, some clinical assays have been performed using natural extracts, nutrition supplements, and other agents, with promising results. Pharmacology has provided neurologists with a broad array of drugs of proven effectiveness; however, results from research laboratories in recent years make it probable that therapeutic possibilities will increase notably in the future. (Author)

  18. HARMONIC ANALYSIS OF SVPWM INVERTER USING MULTIPLE-PULSES METHOD

    Directory of Open Access Journals (Sweden)

    Mehmet YUMURTACI

    2009-01-01

    Full Text Available The Space Vector Modulation (SVM) technique is a popular and important PWM technique for three-phase voltage source inverters in the control of induction motors. In this study, harmonic analysis of Space Vector PWM (SVPWM) is investigated using the multiple-pulses method. The multiple-pulses method calculates the Fourier coefficients of the individual positive and negative pulses of the output PWM waveform and adds them together using the principle of superposition to calculate the Fourier coefficients of the whole PWM output signal. Harmonic magnitudes can be calculated directly by this method without linearization, look-up tables, or Bessel functions. In this study, the results obtained in the application of SVPWM for various values of the variable parameters are compared with the results obtained with the multiple-pulses method.
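
    The superposition idea can be sketched directly: each rectangular pulse has closed-form Fourier coefficients, and the waveform's coefficients are their sum. The sketch below is an illustration, not the paper's implementation; it checks itself against the known square-wave spectrum:

    ```python
    import math

    # Sketch of the multiple-pulses idea: each rectangular pulse of a PWM
    # waveform contributes closed-form Fourier coefficients, and superposition
    # sums the contributions of all pulses (illustrative, not the paper's code).
    def pulse_coeffs(n, t1, t2, amplitude, period):
        """Fourier coefficients (a_n, b_n) of one rectangular pulse on [t1, t2]."""
        w = 2 * math.pi / period
        a_n = amplitude / (n * math.pi) * (math.sin(n * w * t2) - math.sin(n * w * t1))
        b_n = amplitude / (n * math.pi) * (math.cos(n * w * t1) - math.cos(n * w * t2))
        return a_n, b_n

    def harmonic_magnitude(n, pulses, period):
        """Magnitude of harmonic n for a waveform given as (t1, t2, amplitude) pulses."""
        a = sum(pulse_coeffs(n, t1, t2, amp, period)[0] for t1, t2, amp in pulses)
        b = sum(pulse_coeffs(n, t1, t2, amp, period)[1] for t1, t2, amp in pulses)
        return math.hypot(a, b)

    # Sanity check: square wave (+1 first half, -1 second half) has
    # harmonic magnitudes 4/(n*pi) for odd n and 0 for even n.
    T = 1.0
    square = [(0.0, T / 2, 1.0), (T / 2, T, -1.0)]
    h1 = harmonic_magnitude(1, square, T)   # fundamental, about 4/pi
    h2 = harmonic_magnitude(2, square, T)   # even harmonic, about 0
    ```

    For an SVPWM waveform one would list the actual switching instants of each sector as the pulse boundaries; the per-pulse formula and the superposition step stay the same.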

  19. Research on neutron source multiplication method in nuclear critical safety

    International Nuclear Information System (INIS)

    Zhu Qingfu; Shi Yongqian; Hu Dingsheng

    2005-01-01

    The paper concerns research on the neutron source multiplication method in nuclear critical safety. Based on the neutron diffusion equation with an external neutron source, the effective sub-critical multiplication factor k_s is deduced; k_s is different from the effective neutron multiplication factor k_eff in the case of a sub-critical system with an external neutron source. The verification experiment on the sub-critical system indicates that the parameter measured with the neutron source multiplication method is k_s, and that k_s is related to the external neutron source position in the sub-critical system and to the external neutron source spectrum. The relation between k_s and k_eff and their effect on nuclear critical safety is discussed. (author)
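
    In the simplest point-model reading of source multiplication (an assumption made here for illustration, not the paper's full diffusion-equation treatment), a sub-critical system multiplies source neutrons by M = 1/(1 − k_s), so k_s can be inferred from measured count rates:

    ```python
    # Illustrative point-model sketch of source multiplication (assumed
    # simplification, not the paper's derivation): with an external source in
    # a sub-critical system, total neutron production is multiplied by
    # M = 1 / (1 - k_s), so k_s follows from a measured multiplication.
    def subcritical_multiplication(k_s):
        """Total multiplication of source neutrons, valid for 0 <= k_s < 1."""
        return 1.0 / (1.0 - k_s)

    def k_s_from_counts(count_rate, source_only_rate):
        """Infer k_s from detector count rates, assuming counts scale with M."""
        m = count_rate / source_only_rate
        return 1.0 - 1.0 / m

    # Hypothetical measurement: counts rise fivefold with the core assembled.
    k = k_s_from_counts(count_rate=5000.0, source_only_rate=1000.0)  # M = 5
    ```

    The abstract's point is that this inferred quantity is k_s, not k_eff, and that it depends on source position and spectrum, which the point model above deliberately ignores.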

  20. An Intuitionistic Multiplicative ORESTE Method for Patients’ Prioritization of Hospitalization

    Directory of Open Access Journals (Sweden)

    Cheng Zhang

    2018-04-01

    Full Text Available The tension brought about by sickbeds is a common and intractable issue in public hospitals in China due to the large population. Assigning the order of hospitalization of patients is difficult because of complex patient information such as disease type, emergency degree, and severity. It is critical to rank the patients taking full account of various factors. However, most of the evaluation criteria for hospitalization are qualitative, and the classical ranking method cannot derive the detailed relations between patients based on these criteria. Motivated by this, a comprehensive multiple criteria decision making method named the intuitionistic multiplicative ORESTE (organisation, rangement et synthèse de données relationnelles, in French) was proposed to handle the problem. The subjective and objective weights of criteria were considered in the proposed method. To do so, first, considering the vagueness of human perceptions towards the alternatives, an intuitionistic multiplicative preference relation model is applied to represent the experts’ preferences over the pairwise alternatives with respect to the predetermined criteria. Then, a correlation coefficient-based weight determining method is developed to derive the objective weights of criteria. This method can overcome the biased results caused by highly related criteria. Afterwards, we improved the general ranking method, ORESTE, by introducing a new score function which considers both the subjective and objective weights of criteria. An intuitionistic multiplicative ORESTE method was then developed and further highlighted by a case study concerning the patients’ prioritization.
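
    For orientation, the classical ORESTE projection-distance ranking that the paper's intuitionistic multiplicative variant extends can be sketched as follows; the rank data, ξ weight, and tie-breaking rule are illustrative simplifications:

    ```python
    # Sketch of classical ORESTE ranking (illustrative; the paper's
    # intuitionistic multiplicative variant and its new score function are
    # more involved). Inputs are ordinal ranks (1 = best): criterion
    # importance ranks and per-criterion alternative ranks. Each
    # (alternative, criterion) pair gets a projection distance mixing the two
    # ranks; summing the ranks of these distances per alternative gives the
    # global ordering.
    def oreste_ranks(crit_ranks, alt_ranks, xi=0.5, R=2.0):
        """Returns a global score per alternative; lower means higher
        priority. Ties are broken by alternative index here; full ORESTE
        uses mean (Besson) ranks."""
        n_alt, n_crit = len(alt_ranks), len(crit_ranks)
        dists = sorted(
            ((xi * crit_ranks[j] ** R + (1 - xi) * alt_ranks[i][j] ** R) ** (1 / R), i)
            for i in range(n_alt) for j in range(n_crit)
        )
        scores = [0.0] * n_alt
        for rank, (_, i) in enumerate(dists, start=1):
            scores[i] += rank
        return scores

    # Two patients over two criteria: patient 0 ranks first on both criteria.
    scores = oreste_ranks(crit_ranks=[1, 2], alt_ranks=[[1, 1], [2, 2]])
    ```

    Working purely on ordinal ranks is what makes ORESTE-style methods attractive for the qualitative hospitalization criteria the abstract mentions.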

  1. Symbolic interactionism as a theoretical perspective for multiple method research.

    Science.gov (United States)

    Benzies, K M; Allen, M N

    2001-02-01

    Qualitative and quantitative research rely on different epistemological assumptions about the nature of knowledge. However, the majority of nurse researchers who use multiple method designs do not address the problem of differing theoretical perspectives. Traditionally, symbolic interactionism has been viewed as one perspective underpinning qualitative research, but it is also the basis for quantitative studies. Rooted in social psychology, symbolic interactionism has a rich intellectual heritage that spans more than a century. Underlying symbolic interactionism is the major assumption that individuals act on the basis of the meaning that things have for them. The purpose of this paper is to present symbolic interactionism as a theoretical perspective for multiple method designs with the aim of expanding the dialogue about new methodologies. Symbolic interactionism can serve as a theoretical perspective for conceptually clear and soundly implemented multiple method research that will expand the understanding of human health behaviour.

  2. A data fusion approach for track monitoring from multiple in-service trains

    Science.gov (United States)

    Lederman, George; Chen, Siheng; Garrett, James H.; Kovačević, Jelena; Noh, Hae Young; Bielak, Jacobo

    2017-10-01

    We present a data fusion approach for enabling data-driven rail-infrastructure monitoring from multiple in-service trains. A number of researchers have proposed using vibration data collected from in-service trains as a low-cost method to monitor track geometry. The majority of this work has focused on developing novel features to extract information about the tracks from data produced by individual sensors on individual trains. We extend this work by presenting a technique to combine extracted features from multiple passes over the tracks from multiple sensors aboard multiple vehicles. There are a number of challenges in combining multiple data sources, like different relative position coordinates depending on the location of the sensor within the train. Furthermore, as the number of sensors increases, the likelihood that some will malfunction also increases. We use a two-step approach that first minimizes position offset errors through data alignment, then fuses the data with a novel adaptive Kalman filter that weights data according to its estimated reliability. We show the efficacy of this approach both through simulations and on a data-set collected from two instrumented trains operating over a one-year period. Combining data from numerous in-service trains allows for more continuous and more reliable data-driven monitoring than analyzing data from any one train alone; as the number of instrumented trains increases, the proposed fusion approach could facilitate track monitoring of entire rail-networks.
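
    The reliability weighting at the heart of the fusion step can be illustrated with simple inverse-variance fusion (a deliberate simplification of the paper's adaptive Kalman filter; the numbers are hypothetical):

    ```python
    # Simplified sketch of reliability-weighted fusion (the paper uses an
    # adaptive Kalman filter; this is plain inverse-variance fusion with
    # hypothetical numbers). Each train's estimate of the same track feature
    # is weighted by the inverse of its estimated variance, so sensors judged
    # unreliable contribute less.
    def fuse(estimates, variances):
        """Inverse-variance weighted fusion -> (fused_mean, fused_variance)."""
        weights = [1.0 / v for v in variances]
        total = sum(weights)
        mean = sum(w * x for w, x in zip(weights, estimates)) / total
        return mean, 1.0 / total

    # Three passes over one track segment; the third sensor is very noisy.
    mean, var = fuse([1.0, 1.2, 5.0], [0.1, 0.1, 10.0])
    # The noisy pass barely moves the estimate, and the fused variance is
    # smaller than any single pass's variance.
    ```

    This down-weighting is what lets the approach keep using data from partially malfunctioning sensors instead of discarding whole passes, after the alignment step has put all passes in a common position coordinate.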

  3. A General Method for QTL Mapping in Multiple Related Populations Derived from Multiple Parents

    Directory of Open Access Journals (Sweden)

    Yan AO

    2009-03-01

    Full Text Available It is well known that incorporating existing populations derived from multiple parents can improve QTL mapping and QTL-based breeding programs. However, no general maximum likelihood method has been available for this strategy. Building on QTL mapping in multiple related populations derived from two parents, a maximum likelihood estimation method is proposed that can incorporate several populations derived from three or more parents and can handle different mating designs. Taking a circle design as an example, we conducted simulation studies of the effect of QTL heritability and sample size on the proposed method. The results showed that, at the same heritability, jointly analyzing three F2 populations gave greater power of QTL detection and more precise and accurate parameter estimates than the joint analysis of any two F2 populations. Higher heritability, especially combined with larger sample sizes, increased the power of QTL detection and improved parameter estimation. The method has two potential advantages: first, existing results of QTL mapping in single populations can be compared and integrated with each other, improving the power of QTL detection and the precision of QTL mapping; second, because multiple parents carry multiple alleles, the method exploits genetic resources more fully, laying important genetic groundwork for plant improvement.

  4. Receptivity to Kinetic Fluctuations: A Multiple Scales Approach

    Science.gov (United States)

    Edwards, Luke; Tumin, Anatoli

    2017-11-01

    The receptivity of high-speed compressible boundary layers to kinetic fluctuations (KF) is considered within the framework of fluctuating hydrodynamics. The formulation is based on the idea that KF-induced dissipative fluxes may lead to the generation of unstable modes in the boundary layer. Fedorov and Tumin solved the receptivity problem using an asymptotic matching approach which utilized a resonant inner solution in the vicinity of the generation point of the second Mack mode. Here we take a slightly more general approach based on a multiple scales WKB ansatz which requires fewer assumptions about the behavior of the stability spectrum. The approach is modeled after the one taken by Luchini to study low speed incompressible boundary layers over a swept wing. The new framework is used to study examples of high-enthalpy, flat plate boundary layers whose spectra exhibit nuanced behavior near the generation point, such as first mode instabilities and near-neutral evolution over moderate length scales. The configurations considered exhibit supersonic unstable second Mack modes despite the temperature ratio Tw /Te > 1 , contrary to prior expectations. Supported by AFOSR and ONR.

  5. Method for measuring multiple scattering corrections between liquid scintillators

    Energy Technology Data Exchange (ETDEWEB)

    Verbeke, J.M., E-mail: verbeke2@llnl.gov; Glenn, A.M., E-mail: glenn22@llnl.gov; Keefer, G.J., E-mail: keefer1@llnl.gov; Wurtz, R.E., E-mail: wurtz1@llnl.gov

    2016-07-21

    A time-of-flight method is proposed to experimentally quantify the fractions of neutrons scattering between scintillators. An array of scintillators is characterized in terms of crosstalk with this method by measuring a californium source at different neutron energy thresholds. The spectral information recorded by the scintillators can be used to estimate the fractions of neutrons that scatter multiple times. With the help of a correction to Feynman's point-model theory to account for multiple scattering, these fractions can in turn improve the mass reconstruction of fissile materials under investigation.

  6. INTEGRATED FUSION METHOD FOR MULTIPLE TEMPORAL-SPATIAL-SPECTRAL IMAGES

    Directory of Open Access Journals (Sweden)

    H. Shen

    2012-08-01

    Full Text Available Data fusion techniques have been widely researched and applied in the remote sensing field. In this paper, an integrated fusion method for remotely sensed images is presented. Unlike existing methods, the proposed method can integrate the complementary information in multiple temporal-spatial-spectral images. To represent and process the images in one unified framework, two general image observation models are first presented, and the maximum a posteriori (MAP) framework is then used to set up the fusion model. The gradient descent method is employed to solve for the fused image. The efficacy of the proposed method is validated using simulated images.
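As a rough illustration of the MAP formulation, the sketch below fuses two downsampled observations of a 1-D signal by gradient descent on a quadratic data term plus a smoothness prior. The observation operator (plain decimation), the prior, and the step size are simplifying assumptions, not the observation models used in the paper:

```python
def downsample(x):
    """Toy observation operator: keep every second sample."""
    return [x[i] for i in range(0, len(x), 2)]

def objective(x, observations, lam):
    """MAP-style cost: squared data misfit over all observations plus a
    quadratic smoothness prior weighted by lam."""
    d = downsample(x)
    data = sum((d[i] - y[i]) ** 2 for y in observations for i in range(len(y)))
    smooth = sum((x[i + 1] - x[i]) ** 2 for i in range(len(x) - 1))
    return data + lam * smooth

def grad_descent(x0, observations, lam, step=0.1, iters=500, h=1e-6):
    """Minimize the objective by plain descent with a numerical gradient."""
    x = list(x0)
    for _ in range(iters):
        base = objective(x, observations, lam)
        grad = []
        for i in range(len(x)):
            bumped = list(x)
            bumped[i] += h
            grad.append((objective(bumped, observations, lam) - base) / h)
        x = [xi - step * gi for xi, gi in zip(x, grad)]
    return x

# Two degraded views of an unknown length-4 signal; fuse them into one estimate.
observations = [[1.0, 2.0], [1.0, 2.0]]
fused = grad_descent([0.0] * 4, observations, lam=0.01)
```

The real method uses analytic gradients and image-sized operators; only the structure (data terms from several observation models plus a prior, solved by descent) carries over.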

  7. The slice balance approach (SBA): a characteristic-based, multiple balance SN approach on unstructured polyhedral meshes

    International Nuclear Information System (INIS)

    Grove, R.E.

    2005-01-01

    The Slice Balance Approach (SBA) is an approach for solving geometrically-complex, neutral-particle transport problems within a multi-group discrete ordinates (S_N) framework. The salient feature is an angle-dependent spatial decomposition. We approximate general surfaces with arbitrary polygonal faces and mesh the geometry with arbitrarily-shaped polyhedral cells. A cell-local spatial decomposition divides cells into angle-dependent slices for each S_N direction. This subdivision follows from a characteristic-based view of the transport problem. Most balance-based characteristic methods use it implicitly; we use it explicitly and exploit its properties. Our mathematical approach is a multiple balance approach using exact spatial moments balance equations on cells and slices along with auxiliary relations on slices. We call this the slice balance approach; it is a characteristic-based multiple balance approach. The SBA is intentionally general and can extend differencing schemes to arbitrary 2-D and 3-D meshes. This work contributes to development of general-geometry deterministic transport capability to complement Monte Carlo capability for large, geometrically-complex transport problems. The purpose of this paper is to describe the SBA. We describe the spatial decomposition and mathematical framework and highlight a few interesting properties. We sketch the derivation of two solution schemes, a step characteristic scheme and a diamond-difference-like scheme, to illustrate the approach, and we present interesting results for a 2-D problem. (author)

  8. Phylo: a citizen science approach for improving multiple sequence alignment.

    Directory of Open Access Journals (Sweden)

    Alexander Kawrykow

    Full Text Available BACKGROUND: Comparative genomics, or the study of the relationships of genome structure and function across different species, offers a powerful tool for studying evolution, annotating genomes, and understanding the causes of various genetic disorders. However, aligning multiple sequences of DNA, an essential intermediate step for most types of analyses, is a difficult computational task. In parallel, citizen science, an approach that takes advantage of the fact that the human brain is exquisitely tuned to solving specific types of problems, is becoming increasingly popular. There, instances of hard computational problems are dispatched to a crowd of non-expert human game players and solutions are sent back to a central server. METHODOLOGY/PRINCIPAL FINDINGS: We introduce Phylo, a human-based computing framework applying "crowdsourcing" techniques to solve the Multiple Sequence Alignment (MSA) problem. The key idea of Phylo is to convert the MSA problem into a casual game that can be played by ordinary web users with minimal prior knowledge of the biological context. We applied this strategy to improve the alignment of the promoters of disease-related genes from up to 44 vertebrate species. Since the launch in November 2010, we have received more than 350,000 solutions submitted by more than 12,000 registered users. Our results show that the submitted solutions contributed to improving the accuracy of up to 70% of the alignment blocks considered. CONCLUSIONS/SIGNIFICANCE: We demonstrate that, combined with classical algorithms, crowd computing techniques can be successfully used to help improve the accuracy of MSA. More importantly, we show that an NP-hard computational problem can be embedded in a casual game that can be easily played by people without significant scientific training. This suggests that citizen science approaches can be used to exploit the billions of "human-brain peta-flops" of computation that are spent every day playing games.

  9. Multiple Contexts, Multiple Methods: A Study of Academic and Cultural Identity among Children of Immigrant Parents

    Science.gov (United States)

    Urdan, Tim; Munoz, Chantico

    2012-01-01

    Multiple methods were used to examine the academic motivation and cultural identity of a sample of college undergraduates. The children of immigrant parents (CIPs, n = 52) and the children of non-immigrant parents (non-CIPs, n = 42) completed surveys assessing core cultural identity, valuing of cultural accomplishments, academic self-concept,…

  10. Correction of measured multiplicity distributions by the simulated annealing method

    International Nuclear Information System (INIS)

    Hafidouni, M.

    1993-01-01

    Simulated annealing is a method used to solve combinatorial optimization problems. It is used here for the correction of the observed multiplicity distribution from S-Pb collisions at 200 GeV/c per nucleon. (author) 11 refs., 2 figs
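Simulated annealing itself is generic; below is a minimal sketch of the accept/reject loop, with a toy quadratic cost standing in for the unfolding-specific cost used by the author (which is not reproduced here). The target histogram and move set are invented for illustration:

```python
import math
import random

def anneal(cost, neighbor, x0, t0=1.0, cooling=0.995, steps=2000, seed=0):
    """Generic simulated annealing: always accept improving moves, and
    accept worsening moves with Boltzmann probability exp(-delta/T) under
    a geometrically cooling temperature T."""
    rng = random.Random(seed)
    x, c = x0, cost(x0)
    best, best_c = x, c
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        cy = cost(y)
        if cy < c or rng.random() < math.exp(-(cy - c) / t):
            x, c = y, cy
            if c < best_c:
                best, best_c = x, c
        t *= cooling  # geometric cooling schedule
    return best, best_c

# Toy stand-in cost: squared distance of a candidate corrected distribution
# from a target histogram (hypothetical numbers).
target = [5, 9, 4]
cost = lambda v: sum((a - b) ** 2 for a, b in zip(v, target))
neighbor = lambda v, rng: [max(0, n + rng.choice([-1, 1])) for n in v]
solution, final_cost = anneal(cost, neighbor, [0, 0, 0])
```

In a real unfolding problem, the cost would compare the candidate true distribution, pushed through the detector response, against the observed multiplicity distribution.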

  11. System and method for image registration of multiple video streams

    Science.gov (United States)

    Dillavou, Marcus W.; Shum, Phillip Corey; Guthrie, Baron L.; Shenai, Mahesh B.; Deaton, Drew Steven; May, Matthew Benton

    2018-02-06

    Provided herein are methods and systems for image registration from multiple sources. A method for image registration includes rendering a common field of interest that reflects a presence of a plurality of elements, wherein at least one of the elements is a remote element located remotely from another of the elements and updating the common field of interest such that the presence of the at least one of the elements is registered relative to another of the elements.

  12. Multiple time-scale methods in particle simulations of plasmas

    International Nuclear Information System (INIS)

    Cohen, B.I.

    1985-01-01

    This paper surveys recent advances in the application of multiple time-scale methods to particle simulation of collective phenomena in plasmas. These methods dramatically improve the efficiency of simulating low-frequency kinetic behavior by allowing the use of a large timestep, while retaining accuracy. The numerical schemes surveyed provide selective damping of unwanted high-frequency waves and preserve numerical stability in a variety of physics models: electrostatic, magneto-inductive, Darwin, and fully electromagnetic. The paper reviews hybrid simulation models, the implicit-moment-equation method, the direct implicit method, orbit averaging, and subcycling.

  13. Multiple-linac approach for tritium production and other applications

    International Nuclear Information System (INIS)

    Ruggiero, A.G.

    1995-01-01

    This report describes an approach to tritium production based on the use of multiple proton linear accelerators. Features of a single APT Linac as proposed by the Los Alamos National Laboratory are presented and discussed. An alternative approach to attaining the same total proton beam power of 200 MW with several lower-performance superconducting Linacs is proposed and discussed. Although each of these accelerators is a considerable extrapolation of present technology, the latter can nevertheless be built at less technical risk than the single high-current APT Linac, particularly concerning the design and performance of the low-energy front-end. The use of superconducting cavities is also proposed as a way of optimizing the accelerating gradient, the overall length, and the operational costs. The superconducting technology has already been successfully demonstrated in a number of large-size projects and should be seriously considered for the acceleration of intense low-energy proton beams. Finally, each linear accelerator would represent an ideal source of very intense proton beams for a variety of applications, such as weapons and waste actinide transmutation processes, isotope production for medical applications, spallation neutron sources, and the generation of intense beams of neutrinos and muons for nuclear and high-energy physics research. The research community at large obviously has an interest in providing expertise for, and in having access to, the demonstration, construction, operation, and exploitation of these top-performance accelerators.

  14. Statistics of electron multiplication in multiplier phototube: iterative method

    International Nuclear Information System (INIS)

    Grau Malonda, A.; Ortiz Sanchez, J.F.

    1985-01-01

    An iterative method is applied to study the variation of dynode response in the multiplier phototube. Three different situations are considered, corresponding to the following modes of electron incidence on the first dynode: incidence of exactly one electron, incidence of exactly r electrons, and incidence of an average of r̄ electrons. The responses are given for a number of steps between 1 and 5, and for values of the multiplication factor of 2.1, 2.5, 3 and 5. We also study the variance, the skewness and the excess kurtosis for different multiplication factors. (author)

  15. Statistics of electron multiplication in a multiplier phototube; Iterative method

    International Nuclear Information System (INIS)

    Ortiz, J. F.; Grau, A.

    1985-01-01

    In the present paper an iterative method is applied to study the variation of dynode response in the multiplier phototube. Three different situations are considered, corresponding to the following modes of electron incidence on the first dynode: incidence of exactly one electron, incidence of exactly r electrons, and incidence of an average of r̄ electrons. The responses are given for a number of steps between 1 and 5, and for values of the multiplication factor of 2.1, 2.5, 3 and 5. We also study the variance, the skewness and the excess kurtosis for different multiplication factors. (Author) 11 refs
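The stage-by-stage iteration can be illustrated with the textbook moment recursion for a multiplicative cascade (a standard branching-process result, not necessarily the authors' exact formulation): with per-dynode gain mean m and variance sigma2, the mean and variance after each stage follow from the law of total variance:

```python
def cascade_moments(m, sigma2, stages):
    """Iterate the mean and variance of the electron cascade one dynode at
    a time: the next stage's count is a sum of the current count of i.i.d.
    single-electron responses with mean m and variance sigma2."""
    mean, var = 1.0, 0.0  # exactly one electron strikes the first dynode
    for _ in range(stages):
        var = m * m * var + mean * sigma2  # law of total variance
        mean = m * mean
    return mean, var

mean5, var5 = cascade_moments(2.5, 2.5, 5)  # 5 stages, gain 2.5 per dynode
```

For a single starting electron this recursion reproduces the closed form Var = sigma2 * m^(n-1) * (m^n - 1)/(m - 1), which is a useful check on the iteration.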

  16. Walking path-planning method for multiple radiation areas

    International Nuclear Information System (INIS)

    Liu, Yong-kuo; Li, Meng-kun; Peng, Min-jun; Xie, Chun-li; Yuan, Cheng-qian; Wang, Shuang-yu; Chao, Nan

    2016-01-01

    Highlights: • A radiation environment modeling method is designed. • A path-evaluating method and a segmented path-planning method are proposed. • A path-planning simulation platform for radiation environments is built. • The method avoids being misled by the minimum-dose path of a single area. - Abstract: Building on a minimum-dose path-searching method, a walking path-planning method for multiple radiation areas was designed in this paper to overcome the limits of the minimum-dose path within a single area and to find the minimum-dose path across the whole space. A path-planning simulation platform was built using the C# programming language and the DirectX engine. The simulation platform was used in simulations of virtual nuclear facilities. Simulation results indicated that the walking path-planning method is effective in providing safety for people walking in nuclear facilities.
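The single-area building block, a minimum-dose path search, is essentially Dijkstra's algorithm with dose increments as edge weights. The sketch below shows that step only; the room graph and dose numbers are invented, and the paper's segmented multi-area logic is not reproduced:

```python
import heapq

def min_dose_path(graph, start, goal):
    """Dijkstra on a graph whose edge weights are dose increments
    (e.g. walking time times local dose rate); returns the total dose
    along the best route and the route itself."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    visited = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue
        visited.add(u)
        if u == goal:
            break
        for v, dose in graph.get(u, []):
            nd = d + dose
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # Walk back from the goal to recover the route.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return dist[goal], path[::-1]

# Two routes between rooms: the direct corridor A-B passes a hot spot,
# so the detour through C accumulates less dose (hypothetical numbers).
rooms = {"A": [("B", 5.0), ("C", 1.0)], "B": [("D", 1.0)], "C": [("D", 2.0)]}
dose, route = min_dose_path(rooms, "A", "D")
```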

  17. New weighting methods for phylogenetic tree reconstruction using multiple loci.

    Science.gov (United States)

    Misawa, Kazuharu; Tajima, Fumio

    2012-08-01

    Efficient determination of evolutionary distances is important for the correct reconstruction of phylogenetic trees. The performance of the pooled distance required for reconstructing a phylogenetic tree can be improved by applying large weights to appropriate distances for reconstructing phylogenetic trees and small weights to inappropriate distances. We developed two weighting methods, the modified Tajima-Takezaki method and the modified least-squares method, for reconstructing phylogenetic trees from multiple loci. By computer simulations, we found that both of the new methods were more efficient in reconstructing correct topologies than the no-weight method. Hence, we reconstructed hominoid phylogenetic trees from mitochondrial DNA using our new methods, and found that the levels of bootstrap support were significantly increased by the modified Tajima-Takezaki and by the modified least-squares method.
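The general idea of down-weighting noisy loci can be sketched with plain inverse-variance weighting; the actual weights of the modified Tajima-Takezaki and modified least-squares methods are not reproduced here, and the numbers are illustrative:

```python
def pooled_distance(distances, variances):
    """Pool per-locus distances with weights inversely proportional to
    their variances, so noisy loci contribute less (an inverse-variance
    stand-in for the weighting schemes described above)."""
    weights = [1.0 / v for v in variances]
    return sum(w * d for w, d in zip(weights, distances)) / sum(weights)

# Three loci: the third has a much larger variance and is down-weighted,
# pulling the pooled distance toward the two reliable loci.
d_weighted = pooled_distance([0.10, 0.12, 0.30], [0.01, 0.01, 0.20])
d_unweighted = sum([0.10, 0.12, 0.30]) / 3
```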

  18. Comparison of Methods to Trace Multiple Subskills: Is LR-DBN Best?

    Science.gov (United States)

    Xu, Yanbo; Mostow, Jack

    2012-01-01

    A long-standing challenge for knowledge tracing is how to update estimates of multiple subskills that underlie a single observable step. We characterize approaches to this problem by how they model knowledge tracing, fit its parameters, predict performance, and update subskill estimates. Previous methods allocated blame or credit among subskills…

  19. Multiple centroid method to evaluate the adaptability of alfalfa genotypes

    Directory of Open Access Journals (Sweden)

    Moysés Nascimento

    2015-02-01

    Full Text Available This study aimed to evaluate the efficiency of multiple centroids for studying the adaptability of alfalfa genotypes (Medicago sativa L.). In this method, the genotypes are compared with ideotypes defined by the bi-segmented regression model, according to the researcher's interest. Thus, genotype classification is carried out as determined by the objective of the researcher and the proposed recommendation strategy. Despite the great potential of the method, it needs to be evaluated in a biological context (with real data). We therefore used data on the dry matter production of 92 alfalfa cultivars, with 20 cuttings, from a randomized block experiment with two replications carried out from November 2004 to June 2006. The multiple centroid method proved efficient for classifying alfalfa genotypes. Moreover, it gave no ambiguous indications, provided that the ideotypes were defined according to the researcher's interest, facilitating data interpretation.

  20. Unplanned Complex Suicide-A Consideration of Multiple Methods.

    Science.gov (United States)

    Ateriya, Navneet; Kanchan, Tanuj; Shekhawat, Raghvendra Singh; Setia, Puneet; Saraf, Ashish

    2018-05-01

    Detailed death investigations are mandatory to find out the exact cause and manner in non-natural deaths. In this context, the use of multiple methods in suicide poses a challenge for investigators, especially when the choice of methods is unplanned. There is an increased likelihood that suspicions of homicide are raised in cases of unplanned complex suicides. A case of complex suicide is reported in which the victim resorted to multiple methods to end his life, in what appeared, based on the death scene investigation, to be an unplanned variant. A meticulous crime scene examination, interviews of the victim's relatives and other witnesses, and a thorough autopsy are warranted to establish the cause and manner of death in all such cases. © 2017 American Academy of Forensic Sciences.

  1. Characterizing lentic freshwater fish assemblages using multiple sampling methods

    Science.gov (United States)

    Fischer, Jesse R.; Quist, Michael C.

    2014-01-01

    Characterizing fish assemblages in lentic ecosystems is difficult, and multiple sampling methods are almost always necessary to gain reliable estimates of indices such as species richness. However, most research focused on lentic fish sampling methodology has targeted recreationally important species, and little to no information is available regarding the influence of multiple methods and timing (i.e., temporal variation) on characterizing entire fish assemblages. Therefore, six lakes and impoundments (48–1,557 ha surface area) were sampled seasonally with seven gear types to evaluate the combined influence of sampling methods and timing on the number of species and individuals sampled. Probabilities of detection for species indicated strong selectivities and seasonal trends that provide guidance on the optimal seasons to use gears when targeting multiple species. The evaluation of species richness and the number of individuals sampled using multiple gear combinations demonstrated no appreciable benefit over relatively few gears (e.g., up to four) used in optimal seasons. Specifically, over 90% of the species encountered with all gear-type and season combinations (N = 19) from the six lakes and reservoirs were sampled with nighttime boat electrofishing in the fall and benthic trawling, modified-fyke, and mini-fyke netting during the summer. Our results indicated that the characterization of lentic fish assemblages was highly influenced by the selection of sampling gears and seasons, but did not appear to be influenced by waterbody type (i.e., natural lake or impoundment). The standardization of data collected with multiple methods and seasons to account for bias is imperative to the monitoring of lentic ecosystems and will provide researchers with increased reliability in their interpretations and decisions made using information on lentic fish assemblages.

  2. Electromagnetic imaging of multiple-scattering small objects: non-iterative analytical approach

    International Nuclear Information System (INIS)

    Chen, X; Zhong, Y

    2008-01-01

    The multiple signal classification (MUSIC) imaging method and the least-squares method are applied to solve the electromagnetic inverse scattering problem of determining the locations and polarization tensors of a collection of small objects embedded in a known background medium. Based on the analysis of induced electric and magnetic dipoles, the proposed MUSIC method is able to deal with some special scenarios, arising from the shapes and materials of the objects, to which the standard MUSIC method does not apply. After the locations of the objects are obtained, the nonlinear inverse problem of determining the polarization tensors of the objects, accounting for multiple scattering between them, is solved by a non-iterative analytical approach based on the least-squares method.

  3. Multiple Scattering Approach to Continuum State with Generally Shaped Potential

    International Nuclear Information System (INIS)

    Hatada, Keisuke; Hayakawa, Kuniko; Tenore, Antonio; Benfatto, Maurizio; Natoli, Calogero

    2007-01-01

    We present a new scheme for solving the scattering problem for an arbitrarily shaped potential cell that avoids the well-known convergence problems in the angular momentum expansion of the cell shape function. Tests of the method against analytically soluble separable model potentials, with and without shape truncation, have been performed successfully. By a judicious choice of the shape of the cells partitioning the whole molecular space, and by use of empty cells where necessary, we set up a multiple scattering scheme that leads to a straightforward generalization of the same equations in the muffin-tin approximation. For example, l_max in the angular momentum expansion can still be chosen according to the rule l_max ∼ kR, where R is the radius of the bounding sphere of the cell, and all the matrices appearing in the theory are square matrices.

  4. Geometric calibration method for multiple head cone beam SPECT systems

    International Nuclear Information System (INIS)

    Rizo, Ph.; Grangeat, P.; Guillemaud, R.; Sauze, R.

    1993-01-01

    A method is presented for performing geometric calibration on Single Photon Emission Tomography (SPECT) cone beam systems with multiple cone beam collimators, each having its own orientation parameters. This calibration method relies on the fact that, in tomography, for each head, the relative position of the rotation axis and of the collimator does not change during the acquisition. In order to ensure the method stability, the parameters to be estimated in intrinsic parameters and extrinsic parameters are separated. The intrinsic parameters describe the acquisition geometry and the extrinsic parameters position of the detection system with respect to the rotation axis. (authors) 3 refs

  5. Multiple-scale approach for the expansion scaling of superfluid quantum gases

    International Nuclear Information System (INIS)

    Egusquiza, I. L.; Valle Basagoiti, M. A.; Modugno, M.

    2011-01-01

    We present a general method, based on a multiple-scale approach, for deriving the perturbative solutions of the scaling equations governing the expansion of superfluid ultracold quantum gases released from elongated harmonic traps. We discuss how to treat the secular terms appearing in the usual naive expansion in the trap asymmetry parameter ε and calculate the next-to-leading correction for the asymptotic aspect ratio, with significant improvement over the previous proposals.
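The secular-term issue and its multiple-scale cure can be illustrated on the textbook weakly damped oscillator (a standard example, not the trap-expansion equations themselves). Introduce fast and slow times $T_0 = t$ and $T_1 = \varepsilon t$:

```latex
\ddot{x} + 2\varepsilon\dot{x} + x = 0,
\qquad x = x_0(T_0,T_1) + \varepsilon\, x_1(T_0,T_1) + \dots
```

At $O(1)$, $\partial_{T_0}^{2}x_0 + x_0 = 0$ gives $x_0 = A(T_1)\,e^{iT_0} + \mathrm{c.c.}$; at $O(\varepsilon)$,

```latex
\partial_{T_0}^{2}x_1 + x_1
  = -2\,\partial_{T_0}\partial_{T_1}x_0 - 2\,\partial_{T_0}x_0
  = -2i\,\bigl(A'(T_1) + A(T_1)\bigr)\,e^{iT_0} + \mathrm{c.c.}
```

The resonant right-hand side is exactly what produces the secular $t\,e^{iT_0}$ growth in a naive expansion; requiring it to vanish yields the amplitude equation $A' = -A$, hence $x \approx a\,e^{-\varepsilon t}\cos(t+\phi)$, uniformly valid out to times $t \sim 1/\varepsilon$. The paper applies the same bookkeeping to the scaling equations, with the trap asymmetry $\varepsilon$ in the role of the small parameter.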

  6. An Automated Approach to Reasoning Under Multiple Perspectives

    Science.gov (United States)

    deBessonet, Cary

    2004-01-01

    This is the final report, with emphasis on research during the last term. The context for the research has been the development of an automated reasoning technology for use in SMS (Symbolic Manipulation System), a system used to build and query knowledge bases (KBs) using a special knowledge representation language, SL (Symbolic Language). SMS interprets assertive SL input and enters the results as components of its universe. The system operates in two basic modes: 1) constructive mode (for building KBs); and 2) query/search mode (for querying KBs). Query satisfaction consists of matching query components with KB components. The system allows "penumbral matches," that is, matches that do not exactly meet the specifications of the query but are deemed relevant to the conversational context. If the user wants to know whether SMS has information that holds, say, for "any chow," the scope of relevancy might be set so that the system would respond based on a finding that it has information that holds for "most dogs," although this is not exactly what was called for by the query. The response would be qualified accordingly, as would normally be the case in ordinary human conversation. The general goal of the research was to develop an approach by which assertive content could be interpreted from multiple perspectives so that reasoning operations could be successfully conducted over the results. The interpretation of an SL statement such as "{person believes [captain (asserted (perhaps)) (astronaut saw (comet (bright)))]}," which in English would amount to asserting something to the effect that "Some person believes that a captain perhaps asserted that an astronaut saw a bright comet," would require the recognition of multiple perspectives, including some that are: a) epistemically-based (focusing on "believes"); b) assertion-based (focusing on "asserted"); c) perception-based (focusing on "saw"); d) adjectivally-based (focusing on "bright"); and e) modally

  7. A crack growth evaluation method for interacting multiple cracks

    International Nuclear Information System (INIS)

    Kamaya, Masayuki

    2003-01-01

    When stress corrosion cracking or corrosion fatigue occurs, multiple cracks are frequently initiated in the same area. According to section XI of the ASME Boiler and Pressure Vessel Code, multiple cracks are considered as a single combined crack in crack growth analysis, if the specified conditions are satisfied. In crack growth processes, however, no prescription for the interference between multiple cracks is given in this code. The JSME Post-Construction Code, issued in May 2000, prescribes the conditions of crack coalescence in the crack growth process. This study aimed to extend this prescription to more general cases. A simulation model was applied, to simulate the crack growth process, taking into account the interference between two cracks. This model made it possible to analyze multiple crack growth behaviors for many cases (e.g. different relative position and length) that could not be studied by experiment only. Based on these analyses, a new crack growth analysis method was suggested for taking into account the interference between multiple cracks. (author)

  8. Creative Approaches to Teaching Graduate Research Methods Workshops

    OpenAIRE

    Peter Reilly

    2017-01-01

    Engagement and deeper learning were enhanced by several innovative teaching strategies delivered in Research Methods workshops to graduate business students, focusing primarily on students adopting a creative approach to formulating a valid research question for undertaking a dissertation successfully. These techniques are applicable to most subject domains and help ensure student engagement, addressing the various multiple intelligences and learning styles existing within groups while...

  9. Galerkin projection methods for solving multiple related linear systems

    Energy Technology Data Exchange (ETDEWEB)

    Chan, T.F.; Ng, M.; Wan, W.L.

    1996-12-31

    We consider using Galerkin projection methods for solving multiple related linear systems A^(i) x^(i) = b^(i) for 1 ≤ i ≤ s, where the A^(i) and b^(i) are different in general. We start with the special case where A^(i) = A and A is symmetric positive definite. The method generates a Krylov subspace from a set of direction vectors obtained by solving one of the systems, called the seed system, by the CG method, and then projects the residuals of the other systems orthogonally onto the generated Krylov subspace to get approximate solutions. The whole process is repeated with another unsolved system as a seed until all the systems are solved. We observe in practice a super-convergence behaviour of the CG process of the seed system when compared with the usual CG process. We also observe that only a small number of restarts is required to solve all the systems if the right-hand sides are close to each other. These two features together make the method particularly effective. In this talk, we give theoretical proofs to justify these observations. Furthermore, we combine the advantages of this method and the block CG method and propose a block extension of this single-seed method. The above procedure can also be modified for solving multiple linear systems A^(i) x^(i) = b^(i), where the A^(i) are now different, and we can extend the previous analytical results to this more general case. Applications of this method to multiple related linear systems arising from image restoration and recursive least-squares computations are considered as examples.
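A stripped-down sketch of the projection step: build an orthonormalized Krylov basis from one seed system, then solve a nearby system by Galerkin projection onto that subspace. This uses a plain Gram-Schmidt Krylov basis rather than the CG direction vectors described above, and the matrix and right-hand sides are invented for illustration:

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def krylov_basis(A, b, m):
    """Orthonormal basis of span{b, Ab, ..., A^(m-1) b} via Gram-Schmidt."""
    V = []
    v = b
    for _ in range(m):
        for u in V:
            c = dot(u, v)
            v = [vi - c * ui for vi, ui in zip(v, u)]
        n = dot(v, v) ** 0.5
        if n < 1e-12:
            break  # Krylov space exhausted
        v = [vi / n for vi in v]
        V.append(v)
        v = matvec(A, v)
    return V

def galerkin_solve(A, V, b):
    """Project A x = b onto span(V): solve (V^T A V) y = V^T b, x = V y."""
    k = len(V)
    H = [[dot(V[i], matvec(A, V[j])) for j in range(k)] for i in range(k)]
    g = [dot(V[i], b) for i in range(k)]
    # Tiny Gaussian elimination (no pivoting; fine for the SPD projected matrix).
    for i in range(k):
        for j in range(i + 1, k):
            f = H[j][i] / H[i][i]
            H[j] = [hj - f * hi for hj, hi in zip(H[j], H[i])]
            g[j] -= f * g[i]
    y = [0.0] * k
    for i in range(k - 1, -1, -1):
        y[i] = (g[i] - sum(H[i][j] * y[j] for j in range(i + 1, k))) / H[i][i]
    return [sum(V[j][i] * y[j] for j in range(k)) for i in range(len(b))]

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]  # SPD example matrix
b_seed = [1.0, 0.0, 0.0]
V = krylov_basis(A, b_seed, 3)   # seed subspace (full space in this tiny case)
b2 = [0.9, 0.1, 0.0]             # a nearby right-hand side
x2 = galerkin_solve(A, V, b2)
```

In practice the subspace dimension is far smaller than the problem size; when the projected residual is still large, the method restarts with the unsolved system as the new seed.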

  10. Maintenance Approaches for Different Production Methods

    Directory of Open Access Journals (Sweden)

    Mungani, Dzivhuluwani Simon

    2013-11-01

    Full Text Available Various production methods are used in industry to manufacture or produce a variety of products needed by industry and consumers. The nature of a product determines which production method is most suitable or cost-effective. A continuous process is typically used to produce large volumes of liquids or gases. Batch processing is often used for small volumes, such as pharmaceutical products. This paper discusses a research project to determine the relationship between maintenance approaches and production methods. A survey was done to determine to what extent three maintenance approaches, namely reliability-centred maintenance (RCM), total productive maintenance (TPM), and business-centred maintenance (BCM), are used for three different production methods (a continuous process, a batch process, and a production line method).

  11. Socratic Method as an Approach to Teaching

    Directory of Open Access Journals (Sweden)

    Haris Delić

    2016-10-01

    Full Text Available In this article we present a theoretical view of Socrates' life and his method of teaching. After the biographical facts of Socrates and his life, we explain the method he used in teaching and its two main types, the Classic and the Modern Socratic Method. Since the core of Socrates' approach is dialogue as a form of teaching, we explain how exactly a Socratic dialogue proceeds. Besides that, we present two examples of dialogues that Socrates led, Meno and Gorgias. The Socratic circle, a form of seminar crucial for group discussion of a given theme, is also presented in this paper. At the end, some disadvantages of the method are explained. With this paper, the reader can get a conception of this approach to teaching and can use Socrates as an example of how a successful teacher leads his students towards the goal.

  12. Multiple Family Group Therapy: An Interpersonal/Postmodern Approach.

    Science.gov (United States)

    Thorngren, Jill M.; Kleist, David M.

    2002-01-01

    Multiple Family Group Therapy has been identified as a viable treatment model for a variety of client populations. A combination of family systems theories and therapeutic group factors provide the opportunity to explore multiple levels of intrapersonal and interpersonal relationships between families. This article depicts a Multiple Family Group…

  13. Remodeling Functional Connectivity in Multiple Sclerosis: A Challenging Therapeutic Approach.

    Science.gov (United States)

    Stampanoni Bassi, Mario; Gilio, Luana; Buttari, Fabio; Maffei, Pierpaolo; Marfia, Girolama A; Restivo, Domenico A; Centonze, Diego; Iezzi, Ennio

    2017-01-01

    Neurons in the central nervous system are organized in functional units interconnected to form complex networks. Acute and chronic brain damage disrupts brain connectivity, producing neurological signs and/or symptoms. In several neurological diseases, particularly in Multiple Sclerosis (MS), structural imaging studies cannot always demonstrate a clear association between lesion site and clinical disability, giving rise to the "clinico-radiological paradox." The discrepancy between structural damage and disability can be explained by a complex network perspective. Both brain network architecture and synaptic plasticity may play important roles in modulating brain network efficiency after brain damage. In particular, long-term potentiation (LTP) may occur in surviving neurons to compensate for network disconnection. In MS, inflammatory cytokines dramatically interfere with synaptic transmission and plasticity. Importantly, in addition to acute and chronic structural damage, inflammation could contribute to reducing brain network efficiency in MS, leading to worse clinical recovery after a relapse and worse disease progression. This evidence suggests that removing inflammation should represent the main therapeutic target in MS; moreover, as synaptic plasticity is particularly altered by inflammation, specific strategies aimed at promoting LTP mechanisms could be effective for enhancing clinical recovery. Modulation of plasticity with different non-invasive brain stimulation (NIBS) techniques has been used to promote recovery of MS symptoms. Better knowledge of the features inducing brain disconnection in MS is crucial to design specific strategies to promote recovery and use NIBS with an increasingly tailored approach.

  14. Remodeling Functional Connectivity in Multiple Sclerosis: A Challenging Therapeutic Approach

    Directory of Open Access Journals (Sweden)

    Mario Stampanoni Bassi

    2017-12-01

    Full Text Available Neurons in the central nervous system are organized in functional units interconnected to form complex networks. Acute and chronic brain damage disrupts brain connectivity, producing neurological signs and/or symptoms. In several neurological diseases, particularly in Multiple Sclerosis (MS), structural imaging studies cannot always demonstrate a clear association between lesion site and clinical disability, giving rise to the “clinico-radiological paradox.” The discrepancy between structural damage and disability can be explained by a complex network perspective. Both brain network architecture and synaptic plasticity may play important roles in modulating brain network efficiency after brain damage. In particular, long-term potentiation (LTP) may occur in surviving neurons to compensate for network disconnection. In MS, inflammatory cytokines dramatically interfere with synaptic transmission and plasticity. Importantly, in addition to acute and chronic structural damage, inflammation could contribute to reducing brain network efficiency in MS, leading to worse clinical recovery after a relapse and worse disease progression. This evidence suggests that removing inflammation should represent the main therapeutic target in MS; moreover, as synaptic plasticity is particularly altered by inflammation, specific strategies aimed at promoting LTP mechanisms could be effective for enhancing clinical recovery. Modulation of plasticity with different non-invasive brain stimulation (NIBS) techniques has been used to promote recovery of MS symptoms. Better knowledge of the features inducing brain disconnection in MS is crucial to design specific strategies to promote recovery and use NIBS with an increasingly tailored approach.

  15. [Comorbidity in multiple sclerosis and its therapeutic approach].

    Science.gov (United States)

    Estruch, Bonaventura Casanova

    2014-12-01

    Multiple sclerosis (MS) is a long-term chronic disease, in which intercurrent processes develop three times more frequently in affected individuals than in persons without MS. Knowledge of the comorbidity of MS, its definition and measurement (Charlson index) improves patient management. Acting on comorbid conditions delays the progression of disability, which is intimately linked to the number of concurrent processes and with health states and habits. Moreover, the presence of comorbidities delays the diagnosis of MS, which in turn delays the start of treatment. The main comorbidity found in MS includes other autoimmune diseases (thyroiditis, systemic lupus erythematosus, or pemphigus) but can also include general diseases, such as asthma or osteomuscular alterations, and, in particular, psychiatric disturbances. All these alterations should be evaluated with multidimensional scales (Disability Expectancy Table, DET), which allow more accurate determination of the patient's real clinical course and quality of life. These scales also allow identification of how MS, concurrent and intercurrent processes occurring during the clinical course, and the treatment provided affect patients with MS. An overall approach to patients' health status helps to improve quality of life. Copyright © 2014 Elsevier España, S.L.U. All rights reserved.

  16. Multiple approaches to microbial source tracking in tropical northern Australia

    KAUST Repository

    Neave, Matthew

    2014-09-16

    Microbial source tracking is an area of research in which multiple approaches are used to identify the sources of elevated bacterial concentrations in recreational lakes and beaches. At our study location in Darwin, northern Australia, water quality in the harbor is generally good; however, dry-season beach closures due to elevated Escherichia coli and enterococci counts are a cause for concern. The sources of these high bacteria counts are currently unknown. To address this, we sampled sewage outfalls, other potential inputs, such as urban rivers and drains, and surrounding beaches, and used genetic fingerprints from E. coli and enterococci communities, fecal markers and 454 pyrosequencing to track contamination sources. A sewage effluent outfall (Larrakeyah discharge) was a source of bacteria, including fecal bacteria that impacted nearby beaches. Two other treated effluent discharges did not appear to influence sites other than those directly adjacent. Several beaches contained fecal indicator bacteria that likely originated from urban rivers and creeks within the catchment. Generally, connectivity between the sites was observed within distinct geographical locations and it appeared that most of the bacterial contamination on Darwin beaches was confined to local sources.

  17. A novel method for producing multiple ionization of noble gas

    International Nuclear Information System (INIS)

    Wang Li; Li Haiyang; Dai Dongxu; Bai Jiling; Lu Richang

    1997-01-01

    We introduce a novel method for producing multiple ionization of He, Ne, Ar, Kr and Xe. A nanosecond pulsed electron beam with a large number density, whose energy could be controlled, was produced by directing a focused 308 nm laser beam onto a stainless steel grid. Using this electron beam in a time-of-flight mass spectrometer, we obtained multiple ionization of the noble gases He, Ne, Ar and Xe. Time-of-flight mass spectra of these ions are presented. These ions are assumed to be produced by stepwise ionization of the gas atoms under electron beam impact. This method may be used as an ideal soft ionizing point ion source in time-of-flight mass spectrometry.

  18. A level set method for multiple sclerosis lesion segmentation.

    Science.gov (United States)

    Zhao, Yue; Guo, Shuxu; Luo, Min; Shi, Xue; Bilello, Michel; Zhang, Shaoxiang; Li, Chunming

    2018-06-01

    In this paper, we present a level set method for multiple sclerosis (MS) lesion segmentation from FLAIR images in the presence of intensity inhomogeneities. We use a three-phase level set formulation of segmentation and bias field estimation to partition FLAIR images into MS lesions, normal tissue (including GM and WM), CSF, and background. To save computational load, we derive a two-phase formulation from the original multi-phase level set formulation to segment the MS lesions and normal tissue regions. The derived method inherits the desirable ability to precisely locate object boundaries of the original level set method, which simultaneously performs segmentation and estimation of the bias field to deal with intensity inhomogeneity. Experimental results demonstrate the advantages of our method over other state-of-the-art methods in terms of segmentation accuracy. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Measuring multiple residual-stress components using the contour method and multiple cuts

    Energy Technology Data Exchange (ETDEWEB)

    Prime, Michael B [Los Alamos National Laboratory; Swenson, Hunter [Los Alamos National Laboratory; Pagliaro, Pierluigi [U. PALERMO; Zuccarello, Bernardo [U. PALERMO

    2009-01-01

    The conventional contour method determines one component of stress over the cross section of a part. The part is cut into two, the contour of the exposed surface is measured, and Bueckner's superposition principle is analytically applied to calculate stresses. In this paper, the contour method is extended to the measurement of multiple stress components by making multiple cuts with subsequent applications of superposition. The theory and limitations are described. The theory is experimentally tested on a 316L stainless steel disk with residual stresses induced by plastically indenting the central portion of the disk. The stress results are validated against independent measurements using neutron diffraction. The theory has implications beyond just multiple cuts. The contour method measurements and calculations for the first cut reveal how the residual stresses have changed throughout the part. Subsequent measurements of partially relaxed stresses by other techniques, such as laboratory x-rays, hole drilling, or neutron or synchrotron diffraction, can be superimposed back to the original state of the body.

  20. Variational Approaches for the Existence of Multiple Periodic Solutions of Differential Delay Equations

    Directory of Open Access Journals (Sweden)

    Rong Cheng

    2010-01-01

    Full Text Available The existence of multiple periodic solutions of the differential delay equation x′(t) = −f(x(t−r)) is established by applying variational approaches directly, where x ∈ ℝ, f ∈ C(ℝ, ℝ), and r > 0 is a given constant. This means that we do not need to use Kaplan and Yorke's reduction technique to reduce the existence problem of the above equation to an existence problem for a related coupled system. Such a reduction method, introduced first by Kaplan and Yorke in 1974, is often employed in previous papers to study the existence of periodic solutions for the above equation and similar ones by variational approaches.

  1. A Memory/Immunology-Based Control Approach with Applications to Multiple Spacecraft Formation Flying

    Directory of Open Access Journals (Sweden)

    Liguo Weng

    2013-01-01

    Full Text Available This paper addresses the problem of formation control for multiple spacecraft in a Planetary Orbital Environment (POE). Due to the presence of diverse interferences and uncertainties in outer space, such as the changing spacecraft mass, unavailable space parameters, and varying gravity forces, traditional control methods encounter great difficulties in this area. A new control approach inspired by human memory and the immune system is proposed, and this approach is shown to be capable of learning from past control experience and current behavior to improve its performance. It demands much less system dynamic information compared with traditional controls. Both theoretic analysis and computer simulation verify its effectiveness.

  2. A data-driven multiplicative fault diagnosis approach for automation processes.

    Science.gov (United States)

    Hao, Haiyang; Zhang, Kai; Ding, Steven X; Chen, Zhiwen; Lei, Yaguo

    2014-09-01

    This paper presents a new data-driven method for diagnosing multiplicative key performance degradation in automation processes. Different from the well-established additive fault diagnosis approaches, the proposed method aims at identifying those low-level components which increase the variability of process variables and cause performance degradation. Based on process data, features of multiplicative fault are extracted. To identify the root cause, the impact of fault on each process variable is evaluated in the sense of contribution to performance degradation. Then, a numerical example is used to illustrate the functionalities of the method and Monte-Carlo simulation is performed to demonstrate the effectiveness from the statistical viewpoint. Finally, to show the practical applicability, a case study on the Tennessee Eastman process is presented. Copyright © 2013. Published by Elsevier Ltd.
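    As a toy illustration of scoring each variable's contribution to increased variability under a multiplicative fault: the simple variance-ratio diagnostic below is our own simplification for illustration, not the statistic used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
ref = rng.standard_normal((2000, 4))     # fault-free reference data
fault = rng.standard_normal((2000, 4))   # data under a multiplicative fault
fault[:, 2] *= 1.8                       # fault inflates the spread of variable 2

# Contribution of each variable: ratio of its variance under fault to its
# reference variance; the largest ratio points to the root-cause variable.
ratio = fault.var(axis=0) / ref.var(axis=0)
root_cause = int(np.argmax(ratio))       # -> 2
```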

  3. Measurement of subcritical multiplication by the interval distribution method

    International Nuclear Information System (INIS)

    Nelson, G.W.

    1985-01-01

    The prompt decay constant or the subcritical neutron multiplication may be determined by measuring the distribution of the time intervals between successive neutron counts. The distribution data is analyzed by least-squares fitting to a theoretical distribution function derived from a point reactor probability model. Published results of measurements with one- and two-detector systems are discussed. The nearer the system is to delayed critical, the shorter the data collection time and the smaller the statistical errors. Several of the measurements indicate that a shorter data collection time and higher accuracy are possible with the interval distribution method than with the Feynman variance method.
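    The essence of the interval method can be illustrated for the simplest case of a purely random (Poisson) source, where successive count intervals are exponentially distributed; the real point-reactor model adds correlated fission-chain terms on top of this. A sketch under that simplifying assumption:

```python
import numpy as np

# Assumption: a pure Poisson counting process, so the interval density is
# rate * exp(-rate * t); least-squares fitting of the logged histogram
# recovers the rate, analogous to fitting the prompt decay constant.
rng = np.random.default_rng(1)
rate = 5.0                                   # true count rate (1/s)
intervals = rng.exponential(1.0 / rate, 200_000)

# Histogram the intervals, then fit log(density) = const - rate * t.
dens, edges = np.histogram(intervals, bins=60, range=(0.0, 1.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = dens > 0
slope, _ = np.polyfit(centers[mask], np.log(dens[mask]), 1)
rate_est = -slope                            # least-squares estimate of the rate
```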

  4. Simple and effective method of determining multiplicity distribution law of neutrons emitted by fissionable material with significant self -multiplication effect

    International Nuclear Information System (INIS)

    Yanjushkin, V.A.

    1991-01-01

    In developing new methods for the non-destructive determination of the full plutonium mass in nuclear materials and products involved in the uranium-plutonium fuel cycle from their intrinsic neutron radiation, it may be useful to know not only separate moments but the full multiplicity distribution law of the neutrons leaving the material surface. Its parameters are, firstly, the unconditional multiplicity distribution laws of neutrons formed in spontaneous and induced fission of the corresponding nuclei of the given fissionable material, together with the unconditional multiplicity distribution law of neutrons produced by (α,n) reactions on light nuclei of the elements composing the material's chemical structure; and secondly, the probability of induced fission of the material's nuclei by an incident neutron of any origin formed during previous fissions or (α,n) reactions. An attempt to develop such a theory has been undertaken. Here the author proposes his own approach to this problem. The main advantage of this approach, in our view, is its mathematical simplicity and easy implementation on a computer. In principle, the given model guarantees good accuracy at any real value of the induced fission probability, without limitations related to the physico-chemical composition of the nuclear material

  5. Fault diagnosis of sensor networked structures with multiple faults using a virtual beam based approach

    Science.gov (United States)

    Wang, H.; Jing, X. J.

    2017-07-01

    This paper presents a virtual beam based approach suitable for conducting diagnosis of multiple faults in complex structures with limited prior knowledge of the faults involved. The "virtual beam", a recently-proposed concept for fault detection in complex structures, is applied, which consists of a chain of sensors representing a vibration energy transmission path embedded in the complex structure. Statistical tests and adaptive threshold are particularly adopted for fault detection due to limited prior knowledge of normal operational conditions and fault conditions. To isolate the multiple faults within a specific structure or substructure of a more complex one, a 'biased running' strategy is developed and embedded within the bacterial-based optimization method to construct effective virtual beams and thus to improve the accuracy of localization. The proposed method is easy and efficient to implement for multiple fault localization with limited prior knowledge of normal conditions and faults. With extensive experimental results, it is validated that the proposed method can localize both single fault and multiple faults more effectively than the classical trust index subtract on negative add on positive (TI-SNAP) method.

  6. Integrated Markov-neural reliability computation method: A case for multiple automated guided vehicle system

    International Nuclear Information System (INIS)

    Fazlollahtabar, Hamed; Saidi-Mehrabad, Mohammad; Balakrishnan, Jaydeep

    2015-01-01

    This paper proposes an integrated Markovian and back-propagation neural network approach to compute the reliability of a system. Since the states in which failures occur are significant elements in accurate reliability computation, a Markovian reliability assessment method is designed. Due to the drawbacks shown by the Markovian model for steady-state reliability computations, and by the neural network for the initial training pattern, an integrated approach called Markov-neural is developed and evaluated. To show the efficiency of the proposed approach, comparative analyses are performed. Also, for managerial implication purposes, an application case for multiple automated guided vehicles (AGVs) in manufacturing networks is conducted. - Highlights: • Integrated Markovian and back-propagation neural network approach to compute reliability. • Markovian based reliability assessment method. • Managerial implication is shown in an application case for multiple automated guided vehicles (AGVs) in manufacturing networks
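    The Markovian half of such an approach can be sketched for a single AGV as a discrete-time chain with an absorbing failed state. The states and transition probabilities below are illustrative assumptions, not values from the paper, and the neural-network correction is omitted.

```python
import numpy as np

# States: 0 = operational, 1 = degraded, 2 = failed (absorbing).
# Each row gives the one-step transition probabilities out of a state.
P = np.array([
    [0.95, 0.04, 0.01],   # operational -> ...
    [0.10, 0.80, 0.10],   # degraded    -> ...
    [0.00, 0.00, 1.00],   # failed (absorbing)
])

def reliability(P, steps, start=0):
    """Probability the chain has not been absorbed in 'failed' after 'steps'."""
    dist = np.zeros(P.shape[0])
    dist[start] = 1.0
    for _ in range(steps):
        dist = dist @ P
    return 1.0 - dist[-1]

r10 = reliability(P, 10)
```

    Reliability is monotonically non-increasing in the number of steps, since the failed state is absorbing.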

  7. Permutation statistical methods an integrated approach

    CERN Document Server

    Berry, Kenneth J; Johnston, Janis E

    2016-01-01

    This research monograph provides a synthesis of a number of statistical tests and measures, which, at first consideration, appear disjoint and unrelated. Numerous comparisons of permutation and classical statistical methods are presented, and the two methods are compared via probability values and, where appropriate, measures of effect size. Permutation statistical methods, compared to classical statistical methods, do not rely on theoretical distributions, avoid the usual assumptions of normality and homogeneity of variance, and depend only on the data at hand. This text takes a unique approach to explaining statistics by integrating a large variety of statistical methods, and establishing the rigor of a topic that to many may seem to be a nascent field in statistics. This topic is new in that it took modern computing power to make permutation methods available to people working in the mainstream of research. This research monograph addresses a statistically-informed audience, and can also easily serve as a ...

  8. A global calibration method for multiple vision sensors based on multiple targets

    International Nuclear Information System (INIS)

    Liu, Zhen; Zhang, Guangjun; Wei, Zhenzhong; Sun, Junhua

    2011-01-01

    The global calibration of multiple vision sensors (MVS) has been widely studied in the last two decades. In this paper, we present a global calibration method for MVS with non-overlapping fields of view (FOVs) using multiple targets (MT). MT is constructed by fixing several targets, called sub-targets, together. The mutual coordinate transformations between sub-targets need not be known. The main procedures of the proposed method are as follows: one vision sensor is selected from MVS to establish the global coordinate frame (GCF). MT is placed in front of the vision sensors several (at least four) times. Using the constraint that the relative positions of all sub-targets are invariant, the transformation matrix from the coordinate frame of each vision sensor to GCF can be solved. Both synthetic and real experiments are carried out and good results are obtained. The proposed method has been applied to several real measurement systems and shown to be both flexible and accurate. It can serve as an attractive alternative to existing global calibration methods
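    The heart of such a calibration is chaining rigid transforms: if sensor 1 defines the GCF, each other sensor's pose relative to the GCF follows from its view of its own sub-target and the rigid link between sub-targets (which the method estimates from the invariance constraint; here it is assumed known for brevity). A minimal sketch with hypothetical poses:

```python
import numpy as np

def rt_to_h(R, t):
    """Pack a rotation matrix R and translation t into a 4x4 homogeneous matrix."""
    H = np.eye(4)
    H[:3, :3], H[:3, 3] = R, t
    return H

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Hypothetical poses: sub-target i as seen by sensor i, plus the rigid
# link mapping sub-target 2 coordinates into sub-target 1's frame.
T1_target = rt_to_h(rot_z(0.3), np.array([0.1, 0.2, 1.0]))
T2_target = rt_to_h(rot_z(-0.5), np.array([-0.2, 0.0, 0.8]))
T_21 = rt_to_h(rot_z(0.1), np.array([0.5, 0.0, 0.0]))

# Sensor 2 -> GCF: sensor 2 -> sub-target 2 -> sub-target 1 -> sensor 1 (GCF).
T_sensor2_to_gcf = T1_target @ T_21 @ np.linalg.inv(T2_target)
```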

  9. Field evaluation of personal sampling methods for multiple bioaerosols.

    Science.gov (United States)

    Wang, Chi-Hsun; Chen, Bean T; Han, Bor-Cheng; Liu, Andrew Chi-Yeu; Hung, Po-Chen; Chen, Chih-Yong; Chao, Hsing Jasmine

    2015-01-01

    Ambient bioaerosols are ubiquitous in the daily environment and can affect health in various ways. However, few studies have been conducted to comprehensively evaluate personal bioaerosol exposure in occupational and indoor environments because of the complex composition of bioaerosols and the lack of standardized sampling/analysis methods. We conducted a study to determine the most efficient collection/analysis method for the personal exposure assessment of multiple bioaerosols. The sampling efficiencies of three filters and four samplers were compared. According to our results, polycarbonate (PC) filters had the highest relative efficiency, particularly for bacteria. Side-by-side sampling was conducted to evaluate the three filter samplers (with PC filters) and the NIOSH Personal Bioaerosol Cyclone Sampler. According to the results, the Button Aerosol Sampler and the IOM Inhalable Dust Sampler had the highest relative efficiencies for fungi and bacteria, followed by the NIOSH sampler. Personal sampling was performed in a pig farm to assess occupational bioaerosol exposure and to evaluate the sampling/analysis methods. The Button and IOM samplers yielded a similar performance for personal bioaerosol sampling at the pig farm. However, the Button sampler is more likely to be clogged at high airborne dust concentrations because of its higher flow rate (4 L/min). Therefore, the IOM sampler is a more appropriate choice for performing personal sampling in environments with high dust levels. In summary, the Button and IOM samplers with PC filters are efficient sampling/analysis methods for the personal exposure assessment of multiple bioaerosols.

  10. Field evaluation of personal sampling methods for multiple bioaerosols.

    Directory of Open Access Journals (Sweden)

    Chi-Hsun Wang

    Full Text Available Ambient bioaerosols are ubiquitous in the daily environment and can affect health in various ways. However, few studies have been conducted to comprehensively evaluate personal bioaerosol exposure in occupational and indoor environments because of the complex composition of bioaerosols and the lack of standardized sampling/analysis methods. We conducted a study to determine the most efficient collection/analysis method for the personal exposure assessment of multiple bioaerosols. The sampling efficiencies of three filters and four samplers were compared. According to our results, polycarbonate (PC) filters had the highest relative efficiency, particularly for bacteria. Side-by-side sampling was conducted to evaluate the three filter samplers (with PC filters) and the NIOSH Personal Bioaerosol Cyclone Sampler. According to the results, the Button Aerosol Sampler and the IOM Inhalable Dust Sampler had the highest relative efficiencies for fungi and bacteria, followed by the NIOSH sampler. Personal sampling was performed in a pig farm to assess occupational bioaerosol exposure and to evaluate the sampling/analysis methods. The Button and IOM samplers yielded a similar performance for personal bioaerosol sampling at the pig farm. However, the Button sampler is more likely to be clogged at high airborne dust concentrations because of its higher flow rate (4 L/min). Therefore, the IOM sampler is a more appropriate choice for performing personal sampling in environments with high dust levels. In summary, the Button and IOM samplers with PC filters are efficient sampling/analysis methods for the personal exposure assessment of multiple bioaerosols.

  11. Comparison of multiple gene assembly methods for metabolic engineering

    Science.gov (United States)

    Chenfeng Lu; Karen Mansoorabadi; Thomas Jeffries

    2007-01-01

    A universal, rapid DNA assembly method for efficient multigene plasmid construction is important for biological research and for optimizing gene expression in industrial microbes. Three different approaches to achieve this goal were evaluated. These included creating long complementary extensions using a uracil-DNA glycosylase technique, overlap extension polymerase...

  12. Correlation expansion: a powerful alternative multiple scattering calculation method

    International Nuclear Information System (INIS)

    Zhao Haifeng; Wu Ziyu; Sebilleau, Didier

    2008-01-01

    We introduce a powerful alternative expansion method to perform multiple scattering calculations. In contrast to standard MS series expansion, where the scattering contributions are grouped in terms of scattering order and may diverge in the low energy region, this expansion, called correlation expansion, partitions the scattering process into contributions from different small atom groups and converges at all energies. It converges faster than MS series expansion when the latter is convergent. Furthermore, it takes less memory than the full MS method so it can be used in the near edge region without any divergence problem, even for large clusters. The correlation expansion framework we derive here is very general and can serve to calculate all the elements of the scattering path operator matrix. Photoelectron diffraction calculations in a cluster containing 23 atoms are presented to test the method and compare it to full MS and standard MS series expansion

  13. A linear multiple balance method for discrete ordinates neutron transport equations

    International Nuclear Information System (INIS)

    Park, Chang Je; Cho, Nam Zin

    2000-01-01

    A linear multiple balance method (LMB) is developed to provide more accurate and positive solutions for the discrete ordinates neutron transport equations. In this multiple balance approach, one mesh cell is divided into two subcells with a quadratic approximation of the angular flux distribution. Four multiple balance equations are used to relate the center angular flux with the average angular flux by Simpson's rule. From the analysis of the spatial truncation error, the accuracy of the linear multiple balance scheme is O(Δ⁴), whereas that of diamond differencing is O(Δ²). To accelerate the linear multiple balance method, we also describe a simplified additive angular dependent rebalance factor scheme which combines a modified boundary projection acceleration scheme and the angular dependent rebalance factor acceleration scheme. It is demonstrated, via Fourier analysis of a simple model problem as well as numerical calculations, that the additive angular dependent rebalance factor acceleration scheme is unconditionally stable with spectral radius < 0.2069c (c being the scattering ratio). The numerical results tested so far on slab-geometry discrete ordinates transport problems show that the linear multiple balance solution method is effective and sufficiently efficient

  14. Resampling-based methods in single and multiple testing for equality of covariance/correlation matrices.

    Science.gov (United States)

    Yang, Yang; DeGruttola, Victor

    2012-06-22

    Traditional resampling-based tests for homogeneity in covariance matrices across multiple groups resample residuals, that is, data centered by group means. These residuals do not share the same second moments when the null hypothesis is false, which makes them difficult to use in the setting of multiple testing. An alternative approach is to resample standardized residuals, data centered by group sample means and standardized by group sample covariance matrices. This approach, however, has been observed to inflate type I error when sample size is small or data are generated from heavy-tailed distributions. We propose to improve this approach by using robust estimation for the first and second moments. We discuss two statistics: the Bartlett statistic and a statistic based on eigen-decomposition of sample covariance matrices. Both statistics can be expressed in terms of standardized errors under the null hypothesis. These methods are extended to test homogeneity in correlation matrices. Using simulation studies, we demonstrate that the robust resampling approach provides comparable or superior performance, relative to traditional approaches, for single testing and reasonable performance for multiple testing. The proposed methods are applied to data collected in an HIV vaccine trial to investigate possible determinants, including vaccine status, vaccine-induced immune response level and viral genotype, of unusual correlation pattern between HIV viral load and CD4 count in newly infected patients.
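    A minimal sketch of the standardized-residual resampling idea, using the Bartlett statistic and a plain permutation scheme (the paper's robust moment estimation is omitted, and the function names are our own):

```python
import numpy as np

def bartlett_stat(groups):
    """Bartlett's statistic for homogeneity of covariance matrices."""
    k = len(groups)
    covs = [np.cov(g, rowvar=False) for g in groups]
    ns = np.array([len(g) for g in groups])
    pooled = sum((n - 1) * S for n, S in zip(ns, covs)) / (ns.sum() - k)
    return (ns.sum() - k) * np.log(np.linalg.det(pooled)) - sum(
        (n - 1) * np.log(np.linalg.det(S)) for n, S in zip(ns, covs))

def resampling_pvalue(groups, n_boot=200, seed=0):
    """Permutation p-value based on standardized residuals: each group is
    centered and whitened by its own sample moments before pooling, so the
    resampled residuals share second moments even under the alternative."""
    rng = np.random.default_rng(seed)
    std_resid = []
    for g in groups:
        c = g - g.mean(axis=0)
        L = np.linalg.cholesky(np.cov(g, rowvar=False))
        std_resid.append(np.linalg.solve(L, c.T).T)   # whitened: cov ~ identity
    pool = np.vstack(std_resid)
    ns = [len(g) for g in groups]
    obs = bartlett_stat(groups)
    hits = 0
    for _ in range(n_boot):
        perm = rng.permutation(pool)
        parts = np.split(perm, np.cumsum(ns)[:-1])
        hits += bartlett_stat(parts) >= obs
    return (hits + 1) / (n_boot + 1)
```

    Resampling the raw residuals instead of the whitened ones is exactly the pitfall the abstract describes: under the alternative, the groups' residuals have different second moments, so the permutation null is no longer valid.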

  15. A Multiple Criteria Decision Making Method Based on Relative Value Distances

    Directory of Open Access Journals (Sweden)

    Shyur Huan-jyh

    2015-12-01

    Full Text Available This paper proposes a new multiple criteria decision-making method called ERVD (election based on relative value distances). An S-shaped value function is adopted to replace the expected utility function to describe the risk-averse and risk-seeking behavior of decision makers. Comparisons and experiments contrasting with the TOPSIS (Technique for Order Preference by Similarity to the Ideal Solution) method are carried out to verify the feasibility of using the proposed method to represent the decision makers' preference in the decision-making process. Our experimental results show that the proposed approach is an appropriate and effective MCDM method.
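    An S-shaped value function of the prospect-theory form is concave for gains and convex but steeper for losses relative to a reference point. The parameters and the toy alternatives below are assumptions for illustration, not the paper's constants:

```python
import numpy as np

def s_value(d, alpha=0.88, lam=2.25):
    """S-shaped value of deviation d from the reference point:
    |d|**alpha for gains, -lam * |d|**alpha for losses (lam > 1 = loss aversion)."""
    d = np.asarray(d, dtype=float)
    mag = np.abs(d) ** alpha
    return np.where(d >= 0, mag, -lam * mag)

# Score two alternatives on two criteria against a reference point:
# equal-sized losses weigh more than gains, which can change a ranking
# relative to a plain weighted sum of the raw scores.
scores = np.array([[0.70, 0.50],
                   [0.60, 0.80]])
ref = np.array([0.65, 0.60])
weights = np.array([0.5, 0.5])
vals = s_value(scores - ref) @ weights
ranking = np.argsort(-vals)          # best alternative first
```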

  16. Multiple sequential failure model: A probabilistic approach to quantifying human error dependency

    International Nuclear Information System (INIS)

    Samanta

    1985-01-01

    This paper presents a probabilistic approach to quantifying human error dependency when multiple tasks are performed. Dependent human failures are dominant contributors to risks from nuclear power plants. An overview of the Multiple Sequential Failure (MSF) model and its use in probabilistic risk assessments (PRAs), depending on the available data, is given. A small-scale psychological experiment was conducted on the nature of human dependency, and the interpretation of the experimental data by the MSF model shows remarkable accommodation of the dependent failure data. The model, which provides a unique method for quantifying dependent failures in human reliability analysis, can be used in conjunction with any of the general methods currently used for the human reliability aspect of PRAs

  17. Multiple and mixed methods in formative evaluation: Is more better? Reflections from a South African study

    Directory of Open Access Journals (Sweden)

    Willem Odendaal

    2016-12-01

    Full Text Available Abstract Background Formative programme evaluations assess intervention implementation processes, and are seen widely as a way of unlocking the ‘black box’ of any programme in order to explore and understand why a programme functions as it does. However, few critical assessments of the methods used in such evaluations are available, and there are especially few that reflect on how well the evaluation achieved its objectives. This paper describes a formative evaluation of a community-based lay health worker programme for TB and HIV/AIDS clients across three low-income communities in South Africa. It assesses each of the methods used in relation to the evaluation objectives, and offers suggestions on ways of optimising the use of multiple, mixed-methods within formative evaluations of complex health system interventions. Methods The evaluation’s qualitative methods comprised interviews, focus groups, observations and diary keeping. Quantitative methods included a time-and-motion study of the lay health workers’ scope of practice and a client survey. The authors conceptualised and conducted the evaluation, and through iterative discussions, assessed the methods used and their results. Results Overall, the evaluation highlighted programme issues and insights beyond the reach of traditional single methods evaluations. The strengths of the multiple, mixed-methods in this evaluation included a detailed description and nuanced understanding of the programme and its implementation, and triangulation of the perspectives and experiences of clients, lay health workers, and programme managers. However, the use of multiple methods needs to be carefully planned and implemented as this approach can overstretch the logistic and analytic resources of an evaluation. Conclusions For complex interventions, formative evaluation designs including multiple qualitative and quantitative methods hold distinct advantages over single method evaluations. However

  18. Determination of 226Ra contamination depth in soil using the multiple photopeaks method

    International Nuclear Information System (INIS)

    Haddad, Kh.; Al-Masri, M.S.; Doubal, A.W.

    2014-01-01

    Radioactive contamination presents a diverse range of challenges in many industries. Determination of the depth of radioactive contamination plays a vital role in the assessment of contaminated sites, because it can be used to estimate the activity content. Traditionally it is determined by measuring the activity distribution along the depth. This approach gives accurate results, but it is time consuming and costly. The multiple photopeaks method was developed in this work for 226 Ra contamination depth determination in NORM-contaminated soil using in-situ gamma spectrometry. The developed method is based on a linear correlation between the attenuation ratio of different gamma lines emitted by 214 Bi and the 226 Ra contamination depth. Although this method is approximate, it is much simpler, faster and cheaper than the traditional one, and it can be applied to any multiple gamma emitter contaminant. -- Highlights: • The multiple photopeaks method was developed for 226 Ra contamination depth determination using in-situ gamma spectrometry. • The method is based on a linear correlation between the attenuation ratio of 214 Bi gamma lines and the 226 Ra contamination depth. • This method is simpler, faster and cheaper than the traditional one and can be applied to any multiple gamma contaminant
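    The underlying physics can be sketched as follows: two photopeaks of the same radionuclide are attenuated differently by the overburden, so the logarithm of their count-rate ratio varies linearly with burial depth. The attenuation coefficients and unattenuated ratio below are illustrative placeholders, not values from the paper.

```python
import math

# Hypothetical linear attenuation coefficients of soil (cm^-1) at two
# 214Bi photopeaks; real values depend on soil composition and density.
MU_LOW = 0.20    # at the lower-energy line (more strongly attenuated)
MU_HIGH = 0.12   # at the higher-energy line
R0 = 2.97        # assumed unattenuated count-rate ratio (illustrative)

def depth_from_ratio(measured_ratio):
    """Invert R(d) = R0 * exp(-(MU_LOW - MU_HIGH) * d): ln R is linear in depth d."""
    return math.log(R0 / measured_ratio) / (MU_LOW - MU_HIGH)
```

    A surface source gives the unattenuated ratio R0 (depth 0); deeper sources suppress the low-energy line more, reducing the ratio.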

  19. Association analysis of multiple traits by an approach of combining ...

    Indian Academy of Sciences (India)

    Lili Chen

    diseases. Joint analysis of multiple traits can increase statistical power of association analysis and uncover the underlying genetic ... genthaler and Thilly 2007), the combined multivariate and ... Because of using reverse regression model, our.

  20. Application of multiple timestep integration method in SSC

    International Nuclear Information System (INIS)

    Guppy, J.G.

    1979-01-01

    The thermohydraulic transient simulation of an entire LMFBR system is, by its very nature, complex. Physically, the entire plant consists of many subsystems which are coupled by various processes and/or components. The characteristic integration timesteps for these processes/components can vary over a wide range. To improve computing efficiency, a multiple timestep scheme (MTS) approach has been used in the development of the Super System Code (SSC). In this paper: (1) the partitioning of the system and the timestep control are described, and (2) results are presented showing that the MTS reduces computer running time by as much as a factor of five compared with a single timestep scheme
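    The multiple-timestep idea can be sketched minimally as below (a generic explicit-Euler sub-cycling illustration, not the SSC implementation): the slow subsystem's contribution is evaluated once per macro step and held fixed while the fast subsystem is sub-cycled with a smaller step.

```python
def mts_integrate(f_fast, f_slow, y0, t_end, dt_slow, n_sub):
    """Explicit Euler with sub-cycling: the slow right-hand side is evaluated
    once per macro step and frozen while the fast part takes n_sub small steps."""
    y, t = y0, 0.0
    dt_fast = dt_slow / n_sub
    while t < t_end - 1e-12:
        slow = f_slow(t, y)          # frozen over the macro step
        for _ in range(n_sub):
            y = y + dt_fast * (f_fast(t, y) + slow)
            t += dt_fast
    return y
```

    The saving comes from calling the expensive slow model n_sub times less often than the fast one.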

  1. Multiple predictor smoothing methods for sensitivity analysis: Description of techniques

    International Nuclear Information System (INIS)

    Storlie, Curtis B.; Helton, Jon C.

    2008-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. Then, in the second and concluding part of this presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present

  2. Multiple predictor smoothing methods for sensitivity analysis: Example results

    International Nuclear Information System (INIS)

    Storlie, Curtis B.; Helton, Jon C.

    2008-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described in the first part of this presentation: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. In this, the second and concluding part of the presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present

  3. Integrating Multiple Teaching Methods into a General Chemistry Classroom

    Science.gov (United States)

    Francisco, Joseph S.; Nicoll, Gayle; Trautmann, Marcella

    1998-02-01

    In addition to the traditional lecture format, three other teaching strategies (class discussions, concept maps, and cooperative learning) were incorporated into a freshman level general chemistry course. Student perceptions of their involvement in each of the teaching methods, as well as their perceptions of the utility of each method were used to assess the effectiveness of the integration of the teaching strategies as received by the students. Results suggest that each strategy serves a unique purpose for the students and increased student involvement in the course. These results indicate that the multiple teaching strategies were well received by the students and that all teaching strategies are necessary for students to get the most out of the course.

  4. Fuzzy multiple objective decision making methods and applications

    CERN Document Server

    Lai, Young-Jou

    1994-01-01

    In the last 25 years, the fuzzy set theory has been applied in many disciplines such as operations research, management science, control theory, artificial intelligence/expert system, etc. In this volume, methods and applications of crisp, fuzzy and possibilistic multiple objective decision making are first systematically and thoroughly reviewed and classified. This state-of-the-art survey provides readers with a capsule look into the existing methods, and their characteristics and applicability to analysis of fuzzy and possibilistic programming problems. To realize practical fuzzy modelling, it presents solutions for real-world problems including production/manufacturing, location, logistics, environment management, banking/finance, personnel, marketing, accounting, agriculture economics and data analysis. This book is a guided tour through the literature in the rapidly growing fields of operations research and decision making and includes the most up-to-date bibliographical listing of literature on the topi...

  5. Multiple Intelligences within the Cross-Curricular Approach

    Directory of Open Access Journals (Sweden)

    Anthoula Vaiou

    2010-02-01

    Full Text Available The present study was carried out in a Greek 6th grade State Primary School class and was based on Howard Gardner's theory of multiple intelligences, first introduced in 1983. More particularly, the extent to which the young learners possess multiple intelligences was explored through a specially designed questionnaire and a series of interviews. The findings served as a tool for constructing a project work based on students' learning preferences within a cross-curricular framework, easily applicable to the Greek State School curriculum. All learners were encouraged to participate within a school environment that traditionally promotes linguistic and mathematical skills, by matching dominant multiple intelligences, or a combination of some of them, to thematic units already taught by Greek teachers. The suggested project was assessed through observation and student portfolios, showing that the young learners' multiple intelligences were exploited to a great extent, promoting the learning process satisfactorily. The results of this study can contribute to the literature on multiple intelligences in the Greek context and suggest a need for further consideration and exploration in the field. Finally, the researcher hopes the present work can function as a springboard for more elaborate studies in the future.

  6. An Extended TOPSIS Method for the Multiple Attribute Decision Making Problems Based on Interval Neutrosophic Set

    Directory of Open Access Journals (Sweden)

    Pingping Chi

    2013-03-01

    Full Text Available The interval neutrosophic set (INS) can more easily express incomplete, indeterminate and inconsistent information, and TOPSIS is one of the most commonly used and effective methods for multiple attribute decision making; in general, however, it can only process attribute values given as crisp numbers. In this paper, we extend TOPSIS to INSs and, for multiple attribute decision-making problems in which the attribute weights are unknown and the attribute values take the form of INSs, propose an extended TOPSIS method. First, the definition of an INS and its operational laws are given, and a distance between INSs is defined. Then, the attribute weights are determined by the maximizing deviation method, and an extended TOPSIS method is developed to rank the alternatives. Finally, an illustrative example is given to verify the developed approach and to demonstrate its practicality and effectiveness.
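    For reference, the classical crisp-number TOPSIS that this paper extends can be sketched as follows (this is the standard method, not the interval-neutrosophic extension):

```python
import numpy as np

def topsis(X, weights, benefit):
    """X: alternatives x criteria matrix of crisp values.
    benefit: boolean per criterion (True = larger is better)."""
    R = X / np.linalg.norm(X, axis=0)           # vector-normalize each criterion
    V = R * weights                             # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)   # distance to ideal solution
    d_neg = np.linalg.norm(V - anti, axis=1)    # distance to anti-ideal solution
    return d_neg / (d_pos + d_neg)              # closeness coefficient, higher is better
```

    The extension in the paper replaces the crisp entries of X and the distance measure with interval neutrosophic counterparts.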

  7. Filter multiplexing by use of spatial Code Division Multiple Access approach.

    Science.gov (United States)

    Solomon, Jonathan; Zalevsky, Zeev; Mendlovic, David; Monreal, Javier Garcia

    2003-02-10

    The increasing popularity of optical communication has also brought a demand for a broader bandwidth. The trend, naturally, was to implement methods from traditional electronic communication. One of the most effective traditional methods is Code Division Multiple Access. In this research, we suggest the use of this approach for spatial coding applied to images. The approach is to multiplex several filters into one plane while keeping their mutual orthogonality. It is shown that if the filters are limited by their bandwidth, the output of all the filters can be sampled in the original image resolution and fully recovered through an all-optical setup. The theoretical analysis of such a setup is verified in an experimental demonstration.
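    The orthogonal-code multiplexing principle borrowed from CDMA can be illustrated with Walsh-Hadamard codes; this is the generic electronic-domain analogue, not the all-optical spatial setup of the paper.

```python
import numpy as np

def walsh(n):
    # Walsh-Hadamard matrix: rows are mutually orthogonal codes (n a power of 2)
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def multiplex(values, codes):
    # superpose the code-weighted channels into one combined signal
    return values @ codes

def demultiplex(mixed, codes):
    # correlate with each code; orthogonality recovers the channel values
    return mixed @ codes.T / codes.shape[1]
```

    In the paper's setting the "channels" are filters sharing one plane; mutual orthogonality is what lets each one be recovered from the combined output.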

  8. Need for multiple approaches in collaborative software development

    International Nuclear Information System (INIS)

    LePoire, D. J.

    2002-01-01

    The need to share software and reintegrate it into new applications presents a difficult but important challenge. Component-based development as an approach to this problem is receiving much attention in professional journals and academic curricula. However, there are many other approaches to collaborative software development that might be more appropriate. This paper reviews a few of these approaches and discusses criteria for the conditions and contexts in which these alternative approaches might be more appropriate. This paper complements the discussion of context-based development team organizations and processes. Examples from a small development team that interacts with a larger professional community are analyzed

  9. Analytic Methods for Evaluating Patterns of Multiple Congenital Anomalies in Birth Defect Registries.

    Science.gov (United States)

    Agopian, A J; Evans, Jane A; Lupo, Philip J

    2018-01-15

    It is estimated that 20 to 30% of infants with birth defects have two or more birth defects. Among these infants with multiple congenital anomalies (MCA), co-occurring anomalies may represent either chance (i.e., unrelated etiologies) or pathogenically associated patterns of anomalies. While some MCA patterns have been recognized and described (e.g., known syndromes), others have not been identified or characterized. Elucidating these patterns may result in a better understanding of the etiologies of these MCAs. This article reviews the literature with regard to analytic methods that have been used to evaluate patterns of MCAs, in particular those using birth defect registry data. A popular method for MCA assessment involves a comparison of the observed to expected ratio for a given combination of MCAs, or one of several modified versions of this comparison. Other methods include use of numerical taxonomy or other clustering techniques, multiple regression analysis, and log-linear analysis. Advantages and disadvantages of these approaches, as well as specific applications, are outlined. Despite the availability of multiple analytic approaches, relatively few MCA combinations have been assessed. The availability of large birth defects registries and computing resources that allow for automated, big data strategies for prioritizing MCA patterns may provide new avenues for better understanding co-occurrence of birth defects. Thus, the selection of an analytic approach may depend on several considerations. Birth Defects Research 110:5-11, 2018. © 2017 Wiley Periodicals, Inc.
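    In its simplest form, the observed-to-expected comparison mentioned above reduces to the following (a sketch; registry analyses add covariate adjustment and significance testing):

```python
def oe_ratio(n_pair, n_a, n_b, n_total):
    """Observed/expected ratio for co-occurrence of anomalies A and B.
    The expected count assumes A and B occur independently."""
    expected = n_a * n_b / n_total
    return n_pair / expected
```

    A ratio near 1 is consistent with chance co-occurrence; a ratio well above 1 flags a candidate pathogenic association for further study.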

  10. EMUDRA: Ensemble of Multiple Drug Repositioning Approaches to Improve Prediction Accuracy.

    Science.gov (United States)

    Zhou, Xianxiao; Wang, Minghui; Katsyv, Igor; Irie, Hanna; Zhang, Bin

    2018-04-24

    Availability of large-scale genomic, epigenetic and proteomic data in complex diseases makes it possible to objectively and comprehensively identify therapeutic targets that can lead to new therapies. The Connectivity Map has been widely used to explore novel indications of existing drugs. However, the prediction accuracy of the existing methods, such as the Kolmogorov-Smirnov statistic, remains low. Here we present a novel high-performance drug repositioning approach that improves over the state-of-the-art methods. We first designed an expression weighted cosine method (EWCos) to minimize the influence of uninformative expression changes and then developed an ensemble approach termed EMUDRA (Ensemble of Multiple Drug Repositioning Approaches) to integrate EWCos and three existing state-of-the-art methods. EMUDRA significantly outperformed individual drug repositioning methods when applied to simulated and independent evaluation datasets. Using EMUDRA, we predicted and experimentally validated the antibiotic rifabutin as an inhibitor of cell growth in triple negative breast cancer. EMUDRA can identify drugs that more effectively target disease gene signatures and will thus be a useful tool for identifying novel therapies for complex diseases and predicting new indications for existing drugs. The EMUDRA R package is available at doi:10.7303/syn11510888. bin.zhang@mssm.edu or zhangb@hotmail.com. Supplementary data are available at Bioinformatics online.
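    The abstract does not specify EWCos's exact weighting scheme; a generic expression-weighted cosine similarity, where weights down-weight uninformative genes, would look like this sketch:

```python
import numpy as np

def weighted_cosine(query, drug, w):
    """Cosine similarity between a disease signature and a drug-induced
    expression profile, with nonnegative per-gene weights w."""
    num = np.sum(w * query * drug)
    den = np.sqrt(np.sum(w * query**2) * np.sum(w * drug**2))
    return num / den
```

    For repositioning, a strongly negative score (drug profile opposing the disease signature) suggests a candidate therapeutic.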

  11. Multiple-time-stepping generalized hybrid Monte Carlo methods

    Energy Technology Data Exchange (ETDEWEB)

    Escribano, Bruno, E-mail: bescribano@bcamath.org [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); Akhmatskaya, Elena [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); IKERBASQUE, Basque Foundation for Science, E-48013 Bilbao (Spain); Reich, Sebastian [Universität Potsdam, Institut für Mathematik, D-14469 Potsdam (Germany); Azpiroz, Jon M. [Kimika Fakultatea, Euskal Herriko Unibertsitatea (UPV/EHU) and Donostia International Physics Center (DIPC), P.K. 1072, Donostia (Spain)

    2015-01-01

    Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved superior in sampling efficiency to its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multiple-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications not only improve the performance of GSHMC itself but also allow it to outperform the best-performing methods that use similar force-splitting schemes. In addition, we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC additionally uses a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on a water system and a protein system. Results were compared to those obtained using the Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that placing the MTS approach in the framework of hybrid Monte Carlo and using the natural stochasticity offered by the generalized hybrid Monte Carlo method improve the stability of MTS and allow larger step sizes in the simulation of complex systems.

  12. A Fiducial Approach to Extremes and Multiple Comparisons

    Science.gov (United States)

    Wandler, Damian V.

    2010-01-01

    Generalized fiducial inference is a powerful tool for many difficult problems. Based on an extension of R. A. Fisher's work, we used generalized fiducial inference for two extreme value problems and a multiple comparison procedure. The first extreme value problem is dealing with the generalized Pareto distribution. The generalized Pareto…

  13. Differential diagnosis of suspected multiple sclerosis: a consensus approach

    NARCIS (Netherlands)

    Miller, D. H.; Weinshenker, B.G.; Filippi, M.; Banwell, B.L.; Cohen, J.A.; Freedman, M.S.; Galetta, S.L.; Hutchinson, M.; Johnson, R.T.; Kappos, L.; Kira, J.; Lublin, F.D.; McFarland, H.F.; Montalban, X.; Panitch, H.; Richert, J.R.; Reingold, S.C.; Polman, C.H.

    2008-01-01

    Background and objectives: Diagnosis of multiple sclerosis (MS) requires exclusion of diseases that could better explain the clinical and paraclinical findings. A systematic process for exclusion of alternative diagnoses has not been defined. An International Panel of MS experts developed consensus

  14. A retrospective likelihood approach for efficient integration of multiple omics factors in case-control association studies.

    Science.gov (United States)

    Balliu, Brunilda; Tsonaka, Roula; Boehringer, Stefan; Houwing-Duistermaat, Jeanine

    2015-03-01

    Integrative omics, the joint analysis of outcome and multiple types of omics data, such as genomics, epigenomics, and transcriptomics data, constitutes a promising approach for powerful and biologically relevant association studies. These studies often employ a case-control design, and often include nonomics covariates, such as age and gender, that may modify the underlying omics risk factors. An open question is how to best integrate multiple omics and nonomics information to maximize statistical power in case-control studies that ascertain individuals based on the phenotype. Recent work on integrative omics has used prospective approaches, modeling case-control status conditional on omics and nonomics risk factors. Compared to univariate approaches, jointly analyzing multiple risk factors with a prospective approach increases power in nonascertained cohorts. However, these prospective approaches often lose power in case-control studies. In this article, we propose a novel statistical method for integrating multiple omics and nonomics factors in case-control association studies. Our method is based on a retrospective likelihood function that models the joint distribution of omics and nonomics factors conditional on case-control status. The new method provides accurate control of Type I error rate and has increased efficiency over prospective approaches in both simulated and real data. © 2015 Wiley Periodicals, Inc.

  15. Acoustic scattering by multiple elliptical cylinders using collocation multipole method

    International Nuclear Information System (INIS)

    Lee, Wei-Ming

    2012-01-01

    This paper presents the collocation multipole method for the acoustic scattering induced by multiple elliptical cylinders subjected to an incident plane sound wave. To satisfy the Helmholtz equation in the elliptical coordinate system, the scattered acoustic field is formulated in terms of angular and radial Mathieu functions which also satisfy the radiation condition at infinity. The sound-soft or sound-hard boundary condition is satisfied by uniformly collocating points on the boundaries. For the sound-hard or Neumann conditions, the normal derivative of the acoustic pressure is determined by using the appropriate directional derivative without requiring the addition theorem of Mathieu functions. By truncating the multipole expansion, a finite linear algebraic system is derived and the scattered field can then be determined according to the given incident acoustic wave. Once the total field is calculated as the sum of the incident field and the scattered field, the near field acoustic pressure along the scatterers and the far field scattering pattern can be determined. For the acoustic scattering of one elliptical cylinder, the proposed results match well with the analytical solutions. The proposed scattered fields induced by two and three elliptical–cylindrical scatterers are critically compared with those provided by the boundary element method to validate the present method. Finally, the effects of the convexity of an elliptical scatterer, the separation between scatterers and the incident wave number and angle on the acoustic scattering are investigated.

  16. Dynamic reflexivity in action: an armchair walkthrough of a qualitatively driven mixed-method and multiple methods study of mindfulness training in schoolchildren.

    Science.gov (United States)

    Cheek, Julianne; Lipschitz, David L; Abrams, Elizabeth M; Vago, David R; Nakamura, Yoshio

    2015-06-01

    Dynamic reflexivity is central to enabling flexible and emergent qualitatively driven inductive mixed-method and multiple methods research designs. Yet too often, such reflexivity, and how it is used at various points of a study, is absent when we write our research reports. Instead, reports of mixed-method and multiple methods research focus on what was done rather than how it came to be done. This article seeks to redress this absence of emphasis on the reflexive thinking underpinning the way that mixed- and multiple methods, qualitatively driven research approaches are thought about and subsequently used throughout a project. Using Morse's notion of an armchair walkthrough, we excavate and explore the layers of decisions we made about how, and why, to use qualitatively driven mixed-method and multiple methods research in a study of mindfulness training (MT) in schoolchildren. © The Author(s) 2015.

  17. Capitalising on multiplicity: a transdisciplinary systems approach to landscape research

    NARCIS (Netherlands)

    Tress, B.; Tress, G.

    2001-01-01

    Different disciplines have landscape as the focal point of their research. They are successful in presenting new findings about landscapes within their specialization, but collaboration - and thus, transfer of knowledge across disciplinary boundaries - is seldom realized because a common approach

  18. An OO visual language definition approach supporting multiple views

    OpenAIRE

    Akehurst, David H.; I.E.E.E. Computer Society

    2000-01-01

    The formal approach to visual language definition is to use graph grammars and/or graph transformation techniques. These techniques focus on specifying the syntax and manipulation rules of the concrete representation. This paper presents a constraint and object-oriented approach to defining visual languages that uses UML and OCL as a definition language. Visual language definitions specify a mapping between concrete and abstract models of possible visual sentences, which can subsequently be ...

  19. A New Classification Approach Based on Multiple Classification Rules

    OpenAIRE

    Zhongmei Zhou

    2014-01-01

    A good classifier can correctly predict new data for which the class label is unknown, so it is important to construct a high accuracy classifier. Hence, classification techniques are very useful in ubiquitous computing. Associative classification achieves higher classification accuracy than some traditional rule-based classification approaches. However, the approach also has two major deficiencies. First, it generates a very large number of association classification rules, especially when t...

  20. A Nonparametric, Multiple Imputation-Based Method for the Retrospective Integration of Data Sets

    Science.gov (United States)

    Carrig, Madeline M.; Manrique-Vallier, Daniel; Ranby, Krista W.; Reiter, Jerome P.; Hoyle, Rick H.

    2015-01-01

    Complex research questions often cannot be addressed adequately with a single data set. One sensible alternative to the high cost and effort associated with the creation of large new data sets is to combine existing data sets containing variables related to the constructs of interest. The goal of the present research was to develop a flexible, broadly applicable approach to the integration of disparate data sets that is based on nonparametric multiple imputation and the collection of data from a convenient, de novo calibration sample. We demonstrate proof of concept for the approach by integrating three existing data sets containing items related to the extent of problematic alcohol use and associations with deviant peers. We discuss both necessary conditions for the approach to work well and potential strengths and weaknesses of the method compared to other data set integration approaches. PMID:26257437

  1. Improving automated multiple sclerosis lesion segmentation with a cascaded 3D convolutional neural network approach.

    Science.gov (United States)

    Valverde, Sergi; Cabezas, Mariano; Roura, Eloy; González-Villà, Sandra; Pareto, Deborah; Vilanova, Joan C; Ramió-Torrentà, Lluís; Rovira, Àlex; Oliver, Arnau; Lladó, Xavier

    2017-07-15

    In this paper, we present a novel automated method for White Matter (WM) lesion segmentation of Multiple Sclerosis (MS) patient images. Our approach is based on a cascade of two 3D patch-wise convolutional neural networks (CNN). The first network is trained to be more sensitive, revealing possible candidate lesion voxels, while the second network is trained to reduce the number of misclassified voxels coming from the first network. This cascaded CNN architecture tends to learn well from a small (n≤35) set of labeled data of the same MRI contrast, which can be very interesting in practice, given the difficulty of obtaining manual label annotations and the large amount of available unlabeled Magnetic Resonance Imaging (MRI) data. We evaluate the accuracy of the proposed method on the public MS lesion segmentation challenge MICCAI2008 dataset, comparing it with other state-of-the-art MS lesion segmentation tools. Furthermore, the proposed method is also evaluated on two private MS clinical datasets, where its performance is compared with that of recent publicly available state-of-the-art MS lesion segmentation methods. At the time of writing, our method is the best ranked approach on the MICCAI2008 challenge, outperforming the other 60 participant methods when using all the available input modalities (T1-w, T2-w and FLAIR), while still in the top rank (3rd position) when using only T1-w and FLAIR modalities. On clinical MS data, our approach exhibits a significant increase in accuracy when segmenting WM lesions compared with the rest of the evaluated methods, and also correlates highly (r≥0.97) with the expected lesion volume. Copyright © 2017 Elsevier Inc. All rights reserved.
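    Independently of the CNN details, the cascade logic can be sketched as: a permissive first stage proposes candidate voxels, and a stricter second stage re-scores only those candidates. The probability maps and thresholds below are arbitrary illustrations, not the paper's trained networks.

```python
import numpy as np

def cascade_segment(p1, p2, t1=0.2, t2=0.5):
    """p1, p2: voxelwise lesion probabilities from the two (hypothetical) networks.
    t1 is kept low for sensitivity; t2 prunes the false positives that survive."""
    candidates = p1 >= t1                    # sensitive first pass
    mask = np.zeros(p1.shape, dtype=bool)
    mask[candidates] = p2[candidates] >= t2  # second stage sees only candidates
    return mask
```

    Note that a voxel scored highly by the second stage is still rejected if the first stage never proposed it; this is what makes the cascade cheap and specific.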

  2. A Bayesian joint probability modeling approach for seasonal forecasting of streamflows at multiple sites

    Science.gov (United States)

    Wang, Q. J.; Robertson, D. E.; Chiew, F. H. S.

    2009-05-01

    Seasonal forecasting of streamflows can be highly valuable for water resources management. In this paper, a Bayesian joint probability (BJP) modeling approach for seasonal forecasting of streamflows at multiple sites is presented. A Box-Cox transformed multivariate normal distribution is proposed to model the joint distribution of future streamflows and their predictors such as antecedent streamflows and El Niño-Southern Oscillation indices and other climate indicators. Bayesian inference of model parameters and uncertainties is implemented using Markov chain Monte Carlo sampling, leading to joint probabilistic forecasts of streamflows at multiple sites. The model provides a parametric structure for quantifying relationships between variables, including intersite correlations. The Box-Cox transformed multivariate normal distribution has considerable flexibility for modeling a wide range of predictors and predictands. The Bayesian inference formulated allows the use of data that contain nonconcurrent and missing records. The model flexibility and data-handling ability mean that the BJP modeling approach is potentially of wide practical application. The paper also presents a number of statistical measures and graphical methods for verification of probabilistic forecasts of continuous variables. Results for streamflows at three river gauges in the Murrumbidgee River catchment in southeast Australia show that the BJP modeling approach has good forecast quality and that the fitted model is consistent with observed data.
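
    The Box-Cox step underlying the BJP model can be sketched in a few lines. The transformation parameter and the flow values below are illustrative, not taken from the paper; the full model applies such a transform before fitting the multivariate normal.

```python
import math

def boxcox(x, lam):
    """Box-Cox transform for x > 0."""
    return math.log(x) if lam == 0 else (x ** lam - 1.0) / lam

def inv_boxcox(y, lam):
    """Inverse Box-Cox transform."""
    return math.exp(y) if lam == 0 else (lam * y + 1.0) ** (1.0 / lam)

flows = [12.0, 55.0, 130.0]            # e.g. seasonal streamflow totals
z = [boxcox(q, 0.2) for q in flows]    # transformed values, closer to normal
back = [inv_boxcox(v, 0.2) for v in z]
print(all(abs(a - b) < 1e-9 for a, b in zip(flows, back)))  # True
```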

  3. Multiple instance learning tracking method with local sparse representation

    KAUST Repository

    Xie, Chengjun

    2013-10-01

    When objects undergo large pose changes, illumination variation or partial occlusion, most existing visual tracking algorithms tend to drift away from the target and may even fail to track it. To address this issue, in this study the authors propose an online algorithm that combines multiple instance learning (MIL) and local sparse representation for tracking an object in a video system. The key idea in our method is to model the appearance of an object by local sparse codes that can serve as training data for the MIL framework. First, local image patches of a target object are represented as sparse codes with an overcomplete dictionary, where the adaptive representation helps overcome partial occlusion in object tracking. An MIL classifier is then trained on the sparse codes to discriminate the target from the background. Finally, results from the trained classifier are input into a particle filter framework to sequentially estimate the target state over time in visual tracking. In addition, to reduce the visual drift caused by accumulated errors when updating the dictionary and classifier, a two-step object tracking method combining a static MIL classifier with a dynamic MIL classifier is proposed. Experiments on publicly available benchmark video sequences show that our proposed tracker is more robust and effective than others. © The Institution of Engineering and Technology 2013.

  4. [Multiple sclerosis. Therapeutic nihilism is the wrong approach here].

    Science.gov (United States)

    Voltz, R; Goebels, N; Jarius, S; Hohlfeld, R

    2002-05-06

    The standard treatment for acute multiple sclerosis relapses continues to be the intravenous administration of high-dose methylprednisolone. For relapse prophylaxis, immunomodulatory therapy with interferon beta or glatiramer acetate, immunoglobulins or azathioprine is available. Studies have shown that interferon beta not only reduces the frequency of relapses by one-third, but also significantly delays the second relapse, provided it is administered early, that is, immediately following the first relapse. The reduction in the patient's quality of life caused by the illness can be appreciably improved by a whole series of symptomatic treatments. The ideal situation is a cooperative effort by an interdisciplinary team.

  5. Training teachers to observation: an approach through multiple intelligences theory

    Directory of Open Access Journals (Sweden)

    Nicolini, P.

    2010-11-01

    Full Text Available Observation is a daily practice in scholastic and educational contexts, but it needs to develop into a professional competence in order to be helpful. In fact, to design an educative and didactic plan and to provide useful tools, activities and tasks to their students, teachers and educators need to collect information about learners. For these reasons we built a Web-Observation (Web-Ob) application, a tool able to support good practices in observation. In particular, the Web-Ob can provide Multiple Intelligences Theory as a framework through which children’s behaviors and attitudes can be observed, assessed and evaluated.

  6. A hybrid approach to parameter identification of linear delay differential equations involving multiple delays

    Science.gov (United States)

    Marzban, Hamid Reza

    2018-05-01

    In this paper, we are concerned with the parameter identification of linear time-invariant systems containing multiple delays. The approach is based upon a hybrid of block-pulse functions and Legendre polynomials. The convergence of the proposed procedure is established and an upper error bound with respect to the L2-norm associated with the hybrid functions is derived. The problem under consideration is first transformed into a system of algebraic equations. The least squares technique is then employed for identification of the desired parameters. Several multi-delay systems of varying complexity are investigated to evaluate the performance and capability of the proposed approximation method. It is shown that the proposed approach is also applicable to a class of nonlinear multi-delay systems. It is demonstrated that the suggested procedure provides accurate results for the desired parameters.
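
    The closing least-squares step can be illustrated in a much simpler setting than the paper's hybrid block-pulse/Legendre basis: identifying the coefficient a in the single-delay system x'(t) = a*x(t - tau) from sampled data using finite differences. The system and all parameter values below are invented for the sketch.

```python
def simulate(a, tau, dt, n, history=1.0):
    """Forward-Euler simulation of x'(t) = a * x(t - tau)."""
    lag = int(round(tau / dt))
    x = [history] * (lag + 1)            # constant pre-history
    for k in range(lag, lag + n):
        x.append(x[k] + dt * a * x[k - lag])
    return x, lag

def identify(x, lag, dt):
    """Least squares for a in (x[k+1] - x[k]) / dt = a * x[k - lag]."""
    num = sum((x[k + 1] - x[k]) / dt * x[k - lag] for k in range(lag, len(x) - 1))
    den = sum(x[k - lag] ** 2 for k in range(lag, len(x) - 1))
    return num / den

x, lag = simulate(a=-0.5, tau=0.2, dt=0.01, n=400)
print(round(identify(x, lag, 0.01), 6))  # -0.5
```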

  7. A novel EMD selecting thresholding method based on multiple iteration for denoising LIDAR signal

    Science.gov (United States)

    Li, Meng; Jiang, Li-hui; Xiong, Xing-long

    2015-06-01

    The empirical mode decomposition (EMD) approach is believed to be potentially useful for processing nonlinear and non-stationary LIDAR signals. To shed further light on its performance, we propose the EMD selecting thresholding method based on multiple iteration, which is essentially a development of EMD interval thresholding (EMD-IT): it randomly alters the samples in the noisy parts of all the corrupted intrinsic mode functions so that the iterations yield a better denoising effect. Simulations on both synthetic signals and real-world LIDAR signals support this method.
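
    A single-IMF illustration of the interval-thresholding idea (EMD-IT) that the proposed method iterates on: each interval between consecutive zero crossings is kept only if its extremum exceeds the threshold, otherwise the whole interval is zeroed. The samples and threshold are toy values; the random sample alteration and the multiple-iteration scheme of the proposed method are not reproduced here.

```python
def interval_threshold(imf, thr):
    """Zero every zero-crossing interval whose peak magnitude is below thr."""
    out = imf[:]
    start = 0
    for i in range(1, len(imf) + 1):
        # an interval ends at a sign change or at the end of the signal
        if i == len(imf) or imf[i] * imf[start] < 0:
            if max(abs(v) for v in imf[start:i]) < thr:
                for j in range(start, i):
                    out[j] = 0.0
            start = i
    return out

imf = [0.1, 0.2, -0.9, -1.1, 0.05, 0.02]
print(interval_threshold(imf, thr=0.5))  # [0.0, 0.0, -0.9, -1.1, 0.0, 0.0]
```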

  8. Power-efficient method for IM-DD optical transmission of multiple OFDM signals.

    Science.gov (United States)

    Effenberger, Frank; Liu, Xiang

    2015-05-18

    We propose a power-efficient method for transmitting multiple frequency-division multiplexed (FDM) orthogonal frequency-division multiplexing (OFDM) signals in intensity-modulation direct-detection (IM-DD) optical systems. This method is based on quadratic soft clipping in combination with odd-only channel mapping. We show, both analytically and experimentally, that the proposed approach is capable of improving the power efficiency by about 3 dB as compared to conventional FDM OFDM signals under practical bias conditions, making it a viable solution in applications such as optical fiber-wireless integrated systems where both IM-DD optical transmission and OFDM signaling are important.
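
    The paper's exact bias-dependent transfer function is not reproduced here; a generic quadratic soft clipper from standard DSP practice illustrates the principle (linear for small inputs, quadratic compression near the rails, flat beyond), which is what spares the power that hard limiting would waste.

```python
def soft_clip(x):
    """Generic quadratic soft clipper, odd-symmetric, saturating at +/-1."""
    s, a = (1.0, x) if x >= 0 else (-1.0, -x)
    if a <= 1/3:
        return s * 2.0 * a                          # linear region
    if a <= 2/3:
        return s * (3.0 - (2.0 - 3.0 * a) ** 2) / 3.0  # quadratic knee
    return s                                        # saturated

print(soft_clip(0.1))   # 0.2 (linear region)
print(soft_clip(0.5))   # ~0.917 (quadratic region)
print(soft_clip(0.9))   # 1.0 (saturated)
```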

  9. New approaches in the management of multiple sclerosis

    Directory of Open Access Journals (Sweden)

    Laurie J Barten

    2010-11-01

    Full Text Available Laurie J Barten,1 Douglas R Allington,1 Kendra A Procacci,2 Michael P Rivey1; 1The University of Montana and Community Medical Center, Missoula, MT, USA; 2The University of Montana School of Pharmacy, Missoula, MT, USA. Abstract: Multiple sclerosis (MS) is a central nervous system chronic inflammatory disease that is characterized by an extensive and complex immune response. Scientific advances have occurred in immunology, pathophysiology, and diagnostic and clinical assessment tools, and the recent discovery of unique therapeutic targets has spurred numerous Phase II and Phase III clinical trials. Reductions in MS relapse rates and improvements in T2 or gadolinium-enhancing lesion burdens have been reported from Phase III trials of fingolimod, alemtuzumab, cladribine, and rituximab. Promising Phase II trial data exist for teriflunomide, daclizumab, laquinimod, and fumarate. The optimism created by these favorable findings must be tempered by evaluation of the adverse effect profiles of these new agents. Given the discovery of progressive multifocal leukoencephalopathy with the use of natalizumab, ongoing vigilance for rare and life-threatening reactions to new agents should be paramount. Patients with MS often experience difficulty with ambulation, spasticity, and cognition. Recent data from two Phase III dalfampridine-SR trials indicate that certain patients receive ambulation benefits. This article provides an overview of data from clinical trials of newer agents of potential benefit in MS. Keywords: multiple sclerosis, Phase II trials, Phase III trials, progressive multifocal leukoencephalopathy, monoclonal antibody

  10. Diagnosing Unemployment: The 'Classification' Approach to Multiple Causation

    NARCIS (Netherlands)

    Rodenburg, P.

    2002-01-01

    The establishment of appropriate policy measures for fighting unemployment has always been difficult since causes of unemployment are hard to identify. This paper analyses an approach used mainly in the 1960s and 1970s in economics, in which classification is used as a way to deal with such a

  11. A Multiple Cross-Cultural Comparison of Approaches to Learning

    Science.gov (United States)

    Bowden, Mark P.; Abhayawansa, Subhash; Manzin, Gregoria

    2015-01-01

    This study compares learning approaches of local English-speaking students and students from Asian countries studying at an Australian metropolitan university. The sample consists of students across 13 different countries. Unlike previous studies, students from Asian countries are subdivided into two categories: students from Confucian Heritage…

  12. Multiple stakeholders in road pricing: A game theoretic approach

    NARCIS (Netherlands)

    Ohazulike, Anthony; Still, Georg J.; Kern, Walter; van Berkum, Eric C.; Hausken, Kjell; Zhuang, Jun

    2015-01-01

    We investigate a game theoretic approach as an alternative to the standard multi-objective optimization models for road pricing. Assuming that various, partly conflicting traffic externalities (congestion, air pollution, noise, safety, etcetera) are represented by corresponding players acting on a

  13. Creative Approaches to Teaching Graduate Research Methods Workshops

    Directory of Open Access Journals (Sweden)

    Peter Reilly

    2017-06-01

    Full Text Available Engagement and deeper learning were enhanced by several innovative teaching strategies delivered in Research Methods workshops to graduate business students, focusing primarily on students adopting a creative approach to formulating a valid research question for undertaking a dissertation successfully. These techniques are applicable to most subject domains and help ensure student engagement. They address the various multiple intelligences and learning styles existing within groups while keeping the sessions student-centred and conducive to a collaborative learning environment. Blogs, interactive tutorials, online videos, games and posters are used to develop students’ cognitive and metacognitive abilities. Using novelty images appeals to a group’s intellectual curiosity, acting as an interpretive device to explain the value of adopting a holistic rather than analytic approach towards a topic.

  14. A Bayesian trans-dimensional approach for the fusion of multiple geophysical datasets

    Science.gov (United States)

    JafarGandomi, Arash; Binley, Andrew

    2013-09-01

    We propose a Bayesian fusion approach to integrate multiple geophysical datasets with different coverage and sensitivity. The fusion strategy is based on the capability of various geophysical methods to provide enough resolution to identify either subsurface material parameters or subsurface structure, or both. We focus on electrical resistivity as the target material parameter and electrical resistivity tomography (ERT), electromagnetic induction (EMI), and ground penetrating radar (GPR) as the set of geophysical methods. However, extending the approach to different sets of geophysical parameters and methods is straightforward. The different geophysical datasets are entered into a trans-dimensional Markov chain Monte Carlo (McMC) search-based joint inversion algorithm. The trans-dimensional property of the McMC algorithm allows dynamic parameterisation of the model space, which in turn helps to avoid bias of the post-inversion results towards a particular model. Given that we are attempting to develop an approach with practical potential, we discretize the subsurface into an array of one-dimensional earth models. Accordingly, the ERT data, collected using a two-dimensional acquisition geometry, are recast as a set of equivalent vertical electric soundings. Different data are inverted either individually or jointly to estimate one-dimensional subsurface models at discrete locations. We use Shannon's information measure to quantify the information obtained from the inversion of different combinations of geophysical datasets. Information from multiple methods is brought together by introducing a joint likelihood function and/or constraining the prior information. A Bayesian maximum entropy approach is used for spatial fusion of the spatially dispersed estimated one-dimensional models and mapping of the target parameter. We illustrate the approach with a synthetic dataset and then apply it to a field dataset. We show that the proposed fusion strategy is

  15. A theoretical approach to low multiplicity diffractive dissociation

    International Nuclear Information System (INIS)

    Bishari, M.

    1977-01-01

    The dynamics of low mass inelastic diffractive production in the framework of the ''1/N dual unitarization'' scheme are investigated. The smallness of inelastic diffractive dissociation is explicitly demonstrated by incorporating a Deck type mechanism with the crucial planar bootstrap equation. Although both inelastic and elastic pomeron couplings are of the same order in 1/N, the origin of their smallness is not identical. The work further confirms the validity of the iterative procedure, where the elastic amplitude is first generated from only non-diffractive intermediate states (except possibly for central collisions). Using a previous study of the ''Cylinder'' strength, semi-quantitative results for the integrated cross-section for low multiplicity diffractive production are also presented and compared with the elastic cross-section at very high energies. (author)

  16. A multiple-perspective approach to graphic notation

    DEFF Research Database (Denmark)

    Cohen, Susanna; Gilboa, Avi; Bergstrøm-Nielsen, Carl

    2011-01-01

    The need to describe and analyze the contents of music therapy sessions and the need to find methods of raising therapist awareness are areas of great interest for the music therapist. In this study an expansion of Bergstrøm-Nielsen’s (1993) method of graphic notation for representing music therapy improvisations is presented: multiple-perspective graphic notation (MGN). A description of the method is provided, followed by a demonstration using a clinical improvisation taken from an individual music therapy session. The use of the MGN as a tool for raising therapist awareness...

  17. Multiple Approaches to Characterizing Pore Structure in Natural Rock

    Science.gov (United States)

    Hu, Q.; Dultz, S.; Hamamoto, S.; Ewing, R. P.

    2012-12-01

    Microscopic characteristics of porous media - pore shape, pore-size distribution, and pore connectivity - control fluid flow and chemical transport, and are important in hydrogeological studies of rock formations in the context of energy, environmental, and water resources management. This presentation discusses various approaches to investigating pore structure of rock, with a particular focus on the Barnett Shale in north Texas used for natural gas production. Approaches include imbibition, tracer diffusion, porosimetry (MIP, vapor adsorption/desorption isotherms, NMR cyroporometry), and imaging (μ-tomography, Wood's metal impregnation, FIB/SEM). Results show that the Barnett Shale pores are predominantly in the nm size range, with a measured median pore-throat diameter of 6.5 nm. But small pore size is not the major contributor to low gas recovery; rather, the low gas diffusivity appears to be caused by low pore connectivity. Chemical diffusion in sparsely-connected pore spaces is not well described by classical Fickian behavior; anomalous behavior is suggested by percolation theory, and confirmed by results of imbibition tests. Our evolving complementary approaches, with their several advantages and disadvantages, provide a rich toolbox for tackling the pore structure characteristics in the Barnett Shale and other natural rocks.

  18. Computing multiple periodic solutions of nonlinear vibration problems using the harmonic balance method and Groebner bases

    Science.gov (United States)

    Grolet, Aurelien; Thouverez, Fabrice

    2015-02-01

    This paper is devoted to the study of vibration of mechanical systems with geometric nonlinearities. The harmonic balance method is used to derive systems of polynomial equations whose solutions give the frequency components of the possible steady states. Groebner basis methods are used for computing all solutions of the polynomial systems. This approach makes it possible to reduce the complete system to a unique polynomial equation in one variable that carries all solutions of the problem. In addition, in order to decrease the number of variables, we propose to first work on the undamped system and recover solutions of the damped system using a continuation on the damping parameter. The search for multiple solutions is illustrated on a simple system, where the influence of the number of retained harmonics is studied. Finally, the procedure is applied to a simple cyclic system and we give a representation of the multiple states versus frequency.
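
    A hand-sized instance of reducing the balance equations to a single polynomial: a one-harmonic balance of the undamped Duffing equation x'' + w0^2*x + b*x^3 = F*cos(w*t) with ansatz x = A*cos(w*t) gives 0.75*b*A^3 + (w0^2 - w^2)*A - F = 0. The parameter values below are illustrative, and the roots are found by bisection rather than a Groebner basis.

```python
def balance_residual(A, w0=1.0, w=1.5, b=1.0, F=0.1):
    """One-harmonic balance residual: 0.75*b*A^3 + (w0^2 - w^2)*A - F."""
    return 0.75 * b * A ** 3 + (w0 ** 2 - w ** 2) * A - F

def real_roots(f, lo=-5.0, hi=5.0, n=1000, tol=1e-10):
    """All simple real roots of f on [lo, hi] by grid scan plus bisection."""
    xs = [lo + (hi - lo) * k / n for k in range(n + 1)]
    roots = []
    for a, b in zip(xs, xs[1:]):
        if f(a) * f(b) < 0:              # sign change brackets a root
            while b - a > tol:
                m = (a + b) / 2
                a, b = (m, b) if f(a) * f(m) > 0 else (a, m)
            roots.append((a + b) / 2)
    return roots

# Above resonance (w > w0) the cubic has three real roots:
# three coexisting periodic solutions, the multiplicity the paper targets.
print(len(real_roots(balance_residual)))  # 3
```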

  19. Drug induced mortality: a multiple cause approach on Italian causes of death Register

    Directory of Open Access Journals (Sweden)

    Francesco Grippo

    2015-04-01

    Full Text Available Background: Drug-related mortality is a complex phenomenon with several health, social and economic effects. In this paper, trends in drug-induced mortality in Italy are analysed. Two approaches have been followed: the traditional analysis of the underlying cause of death (UC) (data refer to the Istat mortality database from 1980 to 2011), and the multiple cause (MC) analysis, that is, the analysis of all conditions reported on the death certificate (data for the 2003-2011 period). Methods: Data presented in this paper are based on the Italian mortality register. The selection of ICD codes used for the analysis follows the definition of the European Monitoring Centre for Drugs and Drug Addiction. Using different indicators (crude and standardized rates, the multiple-to-underlying ratio), the results obtained from the two approaches (UC and MC) have been compared. Moreover, as a measure of association between drug-related causes and specific conditions on the death certificate, an estimate of the age-standardized relative risk (RR) has been used. Results: In the years 2009-2011, the total number of certificates with mention of drug use was 1,293, 60% higher than the UC-based count. The groups of conditions most strongly associated with drug-related causes are mental and behavioral disorders (especially alcohol consumption), viral hepatitis, cirrhosis and fibrosis of the liver, AIDS and endocarditis. Conclusions: The analysis based on the multiple cause approach shows, for the first time, a more detailed picture of drug-related death; it allows mortality profiles to be better described and the contribution of a specific cause to death to be re-evaluated.
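
    The UC-versus-MC bookkeeping can be sketched directly: count certificates where a drug-related code is the underlying cause, versus certificates where one appears anywhere, and take the ratio. The ICD-style codes and the certificate records below are invented for illustration.

```python
# Hypothetical drug-related codes, for illustration only.
DRUG_CODES = {"F11", "F19", "X42"}

def uc_mc_counts(certificates):
    """Count drug-related deaths by underlying cause (UC) and by any mention (MC)."""
    uc = sum(1 for c in certificates if c["underlying"] in DRUG_CODES)
    mc = sum(1 for c in certificates
             if c["underlying"] in DRUG_CODES
             or any(code in DRUG_CODES for code in c["mentions"]))
    return uc, mc

certs = [
    {"underlying": "X42", "mentions": ["B18"]},   # drug poisoning as UC
    {"underlying": "K74", "mentions": ["F11"]},   # drug use mentioned only
    {"underlying": "I21", "mentions": ["I10"]},   # unrelated death
]
uc, mc = uc_mc_counts(certs)
print(uc, mc, mc / uc)  # 1 2 2.0
```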

  20. Hybrid MCDA Methods to Integrate Multiple Ecosystem Services in Forest Management Planning: A Critical Review.

    Science.gov (United States)

    Uhde, Britta; Hahn, W Andreas; Griess, Verena C; Knoke, Thomas

    2015-08-01

    Multi-criteria decision analysis (MCDA) is a decision aid frequently used in the field of forest management planning. It includes the evaluation of multiple criteria such as the production of timber and non-timber forest products and tangible as well as intangible values of ecosystem services (ES). Hence, it is beneficial compared to those methods that take a purely financial perspective. Accordingly, MCDA methods are increasingly popular in the wide field of sustainability assessment. Hybrid approaches allow aggregating MCDA and, potentially, other decision-making techniques to make use of their individual benefits, leading to a more holistic view of the actual consequences that come with certain decisions. This review provides a comprehensive overview of hybrid approaches that are used in forest management planning. Today, the scientific world is facing increasing challenges regarding the evaluation of ES and the trade-offs between them, for example between provisioning and regulating services. As the preferences of multiple stakeholders are essential to improve the decision process in multi-purpose forestry, participatory and hybrid approaches turn out to be of particular importance. Accordingly, hybrid methods show great potential for becoming most relevant in future decision making. Based on the review presented here, the development of models for use in planning processes should focus on participatory modeling and the consideration of uncertainty regarding available information.

  2. Using Combinatorial Approach to Improve Students' Learning of the Distributive Law and Multiplicative Identities

    Science.gov (United States)

    Tsai, Yu-Ling; Chang, Ching-Kuch

    2009-01-01

    This article reports an alternative approach, called the combinatorial model, to learning multiplicative identities, and investigates the effects of implementing results for this alternative approach. Based on realistic mathematics education theory, the new instructional materials or modules of the new approach were developed by the authors. From…

  3. Multiple Targeting Approaches on Histamine H3 Receptor Antagonists

    Directory of Open Access Journals (Sweden)

    Mohammad eKhanfar

    2016-05-01

    Full Text Available With the very recent market approval of pitolisant (Wakix®), interest in clinical applications of novel multifunctional histamine H3 receptor antagonists has clearly increased. Since histamine H3 receptor antagonists in clinical development have been tested for a variety of different indications, the combination of pharmacological properties in one molecule for improved pharmacological effects and reduced unwanted side-effects is rationally based on the increasing knowledge of complex neurotransmitter regulation. Polypharmacological approaches that combine histamine H3 receptor antagonism with action on different G-protein coupled receptors, transporters and enzymes, as well as on the NO-signaling mechanism, are described, supported with some lead structures.

  4. Effectiveness of Cognitive Existential Approach on Decreasing Demoralization in Women with Multiple Sclerosis

    Directory of Open Access Journals (Sweden)

    Nasim Pakniya

    2015-12-01

    Full Text Available Objectives: Multiple sclerosis is among the most prevalent central nervous system diseases; because it is chronic, recurs frequently, progresses unpredictably, and causes disability, it can lead to various forms of distress as well as demoralization. A rehabilitation method based on cognitive-existential therapy is an integrated approach that can help decrease demoralization syndrome in these patients. This study aimed to explore the effectiveness of a rehabilitation method based on the cognitive-existential approach in decreasing demoralization syndrome in patients with MS. Methods: A single-subject design was used in this study. Among women who had been referred to the Tehran MS Association, 3 women (aged 20-40) were selected through purposeful sampling and each separately participated in 10 sessions (90 minutes each). Participants were assessed across 7 phases of the intervention (2 baseline, 3 during the intervention, 2 follow-up) using the Demoralization Syndrome Scale (2004) and the Cognitive Distortion Scale (2010). Data were analyzed by calculating a process variation index and by visual analysis. Results: Comparing the patients' scores across the 7 measurements and calculating recovery percentages shows a decrease in demoralization syndrome scale scores. Discussion: The findings showed that a rehabilitation method based on the cognitive-existential approach can decrease demoralization syndrome in patients with MS.

  5. Optimal Route Searching with Multiple Dynamical Constraints—A Geometric Algebra Approach

    Directory of Open Access Journals (Sweden)

    Dongshuang Li

    2018-05-01

    Full Text Available The process of searching for a dynamic constrained optimal path has received increasing attention in traffic planning, evacuation, and personalized or collaborative traffic service. As most existing multiple constrained optimal path (MCOP) methods cannot search for a path given various types of constraints that dynamically change during the search, few approaches for the dynamic multiple constrained optimal path (DMCOP) problem with type II dynamics are available for practical use. In this study, we develop a method to solve the DMCOP problem with type II dynamics based on the unification of various types of constraints under a geometric algebra (GA) framework. In our method, the network topology and three different types of constraints are represented by using algebraic base coding. With a parameterized optimization of the MCOP algorithm based on a greedy search strategy under the generation-refinement paradigm, this algorithm is found to accurately support the discovery of optimal paths as constraints on numerical values, nodes, and route structure types are dynamically added to the network. The algorithm was tested with simulated cases of optimal tourism route searches in China’s road networks with various combinations of constraints. The case study indicates that our algorithm can not only solve the DMCOP problem with different types of constraints but also use the constraints to speed up route filtering.
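
    The GA-coded DMCOP algorithm itself is not reproduced here; for contrast, a minimal classical MCOP instance shows the basic ingredient of constrained route search: a shortest-path search that also accumulates a second metric (say, toll) and prunes any partial route exceeding a numeric budget. The graph and the budget values are invented.

```python
import heapq

def constrained_shortest_path(graph, src, dst, budget):
    """Cheapest src->dst path whose accumulated second metric stays <= budget.

    graph maps node -> list of (next_node, cost, metric) edges.
    States are labeled by (node, metric) so the search stays exact.
    """
    pq, seen = [(0, 0, src)], set()
    while pq:
        cost, used, node = heapq.heappop(pq)
        if node == dst:
            return cost, used
        if (node, used) in seen:
            continue
        seen.add((node, used))
        for nxt, c, m in graph.get(node, []):
            if used + m <= budget:
                heapq.heappush(pq, (cost + c, used + m, nxt))
    return None

graph = {
    "A": [("B", 1, 3), ("C", 4, 0)],
    "B": [("D", 1, 3)],
    "C": [("D", 4, 0)],
}
print(constrained_shortest_path(graph, "A", "D", budget=6))  # (2, 6)
print(constrained_shortest_path(graph, "A", "D", budget=5))  # (8, 0)
```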

  6. Continuum multiple-scattering approach to electron-molecule scattering and molecular photoionization

    International Nuclear Information System (INIS)

    Dehmer, J.L.; Dill, D.

    1979-01-01

    The multiple-scattering approach to the electronic continuum of molecules is described. The continuum multiple-scattering model (CMSM) was developed as a survey tool and, as such, was required to satisfy two requirements. First, it had to have a very broad scope, which means (i) molecules of arbitrary geometry and complexity containing any atom in the periodic system, (ii) continuum electron energies from 0-1000 eV, and (iii) the capability to treat a large range of processes involving both photoionization and electron scattering. Second, the structure of the theory was required to lend itself to transparent, physical interpretation of major spectral features such as shape resonances. A comprehensive theoretical framework for the continuum multiple scattering method is presented, as well as its applications to electron-molecule scattering and molecular photoionization. Highlights of recent applications in these two areas are reviewed. The major impact of the resulting studies over the last few years has been to establish the importance of shape resonances in electron collisions and photoionization of practically all (non-hydride) molecules

  7. PATTERN CLASSIFICATION APPROACHES TO MATCHING BUILDING POLYGONS AT MULTIPLE SCALES

    Directory of Open Access Journals (Sweden)

    X. Zhang

    2012-07-01

    Full Text Available Matching of building polygons with different levels of detail is crucial in the maintenance and quality assessment of multi-representation databases. Two general problems need to be addressed in the matching process: (1) Which criteria are suitable? (2) How to effectively combine different criteria to make decisions? This paper mainly focuses on the second issue and views data matching as a supervised pattern classification. Several classifiers (i.e. decision trees, Naive Bayes and support vector machines) are evaluated for the matching task. Four criteria (i.e. position, size, shape and orientation) are used to extract information for these classifiers. Evidence shows that these classifiers outperformed the weighted average approach.
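
    Two of the four criteria can be computed for a candidate polygon pair in a few lines: position (centroid distance) and size (area ratio), both via the shoelace formula. The coordinates are invented, shape and orientation are omitted for brevity, and the trained classifiers themselves are not reproduced.

```python
import math

def area_centroid(poly):
    """Signed area and centroid of a polygon via the shoelace formula."""
    a = cx = cy = 0.0
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        cross = x1 * y2 - x2 * y1
        a += cross
        cx += (x1 + x2) * cross
        cy += (y1 + y2) * cross
    a *= 0.5
    return a, (cx / (6 * a), cy / (6 * a))

def match_features(p, q):
    """Position criterion (centroid distance) and size criterion (area ratio)."""
    (a1, c1), (a2, c2) = area_centroid(p), area_centroid(q)
    position = math.dist(c1, c2)
    size = min(abs(a1), abs(a2)) / max(abs(a1), abs(a2))
    return position, size

unit = [(0, 0), (1, 0), (1, 1), (0, 1)]
shifted = [(0.1, 0), (1.1, 0), (1.1, 1), (0.1, 1)]
pos, size = match_features(unit, shifted)
print(pos, size)  # position ~0.1, size ratio ~1.0
```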

  8. Automatic classification of hyperactive children: comparing multiple artificial intelligence approaches.

    Science.gov (United States)

    Delavarian, Mona; Towhidkhah, Farzad; Gharibzadeh, Shahriar; Dibajnia, Parvin

    2011-07-12

    Automatic classification of different behavioral disorders with many similarities (e.g. in symptoms) by an automated approach will help psychiatrists to concentrate on the correct disorder and its treatment as soon as possible, to avoid wasting time on diagnosis, and to increase the accuracy of diagnosis. In this study, we tried to differentiate and classify (diagnose) 306 children with many similar symptoms and different behavioral disorders such as ADHD, depression, anxiety, comorbid depression and anxiety, and conduct disorder with high accuracy. Classification was based on the symptoms and their severity. After examining 16 different available classifiers using "Prtools", we propose the nearest mean classifier as the most accurate in this research, with 96.92% accuracy. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
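
    The winning nearest mean classifier is simple enough to sketch in full: average the training vectors per class and assign a new symptom-severity vector to the closest centroid. The two-feature toy data below are invented, not patient data.

```python
def nearest_mean_fit(samples):
    """Compute per-class mean vectors from (label, vector) pairs."""
    sums, counts = {}, {}
    for label, vec in samples:
        counts[label] = counts.get(label, 0) + 1
        prev = sums.get(label, [0.0] * len(vec))
        sums[label] = [a + b for a, b in zip(prev, vec)]
    return {lb: [v / counts[lb] for v in sums[lb]] for lb in sums}

def nearest_mean_predict(means, vec):
    """Assign vec to the class whose mean has the smallest squared distance."""
    return min(means, key=lambda lb: sum((a - b) ** 2 for a, b in zip(means[lb], vec)))

# Invented two-feature severity vectors, e.g. (inattention, worry).
train = [("adhd", [8, 2]), ("adhd", [7, 3]),
         ("anxiety", [2, 8]), ("anxiety", [3, 7])]
means = nearest_mean_fit(train)
print(nearest_mean_predict(means, [7, 2]))  # adhd
```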

  9. Information-flux approach to multiple-spin dynamics

    International Nuclear Information System (INIS)

    Di Franco, C.; Paternostro, M.; Kim, M. S.; Palma, G. M.

    2007-01-01

    We introduce and formalize the concept of information flux in a many-body register as the influence that the dynamics of a specific element receives from any other element of the register. By quantifying the information flux in a protocol, we can design the most appropriate initial state of the system and, notably, the distribution of coupling strengths among the parts of the register itself. The intuitive nature of this tool and its flexibility, which allows for easily manageable numerical approaches when analytic expressions are not straightforward, make it greatly useful in interacting many-body systems such as quantum spin chains. We illustrate the use of this concept in quantum cloning and quantum state transfer, and we also sketch its extension to nonunitary dynamics.

  10. Maternal Smoking During Pregnancy and Offspring Birth Weight: A Genetically-Informed Approach Comparing Multiple Raters

    Science.gov (United States)

    Knopik, Valerie S.; Marceau, Kristine; Palmer, Rohan H. C.; Smith, Taylor F.; Heath, Andrew C.

    2016-01-01

    Maternal smoking during pregnancy (SDP) is a significant public health concern with adverse consequences to the health and well-being of the fetus. There is considerable debate about the best method of assessing SDP, including birth/medical records, timeline follow-back approaches, multiple reporters, and biological verification (e.g., cotinine). This is particularly salient for genetically-informed approaches where it is not always possible or practical to do a prospective study starting during the prenatal period when concurrent biological specimen samples can be collected with ease. In a sample of families (N = 173) specifically selected for sibling pairs discordant for prenatal smoking exposure, we: (1) compare rates of agreement across different types of report—maternal report of SDP, paternal report of maternal SDP, and SDP contained on birth records from the Department of Vital Statistics; (2) examine whether SDP is predictive of birth weight outcomes using our best SDP report as identified via step (1); and (3) use a sibling-comparison approach that controls for genetic and familial influences that siblings share in order to assess the effects of SDP on birth weight. Results show high agreement between reporters and support the utility of retrospective report of SDP. Further, we replicate a causal association between SDP and birth weight, wherein SDP results in reduced birth weight even when accounting for genetic and familial confounding factors via a sibling comparison approach. PMID:26494459

  11. A New Conflict Resolution Method for Multiple Mobile Robots in Cluttered Environments With Motion-Liveness.

    Science.gov (United States)

    Shahriari, Mohammadali; Biglarbegian, Mohammad

    2018-01-01

    This paper presents a new conflict resolution methodology for multiple mobile robots that ensures their motion-liveness, especially in cluttered and dynamic environments. Our method constructs a mathematical formulation in the form of an optimization problem, minimizing the overall travel times of the robots subject to resolving all the conflicts in their motion. This optimization problem can be easily solved by coordinating only the robots' speeds. To overcome the computational cost of executing the algorithm in very cluttered environments, we develop an innovative method that clusters the environment into independent subproblems that can be solved using parallel programming techniques. We demonstrate the scalability of our approach through extensive simulations. Simulation results showed that our proposed method is capable of resolving the conflicts of 100 robots in less than 1.23 s in a cluttered environment with 4357 intersections in the paths of the robots. We also developed an experimental testbed and demonstrated that our approach can be implemented in real time. We finally compared our approach with other existing methods in the literature, both quantitatively and qualitatively. This comparison shows that, while our approach is mathematically sound, it is also more computationally efficient, scalable to a very large number of robots, and guarantees the live and smooth motion of robots.

  12. An object-oriented approach to evaluating multiple spectral models

    International Nuclear Information System (INIS)

    Majoras, R.E.; Richardson, W.M.; Seymour, R.S.

    1995-01-01

    A versatile spectroscopy analysis engine has been developed using object-oriented design and analysis techniques coupled with an object-oriented language, C++. This engine provides the spectroscopist with a choice of several different peak shape models tailored to the type of spectroscopy being performed. It also eases future development in adapting the engine to other analytical methods requiring more complex peak fitting. The result is a program that can currently be used across a wide range of spectroscopy applications and anticipates the inclusion of future advances in the field. (author)

  13. The double travelling salesman problem with multiple stacks - Formulation and heuristic solution approaches

    DEFF Research Database (Denmark)

    Petersen, Hanne Løhmann; Madsen, Oli B.G.

    2009-01-01

    This paper introduces the double travelling salesman problem with multiple stacks and presents four different metaheuristic approaches to its solution. The double TSP with multiple stacks is concerned with determining the shortest route performing pickups and deliveries in two separated networks...

  14. An agent-based negotiation approach for balancing multiple coupled control domains

    DEFF Research Database (Denmark)

    Umair, Aisha; Clausen, Anders; Jørgensen, Bo Nørregaard

    2015-01-01

    Solving multi-objective multi-issue negotiation problems involving interdependent issues distributed among multiple control domains is inherent to most non-trivial cyber-physical systems. In these systems, the coordinated operation of interconnected subsystems performing autonomous control....... The proposed approach can solve negotiation problems with interdependent issues across multiple coupled control domains. We demonstrate our approach by solving a coordination problem where a Combined Heat and Power Plant must allocate electricity for three commercial greenhouses to ensure the required plant...

  15. Toward noncooperative iris recognition: a classification approach using multiple signatures.

    Science.gov (United States)

    Proença, Hugo; Alexandre, Luís A

    2007-04-01

    This paper focuses on noncooperative iris recognition, i.e., the capture of iris images at large distances, under less controlled lighting conditions, and without active participation of the subjects. This increases the probability of capturing very heterogeneous images (regarding focus, contrast, or brightness) and with several noise factors (iris obstructions and reflections). Current iris recognition systems are unable to deal with noisy data and substantially increase their error rates, especially the false rejections, in these conditions. We propose an iris classification method that divides the segmented and normalized iris image into six regions, makes an independent feature extraction and comparison for each region, and combines each of the dissimilarity values through a classification rule. Experiments show a substantial decrease, higher than 40 percent, of the false rejection rates in the recognition of noisy iris images.

  16. Water supply infrastructure planning under multiple uncertainties: A differentiated approach

    Science.gov (United States)

    Fletcher, S.; Strzepek, K.

    2017-12-01

    desalination. Intense withdrawals for urban and agricultural use will lead to lowering of the water table in the aquifer at rapid but uncertain rates due to poor groundwater characterization. We assess the potential for additional groundwater data collection and a flexible infrastructure approach similar to that in Melbourne to mitigate risk.

  17. An Efficient Implementation of Track-Oriented Multiple Hypothesis Tracker Using Graphical Model Approaches

    Directory of Open Access Journals (Sweden)

    Jinping Sun

    2017-01-01

    Full Text Available The multiple hypothesis tracker (MHT) is currently the preferred method for addressing the data association problem in multitarget tracking (MTT) applications. MHT seeks the most likely global hypothesis by enumerating all possible associations over time, which amounts to calculating the maximum a posteriori (MAP) estimate over the report data. Despite being a well-studied method, MHT remains challenging mostly because of the computational complexity of data association. In this paper, we describe an efficient method for solving the data association problem using graphical model approaches. The proposed method uses a graph representation to model global hypothesis formation and subsequently applies an efficient message passing algorithm to obtain the MAP solution. Specifically, the graph representation of the data association problem is formulated as a maximum weight independent set problem (MWISP), which translates the best global hypothesis formation into finding the maximum weight independent set on the graph. Then, a max-product belief propagation (MPBP) inference algorithm is applied to seek the most likely global hypotheses, avoiding a brute-force hypothesis enumeration procedure. The simulation results show that the proposed MPBP-MHT method can achieve better tracking performance than other algorithms in challenging tracking situations.
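To make the MWISP formulation concrete, here is a tiny brute-force maximum weight independent set solver over a conflict graph of track hypotheses. This is purely illustrative: the weights and conflict edges are invented, and the paper's point is precisely that max-product belief propagation avoids this kind of enumeration.

```python
from itertools import combinations

def max_weight_independent_set(weights, conflicts):
    """Brute-force MWIS: choose the subset of hypotheses with the largest
    total score such that no two are in conflict (share a measurement).
    Illustrative only; practical MHT solvers avoid this enumeration."""
    n = len(weights)
    best, best_w = (), 0.0
    for r in range(1, n + 1):
        for subset in combinations(range(n), r):
            if any((a, b) in conflicts or (b, a) in conflicts
                   for a, b in combinations(subset, 2)):
                continue  # subset reuses a measurement -> not independent
            w = sum(weights[i] for i in subset)
            if w > best_w:
                best, best_w = subset, w
    return best, best_w

# four hypothetical track hypotheses; edges mark incompatible pairs
weights = [2.0, 3.5, 1.2, 2.7]
conflicts = {(0, 1), (1, 2), (2, 3)}
best, total = max_weight_independent_set(weights, conflicts)
print(best, total)  # hypotheses 1 and 3 form the best global hypothesis
```

The enumeration cost grows exponentially with the number of hypotheses, which is exactly why the MPBP message-passing alternative matters.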

  18. The concurrent multiplicative-additive approach for gauge-radar/satellite multisensor precipitation estimates

    Science.gov (United States)

    Garcia-Pintado, J.; Barberá, G. G.; Erena Arrabal, M.; Castillo, V. M.

    2010-12-01

    Objective analysis schemes (OAS), also called "successive correction methods" or "observation nudging", have been proposed for multisensor precipitation estimation combining remote sensing data (meteorological radar or satellite) with data from ground-based rain gauge networks. However, unlike the more complex geostatistical approaches, the OAS techniques for this use are not optimized. On the other hand, geostatistical techniques ideally require, at the least, modelling the covariance from the rain gauge data at every time step evaluated, which commonly cannot be soundly done. Here, we propose a new procedure (concurrent multiplicative-additive objective analysis scheme [CMA-OAS]) for operational rainfall estimation using rain gauges and meteorological radar, which does not require explicit modelling of spatial covariances. On the basis of a concurrent multiplicative-additive (CMA) decomposition of the spatially nonuniform radar bias, within-storm variability of rainfall and fractional coverage of rainfall are taken into account. Thus both spatially nonuniform radar bias, given that rainfall is detected, and bias in radar detection of rainfall are handled. The interpolation procedure of CMA-OAS is built on the OAS, whose purpose is to estimate a filtered spatial field of the variable of interest through successive correction of residuals resulting from a Gaussian kernel smoother applied to spatial samples. The CMA-OAS first poses an optimization problem at each gauge-radar support point to obtain both a local multiplicative-additive radar bias decomposition and a regionalization parameter. Second, local biases and regionalization parameters are integrated into an OAS to estimate the multisensor rainfall at the ground level. The approach considers radar estimates as background a priori information (first guess), so that nudging to observations (gauges) may be relaxed smoothly to the first guess, and the relaxation shape is obtained from the sequential

  19. Combining morphometric evidence from multiple registration methods using dempster-shafer theory

    Science.gov (United States)

    Rajagopalan, Vidya; Wyatt, Christopher

    2010-03-01

    In tensor-based morphometry (TBM), group-wise differences in brain structure are measured using high degree-of-freedom registration and some form of statistical test. However, it is known that TBM results are sensitive to both the registration method and the statistical test used. Given the lack of an objective model of group variation, it is difficult to determine a best registration method for TBM. The use of statistical tests is also problematic given the corrections required for multiple testing and the notorious difficulty of selecting and interpreting significance values. This paper presents an approach that addresses both of these issues by combining multiple registration methods using Dempster-Shafer evidence theory to produce belief maps of categorical changes between groups. This approach is applied to the comparison of brain morphometry in aging, a typical application of TBM, using the determinant of the Jacobian as a measure of volume change. We show that the Dempster-Shafer combination produces a unique and easy-to-interpret belief map of regional changes between and within groups without the complications associated with hypothesis testing.
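The Dempster-Shafer combination step can be sketched compactly. Below is a generic implementation of Dempster's rule for two mass functions (the frame of discernment and the mass values are hypothetical, not taken from the paper), showing how evidence from two registration methods about "grow"/"shrink" hypotheses would be fused into a single belief assignment.

```python
def dempster_combine(m1, m2):
    """Dempster's rule: multiply masses of every pair of focal elements,
    keep the intersection, and renormalize by the non-conflicting mass."""
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb  # disjoint focal elements disagree
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# hypothetical evidence from two registration methods about one region
GROW, SHRINK = frozenset({"grow"}), frozenset({"shrink"})
THETA = GROW | SHRINK            # the full frame: ignorance
m1 = {GROW: 0.6, THETA: 0.4}     # method 1: fairly sure of growth
m2 = {GROW: 0.7, SHRINK: 0.1, THETA: 0.2}
belief = dempster_combine(m1, m2)
print(round(belief[GROW], 3))  # → 0.872
```

Mass left on the full frame THETA encodes each method's residual uncertainty, which is what lets the combination avoid the hard accept/reject decision of a significance test.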

  20. Multiple scattering approach to the vibrational excitation of molecules by slow electrons

    International Nuclear Information System (INIS)

    Drukarev, G.

    1976-01-01

    Another approach to the problem of vibrational excitation of homonuclear diatomic molecules by slow electrons, possibly accompanied by rotational transitions, is presented, based on the picture of multiple scattering of an electron inside the molecule. Scattering from two fixed centers in the zero-range potential model is considered. The results indicate that multiple scattering determines the order of magnitude of the vibrational excitation cross sections in the energy region under consideration, even if the zero-range potential model is used. The connection between the multiple scattering approach and the quasi-stationary molecular ion picture is also established.

  1. Breeding approaches in simultaneous selection for multiple stress tolerance of maize in tropical environments

    Directory of Open Access Journals (Sweden)

    Denić M.

    2007-01-01

    Full Text Available Maize is the principal crop and major staple food in most countries of Sub-Saharan Africa. However, due to the influence of abiotic and biotic stress factors, maize production faces serious constraints. Among the agro-ecological conditions, the main constraints are: lack and poor distribution of rainfall; low soil fertility; diseases (maize streak virus, downy mildew, leaf blights, rusts, gray leaf spot, stem/cob rots) and pests (borers and storage pests). Among the socio-economic production constraints are: a poor economy, a serious shortage of trained manpower, insufficient management expertise, lack of use of improved varieties, and poor cultivation practices. To develop desirable varieties, and thus alleviate some of these constraints, appropriate breeding approaches and field-based methodologies in selection for multiple stress tolerance were implemented. These approaches are mainly based on: (a) crossing selected genotypes with more desirable stress-tolerance and other agronomic traits; (b) using the disease/pest spreader row method, combined with testing and selection of created progenies under strong to intermediate pressure of drought and low soil fertility in nurseries; and (c) evaluating the developed varieties in multi-location trials under low and "normal" inputs. These approaches provide testing and selection of the large number of progenies required for simultaneous selection for multiple stress tolerance. The data obtained revealed that remarkable improvement of the traits under selection was achieved. The biggest progress was obtained in selection for maize streak virus and downy mildew resistance, flintiness and earliness. In the case of drought stress, statistical analyses revealed significant negative correlations between yield and anthesis-silking interval and between yield and days to silk, but a positive correlation between yield and grain weight per ear.

  2. An efficient method for generalized linear multiplicative programming problem with multiplicative constraints.

    Science.gov (United States)

    Zhao, Yingfeng; Liu, Sanyang

    2016-01-01

    We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation problem equivalent to a linear program is proposed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are obtained simultaneously by solving a series of linear relaxation programming problems. Global convergence has been proved, and the results of some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient.

  3. Strongly and weakly directed approaches to teaching multiple representation use in physics

    Directory of Open Access Journals (Sweden)

    Patrick B. Kohl

    2007-06-01

    Full Text Available Good use of multiple representations is considered key to learning physics, and so there is considerable motivation both to learn how students use multiple representations when solving problems and to learn how best to teach problem solving using multiple representations. In this study of two large-lecture algebra-based physics courses at the University of Colorado (CU) and Rutgers, the State University of New Jersey, we address both issues. Students in each of the two courses solved five common electrostatics problems of varying difficulty, and we examine their solutions to clarify the relationship between multiple representation use and performance on problems involving free-body diagrams. We also compare our data across the courses, since the two physics-education-research-based courses take substantially different approaches to teaching the use of multiple representations. The course at Rutgers takes a strongly directed approach, emphasizing specific heuristics and problem-solving strategies. The course at CU takes a weakly directed approach, modeling good problem solving without teaching a specific strategy. We find that, in both courses, students make extensive use of multiple representations, and that this use (when both complete and correct) is associated with significantly increased performance. Some minor differences in representation use exist, and are consistent with the types of instruction given. Most significant are the strong and broad similarities in the results, suggesting that either instructional approach or a combination thereof can be useful for helping students learn to use multiple representations for problem solving and concept development.

  4. Review of Monte Carlo methods for particle multiplicity evaluation

    CERN Document Server

    Armesto-Pérez, Nestor

    2005-01-01

    I present a brief review of the existing models for particle multiplicity evaluation in heavy ion collisions which are at our disposal in the form of Monte Carlo simulators. Models are classified according to the physical mechanisms with which they try to describe the different stages of a high-energy collision between heavy nuclei. A comparison of predictions, as available at the beginning of year 2000, for multiplicities in central AuAu collisions at the BNL Relativistic Heavy Ion Collider (RHIC) and PbPb collisions at the CERN Large Hadron Collider (LHC) is provided.

  5. Review of Monte Carlo methods for particle multiplicity evaluation

    International Nuclear Information System (INIS)

    Armesto, Nestor

    2005-01-01

    I present a brief review of the existing models for particle multiplicity evaluation in heavy ion collisions which are at our disposal in the form of Monte Carlo simulators. Models are classified according to the physical mechanisms with which they try to describe the different stages of a high-energy collision between heavy nuclei. A comparison of predictions, as available at the beginning of year 2000, for multiplicities in central AuAu collisions at the BNL Relativistic Heavy Ion Collider (RHIC) and PbPb collisions at the CERN Large Hadron Collider (LHC) is provided

  6. A comparison of confirmatory factor analysis methods : Oblique multiple group method versus confirmatory common factor method

    NARCIS (Netherlands)

    Stuive, Ilse

    2007-01-01

    Confirmatory Factor Analysis (CFA) is a frequently used method when researchers have a specific hypothesis about the assignment of items to one or more subtests and want to investigate whether this assignment is also supported by the collected research data. The most commonly used

  7. An Exact Method for the Double TSP with Multiple Stacks

    DEFF Research Database (Denmark)

    Lusby, Richard Martin; Larsen, Jesper; Ehrgott, Matthias

    2010-01-01

    The double travelling salesman problem with multiple stacks (DTSPMS) is a pickup and delivery problem in which all pickups must be completed before any deliveries can be made. The problem originates from a real-life application where a 40 foot container (configured as 3 columns of 11 rows) is used...

  8. An Exact Method for the Double TSP with Multiple Stacks

    DEFF Research Database (Denmark)

    Larsen, Jesper; Lusby, Richard Martin; Ehrgott, Matthias

    The double travelling salesman problem with multiple stacks (DTSPMS) is a pickup and delivery problem in which all pickups must be completed before any deliveries can be made. The problem originates from a real-life application where a 40 foot container (configured as 3 columns of 11 rows) is used...

  9. On Thermally Interacting Multiple Boreholes with Variable Heating Strength: Comparison between Analytical and Numerical Approaches

    Directory of Open Access Journals (Sweden)

    Marc A. Rosen

    2012-08-01

    Full Text Available The temperature response in the soil surrounding multiple boreholes is evaluated analytically and numerically. The assumption of constant heat flux along the borehole wall is examined by coupling the problem to the heat transfer problem inside the borehole and presenting a model with variable heat flux along the borehole length. In the analytical approach, a line source of heat with a finite length is used to model the conduction of heat in the soil surrounding the boreholes. In the numerical method, a finite volume method in a three-dimensional meshed domain is used. In order to determine the heat flux boundary condition, the analytical quasi-three-dimensional solution to the heat transfer problem of the U-tube configuration inside the borehole is used. This solution takes into account the variation in heating strength along the borehole length due to the temperature variation of the fluid running in the U-tube. Thus, critical depths at which thermal interaction occurs can be determined. Finally, in order to examine the validity of the numerical method, a comparison is made with the results of the line source method.

  10. Development of an asymmetric multiple-position neutron source (AMPNS) method to monitor the criticality of a degraded reactor core

    International Nuclear Information System (INIS)

    Kim, S.S.; Levine, S.H.

    1985-01-01

    An analytical/experimental method has been developed to monitor the subcritical reactivity and unfold the k∞ distribution of a degraded reactor core. The method uses several fixed neutron detectors and a Cf-252 neutron source placed sequentially in multiple positions in the core. Therefore, it is called the Asymmetric Multiple Position Neutron Source (AMPNS) method. The AMPNS method employs nucleonic codes to analyze the neutron multiplication of a Cf-252 neutron source. An optimization program, GPM, is utilized to unfold the k∞ distribution of the degraded core, in which the desired performance measure minimizes the error between the calculated and the measured count rates of the degraded reactor core. The analytical/experimental approach is validated by performing experiments using the Penn State Breazeale TRIGA Reactor (PSBR). A significant result of this study is that it provides a method to monitor the criticality of a damaged core during the recovery period.

  11. The optimal approach of detecting stochastic gravitational wave from string cosmology using multiple detectors

    International Nuclear Information System (INIS)

    Fan Xilong; Zhu Zonghong

    2008-01-01

    String cosmology models predict a relic background of gravitational waves produced during the dilaton-driven inflation. Its spectrum is most likely to be detected by ground-based gravitational wave laser interferometers (IFOs), like LIGO, Virgo and GEO, as the energy density grows rapidly with frequency. We show the ranges of the parameters underlying the string cosmology model using two approaches, associated with a 5% false alarm rate and a 95% detection rate. The results show that the approach of combining multiple pairs of IFOs is better than the approach of directly combining the outputs of multiple IFOs for LIGOH, LIGOL, Virgo and GEO.

  12. A new fast method for inferring multiple consensus trees using k-medoids.

    Science.gov (United States)

    Tahiri, Nadia; Willems, Matthieu; Makarenkov, Vladimir

    2018-04-05

    Gene trees carry important information about specific evolutionary patterns which characterize the evolution of the corresponding gene families. However, a reliable species consensus tree cannot be inferred from a multiple sequence alignment of a single gene family or from the concatenation of alignments corresponding to gene families having different evolutionary histories. These evolutionary histories can be quite different due to horizontal transfer events or to ancient gene duplications which cause the emergence of paralogs within a genome. Many methods have been proposed to infer a single consensus tree from a collection of gene trees. Still, the application of these tree merging methods can lead to the loss of specific evolutionary patterns which characterize some gene families or some groups of gene families. Thus, the problem of inferring multiple consensus trees from a given set of gene trees becomes relevant. We describe a new fast method for inferring multiple consensus trees from a given set of phylogenetic trees (i.e. additive trees or X-trees) defined on the same set of species (i.e. objects or taxa). The traditional consensus approach yields a single consensus tree. We use the popular k-medoids partitioning algorithm to divide a given set of trees into several clusters of trees. We propose novel versions of the well-known Silhouette and Caliński-Harabasz cluster validity indices that are adapted for tree clustering with k-medoids. The efficiency of the new method was assessed using both synthetic and real data, such as a well-known phylogenetic dataset consisting of 47 gene trees inferred for 14 archaeal organisms. The method described here allows inference of multiple consensus trees from a given set of gene trees. It can be used to identify groups of gene trees having similar intragroup and different intergroup evolutionary histories. The main advantage of our method is that it is much faster than the existing tree clustering approaches, while
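The clustering step can be illustrated with a toy k-medoids run on a precomputed matrix of pairwise tree distances (e.g. Robinson-Foulds). The sketch below solves k-medoids exactly by enumerating medoid sets, which is only feasible for toy sizes; the paper's method uses the iterative k-medoids heuristic together with adapted Silhouette and Caliński-Harabasz indices, which are omitted here.

```python
from itertools import combinations

def k_medoids_exact(dist, k):
    """Exact k-medoids by enumerating candidate medoid sets.
    Feasible only for toy sizes; tree clustering at scale uses the
    iterative (PAM-style) k-medoids heuristic instead."""
    n = len(dist)

    def cost(meds):
        # total distance of every tree to its nearest medoid
        return sum(min(dist[i][m] for m in meds) for i in range(n))

    meds = min(combinations(range(n), k), key=cost)
    clusters = {m: [] for m in meds}
    for i in range(n):
        clusters[min(meds, key=lambda m: dist[i][m])].append(i)
    return clusters

# Toy symmetric matrix of pairwise tree distances (e.g. Robinson-Foulds):
# trees 0 and 1 are near-identical, as are trees 2 and 3.
D = [[0, 1, 6, 7],
     [1, 0, 7, 6],
     [6, 7, 0, 1],
     [7, 6, 1, 0]]
clusters = k_medoids_exact(D, k=2)
print(sorted(clusters.values()))  # → [[0, 1], [2, 3]]
```

A consensus tree would then be built separately within each cluster, preserving the two distinct evolutionary histories instead of averaging them away.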

  13. Cloud computing methods and practical approaches

    CERN Document Server

    Mahmood, Zaigham

    2013-01-01

    This book presents both state-of-the-art research developments and practical guidance on approaches, technologies and frameworks for the emerging cloud paradigm. Topics and features: presents the state of the art in cloud technologies, infrastructures, and service delivery and deployment models; discusses relevant theoretical frameworks, practical approaches and suggested methodologies; offers guidance and best practices for the development of cloud-based services and infrastructures, and examines management aspects of cloud computing; reviews consumer perspectives on mobile cloud computing an

  14. A Hybrid One-Way ANOVA Approach for the Robust and Efficient Estimation of Differential Gene Expression with Multiple Patterns.

    Directory of Open Access Journals (Sweden)

    Mohammad Manir Hossain Mollah

    Full Text Available Identifying genes that are differentially expressed (DE) between two or more conditions with multiple patterns of expression is one of the primary objectives of gene expression data analysis. Several statistical approaches, including one-way analysis of variance (ANOVA), are used to identify DE genes. However, most of these methods provide misleading results for two or more conditions with multiple patterns of expression in the presence of outlying genes. In this paper, an attempt is made to develop a hybrid one-way ANOVA approach that unifies the robustness and efficiency of estimation using the minimum β-divergence method, to overcome some problems that arise in the existing robust methods for both small- and large-sample cases with multiple patterns of expression. The proposed method relies on a β-weight function, which produces values between 0 and 1. The β-weight function with β = 0.2 is used as a measure of outlier detection. It assigns smaller weights (≥ 0) to outlying expressions and larger weights (≤ 1) to typical expressions. The distribution of the β-weights is used to calculate the cut-off point, which is compared to the observed β-weight of an expression to determine whether that gene expression is an outlier. This weight function plays a key role in unifying the robustness and efficiency of estimation in one-way ANOVA. Analyses of simulated gene expression profiles revealed that all eight methods (ANOVA, SAM, LIMMA, EBarrays, eLNN, KW, robust BetaEB and the proposed method) perform almost identically for m = 2 conditions in the absence of outliers. However, the robust BetaEB method and the proposed method exhibited considerably better performance than the other six methods in the presence of outliers. In this case, the BetaEB method exhibited slightly better performance than the proposed method for the small-sample cases, but the proposed method exhibited much better performance than the BetaEB method for both the small- and large
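An exponential-type β-weight of the kind described, near 1 for typical expressions and decaying toward 0 for outliers, can be sketched as follows. The exact functional form and cut-off rule used in the paper may differ; the center, scale, and test values below are purely illustrative.

```python
import math

def beta_weight(x, mu, sigma, beta=0.2):
    """A β-weight of exponential form, as used in minimum β-divergence
    estimation: close to 1 for typical values, decaying toward 0 for
    outliers. A sketch; the paper's exact form and cut-off may differ."""
    return math.exp(-0.5 * beta * ((x - mu) / sigma) ** 2)

# weights for a typical value, a mild deviation, and a clear outlier
weights = [round(beta_weight(v, mu=5.0, sigma=1.0), 3) for v in (5.0, 6.0, 12.0)]
print(weights)  # → [1.0, 0.905, 0.007]
```

Because the outlier's weight is nearly zero, it contributes almost nothing to the weighted mean and variance estimates, which is what makes the ANOVA step robust without discarding data outright.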

  15. Qualitative Approaches to Mixed Methods Practice

    Science.gov (United States)

    Hesse-Biber, Sharlene

    2010-01-01

    This article discusses how methodological practices can shape and limit how mixed methods is practiced and makes visible the current methodological assumptions embedded in mixed methods practice that can shut down a range of social inquiry. The article argues that there is a "methodological orthodoxy" in how mixed methods is practiced…

  16. A minimally invasive multiple marker approach allows highly efficient detection of meningioma tumors

    Directory of Open Access Journals (Sweden)

    Meese Eckart

    2006-12-01

    Full Text Available Abstract Background The development of effective frameworks that permit an accurate diagnosis of tumors, especially in their early stages, remains a grand challenge in the field of bioinformatics. Our approach uses statistical learning techniques applied to multiple tumor antigen markers, utilizing the immune system as a very sensitive marker of molecular pathological processes. For validation purposes we chose intracranial meningioma tumors as a model system, since they occur very frequently, are mostly benign, and are genetically stable. Results A total of 183 blood samples from 93 meningioma patients (WHO stages I-III) and 90 healthy controls were screened for seroreactivity with a set of 57 meningioma-associated antigens. We tested several established statistical learning methods on the resulting reactivity patterns using 10-fold cross validation. The best performance was achieved by Naïve Bayes classifiers. With this classification method, our framework, called the Minimally Invasive Multiple Marker (MIMM) approach, yielded a specificity of 96.2%, a sensitivity of 84.5%, and an accuracy of 90.3%; the respective area under the ROC curve was 0.957. Detailed analysis revealed that prediction performs particularly well on low-grade (WHO I) tumors, consistent with our goal of early stage tumor detection. For these tumors the best classification result, with a specificity of 97.5%, a sensitivity of 91.3%, an accuracy of 95.6%, and an area under the ROC curve of 0.971, was achieved using a set of only 12 antigen markers. This antigen set was detected by a subset selection method based on mutual information. Remarkably, our study proves that the inclusion of non-specific antigens, detected not only in tumor but also in normal sera, increases the performance significantly, since non-specific antigens contribute additional diagnostic information. Conclusion Our approach offers the possibility to screen members of risk groups as a matter of routine

  17. Search Strategy of Detector Position For Neutron Source Multiplication Method by Using Detected-Neutron Multiplication Factor

    International Nuclear Information System (INIS)

    Endo, Tomohiro

    2011-01-01

    In this paper, an alternative definition of a neutron multiplication factor, the detected-neutron multiplication factor kdet, is introduced for the neutron source multiplication method (NSM). Using kdet, a strategy for finding an appropriate detector position for NSM is also proposed. The NSM is one of the practical subcritical measurement techniques; that is, it does not require any special equipment other than a stationary external neutron source and an ordinary neutron detector. Additionally, the NSM is based on steady-state analysis, so the technique is well suited for quasi real-time measurement. It is noted that correction factors play an important role in accurately estimating subcriticality from the measured neutron count rates. The present paper aims to clarify how to correct the subcriticality measured by the NSM, the physical meaning of the correction factors, and how to reduce the impact of the correction factors by setting the neutron detector at an appropriate position
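In the idealized point-model behind the NSM, the detected count rate scales as 1/(1 - k), so an unknown state's multiplication factor follows from a reference state with known k. The sketch below is that textbook relation only; `f_corr` is a placeholder for the correction factors the abstract discusses, not the paper's actual formulation:

```python
def k_from_count_rate(c_meas, c_ref, k_ref, f_corr=1.0):
    """Point-model NSM estimate: count rate C is proportional to 1/(1 - k),
    so (1 - k) = f_corr * (C_ref / C) * (1 - k_ref). f_corr stands in for
    the correction factors (detector position, spatial/energy effects);
    f_corr = 1.0 is the ideal point-model case."""
    return 1.0 - f_corr * (c_ref / c_meas) * (1.0 - k_ref)

# Doubling the count rate relative to a k_ref = 0.95 reference halves (1 - k):
k = k_from_count_rate(c_meas=2000.0, c_ref=1000.0, k_ref=0.95)  # -> 0.975
```

Choosing the detector position so that f_corr stays close to 1 over the states of interest is, in essence, the search strategy the record describes.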

  18. Comparing the index-flood and multiple-regression methods using L-moments

    Science.gov (United States)

    Malekinezhad, H.; Nachtnebel, H. P.; Klik, A.

    In arid and semi-arid regions, the length of records is usually too short to ensure reliable quantile estimates. Comparing index-flood and multiple-regression analyses based on L-moments was the main objective of this study. Factor analysis was applied to determine the main variables influencing flood magnitude. Ward’s cluster and L-moments approaches were applied to several sites in the Namak-Lake basin in central Iran to delineate homogeneous regions based on site characteristics. The homogeneity test was performed using L-moments-based measures. Several distributions were fitted to the regional flood data, and the index-flood and multiple-regression methods were compared as two regional flood frequency methods. The results of the factor analysis showed that length of main waterway, compactness coefficient, mean annual precipitation, and mean annual temperature were the main variables affecting flood magnitude. The study area was divided into three regions based on Ward’s clustering method. The homogeneity test based on L-moments showed that all three regions were acceptably homogeneous. Five distributions were fitted to the annual peak flood data of the three homogeneous regions. Using the L-moment ratios and the Z-statistic criteria, the GEV distribution was identified as the most robust of the five candidate distributions for all proposed sub-regions of the study area; in general, the generalised extreme value distribution was the best-fit distribution for all three regions. The relative root mean square error (RRMSE) measure was applied to evaluate the performance of the index-flood and multiple-regression methods in comparison with the curve fitting (plotting position) method. In general, the index-flood method gives more reliable estimates for various flood magnitudes at different recurrence intervals. Therefore, this method should be adopted as the regional flood frequency method for the study area and the Namak-Lake basin
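The index-flood idea can be sketched in a few lines: scale a site's annual peaks by its index flood (here the sample mean), fit a regional GEV growth curve, and rescale to get site quantiles. The data are synthetic, and note one swapped-in technique: `scipy` fits the GEV by maximum likelihood, whereas the study used L-moments:

```python
import numpy as np
from scipy.stats import genextreme

# Hypothetical annual peak floods (m^3/s) at one site of a homogeneous region.
peaks = genextreme.rvs(c=-0.1, loc=100.0, scale=30.0, size=40, random_state=1)

# Index-flood step 1: scale the data by the site's index flood (sample mean).
index_flood = peaks.mean()
scaled = peaks / index_flood

# Step 2: fit a GEV growth curve to the scaled data. (scipy fits by maximum
# likelihood; this stands in for the L-moments fit used in the study.)
c, loc, scale = genextreme.fit(scaled)

# Step 3: T-year quantile at the site = index flood x regional growth factor.
T = 100
growth = genextreme.ppf(1 - 1 / T, c, loc=loc, scale=scale)
q100 = index_flood * growth
```

In a real regional analysis the growth curve would be fitted to pooled scaled data from all sites in the homogeneous region, not a single site.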

  19. INCLUSION OF CHILDREN WITH INTELLECTUAL AND MULTIPLE DISABILITIES: A COMMUNITY-BASED REHABILITATION APPROACH, INDIA

    Directory of Open Access Journals (Sweden)

    Ram LAKHAN

    2013-03-01

    Full Text Available Background: Inclusion of children with intellectual disabilities (ID) and multiple disabilities (MD) in regular schools in India is extremely poor. One of the key objectives of community-based rehabilitation (CBR) is to include ID and MD children in regular schools. This study attempted to find the association of age, ID severity, poverty, gender, parent education, population, and multiple disabilities (comprising one or more of cerebral palsy, epilepsy, and psychiatric disorders) with inclusion among 259 children in Barwani Block of Barwani District in the state of Madhya Pradesh, India. Aim: Inclusion of children with intellectual and multiple disabilities in regular schools through a CBR approach in India. Method: The chi-square test was conducted to investigate the association between inclusion and the predictor variables: ID category, age, gender, poverty level, parent education, population type, and multiple disabilities. Result: Inclusion was possible for borderline 2 (66.4%), mild 54 (68.3%), moderate 18 (18.2%), and age range from 5 to 12 years 63 (43%). Children living in poor families 63 (30.6%), not poor 11 (18.9%), parental education none 52 (26%), primary level 11 (65%), middle school 10 (48%), high school 0 (0%) and bachelor degree 1 (7%), female 34 (27.9%), male 40 (29.2%), tribal 40 (28.7%), non-tribal 34 (28.3%), and multiple disabled with cerebral palsy 1 (1.2%), epilepsy 3 (4.8%) and psychiatric disorders 12 (22.6%) were able to receive inclusive education. Significant differences in inclusion were seen among ID categories (χ2 = 99.8, p < 0.001), poverty (χ2 = 3.37, p = 0.044), parental education (χ2 = 23.7, p < 0.001), MD CP (χ2 = 43.9, p < 0.001) and epilepsy (χ2 = 22.4, p < 0.001). Conclusion: Inclusion through CBR is feasible and acceptable in poor rural settings in India. CBR can facilitate inclusion of children with borderline, mild and moderate categories by involving their parents, teachers and community members.
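The chi-square test of independence used in this record takes a contingency table of counts per category. The sketch below uses an illustrative table loosely reconstructed from the ID-severity percentages in the abstract (the "not included" counts are approximations, not the study's exact numbers):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Contingency table: included vs. not included, by ID severity.
# Counts are illustrative approximations based on the abstract's percentages.
table = np.array([
    [ 2,  1],   # borderline
    [54, 25],   # mild
    [18, 81],   # moderate
])
chi2, p, dof, expected = chi2_contingency(table)
```

A large chi-square statistic with a small p-value, as reported for ID categories in the abstract, indicates that inclusion rates differ significantly across severity levels.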

  20. Method for Collision Avoidance Motion Coordination of Multiple Mobile Robots Using Central Observation

    Energy Technology Data Exchange (ETDEWEB)

    Ko, N.Y.; Seo, D.J. [Chosun University, Kwangju (Korea)

    2003-04-01

    This paper presents a new method for driving multiple robots to their goal positions without collision. Each robot adjusts its motion based on its own goal location, velocity, and position and on the velocities and positions of the other robots. To consider the movement of the robots in a work area, we adopt the concept of an avoidability measure, which quantifies how easily a robot can avoid other robots given the distance from the robot to the other robots and the velocities of the robot and the other robots. To implement the concept in moving-robot avoidance, the relative distance between the robots is derived. Our method combines this relative distance with an artificial potential field method. The proposed method is simulated for several cases. The results show that the proposed method steers robots into open space in anticipation of the approach of other robots. In contrast, the usual potential field method sometimes fails to prevent collision or causes hasty motion, because it initiates avoidance motion later than the proposed method. The proposed method can be used to move robots in a robot soccer team to their appropriate positions without collision as fast as possible. (author). 21 refs., 10 figs., 13 tabs.
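The flavor of a velocity-aware potential field can be sketched as below. The specific formula for the avoidability factor and the gains are invented for illustration, not taken from the paper; the point is that an obstacle which is close *and* closing fast should repel more strongly than a static or receding one:

```python
import numpy as np

def repulsive_velocity(p, p_other, v, v_other, gain=1.0, influence=3.0):
    """Potential-field style avoidance term weighted by an 'avoidability'
    factor: robots that are close AND closing fast repel more strongly.
    The factor's exact form here is illustrative, not the paper's."""
    rel_p = p - p_other
    dist = np.linalg.norm(rel_p)
    if dist >= influence or dist == 0.0:
        return np.zeros_like(p)
    # Positive when the two robots are approaching each other.
    closing_speed = max(0.0, -np.dot(v - v_other, rel_p) / dist)
    avoidability = dist / (1.0 + closing_speed)   # small => hard to avoid
    return gain * (1.0 / avoidability - 1.0 / influence) * rel_p / dist

# A robot heading straight at a static one is pushed back along the line
# between them; the same geometry with no closing speed pushes more weakly.
push = repulsive_velocity(np.array([0.0, 0.0]), np.array([2.0, 0.0]),
                          np.array([1.0, 0.0]), np.zeros(2))
```

In a full controller this repulsive term would be summed over all nearby robots and added to an attractive term toward the goal.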

  1. Numerical Methods for Stochastic Computations A Spectral Method Approach

    CERN Document Server

    Xiu, Dongbin

    2010-01-01

    The first graduate-level textbook to focus on fundamental aspects of numerical methods for stochastic computations, this book describes the class of numerical methods based on generalized polynomial chaos (gPC). These fast, efficient, and accurate methods are an extension of the classical spectral methods to high-dimensional random spaces. Designed to simulate complex systems subject to random inputs, these methods are widely used in many areas of computer science and engineering. The book introduces polynomial approximation theory and probability theory; describes the basic theory of gPC meth
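A minimal gPC example, under the standard setup the book covers: expand f(Z) = Z², with Z standard normal, in probabilists' Hermite polynomials (the gPC basis for Gaussian inputs), computing the coefficients by Gauss quadrature. Analytically Z² = He₀(Z) + He₂(Z), so the exact coefficients are [1, 0, 1, 0, …], and the mean and variance of f(Z) drop out of the coefficients directly:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

# Quadrature rule for the weight exp(-x^2/2); renormalize to the N(0,1) measure.
nodes, weights = He.hermegauss(8)
weights = weights / weights.sum()

def coeff(n):
    # c_n = E[f(Z) He_n(Z)] / E[He_n(Z)^2], with E[He_n^2] = n! for this basis.
    Hn = He.hermeval(nodes, [0.0] * n + [1.0])
    return float(np.sum(weights * nodes**2 * Hn)) / math.factorial(n)

c = [coeff(n) for n in range(4)]                                # -> [1, 0, 1, 0]
mean = c[0]                                                     # E[Z^2] = 1
var = sum(math.factorial(n) * c[n]**2 for n in range(1, 4))     # Var[Z^2] = 2
```

With 8 quadrature nodes the rule is exact for the low-degree polynomials involved, so the computed coefficients match the analytic ones to machine precision.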

  2. An Approach for Predicting Essential Genes Using Multiple Homology Mapping and Machine Learning Algorithms.

    Science.gov (United States)

    Hua, Hong-Li; Zhang, Fa-Zhan; Labena, Abraham Alemayehu; Dong, Chuan; Jin, Yan-Ting; Guo, Feng-Biao

    Investigating essential genes is important for understanding the minimal gene set of a cell and for discovering potential drug targets. In this study, a novel approach based on multiple homology mapping and machine learning methods was introduced to predict essential genes. We focused on 25 bacteria with characterized essential genes. The predictions yielded the highest area under the receiver operating characteristic (ROC) curve (AUC) of 0.9716 through a tenfold cross-validation test. Proper features were utilized to construct models to make predictions in distantly related bacteria. The accuracy of the predictions was evaluated via the consistency between the predictions and the known essential genes of the target species. The highest AUC of 0.9552 and an average AUC of 0.8314 were achieved when making predictions across organisms. An independent dataset from Synechococcus elongatus, which was released recently, was obtained for further assessment of the performance of our model. The AUC score of these predictions is 0.7855, which is higher than that of other methods. This research shows that features obtained by homology mapping alone can achieve results comparable to, or even better than, integrated features. Meanwhile, the work indicates that a machine learning-based method can assign more efficient weight coefficients than an empirical formula based on biological knowledge.
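The cross-organism evaluation the record describes — train on an organism with characterized essential genes, score predictions on a different organism by ROC AUC — can be sketched as below. The data and the homology-style features are synthetic stand-ins, and logistic regression is just one example learner, not the study's specific classifier:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)

def make_organism(n=800):
    # Hypothetical homology features (e.g. conservation scores against
    # reference genomes); essential genes (y = 1) score systematically higher.
    y = rng.integers(0, 2, n)
    X = rng.normal(0.0, 1.0, (n, 6)) + 0.8 * y[:, None]
    return X, y

X_src, y_src = make_organism()   # organism with characterized essential genes
X_tgt, y_tgt = make_organism()   # stand-in for a distantly related organism

model = LogisticRegression().fit(X_src, y_src)
auc = roc_auc_score(y_tgt, model.predict_proba(X_tgt)[:, 1])
```

The AUC on the held-out organism plays the role of the cross-organism scores (0.9552 best, 0.8314 average) reported in the abstract.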

  3. Mixed method approaches to evaluate conservation impact

    DEFF Research Database (Denmark)

    Lund, Jens Friis; Burgess, Neil D.; Chamshama, Shabani A.O.

    2015-01-01

    Nearly 10% of the world's total forest area is formally owned by communities and indigenous groups, yet knowledge of the effects of decentralized forest management approaches on conservation (and livelihood) impacts remains elusive. In this paper, the conservation impact of decentralized forest m...

  4. Approaches and Methods of Periodization in Literary History

    Directory of Open Access Journals (Sweden)

    Naser Gholi Sarli

    2013-10-01

    Full Text Available Abstract One of the most fundamental acts of historiography is to classify historical information along the diachronic axis. The method of this classification, or periodization, shows the theoretical approach of the historian and determines the structure and the form of his history. Because of multiple criteria of analysis and various literary genres, periodization in literary history is more complicated than that of general history. We can distinguish two approaches in the periodization of literary history, although these can be used together: an extrinsic or social-cultural approach (based on criteria extrinsic to literature) and an intrinsic or formalist approach (based on criteria intrinsic to literature). Periodization in literary history can thus be formulated in different methods and may be based upon various criteria: chronological units such as century, decade and year; organic patterns of evolution; great poets and writers; literary emblems and evaluations of every period; events, concepts and periods of general or political history; analogy of literary history with the history of ideas or the history of arts; approaches and styles of language; dominant literary norms. These methods are actually used together, and each is adequate for a special kind of literary history. In the periodization of Persian contemporary literature, some methods and models current in the periodization of poetry have been applied identically to the periodization of prose. Periodization based upon century, decade and year is the simplest and most mechanical method, but certain centuries in some countries have symbolic and stylistic meaning, and decades are often used for subdivisions of literary history, especially nowadays with the fast rhythm of literary change.
Periodization according to organic patterns of evolution equates the changes of literary history with the life phases of an organism, and offers an account of the birth, maturity and death (and sometimes re-birth) of literary genres, but this method has

  7. Normalization method for metabolomics data using optimal selection of multiple internal standards

    Directory of Open Access Journals (Sweden)

    Yetukuri Laxman

    2007-03-01

    Full Text Available Abstract Background Success of metabolomics as a phenotyping platform largely depends on its ability to detect various sources of biological variability. Removal of platform-specific sources of variability, such as systematic error, is therefore one of the foremost priorities in data preprocessing. However, the chemical diversity of molecular species included in typical metabolic profiling experiments leads to different responses to variations in experimental conditions, making normalization a very demanding task. Results With the aim of removing unwanted systematic variation, we present an approach that utilizes variability information from multiple internal standard compounds to find an optimal normalization factor for each individual molecular species detected by metabolomics (NOMIS). We demonstrate the method on mouse liver lipidomic profiles using Ultra Performance Liquid Chromatography coupled to high resolution mass spectrometry, and compare its performance to two commonly utilized normalization methods: normalization by the l2 norm and by retention-time-region-specific standard compound profiles. The NOMIS method proved superior in its ability to reduce the effect of systematic error across the full spectrum of metabolite peaks. We also demonstrate that the method can be used to select the best combinations of standard compounds for normalization. Conclusion Depending on the experiment design and biological matrix, the NOMIS method is applicable either as a one-step normalization method or as a two-step method where the normalization parameters, influenced by the variabilities of the internal standard compounds and their correlation to metabolites, are first calculated from a study conducted under repeatability conditions. The method can also be used in the analytical development of metabolomics methods by helping to select the best combinations of standard compounds for a particular biological matrix and analytical platform.
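The core idea — use internal standards to estimate each sample's systematic error and divide it out — can be sketched on synthetic data. This is a deliberate simplification of NOMIS: a single shared per-sample factor from the standards, whereas the published method additionally fits per-metabolite weights over multiple standards:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical peak-intensity matrix: 20 samples x 50 metabolites, each sample
# distorted by a multiplicative systematic error normalization should remove.
biology = rng.lognormal(3.0, 0.2, size=(20, 50))
bias = rng.lognormal(0.0, 0.3, size=(20, 1))
X = biology * bias

# Three spiked internal standards: constant true level, same per-sample bias,
# plus small measurement noise.
is_level = np.array([100.0, 200.0, 400.0])
IS = is_level * bias * rng.lognormal(0.0, 0.05, size=(20, 3))

# Simplified NOMIS-style step: estimate each sample's systematic error as the
# mean log-deviation of its internal standards, then divide it out.
log_dev = np.log(IS) - np.log(IS).mean(axis=0)
X_norm = X / np.exp(log_dev.mean(axis=1, keepdims=True))

# Cross-sample spread per metabolite shrinks once the shared bias is removed.
before = np.log(X).std(axis=0).mean()
after = np.log(X_norm).std(axis=0).mean()
```

The remaining spread in `X_norm` reflects the biological variability plus the standards' measurement noise, which is the behavior the abstract credits NOMIS with.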

  8. A Multiple Mobility Support Approach (MMSA Based on PEAS for NCW in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Bong-Joo Koo

    2011-01-01

    Full Text Available Wireless Sensor Networks (WSNs) can be implemented as sensor systems in Network Centric Warfare (NCW). Mobility support and energy efficiency are key concerns for this application, due to the multiple mobile users and stimuli in a real combat field. However, mobility support approaches that can be adopted in this circumstance are rare. This paper proposes a Multiple Mobility Support Approach (MMSA) based on Probing Environment and Adaptive Sleeping (PEAS) to support the simultaneous mobility of both multiple users and stimuli by sharing the information of stimuli in WSNs. Simulations using Qualnet show that MMSA can support multiple mobile users and stimuli with good energy efficiency. It is expected that the proposed MMSA can be applied to a real combat field.

  9. Assessment of Different Metal Screw Joint Parameters by Using Multiple Criteria Analysis Methods

    Directory of Open Access Journals (Sweden)

    Audrius Čereška

    2018-05-01

    Full Text Available This study compares screw joints made of different materials and using screws of different diameters. For that purpose, 8, 10, 12, 14 and 16 mm diameter steel screws and parts made of aluminum (Al), steel (Stl), bronze (Brz), cast iron (CI), copper (Cu) and brass (Br) are considered. Multiple criteria decision making (MCDM) methods, such as evaluation based on distance from average solution (EDAS), simple additive weighting (SAW), technique for order of preference by similarity to ideal solution (TOPSIS) and complex proportional assessment (COPRAS), are utilized to assess the reliability of the screw joints while also considering cost. The entropy, criterion impact loss (CILOS) and integrated determination of objective criteria weights (IDOCRIW) methods are utilized to assess the weights of the decision criteria and find the best design alternative. Numerical results confirm the validity of the proposed approach.
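Of the MCDM methods named here, TOPSIS is the most compact to show. The sketch below ranks three hypothetical screw-joint designs on two criteria; the decision matrix, weights, and criterion directions are invented for illustration (the study's criteria and weights come from its entropy/CILOS/IDOCRIW step):

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Minimal TOPSIS: rank alternatives (rows) on criteria (columns).
    benefit[j] is True when larger values of criterion j are better."""
    M = np.asarray(matrix, float)
    W = np.asarray(weights, float) / np.sum(weights)
    V = W * M / np.linalg.norm(M, axis=0)          # weighted, vector-normalized
    ideal = np.where(benefit, V.max(0), V.min(0))  # best value per criterion
    worst = np.where(benefit, V.min(0), V.max(0))  # worst value per criterion
    d_best = np.linalg.norm(V - ideal, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    return d_worst / (d_best + d_worst)            # closeness: higher is better

# Toy data: [pull-out strength (kN), cost (EUR)] for three candidate designs.
scores = topsis([[12, 1.0], [16, 1.4], [14, 1.1]],
                weights=[0.6, 0.4], benefit=[True, False])
best = int(np.argmax(scores))
```

SAW, EDAS and COPRAS follow the same pattern (normalize, weight, aggregate) with different aggregation rules, which is why studies like this one run several and compare the rankings.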

  10. A heuristic approach using multiple criteria for environmentally benign 3PLs selection

    Science.gov (United States)

    Kongar, Elif

    2005-11-01

    Maintaining competitiveness in an environment where price and quality differences between competing products are disappearing depends on the company's ability to reduce costs and supply time. Timely responses to rapidly changing market conditions require efficient Supply Chain Management (SCM). Outsourcing logistics to third-party logistics service providers (3PLs) is one commonly used way of increasing the efficiency of logistics operations, while creating a more "core competency focused" business environment. However, this alone may not be sufficient. Due to recent environmental regulations and growing public awareness regarding environmental issues, 3PLs need to be not only efficient but also environmentally benign to maintain companies' competitiveness. Even though an efficient and environmentally benign combination of 3PLs can theoretically be obtained using exhaustive search algorithms, heuristic approaches to the selection process may be superior in terms of computational complexity. In this paper, a hybrid approach that combines a multiple criteria Genetic Algorithm (GA) with the Linear Physical Weighting Algorithm (LPPW) for selecting efficient and environmentally benign 3PLs is proposed. A numerical example is also provided to illustrate the method and the analyses.

  11. Approach and landing guidance design for reusable launch vehicle using multiple sliding surfaces technique

    Directory of Open Access Journals (Sweden)

    Xiangdong LIU

    2017-08-01

    Full Text Available An autonomous approach and landing (A&L) guidance law is presented in this paper for landing an unpowered reusable launch vehicle (RLV) at the designated runway touchdown point. Considering the full nonlinear point-mass dynamics, a guidance scheme is developed in three-dimensional space. In order to guarantee a successful A&L maneuver, the multiple sliding surfaces guidance (MSSG) technique is applied to derive the closed-loop guidance law, which stems from higher-order sliding mode control theory and has the advantage of a finite-time reaching property. The global stability of the proposed guidance approach is proved by a Lyapunov-based method. The designed guidance law can generate new trajectories on-line without any specific off-line analysis beyond the boundary conditions of the A&L phase and the instantaneous states of the RLV. Therefore, the designed guidance law is flexible enough to target different touchdown points on the runway and is capable of dealing with large initial condition errors resulting from the previous flight phase. Finally, simulation results show the effectiveness of the proposed guidance law in different scenarios.

  12. A Fisher Kernel Approach for Multiple Instance Based Object Retrieval in Video Surveillance

    Directory of Open Access Journals (Sweden)

    MIRONICA, I.

    2015-11-01

    Full Text Available This paper presents an automated surveillance system that exploits the Fisher Kernel representation in the context of a multiple-instance object retrieval task. The proposed algorithm has the main purpose of tracking a list of persons across several video sources, using only a few training examples. In the first step, the Fisher Kernel representation describes a set of features as the derivative with respect to the log-likelihood of the generative probability distribution that models the feature distribution. Then, we learn the generative probability distribution over all features extracted from a reduced set of relevant frames. The proposed approach shows significant improvements, and we demonstrate that Fisher kernels are well suited for this task. We demonstrate the generality of our approach in terms of features by conducting an extensive evaluation with a broad range of keypoint features. We also evaluate our method on two standard video surveillance datasets, attaining superior results compared to state-of-the-art object recognition algorithms.
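The Fisher-vector encoding at the heart of this approach turns a variable-size set of frame descriptors into one fixed-length vector: the gradient of the descriptors' log-likelihood under a background GMM. The sketch below keeps only the gradient with respect to the component means (a common simplification; full Fisher vectors add weight and variance gradients) and uses random stand-in descriptors:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector(X, gmm):
    """Mean-gradient Fisher vector of descriptor set X under a
    diagonal-covariance GMM (simplified; no weight/variance terms)."""
    q = gmm.predict_proba(X)                               # (N, K) posteriors
    n = X.shape[0]
    diff = (X[:, None, :] - gmm.means_) / np.sqrt(gmm.covariances_)  # (N,K,D)
    grad = (q[:, :, None] * diff).sum(axis=0)
    grad /= n * np.sqrt(gmm.weights_)[:, None]
    return grad.ravel()                                    # length K * D

rng = np.random.default_rng(3)
# Background model learned from pooled descriptors of many frames.
gmm = GaussianMixture(n_components=4, covariance_type='diag',
                      random_state=0).fit(rng.normal(size=(500, 8)))
# One person/clip: a variable-size set of frame descriptors -> one vector.
fv = fisher_vector(rng.normal(size=(60, 8)), gmm)
```

Retrieval then reduces to comparing these fixed-length vectors (e.g. with a linear classifier or cosine similarity), regardless of how many frames each instance contains.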

  13. Estimation of subcriticality by neutron source multiplication method

    International Nuclear Information System (INIS)

    Sakurai, Kiyoshi; Suzaki, Takenori; Arakawa, Takuya; Naito, Yoshitaka

    1995-03-01

    Subcritical cores were constructed in the core tank of the TCA by arraying 2.6% enriched UO2 fuel rods into n×n square lattices of 1.956 cm pitch. Vertical distributions of the neutron count rates for fifteen subcritical cores (n = 17, 16, 14, 11, 8) with different water levels were measured at 5 cm intervals with 235U micro-fission counters at in-core and out-of-core positions, with a 252Cf neutron source placed near the core center. The continuous-energy Monte Carlo code MCNP-4A was used for the calculation of neutron multiplication factors and neutron count rates. In this study, the important conclusions are as follows: (1) Differences between the neutron multiplication factors obtained from the exponential experiment and from MCNP-4A are below 1% in most cases. (2) Standard deviations of neutron count rates calculated by MCNP-4A with 500000 histories are 5-8%. The calculated neutron count rates are consistent with the measured ones. (author)

  14. Approaches and methods of risk assessment

    International Nuclear Information System (INIS)

    Rowe, W.D.

    1983-01-01

    The classification system of risk assessment includes the categories: 1) risk comparisons, 2) cost-effectiveness of risk reduction, 3) balancing of costs, risks and benefits against one another, and 4) metasystems. An overview of methods and systems reveals that no single method can be applied to all cases and situations. The visibility of the process and the full consideration of all aspects of judging are, however, of foremost importance. (DG) [de

  15. Sensitivity studies on the approaches for addressing multiple initiating events in fire events PSA

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Dae Il; Lim, Ho Gon [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    A single fire event within a fire compartment or a fire scenario can cause multiple initiating events (IEs). As an example, a fire in a turbine building fire area can cause both a loss of main feed-water (LOMF) and a loss of off-site power (LOOP) IE. Previous domestic fire events PSA had considered only the most severe of multiple initiating events. NUREG/CR-6850 and the ANS/ASME PRA Standard require that multiple IEs be addressed in fire events PSA. In this paper, sensitivity studies on the approaches for addressing multiple IEs in the fire events PSA for Hanul Unit 3 were performed and their results are presented. From the sensitivity analysis results, we find that incorporating multiple IEs into the fire events PSA model increases the core damage frequency (CDF) and may lead to the generation of duplicate cutsets. Multiple IEs can also occur in internal flooding events or other external events, such as seismic events. They should be considered in the construction of PSA models in order to realistically estimate the risk due to flooding or seismic events.

  16. Multiple Beta Spectrum Analysis Method Based on Spectrum Fitting

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Uk Jae; Jung, Yun Song; Kim, Hee Reyoung [UNIST, Ulsan (Korea, Republic of)

    2016-05-15

    When a sample of several mixed radioactive nuclides is measured, it is difficult to separate the individual nuclides because their spectra overlap. For this reason, a simple mathematical analysis method for the spectrum of a mixed beta-ray source had been studied. However, the existing research needed a more accurate spectral analysis method, as it had a problem with accuracy. This study describes methods for separating the components of a mixed beta-ray source through analysis of the beta spectrum slope based on curve fitting, to resolve the existing problem. Among the fitting methods examined (Fourier, polynomial, Gaussian, and sum of sines), the sum-of-sines fit was found to be the best for obtaining an equation for the distribution of the mixed beta spectrum. It was shown to be the most appropriate for the analysis of spectra with various ratios of mixed nuclides. It is thought that this method could be applied to rapid spectrum analysis of mixed beta-ray sources.
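A sum-of-sines fit of the kind described here is ordinary nonlinear least squares. The sketch below fits a two-term model to a synthetic "mixed spectrum" (the data and term count are illustrative, not the paper's); as with any sinusoidal model, reasonable initial guesses for the frequencies matter:

```python
import numpy as np
from scipy.optimize import curve_fit

def sum_of_sines(x, a1, b1, c1, a2, b2, c2):
    # Two-term 'sum of sines' model of the form the abstract found best.
    return a1 * np.sin(b1 * x + c1) + a2 * np.sin(b2 * x + c2)

# Synthetic mixed 'spectrum': two smooth overlapping components plus noise.
x = np.linspace(0.0, 1.0, 200)
truth = 1.5 * np.sin(3.0 * x + 0.2) + 0.8 * np.sin(7.0 * x + 1.1)
rng = np.random.default_rng(4)
y = truth + rng.normal(0.0, 0.02, x.size)

# Nonlinear least squares with rough initial guesses near each component.
p0 = [1.0, 3.0, 0.0, 1.0, 7.0, 1.0]
popt, _ = curve_fit(sum_of_sines, x, y, p0=p0)
rmse = float(np.sqrt(np.mean((sum_of_sines(x, *popt) - truth) ** 2)))
```

Once the fit converges, the individual fitted terms can be read off separately, which is the separation step the record is about.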

  17. Quantifying submarine groundwater discharge in the coastal zone via multiple methods

    International Nuclear Information System (INIS)

    Burnett, W.C.; Aggarwal, P.K.; Aureli, A.; Bokuniewicz, H.; Cable, J.E.; Charette, M.A.; Kontar, E.; Krupa, S.; Kulkarni, K.M.; Loveless, A.; Moore, W.S.; Oberdorfer, J.A.; Oliveira, J.; Ozyurt, N.; Povinec, P.; Privitera, A.M.G.; Rajar, R.; Ramessur, R.T.; Scholten, J.; Stieglitz, T.; Taniguchi, M.; Turner, J.V.

    2006-01-01

    Submarine groundwater discharge (SGD) is now recognized as an important pathway between land and sea. As such, this flow may contribute to the biogeochemical and other marine budgets of near-shore waters. These discharges typically display significant spatial and temporal variability making assessments difficult. Groundwater seepage is patchy, diffuse, temporally variable, and may involve multiple aquifers. Thus, the measurement of its magnitude and associated chemical fluxes is a challenging enterprise. A joint project of UNESCO and the International Atomic Energy Agency (IAEA) has examined several methods of SGD assessment and carried out a series of five intercomparison experiments in different hydrogeologic environments (coastal plain, karst, glacial till, fractured crystalline rock, and volcanic terrains). This report reviews the scientific and management significance of SGD, measurement approaches, and the results of the intercomparison experiments. We conclude that while the process is essentially ubiquitous in coastal areas, the assessment of its magnitude at any one location is subject to enough variability that measurements should be made by a variety of techniques and over large enough spatial and temporal scales to capture the majority of these changing conditions. We feel that all the measurement techniques described here are valid although they each have their own advantages and disadvantages. It is recommended that multiple approaches be applied whenever possible. In addition, a continuing effort is required in order to capture long-period tidal fluctuations, storm effects, and seasonal variations

  18. Numerical Computation of Underground Inundation in Multiple Layers Using the Adaptive Transfer Method

    Directory of Open Access Journals (Sweden)

    Hyung-Jun Kim

    2018-01-01

    Full Text Available Extreme rainfall causes surface runoff to flow towards lowlands and subterranean facilities, such as subway stations and buildings with underground spaces, in densely packed urban areas. These facilities and areas are therefore vulnerable to catastrophic submergence. However, flood modeling of underground spaces has not yet been adequately studied because of the difficulty of reproducing the multiple horizontal layers connected by staircases or elevators. This study proposes a convenient approach to simulate underground inundation when two layers are connected. The main facet of this approach is to compute the flow flux passing through staircases in an upper layer and to transfer the equivalent quantity to the lower layer. This is defined as the ‘adaptive transfer method’. This method overcomes the limitations of 2D modeling by introducing a layer-connecting concept that prevents large variations in mesh size caused by complicated underlying obstacles or local details. Consequently, this study aims to contribute to the numerical analysis of flow in inundated underground spaces with multiple floors.
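The transfer step can be illustrated with a simple weir-type flux through a staircase opening. The broad-crested-weir formula and discharge coefficient below are standard hydraulics, used here as illustrative stand-ins for the paper's transfer term, not its exact formulation:

```python
def staircase_flux(h, width, c_d=0.6, g=9.81):
    """Broad-crested-weir style estimate of the discharge (m^3/s) dropping
    through a staircase opening of `width` (m) under upper-floor water
    depth h (m): Q = c_d * width * sqrt(g) * h**1.5. Coefficient and form
    are illustrative stand-ins for the paper's transfer term."""
    if h <= 0.0:
        return 0.0
    return c_d * width * (g ** 0.5) * h ** 1.5

# Per time step: remove this flux from the upper-layer cell over the stairs
# and add it as a source to the lower-layer cell -- the essence of the
# transfer idea between the two connected 2D layers.
q = staircase_flux(h=0.3, width=2.0)
```

Coupling the two 2D solvers through such a source/sink pair is what lets each floor keep its own mesh, avoiding the mesh-size variations the abstract mentions.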

  19. Quantifying submarine groundwater discharge in the coastal zone via multiple methods

    Energy Technology Data Exchange (ETDEWEB)

    Burnett, W.C. [Department of Oceanography, Florida State University, Tallahassee, FL 32306 (United States); Aggarwal, P.K.; Kulkarni, K.M. [Isotope Hydrology Section, International Atomic Energy Agency (Austria); Aureli, A. [Department Water Resources Management, University of Palermo, Catania (Italy); Bokuniewicz, H. [Marine Science Research Center, Stony Brook University (United States); Cable, J.E. [Department Oceanography, Louisiana State University (United States); Charette, M.A. [Department Marine Chemistry, Woods Hole Oceanographic Institution (United States); Kontar, E. [Shirshov Institute of Oceanology (Russian Federation); Krupa, S. [South Florida Water Management District (United States); Loveless, A. [University of Western Australia (Australia); Moore, W.S. [Department Geological Sciences, University of South Carolina (United States); Oberdorfer, J.A. [Department Geology, San Jose State University (United States); Oliveira, J. [Instituto de Pesquisas Energeticas e Nucleares (Brazil); Ozyurt, N. [Department Geological Engineering, Hacettepe (Turkey); Povinec, P.; Scholten, J. [Marine Environment Laboratory, International Atomic Energy Agency (Monaco); Privitera, A.M.G. [U.O. 4.17 of the G.N.D.C.I., National Research Council (Italy); Rajar, R. [Faculty of Civil and Geodetic Engineering, University of Ljubljana (Slovenia); Ramessur, R.T. [Department Chemistry, University of Mauritius (Mauritius); Stieglitz, T. [Mathematical and Physical Sciences, James Cook University (Australia); Taniguchi, M. [Research Institute for Humanity and Nature (Japan); Turner, J.V. [CSIRO, Land and Water, Perth (Australia)

    2006-08-31

    Submarine groundwater discharge (SGD) is now recognized as an important pathway between land and sea. As such, this flow may contribute to the biogeochemical and other marine budgets of near-shore waters. These discharges typically display significant spatial and temporal variability making assessments difficult. Groundwater seepage is patchy, diffuse, temporally variable, and may involve multiple aquifers. Thus, the measurement of its magnitude and associated chemical fluxes is a challenging enterprise. A joint project of UNESCO and the International Atomic Energy Agency (IAEA) has examined several methods of SGD assessment and carried out a series of five intercomparison experiments in different hydrogeologic environments (coastal plain, karst, glacial till, fractured crystalline rock, and volcanic terrains). This report reviews the scientific and management significance of SGD, measurement approaches, and the results of the intercomparison experiments. We conclude that while the process is essentially ubiquitous in coastal areas, the assessment of its magnitude at any one location is subject to enough variability that measurements should be made by a variety of techniques and over large enough spatial and temporal scales to capture the majority of these changing conditions. We feel that all the measurement techniques described here are valid although they each have their own advantages and disadvantages. It is recommended that multiple approaches be applied whenever possible. In addition, a continuing effort is required in order to capture long-period tidal fluctuations, storm effects, and seasonal variations. (author)

  20. Statistical Genetics Methods for Localizing Multiple Breast Cancer Genes

    National Research Council Canada - National Science Library

    Ott, Jurg

    1998-01-01

    .... For a number of variables measured on a trait, a method, principal components of heritability, was developed that combines these variables in such a way that the resulting linear combination has highest heritability...

  1. Sustainable Assessment of Aerosol Pollution Decrease Applying Multiple Attribute Decision-Making Methods

    Directory of Open Access Journals (Sweden)

    Audrius Čereška

    2016-06-01

    Full Text Available Air pollution with various materials, particularly with aerosols, increases with the advances in technological development. This is a complicated global problem. One of the priorities in achieving sustainable development is the reduction of harmful technological effects on the environment and human health, and it is a responsibility of researchers to search for effective methods of reducing pollution. Reliable results can be obtained by combining the approaches used in various fields of science and technology. This paper aims to demonstrate the effectiveness of multiple attribute decision-making (MADM) methods in investigating and solving environmental pollution problems. The paper presents a study of the evaporation of a toxic liquid based on the MADM methods. A schematic view of the test setup is presented. The density, viscosity, and rate of the released vapor flow are measured, and the dependence of the solution concentration on its temperature is determined in the experimental study. The concentration of the hydrochloric acid solution (HAS) varies in the range from 28% to 34%, while the liquid is heated from 50 to 80 °C. The variations in the parameters are analyzed using the well-known VIKOR and COPRAS MADM methods. For determining the criteria weights, a new CILOS (Criterion Impact LOSs) method is used. The experimental results are arranged in priority order using the MADM methods. Based on the obtained data, the technological parameters of production ensuring minimum environmental pollution can be chosen.
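
    Of the MADM tools named above, VIKOR is compact enough to sketch. The decision matrix and weights below are hypothetical stand-ins (not the paper's measured data or CILOS-derived weights); the code shows only the generic VIKOR ranking steps:

```python
def vikor(matrix, weights, benefit, v=0.5):
    """Rank alternatives with the VIKOR method.
    matrix: rows = alternatives, columns = criteria.
    benefit[j]: True if larger is better for criterion j.
    Returns Q values; a smaller Q indicates a better compromise."""
    m = len(weights)
    best = [max(r[j] for r in matrix) if benefit[j] else min(r[j] for r in matrix)
            for j in range(m)]
    worst = [min(r[j] for r in matrix) if benefit[j] else max(r[j] for r in matrix)
             for j in range(m)]
    S, R = [], []
    for row in matrix:
        terms = [weights[j] * (best[j] - row[j]) / (best[j] - worst[j])
                 for j in range(m)]
        S.append(sum(terms))  # group utility
        R.append(max(terms))  # individual regret
    s_min, s_max, r_min, r_max = min(S), max(S), min(R), max(R)
    return [v * (s - s_min) / (s_max - s_min)
            + (1 - v) * (r - r_min) / (r_max - r_min)
            for s, r in zip(S, R)]

# Hypothetical data: 3 process settings x 3 cost-type criteria
# (e.g. vapour rate, density, viscosity -- smaller is better for all).
Q = vikor([[2.0, 30.0, 55.0], [1.5, 28.0, 60.0], [2.5, 34.0, 50.0]],
          weights=[0.4, 0.3, 0.3], benefit=[False, False, False])
```

    In real use the weights would come from a weighting method such as CILOS and the matrix from the measured experimental results.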

  2. Bionic Design Methods - A practical approach

    DEFF Research Database (Denmark)

    Kepler, Jørgen Asbøll; Stokholm, Marianne Denise J.

    2004-01-01

    Nature has served as inspiration for product design throughout history. Applications range from poetic translations of form to utilization of primary functional principles. This paper describes a generally applicable design methodology for transforming natural functional principles into feasible product design. From a formulation of design demands, which need not be very precise, the approach continues with a study of natural objects (animals, plants) that are subject to the same demands. From this study, the working principle(s) are derived and then clarified through illustrative models, which should be simplified as much as possible. The simplified principle may then be evaluated and transformed into a practical design. The methodology is illustrated through examples taken from a series of extended workshops held at Aalborg University.

  3. Design of multiple representations e-learning resources based on a contextual approach for the basic physics course

    Science.gov (United States)

    Bakri, F.; Muliyati, D.

    2018-05-01

    This research aims to design e-learning resources with multiple representations based on a contextual approach for the Basic Physics Course. The research uses research and development methods following the Dick & Carey strategy. The development was carried out in the digital laboratory of the Physics Education Department, Mathematics and Science Faculty, Universitas Negeri Jakarta. The product development process produced an e-learning design for the Basic Physics Course presented in multiple representations within a contextual learning syntax. The representations used in the design include: concept maps, video, figures, tables of experimental data, charts of those tables, verbal explanations, mathematical equations, worked problems and solutions, and exercises. The multiple representations are presented through the stages of contextual learning: relating, experiencing, applying, transferring, and cooperating.

  4. THE METHOD OF MULTIPLE SPATIAL PLANNING BASIC MAP

    Directory of Open Access Journals (Sweden)

    C. Zhang

    2018-04-01

    Full Text Available The “Provincial Space Plan Pilot Program” issued in December 2016 called for integrating the existing space management and control information platforms of the various departments into a single spatial planning information management platform combining basic data, target indicators, spatial coordinates, and technical specifications. Such a platform is intended to provide decision support for plan preparation, digital monitoring and evaluation of plan implementation, parallel examination and approval of the various types of investment projects and military construction projects by the departments involved in space management and control, and improved efficiency of administrative approval. The space planning system should delimit control boundaries for the development of production, living, and ecological space, and implement use control. On the one hand, the functional orientation of each kind of planning space must be clarified; on the other hand, “multi-compliance” among the various space plans must be achieved. Integrating multiple spatial plans requires a unified and standardized basic map (geographic database and technical specification) to divide urban, agricultural, and ecological space and to provide technical support for refining the space control zoning of the relevant plans. The article analyzes the main technical problems of the spatial datum, the land use classification standards, base map preparation, and the basic planning platform, drawing on geographic census results for the preparation of spatial planning maps and on pilot applications in Heilongjiang and Hainan.

  5. The Method of Multiple Spatial Planning Basic Map

    Science.gov (United States)

    Zhang, C.; Fang, C.

    2018-04-01

    The "Provincial Space Plan Pilot Program" issued in December 2016 called for integrating the existing space management and control information platforms of the various departments into a single spatial planning information management platform combining basic data, target indicators, spatial coordinates, and technical specifications. Such a platform is intended to provide decision support for plan preparation, digital monitoring and evaluation of plan implementation, parallel examination and approval of the various types of investment projects and military construction projects by the departments involved in space management and control, and improved efficiency of administrative approval. The space planning system should delimit control boundaries for the development of production, living, and ecological space, and implement use control. On the one hand, the functional orientation of each kind of planning space must be clarified; on the other hand, "multi-compliance" among the various space plans must be achieved. Integrating multiple spatial plans requires a unified and standardized basic map (geographic database and technical specification) to divide urban, agricultural, and ecological space and to provide technical support for refining the space control zoning of the relevant plans. The article analyzes the main technical problems of the spatial datum, the land use classification standards, base map preparation, and the basic planning platform, drawing on geographic census results for the preparation of spatial planning maps and on pilot applications in Heilongjiang and Hainan.

  6. Enterprise Engineering Method supporting Six Sigma Approach

    OpenAIRE

    Jochem, Roland

    2007-01-01

    Enterprise Modeling (EM) is currently used either as a technique to represent and understand the structure and behavior of the enterprise, or as a technique to analyze business processes, and in many cases as a supporting technique for business process reengineering. However, EM architectures and methods for Enterprise Engineering can also be used to support new management techniques like Six Sigma, because these new techniques need a clear, transparent and integrated definition and descript...

  7. Performance evaluation of 2D and 3D deep learning approaches for automatic segmentation of multiple organs on CT images

    Science.gov (United States)

    Zhou, Xiangrong; Yamada, Kazuma; Kojima, Takuya; Takayama, Ryosuke; Wang, Song; Zhou, Xinxin; Hara, Takeshi; Fujita, Hiroshi

    2018-02-01

    The purpose of this study is to evaluate and compare the performance of modern deep learning techniques for automatically recognizing and segmenting multiple organ regions in 3D CT images. CT image segmentation is one of the important tasks in medical image analysis and is still very challenging. Deep learning approaches have demonstrated strong scene recognition and semantic segmentation on natural images and have been applied to segmentation problems in medical images. Although several works have shown promising results for CT image segmentation using deep learning, there has been no comprehensive evaluation of deep learning performance in segmenting multiple organs across different portions of CT scans. In this paper, we evaluated and compared the segmentation performance of two deep learning approaches based on 2D and 3D deep convolutional neural networks (CNNs), with and without a pre-processing step. A conventional approach representing the state of the art in CT image segmentation without deep learning was also used for comparison. A dataset of 240 CT images covering different portions of the human body was used for performance evaluation. Up to 17 types of organ regions in each CT scan were segmented automatically and compared to human annotations using the ratio of intersection over union (IU) as the criterion. The experimental results showed mean IU values of 79% and 67%, averaged over the 17 organ types, for the 3D and 2D deep CNNs, respectively. All the deep learning results showed better accuracy and robustness than the conventional segmentation method based on probabilistic atlas and graph-cut methods. The effectiveness and usefulness of deep learning approaches were demonstrated for the multiple-organ segmentation problem in 3D CT images.
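
    The evaluation criterion used above, intersection over union, can be computed in a few lines. A minimal sketch on toy binary masks (real use would operate on 3D voxel arrays rather than flat lists):

```python
def iou(mask_a, mask_b):
    """Intersection over union of two binary masks given as flat 0/1 lists."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    union = sum(1 for a, b in zip(mask_a, mask_b) if a or b)
    return inter / union if union else 1.0  # two empty masks agree perfectly

pred  = [1, 1, 1, 0, 0, 0]
truth = [0, 1, 1, 1, 0, 0]
score = iou(pred, truth)  # 2 overlapping voxels / 4 voxels in the union = 0.5
```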

  8. Comparison of two methods of surface profile extraction from multiple ultrasonic range measurements

    NARCIS (Netherlands)

    Barshan, B; Baskent, D

    Two novel methods for surface profile extraction based on multiple ultrasonic range measurements are described and compared. One of the methods employs morphological processing techniques, whereas the other employs a spatial voting scheme followed by simple thresholding. Morphological processing

  9. TRANSFER PRICES: MECHANISMS, METHODS AND INTERNATIONAL APPROACHES

    Directory of Open Access Journals (Sweden)

    Pop Cosmina

    2008-05-01

    Full Text Available Transfer prices are the prices paid for goods or services in cross-border transactions between affiliated companies, often significantly reduced or increased in order to avoid the higher tax rates of one jurisdiction. Presently, over 60% of cross-border transfers are intra-group transfers. The paper presents the variety of methods and mechanisms used by companies to transfer funds from one tax jurisdiction to another in order to avoid over-taxation.

  10. Microscopic approach to the generator coordinate method

    International Nuclear Information System (INIS)

    Haider, Q.; Gogny, D.; Weiss, M.S.

    1989-01-01

    In this paper, we solve various theoretical problems associated with calculating the kernel occurring in the Hill-Wheeler integral equations within the framework of the generator coordinate method. In particular, we extend Wick's theorem to nonorthogonal Bogoliubov states. Expressions for the overlap between Bogoliubov states and for the generalized density matrix are also derived. These expressions remain valid even when using an incomplete basis, as is the case in actual calculations. Finally, the Hill-Wheeler formalism is developed for a finite-range interaction and for the Skyrme force, and evaluated for the latter. 20 refs., 1 fig., 4 tabs

  11. Hydrologic extremes - an intercomparison of multiple gridded statistical downscaling methods

    Science.gov (United States)

    Werner, Arelia T.; Cannon, Alex J.

    2016-04-01

    Gridded statistical downscaling methods are the main means of preparing climate model data to drive distributed hydrological models. Past work on the validation of climate downscaling methods has focused on temperature and precipitation, with less attention paid to the ultimate outputs from hydrological models. Also, as attention shifts towards projections of extreme events, downscaling comparisons now commonly assess methods in terms of climate extremes, but hydrologic extremes are less well explored. Here, we test the ability of gridded downscaling models to replicate historical properties of climate and hydrologic extremes, as measured in terms of temporal sequencing (i.e. correlation tests) and distributional properties (i.e. tests for equality of probability distributions). Outputs from seven downscaling methods - bias correction constructed analogues (BCCA), double BCCA (DBCCA), BCCA with quantile mapping reordering (BCCAQ), bias correction spatial disaggregation (BCSD), BCSD using minimum/maximum temperature (BCSDX), the climate imprint delta method (CI), and bias corrected CI (BCCI) - are used to drive the Variable Infiltration Capacity (VIC) model over the snow-dominated Peace River basin, British Columbia. Outputs are tested using split-sample validation on 26 climate extremes indices (ClimDEX) and two hydrologic extremes indices (3-day peak flow and 7-day peak flow). To characterize observational uncertainty, four atmospheric reanalyses are used as climate model surrogates and two gridded observational data sets are used as downscaling target data. The skill of the downscaling methods generally depended on reanalysis and gridded observational data set. However, CI failed to reproduce the distribution and BCSD and BCSDX the timing of winter 7-day low-flow events, regardless of reanalysis or observational data set. Overall, DBCCA passed the greatest number of tests for the ClimDEX indices, while BCCAQ, which is designed to more accurately resolve event
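
    Several of the methods compared above (e.g. BCCAQ, BCSD) rely on quantile mapping as a bias-correction step. A minimal empirical version is sketched below with toy data; it is a generic illustration of the idea (no tail extrapolation, plain linear interpolation), not any specific method's implementation:

```python
from bisect import bisect_left

def quantile_map(model_hist, obs_hist, value):
    """Empirical quantile mapping: locate the quantile of `value` in the
    modelled historical distribution, then return the observed value at the
    same quantile. Simplified sketch for illustration only."""
    mh = sorted(model_hist)
    ob = sorted(obs_hist)
    # empirical CDF position of `value` within the model distribution
    q = bisect_left(mh, value) / (len(mh) - 1)
    q = min(max(q, 0.0), 1.0)
    # read the observed distribution at that quantile (linear interpolation)
    idx = q * (len(ob) - 1)
    lo = int(idx)
    hi = min(lo + 1, len(ob) - 1)
    frac = idx - lo
    return ob[lo] * (1 - frac) + ob[hi] * frac

# Toy example: the model runs 2 units too warm; mapping removes the offset.
model = [t + 2.0 for t in range(10)]   # 2.0 .. 11.0
obs   = [float(t) for t in range(10)]  # 0.0 .. 9.0
corrected = quantile_map(model, obs, 7.0)  # -> 5.0
```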

  12. Multiple regression approach to predict turbine-generator output for Chinshan nuclear power plant

    International Nuclear Information System (INIS)

    Chan, Yea-Kuang; Tsai, Yu-Ching

    2017-01-01

    The objective of this study is to develop a turbine cycle model using the multiple regression approach to estimate the turbine-generator output for the Chinshan Nuclear Power Plant (NPP). The plant operating data was verified using a linear regression model with a corresponding 95% confidence interval for the operating data. In this study, the key parameters were selected as inputs for the multiple regression based turbine cycle model. The proposed model was used to estimate the turbine-generator output. The effectiveness of the proposed turbine cycle model was demonstrated by using plant operating data obtained from the Chinshan NPP Unit 2. The results show that this multiple regression based turbine cycle model can be used to accurately estimate the turbine-generator output. In addition, this study also provides an alternative approach with simple and easy features to evaluate the thermal performance for nuclear power plants.

  13. Multiple regression approach to predict turbine-generator output for Chinshan nuclear power plant

    Energy Technology Data Exchange (ETDEWEB)

    Chan, Yea-Kuang; Tsai, Yu-Ching [Institute of Nuclear Energy Research, Taoyuan City, Taiwan (China). Nuclear Engineering Division

    2017-03-15

    The objective of this study is to develop a turbine cycle model using the multiple regression approach to estimate the turbine-generator output for the Chinshan Nuclear Power Plant (NPP). The plant operating data was verified using a linear regression model with a corresponding 95% confidence interval for the operating data. In this study, the key parameters were selected as inputs for the multiple regression based turbine cycle model. The proposed model was used to estimate the turbine-generator output. The effectiveness of the proposed turbine cycle model was demonstrated by using plant operating data obtained from the Chinshan NPP Unit 2. The results show that this multiple regression based turbine cycle model can be used to accurately estimate the turbine-generator output. In addition, this study also provides an alternative approach with simple and easy features to evaluate the thermal performance for nuclear power plants.
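
    The multiple regression model at the heart of the two abstracts above can be sketched as ordinary least squares solved via the normal equations. Predictor names and data below are synthetic stand-ins, not Chinshan plant parameters or the paper's actual model:

```python
def fit_ols(X, y):
    """Ordinary least squares via the normal equations (X'X) b = X'y, solved
    with Gaussian elimination (partial pivoting). A column of ones is prepended
    for the intercept. Returns [intercept, coef_1, ..., coef_p]."""
    rows = [[1.0] + list(x) for x in X]
    n = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for i in reversed(range(n)):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, n))) / A[i][i]
    return coef

# Synthetic data following output = 10 + 2*param1 + 0.5*param2 exactly.
X = [(1.0, 2.0), (2.0, 1.0), (3.0, 5.0), (4.0, 2.0), (5.0, 8.0)]
y = [10 + 2 * a + 0.5 * b for a, b in X]
coef = fit_ols(X, y)  # recovers ~[10.0, 2.0, 0.5]
```

    With real operating data the same fit yields the turbine-generator output model; the screening step mentioned in the abstracts would discard observations falling outside the regression's confidence band before fitting.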

  14. Improving Students' Creative Thinking and Achievement through the Implementation of Multiple Intelligence Approach with Mind Mapping

    Science.gov (United States)

    Widiana, I. Wayan; Jampel, I. Nyoman

    2016-01-01

    This classroom action research aimed to improve the students' creative thinking and achievement in learning science. It conducted through the implementation of multiple intelligences with mind mapping approach and describing the students' responses. The subjects of this research were the fifth grade students of SD 8 Tianyar Barat, Kubu, and…

  15. The Effectiveness of Using a Multiple Gating Approach to Discriminate among ADHD Subtypes

    Science.gov (United States)

    Simonsen, Brandi M.; Bullis, Michael D.

    2007-01-01

    This study explored the ability of Systematically Progressive Assessment (SPA), a multiple gating approach for assessing students with attention-deficit/hyperactivity disorder (ADHD), to discriminate between subtypes of ADHD. A total of 48 students with ADHD (ages 6-11) were evaluated with three "gates" of assessment. Logistic regression analysis…

  16. Support Operators Method for the Diffusion Equation in Multiple Materials

    Energy Technology Data Exchange (ETDEWEB)

    Winters, Andrew R. [Los Alamos National Laboratory; Shashkov, Mikhail J. [Los Alamos National Laboratory

    2012-08-14

    A second-order finite difference scheme for the solution of the diffusion equation on non-uniform meshes is implemented. The method allows the heat conductivity to be discontinuous. The algorithm is formulated on a one dimensional mesh and is derived using the support operators method. A key component of the derivation is that the discrete analog of the flux operator is constructed to be the negative adjoint of the discrete divergence, in an inner product that is a discrete analog of the continuum inner product. The resultant discrete operators in the fully discretized diffusion equation are symmetric and positive definite. The algorithm is generalized to operate on meshes with cells which have mixed material properties. A mechanism to recover intermediate temperature values in mixed cells using a limited linear reconstruction is introduced. The implementation of the algorithm is verified and the linear reconstruction mechanism is compared to previous results for obtaining new material temperatures.
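
    The key structural property described above, a symmetric positive definite tridiagonal system that tolerates discontinuous conductivity, can be illustrated with a much simpler 1D steady-state sketch. This is a plain finite-difference scheme with per-cell conductivities, not the paper's support-operators or mixed-cell algorithm:

```python
def steady_diffusion(k, t_left, t_right):
    """Steady 1-D heat conduction through a chain of unit-width cells with
    (possibly discontinuous) conductivities k[i] and fixed end temperatures.
    The discrete system is symmetric positive definite and tridiagonal; it is
    solved here with the Thomas algorithm. Returns the len(k)-1 interior
    node temperatures."""
    n = len(k)
    a = [0.0] + [-k[i] for i in range(1, n - 1)]   # sub-diagonal
    d = [k[i] + k[i + 1] for i in range(n - 1)]    # main diagonal
    c = [-k[i + 1] for i in range(n - 2)] + [0.0]  # super-diagonal
    rhs = [0.0] * (n - 1)
    rhs[0] += k[0] * t_left
    rhs[-1] += k[n - 1] * t_right
    for i in range(1, n - 1):                      # forward sweep (Thomas)
        w = a[i] / d[i - 1]
        d[i] -= w * c[i - 1]
        rhs[i] -= w * rhs[i - 1]
    T = [0.0] * (n - 1)
    T[-1] = rhs[-1] / d[-1]
    for i in reversed(range(n - 2)):               # back substitution
        T[i] = (rhs[i] - c[i] * T[i + 1]) / d[i]
    return T

# Two materials in series (k=1, then k=3) with end temperatures 0 and 1:
# the exact node temperatures are 3/8, 3/4, 7/8.
T = steady_diffusion([1.0, 1.0, 3.0, 3.0], t_left=0.0, t_right=1.0)
```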

  17. Computing multiple zeros using a class of quartically convergent methods

    Directory of Open Access Journals (Sweden)

    F. Soleymani

    2013-09-01

    For functions with finitely many real roots in an interval, relatively little literature exists, while in applications users wish to find all the real zeros at once. Hence, the second aim of this paper is to design a fourth-order algorithm, based on the developed methods, to find all the real solutions of a nonlinear equation in an interval using the programming package Mathematica 8.
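
    As context for methods targeting multiple zeros: plain Newton iteration degrades to linear convergence at a root of multiplicity m > 1. A classical remedy, applying Newton to u(x) = f(x)/f'(x), whose zeros are all simple, is sketched below as a stand-in; the paper's fourth-order scheme itself is not reproduced here:

```python
def newton_multiple(f, df, d2f, x0, tol=1e-12, max_iter=100):
    """Find a (possibly multiple) zero of f by applying Newton's method to
    u(x) = f(x)/f'(x). Since u has only simple zeros, quadratic convergence
    is restored even at multiple roots of f."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        if fx == 0.0:
            return x
        # Newton step on u, using u'(x) = 1 - f*f''/f'^2:
        step = fx * dfx / (dfx * dfx - fx * d2f(x))
        x -= step
        if abs(step) < tol:
            return x
    return x

# f(x) = (x - 1)^3 has a triple root at x = 1, where plain Newton crawls.
root = newton_multiple(lambda x: (x - 1) ** 3,
                       lambda x: 3 * (x - 1) ** 2,
                       lambda x: 6 * (x - 1),
                       x0=2.0)  # -> 1.0
```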

  18. Some problems of neutron source multiplication method for site measurement technology in nuclear critical safety

    International Nuclear Information System (INIS)

    Shi Yongqian; Zhu Qingfu; Hu Dingsheng; He Tao; Yao Shigui; Lin Shenghuo

    2004-01-01

    The paper presents the experimental theory and method of the neutron source multiplication method for in-situ measurement in nuclear criticality safety. The parameter actually measured by the source multiplication method is the subcritical, source-driven effective multiplication factor k_s, not the effective multiplication factor k_eff. The experimental research was carried out on a uranium solution criticality safety experiment assembly. The k_s at different subcriticalities was measured by the neutron source multiplication method, while k_eff at the same subcriticalities was obtained as follows: the reactivity coefficient per unit solution level was first measured by the period method, then multiplied by the difference between the critical and subcritical solution levels to give the reactivity at the subcritical level, from which k_eff was extracted via the reactivity formula. The effects on nuclear criticality safety and the differences between k_eff and k_s are discussed.
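
    The point-kinetics relation underlying the source multiplication method can be sketched in a few lines: for a source-driven subcritical system the count rate scales as C ∝ S/(1 − k_s), so k_s follows from a count-rate ratio against a known reference state. All numbers below are illustrative, not the experiment's data:

```python
def ks_from_count_rate(c_ref, k_ref, c_meas):
    """Estimate the source-driven multiplication factor k_s from neutron count
    rates using the point-model relation C proportional to S / (1 - k_s):
    counts grow as criticality is approached. c_ref is the count rate at a
    known reference state k_ref. A real measurement needs detector- and
    mode-dependent corrections, which is exactly why k_s differs from k_eff."""
    return 1.0 - c_ref * (1.0 - k_ref) / c_meas

# Reference state k_s = 0.90; the count rate has since doubled.
ks = ks_from_count_rate(c_ref=1000.0, k_ref=0.90, c_meas=2000.0)  # -> 0.95
```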

  19. A sequential mixed methods research approach to investigating HIV ...

    African Journals Online (AJOL)

    2016-09-03

    Sep 3, 2016 ... Sequential mixed methods research is an effective approach for ... show the effectiveness of the research method. ... qualitative data before quantitative datasets ... whereby both types of data are collected simultaneously.

  20. Users in the Driver's Seat: A New Approach to Classifying Teaching Methods in a University Repository

    NARCIS (Netherlands)

    Neumann, Susanne; Oberhuemer, Petra; Koper, Rob

    2009-01-01

    Neumann, S., Oberhuemer, P., & Koper, R. (2009). Users in the Driver's Seat: A New Approach to Classifying Teaching Methods in a University Repository. In U. Cress, V. Dimitrova & M. Specht (Eds.), Learning in the Synergy of Multiple Disciplines. Proceedings of the Fourth European Conference on

  1. Approach to Multi-Criteria Group Decision-Making Problems Based on the Best-Worst-Method and ELECTRE Method

    Directory of Open Access Journals (Sweden)

    Xinshang You

    2016-09-01

    Full Text Available This paper proposes a novel approach to multi-criteria group decision-making problems. We base the pairwise comparisons on the best-worst method (BWM), which reduces the number of comparisons required, and our comparison results capture both positive and negative aspects. To handle the decision matrices effectively, we apply the elimination and choice translating reality (ELECTRE III) method in the intuitionistic multiplicative preference relation environment. The ELECTRE III method is designed as a double-automatic system: within a certain limit, and without asking the decision-makers to reevaluate the alternatives, this system can adjust the special elements that most influence the group’s satisfaction degree. Moreover, through a transformation formula, the proposed method suits both intuitionistic multiplicative preference relations and interval-valued fuzzy preference relations. An illustrative example demonstrates the rationality and applicability of the novel method.

  2. Improved exact method for the double TSP with multiple stacks

    DEFF Research Database (Denmark)

    Lusby, Richard Martin; Larsen, Jesper

    2011-01-01

    ... the first delivery, and the container cannot be repacked once packed. In this paper we improve the previously proposed exact method of Lusby et al. (Int Trans Oper Res 17 (2010), 637–652) through an additional preprocessing technique that uses the longest common subsequence between the respective pickup and delivery problems. The results suggest an impressive improvement, and we report, for the first time, optimal solutions to several unsolved instances from the literature containing 18 customers. Instances with 28 customers are also shown to be solvable within a few percent of optimality. © 2011 Wiley

  3. Segmenting Multiple Sclerosis Lesions using a Spatially Constrained K-Nearest Neighbour approach

    DEFF Research Database (Denmark)

    Lyksborg, Mark; Larsen, Rasmus; Sørensen, Per Soelberg

    2012-01-01

    We propose a method for the segmentation of Multiple Sclerosis lesions. The method is based on probability maps derived from a K-Nearest Neighbours classification. These are used as a non-parametric likelihood in a Bayesian formulation with a prior that assumes connectivity of neighbouring voxels. ...

  4. Combining multiple FDG-PET radiotherapy target segmentation methods to reduce the effect of variable performance of individual segmentation methods

    Energy Technology Data Exchange (ETDEWEB)

    McGurk, Ross J. [Medical Physics Graduate Program, Duke University, Durham, North Carolina 27705 (United States); Bowsher, James; Das, Shiva K. [Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina 27705 (United States); Lee, John A [Molecular Imaging and Experimental Radiotherapy Unit, Universite Catholique de Louvain, 1200 Brussels (Belgium)

    2013-04-15

    different between 128 × 128 and 256 × 256 grid sizes for either method (MJV, p = 0.0519; STAPLE, p = 0.5672) but was for SMASD values (MJV, p < 0.0001; STAPLE, p = 0.0164). The best individual method varied depending on object characteristics. However, both MJV and STAPLE provided essentially equivalent accuracy to using the best independent method in every situation, with mean differences in DSC of 0.01-0.03, and 0.05-0.12 mm for SMASD. Conclusions: Combining segmentations offers a robust approach to object segmentation in PET. Both MJV and STAPLE improved accuracy and were robust against the widely varying performance of individual segmentation methods. Differences between MJV and STAPLE are such that either offers good performance when combining volumes. Neither method requires a training dataset, but MJV is simpler to interpret, easy to implement and fast.
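
    The majority-vote (MJV) combination evaluated above is straightforward to sketch, together with the Dice similarity coefficient (DSC) used as the accuracy metric. Toy 1D masks stand in for PET volumes:

```python
def majority_vote(masks):
    """Combine binary segmentations by simple majority vote (MJV): a voxel is
    foreground if more than half of the input methods mark it. Masks are flat
    0/1 lists; real use would operate on image arrays."""
    n = len(masks)
    return [1 if sum(col) * 2 > n else 0 for col in zip(*masks)]

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    inter = sum(x and y for x, y in zip(a, b))
    return 2 * inter / (sum(a) + sum(b))

seg1 = [1, 1, 0, 0, 1]
seg2 = [1, 0, 0, 1, 1]
seg3 = [1, 1, 1, 0, 0]
combined = majority_vote([seg1, seg2, seg3])  # -> [1, 1, 0, 0, 1]
```

    STAPLE replaces the equal votes with iteratively estimated per-method sensitivity and specificity weights, which is why it needs no training data either but is harder to interpret.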

  5. Effective multiplication factor measurement by feynman-α method. 3

    International Nuclear Information System (INIS)

    Mouri, Tomoaki; Ohtani, Nobuo

    1998-06-01

    The sub-criticality monitoring system has been developed for criticality safety control in nuclear fuel handling plants. Past experiments performed with the Deuterium Critical Assembly (DCA) confirmed that detection of sub-criticality was possible down to k_eff = 0.3. To investigate the applicability of the method to a more general system, experiments were performed in the light-water-moderated system of the modified DCA core. These experiments confirmed that the prompt decay constant (α), an index of sub-criticality, could be detected between k_eff = 0.623 and k_eff = 0.870, and that differences of 0.05 - 0.1 Δk could be distinguished. The α values were numerically calculated with the 2D transport code TWODANT and the Monte Carlo code KENO V.a, and the results were compared with the measured values. The differences between calculated and measured values proved to be less than 13%, sufficient accuracy for the sub-criticality monitoring system. It was confirmed that the Feynman-α method is applicable to sub-criticality measurement of light-water-moderated systems. (author)
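
    The Feynman-α technique referenced above rests on the variance-to-mean ratio of detector counts collected in time gates of varying width. A minimal sketch of the gate-counting and Y(T) computation is given below; the final step, fitting α from the Y(T) curve, is omitted:

```python
def feynman_y(counts):
    """Feynman-Y (variance-to-mean ratio minus one) of a list of gate counts."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)
    return var / mean - 1.0

def y_curve(event_times, gates, t_total):
    """Rebin detector event times into gates of width T and return (T, Y(T))
    pairs. For a subcritical core, Y(T) follows
    Y = A * (1 - (1 - exp(-alpha*T)) / (alpha*T)), from which the prompt decay
    constant alpha is fitted (fitting step not shown in this sketch)."""
    out = []
    for T in gates:
        nbins = int(t_total / T)
        counts = [0] * nbins
        for t in event_times:
            b = int(t / T)
            if b < nbins:
                counts[b] += 1
        out.append((T, feynman_y(counts)))
    return out
```

    For purely uncorrelated (Poisson) events Y tends to zero; correlated fission chains push it above zero, and the gate-width dependence carries the α information.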

  6. A simple method for combining genetic mapping data from multiple crosses and experimental designs.

    Directory of Open Access Journals (Sweden)

    Jeremy L Peirce

    Full Text Available BACKGROUND: Over the past decade many linkage studies have defined chromosomal intervals containing polymorphisms that modulate a variety of traits. Many phenotypes are now associated with enough mapping data that meta-analysis could help refine locations of known QTLs and detect many novel QTLs. METHODOLOGY/PRINCIPAL FINDINGS: We describe a simple approach to combining QTL mapping results for multiple studies and demonstrate its utility using two hippocampus weight loci. Using data taken from two populations, a recombinant inbred strain set and an advanced intercross population, we demonstrate considerable improvements in significance and resolution for both loci. 1-LOD support intervals were improved 51% for Hipp1a and 37% for Hipp9a. We first generate locus-wise permuted P-values for association with the phenotype from multiple maps, which can be done using a permutation method appropriate to each population. These results are then assigned to defined physical positions by interpolation between markers with known physical and genetic positions. We then use Fisher's combination test to combine position-by-position probabilities among experiments. Finally, we calculate genome-wide combined P-values by generating locus-specific P-values for each permuted map for each experiment. These permuted maps are then sampled with replacement and combined. The distribution of best locus-specific P-values for each combined map is the null distribution of genome-wide adjusted P-values. CONCLUSIONS/SIGNIFICANCE: Our approach is applicable to a wide variety of segregating and non-segregating mapping populations, facilitates rapid refinement of physical QTL position, is complementary to other QTL fine-mapping methods, and provides an appropriate genome-wide criterion of significance for combined mapping results.
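
    Fisher's combination test, the core of the meta-analysis step described above, combines k independent P-values via X = −2 Σ ln p_i, which is chi-square distributed with 2k degrees of freedom under the null. Since the degrees of freedom are even, the chi-square survival function has a closed form, so a sketch needs only the standard library:

```python
import math

def fisher_combine(pvalues):
    """Fisher's combination test. Returns (X, p) where X = -2 * sum(ln p_i) is
    chi-square with 2k degrees of freedom under the null. For even df = 2k the
    survival function is exp(-x/2) * sum((x/2)**i / i! for i < k)."""
    k = len(pvalues)
    x = -2.0 * sum(math.log(p) for p in pvalues)
    half = x / 2.0
    term, sf = 1.0, 0.0
    for i in range(k):
        sf += term
        term *= half / (i + 1)
    return x, math.exp(-half) * sf

# Combine position-wise P-values from two hypothetical mapping populations.
stat, p_comb = fisher_combine([0.01, 0.04])  # p_comb ~ 0.0035
```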

  7. Carbon balance assessment of a natural steppe of southern Siberia by multiple constraint approach

    Directory of Open Access Journals (Sweden)

    L. Belelli Marchesini

    2007-08-01

    Full Text Available Steppe ecosystems represent an interesting case in which the assessment of carbon balance may be performed through a cross validation of the eddy covariance measurements against ecological inventory estimates of carbon exchanges (Ehman et al., 2002; Curtis et al., 2002.

    Indeed, the widespread presence of ideal conditions for the applicability of the eddy covariance technique, such as vast and homogeneous grass vegetation cover over flat terrain (Baldocchi, 2003, makes steppes a suitable ground for constraining flux estimates with independent methodological approaches.

    We report an analysis of the carbon cycle of a true steppe ecosystem in southern Siberia during the growing season of 2004, performed in the framework of the TCOS-Siberia project through continuous monitoring of CO2 fluxes at the ecosystem scale by the eddy covariance method, fortnightly samplings of phytomass and ingrowth-core extractions for NPP assessment, and weekly measurements of the heterotrophic component of soil CO2 efflux obtained by a root-exclusion experiment.

    The carbon balance of the monitored natural steppe was, according to micrometeorological measurements, a sink of carbon of 151.7±36.9 g C m−2, cumulated during the growing season from May to September. This result was in agreement with the independent estimate through ecological inventory, which yielded a sink of 150.1 g C m−2, although this method was characterized by a large uncertainty (±130%, considering the 95% confidence interval of the estimate). Uncertainties in belowground process estimates account for a large part of the error. Thus, efforts to better quantify the dynamics of root biomass (growth and turnover) in particular have to be undertaken in order to reduce the uncertainties in the assessment of NPP. This assessment should preferably be based on the application of multiple methods, each one characterized by its

  8. A Monte Carlo Study on Multiple Output Stochastic Frontiers: Comparison of Two Approaches

    DEFF Research Database (Denmark)

    Henningsen, Geraldine; Henningsen, Arne; Jensen, Uwe

    In the estimation of multiple output technologies in a primal approach, the main question is how to handle the multiple outputs. Often an output distance function is used, where the classical approach is to exploit its homogeneity property by selecting one output quantity as the dependent variable, dividing all other output quantities by the selected output quantity, and using these ratios as regressors (OD). Another approach is the stochastic ray production frontier (SR) which transforms the output quantities into their Euclidean distance as the dependent variable and their polar coordinates ... of both specifications for the case of a Translog output distance function with respect to different common statistical problems as well as problems arising as a consequence of zero values in the output quantities. Although our results partly show clear reactions to statistical misspecifications...

  9. Gaussian Multiple Instance Learning Approach for Mapping the Slums of the World Using Very High Resolution Imagery

    Energy Technology Data Exchange (ETDEWEB)

    Vatsavai, Raju [ORNL

    2013-01-01

    In this paper, we present a computationally efficient algorithm based on multiple instance learning for mapping informal settlements (slums) using very high-resolution remote sensing imagery. From a remote sensing perspective, informal settlements share unique spatial characteristics that distinguish them from other urban structures like industrial, commercial, and formal residential settlements. However, regular pattern recognition and machine learning methods, which are predominantly single-instance or per-pixel classifiers, often fail to accurately map the informal settlements as they do not capture the complex spatial patterns. To overcome these limitations we employed a multiple instance based machine learning approach, where groups of contiguous pixels (image patches) are modeled as generated by a Gaussian distribution. We have conducted several experiments on very high-resolution satellite imagery, representing four unique geographic regions across the world. Our method showed consistent improvement in accurately identifying informal settlements.
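
    The core idea, scoring a bag of contiguous pixels under class-conditional Gaussians instead of classifying pixels one at a time, can be conveyed with a toy sketch. The pixel distributions, patch sizes and class names below are invented for illustration; the paper's actual model is richer:

```python
import math, random

def fit_gaussian(patches):
    """Pool the pixels of all training patches for one class; fit mu, sigma^2."""
    pixels = [v for patch in patches for v in patch]
    mu = sum(pixels) / len(pixels)
    var = sum((v - mu) ** 2 for v in pixels) / len(pixels)
    return mu, var

def patch_loglik(patch, mu, var):
    """Log-likelihood of a whole patch (the 'bag' of pixels) under N(mu, var)."""
    return sum(-0.5 * math.log(2 * math.pi * var) - (v - mu) ** 2 / (2 * var)
               for v in patch)

random.seed(0)
# Toy 5x5 intensity patches: one class is bright and dense, the other darker.
slum = [[random.gauss(0.8, 0.1) for _ in range(25)] for _ in range(20)]
formal = [[random.gauss(0.3, 0.1) for _ in range(25)] for _ in range(20)]
params = {"slum": fit_gaussian(slum), "formal": fit_gaussian(formal)}

# Classify an unseen patch by the class whose Gaussian explains it best.
test_patch = [random.gauss(0.8, 0.1) for _ in range(25)]
label = max(params, key=lambda c: patch_loglik(test_patch, *params[c]))
```

    The decision is made at the patch level, so a few atypical pixels cannot flip the label the way they can with a per-pixel classifier.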

  10. A Network Pharmacology Approach to Uncover the Multiple Mechanisms of Hedyotis diffusa Willd. on Colorectal Cancer

    Directory of Open Access Journals (Sweden)

    Xinkui Liu

    2018-01-01

    Full Text Available Background. As one of the most frequently diagnosed cancers globally, colorectal cancer (CRC) remains an important cause of cancer-related death. Although the traditional Chinese herb Hedyotis diffusa Willd. (HDW) has been proven to be effective for treating CRC in clinical practice, its definite mechanisms have not been completely deciphered. Objective. The aim of our research is to systematically explore the multiple mechanisms of HDW on CRC. Methods. This study adopted the network pharmacology approach, which was mainly composed of active component gathering, target prediction, CRC gene collection, network analysis, and gene enrichment analysis. Results. The network analysis showed that 10 targets might be the therapeutic targets of HDW on CRC, namely, HRAS, PIK3CA, KRAS, TP53, APC, BRAF, GSK3B, CDK2, AKT1, and RAF1. The gene enrichment analysis implied that HDW probably benefits patients with CRC by modulating pathways related to cancers, infectious diseases, endocrine system, immune system, nervous system, signal transduction, cellular community, and cell motility. Conclusions. This study partially verified and predicted the pharmacological and molecular mechanism of HDW against CRC from a holistic perspective, which will also lay a foundation for further experimental research and rational clinical application of HDW.

  11. Analyzing Statistical Mediation with Multiple Informants: A New Approach with an Application in Clinical Psychology.

    Science.gov (United States)

    Papa, Lesther A; Litson, Kaylee; Lockhart, Ginger; Chassin, Laurie; Geiser, Christian

    2015-01-01

    Testing mediation models is critical for identifying potential variables that need to be targeted to effectively change one or more outcome variables. In addition, it is now common practice for clinicians to use multiple informant (MI) data in studies of statistical mediation. By coupling the use of MI data with statistical mediation analysis, clinical researchers can combine the benefits of both techniques. Integrating the information from MIs into a statistical mediation model creates various methodological and practical challenges. The authors review prior methodological approaches to MI mediation analysis in clinical research and propose a new latent variable approach that overcomes some limitations of prior approaches. An application of the new approach to mother, father, and child reports of impulsivity, frustration tolerance, and externalizing problems (N = 454) is presented. The results showed that frustration tolerance mediated the relationship between impulsivity and externalizing problems. The new approach allows for a more comprehensive and effective use of MI data when testing mediation models.
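
    The single-informant mediation logic underlying the approach can be sketched with simulated data: the indirect effect is the product of the X→M path (a) and the M→Y path controlling for X (b). The variable roles mirror the application (impulsivity → frustration tolerance → externalizing problems), but the coefficients and sample below are simulated, not estimates from the study:

```python
import random

random.seed(42)
n = 2000
# Simulated data: impulsivity (x) -> frustration tolerance (m)
# -> externalizing problems (y). Assumed true paths: a = 0.5, b = 0.6, c' = 0.2.
x = [random.gauss(0, 1) for _ in range(n)]
m = [0.5 * xi + random.gauss(0, 1) for xi in x]
y = [0.6 * mi + 0.2 * xi + random.gauss(0, 1) for xi, mi in zip(x, m)]

def cov(u, v):
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    return sum((ui - mu) * (vi - mv) for ui, vi in zip(u, v)) / len(u)

# Path a: simple regression of m on x.
a_hat = cov(x, m) / cov(x, x)
# Paths b and c': regress y on m and x (solve the 2x2 normal equations).
smm, sxx, sxm = cov(m, m), cov(x, x), cov(x, m)
sym, syx = cov(y, m), cov(y, x)
det = smm * sxx - sxm * sxm
b_hat = (sym * sxx - syx * sxm) / det
indirect = a_hat * b_hat  # estimated mediated effect of x on y through m
```

    The latent variable approach in the article generalizes exactly this product-of-paths quantity to the case where x, m, and y are each measured by several informants.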

  12. Towards Multi-Method Research Approach in Empirical Software Engineering

    Science.gov (United States)

    Mandić, Vladimir; Markkula, Jouni; Oivo, Markku

    This paper presents results of a literature analysis on Empirical Research Approaches in Software Engineering (SE). The analysis explores reasons why traditional methods, such as statistical hypothesis testing and experiment replication, are weakly utilized in the field of SE. It appears that the basic assumptions and preconditions of the traditional methods contradict the actual situation in SE. Furthermore, we have identified the main issues that should be considered by the researcher when selecting a research approach. Given the reasons for the weak utilization of traditional methods, we propose stronger use of a multi-method approach with pragmatism as the philosophical standpoint.

  13. Approaching complexity by stochastic methods: From biological systems to turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Friedrich, Rudolf [Institute for Theoretical Physics, University of Muenster, D-48149 Muenster (Germany); Peinke, Joachim [Institute of Physics, Carl von Ossietzky University, D-26111 Oldenburg (Germany); Sahimi, Muhammad [Mork Family Department of Chemical Engineering and Materials Science, University of Southern California, Los Angeles, CA 90089-1211 (United States); Reza Rahimi Tabar, M., E-mail: mohammed.r.rahimi.tabar@uni-oldenburg.de [Department of Physics, Sharif University of Technology, Tehran 11155-9161 (Iran, Islamic Republic of); Institute of Physics, Carl von Ossietzky University, D-26111 Oldenburg (Germany); Fachbereich Physik, Universitaet Osnabrueck, Barbarastrasse 7, 49076 Osnabrueck (Germany)

    2011-09-15

    This review addresses a central question in the field of complex systems: given a fluctuating (in time or space), sequentially measured set of experimental data, how should one analyze the data, assess their underlying trends, and discover the characteristics of the fluctuations that generate the experimental traces? In recent years, significant progress has been made in addressing this question for a class of stochastic processes that can be modeled by Langevin equations, including additive as well as multiplicative fluctuations or noise. Important results have emerged from the analysis of temporal data for such diverse fields as neuroscience, cardiology, finance, economy, surface science, turbulence, seismic time series and epileptic brain dynamics, to name but a few. Furthermore, it has been recognized that a similar approach can be applied to the data that depend on a length scale, such as velocity increments in fully developed turbulent flow, or height increments that characterize rough surfaces. A basic ingredient of the approach to the analysis of fluctuating data is the presence of a Markovian property, which can be detected in real systems above a certain time or length scale. This scale is referred to as the Markov-Einstein (ME) scale, and has turned out to be a useful characteristic of complex systems. We provide a review of the operational methods that have been developed for analyzing stochastic data in time and scale. We address in detail the following issues: (i) reconstruction of stochastic evolution equations from data in terms of the Langevin equations or the corresponding Fokker-Planck equations and (ii) intermittency, cascades, and multiscale correlation functions.
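
    Issue (i), reconstructing a Langevin equation from measured data, amounts to estimating the first two conditional moments of the increments, i.e. the Kramers-Moyal coefficients. A minimal sketch on a simulated Ornstein-Uhlenbeck process, with parameters chosen purely for illustration:

```python
import math, random

random.seed(1)
# Simulate an Ornstein-Uhlenbeck process dX = -a*X dt + sqrt(2*D) dW
# (a and D are illustrative choices, not values from the review).
a, D, dt, n = 1.0, 0.5, 0.01, 200_000
x, traj = 0.0, []
for _ in range(n):
    traj.append(x)
    x += -a * x * dt + math.sqrt(2 * D * dt) * random.gauss(0, 1)

def km_coefficients(traj, dt, x0, width=0.1):
    """Kramers-Moyal estimates at x0: D1 = <dX | x>/dt, D2 = <dX^2 | x>/(2 dt)."""
    incs = [traj[i + 1] - traj[i]
            for i in range(len(traj) - 1) if abs(traj[i] - x0) < width]
    d1 = sum(incs) / (len(incs) * dt)
    d2 = sum(d * d for d in incs) / (2 * len(incs) * dt)
    return d1, d2

d1, d2 = km_coefficients(traj, dt, x0=0.5)
# For an OU process the drift is D1(x) = -a*x and the diffusion is D2(x) = D.
```

    Repeating the estimate over a grid of x0 values recovers the full drift and diffusion functions, from which the Langevin or Fokker-Planck description can be written down; the review discusses the conditions (notably the Markov-Einstein scale) under which this is valid.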

  14. Approaching complexity by stochastic methods: From biological systems to turbulence

    International Nuclear Information System (INIS)

    Friedrich, Rudolf; Peinke, Joachim; Sahimi, Muhammad; Reza Rahimi Tabar, M.

    2011-01-01

    This review addresses a central question in the field of complex systems: given a fluctuating (in time or space), sequentially measured set of experimental data, how should one analyze the data, assess their underlying trends, and discover the characteristics of the fluctuations that generate the experimental traces? In recent years, significant progress has been made in addressing this question for a class of stochastic processes that can be modeled by Langevin equations, including additive as well as multiplicative fluctuations or noise. Important results have emerged from the analysis of temporal data for such diverse fields as neuroscience, cardiology, finance, economy, surface science, turbulence, seismic time series and epileptic brain dynamics, to name but a few. Furthermore, it has been recognized that a similar approach can be applied to the data that depend on a length scale, such as velocity increments in fully developed turbulent flow, or height increments that characterize rough surfaces. A basic ingredient of the approach to the analysis of fluctuating data is the presence of a Markovian property, which can be detected in real systems above a certain time or length scale. This scale is referred to as the Markov-Einstein (ME) scale, and has turned out to be a useful characteristic of complex systems. We provide a review of the operational methods that have been developed for analyzing stochastic data in time and scale. We address in detail the following issues: (i) reconstruction of stochastic evolution equations from data in terms of the Langevin equations or the corresponding Fokker-Planck equations and (ii) intermittency, cascades, and multiscale correlation functions.

  15. Curvelet-domain multiple matching method combined with cubic B-spline function

    Science.gov (United States)

    Wang, Tong; Wang, Deli; Tian, Mi; Hu, Bin; Liu, Chengming

    2018-05-01

    Since the large number of surface-related multiples in marine data would seriously influence the results of data processing and interpretation, many researchers have attempted to develop effective methods to remove them. The most successful surface-related multiple elimination method was proposed based on data-driven theory. However, the elimination effect was unsatisfactory due to the existence of amplitude and phase errors. Although the subsequent curvelet-domain multiple-primary separation method achieved better results, poor computational efficiency prevented its application. In this paper, we adopt the cubic B-spline function to improve the traditional curvelet multiple matching method. First, we select a small number of unknowns as the basis points of the matching coefficient; second, we apply the cubic B-spline function to these basis points to reconstruct the matching array; third, we build the constraint-solving equation based on the relationships among the predicted multiples, the matching coefficients, and the actual data; finally, we use the BFGS algorithm to iterate and realize the fast-solving sparse-constraint multiple matching algorithm. Moreover, the soft-threshold method is used to make the method perform better. With the cubic B-spline function, the differences between the predicted multiples and the original data diminish, which results in less processing time to obtain optimal solutions and fewer iterative loops in the solving procedure based on the L1-norm constraint. The applications to synthetic and field-derived data both validate the practicability and validity of the method.
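
    The second step, expanding a handful of basis-point coefficients into a dense, smooth matching array, can be sketched with the Cox-de Boor recursion for cubic B-splines. The coefficient values and knot vector below are hypothetical, chosen only to show the mechanics:

```python
def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion for the i-th B-spline basis function of degree k."""
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + k] != knots[i]:
        left = ((t - knots[i]) / (knots[i + k] - knots[i])
                * bspline_basis(i, k - 1, t, knots))
    if knots[i + k + 1] != knots[i + 1]:
        right = ((knots[i + k + 1] - t) / (knots[i + k + 1] - knots[i + 1])
                 * bspline_basis(i + 1, k - 1, t, knots))
    return left + right

# Five matching coefficients at basis points (hypothetical values) are expanded
# into a dense, smooth matching array with a clamped cubic B-spline.
c = [1.0, 0.8, 1.2, 0.9, 1.1]
knots = [0, 0, 0, 0, 1, 2, 2, 2, 2]  # clamped: n + degree + 1 = 9 knots
curve = [sum(c[i] * bspline_basis(i, 3, t, knots) for i in range(len(c)))
         for t in (j * 2 / 100 for j in range(100))]
```

    Because only the few basis-point coefficients are unknowns in the inversion, the BFGS iteration in the paper operates in a much smaller space than a point-by-point matching filter would require.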

  16. Rapid descriptive sensory methods – Comparison of Free Multiple Sorting, Partial Napping, Napping, Flash Profiling and conventional profiling

    DEFF Research Database (Denmark)

    Dehlholm, Christian; Brockhoff, Per B.; Meinert, Lene

    2012-01-01

    Two new rapid descriptive sensory evaluation methods are introduced to the field of food sensory evaluation. The first method, free multiple sorting, allows subjects to perform ad libitum free sortings, until they feel that no more relevant dissimilarities among products remain. The second method is a modal restriction of Napping to specific sensory modalities, directing sensation and still allowing a holistic approach to products. The new methods are compared to Flash Profiling, Napping and conventional descriptive sensory profiling. Evaluations are performed by several panels of expert assessors ... are applied for the graphical validation and comparisons. This allows similar comparisons and is applicable to single-block evaluation designs such as Napping. The partial Napping allows repetitions on multiple sensory modalities, e.g. appearance, taste and mouthfeel, and shows the average...

  17. Multiple-target method for sputtering amorphous films for bubble-domain devices

    International Nuclear Information System (INIS)

    Burilla, C.T.; Bekebrede, W.R.; Smith, A.B.

    1976-01-01

    Previously, sputtered amorphous metal alloys for bubble applications have ordinarily been prepared by standard sputtering techniques using a single target electrode. The deposition of these alloys is reported using a multiple target rf technique in which a separate target is used for each element contained in the alloy. One of the main advantages of this multiple-target approach is that the film composition can be easily changed by simply varying the voltages applied to the elemental targets. In the apparatus, the centers of the targets are positioned on a 15 cm-radius circle. The platform holding the film substrate is on a 15 cm-long arm which can rotate about the center, thus bringing the sample successively under each target. The platform rotation rate is adjustable from 0 to 190 rpm. That this latter speed is sufficient to homogenize the alloys produced is demonstrated by measurements made of the uniaxial anisotropy constant in Gd0.12Co0.59Cu0.29 films. The anisotropy is 6.0 x 10^5 ergs/cm^3 and independent of rotation rate above approximately 25 rpm, but it drops rapidly for slower rotation rates, reaching 1.8 x 10^5 ergs/cm^3 for 7 rpm. The film quality is equal to that of films made by conventional methods. Coercivities of a few oersteds in samples with stripe widths of 1 to 2 μm and magnetizations of 800 to 2800 G were observed.

  18. Using Module Analysis for Multiple Choice Responses: A New Method Applied to Force Concept Inventory Data

    Science.gov (United States)

    Brewe, Eric; Bruun, Jesper; Bearden, Ian G.

    2016-01-01

    We describe "Module Analysis for Multiple Choice Responses" (MAMCR), a new methodology for carrying out network analysis on responses to multiple choice assessments. This method is used to identify modules of non-normative responses which can then be interpreted as an alternative to factor analysis. MAMCR allows us to identify conceptual…

  19. 29 CFR 4010.12 - Alternative method of compliance for certain sponsors of multiple employer plans.

    Science.gov (United States)

    2010-07-01

    ... BENEFIT GUARANTY CORPORATION CERTAIN REPORTING AND DISCLOSURE REQUIREMENTS ANNUAL FINANCIAL AND ACTUARIAL INFORMATION REPORTING § 4010.12 Alternative method of compliance for certain sponsors of multiple employer... part for an information year if any contributing sponsor of the multiple employer plan provides a...

  20. Multiple Feature Fusion Based on Co-Training Approach and Time Regularization for Place Classification in Wearable Video

    Directory of Open Access Journals (Sweden)

    Vladislavs Dovgalecs

    2013-01-01

    Full Text Available The analysis of video acquired with a wearable camera is a challenge that the multimedia community is facing with the proliferation of such sensors in various applications. In this paper, we focus on the problem of automatic visual place recognition in a weakly constrained environment, targeting the indexing of video streams by topological place recognition. We propose to combine several machine learning approaches in a time-regularized framework for image-based place recognition indoors. The framework combines the power of multiple visual cues and integrates the temporal continuity information of video. We extend it with a computationally efficient semi-supervised method leveraging unlabeled video sequences for improved indexing performance. The proposed approach was applied to challenging video corpora. Experiments on public and real-world video sequence databases show the gain brought by the different stages of the method.

  1. Parallelised Krylov subspace method for reactor kinetics by IQS approach

    International Nuclear Information System (INIS)

    Gupta, Anurag; Modak, R.S.; Gupta, H.P.; Kumar, Vinod; Bhatt, K.

    2005-01-01

    Nuclear reactor kinetics involves numerical solution of the space-time-dependent multi-group neutron diffusion equation. Two distinct approaches exist for this purpose: the direct (implicit time differencing) approach and the improved quasi-static (IQS) approach. Both approaches need the solution of static space-energy-dependent diffusion equations at successive time-steps, the step being relatively smaller for the direct approach. These solutions are usually obtained by Gauss-Seidel type iterative methods. For a faster solution, the Krylov subspace methods have been tried and also parallelised by many investigators. However, these studies seem to have been done only for the direct approach. In the present paper, parallelised Krylov methods are applied to the IQS approach in addition to the direct approach. It is shown that the speed-up obtained for IQS is higher than that for the direct approach. The reasons for this are also discussed. Thus, the use of the IQS approach along with parallelised Krylov solvers seems to be a promising scheme.

  2. Field theoretical approach to proton-nucleus reactions: II-Multiple-step excitation process

    International Nuclear Information System (INIS)

    Eiras, A.; Kodama, T.; Nemes, M.

    1989-01-01

    A field theoretical formulation of the multiple-step excitation process in proton-nucleus collisions within the context of a relativistic eikonal approach is presented. A closed-form expression for the double differential cross section can be obtained, whose structure is very simple and makes the physics transparent. Glauber's formulation of the same process is obtained as a limit of ours, and the necessary approximations are studied and discussed. (author) [pt

  3. MULTIPLE CRITERIA DECISION MAKING APPROACH FOR INDUSTRIAL ENGINEER SELECTION USING FUZZY AHP-FUZZY TOPSIS

    OpenAIRE

    Deliktaş, Derya; ÜSTÜN, Özden

    2018-01-01

    In this study, a fuzzy multiple criteria decision-making approach is proposed to select an industrial engineer among ten candidates in a manufacturing environment. The industrial engineer selection problem is a special case of the personal selection problem. This problem, which has hierarchical structure of criteria and many decision makers, contains many criteria. The evaluation process of decision makers also includes ambiguous parameters. The fuzzy AHP is used to determin...

  4. A quantitative approach to choose among multiple mutually exclusive decisions: comparative expected utility theory

    OpenAIRE

    Zhu, Pengyu

    2018-01-01

    Mutually exclusive decisions have been studied for decades. Many well-known decision theories have been defined to help people either to make rational decisions or to interpret people's behaviors, such as expected utility theory, regret theory, prospect theory, and so on. The paper argues that none of these decision theories are designed to provide practical, normative and quantitative approaches for multiple mutually exclusive decisions. Different decision-makers should naturally make differ...

  5. Trace element analysis of environmental samples by multiple prompt gamma-ray analysis method

    International Nuclear Information System (INIS)

    Oshima, Masumi; Matsuo, Motoyuki; Shozugawa, Katsumi

    2011-01-01

    The multiple γ-ray detection method has proved to be a high-resolution and high-sensitivity method for nuclide quantification. The neutron prompt γ-ray analysis method is successfully extended by combining it with multiple γ-ray detection, a combination called multiple prompt γ-ray analysis (MPGA). In this review we show the principle of this method and its characteristics. Several examples of its application to environmental samples, especially river sediments in the urban area and sea sediment samples, are also described. (author)

  6. Statistical Methods for Magnetic Resonance Image Analysis with Applications to Multiple Sclerosis

    Science.gov (United States)

    Pomann, Gina-Maria

    Multiple sclerosis (MS) is an immune-mediated neurological disease that causes disability and morbidity. In patients with MS, the accumulation of lesions in the white matter of the brain is associated with disease progression and worse clinical outcomes. In the first part of the dissertation, we present methodology to compare the brain anatomy of patients with MS and controls. A nonparametric testing procedure is proposed for testing the null hypothesis that two samples of curves observed at discrete grids and with noise have the same underlying distribution. We propose to decompose the curves using functional principal component analysis of an appropriate mixture process, which we refer to as marginal functional principal component analysis. This approach reduces the dimension of the testing problem in a way that enables the use of traditional nonparametric univariate testing procedures. The procedure is computationally efficient and accommodates different sampling designs. Numerical studies are presented to validate the size and power properties of the test in many realistic scenarios. In these cases, the proposed test is more powerful than its primary competitor. The proposed methodology is illustrated on a state-of-the-art diffusion tensor imaging study, where the objective is to compare white matter tract profiles in healthy individuals and MS patients. In the second part of the thesis, we present methods to study the behavior of MS in the white matter of the brain. Breakdown of the blood-brain barrier in newer lesions is indicative of more active disease-related processes and is a primary outcome considered in clinical trials of treatments for MS. Such abnormalities in active MS lesions are evaluated in vivo using contrast-enhanced structural magnetic resonance imaging (MRI), during which patients receive an intravenous infusion of a costly magnetic contrast agent. In some instances, the contrast agents can have toxic effects. Recently, local
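
    The flavor of a two-sample test on noisy curves observed at a discrete grid can be conveyed by a simple permutation test on an integrated-distance statistic. This is a deliberately simplified stand-in, not the marginal FPCA-based procedure the dissertation develops, and all data below are simulated:

```python
import random

random.seed(7)
grid = [i / 20 for i in range(21)]

def noisy_curve(shift):
    """One subject's curve, observed with noise on a discrete grid."""
    return [t * (1 - t) + shift + random.gauss(0, 0.05) for t in grid]

group_a = [noisy_curve(0.0) for _ in range(30)]
group_b = [noisy_curve(0.08) for _ in range(30)]  # shifted mean function

def stat(a, b):
    """Integrated squared distance between the two sample mean curves."""
    mean_a = [sum(c[j] for c in a) / len(a) for j in range(len(grid))]
    mean_b = [sum(c[j] for c in b) / len(b) for j in range(len(grid))]
    return sum((u - v) ** 2 for u, v in zip(mean_a, mean_b)) / len(grid)

obs = stat(group_a, group_b)
pooled = group_a + group_b
perm = []
for _ in range(500):
    random.shuffle(pooled)  # re-assign group labels at random
    perm.append(stat(pooled[:30], pooled[30:]))
p_value = sum(s >= obs for s in perm) / len(perm)
```

    The FPCA decomposition in the dissertation replaces the raw grid values with a small number of principal component scores, which is what makes univariate nonparametric tests applicable and improves power.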

  7. Interconnection blocks: a method for providing reusable, rapid, multiple, aligned and planar microfluidic interconnections

    DEFF Research Database (Denmark)

    Sabourin, David; Snakenborg, Detlef; Dufva, Hans Martin

    2009-01-01

    In this paper a method is presented for creating 'interconnection blocks' that are re-usable and provide multiple, aligned and planar microfluidic interconnections. Interconnection blocks made from polydimethylsiloxane allow rapid testing of microfluidic chips and unobstructed microfluidic observ...

  8. Group decision-making approach for flood vulnerability identification using the fuzzy VIKOR method

    Science.gov (United States)

    Lee, G.; Jun, K. S.; Chung, E.-S.

    2015-04-01

    This study proposes an improved group decision making (GDM) framework that combines the VIKOR method with data fuzzification to quantify spatial flood vulnerability under multiple criteria. In general, the GDM method is an effective tool for formulating a compromise solution that involves various decision makers, since various stakeholders may have different perspectives on their flood risk/vulnerability management responses. The GDM approach is designed to achieve consensus building that reflects the viewpoints of each participant. The fuzzy VIKOR method was developed to solve multi-criteria decision making (MCDM) problems with conflicting and noncommensurable criteria. This compromise method can be used to obtain a near-ideal solution according to all established criteria. The approach can effectively propose compromise decisions by combining the GDM method and the fuzzy VIKOR method. The spatial flood vulnerability of the southern Han River obtained using the GDM approach combined with the fuzzy VIKOR method was compared with the spatial flood vulnerability obtained using general MCDM methods, such as the fuzzy TOPSIS and classical GDM methods (i.e., Borda, Condorcet, and Copeland). As a result, the proposed fuzzy GDM approach can reduce the uncertainty in the data confidence and weight derivation techniques. Thus, the combination of the GDM approach with the fuzzy VIKOR method can provide robust prioritization because it actively reflects the opinions of various groups and considers uncertainty in the input data.
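
    The crisp (non-fuzzy) core of the VIKOR ranking, group utility S, individual regret R, and the compromise index Q, can be sketched as follows. The decision matrix, criteria, and weights are hypothetical; the paper's fuzzification and group-consensus layers are omitted:

```python
def vikor(matrix, weights, v=0.5):
    """Classical VIKOR: rows = alternatives, columns = benefit-type criteria
    (here, higher score = more vulnerable). Smaller Q ranks first."""
    m = len(matrix[0])
    best = [max(row[j] for row in matrix) for j in range(m)]
    worst = [min(row[j] for row in matrix) for j in range(m)]
    S, R = [], []
    for row in matrix:
        terms = [weights[j] * (best[j] - row[j]) / (best[j] - worst[j])
                 for j in range(m)]
        S.append(sum(terms))   # group utility
        R.append(max(terms))   # individual regret
    s_min, s_max, r_min, r_max = min(S), max(S), min(R), max(R)
    return [v * (S[i] - s_min) / (s_max - s_min)
            + (1 - v) * (R[i] - r_min) / (r_max - r_min)
            for i in range(len(matrix))]

# Hypothetical vulnerability scores for three sub-basins on three criteria
Q = vikor([[0.9, 0.8, 0.7],
           [0.5, 0.6, 0.9],
           [0.2, 0.3, 0.4]], weights=[0.5, 0.3, 0.2])
```

    The weight v balances majority utility against the worst single criterion; the fuzzy variant replaces the crisp scores with fuzzy numbers before this aggregation.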

  9. Upscaling permeability for three-dimensional fractured porous rocks with the multiple boundary method

    Science.gov (United States)

    Chen, Tao; Clauser, Christoph; Marquart, Gabriele; Willbrand, Karen; Hiller, Thomas

    2018-02-01

    Upscaling permeability of grid blocks is crucial for groundwater models. A novel upscaling method for three-dimensional fractured porous rocks is presented. The objective of the study was to compare this method with the commonly used Oda upscaling method and the volume averaging method. First, the multiple boundary method and its computational framework were defined for three-dimensional stochastic fracture networks. Then, the different upscaling methods were compared for a set of rotated fractures, for tortuous fractures, and for two discrete fracture networks. The results computed by the multiple boundary method are comparable with those of the other two methods and fit best the analytical solution for a set of rotated fractures. The errors in flow rate of the equivalent fracture model decrease when using the multiple boundary method. Furthermore, the errors of the equivalent fracture models increase from well-connected fracture networks to poorly connected ones. Finally, the diagonal components of the equivalent permeability tensors tend to follow a normal or log-normal distribution for the well-connected fracture network model with infinite fracture size. By contrast, they exhibit a power-law distribution for the poorly connected fracture network with multiple scale fractures. The study demonstrates the accuracy and the flexibility of the multiple boundary upscaling concept. This makes it attractive for being incorporated into any existing flow-based upscaling procedures, which helps in reducing the uncertainty of groundwater models.
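
    For a perfectly layered block, the classical volume-averaging bounds that any upscaling method is usually checked against are the arithmetic mean (flow parallel to the layers) and the harmonic mean (flow across them). A minimal sketch with hypothetical layer permeabilities:

```python
def k_parallel(ks):
    """Arithmetic mean: effective permeability for flow parallel to layers."""
    return sum(ks) / len(ks)

def k_perpendicular(ks):
    """Harmonic mean: effective permeability for flow across the layers."""
    return len(ks) / sum(1.0 / k for k in ks)

layers = [1e-12, 1e-14, 5e-13]  # hypothetical layer permeabilities in m^2
k_par = k_parallel(layers)
k_perp = k_perpendicular(layers)
# Any physically sensible upscaled block permeability for this block
# should fall between k_perp and k_par.
```

    For fracture networks the geometry is far less regular, which is why the paper resorts to flow simulations with multiple boundary configurations to build a full equivalent permeability tensor rather than a single scalar bound.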

  10. Estimation of Multiple Pitches in Stereophonic Mixtures using a Codebook-based Approach

    DEFF Research Database (Denmark)

    Hansen, Martin Weiss; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2017-01-01

    In this paper, a method for multi-pitch estimation of stereophonic mixtures of multiple harmonic signals is presented. The method is based on a signal model which takes the amplitude and delay panning parameters of the sources in a stereophonic mixture into account. Furthermore, the method is based on the extended invariance principle (EXIP) and a codebook of realistic amplitude vectors. For each fundamental frequency candidate in each of the sources, the amplitude estimates are mapped to entries in the codebook, and the pitch and model order are estimated jointly. The performance of the proposed method...

  11. Analysis and performance estimation of the Conjugate Gradient method on multiple GPUs

    NARCIS (Netherlands)

    Verschoor, M.; Jalba, A.C.

    2012-01-01

    The Conjugate Gradient (CG) method is a widely used iterative method for solving linear systems described by a (sparse) matrix. The method requires a large number of sparse matrix-vector (SpMV) multiplications, vector reductions and other vector operations to be performed. We present a number of
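
    A small dense-matrix sketch of the CG iteration makes that cost structure visible: each iteration performs one matrix-vector product (the SpMV step that dominates on a GPU) plus a handful of vector reductions and updates. The 2x2 system is illustrative only:

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """CG for a symmetric positive definite matrix A, given as dense lists."""
    n = len(b)
    x = [0.0] * n
    r = b[:]            # residual r = b - A x, with x = 0 initially
    p = r[:]            # search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        # The matrix-vector product: the SpMV step parallelized on GPUs.
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))   # vector reduction
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)                  # vector reduction
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)   # exact solution: [1/11, 7/11]
```

    On a GPU, the SpMV line and the two reductions are the kernels whose throughput determines overall performance, which is what a multi-GPU analysis of CG has to model.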

  12. Statistical Analysis of a Class: Monte Carlo and Multiple Imputation Spreadsheet Methods for Estimation and Extrapolation

    Science.gov (United States)

    Fish, Laurel J.; Halcoussis, Dennis; Phillips, G. Michael

    2017-01-01

    The Monte Carlo method and related multiple imputation methods are traditionally used in math, physics and science to estimate and analyze data and are now becoming standard tools in analyzing business and financial problems. However, few sources explain the application of the Monte Carlo method for individuals and business professionals who are…
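
    A minimal multiple-imputation sketch in the spirit of the article: impute missing class grades M times by resampling the observed scores, then pool the estimates across the imputed data sets. The grades are invented for illustration, and the hot-deck resampling step is one simple choice among many imputation models:

```python
import random, statistics

random.seed(3)
# Exam scores for a class section, with some scores missing (None).
grades = [88, 92, None, 75, 81, None, 95, 70, None, 84, 79, 90]
observed = [g for g in grades if g is not None]

M = 100  # number of imputed data sets
means = []
for _ in range(M):
    # Hot-deck-style imputation: draw each missing score from the observed ones.
    completed = [g if g is not None else random.choice(observed)
                 for g in grades]
    means.append(statistics.mean(completed))

pooled_mean = statistics.mean(means)      # pooled point estimate
between_var = statistics.variance(means)  # between-imputation variability
```

    The spread of the M estimates (here `between_var`) is what distinguishes multiple imputation from filling in a single guess: it keeps the extra uncertainty caused by the missing data visible in the final inference.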

  13. Agile Service Development: A Rule-Based Method Engineering Approach

    NARCIS (Netherlands)

    dr. Martijn Zoet; Stijn Hoppenbrouwers; Inge van de Weerd; Johan Versendaal

    2011-01-01

    Agile software development has evolved into an increasingly mature software development approach and has been applied successfully in many software vendors’ development departments. In this position paper, we address the broader agile service development. Based on method engineering principles we

  14. An integrated modeling approach to support management decisions of coupled groundwater-agricultural systems under multiple uncertainties

    Science.gov (United States)

    Hagos Subagadis, Yohannes; Schütze, Niels; Grundmann, Jens

    2015-04-01

    The planning and implementation of effective water resources management strategies need an assessment of multiple (physical, environmental, and socio-economic) issues, and often require new research in which knowledge of diverse disciplines is combined in a unified methodological and operational framework. Such integrative research to link different knowledge domains faces several practical challenges. These challenges are further compounded by multiple actors, frequently with conflicting interests, and multiple uncertainties about the consequences of potential management decisions. A fuzzy-stochastic multiple criteria decision analysis tool was developed in this study to systematically quantify both probabilistic and fuzzy uncertainties associated with complex hydrosystems management. It integrated physical process-based models, fuzzy logic, expert involvement and stochastic simulation within a general framework. Subsequently, the proposed new approach is applied to a water-scarce coastal arid region water management problem in northern Oman, where saltwater intrusion into a coastal aquifer due to excessive groundwater extraction for irrigated agriculture has affected the aquifer sustainability, endangering associated socio-economic conditions as well as traditional social structure. Results from the developed method have provided key decision alternatives which can serve as a platform for negotiation and further exploration. In addition, this approach has enabled systematic quantification of both probabilistic and fuzzy uncertainties associated with the decision problem. Sensitivity analysis applied within the developed tool has shown that the decision makers' risk aversion and risk taking attitude may yield different rankings of decision alternatives.
The developed approach can be applied to address the complexities and uncertainties inherent in water resources systems to support management decisions, while serving as a platform for stakeholder participation.

  15. A Hybrid Fuzzy Time Series Approach Based on Fuzzy Clustering and Artificial Neural Network with Single Multiplicative Neuron Model

    Directory of Open Access Journals (Sweden)

    Ozge Cagcag Yolcu

    2013-01-01

    Full Text Available Particularly in recent years, artificial intelligence optimization techniques have been used to make fuzzy time series approaches more systematic and improve forecasting performance. Besides, some fuzzy clustering methods and artificial neural networks with different structures are used in the fuzzification of observations and determination of fuzzy relationships, respectively. In approaches that consider membership values, these values are either determined subjectively, or fuzzy outputs of the system are obtained by assuming a relation between membership values when identifying the fuzzy relationship. This necessitates a defuzzification step and increases the model error. In this study, membership values were obtained more systematically by using the Gustafson-Kessel fuzzy clustering technique. The use of an artificial neural network with single multiplicative neuron model in identification of the fuzzy relation eliminated the architecture selection problem, as well as the necessity for a defuzzification step, by constructing target values from real observations of the time series. The artificial neural network with single multiplicative neuron model used in the fuzzy-relation identification step is trained with particle swarm optimization. The proposed method is implemented using various time series and the results are compared with those of previous studies to demonstrate the performance of the proposed method.
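    The single multiplicative neuron model at the core of the proposed method can be sketched in a few lines; the logistic activation and all numeric values below are illustrative assumptions, and in the paper the weights and biases would be tuned by particle swarm optimization rather than set by hand:

```python
import numpy as np

def smn_forward(x, w, b):
    """Single multiplicative neuron: the inputs interact through a
    product of (w_i * x_i + b_i) terms instead of a weighted sum, so
    one neuron can capture nonlinear structure without an
    architecture-selection step."""
    net = np.prod(w * x + b)
    return 1.0 / (1.0 + np.exp(-net))   # logistic activation (assumed)

# Illustrative lagged time-series inputs with invented parameters.
x = np.array([0.2, 0.5, 0.1])
w = np.array([0.7, -0.3, 0.9])
b = np.array([0.4, 0.6, 0.2])
y = smn_forward(x, w, b)
```

    Because the whole parameter vector is just `(w, b)`, a population-based optimizer such as PSO can search it directly against a forecasting loss.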

  16. A Multiple Identity Approach to Gender: Identification with Women, Identification with Feminists, and Their Interaction

    Science.gov (United States)

    van Breen, Jolien A.; Spears, Russell; Kuppens, Toon; de Lemus, Soledad

    2017-01-01

    Across four studies, we examine multiple identities in the context of gender and propose that women's attitudes toward gender group membership are governed by two largely orthogonal dimensions of gender identity: identification with women and identification with feminists. We argue that identification with women reflects attitudes toward the content society gives to group membership: what does it mean to be a woman in terms of group characteristics, interests and values? Identification with feminists, on the other hand, is a politicized identity dimension reflecting attitudes toward the social position of the group: what does it mean to be a woman in terms of disadvantage, inequality, and relative status? We examine the utility of this multiple identity approach in four studies. Study 1 showed that identification with women reflects attitudes toward group characteristics, such as femininity and self-stereotyping, while identification with feminists reflects attitudes toward the group's social position, such as perceived sexism. The two dimensions are shown to be largely independent, and as such provide support for the multiple identity approach. In Studies 2–4, we examine the utility of this multiple identity approach in predicting qualitative differences in gender attitudes. Results show that specific combinations of identification with women and feminists predicted attitudes toward collective action and gender stereotypes. Higher identification with feminists led to endorsement of radical collective action (Study 2) and critical attitudes toward gender stereotypes (Studies 3–4), especially at lower levels of identification with women. The different combinations of high vs. low identification with women and feminists can be thought of as reflecting four theoretical identity “types.” A woman can be (1) strongly identified with neither women nor feminists (“low identifier”), (2) strongly identified with women but less so with feminists (

  17. A Multiple Identity Approach to Gender: Identification with Women, Identification with Feminists, and Their Interaction

    Directory of Open Access Journals (Sweden)

    Jolien A. van Breen

    2017-06-01

    Full Text Available Across four studies, we examine multiple identities in the context of gender and propose that women's attitudes toward gender group membership are governed by two largely orthogonal dimensions of gender identity: identification with women and identification with feminists. We argue that identification with women reflects attitudes toward the content society gives to group membership: what does it mean to be a woman in terms of group characteristics, interests and values? Identification with feminists, on the other hand, is a politicized identity dimension reflecting attitudes toward the social position of the group: what does it mean to be a woman in terms of disadvantage, inequality, and relative status? We examine the utility of this multiple identity approach in four studies. Study 1 showed that identification with women reflects attitudes toward group characteristics, such as femininity and self-stereotyping, while identification with feminists reflects attitudes toward the group's social position, such as perceived sexism. The two dimensions are shown to be largely independent, and as such provide support for the multiple identity approach. In Studies 2–4, we examine the utility of this multiple identity approach in predicting qualitative differences in gender attitudes. Results show that specific combinations of identification with women and feminists predicted attitudes toward collective action and gender stereotypes. Higher identification with feminists led to endorsement of radical collective action (Study 2) and critical attitudes toward gender stereotypes (Studies 3–4), especially at lower levels of identification with women. The different combinations of high vs. low identification with women and feminists can be thought of as reflecting four theoretical identity “types.” A woman can be (1) strongly identified with neither women nor feminists (“low identifier”), (2) strongly identified with women but less so with feminists (

  18. A Multiple Identity Approach to Gender: Identification with Women, Identification with Feminists, and Their Interaction.

    Science.gov (United States)

    van Breen, Jolien A; Spears, Russell; Kuppens, Toon; de Lemus, Soledad

    2017-01-01

    Across four studies, we examine multiple identities in the context of gender and propose that women's attitudes toward gender group membership are governed by two largely orthogonal dimensions of gender identity: identification with women and identification with feminists. We argue that identification with women reflects attitudes toward the content society gives to group membership: what does it mean to be a woman in terms of group characteristics, interests and values? Identification with feminists, on the other hand, is a politicized identity dimension reflecting attitudes toward the social position of the group: what does it mean to be a woman in terms of disadvantage, inequality, and relative status? We examine the utility of this multiple identity approach in four studies. Study 1 showed that identification with women reflects attitudes toward group characteristics, such as femininity and self-stereotyping, while identification with feminists reflects attitudes toward the group's social position, such as perceived sexism. The two dimensions are shown to be largely independent, and as such provide support for the multiple identity approach. In Studies 2-4, we examine the utility of this multiple identity approach in predicting qualitative differences in gender attitudes. Results show that specific combinations of identification with women and feminists predicted attitudes toward collective action and gender stereotypes. Higher identification with feminists led to endorsement of radical collective action (Study 2) and critical attitudes toward gender stereotypes (Studies 3-4), especially at lower levels of identification with women. The different combinations of high vs. low identification with women and feminists can be thought of as reflecting four theoretical identity "types." 
A woman can be (1) strongly identified with neither women nor feminists ("low identifier"), (2) strongly identified with women but less so with feminists ("traditional identifier"), (3

  19. A Unified Approach to Functional Principal Component Analysis and Functional Multiple-Set Canonical Correlation.

    Science.gov (United States)

    Choi, Ji Yeh; Hwang, Heungsun; Yamamoto, Michio; Jung, Kwanghee; Woodward, Todd S

    2017-06-01

    Functional principal component analysis (FPCA) and functional multiple-set canonical correlation analysis (FMCCA) are data reduction techniques for functional data that are collected in the form of smooth curves or functions over a continuum such as time or space. In FPCA, low-dimensional components are extracted from a single functional dataset such that they explain the most variance of the dataset, whereas in FMCCA, low-dimensional components are obtained from each of multiple functional datasets in such a way that the associations among the components are maximized across the different sets. In this paper, we propose a unified approach to FPCA and FMCCA. The proposed approach subsumes both techniques as special cases. Furthermore, it permits a compromise between the techniques, such that components are obtained from each set of functional data to maximize their associations across different datasets, while accounting for the variance of the data well. We propose a single optimization criterion for the proposed approach, and develop an alternating regularized least squares algorithm to minimize the criterion in combination with basis function approximations to functions. We conduct a simulation study to investigate the performance of the proposed approach based on synthetic data. We also apply the approach for the analysis of multiple-subject functional magnetic resonance imaging data to obtain low-dimensional components of blood-oxygen level-dependent signal changes of the brain over time, which are highly correlated across the subjects as well as representative of the data. The extracted components are used to identify networks of neural activity that are commonly activated across the subjects while carrying out a working memory task.

  20. Solution of Constrained Optimal Control Problems Using Multiple Shooting and ESDIRK Methods

    DEFF Research Database (Denmark)

    Capolei, Andrea; Jørgensen, John Bagterp

    2012-01-01

    of this paper is the use of ESDIRK integration methods for solution of the initial value problems and the corresponding sensitivity equations arising in the multiple shooting algorithm. Compared to BDF-methods, ESDIRK-methods are advantageous in multiple shooting algorithms in which restarts and frequent...... algorithm. As we consider stiff systems, implicit solvers with sensitivity computation capabilities for initial value problems must be used in the multiple shooting algorithm. Traditionally, multi-step methods based on the BDF algorithm have been used for such problems. The main novel contribution...... discontinuities on each shooting interval are present. The ESDIRK methods are implemented using an inexact Newton method that reuses the factorization of the iteration matrix for the integration as well as the sensitivity computation. Numerical experiments are provided to demonstrate the algorithm....

  1. An integrated lean-methods approach to hospital facilities redesign.

    Science.gov (United States)

    Nicholas, John

    2012-01-01

    Lean production methods for eliminating waste and improving processes in manufacturing are now being applied in healthcare. As the author shows, the methods are appropriate for redesigning hospital facilities. When used in an integrated manner and employing teams of mostly clinicians, the methods produce facility designs that are custom-fit to patient needs and caregiver work processes, and reduce operational costs. The author reviews lean methods and an approach for integrating them in the redesign of hospital facilities. A case example of the redesign of an emergency department shows the feasibility and benefits of the approach.

  2. The multiple imputation method: a case study involving secondary data analysis.

    Science.gov (United States)

    Walani, Salimah R; Cleland, Charles M

    2015-05-01

    To illustrate, with the example of a secondary data analysis study, the use of the multiple imputation method to replace missing data. Most large public datasets have missing data, which need to be handled by researchers conducting secondary data analysis studies. Multiple imputation is a technique widely used to replace missing values while preserving the sample size and sampling variability of the data. The 2004 National Sample Survey of Registered Nurses. The authors created a model to impute missing values using the chained equation method. They used imputation diagnostic procedures and conducted regression analysis of imputed data to determine the differences between the log hourly wages of internationally educated and US-educated registered nurses. The authors used multiple imputation procedures to replace missing values in a large dataset with 29,059 observations. Five multiply imputed datasets were created. Imputation diagnostics using time series and density plots showed that imputation was successful. The authors also present an example of the use of multiply imputed datasets to conduct regression analysis to answer a substantive research question. Multiple imputation is a powerful technique for imputing missing values in large datasets while preserving the sample size and variance of the data. Even though the chained equation method involves complex statistical computations, recent innovations in software and computation have made it possible for researchers to conduct this technique on large datasets. The authors recommend nurse researchers use multiple imputation methods for handling missing data to improve the statistical power and external validity of their studies.
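    The pooling step that follows chained-equation imputation — combining one estimate per imputed dataset into a single estimate and total variance — follows Rubin's rules and can be sketched briefly. The formulas are standard; the five regression estimates below are invented:

```python
import numpy as np

def pool_estimates(estimates, variances):
    """Rubin's rules: pool one parameter across m imputed datasets."""
    m = len(estimates)
    q_bar = np.mean(estimates)        # pooled point estimate
    u_bar = np.mean(variances)        # average within-imputation variance
    b = np.var(estimates, ddof=1)     # between-imputation variance
    t = u_bar + (1 + 1 / m) * b       # total variance
    return q_bar, t

# Hypothetical log-wage coefficient from each of five imputed datasets,
# each with its estimated sampling variance.
q, t = pool_estimates([1.0, 1.2, 0.9, 1.1, 1.0], [0.04] * 5)
```

    The between-imputation term is what preserves the extra uncertainty due to the missing data, which a single-imputation analysis would understate.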

  3. Two approaches to incorporate clinical data uncertainty into multiple criteria decision analysis for benefit-risk assessment of medicinal products.

    Science.gov (United States)

    Wen, Shihua; Zhang, Lanju; Yang, Bo

    2014-07-01

    The Problem formulation, Objectives, Alternatives, Consequences, Trade-offs, Uncertainties, Risk attitude, and Linked decisions (PrOACT-URL) framework and multiple criteria decision analysis (MCDA) have been recommended by the European Medicines Agency for structured benefit-risk assessment of medicinal products undergoing regulatory review. The objective of this article was to provide solutions to incorporate the uncertainty from clinical data into the MCDA model when evaluating the overall benefit-risk profiles among different treatment options. Two statistical approaches, the δ-method approach and the Monte-Carlo approach, were proposed to construct the confidence interval of the overall benefit-risk score from the MCDA model as well as other probabilistic measures for comparing the benefit-risk profiles between treatment options. Both approaches can incorporate the correlation structure between clinical parameters (criteria) in the MCDA model and are straightforward to implement. The two proposed approaches were applied to a case study to evaluate the benefit-risk profile of an add-on therapy for rheumatoid arthritis (drug X) relative to placebo. It demonstrated a straightforward way to quantify the impact of the uncertainty from clinical data to the benefit-risk assessment and enabled statistical inference on evaluating the overall benefit-risk profiles among different treatment options. The δ-method approach provides a closed form to quantify the variability of the overall benefit-risk score in the MCDA model, whereas the Monte-Carlo approach is more computationally intensive but can yield its true sampling distribution for statistical inference. The obtained confidence intervals and other probabilistic measures from the two approaches enhance the benefit-risk decision making of medicinal products. Copyright © 2014 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
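    A minimal sketch of the Monte-Carlo approach for one treatment arm, assuming (purely for illustration) three normalized criteria with fixed MCDA weights and independent normal sampling error on each clinical estimate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented MCDA setup: three normalized criterion scores with fixed
# weights; each clinical estimate carries a standard error.
weights = np.array([0.5, 0.3, 0.2])
means = np.array([0.70, 0.55, 0.80])   # clinical point estimates
ses = np.array([0.05, 0.08, 0.04])     # standard errors

# Monte-Carlo approach: propagate each criterion's sampling uncertainty
# through the weighted overall benefit-risk score.
draws = rng.normal(means, ses, size=(10_000, 3))
overall = draws @ weights
point = means @ weights                   # overall benefit-risk score
ci = np.percentile(overall, [2.5, 97.5])  # 95% confidence interval
```

    A correlation structure between criteria, as the article allows, would replace the independent draws with samples from a multivariate normal with the estimated covariance matrix.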

  4. The strategic selecting criteria and performance by using the multiple criteria method

    Directory of Open Access Journals (Sweden)

    Lisa Y. Chen

    2008-02-01

    Full Text Available Given the increasing competitive intensity of the current service market, organizational capabilities have been recognized as important for sustaining competitive advantage. The pursuit of profitable growth has fueled a need to systematically assess and renew the organization. The purpose of this study is to analyze the financial performance of firms to create an effective evaluation structure for Taiwan's service industry. This study utilized the TOPSIS (technique for order preference by similarity to ideal solution) method to evaluate the operating performance of 12 companies. TOPSIS is a multiple criteria decision making method that identifies solutions from a finite set of alternatives based upon simultaneous minimization of distance from an ideal point and maximization of distance from a nadir point. Using this approach, the study measures the financial performance of firms through two aspects and ten indicators. The results indicated that e-life had outstanding performance among the 12 retailers. The findings provide managers with a better understanding of their market position, competition, and profitability for future strategic planning and operational management.
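    The TOPSIS calculation the abstract outlines — normalization, weighting, then relative closeness between the ideal and nadir points — can be sketched as follows. The 3-alternative, 2-criterion decision matrix is invented (the study itself used two aspects and ten indicators):

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution.

    matrix: rows are alternatives, columns are criteria.
    benefit: True for criteria to maximize, False for those to minimize.
    """
    # Vector-normalize each criterion, then apply the weights.
    norm = matrix / np.linalg.norm(matrix, axis=0)
    v = norm * weights
    # Ideal (best) and nadir (worst) points per criterion.
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    nadir = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_ideal = np.linalg.norm(v - ideal, axis=1)
    d_nadir = np.linalg.norm(v - nadir, axis=1)
    return d_nadir / (d_ideal + d_nadir)   # closeness in [0, 1]

# Invented data: revenue (maximize) and cost ratio (minimize).
m = np.array([[250.0, 16.0],
              [200.0, 20.0],
              [300.0, 11.0]])
scores = topsis(m, np.array([0.6, 0.4]), np.array([True, False]))
```

    The alternative with the highest closeness score is ranked best; here the third alternative dominates on both criteria.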

  5. Approaches to greenhouse gas accounting methods for biomass carbon

    International Nuclear Information System (INIS)

    Downie, Adriana; Lau, David; Cowie, Annette; Munroe, Paul

    2014-01-01

    This investigation examines different approaches for the GHG flux accounting of activities within a tight boundary of biomass C cycling, with scope limited to exclude all other aspects of the lifecycle. Alternative approaches are examined that a) account for all emissions including biogenic CO2 cycling – the biogenic method; b) account for the quantity of C that is moved to and maintained in the non-atmospheric pool – the stock method; and c) assume that the net balance of C taken up by biomass is neutral over the short-term and hence there is no requirement to include this C in the calculation – the simplified method. This investigation demonstrates the inaccuracies in both emissions forecasting and abatement calculations that result from the use of the simplified method, which is commonly accepted for use. It has been found that the stock method is the most accurate and appropriate approach for use in calculating GHG inventories; however, shortcomings of this approach emerge when applied to abatement projects, as it does not account for the increase in biogenic CO2 emissions that are generated when non-CO2 GHG emissions in the business-as-usual case are offset. Therefore the biogenic method or a modified version of the stock method should be used to accurately estimate GHG emissions abatement achieved by a project. This investigation uses both the derivation of methodology equations from first principles and worked examples to explore the fundamental differences in the alternative approaches. Examples are developed for three project scenarios: landfill, combustion, and slow pyrolysis (biochar) of biomass. -- Highlights: • Different approaches can be taken to account for the GHG emissions from biomass. • Simplification of GHG accounting methods is useful but can lead to inaccuracies. • Approaches used currently are often inadequate for practices that store carbon. • Accounting methods for emissions forecasting can be inadequate for
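    The difference between the three accounting approaches can be made concrete with a toy combustion scenario; all figures below are invented (t CO2-e) and serve only to show how the bookkeeping diverges:

```python
# Hypothetical biomass combustion scenario, all figures in t CO2-e.
biomass_c_uptake = -100.0   # CO2 drawn down by biomass growth
biogenic_co2 = 95.0         # biogenic CO2 released on combustion
non_co2_ghg = 3.0           # CH4/N2O emissions (CO2-e)
c_stored = 5.0              # C retained in the non-atmospheric pool

# Biogenic method: count every flux, including biogenic CO2 cycling.
biogenic_method = biomass_c_uptake + biogenic_co2 + non_co2_ghg

# Stock method: credit only the C moved to and kept out of the atmosphere.
stock_method = -c_stored + non_co2_ghg

# Simplified method: assume biomass C is neutral; count non-CO2 GHGs only.
simplified_method = non_co2_ghg
```

    Here the biogenic and stock methods agree, because uptake minus release equals the stored carbon, while the simplified method misses the stored carbon entirely and so overstates net emissions.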

  6. A Semiparametric Bayesian Approach for Analyzing Longitudinal Data from Multiple Related Groups.

    Science.gov (United States)

    Das, Kiranmoy; Afriyie, Prince; Spirko, Lauren

    2015-11-01

    Often the biological and/or clinical experiments result in longitudinal data from multiple related groups. The analysis of such data is quite challenging due to the fact that groups might have shared information on the mean and/or covariance functions. In this article, we consider a Bayesian semiparametric approach of modeling the mean trajectories for longitudinal response coming from multiple related groups. We consider matrix stick-breaking process priors on the group mean parameters which allows information sharing on the mean trajectories across the groups. Simulation studies are performed to demonstrate the effectiveness of the proposed approach compared to the more traditional approaches. We analyze data from a one-year follow-up of nutrition education for hypercholesterolemic children with three different treatments where the children are from different age-groups. Our analysis provides more clinically useful information than the previous analysis of the same dataset. The proposed approach will be a very powerful tool for analyzing data from clinical trials and other medical experiments.

  7. A multiple-proxy approach to understanding rapid Holocene climate change in Southeast Greenland

    Science.gov (United States)

    Davin, S. H.; Bradley, R. S.; Balascio, N. L.; de Wet, G.

    2012-12-01

    The susceptibility of the Arctic to climate change has made it an excellent workshop for paleoclimatological research. Although there have been previous studies concerning climate variability carried out in the Arctic, there remains a critical dearth of knowledge due to the limited number of high-resolution Holocene climate-proxy records available from this region. This gap skews our understanding of observed and predicted climate change, and fuels uncertainty both in the realms of science and policy. This study takes a comprehensive approach to tracking Holocene climate variability in the vicinity of Tasiilaq, Southeast Greenland using a ~5.6 m sediment core from Lower Sermilik Lake. An age-depth model for the core has been established using 8 radiocarbon dates, the oldest of which was taken at 4 m down core and has been dated to approximately 6.2 kyr BP. The bottom meter of the core below the final radiocarbon date contains a transition from cobbles and coarse sand to organic-rich laminations, indicating the termination of direct glacial influence and therefore likely marking the end of the last glacial period in this region. The remainder of the core is similarly organic-rich, with light-to-dark brown laminations ranging from 0.5–1 cm in thickness and riddled with turbidites. Using this core in tandem with findings from an on-site assessment of the geomorphic history of the locale, we attempt to assess and infer the rapid climatic shifts associated with the Holocene on a sub-centennial scale. Such changes include the termination of the last glacial period, the Mid-Holocene Climatic Optimum, the Neoglacial Period, the Medieval Climatic Optimum, and the Little Ice Age. A multiple proxy approach including magnetic susceptibility, bulk organic geochemistry, elemental profiles acquired by XRF scanning, grain-size, and spectral data will be used to characterize the sediment and infer paleoclimate conditions.
Additionally, percent biogenic silica by weight has been

  8. The application of multiple intelligence approach to the learning of human circulatory system

    Science.gov (United States)

    Kumalasari, Lita; Yusuf Hilmi, A.; Priyandoko, Didik

    2017-11-01

    The purpose of this study is to offer an alternative teaching approach, or strategies, able to accommodate students' different abilities, intelligences, and learning styles. It also gives teachers, as facilitators, new ideas for exploring creative, more student-centered ways to teach a lesson such as the circulatory system. The study was carried out at one private school in Bandung and involved eight students, whose responses were observed toward a lesson delivered using the Multiple Intelligence approach, which includes the Linguistic, Logical-Mathematical, Visual-Spatial, Musical, Bodily-Kinesthetic, Interpersonal, Intrapersonal, and Naturalistic intelligences. Students were tested using an MI test based on Howard Gardner's MI model to identify their dominant intelligences. The results showed that the top three intelligences were Bodily-Kinesthetic (73%), Visual-Spatial (68%), and Logical-Mathematical (61%). The learning process used varied multimedia and activities, such as a mini experiment, a short clip, and questions, to engage the students' learning styles and intelligences. Student responses were collected through self-assessment: all students said the lesson gave them knowledge and skills useful for their lives, that the explanations were clear, that they had no difficulty understanding the lesson, and that they could complete the assignments given. At the end of the study, it was revealed that students taught with the Multiple Intelligence instructional approach engaged more deeply with the lesson. It was also found that the students who participated in the learning process in which the Multiple Intelligence approach was applied enjoyed the activities and had great fun.

  9. Approach to proliferation risk assessment based on multiple objective analysis framework

    Energy Technology Data Exchange (ETDEWEB)

    Andrianov, A.; Kuptsov, I. [Obninsk Institute for Nuclear Power Engineering of NNRU MEPhI (Russian Federation); Studgorodok 1, Obninsk, Kaluga region, 249030 (Russian Federation)

    2013-07-01

    The approach to the assessment of proliferation risk using the methods of multi-criteria decision making and multi-objective optimization is presented. The approach allows taking into account the specific features of the national nuclear infrastructure and possible proliferation strategies (motivations, intentions, and capabilities). Three examples of applying the approach are shown. First, the approach has been used to evaluate the attractiveness of HEU (highly enriched uranium) production scenarios at a clandestine enrichment facility using centrifuge enrichment technology. Secondly, the approach has been applied to assess the attractiveness of scenarios for undeclared production of plutonium or HEU by theft of materials circulating in nuclear fuel cycle facilities and thermal reactors. Thirdly, the approach has been used to perform a comparative analysis of the structures of developing nuclear power systems based on different types of nuclear fuel cycles, the analysis being based on indicators of proliferation risk.

  10. Approach to proliferation risk assessment based on multiple objective analysis framework

    International Nuclear Information System (INIS)

    Andrianov, A.; Kuptsov, I.

    2013-01-01

    The approach to the assessment of proliferation risk using the methods of multi-criteria decision making and multi-objective optimization is presented. The approach allows taking into account the specific features of the national nuclear infrastructure and possible proliferation strategies (motivations, intentions, and capabilities). Three examples of applying the approach are shown. First, the approach has been used to evaluate the attractiveness of HEU (highly enriched uranium) production scenarios at a clandestine enrichment facility using centrifuge enrichment technology. Secondly, the approach has been applied to assess the attractiveness of scenarios for undeclared production of plutonium or HEU by theft of materials circulating in nuclear fuel cycle facilities and thermal reactors. Thirdly, the approach has been used to perform a comparative analysis of the structures of developing nuclear power systems based on different types of nuclear fuel cycles, the analysis being based on indicators of proliferation risk.

  11. Total System Performance Assessment-License Application Methods and Approach

    Energy Technology Data Exchange (ETDEWEB)

    J. McNeish

    2002-09-13

    ''Total System Performance Assessment-License Application (TSPA-LA) Methods and Approach'' provides the top-level method and approach for conducting the TSPA-LA model development and analyses. The method and approach is responsive to the criteria set forth in Total System Performance Assessment Integration (TSPAI) Key Technical Issue (KTI) agreements, the ''Yucca Mountain Review Plan'' (CNWRA 2002 [158449]), and 10 CFR Part 63. This introductory section provides an overview of the TSPA-LA, the projected TSPA-LA documentation structure, and the goals of the document. It also provides a brief discussion of the regulatory framework, the approach to risk management of the development and analysis of the model, and the overall organization of the document. The section closes with some important conventions that are utilized in this document.

  12. Total System Performance Assessment - License Application Methods and Approach

    International Nuclear Information System (INIS)

    McNeish, J.

    2003-01-01

    ''Total System Performance Assessment-License Application (TSPA-LA) Methods and Approach'' provides the top-level method and approach for conducting the TSPA-LA model development and analyses. The method and approach are responsive to the criteria set forth in Total System Performance Assessment Integration (TSPAI) Key Technical Issues (KTIs) identified in agreements with the U.S. Nuclear Regulatory Commission, the ''Yucca Mountain Review Plan'' (YMRP), ''Final Report'' (NRC 2003 [163274]), and the NRC final rule 10 CFR Part 63 (NRC 2002 [156605]). This introductory section provides an overview of the TSPA-LA, the projected TSPA-LA documentation structure, and the goals of the document. It also provides a brief discussion of the regulatory framework, the approach to risk management of the development and analysis of the model, and the overall organization of the document. The section closes with some important conventions that are used in this document.

  13. Analyzing Statistical Mediation with Multiple Informants: A New Approach with an Application in Clinical Psychology

    Directory of Open Access Journals (Sweden)

    Lesther ePapa

    2015-11-01

    Full Text Available Testing mediation models is critical for identifying potential variables that need to be targeted to effectively change one or more outcome variables. In addition, it is now common practice for clinicians to use multiple informant (MI) data in studies of statistical mediation. By coupling the use of MI data with statistical mediation analysis, clinical researchers can combine the benefits of both techniques. Integrating the information from MIs into a statistical mediation model creates various methodological and practical challenges. The authors review prior methodological approaches to MI mediation analysis in clinical research and propose a new latent variable approach that overcomes some limitations of prior approaches. An application of the new approach to mother, father, and child reports of impulsivity, frustration tolerance, and externalizing problems (N = 454) is presented. The results showed that frustration tolerance mediated the relationship between impulsivity and externalizing problems. Advantages and limitations of the new approach are discussed. The new approach can help clinical researchers overcome limitations of prior techniques. It allows for a more comprehensive and effective use of MI data when testing mediation models.
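    Statistical mediation via the classic product-of-coefficients idea can be sketched in a few lines. This is a hypothetical single-informant illustration of the general concept (variable names, effect sizes, and data are invented), not the latent multiple-informant model the article proposes:

    ```python
    import random

    def ols_slope(x, y):
        # slope of a simple regression of y on x
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sxx = sum((a - mx) ** 2 for a in x)
        return sxy / sxx

    def residuals(x, y):
        # residuals of y after removing its linear dependence on x
        b = ols_slope(x, y)
        a0 = sum(y) / len(y) - b * sum(x) / len(x)
        return [yi - (a0 + b * xi) for xi, yi in zip(x, y)]

    def indirect_effect(x, m, y):
        # a-path: M on X; b-path: Y on M controlling for X (via partialling)
        a = ols_slope(x, m)
        b = ols_slope(residuals(x, m), residuals(x, y))
        return a * b

    random.seed(1)
    n = 500
    x = [random.gauss(0, 1) for _ in range(n)]        # e.g. impulsivity
    m = [0.6 * xi + random.gauss(0, 1) for xi in x]   # mediator
    y = [0.5 * mi + random.gauss(0, 1) for mi in m]   # outcome
    print(indirect_effect(x, m, y))   # should be near 0.6 * 0.5 = 0.3
    ```

    The latent-variable approach in the article generalizes this by estimating each construct from several informants before forming the a and b paths.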

  14. Assessing Neurocognition via Gamified Experimental Logic: A novel approach to simultaneous acquisition of multiple ERPs

    Directory of Open Access Journals (Sweden)

    Ajay Kumar eNair

    2016-01-01

    Full Text Available The present study describes the development of a neurocognitive paradigm: ‘Assessing Neurocognition via Gamified Experimental Logic’ (ANGEL), for performing the parametric evaluation of multiple neurocognitive functions simultaneously. ANGEL employs an audiovisual sensory motor design for the acquisition of multiple event related potentials (ERPs) - the C1, P50, MMN, N1, N170, P2, N2pc, LRP, P300 and ERN. The ANGEL paradigm allows assessment of ten neurocognitive variables over the course of three ‘game’ levels of increasing complexity ranging from simple passive observation to complex discrimination and response in the presence of multiple distractors. The paradigm allows assessment of several levels of rapid decision making: speeded up response vs response-inhibition; responses to easy vs difficult tasks; responses based on gestalt perception of clear vs ambiguous stimuli; and finally, responses with set shifting during challenging tasks. The paradigm has been tested using 18 healthy participants from both sexes and the possibilities of varied data analyses have been presented in this paper. The ANGEL approach provides an ecologically valid assessment (as compared to existing tools) that quickly yields a very rich dataset and helps to assess multiple ERPs that can be studied extensively to assess cognitive functions in health and disease conditions.

  15. Assessing Neurocognition via Gamified Experimental Logic: A Novel Approach to Simultaneous Acquisition of Multiple ERPs.

    Science.gov (United States)

    Nair, Ajay K; Sasidharan, Arun; John, John P; Mehrotra, Seema; Kutty, Bindu M

    2016-01-01

    The present study describes the development of a neurocognitive paradigm: "Assessing Neurocognition via Gamified Experimental Logic" (ANGEL), for performing the parametric evaluation of multiple neurocognitive functions simultaneously. ANGEL employs an audiovisual sensory motor design for the acquisition of multiple event related potentials (ERPs)-the C1, P50, MMN, N1, N170, P2, N2pc, LRP, P300, and ERN. The ANGEL paradigm allows assessment of 10 neurocognitive variables over the course of three "game" levels of increasing complexity ranging from simple passive observation to complex discrimination and response in the presence of multiple distractors. The paradigm allows assessment of several levels of rapid decision making: speeded up response vs. response-inhibition; responses to easy vs. difficult tasks; responses based on gestalt perception of clear vs. ambiguous stimuli; and finally, responses with set shifting during challenging tasks. The paradigm has been tested using 18 healthy participants from both sexes and the possibilities of varied data analyses have been presented in this paper. The ANGEL approach provides an ecologically valid assessment (as compared to existing tools) that quickly yields a very rich dataset and helps to assess multiple ERPs that can be studied extensively to assess cognitive functions in health and disease conditions.

  16. Surgical approach in patients with hyperparathyroidism in multiple endocrine neoplasia type 1: total versus partial parathyroidectomy

    Directory of Open Access Journals (Sweden)

    Francesco Tonelli

    2012-01-01

    Full Text Available Usually, primary hyperparathyroidism is the first endocrinopathy to be diagnosed in patients with multiple endocrine neoplasia type 1, and is also the most common one. The timing and strategy of surgery in multiple endocrine neoplasia type 1/hyperparathyroidism are still under debate. The aims of surgery are to: (1) correct hypercalcemia, thus preventing persistent or recurrent hyperparathyroidism; (2) avoid persistent hypoparathyroidism; and (3) facilitate the surgical treatment of possible recurrences. Currently, two types of surgical approach are indicated: (1) subtotal parathyroidectomy with removal of at least 3-3½ glands; and (2) total parathyroidectomy with grafting of autologous parathyroid tissue. Transcervical thymectomy must be performed with both of these procedures. Unsuccessful surgical treatment of hyperparathyroidism is more frequently observed in multiple endocrine neoplasia type 1 than in sporadic hyperparathyroidism. The recurrence rate is strongly influenced by: (1) the lack of a pre-operative multiple endocrine neoplasia type 1 diagnosis; (2) the surgeon's experience; (3) the timing of surgery; (4) the possibility of performing intra-operative confirmation (histologic examination, rapid parathyroid hormone assay) of the curative potential of the surgical procedure; and (5) the surgical strategy. Persistent hyperparathyroidism seems to be more frequent after subtotal parathyroidectomy than after total parathyroidectomy with autologous graft of parathyroid tissue. Conversely, recurrent hyperparathyroidism has a similar frequency in the two surgical strategies. To plan further operations, it is very helpful to know all the available data about previous surgery and to undertake accurate identification of the site of recurrence.

  17. Use of Multiple Imputation Method to Improve Estimation of Missing Baseline Serum Creatinine in Acute Kidney Injury Research

    Science.gov (United States)

    Peterson, Josh F.; Eden, Svetlana K.; Moons, Karel G.; Ikizler, T. Alp; Matheny, Michael E.

    2013-01-01

    Background and objectives: Baseline creatinine (BCr) is frequently missing in AKI studies. Common surrogate estimates can misclassify AKI and adversely affect the study of related outcomes. This study examined whether multiple imputation improved accuracy of estimating missing BCr beyond current recommendations to apply an assumed estimated GFR (eGFR) of 75 ml/min per 1.73 m2 (eGFR 75). Design, setting, participants, & measurements: From 41,114 unique adult admissions (13,003 with and 28,111 without BCr data) at Vanderbilt University Hospital between 2006 and 2008, a propensity score model was developed to predict the likelihood of missing BCr. Propensity scoring identified 6502 patients with the highest likelihood of missing BCr among the 13,003 patients with known BCr to simulate a “missing” data scenario while preserving the actual reference BCr. Within this cohort (n=6502), the ability of various multiple-imputation approaches to estimate BCr and classify AKI was compared with that of eGFR 75. Results: All multiple-imputation methods except the basic one more closely approximated actual BCr than did eGFR 75. Total AKI misclassification was lower with multiple imputation (full multiple imputation + serum creatinine) (9.0%) than with eGFR 75 (12.3%; Pcreatinine) (15.3%) versus eGFR 75 (40.5%; P<0.001). Multiple imputation improved specificity and positive predictive value for detecting AKI at the expense of modestly decreasing sensitivity relative to eGFR 75. Conclusions: Multiple imputation can improve accuracy in estimating missing BCr and reduce misclassification of AKI beyond currently proposed methods. PMID:23037980
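    The core idea of multiple imputation can be illustrated with a toy sketch: draw several stochastic regression-based imputations of the missing baseline value and pool them, then compare against substituting one fixed surrogate value for everyone. All numbers, the covariate, and the data-generating rule below are invented for illustration; this is not the paper's imputation model:

    ```python
    import math
    import random

    random.seed(42)
    n = 1000
    age = [random.uniform(20, 80) for _ in range(n)]
    # invented data-generating rule for baseline creatinine (mg/dl)
    bcr = [0.6 + 0.008 * a + random.gauss(0, 0.1) for a in age]
    obs = list(range(0, n, 2))          # admissions with BCr recorded
    mis = list(range(1, n, 2))          # admissions with BCr missing

    # fit BCr ~ age on the observed half
    xo = [age[i] for i in obs]
    yo = [bcr[i] for i in obs]
    mx, my = sum(xo) / len(xo), sum(yo) / len(yo)
    slope = (sum((a - mx) * (b - my) for a, b in zip(xo, yo))
             / sum((a - mx) ** 2 for a in xo))
    icept = my - slope * mx
    sd = math.sqrt(sum((b - (icept + slope * a)) ** 2
                       for a, b in zip(xo, yo)) / (len(xo) - 2))

    # m stochastic imputations, pooled by averaging (Rubin-style point estimate)
    m_imp = 20
    pooled = [sum(icept + slope * age[i] + random.gauss(0, sd)
                  for _ in range(m_imp)) / m_imp
              for i in mis]

    def rmse(est):
        return math.sqrt(sum((e - bcr[i]) ** 2
                             for e, i in zip(est, mis)) / len(mis))

    rmse_mi = rmse(pooled)
    rmse_fixed = rmse([1.0] * len(mis))   # one surrogate value for all
    print(rmse_mi, rmse_fixed)            # imputation beats the fixed surrogate
    ```

    The fixed surrogate here plays the role that a back-calculated eGFR-75 creatinine plays in the paper: it ignores patient-level covariates, which is exactly what imputation exploits.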

  18. Algebraic Verification Method for SEREs Properties via Groebner Bases Approaches

    Directory of Open Access Journals (Sweden)

    Ning Zhou

    2013-01-01

    Full Text Available This work presents an efficient solution using a computer algebra system to perform linear temporal properties verification for synchronous digital systems. The method is essentially based on both Groebner bases approaches and symbolic simulation. A mechanism for constructing canonical polynomial-set-based symbolic representations for both circuit descriptions and assertions is studied. We then present a complete checking algorithm framework based on these algebraic representations by using Groebner bases. The computational results in this work show that the algebraic approach is quite a competitive checking method and will be a useful supplement to existing verification methods based on simulation.

  19. A multiple objective test assembly approach for exposure control problems in Computerized Adaptive Testing

    Directory of Open Access Journals (Sweden)

    Theo J.H.M. Eggen

    2010-01-01

    Full Text Available Overexposure and underexposure of items in the bank are serious problems in operational computerized adaptive testing (CAT) systems. These exposure problems might result in item compromise, or point to a waste of investments. The exposure control problem can be viewed as a test assembly problem with multiple objectives. Information in the test has to be maximized, item compromise has to be minimized, and pool usage has to be optimized. In this paper, a multiple objectives method is developed to deal with both types of exposure problems. In this method, exposure control parameters based on observed exposure rates are implemented as weights for the information in the item selection procedure. The method does not need time-consuming simulation studies, and it can be implemented conditional on ability level. The method is compared with the Sympson-Hetter method for exposure control, with the Progressive method and with alpha-stratified testing. The results show that the method is successful in dealing with both kinds of exposure problems.
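    One plausible reading of "exposure control parameters implemented as weights for the information" is the following sketch, where an item's information is down-weighted as its observed exposure rate approaches a target ceiling. The weighting rule, item pool, and all parameters are our invention for illustration, not the paper's actual procedure:

    ```python
    def run_cat(infos, n_examinees, test_len, r_max=0.25, weighted=True):
        # infos: item information values at a (fixed) ability level
        admin = [0] * len(infos)
        for ex in range(1, n_examinees + 1):
            used = set()
            for _ in range(test_len):
                best, best_val = None, -1.0
                for i, info in enumerate(infos):
                    if i in used:
                        continue
                    rate = admin[i] / ex              # observed exposure rate
                    w = max(0.0, 1.0 - rate / r_max) if weighted else 1.0
                    if w * info > best_val:
                        best, best_val = i, w * info
                used.add(best)
                admin[best] += 1
        return [a / n_examinees for a in admin]       # final exposure rates

    infos = list(range(20, 0, -1))                    # 20 items, info 20..1
    free = run_cat(infos, 200, 3, weighted=False)
    ctrl = run_cat(infos, 200, 3, weighted=True)
    print(max(free), max(ctrl))   # 1.0 without control, far lower with it
    ```

    Without weighting, the three most informative items are administered to everyone; with the exposure weight, the selection pressure spreads across the pool, which is the qualitative behavior the abstract describes.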

  20. Multiple Stressors and Ecological Complexity Require A New Approach to Coral Reef Research

    Directory of Open Access Journals (Sweden)

    Linwood Hagan Pendleton

    2016-03-01

    Full Text Available Ocean acidification, climate change, and other environmental stressors threaten coral reef ecosystems and the people who depend upon them. New science reveals that these multiple stressors interact and may affect a multitude of physiological and ecological processes in complex ways. The interaction of multiple stressors and ecological complexity may mean that the negative effects on coral reef ecosystems will happen sooner and be more severe than previously thought. Yet, most research on the effects of global change on coral reefs focuses on one or a few stressors and pathways or outcomes (e.g., bleaching). Based on a critical review of the literature, we call for a regionally targeted strategy of mesocosm-level research that addresses this complexity and provides more realistic projections about coral reef impacts in the face of global environmental change. We believe similar approaches are needed for other ecosystems that face global environmental change.

  1. Optimal planning of multiple distributed generation sources in distribution networks: A new approach

    Energy Technology Data Exchange (ETDEWEB)

    AlRashidi, M.R., E-mail: malrash2002@yahoo.com [Department of Electrical Engineering, College of Technological Studies, Public Authority for Applied Education and Training (PAAET) (Kuwait); AlHajri, M.F., E-mail: mfalhajri@yahoo.com [Department of Electrical Engineering, College of Technological Studies, Public Authority for Applied Education and Training (PAAET) (Kuwait)

    2011-10-15

    Highlights: → A new hybrid PSO for optimal DGs placement and sizing. → Statistical analysis to fine tune PSO parameters. → Novel constraint handling mechanism to handle different constraints types. - Abstract: An improved particle swarm optimization algorithm (PSO) is presented for optimal planning of multiple distributed generation sources (DG). This problem can be divided into two sub-problems: the DG optimal size (continuous optimization) and location (discrete optimization) to minimize real power losses. The proposed approach addresses the two sub-problems simultaneously using an enhanced PSO algorithm capable of handling multiple DG planning in a single run. A design of experiment is used to fine tune the proposed approach via proper analysis of PSO parameters interaction. The proposed algorithm treats the problem constraints differently by adopting a radial power flow algorithm to satisfy the equality constraints, i.e. power flows in distribution networks, while the inequality constraints are handled by making use of some of the PSO features. The proposed algorithm was tested on the practical 69-bus power distribution system. Different test cases were considered to validate the proposed approach consistency in detecting optimal or near optimal solution. Results are compared with those of Sequential Quadratic Programming.

  2. Optimal planning of multiple distributed generation sources in distribution networks: A new approach

    International Nuclear Information System (INIS)

    AlRashidi, M.R.; AlHajri, M.F.

    2011-01-01

    Highlights: → A new hybrid PSO for optimal DGs placement and sizing. → Statistical analysis to fine tune PSO parameters. → Novel constraint handling mechanism to handle different constraints types. - Abstract: An improved particle swarm optimization algorithm (PSO) is presented for optimal planning of multiple distributed generation sources (DG). This problem can be divided into two sub-problems: the DG optimal size (continuous optimization) and location (discrete optimization) to minimize real power losses. The proposed approach addresses the two sub-problems simultaneously using an enhanced PSO algorithm capable of handling multiple DG planning in a single run. A design of experiment is used to fine tune the proposed approach via proper analysis of PSO parameters interaction. The proposed algorithm treats the problem constraints differently by adopting a radial power flow algorithm to satisfy the equality constraints, i.e. power flows in distribution networks, while the inequality constraints are handled by making use of some of the PSO features. The proposed algorithm was tested on the practical 69-bus power distribution system. Different test cases were considered to validate the proposed approach consistency in detecting optimal or near optimal solution. Results are compared with those of Sequential Quadratic Programming.
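    As a minimal illustration of the optimization engine only (not the paper's enhanced variant, and without the radial power-flow constraint handling), here is a bare-bones global-best PSO minimizing a stand-in loss function; all parameter values are conventional defaults, not the authors' tuned settings:

    ```python
    import random

    def pso(f, dim, n_particles=20, iters=100, lo=-5.0, hi=5.0, seed=0):
        rnd = random.Random(seed)
        pos = [[rnd.uniform(lo, hi) for _ in range(dim)]
               for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest = [p[:] for p in pos]
        pbest_val = [f(p) for p in pos]
        g = min(range(n_particles), key=lambda i: pbest_val[i])
        gbest, gbest_val = pbest[g][:], pbest_val[g]
        w, c1, c2 = 0.7, 1.5, 1.5        # inertia and acceleration weights
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * rnd.random() * (pbest[i][d] - pos[i][d])
                                 + c2 * rnd.random() * (gbest[d] - pos[i][d]))
                    pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
                val = f(pos[i])
                if val < pbest_val[i]:
                    pbest[i], pbest_val[i] = pos[i][:], val
                    if val < gbest_val:
                        gbest, gbest_val = pos[i][:], val
        return gbest, gbest_val

    # stand-in for "real power losses": a simple quadratic bowl
    best, best_val = pso(lambda x: sum(t * t for t in x), dim=3)
    print(best_val)    # near zero
    ```

    One common trick for mixed problems of this kind is to keep siting dimensions continuous inside the swarm and round them to bus indices when evaluating the loss; whether the paper does exactly this is not stated in the abstract.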

  3. Multiple emotions: a person-centered approach to the relationship between intergroup emotion and action orientation.

    Science.gov (United States)

    Fernando, Julian W; Kashima, Yoshihisa; Laham, Simon M

    2014-08-01

    Although a great deal of research has investigated the relationship between emotions and action orientations, most studies to date have used variable-centered techniques to identify the best emotion predictor(s) of a particular action. Given that people frequently report multiple or blended emotions, a profitable area of research may be to adopt person-centered approaches to examine the action orientations elicited by a particular combination of emotions or "emotion profile." In two studies, across instances of intergroup inequality in Australia and Canada, we examined participants' experiences of six intergroup emotions: sympathy, anger directed at three targets, shame, and pride. In both studies, five groups of participants with similar emotion profiles were identified by cluster analysis and their action orientations were compared; clusters indicated that the majority of participants experienced multiple emotions. Each action orientation was also regressed on the six emotions. There were a number of differences in the results obtained from the person-centered and variable-centered approaches. This was most apparent for sympathy: the group of participants experiencing only sympathy showed little inclination to perform prosocial actions, yet sympathy was a significant predictor of numerous action orientations in regression analyses. These results imply that sympathy may only prompt a desire for action when experienced in combination with other emotions. We suggest that the use of person-centered and variable-centered approaches as complementary analytic strategies may enrich research into not only the affective predictors of action, but emotion research in general.
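    A person-centered analysis in the spirit described above can be sketched with a small Lloyd's k-means on synthetic "emotion profiles" (two invented dimensions, e.g. sympathy and anger); the clusters, not regression coefficients, are the unit of interpretation. Data and initialization are toy choices, not the studies' actual measures:

    ```python
    import random

    def kmeans(points, k, iters=50):
        # toy deterministic init: first and last point (one from each blob here)
        centers = [points[0], points[-1]]
        for _ in range(iters):
            groups = [[] for _ in range(k)]
            for p in points:
                j = min(range(k),
                        key=lambda c: sum((a - b) ** 2
                                          for a, b in zip(p, centers[c])))
                groups[j].append(p)
            centers = [tuple(sum(col) / len(g) for col in zip(*g)) if g
                       else centers[j] for j, g in enumerate(groups)]
        return centers

    rnd = random.Random(3)
    profiles = ([(1 + rnd.gauss(0, 0.1), 0 + rnd.gauss(0, 0.1))  # sympathy only
                 for _ in range(50)] +
                [(1 + rnd.gauss(0, 0.1), 1 + rnd.gauss(0, 0.1))  # sympathy+anger
                 for _ in range(50)])
    c_lo, c_hi = sorted(kmeans(profiles, 2), key=lambda c: c[1])
    print(c_lo, c_hi)   # one center near (1, 0), the other near (1, 1)
    ```

    Comparing action orientations across such clusters is what distinguishes the person-centered view from regressing each action orientation on the emotions separately.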

  4. An Application of Graphical Approach to Construct Multiple Testing Procedure in a Hypothetical Phase III Design

    Directory of Open Access Journals (Sweden)

    Naitee eTing

    2014-01-01

    Full Text Available Many multiple testing procedures (MTPs) have been developed in recent years. Among these new procedures, the graphical approach is flexible and easy to communicate with non-statisticians. A hypothetical Phase III clinical trial design is introduced in this manuscript to demonstrate how the graphical approach can be applied in clinical product development. In this design, an active comparator is used. It is thought that this test drug under development could potentially be superior to this comparator. For comparison of efficacy, the primary endpoint is well established and widely accepted by regulatory agencies. However, an important secondary endpoint based on Phase II findings looks very promising. The target dose may have a good opportunity to deliver superiority to the comparator. Furthermore, a lower dose is included in case the target dose demonstrates potential safety concerns. This Phase III study is designed as a non-inferiority trial with two doses and two endpoints. This manuscript will illustrate how the graphical approach is applied to this design in handling multiple testing issues.
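    The graphical approach (in the style of Bretz et al.) propagates the significance level along a weighted graph as hypotheses are rejected. A generic sketch of the sequentially rejective algorithm follows; the example weights and transitions encode simple Holm-type and fixed-sequence procedures, not the manuscript's actual Phase III design:

    ```python
    def graphical_mtp(p, w, g, alpha=0.05):
        # p: p-values; w: initial local alpha weights (summing to at most 1);
        # g[i][j]: fraction of H_i's weight passed to H_j when H_i is rejected
        w = list(w)
        g = [row[:] for row in g]
        active = set(range(len(p)))
        rejected = set()
        while True:
            cand = [i for i in active if w[i] > 0 and p[i] <= w[i] * alpha]
            if not cand:
                return rejected
            i = cand[0]
            active.remove(i)
            rejected.add(i)
            # update weights and transitions simultaneously from old values
            nw, ng = w[:], [row[:] for row in g]
            for j in active:
                nw[j] = w[j] + w[i] * g[i][j]
                for l in active:
                    if l == j:
                        continue
                    denom = 1.0 - g[j][i] * g[i][j]
                    ng[j][l] = ((g[j][l] + g[j][i] * g[i][l]) / denom
                                if denom > 0 else 0.0)
            w, g = nw, ng

    # Holm's procedure as a graph: equal weights, full transfer between the two
    holm = graphical_mtp([0.01, 0.04], [0.5, 0.5], [[0, 1], [1, 0]])
    print(sorted(holm))    # both hypotheses rejected
    ```

    The flexibility praised in the abstract comes from choosing the initial weights and the transition matrix to mirror the clinical priorities (doses, endpoints) rather than using a one-size-fits-all correction.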

  5. Multiple Site-Directed and Saturation Mutagenesis by the Patch Cloning Method.

    Science.gov (United States)

    Taniguchi, Naohiro; Murakami, Hiroshi

    2017-01-01

    Constructing protein-coding genes with desired mutations is a basic step for protein engineering. Herein, we describe a multiple site-directed and saturation mutagenesis method, termed MUPAC. This method has been used to introduce multiple site-directed mutations in the green fluorescent protein gene and in the Moloney murine leukemia virus reverse transcriptase gene. Moreover, this method was also successfully used to introduce randomized codons at five desired positions in the green fluorescent protein gene, and for simple DNA assembly for cloning.

  6. The Initial Rise Method in the case of multiple trapping levels

    International Nuclear Information System (INIS)

    Furetta, C.; Guzman, S.; Cruz Z, E.

    2009-10-01

    The aim of the paper is to extend the well-known Initial Rise (IR) method to the case of multiple trapping levels. The IR method is applied to the minerals extracted from Nopal herb and Oregano spice because the shape of the thermoluminescent glow curves suggests a trap distribution instead of a single trapping level. (Author)
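    For a single trapping level the IR method rests on I(T) ∝ exp(-E/kT) over the low-temperature rise of a glow peak, so a straight-line fit of ln I against 1/T yields the activation energy E. A sketch with synthetic single-trap data follows (the multi-level extension applies this fit to the initial region of each resolved peak); the trap depth and temperatures are invented:

    ```python
    import math

    K_B = 8.617e-5                      # Boltzmann constant in eV/K

    def initial_rise_energy(temps, intensities):
        # least-squares slope of ln(I) versus 1/T; slope = -E / k_B
        x = [1.0 / t for t in temps]
        y = [math.log(i) for i in intensities]
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
                 / sum((a - mx) ** 2 for a in x))
        return -slope * K_B

    E_true = 1.05                       # eV, synthetic trap depth
    temps = [300.0 + 2 * i for i in range(15)]
    ivals = [1e12 * math.exp(-E_true / (K_B * t)) for t in temps]
    print(initial_rise_energy(temps, ivals))    # recovers 1.05 eV
    ```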

  7. Calculation of U, Ra, Th and K contents in uranium ore by multiple linear regression method

    International Nuclear Information System (INIS)

    Lin Chao; Chen Yingqiang; Zhang Qingwen; Tan Fuwen; Peng Guanghui

    1991-01-01

    A multiple linear regression method was used to analyze γ spectra of uranium ore samples and to calculate the contents of U, Ra, Th, and K. In comparison with the inverse matrix method, its advantage is that no standard samples of pure U, Ra, Th, and K are needed for obtaining response coefficients.
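    The regression step can be sketched as follows: measured counts in several spectral windows are modeled as a linear mix of the four nuclide contents, and the normal equations are solved for the contents. The response matrix and contents below are invented; in practice the coefficients come from calibration:

    ```python
    def solve(M, b):
        # Gaussian elimination with partial pivoting on the normal equations
        n = len(M)
        A = [row[:] + [b[i]] for i, row in enumerate(M)]
        for c in range(n):
            piv = max(range(c, n), key=lambda r: abs(A[r][c]))
            A[c], A[piv] = A[piv], A[c]
            for r in range(c + 1, n):
                f = A[r][c] / A[c][c]
                for k in range(c, n + 1):
                    A[r][k] -= f * A[c][k]
        x = [0.0] * n
        for r in range(n - 1, -1, -1):
            x[r] = (A[r][n] - sum(A[r][k] * x[k]
                                  for k in range(r + 1, n))) / A[r][r]
        return x

    # rows: spectral windows; columns: response to unit content of U, Ra, Th, K
    R = [[5.0, 1.0, 0.5, 0.2],
         [1.0, 4.0, 0.8, 0.1],
         [0.3, 0.9, 3.0, 0.4],
         [0.1, 0.2, 0.5, 2.0],
         [0.6, 0.3, 0.2, 1.0]]
    true_c = [2.0, 1.0, 3.0, 4.0]                    # U, Ra, Th, K contents
    counts = [sum(r[j] * true_c[j] for j in range(4)) for r in R]

    # least squares: (R^T R) c = R^T y
    RtR = [[sum(R[i][a] * R[i][bb] for i in range(5)) for bb in range(4)]
           for a in range(4)]
    Rty = [sum(R[i][a] * counts[i] for i in range(5)) for a in range(4)]
    est = solve(RtR, Rty)
    print(est)    # recovers the true contents up to rounding
    ```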

  8. The Initial Rise Method in the case of multiple trapping levels

    Energy Technology Data Exchange (ETDEWEB)

    Furetta, C. [Centro de Investigacion en Ciencia Aplicada y Tecnologia Avanzada, IPN, Av. Legaria 694, Col. Irrigacion, 11500 Mexico D. F. (Mexico); Guzman, S.; Cruz Z, E. [Instituto de Ciencias Nucleares, UNAM, A. P. 70-543, 04510 Mexico D. F. (Mexico)

    2009-10-15

    The aim of the paper is to extend the well-known Initial Rise (IR) method to the case of multiple trapping levels. The IR method is applied to the minerals extracted from Nopal herb and Oregano spice because the shape of the thermoluminescent glow curves suggests a trap distribution instead of a single trapping level. (Author)

  9. A method for the generation of random multiple Coulomb scattering angles

    International Nuclear Information System (INIS)

    Campbell, J.R.

    1995-06-01

    A method for the random generation of spatial angles drawn from non-Gaussian multiple Coulomb scattering distributions is presented. The method employs direct numerical inversion of cumulative probability distributions computed from the universal non-Gaussian angular distributions of Marion and Zimmerman. (author). 12 refs., 3 figs
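    Direct numerical inversion of a tabulated cumulative distribution can be sketched like this. The angular density below is a generic small-angle form chosen for illustration, not the Marion-Zimmerman distribution itself:

    ```python
    import bisect
    import math
    import random

    # tabulate an illustrative scattering-angle density f(t) = t*exp(-t^2/2)
    step = 0.01
    thetas = [i * step for i in range(601)]           # 0 .. 6 (reduced units)
    pdf = [t * math.exp(-t * t / 2.0) for t in thetas]

    # cumulative distribution via the trapezoid rule, then normalize
    cdf = [0.0]
    for i in range(len(pdf) - 1):
        cdf.append(cdf[-1] + 0.5 * step * (pdf[i] + pdf[i + 1]))
    total = cdf[-1]
    cdf = [c / total for c in cdf]

    def sample_angle(rnd):
        # invert u = F(theta) by table lookup plus linear interpolation
        u = rnd.random()
        j = bisect.bisect_left(cdf, u)
        j = min(max(j, 1), len(cdf) - 1)
        frac = (u - cdf[j - 1]) / (cdf[j] - cdf[j - 1])
        return thetas[j - 1] + frac * step

    rnd = random.Random(7)
    mean = sum(sample_angle(rnd) for _ in range(10000)) / 10000
    print(mean)   # analytic mean of this density is sqrt(pi/2) ~ 1.2533
    ```

    The same table-inversion machinery applies to any tabulated angular distribution, which is the essence of the method the abstract describes.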

  10. Freestyle multiple propeller flap reconstruction (jigsaw puzzle approach) for complicated back defects.

    Science.gov (United States)

    Park, Sung Woo; Oh, Tae Suk; Eom, Jin Sup; Sun, Yoon Chi; Suh, Hyun Suk; Hong, Joon Pio

    2015-05-01

    The reconstruction of the posterior trunk remains a challenge, as defects can be extensive, with deep dead space and fixation devices exposed. Our goal was to achieve a tension-free closure for complex defects on the posterior trunk. From August 2006 to May 2013, 18 cases were reconstructed with multiple flaps combining perforator and local skin flaps. The reconstructions were performed using a freestyle approach, starting with propeller flap(s) in a single or multilobed design and continuing sequentially with adjacent random-pattern flaps, like fitting the pieces of a puzzle. All defects achieved tensionless primary closure, and the final appearance resembled a jigsaw puzzle. The average size of defect was 139.6 cm(2) (range, 36-345 cm(2)). A total of 26 perforator flaps were used in addition to 19 random-pattern flaps for 18 cases. In all cases, a single perforator was used for each propeller flap. The defect and the donor site all achieved tension-free closure. The reconstruction was 100% successful without flap loss. One case of late infection was noted at 12 months after surgery. Using multiple-lobe-designed propeller flaps in conjunction with random-pattern flaps in a freestyle approach, resembling putting a jigsaw puzzle together, we can achieve a tension-free closure by distributing the tension across multiple flaps, supplying sufficient volume to obliterate dead space, and maintaining reliable vascularity, as the flaps do not need to be oversized. This can be a viable approach to reconstructing extensive defects on the posterior trunk.

  11. A modular method to handle multiple time-dependent quantities in Monte Carlo simulations

    International Nuclear Information System (INIS)

    Shin, J; Faddegon, B A; Perl, J; Schümann, J; Paganetti, H

    2012-01-01

    A general method for handling time-dependent quantities in Monte Carlo simulations was developed to make such simulations more accessible to the medical community for a wide range of applications in radiotherapy, including fluence and dose calculation. To describe time-dependent changes in the most general way, we developed a grammar of functions that we call ‘Time Features’. When a simulation quantity, such as the position of a geometrical object, an angle, a magnetic field, a current, etc, takes its value from a Time Feature, that quantity varies over time. The operation of time-dependent simulation was separated into distinct parts: the Sequence samples time values either sequentially at equal increments or randomly from a uniform distribution (allowing quantities to vary continuously in time), and then each time-dependent quantity is calculated according to its Time Feature. Due to this modular structure, time-dependent simulations, even in the presence of multiple time-dependent quantities, can be efficiently performed in a single simulation with any given time resolution. This approach has been implemented in TOPAS (TOol for PArticle Simulation), designed to make Monte Carlo simulations with Geant4 more accessible to both clinical and research physicists. To demonstrate the method, three clinical situations were simulated: a variable water column used to verify constancy of the Bragg peak of the Crocker Lab eye treatment facility of the University of California, the double-scattering treatment mode of the passive beam scattering system at Massachusetts General Hospital (MGH), where a spinning range modulator wheel accompanied by beam current modulation produces a spread-out Bragg peak, and the scanning mode at MGH, where time-dependent pulse shape, energy distribution and magnetic fields control Bragg peak positions. Results confirm the clinical applicability of the method. (paper)
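    The structure described — a time Sequence that samples values sequentially or randomly, plus per-quantity "Time Features" evaluated at each sampled time — can be sketched as follows. Class and parameter names are illustrative, not the actual TOPAS interface:

    ```python
    import random

    class TimeFeature:
        """A quantity whose value is a function of simulation time."""
        def value(self, t):
            raise NotImplementedError

    class Linear(TimeFeature):
        # e.g. a water column translating at constant speed
        def __init__(self, start, rate):
            self.start, self.rate = start, rate
        def value(self, t):
            return self.start + self.rate * t

    class Periodic(TimeFeature):
        # e.g. the angle of a spinning range-modulator wheel, in degrees
        def __init__(self, period):
            self.period = period
        def value(self, t):
            return 360.0 * ((t % self.period) / self.period)

    def run_sequence(quantities, t_end, n, mode="sequential", seed=0):
        # the Sequence samples times; each Time Feature is then evaluated
        rnd = random.Random(seed)
        history = []
        for i in range(n):
            t = (t_end * i / (n - 1) if mode == "sequential"
                 else rnd.uniform(0.0, t_end))
            history.append({name: q.value(t) for name, q in quantities.items()})
        return history

    hist = run_sequence({"wheel_deg": Periodic(0.1),
                         "column_mm": Linear(0.0, 5.0)}, t_end=1.0, n=5)
    print(hist[-1]["column_mm"])    # 5.0 at the final sampled time
    ```

    The modularity is the point: adding another time-dependent quantity means registering one more Time Feature, not restructuring the simulation loop.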

  12. A Fast Multiple-Kernel Method With Applications to Detect Gene-Environment Interaction.

    Science.gov (United States)

    Marceau, Rachel; Lu, Wenbin; Holloway, Shannon; Sale, Michèle M; Worrall, Bradford B; Williams, Stephen R; Hsu, Fang-Chi; Tzeng, Jung-Ying

    2015-09-01

    Kernel machine (KM) models are a powerful tool for exploring associations between sets of genetic variants and complex traits. Although most KM methods use a single kernel function to assess the marginal effect of a variable set, KM analyses involving multiple kernels have become increasingly popular. Multikernel analysis allows researchers to study more complex problems, such as assessing gene-gene or gene-environment interactions, incorporating variance-component based methods for population substructure into rare-variant association testing, and assessing the conditional effects of a variable set adjusting for other variable sets. The KM framework is robust, powerful, and provides efficient dimension reduction for multifactor analyses, but requires the estimation of high dimensional nuisance parameters. Traditional estimation techniques, including regularization and the "expectation-maximization (EM)" algorithm, have a large computational cost and are not scalable to large sample sizes needed for rare variant analysis. Therefore, under the context of gene-environment interaction, we propose a computationally efficient and statistically rigorous "fastKM" algorithm for multikernel analysis that is based on a low-rank approximation to the nuisance effect kernel matrices. Our algorithm is applicable to various trait types (e.g., continuous, binary, and survival traits) and can be implemented using any existing single-kernel analysis software. Through extensive simulation studies, we show that our algorithm has similar performance to an EM-based KM approach for quantitative traits while running much faster. We also apply our method to the Vitamin Intervention for Stroke Prevention (VISP) clinical trial, examining gene-by-vitamin effects on recurrent stroke risk and gene-by-age effects on change in homocysteine level. © 2015 WILEY PERIODICALS, INC.
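    The computational trick — replacing an n×n kernel matrix by a low-rank factorization — can be sketched with deflated power iteration. Here the linear kernel of two-dimensional data has exact rank 2, so a rank-2 approximation reproduces it almost exactly; fastKM itself uses a more sophisticated low-rank approximation, and the data below are invented:

    ```python
    import math
    import random

    def rank_r_approx(K, r, iters=300, seed=0):
        # deflated power iteration: peel off the top-r eigenpairs of a PSD matrix
        n = len(K)
        K = [row[:] for row in K]
        approx = [[0.0] * n for _ in range(n)]
        rnd = random.Random(seed)
        for _ in range(r):
            v = [rnd.random() + 0.1 for _ in range(n)]
            for _ in range(iters):
                w = [sum(K[i][j] * v[j] for j in range(n)) for i in range(n)]
                norm = math.sqrt(sum(x * x for x in w))
                if norm == 0.0:
                    break
                v = [x / norm for x in w]
            lam = sum(v[i] * sum(K[i][j] * v[j] for j in range(n))
                      for i in range(n))
            for i in range(n):
                for j in range(n):
                    approx[i][j] += lam * v[i] * v[j]
                    K[i][j] -= lam * v[i] * v[j]
        return approx

    # linear kernel K = X X^T of 2-D covariates: rank is at most 2
    X = [[1, 0], [0, 1], [1, 1], [2, 1], [1, 2], [3, 0]]
    K = [[sum(a * b for a, b in zip(xi, xj)) for xj in X] for xi in X]
    K2 = rank_r_approx(K, 2)
    err = math.sqrt(sum((K[i][j] - K2[i][j]) ** 2
                        for i in range(len(K)) for j in range(len(K))))
    print(err)    # near zero: the rank-2 factorization captures K exactly
    ```

    Working with the r factors instead of the full matrix is what turns the expensive nuisance-parameter estimation into something scalable to rare-variant sample sizes.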

  13. Estimating HIV incidence among adults in Kenya and Uganda: a systematic comparison of multiple methods.

    Directory of Open Access Journals (Sweden)

    Andrea A Kim

    2011-03-01

    Full Text Available Several approaches have been used for measuring HIV incidence in large areas, yet each presents specific challenges in incidence estimation. We present a comparison of incidence estimates for Kenya and Uganda using multiple methods: (1) Epidemic Projections Package (EPP) and Spectrum models fitted to HIV prevalence from antenatal clinics (ANC) and national population-based surveys (NPS) in Kenya (2003, 2007) and Uganda (2004/2005); (2) a survey-derived model to infer age-specific incidence between two sequential NPS; (3) an assay-derived measurement in NPS using the BED IgG capture enzyme immunoassay, adjusted for misclassification using a locally derived false-recent rate (FRR) for the assay; (4) community cohorts in Uganda; (5) prevalence trends in young ANC attendees. EPP/Spectrum-derived and survey-derived modeled estimates were similar: 0.67 [uncertainty range: 0.60, 0.74] and 0.6 [confidence interval (CI): 0.4, 0.9], respectively, for Uganda (2005), and 0.72 [uncertainty range: 0.70, 0.74] and 0.7 [CI: 0.3, 1.1], respectively, for Kenya (2007). Using a local FRR, assay-derived incidence estimates were 0.3 [CI: 0.0, 0.9] for Uganda (2004/2005) and 0.6 [CI: 0, 1.3] for Kenya (2007). Incidence trends were similar for all methods for both Uganda and Kenya. Triangulation of methods is recommended to determine best-supported estimates of incidence to guide programs. Assay-derived incidence estimates are sensitive to the level of the assay's FRR, and uncertainty around high FRRs can significantly impact the validity of the estimate. Systematic evaluations of new and existing incidence assays are needed to study the level, distribution, and determinants of the FRR to guide whether incidence assays can produce reliable estimates of national HIV incidence.


  14. A versatile method for confirmatory evaluation of the effects of a covariate in multiple models

    DEFF Research Database (Denmark)

    Pipper, Christian Bressen; Ritz, Christian; Bisgaard, Hans

    2012-01-01

    Modern epidemiology often requires testing of the effect of a covariate on multiple end points from the same study. However, popular state-of-the-art methods for multiple testing require the tests to be evaluated within the framework of a single model unifying all end points. This severely limits their applicability. The proposed methodology provides a fine-tuned control of the overall type I error in a wide range of epidemiological experiments where in reality no other useful alternative exists. It is applied to a multiple-end-point study of the effect of neonatal bacterial colonization on development of childhood asthma.

  15. Regularization methods for ill-posed problems in multiple Hilbert scales

    International Nuclear Information System (INIS)

    Mazzieri, Gisela L; Spies, Ruben D

    2012-01-01

    Several convergence results in Hilbert scales under different source conditions are proved and orders of convergence and optimal orders of convergence are derived. Also, relations between those source conditions are proved. The concept of a multiple Hilbert scale on a product space is introduced, and regularization methods on these scales are defined, both for the case of a single observation and for the case of multiple observations. In the latter case, it is shown how vector-valued regularization functions in these multiple Hilbert scales can be used. In all cases, convergence is proved and orders and optimal orders of convergence are shown. Finally, some potential applications and open problems are discussed. (paper)

  16. Study of the multiple scattering effect in TEBENE using the Monte Carlo method

    International Nuclear Information System (INIS)

    Singkarat, Somsorn.

    1990-01-01

    The neutron time-of-flight and energy spectra, from the TEBENE set-up, have been calculated by a computer program using the Monte Carlo method. The neutron multiple scattering within the polyethylene scatterer ring is closely investigated. The results show that multiple scattering has a significant effect on the detected neutron yield. They also indicate that the thickness of the scatterer ring has to be carefully chosen. (author)

  17. Simultaneous Calibration: A Joint Optimization Approach for Multiple Kinect and External Cameras.

    Science.gov (United States)

    Liao, Yajie; Sun, Ying; Li, Gongfa; Kong, Jianyi; Jiang, Guozhang; Jiang, Du; Cai, Haibin; Ju, Zhaojie; Yu, Hui; Liu, Honghai

    2017-06-24

    Camera calibration is a crucial problem in many applications, such as 3D reconstruction, structure from motion, object tracking and face alignment. Numerous methods with good performance have been proposed to solve this problem in the last few decades. However, few methods are targeted at the joint calibration of multi-sensor setups (more than four devices), which is a practical issue in real-time systems. In this paper, we propose a novel method and a corresponding workflow framework to simultaneously calibrate the relative poses of a Kinect and three external cameras. By optimizing the final cost function and adding corresponding weights to the external cameras in different locations, an effective joint calibration of multiple devices is constructed. Furthermore, the method is tested on a practical platform, and the experimental results show that the proposed joint calibration method achieves satisfactory performance in a practical real-time system, with accuracy higher than the manufacturer's calibration.

  18. A Pattern-Oriented Approach to a Methodical Evaluation of Modeling Methods

    Directory of Open Access Journals (Sweden)

    Michael Amberg

    1996-11-01

    Full Text Available The paper describes a pattern-oriented approach to evaluate modeling methods and to compare various methods with each other from a methodical viewpoint. A specific set of principles (the patterns) is defined by investigating the notations and the documentation of comparable modeling methods. Each principle helps to examine some parts of the methods from a specific point of view. All principles together lead to an overall picture of the method under examination. First, the core ("method-neutral") meaning of each principle is described. Then the methods are examined regarding the principle. Afterwards, the method-specific interpretations are compared with each other and with the core meaning of the principle. By this procedure, the strengths and weaknesses of modeling methods regarding methodical aspects are identified. The principles are described uniformly using a principle description template, analogous to descriptions of object-oriented design patterns. The approach is demonstrated by evaluating a business process modeling method.

  19. Non-animal methods to predict skin sensitization (II): an assessment of defined approaches *.

    Science.gov (United States)

    Kleinstreuer, Nicole C; Hoffmann, Sebastian; Alépée, Nathalie; Allen, David; Ashikaga, Takao; Casey, Warren; Clouet, Elodie; Cluzel, Magalie; Desprez, Bertrand; Gellatly, Nichola; Göbel, Carsten; Kern, Petra S; Klaric, Martina; Kühnl, Jochen; Martinozzi-Teissier, Silvia; Mewes, Karsten; Miyazawa, Masaaki; Strickland, Judy; van Vliet, Erwin; Zang, Qingda; Petersohn, Dirk

    2018-05-01

    Skin sensitization is a toxicity endpoint of widespread concern, for which the mechanistic understanding and concurrent necessity for non-animal testing approaches have evolved to a critical juncture, with many available options for predicting sensitization without using animals. Cosmetics Europe and the National Toxicology Program Interagency Center for the Evaluation of Alternative Toxicological Methods collaborated to analyze the performance of multiple non-animal data integration approaches for the skin sensitization safety assessment of cosmetics ingredients. The Cosmetics Europe Skin Tolerance Task Force (STTF) collected and generated data on 128 substances in multiple in vitro and in chemico skin sensitization assays selected based on a systematic assessment by the STTF. These assays, together with certain in silico predictions, are key components of various non-animal testing strategies that have been submitted to the Organization for Economic Cooperation and Development as case studies for skin sensitization. Curated murine local lymph node assay (LLNA) and human skin sensitization data were used to evaluate the performance of six defined approaches, comprising eight non-animal testing strategies, for both hazard and potency characterization. Defined approaches examined included consensus methods, artificial neural networks, support vector machine models, Bayesian networks, and decision trees, most of which were reproduced using open source software tools. Multiple non-animal testing strategies incorporating in vitro, in chemico, and in silico inputs demonstrated equivalent or superior performance to the LLNA when compared to both animal and human data for skin sensitization.

  20. Multidisciplinary approaches to managing osteoarthritis in multiple joint sites: a systematic review.

    Science.gov (United States)

    Finney, Andrew; Healey, Emma; Jordan, Joanne L; Ryan, Sarah; Dziedzic, Krysia S

    2016-07-08

    The National Institute for Health and Care Excellence's Osteoarthritis (OA) guidelines recommended that future research should consider the benefits of combination therapies in people with OA across multiple joint sites. However, the clinical effectiveness of such approaches to OA management is unknown. This systematic review therefore aimed to identify the clinical and cost effectiveness of multidisciplinary approaches targeting multiple joint sites for OA in primary care. A systematic review of randomised controlled trials. Computerised bibliographic databases were searched (MEDLINE, EMBASE, CINAHL, PsychINFO, BNI, HBE, HMIC, AMED, Web of Science and Cochrane). Studies were included if they met the following criteria; a randomised controlled trial (RCT), a primary care population with OA across at least two different peripheral joint sites (multiple joint sites), and interventions undertaken by at least two different health disciplines (multidisciplinary). The Cochrane 'Risk of Bias' tool and PEDro were used for quality assessment of eligible studies. Clinical and cost effectiveness was determined by extracting and examining self-reported outcomes for pain, function, quality of life (QoL) and health care utilisation. The date range for the search was from database inception until August 2015. The search identified 1148 individual titles of which four were included in the review. A narrative review was conducted due to the heterogeneity of the included trials. Each of the four trials used either educational or exercise interventions facilitated by a range of different health disciplines. Moderate clinical benefits on pain, function and QoL were reported across the studies. The beneficial effects of exercise generally decreased over time within all studies. Two studies were able to show a reduction in healthcare utilisation due to a reduction in visits to a physiotherapist or a reduction in x-rays and orthopaedic referrals. The intervention that showed the most

  1. Stepwise approach to establishing multiple outreach laboratory information system-electronic medical record interfaces.

    Science.gov (United States)

    Pantanowitz, Liron; Labranche, Wayne; Lareau, William

    2010-05-26

    Clinical laboratory outreach business is changing as more physician practices adopt an electronic medical record (EMR). Physician connectivity with the laboratory information system (LIS) is consequently becoming more important. However, there are no reports available to assist the informatician with establishing and maintaining outreach LIS-EMR connectivity. A four-stage scheme is presented that was successfully employed to establish unidirectional and bidirectional interfaces with multiple physician EMRs. This approach involves planning (step 1), followed by interface building (step 2) with subsequent testing (step 3), and finally ongoing maintenance (step 4). The role of organized project management, software as a service (SAAS), and alternate solutions for outreach connectivity are discussed.

  2. Approaches and methods for econometric analysis of market power

    DEFF Research Database (Denmark)

    Perekhozhuk, Oleksandr; Glauben, Thomas; Grings, Michael

    2017-01-01

    This study discusses two widely used approaches in the New Empirical Industrial Organization (NEIO) literature and examines the strengths and weaknesses of the Production-Theoretic Approach (PTA) and the General Identification Method (GIM) for the econometric analysis of market power in agricultural and food markets. We provide a framework that may help researchers to evaluate and improve structural models of market power. Starting with the specification of the approaches in question, we compare published empirical studies of market power with respect to the choice of the applied approach, functional forms, estimation methods and derived estimates of the degree of market power. Thereafter, we use our framework to evaluate several structural models based on PTA and GIM to measure oligopsony power in the Ukrainian dairy industry. The PTA-based results suggest that the estimated parameters...

  3. Integrating multiple programme and policy approaches to hepatitis C prevention and care for injection drug users: a comprehensive approach.

    Science.gov (United States)

    Birkhead, Guthrie S; Klein, Susan J; Candelas, Alma R; O'Connell, Daniel A; Rothman, Jeffrey R; Feldman, Ira S; Tsui, Dennis S; Cotroneo, Richard A; Flanigan, Colleen A

    2007-10-01

    New York State is home to an estimated 230,000 individuals chronically infected with hepatitis C virus (HCV) and roughly 171,500 active injection drug users (IDUs). HCV/HIV co-infection is common and models of service delivery that effectively meet IDUs' needs are required. An HCV strategic plan has stressed integration. HCV prevention and care are integrated within health and human service settings, including HIV/AIDS organisations and drug treatment programmes. Other measures that support comprehensive HCV services for IDUs include reimbursement, clinical guidelines, training and HCV prevention education. Community and provider collaborations inform programme and policy development. IDUs access 5 million syringes annually through harm reduction/syringe exchange programmes (SEPs) and a statewide syringe access programme. Declines in HCV prevalence amongst IDUs in New York City coincided with improved syringe availability. New models of care successfully link IDUs at SEPs and in drug treatment to health care. Over 7000 Medicaid recipients with HCV/HIV co-infection had health care encounters related to their HCV in a 12-month period, and 10,547 claims for HCV-related medications were paid. The success rate of transitional case management referrals to drug treatment is over 90%. Training and clinical guidelines promote provider knowledge about HCV and contribute to quality HCV care for IDUs. Chart reviews of 2570 patients with HIV in 2004 documented HCV status 97.4% of the time, overall, in various settings. New HCV surveillance systems are operational. Despite this progress, significant challenges remain. A comprehensive, public health approach, using multiple strategies across systems and mobilizing multiple sectors, can enhance IDUs' access to HCV prevention and care. A holistic approach with integrated services, including for HCV/HIV co-infected IDUs, is needed. Leadership, collaboration and resources are essential.

  4. New approach to equipment quality evaluation method with distinct functions

    Directory of Open Access Journals (Sweden)

    Milisavljević Vladimir M.

    2016-01-01

    Full Text Available The paper presents a new approach for improving a method for the quality evaluation and selection of equipment (devices and machinery) by applying distinct functions. Quality evaluation and selection of devices and machinery is a multi-criteria problem which involves the consideration of numerous parameters of various origins. The original selection method with distinct functions is based on technical parameters with arbitrary evaluation of each parameter's importance (weighting). The improvement of this method presented in this paper addresses the issue of parameter weighting by using the Delphi method. Finally, two case studies are provided, covering the quality evaluation of standard heating boilers and of load-haul-dump (LHD) machines, to demonstrate the applicability of this approach. The Analytic Hierarchy Process (AHP) is used as a control method.
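
    The kind of multi-criteria scoring described above can be sketched as a weighted sum over min-max-normalized criteria, with weights such as Delphi-averaged expert judgments. The criteria, ratings, and weights below are hypothetical, and the paper's distinct functions and AHP control calculation are not reproduced:

```python
import numpy as np

def weighted_quality_scores(ratings, weights):
    """Score alternatives by a weighted sum of min-max-normalized criteria.

    ratings: (n_alternatives, n_criteria) raw criterion values,
             oriented so that larger is always better (invert cost criteria)
    weights: (n_criteria,) importance weights, e.g. averaged expert
             judgments from a Delphi round (hypothetical here)
    """
    ratings = np.asarray(ratings, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                          # normalize weights to sum to 1
    lo, hi = ratings.min(axis=0), ratings.max(axis=0)
    norm = (ratings - lo) / np.where(hi > lo, hi - lo, 1.0)  # each column to [0, 1]
    return norm @ w                          # one score per alternative

# three hypothetical boilers rated on efficiency, ease of maintenance, reliability
scores = weighted_quality_scores([[0.90, 7, 8], [0.85, 9, 6], [0.92, 5, 9]],
                                 weights=[0.5, 0.2, 0.3])
best = int(np.argmax(scores))
```

    A control method such as AHP would then be run on the same alternatives to cross-check the ranking.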

  5. A hybrid approach to simulate multiple photon scattering in X-ray imaging

    International Nuclear Information System (INIS)

    Freud, N.; Letang, J.-M.; Babot, D.

    2005-01-01

    A hybrid simulation approach is proposed to compute the contribution of scattered radiation in X- or γ-ray imaging. This approach takes advantage of the complementarity between the deterministic and probabilistic simulation methods. The proposed hybrid method consists of two stages. Firstly, a set of scattering events occurring in the inspected object is determined by means of classical Monte Carlo simulation. Secondly, this set of scattering events is used as a starting point to compute the energy imparted to the detector, with a deterministic algorithm based on a 'forced detection' scheme. For each scattering event, the probability for the scattered photon to reach each pixel of the detector is calculated using well-known physical models (form factor and incoherent scattering function approximations, in the case of Rayleigh and Compton scattering respectively). The results of the proposed hybrid approach are compared to those obtained with the Monte Carlo method alone (Geant4 code) and found to be in excellent agreement. The convergence of the results when the number of scattering events increases is studied. The proposed hybrid approach makes it possible to simulate the contribution of each type (Compton or Rayleigh) and order of scattering, separately or together, with a single PC, within reasonable computation times (from minutes to hours, depending on the number of pixels of the detector). This constitutes a substantial benefit, compared to classical simulation methods (Monte Carlo or deterministic approaches), which usually require a parallel computing architecture to obtain comparable results
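
    The deterministic 'forced detection' stage can be illustrated with a strongly simplified sketch: every recorded scattering event deposits weight on each detector pixel in proportion to the scattering probability toward that pixel, the solid angle the pixel subtends, and the attenuation along the exit path. The homogeneous medium and isotropic phase function here are illustrative assumptions standing in for the form-factor and incoherent-scattering-function models used in the paper:

```python
import numpy as np

def forced_detection_image(events, pixels, mu, pixel_area):
    """Deterministic 'forced detection' pass over Monte Carlo scatter events.

    events     : (n, 4) array of (x, y, z, photon_weight) scattering sites
    pixels     : (m, 3) array of detector pixel centres
    mu         : linear attenuation coefficient of a homogeneous medium
                 (simplifying assumption)
    pixel_area : area of one pixel, used in a small-pixel solid-angle formula
    Scattering is taken as isotropic here instead of the Rayleigh/Compton
    differential cross-sections of the actual method.
    """
    image = np.zeros(len(pixels))
    for x, y, z, w in events:
        d = pixels - np.array([x, y, z])     # vectors from event to each pixel
        r2 = (d ** 2).sum(axis=1)
        r = np.sqrt(r2)
        solid_angle = pixel_area / r2        # small-pixel approximation
        p_dir = 1.0 / (4.0 * np.pi)          # isotropic phase function
        image += w * p_dir * solid_angle * np.exp(-mu * r)
    return image

# one unit-weight event at the origin, one pixel one unit away, no attenuation
img = forced_detection_image(np.array([[0.0, 0.0, 0.0, 1.0]]),
                             np.array([[0.0, 0.0, 1.0]]),
                             mu=0.0, pixel_area=0.01)
```

    Scoring every pixel from every event is what removes the variance a detector of small solid angle would suffer in an analogue Monte Carlo run.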

  6. A hybrid approach to simulate multiple photon scattering in X-ray imaging

    Energy Technology Data Exchange (ETDEWEB)

    Freud, N. [CNDRI, Laboratory of Nondestructive Testing using Ionizing Radiations, INSA-Lyon Scientific and Technical University, Bat. Antoine de Saint-Exupery, 20, avenue Albert Einstein, 69621 Villeurbanne Cedex (France)]. E-mail: nicolas.freud@insa-lyon.fr; Letang, J.-M. [CNDRI, Laboratory of Nondestructive Testing using Ionizing Radiations, INSA-Lyon Scientific and Technical University, Bat. Antoine de Saint-Exupery, 20, avenue Albert Einstein, 69621 Villeurbanne Cedex (France); Babot, D. [CNDRI, Laboratory of Nondestructive Testing using Ionizing Radiations, INSA-Lyon Scientific and Technical University, Bat. Antoine de Saint-Exupery, 20, avenue Albert Einstein, 69621 Villeurbanne Cedex (France)

    2005-01-01

    A hybrid simulation approach is proposed to compute the contribution of scattered radiation in X- or γ-ray imaging. This approach takes advantage of the complementarity between the deterministic and probabilistic simulation methods. The proposed hybrid method consists of two stages. Firstly, a set of scattering events occurring in the inspected object is determined by means of classical Monte Carlo simulation. Secondly, this set of scattering events is used as a starting point to compute the energy imparted to the detector, with a deterministic algorithm based on a 'forced detection' scheme. For each scattering event, the probability for the scattered photon to reach each pixel of the detector is calculated using well-known physical models (form factor and incoherent scattering function approximations, in the case of Rayleigh and Compton scattering respectively). The results of the proposed hybrid approach are compared to those obtained with the Monte Carlo method alone (Geant4 code) and found to be in excellent agreement. The convergence of the results when the number of scattering events increases is studied. The proposed hybrid approach makes it possible to simulate the contribution of each type (Compton or Rayleigh) and order of scattering, separately or together, with a single PC, within reasonable computation times (from minutes to hours, depending on the number of pixels of the detector). This constitutes a substantial benefit, compared to classical simulation methods (Monte Carlo or deterministic approaches), which usually require a parallel computing architecture to obtain comparable results.

  7. MULTIPLE OBJECTS

    Directory of Open Access Journals (Sweden)

    A. A. Bosov

    2015-04-01

    Full Text Available Purpose. The development of complicated techniques of production and management processes, information systems, computer science, applied objects of systems theory and others requires improvement of mathematical methods and new approaches for the research of application systems. The variety and diversity of subject systems makes necessary the development of a model that generalizes classical sets and their development, sets of sets. Multiple objects, unlike sets, are constructed by multiple structures and represented by structure and content. The aim of the work is the analysis of multiple structures generating multiple objects and the further development of operations on these objects in application systems. Methodology. To achieve the objectives of the research, the structure of multiple objects is represented as a constructive trio consisting of media, signatures and axiomatics. A multiple object is determined by its structure and content, and is represented by a hybrid superposition composed of sets, multi-sets, ordered sets (lists) and heterogeneous sets (sequences, corteges). Findings. In this paper we study the properties and characteristics of the components of hybrid multiple objects of complex systems, propose assessments of their complexity, and show the rules of internal and external operations on objects of implementation. We introduce a relation of arbitrary order over multiple objects, and we define the description of functions and mappings on objects of multiple structures. Originality. In this paper we consider the development of multiple structures generating multiple objects. Practical value. The transition from abstract to subject multiple structures requires the transformation of the system and multiple objects. Transformation involves three successive stages: specification (binding to the domain), interpretation (multiple sites) and particularization (goals). The proposed approach describes systems based on hybrid sets

  8. Using the Direct Sampling Multiple-Point Geostatistical Method for Filling Gaps in Landsat 7 ETM+ SLC-off Imagery

    KAUST Repository

    Yin, Gaohong

    2016-05-01

    Since the failure of the Scan Line Corrector (SLC) instrument on Landsat 7, observable gaps occur in the acquired Landsat 7 imagery, impacting the spatial continuity of observed imagery. Due to the high geometric and radiometric accuracy provided by Landsat 7, a number of approaches have been proposed to fill the gaps. However, all proposed approaches have evident constraints for universal application. The main issues in gap-filling are an inability to reproduce continuity features such as meandering streams or roads, and to maintain the shape of small objects when filling gaps in heterogeneous areas. The aim of the study is to validate the feasibility of using the Direct Sampling multiple-point geostatistical method, which has been shown to reconstruct complicated geological structures satisfactorily, to fill Landsat 7 gaps. The Direct Sampling method uses a conditional stochastic resampling of known locations within a target image to fill gaps and can generate multiple reconstructions for one simulation case. The Direct Sampling method was examined across a range of land cover types including deserts, sparse rural areas, dense farmlands, urban areas, braided rivers and coastal areas to demonstrate its capacity to recover gaps accurately for various land cover types. The prediction accuracy of the Direct Sampling method was also compared with other gap-filling approaches, which have been previously demonstrated to offer satisfactory results, under both homogeneous and heterogeneous area situations. The results show that the Direct Sampling method provides sufficiently accurate predictions for a variety of land cover types, from homogeneous areas to heterogeneous land cover types. Likewise, it exhibits superior performance when used to fill gaps in heterogeneous land cover types without an input image, or with an input image that is temporally far from the target image, in comparison with other gap-filling approaches.
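
    The core conditional-resampling loop of Direct Sampling can be illustrated in a few lines. This is a deliberately simplified sketch (single band, fixed patch size, mean absolute difference as the mismatch measure); the operational algorithm adds further controls such as a maximum scan fraction and multivariate distances:

```python
import numpy as np

def direct_sampling_fill(img, n_candidates=50, patch=1, threshold=0.05, rng=None):
    """Fill NaN gaps in a 2-D array with a much simplified Direct Sampling:
    for each gap pixel, randomly visit known locations and copy the value
    whose neighbourhood best matches the gap's known neighbourhood,
    stopping early once the mismatch drops below `threshold`."""
    rng = np.random.default_rng(rng)
    p = patch
    pad = np.pad(np.asarray(img, dtype=float), p, constant_values=np.nan)
    gaps = np.argwhere(np.isnan(img))
    known = np.argwhere(~np.isnan(img))
    rng.shuffle(gaps)                       # random simulation path

    def window(i, j):                       # (2p+1)^2 patch centred at (i, j)
        return pad[i:i + 2 * p + 1, j:j + 2 * p + 1]

    for gi, gj in gaps:
        target = window(gi, gj)
        best_val, best_d = np.nan, np.inf
        for ki, kj in known[rng.integers(0, len(known), n_candidates)]:
            diff = target - window(ki, kj)  # NaNs drop out of the comparison
            ok = ~np.isnan(diff)
            d = np.abs(diff[ok]).mean() if ok.any() else 0.0
            if d < best_d:
                best_val, best_d = pad[ki + p, kj + p], d
            if best_d <= threshold:         # good enough: accept and move on
                break
        pad[gi + p, gj + p] = best_val      # filled pixels condition later ones
    return pad[p:-p, p:-p]

img = np.array([[1.0, 1.0, 1.0], [1.0, np.nan, 1.0], [1.0, 1.0, 1.0]])
filled = direct_sampling_fill(img, rng=0)
```

    Because values are copied rather than averaged, patterns such as stream meanders can be reproduced, and re-running with a different seed yields an alternative, equally plausible reconstruction.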

  9. A multiparameter chaos control method based on OGY approach

    International Nuclear Information System (INIS)

    Souza de Paula, Aline; Amorim Savi, Marcelo

    2009-01-01

    Chaos control is based on the richness of responses of chaotic behavior and may be understood as the use of tiny perturbations to stabilize an unstable periodic orbit (UPO) embedded in a chaotic attractor. Since one of these UPOs can provide better performance than others in a particular situation, chaos control can make this kind of behavior desirable in a variety of applications. The OGY method is a discrete technique that considers small perturbations promoted in the neighborhood of the desired orbit when the trajectory crosses a specific surface, such as a Poincaré section. This contribution proposes a multiparameter semi-continuous method based on the OGY approach in order to control chaotic behavior. Two different approaches are possible with this method: a coupled approach, where all control parameters influence the system dynamics even when they are not active; and an uncoupled approach, a particular case where control parameters return to the reference value when they become passive. As an application of the general formulation, a two-parameter actuation of a nonlinear pendulum is investigated, employing the coupled and uncoupled approaches. Analyses are carried out considering signals generated by numerical integration of the mathematical model using experimentally identified parameters. Results show that the procedure can be a good alternative for chaos control, since it provides more effective UPO stabilization than the classical single-parameter approach.
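
    The single-parameter OGY rule that the paper generalizes can be sketched on the logistic map: inside a small capture zone around the unstable fixed point, the parameter perturbation is chosen so that the linearized deviation vanishes at the next iterate. A minimal sketch of the textbook single-parameter discrete case, not the paper's multiparameter semi-continuous scheme or pendulum model:

```python
import numpy as np

r0 = 3.8                        # nominal logistic-map parameter (chaotic regime)
xstar = 1.0 - 1.0 / r0          # unstable fixed point embedded in the attractor
lam = 2.0 - r0                  # Jacobian df/dx of f(x) = r*x*(1-x) at xstar
w = xstar * (1.0 - xstar)       # sensitivity df/dr at xstar
dr_max = 0.05                   # perturbations stay tiny, in the OGY spirit
eps = w * dr_max / abs(lam)     # capture zone: required dr never exceeds dr_max

# assume the freely wandering chaotic orbit has just entered the capture zone
x = xstar + 0.004
for _ in range(50):
    dx = x - xstar
    # choose dr so that the linearized deviation at the next iterate vanishes:
    # dx_next ~ lam*dx + w*dr = 0
    dr = -lam * dx / w if abs(dx) < eps else 0.0
    x = (r0 + dr) * x * (1.0 - x)
```

    Outside the zone the system runs freely; ergodicity of the chaotic orbit eventually brings it close enough for the tiny perturbations to take hold.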

  10. Prediction Approach of Critical Node Based on Multiple Attribute Decision Making for Opportunistic Sensor Networks

    Directory of Open Access Journals (Sweden)

    Qifan Chen

    2016-01-01

    Full Text Available Predicting critical nodes of an Opportunistic Sensor Network (OSN) can help us not only to improve network performance but also to decrease the cost of network maintenance. However, existing ways of predicting critical nodes in static networks are not suitable for OSN. In this paper, the concepts of critical nodes, region contribution (RC), and cut-vertex in a multiregion OSN are defined. We propose an approach to predict critical nodes for OSN based on multiple attribute decision making (MADM). It uses RC to represent the dependence of regions on Ferry nodes. The TOPSIS algorithm is employed to find the Ferry node with the maximum comprehensive contribution, which is a critical node. The experimental results show that, in different scenarios, this approach can predict the critical nodes of OSN effectively.
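
    TOPSIS itself is a standard MADM algorithm: weight a normalized decision matrix, then rank alternatives by their relative closeness to the ideal and anti-ideal solutions. A sketch with hypothetical node attributes and weights (the paper's RC-based attribute set is not reproduced):

```python
import numpy as np

def topsis(decision_matrix, weights, benefit):
    """Rank alternatives with TOPSIS (closeness to the ideal solution).

    decision_matrix : (n_alternatives, n_criteria) raw attribute values
    weights         : criterion weights, summing to 1
    benefit         : boolean per criterion; True means larger is better
    Returns closeness scores in [0, 1]; the highest score ranks first.
    """
    X = np.asarray(decision_matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    norm = X / np.linalg.norm(X, axis=0)          # vector-normalize each column
    V = norm * w                                  # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)     # distance to ideal solution
    d_neg = np.linalg.norm(V - anti, axis=1)      # distance to anti-ideal
    return d_neg / (d_pos + d_neg)

# hypothetical Ferry nodes scored on region contribution, energy, and load
scores = topsis([[0.8, 0.6, 30], [0.5, 0.9, 10], [0.9, 0.4, 20]],
                weights=[0.5, 0.3, 0.2],
                benefit=[True, True, False])      # load is a cost criterion
critical = int(np.argmax(scores))
```

    The node with the highest closeness score would be flagged as the critical node in this toy setting.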

  11. Hierarchical approach to optimization of parallel matrix multiplication on large-scale platforms

    KAUST Repository

    Hasanov, Khalid

    2014-03-04

    Many state-of-the-art parallel algorithms, which are widely used in scientific applications executed on high-end computing systems, were designed in the twentieth century with relatively small-scale parallelism in mind. Indeed, while in the 1990s a system with a few hundred cores was considered a powerful supercomputer, modern top supercomputers have millions of cores. In this paper, we present a hierarchical approach to the optimization of message-passing parallel algorithms for execution on large-scale distributed-memory systems. The idea is to reduce the communication cost by introducing hierarchy and hence more parallelism in the communication scheme. We apply this approach to SUMMA, the state-of-the-art parallel algorithm for matrix-matrix multiplication, and demonstrate both theoretically and experimentally that the modified Hierarchical SUMMA significantly improves the communication cost and the overall performance on large-scale platforms.
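
    SUMMA's panel structure, over which the paper layers its communication hierarchy, can be sketched serially: the block loop below mirrors the per-step broadcasts of A- and B-panels, with all communication implicit on one process. An illustrative serial sketch, not the paper's hierarchical message-passing implementation:

```python
import numpy as np

def summa_serial(A, B, block):
    """Serial sketch of SUMMA's outer-product structure.

    SUMMA advances over the shared dimension in panels: at step k every
    process broadcasts its panel of A along process rows and its panel of B
    along process columns, then accumulates the local outer product. Here
    the panel loop is kept but everything runs on a single process.
    """
    n = A.shape[1]
    C = np.zeros((A.shape[0], B.shape[1]))
    for k in range(0, n, block):
        C += A[:, k:k + block] @ B[k:k + block, :]   # panel outer product
    return C

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 5))
B = rng.standard_normal((5, 4))
C = summa_serial(A, B, block=2)
```

    In the hierarchical variant, the p processes are arranged into groups: each broadcast is split into an inter-group stage and an intra-group stage, which is what reduces the total communication cost on very large machines.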

  12. Statistical approaches to assessing single and multiple outcome measures in dry eye therapy and diagnosis.

    Science.gov (United States)

    Tomlinson, Alan; Hair, Mario; McFadyen, Angus

    2013-10-01

    Dry eye is a multifactorial disease which would require a broad spectrum of test measures in the monitoring of its treatment and diagnosis. However, studies have typically reported improvements in individual measures with treatment. Alternative approaches involve multiple, combined outcomes being assessed by different statistical analyses. In order to assess the effect of various statistical approaches to the use of single and combined test measures in dry eye, this review reanalyzed measures from two previous studies (osmolarity, evaporation, tear turnover rate, and lipid film quality). These analyses assessed the measures as single variables within groups, pre- and post-intervention with a lubricant supplement, by creating combinations of these variables and by validating these combinations with the combined sample of data from all groups of dry eye subjects. The effectiveness of single measures and combinations in diagnosis of dry eye was also considered. Copyright © 2013. Published by Elsevier Inc.

  13. Dual worth trade-off method and its application for solving multiple criteria decision making problems

    Institute of Scientific and Technical Information of China (English)

    Feng Junwen

    2006-01-01

    To overcome the limitations of the traditional surrogate worth trade-off (SWT) method and solve the multiple criteria decision making problem more efficiently and interactively, a new method labeled dual worth trade-off (DWT) method is proposed. The DWT method dynamically uses the duality theory related to the multiple criteria decision making problem and analytic hierarchy process technique to obtain the decision maker's solution preference information and finally find the satisfactory compromise solution of the decision maker. Through the interactive process between the analyst and the decision maker, trade-off information is solicited and treated properly, the representative subset of efficient solutions and the satisfactory solution to the problem are found. The implementation procedure for the DWT method is presented. The effectiveness and applicability of the DWT method are shown by a practical case study in the field of production scheduling.

  14. Single-electron multiplication statistics as a combination of Poissonian pulse height distributions using constraint regression methods

    International Nuclear Information System (INIS)

    Ballini, J.-P.; Cazes, P.; Turpin, P.-Y.

    1976-01-01

    Analysing the histogram of anode pulse amplitudes allows a discussion of the hypothesis that has been proposed to account for the statistical processes of secondary multiplication in a photomultiplier. In an earlier work, good agreement was obtained between experimental and reconstructed spectra, assuming a first-dynode distribution composed of two Poisson distributions with distinct mean values. This first approximation led to a search for a method that could give the weights of several Poisson distributions with distinct mean values. Three methods are briefly described: classical linear regression, constraint regression (d'Esopo's method), and regression on variables subject to error. The use of these methods yields an approximation of the frequency function that represents the dispersion of the punctual mean gain around the overall first-dynode mean gain value. Comparison between this function and the one employed in the Polya distribution supports the statement that the latter is inadequate to describe the statistical process of secondary multiplication. Numerous spectra obtained with two kinds of photomultiplier working under different physical conditions have been analysed. Two points are then discussed: does the frequency function represent the dynode structure and the interdynode collection process, and is the model (the multiplication process at all dynodes but the first is Poissonian) valid whatever the photomultiplier and the utilization conditions. (Auth.)
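
    The weight-finding task can be illustrated with a small constrained-regression sketch: given a pulse-height histogram and candidate mean gains, non-negative mixture weights are fitted by least squares using multiplicative updates. This is a simple stand-in for the d'Esopo constraint-regression procedure referenced above, and the mean gains and spectrum are synthetic:

```python
import numpy as np
from math import exp, factorial

def poisson_pmf(lam, kmax):
    """Poisson probabilities P(k) for k = 0..kmax (truncated tail)."""
    return np.array([exp(-lam) * lam**k / factorial(k) for k in range(kmax + 1)])

def fit_poisson_weights(hist, lams, n_iter=5000):
    """Estimate non-negative weights of a Poisson mixture from a pulse-height
    histogram by constrained least squares.  The multiplicative update keeps
    the weights non-negative at every step."""
    A = np.column_stack([poisson_pmf(l, len(hist) - 1) for l in lams])
    w = np.ones(A.shape[1]) / A.shape[1]
    for _ in range(n_iter):
        w *= (A.T @ hist) / (A.T @ (A @ w) + 1e-12)
    return w / w.sum()

# synthetic spectrum: 70/30 mixture of mean gains 2 and 6
true_hist = 0.7 * poisson_pmf(2.0, 15) + 0.3 * poisson_pmf(6.0, 15)
weights = fit_poisson_weights(true_hist, lams=[2.0, 6.0])
```

    With noisy experimental histograms the recovered weights would carry uncertainty, which is where the comparison of the three regression variants in the paper becomes relevant.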

  15. Method and Excel VBA Algorithm for Modeling Master Recession Curve Using Trigonometry Approach.

    Science.gov (United States)

    Posavec, Kristijan; Giacopetti, Marco; Materazzi, Marco; Birk, Steffen

    2017-11-01

    A new method was developed and implemented into an Excel Visual Basic for Applications (VBAs) algorithm utilizing trigonometry laws in an innovative way to overlap recession segments of time series and create master recession curves (MRCs). Based on a trigonometry approach, the algorithm horizontally translates succeeding recession segments of time series, placing their vertex, that is, the highest recorded value of each recession segment, directly onto the appropriate connection line defined by measurement points of a preceding recession segment. The new method and algorithm continues the development of methods and algorithms for the generation of MRC, where the first published method was based on a multiple linear/nonlinear regression model approach (Posavec et al. 2006). The newly developed trigonometry-based method was tested on real case study examples and compared with the previously published multiple linear/nonlinear regression model-based method. The results show that in some cases, that is, for some time series, the trigonometry-based method creates narrower overlaps of the recession segments, resulting in higher coefficients of determination R², while in other cases the multiple linear/nonlinear regression model-based method remains superior. The Excel VBA algorithm for modeling MRC using the trigonometry approach is implemented into a spreadsheet tool (MRCTools v3.0 written by and available from Kristijan Posavec, Zagreb, Croatia) containing the previously published VBA algorithms for MRC generation and separation. All algorithms within the MRCTools v3.0 are open access and available free of charge, supporting the idea of running science on available, open, and free of charge software. © 2017, National Ground Water Association.
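
    The horizontal-translation idea can be sketched as follows: each succeeding segment is shifted in time so that its vertex lands on the connection line (linear interpolation between measurement points) of the curve built so far. A sketch under those assumptions, not the published VBA implementation:

```python
import numpy as np

def build_mrc(segments):
    """Assemble a master recession curve by horizontally translating each
    recession segment so its vertex (highest value) lands on the curve
    defined by the preceding segments.

    segments: list of (t, q) arrays, each with q strictly decreasing,
              ordered from highest starting level to lowest.
    """
    t0, q0 = segments[0]
    master_t, master_q = list(t0 - t0[0]), list(q0)
    for t, q in segments[1:]:
        # time at which the current master reaches the new vertex level q[0];
        # np.interp needs increasing x, so interpolate on reversed arrays
        t_hit = np.interp(q[0], master_q[::-1], master_t[::-1])
        shift = t_hit - t[0]                  # horizontal translation
        master_t.extend(t + shift)
        master_q.extend(q)
    order = np.argsort(master_t)
    return np.array(master_t)[order], np.array(master_q)[order]

# two segments sampled from the same exponential recession, recorded at
# different (hypothetical) times; the MRC should merge them consistently
t1 = np.array([0.0, 1.0, 2.0]); q1 = np.exp(-t1)
t2 = np.array([10.0, 11.0]);    q2 = np.exp(-np.array([0.5, 1.5]))
t_m, q_m = build_mrc([(t1, q1), (t2, q2)])
```

    Because the segments come from the same underlying recession, the merged curve stays monotonically decreasing.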

  16. MSblender: A probabilistic approach for integrating peptide identifications from multiple database search engines.

    Science.gov (United States)

    Kwon, Taejoon; Choi, Hyungwon; Vogel, Christine; Nesvizhskii, Alexey I; Marcotte, Edward M

    2011-07-01

    Shotgun proteomics using mass spectrometry is a powerful method for protein identification but suffers from limited sensitivity in complex samples. Integrating peptide identifications from multiple database search engines is a promising strategy to increase the number of peptide identifications and reduce the volume of unassigned tandem mass spectra. Existing methods pool statistical significance scores such as p-values or posterior probabilities of peptide-spectrum matches (PSMs) from multiple search engines after high scoring peptides have been assigned to spectra, but these methods lack reliable control of identification error rates as data are integrated from different search engines. We developed a statistically coherent method for integrative analysis, termed MSblender. MSblender converts raw search scores from search engines into a probability score for every possible PSM and properly accounts for the correlation between search scores. The method reliably estimates false discovery rates and identifies more PSMs than any single search engine at the same false discovery rate. Increased identifications increment spectral counts for most proteins and allow quantification of proteins that would not have been quantified by individual search engines. We also demonstrate that enhanced quantification contributes to improved sensitivity in differential expression analyses.
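
    The kind of error control described above can be illustrated with the standard posterior-based FDR estimate: among PSMs accepted above a probability threshold, the expected fraction of incorrect identifications is the mean of (1 − p). This is a generic sketch of the principle, not MSblender's actual scoring code:

```python
import numpy as np

def fdr_at_threshold(posteriors, threshold):
    """Estimate the false discovery rate for PSMs accepted above a
    posterior-probability threshold.  Each posterior p is the probability
    that the PSM is correct, so the expected number of false discoveries
    in the accepted set is the sum of (1 - p); dividing by the set size
    gives the FDR estimate.  Returns (fdr, n_accepted)."""
    p = np.sort(np.asarray(posteriors, dtype=float))[::-1]
    accepted = p[p >= threshold]
    if accepted.size == 0:
        return 0.0, 0
    return float(np.mean(1.0 - accepted)), int(accepted.size)
```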

  17. A differential transformation approach for solving functional differential equations with multiple delays

    Science.gov (United States)

    Rebenda, Josef; Šmarda, Zdeněk

    2017-07-01

    In the paper an efficient semi-analytical approach based on the method of steps and the differential transformation is proposed for numerical approximation of solutions of functional differential models of delayed and neutral type on a finite interval of arbitrary length, including models with several constant delays. Algorithms for both commensurate and non-commensurate delays are described, and applications are shown in examples. Validity and efficiency of the presented algorithms are compared numerically with the variational iteration method, the Adomian decomposition method and the polynomial least squares method. The Matlab package DDE23 is used to produce reference numerical values.
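
    As a minimal illustration of combining the method of steps with the differential transformation, consider the single-delay equation y'(t) = y(t − 1) with history y(t) = 1 for t ≤ 0 (an example chosen here for clarity, not taken from the paper). On each unit interval the solution is a polynomial in the local variable s = t − n, and the DT recurrence Y(k+1) = P(k)/(k+1) integrates the known delayed polynomial P:

```python
import numpy as np

def dtm_steps_delay(n_steps, order=10):
    """Solve y'(t) = y(t - 1), y(t) = 1 for t <= 0, by the method of steps
    combined with the differential transformation.  On each unit interval
    the solution is a polynomial; the DT recurrence Y(k+1) = P(k)/(k+1)
    integrates the delayed polynomial P known from the previous step.
    Returns one coefficient array per interval (local variable s = t - n)."""
    prev = np.zeros(order + 1)
    prev[0] = 1.0                      # history: y(t) = 1 on [-1, 0]
    polys = []
    for _ in range(n_steps):
        cur = np.zeros(order + 1)
        cur[0] = prev.sum()            # continuity: value at end of last step
        cur[1:] = prev[:-1] / np.arange(1, order + 1)   # DT integration
        polys.append(cur)
        prev = cur
    return polys
```

    On [0, 1] this yields y = 1 + s, and on [1, 2] it yields y = 2 + s + s²/2, matching the exact step-by-step solution.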

  18. Why is the Arkavathy River drying? A multiple hypothesis approach in a data scarce region

    Science.gov (United States)

    Srinivasan, V.; Thompson, S.; Madhyastha, K.; Penny, G.; Jeremiah, K.; Lele, S.

    2015-01-01

    The developing world faces unique challenges in achieving water security as it is disproportionately exposed to stressors such as climate change while also undergoing demographic growth, agricultural intensification and industrialization. Investigative approaches are needed that can inform sound policy development and planning to address the water security challenge in the context of data scarcity. We investigated the "predictions under change" problem in the Thippagondanahalli (TG Halli) catchment of the Arkavathy sub-basin in South India. River inflows into the TG Halli reservoir have declined since the 1970s, and the reservoir is currently operating at only 20% of its built capacity. The mechanisms responsible for the drying of the river are not understood, resulting in uncoordinated and potentially counter-productive management responses. The objective of this study was to investigate potential explanations of the drying trend and thus obtain predictive insight. We used a multiple working hypothesis approach to investigate the decline in inflow into TG Halli reservoir. Five hypotheses were tested using data from field surveys and reliable secondary sources: (1) changes in rainfall amount, timing and storm intensity, (2) rising temperatures, (3) increased groundwater extraction, (4) expansion of eucalyptus plantations, and (5) increased fragmentation of the river channel. Our results indicate that proximate anthropogenic drivers of change such as groundwater pumping, expansion of eucalyptus plantations, and to a lesser extent channel fragmentation, are much more likely to have caused the decline in surface flows in the TG Halli catchment than changing climate. The case study shows that direct human interventions play a significant role in altering the hydrology of watersheds. The multiple working hypotheses approach presents a systematic way to quantify the relative contributions of anthropogenic drivers to hydrologic change. The approach not only yields a

  19. A Method to Construct Plasma with Nonlinear Density Enhancement Effect in Multiple Internal Inductively Coupled Plasmas

    International Nuclear Information System (INIS)

    Chen Zhipeng; Li Hong; Liu Qiuyan; Luo Chen; Xie Jinlin; Liu Wandong

    2011-01-01

    A method is proposed to build up plasma based on a nonlinear enhancement phenomenon of the plasma density when discharging with multiple internal antennas simultaneously. It turns out that the plasma density under multiple sources is higher than the linear summation of the densities under each individual source. This effect helps to counteract the fast exponential decay of plasma density in a single internal inductively coupled plasma source and to generate a larger-area plasma with multiple internal inductively coupled plasma sources. After a careful experimental study of the balance between the enhancement and the decay of plasma density, a plasma is built up by four sources, which proves the feasibility of this method. According to the method, more sources and a more intensive enhancement effect can be employed to further build up a high-density, large-area plasma for different applications. (low temperature plasma)

  20. Cumulative health risk assessment: integrated approaches for multiple contaminants, exposures, and effects

    International Nuclear Information System (INIS)

    Rice, Glenn; Teuschler, Linda; MacDonel, Margaret; Butler, Jim; Finster, Molly; Hertzberg, Rick; Harou, Lynne

    2007-01-01

    Available in abstract form only. Full text of publication follows: As information about environmental contamination has increased in recent years, so has public interest in the combined effects of multiple contaminants. This interest has been highlighted by recent tragedies such as the World Trade Center disaster and Hurricane Katrina. In fact, assessing multiple contaminants, exposures, and effects has long been an issue for contaminated sites, including U.S. Department of Energy (DOE) legacy waste sites. Local citizens have explicitly asked the federal government to account for cumulative risks, with contaminants moving offsite via groundwater flow, surface runoff, and air dispersal being a common emphasis. Multiple exposures range from ingestion and inhalation to dermal absorption and external gamma irradiation. Three types of concerns can lead to cumulative assessments: (1) specific sources or releases - e.g., industrial facilities or accidental discharges; (2) contaminant levels - in environmental media or human tissues; and (3) elevated rates of disease - e.g., asthma or cancer. The specific initiator frames the assessment strategy, including a determination of appropriate models to be used. Approaches are being developed to better integrate a variety of data, extending from environmental to internal co-location of contaminants and combined effects, to support more practical assessments of cumulative health risks. (authors)

  1. Use of ultrasonic array method for positioning multiple partial discharge sources in transformer oil.

    Science.gov (United States)

    Xie, Qing; Tao, Junhan; Wang, Yongqiang; Geng, Jianghai; Cheng, Shuyi; Lü, Fangcheng

    2014-08-01

    Fast and accurate positioning of partial discharge (PD) sources in transformer oil is very important for the safe, stable operation of power systems because it allows timely elimination of insulation faults. There is usually more than one PD source once an insulation fault occurs in the transformer oil. This study, which has both theoretical and practical significance, proposes a method of identifying multiple PD sources in transformer oil. The method combines the two-sided correlation transformation algorithm for broadband signal focusing with the modified Gerschgorin disk estimator. The multiple signal classification (MUSIC) method is used to determine the directions of arrival of signals from multiple PD sources. The ultrasonic array positioning method is based on multi-platform direction finding and global optimization searching. Both a 4 × 4 square planar ultrasonic sensor array and an ultrasonic array detection platform are built to test the method of identifying and positioning multiple PD sources. The obtained results verify the validity and the engineering practicability of this method.
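
    A generic narrowband MUSIC sketch for a uniform linear array conveys the direction-finding step; the broadband two-sided correlation focusing and the Gerschgorin-disk source-number estimation used in the paper are not reproduced here:

```python
import numpy as np

def music_doa(snapshots, n_sources, d_over_lambda=0.5):
    """Direction-of-arrival estimation with MUSIC (multiple signal
    classification) for a uniform linear array.  `snapshots` is an
    (n_sensors, n_samples) complex array of sensor outputs.  Returns the
    angles in degrees of the strongest pseudospectrum peaks."""
    m = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]  # sample covariance
    _, vecs = np.linalg.eigh(R)                  # eigenvalues ascending
    En = vecs[:, : m - n_sources]                # noise subspace
    grid = np.linspace(-90.0, 90.0, 3601)
    k = np.arange(m)
    spec = np.empty(grid.size)
    for i, theta in enumerate(grid):
        a = np.exp(2j * np.pi * d_over_lambda * k * np.sin(np.radians(theta)))
        # pseudospectrum: reciprocal of projection onto the noise subspace
        spec[i] = 1.0 / max(np.linalg.norm(En.conj().T @ a) ** 2, 1e-12)
    peaks = [i for i in range(1, grid.size - 1)
             if spec[i] >= spec[i - 1] and spec[i] >= spec[i + 1]]
    peaks.sort(key=lambda i: spec[i], reverse=True)
    return sorted(float(grid[i]) for i in peaks[:n_sources])
```

    With two simulated sources impinging on an 8-element half-wavelength array, the two largest pseudospectrum peaks recover the source bearings.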

  2. A multiple model approach to respiratory motion prediction for real-time IGRT

    International Nuclear Information System (INIS)

    Putra, Devi; Haas, Olivier C L; Burnham, Keith J; Mills, John A

    2008-01-01

    Respiration induces significant movement of tumours in the vicinity of thoracic and abdominal structures. Real-time image-guided radiotherapy (IGRT) aims to adapt radiation delivery to tumour motion during irradiation. One of the main problems for achieving this objective is the presence of a time lag between the acquisition of tumour position and the radiation delivery. Such time lag causes significant beam positioning errors and affects the dose coverage. A method to solve this problem is to employ an algorithm that is able to predict future tumour positions from available tumour position measurements. This paper presents a multiple model approach to respiratory-induced tumour motion prediction using the interacting multiple model (IMM) filter. A combination of two models, constant velocity (CV) and constant acceleration (CA), is used to capture respiratory-induced tumour motion. A Kalman filter is designed for each of the local models and the IMM filter is applied to combine the predictions of these Kalman filters for obtaining the predicted tumour position. The IMM filter, like the Kalman filter, is a recursive algorithm that is suitable for real-time applications. In addition, this paper proposes a confidence interval (CI) criterion to evaluate the performance of tumour motion prediction algorithms for IGRT. The proposed CI criterion provides a relevant measure for the prediction performance in terms of clinical applications and can be used to specify the margin to accommodate prediction errors. The prediction performance of the IMM filter has been evaluated using 110 traces of 4-minute free-breathing motion collected from 24 lung-cancer patients. The simulation study was carried out for prediction times of 0.1-0.6 s with sampling rates of 3, 5 and 10 Hz. It was found that the prediction of the IMM filter was consistently better than the prediction of the Kalman filter with the CV or CA model. There was no significant difference of prediction errors for the
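
    The CV/CA mixing described above can be sketched as a textbook IMM filter on a 1-D state. The shared state vector, transition matrices, and noise settings below are illustrative simplifications, not the paper's tuned configuration:

```python
import numpy as np

def imm_cv_ca(zs, dt, q=1e-2, r=1e-2):
    """Interacting multiple model (IMM) filter mixing a constant-velocity
    (CV) and a constant-acceleration (CA) model.  For simplicity both
    models share the 1-D state [position, velocity, acceleration]; the CV
    transition simply zeroes the acceleration.  Returns the fused filtered
    positions and the final model probabilities."""
    F = [np.array([[1, dt, 0], [0, 1, 0], [0, 0, 0]], float),             # CV
         np.array([[1, dt, 0.5 * dt**2], [0, 1, dt], [0, 0, 1]], float)]  # CA
    H = np.array([[1.0, 0.0, 0.0]])
    Q, R = q * np.eye(3), np.array([[r]])
    trans = np.array([[0.95, 0.05], [0.05, 0.95]])  # model switching matrix
    xs = [np.zeros(3), np.zeros(3)]
    Ps = [np.eye(3), np.eye(3)]
    mu = np.array([0.5, 0.5])
    out = []
    for z in zs:
        # 1) interaction: mix the model-conditioned estimates
        c = trans.T @ mu
        xm, Pm = [], []
        for j in range(2):
            w = trans[:, j] * mu / c[j]
            x0 = w[0] * xs[0] + w[1] * xs[1]
            P0 = sum(w[i] * (Ps[i] + np.outer(xs[i] - x0, xs[i] - x0))
                     for i in range(2))
            xm.append(x0)
            Pm.append(P0)
        # 2) model-conditioned Kalman predict/update with likelihoods
        liks = np.empty(2)
        for j in range(2):
            x = F[j] @ xm[j]
            P = F[j] @ Pm[j] @ F[j].T + Q
            S = (H @ P @ H.T + R).item()
            K = (P @ H.T).ravel() / S
            resid = z - (H @ x).item()
            xs[j] = x + K * resid
            Ps[j] = (np.eye(3) - np.outer(K, H.ravel())) @ P
            liks[j] = np.exp(-0.5 * resid**2 / S) / np.sqrt(2 * np.pi * S)
        # 3) model-probability update and fused output
        mu = c * liks
        mu /= mu.sum()
        out.append(float(mu[0] * xs[0][0] + mu[1] * xs[1][0]))
    return np.array(out), mu
```

    On a purely accelerating trajectory the model probability shifts towards the CA model, which is exactly the adaptive behaviour the IMM filter is used for.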

  3. An ensemble-based dynamic Bayesian averaging approach for discharge simulations using multiple global precipitation products and hydrological models

    Science.gov (United States)

    Qi, Wei; Liu, Junguo; Yang, Hong; Sweetapple, Chris

    2018-03-01

    Global precipitation products are very important datasets in flow simulations, especially in poorly gauged regions. Uncertainties resulting from precipitation products, hydrological models and their combinations vary with time and data magnitude, and undermine their application to flow simulations. However, previous studies have not quantified these uncertainties individually and explicitly. This study developed an ensemble-based dynamic Bayesian averaging approach (e-Bay) for deterministic discharge simulations using multiple global precipitation products and hydrological models. In this approach, the joint probability of precipitation products and hydrological models being correct is quantified based on uncertainties in maximum and mean estimation, posterior probability is quantified as functions of the magnitude and timing of discharges, and the law of total probability is implemented to calculate expected discharges. Six global fine-resolution precipitation products and two hydrological models of different complexities are included in an illustrative application. e-Bay can effectively quantify uncertainties and therefore generate better deterministic discharges than traditional approaches (weighted average methods with equal and varying weights and maximum likelihood approach). The mean Nash-Sutcliffe Efficiency values of e-Bay are up to 0.97 and 0.85 in training and validation periods respectively, which are at least 0.06 and 0.13 higher than traditional approaches. In addition, with increased training data, assessment criteria values of e-Bay show smaller fluctuations than traditional approaches and its performance becomes outstanding. The proposed e-Bay approach bridges the gap between global precipitation products and their pragmatic applications to discharge simulations, and is beneficial to water resources management in ungauged or poorly gauged regions across the world.
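
    The weighting idea can be illustrated with a static Bayesian-model-averaging sketch: each precipitation-product/hydrological-model combination is weighted by its Gaussian likelihood against training observations. The paper's e-Bay approach additionally makes the posterior weights vary with discharge magnitude and timing, which is omitted here:

```python
import numpy as np

def bma_discharge(sims, obs_train):
    """Weight ensemble members (precipitation-product/model combinations)
    by their likelihood against training observations and return the
    weighted-average ('expected') discharge series.

    sims: (n_members, n_times) simulated discharges; obs_train: observed
    discharges for the first len(obs_train) time steps.  Each member's
    error is modelled as i.i.d. Gaussian with variance estimated from its
    own residuals."""
    sims = np.asarray(sims, float)
    obs = np.asarray(obs_train, float)
    n = len(obs)
    sse = ((sims[:, :n] - obs) ** 2).sum(axis=1)
    sigma2 = sse / n
    # Gaussian log-likelihood of each member given the training data
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)
    w = np.exp(loglik - loglik.max())   # stabilised posterior weights
    w /= w.sum()
    return w, w @ sims
```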

  4. Resampling Approach for Determination of the Method for Reference Interval Calculation in Clinical Laboratory Practice

    Science.gov (United States)

    Pavlov, Igor Y.; Wilson, Andrew R.; Delgado, Julio C.

    2010-01-01

    Reference intervals (RI) play a key role in clinical interpretation of laboratory test results. Numerous articles are devoted to analyzing and discussing various methods of RI determination. The two most widely used approaches are the parametric method, which assumes data normality, and a nonparametric, rank-based procedure. The decision about which method to use is usually made arbitrarily. The goal of this study was to demonstrate that using a resampling approach for the comparison of RI determination techniques could help researchers select the right procedure. Three methods of RI calculation—parametric, transformed parametric, and quantile-based bootstrapping—were applied to multiple random samples drawn from 81 observations of complement factor B and from a computer-simulated normally distributed population. It was shown that differences in RI between legitimate methods could be 20% or even more. The transformed parametric method was found to be the best method for the calculation of RI of non-normally distributed factor B estimations, producing an unbiased RI and the lowest confidence limits and interquartile ranges. For a simulated Gaussian population, parametric calculations, as expected, were the best; quantile-based bootstrapping produced biased results at low sample sizes, and the transformed parametric method generated heavily biased RI. The resampling approach could help compare different RI calculation methods. An algorithm showing a resampling procedure for choosing the appropriate method for RI calculations is included. PMID:20554803
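
    A minimal sketch of the comparison, assuming a 95% interval: the parametric RI uses mean ± 1.96 SD, while the quantile-based bootstrap averages the resampled 2.5th and 97.5th percentiles (the transformed-parametric variant is omitted for brevity):

```python
import numpy as np

def reference_intervals(values, n_boot=2000, seed=0):
    """Compute a 95% reference interval two ways on the same sample:
    parametric (assumes normality: mean +/- 1.96 SD) and quantile-based
    bootstrap (mean of resampled 2.5th and 97.5th percentiles).
    Comparing the two on resampled data is the kind of check the paper
    recommends before choosing a method."""
    x = np.asarray(values, float)
    m, s = x.mean(), x.std(ddof=1)
    para = (m - 1.96 * s, m + 1.96 * s)
    rng = np.random.default_rng(seed)
    lo, hi = [], []
    for _ in range(n_boot):
        b = rng.choice(x, size=x.size, replace=True)
        lo.append(np.percentile(b, 2.5))
        hi.append(np.percentile(b, 97.5))
    boot = (float(np.mean(lo)), float(np.mean(hi)))
    return para, boot
```

    For normally distributed data the two intervals should nearly coincide; a large discrepancy signals that the parametric assumption is doing real work.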

  5. Multiple player tracking in sports video: a dual-mode two-way bayesian inference approach with progressive observation modeling.

    Science.gov (United States)

    Xing, Junliang; Ai, Haizhou; Liu, Liwei; Lao, Shihong

    2011-06-01

    Multiple object tracking (MOT) is a very challenging task yet of fundamental importance for many practical applications. In this paper, we focus on the problem of tracking multiple players in sports video which is even more difficult due to the abrupt movements of players and their complex interactions. To handle the difficulties in this problem, we present a new MOT algorithm which contributes both in the observation modeling level and in the tracking strategy level. For the observation modeling, we develop a progressive observation modeling process that is able to provide strong tracking observations and greatly facilitate the tracking task. For the tracking strategy, we propose a dual-mode two-way Bayesian inference approach which dynamically switches between an offline general model and an online dedicated model to deal with single isolated object tracking and multiple occluded object tracking integrally by forward filtering and backward smoothing. Extensive experiments on different kinds of sports videos, including football, basketball, as well as hockey, demonstrate the effectiveness and efficiency of the proposed method.

  6. Hybrid Optimization-Based Approach for Multiple Intelligent Vehicles Requests Allocation

    Directory of Open Access Journals (Sweden)

    Ahmed Hussein

    2018-01-01

    Full Text Available Self-driving cars have attracted significant attention during the last few years, and rapid technological advances have reached the point of putting a number of automated vehicles on the roads. Therefore, the necessity of cooperative driving for these automated vehicles is increasing rapidly. One of the main issues in the cooperative driving world is the Multirobot Task Allocation (MRTA) problem. This paper addresses the MRTA problem, specifically the problem of allocating vehicles to requests. The objective is to introduce a hybrid optimization-based approach to solve the problem of multiple intelligent vehicles requests allocation as an instance of the MRTA problem, finding not only a feasible solution but also an optimized one as per the objective function. Several test scenarios were implemented in order to evaluate the efficiency of the proposed approach. These scenarios are based on well-known benchmarks; thus a comparative study is conducted between the obtained results and the suboptimal results. The analysis of the experimental results shows that the proposed approach successfully handled various scenarios, especially with increasing numbers of vehicles and requests, demonstrating its efficiency and performance.

  7. Visualization of a City Sustainability Index (CSI): Towards Transdisciplinary Approaches Involving Multiple Stakeholders

    Directory of Open Access Journals (Sweden)

    Koichiro Mori

    2015-09-01

    Full Text Available We have developed a visualized 3-D model of a City Sustainability Index (CSI) based on our original concept of city sustainability, in which a sustainable city is defined as one that maximizes socio-economic benefits while meeting constraint conditions of the environment and socio-economic equity on a permanent basis. The CSI is based on constraint and maximization indicators. Constraint indicators assess whether a city meets the necessary minimum conditions for city sustainability. Maximization indicators measure the benefits that a city generates in socio-economic aspects. When used in the policy-making process, the choice of constraint indicators should be implemented using a top-down approach. In contrast, a bottom-up approach is more suitable for defining maximization indicators because this technique involves multiple stakeholders (in a transdisciplinary approach). Using different materials of various colors, shapes, and sizes, we designed and constructed the visualized physical model of the CSI to help people evaluate and compare the performance of different cities in terms of sustainability. The visualized model of the CSI can convey complicated information in a simple and straightforward manner to diverse stakeholders, so that the sustainability analysis can be understood intuitively by ordinary citizens as well as experts. Thus, the CSI model helps stakeholders develop critical thinking about city sustainability and enables policymakers to make informed decisions for sustainability through a transdisciplinary approach.

  8. Integrated health messaging for multiple neglected zoonoses: Approaches, challenges and opportunities in Morocco.

    Science.gov (United States)

    Ducrotoy, M J; Yahyaoui Azami, H; El Berbri, I; Bouslikhane, M; Fassi Fihri, O; Boué, F; Petavy, A F; Dakkak, A; Welburn, S; Bardosh, K L

    2015-12-01

    Integrating the control of multiple neglected zoonoses at the community level holds great potential, but critical data are missing to inform the design and implementation of different interventions. In this paper we present an evaluation of an integrated health messaging intervention, using PowerPoint presentations, for five bacterial (brucellosis and bovine tuberculosis) and dog-associated (rabies, cystic echinococcosis and leishmaniasis) zoonotic diseases in Sidi Kacem Province, northwest Morocco. Conducted by veterinary and epidemiology students between 2013 and 2014, this followed a process-based approach that encouraged sequential adaptation of images, key messages, and delivery strategies using auto-evaluation and end-user feedback. We describe the challenges and opportunities of this approach, reflecting on who was targeted, how education was conducted, and what tools and approaches were used. Our results showed that: (1) replacing words with local pictures and using "hands-on" activities improved receptivity; (2) information "overload" easily occurred when disease transmission pathways did not overlap; (3) access and receptivity at schools was greater than at the community level; and (4) piggy-backing on high-priority diseases like rabies offered an important avenue to increase knowledge of other zoonoses. We conclude by discussing the merits of incorporating our validated education approach into the school curriculum in order to influence long-term behaviour change. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. NEWTONIAN IMPERIALIST COMPETITIVE APPROACH TO OPTIMIZING OBSERVATION OF MULTIPLE TARGET POINTS IN MULTISENSOR SURVEILLANCE SYSTEMS

    Directory of Open Access Journals (Sweden)

    A. Afghan-Toloee

    2013-09-01

    Full Text Available The problem of specifying the minimum number of sensors to deploy in a certain area to face multiple targets has been widely studied in the literature. In this paper, we address the multi-sensor deployment problem (MDP), which can be stated as minimizing the cost required to cover the multiple target points in the area. We propose a more feasible method for the multi-sensor placement problem: it provides the high coverage of grid-based placements while minimizing the cost, as found in perimeter placement techniques. The NICA algorithm, an improved ICA (Imperialist Competitive Algorithm), is used to reduce the time needed to find a sufficiently good solution compared to other meta-heuristic schemes such as GA, PSO and ICA. A three-dimensional area is used to represent the multiple target and placement points, providing x, y, and z computations in the observation algorithm. A model for the multi-sensor placement problem is proposed: the problem is formulated as an optimization problem with the objective of minimizing the cost while covering all multiple target points subject to a given probability of observation tolerance.

  10. Practice-oriented optical thin film growth simulation via multiple scale approach

    Energy Technology Data Exchange (ETDEWEB)

    Turowski, Marcus, E-mail: m.turowski@lzh.de [Laser Zentrum Hannover e.V., Hollerithallee 8, Hannover 30419 (Germany); Jupé, Marco [Laser Zentrum Hannover e.V., Hollerithallee 8, Hannover 30419 (Germany); QUEST: Centre of Quantum Engineering and Space-Time Research, Leibniz Universität Hannover (Germany); Melzig, Thomas [Fraunhofer Institute for Surface Engineering and Thin Films IST, Bienroder Weg 54e, Braunschweig 30108 (Germany); Moskovkin, Pavel [Research Centre for Physics of Matter and Radiation (PMR-LARN), University of Namur (FUNDP), 61 rue de Bruxelles, Namur 5000 (Belgium); Daniel, Alain [Centre for Research in Metallurgy, CRM, 21 Avenue du bois Saint Jean, Liège 4000 (Belgium); Pflug, Andreas [Fraunhofer Institute for Surface Engineering and Thin Films IST, Bienroder Weg 54e, Braunschweig 30108 (Germany); Lucas, Stéphane [Research Centre for Physics of Matter and Radiation (PMR-LARN), University of Namur (FUNDP), 61 rue de Bruxelles, Namur 5000 (Belgium); Ristau, Detlev [Laser Zentrum Hannover e.V., Hollerithallee 8, Hannover 30419 (Germany); QUEST: Centre of Quantum Engineering and Space-Time Research, Leibniz Universität Hannover (Germany)

    2015-10-01

    Simulation of the coating process is a very promising approach for the understanding of thin film formation. Nevertheless, this complex matter cannot be covered by a single simulation technique. To consider all mechanisms and processes influencing the optical properties of the growing thin films, various common theoretical methods have been combined into a multi-scale model approach. The simulation techniques have been selected in order to describe all processes in the coating chamber, especially the various mechanisms of thin film growth, and to enable the analysis of the resulting structural as well as optical and electronic layer properties. All methods are merged with adapted communication interfaces to achieve optimum compatibility of the different approaches and to generate physically meaningful results. The present contribution offers an approach for the full simulation of an Ion Beam Sputtering (IBS) coating process combining direct simulation Monte Carlo, classical molecular dynamics, kinetic Monte Carlo, and density functional theory. The simulation is performed for an existing IBS coating plant as an example, to validate the developed multi-scale approach. Finally, the modeled results are compared to experimental data. - Highlights: • A model approach for simulating an Ion Beam Sputtering (IBS) process is presented. • In order to combine the different techniques, optimized interfaces are developed. • The transport of atomic species in the coating chamber is calculated. • We modeled structural and optical film properties based on simulated IBS parameters. • The modeled and the experimental refractive index data fit very well.

  11. Method of Fusion Diagnosis for Dam Service Status Based on Joint Distribution Function of Multiple Points

    Directory of Open Access Journals (Sweden)

    Zhenxiang Jiang

    2016-01-01

    Full Text Available Traditional methods of diagnosing dam service status are generally suited to a single measuring point. They reflect only the local status of the dam and do not merge multisource data effectively, so they are not suitable for diagnosing the overall service status. This study proposes a new multiple-point method for diagnosing dam service status based on a joint distribution function. The joint distribution function over the monitoring data of multiple points can be established with a t-copula function. The possibility, an important fused value over different measuring combinations, can then be calculated, and the corresponding diagnostic criterion is established with classical small-probability theory. An engineering case study indicates that the fusion diagnosis method can be conducted in real time and that abnormal points can be detected, thereby providing a new early-warning method for engineering safety.
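
    The t-copula density that links the marginals of several monitoring points can be written down generically. This is the textbook formula, not the paper's fitted dam-monitoring model, and it assumes SciPy's `multivariate_t` distribution is available:

```python
import numpy as np
from scipy import stats

def t_copula_density(u, corr, df):
    """Density of a t-copula at the vector of uniform marginals u:
    c(u) = f_{t,df,R}(z) / prod_i f_{t,df}(z_i), with z_i = T_df^{-1}(u_i),
    where f_{t,df,R} is the multivariate-t density with correlation R and
    f_{t,df} the univariate-t density.  This is how a joint distribution
    over several monitoring points is built from their marginals."""
    u = np.asarray(u, float)
    z = stats.t.ppf(u, df)                       # map marginals to t scale
    joint = stats.multivariate_t(loc=np.zeros(len(u)),
                                 shape=np.asarray(corr, float),
                                 df=df).pdf(z)
    marg = stats.t.pdf(z, df).prod()
    return float(joint / marg)
```

    Even with an identity correlation matrix the t-copula density differs from 1, reflecting the tail dependence that makes it attractive for joint small-probability (alarm) criteria.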

  12. An implementation of multiple multipole method in the analyse of elliptical objects to enhance backscattering light

    Science.gov (United States)

    Jalali, T.

    2015-07-01

    In this paper, we present modelling of dielectric elliptical shapes with respect to a highly confined power distribution in the resulting nanojet, parameterized according to the beam waist and beam divergence. The method is based on spherical Bessel functions as basis functions, adapted to the standard multiple multipole method. It can handle elliptically shaped particles of varying size and refractive index, which have been studied under plane-wave illumination in the two- and three-dimensional multiple multipole method. Because of its fast and good convergence, the results obtained from the simulation are highly accurate and reliable. The simulation time is less than a minute in both two and three dimensions. The proposed method is therefore found to be computationally efficient, fast and accurate.

  13. The initial rise method extended to multiple trapping levels in thermoluminescent materials

    Energy Technology Data Exchange (ETDEWEB)

    Furetta, C. [CICATA-Legaria, Instituto Politecnico Nacional, 11500 Mexico D.F. (Mexico); Guzman, S. [Instituto de Ciencias Nucleares, Universidad Nacional Autonoma de Mexico, A.P. 70-543, 04510 Mexico D.F. (Mexico); Ruiz, B. [Instituto de Ciencias Nucleares, Universidad Nacional Autonoma de Mexico, A.P. 70-543, 04510 Mexico D.F. (Mexico); Departamento de Agricultura y Ganaderia, Universidad de Sonora, A.P. 305, 83190 Hermosillo, Sonora (Mexico); Cruz-Zaragoza, E., E-mail: ecruz@nucleares.unam.m [Instituto de Ciencias Nucleares, Universidad Nacional Autonoma de Mexico, A.P. 70-543, 04510 Mexico D.F. (Mexico)

    2011-02-15

    The well known Initial Rise Method (IR) is commonly used to determine the activation energy when only one glow peak is present and analysed in the phosphor materials. However, when the glow curve is more complex, a wide peak with some shoulders appears in the structure, and the direct application of the Initial Rise Method is not valid because multiple trapping levels are involved, making the thermoluminescent analysis difficult to perform. This paper takes a complex glow curve structure as an example and shows that the calculation is still possible using the IR method. The aim of the paper is to extend the well known Initial Rise Method (IR) to the case of multiple trapping levels. The IR method is applied to minerals extracted from Nopal cactus and Oregano spices because the shape of the thermoluminescent glow curve suggests a trap distribution instead of a single trapping level.
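
    The initial-rise calculation itself is an Arrhenius fit: on the low-temperature side of a glow peak, I(T) ∝ exp(−E/(k_B·T)), so the slope of ln I against 1/(k_B·T) is −E. A sketch, with the usual convention of fitting only points below a fraction of the peak maximum:

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant in eV/K

def initial_rise_energy(T, I, fraction=0.1):
    """Estimate the trap activation energy E (eV) from the initial-rise
    part of a glow peak.  Only points on the low-temperature side below
    `fraction` of the peak maximum are used; restricting the fit to such
    segments is what allows the method to be reapplied piecewise when
    several trapping levels overlap."""
    T = np.asarray(T, float)
    I = np.asarray(I, float)
    mask = (I > 0) & (I <= fraction * I.max()) & (T < T[np.argmax(I)])
    # ln I = const - E * (1 / (k_B * T)); slope of the line is -E
    slope, _ = np.polyfit(1.0 / (K_B * T[mask]), np.log(I[mask]), 1)
    return -slope
```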

  14. The initial rise method extended to multiple trapping levels in thermoluminescent materials.

    Science.gov (United States)

    Furetta, C; Guzmán, S; Ruiz, B; Cruz-Zaragoza, E

    2011-02-01

    The well known Initial Rise Method (IR) is commonly used to determine the activation energy when only one glow peak is present and analysed in the phosphor materials. However, when the glow curve is more complex, a wide peak with some shoulders appears in the structure, and the direct application of the Initial Rise Method is not valid because multiple trapping levels are involved, making the thermoluminescent analysis difficult to perform. This paper takes a complex glow curve structure as an example and shows that the calculation is still possible using the IR method. The aim of the paper is to extend the well known Initial Rise Method (IR) to the case of multiple trapping levels. The IR method is applied to minerals extracted from Nopal cactus and Oregano spices because the shape of the thermoluminescent glow curve suggests a trap distribution instead of a single trapping level. Copyright © 2010 Elsevier Ltd. All rights reserved.

  15. The initial rise method extended to multiple trapping levels in thermoluminescent materials

    International Nuclear Information System (INIS)

    Furetta, C.; Guzman, S.; Ruiz, B.; Cruz-Zaragoza, E.

    2011-01-01

    The well known Initial Rise Method (IR) is commonly used to determine the activation energy when only one glow peak is present and analysed in the phosphor materials. However, when the glow curve is more complex, a wide peak with some shoulders appears in the structure, and the direct application of the Initial Rise Method is not valid because multiple trapping levels are involved, making the thermoluminescent analysis difficult to perform. This paper takes a complex glow curve structure as an example and shows that the calculation is still possible using the IR method. The aim of the paper is to extend the well known Initial Rise Method (IR) to the case of multiple trapping levels. The IR method is applied to minerals extracted from Nopal cactus and Oregano spices because the shape of the thermoluminescent glow curve suggests a trap distribution instead of a single trapping level.

  16. [A factor analysis method for contingency table data with unlimited multiple choice questions].

    Science.gov (United States)

    Toyoda, Hideki; Haiden, Reina; Kubo, Saori; Ikehara, Kazuya; Isobe, Yurie

    2016-02-01

    The purpose of this study is to propose a method of factor analysis for analyzing contingency tables developed from the data of unlimited multiple-choice questions. This method assumes that the element of each cell of the contingency table has a binomial distribution, and a factor analysis model is applied to the logit of the selection probability. A scree plot and WAIC are used to decide the number of factors, and the standardized residual (the standardized difference between the sample proportion and the expected proportion) is used to select items. The proposed method was applied to real product impression research data on advertised chips and energy drinks. The results showed that this method can be used in conjunction with the conventional factor analysis model and that the extracted factors were fully interpretable, suggesting the usefulness of the proposed method in the study of psychology using unlimited multiple-choice questions.
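    The cell-wise logit step described above can be sketched as follows; the counts and group size are hypothetical placeholders, and the factor model fitted to the logits is omitted:

```python
import numpy as np

# Sketch of the preprocessing step only (counts are hypothetical): each cell
# of the contingency table holds a selection count out of n respondents, and
# the factor-analysis model is applied to logit(p) = log(p / (1 - p)).
counts = np.array([[30, 12],     # item-by-group selection counts
                   [ 5, 45]])
n = 60                           # respondents per group

p = counts / n                   # cell-wise selection proportions
logit = np.log(p / (1.0 - p))    # input to the factor analysis model
```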

  17. VIKOR Method for Interval Neutrosophic Multiple Attribute Group Decision-Making

    Directory of Open Access Journals (Sweden)

    Yu-Han Huang

    2017-11-01

    Full Text Available In this paper, we extend the VIKOR (VIsekriterijumska optimizacija i KOmpromisno Resenje) method to multiple attribute group decision-making (MAGDM) with interval neutrosophic numbers (INNs). Firstly, the basic concepts of INNs are briefly presented. The method first aggregates all individual decision-makers’ assessment information based on an interval neutrosophic weighted averaging (INWA) operator, and then employs the extended classical VIKOR method to solve MAGDM problems with INNs. The validity and stability of this method are verified by example analysis and sensitivity analysis, and its superiority is illustrated by a comparison with existing methods.
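    For readers unfamiliar with the classical (crisp) VIKOR core that the paper extends, a minimal sketch on a hypothetical crisp decision matrix follows; the interval-neutrosophic ratings and the INWA aggregation step are not reproduced here:

```python
import numpy as np

# Classical crisp VIKOR sketch (benefit criteria only); ratings and weights
# are illustrative, not from the paper.
X = np.array([[7., 5., 8.],      # alternatives (rows) x criteria (columns)
              [6., 9., 5.],
              [8., 7., 6.]])
w = np.array([0.4, 0.35, 0.25])  # criteria weights
v = 0.5                          # weight of the "majority rule" strategy

f_best, f_worst = X.max(axis=0), X.min(axis=0)
d = w * (f_best - X) / (f_best - f_worst)   # weighted normalized distances
S = d.sum(axis=1)                           # group utility
R = d.max(axis=1)                           # individual regret
Q = v * (S - S.min()) / (S.max() - S.min()) + \
    (1 - v) * (R - R.min()) / (R.max() - R.min())
ranking = np.argsort(Q)                     # best alternative first
```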

  18. Methods of fast, multiple-point in vivo T1 determination

    International Nuclear Information System (INIS)

    Zhang, Y.; Spigarelli, M.; Fencil, L.E.; Yeung, H.N.

    1989-01-01

    Two methods of rapid, multiple-point determination of T1 in vivo have been evaluated with a phantom consisting of vials of gel in different Mn++ concentrations. The first method was an inversion-recovery-on-the-fly technique, and the second used a variable-tip-angle (α) progressive saturation with two subsequences of different repetition times. In the first method, 1/T1 was evaluated by an exponential fit. In the second method, 1/T1 was obtained iteratively with a linear fit and then readjusted together with α to a model equation until self-consistency was reached.
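    The exponential-fit step of the first method can be sketched as follows, assuming the ideal inversion-recovery signal model S(TI) = M0(1 - 2 exp(-TI/T1)) with known M0; the acquisition details of the on-the-fly technique are not modeled:

```python
import numpy as np

# Inversion-recovery T1 fit sketch (synthetic, noise-free data): with M0
# known, ln((M0 - S) / (2 * M0)) = -TI / T1 is linear in TI.
T1_true, M0 = 0.9, 100.0                  # seconds, arbitrary signal units
TI = np.linspace(0.1, 2.0, 8)             # inversion times (s)
S = M0 * (1 - 2 * np.exp(-TI / T1_true))  # ideal recovered signal

slope, _ = np.polyfit(TI, np.log((M0 - S) / (2 * M0)), 1)
T1_est = -1.0 / slope                     # recovers ~0.9 s
```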

  19. Combining qualitative and quantitative operational research methods to inform quality improvement in pathways that span multiple settings

    Science.gov (United States)

    Crowe, Sonya; Brown, Katherine; Tregay, Jenifer; Wray, Jo; Knowles, Rachel; Ridout, Deborah A; Bull, Catherine; Utley, Martin

    2017-01-01

    Background Improving integration and continuity of care across sectors within resource constraints is a priority in many health systems. Qualitative operational research methods of problem structuring have been used to address quality improvement in services involving multiple sectors but not in combination with quantitative operational research methods that enable targeting of interventions according to patient risk. We aimed to combine these methods to augment and inform an improvement initiative concerning infants with congenital heart disease (CHD) whose complex care pathway spans multiple sectors. Methods Soft systems methodology was used to consider systematically changes to services from the perspectives of community, primary, secondary and tertiary care professionals and a patient group, incorporating relevant evidence. Classification and regression tree (CART) analysis of national audit datasets was conducted along with data visualisation designed to inform service improvement within the context of limited resources. Results A ‘Rich Picture’ was developed capturing the main features of services for infants with CHD pertinent to service improvement. This was used, along with a graphical summary of the CART analysis, to guide discussions about targeting interventions at specific patient risk groups. Agreement was reached across representatives of relevant health professions and patients on a coherent set of targeted recommendations for quality improvement. These fed into national decisions about service provision and commissioning. Conclusions When tackling complex problems in service provision across multiple settings, it is important to acknowledge and work with multiple perspectives systematically and to consider targeting service improvements in response to confined resources. Our research demonstrates that applying a combination of qualitative and quantitative operational research methods is one approach to doing so that warrants further consideration.

  20. Identifying multiple outliers in linear regression: robust fit and clustering approach

    International Nuclear Information System (INIS)

    Robiah Adnan; Mohd Nor Mohamad; Halim Setan

    2001-01-01

    This research provides a clustering-based approach for determining potential candidates for outliers. It is a modification of the method proposed by Sebert et al. (1988), based on using the single linkage clustering algorithm to group the standardized predicted and residual values of a data set fit by least trimmed squares (LTS). (Author)
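    A loose sketch of the clustering idea follows; for simplicity an ordinary least-squares fit stands in for the least trimmed squares fit, and single linkage is implemented with a fixed distance cutoff rather than a full dendrogram cut:

```python
import numpy as np

# Outlier-candidate sketch: standardize predicted values and residuals from
# a (here: ordinary) regression fit, single-linkage-cluster the 2D points,
# and flag points outside the majority cluster.
rng = np.random.default_rng(0)
x = np.arange(20.0)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, 20)
y[19] += 15.0                               # planted outlier

b, a = np.polyfit(x, y, 1)                  # slope, intercept (OLS stand-in)
pred = a + b * x
resid = y - pred
Z = np.column_stack([(pred - pred.mean()) / pred.std(),
                     (resid - resid.mean()) / resid.std()])

# single linkage via union-find with a distance cutoff
cutoff = 1.5
parent = list(range(len(Z)))
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i
for i in range(len(Z)):
    for j in range(i + 1, len(Z)):
        if np.linalg.norm(Z[i] - Z[j]) < cutoff:
            parent[find(i)] = find(j)
labels = np.array([find(i) for i in range(len(Z))])

main = np.bincount(labels).argmax()         # majority cluster label
outliers = np.where(labels != main)[0]      # includes the planted outlier
```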

  1. The impact of secure messaging on workflow in primary care: Results of a multiple-case, multiple-method study.

    Science.gov (United States)

    Hoonakker, Peter L T; Carayon, Pascale; Cartmill, Randi S

    2017-04-01

    Secure messaging is a relatively new addition to health information technology (IT). Several studies have examined the impact of secure messaging on (clinical) outcomes, but very few have examined its impact on workflow in primary care clinics. In this study we examined the impact of secure messaging on the workflow of clinicians, staff and patients. We used a multiple case study design with multiple data collection methods (observation, interviews and survey). Results show that secure messaging has the potential to improve communication and information flow and the organization of work in primary care clinics, partly due to the possibility of asynchronous communication. However, secure messaging can also have a negative effect on communication and increase workload, especially if patients send messages that are not appropriate for the secure messaging medium (for example, messages that are too long, complex, ambiguous, or inappropriate). Results show that clinicians are ambivalent about secure messaging. Secure messaging can add to their workload, especially if there is high message volume, and currently they are not compensated for these activities. Staff are, especially compared to clinicians, relatively positive about secure messaging, and patients are overall very satisfied with it. Finally, clinicians, staff and patients think that secure messaging can have a positive effect on quality of care and patient safety. Secure messaging is a tool that has the potential to improve communication and information flow, but its potential to improve workflow depends on the way it is implemented and used. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Comparison between Two Assessment Methods; Modified Essay Questions and Multiple Choice Questions

    Directory of Open Access Journals (Sweden)

    Assadi S.N.* MD

    2015-09-01

    Full Text Available Aims Using the best assessment methods is an important factor in the educational development of health students. Modified essay questions and multiple choice questions are two prevalent methods of assessing students. The aim of this study was to compare the modified essay question and multiple choice question methods in occupational health engineering and work laws courses. Materials & Methods This semi-experimental study was performed during 2013 to 2014 on occupational health students of Mashhad University of Medical Sciences. The class of the occupational health and work laws course in 2013 was considered as group A and the class of 2014 as group B. Each group had 50 students. The group A students were assessed by the modified essay questions method and the group B students by the multiple choice questions method. Data were analyzed in SPSS 16 software by paired t-test and odds ratio. Findings The mean grade of the occupational health and work laws course was 18.68±0.91 in group A (modified essay questions) and 18.78±0.86 in group B (multiple choice questions), which was not significantly different (t=-0.41; p=0.684). The mean grades of the chemical chapter (p<0.001) in occupational health engineering and of the harmful work law (p<0.001) and other (p=0.015) chapters in work laws were significantly different between the two groups. Conclusion The modified essay questions and multiple choice questions methods have nearly the same value for assessing students in the occupational health engineering and work laws course.

  3. A hybrid approach for efficient anomaly detection using metaheuristic methods

    Directory of Open Access Journals (Sweden)

    Tamer F. Ghanem

    2015-07-01

    Full Text Available Network intrusion detection based on anomaly detection techniques has a significant role in protecting networks and systems against harmful activities. Different metaheuristic techniques have been used for anomaly detector generation. Yet, the reported literature has not studied the use of the multi-start metaheuristic method for detector generation. This paper proposes a hybrid approach for anomaly detection in large-scale datasets using detectors generated based on the multi-start metaheuristic method and genetic algorithms. The proposed approach takes some inspiration from negative selection-based detector generation. The evaluation of this approach is performed using the NSL-KDD dataset, a modified version of the widely used KDD CUP 99 dataset. The results show its effectiveness in generating a suitable number of detectors with an accuracy of 96.1%, compared with competing machine learning algorithms.

  4. Systematic approaches to data analysis from the Critical Decision Method

    Directory of Open Access Journals (Sweden)

    Martin Sedlár

    2015-01-01

    Full Text Available The aim of the present paper is to introduce how to analyse the qualitative data from the Critical Decision Method. At first, characterizing the method provides a meaningful introduction to the issue. This method, used in naturalistic decision making research, is one of the cognitive task analysis methods; it is based on a retrospective semistructured interview about a critical incident from work, and it may be applied in various domains such as emergency services, military, transport, sport or industry. Researchers can make two types of methodological adaptation. Within-method adaptations modify the way of conducting the interviews, and cross-method adaptations combine this method with other related methods. There are many descriptions of conducting the interview, but descriptions of how the data should be analysed are rare. Some researchers use conventional approaches like content analysis, grounded theory or individual procedures with reference to the objectives of the research project. Wong (2004) describes two approaches to data analysis proposed for this method of data collection, which are described and reviewed in detail. They enable systematic work with a large amount of data. The structured approach organizes the data according to an a priori analysis framework and is suitable for a clearly defined object of research. Each incident is studied separately. At first the decision chart showing the main decision points is made, and then the incident summary. These decision points are used to identify the relevant statements from the transcript, which are analysed in terms of the Recognition-Primed Decision Model. Finally, the results from all the analysed incidents are integrated. The limitation of the structured approach is that it may not reveal some interesting concepts. The emergent themes approach helps to identify these concepts while maintaining a systematic framework for analysis, and it is used for exploratory research design.

  5. Experiential Approach to Teaching Statistics and Research Methods ...

    African Journals Online (AJOL)

    Statistics and research methods are among the more demanding topics for students of education to master at both the undergraduate and postgraduate levels. It is our conviction that teaching these topics should be combined with real practical experiences. We discuss an experiential teaching/learning approach that ...

  6. Book Review: Comparative Education Research: Approaches and Methods

    Directory of Open Access Journals (Sweden)

    Noel Mcginn

    2014-10-01

    Full Text Available Book Review Comparative Education Research: Approaches and Methods (2nd edition), by Mark Bray, Bob Adamson and Mark Mason (Eds.) (2014, 453p). ISBN: 978-988-17852-8-2, Hong Kong: Comparative Education Research Centre and Springer

  7. A sequential mixed methods research approach to investigating HIV ...

    African Journals Online (AJOL)

    Sequential mixed methods research is an effective approach for investigating complex problems, but it has not been extensively used in construction management research. In South Africa, the HIV/AIDS pandemic has seen construction management taking on a vital responsibility since the government called upon the ...

  8. Teaching Psychological Research Methods through a Pragmatic and Programmatic Approach

    Science.gov (United States)

    Rosenkranz, Patrick; Fielden, Amy; Tzemou, Effy

    2014-01-01

    Research methods teaching in psychology is pivotal in preparing students for the transition from student as learner to independent practitioner. We took an action research approach to re-design, implement and evaluate a module guiding students through a programmatic and pragmatic research cycle. These revisions allow students to experience how…

  9. The Feldenkrais Method: A Dynamic Approach to Changing Motor Behavior.

    Science.gov (United States)

    Buchanan, Patricia A.; Ulrich, Beverly D.

    2001-01-01

    Describes the Feldenkrais Method of somatic education, noting parallels with a dynamic systems theory (DST) approach to motor behavior. Feldenkrais uses movement and perception to foster individualized improvement in function. DST explains that a human-environment system continually adapts to changing conditions and assembles behaviors…

  10. Improving discrimination of savanna tree species through a multiple endmember spectral-angle-mapper (SAM) approach: canopy level analysis

    CSIR Research Space (South Africa)

    Cho, Moses A

    2010-11-01

    Full Text Available sensing. The objectives of this paper were to (i) evaluate the classification performance of a multiple-endmember spectral angle mapper (SAM) classification approach (conventionally known as the nearest neighbour) in discriminating ten common African...
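    The core of a multiple-endmember SAM classifier can be sketched as follows, with hypothetical three-band spectra standing in for the field spectra used in the paper: each pixel is assigned the class of its nearest endmember by spectral angle.

```python
import numpy as np

# Multiple-endmember SAM sketch: the spectral angle between a pixel p and an
# endmember e is arccos(p.e / (||p|| ||e||)); with several endmembers per
# class, the pixel takes the class of the nearest endmember (nearest
# neighbour in angle). Spectra below are illustrative placeholders.
endmembers = {                            # class -> list of reference spectra
    "A": [np.array([0.1, 0.4, 0.8]), np.array([0.12, 0.38, 0.75])],
    "B": [np.array([0.5, 0.5, 0.2])],
}

def spectral_angle(p, e):
    cos = p @ e / (np.linalg.norm(p) * np.linalg.norm(e))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def classify(pixel):
    return min((spectral_angle(pixel, e), c)
               for c, es in endmembers.items() for e in es)[1]

# a pixel parallel to A's first spectrum (brightness-invariant match)
label = classify(np.array([0.2, 0.8, 1.6]))
```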

  11. Performance of a novel multiple-signal luminescence sediment tracing method

    Science.gov (United States)

    Reimann, Tony

    2014-05-01

    transport. The EET increases with increasing distance from the nourishment source, indicating that our method is capable of quantifying sediment transport distances. We furthermore observed that the EET of an aeolian analogue is orders of magnitude higher than those of the water-lain transported Zandmotor samples, suggesting that our approach is also able to differentiate between different modes of coastal sediment transport. This new luminescence approach offers new possibilities to decipher the sedimentation history of palaeo-environmental archives, e.g. in coastal, fluvial or aeolian settings. References: Reimann, T. et al. Quantifying the degree of bleaching during sediment transport using a polymineral multiple-signal luminescence approach. Submitted. Stive, M.J.F. et al. 2013. A New Alternative to Saving Our Beaches from Sea-Level Rise: The Sand Engine. Journal of Coastal Research 29, 1001-1008.

  12. MULTIPLE CRITERIA METHODS WITH FOCUS ON ANALYTIC HIERARCHY PROCESS AND GROUP DECISION MAKING

    Directory of Open Access Journals (Sweden)

    Lidija Zadnik-Stirn

    2010-12-01

    Full Text Available Managing natural resources is a group multiple criteria decision making problem. In this paper the analytic hierarchy process is the chosen method for handling natural resource problems. The single decision maker problem is discussed, and three methods for the derivation of the priority vector are presented: the eigenvector method, the data envelopment analysis method, and the logarithmic least squares method. Further, the group analytic hierarchy process is discussed, and six methods for the aggregation of individual judgments or priorities are compared: the weighted arithmetic mean method, the weighted geometric mean method, and four methods based on data envelopment analysis. A case study on land use in Slovenia is presented. The conclusions review consistency, sensitivity analyses, and some future directions of research.
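    Two of the methods named above can be sketched on a hypothetical 3x3 pairwise-comparison matrix: the eigenvector method for a single decision maker, and aggregation of two judges' matrices by an element-wise (equal-weight) geometric mean:

```python
import numpy as np

# AHP sketch: priority vector as the normalized principal eigenvector of a
# pairwise-comparison matrix; judgments below are hypothetical.
A = np.array([[1.0,   3.0, 5.0],
              [1/3.0, 1.0, 2.0],
              [1/5.0, 1/2.0, 1.0]])

def priority_vector(M):
    """Principal right eigenvector, normalized to sum to 1."""
    vals, vecs = np.linalg.eig(M)
    v = np.abs(vecs[:, np.argmax(vals.real)].real)
    return v / v.sum()

w = priority_vector(A)            # single decision maker's weights

# group AHP: aggregate two judges' matrices by element-wise geometric mean
B = np.array([[1.0,   2.0, 4.0],
              [1/2.0, 1.0, 2.0],
              [1/4.0, 1/2.0, 1.0]])
G = np.sqrt(A * B)                # equal-weight geometric mean (reciprocal)
w_group = priority_vector(G)
```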

  13. An Efficient Approach for Identifying Stable Lobes with Discretization Method

    Directory of Open Access Journals (Sweden)

    Baohai Wu

    2013-01-01

    Full Text Available This paper presents a new approach for quick identification of chatter stability lobes with the discretization method. Firstly, three different kinds of stability regions are defined: the absolute stable region, the valid region, and the invalid region. Secondly, while identifying the chatter stability lobes, the three different regions within the lobes are identified with relatively large time intervals. Thirdly, the stability boundary within the valid regions is finely calculated to get exact chatter stability lobes. The proposed method only needs to test a small portion of the spindle speed and cutting depth set; about 89% of the computation time is saved compared with the full discretization method, and it takes only about 10 minutes to get exact chatter stability lobes. Since it is based on the discretization method, the proposed approach can be used for different immersion cutting conditions, including low immersion cutting, and can be directly implemented in the workshop to improve the efficiency of machining parameter selection.

  14. Detecting a Weak Association by Testing its Multiple Perturbations: a Data Mining Approach

    Science.gov (United States)

    Lo, Min-Tzu; Lee, Wen-Chung

    2014-05-01

    Many risk factors/interventions in epidemiologic/biomedical studies are of minuscule effects. To detect such weak associations, one needs a study with a very large sample size (the number of subjects, n). The n of a study can be increased, but unfortunately only to an extent. Here, we propose a novel method which hinges on increasing sample size in a different direction: the total number of variables (p). We construct a p-based 'multiple perturbation test', and conduct power calculations and computer simulations to show that it can achieve a very high power to detect weak associations when p can be made very large. As a demonstration, we apply the method to analyze a genome-wide association study on age-related macular degeneration and identify two novel genetic variants that are significantly associated with the disease. The p-based method may set a stage for a new paradigm of statistical tests.

  15. DEWA: A Multiaspect Approach for Multiple Face Detection in Complex Scene Digital Image

    Directory of Open Access Journals (Sweden)

    Setiawan Hadi

    2013-09-01

    Full Text Available A new approach for detecting faces in a digital image with unconstrained background has been developed. The approach is composed of three phases: segmentation, filtering and localization. In the segmentation phase, we utilized both training and non-training methods, implemented in a user-selectable color space. In the filtering phase, Minkowski addition-based object removal has been used for image cleaning. In the last phase, an image processing method and a data mining method are employed for grouping and localizing objects, combined with geometric-based image analysis. Several experiments have been conducted using our special face database that consists of simple and complex objects. The experiment results demonstrated that the detection accuracy is around 90% and the detection speed is less than 1 second on average.

  16. Whole-body voxel-based personalized dosimetry: Multiple voxel S-value approach for heterogeneous media with non-uniform activity distributions.

    Science.gov (United States)

    Lee, Min Sun; Kim, Joong Hyun; Paeng, Jin Chul; Kang, Keon Wook; Jeong, Jae Min; Lee, Dong Soo; Lee, Jae Sung

    2017-12-14

    Personalized dosimetry with high accuracy is becoming more important because of the growing interest in personalized medicine and targeted radionuclide therapy. Voxel-based dosimetry using dose point kernel or voxel S-value (VSV) convolution is available. However, these approaches do not consider medium heterogeneity. Here, we propose a new method for whole-body voxel-based personalized dosimetry for heterogeneous media with non-uniform activity distributions, referred to as the multiple VSV approach. Methods: Multiple numbers (N) of VSVs for media with different densities covering the whole-body density range were used instead of a single VSV for water. The VSVs were pre-calculated using GATE Monte Carlo simulation and convolved with the time-integrated activity to generate density-specific dose maps. Computed tomography-based segmentation was conducted to generate binary maps for each density region. The final dose map was acquired by summation of the N segmented density-specific dose maps. We tested several sets of VSVs with different densities: N = 1 (single water VSV), 4, 6, 8, 10, and 20. To validate the proposed method, phantom and patient studies were conducted and compared with direct Monte Carlo, which was considered the ground truth. Finally, patient dosimetry (10 subjects) was conducted using the multiple VSV approach and compared with the single VSV and organ-based dosimetry approaches. Errors at the voxel and organ levels were reported for eight organs. Results: In the phantom and patient studies, the multiple VSV approach showed significant improvements in voxel-level errors, especially for the lung and bone regions. As N increased, voxel-level errors decreased, although some overestimation was observed at lung boundaries. In the case of multiple VSVs (N = 8), we achieved voxel-level errors of 2.06%. In the dosimetry study, our proposed method showed much improved results compared with the single VSV and organ-based dosimetry approaches.
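    The segment-convolve-sum structure of the multiple VSV approach can be illustrated in one dimension; the activity, density map and kernels below are toy values, not GATE-derived VSVs:

```python
import numpy as np

# 1D toy of the multiple-VSV idea: convolve the time-integrated activity
# with a density-specific kernel, mask by the binary segmentation map of
# each density region, and sum the density-specific dose maps.
activity = np.zeros(32)
activity[10:14] = 1.0                        # time-integrated activity
density = np.ones(32)
density[16:] = 0.3                           # water-like vs. lung-like region

kernels = {1.0: np.array([0.10, 0.80, 0.10]),   # "VSV" for water density
           0.3: np.array([0.25, 0.50, 0.25])}   # broader kernel, low density

dose = np.zeros_like(activity)
for rho, k in kernels.items():
    mask = (density == rho).astype(float)       # binary segmentation map
    dose += mask * np.convolve(activity, k, mode="same")
```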

  17. Weighted least-square approach for simultaneous measurement of multiple reflective surfaces

    Science.gov (United States)

    Tang, Shouhong; Bills, Richard E.; Freischlad, Klaus

    2007-09-01

    Phase shifting interferometry (PSI) is a highly accurate method for measuring the nanometer-scale relative surface height of a semi-reflective test surface. PSI is effectively used in conjunction with Fizeau interferometers for optical testing, hard disk inspection, and semiconductor wafer flatness measurement. However, commonly used PSI algorithms are unable to produce an accurate phase measurement if more than one reflective surface is present in the Fizeau interferometer test cavity. Examples of test parts that fall into this category include lithography mask blanks and their protective pellicles, and plane parallel optical beam splitters. The plane parallel surfaces of these parts generate multiple interferograms that are superimposed in the recording plane of the Fizeau interferometer. When using wavelength shifting in PSI, the phase shifting speed of each interferogram is proportional to the optical path difference (OPD) between the two reflective surfaces. The proposed method is able to differentiate each underlying interferogram from the others in an optimal manner. In this paper, we present a method for simultaneously measuring the multiple test surfaces of all underlying interferograms from these superimposed interferograms through the use of a weighted least-square fitting technique. The theoretical analysis of the weighted least-square technique and the measurement results are described.
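    For background, the standard single-surface PSI phase recovery is itself a least-squares problem; the sketch below shows that uniform-weight, single-surface case for one pixel (the paper's multi-surface weighted fit generalizes this and is not reproduced here):

```python
import numpy as np

# Single-surface PSI sketch: each frame records
#   I_n = a + b*cos(phi + d_n) = a + c*cos(d_n) - s*sin(d_n),
# with c = b*cos(phi), s = b*sin(phi); a linear least-squares solve for
# (a, c, s) recovers the phase phi. Values below are synthetic.
phi_true = 0.7                                  # radians, one pixel
d = np.array([0, np.pi/2, np.pi, 3*np.pi/2])    # applied phase shifts
I = 5.0 + 2.0 * np.cos(phi_true + d)            # recorded intensities

M = np.column_stack([np.ones_like(d), np.cos(d), -np.sin(d)])
a, c, s = np.linalg.lstsq(M, I, rcond=None)[0]
phi_est = np.arctan2(s, c)                      # recovers ~0.7 rad
```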

  18. A multiple-scale power series method for solving nonlinear ordinary differential equations

    Directory of Open Access Journals (Sweden)

    Chein-Shan Liu

    2016-02-01

    Full Text Available The power series solution is a cheap and effective method to solve nonlinear problems, like the Duffing-van der Pol oscillator, the Volterra population model and nonlinear boundary value problems. A novel power series method is developed by considering multiple scales $R_k$ in the power term $(t/R_k)^k$, which are derived explicitly to reduce the ill-conditioned behavior in the data interpolation. In the method a huge value times a tiny value is avoided, such that we can decrease the numerical instability, which is the main reason for the failure of the conventional power series method. The multiple scales derived from an integral can be used in the power series expansion, which provides very accurate numerical solutions of the problems considered in this paper.
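    The conditioning benefit of the scaled basis can be illustrated numerically; here the scales R_k are chosen simply so that each column of the collocation matrix has unit maximum, which is only one convenient choice, not the integral-derived scales of the paper:

```python
import numpy as np

# Compare the conditioning of a plain power basis t^k with the scaled basis
# (t/R_k)^k on the same nodes; R_k normalizes each column's maximum to 1.
t = np.linspace(0.0, 10.0, 21)
K = 10
V_plain = np.column_stack([t**k for k in range(K)])

R = np.array([np.max(np.abs(t**k))**(1.0 / k) if k else 1.0
              for k in range(K)])
V_scaled = np.column_stack([(t / R[k])**k for k in range(K)])

c_plain = np.linalg.cond(V_plain)
c_scaled = np.linalg.cond(V_scaled)   # much smaller condition number
```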

  19. Novel multiple criteria decision making methods based on bipolar neutrosophic sets and bipolar neutrosophic graphs

    OpenAIRE

    Muhammad, Akram; Musavarah, Sarwar

    2016-01-01

    In this research study, we introduce the concept of bipolar neutrosophic graphs. We present the dominating and independent sets of bipolar neutrosophic graphs. We describe novel multiple criteria decision making methods based on bipolar neutrosophic sets and bipolar neutrosophic graphs. We also develop an algorithm for computing domination in bipolar neutrosophic graphs.

  20. Magic Finger Teaching Method in Learning Multiplication Facts among Deaf Students

    Science.gov (United States)

    Thai, Liong; Yasin, Mohd. Hanafi Mohd

    2016-01-01

    Deaf students face problems in mastering multiplication facts. This study aims to identify the effectiveness of the Magic Finger Teaching Method (MFTM) and students' perception towards MFTM. The research employs a quasi-experimental, non-equivalent pre-test and post-test control group design. Pre-tests, post-tests and questionnaires were used. As…

  1. A Simple and Convenient Method of Multiple Linear Regression to Calculate Iodine Molecular Constants

    Science.gov (United States)

    Cooper, Paul D.

    2010-01-01

    A new procedure using a student-friendly least-squares multiple linear-regression technique utilizing a function within Microsoft Excel is described that enables students to calculate molecular constants from the vibronic spectrum of iodine. This method is advantageous pedagogically as it calculates molecular constants for ground and excited…
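    The regression itself can be sketched outside Excel; the band positions below are synthetic data generated from hypothetical constants, not the measured iodine spectrum, and LINEST is replaced by a least-squares solve:

```python
import numpy as np

# Vibronic-progression regression sketch: model band positions as
#   nu(v') = a0 + a1*(v' + 1/2) + a2*(v' + 1/2)**2,
# so multiple linear regression on (v'+1/2) and (v'+1/2)^2 recovers
# omega_e' (a1) and omega_e'x_e' (-a2), as LINEST would in Excel.
we, wexe, T0 = 125.0, 0.75, 15600.0      # hypothetical excited-state constants
v = np.arange(15, 46)                    # vibrational quantum numbers
x = v + 0.5
nu = T0 + we * x - wexe * x**2           # synthetic band positions (cm^-1)

M = np.column_stack([np.ones_like(x), x, x**2])
a0, a1, a2 = np.linalg.lstsq(M, nu, rcond=None)[0]   # a1 ~ we, a2 ~ -wexe
```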

  2. Non-Abelian Kubo formula and the multiple time-scale method

    International Nuclear Information System (INIS)

    Zhang, X.; Li, J.

    1996-01-01

    The non-Abelian Kubo formula is derived from kinetic theory. That expression is compared with the one obtained using the eikonal for a Chern-Simons theory. The multiple time-scale method is used to study the non-Abelian Kubo formula, and the damping rate for longitudinal color waves is computed. copyright 1996 Academic Press, Inc

  3. Creep compliance and percent recovery of Oklahoma certified binder using the multiple stress creep recovery (MSCR) method.

    Science.gov (United States)

    2015-04-01

    A laboratory study was conducted to develop guidelines for the Multiple Stress Creep Recovery (MSCR) test method for local conditions prevailing in Oklahoma. The study considered commonly used binders in Oklahoma, namely PG 64-22, PG 70-28, and...

  4. Combining qualitative and quantitative operational research methods to inform quality improvement in pathways that span multiple settings.

    Science.gov (United States)

    Crowe, Sonya; Brown, Katherine; Tregay, Jenifer; Wray, Jo; Knowles, Rachel; Ridout, Deborah A; Bull, Catherine; Utley, Martin

    2017-08-01

    Improving integration and continuity of care across sectors within resource constraints is a priority in many health systems. Qualitative operational research methods of problem structuring have been used to address quality improvement in services involving multiple sectors but not in combination with quantitative operational research methods that enable targeting of interventions according to patient risk. We aimed to combine these methods to augment and inform an improvement initiative concerning infants with congenital heart disease (CHD) whose complex care pathway spans multiple sectors. Soft systems methodology was used to consider systematically changes to services from the perspectives of community, primary, secondary and tertiary care professionals and a patient group, incorporating relevant evidence. Classification and regression tree (CART) analysis of national audit datasets was conducted along with data visualisation designed to inform service improvement within the context of limited resources. A 'Rich Picture' was developed capturing the main features of services for infants with CHD pertinent to service improvement. This was used, along with a graphical summary of the CART analysis, to guide discussions about targeting interventions at specific patient risk groups. Agreement was reached across representatives of relevant health professions and patients on a coherent set of targeted recommendations for quality improvement. These fed into national decisions about service provision and commissioning. When tackling complex problems in service provision across multiple settings, it is important to acknowledge and work with multiple perspectives systematically and to consider targeting service improvements in response to confined resources. Our research demonstrates that applying a combination of qualitative and quantitative operational research methods is one approach to doing so that warrants further consideration. 
Published by the BMJ Publishing Group

  5. Interconnection blocks: a method for providing reusable, rapid, multiple, aligned and planar microfluidic interconnections

    International Nuclear Information System (INIS)

    Sabourin, D; Snakenborg, D; Dufva, M

    2009-01-01

    In this paper a method is presented for creating 'interconnection blocks' that are re-usable and provide multiple, aligned and planar microfluidic interconnections. Interconnection blocks made from polydimethylsiloxane allow rapid testing of microfluidic chips and unobstructed microfluidic observation. The interconnection block method is scalable, flexible and supports high interconnection density. The average pressure limit of the interconnection block was near 5.5 bar, and all individual results were well above the 2 bar threshold considered applicable to most microfluidic applications.

  6. Clustering Multiple Sclerosis Subgroups with Multifractal Methods and Self-Organizing Map Algorithm

    Science.gov (United States)

    Karaca, Yeliz; Cattani, Carlo

    Magnetic resonance imaging (MRI) is the most sensitive method to detect chronic nervous system diseases such as multiple sclerosis (MS). In this paper, multifractal methods based on Brownian motion Hölder regularity functions (polynomial, periodic (sine) and exponential) for 2D images were applied to MR brain images, aiming to easily identify distressed regions in MS patients. With these regions, we have proposed an MS classification based on the multifractal method using the Self-Organizing Map (SOM) algorithm. Thus, we obtained a cluster analysis by identifying pixels from distressed regions in MR images through multifractal methods and by diagnosing subgroups of MS patients through artificial neural networks.

  7. Total System Performance Assessment - License Application Methods and Approach

    Energy Technology Data Exchange (ETDEWEB)

    J. McNeish

    2003-12-08

    ''Total System Performance Assessment-License Application (TSPA-LA) Methods and Approach'' provides the top-level method and approach for conducting the TSPA-LA model development and analyses. The method and approach is responsive to the criteria set forth in Total System Performance Assessment Integration (TSPAI) Key Technical Issues (KTIs) identified in agreements with the U.S. Nuclear Regulatory Commission, the ''Yucca Mountain Review Plan'' (YMRP), ''Final Report'' (NRC 2003 [163274]), and the NRC final rule 10 CFR Part 63 (NRC 2002 [156605]). This introductory section provides an overview of the TSPA-LA, the projected TSPA-LA documentation structure, and the goals of the document. It also provides a brief discussion of the regulatory framework, the approach to risk management of the development and analysis of the model, and the overall organization of the document. The section closes with some important conventions that are used in this document.

  8. A PRACTICAL APPROACH TO THE GROUND OSCILLATION VELOCITY MEASUREMENT METHOD

    Directory of Open Access Journals (Sweden)

    Siniša Stanković

    2017-01-01

    Full Text Available The use of an explosive’s energy during blasting produces undesired effects on the environment. The seismic influence of a blast, as a major undesired effect, is addressed by many national standards, recommendations and calculations in which the main parameter is the ground oscillation velocity at the measurement location. There are a few approaches and methods for calculating the expected ground oscillation velocity from the charge weight per delay and the distance from the blast to the point of interest. These methods and formulas do not provide satisfactory results, so values measured at various distances from the blast field differ more or less from the calculated values. Since blasting works are executed in diverse geological conditions, the aim of this research is the development of a practical and reliable approach which gives a different model for each construction site where blasting works have been or will be executed. The approach is based on a larger number of measuring points placed in a line from the blast field at predetermined distances. This new approach was compared with other generally used methods and formulas using measurements taken during the research along with measurements from several previously executed projects. The results confirmed that the suggested model gives more accurate values.
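
The attenuation laws this record refers to typically take the scaled-distance form v = k (R/sqrt(Q))^(-b), which becomes linear in log-log space and can be fitted per site by ordinary least squares. A sketch with noise-free synthetic data (the k and b values are arbitrary, not from this study):

```python
import math

def fit_attenuation(distances, charges, velocities):
    """Least-squares fit of the scaled-distance attenuation law
    v = k * (R / sqrt(Q))**(-b), fitted in log-log space."""
    xs = [math.log(r / math.sqrt(q)) for r, q in zip(distances, charges)]
    ys = [math.log(v) for v in velocities]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    b = -slope                     # attenuation exponent
    k = math.exp(my + b * mx)      # site constant
    return k, b

# Synthetic measurements generated from k=700, b=1.6 (no noise):
# R = distance (m), Q = charge weight per delay (kg), v = velocity (mm/s).
R = [50, 100, 150, 200, 300]
Q = [25, 25, 50, 50, 100]
v = [700 * (r / math.sqrt(q)) ** -1.6 for r, q in zip(R, Q)]
k, b = fit_attenuation(R, Q, v)
print(round(k, 1), round(b, 2))  # → 700.0 1.6
```

Fitting k and b separately for each construction site, from measuring points in a line at predetermined distances, is the essence of the per-site model the record proposes.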

  9. Parametric optimization of multiple quality characteristics in laser cutting of Inconel-718 by using hybrid approach of multiple regression analysis and genetic algorithm

    Science.gov (United States)

    Shrivastava, Prashant Kumar; Pandey, Arun Kumar

    2018-06-01

    Inconel-718 has found high demand in different industries due to its superior mechanical properties. Traditional cutting methods face difficulties in cutting this alloy due to its low thermal potential, lower elasticity and high chemical compatibility at elevated temperature. The challenges of machining and/or finishing unusual shapes and/or sizes in these materials have also been faced by traditional machining. Laser beam cutting may be applied for miniaturization and ultra-precision cutting and/or finishing by appropriate control of the different process parameters. This paper presents multi-objective optimization of the kerf deviation, kerf width and kerf taper in the laser cutting of Inconel-718 sheet. Second-order regression models have been developed for the different quality characteristics using the experimental data obtained through experimentation. The regression models have been used as objective functions for multi-objective optimization based on the hybrid approach of multiple regression analysis and genetic algorithm. The comparison of the optimization results to the experimental results shows improvements of 88%, 10.63% and 42.15% in kerf deviation, kerf width and kerf taper, respectively. Finally, the effects of the different process parameters on the quality characteristics have also been discussed.
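
The hybrid regression/GA approach can be illustrated with a toy example: quadratic surrogates stand in for the paper's second-order regression models, and a simple real-coded genetic algorithm minimises their weighted sum. All coefficients, weights and parameter names below are hypothetical.

```python
import random

# Hypothetical second-order regression surrogates for two kerf quality
# characteristics as functions of a normalised parameter vector
# p = (power, speed); the coefficients are illustrative only.
def kerf_width(p):
    return 0.2 + 0.5 * (p[0] - 0.3) ** 2 + 0.3 * (p[1] - 0.7) ** 2

def kerf_taper(p):
    return 0.1 + 0.4 * (p[0] - 0.5) ** 2 + 0.6 * (p[1] - 0.4) ** 2

def fitness(p, w=(0.5, 0.5)):
    """Weighted-sum scalarisation of the two quality characteristics."""
    return w[0] * kerf_width(p) + w[1] * kerf_taper(p)

def genetic_minimise(f, pop_size=40, gens=60, seed=3):
    """Tiny real-coded GA: truncation selection, arithmetic crossover,
    Gaussian mutation."""
    random.seed(seed)
    pop = [(random.random(), random.random()) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f)
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = tuple((x + y) / 2 + random.gauss(0, 0.05)
                          for x, y in zip(a, b))
            children.append(child)
        pop = parents + children
    return min(pop, key=f)

best = genetic_minimise(fitness)
```

With these surrogates the analytic optimum lies near p = (0.39, 0.5); the GA converges to its neighbourhood.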

  10. Modified multiple time scale method for solving strongly nonlinear damped forced vibration systems

    Science.gov (United States)

    Razzak, M. A.; Alam, M. Z.; Sharif, M. N.

    2018-03-01

    In this paper, a modified multiple time scale (MTS) method is employed to solve strongly nonlinear forced vibration systems. Only the first-order approximation is considered in order to avoid complexity. The formulation and the determination of the solution procedure are easy and straightforward. The classical multiple time scale (MS) method and the multiple scales Lindstedt-Poincare method (MSLP) do not give the desired results for strongly nonlinear forced vibration systems with strong damping effects. The main aim of this paper is to remove these limitations. Two examples are considered to illustrate the effectiveness and convenience of the present procedure. The approximate external frequencies and the corresponding approximate solutions are determined by the present method. The results coincide well with the corresponding numerical solutions (considered to be exact) and are better than other existing results. For weak nonlinearities with a weak damping effect, the absolute relative error (first-order approximate external frequency) in this paper is only 0.07% when the amplitude A = 1.5, while the relative error given by the MSLP method is surprisingly 28.81%. Furthermore, for strong nonlinearities with a strong damping effect, the absolute relative error found in this article is only 0.02%, whereas the relative error obtained by the MSLP method is 24.18%. Therefore, the present method is not only valid for weakly nonlinear damped forced systems, but also gives better results for strongly nonlinear systems with both small and strong damping effects.

  11. Budgetary Approach to Project Management by Percentage of Completion Method

    Directory of Open Access Journals (Sweden)

    Leszek Borowiec

    2011-07-01

    Full Text Available An efficient and effective project management process is made possible by the use of project management methods and techniques. The aim of this paper is to present the problems of project management using the Percentage of Completion (POC) method. The research material was gathered from experience in implementing this method at the Johnson Controls International Company. The article attempts to demonstrate the validity of the thesis that the POC project management method allows for effective implementation and monitoring of a project and is thus an effective tool in the management of companies that apply the budgetary approach. The study presents the planning process for the basic parameters affecting the effectiveness of a project (such as costs, revenue and margin) and characterizes the primary measurements used to evaluate it. The theme is illustrated by numerous examples showing the essence of the problems raised, and the results are presented using descriptive, graphical and tabular methods.
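
The cost-to-cost variant of the percentage-of-completion method reduces to a few arithmetic steps, sketched here with hypothetical contract figures (not the company's data):

```python
def poc_report(budget_cost, contract_price, cost_to_date):
    """Percentage-of-completion figures for one reporting date.

    Cost-to-cost method: completion = cost incurred / total budgeted cost;
    recognised revenue and margin follow proportionally.
    """
    completion = cost_to_date / budget_cost
    revenue_recognised = completion * contract_price
    margin = revenue_recognised - cost_to_date
    return completion, revenue_recognised, margin

# Hypothetical project: 800k budgeted cost, 1M contract price,
# 200k of cost incurred so far.
completion, revenue, margin = poc_report(
    budget_cost=800_000, contract_price=1_000_000, cost_to_date=200_000)
print(completion, revenue, margin)  # → 0.25 250000.0 50000.0
```

Comparing these recognised figures against the budget at each reporting date is what makes POC usable for ongoing project monitoring.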

  12. Assessing and evaluating multidisciplinary translational teams: a mixed methods approach.

    Science.gov (United States)

    Wooten, Kevin C; Rose, Robert M; Ostir, Glenn V; Calhoun, William J; Ameredes, Bill T; Brasier, Allan R

    2014-03-01

    A case report illustrates how multidisciplinary translational teams can be assessed using outcome, process, and developmental types of evaluation using a mixed-methods approach. Types of evaluation appropriate for teams are considered in relation to relevant research questions and assessment methods. Logic models are applied to scientific projects and team development to inform choices between methods within a mixed-methods design. Use of an expert panel is reviewed, culminating in consensus ratings of 11 multidisciplinary teams and a final evaluation within a team-type taxonomy. Based on team maturation and scientific progress, teams were designated as (a) early in development, (b) traditional, (c) process focused, or (d) exemplary. Lessons learned from data reduction, use of mixed methods, and use of expert panels are explored.

  13. A multiplicative environmental DEA approach to measure efficiency changes in the world's major polluters

    International Nuclear Information System (INIS)

    Valadkhani, Abbas; Roshdi, Israfil; Smyth, Russell

    2016-01-01

    We propose a multiplicative environmental data envelopment analysis (ME-DEA) approach to measure the performance of 46 countries that generate most of the world's carbon dioxide (CO_2) emissions. In the model, we combine economic (labour and capital), environmental (freshwater) and energy inputs with a desirable output (GDP) and three undesirable outputs (CO_2, methane and nitrous oxide emissions). We rank each country according to the optimum use of its resources employing a multiplicative extension of environmental DEA models. By computing partial efficiency scores for each input and output separately, we thus identify major sources of inefficiency for all sample countries. Based on the partial efficiency scores obtained from the model, we define aggregate economic, energy and environmental efficiency indexes for 2002, 2007 and 2011, reflecting points in time before and after the official enactment of the Kyoto Protocol. We find that for most countries efficiency scores increase over this period. In addition, there exists a positive relationship between economic and environmental efficiency, although, at the same time, our results suggest that environmental efficiency cannot be realized without first reaching a certain threshold of economic efficiency. We also find support for the Paradox of Plenty, whereby an abundance of natural and energy resources results in their inefficient use. - Highlights: • This study proposes a multiplicative extension of environmental DEA models. • We examine how countries utilize energy, labour, capital and freshwater over time. • We measure how efficiently countries minimize the emissions of greenhouse gases. • Results support the Paradox of Plenty among 46 countries in 2002, 2007 and 2011. • Countries richest in oil and gas exhibited the worst energy efficiency.

  14. A New Approach to Adaptive Control of Multiple Scales in Plasma Simulations

    Science.gov (United States)

    Omelchenko, Yuri

    2007-04-01

    A new approach to temporal refinement of kinetic (Particle-in-Cell, Vlasov) and fluid (MHD, two-fluid) simulations of plasmas is presented: Discrete-Event Simulation (DES). DES adaptively distributes CPU resources in accordance with local time scales and enables asynchronous integration of inhomogeneous nonlinear systems with multiple time scales on meshes of arbitrary topologies. This removes computational penalties usually incurred in explicit codes due to the global Courant-Friedrichs-Lewy (CFL) restriction on a time-step size. DES stands apart from multiple time-stepping algorithms in that it requires neither selecting a global synchronization time step nor pre-determining a sequence of time-integration operations for individual parts of the system (local time increments need not bear any integer multiple relations). Instead, elements of a mesh-distributed solution self-adaptively predict and synchronize their temporal trajectories by directly enforcing local causality (accuracy) constraints, which are formulated in terms of incremental changes to the evolving solution. Together with flux-conservative propagation of information, this new paradigm ensures stable and fast asynchronous runs, where idle computation is automatically eliminated. DES is parallelized via a novel Preemptive Event Processing (PEP) technique, which automatically synchronizes elements with similar update rates. In this mode, events with close execution times are projected onto time levels, which are adaptively determined by the program. PEP allows reuse of standard message-passing algorithms on distributed architectures. For optimum accuracy, DES can be combined with adaptive mesh refinement (AMR) techniques for structured and unstructured meshes. Current examples of event-driven models range from electrostatic, hybrid particle-in-cell plasma systems to reactive fluid dynamics simulations. They demonstrate the superior performance of DES in terms of accuracy, speed and robustness.
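
The core idea of discrete-event time integration can be sketched with a priority queue: each element schedules its own next update, so no global synchronization step exists and no idle updates are performed. This toy version uses fixed local rates; the actual DES adapts them from local causality constraints.

```python
import heapq

def discrete_event_run(rates, t_end):
    """Asynchronously advance elements, each with its own local time step
    dt = 1/rate, using a priority queue keyed on the next update time."""
    # Each event is (next_update_time, element_id).
    queue = [(1.0 / r, i) for i, r in enumerate(rates)]
    heapq.heapify(queue)
    updates = [0] * len(rates)
    while queue:
        t, i = heapq.heappop(queue)
        if t > t_end:
            break  # heap order: every remaining event is even later
        updates[i] += 1
        heapq.heappush(queue, (t + 1.0 / rates[i], i))
    return updates

# Three elements whose local time scales differ by factors of 4: the fast
# element is updated often, the slow one rarely, with no global time step.
print(discrete_event_run([4.0, 1.0, 0.25], t_end=10.0))  # → [40, 10, 2]
```

Making each element pay only for its own time scale is exactly how DES removes the penalty of a single global CFL-limited step.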

  15. A Benefit-Risk Analysis Approach to Capture Regulatory Decision-Making: Multiple Myeloma.

    Science.gov (United States)

    Raju, G K; Gurumurthi, Karthik; Domike, Reuben; Kazandjian, Dickran; Landgren, Ola; Blumenthal, Gideon M; Farrell, Ann; Pazdur, Richard; Woodcock, Janet

    2018-01-01

    Drug regulators around the world make decisions about drug approvability based on qualitative benefit-risk analysis. In this work, a quantitative benefit-risk analysis approach captures regulatory decision-making about new drugs to treat multiple myeloma (MM). MM assessments have been based on endpoints such as time to progression (TTP), progression-free survival (PFS), and objective response rate (ORR), which differ from benefit-risk analysis based on overall survival (OS). Twenty-three FDA decisions on MM drugs submitted to FDA between 2003 and 2016 were identified and analyzed. The benefits and risks were quantified relative to comparators (typically the control arm of the clinical trial) to estimate whether the median benefit-risk was positive or negative. A sensitivity analysis was demonstrated using ixazomib to explore the magnitude of uncertainty. FDA approval decision outcomes were consistent and logical using this benefit-risk framework. © 2017 American Society for Clinical Pharmacology and Therapeutics.

  16. An efficient multiple particle filter based on the variational Bayesian approach

    KAUST Repository

    Ait-El-Fquih, Boujemaa

    2015-12-07

    This paper addresses the filtering problem in large-dimensional systems, in which conventional particle filters (PFs) remain computationally prohibitive owing to the large number of particles needed to obtain reasonable performances. To overcome this drawback, a class of multiple particle filters (MPFs) has been recently introduced in which the state-space is split into low-dimensional subspaces, and then a separate PF is applied to each subspace. In this paper, we adopt the variational Bayesian (VB) approach to propose a new MPF, the VBMPF. The proposed filter is computationally more efficient since the propagation of each particle requires generating one (new) particle only, while in the standard MPFs a set of (children) particles needs to be generated. In a numerical test, the proposed VBMPF behaves better than the PF and MPF.

  17. Stepwise approach to establishing multiple outreach laboratory information system-electronic medical record interfaces

    Directory of Open Access Journals (Sweden)

    Liron Pantanowitz

    2010-01-01

    Full Text Available Clinical laboratory outreach business is changing as more physician practices adopt an electronic medical record (EMR). Physician connectivity with the laboratory information system (LIS) is consequently becoming more important. However, there are no reports available to assist the informatician with establishing and maintaining outreach LIS-EMR connectivity. A four-stage scheme is presented that was successfully employed to establish unidirectional and bidirectional interfaces with multiple physician EMRs. This approach involves planning (step 1), followed by interface building (step 2), subsequent testing (step 3), and finally ongoing maintenance (step 4). The role of organized project management, software as a service (SaaS), and alternate solutions for outreach connectivity are discussed.

  18. Material Selection for Dye Sensitized Solar Cells Using Multiple Attribute Decision Making Approach

    Directory of Open Access Journals (Sweden)

    Sarita Baghel

    2014-01-01

    Full Text Available Dye sensitized solar cells (DSCs) provide a potential alternative to conventional p-n junction photovoltaic devices. The semiconductor thin film plays a crucial role in the working of a DSC. This paper aims at formulating a process for the selection of the optimum semiconductor material for the nanostructured thin film using a multiple attribute decision making (MADM) approach. Various available semiconducting materials and their properties, such as band gap, cost, mobility, rate of electron injection, and static dielectric constant, are considered, and the MADM technique is applied to select the best suited material. It was found that, out of all possible candidates, titanium dioxide (TiO2) is the best semiconductor material for application in DSCs. It was observed that the proposed results are in good agreement with the experimental findings.
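
One widely used MADM technique for this kind of material ranking is TOPSIS; the record does not state which MADM technique the paper applied, so this is an illustrative stand-in. The attribute values below are invented, and with these numbers the TiO2 row happens to rank first, echoing the paper's conclusion.

```python
import math

def topsis(matrix, weights, benefit):
    """TOPSIS ranking: closeness of each alternative to the ideal solution.

    matrix[i][j] = score of alternative i on attribute j;
    benefit[j] = True if larger is better for attribute j.
    """
    m, n = len(matrix), len(matrix[0])
    # Vector-normalise each attribute column, then apply the weights.
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_plus = math.dist(row, ideal)    # distance to ideal solution
        d_minus = math.dist(row, worst)   # distance to anti-ideal solution
        scores.append(d_minus / (d_plus + d_minus))
    return scores

# Illustrative (not measured) attribute values for three hypothetical
# candidates: [band gap / eV, mobility, relative cost].
matrix = [[3.2, 1.0, 2.0],   # TiO2  (invented numbers)
          [3.4, 0.6, 4.0],   # candidate B (invented numbers)
          [3.0, 0.5, 5.0]]   # candidate C (invented numbers)
weights = [0.4, 0.3, 0.3]
benefit = [True, True, False]  # cost is a cost-type attribute
scores = topsis(matrix, weights, benefit)
print(scores.index(max(scores)))  # → 0 (TiO2 first with these numbers)
```

The alternative with the largest closeness coefficient is the recommended material.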

  19. Use of multiple methods to determine factors affecting quality of care of patients with diabetes.

    Science.gov (United States)

    Khunti, K

    1999-10-01

    The process of care of patients with diabetes is complex; however, GPs are playing a greater role in its management. Despite the research evidence, the quality of care of patients with diabetes is variable. In order to improve care, information is required on the obstacles faced by practices in improving care. Qualitative and quantitative methods can be used for formation of hypotheses and the development of survey procedures. However, to date few examples exist in general practice research on the use of multiple methods using both quantitative and qualitative techniques for hypothesis generation. We aimed to determine information on all factors that may be associated with delivery of care to patients with diabetes. Factors for consideration on delivery of diabetes care were generated by multiple qualitative methods including brainstorming with health professionals and patients, a focus group and interviews with key informants which included GPs and practice nurses. Audit data showing variations in care of patients with diabetes were used to stimulate the brainstorming session. A systematic literature search focusing on quality of care of patients with diabetes in primary care was also conducted. Fifty-four potential factors were identified by multiple methods. Twenty (37.0%) were practice-related factors, 14 (25.9%) were patient-related factors and 20 (37.0%) were organizational factors. A combination of brainstorming and the literature review identified 51 (94.4%) factors. Patients did not identify factors in addition to those identified by other methods. The complexity of delivery of care to patients with diabetes is reflected in the large number of potential factors identified in this study. This study shows the feasibility of using multiple methods for hypothesis generation. Each evaluation method provided unique data which could not otherwise be easily obtained. 
This study highlights a way of combining various traditional methods in an attempt to overcome the

  20. Multiple-scattering formalism for correlated systems: A KKR-DMFT approach

    International Nuclear Information System (INIS)

    Minar, J.; Perlov, A.; Ebert, H.; Chioncel, L.; Katsnelson, M. I.; Lichtenstein, A.I.

    2005-01-01

    We present a charge and self-energy self-consistent computational scheme for correlated systems based on the Korringa-Kohn-Rostoker (KKR) multiple scattering theory, with the many-body effects described by means of dynamical mean field theory (DMFT). The corresponding local multiorbital and energy-dependent self-energy is included in the set of radial differential equations for the single-site wave functions. The KKR Green's function is written in terms of the multiple scattering path operator, the latter being evaluated using the single-site solution for the t-matrix, which in turn is determined by the wave functions. An appealing feature of this approach is that it allows one to consider local quantum and disorder fluctuations on the same footing. Within the coherent potential approximation (CPA), the correlated atoms are placed into a combined effective medium determined by the DMFT self-consistency condition. Results of corresponding calculations for pure Fe, Ni, and Fe_xNi_{1-x} alloys are presented.

  1. Systematic Analysis of the Multiple Bioactivities of Green Tea through a Network Pharmacology Approach

    Directory of Open Access Journals (Sweden)

    Shoude Zhang

    2014-01-01

    Full Text Available During the past decades, a number of studies have demonstrated multiple beneficial health effects of green tea. Polyphenolics are the most biologically active components of green tea, and many targets can be affected by them. In this study, we collected all of the targets of green tea polyphenolics (GTPs) through literature mining and target calculation, and comprehensively analyzed the multiple pharmacological actions of green tea through a network pharmacology approach. In the end, a total of 200 Homo sapiens targets were identified for fifteen GTPs. These targets were classified into six groups according to their related diseases, which included cancer, diabetes, neurodegenerative disease, cardiovascular disease, muscular disease, and inflammation. Moreover, these targets mapped into 143 KEGG pathways, 26 of which were more enriched, as determined through pathway enrichment analysis and target-pathway network analysis. Among the identified pathways, 20 pathways were selected for analyzing the mechanisms of green tea in these diseases. Overall, this study systematically illustrated the mechanisms of the pleiotropic activity of green tea by analyzing the corresponding “drug-target-pathway-disease” interaction network.
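
Pathway enrichment of the kind described is commonly assessed with a one-sided hypergeometric test: how surprising is the observed overlap between the target list and a pathway's gene set? A sketch with hypothetical gene counts (the record's exact analysis parameters are not given):

```python
from math import comb

def enrichment_pvalue(N, K, n, k):
    """One-sided hypergeometric p-value: probability of observing at least
    k pathway genes among n targets, when K of the N background genes
    belong to the pathway."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# Hypothetical example: 200 targets drawn from a 20000-gene background;
# a pathway of 100 genes contains 8 of the targets (expected overlap ~1).
p = enrichment_pvalue(N=20000, K=100, n=200, k=8)
print(p < 0.05)  # → True
```

Pathways whose p-values survive a multiple-testing correction are the "more enriched" ones retained for mechanism analysis.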

  2. A quantitative approach for integrating multiple lines of evidence for the evaluation of environmental health risks

    Directory of Open Access Journals (Sweden)

    Jerome J. Schleier III

    2015-01-01

    Full Text Available Decision analysis often considers multiple lines of evidence during the decision-making process. Researchers and government agencies have advocated quantitative weight-of-evidence approaches in which multiple lines of evidence can be considered when estimating risk. We therefore utilized Bayesian Markov chain Monte Carlo to integrate several human-health risk assessment, biomonitoring, and epidemiology studies that have been conducted for two common insecticides (malathion and permethrin) used for adult mosquito management, to generate an overall estimate of the risk quotient (RQ). The utility of Bayesian inference for risk management is that the estimated risk represents a probability distribution from which the probability of exceeding a threshold can be estimated. After all studies were incorporated, the mean RQ was 0.4386 (variance 0.0163) for malathion and 0.3281 (variance 0.0083) for permethrin. After taking into account all of the evidence available on the risks of ULV insecticides, the probability that malathion or permethrin would exceed a level of concern was less than 0.0001. Bayesian estimates can substantially improve decisions by allowing decision makers to estimate the probability that a risk will exceed a level of concern by considering seemingly disparate lines of evidence.
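
A minimal sketch of how MCMC can pool study-level risk-quotient estimates into one posterior, assuming normal likelihoods with known standard errors and a flat prior; the study estimates and errors below are invented, not the paper's data:

```python
import math, random

def log_posterior(mu, estimates, ses):
    """Log-posterior of the true risk quotient given independent study
    estimates with known standard errors (flat prior on mu)."""
    return sum(-0.5 * ((mu - e) / s) ** 2 for e, s in zip(estimates, ses))

def metropolis(estimates, ses, n=20000, step=0.05, seed=7):
    """Random-walk Metropolis sampler for the pooled RQ."""
    random.seed(seed)
    mu = sum(estimates) / len(estimates)
    lp = log_posterior(mu, estimates, ses)
    samples = []
    for _ in range(n):
        prop = mu + random.gauss(0, step)
        lp_prop = log_posterior(prop, estimates, ses)
        if math.log(random.random()) < lp_prop - lp:  # accept/reject
            mu, lp = prop, lp_prop
        samples.append(mu)
    return samples[n // 2:]  # discard burn-in

# Hypothetical RQ estimates from three lines of evidence
# (risk assessment, biomonitoring, epidemiology).
samples = metropolis(estimates=[0.40, 0.45, 0.47], ses=[0.10, 0.08, 0.12])
# Probability that the pooled RQ exceeds a level of concern of 1.0.
p_exceed = sum(s > 1.0 for s in samples) / len(samples)
```

Reading exceedance probabilities directly off the posterior samples is exactly the decision-relevant quantity the record highlights.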

  3. A mixed-methods approach to systematic reviews.

    Science.gov (United States)

    Pearson, Alan; White, Heath; Bath-Hextall, Fiona; Salmond, Susan; Apostolo, Joao; Kirkpatrick, Pamela

    2015-09-01

    There are an increasing number of published single-method systematic reviews that focus on different types of evidence related to a particular topic. As policy makers and practitioners seek clear directions for decision-making from systematic reviews, it is likely to become increasingly difficult for them to identify 'what to do' if they are required to find and understand a plethora of syntheses related to a particular topic. Mixed-methods systematic reviews are designed to address this issue and have the potential to produce systematic reviews of direct relevance to policy makers and practitioners. On the basis of the recommendations of the Joanna Briggs Institute International Mixed Methods Reviews Methodology Group in 2012, the Institute adopted a segregated approach to mixed-methods synthesis, as described by Sandelowski et al., which consists of separate syntheses of each component method of the review. The Joanna Briggs Institute's mixed-methods synthesis of the findings of the separate syntheses uses a Bayesian approach to translate the findings of the initial quantitative synthesis into qualitative themes and pool these with the findings of the initial qualitative synthesis.

  4. Capturing the experiences of patients across multiple complex interventions: a meta-qualitative approach.

    Science.gov (United States)

    Webster, Fiona; Christian, Jennifer; Mansfield, Elizabeth; Bhattacharyya, Onil; Hawker, Gillian; Levinson, Wendy; Naglie, Gary; Pham, Thuy-Nga; Rose, Louise; Schull, Michael; Sinha, Samir; Stergiopoulos, Vicky; Upshur, Ross; Wilson, Lynn

    2015-09-08

    The perspectives, needs and preferences of individuals with complex health and social needs can be overlooked in the design of healthcare interventions. This study was designed to provide new insights on patient perspectives drawing from the qualitative evaluation of 5 complex healthcare interventions. Patients and their caregivers were recruited from 5 interventions based in primary, hospital and community care in Ontario, Canada. We included 62 interviews from 44 patients and 18 non-clinical caregivers. Our team analysed the transcripts from 5 distinct projects. This approach to qualitative meta-evaluation identifies common issues described by a diverse group of patients, therefore providing potential insights into systems issues. This study is a secondary analysis of qualitative data; therefore, no outcome measures were identified. We identified 5 broad themes that capture the patients' experience and highlight issues that might not be adequately addressed in complex interventions. In our study, we found that: (1) the emergency department is the unavoidable point of care; (2) patients and caregivers are part of complex and variable family systems; (3) non-medical issues mediate patients' experiences of health and healthcare delivery; (4) the unanticipated consequences of complex healthcare interventions are often the most valuable; and (5) patient experiences are shaped by the healthcare discourses on medically complex patients. Our findings suggest that key assumptions about patients that inform intervention design need to be made explicit in order to build capacity to better understand and support patients with multiple chronic diseases. Across many health systems internationally, multiple models are being implemented simultaneously that may have shared features and target similar patients, and a qualitative meta-evaluation approach, thus offers an opportunity for cumulative learning at a system level in addition to informing intervention design and

  5. Hybrid approaches for multiple-species stochastic reaction–diffusion models

    International Nuclear Information System (INIS)

    Spill, Fabian; Guerrero, Pilar; Alarcon, Tomas; Maini, Philip K.; Byrne, Helen

    2015-01-01

    Reaction–diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction–diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model. - Highlights: • A novel hybrid stochastic/deterministic reaction–diffusion simulation method is given. • Can massively speed up stochastic simulations while preserving stochastic effects. • Can handle multiple reacting species. • Can handle moving boundaries


  7. A new approach for peat inventory methods; Turvetutkimusten menetelmäkehitystarkastelu

    Energy Technology Data Exchange (ETDEWEB)

    Laatikainen, M.; Leino, J.; Lerssi, J.; Torppa, J.; Turunen, J. Email: jukka.turunen@gtk.fi

    2011-07-01

    Development of the new peatland inventory method started in 2009. There was a need to investigate whether new methods and tools could be developed cost-effectively, so that field inventory work would more completely cover the whole peatland area while the quality and reliability of the final results remained at a high level. The old inventory method in place at the Geological Survey of Finland (GTK) is based on the main-transect and cross-transect approach across a peatland area. The goal of this study was to find a practical grid-based method, linked to the geographic information system, that is suitable for field conditions. The triangle-grid method, with an even distance between the study points, was found to be the most suitable approach. A new Ramac ground penetrating radar was obtained by the GTK in 2009 and was included in the study of new peatland inventory methods. This radar model is relatively light and very well suited, for example, to forestry-drained peatlands, which are often difficult to cross because of the intensive ditch network. The goal was to investigate the best working methods for the ground penetrating radar in order to optimize its use in large-scale peatland inventory. Together with the new field inventory methods, a novel interpolation-based method (MITTI) for modelling peat depths was developed. MITTI makes it possible to take advantage of all the available peat-depth data, at present including aerogeophysical and ground penetrating radar measurements, drilling data and the mire outline. The characteristic uncertainties of each data type are taken into account and, in addition to the depth model itself, an uncertainty map of the model is computed. Combined with the grid-based field inventory method, this multi-method approach provides better tools to estimate peat depths, peat amounts and peat type distributions more accurately. The development of the new peatland inventory method was divided into four separate sections: (1) Development of new field

  8. Public Transportation Hub Location with Stochastic Demand: An Improved Approach Based on Multiple Attribute Group Decision-Making

    Directory of Open Access Journals (Sweden)

    Sen Liu

    2015-01-01

    Full Text Available Urban public transportation hubs are the key nodes of the public transportation system. The location of such hubs is a combinatorial problem. Many factors, both quantitative and qualitative, can affect the location decision; however, most current research focuses solely on either the quantitative or the qualitative factors. Little has been done to combine these two approaches. To fill this gap in the research, this paper proposes a novel approach to the public transportation hub location problem, which takes both quantitative and qualitative factors into account. In this paper, an improved multiple attribute group decision-making (MAGDM) method based on TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) and deviation is proposed to convert the qualitative factors of each hub into quantitative evaluation values. A location model with stochastic passenger flows is then established based on the above evaluation values. Finally, stochastic programming theory is applied to solve the model and to determine the location result. A numerical study shows that this approach is applicable and effective.
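    The TOPSIS step mentioned above can be sketched in a few lines (the scores, weights, and benefit-type criteria below are made up; the paper's improved MAGDM method additionally incorporates group judgments and deviation measures): each alternative's closeness to the ideal solution is computed from its distances to the best and worst attainable profiles.

```python
import numpy as np

# Hypothetical data: 4 candidate hub sites scored on 3 benefit-type criteria
# (rows = alternatives, columns = criteria).
X = np.array([[7.0, 5.0, 8.0],
              [6.0, 9.0, 4.0],
              [8.0, 6.0, 6.0],
              [5.0, 7.0, 7.0]])
w = np.array([0.5, 0.3, 0.2])              # criteria weights (assumed)

R = X / np.linalg.norm(X, axis=0)          # vector normalisation
V = R * w                                  # weighted normalised matrix
v_best = V.max(axis=0)                     # positive ideal solution
v_worst = V.min(axis=0)                    # negative ideal solution
d_best = np.linalg.norm(V - v_best, axis=1)
d_worst = np.linalg.norm(V - v_worst, axis=1)
closeness = d_worst / (d_best + d_worst)   # in [0, 1]; higher = better site
ranking = np.argsort(-closeness)           # alternatives ordered best to worst
```

The closeness coefficients are the quantitative evaluation values that would then feed the stochastic location model.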

  9. Development and application of a unified balancing approach with multiple constraints

    Science.gov (United States)

    Zorzi, E. S.; Lee, C. C.; Giordano, J. C.

    1985-01-01

    The development of a general analytic approach to constrained balancing that is consistent with past influence coefficient methods is described. The approach uses Lagrange multipliers to impose orbit and/or weight constraints; these constraints are combined with the least squares minimization process to provide a set of coupled equations that result in a single solution form for determining correction weights. Proper selection of constraints results in the capability to: (1) balance higher speeds without disturbing previously balanced modes, through the use of modal trial weight sets; (2) balance off-critical speeds; and (3) balance decoupled modes by use of a single balance plane. If no constraints are imposed, this solution form reduces to the general weighted least squares influence coefficient method. A test facility used to examine the use of the general constrained balancing procedure and application of modal trial weight ratios is also described.
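    The underlying mathematics can be sketched as follows (all numbers are invented, and real-valued influence coefficients stand in for the complex-valued ones used in rotor balancing): least-squares correction weights minimising residual vibration, subject to a linear constraint imposed through a Lagrange multiplier and solved as one KKT system.

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((6, 3))    # influence coefficients (measurements x balance planes)
r = rng.standard_normal(6)         # measured residual vibration
C = np.array([[1.0, 1.0, 1.0]])    # example constraint: correction weights sum to zero
b = np.array([0.0])

# Stationarity of L = ||T w + r||^2 + 2 lam^T (C w - b) gives the KKT system
#   [T^T T  C^T] [ w ]   [-T^T r]
#   [  C     0 ] [lam] = [   b  ]
m = C.shape[0]
K = np.block([[T.T @ T, C.T], [C, np.zeros((m, m))]])
rhs = np.concatenate([-T.T @ r, b])
sol = np.linalg.solve(K, rhs)
w, lam = sol[:3], sol[3:]

assert np.allclose(C @ w, b)       # the constraint is satisfied exactly
```

With no constraints (an empty `C`) the system reduces to the ordinary weighted least-squares influence coefficient solution, mirroring the reduction noted in the abstract.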

  10. Detection-Discrimination Method for Multiple Repeater False Targets Based on Radar Polarization Echoes

    Directory of Open Access Journals (Sweden)

    Z. W. ZONG

    2014-04-01

    Full Text Available Multiple repeater false targets (RFTs), created by the digital radio frequency memory (DRFM) system of a jammer, are widely used in practice to exhaust the limited tracking and discrimination resources of defence radar. In this paper, a common characteristic of the radar polarization echoes of multiple RFTs is used for target recognition. Based on the echoes from two receiving polarization channels, the instantaneous polarization ratio (IPR) is defined and its variance is derived by employing a Taylor series expansion. A detection-discrimination method is designed based on probability grids. Using data from a microwave anechoic chamber, the detection threshold of the method is determined. Theoretical analysis and simulations indicate that the method is valid and feasible. Furthermore, the estimation performance of the IPRs of RFTs under the influence of signal-to-noise ratio (SNR) is also examined.

  11. A frequency domain global parameter estimation method for multiple reference frequency response measurements

    Science.gov (United States)

    Shih, C. Y.; Tsuei, Y. G.; Allemang, R. J.; Brown, D. L.

    1988-10-01

    A method of using the matrix Auto-Regressive Moving Average (ARMA) model in the Laplace domain for multiple-reference global parameter identification is presented. This method is particularly applicable to the area of modal analysis where high modal density exists. The method is also applicable when multiple reference frequency response functions are used to characterise linear systems. In order to facilitate the mathematical solution, the Forsythe orthogonal polynomial is used to reduce the ill-conditioning of the formulated equations and to decouple the normal matrix into two reduced matrix blocks. A Complex Mode Indicator Function (CMIF) is introduced, which can be used to determine the proper order of the rational polynomials.
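    The conditioning issue that motivates the orthogonal-polynomial step can be illustrated with a small sketch (Chebyshev polynomials stand in here for the Forsythe polynomials, and the frequency grid is invented): the basis matrix of raw monomials over a wide frequency band is numerically unusable, while an orthogonal basis over the same grid is well behaved.

```python
import numpy as np

w = np.linspace(0.0, 100.0, 400)                       # hypothetical frequency axis
deg = 12
V_mono = np.vander(w, deg + 1)                         # raw monomial basis matrix
x = 2.0 * (w - w.min()) / (w.max() - w.min()) - 1.0    # map band to [-1, 1]
V_orth = np.polynomial.chebyshev.chebvander(x, deg)    # orthogonal basis matrix

c_mono = np.linalg.cond(V_mono)
c_orth = np.linalg.cond(V_orth)
# Forming normal equations squares these condition numbers, so the orthogonal
# basis is the difference between a hopeless and a routine least-squares fit.
assert c_orth < 1e6 < c_mono
```

Forsythe polynomials go one step further than this sketch: they are generated to be orthogonal with respect to the actual measurement grid, which is what allows the normal matrix to decouple into reduced blocks as described above.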

  12. Towards an integrated, multiple constraint approach. Assessing the regional carbon balance

    International Nuclear Information System (INIS)

    Kruijt, B.; Dolman, A.J.; Lloyd, J.; Ehleringer, J.; Raupach, M.; Finnigan, J.

    2001-01-01

    Full carbon accounting for regions as a whole, considering all fluxes and transports, is ultimately the only real hope if we are to manage global CO2 emissions. Many methods exist to estimate parts of the carbon budget, but none is yet available to measure carbon fluxes directly at regional scales. Atmospheric Boundary-Layer budgeting methods do have this potential. In the winter of 2000, an international workshop was held in Gubbio, Italy, to discuss ways to advance from the pilot experiments that have been carried out on these methods. First, the meeting helped to advance the methodology and clarify its objectives, requirements and merits. But more importantly, the outcome of the meeting was that atmospheric budget methods should be part of an integrative approach, in which data and model results of very different kinds and from very different scales all provide constraints for estimating the net carbon exchange of a region. Several international projects are now operating to achieve this goal.

  13. Simultaneous Calibration: A Joint Optimization Approach for Multiple Kinect and External Cameras

    Directory of Open Access Journals (Sweden)

    Yajie Liao

    2017-06-01

    Full Text Available Camera calibration is a crucial problem in many applications, such as 3D reconstruction, structure from motion, object tracking and face alignment. Numerous methods have been proposed to solve the above problem with good performance in the last few decades. However, few methods are targeted at the joint calibration of multi-sensor setups (more than four devices), which is a practical issue in real-time systems. In this paper, we propose a novel method and a corresponding workflow framework to simultaneously calibrate the relative poses of a Kinect and three external cameras. By optimizing the final cost function and adding corresponding weights to the external cameras in different locations, an effective joint calibration of multiple devices is constructed. Furthermore, the method is tested on a practical platform, and experimental results show that the proposed joint calibration method achieves satisfactory performance in a real-time system, with accuracy higher than that of the manufacturer's calibration.

  14. Balancing precision and risk: should multiple detection methods be analyzed separately in N-mixture models?

    Directory of Open Access Journals (Sweden)

    Tabitha A Graves

    Full Text Available Using multiple detection methods can increase the number, kind, and distribution of individuals sampled, which may increase accuracy and precision and reduce the cost of population abundance estimates. However, when variables influencing abundance are of interest, if individuals detected via different methods are influenced by the landscape differently, separate analysis of multiple detection methods may be more appropriate. We evaluated the effects of combining two detection methods on the identification of variables important to local abundance using detections of grizzly bears with hair traps (systematic) and bear rubs (opportunistic). We used hierarchical abundance models (N-mixture models) with separate model components for each detection method. If both methods sample the same population, the use of either data set alone should (1) lead to the selection of the same variables as important and (2) provide similar estimates of relative local abundance. We hypothesized that the inclusion of 2 detection methods versus either method alone should (3) yield more support for variables identified in single-method analyses (i.e. fewer variables and models with greater weight), and (4) improve precision of covariate estimates for variables selected in both separate and combined analyses because sample size is larger. As expected, joint analysis of both methods increased precision as well as certainty in variable and model selection. However, the single-method analyses identified different variables and the resulting predicted abundances had different spatial distributions. We recommend comparing single-method and jointly modeled results to identify the presence of individual heterogeneity between detection methods in N-mixture models, along with consideration of detection probabilities, correlations among variables, and tolerance to risk of failing to identify variables important to a subset of the population. 
The benefits of increased precision should be weighed
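    The joint model can be sketched as follows (all numbers are simulated, and this is a generic two-method N-mixture likelihood with a shared latent abundance, not the authors' full covariate model): each site's abundance is Poisson, each detection method observes it through its own binomial detection probability, and the site likelihood sums over the unobserved count.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom, poisson

# Simulated data: S sites, latent abundances N_i ~ Poisson(lam_true),
# two detection methods with different detection probabilities.
rng = np.random.default_rng(7)
S, lam_true, p1, p2 = 60, 4.0, 0.4, 0.25
N = rng.poisson(lam_true, S)
y1 = rng.binomial(N, p1)            # e.g. hair-trap counts
y2 = rng.binomial(N, p2)            # e.g. bear-rub counts
Kmax = 40                           # truncation of the sum over latent abundance

def nll(theta):
    lam = np.exp(theta[0])                         # log-link for abundance
    q1, q2 = 1.0 / (1.0 + np.exp(-theta[1:]))      # logit-link for detection
    k = np.arange(Kmax + 1)
    pk = poisson.pmf(k, lam)                       # prior over the latent count
    # site-wise likelihood: sum_k P(N=k) P(y1|k,q1) P(y2|k,q2)
    L = (pk * binom.pmf(y1[:, None], k, q1) * binom.pmf(y2[:, None], k, q2)).sum(axis=1)
    return -np.log(L + 1e-300).sum()

fit = minimize(nll, x0=np.zeros(3), method="Nelder-Mead")
lam_hat = np.exp(fit.x[0])          # estimated mean abundance per site
```

Sharing the latent `N_i` between both binomial layers is what lets the second method tighten the abundance estimate; fitting each method separately would instead yield two independent, less precise likelihoods.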

  15. Comparing Multiple Intelligences Approach with Traditional Teaching on Eight Grade Students' Achievement in and Attitudes toward Science

    Science.gov (United States)

    Kaya, Osman Nafiz; Dogan, Alev; Gokcek, Nur; Kilic, Ziya; Kilic, Esma

    2007-01-01

    The purpose of this study was to investigate the effects of multiple intelligences (MI) teaching approach on 8th Grade students' achievement in and attitudes toward science. This study used a pretest-posttest control group experimental design. While the experimental group (n=30) was taught a unit on acids and bases using MI teaching approach, the…

  16. Investigation of colistin sensitivity via three different methods in Acinetobacter baumannii isolates with multiple antibiotic resistance.

    Science.gov (United States)

    Sinirtaş, Melda; Akalin, Halis; Gedikoğlu, Suna

    2009-09-01

    In recent years there has been an increase in life-threatening infections caused by Acinetobacter baumannii with multiple antibiotic resistance, which has led to the use of polymyxins, especially colistin, being reconsidered. The aim of this study was to investigate the colistin sensitivity of A. baumannii isolates with multiple antibiotic resistance via different methods, and to evaluate the disk diffusion method for colistin against multi-resistant Acinetobacter isolates, in comparison to the E-test and Phoenix system. The study was carried out on 100 strains of A. baumannii (colonization or infection) isolated from the microbiological samples of different patients followed in the clinics and intensive care units of Uludağ University Medical School between the years 2004 and 2005. Strains were identified and characterized for their antibiotic sensitivity by Phoenix system (Becton Dickinson, Sparks, MD, USA). In all studied A. baumannii strains, susceptibility to colistin was determined to be 100% with the disk diffusion, E-test, and broth microdilution methods. Results of the E-test and broth microdilution method, which are accepted as reference methods, were found to be 100% consistent with the results of the disk diffusion tests; no very major or major error was identified upon comparison of the tests. The sensitivity and the positive predictive value of the disk diffusion method were found to be 100%. Colistin resistance in A. baumannii was not detected in our region, and disk diffusion method results are in accordance with those of E-test and broth microdilution methods.

  17. Occupational stress and psychopathology in health professionals: an explorative study with the multiple indicators multiple causes (MIMIC) model approach.

    Science.gov (United States)

    Iliceto, Paolo; Pompili, Maurizio; Spencer-Thomas, Sally; Ferracuti, Stefano; Erbuto, Denise; Lester, David; Candilera, Gabriella; Girardi, Paolo

    2013-03-01

    Occupational stress is a multivariate process involving sources of pressure, psycho-physiological distress, locus of control, work dissatisfaction, depression, anxiety, mental health disorders, hopelessness, and suicide ideation. Healthcare professionals are known for higher rates of occupational-related distress (burnout and compassion fatigue) and higher rates of suicide. The purpose of this study was to explain the relationships between occupational stress and some psychopathological dimensions in a sample of health professionals. We investigated 156 nurses and physicians, 62 males and 94 females, who were administered self-report questionnaires to assess occupational stress [occupational stress inventory (OSI)], temperament (temperament evaluation of Memphis, Pisa, Paris, and San Diego autoquestionnaire), and hopelessness (Beck hopelessness scale). The best Multiple Indicators Multiple Causes model with five OSI predictors yielded the following results: χ2(9) = 14.47 (p = 0.11); χ2/df = 1.60; comparative fit index = 0.99; root mean square error of approximation = 0.05. This model provided a good fit to the empirical data, showing a strong direct influence of causal variables such as work dissatisfaction, absence of type A behavior, and especially external locus of control, psychological and physiological distress on the latent variable psychopathology. Occupational stress is in a complex relationship with temperament and hopelessness and is also common among healthcare professionals.

  18. Methodical approaches to developing a classification of state methods of regulating business activity in fishery

    OpenAIRE

    She Son Gun

    2014-01-01

    Approaches to developing a classification of state methods of economic regulation are considered. On the basis of the review provided, a comprehensive method of state regulation of business activity is substantiated. The principles offered allow improving public administration and can be used in industry concepts and state programmes supporting small business in fishery.

  19. Methodical approach to financial stimulation of logistics managers

    OpenAIRE

    Melnykova Kateryna V.

    2014-01-01

    The article offers a methodical approach to the financial stimulation of logistics managers, which allows calculation of the incentive amount taking into account the profit obtained from introducing optimisation logistics solutions. The author generalises measures that would increase the stimulation of the labour of logistics managers by enterprise top managers. The article identifies motivation factors that influence the attitude of logistics managers to the execution of optimisatio...

  20. Comparison of multiple-criteria decision-making methods - results of simulation study

    Directory of Open Access Journals (Sweden)

    Michał Adamczak

    2016-12-01

    Full Text Available Background: Today, both researchers and practitioners have many methods for supporting the decision-making process. Due to the conditions in which supply chains function, the most interesting are multi-criteria methods. The use of sophisticated methods for supporting decisions requires the parameterization and execution of calculations that are often complex. So is it efficient to use sophisticated methods? Methods: The authors of the publication compared two popular multi-criteria decision-making methods: the Weighted Sum Model (WSM) and the Analytic Hierarchy Process (AHP). A simulation study reflects these two decision-making methods. Input data for this study was a set of criteria weights and the value of each alternative in terms of each criterion. Results: The iGrafx Process for Six Sigma simulation software recreated how both multiple-criteria decision-making methods (WSM and AHP) function. The result of the simulation was a numerical value defining the preference of each of the alternatives according to the WSM and AHP methods. The alternative producing a result of higher numerical value was considered preferred, according to the selected method. In the analysis of the results, the relationship between the values of the parameters and the difference in the results presented by both methods was investigated. Statistical methods, including hypothesis testing, were used for this purpose. Conclusions: The simulation study findings prove that the results obtained with the use of the two multiple-criteria decision-making methods are very similar. Differences occurred more frequently in lower-value parameters from the "value of each alternative" group and higher-value parameters from the "weight of criteria" group.
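    The similarity of the two methods can be sketched in a few lines (scores and weights invented; real AHP elicits pairwise judgments from experts rather than deriving them from scores): when the pairwise comparison matrices are built consistently from ratio-scale scores, the geometric-mean AHP priorities coincide with sum-normalised WSM scores, which is one way to see why the two methods tend to agree.

```python
import numpy as np

# Made-up data: 3 alternatives scored on 2 benefit criteria, with weights w.
X = np.array([[6.0, 4.0],
              [5.0, 7.0],
              [7.0, 5.0]])
w = np.array([0.6, 0.4])

# WSM: weighted sum of criterion-normalised scores.
wsm = (X / X.sum(axis=0)) @ w

# AHP: per criterion, a pairwise comparison matrix a_ij = x_i / x_j;
# the priority vector is the normalised geometric mean of its rows.
def priorities(col):
    A = col[:, None] / col[None, :]
    g = A.prod(axis=1) ** (1.0 / len(col))
    return g / g.sum()

ahp = np.column_stack([priorities(X[:, j]) for j in range(X.shape[1])]) @ w

print(np.argsort(-wsm), np.argsort(-ahp))   # -> [2 1 0] [2 1 0]
```

With perfectly consistent matrices the scores agree exactly; differences in practice come from inconsistent expert judgments and from the normalisation choices, which matches the paper's finding that disagreements concentrate in particular parameter regions.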

  1. Traffic Management by Using Admission Control Methods in Multiple Node IMS Network

    Directory of Open Access Journals (Sweden)

    Filip Chamraz

    2016-01-01

    Full Text Available The paper deals with Admission Control (AC) methods as a possible solution for traffic management in IMS networks (IP Multimedia Subsystem), from the point of view of an efficient redistribution of the available network resources and keeping the parameters of Quality of Service (QoS). The paper specifically aims at the selection of the most appropriate method for a specific type of traffic, and at a traffic management concept using AC methods on multiple nodes. The potential benefits and disadvantages of the solution are evaluated.

  2. Approach to evaluation and management of a patient with multiple food allergies.

    Science.gov (United States)

    Bird, J Andrew

    2016-01-01

    Diagnosing food allergy is often challenging, and validated testing modalities are mostly limited to immunoglobulin E (IgE)-mediated reactions to foods. Use of food-specific IgE tests and skin prick tests in individuals without a history that supports an IgE-mediated reaction to the specific food being tested diminishes the predictive capabilities of the test. To review the literature regarding evaluation of patients with a concern for multiple food allergies and to demonstrate an evidence-based approach to diagnosis and management. A literature search was performed and articles identified as relevant based on the search terms "food allergy," "food allergy diagnosis," "skin prick test," "serum IgE test," "oral food challenge", and "food allergy management." Patients at risk of food allergy are often misdiagnosed and appropriate evaluation of patients with concern for food allergy includes taking a thorough diet history and reaction history, performing specific tests intentionally and when indicated, and conducting an oral food challenge in a safe environment by an experienced provider when test results are inconclusive. An evidence-based approach to diagnosing and managing a patient at risk of having a life-threatening food allergy is reviewed.

  3. A multi-disciplinary approach for the integrated assessment of multiple risks in delta areas.

    Science.gov (United States)

    Sperotto, Anna; Torresan, Silvia; Critto, Andrea; Marcomini, Antonio

    2016-04-01

    The assessment of climate change related risks is notoriously difficult due to the complex and uncertain combinations of hazardous events that might happen, the multiplicity of physical processes involved, and the continuous changes and interactions of environmental and socio-economic systems. One important challenge lies in predicting and modelling cascades of natural and man-made hazard events which can be triggered by climate change, encompassing different spatial and temporal scales. Another concerns the potentially difficult integration of environmental, social and economic disciplines in the multi-risk concept. Finally, effective interaction between scientists and stakeholders is essential to ensure that multi-risk knowledge is translated into efficient adaptation and management strategies. The assessment is even more complex at the scale of deltaic systems, which are particularly vulnerable to global environmental changes due to the fragile equilibrium between the presence of valuable natural ecosystems and relevant economic activities. Improving our capacity to assess the combined effects of multiple hazards (e.g. sea-level rise, storm surges, reduction in sediment load, local subsidence, saltwater intrusion) is therefore essential to identify timely opportunities for adaptation. A holistic multi-risk approach is here proposed to integrate terminology, metrics and methodologies from different research fields (i.e. environmental, social and economic sciences), thus creating shared knowledge areas to advance multi-risk assessment and management in delta regions. A first test of the approach, including the application of Bayesian network analysis for the assessment of impacts of climate change on key natural systems (e.g. wetlands, protected areas, beaches) and socio-economic activities (e.g. agriculture, tourism), is applied in the Po river delta in Northern Italy. 
The approach is based on a bottom-up process involving local stakeholders early in different

  4. A multiple soil ecosystem services approach to evaluate the sustainability of reduced tillage systems

    Science.gov (United States)

    Pérès, Guénola; Menasseri, Safya; Hallaire, Vincent; Cluzeau, Daniel; Heddadj, Djilali; Cotinet, Patrice; Manceau, Olivier; Pulleman, Mirjam

    2017-04-01

    In the current context of soil degradation, reduced tillage systems (including reduced soil disturbance, use of cover crops and crop rotation, and improved organic matter management) are expected to be good alternatives to conventional systems, which have led to a decrease in soil multi-functionality. Many studies worldwide have analysed the impact of tillage systems on different soil functions, but an overall, integrated view of the impact of these systems is still lacking. The SUSTAIN project (European SNOWMAN programme), performed in France and the Netherlands, proposes an interdisciplinary collaboration. The goals of SUSTAIN are to assess the multi-functionality of soil and to study how reduced-tillage systems affect multiple ecosystem services such as soil biodiversity regulation (earthworms, nematodes, microorganisms), soil structure maintenance (aggregate stability, compaction, soil erosion), water regulation (run-off, transfer of pesticides) and food production. Moreover, a socio-economic study on farmer networks has been carried out to identify the drivers of adoption of reduced-tillage systems. Data have been collected in long-term experimental fields (5 - 13 years), representing conventional and organic farming strategies, and were complemented with data from farmer networks. The impacts of different reduced tillage systems (direct seeding, minimum tillage, non-inverse tillage, superficial ploughing) were analysed and compared to conventional ploughing. Measurements (biological, chemical, physical, agronomical, water and element transfer) were made on several dates, allowing an overview of the evolution of the soil properties according to climate variation and crop rotation. A sociological approach was applied on several farms covering different production types, different courses (engagement in reduced tillage systems) and different geographical locations. Focusing on French trials, this multiple ecosystem services approach clearly showed that

  5. A permutation-based multiple testing method for time-course microarray experiments

    Directory of Open Access Journals (Sweden)

    George Stephen L

    2009-10-01

    Full Text Available Abstract Background Time-course microarray experiments are widely used to study the temporal profiles of gene expression. Storey et al. (2005) developed a method for analyzing time-course microarray studies that can be applied to discovering genes whose expression trajectories change over time within a single biological group, or those that follow different time trajectories among multiple groups. They estimated the expression trajectories of each gene using natural cubic splines under the null (no time-course) and alternative (time-course) hypotheses, and used a goodness-of-fit test statistic to quantify the discrepancy. The null distribution of the statistic was approximated through a bootstrap method. Gene expression levels in microarray data are often correlated in complicated ways. Accurate type I error control adjusting for multiple testing requires the joint null distribution of test statistics for a large number of genes. For this purpose, permutation methods have been widely used because of their computational ease and intuitive interpretation. Results In this paper, we propose a permutation-based multiple testing procedure based on the test statistic used by Storey et al. (2005). We also propose an efficient computation algorithm. Extensive simulations are conducted to investigate the performance of the permutation-based multiple testing procedure. The application of the proposed method is illustrated using the Caenorhabditis elegans dauer developmental data. Conclusion Our method is computationally efficient and applicable for identifying genes whose expression levels are time-dependent in a single biological group and for identifying the genes for which the time-profile depends on the group in a multi-group setting.
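    The key point, that permuting whole arrays preserves the correlation structure among genes, can be sketched with a generic max-T permutation procedure (simulated two-group data and a mean-difference statistic, not the spline goodness-of-fit statistic of the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
G, n1, n2, B = 200, 8, 8, 999               # genes, group sizes, permutations
data = rng.standard_normal((G, n1 + n2))
data[:5, :n1] += 2.0                        # five genes with a real group effect
labels = np.array([True] * n1 + [False] * n2)

def stat(d, lab):
    # per-gene statistic: absolute difference in group means
    return np.abs(d[:, lab].mean(axis=1) - d[:, ~lab].mean(axis=1))

t_obs = stat(data, labels)
max_null = np.empty(B)
for b in range(B):
    perm = rng.permutation(labels)          # one permutation reused for ALL genes,
    max_null[b] = stat(data, perm).max()    # preserving inter-gene correlation

# FWER-adjusted p-value per gene: fraction of permutations whose max beats it.
p_adj = (1 + (max_null[None, :] >= t_obs[:, None]).sum(axis=1)) / (B + 1)
print((p_adj < 0.05).sum())                 # roughly the five spiked genes
```

Permuting the array labels rather than each gene independently is what gives the joint null distribution the abstract refers to; the max-T step then turns it into family-wise-error-controlled adjusted p-values.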

  6. Multiple external hazards compound level 3 PSA methods research of nuclear power plant

    Science.gov (United States)

    Wang, Handing; Liang, Xiaoyu; Zhang, Xiaoming; Yang, Jianfeng; Liu, Weidong; Lei, Dina

    2017-01-01

    The 2011 Fukushima nuclear power plant severe accident was caused by both an earthquake and a tsunami, and resulted in a large release of radioactive nuclides that contaminated the surrounding environment. Although the probability of such an accident is extremely small, once it happens it is likely to release a large amount of radioactive material into the environment and cause radioactive contamination. Therefore, studying accident consequences is important and essential to improve nuclear power plant design and management. Level 3 PSA methods for nuclear power plants can be used to analyze radiological consequences and quantify the risk to public health around nuclear power plants. Based on studies of multiple-external-hazard compound level 3 PSA methods for nuclear power plants, and on a description of the corresponding technology roadmap and its important technical elements, we took a coastal nuclear power plant as the reference site and analyzed the off-site consequences of nuclear power plant severe accidents caused by multiple external hazards. Finally, we discuss probabilistic risk studies of off-site consequences and their applications under multiple-external-hazard compound conditions, and explain the feasibility and reasonableness of implementing emergency plans.

  7. Meaning and challenges in the practice of multiple therapeutic massage modalities: a combined methods study.

    Science.gov (United States)

    Porcino, Antony J; Boon, Heather S; Page, Stacey A; Verhoef, Marja J

    2011-09-20

    Therapeutic massage and bodywork (TMB) practitioners are predominantly trained in programs that are not uniformly standardized, and in variable combinations of therapies. To date no studies have explored this variability in training and how this affects clinical practice. Combined methods, consisting of a quantitative, population-based survey and qualitative interviews with practitioners trained in multiple therapies, were used to explore the training and practice of TMB practitioners in Alberta, Canada. Of the 5242 distributed surveys, 791 were returned (15.1%). Practitioners were predominantly female (91.7%), worked in a range of environments, primarily private (44.4%) and home clinics (35.4%), and were not significantly different from other surveyed massage therapist populations. Seventy-seven distinct TMB therapies were identified. Most practitioners were trained in two or more therapies (94.4%), with a median of 8 and range of 40 therapies. Training programs varied widely in number and type of TMB components, training length, or both. Nineteen interviews were conducted. Participants described highly variable training backgrounds, resulting in practitioners learning unique combinations of therapy techniques. All practitioners reported providing individualized patient treatment based on a responsive feedback process throughout practice that they described as being critical to appropriately address the needs of patients. They also felt that research treatment protocols were different from clinical practice because researchers do not usually sufficiently acknowledge the individualized nature of TMB care provision. The training received, the number of therapies trained in, and the practice descriptors of TMB practitioners are all highly variable. In addition, clinical experience and continuing education may further alter or enhance treatment techniques. Practitioners individualize each patient's treatment through a highly adaptive process. Therefore, treatment

  8. Parallelism measurement for base plate of standard artifact with multiple tactile approaches

    Science.gov (United States)

    Ye, Xiuling; Zhao, Yan; Wang, Yiwen; Wang, Zhong; Fu, Luhua; Liu, Changjie

    2018-01-01

    Nowadays, workpieces are becoming more precise and more specialized, which results in more sophisticated structures and higher accuracy requirements for artifacts, and hence higher requirements for measuring accuracy and measuring methods. As an important means of obtaining workpiece dimensions, the coordinate measuring machine (CMM) has been widely used in many industries. During the calibration of a self-developed CMM, it was found, in the course of studying a self-made high-precision standard artifact, that the parallelism of the base plate used for fixing the standard artifact is an important factor affecting measurement accuracy. To measure the parallelism of the base plate, three tactile methods for parallelism measurement are employed, using an existing high-precision CMM, gauge blocks, a dial gauge and a marble platform, and the measurement results are compared. The results of the experiments show that the final accuracy of all three methods reaches the micron level and meets the measurement requirements. At the same time, these three approaches are suitable for different measurement conditions, which provides a basis for rapid and high-precision measurement under different equipment conditions.

  9. A Promising Approach to Integrally Evaluate the Disease Outcome of Cerebral Ischemic Rats Based on Multiple-Biomarker Crosstalk

    Directory of Open Access Journals (Sweden)

    Guimei Ran

    2017-01-01

    Full Text Available Purpose. The study was designed to evaluate the disease outcome based on multiple biomarkers related to cerebral ischemia. Methods. Rats were randomly divided into sham, permanent middle cerebral artery occlusion, and edaravone-treated groups. Cerebral ischemia was induced by permanent middle cerebral artery occlusion surgery in rats. To form a simplified crosstalk network, the related multiple biomarkers were chosen as S100β, HIF-1α, IL-1β, PGI2, TXA2, and GSH-Px. The levels or activities of these biomarkers in plasma were detected before and after ischemia. Concurrently, neurological deficit scores and cerebral infarct volumes were assessed. Based on a mathematic model, network balance maps and three integral disruption parameters (k, φ, and u) of the simplified crosstalk network were achieved. Results. The levels or activities of the related biomarkers and neurological deficit scores were significantly impacted by cerebral ischemia. The balance maps intuitively displayed the network disruption, and the integral disruption parameters quantitatively depicted the disruption state of the simplified network after cerebral ischemia. The integral disruption parameter u values correlated significantly with neurological deficit scores and infarct volumes. Conclusion. Our results indicate that the approach based on crosstalk network may provide a new promising way to integrally evaluate the outcome of cerebral ischemia.

  10. A novel sampling method for multiple multiscale targets from scattering amplitudes at a fixed frequency

    Science.gov (United States)

    Liu, Xiaodong

    2017-08-01

    A sampling method by using the scattering amplitude is proposed for shape and location reconstruction in inverse acoustic scattering problems. Only matrix multiplication is involved in the computation, so the novel sampling method is very easy and simple to implement. With the help of the factorization of the far field operator, we establish an inf-criterion for the characterization of the underlying scatterers. This result is then used to give a lower bound of the proposed indicator functional for sampling points inside the scatterers, while for sampling points outside the scatterers, we show that the indicator functional decays like the Bessel functions as the sampling point moves away from the boundary of the scatterers. We also show that the proposed indicator functional depends continuously on the scattering amplitude, which further implies that the novel sampling method is extremely stable with respect to errors in the data. Unlike classical sampling methods such as the linear sampling method or the factorization method, from the numerical point of view the novel indicator takes its maximum near the boundary of the underlying target and decays like the Bessel functions as the sampling points move away from the boundary. The numerical simulations also show that the proposed sampling method can deal with the multiple multiscale case, even when the different components are close to each other.

  11. Exploring multiple intelligences theory in the context of science education: An action research approach

    Science.gov (United States)

    Goodnough, Karen Catherine

    2000-10-01

    Since the publication of Frames of Mind: The Theory in Practice, multiple intelligences theory (Gardner, 1983) has been used by practitioners in a variety of ways to make teaching and learning more meaningful. However, little attention has been focused on exploring the potential of the theory for science teaching and learning. Consequently, this research study was designed to: (1) explore Howard Gardner's theory of multiple intelligences (1983) and its merit for making science teaching and learning more meaningful; (2) provide a forum for teachers to engage in critical self-reflection about their theory and practice in science education; (3) study the process of action research in the context of science education; and (4) describe the effectiveness of collaborative action research as a framework for teacher development and curriculum development. The study reports on the experiences of four teachers (two elementary teachers, one junior high teacher, and one high school teacher) and myself, a university researcher-facilitator, as we participated in a collaborative action research project. The action research group held weekly meetings over a five-month period (January--May, 1999). The inquiry was a qualitative case study (Stake, 1994) that aimed to understand the perspectives of those directly involved. This was achieved by using multiple methods to collect data: audiotaped action research meetings, fieldnotes, semi-structured interviews, journal writing, and concept mapping. All data were analysed on an ongoing basis. Many positive outcomes resulted from the study in areas such as curriculum development, teacher development, and student learning in science. Through the process of action research, research participants became more reflective about their practice and thus, enhanced their pedagogical content knowledge (Shulman, 1987) in science. Students became more engaged in learning science, gained a greater understanding of how they learn, and experienced a

  12. Practical Approaches to the Use of Lenalidomide in Multiple Myeloma: A Canadian Consensus

    Directory of Open Access Journals (Sweden)

    Donna Reece

    2012-01-01

    Full Text Available In Canada, lenalidomide combined with dexamethasone (Len/Dex) is approved for use in relapsed or refractory multiple myeloma (RRMM). Our expert panel sought to provide an up-to-date practical guide on the use of lenalidomide in the management of RRMM within the Canadian clinical setting, including management of common adverse events (AEs). The panel concluded that safe, effective administration of Len/Dex treatment involves the following steps: (1) lenalidomide dose adjustment based on creatinine clearance and the extent of neutropenia or thrombocytopenia, (2) dexamethasone administered at 20–40 mg/week, and (3) continuation of treatment until disease progression or until toxicity persists despite dose reduction. Based on available evidence, the following precautions should reduce the risk of common Len/Dex AEs: (1) all patients treated with Len/Dex should receive thromboprophylaxis, (2) erythropoiesis-stimulating agents (ESAs) should be used cautiously, and (3) females of child-bearing potential and males in contact with such females must use multiple contraception methods. Finally, while Len/Dex can be administered irrespective of prior therapy and in all prognostic subsets, patients with chromosomal deletion 17(p13) have less favorable outcomes with all treatments, including Len/Dex. New directions for the use of lenalidomide in RRMM are also considered.

  13. A trace ratio maximization approach to multiple kernel-based dimensionality reduction.

    Science.gov (United States)

    Jiang, Wenhao; Chung, Fu-lai

    2014-01-01

    Most dimensionality reduction techniques are based on one metric or one kernel, hence it is necessary to select an appropriate kernel for kernel-based dimensionality reduction. Multiple kernel learning for dimensionality reduction (MKL-DR) has been recently proposed to learn a kernel from a set of base kernels which are seen as different descriptions of data. As MKL-DR does not involve regularization, it might be ill-posed under some conditions and consequently its applications are hindered. This paper proposes a multiple kernel learning framework for dimensionality reduction based on regularized trace ratio, termed as MKL-TR. Our method aims at learning a transformation into a space of lower dimension and a corresponding kernel from the given base kernels among which some may not be suitable for the given data. The solutions for the proposed framework can be found based on trace ratio maximization. The experimental results demonstrate its effectiveness in benchmark datasets, which include text, image and sound datasets, for supervised, unsupervised as well as semi-supervised settings. Copyright © 2013 Elsevier Ltd. All rights reserved.
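
    The core of trace ratio maximization can be sketched independently of the kernel-learning machinery: given a symmetric target scatter matrix A and a positive definite matrix B, find an orthonormal V maximizing tr(VᵀAV)/tr(VᵀBV). Below is the classic fixed-point scheme for that subproblem — a sketch of the generic algorithm, not the MKL-TR method itself (B is assumed positive definite):

```python
import numpy as np

def trace_ratio(A, B, r, iters=50, tol=1e-10):
    """Maximize tr(V^T A V) / tr(V^T B V) over orthonormal V (d x r).

    Fixed-point scheme: given the current ratio lam, update V to the
    top-r eigenvectors of A - lam * B; lam increases monotonically.
    """
    d = A.shape[0]
    V = np.eye(d)[:, :r]
    lam = np.trace(V.T @ A @ V) / np.trace(V.T @ B @ V)
    for _ in range(iters):
        w, U = np.linalg.eigh(A - lam * B)
        V = U[:, np.argsort(w)[::-1][:r]]   # eigenvectors of the r largest eigenvalues
        new_lam = np.trace(V.T @ A @ V) / np.trace(V.T @ B @ V)
        if abs(new_lam - lam) < tol:
            break
        lam = new_lam
    return V, lam
```

    The converged ratio dominates that of randomly drawn orthonormal bases, which is a quick sanity check on the fixed point.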

  14. A characteristic based multiple balance approach for SN on arbitrary polygonal meshes

    International Nuclear Information System (INIS)

    Grove, R.E.; Pevey, R.E.

    1995-01-01

    The authors introduce a new approach for characteristic-based S_N transport on arbitrary polygonal meshes in XY geometry. They approximate a general surface as an arbitrary polygon and rotate to a coordinate system aligned with the direction of particle travel. They use exact moment balance equations on whole cells and subregions called slices and close the system by analytically solving the characteristic equation. The authors assume spatial functions for boundary conditions and cell sources and formulate analogous functions for outgoing edge and cell angular fluxes which exactly preserve spatial moments of the analytic solution. In principle, their approach provides the framework to extend characteristic methods formulated on rectangular grids to arbitrary polygonal meshes. The authors derive schemes based on step and linear spatial approximations. Their step characteristic scheme is mathematically equivalent to the Extended Step Characteristic (ESC) method but their approach and scheme differ in the geometry rotation and in the solution form. Their solutions are simple and permit edge-based transport sweep ordering.

  15. A prediction method based on wavelet transform and multiple models fusion for chaotic time series

    International Nuclear Information System (INIS)

    Zhongda, Tian; Shujiang, Li; Yanhong, Wang; Yi, Sha

    2017-01-01

    In order to improve the prediction accuracy of chaotic time series, a prediction method based on wavelet transform and multiple model fusion is proposed. The chaotic time series is decomposed and reconstructed by wavelet transform, yielding approximation components and detail components. According to the different characteristics of each component, least squares support vector machine (LSSVM) is used as the predictive model for the approximation components, with an improved free search algorithm utilized for predictive model parameter optimization. The autoregressive integrated moving average (ARIMA) model is used as the predictive model for the detail components. The predictions of the multiple models are fused by the Gauss–Markov algorithm; the error variance of the fused result is smaller than that of any single model, so the prediction accuracy is improved. The method is evaluated on two typical chaotic time series, the Lorenz and Mackey–Glass time series. The simulation results show that the proposed prediction method achieves better prediction accuracy.
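
    The fusion step described here is commonly read as a minimum-variance (Gauss–Markov) combination: each unbiased predictor is weighted by its inverse error variance, and the fused variance never exceeds the smallest individual one. A hedged sketch of that reading (errors assumed unbiased and uncorrelated, which the paper may treat more generally):

```python
import numpy as np

def gauss_markov_fuse(preds, err_vars):
    """Minimum-variance unbiased combination of unbiased predictors.

    Weights are proportional to inverse error variances; the fused
    variance is 1 / sum(1 / var_i), which is <= min(var_i).
    """
    preds = np.asarray(preds, dtype=float)
    err_vars = np.asarray(err_vars, dtype=float)
    w = 1.0 / err_vars
    w /= w.sum()
    fused = w @ preds
    fused_var = 1.0 / np.sum(1.0 / err_vars)
    return fused, fused_var
```

    Fusing predictions 1.0 and 2.0 with variances 1 and 4 gives weights 0.8/0.2, a fused value of 1.2, and a fused variance of 0.8 — below the best single model.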

  16. Experimental design and multiple response optimization. Using the desirability function in analytical methods development.

    Science.gov (United States)

    Candioti, Luciana Vera; De Zan, María M; Cámara, María S; Goicoechea, Héctor C

    2014-06-01

    A review about the application of response surface methodology (RSM) when several responses have to be simultaneously optimized in the field of analytical methods development is presented. Several critical issues like response transformation, multiple response optimization and modeling with least squares and artificial neural networks are discussed. Most recent analytical applications are presented in the context of analytical methods development, especially in multiple response optimization procedures using the desirability function. Copyright © 2014 Elsevier B.V. All rights reserved.
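
    The desirability approach the review centers on can be illustrated with Derringer-type functions: each response is mapped to a [0, 1] desirability and the overall desirability is their geometric mean, which is then maximized over the experimental factors. A minimal sketch (the bounds and targets below are illustrative, not from any cited application):

```python
import numpy as np

def desirability_max(y, low, target, s=1.0):
    """Derringer-type 'larger is better' desirability: 0 at or below `low`,
    1 at or above `target`, a power ramp (exponent s) in between."""
    ramp = np.clip((y - low) / (target - low), 0.0, 1.0)
    return float(ramp ** s)

def overall_desirability(ds):
    """Geometric mean of individual desirabilities; any d = 0 vetoes the point."""
    ds = np.asarray(ds, dtype=float)
    return float(ds.prod() ** (1.0 / len(ds)))
```

    A response halfway up its ramp gets d = 0.5; combining d = 0.25 and d = 1.0 gives an overall desirability of 0.5, and a single zero drives the overall value to zero, which is what makes the function useful for trading off competing responses.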

  17. Basis set approach in the constrained interpolation profile method

    International Nuclear Information System (INIS)

    Utsumi, T.; Koga, J.; Yabe, T.; Ogata, Y.; Matsunaga, E.; Aoki, T.; Sekine, M.

    2003-07-01

    We propose a simple polynomial basis-set that is easily extendable to any desired higher-order accuracy. This method is based on the Constrained Interpolation Profile (CIP) method and the profile is chosen so that the subgrid scale solution approaches the real solution by the constraints from the spatial derivative of the original equation. Thus the solution even on the subgrid scale becomes consistent with the master equation. By increasing the order of the polynomial, this solution quickly converges. 3rd and 5th order polynomials are tested on the one-dimensional Schroedinger equation and are proved to give solutions a few orders of magnitude higher in accuracy than conventional methods for lower-lying eigenstates. (author)

  18. Probabilistic atlas-guided eigen-organ method for simultaneous bounding box estimation of multiple organs in volumetric CT images

    International Nuclear Information System (INIS)

    Yao, Cong; Wada, Takashige; Shimizu, Akinobu; Kobatake, Hidefumi; Nawano, Shigeru

    2006-01-01

    We propose an approach for the simultaneous bounding box estimation of multiple organs in volumetric CT images. Local eigen-organ spaces are constructed for different types of training organs, and a global eigen-space, which describes the spatial relationships between the organs, is also constructed. Each volume of interest in the abdominal CT image is projected into the local eigen-organ spaces, and several candidate locations are determined. The final selection of the organ locations is made by projecting the set of candidate locations into the global eigen-space. A probabilistic atlas of organs is used to eliminate locations with low probability and to guide the selection of candidate locations. Evaluation by the leave-one-out method using 10 volumetric abdominal CT images showed that the proposed method provided an average accuracy of 80.38% for 11 different organ types. (author)

  19. Multiplication factor evaluation of bare and reflected small fast assemblies using variational methods

    International Nuclear Information System (INIS)

    Dwivedi, S.R.; Jain, D.

    1979-01-01

    The multigroup collision probability equations were solved by the variational method to derive a simple relation between the multiplication factor and the size of a small spherical bare or reflected fast reactor. This relation was verified by a number of 26-group S_4 transport theory calculations in one-dimensional spherical geometry for enriched uranium and plutonium systems. It has been shown that further approximations to the above relation lead to the universal empirical relation obtained by Anil Kumar. (orig.)

  20. Research on numerical method for multiple pollution source discharge and optimal reduction program

    Science.gov (United States)

    Li, Mingchang; Dai, Mingxin; Zhou, Bin; Zou, Bin

    2018-03-01

    In this paper, an optimal reduction program is derived with a nonlinear optimization algorithm, the genetic algorithm. The four main rivers in Jiangsu province, China, are selected for reducing environmental pollution in the nearshore district. Dissolved inorganic nitrogen (DIN) is studied as the only pollutant. The environmental status and the water quality standard in the nearshore district are used to constrain the discharge reductions of the multiple river pollution sources. The resulting reduction program provides a basis for marine environmental management.
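
    A genetic algorithm for such a reduction program can be sketched as a penalized cost minimization: choose a reduction fraction per river so the remaining total load meets the standard at least cost. The river loads, unit costs, target load, and GA settings below are hypothetical placeholders, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical example: four rivers, current DIN loads (t/yr),
# unit reduction costs, and an allowed total load after reduction.
loads = np.array([120.0, 90.0, 60.0, 30.0])
costs = np.array([1.0, 2.0, 4.0, 8.0])   # cost per tonne reduced
target = 200.0

def fitness(x):
    """x[i] in [0, 1] is the reduction fraction for river i; solutions
    that still exceed the target load are penalized."""
    residual = np.sum(loads * (1 - x))
    cost = np.sum(costs * loads * x)
    return cost + 1e4 * max(0.0, residual - target)

def genetic_algorithm(pop_size=60, gens=200, mut=0.1):
    pop = rng.random((pop_size, 4))
    for _ in range(gens):
        f = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(f)[: pop_size // 2]]   # truncation selection
        kids = parents.copy()
        cut = rng.integers(1, 4, size=pop_size // 2)    # one-point crossover
        for i in range(0, pop_size // 2 - 1, 2):
            kids[i, cut[i]:], kids[i + 1, cut[i]:] = (
                parents[i + 1, cut[i]:].copy(), parents[i, cut[i]:].copy())
        kids += rng.normal(0, mut, kids.shape)          # Gaussian mutation
        pop = np.clip(np.vstack([parents, kids]), 0, 1)
    f = np.array([fitness(ind) for ind in pop])
    return pop[np.argmin(f)]
```

    Keeping the parent half unmutated makes the scheme elitist, so the best penalized cost never worsens across generations; the best individual satisfies the load target while favoring reductions at the cheap rivers.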

  1. An Improved Clutter Suppression Method for Weather Radars Using Multiple Pulse Repetition Time Technique

    Directory of Open Access Journals (Sweden)

    Yingjie Yu

    2017-01-01

    Full Text Available This paper describes the implementation of an improved clutter suppression method for the multiple pulse repetition time (PRT) technique based on simulated radar data. The suppression method is constructed using maximum likelihood methodology in the time domain and is called the parametric time domain method (PTDM). The procedure relies on the assumption that precipitation and clutter signal spectra follow a Gaussian functional form. The multiple interleaved pulse repetition frequencies (PRFs) used in this work are set to four values (952, 833, 667, and 513 Hz). Based on radar simulation, it is shown that the new method can provide accurate retrieval of Doppler velocity even in the case of strong clutter contamination. The obtained velocity is nearly unbiased over the whole Nyquist velocity interval. The performance of the method is also illustrated on simulated radar data for a plan position indicator (PPI) scan. Compared with staggered 2-PRT transmission schemes with PTDM, the proposed method presents better estimation accuracy under certain clutter situations.

  2. Multiple Signal Classification Algorithm Based Electric Dipole Source Localization Method in an Underwater Environment

    Directory of Open Access Journals (Sweden)

    Yidong Xu

    2017-10-01

    Full Text Available A novel localization method based on the multiple signal classification (MUSIC) algorithm is proposed for positioning an electric dipole source in a confined underwater environment by using an electric dipole-receiving antenna array. In this method, the boundary element method (BEM) is introduced to analyze the boundary of the confined region by use of a matrix equation. The voltage of each dipole pair is used as the spatial-temporal localization data; unlike conventional field-based localization methods, the field component in each direction need not be obtained, so the method can be easily implemented in practical engineering applications. Then, a global-multiple region-conjugate gradient (CG) hybrid search method is used to reduce the computation burden and to improve the operation speed. Two localization simulation models and a physical experiment are conducted. Both the simulation results and the physical experiment show accurate positioning performance, verifying the effectiveness of the proposed localization method in underwater environments.
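
    The MUSIC step itself — independent of the underwater BEM model — can be sketched for a generic uniform linear array: eigendecompose the sample covariance, keep the noise subspace, and scan steering vectors for directions nearly orthogonal to it. The array geometry and signal model below are illustrative, not the paper's dipole-pair setup:

```python
import numpy as np

def music_spectrum(X, n_sources, angles_deg, spacing=0.5):
    """MUSIC pseudospectrum for a uniform linear array.

    X: (n_sensors, n_snapshots) complex baseband snapshots;
    spacing is the element spacing in wavelengths.
    """
    n = X.shape[0]
    R = X @ X.conj().T / X.shape[1]            # sample covariance
    w, U = np.linalg.eigh(R)                   # eigenvalues in ascending order
    En = U[:, : n - n_sources]                 # noise subspace
    k = np.arange(n)[:, None]
    a = np.exp(2j * np.pi * spacing * k * np.sin(np.radians(angles_deg)))
    # Peaks occur where steering vectors are (nearly) orthogonal to En.
    return 1.0 / np.linalg.norm(En.conj().T @ a, axis=0) ** 2
```

    With a simulated 8-element array and one source at 20 degrees, the pseudospectrum peaks within a fraction of a degree of the true direction.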

  3. System and method for integrating and accessing multiple data sources within a data warehouse architecture

    Science.gov (United States)

    Musick, Charles R [Castro Valley, CA; Critchlow, Terence [Livermore, CA; Ganesh, Madhaven [San Jose, CA; Slezak, Tom [Livermore, CA; Fidelis, Krzysztof [Brentwood, CA

    2006-12-19

    A system and method is disclosed for integrating and accessing multiple data sources within a data warehouse architecture. The metadata formed by the present method provide a way to declaratively present domain specific knowledge, obtained by analyzing data sources, in a consistent and useable way. Four types of information are represented by the metadata: abstract concepts, databases, transformations and mappings. A mediator generator automatically generates data management computer code based on the metadata. The resulting code defines a translation library and a mediator class. The translation library provides a data representation for domain specific knowledge represented in a data warehouse, including "get" and "set" methods for attributes that call transformation methods and derive a value of an attribute if it is missing. The mediator class defines methods that take "distinguished" high-level objects as input and traverse their data structures and enter information into the data warehouse.
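
    The "derive a value of an attribute if it is missing" behavior of the generated get methods can be sketched as follows; the class and attribute names are hypothetical illustrations, not those of the disclosed system:

```python
class TranslationRecord:
    """Sketch of a warehouse record whose getters call a registered
    transformation to derive an attribute the source did not supply."""

    def __init__(self, **attrs):
        self._attrs = dict(attrs)
        self._derivations = {}           # attribute name -> callable(record)

    def register_derivation(self, name, fn):
        self._derivations[name] = fn

    def get(self, name):
        if self._attrs.get(name) is None and name in self._derivations:
            self._attrs[name] = self._derivations[name](self)  # derive and cache
        return self._attrs.get(name)

    def set(self, name, value):
        self._attrs[name] = value
```

    A record loaded with a sequence but no length, plus a registered length derivation, returns the derived value on first access and an explicitly set value thereafter.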

  4. An Extended TOPSIS Method for Multiple Attribute Decision Making based on Interval Neutrosophic Uncertain Linguistic Variables

    Directory of Open Access Journals (Sweden)

    Said Broumi

    2015-03-01

    Full Text Available The interval neutrosophic uncertain linguistic variables can easily express the indeterminate and inconsistent information in the real world, and TOPSIS is a very effective decision making method with more and more extensive applications. In this paper, we extend the TOPSIS method to deal with interval neutrosophic uncertain linguistic information, and propose an extended TOPSIS method to solve multiple attribute decision making problems in which the attribute values take the form of interval neutrosophic uncertain linguistic variables and the attribute weights are unknown. Firstly, the operational rules and properties of the interval neutrosophic variables are introduced. Then the distance between two interval neutrosophic uncertain linguistic variables is proposed, the attribute weights are calculated by the maximizing deviation method, and the closeness coefficient to the ideal solution is computed for each alternative. Finally, an illustrative example is given to illustrate the decision making steps and the effectiveness of the proposed method.
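
    The crisp core of TOPSIS, which the extension builds on, can be sketched in a few lines (the interval neutrosophic uncertain linguistic arithmetic and the maximizing deviation weighting are omitted; weights are taken as given):

```python
import numpy as np

def topsis(M, weights, benefit):
    """Classic crisp TOPSIS: returns closeness coefficients (higher is better).

    M: (alternatives x criteria) decision matrix; benefit[j] is True for
    criteria to maximize and False for cost criteria.
    """
    V = (M / np.linalg.norm(M, axis=0)) * weights   # vector-normalize, then weight
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)       # distance to ideal solution
    d_neg = np.linalg.norm(V - anti, axis=1)        # distance to anti-ideal
    return d_neg / (d_pos + d_neg)
```

    An alternative that dominates on every criterion coincides with the ideal solution and gets closeness 1; a fully dominated one gets 0.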

  5. CHANGES IN TUMOR NECROSIS FACTOR ALFA DURING TREATMENT OF PATIENTS WITH MULTIPLE SCLEROSIS BY TRANSIMMUNIZATION METHOD

    Directory of Open Access Journals (Sweden)

    A. V. Kil'dyushevskiy

    2016-01-01

    Full Text Available Background: Despite the availability of a large number of treatments for multiple sclerosis with various targets, these treatments are not always effective. According to the literature, experimental studies have shown a significant decrease in tumor necrosis factor alfa (TNF-α) with the use of extracorporeal photochemotherapy. Aim: To assess changes in TNF-α in patients with multiple sclerosis during treatment with transimmunization. Materials and methods: The study recruited 13 adult patients with multiple sclerosis. Serum TNF-α was measured by immunochemiluminescence analysis (IMMULITE 1000, Siemens). The patients were treated by transimmunization, i.e. a modified photopheresis. Two hours before the procedure, Ammifurin (8-methoxypsoralen) was administered to all the patients, then their mononuclear cells were isolated under the PBSC protocol with a Haemonetics MCS+ cell separator. Thereafter, the mononuclear cells were irradiated with ultraviolet for 90 minutes and incubated for 20 hours at 37 °C. The next day the cells were re-infused to the patients. The procedure was performed 2 times per week for 6 months, then once per 4 months. Results: Before transimmunization, the mean TNF-α level in adult patients with multiple sclerosis was 9.958±0.812 pg/mL (normal, below 8.1 pg/mL). After transimmunization, its level was 6.992±0.367 pg/mL (p<0.05). Conclusion: Ultraviolet irradiation of peripheral blood monocytes with their subsequent incubation (transimmunization) led to a 30% decrease of serum TNF-α in patients with multiple sclerosis. This indicates a suppressive effect of transimmunization on TNF-α. Hence, in patients with multiple sclerosis transimmunization exerts an anti-inflammatory effect.

  6. The Propagation of Movement Variability in Time: A Methodological Approach for Discrete Movements with Multiple Degrees of Freedom

    Science.gov (United States)

    Krüger, Melanie; Straube, Andreas; Eggert, Thomas

    2017-01-01

    In recent years, theory-building in motor neuroscience and our understanding of the synergistic control of the redundant human motor system have significantly profited from the emergence of a range of different mathematical approaches to analyze the structure of movement variability. Approaches such as the Uncontrolled Manifold method or the Noise-Tolerance-Covariance decomposition method allow one to detect and interpret changes in movement coordination due to e.g., learning, external task constraints or disease, by analyzing the structure of within-subject, inter-trial movement variability. Whereas for cyclical movements (e.g., locomotion) mathematical approaches exist to investigate the propagation of movement variability in time (e.g., time series analysis), similar approaches are missing for discrete, goal-directed movements, such as reaching. Here, we propose canonical correlation analysis as a suitable method to analyze the propagation of within-subject variability across different time points during the execution of discrete movements. While similar analyses have already been applied for discrete movements with only one degree of freedom (DoF; e.g., Pearson's product-moment correlation), canonical correlation analysis allows one to evaluate the coupling of inter-trial variability across different time points along the movement trajectory for multiple DoF-effector systems, such as the arm. The theoretical analysis is illustrated by empirical data from a study on reaching movements under normal and disturbed proprioception. The results show increased movement duration, decreased movement amplitude, as well as altered movement coordination under ischemia, which results in a reduced complexity of movement control. Movement endpoint variability is not increased under ischemia. This suggests that healthy adults are able to immediately and efficiently adjust the control of complex reaching movements to compensate for the loss of proprioceptive information.
Further, it is
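
    Canonical correlation analysis as used here reduces to the singular values of the whitened cross-covariance between two sets of variables (e.g., joint configurations at two time points across trials). A self-contained sketch for the first canonical correlation, on generic data rather than the reaching-movement dataset (a small ridge term is added for numerical stability):

```python
import numpy as np

def cca_first_correlation(X, Y, eps=1e-9):
    """First canonical correlation between row-sample matrices X (n x p)
    and Y (n x q): largest singular value of the whitened cross-covariance."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = Xc.T @ Xc / (n - 1) + eps * np.eye(X.shape[1])
    Syy = Yc.T @ Yc / (n - 1) + eps * np.eye(Y.shape[1])
    Sxy = Xc.T @ Yc / (n - 1)
    # Whiten both blocks: with Wx Wx^T = Sxx^{-1}, we get Wx^T Sxx Wx = I,
    # so the singular values of Wx^T Sxy Wy are the canonical correlations.
    Wx = np.linalg.cholesky(np.linalg.inv(Sxx))
    Wy = np.linalg.cholesky(np.linalg.inv(Syy))
    s = np.linalg.svd(Wx.T @ Sxy @ Wy, compute_uv=False)
    return float(s[0])
```

    Two blocks sharing a common latent variable plus small independent noise yield a first canonical correlation close to 1, even when the raw column-wise correlations are diluted by unrelated variables.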

  7. A multiple stage approach to mitigate the risks of telecommunication equipment under free air cooling conditions

    International Nuclear Information System (INIS)

    Dai Jun; Das, Diganta; Pecht, Michael

    2012-01-01

    Highlights: ► Analyze the challenges posed by free air cooling (FAC). ► Present a multi-stage process to mitigate the risks of FAC. ► Propose a prognostics-based method to mitigate risks in data centers in operation. ► Present a case study to show the prognostics-based method implementation. - Abstract: The telecommunication industry is concerned about the energy costs of its operating infrastructure and the associated greenhouse gas emissions. At present, more than half of the total energy consumption of data centers is devoted to the power and cooling infrastructure that supports electronic equipment. One method of reducing energy consumption is an approach called “free air cooling,” where ambient air is used to cool the equipment directly, thereby reducing the energy consumed in cooling and conditioning the air. For example, Intel demonstrated free air cooling in a 10-megawatt (MW) data center, showing a reduction in energy use and savings of US$2.87 million annually. However, the impacts of this approach on the performance and reliability of telecommunication equipment need to be identified. The implementation of free air cooling changes the operating environment, including temperature and humidity, which may have a significant impact on the performance and reliability of telecom equipment. This paper discusses the challenges posed by free air cooling and presents a multi-stage process for evaluating and mitigating the potential risks arising from this new operating environment.

  8. Methods for measuring denitrification: Diverse approaches to a difficult problem

    DEFF Research Database (Denmark)

    Groffman, Peter M.; Altabet, Mark A.; Böhlke, J. K.

    2006-01-01

    , and global scales. Unfortunately, this process is very difficult to measure, and existing methods are problematic for different reasons in different places at different times. In this paper, we review the major approaches that have been taken to measure denitrification in terrestrial and aquatic environments...... based on stable isotopes, (8) in situ gradients with atmospheric environmental tracers, and (9) molecular approaches. Our review makes it clear that the prospects for improved quantification of denitrification vary greatly in different environments and at different scales. While current methodology allows...... for the production of accurate estimates of denitrification at scales relevant to water and air quality and ecosystem fertility questions in some systems (e.g., aquatic sediments, well-defined aquifers), methodology for other systems, especially upland terrestrial areas, still needs development. Comparison of mass...

  9. Source location in plates based on the multiple sensors array method and wavelet analysis

    International Nuclear Information System (INIS)

    Yang, Hong Jun; Shin, Tae Jin; Lee, Sang Kwon

    2014-01-01

    A new method for impact source localization in a plate is proposed based on the multiple signal classification (MUSIC) and wavelet analysis. For source localization, the direction of arrival of the wave caused by an impact on a plate and the distance between the impact position and sensor should be estimated. The direction of arrival can be estimated accurately using the MUSIC method. The distance can be obtained by using the time delay of arrival and the group velocity of the Lamb wave in a plate. The time delay is experimentally estimated using the continuous wavelet transform of the wave. Elastodynamic theory is used for the group velocity estimation.

  10. Source location in plates based on the multiple sensors array method and wavelet analysis

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Hong Jun; Shin, Tae Jin; Lee, Sang Kwon [Inha University, Incheon (Korea, Republic of)

    2014-01-15

    A new method for impact source localization in a plate is proposed based on the multiple signal classification (MUSIC) and wavelet analysis. For source localization, the direction of arrival of the wave caused by an impact on a plate and the distance between the impact position and sensor should be estimated. The direction of arrival can be estimated accurately using the MUSIC method. The distance can be obtained by using the time delay of arrival and the group velocity of the Lamb wave in a plate. The time delay is experimentally estimated using the continuous wavelet transform of the wave. Elastodynamic theory is used for the group velocity estimation.

  11. Simple method for the generation of multiple homogeneous field volumes inside the bore of superconducting magnets.

    Science.gov (United States)

    Chou, Ching-Yu; Ferrage, Fabien; Aubert, Guy; Sakellariou, Dimitris

    2015-07-17

    Standard Magnetic Resonance magnets produce a single homogeneous field volume, where the analysis is performed. Nonetheless, several modern applications could benefit from the generation of multiple homogeneous field volumes along the axis and inside the bore of the magnet. In this communication, we propose a straightforward method using a combination of ring structures of permanent magnets in order to cancel the gradient of the stray field in a series of distinct volumes. These concepts were demonstrated numerically on an experimentally measured magnetic field profile. We discuss advantages and limitations of our method and present the key steps required for an experimental validation.
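
    The gradient-cancellation idea can be illustrated with ideal current loops standing in for the permanent magnet rings: a coaxial pair at the Helmholtz spacing (gap equal to the loop radius) has vanishing first and second on-axis field derivatives at its midpoint, creating a locally homogeneous volume. A sketch under that idealization (not the authors' ring design):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T·m/A)

def loop_bz(z, z0, R=1.0, I=1.0):
    """On-axis field of a circular current loop centered at z0 (radius R, current I)."""
    return MU0 * I * R**2 / (2.0 * (R**2 + (z - z0) ** 2) ** 1.5)

def pair_bz(z, half_gap, R=1.0, I=1.0):
    """On-axis field of a coaxial pair of identical loops at z = +/- half_gap."""
    return loop_bz(z, -half_gap, R, I) + loop_bz(z, half_gap, R, I)
```

    Numerically, the field ripple over the central region at the Helmholtz spacing (half_gap = R/2) is more than an order of magnitude smaller than at a detuned spacing, which is the homogenization effect the ring combinations exploit.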

  12. Multiple travelling wave solutions of nonlinear evolution equations using a unified algebraic method

    International Nuclear Information System (INIS)

    Fan Engui

    2002-01-01

    A new direct and unified algebraic method for constructing multiple travelling wave solutions of general nonlinear evolution equations is presented and implemented in a computer algebraic system. Compared with most of the existing tanh methods, the Jacobi elliptic function method or other sophisticated methods, the proposed method not only gives new and more general solutions, but also provides a guideline to classify the various types of the travelling wave solutions according to the values of some parameters. The solutions obtained in this paper include (a) kink-shaped and bell-shaped soliton solutions, (b) rational solutions, (c) triangular periodic solutions and (d) Jacobi and Weierstrass doubly periodic wave solutions. Among them, the Jacobi elliptic periodic wave solutions exactly degenerate to the soliton solutions at a certain limit condition. The efficiency of the method can be demonstrated on a large variety of nonlinear evolution equations such as those considered in this paper, KdV-MKdV, Ito's fifth MKdV, Hirota, Nizhnik-Novikov-Veselov, Broer-Kaup, generalized coupled Hirota-Satsuma, coupled Schroedinger-KdV, (2+1)-dimensional dispersive long wave, (2+1)-dimensional Davey-Stewartson equations. In addition, as an illustrative sample, the properties of the soliton solutions and Jacobi doubly periodic solutions for the Hirota equation are shown by some figures. The links among our proposed method, the tanh method, extended tanh method and the Jacobi elliptic function method are clarified generally. (author)

  13. Risk Governance of Multiple Natural Hazards: Centralized versus Decentralized Approach in Europe

    Science.gov (United States)

    Komendantova, Nadejda; Scolobig, Anna; Vinchon, Charlotte

    2014-05-01

    The multi-risk approach is a relatively new field; its definition includes the need to consider multiple hazards and vulnerabilities in their interdependency (Selva, 2013), and recent multi-hazard disasters, such as the 2011 Tohoku earthquake, tsunami and nuclear catastrophe, have shown the need for a multi-risk approach in hazard mitigation and management. Our knowledge about multi-risk assessment, including studies from different scientific disciplines and developed assessment tools, is constantly growing (White et al., 2001). However, the link between scientific knowledge, its implementation and the results in terms of improved governance and decision-making has gained significantly less attention (IRGC, 2005; Kappes et al., 2012), even though interest in risk governance in general has increased significantly in recent years (Verweiy and Thompson, 2006). Therefore, the key research question is how risk assessment is implemented and what the potential is for implementing a multi-risk approach in different governance systems across Europe; more precisely, how the characteristics of risk governance, such as the degree of centralization versus decentralization, influence the implementation of a multi-risk approach. The methodology of this research includes a comparative case study analysis of top-down and bottom-up interactions in governance in the city of Naples (Italy), where the institutional landscape is marked by significant autonomy of Italian regions in decision-making processes for assessing the majority of natural risks, excluding volcanic risk, and in Guadeloupe (French West Indies), an overseas department of France, where the decision-making process is marked by greater centralization, associated with well-established state governance within regions delegated to the prefect and decentralised services of central ministries. The research design included documentary analysis and extensive empirical work involving

  14. Multiple-Features-Based Semisupervised Clustering DDoS Detection Method

    Directory of Open Access Journals (Sweden)

    Yonghao Gu

    2017-01-01

    DDoS attack streams converging from different agent hosts at the victim host can become very large, leading to system halt or network congestion. Therefore, it is necessary to propose an effective method to detect DDoS attack behavior in massive data streams. To address the problems that large numbers of labeled data are not available for supervised learning methods and that the unsupervised k-means algorithm has relatively low detection accuracy and convergence speed, this paper presents a semisupervised clustering detection method using multiple features. In this detection method, we first select three features according to the characteristics of DDoS attacks to form the detection feature vector. Then, a Multiple-Features-Based Constrained-K-Means (MF-CKM) algorithm is proposed based on semisupervised clustering. Finally, using the MIT Laboratory Scenario (DDoS) 1.0 data set, we verify that the proposed method can improve the convergence speed and accuracy of the algorithm while using only a small amount of labeled data.
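
The seeding idea behind semisupervised clustering can be sketched in a few lines. The toy below is an illustrative seeded k-means on synthetic 3-feature vectors, not the MF-CKM algorithm itself; the feature values, cluster seeds and sample sizes are all invented for the example.

```python
import numpy as np

def seeded_kmeans(X, labeled_X, labeled_y, n_iter=20):
    """Semisupervised k-means sketch: centroids are seeded from a small
    labeled set, then refined on all (mostly unlabeled) data."""
    k = len(np.unique(labeled_y))
    centers = np.array([labeled_X[labeled_y == c].mean(axis=0) for c in range(k)])
    for _ in range(n_iter):
        # assign every point to its nearest centroid, then recompute centroids
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        for c in range(k):
            if np.any(assign == c):
                centers[c] = X[assign == c].mean(axis=0)
    return assign, centers

# toy 3-feature traffic vectors: "normal" near 0, "attack" near 5
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(5, 1, (50, 3))])
labeled_X = np.array([[0.0, 0.0, 0.0], [5.0, 5.0, 5.0]])  # tiny labeled set
labeled_y = np.array([0, 1])
assign, _ = seeded_kmeans(X, labeled_X, labeled_y)
print(assign[:50].mean(), assign[50:].mean())  # normal ~cluster 0, attack ~cluster 1
```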

  15. Logistic Regression with Multiple Random Effects: A Simulation Study of Estimation Methods and Statistical Packages.

    Science.gov (United States)

    Kim, Yoonsang; Choi, Young-Ku; Emery, Sherry

    2013-08-01

    Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods' performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages, SAS GLIMMIX with Laplace and SuperMix with Gaussian quadrature, perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes.
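
The Gauss-Hermite method mentioned above approximates the marginal likelihood by integrating the random effect over quadrature nodes. Below is a minimal sketch for a single random intercept, a simplification of the multi-random-effect models the article studies; the function name `gh_loglik`, the toy data, and the parameter values are all illustrative.

```python
import numpy as np

def gh_loglik(beta, sigma, y, x, groups, n_nodes=15):
    """Marginal log-likelihood of a random-intercept logistic model,
    integrating out the N(0, sigma^2) random effect with Gauss-Hermite
    quadrature (probabilists' rule, weight exp(-b^2/2))."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_nodes)
    ll = 0.0
    for g in np.unique(groups):
        m = groups == g
        eta = beta * x[m][:, None] + sigma * nodes[None, :]  # predictor at each node
        p = 1.0 / (1.0 + np.exp(-eta))
        lik = np.prod(np.where(y[m][:, None] == 1, p, 1 - p), axis=0)
        ll += np.log((weights * lik).sum() / np.sqrt(2 * np.pi))
    return ll

rng = np.random.default_rng(1)
groups = np.repeat(np.arange(20), 5)            # 20 clusters of 5 observations
x = rng.normal(size=100)
b = rng.normal(0, 0.8, size=20)[groups]         # true random intercepts
y = (rng.random(100) < 1 / (1 + np.exp(-(0.5 * x + b)))).astype(int)
print(gh_loglik(0.5, 0.8, y, x, groups))        # finite marginal log-likelihood
```

With `sigma = 0` the quadrature collapses to the ordinary logistic log-likelihood, which is a convenient sanity check on the weights.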

  16. An efficient method to transcription factor binding sites imputation via simultaneous completion of multiple matrices with positional consistency.

    Science.gov (United States)

    Guo, Wei-Li; Huang, De-Shuang

    2017-08-22

    Transcription factors (TFs) are DNA-binding proteins that have a central role in regulating gene expression. Identification of DNA-binding sites of TFs is a key task in understanding transcriptional regulation, cellular processes and disease. Chromatin immunoprecipitation followed by high-throughput sequencing (ChIP-seq) enables genome-wide identification of in vivo TF binding sites. However, it is still difficult to map every TF in every cell line owing to cost and biological material availability, which poses an enormous obstacle for integrated analysis of gene regulation. To address this problem, we propose a novel computational approach, TFBSImpute, for predicting additional TF binding profiles by leveraging information from available ChIP-seq TF binding data. TFBSImpute fuses the dataset to a 3-mode tensor and imputes missing TF binding signals via simultaneous completion of multiple TF binding matrices with positional consistency. We show that signals predicted by our method achieve overall similarity with experimental data and that TFBSImpute significantly outperforms baseline approaches, by assessing the performance of imputation methods against observed ChIP-seq TF binding profiles. Moreover, motif analysis shows that TFBSImpute performs better in capturing binding motifs enriched in observed data compared with baselines, indicating that the higher performance of TFBSImpute is not simply due to averaging related samples. We anticipate that our approach will constitute a useful complement to experimental mapping of TF binding, which is beneficial for further study of regulation mechanisms and disease.
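
The underlying matrix-completion idea can be illustrated with a deliberately simple baseline. The sketch below performs iterative low-rank SVD imputation on a single synthetic matrix; it is not TFBSImpute (which completes multiple matrices jointly with positional consistency), just the kind of single-matrix baseline such methods are compared against.

```python
import numpy as np

def hard_impute(M, mask, rank=2, n_iter=200):
    """Fill missing entries by alternating between a low-rank SVD
    approximation and restoring the observed entries."""
    X = np.where(mask, M, M[mask].mean())     # initialize missing with the mean
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        low = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        X = np.where(mask, M, low)            # keep observed entries fixed
    return X

rng = np.random.default_rng(0)
M = rng.normal(size=(20, 2)) @ rng.normal(size=(2, 15))  # exactly rank 2
mask = rng.random(M.shape) > 0.1                         # hide ~10% of entries
X = hard_impute(M, mask)
# hidden entries are recovered almost exactly for an exactly low-rank matrix
print(np.abs(X - M)[~mask].max())
```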

  17. Participant perceptions of a novel physiotherapy approach ("Blue Prescription") for increasing levels of physical activity in people with multiple sclerosis: a qualitative study following intervention.

    Science.gov (United States)

    Smith, Catherine M; Hale, Leigh A; Mulligan, Hilda F; Treharne, Gareth J

    2013-07-01

    The aim of this study was to investigate experiences of participating in a feasibility trial of a novel physiotherapy intervention (Blue Prescription). The trial was designed to increase participation in physical activity for people with multiple sclerosis living in the community. We individually interviewed 27 volunteers from two New Zealand metropolitan areas at the conclusion of their participation in Blue Prescription. We asked volunteers about what participation in Blue Prescription had meant to them; how participants intended to continue with their physical activity; how the approach differed from previous experiences of physiotherapy encounters; and how Blue Prescription could be improved. Interviews were semi-structured, audio-recorded, transcribed verbatim, and analysed using a General Inductive Approach. 'Support' was identified as a key theme with three sub-themes: 'The therapeutic relationship'; 'The Blue Prescription approach'; and 'Supporting themselves'. We identified two additional themes: 'Motivation to participate' and 'Improving the Blue Prescription approach'. A novel approach (Blue Prescription) that facilitates engagement in higher levels of desirable physical activity was perceived by participants to be supportive, motivating and enabling. This approach might be particularly useful for people with multiple sclerosis ready to adopt new health-related behaviours. For future studies, this approach requires further refinement, particularly with regard to methods of communication and evaluation.

  18. Methodical Approaches to Communicative Providing of Retailer Branding

    Directory of Open Access Journals (Sweden)

    Andrey Kataev

    2017-07-01

    The thesis is devoted to the rationalization of methodical approaches for the communicative support of branding of retail trade enterprises. The article considers the features of brand perception by retail consumers and clarifies the specifics of customer reviews of stores for the procedures accompanying brand management. It is argued that, besides the traditional communication mix, the most important tool of communicative influence on buyers is the store itself as a place for comfortable shopping. The shop should have a stimulating effect on all five human senses, including sight, smell, hearing, touch and taste, which helps maximize consumer integration into the buying process.

  19. Reasoning methods in medical consultation systems: artificial intelligence approaches.

    Science.gov (United States)

    Shortliffe, E H

    1984-01-01

    It has been argued that the problem of medical diagnosis is fundamentally ill-structured, particularly during the early stages when the number of possible explanations for presenting complaints can be immense. This paper discusses the process of clinical hypothesis evocation, contrasts it with the structured decision making approaches used in traditional computer-based diagnostic systems, and briefly surveys the more open-ended reasoning methods that have been used in medical artificial intelligence (AI) programs. The additional complexity introduced when an advice system is designed to suggest management instead of (or in addition to) diagnosis is also emphasized. Example systems are discussed to illustrate the key concepts.

  20. Augmenting Tertiary Students' Soft Skills Via Multiple Intelligences Instructional Approach: Literature Courses in Focus

    Directory of Open Access Journals (Sweden)

    El Sherief Eman

    2017-01-01

    The second half of the twentieth century witnessed an unprecedented increase in the number of students joining higher education (UNESCO, 2001). Currently, the number of students at Saudi universities and colleges exceeds one million, compared with 7,000 in 1970 (Royal Embassy of Saudi Arabia, Washington). Such an enormous body of learners in higher education is diverse enough to embrace distinct learning styles and an assorted repertoire of backgrounds, prior knowledge, experiences and perspectives; at the same time, they presumably share a common aspiration: securing a suitable post in the labor market upon graduation and subsequently being capable of acting competently in a rigorously competitive workplace environment. A range of capabilities and skills is patently vital for a graduate to reach such a prospect. In the conventional undergraduate paradigm of education, such skills were given no heed, being instead postponed to the post-graduation phase. The current paper postulates the considerable merits of deploying the Multiple Intelligences theory as a project-based approach within literature classes in higher education; a strategy geared towards reigniting students' engagement, nurturing their critical thinking capabilities, sustaining their individual dispositions, molding them as inquiry-seekers, and ultimately engendering life-long, autonomous learners well-armed with the substantial skills needed to traverse the rigorous competition of the future labor market.

  1. Examination of the role of magnetic resonance imaging in multiple sclerosis: A problem-orientated approach

    Directory of Open Access Journals (Sweden)

    McFarland Henry

    2009-01-01

    Magnetic Resonance Imaging (MRI) has brought several benefits to the study of Multiple Sclerosis (MS). It provides accurate measurement of disease activity, facilitates precise diagnosis, and aids in the assessment of newer therapies. The imaging guidelines for MS are broadly divided into approaches for imaging patients with suspected MS or clinically isolated syndromes (CIS) and approaches for monitoring patients with established MS. In this review, the technical aspects of MR imaging for MS are briefly discussed. The imaging process needs to capture the twin aspects of acute MS, viz. the autoimmune acute inflammatory process and the neurodegenerative process. Gadolinium-enhanced MRI can identify acute inflammatory lesions precisely. The commonly applied MRI marker of disease progression is brain atrophy. Whole-brain Magnetization Transfer Ratio (MTR) and Magnetic Resonance Spectroscopy (MRS) are two other techniques used to monitor disease progression. A variety of imaging techniques, such as Double Inversion Recovery (DIR), Spoiled Gradient Recalled (SPGR) acquisition, and Fluid Attenuated Inversion Recovery (FLAIR), have been utilized to study the cortical changes in MS. MRI is now extensively used in Phase I, II and III clinical trials of new therapies. As the technical aspects of MRI advance rapidly and higher field strengths become available, it is hoped that the impact of MRI on our understanding of MS will be even more profound in the next decade.

  2. Management of pregnancy-related issues in multiple sclerosis patients: the need for an interdisciplinary approach.

    Science.gov (United States)

    Amato, Maria Pia; Bertolotto, Antonio; Brunelli, Roberto; Cavalla, Paola; Goretti, Benedetta; Marrosu, Maria Giovanna; Patti, Francesco; Pozzilli, Carlo; Provinciali, Leandro; Rizzo, Nicola; Strobelt, Nicola; Tedeschi, Gioacchino; Trojano, Maria; Comi, Giancarlo

    2017-10-01

    Multiple sclerosis (MS) is a demyelinating and neurodegenerative disease of the central nervous system (CNS), most probably autoimmune in origin, usually occurring in young adults with a female/male prevalence of approximately 3:1. Women with MS of reproductive age may face challenging issues in reconciling the desire for parenthood with their condition, owing both to the possible influence of pregnancy on ongoing or planned treatment, with consequences for the disease course, and to the potential negative effects of treatments on foetal and pregnancy outcomes. At MS diagnosis, timely counselling should promote informed parenthood, while disease evolution should be assessed before making therapeutic decisions. Current guidelines advise the discontinuation of any treatment during pregnancy, with possible exceptions for some treatments in patients with very active disease. Relapses decline during pregnancy but are more frequent during the puerperium, when MS therapy should be promptly resumed in most cases. First-line immunomodulatory agents, such as interferon-β (IFN-β) and glatiramer acetate (GA), significantly reduce the post-partum risk of relapse. Owing to substantial evidence of safety with the use of GA during pregnancy, a recent change in European marketing authorization removed the pregnancy contraindication for GA. This paper reports a consensus of Italian experts involved in MS management, including neurologists, gynaecologists and psychologists. This consensus, based on a review of the available scientific evidence, promotes an interdisciplinary approach to the management of pregnancy in MS women.

  3. Drug repurposing: a systematic approach to evaluate candidate oral neuroprotective interventions for secondary progressive multiple sclerosis.

    Directory of Open Access Journals (Sweden)

    Hanna M Vesterinen

    To develop and implement an evidence-based framework to select, from drugs already licensed, candidate oral neuroprotective drugs to be tested in secondary progressive multiple sclerosis. Systematic review of clinical studies of oral putative neuroprotective therapies in MS and four other neurodegenerative diseases with shared pathological features, followed by systematic review and meta-analyses of the in vivo experimental data for those interventions. We presented summary data to an international multi-disciplinary committee, which assessed each drug in turn using pre-specified criteria, including consideration of mechanism of action. We identified a short list of fifty-two candidate interventions. After review of all clinical and pre-clinical evidence we identified ibudilast, riluzole, amiloride, pirfenidone, fluoxetine, oxcarbazepine, and the polyunsaturated fatty-acid class (linoleic acid, lipoic acid, omega-3 fatty acid, Max EPA oil) as lead candidates for clinical evaluation. We demonstrate a standardised and systematic approach to candidate identification for drug rescue and repurposing trials that can be applied widely to neurodegenerative disorders.

  4. A Psychoacoustic-Based Multiple Audio Object Coding Approach via Intra-Object Sparsity

    Directory of Open Access Journals (Sweden)

    Maoshen Jia

    2017-12-01

    Rendering spatial sound scenes via audio objects has become popular in recent years, since it can provide more flexibility for different auditory scenarios, such as 3D movies, spatial audio communication and virtual classrooms. To facilitate high-quality bitrate-efficient distribution for spatial audio objects, an encoding scheme based on intra-object sparsity (approximate k-sparsity of the audio object itself) is proposed in this paper. A statistical analysis is presented to validate the notion that the audio object has a stronger sparseness in the Modified Discrete Cosine Transform (MDCT) domain than in the Short Time Fourier Transform (STFT) domain. By exploiting intra-object sparsity in the MDCT domain, multiple simultaneously occurring audio objects are compressed into a mono downmix signal with side information. To ensure a balanced perception quality of audio objects, a psychoacoustic-based time-frequency instants sorting algorithm and an energy-equalized Number of Preserved Time-Frequency Bins (NPTF) allocation strategy are proposed, which are employed in the underlying compression framework. The downmix signal can be further encoded via the Scalar Quantized Vector Huffman Coding (SQVH) technique at a desirable bitrate, and the side information is transmitted in a lossless manner. Both objective and subjective evaluations show that the proposed encoding scheme outperforms the Sparsity Analysis (SPA) approach and Spatial Audio Object Coding (SAOC) in cases where eight objects were jointly encoded.
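
The notion of approximate k-sparsity is easy to demonstrate numerically. The sketch below uses the real FFT as a stand-in for the MDCT (a simplification; the paper's point is that the MDCT concentrates energy even better than Fourier-type transforms) to show that a harmonic audio-like frame packs essentially all of its energy into a few coefficients.

```python
import numpy as np

# A frame made of two harmonics placed exactly on FFT bins 56 and 112,
# so the transform-domain representation is exactly 2-sparse.
n = 1024
k = np.arange(n)
frame = np.sin(2 * np.pi * 56 * k / n) + 0.5 * np.sin(2 * np.pi * 112 * k / n)
energy = np.abs(np.fft.rfft(frame))**2
top4_share = np.sort(energy)[::-1][:4].sum() / energy.sum()
print(round(top4_share, 6))  # 1.0: the frame is approximately 2-sparse
```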

  5. Allocation of police posts in a medium-sized city: a multiple criteria approach

    Directory of Open Access Journals (Sweden)

    Charles Miller Gois de Oliveira

    2015-09-01

    Changes in violent crime rates imply a need for strategies to protect the population. Public safety policies should be focused on preventive security, which has the potential to reduce crime rates. The allocation of police observation posts contributes to an increase in safety. However, given limited resources, the number of observation units has been reduced in most Brazilian cities. Therefore, efficient allocation of these resources is necessary to ensure rational use of security agents. Several aspects influence decisions related to where to install these units, complicating this process. The multiple criteria approach is appropriate in this type of decision-making process, because it allows the decision maker to generate and consolidate knowledge. The results indicate that the most suitable neighborhoods are those with higher rates of violence and greater social gaps. This work presents some benefits to the area of public security, since it formalizes tacit knowledge into explicit knowledge. The information contained in this study may be made available to other public administrators who need to make this kind of decision.

  6. A consensus successive projections algorithm--multiple linear regression method for analyzing near infrared spectra.

    Science.gov (United States)

    Liu, Ke; Chen, Xiaojing; Li, Limin; Chen, Huiling; Ruan, Xiukai; Liu, Wenbin

    2015-02-09

    The successive projections algorithm (SPA) is widely used to select variables for multiple linear regression (MLR) modeling. However, SPA used only once may not capture all the useful information of the full spectra, because the number of selected variables cannot exceed the number of calibration samples in the SPA algorithm. Therefore, the SPA-MLR method risks the loss of useful information. To make full use of the useful information in the spectra, a new method named "consensus SPA-MLR" (C-SPA-MLR) is proposed herein. This method combines a consensus strategy with the SPA-MLR method. In the C-SPA-MLR method, SPA-MLR is used to construct member models with different subsets of variables, which are selected from the remaining variables iteratively. A consensus prediction is obtained by combining the predictions of the member models. The proposed method is evaluated by analyzing the near infrared (NIR) spectra of corn and diesel. The C-SPA-MLR method showed a better prediction performance compared with the SPA-MLR and full-spectra PLS methods. Moreover, these results could serve as a reference for combining the consensus strategy with other variable selection methods when analyzing NIR spectra and other spectroscopic data. Copyright © 2014 Elsevier B.V. All rights reserved.
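
The consensus step can be sketched as follows: fit one MLR member model per variable subset and average their predictions. In this illustrative version the subsets are fixed by hand rather than selected iteratively with SPA, and the data are synthetic, so it shows only the consensus-combination idea, not C-SPA-MLR itself.

```python
import numpy as np

def consensus_mlr(X, y, subsets):
    """Fit one ordinary-least-squares member model per variable subset
    and return a predictor that averages the member predictions."""
    models = []
    for cols in subsets:
        A = np.column_stack([np.ones(len(y)), X[:, cols]])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        models.append((cols, coef))
    def predict(Xn):
        preds = [np.column_stack([np.ones(len(Xn)), Xn[:, cols]]) @ coef
                 for cols, coef in models]
        return np.mean(preds, axis=0)
    return predict

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = X[:, 0] + X[:, 1]                     # toy response with two useful variables
predict = consensus_mlr(X, y, [[0, 1], [0], [1]])
print(round(np.corrcoef(predict(X), y)[0, 1], 3))  # close to 1.0
```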

  7. Meta-analysis methods for combining multiple expression profiles: comparisons, statistical characterization and an application guideline.

    Science.gov (United States)

    Chang, Lun-Ching; Lin, Hui-Min; Sibille, Etienne; Tseng, George C

    2013-12-21

    As high-throughput genomic technologies become accurate and affordable, an increasing number of data sets have been accumulated in the public domain, and genomic information integration and meta-analysis have become routine in biomedical research. In this paper, we focus on microarray meta-analysis, where multiple microarray studies with relevant biological hypotheses are combined in order to improve candidate marker detection. Many methods have been developed and applied in the literature, but their performance and properties have only been minimally investigated. There is currently no clear conclusion or guideline as to the proper choice of a meta-analysis method for a given application; the decision essentially requires both statistical and biological considerations. We applied 12 microarray meta-analysis methods to combine multiple simulated expression profiles; these methods can be categorized by hypothesis setting: (1) HS(A): DE genes with non-zero effect sizes in all studies, (2) HS(B): DE genes with non-zero effect sizes in one or more studies and (3) HS(r): DE genes with non-zero effect in a "majority" of studies. We then performed a comprehensive comparative analysis through six large-scale real applications using four quantitative statistical evaluation criteria: detection capability, biological association, stability and robustness. We elucidated the hypothesis settings behind the methods and further applied multi-dimensional scaling (MDS) and an entropy measure to characterize the meta-analysis methods and data structure, respectively. The aggregated results from the simulation study categorized the 12 methods into three hypothesis settings (HS(A), HS(B) and HS(r)). Evaluation in real data and results from the MDS and entropy analyses provided an insightful and practical guideline for choosing the most suitable method in a given application. All source files for simulation and real data are available on the author's publication website.
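
As a concrete example of an HS(B)-style combiner, Fisher's method aggregates per-study p-values into one statistic and flags a gene when at least one study shows an effect. The sketch below is generic (not the paper's exact implementation) and self-contained: the chi-square survival function has a closed form for even degrees of freedom, so no statistics library is needed.

```python
import math

def fisher_combine(pvals):
    """Fisher's method: under the null, -2*sum(log p_i) follows a
    chi-square distribution with 2k degrees of freedom."""
    stat = -2.0 * sum(math.log(p) for p in pvals)
    k = len(pvals)
    # closed-form chi-square survival function for even df = 2k:
    # P(X > x) = exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!
    term, sf = 1.0, 0.0
    for i in range(k):
        sf += term
        term *= (stat / 2.0) / (i + 1)
    return math.exp(-stat / 2.0) * sf

print(round(fisher_combine([0.5, 0.5, 0.5]), 4))   # 0.6552
print(fisher_combine([0.01, 0.02, 0.03]))          # small combined p-value
```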

  8. A geologic approach to field methods in fluvial geomorphology

    Science.gov (United States)

    Fitzpatrick, Faith A.; Thornbush, Mary J; Allen, Casey D; Fitzpatrick, Faith A.

    2014-01-01

    A geologic approach to field methods in fluvial geomorphology is useful for understanding causes and consequences of past, present, and possible future perturbations in river behavior and floodplain dynamics. Field methods include characterizing river planform and morphology changes and floodplain sedimentary sequences over long periods of time along a longitudinal river continuum. Techniques include topographic and bathymetric surveying of fluvial landforms in valley bottoms and describing floodplain sedimentary sequences through coring, trenching, and examining pits and exposures. Historical sediment budgets that include floodplain sedimentary records can characterize past and present sources and sinks of sediment along a longitudinal river continuum. Describing paleochannels and floodplain vertical accretion deposits, estimating long-term sedimentation rates, and constructing historical sediment budgets can assist in management of aquatic resources, habitat, sedimentation, and flooding issues.

  9. Compression-RSA: New approach of encryption and decryption method

    Science.gov (United States)

    Hung, Chang Ee; Mandangan, Arif

    2013-04-01

    The Rivest-Shamir-Adleman (RSA) cryptosystem is a well-known asymmetric cryptosystem that has been applied in a very wide range of areas. Much research with different approaches has been carried out in order to improve the security and performance of the RSA cryptosystem. The enhancement of the performance of the RSA cryptosystem is our main interest. In this paper, we propose a new method to increase the efficiency of RSA by shortening the plaintext before it undergoes the encryption process, without affecting the original content of the plaintext. The concept of simple continued fractions and their special relationship with the Euclidean algorithm are applied in this newly proposed method. By reducing the number of plaintext-ciphertext blocks, the encryption and decryption of a secret message can be accelerated.
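
The relationship the method builds on is that the partial quotients of a simple continued fraction are exactly the quotients produced by the Euclidean algorithm. A minimal sketch of that correspondence (generic number theory, not the paper's compression scheme itself):

```python
def continued_fraction(a, b):
    """Continued-fraction expansion of a/b; each quotient a // b is
    precisely a quotient of the Euclidean algorithm applied to (a, b)."""
    cf = []
    while b:
        cf.append(a // b)
        a, b = b, a % b
    return cf

def from_cf(cf):
    """Collapse an expansion back into a fraction (numerator, denominator)."""
    num, den = cf[-1], 1
    for q in reversed(cf[:-1]):
        num, den = q * num + den, num
    return num, den

print(continued_fraction(649, 200))           # [3, 4, 12, 4]
print(from_cf(continued_fraction(649, 200)))  # (649, 200): round trip is exact
```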

  10. A neutron multiplicity analysis method for uranium samples with liquid scintillators

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Hao, E-mail: zhouhao_ciae@126.com [China Institute of Atomic Energy, P.O. Box 275-8, Beijing 102413 (China); Lin, Hongtao [Xi'an Research Institute of High-tech, Xi'an, Shaanxi 710025 (China); Liu, Guorong; Li, Jinghuai; Liang, Qinglei; Zhao, Yonggang [China Institute of Atomic Energy, P.O. Box 275-8, Beijing 102413 (China)

    2015-10-11

    A new neutron multiplicity analysis method for uranium samples with liquid scintillators is introduced. An active well-type fast neutron multiplicity counter has been built, which consists of four BC501A liquid scintillators, an n/γ discrimination module MPD-4, a multi-stop time-to-digital converter MCS6A, and two Am–Li sources. A mathematical model is built to represent the detection processes of fission neutrons. Based on this model, equations of the form R=F*P*Q*T are obtained, where F denotes the induced fission rate due to the interrogation sources, P the transfer matrix determined by the multiplication process, Q the transfer matrix determined by detection efficiency, and T the transfer matrix determined by the signal recording process and crosstalk in the counter. Unknown parameters of the item are determined by solving these equations. A ²⁵²Cf source and some low enriched uranium items have been measured. The feasibility of the method is proven by its application to the data analysis of the experiments.

  11. A novel method for the sequential removal and separation of multiple heavy metals from wastewater.

    Science.gov (United States)

    Fang, Li; Li, Liang; Qu, Zan; Xu, Haomiao; Xu, Jianfang; Yan, Naiqiang

    2018-01-15

    A novel method was developed and applied for the treatment of simulated wastewater containing multiple heavy metals. A sorbent of ZnS nanocrystals (NCs) was synthesized and showed extraordinary performance for the removal of Hg²⁺, Cu²⁺, Pb²⁺ and Cd²⁺. The removal efficiencies of Hg²⁺, Cu²⁺, Pb²⁺ and Cd²⁺ were 99.9%, 99.9%, 90.8% and 66.3%, respectively. Meanwhile, it was determined that the solubility product (Ksp) of each heavy metal sulfide was closely related to the adsorption selectivity of the various heavy metals on the sorbent. The removal efficiency of Hg²⁺ was higher than that of Cd²⁺, while the Ksp of HgS was lower than that of CdS. It indicated that preferential adsorption of heavy metals occurred when the Ksp of the heavy metal sulfide was lower. In addition, the differences in the Ksp of heavy metal sulfides allowed for the exchange of heavy metals, indicating the potential application for the sequential removal and separation of heavy metals from wastewater. According to the cumulative adsorption experimental results, multiple heavy metals were sequentially adsorbed and separated from the simulated wastewater in the order of the Ksp of their sulfides. This method holds the promise of sequentially removing and separating multiple heavy metals from wastewater. Copyright © 2017 Elsevier B.V. All rights reserved.
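
The reported selectivity order can be reproduced from standard solubility-product tables. The Ksp values below are approximate, order-of-magnitude figures added for illustration (they are not taken from the abstract); sorting by Ksp yields the same removal order as the measured efficiencies.

```python
# Approximate 25 °C solubility products of the metal sulfides (illustrative,
# order-of-magnitude values from standard tables): the lower the Ksp of the
# sulfide, the earlier the metal is captured by the ZnS sorbent.
ksp = {"HgS": 2e-53, "CuS": 6e-36, "PbS": 3e-28, "CdS": 8e-27}
removal_order = sorted(ksp, key=ksp.get)
print(removal_order)  # ['HgS', 'CuS', 'PbS', 'CdS']
```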

  12. Supervised pre-processing approaches in multiple class variables classification for fish recruitment forecasting

    KAUST Repository

    Fernandes, José Antonio

    2013-02-01

    A multi-species approach to fisheries management requires taking into account the interactions between species in order to improve recruitment forecasting of the fish species. Recent advances in Bayesian networks allow models with several interrelated variables to be learned and forecasted simultaneously. These models are known as multi-dimensional Bayesian network classifiers (MDBNs). Pre-processing steps are critical for the posterior learning of the model in these kinds of domains. Therefore, in the present study, a set of 'state-of-the-art' uni-dimensional pre-processing methods, within the categories of missing data imputation, feature discretization and feature subset selection, are adapted to be used with MDBNs. A framework that includes the proposed multi-dimensional supervised pre-processing methods, coupled with a MDBN classifier, is tested with synthetic datasets and the real domain of fish recruitment forecasting. The rate of correctly forecasting three fish species (anchovy, sardine and hake) simultaneously is doubled (from 17.3% to 29.5%) using the multi-dimensional approach in comparison to mono-species models. The probability assessments also show a marked improvement, reducing the average error (estimated by means of the Brier score) from 0.35 to 0.27. Finally, these gains exceed those obtained by forecasting the species in pairs. © 2012 Elsevier Ltd.
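
The Brier score used above to assess probability forecasts is simply the mean squared difference between predicted probabilities and observed 0/1 outcomes (lower is better). A minimal sketch with invented example forecasts:

```python
import numpy as np

def brier_score(probs, outcomes):
    """Mean squared difference between forecast probabilities and
    the observed binary outcomes; 0 is a perfect forecast."""
    probs, outcomes = np.asarray(probs, float), np.asarray(outcomes, float)
    return float(np.mean((probs - outcomes) ** 2))

# confident correct forecasts score well, confident wrong ones score badly
print(brier_score([0.9, 0.2, 0.7], [1, 0, 1]))  # (0.01 + 0.04 + 0.09) / 3
```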

  13. Formalized Conflicts Detection Based on the Analysis of Multiple Emails: An Approach Combining Statistics and Ontologies

    Science.gov (United States)

    Zakaria, Chahnez; Curé, Olivier; Salzano, Gabriella; Smaïli, Kamel

    In Computer Supported Cooperative Work (CSCW), it is crucial for project leaders to detect conflicting situations as early as possible. Generally, this task is performed manually by studying a set of documents exchanged between team members. In this paper, we propose a full-fledged automatic solution that identifies documents, subjects and actors involved in relational conflicts. Our approach detects conflicts in emails, probably the most popular type of document in CSCW, but the methods used can handle other text-based documents. These methods rely on the combination of statistical and ontological operations. The proposed solution is decomposed into several steps: (i) we enrich a simple negative emotion ontology with terms occurring in the corpus of emails, (ii) we categorize each conflicting email according to the concepts of this ontology and (iii) we identify emails, subjects and team members involved in conflicting emails using possibilistic description logic and a set of proposed measures. Each of these steps is evaluated and validated on concrete examples. Moreover, the approach's framework is generic and can be easily adapted to domains other than conflicts, e.g. security issues, and extended with operations making use of our proposed set of measures.

  14. Nonlinear coupled mode approach for modeling counterpropagating solitons in the presence of disorder-induced multiple scattering in photonic crystal waveguides

    Science.gov (United States)

    Mann, Nishan; Hughes, Stephen

    2018-02-01

    We present the analytical and numerical details behind our recently published article [Phys. Rev. Lett. 118, 253901 (2017), 10.1103/PhysRevLett.118.253901], describing the impact of disorder-induced multiple scattering on counterpropagating solitons in photonic crystal waveguides. Unlike current nonlinear approaches using the coupled mode formalism, we account for the effects of intraunit cell multiple scattering. To solve the resulting system of coupled semilinear partial differential equations, we introduce a modified Crank-Nicolson-type norm-preserving implicit finite difference scheme inspired by the transfer matrix method. We provide estimates of the numerical dispersion characteristics of our scheme so that optimal step sizes can be chosen to either minimize numerical dispersion or to mimic the exact dispersion. We then show numerical results of a fundamental soliton propagating in the presence of multiple scattering to demonstrate that choosing a subunit cell spatial step size is critical in accurately capturing the effects of multiple scattering, and illustrate the stochastic nature of disorder by simulating soliton propagation in various instances of disordered photonic crystal waveguides. Our approach is easily extended to include a wide range of optical nonlinearities and is applicable to various photonic nanostructures where power propagation is bidirectional, either by choice, or as a result of multiple scattering.
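A Crank-Nicolson scheme preserves the discrete norm when it is the Cayley transform of a Hermitian operator. A generic sketch for a Schrödinger-type equation i u_t = -u_xx, solved with the Thomas tridiagonal algorithm, illustrates this norm-preservation property; it is not the paper's coupled semilinear soliton system, and the grid sizes and Gaussian initial condition are assumptions:

```python
import cmath

def thomas(a, b, c, d):
    """Solve a tridiagonal system (a=sub, b=diag, c=super, d=rhs)."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

N, dx, dt = 64, 0.1, 0.01
alpha = 0.5j * dt / dx ** 2
u = [cmath.exp(-((i - N / 2) * dx) ** 2) for i in range(N)]

# (I - i dt/2 L) u^{n+1} = (I + i dt/2 L) u^n, L = tridiag(1,-2,1)/dx^2
sub = [-alpha] * N
sup = [-alpha] * N
diag = [1 + 2 * alpha] * N
norm = sum(abs(v) ** 2 for v in u) ** 0.5
for _ in range(20):
    # explicit half step with zero (Dirichlet) boundaries
    rhs = [(1 - 2 * alpha) * u[i]
           + alpha * (u[i - 1] if i > 0 else 0)
           + alpha * (u[i + 1] if i < N - 1 else 0) for i in range(N)]
    u = thomas(sub, diag, sup, rhs)

print(abs(sum(abs(v) ** 2 for v in u) ** 0.5 - norm) < 1e-9)  # True
```

Because L is real symmetric, the update operator (I - i dt/2 L)^{-1}(I + i dt/2 L) is unitary, so the norm is conserved to machine precision regardless of step size.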

  15. A mixed methods study of multiple health behaviors among individuals with stroke

    Directory of Open Access Journals (Sweden)

    Matthew Plow

    2017-05-01

    Full Text Available Background Individuals with stroke often have multiple cardiovascular risk factors that necessitate promoting engagement in multiple health behaviors. However, observational studies of individuals with stroke have typically focused on promoting a single health behavior. Thus, there is a poor understanding of linkages between healthy behaviors and the circumstances in which factors, such as stroke impairments, may influence a single or multiple health behaviors. Methods We conducted a mixed methods convergent parallel study of 25 individuals with stroke to examine the relationships between stroke impairments and physical activity, sleep, and nutrition. Our goal was to gain further insight into possible strategies to promote multiple health behaviors among individuals with stroke. This study focused on physical activity, sleep, and nutrition because of their importance in achieving energy balance, maintaining a healthy weight, and reducing cardiovascular risks. Qualitative and quantitative data were collected concurrently, with the former prioritized in order to develop a conceptual model of engagement in multiple health behaviors among individuals with stroke. Qualitative and quantitative data were analyzed independently and then were integrated during the inference stage to develop meta-inferences. The 25 individuals with stroke completed closed-ended questionnaires on healthy behaviors and physical function. They also participated in face-to-face focus groups and one-to-one phone interviews. Results We found statistically significant and moderate correlations between hand function and healthy eating habits (r = 0.45), sleep disturbances and limitations in activities of daily living (r = −0.55), BMI and limitations in activities of daily living (r = −0.49), physical activity and limitations in activities of daily living (r = 0.41), mobility impairments and BMI (r
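The correlations reported above are standard Pearson product-moment coefficients; a minimal sketch of the computation with made-up scores (the data below are illustrative, not the study's):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient between two samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores for 5 participants
hand_function = [3, 5, 4, 2, 6]
eating_habits = [10, 14, 12, 9, 15]
print(round(pearson_r(hand_function, eating_habits), 3))  # 0.992
```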

  16. Hybrid approaches for multiple-species stochastic reaction-diffusion models

    Science.gov (United States)

    Spill, Fabian; Guerrero, Pilar; Alarcon, Tomas; Maini, Philip K.; Byrne, Helen

    2015-10-01

    Reaction-diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction-diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model.
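The stochastic / mean-field correspondence underlying the hybrid scheme can be illustrated by evolving a compartment-based random walk alongside its discretised diffusion-PDE analogue. This toy sketch is not the paper's coupled-domain algorithm; the jump rate, grid size and initial condition are assumptions. It does, however, exhibit the particle-conservation property the scheme is designed to respect:

```python
import random

random.seed(1)
K, d, steps = 10, 0.1, 500      # compartments, jump prob per step, time steps

n = [100 if k == K // 2 else 0 for k in range(K)]   # stochastic particle counts
u = [float(v) for v in n]                           # mean-field analogue

for _ in range(steps):
    moves = [0] * K
    for k in range(K):
        for _ in range(n[k]):   # each particle jumps left/right w.p. d/2 each
            r = random.random()
            if r < d / 2:
                if k > 0:       # reflecting boundary: blocked jumps stay put
                    moves[k] -= 1
                    moves[k - 1] += 1
            elif r < d:
                if k < K - 1:
                    moves[k] -= 1
                    moves[k + 1] += 1
    n = [n[k] + moves[k] for k in range(K)]
    # discretised diffusion with zero-flux (reflecting) boundaries
    u = [u[k] + d / 2 * ((u[k - 1] if k > 0 else u[k]) - 2 * u[k]
                         + (u[k + 1] if k < K - 1 else u[k]))
         for k in range(K)]

print(sum(n))                   # 100: particle number conserved
```

The deterministic profile u tracks the expected counts, while n retains the fluctuations (e.g. local extinctions) that the mean-field description cannot show.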

  17. Hybrid approaches for multiple-species stochastic reaction-diffusion models.

    KAUST Repository

    Spill, Fabian

    2015-10-01

    Reaction-diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction-diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model.

  18. Hybrid approaches for multiple-species stochastic reaction-diffusion models.

    KAUST Repository

    Spill, Fabian; Guerrero, Pilar; Alarcon, Tomas; Maini, Philip K; Byrne, Helen

    2015-01-01

    Reaction-diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction-diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model.

  19. METHODICAL APPROACH TO AN ESTIMATION OF PROFESSIONALISM OF AN EMPLOYEE

    Directory of Open Access Journals (Sweden)

    Татьяна Александровна Коркина

    2013-08-01

    Full Text Available Analysis of definitions of «professionalism», reflecting the different viewpoints of scientists and practitioners, has shown that it is interpreted as a specific property of people that enables them to carry out labour activity effectively and reliably in a variety of conditions. The article presents a methodical approach to estimating the professionalism of an employee, both from the standpoint of the external manifestations of the reliability and effectiveness of the work and from the standpoint of the personal characteristics of the employee that determine the results of his work. This approach includes assessing the level of qualification and motivation of the employee for each key job function, as well as the final results of its performance against the criteria of efficiency and reliability. The proposed methodological approach to estimating the professionalism of an employee makes it possible to identify «bottlenecks» in the structure of his labour functions and to define directions for developing the professional qualities of the worker to ensure the required level of reliability and efficiency of the obtained results. DOI: http://dx.doi.org/10.12731/2218-7405-2013-6-11

  20. A Multiple Data Fusion Approach to Wheel Slip Control for Decentralized Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Dejun Yin

    2017-04-01

    Full Text Available Currently, active safety control methods for cars, i.e., the antilock braking system (ABS), the traction control system (TCS), and electronic stability control (ESC), govern the wheel slip control based on the wheel slip ratio, which relies on the information from non-driven wheels. However, these methods are not applicable in the cases without non-driven wheels, e.g., a four-wheel decentralized electric vehicle. Therefore, this paper proposes a new wheel slip control approach based on a novel data fusion method to ensure good traction performance in any driving condition. Firstly, with the proposed data fusion algorithm, the acceleration estimator makes use of the data measured by the sensor installed near the vehicle center of mass (CM) to calculate the reference acceleration of each wheel center. Then, the wheel slip is constrained by controlling the acceleration deviation between the actual wheel and the reference wheel center. By comparison with non-control and model following control (MFC) cases in double lane change tests, the simulation results demonstrate that the proposed control method has significant anti-slip effectiveness and stabilizing control performance.
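The wheel slip ratio that conventional ABS/TCS/ESC methods rely on has a standard textbook definition; a minimal sketch (the numbers are hypothetical, and this is the classical ratio, not the paper's data-fusion estimator):

```python
def slip_ratio(wheel_omega, wheel_radius, vehicle_speed, eps=1e-6):
    """Longitudinal slip ratio: positive when driving (wheel surface faster
    than the vehicle), negative when braking.  Standard definition; eps
    guards against division by zero at standstill."""
    wheel_speed = wheel_omega * wheel_radius
    return (wheel_speed - vehicle_speed) / max(wheel_speed, vehicle_speed, eps)

# Driven wheel spinning up on a low-friction surface (hypothetical numbers:
# 60 rad/s, 0.3 m radius -> 18 m/s wheel speed vs. 15 m/s vehicle speed)
print(round(slip_ratio(60.0, 0.3, 15.0), 2))  # 0.17
```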