WorldWideScience

Sample records for multiple purging methods

  1. Gas centrifuge purge method

    Theurich, Gordon R.

    1976-01-01

    1. In a method of separating isotopes in a high speed gas centrifuge wherein a vertically oriented cylindrical rotor bowl is adapted to rotate about its axis within an evacuated chamber, and wherein an annular molecular pump having an intake end and a discharge end encircles the uppermost portion of said rotor bowl, said molecular pump being attached along its periphery in a leak-tight manner to said evacuated chamber, and wherein end cap closure means are affixed to the upper end of said rotor bowl, and a process gas withdrawal and insertion system enters said bowl through said end cap closure means, said evacuated chamber, molecular pump and end cap defining an upper zone at the discharge end of said molecular pump, said evacuated chamber, molecular pump and rotor bowl defining a lower annular zone at the intake end of said molecular pump, a method for removing gases from said upper and lower zones during centrifuge operation with a minimum loss of process gas from said rotor bowl, comprising, in combination: continuously measuring the pressure in said upper zone, pumping gas from said lower zone from the time the pressure in said upper zone equals a first preselected value until the pressure in said upper zone is equal to a second preselected value, said first preselected value being greater than said second preselected value, and continuously pumping gas from said upper zone from the time the pressure in said upper zone equals a third preselected value until the pressure in said upper zone is equal to a fourth preselected value, said third preselected value being greater than said first, second and fourth preselected values.
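
    The claim's threshold logic amounts to a two-band hysteresis controller driven by the upper-zone pressure. A hypothetical sketch of that logic (the class name, units, and the illustrative threshold values are ours, not from the patent):

```python
# Hypothetical sketch of the two-threshold (hysteresis) purge logic in the
# claim. The claim requires p3 > p1 > p2 and p3 > p4; actual values and
# units are not given in the abstract and are illustrative here.
class PurgeController:
    def __init__(self, p1, p2, p3, p4):
        assert p3 > p1 > p2 and p3 > p4
        self.p1, self.p2, self.p3, self.p4 = p1, p2, p3, p4
        self.lower_pumping = False   # pump on the lower annular zone
        self.upper_pumping = False   # pump on the upper zone

    def update(self, upper_pressure):
        """Update both pump states from one upper-zone pressure reading."""
        # Lower-zone pump: starts when pressure reaches p1, runs until
        # pressure falls to p2 (hysteresis band p2..p1).
        if upper_pressure >= self.p1:
            self.lower_pumping = True
        elif upper_pressure <= self.p2:
            self.lower_pumping = False
        # Upper-zone pump: starts at p3, runs until pressure falls to p4.
        if upper_pressure >= self.p3:
            self.upper_pumping = True
        elif upper_pressure <= self.p4:
            self.upper_pumping = False
        return self.lower_pumping, self.upper_pumping
```

    Because each pump switches on at a higher pressure than it switches off, each excursion triggers one bounded pumping episode rather than rapid cycling, which is consistent with the claim's goal of minimizing process gas loss.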

  2. Method of controlling weld chamber purge and cover gas atmosphere

    Yeo, D.

    1992-01-01

    A method of controlling the gas atmosphere in a welding chamber includes detecting the absence of a fuel rod from the welding chamber and, in response thereto, initiating the supplying of a flow of argon gas to the chamber to purge air therefrom. Further, the method includes detecting the entry of a fuel rod in the welding chamber and, in response thereto, terminating the supplying of the flow of argon gas to the chamber and initiating the supplying of a flow of helium gas to the chamber to purge argon gas therefrom and displace the argon gas in the chamber. Also, the method includes detecting the withdrawal of the fuel rod from the welding chamber and, in response thereto, terminating the supplying of the flow of helium gas to the chamber and initiating the supplying of argon to the chamber to purge the air therefrom. The method also includes detecting the initiation of a weld cycle and, in response thereto, momentarily supplying a flow of argon gas to the welding electrode tip for initiating the welding arc. (Author)
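
    The method reads as a small event-driven state machine: each detected chamber event selects the next purge gas. A hypothetical sketch (the event names and state dictionary are illustrative assumptions, not from the patent):

```python
# Illustrative event-driven sketch of the described weld-chamber gas
# sequence. Event names and the state representation are assumptions.
def handle_event(event, state):
    """Map one chamber event to the commanded gas flows."""
    if event == "rod_absent":        # no rod: purge air with argon
        state["chamber_gas"] = "argon"
    elif event == "rod_entered":     # rod present: displace argon with helium
        state["chamber_gas"] = "helium"
    elif event == "rod_withdrawn":   # rod gone: argon again to purge air
        state["chamber_gas"] = "argon"
    elif event == "weld_start":      # momentary argon pulse at electrode tip
        state["tip_argon_pulse"] = True  # assumed reset by a hardware timer
    return state
```

    A usage pass over the sequence in the abstract (`rod_absent` → `rod_entered` → `weld_start` → `rod_withdrawn`) yields argon, then helium, a tip pulse, then argon again.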

  3. Molecular purging of multiple myeloma cells by ex-vivo culture and retroviral transduction of mobilized-blood CD34+ cells

    Corneo Gianmarco

    2007-07-01

    Abstract. Background: Tumor cell contamination of the apheresis in multiple myeloma is likely to affect disease-free and overall survival after autografting. Objective: To purge myeloma aphereses of tumor contaminants with a novel culture-based purging method. Methods: We cultured myeloma-positive CD34+ PB samples under conditions that retained the multipotency of hematopoietic stem cells but were unfavourable to the survival of plasma cells. Moreover, we exploited the resistance of myeloma plasma cells to retroviral transduction by targeting the hematopoietic CD34+ cell population with a retroviral vector carrying a selectable marker (the truncated form of the human receptor for nerve growth factor, ΔNGFR). We therefore performed a further myeloma purging step by selecting the transduced cells at the end of the culture. Results: Overall recovery of CD34+ cells after culture was 128.5%; the ΔNGFR transduction rate was 28.8% for CD34+ cells and 0% for CD138-selected primary myeloma cells, respectively. Recovery of CD34+ cells after ΔNGFR selection was 22.3%. By patient-specific Ig-gene rearrangements, we assessed a decrease of 0.7–1.4 logs in tumor load after the CD34+ cell selection, and up to 2.3 logs after culture and ΔNGFR selection. Conclusion: We conclude that ex-vivo culture and retroviral-mediated transduction of myeloma leukaphereses provide efficient tumor cell purging.

  4. Comparison of dialysis membrane diffusion samplers and two purging methods in bedrock wells

    Imbrigiotta, T.E.; Ehlke, T.A.; Lacombe, P.J.; Dale, J.M.; ,

    2002-01-01

    Collection of ground-water samples from bedrock wells using low-flow purging techniques is problematic because of the random spacing, variable hydraulic conductivity, and variable contamination of contributing fractures in each well's open interval. To test alternatives to this purging method, a field comparison of three ground-water-sampling techniques was conducted on wells in fractured bedrock at a site contaminated primarily with volatile organic compounds. Constituent concentrations in samples collected with a diffusion sampler constructed from dialysis membrane material were compared to those in samples collected from the same wells with a standard low-flow purging technique and a hybrid (high-flow/low-flow) purging technique. Concentrations of trichloroethene, cis-1,2-dichloroethene, vinyl chloride, calcium, chloride, and alkalinity agreed well among samples collected with all three techniques in 9 of the 10 wells tested. Iron concentrations varied more than those of the other parameters, but their pattern of variation was not consistent. Overall, the results of nonparametric analysis of variance testing on the nine wells sampled twice showed no statistically significant difference at the 95-percent confidence level among the concentrations of volatile organic compounds or inorganic constituents recovered by use of any of the three sampling techniques.

  5. Measurement of residual solvents in a drug substance by a purge-and-trap method.

    Lakatos, Miklós

    2008-08-05

    The purge-and-trap (P&T) gas extraction method combined with gas chromatography was studied for its suitability for the quantitative determination of residual solvents in a water-soluble active pharmaceutical ingredient (API). Several analytical method performance characteristics were investigated, namely repeatability, accuracy, and the detection limit. The results show that the P&T technique is, as expected, more sensitive than static headspace, so it can be used for the determination of residual solvents in the ICH Class 1 group. It was found to be a viable alternative sample preparation method to the static headspace (HS) method.

  6. Alpha radioimmunotherapy of multiple myeloma: study of feasibility of ex vivo medullary purge

    Couturier, O.; Filippovitch, I.V.; Sorokina, N.I.; Cherel, M.; Thedrez, P.; Faivre-Chauvet, A.; Chatal, J.F.

    1997-01-01

    The efficiency of radioimmunotherapy (RIT) using beta emitters has been clinically proven in the treatment of refractory forms of lymphoma. Short half-life alpha-emitting radioelements are also good potential candidates for RIT, applicable to tumor targets rapidly accessible to radioimmunoconjugate molecules, with sizes compatible with the short range of alpha particles (50 to 80 μm). The goal of this study is to demonstrate the feasibility of such an approach on a multiple myeloma model targeted by specific antibodies (B-B4) coupled to bismuth-213 via a chelating agent (benzyl-DTPA). The efficiency of alpha RIT was evaluated in vitro by different techniques analyzing cellular mortality (the limiting-dilution method), the effects on DNA (the micronucleus test), radio-induced apoptosis (the acridine orange test), and, finally, non-specific irradiation of hematopoietic cell populations not recognized by the B-B4-benzyl-DTPA immunoconjugate. Beyond the technical feasibility of the project, the first results showed a strongly dose-dependent cellular mortality, with survival falling rapidly from 28% to around 1‰ for a single doubling of the dose from 14.8 kBq/10⁵ cells (0.4 μCi) to 29.6 kBq/10⁵ cells (0.8 μCi). Cellular mortality was total at 300 kBq/10⁵ cells (8 μCi). Cells in an apoptotic state were observed at rates up to 40% for a dose of 7.4 kBq/10⁵ cells (0.2 μCi). Further experiments will confirm these first results and determine the irradiation range, with a view to use in protocols for purging myeloma cells from pockets obtained after plasmapheresis.

  7. Determination of cyclic volatile methylsiloxanes in biota with a purge and trap method.

    Kierkegaard, Amelie; Adolfsson-Erici, Margaretha; McLachlan, Michael S

    2010-11-15

    The three cyclic volatile methylsiloxanes (cVMS), octamethylcyclotetrasiloxane (D4), decamethylcyclopentasiloxane (D5), and dodecamethylcyclohexasiloxane (D6), are recently identified environmental contaminants. Methods for the trace analysis of these chemicals in environmental matrices are required. A purge and trap method to prepare highly purified sample extracts with a low risk of sample contamination is presented. Without prior homogenization, the sample is heated in water, and the cVMS are purged from the slurry and trapped on an Isolute ENV+ cartridge. They are subsequently eluted with n-hexane and analyzed with GC/MS. The method was tested for eight different matrices including ragworms, muscle tissue from lean and lipid-rich fish, cod liver, and seal blubber. Analyte recoveries were consistent within and between matrices, averaging 79%, 68%, and 56% for D4, D5, and D6, respectively. Good control of blank levels resulted in limits of quantification of 1.5, 0.6, and 0.6 ng/g wet weight. The repeatability was 12% (D5) and 15% (D6) at concentrations 9 and 2 times above the LOQ. The method was applied to analyze cVMS in fish from Swedish lakes, demonstrating that contamination in fish as a result of long-range atmospheric transport is low as compared to contamination from local sources.

  8. A purge-and-trap capillary column gas chromatographic method for the measurement of halocarbons in water and air

    Happell, J.D.; Wallace, D.W.R.; Wills, K.D.; Wilke, R.J.; Neill, C.C.

    1996-06-01

    This report describes an automated, accurate, precise and sensitive capillary column purge-and-trap method capable of quantifying CFC-12, CFC-11, CFC-113, CH₃CCl₃, and CCl₄ during a single chromatographic analysis in either water or gas phase samples.

  9. Distribution characteristics of volatile methylsiloxanes in Tokyo Bay watershed in Japan: Analysis of surface waters by purge and trap method.

    Horii, Yuichi; Minomo, Kotaro; Ohtsuka, Nobutoshi; Motegi, Mamoru; Nojiri, Kiyoshi; Kannan, Kurunthachalam

    2017-05-15

    Surface waters, including river water and effluent from sewage treatment plants (STPs), were collected from the Tokyo Bay watershed, Japan, and analyzed for seven cyclic and linear volatile methylsiloxanes (VMSs), i.e., D3, D4, D5, D6, L3, L4, and L5, by an optimized purge and trap extraction method. The total concentrations of the seven VMSs (ΣVMS) in river water ranged from […] watershed was estimated at 2300 kg. Our results indicate widespread distribution of VMSs in the Tokyo Bay watershed and the influence of domestic wastewater discharges as a source of VMSs in the aquatic environment. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Characterization of a Gas-Purge Method to Access 11C-Carbon-Dioxide Radioactivity in Blood

    Ng, Y.; Green, M.A.

    2014-01-01

    Carbon-11 (t½ = 20 minutes) labeled radiotracers such as ¹¹C-acetate and ¹¹C-palmitate are widely used in positron emission tomography (PET) for noninvasive evaluation of myocardial metabolism under varied physiological conditions. These tracers are attractive probes of tissue physiology because they are simply radiolabeled versions of the native biochemical substrates. One of the major metabolites generated by these tracers upon administration is ¹¹CO₂, produced via the citric acid cycle. In quantitative modeling of ¹¹C-acetate and ¹¹C-palmitate PET data, the fraction of blood ¹¹C radioactivity present as ¹¹CO₂ must be measured to obtain a correct radiotracer arterial input function. Accordingly, the literature describes a method whereby the total blood ¹¹C activity is counted in blood samples treated with base solution, while the fraction of ¹¹CO₂ is measured after the blood is treated with acid followed by a 10-minute gas purge. However, a detailed description of the experimental validation of this method was not provided. The goal of this study was to test the reliability of a 10-minute gas-purging method used to assay ¹¹CO₂ radioactivity in blood. (author)
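
    The counting scheme described (a base-treated aliquot for total ¹¹C, and an acidified, gas-purged aliquot for the non-CO₂ ¹¹C) reduces to a simple fraction once both counts are decay-corrected to a common time. A hedged sketch; the function names and the 20.4-minute half-life figure (a commonly cited value for carbon-11) are our assumptions:

```python
# Sketch: blood 11CO2 fraction from paired aliquot counts, with decay
# correction back to a common reference time. t1/2 = 20.4 min is a
# commonly cited value for carbon-11; the abstract rounds it to 20 min.
HALF_LIFE_MIN = 20.4

def decay_correct(counts, minutes_elapsed):
    """Correct measured counts back to the reference time."""
    return counts * 2 ** (minutes_elapsed / HALF_LIFE_MIN)

def co2_fraction(total_counts, t_total, purged_counts, t_purged):
    """Fraction of blood 11C activity present as 11CO2.

    total_counts:  base-treated aliquot (all 11C species retained)
    purged_counts: acidified, gas-purged aliquot (11CO2 driven off)
    t_*: minutes elapsed between the reference time and each count.
    """
    total = decay_correct(total_counts, t_total)
    non_co2 = decay_correct(purged_counts, t_purged)
    return (total - non_co2) / total
```

    The decay correction matters here: with a 20-minute half-life, counting the purged aliquot one half-life later than the total aliquot would otherwise understate the non-CO₂ activity by a factor of two.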

  11. Purged window apparatus utilizing heated purge gas

    Ballard, Evan O.

    1984-01-01

    A purged window apparatus utilizing tangentially injected heated purge gases in the vicinity of electromagnetic radiation transmitting windows, and a tapered external mounting tube to accelerate these gases to provide a vortex flow on the window surface and a turbulent flow throughout the mounting tube. Use of this apparatus prevents backstreaming of gases under investigation which are flowing past the mouth of the mounting tube which would otherwise deposit on the windows. Lengthy spectroscopic investigations and analyses can thereby be performed without the necessity of interrupting the procedures in order to clean or replace contaminated windows.

  12. Spent Nuclear Fuel (SNF) Project Cask and MCO Helium Purge System Design Review Completion Report - Project A.5 and A.6

    ARD, K.E.

    2000-01-01

    This report documents the results of the design verification performed on the Cask and Multi-Canister Overpack (MCO) Helium Purge System. The helium purge system is part of the Spent Nuclear Fuel (SNF) Project Cask Loadout System (CLS) in the 100K Area. The design verification employed the "Independent Review Method" in accordance with Administrative Procedure (AP) EN-6-027-01.

  13. Purge ventilation operability

    Marella, J.R.

    1995-01-01

    A determination of the minimum requirements for purge exhaust ventilation system operability has been performed. HLWE and HLW Regulatory Program personnel have evaluated various scenarios of equipment conditions, and HLWE has developed the requirements for purge exhaust systems. This report documents the operability requirements to assist Tank Farm personnel in determining whether a system is operable or inoperable, and to define required compensatory actions.

  14. Neutron source multiplication method

    Clayton, E.D.

    1985-01-01

    Extensive use has been made of neutron source multiplication in thousands of measurements of critical masses and configurations, and in subcritical neutron-multiplication measurements in situ that provide data for criticality prevention and control in nuclear materials operations. There is continuing interest in developing reliable methods for monitoring the reactivity, or k_eff, of plant operations, but the required measurements are difficult to carry out and interpret on the far subcritical configurations usually encountered. The relationship between neutron multiplication and reactivity is briefly discussed, and data are presented to illustrate problems associated with the absolute measurement of neutron multiplication and reactivity in subcritical systems. A number of curves of inverse multiplication have been selected from a variety of experiments, showing variations observed in multiplication during the course of critical and subcritical experiments where different methods of reactivity addition were used, with different neutron source-detector position locations. Concern is raised regarding the meaning and interpretation of k_eff as might be measured in a far subcritical system because of the modal effects and spectrum differences that exist between the subcritical and critical systems. Because of this, the calculation of k_eff identically equal to unity for the critical assembly, although necessary, may not be sufficient to assure safety margins in calculations pertaining to far subcritical systems. Further study is needed on the interpretation and meaning of k_eff in the far subcritical system.
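
    In point kinetics, the source multiplication discussed above is M = 1/(1 − k_eff) for a subcritical, source-driven system, so the inverse multiplication 1/M = 1 − k_eff falls linearly to zero as the system approaches critical; this is why 1/M curves are extrapolated to estimate critical configurations. A minimal sketch of that relation (the point-kinetics form is standard, but the abstract's caution applies: modal and spectrum effects make real 1/M curves deviate from this ideal):

```python
# Point-kinetics sketch of the source-multiplication relation:
# M = 1/(1 - k_eff), so 1/M = 1 - k_eff goes linearly to zero at critical.
def multiplication(k_eff):
    """Source multiplication of a subcritical system (ideal point model)."""
    assert 0 <= k_eff < 1, "valid only for subcritical systems"
    return 1.0 / (1.0 - k_eff)

def inverse_multiplication(k_eff):
    """1/M, the quantity plotted and extrapolated in approach-to-critical."""
    return 1.0 - k_eff
```

    For example, raising k_eff from 0.9 to 0.99 increases M tenfold, which is why count rates climb steeply, and 1/M flattens slowly, near critical.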

  15. On-line purge-and-trap-gas chromatography with flame ionization detection as an alternative analytical method for dimethyl sulphide trace release from marine algae

    Careri, M.; Musci, M.; Bianchi, F.; Mucchino, C. [Parma Univ., Parma (Italy). Dipt. di Chimica Generale ed Inorganica, Chimica Analitica e Chimica Fisica; Azzoni, R.; Viaroli, P. [Parma Univ., Parma (Italy). Dipt. di Scienze Ambientali

    2001-10-01

    The release of dimethyl sulphide (DMS) by the seaweed Ulva spp at trace level was studied in aqueous solutions at different salinities, temperatures and light intensities. For this purpose, the purge-and-trap technique combined with gas chromatography-flame ionization detection was used. The analytical method was evaluated in terms of linearity range, limit of detection, precision and accuracy, considering 10% (w/v) and 30% (w/v) synthetic seawater as aqueous matrices. Calculation of the recovery function evidenced a matrix influence. The method of standard addition was then used for an accurate determination of DMS in synthetic seawater, reproducing the matrix effect. DMS fluxes were analysed in batch cultures of Ulva spp reproducing the conditions which usually occur in the Sacca di Goro lagoon (Northern Adriatic Sea, Italy).

  16. Concentration comparison of selected constituents between groundwater samples collected within the Missouri River alluvial aquifer using purge and pump and grab-sampling methods, near the city of Independence, Missouri, 2013

    Krempa, Heather M.

    2015-10-29

    The U.S. Geological Survey, in cooperation with the City of Independence, Missouri, Water Department, has historically collected water-quality samples using the purge and pump method (hereafter referred to as pump method) to identify potential contamination in groundwater supply wells within the Independence well field. If grab sample results are comparable to the pump method, grab samplers may reduce time, labor, and overall cost. This study was designed to compare constituent concentrations between samples collected within the Independence well field using the pump method and the grab method.

  17. Propellant and Purge System Contamination "2007: A Summer of Fun"

    Galloway, Randy

    2010-01-01

    This slide presentation reviews the propellant and purge system contamination that occurred during the summer of 2007 at Stennis Space Center. During this period, multiple propellant/pressurant system contamination events prompted a thorough investigation, the results of which are reviewed.

  18. Realization of dynamic data migration and purge operations

    Wang Juan; Qiu Hongmao; Liu Junmin; Wang Xiaoming; Wang Hong; Zhong Bo; Lu Yuanlei; Xu Jin

    2008-01-01

    In large-scale real-time data processing, the data in the database system expands over time, which degrades system performance. To solve this problem, this paper presents a method of migration and purge operations. Using this method, new or updated database records within a specified time interval are copied automatically from source database accounts to destination database accounts, and the data in the source database are deleted automatically after a specified retention period, according to configured rules. The migration and purge operations have been implemented at the China National Data Center. (authors)
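
    A minimal sketch of the copy-then-purge pattern the paper describes, using an in-memory SQLite database; the table names (`source`, `archive`), columns, and single-cutoff simplification are illustrative assumptions, not details of the actual system:

```python
import sqlite3

# Sketch: copy recent records to an archive table, then delete aged
# records from the source, in one transaction so the two steps succeed
# or fail together. Table/column names are illustrative.
def migrate_and_purge(conn, cutoff):
    """Archive records with created_at >= cutoff; purge older ones."""
    with conn:  # sqlite3 connection as context manager = one transaction
        conn.execute(
            "INSERT INTO archive(id, payload, created_at) "
            "SELECT id, payload, created_at FROM source "
            "WHERE created_at >= ?",
            (cutoff,))
        conn.execute("DELETE FROM source WHERE created_at < ?", (cutoff,))
```

    Wrapping both statements in a single transaction is the key design choice: a crash between the copy and the delete can neither lose records nor purge unarchived ones.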

  19. Cardiac Risk and Disordered Eating: Decreased R Wave Amplitude in Women with Bulimia Nervosa and Women with Subclinical Binge/Purge Symptoms.

    Green, Melinda; Rogers, Jennifer; Nguyen, Christine; Blasko, Katherine; Martin, Amanda; Hudson, Dominique; Fernandez-Kong, Kristen; Kaza-Amlak, Zauditu; Thimmesch, Brandon; Thorne, Tyler

    2016-11-01

    The purpose of the present study was threefold. First, we examined whether women with bulimia nervosa (n = 12) and women with subthreshold binge/purge symptoms (n = 20) showed decreased mean R wave amplitude, an indicator of cardiac risk, on electrocardiograph compared to asymptomatic women (n = 20). Second, we examined whether this marker was pervasive across experimental paradigms, including before and after sympathetic challenge tasks. Third, we investigated behavioural predictors of this marker, including binge frequency and purge frequency assessed by subtype (dietary restriction, excessive exercise, self-induced vomiting, and laxative abuse). Results of a 3 (ED symptom status) × 5 (experimental condition) mixed factorial ANCOVA (covariates: body mass index, age) indicated women with bulimia nervosa and women with subclinical binge/purge symptoms demonstrated significantly reduced mean R wave amplitude compared to asymptomatic women; this effect was pervasive across experimental conditions. Multiple regression analyses showed binge and purge behaviours, most notably laxative abuse as a purge method, predicted decreased R wave amplitude across all experimental conditions. Copyright © 2016 John Wiley & Sons, Ltd and Eating Disorders Association.

  20. Selective purge for hydrogenation reactor recycle loop

    Baker, Richard W.; Lokhandwala, Kaaeid A.

    2001-01-01

    Processes and apparatus for providing improved contaminant removal and hydrogen recovery in hydrogenation reactors, particularly in refineries and petrochemical plants. The improved contaminant removal is achieved by selective purging, by passing gases in the hydrogenation reactor recycle loop or purge stream across membranes selective in favor of the contaminant over hydrogen.

  1. Bulimia Nervosa/Purging Disorder.

    Castillo, Marigold; Weiselberg, Eric

    2017-04-01

    Bulimia nervosa was first described in 1979 by British psychiatrist Gerald Russell as a "chronic phase of anorexia nervosa" in which patients overeat and then use compensatory mechanisms, such as self-induced vomiting, laxatives, or prolonged periods of starvation. The characterization of bulimia nervosa continues to evolve with the introduction of the DSM-5 in 2013. In this article, the epidemiology and risk factors of bulimia nervosa are identified and reviewed, along with the medical complications and psychiatric comorbidities. The evaluation of a patient with suspected bulimia nervosa is addressed, with an emphasis on acquiring a complete and thorough history as well as discovering any comorbidities that are present. Management of the patient involves both medical interventions and behavioral counseling in order to address physical, psychological, and social needs. Lastly, a new diagnosis introduced in the DSM-5, purging disorder, is described and discussed. Copyright © 2017 Mosby, Inc. All rights reserved.

  2. Purging of working atmospheres inside freight containers.

    Braconnier, Robert; Keller, François-Xavier

    2015-06-01

    This article focuses on prevention of possible exposure to chemical agents, when opening, entering, and stripping freight containers. The container purging process is investigated using tracer gas measurements and numerical airflow simulations. Three different container ventilation conditions are studied, namely natural, mixed mode, and forced ventilation. The tests conducted allow purging time variations to be quantified in relation to various factors such as container size, degree of filling, or type of load. Natural ventilation performance characteristics prove to be highly variable, depending on environmental conditions. Use of a mechanically supplied or extracted airflow under mixed mode and forced ventilation conditions enables purging to be significantly accelerated. Under mixed mode ventilation, extracting air from the end of the container furthest from the door ensures quicker purging than supplying fresh air to this area. Under forced ventilation, purging rate is proportional to the applied ventilation flow. Moreover, purging rate depends mainly on the location at which air is introduced: the most favourable position being above the container loading level. Many of the results obtained during this study can be generalized to other cases of purging air in a confined space by general ventilation, e.g. the significance of air inlet positioning or the advantage of generating high air velocities to maximize stirring within the volume. © The Author 2015. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
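
    The finding that purging rate is proportional to the applied ventilation flow is what a well-mixed dilution model predicts: C(t) = C0·exp(−Qt/V). A small sketch under that perfect-mixing assumption (real containers mix imperfectly, so actual purge times are longer; the container volume and flow figures below are illustrative, not from the study):

```python
import math

# Well-mixed dilution sketch: concentration decays as C(t) = C0*exp(-Q*t/V),
# so the time to reach a target concentration is t = (V/Q) * ln(C0/Ct).
def purge_time(volume_m3, flow_m3_per_h, c0, c_target):
    """Hours of ventilation to dilute c0 down to c_target, assuming
    perfect mixing of the supplied air throughout the container."""
    return (volume_m3 / flow_m3_per_h) * math.log(c0 / c_target)
```

    For an illustrative 67 m³ container ventilated at 670 m³/h, a hundredfold dilution takes about 0.46 h under this model, and doubling the flow halves the time, matching the proportionality reported above.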

  3. Reactor containment purge and vent valve performance experiments

    Hunter, J.A.; Steele, R.; Watkins, J.C.

    1985-01-01

    Three nuclear-designed butterfly valves typical of those used in domestic nuclear power plant containment purge and vent applications were tested. For a comparison of responses, two eight-inch nominal pipe size valves with differing internal design were tested. For extrapolation insights, a 24-inch nominal pipe size valve was also tested. The valve experiments were performed with various piping configurations and valve disc orientations to the flow, to simulate various installation options in field application. As a standard for comparing the effects of the installation options, testing was also performed in a standard ANSI test section. Test cycles were performed at inlet pressures of 5 to 60 psig, while monitoring numerous test parameters, such as the valve disc position, valve shaft torque, mass flow rate, and the pressure and temperature at multiple locations throughout the test section. An experimental data base was developed to assist in the evaluation of the current analytical methods and to determine the influence of inlet pressure, inlet duct geometry, and valve orientation to the flow media on valve torque requirements, along with any resulting limitations to the extrapolation methods. 2 refs., 15 figs

  4. Stable carbon and hydrogen isotope analysis of methyl tert-butyl ether and tert-amyl methyl ether by purge and trap-gas chromatography-isotope ratio mass spectrometry: method evaluation and application.

    Kujawinski, Dorothea M; Stephan, Manuel; Jochmann, Maik A; Krajenke, Karen; Haas, Joe; Schmidt, Torsten C

    2010-01-01

    In order to monitor the behaviour of contaminants in the aqueous environment, effective enrichment techniques often have to be employed due to their low concentrations. In this work, a robust and sensitive purge and trap-gas chromatography-isotope ratio mass spectrometry method for carbon and hydrogen isotope analysis of fuel oxygenates in water is presented. The method evaluation included the determination of method detection limits, accuracy, and the reproducibility of δD and δ¹³C values. The lowest concentrations at which reliable δ¹³C values could be determined were 5 μg L⁻¹ and 28 μg L⁻¹ for TAME and MTBE, respectively. Stable δD values for MTBE and TAME could be achieved at concentrations as low as 25 and 50 μg L⁻¹. Good long-term reproducibility of δ¹³C and δD values was obtained for all target compounds. However, δD values varying by more than 5‰ were observed when using different thermal conversion tubes; thus, a correction of δD values in the analysis of groundwater samples was necessary to guarantee comparability of the results. The applicability of this method was shown by the analysis of groundwater samples from a gasoline-contaminated site. By two-dimensional isotope analysis, two locations within this site were identified at which anaerobic and aerobic degradation of methyl tert-butyl ether occurred.
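
    For reference, the δ values reported above are per-mil deviations of a sample's isotope ratio from a standard (VPDB for carbon, VSMOW for hydrogen). A small sketch of the carbon case (the VPDB ratio used is a commonly cited value, included as an assumption for illustration):

```python
# Sketch of delta notation: delta = (R_sample/R_standard - 1) * 1000 per mil.
R_VPDB = 0.011180  # 13C/12C of the VPDB standard (commonly cited value)

def delta13C_permil(r_sample):
    """delta 13C in per mil from a sample's 13C/12C ratio."""
    return (r_sample / R_VPDB - 1.0) * 1000.0
```

    A sample whose ¹³C/¹²C ratio is 0.1% above the standard's thus reports as δ¹³C = +1‰, which puts the study's 5‰ tube-to-tube δD variation in perspective.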

  5. In-well time-of-travel approach to evaluate optimal purge duration during low-flow sampling of monitoring wells

    Harte, Philip T.

    2017-01-01

    A common assumption with low-flow groundwater sampling is that the sample is representative of formation water once water from the high hydraulic conductivity part of the screened formation has had time to travel vertically in the well to the pump intake. Therefore, the length of time needed for adequate purging prior to sample collection (called the optimal purge duration) is controlled by in-well vertical travel times. A preliminary, simple analytical model was used to provide information on the relation between purge duration and capture of formation water for different gross levels of heterogeneity (contrast between low and high hydraulic conductivity layers). The model was then used to compare these time-volume relations to purge data (pumping rates and drawdown) collected at several representative monitoring wells from multiple sites. Results showed that the computation of time-dependent capture of formation water (as opposed to capture of preexisting screen water), based on vertical travel times in the well, compares favorably with the time required to achieve field-parameter stabilization. If field-parameter stabilization is an indicator of the arrival time of formation water, as has been postulated, then in-well vertical flow may be an important factor at wells where low-flow sampling is the sample method of choice.
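
    The in-well travel-time idea can be illustrated with a back-of-envelope calculation: at pumping rate Q through casing cross-section A, the mean vertical velocity is v = Q/A, so water from a producing zone a distance L from the intake needs roughly t = L/v to arrive. This plug-flow simplification and the parameter names are ours, not the paper's analytical model:

```python
import math

# Back-of-envelope in-well vertical travel time under plug flow:
# v = Q / A in the casing, so minimum purge duration t = L / v.
def optimal_purge_minutes(distance_m, pump_rate_L_per_min, casing_id_m):
    """Minutes for water to travel distance_m up the casing to the intake."""
    area_m2 = math.pi * (casing_id_m / 2.0) ** 2
    velocity_m_per_min = (pump_rate_L_per_min / 1000.0) / area_m2  # L -> m3
    return distance_m / velocity_m_per_min
```

    For a hypothetical 5-cm-ID well pumped at 0.5 L/min with the producing zone 2 m from the intake, this gives roughly 8 minutes, the right order of magnitude for the field-parameter stabilization times low-flow protocols typically report.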

  6. DEM-CFD simulation of purge gas flow in a solid breeder pebble bed

    Zhang, Hao [School of Nuclear Science and Technology, University of Science and Technology of China, Hefei 230027 (China); Institute of Nuclear Physics and Chemistry, China Academy of Engineering Physics, Mianyang 621900 (China); Li, Zhenghong [Institute of Nuclear Physics and Chemistry, China Academy of Engineering Physics, Mianyang 621900 (China); University of Science and Technology of China, Hefei 230027 (China); Guo, Haibing [Institute of Nuclear Physics and Chemistry, China Academy of Engineering Physics, Mianyang 621900 (China); Ye, Minyou [School of Nuclear Science and Technology, University of Science and Technology of China, Hefei 230027 (China); Huang, Hongwen, E-mail: inpclane@sina.com [Institute of Nuclear Physics and Chemistry, China Academy of Engineering Physics, Mianyang 621900 (China)

    2016-12-15

    Solid tritium breeding blankets applying the pebble bed concept are promising for fusion reactors. Tritium bred in the pebble bed is purged out by an inert gas. The flow characteristics of the purge gas are important for tritium transport from the solid breeder materials. In this study, a randomly packed pebble bed was generated by the Discrete Element Method (DEM) and verified against the radial porosity distribution. The flow parameters of the purge gas in the channels were solved by the Computational Fluid Dynamics (CFD) method. The results show that the normalized velocity magnitudes have the same damped oscillating pattern as the radial porosity distribution. In addition, the bypass flow near the wall cannot be ignored in this model, and it increases slightly with inlet velocity. Furthermore, purging efficiency increases with inlet velocity, especially in the near-wall region.

  7. Solute transport in a well under slow-purge and no-purge conditions

    Plummer, M. A.; Britt, S. L.; Martin-Hayden, J. M.

    2010-12-01

    Non-purge sampling techniques, such as diffusion bags and in-situ sealed samplers, offer reliable and cost-effective groundwater monitoring methods that are a step closer to the goal of real-time monitoring without pumping or sample collection. Non-purge methods are, however, not yet completely accepted because questions remain about how solute concentrations in an unpurged well relate to concentrations in the adjacent formation. To answer questions about how undisturbed well water samples compare to formation concentrations, and to provide the information necessary to interpret results from non-purge monitoring systems, we have conducted a variety of physical experiments and numerical simulations of flow and transport in and through monitoring wells under low-flow and ambient flow conditions. Previous studies of flow and transport in wells used a continuity equation for flow based on Darcy's law, which is often justified under the strong, forced-convection flow caused by pumping or large vertical hydraulic potential gradients. In our study, we focus on systems with weakly forced convection, where density-driven free convection may be of similar strength. We therefore solved Darcy's law for porous media domains and the Navier-Stokes equations for flow in the well, and coupled the solution of the flow equations to that of solute transport. To illustrate expected in-well transport behavior under low-flow conditions, we present results of three particular studies: (1) time-dependent effluent concentrations from a well purged at low-flow pumping rates, (2) solute-driven density effects in a well under ambient horizontal flow and (3) temperature-driven mixing in a shallow well subject to seasonal temperature variations. Results of the first study illustrate that assumptions about the nature of in-well flow have a significant impact on effluent concentration curves even during pumping, with Poiseuille-type flow producing more rapid removal of concentration differences

  8. Monitoring Tumour Cell Purge by Long Term Marrow Culture in Acute Leukemia

    El-Masry, M.; Hashem, T. M.

    2001-01-01

    Purging of leukemic cells from bone marrow harvested for autologous bone marrow transplantation (ABMT) remains a challenge. This work aimed at evaluating the efficacy of long-term marrow culture (LTMC) in purging leukemic progenitors in acute leukemia. Design and methods: We studied the presence of immunoglobulin heavy (IgH) chain gene rearrangements by polymerase chain reaction (PCR) at diagnosis in bone marrow of 23 patients with acute leukemia. LTMC was performed only for patients who showed positive IgH chain gene monoclonality at diagnosis. The efficiency of purging was evaluated by PCR for the monoclonal IgH chain gene on a weekly basis during LTMC. Results: Of the 23 studied cases, 18 (78.26%) showed a positive clonal IgH chain gene at diagnosis. The LTMC study showed that 6/18 (33.33%), 3/18 (16.67%), 7/18 (38.89%) and 2/18 (11.11%) underwent complete purging of the leukemic progenitors at the first, second, third and fourth weeks of culture, respectively. Follow-up could be performed for 14 positive ALL cases after induction of remission; 12/14 (85.7%) showed minimal residual disease (MRD) while only two cases did not show MRD. Complete purging of the latter two cases by LTMC occurred in the second and third weeks of culture. Conclusion: LTMC is a useful and successful method for leukemic cell purging. LTMC should be undertaken at initial diagnosis and on an individual basis, with each case assessed separately to determine in which week of culture complete purging is obtained for subsequent autologous grafting of the purged marrow.

  9. [Purging behaviors and nutritional status in anorexia nervosa and bulimia nervosa].

    Vaz, F J; García-Herráiz, A; López-Vinuesa, B; Monge, M; Fernández-Gil, M A; Guisado, J A

    2003-01-01

    The aim of the study was to investigate whether the use of purgative methods in patients with eating disorders (anorexia nervosa [AN] and bulimia nervosa [BN]) is capable of producing changes in the nutritional status of the patients. The group under study was composed of 184 female eating disordered outpatients. One hundred and sixteen patients (63.0%) fulfilled the DSM-IV diagnostic criteria for BN (90 purging type, 26 nonpurging type). Sixty-eight patients (37.0%) fulfilled the DSM-IV criteria for the diagnosis of AN (48 restricting type, 20 binging-purging type). The assessment process included anthropometry (body circumferences and skinfold thickness) and body impedance analysis. The two subgroups of AN patients significantly differed from each of the BN subgroups. From a nutritional point of view, some significant differences existed between the two DSM-IV subtypes of AN, but not between the purging and nonpurging types of BN. The paper discusses the clinical significance of these findings. An alternative subtyping of AN patients is proposed: 1) restricting type [patients who control their food intake and do not purge]; 2) purging type [patients with true episodes of binging which are followed by purgative behaviors]; and 3) pseudopurging type [patients with subjective binging episodes who use purging methods].

  10. Optimization of breeding methods when introducing multiple ...

    Optimization of breeding methods when introducing multiple resistance genes from American to Chinese wheat. JN Qi, X Zhang, C Yin, H Li, F Lin. Abstract. Stripe rust is one of the most destructive diseases of wheat worldwide. Growing resistant cultivars with resistance genes is the most effective method to control this ...

  11. Air riding seal with purge cavity

    Sexton, Thomas D; Mills, Jacob A

    2017-08-15

    An air riding seal for a turbine in a gas turbine engine, in which an annular piston is axially movable within an annular piston chamber formed in a stator of the turbine and forms a seal with a surface on the rotor using pressurized air that forms a cushion in a pocket of the annular piston. A purge cavity is formed on the annular piston and is connected to a purge hole that extends through the annular piston to a lower pressure region around the annular piston or through the rotor to the opposite side. The annular piston is also sealed with inner and outer seals, which can be labyrinth seals, providing an additional seal beyond the air cushion in the pocket to prevent the face of the air riding seal from overheating.

  12. Autologous bone marrow purging with LAK cells.

    Giuliodori, L; Moretti, L; Stramigioli, S; Luchetti, F; Annibali, G M; Baldi, A

    1993-12-01

    In this study we demonstrate that LAK cells can lyse hematologic neoplastic cells in vitro with only minor toxicity to the autologous marrow stem cells. In fact, after co-culture of bone marrow and LAK cells at a ratio of 1:1 for 8 hours, GEMM colony growth was reduced by only 20% compared to untreated marrow. These data make LAK cells an attractive agent for marrow purging in autologous bone marrow transplantation.

  13. The dynamic behavior of pressure during purge process in the anode of a PEM fuel cell

    Gou, Jun; Pei, Pucheng; Wang, Ying [State Key Laboratory of Automotive Safety and Energy, Tsinghua University, Beijing 100084 (China)

    2006-11-22

    A one-dimensional computational fluid dynamics model of a proton exchange membrane (PEM) fuel cell is presented in this paper to simulate the transient behavior of hydrogen pressure in the flow field during a typical dynamic process, the purge process. The model accounts for the mechanism of pressure wave transmission in the channels by employing the method of characteristics. A distinctive parameter, the pressure swing, which represents the peak pressure variation at a given point in the channel during the purge process, is introduced and studied along with the pressure drop. The pressure distribution along the channel and the pressure drop during the purge process are studied for different operating pressures, purge durations, stoichiometric ratios and current densities. The results indicate that the distributed pressure, pressure drop and pressure swing all increase with operating pressure. With a high operating pressure, a second falling stage appears in the pressure drop profile, while with a relatively low operating pressure a homogeneous distribution of pressure swing can be attained. A long purge time reveals the whole pressure drop curve, whereas only part of the curve is captured with a short purge time, although a relatively uniform distribution of pressure swing then appears. Compared with a stoichiometric ratio of 1, the pressure drop curve decreases more sharply after its peak and the pressure swing displays a more uniform distribution when the ratio is set above 1. Different current densities have no apparent influence on the pressure drop or the pressure swing during this transient process. The distribution rules deduced from this study will be helpful for optimizing purging strategies on vehicles. (author)

  14. Hybrid multiple criteria decision-making methods

    Zavadskas, Edmundas Kazimieras; Govindan, K.; Antucheviciene, Jurgita

    2016-01-01

    Formal decision-making methods can be used to help improve the overall sustainability of industries and organisations. Recently, there has been a great proliferation of works aggregating sustainability criteria by using diverse multiple criteria decision-making (MCDM) techniques. A number of revi...

  15. Multiple Shooting and Time Domain Decomposition Methods

    Geiger, Michael; Körkel, Stefan; Rannacher, Rolf

    2015-01-01

    This book offers a comprehensive collection of the most advanced numerical techniques for the efficient and effective solution of simulation and optimization problems governed by systems of time-dependent differential equations. The contributions present various approaches to time domain decomposition, focusing on multiple shooting and parareal algorithms.  The range of topics covers theoretical analysis of the methods, as well as their algorithmic formulation and guidelines for practical implementation. Selected examples show that the discussed approaches are mandatory for the solution of challenging practical problems. The practicability and efficiency of the presented methods is illustrated by several case studies from fluid dynamics, data compression, image processing and computational biology, giving rise to possible new research topics.  This volume, resulting from the workshop Multiple Shooting and Time Domain Decomposition Methods, held in Heidelberg in May 2013, will be of great interest to applied...

  16. Appetite Regulatory Hormones in Women With Anorexia Nervosa: Binge-Eating/Purging Versus Restricting Type

    Eddy, Kamryn T.; Lawson, Elizabeth A.; Meade, Christina; Meenaghan, Erinne; Horton, Sarah E.; Misra, Madhusmita; Klibanski, Anne; Miller, Karen K.

    2015-01-01

    Objective Anorexia nervosa is a psychiatric illness characterized by low weight, disordered eating, and hallmark neuroendocrine dysfunction. Behavioral phenotypes are defined by predominant restriction or bingeing/purging; binge-eating/purging type anorexia nervosa is associated with poorer outcome. The pathophysiology underlying anorexia nervosa types is unknown, but altered hormones, known to be involved in eating behaviors, may play a role. Method To examine the role of anorexigenic hormones in anorexia nervosa subtypes, we examined serum levels of peptide YY (PYY; total and active [3-36] forms), brain-derived neurotrophic factor (BDNF), and leptin as primary outcomes in women with DSM-5 restricting type anorexia nervosa (n = 50), binge-eating/purging type anorexia nervosa (n = 22), and healthy controls (n = 22). In addition, women completed validated secondary outcome measures of eating disorder psychopathology (Eating Disorder Examination-Questionnaire) and depression and anxiety symptoms (Hamilton Rating Scales for Depression [HDRS] and Anxiety [HARS]). The study samples were collected from May 22, 2004, to February 7, 2012. Results Mean PYY 3-36 and leptin levels were lower and BDNF levels higher in binge-eating/purging type anorexia nervosa than in restricting type anorexia nervosa; differences between anorexia nervosa types were significant (all P values < .05). Conclusions In anorexia nervosa, the anorexigenic hormones PYY, BDNF, and leptin are differentially regulated between the restricting and binge/purge types. Whether these hormone pathways play etiologic roles with regard to anorexia nervosa behavioral types or are compensatory merits further study. PMID:25098834

  17. AP600 containment purge radiological analysis

    O'Connor, M.; Schulz, J.; Tan, C. [Bechtel Power Corporation (United States)] [and others]

    1995-02-01

    The AP600 is a passive pressurized water reactor power plant design that is part of the Design Certification and First-of-a-Kind Engineering effort under the Advanced Light Water Reactor program. Included in this process is the design of the containment air filtration system, which is the subject of this paper. We compare the practice used by previous plants with the AP600 approach to meeting the goals of industry standards in sizing the containment air filtration system. The radiological aspects of the design are of primary significance and are the focus of this paper. The AP600 Project optimized the design by combining the functions of the high volumetric flow rate, low volumetric flow rate, containment cleanup and other filtration systems into one multi-functional system, achieving a more simplified, standardized, and lower cost design. Studies were performed to determine the possible concentrations of radioactive material in the containment atmosphere and the effectiveness of the purge system at keeping concentrations within 10CFR20 limits and within offsite dose objectives. The concentrations were determined for various reactor coolant system leakage rates and containment purge modes of operation. The resultant concentrations were used to determine containment accessibility during various stages of normal plant operation, including refueling. The results of the parametric studies indicate that a dual-train purge system with a capacity of 4,000 cfm per train is more than adequate to control airborne radioactivity levels inside containment during normal plant operation and refueling; it satisfies the goals of ANSI/ANS-56.6-1986 and limits the amount of radioactive material released to the environment per ANSI/ANS-59.2-1985, providing a safe environment for plant personnel and offsite residents.
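    The concentration studies described above can be illustrated with a textbook single-compartment ventilation balance. This is a generic sketch under stated assumptions (well-mixed containment air, constant leakage source S, constant purge flow Q), not the AP600 licensing calculation; the free volume and source rate below are hypothetical, while the 4,000 cfm train capacity comes from the abstract.

```python
import math

def purge_concentration(t_min, v_ft3, q_cfm, s_per_min, c0=0.0):
    """Airborne activity concentration in a well-mixed volume V (ft^3)
    purged at flow Q (cfm) with a constant source S (activity/min):
        C(t) = S/Q + (C0 - S/Q) * exp(-Q*t/V)
    Steady state is S/Q, so doubling purge flow halves the equilibrium level."""
    c_inf = s_per_min / q_cfm
    return c_inf + (c0 - c_inf) * math.exp(-q_cfm * t_min / v_ft3)

# Hypothetical numbers: 1.0e6 ft^3 free volume, one 4,000 cfm purge train,
# constant source of 8 activity-units per minute, starting from clean air.
steady = purge_concentration(24 * 60, 1.0e6, 4000.0, 8.0)
```

    The same relation shows why parametric studies sweep the leakage rate: the equilibrium concentration is proportional to S, so each assumed leakage rate maps directly to a purge capacity needed to stay under a concentration limit.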

  18. Sexual Orientation Disparities in Purging and Binge Eating From Early to Late Adolescence

    Austin, S. Bryn; Ziyadeh, Najat J.; Corliss, Heather L.; Rosario, Margaret; Wypij, David; Haines, Jess; Camargo, Carlos A.; Field, Alison E.

    2009-01-01

    Purpose To describe patterns of purging and binge eating from early through late adolescence in female and male youth across a range of sexual orientations. Methods Using data from the prospective Growing Up Today Study, a large cohort of U.S. youth, we investigated trends in past-year self-reports of purging (ever vomiting or using laxatives for weight control) and binge eating at least monthly. The analytic sample included 57,668 observations from repeated measures gathered from 13,795 youth ages 12 to 23 years, with information collected by self-administered questionnaires across six waves of data collection. We used multivariable logistic regression models to examine sexual orientation group (heterosexual, “mostly heterosexual,” bisexual, and lesbian/gay) differences in purging and binge eating throughout adolescence, with same-gender heterosexuals as the referent group and controlling for age and race/ethnicity. Results Throughout adolescence, in most cases, sexual orientation group differences were evident at the youngest ages and persisted through adolescence. Among females and compared to heterosexuals, “mostly heterosexuals,” bisexuals, and lesbians were more likely to report binge eating, but only “mostly heterosexuals” and bisexuals were also more likely to report purging. Among males, all three sexual orientation subgroups were more likely than heterosexual males to report both binge eating and purging. Within each orientation subgroup, females generally reported higher prevalence of purging and binge eating than did males. Conclusions Clinicians need to be alert to the risk of eating disordered behaviors in lesbian, gay, bisexual, and “mostly heterosexual” adolescents of both genders in order to better evaluate these youth and refer them for treatment. PMID:19699419

  19. Multiple predictor smoothing methods for sensitivity analysis

    Helton, Jon Craig; Storlie, Curtis B.

    2006-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present
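    The advantage claimed for smoothing-based procedures can be seen with a toy non-monotone model: both the linear correlation underlying regression-based sensitivity indices and rank-based measures report essentially no association, while a crude bin-mean smoother explains nearly all the variance. The regressogram below is my stdlib-only stand-in for the LOESS-type smoothers named in the abstract, not the paper's procedure.

```python
import statistics

def pearson(xs, ys):
    """Sample linear correlation, the basis of linear-regression sensitivity indices."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

def regressogram_r2(xs, ys, bins=20):
    """R^2 of a crude bin-mean smoother: fraction of the variance of y
    explained by the local (bin-wise) mean of y over x."""
    lo, hi = min(xs), max(xs)
    width = (hi - lo) / bins
    groups = {}
    for x, y in zip(xs, ys):
        groups.setdefault(min(int((x - lo) / width), bins - 1), []).append(y)
    my = statistics.fmean(ys)
    ss_tot = sum((y - my) ** 2 for y in ys)
    ss_res = sum((y - statistics.fmean(g)) ** 2 for g in groups.values() for y in g)
    return 1.0 - ss_res / ss_tot

xs = [i / 100 for i in range(101)]
ys = [(x - 0.5) ** 2 for x in xs]  # non-monotone input/output relation
# pearson(xs, ys) is ~0 (a rank measure fares no better for this symmetric
# relation), yet regressogram_r2(xs, ys) is close to 1.
```

    This is exactly the failure mode the abstract describes: linear and rank regression only detect monotone trends, whereas a smoother recovers any local mean structure.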

  1. Engineering task plan for purged light system

    BOGER, R.M.

    1999-01-01

    A purged, closed-circuit television system is currently used to record video inside waste tanks. The video supports inspection and assessment of the tank interiors, waste residues, and deployed hardware, and facilitates deployment of new equipment. A new light source has been requested by Characterization Project Operations (CPO) for the video system. The current light, mounted on the camera, provides 75 watts of light, which is insufficient for clear video. Other light sources currently in use at the Hanford site either cannot be deployed through a 4-inch riser or do not meet the ignition source controls. The scope of this Engineering Task Plan is to address all activities associated with the specification and procurement of a light source for use with the existing CPO video equipment. The installation design change to tank farm facilities is not within the scope of this ETP.

  2. The validity and clinical utility of purging disorder.

    Keel, Pamela K; Striegel-Moore, Ruth H

    2009-12-01

    To review evidence of the validity and clinical utility of Purging Disorder and examine options for the Diagnostic and Statistical Manual of Mental Disorders fifth edition (DSM-V). Articles were identified by computerized and manual searches and reviewed to address five questions about Purging Disorder: Is there "ample" literature? Is the syndrome clearly defined? Can it be measured and diagnosed reliably? Can it be differentiated from other eating disorders? Is there evidence of syndrome validity? Although empirical classification and concurrent validity studies provide emerging support for the distinctiveness of Purging Disorder, questions remain about definition, diagnostic reliability in clinical settings, and clinical utility (i.e., prognostic validity). We discuss strengths and weaknesses associated with various options for the status of Purging Disorder in the DSM-V ranging from making no changes from DSM-IV to designating Purging Disorder a diagnosis on equal footing with Anorexia Nervosa and Bulimia Nervosa.

  3. Evaluation of the Tekmar 3100 Purge and Trap Agilent GC/MSD system for VOC analysis

    Li, K.; Fingas, M.F. [Environment Canada, Ottawa, ON (Canada). Emergencies Science and Technology Div.]

    2004-07-01

    This presentation described the Tekmar automated purge and trap (PAT) modular analyzer for detecting and quantifying volatile organic compounds (VOCs) in relatively clean water samples. A large percentage of emergency response work involves VOC analysis in various matrices such as water or soil. PAT analysis is an extraction method in which the VOCs from a liquid sample are purged by helium and concentrated on an internal trap, from which the analytes are thermally desorbed into a gas chromatograph or a gas chromatograph/mass spectrometer (GC/MS). This high degree of concentration results in good detection limits. The performance of the Tekmar 3100 concentrator with autosampler and GC/MS system was evaluated using 1 ppb and 100 ppb standards of the Method 524 mixture for selected VOCs on the list. The study also examined purging parameters such as time and temperature, as well as a new way of introducing gaseous samples through the 3-way purge vessel valve on the concentrator. The objective was to determine whether the versatility of the system could be extended by using the same instrument configuration for air sampling. Preliminary results indicate that it is not yet practical to use the system for air sampling. 3 tabs., 4 figs.

  4. Case studies: Soil mapping using multiple methods

    Petersen, Hauke; Wunderlich, Tina; Hagrey, Said A. Al; Rabbel, Wolfgang; Stümpel, Harald

    2010-05-01

    Soil is a non-renewable resource with fundamental functions like filtering (e.g. water), storing (e.g. carbon), transforming (e.g. nutrients) and buffering (e.g. contamination). Degradation of soils is by now a well-known fact, not only to scientists; decision makers in politics have also accepted it as a serious problem for several environmental aspects. National and international authorities have already worked out preservation and restoration strategies for soil degradation, though how to put these strategies into practice is still a matter of active research. Common to all strategies, however, a description of soil state and dynamics is required as a base step. This includes collecting information from soils with methods ranging from direct soil sampling to remote applications. At an intermediate scale, mobile geophysical methods are applied, with the advantage of fast working progress but the disadvantage of site-specific calibration and interpretation issues. In the framework of the iSOIL project we present here some case studies of soil mapping performed using multiple geophysical methods. We present examples of combined field measurements with EMI, GPR, magnetic and gamma-spectrometric techniques carried out with the mobile multi-sensor system of Kiel University (GER). Depending on soil type and actual environmental conditions, different methods yield a different quality of information. By applying diverse methods we want to figure out which methods, or combination of methods, give the most reliable information concerning soil state and properties. To investigate the influence of varying material we performed mapping campaigns on field sites with sandy, loamy and loessy soils. Classification of measured or derived attributes shows not only the lateral variability but also gives hints of variation in the vertical distribution of soil material. For all soils, of course, soil water content can be a critical factor concerning a successful

  5. Decreasing Multicollinearity: A Method for Models with Multiplicative Functions.

    Smith, Kent W.; Sasaki, M. S.

    1979-01-01

    A method is proposed for overcoming the problem of multicollinearity in multiple regression equations where multiplicative independent terms are entered. The method is not a ridge regression solution. (JKS)
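    The collinearity problem with multiplicative terms can be demonstrated directly: with raw scores, a product term x*z is strongly correlated with x itself, while centering x and z about their means before multiplying removes most of that correlation. This stdlib-only sketch with synthetic data illustrates the general centering remedy, not necessarily the authors' exact transformation.

```python
import random
import statistics

def corr(a, b):
    """Sample Pearson correlation between two equal-length sequences."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

random.seed(1)
x = [random.uniform(10, 20) for _ in range(500)]
z = [random.uniform(10, 20) for _ in range(500)]

raw_product = [xi * zi for xi, zi in zip(x, z)]  # enters the model as x*z
mx, mz = statistics.fmean(x), statistics.fmean(z)
centered_product = [(xi - mx) * (zi - mz) for xi, zi in zip(x, z)]

high = corr(x, raw_product)       # strong collinearity with x for raw scores
low = corr(x, centered_product)   # near zero after mean-centering
```

    Because centering is just a linear rescaling, the model's fit and the interaction coefficient are unchanged; only the collinearity between the product term and its components is reduced.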

  6. The Multiple Intelligences Teaching Method and Mathematics ...

    The Multiple Intelligences teaching approach has evolved and been embraced widely especially in the United States. The approach has been found to be very effective in changing situations for the better, in the teaching and learning of any subject especially mathematics. Multiple Intelligences teaching approach proposes ...

  7. Costs and Risks of Continuous Purges for Instruments

    Secunda, M.; De Garcia, K. Montt

    2018-01-01

    As instruments are built, tested, and launched, they are exposed to environments with various levels of cleanliness. Often, scientists and contamination control engineers specify a purge to mitigate the instrument's exposure to a non-clean environment, to protect sensitive optics from a specific threat such as water, or as insurance against things going wrong in a clean environment. The cost of the purge, in effort, dollars and risk, is often understated when the requirements are being established, and the need for the purge is not always clearly justifiable. This paper defines more clearly some of the costs and risks associated with the continuous purging of instruments during the course of building, testing and launching them.

  8. Evaluation of Purging Solutions for Military Fuel Tanks

    Rhee, In-Sik

    2003-01-01

    ... It is also a biodegradable, water-based solvent. Because of this property, the US Army has used this environmentally friendly solvent as a purging solution in all military fuel tanks, including the Heavy Expanded Mobility Tactical Truck (HEMTT...

  9. Verification of the ASTM G-124 Purge Equation

    Robbins, Katherine E.; Davis, Samuel Eddie

    2009-01-01

    ASTM G-124 seeks to evaluate combustion characteristics of metals in high-purity (greater than 99%) oxygen atmospheres. ASTM G-124 provides the following equation to determine the minimum number of purges required to reach this level of purity in a test chamber: n = -4/log10(Pa/Ph), where "n" is the total number of purge cycles required, Ph is the absolute pressure used for the purge on each cycle and Pa is the atmospheric pressure or the vent pressure. The origin of this equation is not known and has been the source of frequent questions as to its accuracy and reliability. This paper shows the derivation of the G-124 purge equation, and experimentally explores the equation to determine if it accurately predicts the number of cycles required.
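    Assuming the -4 in the numerator encodes a target residual of 10^-4 of the original atmosphere (consistent with the greater-than-99% purity goal stated above), the equation follows from requiring (Pa/Ph)^n <= 10^-4, since each pressurize-and-vent cycle dilutes the residual by a factor of Pa/Ph. A minimal sketch of the cycle count:

```python
import math

def purge_cycles(p_high, p_ambient):
    """Minimum whole number of purge cycles from the ASTM G-124 equation
    n = -4 / log10(Pa / Ph), where each cycle pressurizes to Ph and vents
    to Pa, diluting the residual atmosphere by a factor of Pa/Ph."""
    if not 0 < p_ambient < p_high:
        raise ValueError("require 0 < vent pressure < purge pressure")
    return math.ceil(-4.0 / math.log10(p_ambient / p_high))

# Example: pressurize to 100 psia, vent to 14.7 psia ambient.
n = purge_cycles(100.0, 14.7)
residual = (14.7 / 100.0) ** n  # fraction of original atmosphere remaining
```

    Rounding up to a whole cycle count is an assumption of this sketch; the printed equation itself yields a non-integer n.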

  10. Basic thinking patterns and working methods for multiple DFX

    Andreasen, Mogens Myrup; Mortensen, Niels Henrik

    1997-01-01

    This paper attempts to describe the theory and methodologies behind DFX and linking multiple DFX's together. The contribution is an articulation of basic thinking patterns and a description of some working methods for handling multiple DFX.

  11. Multiple attenuation to reflection seismic data using Radon filter and Wave Equation Multiple Rejection (WEMR) method

    Erlangga, Mokhammad Puput [Geophysical Engineering, Institut Teknologi Bandung, Ganesha Street no. 10, Basic Science B Building fl. 2-3, Bandung, 40132, West Java, Indonesia. puput.erlangga@gmail.com (Indonesia)]

    2015-04-16

    Separation between signal and noise, incoherent or coherent, is important in seismic data processing. Even after processing, coherent noise may remain mixed with the primary signal; multiple reflections are one kind of coherent noise. In this research, we processed seismic data to attenuate multiple reflections in both synthetic and real seismic data from Mentawai. There are several methods to attenuate multiple reflections; one of them is the Radon filter method, which discriminates between primary and multiple reflections in the τ-p domain based on the moveout difference between them. However, where the moveout difference is too small, the Radon filter is not enough to attenuate the multiple reflections, and it also produces artifacts in the gather data. We therefore also use the Wave Equation Multiple Rejection (WEMR) method to attenuate long-period multiple reflections. The WEMR method attenuates long-period multiple reflections based on wave equation inversion: from the inversion of the wave equation and the magnitude of the seismic wave amplitude observed at the free surface, we obtain the water bottom reflectivity, which is used to eliminate the multiple reflections. The WEMR method does not depend on the moveout difference, and can therefore be applied to seismic data with small moveout differences, such as the Mentawai data, whose small moveout difference is caused by the limited far offset of only 705 meters. We compared the multiple-free stacked sections of the real data after processing with the Radon filter and with WEMR. The conclusion is that the WEMR method attenuates long-period multiple reflections more effectively than the Radon filter method on the real (Mentawai) seismic data.

  12. On multiple level-set regularization methods for inverse problems

    DeCezaro, A; Leitão, A; Tai, X-C

    2009-01-01

    We analyze a multiple level-set method for solving inverse problems with piecewise constant solutions. This method corresponds to an iterated Tikhonov method for a particular Tikhonov functional G_α based on TV-H^1 penalization. We define generalized minimizers for our Tikhonov functional and establish an existence result. Moreover, we prove convergence and stability results for the proposed Tikhonov method. A multiple level-set algorithm is derived from the first-order optimality conditions for the Tikhonov functional G_α, similarly to the iterated Tikhonov method. The proposed multiple level-set method is tested on an inverse potential problem. Numerical experiments show that the method is able to recover multiple objects as well as multiple contrast levels.
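    For orientation, the iterated Tikhonov structure referred to above has the generic form below. This is the standard textbook iteration, not the paper's exact functional, which additionally penalizes the level-set functions through the TV-H^1 term in G_α:

```latex
% Generic iterated Tikhonov step for the inverse problem F(u) = y,
% given noisy data y^{\delta}; the paper's G_\alpha adds a TV--H^1
% penalty on the level-set functions.
u_{k+1} \in \operatorname*{arg\,min}_{u}\;
    \bigl\| F(u) - y^{\delta} \bigr\|^{2}
    + \alpha \, \bigl\| u - u_{k} \bigr\|^{2}
```

    Each step regularizes toward the previous iterate u_k rather than toward a fixed prior, which is what distinguishes the iterated variant from ordinary Tikhonov regularization.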

  13. Multiple network interface core apparatus and method

    Underwood, Keith D [Albuquerque, NM; Hemmert, Karl Scott [Albuquerque, NM

    2011-04-26

    A network interface controller and network interface control method comprising providing a single integrated circuit as a network interface controller and employing a plurality of network interface cores on the single integrated circuit.

  14. Multiple tag labeling method for DNA sequencing

    Mathies, R.A.; Huang, X.C.; Quesada, M.A.

    1995-07-25

    A DNA sequencing method is described which uses single lane or channel electrophoresis. Sequencing fragments are separated in the lane and detected using a laser-excited, confocal fluorescence scanner. Each set of DNA sequencing fragments is separated in the same lane and then distinguished using a binary coding scheme employing only two different fluorescent labels. Also described is a method of using radioisotope labels. 5 figs.

  15. Multiple time scale methods in tokamak magnetohydrodynamics

    Jardin, S.C.

    1984-01-01

    Several methods are discussed for integrating the magnetohydrodynamic (MHD) equations in tokamak systems on other than the fastest time scale. The dynamical grid method for simulating ideal MHD instabilities utilizes a natural nonorthogonal time-dependent coordinate transformation based on the magnetic field lines. The coordinate transformation is chosen to be free of the fast time scale motion itself, and to yield a relatively simple scalar equation for the total pressure, P = p + B^2/2μ_0, which can be integrated implicitly to average over the fast time scale oscillations. Two methods are described for the resistive time scale. The zero-mass method uses a reduced set of two-fluid transport equations obtained by expanding in the inverse magnetic Reynolds number, and in the small ratio of perpendicular to parallel mobilities and thermal conductivities. The momentum equation becomes a constraint equation that forces the pressure and magnetic fields and currents to remain in force balance equilibrium as they evolve. The large mass method artificially scales up the ion mass and viscosity, thereby reducing the severe time scale disparity between wavelike and diffusionlike phenomena, but not changing the resistive time scale behavior. Other methods addressing the intermediate time scales are discussed.

  16. Purging sensitive science instruments with nitrogen in the STS environment

    Lumsden, J. M.; Noel, M. B.

    1983-01-01

    Potential contamination of extremely sensitive science instruments during prelaunch, launch, and earth orbit operations is a major concern to the Galileo and International Solar Polar Mission (ISPM) Programs. The Galileo Program is developing a system to purify Shuttle-supplied nitrogen gas for in-flight purging of seven imaging and non-imaging science instruments. Monolayers of contamination deposited on critical surfaces can degrade some instrument sensitivities by as much as fifty percent. The purging system provides a reliable supply of filtered and dried nitrogen gas during these critical phases of the mission, when the contamination potential is highest. The Galileo and ISPM Programs are including the system as Airborne Support Equipment (ASE).

  17. Evaluation of the Validity of Groundwater Samples Obtained Using the Purge Water Management System at SRS

    Beardsley, C.C.

    1999-01-01

    trends to the present time. The latter line of evidence is considered to be the most powerful in demonstrating that representative samples are being acquired by the PWMS, because it is highly unlikely that previously existing concentration trends would continue if resampling had occurred.

    Standard procedure for obtaining protocol groundwater monitoring samples at the Savannah River Site (SRS) calls for extracting, or ''purging,'' sufficient quantities of groundwater to allow removal of stagnant water and to allow certain key indicator parameters to stabilize prior to collection of samples. The water extracted from a well prior to sample collection is termed ''purge water'' and must be managed in an approved fashion if it contains hazardous and/or radiological constituents that exceed the specified health-based limits described in the Investigation Derived Waste Management Plan (WSRC, 1994). Typical management practices include containerization, transportation, treatment, and disposal via Clean Water Act-permitted facilities.

    A technology for handling purge water that eliminates the need to containerize and transport this water to a disposal facility has been developed. This technology, termed the Purge Water Management System (PWMS), is currently under pilot-stage deployment at SRS. The PWMS is a ''closed-loop,'' non-contact system used to collect and return purge water to the originating aquifer after a sampling event without significantly altering the water quality. A schematic drawing of the PWMS is in Figure 1. The system has been successfully demonstrated at both a ''clean'' well, P-26D, and a ''contaminated'' well, MCB-5, by comparing chemical concentration data obtained by PWMS sampling against the historical data record for each of these wells (Hiergesell et al., 1996). In both cases the PWMS was found to yield sample results that were indistinguishable from the results of the historical protocol sampling conducted at those same wells.

    For any method used to

  18. Methods for monitoring multiple gene expression

    Berka, Randy [Davis, CA; Bachkirova, Elena [Davis, CA; Rey, Michael [Davis, CA

    2012-05-01

    The present invention relates to methods for monitoring differential expression of a plurality of genes in a first filamentous fungal cell relative to expression of the same genes in one or more second filamentous fungal cells using microarrays containing Trichoderma reesei ESTs or SSH clones, or a combination thereof. The present invention also relates to computer readable media and substrates containing such array features for monitoring expression of a plurality of genes in filamentous fungal cells.

  19. Methods for monitoring multiple gene expression

    Berka, Randy; Bachkirova, Elena; Rey, Michael

    2013-10-01

    The present invention relates to methods for monitoring differential expression of a plurality of genes in a first filamentous fungal cell relative to expression of the same genes in one or more second filamentous fungal cells using microarrays containing Trichoderma reesei ESTs or SSH clones, or a combination thereof. The present invention also relates to computer readable media and substrates containing such array features for monitoring expression of a plurality of genes in filamentous fungal cells.

  20. A simple and reliable method reducing sulfate to sulfide for multiple sulfur isotope analysis.

    Geng, Lei; Savarino, Joel; Savarino, Clara A; Caillon, Nicolas; Cartigny, Pierre; Hattori, Shohei; Ishino, Sakiko; Yoshida, Naohiro

    2018-02-28

    Precise analysis of the four sulfur isotopes of sulfate in geological and environmental samples provides the means to extract unique information in wide geological contexts. Reduction of sulfate to sulfide is the first step in accessing such information. The conventional reduction method suffers from a cumbersome distillation system, long reaction times and a large volume of reducing solution. We present a new and simple method enabling multiple samples to be processed at one time with a much reduced volume of reducing solution. One mL of reducing solution made of HI and NaH2PO2 was added to a septum glass tube with dry sulfate. The tube was heated at 124°C and the produced H2S was purged with inert gas (He or N2) through gas-washing tubes and then collected in NaOH solution. The collected H2S was converted into Ag2S by adding AgNO3 solution, and the co-precipitated Ag2O was removed by adding a few drops of concentrated HNO3. Within 2-3 h, a 100% yield was observed for samples with 0.2-2.5 μmol Na2SO4. The reduction rate was much slower for BaSO4, and a complete reduction was not observed. International sulfur reference materials, NBS-127, SO-5 and SO-6, were processed with this method, and the measured against accepted δ34S values yielded a linear regression line which had a slope of 0.99 ± 0.01 and an R^2 value of 0.998. The new methodology is easy to handle and allows us to process multiple samples at a time. It has also demonstrated good reproducibility in terms of H2S yield and for further isotope analysis. It is thus a good alternative to the conventional manual method, especially when processing samples with a limited amount of sulfate available. © 2017 The Authors. Rapid Communications in Mass Spectrometry Published by John Wiley & Sons Ltd.
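The reference-material check described above (slope 0.99 ± 0.01, R^2 = 0.998) is an ordinary least-squares regression of measured against accepted δ34S values. A minimal sketch of that kind of check; the accepted and measured values below are hypothetical placeholders, not the paper's data:

```python
# Ordinary least-squares check of measured vs. accepted delta-34S values, the
# kind of regression used to validate the reduction method (slope ~ 1 and
# R^2 ~ 1 indicate no significant fractionation). Values are hypothetical.

def linear_fit(x, y):
    """Least-squares fit y = a*x + b; returns (a, b, r_squared)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot

accepted = [20.3, 0.5, -34.1]   # hypothetical accepted delta-34S (permil)
measured = [20.1, 0.4, -33.9]   # hypothetical measured delta-34S (permil)
slope, intercept, r2 = linear_fit(accepted, measured)
```

A slope near one with a near-zero intercept and R^2 close to one is the signature of a quantitative, fractionation-free reduction.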

  1. Fuzzy multiple attribute decision making methods and applications

    Chen, Shu-Jen

    1992-01-01

    This monograph is intended for an advanced undergraduate or graduate course as well as for researchers, who want a compilation of developments in this rapidly growing field of operations research. This is a sequel to our previous works: "Multiple Objective Decision Making--Methods and Applications: A state-of-the-Art Survey" (No.164 of the Lecture Notes); "Multiple Attribute Decision Making--Methods and Applications: A State-of-the-Art Survey" (No.186 of the Lecture Notes); and "Group Decision Making under Multiple Criteria--Methods and Applications" (No.281 of the Lecture Notes). In this monograph, the literature on methods of fuzzy Multiple Attribute Decision Making (MADM) has been reviewed thoroughly and critically, and classified systematically. This study provides readers with a capsule look into the existing methods, their characteristics, and applicability to the analysis of fuzzy MADM problems. The basic concepts and algorithms from the classical MADM methods have been used in the development of the f...

  2. N2 vs H2O as purge/hydrostatic head

    Mast, J.C.

    1996-01-01

    This document provides the information needed to explain to the customer the ETP for N2 vs H2O as Purge/Hydrostatic Head. This ETP follows the format described in Issuance of New Characterization Equipment Engineering Desk Instructions, 75200-95-013.

  3. Optimization of Inventories for Multiple Companies by Fuzzy Control Method

    Kawase, Koichi; Konishi, Masami; Imai, Jun

    2008-01-01

    In this research, fuzzy control theory is applied to inventory control of the supply chain between multiple companies. The proposed control method deals with the amounts of inventory along the supply chain between the companies. Referring to past demand and tardiness, the inventory amounts of raw materials are determined by fuzzy inference. Appropriate inventory control becomes possible by optimizing the fuzzy control gain using the SA method. The variation of ...
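The core step, determining an inventory amount by fuzzy inference from past demand and tardiness, can be sketched in a minimal Mamdani style. The membership functions, universes and rule base below are hypothetical stand-ins; the paper's actual rules and SA-optimized gains are not given in the abstract:

```python
# Mamdani-style fuzzy inference sketch: set a raw-material inventory level
# from past demand and delivery tardiness. Membership functions and the
# rule base are hypothetical stand-ins, not the paper's design.

def tri(x, a, b, c):
    """Triangular membership peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def infer_inventory(demand, tardiness):
    """Crisp inventory level via a weighted average of rule consequents."""
    low_d, high_d = tri(demand, 0.0, 0.0, 100.0), tri(demand, 0.0, 100.0, 100.0)
    low_t, high_t = tri(tardiness, 0.0, 0.0, 10.0), tri(tardiness, 0.0, 10.0, 10.0)
    rules = [  # (firing strength, recommended inventory level)
        (min(low_d, low_t), 20.0),    # low demand, punctual deliveries
        (min(low_d, high_t), 50.0),   # low demand, late deliveries
        (min(high_d, low_t), 60.0),   # high demand, punctual deliveries
        (min(high_d, high_t), 90.0),  # high demand, late deliveries
    ]
    den = sum(w for w, _ in rules)
    return sum(w * v for w, v in rules) / den if den else 0.0
```

In the paper's scheme, an optimizer such as SA would tune the shapes and consequents above rather than leaving them fixed.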

  4. Aquifer restoration system improvement using an acid fluid purge

    Hodder, E.A.; Peck, C.A.

    1992-01-01

    The implementation of a water pump acid purge procedure at a free-phase liquid hydrocarbon recovery site has increased water pump operational run times and improved the effectiveness of the aquifer restoration effort. Before introduction of this technique, pumps at some locations would fail within 14 days of operation due to CaSO4·2H2O (calcium sulfate) precipitate fouling. After acid purge implementation at these locations, pump operational life improved to an average of over 110 days. Other locations, where pump failures would occur within one month, were improved to approximately six months of operation. The increase in water pump run time has also improved the liquid hydrocarbon recovery rate by 2,000 gallons per day; representing a 20% increase for the aquifer restoration system. Other concepts tested in attempts to prolong pump life included: specially designed electric submersible pumps, submersible pump shrouds intended to reduce the fluid pressure shear that enhances CaSO4·2H2O precipitation, and high volume pneumatic gas lift pumps. Due to marginal pump life improvement or other undesirable operational features, these concepts were primarily ineffective. The purge apparatus utilizes an acid pump, hose, and discharge piping to deliver the solution directly into the inlet of an operating water pump. The water pumps used for this activity require stainless steel construction with Teflon or other acid resistant bearings and seals. Purges are typically conducted before sudden discharge pressure drops (greater than 15 psig) occur for the operating water pump. Depending on volume of precipitate accumulation and pump type, discharge pressure is restored after introduction of 10 to 40 gallons of hydrochloric acid solution. The acid purge procedure outlined herein eliminates operational downtime and does not require well head pump removal and the associated costs of industry cleaning procedures

  5. Multiple histogram method and static Monte Carlo sampling

    Inda, M.A.; Frenkel, D.

    2004-01-01

    We describe an approach to use multiple-histogram methods in combination with static, biased Monte Carlo simulations. To illustrate this, we computed the force-extension curve of an athermal polymer from multiple histograms constructed in a series of static Rosenbluth Monte Carlo simulations. From

  6. A multiple regression method for genomewide association studies ...

    Bujun Mei

    2018-06-07

    Jun 7, 2018 ... Similar to the typical genomewide association tests using LD ... new approach performed validly when the multiple regression based on linkage method was employed. .... the model, two groups of scenarios were simulated.

  7. Purging behavior in anorexia nervosa and eating disorder not otherwise specified

    Støving, René Klinkby; Andries, Alin; Brixen, Kim Torsten

    2012-01-01

    Purging behavior in eating disorders is associated with medical risks. We aimed to compare remission rates in purging and non-purging females with anorexia nervosa (AN) and eating disorder not otherwise specified (EDNOS) in a large retrospective single center cohort. A total of 339 patients...

  8. ATR/OTR-SY Tank Camera Purge System and in Tank Color Video Imaging System

    Werry, S.M.

    1995-01-01

    This procedure will document the satisfactory operation of the 101-SY tank Camera Purge System (CPS) and the 101-SY in-tank Color Camera Video Imaging System (CCVIS). Included in the CPS is the nitrogen purging system safety interlock, which shuts down all the color video imaging system electronics within the 101-SY tank vapor space during loss of nitrogen purge pressure.

  9. Reciprocal associations between negative affect, binge eating, and purging in the natural environment in women with bulimia nervosa.

    Lavender, Jason M; Utzinger, Linsey M; Cao, Li; Wonderlich, Stephen A; Engel, Scott G; Mitchell, James E; Crosby, Ross D

    2016-04-01

    Although negative affect (NA) has been identified as a common trigger for bulimic behaviors, findings regarding NA following such behaviors have been mixed. This study examined reciprocal associations between NA and bulimic behaviors using real-time, naturalistic data. Participants were 133 women with bulimia nervosa (BN), according to the 4th edition of the Diagnostic and Statistical Manual of Mental Disorders, who completed a 2-week ecological momentary assessment protocol in which they recorded bulimic behaviors and provided multiple daily ratings of NA. A multilevel autoregressive cross-lagged analysis was conducted to examine concurrent, first-order autoregressive, and prospective associations between NA, binge eating, and purging across the day. Results revealed positive concurrent associations between all variables across all time points, as well as numerous autoregressive associations. For prospective associations, higher NA predicted subsequent bulimic symptoms at multiple time points; conversely, binge eating predicted lower NA at multiple time points, and purging predicted higher NA at 1 time point. Several autoregressive and prospective associations were also found between binge eating and purging. This study used a novel approach to examine NA in relation to bulimic symptoms, contributing to the existing literature by directly examining the magnitude of the associations, examining differences in the associations across the day, and controlling for other associations in testing each effect in the model. These findings may have relevance for understanding the etiology and/or maintenance of bulimic symptoms, as well as potentially informing psychological interventions for BN. (c) 2016 APA, all rights reserved.

  10. Multiple independent identification decisions: a method of calibrating eyewitness identifications.

    Pryke, Sean; Lindsay, R C L; Dysart, Jennifer E; Dupuis, Paul

    2004-02-01

    Two experiments (N = 147 and N = 90) explored the use of multiple independent lineups to identify a target seen live. In Experiment 1, simultaneous face, body, and sequential voice lineups were used. In Experiment 2, sequential face, body, voice, and clothing lineups were used. Both studies demonstrated that multiple identifications (by the same witness) from independent lineups of different features are highly diagnostic of suspect guilt (G. L. Wells & R. C. L. Lindsay, 1980). The number of suspect and foil selections from multiple independent lineups provides a powerful method of calibrating the accuracy of eyewitness identification. Implications for use of current methods are discussed. ((c) 2004 APA, all rights reserved)
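Under the independence assumption the study relies on, the diagnosticity ratios of separate lineups multiply, which is why identifications of different features by the same witness are so informative in combination. A small sketch; the per-lineup identification rates below are hypothetical, not the experiments' data:

```python
# Under independence, diagnosticity ratios from separate lineups multiply:
# D_total = product of P(suspect ID | guilty) / P(suspect ID | innocent).
# The per-lineup rates below are hypothetical, not the experiments' data.

def combined_diagnosticity(rates):
    """rates: list of (p_id_given_guilty, p_id_given_innocent) per lineup."""
    d = 1.0
    for p_guilty, p_innocent in rates:
        d *= p_guilty / p_innocent
    return d

# e.g. face, body and voice lineups, each only moderately diagnostic alone
lineups = [(0.6, 0.1), (0.4, 0.15), (0.5, 0.2)]
total = combined_diagnosticity(lineups)
```

Three individually modest lineups combine, under these assumed rates, to a diagnosticity of about 40, illustrating how multiple independent identifications calibrate eyewitness accuracy.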

  11. Radiological Design Summary Report for TRU Vent and Purge Process

    Taus, L.B.

    2004-01-01

    This report contains top-level requirements for the various areas of radiological protection for workers. Detailed quotations of the requirements from the applicable regulatory documents can be found in the accompanying Implementation Guide. For the purposes of demonstrating compliance with these requirements, per Engineering Standard 01064, ''shall consider'' / ''shall evaluate'' indicates that the designer must examine the requirement for the design and either incorporate it or provide a technical justification as to why the requirement is not incorporated. The Transuranic Vent and Purge process is not a project, but is considered a process change. This process has been performed successfully by Solid Waste on lower-activity TRU drums. This summary report applies a graded approach and describes how the Transuranic Vent and Purge process meets each of the applicable radiological design criteria and requirements specified in Manual WSRC-TM-95-1, Engineering Standard Number 01064.

  12. HARMONIC ANALYSIS OF SVPWM INVERTER USING MULTIPLE-PULSES METHOD

    Mehmet YUMURTACI

    2009-01-01

    Full Text Available Space Vector Modulation (SVM) is a popular and important PWM technique for three-phase voltage source inverters in the control of induction motors. In this study, harmonic analysis of Space Vector PWM (SVPWM) is carried out using the multiple-pulses method. The multiple-pulses method calculates the Fourier coefficients of the individual positive and negative pulses of the output PWM waveform and adds them together, using the principle of superposition, to obtain the Fourier coefficients of the whole PWM output signal. Harmonic magnitudes can be calculated directly by this method without linearization, look-up tables or Bessel functions. In this study, the results obtained from applying SVPWM over a range of parameter values are compared with the results obtained with the multiple-pulses method.
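The multiple-pulses idea can be sketched directly: each rectangular pulse has closed-form Fourier coefficients, and by superposition the coefficients of the whole waveform are the sums of the per-pulse contributions. The pulse edges below are illustrative, not an actual SVPWM switching pattern:

```python
import math

# "Multiple-pulses" Fourier analysis: each rectangular pulse of a PWM
# waveform has closed-form coefficients, and superposition sums the
# per-pulse contributions into the coefficients of the whole signal.

def pulse_coeffs(n, t1, t2, amp, T):
    """nth-harmonic (a_n, b_n) of a rectangular pulse of height amp on [t1, t2]."""
    w = 2.0 * math.pi / T
    a_n = amp / (n * math.pi) * (math.sin(n * w * t2) - math.sin(n * w * t1))
    b_n = amp / (n * math.pi) * (math.cos(n * w * t1) - math.cos(n * w * t2))
    return a_n, b_n

def waveform_coeffs(n, pulses, T):
    """Superpose pulse contributions; pulses is a list of (t1, t2, amplitude)."""
    parts = [pulse_coeffs(n, t1, t2, amp, T) for t1, t2, amp in pulses]
    return sum(p[0] for p in parts), sum(p[1] for p in parts)

# Sanity check: one pulse over half the period is a square wave, b_1 = 2/pi
a1, b1 = waveform_coeffs(1, [(0.0, 0.5, 1.0)], 1.0)
```

For an SVPWM signal one would list every positive and negative pulse's switching instants per period and sum them the same way, with no linearization or Bessel functions needed.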

  13. Design and development of pressure and repressurization purge system for reusable space shuttle multilayer insulation system

    1972-01-01

    The experimental determination of purge bag materials properties, development of purge bag manufacturing techniques, experimental evaluation of a subscale purge bag under simulated operating conditions, and experimental evaluation of the purge pin concept for MLI purging are discussed. The basic purge bag material, epoxy fiberglass bounded by skins of FEP Teflon, showed no significant permeability to helium flow under normal operating conditions. Purge bag small-scale manufacturing tests were conducted to develop tooling and fabrication techniques for use in full-scale bag manufacture. A purge bag material layup technique was developed whereby the two plies of epoxy fiberglass enclosed between skins of FEP Teflon are vacuum-bag cured in an oven in a single operation. The material is cured on a tool with the shape of a purge bag half. Plastic tooling was selected for use in bag fabrication. A model purge bag 0.6 m in diameter was fabricated and subjected to a series of structural and environmental tests simulating various flight-type environments. Pressure cycling tests at high (450 K) and low (200 K) temperature as well as acoustic loading tests were performed. The purge bag concept proved to be structurally sound and was used for the full-scale bag detailed design model.

  14. Research on neutron source multiplication method in nuclear critical safety

    Zhu Qingfu; Shi Yongqian; Hu Dingsheng

    2005-01-01

    This paper concerns research on the neutron source multiplication method in nuclear criticality safety. Based on the neutron diffusion equation with an external neutron source, the effective subcritical multiplication factor k_s is deduced; k_s differs from the effective neutron multiplication factor k_eff in the case of a subcritical system with an external neutron source. A verification experiment on a subcritical system indicates that the parameter measured with the neutron source multiplication method is k_s, and that k_s depends on the position of the external neutron source in the subcritical system and on the source spectrum. The relation between k_s and k_eff, and the effect of each on nuclear criticality safety, are discussed. (author)
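A common way to apply the source multiplication method is through the relation M = 1/(1 - k_s), which links the subcritical multiplication factor to the ratio of detector count rates with and without multiplication. A minimal sketch; real measurements need the source-position and spectrum corrections the abstract discusses:

```python
# Source multiplication sketch: in a subcritical assembly with an external
# source, the multiplication M = 1/(1 - k_s) relates the subcritical factor
# k_s to the ratio of count rates with and without multiplication. Real
# measurements need source-position and spectrum corrections.

def k_s_from_counts(c_multiplied, c_source_only):
    """Infer k_s from M = c_multiplied / c_source_only using M = 1/(1 - k_s)."""
    m = c_multiplied / c_source_only
    return 1.0 - 1.0 / m
```

For example, a tenfold multiplication of the source count rate corresponds to k_s = 0.9, which is why the 1/M trend is watched so closely as a system approaches critical.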

  15. Acceptance/Operational Test Report for Tank 241-AN-104 camera and camera purge control system

    Castleberry, J.L.

    1995-11-01

    This Acceptance/Operational Test Procedure (ATP/OTP) will document the satisfactory operation of the camera purge panel, purge control panel, color camera system and associated control components destined for installation. The final acceptance of the complete system will be performed in the field. The purge panel and purge control panel will be tested to verify that the safety interlock shuts down the camera and pan-and-tilt unit inside the tank vapor space during loss of purge pressure, and that the correct purge volume exchanges are performed as required by NFPA 496. This procedure is separated into seven sections. This Acceptance/Operational Test Report documents the successful acceptance and operability testing of the 241-AN-104 camera system and camera purge control system.
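The purge volume exchanges mentioned above can be sized with simple arithmetic: the time for N enclosure-volume exchanges at a given flow rate and, under an ideal-mixing assumption, the residual fraction e^(-N) of the original atmosphere. The numbers below are illustrative; NFPA 496 specifies the actual exchange requirements:

```python
import math

# Purge sizing sketch: minutes of flow needed for N enclosure-volume
# exchanges, and the residual fraction of the original atmosphere after N
# exchanges under an ideal-mixing assumption. Values are illustrative only.

def purge_time(volume_l, flow_l_per_min, exchanges):
    """Minutes of purge flow for the given number of volume exchanges."""
    return exchanges * volume_l / flow_l_per_min

def residual_fraction(exchanges):
    """Fraction of the original atmosphere remaining, ideal mixing: e^-N."""
    return math.exp(-exchanges)
```

With a hypothetical 500 L enclosure and 100 L/min of purge gas, four volume exchanges take 20 minutes and leave under 2% of the original atmosphere under ideal mixing, which is why interlocks act on loss of purge pressure rather than on time alone.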

  16. An Intuitionistic Multiplicative ORESTE Method for Patients’ Prioritization of Hospitalization

    Cheng Zhang

    2018-04-01

    The pressure on sickbeds is a common and intractable issue in public hospitals in China due to the large population. Assigning the order of hospitalization of patients is difficult because of complex patient information such as disease type, degree of emergency, and severity. It is critical to rank the patients taking full account of the various factors. However, most of the evaluation criteria for hospitalization are qualitative, and classical ranking methods cannot derive the detailed relations between patients from these criteria. Motivated by this, a comprehensive multiple criteria decision making method named the intuitionistic multiplicative ORESTE (organisation, rangement et synthèse de données relationnelles, in French) was proposed to handle the problem. Both the subjective and objective weights of the criteria were considered in the proposed method. To do so, first, considering the vagueness of human perceptions of the alternatives, an intuitionistic multiplicative preference relation model is applied to represent the experts' preferences over the pairwise alternatives with respect to the predetermined criteria. Then, a correlation coefficient-based weight determining method is developed to derive the objective weights of the criteria; this method can overcome the biased results caused by highly correlated criteria. Afterwards, we improved the general ranking method ORESTE by introducing a new score function that considers both the subjective and objective weights of the criteria. An intuitionistic multiplicative ORESTE method was then developed and further illustrated by a case study concerning the prioritization of patients.

  17. Symbolic interactionism as a theoretical perspective for multiple method research.

    Benzies, K M; Allen, M N

    2001-02-01

    Qualitative and quantitative research rely on different epistemological assumptions about the nature of knowledge. However, the majority of nurse researchers who use multiple method designs do not address the problem of differing theoretical perspectives. Traditionally, symbolic interactionism has been viewed as one perspective underpinning qualitative research, but it is also the basis for quantitative studies. Rooted in social psychology, symbolic interactionism has a rich intellectual heritage that spans more than a century. Underlying symbolic interactionism is the major assumption that individuals act on the basis of the meaning that things have for them. The purpose of this paper is to present symbolic interactionism as a theoretical perspective for multiple method designs with the aim of expanding the dialogue about new methodologies. Symbolic interactionism can serve as a theoretical perspective for conceptually clear and soundly implemented multiple method research that will expand the understanding of human health behaviour.

  18. A General Method for QTL Mapping in Multiple Related Populations Derived from Multiple Parents

    Yan AO

    2009-03-01

    It is well known that incorporating existing populations derived from multiple parents may improve QTL mapping and QTL-based breeding programs. However, no general maximum likelihood method has been available for this strategy. Building on QTL mapping in multiple related populations derived from two parents, a maximum likelihood estimation method is proposed that can incorporate several populations derived from three or more parents and can also handle different mating designs. Taking a circle design as an example, we conducted simulation studies of the effect of QTL heritability and sample size on the proposed method. The results showed that, at the same heritability, jointly analyzing three F2 populations gave greater power of QTL detection and more precise and accurate parameter estimates than the joint analysis of any two F2 populations. Higher heritability, especially with larger sample sizes, increased the power of QTL detection and improved the estimation of parameters. Potential advantages of the method are as follows: first, existing results of QTL mapping in single populations can be compared and integrated with each other, improving the power of QTL detection and the precision of QTL mapping. Second, owing to the multiple alleles present in multiple parents, the method can exploit gene resources more fully, laying an important genetic groundwork for plant improvement.

  19. Catalytic membrane reactor for tritium extraction system from He purge

    Santucci, Alessia; Incelli, Marco; Sansovini, Mirko; Tosti, Silvano

    2016-01-01

    Highlights: • In the HCPB blanket, the produced tritium is recovered by purging with helium; membrane technologies are able to separate tritium from helium. • The paper presents the results of two experimental campaigns. • In the first, a Pd–Ag diffuser for hydrogen separation is tested at several operating conditions. • In the second, the ability of a Pd–Ag membrane reactor for water decontamination is assessed by performing isotopic swamping and water gas shift reactions. - Abstract: In the Helium Cooled Pebble Bed (HCPB) blanket concept, the produced tritium is recovered by purging the breeder with helium at low pressure, so a tritium extraction system (TES) is foreseen to separate the produced tritium (which contains impurities like water) from the helium purge gas. Several R&D activities are running in parallel to experimentally identify the most promising TES technologies: in particular, Pd-based membrane reactors (MR) are under investigation because of their large hydrogen selectivity, continuous operation capability, reliability and compactness. The construction and operation under DEMO-relevant conditions (which presently foresee a He purge flow rate of about 10,000 Nm3/h and a H2/He ratio of 0.1%) of a medium-scale MR is scheduled for next year, while presently preliminary experiments on a small-scale reactor are performed to identify the most suitable operating conditions and catalyst materials. This work presents the results of an experimental campaign carried out on a Pd-based membrane aimed at measuring the capability of this device in separating hydrogen from helium. Many operating conditions have been investigated by considering different He/H2 feed flow ratios, several lumen pressures and reactor temperatures. Moreover, the performances of a membrane reactor (composed of a Pd–Ag tube having a wall thickness of about 113 μm, length 500 mm and diameter 10 mm) in processing the water contained in the purge gas have been measured by using

  20. Catalytic membrane reactor for tritium extraction system from He purge

    Santucci, Alessia, E-mail: alessia.santucci@enea.it [ENEA for EUROfusion, Via E. Fermi 45, 00044 Frascati, Roma (Italy); Incelli, Marco [ENEA for EUROfusion, Via E. Fermi 45, 00044 Frascati, Roma (Italy); DEIM, University of Tuscia, Via del Paradiso 47, 01100 Viterbo (Italy); Sansovini, Mirko; Tosti, Silvano [ENEA for EUROfusion, Via E. Fermi 45, 00044 Frascati, Roma (Italy)

    2016-11-01

    Highlights: • In the HCPB blanket, the produced tritium is recovered by purging with helium; membrane technologies are able to separate tritium from helium. • The paper presents the results of two experimental campaigns. • In the first, a Pd–Ag diffuser for hydrogen separation is tested at several operating conditions. • In the second, the ability of a Pd–Ag membrane reactor for water decontamination is assessed by performing isotopic swamping and water gas shift reactions. - Abstract: In the Helium Cooled Pebble Bed (HCPB) blanket concept, the produced tritium is recovered by purging the breeder with helium at low pressure, so a tritium extraction system (TES) is foreseen to separate the produced tritium (which contains impurities like water) from the helium purge gas. Several R&D activities are running in parallel to experimentally identify the most promising TES technologies: in particular, Pd-based membrane reactors (MR) are under investigation because of their large hydrogen selectivity, continuous operation capability, reliability and compactness. The construction and operation under DEMO-relevant conditions (which presently foresee a He purge flow rate of about 10,000 Nm3/h and a H2/He ratio of 0.1%) of a medium-scale MR is scheduled for next year, while presently preliminary experiments on a small-scale reactor are performed to identify the most suitable operating conditions and catalyst materials. This work presents the results of an experimental campaign carried out on a Pd-based membrane aimed at measuring the capability of this device in separating hydrogen from helium. Many operating conditions have been investigated by considering different He/H2 feed flow ratios, several lumen pressures and reactor temperatures. Moreover, the performances of a membrane reactor (composed of a Pd–Ag tube having a wall thickness of about 113 μm, length 500 mm and diameter 10 mm) in processing the water contained in the purge gas have been

  1. Method for measuring multiple scattering corrections between liquid scintillators

    Verbeke, J.M., E-mail: verbeke2@llnl.gov; Glenn, A.M., E-mail: glenn22@llnl.gov; Keefer, G.J., E-mail: keefer1@llnl.gov; Wurtz, R.E., E-mail: wurtz1@llnl.gov

    2016-07-21

    A time-of-flight method is proposed to experimentally quantify the fractions of neutrons scattering between scintillators. An array of scintillators is characterized in terms of crosstalk with this method by measuring a californium source at different neutron energy thresholds. The spectral information recorded by the scintillators can be used to estimate the fractions of multiply scattered neutrons. With the help of a correction to Feynman's point model theory that accounts for multiple scattering, these fractions can in turn improve the mass reconstruction of fissile materials under investigation.
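
    As a rough illustration of the kinematics behind such a time-of-flight gate, the sketch below (Python, with an illustrative detector separation and energy window, not values from the paper) flags detector-pair coincidences whose time difference is consistent with a neutron travelling between the two scintillators:

```python
import math

# Non-relativistic neutron time of flight between two scintillators.
# m_n c^2 = 939.565 MeV; c in m/s.
NEUTRON_MASS_MEV = 939.565
C_M_PER_S = 2.998e8

def neutron_speed(energy_mev):
    """Speed (m/s) of a neutron with the given kinetic energy (non-relativistic)."""
    return C_M_PER_S * math.sqrt(2.0 * energy_mev / NEUTRON_MASS_MEV)

def expected_tof_ns(energy_mev, separation_m):
    """Expected flight time (ns) between two detectors separated by separation_m."""
    return separation_m / neutron_speed(energy_mev) * 1e9

def is_crosstalk_candidate(dt_ns, separation_m, e_min_mev, e_max_mev):
    """A coincidence is kinematically consistent with inter-detector scattering
    if its time difference lies between the flight times of the fastest and
    slowest neutrons in the accepted energy window."""
    t_fast = expected_tof_ns(e_max_mev, separation_m)
    t_slow = expected_tof_ns(e_min_mev, separation_m)
    return t_fast <= dt_ns <= t_slow
```

    For example, a 1 MeV neutron needs roughly 36 ns to cross 0.5 m, so a coincidence with that time difference would be kept as a crosstalk candidate for a 0.5-2 MeV window.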

  2. INTEGRATED FUSION METHOD FOR MULTIPLE TEMPORAL-SPATIAL-SPECTRAL IMAGES

    H. Shen

    2012-08-01

    Data fusion techniques have been widely researched and applied in the remote sensing field. In this paper, an integrated fusion method for remotely sensed images is presented. Unlike existing methods, the proposed method can integrate the complementary information in multiple temporal-spatial-spectral images. In order to represent and process the images in one unified framework, two general image observation models are first presented, and the maximum a posteriori (MAP) framework is then used to set up the fusion model. The gradient descent method is employed to solve for the fused image. The efficacy of the proposed method is validated using simulated images.
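
    The MAP-plus-gradient-descent idea can be sketched on a 1-D toy problem. The observation model below (two equally noisy views of the same signal, plus a finite-difference smoothness prior) is a deliberate simplification of the paper's temporal-spatial-spectral observation models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground truth 1-D "image" and two noisy observations of it.
x_true = np.sin(np.linspace(0, 3 * np.pi, 200))
y1 = x_true + 0.3 * rng.standard_normal(x_true.size)
y2 = x_true + 0.3 * rng.standard_normal(x_true.size)

def map_fuse(observations, lam=1.0, steps=500, lr=0.05):
    """Gradient descent on the MAP objective
    sum_i ||y_i - x||^2 + lam * ||D x||^2,
    where D is a first-difference operator (smoothness prior)."""
    x = np.mean(observations, axis=0)
    for _ in range(steps):
        grad = sum(2.0 * (x - y) for y in observations)
        # Gradient of the smoothness term: 2 * lam * D^T D x.
        lap = np.zeros_like(x)
        lap[1:-1] = 2 * x[1:-1] - x[:-2] - x[2:]
        lap[0] = x[0] - x[1]
        lap[-1] = x[-1] - x[-2]
        grad += 2.0 * lam * lap
        x -= lr * grad
    return x

fused = map_fuse([y1, y2])
mse = lambda a: float(np.mean((a - x_true) ** 2))
```

    With these settings the fused estimate has a markedly lower mean squared error than either observation alone, which is the point of fusing complementary (here, merely repeated) observations.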

  3. MFAM: Multiple Frequency Adaptive Model-Based Indoor Localization Method.

    Tuta, Jure; Juric, Matjaz B

    2018-03-24

    This paper presents MFAM (Multiple Frequency Adaptive Model-based localization method), a novel model-based indoor localization method that is capable of using multiple wireless signal frequencies simultaneously. It utilizes an indoor architectural model and the physical properties of wireless signal propagation through objects and space. The motivation for developing a multiple frequency localization method lies in the future Wi-Fi standards (e.g., 802.11ah) and the growing number of various wireless signals present in buildings (e.g., Wi-Fi, Bluetooth, ZigBee, etc.). Current indoor localization methods mostly rely on a single wireless signal type and often require many devices to achieve the necessary accuracy. MFAM utilizes multiple wireless signal types and improves the localization accuracy over the usage of a single frequency. It continuously monitors signal propagation through space and adapts the model according to the changes indoors. Using multiple signal sources lowers the required number of access points for a specific signal type while utilizing signals already present indoors. Due to the unavailability of 802.11ah hardware, we have evaluated the proposed method with similar signals: 2.4 GHz Wi-Fi and 868 MHz HomeMatic home automation signals. We performed the evaluation in a modern two-bedroom apartment and measured a mean localization error of 2.0 to 2.3 m and a median error of 2.0 to 2.2 m. Based on our evaluation results, using two different signals improves the localization accuracy by 18% in comparison to a 2.4 GHz Wi-Fi-only approach. Additional signals would improve the accuracy even further. We have shown that MFAM provides better accuracy than competing methods, while having several advantages for real-world usage.
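
    To illustrate how per-frequency propagation models can be combined, here is a minimal sketch using a log-distance path-loss model with made-up calibration constants; the actual MFAM model is adaptive and architecture-aware, not this simple:

```python
# Log-distance path-loss model: rssi(d) = p0 - 10 * n * log10(d / d0).
# p0 (dBm at d0) and the exponent n are per-frequency calibration
# constants; the values below are illustrative, not from the paper.
MODELS = {
    "wifi_2g4": {"p0": -40.0, "n": 2.2, "d0": 1.0},
    "ism_868": {"p0": -30.0, "n": 1.9, "d0": 1.0},
}

def distance_from_rssi(rssi, model):
    """Invert the path-loss model to a distance estimate in metres."""
    return model["d0"] * 10.0 ** ((model["p0"] - rssi) / (10.0 * model["n"]))

def fused_distance(readings):
    """Average the per-frequency distance estimates (equal weights here;
    a real system would weight by each model's residual variance)."""
    d = [distance_from_rssi(r, MODELS[name]) for name, r in readings.items()]
    return sum(d) / len(d)
```

    Each frequency yields its own range estimate from the same true distance, and fusing them damps the per-frequency modelling error, which is the intuition behind the multi-frequency gain reported above.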

  4. MFAM: Multiple Frequency Adaptive Model-Based Indoor Localization Method

    Jure Tuta

    2018-03-01

    This paper presents MFAM (Multiple Frequency Adaptive Model-based localization method), a novel model-based indoor localization method that is capable of using multiple wireless signal frequencies simultaneously. It utilizes an indoor architectural model and the physical properties of wireless signal propagation through objects and space. The motivation for developing a multiple frequency localization method lies in the future Wi-Fi standards (e.g., 802.11ah) and the growing number of various wireless signals present in buildings (e.g., Wi-Fi, Bluetooth, ZigBee, etc.). Current indoor localization methods mostly rely on a single wireless signal type and often require many devices to achieve the necessary accuracy. MFAM utilizes multiple wireless signal types and improves the localization accuracy over the usage of a single frequency. It continuously monitors signal propagation through space and adapts the model according to the changes indoors. Using multiple signal sources lowers the required number of access points for a specific signal type while utilizing signals already present indoors. Due to the unavailability of 802.11ah hardware, we have evaluated the proposed method with similar signals: 2.4 GHz Wi-Fi and 868 MHz HomeMatic home automation signals. We performed the evaluation in a modern two-bedroom apartment and measured a mean localization error of 2.0 to 2.3 m and a median error of 2.0 to 2.2 m. Based on our evaluation results, using two different signals improves the localization accuracy by 18% in comparison to a 2.4 GHz Wi-Fi-only approach. Additional signals would improve the accuracy even further. We have shown that MFAM provides better accuracy than competing methods, while having several advantages for real-world usage.

  5. Multiple Contexts, Multiple Methods: A Study of Academic and Cultural Identity among Children of Immigrant Parents

    Urdan, Tim; Munoz, Chantico

    2012-01-01

    Multiple methods were used to examine the academic motivation and cultural identity of a sample of college undergraduates. The children of immigrant parents (CIPs, n = 52) and the children of non-immigrant parents (non-CIPs, n = 42) completed surveys assessing core cultural identity, valuing of cultural accomplishments, academic self-concept,…

  6. Correction of measured multiplicity distributions by the simulated annealing method

    Hafidouni, M.

    1993-01-01

    Simulated annealing is a method used to solve combinatorial optimization problems. It is used here for the correction of the observed multiplicity distribution from S-Pb collisions at 200 GeV/c per nucleon. (author) 11 refs., 2 figs

  7. System and method for image registration of multiple video streams

    Dillavou, Marcus W.; Shum, Phillip Corey; Guthrie, Baron L.; Shenai, Mahesh B.; Deaton, Drew Steven; May, Matthew Benton

    2018-02-06

    Provided herein are methods and systems for image registration from multiple sources. A method for image registration includes rendering a common field of interest that reflects a presence of a plurality of elements, wherein at least one of the elements is a remote element located remotely from another of the elements and updating the common field of interest such that the presence of the at least one of the elements is registered relative to another of the elements.

  8. Multiple time-scale methods in particle simulations of plasmas

    Cohen, B.I.

    1985-01-01

    This paper surveys recent advances in the application of multiple time-scale methods to particle simulation of collective phenomena in plasmas. These methods dramatically improve the efficiency of simulating low-frequency kinetic behavior by allowing the use of a large timestep while retaining accuracy. The numerical schemes surveyed provide selective damping of unwanted high-frequency waves and preserve numerical stability in a variety of physics models: electrostatic, magneto-inductive, Darwin and fully electromagnetic. The paper reviews hybrid simulation models, the implicit moment-equation method, the direct implicit method, orbit averaging, and subcycling.
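
    Subcycling, one of the surveyed schemes, can be illustrated with a toy system: a fast and a slow harmonic oscillator, where the slow one is pushed with a timestep ten times larger. This is only a cartoon of the technique under assumed parameters, not a plasma code:

```python
import math

def subcycled_push(steps, dt, subcycle):
    """Leapfrog-integrate two uncoupled harmonic oscillators: the fast one
    is advanced every small step dt, while the slow one is advanced only
    once per `subcycle` steps with the large step subcycle * dt."""
    w_fast, w_slow = 10.0, 0.1
    xf, vf = 1.0, 0.0                  # fast particle state
    xs, vs = 1.0, 0.0                  # slow (subcycled) particle state
    big_dt = subcycle * dt
    for i in range(steps):
        # Fast particle: kick-drift-kick with the small step.
        vf += -w_fast ** 2 * xf * dt / 2
        xf += vf * dt
        vf += -w_fast ** 2 * xf * dt / 2
        # Slow particle: one large kick-drift-kick per `subcycle` small steps.
        if i % subcycle == 0:
            vs += -w_slow ** 2 * xs * big_dt / 2
            xs += vs * big_dt
            vs += -w_slow ** 2 * xs * big_dt / 2
    return xf, xs

xf, xs = subcycled_push(steps=1000, dt=0.01, subcycle=10)
reference = math.cos(0.1 * 10.0)       # analytic slow solution at t = 10
```

    The slow particle tracks its analytic solution to high accuracy despite receiving ten times fewer pushes, which is exactly the saving subcycling buys in particle codes where only a few species are fast.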

  9. Statistics of electron multiplication in multiplier phototube: iterative method

    Grau Malonda, A.; Ortiz Sanchez, J.F.

    1985-01-01

    An iterative method is applied to study the variation of dynode response in the multiplier phototube. Three different situations are considered, corresponding to the following ways electrons can arrive at the first dynode: incidence of exactly one electron, incidence of exactly r electrons, and incidence of an average of r̄ electrons. The responses are given for a number of steps between 1 and 5, and for values of the multiplication factor of 2.1, 2.5, 3 and 5. We also study the variance, the skewness and the excess kurtosis for different multiplication factors. (author)
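
    The iterative propagation of moments through a dynode chain can be sketched as follows, assuming (as a simple illustrative case, not necessarily the paper's emission law) Poisson secondary emission, so the per-electron variance equals the gain m:

```python
def cascade_moments(n_steps, m, var_per_electron=None):
    """Iteratively propagate the mean and variance of the electron number
    through n_steps dynodes, each multiplying on average by m.  For a
    Poisson secondary-emission law the per-electron variance equals m."""
    if var_per_electron is None:
        var_per_electron = m          # Poisson assumption
    mean, var = 1.0, 0.0              # exactly one electron at the input
    for _ in range(n_steps):
        # Law of total variance for a sum over the incoming electrons.
        var = m * m * var + var_per_electron * mean
        mean = m * mean
    return mean, var

mean3, var3 = cascade_moments(3, 2.5)
```

    The iteration reproduces the closed-form branching-process result, e.g. mean m^n after n stages; skewness and kurtosis can be propagated the same way with higher moments.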

  10. Statistics of electron multiplication in a multiplier phototube; Iterative method

    Ortiz, J. F.; Grau, A.

    1985-01-01

    In the present paper an iterative method is applied to study the variation of dynode response in the multiplier phototube. Three different situations are considered, corresponding to the following ways electrons can arrive at the first dynode: incidence of exactly one electron, incidence of exactly r electrons, and incidence of an average of r̄ electrons. The responses are given for a number of steps between 1 and 5, and for values of the multiplication factor of 2.1, 2.5, 3 and 5. We also study the variance, the skewness and the excess kurtosis for different multiplication factors. (Author) 11 refs

  11. Walking path-planning method for multiple radiation areas

    Liu, Yong-kuo; Li, Meng-kun; Peng, Min-jun; Xie, Chun-li; Yuan, Cheng-qian; Wang, Shuang-yu; Chao, Nan

    2016-01-01

    Highlights: • A radiation environment modeling method is designed. • A path-evaluating method and a segmented path-planning method are proposed. • A path-planning simulation platform for radiation environments is built. • The method avoids being misled by the minimum-dose path of a single area. - Abstract: Based on a minimum-dose path-searching method, a walking path-planning method for multiple radiation areas was designed in this paper to solve the minimum-dose path problem within a single area and to find the minimum-dose path in the whole space. A path-planning simulation platform was built using the C# programming language and the DirectX engine. The simulation platform was used in simulations dealing with virtual nuclear facilities. Simulation results indicated that the walking path-planning method is effective in providing safety for people walking in nuclear facilities.
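
    A minimum-dose path search of the kind the paper builds on can be sketched with Dijkstra's algorithm on a dose-rate grid; this is an assumed reduction of the problem, not the authors' implementation:

```python
import heapq

def min_dose_path(dose, start, goal):
    """Dijkstra's algorithm on a grid where each cell carries a dose rate;
    the path cost is the accumulated dose of the cells entered."""
    rows, cols = len(dose), len(dose[0])
    best = {start: dose[start[0]][start[1]]}
    heap = [(best[start], start, [start])]
    while heap:
        cost, (r, c), path = heapq.heappop(heap)
        if (r, c) == goal:
            return cost, path
        if cost > best.get((r, c), float("inf")):
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                ncost = cost + dose[nr][nc]
                if ncost < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ncost
                    heapq.heappush(heap, (ncost, (nr, nc), path + [(nr, nc)]))
    return float("inf"), []

# A 3x3 room with a hot cell in the centre: the planner walks around it.
room = [[1, 1, 1],
        [1, 99, 1],
        [1, 1, 1]]
cost, path = min_dose_path(room, (0, 0), (2, 2))
```

    The returned path skirts the high-dose cell; chaining such searches across area boundaries is, roughly, what the segmented multi-area planning adds.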

  12. New weighting methods for phylogenetic tree reconstruction using multiple loci.

    Misawa, Kazuharu; Tajima, Fumio

    2012-08-01

    Efficient determination of evolutionary distances is important for the correct reconstruction of phylogenetic trees. The performance of the pooled distance required for reconstructing a phylogenetic tree can be improved by applying large weights to appropriate distances for reconstructing phylogenetic trees and small weights to inappropriate distances. We developed two weighting methods, the modified Tajima-Takezaki method and the modified least-squares method, for reconstructing phylogenetic trees from multiple loci. By computer simulations, we found that both of the new methods were more efficient in reconstructing correct topologies than the no-weight method. Hence, we reconstructed hominoid phylogenetic trees from mitochondrial DNA using our new methods, and found that the levels of bootstrap support were significantly increased by the modified Tajima-Takezaki and by the modified least-squares method.
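
    As a schematic of weighted pooling of per-locus distances, the sketch below uses inverse-variance weights; this is a stand-in for, not a reproduction of, the modified Tajima-Takezaki and modified least-squares weightings developed in the paper:

```python
def pooled_distance(distances, variances=None):
    """Pool per-locus evolutionary distances.  With no variances given, an
    unweighted mean is used; otherwise each locus gets an inverse-variance
    weight, so noisier loci contribute less to the pooled distance."""
    if variances is None:
        return sum(distances) / len(distances)
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    return sum(w * d for w, d in zip(weights, distances)) / total
```

    For two loci with distances 0.1 and 0.3 and variances 0.01 and 0.04, the pooled distance shifts toward the better-estimated locus (0.14 instead of the plain mean 0.2), which is the "large weights to appropriate distances" idea in miniature.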

  13. Disturbance of gut satiety peptide in purging disorder.

    Keel, Pamela K; Eckel, Lisa A; Hildebrandt, Britny A; Haedt-Matt, Alissa A; Appelbaum, Jonathan; Jimerson, David C

    2018-01-01

    Little is known about biological factors that contribute to purging after normal amounts of food, the central feature of purging disorder (PD). This study comes from a series of nested studies examining ingestive behaviors in bulimic syndromes and specifically evaluated the satiety peptide YY (PYY) and the hunger peptide ghrelin in women with PD (n = 25), bulimia nervosa-purging (BNp) (n = 26), and controls (n = 26). Based on distinct subjective responses to a fixed meal in PD (Keel, Wolfe, Liddle, DeYoung, & Jimerson, ), we tested whether postprandial PYY response was significantly greater and ghrelin levels significantly lower in women with PD compared to controls and women with BNp. Participants completed structured clinical interviews, self-report questionnaires, and laboratory assessments of gut peptide and subjective responses to a fixed meal. Women with PD demonstrated a significantly greater postprandial PYY response compared to women with BNp and controls, who did not differ significantly. PD women also endorsed significantly greater gastrointestinal distress, and PYY predicted gastrointestinal distress. Ghrelin levels were significantly greater in PD and BNp compared to controls, but did not differ significantly between eating disorders. Women with BNp endorsed significantly greater postprandial hunger, and ghrelin predicted hunger. PD is associated with a unique disturbance in PYY response. Findings contribute to growing evidence of physiological distinctions between PD and BNp. Future research should examine whether these distinctions account for differences in clinical presentation, as this could inform the development of specific interventions for patients with PD. © 2017 Wiley Periodicals, Inc.

  14. Multiple centroid method to evaluate the adaptability of alfalfa genotypes

    Moysés Nascimento

    2015-02-01

    This study aimed to evaluate the efficiency of the multiple centroid method for studying the adaptability of alfalfa genotypes (Medicago sativa L.). In this method, the genotypes are compared with ideotypes defined by the bisegmented regression model, according to the researcher's interest. Thus, genotype classification is carried out as determined by the objective of the researcher and the proposed recommendation strategy. Despite the great potential of the method, it needs to be evaluated in a biological context (with real data). In this context, we used data on the evaluation of dry matter production of 92 alfalfa cultivars, with 20 cuttings, from an experiment in randomized blocks with two repetitions carried out from November 2004 to June 2006. The multiple centroid method proved efficient for classifying alfalfa genotypes. Moreover, it produced no ambiguous indications, provided that the ideotypes were defined according to the researcher's interest, facilitating data interpretation.
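
    The classification step of a centroid-style method can be sketched as nearest-centroid assignment. The ideotype coordinates below are hypothetical; the actual method derives its ideotypes from the bisegmented regression model rather than fixing them by hand:

```python
def classify(genotype, ideotypes):
    """Assign a genotype (vector of trait values) to the nearest ideotype
    centroid by squared Euclidean distance (an illustrative reduction of
    the multiple-centroid idea)."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(ideotypes, key=lambda name: d2(genotype, ideotypes[name]))

# Hypothetical ideotypes on two standardized axes (yield, stability).
ideotypes = {
    "ideal": (1.0, 1.0),
    "unstable": (1.0, -1.0),
    "poor": (-1.0, -1.0),
}
label = classify((0.8, 0.7), ideotypes)
```

    Because the researcher chooses the ideotype set, the same genotype data can be classified against different recommendation strategies simply by swapping the centroid dictionary.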

  15. Unplanned Complex Suicide-A Consideration of Multiple Methods.

    Ateriya, Navneet; Kanchan, Tanuj; Shekhawat, Raghvendra Singh; Setia, Puneet; Saraf, Ashish

    2018-05-01

    Detailed death investigations are mandatory to find out the exact cause and manner in non-natural deaths. In this regard, the use of multiple methods in suicide poses a challenge for investigators, especially when the choice of methods to cause death is unplanned. There is an increased likelihood that suspicions of homicide are raised in cases of unplanned complex suicides. A case of complex suicide is reported where the victim resorted to multiple methods to end his life, in what appeared to be an unplanned variant based on the death scene investigation. A meticulous crime scene examination, interviews of the victim's relatives and other witnesses, and a thorough autopsy are warranted to conclude on the cause and manner of death in all such cases. © 2017 American Academy of Forensic Sciences.

  16. Characterizing lentic freshwater fish assemblages using multiple sampling methods

    Fischer, Jesse R.; Quist, Michael C.

    2014-01-01

    Characterizing fish assemblages in lentic ecosystems is difficult, and multiple sampling methods are almost always necessary to gain reliable estimates of indices such as species richness. However, most research focused on lentic fish sampling methodology has targeted recreationally important species, and little to no information is available regarding the influence of multiple methods and timing (i.e., temporal variation) on characterizing entire fish assemblages. Therefore, six lakes and impoundments (48–1,557 ha surface area) were sampled seasonally with seven gear types to evaluate the combined influence of sampling methods and timing on the number of species and individuals sampled. Probabilities of detection for species indicated strong selectivities and seasonal trends that provide guidance on optimal seasons to use gears when targeting multiple species. The evaluation of species richness and number of individuals sampled using multiple gear combinations demonstrated no appreciable benefit over relatively few gears (e.g., up to four) used in optimal seasons. Specifically, over 90% of the species encountered with all gear types and season combinations (N = 19) from six lakes and reservoirs were sampled with nighttime boat electrofishing in the fall and benthic trawling, modified-fyke, and mini-fyke netting during the summer. Our results indicated that the characterization of lentic fish assemblages was highly influenced by the selection of sampling gears and seasons, but did not appear to be influenced by waterbody type (i.e., natural lake, impoundment). The standardization of data collected with multiple methods and seasons to account for bias is imperative to the monitoring of lentic ecosystems and will provide researchers with increased reliability in their interpretations and decisions made using information on lentic fish assemblages.
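
    Evaluating gear combinations against total species richness reduces to set unions. A toy sketch with entirely hypothetical catch data (the species lists below are made up for illustration):

```python
from itertools import combinations

# Hypothetical catch lists: species detected by each gear/season combination.
catches = {
    "electrofishing_fall": {"bluegill", "largemouth_bass", "walleye", "carp"},
    "benthic_trawl_summer": {"channel_catfish", "carp", "darter"},
    "fyke_net_summer": {"bluegill", "crappie", "bullhead"},
    "gill_net_spring": {"walleye", "carp"},
}

def richness(gears):
    """Species richness of the union of catches over the chosen gears."""
    species = set()
    for g in gears:
        species |= catches[g]
    return len(species)

def best_combo(k):
    """Exhaustively find the k-gear combination maximizing richness."""
    return max(combinations(sorted(catches), k), key=richness)

all_species = richness(catches)
```

    Here three well-chosen gears already recover every species the full set of four detects, mirroring the paper's finding that a few gears deployed in their optimal seasons capture most of the assemblage.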

  17. Geometric calibration method for multiple head cone beam SPECT systems

    Rizo, Ph.; Grangeat, P.; Guillemaud, R.; Sauze, R.

    1993-01-01

    A method is presented for performing geometric calibration on Single Photon Emission Tomography (SPECT) cone beam systems with multiple cone beam collimators, each having its own orientation parameters. This calibration method relies on the fact that, in tomography, the relative position of the rotation axis and of the collimator does not change during the acquisition for each head. In order to ensure the stability of the method, the parameters to be estimated are separated into intrinsic parameters, which describe the acquisition geometry, and extrinsic parameters, which describe the position of the detection system with respect to the rotation axis. (authors) 3 refs

  18. A crack growth evaluation method for interacting multiple cracks

    Kamaya, Masayuki

    2003-01-01

    When stress corrosion cracking or corrosion fatigue occurs, multiple cracks are frequently initiated in the same area. According to Section XI of the ASME Boiler and Pressure Vessel Code, multiple cracks are considered as a single combined crack in crack growth analysis if the specified conditions are satisfied. In crack growth processes, however, no prescription for the interference between multiple cracks is given in this code. The JSME Post-Construction Code, issued in May 2000, prescribes the conditions of crack coalescence in the crack growth process. This study aimed to extend this prescription to more general cases. A simulation model was applied to simulate the crack growth process, taking into account the interference between two cracks. This model made it possible to analyze multiple crack growth behaviors for many cases (e.g. different relative positions and lengths) that could not be studied by experiment alone. Based on these analyses, a new crack growth analysis method is suggested that takes into account the interference between multiple cracks. (author)

  19. Galerkin projection methods for solving multiple related linear systems

    Chan, T.F.; Ng, M.; Wan, W.L.

    1996-12-31

    We consider using Galerkin projection methods for solving multiple related linear systems A^(i) x^(i) = b^(i) for 1 ≤ i ≤ s, where A^(i) and b^(i) are different in general. We start with the special case where A^(i) = A and A is symmetric positive definite. The method generates a Krylov subspace from a set of direction vectors obtained by solving one of the systems, called the seed system, by the CG method and then projects the residuals of other systems orthogonally onto the generated Krylov subspace to get the approximate solutions. The whole process is repeated with another unsolved system as a seed until all the systems are solved. We observe in practice a super-convergence behaviour of the CG process of the seed system when compared with the usual CG process. We also observe that only a small number of restarts is required to solve all the systems if the right-hand sides are close to each other. These two features together make the method particularly effective. In this talk, we give theoretical proof to justify these observations. Furthermore, we combine the advantages of this method and the block CG method and propose a block extension of this single seed method. The above procedure can actually be modified for solving multiple linear systems A^(i) x^(i) = b^(i), where A^(i) are now different. We can also extend the previous analytical results to this more general case. Applications of this method to multiple related linear systems arising from image restoration and recursive least squares computations are considered as examples.
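
    The seed-projection idea described above can be sketched directly: run CG on the seed system, keep the A-conjugate search directions, and project a nearby right-hand side onto the generated Krylov subspace. A small NumPy sketch with a random well-conditioned SPD matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)      # symmetric positive definite test matrix
b1 = rng.standard_normal(n)      # the "seed" right-hand side

def cg_with_directions(A, b, tol=1e-10):
    """Plain conjugate gradients from x0 = 0, keeping the A-conjugate
    search directions p_k and the scalars p_k^T A p_k."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    dirs = []
    for _ in range(2 * len(b)):
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        Ap = A @ p
        pAp = p @ Ap
        alpha = (r @ r) / pAp
        dirs.append((p.copy(), pAp))
        x += alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return x, dirs

def galerkin_project(b, dirs):
    """Galerkin solution of A x = b restricted to the seed Krylov space:
    because the p_k are A-conjugate, x = sum_k (p_k . b / p_k . A p_k) p_k."""
    x = np.zeros_like(b)
    for p, pAp in dirs:
        x += ((p @ b) / pAp) * p
    return x

x1, dirs = cg_with_directions(A, b1)
b2 = b1 + 0.01 * rng.standard_normal(n)   # a nearby second system
x2 = galerkin_project(b2, dirs)
rel_res = np.linalg.norm(b2 - A @ x2) / np.linalg.norm(b2)
```

    Projecting the seed's own right-hand side reproduces the CG solution exactly, and for a nearby right-hand side the projected solution already has a small residual, so few (if any) CG restarts remain, which is the effect the abstract describes.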

  20. Evaluation of the Purge Water Management System (PWMS) monitor well sampling technology at SRS

    Hiergesell, R.A.; Cardoso-Neto, J.E.; Williams, D.W.

    1997-01-01

    Due to the complex issues surrounding Investigation Derived Waste (IDW) at SRS, the Environmental Restoration Division has been exploring new technologies to deal with the purge water generated during monitoring well sampling. Standard sampling procedures generate copious amounts of purge water that must be managed as hazardous waste when it contains hazardous and/or radiological contaminants exceeding certain threshold levels. SRS has obtained regulator approval to field test an innovative surface-release prevention mechanism to manage purge water. This mechanism is referred to as the Purge Water Management System (PWMS) and consists of a collapsible bladder situated within a rigid metal tank.

  1. A novel method for producing multiple ionization of noble gas

    Wang Li; Li Haiyang; Dai Dongxu; Bai Jiling; Lu Richang

    1997-01-01

    We introduce a novel method for producing multiple ionization of He, Ne, Ar, Kr and Xe. A nanosecond pulsed electron beam with a large number density, whose energy could be controlled, was produced by directing a focused 308 nm laser beam onto a stainless steel grid. Using this electron beam in a time-of-flight mass spectrometer, we obtained multiple ionization of the noble gases He, Ne, Ar and Xe. Time-of-flight mass spectra of these ions are given. These ions are presumed to be produced by stepwise ionization of the gas atoms under electron beam impact. This method may be used as an ideal soft ionizing point ion source in time-of-flight mass spectrometry.

  2. A level set method for multiple sclerosis lesion segmentation.

    Zhao, Yue; Guo, Shuxu; Luo, Min; Shi, Xue; Bilello, Michel; Zhang, Shaoxiang; Li, Chunming

    2018-06-01

    In this paper, we present a level set method for multiple sclerosis (MS) lesion segmentation from FLAIR images in the presence of intensity inhomogeneities. We use a three-phase level set formulation of segmentation and bias field estimation to segment MS lesions, normal tissue (including GM and WM), CSF, and the background from FLAIR images. To save computational load, we derive a two-phase formulation from the original multi-phase level set formulation to segment the MS lesions and normal tissue regions. The derived method inherits the desirable ability of the original level set method to precisely locate object boundaries, while simultaneously performing segmentation and estimation of the bias field to deal with intensity inhomogeneity. Experimental results demonstrate the advantages of our method over other state-of-the-art methods in terms of segmentation accuracy. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. Does haplodiploidy purge inbreeding depression in rotifer populations?

    Ana M Tortajada

    2009-12-01

    Inbreeding depression is an important evolutionary factor, particularly when new habitats are colonized by few individuals. Inbreeding depression by drift could then favour the establishment of later immigrants, because their hybrid offspring would enjoy higher fitness. Rotifers are the only major zooplanktonic group for which information on inbreeding depression is still critically scarce, despite the fact that in cyclically parthenogenetic rotifers males are haploid and could purge deleterious recessive alleles, thereby decreasing inbreeding depression. We studied the effects of inbreeding in two populations of the cyclically parthenogenetic rotifer Brachionus plicatilis. For each population, we compared both the parental fertilization proportion and F1 fitness components from intraclonal (selfed) and interclonal (outcrossed) crosses. The parental fertilization proportion was similar for both types of crosses, suggesting that there is no mechanism to avoid selfing. In the F1 generation of both populations, we found evidence of inbreeding depression for the fitness components associated with asexual reproduction, whereas inbreeding depression was found for only one of the two sexual reproduction fitness components measured. Our results show that rotifers, like other major zooplanktonic groups, can be affected by inbreeding depression at different stages of their life cycle. These results suggest that haplodiploidy does not efficiently purge deleterious recessive alleles. The inbreeding depression detected here has important implications when a rotifer population is founded and intraclonal crossing is likely to occur. Thus, during the foundation of new populations, inbreeding depression may provide opportunities for new immigrants, increasing gene flow between populations and affecting genetic differentiation.

  4. Measuring multiple residual-stress components using the contour method and multiple cuts

    Prime, Michael B [Los Alamos National Laboratory; Swenson, Hunter [Los Alamos National Laboratory; Pagliaro, Pierluigi [U. PALERMO; Zuccarello, Bernardo [U. PALERMO

    2009-01-01

    The conventional contour method determines one component of stress over the cross section of a part. The part is cut into two, the contour of the exposed surface is measured, and Bueckner's superposition principle is analytically applied to calculate stresses. In this paper, the contour method is extended to the measurement of multiple stress components by making multiple cuts with subsequent applications of superposition. The theory and limitations are described. The theory is experimentally tested on a 316L stainless steel disk with residual stresses induced by plastically indenting the central portion of the disk. The stress results are validated against independent measurements using neutron diffraction. The theory has implications beyond just multiple cuts. The contour method measurements and calculations for the first cut reveal how the residual stresses have changed throughout the part. Subsequent measurements of partially relaxed stresses by other techniques, such as laboratory x-rays, hole drilling, or neutron or synchrotron diffraction, can be superimposed back to the original state of the body.

  5. Measurement of subcritical multiplication by the interval distribution method

    Nelson, G.W.

    1985-01-01

    The prompt decay constant or the subcritical neutron multiplication may be determined by measuring the distribution of the time intervals between successive neutron counts. The distribution data is analyzed by least-squares fitting to a theoretical distribution function derived from a point reactor probability model. Published results of measurements with one- and two-detector systems are discussed. Data collection times are shorter, and statistical errors are smaller the nearer the system is to delayed critical. Several of the measurements indicate that a shorter data collection time and higher accuracy are possible with the interval distribution method than with the Feynman variance method
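
    For the simplest limiting case of a bare, non-multiplying source, the intervals between counts are exponential and the rate is recovered from the mean interval; fission chains in a multiplying system add a second, faster component on top of this. A sketch of the simple case only:

```python
import random

random.seed(42)

# For a non-multiplying source the interval density is p(t) = r * exp(-r * t).
# Simulate counts at an assumed true rate and recover it from the data.
true_rate = 50.0                      # counts per second (illustrative)
intervals = [random.expovariate(true_rate) for _ in range(20000)]

# The maximum-likelihood estimate of the rate is 1 / (mean interval).
rate_hat = 1.0 / (sum(intervals) / len(intervals))
```

    In an actual subcritical measurement, the interval distribution would instead be least-squares fitted to the two-component form predicted by the point reactor probability model, from which the prompt decay constant is extracted.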

  6. A global calibration method for multiple vision sensors based on multiple targets

    Liu, Zhen; Zhang, Guangjun; Wei, Zhenzhong; Sun, Junhua

    2011-01-01

    The global calibration of multiple vision sensors (MVS) has been widely studied in the last two decades. In this paper, we present a global calibration method for MVS with non-overlapping fields of view (FOVs) using multiple targets (MT). The MT is constructed by rigidly fixing several targets, called sub-targets, together; the mutual coordinate transformations between sub-targets need not be known. The main procedure of the proposed method is as follows: one vision sensor is selected from the MVS to establish the global coordinate frame (GCF). The MT is placed in front of the vision sensors at several (at least four) positions. Using the constraint that the relative positions of all sub-targets are invariant, the transformation matrix from the coordinate frame of each vision sensor to the GCF can be solved. Both synthetic and real experiments are carried out and good results are obtained. The proposed method has been applied to several real measurement systems and shown to be both flexible and accurate. It can serve as an attractive alternative to existing global calibration methods.
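
    Transform chaining, which underlies this kind of global calibration, can be sketched in the simplified case where both sensors observe the same target at one placement; the MT method generalizes this to rigidly linked sub-targets and non-overlapping FOVs. Planar homogeneous transforms are used for brevity:

```python
import numpy as np

def rt(theta_deg, tx, ty):
    """Planar homogeneous transform (rotation + translation); enough to
    illustrate chaining, though the paper works with full 3-D poses."""
    t = np.radians(theta_deg)
    return np.array([[np.cos(t), -np.sin(t), tx],
                     [np.sin(t),  np.cos(t), ty],
                     [0.0, 0.0, 1.0]])

# Ground truth: pose of sensor 2 in the global frame (sensor 1's frame).
T_2_to_g = rt(30.0, 2.0, -1.0)

# One placement: each sensor observes the target pose in its own frame.
T_tgt_in_1 = rt(10.0, 0.5, 0.3)                       # seen by sensor 1
T_tgt_in_2 = np.linalg.inv(T_2_to_g) @ T_tgt_in_1     # same target seen by 2

# Chaining recovers the unknown sensor-to-global transform.
T_est = T_tgt_in_1 @ np.linalg.inv(T_tgt_in_2)
```

    With non-overlapping FOVs no single target is visible to both sensors at once, which is why the method instead exploits the invariant relative poses of the rigidly linked sub-targets across several placements.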

  7. Field evaluation of personal sampling methods for multiple bioaerosols.

    Wang, Chi-Hsun; Chen, Bean T; Han, Bor-Cheng; Liu, Andrew Chi-Yeu; Hung, Po-Chen; Chen, Chih-Yong; Chao, Hsing Jasmine

    2015-01-01

    Ambient bioaerosols are ubiquitous in the daily environment and can affect health in various ways. However, few studies have been conducted to comprehensively evaluate personal bioaerosol exposure in occupational and indoor environments because of the complex composition of bioaerosols and the lack of standardized sampling/analysis methods. We conducted a study to determine the most efficient collection/analysis method for the personal exposure assessment of multiple bioaerosols. The sampling efficiencies of three filters and four samplers were compared. According to our results, polycarbonate (PC) filters had the highest relative efficiency, particularly for bacteria. Side-by-side sampling was conducted to evaluate the three filter samplers (with PC filters) and the NIOSH Personal Bioaerosol Cyclone Sampler. According to the results, the Button Aerosol Sampler and the IOM Inhalable Dust Sampler had the highest relative efficiencies for fungi and bacteria, followed by the NIOSH sampler. Personal sampling was performed in a pig farm to assess occupational bioaerosol exposure and to evaluate the sampling/analysis methods. The Button and IOM samplers yielded a similar performance for personal bioaerosol sampling at the pig farm. However, the Button sampler is more likely to be clogged at high airborne dust concentrations because of its higher flow rate (4 L/min). Therefore, the IOM sampler is a more appropriate choice for performing personal sampling in environments with high dust levels. In summary, the Button and IOM samplers with PC filters are efficient sampling/analysis methods for the personal exposure assessment of multiple bioaerosols.

  8. Field evaluation of personal sampling methods for multiple bioaerosols.

    Chi-Hsun Wang

    Ambient bioaerosols are ubiquitous in the daily environment and can affect health in various ways. However, few studies have been conducted to comprehensively evaluate personal bioaerosol exposure in occupational and indoor environments because of the complex composition of bioaerosols and the lack of standardized sampling/analysis methods. We conducted a study to determine the most efficient collection/analysis method for the personal exposure assessment of multiple bioaerosols. The sampling efficiencies of three filters and four samplers were compared. According to our results, polycarbonate (PC) filters had the highest relative efficiency, particularly for bacteria. Side-by-side sampling was conducted to evaluate the three filter samplers (with PC filters) and the NIOSH Personal Bioaerosol Cyclone Sampler. According to the results, the Button Aerosol Sampler and the IOM Inhalable Dust Sampler had the highest relative efficiencies for fungi and bacteria, followed by the NIOSH sampler. Personal sampling was performed in a pig farm to assess occupational bioaerosol exposure and to evaluate the sampling/analysis methods. The Button and IOM samplers yielded a similar performance for personal bioaerosol sampling at the pig farm. However, the Button sampler is more likely to be clogged at high airborne dust concentrations because of its higher flow rate (4 L/min). Therefore, the IOM sampler is a more appropriate choice for performing personal sampling in environments with high dust levels. In summary, the Button and IOM samplers with PC filters are efficient sampling/analysis methods for the personal exposure assessment of multiple bioaerosols.

  9. Hesitant fuzzy methods for multiple criteria decision analysis

    Zhang, Xiaolu

    2017-01-01

The book offers a comprehensive introduction to methods for solving multiple criteria decision making and group decision making problems with hesitant fuzzy information. It reports on the authors’ latest research, as well as on others’ research, providing readers with a complete set of decision making tools, such as hesitant fuzzy TOPSIS, hesitant fuzzy TODIM, hesitant fuzzy LINMAP, hesitant fuzzy QUALIFLEX, and the deviation modeling approach with heterogeneous fuzzy information. The main focus is on decision making problems in which the criteria values and/or the weights of criteria are not expressed in crisp numbers but are more suitably denoted as hesitant fuzzy elements. The largest part of the book is devoted to new methods recently developed by the authors to solve decision making problems in situations where the available information is vague or hesitant. These methods are presented in detail, together with their application to different types of decision-making problems. All in all, the book ...
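As a concrete illustration of the hesitant fuzzy elements mentioned above: an HFE is a finite set of possible membership degrees in [0, 1], and comparing two HFEs requires a distance measure. The sketch below is not taken from the book; it implements one commonly used convention (sort both HFEs, pessimistically extend the shorter one by repeating its minimum, then average the absolute differences), and all function names are illustrative.

```python
def extend(h, n, optimism=0.0):
    """Extend HFE h to length n by repeating its min (pessimistic) or max (optimistic)."""
    h = sorted(h)
    filler = max(h) if optimism >= 0.5 else min(h)
    return sorted(h + [filler] * (n - len(h)))

def hfe_distance(h1, h2):
    """Normalized Hamming-style distance between two hesitant fuzzy elements."""
    n = max(len(h1), len(h2))
    a, b = extend(h1, n), extend(h2, n)
    return sum(abs(x - y) for x, y in zip(a, b)) / n

# Two experts hesitate between several membership degrees for one criterion:
d = hfe_distance([0.2, 0.4, 0.5], [0.3, 0.6])
print(round(d, 4))  # prints 0.1
```

Distances like this are the building block of the hesitant fuzzy TOPSIS-style rankings surveyed in the book, where each alternative is scored by its distance to ideal and anti-ideal points.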

  10. Correlation expansion: a powerful alternative multiple scattering calculation method

    Zhao Haifeng; Wu Ziyu; Sebilleau, Didier

    2008-01-01

We introduce a powerful alternative expansion method to perform multiple scattering calculations. In contrast to the standard MS series expansion, in which the scattering contributions are grouped by scattering order and may diverge in the low-energy region, this expansion, called the correlation expansion, partitions the scattering process into contributions from different small atom groups and converges at all energies. It converges faster than the MS series expansion when the latter is convergent. Furthermore, it requires less memory than the full MS method, so it can be used in the near-edge region without any divergence problem, even for large clusters. The correlation expansion framework we derive here is very general and can serve to calculate all the elements of the scattering path operator matrix. Photoelectron diffraction calculations in a cluster containing 23 atoms are presented to test the method and compare it with full MS and standard MS series expansion results.

  11. Does Haplodiploidy Purge Inbreeding Depression in Rotifer Populations?

    Tortajada, Ana M.; Carmona, María José; Serra, Manuel

    2009-01-01

Background Inbreeding depression is an important evolutionary factor, particularly when new habitats are colonized by few individuals. Then, inbreeding depression by drift could favour the establishment of later immigrants because their hybrid offspring would enjoy higher fitness. Rotifers are the only major zooplanktonic group where information on inbreeding depression is still critically scarce, despite the fact that in cyclically parthenogenetic rotifers males are haploid and could purge deleterious recessive alleles, thereby decreasing inbreeding depression. Methodology/Principal Findings We studied the effects of inbreeding in two populations of the cyclically parthenogenetic rotifer Brachionus plicatilis. For each population, we compared both the parental fertilization proportion and F1 fitness components from intraclonal (selfed) and interclonal (outcrossed) crosses. The parental fertilization proportion was similar for both types of crosses, suggesting that there is no mechanism to avoid selfing. In the F1 generation of both populations, we found evidence of inbreeding depression for the fitness components associated with asexual reproduction, whereas inbreeding depression was found for only one of the two sexual reproduction fitness components measured. Conclusions/Significance Our results show that rotifers, like other major zooplanktonic groups, can be affected by inbreeding depression at different stages of their life cycle. These results suggest that haplodiploidy does not efficiently purge deleterious recessive alleles. The inbreeding depression detected here has important implications when a rotifer population is founded and intraclonal crossing is likely to occur. Thus, during the foundation of new populations, inbreeding depression may provide opportunities for new immigrants, increasing gene flow between populations and affecting genetic differentiation. PMID:19997616

  12. Multiple predictor smoothing methods for sensitivity analysis: Description of techniques

    Storlie, Curtis B.; Helton, Jon C.

    2008-01-01

The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. Then, in the second and concluding part of this presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.
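To make the first of the four techniques concrete, here is a minimal, stdlib-only sketch of locally weighted regression (LOESS): for each query point, nearby samples are weighted with the tricube kernel over the k nearest neighbours and a weighted least-squares line is fitted. This is a simplified illustration, not the authors' implementation, and the function names and the `frac` parameter are assumptions.

```python
def loess_at(x0, xs, ys, frac=0.5):
    """Locally weighted linear fit evaluated at x0 (simplified LOESS)."""
    n = len(xs)
    k = max(2, int(frac * n))
    # distance to the k-th nearest neighbour sets the local bandwidth
    d = sorted(abs(x - x0) for x in xs)[k - 1] or 1.0
    # tricube weights: points beyond the bandwidth get weight zero
    w = [max(0.0, 1 - (abs(x - x0) / d) ** 3) ** 3 for x in xs]
    # weighted least squares for a local line y = a + b*x
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, xs)) / sw
    my = sum(wi * yi for wi, yi in zip(w, ys)) / sw
    cov = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, xs, ys))
    var = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, xs))
    b = cov / var if var else 0.0
    a = my - b * mx
    return a + b * x0

xs = [i / 10 for i in range(11)]
ys = [x * x for x in xs]  # smooth nonlinear test signal
print(round(loess_at(0.5, xs, ys), 3))
```

In a sensitivity analysis, a smoother like this is fitted for each model input in turn, and the reduction in residual variance indicates how strongly that input drives the model prediction, even when the relationship is nonlinear.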

  13. Multiple predictor smoothing methods for sensitivity analysis: Example results

    Storlie, Curtis B.; Helton, Jon C.

    2008-01-01

The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described in the first part of this presentation: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. In this, the second and concluding part of the presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.

  14. Integrating Multiple Teaching Methods into a General Chemistry Classroom

    Francisco, Joseph S.; Nicoll, Gayle; Trautmann, Marcella

    1998-02-01

    In addition to the traditional lecture format, three other teaching strategies (class discussions, concept maps, and cooperative learning) were incorporated into a freshman level general chemistry course. Student perceptions of their involvement in each of the teaching methods, as well as their perceptions of the utility of each method were used to assess the effectiveness of the integration of the teaching strategies as received by the students. Results suggest that each strategy serves a unique purpose for the students and increased student involvement in the course. These results indicate that the multiple teaching strategies were well received by the students and that all teaching strategies are necessary for students to get the most out of the course.

  15. Fuzzy multiple objective decision making methods and applications

    Lai, Young-Jou

    1994-01-01

    In the last 25 years, the fuzzy set theory has been applied in many disciplines such as operations research, management science, control theory, artificial intelligence/expert system, etc. In this volume, methods and applications of crisp, fuzzy and possibilistic multiple objective decision making are first systematically and thoroughly reviewed and classified. This state-of-the-art survey provides readers with a capsule look into the existing methods, and their characteristics and applicability to analysis of fuzzy and possibilistic programming problems. To realize practical fuzzy modelling, it presents solutions for real-world problems including production/manufacturing, location, logistics, environment management, banking/finance, personnel, marketing, accounting, agriculture economics and data analysis. This book is a guided tour through the literature in the rapidly growing fields of operations research and decision making and includes the most up-to-date bibliographical listing of literature on the topi...

  16. Purge Procedures and Leak Testing for the Morgan Breathing System (MBS) 2000 Closed-Circuit Oxygen Rebreather

    Fothergill, David

    2005-01-01

    .... Since purging accounts for most of the O2 used when breathing on the MBS 2000, optimizing the purge procedure will maximize the duration of the O2 supply and minimize oxygen leaks into the chamber atmosphere...

  17. Flow characteristics analysis of purge gas in unitary pebble beds by CFD simulation coupled with DEM geometry model for fusion blanket

    Chen, Youhua [University of Science and Technology of China, Hefei, Anhui, 230027 (China); Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui, 230031 (China); Chen, Lei [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui, 230031 (China); Liu, Songlin, E-mail: slliu@ipp.ac.cn [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui, 230031 (China); Luo, Guangnan [University of Science and Technology of China, Hefei, Anhui, 230027 (China); Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui, 230031 (China)

    2017-01-15

Highlights: • A unitary pebble bed was built to analyze the flow characteristics of purge gas based on the DEM-CFD method. • Flow characteristics between particles were clearly displayed. • Porosity distribution, velocity field distribution, pressure field distribution, pressure drop and the wall effects on velocity distribution were studied. - Abstract: Helium is used as the purge gas to sweep tritium out as it flows through the lithium ceramic and beryllium pebble beds in the solid breeder blanket of a fusion reactor. The flow characteristics of the purge gas dominate the tritium sweep capability and the design of the tritium recovery system. In this paper, a computational model of a unitary pebble bed was constructed using the DEM-CFD method to study the purge gas flow characteristics in the bed, including the porosity distribution between pebbles, the velocity field distribution, the pressure field distribution, the pressure drop, and the wall effects on the velocity distribution. Large fluctuations in bed porosity and velocity were found in the near-wall region, and detailed flow characteristics between pebbles were displayed clearly. The results show that the numerical simulation model estimates the pressure drop with an error of about 11% relative to the Ergun equation.

  18. The importance of neurophysiological-Bobath method in multiple sclerosis

    Adrian Miler

    2018-02-01

Rehabilitation treatment in multiple sclerosis should be carried out continuously and can take place in hospital, ambulatory, or community settings. In the traditional approach, it focuses on reducing the symptoms of the disease, such as paresis, spasticity, ataxia, pain, sensory disturbances, speech disorders, blurred vision, fatigue, neurogenic bladder dysfunction, and cognitive impairment. In kinesiotherapy for people with paresis, the Bobath method is among the most commonly used. Improvement can be achieved by developing the ability to maintain a correct posture in various positions (so-called postural alignment), using patterns based on corrective and equivalent responses. During the therapy, various techniques are used to inhibit pathological motor patterns and to stimulate reactions. The creators of the method believe that each movement pattern has its own postural system, from which it can be initiated, carried out and effectively controlled. Correct movement cannot take place in an incorrect body position. The physiotherapist discusses with the patient how to perform individual movement patterns, which protects the patient against spontaneous pathological compensation. The aim of this work is to determine the meaning and application of the Bobath method in the therapy of people with MS.

  19. Acoustic scattering by multiple elliptical cylinders using collocation multipole method

    Lee, Wei-Ming

    2012-01-01

    This paper presents the collocation multipole method for the acoustic scattering induced by multiple elliptical cylinders subjected to an incident plane sound wave. To satisfy the Helmholtz equation in the elliptical coordinate system, the scattered acoustic field is formulated in terms of angular and radial Mathieu functions which also satisfy the radiation condition at infinity. The sound-soft or sound-hard boundary condition is satisfied by uniformly collocating points on the boundaries. For the sound-hard or Neumann conditions, the normal derivative of the acoustic pressure is determined by using the appropriate directional derivative without requiring the addition theorem of Mathieu functions. By truncating the multipole expansion, a finite linear algebraic system is derived and the scattered field can then be determined according to the given incident acoustic wave. Once the total field is calculated as the sum of the incident field and the scattered field, the near field acoustic pressure along the scatterers and the far field scattering pattern can be determined. For the acoustic scattering of one elliptical cylinder, the proposed results match well with the analytical solutions. The proposed scattered fields induced by two and three elliptical–cylindrical scatterers are critically compared with those provided by the boundary element method to validate the present method. Finally, the effects of the convexity of an elliptical scatterer, the separation between scatterers and the incident wave number and angle on the acoustic scattering are investigated.

  20. An investigation of the joint longitudinal trajectories of low body weight, binge eating, and purging in women with anorexia nervosa and bulimia nervosa

    Lavender, Jason M.; De Young, Kyle P.; Franko, Debra L.; Eddy, Kamryn T.; Kass, Andrea E.; Sears, Meredith S.; Herzog, David B.

    2015-01-01

    Objectives To describe the longitudinal course of three core eating disorder symptoms – low body weight, binge eating, and purging – in women with anorexia nervosa (AN) and bulimia nervosa (BN) using a novel statistical approach. Method Treatment-seeking women with AN (n=136) or BN (n=110) completed the Eating Disorders Longitudinal Interval Follow-Up Evaluation interview every six months, yielding weekly eating disorder symptom data for a five-year period. Semi-parametric mixture modeling was used to identify longitudinal trajectories for the three core symptoms. Results Four individual trajectories were identified for each eating disorder symptom. The number and general shape of the individual trajectories was similar across symptoms, with each model including trajectories depicting stable absence and stable presence of symptoms as well as one or more trajectories depicting the declining presence of symptoms. Unique trajectories were found for low body weight (fluctuating presence) and purging (increasing presence). Conjunction analyses yielded the following joint trajectories: low body weight and binge eating, low body weight and purging, and binge eating and purging. Conclusions The course of individual eating disorder symptoms among patients with AN and BN is highly variable. Future research identifying clinical predictors of trajectory membership may inform treatment and nosological research. PMID:22072404

  1. Purged window apparatus. [On-line spectroscopic analysis of gas flow systems

    Ballard, E.O.

    1982-04-05

A purged window apparatus is described which uses tangentially injected, heated purge gases near electromagnetic-radiation-transmitting windows, together with a tapered external mounting tube that accelerates these gases to produce a vortex flow on the window surface and turbulent flow throughout the mounting tube. This prevents backstreaming of the flowing gases under investigation in a chamber to which several such purged apparatuses are attached, so that spectroscopic analyses can be undertaken for lengthy periods without the necessity of interrupting the flow to clean or replace windows contaminated during operation.

  2. Multiple instance learning tracking method with local sparse representation

    Xie, Chengjun

    2013-10-01

When objects undergo large pose changes, illumination variation or partial occlusion, most existing visual tracking algorithms tend to drift away from targets and even fail to track them. To address this issue, in this study, the authors propose an online algorithm that combines multiple instance learning (MIL) and local sparse representation for tracking an object in a video system. The key idea in our method is to model the appearance of an object by local sparse codes that can be formed as training data for the MIL framework. First, local image patches of a target object are represented as sparse codes with an overcomplete dictionary, where the adaptive representation can be helpful in overcoming partial occlusion in object tracking. Then MIL learns the sparse codes by a classifier to discriminate the target from the background. Finally, results from the trained classifier are input into a particle filter framework to sequentially estimate the target state over time in visual tracking. In addition, to decrease the visual drift caused by accumulated errors when updating the dictionary and classifier, a two-step object tracking method combining a static MIL classifier with a dynamic MIL classifier is proposed. Experiments on some publicly available video sequence benchmarks show that our proposed tracker is more robust and effective than others. © The Institution of Engineering and Technology 2013.

  3. Characterization of the Three Mile Island Unit-2 reactor building atmosphere prior to the reactor building purge

    Hartwell, J.K.; Mandler, J.W.; Duce, S.W.; Motes, B.G.

    1981-05-01

The Three Mile Island Unit-2 reactor building atmosphere was sampled prior to the reactor building purge. Samples of the containment atmosphere were obtained using specialized sampling equipment installed through penetration R-626 at the 358-foot (109-meter) level of the TMI-2 reactor building. The samples were subsequently analyzed for radionuclide concentrations and for gaseous molecular components (O2, N2, etc.) by two independent laboratories at the Idaho National Engineering Laboratory (INEL). The sampling procedures, analysis methods, and results are summarized.

  4. Determination of C6-C10 aromatic hydrocarbons in water by purge-and-trap capillary gas chromatography

    Eganhouse, R.P.; Dorsey, T.F.; Phinney, C.S.; Westcott, A.M.

    1993-01-01

    A method is described for the determination of the C6-C10 aromatic hydrocarbons in water based on purge-and-trap capillary gas chromatography with flame ionization and mass spectrometric detection. Retention time data and 70 eV mass spectra were obtained for benzene and all 35 C7-C10 aromatic hydrocarbons. With optimized chromatographic conditions and mass spectrometric detection, benzene and 33 of the 35 alkylbenzenes can be identified and measured in a 45-min run. Use of a flame ionization detector permits the simultaneous determination of benzene and 26 alkylbenzenes.

  5. A novel surgical strategy for secondary hyperparathyroidism: Purge parathyroidectomy.

    Shan, Cheng-Xiang; Qiu, Nian-Cun; Zha, Si-Luo; Liu, Miao-E; Wang, Qiang; Zhu, Pei-Pei; Du, Zhi-Peng; Xia, Chun-Yan; Qiu, Ming; Zhang, Wei

    2017-07-01

This study was intended to demonstrate the feasibility and efficacy of purge parathyroidectomy (PPTX) for patients with secondary hyperparathyroidism (SHPT). The "seed, environment, and soil" medical hypothesis was first raised, following a review of the literature, to explain the possible causes of persistence or recurrence of SHPT after parathyroidectomy. Subsequently, the novel surgical strategy of PPTX was proposed, which involves comprehensive resection of the fibro-fatty tissues, including visible or invisible parathyroid tissue, within the region surrounded by the thyroid cartilage, the bilateral carotid artery sheaths, and the brachiocephalic artery. The perioperative information and clinical outcomes of patients who underwent PPTX from June 2016 to December 2016 were analyzed. In total, PPTX was performed safely in nine patients with SHPT from June 2016 to December 2016. The operative time for PPTX ranged from 95 to 135 min, and blood loss ranged from 20 to 40 mL. No perioperative deaths, bleeding, convulsions, or recurrent laryngeal nerve injuries were reported. The preoperative concentration of PTH ranged from 1062 to 2879 pg/mL, and from 12.35 to 72.69 pg/mL on the first day after surgery. In total, 37 parathyroid glands were resected. Postoperative pathologic examination showed that supernumerary or ectopic parathyroid tissues were found within the "non-parathyroid" tissues in three patients. No cases of persistent or recurrent SHPT, or of severe hypocalcemia, occurred during the follow-up period. PPTX involves comprehensive resection of supernumerary and ectopic parathyroid tissues, which may provide a more permanent means of reducing PTH levels. Copyright © 2017. Published by Elsevier Ltd.

  6. Distinguishing Between Risk Factors for Bulimia Nervosa, Binge Eating Disorder, and Purging Disorder.

    Allen, Karina L; Byrne, Susan M; Crosby, Ross D

    2015-08-01

    Binge eating disorder and purging disorder have gained recognition as distinct eating disorder diagnoses, but risk factors for these conditions have not yet been established. This study aimed to evaluate a prospective, mediational model of risk for the full range of binge eating and purging eating disorders, with attention to possible diagnostic differences. Specific aims were to determine, first, whether eating, weight and shape concerns at age 14 would mediate the relationship between parent-perceived childhood overweight at age 10 and a binge eating or purging eating disorder between age 15 and 20, and, second, whether this mediational model would differ across bulimia nervosa, binge eating disorder, and purging disorder. Participants (N = 1,160; 51 % female) were drawn from the Western Australian Pregnancy Cohort (Raine) Study, which has followed children from pre-birth to age 20. Eating disorders were assessed via self-report questionnaires when participants were aged 14, 17 and 20. There were 146 participants (82 % female) with a binge eating or purging eating disorder with onset between age 15 and 20 [bulimia nervosa = 81 (86 % female), binge eating disorder = 43 (74 % female), purging disorder = 22 (77 % female)]. Simple mediation analysis with bootstrapping was used to test the hypothesized model of risk, with early adolescent eating, weight and shape concerns positioned as a mediator between parent-perceived childhood overweight and later onset of a binge eating or purging eating disorder. Subsequently, a conditional process model (a moderated mediation model) was specified to determine if model pathways differed significantly by eating disorder diagnosis. In the simple mediation model, there was a significant indirect effect of parent-perceived childhood overweight on risk for a binge eating or purging eating disorder in late adolescence, mediated by eating, weight and shape concerns in early adolescence. In the conditional process model

  7. The loss of essential oil components induced by the Purge Time in the Pressurized Liquid Extraction (PLE) procedure of Cupressus sempervirens.

    Dawidowicz, Andrzej L; Czapczyńska, Natalia B; Wianowska, Dorota

    2012-05-30

The influence of different Purge Times on the effectiveness of Pressurized Liquid Extraction (PLE) of volatile oil components from the cypress plant matrix (Cupressus sempervirens) was investigated, applying solvents of diverse extraction efficiencies. The obtained results show a decrease in the mass yields of essential oil components as a result of increased Purge Time. The loss of extracted components depends on the extractant type: the greatest mass yield loss occurred in the case of non-polar solvents, whereas the smallest was found in polar extracts. Comparisons of the PLE method with the Sea Sand Disruption Method (SSDM), the Matrix Solid-Phase Dispersion (MSPD) method and Steam Distillation (SD) were performed to assess the method's accuracy. Independent of the solvent and Purge Time applied in the PLE process, the total mass yield was lower than that obtained with the simple, short and relatively cheap low-temperature matrix disruption procedures, MSPD and SSDM. Thus, in the analysis of volatile oils, the application of these methods is advisable. Copyright © 2012 Elsevier B.V. All rights reserved.

  8. Method and system for gas flow mitigation of molecular contamination of optics

    Delgado, Gildardo; Johnson, Terry; Arienti, Marco; Harb, Salam; Klebanoff, Lennie; Garcia, Rudy; Tahmassebpur, Mohammed; Scott, Sarah

    2018-01-23

A computer-implemented method for determining an optimized purge gas flow in a semiconductor inspection, metrology, or lithography apparatus. The method comprises: receiving a permissible contaminant mole fraction, a contaminant outgassing flow rate associated with a contaminant, a contaminant mass diffusivity, an outgassing surface length, a pressure, a temperature, a channel height, and a molecular weight of a purge gas; calculating a flow factor based on the permissible contaminant mole fraction, the contaminant outgassing flow rate, the channel height, and the outgassing surface length; comparing the flow factor to a predefined maximum flow factor value; calculating a minimum purge gas velocity and a purge gas mass flow rate from the flow factor, the contaminant mass diffusivity, the pressure, the temperature, and the molecular weight of the purge gas; and introducing the purge gas into the apparatus with the minimum purge gas velocity and the purge gas mass flow rate.
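The patented procedure is essentially a short sequence of arithmetic checks. The sketch below mirrors that decision flow only; the patent abstract does not give the actual expressions, so the flow-factor and minimum-velocity formulas here (a dilution-load ratio and a Péclet-style advection-vs-diffusion criterion) and all names are assumptions for illustration.

```python
def purge_design(y_max, q_out, D, L, p, T, h, M, flow_factor_max=1.0):
    """Sketch of the patented decision flow; the formulas are stand-ins.

    y_max : permissible contaminant mole fraction
    q_out : contaminant outgassing molar flow rate [mol/s]
    D     : contaminant mass diffusivity [m^2/s]
    L     : outgassing surface length [m]
    p, T  : pressure [Pa] and temperature [K]
    h     : channel height [m]
    M     : purge gas molecular weight [kg/mol]
    """
    R = 8.314  # gas constant, J/(mol K)
    # Hypothetical flow factor: outgassing load per unit dilution capacity.
    flow_factor = q_out * L / (y_max * h)
    if flow_factor > flow_factor_max:
        raise ValueError("flow factor exceeds predefined maximum; redesign channel")
    # Hypothetical minimum velocity: advection must beat back-diffusion
    # of the contaminant over the channel height.
    v_min = flow_factor * D / h
    rho = p * M / (R * T)      # ideal-gas purge density [kg/m^3]
    mdot = rho * v_min * h     # mass flow rate per unit channel width [kg/(s m)]
    return v_min, mdot

v, m = purge_design(1e-9, 1e-12, 1e-5, 0.1, 101325.0, 293.0, 0.01, 0.028)
print(v, m)
```

The point of the structure, which does follow the claim, is that the flow factor acts as a gatekeeper: only if it passes the predefined maximum are the minimum velocity and mass flow rate computed and applied.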

  9. An efficient method for generalized linear multiplicative programming problem with multiplicative constraints.

    Zhao, Yingfeng; Liu, Sanyang

    2016-01-01

We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation programming problem, which is equivalent to a linear program, is constructed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving a series of linear relaxation programming problems. Global convergence is proved, and results on some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient.
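The branch-and-bound idea behind such algorithms can be illustrated on a toy one-dimensional linear multiplicative program. The sketch below is not the paper's method: instead of the two-phase linear relaxation, it uses plain interval arithmetic to lower-bound the product of two linear factors on a subinterval, prunes subintervals whose bound cannot beat the incumbent, and bisects the rest.

```python
def minimize_product(lin1, lin2, lo, hi, tol=1e-6):
    """Globally minimize (a1*x+b1)*(a2*x+b2) on [lo, hi] by branch and bound."""
    def f(x):
        (a1, b1), (a2, b2) = lin1, lin2
        return (a1 * x + b1) * (a2 * x + b2)

    def bound(l, u):
        # Relaxation step: each linear factor is monotone, so its range on
        # [l, u] is spanned by its endpoint values; a valid lower bound on
        # the product is the minimum over endpoint combinations.
        (a1, b1), (a2, b2) = lin1, lin2
        r1 = (a1 * l + b1, a1 * u + b1)
        r2 = (a2 * l + b2, a2 * u + b2)
        return min(x * y for x in r1 for y in r2)

    best = min(f(lo), f(hi), f((lo + hi) / 2))  # incumbent upper bound
    stack = [(lo, hi)]
    while stack:
        l, u = stack.pop()
        if bound(l, u) >= best - tol:
            continue  # prune: this box cannot beat the incumbent
        m = (l + u) / 2
        best = min(best, f(m))  # tighten the upper bound
        stack += [(l, m), (m, u)]  # branch by bisection
    return best

# minimize (x - 1)*(x + 2) on [-3, 3]; the exact minimum is -2.25 at x = -0.5
print(round(minimize_product((1, -1), (1, 2), -3, 3), 4))
```

As in the paper's scheme, the lower bounds come from a relaxation that is cheap to evaluate, the upper bounds from feasible points, and the two converge as the boxes shrink.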

  10. Review of Monte Carlo methods for particle multiplicity evaluation

    Armesto-Pérez, Nestor

    2005-01-01

    I present a brief review of the existing models for particle multiplicity evaluation in heavy ion collisions which are at our disposal in the form of Monte Carlo simulators. Models are classified according to the physical mechanisms with which they try to describe the different stages of a high-energy collision between heavy nuclei. A comparison of predictions, as available at the beginning of year 2000, for multiplicities in central AuAu collisions at the BNL Relativistic Heavy Ion Collider (RHIC) and PbPb collisions at the CERN Large Hadron Collider (LHC) is provided.

  11. Review of Monte Carlo methods for particle multiplicity evaluation

    Armesto, Nestor

    2005-01-01

I present a brief review of the existing models for particle multiplicity evaluation in heavy ion collisions which are at our disposal in the form of Monte Carlo simulators. Models are classified according to the physical mechanisms with which they try to describe the different stages of a high-energy collision between heavy nuclei. A comparison of predictions, as available at the beginning of year 2000, for multiplicities in central AuAu collisions at the BNL Relativistic Heavy Ion Collider (RHIC) and PbPb collisions at the CERN Large Hadron Collider (LHC) is provided.

  12. A comparison of confirmatory factor analysis methods : Oblique multiple group method versus confirmatory common factor method

    Stuive, Ilse

    2007-01-01

Confirmatory Factor Analysis (CFA) is a frequently used method when researchers have a specific hypothesis about the assignment of items to one or more subtests and want to investigate whether this assignment is also supported by the collected research data. The most frequently used ...

  13. An Exact Method for the Double TSP with Multiple Stacks

    Lusby, Richard Martin; Larsen, Jesper; Ehrgott, Matthias

    2010-01-01

    The double travelling salesman problem with multiple stacks (DTSPMS) is a pickup and delivery problem in which all pickups must be completed before any deliveries can be made. The problem originates from a real-life application where a 40 foot container (configured as 3 columns of 11 rows) is used...

  14. An Exact Method for the Double TSP with Multiple Stacks

    Larsen, Jesper; Lusby, Richard Martin; Ehrgott, Matthias

    The double travelling salesman problem with multiple stacks (DTSPMS) is a pickup and delivery problem in which all pickups must be completed before any deliveries can be made. The problem originates from a real-life application where a 40 foot container (configured as 3 columns of 11 rows) is used...

  15. Search Strategy of Detector Position For Neutron Source Multiplication Method by Using Detected-Neutron Multiplication Factor

    Endo, Tomohiro

    2011-01-01

In this paper, an alternative definition of a neutron multiplication factor, the detected-neutron multiplication factor kdet, is introduced for the neutron source multiplication method (NSM). Using kdet, a strategy for finding an appropriate detector position for NSM is also proposed. The NSM is one of the practical subcritical measurement techniques: it does not require any special equipment other than a stationary external neutron source and an ordinary neutron detector. Additionally, the NSM is based on steady-state analysis, so the technique is very suitable for quasi-real-time measurement. It is noted that correction factors play important roles in accurately estimating subcriticality from the measured neutron count rates. The present paper aims to clarify how to correct the subcriticality measured by the NSM, the physical meaning of the correction factors, and how to reduce the impact of the correction factors by setting the neutron detector at an appropriate position.
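The core relation behind NSM can be illustrated numerically. The sketch below assumes the textbook point-model form, where the steady-state count rate is proportional to the source multiplication M = 1/(1 - k_eff), and deliberately omits the correction factors the paper is about, so it is an idealized baseline rather than the paper's kdet formulation; the function name and example numbers are invented.

```python
def keff_from_count_rates(c_ref, k_ref, c_meas):
    """Uncorrected point-model NSM estimate of k_eff from steady-state count rates.

    c_ref  : neutron count rate at a reference state [counts/s]
    k_ref  : known k_eff of that reference state
    c_meas : count rate at the unknown state, same source and detector [counts/s]

    Point model: C is proportional to M = 1 / (1 - k_eff), so the product
    (1 - k_eff) * C is the same at both states.
    """
    return 1.0 - (1.0 - k_ref) * c_ref / c_meas

# A reference state with k_eff = 0.95 gives 1000 counts/s; the unknown state
# gives 2500 counts/s with the same external source and detector arrangement.
k = keff_from_count_rates(1000.0, 0.95, 2500.0)
print(round(k, 3))  # prints 0.98
```

In practice the detector response depends on the neutron flux shape at the detector position, which is why the paper's correction factors, and the choice of detector position that minimizes their impact, matter for accuracy.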

  16. The Initial Purging Policies after the 1965 Incident at Lubang Buaya

    Yosef Djakababa

    2013-01-01

After the Lubang Buaya incident on 1 October 1965, in which six top Indonesian Army generals and a lieutenant were killed, the Army began to implement a nationwide purging campaign with the assistance of civilian anti-communist groups. Thousands of PKI members, supporters and pro-Sukarno groups and individuals immediately became the target of this purge. For organisational purposes, several purging policies were released and then strictly enforced. The official purging policies highlighted in this paper are a series of initial directives that were released within days of the generals’ executions. They do not explicitly translate into orders to kill, but are more of a guideline to help anti-communist officials classify and contain communists and other PKI followers. This article attempts to show how these initial directives evolved and also discusses competing purge policies from non-military sources. The co-existence and overlapping nature of the various directives indicate that a power struggle existed between the anti-communist group led by General Soeharto and the presidium of the Dwikora Cabinet, who were loyal to President Soekarno.

  17. Characteristic regional cerebral blood flow patterns in anorexia nervosa patients with binge/purge behavior.

    Naruo, T; Nakabeppu, Y; Sagiyama, K; Munemoto, T; Homan, N; Deguchi, D; Nakajo, M; Nozoe, S

    2000-09-01

    The authors' goal was to investigate the effect of imagining food on the regional cerebral blood flow (rCBF) of anorexia nervosa patients with and without habitual binge/purge behavior. The subjects included seven female patients with purely restrictive anorexia, seven female patients with anorexia and habitual binge/purge behavior, and seven healthy women. Single photon emission computed tomography examination was performed before and after the subjects were asked to imagine food. Changes in rCBF count ratios (percent change) were then calculated and compared. The subjects were also asked to assess their degree of fear regarding their control of food intake. The anorexia nervosa patients with habitual binge/purge behavior had a significantly higher percent change in the inferior, superior, prefrontal, and parietal regions of the right brain than the patients with purely restrictive anorexia and the healthy volunteers. The patients with habitual binge/purge behavior also had the highest level of apprehension in regard to food intake. Specific activation in cortical regions suggests an association between habitual binge/purge behavior and the food recognition process linked to anxiety in patients with anorexia nervosa.

  18. Acceptance/operational test procedure 101-AW tank camera purge system and 101-AW video camera system

    Castleberry, J.L.

    1994-01-01

    This procedure will document the satisfactory operation of the 101-AW Tank Camera Purge System (CPS) and the 101-AW Video Camera System. The safety interlock, which shuts down all the electronics inside the 101-AW vapor space during loss of purge pressure, will be in place and tested to ensure reliable performance. This procedure is separated into four sections. Section 6.1 is performed in the 306 building prior to delivery to the 200 East Tank Farms and involves leak checking all fittings on the 101-AW Purge Panel with a Snoop solution and resolving any leaks. Section 7.1 verifies that PR-1, the regulator which maintains a positive pressure within the volume (cameras and pneumatic lines), is properly set. In addition, the green light (PRESSURIZED), located on the Purge Control Panel, is verified to turn on above 10 in. w.g. and after the time delay relay (TDR) has timed out. Section 7.2 verifies that the purge cycle functions properly, the red light (PURGE ON) comes on, and the correct flowrate is obtained to meet the requirements of the National Fire Protection Association. Section 7.3 verifies that the pan and tilt, camera, and associated controls and components operate correctly. This section also verifies that the safety interlock system operates correctly during loss of purge pressure. During loss of purge, the illumination of the amber light (PURGE FAILED) will be verified.

  19. Binge Eating, Purging, or Both: Eating Disorder Psychopathology Findings from an Internet Community Survey

    Roberto, Christina A.; Grilo, Carlos M.; Masheb, Robin M.; White, Marney A.

    2010-01-01

    Objective This study aimed to compare bulimia nervosa (BN), binge eating disorder (BED), and purging disorder (PD) on clinically significant variables and examine the utility of once- versus twice-weekly diagnostic thresholds for disturbed eating behaviors. Method 234 women with BN, BED, or PD were identified through self-report measures via an online survey and categorized based on either once-weekly or twice-weekly disturbed eating behaviors. Results BN emerged as a more severe disorder than BED and PD. The three groups differed significantly in self-reported restraint and disinhibition, and the BN and BED groups reported higher levels of depression than the PD group. For BN, those engaging in behaviors twice weekly versus once weekly were more symptomatic. Discussion The BN, BED, and PD groups differed in clinically meaningful ways. Future research needs to clarify the relationship between mood disturbances and eating behaviors. Reducing the twice-weekly behavior threshold for BN would capture individuals with clinically significant eating disorders, though the twice-weekly threshold may provide important information about disorder severity for both BN and BED. PMID:19862702

  20. Lamotrigine use in patients with binge eating and purging, significant affect dysregulation, and poor impulse control.

    Trunko, Mary Ellen; Schwartz, Terry A; Marzola, Enrica; Klein, Angela S; Kaye, Walter H

    2014-04-01

    Some patients with symptoms of binge eating and purging are successfully treated with specific serotonin reuptake inhibitors (SSRIs), but others experience only partial or no benefit. Significant affect dysregulation and poor impulse control may be characteristics that limit responsiveness. We report on the treatment of five patients with bulimia nervosa (BN), anorexia nervosa-binge/purge type (AN-B/P) or eating disorder not otherwise specified (EDNOS), using the anticonvulsant lamotrigine after inadequate response to SSRIs. Following addition of lamotrigine to an antidepressant in four cases, and switch from an antidepressant to lamotrigine in one case, patients experienced substantial improvement in mood reactivity and instability, impulsive drives and behaviors, and eating-disordered symptoms. These findings raise the possibility that lamotrigine, either as monotherapy or as an augmenting agent to antidepressants, may be useful in patients who binge eat and purge, and have significant affect dysregulation with poor impulse control. Copyright © 2013 Wiley Periodicals, Inc.

  1. Local area water removal analysis of a proton exchange membrane fuel cell under gas purge conditions.

    Lee, Chi-Yuan; Lee, Yu-Ming; Lee, Shuo-Jen

    2012-01-01

    In this study, local area water content distributions under various gas purging conditions are experimentally analyzed for the first time. The local high frequency resistance (HFR) is measured using novel micro sensors. The results reveal that the liquid water removal rate in a membrane electrode assembly (MEA) is non-uniform. In the under-the-channel area, the removal of liquid water is governed by both the convective and the diffusive flux of through-plane drying, and almost all of the liquid water is removed within 30 s of purging with gas. However, liquid water stored in the under-the-rib area is not easy to remove even with 1 min of gas purging, so re-hydration of the membrane by internal diffusive flux is faster there than in the under-the-channel area. Consequently, local fuel starvation and membrane degradation can degrade the performance of a fuel cell that is started from cold.

  2. Hydrogen recovery by pressure swing adsorption. [From ammonia purge-gas streams

    1979-06-01

    A pressure swing adsorption (PSA) process designed to recover H2 from ammonia purge-gas streams, developed by Bergbau-Forschung GmbH of West Germany, is reviewed. The PSA unit is installed in the process stream after the ammonia absorber unit, which washes the ammonia-containing purge gas consisting of NH3, H2O, CH4, Ar, N2, and H2. Usually 4 adsorber units containing carbon molecular sieves make up the PSA unit; however, only one unit is generally used to adsorb all components except H2 while the other units are being regenerated by depressurization. Economic comparisons of the PSA process with a cryogenic process indicate that for some ammonia plants there may be a 30% saving in fuel gas requirements with the PSA system. The conditions of the purge gas strongly influence which recovery system is more suitable.

  3. FMECA about pre-treatment system for purge gas of test blanket module in ITER

    Fu Wanfa; Luo Deli; Tang Tao

    2012-01-01

    The pre-treatment system for the purge gas of the TBM will be installed in the Port Cell for installing the TBM in ITER; its functions include filtering the purge gas, removing HTO, cooling, and adjusting the flow rate. The treated purge gas is conveyed to the Tritium Extraction System (TES). The technological process and system components of the pre-treatment system are introduced. With tritium release risk taken as the failure criterion, a failure mode, effects and criticality analysis (FMECA) was carried out, and several weaknesses and failure modes in the system were found. The risk priority number (RPN) and failure mode criticality were calculated, and four important potential failure modes were identified. Finally, some design improvement measures and usage compensation measures are given. The analysis provides a design basis for reducing the risk of excessive tritium release, and is also a useful aid for safety analysis of other tritium systems. (authors)

  4. Estimation of subcriticality by neutron source multiplication method

    Sakurai, Kiyoshi; Suzaki, Takenori; Arakawa, Takuya; Naito, Yoshitaka

    1995-03-01

    Subcritical cores were constructed in a core tank of the TCA by arraying 2.6% enriched UO2 fuel rods into n×n square lattices of 1.956 cm pitch. Vertical distributions of the neutron count rates for the fifteen subcritical cores (n = 17, 16, 14, 11, 8) with different water levels were measured at 5 cm intervals with 235U micro-fission counters at in-core and out-of-core positions, with a 252Cf neutron source placed near the core center. The continuous energy Monte Carlo code MCNP-4A was used for the calculation of neutron multiplication factors and neutron count rates. The important conclusions of this study are as follows: (1) Differences between the neutron multiplication factors obtained from the exponential experiment and from MCNP-4A are below 1% in most cases. (2) Standard deviations of the neutron count rates calculated with MCNP-4A using 500000 histories are 5-8%. The calculated neutron count rates are consistent with the measured ones. (author)

  5. Human reliability analysis for In-Tank Precipitation alignment and startup of emergency purge ventilation equipment

    Olsen, L.M.

    1993-08-01

    This report documents the methodology used for calculating the human error probability for establishing air-based ventilation using emergency purge ventilation equipment on In-Tank Precipitation (ITP) processing tanks 48 and 49 after a failure of the nitrogen purge system following a seismic event. The analyses were performed according to THERP (Technique for Human Error Rate Prediction). The calculated human error probabilities are provided as input to the Fault Tree Analysis for the ITP Nitrogen Purge System. The analysis assumes a seismic event initiator leading to establishing air-based ventilation on ITP processing tanks 48 and 49. At the time of this analysis only the tanks and the emergency purge ventilation equipment are seismically qualified. Consequently, onsite and offsite power is assumed to be unavailable and all operator control actions are to be performed locally on the tank top. Assumptions regarding procedures, staffing, equipment locations, equipment tagging, equipment availability, and training were made and are documented in this report. The human error probability for establishing air-based ventilation using the emergency purge ventilation equipment on In-Tank Precipitation processing tanks 48 and 49 after a failure of the nitrogen purge system following a seismic event is 4.2E-6 (median value on the lognormal scale). It is important to note that this result is predicated on the implementation of all of the assumptions listed in the ''Assumptions'' section of this report. This analysis was not based on the current conditions in ITP; it is to be used as a tool to aid ITP operations personnel in achieving the training, procedural, and operational goals outlined in this document.

  6. Computational fluid dynamic simulation of pressurizer safety valve loop seal purge phenomena in nuclear power plants

    Park, Jong Woon

    2012-01-01

    In Korean 3 Loop plants a water loop seal pipe containing condensed water is installed upstream of a pressurizer safety valve to protect the valve disk from the hot steam environment. The loop seal water purge time is a key parameter in safety analyses for overpressure transients, because it delays valve opening. The loop seal purge time is difficult to measure by test, and thus a 3-dimensional realistic computational fluid dynamics (CFD) model is developed in this paper to predict the seal water purge time before full opening of the valve, which is driven by steam after the water purge. The CFD model for a typical pressurizer safety valve with a loop seal pipe is developed using the computer code ANSYS CFX 11. Steady-state simulations are performed for full discharge of steam at valve full opening. Transient simulations are performed for the loop seal dynamics and to estimate the loop seal purge time. A sudden pressure drop of more than 2,000 psia at the tip of the upper nozzle ring is expected from the steady-state calculation. In the transient simulation, almost all of the loop seal water is discharged within 1.2 seconds through the narrow opening between the disk and the nozzle of the valve. It can be expected that the valve fully opens within at most 1.2 seconds, because constant valve opening is assumed in this CFX simulation, which is conservative since the valve actually opens fully before the loop seal water is completely discharged. The predicted loop seal purge time is compared with a previous correlation. (orig.)

  8. Effect of the purging gas on properties of Ti stabilized AISI 321 stainless steel TIG welds

    Taban, Emel; Kaluc, Erdinc; Aykan, T. Serkan [Kocaeli Univ. (Turkey). Dept. of Mechanical Engineering

    2014-07-01

    Gas purging is necessary to achieve high-quality stainless steel pipe welds, in order to prevent oxidation of the weld zone inside the pipe. AISI 321 stabilized austenitic stainless steel pipes, commonly preferred in refinery applications, were welded by the TIG welding process both with and without the use of purging gas. As purging gases, Ar, N2, Ar + N2 and N2 + 10% H2 were used, respectively. The aim of this investigation is to determine the effect of the purging gas on weld joint properties such as microstructure, corrosion, strength and impact toughness. Macro sections and microstructures of the welds were investigated. Chemical composition analysis to obtain the nitrogen, oxygen and hydrogen content of the weld root was done by Leco analysis. The ferrite content of the beads, including root and cap passes, was measured by a ferritscope. Vickers hardness (HV10) values were obtained. Intergranular and pitting corrosion tests were applied to determine the corrosion resistance of all welds. The type of purging gas affected the pitting corrosion properties as well as the ferrite content and the nitrogen, oxygen and hydrogen contents at the roots of the welds. No hot cracking problems are predicted, as the weld still solidifies with ferrite as the primary phase, as confirmed by microstructural and ferrite content analysis. Mechanical testing showed no significant change with the purge gas. The AISI 321 steel and 347 consumable compositions permit the use of nitrogen-rich gases for root shielding without a risk of hot cracking.

  9. A feature point identification method for positron emission particle tracking with multiple tracers

    Wiggins, Cody, E-mail: cwiggin2@vols.utk.edu [University of Tennessee-Knoxville, Department of Physics and Astronomy, 1408 Circle Drive, Knoxville, TN 37996 (United States); Santos, Roque [University of Tennessee-Knoxville, Department of Nuclear Engineering (United States); Escuela Politécnica Nacional, Departamento de Ciencias Nucleares (Ecuador); Ruggles, Arthur [University of Tennessee-Knoxville, Department of Nuclear Engineering (United States)

    2017-01-21

    A novel detection algorithm for Positron Emission Particle Tracking (PEPT) with multiple tracers based on optical feature point identification (FPI) methods is presented. This new method, the FPI method, is compared to a previous multiple PEPT method via analyses of experimental and simulated data. The FPI method outperforms the older method in cases of large particle numbers and fine time resolution. Simulated data show the FPI method to be capable of identifying 100 particles at 0.5 mm average spatial error. Detection error is seen to vary with the inverse square root of the number of lines of response (LORs) used for detection and increases as particle separation decreases. - Highlights: • A new approach to positron emission particle tracking is presented. • Using optical feature point identification analogs, multiple particle tracking is achieved. • Method is compared to previous multiple particle method. • Accuracy and applicability of method is explored.

  10. Multiple-time-stepping generalized hybrid Monte Carlo methods

    Escribano, Bruno, E-mail: bescribano@bcamath.org [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); Akhmatskaya, Elena [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); IKERBASQUE, Basque Foundation for Science, E-48013 Bilbao (Spain); Reich, Sebastian [Universität Potsdam, Institut für Mathematik, D-14469 Potsdam (Germany); Azpiroz, Jon M. [Kimika Fakultatea, Euskal Herriko Unibertsitatea (UPV/EHU) and Donostia International Physics Center (DIPC), P.K. 1072, Donostia (Spain)

    2015-01-01

    Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multiple time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications of the method not only lead to better performance of GSHMC itself but also allow it to beat the best-performing methods that use similar force-splitting schemes. In addition we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on a water system and a protein system. Results were compared to those obtained using a Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo, and using the natural stochasticity offered by the generalized hybrid Monte Carlo, improves the stability of MTS and allows larger step sizes in the simulation of complex systems.

  11. Application of multiple timestep integration method in SSC

    Guppy, J.G.

    1979-01-01

    The thermohydraulic transient simulation of an entire LMFBR system is, by its very nature, complex. Physically, the entire plant consists of many subsystems which are coupled by various processes and/or components. The characteristic integration timesteps for these processes/components can vary over a wide range. To improve computing efficiency, a multiple timestep scheme (MTS) approach has been used in the development of the Super System Code (SSC). In this paper: (1) the partitioning of the system and the timestep control are described, and (2) results are presented showing that the MTS reduces computer running time by as much as a factor of five relative to a single timestep scheme.
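
    The abstract does not give the SSC partitioning itself, but the generic idea of a multiple timestep scheme, advancing a fast subsystem with several substeps per slow-subsystem step, can be sketched as follows (a first-order explicit toy example; the function names and the coupling are illustrative, not SSC's actual scheme):

```python
def mts_step(y_fast, y_slow, f_fast, f_slow, dt_slow, n_sub):
    """One slow step of a simple multiple-timestep (MTS) explicit Euler scheme.

    The slow subsystem takes one step of size dt_slow; the fast subsystem
    takes n_sub substeps of size dt_slow / n_sub, seeing the updated slow state.
    """
    y_slow_new = y_slow + dt_slow * f_slow(y_slow, y_fast)
    dt_fast = dt_slow / n_sub
    for _ in range(n_sub):
        y_fast = y_fast + dt_fast * f_fast(y_fast, y_slow_new)
    return y_fast, y_slow_new

# Stiff fast mode (rate -10) and slow mode (rate -0.1), decoupled for clarity;
# the fast mode gets a stable step of 0.01 while the slow mode takes 0.1:
yf, ys = mts_step(1.0, 1.0,
                  f_fast=lambda yf, ys: -10.0 * yf,
                  f_slow=lambda ys, yf: -0.1 * ys,
                  dt_slow=0.1, n_sub=10)
```

    The saving comes from evaluating the expensive slow subsystem once per ten fast substeps, which is the same trade the SSC partitioning exploits.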

  12. Multiple Beta Spectrum Analysis Method Based on Spectrum Fitting

    Lee, Uk Jae; Jung, Yun Song; Kim, Hee Reyoung [UNIST, Ulsan (Korea, Republic of)

    2016-05-15

    When a sample of several mixed radioactive nuclides is measured, it is difficult to separate the nuclides because their spectra overlap. For this reason, a simple mathematical method for spectrum analysis of a mixed beta-ray source has been studied. However, existing work was in need of a more accurate spectral analysis method. This study describes methods for separating the components of a mixed beta-ray source through analysis of the beta spectrum slope based on curve fitting, to resolve the existing problem. Among the fitting methods considered (Fourier, polynomial, Gaussian, and sum of sines), the sum-of-sines fit was found to be the best for obtaining an equation for the distribution of the mixed beta spectrum. It was shown to be the most appropriate for the analysis of spectra with various ratios of mixed nuclides. It is thought that this method could be applied to rapid spectrum analysis of mixed beta-ray sources.
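
    A sum-of-sines fit with fixed harmonic frequencies is linear in the amplitudes, so it reduces to an ordinary least-squares problem. The self-contained sketch below shows that reduced case only; the general sum-of-sines model with free frequencies and phases (presumably what the paper fits) is nonlinear, and all names here are illustrative:

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def sum_of_sines_fit(xs, ys, n_terms):
    """Least-squares amplitudes a_k for y(x) ~ sum_k a_k sin(k*pi*x/L), x in [0, L]."""
    L = max(xs)
    basis = [[math.sin(k * math.pi * x / L) for k in range(1, n_terms + 1)] for x in xs]
    # normal equations: (B^T B) a = B^T y
    G = [[sum(row[j] * row[k] for row in basis) for k in range(n_terms)]
         for j in range(n_terms)]
    r = [sum(row[j] * y for row, y in zip(basis, ys)) for j in range(n_terms)]
    return solve(G, r)

# synthetic "spectrum" built from two harmonics is recovered exactly:
xs = [i / 100.0 for i in range(101)]
ys = [2.0 * math.sin(math.pi * x) + 0.5 * math.sin(2.0 * math.pi * x) for x in xs]
a = sum_of_sines_fit(xs, ys, 3)
```
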

  13. Colonic diverticular bleeding: urgent colonoscopy without purging and endoscopic treatment with epinephrine and hemoclips

    Ignacio Couto-Worner

    2013-09-01

    Diverticular disease is the most frequent cause of lower gastrointestinal bleeding. Most of the time, bleeding stops without any intervention, but in 10-20 % of cases it is necessary to treat the hemorrhage. Several modalities of endoscopic treatment have been described, usually after purging the colon. We present five cases of severe diverticular bleeding treated with injection of epinephrine and hemoclips. All the colonoscopies were performed without purging of the colon in an emergency setting, with correct visualization of the bleeding point. Patients recovered well, avoiding more aggressive procedures such as angiography or surgery.

  14. Statistical Genetics Methods for Localizing Multiple Breast Cancer Genes

    Ott, Jurg

    1998-01-01

    .... For a number of variables measured on a trait, a method, principal components of heritability, was developed that combines these variables in such a way that the resulting linear combination has highest heritability...

  15. The Method of Multiple Spatial Planning Basic Map

    C. Zhang

    2018-04-01

    The "Provincial Space Plan Pilot Program" issued in December 2016 pointed out that the existing space management and control information platforms of the various departments should be integrated into a spatial planning information management platform that unifies basic data, target indicators, spatial coordinates, and technical specifications. This platform will provide decision support for plan preparation, digital monitoring and evaluation of plan implementation, parallel review and approval of investment projects (including military construction projects) by the space management and control departments, and improved efficiency of administrative approval. The space planning system should delimit control limits for the development of production, living and ecological space, and implement use control. On the one hand, the functional orientation of the various planning spaces must be clarified; on the other hand, "multi-compliance" across the various space plans must be achieved. Integrating multiple spatial plans requires a unified, standard basic map (geographic database and technical specification) to divide space into the three types of urban, agricultural and ecological, and to provide technical support for refining the space control zoning in the relevant plans. The article analyses the main technical problems of the spatial datum, the land use classification standards, the base map, and the basic planning platform, drawing on geographic census results for the preparation of spatial planning maps and on pilot applications combining multiple plans in Heilongjiang and Hainan.

  17. Comparison of multiple gene assembly methods for metabolic engineering

    Chenfeng Lu; Karen Mansoorabadi; Thomas Jeffries

    2007-01-01

    A universal, rapid DNA assembly method for efficient multigene plasmid construction is important for biological research and for optimizing gene expression in industrial microbes. Three different approaches to achieve this goal were evaluated. These included creating long complementary extensions using a uracil-DNA glycosylase technique, overlap extension polymerase...

  18. Comparison of two methods of surface profile extraction from multiple ultrasonic range measurements

    Barshan, B; Baskent, D

    Two novel methods for surface profile extraction based on multiple ultrasonic range measurements are described and compared. One of the methods employs morphological processing techniques, whereas the other employs a spatial voting scheme followed by simple thresholding. Morphological processing

  19. Hydrologic extremes - an intercomparison of multiple gridded statistical downscaling methods

    Werner, Arelia T.; Cannon, Alex J.

    2016-04-01

    Gridded statistical downscaling methods are the main means of preparing climate model data to drive distributed hydrological models. Past work on the validation of climate downscaling methods has focused on temperature and precipitation, with less attention paid to the ultimate outputs from hydrological models. Also, as attention shifts towards projections of extreme events, downscaling comparisons now commonly assess methods in terms of climate extremes, but hydrologic extremes are less well explored. Here, we test the ability of gridded downscaling models to replicate historical properties of climate and hydrologic extremes, as measured in terms of temporal sequencing (i.e. correlation tests) and distributional properties (i.e. tests for equality of probability distributions). Outputs from seven downscaling methods - bias correction constructed analogues (BCCA), double BCCA (DBCCA), BCCA with quantile mapping reordering (BCCAQ), bias correction spatial disaggregation (BCSD), BCSD using minimum/maximum temperature (BCSDX), the climate imprint delta method (CI), and bias corrected CI (BCCI) - are used to drive the Variable Infiltration Capacity (VIC) model over the snow-dominated Peace River basin, British Columbia. Outputs are tested using split-sample validation on 26 climate extremes indices (ClimDEX) and two hydrologic extremes indices (3-day peak flow and 7-day peak flow). To characterize observational uncertainty, four atmospheric reanalyses are used as climate model surrogates and two gridded observational data sets are used as downscaling target data. The skill of the downscaling methods generally depended on reanalysis and gridded observational data set. However, CI failed to reproduce the distribution and BCSD and BCSDX the timing of winter 7-day low-flow events, regardless of reanalysis or observational data set. Overall, DBCCA passed the greatest number of tests for the ClimDEX indices, while BCCAQ, which is designed to more accurately resolve event

  20. Support Operators Method for the Diffusion Equation in Multiple Materials

    Winters, Andrew R. [Los Alamos National Laboratory; Shashkov, Mikhail J. [Los Alamos National Laboratory

    2012-08-14

    A second-order finite difference scheme for the solution of the diffusion equation on non-uniform meshes is implemented. The method allows the heat conductivity to be discontinuous. The algorithm is formulated on a one dimensional mesh and is derived using the support operators method. A key component of the derivation is that the discrete analog of the flux operator is constructed to be the negative adjoint of the discrete divergence, in an inner product that is a discrete analog of the continuum inner product. The resultant discrete operators in the fully discretized diffusion equation are symmetric and positive definite. The algorithm is generalized to operate on meshes with cells which have mixed material properties. A mechanism to recover intermediate temperature values in mixed cells using a limited linear reconstruction is introduced. The implementation of the algorithm is verified and the linear reconstruction mechanism is compared to previous results for obtaining new material temperatures.
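
    The key property described above, a discrete flux operator that is the negative adjoint of the discrete divergence, makes the resulting 1D system symmetric tridiagonal and positive definite; on a uniform node-centered mesh with cell-wise conductivities it reduces to the familiar scheme sketched below (a sketch of the discrete structure only, not the authors' mixed-cell reconstruction):

```python
def solve_diffusion_1d(k_cells, u_left, u_right):
    """Steady 1D diffusion -(k u')' = 0 on a uniform node mesh, Dirichlet BCs.

    k_cells[i] is the conductivity of the cell between nodes i and i+1
    (it may be discontinuous between cells). Returns nodal temperatures.
    The system is symmetric tridiagonal; solved with the Thomas algorithm.
    """
    n = len(k_cells)  # number of cells; n + 1 nodes
    # interior node i: k[i-1]*(u[i] - u[i-1]) = k[i]*(u[i+1] - u[i])  (flux continuity)
    a = [k_cells[i - 1] for i in range(1, n)]                   # sub-diagonal
    b = [-(k_cells[i - 1] + k_cells[i]) for i in range(1, n)]   # diagonal
    c = [k_cells[i] for i in range(1, n)]                       # super-diagonal
    d = [0.0] * (n - 1)
    d[0] -= k_cells[0] * u_left       # fold Dirichlet values into the RHS
    d[-1] -= k_cells[n - 1] * u_right
    for i in range(1, n - 1):         # Thomas forward sweep
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    u = [0.0] * (n - 1)               # back substitution
    u[-1] = d[-1] / b[-1]
    for i in range(n - 3, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return [u_left] + u + [u_right]

# two materials: k = 1 on the left half, k = 2 on the right half of [0, 1];
# the exact interface temperature at x = 0.5 is 2/3:
u = solve_diffusion_1d([1.0] * 5 + [2.0] * 5, u_left=0.0, u_right=1.0)
```

    Because flux continuity is imposed at every node, the scheme reproduces the exact piecewise-linear solution across the material discontinuity.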

  1. Computing multiple zeros using a class of quartically convergent methods

    F. Soleymani

    2013-09-01

    For functions with finitely many real roots in an interval, relatively little literature exists, although in applications users often wish to find all the real zeros at once. Hence, a second aim of this paper is to design a fourth-order algorithm, based on the developed methods, that finds all the real solutions of a nonlinear equation in an interval using the programming package Mathematica 8.
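
    The "find all real zeros in an interval" task can be sketched as: scan the interval for sign changes, then refine each bracket iteratively. Note that the Newton refinement used here is only second order, not the paper's quartically convergent scheme; the function and interval below are our own illustration.

```python
import numpy as np

def all_real_roots(f, df, a, b, n=400, tol=1e-12):
    """Locate all simple real roots of f in [a, b]: bracket by sign
    changes on a uniform grid, then polish each bracket with Newton."""
    xs = np.linspace(a, b, n + 1)
    fs = np.array([f(x) for x in xs])
    roots = []
    for i in range(n):
        if fs[i] == 0.0:
            roots.append(xs[i])
            continue
        if fs[i] * fs[i + 1] < 0:          # sign change brackets a simple root
            x = 0.5 * (xs[i] + xs[i + 1])  # start Newton from the midpoint
            for _ in range(50):
                step = f(x) / df(x)
                x -= step
                if abs(step) < tol:
                    break
            roots.append(x)
    return sorted(roots)
```

A grid that is too coarse can miss root pairs inside one cell, which is exactly the difficulty higher-order interval methods try to address.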

  2. Some problems of neutron source multiplication method for site measurement technology in nuclear critical safety

    Shi Yongqian; Zhu Qingfu; Hu Dingsheng; He Tao; Yao Shigui; Lin Shenghuo

    2004-01-01

    The paper gives the experimental theory and method of the neutron source multiplication technique for in-situ measurement in nuclear criticality safety. The parameter actually measured by the source multiplication method is the subcritical-with-source neutron effective multiplication factor k_s, not the neutron effective multiplication factor k_eff. The experimental research was carried out on a uranium-solution criticality safety experiment assembly. The k_s of different subcriticalities was measured by the neutron source multiplication method, while the k_eff of the corresponding subcriticalities was obtained as follows: the reactivity coefficient per unit solution level was first measured by the period method, then multiplied by the difference between the critical and subcritical solution levels to give the reactivity of the subcritical solution level; k_eff was finally extracted from the reactivity formula. The effect on nuclear criticality safety and the difference between k_eff and k_s are discussed
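
    The basic source-multiplication relation behind such measurements is that the detector count rate in a subcritical assembly scales as C ∝ S/(1 − k_s). A minimal sketch, assuming an unchanged source and detector geometry between the reference and measured states (and ignoring the spatial and spectral corrections that distinguish k_s from k_eff, which the paper discusses):

```python
def k_s_from_count_rate(C, C0, k0):
    """Estimate k_s from count rates: C ∝ S/(1 - k_s), so relative to a
    reference state (count rate C0, known multiplication factor k0),
        k_s = 1 - C0 * (1 - k0) / C.
    """
    return 1.0 - C0 * (1.0 - k0) / C
```

For example, doubling the count rate from a reference state with k0 = 0.9 corresponds to k_s = 0.95.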

  3. Improved exact method for the double TSP with multiple stacks

    Lusby, Richard Martin; Larsen, Jesper

    2011-01-01

    … the first delivery, and the container cannot be repacked once packed. In this paper we improve the previously proposed exact method of Lusby et al. (Int Trans Oper Res 17 (2010), 637–652) through an additional preprocessing technique that uses the longest common subsequence between the respective pickup and delivery problems. The results suggest an impressive improvement, and we report, for the first time, optimal solutions to several unsolved instances from the literature containing 18 customers. Instances with 28 customers are also shown to be solvable within a few percent of optimality. © 2011 Wiley…
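
    The preprocessing ingredient named in the abstract, the longest common subsequence, is computed by the classic dynamic programme; a short self-contained version (our illustration, not the authors' implementation):

```python
def longest_common_subsequence(a, b):
    """Return one longest common subsequence of sequences a and b,
    via the O(len(a)*len(b)) dynamic-programming table plus backtrack."""
    m, n = len(a), len(b)
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if a[i] == b[j]:
                L[i + 1][j + 1] = L[i][j] + 1
            else:
                L[i + 1][j + 1] = max(L[i][j + 1], L[i + 1][j])
    out, i, j = [], m, n
    while i and j:                      # backtrack through the table
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1])
            i -= 1
            j -= 1
        elif L[i - 1][j] >= L[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return out[::-1]
```

In the double TSP setting, the LCS of a pickup tour and a delivery tour identifies items that can share a stack without repacking.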

  4. Effective multiplication factor measurement by feynman-α method. 3

    Mouri, Tomoaki; Ohtani, Nobuo

    1998-06-01

    The sub-criticality monitoring system has been developed for criticality safety control in nuclear fuel handling plants. In past experiments performed with the Deuterium Critical Assembly (DCA), it was confirmed that detection of sub-criticality was possible down to k_eff = 0.3. To investigate the applicability of the method to a more generalized system, experiments were performed in the light-water-moderated system of the modified DCA core. From these experiments, it was confirmed that the prompt decay constant (α), which is an index of sub-criticality, was detectable between k_eff = 0.623 and k_eff = 0.870 and that a difference of 0.05 - 0.1Δk could be distinguished. The α values were numerically calculated with the 2D transport code TWODANT and the Monte Carlo code KENO V.a, and the results were compared with the measured values. The differences between calculated and measured values proved to be less than 13%, which is sufficient accuracy for the sub-criticality monitoring system. It was confirmed that the Feynman-α method is applicable to sub-criticality measurement of a light-water-moderated system. (author)
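
    The statistic underlying the Feynman-α method is the excess variance-to-mean ratio of counts collected in equal time gates; extracting α itself requires fitting this statistic as a function of gate width, which is omitted in this minimal sketch.

```python
import numpy as np

def feynman_y(counts):
    """Feynman-Y excess variance of gate counts: Y = var/mean - 1.
    A pure (uncorrelated) Poisson source gives Y = 0; correlated fission
    chains in a subcritical assembly give Y > 0."""
    c = np.asarray(counts, dtype=float)
    return c.var(ddof=0) / c.mean() - 1.0
```

For example, the strongly clustered gate counts [0, 8, 0, 8] have mean 4 and variance 16, so Y = 3.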

  5. Effect of nitrogen crossover on purging strategy in PEM fuel cell systems

    Rabbani, Raja Abid; Rokni, Masoud

    2013-01-01

    A comprehensive study on nitrogen crossover in polymer electrolyte membrane fuel cell (PEMFC) system with anode recirculation is conducted and associated purging strategies are discussed. Such systems when employed in automobiles are subjected to continuous changes in load and external operating...

  6. Determination of biodegradation process of benzene, toluene, ethylbenzene and xylenes in seabed sediment by purge and trap gas chromatography

    Han, Dongqiang [Key Lab. for Atomic and Molecular Nanosciences of Education Ministry, Tsinghua Univ., Beijing (China). Dept. of Physics; China Pharmaceutical Univ., Nanjing (China). Physics Teaching and Research Section, Dept. of Basic Sciences; Ma, Wanyun; Chen, Dieyan [Key Lab. for Atomic and Molecular Nanosciences of Education Ministry, Tsinghua Univ., Beijing (China). Dept. of Physics

    2007-12-15

    Benzene, toluene, ethylbenzene, and xylenes (BTEX) are commonly found in crude oil and are used in geochemical investigations as direct indicators of the presence of oil and gas. BTEX are highly volatile and can be degraded by microorganisms, which seriously affects their precise measurement. A method for determining the biodegradation process of BTEX in seabed sediment using dynamic headspace (purge and trap) gas chromatography with a photoionization detector (PID) was developed, with a detection limit of 7.3-13.2 ng L{sup -1} and a recovery rate of 91.6-95.0%. The decrease in the concentration of BTEX components caused by microbial biodegradation was monitored in seabed sediment samples. The results on the BTEX biodegradation process are of great significance for the collection, transportation, preservation, and measurement of seabed sediment samples in geochemical investigations of oil and gas. (orig.)

  7. Implementation of a fully automated process purge-and-trap gas chromatograph at an environmental remediation site

    Blair, D.S.; Morrison, D.J.

    1997-01-01

    The AQUASCAN, a commercially available, fully automated purge-and-trap gas chromatograph from Sentex Systems Inc., was implemented and evaluated as an in-field, automated monitoring system for contaminated groundwater at an active DOE remediation site in Pinellas, FL. Though the AQUASCAN is designed as a stand-alone process analytical unit, implementation at this site required additional hardware, including a sample dilution system and a method for delivering standard solution to the gas chromatograph for automated calibration. The evaluation found the system to be a reliable and accurate instrument. Concentration values reported by the AQUASCAN for methylene chloride, trichloroethylene, and toluene in the Pinellas groundwater were within 20% of reference laboratory values

  8. Monoclonal antibody-purged bone marrow transplantation therapy for multiple myeloma.

    Anderson, K C; Andersen, J; Soiffer, R; Freedman, A S; Rabinowe, S N; Robertson, M J; Spector, N; Blake, K; Murray, C; Freeman, A

    1993-10-15

    Forty patients with plasma cell dyscrasias underwent high-dose chemoradiotherapy and either anti-B-cell monoclonal antibody (MoAb)-treated autologous, anti-T-cell MoAb-treated HLA-matched sibling allogeneic or syngeneic bone marrow transplantation (BMT). The majority of patients had advanced Durie-Salmon stage myeloma at diagnosis, all were pretreated with chemotherapy, and 17 had received prior radiotherapy. At the time of BMT, all patients demonstrated good performance status with Karnofsky score of 80% or greater and had less than 10% marrow tumor cells; 34 patients had residual monoclonal marrow plasma cells and 38 patients had paraprotein. Following high-dose chemoradiotherapy, there were 18 complete responses (CR), 18 partial responses, one non-responder, and three toxic deaths. Granulocytes greater than 500/microL and untransfused platelets greater than 20,000/microL were noted at a median of 23 (range, 12 to 46) and 25 (range, 10 to 175) days posttransplant (PT), respectively, in 24 of the 26 patients who underwent autografting. In the 14 patients who received allogeneic or syngeneic grafts, granulocytes greater than 500/microL and untransfused platelets greater than 20,000/microL were noted at a median of 19 (range, 12 to 24) and 16 (range, 5 to 32) days PT, respectively. With 24 months median follow-up for survival after autologous BMT, 16 of 26 patients are alive free from progression at 2+ to 55+ months PT; of these, 5 patients remain in CR at 6+ to 55+ months PT. With 24 months median follow-up for survival after allogeneic and syngeneic BMT, 8 of 14 patients are alive free from progression at 8+ to 34+ months PT; of these, 5 patients remain in CR at 8+ to 34+ months PT. This therapy has achieved high response rates and prolonged progression-free survival in some patients and proven to have acceptable toxicity. However, relapses post-BMT, coupled with slow engraftment post-BMT in heavily pretreated patients, suggest that such treatment strategies should be used earlier in the disease course. To define the role of BMT in the treatment of myeloma, its efficacy should be compared with that of conventional chemotherapy in a randomized trial.

  9. Driven exercise in the absence of binge eating: Implications for purging disorder.

    Lydecker, Janet A; Shea, Megan; Grilo, Carlos M

    2018-02-01

    Purging disorder (PD) is characterized by recurrent purging without objectively large binge-eating episodes. PD has received relatively little attention, and questions remain about the clinical significance of "purging" by exercise that is driven or compulsive (i.e., as extreme compensatory or weight-control behavior). The little available research suggests that individuals who use exercise as a compensatory behavior might have less eating-disorder psychopathology than those who purge by vomiting or laxatives, but those studies had smaller sample sizes, defined PD using low-frequency thresholds, and defined exercise without weight-compensatory or driven elements. Participants (N = 2,017) completed a web-based survey with established measures of eating-disorder psychopathology, depression, and physical activity. Participants were categorized (regular compensatory driven exercise, PD-E, n = 297; regular compensatory vomiting/laxatives, PD-VL, n = 59; broadly defined anorexia nervosa, AN, n = 20; and no eating-disordered behaviors, NED, n = 1,658) and compared. PD-E, PD-VL, and AN had higher eating-disorder psychopathology and physical activity than NED but did not significantly differ from each other on most domains. PD-VL and AN had higher depression than PD-E, which was higher than NED. Findings suggest that among participants with regular compensatory behaviors without binge eating, those who use exercise alone have levels of associated eating-disorder psychopathology similar to those who use vomiting/laxatives, although they have lower depression levels and lower overall frequency of purging. Findings provide further support for the clinical significance of PD. Clinicians and researchers should recognize the severity of driven exercise as a compensatory behavior, and the need for further epidemiological and treatment research. © 2017 Wiley Periodicals, Inc.

  10. Better Fitness in Captive Cuvier's Gazelle despite Inbreeding Increase: Evidence of Purging?

    Eulalia Moreno

    Full Text Available Captive breeding of endangered species often aims at preserving genetic diversity and avoiding the harmful effects of inbreeding. However, deleterious alleles causing inbreeding depression can be purged when inbreeding persists over several generations. Despite its great importance both for evolutionary biology and for captive breeding programmes, few studies have addressed whether, and to what extent, purging may occur. Here we undertake a longitudinal study with the largest captive population of Cuvier's gazelle managed under a European Endangered Species Programme since 1975. Previous results in this population have shown that highly inbred mothers tend to produce more daughters, and this fact was used in 2006 to reach a more appropriate sex ratio in this polygynous species by changing the pairing strategy (i.e., pairing some inbred females instead of keeping them as surplus individuals in the population). Here, using studbook data, we explore whether purging has occurred in the population by investigating whether, after the change in pairing strategy, (a) inbreeding and homozygosity increased at the population level, (b) fitness (survival) increased, and (c) the relationship between inbreeding and juvenile survival was positive. Consistent with the existence of purging, we found an increase in inbreeding coefficients, homozygosity and juvenile survival. In addition, we showed that in the course of the breeding programme the relationship between inbreeding and juvenile survival was not uniform but changed over time: it was negative in the early years, flat in the middle years and positive after the change in pairing strategy. We highlight that by allowing inbred individuals to mate in captive stocks we may favour sex-ratio bias towards females, a desirable management strategy to reduce the surplus of males that forces most zoos to use ethical culling and euthanizing management tools. We discuss these possibilities but also acknowledge that many

  11. Curvelet-domain multiple matching method combined with cubic B-spline function

    Wang, Tong; Wang, Deli; Tian, Mi; Hu, Bin; Liu, Chengming

    2018-05-01

    Since the large number of surface-related multiples in marine data would seriously influence the results of data processing and interpretation, many researchers have attempted to develop effective methods to remove them. The most successful surface-related multiple elimination method was proposed based on data-driven theory. However, the elimination effect was unsatisfactory due to the existence of amplitude and phase errors. Although the subsequent curvelet-domain multiple-primary separation method achieved better results, poor computational efficiency prevented its application. In this paper, we adopt the cubic B-spline function to improve the traditional curvelet multiple matching method. First, a small number of unknowns are selected as the basis points of the matching coefficient; second, the cubic B-spline function is applied to these basis points to reconstruct the matching array; third, a constrained solving equation is built based on the relationships between the predicted multiples, the matching coefficients, and the actual data; finally, the BFGS algorithm is used to iterate and realize fast solving of the sparsely constrained multiple matching algorithm. Moreover, the soft-threshold method is used to make the method perform better. With the cubic B-spline function, the differences between the predicted multiples and the original data diminish, which results in less processing time to obtain optimal solutions and fewer iterative loops in the solving procedure based on the L1-norm constraint. Applications to both synthetic and field data validate the practicability and validity of the method.
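
    The reconstruction step (a dense matching-coefficient array represented by a few basis-point unknowns) can be sketched with a uniform cubic B-spline evaluation. This is only an illustration of the representation; knot placement, the constraint equation and the BFGS solve from the paper are omitted.

```python
import numpy as np

def cubic_bspline_eval(ctrl, num):
    """Evaluate a uniform cubic B-spline defined by control values `ctrl`
    at `num` evenly spaced parameters, reconstructing a dense array from
    a small set of unknowns. Needs len(ctrl) >= 4."""
    ctrl = np.asarray(ctrl, dtype=float)
    nseg = len(ctrl) - 3               # number of spline segments
    out = np.empty(num)
    ts = np.linspace(0.0, nseg, num, endpoint=False)
    for i, t in enumerate(ts):
        s = min(int(t), nseg - 1)      # segment index
        u = t - s                      # local parameter in [0, 1)
        # uniform cubic B-spline basis (Cox-de Boor, matrix form)
        b = np.array([(1 - u)**3,
                      3 * u**3 - 6 * u**2 + 4,
                      -3 * u**3 + 3 * u**2 + 3 * u + 1,
                      u**3]) / 6.0
        out[i] = b @ ctrl[s:s + 4]
    return out
```

Because the four basis weights sum to one on every segment (partition of unity), constant control values reproduce a constant coefficient array exactly.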

  12. Using Module Analysis for Multiple Choice Responses: A New Method Applied to Force Concept Inventory Data

    Brewe, Eric; Bruun, Jesper; Bearden, Ian G.

    2016-01-01

    We describe "Module Analysis for Multiple Choice Responses" (MAMCR), a new methodology for carrying out network analysis on responses to multiple choice assessments. This method is used to identify modules of non-normative responses which can then be interpreted as an alternative to factor analysis. MAMCR allows us to identify conceptual…

  13. 29 CFR 4010.12 - Alternative method of compliance for certain sponsors of multiple employer plans.

    2010-07-01

    ... BENEFIT GUARANTY CORPORATION CERTAIN REPORTING AND DISCLOSURE REQUIREMENTS ANNUAL FINANCIAL AND ACTUARIAL INFORMATION REPORTING § 4010.12 Alternative method of compliance for certain sponsors of multiple employer... part for an information year if any contributing sponsor of the multiple employer plan provides a...

  14. Trace element analysis of environmental samples by multiple prompt gamma-ray analysis method

    Oshima, Masumi; Matsuo, Motoyuki; Shozugawa, Katsumi

    2011-01-01

    The multiple γ-ray detection method has proved to be a high-resolution, high-sensitivity method for nuclide quantification. The neutron prompt γ-ray analysis method is successfully extended by combining it with multiple γ-ray detection, an approach called multiple prompt γ-ray analysis (MPGA). In this review we show the principle of this method and its characteristics. Several examples of its application to environmental samples, especially river sediments in urban areas and sea sediment samples, are also described. (author)

  15. Measurements of the purge helium pressure drop across pebble beds packed with lithium orthosilicate and glass pebbles

    Abou-Sena, Ali, E-mail: ali.abou-sena@kit.edu; Arbeiter, Frederik; Boccaccini, Lorenzo V.; Schlindwein, Georg

    2014-10-15

    Highlights:
    • The objective is to measure the purge helium pressure drop across various HCPB-relevant pebble beds packed with lithium orthosilicate and glass pebbles.
    • The purge helium pressure drop increases significantly as the pebble diameter decreases from one run to another.
    • At the same superficial velocity, the pressure drop is directly proportional to the helium inlet pressure.
    • Ergun's equation can successfully model the purge helium pressure drop for the HCPB-relevant pebble beds.
    • The measured values of the purge helium pressure drop for the lithium orthosilicate pebble bed will support the design of the purge gas system for the HCPB breeder units.

    Abstract: The lithium orthosilicate pebble beds of the Helium Cooled Pebble Bed (HCPB) blanket are purged by helium to transport the produced tritium to the tritium extraction system. The pressure drop of the purge helium has a direct impact on the required pumping power and is a limiting factor for the purge mass flow. Therefore, the objective of this study is to measure the helium pressure drop across various HCPB-relevant pebble beds packed with lithium orthosilicate and glass pebbles. The pebble bed was formed by packing the pebbles into a stainless steel cylinder (ID = 30 mm and L = 120 mm); it was then integrated into a gas loop with four variable-speed side-channel compressors to regulate the helium mass flow. The static pressure was measured at two locations (100 mm apart) along the pebble bed and at the inlet and outlet of the pebble bed. The results demonstrated that: (i) the pressure drop increases significantly with decreasing pebble diameter, (ii) for the same superficial velocity, the pressure drop is directly proportional to the inlet pressure, and (iii) predictions of Ergun's equation agree well with the experimental results. The measured pressure drop for the lithium orthosilicate pebble bed will support the design of the purge gas system for the HCPB.
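
    Ergun's equation, which the study found to model the measured pressure drop well, combines a viscous and an inertial term. A small function (symbol names are ours) makes the two observed trends visible: the strong dependence on pebble diameter, and, via the gas density, the proportionality to inlet pressure at fixed superficial velocity.

```python
def ergun_pressure_drop(L, dp, eps, mu, rho, u):
    """Ergun's equation for the pressure drop across a packed bed:
        ΔP = L * [ 150 μ (1-ε)² u / (ε³ dp²) + 1.75 ρ (1-ε) u² / (ε³ dp) ]
    L    : bed length [m]
    dp   : pebble diameter [m]
    eps  : bed void fraction (porosity)
    mu   : gas dynamic viscosity [Pa·s]
    rho  : gas density [kg/m³]
    u    : superficial velocity [m/s]
    """
    viscous = 150.0 * mu * (1 - eps)**2 * u / (eps**3 * dp**2)
    inertial = 1.75 * rho * (1 - eps) * u**2 / (eps**3 * dp)
    return L * (viscous + inertial)
```

Since the viscous term scales as 1/dp² and the inertial term as 1/dp, halving the pebble diameter more than doubles the pressure drop, consistent with the measurements reported above.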

  16. Interconnection blocks: a method for providing reusable, rapid, multiple, aligned and planar microfluidic interconnections

    Sabourin, David; Snakenborg, Detlef; Dufva, Hans Martin

    2009-01-01

    In this paper a method is presented for creating 'interconnection blocks' that are re-usable and provide multiple, aligned and planar microfluidic interconnections. Interconnection blocks made from polydimethylsiloxane allow rapid testing of microfluidic chips and unobstructed microfluidic observ...

  17. Upscaling permeability for three-dimensional fractured porous rocks with the multiple boundary method

    Chen, Tao; Clauser, Christoph; Marquart, Gabriele; Willbrand, Karen; Hiller, Thomas

    2018-02-01

    Upscaling permeability of grid blocks is crucial for groundwater models. A novel upscaling method for three-dimensional fractured porous rocks is presented. The objective of the study was to compare this method with the commonly used Oda upscaling method and the volume averaging method. First, the multiple boundary method and its computational framework were defined for three-dimensional stochastic fracture networks. Then, the different upscaling methods were compared for a set of rotated fractures, for tortuous fractures, and for two discrete fracture networks. The results computed by the multiple boundary method are comparable with those of the other two methods and best fit the analytical solution for a set of rotated fractures. The errors in flow rate of the equivalent fracture model decrease when using the multiple boundary method. Furthermore, the errors of the equivalent fracture models increase from well-connected fracture networks to poorly connected ones. Finally, the diagonal components of the equivalent permeability tensors tend to follow a normal or log-normal distribution for the well-connected fracture network model with infinite fracture size. By contrast, they exhibit a power-law distribution for the poorly connected fracture network with multiple-scale fractures. The study demonstrates the accuracy and the flexibility of the multiple boundary upscaling concept. This makes it attractive for incorporation into any existing flow-based upscaling procedure, which helps in reducing the uncertainty of groundwater models.

  18. Analysis and performance estimation of the Conjugate Gradient method on multiple GPUs

    Verschoor, M.; Jalba, A.C.

    2012-01-01

    The Conjugate Gradient (CG) method is a widely-used iterative method for solving linear systems described by a (sparse) matrix. The method requires a large amount of Sparse-Matrix Vector (SpMV) multiplications, vector reductions and other vector operations to be performed. We present a number of
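
    A textbook CG implementation makes the structure of the per-iteration work clear: one matrix-vector product (the SpMV that dominates on GPUs), two inner products, and a few vector updates. This dense NumPy sketch is ours, not the paper's multi-GPU implementation.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, maxiter=1000):
    """Solve A x = b for symmetric positive definite A with plain CG.
    Each iteration costs one A @ p product plus a handful of BLAS-1 ops."""
    x = np.zeros_like(b)
    r = b - A @ x              # initial residual
    p = r.copy()               # initial search direction
    rs = r @ r
    for _ in range(maxiter):
        Ap = A @ p             # the (Sp)MV: the dominant cost
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # conjugate direction update
        rs = rs_new
    return x
```

On multiple GPUs the matrix rows are partitioned across devices, so the SpMV and the two reductions (`p @ Ap`, `r @ r`) become the communication points.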

  19. Statistical Analysis of a Class: Monte Carlo and Multiple Imputation Spreadsheet Methods for Estimation and Extrapolation

    Fish, Laurel J.; Halcoussis, Dennis; Phillips, G. Michael

    2017-01-01

    The Monte Carlo method and related multiple imputation methods are traditionally used in math, physics and science to estimate and analyze data and are now becoming standard tools in analyzing business and financial problems. However, few sources explain the application of the Monte Carlo method for individuals and business professionals who are…
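
    One simple Monte Carlo resampling estimate of the kind the abstract refers to is the bootstrap of a class statistic; the spreadsheet mechanics of the paper are not reproduced here, and the function below is our own NumPy sketch.

```python
import numpy as np

def monte_carlo_mean_ci(sample, n_boot=2000, seed=0):
    """Bootstrap (Monte Carlo resampling) estimate of the mean of a sample
    and a 95% percentile interval: draw n_boot resamples with replacement,
    record each resample mean, and summarize the resulting distribution."""
    rng = np.random.default_rng(seed)
    sample = np.asarray(sample, dtype=float)
    means = rng.choice(sample, size=(n_boot, sample.size), replace=True).mean(axis=1)
    return means.mean(), np.percentile(means, [2.5, 97.5])
```

The same resampling loop is what a spreadsheet implementation performs with a column of random draws per replication.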

  20. Human Reliability Analysis for In-Tank Precipitation Alignment and Startup of Emergency Purge Ventilation Equipment. Revision 3

    Shapiro, B.J.; Britt, T.E.

    1994-10-01

    This report documents the methodology used for calculating the human error probability for establishing air-based ventilation using emergency purge ventilation equipment on In-Tank Precipitation (ITP) processing tanks 48 and 49 after failure of the nitrogen purge system following a seismic event. The analyses were performed according to THERP (Technique for Human Error Rate Prediction) as described in NUREG/CR-1278-F, "Handbook of Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications." The calculated human error probabilities are provided as input to the Fault Tree Analysis for the ITP Nitrogen Purge System.

  1. Solution of Constrained Optimal Control Problems Using Multiple Shooting and ESDIRK Methods

    Capolei, Andrea; Jørgensen, John Bagterp

    2012-01-01

    … algorithm. As we consider stiff systems, implicit solvers with sensitivity computation capabilities for initial value problems must be used in the multiple shooting algorithm. Traditionally, multi-step methods based on the BDF algorithm have been used for such problems. The main novel contribution of this paper is the use of ESDIRK integration methods for solution of the initial value problems and the corresponding sensitivity equations arising in the multiple shooting algorithm. Compared to BDF-methods, ESDIRK-methods are advantageous in multiple shooting algorithms in which restarts and frequent discontinuities on each shooting interval are present. The ESDIRK methods are implemented using an inexact Newton method that reuses the factorization of the iteration matrix for the integration as well as the sensitivity computation. Numerical experiments are provided to demonstrate the algorithm.

  2. Simple and effective method of determining multiplicity distribution law of neutrons emitted by fissionable material with significant self -multiplication effect

    Yanjushkin, V.A.

    1991-01-01

    In developing new methods for non-destructive determination of the total plutonium mass in nuclear materials and products of the uranium-plutonium fuel cycle from their intrinsic neutron radiation, it may be useful to know not only individual moments but the full multiplicity distribution law of the neutrons leaving the material surface, using the following as parameters: first, the unconditional multiplicity distribution laws of neutrons formed in spontaneous and induced fission of the corresponding nuclei of the given fissionable material, and the unconditional multiplicity distribution law of neutrons produced by (α,n) reactions on light nuclei of the elements composing the material's chemical structure; second, the probability of induced fission of the material's nuclei by an incident neutron of any origin formed during previous fissions or (α,n) reactions. An attempt to develop such a theory has been undertaken. Here the author proposes his approach to this problem. The main advantage of this approach, in our view, is its mathematical simplicity and ease of implementation on a computer. In principle, the model guarantees good accuracy at any real value of the induced fission probability, without limitations on the physico-chemical composition of the nuclear material

  3. The multiple imputation method: a case study involving secondary data analysis.

    Walani, Salimah R; Cleland, Charles M

    2015-05-01

    To illustrate with the example of a secondary data analysis study the use of the multiple imputation method to replace missing data. Most large public datasets have missing data, which need to be handled by researchers conducting secondary data analysis studies. Multiple imputation is a technique widely used to replace missing values while preserving the sample size and sampling variability of the data. The 2004 National Sample Survey of Registered Nurses. The authors created a model to impute missing values using the chained equation method. They used imputation diagnostics procedures and conducted regression analysis of imputed data to determine the differences between the log hourly wages of internationally educated and US-educated registered nurses. The authors used multiple imputation procedures to replace missing values in a large dataset with 29,059 observations. Five multiple imputed datasets were created. Imputation diagnostics using time series and density plots showed that imputation was successful. The authors also present an example of the use of multiple imputed datasets to conduct regression analysis to answer a substantive research question. Multiple imputation is a powerful technique for imputing missing values in large datasets while preserving the sample size and variance of the data. Even though the chained equation method involves complex statistical computations, recent innovations in software and computation have made it possible for researchers to conduct this technique on large datasets. The authors recommend nurse researchers use multiple imputation methods for handling missing data to improve the statistical power and external validity of their studies.
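
    The chained-equation idea can be shown in miniature with a single incomplete variable: regress it on an observed covariate, then draw each of the m imputations with residual noise so that between-imputation variability is preserved. This toy sketch (our own, with hypothetical variable names) is far simpler than full MICE, which cycles the regression over every incomplete variable, as implemented in packages such as the survey-analysis software the authors used.

```python
import numpy as np

def multiple_impute(x, y, m=5, seed=0):
    """Toy multiple imputation: y contains np.nan entries and is imputed
    m times from a linear regression on the fully observed x, adding
    residual-scale noise on each draw. Returns a list of m completed
    copies of y, preserving the sample size of the data."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    obs = ~np.isnan(y)
    slope, intercept = np.polyfit(x[obs], y[obs], 1)   # fit on complete cases
    resid_sd = np.std(y[obs] - (slope * x[obs] + intercept))
    draws = []
    for _ in range(m):
        yi = y.copy()
        miss = ~obs
        # predicted value plus a fresh residual draw per imputation
        yi[miss] = slope * x[miss] + intercept + rng.normal(0.0, resid_sd, miss.sum())
        draws.append(yi)
    return draws
```

Downstream analyses (such as the wage regression described above) are then run on each completed data set and the estimates pooled across the m copies.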

  4. Purging of an air-filled vessel by horizontal injection of steam

    Smith, B.L.; Andreani, M.

    2000-07-01

    Reported here are results from an idealised 2D problem in which cold air is purged from a large vessel by a steam jet. The focus of the study is the prediction of the evolution of the flow regimes resulting from changes in the relative importance of buoyancy and inertia forces, and time histories of the temperature and concentration fields. Global parameters of interest are the mixture concentration at the vessel outlet and the total time taken to purge the air. The Computational Fluid Dynamics (CFD) code CFX-4 has been used to perform calculations for different inlet velocities, covering a range of (densimetric) Froude numbers from Fr=0.8 (buoyancy dominated) to Fr=7.1 (inertia dominated). Animations have been used to help understand the dynamics of the flow transitions, and temperature and concentration histories at specific monitoring points have been compared with coarse-mesh predictions obtained using the containment code GOTHIC. (authors)

  6. An implantable centrifugal blood pump with a recirculating purge system (Cool-Seal system).

    Yamazaki, K; Litwak, P; Tagusari, O; Mori, T; Kono, K; Kameneva, M; Watach, M; Gordon, L; Miyagishima, M; Tomioka, J; Umezu, M; Outa, E; Antaki, J F; Kormos, R L; Koyanagi, H; Griffith, B P

    1998-06-01

    A compact centrifugal blood pump has been developed as an implantable left ventricular assist system. The impeller diameter is 40 mm, and pump dimensions are 55 x 64 mm. The first prototype, fabricated from titanium alloy, weighed 400 g including a brushless DC motor; the weight of a second prototype pump was reduced to 280 g. The entire blood-contacting surface is coated with diamond-like carbon (DLC) to improve blood compatibility. Flow rates of over 7 L/min against 100 mm Hg pressure at 2,500 rpm with 9 W total power consumption have been measured. A newly designed mechanical seal with a recirculating purge system (Cool-Seal) is used for the shaft seal. In this seal system, the seal temperature is kept under 40 degrees C to prevent heat denaturation of blood proteins. The purge fluid also cools the pump motor coil and journal bearing, and is continuously purified and sterilized by an ultrafiltration unit incorporated in the paracorporeal drive console. In vitro experiments with bovine blood demonstrated an acceptably low hemolysis rate (normalized index of hemolysis = 0.005 +/- 0.002 g/100 L). In vivo experiments are currently ongoing using calves. Via left thoracotomy, left ventricular (LV) apex-descending aorta bypass was performed utilizing an expanded polytetrafluoroethylene (ePTFE) vascular graft with the pump placed in the left thoracic cavity. In 2 in vivo experiments, the pump flow rate was maintained at 5-9 L/min, and pump power consumption remained stable at 9-10 W. All plasma free Hb levels were measured at less than 15 mg/dl. The seal system demonstrated good sealing capability with negligible purge fluid consumption (<0.5 ml/day). In both calves, the pumps demonstrated trouble-free continuous function over 6 months (200 days and 222 days).

  7. Characterizations of gas purge valves for liquid alignment and gas removal in a microfluidic chip

    Chuang, Han-Sheng; Thakur, Raviraj; Wereley, Steven T

    2012-01-01

    Two polydimethylsiloxane (PDMS) gas purge valves for removing excess gas in general lab-on-a-chip applications are presented in this paper. Both valves are based on a three-layer configuration comprising a top layer for liquid channels, a membrane, and a bottom layer for gas channels. The pneumatic valves work as a normal gateway for fluids when the membrane is bulged down (open state) by vacuum or pushed up (closed state) by pressure. In the closed state, the air in front of a liquid can be removed through a small notch or a permeable PDMS membrane by compressing the liquid. The purge valve with a small notch across its valve seat, termed the surface-tension (ST) valve, can be operated at pressures under 11.5 kPa; the liquid is mainly retained by the surface tension resulting from the hydrophobic channel walls. In contrast, the purge valve with vacuum-filled grooves adjacent to a liquid channel, termed the gas-permeation (GP) valve, can be operated at pressures above 5.5 kPa; based on the principle of gas permeation, the excess air is slowly removed through the vent grooves. Detailed evaluations of both valves in a pneumatically driven microfluidic chip were conducted. Specifically, the purge valves enable users to remove gas and passively align liquids at desired locations without using sensing devices or feedback circuits. Finally, a rapid mixing reaction was successfully performed with the GP valves, showing their practicability when incorporated in a microfluidic chip. (paper)

  8. Kinetic modeling of the purging of activated carbon after short term methyl iodide loading

    Friedrich, V.; Lux, I.

    1991-01-01

    A bimolecular reaction model containing the physico-chemical parameters of adsorption and desorption was developed earlier to describe the kinetics of methyl iodide retention by an activated carbon adsorber. Both the theoretical model and the experimental investigations postulated a constant upstream methyl iodide concentration until maximum break-through. The work reported here extends the theoretical model to the general case in which the concentration of the challenging gas may change in time. The effects of short term loading followed by purging with air, and of an impulse-like increase in upstream gas concentration, have been simulated. The case of short term loading and subsequent purging was studied experimentally to validate the model. The investigations were carried out on non-impregnated activated carbon. A 4 cm deep carbon bed was challenged with methyl iodide for 30, 90, 120 and 180 min and then purged with air, while the downstream methyl iodide concentration was measured continuously. The main characteristics of the observed downstream concentration curves (time and slope of break-through, time and amplitude of maximum values) showed acceptable agreement with those predicted by the model.
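    The time-varying loading-and-purge behaviour described above can be illustrated with a minimal one-cell adsorption/desorption sketch (not the paper's full bimolecular bed model; the rate constants, bed capacity, and step-loading profile below are hypothetical):

```python
import numpy as np

# Minimal one-cell sketch of reversible adsorption/desorption kinetics.
# Hypothetical rate constants; the paper's bimolecular model is more detailed.
k_ads, k_des = 0.5, 0.01       # adsorption / desorption rate constants (1/min)
q_max = 1.0                    # normalized bed capacity
dt, t_end = 0.1, 400.0         # time step and duration (min)

def upstream(t, t_load=90.0):
    """Step loading: constant feed for t_load minutes, then clean-air purge."""
    return 1.0 if t < t_load else 0.0

q = 0.0                        # adsorbed fraction of capacity
downstream = []
times = np.arange(0.0, t_end, dt)
for t in times:
    c_in = upstream(t)
    # adsorption proportional to free capacity, desorption to current loading
    dq = (k_ads * c_in * (q_max - q) - k_des * q) * dt
    q += dq
    # downstream = feed not retained plus desorbed gas (simplified mass balance)
    downstream.append(max(c_in - dq / dt, 0.0))

downstream = np.array(downstream)
print(f"peak downstream during purge: {downstream[times > 90.0].max():.4f}")
```

    After the feed stops at t = 90 min, the desorption term produces the nonzero downstream tail that the purge experiments measure.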

  9. Cold Vacuum Drying (CVD) Facility Vacuum Purge System Chilled Water System Design Description. System 47-4

    IRWIN, J.J.

    2000-01-01

    This system design description (SDD) addresses the Vacuum Purge System Chilled Water (VPSCHW) system. The discussion that follows is limited to the VPSCHW system and its interfaces with associated systems. The reader's attention is directed to Drawings H-1-82162, Cold Vacuum Drying Facility Process Equipment Skid P&ID Vacuum System, and H-1-82224, Cold Vacuum Drying Facility Mechanical Utilities Process Chilled Water P&ID. Figure 1-1 shows the location and equipment arrangement for the VPSCHW system. The VPSCHW system provides chilled water to the Vacuum Purge System (VPS). The chilled water provides the ability to condense water from the multi-canister overpack (MCO) outlet gases during the MCO vacuum and purge cycles. By condensing water from the MCO purge gas, the VPS can assist in drying the contents of the MCO.

  10. Multiple and mixed methods in formative evaluation: Is more better? Reflections from a South African study

    Willem Odendaal

    2016-12-01

    Full Text Available Abstract
    Background: Formative programme evaluations assess intervention implementation processes, and are seen widely as a way of unlocking the ‘black box’ of any programme in order to explore and understand why a programme functions as it does. However, few critical assessments of the methods used in such evaluations are available, and there are especially few that reflect on how well the evaluation achieved its objectives. This paper describes a formative evaluation of a community-based lay health worker programme for TB and HIV/AIDS clients across three low-income communities in South Africa. It assesses each of the methods used in relation to the evaluation objectives, and offers suggestions on ways of optimising the use of multiple, mixed methods within formative evaluations of complex health system interventions.
    Methods: The evaluation’s qualitative methods comprised interviews, focus groups, observations and diary keeping. Quantitative methods included a time-and-motion study of the lay health workers’ scope of practice and a client survey. The authors conceptualised and conducted the evaluation, and through iterative discussions, assessed the methods used and their results.
    Results: Overall, the evaluation highlighted programme issues and insights beyond the reach of traditional single-method evaluations. The strengths of the multiple, mixed methods in this evaluation included a detailed description and nuanced understanding of the programme and its implementation, and triangulation of the perspectives and experiences of clients, lay health workers, and programme managers. However, the use of multiple methods needs to be carefully planned and implemented, as this approach can overstretch the logistic and analytic resources of an evaluation.
    Conclusions: For complex interventions, formative evaluation designs including multiple qualitative and quantitative methods hold distinct advantages over single-method evaluations. However

  11. Multiple Site-Directed and Saturation Mutagenesis by the Patch Cloning Method.

    Taniguchi, Naohiro; Murakami, Hiroshi

    2017-01-01

    Constructing protein-coding genes with desired mutations is a basic step for protein engineering. Herein, we describe a multiple site-directed and saturation mutagenesis method, termed MUPAC. This method has been used to introduce multiple site-directed mutations into the green fluorescent protein gene and into the Moloney murine leukemia virus reverse transcriptase gene. Moreover, this method was also successfully used to introduce randomized codons at five desired positions in the green fluorescent protein gene, and for simple DNA assembly for cloning.

  12. Human Reliability Analysis for In-Tank Precipitation Alignment and Startup of Emergency Purge Ventilation Equipment. Revision 4

    Shapiro, B.J.; Britt, T.E.

    1995-06-01

    This report documents the methodology used for calculating the human error probability for establishing air-based ventilation using emergency purge ventilation equipment on In-Tank Precipitation (ITP) processing tanks 48 and 49 after a failure of the nitrogen purge system following a seismic event. The analyses were performed according to THERP (Technique for Human Error Rate Prediction) as described in NUREG/CR-1278-F.

  13. The Initial Rise Method in the case of multiple trapping levels

    Furetta, C.; Guzman, S.; Cruz Z, E.

    2009-10-01

    The aim of the paper is to extend the well-known Initial Rise Method (IR) to the case of multiple trapping levels. The IR method is applied to the minerals extracted from Nopal herb and Oregano spice because the shape of the thermoluminescent glow curves suggests a trap distribution instead of a single trapping level. (Author)
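    In the initial-rise region a glow curve obeys I(T) ∝ exp(-E/kT), so the activation energy E follows from the slope of ln I versus 1/T. A minimal sketch on a synthetic first-order (Randall-Wilkins) peak; the activation energy, frequency factor, and heating rate below are hypothetical:

```python
import numpy as np

kB = 8.617e-5                    # Boltzmann constant (eV/K)
E_true, s = 1.0, 1e12            # hypothetical activation energy (eV), frequency factor (1/s)
beta = 1.0                       # heating rate (K/s)

# Synthetic first-order glow peak (Randall-Wilkins model)
T = np.linspace(300.0, 450.0, 600)
integ = np.cumsum(s * np.exp(-E_true / (kB * T))) * (T[1] - T[0]) / beta
I = s * np.exp(-E_true / (kB * T)) * np.exp(-integ)

# Initial rise: use only the low-temperature tail (below ~10% of peak height)
mask = (I < 0.10 * I.max()) & (T < T[np.argmax(I)])
slope, _ = np.polyfit(1.0 / T[mask], np.log(I[mask]), 1)
E_fit = -slope * kB              # slope of ln(I) vs 1/T equals -E/kB
print(f"recovered activation energy: {E_fit:.3f} eV")
```

    The fit ignores the trap-depletion term, which is the usual IR approximation; restricting the fit to the lowest-intensity part of the rise keeps the resulting bias small.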

  14. Calculation of U, Ra, Th and K contents in uranium ore by multiple linear regression method

    Lin Chao; Chen Yingqiang; Zhang Qingwen; Tan Fuwen; Peng Guanghui

    1991-01-01

    A multiple linear regression method was used to analyse the γ-ray spectra of uranium ore samples and to calculate the contents of U, Ra, Th, and K. In comparison with the inverse matrix method, its advantage is that no standard samples of pure U, Ra, Th and K are needed for obtaining the response coefficients.
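    A minimal sketch of the regression step: each spectral window's count rate is modelled as a linear combination of the four nuclide contents and solved by ordinary least squares. The response matrix and contents below are illustrative stand-ins, not measured coefficients:

```python
import numpy as np

# Hypothetical response matrix: counts per unit content of U, Ra, Th, K
# (rows: 6 spectral windows, columns: the 4 nuclides).
A = np.array([
    [1.2, 0.4, 0.1, 0.0],
    [0.3, 1.5, 0.2, 0.1],
    [0.1, 0.3, 1.1, 0.0],
    [0.0, 0.2, 0.4, 0.9],
    [0.2, 0.1, 0.3, 0.2],
    [0.1, 0.0, 0.2, 1.3],
])
true_contents = np.array([5.0, 2.0, 8.0, 1.5])   # illustrative contents

rng = np.random.default_rng(0)
counts = A @ true_contents + rng.normal(0.0, 0.01, size=6)  # measured windows

# Multiple linear regression (ordinary least squares) recovers the contents
contents, *_ = np.linalg.lstsq(A, counts, rcond=None)
print(contents)
```

    With more windows than nuclides the system is overdetermined, so least squares averages out counting noise instead of inverting a square matrix exactly.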

  15. The Initial Rise Method in the case of multiple trapping levels

    Furetta, C. [Centro de Investigacion en Ciencia Aplicada y Tecnologia Avanzada, IPN, Av. Legaria 694, Col. Irrigacion, 11500 Mexico D. F. (Mexico); Guzman, S.; Cruz Z, E. [Instituto de Ciencias Nucleares, UNAM, A. P. 70-543, 04510 Mexico D. F. (Mexico)

    2009-10-15

    The aim of the paper is to extend the well-known Initial Rise Method (IR) to the case of multiple trapping levels. The IR method is applied to the minerals extracted from Nopal herb and Oregano spice because the shape of the thermoluminescent glow curves suggests a trap distribution instead of a single trapping level. (Author)

  16. A method for the generation of random multiple Coulomb scattering angles

    Campbell, J.R.

    1995-06-01

    A method for the random generation of spatial angles drawn from non-Gaussian multiple Coulomb scattering distributions is presented. The method employs direct numerical inversion of cumulative probability distributions computed from the universal non-Gaussian angular distributions of Marion and Zimmerman. (author). 12 refs., 3 figs
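    Direct numerical inversion of a cumulative distribution can be sketched as follows; the tabulated density here is a toy core-plus-tail stand-in, not the Marion and Zimmerman distribution:

```python
import numpy as np

# Inverse-transform sampling from a tabulated (non-Gaussian) angular density.
theta = np.linspace(0.0, 0.2, 2001)                       # angle grid (rad)
pdf = theta * np.exp(-(theta / 0.03) ** 2) + 1e-4 * theta  # toy core + tail

# Build and normalize the cumulative distribution on the grid
cdf = np.cumsum(pdf)
cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])

# Invert numerically: map uniform deviates through the tabulated CDF
rng = np.random.default_rng(1)
u = rng.random(100_000)
samples = np.interp(u, cdf, theta)

print(f"mean scattering angle: {samples.mean():.4f} rad")
```

    Because the inversion is a table lookup, any tabulated angular distribution can be sampled this way without a closed-form inverse.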

  17. A versatile method for confirmatory evaluation of the effects of a covariate in multiple models

    Pipper, Christian Bressen; Ritz, Christian; Bisgaard, Hans

    2012-01-01

    Modern epidemiology often requires testing of the effect of a covariate on multiple end points from the same study. However, popular state of the art methods for multiple testing require the tests to be evaluated within the framework of a single model unifying all end points. This severely limits ... to provide a fine-tuned control of the overall type I error in a wide range of epidemiological experiments where in reality no other useful alternative exists. The methodology proposed is applied to a multiple-end-point study of the effect of neonatal bacterial colonization on development of childhood asthma.

  18. Regularization methods for ill-posed problems in multiple Hilbert scales

    Mazzieri, Gisela L; Spies, Ruben D

    2012-01-01

    Several convergence results in Hilbert scales under different source conditions are proved and orders of convergence and optimal orders of convergence are derived. Also, relations between those source conditions are proved. The concept of a multiple Hilbert scale on a product space is introduced, and regularization methods on these scales are defined, both for the case of a single observation and for the case of multiple observations. In the latter case, it is shown how vector-valued regularization functions in these multiple Hilbert scales can be used. In all cases, convergence is proved and orders and optimal orders of convergence are shown. Finally, some potential applications and open problems are discussed. (paper)

  19. Study of the multiple scattering effect in TEBENE using the Monte Carlo method

    Singkarat, Somsorn.

    1990-01-01

    The neutron time-of-flight and energy spectra, from the TEBENE set-up, have been calculated by a computer program using the Monte Carlo method. The neutron multiple scattering within the polyethylene scatterer ring is closely investigated. The results show that multiple scattering has a significant effect on the detected neutron yield. They also indicate that the thickness of the scatterer ring has to be carefully chosen. (author)

  20. A linear multiple balance method for discrete ordinates neutron transport equations

    Park, Chang Je; Cho, Nam Zin

    2000-01-01

    A linear multiple balance method (LMB) is developed to provide more accurate and positive solutions for the discrete ordinates neutron transport equations. In this multiple balance approach, one mesh cell is divided into two subcells with a quadratic approximation of the angular flux distribution. Four multiple balance equations are used to relate the center angular flux with the average angular flux by Simpson's rule. From the analysis of the spatial truncation error, the accuracy of the linear multiple balance scheme is O(Δ⁴) whereas that of diamond differencing is O(Δ²). To accelerate the linear multiple balance method, we also describe a simplified additive angular dependent rebalance factor scheme which combines a modified boundary projection acceleration scheme and the angular dependent rebalance factor acceleration scheme. It is demonstrated, via Fourier analysis of a simple model problem as well as numerical calculations, that the additive angular dependent rebalance factor acceleration scheme is unconditionally stable with spectral radius < 0.2069c (c being the scattering ratio). The numerical results tested so far on slab-geometry discrete ordinates transport problems show that the linear multiple balance solution method is effective and sufficiently efficient.
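    The Simpson's-rule relation between edge, centre, and cell-average values, and its fourth-order accuracy versus the second-order diamond relation, can be checked on a smooth stand-in function (the actual angular flux is replaced here by cos):

```python
import numpy as np

# Simpson's rule relates edge and centre values to the cell average:
#   avg ≈ (f_left + 4*f_centre + f_right) / 6        error O(Δ^4)
# Diamond differencing uses avg ≈ (f_left + f_right) / 2, error O(Δ^2).
f = np.cos            # smooth stand-in for the angular flux along a cell

def errors(h):
    a, b, c = 0.3, 0.3 + h, 0.3 + h / 2
    exact = (np.sin(b) - np.sin(a)) / h              # true cell average of cos
    simpson = (f(a) + 4.0 * f(c) + f(b)) / 6.0
    diamond = (f(a) + f(b)) / 2.0
    return abs(simpson - exact), abs(diamond - exact)

e1_s, e1_d = errors(0.1)
e2_s, e2_d = errors(0.05)
# Halving h should cut the Simpson error ~16x and the diamond error ~4x
print(f"Simpson ratio: {e1_s / e2_s:.1f}, diamond ratio: {e1_d / e2_d:.1f}")
```

    The observed error-reduction ratios (~16 and ~4 on halving the mesh) are exactly the O(Δ⁴) versus O(Δ²) behaviour the truncation-error analysis predicts.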

  1. Determination of 226Ra contamination depth in soil using the multiple photopeaks method

    Haddad, Kh.; Al-Masri, M.S.; Doubal, A.W.

    2014-01-01

    Radioactive contamination presents a diverse range of challenges in many industries. Determination of radioactive contamination depth plays a vital role in the assessment of contaminated sites, because it can be used to estimate the activity content. It is traditionally determined by measuring the activity distribution along the depth. This approach gives accurate results, but it is time consuming, lengthy and costly. In this work, the multiple photopeaks method was developed for 226 Ra contamination depth determination in NORM-contaminated soil using in-situ gamma spectrometry. The developed method is based on a linear correlation between the attenuation ratio of different gamma lines emitted by 214 Bi and the 226 Ra contamination depth. Although this method is approximate, it is much simpler, faster and cheaper than the traditional one, and it can be applied to any multiple-gamma-emitting contaminant. -- Highlights: • The multiple photopeaks method was developed for 226 Ra contamination depth determination using in-situ gamma spectrometry. • The method is based on a linear correlation between the attenuation ratio of 214 Bi gamma lines and the 226 Ra contamination depth. • This method is simpler, faster and cheaper than the traditional one, and it can be applied to any multiple-gamma contaminant.
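    The correlation the method exploits can be sketched as follows: for two gamma lines with soil attenuation coefficients μ₁ and μ₂, a source at depth d scales the detected line ratio by exp(-(μ₁-μ₂)d), so the logarithm of the measured ratio is linear in depth. The coefficients below are illustrative, not measured soil values:

```python
import numpy as np

# Hypothetical linear attenuation coefficients of soil (1/cm) for two
# 214Bi gamma lines (e.g. a low- and a high-energy line); values are
# illustrative only, not tabulated soil data.
mu1, mu2 = 0.130, 0.080

def measured_ratio(depth_cm):
    """Detected line ratio relative to the unattenuated emission ratio."""
    return np.exp(-(mu1 - mu2) * depth_cm)

# Invert the linear relation ln(R) = -(mu1 - mu2) * d for the depth
R = measured_ratio(12.0)                 # simulate a 12 cm burial depth
depth = -np.log(R) / (mu1 - mu2)
print(f"estimated contamination depth: {depth:.1f} cm")
```

    Because only the ratio of two lines from the same nuclide enters, detector efficiency and source activity cancel, which is what makes the in-situ measurement possible.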

  2. Adjusted permutation method for multiple attribute decision making with meta-heuristic solution approaches

    Hossein Karimi

    2011-04-01

    Full Text Available The permutation method of multiple attribute decision making has two significant deficiencies: high computational time and wrong priority output in some problem instances. In this paper, a novel permutation method called the adjusted permutation method (APM) is proposed to compensate for the deficiencies of the conventional permutation method. We propose Tabu search (TS) and particle swarm optimization (PSO) to find suitable solutions at a reasonable computational time for large problem instances. The proposed method is examined using some numerical examples to evaluate its performance. The preliminary results show that both approaches provide competent solutions in relatively reasonable amounts of time, while TS performs better in solving APM.

  3. Dual worth trade-off method and its application for solving multiple criteria decision making problems

    Feng Junwen

    2006-01-01

    To overcome the limitations of the traditional surrogate worth trade-off (SWT) method and solve multiple criteria decision making problems more efficiently and interactively, a new method labeled the dual worth trade-off (DWT) method is proposed. The DWT method dynamically uses the duality theory related to the multiple criteria decision making problem and the analytic hierarchy process technique to obtain the decision maker's preference information and finally find the decision maker's satisfactory compromise solution. Through the interactive process between the analyst and the decision maker, trade-off information is solicited and treated properly, and the representative subset of efficient solutions and the satisfactory solution to the problem are found. The implementation procedure for the DWT method is presented. The effectiveness and applicability of the DWT method are shown by a practical case study in the field of production scheduling.

  4. A Method to Construct Plasma with Nonlinear Density Enhancement Effect in Multiple Internal Inductively Coupled Plasmas

    Chen Zhipeng; Li Hong; Liu Qiuyan; Luo Chen; Xie Jinlin; Liu Wandong

    2011-01-01

    A method is proposed to build up plasma based on a nonlinear enhancement phenomenon of the plasma density when discharging with multiple internal antennas simultaneously. It turns out that the plasma density under multiple sources is higher than the linear summation of the densities under each source. This effect helps to reduce the fast exponential decay of plasma density in a single internal inductively coupled plasma source and to generate a larger-area plasma with multiple internal inductively coupled plasma sources. After a careful experimental study of the balance between the enhancement and the decay of plasma density, a plasma is built up by four sources, which proves the feasibility of this method. According to the method, more sources and a more intensive enhancement effect can be employed to further build up a high-density, large-area plasma for different applications. (low temperature plasma)

  5. Use of ultrasonic array method for positioning multiple partial discharge sources in transformer oil.

    Xie, Qing; Tao, Junhan; Wang, Yongqiang; Geng, Jianghai; Cheng, Shuyi; Lü, Fangcheng

    2014-08-01

    Fast and accurate positioning of partial discharge (PD) sources in transformer oil is very important for the safe, stable operation of power systems because it allows timely elimination of insulation faults. There is usually more than one PD source once an insulation fault occurs in the transformer oil. This study, which has both theoretical and practical significance, proposes a method of identifying multiple PD sources in transformer oil. The method combines the two-sided correlation transformation algorithm for broadband signal focusing with the modified Gerschgorin disk estimator. The multiple signal classification method is used to determine the directions of arrival of signals from multiple PD sources. The ultrasonic array positioning method is based on multi-platform direction finding and global optimization searching. Both a 4 × 4 square planar ultrasonic sensor array and an ultrasonic array detection platform were built to test the method of identifying and positioning multiple PD sources. The obtained results verify the validity and the engineering practicability of this method.
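    The direction-of-arrival step (the multiple-signal-classification, or MUSIC, family of estimators) can be sketched for a hypothetical uniform linear array with two simulated sources; the geometry, SNR, and search grid are assumptions, not the paper's 4 × 4 planar setup:

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 8, 400                      # sensors and snapshots (hypothetical array)
d_over_lambda = 0.5                # element spacing in wavelengths
true_doas = np.deg2rad([-20.0, 35.0])

def steering(theta):
    """Steering vector of a uniform linear array for arrival angle theta."""
    m = np.arange(M)
    return np.exp(-2j * np.pi * d_over_lambda * m * np.sin(theta))

# Simulate two uncorrelated sources in white noise
A = np.column_stack([steering(t) for t in true_doas])
S = rng.normal(size=(2, N)) + 1j * rng.normal(size=(2, N))
X = A @ S + 0.1 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))

# Noise subspace from the sample covariance (eigh sorts eigenvalues ascending)
Rxx = X @ X.conj().T / N
_, eigvecs = np.linalg.eigh(Rxx)
En = eigvecs[:, :M - 2]            # M-2 smallest eigenvalues -> noise subspace

# Spatial spectrum peaks where the steering vector is orthogonal to En
grid = np.deg2rad(np.arange(-90.0, 90.0, 0.5))
spectrum = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2
                     for t in grid])
i1 = np.argmax(spectrum)
masked = np.where(np.abs(grid - grid[i1]) > np.deg2rad(5.0), spectrum, 0.0)
i2 = np.argmax(masked)             # second peak, outside the first peak's lobe
doas_est = np.sort(np.rad2deg([grid[i1], grid[i2]]))
print(doas_est)
```

    With a planar array as in the paper, the same noise-subspace scan runs over a two-dimensional (azimuth, elevation) grid instead of a single angle.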

  6. Method of Fusion Diagnosis for Dam Service Status Based on Joint Distribution Function of Multiple Points

    Zhenxiang Jiang

    2016-01-01

    Full Text Available The traditional methods of diagnosing dam service status are suited to a single measuring point. These methods reflect only the local status of dams without merging multisource data effectively, and are therefore not suitable for diagnosing overall service status. This study proposes a new multiple-point method to diagnose dam service status based on a joint distribution function. The function, which incorporates the monitoring data of multiple points, can be established with the t-copula function. The possibility, an important fused value for different measuring combinations, can then be calculated, and the corresponding diagnosing criterion is established with typical small probability theory. An engineering case study indicates that the fusion diagnosis method can be conducted in real time and that abnormal points can be detected, thereby providing a new early warning method for engineering safety.
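    The joint-probability idea can be sketched by Monte Carlo: sample a correlated multivariate t distribution (the elliptical core of a t-copula) and estimate the probability that all measuring points exceed their thresholds simultaneously. The correlation matrix, degrees of freedom, and 95% thresholds below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical correlation among three dam measuring points, and t dof
corr = np.array([[1.0, 0.7, 0.5],
                 [0.7, 1.0, 0.6],
                 [0.5, 0.6, 1.0]])
nu, n = 5, 200_000

# Multivariate t draw: correlated normals divided by a scaled chi-square root
L = np.linalg.cholesky(corr)
z = rng.standard_normal((n, 3)) @ L.T
w = rng.chisquare(nu, size=(n, 1)) / nu
t_samples = z / np.sqrt(w)

# Joint small-probability event: every point exceeds its own 95% threshold
thresh = np.quantile(t_samples, 0.95, axis=0)
p_joint = np.mean((t_samples > thresh).all(axis=1))
print(f"joint exceedance probability: {p_joint:.4f}")
```

    With positive dependence the joint probability is far above the independence value 0.05³ ≈ 1.3e-4, which is why a fused multi-point criterion differs from applying single-point criteria separately.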

  7. An implementation of multiple multipole method in the analyse of elliptical objects to enhance backscattering light

    Jalali, T.

    2015-07-01

    In this paper, we present the modelling of dielectric elliptical shapes with respect to a highly confined power distribution in the resulting nanojet, which has been parameterized according to the beam waist and its beam divergence. The method uses spherical Bessel functions as basis functions, adapted to the standard multiple multipole method. It can handle elliptically shaped particles of varying size and refractive index, which have been studied under plane wave illumination with the two- and three-dimensional multiple multipole method. Because of its fast and good convergence, the results obtained from the simulation are highly accurate and reliable, and the simulation time is less than a minute in both two and three dimensions. Therefore, the proposed method is found to be computationally efficient, fast and accurate.

  8. The initial rise method extended to multiple trapping levels in thermoluminescent materials

    Furetta, C. [CICATA-Legaria, Instituto Politecnico Nacional, 11500 Mexico D.F. (Mexico); Guzman, S. [Instituto de Ciencias Nucleares, Universidad Nacional Autonoma de Mexico, A.P. 70-543, 04510 Mexico D.F. (Mexico); Ruiz, B. [Instituto de Ciencias Nucleares, Universidad Nacional Autonoma de Mexico, A.P. 70-543, 04510 Mexico D.F. (Mexico); Departamento de Agricultura y Ganaderia, Universidad de Sonora, A.P. 305, 83190 Hermosillo, Sonora (Mexico); Cruz-Zaragoza, E., E-mail: ecruz@nucleares.unam.m [Instituto de Ciencias Nucleares, Universidad Nacional Autonoma de Mexico, A.P. 70-543, 04510 Mexico D.F. (Mexico)

    2011-02-15

    The well-known Initial Rise Method (IR) is commonly used to determine the activation energy when only one glow peak is present and analysed in the phosphor materials. However, when the glow peak is more complex, a wide peak and some shoulders appear in the structure. The application of the Initial Rise Method is then not valid, because multiple trapping levels must be considered and the thermoluminescent analysis becomes difficult to perform. This paper shows the case of a complex glow curve structure as an example and shows that the calculation is also possible using the IR method. The aim of the paper is to extend the well-known Initial Rise Method (IR) to the case of multiple trapping levels. The IR method is applied to minerals extracted from Nopal cactus and Oregano spices because the shape of the thermoluminescent glow curve suggests a trap distribution instead of a single trapping level.

  9. The initial rise method extended to multiple trapping levels in thermoluminescent materials.

    Furetta, C; Guzmán, S; Ruiz, B; Cruz-Zaragoza, E

    2011-02-01

    The well-known Initial Rise Method (IR) is commonly used to determine the activation energy when only one glow peak is present and analysed in the phosphor materials. However, when the glow peak is more complex, a wide peak and some shoulders appear in the structure. The application of the Initial Rise Method is then not valid, because multiple trapping levels must be considered and the thermoluminescent analysis becomes difficult to perform. This paper shows the case of a complex glow curve structure as an example and shows that the calculation is also possible using the IR method. The aim of the paper is to extend the well-known Initial Rise Method (IR) to the case of multiple trapping levels. The IR method is applied to minerals extracted from Nopal cactus and Oregano spices because the shape of the thermoluminescent glow curve suggests a trap distribution instead of a single trapping level. Copyright © 2010 Elsevier Ltd. All rights reserved.

  10. The initial rise method extended to multiple trapping levels in thermoluminescent materials

    Furetta, C.; Guzman, S.; Ruiz, B.; Cruz-Zaragoza, E.

    2011-01-01

    The well-known Initial Rise Method (IR) is commonly used to determine the activation energy when only one glow peak is present and analysed in the phosphor materials. However, when the glow peak is more complex, a wide peak and some shoulders appear in the structure. The application of the Initial Rise Method is then not valid, because multiple trapping levels must be considered and the thermoluminescent analysis becomes difficult to perform. This paper shows the case of a complex glow curve structure as an example and shows that the calculation is also possible using the IR method. The aim of the paper is to extend the well-known Initial Rise Method (IR) to the case of multiple trapping levels. The IR method is applied to minerals extracted from Nopal cactus and Oregano spices because the shape of the thermoluminescent glow curve suggests a trap distribution instead of a single trapping level.

  11. [A factor analysis method for contingency table data with unlimited multiple choice questions].

    Toyoda, Hideki; Haiden, Reina; Kubo, Saori; Ikehara, Kazuya; Isobe, Yurie

    2016-02-01

    The purpose of this study is to propose a factor analysis method for analyzing contingency tables developed from the data of unlimited multiple-choice questions. This method assumes that the element of each cell of the contingency table has a binomial distribution, and a factor analysis model is applied to the logit of the selection probability. A scree plot and WAIC are used to decide the number of factors, and the standardized residual, that is, the standardized difference between the sample proportion and the predicted proportion, is used to select items. The proposed method was applied to real product-impression research data on advertised chips and energy drinks. The results of the analysis showed that this method can be used in conjunction with the conventional factor analysis model and that the extracted factors were fully interpretable, suggesting the usefulness of the proposed method in psychological studies using unlimited multiple-choice questions.

  12. VIKOR Method for Interval Neutrosophic Multiple Attribute Group Decision-Making

    Yu-Han Huang

    2017-11-01

    Full Text Available In this paper, we will extend the VIKOR (VIsekriterijumska optimizacija i KOmpromisno Resenje) method to multiple attribute group decision-making (MAGDM) with interval neutrosophic numbers (INNs). Firstly, the basic concepts of INNs are briefly presented. The method first aggregates all individual decision-makers' assessment information based on an interval neutrosophic weighted averaging (INWA) operator, and then employs the extended classical VIKOR method to solve MAGDM problems with INNs. The validity and stability of this method are verified by example analysis and sensitivity analysis, and its superiority is illustrated by a comparison with the existing methods.
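    For orientation, the classical crisp VIKOR steps (group utility S, individual regret R, compromise index Q) can be sketched as below; the decision matrix and weights are hypothetical, and the paper's interval neutrosophic extension replaces these crisp numbers with INNs:

```python
import numpy as np

# Classical (crisp) VIKOR sketch: 4 alternatives x 3 benefit criteria.
X = np.array([[7.0, 8.0, 6.0],
              [8.0, 6.5, 7.0],
              [6.0, 9.0, 8.0],
              [7.5, 7.0, 7.5]])
w = np.array([0.4, 0.35, 0.25])    # criterion weights (hypothetical)
v = 0.5                            # weight of the group-utility strategy

f_best, f_worst = X.max(axis=0), X.min(axis=0)
norm = (f_best - X) / (f_best - f_worst)     # normalized distance to the ideal
S = (w * norm).sum(axis=1)                   # group utility per alternative
R = (w * norm).max(axis=1)                   # individual (worst-criterion) regret
Q = (v * (S - S.min()) / (S.max() - S.min())
     + (1 - v) * (R - R.min()) / (R.max() - R.min()))
ranking = np.argsort(Q)                      # smaller Q = better compromise
print("compromise ranking (best first):", ranking)
```

    The parameter v trades off majority group utility against the worst single-criterion regret; v = 0.5 is the usual consensus setting.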

  13. Methods of fast, multiple-point in vivo T1 determination

    Zhang, Y.; Spigarelli, M.; Fencil, L.E.; Yeung, H.N.

    1989-01-01

    Two methods of rapid, multiple-point determination of T1 in vivo have been evaluated with a phantom consisting of vials of gel in different Mn++ concentrations. The first method was an inversion-recovery-on-the-fly technique, and the second method used a variable-tip-angle (α) progressive saturation with two subsequences of different repetition times. In the first method, 1/T1 was evaluated by an exponential fit. In the second method, 1/T1 was obtained iteratively with a linear fit and then readjusted together with α to a model equation until self-consistency was reached.
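    The exponential-fit step of the first method can be sketched on synthetic inversion-recovery data, S(TI) = S0·(1 - 2·exp(-TI/T1)) for signed (phase-corrected) signals; eliminating S0 in closed form reduces the fit to a one-dimensional search over T1. All parameter values below are hypothetical:

```python
import numpy as np

# Inversion-recovery model on signed data (hypothetical phantom values)
T1_true, S0_true = 0.9, 100.0
TI = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.6, 3.2])   # inversion times (s)
rng = np.random.default_rng(4)
y = S0_true * (1.0 - 2.0 * np.exp(-TI / T1_true)) + rng.normal(0.0, 0.5, TI.size)

# 1-D search over T1: for each candidate, S0 has a closed-form least-squares fit
T1_grid = np.linspace(0.1, 3.0, 2901)
best = (np.inf, None, None)
for T1 in T1_grid:
    b = 1.0 - 2.0 * np.exp(-TI / T1)     # model basis for this T1 candidate
    S0 = (b @ y) / (b @ b)               # optimal amplitude, closed form
    sse = np.sum((y - S0 * b) ** 2)
    if sse < best[0]:
        best = (sse, T1, S0)

_, T1_fit, S0_fit = best
print(f"fitted T1: {T1_fit:.3f} s, S0: {S0_fit:.1f}")
```

    Magnitude images would require the absolute value of the model instead; the signed form is used here to keep the fit smooth.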

  14. The impact of secure messaging on workflow in primary care: Results of a multiple-case, multiple-method study.

    Hoonakker, Peter L T; Carayon, Pascale; Cartmill, Randi S

    2017-04-01

    Secure messaging is a relatively new addition to health information technology (IT). Several studies have examined the impact of secure messaging on (clinical) outcomes, but very few have examined its impact on workflow in primary care clinics. In this study we examined the impact of secure messaging on the workflow of clinicians, staff and patients. We used a multiple case study design with multiple data collection methods (observation, interviews and survey). Results show that secure messaging has the potential to improve communication and information flow and the organization of work in primary care clinics, partly due to the possibility of asynchronous communication. However, secure messaging can also have a negative effect on communication and increase workload, especially if patients send messages that are not appropriate for the secure messaging medium (for example, messages that are too long, complex, ambiguous, or inappropriate). Results show that clinicians are ambivalent about secure messaging: it can add to their workload, especially if there is high message volume, and currently they are not compensated for these activities. Staff are, especially compared to clinicians, relatively positive about secure messaging, and patients are overall very satisfied with it. Finally, clinicians, staff and patients think that secure messaging can have a positive effect on quality of care and patient safety. Secure messaging is a tool that has the potential to improve communication and information flow; however, its potential to improve workflow depends on the way it is implemented and used. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Comparison between Two Assessment Methods; Modified Essay Questions and Multiple Choice Questions

    Assadi S.N.* MD

    2015-09-01

    Full Text Available Aims Using the best assessment methods is an important factor in the educational development of health students. Modified essay questions and multiple choice questions are two prevalent methods of assessing students. The aim of this study was to compare the modified essay questions and multiple choice questions methods in occupational health engineering and work laws courses. Materials & Methods This semi-experimental study was performed during 2013 to 2014 on occupational health students of Mashhad University of Medical Sciences. The class of the occupational health and work laws course in 2013 was considered as group A and the class of 2014 as group B. Each group had 50 students. The group A students were assessed by the modified essay questions method and the group B students by the multiple choice questions method. Data were analyzed in SPSS 16 software by paired T test and odds ratio. Findings The mean grade of the occupational health and work laws course was 18.68±0.91 in group A (modified essay questions) and 18.78±0.86 in group B (multiple choice questions), which was not significantly different (t=-0.41; p=0.684). The mean grades of the chemical chapter (p<0.001) in occupational health engineering, and of the harmful work law (p<0.001) and other (p=0.015) chapters in work laws, were significantly different between the two groups. Conclusion The modified essay questions and multiple choice questions methods have nearly the same student-assessing value for the occupational health engineering and work laws courses.

  16. Simultaneous determination of eight common odors in natural water body using automatic purge and trap coupled to gas chromatography with mass spectrometry.

    Deng, Xuwei; Liang, Gaodao; Chen, Jun; Qi, Min; Xie, Ping

    2011-06-17

    Production and fate of taste and odor (T&O) compounds in natural waters are a pressing environmental issue. Simultaneous determination of these complex compounds (covering a wide range of boiling points) has been difficult. A simple and sensitive method for the determination of eight malodorous products of cyanobacterial blooms was developed using automatic purge and trap (P&T) coupled with gas chromatography-mass spectrometry (GC-MS). This extraction and concentration technique is solvent-free. Dimethylsulfide (DMS), dimethyltrisulfide (DMTS), 2-isopropyl-3-methoxypyrazine (IPMP), 2-isobutyl-3-methoxypyrazine (IBMP), 2-methylisoborneol (MIB), β-cyclocitral, geosmin (GSM) and β-ionone were separated within 15.3 min. P&T uses trap #07 and high-purity nitrogen purge gas. The calibration curves of the eight odors show good linearity in the range of 1-500 ng/L, with correlation coefficients above 0.999 (levels=8) and residuals ranging from approximately 83% to 124%. The limits of detection (LOD) (S/N=3) are all below 1.5 ng/L; that of GSM is even lower, at 0.08 ng/L. The relative standard deviations (RSD) are between 3.38% and 8.59% (n=5), and recoveries of the analytes from water samples of a eutrophic lake are between 80.54% and 114.91%. This method could be widely employed for monitoring these eight odors in natural waters. Copyright © 2011 Elsevier B.V. All rights reserved.

  17. MULTIPLE CRITERIA METHODS WITH FOCUS ON ANALYTIC HIERARCHY PROCESS AND GROUP DECISION MAKING

    Lidija Zadnik-Stirn

    2010-12-01

    Full Text Available Managing natural resources is a group multiple criteria decision making problem. In this paper the analytic hierarchy process is the chosen method for handling natural resource problems. The single decision maker problem is discussed, and three methods for deriving the priority vector are presented: the eigenvector method, the data envelopment analysis method, and the logarithmic least squares method. Further, the group analytic hierarchy process is discussed, and six methods for the aggregation of individual judgments or priorities are compared: the weighted arithmetic mean method, the weighted geometric mean method, and four methods based on data envelopment analysis. A case study on land use in Slovenia is presented. The conclusions review consistency, sensitivity analyses, and some future directions of research.
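    The eigenvector method named above derives the AHP priority vector as the principal eigenvector of a pairwise comparison matrix. A minimal power-iteration sketch on a hypothetical 3x3 Saaty-scale matrix (not the Slovenian land-use data):

```python
def ahp_priorities(M, iters=100):
    """Principal-eigenvector priorities of a pairwise comparison matrix,
    computed by power iteration and normalized to sum to 1."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    return w

# Hypothetical comparisons of three land-use criteria on Saaty's 1-9 scale;
# M[i][j] states how much more important criterion i is than criterion j.
M = [[1,   3,   5],
     [1/3, 1,   3],
     [1/5, 1/3, 1]]
w = ahp_priorities(M)
```

    For a perfectly consistent matrix the result matches the row geometric means; for nearly consistent judgments, as here, the two agree closely (weights roughly 0.64, 0.26, 0.10).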

  18. A multiple-scale power series method for solving nonlinear ordinary differential equations

    Chein-Shan Liu

    2016-02-01

    Full Text Available The power series solution is a cheap and effective method to solve nonlinear problems, like the Duffing-van der Pol oscillator, the Volterra population model and nonlinear boundary value problems. A novel power series method is developed by considering multiple scales $R_k$ in the power term $(t/R_k)^k$; the scales are derived explicitly to reduce the ill-conditioned behavior in the data interpolation. In this method, multiplying a huge value by a tiny value is avoided, which decreases the numerical instability that is the main cause of failure of the conventional power series method. The multiple scales derived from an integral can be used in the power series expansion, which provides very accurate numerical solutions of the problems considered in this paper.
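    The point about avoiding a huge value times a tiny value can be seen numerically. In this sketch the scale R_k is simply taken as the endpoint of the interval of interest, which is an illustrative assumption rather than the paper's integral-based derivation:

```python
# For t up to 10 and k = 30, the raw monomial t**k spans ~30 orders of
# magnitude, so its tiny series coefficient must cancel a huge power.
# The scaled term (t / R_k)**k stays in [0, 1] over the whole interval.
t, k = 10.0, 30
R_k = 10.0              # scale chosen as the domain endpoint (assumption)
raw = t ** k            # ~1e30: huge value paired with a tiny coefficient
scaled = (t / R_k) ** k # 1.0 at the endpoint: well conditioned
```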

  19. An Extended TOPSIS Method for the Multiple Attribute Decision Making Problems Based on Interval Neutrosophic Set

    Pingping Chi

    2013-03-01

    Full Text Available The interval neutrosophic set (INS) makes it easier to express incomplete, indeterminate and inconsistent information, and TOPSIS is one of the most commonly used and effective methods for multiple attribute decision making; in general, however, it can only process attribute values given as crisp numbers. In this paper, we extend TOPSIS to INSs and, with respect to multiple attribute decision making problems in which the attribute weights are unknown and the attribute values take the form of INSs, propose an extended TOPSIS method. Firstly, the definition of an INS and its operational laws are given, and the distance between INSs is defined. Then, the attribute weights are determined based on the maximizing deviation method, and an extended TOPSIS method is developed to rank the alternatives. Finally, an illustrative example is given to verify the developed approach and to demonstrate its practicality and effectiveness.
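    For reference, classical crisp TOPSIS, which the paper extends to interval neutrosophic sets, proceeds by normalizing, weighting, locating the ideal and anti-ideal points, and ranking by relative closeness. A minimal sketch with hypothetical data:

```python
import math

def topsis(X, w, benefit):
    """Classical crisp TOPSIS: rank alternatives by relative closeness to
    the ideal solution. X rows = alternatives, columns = criteria."""
    m, n = len(X), len(X[0])
    # Vector-normalize each column, then apply criterion weights.
    norms = [math.sqrt(sum(X[i][j] ** 2 for i in range(m))) for j in range(n)]
    V = [[w[j] * X[i][j] / norms[j] for j in range(n)] for i in range(m)]
    ideal = [max(V[i][j] for i in range(m)) if benefit[j]
             else min(V[i][j] for i in range(m)) for j in range(n)]
    worst = [min(V[i][j] for i in range(m)) if benefit[j]
             else max(V[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        dp = math.sqrt(sum((V[i][j] - ideal[j]) ** 2 for j in range(n)))
        dm = math.sqrt(sum((V[i][j] - worst[j]) ** 2 for j in range(n)))
        scores.append(dm / (dp + dm))  # closeness coefficient in [0, 1]
    return scores

# Hypothetical decision matrix: 3 alternatives x 2 benefit criteria
X = [[7, 9], [8, 7], [9, 6]]
scores = topsis(X, w=[0.5, 0.5], benefit=[True, True])
```

    The interval-neutrosophic extension replaces the crisp entries and Euclidean distance with INS values and an INS distance measure, but the ranking logic is the same.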

  20. Novel multiple criteria decision making methods based on bipolar neutrosophic sets and bipolar neutrosophic graphs

    Muhammad, Akram; Musavarah, Sarwar

    2016-01-01

    In this research study, we introduce the concept of bipolar neutrosophic graphs. We present the dominating and independent sets of bipolar neutrosophic graphs. We describe novel multiple criteria decision making methods based on bipolar neutrosophic sets and bipolar neutrosophic graphs. We also develop an algorithm for computing domination in bipolar neutrosophic graphs.

  1. Magic Finger Teaching Method in Learning Multiplication Facts among Deaf Students

    Thai, Liong; Yasin, Mohd. Hanafi Mohd

    2016-01-01

    Deaf students face problems in mastering multiplication facts. This study aims to identify the effectiveness of the Magic Finger Teaching Method (MFTM) and students' perception towards MFTM. The research employs a quasi-experimental, non-equivalent pre-test and post-test control group design. Pre-test, post-test and questionnaires were used. As…

  2. Comparison of Methods to Trace Multiple Subskills: Is LR-DBN Best?

    Xu, Yanbo; Mostow, Jack

    2012-01-01

    A long-standing challenge for knowledge tracing is how to update estimates of multiple subskills that underlie a single observable step. We characterize approaches to this problem by how they model knowledge tracing, fit its parameters, predict performance, and update subskill estimates. Previous methods allocated blame or credit among subskills…

  3. A Simple and Convenient Method of Multiple Linear Regression to Calculate Iodine Molecular Constants

    Cooper, Paul D.

    2010-01-01

    A new procedure using a student-friendly least-squares multiple linear-regression technique utilizing a function within Microsoft Excel is described that enables students to calculate molecular constants from the vibronic spectrum of iodine. This method is advantageous pedagogically as it calculates molecular constants for ground and excited…
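    The least-squares technique described above (implemented in Excel via LINEST in the original) can be reproduced from the normal equations. A generic quadratic-fit sketch with invented data, not actual iodine band positions; fitting band position against quantum number in this form is the usual route to vibrational constants:

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with
    partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, 3):
            f = M[r][c] / M[c][c]
            for k in range(c, 4):
                M[r][k] -= f * M[c][k]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][k] * x[k] for k in range(r + 1, 3))) / M[r][r]
    return x

def quadfit(xs, ys):
    """Least-squares fit of y = c0 + c1*x + c2*x^2 via the normal equations."""
    S = lambda p: sum(x ** p for x in xs)
    T = lambda p: sum((x ** p) * y for x, y in zip(xs, ys))
    A = [[S(0), S(1), S(2)],
         [S(1), S(2), S(3)],
         [S(2), S(3), S(4)]]
    return solve3(A, [T(0), T(1), T(2)])

# Hypothetical, exactly quadratic data: the fit recovers (5, 3, -0.25)
xs = [0, 1, 2, 3, 4, 5]
ys = [5 + 3 * x - 0.25 * x * x for x in xs]
c0, c1, c2 = quadfit(xs, ys)
```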

  4. Non-Abelian Kubo formula and the multiple time-scale method

    Zhang, X.; Li, J.

    1996-01-01

    The non-Abelian Kubo formula is derived from the kinetic theory. That expression is compared with the one obtained using the eikonal for a Chern–Simons theory. The multiple time-scale method is used to study the non-Abelian Kubo formula, and the damping rate for longitudinal color waves is computed. copyright 1996 Academic Press, Inc

  5. Creep compliance and percent recovery of Oklahoma certified binder using the multiple stress recovery (MSCR) method.

    2015-04-01

    A laboratory study was conducted to develop guidelines for the Multiple Stress Creep Recovery (MSCR) test method for local conditions prevailing in Oklahoma. The study consisted of commonly used binders in Oklahoma, namely PG 64-22, PG 70-28, and...

  6. Interconnection blocks: a method for providing reusable, rapid, multiple, aligned and planar microfluidic interconnections

    Sabourin, D; Snakenborg, D; Dufva, M

    2009-01-01

    In this paper a method is presented for creating 'interconnection blocks' that are re-usable and provide multiple, aligned and planar microfluidic interconnections. Interconnection blocks made from polydimethylsiloxane allow rapid testing of microfluidic chips and unobstructed microfluidic observation. The interconnection block method is scalable, flexible and supports high interconnection density. The average pressure limit of the interconnection block was near 5.5 bar and all individual results were well above the 2 bar threshold considered applicable to most microfluidic applications

  7. Clustering Multiple Sclerosis Subgroups with Multifractal Methods and Self-Organizing Map Algorithm

    Karaca, Yeliz; Cattani, Carlo

    Magnetic resonance imaging (MRI) is the most sensitive method to detect chronic nervous system diseases such as multiple sclerosis (MS). In this paper, Brownian motion Hölder regularity functions (polynomial, periodic (sine) and exponential) for 2D images were applied as multifractal methods to MR brain images, aiming to easily identify distressed regions in MS patients. With these regions, we propose an MS classification based on the multifractal method using the Self-Organizing Map (SOM) algorithm. Thus, we obtained a cluster analysis by identifying pixels of distressed regions in MR images through multifractal methods and by diagnosing subgroups of MS patients through artificial neural networks.
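    A Self-Organizing Map of the kind used above can be sketched minimally for 1-D features; the invented scalar "feature" values below stand in for the paper's multifractal image descriptors, and the learning-rate/neighbourhood schedules are illustrative assumptions:

```python
import math
import random

def train_som(data, n_units=4, epochs=100, seed=0):
    """Minimal 1-D self-organizing map for scalar features: the best-matching
    unit and its neighbours move toward each presented input."""
    rng = random.Random(seed)
    w = [rng.random() for _ in range(n_units)]
    for e in range(epochs):
        lr = 0.5 * (1 - e / epochs)               # decaying learning rate
        sigma = max(1.0 * (1 - e / epochs), 0.2)  # shrinking neighbourhood
        for x in data:
            bmu = min(range(n_units), key=lambda i: abs(w[i] - x))
            for i in range(n_units):
                h = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))
                w[i] += lr * h * (x - w[i])
    return w

# Hypothetical 1-D feature values forming two clusters (around 0.1 and 0.8)
data = [0.1, 0.12, 0.09, 0.11, 0.8, 0.82, 0.79, 0.81]
weights = train_som(data)
```

    After training, individual units settle near the cluster centres, so assigning each sample to its nearest unit yields the cluster labels.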

  8. A Multiple Criteria Decision Making Method Based on Relative Value Distances

    Shyur Huan-jyh

    2015-12-01

    Full Text Available This paper proposes a new multiple criteria decision-making method called ERVD (election based on relative value distances). An S-shaped value function is adopted to replace the expected utility function to describe the risk-averse and risk-seeking behavior of decision makers. Comparisons and experiments contrasting with the TOPSIS (Technique for Order Preference by Similarity to the Ideal Solution) method are carried out to verify the feasibility of using the proposed method to represent decision makers’ preferences in the decision-making process. Our experimental results show that the proposed approach is an appropriate and effective MCDM method.
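    The S-shaped value function adopted by ERVD can be illustrated with a prospect-theory-style form; the exponents and loss-aversion coefficient below are the usual Tversky-Kahneman illustrative values, not parameters taken from this paper:

```python
def s_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory style S-shaped value function: concave for gains
    (risk-averse), convex and steeper for losses (risk-seeking, with
    loss aversion factor lam). Parameter values are illustrative."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)
```

    The asymmetry is the point: a loss of a given size is valued more strongly than an equal gain, which expected utility with a single curve cannot capture.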

  9. Test procedure for calibration, grooming and alignment of the LDUA Purge Air Supply System

    Potter, J.D.

    1995-01-01

    The Light Duty Utility Arm (LDUA) is a remotely operated manipulator used to enter underground waste tanks through one of the tank risers. National Electrical Code requirements mandate that the in-tank portions of the LDUA be maintained at a positive pressure for entrance into a flammable atmosphere. The LDUA Purge Air Supply System (PASS) is a small, portable air compressor which provides a constant low flow of instrument-grade air for this purpose. This procedure is used to assure that the instrumentation and equipment comprising the PASS are properly adjusted in order to achieve their intended functions successfully.

  10. Cryogenic system with GM cryocooler for krypton, xenon separation from hydrogen-helium purge gas

    Chu, X. X.; Zhang, D. X.; Qian, Y.; Liu, W. [Shanghai Institute of Applied Physics, Chinese Academy of Sciences, Shanghai, 201800 (China); Zhang, M. M.; Xu, D. [Technical Institute of Physics and Chemistry, Chinese Academy of Sciences, Beijing, 100190 (China)

    2014-01-29

    In the thorium molten salt reactor (TMSR), fission products such as krypton, xenon and tritium will be produced continuously in the process of the nuclear fission reaction. A cryogenic system with a two-stage GM cryocooler was designed to separate Kr, Xe, and H2 from the helium purge gas. The temperatures of the two-stage heat exchanger condensation tanks were maintained at about 38 K and 4.5 K, respectively. The main fluid parameters of heat transfer were confirmed, and the structure of the heat exchanger equipment and cold box was designed. The designed concentrations of Kr, Xe and H2 in the helium recycle gas after cryogenic separation are less than 1 ppb.

  11. Modified multiple time scale method for solving strongly nonlinear damped forced vibration systems

    Razzak, M. A.; Alam, M. Z.; Sharif, M. N.

    2018-03-01

    In this paper, a modified multiple time scale (MTS) method is employed to solve strongly nonlinear forced vibration systems. Only the first-order approximation is considered in order to avoid complexity. The formulation and the determination of the solution procedure are easy and straightforward. The classical multiple time scale (MS) method and the multiple scales Lindstedt-Poincare method (MSLP) do not give the desired results for strongly damped forced vibration systems. The main aim of this paper is to remove these limitations. Two examples are considered to illustrate the effectiveness and convenience of the present procedure. The approximate external frequencies and the corresponding approximate solutions are determined by the present method. The results coincide well with the corresponding numerical solution (considered to be exact) and are better than other existing results. For weak nonlinearities with a weak damping effect, the absolute relative error (first-order approximate external frequency) in this paper is only 0.07% when the amplitude A = 1.5, while the relative error given by the MSLP method is 28.81%. Furthermore, for strong nonlinearities with a strong damping effect, the absolute relative error found in this article is only 0.02%, whereas the relative error obtained by the MSLP method is 24.18%. Therefore, the present method is not only valid for weakly nonlinear damped forced systems, but also gives better results for strongly nonlinear systems with both small and strong damping effects.

  12. Use of multiple methods to determine factors affecting quality of care of patients with diabetes.

    Khunti, K

    1999-10-01

    The process of care of patients with diabetes is complex; however, GPs are playing a greater role in its management. Despite the research evidence, the quality of care of patients with diabetes is variable. In order to improve care, information is required on the obstacles faced by practices in improving care. Qualitative and quantitative methods can be used for formation of hypotheses and the development of survey procedures. However, to date few examples exist in general practice research on the use of multiple methods using both quantitative and qualitative techniques for hypothesis generation. We aimed to determine information on all factors that may be associated with delivery of care to patients with diabetes. Factors for consideration on delivery of diabetes care were generated by multiple qualitative methods including brainstorming with health professionals and patients, a focus group and interviews with key informants which included GPs and practice nurses. Audit data showing variations in care of patients with diabetes were used to stimulate the brainstorming session. A systematic literature search focusing on quality of care of patients with diabetes in primary care was also conducted. Fifty-four potential factors were identified by multiple methods. Twenty (37.0%) were practice-related factors, 14 (25.9%) were patient-related factors and 20 (37.0%) were organizational factors. A combination of brainstorming and the literature review identified 51 (94.4%) factors. Patients did not identify factors in addition to those identified by other methods. The complexity of delivery of care to patients with diabetes is reflected in the large number of potential factors identified in this study. This study shows the feasibility of using multiple methods for hypothesis generation. Each evaluation method provided unique data which could not otherwise be easily obtained. 
This study highlights a way of combining various traditional methods in an attempt to overcome the

  13. Detection-Discrimination Method for Multiple Repeater False Targets Based on Radar Polarization Echoes

    Z. W. ZONG

    2014-04-01

    Full Text Available Multiple repeat false targets (RFTs), created by the digital radio frequency memory (DRFM) system of a jammer, are widely used in practice to exhaust the limited tracking and discrimination resources of defence radar. In this paper, a common characteristic of the radar polarization echoes of multiple RFTs is used for target recognition. Based on the echoes from two receiving polarization channels, the instantaneous polarization ratio (IPR) is defined and its variance is derived by employing a Taylor series expansion. A detection-discrimination method is designed based on probability grids. Using data from a microwave anechoic chamber, the detection threshold of the method is confirmed. Theoretical analysis and simulations indicate that the method is valid and feasible. Furthermore, the estimation performance of the IPRs of RFTs under the influence of signal-to-noise ratio (SNR) is also covered.

  14. A frequency domain global parameter estimation method for multiple reference frequency response measurements

    Shih, C. Y.; Tsuei, Y. G.; Allemang, R. J.; Brown, D. L.

    1988-10-01

    A method of using the matrix Auto-Regressive Moving Average (ARMA) model in the Laplace domain for multiple-reference global parameter identification is presented. This method is particularly applicable to the area of modal analysis where high modal density exists. The method is also applicable when multiple reference frequency response functions are used to characterise linear systems. In order to facilitate the mathematical solution, the Forsythe orthogonal polynomial is used to reduce the ill-conditioning of the formulated equations and to decouple the normal matrix into two reduced matrix blocks. A Complex Mode Indicator Function (CMIF) is introduced, which can be used to determine the proper order of the rational polynomials.

  15. Balancing precision and risk: should multiple detection methods be analyzed separately in N-mixture models?

    Tabitha A Graves

    Full Text Available Using multiple detection methods can increase the number, kind, and distribution of individuals sampled, which may increase accuracy and precision and reduce the cost of population abundance estimates. However, when variables influencing abundance are of interest, if individuals detected via different methods are influenced by the landscape differently, separate analysis of multiple detection methods may be more appropriate. We evaluated the effects of combining two detection methods on the identification of variables important to local abundance using detections of grizzly bears with hair traps (systematic) and bear rubs (opportunistic). We used hierarchical abundance models (N-mixture models) with separate model components for each detection method. If both methods sample the same population, the use of either data set alone should (1) lead to the selection of the same variables as important and (2) provide similar estimates of relative local abundance. We hypothesized that the inclusion of 2 detection methods versus either method alone should (3) yield more support for variables identified in single-method analyses (i.e., fewer variables and models with greater weight), and (4) improve precision of covariate estimates for variables selected in both separate and combined analyses because sample size is larger. As expected, joint analysis of both methods increased precision as well as certainty in variable and model selection. However, the single-method analyses identified different variables and the resulting predicted abundances had different spatial distributions. We recommend comparing single-method and jointly modeled results to identify the presence of individual heterogeneity between detection methods in N-mixture models, along with consideration of detection probabilities, correlations among variables, and tolerance to risk of failing to identify variables important to a subset of the population.
The benefits of increased precision should be weighed

  16. Investigation of colistin sensitivity via three different methods in Acinetobacter baumannii isolates with multiple antibiotic resistance.

    Sinirtaş, Melda; Akalin, Halis; Gedikoğlu, Suna

    2009-09-01

    In recent years there has been an increase in life-threatening infections caused by Acinetobacter baumannii with multiple antibiotic resistance, which has led to the use of polymyxins, especially colistin, being reconsidered. The aim of this study was to investigate the colistin sensitivity of A. baumannii isolates with multiple antibiotic resistance via different methods, and to evaluate the disk diffusion method for colistin against multi-resistant Acinetobacter isolates, in comparison to the E-test and Phoenix system. The study was carried out on 100 strains of A. baumannii (colonization or infection) isolated from the microbiological samples of different patients followed in the clinics and intensive care units of Uludağ University Medical School between the years 2004 and 2005. Strains were identified and characterized for their antibiotic sensitivity by the Phoenix system (Becton Dickinson, Sparks, MD, USA). In all studied A. baumannii strains, susceptibility to colistin was determined to be 100% with the disk diffusion, E-test, and broth microdilution methods. Results of the E-test and broth microdilution method, which are accepted as reference methods, were found to be 100% consistent with the results of the disk diffusion tests; no very major or major error was identified upon comparison of the tests. The sensitivity and the positive predictive value of the disk diffusion method were found to be 100%. Colistin resistance in A. baumannii was not detected in our region, and disk diffusion method results are in accordance with those of the E-test and broth microdilution methods.

  17. Comparison of multiple-criteria decision-making methods - results of simulation study

    Michał Adamczak

    2016-12-01

    Full Text Available Background: Today, both researchers and practitioners have many methods for supporting the decision-making process. Due to the conditions in which supply chains function, the most interesting are multi-criteria methods. The use of sophisticated methods for supporting decisions requires the parameterization and execution of calculations that are often complex. So is it efficient to use sophisticated methods? Methods: The authors of the publication compared two popular multi-criteria decision-making methods: the Weighted Sum Model (WSM) and the Analytic Hierarchy Process (AHP). A simulation study reflects these two decision-making methods. Input data for this study was a set of criteria weights and the value of each alternative in terms of each criterion. Results: The iGrafx Process for Six Sigma simulation software recreated how both multiple-criteria decision-making methods (WSM and AHP) function. The result of the simulation was a numerical value defining the preference of each of the alternatives according to the WSM and AHP methods. The alternative producing a result of higher numerical value was considered preferred, according to the selected method. In the analysis of the results, the relationship between the values of the parameters and the difference in the results presented by both methods was investigated. Statistical methods, including hypothesis testing, were used for this purpose. Conclusions: The simulation study findings prove that the results obtained with the use of the two multiple-criteria decision-making methods are very similar. Differences occurred more frequently in lower-value parameters from the "value of each alternative" group and higher-value parameters from the "weight of criteria" group.
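    The WSM side of the comparison above is simple to state: each alternative's preference is the weighted sum of its criterion values. A minimal sketch with hypothetical weights and alternatives:

```python
def wsm(alternatives, weights):
    """Weighted Sum Model: score = sum(weight * criterion value)."""
    return [sum(w * v for w, v in zip(weights, alt)) for alt in alternatives]

# Hypothetical: 2 alternatives scored on 3 criteria, weights summing to 1
weights = [0.5, 0.3, 0.2]
alts = [[0.8, 0.6, 0.9],
        [0.7, 0.9, 0.6]]
scores = wsm(alts, weights)
best = max(range(len(scores)), key=scores.__getitem__)
```

    AHP reaches a comparable preference value by a longer route (pairwise comparison matrices and eigenvector priorities), which is why the study asks whether the extra sophistication pays off.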

  18. Traffic Management by Using Admission Control Methods in Multiple Node IMS Network

    Filip Chamraz

    2016-01-01

    Full Text Available The paper deals with Admission Control (AC) methods as a possible solution for traffic management in IMS (IP Multimedia Subsystem) networks, from the point of view of efficient redistribution of the available network resources and keeping the parameters of Quality of Service (QoS). The paper specifically aims at the selection of the most appropriate method for a specific type of traffic, and at a traffic management concept using AC methods on multiple nodes. The potential benefits and disadvantages of the used solution are evaluated.
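    A capacity-based admission decision of the kind discussed above can be sketched as follows; the simple bandwidth-threshold rule is an illustrative assumption, not one of the paper's specific AC methods:

```python
class AdmissionController:
    """Minimal capacity-based admission control sketch: admit a session
    only if its requested bandwidth fits in the remaining capacity."""

    def __init__(self, capacity_kbps):
        self.capacity = capacity_kbps
        self.used = 0

    def admit(self, request_kbps):
        if self.used + request_kbps <= self.capacity:
            self.used += request_kbps
            return True
        return False  # reject to protect QoS of already-admitted sessions

    def release(self, request_kbps):
        self.used -= request_kbps

ac = AdmissionController(1000)
results = [ac.admit(r) for r in (400, 400, 400)]  # third request rejected
```

    Rejecting the marginal session is the core trade-off of AC: some traffic is turned away so that the QoS parameters of admitted traffic stay within bounds.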

  19. Empirically defining rapid response to intensive treatment to maximize prognostic utility for bulimia nervosa and purging disorder.

    MacDonald, Danielle E; Trottier, Kathryn; McFarlane, Traci; Olmsted, Marion P

    2015-05-01

    Rapid response (RR) to eating disorder treatment has been reliably identified as a predictor of post-treatment and sustained remission, but its definition has varied widely. Although signal detection methods have been used to empirically define RR thresholds in outpatient settings, RR to intensive treatment has not been investigated. This study investigated the optimal definition of RR to day hospital treatment for bulimia nervosa and purging disorder. Participants were 158 patients who completed ≥6 weeks of day hospital treatment. Receiver operating characteristic (ROC) analysis was used to create four definitions of RR that could differentiate between remission and nonremission at the end of treatment. Definitions were based on binge/vomit episode frequency or percent reduction from pre-treatment, during either the first four or first two weeks of treatment. All definitions were associated with higher remission rates in rapid compared to nonrapid responders. Only one definition (i.e., ≤3 episodes in the first four weeks of treatment) predicted sustained remission (versus relapse) at 6- and 12-month follow-up. These findings provide an empirically derived definition of RR to intensive eating disorder treatment, and provide further evidence that early change is an important prognostic indicator. Copyright © 2015 Elsevier Ltd. All rights reserved.
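    The ROC analysis above searches for the episode-count cutoff that best separates remitters from non-remitters; maximizing Youden's J is one standard way to pick such a threshold (whether it matches the authors' exact criterion is an assumption). A sketch with invented episode counts:

```python
def best_threshold(scores, labels):
    """Pick the cutoff maximizing Youden's J = sensitivity + specificity - 1,
    predicting 'positive' (e.g. remission) when score <= threshold."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    best_j, best_t = -1.0, None
    for thr in sorted(set(scores)):
        sens = sum(s <= thr for s in pos) / len(pos)
        spec = sum(s > thr for s in neg) / len(neg)
        if sens + spec - 1 > best_j:
            best_j, best_t = sens + spec - 1, thr
    return best_t, best_j

# Hypothetical: binge/vomit episodes in the first 4 weeks; 1 = remitted
episodes = [0, 1, 2, 3, 3, 4, 6, 8, 9, 12]
remitted = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
t, j = best_threshold(episodes, remitted)
```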

  20. Neurocognitive Impairments Are More Severe in the Binge-Eating/Purging Anorexia Nervosa Subtype Than in the Restricting Subtype.

    Tamiya, Hiroko; Ouchi, Atushi; Chen, Runshu; Miyazawa, Shiho; Akimoto, Yoritaka; Kaneda, Yasuhiro; Sora, Ichiro

    2018-01-01

    Objective: To evaluate cognitive function impairment in patients with anorexia nervosa (AN) of either the restricting (ANR) or binge-eating/purging (ANBP) subtype. Method: We administered the Japanese version of the MATRICS Consensus Cognitive Battery to 22 patients with ANR, 18 patients with ANBP, and 69 healthy control subjects. Our participants were selected from among the patients at Kobe University Hospital and community residents. Results: Compared to the healthy controls, the ANR group had significantly lower visual learning and social cognition scores, and the ANBP group had significantly lower processing speed, attention/vigilance, visual learning, reasoning/problem-solving, and social cognition scores. Compared to the ANR group, the ANBP group had significantly lower attention/vigilance scores. Discussion: The AN subtypes differed in cognitive function impairments. Participants with ANBP, which is associated with higher mortality rates than ANR, exhibited greater impairment severity, especially in the attention/vigilance domain, confirming the presence of impairments in continuous concentration. This may relate to impulsivity, an ANBP characteristic reported in personality research. Future studies can further clarify the cognitive impairments of each subtype by addressing subtype cognitive functions and personality characteristics.

  1. Multiple flood vulnerability assessment approach based on fuzzy comprehensive evaluation method and coordinated development degree model.

    Yang, Weichao; Xu, Kui; Lian, Jijian; Bin, Lingling; Ma, Chao

    2018-05-01

    Flood is a serious challenge that increasingly affects residents as well as policymakers. Flood vulnerability assessment is becoming gradually more relevant worldwide. The purpose of this study is to develop an approach that reveals the relationship between exposure, sensitivity and adaptive capacity for better flood vulnerability assessment, based on the fuzzy comprehensive evaluation method (FCEM) and the coordinated development degree model (CDDM). The approach is organized into three parts: establishment of the index system; assessment of exposure, sensitivity and adaptive capacity; and multiple flood vulnerability assessment. A hydrodynamic model and statistical data are employed for the establishment of the index system; FCEM is used to evaluate exposure, sensitivity and adaptive capacity; and CDDM is applied to express the relationship of the three components of vulnerability. Six multiple flood vulnerability types and four levels are proposed to assess flood vulnerability from multiple perspectives. The approach is then applied to assess the spatial distribution of flood vulnerability in Hainan's eastern area, China. Based on the results of the multiple flood vulnerability assessment, a decision-making process for rational allocation of limited resources is proposed and applied to the study area. The study shows that multiple flood vulnerability assessment can evaluate vulnerability more completely and helps decision makers obtain more comprehensive information for decision making. In summary, this study provides a new way for flood vulnerability assessment and disaster prevention decisions. Copyright © 2018 Elsevier Ltd. All rights reserved.
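    The FCEM step above combines an indicator weight vector with a fuzzy membership matrix. A minimal sketch using the weighted-average composition operator, one common choice (the paper's exact operator is not specified here), with hypothetical numbers:

```python
def fuzzy_eval(weights, R):
    """Fuzzy comprehensive evaluation with the weighted-average operator:
    b_j = sum_i w_i * r_ij, where R[i][j] is the membership degree of
    indicator i in evaluation grade j."""
    n_grades = len(R[0])
    return [sum(w * row[j] for w, row in zip(weights, R))
            for j in range(n_grades)]

# Hypothetical: 3 indicators, 3 vulnerability grades (low, mid, high)
w = [0.5, 0.3, 0.2]
R = [[0.6, 0.3, 0.1],
     [0.2, 0.5, 0.3],
     [0.1, 0.3, 0.6]]
b = fuzzy_eval(w, R)
grade = max(range(len(b)), key=b.__getitem__)  # max-membership principle
```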

  2. TNKVNT: A model of the Tank 48 purge/ventilation exhaust system. Revision 1

    Shadday, M.A. Jr.

    1996-04-01

    The waste tank purge ventilation system for Tank 48 is designed to prevent dangerous concentrations of hydrogen or benzene from accumulating in the gas space of the tank. Fans pull the gas/water vapor mixture from the tank gas space and pass it sequentially through a demister, a condenser, a reheater, and HEPA filters before discharging to the environment. Proper operation of the HEPA filters requires that the gas mixture passing through them has a low relative humidity. The ventilation system has been modified by increasing the capacity of the fans and changing the condenser from a two-pass heat exchanger to a single-pass heat exchanger. It is important to understand the impact of these modifications on the operation of the system. A hydraulic model of the ventilation exhaust system has been developed. This model predicts the properties of the air throughout the system and the flowrate through the system, as functions of the tank gas space and environmental conditions. This document serves as a Software Design Report, a Software Coding Report, and a User's Manual. All of the information required for understanding and using this code is herein contained: the governing equations are fully developed, the numerical algorithms are described in detail, and an extensively commented code listing is included. This updated version of the code models the entire purge ventilation system, and is therefore more general in its potential applications

  3. Study of typical nuclear containment purge valves in an accident environment

    Watkins, J.C.; Steele, R. Jr.; Hill, R.C.; DeWall, K.G.

    1986-08-01

    This report presents the results of the containment purge and vent valve test program, conducted under the sponsorship of the United States Nuclear Regulatory Commission (NRC), Office of Nuclear Regulatory Research. The test program investigated butterfly valve operability and leak integrity under light-water-reactor design basis and severe accident conditions. Three nuclear-designed butterfly valves typical of those used in domestic nuclear power plant containment purge and vent applications were tested. For a comparison of response, two valves of the same size with differing internal designs were tested. For extrapolation insights, a larger-sized valve similar to one of the smaller valves was also tested. Dynamic flow tests were performed over the range of design basis accident pressures. Leak integrity testing was also performed at both design basis and severe accident temperatures and pressures. The valve experiments were performed with various piping configurations and valve orientations to the flow to simulate the various installation options in field applications. Testing was also performed in a standard ANSI test section

  4. A permutation-based multiple testing method for time-course microarray experiments

    George Stephen L

    2009-10-01

    Full Text Available Abstract Background Time-course microarray experiments are widely used to study the temporal profiles of gene expression. Storey et al. (2005) developed a method for analyzing time-course microarray studies that can be applied to discovering genes whose expression trajectories change over time within a single biological group, or those that follow different time trajectories among multiple groups. They estimated the expression trajectories of each gene using natural cubic splines under the null (no time-course) and alternative (time-course) hypotheses, and used a goodness-of-fit test statistic to quantify the discrepancy. The null distribution of the statistic was approximated through a bootstrap method. Gene expression levels in microarray data are often correlated in complicated ways. Accurate type I error control adjusting for multiple testing requires the joint null distribution of test statistics for a large number of genes. For this purpose, permutation methods have been widely used because of computational ease and their intuitive interpretation. Results In this paper, we propose a permutation-based multiple testing procedure based on the test statistic used by Storey et al. (2005). We also propose an efficient computation algorithm. Extensive simulations are conducted to investigate the performance of the permutation-based multiple testing procedure. The application of the proposed method is illustrated using the Caenorhabditis elegans dauer developmental data. Conclusion Our method is computationally efficient and applicable for identifying genes whose expression levels are time-dependent in a single biological group and for identifying the genes for which the time-profile depends on the group in a multi-group setting.
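    The permutation idea above, recomputing the statistic under random relabelings to approximate its null distribution, can be sketched for a single gene with a difference-in-means statistic. The paper's actual statistic is a spline goodness-of-fit measure, so this is a deliberate simplification:

```python
import random

def perm_pvalue(a, b, n_perm=2000, seed=0):
    """Two-sided permutation p-value for a difference in group means,
    with the +1 correction so p is never exactly zero."""
    rng = random.Random(seed)
    obs = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random relabeling of the pooled observations
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= obs:
            count += 1
    return (count + 1) / (n_perm + 1)

# Hypothetical expression values for one gene in two conditions
ctrl = [1.0, 1.2, 0.9, 1.1, 1.0]
treat = [1.9, 2.1, 2.0, 2.3, 1.8]
p = perm_pvalue(ctrl, treat)
```

    The multiple-testing step then repeats this across all genes and uses the joint permutation distribution, which is what preserves the genes' correlation structure.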

  5. Multiple external hazards compound level 3 PSA methods research of nuclear power plant

    Wang, Handing; Liang, Xiaoyu; Zhang, Xiaoming; Yang, Jianfeng; Liu, Weidong; Lei, Dina

    2017-01-01

    The 2011 Fukushima nuclear power plant severe accident was caused by an earthquake combined with a tsunami, and it resulted in the release of a large amount of radioactive nuclides that contaminated the surrounding environment. Although the probability of such an accident is extremely small, once it happens it is likely to release substantial radioactive material into the environment and cause radiation contamination. Therefore, studying accident consequences is important and essential to improving nuclear power plant design and management. Level 3 PSA methods for nuclear power plants can be used to analyze radiological consequences and quantify the risk to public health around nuclear power plants. Based on studies of multiple external hazards compound Level 3 PSA methods for nuclear power plants, together with a description of the multiple external hazards compound Level 3 PSA technology roadmap and its important technical elements, and taking a coastal nuclear power plant as the reference site, we analyzed the off-site consequences of nuclear power plant severe accidents caused by multiple external hazards. Finally, we discuss probabilistic risk studies of off-site consequences and their applications under multiple external hazards compound conditions, and explain the feasibility and reasonableness of implementing emergency plans.

  6. A novel sampling method for multiple multiscale targets from scattering amplitudes at a fixed frequency

    Liu, Xiaodong

    2017-08-01

    A sampling method using the scattering amplitude is proposed for shape and location reconstruction in inverse acoustic scattering problems. Only matrix multiplication is involved in the computation, thus the novel sampling method is very easy and simple to implement. With the help of the factorization of the far field operator, we establish an inf-criterion for the characterization of the underlying scatterers. This result is then used to give a lower bound of the proposed indicator functional for sampling points inside the scatterers. For sampling points outside the scatterers, we show that the indicator functional decays like the Bessel functions as the sampling point moves away from the boundary of the scatterers. We also show that the proposed indicator functional depends continuously on the scattering amplitude, which further implies that the novel sampling method is extremely stable with respect to errors in the data. In contrast to classical sampling methods such as the linear sampling method or the factorization method, from the numerical point of view the novel indicator takes its maximum near the boundary of the underlying target and decays like the Bessel functions as the sampling points move away from the boundary. The numerical simulations also show that the proposed sampling method can deal with the multiple multiscale case, even when the different components are close to each other.

  7. Integrated Markov-neural reliability computation method: A case for multiple automated guided vehicle system

    Fazlollahtabar, Hamed; Saidi-Mehrabad, Mohammad; Balakrishnan, Jaydeep

    2015-01-01

    This paper proposes an integrated Markovian and back-propagation neural network approach to compute the reliability of a system. Since states of failure occurrence are significant elements for accurate reliability computation, a Markovian-based reliability assessment method is designed. Due to drawbacks shown by the Markovian model for steady-state reliability computations and by the neural network for the initial training pattern, an integration called Markov-neural is developed and evaluated. To show the efficiency of the proposed approach, comparative analyses are performed. Also, for managerial implication purposes, an application case for multiple automated guided vehicles (AGVs) in manufacturing networks is conducted. - Highlights: • Integrated Markovian and back-propagation neural network approach to compute reliability. • Markovian-based reliability assessment method. • Managerial implication is shown in an application case for multiple automated guided vehicles (AGVs) in manufacturing networks
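The Markovian building block of such an approach can be sketched as a steady-state solve on a continuous-time generator matrix. This is a minimal two-state illustration with assumed failure/repair rates, not the paper's AGV model:

```python
import numpy as np

# Hypothetical two-state repairable unit: state 0 = operating, state 1 = failed.
# lam and mu are assumed failure/repair rates (per hour); the paper's model is richer.
lam, mu = 0.01, 0.5
Q = np.array([[-lam,  lam],
              [  mu,  -mu]])          # continuous-time Markov generator

# Steady-state distribution: pi @ Q = 0 together with sum(pi) = 1
A = np.vstack([Q.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

availability = pi[0]                  # long-run probability of operating
# closed form for this 2-state chain: mu / (lam + mu)
```

For larger state spaces the same normalized linear solve applies; the neural-network part of the hybrid would then supply quantities the pure Markov model handles poorly.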

  8. A prediction method based on wavelet transform and multiple models fusion for chaotic time series

    Zhongda, Tian; Shujiang, Li; Yanhong, Wang; Yi, Sha

    2017-01-01

    In order to improve the prediction accuracy of chaotic time series, a prediction method based on wavelet transform and multiple-model fusion is proposed. The chaotic time series is decomposed and reconstructed by wavelet transform, and approximation components and detail components are obtained. According to the different characteristics of each component, least squares support vector machine (LSSVM) is used as the predictive model for the approximation components, and an improved free search algorithm is utilized to optimize the predictive model parameters. An autoregressive integrated moving average (ARIMA) model is used as the predictive model for the detail components. The predictive values of the multiple models are fused by the Gauss–Markov algorithm; the error variance of the fused result is smaller than that of any single model, so prediction accuracy is improved. The simulation results are compared using two typical chaotic time series, the Lorenz and Mackey–Glass time series. The simulation results show that the prediction method in this paper achieves better prediction performance.
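The fusion step rests on inverse-variance (Gauss-Markov) weighting of unbiased predictors. A minimal sketch, with the component models (e.g. the LSSVM and ARIMA outputs) replaced by predictions that are simply assumed given:

```python
import numpy as np

def gauss_markov_fuse(preds, err_vars):
    """Inverse-variance (Gauss-Markov) fusion of unbiased predictors.

    preds: (n_models, n_points) predictions from the component models.
    err_vars: per-model error variances.
    Returns the fused prediction and its error variance, which is never
    larger than the smallest individual variance.
    """
    err_vars = np.asarray(err_vars, dtype=float)
    w = (1.0 / err_vars) / (1.0 / err_vars).sum()   # optimal weights
    fused = w @ np.asarray(preds)
    fused_var = 1.0 / (1.0 / err_vars).sum()
    return fused, fused_var
```

The fused variance `1 / sum(1/var_i)` being below every individual variance is exactly the abstract's claim that the fused error variance is less than that of any single model.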

  9. Experimental design and multiple response optimization. Using the desirability function in analytical methods development.

    Candioti, Luciana Vera; De Zan, María M; Cámara, María S; Goicoechea, Héctor C

    2014-06-01

    A review about the application of response surface methodology (RSM) when several responses have to be simultaneously optimized in the field of analytical methods development is presented. Several critical issues like response transformation, multiple response optimization and modeling with least squares and artificial neural networks are discussed. Most recent analytical applications are presented in the context of analytical methods development, especially in multiple response optimization procedures using the desirability function. Copyright © 2014 Elsevier B.V. All rights reserved.
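The desirability approach the review centers on maps each response onto [0, 1] and combines the individual desirabilities into a geometric mean. A minimal sketch of the Derringer-Suich larger-is-better transform (the bounds `low`/`high` and exponent `s` are analyst-chosen parameters, assumed here):

```python
import numpy as np

def desirability_larger_is_better(y, low, high, s=1.0):
    """Derringer-Suich one-sided desirability.

    0 below `low`, 1 above `high`, a power ramp in between.
    """
    return np.clip((y - low) / (high - low), 0.0, 1.0) ** s

def overall_desirability(ds):
    """Overall desirability D: geometric mean of individual desirabilities.

    Any response at d = 0 drives D to 0, which is what makes the
    criterion useful for simultaneous optimization of several responses.
    """
    ds = np.asarray(ds, dtype=float)
    return ds.prod() ** (1.0 / len(ds))
```

In an RSM workflow, `overall_desirability` would be evaluated over the fitted response surfaces and maximized over the factor space.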

  10. Computing multiple periodic solutions of nonlinear vibration problems using the harmonic balance method and Groebner bases

    Grolet, Aurelien; Thouverez, Fabrice

    2015-02-01

    This paper is devoted to the study of vibration of mechanical systems with geometric nonlinearities. The harmonic balance method is used to derive systems of polynomial equations whose solutions give the frequency components of the possible steady states. Groebner basis methods are used for computing all solutions of the polynomial systems. This approach makes it possible to reduce the complete system to a unique polynomial equation in one variable carrying all solutions of the problem. In addition, in order to decrease the number of variables, we propose to first work on the undamped system, and recover solutions of the damped system using a continuation on the damping parameter. The search for multiple solutions is illustrated on a simple system, where the influence of the number of retained harmonics is studied. Finally, the procedure is applied on a simple cyclic system and we give a representation of the multiple states versus frequency.
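The reduction to polynomial equations can be illustrated on the smallest possible case: a one-harmonic balance of an undamped Duffing oscillator, which already yields a univariate polynomial with multiple real roots. Parameter values are assumed for illustration; the Groebner-basis machinery in the paper is needed only when several harmonics and degrees of freedom couple.

```python
import numpy as np

# One-harmonic balance of the undamped forced Duffing oscillator
#   x'' + w0**2 * x + eps * x**3 = F * cos(w * t),  ansatz x(t) = A*cos(w*t).
# Balancing the cos(w*t) terms yields a cubic polynomial in the amplitude A:
#   (3/4)*eps*A**3 + (w0**2 - w**2)*A - F = 0
# Illustrative parameters: hardening spring driven above resonance.
w0, eps, F, w = 1.0, 1.0, 0.1, 1.2

coeffs = [0.75 * eps, 0.0, w0**2 - w**2, -F]
roots = np.roots(coeffs)
real_amplitudes = sorted(r.real for r in roots if abs(r.imag) < 1e-7)
# Three real roots here: three coexisting steady states, the kind of
# multiple-solution structure the Groebner approach tracks in general.
```

Each real root is a steady-state amplitude; in the multi-variable case the Groebner basis plays the role that `np.roots` plays for this single cubic.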

  11. Multiplication factor evaluation of bare and reflected small fast assemblies using variational methods

    Dwivedi, S.R.; Jain, D.

    1979-01-01

    The multigroup collision probability equations were solved by the variational method to derive a simple relation between the multiplication factor and the size of a small spherical bare or reflected fast reactor. This relation was verified by a number of 26-group, S4, transport theory calculations in one-dimensional spherical geometry for enriched uranium and plutonium systems. It has been shown that further approximations to the above relation lead to the universal empirical relation obtained by Anil Kumar.

  12. Research on numerical method for multiple pollution source discharge and optimal reduction program

    Li, Mingchang; Dai, Mingxin; Zhou, Bin; Zou, Bin

    2018-03-01

    In this paper, an optimal method for determining a reduction program is proposed, based on a nonlinear optimization algorithm, the genetic algorithm. The four main rivers in Jiangsu Province, China are selected for reducing environmental pollution in the nearshore district. Dissolved inorganic nitrogen (DIN) is studied as the only pollutant. The environmental status and standards in the nearshore district are used to reduce the discharge of multiple river pollutants. The results of the reduction program form the basis for marine environmental management.

  13. An Improved Clutter Suppression Method for Weather Radars Using Multiple Pulse Repetition Time Technique

    Yingjie Yu

    2017-01-01

    This paper describes the implementation of an improved clutter suppression method for the multiple pulse repetition time (PRT) technique based on simulated radar data. The suppression method is constructed using maximum likelihood methodology in the time domain and is called the parametric time domain method (PTDM). The procedure relies on the assumption that precipitation and clutter signal spectra follow a Gaussian functional form. The multiple interleaved pulse repetition frequencies (PRFs) used in this work are set to four PRFs (952, 833, 667, and 513 Hz). Based on radar simulation, it is shown that the new method can provide accurate retrieval of Doppler velocity even in the case of strong clutter contamination. The obtained velocity is nearly unbiased over the whole Nyquist velocity interval. Also, the performance of the method is illustrated on simulated radar data for a plan position indicator (PPI) scan. Compared with staggered 2-PRT transmission schemes with PTDM, the proposed method presents better estimation accuracy under certain clutter situations.

  14. Multiple Signal Classification Algorithm Based Electric Dipole Source Localization Method in an Underwater Environment

    Yidong Xu

    2017-10-01

    A novel localization method based on the multiple signal classification (MUSIC) algorithm is proposed for positioning an electric dipole source in a confined underwater environment using an electric dipole-receiving antenna array. In this method, the boundary element method (BEM) is introduced to analyze the boundary of the confined region by use of a matrix equation. The voltage of each dipole pair is used as spatial-temporal localization data, and the method does not need to obtain the field component in each direction, unlike conventional field-based localization methods, so it can be easily implemented in practical engineering applications. Then, a global-multiple region-conjugate gradient (CG) hybrid search method is used to reduce the computational burden and improve the operation speed. Two localization simulation models and a physical experiment are conducted. Both the simulation results and the physical experiment provide accurate positioning performance, which helps verify the effectiveness of the proposed localization method in underwater environments.

  15. System and method for integrating and accessing multiple data sources within a data warehouse architecture

    Musick, Charles R [Castro Valley, CA; Critchlow, Terence [Livermore, CA; Ganesh, Madhaven [San Jose, CA; Slezak, Tom [Livermore, CA; Fidelis, Krzysztof [Brentwood, CA

    2006-12-19

    A system and method is disclosed for integrating and accessing multiple data sources within a data warehouse architecture. The metadata formed by the present method provide a way to declaratively present domain specific knowledge, obtained by analyzing data sources, in a consistent and useable way. Four types of information are represented by the metadata: abstract concepts, databases, transformations and mappings. A mediator generator automatically generates data management computer code based on the metadata. The resulting code defines a translation library and a mediator class. The translation library provides a data representation for domain specific knowledge represented in a data warehouse, including "get" and "set" methods for attributes that call transformation methods and derive a value of an attribute if it is missing. The mediator class defines methods that take "distinguished" high-level objects as input and traverse their data structures and enter information into the data warehouse.

  16. An Extended TOPSIS Method for Multiple Attribute Decision Making based on Interval Neutrosophic Uncertain Linguistic Variables

    Said Broumi

    2015-03-01

    Interval neutrosophic uncertain linguistic variables can easily express the indeterminate and inconsistent information encountered in the real world, and TOPSIS is a very effective decision-making method with increasingly extensive applications. In this paper, we extend the TOPSIS method to deal with interval neutrosophic uncertain linguistic information, and propose an extended TOPSIS method to solve multiple attribute decision-making problems in which the attribute values take the form of interval neutrosophic uncertain linguistic variables and the attribute weights are unknown. Firstly, the operational rules and properties of the interval neutrosophic variables are introduced. Then the distance between two interval neutrosophic uncertain linguistic variables is proposed, the attribute weights are calculated by the maximizing deviation method, and the closeness coefficients to the ideal solution are computed for each alternative. Finally, an illustrative example is given to illustrate the decision-making steps and the effectiveness of the proposed method.
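For reference, the crisp TOPSIS baseline that the paper extends fits in a few lines (a standard formulation with vector normalization; the neutrosophic linguistic extension replaces these crisp values and distances with interval neutrosophic ones):

```python
import numpy as np

def topsis(X, weights, benefit):
    """Classical crisp TOPSIS closeness coefficients (higher = better).

    X: (n_alternatives, n_criteria) decision matrix.
    weights: criterion weights summing to 1.
    benefit: boolean per criterion, True when larger is better.
    """
    R = X / np.linalg.norm(X, axis=0)                 # vector normalization
    V = R * weights                                   # weighted matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)         # distance to ideal
    d_neg = np.linalg.norm(V - anti, axis=1)          # distance to anti-ideal
    return d_neg / (d_pos + d_neg)
```

Alternatives are then ranked by descending closeness coefficient; the paper's contribution is making each step of this pipeline work on interval neutrosophic uncertain linguistic data with unknown weights.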

  17. CHANGES IN TUMOR NECROSIS FACTOR ALFA DURING TREATMENT OF PATIENTS WITH MULTIPLE SCLEROSIS BY TRANSIMMUNIZATION METHOD

    A. V. Kil'dyushevskiy

    2016-01-01

    Background: Despite the availability of a large number of treatments for multiple sclerosis with various targets, these treatments are not always effective. According to the literature, experimental studies have shown a significant decrease in tumor necrosis factor alfa (TNF-α) with the use of extracorporeal photochemotherapy. Aim: To assess changes in TNF-α in patients with multiple sclerosis during treatment with transimmunization. Materials and methods: The study recruited 13 adult patients with multiple sclerosis. Serum TNF-α was measured by immunochemiluminescence analysis (IMMULITE 1000, Siemens). The patients were treated by transimmunization, i.e. a modified photopheresis. Two hours before the procedure, Ammifurin (8-methoxypsoralene) was administered to all the patients, then their mononuclear cells were isolated under the PBSC protocol with a Haemonetics MCS+ cell separator. Thereafter, the mononuclear cells were irradiated with ultraviolet for 90 minutes and incubated for 20 hours at 37 °C. The next day the cells were re-infused into the patients. The procedure was performed 2 times per week for 6 months, then once per 4 months. Results: Before transimmunization, the mean TNF-α level in adult patients with multiple sclerosis was 9.958 ± 0.812 pg/mL (normal, below 8.1 pg/mL). After transimmunization, its level was 6.992 ± 0.367 pg/mL (p < 0.05). Conclusion: Ultraviolet irradiation of peripheral blood monocytes with their subsequent incubation (transimmunization) led to a 30% decrease of serum TNF-α in patients with multiple sclerosis. This indicates a suppressive effect of transimmunization on TNF-α. Hence, in patients with multiple sclerosis transimmunization exerts an anti-inflammatory effect.

  18. Effects of the two types of anorexia nervosa (binge eating/purging and restrictive) on bone metabolism in female patients.

    Maïmoun, Laurent; Guillaume, Sébastien; Lefebvre, Patrick; Bertet, Helena; Seneque, Maude; Philibert, Pascal; Picot, Marie-Christine; Dupuy, Anne-Marie; Paris, Françoise; Gaspari, Laura; Ben Bouallègue, Fayçal; Courtet, Philippe; Mariano-Goulart, Denis; Renard, Eric; Sultan, Charles

    2018-04-06

    This study compared the profiles of the two types of anorexia nervosa (AN; restrictive: AN-R, and binge eating/purging: AN-BP) in terms of body composition, gynaecological status, disease history and the potential effects on bone metabolism. Two hundred and eighty-six women with AN (21.8 ± 6.5 years; 204 AN-R and 82 AN-BP) and 130 age-matched controls (CON; 22.6 ± 6.8 years) were enrolled. Areal bone mineral density (aBMD) was determined using DXA and resting energy expenditure (REE) was assessed by indirect calorimetry. Markers of bone formation (osteocalcin [OC] and procollagen type I N-terminal propeptide [PINP]) and resorption (type I C-telopeptide breakdown products [CTX]), together with leptin, were concomitantly evaluated. Anorexia nervosa patients presented an alteration in aBMD and bone turnover. When compared according to type, AN-BP patients were older than AN-R patients and showed less severe undernutrition, lower CTX levels, longer duration of AN, and higher REE levels and aBMD at the radius and lumbar spine. After adjustment for age, weight and hormonal contraceptive use, the aBMD and CTX differences disappeared. In both AN groups, aBMD was positively correlated with anthropometric parameters and negatively correlated with the durations of AN and amenorrhoea, the bone formation markers (OC and PINP) and the leptin/fat mass ratio. REE was positively correlated with aBMD in AN-R patients only. This study shows the profiles of AN patients according to AN type. However, the impact of the profile characteristics on bone status, although significant, was minor and disappeared after multiple adjustments. The positive correlation between REE and aBMD reinforces the concept that energy disposal and bone metabolism are strongly interdependent. © 2018 John Wiley & Sons Ltd.

  19. Analytic Methods for Evaluating Patterns of Multiple Congenital Anomalies in Birth Defect Registries.

    Agopian, A J; Evans, Jane A; Lupo, Philip J

    2018-01-15

    It is estimated that 20 to 30% of infants with birth defects have two or more birth defects. Among these infants with multiple congenital anomalies (MCA), co-occurring anomalies may represent either chance (i.e., unrelated etiologies) or pathogenically associated patterns of anomalies. While some MCA patterns have been recognized and described (e.g., known syndromes), others have not been identified or characterized. Elucidating these patterns may result in a better understanding of the etiologies of these MCAs. This article reviews the literature with regard to analytic methods that have been used to evaluate patterns of MCAs, in particular those using birth defect registry data. A popular method for MCA assessment involves a comparison of the observed to expected ratio for a given combination of MCAs, or one of several modified versions of this comparison. Other methods include the use of numerical taxonomy or other clustering techniques, multiple regression analysis, and log-linear analysis. Advantages and disadvantages of these approaches, as well as specific applications, are outlined. Despite the availability of multiple analytic approaches, relatively few MCA combinations have been assessed. The availability of large birth defects registries and computing resources that allow for automated, big-data strategies for prioritizing MCA patterns may provide new avenues for better understanding the co-occurrence of birth defects. Thus, the selection of an analytic approach may depend on several considerations. Birth Defects Research 110:5-11, 2018. © 2017 Wiley Periodicals, Inc.
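The most common of these approaches, the observed-to-expected ratio for a defect combination, is easy to sketch for pairs. Defect codes here are hypothetical, and the expected count assumes independence between defects:

```python
from collections import Counter
from itertools import combinations

def pairwise_oe_ratios(cases, n_total):
    """Observed/expected co-occurrence ratio per defect pair.

    cases: one set of defect codes per infant (codes are hypothetical).
    Under independence, E[pair] = n * p(a) * p(b); a ratio well above 1
    flags a candidate MCA pattern for further review.
    """
    single, pair = Counter(), Counter()
    for defects in cases:
        single.update(defects)
        pair.update(frozenset(p) for p in combinations(sorted(defects), 2))
    ratios = {}
    for p, obs in pair.items():
        a, b = tuple(p)
        expected = n_total * (single[a] / n_total) * (single[b] / n_total)
        ratios[p] = obs / expected
    return ratios
```

The same counting generalizes to triples and beyond, which is where the registry-scale, automated prioritization mentioned above becomes useful.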

  20. Source location in plates based on the multiple sensors array method and wavelet analysis

    Yang, Hong Jun; Shin, Tae Jin; Lee, Sang Kwon [Inha University, Incheon (Korea, Republic of)]

    2014-01-01

    A new method for impact source localization in a plate is proposed based on the multiple signal classification (MUSIC) and wavelet analysis. For source localization, the direction of arrival of the wave caused by an impact on a plate and the distance between impact position and sensor should be estimated. The direction of arrival can be estimated accurately using MUSIC method. The distance can be obtained by using the time delay of arrival and the group velocity of the Lamb wave in a plate. Time delay is experimentally estimated using the continuous wavelet transform for the wave. The elasto dynamic theory is used for the group velocity estimation.
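The MUSIC ingredient of the method can be sketched for the simplest narrowband, uniform-linear-array case on synthetic data. The plate and Lamb-wave specifics, the wavelet time-delay step, and the group-velocity ranging are not modeled here:

```python
import numpy as np

def music_doa(X, n_sources, d=0.5):
    """Narrowband MUSIC pseudospectrum for a uniform linear array.

    X: (n_sensors, n_snapshots) complex snapshots; d: spacing in wavelengths.
    Returns the angle grid (degrees) and the MUSIC pseudospectrum.
    """
    m = X.shape[0]
    R = X @ X.conj().T / X.shape[1]            # sample covariance
    _, vecs = np.linalg.eigh(R)                # eigenvalues in ascending order
    En = vecs[:, : m - n_sources]              # noise subspace
    angles = np.linspace(-90.0, 90.0, 721)
    a = np.exp(-2j * np.pi * d *
               np.outer(np.arange(m), np.sin(np.radians(angles))))
    return angles, 1.0 / np.sum(np.abs(En.conj().T @ a) ** 2, axis=0)

# Synthetic check: one source at +20 degrees, 8 sensors, light noise
rng = np.random.default_rng(1)
m, n, theta = 8, 200, 20.0
steer = np.exp(-2j * np.pi * 0.5 * np.arange(m) * np.sin(np.radians(theta)))
s = rng.standard_normal(n) + 1j * rng.standard_normal(n)
noise = 0.1 * (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n)))
X = np.outer(steer, s) + noise
angles, spectrum = music_doa(X, n_sources=1)
estimate = angles[np.argmax(spectrum)]         # peaks near 20 degrees
```

The pseudospectrum peaks where the steering vector is orthogonal to the noise subspace; combining such a direction estimate with a wavelet-derived time delay and the Lamb-wave group velocity gives the range, as the abstract describes.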

  2. Simple method for the generation of multiple homogeneous field volumes inside the bore of superconducting magnets.

    Chou, Ching-Yu; Ferrage, Fabien; Aubert, Guy; Sakellariou, Dimitris

    2015-07-17

    Standard Magnetic Resonance magnets produce a single homogeneous field volume, where the analysis is performed. Nonetheless, several modern applications could benefit from the generation of multiple homogeneous field volumes along the axis and inside the bore of the magnet. In this communication, we propose a straightforward method using a combination of ring structures of permanent magnets in order to cancel the gradient of the stray field in a series of distinct volumes. These concepts were demonstrated numerically on an experimentally measured magnetic field profile. We discuss advantages and limitations of our method and present the key steps required for an experimental validation.

  3. A novel EMD selecting thresholding method based on multiple iteration for denoising LIDAR signal

    Li, Meng; Jiang, Li-hui; Xiong, Xing-long

    2015-06-01

    The empirical mode decomposition (EMD) approach is believed to be potentially useful for processing nonlinear and non-stationary LIDAR signals. To shed further light on its performance, we propose the EMD selecting thresholding method based on multiple iteration, which is essentially a development of EMD interval thresholding (EMD-IT): it randomly alters the samples of the noisy parts of all the corrupted intrinsic mode functions to generate a better iteration effect. Simulations on both synthetic signals and real-world LIDAR signals support this method.

  4. Power-efficient method for IM-DD optical transmission of multiple OFDM signals.

    Effenberger, Frank; Liu, Xiang

    2015-05-18

    We propose a power-efficient method for transmitting multiple frequency-division multiplexed (FDM) orthogonal frequency-division multiplexing (OFDM) signals in intensity-modulation direct-detection (IM-DD) optical systems. This method is based on quadratic soft clipping in combination with odd-only channel mapping. We show, both analytically and experimentally, that the proposed approach is capable of improving the power efficiency by about 3 dB as compared to conventional FDM OFDM signals under practical bias conditions, making it a viable solution in applications such as optical fiber-wireless integrated systems where both IM-DD optical transmission and OFDM signaling are important.

  5. Multiple travelling wave solutions of nonlinear evolution equations using a unified algebraic method

    Fan Engui

    2002-01-01

    A new direct and unified algebraic method for constructing multiple travelling wave solutions of general nonlinear evolution equations is presented and implemented in a computer algebra system. Compared with most of the existing tanh methods, the Jacobi elliptic function method and other sophisticated methods, the proposed method not only gives new and more general solutions, but also provides a guideline for classifying the various types of travelling wave solutions according to the values of some parameters. The solutions obtained in this paper include (a) kink-shaped and bell-shaped soliton solutions, (b) rational solutions, (c) triangular periodic solutions and (d) Jacobi and Weierstrass doubly periodic wave solutions. Among them, the Jacobi elliptic periodic wave solutions degenerate exactly to the soliton solutions at a certain limit condition. The efficiency of the method can be demonstrated on a large variety of nonlinear evolution equations such as those considered in this paper: the KdV-MKdV, Ito's fifth-order MKdV, Hirota, Nizhnik-Novikov-Veselov, Broer-Kaup, generalized coupled Hirota-Satsuma, coupled Schroedinger-KdV, (2+1)-dimensional dispersive long wave, and (2+1)-dimensional Davey-Stewartson equations. In addition, as an illustrative sample, the properties of the soliton solutions and Jacobi doubly periodic solutions for the Hirota equation are shown in some figures. The links among the proposed method, the tanh method, the extended tanh method and the Jacobi elliptic function method are also clarified.

  6. Multiple-Features-Based Semisupervised Clustering DDoS Detection Method

    Yonghao Gu

    2017-01-01

    A DDoS attack stream converging from different agent hosts onto a victim host can become very large, leading to system halt or network congestion. Therefore, it is necessary to propose an effective method to detect DDoS attack behavior in massive data streams. To address the problems that large amounts of labeled data are not available for supervised learning methods, and that the unsupervised k-means algorithm has relatively low detection accuracy and convergence speed, this paper presents a semisupervised clustering detection method using multiple features. In this detection method, we first select three features according to the characteristics of DDoS attacks to form the detection feature vector. Then, a Multiple-Features-Based Constrained-K-Means (MF-CKM) algorithm is proposed based on semisupervised clustering. Finally, using the MIT Laboratory Scenario (DDoS) 1.0 data set, we verify that the proposed method can improve the convergence speed and accuracy of the algorithm while using only a small amount of labeled data.
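The semisupervised-seeding idea (a small labeled set guiding an otherwise unsupervised k-means) can be sketched as follows. This is a simplified seeded variant, not the exact MF-CKM constraint scheme, and the feature vectors are assumed given:

```python
import numpy as np

def seeded_kmeans(X, labeled_X, labeled_y, n_iter=20):
    """k-means with centroids seeded from a small labeled set.

    A simplified stand-in for constrained k-means: the labeled points fix
    the initial centroids (and the meaning of each cluster id) but are not
    hard-pinned to their clusters during the iterations.
    """
    k = len(np.unique(labeled_y))
    centers = np.array([labeled_X[labeled_y == c].mean(axis=0)
                        for c in range(k)])
    for _ in range(n_iter):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        for c in range(k):
            if np.any(assign == c):
                centers[c] = X[assign == c].mean(axis=0)
    return assign, centers
```

Seeding removes the random-initialization sensitivity that slows plain k-means convergence, which is the practical benefit the abstract reports from using a small labeled set.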

  7. Logistic Regression with Multiple Random Effects: A Simulation Study of Estimation Methods and Statistical Packages.

    Kim, Yoonsang; Choi, Young-Ku; Emery, Sherry

    2013-08-01

    Several statistical packages are capable of estimating generalized linear mixed models, and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods' performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performance of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages, SAS GLIMMIX with Laplace and SuperMix with Gaussian quadrature, perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes.

  8. A consensus successive projections algorithm--multiple linear regression method for analyzing near infrared spectra.

    Liu, Ke; Chen, Xiaojing; Li, Limin; Chen, Huiling; Ruan, Xiukai; Liu, Wenbin

    2015-02-09

    The successive projections algorithm (SPA) is widely used to select variables for multiple linear regression (MLR) modeling. However, SPA used only once may not capture all the useful information in the full spectra, because the number of selected variables cannot exceed the number of calibration samples in the SPA algorithm. Therefore, the SPA-MLR method risks the loss of useful information. To make full use of the useful information in the spectra, a new method named "consensus SPA-MLR" (C-SPA-MLR) is proposed herein. This method is the combination of the consensus strategy and the SPA-MLR method. In the C-SPA-MLR method, SPA-MLR is used to construct member models with different subsets of variables, which are selected from the remaining variables iteratively. A consensus prediction is obtained by combining the predictions of the member models. The proposed method is evaluated by analyzing the near infrared (NIR) spectra of corn and diesel. The results of the C-SPA-MLR method showed better prediction performance compared with the SPA-MLR and full-spectrum PLS methods. Moreover, these results could serve as a reference for combining the consensus strategy with other variable selection methods when analyzing NIR spectra and other spectroscopic data. Copyright © 2014 Elsevier B.V. All rights reserved.

  9. Meta-analysis methods for combining multiple expression profiles: comparisons, statistical characterization and an application guideline.

    Chang, Lun-Ching; Lin, Hui-Min; Sibille, Etienne; Tseng, George C

    2013-12-21

    As high-throughput genomic technologies become accurate and affordable, an increasing number of data sets have been accumulated in the public domain and genomic information integration and meta-analysis have become routine in biomedical research. In this paper, we focus on microarray meta-analysis, where multiple microarray studies with relevant biological hypotheses are combined in order to improve candidate marker detection. Many methods have been developed and applied in the literature, but their performance and properties have only been minimally investigated. There is currently no clear conclusion or guideline as to the proper choice of a meta-analysis method given an application; the decision essentially requires both statistical and biological considerations. We performed 12 microarray meta-analysis methods for combining multiple simulated expression profiles, and such methods can be categorized for different hypothesis setting purposes: (1) HS(A): DE genes with non-zero effect sizes in all studies, (2) HS(B): DE genes with non-zero effect sizes in one or more studies and (3) HS(r): DE gene with non-zero effect in "majority" of studies. We then performed a comprehensive comparative analysis through six large-scale real applications using four quantitative statistical evaluation criteria: detection capability, biological association, stability and robustness. We elucidated hypothesis settings behind the methods and further apply multi-dimensional scaling (MDS) and an entropy measure to characterize the meta-analysis methods and data structure, respectively. The aggregated results from the simulation study categorized the 12 methods into three hypothesis settings (HS(A), HS(B), and HS(r)). Evaluation in real data and results from MDS and entropy analyses provided an insightful and practical guideline to the choice of the most suitable method in a given application. 
All source files for simulation and real data are available on the author's publication website.
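One classic member of this family of p-value combination methods (targeting the HS(B) setting, where a gene may be differentially expressed in only some studies) is Fisher's method. Below is a minimal, self-contained sketch, not the authors' code; it uses the closed-form chi-square survival function that exists for even degrees of freedom 2k.

```python
import math

def fisher_combine(p_values):
    """Combine k independent p-values with Fisher's method.

    The statistic X = -2 * sum(ln p_i) is chi-square with 2k degrees
    of freedom under the joint null. For df = 2k the survival function
    has the closed form exp(-x/2) * sum_{j<k} (x/2)^j / j!.
    """
    k = len(p_values)
    x = -2.0 * sum(math.log(p) for p in p_values)
    half = x / 2.0
    term, total = 1.0, 0.0
    for j in range(k):
        if j > 0:
            term *= half / j   # (x/2)^j / j!, built incrementally
        total += term
    return math.exp(-half) * total

# Two moderately small p-values combine into stronger joint evidence
print(fisher_combine([0.01, 0.02]))
```

Note that Fisher's method can declare significance even when no single study is individually significant, which is exactly why the choice between HS(A)-type and HS(B)-type methods matters in practice.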

  10. A neutron multiplicity analysis method for uranium samples with liquid scintillators

    Zhou, Hao, E-mail: zhouhao_ciae@126.com [China Institute of Atomic Energy, P.O.BOX 275-8, Beijing 102413 (China); Lin, Hongtao [Xi'an Research Institute of High-tech, Xi'an, Shaanxi 710025 (China)]; Liu, Guorong; Li, Jinghuai; Liang, Qinglei; Zhao, Yonggang [China Institute of Atomic Energy, P.O.BOX 275-8, Beijing 102413 (China)]

    2015-10-11

    A new neutron multiplicity analysis method for uranium samples with liquid scintillators is introduced. An active well-type fast neutron multiplicity counter has been built, which consists of four BC501A liquid scintillators, an n/γ discrimination module MPD-4, a multi-stop time-to-digital converter MCS6A, and two Am–Li sources. A mathematical model is built to symbolize the detection processes of fission neutrons. Based on this model, equations in the form of R=F*P*Q*T could be achieved, where F indicates the induced fission rate by the interrogation sources, P indicates the transfer matrix determined by the multiplication process, Q indicates the transfer matrix determined by the detection efficiency, and T indicates the transfer matrix determined by the signal recording process and crosstalk in the counter. Unknown parameters of the item are determined by the solutions of the equations. A ²⁵²Cf source and some low enriched uranium items have been measured. The feasibility of the method is proven by its application to the data analysis of the experiments.
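If the transfer matrices P, Q and T are known, the R=F*P*Q*T structure makes the unknown fission rate F a scalar that can be recovered by least squares. The sketch below is purely illustrative (the matrices are synthetic placeholders, not detector physics), showing only the algebraic step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the multiplication (P), detection-efficiency (Q)
# and signal-recording/crosstalk (T) transfer matrices.
P = rng.random((4, 4))
Q = rng.random((4, 4))
T = rng.random((4, 4))
M = P @ Q @ T              # combined response, so R = F * M

R = 3.0 * M                # "measured" rates generated with true F = 3.0

# Least-squares estimate of the scalar fission rate F = <M, R> / <M, M>
F_hat = np.vdot(M, R) / np.vdot(M, M)
print(F_hat)
```

In a real analysis R would come from the measured multiplicity histogram and would carry counting noise, so F_hat would be an estimate rather than exact.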

  11. A novel method for the sequential removal and separation of multiple heavy metals from wastewater.

    Fang, Li; Li, Liang; Qu, Zan; Xu, Haomiao; Xu, Jianfang; Yan, Naiqiang

    2018-01-15

    A novel method was developed and applied for the treatment of simulated wastewater containing multiple heavy metals. A sorbent of ZnS nanocrystals (NCs) was synthesized and showed extraordinary performance for the removal of Hg²⁺, Cu²⁺, Pb²⁺ and Cd²⁺. The removal efficiencies of Hg²⁺, Cu²⁺, Pb²⁺ and Cd²⁺ were 99.9%, 99.9%, 90.8% and 66.3%, respectively. Meanwhile, it was determined that the solubility product (Ksp) of the heavy metal sulfides was closely related to the adsorption selectivity of the various heavy metals on the sorbent. The removal efficiency of Hg²⁺ was higher than that of Cd²⁺, while the Ksp of HgS was lower than that of CdS. This indicated that preferential adsorption of a heavy metal occurred when the Ksp of its sulfide was lower. In addition, the differences in the Ksp of the heavy metal sulfides allowed for the exchange of heavy metals, indicating the potential application for the sequential removal and separation of heavy metals from wastewater. According to the cumulative adsorption experimental results, multiple heavy metals were sequentially adsorbed and separated from the simulated wastewater in the order of the Ksp of their sulfides. This method holds the promise of sequentially removing and separating multiple heavy metals from wastewater. Copyright © 2017 Elsevier B.V. All rights reserved.
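The selectivity rule described above can be sketched as a simple sort by sulfide Ksp. The values below are approximate literature orders of magnitude only (exact figures vary between references), but their ordering matches the reported removal-efficiency ranking:

```python
# Approximate solubility products of the metal sulfides
# (orders of magnitude only; exact values vary between references).
ksp_sulfide = {
    "Hg2+": 4e-53,   # HgS
    "Cu2+": 6e-36,   # CuS
    "Pb2+": 3e-28,   # PbS
    "Cd2+": 8e-27,   # CdS
}

# Lower Ksp -> stronger thermodynamic driving force for sulfide
# exchange on ZnS, so the predicted removal order is ascending Ksp.
removal_order = sorted(ksp_sulfide, key=ksp_sulfide.get)
print(removal_order)
```

This ordering (Hg²⁺ first, Cd²⁺ last) is consistent with the 99.9% → 66.3% efficiency gradient reported in the abstract.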

  12. Resampling-based methods in single and multiple testing for equality of covariance/correlation matrices.

    Yang, Yang; DeGruttola, Victor

    2012-06-22

    Traditional resampling-based tests for homogeneity in covariance matrices across multiple groups resample residuals, that is, data centered by group means. These residuals do not share the same second moments when the null hypothesis is false, which makes them difficult to use in the setting of multiple testing. An alternative approach is to resample standardized residuals, data centered by group sample means and standardized by group sample covariance matrices. This approach, however, has been observed to inflate type I error when sample size is small or data are generated from heavy-tailed distributions. We propose to improve this approach by using robust estimation for the first and second moments. We discuss two statistics: the Bartlett statistic and a statistic based on eigen-decomposition of sample covariance matrices. Both statistics can be expressed in terms of standardized errors under the null hypothesis. These methods are extended to test homogeneity in correlation matrices. Using simulation studies, we demonstrate that the robust resampling approach provides comparable or superior performance, relative to traditional approaches, for single testing and reasonable performance for multiple testing. The proposed methods are applied to data collected in an HIV vaccine trial to investigate possible determinants, including vaccine status, vaccine-induced immune response level and viral genotype, of unusual correlation pattern between HIV viral load and CD4 count in newly infected patients.
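As a concrete illustration of a resampling test for covariance homogeneity, the sketch below computes a Bartlett/Box-type statistic and obtains a p-value by permuting group labels of the centered rows. This is a generic label-permutation version for intuition only, not the authors' robust standardized-residual approach.

```python
import numpy as np

def box_m(groups):
    """Bartlett/Box-type statistic for equality of covariance matrices:
    (N - g) * ln|S_pooled| - sum_i (n_i - 1) * ln|S_i|. It is >= 0 by
    concavity of log-determinant, and 0 when all S_i are equal."""
    ns = [len(x) for x in groups]
    covs = [np.cov(x, rowvar=False) for x in groups]
    dof = sum(ns) - len(groups)
    pooled = sum((n - 1) * s for n, s in zip(ns, covs)) / dof
    stat = dof * np.linalg.slogdet(pooled)[1]
    stat -= sum((n - 1) * np.linalg.slogdet(s)[1] for n, s in zip(ns, covs))
    return stat

def permutation_test(groups, n_perm=200, seed=0):
    """Permute group labels of the group-mean-centered rows and
    recompute the statistic to build a null reference distribution."""
    rng = np.random.default_rng(seed)
    sizes = [len(x) for x in groups]
    centered = np.vstack([x - x.mean(axis=0) for x in groups])
    observed = box_m(groups)
    cuts = np.cumsum(sizes)[:-1]
    count = sum(
        box_m(np.split(rng.permutation(centered), cuts)) >= observed
        for _ in range(n_perm)
    )
    return observed, (count + 1) / (n_perm + 1)

rng = np.random.default_rng(1)
a = rng.normal(size=(40, 3))
b = rng.normal(size=(40, 3))
stat, p = permutation_test([a, b])
```

The centering step mirrors the residual-resampling idea discussed in the abstract; the robust-moment refinement the authors propose would replace the sample means and covariances with robust estimates.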

  13. A mixed methods study of multiple health behaviors among individuals with stroke

    Matthew Plow

    2017-05-01

    Full Text Available Background Individuals with stroke often have multiple cardiovascular risk factors that necessitate promoting engagement in multiple health behaviors. However, observational studies of individuals with stroke have typically focused on promoting a single health behavior. Thus, there is a poor understanding of linkages between healthy behaviors and the circumstances in which factors, such as stroke impairments, may influence a single or multiple health behaviors. Methods We conducted a mixed methods convergent parallel study of 25 individuals with stroke to examine the relationships between stroke impairments and physical activity, sleep, and nutrition. Our goal was to gain further insight into possible strategies to promote multiple health behaviors among individuals with stroke. This study focused on physical activity, sleep, and nutrition because of their importance in achieving energy balance, maintaining a healthy weight, and reducing cardiovascular risks. Qualitative and quantitative data were collected concurrently, with the former being prioritized over the latter. Qualitative data were prioritized in order to develop a conceptual model of engagement in multiple health behaviors among individuals with stroke. Qualitative and quantitative data were analyzed independently and then were integrated during the inference stage to develop meta-inferences. The 25 individuals with stroke completed closed-ended questionnaires on healthy behaviors and physical function. They also participated in face-to-face focus groups and one-to-one phone interviews. Results We found statistically significant and moderate correlations between hand function and healthy eating habits (r = 0.45), sleep disturbances and limitations in activities of daily living (r = -0.55), BMI and limitations in activities of daily living (r = -0.49), physical activity and limitations in activities of daily living (r = 0.41), and mobility impairments and BMI (r = -0.41).

  14. A new fast method for inferring multiple consensus trees using k-medoids.

    Tahiri, Nadia; Willems, Matthieu; Makarenkov, Vladimir

    2018-04-05

    Gene trees carry important information about specific evolutionary patterns which characterize the evolution of the corresponding gene families. However, a reliable species consensus tree cannot be inferred from a multiple sequence alignment of a single gene family or from the concatenation of alignments corresponding to gene families having different evolutionary histories. These evolutionary histories can be quite different due to horizontal transfer events or to ancient gene duplications which cause the emergence of paralogs within a genome. Many methods have been proposed to infer a single consensus tree from a collection of gene trees. Still, the application of these tree merging methods can lead to the loss of specific evolutionary patterns which characterize some gene families or some groups of gene families. Thus, the problem of inferring multiple consensus trees from a given set of gene trees becomes relevant. We describe a new fast method for inferring multiple consensus trees from a given set of phylogenetic trees (i.e. additive trees or X-trees) defined on the same set of species (i.e. objects or taxa). The traditional consensus approach yields a single consensus tree. We use the popular k-medoids partitioning algorithm to divide a given set of trees into several clusters of trees. We propose novel versions of the well-known Silhouette and Caliński-Harabasz cluster validity indices that are adapted for tree clustering with k-medoids. The efficiency of the new method was assessed using both synthetic and real data, such as a well-known phylogenetic dataset consisting of 47 gene trees inferred for 14 archaeal organisms. The method described here allows inference of multiple consensus trees from a given set of gene trees. It can be used to identify groups of gene trees having similar intragroup and different intergroup evolutionary histories. The main advantage of our method is that it is much faster than the existing tree clustering approaches, while
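The partitioning step the method relies on can be illustrated with a plain k-medoids loop over a precomputed distance matrix. This is a generic sketch, not the authors' implementation, and it omits their modified Silhouette and Caliński-Harabasz indices; for trees, the matrix D would hold pairwise tree distances such as Robinson-Foulds.

```python
import numpy as np

def k_medoids(D, k, n_iter=100):
    """Plain k-medoids on a symmetric distance matrix D.

    Starts from the first k points as medoids, then alternates
    assignment and medoid-update steps until the medoids stabilize.
    """
    medoids = list(range(k))
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)
        new_medoids = []
        for c in range(k):
            members = np.flatnonzero(labels == c)
            # new medoid = member minimizing total distance to its cluster
            costs = D[np.ix_(members, members)].sum(axis=1)
            new_medoids.append(members[np.argmin(costs)])
        if new_medoids == medoids:
            break
        medoids = new_medoids
    return np.argmin(D[:, medoids], axis=1), medoids

# Toy example: five "trees" whose pairwise distances form two groups
x = np.array([0.0, 0.1, 0.2, 10.0, 10.1])
D = np.abs(x[:, None] - x[None, :])
labels, medoids = k_medoids(D, 2)
```

Each resulting cluster would then be summarized by its own consensus tree, giving the multiple consensus trees the abstract describes.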

  15. An improved early detection method of type-2 diabetes mellitus using multiple classifier system

    Zhu, Jia

    2015-01-01

    The specific causes of complex diseases such as Type-2 Diabetes Mellitus (T2DM) have not yet been identified. Nevertheless, many medical science researchers believe that complex diseases are caused by a combination of genetic, environmental, and lifestyle factors. Detection of such diseases becomes an issue because it is not free from false presumptions and is accompanied by unpredictable effects. Given the greatly increased amount of data gathered in medical databases, data mining has been used widely in recent years to detect and improve the diagnosis of complex diseases. However, past research showed that no single classifier can be considered optimal for all problems. Therefore, in this paper, we focus on employing multiple classifier systems to improve the accuracy of detection for complex diseases, such as T2DM. We proposed a dynamic weighted voting scheme called multiple factors weighted combination for classifiers' decision combination. This method considers not only the local and global accuracy but also the diversity among classifiers and localized generalization error of each classifier. We evaluated our method on two real T2DM data sets and other medical data sets. The favorable results indicated that our proposed method significantly outperforms individual classifiers and other fusion methods.
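The core of any weighted-voting combiner can be sketched in a few lines. The version below weights each classifier only by a fixed accuracy score; the paper's scheme additionally folds in diversity and localized generalization error, which are omitted here, and the class labels are invented for illustration.

```python
from collections import defaultdict

def weighted_vote(predictions, weights):
    """Combine class predictions from several classifiers by weighted
    voting: each classifier contributes its weight to the class it
    predicts, and the class with the largest total weight wins."""
    scores = defaultdict(float)
    for label, w in zip(predictions, weights):
        scores[label] += w
    return max(scores, key=scores.get)

# Three classifiers disagree; the two weaker ones outvote the strongest
print(weighted_vote(["diabetic", "healthy", "healthy"], [0.80, 0.65, 0.62]))
```

Making the weights depend on the query point (local accuracy) rather than being global constants is what turns this into the dynamic scheme the abstract describes.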

  16. Direct integration multiple collision integral transport analysis method for high energy fusion neutronics

    Koch, K.R.

    1985-01-01

    A new analysis method specially suited for the inherent difficulties of fusion neutronics was developed to provide detailed studies of the fusion neutron transport physics. These studies should provide a better understanding of the limitations and accuracies of typical fusion neutronics calculations. The new analysis method is based on the direct integration of the integral form of the neutron transport equation and employs a continuous energy formulation with the exact treatment of the energy-angle kinematics of the scattering process. In addition, the overall solution is analyzed in terms of uncollided, once-collided, and multi-collided solution components based on a multiple collision treatment. Furthermore, the numerical evaluations of integrals use quadrature schemes that are based on the actual dependencies exhibited in the integrands. The new DITRAN computer code was developed on the Cyber 205 vector supercomputer to implement this direct integration multiple-collision fusion neutronics analysis. Three representative fusion reactor models were devised and the solutions to these problems were studied to provide suitable choices for the numerical quadrature orders as well as the discretized solution grid and to understand the limitations of the new analysis method. As further verification and as a first step in assessing the accuracy of existing fusion-neutronics calculations, solutions obtained using the new analysis method were compared to typical multigroup discrete ordinates calculations.

  17. Logistic Regression with Multiple Random Effects: A Simulation Study of Estimation Methods and Statistical Packages

    Kim, Yoonsang; Emery, Sherry

    2013-01-01

    Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods’ performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages—SAS GLIMMIX Laplace and SuperMix Gaussian quadrature—perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes. PMID:24288415
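The Gauss-Hermite approach the abstract compares can be illustrated on the simplest case: the marginal likelihood of one cluster under a random-intercept logistic model. This is an illustrative quadrature core under stated assumptions (known sigma, one random effect), not any package's implementation; a full fit would maximize the summed log likelihood over all clusters and parameters.

```python
import numpy as np

def cluster_likelihood(y, eta, sigma, n_nodes=20):
    """Marginal likelihood of one cluster's binary responses under a
    random-intercept logistic model, via Gauss-Hermite quadrature.

    Integrates prod_j p(y_j | eta_j + b) against N(b; 0, sigma^2):
    substituting b = sqrt(2)*sigma*x turns the integral into
    (1/sqrt(pi)) * sum_k w_k * prod_j p(y_j | eta_j + sqrt(2)*sigma*x_k).
    y: 0/1 responses; eta: fixed-effect linear predictors.
    """
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    b = np.sqrt(2.0) * sigma * nodes              # quadrature points for b
    lin = np.asarray(eta)[:, None] + b[None, :]   # predictor at each node
    p = 1.0 / (1.0 + np.exp(-lin))                # logistic probabilities
    per_node = np.prod(np.where(np.asarray(y)[:, None] == 1, p, 1 - p), axis=0)
    return float(weights @ per_node / np.sqrt(np.pi))
```

With many correlated random effects the integral becomes multi-dimensional and the quadrature grid grows exponentially, which is precisely why package behavior diverges in the setting the article studies.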

  18. Method for Multiple Targets Tracking in Cognitive Radar Based on Compressed Sensing

    Yang Jun

    2016-02-01

    Full Text Available A multiple targets cognitive radar tracking method based on Compressed Sensing (CS) is proposed. In this method, the theory of CS is introduced to the cognitive radar tracking process in a multiple targets scenario. The echo signal is sparsely expressed. The designs of the sparse matrix and measurement matrix are accomplished by expressing the echo signal sparsely, and subsequently, the reconstruction of the measurement signal under the down-sampling condition is realized. On the receiving end, considering that the traditional particle filter suffers from degeneracy and requires a large number of particles, the particle swarm optimization particle filter is used to track the targets. On the transmitting end, the Posterior Cramér-Rao Bound (PCRB) of the tracking accuracy is deduced, and the radar waveform parameters are further cognitively designed using the PCRB. Simulation results show that the proposed method can not only reduce the data quantity, but also provide better tracking performance compared with the traditional method.
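The CS reconstruction step mentioned above is commonly solved with a greedy algorithm such as Orthogonal Matching Pursuit. The sketch below is a standard OMP implementation on a synthetic problem (the paper does not specify its solver, so this is illustrative only); the sensing matrix is given orthonormal columns so recovery is exact.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedy sparse recovery of x from
    y = A @ x. Each step picks the column most correlated with the
    residual, then re-fits all selected coefficients by least squares."""
    support, residual = [], y.copy()
    x = np.zeros(A.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# Sensing matrix with orthonormal columns (QR of a random matrix)
rng = np.random.default_rng(0)
A, _ = np.linalg.qr(rng.normal(size=(30, 10)))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]     # 2-sparse signal
x_hat = omp(A, A @ x_true, k=2)
```

The "reduce the data quantity" claim in the abstract corresponds to the down-sampling: here 30 measurements suffice for a 2-sparse signal, and in CS radar the measurement count scales with sparsity rather than bandwidth.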

  19. Multiple alignment analysis on phylogenetic tree of the spread of SARS epidemic using distance method

    Amiroch, S.; Pradana, M. S.; Irawan, M. I.; Mukhlash, I.

    2017-09-01

    Multiple Alignment (MA) is a particularly important tool for studying the viral genome and determining the evolutionary process of a specific virus. Application of MA to the case of the spread of the Severe Acute Respiratory Syndrome (SARS) epidemic is interesting because this viral epidemic spread so quickly a few years ago that it drew medical attention in many countries. Although much software exists for processing multiple sequences, the use of pairwise alignment to build the MA is very important to consider. Previous research aligned the sequences for the MA with the Super Pairwise Alignment algorithm; in this study we instead used the Needleman-Wunsch dynamic programming algorithm, simulated in Matlab. From the analysis of the MA we obtained the stable and unstable regions, which indicate the positions where mutations occur, the phylogenetic tree of the SARS epidemic constructed with the distance method, and the topology of the mutation network.
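The Needleman-Wunsch recurrence used for the pairwise step can be written compactly; the study used Matlab, so this Python version is an illustrative equivalent with a simple match/mismatch/gap scoring scheme (the paper's actual scoring parameters are not given).

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score by the Needleman-Wunsch DP recurrence:
    F[i][j] = max(F[i-1][j-1] + s(a_i, b_j),   # (mis)match
                  F[i-1][j]   + gap,           # gap in b
                  F[i][j-1]   + gap)           # gap in a
    """
    n, m = len(a), len(b)
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap
    for j in range(1, m + 1):
        F[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i][j] = max(F[i - 1][j - 1] + s,
                          F[i - 1][j] + gap,
                          F[i][j - 1] + gap)
    return F[n][m]

print(needleman_wunsch("ACGT", "ACGT"))   # identical sequences score 4
```

A traceback through F would recover the alignment itself; repeating this over all sequence pairs supplies the inputs from which the MA, and then the distance-method phylogenetic tree, are built.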

  20. Selective removal of water in purge and cold-trap capillary gas chromatographic analysis of volatile organic traces in aqueous samples

    Noij, T.H.M.; van Es, A.J.J.; Cramers, C.A.M.G.; Rijks, J.A.; Dooper, R.P.M.

    1987-01-01

    The design and features of an on-line purge and cold-trap pre-concentration device for rapid analysis of volatile organic compounds in aqueous samples are discussed. Excessive water is removed from the purge gas by a condenser or a water permeable membrane in order to avoid blocking of the capillary

  1. Krohne Flow Indicator and High Flow Alarm - Local Indicator and High Flow Alarm of Helium Flow from the SCHe Purge Lines C and D to the Process Vent

    MISKA, C.R.

    2000-01-01

    Flow Indicators/alarms FI/FSH-5*52 and -5*72 are located in the process vent lines connected to the 2 psig SCHe purge lines C and D. They monitor the flow from the 2 psig SCHe purge going to the process vent. The switch/alarm is non-safety class GS

  2. BIOFEEDBACK: A NEW METHOD FOR CORRECTION OF MOTOR DISORDERS IN PATIENTS WITH MULTIPLE SCLEROSIS

    Ya. S. Pekker

    2014-01-01

    Full Text Available Motor disorders are among the major disabling factors in multiple sclerosis, and their rehabilitation is one of the most important medical and social problems. Currently, much attention is given to the development of methods for the correction of motor disorders based on accessing the natural resources of the human body. One of these methods is adaptive control with biofeedback (BFB). The aim of our study was the correction of motor disorders in multiple sclerosis patients using biofeedback training. In the study, we developed training scenarios for a computer EMG-biofeedback rehabilitation program aimed at the correction of motor disorders in patients with multiple sclerosis (MS). The method was tested in the neurological clinic of SSMU. The study included 9 patients with a definite diagnosis of MS and a clinical picture of combined pyramidal and cerebellar symptoms. The effectiveness of the rehabilitation procedures with biofeedback training was assessed using specialized scales (the Kurtzke Functional Systems Scale; the SF-36 quality-of-life questionnaire; the Sickness Impact Profile, SIP; and the Fatigue Severity Scale, FSS). In the studied group of patients, the fatigue score (FSS) decreased, while motor control (SIP2) and the physical and mental components of health (SF-36) increased. There was a tendency toward a reduction of the neurological deficit, reflected in lower scores for pyramidal disturbances on the Kurtzke scale. Analysis of the dynamics of the EMG biofeedback training course for the trained muscles indicates an increase in the recorded OEMG signal from session to session. A tendency toward increased strength and coordination of the trained muscles was demonstrated in the studied patients. The positive results of biofeedback therapy in patients with MS suggest that this method can be recommended as part of complex rehabilitation measures to correct motor and psycho-emotional disorders.

  3. Method for Collision Avoidance Motion Coordination of Multiple Mobile Robots Using Central Observation

    Ko, N.Y.; Seo, D.J. [Chosun University, Kwangju (Korea)

    2003-04-01

    This paper presents a new method for driving multiple robots to their goal positions without collision. Each robot adjusts its motion based on information about the goal locations and about the velocity and position of itself and the other robots. To consider the movement of the robots in a work area, we adopt the concept of an avoidability measure. The avoidability measure quantifies how easily a robot can avoid other robots, considering the following factors: the distance from the robot to the other robots, and the velocities of the robot and the other robots. To implement the concept in moving-robot avoidance, the relative distance between the robots is derived. Our method combines the relative distance with an artificial potential field method. The proposed method is simulated for several cases. The results show that the proposed method steers robots into open space, anticipating the approach of other robots. In contrast, the usual potential field method sometimes fails to prevent collision or causes hasty motion, because it initiates avoidance motion later than the proposed method. The proposed method can be used to move robots in a robot soccer team to their appropriate positions without collision as fast as possible. (author). 21 refs., 10 figs., 13 tabs.
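The baseline the paper improves on, the classic artificial potential field, combines an attractive pull toward the goal with repulsive pushes from obstacles inside an influence radius. The sketch below is the plain potential-field rule only; the paper's avoidability measure, which also uses relative velocity, is omitted, and all gains and positions are illustrative.

```python
import numpy as np

def potential_field_step(pos, goal, obstacles, k_att=1.0, k_rep=1.0, d0=2.0):
    """One force evaluation of the classic artificial-potential-field
    rule: attraction toward the goal plus repulsion from every obstacle
    closer than the influence distance d0."""
    force = k_att * (goal - pos)                       # attraction
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 0 < d < d0:                                 # inside influence zone
            # standard repulsive-gradient magnitude, directed away from obs
            force += k_rep * (1.0 / d - 1.0 / d0) / d**2 * diff / d
    return force

pos = np.array([0.0, 0.0])
goal = np.array([10.0, 0.0])
obstacles = [np.array([1.0, 0.5])]    # another robot treated as an obstacle
f = potential_field_step(pos, goal, obstacles)
```

Because repulsion only switches on once another robot is inside d0, this baseline reacts late, which is exactly the weakness the avoidability measure addresses by anticipating approaching robots from their velocities.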

  4. Should methods of correction for multiple comparisons be applied in pharmacovigilance?

    Lorenza Scotti

    2015-12-01

    Full Text Available Purpose. In pharmacovigilance, spontaneous reporting databases are devoted to the early detection of adverse event 'signals' of marketed drugs. A common limitation of these systems is the wide number of concurrently investigated associations, implying a high probability of generating positive signals simply by chance. However, it is not clear whether methods aimed at adjusting for the multiple testing problem are needed when at least some of the drug-outcome relationships under study are known. To this aim we applied a robust estimation method for the FDR (rFDR) particularly suitable in the pharmacovigilance context. Methods. We exploited the data available for the SAFEGUARD project to apply the rFDR estimation method to detect potential false positive signals of adverse reactions attributable to the use of non-insulin blood glucose lowering drugs. Specifically, the number of signals generated from the conventional disproportionality measures was compared with the number remaining after application of the rFDR adjustment method. Results. Among the 311 evaluable pairs (i.e., drug-event pairs with at least one adverse event report), 106 (34%) signals were considered significant in the conventional analysis. Among them, 1 resulted in a false positive signal according to the rFDR method. Conclusions. The results of this study suggest that when a restricted number of drug-outcome pairs is considered and warnings about some of them are known, multiple comparisons methods for recognizing false positive signals are not as useful as theoretical considerations would suggest.
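For readers unfamiliar with FDR-based multiplicity adjustment, the simplest member of this family is the Benjamini-Hochberg step-up procedure, sketched below. It is illustrative of FDR control in general; the study uses a robust FDR estimator (rFDR) rather than plain BH, and the p-values here are invented.

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: sort the m p-values, find
    the largest rank k with p_(k) <= k * alpha / m, and reject the k
    hypotheses with the smallest p-values."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank * alpha / m:
            k = rank
    return sorted(order[:k])   # indices of rejected hypotheses

# Four drug-event pairs: three modest signals survive, one does not
print(benjamini_hochberg([0.01, 0.5, 0.02, 0.03]))
```

Note that BH still rejects p = 0.03 here even though a Bonferroni threshold (0.05/4 = 0.0125) would not, which is why FDR-based corrections are preferred when many weak signals are expected.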

  5. Spent nuclear fuel project cold vacuum drying facility vacuum and purge system design description

    IRWIN, J.J.

    1998-11-30

    This document provides the System Design Description (SDD) for the Cold Vacuum Drying Facility (CVDF) Vacuum and Purge System (VPS). The SDD was developed in conjunction with HNF-SD-SNF-SAR-O02, Safety Analysis Report for the Cold Vacuum Drying Facility, Phase 2, Supporting Installation of Processing Systems (Garvin 1998); HNF-SD-SNF-DRD-002, 1998, Cold Vacuum Drying Facility Design Requirements; and the CVDF Design Summary Report. The SDD contains general descriptions of the VPS equipment, the system functions, requirements and interfaces. The SDD provides references for design and fabrication details, operation sequences and maintenance. This SDD has been developed for the SNFP Operations Organization and shall be updated, expanded, and revised in accordance with future design, construction and startup phases of the CVDF until the CVDF final ORR is approved.

  6. Identification of Aroma Compounds of Lamiaceae Species in Turkey Using the Purge and Trap Technique

    Sonmezdag, Ahmet Salih; Kelebek, Hasim; Selli, Serkan

    2017-01-01

    The present research was planned to characterize the aroma composition of important members of the Lamiaceae family such as Salvia officinalis, Lavandula angustifolia and Mentha asiatica. Aroma components of the S. officinalis, L. angustifolia and M. asiatica were extracted with the purge and trap technique with dichloromethane and analyzed with the gas chromatography–mass spectrometry (GC–MS) technique. A total of 23, 33 and 33 aroma compounds were detected in Salvia officinalis, Lavandula angustifolia and Mentha asiatica, respectively, including acids, alcohols, aldehydes, esters, hydrocarbons and terpenes. Terpene compounds were both qualitatively and quantitatively the major chemical group among the identified aroma compounds, followed by esters. The main terpene compounds were 1,8-cineole, sabinene and linalool in Salvia officinalis, Lavandula angustifolia and Mentha asiatica, respectively. Among esters, linalyl acetate was the only and most important ester compound which was detected in all samples. PMID:28231089

  7. Identification of Aroma Compounds of Lamiaceae Species in Turkey Using the Purge and Trap Technique

    Ahmet Salih Sonmezdag

    2017-02-01

    Full Text Available The present research was planned to characterize the aroma composition of important members of the Lamiaceae family such as Salvia officinalis, Lavandula angustifolia and Mentha asiatica. Aroma components of the S. officinalis, L. angustifolia and M. asiatica were extracted with the purge and trap technique with dichloromethane and analyzed with the gas chromatography–mass spectrometry (GC–MS) technique. A total of 23, 33 and 33 aroma compounds were detected in Salvia officinalis, Lavandula angustifolia and Mentha asiatica, respectively, including acids, alcohols, aldehydes, esters, hydrocarbons and terpenes. Terpene compounds were both qualitatively and quantitatively the major chemical group among the identified aroma compounds, followed by esters. The main terpene compounds were 1,8-cineole, sabinene and linalool in Salvia officinalis, Lavandula angustifolia and Mentha asiatica, respectively. Among esters, linalyl acetate was the only and most important ester compound which was detected in all samples.

  8. Spent nuclear fuel project cold vacuum drying facility vacuum and purge system design description

    IRWIN, J.J.

    1998-01-01

    This document provides the System Design Description (SDD) for the Cold Vacuum Drying Facility (CVDF) Vacuum and Purge System (VPS). The SDD was developed in conjunction with HNF-SD-SNF-SAR-O02, Safety Analysis Report for the Cold Vacuum Drying Facility, Phase 2, Supporting Installation of Processing Systems (Garvin 1998); HNF-SD-SNF-DRD-002, 1998, Cold Vacuum Drying Facility Design Requirements; and the CVDF Design Summary Report. The SDD contains general descriptions of the VPS equipment, the system functions, requirements and interfaces. The SDD provides references for design and fabrication details, operation sequences and maintenance. This SDD has been developed for the SNFP Operations Organization and shall be updated, expanded, and revised in accordance with future design, construction and startup phases of the CVDF until the CVDF final ORR is approved.

  9. Comparing the index-flood and multiple-regression methods using L-moments

    Malekinezhad, H.; Nachtnebel, H. P.; Klik, A.

    In arid and semi-arid regions, the length of records is usually too short to ensure reliable quantile estimates. Comparing index-flood and multiple-regression analyses based on L-moments was the main objective of this study. Factor analysis was applied to determine the main variables influencing flood magnitude. Ward’s clustering and L-moments approaches were applied to several sites in the Namak-Lake basin in central Iran to delineate homogeneous regions based on site characteristics. The homogeneity test was done using L-moments-based measures. Several distributions were fitted to the regional flood data, and the index-flood and multiple-regression methods were compared as two regional flood frequency methods. The results of factor analysis showed that length of main waterway, compactness coefficient, mean annual precipitation, and mean annual temperature were the main variables affecting flood magnitude. The study area was divided into three regions based on Ward’s clustering approach. The homogeneity test based on L-moments showed that all three regions were acceptably homogeneous. Five distributions were fitted to the annual peak flood data of the three homogeneous regions. Using the L-moment ratios and the Z-statistic criteria, the GEV distribution was identified as the most robust among the five candidate distributions for all the proposed sub-regions of the study area; in general, it was concluded that the generalised extreme value distribution was the best-fit distribution for all three regions. The relative root mean square error (RRMSE) measure was applied for evaluating the performance of the index-flood and multiple-regression methods in comparison with the curve fitting (plotting position) method. In general, the index-flood method gives more reliable estimations for various flood magnitudes of different recurrence intervals. Therefore, this method should be adopted as the regional flood frequency method for the study area and the Namak-Lake basin
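The L-moments that both regional methods build on are computed from probability-weighted moments of the ordered sample. The sketch below uses the standard unbiased sample formulas; it illustrates one ingredient of the analysis, not the study's full index-flood procedure.

```python
from math import comb

def sample_l_moments(data):
    """First four sample L-moments via probability-weighted moments:
    b_r = (1/n) * sum_i x_(i) * C(i-1, r) / C(n-1, r) (1-based i), then
    l1 = b0, l2 = 2b1 - b0, l3 = 6b2 - 6b1 + b0,
    l4 = 20b3 - 30b2 + 12b1 - b0."""
    x = sorted(data)
    n = len(x)
    b = [sum(comb(i, r) * xi for i, xi in enumerate(x)) / (n * comb(n - 1, r))
         for r in range(4)]
    return (b[0],
            2 * b[1] - b[0],
            6 * b[2] - 6 * b[1] + b[0],
            20 * b[3] - 30 * b[2] + 12 * b[1] - b[0])

# Annual peak floods (toy values); l1 is the mean, l2 a robust scale,
# and the ratios t3 = l3/l2, t4 = l4/l2 feed distribution selection.
l1, l2, l3, l4 = sample_l_moments([1.0, 2.0, 3.0, 4.0])
```

The L-moment ratios t3 and t4 computed this way are what the Z-statistic compares against each candidate distribution's theoretical values when selecting the regional distribution (GEV in this study).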

  10. A mixed methods study of multiple health behaviors among individuals with stroke.

    Plow, Matthew; Moore, Shirley M; Sajatovic, Martha; Katzan, Irene

    2017-01-01

    Individuals with stroke often have multiple cardiovascular risk factors that necessitate promoting engagement in multiple health behaviors. However, observational studies of individuals with stroke have typically focused on promoting a single health behavior. Thus, there is a poor understanding of linkages between healthy behaviors and the circumstances in which factors, such as stroke impairments, may influence a single or multiple health behaviors. We conducted a mixed methods convergent parallel study of 25 individuals with stroke to examine the relationships between stroke impairments and physical activity, sleep, and nutrition. Our goal was to gain further insight into possible strategies to promote multiple health behaviors among individuals with stroke. This study focused on physical activity, sleep, and nutrition because of their importance in achieving energy balance, maintaining a healthy weight, and reducing cardiovascular risks. Qualitative and quantitative data were collected concurrently, with the former being prioritized over the latter. Qualitative data was prioritized in order to develop a conceptual model of engagement in multiple health behaviors among individuals with stroke. Qualitative and quantitative data were analyzed independently and then were integrated during the inference stage to develop meta-inferences. The 25 individuals with stroke completed closed-ended questionnaires on healthy behaviors and physical function. They also participated in face-to-face focus groups and one-to-one phone interviews. We found statistically significant and moderate correlations between hand function and healthy eating habits (r = 0.45), sleep disturbances and limitations in activities of daily living (r = -0.55), BMI and limitations in activities of daily living (r = -0.49), physical activity and limitations in activities of daily living (r = 0.41), mobility impairments and BMI (r = -0.41), sleep disturbances and physical

  11. The methods for detecting multiple small nodules from 3D chest X-ray CT images

    Hayase, Yosuke; Mekada, Yoshito; Mori, Kensaku; Toriwaki, Jun-ichiro; Natori, Hiroshi

    2004-01-01

    This paper describes a method for detecting small nodules, whose CT values and diameters are more than -600 Hounsfield units (H.U.) and 2 mm, from three-dimensional chest X-ray CT images. The proposed method roughly consists of two submodules: initial detection of nodule candidates by discriminating between nodule regions and other regions such as blood vessels or bronchi using a shape feature computed from distance values inside the regions, and reduction of false positive (FP) regions by a minimum directional difference (Min-DD) filter whose radius is adapted to the size of the initial candidates. The performance of the proposed method was evaluated using seven cases of chest X-ray CT images, including six abnormal cases in which multiple lung cancers are observed. The experimental results for nodules (361 regions in total) showed a sensitivity of 71% with an average of 7.4 FP regions per case. (author)

  12. The Green Function cellular method and its relation to multiple scattering theory

    Butler, W.H.; Zhang, X.G.; Gonis, A.

    1992-01-01

    This paper investigates techniques for solving the wave equation which are based on the idea of obtaining exact local solutions within each potential cell, which are then joined to form a global solution. The authors derive full potential multiple scattering theory (MST) from the Lippmann-Schwinger equation and show that it, as well as a closely related cellular method, is a technique of this type. This cellular method appears to have all of the advantages of MST and the added advantage of having a secular matrix with only nearest-neighbor interactions. Since this cellular method is easily linearized, one can rigorously reduce electronic structure calculation to the problem of solving a nearest-neighbor tight-binding problem.

  13. Study on validation method for femur finite element model under multiple loading conditions

    Guan, Fengjiao; Zhang, Guanjun; Liu, Jie; Wang, Shujing; Luo, Xu

    2018-03-01

    Acquisition of accurate and reliable constitutive parameters for bio-tissue materials is beneficial for improving the biological fidelity of a Finite Element (FE) model and predicting impact damage more effectively. In this paper, a femur FE model was established under multiple loading conditions with diverse impact positions. Then, based on the sequential response surface method and genetic algorithms, material parameter identification was transformed into a multi-response optimization problem. Finally, the simulation results successfully matched the force-displacement curves obtained from numerous experiments. Thus, the computational accuracy and efficiency of the entire inverse calculation process were enhanced. This method effectively reduces the computation time of the inverse material parameter identification process, and the parameters obtained achieve higher accuracy.
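
    The sequential response surface idea can be illustrated on a one-parameter toy problem: sample the mismatch between "simulated" and "experimental" force-displacement curves at a few parameter values, fit a quadratic surface, jump to its minimum, and shrink the search interval. The power-law "simulator" and all numbers below are stand-ins for illustration, not the paper's femur FE model or its GA step.

```python
# Sequential response surface sketch for inverse parameter identification:
# fit a quadratic surface to sampled mismatch values, move to its minimum,
# and repeat on a shrinking trust region.
import numpy as np

def simulate_force(E, disp):          # toy stand-in for an FE solve
    return E * disp ** 1.2

disp = np.linspace(0.1, 1.0, 20)
f_exp = simulate_force(3.0, disp)     # pretend experiment, true parameter E = 3.0

def mismatch(E):
    return np.sum((simulate_force(E, disp) - f_exp) ** 2)

lo, hi = 1.0, 6.0
for _ in range(10):                   # sequential response-surface iterations
    Es = np.linspace(lo, hi, 5)
    errs = np.array([mismatch(E) for E in Es])
    a, b, c = np.polyfit(Es, errs, 2)            # quadratic response surface
    E_star = min(max(-b / (2 * a), lo), hi)      # surface minimum, clamped
    half = (hi - lo) / 4                          # shrink the trust region
    lo, hi = E_star - half, E_star + half
E_id = E_star
```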

  14. TODIM Method for Single-Valued Neutrosophic Multiple Attribute Decision Making

    Dong-Sheng Xu

    2017-10-01

    Full Text Available Recently, the TODIM has been used to solve multiple attribute decision making (MADM problems. The single-valued neutrosophic sets (SVNSs are useful tools to depict the uncertainty of the MADM. In this paper, we will extend the TODIM method to the MADM with the single-valued neutrosophic numbers (SVNNs. Firstly, the definition, comparison, and distance of SVNNs are briefly presented, and the steps of the classical TODIM method for MADM problems are introduced. Then, the extended classical TODIM method is proposed to deal with MADM problems with the SVNNs, and its significant characteristic is that it can fully consider the decision makers’ bounded rationality which is a real action in decision making. Furthermore, we extend the proposed model to interval neutrosophic sets (INSs. Finally, a numerical example is proposed.
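
    The abstract names the definition, comparison, and distance of SVNNs without spelling them out; one common form from the neutrosophic literature (an assumption here, not taken from this record) is the score function s(a) = (2 + T - I - F)/3 and the normalized Hamming distance:

```python
# Single-valued neutrosophic number (SVNN) a = (T, I, F), each in [0, 1]:
# truth, indeterminacy, and falsity memberships. Score and normalized
# Hamming distance in one common literature form (assumed for illustration).

def svnn_score(a):
    t, i, f = a
    return (2.0 + t - i - f) / 3.0        # higher is better

def svnn_distance(a, b):
    return sum(abs(x - y) for x, y in zip(a, b)) / 3.0

a = (0.7, 0.2, 0.1)   # mostly true, little indeterminacy
b = (0.4, 0.5, 0.3)
```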

  15. Seasonal patterns of birth for subjects with bulimia nervosa, binge eating, and purging: results from the National Women's Study.

    Brewerton, Timothy D; Dansky, Bonnie S; O'Neil, Patrick M; Kilpatrick, Dean G

    2012-01-01

    Studies of birth patterns in anorexia nervosa have shown relative increases between March and August, while studies in bulimia nervosa (BN) have been negative. Since there are no studies using representative, nonclinical samples, we looked for seasonal birth patterns in women with BN and in those who ever endorsed bingeing or purging. A national, representative sample of 3,006 adult women completed structured telephone interviews including screenings for BN and questions about month, date, and year of birth. Season of birth was calculated using traditional definitions. Differences across season of birth between subjects with (n = 85) and without BN (n = 2,898), those with (n = 749) and without bingeing (n = 2,229), and those with (n = 267) and without any purging (n = 2,715) were compared using chi-square analyses. There were significant differences across season of birth between subjects: (1) with and without BN (p = 0.033); (2) with and without bingeing (p = 0.034); and (3) with and without purging (p = 0.001). Fall had the highest relative number of births for all categories, while spring had the lowest. In a national representative study of non-treatment-seeking subjects, significant differences in season of birth were found for subjects with lifetime histories of BN, binge eating, and purging. © 2011 by Wiley Periodicals, Inc. (Int J Eat Disord 2012).

  16. Increases in frontostriatal connectivity are associated with response to dorsomedial repetitive transcranial magnetic stimulation in refractory binge/purge behaviors

    Katharine Dunlop

    2015-01-01

    Conclusions: Enhanced frontostriatal connectivity was associated with response to dmPFC-rTMS for binge/purge behavior. rTMS caused paradoxical suppression of frontostriatal connectivity in nonresponders. rs-fMRI could prove critical for optimizing stimulation parameters in a future sham-controlled trial of rTMS in disordered eating.

  17. Less symptomatic, but equally impaired: Clinical impairment in restricting versus binge-eating/purging subtype of anorexia nervosa.

    Reas, Deborah Lynn; Rø, Øyvind

    2018-01-01

    This study investigated subtype differences in eating disorder-specific impairment in a treatment-seeking sample of individuals with anorexia nervosa (AN). The Clinical Impairment Assessment (CIA) and the Eating Disorder Examination-Questionnaire (EDE-Q) were administered to 142 patients. Of these, 54.9% were classified as restricting type (AN-R) and 45.1% were classified as binge-eating/purging type (AN-B/P) based on an average weekly occurrence of binge eating and/or purging episodes (≥4 episodes/28 days). Individuals with AN-B/P exhibited higher levels of core ED psychopathology (dietary restraint, eating concern, shape/weight concerns) in addition to the expected higher frequency of binge/purge episodes. No significant differences existed between AN subtypes in the severity of ED-related impairment. Weight/shape concerns and binge eating frequency significantly predicted level of impairment. Differential associations were observed between the type of ED pathology that significantly contributed to impairment according to AN subtype. Although those with AN-B/P displayed higher levels of core attitudinal and behavioral ED pathology than AN-R, no significant differences in ED-specific impairment were found between AN subtypes. Eating disorder-related impairment in AN was not related to the severity of underweight or purging behaviors, but was uniquely and positively associated with weight/shape concerns and binge eating frequency. Copyright © 2018 Elsevier Ltd. All rights reserved.

  18. A location-based multiple point statistics method: modelling the reservoir with non-stationary characteristics

    Yin Yanshu

    2017-12-01

    Full Text Available In this paper, a location-based multiple point statistics method is developed to model a non-stationary reservoir. The proposed method characterizes the relationship between the sedimentary pattern and the deposit location using the relative central position distance function, which alleviates the requirement that the training image and the simulated grids have the same dimension. The weights in every direction of the distance function can be changed to characterize the reservoir heterogeneity in various directions. The local integral replacements of data events, structured random path, distance tolerance and multi-grid strategy are applied to reproduce the sedimentary patterns and obtain a more realistic result. This method is compared with the traditional Snesim method using a synthesized 3-D training image of Poyang Lake and a reservoir model of Shengli Oilfield in China. The results indicate that the new method can reproduce the non-stationary characteristics better than the traditional method and is more suitable for simulation of delta-front deposits. These results show that the new method is a powerful tool for modelling a reservoir with non-stationary characteristics.

  19. Analyzing the Impacts of Alternated Number of Iterations in Multiple Imputation Method on Explanatory Factor Analysis

    Duygu KOÇAK

    2017-11-01

    Full Text Available The study aims to identify the effects of the number of iterations used in the multiple imputation method, one of the methods used to cope with missing values, on the results of factor analysis. With this aim, artificial datasets of different sample sizes were created. Values missing at random and missing completely at random were created in various ratios by deleting data. For the data missing at random, a second variable was generated at the ordinal scale level, and datasets with different ratios of missing values were obtained based on the levels of this variable. The data were generated using the "psych" package in R software, while the "dplyr" package was used to write code that deleted values according to predetermined conditions of the missing value mechanism. Different datasets were generated by applying different numbers of iterations. Explanatory factor analysis was conducted on the completed datasets, and the factors and total explained variances are presented. These values were first evaluated against the number of factors and the total variance explained for the complete datasets. The results indicate that the multiple imputation method performs better in cases of data missing at random than in datasets with data missing completely at random. It was also found that increasing the number of iterations for both missing value mechanisms decreases the difference from the results obtained on complete datasets.
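
    The effect of the number of iterations can be illustrated with a minimal regression-based imputer: initialize missing cells with column means, then repeatedly regress each incomplete column on the others and refill its missing cells, tracking how far the imputations move per sweep. This is a bare-bones sketch of chained iteration, not the R procedure used in the study.

```python
# Minimal iterative (chained-regression) imputer: column means to start,
# then per-column linear regression on the other columns, repeated.
import numpy as np

def iterative_impute(X, n_iter=20):
    X = X.astype(float).copy()
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    for j in range(X.shape[1]):
        X[miss[:, j], j] = col_means[j]          # crude initial fill
    deltas = []                                   # movement per sweep
    for _ in range(n_iter):
        old = X.copy()
        for j in range(X.shape[1]):
            if not miss[:, j].any():
                continue
            obs = ~miss[:, j]
            A = np.column_stack([np.ones(X.shape[0]), np.delete(X, j, axis=1)])
            beta = np.linalg.lstsq(A[obs], X[obs, j], rcond=None)[0]
            X[miss[:, j], j] = A[miss[:, j]] @ beta
        deltas.append(np.abs(X - old).max())
    return X, deltas

rng = np.random.default_rng(0)
z = rng.normal(size=(200, 1))
data = np.hstack([z, 2 * z + 0.1 * rng.normal(size=(200, 1)),
                  -z + 0.1 * rng.normal(size=(200, 1))])
holes = data.copy()
holes[rng.random(holes.shape) < 0.2] = np.nan    # ~20% missing completely at random
filled, deltas = iterative_impute(holes)
```

    Later sweeps move the imputed values far less than the first, mirroring the convergence-with-iterations behavior the study examines.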

  20. Combining morphometric evidence from multiple registration methods using dempster-shafer theory

    Rajagopalan, Vidya; Wyatt, Christopher

    2010-03-01

    In tensor-based morphometry (TBM), group-wise differences in brain structure are measured using high degree-of-freedom registration and some form of statistical test. However, it is known that TBM results are sensitive to both the registration method and the statistical test used. Given the lack of an objective model of group variation, it is difficult to determine a best registration method for TBM. The use of statistical tests is also problematic given the corrections required for multiple testing and the notorious difficulty of selecting and interpreting significance values. This paper presents an approach to address both of these issues by combining multiple registration methods using Dempster-Shafer evidence theory to produce belief maps of categorical changes between groups. This approach is applied to the comparison of brain morphometry in aging, a typical application of TBM, using the determinant of the Jacobian as a measure of volume change. We show that the Dempster-Shafer combination produces a unique and easy-to-interpret belief map of regional changes between and within groups without the complications associated with hypothesis testing.
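
    Dempster's rule of combination, the core operation behind such belief maps, can be sketched directly: masses over focal sets are multiplied, conflicting mass is discarded, and the remainder is renormalized. The toy "registration method" masses below are illustrative, not from the paper.

```python
# Dempster's rule of combination for two basic mass assignments over a
# frame of discernment. Focal elements are frozensets; masses sum to 1.

def combine_dempster(m1, m2):
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb          # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    k = 1.0 - conflict                        # normalization constant
    return {s: v / k for s, v in combined.items()}

def belief(m, hypothesis):
    # belief = total mass of focal elements contained in the hypothesis
    return sum(v for s, v in m.items() if s <= hypothesis)

# Two "registration methods" voting on volume change at a voxel:
# E = expansion, C = contraction, U = unchanged.
frame = frozenset({"E", "C", "U"})
m1 = {frozenset({"E"}): 0.6, frame: 0.4}                       # weak evidence for E
m2 = {frozenset({"E"}): 0.5, frozenset({"C"}): 0.2, frame: 0.3}
m12 = combine_dempster(m1, m2)
```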

  1. Hybrid MCDA Methods to Integrate Multiple Ecosystem Services in Forest Management Planning: A Critical Review.

    Uhde, Britta; Hahn, W Andreas; Griess, Verena C; Knoke, Thomas

    2015-08-01

    Multi-criteria decision analysis (MCDA) is a decision aid frequently used in the field of forest management planning. It includes the evaluation of multiple criteria such as the production of timber and non-timber forest products and tangible as well as intangible values of ecosystem services (ES). Hence, it is beneficial compared to methods that take a purely financial perspective. Accordingly, MCDA methods are increasingly popular in the wide field of sustainability assessment. Hybrid approaches allow aggregating MCDA and, potentially, other decision-making techniques to make use of their individual benefits, leading to a more holistic view of the actual consequences that come with certain decisions. This review provides a comprehensive overview of hybrid approaches used in forest management planning. Today, the scientific world is facing increasing challenges regarding the evaluation of ES and the trade-offs between them, for example between provisioning and regulating services. As the preferences of multiple stakeholders are essential to improve the decision process in multi-purpose forestry, participatory and hybrid approaches turn out to be of particular importance. Accordingly, hybrid methods show great potential for becoming most relevant in future decision making. Based on the review presented here, the development of models for use in planning processes should focus on participatory modeling and the consideration of uncertainty regarding available information.

  3. Neutron reflection effect on total absorption detector method used in SWINPC neutron multiplication experiment for beryllium

    Tian Dongfeng; Ho Yukun; Yang Fujia

    2001-01-01

    The SWINPC integral experiment on neutron multiplication in bulk beryllium showed marked discrepancies between experimental data and values calculated with ENDF/B-VI data: the calculated values become higher than the experimental ones as the sample thickness increases. Several works have been devoted to finding problems in the experiment. This paper discusses the effect of neutron reflection on the total absorption detector method, which was used in the experiment to measure the neutron leakage from the samples. A systematic correction is suggested that brings the experimental values into agreement with those calculated with ENDF/B-VI data, within experimental errors. (author)

  4. Dynamical properties of the growing continuum using multiple-scale method

    Hynčík L.

    2008-12-01

    Full Text Available The theory of growth and remodeling is applied to a 1D continuum, which can serve, e.g., as a model of a muscle fibre or a piezoelectric stack. A hyperelastic material described by the free energy potential suggested by Fung is used, and the change of stiffness is taken into account. The corresponding equations define a dynamical system with two degrees of freedom. Its stability and the properties of its bifurcations are studied using the multiple-scale method. The conditions under which a degenerate Hopf bifurcation occurs are shown.

  5. Average Likelihood Methods of Classification of Code Division Multiple Access (CDMA)

    2016-05-01

    subject to code matrices that follow the structure given by (113):

    \[ \begin{bmatrix} \vec{y}_R \\ \vec{y}_I \end{bmatrix} = \sqrt{\frac{E_s}{2L}} \begin{bmatrix} G_{R1} & -G_{I1} \\ G_{I2} & G_{R2} \end{bmatrix} \begin{bmatrix} Q_R & -Q_I \\ Q_I & Q_R \end{bmatrix} \begin{bmatrix} \vec{b}_R \\ \vec{b}_I \end{bmatrix} + \begin{bmatrix} \vec{n}_R \\ \vec{n}_I \end{bmatrix} \]

    with an equivalent form in terms of $\vec{b}_+$ and $\vec{b}_-$ given by (115). The average likelihood for type 4 CDMA (116) is a special case of type 1 CDMA with twice the code length and … AVERAGE LIKELIHOOD METHODS OF CLASSIFICATION OF CODE DIVISION MULTIPLE ACCESS (CDMA). MAY 2016 FINAL TECHNICAL REPORT. APPROVED FOR PUBLIC RELEASE

  6. Seismic PSA method for multiple nuclear power plants in a site

    Hakata, Tadakuni [Nuclear Safety Commission, Tokyo (Japan)]

    2007-07-15

    The maximum number of nuclear power plants in a site is eight, and about 50% of power plants worldwide are built in sites with three or more plants. Such nuclear sites carry a potential risk of simultaneous damage to multiple plants, especially during external events. A seismic probabilistic safety assessment method (Level-1 PSA) for multi-unit sites with up to 9 units has been developed. The models include fault-tree-linked Monte Carlo computation, taking into consideration multivariate correlations of components and systems, from partial to complete, within and across units. The models were implemented in a computer program, CORAL reef. Sample analyses and sensitivity studies were performed to verify the models and algorithms and to understand risk insights and risk metrics, such as site core damage frequency (CDF per site-year) for multiple-reactor sites. This study will contribute to realistic, state-of-the-art seismic PSA that takes multiple reactor power plants into consideration, and to the enhancement of seismic safety. (author)
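
    In its simplest form, a Monte Carlo treatment of correlated multi-unit seismic failures samples a shared seismic demand per event plus a common (fully correlated) and a unit-specific capacity term per unit. All fragility numbers below are assumptions for illustration, not taken from the CORAL reef program.

```python
# Monte Carlo sketch of multi-unit seismic risk with partial correlation:
# shared demand + common capacity term correlates units; a unit-specific
# term keeps them from being identical.
import numpy as np

rng = np.random.default_rng(42)
n_units, n_events = 4, 200_000
event_rate = 1e-3                         # damaging seismic events per site-year (assumed)

ln_capacity_median = np.log(0.9)          # fragility median, g (assumed)
beta_common, beta_unit = 0.3, 0.3         # correlated vs independent log-stds (assumed)

pga = rng.lognormal(mean=np.log(0.3), sigma=0.6, size=n_events)   # demand per event
common = rng.normal(0.0, beta_common, size=n_events)              # shared capacity shift
unit = rng.normal(0.0, beta_unit, size=(n_events, n_units))       # per-unit variability
capacity = np.exp(ln_capacity_median + common[:, None] + unit)
failed = pga[:, None] > capacity

p_any = failed.any(axis=1).mean()         # at least one unit damaged, given an event
p_all = failed.all(axis=1).mean()         # all units damaged together
site_cdf = event_rate * p_any             # site core damage frequency per site-year
```

    The joint-failure probability far exceeds the independence prediction, which is exactly why multi-unit correlation matters for site CDF.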

  7. Combining multiple FDG-PET radiotherapy target segmentation methods to reduce the effect of variable performance of individual segmentation methods

    McGurk, Ross J. [Medical Physics Graduate Program, Duke University, Durham, North Carolina 27705 (United States)]; Bowsher, James; Das, Shiva K. [Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina 27705 (United States)]; Lee, John A. [Molecular Imaging and Experimental Radiotherapy Unit, Universite Catholique de Louvain, 1200 Brussels (Belgium)]

    2013-04-15

    different between 128 × 128 and 256 × 256 grid sizes for either method (MJV, p = 0.0519; STAPLE, p = 0.5672) but was for SMASD values (MJV, p < 0.0001; STAPLE, p = 0.0164). The best individual method varied depending on object characteristics. However, both MJV and STAPLE provided essentially equivalent accuracy to the best independent method in every situation, with mean differences in DSC of 0.01-0.03, and 0.05-0.12 mm for SMASD. Conclusions: Combining segmentations offers a robust approach to object segmentation in PET. Both MJV and STAPLE improved accuracy and were robust against the widely varying performance of individual segmentation methods. Differences between MJV and STAPLE are such that either offers good performance when combining volumes. Neither method requires a training dataset, but MJV is simpler to interpret, easy to implement and fast.

  8. Dynamic reflexivity in action: an armchair walkthrough of a qualitatively driven mixed-method and multiple methods study of mindfulness training in schoolchildren.

    Cheek, Julianne; Lipschitz, David L; Abrams, Elizabeth M; Vago, David R; Nakamura, Yoshio

    2015-06-01

    Dynamic reflexivity is central to enabling flexible and emergent qualitatively driven inductive mixed-method and multiple methods research designs. Yet too often, such reflexivity, and how it is used at various points of a study, is absent when we write our research reports. Instead, reports of mixed-method and multiple methods research focus on what was done rather than how it came to be done. This article seeks to redress this absence of emphasis on the reflexive thinking underpinning the way that mixed- and multiple methods, qualitatively driven research approaches are thought about and subsequently used throughout a project. Using Morse's notion of an armchair walkthrough, we excavate and explore the layers of decisions we made about how, and why, to use qualitatively driven mixed-method and multiple methods research in a study of mindfulness training (MT) in schoolchildren. © The Author(s) 2015.

  9. Single- versus multiple-sample method to measure glomerular filtration rate.

    Delanaye, Pierre; Flamant, Martin; Dubourg, Laurence; Vidal-Petiot, Emmanuelle; Lemoine, Sandrine; Cavalier, Etienne; Schaeffner, Elke; Ebert, Natalie; Pottel, Hans

    2018-01-08

    There are many different ways to measure glomerular filtration rate (GFR) using various exogenous filtration markers, each having its own strengths and limitations. However, not only the marker but also the methodology may vary in many ways, including the use of urinary or plasma clearance and, in the case of plasma clearance, the number of time points used to calculate the area under the concentration-time curve, ranging from only one (Jacobsson method) to eight or more blood samples. We collected the results obtained from 5106 plasma clearances (iohexol or 51Cr-ethylenediaminetetraacetic acid (EDTA)) using three to four time points, allowing GFR calculation using the slope-intercept method and the Bröchner-Mortensen correction. For each time point, the Jacobsson formula was applied to obtain the single-sample GFR. We used Bland-Altman plots to determine the accuracy of the Jacobsson method at each time point. The single-sample method showed within-10% concordances with the multiple-sample method of 66.4%, 83.6%, 91.4% and 96.0% at the time points 120, 180, 240 and ≥300 min, respectively. Concordance was poorer at lower GFR levels, and this trend increased with age. Results were similar in males and females. Some discordance was found in obese subjects. Single-sample GFR is highly concordant with a multiple-sample strategy, except in the low GFR range (<30 mL/min). © The Author 2018. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
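
    The slope-intercept calculation can be sketched as follows: fit a mono-exponential to the late samples, take AUC = C0/k extrapolated to infinity, and divide the injected dose by it; the Brøchner-Mortensen correction then compensates for the neglected early distribution phase. The adult correction coefficients used here are one commonly published form, assumed for illustration rather than taken from this record.

```python
# Slope-intercept plasma clearance: fit C(t) = C0 * exp(-k t) to late samples,
# AUC = C0 / k, GFR = dose / AUC, then Brochner-Mortensen correction.
import math

def slope_intercept_gfr(dose, times_min, conc):
    n = len(times_min)
    # linear least squares on ln(C) = ln(C0) - k * t
    xbar = sum(times_min) / n
    ybar = sum(math.log(c) for c in conc) / n
    sxy = sum((t - xbar) * (math.log(c) - ybar) for t, c in zip(times_min, conc))
    sxx = sum((t - xbar) ** 2 for t in times_min)
    k = -sxy / sxx
    c0 = math.exp(ybar + k * xbar)
    auc = c0 / k                    # area under the curve, extrapolated to infinity
    return dose / auc               # uncorrected slope-intercept GFR

def brochner_mortensen(gfr):
    # adult coefficients (assumed): corrected = 0.990778*G - 0.001218*G^2
    return 0.990778 * gfr - 0.001218 * gfr ** 2

# Synthetic 3-sample clearance with known kinetics
times = [120.0, 180.0, 240.0]
true_c0, true_k = 80.0, 0.006       # concentration units, 1/min
samples = [true_c0 * math.exp(-true_k * t) for t in times]
gfr = slope_intercept_gfr(1_300_000.0, times, samples)   # dose in matching units
```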

  10. An Application of Robust Method in Multiple Linear Regression Model toward Credit Card Debt

    Amira Azmi, Nur; Saifullah Rusiman, Mohd; Khalid, Kamil; Roslan, Rozaini; Sufahani, Suliadi; Mohamad, Mahathir; Salleh, Rohayu Mohd; Hamzah, Nur Shamsidah Amir

    2018-04-01

    The credit card is a convenient alternative to cash or cheque and an essential component of electronic and internet commerce. In this study, the researchers attempt to determine the relationship and significant variables between credit card debt and demographic variables such as age, household income, education level, years with current employer, years at current address, debt-to-income ratio and other debt. The data cover information on 850 customers. Three methods were applied to the credit card debt data: multiple linear regression (MLR) models, MLR models with the least quartile difference (LQD) method and MLR models with the mean absolute deviation method. After comparing the three methods, it was found that the MLR model with the LQD method was the best model, with the lowest mean square error (MSE). According to the final model, years with current employer, years at current address, household income in thousands and debt-to-income ratio are positively associated with the amount of credit card debt. Meanwhile, age, level of education and other debt are negatively associated with the amount of credit card debt. This study may serve as a reference for bank companies using robust methods, so that they can better understand their options and choose what best aligns with their goals for inference regarding credit card debt.
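
    The value of a robust fit over plain MLR shows up on outlier-contaminated data. The study's best model used the least quartile difference (LQD) criterion; as a simpler stand-in, this sketch uses Huber-weighted iteratively reweighted least squares, which likewise down-weights gross outliers.

```python
# Ordinary least squares vs. a robust (Huber IRLS) line fit on data with
# one gross outlier, e.g. a mis-recorded debt value.
import numpy as np

def ols(x, y):
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]          # [intercept, slope]

def huber_irls(x, y, delta=1.0, iters=50):
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(iters):
        r = y - X @ beta
        w = np.where(np.abs(r) <= delta, 1.0, delta / np.abs(r))  # Huber weights
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta

x = np.arange(10, dtype=float)
y = 1.0 + 2.0 * x            # true line: intercept 1, slope 2
y[9] += 50.0                 # one gross outlier
b_ols = ols(x, y)
b_rob = huber_irls(x, y)
```

    The robust slope stays near the true value of 2 while the OLS slope is dragged far off by the single outlier.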

  11. Sustainable Assessment of Aerosol Pollution Decrease Applying Multiple Attribute Decision-Making Methods

    Audrius Čereška

    2016-06-01

    Full Text Available Air pollution with various materials, particularly with aerosols, increases with advances in technological development. This is a complicated global problem, and one of the priorities in achieving sustainable development is the reduction of harmful technological effects on the environment and human health. It is the responsibility of researchers to search for effective methods of reducing pollution, and reliable results can be obtained by combining the approaches used in various fields of science and technology. This paper aims to demonstrate the effectiveness of multiple attribute decision-making (MADM) methods in investigating and solving environmental pollution problems. The paper presents a study of the evaporation of a toxic liquid based on the MADM methods. A schematic view of the test setup is presented. The density, viscosity, and rate of the released vapor flow are measured, and the dependence of the variation of the solution concentration on its temperature is determined in the experimental study. The concentration of the hydrochloric acid solution (HAS) varies in the range from 28% to 34%, while the liquid is heated from 50 to 80 °C. The variations in the parameters are analyzed using the well-known VIKOR and COPRAS MADM methods. For determining the criteria weights, a new CILOS (Criterion Impact LOSs) method is used. The experimental results are ranked in priority order using the MADM methods. Based on the obtained data, the technological parameters of production ensuring minimum environmental pollution can be chosen.

  12. Normalization method for metabolomics data using optimal selection of multiple internal standards

    Yetukuri Laxman

    2007-03-01

    Full Text Available Abstract Background Success of metabolomics as a phenotyping platform largely depends on its ability to detect various sources of biological variability. Removal of platform-specific sources of variability, such as systematic error, is therefore one of the foremost priorities in data preprocessing. However, the chemical diversity of molecular species included in typical metabolic profiling experiments leads to different responses to variations in experimental conditions, making normalization a very demanding task. Results With the aim of removing unwanted systematic variation, we present an approach that utilizes variability information from multiple internal standard compounds to find the optimal normalization factor for each individual molecular species detected by the metabolomics approach (NOMIS). We demonstrate the method on mouse liver lipidomic profiles using Ultra Performance Liquid Chromatography coupled to high resolution mass spectrometry, and compare its performance to two commonly utilized normalization methods: normalization by l2 norm and by retention-time-region-specific standard compound profiles. The NOMIS method proved superior in its ability to reduce the effect of systematic error across the full spectrum of metabolite peaks. We also demonstrate that the method can be used to select the best combinations of standard compounds for normalization. Conclusion Depending on experiment design and biological matrix, the NOMIS method is applicable either as a one-step normalization method or as a two-step method where the normalization parameters, influenced by the variabilities of internal standard compounds and their correlation to metabolites, are first calculated from a study conducted under repeatability conditions. The method can also be used in the analytical development of metabolomics methods by helping to select the best combinations of standard compounds for a particular biological matrix and analytical platform.
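
    The core idea of normalizing with multiple internal standards can be sketched as follows: estimate each sample's systematic scale factor from compounds spiked at nominally constant levels, then divide the whole sample by that factor. NOMIS itself fits metabolite-specific combinations of standards, so this single-factor version is a deliberate simplification.

```python
# Internal-standard normalization sketch: a per-sample factor is the
# geometric mean of each spiked standard's ratio to its reference level.
import numpy as np

def normalize_by_internal_standards(intensities, is_cols):
    """intensities: samples x features; is_cols: indices of internal standards."""
    is_block = intensities[:, is_cols]
    ref = np.median(is_block, axis=0)                    # target level per standard
    factors = np.exp(np.log(is_block / ref).mean(axis=1))  # geometric-mean ratio
    return intensities / factors[:, None], factors

rng = np.random.default_rng(1)
true = np.tile(rng.uniform(1, 10, size=8), (5, 1))   # 5 samples, 8 features, no biology
drift = np.array([0.8, 0.9, 1.0, 1.1, 1.25])         # per-sample systematic error
measured = true * drift[:, None]
normed, factors = normalize_by_internal_standards(measured, is_cols=[0, 1, 2])
```

    On this noise-free toy data the recovered factors equal the injected drift exactly, so normalization restores the true intensities.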

  13. Robust design method and thermostatic experiment for multiple piezoelectric vibration absorber system

    Nambu, Yohsuke; Takashima, Toshihide; Inagaki, Akiya

    2015-01-01

    This paper examines the effects of connecting multiplexing shunt circuits composed of inductors and resistors to piezoelectric transducers so as to improve the robustness of a piezoelectric vibration absorber (PVA). PVAs are well known to be effective at suppressing the vibration of an adaptive structure; their weakness is low robustness to changes in the dynamic parameters of the system, including the main structure and the absorber. In the application to space structures, the temperature-dependency of capacitance of piezoelectric ceramics is the factor that causes performance reduction. To improve robustness to the temperature-dependency of the capacitance, this paper proposes a multiple-PVA system that is composed of distributed piezoelectric transducers and several shunt circuits. The optimization problems that determine both the frequencies and the damping ratios of the PVAs are multi-objective problems, which are solved using a real-coded genetic algorithm in this paper. A clamped aluminum beam with four groups of piezoelectric ceramics attached was considered in simulations and experiments. Numerical simulations revealed that the PVA systems designed using the proposed method had tolerance to changes in the capacitances. Furthermore, experiments using a thermostatic bath were conducted to reveal the effectiveness and robustness of the PVA systems. The maximum peaks of the transfer functions of the beam with the open circuit, the single-PVA system, the double-PVA system, and the quadruple-PVA system at 20 °C were 14.3 dB, −6.91 dB, −7.47 dB, and −8.51 dB, respectively. The experimental results also showed that the multiple-PVA system is more robust than a single PVA in a variable temperature environment from −10 °C to 50 °C. In conclusion, the use of multiple PVAs results in an effective, robust vibration control method for adaptive structures. (paper)

  14. Numerical study on influences of bed resettling, breeding zone orientation, and purge gas on temperatures in solid breeders

    Van Lew, Jon T., E-mail: jtvanlew@fusion.ucla.edu; Ying, Alice; Abdou, Mohamed

    2016-11-01

    Highlights: • Volume-conserving pebble fragmentation model in DEM to study thermomechanical responses to crushed pebbles in ensembles. • Parametric studies of ITER-relevant pebble beds with coupled CFD-DEM models. • Finding that breeder temperatures are complex functions of orientation, fragmentation size, and packing fraction. • Recommendations on breeder unit orientation are given in terms of material selection. - Abstract: We apply coupled computational fluid dynamics and discrete element method (CFD-DEM) modeling tools with new numerical implementations of pebble fragmentation to study the combined effects of granular crushing and ensemble restructuring, granular fragment size, and initial packing for different breeder volume configurations. In typical solid breeder modules, heat removal from beds relies on maintaining pebble–pebble and pebble–wall contact integrity. However, contact is disrupted when an ensemble responds to individually crushed pebbles. Furthermore, restructuring of metastable packings after crushing events is, in part, dependent on gravity forces acting upon the pebbles. We investigate two representative pebble bed configurations under constant volumetric heat sources, modeling heat removal from beds via inter-particle conduction, purge gas convection, and contact between pebble beds and containers. In one configuration, heat is removed at walls oriented parallel to the gravity vector (no gap formation possible); in the second, heat is removed at walls perpendicular to gravity, allowing for the possibility of gap formation between bed and wall. Judging beds on the increase in maximum temperature as a function of the amount of crushed pebbles, we find that both pebble bed configurations have advantageous features that manifest at different stages of pebble crushing. However, all configurations benefit from achieving high initial packing fractions.

  15. Method of remote powering and detecting multiple UWB passive tags in an RFID system

    Dowla, Farid U [Castro Valley, CA]; Nekoogar, Faranak [San Ramon, CA]; Benzel, David M [Livermore, CA]; Dallum, Gregory E [Livermore, CA]; Spiridon, Alex [Palo Alto, CA]

    2012-05-29

    A new Radio Frequency Identification (RFID) tracking and powering apparatus/system and method using coded ultra-wideband (UWB) signaling is introduced. The proposed hardware and techniques disclosed herein utilize a plurality of passive UWB transponders in the field of an RFID-radar system. The radar system itself enables multiple passive tags to be remotely powered (activated) within about the same time frame via predetermined-frequency UWB pulsed formats. Once such tags are in an activated state, a UWB radar transmits specific "interrogating codes" to put predetermined tags in an awakened status. Such predetermined tags can then communicate by a unique "response code" so as to be detected by a UWB system using radar methods.

  16. Accuracy and Numerical Stability Analysis of Lattice Boltzmann Method with Multiple Relaxation Time for Incompressible Flows

    Pradipto; Purqon, Acep

    2017-07-01

    The lattice Boltzmann method (LBM) is a novel method for simulating fluid dynamics. Nowadays, applications of LBM range from incompressible flow and flow in porous media to microflows. The common collision model of LBM is the BGK model with a constant single relaxation time τ. However, BGK suffers from numerical instabilities. These instabilities can be eliminated by implementing LBM with multiple relaxation times. Both schemes have been implemented for the incompressible two-dimensional lid-driven cavity. The stability analysis was done by finding the maximum Reynolds number and velocity for converged simulations. The accuracy analysis was done by comparing the velocity profile with the benchmark results from Ghia et al. and calculating the net velocity flux. The tests concluded that LBM with MRT is more stable than BGK and has similar accuracy. The maximum Reynolds number that converges is 3200 for BGK and 7500 for MRT.
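    The single-relaxation-time (BGK) collision model compared above can be sketched in a few lines. The following is a minimal D2Q9 lattice Boltzmann step with periodic streaming, not the authors' implementation; the lattice size, relaxation time τ = 0.8, and the initial perturbation are illustrative assumptions.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    """Second-order Maxwellian equilibrium f_eq for each of the 9 directions."""
    cu = np.einsum('id,xyd->ixy', c, u)       # c_i . u at every cell
    usq = np.einsum('xyd,xyd->xy', u, u)      # |u|^2 at every cell
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def bgk_step(f, tau):
    """One collision + streaming step of the single-relaxation-time LBM."""
    rho = f.sum(axis=0)
    u = np.einsum('ixy,id->xyd', f, c) / rho[..., None]
    f = f - (f - equilibrium(rho, u)) / tau   # BGK relaxation toward f_eq
    for i, (cx, cy) in enumerate(c):          # periodic streaming
        f[i] = np.roll(f[i], (cx, cy), axis=(0, 1))
    return f

# tiny demo: uniform fluid with a small local velocity perturbation
nx = ny = 16
rho0 = np.ones((nx, ny))
u0 = np.zeros((nx, ny, 2))
u0[nx//2, ny//2, 0] = 0.05
f = equilibrium(rho0, u0)
for _ in range(50):
    f = bgk_step(f, tau=0.8)
```

    The BGK collision conserves mass and momentum by construction, so the total of f stays at nx*ny throughout; replacing the periodic streaming with bounce-back walls and a moving lid turns the same loop into the lid-driven cavity benchmark.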

  17. Assessment of Different Metal Screw Joint Parameters by Using Multiple Criteria Analysis Methods

    Audrius Čereška

    2018-05-01

    This study compares screw joints made of different materials, including screws of different diameters. For that purpose, 8, 10, 12, 14, and 16 mm diameter steel screws and various parts made of aluminum (Al), steel (Stl), bronze (Brz), cast iron (CI), copper (Cu) and brass (Br) are considered. Multiple criteria decision making (MCDM) methods such as evaluation based on distance from average solution (EDAS), simple additive weighting (SAW), technique for order of preference by similarity to ideal solution (TOPSIS) and complex proportional assessment (COPRAS) are utilized to assess the reliability of screw joints, also considering cost issues. The entropy, criterion impact loss (CILOS) and integrated determination of objective criteria weights (IDOCRIW) methods are utilized to assess weights of decision criteria and find the best design alternative. Numerical results confirm the validity of the proposed approach.

  18. A method of risk assessment for a multi-plant site

    White, R.F.

    1983-06-01

    A model is presented which can be used in conjunction with probabilistic risk assessment to estimate whether a site on which there are several plants (reactors or chemical plants containing radioactive materials) meets whatever risk acceptance criteria or numerical risk guidelines are applied at the time of the assessment in relation to various groups of people and for various sources of risk. The application of the multi-plant site model to the direct and inverse methods of risk assessment is described. A method is proposed by which the potential hazard rating associated with a given plant can be quantified so that an appropriate allocation can be made when assessing the risks associated with each of the plants on a site. (author)

  19. Three-dimensional multiple reciprocity boundary element method for one-group neutron diffusion eigenvalue computations

    Itagaki, Masafumi; Sahashi, Naoki.

    1996-01-01

    The multiple reciprocity method (MRM) in conjunction with the boundary element method has been employed to solve one-group eigenvalue problems described by the three-dimensional (3-D) neutron diffusion equation. The domain integral related to the fission source is transformed into a series of boundary-only integrals, with the aid of the higher order fundamental solutions based on the spherical and the modified spherical Bessel functions. Since each degree of the higher order fundamental solutions in the 3-D cases has a singularity of order (1/r), the above series of boundary integrals requires additional terms which do not appear in the 2-D MRM formulation. The critical eigenvalue itself can be also described using only boundary integrals. Test calculations show that Wielandt's spectral shift technique guarantees rapid and stable convergence of 3-D MRM computations. (author)

  20. Error Analysis and Calibration Method of a Multiple Field-of-View Navigation System.

    Shi, Shuai; Zhao, Kaichun; You, Zheng; Ouyang, Chenguang; Cao, Yongkui; Wang, Zhenzhou

    2017-03-22

    The Multiple Field-of-view Navigation System (MFNS) is a spacecraft subsystem built to realize the autonomous navigation of the Spacecraft Inside Tiangong Space Station. This paper introduces the basics of the MFNS, including its architecture, mathematical model and analysis, and numerical simulation of system errors. According to the performance requirement of the MFNS, the calibration of both intrinsic and extrinsic parameters of the system is assumed to be essential and pivotal. Hence, a novel method based on the geometrical constraints in object space, called checkerboard-fixed post-processing calibration (CPC), is proposed to solve the problem of simultaneously obtaining the intrinsic parameters of the cameras integrated in the MFNS and the transformation between the MFNS coordinate and the cameras' coordinates. This method utilizes a two-axis turntable and a prior alignment of the coordinates is needed. Theoretical derivation and practical operation of the CPC method are introduced. The calibration experiment results of the MFNS indicate that the extrinsic parameter accuracy of the CPC reaches 0.1° for each Euler angle and 0.6 mm for each position vector component (1σ). A navigation experiment verifies the calibration result and the performance of the MFNS. The MFNS is found to work properly, and the accuracy of the position vector components and Euler angle reaches 1.82 mm and 0.17° (1σ) respectively. The basic mechanism of the MFNS may be utilized as a reference for the design and analysis of multiple-camera systems. Moreover, the calibration method proposed has practical value for its convenience for use and potential for integration into a toolkit.

  1. Isothermal multiple displacement amplification: a methodical approach enhancing molecular routine diagnostics of microcarcinomas and small biopsies

    Mairinger FD

    2014-08-01

    Fabian D Mairinger,1 Robert FH Walter,2 Claudia Vollbrecht,3 Thomas Hager,1 Karl Worm,1 Saskia Ting,1 Jeremias Wohlschläger,1 Paul Zarogoulidis,4 Konstantinos Zarogoulidis,4 Kurt W Schmid1 1Institute of Pathology, 2Ruhrlandklinik, West German Lung Center, University Hospital Essen, Essen, 3Institute of Pathology, University Hospital Cologne, Cologne, Germany; 4Pulmonary Department, Oncology Unit, G Papanikolaou General Hospital, Aristotle University of Thessaloniki, Thessaloniki, Greece Background and methods: Isothermal multiple displacement amplification (IMDA) can be a powerful tool in molecular routine diagnostics for homogeneous and sequence-independent whole-genome amplification of notably small tumor samples, e.g., microcarcinomas and biopsies containing a small amount of tumor. Currently, this method is not well established in pathology laboratories. We designed a study to confirm the feasibility and convenience of this method for routine diagnostics with formalin-fixed, paraffin-embedded samples prepared by laser-capture microdissection. Results: A total of 250 µg DNA (concentration 5 µg/µL) was generated by amplification over a period of 8 hours with a material input of approximately 25 cells, approximately equivalent to 175 pg of genomic DNA. In the generated DNA, a representation of all chromosomes could be shown and the presence of selected genes relevant for diagnosis in clinical samples could be proven. Mutational analysis of clinical samples could be performed without any difficulty and showed concordance with earlier diagnostic findings. Conclusion: We established the feasibility and convenience of IMDA for routine diagnostics. We also showed that small amounts of DNA, which were not analyzable with current molecular methods, could be sufficient for a wide field of applications in molecular routine diagnostics when they are preamplified with IMDA. Keywords: isothermal multiple displacement amplification, isothermal, whole

  2. The System of Inventory Forecasting in PT. XYZ by using the Method of Holt Winter Multiplicative

    Shaleh, W.; Rasim; Wahyudin

    2018-01-01

    PT. XYZ currently relies only on manual bookkeeping to predict sales and inventory of goods. If the inventory prediction is too large, the cost of production swells and the invested capital becomes less efficient. Vice versa, if the inventory prediction is too small, it will impact consumers, who are forced to wait for the desired product. In this era of globalization, the development of computer technology has become a very important part of every business plan. Almost all companies, both large and small, use computer technology. By utilizing computer technology, people can save time in solving complex business problems. Computer technology has become indispensable for companies to enhance the business services they manage, but systems and technologies are not limited to the distribution model and data processing: the existing system must also be able to analyze the company's possible future capabilities. Therefore, the company must be able to forecast conditions and circumstances, whether for inventory of goods, workforce, or profits to be obtained. To produce this forecast, total sales data from December 2014 to December 2016 are calculated using the Holt-Winters method, a time series prediction method (the multiplicative seasonal method) for seasonal data with increases and decreases, which has four equations: level (single) smoothing, trend smoothing, seasonal smoothing, and forecasting. From the research conducted, the error value in the form of MAPE is below 1%, so it can be concluded that forecasting with the Holt-Winters multiplicative method is accurate.
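    The four smoothing equations named in the abstract (level, trend, seasonal, forecast) can be sketched as follows; the smoothing constants and the synthetic series are illustrative assumptions, not PT. XYZ data.

```python
def holt_winters_multiplicative(y, m, alpha, beta, gamma, horizon):
    """Holt-Winters multiplicative seasonal forecasting.

    level:    L_t = alpha * y_t / S_{t-m} + (1 - alpha) * (L_{t-1} + T_{t-1})
    trend:    T_t = beta * (L_t - L_{t-1}) + (1 - beta) * T_{t-1}
    seasonal: S_t = gamma * y_t / L_t + (1 - gamma) * S_{t-m}
    forecast: F_{t+h} = (L_t + h * T_t) * S_{t+h-m}
    """
    # initialise level, trend and seasonal factors from the first two seasons
    level = sum(y[:m]) / m
    trend = (sum(y[m:2 * m]) - sum(y[:m])) / m ** 2
    seasonal = [y[i] / level for i in range(m)]
    for t in range(m, len(y)):
        prev = level
        level = alpha * y[t] / seasonal[t - m] + (1 - alpha) * (level + trend)
        trend = beta * (level - prev) + (1 - beta) * trend
        seasonal.append(gamma * y[t] / level + (1 - gamma) * seasonal[t - m])
    # out-of-sample forecasts reuse the last full season of factors
    return [(level + h * trend) * seasonal[len(y) - m + (h - 1) % m]
            for h in range(1, horizon + 1)]

# synthetic data: linear trend times a period-4 seasonal pattern
season = [1.2, 0.8, 1.1, 0.9]
y = [(10 + 0.5 * t) * season[t % 4] for t in range(40)]
forecast = holt_winters_multiplicative(y, 4, alpha=0.4, beta=0.1, gamma=0.3, horizon=4)
```

    Because the synthetic series follows the multiplicative model exactly, the smoothed state converges toward the true level, trend, and seasonal factors, and the four forecasts track the series' continuation closely.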

  3. Geometric calibration method for multiple-head cone-beam SPECT system

    Rizo, P.; Grangeat, P.; Guillemaud, R.

    1994-01-01

    A method is presented for estimating the geometrical parameters of cone-beam systems with multiple heads, each head having its own orientation. In tomography, for each head, the relative positions of the rotation axis and of the collimator do not change during the data acquisition. The authors thus can separate the parameters into intrinsic parameters and extrinsic parameters. The intrinsic parameters describe the detection system geometry, and the extrinsic parameters the position of the detection system with respect to the rotation axis. Intrinsic parameters must be estimated each time the acquisition geometry is modified. Extrinsic parameters are estimated by minimizing the distances between the measured position of a point source projection and the computed position obtained using the estimated extrinsic parameters. The main advantage of this method is that the extrinsic parameters are only weakly correlated when the intrinsic parameters are known. Thus the authors can use any simple least-squares error minimization method to perform the estimation of the extrinsic parameters. Giving a fixed value to the distance between the point source and the rotation axis in the estimation process ensures the coherence of the extrinsic parameters between the heads. They show that with this calibration method, the full width at half maximum measured with point sources is very close to the theoretical one, and remains almost unchanged when more than one head is used. Simulation results and reconstructions of a Jaszczak phantom are presented that show the capabilities of this method.
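    The extrinsic-estimation step described above (least-squares minimization of distances between measured and computed point-source projections) can be illustrated with a toy model. The pinhole geometry, the parameterization (one in-plane detector rotation plus a detector offset), and the Gauss-Newton solver below are illustrative assumptions, not the paper's cone-beam formulation.

```python
import numpy as np

def project(points, theta, u0, v0, f=100.0):
    """Pinhole projection with a known focal length f (intrinsic) and
    extrinsic parameters: in-plane detector rotation theta and offset (u0, v0)."""
    uv = f * points[:, :2] / points[:, 2:3]
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return uv @ R.T + np.array([u0, v0])

def fit_extrinsics(points, measured, x0=(0.0, 0.0, 0.0), iters=20):
    """Gauss-Newton least squares on the reprojection residuals,
    using a forward-difference Jacobian."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        r = (project(points, *x) - measured).ravel()
        J = np.empty((r.size, 3))
        for j in range(3):
            dx = np.zeros(3); dx[j] = 1e-6
            J[:, j] = ((project(points, *(x + dx)) - measured).ravel() - r) / 1e-6
        x = x - np.linalg.lstsq(J, r, rcond=None)[0]   # Gauss-Newton update
    return x

# synthetic point sources and noiseless measured projections
rng = np.random.default_rng(0)
pts = rng.uniform([-1, -1, 4], [1, 1, 6], size=(12, 3))
truth = (0.05, 2.0, -1.0)            # theta, u0, v0
meas = project(pts, *truth)
est = fit_extrinsics(pts, meas)
```

    With noiseless measurements the residual at the true parameters is zero, so Gauss-Newton recovers the extrinsic parameters essentially to the precision of the numerical Jacobian.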

  4. Expanded beam deflection method for simultaneous measurement of displacement and vibrations of multiple microcantilevers

    Nieradka, K.; Małozięć, G.; Kopiec, D.; Gotszalk, T.; Grabiec, P.; Janus, P.; Sierakowski, A.

    2011-01-01

    Here we present an extension of the optical beam deflection (OBD) method for measuring the displacement and vibrations of an array of microcantilevers. Instead of being focused on the cantilever, the optical beam is either focused above or below the cantilever array, or focused only in the axis parallel to the cantilever length, allowing a wide optical line to span multiple cantilevers in the array. Each cantilever reflects a part of the incident beam, which is then directed onto a photodiode array detector in a manner that allows distinguishing between individual beams. Each part of the reflected beam behaves like a single beam of roughly the same divergence angle in the bending sensing axis as the incident beam. Since the sensitivity of the OBD method depends on the divergence angle of the deflected beam, high sensitivity is preserved in the proposed expanded beam deflection (EBD) method. At the detector, each spot's position is measured at the same time, without time multiplexing of light sources. This provides truly simultaneous readout of the entire array, unavailable in most competing methods, and thus increases the time resolution of the measurement. The expanded beam can also span another line of cantilevers, allowing monitoring of specially designed two-dimensional arrays. In this paper, we present the first results of applying the EBD method to cantilever sensors. We show how thermal noise resolution can be easily achieved and combined with thermal-noise-based resonance frequency measurement.

  5. Multiple-Fault Diagnosis Method Based on Multiscale Feature Extraction and MSVM_PPA

    Min Zhang

    2018-01-01

    Identification of rolling bearing fault patterns, especially compound faults, has attracted notable attention and is still a challenge in fault diagnosis. In this paper, a novel method combining multiscale feature extraction (MFE) and a multiclass support vector machine (MSVM) with particle parameter adaptive (PPA) optimization is proposed. MFE is used to preprocess the process signals: the data are decomposed into intrinsic mode functions by the empirical mode decomposition method, and the instantaneous frequencies of the decomposed components are obtained by the Hilbert transform. Then, statistical features and principal component analysis are utilized to extract significant information from the features, to get effective data from multiple faults. The MSVM method with PPA parameter optimization classifies the fault patterns. The results of a case study of the rolling bearing fault data from Case Western Reserve University show that (1) the proposed intelligent method (MFE_PPA_MSVM) improves the classification recognition rate; (2) the accuracy declines when the number of fault patterns increases; and (3) prediction accuracy is best when the training set size is increased to 70% of the total sample set. This verifies that the method is feasible and efficient for fault diagnosis.

  6. Simulation of Cavity Flow by the Lattice Boltzmann Method using Multiple-Relaxation-Time scheme

    Ryu, Seung Yeob; Kang, Ha Nok; Seo, Jae Kwang; Yun, Ju Hyeon; Zee, Sung Quun

    2006-01-01

    Recently, the lattice Boltzmann method (LBM) has gained much attention for its ability to simulate fluid flows, and for its potential advantages over conventional CFD methods. The key advantages of LBM are: (1) suitability for parallel computations, (2) absence of the need to solve the time-consuming Poisson equation for pressure, and (3) ease with which multiphase flows, complex geometries and interfacial dynamics may be treated. The LBM using a relaxation technique was introduced by Higuera and Jimenez to overcome some drawbacks of lattice gas automata (LGA), such as large statistical noise, limited range of physical parameters, non-Galilean invariance, and implementation difficulty in three-dimensional problems. The simplest LBM is the lattice Bhatnagar-Gross-Krook (LBGK) equation, which is based on a single-relaxation-time (SRT) approximation. Due to its extreme simplicity, the LBGK equation has become the most popular lattice Boltzmann model in spite of its well-known deficiencies, for example in simulating high-Reynolds-number flows. The multiple-relaxation-time (MRT) LBM was originally developed by d'Humieres. Lallemand and Luo suggest that MRT models are much more stable than LBGK, because the different relaxation times can be individually tuned to achieve 'optimal' stability. A lid-driven cavity flow is selected as the test problem because it has geometrically singular points in the flow, yet is geometrically simple. Results are compared with those using the SRT and MRT models in the LBGK method and with previous simulation data using the Navier-Stokes equations for the same flow conditions. In summary, LBM using the MRT model introduces much less spatial oscillation near geometrical singular points, which is important for the successful simulation of higher-Reynolds-number flows

  7. Novel method to load multiple genes onto a mammalian artificial chromosome.

    Tóth, Anna; Fodor, Katalin; Praznovszky, Tünde; Tubak, Vilmos; Udvardy, Andor; Hadlaczky, Gyula; Katona, Robert L

    2014-01-01

    Mammalian artificial chromosomes are natural chromosome-based vectors that may carry a vast amount of genetic material in terms of both size and number. They are reasonably stable and segregate well in both mitosis and meiosis. A platform artificial chromosome expression system (ACEs) was earlier described with multiple loading sites for a modified lambda-integrase enzyme. It has been shown that this ACEs is suitable for high-level industrial protein production and the treatment of a mouse model for a devastating human disorder, Krabbe's disease. ACEs-treated mutant mice carrying a therapeutic gene lived more than four times longer than untreated counterparts. This novel gene therapy method is called combined mammalian artificial chromosome-stem cell therapy. At present, this method suffers from the limitation that a new selection marker gene should be present for each therapeutic gene loaded onto the ACEs. Complex diseases require the cooperative action of several genes for treatment, but only a limited number of selection marker genes are available and there is also a risk of serious side-effects caused by the unwanted expression of these marker genes in mammalian cells, organs and organisms. We describe here a novel method to load multiple genes onto the ACEs by using only two selectable marker genes. These markers may be removed from the ACEs before therapeutic application. This novel technology could revolutionize gene therapeutic applications targeting the treatment of complex disorders and cancers. It could also speed up cell therapy by allowing researchers to engineer a chromosome with a predetermined set of genetic factors to differentiate adult stem cells, embryonic stem cells and induced pluripotent stem (iPS) cells into cell types of therapeutic value. It is also a suitable tool for the investigation of complex biochemical pathways in basic science by producing an ACEs with several genes from a signal transduction pathway of interest.

  8. Novel method to load multiple genes onto a mammalian artificial chromosome.

    Anna Tóth

    Mammalian artificial chromosomes are natural chromosome-based vectors that may carry a vast amount of genetic material in terms of both size and number. They are reasonably stable and segregate well in both mitosis and meiosis. A platform artificial chromosome expression system (ACEs) was earlier described with multiple loading sites for a modified lambda-integrase enzyme. It has been shown that this ACEs is suitable for high-level industrial protein production and the treatment of a mouse model for a devastating human disorder, Krabbe's disease. ACEs-treated mutant mice carrying a therapeutic gene lived more than four times longer than untreated counterparts. This novel gene therapy method is called combined mammalian artificial chromosome-stem cell therapy. At present, this method suffers from the limitation that a new selection marker gene should be present for each therapeutic gene loaded onto the ACEs. Complex diseases require the cooperative action of several genes for treatment, but only a limited number of selection marker genes are available and there is also a risk of serious side-effects caused by the unwanted expression of these marker genes in mammalian cells, organs and organisms. We describe here a novel method to load multiple genes onto the ACEs by using only two selectable marker genes. These markers may be removed from the ACEs before therapeutic application. This novel technology could revolutionize gene therapeutic applications targeting the treatment of complex disorders and cancers. It could also speed up cell therapy by allowing researchers to engineer a chromosome with a predetermined set of genetic factors to differentiate adult stem cells, embryonic stem cells and induced pluripotent stem (iPS) cells into cell types of therapeutic value.
It is also a suitable tool for the investigation of complex biochemical pathways in basic science by producing an ACEs with several genes from a signal transduction pathway of interest.

  9. Pilot points method for conditioning multiple-point statistical facies simulation on flow data

    Ma, Wei; Jafarpour, Behnam

    2018-05-01

    We propose a new pilot points method for conditioning discrete multiple-point statistical (MPS) facies simulation on dynamic flow data. While conditioning MPS simulation on static hard data is straightforward, calibration against nonlinear flow data is nontrivial. The proposed method generates conditional models from a conceptual model of geologic connectivity, known as a training image (TI), by strategically placing and estimating pilot points. To place pilot points, a score map is generated based on three sources of information: (i) the uncertainty in facies distribution, (ii) the model response sensitivity information, and (iii) the observed flow data. Once the pilot points are placed, the facies values at these points are inferred from production data and then are used, along with available hard data at well locations, to simulate a new set of conditional facies realizations. While facies estimation at the pilot points can be performed using different inversion algorithms, in this study the ensemble smoother (ES) is adopted to update permeability maps from production data, which are then used to statistically infer facies types at the pilot point locations. The developed method combines the information in the flow data and the TI by using the former to infer facies values at selected locations away from the wells and the latter to ensure consistent facies structure and connectivity away from measurement locations. Several numerical experiments are used to evaluate the performance of the developed method and to discuss its important properties.

  10. AN EFFICIENT METHOD FOR AUTOMATIC ROAD EXTRACTION BASED ON MULTIPLE FEATURES FROM LiDAR DATA

    Y. Li

    2016-06-01

    The road extraction in urban areas is a difficult task due to the complicated patterns and many contextual objects. LiDAR data directly provides three-dimensional (3D) points with fewer occlusions and smaller shadows. The elevation information and surface roughness are distinguishing features for separating roads. However, LiDAR data has some disadvantages that are not beneficial to object extraction, such as the irregular distribution of point clouds and the lack of clear road edges. For these problems, this paper proposes an automatic road centerline extraction method which has three major steps: (1) road center point detection based on multiple-feature spatial clustering for separating road points from ground points, (2) local principal component analysis with least-squares fitting for extracting the primitives of road centerlines, and (3) hierarchical grouping for connecting primitives into a complete road network. Compared with MTH (consisting of the mean shift algorithm, tensor voting, and the Hough transform) proposed in our previous article, this method greatly reduces the computational cost. To evaluate the proposed method, the Vaihingen data set, a benchmark data set provided by ISPRS for the "Urban Classification and 3D Building Reconstruction" project, was selected. The experimental results show that our method achieves the same performance in less time in road extraction using LiDAR data.

  11. An Efficient Method for Automatic Road Extraction Based on Multiple Features from LiDAR Data

    Li, Y.; Hu, X.; Guan, H.; Liu, P.

    2016-06-01

    The road extraction in urban areas is a difficult task due to the complicated patterns and many contextual objects. LiDAR data directly provides three-dimensional (3D) points with fewer occlusions and smaller shadows. The elevation information and surface roughness are distinguishing features for separating roads. However, LiDAR data has some disadvantages that are not beneficial to object extraction, such as the irregular distribution of point clouds and the lack of clear road edges. For these problems, this paper proposes an automatic road centerline extraction method which has three major steps: (1) road center point detection based on multiple-feature spatial clustering for separating road points from ground points, (2) local principal component analysis with least-squares fitting for extracting the primitives of road centerlines, and (3) hierarchical grouping for connecting primitives into a complete road network. Compared with MTH (consisting of the mean shift algorithm, tensor voting, and the Hough transform) proposed in our previous article, this method greatly reduces the computational cost. To evaluate the proposed method, the Vaihingen data set, a benchmark data set provided by ISPRS for the "Urban Classification and 3D Building Reconstruction" project, was selected. The experimental results show that our method achieves the same performance in less time in road extraction using LiDAR data.
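    Step (2) of the pipeline, local principal component analysis with least-squares fitting of centerline primitives, can be sketched as follows; the synthetic point cloud and the single-neighbourhood treatment are illustrative assumptions, not the paper's full algorithm.

```python
import numpy as np

def local_pca_direction(points):
    """Principal direction of a local neighbourhood of road-center points:
    the eigenvector of the covariance matrix with the largest eigenvalue."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    vals, vecs = np.linalg.eigh(cov)
    return vecs[:, np.argmax(vals)]          # unit vector along the centerline

def fit_centerline_segment(points):
    """Least-squares line primitive: centroid plus principal direction,
    clipped to the extent of the points along that direction."""
    centroid = points.mean(axis=0)
    d = local_pca_direction(points)
    t = (points - centroid) @ d              # 1-D coordinates along the line
    return centroid + t.min() * d, centroid + t.max() * d

# noisy 2D points along a straight road stretch with slope 0.5
rng = np.random.default_rng(1)
t = rng.uniform(0, 10, 200)
pts = np.c_[t, 0.5 * t] + rng.normal(0, 0.05, (200, 2))
p0, p1 = fit_centerline_segment(pts)
```

    Repeating this fit over each spatial cluster of road-center points yields the short line primitives that the hierarchical grouping step then connects into a network.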

  12. A New Conflict Resolution Method for Multiple Mobile Robots in Cluttered Environments With Motion-Liveness.

    Shahriari, Mohammadali; Biglarbegian, Mohammad

    2018-01-01

    This paper presents a new conflict resolution methodology for multiple mobile robots while ensuring their motion-liveness, especially in cluttered and dynamic environments. Our method constructs a mathematical formulation in the form of an optimization problem, minimizing the overall travel times of the robots subject to resolving all the conflicts in their motion. This optimization problem can be easily solved by coordinating only the robots' speeds. To overcome the computational cost of executing the algorithm for very cluttered environments, we develop an innovative method that clusters the environment into independent subproblems that can be solved using parallel programming techniques. We demonstrate the scalability of our approach through extensive simulations. Simulation results showed that our proposed method is capable of resolving the conflicts of 100 robots in less than 1.23 s in a cluttered environment that has 4357 intersections in the paths of the robots. We also developed an experimental testbed and demonstrated that our approach can be implemented in real time. We finally compared our approach with other existing methods in the literature, both quantitatively and qualitatively. This comparison shows that while our approach is mathematically sound, it is more computationally efficient, scales to a very large number of robots, and guarantees live and smooth robot motion.

  13. Multiple Method Contraception Use among African American Adolescents in Four US Cities

    Jennifer L. Brown

    2011-01-01

    We report on African American adolescents' (N = 850; M age = 15.4) contraceptive practices and the types of contraception utilized during their last sexual encounter. Respondents completed measures of demographics, contraceptive use, sexual partner type, and ability to select "safe" sexual partners. Forty percent endorsed use of dual or multiple contraceptive methods; a total of 35 different contraceptive combinations were reported. Perceived ability to select "safe" partners was associated with not using contraception (OR = 1.25), using less effective contraceptive methods (OR = 1.23), or hormonal birth control (OR = 1.50). Female gender predicted hormonal birth control use (OR = 2.33), use of less effective contraceptive methods (e.g., withdrawal; OR = 2.47), and using no contraception (OR = 2.37). Respondents' age and partner type did not predict contraception use. Adolescents used contraceptive methods with limited ability to prevent both unintended pregnancies and STD/HIV. Adolescents who believed their partners posed low risk were more likely to use contraceptive practices other than condoms, or no contraception. Reproductive health practitioners are encouraged to help youth negotiate contraceptive use with partners, regardless of the partner's perceived riskiness.

  14. Expansion methods for solving integral equations with multiple time lags using Bernstein polynomial of the second kind

    Mahmoud Paripour

    2014-08-01

    In this paper, the Bernstein polynomials are used to approximate the solutions of linear integral equations with multiple time lags (IEMTL) through expansion methods (collocation method, partition method, Galerkin method). The method is discussed in detail and illustrated by solving some numerical examples. A comparison between the exact and approximated results obtained from these methods is carried out.
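    A minimal sketch of the collocation variant is shown below for a lag-free Fredholm equation of the second kind, u(x) - ∫₀¹ K(x,t) u(t) dt = f(x), as a stand-in for the full IEMTL setting; the polynomial degree, collocation nodes, and test kernel are illustrative assumptions.

```python
import numpy as np
from math import comb

def bernstein(k, n, x):
    """Bernstein basis polynomial B_{k,n} on [0, 1]."""
    return comb(n, k) * x**k * (1 - x)**(n - k)

def solve_fredholm_collocation(K, f, n=8, nq=64):
    """Collocation with a degree-n Bernstein expansion for
    u(x) - int_0^1 K(x, t) u(t) dt = f(x)."""
    xs = np.linspace(0, 1, n + 1)                 # collocation points
    tq, wq = np.polynomial.legendre.leggauss(nq)  # Gauss-Legendre on [-1, 1]
    tq = 0.5 * (tq + 1); wq = 0.5 * wq            # mapped to [0, 1]
    A = np.empty((n + 1, n + 1))
    for i, x in enumerate(xs):
        for k in range(n + 1):
            integral = np.sum(wq * K(x, tq) * bernstein(k, n, tq))
            A[i, k] = bernstein(k, n, x) - integral
    c = np.linalg.solve(A, f(xs))                 # expansion coefficients
    return lambda x: sum(ck * bernstein(k, n, x) for k, ck in enumerate(c))

# test problem with known solution u(x) = x:
# K(x, t) = x*t  gives  f(x) = x - x/3 = 2x/3
u = solve_fredholm_collocation(lambda x, t: x * t, lambda x: 2 * x / 3)
```

    Since the exact solution here is a polynomial inside the span of the Bernstein basis, the collocation solution reproduces it to quadrature precision; the lagged case additionally needs the history of u on the lag interval.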

  15. Methods for significance testing of categorical covariates in logistic regression models after multiple imputation: power and applicability analysis

    Eekhout, I.; Wiel, M.A. van de; Heymans, M.W.

    2017-01-01

    Background. Multiple imputation is a recommended method to handle missing data. For significance testing after multiple imputation, Rubin’s Rules (RR) are easily applied to pool parameter estimates. In a logistic regression model, to consider whether a categorical covariate with more than two levels
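    The pooling step referred to above, Rubin's Rules, combines per-imputation estimates and variances as sketched below; the numeric inputs are illustrative, not data from the study.

```python
import math

def rubins_rules(estimates, variances):
    """Pool a scalar parameter over m imputed datasets with Rubin's Rules.

    Pooled estimate:             qbar = mean(q_i)
    Within-imputation variance:  W = mean(u_i)
    Between-imputation variance: B = sample variance of the q_i
    Total variance:              T = W + (1 + 1/m) * B
    """
    m = len(estimates)
    qbar = sum(estimates) / m
    W = sum(variances) / m
    B = sum((q - qbar) ** 2 for q in estimates) / (m - 1)
    T = W + (1 + 1 / m) * B
    # classical large-sample degrees of freedom for the pooled Wald statistic
    r = (1 + 1 / m) * B / W
    df = (m - 1) * (1 + 1 / r) ** 2
    return qbar, T, df

# pooled log-odds ratio over five imputations (illustrative numbers)
est, tvar, df = rubins_rules([0.42, 0.47, 0.40, 0.45, 0.44],
                             [0.02, 0.021, 0.019, 0.02, 0.02])
wald = est / math.sqrt(tvar)
```

    The pooled Wald statistic est / sqrt(tvar), referred to the t distribution with df degrees of freedom, is the scalar significance test that the multivariate methods in the abstract generalize to categorical covariates with more than two levels.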

  16. Methods for meta-analysis of multiple traits using GWAS summary statistics.

    Ray, Debashree; Boehnke, Michael

    2018-03-01

    Genome-wide association studies (GWAS) for complex diseases have focused primarily on single-trait analyses for disease status and disease-related quantitative traits. For example, GWAS on risk factors for coronary artery disease analyze genetic associations of plasma lipids such as total cholesterol, LDL-cholesterol, HDL-cholesterol, and triglycerides (TGs) separately. However, traits are often correlated and a joint analysis may yield increased statistical power for association over multiple univariate analyses. Recently several multivariate methods have been proposed that require individual-level data. Here, we develop metaUSAT (where USAT is unified score-based association test), a novel unified association test of a single genetic variant with multiple traits that uses only summary statistics from existing GWAS. Although the existing methods either perform well when most correlated traits are affected by the genetic variant in the same direction or are powerful when only a few of the correlated traits are associated, metaUSAT is designed to be robust to the association structure of correlated traits. metaUSAT does not require individual-level data and can test genetic associations of categorical and/or continuous traits. One can also use metaUSAT to analyze a single trait over multiple studies, appropriately accounting for overlapping samples, if any. metaUSAT provides an approximate asymptotic P-value for association and is computationally efficient for implementation at a genome-wide level. Simulation experiments show that metaUSAT maintains proper type-I error at low error levels. It has similar and sometimes greater power to detect association across a wide array of scenarios compared to existing methods, which are usually powerful for some specific association scenarios only. 
When applied to plasma lipids summary data from the METSIM and the T2D-GENES studies, metaUSAT detected genome-wide significant loci beyond the ones identified by univariate analyses.
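
    A common building block of such summary-statistics tests can be sketched in pure Python: the classical Wald-type joint test T = zᵀR⁻¹z, which is chi-square with k degrees of freedom under the null when z holds the k univariate association z-scores and R their correlation matrix. This is not the metaUSAT statistic itself (metaUSAT is built to be robust across association structures); the small Gaussian-elimination solver and the even-df survival function are implementation conveniences for the sketch.

    ```python
    import math

    def solve(A, b):
        # Gaussian elimination with partial pivoting; fine for small systems
        n = len(b)
        M = [row[:] + [b[i]] for i, row in enumerate(A)]
        for col in range(n):
            piv = max(range(col, n), key=lambda r: abs(M[r][col]))
            M[col], M[piv] = M[piv], M[col]
            for r in range(col + 1, n):
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
        x = [0.0] * n
        for i in range(n - 1, -1, -1):
            x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
        return x

    def chi2_sf_even(x, df):
        # chi-square survival function; the closed-form series is exact for even df
        k = df // 2
        term, total = 1.0, 1.0
        for i in range(1, k):
            term *= (x / 2) / i
            total += term
        return math.exp(-x / 2) * total

    def joint_wald_test(z, R):
        # T = z' R^{-1} z ~ chi-square with len(z) df under the joint null
        w = solve(R, z)
        T = sum(zi * wi for zi, wi in zip(z, w))
        return T, chi2_sf_even(T, len(z))
    ```

    For two uncorrelated traits with z-scores of 2.0 each, T = 8.0; positive correlation between the traits shrinks the statistic, reflecting the redundancy of the two signals.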

  17. A multiple-well method for immunohistochemical testing of many reagents on a single microscopic slide.

    McKeever, P E; Letica, L H; Shakui, P; Averill, D R

    1988-09-01

    Multiple wells (M-wells) have been made over tissue sections on single microscopic slides to simultaneously localize binding specificity of many antibodies. More than 20 individual 4-microliter wells over tissue have been applied/slide, representing more than a 5-fold improvement in wells/slide and a 25-fold reduction in reagent volume over previous methods. More than 30 wells/slide have been applied over cellular monolayers. To produce the improvement, previous strategies of placing specimens into wells were changed to instead create wells over the specimen. We took advantage of the hydrophobic properties of paint to surround the wells and to segregate the various different primary antibodies. Segregation was complete on wells alternating with and without primary monoclonal antibody. The procedure accommodates both frozen and paraffin sections, yielding slides which last more than a year. After monoclonal antibody detection, standard histologic stains can be applied as counterstains. M-wells are suitable for localizing binding of multiple reagents or sample unknowns (polyclonal or monoclonal antibodies, hybridoma supernatants, body fluids, lectins) to either tissues or cells. Their small sample volume and large number of sample wells/slide could be particularly useful for early screening of hybridoma supernatants and for titration curves in immunohistochemistry (McKeever PE, Shakui P, Letica LH, Averill DR: J Histochem Cytochem 36:931, 1988).

  18. Quantifying submarine groundwater discharge in the coastal zone via multiple methods

    Burnett, W.C.; Aggarwal, P.K.; Aureli, A.; Bokuniewicz, H.; Cable, J.E.; Charette, M.A.; Kontar, E.; Krupa, S.; Kulkarni, K.M.; Loveless, A.; Moore, W.S.; Oberdorfer, J.A.; Oliveira, J.; Ozyurt, N.; Povinec, P.; Privitera, A.M.G.; Rajar, R.; Ramessur, R.T.; Scholten, J.; Stieglitz, T.; Taniguchi, M.; Turner, J.V.

    2006-01-01

Submarine groundwater discharge (SGD) is now recognized as an important pathway between land and sea. As such, this flow may contribute to the biogeochemical and other marine budgets of near-shore waters. These discharges typically display significant spatial and temporal variability making assessments difficult. Groundwater seepage is patchy, diffuse, temporally variable, and may involve multiple aquifers. Thus, the measurement of its magnitude and associated chemical fluxes is a challenging enterprise. A joint project of UNESCO and the International Atomic Energy Agency (IAEA) has examined several methods of SGD assessment and carried out a series of five intercomparison experiments in different hydrogeologic environments (coastal plain, karst, glacial till, fractured crystalline rock, and volcanic terrains). This report reviews the scientific and management significance of SGD, measurement approaches, and the results of the intercomparison experiments. We conclude that while the process is essentially ubiquitous in coastal areas, the assessment of its magnitude at any one location is subject to enough variability that measurements should be made by a variety of techniques and over large enough spatial and temporal scales to capture the majority of these changing conditions. We feel that all the measurement techniques described here are valid although they each have their own advantages and disadvantages. It is recommended that multiple approaches be applied whenever possible. In addition, a continuing effort is required in order to capture long-period tidal fluctuations, storm effects, and seasonal variations.

  19. Numerical Computation of Underground Inundation in Multiple Layers Using the Adaptive Transfer Method

    Hyung-Jun Kim

    2018-01-01

Extreme rainfall causes surface runoff to flow towards lowlands and subterranean facilities, such as subway stations and buildings with underground spaces in densely packed urban areas. These facilities and areas are therefore vulnerable to catastrophic submergence. However, flood modeling of underground space has not yet been adequately studied because there are difficulties in reproducing the associated multiple horizontal layers connected with staircases or elevators. This study proposes a convenient approach to simulate underground inundation when two layers are connected. The main facet of this approach is to compute the flow flux passing through staircases in an upper layer and to transfer the equivalent quantity to a lower layer. This is defined as the ‘adaptive transfer method’. This method overcomes the limitations of 2D modeling by introducing layer-connecting concepts to prevent large variations in mesh sizes caused by complicated underlying obstacles or local details. Consequently, this study aims to contribute to the numerical analysis of flow in inundated underground spaces with multiple floors.
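
    A minimal sketch of the transfer idea, treating the staircase opening as a broad-crested weir with an assumed discharge coefficient (the paper's calibrated flux model may differ):

    ```python
    def staircase_flux(depth_above_opening, width, cd=1.7):
        """Discharge [m^3/s] through a staircase opening, treated as a
        broad-crested weir: Q = cd * B * h^(3/2). The weir analogy and
        cd = 1.7 (SI units) are assumptions of this sketch."""
        h = max(depth_above_opening, 0.0)
        return cd * width * h ** 1.5

    def transfer_step(upper_depth, lower_depth, area_upper, area_lower,
                      stair_width, dt):
        """One explicit step of the 'adaptive transfer' idea: remove the
        flux volume from the upper layer and add the equivalent quantity
        to the lower layer, conserving water volume."""
        q = staircase_flux(upper_depth, stair_width)
        vol = min(q * dt, upper_depth * area_upper)  # cannot drain more than stored
        return (upper_depth - vol / area_upper,
                lower_depth + vol / area_lower)
    ```

    The `min` guard keeps the explicit step from draining more water than the upper layer holds; a real solver would also limit `dt` for stability.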

  20. Multiple-target method for sputtering amorphous films for bubble-domain devices

    Burilla, C.T.; Bekebrede, W.R.; Smith, A.B.

    1976-01-01

Previously, sputtered amorphous metal alloys for bubble applications have ordinarily been prepared by standard sputtering techniques using a single target electrode. The deposition of these alloys is reported using a multiple-target rf technique in which a separate target is used for each element contained in the alloy. One of the main advantages of this multiple-target approach is that the film composition can be easily changed by simply varying the voltages applied to the elemental targets. In the apparatus, the centers of the targets are positioned on a 15 cm-radius circle. The platform holding the film substrate is on a 15 cm-long arm which can rotate about the center, thus bringing the sample successively under each target. The platform rotation rate is adjustable from 0 to 190 rpm. That this latter speed is sufficient to homogenize the alloys produced is demonstrated by measurements made of the uniaxial anisotropy constant in Gd0.12Co0.59Cu0.29 films. The anisotropy is 6.0 × 10^5 ergs/cm^3 and independent of rotation rate above approximately 25 rpm, but it drops rapidly for slower rotation rates, reaching 1.8 × 10^5 ergs/cm^3 for 7 rpm. The film quality is equal to that of films made by conventional methods. Coercivities of a few oersteds in samples with stripe widths of 1 to 2 μm and magnetizations of 800 to 2800 G were observed.

  1. Quantifying submarine groundwater discharge in the coastal zone via multiple methods

    Burnett, W.C. [Department of Oceanography, Florida State University, Tallahassee, FL 32306 (United States); Aggarwal, P.K.; Kulkarni, K.M. [Isotope Hydrology Section, International Atomic Energy Agency (Austria); Aureli, A. [Department Water Resources Management, University of Palermo, Catania (Italy); Bokuniewicz, H. [Marine Science Research Center, Stony Brook University (United States); Cable, J.E. [Department Oceanography, Louisiana State University (United States); Charette, M.A. [Department Marine Chemistry, Woods Hole Oceanographic Institution (United States); Kontar, E. [Shirshov Institute of Oceanology (Russian Federation); Krupa, S. [South Florida Water Management District (United States); Loveless, A. [University of Western Australia (Australia); Moore, W.S. [Department Geological Sciences, University of South Carolina (United States); Oberdorfer, J.A. [Department Geology, San Jose State University (United States); Oliveira, J. [Instituto de Pesquisas Energeticas e Nucleares (Brazil); Ozyurt, N. [Department Geological Engineering, Hacettepe (Turkey); Povinec, P.; Scholten, J. [Marine Environment Laboratory, International Atomic Energy Agency (Monaco); Privitera, A.M.G. [U.O. 4.17 of the G.N.D.C.I., National Research Council (Italy); Rajar, R. [Faculty of Civil and Geodetic Engineering, University of Ljubljana (Slovenia); Ramessur, R.T. [Department Chemistry, University of Mauritius (Mauritius); Stieglitz, T. [Mathematical and Physical Sciences, James Cook University (Australia); Taniguchi, M. [Research Institute for Humanity and Nature (Japan); Turner, J.V. [CSIRO, Land and Water, Perth (Australia)

    2006-08-31

    Submarine groundwater discharge (SGD) is now recognized as an important pathway between land and sea. As such, this flow may contribute to the biogeochemical and other marine budgets of near-shore waters. These discharges typically display significant spatial and temporal variability making assessments difficult. Groundwater seepage is patchy, diffuse, temporally variable, and may involve multiple aquifers. Thus, the measurement of its magnitude and associated chemical fluxes is a challenging enterprise. A joint project of UNESCO and the International Atomic Energy Agency (IAEA) has examined several methods of SGD assessment and carried out a series of five intercomparison experiments in different hydrogeologic environments (coastal plain, karst, glacial till, fractured crystalline rock, and volcanic terrains). This report reviews the scientific and management significance of SGD, measurement approaches, and the results of the intercomparison experiments. We conclude that while the process is essentially ubiquitous in coastal areas, the assessment of its magnitude at any one location is subject to enough variability that measurements should be made by a variety of techniques and over large enough spatial and temporal scales to capture the majority of these changing conditions. We feel that all the measurement techniques described here are valid although they each have their own advantages and disadvantages. It is recommended that multiple approaches be applied whenever possible. In addition, a continuing effort is required in order to capture long-period tidal fluctuations, storm effects, and seasonal variations. (author)

  2. On the nonlinear dynamics of trolling-mode AFM: Analytical solution using multiple time scales method

    Sajjadi, Mohammadreza; Pishkenari, Hossein Nejat; Vossoughi, Gholamreza

    2018-06-01

Trolling mode atomic force microscopy (TR-AFM) has resolved many imaging problems by a considerable reduction of the liquid-resonator interaction forces in liquid environments. The present study develops a nonlinear model of the meniscus force exerted on the nanoneedle of TR-AFM and presents an analytical solution to the distributed-parameter model of the TR-AFM resonator utilizing the multiple time scales (MTS) method. Based on the developed analytical solution, the frequency-response curves of the resonator operation in air and liquid (for different penetration lengths of the nanoneedle) are obtained. The closed-form analytical solution and the frequency-response curves are validated by comparison with both the finite element solution of the main partial differential equations and the experimental observations. The effect of excitation angle of the resonator on horizontal oscillation of the probe tip and the effect of different parameters on the frequency response of the system are investigated.

  3. Application of X-ray methods to assess grain vulnerability to damage resulting from multiple loads

    Zlobecki, A.

    1995-01-01

The aim of the work is to describe wheat grain behavior under multiple dynamic loads with various multipliers. The experiments were conducted on Almari variety grain. Grain moisture was 11, 16, 21 and 28%. A special ram stand was used for loading the grain. The experiments were carried out using an 8 g weight, equivalent to an impact energy of 4.6 × 10^-3 J. The X-ray method was used to assess damage. The exposure time was 8 minutes with X-ray lamp voltage equal to 15 kV. The position index was used as the measure of the damage. The investigation results were analyzed statistically. Based on the results of analysis of variance, regression analysis, the d-Duncan test and the Kolmogorov-Smirnov test, the damage number was shown to depend greatly on the number of impacts for the whole range of moisture of the grain loaded. (author)

  4. Methods for radiation detection and characterization using a multiple detector probe

    Akers, Douglas William; Roybal, Lyle Gene

    2014-11-04

    Apparatuses, methods, and systems relating to radiological characterization of environments are disclosed. Multi-detector probes with a plurality of detectors in a common housing may be used to substantially concurrently detect a plurality of different radiation activities and types. Multiple multi-detector probes may be used in a down-hole environment to substantially concurrently detect radioactive activity and contents of a buried waste container. Software may process, analyze, and integrate the data from the different multi-detector probes and the different detector types therein to provide source location and integrated analysis as to the source types and activity in the measured environment. Further, the integrated data may be used to compensate for differential density effects and the effects of radiation shielding materials within the volume being measured.

  5. A Nonparametric, Multiple Imputation-Based Method for the Retrospective Integration of Data Sets

    Carrig, Madeline M.; Manrique-Vallier, Daniel; Ranby, Krista W.; Reiter, Jerome P.; Hoyle, Rick H.

    2015-01-01

    Complex research questions often cannot be addressed adequately with a single data set. One sensible alternative to the high cost and effort associated with the creation of large new data sets is to combine existing data sets containing variables related to the constructs of interest. The goal of the present research was to develop a flexible, broadly applicable approach to the integration of disparate data sets that is based on nonparametric multiple imputation and the collection of data from a convenient, de novo calibration sample. We demonstrate proof of concept for the approach by integrating three existing data sets containing items related to the extent of problematic alcohol use and associations with deviant peers. We discuss both necessary conditions for the approach to work well and potential strengths and weaknesses of the method compared to other data set integration approaches. PMID:26257437

  6. Method of estimating the leakage of multiple barriers in a radioactive materials shipping package

    Towell, R.H.; Kapoor, A.; Oras, J.J.

    1997-01-01

This paper presents the results of a theoretical study of the performance of multiple leaky barriers in containing radioactive materials in a shipping package. The methods used are reasoned analysis and finite element modeling of the barriers. The finite element model is developed and evaluated with parameters set to bracket 6M configurations with three to six nested plastic jars, food-pack cans, and plastic bags inside Department of Transportation (DOT) Specification 2R inner containers with pipe thread closures. The results show that nested barriers reach the regulatory limit of 1×10^-6 A2/hr in 11 to 52 days, even though individually the barriers would exceed the regulatory limit by a factor of as much as 370 instantaneously. These times are within normal shipping times. The finite element model is conservative because it does not consider the deposition and sticking of the leaking radioactive material on the surfaces inside each boundary.

  7. Simulation of a method to directly image exoplanets around multiple stars systems

    Thomas, Sandrine J.; Bendek, Eduardo; Belikov, Ruslan

    2014-08-01

Direct imaging of extra-solar planets has now become a reality, especially with the deployment and commissioning of the first generation of specialized ground-based instruments such as GPI, SPHERE, P1640 and SCExAO. These systems will allow detection of planets 10^7 times fainter than their host star. For space-based missions, such as EXCEDE, EXO-C, EXO-S, WFIRST/AFTA, different teams have shown in laboratories contrasts reaching 10^-10 within a few diffraction limits from the star using a combination of a coronagraph to suppress light coming from the host star and a wavefront control system. These demonstrations use a deformable mirror (DM) to remove residual starlight (speckles) created by the imperfections of the telescope. However, all these current and future systems focus on detecting faint planets around a single host star or unresolved binaries/multiples, while several targets or planet candidates are located around nearby binary stars such as our neighbor star Alpha Centauri. Until now, it has been thought that removing the light of a companion star is impossible with current technology, excluding binary star systems from target lists of direct imaging missions. Direct imaging around binary/multiple systems at a level of contrast allowing Earth-like planet detection is challenging because the region of interest, where a dark zone is essential, is contaminated by the light coming from the host star's companion. We propose a method to simultaneously correct aberrations and diffraction of light coming from the target star as well as its companion star in order to reveal planets orbiting the target star. This method works even if the companion star is outside the control region of the DM (beyond its half-Nyquist frequency), by taking advantage of aliasing effects.

  8. A simple method for combining genetic mapping data from multiple crosses and experimental designs.

    Jeremy L Peirce

BACKGROUND: Over the past decade many linkage studies have defined chromosomal intervals containing polymorphisms that modulate a variety of traits. Many phenotypes are now associated with enough mapping data that meta-analysis could help refine locations of known QTLs and detect many novel QTLs. METHODOLOGY/PRINCIPAL FINDINGS: We describe a simple approach to combining QTL mapping results for multiple studies and demonstrate its utility using two hippocampus weight loci. Using data taken from two populations, a recombinant inbred strain set and an advanced intercross population, we demonstrate considerable improvements in significance and resolution for both loci. 1-LOD support intervals were improved 51% for Hipp1a and 37% for Hipp9a. We first generate locus-wise permuted P-values for association with the phenotype from multiple maps, which can be done using a permutation method appropriate to each population. These results are then assigned to defined physical positions by interpolation between markers with known physical and genetic positions. We then use Fisher's combination test to combine position-by-position probabilities among experiments. Finally, we calculate genome-wide combined P-values by generating locus-specific P-values for each permuted map for each experiment. These permuted maps are then sampled with replacement and combined. The distribution of best locus-specific P-values for each combined map is the null distribution of genome-wide adjusted P-values. CONCLUSIONS/SIGNIFICANCE: Our approach is applicable to a wide variety of segregating and non-segregating mapping populations, facilitates rapid refinement of physical QTL position, is complementary to other QTL fine mapping methods, and provides an appropriate genome-wide criterion of significance for combined mapping results.
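
    Fisher's combination test used at the position-by-position step is straightforward to implement: X = -2 Σ ln p_i follows a chi-square distribution with 2k degrees of freedom under the joint null. A stdlib-only sketch (the survival-function series is exact because 2k is always even):

    ```python
    import math

    def fisher_combined_p(pvalues):
        """Fisher's combination test: X = -2 * sum(ln p_i) is chi-square
        distributed with 2k degrees of freedom under the joint null."""
        x = -2.0 * sum(math.log(p) for p in pvalues)
        k = len(pvalues)
        # chi-square survival function via its closed form for even df = 2k
        term, total = 1.0, 1.0
        for i in range(1, k):
            term *= (x / 2) / i
            total += term
        return math.exp(-x / 2) * total
    ```

    With a single p-value the test returns it unchanged, and two moderate p-values of 0.1 combine to roughly 0.056, showing how concordant evidence accumulates.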

  9. Multiplicative version of Promethee method in assessment of parks in Novi Sad

    Lakićević Milena D.

    2017-01-01

Decision support methods have an important role regarding environmental and landscape planning problems. In this research, one of the decision support methods - the multiplicative version of Promethee - has been applied for assessment of five main parks in Novi Sad. The procedure required defining a set of criteria that were as follows: aesthetic, ecological and social values of the analyzed parks. For each criterion an appropriate Promethee preference function was adopted with corresponding threshold values. The final result of the process was the ranking of parks by their aesthetic, ecological and social quality and importance for the City of Novi Sad. The result can help urban planners and responsible city bodies in their future actions aimed at improving development and management of the analyzed parks. Two main directions for future research were identified: (a) testing the applicability of other decision support methods, along with Promethee, on the same problem and comparison of their results; and (b) analysis of the criteria set more closely by expanding it and/or including a set of indicators. [Project of the Serbian Ministry of Education, Science and Technological Development, Grant no. 174003: Theory and application of analytic hierarchy process (AHP) in multi-criteria decision making under conditions of risk and uncertainty (individual and group context)]
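
    For illustration, the flow computation underlying standard PROMETHEE II can be sketched as follows; the linear preference function, thresholds, and weights are invented example values, and the multiplicative variant used in the paper modifies the aggregation step, which is not reproduced here:

    ```python
    def linear_pref(d, q=0.0, p=1.0):
        """Linear preference function: 0 below indifference threshold q,
        1 above preference threshold p, linear in between."""
        if d <= q:
            return 0.0
        if d >= p:
            return 1.0
        return (d - q) / (p - q)

    def promethee_net_flows(scores, weights, q=0.0, p=1.0):
        """Net outranking flows of PROMETHEE II: for each alternative,
        average the weighted preference of it over every other
        alternative minus the preference of the others over it."""
        n = len(scores)
        flows = []
        for a in range(n):
            net = 0.0
            for b in range(n):
                if a == b:
                    continue
                pi_ab = sum(w * linear_pref(sa - sb, q, p)
                            for w, sa, sb in zip(weights, scores[a], scores[b]))
                pi_ba = sum(w * linear_pref(sb - sa, q, p)
                            for w, sa, sb in zip(weights, scores[a], scores[b]))
                net += pi_ab - pi_ba
            flows.append(net / (n - 1))
        return flows
    ```

    A park scoring higher on every criterion receives the maximum net flow of +1, and net flows always sum to zero across alternatives.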

  10. An Advanced Method to Apply Multiple Rainfall Thresholds for Urban Flood Warnings

    Jiun-Huei Jang

    2015-11-01

Issuing warning information to the public when rainfall exceeds given thresholds is a simple and widely-used method to minimize flood risk; however, this method lacks sophistication when compared with hydrodynamic simulation. In this study, an advanced methodology is proposed to improve the warning effectiveness of the rainfall threshold method for urban areas through deterministic-stochastic modeling, without sacrificing simplicity and efficiency. With regards to flooding mechanisms, rainfall thresholds of different durations are divided into two groups accounting for flooding caused by drainage overload and disastrous runoff, which help in grading the warning level in terms of emergency and severity when the two are observed together. A flood warning is then classified into four levels distinguished by green, yellow, orange, and red lights in ascending order of priority that indicate the required measures, from standby, flood defense, evacuation to rescue, respectively. The proposed methodology is tested according to 22 historical events in the last 10 years for 252 urbanized townships in Taiwan. The results show satisfactory accuracy in predicting the occurrence and timing of flooding, with a logical warning time series for taking progressive measures. For systems with multiple rainfall thresholds already in place, the methodology can be used to ensure better application of rainfall thresholds in urban flood warnings.
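
    The four-light grading can be sketched as a simple classifier over the two threshold groups; the threshold values and the one-group-exceeded color mapping below are illustrative assumptions, not the calibrated Taiwanese thresholds:

    ```python
    def warning_level(rain_short_mm, rain_long_mm,
                      short_thr=40.0, long_thr=200.0):
        """Grade an urban flood warning from two rainfall-threshold groups:
        short-duration rainfall flags drainage overload, long-duration
        rainfall flags disastrous runoff. Threshold values are invented
        examples. Neither exceeded -> green (standby); one -> yellow
        (flood defense) or orange (evacuation); both -> red (rescue)."""
        overload = rain_short_mm >= short_thr   # drainage-overload group
        runoff = rain_long_mm >= long_thr       # disastrous-runoff group
        if overload and runoff:
            return "red"
        if runoff:
            return "orange"
        if overload:
            return "yellow"
        return "green"
    ```

    Mapping the runoff-only case to orange encodes the paper's idea that the more severe mechanism escalates the warning even before both groups are exceeded.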

  11. Isothermal multiple displacement amplification: a methodical approach enhancing molecular routine diagnostics of microcarcinomas and small biopsies.

    Mairinger, Fabian D; Walter, Robert Fh; Vollbrecht, Claudia; Hager, Thomas; Worm, Karl; Ting, Saskia; Wohlschläger, Jeremias; Zarogoulidis, Paul; Zarogoulidis, Konstantinos; Schmid, Kurt W

    2014-01-01

Isothermal multiple displacement amplification (IMDA) can be a powerful tool in molecular routine diagnostics for homogeneous and sequence-independent whole-genome amplification of notably small tumor samples, e.g., microcarcinomas and biopsies containing a small amount of tumor. Currently, this method is not well established in pathology laboratories. We designed a study to confirm the feasibility and convenience of this method for routine diagnostics with formalin-fixed, paraffin-embedded samples prepared by laser-capture microdissection. A total of 250 μg DNA (concentration 5 μg/μL) was generated by amplification over a period of 8 hours with a material input of approximately 25 cells, approximately equivalent to 175 pg of genomic DNA. In the generated DNA, a representation of all chromosomes could be shown and the presence of elected genes relevant for diagnosis in clinical samples could be proven. Mutational analysis of clinical samples could be performed without any difficulty and showed concordance with earlier diagnostic findings. We established the feasibility and convenience of IMDA for routine diagnostics. We also showed that small amounts of DNA, which were not analyzable with current molecular methods, could be sufficient for a wide field of applications in molecular routine diagnostics when they are preamplified with IMDA.

  12. A Bayesian method and its variational approximation for prediction of genomic breeding values in multiple traits

    Hayashi Takeshi

    2013-01-01

Background Genomic selection is an effective tool for animal and plant breeding, allowing effective individual selection without phenotypic records through the prediction of genomic breeding value (GBV). To date, genomic selection has focused on a single trait. However, actual breeding often targets multiple correlated traits, and, therefore, joint analysis taking into consideration the correlation between traits, which might result in more accurate GBV prediction than analyzing each trait separately, is suitable for multi-trait genomic selection. This would require an extension of the prediction model for single-trait GBV to the multi-trait case. As the computational burden of multi-trait analysis is even higher than that of single-trait analysis, an effective computational method for constructing a multi-trait prediction model is also needed. Results We described a Bayesian regression model incorporating variable selection for jointly predicting GBVs of multiple traits and devised both an MCMC iteration and variational approximation for Bayesian estimation of parameters in this multi-trait model. The proposed Bayesian procedures with MCMC iteration and variational approximation were referred to as MCBayes and varBayes, respectively. Using simulated datasets of SNP genotypes and phenotypes for three traits with high and low heritabilities, we compared the accuracy in predicting GBVs between multi-trait and single-trait analyses as well as between MCBayes and varBayes. The results showed that, compared to single-trait analysis, multi-trait analysis enabled much more accurate GBV prediction for low-heritability traits correlated with high-heritability traits, by utilizing the correlation structure between traits, while the prediction accuracy for uncorrelated low-heritability traits was comparable or less with multi-trait analysis in comparison with single-trait analysis, depending on the setting of the prior probability that a SNP has zero effect.

  13. Different attention bias patterns in anorexia nervosa restricting and binge/purge types.

    Gilon Mann, Tal; Hamdan, Sami; Bar-Haim, Yair; Lazarov, Amit; Enoch-Levy, Adi; Dubnov-Raz, Gal; Treasure, Janet; Stein, Daniel

    2018-04-03

    Patients with anorexia nervosa (AN) have been shown to display both elevated anxiety and attentional biases in threat processing. In this study, we compared threat-related attention patterns of patients with AN restricting type (AN-R; n = 32), AN binge/purge type (AN-B/P; n = 23), and healthy controls (n = 19). A dot-probe task with either eating disorder-related or general and social anxiety-related words was used to measure attention patterns. Severity of eating disorder symptoms, depression, anxiety, and stress were also assessed. Patients with AN-R showed vigilance to both types of threat words, whereas patients with AN-B/P showed avoidance of both threat types. Healthy control participants did not show any attention bias. Attention bias was not associated with any of the demographic, clinical, and psychometric parameters introduced. These findings suggest that there are differential patterns of attention allocation in patients with AN-R and AN-B/P. More research is needed to identify what causes/underlies these differential patterns. Copyright © 2018 John Wiley & Sons, Ltd and Eating Disorders Association.

  14. Reward Dependence and Harm Avoidance among Patients with Binge-Purge Type Eating Disorders.

    Gat-Lazer, Sigal; Geva, Ronny; Gur, Eitan; Stein, Daniel

    2017-05-01

Cloninger's Psychobiological Model of Temperament and Character includes temperamental dimensions such as reward dependence (RD) and harm avoidance (HA). Studies of RD differentiate between sensitivity to reward (SR) versus to punishment (SP). We investigated the interrelationship between HA and RD in acutely ill patients with binge/purge (B/P) type eating disorders (EDs) and following symptomatic stabilization. Fifty patients with B/P EDs were assessed at admission to inpatient treatment, 36 of whom were reassessed at discharge. Thirty-six controls were similarly assessed. Participants completed the Tridimensional Personality Questionnaire (TPQ), Sensitivity to Punishment and Sensitivity to Reward Questionnaire (SPSRQ), and took the Gambling Task. Patients with B/P EDs had higher TPQ-RD and lower TPQ-HA accompanied by lower SPSRQ-SR and SPSRQ-SP. SPSRQ-SP correlated positively and negatively with TPQ-HA and TPQ-RD, respectively. Combination of lower TPQ-HA, lower SPSRQ-SP, and greater risk-taking inclination may maintain disordered eating in patients with B/P EDs. Copyright © 2017 John Wiley & Sons, Ltd and Eating Disorders Association.

  15. Adolescent risk factors for purging in young women: findings from the national longitudinal study of adolescent health

    Stephen, Eric M; Rose, Jennifer; Kenney, Lindsay; Rosselli-Navarra, Francine; Weissman, Ruth Striegel

    2014-01-01

    Background There exists a dearth of prospective adolescent eating disorder studies with samples that are large enough to detect small or medium sized effects for risk factors, that are generalizable to the broader population, and that follow adolescents long enough to fully capture the period of development when the risk of eating disorder symptoms occurring is highest. As a result, the purpose of this study was to examine psychosocial risk factors for purging for weight control in a national...

  16. Optimal planning approaches with multiple impulses for rendezvous based on hybrid genetic algorithm and control method

    JingRui Zhang

    2015-03-01

In this article, we focus on safe and effective completion of a rendezvous and docking task by looking at planning approaches and control with fuel-optimal rendezvous for a target spacecraft running on a near-circular reference orbit. A variety of existing practical path constraints are considered, including the constraints of field of view, impulses, and passive safety. A rendezvous approach is calculated by using a hybrid genetic algorithm with those constraints. Furthermore, a control method of trajectory tracking is adopted to overcome the external disturbances. Based on Clohessy–Wiltshire equations, we first construct the mathematical model of optimal planning approaches of multiple impulses with path constraints. Second, we introduce the principle of the hybrid genetic algorithm with both stronger global searching ability and local searching ability. We additionally explain the application of this algorithm in the problem of trajectory planning. Then, we give three-impulse simulation examples to acquire an optimal rendezvous trajectory with the path constraints presented in this article. The effectiveness and applicability of the tracking control method are verified with the optimal trajectory above as control objective through the numerical simulation.

  17. A composite state method for ensemble data assimilation with multiple limited-area models

    Matthew Kretschmer

    2015-04-01

Limited-area models (LAMs) allow high-resolution forecasts to be made for geographic regions of interest when resources are limited. Typically, boundary conditions for these models are provided through one-way boundary coupling from a coarser resolution global model. Here, data assimilation is considered in a situation in which a global model supplies boundary conditions to multiple LAMs. The data assimilation method presented combines information from all of the models to construct a single ‘composite state’, on which data assimilation is subsequently performed. The analysis composite state is then used to form the initial conditions of the global model and all of the LAMs for the next forecast cycle. The method is tested by using numerical experiments with simple, chaotic models. The results of the experiments show that there is a clear forecast benefit to allowing LAM states to influence one another during the analysis. In addition, adding LAM information at analysis time has a strong positive impact on global model forecast performance, even at points not covered by the LAMs.
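
    The composite-state construction can be sketched with dictionaries mapping grid points to values; letting a LAM value replace the coarser global value on overlaps is one plausible reading of the method, and the tie-break between overlapping LAMs is arbitrary here:

    ```python
    def build_composite(global_state, lam_states):
        """Merge a global state (dict: grid point -> value) with several
        LAM states. At points covered by a LAM, the higher-resolution LAM
        value replaces the global one; later LAMs win on overlaps, an
        arbitrary tie-break for this sketch."""
        composite = dict(global_state)
        for lam in lam_states:
            composite.update(lam)
        return composite

    def distribute(composite, global_state, lam_states):
        """After assimilation is performed on the composite state, hand
        each model back the analysis values at the points it owns, to
        serve as initial conditions for the next forecast cycle."""
        new_global = {pt: composite[pt] for pt in global_state}
        new_lams = [{pt: composite[pt] for pt in lam} for lam in lam_states]
        return new_global, new_lams
    ```

    Because the global model reads its initial conditions back from the composite, LAM information propagates into the global analysis, matching the reported benefit at points not covered by the LAMs.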

  18. QSAR Study of Insecticides of Phthalamide Derivatives Using Multiple Linear Regression and Artificial Neural Network Methods

    Adi Syahputra

    2014-03-01

Full Text Available A quantitative structure-activity relationship (QSAR) for 21 insecticides of phthalamides containing hydrazone (PCH) was studied using multiple linear regression (MLR), principal component regression (PCR) and an artificial neural network (ANN). Five descriptors were included in the model for the MLR and ANN analyses, and five latent variables obtained from principal component analysis (PCA) were used in the PCR analysis. Calculation of descriptors was performed using the semi-empirical PM6 method. ANN analysis was found to be the superior statistical technique compared with the other methods and gave a good correlation between descriptors and activity (r2 = 0.84). Based on the obtained model, we have successfully designed some new insecticides with higher predicted activity than those of previously synthesized compounds, e.g. 2-(decalinecarbamoyl)-5-chloro-N'-((5-methylthiophen-2-yl)methylene)benzohydrazide, 2-(decalinecarbamoyl)-5-chloro-N'-((thiophen-2-yl)methylene)benzohydrazide and 2-(decalinecarbamoyl)-N'-(4-fluorobenzylidene)-5-chlorobenzohydrazide, with predicted log LC50 of 1.640, 1.672, and 1.769, respectively.
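The MLR part of such a QSAR study amounts to ordinary least squares of log LC50 on the descriptors. A hedged sketch on synthetic data — the descriptor matrix and coefficients below are invented, not the PM6-derived descriptors of the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical descriptor matrix: 21 compounds x 5 descriptors.
X = rng.normal(size=(21, 5))
true_coef = np.array([0.8, -0.5, 0.3, 0.1, -0.2])   # invented "true" model
log_lc50 = X @ true_coef + 1.5 + rng.normal(scale=0.05, size=21)

# Ordinary least squares fit of log LC50 on the descriptors.
A = np.column_stack([np.ones(len(X)), X])           # intercept column
coef, *_ = np.linalg.lstsq(A, log_lc50, rcond=None)

pred = A @ coef
ss_res = np.sum((log_lc50 - pred) ** 2)
ss_tot = np.sum((log_lc50 - log_lc50.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot                          # squared correlation
```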

  19. LSHSIM: A Locality Sensitive Hashing based method for multiple-point geostatistics

    Moura, Pedro; Laber, Eduardo; Lopes, Hélio; Mesejo, Daniel; Pavanelli, Lucas; Jardim, João; Thiesen, Francisco; Pujol, Gabriel

    2017-10-01

Reservoir modeling is a very important task that permits the representation of a geological region of interest, so as to generate a considerable number of possible scenarios. Since its inception, many methodologies have been proposed and, in the last two decades, multiple-point geostatistics (MPS) has been the dominant one. This methodology is strongly based on the concept of a training image (TI) and the use of its characteristics, which are called patterns. In this paper, we propose a new MPS method that combines the application of a technique called Locality Sensitive Hashing (LSH), which accelerates the search for patterns similar to a target one, with a Run-Length Encoding (RLE) compression technique that speeds up the calculation of the Hamming similarity. Experiments with both categorical and continuous images show that LSHSIM is computationally efficient and produces good-quality realizations. In particular, for categorical data, the results suggest that LSHSIM is faster than MS-CCSIM, one of the state-of-the-art methods.
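The two ingredients named above can be sketched compactly: bit-sampling LSH to shortlist candidate patterns, and a Hamming distance computed directly on RLE-compressed sequences. This is an illustrative reconstruction of the ideas, not the LSHSIM implementation:

```python
import numpy as np

rng = np.random.default_rng(42)

def rle(bits):
    """Run-length encode a binary sequence as (value, length) pairs."""
    runs, i = [], 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1
        runs.append((bits[i], j - i))
        i = j
    return runs

def hamming_rle(runs_a, runs_b):
    """Hamming distance computed by walking two equal-length RLE streams."""
    dist, ia, ib, ra, rb = 0, 0, 0, 0, 0
    while ia < len(runs_a) and ib < len(runs_b):
        va, la = runs_a[ia]
        vb, lb = runs_b[ib]
        step = min(la - ra, lb - rb)     # advance by the shorter remaining run
        if va != vb:
            dist += step
        ra += step; rb += step
        if ra == la: ia += 1; ra = 0
        if rb == lb: ib += 1; rb = 0
    return dist

# Bit-sampling LSH: the hash is the pattern restricted to k random positions.
n_bits, k = 64, 8
patterns = rng.integers(0, 2, size=(200, n_bits))   # binary (categorical) patterns
positions = rng.choice(n_bits, size=k, replace=False)
buckets = {}
for idx, p in enumerate(patterns):
    buckets.setdefault(tuple(p[positions]), []).append(idx)

query = patterns[17].copy()
candidates = buckets[tuple(query[positions])]        # only bucket members are scanned
best = min(candidates, key=lambda i: hamming_rle(rle(patterns[i]), rle(query)))
```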

  20. Method of steering the gain of a multiple antenna global positioning system receiver

    Evans, Alan G.; Hermann, Bruce R.

    1992-06-01

A method for steering the gain of a multiple-antenna Global Positioning System (GPS) receiver toward a plurality of GPS satellites simultaneously is provided. The GPS signals of a known wavelength are processed digitally for a particular instant in time. A range difference, or propagation delay, between the antennas for GPS signals received from each satellite is first resolved. The range difference consists of a fractional wavelength difference and an integer wavelength difference. The fractional wavelength difference is determined by each antenna's tracking loop. The integer wavelength difference is based upon the known wavelength and the separation between the antennas with respect to each satellite position. The range difference is then used to digitally delay the GPS signals at each antenna with respect to a reference antenna. The signal at the reference antenna is then summed with the digitally delayed signals to generate a composite antenna gain. The method searches for the correct number of integer wavelengths to maximize the composite gain. The range differences are also used to determine the attitude of the array.
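The integer-wavelength search can be mimicked numerically: delay each antenna's signal by a candidate range difference (the measured fraction plus a trial number of integer wavelengths) and keep the candidate that maximizes the composite amplitude. Everything below — the band-limited waveform, sample rate and ranges — is an illustrative stand-in for real GPS signals:

```python
import numpy as np

c = 299_792_458.0                 # speed of light, m/s
wavelength = 0.1903               # approx. GPS L1 carrier wavelength, m
fs, n = 20e9, 2048                # sample rate and record length (illustrative)
freqs = np.fft.rfftfreq(n, d=1 / fs)

def delayed(spec, seconds):
    """Apply an exact (sub-sample) delay in the frequency domain."""
    return np.fft.irfft(spec * np.exp(-2j * np.pi * freqs * seconds), n)

rng = np.random.default_rng(1)
spec = np.fft.rfft(rng.normal(size=n))
spec[(freqs < 1.2e9) | (freqs > 1.8e9)] = 0.0    # band-limited, carrier-like

ranges = np.array([0.0, 0.2731, 0.4368])         # true path differences, m
antennas = [delayed(spec, r / c) for r in ranges]

frac = np.mod(ranges, wavelength)                # tracking-loop fractional part
best = (-np.inf, None)
for n1 in range(-4, 5):                          # integer-wavelength search
    for n2 in range(-4, 5):
        cand = frac + wavelength * np.array([0.0, n1, n2])
        summed = sum(delayed(np.fft.rfft(a), -r / c)
                     for a, r in zip(antennas, cand))
        gain = np.max(np.abs(summed))            # composite gain
        if gain > best[0]:
            best = (gain, (n1, n2))
gain, integers = best
```

The correct integers align all three copies exactly, so the composite peak reaches three times the single-antenna peak; any other candidate misaligns the envelope and scores lower.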

  1. Method of experimental and theoretical modeling for multiple pressure tube rupture for RBMK reactor

    Medvedeva, N.Y.; Goldstein, R.V.; Burrows, J.A.

    2001-01-01

The rupture of single RBMK reactor channels has occurred at a number of stations with a variety of initiating events. It is assumed in RBMK Safety Cases that the force of the escaping fluid will not cause neighbouring channels to break. This assumption has not been justified. A chain reaction of tube breaks could over-pressurise the reactor cavity, leading to catastrophic failure of the containment. To validate the claims of the RBMK Safety Cases, the Electrogorsk Research and Engineering Centre, in collaboration with experts from the Institute of Mechanics of RAS, has developed a method of interacting multiscale physical and mathematical modelling for the coupled thermophysical and gas-dynamic processes, and the deformation and rupture processes, that cause and/or accompany potential failures in design-basis and beyond-design-basis RBMK reactor accidents. To realise the method, a set of rigs, physical and mathematical models and specialized computer codes is being created. This article sets out an experimental philosophy and programme for achieving this objective: to resolve whether multiple fuel channel rupture in RBMK reactors is credible. (author)

  2. The strategic selecting criteria and performance by using the multiple criteria method

    Lisa Y. Chen

    2008-02-01

Full Text Available As competitive intensity increases in the current service market, organizational capabilities have been recognized as important for sustaining competitive advantage. The pursuit of profitable growth has fueled a need for firms to systematically assess and renew the organization. The purpose of this study is to analyze the financial performance of firms in order to create an effective evaluating structure for Taiwan's service industry. This study utilized the TOPSIS (technique for order preference by similarity to ideal solution) method to evaluate the operating performance of 12 companies. TOPSIS is a multiple criteria decision making method that identifies solutions from a finite set of alternatives based upon simultaneous minimization of distance from an ideal point and maximization of distance from a nadir point. Using this approach, this study measures the financial performance of firms through two aspects and ten indicators. The results indicated that e-life had outstanding performance among the 12 retailers. The findings of this study help managers better understand their market position, competition, and profitability for future strategic planning and operational management.
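TOPSIS itself is easy to state in code. The sketch below uses invented scores for four firms on three criteria, not the study's two aspects and ten indicators:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.

    matrix:  alternatives x criteria performance scores
    weights: criterion weights, summing to 1
    benefit: True where larger is better, False for cost criteria
    """
    m = matrix / np.linalg.norm(matrix, axis=0)   # vector-normalise columns
    v = m * weights
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    nadir = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_best = np.linalg.norm(v - ideal, axis=1)
    d_worst = np.linalg.norm(v - nadir, axis=1)
    return d_worst / (d_best + d_worst)           # closeness coefficient in [0, 1]

# Illustrative data: 4 firms scored on profitability (benefit),
# growth (benefit) and debt ratio (cost).
scores = np.array([
    [0.35, 0.12, 0.40],
    [0.28, 0.15, 0.65],
    [0.41, 0.18, 0.30],   # dominates every criterion
    [0.22, 0.08, 0.70],   # dominated on every criterion
])
closeness = topsis(scores, np.array([0.4, 0.3, 0.3]),
                   np.array([True, True, False]))
ranking = np.argsort(-closeness)                  # best alternative first
```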

  3. On the solution of a few problems of multiple scattering by Monte Carlo method

    Bluet, J.C.

    1966-02-01

Three problems of multiple scattering arising from neutron cross-section experiments are reported here. The common hypotheses are: elastic scattering is the only possible process; angular distributions are isotropic; losses of particle energy are negligible in successive collisions. In the three cases practical results, corresponding to actual experiments, are given. Moreover, the results are shown in a more general way, using dimensionless variables such as the ratio of geometrical dimensions to the neutron mean free path. The FORTRAN codes are given together with the corresponding flow charts and lexicons of symbols. First problem: measurement of the sodium capture cross-section. A sodium sample of given geometry is submitted to a neutron flux. The induced activity is then measured by means of a sodium iodide crystal. The distribution of active nuclei in the sample, and the counter efficiency, are calculated by the Monte Carlo method taking multiple scattering into account. Second problem: absolute measurement of a neutron flux using a glass scintillator. The scintillator is a lithium-6 loaded glass, submitted to a neutron flux perpendicular to its plane faces. If the glass thickness is not negligible compared with the scattering mean free path λ, the mean path e' of neutrons in the glass differs from the thickness e. Monte Carlo calculations are made to compute this path and a relative correction to the efficiency equal to (e' - e)/e. Third problem: study of a neutron collimator. A neutron detector is placed at the bottom of a cylinder surrounded with water. A neutron source is placed on the cylinder axis, in front of the water shield. The numbers of neutron tracks going directly and indirectly through the water from the source to the detector are counted. (author) [fr]
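The second problem is easy to reproduce in miniature: a random walk with exponential free paths and isotropic scattering through a slab, estimating the mean path e' and the correction (e' - e)/e. A sketch under the report's stated hypotheses; units and sample sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(7)

def mean_path(thickness, mfp, n_neutrons):
    """Monte Carlo estimate of the mean path e' travelled inside a slab by
    neutrons entering normal to its faces, under the report's hypotheses:
    isotropic elastic scattering only, no absorption, no energy loss."""
    total = 0.0
    for _ in range(n_neutrons):
        z, mu, path = 0.0, 1.0, 0.0        # depth, direction cosine, path length
        while True:
            step = rng.exponential(mfp)
            z_new = z + mu * step
            if z_new < 0.0:                # escapes through the entry face
                path += z / -mu            # mu < 0 here
                break
            if z_new > thickness:          # escapes through the far face
                path += (thickness - z) / mu
                break
            path += step
            z = z_new
            mu = rng.uniform(-1.0, 1.0)    # isotropic: direction cosine uniform
        total += path
    return total / n_neutrons

e = 1.0                                    # glass thickness (arbitrary units)
e_prime = mean_path(e, 0.5, 20_000)        # mean free path half the thickness
correction = (e_prime - e) / e             # relative efficiency correction
```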

  4. Rapid improvements in emotion regulation predict intensive treatment outcome for patients with bulimia nervosa and purging disorder.

    MacDonald, Danielle E; Trottier, Kathryn; Olmsted, Marion P

    2017-10-01

    Rapid and substantial behavior change (RSBC) early in cognitive behavior therapy (CBT) for eating disorders is the strongest known predictor of treatment outcome. Rapid change in other clinically relevant variables may also be important. This study examined whether rapid change in emotion regulation predicted treatment outcomes, beyond the effects of RSBC. Participants were diagnosed with bulimia nervosa or purging disorder (N = 104) and completed ≥6 weeks of CBT-based intensive treatment. Hierarchical regression models were used to test whether rapid change in emotion regulation variables predicted posttreatment outcomes, defined in three ways: (a) binge/purge abstinence; (b) cognitive eating disorder psychopathology; and (c) depression symptoms. Baseline psychopathology and emotion regulation difficulties and RSBC were controlled for. After controlling for baseline variables and RSBC, rapid improvement in access to emotion regulation strategies made significant unique contributions to the prediction of posttreatment binge/purge abstinence, cognitive psychopathology of eating disorders, and depression symptoms. Individuals with eating disorders who rapidly improve their belief that they can effectively modulate negative emotions are more likely to achieve a variety of good treatment outcomes. This supports the formal inclusion of emotion regulation skills early in CBT, and encouraging patient beliefs that these strategies are helpful. © 2017 Wiley Periodicals, Inc.

  5. Modeling and simulation of ammonia removal from purge gases of ammonia plants using a catalytic Pd-Ag membrane reactor

    Rahimpour, M.R.; Asgari, A.

    2008-01-01

In this work, the removal of ammonia from the synthesis purge gas of an ammonia plant has been investigated. Since ammonia decomposition is thermodynamically limited, a membrane reactor is used for complete decomposition. A double-pipe catalytic membrane reactor is used to remove ammonia from the purge gas. The purge gas flows in the reaction side and is converted to hydrogen and nitrogen over a nickel-alumina catalyst. The hydrogen is transferred through the Pd-Ag membrane of the tube side to the shell side. A mathematical model including conservation of mass in the tube and shell sides of the reactor is proposed. The proposed model was solved numerically and the effects of different parameters on the reactor performance were investigated: pressure, temperature, flow rate (sweep ratio), membrane thickness and reactor diameter. Increasing ammonia conversion was observed on raising the temperature and sweep ratio and on reducing the membrane thickness. When the pressure increases, the decomposition goes toward completion; at low pressure the ammonia conversion at the outset of the reactor is higher than at other pressures, but complete destruction of the ammonia cannot be achieved. The proposed model can be used for the design of an industrial catalytic membrane reactor for the removal of ammonia from ammonia plants and the reduction of NOx emissions

  6. Development of an asymmetric multiple-position neutron source (AMPNS) method to monitor the criticality of a degraded reactor core

    Kim, S.S.; Levine, S.H.

    1985-01-01

An analytical/experimental method has been developed to monitor the subcritical reactivity and unfold the k∞ distribution of a degraded reactor core. The method uses several fixed neutron detectors and a Cf-252 neutron source placed sequentially in multiple positions in the core; it is therefore called the Asymmetric Multiple-Position Neutron Source (AMPNS) method. The AMPNS method employs nucleonic codes to analyze the neutron multiplication of the Cf-252 neutron source. An optimization program, GPM, is utilized to unfold the k∞ distribution of the degraded core, in which the desired performance measure minimizes the error between the calculated and the measured count rates of the degraded reactor core. The analytical/experimental approach is validated by performing experiments using the Penn State Breazeale TRIGA Reactor (PSBR). A significant result of this study is that it provides a method to monitor the criticality of a damaged core during the recovery period
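The unfolding step can be caricatured as a linear inverse problem: given a response matrix relating regional k-infinity values to detector count rates, a least-squares fit recovers the distribution. The matrix and values below are invented for illustration; the actual method computes responses with nucleonic codes and unfolds with the GPM optimizer:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical response matrix: entry (d, s) is the count rate detector d
# registers per unit k-infinity of core region s, with the multiple
# Cf-252 source positions folded in.
n_detectors, n_regions = 12, 4
response = rng.uniform(0.5, 2.0, size=(n_detectors, n_regions))

k_inf_true = np.array([1.02, 0.85, 0.93, 0.78])     # degraded-core regions
measured = response @ k_inf_true
measured *= 1 + rng.normal(scale=0.002, size=n_detectors)   # counting noise

# Unfold by minimising the error between computed and measured count rates.
k_inf_est, *_ = np.linalg.lstsq(response, measured, rcond=None)
```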

  7. Neuroethologic differences in sleep deprivation induced by the single- and multiple-platform methods

    R. Medeiros

    1998-05-01

Full Text Available It has been proposed that the multiple-platform method (MP) for desynchronized sleep (DS) deprivation eliminates the stress induced by social isolation and by the restriction of locomotion in the single-platform (SP) method. MP, however, induces a higher increase in plasma corticosterone and ACTH levels than SP. Since deprivation is of heuristic value to identify the functional role of this state of sleep, the objective of the present study was to determine the behavioral differences exhibited by rats during sleep deprivation induced by these two methods. All behavioral patterns exhibited by a group of 7 albino male Wistar rats submitted to 4 days of sleep deprivation by the MP method (15 platforms, spaced 150 mm apart) and by 7 other rats submitted to sleep deprivation by the SP method were recorded in order to elaborate an ethogram. The behavioral patterns were quantitated in 10 replications by naive observers using other groups of 7 rats each submitted to the same deprivation schedule. Each quantification session lasted 35 min and the behavioral patterns presented by each rat over a period of 5 min were counted. The results obtained were: (a) rats submitted to the MP method changed platforms at a mean rate of 2.62 ± 1.17 platforms h-1 animal-1; (b) the number of episodes of noninteractive waking patterns for the MP animals was significantly higher than that for SP animals (1077 vs 768); (c) additional episodes of waking patterns (26.9 ± 18.9 episodes/session) were promoted by social interaction in MP animals; (d) the cumulative number of sleep episodes observed in the MP test (311) was significantly lower (chi-square test, 1 d.f., P<0.05) than that observed in the SP test (534); (e) rats submitted to the MP test did not show the well-known increase in ambulatory activity observed after the end of the SP test; (f) comparison of 6 MP and 6 SP rats showed a significantly shorter latency to the onset of DS in MP rats (7.8 ± 4.3 and 29.0 ± 25.0 min, respectively).

  8. Linking landscape characteristics to local grizzly bear abundance using multiple detection methods in a hierarchical model

    Graves, T.A.; Kendall, Katherine C.; Royle, J. Andrew; Stetz, J.B.; Macleod, A.C.

    2011-01-01

Few studies link habitat to grizzly bear (Ursus arctos) abundance, and those that do have not accounted for variation in detection or spatial autocorrelation. We collected and genotyped bear hair in and around Glacier National Park in northwestern Montana during the summer of 2000. We developed a hierarchical Markov chain Monte Carlo model that extends the existing occupancy and count models by accounting for (1) spatially explicit variables that we hypothesized might influence abundance; (2) separate sub-models of detection probability for two distinct sampling methods (hair traps and rub trees) targeting different segments of the population; (3) covariates to explain variation in each sub-model of detection; (4) a conditional autoregressive term to account for spatial autocorrelation; (5) weights to identify the most important variables. Road density and per cent mesic habitat best explained variation in female grizzly bear abundance; spatial autocorrelation was not supported. More female bears were predicted in places with lower road density and with more mesic habitat. Detection rates of females increased with rub tree sampling effort. Road density best explained variation in male grizzly bear abundance and spatial autocorrelation was supported. More male bears were predicted in areas of low road density. Detection rates of males increased with rub tree and hair trap sampling effort and decreased over the sampling period. We provide a new method to (1) incorporate multiple detection methods into hierarchical models of abundance; (2) determine whether spatial autocorrelation should be included in final models. Our results suggest that the influence of landscape variables is consistent between habitat selection and abundance in this system.

  9. A modular method to handle multiple time-dependent quantities in Monte Carlo simulations

    Shin, J; Faddegon, B A; Perl, J; Schümann, J; Paganetti, H

    2012-01-01

    A general method for handling time-dependent quantities in Monte Carlo simulations was developed to make such simulations more accessible to the medical community for a wide range of applications in radiotherapy, including fluence and dose calculation. To describe time-dependent changes in the most general way, we developed a grammar of functions that we call ‘Time Features’. When a simulation quantity, such as the position of a geometrical object, an angle, a magnetic field, a current, etc, takes its value from a Time Feature, that quantity varies over time. The operation of time-dependent simulation was separated into distinct parts: the Sequence samples time values either sequentially at equal increments or randomly from a uniform distribution (allowing quantities to vary continuously in time), and then each time-dependent quantity is calculated according to its Time Feature. Due to this modular structure, time-dependent simulations, even in the presence of multiple time-dependent quantities, can be efficiently performed in a single simulation with any given time resolution. This approach has been implemented in TOPAS (TOol for PArticle Simulation), designed to make Monte Carlo simulations with Geant4 more accessible to both clinical and research physicists. To demonstrate the method, three clinical situations were simulated: a variable water column used to verify constancy of the Bragg peak of the Crocker Lab eye treatment facility of the University of California, the double-scattering treatment mode of the passive beam scattering system at Massachusetts General Hospital (MGH), where a spinning range modulator wheel accompanied by beam current modulation produces a spread-out Bragg peak, and the scanning mode at MGH, where time-dependent pulse shape, energy distribution and magnetic fields control Bragg peak positions. Results confirm the clinical applicability of the method. (paper)
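The Time Feature idea separates cleanly into a time sampler (the Sequence) and per-quantity evaluators. A minimal sketch of that structure — the class names and features are assumptions for illustration, not TOPAS's actual API:

```python
import random

class TimeFeature:
    """A value that varies with simulation time."""
    def value(self, t):
        raise NotImplementedError

class Linear(TimeFeature):
    """E.g. a linearly ramped beam current."""
    def __init__(self, rate, offset=0.0):
        self.rate, self.offset = rate, offset
    def value(self, t):
        return self.offset + self.rate * t

class Periodic(TimeFeature):
    """E.g. the angle of a spinning range-modulator wheel, in degrees."""
    def __init__(self, period):
        self.period = period
    def value(self, t):
        return 360.0 * (t % self.period) / self.period

class Sequence:
    """Samples time values sequentially at equal increments, or randomly
    from a uniform distribution over [0, t_end)."""
    def __init__(self, t_end, n_steps, mode="sequential", seed=0):
        self.t_end, self.n, self.mode = t_end, n_steps, mode
        self.rng = random.Random(seed)
    def times(self):
        for i in range(self.n):
            if self.mode == "sequential":
                yield i * self.t_end / self.n
            else:
                yield self.rng.uniform(0.0, self.t_end)

# Each time-dependent quantity takes its value from its Time Feature, so one
# pass over the Sequence drives all of them together in a single simulation.
wheel_angle = Periodic(period=0.1)            # one revolution per 0.1 s
beam_current = Linear(rate=-2.0, offset=1.0)
history = [(t, wheel_angle.value(t), beam_current.value(t))
           for t in Sequence(t_end=0.2, n_steps=4).times()]
```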

  10. A Fast Multiple-Kernel Method With Applications to Detect Gene-Environment Interaction.

    Marceau, Rachel; Lu, Wenbin; Holloway, Shannon; Sale, Michèle M; Worrall, Bradford B; Williams, Stephen R; Hsu, Fang-Chi; Tzeng, Jung-Ying

    2015-09-01

    Kernel machine (KM) models are a powerful tool for exploring associations between sets of genetic variants and complex traits. Although most KM methods use a single kernel function to assess the marginal effect of a variable set, KM analyses involving multiple kernels have become increasingly popular. Multikernel analysis allows researchers to study more complex problems, such as assessing gene-gene or gene-environment interactions, incorporating variance-component based methods for population substructure into rare-variant association testing, and assessing the conditional effects of a variable set adjusting for other variable sets. The KM framework is robust, powerful, and provides efficient dimension reduction for multifactor analyses, but requires the estimation of high dimensional nuisance parameters. Traditional estimation techniques, including regularization and the "expectation-maximization (EM)" algorithm, have a large computational cost and are not scalable to large sample sizes needed for rare variant analysis. Therefore, under the context of gene-environment interaction, we propose a computationally efficient and statistically rigorous "fastKM" algorithm for multikernel analysis that is based on a low-rank approximation to the nuisance effect kernel matrices. Our algorithm is applicable to various trait types (e.g., continuous, binary, and survival traits) and can be implemented using any existing single-kernel analysis software. Through extensive simulation studies, we show that our algorithm has similar performance to an EM-based KM approach for quantitative traits while running much faster. We also apply our method to the Vitamin Intervention for Stroke Prevention (VISP) clinical trial, examining gene-by-vitamin effects on recurrent stroke risk and gene-by-age effects on change in homocysteine level. © 2015 WILEY PERIODICALS, INC.

  11. Estimating HIV incidence among adults in Kenya and Uganda: a systematic comparison of multiple methods.

    Andrea A Kim

    2011-03-01

Full Text Available Several approaches have been used for measuring HIV incidence in large areas, yet each presents specific challenges in incidence estimation. We present a comparison of incidence estimates for Kenya and Uganda using multiple methods: (1) Epidemic Projections Package (EPP) and Spectrum models fitted to HIV prevalence from antenatal clinics (ANC) and national population-based surveys (NPS) in Kenya (2003, 2007) and Uganda (2004/2005); (2) a survey-derived model to infer age-specific incidence between two sequential NPS; (3) an assay-derived measurement in NPS using the BED IgG capture enzyme immunoassay, adjusted for misclassification using a locally derived false-recent rate (FRR) for the assay; (4) community cohorts in Uganda; (5) prevalence trends in young ANC attendees. EPP/Spectrum-derived and survey-derived modeled estimates were similar: 0.67 [uncertainty range: 0.60, 0.74] and 0.6 [confidence interval (CI): 0.4, 0.9], respectively, for Uganda (2005), and 0.72 [uncertainty range: 0.70, 0.74] and 0.7 [CI: 0.3, 1.1], respectively, for Kenya (2007). Using a local FRR, assay-derived incidence estimates were 0.3 [CI: 0.0, 0.9] for Uganda (2004/2005) and 0.6 [CI: 0, 1.3] for Kenya (2007). Incidence trends were similar for all methods for both Uganda and Kenya. Triangulation of methods is recommended to determine best-supported estimates of incidence to guide programs. Assay-derived incidence estimates are sensitive to the level of the assay's FRR, and uncertainty around high FRRs can significantly impact the validity of the estimate. Systematic evaluations of new and existing incidence assays are needed to study the level, distribution, and determinants of the FRR and to guide whether incidence assays can produce reliable estimates of national HIV incidence.

  12. Investigation of the purging effect on a dead-end anode PEM fuel cell-powered vehicle during segments of a European driving cycle

    Gomez, Alberto; Sasmito, Agus P.; Shamim, Tariq

    2015-01-01

Highlights: • Experimental study of a dead-end anode PEM fuel cell stack during a driving cycle. • Low purging duration is preferred at high current. • High purging frequency can sustain a better performance over time. • Lower cathode stoichiometry is preferred to minimize the parasitic loads. - Abstract: The dynamic performance of the PEM fuel cell is one of the key factors for successful operation of a fuel cell-powered vehicle. Maintaining a fast time response while keeping stable and high stack performance is important, especially during acceleration and deceleration. In this paper, we evaluate the transient response of a PEM fuel cell stack with a dead-end anode during segments of a legislated European driving cycle, together with the effect of purging factors. The PEM fuel cell stack comprises 24 cells with a 300 cm² active catalyst area and operates at low hydrogen and air pressures. Humidified air is supplied to the cathode side and dry hydrogen is fed to the anode. A liquid coolant is circulated through the stack and the radiator to maintain the thermal envelope throughout the stack. Stack performance deterioration over time is prevented by purging, which removes the accumulated water and impurities. The effects of purging period, purging duration, coolant flow rate and cathode stoichiometry are examined with regard to the fuel cell's transient performance during the driving cycle. The results show that a low purging duration may avoid undesired deceleration at a high current, and a high purging period may sustain a better performance over time. Moreover, the coolant flow rate is found to be an important parameter, which affects the stack temperature–time response of the cooling control and the stack performance, especially at high operating currents.

  13. 3-D thermal weight function method and multiple virtual crack extension technique for thermal shock problems

    Lu Yanlin; Zhou Xiao; Qu Jiadi; Dou Yikang; He Yinbiao

    2005-01-01

An efficient scheme, the 3-D thermal weight function (TWF) method, and a novel numerical technique, the multiple virtual crack extension (MVCE) technique, were developed for determination of histories of transient stress intensity factor (SIF) distributions along 3-D crack fronts of a body subjected to thermal shock. The TWF is a universal function, which is dependent only on the crack configuration and body geometry. TWF is independent of time during thermal shock, so the whole history of transient SIF distributions along crack fronts can be directly calculated through integration of the products of TWF and transient temperatures and temperature gradients. The repeated determinations of the distributions of stress (or displacement) fields for individual time instants are thus avoided in the TWF method. An expression of the basic equation for the 3-D universal weight function method for Mode I in an isotropic elastic body is derived. This equation can also be derived from Bueckner-Rice's 3-D WF formulations in the framework of transformation strain. It can be understood from this equation that the so-called thermal WF is in fact coincident with the mechanical WF except for some constants of elasticity. The details and formulations of the MVCE technique are given for elliptical cracks. The MVCE technique possesses several advantages. The specially selected linearly independent VCE modes can directly be used as shape functions for the interpolation of unknown SIFs. As a result, the coefficient matrix of the final system of equations in the MVCE method is a triple-diagonal matrix and the values of the coefficients on the main diagonal are large. The system of equations has good numerical properties. The number of linearly independent VCE modes that can be introduced in a problem is unlimited. Complex situations in which the SIFs vary dramatically along crack fronts can be numerically well simulated by the MVCE technique. An integrated system of programs for solving the
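The decoupling that makes the weight function method efficient — geometry fixed in the weight function, time entering only through the stress profile — can be shown in one dimension with the classical centre-crack weight function; for uniform stress the quadrature must return K = σ√(πa). The weight function below is a textbook one, not taken from this paper:

```python
import numpy as np

def sif_weight_function(stress, a, n_quad=2000):
    """Mode-I SIF K = integral of h(x)*sigma(x) over [0, a] for a centre
    crack of half-length a, with the classical weight function
    h(x) = 2 / (sqrt(pi*a) * sqrt(1 - (x/a)**2)).  The substitution
    x = a*sin(theta) removes the endpoint singularity:
    h(x) dx -> (2a / sqrt(pi*a)) dtheta."""
    dtheta = (np.pi / 2) / n_quad
    theta = (np.arange(n_quad) + 0.5) * dtheta        # midpoint rule
    x = a * np.sin(theta)
    return np.sum(stress(x)) * 2 * a / np.sqrt(np.pi * a) * dtheta

a = 0.01                                   # crack half-length, m
sigma0 = 100e6                             # 100 MPa reference stress
K_uniform = sif_weight_function(lambda x: sigma0 + 0 * x, a)
# A thermal-shock-like profile only changes the integrand; re-integrating
# the same weight function against each time's stress gives the K history.
K_gradient = sif_weight_function(lambda x: sigma0 * (1 - x / a), a)
```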

  14. Improving multiple-point-based a priori models for inverse problems by combining Sequential Simulation with the Frequency Matching Method

    Cordua, Knud Skou; Hansen, Thomas Mejer; Lange, Katrine

In order to move beyond the simplified covariance-based a priori models that are typically used for inverse problems, more complex multiple-point-based a priori models have to be considered. By means of marginal probability distributions ‘learned’ from a training image, sequential simulation has proven to be an efficient way of obtaining multiple realizations that honor the same multiple-point statistics as the training image. The frequency matching method provides an alternative way of formulating multiple-point-based a priori models. In this strategy the pattern frequency distributions (i.e. marginals) of the training image and a subsurface model are matched in order to obtain a solution with the same multiple-point statistics as the training image. Sequential Gibbs sampling is a simulation strategy that provides an efficient way of applying sequential simulation based algorithms as a priori models.

  15. The Moderating Role of Purging Behaviour in the Relationship Between Sexual/Physical Abuse and Nonsuicidal Self-Injury in Eating Disorder Patients.

    Gonçalves, Sónia; Machado, Bárbara; Silva, Cátia; Crosby, Ross D; Lavender, Jason M; Cao, Li; Machado, Paulo P P

    2016-03-01

    This study sought to examine predictors of nonsuicidal self-injury (NSSI) in eating disorder patients and to evaluate the moderating role of purging behaviours in the relationship between a theorised predictor (i.e. sexual/physical abuse) and NSSI. Participants in this study were 177 female patients with eating disorders (age range = 14-38 years) who completed semistructured interviews assessing eating disorder symptoms and eating disorder-related risk factors (e.g. history of sexual and physical abuse, history of NSSI and feelings of fatness). Results revealed that 65 participants (36.7%) reported lifetime engagement in NSSI, and 48 participants (27.1%) reported a history of sexual/physical abuse. Early onset of eating problems, lower BMI, feeling fat, a history of sexual/physical abuse and the presence of purging behaviours were all positively associated with the lifetime occurrence of NSSI. The relationship between sexual/physical abuse before eating disorder onset and lifetime NSSI was moderated by the presence of purging behaviours, such that the relationship was stronger in the absence of purging. These findings are consistent with the notion that purging and NSSI may serve similar functions in eating disorder patients (e.g. emotion regulation), such that the presence of purging may attenuate the strength of the association between sexual/physical abuse history (which is also associated with elevated NSSI risk) and engagement in NSSI behaviours. Copyright © 2015 John Wiley & Sons, Ltd and Eating Disorders Association.

  16. Statistical Methods for Magnetic Resonance Image Analysis with Applications to Multiple Sclerosis

    Pomann, Gina-Maria

Multiple sclerosis (MS) is an immune-mediated neurological disease that causes disability and morbidity. In patients with MS, the accumulation of lesions in the white matter of the brain is associated with disease progression and worse clinical outcomes. In the first part of the dissertation, we present methodology to compare brain anatomy between patients with MS and controls. A nonparametric testing procedure is proposed for testing the null hypothesis that two samples of curves observed at discrete grids and with noise have the same underlying distribution. We propose to decompose the curves using functional principal component analysis of an appropriate mixture process, which we refer to as marginal functional principal component analysis. This approach reduces the dimension of the testing problem in a way that enables the use of traditional nonparametric univariate testing procedures. The procedure is computationally efficient and accommodates different sampling designs. Numerical studies are presented to validate the size and power properties of the test in many realistic scenarios. In these cases, the proposed test is more powerful than its primary competitor. The proposed methodology is illustrated on a state-of-the-art diffusion tensor imaging study, where the objective is to compare white matter tract profiles in healthy individuals and MS patients. In the second part of the thesis, we present methods to study the behavior of MS in the white matter of the brain. Breakdown of the blood-brain barrier in newer lesions is indicative of more active disease-related processes and is a primary outcome considered in clinical trials of treatments for MS. Such abnormalities in active MS lesions are evaluated in vivo using contrast-enhanced structural magnetic resonance imaging (MRI), during which patients receive an intravenous infusion of a costly magnetic contrast agent. In some instances, the contrast agents can have toxic effects. Recently, local
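A drastically simplified version of such a two-sample test — project the pooled, discretely observed curves onto a leading principal component and permutation-test the score means — can be sketched as follows (the published marginal-FPCA procedure is substantially richer, handling multiple components and general sampling designs):

```python
import numpy as np

rng = np.random.default_rng(11)

def score_permutation_test(curves_a, curves_b, n_perm=2000):
    """Pool the curves, project onto the first principal component of the
    pooled sample, then permutation-test the difference of mean scores."""
    pooled = np.vstack([curves_a, curves_b])
    centred = pooled - pooled.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    scores = centred @ vt[0]                     # first-PC scores
    na = len(curves_a)
    observed = abs(scores[:na].mean() - scores[na:].mean())
    count = 0
    for _ in range(n_perm):
        s = scores[rng.permutation(len(scores))]
        count += abs(s[:na].mean() - s[na:].mean()) >= observed
    return (count + 1) / (n_perm + 1)            # permutation p-value

# Toy "tract profiles": group B has a larger amplitude than group A.
grid = np.linspace(0.0, 1.0, 50)
profile = np.sin(2 * np.pi * grid)
A = profile + 0.2 * rng.normal(size=(30, 50))
B = 1.5 * profile + 0.2 * rng.normal(size=(30, 50))
p_value = score_permutation_test(A, B)
```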

  17. Decision Making in Manufacturing Environment Using Graph Theory and Fuzzy Multiple Attribute Decision Making Methods Volume 2

    Rao, R Venkata

    2013-01-01

    Decision Making in Manufacturing Environment Using Graph Theory and Fuzzy Multiple Attribute Decision Making Methods presents the concepts and details of applications of MADM methods. A range of methods are covered, including Analytic Hierarchy Process (AHP), Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), VIšekriterijumsko KOmpromisno Rangiranje (VIKOR), Data Envelopment Analysis (DEA), Preference Ranking Organization METHod for Enrichment Evaluations (PROMETHEE), ELimination Et Choix Traduisant la REalité (ELECTRE), COmplex PRoportional ASsessment (COPRAS), Grey Relational Analysis (GRA), UTility Additive (UTA), and Ordered Weighted Averaging (OWA). The existing MADM methods are improved upon, and three novel multiple attribute decision making methods for solving the decision making problems of the manufacturing environment are proposed. The concept of integrated weights is introduced in the proposed subjective and objective integrated weights (SOIW) method and the weighted Euclidean distance ba...

  18. A method for multiple sequential analyses of macrophage functions using a small single cell sample

    F.R.F. Nascimento

    2003-09-01

    Microbial pathogens such as bacillus Calmette-Guérin (BCG) induce the activation of macrophages. Activated macrophages can be characterized by the increased production of reactive oxygen and nitrogen metabolites, generated via NADPH oxidase and inducible nitric oxide synthase, respectively, and by the increased expression of major histocompatibility complex class II molecules (MHC II). Multiple microassays have been developed to measure these parameters. Usually each assay requires 2-5 × 10⁵ cells per well. In some experimental conditions the number of cells is the limiting factor for the phenotypic characterization of macrophages. Here we describe a method whereby this limitation can be circumvented. Using a single 96-well microassay and a very small number of peritoneal cells obtained from C3H/HePas mice, containing as little as ≤2 × 10⁵ macrophages per well, we determined sequentially the oxidative burst (H2O2 release), nitric oxide production, and MHC II (IAk) expression of BCG-activated macrophages. More specifically, with 100 µl of cell suspension it was possible to quantify H2O2 release and nitric oxide production after 1 and 48 h, respectively, and IAk expression after 48 h of cell culture. In addition, this microassay is easy to perform, highly reproducible and more economical.

  19. Merging daily sea surface temperature data from multiple satellites using a Bayesian maximum entropy method

    Tang, Shaolei; Yang, Xiaofeng; Dong, Di; Li, Ziwei

    2015-12-01

    Sea surface temperature (SST) is an important variable for understanding interactions between the ocean and the atmosphere. SST fusion is crucial for acquiring SST products of high spatial resolution and coverage. This study introduces a Bayesian maximum entropy (BME) method for blending daily SSTs from multiple satellite sensors. A new spatiotemporal covariance model of an SST field is built to integrate not only single-day SSTs but also time-adjacent SSTs. In addition, AVHRR 30-year SST climatology data are introduced as soft data at the estimation points to improve the accuracy of blended results within the BME framework. The merged SSTs, with a spatial resolution of 4 km and a temporal resolution of 24 hours, are produced in the Western Pacific Ocean region to demonstrate and evaluate the proposed methodology. Comparisons with in situ drifting buoy observations show that the merged SSTs are accurate and the bias and root-mean-square errors for the comparison are 0.15°C and 0.72°C, respectively.

  20. Detecting Renibacterium salmoninarum in wild brown trout by use of multiple organ samples and diagnostic methods

    Guomundsdottir, S.; Applegate, Lynn M.; Arnason, I.O.; Kristmundsson, A.; Purcell, Maureen K.; Elliott, Diane G.

    2017-01-01

    Renibacterium salmoninarum, the causative agent of salmonid bacterial kidney disease (BKD), is endemic in many wild trout species in northerly regions. The aim of the present study was to determine the optimal R. salmoninarum sampling/testing strategy for wild brown trout (Salmo trutta L.) populations in Iceland. Fish were netted in a lake, and multiple organs (kidney, spleen, gills, oesophagus and mid-gut) were sampled and subjected to five detection tests: culture, polyclonal enzyme-linked immunosorbent assay (pELISA) and three different PCR tests. The results showed that each fish had encountered R. salmoninarum, but there were marked differences between the results obtained depending on organ and test. The bacterium was not cultured from any kidney sample, while all kidney samples were positive by pELISA. At least one organ from 92.9% of the fish tested positive by PCR. The results demonstrated that the choice of tissue and diagnostic method can dramatically influence the outcome of R. salmoninarum surveys.

  1. EPMLR: sequence-based linear B-cell epitope prediction method using multiple linear regression.

    Lian, Yao; Ge, Meng; Pan, Xian-Ming

    2014-12-19

    B-cell epitopes have been studied extensively due to their immunological applications, such as peptide-based vaccine development, antibody production, and disease diagnosis and therapy. Despite several decades of research, the accurate prediction of linear B-cell epitopes has remained a challenging task. In this work, based on the antigen's primary sequence information, a novel linear B-cell epitope prediction model was developed using multiple linear regression (MLR). A 10-fold cross-validation test on a large non-redundant dataset was performed to evaluate the performance of our model. To alleviate the problem caused by the noise of the negative dataset, 300 experiments utilizing 300 sub-datasets were performed. We achieved an overall sensitivity of 81.8%, a precision of 64.1% and an area under the receiver operating characteristic curve (AUC) of 0.728. We have presented a reliable method for the identification of linear B-cell epitopes using the antigen's primary sequence information. Moreover, a web server, EPMLR, has been developed for linear B-cell epitope prediction: http://www.bioinfo.tsinghua.edu.cn/epitope/EPMLR/.
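The core of an MLR-based linear epitope predictor can be sketched as follows: encode each residue by the propensities of a sliding window around it, fit a linear model by ordinary least squares, and threshold the fitted score. The propensity values, window size, sequence, labels, and threshold below are illustrative stand-ins, not EPMLR's actual feature scheme.

```python
import numpy as np

# toy per-residue propensity scale (illustrative values, NOT a published scale)
scale = dict(zip("ACDEFGHIKLMNPQRSTVWY", np.linspace(-1.0, 1.0, 20)))

def windows(seq, w=7):
    # one feature vector per residue: propensities of its w-residue window
    pad = "A" * (w // 2)
    s = pad + seq + pad
    return np.array([[scale[c] for c in s[i:i + w]] for i in range(len(seq))])

# hypothetical training data: y = 1 for residues inside a known epitope
seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
y = np.zeros(len(seq))
y[10:17] = 1.0

X = windows(seq)
A = np.hstack([X, np.ones((len(X), 1))])        # design matrix with intercept
beta, *_ = np.linalg.lstsq(A, y, rcond=None)    # ordinary least squares fit

scores = A @ beta          # per-residue epitope propensity score
pred = scores > 0.5        # threshold is an arbitrary illustrative choice
```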

  2. A novel multiple locus variable number of tandem repeat (VNTR) analysis (MLVA) method for Propionibacterium acnes.

    Hauck, Yolande; Soler, Charles; Gérôme, Patrick; Vong, Rithy; Macnab, Christine; Appere, Géraldine; Vergnaud, Gilles; Pourcel, Christine

    2015-07-01

    Propionibacterium acnes plays a central role in the pathogenesis of acne and is responsible for severe opportunistic infections. Numerous typing schemes have been developed that allow the identification of phylotypes, but they are often insufficient to differentiate subtypes. To better understand the genetic diversity of this species and to perform epidemiological analyses, high-throughput discriminant genotyping techniques are needed. Here we describe the development of a multiple locus variable number of tandem repeats (VNTR) analysis (MLVA) method. Thirteen VNTRs were identified in the genome of P. acnes and were used to genotype a collection of clinical isolates. In addition, publicly available sequencing data for 102 genomes were analyzed in silico, providing an MLVA genotype. The clustering of MLVA data was in perfect congruence with whole-genome-based clustering. Analysis of the clustered regularly interspaced short palindromic repeat (CRISPR) element uncovered new spacers, a supplementary source of genotypic information. The present MLVA13 scheme and associated internet database represent a first-line genotyping assay for investigating large numbers of isolates. Particular strains may then be submitted to full genome sequencing in order to better analyze their pathogenic potential. Copyright © 2015 Elsevier B.V. All rights reserved.
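In an MLVA scheme, each VNTR locus contributes one number, the repeat copy number, and the vector of copy numbers across loci forms the genotype profile. A toy sketch of that counting step (the motif and sequence are invented, not taken from the P. acnes scheme):

```python
import re

# toy VNTR "genotyping": copy number of a tandem-repeat motif at one locus,
# which would supply one entry of an MLVA profile
def copy_number(sequence, motif):
    # longest run of consecutive copies of the motif within the sequence
    runs = re.findall(f"(?:{motif})+", sequence)
    return max((len(r) // len(motif) for r in runs), default=0)

locus = "TTGACG" + "CATCAT" * 4 + "GGA"    # 4 tandem copies of CATCAT
profile = (copy_number(locus, "CATCAT"),)  # one locus of an MLVA profile
```

In practice copy numbers are derived from PCR amplicon sizes rather than assembled sequence, but the resulting profile is the same kind of integer vector.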

  3. New Method of Calculating a Multiplication by using the Generalized Bernstein-Vazirani Algorithm

    Nagata, Koji; Nakamura, Tadao; Geurdes, Han; Batle, Josep; Abdalla, Soliman; Farouk, Ahmed

    2018-06-01

    We present a new method of more speedily calculating a multiplication by using the generalized Bernstein-Vazirani algorithm and many parallel quantum systems. Given the set of real values a1, a2, a3, ..., aN and a function g: R → {0,1}, we shall determine the values g(a1), g(a2), g(a3), ..., g(aN) simultaneously. The speed of determining the values is shown to outperform the classical case by a factor of N. Next, we consider the result as a number in binary representation: M1 = (g(a1), g(a2), g(a3), ..., g(aN)). By using M parallel quantum systems, we obtain M such numbers in binary representation simultaneously. The speed of obtaining the M numbers is shown to outperform the classical case by a factor of M. Finally, we calculate the product M1 × M2 × ... × MM. The speed of obtaining the product is shown to outperform the classical case by a factor of N × M.

  4. Base Isolation for Seismic Retrofitting of a Multiple Building Structure: Evaluation of Equivalent Linearization Method

    Massimiliano Ferraioli

    2016-01-01

    Although the most commonly used isolation systems exhibit nonlinear inelastic behaviour, equivalent linear elastic analysis is commonly used in the design and assessment of seismically isolated structures. The paper investigates whether the linear elastic model is suitable for the analysis of a seismically isolated multiple building structure. To this end, its computed responses were compared with those calculated by nonlinear dynamic analysis. A common base isolation plane connects the isolation bearings supporting the adjacent structures. In this situation, the conventional equivalent linear elastic analysis may have some problems of accuracy, because this method is calibrated on single base-isolated structures. Moreover, the torsional characteristics of the combined system are significantly different from those of the separate isolated buildings. A number of numerical simulations and parametric studies under earthquake excitations were performed. The accuracy of the dynamic response obtained by the equivalent linear elastic model was quantified by the magnitude of the error with respect to the corresponding response considering the nonlinear behaviour of the isolation system. The maximum displacements at the isolation level, the maximum interstorey drifts, and the peak absolute acceleration were selected as the most important response measures. The influence of mass eccentricity, torsion, and higher-mode effects was finally investigated.
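Equivalent linearization of the kind discussed here replaces a nonlinear isolator with a secant stiffness and an equivalent viscous damping ratio. A textbook sketch of those relations for a bilinear isolator follows; every numerical parameter is hypothetical, not taken from the paper.

```python
import math

# equivalent linearization of a bilinear isolator (textbook secant-stiffness
# and equivalent-damping relations; all parameter values are hypothetical)
q = 80e3     # characteristic strength, N
k2 = 1.5e6   # post-yield stiffness, N/m
dy = 0.01    # yield displacement, m
d = 0.15     # design displacement, m

k_eff = q / d + k2                # secant (effective) stiffness at displacement d
e_loop = 4.0 * q * (d - dy)       # energy dissipated per hysteresis cycle
xi_eq = e_loop / (2.0 * math.pi * k_eff * d ** 2)  # equivalent damping ratio
```

The linear elastic model the paper evaluates uses such (k_eff, xi_eq) pairs in place of the true hysteretic force-displacement law, which is why its accuracy must be checked against nonlinear dynamic analysis.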

  5. Idiopathic Pulmonary Fibrosis: The Association between the Adaptive Multiple Features Method and Fibrosis Outcomes.

    Salisbury, Margaret L; Lynch, David A; van Beek, Edwin J R; Kazerooni, Ella A; Guo, Junfeng; Xia, Meng; Murray, Susan; Anstrom, Kevin J; Yow, Eric; Martinez, Fernando J; Hoffman, Eric A; Flaherty, Kevin R

    2017-04-01

    Adaptive multiple features method (AMFM) lung texture analysis software recognizes high-resolution computed tomography (HRCT) patterns. The aim was to evaluate AMFM and visual quantification of HRCT patterns and their relationship with disease progression in idiopathic pulmonary fibrosis. Patients with idiopathic pulmonary fibrosis in a clinical trial of prednisone, azathioprine, and N-acetylcysteine underwent HRCT at study start and finish. The proportion of lung occupied by ground glass, ground glass-reticular (GGR), honeycombing, emphysema, and normal lung densities was measured by AMFM and three radiologists, documenting baseline disease extent and postbaseline change. Disease progression was a composite of mortality, hospitalization, and 10% FVC decline. Agreement between visual and AMFM measurements was moderate for GGR (Pearson's correlation r = 0.60). Baseline fibrosis (as measured by GGR densities) is independently associated with elevated hazard for disease progression. Postbaseline changes in AMFM-measured and visually measured GGR densities are modestly correlated with change in FVC. AMFM-measured fibrosis is an automated adjunct to existing prognostic markers and may allow for study enrichment with subjects at increased disease progression risk.

  6. Optimization of Selective Laser Melting by Evaluation Method of Multiple Quality Characteristics

    Khaimovich, A. I.; Stepanenko, I. S.; Smelov, V. G.

    2018-01-01

    This article describes the application of the Taguchi method to the selective laser melting of a combustion chamber sector, using numerical and physical experiments to achieve minimum thermal deformation. The aim was to produce a quality part with a minimum number of numerical experiments. For the study, the following optimization parameters (independent factors) were chosen: the laser beam power and velocity, and two factors compensating for the effect of residual thermal stresses: the scale factor of the preliminary correction of the part geometry and the number of additional reinforcing elements. We used an orthogonal plan of 9 experiments with factor variation at three levels (L9). As quality criteria, the distortions of 9 zones of the combustion chamber and the maximum strength of the chamber material were chosen. Since the quality parameters are multidirectional, grey relational analysis was used to solve the optimization problem for multiple quality parameters. As a result, according to the parameters obtained, the combustion chamber segments of the gas turbine engine were manufactured.
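Grey relational analysis, used here to collapse multiple quality criteria into a single grade per parameter setting, can be sketched as follows. The response values and criteria are hypothetical stand-ins for the distortion and strength measurements described above.

```python
import numpy as np

# responses of 4 hypothetical parameter settings on 3 quality criteria
# (two distortions: smaller-is-better; one strength: larger-is-better)
data = np.array([[0.12, 0.30, 410.0],
                 [0.08, 0.25, 395.0],
                 [0.15, 0.22, 430.0],
                 [0.10, 0.28, 420.0]])
larger_is_better = np.array([False, False, True])

# 1) normalize each criterion to [0, 1] so that 1 is always "best"
lo, hi = data.min(axis=0), data.max(axis=0)
norm = np.where(larger_is_better, (data - lo) / (hi - lo),
                                  (hi - data) / (hi - lo))

# 2) grey relational coefficients against the ideal sequence (all ones)
delta = np.abs(1.0 - norm)
zeta = 0.5  # distinguishing coefficient, conventionally 0.5
grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

# 3) grey relational grade = mean coefficient; the highest grade wins
grade = grc.mean(axis=1)
best = int(np.argmax(grade))
```

Criterion weights could replace the plain mean in step 3 when some quality characteristics matter more than others.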

  7. Potential Flammable Gas Explosion in the TRU Vent and Purge Machine

    Vincent, A

    2006-01-01

    The objective of the analysis was to determine the failure of the Vent and Purge (V and P) Machine due to a potential explosion in the Transuranic (TRU) drum during its venting and/or a subsequent explosion in the V and P machine from the flammable gases (e.g., hydrogen and Volatile Organic Compounds [VOCs]) vented into the V and P machine from the TRU drum. The analysis considers: (a) the increase in pressure in the V and P cabinet from the original deflagration in the TRU drum, including lid ejection, (b) the pressure wave impact from TRU drum failure, and (c) secondary burns or deflagrations resulting from excess, unburned gases in the cabinet area. A variety of cases were considered that maximized the pressure produced in the V and P cabinet. Also, cases were analyzed that maximized the shock wave pressure in the cabinet from TRU drum failure. The calculations were performed for various initial drum pressures (e.g., 1.5 and 6 psig) for a 55-gallon TRU drum. The calculated peak cabinet pressures ranged from 16 psig to 50 psig for various flammable gas compositions. The blast pressures on top of the cabinet and in the outlet duct ranged from 50 psig to 63 psig and 12 psig to 16 psig, respectively, for various flammable gas compositions. The failure pressures of the cabinet and the ducts calculated by structural analysis were higher than the pressures calculated from potential flammable gas deflagrations, thus ensuring that the V and P cabinet would not fail during this event. National Fire Protection Association (NFPA) 68 calculations showed that for a failure pressure of 20 psig, the available vent area in the V and P cabinet is 1.7 to 2.6 times the required vent area, depending on whether hydrogen or VOCs burn in the V and P cabinet. This analysis methodology could be used to design the process equipment needed for venting TRU waste containers at other sites across the Department of Energy (DOE) Complex

  8. Assessment of hydrogen fuel cell applications using fuzzy multiple-criteria decision making method

    Chang, Pao-Long; Hsu, Chiung-Wen; Lin, Chiu-Yue

    2012-01-01

    Highlights: ► This study uses the fuzzy MCDM method to assess hydrogen fuel cell applications. ► We evaluate seven different hydrogen fuel cell applications based on 14 criteria. ► Results show that fuel cell backup power systems should be chosen for development in Taiwan. -- Abstract: Assessment is an essential process in framing government policy. It is critical to select appropriate targets to meet the needs of national development. This study aimed to develop an assessment model for evaluating hydrogen fuel cell applications and thus provide a screening tool for decision makers. This model operates by selecting evaluation criteria, determining criteria weights, and assessing the performance of hydrogen fuel cell applications on each criterion. The fuzzy multiple-criteria decision making method was used to select the criteria and the preferred hydrogen fuel cell products based on information collected from a group of experts. Survey questionnaires were distributed to collect opinions from experts in different fields. After the survey, the criteria weights and a ranking of alternatives were obtained. The study first defined the evaluation criteria in terms of the stakeholders, so that comprehensive influence criteria could be identified. These criteria were then classified as environmental, technological, economic, or social to indicate the purpose of each criterion in the assessment process. The selected criteria comprised 14 indicators, such as energy efficiency and CO2 emissions, and seven hydrogen fuel cell applications were considered, such as forklifts and backup power systems. The results show that fuel cell backup power systems rank highest, followed by household fuel cell electric-heat composite systems. The model provides a screening tool for decision makers to select hydrogen-related applications.
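A minimal sketch of fuzzy multi-criteria scoring of this general kind: expert ratings are expressed as triangular fuzzy numbers, weight-averaged across criteria, and defuzzified by centroid to rank alternatives. The linguistic scale, weights, and ratings below are all invented for illustration and are not the paper's model or data.

```python
import numpy as np

# triangular fuzzy numbers (l, m, u) for a 3-level linguistic rating scale
labels = {"low": (0.0, 0.25, 0.5),
          "med": (0.25, 0.5, 0.75),
          "high": (0.5, 0.75, 1.0)}
weights = np.array([0.5, 0.3, 0.2])   # hypothetical criteria weights

# hypothetical expert ratings of two alternatives on three criteria
alternatives = {
    "backup power": ["high", "high", "med"],
    "forklift":     ["med",  "high", "low"],
}

def score(ratings):
    tfn = np.array([labels[r] for r in ratings])  # (criteria, 3) fuzzy matrix
    agg = weights @ tfn                           # weighted fuzzy number (l, m, u)
    return agg.sum() / 3.0                        # centroid defuzzification

ranked = sorted(alternatives, key=lambda a: -score(alternatives[a]))
```

With these invented inputs the ordering happens to match the paper's conclusion (backup power first), but that is an artifact of the chosen ratings, not a reproduction of the study.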

  9. Density, ultrasound velocity, acoustic impedance, reflection and absorption coefficient determination of liquids via multiple reflection method.

    Hoche, S; Hussein, M A; Becker, T

    2015-03-01

    The accuracy of density, reflection coefficient, and acoustic impedance determination via the multiple reflection method was validated experimentally. The ternary system water-maltose-ethanol was used to perform a systematic, temperature-dependent study over a wide range of densities and viscosities, aiming at application as an inline sensor in the beverage industry. The validation results of the presented method and setup show root mean square errors of 1.201E-3 g cm⁻³ (±0.12%) for density, 0.515E-3 (0.15%) for the reflection coefficient, and 1.851E+3 kg s⁻¹ m⁻² (0.12%) for specific acoustic impedance. The results of the diffraction-corrected absorption showed an average standard deviation of only 0.12%. It was found that the absorption change correlates well with concentration variations and may be useful for laboratory analysis of sufficiently pure liquids. The main part of the observed errors can be explained by the observed noise, temperature variation and the low signal resolution of 50 MHz. In particular, the poor signal-to-noise ratio of the second reflector echo was found to be a main accuracy limitation. Concerning the investigation of liquids, the unstable properties of the reference material PMMA, due to hygroscopicity, were identified as an additional, unpredictable source of uncertainty. While dimensional changes can be accounted for by adequate methodology, the impact of the time- and temperature-dependent water absorption on relevant reference properties, such as the buffer's sound velocity and density, could not be accounted for and may explain part of the observed deviations. Copyright © 2014 Elsevier B.V. All rights reserved.
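The core relations behind this kind of measurement can be sketched in idealized textbook form: the ratio of successive buffer-rod echoes gives the reflection coefficient at the buffer/liquid interface, which is inverted for the liquid's acoustic impedance, and density follows from the measured sound velocity. The echo amplitudes and velocity below are hypothetical, and the sketch ignores the diffraction and attenuation corrections the study applies.

```python
# idealized multiple-reflection relations (textbook form, not the authors'
# calibrated procedure)
z_buffer = 3.22e6     # nominal PMMA specific acoustic impedance, kg s^-1 m^-2
a1, a2 = 1.00, 0.37   # hypothetical amplitudes of successive buffer echoes

r = a2 / a1                              # pressure reflection coefficient
z_liquid = z_buffer * (1 - r) / (1 + r)  # inverting r = (Zb - Zl) / (Zb + Zl)
c_liquid = 1520.0                        # measured sound velocity, m/s (assumed)
rho = z_liquid / c_liquid                # density via rho = Z / c
```

Because rho depends on the echo-amplitude ratio, the poor signal-to-noise ratio of the second echo noted in the abstract propagates directly into the density error.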

  10. Billing for pharmacists' cognitive services in physicians' offices: multiple methods of reimbursement.

    Scott, Mollie Ashe; Hitch, William J; Wilson, Courtenay Gilmore; Lugo, Amy M

    2012-01-01

    To evaluate the charges and reimbursement for pharmacist services using multiple methods of billing and to determine the number of patients that must be managed by a pharmacist to cover the cost of salary and fringe benefits. Large teaching ambulatory clinic in North Carolina. Annual charges and reimbursement, patient no-show rate, clinic capacity, number of patients seen monthly and annually, and number of patients that must be seen to pay for a pharmacist's salary and benefits. A total of 6,930 patient encounters were documented during the study period. Four different clinics were managed by the pharmacists: anticoagulation, pharmacotherapy, osteoporosis, and wellness clinics. "Incident to" level 1 billing was used for the anticoagulation and pharmacotherapy clinics, whereas level 4 codes were used for the osteoporosis clinic. The wellness clinic utilized a negotiated fee-for-service model. Mean annual charges were $65,022, and the mean reimbursement rate was 47%. The mean charge and collection per encounter were $41 and $19, respectively. Eleven encounters per day were necessary to generate enough charges to pay for the cost of the pharmacist. Considering actual reimbursement rates, the number of patient encounters necessary increased to 24 per day. "What if" sensitivity analysis indicated that billing at the level of service provided, instead of level 1, decreased the number of patients who needed to be seen daily. Billing a level 4 visit meant that five patients would need to be seen daily to generate adequate charges. Taking into account the 47% reimbursement rate, 10 level 4 encounters per day were necessary to generate appropriate reimbursement to pay for the pharmacist. Unique opportunities for pharmacists to provide direct patient care in the ambulatory setting continue to develop. Use of a combination of billing methods resulted in sustainable reimbursement. The ability to bill at the level of service provided instead of a level 1 visit would
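The break-even arithmetic described above can be sketched directly. The per-encounter charge and collection figures come from the abstract; the annual cost and number of clinic days are hypothetical values chosen only to reproduce the reported 11 charge-based and 24 collection-based encounters per day.

```python
import math

charge_per_encounter = 41.0     # mean charge per encounter, from the study
collected_per_encounter = 19.0  # mean collection per encounter, from the study

annual_cost = 112_000.0   # hypothetical pharmacist salary + fringe benefits
clinic_days = 250         # hypothetical working days per year
daily_cost = annual_cost / clinic_days

# encounters per day needed to break even, on charges vs. actual collections
by_charges = math.ceil(daily_cost / charge_per_encounter)
by_collections = math.ceil(daily_cost / collected_per_encounter)
```

The gap between the two numbers is exactly the effect of the 47% reimbursement rate the abstract highlights.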

  11. Integrating Multiple Geophysical Methods to Quantify Alpine Groundwater- Surface Water Interactions: Cordillera Blanca, Peru

    Glas, R. L.; Lautz, L.; McKenzie, J. M.; Baker, E. A.; Somers, L. D.; Aubry-Wake, C.; Wigmore, O.; Mark, B. G.; Moucha, R.

    2016-12-01

    Groundwater- surface water interactions in alpine catchments are often poorly understood as groundwater and hydrologic data are difficult to acquire in these remote areas. The Cordillera Blanca of Peru is a region where dry-season water supply is increasingly stressed due to the accelerated melting of glaciers throughout the range, affecting millions of people country-wide. The alpine valleys of the Cordillera Blanca have shown potential for significant groundwater storage and discharge to valley streams, which could buffer the dry-season variability of streamflow throughout the watershed as glaciers continue to recede. Known as pampas, the clay-rich, low-relief valley bottoms are interfingered with talus deposits, providing a likely pathway for groundwater recharged at the valley edges to be stored and slowly released to the stream throughout the year by springs. Multiple geophysical methods were used to determine areas of groundwater recharge and discharge as well as aquifer geometry of the pampa system. Seismic refraction tomography, vertical electrical sounding (VES), electrical resistivity tomography (ERT), and horizontal-to-vertical spectral ratio (HVSR) seismic methods were used to determine the physical properties of the unconsolidated valley sediments, the depth to saturation, and the depth to bedrock for a representative section of the Quilcayhuanca Valley in the Cordillera Blanca. Depth to saturation and lithological boundaries were constrained by comparing geophysical results to continuous records of water levels and sediment core logs from a network of seven piezometers installed to depths of up to 6 m. Preliminary results show an average depth to bedrock for the study area of 25 m, which varies spatially along with water table depths across the valley. The conceptual model of groundwater flow and storage derived from these geophysical data will be used to inform future groundwater flow models of the area, allowing for the prediction of groundwater

  12. Addressing the targeting range of the ABILHAND-56 in relapsing-remitting multiple sclerosis: A mixed methods psychometric study.

    Cleanthous, Sophie; Strzok, Sara; Pompilus, Farrah; Cano, Stefan; Marquis, Patrick; Cohan, Stanley; Goldman, Myla D; Kresa-Reahl, Kiren; Petrillo, Jennifer; Castrillo-Viguera, Carmen; Cadavid, Diego; Chen, Shih-Yin

    2018-01-01

    ABILHAND, a manual ability patient-reported outcome instrument originally developed for stroke patients, has been used in multiple sclerosis clinical trials; however, psychometric analyses indicated the measure's limited measurement range and precision in higher-functioning multiple sclerosis patients. The purpose of this study was to identify candidate items to expand the measurement range of the ABILHAND-56, thus improving its ability to detect differences in manual ability in higher-functioning multiple sclerosis patients. A step-wise mixed methods design was used, comprising two waves of patient interviews, a combination of qualitative (concept elicitation and cognitive debriefing) and quantitative (Rasch measurement theory) analytic techniques, and consultation interviews with three clinical neurologists specializing in multiple sclerosis. The original ABILHAND was well understood in this context of use. Eighty-two new manual ability concepts were identified. Draft supplementary items were generated and refined with patient and neurologist input. Rasch measurement theory psychometric analysis indicated that the supplementary items improved targeting to higher-functioning multiple sclerosis patients and improved measurement precision. The final pool of Early Multiple Sclerosis Manual Ability items comprises 20 items. The synthesis of qualitative and quantitative methods used in this study improves the ABILHAND's content validity, enabling it to more effectively identify manual ability changes in early multiple sclerosis and potentially help determine treatment effect in higher-functioning patients in clinical trials.

  13. A Fast Multiple Sampling Method for Low-Noise CMOS Image Sensors With Column-Parallel 12-bit SAR ADCs

    Min-Kyu Kim

    2015-12-01

    This paper presents a fast multiple sampling method for low-noise CMOS image sensor (CIS) applications with column-parallel successive approximation register analog-to-digital converters (SAR ADCs). The 12-bit SAR ADC using the proposed multiple sampling method decreases the A/D conversion time by repeatedly converting a pixel output to 4 bits after the first 12-bit A/D conversion, reducing the noise of the CIS by one over the square root of the number of samplings. The area of the 12-bit SAR ADC is reduced by using a 10-bit capacitor digital-to-analog converter (DAC) with four scaled reference voltages. In addition, a simple up/down counter-based digital processing logic is proposed to perform the complex calculations required for multiple sampling and digital correlated double sampling. To verify the proposed multiple sampling method, a 256 × 128 pixel array CIS with 12-bit SAR ADCs was fabricated using a 0.18 μm CMOS process. The measurement results show that the proposed multiple sampling method reduces each A/D conversion time from 1.2 μs to 0.45 μs and random noise from 848.3 μV to 270.4 μV, achieving a dynamic range of 68.1 dB and an SNR of 39.2 dB.
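The one-over-root-N noise claim can be checked with a quick Monte Carlo sketch: averaging N independent noisy conversions shrinks the RMS noise by the square root of N. The single-conversion noise figure comes from the abstract; the number of samplings per read is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples = 9        # number of repeated conversions per read (assumed)
sigma = 848.3e-6     # single-conversion random noise from the abstract, V

# each simulated read averages n_samples independent noisy conversions
reads = rng.normal(0.0, sigma, size=(100_000, n_samples)).mean(axis=1)
measured = reads.std()
expected = sigma / np.sqrt(n_samples)  # the one-over-root-N prediction
```

Note the reported reduction (848.3 μV to 270.4 μV, a factor of about 3.1) is consistent with roughly this order of averaging.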

  14. Interaction of amidated single-walled carbon nanotubes with protein by multiple spectroscopic methods

    Li, Lili [China Pharmaceutical University, Nanjing 210009 (China); The Nursing College of Pingdingshan University, Pingdingshan 467000 (China); Lin, Rui [Yancheng Health Vocational and Technical College, Yancheng 224005 (China); He, Hua, E-mail: dochehua@163.com [China Pharmaceutical University, Nanjing 210009 (China); Key Laboratory of Drug Quality Control and Pharmacovigilance, Ministry of Education, China Pharmaceutical University, Nanjing 210009 (China); Sun, Meiling, E-mail: sml-nir@sohu.com [China Pharmaceutical University, Nanjing 210009 (China); Jiang, Li; Gao, Mengmeng [China Pharmaceutical University, Nanjing 210009 (China)

    2014-01-15

    The aim of this work was to investigate the detailed interaction between BSA and amidated single-walled carbon nanotubes (e-SWNTs) in vitro. Ethylenediamine (EDA) was successfully linked to the surface of single-walled carbon nanotubes (SWNTs) via acylation to improve their dispersion and to introduce active groups. Bovine serum albumin (BSA) was selected as the template protein to inspect the interaction of e-SWNTs with protein. Decreases in the fluorescence intensity of BSA induced by e-SWNTs demonstrated the occurrence of an interaction between BSA and e-SWNTs. Quenching parameters and different absorption spectra for e-SWNTs–BSA show that the quenching effect of e-SWNTs was static quenching. Hydrophobic forces made the leading contribution to the binding of BSA on e-SWNTs, which was confirmed by the positive enthalpy change and entropy change. The interference of Na⁺ with the quenching effect of e-SWNTs confirmed that electrostatic forces were simultaneously present in the interaction process. The hydrophobicity of the amino acid residues markedly increased with the addition of e-SWNTs, as seen from the UV spectra of BSA. The content of α-helix structure in BSA decreased by 6.8% upon the addition of e-SWNTs, indicating that e-SWNTs affect the secondary conformation of BSA. -- Highlights: • The interaction between e-SWNTs and BSA was investigated by multiple spectroscopic methods. • The quenching mechanism was static quenching. • Changes in the structure of BSA were inspected by synchronous fluorescence, UV–vis and CD spectra.

  15. Exploring the use of storytelling in quantitative research fields using a multiple case study method

    Matthews, Lori N. Hamlet

    The purpose of this study was to explore the emerging use of storytelling in quantitative research fields. The focus was not on examining storytelling in research, but rather on how stories are used in various ways within the social context of quantitative research environments. In-depth interviews were conducted with seven professionals who had experience using storytelling in their work, and my personal experience with the subject matter was also used as a source of data, following the notion of researcher-as-instrument. This study is qualitative in nature and is guided by two supporting theoretical frameworks, the sociological perspective and narrative inquiry. A multiple case study methodology was used to gain insight into why participants decided to use stories or storytelling in a quantitative research environment that may not be traditionally open to such methods. This study also attempted to identify how storytelling can strengthen or supplement existing research, as well as what value stories can provide to the practice of research in general. Five thematic findings emerged from the data and were grouped under two headings, "Experiencing Research" and "Story Work." The findings were consistent with four main theoretical functions of storytelling identified in the existing scholarly literature: (a) sense-making; (b) meaning-making; (c) culture; and (d) communal function. The five themes that emerged from this study and were consistent with the existing literature are: (a) social context; (b) quantitative versus qualitative; (c) we think and learn in terms of stories; (d) stories tie experiences together; and (e) making sense and meaning. Recommendations are offered in the form of implications for various social contexts, and topics for further research are presented as well.

  16. Grey Matter Atrophy in Multiple Sclerosis: Clinical Interpretation Depends on Choice of Analysis Method.

    Veronica Popescu

    Studies disagree on the location of grey matter (GM) atrophy in the multiple sclerosis (MS) brain. The aim was to examine the consistency between FSL, FreeSurfer and SPM for GM atrophy measurement (for volumes, patient/control discrimination, and correlations with cognition). 127 MS patients and 50 controls were included, and cortical and deep grey matter (DGM) volumetrics were performed. Consistency of volumes was assessed with the intraclass correlation coefficient (ICC). Consistency of patient/control discrimination was assessed with Cohen's d, t-tests, MANOVA and a penalized double-loop logistic classifier. Consistency of association with cognition was assessed with the Pearson correlation coefficient and ANOVA. Voxel-based morphometry (SPM-VBM and FSL-VBM) and vertex-wise FreeSurfer were used for group-level comparisons. The highest volumetric ICCs were between SPM and FreeSurfer for cortical regions, and the lowest between SPM and FreeSurfer for the DGM. The caudate nucleus and temporal lobes had high consistency between all software packages, while the amygdala had the lowest volumetric consistency. Consistency of patient/control discrimination was largest in the DGM for all software packages, especially for the thalamus and pallidum. The penalized double-loop logistic classifier most often selected the thalamus, pallidum and amygdala for all software packages. FSL yielded the largest number of significant correlations. DGM volumes yielded stronger correlations with cognition than cortical volumes. Bilateral putamen and left insula volumes correlated with cognition using all methods. GM volumes from FreeSurfer, FSL and SPM are different, especially for cortical regions. While group-level separation between MS patients and controls is comparable, correlations between regional GM volumes and clinical/cognitive variables in MS should be cautiously interpreted.
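Between-pipeline volume consistency of the kind assessed here is typically quantified with a two-way absolute-agreement ICC. A compact sketch of ICC(2,1) on made-up paired volumes (two pipelines rating five subjects; the numbers are invented, not the study's data):

```python
import numpy as np

# hypothetical regional GM volumes (mL) from two volumetry pipelines
# rows = subjects, columns = pipelines
x = np.array([[4.1, 4.3], [3.8, 3.9], [5.0, 5.2], [4.4, 4.5], [3.6, 3.8]])
n, k = x.shape
grand = x.mean()

# two-way ANOVA mean squares: subjects (rows), pipelines (columns), residual
ms_rows = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)
ms_cols = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)
ms_err = (((x - x.mean(axis=1, keepdims=True) - x.mean(axis=0) + grand) ** 2).sum()
          / ((n - 1) * (k - 1)))

# ICC(2,1): two-way random effects, absolute agreement, single measurement
icc = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                            + k * (ms_cols - ms_err) / n)
```

A systematic offset between pipelines inflates ms_cols and lowers this ICC even when the rank ordering of subjects agrees, which is one reason absolute volumes can disagree while group-level discrimination remains comparable.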

  17. Meaning and challenges in the practice of multiple therapeutic massage modalities: a combined methods study.

    Porcino, Antony J; Boon, Heather S; Page, Stacey A; Verhoef, Marja J

    2011-09-20

    Therapeutic massage and bodywork (TMB) practitioners are predominantly trained in programs that are not uniformly standardized, and in variable combinations of therapies. To date no studies have explored this variability in training and how this affects clinical practice. Combined methods, consisting of a quantitative, population-based survey and qualitative interviews with practitioners trained in multiple therapies, were used to explore the training and practice of TMB practitioners in Alberta, Canada. Of the 5242 distributed surveys, 791 were returned (15.1%). Practitioners were predominantly female (91.7%), worked in a range of environments, primarily private (44.4%) and home clinics (35.4%), and were not significantly different from other surveyed massage therapist populations. Seventy-seven distinct TMB therapies were identified. Most practitioners were trained in two or more therapies (94.4%), with a median of 8 and range of 40 therapies. Training programs varied widely in number and type of TMB components, training length, or both. Nineteen interviews were conducted. Participants described highly variable training backgrounds, resulting in practitioners learning unique combinations of therapy techniques. All practitioners reported providing individualized patient treatment based on a responsive feedback process throughout practice that they described as being critical to appropriately address the needs of patients. They also felt that research treatment protocols were different from clinical practice because researchers do not usually sufficiently acknowledge the individualized nature of TMB care provision. The training received, the number of therapies trained in, and the practice descriptors of TMB practitioners are all highly variable. In addition, clinical experience and continuing education may further alter or enhance treatment techniques. Practitioners individualize each patient's treatment through a highly adaptive process. Therefore, treatment

  18. An engineering method to estimate the junction temperatures of light-emitting diodes in multiple LED application

    Fu, Xing; Hu, Run; Luo, Xiaobing

    2014-01-01

    Acquiring the junction temperature of a light-emitting diode (LED) is essential for performance evaluation, but it is hard to obtain in multiple-LED applications. In this paper, an engineering method is presented to estimate the junction temperatures of LEDs in multiple-LED applications. This method is mainly based on an analytical model, and it can be easily applied with some simple measurements. Simulations and experiments were conducted to prove the feasibility of the method, and the deviations between the results obtained by the present method and those obtained by simulation and experiment are less than 2% and 3%, respectively. In the final part of this study, the engineering method was used to analyze the thermal resistances of a street lamp. The material of the lead frame was found to affect the system thermal resistance the most, and the choice of solder material strongly depended on the material of the lead frame.
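
    The analytical model itself is not given in the abstract; a common engineering form for coupled junction temperatures in a multi-LED module is linear superposition over a thermal-resistance matrix, sketched below with invented resistances and powers (not the paper's data):

```python
# Hypothetical sketch: junction temperatures in a multi-LED module via a
# thermal-resistance superposition model, T_j[i] = T_ambient + sum_j R[i][j] * P[j].
# R[i][i] is the self-heating resistance (K/W); off-diagonal terms model
# thermal coupling between neighbouring LEDs. All numbers are illustrative.

def junction_temperatures(R, P, t_ambient):
    """Return the junction temperature of each LED."""
    n = len(P)
    return [t_ambient + sum(R[i][j] * P[j] for j in range(n)) for i in range(n)]

# Three LEDs, 1 W each, 25 degC ambient.
R = [[30.0, 5.0, 2.0],
     [5.0, 30.0, 5.0],
     [2.0, 5.0, 30.0]]
P = [1.0, 1.0, 1.0]
temps = junction_temperatures(R, P, 25.0)
print(temps)  # -> [62.0, 65.0, 62.0]; the centre LED runs hottest because it
              # is thermally coupled to both neighbours
```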

  19. Speciation of organotin compounds in waters and marine sediments using purge-and-trap capillary gas chromatography with atomic emission detection

    Campillo, Natalia; Aguinaga, Nerea; Vin-tilde as, Pilar; Lopez-Garcia, Ignacio; Hernandez-Cordoba, Manuel

    2004-01-01

    A procedure for the simultaneous determination of six organotin compounds, including methyl-, butyl- and phenyltins, in waters and marine sediments is developed. The analytes were leached from the solid samples into an acetic acid:methanol mixture by using an ultrasonic probe. The organotins were derivatized with sodium tetraethylborate (NaBEt4) in the aqueous phase, stripped by a flow of helium, pre-concentrated in a trap and thermally desorbed. This was followed by capillary gas chromatography with microwave-induced plasma atomic emission spectrometry as the detection system (GC-AED). Each chromatographic run took 22 min, including the purge time. Calibration curves were obtained by plotting peak area versus concentration and the correlation coefficients for linear calibration were at least 0.9991. Detection limits ranged from 11 to 50 ng Sn l(-1) for tributyltin and tetramethyltin, respectively. The seawater samples analyzed contained variable concentrations of mono-, di- and tributyl- and monophenyltin, ranging from 0.05 to 0.48 μg Sn l(-1), depending on the compound. Some of the sediments analyzed contained concentrations of dibutyl- and tributyltin of between 6.0 and 13.0 ng Sn g(-1). Analysis of the certified reference material PACS-2, as well as of spiked water and sediment samples, showed the accuracy of the method. The proposed method is selective and reproducible, and is considered suitable for monitoring organotin compounds in water and sediment samples.
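
    The calibration step described here (peak area versus concentration, with near-unit correlation coefficients) can be sketched as an ordinary least-squares fit; the (concentration, peak-area) points below are invented for illustration, not the paper's data:

```python
# Sketch of a linear calibration: fit peak area vs. concentration by least
# squares and report slope, intercept and the Pearson correlation coefficient.
# The data points are invented for illustration.
import math

def linear_fit(xs, ys):
    """Return slope, intercept and Pearson r for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    syy = sum((y - my) ** 2 for y in ys)
    a = sxy / sxx
    b = my - a * mx
    r = sxy / math.sqrt(sxx * syy)
    return a, b, r

conc = [0.0, 10.0, 20.0, 50.0, 100.0]   # e.g. ng Sn per litre
area = [1.0, 52.0, 99.0, 251.0, 498.0]  # detector peak areas
slope, intercept, r = linear_fit(conc, area)
print(round(slope, 3), round(r, 4))  # near-unit correlation for this toy data
```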

  1. Performance of a novel multiple-signal luminescence sediment tracing method

    Reimann, Tony

    2014-05-01

    transport. The EET increases with increasing distance from the nourishment source, indicating that our method is capable of quantifying sediment transport distances. We furthermore observed that the EET of an aeolian analogue is orders of magnitude higher than those of the water-lain Zandmotor samples, suggesting that our approach is also able to differentiate between different modes of coastal sediment transport. This new luminescence approach offers new possibilities to decipher the sedimentation history of palaeo-environmental archives, e.g. in coastal, fluvial or aeolian settings. References: Reimann, T. et al. Quantifying the degree of bleaching during sediment transport using a polymineral multiple-signal luminescence approach. Submitted. Stive, M.J.F. et al. 2013. A New Alternative to Saving Our Beaches from Sea-Level Rise: The Sand Engine. Journal of Coastal Research 29, 1001-1008.

  2. Rapid descriptive sensory methods – Comparison of Free Multiple Sorting, Partial Napping, Napping, Flash Profiling and conventional profiling

    Dehlholm, Christian; Brockhoff, Per B.; Meinert, Lene

    2012-01-01

    Two new rapid descriptive sensory evaluation methods are introduced to the field of food sensory evaluation. The first method, free multiple sorting, allows subjects to perform ad libitum free sortings, until they feel that no more relevant dissimilarities among products remain. The second method ... is a modal restriction of Napping to specific sensory modalities, directing sensation and still allowing a holistic approach to products. The new methods are compared to Flash Profiling, Napping and conventional descriptive sensory profiling. Evaluations are performed by several panels of expert assessors ... are applied for the graphical validation and comparisons. This allows similar comparisons and is applicable to single-block evaluation designs such as Napping. The partial Napping allows repetitions on multiple sensory modalities, e.g. appearance, taste and mouthfeel, and shows the average ...

  3. Application of the modified neutron source multiplication method for a measurement of sub-criticality in AGN-201K reactor

    Myung-Hyun Kim

    2010-01-01

    Measurement of sub-criticality is a challenging but necessary task in the nuclear industry, both for nuclear criticality safety and for physics tests in nuclear power plants. A relatively new method, named the Modified Neutron Source Multiplication Method (MNSM), was proposed in Japan. This method is an improvement of the traditional Neutron Source Multiplication (NSM) method, in which three correction factors are applied additionally. In this study, MNSM was tested in the calculation of rod worth using an educational reactor at Kyung Hee University, AGN-201K. A revised nuclear data library and the neutron transport code system TRANSX-PARTISN were used to calculate the correction factors for various control rod positions and source locations. Experiments were designed and performed to emphasize the errors in NSM arising from the location effects of the source and detectors. MNSM can correct these effects, but the current results showed little correction effect. (author)
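
    The MNSM formulas are not reproduced in the abstract; as background, the traditional NSM technique it improves upon infers subcriticality from the inverse-multiplication relation C ∝ S/(1 − k_eff). A minimal sketch under that assumption, with invented count rates and ignoring MNSM's three correction factors:

```python
# Illustrative sketch of the traditional Neutron Source Multiplication (NSM)
# idea: with a constant external source S, the detector count rate scales as
# C = a * S / (1 - k_eff), so an unknown k can be inferred from a reference
# state with known k_ref. Numbers are invented for illustration; MNSM adds
# correction factors (not modelled here) for source/detector location effects.

def k_from_count_rate(c_measured, c_ref, k_ref):
    """Estimate k_eff, assuming C is proportional to 1/(1 - k)."""
    return 1.0 - (1.0 - k_ref) * c_ref / c_measured

# Reference state: k_ref = 0.95 gave 1000 counts/s; deeper rod insertion gives 500.
k = k_from_count_rate(500.0, 1000.0, 0.95)
print(round(k, 6))  # lower count rate -> more subcritical state
```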

  4. On methods to determine the π0 multiplicity distribution and its moments from the observed γ radiation

    Ekspong, G.; Johansson, H.

    1976-04-01

    In high energy particle reactions where many neutral pions may be produced, the information contained in the decay gamma radiation can be converted into information about the neutral pions. Two methods are described for obtaining the moments of the multiplicity distribution of the neutral pions from the distribution of the number of electron-positron pairs. (Auth.)
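
    The unfolding rests on the chain π0 → 2γ, with each photon converting to an e+e- pair with some probability; a toy Monte Carlo illustrating the first-moment relation ⟨n_pairs⟩ = 2p⟨n_π0⟩ (the multiplicity distribution and conversion probability are invented, and the paper's actual methods handle higher moments as well):

```python
# Toy Monte Carlo of the pi0 -> gamma -> e+e- pair chain: each pi0 yields two
# photons, and each photon converts to a pair with probability p. For the
# first moment this gives <n_pairs> = 2 * p * <n_pi0>, the kind of relation
# the moment-unfolding methods exploit. All numbers are illustrative.
import random

random.seed(1)
p_convert = 0.1      # assumed photon conversion probability
n_events = 200_000

total_pairs = 0
total_pi0 = 0
for _ in range(n_events):
    n_pi0 = random.randint(0, 8)   # toy multiplicity distribution, mean 4
    total_pi0 += n_pi0
    for _ in range(2 * n_pi0):     # two photons per pi0
        if random.random() < p_convert:
            total_pairs += 1

mean_pi0 = total_pi0 / n_events
mean_pairs = total_pairs / n_events
print(round(mean_pairs / (2 * p_convert * mean_pi0), 2))  # ratio close to 1
```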

  5. Zero-leakage multiple key-binding scenarios for SRAM-PUF systems based on the XOR-method

    Kusters, C.J.; Ignatenko, T.; Willems, F.M.J.

    2016-01-01

    We show that the XOR-method based on linear error-correcting codes can be applied to achieve the secret-key capacity of binary-symmetric SRAM-PUFs. Then we focus on multiple key-bindings. We prove that no information is leaked by all the helper data about a single secret key both in the case where
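
    The XOR-method named in the title is the standard code-offset construction; a toy sketch in which a 3-bit repetition code stands in for the linear error-correcting code (all bit strings are invented):

```python
# Toy sketch of XOR-based (code-offset) key binding: helper data
# w = x XOR enc(k) is published; a noisy re-measurement x' of the SRAM-PUF
# response recovers k by decoding x' XOR w. A 3-bit repetition code stands in
# for the linear error-correcting code; all values are illustrative.

def enc(bits):
    """Repetition-3 encode a list of key bits."""
    return [b for b in bits for _ in range(3)]

def dec(bits):
    """Majority-vote decode groups of three bits."""
    return [int(sum(bits[i:i + 3]) >= 2) for i in range(0, len(bits), 3)]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

key = [1, 0, 1, 1]
x = [0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0]   # enrollment PUF response
helper = xor(x, enc(key))                   # public helper data
x_noisy = x[:]
x_noisy[4] ^= 1                             # one bit flips at reconstruction
print(dec(xor(x_noisy, helper)) == key)     # key recovered despite the error
```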

  7. Privacy Protection Method for Multiple Sensitive Attributes Based on Strong Rule

    Tong Yi

    2015-01-01

    At present, most studies on data publishing consider only a single sensitive attribute, and work on multiple sensitive attributes is still scarce. Moreover, almost all existing studies on multiple sensitive attributes have not taken the inherent relationships between sensitive attributes into account, so an adversary can use background knowledge about these relationships to attack the privacy of users. This paper presents an attack model based on the association rules between sensitive attributes and, accordingly, presents a data publication method for multiple sensitive attributes. Through proof and analysis, the new model can prevent an adversary from using background knowledge about association rules to attack privacy, and it is able to release high-quality information. Finally, this paper verifies the above conclusions with experiments.
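
    The background knowledge assumed by such an attack model is an association rule between sensitive attributes; computing a rule's support and confidence is straightforward (the records, attribute names and values below are invented for illustration):

```python
# Illustrative computation of an association rule between two sensitive
# attributes: the support and confidence of "disease=A => salary=low" over a
# released table. Rules whose support and confidence exceed the publisher's
# thresholds are "strong" and usable as attack background knowledge.

def rule_stats(records, antecedent, consequent):
    """Return (support, confidence) of the rule antecedent => consequent."""
    n = len(records)
    n_ante = sum(1 for r in records if antecedent(r))
    n_both = sum(1 for r in records if antecedent(r) and consequent(r))
    return n_both / n, (n_both / n_ante if n_ante else 0.0)

records = [
    {"disease": "A", "salary": "low"},
    {"disease": "A", "salary": "low"},
    {"disease": "A", "salary": "high"},
    {"disease": "B", "salary": "high"},
]
support, confidence = rule_stats(records,
                                 lambda r: r["disease"] == "A",
                                 lambda r: r["salary"] == "low")
print(support, round(confidence, 3))  # -> 0.5 0.667
```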

  8. Self-calibrated multiple-echo acquisition with radial trajectories using the conjugate gradient method (SMART-CG).

    Jung, Youngkyoo; Samsonov, Alexey A; Bydder, Mark; Block, Walter F

    2011-04-01

    To remove phase inconsistencies between multiple echoes, an algorithm using a radial acquisition to provide inherent phase and magnitude information for self-correction was developed. The information also allows simultaneous support for parallel imaging in multiple-coil acquisitions. Without a separate field map acquisition, a phase estimate was generated from each echo in the multiple-echo train. When a multiple-channel coil is used, the magnitude and phase estimates from each echo provide in vivo coil sensitivities. An algorithm based on the conjugate gradient method uses these estimates to remove phase inconsistencies between echoes and, in the case of multiple-coil acquisition, simultaneously provides parallel imaging benefits. The algorithm is demonstrated on single-channel, multiple-channel, and undersampled data. Substantial image quality improvements were demonstrated: signal dropouts were completely removed and undersampling artifacts were well suppressed. The suggested algorithm is able to remove phase cancellation and undersampling artifacts simultaneously and to improve the image quality of multi-echo radial imaging, an important technique for fast three-dimensional MRI data acquisition. Copyright © 2011 Wiley-Liss, Inc.

  9. The fairness, predictive validity and acceptability of multiple mini interview in an internationally diverse student population- a mixed methods study

    Kelly, Maureen E.; Dowell, Jon; Husbands, Adrian; Newell, John; O'Flynn, Siun; Kropmans, Thomas; Dunne, Fidelma P.; Murphy, Andrew W.

    2014-01-01

    Background International medical students, those attending medical school outside of their country of citizenship, account for a growing proportion of medical undergraduates worldwide. This study aimed to establish the fairness, predictive validity and acceptability of Multiple Mini Interview (MMI) in an internationally diverse student population. Methods This was an explanatory sequential, mixed methods study. All students in First Year Medicine, National University of Ireland Galway 2012 we...

  10. Analysis of underlying and multiple-cause mortality data: the life table methods.

    Moussa, M A

    1987-02-01

    Stochastic compartment model concepts are employed to analyse and construct several types of life tables: complete and abbreviated total-mortality life tables; multiple-decrement life tables for a disease under both the underlying-cause and pattern-of-failure definitions of mortality risk; cause-elimination life tables, which express the effect on the surviving population through the gain in life expectancy as a consequence of eliminating the mortality risk; cause-delay life tables, designed to translate a clinically observed increase in survival time into the population gain in life expectancy that would occur if a treatment protocol were made available to the general population; and life tables for disease dependency in multiple-cause data.
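
    As a concrete illustration of the life-table machinery, a minimal abbreviated life table can be built from age-specific death probabilities; the q_x values below are invented, and the mid-interval death assumption is a common simplification, not this paper's stochastic compartment model:

```python
# Minimal abbreviated life-table sketch: convert age-interval death
# probabilities q_x into survivorship and a life expectancy at birth,
# assuming deaths occur mid-interval. The q_x values are invented.

def life_expectancy(qx, age_step):
    """Expected years lived from birth over the given age intervals."""
    l = 1.0       # survivorship (fraction still alive)
    years = 0.0
    for q in qx:
        deaths = l * q
        # survivors live the full interval; decedents live half of it
        years += age_step * (l - deaths / 2.0)
        l -= deaths
    return years

# Ten 5-year intervals; the final q = 1.0 closes the table.
qx = [0.01, 0.005, 0.01, 0.02, 0.04, 0.08, 0.15, 0.30, 0.50, 1.0]
e0 = life_expectancy(qx, 5.0)
print(round(e0, 2))

# Cause-elimination logic in miniature: lowering the q_x raises e0.
qx_reduced = [q * 0.5 for q in qx[:-1]] + [1.0]
print(life_expectancy(qx_reduced, 5.0) > e0)  # -> True
```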

  11. Use of Multiple Imputation Method to Improve Estimation of Missing Baseline Serum Creatinine in Acute Kidney Injury Research

    Peterson, Josh F.; Eden, Svetlana K.; Moons, Karel G.; Ikizler, T. Alp; Matheny, Michael E.

    2013-01-01

    Background and objectives: Baseline creatinine (BCr) is frequently missing in AKI studies. Common surrogate estimates can misclassify AKI and adversely affect the study of related outcomes. This study examined whether multiple imputation improved the accuracy of estimating missing BCr beyond the current recommendation to apply an assumed estimated GFR (eGFR) of 75 ml/min per 1.73 m2 (eGFR 75). Design, setting, participants, and measurements: From 41,114 unique adult admissions (13,003 with and 28,111 without BCr data) at Vanderbilt University Hospital between 2006 and 2008, a propensity score model was developed to predict the likelihood of missing BCr. Propensity scoring identified the 6502 patients with the highest likelihood of missing BCr among the 13,003 patients with known BCr, to simulate a "missing" data scenario while preserving the actual reference BCr. Within this cohort (n=6502), the ability of various multiple-imputation approaches to estimate BCr and classify AKI was compared with that of eGFR 75. Results: All multiple-imputation methods except the basic one more closely approximated actual BCr than did eGFR 75. Total AKI misclassification was lower with multiple imputation (full multiple imputation + serum creatinine) (9.0%) than with eGFR 75 (12.3%; P<0.001), as was misclassification of AKI staging (full multiple imputation + serum creatinine, 15.3%, versus eGFR 75, 40.5%; P<0.001). Multiple imputation improved specificity and positive predictive value for detecting AKI at the expense of modestly decreased sensitivity relative to eGFR 75. Conclusions: Multiple imputation can improve accuracy in estimating missing BCr and reduce misclassification of AKI beyond currently proposed methods. PMID:23037980
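
    The multiple-imputation idea itself is not shown in the abstract; a minimal sketch draws several plausible values for each missing entry from an assumed predictive model and pools the completed-data estimates (the data, the regression model, and its coefficients are all invented for illustration):

```python
# Minimal multiple-imputation sketch: impute each missing value m times by
# drawing from a simple predictive model (here, normal noise around an assumed
# linear regression on an observed covariate), analyse each completed dataset,
# then pool by averaging (the point-estimate part of Rubin's rules).
# All data and model parameters are invented.
import random
import statistics

random.seed(0)
# (age, creatinine) pairs; None marks a missing baseline creatinine.
data = [(40, 0.9), (50, 1.0), (60, 1.1), (70, 1.2), (55, None), (65, None)]

slope = 0.01      # assumed model: creatinine ~ 0.5 + 0.01 * age
intercept = 0.5
resid_sd = 0.05   # assumed residual standard deviation

m = 20            # number of imputed datasets
pooled_means = []
for _ in range(m):
    completed = [c if c is not None else
                 random.gauss(intercept + slope * a, resid_sd)
                 for a, c in data]
    pooled_means.append(statistics.mean(completed))

estimate = statistics.mean(pooled_means)  # pooled point estimate
print(round(estimate, 3))
```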

  12. Compensation Methods for Non-uniform and Incomplete Data Sampling in High Resolution PET with Multiple Scintillation Crystal Layers

    Lee, Jae Sung; Kim, Soo Mee; Lee, Dong Soo; Hong, Jong Hong; Sim, Kwang Souk; Rhee, June Tak

    2008-01-01

    The aim was to establish methods for sinogram formation and correction in order to appropriately apply the filtered backprojection (FBP) reconstruction algorithm to data acquired using a PET scanner with multiple scintillation crystal layers. The format for raw PET data storage and the conversion from list-mode data to histograms and sinograms were optimized. To solve the various problems that occurred while the raw histogram was converted into a sinogram, an optimal sampling strategy and a sampling efficiency correction method were investigated, along with the gap compensation methods that are unique to this system. All sinogram data were reconstructed using a 2D filtered backprojection algorithm and compared to estimate the improvements from the correction algorithms. The optimal radial sampling interval and number of angular samples, in terms of the sampling theorem and the sampling efficiency correction algorithm, were pitch/2 and 120, respectively. By applying the sampling efficiency correction and gap compensation, artifacts and background noise in the reconstructed image could be reduced. A conversion method from histogram to sinogram was thus established for FBP reconstruction of data acquired using multiple scintillation crystal layers. This method will be useful for fast 2D reconstruction of multiple-crystal-layer PET data.

  13. The Role of Non-Suicidal Self-Injury and Binge-Eating/Purging Behaviours in Family Functioning in Eating Disorders.

    Depestele, Lies; Claes, Laurence; Dierckx, Eva; Baetens, Imke; Schoevaerts, Katrien; Lemmens, Gilbert M D

    2015-09-01

    This study aimed to investigate the family functioning of restrictive and binge-eating/purging eating-disordered adolescents with or without non-suicidal self-injury (NSSI), as perceived by the patients and their parents (mothers and fathers). In total, 123 patients (between 14 and 24 years of age), 98 mothers and 79 fathers completed the Family Assessment Device. Patients also completed the Self-Injury Questionnaire-Treatment Related and the Symptom Checklist 90-Revised. No main effects were found for restrictive versus binge-eating/purging behaviour or for the presence/absence of NSSI. For the parents, a significant interaction between binge-eating/purging behaviour and NSSI emerged: mothers and fathers reported worse family functioning in the binge-eating/purging group in the presence of NSSI, whereas mothers reported worse family functioning in the restrictive group without NSSI. Parental perception of family functioning is thus affected by the combined presence of binge-eating/purging behaviour and NSSI. This finding should be taken into account when treating families living with eating disorders. Copyright © 2015 John Wiley & Sons, Ltd and Eating Disorders Association.

  14. Multiple imputation strategies for zero-inflated cost data in economic evaluations : which method works best?

    MacNeil Vroomen, Janet; Eekhout, Iris; Dijkgraaf, Marcel G; van Hout, Hein; de Rooij, Sophia E; Heymans, Martijn W; Bosmans, Judith E

    2016-01-01

    Cost and effect data often have missing data because economic evaluations are frequently added onto clinical studies where cost data are rarely the primary outcome. The objective of this article was to investigate which multiple imputation strategy is most appropriate to use for missing

  15. Novel method of simultaneous multiple immunogold localization on resin sections in high resolution scanning electron microscopy

    Nebesářová, Jana; Wandrol, P.; Vancová, Marie

    2016-01-01

    Vol. 12, No. 1 (2016), pp. 105-517. ISSN 1549-9634. R&D Projects: GA TA ČR (CZ) TE01020118. Institutional support: RVO:60077344. Keywords: multiple immunolabeling; gold nanoparticles; high resolution SEM; STEM imaging; BSE imaging. Subject RIV: EA - Cell Biology. Impact factor: 5.720, year: 2016

  16. A novel Multiple-Marker Method for the Early Diagnosis of Oral Squamous Cell Carcinoma

    Jutta Ries

    2009-01-01

    Objective: Melanoma-associated antigens-A (MAGE-A) expression is highly specific to cancer cells. Thus, they can be the most suitable targets for the diagnosis of malignancy. The aim of this study was to evaluate the sensitivity of multiple MAGE-A expression analysis for the diagnosis of oral squamous cell carcinoma (OSCC).

  17. Matrix-type multiple reciprocity boundary element method for solving three-dimensional two-group neutron diffusion equations

    Itagaki, Masafumi; Sahashi, Naoki.

    1997-01-01

    The multiple reciprocity boundary element method has been applied to three-dimensional two-group neutron diffusion problems. A matrix-type boundary integral equation has been derived to solve the first and the second group neutron diffusion equations simultaneously. The matrix-type fundamental solutions used here satisfy the equation which has a point source term and is adjoint to the neutron diffusion equations. A multiple reciprocity method has been employed to transform the matrix-type domain integral related to the fission source into an equivalent boundary one. The higher order fundamental solutions required for this formulation are composed of a series of two types of analytic functions. The eigenvalue itself is also calculated using only boundary integrals. Three-dimensional test calculations indicate that the present method provides stable and accurate solutions for criticality problems. (author)

  18. Efficient optical probes for fast surface velocimetry: multiple frequency issues for Fabry and VISAR methods

    Goosman, David R.; Avara, George R.; Perry, Stephen J.

    2001-04-01

    velocimeter. The Doppler-shifted light enters the collection fiber at numerical apertures (NA) between 0.11 and 0.2, with little light in the 0 to 0.11 NA region. However, the many-beam velocimeter uses just the light in the 0 to 0.11 NA range, except when we link two analyzer tables together. A slight amount of mode scrambling of the Doppler-shifted light converts the light into a uniformly filled NA = 0.2 angular range before it enters the velocimeter analyzer table. We have expended seven hundred plastic nested lenses in various experiments. The most recent version of the fiber cable assembly will be shown. Six situations will be discussed in which multiple reflected frequencies were observed in experiments, illustrating an advantage of the Fabry-Perot method over the VISAR method.

  19. Satiation deficits and binge eating: Probing differences between bulimia nervosa and purging disorder using an ad lib test meal.

    Keel, Pamela K; Haedt-Matt, Alissa A; Hildebrandt, Britny; Bodell, Lindsay P; Wolfe, Barbara E; Jimerson, David C

    2018-04-11

    Purging disorder (PD) has been included as a named condition within the DSM-5 category of Other Specified Feeding or Eating Disorder and differs from bulimia nervosa (BN) in the absence of binge-eating episodes. The current study evaluated satiation through behavioral and self-report measures to understand how this construct may explain the distinct symptom presentations of BN and PD. Women (N = 119) were recruited from the community if they met DSM-5 criteria for BN (n = 57) or PD (n = 31), or were free of eating pathology (n = 31 controls). Participants completed structured clinical interviews, questionnaires, and an ad lib test meal during which they provided reports of subjective states. Significant group differences were found in self-reported symptoms, ad lib test meal intake, and subjective responses to food intake, both between individuals with eating disorders and controls and between BN and PD. Further, ad lib intake was associated with the self-reported frequency and size of binge episodes. In a multivariable model, the amount of food consumed during binges, as reported during clinical interviews, predicted the amount of food consumed during the ad lib test meal, controlling for other binge-related variables. Satiation deficits distinguish BN from PD and appear to be specifically linked to the size of binge episodes. Future work should expand exploration of the physiological bases of these differences to contribute to novel interventions. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. Frequency of deflagration in the in-tank precipitation process tanks due to loss of nitrogen purge system

    Jansen, J.M.; Mason, C.L.; Olsen, L.M.; Shapiro, B.J.; Gupta, M.K.; Britt, T.E.

    1994-01-01

    High-level liquid wastes (HLLW) from the processing of nuclear material at the Savannah River Site (SRS) are stored in large tanks in the F- and H-Area tank farms. The In-Tank Precipitation (ITP) process is one step in the processing and disposal of HLLW. The process hazards review for the ITP identified the need to implement provisions that minimize the deflagration/explosion hazards associated with the process. The objective of this analysis is to determine the frequency of a deflagration in Tank 48 and/or Tank 49 due to nitrogen purge system failures (including external events) with a coincident ignition source. A fault tree of the nitrogen purge system, coupled with the ignition source probability, is used to identify the dominant system failures that contribute to the frequency of deflagration. These system failures are then used in the recovery analysis. Several human actions, recovery actions, and repair activities are identified that reduce the total frequency. The actions are analyzed and quantified as part of a Human Reliability Analysis (HRA). The probabilities of failure of these actions are applied to the fault tree cutsets and the event trees.

  1. Liquid chromatography-electrospray ionization tandem mass spectrometry and dynamic multiple reaction monitoring method for determining multiple pesticide residues in tomato.

    Andrade, G C R M; Monteiro, S H; Francisco, J G; Figueiredo, L A; Botelho, R G; Tornisielo, V L

    2015-05-15

    A quick and sensitive liquid chromatography-electrospray ionization tandem mass spectrometry method, using dynamic multiple reaction monitoring and a 1.8-μm particle size analytical column, was developed to determine 57 pesticides in tomato in a 13-min run. The QuEChERS (quick, easy, cheap, effective, rugged, and safe) method was used for sample preparation, and validation was carried out in compliance with the EU SANCO guidelines. The method was applied to 58 tomato samples. More than 84% of the compounds investigated showed limits of detection equal to or lower than 5 mg kg(-1). A mild (<20%), medium (20-50%) and strong (>50%) matrix effect was observed for 72%, 25%, and 3% of the pesticides studied, respectively. Eighty-one percent of the pesticides showed recoveries ranging between 70% and 120%. Twelve pesticides were detected in 35 samples, all below the maximum residue levels permitted by Brazilian legislation; 15 samples exceeded the maximum residue levels established by EU legislation for methamidophos, 10 exceeded the limits for acephate, and four for bromuconazole. Copyright © 2014 Elsevier Ltd. All rights reserved.
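
    The matrix effect reported here is commonly quantified by comparing calibration slopes obtained in matrix-matched and pure-solvent standards; a minimal sketch of that slope-ratio convention (the slope values are invented):

```python
# Sketch of a common way to quantify an LC-MS/MS matrix effect: compare the
# calibration slope in matrix-matched standards with the slope in pure
# solvent, ME% = (slope_matrix / slope_solvent - 1) * 100. A negative value
# indicates ion suppression. The slopes below are invented for illustration.

def matrix_effect_percent(slope_matrix, slope_solvent):
    """Return the matrix effect as a signed percentage."""
    return (slope_matrix / slope_solvent - 1.0) * 100.0

me = matrix_effect_percent(4.2, 5.0)
print(round(me, 1))  # -> -16.0, i.e. mild suppression (|ME| below 20%)
```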

  2. Application of the 2-D discrete-ordinates method to multiple scattering of laser radiation

    Zardecki, A.; Gerstl, S.A.W.; Embury, J.F.

    1983-01-01

    The discrete-ordinates finite-element radiation transport code TWOTRAN is applied to describe the multiple scattering of a laser beam from a reflecting target. For a model scenario involving a 99% relative humidity rural aerosol, we compute the average intensity of the scattered radiation and correction factors to the Beer-Lambert law arising from multiple scattering. As our results indicate, 2-D x-y and r-z geometry modeling can reliably describe a realistic 3-D scenario. Specific results are presented for the two visual ranges of 1.52 and 0.76 km, which show that, for sufficiently high aerosol concentrations (e.g., equivalent to V = 0.76 km), the target signature in a distant detector becomes dominated by multiply scattered radiation from interactions of the laser light with the aerosol environment. The merits of the scaling group and the delta-M approximation for the transfer equation are also explored.
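
    The Beer-Lambert baseline that the correction factors modify can be sketched directly; the use of the Koschmieder relation to convert visual range to extinction, and all numerical values, are assumptions for illustration, not the paper's computation:

```python
# Sketch of the single-scattering Beer-Lambert baseline, T = exp(-tau), and of
# a multiple-scattering "correction factor" defined as the ratio of the total
# received fraction to the Beer-Lambert prediction (>1 means multiply
# scattered light adds to the direct beam). All numbers are illustrative; the
# Koschmieder relation extinction ~ 3.912 / V links visual range to extinction.
import math

def beer_lambert_transmission(extinction_per_km, path_km):
    """Direct-beam transmission over the path."""
    return math.exp(-extinction_per_km * path_km)

def correction_factor(received_fraction, extinction_per_km, path_km):
    """Ratio of total received power to the Beer-Lambert prediction."""
    return received_fraction / beer_lambert_transmission(extinction_per_km, path_km)

ext = 3.912 / 0.76   # extinction (1/km) for an assumed 0.76 km visual range
t = beer_lambert_transmission(ext, 1.0)
print(correction_factor(0.02, ext, 1.0) > 1.0)  # -> True for this toy case
```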

  3. Error Analysis and Calibration Method of a Multiple Field-of-View Navigation System

    Shi, Shuai; Zhao, Kaichun; You, Zheng; Ouyang, Chenguang; Cao, Yongkui; Wang, Zhenzhou

    2017-01-01

    The Multiple Field-of-view Navigation System (MFNS) is a spacecraft subsystem built to realize the autonomous navigation of the Spacecraft Inside Tiangong Space Station. This paper introduces the basics of the MFNS, including its architecture, mathematical model and analysis, and numerical simulation of system errors. According to the performance requirement of the MFNS, the calibration of both intrinsic and extrinsic parameters of the system is assumed to be essential and pivotal. Hence, a n...

  4. Application of multiple correlation analysis method to the prognosis and evaluation of uranium metallogenisys in Jiangzha region

    Zhu Hongxun; Pan Hongping; Jian Xingxiang

    2008-01-01

    A prognosis and evaluation of uranium resources in the Jiangzha region, Sichuan province, is carried out using the multiple correlation analysis method. By combining the characteristics of the method with the geological circumstances of the areas to be predicted, the uranium source, rock types, structure, terrain, hot springs and red basins are selected as the estimation variables (factors). The original data of the reference and prediction units are listed first, then the correlation degree is calculated, and finally the uranium mineralization prospect areas are discriminated. The result shows that the method is credible and should be applied to the whole Ruoergai uranium metallogenic area. (authors)

  5. Carbon sequestration in wood products: a method for attribution to multiple parties

    Tonn, Bruce; Marland, Gregg

    2007-01-01

    When forest is harvested, some of the forest carbon ends up in wood products. If the forest is managed so that the standing stock of the forest remains constant over time, and the stock of wood products is increasing, then carbon dioxide is being removed from the atmosphere in net, and this should be reflected in accounting for greenhouse gas emissions. We suggest that carbon sequestration in wood products requires the cooperation of multiple parties: from the forest owner to the product manufacturer to the product user, and perhaps others. Credit for sequestering carbon away from the atmosphere could acknowledge the contributions of these multiple parties. Accounting under a cap-and-trade or tax system is not necessarily an inventory system; it is a system designed to motivate and/or reward an environmental objective. We describe a system of attribution whereby credits for carbon sequestration would be shared among multiple, contributing parties. It is hoped that the methodology outlined herein proves attractive enough to the parties concerned to spur them to address the details of such a system. The system of incentives one would choose for limiting or controlling greenhouse gas emissions could be quite different, depending on how the attribution for emissions and sequestration is chosen.

  6. Multiple-Trait Genomic Selection Methods Increase Genetic Value Prediction Accuracy

    Jia, Yi; Jannink, Jean-Luc

    2012-01-01

    Genetic correlations between quantitative traits measured in many breeding programs are pervasive. These correlations indicate that measurements of one trait carry information on other traits. Current single-trait (univariate) genomic selection does not take advantage of this information. Multivariate genomic selection on multiple traits could accomplish this but has been little explored and tested in practical breeding programs. In this study, three multivariate linear models (i.e., GBLUP, BayesA, and BayesCπ) were presented and compared to univariate models using simulated and real quantitative traits controlled by different genetic architectures. We also extended BayesA with fixed hyperparameters to a full hierarchical model that estimated hyperparameters and BayesCπ to impute missing phenotypes. We found that optimal marker-effect variance priors depended on the genetic architecture of the trait so that estimating them was beneficial. We showed that the prediction accuracy for a low-heritability trait could be significantly increased by multivariate genomic selection when a correlated high-heritability trait was available. Further, multiple-trait genomic selection had higher prediction accuracy than single-trait genomic selection when phenotypes are not available on all individuals and traits. Additional factors affecting the performance of multiple-trait genomic selection were explored. PMID:23086217

  7. Single-electron multiplication statistics as a combination of Poissonian pulse height distributions using constraint regression methods

    Ballini, J.-P.; Cazes, P.; Turpin, P.-Y.

    1976-01-01

    Analysing the histogram of anode pulse amplitudes allows discussion of the hypotheses that have been proposed to account for the statistical processes of secondary multiplication in a photomultiplier. In an earlier work, good agreement was obtained between experimental and reconstructed spectra, assuming a first-dynode distribution composed of two Poisson distributions with distinct mean values. This first approximation led to a search for a method that could give the weights of several Poisson distributions with distinct mean values. Three methods are briefly described: classical linear regression, constrained regression (d'Esopo's method), and regression on variables subject to error. These methods yield an approximation of the frequency function that represents the dispersion of the point mean gain around the overall first-dynode mean gain. Comparison between this function and the one employed in the Polya distribution shows that the latter is inadequate to describe the statistical process of secondary multiplication. Numerous spectra obtained with two kinds of photomultiplier working under different physical conditions have been analysed. Two points are then discussed: does the frequency function represent the dynode structure and the interdynode collection process, and is the model (in which the multiplication process of all dynodes but the first is Poissonian) valid whatever the photomultiplier and the operating conditions. (Auth.)
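
    The constrained-weight idea can be sketched as a small nonnegative least-squares problem: given a pulse-height histogram, solve for the weights of candidate Poisson components with distinct means. This is only a minimal illustration with invented means and channel range, using an NNLS solver rather than d'Esopo's specific algorithm:

```python
import numpy as np
from scipy.optimize import nnls
from scipy.stats import poisson

# Synthetic pulse-height histogram drawn from a two-component Poisson mixture
rng = np.random.default_rng(0)
true_means, true_weights = [2.0, 6.0], [0.7, 0.3]
k = np.arange(20)                                   # channel numbers
pmf = sum(w * poisson.pmf(k, m) for w, m in zip(true_weights, true_means))
counts = rng.multinomial(100_000, pmf / pmf.sum())

# Design matrix: one column per candidate Poisson mean (one spurious candidate)
candidate_means = [2.0, 6.0, 10.0]
A = np.column_stack([poisson.pmf(k, m) for m in candidate_means])

# Constrained regression: weights must be nonnegative
weights, _ = nnls(A, counts / counts.sum())
```

    The recovered weights approximate (0.7, 0.3, 0): the spurious mean-10 component is driven to zero by the nonnegativity constraint, which is the practical benefit of constrained over classical linear regression here.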

  8. Combining qualitative and quantitative operational research methods to inform quality improvement in pathways that span multiple settings

    Crowe, Sonya; Brown, Katherine; Tregay, Jenifer; Wray, Jo; Knowles, Rachel; Ridout, Deborah A; Bull, Catherine; Utley, Martin

    2017-01-01

    Background Improving integration and continuity of care across sectors within resource constraints is a priority in many health systems. Qualitative operational research methods of problem structuring have been used to address quality improvement in services involving multiple sectors but not in combination with quantitative operational research methods that enable targeting of interventions according to patient risk. We aimed to combine these methods to augment and inform an improvement initiative concerning infants with congenital heart disease (CHD) whose complex care pathway spans multiple sectors. Methods Soft systems methodology was used to consider systematically changes to services from the perspectives of community, primary, secondary and tertiary care professionals and a patient group, incorporating relevant evidence. Classification and regression tree (CART) analysis of national audit datasets was conducted along with data visualisation designed to inform service improvement within the context of limited resources. Results A ‘Rich Picture’ was developed capturing the main features of services for infants with CHD pertinent to service improvement. This was used, along with a graphical summary of the CART analysis, to guide discussions about targeting interventions at specific patient risk groups. Agreement was reached across representatives of relevant health professions and patients on a coherent set of targeted recommendations for quality improvement. These fed into national decisions about service provision and commissioning. Conclusions When tackling complex problems in service provision across multiple settings, it is important to acknowledge and work with multiple perspectives systematically and to consider targeting service improvements in response to confined resources. Our research demonstrates that applying a combination of qualitative and quantitative operational research methods is one approach to doing so that warrants further consideration.

  9. Self-destruction by multiple methods during a single episode: a case ...

    Results: Three different methods of suicide were apparent in this instance: hanging, leaping down the cliff and drowning as was evidenced by the autopsy and positive diatom test. The complexity of this case was the planned protection against the failure of one method employed to commit suicide. The methods used were ...

  10. A novel String Banana Template Method for Tracks Reconstruction in High Multiplicity Events with significant Multiple Scattering and its Firmware Implementation

    Kulinich, P; Krylov, V

    2004-01-01

    A novel String Banana Template Method (SBTM) for track reconstruction in difficult conditions is proposed and implemented for off-line analysis of relativistic heavy-ion collision events. The main idea of the method lies in using features of ensembles of tracks selected by 3-fold coincidence. A two-step track model is used: the first step is averaged over the selected ensemble, and the second is per-event dependent and takes into account Multiple Scattering (MS) for the particular track. SBTM relies on stored templates generated by precise Monte Carlo simulation, so it is more time-efficient in the case of a 2D spectrometer. All data required for track reconstruction in such difficult conditions can be prepared in a format convenient for fast use. Its template-based nature, and the fact that the SBTM track model is actually very close to the hits, imply that it can be implemented in a firmware processor. In this report a block diagram of a firmware-based pre-processor for track reconstruction in a CMS-like Si tracke...

  11. Demonstration of containment purge and vent valve operability for the Hope Creek Generating Station, Unit 1 (Docket No. 50-354)

    Kido, C.

    1985-05-01

    The containment purge and vent valve qualification program for the Hope Creek Generating Station has been reviewed by the NRC Licensing Support Section. The review indicates that the licensee has demonstrated the dependability of containment isolation against the buildup of containment pressure due to a LOCA/DBA, with the restrictions that during operating conditions 1, 2, and 3 all purge and vent valves will be sealed closed and under administrative control, and that during power ascent and descent the 26 in. inboard valve (1-GS-HV-4952) will be used in series with the 2 in. bypass valve (1-GS-HV-4951) to control the release of containment pressure

  12. Comparison of Adaline and Multiple Linear Regression Methods for Rainfall Forecasting

    Sutawinaya, IP; Astawa, INGA; Hariyanti, NKD

    2018-01-01

    Heavy rainfall can cause disasters, so forecasts of rainfall intensity are needed. The main factor causing flooding is high rainfall intensity, which pushes a river beyond its capacity and floods the surrounding area. Rainfall is a dynamic factor, which makes it an interesting subject of study. To support rainfall forecasting, methods ranging from Artificial Intelligence (AI) to statistics can be used. In this research, we used Adaline as the AI method and multiple linear regression as the statistical method. The more accurate forecast indicates which method is better suited to forecasting rainfall; through these methods, we determine which is the best for rainfall forecasting here.
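
    Both approaches fit the same linear model; they differ in how the coefficients are obtained: Adaline by iterative gradient descent on the mean squared error, multiple linear regression in closed form. A minimal sketch on synthetic data (not the paper's rainfall records; predictors and coefficients are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic predictors (stand-ins for e.g. humidity, temperature) and rainfall
X = rng.normal(size=(200, 2))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 5.0 + rng.normal(scale=0.1, size=200)
Xb = np.column_stack([X, np.ones(len(X))])          # add a bias column

# Multiple linear regression: closed-form least squares
w_mlr, *_ = np.linalg.lstsq(Xb, y, rcond=None)

# Adaline: the same linear unit, trained by batch gradient descent on MSE
w_ada = np.zeros(3)
for _ in range(2000):
    grad = Xb.T @ (Xb @ w_ada - y) / len(y)
    w_ada -= 0.1 * grad
```

    With enough iterations the two weight vectors coincide; practical differences in forecast accuracy come from preprocessing, feature choice and training regime rather than from the linear model itself.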

  13. A method for untriggered time-dependent searches for multiple flares from neutrino point sources

    Gora, D.; Bernardini, E.; Cruz Silva, A.H.

    2011-04-01

    A method for a time-dependent search for flaring astrophysical sources which can be potentially detected by large neutrino experiments is presented. The method uses a time-clustering algorithm combined with an unbinned likelihood procedure. By including in the likelihood function a signal term which describes the contribution of many small clusters of signal-like events, this method provides an effective way for looking for weak neutrino flares over different time-scales. The method is sensitive to an overall excess of events distributed over several flares which are not individually detectable. For standard cases (one flare) the discovery potential of the method is worse than a standard time-dependent point source analysis with unknown duration of the flare by a factor depending on the signal-to-background level. However, for flares sufficiently shorter than the total observation period, the method is more sensitive than a time-integrated analysis. (orig.)
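
    The time-clustering idea can be illustrated with a deliberately simplified scan that tests every event-bounded window against a Poisson background expectation (the actual method uses an unbinned likelihood and must correct for trial factors; the rates and injected flare below are invented):

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(2)
T, bg_rate = 1000.0, 0.1        # observation period (days) and background rate
times = np.sort(np.concatenate([
    rng.uniform(0.0, T, rng.poisson(bg_rate * T)),  # background events
    rng.normal(500.0, 1.0, 15),                     # injected flare
]))

# Scan all windows bounded by a pair of events; keep the most significant
best_p, best_window = 1.0, None
for i in range(len(times)):
    for j in range(i + 1, len(times)):
        n_exp = bg_rate * (times[j] - times[i])
        p = poisson.sf(j - i, n_exp)    # P(>= j-i+1 events | background)
        if p < best_p:
            best_p, best_window = p, (times[i], times[j])
```

    The most significant window recovers the injected flare; a real analysis must additionally penalize the many windows tried before claiming a discovery.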

  14. Maximum Likelihood and Restricted Likelihood Solutions in Multiple-Method Studies.

    Rukhin, Andrew L

    2011-01-01

    A formulation of the problem of combining data from several sources is discussed in terms of random effects models. The unknown measurement precision is assumed not to be the same for all methods. We investigate maximum likelihood solutions in this model. By representing the likelihood equations as simultaneous polynomial equations, the exact form of the Groebner basis for their stationary points is derived when there are two methods. A parametrization of these solutions which allows their comparison is suggested. A numerical method for solving the likelihood equations is outlined, and an alternative to the maximum likelihood method, the restricted maximum likelihood, is studied. When the method variances are considered known, an upper bound on the between-method variance is obtained. The relationship between likelihood equations and moment-type equations is also discussed.

  15. A method for untriggered time-dependent searches for multiple flares from neutrino point sources

    Gora, D. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Institute of Nuclear Physics PAN, Cracow (Poland); Bernardini, E.; Cruz Silva, A.H. [Institute of Nuclear Physics PAN, Cracow (Poland)

    2011-04-15

    A method for a time-dependent search for flaring astrophysical sources which can be potentially detected by large neutrino experiments is presented. The method uses a time-clustering algorithm combined with an unbinned likelihood procedure. By including in the likelihood function a signal term which describes the contribution of many small clusters of signal-like events, this method provides an effective way for looking for weak neutrino flares over different time-scales. The method is sensitive to an overall excess of events distributed over several flares which are not individually detectable. For standard cases (one flare) the discovery potential of the method is worse than a standard time-dependent point source analysis with unknown duration of the flare by a factor depending on the signal-to-background level. However, for flares sufficiently shorter than the total observation period, the method is more sensitive than a time-integrated analysis. (orig.)

  16. Combining qualitative and quantitative operational research methods to inform quality improvement in pathways that span multiple settings.

    Crowe, Sonya; Brown, Katherine; Tregay, Jenifer; Wray, Jo; Knowles, Rachel; Ridout, Deborah A; Bull, Catherine; Utley, Martin

    2017-08-01

    Improving integration and continuity of care across sectors within resource constraints is a priority in many health systems. Qualitative operational research methods of problem structuring have been used to address quality improvement in services involving multiple sectors but not in combination with quantitative operational research methods that enable targeting of interventions according to patient risk. We aimed to combine these methods to augment and inform an improvement initiative concerning infants with congenital heart disease (CHD) whose complex care pathway spans multiple sectors. Soft systems methodology was used to consider systematically changes to services from the perspectives of community, primary, secondary and tertiary care professionals and a patient group, incorporating relevant evidence. Classification and regression tree (CART) analysis of national audit datasets was conducted along with data visualisation designed to inform service improvement within the context of limited resources. A 'Rich Picture' was developed capturing the main features of services for infants with CHD pertinent to service improvement. This was used, along with a graphical summary of the CART analysis, to guide discussions about targeting interventions at specific patient risk groups. Agreement was reached across representatives of relevant health professions and patients on a coherent set of targeted recommendations for quality improvement. These fed into national decisions about service provision and commissioning. When tackling complex problems in service provision across multiple settings, it is important to acknowledge and work with multiple perspectives systematically and to consider targeting service improvements in response to confined resources. Our research demonstrates that applying a combination of qualitative and quantitative operational research methods is one approach to doing so that warrants further consideration. 
Published by the BMJ Publishing Group

  17. The shooting method and multiple solutions of two/multi-point BVPs of second-order ODE

    Man Kam Kwong

    2006-06-01

    Full Text Available Within the last decade, there has been growing interest in the study of multiple solutions of two- and multi-point boundary value problems of nonlinear ordinary differential equations as fixed points of a cone mapping. Undeniably many good results have emerged. The purpose of this paper is to point out that, in the special case of second-order equations, the shooting method can be an effective tool, sometimes yielding better results than those obtainable via fixed point techniques.
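
    For a second-order two-point BVP, the shooting method reduces the problem to a root search in the unknown initial slope. A minimal sketch on a problem with a known solution (y'' = -y, y(0) = 0, y(π/2) = 1, whose exact slope is y'(0) = 1; the solver and tolerances are illustrative choices):

```python
import numpy as np
from scipy.integrate import solve_ivp

def shoot(s):
    """Integrate the IVP with guessed slope y'(0) = s; return the mismatch
    between y(pi/2) and the required right-hand boundary value 1."""
    sol = solve_ivp(lambda x, u: [u[1], -u[0]], (0.0, np.pi / 2), [0.0, s],
                    rtol=1e-10, atol=1e-12)
    return sol.y[0, -1] - 1.0

# Bisection on the shooting parameter s (the mismatch is monotone in s here)
lo, hi = 0.0, 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if shoot(lo) * shoot(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
slope = 0.5 * (lo + hi)
```

    Multiple solutions of a nonlinear BVP appear as multiple roots of the mismatch function, which is how shooting can locate solutions that a single cone-mapping fixed-point argument may miss.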

  18. A Study of Japanese Consumption Tax System : Mainly on Multiple Tax Rates and Input Tax Credit Methods

    栗原, 克文

    2007-01-01

    One of the most important discussions on Japanese tax system reform includes how consumption tax (Value-added tax) system ought to be. Facing issues like depopulation, aging society and large budget deficit, consumption tax can be an effective source of revenue to secure social security. This article mainly focuses on multiple tax rates and input tax credit methods of Japanese consumption tax system. Because of regressive nature of consumption tax, tax rate reduction, exemption on foodstuffs ...

  19. Multiple-Parameter Estimation Method Based on Spatio-Temporal 2-D Processing for Bistatic MIMO Radar

    Shouguo Yang

    2015-12-01

    Full Text Available A novel spatio-temporal 2-dimensional (2-D) processing method that can jointly estimate the transmitting-receiving azimuth and Doppler frequency for bistatic multiple-input multiple-output (MIMO) radar in the presence of spatial colored noise and an unknown number of targets is proposed. In the temporal domain, the cross-correlation of the matched filters’ outputs for different time-delay sampling is used to eliminate the spatial colored noise. In the spatial domain, the proposed method uses a diagonal loading method and subspace theory to estimate the direction of departure (DOD) and direction of arrival (DOA); the Doppler frequency can then be accurately estimated through the estimation of the DOD and DOA. By skipping estimation of the number of targets and the eigenvalue decomposition (EVD) of the data covariance matrix, and requiring only a one-dimensional search, the proposed method achieves low computational complexity. Furthermore, the proposed method is suitable for bistatic MIMO radar with an arbitrary transmitted and received geometrical configuration. The correctness and efficiency of the proposed method are verified by computer simulation results.

  20. Ensemble approach combining multiple methods improves human transcription start site prediction.

    Dineen, David G

    2010-01-01

    The computational prediction of transcription start sites is an important unsolved problem. Some recent progress has been made, but many promoters, particularly those not associated with CpG islands, are still difficult to locate using current methods. These methods use different features and training sets, along with a variety of machine learning techniques and result in different prediction sets.

  1. System and method for design and optimization of grid connected photovoltaic power plant with multiple photovoltaic module technologies

    Thomas, Bex George; Elasser, Ahmed; Bollapragada, Srinivas; Galbraith, Anthony William; Agamy, Mohammed; Garifullin, Maxim Valeryevich

    2016-03-29

    A system and method of using one or more DC-DC/DC-AC converters and/or alternative devices allows strings of multiple module technologies to coexist within the same PV power plant. A computing (optimization) framework estimates the percentage allocation of PV power plant capacity to selected PV module technologies. The framework and its supporting components consider irradiation, temperature, spectral profiles, cost and other practical constraints to achieve the lowest levelized cost of electricity, maximum output and minimum system cost. The system and method can function using any device enabling distributed maximum power point tracking at the module, string or combiner level.

  2. Measuring method for effective neutron multiplication factor upon containing irradiated fuel assembly

    Ueda, Makoto; Mitsuhashi, Ishi; Sasaki, Tomoharu.

    1993-01-01

    A portion of the irradiated fuel assemblies at a place where the reactivity effect is high, that is, where neutron importance is high, is replaced with standard fuel assemblies of known composition, and neutron fluxes are measured at each place. The effective composition at the periphery of the standard fuel assemblies is determined using a calibration curve established separately from the composition and neutron flux values of the standard assemblies. Using the calibration curve determined separately from this composition and the known composition of the standard fuel assemblies, the effective neutron multiplication factor for the portion containing the irradiated fuel assemblies is obtained. Subcriticality is thus ensured, and criticality safety when containing the fuel assemblies can be secured quantitatively. (N.H.)

  3. EasyClone: method for iterative chromosomal integration of multiple genes in Saccharomyces cerevisiae

    Jensen, Niels Bjerg; Strucko, Tomas; Kildegaard, Kanchana Rueksomtawin

    2014-01-01

    of multiple genes with an option of recycling selection markers. The vectors combine the advantage of efficient uracil excision reaction-based cloning and the Cre-LoxP-mediated marker recycling system. The episomal and integrative vector sets were tested by inserting genes encoding cyan, yellow, and red fluorescent proteins into separate vectors and analyzing for co-expression of proteins by flow cytometry. Cells expressing genes encoding for the three fluorescent proteins from three integrations exhibited a much higher level of simultaneous expression than cells producing fluorescent proteins encoded on episomal plasmids, where correspondingly 95% and 6% of the cells were within a fluorescence interval of Log10 mean ± 15% for all three colors. We demonstrate that selective markers can be simultaneously removed using Cre-mediated recombination and all the integrated heterologous genes remain

  4. A BAC-bacterial recombination method to generate physically linked multiple gene reporter DNA constructs

    Gong Shiaochin

    2009-03-01

    Full Text Available Abstract Background Reporter gene mice are valuable animal models for biological research providing a gene expression readout that can contribute to cellular characterization within the context of a developmental process. With the advancement of bacterial recombination techniques to engineer reporter gene constructs from BAC genomic clones and the generation of optically distinguishable fluorescent protein reporter genes, there is an unprecedented capability to engineer more informative transgenic reporter mouse models relative to what has been traditionally available. Results We demonstrate here our first effort on the development of a three stage bacterial recombination strategy to physically link multiple genes together with their respective fluorescent protein (FP) reporters in one DNA fragment. This strategy uses bacterial recombination techniques to: (1) subclone genes of interest into BAC linking vectors, (2) insert desired reporter genes into respective genes and (3) link different gene-reporters together. As proof of concept, we have generated a single DNA fragment containing the genes Trap, Dmp1, and Ibsp driving the expression of ECFP, mCherry, and Topaz FP reporter genes, respectively. Using this DNA construct, we have successfully generated transgenic reporter mice that retain two to three gene readouts. Conclusion The three stage methodology to link multiple genes with their respective fluorescent protein reporter works with reasonable efficiency. Moreover, gene linkage allows for their common chromosomal integration into a single locus. However, the testing of this multi-reporter DNA construct by transgenesis does suggest that the linkage of two different genes together, despite their large size, can still create a positional effect. We believe that gene choice, genomic DNA fragment size and the presence of endogenous insulator elements are critical variables.

  5. A BAC-bacterial recombination method to generate physically linked multiple gene reporter DNA constructs.

    Maye, Peter; Stover, Mary Louise; Liu, Yaling; Rowe, David W; Gong, Shiaochin; Lichtler, Alexander C

    2009-03-13

    Reporter gene mice are valuable animal models for biological research providing a gene expression readout that can contribute to cellular characterization within the context of a developmental process. With the advancement of bacterial recombination techniques to engineer reporter gene constructs from BAC genomic clones and the generation of optically distinguishable fluorescent protein reporter genes, there is an unprecedented capability to engineer more informative transgenic reporter mouse models relative to what has been traditionally available. We demonstrate here our first effort on the development of a three stage bacterial recombination strategy to physically link multiple genes together with their respective fluorescent protein (FP) reporters in one DNA fragment. This strategy uses bacterial recombination techniques to: (1) subclone genes of interest into BAC linking vectors, (2) insert desired reporter genes into respective genes and (3) link different gene-reporters together. As proof of concept, we have generated a single DNA fragment containing the genes Trap, Dmp1, and Ibsp driving the expression of ECFP, mCherry, and Topaz FP reporter genes, respectively. Using this DNA construct, we have successfully generated transgenic reporter mice that retain two to three gene readouts. The three stage methodology to link multiple genes with their respective fluorescent protein reporter works with reasonable efficiency. Moreover, gene linkage allows for their common chromosomal integration into a single locus. However, the testing of this multi-reporter DNA construct by transgenesis does suggest that the linkage of two different genes together, despite their large size, can still create a positional effect. We believe that gene choice, genomic DNA fragment size and the presence of endogenous insulator elements are critical variables.

  6. Methods for synthesizing findings on moderation effects across multiple randomized trials.

    Brown, C Hendricks; Sloboda, Zili; Faggiano, Fabrizio; Teasdale, Brent; Keller, Ferdinand; Burkhart, Gregor; Vigna-Taglianti, Federica; Howe, George; Masyn, Katherine; Wang, Wei; Muthén, Bengt; Stephens, Peggy; Grey, Scott; Perrino, Tatiana

    2013-04-01

    This paper presents new methods for synthesizing results from subgroup and moderation analyses across different randomized trials. We demonstrate that such a synthesis generally results in additional power to detect significant moderation findings above what one would find in a single trial. Three general methods for conducting synthesis analyses are discussed, with two methods, integrative data analysis and parallel analyses, sharing a large advantage over traditional methods available in meta-analysis. We present a broad class of analytic models to examine moderation effects across trials that can be used to assess their overall effect and explain sources of heterogeneity, and present ways to disentangle differences across trials due to individual differences, contextual level differences, intervention, and trial design.
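
    The power gain from synthesis can be seen with the simplest pooling scheme: inverse-variance weighting of trial-specific moderation (interaction) estimates, in the spirit of the parallel-analysis approach (the coefficients below are invented, and a fixed-effect model is assumed):

```python
import numpy as np

# Hypothetical moderation (treatment-by-subgroup interaction) estimates
# and standard errors from five randomized trials
betas = np.array([0.11, 0.08, 0.15, 0.05, 0.09])
ses = np.array([0.06, 0.07, 0.08, 0.06, 0.05])

# Fixed-effect inverse-variance synthesis
w = 1.0 / ses**2
beta_pooled = np.sum(w * betas) / np.sum(w)
se_pooled = np.sqrt(1.0 / np.sum(w))
z = beta_pooled / se_pooled
```

    No single trial reaches |z| > 1.96 here, but the pooled estimate does, which is the extra power such a synthesis provides. Heterogeneity across trials would call for a random-effects term instead of this fixed-effect sketch.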

  7. Methods for Synthesizing Findings on Moderation Effects Across Multiple Randomized Trials

    Brown, C Hendricks; Sloboda, Zili; Faggiano, Fabrizio; Teasdale, Brent; Keller, Ferdinand; Burkhart, Gregor; Vigna-Taglianti, Federica; Howe, George; Masyn, Katherine; Wang, Wei; Muthén, Bengt; Stephens, Peggy; Grey, Scott; Perrino, Tatiana

    2011-01-01

    This paper presents new methods for synthesizing results from subgroup and moderation analyses across different randomized trials. We demonstrate that such a synthesis generally results in additional power to detect significant moderation findings above what one would find in a single trial. Three general methods for conducting synthesis analyses are discussed, with two methods, integrative data analysis, and parallel analyses, sharing a large advantage over traditional methods available in meta-analysis. We present a broad class of analytic models to examine moderation effects across trials that can be used to assess their overall effect and explain sources of heterogeneity, and present ways to disentangle differences across trials due to individual differences, contextual level differences, intervention, and trial design. PMID:21360061

  8. Computer-generated holograms by multiple wavefront recording plane method with occlusion culling.

    Symeonidou, Athanasia; Blinder, David; Munteanu, Adrian; Schelkens, Peter

    2015-08-24

    We propose a novel fast method for full-parallax computer-generated holograms with occlusion processing, suitable for volumetric data such as point clouds. A novel light wave propagation strategy relying on the sequential use of the wavefront recording plane method is proposed, which employs look-up tables in order to reduce the computational complexity of the field calculations. A novel technique for occlusion culling with little additional computational cost is also introduced. Additionally, the method applies a Gaussian distribution to the individual points in order to improve visual quality. Performance tests show that for a full-parallax high-definition CGH a speedup factor of more than 2,500 compared to the ray-tracing method can be achieved without hardware acceleration.

  9. A hybrid multiple attribute decision making method for solving problems of industrial environment

    Dinesh Singh

    2011-01-01

    Full Text Available The selection of an appropriate alternative in the industrial environment is an important but, at the same time, complex and difficult problem because of the availability of a wide range of alternatives and the similarity among them. Therefore, there is a need for simple, systematic, and logical methods or mathematical tools to guide decision makers in considering a number of selection attributes and their interrelations. In this paper, a hybrid decision making method of graph theory and matrix approach (GTMA) and analytical hierarchy process (AHP) is proposed. Three examples are presented to illustrate the potential of the proposed GTMA-AHP method, and the results are compared with the results obtained using other decision making methods.
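
    The AHP half of such a hybrid can be sketched directly: attribute weights are the normalized principal eigenvector of a pairwise comparison matrix, checked with a consistency ratio (the comparison values below are invented):

```python
import numpy as np

# Hypothetical Saaty-scale pairwise comparisons for three selection attributes
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# Priority weights: principal eigenvector, normalized to sum to one
vals, vecs = np.linalg.eig(A)
k = int(np.argmax(vals.real))
w = np.abs(vecs[:, k].real)
w /= w.sum()

# Consistency ratio (random index RI = 0.58 for a 3x3 matrix)
ci = (vals.real[k] - 3.0) / (3.0 - 1.0)
cr = ci / 0.58
```

    A consistency ratio below 0.1 is conventionally taken as acceptable; the GTMA part would then use these attribute weights in its matrix representation of the selection problem.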

  10. Methods of natural gas liquefaction and natural gas liquefaction plants utilizing multiple and varying gas streams

    Wilding, Bruce M; Turner, Terry D

    2014-12-02

    A method of natural gas liquefaction may include cooling a gaseous NG process stream to form a liquid NG process stream. The method may further include directing the first tail gas stream out of a plant at a first pressure and directing a second tail gas stream out of the plant at a second pressure. An additional method of natural gas liquefaction may include separating CO.sub.2 from a liquid NG process stream and processing the CO.sub.2 to provide a CO.sub.2 product stream. Another method of natural gas liquefaction may include combining a marginal gaseous NG process stream with a secondary substantially pure NG stream to provide an improved gaseous NG process stream. Additionally, a NG liquefaction plant may include a first tail gas outlet, and at least a second tail gas outlet, the at least a second tail gas outlet separate from the first tail gas outlet.

  11. Monte Carlo Library Least Square (MCLLS) Method for Multiple Radioactive Particle Tracking in BPR

    Wang, Zhijian; Lee, Kyoung; Gardner, Robin

    2010-03-01

In this work, a new method for radioactive particle tracking is proposed. Accurate detector response functions (DRFs) were developed with MCNP5 to generate a library for NaI detectors, with a significant speed-up factor of 200. This makes the MCLLS method practical for locating and tracking the radioactive particle in a modular Pebble Bed Reactor (PBR) by searching for minimum chi-square values. The method performed well under laboratory conditions with an array of only six 2" X 2" NaI detectors. The method is presented in both forward and inverse formulations. A single radioactive particle tracking system with three collimated 2" X 2" NaI detectors is used for benchmark purposes.
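The inverse search can be illustrated with a toy version: given measured detector counts, scan candidate positions and keep the one minimizing chi-square between measured and predicted responses. The inverse-square response model and detector layout below are placeholders for the MCNP5-generated detector response functions used in the actual work:

```python
# Hypothetical detector positions and a toy response model (inverse-square
# attenuation); the real method uses MCNP5-generated response functions.
DETECTORS = [(0.0, 0.0), (10.0, 0.0), (5.0, 8.0)]

def predicted_counts(x, y, strength=1e4):
    return [strength / ((x - dx) ** 2 + (y - dy) ** 2 + 1.0)
            for dx, dy in DETECTORS]

def chi_square(measured, predicted):
    return sum((m - p) ** 2 / max(p, 1e-9)
               for m, p in zip(measured, predicted))

def locate(measured, step=0.5):
    """Search a coarse grid for the position minimizing chi-square."""
    best = None
    for i in range(21):
        for j in range(21):
            x, y = i * step, j * step
            c = chi_square(measured, predicted_counts(x, y))
            if best is None or c < best[0]:
                best = (c, x, y)
    return best[1], best[2]

measured = predicted_counts(3.0, 4.0)   # particle truly at (3, 4)
x_hat, y_hat = locate(measured)
```

A real implementation would refine the grid search (or use a nonlinear minimizer) and weight the chi-square by counting statistics.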

  12. Method to measure the position offset of multiple light spots in a distributed aperture laser angle measurement system.

    Jing, Xiaoli; Cheng, Haobo; Xu, Chunyun; Feng, Yunpeng

    2017-02-20

    In this paper, an accurate measurement method of multiple spots' position offsets on a four-quadrant detector is proposed for a distributed aperture laser angle measurement system (DALAMS). The theoretical model is put forward, as well as the corresponding calculation method. This method includes two steps. First, as the initial estimation, integral approximation is applied to fit the distributed spots' offset function; second, the Boltzmann function is employed to compensate for the estimation error to improve detection accuracy. The simulation results attest to the correctness and effectiveness of the proposed method, and tolerance synthesis analysis of DALAMS is conducted to determine the maximum uncertainties of manufacturing and installation. The maximum angle error is less than 0.08° in the prototype distributed measurement system, which shows the stability and robustness for prospective applications.
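A rough sketch of the two ingredients, assuming the conventional four-quadrant difference-over-sum estimate for the raw offset and a Boltzmann (logistic) curve as the error-compensation shape; the quadrant layout and all coefficients are illustrative, not the fitted values from the paper:

```python
import math

def raw_offset(a, b, c, d):
    """Normalized four-quadrant signals -> raw offset estimates.
    Assumed quadrant layout: a=upper-right, b=upper-left,
    c=lower-left, d=lower-right."""
    s = a + b + c + d
    x = ((a + d) - (b + c)) / s
    y = ((a + b) - (c + d)) / s
    return x, y

def boltzmann(x, a1=-1.0, a2=1.0, x0=0.0, dx=0.35):
    """Boltzmann (logistic) curve used here as a calibration shape;
    the coefficients are placeholders, not the paper's fitted values."""
    return a2 + (a1 - a2) / (1.0 + math.exp((x - x0) / dx))

x_off, y_off = raw_offset(1.0, 1.2, 1.0, 0.8)  # spot slightly up-left
```

In the paper's two-step scheme, the integral approximation supplies the initial offset estimate and a fitted Boltzmann function compensates the residual error of that estimate.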

  13. A multiple hollow fibre liquid-phase microextraction method for the determination of halogenated solvent residues in olive oil.

    Manso, J; García-Barrera, T; Gómez-Ariza, J L; González, A G

    2014-02-01

The present paper describes a method based on the extraction of analytes by multiple hollow fibre liquid-phase microextraction and detection by ion-trap mass spectrometry and electron capture detectors after gas chromatographic separation. The limits of detection are in the range of 0.13-0.67 μg kg⁻¹, five orders of magnitude lower than those reached with the European Commission Official method of analysis, with three orders of magnitude of linear range (from the quantification limits to 400 μg kg⁻¹ for all the analytes) and recoveries in fortified olive oils in the range of 78-104%. The main advantages of the analytical method are the absence of sample carryover (due to the disposable nature of the membranes), high enrichment factors in the range of 79-488, high throughput and low cost. The repeatability of the analytical method ranged from 8 to 15% for all the analytes, showing a good performance.

  14. Investigating lithological and geophysical relationships with applications to geological uncertainty analysis using Multiple-Point Statistical methods

    Barfod, Adrian

    The PhD thesis presents a new method for analyzing the relationship between resistivity and lithology, as well as a method for quantifying the hydrostratigraphic modeling uncertainty related to Multiple-Point Statistical (MPS) methods. Three-dimensional (3D) geological models are im...... is to improve analysis and research of the resistivity-lithology relationship and ensemble geological/hydrostratigraphic modeling. The groundwater mapping campaign in Denmark, beginning in the 1990’s, has resulted in the collection of large amounts of borehole and geophysical data. The data has been compiled...... in two publicly available databases, the JUPITER and GERDA databases, which contain borehole and geophysical data, respectively. The large amounts of available data provided a unique opportunity for studying the resistivity-lithology relationship. The method for analyzing the resistivity...

  15. Purging of acute myeloid leukaemia cells from stem cell grafts by hyperthermia : enhancement of the therapeutic index by the tetrapeptide AcSDKP and the alkyl-lysophospholipid ET-18-OCH3

    Wierenga, PK; Setroikromo, R; Vellenga, E; Kampinga, HH

    2000-01-01

Hyperthermia has been shown to be a potential purging modality in autologous stem cell transplantation settings owing to its selective toxicity towards leukaemic cells. We describe two approaches to further increase the therapeutic index of the hyperthermic purging modality by using normal murine

  16. Flow Cytometry Method as a Diagnostic Tool for Pleural Fluid Involvement in a Patient with Multiple Myeloma

    MUZAFFER KEKLIK

    2012-01-01

Full Text Available Multiple myeloma is a malignant proliferation of plasma cells that mainly affects bone marrow. Pleural effusions secondary to pleural myelomatous involvement have rarely been reported in the literature. As it is rarely detected, we aimed to report a case in which pleural effusion of a multiple myeloma was confirmed as true myelomatous involvement by the flow cytometry method. A 52-year-old man presented to our clinic with chest and back pain lasting for 3 months. On chest radiography, pleural fluid was detected in the left hemithorax. Pleural fluid flow cytometry was performed. In the flow cytometry, CD56, CD38 and CD138 were found to be positive, while CD19 was negative. True myelomatous pleural effusions are very uncommon, with fewer than 100 cases reported worldwide. Flow cytometry is a potentially useful diagnostic tool for clinical practice. We present our case, as it has been rarely reported, although flow cytometry is a simple method for the detection of pleural fluid involvement in multiple myeloma.

  17. Flow Cytometry Method as a Diagnostic Tool for Pleural Fluid Involvement in a Patient with Multiple Myeloma

    Muzaffer Keklik

    2012-10-01

Full Text Available Multiple myeloma is a malignant proliferation of plasma cells that mainly affects bone marrow. Pleural effusions secondary to pleural myelomatous involvement have rarely been reported in the literature. As it is rarely detected, we aimed to report a case in which pleural effusion of a multiple myeloma was confirmed as true myelomatous involvement by the flow cytometry method. A 52-year-old man presented to our clinic with chest and back pain lasting for 3 months. On chest radiography, pleural fluid was detected in the left hemithorax. Pleural fluid flow cytometry was performed. In the flow cytometry, CD56, CD38 and CD138 were found to be positive, while CD19 was negative. True myelomatous pleural effusions are very uncommon, with fewer than 100 cases reported worldwide. Flow cytometry is a potentially useful diagnostic tool for clinical practice. We present our case, as it has been rarely reported, although flow cytometry is a simple method for the detection of pleural fluid involvement in multiple myeloma.

  18. Thermal analysis of LOFT waste gas processing system nitrogen supply for process line purge and blower seal

    Tatar, G.A.

    1979-01-01

The LOFT Waste Gas Processing System uses gaseous nitrogen (GN2) to purge the main process line and to supply pressure on the blower labyrinth seal. The purpose of this analysis was to determine the temperature of the GN2 at the blower seals and the main process line. Since these temperatures were below 32°F, the heat rate necessary to raise them was calculated. This report shows that the GN2 temperatures at the points mentioned above were below 10°F. A heat rate into the GN2 of 389 W, added at the point where the supply line enters the vault, would raise the GN2 temperature above 32°F.
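The quoted figures can be tied together with a simple energy balance, Q = ṁ·cp·ΔT; assuming a constant specific heat for nitrogen, the 389 W heat rate and the required rise from below 10°F to above 32°F imply a GN2 mass flow on the order of 0.03 kg/s (the flow rate is derived here, not taken from the report):

```python
# Back-of-envelope check of the quoted heat rate, assuming a constant
# specific heat for nitrogen; the mass flow is the derived quantity here,
# not a figure from the report.
CP_N2 = 1040.0                  # J/(kg*K), approximate cp of N2 near 0 deg C
dT_F = 32.0 - 10.0              # required temperature rise, deg F
dT_K = dT_F * 5.0 / 9.0         # convert a Fahrenheit interval to kelvin
Q = 389.0                       # W, heat rate from the analysis

mass_flow = Q / (CP_N2 * dT_K)  # implied GN2 mass flow, kg/s
```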

  19. Calorie estimation accuracy and menu labeling perceptions among individuals with and without binge eating and/or purging disorders.

    Roberto, Christina A; Haynos, Ann F; Schwartz, Marlene B; Brownell, Kelly D; White, Marney A

    2013-09-01

Menu labeling is a public health policy that requires chain restaurants in the USA to post kilocalorie information on their menus to help consumers make informed choices. However, there is concern that such a policy might promote disordered eating. This web-based study compared individuals with self-reported binge eating disorder (N = 52), bulimia nervosa (N = 25), and purging disorder (N = 17) and those without eating disorders (No ED) (N = 277) on restaurant calorie information knowledge and perceptions of menu labeling legislation. On average, people answered 1.46 ± 1.08 of 6 calorie information quiz questions correctly (25%), and 92% of the sample was in favor of menu labeling. The findings did not differ based on eating disorder, dieting, weight status, or race/ethnicity. The results indicated that people have difficulty estimating the calories in restaurant meals and that individuals with and without eating disorders are largely in favor of menu labeling laws.

  20. The Analysis of Loop Seal Purge Time for the KHNP Pressurizer Safety Valve Test Facility Using the GOTHIC Code

    Kim, Young Ae; Kim, Chang Hyun; Kweon, Gab Joo; Park, Jong Woon [Korea Hydro and Nuclear Power Co., Ltd., Daejeon (Korea, Republic of)

    2007-10-15

The pressurizer safety valves (PSV) in Pressurized Water Reactors are required to provide overpressure protection for the Reactor Coolant System (RCS) during overpressure transients. Korea Hydro and Nuclear Power Company (KHNP) plans to build a PSV test facility for the purpose of providing the PSV pop-up characteristics and the loop seal dynamics for the new safety analysis. When a pressurizer safety valve is mounted in a loop seal configuration, the valve must initially pass the loop seal water prior to popping open on steam. The loop seal upstream of the PSV prevents leakage of hydrogen gas or steam through the safety valve seat. This paper studies the loop seal clearing dynamics using the GOTHIC 7.2a code to verify the effect of loop seal purge time on reactor coolant system overpressure.

  1. A method of mounting multiple otoliths for beam-based microchemical analyses

    Donohoe, C.J.; Zimmerman, C.E.

    2010-01-01

Beam-based analytical methods are widely used to measure the concentrations of elements and isotopes in otoliths. These methods usually require that otoliths be individually mounted and prepared to properly expose the desired growth region to the analytical beam. Most analytical instruments, such as LA-ICPMS and ion and electron microprobes, have sample holders that will accept only one to six slides or mounts at a time. We describe a method of mounting otoliths that allows for easy transfer of many otoliths to a single mount after they have been prepared. Such an approach increases the number of otoliths that can be analyzed in a single session by reducing the need to open the sample chamber to exchange slides, a particularly time-consuming step on instruments that operate under vacuum. For ion and electron microprobes, the method also greatly reduces the number of slides that must be coated with an electrical conductor prior to analysis. In this method, a narrow strip of cover glass is first glued at one end to a standard microscope slide. The otolith is then mounted in thermoplastic resin on the opposite, free end of the strip. The otolith can then be ground and flipped, if needed, by reheating the mounting medium. After otolith preparation is complete, the cover glass is cut with a scribe to free the otolith, and up to 20 small otoliths can be arranged on a single petrographic slide. © 2010 The Author(s).

  2. Testing an Adapted Modified Delphi Method: Synthesizing Multiple Stakeholder Ratings of Health Care Service Effectiveness.

    Escaron, Anne L; Chang Weir, Rosy; Stanton, Petra; Vangala, Sitaram; Grogan, Tristan R; Clarke, Robin M

    2016-03-01

The Affordable Care Act incentivizes health systems for better meeting patient needs, but often guidance about patient preferences for particular health services is limited. All too often vulnerable patient populations are excluded from these decision-making settings. A community-based participatory approach harnesses the in-depth knowledge of those experiencing barriers to health care. We made three modifications to the RAND-UCLA appropriateness method, a modified Delphi approach: involving patients, adding an advisory council group to characterize existing knowledge in this little-studied area, and using effectiveness rather than "appropriateness" as the basis for rating. As a proof of concept, we tested this method by examining the broadly delivered but understudied nonmedical services that community health centers provide. This method created discrete, new knowledge about these services by defining 6 categories and 112 unique services and by prioritizing among these services based on effectiveness using a 9-point scale. Consistent with the appropriateness method, we found statistical convergence of ratings among the panelists. Challenges include the time commitment and adherence to a clear definition of effectiveness of services. This diverse stakeholder engagement method efficiently addresses gaps in knowledge about the effectiveness of health care services to inform population health management. © 2015 Society for Public Health Education.
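A simplified version of the RAND-style classification step can be sketched as follows; the exact disagreement definition varies between applications, so the tertile rule below is a common simplification rather than the study's precise criterion:

```python
from statistics import median

def classify(ratings, lo=3, hi=7):
    """Classify a panel of 9-point ratings with a simplified RAND-style
    rule: median 7-9 -> effective, 4-6 -> uncertain, 1-3 -> ineffective;
    'disagreement' when at least a third of panelists rate in each
    extreme tertile (a simplification of the published definition)."""
    n = len(ratings)
    low_third = sum(1 for r in ratings if r <= lo) >= n / 3
    high_third = sum(1 for r in ratings if r >= hi) >= n / 3
    if low_third and high_third:
        return "disagreement"
    m = median(ratings)
    if m >= 7:
        return "effective"
    if m >= 4:
        return "uncertain"
    return "ineffective"
```

Statistical convergence across Delphi rounds would then show up as fewer "disagreement" outcomes and tighter rating spreads.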

  3. Improved vertical streambed flux estimation using multiple diurnal temperature methods in series

    Irvine, Dylan J.; Briggs, Martin A.; Cartwright, Ian; Scruggs, Courtney; Lautz, Laura K.

    2017-01-01

    Analytical solutions that use diurnal temperature signals to estimate vertical fluxes between groundwater and surface water based on either amplitude ratios (Ar) or phase shifts (Δϕ) produce results that rarely agree. Analytical solutions that simultaneously utilize Ar and Δϕ within a single solution have more recently been derived, decreasing uncertainty in flux estimates in some applications. Benefits of combined (ArΔϕ) methods also include that thermal diffusivity and sensor spacing can be calculated. However, poor identification of either Ar or Δϕ from raw temperature signals can lead to erratic parameter estimates from ArΔϕ methods. An add-on program for VFLUX 2 is presented to address this issue. Using thermal diffusivity selected from an ArΔϕ method during a reliable time period, fluxes are recalculated using an Ar method. This approach maximizes the benefits of the Ar and ArΔϕ methods. Additionally, sensor spacing calculations can be used to identify periods with unreliable flux estimates, or to assess streambed scour. Using synthetic and field examples, the use of these solutions in series was particularly useful for gaining conditions where fluxes exceeded 1 m/d.
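The raw inputs to all three solution families (Ar, Δϕ and ArΔϕ) are the amplitude and phase of the diurnal component at two depths; a minimal sketch of extracting them with a single-frequency DFT (converting Ar or Δϕ into a flux requires the published analytical solutions and the sediment thermal properties, which are omitted here):

```python
import cmath
import math

def diurnal_component(series, dt_hours):
    """Complex Fourier coefficient of the 24 h component of a uniformly
    sampled temperature series (a direct DFT at one frequency)."""
    n = len(series)
    w = 2 * math.pi / 24.0  # rad per hour, diurnal frequency
    return sum(t * cmath.exp(-1j * w * k * dt_hours)
               for k, t in enumerate(series)) * 2.0 / n

def ar_and_phase_shift(shallow, deep, dt_hours):
    """Amplitude ratio Ar = A_deep / A_shallow and phase shift (hours)
    between a shallow and a deep streambed sensor."""
    cs, cd = (diurnal_component(s, dt_hours) for s in (shallow, deep))
    ar = abs(cd) / abs(cs)
    dphi = (cmath.phase(cs) - cmath.phase(cd)) / (2 * math.pi / 24.0)
    return ar, dphi % 24.0

# Synthetic test: deep signal is damped to 50% and lagged by 2 h.
w = 2 * math.pi / 24.0
shallow = [math.cos(w * k * 0.5) for k in range(48)]
deep = [0.5 * math.cos(w * (k * 0.5 - 2.0)) for k in range(48)]
ar, dphi = ar_and_phase_shift(shallow, deep, 0.5)
```

The paper's add-on then takes the thermal diffusivity estimated from the combined ArΔϕ solution during a reliable window and recomputes fluxes with the Ar solution alone.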

  4. Development and evaluation of nursing user interface screens using multiple methods.

    Hyun, Sookyung; Johnson, Stephen B; Stetson, Peter D; Bakken, Suzanne

    2009-12-01

    Building upon the foundation of the Structured Narrative Electronic Health Record (EHR) model, we applied theory-based (combined Technology Acceptance Model and Task-Technology Fit Model) and user-centered methods to explore nurses' perceptions of functional requirements for an electronic nursing documentation system, design user interface screens reflective of the nurses' perspectives, and assess nurses' perceptions of the usability of the prototype user interface screens. The methods resulted in user interface screens that were perceived to be easy to use, potentially useful, and well-matched to nursing documentation tasks associated with Nursing Admission Assessment, Blood Administration, and Nursing Discharge Summary. The methods applied in this research may serve as a guide for others wishing to implement user-centered processes to develop or extend EHR systems. In addition, some of the insights obtained in this study may be informative to the development of safe and efficient user interface screens for nursing document templates in EHRs.

  5. Ensemble approach combining multiple methods improves human transcription start site prediction

    Dineen, David G

    2010-11-30

Abstract Background The computational prediction of transcription start sites is an important unsolved problem. Some recent progress has been made, but many promoters, particularly those not associated with CpG islands, are still difficult to locate using current methods. These methods use different features and training sets, along with a variety of machine learning techniques, and result in different prediction sets. Results We demonstrate the heterogeneity of current prediction sets, and take advantage of this heterogeneity to construct a two-level classifier ('Profisi Ensemble') using predictions from 7 programs, along with 2 other data sources. Support vector machines using 'full' and 'reduced' data sets are combined in an either/or approach. We achieve a 14% increase in performance over the current state-of-the-art, as benchmarked by a third-party tool. Conclusions Supervised learning methods are a useful way to combine predictions from diverse sources.
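The combination logic can be illustrated generically; the predicates below stand in for outputs of hypothetical upstream classifiers, not the actual Profisi Ensemble internals:

```python
def majority(*prediction_sets):
    """First-level style combination: majority vote across N base
    programs, one boolean per candidate site."""
    n = len(prediction_sets)
    return [sum(votes) > n / 2 for votes in zip(*prediction_sets)]

def either_or(pred_full, pred_reduced):
    """Second-level either/or combination: call a site positive if
    either the 'full' or the 'reduced' model fires."""
    return [f or r for f, r in zip(pred_full, pred_reduced)]

# Two candidate sites, three hypothetical base predictors.
votes = majority([True, False], [True, True], [False, False])
combined = either_or([True, False], [False, False])
```

The either/or combination trades precision for recall, which is why it pairs naturally with a stricter first-level model.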

  6. Multiple objective optimization of hydro-thermal systems using Ritz's method

    Arnáu L. Bayón

    1999-01-01

Full Text Available This paper examines the applicability of the Ritz method to multi-objective optimization of hydro-thermal systems. The proposed algorithm aims to minimize an objective functional that incorporates the cost of energy losses, the conventional fuel cost, and the production of atmospheric emissions such as NOx and SO2 caused by the operation of fossil-fueled thermal generation. The formulation includes a general layout of hydro-plants that may form multi-chains of a reservoir network. Time-delays are included, and the electric network is considered by using the active power balance equation. The volume of water discharged by each hydro-plant over the optimization interval is a given constant. The generic minimization algorithm, which is not difficult to construct on the basis of the Ritz method, has certain advantages in comparison with conventional methods.

  7. Flexible rotor balancing by the influence coefficient method: Multiple critical speeds with rigid or flexible supports

    Tessarzik, J. M.

    1975-01-01

    Experimental tests were conducted to demonstrate the ability of the influence coefficient method to achieve precise balance of flexible rotors of virtually any design for operation through virtually any speed range. Various practical aspects of flexible-rotor balancing were investigated. Tests were made on a laboratory quality machine having a 122 cm (48 in.) long rotor weighing 50 kg (110 lb) and covering a speed range up to 18000 rpm. The balancing method was in every instance effective, practical, and economical and permitted safe rotor operation over the full speed range covering four rotor bending critical speeds. Improved correction weight removal methods for rotor balancing were investigated. Material removal from a rotating disk was demonstrated through application of a commercially available laser.

  8. Impact of consensus contours from multiple PET segmentation methods on the accuracy of functional volume delineation

    Schaefer, A. [Saarland University Medical Centre, Department of Nuclear Medicine, Homburg (Germany); Vermandel, M. [U1189 - ONCO-THAI - Image Assisted Laser Therapy for Oncology, University of Lille, Inserm, CHU Lille, Lille (France); CHU Lille, Nuclear Medicine Department, Lille (France); Baillet, C. [CHU Lille, Nuclear Medicine Department, Lille (France); Dewalle-Vignion, A.S. [U1189 - ONCO-THAI - Image Assisted Laser Therapy for Oncology, University of Lille, Inserm, CHU Lille, Lille (France); Modzelewski, R.; Vera, P.; Gardin, I. [Centre Henri-Becquerel and LITIS EA4108, Rouen (France); Massoptier, L.; Parcq, C.; Gibon, D. [AQUILAB, Research and Innovation Department, Loos Les Lille (France); Fechter, T.; Nestle, U. [University Medical Center Freiburg, Department for Radiation Oncology, Freiburg (Germany); German Cancer Consortium (DKTK) Freiburg and German Cancer Research Center (DKFZ), Heidelberg (Germany); Nemer, U. [University Medical Center Freiburg, Department of Nuclear Medicine, Freiburg (Germany)

    2016-05-15

The aim of this study was to evaluate the impact of consensus algorithms on segmentation results when applied to clinical PET images. In particular, whether the use of the majority vote or STAPLE algorithm could improve the accuracy and reproducibility of the segmentation provided by the combination of three semiautomatic segmentation algorithms was investigated. Three published segmentation methods (contrast-oriented, possibility theory and adaptive thresholding) and two consensus algorithms (majority vote and STAPLE) were implemented in a single software platform (Artiview®). Four clinical datasets including different locations (thorax, breast, abdomen) or pathologies (primary NSCLC tumours, metastasis, lymphoma) were used to evaluate accuracy and reproducibility of the consensus approach in comparison with pathology as the ground truth or CT as a ground truth surrogate. Variability in the performance of the individual segmentation algorithms for lesions of different tumour entities reflected the variability in PET images in terms of resolution, contrast and noise. Independent of location and pathology of the lesion, however, the consensus method resulted in improved accuracy in volume segmentation compared with the worst-performing individual method in the majority of cases and was close to the best-performing method in many cases. In addition, the implementation revealed high reproducibility in the segmentation results with small changes in the respective starting conditions. There were no significant differences in the results with the STAPLE algorithm and the majority vote algorithm. This study showed that combining different PET segmentation methods by the use of a consensus algorithm offers robustness against the variable performance of individual segmentation methods and this approach would therefore be useful in radiation oncology.
It might also be relevant for other scenarios such as the merging of expert recommendations in clinical routine and
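Of the two consensus algorithms, the majority vote is straightforward to sketch (STAPLE additionally estimates per-rater performance by expectation-maximization and is omitted here); a minimal voxel-wise version over hypothetical binary masks:

```python
import numpy as np

def majority_vote(masks):
    """Voxel-wise majority vote over N binary segmentation masks
    (an odd N avoids ties); returns the consensus mask."""
    stack = np.stack([np.asarray(m, dtype=bool) for m in masks])
    return stack.sum(axis=0) > stack.shape[0] / 2

# Three hypothetical 1-D 'segmentations' of the same lesion profile.
a = [0, 1, 1, 1, 0]
b = [0, 0, 1, 1, 1]
c = [0, 1, 1, 0, 0]
consensus = majority_vote([a, b, c])
```

The same call works unchanged on 3-D mask arrays, which is the clinical case here.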

  9. Comparison of approximate methods for multiple scattering in high-energy collisions. II

    Nolan, A.M.; Tobocman, W.; Werby, M.F.

    1976-01-01

The scattering in one dimension of a particle by a target of N like particles in a bound state has been studied. The exact result for the transmission probability has been compared with the predictions of the Glauber theory, the Watson optical potential model, and the adiabatic (or fixed-scatterer) approximation. Among the approximate methods, the optical potential model is second best. The Watson method is found to work better when the kinematics suggested by Foldy and Walecka are used rather than those suggested by Watson, that is to say, when the two-body operators use the nucleon-nucleon reduced mass.

  10. A method for detecting IBD regions simultaneously in multiple individuals--with applications to disease genetics

    Moltke, Ida; Albrechtsen, Anders; Hansen, Thomas V O

    2011-01-01

genome containing disease-causing variants. However, IBD regions can be difficult to detect, especially in the common case where no pedigree information is available. In particular, all existing non-pedigree-based methods can only infer IBD sharing between two individuals. Here, we present a new Markov Chain Monte Carlo method for detection of IBD regions, which does not rely on any pedigree information. It is based on a probabilistic model applicable to unphased SNP data. It can take inbreeding, allele frequencies, genotyping errors, and genomic distances into account. And most importantly, it can...

  11. Hydrostratigraphic modelling using multiple-point statistics and airborne transient electromagnetic methods

    Barfod, Adrian; Straubhaar, Julien; Høyer, Anne-Sophie

    2017-01-01

the incorporation of elaborate datasets and provides a framework for stochastic hydrostratigraphic modelling. This paper focuses on comparing three MPS methods: snesim, DS and iqsim. The MPS methods are tested and compared on a real-world hydrogeophysical survey from Kasted in Denmark, which covers an area of 45 km². The comparison of the stochastic hydrostratigraphic MPS models is carried out in an elaborate scheme of visual inspection, mathematical similarity and consistency with boreholes. Using the Kasted survey data, a practical example for modelling new survey areas is presented. A cognitive...

  12. Simple tool for the rapid, automated quantification of glacier advance/retreat observations using multiple methods

    Lea, J.

    2017-12-01

    The quantification of glacier change is a key variable within glacier monitoring, with the method used potentially being crucial to ensuring that data can be appropriately compared with environmental data. The topic and timescales of study (e.g. land/marine terminating environments; sub-annual/decadal/centennial/millennial timescales) often mean that different methods are more suitable for different problems. However, depending on the GIS/coding expertise of the user, some methods can potentially be time consuming to undertake, making large-scale studies problematic. In addition, examples exist where different users have nominally applied the same methods in different studies, though with minor methodological inconsistencies in their approach. In turn, this will have implications for data homogeneity where regional/global datasets may be constructed. Here, I present a simple toolbox scripted in a Matlab® environment that requires only glacier margin and glacier centreline data to quantify glacier length, glacier change between observations, rate of change, in addition to other metrics. The toolbox includes the option to apply the established centreline or curvilinear box methods, or a new method: the variable box method - designed for tidewater margins where box width is defined as the total width of the individual terminus observation. The toolbox is extremely flexible, and has the option to be applied as either Matlab® functions within user scripts, or via a graphical user interface (GUI) for those unfamiliar with a coding environment. In both instances, there is potential to apply the methods quickly to large datasets (100s-1000s of glaciers, with potentially similar numbers of observations each), thus ensuring large scale methodological consistency (and therefore data homogeneity) and allowing regional/global scale analyses to be achievable for those with limited GIS/coding experience. The toolbox has been evaluated against idealised scenarios demonstrating
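Whatever margin method produces the terminus positions, the downstream change metrics reduce to differences and annualized rates between dated observations; a minimal sketch with hypothetical centreline positions (not output from the actual toolbox):

```python
from datetime import date

def change_rates(observations):
    """Length change and annualized rate between successive observations.
    `observations` is a list of (date, terminus_position_m) tuples,
    positions measured along a centreline (increasing = advance)."""
    out = []
    for (d0, p0), (d1, p1) in zip(observations, observations[1:]):
        years = (d1 - d0).days / 365.25
        out.append({"change_m": p1 - p0,
                    "rate_m_per_yr": (p1 - p0) / years})
    return out

# Hypothetical terminus record: 500 m of retreat over a decade.
obs = [(date(2000, 1, 1), 12000.0),
       (date(2010, 1, 1), 11500.0)]
rates = change_rates(obs)
```

Applying one consistent routine like this across 100s-1000s of glaciers is what keeps the resulting regional dataset homogeneous.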

  13. A domain decomposition method for analyzing a coupling between multiple acoustical spaces (L).

    Chen, Yuehua; Jin, Guoyong; Liu, Zhigang

    2017-05-01

    This letter presents a domain decomposition method to predict the acoustic characteristics of an arbitrary enclosure made up of any number of sub-spaces. While the Lagrange multiplier technique usually has good performance for conditional extremum problems, the present method avoids involving extra coupling parameters and theoretically ensures the continuity conditions of both sound pressure and particle velocity at the coupling interface. Comparisons with the finite element results illustrate the accuracy and efficiency of the present predictions and the effect of coupling parameters between sub-spaces on the natural frequencies and mode shapes of the overall enclosure is revealed.

  14. Method to obtain g-functions for multiple precast quadratic pile heat exchangers

    Pagola, Maria Alberdi; Jensen, Rasmus Lund; Madsen, Søren

The average fluid temperature circulating through the ground loop is one of the main parameters required when choosing the most adequate heat pump for a ground source heat pump installation. Besides, the analysis of the fluid temperature over time will show the sustainability of the energy supply...... over the lifetime of the installation. The average fluid temperature is subject to the type of ground heat exchangers and the thermal interactions between them, which also depend on the soil thermal properties. For the case of precast piles, the thermal interactions become significant...... as they are usually placed within short distances (0.5 to 4 metres). Fast models that can account for these interactions are required to enable feasibility studies and support the design phase. Besides, since pile heat exchangers have a main structural role, it is also relevant to develop models that can determine...... the temperature changes that the foundation might be subjected to, to assess thermo-mechanical implications. 3D finite element model (FEM) computation of the thermal behaviour of multiple pile heat exchanger foundations is not cost-effective, neither for feasibility studies nor for most design applications. Therefore...

  15. Beating Heart Motion Accurate Prediction Method Based on Interactive Multiple Model: An Information Fusion Approach

    Xie, Weihong; Yu, Yang

    2017-01-01

Robot-assisted motion compensated beating heart surgery has the advantage over the conventional Coronary Artery Bypass Graft (CABG) in terms of reduced trauma to the surrounding structures that leads to shortened recovery time. The severe nonlinear and diverse nature of irregular heart rhythm causes enormous difficulty for the robot to realize the clinic requirements, especially under arrhythmias. In this paper, we propose a fusion prediction framework based on an Interactive Multiple Model (IMM) estimator, allowing each model to cover a distinguishing feature of the heart motion in underlying dynamics. We find that, at normal state, the nonlinearity of the heart motion with slow time-variant changing dominates the beating process. When an arrhythmia occurs, the irregularity mode, the fast uncertainties with random patterns, becomes the leading factor of the heart motion. We deal with the prediction problem in the case of arrhythmias by estimating the state with two behavior modes which can adaptively “switch” from one to the other. Also, we employed the signal quality index to adaptively determine the switch transition probability in the framework of IMM. We conduct comparative experiments to evaluate the proposed approach with four distinguished datasets. The test results indicate that the new proposed approach reduces prediction errors significantly. PMID:29124062
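The mode-switching core of an IMM estimator can be sketched in isolation: mix the model probabilities through the Markov transition matrix, then reweight by each model's measurement likelihood. The transition matrix and likelihood values below are illustrative, and the per-model filters the paper uses are omitted:

```python
def imm_update(mu, trans, likelihoods):
    """One model-probability update of an interacting multiple model
    estimator: mix the prior model probabilities `mu` through the
    Markov transition matrix `trans`, then reweight by each model's
    measurement likelihood. (The per-model Kalman filters are omitted.)"""
    n = len(mu)
    predicted = [sum(trans[i][j] * mu[i] for i in range(n))
                 for j in range(n)]
    unnorm = [likelihoods[j] * predicted[j] for j in range(n)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

# Two behaviour modes: 'normal' slow-varying motion vs 'arrhythmia'.
mu = [0.9, 0.1]
trans = [[0.95, 0.05], [0.10, 0.90]]   # hypothetical switch probabilities
mu = imm_update(mu, trans, likelihoods=[0.2, 2.0])
```

In the paper, the signal quality index additionally modulates the entries of `trans` online rather than leaving them fixed.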

  16. Beating Heart Motion Accurate Prediction Method Based on Interactive Multiple Model: An Information Fusion Approach

    Fan Liang

    2017-01-01

Full Text Available Robot-assisted motion compensated beating heart surgery has the advantage over the conventional Coronary Artery Bypass Graft (CABG) in terms of reduced trauma to the surrounding structures that leads to shortened recovery time. The severe nonlinear and diverse nature of irregular heart rhythm causes enormous difficulty for the robot to realize the clinic requirements, especially under arrhythmias. In this paper, we propose a fusion prediction framework based on an Interactive Multiple Model (IMM) estimator, allowing each model to cover a distinguishing feature of the heart motion in underlying dynamics. We find that, at normal state, the nonlinearity of the heart motion with slow time-variant changing dominates the beating process. When an arrhythmia occurs, the irregularity mode, the fast uncertainties with random patterns, becomes the leading factor of the heart motion. We deal with the prediction problem in the case of arrhythmias by estimating the state with two behavior modes which can adaptively “switch” from one to the other. Also, we employed the signal quality index to adaptively determine the switch transition probability in the framework of IMM. We conduct comparative experiments to evaluate the proposed approach with four distinguished datasets. The test results indicate that the new proposed approach reduces prediction errors significantly.

  17. Prognostic value of deep sequencing method for minimal residual disease detection in multiple myeloma

    Lahuerta, Juan J.; Pepin, François; González, Marcos; Barrio, Santiago; Ayala, Rosa; Puig, Noemí; Montalban, María A.; Paiva, Bruno; Weng, Li; Jiménez, Cristina; Sopena, María; Moorhead, Martin; Cedena, Teresa; Rapado, Immaculada; Mateos, María Victoria; Rosiñol, Laura; Oriol, Albert; Blanchard, María J.; Martínez, Rafael; Bladé, Joan; San Miguel, Jesús; Faham, Malek; García-Sanz, Ramón

    2014-01-01

    We assessed the prognostic value of minimal residual disease (MRD) detection in multiple myeloma (MM) patients using a sequencing-based platform in bone marrow samples from 133 MM patients in at least very good partial response (VGPR) after front-line therapy. Deep sequencing was carried out in patients in whom a high-frequency myeloma clone was identified, and MRD was assessed using the IGH-VDJH, IGH-DJH, and IGK assays. The results were contrasted with those of multiparametric flow cytometry (MFC) and allele-specific oligonucleotide polymerase chain reaction (ASO-PCR). The applicability of deep sequencing was 91%. Concordance between sequencing and MFC and ASO-PCR was 83% and 85%, respectively. Patients who were MRD– by sequencing had a significantly longer time to tumor progression (TTP) (median 80 vs 31 months; P < .0001) and overall survival (median not reached vs 81 months; P = .02), compared with patients who were MRD+. When stratifying patients by different levels of MRD, the respective TTP medians were: MRD ≥10⁻³, 27 months; MRD 10⁻³ to 10⁻⁵, 48 months; and MRD <10⁻⁵, 80 months (P = .003 to .0001). Ninety-two percent of VGPR patients were MRD+. In complete response patients, the TTP remained significantly longer for MRD– compared with MRD+ patients (131 vs 35 months; P = .0009). PMID:24646471
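The MRD strata used in the abstract (≥10⁻³, 10⁻³ to 10⁻⁵, <10⁻⁵) amount to classifying a clone-read fraction from the sequencing output. A minimal illustrative sketch — the function name, read counts, and string labels are hypothetical, not the study's pipeline:

```python
# Hypothetical MRD stratification mirroring the cutoffs in the abstract:
def mrd_stratum(clone_reads, total_reads):
    """Classify an MRD level from sequencing read counts."""
    if total_reads == 0:
        raise ValueError("no reads")
    level = clone_reads / total_reads
    if level >= 1e-3:
        return "MRD >= 1e-3"       # median TTP 27 months in the abstract
    if level >= 1e-5:
        return "MRD 1e-3 to 1e-5"  # median TTP 48 months
    return "MRD < 1e-5"            # median TTP 80 months

print(mrd_stratum(500, 100_000))   # 5e-3  -> "MRD >= 1e-3"
print(mrd_stratum(2, 1_000_000))   # 2e-6  -> "MRD < 1e-5"
```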

  18. Disordered and Multiple Destinations Path Planning Methods for Mobile Robot in Dynamic Environment

    Yong-feng Dong

    2016-01-01

    Full Text Available In the smart home environment, aiming at disordered, multiple-destination path planning, a sequencing rule is proposed to determine the order of destinations. Within each branching process, the initial feasible path set is generated according to the law of the attractive destination. A sinusoidal adaptive genetic algorithm is adopted; it can calculate the crossover probability and mutation probability adaptively, changing with the environment at any time. Following the cultural-genetic algorithm, it introduces the concepts of reducing turns by parallelogram and reducing length by triangle in the belief space, which can improve the quality of the population. And the fallback strategy can help to jump out of the “U” trap effectively. The algorithm analyses virtual collisions in a dynamic environment with obstacles; according to the different collision types, different strategies are executed to avoid obstacles. The experimental results show that the cultural-genetic algorithm can effectively overcome the premature convergence of the original algorithm and avoid getting trapped in local optima, and that it is more effective for mobile robot path planning. Even in a complex environment with static and dynamic obstacles, it can avoid collisions safely and plan an optimal path rapidly at the same time.
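The abstract does not give the sinusoidal adaptation formula, so the following is only one plausible form of "sinusoidally adaptive" crossover/mutation rates: individuals above the population-average fitness get progressively gentler rates along a cosine curve. The function, scaling, and base rates are all assumptions:

```python
import math

# A sketch of sinusoidally adaptive GA rates (the exact formula in the paper
# is not given in the abstract; this form is an assumption):
def adaptive_rates(f, f_avg, f_max, pc_base=0.8, pm_base=0.05):
    """Return (crossover, mutation) probabilities for an individual with
    fitness f, given the population average f_avg and best f_max."""
    if f_max == f_avg:           # degenerate population: keep base rates
        return pc_base, pm_base
    # Scale in [0, 1]: 0 for at-or-below-average individuals, 1 for the best
    x = max(0.0, min(1.0, (f - f_avg) / (f_max - f_avg)))
    # Better individuals get lower rates (protected from disruption);
    # at-or-below-average individuals keep the full base rates
    scale = 0.5 + 0.5 * math.cos(math.pi * x / 2)
    return pc_base * scale, pm_base * scale

pc_best, _ = adaptive_rates(10.0, 5.0, 10.0)   # best individual
pc_avg, _ = adaptive_rates(5.0, 5.0, 10.0)     # average individual
print(pc_best < pc_avg)  # the best individual is disrupted less
```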

  19. Method and Apparatus for Virtual Interactive Medical Imaging by Multiple Remotely-Located Users

    Ross, Muriel D. (Inventor); Twombly, Ian Alexander (Inventor); Senger, Steven O. (Inventor)

    2003-01-01

    A virtual interactive imaging system allows the displaying of high-resolution, three-dimensional images of medical data to a user and allows the user to manipulate the images, including rotation of images in any of various axes. The system includes a mesh component that generates a mesh to represent a surface of an anatomical object, based on a set of data of the object, such as from a CT or MRI scan or the like. The mesh is generated so as to avoid tears, or holes, in the mesh, providing very high-quality representations of topographical features of the object, particularly at high resolution. The system further includes a virtual surgical cutting tool that enables the user to simulate the removal of a piece or layer of a displayed object, such as a piece of skin or bone, view the interior of the object, manipulate the removed piece, and reattach the removed piece if desired. The system further includes a virtual collaborative clinic component, which allows the users of multiple, remotely-located computer systems to collaboratively and simultaneously view and manipulate the high-resolution, three-dimensional images of the object in real-time.

  20. A comparison on parameter-estimation methods in multiple regression analysis with existence of multicollinearity among independent variables

    Hukharnsusatrue, A.

    2005-11-01

    Full Text Available The objective of this research is to compare methods of estimating multiple regression coefficients when multicollinearity exists among the independent variables. The estimation methods are the Ordinary Least Squares method (OLS), the Restricted Least Squares method (RLS), the Restricted Ridge Regression method (RRR) and the Restricted Liu method (RL), both when the restrictions are true and when they are not. The study used Monte Carlo simulation, with the experiment repeated 1,000 times under each situation. The results are as follows. CASE 1: The restrictions are true. In all cases, the RRR and RL methods have a smaller Average Mean Square Error (AMSE) than the OLS and RLS methods, respectively. The RRR method provides the smallest AMSE when the level of correlation is high, and also provides the smallest AMSE for all levels of correlation and all sample sizes when the standard deviation is equal to 5. However, the RL method provides the smallest AMSE when the level of correlation is low or middle, except in the case of standard deviation equal to 3 and small sample sizes, where the RRR method provides the smallest AMSE. The AMSE varies, from most to least, with the level of correlation, the standard deviation and the number of independent variables, respectively, but inversely with the sample size. CASE 2: The restrictions are not true. In all cases the RRR method provides the smallest AMSE, except in the case of standard deviation equal to 1 and restriction error equal to 5%, where the OLS method provides the smallest AMSE when the level of correlation is low or middle and the sample size is large, but for small sample sizes the RL method provides the smallest AMSE. In addition, when the restriction error is increased, the OLS method provides the smallest AMSE for all levels of correlation and all sample sizes, except when the level of correlation is high and the sample size is small. Moreover, in the cases where the OLS method provides the smallest AMSE, the RLS method mostly has a smaller AMSE than
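The kind of Monte Carlo comparison the study performs can be sketched for the simplest pair of estimators, OLS versus plain ridge regression (the restricted variants RLS/RRR/RL are omitted; the penalty, sample size, and data-generating process below are assumptions, not the paper's design):

```python
import numpy as np

# Minimal Monte Carlo sketch of an AMSE comparison under multicollinearity:
rng = np.random.default_rng(0)
beta = np.array([1.0, 2.0, 3.0])   # true coefficients (intercept, x1, x2)
n, reps, k = 30, 200, 1.0          # k = ridge penalty (illustrative)
mse_ols = mse_ridge = 0.0
for _ in range(reps):
    x1 = rng.normal(size=n)
    x2 = x1 + 0.01 * rng.normal(size=n)   # nearly collinear regressors
    X = np.column_stack([np.ones(n), x1, x2])
    y = X @ beta + rng.normal(scale=3.0, size=n)
    b_ols = np.linalg.solve(X.T @ X, X.T @ y)
    b_rr = np.linalg.solve(X.T @ X + k * np.eye(3), X.T @ y)
    mse_ols += np.sum((b_ols - beta) ** 2) / reps
    mse_ridge += np.sum((b_rr - beta) ** 2) / reps
print(mse_ridge < mse_ols)  # ridge wins under strong multicollinearity
```

This reproduces the qualitative finding that shrinkage-type estimators beat OLS in AMSE when the correlation between regressors is high: ridge trades a small bias for a large reduction in variance.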

  1. Determination of (4-methylcyclohexyl)methanol isomers by heated purge-and-trap GC/MS in water samples from the 2014 Elk River, West Virginia, chemical spill

    Foreman, William T.; Rose, Donna L.; Chambers, Douglas B.; Crain, Angela S.; Murtagh, Lucinda K.; Thakellapalli, Haresh; Wang, Kung K.

    2015-01-01

    A heated purge-and-trap gas chromatography/mass spectrometry method was used to determine the cis- and trans-isomers of (4-methylcyclohexyl)methanol (4-MCHM), the reported major component of the Crude MCHM/Dowanol™ PPh glycol ether material spilled into the Elk River upriver from Charleston, West Virginia, on January 9, 2014. The trans-isomer eluted first, and method detection limits were 0.16 μg L⁻¹ trans-, 0.28 μg L⁻¹ cis-, and 0.4 μg L⁻¹ Total (total response of isomers) 4-MCHM. Estimated concentrations in the spill source material were 491 g L⁻¹ trans- and 277 g L⁻¹ cis-4-MCHM, the sum constituting 84% of the source material assuming its density equaled that of 4-MCHM. Elk River samples collected ⩽3.2 km downriver from the spill on January 15 had low (⩽2.9 μg L⁻¹ Total) 4-MCHM concentrations, whereas the isomers were not detected in samples collected 2 d earlier at the same sites. Similar 4-MCHM concentrations (range 4.2–5.5 μg L⁻¹ Total) occurred for samples of the Ohio River at Louisville, Kentucky, on January 17, ∼630 km downriver from the spill. Total 4-MCHM concentrations in Charleston, WV, office tap water decreased from 129 μg L⁻¹ on January 27 to 2.2 μg L⁻¹ on February 3, but remained detectable in tap samples through final collection on February 25, indicating some persistence of 4-MCHM within the water distribution system. One isomer of methyl 4-methylcyclohexanecarboxylate was detected in all Ohio River and tap water samples, and both isomers were detected in the source material spilled.
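The "84% of the source material" figure can be checked as back-of-envelope arithmetic. The density value used below (~0.91 g/mL) is an assumption for illustration, consistent with the abstract's stated result but not taken from it:

```python
# Back-of-envelope check of the "84%" mass fraction, assuming (as the
# abstract does) the density of the spilled material equals that of
# 4-MCHM, taken here as ~0.91 g/mL (an assumed value):
trans_g_per_L = 491.0
cis_g_per_L = 277.0
density_g_per_L = 910.0          # assumed 0.91 g/mL
fraction = (trans_g_per_L + cis_g_per_L) / density_g_per_L
print(round(fraction, 2))        # 0.84
```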

  2. Multiple Revolution Solutions for the Perturbed Lambert Problem using the Method of Particular Solutions and Picard Iteration

    Woollands, Robyn M.; Read, Julie L.; Probe, Austin B.; Junkins, John L.

    2017-12-01

    We present a new method for solving the multiple revolution perturbed Lambert problem using the method of particular solutions and modified Chebyshev-Picard iteration. The method of particular solutions differs from the well-known Newton-shooting method in that integration of the state transition matrix (36 additional differential equations) is not required, and instead it makes use of a reference trajectory and a set of n particular solutions. Any numerical integrator can be used for solving two-point boundary problems with the method of particular solutions, however we show that using modified Chebyshev-Picard iteration affords an avenue for increased efficiency that is not available with other step-by-step integrators. We take advantage of the path approximation nature of modified Chebyshev-Picard iteration (nodes iteratively converge to fixed points in space) and utilize a variable fidelity force model for propagating the reference trajectory. Remarkably, we demonstrate that computing the particular solutions with only low fidelity function evaluations greatly increases the efficiency of the algorithm while maintaining machine precision accuracy. Our study reveals that solving the perturbed Lambert's problem using the method of particular solutions with modified Chebyshev-Picard iteration is about an order of magnitude faster compared with the classical shooting method and a tenth-twelfth order Runge-Kutta integrator. It is well known that the solution to Lambert's problem over multiple revolutions is not unique and to ensure that all possible solutions are considered we make use of a reliable preexisting Keplerian Lambert solver to warm start our perturbed algorithm.
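The fixed-point idea behind Picard iteration can be illustrated on a scalar problem. The sketch below iterates x_{k+1}(t) = x(0) + ∫₀ᵗ f(s, x_k(s)) ds for x' = x, x(0) = 1, using trapezoidal quadrature on a uniform grid; the Chebyshev path approximation and perturbed orbital dynamics of the actual method are omitted:

```python
import numpy as np

# Plain Picard iteration on x' = x over [0, 1]; successive iterates
# converge to exp(t). This only illustrates the fixed-point idea behind
# modified Chebyshev-Picard iteration, not the Chebyshev machinery:
t = np.linspace(0.0, 1.0, 101)           # uniform grid, h = 0.01
x = np.ones_like(t)                      # initial guess x0(t) = 1
for _ in range(25):
    # Cumulative trapezoidal integral of the current iterate
    integral = np.concatenate(
        ([0.0], np.cumsum((x[1:] + x[:-1]) / 2) * 0.01))
    x = 1.0 + integral                   # x_{k+1} = x(0) + integral of f
err = np.max(np.abs(x - np.exp(t)))
print(err < 1e-3)                        # limited only by quadrature error
```

Each Picard sweep extends the region of convergence like adding terms of a Taylor series; the "modified Chebyshev" variant replaces the uniform grid and trapezoid rule with Chebyshev nodes and spectral integration, which is what makes the path-approximation and variable-fidelity tricks in the abstract possible.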

  4. A Newton method for solving continuous multiple material minimum compliance problems

    Stolpe, Mathias; Stegmann, Jan

    2007-01-01

    method, one or two linear saddle point systems are solved. These systems involve the Hessian of the objective function, which is both expensive to compute and completely dense. Therefore, the linear algebra is arranged such that the Hessian is not explicitly formed. The main concern is to solve...

  5. Method of applying single higher order polynomial basis function over multiple domains

    Lysko, AA

    2010-03-01

    Full Text Available A novel method has been devised whereby one set of higher-order polynomial-based basis functions can be applied over several wire segments, thus permitting the number of unknowns to be decoupled from the number of segments, and so from the geometrical...

  6. Adaptation of eddy current methods to the multiple problems of reactor testing

    Stumm, W.

    1975-01-01

    In reactor testing, the eddy current method is mainly used for the testing of surface regions inside the pressure vessel, on welds and joints, and for the testing of thin-walled pipes, e.g. the heat exchanger pipes.

  7. Semiorders, Intervals Orders and Pseudo Orders Preference Structures in Multiple Criteria Decision Aid Methods

    Fernández Barberis, Gabriela

    2013-06-01

    Full Text Available During the last decades, a substantial number of Multicriteria Decision Aid (MCDA) methods have been proposed to help the decision maker select the best compromise alternative. Meanwhile, the PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluations) family of outranking methods and their applications have attracted much attention from academics and practitioners. In this paper, an extension of these methods is presented, consisting of an analysis of their functioning under New Preference Structures (NPS). The preference structures taken into account are semiorders, interval orders and pseudo orders. These structures markedly improve the modelization, as they give more flexibility, amplitude and certainty to the formulation of preferences, since they tend to abandon the Complete Transitive Comparability Axiom of Preferences and substitute it with the Partial Comparability Axiom of Preferences. Notable points are the introduction of incomparability relations into the analysis and the consideration of preference structures that accept intransitivity of indifference. The NPS incorporation is carried out in the three phases of the PROMETHEE methodology: preference structure enrichment, dominance relation enrichment, and exploitation of the outranking relation for decision aid, in order finally to solve the alternative-ranking problem through PROMETHEE I or PROMETHEE II, according to whether a partial or a complete ranking, respectively, is required under the NPS.
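For context, the baseline PROMETHEE II machinery that the NPS extension modifies can be sketched with the simplest ("usual") preference function, where any positive difference counts as full preference. The scores, weights, and function below are illustrative; the semiorder/interval-order/pseudo-order structures discussed in the paper would replace the crisp comparison inside the loop with threshold-based preference:

```python
import numpy as np

# Minimal PROMETHEE II sketch with the usual (crisp) preference structure:
def promethee_ii(scores, weights):
    """scores[a, c]: performance of alternative a on criterion c
    (all criteria to be maximized). Returns net outranking flows."""
    n = scores.shape[0]
    phi = np.zeros(n)
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            # Usual criterion: full preference whenever a beats b
            pref = ((scores[a] > scores[b]) * weights).sum()
            phi[a] += pref / (n - 1)     # positive-flow contribution
            phi[b] -= pref / (n - 1)     # negative-flow contribution
    return phi

# Three hypothetical alternatives evaluated on two criteria:
scores = np.array([[8.0, 6.0], [5.0, 9.0], [4.0, 4.0]])
phi = promethee_ii(scores, weights=np.array([0.6, 0.4]))
print(np.argsort(-phi))   # complete ranking, best alternative first
```

PROMETHEE I would instead keep the positive and negative flows separate and declare two alternatives incomparable when the flows disagree, which is exactly where the incomparability relations of the NPS enter.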

  8. Benefits of Multiple Methods for Evaluating HIV Counseling and Testing Sites in Pennsylvania.

    Encandela, John A.; Gehl, Mary Beth; Silvestre, Anthony; Schelzel, George

    1999-01-01

    Examines results from two methods used to evaluate publicly funded human immunodeficiency virus (HIV) counseling and testing in Pennsylvania. Results of written mail surveys of all sites and interviews from a random sample of 30 sites were similar in terms of questions posed and complementary in other ways. (SLD)

  9. Genetic Risk by Experience Interaction for Childhood Internalizing Problems: Converging Evidence across Multiple Methods

    Vendlinski, Matthew K.; Lemery-Chalfant, Kathryn; Essex, Marilyn J.; Goldsmith, H. Hill

    2011-01-01

    Background: Identifying how genetic risk interacts with experience to predict psychopathology is an important step toward understanding the etiology of mental health problems. Few studies have examined genetic risk by experience interaction (GxE) in the development of childhood psychopathology. Methods: We used both co-twin and parent mental…

  10. Hydrologic evaluation of a Mediterranean watershed using the SWAT model with multiple PET estimation methods

    The Penman-Monteith method suggested by the Food Agricultural Organization in the Irrigation and drainage paper 56 (FAO-56 P-M) was used to evaluate surface runoff and sediment yield predictions by the Soil and Water Assessment Tool (SWAT) model at the outlet of an experimental watershed in Sicily. ...

  11. Evaluating Blended and Flipped Instruction in Numerical Methods at Multiple Engineering Schools

    Clark, Renee; Kaw, Autar; Lou, Yingyan; Scott, Andrew; Besterfield-Sacre, Mary

    2018-01-01

    With the literature calling for comparisons among technology-enhanced or active-learning pedagogies, a blended versus flipped instructional comparison was made for numerical methods coursework using three engineering schools with diverse student demographics. This study contributes to needed comparisons of enhanced instructional approaches in STEM…

  12. Multiple methods for assessing the dose to skin exposed to radioactive contamination

    Dubeau, J.; Heinmiller, B.E.; Corrigan, M.

    2017-01-01

    There is the possibility for a worker at a nuclear installation, such as a nuclear power reactor, a fuel production facility or a medical facility, to come into contact with radioactive contaminants. When such an event occurs, the first order of business is to care for the worker by promptly initiating a decontamination process. Usually, the radiation protection personnel perform a G-M pancake probe measurement of the contamination in situ and collect part or all of the radioactive contamination for further laboratory analysis. The health physicist on duty must then perform, using the available information, a skin dose assessment that will go into the worker's permanent dose record. The contamination situations are often complex and the dose assessment can be laborious. This article compares five dose assessment methods that involve analysis, new technologies and new software. The five methods are applied to 13 actual contamination incidents consisting of direct skin contact, contamination on clothing, and contamination on clothing in the presence of an air gap between the clothing and the skin. This work shows that, for the cases studied, the methods provided dose estimates that were usually within 12% (1σ) of each other for those cases where absolute activity information for every radionuclide was available. One method, which relies simply on a G-M pancake probe measurement, appeared to be particularly useful in situations where a contamination sample could not be recovered for laboratory analysis. (authors)

  13. An investigation of the joint longitudinal trajectories of low body weight, binge eating, and purging in women with anorexia nervosa and bulimia nervosa.

    Lavender, Jason M; De Young, Kyle P; Franko, Debra L; Eddy, Kamryn T; Kass, Andrea E; Sears, Meredith S; Herzog, David B

    2011-12-01

    To describe the longitudinal course of three core eating disorder symptoms-low body weight, binge eating, and purging-in women with anorexia nervosa (AN) and bulimia nervosa (BN) using a novel statistical approach. Treatment-seeking women with AN (n = 136) or BN (n = 110) completed the Eating Disorders Longitudinal Interval Follow-Up Evaluation interview every 6 months, yielding weekly eating disorder symptom data for a 5-year period. Semiparametric mixture modeling was used to identify longitudinal trajectories for the three core symptoms. Four individual trajectories were identified for each eating disorder symptom. The number and general shape of the individual trajectories was similar across symptoms, with each model including trajectories depicting stable absence and stable presence of symptoms as well as one or more trajectories depicting the declining presence of symptoms. Unique trajectories were found for low body weight (fluctuating presence) and purging (increasing presence). Conjunction analyses yielded the following joint trajectories: low body weight and binge eating, low body weight and purging, and binge eating and purging. The course of individual eating disorder symptoms among patients with AN and BN is highly variable. Future research identifying clinical predictors of trajectory membership may inform treatment and nosological research. Copyright © 2010 Wiley Periodicals, Inc.

  14. Dynamic performance of a high-temperature PEM (proton exchange membrane) fuel cell – Modelling and fuzzy control of purging process

    Zhang, Caizhi; Liu, Zhitao; Zhang, Xiongwen; Chan, Siew Hwa; Wang, Youyi

    2016-01-01

    To improve the fuel utilization of an HT-PEMFC (high-temperature proton exchange membrane fuel cell), which normally operates in dead-end mode, properly timed periodic purging to flush the accumulated water vapour out of the anode flow field is necessary; otherwise the performance of the HT-PEMFC drops gradually. In this paper, a semi-empirical dynamic voltage model of the HT-PEMFC is developed for controller design purposes by fitting experimental data, and is validated against experimental results. A fuzzy controller is then designed to schedule the purging based on the obtained model. According to the results, the developed model reflects the transient characteristics of the HT-PEMFC voltage well, and the fuzzy controller offers good purge-scheduling performance under uncertain load demands. - Highlights: • A semi-empirical dynamic voltage model of HT-PEMFC is developed for control design. • The model is developed via fitting and validated with experimental results. • A fuzzy controller is designed to schedule the purging based on the obtained model.
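The flavor of fuzzy purge scheduling can be sketched with a toy Mamdani-style controller: triangular memberships map an observed voltage droop and load level to a purge urgency. The rule base, membership shapes, and variable names below are all assumptions for illustration, not the controller in the paper:

```python
# Toy sketch of fuzzy purge scheduling (rules and shapes are assumptions):
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def purge_urgency(droop_mv, load):
    """Map voltage droop (mV) and load level (0..1) to urgency in [0, 1]."""
    droop_high = tri(droop_mv, 10, 40, 70)
    droop_low = tri(droop_mv, -30, 0, 30)
    load_high = tri(load, 0.4, 1.0, 1.6)
    # Two Mamdani-style rules, defuzzified by a weighted average:
    rules = [
        (min(droop_high, load_high), 1.0),  # big droop at high load -> purge now
        (droop_low, 0.1),                   # no droop -> purging can wait
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.5

print(purge_urgency(50, 0.9) > purge_urgency(5, 0.9))  # larger droop -> more urgent
```

A real implementation would drive the inputs from the fitted voltage model and trigger a purge when the urgency crosses a threshold.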

  15. Calculation of the fast multiplication factor by the fission matrix method

    Naumov, V.A.; Rozin, S.G.; Ehl'perin, T.I.

    1976-01-01

    A variation of the Monte Carlo method for calculating the effective multiplication factor of a nuclear reactor is described, together with a procedure for evaluating reactivity perturbations by the Monte Carlo method in first-order perturbation theory. The method consists in reducing the integral neutron transport equation to a set of linear algebraic equations whose coefficients are the elements of a fission matrix. The fission matrix, being a Green's function of the neutron transport equation, is evaluated by the Monte Carlo method. In the program realizing the suggested algorithm, the initial neutron energy is sampled from the fission spectrum, and the region of neutron birth ΔV_f^i is sampled in proportion to the product Σ_f^i ΔV_f^i, where Σ_f^i is the macroscopic fission cross section of region i at the birth energy. Further iterations of the spatial distribution of neutrons in the system are performed by the generation method. In the adopted scheme of simulating neutron histories, the emission of secondary neutrons is controlled by weights; it occurs at every collision and not only at the end of the history. The multiplication factor is calculated simultaneously with the spatial distribution of neutron worth in the system, relative to the fission process, and the neutron flux. The efficiency of the described procedure has been tested on the calculation of the multiplication factor for the Godiva assembly, which simulates a fast reactor with a hard spectrum; high accuracy at a moderate number of core zones and reasonable statistics was demonstrated.
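Once the fission matrix has been tallied by Monte Carlo, the multiplication factor is its dominant eigenvalue, which the generation method finds by power iteration. A sketch with a made-up 3-region matrix (the matrix values are illustrative, not from the paper):

```python
import numpy as np

# F[i, j] = expected fission neutrons produced in region i per fission
# neutron born in region j (hypothetical 3-region tally):
F = np.array([[0.60, 0.20, 0.05],
              [0.20, 0.55, 0.20],
              [0.05, 0.20, 0.60]])

s = np.ones(3) / 3                 # initial fission-source guess
for _ in range(200):               # generation-by-generation iteration
    s_new = F @ s
    k_eff = s_new.sum() / s.sum()  # generation-to-generation ratio
    s = s_new / s_new.sum()        # renormalize the source shape
print(round(k_eff, 4))             # converged dominant eigenvalue, ~0.8872
```

In the actual method the entries of `F` carry Monte Carlo statistical uncertainty, and the converged source vector `s` doubles as the spatial fission-source distribution used for the perturbation estimates.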

  16. Stepwise multiple regression method of greenhouse gas emission modeling in the energy sector in Poland.

    Kolasa-Wiecek, Alicja

    2015-04-01

    The energy sector in Poland is the source of 81% of greenhouse gas (GHG) emissions. Poland, among other European Union countries, occupies a leading position with regard to coal consumption. The Polish energy sector actively participates in efforts to reduce GHG emissions to the atmosphere through a gradual decrease of the share of coal in the fuel mix and the development of renewable energy sources. All evidence that completes the knowledge about issues related to GHG emissions is a valuable source of information. The article presents the results of modeling the GHG emissions generated by the energy sector in Poland. For a better understanding of the quantitative relationship between total consumption of primary energy and greenhouse gas emission, a multiple stepwise regression model was applied. The modeling results for CO2 emissions demonstrate a strong relationship (0.97) with the hard coal consumption variable. The adjustment coefficient of the model to actual data is high, at 95%. The backward stepwise regression model, in the case of CH4 emission, indicated the presence of hard coal (0.66), peat and fuel wood (0.34), and solid waste fuels, as well as other sources (-0.64), as the most important variables. The adjusted coefficient is adequate, at R² = 0.90. For N2O emission modeling the obtained coefficient of determination is low, at 43%. A significant variable influencing the amount of N2O emission is peat and fuel wood consumption. Copyright © 2015. Published by Elsevier B.V.
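Backward stepwise selection of the kind used here can be sketched with plain least squares: repeatedly drop the regressor whose removal most improves the adjusted R², stopping when no removal helps. The data, variable names, and stopping criterion below are illustrative assumptions, not the study's dataset or software:

```python
import numpy as np

def adj_r2(X, y):
    """Adjusted R^2 of an OLS fit of y on the columns of X."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    n, p = X.shape
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1 - (ss_res / (n - p)) / (ss_tot / (n - 1))

def backward_step(X, y, names):
    """Backward elimination on adjusted R^2 (the constant is never dropped)."""
    cols = list(range(X.shape[1]))
    while len(cols) > 1:
        current = adj_r2(X[:, cols], y)
        scores = [(adj_r2(X[:, [c for c in cols if c != j]], y), j)
                  for j in cols[1:]]
        best, drop = max(scores)
        if best <= current:           # no removal improves the fit
            break
        cols.remove(drop)
    return [names[c] for c in cols]

rng = np.random.default_rng(1)
n = 80
coal = rng.normal(size=n)             # stands in for hard coal consumption
noise_var = rng.normal(size=n)        # an irrelevant regressor
X = np.column_stack([np.ones(n), coal, noise_var])
y = 2.0 + 3.0 * coal + rng.normal(scale=0.5, size=n)
selected = backward_step(X, y, ["const", "coal", "noise"])
print(selected)                       # the strong "coal" predictor survives
```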

  17. Multiple-aperture optical design for micro-level cameras using 3D-printing method

    Peng, Wei-Jei; Hsu, Wei-Yao; Cheng, Yuan-Chieh; Lin, Wen-Lung; Yu, Zong-Ru; Chou, Hsiao-Yu; Chen, Fong-Zhi; Fu, Chien-Chung; Wu, Chong-Syuan; Huang, Chao-Tsung

    2018-02-01

    The design of an ultra-miniaturized camera using 3D-printing technology, printed directly onto the complementary metal-oxide semiconductor (CMOS) imaging sensor, is presented in this paper. The 3D-printed micro-optics is manufactured using femtosecond two-photon direct laser writing, and the achievable figure error, at submicron accuracy, is suitable for the optical system. Because the size of the micro-level camera is approximately several hundred micrometers, the resolution is greatly reduced and highly limited by the Nyquist frequency of the pixel pitch. To improve the reduced resolution, a single lens can be replaced by multiple-aperture lenses with dissimilar fields of view (FOV), and stitching sub-images with different FOVs can then achieve a high resolution within the central region of the image. The reason is that the angular resolution of a lens with a smaller FOV is higher than that of one with a larger FOV, so after stitching, the angular resolution of the central area can be several times that of the outer area. For the same image circle, the image quality of the central area of the multi-lens system is significantly superior to that of a single lens. The foveated image obtained by stitching FOVs breaks the resolution limitation of the ultra-miniaturized imaging system, enabling applications such as biomedical endoscopy, optical sensing, and machine vision, among others. In this study, the ultra-miniaturized camera with multi-aperture optics is designed and simulated for optimum optical performance.

  18. Development of method for evaluating estimated inundation area by using river flood analysis based on multiple flood scenarios

    Ono, T.; Takahashi, T.

    2017-12-01

    Non-structural mitigation measures such as flood hazard maps based on estimated inundation areas have become more important because heavy rains exceeding the design rainfall have occurred frequently in recent years. However, the conventional method may lead to an underestimation of the area because the assumed locations of dike breach in river flood analysis are limited to the sections exceeding the high-water level. The objective of this study is to consider the uncertainty of the estimated inundation area arising from the location of the dike breach in river flood analysis. This study proposes multiple flood scenarios that automatically set multiple dike breach locations in river flood analysis. The major premise of adopting this method is that the location of a dike breach cannot be predicted correctly. The proposed method utilizes an interval of dike breach, i.e., the distance between dike breaches placed next to each other: multiple dike breach locations are set at every interval along the dike. The 2D shallow water equations were adopted as the governing equations of the river flood analysis, and the leap-frog scheme with a staggered grid was used. The river flood analysis was verified by application to the 2015 Kinugawa river flooding, and the proposed multiple flood scenarios were applied to the Akutagawa river in Takatsuki city. The computations for the Akutagawa river showed that comparing the computed maximum inundation depths of dike breaches placed next to each other prevented underestimation of the estimated inundation area. Further, analyses of the spatial distribution of inundation class and of the maximum inundation depth at each measurement point identified the optimum interval of dike breach, which can evaluate the maximum inundation area using the minimum number of assumed dike breach locations. In brief, this study found the optimum interval of dike breach in the Akutagawa river, which enabled the maximum inundation area to be estimated.
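The scenario-generation and envelope idea can be sketched independently of the shallow-water solver: place a breach every interval along the dike, run one inundation simulation per breach, and take the cell-wise maximum depth over all scenarios. The toy "simulation" below is a stand-in for the 2D shallow-water run, and all numbers are illustrative:

```python
import numpy as np

def breach_locations(dike_length_m, interval_m):
    """Breach positions placed at every interval along the dike."""
    return np.arange(0.0, dike_length_m + 1e-9, interval_m)

def fake_inundation(breach_x, cells_x):
    """Stand-in for a 2D shallow-water run: depth decays with distance
    from the breach (illustrative only)."""
    return np.maximum(0.0, 2.0 - 0.002 * np.abs(cells_x - breach_x))

cells = np.linspace(0.0, 3000.0, 301)          # 1D strip of grid cells
scenarios = [fake_inundation(b, cells)
             for b in breach_locations(3000.0, 500.0)]
envelope = np.max(scenarios, axis=0)           # estimated max inundation depth
print(envelope.max() == 2.0)                   # every breach cell reaches full depth
```

Shrinking the interval adds scenarios (and cost) but thickens the envelope; the study's "optimum interval" is the largest interval whose envelope still reproduces the maximum inundation area.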

  19. Imaging method of minute injured area at achilles tendon from multiple MR Images

    Tokui, Takahiro; Imura, Masataka; Kuroda, Yoshihiro; Oshiro, Osamu; Oguchi, Makoto; Fujiwara, Kazuhisa; Tabata, Yoshito; Ishigaki, Rikuta

    2011-01-01

    Ruptures of the Achilles tendon frequently occur during sports. Since two-thirds of the people who suffer a rupture of the Achilles tendon feel pain at the Achilles tendon before the rupture, detecting a predictor of the rupture is possible. The Achilles tendon is soft tissue consisting of unidirectionally aligned collagen fibers. Therefore, ordinary MRI scanners, ultrasonic instruments and X-ray scanners cannot acquire useful medical images of the Achilles tendon. However, because the MR signal intensity changes according to the angle between the static magnetic field direction and the fiber orientation, an MR device can detect a strong signal when the angle is 55 deg. In this research, the authors propose an imaging method to detect injured areas of the Achilles tendon. The method calculates and visualizes a value representing fiber tropism from the matching between the MR signal intensity and a model of the angle dependence of the signal intensity. (author)

  20. Estimation Method of Center of Inertia Frequency based on Multiple Synchronized Phasor Measurement Data

    Hashiguchi, Takuhei; Watanabe, Masayuki; Goda, Tadahiro; Mitani, Yasunori; Saeki, Osamu; Hojo, Masahide; Ukai, Hiroyuki

    Open access and deregulation have been introduced in Japan, and some independent power producers (IPPs) and power producers and suppliers (PPSs) are entering the power generation business, which can make power system dynamics more complex. To maintain the power system condition under various situations, a real-time wide-area measurement system is essential, so we started a project to construct an original measurement system using phasor measurement units (PMUs) in Japan. This paper describes a method for estimating the center of inertia frequency from actual measurement data. Applying this method makes it possible to extract power system oscillations from measurement data appropriately. Moreover, an analysis of power system dynamics for oscillations occurring in the western Japan 60 Hz system is shown. These results will lead to the clarification of power system dynamics and may make it possible to monitor power system oscillations associated with power system stability.
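The standard center-of-inertia definition weights each area's measured frequency by its inertia: f_COI = Σ H_i f_i / Σ H_i. A minimal sketch with made-up numbers for three PMU-observed areas (values are illustrative, not from the paper):

```python
# Inertia-weighted center-of-inertia (COI) frequency from per-area
# frequency measurements (the standard definition):
def coi_frequency(freqs_hz, inertias_mws):
    """f_COI = sum(H_i * f_i) / sum(H_i)."""
    total_h = sum(inertias_mws)
    return sum(h * f for h, f in zip(inertias_mws, freqs_hz)) / total_h

# Three hypothetical areas of a 60 Hz system observed during a swing:
f = coi_frequency([60.02, 59.97, 60.00], [800.0, 400.0, 300.0])
print(59.97 < f < 60.02)   # the COI always lies inside the measured range
```

Subtracting f_COI from each area's frequency isolates the inter-area oscillation components, which is how the measured data can be decomposed into system-wide swings.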