WorldWideScience

Sample records for method results revealed

  1. Comparative analyses reveal discrepancies among results of commonly used methods for Anopheles gambiae molecular form identification

    Directory of Open Access Journals (Sweden)

    Pinto João

    2011-08-01

    Full Text Available Abstract Background Anopheles gambiae M and S molecular forms, the major malaria vectors in the Afro-tropical region, are undergoing a process of ecological diversification and adaptive lineage splitting, which is affecting malaria transmission and vector control strategies in West Africa. These two incipient species are defined on the basis of single nucleotide differences in the IGS and ITS regions of multicopy rDNA located on the X-chromosome. A number of PCR and PCR-RFLP approaches based on form-specific SNPs in the IGS region are used for M and S identification. Moreover, a PCR method to detect the M-specific insertion of a short interspersed transposable element (SINE200) has recently been introduced as an alternative identification approach. However, a large-scale comparative analysis of four widely used PCR or PCR-RFLP genotyping methods for M and S identification had never been carried out to evaluate whether they could be used interchangeably, as commonly assumed. Results The genotyping of more than 400 A. gambiae specimens from nine African countries, and the sequencing of the IGS-amplicon of 115 of them, highlighted discrepancies among results obtained by the different approaches due to different kinds of biases, which may result in an overestimation of M/S putative hybrids, as follows: (i) incorrect matching of the M- and S-specific primers used in the allele-specific PCR approach; (ii) presence of polymorphisms in the recognition sequence of the restriction enzymes used in the PCR-RFLP approaches; (iii) incomplete cleavage during the restriction reactions; (iv) presence of different copy numbers of M- and S-specific IGS-arrays in single individuals in areas of secondary contact between the two forms. Conclusions The results reveal that the PCR and PCR-RFLP approaches most commonly utilized to identify A. gambiae M and S forms are not fully interchangeable as usually assumed, and highlight limits of the actual definition of the two molecular forms, which might

  2. Trojan Horse Method: Recent Results

    International Nuclear Information System (INIS)

    Pizzone, R. G.; Spitaleri, C.

    2008-01-01

    Owing to the presence of the Coulomb barrier at astrophysically relevant kinetic energies, it is very difficult, or sometimes impossible, to measure astrophysical reaction rates in the laboratory. This is why different indirect techniques are being used along with direct measurements. The THM is a unique indirect technique allowing one to measure astrophysical rearrangement reactions down to astrophysically relevant energies. The basic principle and a review of the main applications of the Trojan Horse Method are presented. The applications aiming at the extraction of the bare S_b(E) astrophysical factor and of the electron screening potential U_e for several two-body processes are discussed
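
    The two quantities named here are linked by a standard parametrization (background knowledge, not quoted from the record): the S-factor from a direct, screened laboratory measurement exceeds the bare one by an exponential factor containing the screening potential, so a THM determination of S_b(E) lets U_e be extracted by comparison with direct data. In LaTeX notation, with \eta(E) the Sommerfeld parameter:

      S_{\mathrm{screened}}(E) \;=\; S_b(E)\,\exp\!\left(\frac{\pi\,\eta(E)\,U_e}{E}\right)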

  3. The WOMBAT Attack Attribution Method: Some Results

    Science.gov (United States)

    Dacier, Marc; Pham, Van-Hau; Thonnard, Olivier

    In this paper, we present a new attack attribution method that has been developed within the WOMBAT project. We illustrate the method with some real-world results obtained when applying it to almost two years of attack traces collected by low-interaction honeypots. This analytical method aims at identifying large-scale attack phenomena composed of IP sources that are linked to the same root cause. All malicious sources involved in the same phenomenon constitute what we call a Misbehaving Cloud (MC). The paper offers an overview of the various steps the method goes through to identify these clouds, providing pointers to external references for more detailed information. Four instances of misbehaving clouds are then described in some more depth to demonstrate the meaningfulness of the concept.
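
    The record does not spell out WOMBAT's attack features or clustering algorithm, so the Python sketch below only conveys the flavour of the Misbehaving Cloud idea under invented assumptions: sources sharing a feature value are linked, and connected components stand in for clouds.

      from collections import defaultdict

      # Hypothetical attack events: source IP plus a few observed features.
      # Sources sharing a feature value are linked; connected components then
      # play the role of "Misbehaving Clouds" (a simplification of WOMBAT's
      # multi-criteria analysis, assumed here purely for illustration).
      events = [
          ("10.0.0.1", {"port": 445, "payload": "mw-A"}),
          ("10.0.0.2", {"port": 445, "payload": "mw-A"}),
          ("10.0.0.3", {"port": 22,  "payload": "mw-B"}),
          ("10.0.0.4", {"port": 22,  "payload": "mw-B"}),
          ("10.0.0.5", {"port": 80,  "payload": "mw-C"}),
      ]

      parent = {}

      def find(x):
          parent.setdefault(x, x)
          while parent[x] != x:
              parent[x] = parent[parent[x]]  # path halving
              x = parent[x]
          return x

      def union(a, b):
          parent[find(a)] = find(b)

      # Link every source to each (feature, value) node it exhibits.
      for src, feats in events:
          for fv in feats.items():
              union(src, fv)

      clouds = defaultdict(list)
      for src, _ in events:
          clouds[find(src)].append(src)
      print([sorted(c) for c in clouds.values()])
      # -> [['10.0.0.1', '10.0.0.2'], ['10.0.0.3', '10.0.0.4'], ['10.0.0.5']]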

  4. German precursor study: methods and results

    International Nuclear Information System (INIS)

    Hoertner, H.; Frey, W.; von Linden, J.; Reichart, G.

    1985-01-01

    This study has been prepared by the GRS under contract to the Federal Minister of the Interior. The purpose of the study is to show how the application of system-analytic tools, and especially of probabilistic methods, to the Licensee Event Reports (LERs) and to other operating experience can support a deeper understanding of the safety-related importance of the events reported in reactor operation, the identification of possible weak points, and further conclusions to be drawn from the events. Additionally, the study aimed at a comparison of its results for the severe core damage frequency with those of the German Risk Study, as far as this is possible and useful. The German Precursor Study is a plant-specific study. The reference plant is the Biblis NPP with its very similar Units A and B, whereby the latter was also the reference plant for the German Risk Study

  5. Mechanics of Nanostructures: Methods and Results

    Science.gov (United States)

    Ruoff, Rod

    2003-03-01

    We continue to develop and use new tools to measure the mechanics and electromechanics of nanostructures. Here we discuss: (a) methods for making nanoclamps, and the resulting nanoclamp geometry, chemical composition and type of chemical bonding, and nanoclamp strength (effectiveness as a nanoclamp for the mechanics measurements to be made); (b) mechanics of carbon nanocoils. We have received carbon nanocoils from colleagues in Japan [1], measured their spring constants, and have observed extensions exceeding 100% relative to the unloaded length, using our scanning electron microscope nanomanipulator tool; (c) several new devices that are essentially MEMS-based, that allow for improved measurements of the mechanics of pseudo-1D and planar nanostructures. [1] Zhang M., Nakayama Y., Pan L., Japanese J. Appl. Phys. 39, L1242-L1244 (2000).

  6. Multiband discrete ordinates method: formalism and results

    International Nuclear Information System (INIS)

    Luneville, L.

    1998-06-01

    The multigroup discrete ordinates method is a classical way to solve the transport (Boltzmann) equation for neutral particles. Self-shielding effects are not correctly treated due to large variations of cross sections within a group (in the resonance range). To treat the resonance domain, the multiband method is introduced. The main idea is to divide the cross-section domain into bands. We obtain the multiband parameters using the moment method; the code CALENDF provides probability tables for these parameters. We present our implementation in an existing discrete ordinates code: SN1D. We study deep penetration benchmarks and show the improvement of the method in the treatment of self-shielding effects. (author)
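
    The band construction can be sketched in a few lines of Python (an illustration of the idea only; CALENDF's moment-method tables are considerably more sophisticated, and the resonance profile below is invented): within one energy group, the cross-section range is split into bands, and each band receives a probability and an average cross section.

      import numpy as np

      # Pointwise cross section on a fine energy grid inside one group
      # (synthetic single-resonance data, for illustration only).
      energy = np.linspace(1.0, 2.0, 2000)
      sigma = 1.0 + 50.0 / (1.0 + ((energy - 1.5) / 0.01) ** 2)

      # Split the cross-section *magnitude* range into bands (log-spaced
      # edges), then compute each band's probability and mean cross section.
      edges = np.geomspace(sigma.min(), sigma.max() * 1.0001, 5)  # 4 bands
      table = []
      for lo, hi in zip(edges[:-1], edges[1:]):
          mask = (sigma >= lo) & (sigma < hi)
          if mask.any():
              table.append((mask.mean(), sigma[mask].mean()))

      for p, s in table:
          print(f"band probability {p:.3f}, band-average sigma {s:.2f}")

      # Sanity check: the table reproduces the group-average cross section.
      assert abs(sum(p * s for p, s in table) - sigma.mean()) < 1e-6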

  7. Cocaine Hydrochloride Structure in Solution Revealed by Three Chiroptical Methods

    Czech Academy of Sciences Publication Activity Database

    Fagan, P.; Kocourková, L.; Tatarkovič, M.; Králík, F.; Kuchař, M.; Setnička, V.; Bouř, Petr

    2017-01-01

    Vol. 18, No. 16 (2017), pp. 2258-2265 ISSN 1439-4235 R&D Projects: GA ČR(CZ) GA16-05935S; GA MŠk(CZ) LTC17012 Institutional support: RVO:61388963 Keywords: analytical methods * circular dichroism * density functional calculations * Raman spectroscopy * structure elucidation Subject RIV: CF - Physical; Theoretical Chemistry OBOR OECD: Physical chemistry Impact factor: 3.075, year: 2016

  8. Project Oriented Immersion Learning: Method and Results

    DEFF Research Database (Denmark)

    Icaza, José I.; Heredia, Yolanda; Borch, Ole M.

    2005-01-01

    A pedagogical approach called “project oriented immersion learning” is presented and tested on a graduate online course. The approach combines the Project Oriented Learning method with immersion learning in a virtual enterprise. Students assumed the role of authors hired by a fictitious publishing house that develops digital products, including e-books, tutorials, web sites and so on. The students defined the problem that their product was to solve; chose the type of product and the content; and built the product following a strict project methodology. A wiki server was used as a platform to hold...

  9. Learning phacoemulsification. Results of different teaching methods.

    Directory of Open Access Journals (Sweden)

    Hennig Albrecht

    2004-01-01

    Full Text Available We report the learning curves of three eye surgeons converting from sutureless extracapsular cataract extraction to phacoemulsification using different teaching methods. Posterior capsule rupture (PCR) as a per-operative complication and the visual outcome of the first 100 operations were analysed. The PCR rate was 4% and 15% in supervised and unsupervised surgery respectively. Likewise, an uncorrected visual acuity of ≥ 6/18 on the first postoperative day was seen in 62 patients (62%) and in 22 (22%) in supervised and unsupervised surgery respectively.

  10. Single primer amplification reaction methods reveal exotic and ...

    Indian Academy of Sciences (India)

    Unknown

    mulberry varieties using three different PCR-based single primer amplification ..... the results of a multivariate analysis using Mahalanobis D2 statistic in case of .... Rajan M V, Chaturvedi H K and Sarkar A 1997 Multivariate analysis as an aid ...

  11. RESULTS OF THE QUESTIONNAIRE: ANALYSIS METHODS

    CERN Multimedia

    Staff Association

    2014-01-01

    Five-yearly review of employment conditions   Article S V 1.02 of our Staff Rules states that the CERN “Council shall periodically review and determine the financial and social conditions of the members of the personnel. These periodic reviews shall consist of a five-yearly general review of financial and social conditions;” […] “following methods […] specified in § I of Annex A 1”. Then, turning to the relevant part in Annex A 1, we read that “The purpose of the five-yearly review is to ensure that the financial and social conditions offered by the Organization allow it to recruit and retain the staff members required for the execution of its mission from all its Member States. […] these staff members must be of the highest competence and integrity.” And for the menu of such a review we have: “The five-yearly review must include basic salaries and may include any other financial or soc...

  12. Two different hematocrit detection methods: Different methods, different results?

    Directory of Open Access Journals (Sweden)

    Schuepbach Reto A

    2010-03-01

    Full Text Available Abstract Background Less is known about the influence of hematocrit detection methodology on transfusion triggers. Therefore, the aim of the present study was to compare two different hematocrit-assessing methods. In a total of 50 critically ill patients, hematocrit was analyzed and compared using (1) a blood gas analyzer (ABLflex 800) and (2) the central laboratory method (ADVIA® 2120). Findings Bland-Altman analysis for repeated measurements showed a good correlation with a bias of +1.39% and 2 SD of ± 3.12%. The 24%-hematocrit group showed a correlation of r² = 0.87. With a kappa of 0.56, 22.7% of the cases would have been transfused differently. In the 28%-hematocrit group, with a similar correlation (r² = 0.8) and a kappa of 0.58, 21% of the cases would have been transfused differently. Conclusions Despite a good agreement between the two methods used to determine hematocrit in clinical routine, the calculated difference of 1.4% might substantially influence transfusion triggers depending on the employed method.
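
    A short Python sketch of the Bland-Altman quantities quoted above (bias and 2 SD limits of agreement), under the simplifying assumption of one measurement pair per patient and with invented readings; the study's repeated-measurements setting needs an adjusted estimator:

      import numpy as np

      # Paired hematocrit readings (%), blood gas analyzer vs central
      # laboratory (synthetic values for illustration).
      bga = np.array([24.1, 27.9, 31.2, 22.5, 29.4, 35.0])
      lab = np.array([23.0, 26.2, 29.8, 21.0, 28.1, 33.6])

      diff = bga - lab
      bias = diff.mean()             # systematic offset between methods
      loa = 2 * diff.std(ddof=1)     # 2 SD limits of agreement
      print(f"bias {bias:+.2f}%, "
            f"limits of agreement {bias - loa:.2f}% .. {bias + loa:.2f}%")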

  13. Barcoded pyrosequencing reveals that consumption of galactooligosaccharides results in a highly specific bifidogenic response in humans.

    Directory of Open Access Journals (Sweden)

    Lauren M G Davis

    Full Text Available Prebiotics are selectively fermented ingredients that allow specific changes in the gastrointestinal microbiota that confer health benefits to the host. However, accounts of the effects of prebiotics on the human gut microbiota are incomplete, as most studies have relied on methods that fail to cover the breadth of the bacterial community. The goal of this research was to use high-throughput multiplex community sequencing of 16S rDNA tags to gain a community-wide perspective of the impact of the prebiotic galactooligosaccharide (GOS) on the fecal microbiota of healthy human subjects. Fecal samples from eighteen healthy adults were previously obtained during a feeding trial in which each subject consumed a GOS-containing product for twelve weeks, with four increasing dosages (0, 2.5, 5, and 10 grams of GOS). Multiplex sequencing of the 16S rDNA tags revealed that GOS induced significant compositional alterations in the fecal microbiota, principally by increasing the abundance of organisms within the Actinobacteria. Specifically, several distinct lineages of Bifidobacterium were enriched. Consumption of GOS led to five- to ten-fold increases in bifidobacteria in half of the subjects. Increases in Firmicutes were also observed; however, these changes were detectable in only a few individuals. The enrichment of bifidobacteria was generally at the expense of one group of bacteria, the Bacteroides. The responses to GOS and the magnitude of the response varied between individuals, were reversible, and were in accordance with dosage. The bifidobacteria were the only bacteria that were consistently and significantly enriched by GOS, although this substrate supported the growth of diverse colonic bacteria in mono-culture experiments. These results suggest that GOS can be used to enrich bifidobacteria in the human gastrointestinal tract with remarkable specificity, and that the bifidogenic properties of GOS that occur in vivo are caused by selective fermentation as well as by

  14. PALEOEARTHQUAKES IN THE PRIBAIKALIE: METHODS AND RESULTS OF DATING

    Directory of Open Access Journals (Sweden)

    Oleg P. Smekalin

    2010-01-01

    Full Text Available In the Pribaikalie and adjacent territories, seismogeological studies have been underway for almost half a century and have resulted in the discovery of more than 70 dislocations of seismic or presumably seismic origin. With the commencement of paleoseismic studies, the dating of paleo-earthquakes became a focus, as an indicator useful for long-term prediction of strong earthquakes. V.P. Solonenko [Solonenko, 1977] distinguished five methods for dating paleoseismogenic deformations, i.e. geological, engineering-geological, historico-archeological, dendrochronological and radiocarbon methods. However, the ages of the majority of seismic deformations, which were subject to studies at the initial stage of development of seismogeology in Siberia, were defined by methods of relative or correlation age determination. Since the 1980s, studies of seismogenic deformation in the Pribaikalie have been widely conducted with trenching. Mass sampling, followed by radiocarbon analyses and definition of absolute ages of paleo-earthquakes, provided new data on seismic regimes of the territory and on rates of recent displacements along active faults, and enhanced the validity of methods of relative dating, in particular morphometry. The capacity of the morphometry method has significantly increased with the introduction of laser techniques in surveying and digital processing of 3D relief models. Comprehensive seismogeological studies conducted in the Pribaikalie revealed 43 paleo-events within 16 seismogenic structures. The absolute ages of 18 paleo-events were defined by the radiocarbon age determination method. Judging by their ages, a number of dislocations were related to historical earthquakes which occurred in the 18th and 19th centuries, yet no reliable data on the epicenters of such events are available. The absolute and relative dating methods allowed us to identify sections in some paleoseismogenic structures by differences in ages of activation and thus provided new data for

  15. Kinds of access: different methods for report reveal different kinds of metacognitive access

    Science.gov (United States)

    Overgaard, Morten; Sandberg, Kristian

    2012-01-01

    In experimental investigations of consciousness, participants are asked to reflect upon their own experiences by issuing reports about them in different ways. For this reason, a participant needs some access to the content of her own conscious experience in order to report. In such experiments, the reports typically consist of some variety of ratings of confidence or direct descriptions of one's own experiences. Whereas different methods of reporting are typically used interchangeably, recent experiments indicate that different results are obtained with different kinds of reporting. We argue that there is not only a theoretical, but also an empirical difference between different methods of reporting. We hypothesize that differences in the sensitivity of different scales may reveal that different types of access are used to issue direct reports about experiences and metacognitive reports about the classification process. PMID:22492747

  16. A norm knockout method on indirect reciprocity to reveal indispensable norms

    Science.gov (United States)

    Yamamoto, Hitoshi; Okada, Isamu; Uchida, Satoshi; Sasaki, Tatsuya

    2017-03-01

    Although various norms for reciprocity-based cooperation have been suggested that are evolutionarily stable against invasion from free riders, the process of alternation of norms and the role of diversified norms remain unclear in the evolution of cooperation. We clarify the co-evolutionary dynamics of norms and cooperation in indirect reciprocity and also identify the indispensable norms for the evolution of cooperation. Inspired by the gene knockout method, a genetic engineering technique, we developed the norm knockout method and clarified the norms necessary for the establishment of cooperation. The results of numerical investigations revealed that the majority of norms gradually transitioned to tolerant norms after defectors had been eliminated by strict norms. Furthermore, no cooperation emerges when specific norms that are intolerant to defectors are knocked out.
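
    The record gives no model details, so the Python sketch below conveys only the knockout protocol under invented assumptions (population size, imitation rule, assessment error, discriminator action rule): enumerate the sixteen second-order norms, remove one at a time, rerun the simulation, and flag norms whose absence collapses cooperation.

      import itertools
      import random

      # Second-order norms map (donor_action, recipient_reputation) -> donor's
      # new reputation, with 1 = good/cooperate, 0 = bad/defect: 16 norms total.
      KEYS = list(itertools.product((1, 0), repeat=2))
      ALL_NORMS = [dict(zip(KEYS, bits))
                   for bits in itertools.product((1, 0), repeat=4)]

      def cooperation_rate(norm_pool, n=40, rounds=20000, err=0.01, seed=0):
          """Toy indirect reciprocity: discriminator agents, private opinions,
          occasional imitation of a better-scoring agent's norm."""
          rng = random.Random(seed)
          norms = [rng.choice(norm_pool) for _ in range(n)]
          opinion = [[1] * n for _ in range(n)]  # all start out looking good
          payoff = [0.0] * n
          coop = 0
          for t in range(rounds):
              donor, recipient = rng.sample(range(n), 2)
              act = opinion[donor][recipient]    # help only those seen as good
              if act:
                  payoff[donor] -= 1.0
                  payoff[recipient] += 5.0
                  coop += 1
              for obs in range(n):               # every observer judges privately
                  judged = norms[obs][(act, opinion[obs][recipient])]
                  opinion[obs][donor] = 1 - judged if rng.random() < err else judged
              if t % 200 == 0:                   # imitate a richer agent's norm
                  a, b = rng.sample(range(n), 2)
                  if payoff[b] > payoff[a]:
                      norms[a] = norms[b]
          return coop / rounds

      baseline = cooperation_rate(ALL_NORMS)
      for i in range(len(ALL_NORMS)):            # knock out one norm at a time
          pool = ALL_NORMS[:i] + ALL_NORMS[i + 1:]
          drop = baseline - cooperation_rate(pool)
          if drop > 0.1:                         # big drop => indispensable norm
              print(f"norm {i} looks indispensable (cooperation drop {drop:.2f})")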

  17. Field trip method as an effort to reveal student environmental literacy on biodiversity issue and context

    Science.gov (United States)

    Rijal, M.; Saefudin; Amprasto

    2018-05-01

    The field trip method, through the investigation of local biodiversity cases, can give educational experiences to students. This learning activity was an effort to reveal students' environmental literacy on biodiversity. The aims of the study were (1) to describe the activities through which students obtained information about the biodiversity issue and its context during the field trip, (2) to describe the students' findings during the field trip, and (3) to reveal students' environmental literacy based on a pre-test and post-test. The research used a weak-experimental design and involved 34 senior high school students in Bandung, Indonesia. The research instruments for collecting data were an environmental literacy test, observation sheets and questionnaire sheets for students. The analysis of data was quantitative descriptive. The results show that more than 79% of the students gave a positive view of each field trip activity, i.e. students' activity during work (97%-100%); students' activity while gathering information (79%-100%); students' activity while exchanging information with friends (82%-100%); and students' interest in biodiversity after the field trip activity (85%-100%). Students gained knowledge about the diversity of vertebrate animals and their characteristics, the status and condition of animals, and the sources of animals involved in the cases of animal diversity. The students' environmental literacy tended to be at a moderate level based on the test. Meanwhile, the average scores for attitudes and action were greater than those for the knowledge and cognitive-skill components.

  18. A Method to Reveal Fine-Grained and Diverse Conceptual Progressions during Learning

    Science.gov (United States)

    Lombard, François; Merminod, Marie; Widmer, Vincent; Schneider, Daniel K.

    2018-01-01

    Empirical data on learners' conceptual progression is required to design curricula and guide students. In this paper, we present the Reference Map Change Coding (RMCC) method for revealing students' progression at a fine-grained level. The method has been developed and tested through the analysis of successive versions of the productions of eight…

  19. Revealing barriers and facilitators to use a new genetic test: comparison of three user involvement methods.

    Science.gov (United States)

    Rhebergen, Martijn D F; Visser, Maaike J; Verberk, Maarten M; Lenderink, Annet F; van Dijk, Frank J H; Kezic, Sanja; Hulshof, Carel T J

    2012-10-01

    We compared three common user involvement methods in revealing barriers and facilitators from intended users that might influence their use of a new genetic test. The study was part of the development of a new genetic test on the susceptibility to hand eczema for nurses. Eighty student nurses participated in five focus groups (n = 33), 15 interviews (n = 15) or questionnaires (n = 32). For each method, data were collected until saturation. We compared the mean number of items and relevant remarks that could influence the use of the genetic test obtained per method, divided by the number of participants in that method. Thematic content analysis was performed using MAXQDA software. The focus groups revealed 30 unique items compared to 29 in the interviews and 21 in the questionnaires. The interviews produced more items and relevant remarks per participant (1.9 and 8.4 pp) than focus groups (0.9 and 4.8 pp) or questionnaires (0.7 and 2.3 pp). All three involvement methods revealed relevant barriers and facilitators to use a new genetic test. Focus groups and interviews revealed substantially more items than questionnaires. Furthermore, this study suggests a preference for the use of interviews because the number of items per participant was higher than for focus groups and questionnaires. This conclusion may be valid for other genetic tests as well.

  20. Antarctic Temperature Extremes from MODIS Land Surface Temperatures: New Processing Methods Reveal Data Quality Puzzles

    Science.gov (United States)

    Grant, G.; Gallaher, D. W.

    2017-12-01

    New methods for processing massive remotely sensed datasets are used to evaluate Antarctic land surface temperature (LST) extremes. Data from the MODIS/Terra sensor (Collection 6) provides a twice-daily look at Antarctic LSTs over a 17 year period, at a higher spatiotemporal resolution than past studies. Using a data condensation process that creates databases of anomalous values, our processes create statistical images of Antarctic LSTs. In general, the results find few significant trends in extremes; however, they do reveal a puzzling picture of inconsistent cloud detection and possible systemic errors, perhaps due to viewing geometry. Cloud discrimination shows a distinct jump in clear-sky detections starting in 2011, and LSTs around the South Pole exhibit a circular cooling pattern, which may also be related to cloud contamination. Possible root causes are discussed. Ongoing investigations seek to determine whether the results are a natural phenomenon or, as seems likely, the results of sensor degradation or processing artefacts. If the unusual LST patterns or cloud detection discontinuities are natural, they point to new, interesting processes on the Antarctic continent. If the data artefacts are artificial, MODIS LST users should be alerted to the potential issues.

  1. Stepwise multiphoton activation fluorescence reveals a new method of melanin detection

    Science.gov (United States)

    Lai, Zhenhua; Kerimo, Josef; Mega, Yair; DiMarzio, Charles A.

    2013-06-01

    The stepwise multiphoton activated fluorescence (SMPAF) of melanin, activated by a continuous-wave near-infrared (NIR) laser, reveals a broad spectrum extending from the visible to the NIR and has potential application as a low-cost, reliable method of detecting melanin. SMPAF images of melanin in mouse hair and skin are compared with conventional multiphoton fluorescence microscopy and confocal reflectance microscopy (CRM). By combining CRM with SMPAF, we can locate melanin reliably, with the added benefit of eliminating background interference from other components inside mouse hair and skin. The melanin SMPAF signal from the mouse hair is a mixture of a two-photon process and a third-order process. The melanin SMPAF emission spectrum is activated by 1505.9-nm laser light, and the resulting spectrum has a peak at 960 nm. The discovery of the emission peak may lead to a more energy-efficient method of background-free melanin detection with less photo-bleaching.
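
    A common way to identify the order of such a process (assumed here for illustration; the record does not describe the authors' analysis) is the slope of fluorescence versus excitation power on log-log axes, as in this Python sketch with synthetic data:

      import numpy as np

      # Synthetic excitation-power series: fluorescence F ~ P^n, with n the
      # process order (n = 2 two-photon, n = 3 third-order).
      power = np.array([1.0, 1.5, 2.2, 3.3, 5.0])   # arbitrary units
      fluor = 0.7 * power**2 + 0.3 * power**3       # a mixture, as in the record

      # The log-log slope estimates an effective order between 2 and 3.
      slope, _ = np.polyfit(np.log(power), np.log(fluor), 1)
      print(f"effective process order: {slope:.2f}")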

  2. The estimation of the measurement results with using statistical methods

    International Nuclear Information System (INIS)

    Velychko, O. (State Enterprise Ukrmetrteststandard, 4, Metrologichna Str., 03680, Kyiv, Ukraine); Gordiyenko, T. (State Scientific Institution UkrNDIspirtbioprod, 3, Babushkina Lane, 03190, Kyiv, Ukraine)

    2015-01-01

    A number of international standards and guides describe various statistical methods that apply to the management, control and improvement of processes, for the purpose of analysing technical measurement results. An analysis of international standards and guides on statistical methods for the estimation of measurement results, and of their recommendations for application in laboratories, is presented. To carry out this analysis of the standards and guides, cause-and-effect Ishikawa diagrams concerning the application of statistical methods to the estimation of measurement results were constructed

  3. The estimation of the measurement results with using statistical methods

    Science.gov (United States)

    Velychko, O.; Gordiyenko, T.

    2015-02-01

    A number of international standards and guides describe various statistical methods that apply to the management, control and improvement of processes, for the purpose of analysing technical measurement results. An analysis of international standards and guides on statistical methods for the estimation of measurement results, and of their recommendations for application in laboratories, is presented. To carry out this analysis of the standards and guides, cause-and-effect Ishikawa diagrams concerning the application of statistical methods to the estimation of measurement results were constructed.

  4. Kinds of access: Different methods for report reveal different kinds of metacognitive access

    DEFF Research Database (Denmark)

    Overgaard, Morten; Sandberg, Kristian

    2012-01-01

    that there is not only a theoretical, but also an empirical difference between different methods of reporting. We hypothesize that differences in the sensitivity of different scales may reveal that different types of access are used to issue direct reports about experiences and metacognitive reports about...

  5. Introduction of e-learning in dental radiology reveals significantly improved results in final examination.

    Science.gov (United States)

    Meckfessel, Sandra; Stühmer, Constantin; Bormann, Kai-Hendrik; Kupka, Thomas; Behrends, Marianne; Matthies, Herbert; Vaske, Bernhard; Stiesch, Meike; Gellrich, Nils-Claudius; Rücker, Martin

    2011-01-01

    Because a traditionally instructed dental radiology lecture course is very time-consuming and labour-intensive, online courseware, including an interactive learning module, was implemented to support the lectures. The purpose of this study was to evaluate the perceptions of students who have worked with web-based courseware, as well as the effect on their results in final examinations. Users (n₃₊₄ = 138) had access to the e-program from any networked computer at any time. Two groups (n₃ = 71, n₄ = 67) had to pass a final exam after using the e-course. Results were compared with two groups (n₁ = 42, n₂ = 48) who had studied the same content by attending traditional lectures. In addition, a survey of the students was statistically evaluated. Most of the respondents reported a positive attitude towards e-learning and would have appreciated more access to computer-assisted instruction. Two years after initiating the e-course, the failure rate in the final examination dropped significantly, from 40% to less than 2%. The very positive response to the e-program and improved test scores demonstrated the effectiveness of our e-course as a learning aid. Interactive modules in step with clinical practice provided learning that is not achieved by traditional teaching methods alone. To what extent staff savings are possible is part of a further study. Copyright © 2010 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  6. Indirect questioning method reveals hidden support for female genital cutting in South Central Ethiopia.

    Science.gov (United States)

    Gibson, Mhairi A; Gurmu, Eshetu; Cobo, Beatriz; Rueda, María M; Scott, Isabel M

    2018-01-01

    Female genital cutting (FGC) has major implications for women's physical, sexual and psychological health, and eliminating the practice is a key target for public health policy-makers. To date, one of the main barriers to achieving this has been an inability to infer privately-held views on FGC within communities where it is prevalent. As a sensitive (and often illegal) topic, people are anticipated to hide their true support for the practice when questioned directly. Here we use an indirect questioning method (unmatched count technique) to identify hidden support for FGC in a rural South Central Ethiopian community where the practice is common, but thought to be in decline. Employing a socio-demographic household survey of 1620 Arsi Oromo adults, which incorporated both direct and indirect response (unmatched count) techniques, we compare directly-stated versus privately-held views in support of FGC, and individual variation in responses by age, gender, education and target female (daughters versus daughters-in-law). Both genders express low support for FGC when questioned directly, while indirect methods reveal substantially higher acceptance (of cutting both daughters and daughters-in-law). Educated adults (those who have attended school) are privately more supportive of the practice than they are prepared to admit openly to an interviewer, indicating that education may heighten secrecy rather than decrease support for FGC. Older individuals hold the strongest views in favour of FGC (particularly educated older males), but they are also more inclined to conceal their support for FGC when questioned directly. As these elders represent the most influential members of society, their hidden support for FGC may constitute a pivotal barrier to eliminating the practice in this community. Our results demonstrate the great potential for indirect questioning methods to advance knowledge and inform policy on culturally-sensitive topics like FGC; providing more
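
    The unmatched count technique's core estimator is simple enough to sketch in Python (with invented counts, not the study's data): a control group reports how many of several innocuous items apply to them, the treatment group gets the same list plus the sensitive item, and the difference in mean counts estimates prevalence.

      import numpy as np

      # Reported item counts (synthetic): control saw 4 innocuous items,
      # treatment saw the same 4 plus the sensitive statement.
      control = np.array([1, 2, 2, 3, 1, 2, 0, 2, 3, 1])
      treatment = np.array([2, 3, 2, 3, 2, 3, 1, 2, 3, 2])

      prevalence = treatment.mean() - control.mean()
      # Standard error of a difference in independent means.
      se = np.sqrt(treatment.var(ddof=1) / len(treatment)
                   + control.var(ddof=1) / len(control))
      print(f"estimated prevalence: {prevalence:.2f} +/- {1.96 * se:.2f}")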

  7. EQUITY SHARES EQUATING THE RESULTS OF FCFF AND FCFE METHODS

    Directory of Open Access Journals (Sweden)

    Bartłomiej Cegłowski

    2012-06-01

    Full Text Available The aim of the article is to present a method of establishing equity shares in the weighted average cost of capital (WACC), in which the value of loan capital results from the fixed assumptions accepted in the financial plan (for example, a schedule of loan repayment) and equity is valued by means of a discount method. The described method ensures that, regardless of whether cash flows are calculated as FCFF or FCFE, the result of the company valuation will be identical.
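
    The consistency claim can be checked numerically. In this Python sketch (all figures invented), the equity weight in each period's WACC is taken from the equity values obtained by discounting FCFE at the cost of equity, and the FCFF valuation then reproduces the FCFE result exactly:

      k_e, k_d, tax = 0.12, 0.06, 0.19
      D = [100.0, 60.0, 20.0, 0.0]      # debt at t = 0..3 (from the plan)
      FCFE = [None, 30.0, 35.0, 160.0]  # free cash flow to equity, t = 1..3

      # 1) Equity value by the FCFE method, discounted at the cost of equity.
      E = [0.0] * 4                     # E[3] = 0: firm wound up after year 3
      for t in (3, 2, 1):
          E[t - 1] = (FCFE[t] + E[t]) / (1 + k_e)

      # 2) FCFF derived consistently from FCFE, after-tax interest and
      #    changes in debt.
      FCFF = [None] + [
          FCFE[t] + k_d * D[t - 1] * (1 - tax) - (D[t] - D[t - 1])
          for t in (1, 2, 3)
      ]

      # 3) Firm value by the FCFF method, with WACC weights taken from the
      #    FCFE-derived equity values (the article's proposal).
      V = [0.0] * 4
      for t in (3, 2, 1):
          w = (E[t - 1] * k_e + D[t - 1] * k_d * (1 - tax)) / (E[t - 1] + D[t - 1])
          V[t - 1] = (FCFF[t] + V[t]) / (1 + w)

      print(f"equity via FCFE: {E[0]:.4f}")
      print(f"equity via FCFF: {V[0] - D[0]:.4f}")  # identical by construction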

  8. New method of scoliosis assessment: preliminary results using computerized photogrammetry.

    Science.gov (United States)

    Aroeira, Rozilene Maria Cota; Leal, Jefferson Soares; de Melo Pertence, Antônio Eustáquio

    2011-09-01

    A new method for nonradiographic evaluation of scoliosis was independently compared with the Cobb radiographic method for the quantification of scoliotic curvature. The aims were to develop a protocol for computerized photogrammetry, as a nonradiographic method, for the quantification of scoliosis, and to mathematically relate this proposed method to the Cobb radiographic method. Repeated exposure of children to radiation can be harmful to their health. Nevertheless, no nonradiographic method proposed until now has gained popularity as a routine method for evaluation, mainly due to a low correspondence with the Cobb radiographic method. Patients undergoing standing posteroanterior full-length spine radiographs who were willing to participate in this study underwent dorsal digital photography in the orthostatic position with special surface markers over the spinous processes of the vertebrae C7 to L5. The radiographic and photographic images were sent separately for independent analysis to two examiners, trained in quantification of scoliosis for the types of images received. The scoliosis curvature angles obtained through computerized photogrammetry (the new method) were compared to those obtained through the Cobb radiographic method. Sixteen individuals were evaluated (14 female and 2 male). All presented idiopathic scoliosis, with a mean age of 21.4 ± 6.1 years, weight of 52.9 ± 5.8 kg, and height of 1.63 ± 0.05 m, with a body mass index of 19.8 ± 0.2. There was no statistically significant difference between the scoliosis angle measurements obtained in the comparative analysis of both methods, and a mathematical relationship was formulated between both methods. The preliminary results presented demonstrate equivalence between the two methods. More studies are needed to firmly assess the potential of this new method as a coadjuvant tool in the routine following of scoliosis treatment.

  9. Convergence results for a class of abstract continuous descent methods

    Directory of Open Access Journals (Sweden)

    Sergiu Aizicovici

    2004-03-01

    Full Text Available We study continuous descent methods for the minimization of Lipschitzian functions defined on a general Banach space. We establish convergence theorems for those methods which are generated by approximate solutions to evolution equations governed by regular vector fields. Since the complement of the set of regular vector fields is σ-porous, we conclude that our results apply to most vector fields in the sense of Baire's categories.

  10. Visual Display of Scientific Studies, Methods, and Results

    Science.gov (United States)

    Saltus, R. W.; Fedi, M.

    2015-12-01

    The need for efficient and effective communication of scientific ideas becomes more urgent each year. A growing number of societal and economic issues are tied to matters of science - e.g., climate change, natural resource availability, and public health. Societal and political debate should be grounded in a general understanding of scientific work in relevant fields. It is difficult for many participants in these debates to access science directly because the formal method for scientific documentation and dissemination is the journal paper, generally written for a highly technical and specialized audience. Journal papers are very effective and important for documentation of scientific results and are essential to the requirements of science to produce citable and repeatable results. However, journal papers are not effective at providing a quick and intuitive summary useful for public debate. Just as quantitative data are generally best viewed in graphic form, we propose that scientific studies also can benefit from visual summary and display. We explore the use of existing methods for diagramming logical connections and dependencies, such as Venn diagrams, mind maps, flow charts, etc., for rapidly and intuitively communicating the methods and results of scientific studies. We also discuss a method specifically tailored to summarizing scientific papers, which we introduced last year at AGU. Our method diagrams the relative importance and connections between data, methods/models, results/ideas, and implications/importance using a single-page format with connected elements in these four categories. Within each category (e.g., data), the spatial location of individual elements (e.g., seismic, topographic, gravity) indicates relative novelty (e.g., are these data new?) and importance (e.g., how critical are these data to the results of the paper?). The goal is to find ways to rapidly and intuitively share both the results and the process of science, both for communication

  11. Life cycle analysis of electricity systems: Methods and results

    International Nuclear Information System (INIS)

    Friedrich, R.; Marheineke, T.

    1996-01-01

    The two methods for full energy chain analysis, process analysis and input/output analysis, are discussed. A combination of these two methods provides the most accurate results. Such a hybrid analysis of the full energy chains of six different power plants is presented and discussed. The results of such analyses depend on the time, site and technique of each process step and therefore have no general validity. For renewable energy systems, the emissions from the generation of a back-up system should be added. (author). 7 figs, 1 fig
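
    A toy Python version of the hybrid idea (invented matrices; real hybrid LCA models have hundreds of sectors): process analysis supplies the foreground's direct emissions, while input/output analysis supplies background intensities through the Leontief inverse.

      import numpy as np

      # Input/output technology matrix A (purchases per unit output) for a toy
      # 2-sector background economy, and sectoral emission factors f (kg CO2/$).
      A = np.array([[0.10, 0.20],
                    [0.05, 0.15]])
      f = np.array([0.8, 1.3])

      # Background intensity per $ of final demand: f (I - A)^-1.
      intensity = f @ np.linalg.inv(np.eye(2) - A)

      # Foreground process analysis: direct emissions of the power plant
      # chain, plus upstream purchases from the two background sectors.
      direct = 120.0                    # kg CO2 per functional unit (measured)
      purchases = np.array([5.0, 2.0])  # $ bought from each background sector
      total = direct + intensity @ purchases
      print(f"life cycle emissions: {total:.1f} kg CO2 per functional unit")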

  12. Aircraft Engine Gas Path Diagnostic Methods: Public Benchmarking Results

    Science.gov (United States)

    Simon, Donald L.; Borguet, Sebastien; Leonard, Olivier; Zhang, Xiaodong (Frank)

    2013-01-01

    Recent technology reviews have identified the need for objective assessments of aircraft engine health management (EHM) technologies. To help address this issue, a gas path diagnostic benchmark problem has been created and made publicly available. This software tool, referred to as the Propulsion Diagnostic Method Evaluation Strategy (ProDiMES), has been constructed based on feedback provided by the aircraft EHM community. It provides a standard benchmark problem enabling users to develop, evaluate and compare diagnostic methods. This paper will present an overview of ProDiMES along with a description of four gas path diagnostic methods developed and applied to the problem. These methods, which include analytical and empirical diagnostic techniques, will be described and associated blind-test-case metric results will be presented and compared. Lessons learned along with recommendations for improving the public benchmarking processes will also be presented and discussed.

  13. Pathway-based outlier method reveals heterogeneous genomic structure of autism in blood transcriptome.

    Science.gov (United States)

    Campbell, Malcolm G; Kohane, Isaac S; Kong, Sek Won

    2013-09-24

    Decades of research strongly suggest that the genetic etiology of autism spectrum disorders (ASDs) is heterogeneous. However, most published studies focus on group differences between cases and controls. In contrast, we hypothesized that the heterogeneity of the disorder could be characterized by identifying pathways for which individuals are outliers rather than pathways representative of shared group differences of the ASD diagnosis. Two previously published blood gene expression data sets were analyzed: the Translational Genetics Research Institute (TGen) dataset (70 cases and 60 unrelated controls) and the Simons Simplex Consortium (Simons) dataset (221 probands and 191 unaffected family members). All individuals of each dataset were projected to biological pathways, and each sample's Mahalanobis distance from a pooled centroid was calculated to compare the number of case and control outliers for each pathway. Analysis of a set of blood gene expression profiles from 70 ASD and 60 unrelated controls revealed three pathways whose outliers were significantly overrepresented in the ASD cases: neuron development including axonogenesis and neurite development (29% of ASD, 3% of controls), nitric oxide signaling (29%, 3%), and skeletal development (27%, 3%). Overall, 50% of cases and 8% of controls were outliers in one of these three pathways, which could not be identified using group comparison or gene-level outlier methods. In an independently collected data set consisting of 221 ASD and 191 unaffected family members, outliers in the neurogenesis pathway were heavily biased towards cases (20.8% of ASD, 12.0% of controls). Interestingly, neurogenesis outliers were more common among unaffected family members (Simons) than unrelated controls (TGen), but the statistical significance of this effect was marginal (chi-squared P < 0.09). Unlike group difference approaches, our analysis identified the samples within the case and control groups that manifested each expression
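
    The outlier statistic described above can be sketched in a few lines of Python (synthetic data and a hypothetical pathway; the paper's gene sets and thresholds are not reproduced here):

      import numpy as np
      from scipy.stats import chi2

      rng = np.random.default_rng(0)

      # Synthetic expression matrix: 130 samples x 200 genes (70 cases,
      # 60 controls), and a hypothetical pathway as a set of gene indices.
      expr = rng.normal(size=(130, 200))
      is_case = np.array([True] * 70 + [False] * 60)
      pathway = [3, 17, 42, 58, 99]

      # Project all samples onto the pathway's genes; score each sample's
      # Mahalanobis distance from the pooled centroid.
      X = expr[:, pathway]
      centered = X - X.mean(axis=0)
      cov_inv = np.linalg.inv(np.cov(centered, rowvar=False))
      d2 = np.einsum('ij,jk,ik->i', centered, cov_inv, centered)

      # Call outliers beyond the chi-square 97.5th percentile for this
      # pathway's dimensionality, then compare the two groups.
      outlier = d2 > chi2.ppf(0.975, df=len(pathway))
      print(f"case outliers:    {outlier[is_case].mean():.1%}")
      print(f"control outliers: {outlier[~is_case].mean():.1%}")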

  14. A Fuzzy Logic Based Method for Analysing Test Results

    Directory of Open Access Journals (Sweden)

    Le Xuan Vinh

    2017-11-01

    Full Text Available Network operators must perform many tasks to ensure smooth operation of the network, such as planning, monitoring, etc. Among those tasks, regular testing of network performance and network errors, and troubleshooting, is very important. Meaningful test results allow the operators to evaluate network performance and any shortcomings, and to better plan for network upgrades. Due to the diverse and mainly unquantifiable nature of network testing results, there is a need to develop a method for systematically and rigorously analysing these results. In this paper, we present STAM (System Test-result Analysis Method), which employs a bottom-up hierarchical processing approach using fuzzy logic. STAM is capable of combining all test results into a quantitative description of the network performance in terms of network stability, the significance of various network errors, and the performance of each function block within the network. The validity of this method has been successfully demonstrated in assisting the testing of a VoIP system at the Research Institute of Post and Telecoms in Vietnam. The paper is organized as follows. The first section gives an overview of fuzzy logic theory, the concepts of which will be used in the development of STAM. The next section describes STAM. The last section, demonstrating STAM's capability, presents a success story in which STAM is successfully applied.
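
    The abstract does not disclose STAM's rule base, so this Python sketch shows only the generic fuzzy-logic machinery such a method rests on: triangular membership functions, min/max rule evaluation, and a weighted-centroid defuzzification (all inputs, rules and class prototypes are invented):

      import numpy as np

      def tri(x, a, b, c):
          """Triangular membership function peaking at b."""
          return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

      # Two crisp test results, normalized to [0, 10]: packet-loss severity
      # and call-setup delay severity (invented inputs, chosen so that at
      # least one rule fires and the centroid below is well defined).
      loss, delay = 3.0, 7.0

      # Fuzzify: degree to which each input is "low" or "high".
      loss_low,  loss_high  = tri(loss, -5, 0, 5),  tri(loss, 5, 10, 15)
      delay_low, delay_high = tri(delay, -5, 0, 5), tri(delay, 5, 10, 15)

      # Rules (min for AND, max for OR), each concluding a stability class:
      #   R1: loss low  AND delay low  -> network stable
      #   R2: loss high OR  delay high -> network unstable
      w_stable = min(loss_low, delay_low)
      w_unstable = max(loss_high, delay_high)

      # Defuzzify with a weighted centroid of class prototypes
      # (stable = 9, unstable = 2 on the 0-10 scale).
      score = (w_stable * 9 + w_unstable * 2) / (w_stable + w_unstable)
      print(f"network stability score: {score:.2f} / 10")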

  15. Evaluating rehabilitation methods - some practical results from Rum Jungle

    International Nuclear Information System (INIS)

    Ryan, P.

    1987-01-01

    Research into and analysis of the following aspects of rehabilitation have been conducted at the Rum Jungle mine site over the past three years: drainage structure stability; rock batter stability; soil fauna; tree growth in compacted soils; rehabilitation costs. The results show that, for future rehabilitation projects adopting refined methods, attention to final construction detail and biospheric influences is most important. The mine site offers a unique opportunity to evaluate the success of a variety of rehabilitation methods, to the benefit of the industry in Australia and overseas. It is intended that practical, economic research will continue for some considerable time

  16. Microbial Diversity of Browning Peninsula, Eastern Antarctica Revealed Using Molecular and Cultivation Methods.

    Science.gov (United States)

    Pudasaini, Sarita; Wilson, John; Ji, Mukan; van Dorst, Josie; Snape, Ian; Palmer, Anne S; Burns, Brendan P; Ferrari, Belinda C

    2017-01-01

    Browning Peninsula is an ice-free polar desert situated in the Windmill Islands, Eastern Antarctica. The entire site is described as a barren landscape, comprised of frost boils with soils dominated by microbial life. In this study, we explored the microbial diversity and edaphic drivers of community structure across this site using traditional cultivation methods, a novel approach, the soil substrate membrane system (SSMS), and culture-independent 454-tag pyrosequencing. The measured soil environmental and microphysical factors of chlorine, phosphate, aspect and elevation were found to be significant drivers of the bacterial community, while none of the soil parameters analyzed was significantly correlated with the fungal community. Overall, Browning Peninsula soil harbored a distinctive microbial community in comparison to other Antarctic soils, comprised of a unique bacterial diversity and extremely limited fungal diversity. Tag pyrosequencing data revealed the bacterial community to be dominated by Actinobacteria (36%), followed by Chloroflexi (18%), Cyanobacteria (14%), and Proteobacteria (10%). For fungi, Ascomycota (97%) dominated the soil microbiome, followed by Basidiomycota. As expected, the diversity recovered from culture-based techniques was lower than that detected using tag sequencing. However, in the SSMS enrichments, which mimic the natural conditions for cultivating oligophilic "k-selected" bacteria, a larger proportion of rare bacterial taxa (15%), such as Blastococcus, Devosia, Herbaspirillum, Propionibacterium and Methylocella, and fungal taxa (11%), such as Nigrospora, Exophiala, Hortaea, and Penidiella, were recovered at the genus level. At the phylum level, a comparison of OTUs showed that the SSMS shared 21% of Acidobacteria, 11% of Actinobacteria and 10% of Proteobacteria OTUs with soil. For fungi, the shared OTUs were 4% (Basidiomycota) and <0.5% (Ascomycota). This was the first known attempt to culture microfungi using the SSMS, which resulted in

  17. Multiple predictor smoothing methods for sensitivity analysis: Example results

    International Nuclear Information System (INIS)

    Storlie, Curtis B.; Helton, Jon C.

    2008-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described in the first part of this presentation: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. In this, the second and concluding part of the presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present
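
    As a minimal illustration of smoothing-based sensitivity analysis (simple one-variable-at-a-time screening with LOESS, not the paper's stepwise procedures), fitted smooths of the output on each input can rank inputs by variance explained:

      import numpy as np
      from statsmodels.nonparametric.smoothers_lowess import lowess

      rng = np.random.default_rng(1)

      # Synthetic model: y depends nonlinearly on x1, weakly on x2, not on x3.
      n = 400
      X = rng.uniform(-1, 1, size=(n, 3))
      y = np.sin(3 * X[:, 0]) + 0.3 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=n)

      for i in range(3):
          # Fitted LOESS values, aligned with the original sample order.
          fit = lowess(y, X[:, i], frac=0.4, return_sorted=False)
          r2 = 1 - np.var(y - fit) / np.var(y)  # variance explained by x_i alone
          print(f"x{i + 1}: R^2 = {r2:.2f}")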

  18. Quantifying the measurement uncertainty of results from environmental analytical methods.

    Science.gov (United States)

    Moser, J; Wegscheider, W; Sperka-Gottlieb, C

    2001-07-01

    The Eurachem-CITAC Guide Quantifying Uncertainty in Analytical Measurement was put into practice in a public laboratory devoted to environmental analytical measurements. In doing so due regard was given to the provisions of ISO 17025 and an attempt was made to base the entire estimation of measurement uncertainty on available data from the literature or from previously performed validation studies. Most environmental analytical procedures laid down in national or international standards are the result of cooperative efforts and put into effect as part of a compromise between all parties involved, public and private, that also encompasses environmental standards and statutory limits. Central to many procedures is the focus on the measurement of environmental effects rather than on individual chemical species. In this situation it is particularly important to understand the measurement process well enough to produce a realistic uncertainty statement. Environmental analytical methods will be examined as far as necessary, but reference will also be made to analytical methods in general and to physical measurement methods where appropriate. This paper describes ways and means of quantifying uncertainty for frequently practised methods of environmental analysis. It will be shown that operationally defined measurands are no obstacle to the estimation process as described in the Eurachem/CITAC Guide if it is accepted that the dominating component of uncertainty comes from the actual practice of the method as a reproducibility standard deviation.
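
    The Guide's core arithmetic is short enough to sketch (a generic budget with invented components, not the laboratory's): standard uncertainties are combined in quadrature via sensitivity coefficients, then expanded with a coverage factor.

      import math

      # Uncertainty budget for a result y = f(x1, x2, x3): each entry is the
      # standard uncertainty u(x_i) and the sensitivity coefficient dy/dx_i.
      budget = [
          (0.012, 1.0),   # e.g. calibration standard
          (0.008, 2.5),   # e.g. recovery correction
          (0.020, 0.6),   # e.g. run-to-run reproducibility
      ]

      # Combined standard uncertainty (GUM law of propagation, uncorrelated
      # inputs), then expanded uncertainty with coverage factor k = 2 (~95 %).
      u_c = math.sqrt(sum((u * c) ** 2 for u, c in budget))
      U = 2 * u_c
      print(f"u_c = {u_c:.4f}, U(k=2) = {U:.4f}")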

  19. [Adverse events management. Methods and results of a development project].

    Science.gov (United States)

    Rabøl, Louise Isager; Jensen, Elisabeth Brøgger; Hellebek, Annemarie H; Pedersen, Beth Lilja

    2006-11-27

    This article describes the methods and results of a project in the Copenhagen Hospital Corporation (H:S) on preventing adverse events. The aim of the project was to raise awareness about patients' safety, test a reporting system for adverse events, develop and test methods of analysis of events and propagate ideas about how to prevent adverse events. H:S developed an action plan and a reporting system for adverse events, founded an organization and developed an educational program on theories and methods of learning from adverse events for both leaders and employees. During the three-year period from 1 January 2002 to 31 December 2004, the H:S staff reported 6011 adverse events. In the same period, the organization completed 92 root cause analyses. More than half of these dealt with events that had been optional to report, the other half events that had been mandatory to report. The number of reports and the front-line staff's attitude towards reporting shows that the H:S succeeded in founding a safety culture. Future work should be centred on developing and testing methods that will prevent adverse events from happening. The objective is to suggest and complete preventive initiatives which will help increase patient safety.

  20. Revealed Preference Methods for Studying Bicycle Route Choice—A Systematic Review

    Directory of Open Access Journals (Sweden)

    Ray Pritchard

    2018-03-01

    Full Text Available One fundamental aspect of promoting utilitarian bicycle use involves making modifications to the built environment to improve the safety, efficiency and enjoyability of cycling. Revealed preference data on bicycle route choice can assist greatly in understanding the actual behaviour of a highly heterogeneous group of users, which in turn assists the prioritisation of infrastructure or other built environment initiatives. This systematic review seeks to compare the relative strengths and weaknesses of the empirical approaches for evaluating whole journey route choices of bicyclists. Two electronic databases were systematically searched for a selection of keywords pertaining to bicycle and route choice. In total seven families of methods are identified: GPS devices, smartphone applications, crowdsourcing, participant-recalled routes, accompanied journeys, egocentric cameras and virtual reality. The study illustrates a trade-off in the quality of data obtainable and the average number of participants. Future additional methods could include dockless bikeshare, multiple camera solutions using computer vision and immersive bicycle simulator environments.

  1. A Rapid Colorimetric Method Reveals Fraudulent Substitutions in Sea Urchin Roe Marketed in Sardinia (Italy).

    Science.gov (United States)

    Meloni, Domenico; Spina, Antonio; Satta, Gianluca; Chessa, Vittorio

    2016-06-25

    In recent years, besides the consumption of fresh sea urchin specimens, the demand for minimally-processed roe has grown considerably. This product has made frequent consumption in restaurants possible, and frauds are becoming widespread, with the partial replacement of sea urchin roe by surrogates that are similar in colour. One of the main factors that determines the quality of the roe is its colour, and small differences in colour scale cannot be easily discerned by consumers. In this study we applied a rapid colorimetric method to reveal the fraudulent partial substitution of semi-solid sea urchin roe with liquid egg yolk. Objective assessment of whiteness (L*), redness (a*), yellowness (b*), hue (h*), and chroma (C*) was carried out with a digital spectrophotometer using the CIE L*a*b* colour measurement system. The colorimetric method highlighted statistically significant differences between sea urchin roe and liquid egg yolk that could be easily discerned quantitatively.
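
    The colour coordinates named above combine as follows (illustrative values, not the study's measurements); the Python sketch also computes the CIE76 colour difference that quantifies how far apart two samples are:

      import math

      # CIE L*a*b* readings (invented, for illustration) for sea urchin roe
      # and liquid egg yolk.
      roe = (55.2, 18.4, 32.1)    # L*, a*, b*
      yolk = (68.9, 10.2, 45.7)

      def chroma_hue(lab):
          L, a, b = lab
          c = math.hypot(a, b)                      # chroma C*
          h = math.degrees(math.atan2(b, a)) % 360  # hue angle h*
          return c, h

      for name, lab in (("roe", roe), ("yolk", yolk)):
          c, h = chroma_hue(lab)
          print(f"{name}: C* = {c:.1f}, h* = {h:.1f} deg")

      # CIE76 colour difference between the two samples.
      dE = math.dist(roe, yolk)
      print(f"delta E*ab = {dE:.1f}")   # above ~2-3 is visually perceptible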

  2. Processing method and results of meteor shower radar observations

    International Nuclear Information System (INIS)

    Belkovich, O.I.; Suleimanov, N.I.; Tokhtasjev, V.S.

    1987-01-01

    Studies of meteor showers permit the solving of some principal problems of meteor astronomy: to obtain the structure of a stream in cross section and along its orbits; to retrace the evolution of particle orbits of the stream taking into account gravitational and nongravitational forces and to discover the orbital elements of its parent body; to find out the total mass of solid particles ejected from the parent body taking into account physical and chemical evolution of meteor bodies; and to use meteor streams as natural probes for investigation of the average characteristics of the meteor complex in the solar system. A simple and effective method of determining the flux density and mass exponent parameter was worked out. This method and its results are discussed

  3. Method of vacuum correlation functions: Results and prospects

    International Nuclear Information System (INIS)

    Badalian, A. M.; Simonov, Yu. A.; Shevchenko, V. I.

    2006-01-01

    Basic results obtained within the QCD method of vacuum correlation functions over the past 20 years in the context of investigations into strong-interaction physics at the Institute of Theoretical and Experimental Physics (ITEP, Moscow) are formulated. Emphasis is placed primarily on the prospects of the general theory developed within QCD by employing both nonperturbative and perturbative methods. On the basis of ab initio arguments, it is shown that the lowest two field correlation functions play a dominant role in QCD dynamics. A quantitative theory of confinement and deconfinement, as well as of the spectra of light and heavy quarkonia, glueballs, and hybrids, is given in terms of these two correlation functions. Perturbation theory in a nonperturbative vacuum (background perturbation theory) plays a significant role, not possessing the drawbacks of conventional perturbation theory and leading to the infrared freezing of the coupling constant α_s

  4. Application of NUREG-1150 methods and results to accident management

    International Nuclear Information System (INIS)

    Dingman, S.; Sype, T.; Camp, A.; Maloney, K.

    1991-01-01

    The use of NUREG-1150 and similar probabilistic risk assessments in the Nuclear Regulatory Commission (NRC) and industry risk management programs is discussed. Risk management is more comprehensive than the commonly used term accident management. Accident management includes strategies to prevent vessel breach, mitigate radionuclide releases from the reactor coolant system, and mitigate radionuclide releases to the environment. Risk management also addresses prevention of accident initiators, prevention of core damage, and implementation of effective emergency response procedures. The methods and results produced in NUREG-1150 provide a framework within which current risk management strategies can be evaluated, and future risk management programs can be developed and assessed. Examples of the use of the NUREG-1150 framework for identifying and evaluating risk management options are presented. All phases of risk management are discussed, with particular attention given to the early phases of accidents. Plans and methods for evaluating accident management strategies that have been identified in the NRC accident management program are discussed

  5. Application of NUREG-1150 methods and results to accident management

    International Nuclear Information System (INIS)

    Dingman, S.; Sype, T.; Camp, A.; Maloney, K.

    1990-01-01

    The use of NUREG-1150 and similar Probabilistic Risk Assessments in NRC and industry risk management programs is discussed. "Risk management" is more comprehensive than the commonly used term "accident management." Accident management includes strategies to prevent vessel breach, mitigate radionuclide releases from the reactor coolant system, and mitigate radionuclide releases to the environment. Risk management also addresses prevention of accident initiators, prevention of core damage, and implementation of effective emergency response procedures. The methods and results produced in NUREG-1150 provide a framework within which current risk management strategies can be evaluated, and future risk management programs can be developed and assessed. Examples of the use of the NUREG-1150 framework for identifying and evaluating risk management options are presented. All phases of risk management are discussed, with particular attention given to the early phases of accidents. Plans and methods for evaluating accident management strategies that have been identified in the NRC accident management program are discussed. 2 refs., 3 figs

  6. Performance of various mathematical methods for calculation of radioimmunoassay results

    International Nuclear Information System (INIS)

    Sandel, P.; Vogt, W.

    1977-01-01

    Interpolation and regression methods are available for the computer-aided determination of radioimmunological end results. We compared the performance of eight algorithms (weighted and unweighted linear logit-log regression, quadratic logit-log regression, Rodbard's logistic model in weighted and unweighted form, smoothing spline interpolation with a large and a small smoothing factor, and polygonal interpolation) on the basis of three radioimmunoassays with different reference curve characteristics (digoxin, estriol, human chorionic somatomammotropin = HCS). Particular emphasis was placed on the accuracy of the approximation at the intermediate points on the curve, i.e., those points that lie midway between two standard concentrations. These concentrations were obtained by weighing and inserted as unknown samples. In the case of digoxin and estriol the polygonal interpolation provided the best results, while the weighted logit-log regression proved superior in the case of HCS. (orig.) [de
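
    For illustration, the sketch below shows a minimal logit-log calibration fit of the kind compared in this study: logit(B/B0) is regressed against log concentration and the fit is inverted to read off unknown samples. The standards and the model form are demonstration assumptions, not the authors' implementation.

    ```python
    import numpy as np

    # Hypothetical standard curve: bound fraction y = B/B0 at known concentrations x.
    x_std = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0])       # ng/ml (synthetic values)
    y_std = np.array([0.85, 0.74, 0.58, 0.41, 0.27, 0.16])  # bound fraction

    # Logit-log model: logit(y) = a + b * ln(x); a weighted fit would pass w= to polyfit.
    logit = np.log(y_std / (1.0 - y_std))
    b, a = np.polyfit(np.log(x_std), logit, 1)

    def concentration(y: float) -> float:
        """Invert the fitted reference curve to estimate concentration from a bound fraction."""
        return float(np.exp((np.log(y / (1.0 - y)) - a) / b))

    print(concentration(0.5))  # estimate for an 'unknown' sample read off the curve
    ```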

  7. Transparency Trade-offs for a 3-channel Controller Revealed by the Bounded Environment Passivity Method

    OpenAIRE

    Willaert, Bert; Corteville, Brecht; Reynaerts, Dominiek; Van Brussel, Hendrik; Vander Poorten, Emmanuel

    2010-01-01

    In this paper, the Bounded Environment Passivity method [1] is applied to a 3-channel controller. This method enables the design of teleoperation controllers that show passive behaviour for interactions with a bounded range of environments. The resulting tuning guidelines, derived analytically, provide interesting tuning flexibility, which allows one to focus on different aspects of transparency. As telesurgery is the motivation behind this work, the focus lies on correctly r...

  8. Methodics of computing the results of monitoring the exploratory gallery

    Directory of Open Access Journals (Sweden)

    Krúpa Víazoslav

    2000-09-01

    Full Text Available At the building site of the Višňové–Dubná skala motorway tunnel, priority is given to the driving of an exploration gallery that provides detailed geological, engineering-geological, hydrogeological and geotechnical research. This research gathers information for the planned use of a full-profile driving machine to drive the motorway tunnel. In the part of the exploration gallery driven by the TBM method, detailed information about the parameters of the driving process is gathered by a computer monitoring system mounted on the driving machine. The monitoring system is based on an industrial PC 104 computer and records four basic values of the driving process: the electromotor performance of the Voest-Alpine ATB 35HA driving machine, the speed of the driving advance, the rotation speed of the TBM disintegrating head, and the total head pressure. The pressure force is evaluated from the pressure in the hydraulic cylinders of the machine. From these values, rock mass properties such as the strength of the rock mass and the angle of internal friction, as well as their changes, are calculated. To quantify the effectiveness of the driving process, the specific energy and the working ability of the driving head are used. The article defines the methodology for processing the gathered monitoring data, prepared for the Voest-Alpine ATB 35HA driving machine at the Institute of Geotechnics SAS. It describes the input forms (protocols) of the developed method, created in EXCEL, and shows selected samples of the graphical elaboration of the first monitoring results obtained from the exploratory gallery driving process in the Višňové–Dubná skala motorway tunnel.
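
    As a rough illustration of the derived quantities mentioned above, the sketch below computes a Teale-style specific energy from monitored TBM values (thrust term plus rotary term). The formula is the common textbook definition, and all numbers are placeholder assumptions; the record does not give the actual equations used at the Institute of Geotechnics.

    ```python
    import math

    # Hypothetical monitored values for one advance interval (assumed units).
    thrust_n = 9.0e6            # total head pressure force [N]
    torque_nm = 1.2e6           # disintegrating-head torque [N*m]
    rpm = 6.0                   # rotation speed of the head [rev/min]
    advance_m_per_min = 0.04    # speed of driving advance [m/min]
    head_area_m2 = math.pi * (3.5 / 2) ** 2  # face area for an assumed 3.5 m head diameter

    # Teale's specific energy: thrust term plus rotary term, in J/m^3.
    se_thrust = thrust_n / head_area_m2
    se_rotary = (2 * math.pi * rpm * torque_nm) / (head_area_m2 * advance_m_per_min)
    print(f"specific energy = {(se_thrust + se_rotary) / 1e6:.1f} MJ/m^3")
    ```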

  9. Radioimmunological determination of plasma progesterone. Methods - Results - Indications

    International Nuclear Information System (INIS)

    Gonon-Estrangin, Chantal.

    1978-10-01

    The aim of this work is to describe the radioimmunological determination of plasma progesterone carried out at the Hormonology Laboratory of the Grenoble University Hospital Centre (Professor E. Chambaz), to compare our results with those of the literature, and to present the main clinical indications of this analysis. The measurement method has proved reproducible, specific (the steroid purification stage is unnecessary) and sensitive (detection limit: 10 picograms of progesterone per tube). In seven normally menstruating women our results agree with published values (in nanograms per millilitre, ng/ml): 0.07 ng/ml to 0.9 ng/ml in the follicular phase, from the start of menstruation until ovulation; a rapid increase at ovulation with a maximum in the middle of the luteal phase (our values for this maximum range from 7.9 ng/ml to 21.7 ng/ml); and a gradual drop in progesterone secretion until the next menstrual period. In gynecology the radioimmunoassay of plasma progesterone is valuable for diagnostic and therapeutic purposes: to diagnose the absence of a corpus luteum, and to judge the effectiveness of an ovulation induction treatment [fr

  10. Two-step extraction method for lead isotope fractionation to reveal anthropogenic lead pollution.

    Science.gov (United States)

    Katahira, Kenshi; Moriwaki, Hiroshi; Kamura, Kazuo; Yamazaki, Hideo

    2018-05-28

    This study developed a two-step extraction method which eluted the Pb adsorbed on the surface of sediment particles into a first solution of aqua regia, and extracted the Pb bound inside the particles into a second solution, a mixed acid of nitric acid, hydrofluoric acid and hydrogen peroxide. We applied the method to sediments in an enclosed water area and found that the isotope ratios of Pb in the second solution represented those of natural origin. This advantage of the method makes it possible to distinguish Pb of natural origin from that of anthropogenic sources on the basis of the isotope ratios. The results showed that the method was useful for discussing the Pb sources and that the anthropogenic Pb in the sediment samples analysed was mainly derived from China via transboundary air pollution.
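
    Such isotope ratios are often used quantitatively through a two-end-member mixing calculation; the sketch below is a minimal illustration of that idea, with the end-member 206Pb/207Pb ratios chosen purely as placeholders (the record reports no numeric values).

    ```python
    def anthropogenic_fraction(r_sample: float, r_natural: float, r_anthro: float) -> float:
        """Two-end-member mixing: fraction of Pb attributable to the anthropogenic source.

        Assumes the sample ratio is a linear mixture of the two end-member ratios, which
        holds approximately when the Pb concentrations of the two sources are similar.
        """
        return (r_sample - r_natural) / (r_anthro - r_natural)

    # Placeholder 206Pb/207Pb ratios, not values from the study.
    print(anthropogenic_fraction(r_sample=1.175, r_natural=1.20, r_anthro=1.16))  # -> 0.625
    ```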

  11. Lesion insertion in the projection domain: Methods and initial results

    International Nuclear Information System (INIS)

    Chen, Baiyu; Leng, Shuai; Yu, Lifeng; Yu, Zhicong; Ma, Chi; McCollough, Cynthia

    2015-01-01

    Purpose: To perform task-based image quality assessment in CT, it is desirable to have a large number of realistic patient images with known diagnostic truth. One effective way of achieving this objective is to create hybrid images that combine patient images with inserted lesions. Because conventional hybrid images generated in the image domain fail to reflect the impact of scan and reconstruction parameters on lesion appearance, this study explored a projection-domain approach. Methods: Lesions were segmented from patient images and forward projected to acquire lesion projections. The forward-projection geometry was designed according to a commercial CT scanner and accommodated both axial and helical modes with various focal spot movement patterns. The energy employed by the commercial CT scanner for beam hardening correction was measured and used for the forward projection. The lesion projections were inserted into patient projections decoded from commercial CT projection data. The combined projections were formatted to match those of commercial CT raw data, loaded onto a commercial CT scanner, and reconstructed to create the hybrid images. Two validations were performed. First, to validate the accuracy of the forward-projection geometry, images were reconstructed from the forward projections of a virtual ACR phantom and compared to physically acquired ACR phantom images in terms of CT number accuracy and high-contrast resolution. Second, to validate the realism of the lesion in hybrid images, liver lesions were segmented from patient images and inserted back into the same patients, each at a new location specified by a radiologist. The inserted lesions were compared to the original lesions and visually assessed for realism by two experienced radiologists in a blinded fashion. Results: For the validation of the forward-projection geometry, the images reconstructed from the forward projections of the virtual ACR phantom were consistent with the images physically
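
    The core pipeline, forward-projecting a segmented lesion and adding it to the patient projections before reconstruction, can be sketched in a simplified parallel-beam setting with scikit-image's Radon transform. This is a conceptual stand-in only; the study modeled the fan/cone-beam geometry, focal spot motion and beam hardening of a commercial scanner, which this sketch does not.

    ```python
    import numpy as np
    from skimage.transform import radon, iradon

    theta = np.linspace(0.0, 180.0, 360, endpoint=False)

    # Stand-ins: a patient slice and a small segmented lesion placed in an empty image.
    patient = np.zeros((256, 256)); patient[64:192, 64:192] = 1.0
    lesion = np.zeros((256, 256)); lesion[120:136, 150:166] = 0.5

    # Forward-project both, insert the lesion in the projection (sinogram) domain,
    # then reconstruct the hybrid image with filtered back-projection.
    patient_sino = radon(patient, theta=theta)
    lesion_sino = radon(lesion, theta=theta)
    hybrid = iradon(patient_sino + lesion_sino, theta=theta, filter_name="ramp")
    ```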

  12. Lesion insertion in the projection domain: Methods and initial results

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Baiyu; Leng, Shuai; Yu, Lifeng; Yu, Zhicong; Ma, Chi; McCollough, Cynthia, E-mail: mccollough.cynthia@mayo.edu [Department of Radiology, Mayo Clinic, Rochester, Minnesota 55905 (United States)

    2015-12-15

    Purpose: To perform task-based image quality assessment in CT, it is desirable to have a large number of realistic patient images with known diagnostic truth. One effective way of achieving this objective is to create hybrid images that combine patient images with inserted lesions. Because conventional hybrid images generated in the image domain fail to reflect the impact of scan and reconstruction parameters on lesion appearance, this study explored a projection-domain approach. Methods: Lesions were segmented from patient images and forward projected to acquire lesion projections. The forward-projection geometry was designed according to a commercial CT scanner and accommodated both axial and helical modes with various focal spot movement patterns. The energy employed by the commercial CT scanner for beam hardening correction was measured and used for the forward projection. The lesion projections were inserted into patient projections decoded from commercial CT projection data. The combined projections were formatted to match those of commercial CT raw data, loaded onto a commercial CT scanner, and reconstructed to create the hybrid images. Two validations were performed. First, to validate the accuracy of the forward-projection geometry, images were reconstructed from the forward projections of a virtual ACR phantom and compared to physically acquired ACR phantom images in terms of CT number accuracy and high-contrast resolution. Second, to validate the realism of the lesion in hybrid images, liver lesions were segmented from patient images and inserted back into the same patients, each at a new location specified by a radiologist. The inserted lesions were compared to the original lesions and visually assessed for realism by two experienced radiologists in a blinded fashion. Results: For the validation of the forward-projection geometry, the images reconstructed from the forward projections of the virtual ACR phantom were consistent with the images physically

  13. Method of Check of Statistical Hypotheses for Revealing of “Fraud” Point of Sale

    Directory of Open Access Journals (Sweden)

    T. M. Bolotskaya

    2011-06-01

    Full Text Available The application of a method of checking statistical hypotheses to "fraud" points of sale that work with purchasing cards and are suspected of carrying out unauthorized operations is analyzed. On the basis of the results obtained, an algorithm is developed that allows an assessment of the operation of terminals in off-line mode.

  14. A novel method for assessing elbow pain resulting from epicondylitis

    Science.gov (United States)

    Polkinghorn, Bradley S.

    2002-01-01

    Abstract Objective To describe a novel orthopedic test (Polk's test) which can assist the clinician in differentiating between medial and lateral epicondylitis, 2 of the most common causes of elbow pain. This test has not been previously described in the literature. Clinical Features The testing procedure described in this paper is easy to learn, simple to perform and may provide the clinician with a quick and effective method of differentiating between lateral and medial epicondylitis. The test also helps to elucidate normal activities of daily living that the patient may unknowingly be performing on a repetitive basis that are hindering recovery. The results of this simple test allow the clinician to make immediate lifestyle recommendations to the patient that should improve and hasten the response to subsequent treatment. It may be used in conjunction with other orthopedic testing procedures, as it correlates well with other clinical tests for assessing epicondylitis. Conclusion The use of Polk's Test may help the clinician to diagnostically differentiate between lateral and medial epicondylitis, as well as supply information relative to choosing proper instructions for the patient to follow as part of their treatment program. Further research, performed in an academic setting, should prove helpful in more thoroughly evaluating the merits of this test. In the meantime, clinical experience over the years suggests that the practicing physician should find a great deal of clinical utility in utilizing this simple, yet effective, diagnostic procedure. PMID:19674572

  15. Comparison of microstickies measurement methods. Part II, Results and discussion

    Science.gov (United States)

    Mahendra R. Doshi; Angeles Blanco; Carlos Negro; Concepcion Monte; Gilles M. Dorris; Carlos C. Castro; Axel Hamann; R. Daniel Haynes; Carl Houtman; Karen Scallon; Hans-Joachim Putz; Hans Johansson; R. A. Venditti; K. Copeland; H.-M. Chang

    2003-01-01

    In part I of the article we discussed the sample preparation procedure and described various methods used for the measurement of microstickies. Some of the important features of the different methods are highlighted in Table 1. Temperatures used in the measurement methods vary from room temperature in some cases to 45 °C to 65 °C in others. Sample size ranges from as low as...

  16. The review and results of different methods for facial recognition

    Science.gov (United States)

    Le, Yifan

    2017-09-01

    In recent years, facial recognition has drawn much attention due to its wide potential applications. As a unique technology in biometric identification, facial recognition represents a significant improvement since it can operate without the cooperation of the people under detection. Hence, facial recognition will be taken into defense systems, medical detection, human behavior understanding, etc. Several theories and methods have been established to make progress in facial recognition: (1) a novel two-stage facial landmark localization method is proposed which has a more accurate facial localization effect on a specific database; (2) a statistical face frontalization method is proposed which outperforms state-of-the-art methods for face landmark localization; (3) a general facial landmark detection algorithm is proposed to handle images with severe occlusion and images with large head poses; (4) three methods are proposed on face alignment, including a shape-augmented regression method, a pose-indexed multi-view method, and a learning-based method via regressing local binary features. The aim of this paper is to analyze previous work on different aspects of facial recognition, focusing on concrete methods and performance under various databases. In addition, some improvement measures and suggestions for potential applications will be put forward.

  17. Resource costing for multinational neurologic clinical trials: methods and results.

    Science.gov (United States)

    Schulman, K; Burke, J; Drummond, M; Davies, L; Carlsson, P; Gruger, J; Harris, A; Lucioni, C; Gisbert, R; Llana, T; Tom, E; Bloom, B; Willke, R; Glick, H

    1998-11-01

    We present the results of a multinational resource costing study for a prospective economic evaluation of a new medical technology for treatment of subarachnoid hemorrhage within a clinical trial. The study describes a framework for the collection and analysis of international resource cost data that can contribute to a consistent and accurate intercountry estimation of cost. Of the 15 countries that participated in the clinical trial, we collected cost information in the following seven: Australia, France, Germany, the UK, Italy, Spain, and Sweden. The collection of cost data in these countries was structured through the use of worksheets to provide accurate and efficient cost reporting. We converted total average costs to average variable costs and then aggregated the data to develop study unit costs. When unit costs were unavailable, we developed an index table, based on a market-basket approach, to estimate unit costs. To estimate the cost of a given procedure, the market-basket estimation process required that cost information be available for at least one country. When cost information was unavailable in all countries for a given procedure, we estimated costs using a method based on physician-work and practice-expense resource-based relative value units. Finally, we converted study unit costs to a common currency using purchasing power parity measures. Through this costing exercise we developed a set of unit costs for patient services and per diem hospital services. We conclude by discussing the implications of our costing exercise and suggest guidelines to facilitate more effective multinational costing exercises.

  18. Psychophysical "blinding" methods reveal a functional hierarchy of unconscious visual processing.

    Science.gov (United States)

    Breitmeyer, Bruno G

    2015-09-01

    Numerous non-invasive experimental "blinding" methods exist for suppressing the phenomenal awareness of visual stimuli. Not all of these suppressive methods occur at, and thus index, the same level of unconscious visual processing. This suggests that a functional hierarchy of unconscious visual processing can in principle be established. The empirical results of extant studies that have used a number of different methods and additional reasonable theoretical considerations suggest the following tentative hierarchy. At the highest levels in this hierarchy is unconscious processing indexed by object-substitution masking. The functional levels indexed by crowding, the attentional blink (and other attentional blinding methods), backward pattern masking, metacontrast masking, continuous flash suppression, sandwich masking, and single-flash interocular suppression, fall at progressively lower levels, while unconscious processing at the lowest levels is indexed by eye-based binocular-rivalry suppression. Although the unconscious processing levels indexed by additional blinding methods are yet to be determined, a tentative placement at lower levels in the hierarchy is also given for unconscious processing indexed by Troxler fading and adaptation-induced blindness, and at higher levels in the hierarchy indexed by attentional blinding effects in addition to the level indexed by the attentional blink. The full mapping of levels in the functional hierarchy onto cortical activation sites and levels is yet to be determined. The existence of such a hierarchy bears importantly on the search for, and the distinctions between, neural correlates of conscious and unconscious vision. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Comparative study of methods on outlying data detection in experimental results

    International Nuclear Information System (INIS)

    Oliveira, P.M.S.; Munita, C.S.; Hazenfratz, R.

    2009-01-01

    The interpretation of experimental results through multivariate statistical methods might reveal the existence of outliers, which is rarely taken into account by analysts. However, their presence can influence the interpretation of the results, generating false conclusions. This paper shows the importance of outlier determination for a data base of 89 samples of ceramic fragments analyzed by neutron activation analysis. The results were submitted to five procedures to detect outliers: Mahalanobis distance, cluster analysis, principal component analysis, factor analysis, and standardized residuals. The results showed that although cluster analysis is one of the procedures most used to identify outliers, it can fail by not showing samples that are easily identified as outliers by other methods. In general, the statistical procedures for the identification of outliers are little known to analysts. (author)
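
    Of the five procedures listed, the Mahalanobis distance is the most direct to sketch: flag samples whose squared distance from the multivariate mean exceeds a chi-square cutoff. The data and the 97.5% cutoff below are illustrative assumptions, not values from the study.

    ```python
    import numpy as np
    from scipy.stats import chi2

    rng = np.random.default_rng(0)
    X = rng.normal(size=(89, 5))   # stand-in for 89 samples x 5 element concentrations
    X[0] += 6.0                    # plant one obvious outlier

    mean = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    diff = X - mean
    d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)  # squared Mahalanobis distances

    cutoff = chi2.ppf(0.975, df=X.shape[1])  # d2 ~ chi2(p) for multivariate normal data
    print(np.where(d2 > cutoff)[0])          # indices of flagged samples
    ```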

  20. Effect of tidal triggering on seismicity in Taiwan revealed by the empirical mode decomposition method

    Directory of Open Access Journals (Sweden)

    H.-J. Chen

    2012-07-01

    Full Text Available The effect of tidal triggering on earthquake occurrence has been controversial for many years. This study considered earthquakes that occurred near Taiwan between 1973 and 2008. Because earthquake data are nonlinear and non-stationary, we applied the empirical mode decomposition (EMD method to analyze the temporal variations in the number of daily earthquakes to investigate the effect of tidal triggering. We compared the results obtained from the non-declustered catalog with those from two kinds of declustered catalogs and discuss the aftershock effect on the EMD-based analysis. We also investigated stacking the data based on in-phase phenomena of theoretical Earth tides with statistical significance tests. Our results show that the effects of tidal triggering, particularly the lunar tidal effect, can be extracted from the raw seismicity data using the approach proposed here. Our results suggest that the lunar tidal force is likely a factor in the triggering of earthquakes.
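
    A minimal sketch of the decomposition step, assuming the PyEMD package (the authors' implementation is not specified): decompose a daily event-count series into intrinsic mode functions and estimate each one's mean period, looking for modes near the ~14.8-day lunar fortnightly tide.

    ```python
    import numpy as np
    from PyEMD import EMD  # pip install EMD-signal

    rng = np.random.default_rng(1)
    days = np.arange(2000)
    # Synthetic daily earthquake counts: Poisson noise plus a weak ~14.8-day modulation.
    rate = 5.0 + 0.8 * np.sin(2 * np.pi * days / 14.77)
    counts = rng.poisson(rate).astype(float)

    imfs = EMD().emd(counts)  # rows are intrinsic mode functions, coarsest last
    for i, imf in enumerate(imfs):
        # Crude period estimate from zero crossings: period ~ 2 * N / crossings.
        crossings = np.sum(np.diff(np.sign(imf)) != 0)
        if crossings:
            print(f"IMF {i}: mean period = {2 * len(imf) / crossings:.1f} days")
    ```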

  1. Results and current trends of nuclear methods used in agriculture

    International Nuclear Information System (INIS)

    Horacek, P.

    1983-01-01

    The significance of nuclear methods for agricultural research is evaluated. The number of varieties induced by radiation mutations is increasing. The main importance of radiation mutation breeding lies in obtaining sources of the desired genetic properties for further hybridization. Radiostimulation is conducted with the aim of increasing yields. The irradiation of foods has not increased substantially worldwide. Very important is the irradiation of excrements and sludges, which after such inactivation of pathogenic microorganisms may be used as humus-forming manure or as feed additives. In some countries, the method of sexual sterilization is successfully being used for the eradication of insect pests. The application of labelled compounds in the nutrition, physiology and protection of plants and farm animals, and in food hygiene, makes it possible to acquire new and accurate knowledge very quickly. Radioimmunoassay is a highly promising method in this respect. Labelling compounds with the stable 15N isotope is used for research on nitrogen metabolism. (M.D.)

  2. Methods of early revealing, prognosis of further course and complications of pollinosis

    Directory of Open Access Journals (Sweden)

    Chukhrienko N.D.

    2013-10-01

    Full Text Available Under our observation there were 59 patients with pollinosis (39 females and 20 males) aged 18 to 68 years. All patients were in the phase of disease exacerbation. The general clinical symptoms were rhinitis, conjunctivitis and bronchial spasm. The results showed that the first clinical manifestations appear in persons of young age. Half of the patients had an aggravated allergologic anamnesis. Taking into account that pollinosis is a typical representative of diseases with a mechanism of immunoglobulin E (IgE)-dependent allergic reactions of the first type, the authors studied in detail the level of IgE and its link with other factors. In practically all patients with pollinosis the level of total IgE exceeded the norm. As a result of the studies performed, it was established that a high IgE level, the presence of a phagocytosis defect and a long duration of illness are the criteria which affect disease progression, aggravation of the patients' state and lower treatment efficacy. Because the development of bronchial obstruction and the transformation of pollinosis into bronchial asthma is the most topical issue nowadays, the authors studied its link with other factors and findings. It was established that the risk of transformation of pollinosis into pollen bronchial asthma increases in the presence of a high level of total IgE, an aggravated allergologic anamnesis, a decrease of forced expiratory volume (FEV) and a long duration of the disease course. In the course of the investigation it was revealed that the highest treatment efficacy is noted in patients receiving allergen-specific therapy; this confirms the data of the world scientific literature. The best treatment results are observed in pollinosis patients with an aggravated family history not in parents but in grandparents.

  3. Results from the Application of Uncertainty Methods in the CSNI Uncertainty Methods Study (UMS)

    International Nuclear Information System (INIS)

    Glaeser, H.

    2008-01-01

    Within licensing procedures there is an incentive to replace the conservative requirements for code application by a "best estimate" concept supplemented by an uncertainty analysis to account for predictive uncertainties of code results. Methods have been developed to quantify these uncertainties. The Uncertainty Methods Study (UMS) Group, following a mandate from CSNI, has compared five methods for calculating the uncertainty in the predictions of advanced "best estimate" thermal-hydraulic codes. Most of the methods identify and combine input uncertainties. The major differences between the predictions of the methods came from the choice of uncertain parameters and the quantification of the input uncertainties, i.e., the width of the uncertainty ranges. Therefore, suitable experimental and analytical information has to be selected to specify these uncertainty ranges or distributions. After the closure of the Uncertainty Methods Study (UMS) and after the report was issued, comparison calculations of experiment LSTF-SB-CL-18 were performed by the University of Pisa using different versions of the RELAP5 code. It turned out that the version used by two of the participants calculated a 170 K higher peak clad temperature compared with other versions using the same input deck. This may contribute to the differences in the upper limits of the uncertainty ranges.
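
    Several of the compared methods propagate randomly sampled input uncertainties and choose the number of code runs with Wilks' formula. The sketch below assumes the standard one-sided, first-order form and reproduces the familiar 59-run figure for a 95%/95% tolerance bound.

    ```python
    import math

    def wilks_sample_size(coverage: float, confidence: float) -> int:
        """Smallest n such that the sample maximum bounds 'coverage' of the output
        population with the given confidence: 1 - coverage**n >= confidence."""
        return math.ceil(math.log(1.0 - confidence) / math.log(coverage))

    print(wilks_sample_size(0.95, 0.95))  # -> 59 code runs for a 95%/95% one-sided bound
    ```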

  4. Application of a hierarchical enzyme classification method reveals the role of gut microbiome in human metabolism

    Science.gov (United States)

    2015-01-01

    Background Enzymes are known as the molecular machines that drive the metabolism of an organism; hence identification of the full enzyme complement of an organism is essential to build the metabolic blueprint of that species as well as to understand the interplay of multiple species in an ecosystem. Experimental characterization of the enzymatic reactions of all enzymes in a genome is a tedious and expensive task. The problem is more pronounced in the metagenomic samples where even the species are not adequately cultured or characterized. Enzymes encoded by the gut microbiota play an essential role in the host metabolism; thus, warranting the need to accurately identify and annotate the full enzyme complements of species in the genomic and metagenomic projects. To fulfill this need, we develop and apply a method called ECemble, an ensemble approach to identify enzymes and enzyme classes and study the human gut metabolic pathways. Results ECemble method uses an ensemble of machine-learning methods to accurately model and predict enzymes from protein sequences and also identifies the enzyme classes and subclasses at the finest resolution. A tenfold cross-validation result shows accuracy between 97 and 99% at different levels in the hierarchy of enzyme classification, which is superior to comparable methods. We applied ECemble to predict the entire complements of enzymes from ten sequenced proteomes including the human proteome. We also applied this method to predict enzymes encoded by the human gut microbiome from gut metagenomic samples, and to study the role played by the microbe-derived enzymes in the human metabolism. After mapping the known and predicted enzymes to canonical human pathways, we identified 48 pathways that have at least one bacteria-encoded enzyme, which demonstrates the complementary role of gut microbiome in human gut metabolism. These pathways are primarily involved in metabolizing dietary nutrients such as carbohydrates, amino acids, lipids
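
    As a stand-in for the ensemble idea (not the authors' actual features or base learners), the sketch below feeds simple amino-acid-composition features to a scikit-learn soft-voting ensemble; ECemble itself predicts hierarchical EC classes with its own model set.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import KNeighborsClassifier

    AA = "ACDEFGHIKLMNPQRSTVWY"

    def composition(seq: str) -> np.ndarray:
        """20-dim amino-acid composition vector, a deliberately simple feature choice."""
        return np.array([seq.count(a) / len(seq) for a in AA])

    # Toy training data: protein fragments with placeholder top-level EC labels.
    seqs = ["MKTAYIAKQR", "GAVLIMFWPS", "MSTNPKPQRK", "DERKHHQNDE", "PPGGAASSTT", "KRKRDEDEHH"]
    y = [1, 2, 1, 2, 1, 2]
    X = np.vstack([composition(s) for s in seqs])

    ensemble = VotingClassifier(
        estimators=[
            ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
            ("lr", LogisticRegression(max_iter=1000)),
            ("knn", KNeighborsClassifier(n_neighbors=3)),
        ],
        voting="soft",  # average predicted class probabilities across base learners
    )
    ensemble.fit(X, y)
    print(ensemble.predict(composition("MKKLLPTAAG").reshape(1, -1)))
    ```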

  5. Application of NDE methods to green ceramics: initial results

    International Nuclear Information System (INIS)

    Kupperman, D.S.; Karplus, H.B.; Poeppel, R.B.; Ellingson, W.A.; Berger, H.; Robbins, C.; Fuller, E.

    1984-03-01

    This paper describes a preliminary investigation to assess the effectiveness of microradiography, ultrasonic methods, nuclear magnetic resonance, and neutron radiography for the nondestructive evaluation of green (unfired) ceramics. The objective is to obtain useful information on defects, cracking, delaminations, agglomerates, inclusions, regions of high porosity, and anisotropy

  6. Model films of cellulose. I. Method development and initial results

    NARCIS (Netherlands)

    Gunnars, S.; Wågberg, L.; Cohen Stuart, M.A.

    2002-01-01

    This report presents a new method for the preparation of thin cellulose films. NMMO (N-methylmorpholine-N-oxide) was used to dissolve cellulose and addition of DMSO (dimethyl sulfoxide) was used to control viscosity of the cellulose solution. A thin layer of the cellulose solution is spin-coated

  7. Wide Binaries in TGAS: Search Method and First Results

    Science.gov (United States)

    Andrews, Jeff J.; Chanamé, Julio; Agüeros, Marcel A.

    2018-04-01

    Half of all stars reside in binary systems, many of which have orbital separations in excess of 1000 AU. Such binaries are typically identified in astrometric catalogs by matching the proper motion vectors of close stellar pairs. We present a fully Bayesian method that properly takes into account positions, proper motions, parallaxes, and their correlated uncertainties to identify widely separated stellar binaries. After applying our method to the >2 × 10^6 stars in the Tycho-Gaia astrometric solution from Gaia DR1, we identify over 6000 candidate wide binaries. For those pairs with separations less than 40,000 AU, we determine the contamination rate to be ~5%. This sample has an orbital separation (a) distribution that is roughly flat in log space for separations less than ~5000 AU and follows a power law of a^-1.6 at larger separations.
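
    A simplified frequentist stand-in for the matching step (the paper's method is fully Bayesian and also uses positions and parallaxes): compute a chi-square for the proper-motion difference of a candidate pair under their combined covariances. All numbers are placeholders.

    ```python
    import numpy as np
    from scipy.stats import chi2

    def pm_match_chi2(pm1, cov1, pm2, cov2):
        """Chi-square that two proper-motion vectors (mas/yr) agree, given 2x2 covariances."""
        d = np.asarray(pm1) - np.asarray(pm2)
        cov = np.asarray(cov1) + np.asarray(cov2)  # covariance of the difference
        return float(d @ np.linalg.solve(cov, d))

    # Placeholder measurements: (pm_RA*, pm_Dec) with diagonal covariances.
    c2 = pm_match_chi2([12.3, -4.1], np.diag([0.6, 0.6]) ** 2,
                       [11.8, -3.7], np.diag([0.8, 0.8]) ** 2)
    print(c2, chi2.sf(c2, df=2))  # pair is consistent if the p-value is not small
    ```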

  8. Viscous wing theory development. Volume 1: Analysis, method and results

    Science.gov (United States)

    Chow, R. R.; Melnik, R. E.; Marconi, F.; Steinhoff, J.

    1986-01-01

    Viscous transonic flows at large Reynolds numbers over 3-D wings were analyzed using a zonal viscid-inviscid interaction approach. A new numerical AFZ scheme was developed in conjunction with the finite volume formulation for the solution of the inviscid full-potential equation. A special far-field asymptotic boundary condition was developed and a second-order artificial viscosity included for an improved inviscid solution methodology. The integral method was used for the laminar/turbulent boundary layer and 3-D viscous wake calculation. The interaction calculation included the coupling conditions of the source flux due to the wing surface boundary layer, the flux jump due to the viscous wake, and the wake curvature effect. A method was also devised incorporating the 2-D trailing edge strong interaction solution for the normal pressure correction near the trailing edge region. A fully automated computer program was developed to perform the proposed method with one scalar version to be used on an IBM-3081 and two vectorized versions on Cray-1 and Cyber-205 computers.

  9. Algorithms for monitoring warfarin use: Results from Delphi Method.

    Science.gov (United States)

    Kano, Eunice Kazue; Borges, Jessica Bassani; Scomparini, Erika Burim; Curi, Ana Paula; Ribeiro, Eliane

    2017-10-01

    Warfarin stands as the most prescribed oral anticoagulant. New oral anticoagulants have been approved recently; however, their use is limited and the techniques for reversing their anticoagulation effect are little known. Thus, our study's purpose was to develop algorithms for the therapeutic monitoring of patients taking warfarin, based on the opinion of physicians who prescribe this medicine in their clinical practice. The development of the algorithms was performed in two stages, namely: (i) literature review and (ii) algorithm evaluation by physicians using the Delphi method. Based on the articles analyzed, two algorithms were developed: "Recommendations for the use of warfarin in anticoagulation therapy" and "Recommendations for the use of warfarin in anticoagulation therapy: dose adjustment and bleeding control." These algorithms were then analyzed by the 19 physicians who responded to the invitation and agreed to participate in the study. Of these, 16 responded to the first round, 11 to the second and 8 to the third round. A consensus of 70% or higher was reached for most issues, and of at least 50% for six questions. We were able to develop algorithms to monitor the use of warfarin by physicians using the Delphi method. The proposed method is inexpensive, involves the participation of specialists, and has proved adequate for the intended purpose. Further studies are needed to validate these algorithms, enabling them to be used in clinical practice.
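
    The consensus bookkeeping behind a Delphi round is simple to sketch; the 70% threshold matches the record, while the toy responses are assumptions for illustration.

    ```python
    from collections import Counter

    def consensus(responses: list[str]) -> float:
        """Fraction of panelists giving the modal answer to one questionnaire item."""
        counts = Counter(responses)
        return counts.most_common(1)[0][1] / len(responses)

    # Placeholder first-round answers for one algorithm item (16 respondents).
    round1 = ["agree"] * 12 + ["disagree"] * 4
    print(f"consensus = {consensus(round1):.0%}")  # 75%, above the 70% threshold
    ```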

  10. Application of a hierarchical enzyme classification method reveals the role of gut microbiome in human metabolism.

    Science.gov (United States)

    Mohammed, Akram; Guda, Chittibabu

    2015-01-01

    Enzymes are known as the molecular machines that drive the metabolism of an organism; hence identification of the full enzyme complement of an organism is essential to build the metabolic blueprint of that species as well as to understand the interplay of multiple species in an ecosystem. Experimental characterization of the enzymatic reactions of all enzymes in a genome is a tedious and expensive task. The problem is more pronounced in the metagenomic samples where even the species are not adequately cultured or characterized. Enzymes encoded by the gut microbiota play an essential role in the host metabolism; thus, warranting the need to accurately identify and annotate the full enzyme complements of species in the genomic and metagenomic projects. To fulfill this need, we develop and apply a method called ECemble, an ensemble approach to identify enzymes and enzyme classes and study the human gut metabolic pathways. ECemble method uses an ensemble of machine-learning methods to accurately model and predict enzymes from protein sequences and also identifies the enzyme classes and subclasses at the finest resolution. A tenfold cross-validation result shows accuracy between 97 and 99% at different levels in the hierarchy of enzyme classification, which is superior to comparable methods. We applied ECemble to predict the entire complements of enzymes from ten sequenced proteomes including the human proteome. We also applied this method to predict enzymes encoded by the human gut microbiome from gut metagenomic samples, and to study the role played by the microbe-derived enzymes in the human metabolism. After mapping the known and predicted enzymes to canonical human pathways, we identified 48 pathways that have at least one bacteria-encoded enzyme, which demonstrates the complementary role of gut microbiome in human gut metabolism. These pathways are primarily involved in metabolizing dietary nutrients such as carbohydrates, amino acids, lipids, cofactors and

  11. A Filtering Method to Reveal Crystalline Patterns from Atom Probe Microscopy Desorption Maps

    Science.gov (United States)

    2016-03-26

    Lan Yao, Department of Materials Science and Engineering, University of Michigan, Ann... A filtering method to reveal the crystallographic information present in Atom Probe Microscopy (APM) data is presented. The method filters atoms based on the time difference... between their evaporation and the evaporation of the previous atom. Since this time difference correlates with the location and the local structure of...

  12. Further results for crack-edge mappings by ray methods

    International Nuclear Information System (INIS)

    Norris, A.N.; Achenbach, J.D.; Ahlberg, L.; Tittman, B.R.

    1984-01-01

    This chapter discusses further extensions of the local edge mapping method to the pulse-echo case and to configurations of water-immersed specimens and transducers. Crack edges are mapped by the use of arrival times of edge-diffracted signals. Topics considered include local edge mapping in a homogeneous medium, local edge mapping algorithms, local edge mapping through an interface, and edge mapping through an interface using synthetic data. Local edge mapping is iterative, with two or three iterations required for convergence

  13. Method of fabricating nested shells and resulting product

    Science.gov (United States)

    Henderson, Timothy M.; Kool, Lawrence B.

    1982-01-01

    A multiple shell structure and a method of manufacturing such structure wherein a hollow glass microsphere is surface treated in an organosilane solution so as to render the shell outer surface hydrophobic. The surface treated glass shell is then suspended in the oil phase of an oil-aqueous phase dispersion. The oil phase includes an organic film-forming monomer, a polymerization initiator and a blowing agent. A polymeric film forms at each phase boundary of the dispersion and is then expanded in a blowing operation so as to form an outer homogeneously integral monocellular substantially spherical thermoplastic shell encapsulating an inner glass shell of lesser diameter.

  14. Trial sequential analysis reveals insufficient information size and potentially false positive results in many meta-analyses

    DEFF Research Database (Denmark)

    Brok, J.; Thorlund, K.; Gluud, C.

    2008-01-01

    OBJECTIVES: To evaluate meta-analyses with trial sequential analysis (TSA). TSA adjusts for random error risk and provides the required number of participants (information size) in a meta-analysis. Meta-analyses not reaching information size are analyzed with trial sequential monitoring boundaries analogous to interim monitoring boundaries in a single trial. STUDY DESIGN AND SETTING: We applied TSA on meta-analyses performed in Cochrane Neonatal reviews. We calculated information sizes and monitoring boundaries with three different anticipated intervention effects of 30% relative risk reduction (TSA... in 80% (insufficient information size). TSA(15%) and TSA(LBHIS) found that 95% and 91% had absence of evidence. The remaining nonsignificant meta-analyses had evidence of lack of effect. CONCLUSION: TSA reveals insufficient information size and potentially false positive results in many meta-analyses.
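
    For orientation, a sketch of the kind of required-information-size calculation TSA rests on: the standard two-group sample-size formula applied at the meta-analysis level, inflated for heterogeneity. The formula choice and the numbers are illustrative assumptions, not the review's exact settings.

    ```python
    from scipy.stats import norm

    def information_size(p_control, rrr, alpha=0.05, beta=0.20, i_squared=0.0):
        """Required meta-analysis participants to detect a relative risk reduction.

        Standard two-group formula, inflated by 1/(1 - I^2) to adjust for heterogeneity.
        """
        p_exp = p_control * (1.0 - rrr)
        p_bar = (p_control + p_exp) / 2.0
        z = norm.ppf(1 - alpha / 2) + norm.ppf(1 - beta)
        n_per_group = z**2 * 2 * p_bar * (1 - p_bar) / (p_control - p_exp) ** 2
        return 2 * n_per_group / (1.0 - i_squared)

    # 30% relative risk reduction from an assumed 10% control event rate, I^2 = 25%.
    print(round(information_size(0.10, 0.30, i_squared=0.25)))
    ```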

  15. The Accident Sequence Precursor program: Methods improvements and current results

    International Nuclear Information System (INIS)

    Minarick, J.W.; Manning, F.M.; Harris, J.D.

    1987-01-01

    Changes in the US NRC Accident Sequence Precursor program methods since the initial program evaluations of 1969-81 operational events are described, along with insights from the review of 1984-85 events. For 1984-85, the number of significant precursors was consistent with the number observed in 1980-81, dominant sequences associated with significant events were reasonably consistent with PRA estimates for BWRs, but lacked the contribution due to small-break LOCAs previously observed and predicted in PWRs, and the frequency of initiating events and non-recoverable system failures exhibited some reduction compared to 1980-81. Operational events which provide information concerning additional PRA modeling needs are also described

  16. Use of results from microscopic methods in optical model calculations

    International Nuclear Information System (INIS)

    Lagrange, C.

    1985-11-01

    A concept of vectorization for coupled-channel programs based upon conventional methods is first presented. This has been implemented in our program for use on the CRAY-1 computer. In the second part we investigate the capabilities of a semi-microscopic optical model involving fewer adjustable parameters than phenomenological ones. The two main ingredients of our calculations are, for spherical or well-deformed nuclei, the microscopic optical-model calculations of Jeukenne, Lejeune and Mahaux and nuclear densities from Hartree-Fock-Bogoliubov calculations using the density-dependent force D1. For transitional nuclei, deformation-dependent nuclear structure wave functions are employed to weight the scattering potentials for different shapes and channels [fr

  17. Lesion insertion in the projection domain: Methods and initial results.

    Science.gov (United States)

    Chen, Baiyu; Leng, Shuai; Yu, Lifeng; Yu, Zhicong; Ma, Chi; McCollough, Cynthia

    2015-12-01

    phantom in terms of Hounsfield unit and high-contrast resolution. For the validation of the lesion realism, lesions of various types were successfully inserted, including well circumscribed and invasive lesions, homogeneous and heterogeneous lesions, high-contrast and low-contrast lesions, isolated and vessel-attached lesions, and small and large lesions. The two experienced radiologists who reviewed the original and inserted lesions could not identify the lesions that were inserted. The same lesion, when inserted into the projection domain and reconstructed with different parameters, demonstrated a parameter-dependent appearance. A framework has been developed for projection-domain insertion of lesions into commercial CT images, which can be potentially expanded to all geometries of CT scanners. Compared to conventional image-domain methods, the authors' method reflected the impact of scan and reconstruction parameters on lesion appearance. Compared to prior projection-domain methods, the authors' method has the potential to achieve higher anatomical complexity by employing clinical patient projections and real patient lesions.

  18. Assessment of South African uranium resources: methods and results

    International Nuclear Information System (INIS)

    Camisani-Calzolari, F.A.G.M.; De Klerk, W.J.; Van der Merwe, P.J.

    1985-01-01

    This paper deals primarily with the methods used by the Atomic Energy Corporation of South Africa in arriving at the assessment of the South African uranium resources. The Resource Evaluation Group is responsible for this task, which is carried out on a continuous basis. The evaluation is done on a property-by-property basis and relies upon data submitted to the Nuclear Development Corporation of South Africa by the various companies involved in uranium mining and prospecting in South Africa. Resources are classified into Reasonably Assured (RAR), Estimated Additional (EAR) and Speculative (SR) categories as defined by the NEA/IAEA Steering Group on Uranium Resources. Each category is divided into three cost classes, viz. resources exploitable at less than $80/kg uranium, at $80-130/kg uranium and at $130-260/kg uranium. Resources are reported in quantities of uranium metal that could be recovered after mining and metallurgical losses have been taken into consideration. Resources in the RAR and EAR categories exploitable at costs of less than $130/kg uranium are now estimated at 460 000 t uranium, which represents some 14 per cent of WOCA's (World Outside the Centrally Planned Economies Area) resources. The evaluation of a uranium venture is carried out in various steps, of which the most important, in order of implementation, are: geological interpretation, assessment of in situ resources using techniques varying from manual contouring of values to geostatistics, feasibility studies, and estimation of recoverable resources. Because the choice of an evaluation method is, to some extent, dictated by statistical considerations, frequency distribution curves of the uranium grade variable are illustrated and discussed for characteristic deposits

  19. UV spectroscopy applied to stratospheric chemistry, methods and results

    Energy Technology Data Exchange (ETDEWEB)

    Karlsen, K.

    1996-03-01

    The publication from the Norwegian Institute for Air Research (NILU) deals with an investigation of stratospheric chemistry by UV spectroscopy. The scientific goals are briefly discussed, and results from the measuring and analysing techniques used in the investigation are given. 6 refs., 11 figs.

  20. Creep in rock salt with temperature. Testing methods and results

    International Nuclear Information System (INIS)

    Charpentier, J.P.; Berest, P.

    1985-01-01

    The growing interest shown in the delayed behaviour of rocks at elevated temperature has led the Solid Mechanics Laboratory to develop specific equipment designed for creep tests. The design and dimensioning of these units offer the possibility of investigating a wide range of materials. The article describes the test facilities used (uni-axial and tri-axial creep units) and presents the experimental results obtained on samples of Bresse salt [fr

  1. TMI-2 core debris analytical methods and results

    International Nuclear Information System (INIS)

    Akers, D.W.; Cook, B.A.

    1984-01-01

    A series of six grab samples was taken from the debris bed of the TMI-2 core in early September 1983. Five of these samples were sent to the Idaho National Engineering Laboratory for analysis. Presented is the analysis strategy for the samples and some of the data obtained from the early stages of examination of the samples (i.e., particle-size analysis, gamma spectrometry results, and fissile/fertile material analysis)

  2. Studies of LMFBR: method of analysis and some results

    International Nuclear Information System (INIS)

    Ishiguro, Y.; Dias, A.F.; Nascimento, J.A. do.

    1983-01-01

    Some results of recent studies of LMFBR characteristics are summarized. A two-dimensional model of the LMFBR is taken from a publication and used as the base model for the analysis. Axial structures are added to the base model and a three-dimensional (Δ-Z) calculation has been done. Two-dimensional (Δ and RZ) calculations are compared with the three-dimensional and published results. The eigenvalue, flux and power distributions, breeding characteristics, control rod worth, and sodium-void and Doppler reactivities are analysed. Calculations are done by CITATION using six-group cross sections collapsed regionwise by EXPANDA in one-dimensional geometries from the 70-group JFS library. Burnup calculations of a simplified thorium-cycle LMFBR have also been done in the RZ geometry. The principal results of the studies are: (1) the JFS library appears adequate for predicting overall characteristics of an LMFBR, (2) the sodium void reactivity is negative within ~25 cm of the outer boundary of the core, (3) the half-life of Pa-233 must be considered explicitly in burnup analyses, and (4) two-dimensional (RZ and Δ) calculations can be used iteratively to analyze three-dimensional reactor systems. (Author) [pt

  3. Methods and results of radiotherapy in case of medulloblastoma

    International Nuclear Information System (INIS)

    Bamberg, M.; Sauerwein, W.; Scherer, E.

    1982-01-01

    The prognosis of medulloblastoma, with its marked tendency towards early formation of metastases by way of the cerebrospinal fluid circulation, can be decisively improved by post-surgical homogeneous irradiation. Successful radiotherapy is only possible by means of new irradiation methods which have been developed for high-voltage units during recent years and which require great experience and skill on the part of the radiotherapist. At the Radiological Centre of Essen, 26 patients with medulloblastoma have been submitted to such a specially developed post-surgical radiotherapy since 1974. After a follow-up period of at most seven years, 16 patients have survived (two of them with recurrences) and 10 patients have died because of a local recurrence. Depending on the patient's state of health after surgery and before irradiation, the neurologic state and physical condition of these patients appear favorable after post-operative radiotherapy alone. New therapeutic possibilities are provided by radiosensitizing substances; however, the currently most effective radiosensitizer, misonidazole, has not yet met clinical expectations. (orig.) [de

  4. Application of NDE methods to green ceramics: initial results

    International Nuclear Information System (INIS)

    Kupperman, D.S.; Karplus, H.B.; Poeppel, R.B.; Ellingson, W.A.; Berger, H.; Robbins, C.; Fuller, E.

    1983-01-01

    The effectiveness of microradiography, ultrasonic methods, nuclear magnetic resonance, and neutron radiography was assessed for the nondestructive evaluation of green (unfired) ceramics. The application of microradiography to ceramics is reviewed, and preliminary experiments with a commercial microradiography unit are described. Conventional ultrasonic techniques are difficult to apply to flaw detection in green ceramics because of the high attenuation, fragility, and couplant-absorbing properties of these materials. However, velocity, attenuation, and spectral data were obtained with pressure-coupled transducers and provided useful information related to density variations and the presence of agglomerates. Nuclear magnetic resonance (NMR) imaging techniques and neutron radiography were considered for detection of anomalies in the distribution of porosity. With NMR, areas of high porosity might be detected after the samples are doped with water. In the case of neutron radiography, although imaging the binder distribution throughout the sample may not be feasible because of the low overall concentration of binder, regions of high binder concentration (and thus high porosity) should be detectable

  5. New test methods for BIPV. Results from IP performance

    International Nuclear Information System (INIS)

    Jol, J.C.; Van Kampen, B.J.M.; De Boer, B.J.; Reil, F.; Geyer, D.

    2009-11-01

    Within the Performance project, new test procedures have been drafted for PV building products and for the performance of the building as a whole when PV is applied. This has resulted in a first draft of new test procedures for PV building products and in proposals for tests of novel BIPV technology such as thin film. The tests proposed are a module breakage test for BIPV products, a fire safety test for BIPV products and a dynamic load test for BIPV products. Furthermore, first proposals are presented for how flexible PV modules could be tested in an appropriate way to ensure the long-term quality and safety of these new products.

  6. Diamagnetic measurements on ISX-B: method and results

    International Nuclear Information System (INIS)

    Neilson, G.H.

    1983-10-01

    A diamagnetic loop is used on the ISX-B tokamak to measure the change in toroidal magnetic flux, δΦ, caused by the finite plasma current and perpendicular pressure. From this measurement, the perpendicular poloidal beta β_I⊥ is determined. The principal difficulty encountered is in identifying and correcting for various noise components which appear in the measured flux. These result from coupling between the measuring loops and the toroidal and poloidal field windings, both directly and through currents induced in the vacuum vessel and the coils themselves. An analysis of these couplings is made and techniques for correcting them are developed. Results from the diamagnetic measurement, employing some of these correction techniques, are presented and compared with other data. The obtained values of β_I⊥ agree with those obtained from the equilibrium magnetic analysis (β_IΔ) in ohmically heated plasmas, indicating no anisotropy. However, with 0.3 to 2.0 MW of tangential neutral beam injection, β_IΔ is consistently greater than β_I⊥, qualitatively consistent with the formation of an anisotropic ion velocity distribution and with toroidal rotation. Quantitatively, the difference between β_IΔ and β_I⊥ is more than can be accounted for on the basis of the usual classical fast ion calculations and spectroscopic rotation measurements

  7. [Integrated intensive treatment of tinnitus: method and initial results].

    Science.gov (United States)

    Mazurek, B; Georgiewa, P; Seydel, C; Haupt, H; Scherer, H; Klapp, B F; Reisshauer, A

    2005-07-01

    In recent years, no major advances have been made in understanding the mechanisms underlying the development of tinnitus. Hence, present therapeutic strategies aim at decoupling the subconscious from the perception of tinnitus. Drawing on the lessons of existing tinnitus retraining and desensitisation therapies, a new integrated day-hospital treatment strategy lasting 7-14 days has been developed at the Charité Hospital and is presented in this paper. The strategy, which treats tinnitus close to the patient's home, is designed for patients who feel disturbed in their perception and performance by tinnitus and show evidence of mental and physical strain. In view of the etiologically non-uniform and multiple events connected with tinnitus, consideration was also given to the fact that somatic and psychosocial factors are equally involved. Therapy should therefore aim at diagnosing and therapeutically influencing those psychosocial factors that degrade the hearing impression to such an extent that the affected persons suffer from strain. The first results of therapy-dependent changes in 46 patients suffering from chronic tinnitus are presented. The data were evaluated before and after 7 days of treatment and 6 months after the end of treatment. Immediately after the treatment, the scores of both the tinnitus questionnaire (Goebel and Hiller) and its subscales improved significantly. These results were maintained during the 6-month post-treatment period and even improved further.

  8. Assessing Internet energy intensity: A review of methods and results

    Energy Technology Data Exchange (ETDEWEB)

    Coroama, Vlad C., E-mail: vcoroama@gmail.com [Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Hilty, Lorenz M. [Department of Informatics, University of Zurich, Binzmühlestrasse 14, 8050 Zurich (Switzerland); Empa, Swiss Federal Laboratories for Materials Science and Technology, Lerchenfeldstr. 5, 9014 St. Gallen (Switzerland); Centre for Sustainable Communications, KTH Royal Institute of Technology, Lindstedtsvägen 5, 100 44 Stockholm (Sweden)

    2014-02-15

    Assessing the average energy intensity of Internet transmissions is a complex task that has been a controversial subject of discussion. Estimates published over the last decade diverge by up to four orders of magnitude — from 0.0064 kilowatt-hours per gigabyte (kWh/GB) to 136 kWh/GB. This article presents a review of the methodological approaches used so far in such assessments: i) top–down analyses based on estimates of the overall Internet energy consumption and the overall Internet traffic, whereby average energy intensity is calculated by dividing energy by traffic for a given period of time, ii) model-based approaches that model all components needed to sustain an amount of Internet traffic, and iii) bottom–up approaches based on case studies and generalization of the results. Our analysis of the existing studies shows that the large spread of results is mainly caused by two factors: a) the year of reference of the analysis, which has significant influence due to efficiency gains in electronic equipment, and b) whether end devices such as personal computers or servers are included within the system boundary or not. For an overall assessment of the energy needed to perform a specific task involving the Internet, it is necessary to account for the types of end devices needed for the task, while the energy needed for data transmission can be added based on a generic estimate of Internet energy intensity for a given year. Separating the Internet as a data transmission system from the end devices leads to more accurate models and to results that are more informative for decision makers, because end devices and the networking equipment of the Internet usually belong to different spheres of control. -- Highlights: • Assessments of the energy intensity of the Internet differ by a factor of 20,000. • We review top–down, model-based, and bottom–up estimates from literature. • Main divergence factors are the year studied and the inclusion of end devices
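
    The top-down approach described above reduces to a division, with the system boundary deciding what enters the numerator; the sketch below makes that explicit and then applies the resulting intensity to a single task, with end devices accounted for separately as the article recommends. All figures are placeholders, not estimates from the reviewed studies.

    ```python
    # Top-down Internet energy intensity: total network energy / total traffic.
    # Placeholder figures for one reference year (not from the reviewed studies).
    network_energy_kwh = 2.0e11   # annual electricity use of transmission networks
    traffic_gb = 5.0e13           # annual Internet traffic in gigabytes

    intensity_kwh_per_gb = network_energy_kwh / traffic_gb
    print(f"intensity = {intensity_kwh_per_gb:.4f} kWh/GB")

    # Energy for a specific task: count the end device separately, then add
    # transmission energy from the generic intensity estimate.
    device_power_kw = 0.06        # assumed laptop draw
    task_hours = 1.0
    task_data_gb = 3.0            # e.g., one hour of video streaming
    task_energy = device_power_kw * task_hours + task_data_gb * intensity_kwh_per_gb
    print(f"task energy = {task_energy:.3f} kWh")
    ```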

  9. Assessing Internet energy intensity: A review of methods and results

    International Nuclear Information System (INIS)

    Coroama, Vlad C.; Hilty, Lorenz M.

    2014-01-01

    Assessing the average energy intensity of Internet transmissions is a complex task that has been a controversial subject of discussion. Estimates published over the last decade diverge by up to four orders of magnitude — from 0.0064 kilowatt-hours per gigabyte (kWh/GB) to 136 kWh/GB. This article presents a review of the methodological approaches used so far in such assessments: i) top–down analyses based on estimates of the overall Internet energy consumption and the overall Internet traffic, whereby average energy intensity is calculated by dividing energy by traffic for a given period of time, ii) model-based approaches that model all components needed to sustain an amount of Internet traffic, and iii) bottom–up approaches based on case studies and generalization of the results. Our analysis of the existing studies shows that the large spread of results is mainly caused by two factors: a) the year of reference of the analysis, which has significant influence due to efficiency gains in electronic equipment, and b) whether end devices such as personal computers or servers are included within the system boundary or not. For an overall assessment of the energy needed to perform a specific task involving the Internet, it is necessary to account for the types of end devices needed for the task, while the energy needed for data transmission can be added based on a generic estimate of Internet energy intensity for a given year. Separating the Internet as a data transmission system from the end devices leads to more accurate models and to results that are more informative for decision makers, because end devices and the networking equipment of the Internet usually belong to different spheres of control. -- Highlights: • Assessments of the energy intensity of the Internet differ by a factor of 20,000. • We review top–down, model-based, and bottom–up estimates from literature. • Main divergence factors are the year studied and the inclusion of end devices

  10. Ilmenite exploration on the Senegal continental shelf. Methods and results

    International Nuclear Information System (INIS)

    Horn, R.; Le Lann, F.; Scolari, G.; Tixeront, M.

    1975-01-01

    From the results of a study based on geomorphology, geophysics and sedimentology, it has been possible to point out, South of Dakar, the existence of a fossil lagoon (peat dated 8400 years BP) partly isolated from the open sea by a littoral sand bar at -25 m and strongly eroded. To the North of Dakar, the unconsolidated sediments, with a thickness of over 40 m, are thinning out seawards and from North to South. This pattern reflects the action of the longshore current, which accentuates the drainage effect of the Cayar canyon. The distribution of ilmenites in the sediments is studied in terms of possible exploitation; however, the grades are too low under present economic conditions [fr]

  11. Climate Action Gaming Experiment: Methods and Example Results

    Directory of Open Access Journals (Sweden)

    Clifford Singer

    2015-09-01

    Full Text Available An exercise has been prepared and executed to simulate international interactions on policies related to greenhouse gases and global albedo management. Simulation participants are each assigned one of six regions that together contain all of the countries in the world. Participants make quinquennial policy decisions on greenhouse gas emissions, recapture of CO2 from the atmosphere, and/or modification of the global albedo. Costs of climate change and of implementing policy decisions impact each region's gross domestic product. Participants are tasked with maximizing economic benefits to their region while nearly stabilizing atmospheric CO2 concentrations by the end of the simulation in Julian year 2195. Results are shown in which the regions most adversely affected by greenhouse gas emissions resort to increases in the earth's albedo to reduce net solar insolation. These actions induce temperate-region countries to reduce net greenhouse gas emissions. An example outcome is a trajectory to the year 2195 of atmospheric greenhouse gas emissions and concentrations, sea level, and global average temperature.
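
    As a loose, hypothetical illustration of the quinquennial decision structure described above (and in no way the actual exercise), the game loop might be sketched as follows; the regions, abatement costs and damage terms are invented placeholders.

```python
# Toy sketch of a five-yearly policy loop in the spirit of the exercise;
# all functional forms and constants are invented, not the simulation's.
regions = {"North": 0.0, "South": 0.0}   # current abatement effort per region
gdp = {"North": 100.0, "South": 50.0}
co2_ppm = 400.0

for year in range(2020, 2200, 5):        # one decision every five years
    for name in regions:
        # placeholder policy rule: abate more as CO2 rises
        effort = min(1.0, max(0.0, (co2_ppm - 350.0) / 300.0))
        regions[name] = effort
        gdp[name] -= 0.5 * effort ** 2               # abatement cost
        gdp[name] -= 0.01 * max(0.0, co2_ppm - 400)  # climate damage
    mean_effort = sum(regions.values()) / len(regions)
    co2_ppm += 2.0 * (1.0 - mean_effort)             # emissions net of abatement

print(round(co2_ppm, 1), {k: round(v, 1) for k, v in gdp.items()})
```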

  12. COSMIC EVOLUTION OF DUST IN GALAXIES: METHODS AND PRELIMINARY RESULTS

    International Nuclear Information System (INIS)

    Bekki, Kenji

    2015-01-01

    We investigate the redshift (z) evolution of dust mass and abundance, their dependences on initial conditions of galaxy formation, and physical correlations between dust, gas, and stellar contents at different z based on our original chemodynamical simulations of galaxy formation with dust growth and destruction. In this preliminary investigation, we first determine the reasonable ranges of the two most important parameters for dust evolution, i.e., the timescales of dust growth and destruction, by comparing the observed and simulated dust mass and abundances and molecular hydrogen (H₂) content of the Galaxy. We then investigate the z-evolution of dust-to-gas ratios (D), H₂ gas fraction (f_H₂), and gas-phase chemical abundances (e.g., A_O = 12 + log(O/H)) in the simulated disk and dwarf galaxies. The principal results are as follows. Both D and f_H₂ can rapidly increase during the early dissipative formation of galactic disks (z ∼ 2-3), and the z-evolution of these depends on initial mass densities, spin parameters, and masses of galaxies. The observed A_O-D relation can be qualitatively reproduced, but the simulated dispersion of D at a given A_O is smaller. The simulated galaxies with larger total dust masses show larger H₂ and stellar masses and higher f_H₂. Disk galaxies show negative radial gradients of D and the gradients are steeper for more massive galaxies. The observed evolution of dust masses and dust-to-stellar-mass ratios between z = 0 and 0.4 cannot be reproduced so well by the simulated disks. Very extended dusty gaseous halos can be formed during hierarchical buildup of disk galaxies. Dust-to-metal ratios (i.e., dust-depletion levels) are different within a single galaxy and between different galaxies at different z.
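
    As a hedged illustration of what the two calibrated parameters do, a toy integration of a commonly used dust-evolution parameterization (growth and destruction as exponential terms plus stellar injection) is sketched below; the timescales and rates are invented, not the paper's calibrated values.

```python
# Toy integration of dM_d/dt = M_d/tau_grow - M_d/tau_dest + injection;
# all parameter values are assumed for illustration only.
dt = 1e6          # yr per step (assumed)
tau_grow = 5e8    # yr, dust growth (accretion) timescale (assumed)
tau_dest = 7e8    # yr, dust destruction timescale, e.g. SN shocks (assumed)
injection = 1e-4  # Msun/yr of fresh stardust from stellar ejecta (assumed)

m_dust = 1e5      # Msun, initial dust mass (assumed)
for _ in range(5000):                             # integrate 5 Gyr forward
    dm_dt = m_dust / tau_grow - m_dust / tau_dest + injection
    m_dust += dm_dt * dt

print(f"final dust mass ~ {m_dust:.3e} Msun")
```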

  13. COSMIC EVOLUTION OF DUST IN GALAXIES: METHODS AND PRELIMINARY RESULTS

    Energy Technology Data Exchange (ETDEWEB)

    Bekki, Kenji [ICRAR, M468, The University of Western Australia, 35 Stirling Highway, Crawley, Western Australia 6009 (Australia)

    2015-02-01

    We investigate the redshift (z) evolution of dust mass and abundance, their dependences on initial conditions of galaxy formation, and physical correlations between dust, gas, and stellar contents at different z based on our original chemodynamical simulations of galaxy formation with dust growth and destruction. In this preliminary investigation, we first determine the reasonable ranges of the two most important parameters for dust evolution, i.e., the timescales of dust growth and destruction, by comparing the observed and simulated dust mass and abundances and molecular hydrogen (H₂) content of the Galaxy. We then investigate the z-evolution of dust-to-gas ratios (D), H₂ gas fraction (f_H₂), and gas-phase chemical abundances (e.g., A_O = 12 + log(O/H)) in the simulated disk and dwarf galaxies. The principal results are as follows. Both D and f_H₂ can rapidly increase during the early dissipative formation of galactic disks (z ∼ 2-3), and the z-evolution of these depends on initial mass densities, spin parameters, and masses of galaxies. The observed A_O-D relation can be qualitatively reproduced, but the simulated dispersion of D at a given A_O is smaller. The simulated galaxies with larger total dust masses show larger H₂ and stellar masses and higher f_H₂. Disk galaxies show negative radial gradients of D and the gradients are steeper for more massive galaxies. The observed evolution of dust masses and dust-to-stellar-mass ratios between z = 0 and 0.4 cannot be reproduced so well by the simulated disks. Very extended dusty gaseous halos can be formed during hierarchical buildup of disk galaxies. Dust-to-metal ratios (i.e., dust-depletion levels) are different within a single galaxy and between different galaxies at different z.

  14. Sequencing of the Chlamydophila psittaci ompA Gene Reveals a New Genotype, E/B, and the Need for a Rapid Discriminatory Genotyping Method

    Science.gov (United States)

    Geens, Tom; Desplanques, Ann; Van Loock, Marnix; Bönner, Brigitte M.; Kaleta, Erhard F.; Magnino, Simone; Andersen, Arthur A.; Everett, Karin D. E.; Vanrompay, Daisy

    2005-01-01

    Twenty-one avian Chlamydophila psittaci isolates from different European countries were characterized using ompA restriction fragment length polymorphism, ompA sequencing, and major outer membrane protein serotyping. Results reveal the presence of a new genotype, E/B, in several European countries and stress the need for a discriminatory rapid genotyping method. PMID:15872282

  15. Application of Semiempirical Methods to Transition Metal Complexes: Fast Results but Hard-to-Predict Accuracy.

    KAUST Repository

    Minenkov, Yury; Sharapa, Dmitry I.; Cavallo, Luigi

    2018-01-01

    An analysis of relative energies derived from single-point energy evaluations on density functional theory (DFT) optimized conformers revealed pronounced deviations between semiempirical and DFT methods, indicating a fundamental difference in potential energy surfaces (PES). To identify the origin of the deviation, fully optimized PM7 and respective DFT conformers were compared…

  16. Analysis of Transcriptional Signatures in Response to Listeria monocytogenes Infection Reveals Temporal Changes That Result from Type I Interferon Signaling

    Science.gov (United States)

    Potempa, Krzysztof; Graham, Christine M.; Moreira-Teixeira, Lucia; McNab, Finlay W.; Howes, Ashleigh; Stavropoulos, Evangelos; Pascual, Virginia; Banchereau, Jacques; Chaussabel, Damien; O’Garra, Anne

    2016-01-01

    Analysis of the mouse transcriptional response to Listeria monocytogenes infection reveals that a large set of genes are perturbed in both blood and tissue and that these transcriptional responses are enriched for pathways of the immune response. Further, we identified enrichment for both type I and type II interferon (IFN) signaling molecules in the blood and tissues upon infection. Since type I IFN signaling has been widely reported to impair bacterial clearance, we examined gene expression from blood and tissues of wild-type (WT) and type I IFNαβ receptor-deficient (Ifnar1-/-) mice at the basal level and upon infection with L. monocytogenes. Measurement of the fold-change response upon infection in the absence of type I IFN signaling demonstrated an upregulation of specific genes at day 1 post infection. A less marked reduction of the global gene expression signature in blood or tissues from infected Ifnar1-/- as compared to WT mice was observed at days 2 and 3 after infection, with marked reductions in key genes such as Oasg1 and Stat2. Moreover, in-depth analysis identified changes in the expression of key IFN regulatory genes, including Irf9, Irf7, Stat1 and others, in uninfected mice; although these genes were induced to an equivalent degree upon infection, this resulted in significantly lower final expression levels upon infection of Ifnar1-/- mice. These data highlight how dysregulation of this network in the steady state and temporally upon infection may determine the outcome of this bacterial infection, and how basal levels of type I IFN-inducible genes may perturb an optimal host immune response to control intracellular bacterial infections such as L. monocytogenes. PMID:26918359

  17. Revealing −1 Programmed Ribosomal Frameshifting Mechanisms by Single-Molecule Techniques and Computational Methods

    Directory of Open Access Journals (Sweden)

    Kai-Chun Chang

    2012-01-01

    Full Text Available Programmed ribosomal frameshifting (PRF) serves as an intrinsic translational regulation mechanism employed by some viruses to control the ratio between structural and enzymatic proteins. Most viral mRNAs which use PRF adopt an H-type pseudoknot to stimulate −1 PRF. The relationship between the thermodynamic stability and the frameshifting efficiency of pseudoknots is not fully understood. Recently, single-molecule force spectroscopy has revealed that the frequency of −1 PRF correlates with the unwinding forces required for disrupting pseudoknots, and that some of the unwinding work dissipates irreversibly due to the torsional restraint of pseudoknots. Complementary to single-molecule techniques, computational modeling provides insights into global motions of the ribosome, whose structural transitions during frameshifting have not yet been elucidated in atomic detail. Taken together, recent advances in biophysical tools may help to develop antiviral therapies that target the ubiquitous −1 PRF mechanism among viruses.

  18. Comparison of biosurfactant detection methods reveals hydrophobic surfactants and contact-regulated production

    Science.gov (United States)

    Biosurfactants are diverse molecules with numerous biological functions and industrial applications. A variety of environments were examined for biosurfactant-producing bacteria using a versatile new screening method. The utility of an atomized oil assay was assessed for a large number of bacteria...

  19. A Computer-Supported Method to Reveal and Assess Personal Professional Theories in Vocational Education

    Science.gov (United States)

    van den Bogaart, Antoine C. M.; Bilderbeek, Richel J. C.; Schaap, Harmen; Hummel, Hans G. K.; Kirschner, Paul A.

    2016-01-01

    This article introduces a dedicated, computer-supported method to construct and formatively assess open, annotated concept maps of Personal Professional Theories (PPTs). These theories are internalised, personal bodies of formal and practical knowledge, values, norms and convictions that professionals use as a reference to interpret and acquire…

  20. Revealing metabolomic variations in Cortex Moutan from different root parts using HPLC-MS method.

    Science.gov (United States)

    Xiao, Chaoni; Wu, Man; Chen, Yongyong; Zhang, Yajun; Zhao, Xinfeng; Zheng, Xiaohui

    2015-01-01

    The distribution of metabolites in the different root parts of Cortex Moutan (the root bark of Paeonia suffruticosa Andrews) is not well understood; therefore, scientific evidence is not available for quality assessment of Cortex Moutan. The aim was to reveal metabolomic variations in Cortex Moutan in order to gain deeper insights and enable quality control. Metabolomic variations in the different root parts of Cortex Moutan were characterised using high-performance liquid chromatography combined with mass spectrometry (HPLC-MS) and multivariate data analysis. The discriminating metabolites in different root parts were evaluated by one-way analysis of variance and a fold-change parameter. The metabolite profiles of Cortex Moutan were largely dominated by five primary and 41 secondary metabolites. Higher levels of malic acid, gallic acid and mudanoside-B were mainly observed in the second lateral roots, whereas dihydroxyacetophenone, benzoyloxypaeoniflorin, suffruticoside-A, kaempferol dihexoside, mudanpioside E and mudanpioside J accumulated in the first lateral and axial roots. The highest contents of paeonol, galloyloxypaeoniflorin and procyanidin B were detected in the axial roots. Accordingly, metabolite compositions of Cortex Moutan were found to vary among different root parts. The axial roots are of higher quality than the lateral roots due to the accumulation of bioactive secondary metabolites associated with plant physiology. These findings provide important scientific evidence for grading Cortex Moutan on the general market. Copyright © 2014 John Wiley & Sons, Ltd.

  1. Estimating the Economic Value of Environmental Amenities of Isfahan Sofeh Highland Park (The Individual Revealed and Expressed Travel Cost Method)

    Directory of Open Access Journals (Sweden)

    H. Amirnejad

    2016-03-01

    for the revealed and the expressed travel costs and total travel costs. Results and Discussion: The collected data show that the average age of visitors is 31 years. Most of them are young; 66% of visitors are male and the rest are female. Most of the respondents chose the spring season for visiting Sofeh Park. Results of the negative binomial regression estimation showed that age, income, distance, and the revealed and total travel costs have a significant effect on the total number of visits in both scenarios. The age and income coefficients are positive; thus, these variables have a direct effect on the number of visits in both scenarios. The distance and travel-cost coefficients are negative; therefore, these variables have an inverse effect on the total number of visits in both scenarios. These results confirm the law of demand, which states that the quantity demanded and the price of a commodity are inversely related. Travel cost, serving as the commodity price in the tourism demand function, is in the first scenario only the revealed travel cost of the trip to the location, and in the second scenario the revealed cost plus the opportunity cost of the trip to the recreational location. Consumer surplus, as the average value of environmental amenities, is calculated as 1/β, where β is the coefficient of the travel-cost variable in the tourism demand functions. The average value of environmental amenities per visit in the first and the second scenarios is 797 and 1145 thousand Rials, respectively. The obvious difference between recreational values in the two scenarios is due to the opportunity cost. The total recreational value of the Sofeh Highland Park equals the product of the number of annual visits and the average recreational value. Finally, the total value of annual visits to the park in the above scenarios is more than 11952 and 17174 billion Rials, respectively. Conclusion: In this study, the value of environmental amenities of the Sofeh Highland Park was estimated. Notice that…
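
    A minimal sketch of the individual travel cost method described above, assuming a negative binomial count model and the 1/β consumer-surplus formula; the data are synthetic and the variable names are placeholders rather than the study's survey fields.

```python
# Hedged sketch: fit a negative binomial demand function for visit counts,
# then compute consumer surplus per trip as 1/|beta_cost|.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
cost = rng.uniform(10, 200, n)           # travel cost per trip (synthetic)
income = rng.uniform(20, 100, n)
age = rng.integers(18, 65, n)
mu = np.exp(1.5 - 0.012 * cost + 0.008 * income + 0.005 * age)
trips = rng.poisson(mu)                  # stand-in for observed visit counts

X = sm.add_constant(np.column_stack([cost, income, age]))
model = sm.GLM(trips, X, family=sm.families.NegativeBinomial(alpha=1.0))
res = model.fit()

beta_cost = res.params[1]                # coefficient of the travel-cost term
cs_per_trip = -1.0 / beta_cost           # the 1/beta consumer-surplus formula
print(res.summary().tables[1])
print(f"consumer surplus per trip ~ {cs_per_trip:.1f} (same units as cost)")
```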

  2. Neuroanatomical heterogeneity of schizophrenia revealed by semi-supervised machine learning methods.

    Science.gov (United States)

    Honnorat, Nicolas; Dong, Aoyan; Meisenzahl-Lechner, Eva; Koutsouleris, Nikolaos; Davatzikos, Christos

    2017-12-20

    Schizophrenia is associated with heterogeneous clinical symptoms and neuroanatomical alterations. In this work, we aim to disentangle the patterns of neuroanatomical alterations underlying a heterogeneous population of patients using a semi-supervised clustering method. We apply this strategy to a cohort of patients with schizophrenia with varying extents of disease duration, and we describe the neuroanatomical, demographic and clinical characteristics of the subtypes discovered. We analyze the neuroanatomical heterogeneity of 157 patients diagnosed with schizophrenia, relative to a control population of 169 subjects, using a machine learning method called CHIMERA. CHIMERA clusters the differences between patients and a demographically-matched population of healthy subjects, rather than clustering patients themselves, thereby specifically assessing disease-related neuroanatomical alterations. Voxel-Based Morphometry was conducted to visualize the neuroanatomical patterns associated with each group. The clinical presentation and the demographics of the groups were then investigated. Three subgroups were identified. The first two differed substantially, in that one involved predominantly temporal-thalamic-peri-Sylvian regions, whereas the other involved predominantly frontal regions and the thalamus. Both subtypes included primarily male patients. The third pattern was a mix of these two and presented milder neuroanatomic alterations and comprised a comparable number of men and women. VBM and statistical analyses suggest that these groups could correspond to different neuroanatomical dimensions of schizophrenia. Our analysis suggests that schizophrenia presents distinct neuroanatomical variants. This variability points to the need for a dimensional neuroanatomical approach using data-driven, mathematically principled multivariate pattern analysis methods, and should be taken into account in clinical studies. Copyright © 2017 Elsevier B.V. All rights reserved.
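
    CHIMERA itself is a specialized generative method; the sketch below is only a loose, hypothetical illustration of the underlying idea (clustering patients' deviations from a matched control population rather than clustering raw patient data), using entirely synthetic features.

```python
# Toy illustration only -- not CHIMERA. Cluster patient deviations from the
# control mean instead of the raw patient data; features are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
controls = rng.normal(0.0, 1.0, size=(169, 50))   # e.g. 50 regional volumes
patients = rng.normal(0.0, 1.0, size=(157, 50))
patients[:60, :10] -= 1.0                          # synthetic "subtype 1"
patients[60:110, 10:20] -= 1.0                     # synthetic "subtype 2"

deviation = patients - controls.mean(axis=0)       # disease-related deviation
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(deviation)
print(np.bincount(labels))                         # subgroup sizes
```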

  3. Multiband discrete ordinates method: formalism and results; Methode multibande aux ordonnees discretes: formalisme et resultats

    Energy Technology Data Exchange (ETDEWEB)

    Luneville, L

    1998-06-01

    The multigroup discrete ordinates method is a classical way to solve the transport (Boltzmann) equation for neutral particles. Self-shielding effects are not correctly treated, due to large variations of the cross sections within a group (in the resonance range). To treat the resonance domain, the multiband method is introduced. The main idea is to divide the cross-section domain into bands. We obtain the multiband parameters using the moment method; the code CALENDF provides probability tables for these parameters. We present our implementation in an existing discrete ordinates code: SN1D. We study deep penetration benchmarks and show the improvement of the method in the treatment of self-shielding effects. (author) 15 refs.
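
    The core idea of a probability table (dividing the cross-section domain of one group into bands, each with a probability and a band-averaged cross section) can be illustrated with a toy sketch; the resonance-like sigma(E) below is invented, and CALENDF's moment-based construction is far more sophisticated.

```python
# Toy band decomposition of a cross section within one energy group;
# the resonance shape and band count are assumptions for illustration.
import numpy as np

energies = np.linspace(1.0, 100.0, 20000)                 # eV, one group
sigma = 5.0 + 400.0 / ((energies - 36.7) ** 2 + 0.5)      # toy resonance

n_bands = 4
edges = np.quantile(sigma, np.linspace(0.0, 1.0, n_bands + 1))
band_idx = np.digitize(sigma, edges[1:-1])                # assign each point
for k in range(n_bands):
    in_band = band_idx == k
    p_k = in_band.mean()                  # band probability
    sigma_k = sigma[in_band].mean()       # band-averaged cross section
    print(f"band {k}: p = {p_k:.3f}, sigma = {sigma_k:.2f} barn")
```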

  4. Application of Semiempirical Methods to Transition Metal Complexes: Fast Results but Hard-to-Predict Accuracy.

    KAUST Repository

    Minenkov, Yury

    2018-05-22

    A series of semiempirical PM6* and PM7 methods has been tested in reproducing the relative conformational energies of 27 realistic-size complexes of 16 different transition metals (TMs). An analysis of relative energies derived from single-point energy evaluations on density functional theory (DFT) optimized conformers revealed pronounced deviations between semiempirical and DFT methods, indicating a fundamental difference in potential energy surfaces (PES). To identify the origin of the deviation, we compared fully optimized PM7 and respective DFT conformers. For many complexes, differences in PM7 and DFT conformational energies were confirmed, often manifesting themselves in false coordination of some atoms (H, O) to TMs and in chemical transformations/distortions of the coordination-center geometry in PM7 structures. Although geometry optimization with a fixed coordination-center geometry leads to some improvement in conformational energies, the resulting accuracy is still too low to recommend the explored semiempirical methods for out-of-the-box conformational search/sampling: careful testing is always needed.

  5. The history of ironware in Japan revealed by the AMS-carbon 14 age method

    International Nuclear Information System (INIS)

    Fujio, Shin'ichirou

    2005-01-01

    This paper focuses on the influence of the AMS carbon-14 dating method on the history of ironware in the Japanese Islands. The research team at the National Museum of Japanese History has made clear that the Yayoi period began in the 10th century cal BC. However, this raised a problem concerning iron: if the Yayoi period started in the 10th century BC, it would mean that ironware spread in the Japanese Islands earlier than it spread in China. The research team therefore reexamined the ironware excavated from the Magarita site in Fukuoka Prefecture, considered to be the oldest ironware in Japan. Consequently, the excavation context proved indefinite, and it turned out that the piece's date cannot be specified. Furthermore, of the 36 ironware items from the Initial and Early Yayoi that had been found by that time, all but two likewise cannot be dated. Therefore, it turned out that Japanese ironware appeared in the 3rd century BC. What does this mean? Although the beginning of agriculture in Japan and the appearance of ironware had been thought to be simultaneous, it turned out that agriculture appeared about 700 years earlier. Therefore, it became clear that agriculture in Japan started in the Stone Age. (author)

  6. Combining genomic sequencing methods to explore viral diversity and reveal potential virus-host interactions

    Directory of Open Access Journals (Sweden)

    Cheryl-Emiliane Tien Chow

    2015-04-01

    Full Text Available Viral diversity and virus-host interactions in oxygen-starved regions of the ocean, also known as oxygen minimum zones (OMZs), remain relatively unexplored. Microbial community metabolism in OMZs alters nutrient and energy flow through marine food webs, resulting in biological nitrogen loss and greenhouse gas production. Thus, viruses infecting OMZ microbes have the potential to modulate community metabolism, with resulting feedback on ecosystem function. Here, we describe viral communities inhabiting oxic surface (10 m) and oxygen-starved basin (200 m) waters of Saanich Inlet, a seasonally anoxic fjord on the coast of Vancouver Island, British Columbia, using viral metagenomics and complete viral fosmid sequencing on samples collected between April 2007 and April 2010. Of 6459 open reading frames (ORFs) predicted across all 34 viral fosmids, 77.6% (n=5010) had no homology to reference viral genomes. These fosmids recruited a higher proportion of viral metagenomic sequences from Saanich Inlet than from nearby northeastern subarctic Pacific Ocean (Line P) waters, indicating differences in the viral communities between coastal and open-ocean locations. While functional annotations of fosmid ORFs were limited, recruitment to NCBI's non-redundant 'nr' database and publicly available single-cell genomes identified putative viruses infecting marine thaumarchaeal and SUP05 proteobacteria, providing potential host linkages with relevance to coupled biogeochemical cycling processes in OMZ waters. Taken together, these results highlight the power of coupled analyses of multiple sequence data types, such as viral metagenomic and fosmid sequence data with prokaryotic single-cell genomes, to chart viral diversity, elucidate genomic and ecological contexts for previously unclassifiable viral sequences, and identify novel host interactions in natural and engineered ecosystems.

  7. Estimating Population Turnover Rates by Relative Quantification Methods Reveals Microbial Dynamics in Marine Sediment.

    Science.gov (United States)

    Kevorkian, Richard; Bird, Jordan T; Shumaker, Alexander; Lloyd, Karen G

    2018-01-01

    The difficulty involved in quantifying biogeochemically significant microbes in marine sediments limits our ability to assess interspecific interactions, population turnover times, and niches of uncultured taxa. We incubated surface sediments from Cape Lookout Bight, North Carolina, USA, anoxically at 21°C for 122 days. Sulfate decreased until day 68, after which methane increased, with hydrogen concentrations consistent with the predicted values of an electron donor exerting thermodynamic control. We measured turnover times using two relative quantification methods, quantitative PCR (qPCR) and the product of 16S gene read abundance and total cell abundance (FRAxC, which stands for "fraction of read abundance times cells"), to estimate the population turnover rates of uncultured clades. Most 16S rRNA reads were from deeply branching uncultured groups, and ∼98% of 16S rRNA genes did not abruptly shift in relative abundance when sulfate reduction gave way to methanogenesis. Uncultured Methanomicrobiales and Methanosarcinales increased at the onset of methanogenesis with population turnover times estimated from qPCR at 9.7 ± 3.9 and 12.6 ± 4.1 days, respectively. These were consistent with FRAxC turnover times of 9.4 ± 5.8 and 9.2 ± 3.5 days, respectively. Uncultured Syntrophaceae, which are possibly fermentative syntrophs of methanogens, and uncultured Kazan-3A-21 archaea also increased at the onset of methanogenesis, with FRAxC turnover times of 14.7 ± 6.9 and 10.6 ± 3.6 days. Kazan-3A-21 may therefore either perform methanogenesis or form a fermentative syntrophy with methanogens. Three genera of sulfate-reducing bacteria, Desulfovibrio, Desulfobacter, and Desulfobacterium, increased in the first 19 days before declining rapidly during sulfate reduction. We conclude that population turnover times on the order of days can be measured robustly in organic-rich marine sediment, and the transition from sulfate-reducing to methanogenic conditions stimulates…
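
    Turning two relative-abundance time points into a population turnover (doubling) time, the quantity reported above, is a one-line calculation under an exponential-growth assumption. The sketch below uses invented FRAxC-style inputs, not the study's measurements.

```python
# Hedged sketch: doubling time from two abundance estimates, assuming
# exponential growth between them. All numbers are invented placeholders.
import math

def doubling_time_days(n1: float, n2: float, dt_days: float) -> float:
    """Doubling time given abundances n1, n2 separated by dt_days."""
    return dt_days * math.log(2.0) / math.log(n2 / n1)

# hypothetical FRAxC values (16S read fraction * total cells) on days 68/80
fraxc_day68 = 0.002 * 5.0e8
fraxc_day80 = 0.015 * 6.0e8
print(f"{doubling_time_days(fraxc_day68, fraxc_day80, 12.0):.1f} days")
```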

  8. Cultivation-independent methods reveal differences among bacterial gut microbiota in triatomine vectors of Chagas disease.

    Directory of Open Access Journals (Sweden)

    Fabio Faria da Mota

    Full Text Available BACKGROUND: Chagas disease is a trypanosomiasis whose agent is the protozoan parasite Trypanosoma cruzi, which is transmitted to humans by hematophagous bugs known as triatomines. Even though insecticide treatments allow effective control of these bugs in most Latin American countries where Chagas disease is endemic, the disease still affects a large proportion of the population of South America. The features of the disease in humans have been extensively studied, and the genome of the parasite has been sequenced, but no effective drug is yet available to treat Chagas disease. The digestive tract of the insect vectors in which T. cruzi develops has been much less well investigated than blood from its human hosts and constitutes a dynamic environment with very different conditions. Thus, we investigated the composition of the predominant bacterial species of the microbiota in insect vectors from the Rhodnius, Triatoma, Panstrongylus and Dipetalogaster genera. METHODOLOGY/PRINCIPAL FINDINGS: Microbiota of triatomine guts were investigated using cultivation-independent methods, i.e., phylogenetic analysis of 16S rDNA using denaturing gradient gel electrophoresis (DGGE) and clone-based sequencing. The Chao index showed that the diversity of bacterial species in triatomine guts is low, comprising fewer than 20 predominant species, and that these species vary between insect species. The analyses showed that Serratia predominates in Rhodnius, Arsenophonus predominates in Triatoma and Panstrongylus, while Candidatus Rohrkolberia predominates in Dipetalogaster. CONCLUSIONS/SIGNIFICANCE: The microbiota of triatomine guts represents one of the factors that may interfere with T. cruzi transmission and virulence in humans. Knowledge of its composition according to insect species is important for designing measures of biological control for T. cruzi. We found that the predominant species of the bacterial microbiota in triatomines form a group of low complexity whose structure differs according to the vector genus.

  9. How community environment shapes physical activity: perceptions revealed through the PhotoVoice method.

    Science.gov (United States)

    Belon, Ana Paula; Nieuwendyk, Laura M; Vallianatos, Helen; Nykiforuk, Candace I J

    2014-09-01

    A growing body of evidence shows that community environment plays an important role in individuals' physical activity engagement. However, while attributes of the physical environment are widely investigated, sociocultural, political, and economic aspects of the environment are often neglected. This article helps to fill these knowledge gaps by providing a more comprehensive understanding of multiple dimensions of the community environment relative to physical activity. The purpose of this study was to qualitatively explore how people's experiences and perceptions of their community environments affect their abilities to engage in physical activity. A PhotoVoice method was used to identify barriers to and opportunities for physical activity among residents in four communities in the province of Alberta, Canada, in 2009. After taking pictures, the thirty-five participants shared their perceptions of those opportunities and barriers in their community environments during individual interviews. Using the Analysis Grid for Environments Linked to Obesity (ANGELO) framework, themes emerging from these photo-elicited interviews were organized in four environment types: physical, sociocultural, economic, and political. The data show that themes linked to the physical (56.6%) and sociocultural (31.4%) environments were discussed more frequently than the themes of the economic (5.9%) and political (6.1%) environments. Participants identified nuanced barriers and opportunities for physical activity, which are illustrated by their quotes and photographs. The findings suggest that a myriad of factors from physical, sociocultural, economic, and political environments influence people's abilities to be physically active in their communities. Therefore, adoption of a broad, ecological perspective is needed to address the barriers and build upon the opportunities described by participants to make communities more healthy and active. Copyright © 2014 Elsevier Ltd. All rights

  10. Cultivation-independent methods reveal differences among bacterial gut microbiota in triatomine vectors of Chagas disease.

    Science.gov (United States)

    da Mota, Fabio Faria; Marinho, Lourena Pinheiro; Moreira, Carlos José de Carvalho; Lima, Marli Maria; Mello, Cícero Brasileiro; Garcia, Eloi Souza; Carels, Nicolas; Azambuja, Patricia

    2012-01-01

    Chagas disease is a trypanosomiasis whose agent is the protozoan parasite Trypanosoma cruzi, which is transmitted to humans by hematophagous bugs known as triatomines. Even though insecticide treatments allow effective control of these bugs in most Latin American countries where Chagas disease is endemic, the disease still affects a large proportion of the population of South America. The features of the disease in humans have been extensively studied, and the genome of the parasite has been sequenced, but no effective drug is yet available to treat Chagas disease. The digestive tract of the insect vectors in which T. cruzi develops has been much less well investigated than blood from its human hosts and constitutes a dynamic environment with very different conditions. Thus, we investigated the composition of the predominant bacterial species of the microbiota in insect vectors from Rhodnius, Triatoma, Panstrongylus and Dipetalogaster genera. Microbiota of triatomine guts were investigated using cultivation-independent methods, i.e., phylogenetic analysis of 16s rDNA using denaturing gradient gel electrophoresis (DGGE) and cloned-based sequencing. The Chao index showed that the diversity of bacterial species in triatomine guts is low, comprising fewer than 20 predominant species, and that these species vary between insect species. The analyses showed that Serratia predominates in Rhodnius, Arsenophonus predominates in Triatoma and Panstrongylus, while Candidatus Rohrkolberia predominates in Dipetalogaster. The microbiota of triatomine guts represents one of the factors that may interfere with T. cruzi transmission and virulence in humans. The knowledge of its composition according to insect species is important for designing measures of biological control for T. cruzi. We found that the predominant species of the bacterial microbiota in triatomines form a group of low complexity whose structure differs according to the vector genus.

  11. Interval estimation methods of the mean in small sample situation and the results' comparison

    International Nuclear Information System (INIS)

    Wu Changli; Guo Chunying; Jiang Meng; Lin Yuangen

    2009-01-01

    The methods of interval estimation for the sample mean, namely the classical method, the Bootstrap method, the Bayesian Bootstrap method, the Jackknife method and the spread method of the empirical characteristic distribution function, are described. Numerical calculations of the sample-mean intervals are carried out for sample sizes of 4, 5 and 6. The results indicate that the Bootstrap method and the Bayesian Bootstrap method are much more appropriate than the others in small-sample situations. (authors)
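
    For concreteness, here is a minimal sketch of the nonparametric bootstrap interval for a small-sample mean, one of the methods compared above; the sample values are invented.

```python
# Percentile bootstrap CI for the mean of a tiny sample (n = 5, made-up data).
import numpy as np

rng = np.random.default_rng(42)
sample = np.array([4.1, 5.3, 4.8, 5.9, 4.4])

boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(10000)                 # resample with replacement
])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"mean = {sample.mean():.2f}, 95% bootstrap CI = ({lo:.2f}, {hi:.2f})")
```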

  12. A high HIV-1 strain variability in London, UK, revealed by full-genome analysis: Results from the ICONIC project

    Science.gov (United States)

    Frampton, Dan; Gallo Cassarino, Tiziano; Raffle, Jade; Hubb, Jonathan; Ferns, R. Bridget; Waters, Laura; Tong, C. Y. William; Kozlakidis, Zisis; Hayward, Andrew; Kellam, Paul; Pillay, Deenan; Clark, Duncan; Nastouli, Eleni; Leigh Brown, Andrew J.

    2018-01-01

    Background & methods The ICONIC project has developed an automated high-throughput pipeline to generate HIV nearly full-length genomes (NFLG, i.e. from gag to nef) from next-generation sequencing (NGS) data. The pipeline was applied to 420 HIV samples collected at University College London Hospitals NHS Trust and Barts Health NHS Trust (London) and sequenced using an Illumina MiSeq at the Wellcome Trust Sanger Institute (Cambridge). Consensus genomes were generated and subtyped using COMET, and unique recombinants were studied with jpHMM and SimPlot. Maximum-likelihood phylogenetic trees were constructed using RAxML to identify transmission networks using the Cluster Picker. Results The pipeline generated sequences of at least 1 Kb in length (median = 7.46 Kb, IQR = 4.01 Kb) for 375 out of the 420 samples (89%), with 174 (46.4%) being NFLG. A total of 365 sequences (169 of them NFLG) corresponded to unique subjects and were included in the downstream analyses. The most frequent HIV subtypes were B (n = 149, 40.8%) and C (n = 77, 21.1%) and the circulating recombinant form CRF02_AG (n = 32, 8.8%). We found 14 different CRFs (n = 66, 18.1%) and multiple URFs (n = 32, 8.8%) that involved recombination between 12 different subtypes/CRFs. The most frequent URFs were B/CRF01_AE (4 cases) and A1/D, B/C, and B/CRF02_AG (3 cases each). Most URFs (19/26, 73%) lacked breakpoints in the PR+RT pol region, rendering them undetectable if only that region was sequenced. Twelve (37.5%) of the URFs could have emerged within the UK, whereas the rest were probably imported from sub-Saharan Africa, South East Asia and South America. For 2 URFs we found highly similar pol sequences circulating in the UK. We detected 31 phylogenetic clusters using the full dataset: 25 pairs (mostly subtypes B and C), 4 triplets and 2 quadruplets. Some of these were not consistent across different genes due to inter- and intra-subtype recombination. Clusters involved 70 sequences, 19.2% of the dataset. Conclusions

  13. New inducible genetic method reveals critical roles of GABA in the control of feeding and metabolism.

    Science.gov (United States)

    Meng, Fantao; Han, Yong; Srisai, Dollada; Belakhov, Valery; Farias, Monica; Xu, Yong; Palmiter, Richard D; Baasov, Timor; Wu, Qi

    2016-03-29

    Currently available inducible Cre/loxP systems, despite their considerable utility in gene manipulation, have pitfalls in certain scenarios, such as unsatisfactory recombination rates and deleterious effects on physiology and behavior. To overcome these limitations, we designed a new, inducible gene-targeting system by introducing an in-frame nonsense mutation into the coding sequence of Cre recombinase (nsCre). Mutant mRNAs transcribed from nsCre transgene can be efficiently translated into full-length, functional Cre recombinase in the presence of nonsense suppressors such as aminoglycosides. In a proof-of-concept model, GABA signaling from hypothalamic neurons expressing agouti-related peptide (AgRP) was genetically inactivated within 4 d after treatment with a synthetic aminoglycoside. Disruption of GABA synthesis in AgRP neurons in young adult mice led to a dramatic loss of body weight due to reduced food intake and elevated energy expenditure; they also manifested glucose intolerance. In contrast, older mice with genetic inactivation of GABA signaling by AgRP neurons had only transient reduction of feeding and body weight; their energy expenditure and glucose tolerance were unaffected. These results indicate that GABAergic signaling from AgRP neurons plays a key role in the control of feeding and metabolism through an age-dependent mechanism. This new genetic technique will augment current tools used to elucidate mechanisms underlying many physiological and neurological processes.

  14. Active sites of two orthologous cytochromes P450 2E1: Differences revealed by spectroscopic methods

    International Nuclear Information System (INIS)

    Anzenbacherova, Eva; Hudecek, Jiri; Murgida, Daniel; Hildebrandt, Peter; Marchal, Stephane; Lange, Reinhard; Anzenbacher, Pavel

    2005-01-01

    Cytochromes P450 2E1 of human and minipig origin were examined by absorption spectroscopy under high hydrostatic pressure and by resonance Raman spectroscopy. The human enzyme tends to denature to the P420 form more easily than the minipig form; moreover, the apparent compressibility of its heme active site (as judged from a redshift of the absorption maximum with pressure) is greater than that of the minipig counterpart. The relative compactness of the minipig enzyme is also seen in the Raman spectra, where the presence of a planar heme conformation was inferred from band positions characteristic of the low-spin heme with a high degree of symmetry. In this respect, CYP2E1 seems to be another example of P450 conformational heterogeneity as shown, e.g., by Davydov et al. for CYP3A4 [Biochem. Biophys. Res. Commun. 312 (2003) 121-130]. The results indicate that the flexibility of the CYP active site is likely one of its basic structural characteristics.

  15. Salmonid Chromosome Evolution as Revealed by a Novel Method for Comparing RADseq Linkage Maps

    Science.gov (United States)

    Gosselin, Thierry; Normandeau, Eric; Lamothe, Manuel; Isabel, Nathalie; Audet, Céline; Bernatchez, Louis

    2016-01-01

    Whole genome duplication (WGD) can provide material for evolutionary innovation. Family Salmonidae is ideal for studying the effects of WGD as the ancestral salmonid underwent WGD relatively recently, ∼65 Ma, then rediploidized and diversified. Extensive synteny between homologous chromosome arms occurs in extant salmonids, but each species has both conserved and unique chromosome arm fusions and fissions. Assembly of large, outbred eukaryotic genomes can be difficult, but structural rearrangements within such taxa can be investigated using linkage maps. RAD sequencing provides unprecedented ability to generate high-density linkage maps for nonmodel species, but can result in low numbers of homologous markers between species due to phylogenetic distance or differences in library preparation. Here, we generate a high-density linkage map (3,826 markers) for the genus Salvelinus (Brook Charr S. fontinalis), and then identify corresponding chromosome arms among the other available salmonid high-density linkage maps, including six species of Oncorhynchus, one species each of Salmo and Coregonus, and the nonduplicated sister group of the salmonids, Northern Pike Esox lucius, used for identifying post-duplicated homeologs. To facilitate this process, we developed MapComp, which identifies identical and proximate (i.e. nearby) markers between linkage maps using a reference genome of a related species as an intermediate, increasing the number of comparable markers between linkage maps by 5-fold. This enabled a characterization of the most likely history of retained chromosomal rearrangements post-WGD, and of several conserved chromosomal inversions. Analyses of RADseq-based linkage maps from other taxa will also benefit from MapComp, available at: https://github.com/enormandeau/mapcomp/ PMID:28173098

  16. A Haplotype Information Theory Method Reveals Genes of Evolutionary Interest in European vs. Asian Pigs.

    Science.gov (United States)

    Hudson, Nicholas J; Naval-Sánchez, Marina; Porto-Neto, Laercio; Pérez-Enciso, Miguel; Reverter, Antonio

    2018-06-05

    Asian and European wild boars were independently domesticated ca. 10,000 years ago. Since the 17th century, Chinese breeds have been imported to Europe to improve the genetics of European animals by introgression of favourable alleles, resulting in a complex mosaic of haplotypes. To interrogate the structure of these haplotypes further, we have run a new haplotype segregation analysis based on information theory, namely compression efficiency (CE). We applied the approach to sequence data from individuals from each phylogeographic region (n = 23 from Asia and Europe), including a number of major pig breeds. Our genome-wide CE is able to discriminate the breeds in a manner reflecting phylogeography. Furthermore, 24,956 non-overlapping sliding windows (each comprising 1,000 consecutive SNPs) were quantified for the extent of haplotype sharing within and between Asia and Europe. The genome-wide distribution of the extent of haplotype sharing was quite different between the groups. Unlike that of European pigs, haplotype sharing in Asian pigs approximates a normal distribution. In line with this, we found that the European breeds possess a number of genomic windows with dramatically higher haplotype sharing than the Asian breeds. Our CE analysis of sliding windows captures some of the genomic regions reported to contain signatures of selection in domestic pigs. Prominent among these regions, we highlight the role of a gene encoding the mitochondrial enzyme LACTB, which has been associated with obesity, and the gene encoding MYOG, a fundamental transcriptional regulator of myogenesis. The origin of these regions likely reflects either a population bottleneck in European animals or selective targets on commercial phenotypes reducing allelic diversity in particular genes and/or regulatory regions.
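
    The paper's CE statistic has its own formal definition; as a loose proxy for the intuition (more haplotype sharing means more redundancy, hence better compressibility), one can compare compression ratios of concatenated genotype strings with a general-purpose compressor. The toy haplotypes below are invented.

```python
# Proxy only: zlib compression ratio as a stand-in for the paper's CE idea.
import random
import zlib

def compression_efficiency(haplotypes):
    """Raw length divided by compressed length of concatenated haplotypes."""
    raw = "".join(haplotypes).encode()
    return len(raw) / len(zlib.compress(raw, 9))

random.seed(0)
shared = ["01011010" * 125] * 10                 # ten identical haplotypes
diverse = ["".join(random.choice("01") for _ in range(1000)) for _ in range(10)]

print(f"shared:  {compression_efficiency(shared):.1f}x")   # compresses well
print(f"diverse: {compression_efficiency(diverse):.1f}x")  # compresses poorly
```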

  17. A novel mouse model reveals that polycystin-1 deficiency in ependyma and choroid plexus results in dysfunctional cilia and hydrocephalus.

    Directory of Open Access Journals (Sweden)

    Claas Wodarczyk

    2009-09-01

    Full Text Available Polycystin-1 (PC-1), the product of the PKD1 gene, mutated in the majority of cases of Autosomal Dominant Polycystic Kidney Disease (ADPKD), is a very large (approximately 520 kDa) plasma membrane receptor localized in several subcellular compartments, including cell-cell/matrix junctions as well as cilia. While heterologous over-expression systems have allowed identification of several of the potential biological roles of this receptor, its precise function remains largely elusive. Studying PC-1 in vivo has been a challenging task due to its complexity and low expression levels. To overcome these limitations and facilitate the study of endogenous PC-1, we have inserted HA- or Myc-tag sequences into the Pkd1 locus by homologous recombination. Here, we show that our approach was successful in generating a fully functional and easily detectable endogenous PC-1. Characterization of PC-1 distribution in vivo showed that it is expressed ubiquitously and is developmentally regulated in most tissues. Furthermore, our novel tool allowed us to investigate the role of PC-1 in the brain, where the protein is abundantly expressed. Subcellular localization of PC-1 revealed strong and specific staining in ciliated ependymal and choroid plexus cells. Consistent with this distribution, we observed hydrocephalus formation both in the ubiquitous knock-out embryos and in newborn mice with conditional inactivation of the Pkd1 gene in the brain. Both choroid plexus and ependymal cilia were morphologically normal in these mice, suggesting a role for PC-1 in ciliary function or signalling in this compartment, rather than in ciliogenesis. We propose that the role of PC-1 in brain cilia might be to prevent hydrocephalus, a previously unrecognized role for this receptor and one that might have important implications for other genetic or sporadic diseases.

  18. Revealing the sequence and resulting cellular morphology of receptor-ligand interactions during Plasmodium falciparum invasion of erythrocytes.

    Directory of Open Access Journals (Sweden)

    Greta E Weiss

    2015-02-01

    Full Text Available During blood-stage Plasmodium falciparum infection, merozoites invade uninfected erythrocytes via a complex, multistep process involving a series of distinct receptor-ligand binding events. Understanding each element in this process increases the potential to block the parasite's life cycle via drugs or vaccines. To investigate specific receptor-ligand interactions, they were systematically blocked using a combination of genetic deletion, enzymatic receptor cleavage and inhibition of binding via antibodies, peptides and small molecules, and the resulting temporal changes in invasion and morphological effects on erythrocytes were filmed using live cell imaging. Analysis of the videos has shown that receptor-ligand interactions occur in the following sequence, with the following cellular morphologies: 1) an early heparin-blockable interaction which weakly deforms the erythrocyte; 2) EBA and PfRh ligands, which strongly deform the erythrocyte, a process dependent on the merozoite's actin-myosin motor; 3) a PfRh5-basigin binding step which results in a pore or opening between parasite and host through which it appears small molecules and possibly invasion components can flow; and 4) an AMA1-RON2 interaction that mediates tight junction formation, which acts as an anchor point for internalization. In addition to enhancing general knowledge of apicomplexan biology, this work provides a rational basis for combining sequentially acting merozoite vaccine candidates in a single multi-receptor-blocking vaccine.

  19. Steady-state transport equation resolution by particle methods, and numerical results

    International Nuclear Information System (INIS)

    Mercier, B.

    1985-10-01

    A method for solving the steady-state transport equation is given. The principles of the method are presented. The method is studied in two different cases; estimates given by the theory are compared with numerical results. Results obtained in 1-D (spherical geometry) and in 2-D (axisymmetric geometry) are given [fr]

  20. Ocular-following responses to white noise stimuli in humans reveal a novel nonlinearity that results from temporal sampling.

    Science.gov (United States)

    Sheliga, Boris M; Quaia, Christian; FitzGibbon, Edmond J; Cumming, Bruce G

    2016-01-01

    White noise stimuli are frequently used to study the visual processing of broadband images in the laboratory. A common goal is to describe how responses are derived from Fourier components in the image. We investigated this issue by recording the ocular-following responses (OFRs) to white noise stimuli in human subjects. For a given speed we compared OFRs to unfiltered white noise with those to noise filtered with band-pass filters and notch filters. Removing components with low spatial frequency (SF) reduced OFR magnitudes, and the SF associated with the greatest reduction matched the SF that produced the maximal response when presented alone. This reduction declined rapidly with SF, compatible with a winner-take-all operation. Removing higher SF components increased OFR magnitudes. For higher speeds this effect became larger and propagated toward lower SFs. All of these effects were quantitatively well described by a model that combined two factors: (a) an excitatory drive that reflected the OFRs to individual Fourier components and (b) a suppression by higher SF channels where the temporal sampling of the display led to flicker. This nonlinear interaction has an important practical implication: Even with high refresh rates (150 Hz), the temporal sampling introduced by visual displays has a significant impact on visual processing. For instance, we show that this distorts speed tuning curves, shifting the peak to lower speeds. Careful attention to spectral content, in the light of this nonlinearity, is necessary to minimize the resulting artifact when using white noise patterns undergoing apparent motion.

  1. The Use of Data Mining Methods to Predict the Result of Infertility Treatment Using the IVF ET Method

    Directory of Open Access Journals (Sweden)

    Malinowski Paweł

    2014-12-01

    Full Text Available The IVF ET method is a scientifically recognized infertility treatment method. The problem, however, is this method's unsatisfactory efficiency. This calls for a more thorough analysis of the information available in the treatment process, in order to detect the factors that affect the results, as well as to effectively predict the result of treatment. Classical statistical methods have proven to be inadequate for this issue. Only the use of modern data mining methods gives hope for a more effective analysis of the collected data. This work provides an overview of the new methods used for the analysis of data on infertility treatment, and formulates a proposal for further directions of research into increasing the efficiency with which the result of the treatment process can be predicted.

  2. Comparison of Results according to the treatment Method in Maxillary Sinus Carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Woong Ki; Jo, Jae Sik; Ahn, Sung Ja; Nam, Taek Keun; Nah, Byung Sik [Chonnam National University College of Medicine, Kwangju (Korea, Republic of); Park, Seung Jin [Gyeongsang National Univ., Jinju (Korea, Republic of)

    1995-03-15

    Purpose: A retrospective analysis was performed to investigate the proper management of maxillary sinus carcinoma. Materials and Methods: The authors analysed 33 patients with squamous cell carcinoma of the maxillary sinus treated at Chonnam University Hospital from January 1986 to December 1992. There were 24 men and 9 women with a median age of 55 years. According to the AJCC TNM system of 1988, one patient with T2, 10 patients with T3 and 22 patients with T4 disease were available. Cervical lymph node metastasis was observed in 5 patients (N1: 4/33, N2b: 1/33). Patients were classified into 3 groups according to management method. The first group, named 'FAR' (16 patients), consisted of preoperative intra-arterial chemotherapy with 5-fluorouracil (5-FU; mean total dosage 3078 mg) through the superficial temporal artery with concurrent radiation (mean dose delivered 3433 cGy, daily 180-200 cGy) and vitamin A (50,000 IU daily), followed by total maxillectomy and postoperative radiation therapy (mean dose 2351 cGy). The second group, named 'SR' (7 patients), consisted of total maxillectomy followed by postoperative radiation therapy (mean dose 5920 cGy). The third group, named 'R' (6 patients), was treated with radiation alone (mean dose 7164 cGy). The Kaplan-Meier product limit method was used for survival analysis, and the Mantel-Cox test was performed to assess the significance of survival differences between groups. Results: The local recurrence-free survival rate at the end of 2 years was 100%, 50% and 0% in the FAR, SR and R groups, respectively. The disease-free survival rate at 2 years was 88.9%, 40% and 50% in the FAR, SR and R groups, respectively. There were statistically significant differences between the FAR and SR, and the FAR and R groups, in their local recurrence-free, disease-free and overall survival rates, but the difference in each survival rate between the SR and R groups was not significant. Conclusion: In this study the FAR group showed better results than the SR or R groups. In the

  3. Comparison of Results according to the treatment Method in Maxillary Sinus Carcinoma

    International Nuclear Information System (INIS)

    Chung, Woong Ki; Jo, Jae Sik; Ahn, Sung Ja; Nam, Taek Keun; Nah, Byung Sik; Park, Seung Jin

    1995-01-01

    Purpose: A retrospective analysis was performed to investigate the proper management of maxillary sinus carcinoma. Materials and Methods: The authors analysed 33 patients with squamous cell carcinoma of the maxillary sinus treated at Chonnam University Hospital from January 1986 to December 1992. There were 24 men and 9 women with a median age of 55 years. According to the AJCC TNM system of 1988, one patient with T2, 10 patients with T3 and 22 patients with T4 disease were available. Cervical lymph node metastasis was observed in 5 patients (N1: 4/33, N2b: 1/33). Patients were classified into 3 groups according to management method. The first group, named 'FAR' (16 patients), consisted of preoperative intra-arterial chemotherapy with 5-fluorouracil (5-FU; mean total dosage 3078 mg) through the superficial temporal artery with concurrent radiation (mean dose delivered 3433 cGy, daily 180-200 cGy) and vitamin A (50,000 IU daily), followed by total maxillectomy and postoperative radiation therapy (mean dose 2351 cGy). The second group, named 'SR' (7 patients), consisted of total maxillectomy followed by postoperative radiation therapy (mean dose 5920 cGy). The third group, named 'R' (6 patients), was treated with radiation alone (mean dose 7164 cGy). The Kaplan-Meier product limit method was used for survival analysis, and the Mantel-Cox test was performed to assess the significance of survival differences between groups. Results: The local recurrence-free survival rate at the end of 2 years was 100%, 50% and 0% in the FAR, SR and R groups, respectively. The disease-free survival rate at 2 years was 88.9%, 40% and 50% in the FAR, SR and R groups, respectively. There were statistically significant differences between the FAR and SR, and the FAR and R groups, in their local recurrence-free, disease-free and overall survival rates, but the difference in each survival rate between the SR and R groups was not significant. Conclusion: In this study the FAR group showed better results than the SR or R groups. In the future prospective randomized

  4. Comparison results on preconditioned SOR-type iterative method for Z-matrices linear systems

    Science.gov (United States)

    Wang, Xue-Zhong; Huang, Ting-Zhu; Fu, Ying-Ding

    2007-09-01

    In this paper, we present some comparison theorems on preconditioned iterative methods for solving linear systems with Z-matrices. The comparison results show that the rate of convergence of the Gauss-Seidel-type method is faster than that of the SOR-type iterative method.
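    To make the comparison concrete, here is a minimal sketch (invented matrix and relaxation factor, not from the paper) contrasting Gauss-Seidel, which is SOR with omega = 1, against an SOR variant on a small diagonally dominant Z-matrix system:

```python
# Minimal sketch: Gauss-Seidel vs. SOR iteration counts on a Z-matrix
# system (non-positive off-diagonal entries). Matrix, right-hand side
# and relaxation factor are invented for illustration.
import numpy as np

def sor(A, b, omega, tol=1e-10, max_iter=10_000):
    """SOR sweep; omega = 1.0 reduces to Gauss-Seidel."""
    n = len(b)
    x = np.zeros(n)
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x, k + 1
    return x, max_iter

A = np.array([[ 4.0, -1.0, -1.0],
              [-2.0,  5.0, -1.0],
              [-1.0, -2.0,  6.0]])   # Z-matrix: off-diagonals <= 0
b = np.array([1.0, 2.0, 3.0])

_, it_gs  = sor(A, b, omega=1.0)   # Gauss-Seidel
_, it_sor = sor(A, b, omega=0.9)   # under-relaxed SOR
print(f"Gauss-Seidel: {it_gs} iterations, SOR(0.9): {it_sor} iterations")
```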

  5. Image restoration by the method of convex projections: part 2 applications and numerical results.

    Science.gov (United States)

    Sezan, M I; Stark, H

    1982-01-01

    The image restoration theory discussed in a previous paper by Youla and Webb [1] is applied to a simulated image, and the results are compared with those of the well-known Gerchberg-Papoulis algorithm. The results show that the method of image restoration by projection onto convex sets, by providing a convenient technique for utilizing a priori information, performs significantly better than the Gerchberg-Papoulis method.
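    A minimal sketch of the projection-onto-convex-sets idea under simplifying assumptions (1-D real signal, known bandwidth, half the samples observed); the two projectors below are the generic data-consistency and band-limiting steps, not the exact operators of the paper:

```python
# POCS sketch: alternate between (P1) matching the observed samples and
# (P2) projecting onto the band-limited subspace via the FFT.
import numpy as np

rng = np.random.default_rng(0)
n, band = 256, 16                      # signal length, assumed bandwidth

spec = np.zeros(n, dtype=complex)
spec[:band] = rng.normal(size=band) + 1j * rng.normal(size=band)
true = np.fft.ifft(spec).real          # band-limited "true" signal

known = np.zeros(n, dtype=bool)
known[: n // 2] = True                 # only the first half is observed
observed = true.copy()

x = np.zeros(n)
for _ in range(200):
    x[known] = observed[known]         # P1: enforce observed samples
    X = np.fft.fft(x)
    X[band : n - band + 1] = 0.0       # P2: zero out-of-band frequencies
    x = np.fft.ifft(X).real

print("max restoration error:", np.max(np.abs(x - true)))
```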

  6. RESULTS OF ANALYSIS OF BENCHMARKING METHODS OF INNOVATION SYSTEMS ASSESSMENT IN ACCORDANCE WITH AIMS OF SUSTAINABLE DEVELOPMENT OF SOCIETY

    Directory of Open Access Journals (Sweden)

    A. Vylegzhanina

    2016-01-01

    Full Text Available In this work, we present the results of a comparative analysis of international innovation-system rating indexes with respect to their compliance with the goals of sustainable development. The purpose of this research is to define requirements for benchmarking methods that assess national or regional innovation systems, and to compare such methods on the assumption that an innovation system should be aligned with the concept of sustainable development. Analysis of the goal sets and concepts underlying the observed international composite innovation indexes, together with a comparison of their metrics and calculation techniques, allowed us to reveal the opportunities and limitations of using these methods within the sustainable development framework. We formulated targets of innovation development on the basis of the innovation priorities of sustainable socio-economic development. By comparing the indexes against these targets, we identified two methods of assessing innovation systems that are most closely connected with the goals of sustainable development. Nevertheless, no existing benchmarking method meets the need of assessing innovation systems in compliance with the sustainable development concept to a sufficient extent. We suggest practical directions for developing methods that assess innovation systems in compliance with the goals of societal sustainable development.

  7. Comparison of multiple-criteria decision-making methods - results of simulation study

    Directory of Open Access Journals (Sweden)

    Michał Adamczak

    2016-12-01

    Full Text Available Background: Today, both researchers and practitioners have many methods for supporting the decision-making process. Due to the conditions in which supply chains function, the most interesting are multi-criteria methods. The use of sophisticated methods for supporting decisions requires the parameterization and execution of calculations that are often complex. So is it efficient to use sophisticated methods? Methods: The authors of the publication compared two popular multi-criteria decision-making methods: the Weighted Sum Model (WSM) and the Analytic Hierarchy Process (AHP). A simulation study recreated these two decision-making methods. Input data for this study were a set of criteria weights and the value of each alternative in terms of each criterion. Results: The iGrafx Process for Six Sigma simulation software recreated how both multiple-criteria decision-making methods (WSM and AHP) function. The result of the simulation was a numerical value defining the preference of each of the alternatives according to the WSM and AHP methods. The alternative producing a result of higher numerical value was considered preferred, according to the selected method. In the analysis of the results, the relationship between the values of the parameters and the difference in the results presented by both methods was investigated. Statistical methods, including hypothesis testing, were used for this purpose. Conclusions: The simulation study findings prove that the results obtained with the use of the two multiple-criteria decision-making methods are very similar. Differences occurred more frequently in lower-value parameters from the "value of each alternative" group and higher-value parameters from the "weight of criteria" group.
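    As a toy illustration of the two methods being compared (invented numbers; AHP is simplified here to column normalisation of a ready-made decision matrix, omitting the pairwise-comparison eigenvector step of the full method):

```python
# WSM vs. simplified AHP scoring of three alternatives on three criteria.
import numpy as np

weights = np.array([0.5, 0.3, 0.2])       # criteria weights, sum to 1
values = np.array([[7.0, 5.0, 8.0],       # alternative A
                   [6.0, 9.0, 4.0],       # alternative B
                   [8.0, 6.0, 5.0]])      # alternative C

wsm = values @ weights                          # weighted sum of raw values
ahp = (values / values.sum(axis=0)) @ weights   # column-normalised first

for name, w, a in zip("ABC", wsm, ahp):
    print(f"{name}: WSM = {w:.3f}, AHP = {a:.3f}")
```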

  8. results

    Directory of Open Access Journals (Sweden)

    Salabura Piotr

    2017-01-01

    Full Text Available The HADES experiment at GSI is the only high-precision experiment probing nuclear matter in the beam energy range of a few AGeV. Pion, proton and ion beams are used to study rare dielectron and strangeness probes to diagnose the properties of strongly interacting matter in this energy regime. Selected results from p + A and A + A collisions are presented and discussed.

  9. Chemical abundances of fast-rotating massive stars. I. Description of the methods and individual results

    Science.gov (United States)

    Cazorla, Constantin; Morel, Thierry; Nazé, Yaël; Rauw, Gregor; Semaan, Thierry; Daflon, Simone; Oey, M. S.

    2017-07-01

    Aims: Recent observations have challenged our understanding of rotational mixing in massive stars by revealing a population of fast-rotating objects with apparently normal surface nitrogen abundances. However, several questions have arisen because of a number of issues, which have rendered a reinvestigation necessary; these issues include the presence of numerous upper limits for the nitrogen abundance, unknown multiplicity status, and a mix of stars with different physical properties, such as their mass and evolutionary state, which are known to control the amount of rotational mixing. Methods: We have carefully selected a large sample of bright, fast-rotating early-type stars of our Galaxy (40 objects with spectral types between B0.5 and O4). Their high-quality, high-resolution optical spectra were then analysed with the stellar atmosphere modelling codes DETAIL/SURFACE or CMFGEN, depending on the temperature of the target. Several internal and external checks were performed to validate our methods; notably, we compared our results with literature data for some well-known objects, studied the effect of gravity darkening, or confronted the results provided by the two codes for stars amenable to both analyses. Furthermore, we studied the radial velocities of the stars to assess their binarity. Results: This first part of our study presents our methods and provides the derived stellar parameters, He, CNO abundances, and the multiplicity status of every star of the sample. It is the first time that He and CNO abundances of such a large number of Galactic massive fast rotators are determined in a homogeneous way. Based on observations obtained with the Heidelberg Extended Range Optical Spectrograph (HEROS) at the Telescopio Internacional de Guanajuato (TIGRE) with the SOPHIE échelle spectrograph at the Haute-Provence Observatory (OHP; Institut Pytheas; CNRS, France), and with the Magellan Inamori Kyocera Echelle (MIKE) spectrograph at the Magellan II Clay telescope

  10. The anchors of steel wire ropes, testing methods and their results

    Directory of Open Access Journals (Sweden)

    J. Krešák

    2012-10-01

    Full Text Available The present paper introduces an application of the acoustic and thermographic methods to the defectoscopic testing of immobile steel wire ropes at their most critical point, the anchor. First measurements and the results obtained with these new defectoscopic methods are presented. In defectoscopic tests at the anchor, the widely used magnetic method gives unreliable results and therefore presents a problem for steel wire defectoscopy. Application of the two new methods to steel wire defectoscopy at the anchor point will enable increased safety at the anchors of steel wire ropes in bridge, roof, tower and aerial cable lift constructions.

  11. A Systematic Protein Refolding Screen Method using the DGR Approach Reveals that Time and Secondary TSA are Essential Variables.

    Science.gov (United States)

    Wang, Yuanze; van Oosterwijk, Niels; Ali, Ameena M; Adawy, Alaa; Anindya, Atsarina L; Dömling, Alexander S S; Groves, Matthew R

    2017-08-24

    Refolding of proteins derived from inclusion bodies is very promising, as it can provide a reliable source of target proteins of high purity. However, inclusion body-based protein production is often limited by the lack of techniques for the detection of correctly refolded protein. As a result, the selection of refolding conditions is mostly achieved using trial-and-error approaches and is thus time-consuming. In this study, we use the latest developments in the differential scanning fluorimetry guided refolding (DGR) approach as an analytical method to detect correctly refolded protein. We describe a systematic buffer screen that contains a 96-well primary pH-refolding screen in conjunction with a secondary additive screen. Our research demonstrates that this approach can be applied to determine refolding conditions for several proteins. In addition, it revealed which "helper" molecules, such as arginine, and which additives are essential. Four different proteins, HA-RBD, MDM2, IL-17A and PD-L1, were used to validate our refolding approach. Our systematic protocol evaluates the impact of the "helper" molecules, the pH, the buffer system and time on the protein refolding process in a high-throughput fashion. Finally, we demonstrate that refolding time and a secondary thermal shift assay buffer screen are critical factors for improving refolding efficiency.

  12. A method that reveals the multi-level ultrametric tree hidden in p-spin-glass-like systems

    International Nuclear Information System (INIS)

    Baviera, R; Virasoro, M A

    2015-01-01

    In the study of disordered models like spin glasses the key object of interest is the rugged energy hypersurface defined in configuration space. The statistical mechanics calculation of the Gibbs–Boltzmann partition function gives the information necessary to understand the equilibrium behavior of the system as a function of the temperature but is not enough if we are interested in the more general aspects of the hypersurface: it does not give us, for instance, the different degrees of ruggedness at different scales. In the context of the replica symmetry breaking (RSB) approach we discuss here a rather simple extension that can provide a much more detailed picture. The attractiveness of the method relies on the fact that it is conceptually transparent and the additional calculations are rather straightforward. We think that this approach reveals an ultrametric organisation with many levels in models like p-spin glasses when we include saddle points. In this first paper we present detailed calculations for the spherical p-spin glass model where we discover that the corresponding decreasing Parisi function q(x) codes this hidden ultrametric organisation. (paper)

  13. Tensile strength of concrete under static and intermediate strain rates: Correlated results from different testing methods

    International Nuclear Information System (INIS)

    Wu Shengxing; Chen Xudong; Zhou Jikai

    2012-01-01

    Highlights: Tensile strength of concrete increases with increasing strain rate. The strain-rate sensitivity of the tensile strength of concrete depends on the test method. The high stressed volume method can correlate results from various test methods. - Abstract: This paper presents a comparative experiment and analysis of three different methods (direct tension, splitting tension and four-point loading flexural tests) for determining the tensile strength of concrete under low and intermediate strain rates. A further objective of this investigation is to analyze the suitability of the high stressed volume approach and the Weibull effective volume method for correlating the results of different tensile tests of concrete. The test results show that the strain-rate sensitivity of tensile strength depends on the type of test: the splitting tensile strength of concrete is more sensitive to an increase in strain rate than the flexural and direct tensile strengths. The high stressed volume method could be used to obtain a tensile strength value of concrete free from the influence of the characteristics of the tests and specimens. The Weibull effective volume method, however, proved inadequate for describing the failure of concrete specimens determined by the different testing methods.
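    For reference, the Weibull effective-volume correlation mentioned above reduces to a one-line size-effect relation; the sketch below uses an assumed Weibull modulus and hypothetical effective volumes:

```python
# Weibull size effect: equal failure probability implies
# sigma1 / sigma2 = (V2 / V1) ** (1 / m) for effective volumes V1, V2.
m = 12.0                  # assumed Weibull modulus for concrete in tension
V1, V2 = 1.0e6, 4.0e6     # hypothetical effective volumes, mm^3

ratio = (V2 / V1) ** (1.0 / m)
print(f"expected strength ratio sigma1/sigma2 = {ratio:.3f}")
```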

  14. Non-Destructive Evaluation Method Based On Dynamic Invariant Stress Resultants

    Directory of Open Access Journals (Sweden)

    Zhang Junchi

    2015-01-01

    Full Text Available Most vibration-based damage detection methods rely on changes in frequencies, mode shapes, mode shape curvature, and flexibilities. These methods are limited and typically can only detect the presence and location of damage; they can seldom identify the exact severity of damage to structures. This paper presents research on the development of a new non-destructive evaluation method to identify the existence, location, and severity of damage in structural systems. The method utilizes the concept of invariant stress resultants (ISR). The basic concept of ISR is that, at any given cross section, the resultant internal force distribution in a structural member is not affected by the inflicted damage. The method utilizes dynamic analysis of the structure to simulate direct measurements of acceleration, velocity and displacement simultaneously. The proposed dynamic ISR method is developed and utilized to detect damage through the corresponding changes in mass, damping and stiffness. The objectives of this research are to develop the basic theory of the dynamic ISR method, apply it to specific types of structures, and verify the accuracy of the developed theory. Numerical results demonstrating the application of the method reflect its sensitivity and accuracy in characterizing multiple damage locations.

  15. Monitoring ambient ozone with a passive measurement technique method, field results and strategy

    NARCIS (Netherlands)

    Scheeren, BA; Adema, EH

    1996-01-01

    A low-cost, accurate and sensitive passive measurement method for ozone has been developed and tested. The method is based on the reaction of ozone with indigo carmine, which results in colourless reaction products that are detected spectrophotometrically after exposure. Coated glass filters are

  16. Nondestructive methods for the structural evaluation of wood floor systems in historic buildings : preliminary results : [abstract

    Science.gov (United States)

    Zhiyong Cai; Michael O. Hunt; Robert J. Ross; Lawrence A. Soltis

    1999-01-01

    To date, there is no standard method for evaluating the structural integrity of wood floor systems using nondestructive techniques. Current methods of examination and assessment are often subjective and therefore tend to yield imprecise or variable results. For this reason, estimates of allowable wood floor loads are often conservative. The assignment of conservatively...

  17. Comparison result of inversion of gravity data of a fault by particle swarm optimization and Levenberg-Marquardt methods.

    Science.gov (United States)

    Toushmalani, Reza

    2013-01-01

    The purpose of this study was to compare the performance of two methods for the gravity inversion of a fault. The first method, particle swarm optimization (PSO), is a heuristic global optimization algorithm based on swarm intelligence; it originates from research on the movement behaviour of bird flocks and fish schools. The second, the Levenberg-Marquardt (LM) algorithm, is an approximation to Newton's method that is also used for training artificial neural networks (ANNs). In this paper we first discuss the gravity field of a fault, then describe the PSO and LM algorithms and present their application to solving the inverse problem of a fault. The parameters used by the algorithms are given for the individual tests. The inverse solution reveals that the fault model parameters agree quite well with the known results. Better agreement between the predicted model anomaly and the observed gravity anomaly was found with the PSO method than with the LM method.
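    A minimal global-best PSO sketch in the spirit of the abstract, fitting a simplified fault-edge gravity model g(x) = A*(1/2 + arctan((x - x0)/z)/pi) to synthetic noisy data; the forward model, parameter bounds and PSO coefficients are assumptions for illustration, not the authors' exact set-up:

```python
import numpy as np

rng = np.random.default_rng(1)
xs = np.linspace(-10.0, 10.0, 81)
true = np.array([5.0, 1.0, 2.0])                 # A, x0, z

def forward(p):
    A, x0, z = p
    return A * (0.5 + np.arctan((xs - x0) / z) / np.pi)

data = forward(true) + rng.normal(0.0, 0.05, xs.size)
misfit = lambda p: np.sum((forward(p) - data) ** 2)

n_part, n_iter = 30, 200
lo = np.array([0.1, -5.0, 0.1])                  # lower parameter bounds
hi = np.array([10.0, 5.0, 5.0])                  # upper parameter bounds
pos = rng.uniform(lo, hi, (n_part, 3))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([misfit(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_part, 3))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([misfit(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("recovered (A, x0, z):", gbest)   # should be close to (5, 1, 2)
```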

  18. Comparison Of Simulation Results When Using Two Different Methods For Mold Creation In Moldflow Simulation

    Directory of Open Access Journals (Sweden)

    Kaushikbhai C. Parmar

    2017-04-01

    Full Text Available Simulation gives different results when different methods are used for the same model. The Autodesk Moldflow Simulation software provides two different facilities for creating the mold for the simulation of the injection molding process: the mold can be created inside Moldflow, or it can be imported as a CAD file. The aim of this paper is to study the differences in the simulation results (mold temperature, part temperature, deflection in different directions, simulation time and coolant temperature) between these two methods.

  19. Experimental Results and Numerical Simulation of the Target RCS using Gaussian Beam Summation Method

    Directory of Open Access Journals (Sweden)

    Ghanmi Helmi

    2018-05-01

    Full Text Available This paper presents a numerical and experimental study of the Radar Cross Section (RCS) of radar targets using the Gaussian Beam Summation (GBS) method. The GBS method has several advantages over the ray method, mainly concerning the caustic problem. To evaluate the performance of the chosen method, we analysed the RCS using Gaussian Beam Summation (GBS) and Gaussian Beam Launching (GBL), the asymptotic models Physical Optics (PO) and the Geometrical Theory of Diffraction (GTD), and the rigorous Method of Moments (MoM). We then validated the numerical results experimentally with measurements carried out in the anechoic chamber of Lab-STICC at ENSTA Bretagne. The numerical and experimental RCS results are studied and given as a function of various parameters: polarization type, target size, number of Gaussian beams and Gaussian beam width.

  20. SPACE CHARGE SIMULATION METHODS INCORPORATED IN SOME MULTI - PARTICLE TRACKING CODES AND THEIR RESULTS COMPARISON

    International Nuclear Information System (INIS)

    BEEBE - WANG, J.; LUCCIO, A.U.; D IMPERIO, N.; MACHIDA, S.

    2002-01-01

    Space charge in high intensity beams is an important issue in accelerator physics. Due to the complexity of the problems, the most effective way of investigating its effects is by computer simulation. In recent years, many space charge simulation methods have been developed and incorporated in various 2D or 3D multi-particle tracking codes. It has become necessary to benchmark these methods against each other, and against experimental results. As part of a global effort, we present our initial comparison of the space charge methods incorporated in the simulation codes ORBIT++, ORBIT and SIMPSONS. In this paper, the methods included in these codes are overviewed. The simulation results are presented and compared. Finally, from this study, the advantages and disadvantages of each method are discussed.

  1. Application of the DSA preconditioned GMRES formalism to the method of characteristics - First results

    International Nuclear Information System (INIS)

    Le Tellier, R.; Hebert, A.

    2004-01-01

    The method of characteristics is well known for its slow convergence; consequently, as is often done for SN methods, the Generalized Minimal Residual (GMRES) approach has been investigated for its practical implementation and its high reliability. GMRES is one of the most effective Krylov iterative methods for solving large linear systems. Moreover, the system has been 'left preconditioned' with the Algebraic Collapsing Acceleration (ACA), a variant of Diffusion Synthetic Acceleration (DSA) based on I. Suslov's earlier works. This paper presents the first numerical results of these methods in 2D geometries with material discontinuities. Indeed, previous investigations have shown a degraded effectiveness of Diffusion Synthetic Acceleration with this kind of geometry. Results are presented for 9 x 9 Cartesian assemblies in terms of the speed of convergence of the inner (fixed source) iterations of the method of characteristics. They show a significant improvement in the convergence rate. (authors)
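    As a rough analogue of the left-preconditioned GMRES set-up described above (a simple Jacobi preconditioner stands in for the ACA/DSA operator, and the tridiagonal system is invented for the example):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, gmres

n = 200
A = diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

d = A.diagonal()
M = LinearOperator((n, n), matvec=lambda v: v / d)  # Jacobi M ~ A^-1

x, info = gmres(A, b, M=M, rtol=1e-10)  # older SciPy spells this tol=
print(info, np.linalg.norm(b - A @ x))  # info == 0 means converged
```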

  2. SPACE CHARGE SIMULATION METHODS INCORPORATED IN SOME MULTI - PARTICLE TRACKING CODES AND THEIR RESULTS COMPARISON.

    Energy Technology Data Exchange (ETDEWEB)

    BEEBE - WANG,J.; LUCCIO,A.U.; D IMPERIO,N.; MACHIDA,S.

    2002-06-03

    Space charge in high intensity beams is an important issue in accelerator physics. Due to the complexity of the problems, the most effective way of investigating its effects is by computer simulation. In recent years, many space charge simulation methods have been developed and incorporated in various 2D or 3D multi-particle tracking codes. It has become necessary to benchmark these methods against each other, and against experimental results. As part of a global effort, we present our initial comparison of the space charge methods incorporated in the simulation codes ORBIT++, ORBIT and SIMPSONS. In this paper, the methods included in these codes are overviewed. The simulation results are presented and compared. Finally, from this study, the advantages and disadvantages of each method are discussed.

  3. A result-driven minimum blocking method for PageRank parallel computing

    Science.gov (United States)

    Tao, Wan; Liu, Tao; Yu, Wei; Huang, Gan

    2017-01-01

    Matrix blocking is a common method for improving the computational efficiency of PageRank, but the blocking rules are hard to determine and the subsequent calculation is complicated. To tackle these problems, we propose a result-driven minimum blocking method for a parallel implementation of the PageRank algorithm. The minimum blocking stores only the elements needed for the result matrix. In return, the subsequent calculation becomes simple and the I/O transmission cost is cut down. We performed experiments on several matrices of different sizes and degrees of sparsity. The results show that the proposed method has better computational efficiency than traditional blocking methods.
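    For orientation, a minimal PageRank power iteration on a toy four-page graph; the paper's result-driven blocking itself is not reproduced here, this only shows the sparse iteration that such blocking is meant to accelerate:

```python
import numpy as np
from scipy.sparse import csr_matrix

links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}   # src -> list of dst pages
n = 4
rows, cols, vals = [], [], []
for src, dsts in links.items():               # column-stochastic matrix
    for dst in dsts:
        rows.append(dst); cols.append(src); vals.append(1.0 / len(dsts))
P = csr_matrix((vals, (rows, cols)), shape=(n, n))

d = 0.85                                      # damping factor
r = np.full(n, 1.0 / n)
for _ in range(100):
    # every page here has out-links, so no dangling-node fix is needed
    r_new = (1.0 - d) / n + d * (P @ r)
    if np.abs(r_new - r).sum() < 1e-12:
        break
    r = r_new
print("PageRank:", r / r.sum())
```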

  4. A Pragmatic Smoothing Method for Improving the Quality of the Results in Atomic Spectroscopy

    Science.gov (United States)

    Bennun, Leonardo

    2017-07-01

    A new smoothing method is presented for improving the identification and quantification of spectral functions, based on prior knowledge of the signals that are expected to be quantified. These signals are used as weighting coefficients in the smoothing algorithm. The method was conceived for atomic and nuclear spectroscopies, preferably for techniques where net counts are proportional to acquisition time, such as particle induced X-ray emission (PIXE) and other X-ray fluorescence spectroscopic methods. This algorithm, when properly applied, distorts neither the form nor the intensity of the signal, so it is well suited for all kinds of spectroscopic techniques. The method is extremely effective at reducing high-frequency noise in the signal, much more so than a single rectangular smooth of the same width. As with all smoothing techniques, the proposed method improves the precision of the results, but in this case we also found a systematic improvement in their accuracy. We still have to evaluate the improvement in the quality of the results when the method is applied to real experimental data. We expect better characterization of the net area quantification of the peaks, and smaller detection and quantification limits. We have applied this method to signals that obey Poisson statistics, but with the same ideas and criteria it could be applied to time series. In the general case, when this algorithm is applied to experimental results, the sought characteristic functions required for this weighted smoothing method should be obtained from a system with strong stability. If the sought signals are not perfectly clean, this method should be applied carefully.
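    A minimal sketch of the weighting idea on synthetic Poisson counts, assuming a Gaussian peak shape as the known characteristic signal; the kernel width and spectrum are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
ch = np.arange(512)
peak = lambda c, mu, s, a: a * np.exp(-0.5 * ((c - mu) / s) ** 2)
spectrum = rng.poisson(10 + peak(ch, 256, 4.0, 120))  # flat bg + peak

k = peak(np.arange(-12, 13), 0, 4.0, 1.0)   # expected shape as kernel
k /= k.sum()                                # normalise to unit area
smooth_weighted = np.convolve(spectrum, k, mode="same")

box = np.ones(25) / 25                      # rectangular smooth, same width
smooth_box = np.convolve(spectrum, box, mode="same")
print(smooth_weighted[256], smooth_box[256])  # peak channel after smoothing
```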

  5. Project Deep Drilling KLX02 - Phase 2. Methods, scope of activities and results. Summary report

    International Nuclear Information System (INIS)

    Ekman, L.

    2001-04-01

    Geoscientific investigations performed by SKB, including those at the Aespoe Hard Rock Laboratory, have so far comprised the bedrock horizon down to about 1000 m. The primary purposes of the c. 1700 m deep, φ76 mm, sub-vertical core borehole KLX02, drilled during the autumn of 1992 at Laxemar, Oskarshamn, were to test core drilling technique at large depths and with a relatively large diameter, and to enable geoscientific investigations beyond 1000 m. Drilling of borehole KLX02 was completed very successfully. Results of the drilling commission and the borehole investigations conducted in conjunction with drilling have been reported earlier. The present report provides a summary of the investigations made during the five-year period after completion of drilling. The results as well as the methods applied are described. A variety of geoscientific investigations to depths exceeding 1600 m were successfully performed. However, the investigations were not entirely problem-free. For example, borehole equipment got stuck in the borehole on several occasions. Special investigations, among them a fracture study, were initiated in order to reveal the mechanisms behind this problem. Different explanations seem possible, e.g. breakouts from the borehole wall, which may be a specific problem related to the stress situation in deep boreholes. The investigation approach for borehole KLX02 followed, in general outline, the SKB model for site investigations, in which a number of key issues for site characterization are studied. For each of these, a number of geoscientific parameters are investigated and determined. One important aim is to erect a lithological-structural model of the site, which constitutes the basic requirement for modelling mechanical stability, thermal properties, groundwater flow, groundwater chemistry and transport of solutes. The investigations in borehole KLX02 resulted in a thorough lithological-structural characterization of the rock volume near the borehole. In order to

  6. Project Deep Drilling KLX02 - Phase 2. Methods, scope of activities and results. Summary report

    Energy Technology Data Exchange (ETDEWEB)

    Ekman, L. [GEOSIGMA AB/LE Geokonsult AB, Uppsala (Sweden)

    2001-04-01

    Geoscientific investigations performed by SKB, including those at the Aespoe Hard Rock Laboratory, have so far comprised the bedrock horizon down to about 1000 m. The primary purposes of the c. 1700 m deep, φ76 mm, sub-vertical core borehole KLX02, drilled during the autumn of 1992 at Laxemar, Oskarshamn, were to test core drilling technique at large depths and with a relatively large diameter, and to enable geoscientific investigations beyond 1000 m. Drilling of borehole KLX02 was completed very successfully. Results of the drilling commission and the borehole investigations conducted in conjunction with drilling have been reported earlier. The present report provides a summary of the investigations made during the five-year period after completion of drilling. The results as well as the methods applied are described. A variety of geoscientific investigations to depths exceeding 1600 m were successfully performed. However, the investigations were not entirely problem-free. For example, borehole equipment got stuck in the borehole on several occasions. Special investigations, among them a fracture study, were initiated in order to reveal the mechanisms behind this problem. Different explanations seem possible, e.g. breakouts from the borehole wall, which may be a specific problem related to the stress situation in deep boreholes. The investigation approach for borehole KLX02 followed, in general outline, the SKB model for site investigations, in which a number of key issues for site characterization are studied. For each of these, a number of geoscientific parameters are investigated and determined. One important aim is to erect a lithological-structural model of the site, which constitutes the basic requirement for modelling mechanical stability, thermal properties, groundwater flow, groundwater chemistry and transport of solutes. The investigations in borehole KLX02 resulted in a thorough lithological-structural characterization of the rock volume near the borehole. In order

  7. Interpreting Statistical Significance Test Results: A Proposed New "What If" Method.

    Science.gov (United States)

    Kieffer, Kevin M.; Thompson, Bruce

    As the 1994 publication manual of the American Psychological Association emphasized, "p" values are affected by sample size. As a result, it can be helpful to interpret the results of statistical significance tests in a sample-size context by conducting so-called "what if" analyses. However, these methods can be inaccurate…

  8. A revised method of presenting wavenumber-frequency power spectrum diagrams that reveals the asymmetric nature of tropical large-scale waves

    Energy Technology Data Exchange (ETDEWEB)

    Chao, Winston C. [NASA/Goddard Space Flight Center, Global Modeling and Assimilation Office, Mail Code 610.1, Greenbelt, MD (United States); Yang, Bo; Fu, Xiouhua [University of Hawaii at Manoa, School of Ocean and Earth Science and Technology, International Pacific Research Center, Honolulu, HI (United States)

    2009-11-15

    The popular method of presenting wavenumber-frequency power spectrum diagrams for studying tropical large-scale waves in the literature is shown to give an incomplete presentation of these waves. The so-called "convectively coupled Kelvin (mixed Rossby-gravity) waves" are presented as existing only in the symmetric (anti-symmetric) component of the diagrams. This is obviously not consistent with the published composite/regression studies of "convectively coupled Kelvin waves," which illustrate the asymmetric nature of these waves. The cause of this inconsistency is revealed in this note and a revised method of presenting the power spectrum diagrams is proposed. When this revised method is used, "convectively coupled Kelvin waves" do show anti-symmetric components, and "convectively coupled mixed Rossby-gravity waves (also known as Yanai waves)" do show a hint of symmetric components. These results bolster a published proposal that these waves should be called "chimeric Kelvin waves," "chimeric mixed Rossby-gravity waves," etc. This revised method of presenting power spectrum diagrams offers an additional means of comparing GCM output with observations by calling attention to the capability of GCMs to correctly simulate the asymmetric characteristics of equatorial waves. (orig.)

  9. Non-invasive genetics outperforms morphological methods in faecal dietary analysis, revealing wild boar as a considerable conservation concern for ground-nesting birds.

    Science.gov (United States)

    Oja, Ragne; Soe, Egle; Valdmann, Harri; Saarma, Urmas

    2017-01-01

    Capercaillie (Tetrao urogallus) and other grouse species represent conservation concerns across Europe due to their negative abundance trends. In addition to habitat deterioration, predation is considered a major factor contributing to population declines. While the role of generalist predators on grouse predation is relatively well known, the impact of the omnivorous wild boar has remained elusive. We hypothesize that wild boar is an important predator of ground-nesting birds, but has been neglected as a bird predator because traditional morphological methods underestimate the proportion of birds in wild boar diet. To distinguish between different mammalian predator species, as well as different grouse prey species, we developed a molecular method based on the analysis of mitochondrial DNA that allows accurate species identification. We collected 109 wild boar faeces at protected capercaillie leks and surrounding areas and analysed bird consumption using genetic methods and classical morphological examination. Genetic analysis revealed that the proportion of birds in wild boar faeces was significantly higher (17.3%; 4.5×) than indicated by morphological examination (3.8%). Moreover, the genetic method allowed considerably more precise taxonomic identification of consumed birds compared to morphological analysis. Our results demonstrate: (i) the value of using genetic approaches in faecal dietary analysis due to their higher sensitivity, and (ii) that wild boar is an important predator of ground-nesting birds, deserving serious consideration in conservation planning for capercaillie and other grouse.

  10. Ti α-ω phase transformation and metastable structure, revealed by the solid-state nudged elastic band method

    Science.gov (United States)

    Zarkevich, Nikolai; Johnson, Duane D.

    Titanium is one of the four most utilized structural metals, and hence its structural changes and potential metastable phases under stress are of considerable importance. Using DFT+U combined with the generalized solid-state nudged elastic band (SS-NEB) method, we consider the pressure-driven transformation between the Ti α and ω phases, and find an intermediate metastable body-centered orthorhombic (bco) structure of lower density. We verify its stability, assess the phonons and electronic structure, and compare the computational results to experiment. Interestingly, standard density functional theory (DFT) yields the ω phase as the Ti ground state, in contradiction to the observed α phase at low pressure and temperature. We correct this by proper consideration of the strongly correlated d-electrons, and utilize the DFT+U method in the SS-NEB to obtain the relevant transformation pathway and structures. We use methods developed with support by the U.S. Department of Energy (DE-FG02-03ER46026 and DE-AC02-07CH11358). Ames Laboratory is operated for the DOE by Iowa State University under Contract DE-AC02-07CH11358.

  11. Doppler method leak detection for LMFBR steam generators. Pt. 1. Experimental results of bubble detection using small models

    International Nuclear Information System (INIS)

    Kumagai, Hiromichi

    1999-01-01

    To prevent the expansion of tube damage and to maintain structural integrity in the steam generators (SGs) of fast breeder reactors (FBRs), it is necessary to detect precisely and immediately any leakage of water from the heat transfer tubes. For this purpose, an active acoustic method was developed. Previous studies have revealed that in practical steam generators the active acoustic method can detect leaks of 10 l/s within 10 seconds. To prevent the expansion of damage to neighbouring tubes, it is necessary to detect smaller leakages of water from the heat transfer tubes. The Doppler method is designed to detect small leakages and to find the source of the leak before damage spreads to neighbouring tubes. To evaluate the relationship between the detection sensitivity of the Doppler method and the bubble volume and bubble size, the structural shapes and bubble flow conditions were investigated experimentally using a small structural model. The results show that the Doppler method can detect bubbles under bubble flow conditions and is sensitive enough to detect small leakages within a short time. The Doppler method thus has strong potential for the detection of water leakage in SGs. (author)

  12. A method for data handling numerical results in parallel OpenFOAM simulations

    International Nuclear Information System (INIS)

    Anton, Alin (Faculty of Automatic Control and Computing, Politehnica University of Timişoara, 2nd Vasile Pârvan Ave., 300223, TM Timişoara, Romania, alin.anton@cs.upt.ro); Muntean, Sebastian (Center for Advanced Research in Engineering Science, Romanian Academy – Timişoara Branch, 24th Mihai Viteazu Ave., 300221, TM Timişoara, Romania)

    2015-01-01

    Parallel computational fluid dynamics simulations produce vast amounts of numerical result data. This paper introduces a method for reducing the size of the data by replaying the interprocessor traffic. The results are recovered only in certain regions of interest configured by the user. A known test case is used for several mesh partitioning scenarios using the OpenFOAM toolkit® [1]. The space savings obtained with classic algorithms remain constant for more than 60 Gb of floating point data. Our method is most efficient on large simulation meshes and is much better suited for compressing large-scale simulation results than the regular algorithms.

  13. A method for data handling numerical results in parallel OpenFOAM simulations

    Energy Technology Data Exchange (ETDEWEB)

    Anton, Alin [Faculty of Automatic Control and Computing, Politehnica University of Timişoara, 2nd Vasile Pârvan Ave., 300223, TM Timişoara, Romania, alin.anton@cs.upt.ro (Romania); Muntean, Sebastian [Center for Advanced Research in Engineering Science, Romanian Academy – Timişoara Branch, 24th Mihai Viteazu Ave., 300221, TM Timişoara (Romania)

    2015-12-31

    Parallel computational fluid dynamics simulations produce vast amounts of numerical result data. This paper introduces a method for reducing the size of the data by replaying the interprocessor traffic. The results are recovered only in certain regions of interest configured by the user. A known test case is used for several mesh partitioning scenarios using the OpenFOAM toolkit® [1]. The space savings obtained with classic algorithms remain constant for more than 60 Gb of floating point data. Our method is most efficient on large simulation meshes and is much better suited for compressing large-scale simulation results than the regular algorithms.

  14. Soil Particle Size Analysis by Laser Diffractometry: Result Comparison with Pipette Method

    Science.gov (United States)

    Šinkovičová, Miroslava; Igaz, Dušan; Kondrlová, Elena; Jarošová, Miriam

    2017-10-01

    Soil texture, as a basic soil physical property, provides basic information on the soil grain size distribution and the representation of grain size fractions. Several methods of particle size measurement, based on different physical principles, are currently available. The pipette method, based on the different sedimentation velocities of particles with different diameters, is considered one of the standard methods for determining the distribution of individual grain size fractions. Following technical advancements, optical methods such as laser diffraction can nowadays also be used for determining the grain size distribution in soil. A review of the domestic and international literature on this topic makes it obvious that the results obtained by laser diffractometry do not correspond with the results obtained by the pipette method. The main aim of this paper was to analyse 132 samples of medium-fine soil, taken from the Nitra River catchment in Slovakia at depths of 15-20 cm and 40-45 cm, using the laser analysers ANALYSETTE 22 MicroTec plus (Fritsch GmbH) and Mastersizer 2000 (Malvern Instruments Ltd). The results obtained by laser diffractometry were compared with the pipette method, and regression relationships using linear, exponential, power and polynomial trends were derived. The regressions with the three highest regression coefficients (R2) were further investigated; the closest fit was observed for the polynomial regression. In view of the results obtained, we recommend using the derived estimate of the clay fraction representation when the analysis is done by laser diffractometry. The advantages of the laser diffraction method comprise the short analysis time, the small sample amount required, its applicability to various grain size fraction and soil type classification systems, and the wide range of determined fractions. Therefore, it is necessary to focus on this issue further to address the
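    The recommended conversion can be expressed as a fitted regression; the sketch below fits a second-order polynomial between hypothetical clay-fraction percentages (the paper's actual data and coefficients are not reproduced):

```python
import numpy as np

laser   = np.array([4.1, 6.8, 9.5, 12.0, 15.2, 18.9])    # % clay, laser
pipette = np.array([9.0, 13.5, 17.8, 21.0, 25.4, 29.8])  # % clay, pipette

coeffs = np.polyfit(laser, pipette, 2)    # 2nd-order polynomial fit
fit = np.poly1d(coeffs)
ss_res = np.sum((pipette - fit(laser)) ** 2)
ss_tot = np.sum((pipette - pipette.mean()) ** 2)
print(fit)
print(f"R^2 = {1 - ss_res / ss_tot:.4f}")
```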

  15. Qualitative and quantitative methods for human factor analysis and assessment in NPP. Investigations and results

    International Nuclear Information System (INIS)

    Hristova, R.; Kalchev, B.; Atanasov, D.

    2005-01-01

    We consider here two basic groups of methods for the analysis and assessment of the human factor in the NPP area and give some results from performed analyses as well. The human factor concerns human interaction with the designed equipment and the working environment, and takes into account human capabilities and limits. Within the qualitative methods for analysis of the human factor, we consider concepts and structural methods for classifying information connected with the human factor; emphasis is given to the HPES method for human factor analysis in NPPs. Methods for the quantitative assessment of human reliability are also considered. These methods allow probabilities to be assigned to the elements of the already structured information about human performance. This part includes an overview of classical methods for human reliability assessment (HRA, THERP) and of methods taking into account specific information about human capabilities and limits and about the man-machine interface (CHR, HEART, ATHEANA). Quantitative and qualitative results concerning the influence of the human factor on the occurrence of initiating events at the Kozloduy NPP are presented. (authors)

  16. Results of an interlaboratory comparison of analytical methods for contaminants of emerging concern in water.

    Science.gov (United States)

    Vanderford, Brett J; Drewes, Jörg E; Eaton, Andrew; Guo, Yingbo C; Haghani, Ali; Hoppe-Jones, Christiane; Schluesener, Michael P; Snyder, Shane A; Ternes, Thomas; Wood, Curtis J

    2014-01-07

    An evaluation of existing analytical methods used to measure contaminants of emerging concern (CECs) was performed through an interlaboratory comparison involving 25 research and commercial laboratories. In total, 52 methods were used in the single-blind study to determine method accuracy and comparability for 22 target compounds, including pharmaceuticals, personal care products, and steroid hormones, all at ng/L levels in surface and drinking water. Method biases varied widely by compound; caffeine, NP, OP, and triclosan had false positive rates >15%. In addition, some methods reported false positives for 17β-estradiol and 17α-ethynylestradiol in unspiked drinking water and deionized water, respectively, at levels higher than published predicted no-effect concentrations for these compounds in the environment. False negative rates were generally low; false results were attributed to contamination, misinterpretation of background interferences, and/or inappropriate setting of detection/quantification levels for analysis at low ng/L levels. The results of both comparisons were collectively assessed to identify the parameters that resulted in the best overall method performance. Liquid chromatography-tandem mass spectrometry coupled with the calibration technique of isotope dilution was able to accurately quantify most compounds, with an average bias of <10% for both matrices. These findings suggest that this method of analysis is suitable at environmentally relevant levels for most of the compounds studied. This work underscores the need for robust, standardized analytical methods for CECs to improve data quality, increase comparability between studies, and help reduce false positive and false negative rates.

  17. Methods and results of diuresis renography in infants and children

    Energy Technology Data Exchange (ETDEWEB)

    Kleinhans, E. (Klinik fuer Nuklearmedizin, RWTH Aachen (Germany)); Rohrmann, D. (Urologische Klinik, RWTH Aachen (Germany)); Stollbrink, C. (Paediatrische Klinik, RWTH Aachen (Germany)); Mertens, R. (Paediatrische Klinik, RWTH Aachen (Germany)); Jakse, G. (Urologische Klinik, RWTH Aachen (Germany)); Buell, U. (Klinik fuer Nuklearmedizin, RWTH Aachen (Germany))

    1994-02-01

    In infants and children with hydronephrosis, the decision-making process for distinguishing those instances of urinary tract dilatation that require surgical correction from those that do not is based in part on the findings of diuresis renography. Quantitative analysis of the renogram curve pattern is a well-established tool which, in addition, provides comparable results in follow-up studies. However, standardization of the method, including data analysis, does not yet exist. In this study, three parameters obtained by mathematical curve analysis were examined: the clearance half-time for diuretic response, the clearance within 5 minutes and the clearance within 16 minutes. The 16-minute clearance gave superior results in discriminating obstructive impairments of urine drainage from non-obstructive ones. Compared to the clearance half-time, the markedly shorter duration of the examination (16 minutes) is an additional benefit. (orig.)

  18. Application of Statistical Methods to Activation Analytical Results near the Limit of Detection

    DEFF Research Database (Denmark)

    Heydorn, Kaj; Wanscher, B.

    1978-01-01

    Reporting actual numbers instead of upper limits for analytical results at or below the detection limit may produce reliable data when these numbers are subjected to appropriate statistical processing. Particularly in radiometric methods, such as activation analysis, where individual standard deviations of analytical results may be estimated, improved discrimination may be based on the Analysis of Precision. Actual experimental results from a study of the concentrations of arsenic in human skin demonstrate the power of this principle.
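    A minimal sketch of the underlying idea (not the full Analysis of Precision): pool replicate results at or below the detection limit using their individual standard deviations, and check their consistency, with invented numbers:

```python
import numpy as np

x  = np.array([0.8, -0.3, 1.1, 0.4])   # reported values, may be negative
sd = np.array([0.5, 0.6, 0.5, 0.7])    # individual standard deviations

w = 1.0 / sd**2                        # inverse-variance weights
mean = np.sum(w * x) / np.sum(w)
sd_mean = np.sqrt(1.0 / np.sum(w))
# T compares the scatter with the stated uncertainties; under
# consistency it follows chi-squared with len(x) - 1 degrees of freedom.
T = np.sum(w * (x - mean) ** 2)
print(f"weighted mean = {mean:.3f} +/- {sd_mean:.3f}, T = {T:.2f}")
```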

  19. How the RNA isolation method can affect microRNA microarray results

    DEFF Research Database (Denmark)

    Podolska, Agnieszka; Kaczkowski, Bogumil; Litman, Thomas

    2011-01-01

    The quality of RNA is crucial in gene expression experiments. RNA degradation interferes in the measurement of gene expression, and in this context, microRNA quantification can lead to an incorrect estimation. In the present study, two different RNA isolation methods were used to perform microRNA microarray analysis on porcine brain tissue. One method is a phenol-guanidine isothiocyanate-based procedure that permits isolation of total RNA. The second method, miRVana™ microRNA isolation, is column based and recovers the small RNA fraction alone. We found that microarray analyses give different results that depend on the RNA fraction used, in particular because some microRNAs appear very sensitive to the RNA isolation method. We conclude that precautions need to be taken when comparing microarray studies based on RNA isolated with different methods.

  20. Three magnetic particles solid phase radioimmunoassay for T4: Comparison of their results with established methods

    International Nuclear Information System (INIS)

    Bashir, T.

    1996-01-01

    The introduction of solid phase separation techniques is an important improvement in radioimmunoassays and immunoradiometric assays. The magnetic particle solid phase method has additional advantages over others, as the separation is rapid and centrifugation is not required. Three types of magnetic particles have been studied in T4 RIA and their results have been compared with commercial kits and other established methods. (author). 4 refs, 9 figs, 2 tabs

  1. A homologous mapping method for three-dimensional reconstruction of protein networks reveals disease-associated mutations.

    Science.gov (United States)

    Huang, Sing-Han; Lo, Yu-Shu; Luo, Yong-Chun; Tseng, Yu-Yao; Yang, Jinn-Moon

    2018-03-19

    One of the crucial steps toward understanding the associations among molecular interactions, pathways, and diseases in a cell is to investigate detailed atomic protein-protein interactions (PPIs) in the structural interactome. Despite the availability of large-scale methods for analyzing PPI networks, these methods have often focused on PPI networks built from genome-scale data and/or known experimental PPIs, and they are unable to provide structurally resolved interaction residues and their conservation in PPI networks. Here, we reconstructed a human three-dimensional (3D) structural PPI network (hDiSNet) with detailed atomic binding models and disease-associated mutations by enhancing our PPI families and 3D-domain interologs from 60,618 structural complexes and a complete genome database with 6,352,363 protein sequences across 2274 species. hDiSNet is a scale-free network (γ = 2.05) consisting of 5177 proteins and 19,239 PPIs with 5843 mutations. These 19,239 structurally resolved PPIs not only expand the number of PPIs compared to the present structural PPI network, but also achieve higher agreement with gene ontology similarities and higher co-expression correlation than the 181,868 experimental PPIs recorded in public databases. Among the 5843 mutations, the 1653 and 790 mutations involved in interacting domains and contacting residues, respectively, are highly related to diseases. Our hDiSNet provides detailed atomic interactions for human diseases and their associated proteins with mutations. Our results show that disease-related mutations are often located at contacting residues forming hydrogen bonds or conserved in the PPI family. In addition, hDiSNet provides insights into the FGFR (EGFR)-MAPK pathway for interpreting the mechanisms of breast cancer and the ErbB signaling pathway in brain cancer. Our results demonstrate that hDiSNet can explore structure-based interaction insights for understanding the mechanisms of disease

  2. Numerical proceessing of radioimmunoassay results using logit-log transformation method

    International Nuclear Information System (INIS)

    Textoris, R.

    1983-01-01

    The mathematical model and algorithm for the numerical processing of radioimmunoassay results by the logit-log transformation method and by linear regression with weight factors are described. The limiting value of the curve at zero concentration is optimized with regard to the residual sum by an iterative method, through multiple repeats of the linear regression. Typical examples of the approximation of calibration curves are presented. The method proved suitable for all hitherto used RIA sets and is well suited for small computers with an internal memory of at least 8 Kbyte. (author)
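    A minimal sketch of the logit-log linearisation with invented calibration counts; the method in the abstract additionally applies weight factors and iteratively re-optimises the zero-concentration limit, which is omitted here:

```python
import numpy as np

conc = np.array([2.0, 5.0, 10.0, 25.0, 50.0, 100.0])        # standards
B    = np.array([8200., 6900., 5600., 3900., 2700., 1800.])  # bound counts
B0, N = 9500.0, 400.0          # zero-dose and non-specific binding

y = (B - N) / (B0 - N)         # bound fraction
logit = np.log(y / (1.0 - y))
slope, intercept = np.polyfit(np.log(conc), logit, 1)  # unweighted fit

B_u = 4500.0                   # counts for an unknown sample
y_u = (B_u - N) / (B0 - N)
x_u = np.exp((np.log(y_u / (1.0 - y_u)) - intercept) / slope)
print(f"estimated concentration: {x_u:.1f}")
```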

  3. A comparison of short-term dispersion estimates resulting from various atmospheric stability classification methods

    International Nuclear Information System (INIS)

    Mitchell, A.E. Jr.

    1982-01-01

    Four methods of classifying atmospheric stability are applied at four sites to make short-term (1-h) dispersion estimates from a ground-level source, based on a model consistent with U.S. Nuclear Regulatory Commission practice. The classification methods include vertical temperature gradient, the standard deviation of horizontal wind direction fluctuations (sigma theta), Pasquill-Turner, and a modified sigma theta which accounts for meander. Results indicate that modified sigma theta yields reasonable dispersion estimates compared to those produced using the vertical temperature gradient and Pasquill-Turner methods, and can be considered a potential economical alternative in establishing onsite monitoring programs. (author)
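    For illustration, a sketch of the sigma-theta classification step using commonly quoted EPA threshold values (assumed here; the modified method in the abstract further corrects for meander, which is not shown):

```python
def stability_class(sigma_theta_deg: float) -> str:
    """Map sigma-theta (degrees) to a Pasquill stability class A-F."""
    bounds = [(22.5, "A"), (17.5, "B"), (12.5, "C"), (7.5, "D"), (3.8, "E")]
    for threshold, cls in bounds:
        if sigma_theta_deg >= threshold:
            return cls
    return "F"

for s in (25.0, 15.0, 6.0, 2.0):
    print(s, "->", stability_class(s))
```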

  4. LOGICAL CONDITIONS ANALYSIS METHOD FOR DIAGNOSTIC TEST RESULTS DECODING APPLIED TO COMPETENCE ELEMENTS PROFICIENCY

    Directory of Open Access Journals (Sweden)

    V. I. Freyman

    2015-11-01

    Full Text Available Subject of Research. Features of the representation of education results in competence-based educational programs are analyzed. The importance of decoding test results and of estimating proficiency in the elements and components of the discipline-related parts of competences is shown. The purpose and objectives of the research are formulated. Methods. The paper applies methods of mathematical logic, Boolean algebra, and parametric analysis of the results of complex diagnostic tests that control proficiency in discipline competence elements. Results. A method of logical conditions analysis is created. It makes it possible to formulate logical conditions for determining the proficiency of each discipline competence element controlled by a complex diagnostic test. The normalized test result is divided into non-overlapping zones, and a logical condition about the proficiency of the controlled elements is formulated for each of them. Summarized characteristics for the test result zones are given. An example of forming logical conditions for a diagnostic test with preset features is provided. Practical Relevance. The proposed method of logical conditions analysis is applied in the decoding algorithm of proficiency test diagnosis for discipline competence elements. It makes it possible to automate the search for elements with insufficient proficiency, and it is also usable for estimating the education results of a discipline or a component of a competence-based educational program.

  5. Short overview of PSA quantification methods, pitfalls on the road from approximate to exact results

    International Nuclear Information System (INIS)

    Banov, Reni; Simic, Zdenko; Sterc, Davor

    2014-01-01

    Over time, Probabilistic Safety Assessment (PSA) models have become an invaluable companion in the identification and understanding of key nuclear power plant (NPP) vulnerabilities. PSA is an effective tool for this purpose, as it assists plant management in targeting resources where the largest benefit for plant safety can be obtained. PSA has quickly become an established technique for numerically quantifying risk measures in nuclear power plants. As the complexity of PSA models increases, the computational approaches become more or less feasible. The various computational approaches can basically be classified in two major groups: approximate and exact (BDD-based) methods. Recently, modern commercially available PSA tools have started to provide both methods for PSA model quantification. Although both methods are available in proven PSA tools, they must still be used carefully, since there are many pitfalls that can lead to wrong conclusions and prevent efficient use of the PSA tool. For example, typical pitfalls involve using a higher-precision approximation method and getting a less precise result, or mixing minimal cut sets and prime implicants in the exact computation method. The exact methods are sensitive to the selected computational paths, in which case a simple human-assisted rearrangement may help and even switch the computation from infeasible to feasible. Further improvements to the exact methods are possible and desirable, which opens space for new research. In this paper we show how these pitfalls may be detected and how carefully one must act, especially when working with large PSA models. (authors)
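    The approximate-versus-exact gap the abstract refers to can be seen on a toy fault tree; the sketch compares the rare-event approximation, the min-cut upper bound and exact inclusion-exclusion for three invented minimal cut sets over independent basic events:

```python
from itertools import combinations
from math import prod

cut_sets = [{"A", "B"}, {"A", "C"}, {"D"}]      # minimal cut sets
p = {"A": 0.1, "B": 0.2, "C": 0.3, "D": 0.05}   # basic event probabilities

def p_and(sets):
    """P(all given cut sets occur) assuming independent basic events."""
    return prod(p[e] for e in set().union(*sets))

rare_event = sum(p_and([c]) for c in cut_sets)
mcub = 1.0 - prod(1.0 - p_and([c]) for c in cut_sets)
exact = sum(
    (-1) ** (k + 1) * sum(p_and(combo) for combo in combinations(cut_sets, k))
    for k in range(1, len(cut_sets) + 1)
)
# both approximations overestimate the exact union probability here
print(rare_event, mcub, exact)
```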

  6. Ancient dissolved methane in inland waters at low concentrations revealed by a new collection method for radiocarbon (14C) analysis

    Science.gov (United States)

    Dean, Joshua F.; Billett, Michael F.; Murray, Callum; Garnett, Mark H.

    2017-04-01

    Methane (CH4) is a powerful greenhouse gas and is released to the atmosphere from freshwater systems in numerous biomes globally. Radiocarbon (14C) analysis of methane can provide unique information about its age, source and rate of cycling in natural environments. Methane is often released from aquatic sediments in bubbles (ebullition), but dissolved methane is also present in lakes and streams at lower concentrations, and may not be of the same age or source. Obtaining sufficient non-ebullitive aquatic methane for 14C analysis remains a major technical challenge. Previous studies have shown that freshwater methane, in both dissolved and ebullitive form, can be significantly older than other forms of aquatic carbon (C), and it is therefore important to characterise this part of the terrestrial C balance. We present a novel method to capture sufficient amounts of dissolved methane from freshwater environments for 14C analysis by circulating water across a hydrophobic, gas-permeable membrane and collecting the methane in a large collapsible vessel. The results of laboratory and field tests show that reliable dissolved δ13CH4 and 14CH4 samples can be readily collected over short time periods (~4 to 24 hours), at relatively low cost and from a variety of surface water types. The initial results further support previous findings that dissolved methane can be significantly older than other forms of aquatic C, especially in organic-rich catchments, and is currently unaccounted for in many terrestrial C balances and models. This method is suitable for use in remote locations, and could potentially be used to detect the leakage of unique 14CH4 signatures from point sources into waterways, e.g. coal seam gas and landfill gas.

  7. Methods uncovering usability issues in medication-related alerting functions: results from a systematic review.

    Science.gov (United States)

    Marcilly, Romaric; Vasseur, Francis; Ammenwerth, Elske; Beuscart-Zephir, Marie-Catherine

    2014-01-01

    This paper aims to list the methods used to evaluate the usability of medication-related alerting functions and to determine what types of usability issues those methods can detect. A sub-analysis of data from this systematic review was performed. The methods applied in the included papers were collected. The included papers were then sorted into four types of evaluation: "expert evaluation", "user-testing/simulation", "on-site observation" and "impact studies". The types of usability issues (usability flaws, usage problems and negative outcomes) uncovered by those evaluations were analyzed. Results show that a large set of methods is used. The largest proportion of papers uses "on-site observation". This is the only evaluation type that detects every kind of usability flaw, usage problem and outcome. It is somewhat surprising that, in a usability systematic review, most of the included papers use a method that is not often presented as a usability method. The results are discussed with respect to the opportunity to feed usability information collected after the implementation of a technology back into the design process, i.e. before the implementation of future systems.

  8. A holographic method for investigating cylindrical symmetry plasmas resulting from electric discharges

    International Nuclear Information System (INIS)

    Rosu, N.; Ralea, M.; Foca, M.; Iova, I.

    1992-01-01

    A new method based on holographic interferometry in real time with reference fringes for diagnosing gas electric discharges in cylindrical symmetry tubes is presented. A method for obtaining and quantitatively investigating interferograms obtained with a video camera is described. By studying the resulting images frame by frame and introducing the measurements into an adequate computer programme one gets a graphical recording of the radial distribution of the charged particle concentration in the plasma in any region of the tube at a given time, as well as their axial distribution. The real time evolution of certain phenomena occurring in the discharge tube can also be determined by this non-destructive method. The method is used for electric discharges in Ar at average pressures in a discharge tube with hollow cathode effect. (Author)

  9. MULTICRITERIA METHODS IN PERFORMING COMPANIES’ RESULTS USING ELECTRONIC RECRUITING, CORPORATE COMMUNICATION AND FINANCIAL RATIOS

    Directory of Open Access Journals (Sweden)

    Ivana Bilić

    2011-02-01

    Full Text Available Human resources represent one of the most important company resources, responsible for creating a company's competitive advantage. In the search for the most valuable people, companies use different methods. Lately, one of the growing methods is electronic recruiting, used not only as a recruitment tool but also as a means of external communication. Additionally, in the process of corporate communication, companies nowadays use electronic corporate communication as the easiest, cheapest and simplest form of business communication. The aim of this paper is to investigate the relationship between three groups of criteria: the main characteristics of the electronic recruiting performed, corporate communication, and selected financial performance measures. The selected companies were ranked separately by each group of criteria using the multicriteria decision-making method PROMETHEE II. The main idea is to examine whether companies that are the highest performers by one group of criteria obtain similar results with respect to the other groups of criteria.
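    For readers unfamiliar with PROMETHEE II, the following minimal Python sketch (ours, with invented data, a V-shape preference function and assumed weights) computes the net outranking flows that produce the complete ranking; the paper's actual criteria and preference functions are not reproduced here:

```python
import numpy as np

# Toy decision matrix: rows = companies, columns = criteria (higher is better).
X = np.array([[7.0, 0.12, 3.0],
              [5.0, 0.30, 4.0],
              [9.0, 0.05, 2.0]])
weights = np.array([0.5, 0.3, 0.2])   # assumed criterion weights

def preference(d, q=0.0, p=1.0):
    """V-shape preference function with indifference q and preference p."""
    return np.clip((d - q) / (p - q), 0.0, 1.0)

n = X.shape[0]
pi = np.zeros((n, n))                 # aggregated preference indices
for a in range(n):
    for b in range(n):
        if a == b:
            continue
        d = X[a] - X[b]               # pairwise differences per criterion
        pi[a, b] = np.sum(weights * preference(d))

phi_plus = pi.sum(axis=1) / (n - 1)   # positive (leaving) flow
phi_minus = pi.sum(axis=0) / (n - 1)  # negative (entering) flow
phi_net = phi_plus - phi_minus        # PROMETHEE II net flow -> ranking
print(np.argsort(-phi_net))           # company indices, best first
```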

  10. Integrating Quantitative and Qualitative Results in Health Science Mixed Methods Research Through Joint Displays

    Science.gov (United States)

    Guetterman, Timothy C.; Fetters, Michael D.; Creswell, John W.

    2015-01-01

    PURPOSE Mixed methods research is becoming an important methodology to investigate complex health-related topics, yet the meaningful integration of qualitative and quantitative data remains elusive and needs further development. A promising innovation to facilitate integration is the use of visual joint displays that bring data together visually to draw out new insights. The purpose of this study was to identify exemplar joint displays by analyzing the various types of joint displays being used in published articles. METHODS We searched for empirical articles that included joint displays in 3 journals that publish state-of-the-art mixed methods research. We analyzed each of 19 identified joint displays to extract the type of display, mixed methods design, purpose, rationale, qualitative and quantitative data sources, integration approaches, and analytic strategies. Our analysis focused on what each display communicated and its representation of mixed methods analysis. RESULTS The most prevalent types of joint displays were statistics-by-themes and side-by-side comparisons. Innovative joint displays connected findings to theoretical frameworks or recommendations. Researchers used joint displays for convergent, explanatory sequential, exploratory sequential, and intervention designs. We identified exemplars for each of these designs by analyzing the inferences gained through using the joint display. Exemplars represented mixed methods integration, presented integrated results, and yielded new insights. CONCLUSIONS Joint displays appear to provide a structure to discuss the integrated analysis and assist both researchers and readers in understanding how mixed methods provides new insights. We encourage researchers to use joint displays to integrate and represent mixed methods analysis and discuss their value. PMID:26553895

  11. Tank 48H Waste Composition and Results of Investigation of Analytical Methods

    Energy Technology Data Exchange (ETDEWEB)

    Walker, D.D. [Westinghouse Savannah River Company, AIKEN, SC (United States)

    1997-04-02

    This report serves two purposes. First, it documents the analytical results of Tank 48H samples taken between April and August 1996. Second, it describes investigations of the precision of the sampling and analytical methods used on the Tank 48H samples.

  12. Results of clinical approbation of new local treatment method in the complex therapy of inflammatory parodontium diseases

    Directory of Open Access Journals (Sweden)

    Yu. G. Romanova

    2017-08-01

    Full Text Available Treatment and prevention of inflammatory diseases of the parodontium are among the most difficult problems in stomatology today. Purpose of research: to estimate the clinical efficiency of the combined local application of the newly developed agent apigel for oral cavity care and low-frequency electromagnetic field magnetotherapy in the treatment of inflammatory diseases of the parodontium. Materials and methods: 46 patients with chronic generalized catarrhal gingivitis and chronic generalized periodontitis of the 1st degree were included in the study. Patients were divided into 2 groups depending on treatment management: basic (n = 23) and control (n = 23). Conventional treatment with local use of a dental gel with camomile was used in the control group. Patients of the basic group were treated with combined local application of apigel and magnetotherapy. Efficiency was estimated with clinical, laboratory, microbiological and functional (ultrasonic Doppler examination) methods. Results: The application of apigel and a pulsating electromagnetic field in the complex treatment of patients with chronic generalized periodontitis caused positive changes in clinical symptoms and in the condition of parodontal tissues, accompanied by a decline of the hygienic and parodontal indexes. Compared with patients who received traditional anti-inflammatory therapy, patients treated with local application of apigel and magnetotherapy had a lower incidence of edema. It was revealed that the decrease of pain correlated with improvement of the hygienic condition of the oral cavity and promoted prevention of bacterial contamination of damaged mucous membranes. Estimation of microvascular blood flow by ultrasonic Doppler flowmetry revealed a more rapid normalization of the volume and linear systolic blood flow velocities in the parodontal tissues when the new complex local method was used. Conclusions: Effect of the developed local agent in patients

  13. [Do different interpretative methods used for evaluation of checkerboard synergy test affect the results?].

    Science.gov (United States)

    Ozseven, Ayşe Gül; Sesli Çetin, Emel; Ozseven, Levent

    2012-07-01

    In recent years, owing to the prevalence of multi-drug resistant nosocomial bacteria, combination therapies are more frequently applied, so there is a greater need to investigate the in vitro activity of drug combinations against multi-drug resistant bacteria. Checkerboard synergy testing is among the most widely used standard techniques for determining the activity of antibiotic combinations. It is based on microdilution susceptibility testing of antibiotic combinations. Although this test has a standardised procedure, there are many different methods for interpreting the results. In many previous studies carried out with multi-drug resistant bacteria, different rates of synergy have been reported for various antibiotic combinations using the checkerboard technique. These differences might be attributed to different features of the strains. However, different synergy rates detected by the checkerboard method have also been reported in studies using the same drug combinations and the same bacterial species. It was thought that these differences in synergy rates might be due to the different methods used to interpret the synergy test results. In recent years, multi-drug resistant Acinetobacter baumannii has been the most commonly encountered nosocomial pathogen, especially in intensive-care units. For this reason, multi-drug resistant A.baumannii has been the subject of a considerable amount of research on antimicrobial combinations. In the present study, the in vitro activities of combinations frequently preferred in A.baumannii infections, imipenem plus ampicillin/sulbactam and meropenem plus ampicillin/sulbactam, were tested by the checkerboard synergy method against 34 multi-drug resistant A.baumannii isolates. Minimum inhibitory concentration (MIC) values for imipenem, meropenem and ampicillin/sulbactam were determined by the broth microdilution method. Subsequently the activity of the two combinations was tested in the dilution range of 4 x MIC and 0.03 x MIC in
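    Although the paper centres on how interpretation rules differ, the core quantity in checkerboard analysis is the fractional inhibitory concentration index (FICI). A minimal sketch with hypothetical MIC values follows; the thresholds shown (FICI <= 0.5 for synergy, > 4 for antagonism) are one widely used convention, and exactly this kind of convention choice is what such studies compare:

```python
def fici(mic_a_alone, mic_b_alone, mic_a_combo, mic_b_combo):
    """Fractional inhibitory concentration index for a drug pair."""
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

# Hypothetical MICs (mg/L) for one isolate: imipenem + ampicillin/sulbactam.
index = fici(mic_a_alone=32.0, mic_b_alone=64.0,
             mic_a_combo=4.0, mic_b_combo=8.0)

# One widely used (but not the only) interpretation convention:
if index <= 0.5:
    verdict = "synergy"
elif index <= 4.0:
    verdict = "no interaction / indifference"
else:
    verdict = "antagonism"
print(f"FICI = {index:.2f}: {verdict}")
```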

  14. A semantics-based method for clustering of Chinese web search results

    Science.gov (United States)

    Zhang, Hui; Wang, Deqing; Wang, Li; Bi, Zhuming; Chen, Yong

    2014-01-01

    Information explosion is a critical challenge to the development of modern information systems. In particular, when an information system operates over the Internet, the amount of information on the web has been increasing exponentially and rapidly. Search engines, such as Google and Baidu, are essential tools for people to find information on the Internet. Valuable information, however, is still likely to be submerged in the ocean of search results from those tools. By automatically clustering the results into different groups based on subjects, a search engine with a clustering feature allows users to select the most relevant results quickly. In this paper, we propose an online semantics-based method to cluster Chinese web search results. First, we employ the generalised suffix tree to extract the longest common substrings (LCSs) from search snippets. Second, we use HowNet to calculate the similarities of the words derived from the LCSs, and extract the most representative features by constructing the vocabulary chain. Third, we construct a vector of text features and calculate the snippets' semantic similarities. Finally, we improve the Chameleon algorithm to cluster the snippets. Extensive experimental results show that the proposed algorithm outperforms the suffix tree clustering method and other traditional clustering methods.
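    As a rough stand-in for the pipeline described (which we do not reproduce), the sketch below clusters toy snippets with TF-IDF cosine similarity and agglomerative clustering in place of the paper's HowNet-based features and improved Chameleon algorithm; a recent scikit-learn is assumed:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

# Toy snippets standing in for search-result snippets.
snippets = [
    "apple iphone review and price",
    "iphone price drop announced by apple",
    "banana bread recipe with walnuts",
    "easy banana bread baking recipe",
]

# TF-IDF vectors as a stand-in for the paper's semantic feature vectors.
X = TfidfVectorizer().fit_transform(snippets).toarray()

# Agglomerative clustering on cosine distance as a stand-in for Chameleon.
labels = AgglomerativeClustering(
    n_clusters=2, metric="cosine", linkage="average"
).fit_predict(X)
print(labels)  # e.g. [0 0 1 1]
```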

  15. Aircrew Exposure To Cosmic Radiation Evaluated By Means Of Several Methods; Results Obtained In 2006

    International Nuclear Information System (INIS)

    Ploc, Ondrej; Spurny, Frantisek; Jadrnickova, Iva; Turek, Karel

    2008-01-01

    Routine evaluation of aircraft crew exposure to cosmic radiation in the Czech Republic is performed by means of a calculation method. Measurements onboard aircraft serve as a control tool for the routine method, as well as an opportunity to compare results obtained by several methods. The following methods were used in 2006: (1) a mobile dosimetry unit (MDU) of type Liulin, a spectrometer of the energy deposited in a Si-detector; (2) two types of LET spectrometers based on chemically etched track detectors (TED); (3) two types of thermoluminescent detectors; and (4) two calculation methods. The MDU currently represents one of the most reliable instruments for evaluating aircraft crew exposure to cosmic radiation. It is an active device which measures total energy depositions (Edep) in the semiconductor unit and, after appropriate calibration, is able to give separate estimates for the non-neutron and neutron-like components of H*(10). This contribution consists mostly of results acquired by means of this equipment; measurements with passive detectors and calculations are mentioned for comparison. Reasonably good agreement of all data sets was found.

  16. Comments on Brodsky's statistical methods for evaluating epidemiological results, and reply by Brodsky, A

    International Nuclear Information System (INIS)

    Frome, E.L.; Khare, M.

    1980-01-01

    Brodsky's paper 'A Statistical Method for Testing Epidemiological Results, as applied to the Hanford Worker Population', (Health Phys., 36, 611-628, 1979) proposed two test statistics for use in comparing the survival experience of a group of employees and controls. This letter states that both of the test statistics were computed using incorrect formulas and concludes that the results obtained using these statistics may also be incorrect. In his reply Brodsky concurs with the comments on the proper formulation of estimates of pooled standard errors in constructing test statistics but believes that the erroneous formulation does not invalidate the major points, results and discussions of his paper. (author)

  17. On Calculation Methods and Results for Straight Cylindrical Roller Bearing Deflection, Stiffness, and Stress

    Science.gov (United States)

    Krantz, Timothy L.

    2011-01-01

    The purpose of this study was to assess some calculation methods for quantifying the relationships of bearing geometry, material properties, load, deflection, stiffness, and stress. The scope of the work was limited to two-dimensional modeling of straight cylindrical roller bearings. Preparations for studies of dynamic response of bearings with damaged surfaces motivated this work. Studies were selected to exercise and build confidence in the numerical tools. Three calculation methods were used in this work. Two of the methods were numerical solutions of the Hertz contact approach. The third method used was a combined finite element surface integral method. Example calculations were done for a single roller loaded between an inner and outer raceway for code verification. Next, a bearing with 13 rollers and all-steel construction was used as an example to do additional code verification, including an assessment of the leading order of accuracy of the finite element and surface integral method. Results from that study show that the method is at least first-order accurate. Those results also show that the contact grid refinement has a more significant influence on precision as compared to the finite element grid refinement. To explore the influence of material properties, the 13-roller bearing was modeled as made from Nitinol 60, a material with very different properties from steel and showing some potential for bearing applications. The codes were exercised to compare contact areas and stress levels for steel and Nitinol 60 bearings operating at equivalent power density. As a step toward modeling the dynamic response of bearings having surface damage, static analyses were completed to simulate a bearing with a spall or similar damage.
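    As an illustration of the closed-form line-contact relations that numerical Hertz solutions of this kind are commonly checked against, here is a small sketch (ours, with invented steel-bearing numbers, not code or data from the study):

```python
import math

# Illustrative inputs (steel roller on steel inner raceway) -- all assumed.
F_per_len = 2.0e5      # load per unit length, N/m
R_roller = 0.007       # roller radius, m
R_race = 0.030         # inner raceway radius, m
E, nu = 210e9, 0.3     # steel elastic constants

# Effective modulus and effective radius for two convex steel bodies.
E_star = 1.0 / (2.0 * (1.0 - nu**2) / E)
R_eff = 1.0 / (1.0 / R_roller + 1.0 / R_race)

# Hertzian line contact: half-width b and peak contact pressure p_max.
b = math.sqrt(4.0 * F_per_len * R_eff / (math.pi * E_star))
p_max = 2.0 * F_per_len / (math.pi * b)
print(f"half-width b = {b*1e6:.1f} um, p_max = {p_max/1e9:.2f} GPa")
```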

  18. Development and application of a new deterministic method for calculating computer model result uncertainties

    International Nuclear Information System (INIS)

    Maerker, R.E.; Worley, B.A.

    1989-01-01

    Interest in research into the field of uncertainty analysis has recently been stimulated by a need, in high-level waste repository design assessment, for uncertainty information in the form of response complementary cumulative distribution functions (CCDFs) to show compliance with regulatory requirements. The solution to this problem must rely on the analysis of computer code models, which, however, employ parameters that can have large uncertainties. The motivation for the research presented in this paper is the search for a deterministic uncertainty analysis approach that could serve as an improvement over methods that make exclusive use of statistical techniques. A deterministic uncertainty analysis (DUA) approach based on the use of first derivative information is the method studied in the present work. The method has been applied to a high-level nuclear waste repository problem involving use of the codes ORIGEN2, SAS, and BRINETEMP in series, and the resulting CDF of a BRINETEMP result of interest is compared with that obtained through a completely statistical analysis.
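    The general flavour of a first-derivative (deterministic) uncertainty analysis can be sketched as follows; this is our own toy example, and the response function and all numbers are invented stand-ins for the ORIGEN2/SAS/BRINETEMP chain:

```python
import numpy as np

# Toy model response standing in for a code result of interest.
def response(params):
    k, q, t = params
    return q * np.exp(-k * t)

p0 = np.array([0.05, 120.0, 10.0])      # nominal parameter values (assumed)
sigma = np.array([0.005, 12.0, 0.5])    # parameter standard deviations (assumed)

# First-derivative sensitivities by central finite differences.
grad = np.empty_like(p0)
for i in range(len(p0)):
    dp = 1e-6 * max(abs(p0[i]), 1.0)
    hi, lo = p0.copy(), p0.copy()
    hi[i] += dp
    lo[i] -= dp
    grad[i] = (response(hi) - response(lo)) / (2.0 * dp)

# First-order (linear) variance propagation for independent parameters.
var_R = np.sum((grad * sigma) ** 2)
print(f"R = {response(p0):.3f} +/- {np.sqrt(var_R):.3f} (1 sigma, first order)")
```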

  19. Analytical method and result of radiation exposure for depressurization accident of HTTR

    International Nuclear Information System (INIS)

    Sawa, K.; Shiozawa, S.; Mikami, H.

    1990-01-01

    The Japan Atomic Energy Research Institute (JAERI) is now proceeding with the construction design of the High Temperature Engineering Test Reactor (HTTR). Since the HTTR has some characteristics different from LWRs, the analytical methods of radiation exposure in accidents established for LWRs cannot be applied directly. This paper describes the analytical method of radiation exposure developed by JAERI for the depressurization accident, which is the severest accident with respect to radiation exposure among the design basis accidents of the HTTR. The result is also described in this paper.

  20. Methods and optical fibers that decrease pulse degradation resulting from random chromatic dispersion

    Science.gov (United States)

    Chertkov, Michael; Gabitov, Ildar

    2004-03-02

    The present invention provides methods and optical fibers for periodically pinning the actual (random) accumulated chromatic dispersion of an optical fiber to the predicted accumulated dispersion of the fiber through relatively simple modifications of fiber-optic manufacturing methods or retrofitting of existing fibers. If the pinning occurs with sufficient frequency (at a distance less than or equal to a correlation scale), pulse degradation resulting from random chromatic dispersion is minimized. Alternatively, pinning may occur quasi-periodically, i.e., the pinning distance is distributed between approximately zero and approximately two to three times the correlation scale.
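    As a toy illustration of the idea (ours, not from the patent; all parameters invented): the accumulated dispersion drifts as a random walk around its design value, and periodic pinning resets it to the predicted curve, bounding the deviation:

```python
import random

random.seed(1)
n_spans = 60            # fiber spans (assumed)
d_nominal = 0.1         # design dispersion accumulated per span (assumed units)
sigma = 0.05            # random per-span dispersion error (assumed)
pin_every = 10          # pinning period in spans (<= correlation scale)

acc_free, acc_pinned, predicted = 0.0, 0.0, 0.0
worst_free, worst_pinned = 0.0, 0.0
for span in range(1, n_spans + 1):
    err = random.gauss(0.0, sigma)
    predicted += d_nominal
    acc_free += d_nominal + err
    acc_pinned += d_nominal + err
    if span % pin_every == 0:
        acc_pinned = predicted          # pin to the predicted accumulation
    worst_free = max(worst_free, abs(acc_free - predicted))
    worst_pinned = max(worst_pinned, abs(acc_pinned - predicted))

print(f"max deviation without pinning: {worst_free:.2f}")
print(f"max deviation with pinning:    {worst_pinned:.2f}")
```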

  1. Testing the ISP method with the PARIO device: Accuracy of results and influence of homogenization technique

    Science.gov (United States)

    Durner, Wolfgang; Huber, Magdalena; Yangxu, Li; Steins, Andi; Pertassek, Thomas; Göttlein, Axel; Iden, Sascha C.; von Unold, Georg

    2017-04-01

    The particle-size distribution (PSD) is one of the main properties of soils. To determine the proportions of the fine fractions silt and clay, sedimentation experiments are used; most common are the Pipette and Hydrometer methods. Both need manual sampling at specific times, and are thus time-demanding and rely on experienced operators. Durner et al. (Durner, W., S.C. Iden, and G. von Unold (2017): The integral suspension pressure method (ISP) for precise particle-size analysis by gravitational sedimentation, Water Resources Research, doi:10.1002/2016WR019830) recently developed the integral suspension pressure (ISP) method, which is implemented in the METER Group device PARIOTM. This new method estimates continuous PSDs from sedimentation experiments by recording the temporal evolution of the suspension pressure at a certain measurement depth in a sedimentation cylinder. It requires no manual interaction after the start and thus no specialized training of the lab personnel. The aim of this study was to test the precision and accuracy of the new method with a variety of materials, answering the following research questions: (1) Are the results obtained by PARIO reliable and stable? (2) Are the results affected by the initial mixing technique used to homogenize the suspension, or by the presence of sand in the experiment? (3) Are the results identical to those obtained with the Pipette method as reference method? The experiments were performed with a pure quartz silt material and four real soil materials. PARIO measurements were done repetitively on the same samples in a temperature-controlled lab to characterize the repeatability of the measurements. Subsequently, the samples were investigated by the Pipette method to validate the results. We found that the statistical error for the silt fraction from replicate and repetitive measurements was in the range of 1% for the quartz material to 3% for soil materials. Since the sand fractions, as in any sedimentation method, must
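    All sedimentation-based PSD methods, the ISP method included, rest on Stokes' law. The sketch below (ours, with typical values for quartz in water at about 20 degrees C) computes the time for a particle of a given diameter to settle past a measurement depth:

```python
import math

def settling_time(diameter_m, depth_m, delta_rho=1650.0, mu=1.0e-3, g=9.81):
    """Time (s) for a sphere to settle to depth_m by Stokes' law.

    delta_rho: particle minus fluid density (kg/m^3), quartz in water assumed
    mu: dynamic viscosity of water (Pa*s) at ~20 degC
    """
    v = delta_rho * g * diameter_m**2 / (18.0 * mu)   # Stokes velocity
    return depth_m / v

# Example: a 2 um particle (clay/silt boundary) settling past 20 cm depth.
t = settling_time(2e-6, 0.20)
print(f"settling time: {t/3600.0:.1f} hours")
```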

  2. Influence of Specimen Preparation and Test Methods on the Flexural Strength Results of Monolithic Zirconia Materials.

    Science.gov (United States)

    Schatz, Christine; Strickstrock, Monika; Roos, Malgorzata; Edelhoff, Daniel; Eichberger, Marlis; Zylla, Isabella-Maria; Stawarczyk, Bogna

    2016-03-09

    The aim of this work was to evaluate the influence of specimen preparation and test method on the flexural strength results of monolithic zirconia. Different monolithic zirconia materials (Ceramill Zolid (Amann Girrbach, Koblach, Austria), Zenostar ZrTranslucent (Wieland Dental, Pforzheim, Germany), and DD Bio zx² (Dental Direkt, Spenge, Germany)) were tested with three different methods: 3-point, 4-point, and biaxial flexural strength. Additionally, different specimen preparation methods were applied: either dry polishing before sintering or wet polishing after sintering. Each subgroup included 40 specimens. The surface roughness was assessed using scanning electron microscopy (SEM) and a profilometer, whereas monoclinic phase transformation was investigated with X-ray diffraction. The data were analyzed using a three-way Analysis of Variance (ANOVA) with respect to the three factors: zirconia, specimen preparation, and test method. One-way ANOVA was conducted for the test method and zirconia factors within the combination of the two other factors. A 2-parameter Weibull distribution assumption was applied to analyze the reliability under different testing conditions. In general, the 4-point test method yielded the lowest flexural strength values; specimens polished after sintering showed higher strength values than those prepared before sintering. The Weibull moduli ranged from 5.1 to 16.5. Specimens polished before sintering showed higher surface roughness values than specimens polished after sintering. In contrast, no strong impact of the polishing procedures on the monoclinic surface layer was observed. No impact of zirconia material on flexural strength was found. The test method and the preparation method significantly influenced the flexural strength values.
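    Since the paper reports 2-parameter Weibull statistics, the following sketch (ours, with invented strength data) shows how a Weibull modulus is commonly estimated by linearised regression with median-rank probabilities:

```python
import numpy as np

# Hypothetical flexural strengths (MPa) for one subgroup.
strengths = np.sort(np.array([812., 845., 860., 905., 922., 951., 980., 1010.]))
n = len(strengths)

# Median-rank failure probabilities and linearised Weibull coordinates:
# ln(-ln(1-F)) = m*ln(sigma) - m*ln(sigma0)
F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)
x = np.log(strengths)
y = np.log(-np.log(1.0 - F))

m, c = np.polyfit(x, y, 1)            # slope m is the Weibull modulus
sigma0 = np.exp(-c / m)               # characteristic strength
print(f"Weibull modulus m = {m:.1f}, characteristic strength = {sigma0:.0f} MPa")
```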

  3. Analysis of the robustness of network-based disease-gene prioritization methods reveals redundancy in the human interactome and functional diversity of disease-genes.

    Directory of Open Access Journals (Sweden)

    Emre Guney

    Full Text Available Complex biological systems usually pose a trade-off between robustness and fragility, where a small number of perturbations can substantially disrupt the system. Although biological systems are robust against changes in many external and internal conditions, even a single mutation can perturb the system substantially, giving rise to a pathophenotype. Recent advances in identifying and analyzing the sequence variations underlying human disorders help to build a systemic view of the mechanisms behind various disease phenotypes. Network-based disease-gene prioritization methods rank the relevance of genes in a disease under the hypothesis that genes whose proteins interact with each other tend to exhibit similar phenotypes. In this study, we have tested the robustness of several network-based disease-gene prioritization methods with respect to perturbations of the system, using various disease phenotypes from the Online Mendelian Inheritance in Man database. These perturbations were introduced either in the protein-protein interaction network or in the set of known disease-gene associations. As network-based disease-gene prioritization methods are based on the connectivity between known disease-gene associations, we further used these methods to categorize the pathophenotypes with respect to the recoverability of hidden disease-genes. Our results suggest that, in general, disease-genes are connected through multiple paths in the human interactome. Moreover, even when these paths are disturbed, network-based prioritization can reveal hidden disease-gene associations in some pathophenotypes, such as breast cancer, cardiomyopathy, diabetes, leukemia, Parkinson disease and obesity, to a greater extent than in the rest of the pathophenotypes tested in this study. Gene Ontology (GO) analysis highlighted the role of functional diversity for such diseases.
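    As one canonical example of the class of methods evaluated (the study tests several; this particular scheme, random walk with restart, is our illustrative choice, not necessarily one of theirs), gene relevance can be scored by diffusing from known disease genes over a toy interactome:

```python
import numpy as np

# Toy protein-protein interaction network (symmetric adjacency matrix).
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 0, 0],
              [0, 1, 0, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
W = A / A.sum(axis=0)                 # column-normalised transition matrix

seeds = np.array([1., 0, 0, 0, 0])    # known disease gene(s) as restart set
p = seeds.copy()
r = 0.3                               # restart probability (assumed)

# Random walk with restart: iterate to convergence.
for _ in range(100):
    p_new = (1 - r) * W @ p + r * seeds
    if np.abs(p_new - p).sum() < 1e-10:
        break
    p = p_new

print(np.argsort(-p))  # genes ranked by proximity to the seed set
```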

  4. Improved Method of Detection Falsification Results the Digital Image in Conditions of Attacks

    Directory of Open Access Journals (Sweden)

    Kobozeva A.A.

    2016-08-01

    Full Text Available The modern level of development of information technologies has made hitherto unheard-of unauthorized modifications of digital content easy. At the moment, a very important question is the effective expert examination of the authenticity of digital images, video and audio, and the development of methods for identification and localization of violations of their integrity when these contents are used for purposes other than entertainment. The present paper deals with the improvement of a method for detecting the results of cloning in digital images, one of the falsification tools most frequently used and implemented in all modern graphics editors. The method is intended to detect the clone and pre-image areas even under additional disturbing influences applied to the image after the cloning operation to "mask" its results, which complicates the search process. The improvement is aimed at reducing the number of "false alarms", where a clone/pre-image area is detected in an original image, or where the localization of the identified areas does not correspond to the real clone and pre-image. The proposed improvement, based on the analysis of per-pixel image blocks of different sizes with the least difference from each other, makes efficient functioning of the method possible regardless of the specifics of the analyzed digital image.
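    The underlying idea of clone detection by comparing image blocks can be conveyed by a naive sketch (ours; real methods, including the improved one described, add robust features and filtering to suppress false alarms):

```python
import numpy as np

def most_similar_blocks(img, b=8):
    """Return the pair of non-overlapping b x b blocks with least difference."""
    h, w = img.shape
    blocks = []
    for y in range(0, h - b + 1, b):
        for x in range(0, w - b + 1, b):
            blocks.append(((y, x), img[y:y+b, x:x+b].astype(float)))
    best, best_pair = np.inf, None
    for i in range(len(blocks)):
        for j in range(i + 1, len(blocks)):
            (p1, b1), (p2, b2) = blocks[i], blocks[j]
            d = np.mean((b1 - b2) ** 2)   # mean squared difference
            if d < best:
                best, best_pair = d, (p1, p2)
    return best_pair, best

# Toy image with a copied region: block at (0, 8) clones block at (0, 0).
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(16, 24))
img[0:8, 8:16] = img[0:8, 0:8]        # simulate cloning

pair, score = most_similar_blocks(img)
print(pair, score)                     # ((0, 0), (0, 8)) with score 0.0
```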

  5. A statistical method for testing epidemiological results, as applied to the Hanford worker population

    International Nuclear Information System (INIS)

    Brodsky, A.

    1979-01-01

    Some recent reports of Mancuso, Stewart and Kneale claim findings of radiation-produced cancer in the Hanford worker population. These claims are based on statistical computations that use small differences in accumulated exposures between groups dying of cancer and groups dying of other causes; actual mortality and longevity were not reported. This paper presents a statistical method for evaluation of actual mortality and longevity longitudinally over time, as applied in a primary analysis of the mortality experience of the Hanford worker population. Although available, this method was not utilized in the Mancuso-Stewart-Kneale paper. The author's preliminary longitudinal analysis shows that the gross mortality experience of persons employed at Hanford during the 1943-70 interval did not differ significantly from that of certain controls, when both employees and controls were selected from families with two or more offspring and comparisons were matched by age, sex, race and year of entry into employment. This result is consistent with findings reported by Sanders (Health Phys. vol.35, 521-538, 1978). The method utilizes an approximate chi-square (1 D.F.) statistic for testing population subgroup comparisons, as well as the cumulation of chi-squares (1 D.F.) for testing the overall result of a particular type of comparison. The method is available for computer testing of the Hanford mortality data, and could also be adapted to morbidity or other population studies. (author)
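    The cumulation step the abstract describes can be illustrated as follows (our sketch, with invented subgroup statistics; scipy is assumed). The sum of k independent 1-d.f. chi-square statistics is itself chi-square distributed with k degrees of freedom:

```python
from scipy.stats import chi2

# Hypothetical 1-d.f. chi-square statistics from matched subgroup comparisons.
subgroup_chi2 = [0.8, 2.1, 0.3, 1.5, 0.9]

# Cumulate and test the overall result of this type of comparison.
total = sum(subgroup_chi2)
k = len(subgroup_chi2)
p_value = chi2.sf(total, df=k)
print(f"cumulative chi-square = {total:.1f} on {k} d.f., p = {p_value:.3f}")
```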

  6. No Results? No Problem! Why We Are Publishing Methods of a Landmark Study With Results Still Pending.

    Science.gov (United States)

    Lacy, Brian E; Spiegel, Brennan

    2017-11-01

    Colorectal cancer (CRC) is the third most commonly diagnosed cancer in both men and women in the United States, and screening for CRC is a national health-care priority. In this issue, investigators from the CONFIRM study group report on the aims and study design of a large, multicenter, randomized prospective study of whether screening colonoscopy is superior to an annual fecal immunochemical test (FIT). CONFIRM hopes to enroll 50,000 individuals, aged 50-75 years, from 46 Veterans Affairs Medical Centers and monitor them for 10 years. This article is unique in that no results are presented, as the study is not yet complete. We have taken this unusual step because we believe the topic of CRC screening is critically important for our readers and that the results of this massive study have the potential to change clinical practice throughout all fields of medicine.

  7. Surface-enhanced Raman scattering spectra revealing the inter-cultivar differences for Chinese ornamental Flos Chrysanthemum: a new promising method for plant taxonomy

    Directory of Open Access Journals (Sweden)

    Heng Zhang

    2017-10-01

    Full Text Available Abstract Background Flos Chrysanthemi, a part of Chinese culture throughout a long history, is valuable not only for environmental decoration but also as medicine and food additive. Due to the numerous breeds and their extensive worldwide distribution, it is burdensome to recognize and classify the many cultivars with conventional methods, which still rest on the level of morphologic observation and description. As a fingerprint spectrum carrying molecular information, surface-enhanced Raman scattering (SERS) could be a suitable candidate technique to characterize and distinguish inter-cultivar differences at the molecular level. Results SERS spectra were used to analyze the inter-cultivar differences among 26 cultivars of Chinese ornamental Flos Chrysanthemum. Characteristic peak distribution patterns were abstracted from the SERS spectra and varied from cultivar to cultivar. For the bands distributed in the pattern map, the similarities in general showed their commonality, while on finer scales the deviations, and especially the particular bands owned by only a few cultivars, revealed their individuality. Since Raman peaks can characterize specific chemical components, the diversity of patterns in fact indicates inter-cultivar differences at the chemical level. Conclusion SERS is feasible for distinguishing inter-cultivar differences among Flos Chrysanthemum. A Raman spectral library was built from the SERS characteristic peak distribution patterns. A new method is proposed for Flos Chrysanthemum recognition and taxonomy.
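    Characteristic-peak extraction of the kind described can be sketched as follows (ours, on a synthetic spectrum; the paper's actual acquisition and preprocessing are not reproduced):

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic Raman-like spectrum: a few Lorentzian bands plus noise.
shift = np.linspace(400, 1800, 1400)          # Raman shift axis, cm^-1
def band(c, w, a):
    return a / (1.0 + ((shift - c) / w) ** 2)

rng = np.random.default_rng(42)
spectrum = band(620, 8, 1.0) + band(1005, 6, 2.5) + band(1450, 10, 1.4)
spectrum += 0.02 * rng.standard_normal(shift.size)

# Characteristic peak positions -> a simple distribution pattern.
peaks, _ = find_peaks(spectrum, height=0.5, prominence=0.3)
print(np.round(shift[peaks]))                  # e.g. [ 620. 1005. 1450.]
```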

  8. Testing the usability of the Rapid Impact Assessment Matrix (RIAM) method for comparison of EIA and SEA results

    International Nuclear Information System (INIS)

    Kuitunen, Markku; Jalava, Kimmo; Hirvonen, Kimmo

    2008-01-01

    This study examines how the results of Environmental Impact Assessment (EIA) and Strategic Environmental Assessment (SEA) can be compared using the Rapid Impact Assessment Matrix (RIAM) method. Many tools and techniques have been developed for use in impact assessment processes, including scoping, checklists, matrices, qualitative and quantitative models, literature reviews, and decision-support systems. While impact assessment processes have become more technically complicated, it is recognized that simpler applications of available tools and techniques are also appropriate. The Rapid Impact Assessment Matrix (RIAM) is a tool for organizing, analysing and presenting the results of a holistic EIA. RIAM was originally developed to compare the impact of alternative procedures in a single project. In this study, we used RIAM to compare the environmental and social impact of different projects, plans and programs realized within the same geographical area. RIAM scoring is based on five separate criteria. The RIAM criteria were applied to the impact considered to be the most significant in the evaluated cases, and scores were given for both environmental and social impact. Our results revealed that the RIAM method can be used for comparison and ranking of separate and distinct projects, plans, programs and policies, based on their negative or positive impact. Our data included 142 cases from the area of Central Finland covered by the Regional Council of Central Finland. This sample consisted of various types of projects, ranging from road construction to education programs that applied for EU funding.
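    For orientation, the published RIAM scoring (Pastakia and Jensen, 1998) combines two multiplicative criteria with three additive ones into an environmental score; the sketch below (ours, with a hypothetical impact) shows the arithmetic only, not the study's actual scores:

```python
def riam_es(a1, a2, b1, b2, b3):
    """RIAM environmental score: (A1 * A2) * (B1 + B2 + B3).

    A1: importance of condition, A2: magnitude of change/effect,
    B1: permanence, B2: reversibility, B3: cumulativity.
    """
    return (a1 * a2) * (b1 + b2 + b3)

# Hypothetical impact: important locally (A1=2), moderate negative
# magnitude (A2=-2), temporary (B1=2), reversible (B2=2), single (B3=2).
es = riam_es(2, -2, 2, 2, 2)
print(es)   # -24, which RIAM would place in a negative range band
```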

  9. Development of methods to measure hemoglobin adducts by gel electrophoresis - Preliminary results

    International Nuclear Information System (INIS)

    Sun, J.D.; McBride, S.M.

    1988-01-01

    Chemical adducts formed on blood hemoglobin may be a useful biomarker for assessing human exposures to these compounds. This paper reports preliminary results in the development of methods to measure such adducts that may be generally applicable to a wide variety of chemicals. Male F344/N rats were intraperitoneally injected with 14C-BaP dissolved in corn oil. Twenty-four hours later, the rats were sacrificed. Blood samples were collected and globin was isolated. The globin protein was then cleaved into peptide fragments using cyanogen bromide and the fragments were separated using 2-dimensional gel electrophoresis. The results showed that the adducted 14C-globin fragments migrated to different areas of the gel than the unadducted fragments. Further research is being conducted to develop methods that will allow quantitation of separated adducted globin fragments from human blood samples without the use of a radiolabel. (author)

  10. Method and equipment for treating waste water resulting from the technological testing processes of NPP equipment

    International Nuclear Information System (INIS)

    Radulescu, M. C.; Valeca, S.; Iorga, C.

    2016-01-01

    Modern methods and technologies, coupled with advanced equipment for treating residual substances from technological processes, are mandatory for all industrial facilities. The correct management of used working agents and of all wastes resulting from the different technological processes (preparation, use, collection, neutralization, discharge) is intended to reduce, up to complete removal, their potential negative impact on the environment. The high pressure and temperature testing stands at INR, intended for functional testing of nuclear components (fuel bundles, fuelling machines, etc.), were included in these measures since they use oils, chemically treated demineralized water, greases, etc. This paper focuses on the method and equipment used at INR Pitesti for the chemical treatment of demineralized waters, as well as the equipment for collecting, neutralizing and discharging them after use. (authors)

  11. [Results of treatment of milk teeth pulp by modified formocresol method].

    Science.gov (United States)

    Wochna-Sobańska, M

    1989-01-01

    The purpose of the study was to evaluate the results of treatment of molar pulp diseases by the formocresol method from the standpoint of the development of inflammatory complications in periapical tissues, disturbances of physiological root resorption, and disturbances of mineralization of the crowns of homologous permanent teeth. Milk molars with a diagnosis of grade II pulpopathy in children aged 3 to 9 years were qualified for the treatment. The treatment used formocresol by a modified method of pulp amputation according to Buckley, after previous devitalization with parapaste. The status of 143 teeth was examined again 1 to 4 years after completion of treatment. The proportion of positive results was 94% after one year, 90% after two years, 87% after three years and 80% after four years. The cause of premature loss of most teeth was acceleration of root resorption by 18-24 months. No harmful action of formocresol on the buds of permanent teeth was noted.

  12. Integrating Quantitative and Qualitative Results in Health Science Mixed Methods Research Through Joint Displays.

    Science.gov (United States)

    Guetterman, Timothy C; Fetters, Michael D; Creswell, John W

    2015-11-01

    Mixed methods research is becoming an important methodology to investigate complex health-related topics, yet the meaningful integration of qualitative and quantitative data remains elusive and needs further development. A promising innovation to facilitate integration is the use of visual joint displays that bring data together visually to draw out new insights. The purpose of this study was to identify exemplar joint displays by analyzing the various types of joint displays being used in published articles. We searched for empirical articles that included joint displays in 3 journals that publish state-of-the-art mixed methods research. We analyzed each of 19 identified joint displays to extract the type of display, mixed methods design, purpose, rationale, qualitative and quantitative data sources, integration approaches, and analytic strategies. Our analysis focused on what each display communicated and its representation of mixed methods analysis. The most prevalent types of joint displays were statistics-by-themes and side-by-side comparisons. Innovative joint displays connected findings to theoretical frameworks or recommendations. Researchers used joint displays for convergent, explanatory sequential, exploratory sequential, and intervention designs. We identified exemplars for each of these designs by analyzing the inferences gained through using the joint display. Exemplars represented mixed methods integration, presented integrated results, and yielded new insights. Joint displays appear to provide a structure to discuss the integrated analysis and assist both researchers and readers in understanding how mixed methods provides new insights. We encourage researchers to use joint displays to integrate and represent mixed methods analysis and discuss their value. © 2015 Annals of Family Medicine, Inc.

  13. Comparison of the analysis result between two laboratories using different methods

    International Nuclear Information System (INIS)

    Sri Murniasih; Agus Taftazani

    2017-01-01

    This work compares the analysis results for a volcano ash sample between two laboratories using different analysis methods. The research aims to improve testing laboratory quality and to foster cooperation with a testing laboratory from another country. Samples were tested at the Center for Accelerator of Science and Technology (CAST)-NAA laboratory using NAA, and at the University of Texas (UT), USA, using the ICP-MS and ENAA methods. Of the 12 target elements, the CAST-NAA laboratory was able to report analysis data for 11. The comparison shows that the analysis results for the K, Mn, Ti and Fe elements from the two laboratories agree very well, as is evident from the RSD values and correlation coefficients of the two laboratories' results. Examination of the differences shows that the results for the Al, Na, K, Fe, V, Mn, Ti, Cr and As elements from the two laboratories are not significantly different. Of the 11 elements reported, only Zn had significantly different values between the two laboratories. (author)

  14. THE RESULTS OF THE ANALYSIS OF THE STUDENTS’ BODY COMPOSITION BY BIOIMPEDANCE METHOD

    Directory of Open Access Journals (Sweden)

    Dmitry S. Blinov

    2016-06-01

    Full Text Available Introduction. Tissues of the human body conduct electricity. Liquid media (water, blood, the contents of hollow organs) have a low impedance, i.e. they are good conductors, while the resistance of denser tissues (muscle, nerves, etc.) is significantly higher. Fat and bone tissues have the highest impedance. Bioimpedancemetry is a method which determines the composition of the human body by measuring the electrical resistance (impedance) of its tissues. Relevance. This technique is indispensable to dieticians and fitness trainers. In addition, the results of the study can provide invaluable assistance in prescribing effective treatment for physicians, gynecologists, orthopedists, and other specialists. The bioimpedance method helps to determine the risks of developing type 2 diabetes, atherosclerosis, hypertension, diseases of the musculoskeletal system, disorders of the endocrine system, gallstone disease, etc. Materials and Methods. The list of body composition parameters assessed by the bioimpedance analysis method includes absolute and relative indicators. Depending on the method of measurement, the absolute indicators were determined for the whole body. The absolute indicators were: fat and lean body mass, active cell and skeletal muscle mass, total body water, and cellular and extracellular fluid. Along with them, relative indicators of body composition (normalized to body weight, lean mass, or other variables) were calculated. Results. Comparison of the anthropometric and bioimpedance methods found that height, vital capacity, weight, waist circumference, waist and hip circumference, basal metabolism, body fat mass normalized to height, lean mass, and percentage of skeletal muscle mass in boys and girls with normal and excessive body weight had statistically significant differences. Discussion and Conclusions. In the present study physical development with consideration of body composition in students

  15. [Methods in neonatal abstinence syndrome (NAS): results of a nationwide survey in Austria].

    Science.gov (United States)

    Bauchinger, S; Sapetschnig, I; Danda, M; Sommer, C; Resch, B; Urlesberger, B; Raith, W

    2015-08-01

    Neonatal abstinence syndrome (NAS) occurs in neonates whose mothers have taken addictive drugs or were under substitution therapy during pregnancy. Incidence numbers of NAS are on the rise globally; even in Austria, NAS is no longer rare. The aim of our survey was to reveal the status quo of dealing with NAS in Austria. A questionnaire was sent to 20 neonatology departments all over Austria; items included questions on scoring, therapy, breast-feeding and follow-up procedures. The response rate was 95%, and 94.7% of responders had written guidelines concerning NAS. The median number of children treated per year for NAS was 4. The Finnegan scoring system is used in 100% of the responding departments. Morphine is used most often, for opiate abuse (100%) as well as for multiple substance abuse (44.4%). The most frequent forms of morphine preparation are morphine and diluted tincture of opium. Frequency as well as dosage of medication vary broadly. 61.1% of the departments supported breast-feeding; regulations concerned participation in a substitution programme and general contraindications (HIV, HCV, HBV). Our results revealed a big west-east gradient in the number of patients treated per year. NAS is not a rare entity anymore in Austria (up to 50 cases per year in Vienna). Our survey showed that most neonatology departments in Austria treat their patients following written guidelines. Although all of them base these guidelines on international recommendations, there is no national consensus. © Georg Thieme Verlag KG Stuttgart · New York.

  16. Relationships Between Results Of An Internal And External Match Load Determining Method In Male, Singles Badminton Players.

    Science.gov (United States)

    Abdullahi, Yahaya; Coetzee, Ben; Van den Berg, Linda

    2017-07-03

    The study purpose was to determine relationships between the results of internal and external match load determining methods. Twenty-one players who participated in selected badminton championships during the 2014/2015 season served as subjects. The heart rate (HR) values and GPS data of each player were obtained via a fixed Polar HR Transmitter Belt and a MinimaxX GPS device. Moderate significant Spearman's rank correlations were found between HR and absolute duration (r = 0.43 at a low intensity (LI) and 0.44 at a high intensity (HI)), distance covered (r = 0.42 at a HI) and player load (PL) (r = 0.44 at a HI). Results also revealed an opposite trend for external and internal measures of load, as the average relative HR value was highest for the HI zone (54.1%), whereas the relative measures of external load showed their lowest average values (1.29-9.89%) for the HI zone. In conclusion, our findings show that the results of internal and external badminton match load determining methods are more related to each other in the HI zone than in other zones, and that the strength of the relationships depends on the duration of the activities performed, especially in the LI and HI zones. Overall, the trivial to moderate relationships between the results of internal and external match load determining methods in male singles badminton players reaffirm the conclusions of others that these constructs measure distinctly different demands and should therefore be measured concurrently to fully understand the true requirements of badminton match play.

  17. A method of estimating conceptus doses resulting from multidetector CT examinations during all stages of gestation

    International Nuclear Information System (INIS)

    Damilakis, John; Tzedakis, Antonis; Perisinakis, Kostas; Papadakis, Antonios E.

    2010-01-01

    Purpose: Current methods for the estimation of conceptus dose from multidetector CT (MDCT) examinations performed on the mother provide dose data for typical protocols with a fixed scan length. However, modified low-dose imaging protocols are frequently used during pregnancy. The purpose of the current study was to develop a method for the estimation of conceptus dose from any MDCT examination of the trunk performed during all stages of gestation. Methods: The Monte Carlo N-Particle (MCNP) radiation transport code was employed in this study to model the Siemens Sensation 16 and Sensation 64 MDCT scanners. Four mathematical phantoms were used, simulating women at 0, 3, 6, and 9 months of gestation. The contribution to the conceptus dose from single simulated scans was obtained at various positions across the phantoms. To investigate the effect of maternal body size and conceptus depth on conceptus dose, phantoms of different sizes were produced by adding layers of adipose tissue around the trunk of the mathematical phantoms. To verify MCNP results, conceptus dose measurements were carried out by means of three physical anthropomorphic phantoms, simulating pregnancy at 0, 3, and 6 months of gestation and thermoluminescence dosimetry (TLD) crystals. Results: The results consist of Monte Carlo-generated normalized conceptus dose coefficients for single scans across the four mathematical phantoms. These coefficients were defined as the conceptus dose contribution from a single scan divided by the CTDI free-in-air measured with identical scanning parameters. Data have been produced to take into account the effect of maternal body size and conceptus position variations on conceptus dose. Conceptus doses measured with TLD crystals showed a difference of up to 19% compared to those estimated by mathematical simulations. Conclusions: Estimation of conceptus doses from MDCT examinations of the trunk performed on pregnant patients during all stages of gestation can be made
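    The abstract defines the normalized coefficients as conceptus dose per unit CTDI free-in-air, which suggests a usage pattern like the following sketch (ours; every number is invented, and summing coefficient contributions over tabulated positions is our assumption about how an arbitrary scan range would be handled):

```python
# Hypothetical normalized conceptus dose coefficients (conceptus dose per
# unit CTDI free-in-air) for single scans at positions along the trunk;
# values are illustrative, not the paper's Monte Carlo results.
coefficients = {-10.0: 0.02, 0.0: 0.25, 10.0: 0.60, 20.0: 0.30}  # by z (cm)

ctdi_free_in_air = 10.0   # mGy for the selected scan parameters (assumed)

# Conceptus dose for a scan range covering the tabulated positions.
dose = sum(c * ctdi_free_in_air for c in coefficients.values())
print(f"estimated conceptus dose: {dose:.1f} mGy")
```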

  18. A method for modeling laterally asymmetric proton beamlets resulting from collimation

    Energy Technology Data Exchange (ETDEWEB)

    Gelover, Edgar; Wang, Dongxu; Flynn, Ryan T.; Hyer, Daniel E. [Department of Radiation Oncology, University of Iowa, 200 Hawkins Drive, Iowa City, Iowa 52242 (United States); Hill, Patrick M. [Department of Human Oncology, University of Wisconsin, 600 Highland Avenue, Madison, Wisconsin 53792 (United States); Gao, Mingcheng; Laub, Steve; Pankuch, Mark [Division of Medical Physics, CDH Proton Center, 4455 Weaver Parkway, Warrenville, Illinois 60555 (United States)

    2015-03-15

    Purpose: To introduce a method to model the 3D dose distribution of laterally asymmetric proton beamlets resulting from collimation. The model enables rapid beamlet calculation for spot scanning (SS) delivery using a novel penumbra-reducing dynamic collimation system (DCS) with two pairs of trimmers oriented perpendicular to each other. Methods: Trimmed beamlet dose distributions in water were simulated with MCNPX and the collimating effects noted in the simulations were validated by experimental measurement. The simulated beamlets were modeled analytically using integral depth dose curves along with an asymmetric Gaussian function to represent fluence in the beam’s eye view (BEV). The BEV parameters consisted of Gaussian standard deviations (sigmas) along each primary axis (σx1, σx2, σy1, σy2) together with the spatial location of the maximum dose (μx, μy). Percent depth dose variation with trimmer position was accounted for with a depth-dependent correction function. Beamlet growth with depth was accounted for by combining the in-air divergence with Hong’s fit of the Highland approximation along each axis in the BEV. Results: The beamlet model showed excellent agreement with the Monte Carlo simulation data used as a benchmark. The overall passing rate for a 3D gamma test with 3%/3 mm passing criteria was 96.1% between the analytical model and Monte Carlo data in an example treatment plan. Conclusions: The analytical model is capable of accurately representing individual asymmetric beamlets resulting from use of the DCS. This method enables integration of the DCS into a treatment planning system to perform dose computation in patient datasets. The method could be generalized for use with any SS collimation system in which blades, leaves, or trimmers are used to laterally sharpen beamlets.
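    The BEV fluence model described, an asymmetric Gaussian with separate sigmas on each side of the maximum along each axis, can be sketched as follows (the piecewise-sigma form and all numbers are our illustrative assumptions, not the authors' fitted parameters):

```python
import numpy as np

def asymmetric_gaussian(x, y, mu_x, mu_y, sx1, sx2, sy1, sy2):
    """BEV fluence: Gaussian with per-side sigmas along each primary axis."""
    sig_x = np.where(x < mu_x, sx1, sx2)   # left/right sigmas
    sig_y = np.where(y < mu_y, sy1, sy2)   # bottom/top sigmas
    return np.exp(-0.5 * (((x - mu_x) / sig_x) ** 2
                          + ((y - mu_y) / sig_y) ** 2))

# Toy beamlet trimmed on the +x and +y sides: sharper penumbra there.
x, y = np.meshgrid(np.linspace(-15, 15, 61), np.linspace(-15, 15, 61))
f = asymmetric_gaussian(x, y, mu_x=-1.0, mu_y=-1.0,
                        sx1=5.0, sx2=2.5, sy1=5.0, sy2=2.5)  # mm, assumed
print(f.max(), f.shape)
```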

  19. A method for modeling laterally asymmetric proton beamlets resulting from collimation

    International Nuclear Information System (INIS)

    Gelover, Edgar; Wang, Dongxu; Flynn, Ryan T.; Hyer, Daniel E.; Hill, Patrick M.; Gao, Mingcheng; Laub, Steve; Pankuch, Mark

    2015-01-01

    Purpose: To introduce a method to model the 3D dose distribution of laterally asymmetric proton beamlets resulting from collimation. The model enables rapid beamlet calculation for spot scanning (SS) delivery using a novel penumbra-reducing dynamic collimation system (DCS) with two pairs of trimmers oriented perpendicular to each other. Methods: Trimmed beamlet dose distributions in water were simulated with MCNPX and the collimating effects noted in the simulations were validated by experimental measurement. The simulated beamlets were modeled analytically using integral depth dose curves along with an asymmetric Gaussian function to represent fluence in the beam’s eye view (BEV). The BEV parameters consisted of Gaussian standard deviations (sigmas) along each primary axis (σx1, σx2, σy1, σy2) together with the spatial location of the maximum dose (μx, μy). Percent depth dose variation with trimmer position was accounted for with a depth-dependent correction function. Beamlet growth with depth was accounted for by combining the in-air divergence with Hong’s fit of the Highland approximation along each axis in the BEV. Results: The beamlet model showed excellent agreement with the Monte Carlo simulation data used as a benchmark. The overall passing rate for a 3D gamma test with 3%/3 mm passing criteria was 96.1% between the analytical model and Monte Carlo data in an example treatment plan. Conclusions: The analytical model is capable of accurately representing individual asymmetric beamlets resulting from use of the DCS. This method enables integration of the DCS into a treatment planning system to perform dose computation in patient datasets. The method could be generalized for use with any SS collimation system in which blades, leaves, or trimmers are used to laterally sharpen beamlets.

  20. A method for modeling laterally asymmetric proton beamlets resulting from collimation

    Science.gov (United States)

    Gelover, Edgar; Wang, Dongxu; Hill, Patrick M.; Flynn, Ryan T.; Gao, Mingcheng; Laub, Steve; Pankuch, Mark; Hyer, Daniel E.

    2015-01-01

    Purpose: To introduce a method to model the 3D dose distribution of laterally asymmetric proton beamlets resulting from collimation. The model enables rapid beamlet calculation for spot scanning (SS) delivery using a novel penumbra-reducing dynamic collimation system (DCS) with two pairs of trimmers oriented perpendicular to each other. Methods: Trimmed beamlet dose distributions in water were simulated with MCNPX and the collimating effects noted in the simulations were validated by experimental measurement. The simulated beamlets were modeled analytically using integral depth dose curves along with an asymmetric Gaussian function to represent fluence in the beam’s eye view (BEV). The BEV parameters consisted of Gaussian standard deviations (sigmas) along each primary axis (σx1,σx2,σy1,σy2) together with the spatial location of the maximum dose (μx,μy). Percent depth dose variation with trimmer position was accounted for with a depth-dependent correction function. Beamlet growth with depth was accounted for by combining the in-air divergence with Hong’s fit of the Highland approximation along each axis in the BEV. Results: The beamlet model showed excellent agreement with the Monte Carlo simulation data used as a benchmark. The overall passing rate for a 3D gamma test with 3%/3 mm passing criteria was 96.1% between the analytical model and Monte Carlo data in an example treatment plan. Conclusions: The analytical model is capable of accurately representing individual asymmetric beamlets resulting from use of the DCS. This method enables integration of the DCS into a treatment planning system to perform dose computation in patient datasets. The method could be generalized for use with any SS collimation system in which blades, leaves, or trimmers are used to laterally sharpen beamlets. PMID:25735287

  1. Decision making with consonant belief functions: Discrepancy resulting with the probability transformation method used

    Directory of Open Access Journals (Sweden)

    Cinicioglu Esma Nur

    2014-01-01

    Full Text Available Dempster-Shafer belief function theory can address a wider class of uncertainty than standard probability theory does, and this fact appeals to researchers in the operations research society looking for potential application areas. However, the lack of a decision theory for belief functions gives rise to the need to use probability transformation methods for decision making. For the representation of statistical evidence, the class of consonant belief functions is used, which is not closed under Dempster's rule of combination but is closed under Walley's rule of combination. In this research, it is shown that the outcomes obtained using Dempster's and Walley's rules result in different probability distributions when the pignistic transformation is used. However, when the plausibility transformation is used, they result in the same probability distribution. This result shows that the choice of the combination rule and probability transformation method may have a significant effect on decision making, since it may change which decision alternative is selected. This result is illustrated via an example of missile type identification.
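    The two transformations at issue can be sketched directly (ours; the consonant mass function is invented). Both produce probability distributions over the singletons, but by different rules, which is why they can disagree:

```python
frame = ("a", "b", "c")

# Hypothetical consonant mass function: focal elements are nested.
m = {("a",): 0.5, ("a", "b"): 0.3, ("a", "b", "c"): 0.2}

def pignistic(m, frame):
    """BetP(x) = sum over focal sets A containing x of m(A)/|A|."""
    return {x: sum(v / len(A) for A, v in m.items() if x in A) for x in frame}

def plausibility_transform(m, frame):
    """Normalized singleton plausibilities: Pl(x) / sum_y Pl(y)."""
    pl = {x: sum(v for A, v in m.items() if x in A) for x in frame}
    z = sum(pl.values())
    return {x: p / z for x, p in pl.items()}

print(pignistic(m, frame))              # {'a': 0.72, 'b': 0.22, 'c': 0.07}
print(plausibility_transform(m, frame)) # {'a': 0.59, 'b': 0.29, 'c': 0.12}
```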

  2. Revealing bearing defects with the help of modern methods of monitoring the technological equipment of hardware production

    Directory of Open Access Journals (Sweden)

    S. M. Piskun

    2010-01-01

Full Text Available It is shown that the use of modern methods and means of technical diagnostics makes it possible to ensure reliable, accident-free operation of equipment and to considerably decrease labour intensity, repair time and, accordingly, production costs.

  3. Results of the determination of He in cenozoic aquifers using the GC method.

    Science.gov (United States)

    Kotowski, Tomasz; Najman, Joanna

    2015-04-01

Applications of the helium (He) method known so far have consisted mainly of 4He measurements using a special mass spectrometer. For groundwater dating purposes, 4He measurements can be replaced by total He (3He+4He) concentration measurements, because the 3He content can be ignored: 3He concentrations are very low, and 3He/4He ratios do not exceed 1.0·10⁻⁵ in most cases. In this study, the total He concentrations in groundwater were determined using the gas chromatographic (GC) method as an alternative to methods based on spectrometric measurement. He concentrations in groundwater were used to determine residence time and groundwater circulation. Additionally, the radiocarbon method was used to determine the value of the external He flux (JHe) in the study area. The low He concentrations obtained, and their small variation along the ca. 65 km section along which the groundwater flows, indicate a likely short residence time and a strong hydraulic connection between the aquifers. The estimated residence time (ca. 3000 years) depends heavily on the large uncertainty of the He concentration resulting from the low He concentrations, on the external 4He flux value adopted for the calculation, and on the 14C ages used to estimate the external 4He flux. © 2015, National Ground Water Association.
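
    The dating logic behind such estimates is a linear accumulation balance: terrigenic He builds up at a rate set by in-situ production plus the external flux, so the residence time is the excess He divided by that rate. The schematic below uses entirely hypothetical placeholder numbers, not the study's values or its exact flux formulation.

```python
# Schematic helium-accumulation age. All values are hypothetical placeholders.
c_measured = 6.0e-8   # total He in water, ccSTP/g
c_equil    = 4.5e-8   # air-equilibrated (solubility) component, ccSTP/g
r_insitu   = 1.0e-13  # in-situ production from U/Th in the aquifer, ccSTP/g/yr
j_external = 4.0e-12  # external (crustal) He flux per unit water mass, ccSTP/g/yr

residence_time = (c_measured - c_equil) / (r_insitu + j_external)
print(f"residence time ~ {residence_time:.0f} years")
```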

  4. CT-guided percutaneous neurolysis methods. State of the art and first results

    International Nuclear Information System (INIS)

    Schneider, B.; Richter, G.M.; Roeren, T.; Kauffmann, G.W.

    1996-01-01

We used 21G or 22G fine needles. All CT-guided percutaneous neurolysis methods require proper blood coagulation. Most common CT scanners are suitable for neurolysis if there is enough room for maintaining sterile conditions. All neurolysis methods involve sterile puncture of the ganglia under local anesthesia, a test block with anesthetic and contrast agent to assess the clinical effect, and the definitive block with a mixture of 96% ethanol and local anesthetic. This allows us to correct the position of the needle if we see improper distribution of the test block or unwanted side effects. Although inflammatory complications of the peritoneum due to puncture are rarely seen, we prefer the dorsal approach whenever possible. Results: Seven of 20 legs showed at least transient clinical improvement after CT-guided lumbar sympathectomies; 13 legs had to be amputated. Results of the methods in the literature differ: for lumbar sympathectomy, improved perfusion is reported in 39-89% of cases, depending on the pre-selection of the patient group. Discussion: It was recently proved that sympathectomy improves perfusion not only of the skin but also of the muscle. The hypothesis of a steal effect of sympathectomy on skin perfusion was disproved. Modern aggressive surgical and interventional treatment often leaves for sympathectomy only those patients whose reserves of collateralization are nearly exhausted; we presume this is the reason for the different results found in our patient group. For thoracic sympathectomy, the clinical outcome depends very much on the indication: whereas palmar hyperhidrosis offers nearly 100% success, only 60-70% of patients with perfusion disturbances have benefited. Results of celiac ganglion block also differ: patients with carcinoma of the pancreas and other organs of the upper abdomen benefit in 80-100% of all cases, patients with chronic pancreatitis in 60-80%.

  5. Immunoglobulin G (IgG) Fab glycosylation analysis using a new mass spectrometric high-throughput profiling method reveals pregnancy-associated changes.

    Science.gov (United States)

    Bondt, Albert; Rombouts, Yoann; Selman, Maurice H J; Hensbergen, Paul J; Reiding, Karli R; Hazes, Johanna M W; Dolhain, Radboud J E M; Wuhrer, Manfred

    2014-11-01

    The N-linked glycosylation of the constant fragment (Fc) of immunoglobulin G has been shown to change during pathological and physiological events and to strongly influence antibody inflammatory properties. In contrast, little is known about Fab-linked N-glycosylation, carried by ∼ 20% of IgG. Here we present a high-throughput workflow to analyze Fab and Fc glycosylation of polyclonal IgG purified from 5 μl of serum. We were able to detect and quantify 37 different N-glycans by means of MALDI-TOF-MS analysis in reflectron positive mode using a novel linkage-specific derivatization of sialic acid. This method was applied to 174 samples of a pregnancy cohort to reveal Fab glycosylation features and their change with pregnancy. Data analysis revealed marked differences between Fab and Fc glycosylation, especially in the levels of galactosylation and sialylation, incidence of bisecting GlcNAc, and presence of high mannose structures, which were all higher in the Fab portion than the Fc, whereas Fc showed higher levels of fucosylation. Additionally, we observed several changes during pregnancy and after delivery. Fab N-glycan sialylation was increased and bisection was decreased relative to postpartum time points, and nearly complete galactosylation of Fab glycans was observed throughout. Fc glycosylation changes were similar to results described before, with increased galactosylation and sialylation and decreased bisection during pregnancy. We expect that the parallel analysis of IgG Fab and Fc, as set up in this paper, will be important for unraveling roles of these glycans in (auto)immunity, which may be mediated via recognition by human lectins or modulation of antigen binding. © 2014 by The American Society for Biochemistry and Molecular Biology, Inc.

  6. Immunoglobulin G (IgG) Fab Glycosylation Analysis Using a New Mass Spectrometric High-throughput Profiling Method Reveals Pregnancy-associated Changes*

    Science.gov (United States)

    Bondt, Albert; Rombouts, Yoann; Selman, Maurice H. J.; Hensbergen, Paul J.; Reiding, Karli R.; Hazes, Johanna M. W.; Dolhain, Radboud J. E. M.; Wuhrer, Manfred

    2014-01-01

    The N-linked glycosylation of the constant fragment (Fc) of immunoglobulin G has been shown to change during pathological and physiological events and to strongly influence antibody inflammatory properties. In contrast, little is known about Fab-linked N-glycosylation, carried by ∼20% of IgG. Here we present a high-throughput workflow to analyze Fab and Fc glycosylation of polyclonal IgG purified from 5 μl of serum. We were able to detect and quantify 37 different N-glycans by means of MALDI-TOF-MS analysis in reflectron positive mode using a novel linkage-specific derivatization of sialic acid. This method was applied to 174 samples of a pregnancy cohort to reveal Fab glycosylation features and their change with pregnancy. Data analysis revealed marked differences between Fab and Fc glycosylation, especially in the levels of galactosylation and sialylation, incidence of bisecting GlcNAc, and presence of high mannose structures, which were all higher in the Fab portion than the Fc, whereas Fc showed higher levels of fucosylation. Additionally, we observed several changes during pregnancy and after delivery. Fab N-glycan sialylation was increased and bisection was decreased relative to postpartum time points, and nearly complete galactosylation of Fab glycans was observed throughout. Fc glycosylation changes were similar to results described before, with increased galactosylation and sialylation and decreased bisection during pregnancy. We expect that the parallel analysis of IgG Fab and Fc, as set up in this paper, will be important for unraveling roles of these glycans in (auto)immunity, which may be mediated via recognition by human lectins or modulation of antigen binding. PMID:25004930

  7. A novel method for RNA extraction from FFPE samples reveals significant differences in biomarker expression between orthotopic and subcutaneous pancreatic cancer patient-derived xenografts.

    Science.gov (United States)

    Hoover, Malachia; Adamian, Yvess; Brown, Mark; Maawy, Ali; Chang, Alexander; Lee, Jacqueline; Gharibi, Armen; Katz, Matthew H; Fleming, Jason; Hoffman, Robert M; Bouvet, Michael; Doebler, Robert; Kelber, Jonathan A

    2017-01-24

    Next-generation sequencing (NGS) can identify and validate new biomarkers of cancer onset, progression and therapy resistance. Substantial archives of formalin-fixed, paraffin-embedded (FFPE) cancer samples from patients represent a rich resource for linking molecular signatures to clinical data. However, performing NGS on FFPE samples is limited by poor RNA purification methods. To address this hurdle, we developed an improved methodology for extracting high-quality RNA from FFPE samples. By briefly integrating a newly-designed micro-homogenizing (mH) tool with commercially available FFPE RNA extraction protocols, RNA recovery is increased by approximately 3-fold while maintaining standard A260/A280 ratios and RNA quality index (RQI) values. Furthermore, we demonstrate that the mH-purified FFPE RNAs are longer and of higher integrity. Previous studies have suggested that pancreatic ductal adenocarcinoma (PDAC) gene expression signatures vary significantly under in vitro versus in vivo and in vivo subcutaneous versus orthotopic conditions. By using our improved mH-based method, we were able to preserve established expression patterns of KRas-dependency genes within these three unique microenvironments. Finally, expression analysis of novel biomarkers in KRas mutant PDAC samples revealed that PEAK1 decreases and MST1R increases by over 100-fold in orthotopic versus subcutaneous microenvironments. Interestingly, however, only PEAK1 levels remain elevated in orthotopically grown KRas wild-type PDAC cells. These results demonstrate the critical nature of the orthotopic tumor microenvironment when evaluating the clinical relevance of new biomarkers in cells or patient-derived samples. Furthermore, this new mH-based FFPE RNA extraction method has the potential to enhance and expand future FFPE-RNA-NGS cancer biomarker studies.

  8. A method for purifying air containing radioactive substances resulting from the disintegration of radon

    International Nuclear Information System (INIS)

    Stringer, C.W.

    1974-01-01

The invention relates to the extraction of radioactive isotopes from air. It refers to a method for removing from air the radioactive substances resulting from the disintegration of radon, said method being of the type comprising filtering the air contaminated by the radon daughter products through a filter wetted with water in order to trap said substances in water. It is characterized in that it comprises the steps of causing the water contaminated by the radon daughter products to flow through a filtering substance containing a non-hydrosoluble granular substrate, the outer surface of which has been dried and then wetted by a normally-liquid hydrocarbon, and of returning the thus-filtered water so that it again wets the air filter and traps further radon daughter products. This can be applied to the purification of the air in uranium mines.

  9. The Trojan Horse method for nuclear astrophysics: Recent results for direct reactions

    International Nuclear Information System (INIS)

    Tumino, A.; Gulino, M.; Spitaleri, C.; Cherubini, S.; Romano, S.; Cognata, M. La; Pizzone, R. G.; Rapisarda, G. G.; Lamia, L.

    2014-01-01

The Trojan Horse method is a powerful indirect technique to determine the astrophysical factor for binary rearrangement processes A+x→b+B at astrophysical energies by measuring the cross section for the Trojan Horse (TH) reaction A+a→B+b+s in quasi-free kinematics. The Trojan Horse Method has been successfully applied to many reactions of astrophysical interest, both direct and resonant. In this paper, we focus on direct sub-processes. The theory of the THM for direct binary reactions will be briefly presented, based on a few-body approach that takes into account the off-energy-shell effects and the initial- and final-state interactions. Examples of recent results will be presented to demonstrate how the THM works experimentally.
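
    For orientation, the quasi-free factorization usually quoted in the THM literature (a standard plane-wave impulse-approximation expression, added here as background rather than taken from this abstract) relates the measured three-body cross section to the half-off-energy-shell (HOES) two-body one:

```latex
\frac{d^{3}\sigma}{dE_{B}\,d\Omega_{B}\,d\Omega_{b}}
  \;\propto\; KF\,\bigl|\Phi(p_{s})\bigr|^{2}
  \left(\frac{d\sigma}{d\Omega}\right)^{\mathrm{HOES}}_{A+x\to b+B}
```

    where KF is a kinematical factor, Φ(p_s) is the Fourier transform of the x–s relative-motion wave function inside the Trojan-horse nucleus a, and the last factor is the HOES cross section of the astrophysically relevant binary sub-process.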

  10. The Trojan Horse method for nuclear astrophysics: Recent results for direct reactions

    Energy Technology Data Exchange (ETDEWEB)

    Tumino, A.; Gulino, M. [Laboratori Nazionali del Sud, Istituto Nazionale di Fisica Nucleare, Catania, Italy and Università degli Studi di Enna Kore, Enna (Italy); Spitaleri, C.; Cherubini, S.; Romano, S. [Laboratori Nazionali del Sud, Istituto Nazionale di Fisica Nucleare, Catania, Italy and Dipartimento di Fisica e Astronomia, Università di Catania, Catania (Italy); Cognata, M. La; Pizzone, R. G.; Rapisarda, G. G. [Laboratori Nazionali del Sud, Istituto Nazionale di Fisica Nucleare, Catania (Italy); Lamia, L. [Dipartimento di Fisica e Astronomia, Università di Catania, Catania (Italy)

    2014-05-09

The Trojan Horse method is a powerful indirect technique to determine the astrophysical factor for binary rearrangement processes A+x→b+B at astrophysical energies by measuring the cross section for the Trojan Horse (TH) reaction A+a→B+b+s in quasi-free kinematics. The Trojan Horse Method has been successfully applied to many reactions of astrophysical interest, both direct and resonant. In this paper, we focus on direct sub-processes. The theory of the THM for direct binary reactions will be briefly presented, based on a few-body approach that takes into account the off-energy-shell effects and the initial- and final-state interactions. Examples of recent results will be presented to demonstrate how the THM works experimentally.

  11. Methods used by Elsam for monitoring precision and accuracy of analytical results

    Energy Technology Data Exchange (ETDEWEB)

    Hinnerskov Jensen, J [Soenderjyllands Hoejspaendingsvaerk, Faelleskemikerne, Aabenraa (Denmark)

    1996-12-01

Performing round robins at regular intervals is the primary method used by Elsam for monitoring the precision and accuracy of analytical results. The first round robin was started in 1974, and today 5 round robins are running, focused on: boiler water and steam, lubricating oils, coal, ion chromatography, and dissolved gases in transformer oils. Besides the power plant laboratories in Elsam, the participants are power plant laboratories from the rest of Denmark, industrial and commercial laboratories in Denmark, and finally foreign laboratories. The calculated standard deviations or reproducibilities are compared with acceptable values. These values originate from ISO, ASTM and the like, or from our own experience. Besides providing the laboratories with a tool to check their current performance, the round robins are very well suited for evaluating systematic developments on a long-term basis. By splitting up the uncertainty according to methods, sample preparation/analysis, etc., knowledge can be extracted from the round robins for use in many other situations.
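
    As a rough illustration of how such a comparison might look (a generic sketch, not Elsam's actual procedure, data, or acceptance values), one can compute each laboratory's z-score against the round-robin consensus and check whether the between-lab spread stays within an acceptable reproducibility limit:

```python
import numpy as np

# Hypothetical round-robin results for one determination (e.g. ash in coal, wt%)
lab_results = np.array([8.12, 8.05, 8.31, 7.98, 8.44, 8.09])
accepted_reproducibility = 0.30   # placeholder limit, e.g. from an ISO/ASTM method

consensus = np.median(lab_results)      # robust central value
spread = np.std(lab_results, ddof=1)    # between-lab standard deviation
z_scores = (lab_results - consensus) / spread

print(f"consensus={consensus:.2f}, spread={spread:.3f} "
      f"({'OK' if spread <= accepted_reproducibility else 'too high'})")
print("labs with |z| > 2:", np.where(np.abs(z_scores) > 2)[0])
```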

  12. Effective methods of protecting the results of intellectual activity in the infosphere of global telematics networks

    Directory of Open Access Journals (Sweden)

    D. A. Lovtsov

    2016-01-01

Full Text Available The purpose of this article is to improve the methodology for the technological, organizational and legal protection of the results of intellectual activity and the related intellectual rights in the information sphere of global telematics networks (such as «Internet», «Relkom», «Sitek», «Sedab», «Remart» and others). Based on an analysis of the peculiarities and possibilities of using different technological, organizational and legal methods and means of protecting information objects, proposals for improving the corresponding organizational and legal safeguards are formulated. Effective protection is achieved by rationally combining the technological, organizational and legal methods and means available in a particular situation.

  13. Raw material consumption of the European Union--concept, calculation method, and results.

    Science.gov (United States)

    Schoer, Karl; Weinzettel, Jan; Kovanda, Jan; Giegrich, Jürgen; Lauwigi, Christoph

    2012-08-21

This article presents the concept, calculation method, and first results of the "Raw Material Consumption" (RMC) economy-wide material flow indicator for the European Union (EU). The RMC measures the final domestic consumption of products in terms of raw material equivalents (RME), i.e. raw materials used in the complete production chain of consumed products. We employed the hybrid input-output life cycle assessment method to calculate RMC. We first developed a highly disaggregated, environmentally extended, mixed-unit input-output table and then applied life cycle inventory data for imported products without appropriate representation of production within the domestic economy. Lastly, we treated capital formation as intermediate consumption. Our results show that services, often considered as a solution for dematerialization, account for a significant part of EU raw material consumption, which emphasizes the need to focus on the full production chains and dematerialization of services. Comparison of the EU's RMC with its domestic extraction shows that the EU is nearly self-sufficient in biomass and nonmetallic minerals but extremely dependent on direct and indirect imports of fossil energy carriers and metal ores. This implies an export of environmental burden related to extraction and primary processing of these materials to the rest of the world. Our results demonstrate that internalizing capital formation has significant influence on the calculated RMC.
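
    The raw-material-equivalents logic traces final demand back through the full production chain with the standard Leontief inverse. The toy three-sector numbers below are hypothetical and only illustrate the mechanics, not the study's disaggregated EU table:

```python
import numpy as np

# Toy 3-sector economy: technical coefficients A, final demand y,
# and direct raw-material extraction per unit of output f (kg per EUR).
A = np.array([[0.10, 0.05, 0.00],
              [0.20, 0.10, 0.05],
              [0.05, 0.15, 0.10]])
y = np.array([100.0, 50.0, 80.0])      # final consumption by sector
f = np.array([2.0, 0.5, 0.1])          # extraction intensities

L = np.linalg.inv(np.eye(3) - A)       # Leontief inverse (I - A)^-1
rme_per_unit = f @ L                   # raw material embodied per unit of final demand
rmc = rme_per_unit @ y                 # total Raw Material Consumption
print(f"RMC = {rmc:.1f} kg")
```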

  14. Some results about the dating of pre-Hispanic Mexican ceramics by the thermoluminescence method

    International Nuclear Information System (INIS)

    Gonzalez M, P.; Mendoza A, D.; Ramirez L, A.; Schaaf, P.

    2004-01-01

One of the most frequently recurring questions in archaeometry concerns the age of the studied objects. The first dating methods were based on historical narratives, architectural style and manufacturing techniques. However, it has been observed that, as a consequence of the continuous irradiation from naturally occurring radioisotopes and from cosmic rays, some materials, such as archaeological ceramics, accumulate a certain quantity of energy. These types of material can, in principle, be dated through the analysis of this accumulated energy; in that case, ceramic dating can be realized by thermoluminescence (TL) dating. In this work, results obtained by our research group on the TL dating of ceramics belonging to several archaeological zones, such as Edzna (Campeche), Calixtlahuaca and Teotenango (Mexico State) and Hervideros (Durango), are presented. The analysis was realized using the fine-grain mode in a Daybreak model 1100 TL reader system. The radioisotopes that contribute to the accumulated annual dose in ceramic samples (40K, 238U, 232Th) were determined by means of techniques such as Energy Dispersive X-ray Spectroscopy (EDS) and Neutron Activation Analysis (NAA). Our results agree with results obtained through other methods.
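
    The dating principle reduces to one ratio: the equivalent (paleo)dose recorded by the TL signal divided by the annual dose rate delivered by 40K, 238U, 232Th and cosmic rays. A schematic calculation with hypothetical values, not measurements from this study:

```python
# Thermoluminescence age = accumulated (equivalent) dose / annual dose rate.
paleodose_gy = 4.2          # equivalent dose from the TL glow curves, Gy
annual_dose_gy = 3.5e-3     # from 40K, 238U, 232Th and cosmic rays, Gy/yr

age_years = paleodose_gy / annual_dose_gy
print(f"TL age ~ {age_years:.0f} years")   # ~1200 years for these placeholders
```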

  15. Influence of Meibomian Gland Expression Methods on Human Lipid Analysis Results.

    Science.gov (United States)

    Kunnen, Carolina M E; Brown, Simon H J; Lazon de la Jara, Percy; Holden, Brien A; Blanksby, Stephen J; Mitchell, Todd W; Papas, Eric B

    2016-01-01

    To compare the lipid composition of human meibum across three different meibum expression techniques. Meibum was collected from five healthy non-contact lens wearers (aged 20-35 years) after cleaning the eyelid margin using three meibum expression methods: cotton buds (CB), meibomian gland evaluator (MGE) and meibomian gland forceps (MGF). Meibum was also collected using cotton buds without cleaning the eyelid margin (CBn). Lipids were analyzed by chip-based, nano-electrospray mass spectrometry (ESI-MS). Comparisons were made using linear mixed models. Tandem MS enabled identification and quantification of over 200 lipid species across ten lipid classes. There were significant differences between collection techniques in the relative quantities of polar lipids obtained (P<.05). The MGE method returned smaller polar lipid quantities than the CB approaches. No significant differences were found between techniques for nonpolar lipids. No significant differences were found between cleaned and non-cleaned eyelids for polar or nonpolar lipids. Meibum expression technique influences the relative amount of phospholipids in the resulting sample. The highest amounts of phospholipids were detected with the CB approaches and the lowest with the MGE technique. Cleaning the eyelid margin prior to expression was not found to affect the lipid composition of the sample. This may be a consequence of the more forceful expression resulting in cell membrane contamination or higher risk of tear lipid contamination as a result of reflex tearing. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Novel method reveals a narrow phylogenetic distribution of bacterial dispersers in environmental communities exposed to low hydration conditions

    DEFF Research Database (Denmark)

    Krüger, U. S.; Bak, F.; Aamand, J.

    2018-01-01

    In this study, we developed a method that provides community-level surface dispersal profiles under controlled hydration conditions from environmental samples and enables us to isolate and uncover the diversity of the fastest bacterial dispersers. The method expands on the Porous Surface Model (PSM...... Pseudomonas putida and Flavobacterium johnsoniae strains from their non-motile mutants. Applying the method to soil and lake water bacterial communities showed that community-scale dispersal declined as conditions became drier. However, for both communities, dispersal was detected even under low hydration...... dispersers were substantially less diverse than the total communities. The dispersing fraction of the soil microbial community was dominated by Pseudomonas which increased in abundance at low hydration conditions, while the dispersing fraction of the lake community was dominated by Aeromonas and, under wet...

  17. Daily radiotoxicological supervision of personnel at the Pierrelatte industrial complex. Methods and results

    International Nuclear Information System (INIS)

    Chalabreysse, Jacques.

    1978-05-01

Thirteen years of experience gained from the daily radiotoxicological supervision of personnel at the PIERRELATTE industrial complex are presented. The study is divided into two parts. Part one is theoretical: a bibliographical synthesis of all scattered documents and publications, providing a homogeneous survey of the literature on the subject. Part two reviews the experience gained in the professional setting: laboratory measurements and analyses (development of methods and daily applications); mathematical formulae to answer the first questions which arise for an individual liable to be contaminated; and results obtained at PIERRELATTE.

  18. SAFER, an Analysis Method of Quantitative Proteomic Data, Reveals New Interactors of the C. elegans Autophagic Protein LGG-1.

    Science.gov (United States)

    Yi, Zhou; Manil-Ségalen, Marion; Sago, Laila; Glatigny, Annie; Redeker, Virginie; Legouis, Renaud; Mucchielli-Giorgi, Marie-Hélène

    2016-05-06

Affinity purifications followed by mass spectrometric analysis are used to identify protein-protein interactions. Because quantitative proteomic data are noisy, it is necessary to develop statistical methods to eliminate false positives and identify true partners. We present here a novel approach for filtering false interactors, named "SAFER" for mass Spectrometry data Analysis by Filtering of Experimental Replicates, which is based on the reproducibility of the replicates and the fold-change of the protein intensities between bait and control. To identify regulators or targets of autophagy, we characterized the interactors of LGG-1, a ubiquitin-like protein involved in autophagosome formation in C. elegans. LGG-1 partners were purified by affinity, analyzed by nanoLC-MS/MS mass spectrometry, and quantified by a label-free proteomic approach based on the mass spectrometric signal intensity of peptide precursor ions. Because the selection of confident interactions depends on the method used for statistical analysis, we compared SAFER with several statistical tests and different scoring algorithms on this set of data. We show that SAFER recovers high-confidence interactors that have been ignored by the other methods and identified new candidates involved in the autophagy process. We further validated our method on a public data set and conclude that SAFER notably improves the identification of protein interactors.
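
    A minimal sketch of the kind of replicate-and-fold-change filter the abstract describes is shown below; the thresholds and scoring are assumptions for illustration, not the published SAFER parameters:

```python
import numpy as np

def filter_interactors(bait, control, min_fold=2.0, min_detected=3):
    """Keep proteins reproducibly enriched in bait vs control pull-downs.

    bait, control: arrays of shape (n_proteins, n_replicates) holding
    label-free MS intensities. Thresholds are illustrative only.
    """
    detected = (bait > 0).sum(axis=1)        # reproducibility across replicates
    fold = (bait.mean(axis=1) + 1.0) / (control.mean(axis=1) + 1.0)
    return (detected >= min_detected) & (fold >= min_fold)

bait = np.array([[5e6, 4e6, 6e6],    # protein 0: strong, reproducible signal
                 [2e5, 0.0, 1e5]])   # protein 1: weak, missing in one replicate
ctrl = np.array([[1e5, 2e5, 1e5],
                 [1e5, 2e5, 3e5]])
print(filter_interactors(bait, ctrl))   # [ True False]
```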

  19. Mixed-Method Research on Learning Vocabulary through Technology Reveals Vocabulary Growth in Second-Grade Students

    Science.gov (United States)

    Huang, SuHua

    2015-01-01

A mixed-method embedded research design was employed to investigate the effectiveness of the integration of technology for second-grade students' vocabulary development and learning. Two second-grade classes with a total of 40 students (21 boys and 19 girls) were randomly selected to participate in this study over the course of a semester. One…

  20. Human exposure to bisphenol A by biomonitoring: Methods, results and assessment of environmental exposures

    International Nuclear Information System (INIS)

    Dekant, Wolfgang; Voelkel, Wolfgang

    2008-01-01

Human exposure to bisphenol A is a subject of controversy. This review critically assesses methods for the biomonitoring of bisphenol A exposures and the reported concentrations of bisphenol A in blood and urine of non-occupationally ('environmentally') exposed humans. Of the many methods published to assess bisphenol A concentrations in biological media, mass spectrometry-based methods are considered most appropriate due to their high sensitivity, selectivity and precision. In human blood, based on the known toxicokinetics of bisphenol A in humans, the very low concentrations expected due to rapid biotransformation and the very rapid excretion severely limit the use of reported blood levels of bisphenol A for exposure assessment. Due to the rapid and complete excretion of orally administered bisphenol A, urine samples are considered the appropriate body fluid for bisphenol A exposure assessment. In urine samples from several cohorts, bisphenol A (as glucuronide) was present in average concentrations in the range of 1-3 μg/L, suggesting that daily human exposure to bisphenol A is below 6 μg per person (< 0.1 μg/kg bw/day) for the majority of the population.

  1. Application of X-ray methods to assess grain vulnerability to damage resulting from multiple loads

    International Nuclear Information System (INIS)

    Zlobecki, A.

    1995-01-01

The aim of the work is to describe wheat grain behavior under multiple dynamic loads with various multipliers. The experiments were conducted on grain of the Almari variety. Grain moisture was 11, 16, 21 and 28%. A special ram stand was used for loading the grain. The experiments were carried out using an 8 g weight, equivalent to an impact energy of 4.6 × 10⁻³ J. The X-ray method was used to assess damage. The exposure time was 8 minutes with an X-ray lamp voltage of 15 kV. The position index was used as the measure of the damage. The investigation results were analyzed statistically. Based on the results of analysis of variance, regression analysis, the Duncan test and the Kolmogorov-Smirnov test, the damage number was shown to depend strongly on the number of impacts over the whole range of moisture of the loaded grain.

  2. (Re)interpreting LHC New Physics Search Results: Tools and Methods, 3rd Workshop

    CERN Document Server

    The quest for new physics beyond the SM is arguably the driving topic for LHC Run2. LHC collaborations are pursuing searches for new physics in a vast variety of channels. Although collaborations provide various interpretations for their search results, the full understanding of these results requires a much wider interpretation scope involving all kinds of theoretical models. This is a very active field, with close theory-experiment interaction. In particular, development of dedicated methodologies and tools is crucial for such scale of interpretation. Recently, a Forum was initiated to host discussions among LHC experimentalists and theorists on topics related to the BSM (re)interpretation of LHC data, and especially on the development of relevant interpretation tools and infrastructure: https://twiki.cern.ch/twiki/bin/view/LHCPhysics/InterpretingLHCresults Two meetings were held at CERN, where active discussions and concrete work on (re)interpretation methods and tools took place, with valuable cont...

  3. GRS Method for Uncertainty and Sensitivity Evaluation of Code Results and Applications

    International Nuclear Information System (INIS)

    Glaeser, H.

    2008-01-01

In recent years, there has been increasing interest in computational reactor safety analysis in replacing conservative evaluation-model calculations by best-estimate calculations supplemented by an uncertainty analysis of the code results. The evaluation of the margin to acceptance criteria, for example the maximum fuel rod clad temperature, should be based on the upper limit of the calculated uncertainty range. Uncertainty analysis is needed if useful conclusions are to be obtained from best-estimate thermal-hydraulic code calculations; otherwise, single values of unknown accuracy would be presented for comparison with regulatory acceptance limits. Methods have been developed and presented to quantify the uncertainty of computer code results. The basic techniques proposed by GRS are presented together with applications to a large-break loss-of-coolant accident on a reference reactor as well as to an experiment simulating containment behaviour.
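
    The GRS approach is commonly associated in the literature with Wilks' tolerance-limit formula, which fixes the number of code runs needed so that the largest observed output bounds a given quantile at a given confidence, independently of the number of uncertain inputs. This connection is stated here as background, not taken from the abstract:

```python
def wilks_runs(quantile=0.95, confidence=0.95):
    """Smallest n with 1 - quantile**n >= confidence (one-sided, first order)."""
    n = 1
    while 1.0 - quantile ** n < confidence:
        n += 1
    return n

print(wilks_runs())   # 59 runs for the classic 95%/95% statement
```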

  4. Changes in forest stand vulnerability to future wind damage resulting from different management methods

    DEFF Research Database (Denmark)

    Panferov, O.; Sogachev, Andrey; Ahrends, B.

    2010-01-01

The structure of forest stands changes continuously as a result of forest growth and of both natural and anthropogenic disturbances like windthrow or management activities – planting/cutting of trees. These structural changes can stabilize or destabilize forest stands in terms of their resistance to wind damage. The driving force behind the damage is the climate, but the magnitude and sign of the resulting effect depend on tree species, management method and soil conditions. The projected increasing frequency of weather extremes in general, and of severe storms in particular, might produce wide-area damage in European forest ecosystems during the 21st century. To assess the possible wind damage and the stabilization/destabilization effects of forest management, a number of numerical experiments are carried out for the region of Solling, Germany. The coupled small-scale process-based model combining Brook90

  5. Modeling the Downstream Processing of Monoclonal Antibodies Reveals Cost Advantages for Continuous Methods for a Broad Range of Manufacturing Scales.

    Science.gov (United States)

    Hummel, Jonathan; Pagkaliwangan, Mark; Gjoka, Xhorxhi; Davidovits, Terence; Stock, Rick; Ransohoff, Thomas; Gantier, Rene; Schofield, Mark

    2018-01-17

    The biopharmaceutical industry is evolving in response to changing market conditions, including increasing competition and growing pressures to reduce costs. Single-use (SU) technologies and continuous bioprocessing have attracted attention as potential facilitators of cost-optimized manufacturing for monoclonal antibodies. While disposable bioprocessing has been adopted at many scales of manufacturing, continuous bioprocessing has yet to reach the same level of implementation. In this study, the cost of goods of Pall Life Science's integrated, continuous bioprocessing (ICB) platform is modeled, along with that of purification processes in stainless-steel and SU batch formats. All three models include costs associated with downstream processing only. Evaluation of the models across a broad range of clinical and commercial scenarios reveal that the cost savings gained by switching from stainless-steel to SU batch processing are often amplified by continuous operation. The continuous platform exhibits the lowest cost of goods across 78% of all scenarios modeled here, with the SU batch process having the lowest costs in the rest of the cases. The relative savings demonstrated by the continuous process are greatest at the highest feed titers and volumes. These findings indicate that existing and imminent continuous technologies and equipment can become key enablers for more cost effective manufacturing of biopharmaceuticals. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Long-term results of forearm lengthening and deformity correction by the Ilizarov method.

    Science.gov (United States)

    Orzechowski, Wiktor; Morasiewicz, Leszek; Krawczyk, Artur; Dragan, Szymon; Czapiński, Jacek

    2002-06-30

Background. Shortening and deformity of the forearm is most frequently caused by congenital disorders or posttraumatic injury. Given its complex anatomy and biomechanics, the forearm is clearly the most difficult segment for lengthening and deformity correction. Material and methods. We analyzed 16 patients with shortening and deformity of the forearm, treated surgically using the Ilizarov method in our Department from 1989 to 2001. In 9 cases one-stage surgery was sufficient, while the remaining 7 patients underwent 2-5 stages of treatment. A total of 31 surgical operations were performed. The extent of forearm shortening ranged from 1.5 to 14.5 cm (5-70%). We developed a new fixator based on Schanz half-pins. Results. Forearm lengthening per operative stage averaged 2.35 cm. The proportion of lengthening ranged from 6% to 48%, with an average of 18.3%. The mean lengthening index was 48.15 days/cm. The per-patient rate of complications was 88%, compared with 45% per stage of treatment; the complications were mostly limited rotational mobility and abnormal consolidation of the regenerated bone. Conclusions. Despite the high complication rate, the Ilizarov method is the method of choice for patients with forearm shortening and deformity. Treatment is particularly indicated in patients with shortening caused by a disproportionate length of the forearm bones. Treatment should be managed so as to cause the least possible damage to arm function, even at the cost of limited lengthening. Our new stabilizer based on Schanz half-pins makes it possible to preserve forearm rotation.

  7. Ultrasonic Digital Communication System for a Steel Wall Multipath Channel: Methods and Results

    Energy Technology Data Exchange (ETDEWEB)

    Murphy, Timothy L. [Rensselaer Polytechnic Inst., Troy, NY (United States)

    2005-12-01

As of the development of this thesis, no commercially available products had been identified for the digital communication of instrumented data across a thick (approx. 6 in.) steel wall using ultrasound. The specific goal of the current research is to investigate the application of methods for the digital communication of instrumented data (i.e., temperature, voltage, etc.) across the wall of a steel pressure vessel. The acoustic transmission of data using ultrasonic transducers avoids the need to breach the wall of such a pressure vessel, which could ultimately affect its safety or lifespan, or void the homogeneity of an experiment under test. Digital communication paradigms are introduced and implemented for the successful dissemination of data across such a wall using solely an acoustic ultrasonic link. The first, dubbed the 'single-hop' configuration, can communicate bursts of digital data one-way across the wall using Differential Binary Phase-Shift Keying (DBPSK) modulation at up to 500 bps. The second, dubbed the 'double-hop' configuration, transmits a carrier into the vessel, modulates it, and retransmits it externally. Using a pulsed carrier with Pulse Amplitude Modulation (PAM), this technique can communicate digital data at up to 500 bps; using a CW carrier, Least Mean-Squared (LMS) adaptive interference suppression and DBPSK, it can communicate data at up to 5 kbps. A third technique, dubbed the 'reflected-power' configuration, communicates digital data by modulating a pulsed carrier through variation of the acoustic impedance at the internal transducer-wall interface. The paradigms of the latter two configurations are believed to be unique. All modulation methods are based on the premise that the wall cannot be breached in any way, and can therefore be viably implemented with power delivered wirelessly through the acoustic channel using ultrasound.
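
    To make the modulation concrete, here is a minimal differential BPSK modulator/demodulator (a generic textbook sketch; none of the thesis' specific hardware, data rates, or channel models is reproduced). DBPSK suits such a channel because the receiver compares consecutive symbols and needs no absolute carrier-phase reference:

```python
import numpy as np

def dbpsk_modulate(bits):
    """Differentially encode: a 1 flips the carrier phase, a 0 keeps it."""
    phases = np.cumsum(np.where(bits, np.pi, 0.0)) % (2 * np.pi)
    return np.exp(1j * phases)                 # one complex symbol per bit

def dbpsk_demodulate(symbols):
    """Compare each symbol's phase with the previous one.

    In practice a known reference symbol precedes the data; here the
    implicit reference is 1+0j, so the unknown channel phase must stay
    away from +/- 90 degrees for the very first bit only.
    """
    ref = np.concatenate(([1.0 + 0j], symbols[:-1]))
    return (np.real(symbols * np.conj(ref)) < 0).astype(int)

bits = np.array([1, 0, 1, 1, 0, 0, 1])
rx = dbpsk_modulate(bits) * np.exp(1j * 0.7)   # constant unknown channel phase
print(dbpsk_demodulate(rx))                    # recovers [1 0 1 1 0 0 1]
```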

  8. Improving the accuracy of myocardial perfusion scintigraphy results by a machine learning method

    International Nuclear Information System (INIS)

    Groselj, C.; Kukar, M.

    2002-01-01

Full text: Machine learning (ML), a rapidly growing subfield of artificial intelligence, has proven over the last decade to be a useful tool in many fields of decision making, including some fields of medicine, and its decision accuracy usually exceeds the human one. The aim was to assess the applicability of ML to interpreting the results of stress myocardial perfusion scintigraphy for CAD diagnosis. Data from 327 patients who underwent planar stress myocardial perfusion scintigraphy were re-evaluated in the usual way, and by comparison with the results of coronary angiography the sensitivity, specificity and accuracy of the investigation were computed. The data were digitized and the decision procedure repeated by the ML program 'Naive Bayesian classifier'. As ML can handle any number of variables simultaneously, all available disease-related data (history, habitus, risk factors, stress results) were added, and the sensitivity, specificity and accuracy of scintigraphy were computed again in this way. The results of both decision procedures were compared. With the ML method, 19 more patients out of 327 (5.8%) were correctly diagnosed by stress myocardial perfusion scintigraphy. ML could be an important tool for decision making in myocardial perfusion scintigraphy.
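
    The classifier named in the abstract is straightforward to reproduce in outline. The sketch below uses scikit-learn's Gaussian naive Bayes on a made-up feature matrix standing in for the scintigraphic plus clinical variables; the feature count, data, and labels are all hypothetical:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical stand-in: 327 patients x (scintigraphy scores + risk factors)
X = rng.normal(size=(327, 12))
y = rng.integers(0, 2, size=327)      # CAD on angiography: yes/no

clf = GaussianNB()
acc = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
print(f"10-fold CV accuracy: {acc.mean():.2f}")
```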

  9. Machine learning methods reveal the temporal pattern of dengue incidence using meteorological factors in metropolitan Manila, Philippines.

    Science.gov (United States)

    Carvajal, Thaddeus M; Viacrusis, Katherine M; Hernandez, Lara Fides T; Ho, Howell T; Amalin, Divina M; Watanabe, Kozo

    2018-04-17

Several studies have applied ecological factors such as meteorological variables to develop models that accurately predict the temporal pattern of dengue incidence or occurrence. Despite the large number of studies investigating this premise, the modeling approaches differ between studies, and each typically uses a single statistical technique, raising the question of which technique is robust and reliable. Hence, our study aims to compare the predictive accuracy for the temporal pattern of dengue incidence in Metropolitan Manila, as influenced by meteorological factors, across four modeling techniques: (a) general additive modeling, (b) seasonal autoregressive integrated moving average with exogenous variables, (c) random forest and (d) gradient boosting. Dengue incidence and meteorological data (flood, precipitation, temperature, Southern Oscillation Index, relative humidity, wind speed and direction) of Metropolitan Manila from January 1, 2009 to December 31, 2013 were obtained from the respective government agencies. Two types of datasets were used in the analysis: observed meteorological factors (MF) and their corresponding delayed or lagged effects (LG). These datasets were then subjected to the four modeling techniques, and the predictive accuracy and variable importance of each technique were calculated and evaluated. Among the statistical modeling techniques, random forest showed the best predictive accuracy, and the delayed or lagged effects of the meteorological variables were shown to be the better dataset to use for this purpose. Thus, the random forest model with delayed meteorological effects (RF-LG) was deemed the best among all assessed models. Relative humidity was shown to be the most important meteorological factor in the best model. The study showed that different statistical modeling techniques do indeed generate different predictive outcomes, and it further revealed that the Random forest model with delayed meteorological
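
    A minimal sketch of the lagged-feature random-forest setup (synthetic data and an assumed lag structure, not the study's variables or tuning) looks as follows; the feature-importance readout is the mechanism by which a variable like relative humidity would surface as dominant:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 260                                     # five years of weekly data (hypothetical)
df = pd.DataFrame({
    "rain": rng.gamma(2.0, 10.0, n),
    "temp": 28 + 2 * np.sin(np.arange(n) * 2 * np.pi / 52),
    "humidity": rng.normal(75, 5, n),
})
# Toy outcome: cases driven by rainfall four weeks earlier
df["cases"] = 50 + 0.8 * df["rain"].shift(4).fillna(0) + rng.normal(0, 5, n)

for col in ["rain", "temp", "humidity"]:
    for k in [1, 2, 4, 8]:                  # assumed lag structure, in weeks
        df[f"{col}_lag{k}"] = df[col].shift(k)
df = df.dropna()

X, y = df.filter(like="_lag"), df["cases"]  # the lagged (LG) dataset
model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
top = sorted(zip(X.columns, model.feature_importances_), key=lambda t: -t[1])[:3]
print(top)                                  # rain_lag4 should dominate
```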

  10. Long-term results of 2 adjuvant trials reveal differences in chemosensitivity and the pattern of metastases between colon cancer and rectal cancer.

    Science.gov (United States)

    Kornmann, Marko; Staib, Ludger; Wiegel, Thomas; Kron, Martina; Henne-Bruns, Doris; Link, Karl-Heinrich; Formentini, Andrea

    2013-03-01

Two identical randomized controlled trials designed to optimize adjuvant treatment of colon cancer (CC) (n = 855) and rectal cancer (RC) (n = 796) were performed. Long-term evaluation confirmed that the addition of folinic acid (FA) to 5-fluorouracil (5-FU) improved 7-year overall survival (OS) in CC but not in RC and revealed different patterns of recurrence in patients with CC and those with RC. Our aim was to compare long-term results of adjuvant treatment of colon cancer (CC) and rectal cancer (RC). Adjuvant chemotherapy of CC improved overall survival (OS), whereas that of RC remained at the level achieved by 5-fluorouracil (5-FU). We separately conducted 2 identically designed adjuvant trials in CC and RC. Patients were assigned to adjuvant chemotherapy with 5-FU alone, 5-FU + folinic acid (FA), or 5-FU + interferon-alfa. The first study enrolled patients with stage IIb/III CC, and the second study enrolled patients with stage II/III RC. All patients with RC received postoperative irradiation. Median follow-up for all patients with CC (n = 855) and RC (n = 796) was 4.9 years. The pattern and frequency of recurrence differed significantly, especially lung metastases, which occurred more frequently in RC (12.7%) than in CC (7.3%; P < .001). Seven-year OS rates for 5-FU, 5-FU + FA, and 5-FU + IFN-alfa were 54.1% (95% confidence interval [CI], 46.5-61.0), 66.8% (95% CI, 59.4-73.1), and 56.7% (95% CI, 49.3-63.4) in CC and 50.6% (95% CI, 43.0-57.7), 56.3% (95% CI, 49.4-62.7), and 54.8% (95% CI, 46.7-62.2) in RC, respectively. A subgroup analysis pointed to a reduced local recurrence (LR) rate and an increased OS by the addition of FA in stage II RC (n = 271) but not in stage III RC (n = 525). FA increased 7-year OS by 12.7 percentage points in CC but was not effective in RC. Based on these results and the pattern of metastases, our results suggest that the chemosensitivity of CC and RC may be different. Strategies different from those used in CC may be successful to

  11. Neurochemistry of Alzheimer's disease and related dementias: Results of metabolic imaging and future application of ligand binding methods

    International Nuclear Information System (INIS)

    Frey, K.A.; Koeppe, R.A.; Kuhl, D.E.

    1991-01-01

    Although Alzheimer's disease (AD) has been recognized for over a decade as a leading cause of cognitive decline in the elderly, its etiology remains unknown. Radiotracer imaging studies have revealed characteristic patterns of abnormal energy metabolism and blood flow in AD. A consistent reduction in cerebral glucose metabolism, determined by positron emission tomography, is observed in the parietal, temporal, and frontal association cortices. It is proposed that this occurs on the basis of diffuse cortical pathology, resulting in disproportionate loss of presynaptic input to higher cortical association areas. Postmortem neurochemical studies consistently indicate a severe depletion of cortical presynaptic cholinergic markers in AD. This is accounted for by loss of cholinergic projection neurons in the basal forebrain. In addition, loss of extrinsic serotonergic innervation of the cortex and losses of intrinsic cortical markers such as somatostatin, substance P, glutamate receptors, and glutamate- and GABA-uptake sites are reported. These observations offer the opportunity for study in vivo with the use of radioligand imaging methods under development. The role of tracer imaging studies in the investigation and diagnosis of dementia is likely to become increasingly central, as metabolic imaging provides evidence of abnormality early in the clinical course. New neurochemical imaging methods will allow direct testing of hypotheses of selective neuronal degeneration, and will assist in design of future studies of AD pathophysiology

  12. Differences in quantitative methods for measuring subjective cognitive decline - results from a prospective memory clinic study.

    Science.gov (United States)

    Vogel, Asmus; Salem, Lise Cronberg; Andersen, Birgitte Bo; Waldemar, Gunhild

    2016-09-01

    Cognitive complaints occur frequently in elderly people and may be a risk factor for dementia and cognitive decline. Results from studies on subjective cognitive decline are difficult to compare due to variability in assessment methods, and little is known about how different methods influence reports of cognitive decline. The Subjective Memory Complaints Scale (SMC) and The Memory Complaint Questionnaire (MAC-Q) were applied in 121 mixed memory clinic patients with mild cognitive symptoms (mean MMSE = 26.8, SD 2.7). The scales were applied independently and raters were blinded to results from the other scale. Scales were not used for diagnostic classification. Cognitive performances and depressive symptoms were also rated. We studied the association between the two measures and investigated the scales' relation to depressive symptoms, age, and cognitive status. SMC and MAC-Q were significantly associated (r = 0.44, N = 121, p = 0.015) and both scales had a wide range of scores. In this mixed cohort of patients, younger age was associated with higher SMC scores. There were no significant correlations between cognitive test performances and scales measuring subjective decline. Depression scores were significantly correlated to both scales measuring subjective decline. Linear regression models showed that age did not have a significant contribution to the variance in subjective memory beyond that of depressive symptoms. Measures for subjective cognitive decline are not interchangeable when used in memory clinics and the application of different scales in previous studies is an important factor as to why studies show variability in the association between subjective cognitive decline and background data and/or clinical results. Careful consideration should be taken as to which questions are relevant and have validity when operationalizing subjective cognitive decline.

  13. 3D ultrasound computer tomography: Hardware setup, reconstruction methods and first clinical results

    Science.gov (United States)

    Gemmeke, Hartmut; Hopp, Torsten; Zapf, Michael; Kaiser, Clemens; Ruiter, Nicole V.

    2017-11-01

A promising candidate for improved imaging of breast cancer is ultrasound computer tomography (USCT). Current experimental USCT systems are still focused in the elevation dimension, resulting in a large slice thickness, limited depth of field, loss of out-of-plane reflections, and a large number of movement steps to acquire a stack of images. 3D USCT emitting and receiving spherical wave fronts overcomes these limitations. We built an optimized 3D USCT, realizing for the first time the full benefits of a 3D system. The point spread function could be shown to be nearly isotropic in 3D, to have very low spatial variability, and to fit the predicted values. The contrast of the phantom images is very satisfactory in spite of imaging with a sparse aperture. The resolution and imaged details of the reflectivity reconstruction are comparable to a 3 T MRI volume. Important for the obtained resolution are the simultaneously obtained results of the transmission tomography. The KIT 3D USCT was then tested in a pilot study on ten patients. The primary goals of the pilot study were to test the USCT device, the data acquisition protocols, the image reconstruction methods and the image fusion techniques in a clinical environment. The study was conducted successfully; the data acquisition could be carried out for all patients with an average imaging time of six minutes per breast. The reconstructions provide promising images. Overlaid volumes of the modalities show qualitative and quantitative information at a glance. This paper gives a summary of the involved techniques, methods, and first results.

  14. Impact of Costing and Cost Analysis Methods on the Result of the Period: Methods Based on Full Cost Theory

    Directory of Open Access Journals (Sweden)

    Toma Maria

    2017-01-01

In light of the above, the present paper has as its objectives to review the methods of calculating full costs (economic or traditional) and to compare them in order to determine the effect they have on the result of the period.

  15. A new time-frequency method to reveal quantum dynamics of atomic hydrogen in intense laser pulses: Synchrosqueezing transform

    International Nuclear Information System (INIS)

    Sheu, Yae-lin; Hsu, Liang-Yan; Wu, Hau-tieng; Li, Peng-Cheng; Chu, Shih-I

    2014-01-01

This study introduces a new adaptive time-frequency (TF) analysis technique, the synchrosqueezing transform (SST), to explore the dynamics of a laser-driven hydrogen atom at an ab initio level, and demonstrates its versatility as a viable new avenue for exploring quantum dynamics. For a signal composed of oscillatory components that can be characterized by an instantaneous frequency, the SST renders the decomposed signal based on the phase information inherent in the linear TF representation, with mathematical support. Compared with classical TF methods, the SST clearly depicts several intrinsic quantum dynamical processes such as selection rules, AC Stark effects, and high harmonic generation.
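
    For readers unfamiliar with the transform, a bare-bones STFT-based synchrosqueezing sketch is given below. It is a generic illustration of the reassignment idea, not the paper's ab initio implementation: energy at each STFT bin is reassigned to the instantaneous frequency estimated from the local phase derivative, which sharpens ridges of oscillatory components.

```python
import numpy as np
from scipy.signal import stft

def synchrosqueeze(x, fs, nperseg=256):
    """Reassign STFT energy to the locally estimated instantaneous frequency."""
    f, t, Z = stft(x, fs=fs, nperseg=nperseg, noverlap=nperseg - 1)
    # Instantaneous frequency from the phase derivative along time
    # (one-sample hop, so the per-frame derivative scales by fs)
    phase = np.unwrap(np.angle(Z), axis=1)
    inst_f = np.gradient(phase, axis=1) * fs / (2 * np.pi)
    T = np.zeros_like(np.abs(Z))
    for i in range(Z.shape[0]):
        for j in range(Z.shape[1]):
            k = np.argmin(np.abs(f - inst_f[i, j]))   # nearest frequency bin
            T[k, j] += np.abs(Z[i, j])                # squeeze energy there
    return f, t, T

fs = 1000.0
tt = np.arange(0, 1, 1 / fs)
x = np.cos(2 * np.pi * (50 * tt + 30 * tt ** 2))      # chirp: 50 -> 110 Hz
f, t, T = synchrosqueeze(x, fs)                       # ridge follows the chirp
```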

  16. Feasibility of implementing the radioisotopic method of nasal mucociliary transport measurement with reliable results

    International Nuclear Information System (INIS)

    Troncoso, M.; Opazo, C.; Quilodran, C.; Lizama, V.

    2002-01-01

Aim: Our goal was to implement the radioisotopic method for measuring the nasal mucociliary velocity of transport (NMVT) in a feasible way, in order to make it easily available, and to validate the accuracy of the results. Such a method is needed when primary ciliary dyskinesia (PCD) is suspected, a disorder characterized by low NMVT and non-specific chronic respiratory symptoms that must be confirmed by electron microscopic cilia biopsy. Methods: We performed one hundred studies from February 2000 until February 2002. Patients were aged 2 months to 39 years, mean 9 years. All of them were referred from the Respiratory Disease Department; ninety had upper or lower respiratory symptoms, ten were healthy controls. The procedure, done by the Nuclear Medicine Technologist, consists of placing a 20 μl drop of 99mTc-MAA (0.1 mCi, 4 MBq) behind the head of the inferior turbinate in one nostril, using a frontal light, a nasal speculum and a teflon catheter attached to a tuberculin syringe. The drop movement was acquired in a gamma camera-computer system and the velocity was expressed in mm/min. As the patient must not move during the procedure, sedation has to be used in non-cooperative children. Cases with abnormal NMVT values were referred for nasal biopsy. Patients were classified in three groups: normal controls (NC), PCD confirmed by biopsy (PCDB), and cases with respiratory symptoms without biopsy (RSNB). In all patients with NMVT less than 2.4 mm/min, PCD was confirmed by biopsy. There was a clear-cut separation between normal and abnormal values; interestingly, even the highest NMVT in PCDB cases was lower than the lowest NMVT in NC. The procedure is not as easy as generally described in the literature, because the operator has to acquire some skill and because of the need for sedation in some cases. Conclusion: The procedure gives reliable, reproducible and objective results. It is safe, inexpensive and quick in cooperative patients. Although, sometimes

  17. Changes in mitochondrial functioning with electromagnetic radiation of ultra high frequency as revealed by electron paramagnetic resonance methods.

    Science.gov (United States)

    Burlaka, Anatoly; Selyuk, Marina; Gafurov, Marat; Lukin, Sergei; Potaskalova, Viktoria; Sidorik, Evgeny

    2014-05-01

To study the effects of electromagnetic radiation (EMR) of ultra high frequency (UHF), in doses equivalent to the maximal permitted energy load for the staff of radar stations, on the biochemical processes that occur in cell organelles. Liver, cardiac and aorta tissues from male rats exposed to non-thermal UHF EMR in pulsed and continuous modes were studied during the 28 days after irradiation by electron paramagnetic resonance (EPR) methods, including spin trapping of superoxide radicals. Qualitative and quantitative disturbances in the electron transport chain (ETC) of mitochondria were registered: the formation of iron-nitrosyl complexes of nitric oxide (NO) radicals with the iron-sulfur (FeS) proteins, decreased activity of the FeS protein N2 of the NADH-ubiquinone oxidoreductase complex, and growth of flavo-ubisemiquinone, combined with increased rates of superoxide production. (i) Abnormalities in the mitochondrial ETC of liver and aorta cells are more pronounced for animals irradiated in pulsed mode; (ii) the alterations in the functioning of the mitochondrial ETC increase the generation rate of superoxide radicals in all samples, leading to cellular hypoxia and an intensification of oxidatively initiated metabolic changes; and (iii) electron paramagnetic resonance methods can be used to track the qualitative and quantitative changes in the mitochondrial ETC caused by UHF EMR.

  18. Kissinger method applied to the crystallization of glass-forming liquids: Regimes revealed by ultra-fast-heating calorimetry

    Energy Technology Data Exchange (ETDEWEB)

    Orava, J., E-mail: jo316@cam.ac.uk [Department of Materials Science & Metallurgy, University of Cambridge, 27 Charles Babbage Road, Cambridge CB3 0FS (United Kingdom); WPI-Advanced Institute for Materials Research (WPI-AIMR), Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai 980-8577 (Japan); Greer, A.L., E-mail: alg13@cam.ac.uk [Department of Materials Science & Metallurgy, University of Cambridge, 27 Charles Babbage Road, Cambridge CB3 0FS (United Kingdom); WPI-Advanced Institute for Materials Research (WPI-AIMR), Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai 980-8577 (Japan)

    2015-03-10

Highlights: • Study of ultra-fast DSC applied to the crystallization of glass-forming liquids. • Numerical modeling of DSC traces at heating rates exceeding 10 orders of magnitude. • Identification of three regimes in Kissinger plots. • Elucidation of the effect of liquid fragility on the Kissinger method. • Modeling to study the regime in which crystal growth is thermodynamically limited. - Abstract: Numerical simulation of DSC traces is used to study the validity and limitations of the Kissinger method for determining the temperature dependence of the crystal-growth rate on continuous heating of glasses from the glass transition to the melting temperature. A particular interest is to use the wide range of heating rates accessible with ultra-fast DSC to study systems such as the chalcogenide Ge₂Sb₂Te₅ for which fast crystallization is of practical interest in phase-change memory. Kissinger plots are found to show three regimes: (i) at low heating rates the plot is straight, (ii) at medium heating rates the plot is curved as expected from the liquid fragility, and (iii) at the highest heating rates the crystallization rate is thermodynamically limited, and the plot has curvature of the opposite sign. The relative importance of these regimes is identified for different glass-forming systems, considered in terms of the liquid fragility and the reduced glass-transition temperature. The extraction of quantitative information on fundamental crystallization kinetics from Kissinger plots is discussed.
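
    The Kissinger analysis itself is a one-line fit: plotting ln(β/Tp²) against 1/Tp and reading the activation energy from the slope of the straight (low-heating-rate) regime. A schematic with hypothetical peak-temperature data, not values from this work:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# Hypothetical peak temperatures Tp (K) measured at heating rates beta (K/s)
beta = np.array([0.1, 1.0, 10.0, 100.0])
Tp   = np.array([430.0, 452.0, 476.0, 503.0])

# Kissinger: ln(beta/Tp^2) = -Ea/(R*Tp) + const
slope, _ = np.polyfit(1.0 / Tp, np.log(beta / Tp ** 2), 1)
print(f"Ea ~ {-slope * R / 1000:.0f} kJ/mol")   # ~160 kJ/mol for these numbers
```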

  19. Kissinger method applied to the crystallization of glass-forming liquids: Regimes revealed by ultra-fast-heating calorimetry

    International Nuclear Information System (INIS)

    Orava, J.; Greer, A.L.

    2015-01-01

    Highlights: • Study of ultra-fast DSC applied to the crystallization of glass-forming liquids. • Numerical modeling of DSC traces at heating rates spanning more than 10 orders of magnitude. • Identification of three regimes in Kissinger plots. • Elucidation of the effect of liquid fragility on the Kissinger method. • Modeling to study the regime in which crystal growth is thermodynamically limited. - Abstract: Numerical simulation of DSC traces is used to study the validity and limitations of the Kissinger method for determining the temperature dependence of the crystal-growth rate on continuous heating of glasses from the glass transition to the melting temperature. A particular interest is to use the wide range of heating rates accessible with ultra-fast DSC to study systems such as the chalcogenide Ge₂Sb₂Te₅ for which fast crystallization is of practical interest in phase-change memory. Kissinger plots are found to show three regimes: (i) at low heating rates the plot is straight, (ii) at medium heating rates the plot is curved as expected from the liquid fragility, and (iii) at the highest heating rates the crystallization rate is thermodynamically limited, and the plot has curvature of the opposite sign. The relative importance of these regimes is identified for different glass-forming systems, considered in terms of the liquid fragility and the reduced glass-transition temperature. The extraction of quantitative information on fundamental crystallization kinetics from Kissinger plots is discussed.

  20. Accuracy of the hypothetical sky-polarimetric Viking navigation versus sky conditions: revealing solar elevations and cloudinesses favourable for this navigation method

    Science.gov (United States)

    Száz, Dénes; Farkas, Alexandra; Barta, András; Kretzer, Balázs; Blahó, Miklós; Egri, Ádám; Szabó, Gyula; Horváth, Gábor

    2017-09-01

    According to Thorkild Ramskou's theory proposed in 1967, under overcast and foggy skies, Viking seafarers might have used skylight polarization analysed with special crystals called sunstones to determine the position of the invisible Sun. After finding the occluded Sun with sunstones, its elevation angle had to be measured and its shadow had to be projected onto the horizontal surface of a sun compass. According to Ramskou's theory, these sunstones might have been birefringent calcite or dichroic cordierite or tourmaline crystals working as polarizers. It has frequently been claimed that this method might have been suitable for navigation even in cloudy weather. This hypothesis has been accepted and frequently cited for decades without any experimental support. In this work, we determined the accuracy of this hypothetical sky-polarimetric Viking navigation for 1080 different sky situations characterized by solar elevation θ and cloudiness ρ, the sky polarization patterns of which were measured by full-sky imaging polarimetry. We used the earlier measured uncertainty functions of the navigation steps 1, 2 and 3 for calcite, cordierite and tourmaline sunstone crystals, respectively, and the newly measured uncertainty function of step 4 presented here. As a result, we revealed the meteorological conditions under which Vikings could have used this hypothetical navigation method. We determined the solar elevations at which the navigation uncertainties are minimal at summer solstice and spring equinox for all three sunstone types. On average, calcite sunstone ensures a more accurate sky-polarimetric navigation than tourmaline and cordierite. However, in some special cases (generally at 35° ≤ θ ≤ 40°, 1 okta ≤ ρ ≤ 6 oktas for summer solstice, and at 20° ≤ θ ≤ 25°, 0 okta ≤ ρ ≤ 4 oktas for spring equinox), the use of tourmaline and cordierite results in smaller navigation uncertainties than that of calcite. Generally, under clear or less cloudy

  1. Accuracy of the hypothetical sky-polarimetric Viking navigation versus sky conditions: revealing solar elevations and cloudinesses favourable for this navigation method.

    Science.gov (United States)

    Száz, Dénes; Farkas, Alexandra; Barta, András; Kretzer, Balázs; Blahó, Miklós; Egri, Ádám; Szabó, Gyula; Horváth, Gábor

    2017-09-01

    According to Thorkild Ramskou's theory proposed in 1967, under overcast and foggy skies, Viking seafarers might have used skylight polarization analysed with special crystals called sunstones to determine the position of the invisible Sun. After finding the occluded Sun with sunstones, its elevation angle had to be measured and its shadow had to be projected onto the horizontal surface of a sun compass. According to Ramskou's theory, these sunstones might have been birefringent calcite or dichroic cordierite or tourmaline crystals working as polarizers. It has frequently been claimed that this method might have been suitable for navigation even in cloudy weather. This hypothesis has been accepted and frequently cited for decades without any experimental support. In this work, we determined the accuracy of this hypothetical sky-polarimetric Viking navigation for 1080 different sky situations characterized by solar elevation θ and cloudiness ρ, the sky polarization patterns of which were measured by full-sky imaging polarimetry. We used the earlier measured uncertainty functions of the navigation steps 1, 2 and 3 for calcite, cordierite and tourmaline sunstone crystals, respectively, and the newly measured uncertainty function of step 4 presented here. As a result, we revealed the meteorological conditions under which Vikings could have used this hypothetical navigation method. We determined the solar elevations at which the navigation uncertainties are minimal at summer solstice and spring equinox for all three sunstone types. On average, calcite sunstone ensures a more accurate sky-polarimetric navigation than tourmaline and cordierite. However, in some special cases (generally at 35° ≤ θ ≤ 40°, 1 okta ≤ ρ ≤ 6 oktas for summer solstice, and at 20° ≤ θ ≤ 25°, 0 okta ≤ ρ ≤ 4 oktas for spring equinox), the use of tourmaline and cordierite results in smaller navigation uncertainties than that of calcite.

  2. Rainfall assimilation in RAMS by means of the Kuo parameterisation inversion: method and preliminary results

    Science.gov (United States)

    Orlandi, A.; Ortolani, A.; Meneguzzo, F.; Levizzani, V.; Torricella, F.; Turk, F. J.

    2004-03-01

    In order to improve high-resolution forecasts, a specific method for assimilating rainfall rates into the Regional Atmospheric Modelling System model has been developed. It is based on the inversion of the Kuo convective parameterisation scheme. A nudging technique is applied to 'gently' increase with time the weight of the estimated precipitation in the assimilation process. A rough but manageable technique for estimating the partition between convective and stratiform precipitation, without requiring any ancillary measurement, is explained. The method is general purpose, but it is tuned for the assimilation of geostationary satellite rainfall estimates. Preliminary results are presented and discussed, both from totally simulated experiments and from experiments assimilating real satellite-based precipitation observations. For every case study, rainfall data are computed with a rapid-update satellite precipitation estimation algorithm based on IR and MW satellite observations. This research was carried out in the framework of the EURAINSAT project (an EC research project co-funded by the Energy, Environment and Sustainable Development Programme within the topic 'Development of generic Earth observation technologies', Contract number EVG1-2000-00030).
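
    As a rough illustration of the nudging step described above (a generic Newtonian-relaxation sketch, not the authors' Kuo-inversion scheme; the relaxation time scale tau and the linear ramp of the weight are assumptions made for this sketch):

        def nudging_step(model_field, obs_field, dt, tau, w):
            """One Newtonian-relaxation (nudging) step pulling a model field
            toward an observation-derived value; the weight w, ramped from 0 to 1
            over the assimilation window, 'gently' increases the influence of the
            estimated precipitation, and tau is the relaxation time in seconds."""
            return model_field + dt * w * (obs_field - model_field) / tau

        # Example: ramp the weight linearly over a 1-hour window (dt = 60 s).
        field = 0.0
        for k in range(60):
            w = (k + 1) / 60.0
            field = nudging_step(field, obs_field=1.0, dt=60.0, tau=1800.0, w=w)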

  3. Implantable central venous chemoport: comparison of results according to approach routes and methods

    International Nuclear Information System (INIS)

    Shin, Byung Suck; Ahn, Moon Sang

    2003-01-01

    To evaluate the results and complications of placement of implantable ports according to approach routes and methods. Between April 2001 and October 2002, a total of 103 implantable chemoports were placed in 95 patients for chemotherapy, using a preconnected type (n=39) or an attachable type (n=64). Puncture sites were the left subclavian vein (n=35), right subclavian vein (n=5), left internal jugular vein (n=9) and right internal jugular vein (n=54). We evaluated the duration of catheterization and the complications according to approach routes and methods. The implantable chemoport was placed successfully in all cases. Duration of catheterization ranged from 8 to 554 days (mean 159; total 17,872 catheter-days). Procedure-related complications were transient pulmonary air embolism (n=1), small hematoma (n=1) and malposition with the preconnected type (n=2). Late complications were catheter migration (n=5), catheter malfunction (n=3), occlusion (n=1) and infection (n=11). Fifteen chemoports were consequently removed (14.5%). Catheter migration occurred via the subclavian vein in all cases (13%, p=.008). Infection developed in 10.7% of patients (0.61 per 1000 catheter-days). There was no catheter-related central vein thrombosis. Implantation of a chemoport is a safe procedure. Choosing the right internal jugular vein rather than the subclavian vein as the puncture site gives fewer complications, and the attachable type of chemoport is more convenient than the preconnected type. Adequate care of the chemoport is essential for long patency.
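
    As a quick consistency check on the infection rate quoted above (assuming all 11 infections are counted against the full 17,872 catheter-days):

    \[ \frac{11}{17\,872} \times 1000 \approx 0.62 \]

    infections per 1000 catheter-days, consistent with the reported 0.61.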

  4. The Trojan Horse method for nuclear astrophysics: Recent results on resonance reactions

    Energy Technology Data Exchange (ETDEWEB)

    Cognata, M. La; Pizzone, R. G. [Laboratori Nazionali del Sud, Istituto Nazionale di Fisica Nucleare, Catania (Italy); Spitaleri, C.; Cherubini, S.; Romano, S. [Dipartimento di Fisica e Astronomia, Università di Catania, Catania, Italy and Laboratori Nazionali del Sud, Istituto Nazionale di Fisica Nucleare, Catania (Italy); Gulino, M.; Tumino, A. [Kore University, Enna, Italy and Laboratori Nazionali del Sud, Istituto Nazionale di Fisica Nucleare, Catania (Italy); Lamia, L. [Dipartimento di Fisica e Astronomia, Università di Catania, Catania (Italy)

    2014-05-09

    Nuclear astrophysics aims to measure nuclear-reaction cross sections of astrophysical interest for inclusion in models of stellar evolution and nucleosynthesis. Low energies, < 1 MeV or even < 10 keV, are required, as this is the window where these processes are most effective. Two effects have prevented a satisfactory knowledge of the relevant nuclear processes from being achieved, namely, the Coulomb barrier exponentially suppressing the cross section and the presence of atomic electrons. These difficulties have triggered theoretical and experimental investigations to extend our knowledge down to astrophysical energies. For instance, indirect techniques such as the Trojan Horse Method have been devised, yielding new cutting-edge results. In particular, I will focus on the application of this indirect method to resonance reactions. Resonances might dramatically enhance the astrophysical S(E)-factor, so, when they occur right at astrophysical energies, their measurement is crucial to pin down the astrophysical scenario. Unknown or unpredicted resonances might introduce large systematic errors in nucleosynthesis models. These considerations apply to low-energy resonances and to sub-threshold resonances as well, as they may produce sizable modifications of the S-factor due to, for instance, destructive interference with another resonance.
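
    For reference, the astrophysical S(E)-factor mentioned here is the standard parametrization that factors the exponential Coulomb suppression out of the cross section (a textbook definition, not reproduced in this record):

    \[ S(E) = E\,\sigma(E)\,e^{2\pi\eta}, \qquad \eta = \frac{Z_1 Z_2 e^2}{\hbar v}, \]

    where \(\sigma(E)\) is the cross section, \(\eta\) the Sommerfeld parameter, \(Z_1\) and \(Z_2\) the charge numbers of the interacting nuclei, and \(v\) their relative velocity. Resonances appear as peaks superimposed on this otherwise smoothly varying factor.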

  5. The Trojan Horse method for nuclear astrophysics: Recent results on resonance reactions

    International Nuclear Information System (INIS)

    Cognata, M. La; Pizzone, R. G.; Spitaleri, C.; Cherubini, S.; Romano, S.; Gulino, M.; Tumino, A.; Lamia, L.

    2014-01-01

    Nuclear astrophysics aims to measure nuclear-reaction cross sections of astrophysical interest for inclusion in models of stellar evolution and nucleosynthesis. Low energies, < 1 MeV or even < 10 keV, are required, as this is the window where these processes are most effective. Two effects have prevented a satisfactory knowledge of the relevant nuclear processes from being achieved, namely, the Coulomb barrier exponentially suppressing the cross section and the presence of atomic electrons. These difficulties have triggered theoretical and experimental investigations to extend our knowledge down to astrophysical energies. For instance, indirect techniques such as the Trojan Horse Method have been devised, yielding new cutting-edge results. In particular, I will focus on the application of this indirect method to resonance reactions. Resonances might dramatically enhance the astrophysical S(E)-factor, so, when they occur right at astrophysical energies, their measurement is crucial to pin down the astrophysical scenario. Unknown or unpredicted resonances might introduce large systematic errors in nucleosynthesis models. These considerations apply to low-energy resonances and to sub-threshold resonances as well, as they may produce sizable modifications of the S-factor due to, for instance, destructive interference with another resonance.

  6. The healthy building intervention study: Objectives, methods and results of selected environmental measurements

    Energy Technology Data Exchange (ETDEWEB)

    Fisk, W.J.; Faulkner, D.; Sullivan, D. [and others]

    1998-02-17

    To test proposed methods for reducing SBS symptoms and to learn about the causes of these symptoms, a double-blind controlled intervention study was designed and implemented. This study utilized two different interventions designed to reduce occupants' exposures to airborne particles: (1) high efficiency filters in the building's HVAC systems; and (2) thorough cleaning of carpeted floors and fabric-covered chairs with an unusually powerful vacuum cleaner. The study population was the workers on the second and fourth floors of a large office building with mechanical ventilation, air conditioning, and sealed windows. Interventions were implemented on one floor while the occupants on the other floor served as a control group. For the enhanced-filtration intervention, a multiple crossover design was used (a crossover is a repeat of the experiment with the former experimental group as the control group and vice versa). Demographic and health symptom data were collected via an initial questionnaire on the first study week and health symptom data were obtained each week, for eight additional weeks, via weekly questionnaires. A large number of indoor environmental parameters were measured during the study including air temperatures and humidities, carbon dioxide concentrations, particle concentrations, concentrations of several airborne bioaerosols, and concentrations of several microbiologic compounds within the dust sampled from floors and chairs. This report describes the study methods and summarizes the results of selected environmental measurements.

  7. Differences in quantitative methods for measuring subjective cognitive decline - results from a prospective memory clinic study

    DEFF Research Database (Denmark)

    Vogel, Asmus; Salem, Lise Cronberg; Andersen, Birgitte Bo

    2016-01-01

    influence reports of cognitive decline. METHODS: The Subjective Memory Complaints Scale (SMC) and The Memory Complaint Questionnaire (MAC-Q) were applied in 121 mixed memory clinic patients with mild cognitive symptoms (mean MMSE = 26.8, SD 2.7). The scales were applied independently and raters were blinded...... decline. Depression scores were significantly correlated to both scales measuring subjective decline. Linear regression models showed that age did not have a significant contribution to the variance in subjective memory beyond that of depressive symptoms. CONCLUSIONS: Measures for subjective cognitive...... decline are not interchangeable when used in memory clinics and the application of different scales in previous studies is an important factor as to why studies show variability in the association between subjective cognitive decline and background data and/or clinical results. Careful consideration...

  8. Results of a survey on accident and safety analysis codes, benchmarks, verification and validation methods

    International Nuclear Information System (INIS)

    Lee, A.G.; Wilkin, G.B.

    1996-03-01

    During the 'Workshop on R and D needs' at the 3rd Meeting of the International Group on Research Reactors (IGORR-III), the participants agreed that it would be useful to compile a survey of the computer codes and nuclear data libraries used in accident and safety analyses for research reactors and the methods various organizations use to verify and validate their codes and libraries. Five organizations, Atomic Energy of Canada Limited (AECL, Canada), China Institute of Atomic Energy (CIAE, People's Republic of China), Japan Atomic Energy Research Institute (JAERI, Japan), Oak Ridge National Laboratories (ORNL, USA), and Siemens (Germany) responded to the survey. The results of the survey are compiled in this report. (author) 36 refs., 3 tabs

  9. Radioactive indium labelling of the figured elements of blood. Method, results, applications

    International Nuclear Information System (INIS)

    Ducassou, D.; Nouel, J.P.

    Following the work of Thakur et al., the authors became interested in red corpuscle, leucocyte and platelet labelling with indium-111 or indium-113m (8-hydroxyquinolein-indium). The technique described was modified to ease the labelling of the figured elements of blood. The chelate is prepared by simple contact, at room temperature, of indium-111 or -113m chloride with water-soluble 8-hydroxyquinolein sulphate in the presence of 0.2M TRIS buffer. The chosen figured element, suspended in physiological serum, is added directly to the resulting solution, the platelets and leucocytes being separated out beforehand by differential centrifugation. While it gives results similar to those of Thakur et al., the proposed method avoids the chloroform extraction of the radioactive chelate and the use of alcohol, which is liable to impair the platelet aggregation capacity. [fr]

  10. Hydrogen storage in single-walled carbon nanotubes: methods and results

    International Nuclear Information System (INIS)

    Poirier, E.; Chahine, R.; Tessier, A.; Cossement, D.; Lafi, L.; Bose, T.K.

    2004-01-01

    We present high-sensitivity gravimetric and volumetric hydrogen sorption measurement systems adapted for in situ conditioning under high temperature and high vacuum. These systems, which allow for precise measurements on small samples and thorough degassing, are used for sorption measurements on carbon nanostructures. We developed one volumetric system for the pressure range 0-1 bar, and two gravimetric systems for 0-1 bar and 0-100 bars. The use of both gravimetric and volumetric methods allows for cross-checking of the results. The accuracy of the systems has been determined from hydrogen absorption measurements on palladium. The accuracies of the 0-1 bar volumetric and gravimetric systems are about 10 μg and 20 μg respectively. The accuracy of the 0-100 bars gravimetric system is about 20 μg. Hydrogen sorption measurements on single-walled carbon nanotubes (SWNTs) and metal-incorporated SWNTs are presented. (author)

  11. Method validation in plasma source optical emission spectroscopy (ICP-OES) - From samples to results

    International Nuclear Information System (INIS)

    Pilon, Fabien; Vielle, Karine; Birolleau, Jean-Claude; Vigneau, Olivier; Labet, Alexandre; Arnal, Nadege; Adam, Christelle; Camilleri, Virginie; Amiel, Jeanine; Granier, Guy; Faure, Joel; Arnaud, Regine; Beres, Andre; Blanchard, Jean-Marc; Boyer-Deslys, Valerie; Broudic, Veronique; Marques, Caroline; Augeray, Celine; Bellefleur, Alexandre; Bienvenu, Philippe; Delteil, Nicole; Boulet, Beatrice; Bourgarit, David; Brennetot, Rene; Fichet, Pascal; Celier, Magali; Chevillotte, Rene; Klelifa, Aline; Fuchs, Gilbert; Le Coq, Gilles; Mermet, Jean-Michel

    2017-01-01

    Even though ICP-OES (Inductively Coupled Plasma - Optical Emission Spectroscopy) is now a routine analysis technique, requirements for measuring processes impose complete control and mastery of the operating process and of the associated quality management system. The aim of this (collective) book is to guide the analyst throughout the measurement validation procedure and to help him guarantee the mastery of its different steps: administrative and physical management of samples in the laboratory, preparation and treatment of the samples before measuring, qualification and monitoring of the apparatus, instrument setting and calibration strategy, and exploitation of results in terms of accuracy, reliability and data covariance (with the practical determination of the accuracy profile). The most recent terminology is used in the book, and numerous examples and illustrations are given in order to aid understanding and to help with the elaboration of method validation documents.

  12. Labelling of blood cells with radioactive indium-201: method, results, indications

    International Nuclear Information System (INIS)

    Ducassou, D.; Brendel, A.; Nouel, J.P.

    1978-01-01

    A modification of the method of Thakur et al. for labelling polynuclear cells with the 8-hydroxyquinolein-indium complex, utilising the water-soluble sulphate of the substance, was applied. The labelling procedure gave a yield over 98% with erythrocytes and over 80% with platelets and polynuclear cells, using at least 1 x 10⁸ plasma-free cells. The functional capacity of the labelled cells remained unaltered. After injection of double-labelled (¹¹¹In, ⁵¹Cr) red cells, the correlation of values for the red cell volume amounted to r = 0.98 (n=20); red cell life-span measurements gave comparable results in 5 patients. After injecting labelled platelets, a life-span between 6.5 and 11 days was measured. Scintigraphic visualisation of pulmonary embolism was obtained 30 minutes after injecting labelled platelets. Injection of labelled polynuclear cells allows life-span measurements as well as detection of abscesses. (author)

  13. Pion emission from the T2K replica target: method, results and application

    CERN Document Server

    Abgrall, N.; Anticic, T.; Antoniou, N.; Argyriades, J.; Baatar, B.; Blondel, A.; Blumer, J.; Bogomilov, M.; Bravar, A.; Brooks, W.; Brzychczyk, J.; Bubak, A.; Bunyatov, S.A.; Busygina, O.; Christakoglou, P.; Chung, P.; Czopowicz, T.; Davis, N.; Debieux, S.; Di Luise, S.; Dominik, W.; Dumarchez, J.; Dynowski, K.; Engel, R.; Ereditato, A.; Esposito, L.S.; Feofilov, G.A.; Fodor, Z.; Ferrero, A.; Fulop, A.; Gazdzicki, M.; Golubeva, M.; Grabez, B.; Grebieszkow, K.; Grzeszczuk, A.; Guber, F.; Haesler, A.; Hakobyan, H.; Hasegawa, T.; Idczak, R.; Igolkin, S.; Ivanov, Y.; Ivashkin, A.; Kadija, K.; Kapoyannis, A.; Katrynska, N.; Kielczewska, D.; Kikola, D.; Kirejczyk, M.; Kisiel, J.; Kiss, T.; Kleinfelder, S.; Kobayashi, T.; Kochebina, O.; Kolesnikov, V.I.; Kolev, D.; Kondratiev, V.P.; Korzenev, A.; Kowalski, S.; Krasnoperov, A.; Kuleshov, S.; Kurepin, A.; Lacey, R.; Larsen, D.; Laszlo, A.; Lyubushkin, V.V.; Mackowiak-Pawlowska, M.; Majka, Z.; Maksiak, B.; Malakhov, A.I.; Maletic, D.; Marchionni, A.; Marcinek, A.; Maris, I.; Marin, V.; Marton, K.; Matulewicz, T.; Matveev, V.; Melkumov, G.L.; Messina, M.; Mrowczynski, St.; Murphy, S.; Nakadaira, T.; Nishikawa, K.; Palczewski, T.; Palla, G.; Panagiotou, A.D.; Paul, T.; Peryt, W.; Petukhov, O.; Planeta, R.; Pluta, J.; Popov, B.A.; Posiadala, M.; Pulawski, S.; Puzovic, J.; Rauch, W.; Ravonel, M.; Renfordt, R.; Robert, A.; Rohrich, D.; Rondio, E.; Rossi, B.; Roth, M.; Rubbia, A.; Rustamov, A.; Rybczynski, M.; Sadovsky, A.; Sakashita, K.; Savic, M.; Sekiguchi, T.; Seyboth, P.; Shibata, M.; Sipos, M.; Skrzypczak, E.; Slodkowski, M.; Staszel, P.; Stefanek, G.; Stepaniak, J.; Strabel, C.; Strobele, H.; Susa, T.; Szuba, M.; Tada, M.; Taranenko, A.; Tereshchenko, V.; Tolyhi, T.; Tsenov, R.; Turko, L.; Ulrich, R.; Unger, M.; Vassiliou, M.; Veberic, D.; Vechernin, V.V.; Vesztergombi, G.; Wilczek, A.; Wlodarczyk, Z.; Wojtaszek-Szwarc, A.; Wyszynski, O.; Zambelli, L.; Zipper, W.; Hartz, M.; Ichikawa, A.K.; Kubo, H.; Marino, A.D.; Matsuoka, K.; Murakami, A.; Nakaya, T.; Suzuki, K.; Yuan, T.; Zimmerman, E.D.

    2013-01-01

    The T2K long-baseline neutrino oscillation experiment in Japan needs precise predictions of the initial neutrino flux. The highest precision can be reached based on detailed measurements of hadron emission from the same target as used by T2K exposed to a proton beam of the same kinetic energy of 30 GeV. The corresponding data were recorded in 2007-2010 by the NA61/SHINE experiment at the CERN SPS using a replica of the T2K graphite target. In this paper details of the experiment, data taking, data analysis method and results from the 2007 pilot run are presented. Furthermore, the application of the NA61/SHINE measurements to the predictions of the T2K initial neutrino flux is described and discussed.

  14. MLFMA-accelerated Nyström method for ultrasonic scattering - Numerical results and experimental validation

    Science.gov (United States)

    Gurrala, Praveen; Downs, Andrew; Chen, Kun; Song, Jiming; Roberts, Ron

    2018-04-01

    Full wave scattering models for ultrasonic waves are necessary for the accurate prediction of voltage signals received from complex defects/flaws in practical nondestructive evaluation (NDE) measurements. We propose the high-order Nyström method accelerated by the multilevel fast multipole algorithm (MLFMA) as an improvement to the state-of-the-art full-wave scattering models that are based on boundary integral equations. We present numerical results demonstrating improvements in simulation time and memory requirement. In particular, we demonstrate the need for higher-order geometry and field approximation in modeling NDE measurements. We also illustrate the importance of full-wave scattering models using experimental pulse-echo data from a spherical inclusion in a solid, which cannot be modeled accurately by approximation-based scattering models such as the Kirchhoff approximation.

  15. Lagrangian methods for blood damage estimation in cardiovascular devices--How numerical implementation affects the results.

    Science.gov (United States)

    Marom, Gil; Bluestein, Danny

    2016-01-01

    This paper evaluated the influence of various numerical implementation assumptions on predicting blood damage in cardiovascular devices using Lagrangian methods with Eulerian computational fluid dynamics. The implementation assumptions that were tested included various seeding patterns, a stochastic walk model, and simplified trajectory calculations with pathlines. Post-processing implementation options that were evaluated included single-passage and repeated-passage stress accumulation and time averaging. This study demonstrated that the implementation assumptions can significantly affect the resulting stress accumulation, i.e., the blood damage model predictions. Careful consideration should be taken in the use of Lagrangian models. Ultimately, the appropriate assumptions should be chosen based on the physics of the specific case, and sensitivity analyses similar to the ones presented here should be employed.
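
    As a minimal illustration of the quantity at stake (a linear stress accumulation along a single pathline is assumed here; the implementation variants compared in the paper differ precisely in how such sums are seeded, repeated and averaged):

        import numpy as np

        def stress_accumulation(tau, t):
            """Accumulated stress along one particle pathline: the integral of the
            instantaneous scalar stress tau(t) over the particle's residence time,
            evaluated with the trapezoidal rule on sampled trajectory data."""
            return np.trapz(np.asarray(tau, dtype=float), np.asarray(t, dtype=float))

        # Example: a particle sampled over 50 ms under a fluctuating stress [Pa].
        t = np.linspace(0.0, 0.05, 51)
        tau = 10.0 + 5.0 * np.sin(2.0 * np.pi * 50.0 * t)
        print(stress_accumulation(tau, t))  # accumulated stress in Pa*s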

  16. Lineage range estimation method reveals fine-scale endemism linked to Pleistocene stability in Australian rainforest herpetofauna.

    Science.gov (United States)

    Rosauer, Dan F; Catullo, Renee A; VanDerWal, Jeremy; Moussalli, Adnan; Moritz, Craig

    2015-01-01

    Areas of suitable habitat for species and communities have arisen, shifted, and disappeared with Pleistocene climate cycles, and through this shifting landscape, current biodiversity has found paths to the present. Evolutionary refugia, areas of relative habitat stability in this shifting landscape, support persistence of lineages through time, and are thus crucial to the accumulation and maintenance of biodiversity. Areas of endemism are indicative of refugial areas where diversity has persisted, and endemism of intraspecific lineages in particular is strongly associated with late-Pleistocene habitat stability. However, it remains a challenge to consistently estimate the geographic ranges of intraspecific lineages and thus infer phylogeographic endemism, because spatial sampling for genetic analyses is typically sparse relative to species records. We present a novel technique to model the geographic distribution of intraspecific lineages, which is informed by the ecological niche of a species and known locations of its constituent lineages. Our approach allows for the effects of isolation by unsuitable habitat, and captures uncertainty in the extent of lineage ranges. Applying this method to the arc of rainforest areas spanning 3500 km in eastern Australia, we estimated lineage endemism for 53 species of rainforest dependent herpetofauna with available phylogeographic data. We related endemism to the stability of rainforest habitat over the past 120,000 years and identified distinct concentrations of lineage endemism that can be considered putative refugia. These areas of lineage endemism are strongly related to historical stability of rainforest habitat, after controlling for the effects of current environment. In fact, a dynamic stability model that allows movement to track suitable habitat over time was the most important factor in explaining current patterns of endemism. The techniques presented here provide an objective, practical method for estimating

  17. INTERDISCIPLINARITY IN PUBLIC SPACE PARTICIPATIVE PROJECTS: METHODS AND RESULTS IN PRACTICE AND TEACHING

    Directory of Open Access Journals (Sweden)

    Pedro Brandão

    2015-06-01

    • In the development of design practice and studio teaching methods. We shall see in this paper how interdisciplinary approaches correspond to new and complex urban transformations, focusing on the importance of actors' interaction processes, combining professional and non-professional knowledge and theory-practice relations. We therefore aim at a deepening of the public space area of knowledge under the growing complexity of urban life, seeing it as a base for further development of collaborative projects and their implications for community empowerment and urban governance at the local level. The motivations of this line of work persist in several ongoing research projects, aiming to: - understand public space as a cohesion factor both in urban life and urban form; - manage processes and strategies as elements of urban transformation; - stimulate the understanding of actors' roles in urban design practices; - favour the questioning of emerging aspects of urban space production. The paper presents and analyses processes, methods and results from civic participation projects developed in the neighbourhood of Barò de Viver (Barcelona) and in the District of Marvila (Lisbon). In the first case, a long process initiated in 2004 and partially completed in 2011, neighbours developed the projects "Memory Wall" and Ciutat d'Asuncion Promenade as part of identity construction in public space, in collaboration with a team of facilitators from the CrPolis group. In the second case, different participatory processes dating from 2001 and 2003 resulted in the implementation of a specific identity urban brand and communication system, with an ongoing project of "maps" construction according to the neighbours' perception and representation systems. We may conclude that processes of urban governance require more active participation of citizens in projects regarding the improvement of quality of life. At the same time, the implementation of these processes requires a clear

  18. Quantifying viruses and bacteria in wastewater—Results, interpretation methods, and quality control

    Science.gov (United States)

    Francy, Donna S.; Stelzer, Erin A.; Bushon, Rebecca N.; Brady, Amie M.G.; Mailot, Brian E.; Spencer, Susan K.; Borchardt, Mark A.; Elber, Ashley G.; Riddell, Kimberly R.; Gellner, Terry M.

    2011-01-01

    Membrane bioreactors (MBR), used for wastewater treatment in Ohio and elsewhere in the United States, have pore sizes small enough to theoretically reduce concentrations of protozoa and bacteria, but not viruses. Sampling for viruses in wastewater is seldom done and not required. Instead, the bacterial indicators Escherichia coli (E. coli) and fecal coliforms are the required microbial measures of effluents for wastewater-discharge permits. Information is needed on the effectiveness of MBRs in removing human enteric viruses from wastewaters, particularly as compared to conventional wastewater treatment before and after disinfection. A total of 73 regular and 28 quality-control (QC) samples were collected at three MBR and two conventional wastewater plants in Ohio during 23 regular and 3 QC sampling trips in 2008-10. Samples were collected at various stages in the treatment processes and analyzed for the bacterial indicators E. coli, fecal coliforms, and enterococci by membrane filtration; for somatic and F-specific coliphage by the single agar layer (SAL) method; for adenovirus, enterovirus, norovirus GI and GII, rotavirus, and hepatitis A virus by molecular methods; and for viruses by cell culture. While addressing the main objective of the study (comparing removal of viruses and bacterial indicators in MBR and conventional plants), it was realized that work was needed to identify data analysis and quantification methods for interpreting enteric virus and QC data. Therefore, methods for quantifying viruses, qualifying results, and applying QC data to interpretations are described in this report. During each regular sampling trip, samples were collected (1) before conventional or MBR treatment (post-preliminary), (2) after secondary or MBR treatment (post-secondary or post-MBR), (3) after tertiary treatment (one conventional plant only), and (4) after disinfection (post-disinfection). Glass-wool fiber filtration was used to concentrate enteric viruses from large volumes, and small

  19. Impact of Costing and Cost Analysis Methods on the Result of the Period: Methods Based on Partial Cost Theory

    Directory of Open Access Journals (Sweden)

    Toma Maria

    2017-01-01

    Looking from this perspective, in the present paper our objectives are to present the cost calculation methods based on partial costs (direct-costing on the product or evolved direct-costing) and to compare them in order to determine the effect they have on the result of the period.

  20. Benchmarking sample preparation/digestion protocols reveals tube-gel being a fast and repeatable method for quantitative proteomics.

    Science.gov (United States)

    Muller, Leslie; Fornecker, Luc; Van Dorsselaer, Alain; Cianférani, Sarah; Carapito, Christine

    2016-12-01

    Sample preparation, typically by in-solution or in-gel approaches, has a strong influence on the accuracy and robustness of quantitative proteomics workflows. The major benefit of in-gel procedures is their compatibility with detergents (such as SDS) for protein solubilization. However, SDS-PAGE is a time-consuming approach. Tube-gel (TG) preparation circumvents this drawback as it involves directly trapping the sample in a polyacrylamide gel matrix without electrophoresis. We report here the first global label-free quantitative comparison between TG, stacking gel (SG), and basic liquid digestion (LD). A series of UPS1 standard mixtures (at 0.5, 1, 2.5, 5, 10, and 25 fmol) were spiked in a complex yeast lysate background. TG preparation allowed more yeast proteins to be identified than did the SG and LD approaches, with mean numbers of 1979, 1788, and 1323 proteins identified, respectively. Furthermore, the TG method proved equivalent to SG and superior to LD in terms of the repeatability of the subsequent experiments, with mean CV for yeast protein label-free quantifications of 7, 9, and 10%. Finally, known variant UPS1 proteins were successfully detected in the TG-prepared sample within a complex background with high sensitivity. All the data from this study are accessible on ProteomeXchange (PXD003841). © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
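
    For reference, the repeatability figures quoted above are coefficients of variation; a minimal sketch of this standard computation (the sample values are illustrative, not taken from the study):

        import numpy as np

        def cv_percent(values):
            """Coefficient of variation (%) of one protein's label-free
            quantification across replicate sample preparations."""
            x = np.asarray(values, dtype=float)
            return 100.0 * x.std(ddof=1) / x.mean()

        # Example: three replicate intensities for one yeast protein.
        print(round(cv_percent([1.00e6, 1.08e6, 0.95e6]), 1))  # ~6.5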

  1. Sedimentation rates of lake Haruna in the past 200 years as revealed by tephrochronology, 210Pb and 137Cs methods

    International Nuclear Information System (INIS)

    Ishiwatari, Ryoshi; Uchida, Kuniko; Nagasaka, Hiromitsu; Tsukamoto, Sumiko

    2010-01-01

    A 90 cm sediment core (HAR 99A) from Lake Haruna, Gumma Prefecture, Japan was dated by the tephrochronology, lead-210 and cesium-137 methods and was compared stratigraphically with the cores obtained in 1966 (HAR 96B) and 1971 (HAR 71). For the HAR 99A core, the 24-26 cm depth layer was dated to AD 1963 by ¹³⁷Cs. The tephra layer at 62-66 cm depth was identified as volcanic ash from the Asama volcano eruption (Asama-A tephra: As-A) in AD 1783. The average mass sedimentation rate (AMSR) for 1963 to 1999 (0-26 cm depth) is 0.050 g cm⁻² yr⁻¹ and that for 1783 to 1963 (25-62 cm depth) is 0.033 g cm⁻² yr⁻¹. The AMSR for the 0-62 cm depth obtained by ²¹⁰Pb ranges between 0.052 and 0.058 g cm⁻² yr⁻¹. In addition, it is proposed that the previous assignment of As-B (AD 1108) to a tephra layer at 40-50 cm depth of the HAR 71 core should be changed to As-A tephra (AD 1783). (author)

  2. A method for combining search coil and fluxgate magnetometer data to reveal finer structures in reconnection physics

    Science.gov (United States)

    Argall, M. R.; Caide, A.; Chen, L.; Torbert, R. B.

    2012-12-01

    Magnetometers have been used to measure terrestrial and extraterrestrial magnetic fields in space exploration ever since Sputnik 3. Modern space missions, such as Cluster, RBSP, and MMS, incorporate both search coil magnetometers (SCMs) and fluxgate magnetometers (FGMs) in their instrument suites: FGMs work well at low frequencies while SCMs perform better at high frequencies. In analyzing the noise floor of these instruments, a cross-over region is apparent around 0.3-1.5 Hz. The satellite separation of MMS and the average speeds of field convection and plasma flows at the subsolar magnetopause make this a crucial range for the upcoming MMS mission. The method presented here combines the signals from SCM and FGM by taking a weighted average of both in this frequency range in order to draw out key features, such as narrow current sheet structures, that would otherwise not be visible. The technique is applied to burst mode Cluster data for reported magnetopause and magnetotail reconnection events to demonstrate the power of the combined data. It is also applied to data from the EMFISIS instrument on the RBSP mission. The authors acknowledge and thank the FGM and STAFF teams for the use of their data from the CLUSTER Active Archive.
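
    A minimal sketch of the kind of frequency-domain weighted averaging described (the linear ramp across the 0.3-1.5 Hz crossover is an assumption made for this sketch; the production method relies on the instruments' calibrated transfer functions):

        import numpy as np

        def merge_fgm_scm(b_fgm, b_scm, fs, f_lo=0.3, f_hi=1.5):
            """Weighted average, in the frequency domain, of fluxgate (FGM, accurate
            at low frequencies) and search-coil (SCM, accurate at high frequencies)
            measurements of the same field component, sampled at fs Hz on a common
            time base."""
            n = len(b_fgm)
            f = np.fft.rfftfreq(n, d=1.0 / fs)
            # Weight ramps from pure FGM below f_lo to pure SCM above f_hi.
            w = np.clip((f - f_lo) / (f_hi - f_lo), 0.0, 1.0)
            merged = (1.0 - w) * np.fft.rfft(b_fgm) + w * np.fft.rfft(b_scm)
            return np.fft.irfft(merged, n=n)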

  3. Studies to reveal the nature of interactions between catalase and curcumin using computational methods and optical techniques.

    Science.gov (United States)

    Mofidi Najjar, Fayezeh; Ghadari, Rahim; Yousefi, Reza; Safari, Naser; Sheikhhasani, Vahid; Sheibani, Nader; Moosavi-Movahedi, Ali Akbar

    2017-02-01

    Curcumin is an important antioxidant compound, and is widely reported as an effective component for reducing the complications of many diseases. However, the detailed mechanisms of its activity remain poorly understood. We found that curcumin can significantly increase the catalase activity of BLC (bovine liver catalase). The mechanism of curcumin action was investigated using a computational method. We suggest that curcumin may activate BLC by modifying the bottleneck of its narrow channel. The molecular dynamics simulation data showed that placing curcumin on the enzyme structure can increase the size of the bottleneck in the narrow channel of BLC and readily allow the access of substrate to the active site. Because of the increased distance between the amino acids of the bottleneck in the presence of curcumin, the entrance space for the substrate increased from 250 Å³ to 440 Å³. In addition, the increase in intrinsic fluorescence emission of BLC in the presence of curcumin demonstrated changes in the tertiary structure of catalase, and the possibility of less quenching. We also used circular dichroism (CD) spectropolarimetry to determine how curcumin may alter the enzyme's secondary structure. Catalase spectra in the presence of various concentrations of curcumin showed an increase in the amount of α-helix content. Copyright © 2016 Elsevier B.V. All rights reserved.
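
    As a quick reading of the numbers above, the reported widening of the channel entrance corresponds to

    \[ \frac{440 - 250}{250} \approx 76\% \]

    more volume available for substrate access in the presence of curcumin.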

  4. A New Method for a Virtue-Based Responsible Conduct of Research Curriculum: Pilot Test Results.

    Science.gov (United States)

    Berling, Eric; McLeskey, Chet; O'Rourke, Michael; Pennock, Robert T

    2018-02-03

    Drawing on Pennock's theory of scientific virtues, we are developing an alternative curriculum for training scientists in the responsible conduct of research (RCR) that emphasizes internal values rather than externally imposed rules. This approach focuses on the virtuous characteristics of scientists that lead to responsible and exemplary behavior. We have been pilot-testing one element of such a virtue-based approach to RCR training by conducting dialogue sessions, modeled upon the approach developed by Toolbox Dialogue Initiative, that focus on a specific virtue, e.g., curiosity and objectivity. During these structured discussions, small groups of scientists explore the roles they think the focus virtue plays and should play in the practice of science. Preliminary results have shown that participants strongly prefer this virtue-based model over traditional methods of RCR training. While we cannot yet definitively say that participation in these RCR sessions contributes to responsible conduct, these pilot results are encouraging and warrant continued development of this virtue-based approach to RCR training.

  5. First characterization of the expiratory flow increase technique: method development and results analysis

    International Nuclear Information System (INIS)

    Maréchal, L; Barthod, C; Jeulin, J C

    2009-01-01

    This study provides an important contribution to the definition of the expiratory flow increase technique (EFIT). So far, no measuring means were suited to assessing the manual EFIT performed on infants. The proposed method aims at objectively defining the EFIT based on the quantification of pertinent cognitive parameters used by physiotherapists when practicing. We designed and realized customized instrumented gloves endowed with pressure and displacement sensors, and the associated electronics and software. This new system is specific to the manoeuvre and to the user, and innocuous for the patient. Data were collected and analysed on infants with bronchiolitis managed by an expert physiotherapist. The analysis presented here was performed on a group of seven subjects (mean age: 6.1 months, SD: 1.1; mean chest circumference: 44.8 cm, SD: 1.9). The results are consistent with the physiotherapist's tactility. In spite of the inevitable variability of measurements on infants, repeatable quantitative data could be reported regarding the manoeuvre characteristics: the magnitudes of the displacements do not exceed 10 mm on either hand; the movement of the thoracic hand is more vertical than the movement of the abdominal hand; the maximum pressure applied with the thoracic hand is about twice that applied with the abdominal hand; and the thrust of the manual compression lasts (590 ± 62) ms. Inter-operator measurements are in progress in order to generalize these results.

  6. Enhancing activated-peroxide formulations for porous materials: Test methods and results

    Energy Technology Data Exchange (ETDEWEB)

    Krauter, Paula [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Tucker, Mark D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Tezak, Matthew S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Boucher, Raymond [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2012-12-01

    During an urban wide-area incident involving the release of a biological warfare agent, the recovery/restoration effort will require extensive resources and will tax the current capabilities of the government and private contractors. In fact, resources may be so limited that decontamination by facility owners/occupants may become necessary, and a simple decontamination process and material should be available for this use. One potential process for use by facility owners/occupants would be a liquid sporicidal decontaminant, such as pH-amended bleach or activated peroxide, and simple application devices. While pH-amended bleach is currently the recommended low-tech decontamination solution, a less corrosive and toxic decontaminant is desirable. The objective of this project is to provide an operational assessment of an alternative to chlorine bleach for low-tech decontamination applications: activated hydrogen peroxide. This report provides the methods and results for the activated-peroxide evaluation experiments. The results suggest that the efficacy of an activated-peroxide decontaminant is similar to that of pH-amended bleach on many common materials.

  7. Energy Conservation Program Evaluation : Practical Methods, Useful Results : Proceedings of the 1987 Conference.

    Energy Technology Data Exchange (ETDEWEB)

    Argonne National Laboratory; International Conference on Energy Conservation Program Evaluation (3rd : 1987 : Chicago, ILL.)

    1987-01-01

    The success of cutting-edge evaluation methodologies depends on our ability to merge, manage, and maintain huge amounts of data. Equally important is presenting results of the subsequent analysis in a meaningful way. These topics are addressed at this session. The considerable amounts of data that have been collected about energy conservation programs are rarely used by other researchers, either because they are not available in computerized form or, if they are, because of the difficulties of interpreting someone else's data, format inconsistencies, incompatibility of computers, lack of documentation, data entry errors, and obtaining data use agreements. Even census, RECS, and AHS data can be best used only by a researcher who is intimately familiar with them. Once the data have been accessed and analyzed, the results need to be put in a format that can be readily understood by others. This is a particularly difficult task when submetered data is the basis of the analysis. Stoops and Gilbride will demonstrate their methods of using off-the-shelf graphics software to illustrate complex hourly data from nonresidential buildings.

  8. Automatically classifying sentences in full-text biomedical articles into Introduction, Methods, Results and Discussion.

    Science.gov (United States)

    Agarwal, Shashank; Yu, Hong

    2009-12-01

    Biomedical texts can typically be represented by four rhetorical categories: Introduction, Methods, Results and Discussion (IMRAD). Classifying sentences into these categories can benefit many other text-mining tasks. Although many studies have applied different approaches for automatically classifying sentences in MEDLINE abstracts into the IMRAD categories, few have explored the classification of sentences that appear in full-text biomedical articles. We first evaluated whether sentences in full-text biomedical articles could be reliably annotated into the IMRAD format and then explored different approaches for automatically classifying these sentences into the IMRAD categories. Our results show an overall annotation agreement of 82.14% with a Kappa score of 0.756. The best classification system is a multinomial naïve Bayes classifier trained on manually annotated data that achieved 91.95% accuracy and an average F-score of 91.55%, which is significantly higher than baseline systems. A web version of this system is available online at http://wood.ims.uwm.edu/full_text_classifier/.
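
    A minimal sketch of a multinomial naive Bayes sentence classifier of the kind the study found to perform best (the toy sentences, the bag-of-words features and the scikit-learn pipeline are illustrative assumptions, not the authors' implementation):

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline

        # Toy stand-ins for manually annotated full-text sentences.
        sentences = [
            "Little is known about the role of X in Y.",            # Introduction
            "Samples were incubated at 37 degrees for two hours.",  # Methods
            "Mean accuracy reached 91.95 percent.",                 # Results
            "These findings suggest a regulatory mechanism.",       # Discussion
        ]
        labels = ["INTRODUCTION", "METHODS", "RESULTS", "DISCUSSION"]

        model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
        model.fit(sentences, labels)
        print(model.predict(["Cells were cultured in DMEM."]))  # predicted category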

  9. Dosimetry methods and results for the former residents of Bikini Atoll

    International Nuclear Information System (INIS)

    Greenhouse, N.A.

    1979-01-01

    The US Government utilized Bikini and Enewetak Atolls in the northern Marshall Islands of Micronesia for atmospheric tests of nuclear explosives in the 1940's and 1950's. The original inhabitants of these atolls were relocated prior to the tests. During the early 1970's, a small but growing population of Marshallese people reinhabited Bikini. Environmental and personnel radiological monitoring programs were begun in 1974 to ensure that doses and dose commitments received by Bikini residents remained within US Federal Radiation Council guidelines. Dramatic increases in ¹³⁷Cs body burdens among the inhabitants between April 1977 and April 1978 may have played a significant role in the government decision to move the 140 Bikinians in residence off of the atoll in August 1978. The average ¹³⁷Cs body burden for the population was 2.3 μCi in April 1978. Several individuals, however, exceeded the maximum permissible body burden of 3 μCi, and some approached 6 μCi. The resultant total dose commitment was less than 200 mrem for the average resident. The average total dose for the mean residence interval of approx. 4.5 years was about 1 rem. The sources of exposure, the probable cause of the unexpected increase in ¹³⁷Cs body burdens, and the methods for calculating radionuclide intake and resultant doses are discussed. Suggestions are offered as to the implications of the most significant exposure pathways for the future inhabitation of Bikini and Enewetak.

  10. Major Results of the OECD BEMUSE (Best Estimate Methods; Uncertainty and Sensitivity Evaluation) Programme

    International Nuclear Information System (INIS)

    Reventos, F.

    2008-01-01

    One of the goals of computer code models of Nuclear Power Plants (NPPs) is to demonstrate that these plants are designed to respond safely to postulated accidents. Models and codes are an approximation of the real physical behaviour occurring during a hypothetical transient, and the data used to build these models are also known only with a certain accuracy. Therefore code predictions are uncertain. The BEMUSE programme is focussed on the application of uncertainty methodologies to large break LOCAs. The programme intends to evaluate the practicability, quality and reliability of best-estimate methods including uncertainty evaluations in applications relevant to nuclear reactor safety, to develop common understanding, and to promote/facilitate their use by regulatory bodies and industry. In order to fulfil its objectives, BEMUSE is organized into two steps and six phases. The first step is devoted to the complete analysis of a LB-LOCA (L2-5) in an experimental facility (LOFT), while the second step refers to an actual Nuclear Power Plant. Both steps provide results on thermal-hydraulic best-estimate simulation as well as uncertainty and sensitivity evaluation. At the time this paper was prepared, phases I, II and III were fully completed and the corresponding reports had been issued. The Phase IV draft report is currently being reviewed while participants are working on Phase V developments. Phase VI consists in preparing the final status report, which will summarize the most relevant results of the whole programme.

  11. Repetitive transcranial magnetic stimulation as an adjuvant method in the treatment of depression: Preliminary results

    Directory of Open Access Journals (Sweden)

    Jovičić Milica

    2014-01-01

    Full Text Available Introduction. Repetitive transcranial magnetic stimulation (rTMS) is a method of brain stimulation which is increasingly used in both clinical practice and research. Studies to date have pointed out a potential antidepressive effect of rTMS, but definitive superiority over placebo has not yet been confirmed. Objective. The aim of the study was to examine the effect of rTMS as an adjuvant treatment with antidepressants during 18 weeks of evaluation starting from the initial application of the protocol. Methods. Four patients with the diagnosis of moderate/severe major depression were included in the study. The protocol involved 2000 stimuli per day (rTMS frequency of 10 Hz, intensity of 120% of motor threshold) administered over the left dorsolateral prefrontal cortex (DLPFC) for 15 days. Subjective and objective depressive symptoms were measured before the initiation of rTMS and repeatedly evaluated at weeks 3, 6, 12 and 18 from the beginning of the stimulation. Results. After completion of the rTMS protocol, two patients demonstrated a reduction of depressive symptoms that was sustained throughout the 15-week follow-up period. One patient showed a tendency towards remission during the first 12 weeks of the study, but relapsed in week 18. One patient showed no significant symptom reduction at any point of follow-up. Conclusion. Preliminary findings suggest that rTMS has good tolerability and can be efficient in accelerating the effect of antidepressants, particularly in individuals with shorter duration of depressive episodes and moderate symptom severity. [Project of the Ministry of Science of the Republic of Serbia, nos. III41029 and ON175090]

  12. The Vermont Oxford Neonatal Encephalopathy Registry: rationale, methods, and initial results

    Science.gov (United States)

    2012-01-01

    Background In 2006, the Vermont Oxford Network (VON) established the Neonatal Encephalopathy Registry (NER) to characterize infants born with neonatal encephalopathy, describe evaluations and medical treatments, monitor hypothermic therapy (HT) dissemination, define clinical research questions, and identify opportunities for improved care. Methods Eligible infants were ≥ 36 weeks with seizures, altered consciousness (stupor, coma) during the first 72 hours of life, a 5 minute Apgar score of ≤ 3, or receiving HT. Infants with central nervous system birth defects were excluded. Results From 2006-2010, 95 centers registered 4232 infants. Of those, 59% suffered a seizure, 50% had a 5 minute Apgar score of ≤ 3, 38% received HT, and 18% had stupor/coma documented on neurologic exam. Some infants met more than one eligibility criterion. Only 53% had a cord gas obtained and only 63% had a blood gas obtained within 24 hours of birth, important components for determining HT eligibility. Sixty-four percent received ventilator support, 65% received anticonvulsants, 66% had a head MRI, 23% had a cranial CT, 67% had a full-channel electroencephalogram (EEG) and 33% an amplitude-integrated EEG. Of all infants, 87% survived. Conclusions The VON NER describes the heterogeneous population of infants with NE, the subset that received HT, their patterns of care, and outcomes. The optimal routine care of infants with neonatal encephalopathy is unknown. The registry method is well suited to identify opportunities for improvement in the care of infants affected by NE and to study interventions such as HT as they are implemented in clinical practice. PMID:22726296

  13. Monitoring Spongospora subterranea Development in Potato Roots Reveals Distinct Infection Patterns and Enables Efficient Assessment of Disease Control Methods.

    Directory of Open Access Journals (Sweden)

    Tamilarasan Thangavel

    Full Text Available Spongospora subterranea is responsible for significant potato root and tuber disease globally. Study of this obligate (non-culturable) pathogen that infects below-ground plant parts is technically difficult. The capacity to measure the dynamics and patterns of root infections can greatly assist in determining the efficacy of control treatments on disease progression. This study used qPCR and histological analysis in time-course experiments to measure temporal patterns of pathogen multiplication and disease development in potato (and tomato) roots and tubers. Effects of delayed initiation of infection and fungicidal seed tuber and soil treatments were assessed. This study found roots at all plant developmental ages were susceptible to infection but that delaying infection significantly reduced pathogen content and resultant disease at final harvest. The pathogen was first detected in roots 15-20 days after inoculation (DAI) and the presence of zoosporangia noted 15-45 DAI. Following initial infection, pathogen content in roots increased at a similar rate regardless of plant age at inoculation. All fungicide treatments (except soil-applied mancozeb, which had a variable response) suppressed pathogen multiplication and root and tuber disease. In contrast to delayed inoculation, the fungicide treatments slowed disease progress (rate) rather than delaying onset of infection. Trials under suboptimal temperatures for disease expression provided valuable data on root infection rate, demonstrating the robustness of monitoring root infection. These results provide an early measure of the efficacy of control treatments and indicate two possible patterns of disease suppression by either delayed initiation of infection which then proceeds at a similar rate or diminished epidemic rate.

  14. Monitoring Spongospora subterranea Development in Potato Roots Reveals Distinct Infection Patterns and Enables Efficient Assessment of Disease Control Methods.

    Science.gov (United States)

    Thangavel, Tamilarasan; Tegg, Robert S; Wilson, Calum R

    2015-01-01

    Spongospora subterranea is responsible for significant potato root and tuber disease globally. Study of this obligate (non-culturable) pathogen that infects below-ground plant parts is technically difficult. The capacity to measure the dynamics and patterns of root infections can greatly assist in determining the efficacy of control treatments on disease progression. This study used qPCR and histological analysis in time-course experiments to measure temporal patterns of pathogen multiplication and disease development in potato (and tomato) roots and tubers. Effects of delayed initiation of infection and fungicidal seed tuber and soil treatments were assessed. This study found roots at all plant developmental ages were susceptible to infection but that delaying infection significantly reduced pathogen content and resultant disease at final harvest. The pathogen was first detected in roots 15-20 days after inoculation (DAI) and the presence of zoosporangia noted 15-45 DAI. Following initial infection pathogen content in roots increased at a similar rate regardless of plant age at inoculation. All fungicide treatments (except soil-applied mancozeb which had a variable response) suppressed pathogen multiplication and root and tuber disease. In contrast to delayed inoculation, the fungicide treatments slowed disease progress (rate) rather than delaying onset of infection. Trials under suboptimal temperatures for disease expression provided valuable data on root infection rate, demonstrating the robustness of monitoring root infection. These results provide an early measure of the efficacy of control treatments and indicate two possible patterns of disease suppression by either delayed initiation of infection which then proceeds at a similar rate or diminished epidemic rate.
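
    One way to picture the two suppression patterns distinguished in the conclusions is a logistic disease-progress curve: delayed initiation shifts the onset time while leaving the epidemic rate untouched, whereas an effective fungicide lowers the rate itself. The model and numbers below are a generic epidemiological sketch in Python, not values fitted to the study's qPCR data.

        import numpy as np

        def disease_progress(t, rate, onset):
            """Logistic disease-progress curve: severity in [0, 1] over time t."""
            return 1.0 / (1.0 + np.exp(-rate * (t - onset)))

        t = np.arange(0, 60, 5)                   # days after inoculation
        baseline = disease_progress(t, 0.15, 25)  # untreated infection
        delayed = disease_progress(t, 0.15, 40)   # delayed initiation, same rate
        slowed = disease_progress(t, 0.07, 25)    # fungicide: diminished epidemic rate
        print(baseline[-1], delayed[-1], slowed[-1])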

  15. Learning Method and Its Influence on Nutrition Study Results Throwing the Ball

    Science.gov (United States)

    Samsudin; Nugraha, Bayu

    2015-01-01

    This study aimed to determine the difference between play-based learning methods and exploratory learning methods with respect to learning outcomes for throwing the ball. In addition, this study also aimed to determine the effect of nutritional status on these two learning methods. This research was conducted at SDN Cipinang Besar Selatan 16 Pagi East…

  16. A MITE-based genotyping method to reveal hundreds of DNA polymorphisms in an animal genome after a few generations of artificial selection

    Directory of Open Access Journals (Sweden)

    Tetreau Guillaume

    2008-10-01

    Full Text Available Abstract Background For most organisms, developing hundreds of genetic markers spanning the whole genome still requires excessive if not unrealistic efforts. In this context, there is an obvious need for methodologies allowing the low-cost, fast and high-throughput genotyping of virtually any species, such as the Diversity Arrays Technology (DArT). One of the crucial steps of the DArT technique is the genome complexity reduction, which allows obtaining a genomic representation characteristic of the studied DNA sample and necessary for subsequent genotyping. In this article, using the mosquito Aedes aegypti as a study model, we describe a new genome complexity reduction method taking advantage of the abundance of miniature inverted repeat transposable elements (MITEs) in the genome of this species. Results Ae. aegypti genomic representations were produced following a two-step procedure: (1) restriction digestion of the genomic DNA and simultaneous ligation of a specific adaptor to compatible ends, and (2) amplification of restriction fragments containing a particular MITE element called Pony using two primers, one annealing to the adaptor sequence and one annealing to a conserved sequence motif of the Pony element. Using this protocol, we constructed a library comprising more than 6,000 DArT clones, of which at least 5.70% were highly reliable polymorphic markers for two closely related mosquito strains separated by only a few generations of artificial selection. Within this dataset, linkage disequilibrium was low, and marker redundancy was evaluated at only 2.86%. Most of the detected genetic variability was observed between the two studied mosquito strains, but individuals of the same strain could still be clearly distinguished. Conclusion The new complexity reduction method was particularly efficient in revealing genetic polymorphisms in Ae. aegypti. Overall, our results testify to the flexibility of the DArT genotyping technique and open new

  17. Finite elements volumes methods: applications to the Navier-Stokes equations and convergence results

    International Nuclear Information System (INIS)

    Emonot, P.

    1992-01-01

    The first chapter describes the equations modeling incompressible fluid flow and gives a quick presentation of the finite volumes method. The second chapter is an introduction to the finite elements volumes method. The box model is described and a method adapted to Navier-Stokes problems is proposed. The third chapter presents an error analysis of the finite elements volumes method for the Laplacian problem and some examples of one-, two-, and three-dimensional calculations. The fourth chapter extends the error analysis of the method to the Navier-Stokes problem

  18. Preliminary results from the application of risk matrix method for safety assessment in industrial radiography

    International Nuclear Information System (INIS)

    Lopez G, A.; Cruz, D.; Truppa, W.; Aravena, M.; Tamayo, B.

    2015-09-01

    Although the uses of ionizing radiation in industry are subject to procedures that provide a high level of safety, experience has shown that equipment failures, human errors, or a combination of both can trigger accidental exposures. Traditionally, radiation safety checks whether these industrial practices (industrial radiography, industrial irradiators, among others) are sufficiently safe to prevent accidental exposures similar to those that have already occurred; it therefore depends on published information and does not always answer questions such as: What other events can occur? What other risks are present? Taking into account the results achieved by the Foro Iberoamericano de Organismos Reguladores Radiologicos y Nucleares, its leading position in the use of risk-analysis techniques in radioactive facilities, and the need for a proactive approach to preventing accidents arising from industrial uses of ionizing radiation, this work applies the risk-analysis technique known as the Risk Matrix to a hypothetical reference entity for the region in which industrial radiography is performed. This paper presents the results of the first stage of this study: the identification of initiating events (IE) and of the barriers that help mitigate the consequences of those IE, which demonstrates the applicability of this method to industrial radiography services as a means of reducing risk to acceptable levels. The fundamental advantage of this methodology is that it can be applied by the professionals working in the service and that it identifies the specific safety weaknesses present, so that resources can be prioritized according to risk reduction. (Author)
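
    As a rough illustration of the kind of screening a risk matrix supports, the sketch below grades an initiating event from an assumed frequency level, the number of effective barriers, and a consequence severity; the level names, the matrix entries, and the helper function are hypothetical placeholders, not the Foro methodology itself.

        # Minimal risk-matrix sketch, assuming 3 frequency levels and a few
        # illustrative (frequency, consequence) -> risk cells.
        RISK_MATRIX = {
            ("high", "very severe"): "very high",
            ("high", "severe"): "high",
            ("medium", "very severe"): "high",
            ("medium", "severe"): "medium",
            ("low", "severe"): "medium",
            ("low", "moderate"): "low",
        }

        def classify_risk(ie_frequency, barriers_ok, consequence):
            """Downgrade the effective frequency one level per robust barrier,
            then look up the (frequency, consequence) cell of the matrix."""
            levels = ["low", "medium", "high"]
            idx = levels.index(ie_frequency)
            idx = max(0, idx - sum(1 for b in barriers_ok if b))
            return RISK_MATRIX.get((levels[idx], consequence), "low")

        # Example: a frequent IE with one effective barrier and severe consequences
        print(classify_risk("high", [True], "severe"))  # -> "medium"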

  19. Methods and results for stress analyses on 14-ton, thin-wall depleted UF6 cylinders

    International Nuclear Information System (INIS)

    Kirkpatrick, J.R.; Chung, C.K.; Frazier, J.L.; Kelley, D.K.

    1996-10-01

    Uranium enrichment operations at the three US gaseous diffusion plants produce depleted uranium hexafluoride (DUF6) as a residual product. At the present time, the inventory of DUF6 in this country is more than half a million tons. The inventory of DUF6 is contained in metal storage cylinders, most of which are located at the gaseous diffusion plants. The principal objective of the project is to ensure the integrity of the cylinders, to prevent an environmental hazard caused by release of the contents of the cylinders into the atmosphere. Another objective is to maintain the cylinders in such a manner that the DUF6 may eventually be converted to a less hazardous material for final disposition. An important task in the DUF6 cylinders management project is determining how much corrosion of the walls can be tolerated before the cylinders are in danger of being damaged during routine handling and shipping operations. Another task is determining how to handle cylinders that have already been damaged in a manner that will minimize the chance that a breach will occur or that the size of an existing breach will be significantly increased. A number of finite element stress analysis (FESA) calculations have been done to analyze the stresses for three conditions: (1) while the cylinder is being lifted, (2) when a cylinder is resting on two cylinders under it in the customary two-tier stacking array, and (3) when a cylinder is resting on its chocks on the ground. Various documents describe some of the results and discuss some of the methods whereby they have been obtained. The objective of the present report is to document as many as possible of the FESA cases done at Oak Ridge for 14-ton thin-wall cylinders, giving results and a description of the calculations in some detail

  20. Methods and introductory results of the Greek national health and nutrition survey - HYDRIA

    Directory of Open Access Journals (Sweden)

    Georgia Martimianaki

    2018-06-01

    Full Text Available Background: According to a large prospective cohort study (with baseline examination in the 1990s) and smaller studies that followed, the population in Greece has been gradually deprived of the favorable morbidity and mortality indices recorded in the 1960s. The HYDRIA survey, conducted in 2013-14, is the first nationally representative survey to collect data related to the health and nutrition of the population in Greece. Methods: The survey sample consists of 4011 males (47%) and females aged 18 years and over. Data collection included interviewer-administered questionnaires on personal characteristics, lifestyle choices, dietary habits and medical history; measurements of somatometry and blood pressure; and blood drawing. Weighting factors were applied to ensure national representativeness of the results. Results: Three out of five adults in Greece reported suffering from a chronic disease, with diabetes mellitus and chronic depression being the most frequent ones among older individuals. The population is also experiencing an overweight/obesity epidemic, since seven out of 10 adults are either overweight or obese. In addition, 40% of the population shows indications of hypertension. Smoking is still common, and among women the prevalence was higher in younger age groups. Social disparities were observed in the prevalence of chronic diseases and mortality risk factors (hypertension, obesity, impaired lipid profile and high blood glucose levels). Conclusion: Excess body weight, hypertension, the smoking habit and the population's limited physical activity are the predominant challenges that public health officials have to deal with in formulating policies and designing actions for the population in Greece.

  1. Methods used by accredited dental specialty programs to advertise faculty positions: results of a national survey.

    Science.gov (United States)

    Ballard, Richard W; Hagan, Joseph L; Armbruster, Paul C; Gallo, John R

    2011-01-01

    The various reasons for the current and projected shortages of dental faculty members in the United States have received much attention. Dental school deans have reported that the top three factors impacting their ability to fill faculty positions are meeting the requirements of the position, lack of response to position announcement, and salary/budget limitations. An electronic survey sent to program directors of specialty programs at all accredited U.S. dental schools inquired about the number of vacant positions, advertised vacant positions, reasons for not advertising, selection of advertising medium, results of advertising, and assistance from professional dental organizations. A total of seventy-three permanently funded full-time faculty positions were reported vacant, with 89.0 percent of these positions having been advertised in nationally recognized professional journals and newsletters. Networking or word-of-mouth was reported as the most successful method for advertising. The majority of those responding reported that professional dental organizations did not help with filling vacant faculty positions, but that they would utilize the American Dental Association's website or their specialty organization's website to post faculty positions if they were easy to use and update.

  2. Methods for improving mechanical properties of partially stabilized zirconia and the resulting product

    International Nuclear Information System (INIS)

    Aronov, V.A.

    1987-01-01

    A method for improving mechanical surface properties of a rigid body comprising partially stabilized zirconia as a constituent is described, comprising the following steps: (i) providing a rigid body having an exposed surface and an interior volume; (ii) subjecting the exposed surface region of partially stabilized zirconia to external heating to heat the exposed surface region to 1100°C-1600°C without heating the interior volume above 500°C-800°C; and (iii) cooling the rigid body to a temperature of less than 500°C to cause a portion of the exposed surface region to transform from the tetragonal lattice modification to the monoclinic lattice modification, thereby creating a compressive stress field in the exposed surface region and improving the mechanical surface properties of the exposed surface region. In a ceramic body comprising a first exposed region of a partially stabilized zirconia, and a second region of a partially stabilized zirconia at an interior portion of the ceramic body, the improvement is described comprising the ceramic body having in the first, exposed region a greater percentage of the monoclinic lattice modification than in the second region; having in the first, exposed region 5 percent to 100 percent in the monoclinic lattice modification; and having a molded surface finish in the first, exposed region; the first, exposed region being subjected to a compressive field resulting from the greater percentage of the monoclinic lattice modification

  3. New results to BDD truncation method for efficient top event probability calculation

    International Nuclear Information System (INIS)

    Mo, Yuchang; Zhong, Farong; Zhao, Xiangfu; Yang, Quansheng; Cui, Gang

    2012-01-01

    A Binary Decision Diagram (BDD) is a graph-based data structure that allows calculating an exact top event probability (TEP). It has been a very difficult task to develop an efficient BDD algorithm that can solve large problems, since memory consumption is very high. Recently, in order to solve large reliability problems within limited computational resources, Jung presented an efficient method to maintain a small BDD size through truncation during the BDD calculation. In this paper, it is first identified that Jung's BDD truncation algorithm can be improved for more practical use. A more efficient truncation algorithm is then proposed, which generates a truncated BDD of smaller size and an approximate TEP with smaller truncation error. Empirical results showed that the new algorithm uses slightly less running time and slightly more storage than Jung's algorithm. It was also found that designing a truncation algorithm with ideal features for every possible fault tree is very difficult, if not impossible. The ideal features referred to here would be that, as the truncation limit decreases, the size of the truncated BDD converges to the size of the exact BDD but never exceeds it.
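
    For orientation, the sketch below shows how an exact top event probability is read off a BDD by Shannon expansion; the node encoding is an assumption made for illustration, and the truncation discussed in the abstract operates during BDD construction rather than in this evaluation step.

        # Minimal sketch of exact TEP evaluation on a BDD, assuming internal
        # nodes are (variable, high_child, low_child) tuples and terminals are
        # the booleans True/False. p maps each basic event to its probability.
        def top_event_probability(node, p, memo=None):
            memo = {} if memo is None else memo
            if node is True:
                return 1.0
            if node is False:
                return 0.0
            if id(node) in memo:
                return memo[id(node)]
            var, high, low = node
            # Shannon expansion: P(f) = p_var*P(f|var=1) + (1-p_var)*P(f|var=0)
            result = (p[var] * top_event_probability(high, p, memo)
                      + (1 - p[var]) * top_event_probability(low, p, memo))
            memo[id(node)] = result
            return result

        # Fault tree TOP = A and (B or C), variable order A < B < C
        bdd = ("A", ("B", True, ("C", True, False)), False)
        # 0.1 * (0.2 + 0.8 * 0.3) = 0.044
        print(top_event_probability(bdd, {"A": 0.1, "B": 0.2, "C": 0.3}))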

  4. Evaluating Behaviorally Oriented Aviation Maintenance Resource Management (MRM) Training and Programs: Methods, Results, and Conclusions

    Science.gov (United States)

    Taylor, James C.; Thomas, Robert L., III

    2003-01-01

    Assessment of the impact of Aviation Resource Management Programs on aviation culture and performance has prompted a considerable body of research (Taylor & Robertson, 1995; Taylor, 1998; Taylor & Patankar, 2001). In recent years new methods have been applied to the problem of maintenance error precipitated by factors such as the need for self-assessment of communication and trust. The present study (2002) is an extension of that past work. This research project was designed as the conclusion of a larger effort to help understand, evaluate and validate the impact of Maintenance Resource Management (MRM) training programs, and other MRM interventions, on participant attitudes, opinions, behaviors, and ultimately on enhanced safety performance. It includes research and development of evaluation methodology as well as examination of psychological constructs and correlates of maintainer performance. In particular, during 2002, three issues were addressed. First, the evaluation of two (independent and different) MRM programs for changing behaviors was undertaken. In one case we were able to further apply the approach to measuring written communication developed during 2001 (Taylor, 2002; Taylor & Thomas, 2003). Second, the MRM/TOQ surveys were made available for completion on the internet. The responses from these on-line surveys were automatically linked to a results calculator (like the one developed and described in Taylor, 2002) to aid industry users in analyzing and evaluating their local survey data on the internet. Third, the main trends and themes from our research about MRM programs over the past dozen years were reviewed.

  5. The relationship between team climate and interprofessional collaboration: Preliminary results of a mixed methods study.

    Science.gov (United States)

    Agreli, Heloise F; Peduzzi, Marina; Bailey, Christopher

    2017-03-01

    Relational and organisational factors are key elements of interprofessional collaboration (IPC) and team climate. Few studies have explored the relationship between IPC and team climate. This article presents a study that aimed to explore IPC in primary healthcare teams and understand how the assessment of team climate may provide insights into IPC. A mixed methods study design was adopted. In Stage 1 of the study, team climate was assessed using the Team Climate Inventory with 159 professionals in 18 interprofessional teams based in São Paulo, Brazil. In Stage 2, data were collected through in-depth interviews with a sample of team members who participated in the first stage of the study. Results from Stage 1 provided an overview of factors relevant to teamwork, which in turn informed our exploration of the relationship between team climate and IPC. Preliminary findings from Stage 2 indicated that teams with a more positive team climate (in particular, greater participative safety) also reported more effective communication and mutual support. In conclusion, team climate provided insights into IPC, especially regarding aspects of communication and interaction in teams. Further research will provide a better understanding of differences and areas of overlap between team climate and IPC. It will potentially contribute to an innovative theoretical approach to exploring interprofessional work in primary care settings.

  6. Methods for detecting and correcting inaccurate results in inductively coupled plasma-atomic emission spectrometry

    Science.gov (United States)

    Chan, George C. Y. [Bloomington, IN; Hieftje, Gary M [Bloomington, IN

    2010-08-03

    A method for detecting and correcting inaccurate results in inductively coupled plasma-atomic emission spectrometry (ICP-AES). ICP-AES analysis is performed across a plurality of selected locations in the plasma on an unknown sample, collecting the light intensity at one or more selected wavelengths of one or more sought-for analytes and creating a first dataset. The first dataset is then calibrated with a calibration dataset, creating a calibrated first dataset curve. If the calibrated first dataset curve varies with location within the plasma for a selected wavelength, errors are present. Plasma-related errors are then corrected by diluting the unknown sample and performing the same ICP-AES analysis on the diluted unknown sample, creating a calibrated second dataset curve (accounting for the dilution) for the one or more sought-for analytes. The cross-over point of the calibrated dataset curves yields the corrected value (free from plasma-related errors) for each sought-for analyte.
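
    A minimal sketch of the cross-over idea under assumed inputs: the two calibrated concentration-versus-position curves (the diluted one already multiplied by its dilution factor) are scanned for the location where they agree. The function name and the linear interpolation are illustrative, not the patented procedure.

        import numpy as np

        def crossover_value(conc_orig, conc_dil_scaled):
            """Return the concentration where the original-sample curve and the
            dilution-corrected curve intersect (taken as the error-free value)."""
            diff = np.asarray(conc_orig) - np.asarray(conc_dil_scaled)
            for i in range(len(diff) - 1):
                if diff[i] == 0:
                    return conc_orig[i]
                if diff[i] * diff[i + 1] < 0:  # sign change between positions i, i+1
                    t = diff[i] / (diff[i] - diff[i + 1])  # linear interpolation
                    return conc_orig[i] + t * (conc_orig[i + 1] - conc_orig[i])
            return None  # curves never cross: no plasma-related error detected

        # Example with made-up readings across five plasma viewing positions
        print(crossover_value([5.2, 5.0, 4.8, 4.6, 4.4], [4.3, 4.5, 4.7, 4.9, 5.1]))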

  7. 3D MRI of the colon: methods and first results of 5 patients

    International Nuclear Information System (INIS)

    Luboldt, W.; Bauerfeind, P.; Pelkonen, P.; Steiner, P.; Krestin, G.P.; Debatin, J.F.

    1997-01-01

    Purpose: 'Exoscopic' and endoscopic identification of colorectal pathologies via MRI. Methods: 5 patients (36-88 years), two normal and three with different colorectal pathologies (diverticular disease, polyps and carcinoma of the colon), were examined by MRI after colonoscopy. Subsequent to filling of the colon with a gadolinium-water mixture under MRI-monitoring, 3D-data sets of the colon were acquired in prone and supine positions over a 28 sec breathold interval. Subsequently multiplanar T 1 -weighted 2D-sequences were acquired before and following i.v. administration of Gd-DTPA (0.1 mmol/kg BW). All imaging was performed in the coronal orientation. The 3D-data were interactively analysed based on various displays: Maximum intensity projection (MIP), surface shadowed display (SSD), multiplanar reconstruction (MPR), virtual colonoscopy (VC). Results: All of the colorectal pathologies could be interactively diagnosed by MPR. On MIP images some pathologies were missed. VC presented the morphology of colon haustra as well as of all endoluminally growing lesions in a manner similar to endoscopy. The colon masses showed uptake of contrast media and could thus be differentiated from air or faeces. (orig./AJ) [de

  8. Integrate life-cycle assessment and risk analysis results, not methods.

    Science.gov (United States)

    Linkov, Igor; Trump, Benjamin D; Wender, Ben A; Seager, Thomas P; Kennedy, Alan J; Keisler, Jeffrey M

    2017-08-04

    Two analytic perspectives on environmental assessment dominate environmental policy and decision-making: risk analysis (RA) and life-cycle assessment (LCA). RA focuses on management of a toxicological hazard in a specific exposure scenario, while LCA seeks a holistic estimation of impacts of thousands of substances across multiple media, including non-toxicological and non-chemically deleterious effects. While recommendations to integrate the two approaches have remained a consistent feature of environmental scholarship for at least 15 years, the current perception is that progress is slow largely because of practical obstacles, such as a lack of data, rather than insurmountable theoretical difficulties. Nonetheless, the emergence of nanotechnology presents a serious challenge to both perspectives. Because the pace of nanomaterial innovation far outstrips acquisition of environmentally relevant data, it is now clear that a further integration of RA and LCA based on dataset completion will remain futile. In fact, the two approaches are suited for different purposes and answer different questions. A more pragmatic approach to providing better guidance to decision-makers is to apply the two methods in parallel, integrating only after obtaining separate results.

  9. Seismic hazard of American Samoa and neighboring South Pacific Islands--methods, data, parameters, and results

    Science.gov (United States)

    Petersen, Mark D.; Harmsen, Stephen C.; Rukstales, Kenneth S.; Mueller, Charles S.; McNamara, Daniel E.; Luco, Nicolas; Walling, Melanie

    2012-01-01

    American Samoa and the neighboring islands of the South Pacific lie near active tectonic-plate boundaries that host many large earthquakes, which can result in strong earthquake shaking and tsunamis. To mitigate earthquake risks from future ground shaking, the Federal Emergency Management Agency requested that the U.S. Geological Survey prepare seismic hazard maps that can be applied in building-design criteria. This Open-File Report describes the data, methods, and parameters used to calculate the seismic shaking hazard, as well as the output hazard maps, curves, and deaggregation (disaggregation) information needed for building design. Spectral acceleration hazard having a 2-percent probability of exceedance on a firm-rock site condition (Vs30 = 760 meters per second) is 0.12 times the acceleration of gravity (g) at 1 Hertz (1.0-second period) and 0.32 g at 5 Hertz (0.2-second period) on American Samoa; 0.72 g (1 Hertz) and 2.54 g (5 Hertz) on Tonga; 0.15 g (1 Hertz) and 0.55 g (5 Hertz) on Fiji; and 0.89 g (1 Hertz) and 2.77 g (5 Hertz) on the Vanuatu Islands.

  10. Method for Developing Descriptions of Hard-to-Price Products: Results of the Telecommunications Product Study

    Energy Technology Data Exchange (ETDEWEB)

    Conrad, F.; Tonn, B.

    1999-05-01

    This report presents the results of a study to test a new method for developing descriptions of hard-to-price products. The Bureau of Labor Statistics (BLS) is responsible for collecting data to estimate price indices such as the Consumer Price Index (CPI). BLS accomplishes this task by sending field staff to places of business to price actual products. The field staff are given product checklists to help them determine whether products found today are comparable to products priced the previous month. Prices for non-comparable products are not included in the current month's price index calculations. A serious problem facing BLS is developing product checklists for dynamic product areas, new industries, and the service sector. It is difficult to keep checklists up-to-date, and quite often simply to develop checklists for service-industry products. Some estimates suggest that upwards of 50% of US economic activity is not accounted for in the CPI

  11. Application of machine learning methods to histone methylation ChIP-Seq data reveals H4R3me2 globally represses gene expression

    Science.gov (United States)

    2010-01-01

    Background In the last decade, biochemical studies have revealed that epigenetic modifications including histone modifications, histone variants and DNA methylation form a complex network that regulates the state of chromatin and processes that depend on it, including transcription and DNA replication. Currently, a large number of these epigenetic modifications are being mapped in a variety of cell lines at different stages of development using high throughput sequencing by members of the ENCODE consortium, the NIH Roadmap Epigenomics Program and the Human Epigenome Project. An extremely promising and underexplored area of research is the application of machine learning methods, which are designed to construct predictive network models, to these large-scale epigenomic data sets. Results Using a ChIP-Seq data set of 20 histone lysine and arginine methylations and histone variant H2A.Z in human CD4+ T-cells, we built predictive models of gene expression as a function of histone modification/variant levels using Multilinear (ML) Regression and Multivariate Adaptive Regression Splines (MARS). Along with extensive crosstalk among the 20 histone methylations, we found H4R3me2 was the most and second most globally repressive histone methylation among the 20 studied in the ML and MARS models, respectively. In support of our finding, a number of experimental studies show that PRMT5-catalyzed symmetric dimethylation of H4R3 is associated with repression of gene expression. This includes a recent study, which demonstrated that H4R3me2 is required for DNMT3A-mediated DNA methylation, a known global repressor of gene expression. Conclusion In stark contrast to univariate analysis of the relationship between H4R3me2 and gene expression levels, our study showed that the regulatory role of some modifications like H4R3me2 is masked by confounding variables, but can be elucidated by multivariate/systems-level approaches. PMID:20653935
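
    The multivariate approach is straightforward to sketch: regress a gene-expression measure on all modification levels at once, so that each coefficient reflects a mark's contribution with the other marks held fixed. The data below are synthetic stand-ins (column 5 plays the role of H4R3me2); this is a minimal illustration of multilinear regression, not the authors' pipeline.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        n_genes, n_marks = 1000, 20
        tags = rng.lognormal(size=(n_genes, n_marks))  # stand-in ChIP-Seq tag counts
        X = np.log2(tags + 1)

        # Synthetic expression: mark 0 activates, mark 5 represses (H4R3me2 stand-in)
        y = 2.0 * X[:, 0] - 1.5 * X[:, 5] + rng.normal(scale=0.5, size=n_genes)

        model = LinearRegression().fit(X, y)
        # Negative coefficients flag globally repressive marks in the joint model,
        # even where a univariate correlation would be masked by confounders.
        print(np.round(model.coef_[:6], 2))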

  12. A method for calculating Bayesian uncertainties on internal doses resulting from complex occupational exposures

    International Nuclear Information System (INIS)

    Puncher, M.; Birchall, A.; Bull, R. K.

    2012-01-01

    Estimating uncertainties on doses from bioassay data is of interest in epidemiology studies that estimate cancer risk from occupational exposures to radionuclides. Bayesian methods provide a logical framework to calculate these uncertainties. However, occupational exposures often consist of many intakes, and this can make the Bayesian calculation computationally intractable. This paper describes a novel strategy for increasing the computational speed of the calculation by simplifying the intake pattern to a single composite intake, termed the complex intake regime (CIR). In order to assess whether this approximation is accurate and fast enough for practical purposes, the method is implemented by the Weighted Likelihood Monte Carlo Sampling (WeLMoS) method and evaluated by comparing its performance with a Markov Chain Monte Carlo (MCMC) method. The MCMC method gives the full solution (all intakes are independent), but is very computationally intensive to apply routinely. Posterior distributions of model parameter values, intakes and doses are calculated for a representative sample of plutonium workers from the United Kingdom Atomic Energy cohort using the WeLMoS method with the CIR and the MCMC method. The distributions are in good agreement: posterior means and Q_0.025 and Q_0.975 quantiles are typically within 20%. Furthermore, the WeLMoS method using the CIR converges quickly: a typical case history takes around 10-20 minutes on a fast workstation, whereas the MCMC method took around 12 hours. The advantages and disadvantages of the method are discussed. (authors)
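
    As a schematic of the weighted-likelihood idea, not the actual WeLMoS code, the sketch below draws parameter values from a prior, weights each draw by the likelihood of the bioassay data, and reads the posterior mean and the Q_0.025/Q_0.975 quantiles off the weighted sample; the lognormal prior, the Gaussian likelihood, and all names are illustrative assumptions.

        import numpy as np

        def weighted_likelihood_posterior(sample_prior, likelihood, n=100_000, seed=1):
            """Importance sampling with the prior as proposal: posterior weights
            are proportional to the likelihood of the observed bioassay data."""
            rng = np.random.default_rng(seed)
            theta = sample_prior(rng, n)
            w = likelihood(theta)
            w = w / w.sum()
            order = np.argsort(theta)
            cdf = np.cumsum(w[order])
            quantile = lambda p: theta[order][np.searchsorted(cdf, p)]
            return np.sum(w * theta), quantile(0.025), quantile(0.975)

        # Toy example: lognormal prior on intake, Gaussian likelihood of one datum
        mean, lo, hi = weighted_likelihood_posterior(
            lambda rng, n: rng.lognormal(mean=0.0, sigma=1.0, size=n),
            lambda theta: np.exp(-0.5 * ((theta - 1.2) / 0.3) ** 2),
        )
        print(mean, lo, hi)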

  13. The Method for Assessing and Forecasting Value of Knowledge in SMEs – Research Results

    Directory of Open Access Journals (Sweden)

    Justyna Patalas-Maliszewska

    2010-10-01

    Full Text Available Decisions by SMEs regarding knowledge development are made at a strategic level (Haas-Edersheim, 2007). Related to knowledge management are approaches to "measure" knowledge, where the literature distinguishes between qualitative and quantitative methods of valuating intellectual capital. Although there is quite a range of such methods to build an intellectual capital reporting system, none of them is really widely recognized. This work presents a method enabling assessment of the effectiveness of investing in human resources, taking existing methods into consideration. The method presented focuses on SMEs (taking into consideration their importance for regional development in particular). It consists of four parts: an SME reference model, an indicator matrix to assess investments into knowledge, innovation indicators, and the GMDH algorithm for decision making. The method presented is exemplified by a case study including 10 companies.

  14. A Review of Spectral Methods for Variable Amplitude Fatigue Prediction and New Results

    Science.gov (United States)

    Larsen, Curtis E.; Irvine, Tom

    2013-01-01

    A comprehensive review of the available methods for estimating fatigue damage from variable amplitude loading is presented. The dependence of fatigue damage accumulation on the power spectral density (psd) is investigated for random processes relevant to real structures such as those in offshore or aerospace applications. Beginning with the Rayleigh (or narrow band) approximation, attempts at improved approximations or corrections to the Rayleigh approximation are examined by comparison to rainflow analysis of time histories simulated from psd functions representative of simple theoretical and real-world applications. Spectral methods investigated include corrections by Wirsching and Light, Ortiz and Chen, the Dirlik formula, and the Single-Moment method, among other more recently proposed methods. Good agreement is obtained between the spectral methods and the time-domain rainflow identification for most cases, with some limitations. Guidelines are given for using the several spectral methods to increase confidence in the damage estimate.
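
    For reference, the narrow-band (Rayleigh) approximation that the reviewed corrections start from can be written as follows, assuming an S-N curve of the form N S^k = C; conventions for the spectral moments vary between authors, so this is one common textbook form rather than the paper's exact notation:

        \[
          \mathbb{E}[D] \;=\; \frac{\nu_0 T}{C}\,\bigl(\sqrt{2 m_0}\bigr)^{k}\,
          \Gamma\!\Bigl(1 + \frac{k}{2}\Bigr),
          \qquad
          \nu_0 = \sqrt{\frac{m_2}{m_0}},
          \qquad
          m_n = \int_0^\infty f^{\,n}\, G(f)\,\mathrm{d}f,
        \]

    where T is the exposure duration, G(f) is the one-sided stress psd, and nu_0 is the zero-upcrossing rate. The corrections reviewed above (Wirsching-Light, Ortiz-Chen, Dirlik, Single-Moment) effectively rescale this estimate or replace the Rayleigh amplitude distribution with empirical forms fitted to rainflow counts of simulated histories.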

  15. Evaluation of the effects of green taxes in the Nordic countries. Results and method question

    International Nuclear Information System (INIS)

    Skou Andersen, M.; Dengsoee, N.; Branth Pedersen, A.

    2000-01-01

    Green taxes have over the past 10 years become a significant part of environmental regulation in the Nordic countries. The present report is a literature study of the effects of green taxes with regard to CO2 and pesticides. The authors have identified 68 studies of CO2 taxes and 20 studies of pesticide taxes. The report presents a summary of the results from these studies and assesses the methodologies employed for examining the effects of the green taxes. The majority of the reviewed studies are ex-ante studies, which have been carried out in advance of the implementation of the taxes and which are often based on simplified economic models. Ex-post studies, which are based on the actual historical data for the adjustment to the taxes, are relatively few. Twenty ex-post studies of the CO2 taxes have been identified, while there are no ex-post studies of the pesticide taxes. With regard to the environmental effects of green taxes, the ex-post studies provide the most reliable data. The completed ex-post studies of the CO2 taxes do not present unambiguous results, because focus and methodology differ. Most studies are partial in their focus and relate to one or more sectors of the economy. Some studies have been carried out a few years after the introduction of the taxes, and do not present an updated assessment of the effects of the taxes. To the extent that it is possible to summarise the present knowledge about the effects of the CO2 taxes, there seem to be indications of relatively marked effects in Denmark as compared to the other Nordic countries, since Denmark is the only country whose taxed CO2 emissions have been reduced in absolute figures. With regard to Norway and Sweden, effects of the CO2 taxes can be identified in particular sectors in relation to business-as-usual scenarios. Finland's CO2 tax has not been comprehensively evaluated ex-post, but has reached a tax level which gives expectations of

  16. The SAGES Legacy Unifying Globulars and Galaxies survey (SLUGGS): sample definition, methods, and initial results

    Energy Technology Data Exchange (ETDEWEB)

    Brodie, Jean P.; Romanowsky, Aaron J.; Jennings, Zachary G.; Pota, Vincenzo; Kader, Justin; Roediger, Joel C.; Villaume, Alexa; Arnold, Jacob A.; Woodley, Kristin A. [University of California Observatories, 1156 High Street, Santa Cruz, CA 95064 (United States); Strader, Jay [Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824 (United States); Forbes, Duncan A.; Pastorello, Nicola; Usher, Christopher; Blom, Christina; Kartha, Sreeja S. [Centre for Astrophysics and Supercomputing, Swinburne University, Hawthorn, VIC 3122 (Australia); Foster, Caroline; Spitler, Lee R., E-mail: jbrodie@ucsc.edu [Australian Astronomical Observatory, P.O. Box 915, North Ryde, NSW 1670 (Australia)

    2014-11-20

    We introduce and provide the scientific motivation for a wide-field photometric and spectroscopic chemodynamical survey of nearby early-type galaxies (ETGs) and their globular cluster (GC) systems. The SAGES Legacy Unifying Globulars and GalaxieS (SLUGGS) survey is being carried out primarily with Subaru/Suprime-Cam and Keck/DEIMOS. The former provides deep gri imaging over a 900 arcmin^2 field-of-view to characterize GC and host galaxy colors and spatial distributions, and to identify spectroscopic targets. The NIR Ca II triplet provides GC line-of-sight velocities and metallicities out to typically ∼8 R_e, and to ∼15 R_e in some cases. New techniques to extract integrated stellar kinematics and metallicities to large radii (∼2-3 R_e) are used in concert with GC data to create two-dimensional (2D) velocity and metallicity maps for comparison with simulations of galaxy formation. The advantages of SLUGGS compared with other, complementary, 2D-chemodynamical surveys are its superior velocity resolution, radial extent, and multiple halo tracers. We describe the sample of 25 nearby ETGs, the selection criteria for galaxies and GCs, the observing strategies, the data reduction techniques, and modeling methods. The survey observations are nearly complete and more than 30 papers have so far been published using SLUGGS data. Here we summarize some initial results, including signatures of two-phase galaxy assembly, evidence for GC metallicity bimodality, and a novel framework for the formation of extended star clusters and ultracompact dwarfs. An integrated overview of current chemodynamical constraints on GC systems points to separate, in situ formation modes at high redshifts for metal-poor and metal-rich GCs.

  17. Establishing Upper Limits for Item Ratings for the Angoff Method: Are Resulting Standards More 'Realistic'?

    Science.gov (United States)

    Reid, Jerry B.

    This report investigates an area of uncertainty in using the Angoff method for setting standards, namely whether or not a judge's conceptualizations of borderline group performance are realistic. Ratings are usually made with reference to the performance of this hypothetical group, therefore the Angoff method's success is dependent on this point.…

  18. Some new results on correlation-preserving factor scores prediction methods

    NARCIS (Netherlands)

    Ten Berge, J.M.F.; Krijnen, W.P.; Wansbeek, T.J.; Shapiro, A.

    1999-01-01

    Anderson and Rubin and McDonald have proposed a correlation-preserving method of factor scores prediction which minimizes the trace of a residual covariance matrix for variables. Green has proposed a correlation-preserving method which minimizes the trace of a residual covariance matrix for factors.

  19. Communicating patient-reported outcome scores using graphic formats: results from a mixed-methods evaluation.

    Science.gov (United States)

    Brundage, Michael D; Smith, Katherine C; Little, Emily A; Bantug, Elissa T; Snyder, Claire F

    2015-10-01

    Patient-reported outcomes (PROs) promote patient-centered care by using PRO research results ("group-level data") to inform decision making and by monitoring individual patients' PROs ("individual-level data") to inform care. We investigated the interpretability of current PRO data presentation formats. This cross-sectional mixed-methods study randomized purposively sampled cancer patients and clinicians to evaluate six group-data or four individual-data formats. A self-directed exercise assessed participants' interpretation accuracy and ratings of ease-of-understanding and usefulness (0 = least to 10 = most) of each format. Semi-structured qualitative interviews explored helpful and confusing format attributes. We reached thematic saturation with 50 patients (44% < college graduate) and 20 clinicians. For group-level data, patients rated simple line graphs highest for ease-of-understanding and usefulness (median 8.0; 33% selected as easiest to understand/most useful), and clinicians rated simple line graphs highest for ease-of-understanding and usefulness (median 9.0, 8.5) but most often selected line graphs with confidence limits or norms (30% for each format as easiest to understand/most useful). Qualitative results support that clinicians value confidence intervals, norms, and p values, but patients find them confusing. For individual-level data, both patients and clinicians rated line graphs highest for ease-of-understanding (median 8.0 patients, 8.5 clinicians) and usefulness (median 8.0, 9.0) and selected them as easiest to understand (50%, 70%) and most useful (62%, 80%). The qualitative interviews supported highlighting scores requiring clinical attention and providing reference values. This study has identified preferences and opportunities for improving on current formats for PRO presentation and will inform development of best practices for PRO presentation. Both patients and clinicians prefer line graphs across group-level and individual-level data.

  20. Survey of sterile admixture practices in canadian hospital pharmacies: part 1. Methods and results.

    Science.gov (United States)

    Warner, Travis; Nishi, Cesilia; Checkowski, Ryan; Hall, Kevin W

    2009-03-01

    The 1996 Guidelines for Preparation of Sterile Products in Pharmacies of the Canadian Society of Hospital Pharmacists (CSHP) represent the current standard of practice for sterile compounding in Canada. However, these guidelines are practice recommendations, not enforceable standards. Previous surveys of sterile compounding practices have shown that actual practice deviates markedly from voluntary practice recommendations. In 2004, the United States Pharmacopeia (USP) published its General Chapter <797> "Pharmaceutical Compounding - Sterile Preparations", which set a more rigorous and enforceable standard for sterile compounding in the United States. To assess sterile compounding practices in Canadian hospital pharmacies and to compare them with current CSHP recommendations and USP <797> standards. An online survey, based on previous studies of sterile compounding practices, the CSHP guidelines, and the <797> standards, was created and distributed to 193 Canadian hospital pharmacies. A total of 133 pharmacies completed at least part of the survey, for a response rate of 68.9%. All respondents reported the preparation of sterile products. Various degrees of deviation from the practice recommendations were noted for virtually all areas of the CSHP guidelines and the USP standards. Low levels of compliance were most notable in the areas of facilities and equipment, process validation, and product testing. Availability in the central pharmacy of a clean room facility meeting or exceeding the criteria of International Organization for Standardization (ISO) class 8 is a requirement of the <797> standards, but more than 40% of responding pharmacies reported that they did not have such a facility. Higher levels of compliance were noted for policies and procedures, garbing requirements, aseptic technique, and handling of hazardous products. Part 1 of this series reports the survey methods and results relating to policies, personnel, raw materials, storage and handling

  1. Sensitivity of Spaceborne and Ground Radar Comparison Results to Data Analysis Methods and Constraints

    Science.gov (United States)

    Morris, Kenneth R.; Schwaller, Mathew

    2011-01-01

    With the availability of active weather radar observations from space from the Precipitation Radar (PR) on board the Tropical Rainfall Measuring Mission (TRMM) satellite, numerous studies have been performed comparing PR reflectivity and derived rain rates to similar observations from ground-based weather radars (GR). These studies have used a variety of algorithms to compute matching PR and GR volumes for comparison. Most studies have used a fixed 3-dimensional Cartesian grid centered on the ground radar, onto which the PR and GR data are interpolated using a proprietary approach and/or commonly available GR analysis software (e.g., SPRINT, REORDER). Other studies have focused on the intersection of the PR and GR viewing geometries either explicitly or using a hybrid of the fixed grid and PR/GR common fields of view. For the Dual-Frequency Precipitation Radar (DPR) of the upcoming Global Precipitation Measurement (GPM) mission, a prototype DPR/GR comparison algorithm based on similar TRMM PR data has been developed that defines the common volumes in terms of the geometric intersection of PR and GR rays, where smoothing of the PR and GR data is minimized and no interpolation is performed. The PR and GR volume-averaged reflectivity values of each sample volume are accompanied by descriptive metadata, for attributes including the variability and maximum of the reflectivity within the sample volume, and the fraction of range gates in the sample average having reflectivity values above an adjustable detection threshold (typically taken to be 18 dBZ for the PR). Sample volumes are further characterized by rain type (Stratiform or Convective), proximity to the melting layer, underlying surface (land/water/mixed), and the time difference between the PR and GR observations. The mean reflectivity differences between the PR and GR can differ between data sets produced by the different analysis methods; and for the GPM prototype, by the type of constraints and

  2. Could Daylight Glare Be Defined Mathematically? Results of Testing the DGIN Method in Japan

    Science.gov (United States)

    Nazzal, Ali; Oki, Masato

    Discomfort glare from daylight is a common problem for which no valid prediction methods have existed so far. A new mathematical method, the DGIN (New Daylight Glare Index), tries to respond to the challenge. This paper reports on experiments carried out in a daylit office environment in Japan to test the applicability of the method. A slight positive correlation was found between the DGIN and the subjective evaluations. Additionally, a high L_adaptation value together with a small ratio of L_window to L_adaptation was evidently sufficient to neutralize the effect of glare discomfort. However, subjective assessments are poor glare indicators and not reliable for testing glare prediction methods. The DGIN is a good indicator of daylight glare, and when the DGIN value is analyzed together with the measured illuminance ratios, discomfort glare from daylight can be analyzed in a quantitative manner. The DGIN method could serve architects and lighting designers in testing daylighting systems, and could also guide the action of daylight-responsive lighting controls.

  3. Comparison of two dietary assessment methods by food consumption: results of the German National Nutrition Survey II.

    Science.gov (United States)

    Eisinger-Watzl, Marianne; Straßburg, Andrea; Ramünke, Josa; Krems, Carolin; Heuer, Thorsten; Hoffmann, Ingrid

    2015-04-01

    To further characterise the performance of the diet history method and the 24-h recalls method, both in an updated version, a comparison was conducted. The National Nutrition Survey II, representative for Germany, assessed food consumption with both methods. The comparison was conducted in a sample of 9,968 participants aged 14-80 years. Besides calculating mean differences, the statistical agreement measurements encompassed Spearman and intraclass correlation coefficients, ranking of participants in quartiles, and the Bland-Altman method. Mean consumption of 12 out of 18 food groups was assessed as higher with the diet history method. Three of these 12 food groups had a medium to large effect size (e.g., raw vegetables) and seven showed at least a small one, while there was essentially no difference for coffee/tea or ice cream. Intraclass correlations were strong only for beverages (>0.50) and were weakest for vegetables. The requirement of the diet history method to remember consumption over the past 4 weeks may be a source of inaccuracy, especially for inhomogeneous food groups. Additionally, social desirability gains significance. There is no assessment method without errors, and attention to specific food groups is a critical issue with every method. Altogether, the 24-h recalls method applied in the presented study offers advantages in approximating food consumption as compared to the diet history method.
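
    Of the agreement statistics named above, the Bland-Altman method is the simplest to sketch: compute each participant's difference between the two instruments and derive the mean difference (bias) and 95% limits of agreement. The toy intakes below are invented for illustration.

        import numpy as np

        def bland_altman_limits(a, b):
            """Mean difference and 95% limits of agreement between two methods."""
            a, b = np.asarray(a, float), np.asarray(b, float)
            diff = a - b
            bias = diff.mean()
            half_width = 1.96 * diff.std(ddof=1)
            return bias, bias - half_width, bias + half_width

        # Toy intakes (g/day) from a diet history and a 24-h recall instrument
        dh = [210, 180, 250, 300, 190, 220]
        r24 = [190, 170, 230, 260, 200, 210]
        print(bland_altman_limits(dh, r24))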

  4. The 'revealed preferences' theory: Assumptions and conjectures

    International Nuclear Information System (INIS)

    Green, C.H.

    1983-01-01

    Being a kind of intuitive psychology, the 'Revealed-Preferences'-theory based approaches towards determining acceptable risks are a useful method for the generation of hypotheses. In view of the fact that reliability engineering develops faster than methods for determining reliability targets, the Revealed-Preferences approach is a necessary preliminary aid. Some of the assumptions on which the 'Revealed-Preferences' theory is based are identified and analysed, and afterwards compared with experimentally obtained results. (orig./DG) [de

  5. Investigation of error estimation method of observational data and comparison method between numerical and observational results toward V and V of seismic simulation

    International Nuclear Information System (INIS)

    Suzuki, Yoshio; Kawakami, Yoshiaki; Nakajima, Norihiro

    2017-01-01

    Methods to estimate the errors included in observational data and to compare numerical results with observational results are investigated, toward the verification and validation (V and V) of seismic simulation. For the error-estimation method, 144 publications from the past 5 years (2010 to 2014) in the structural engineering and earthquake engineering fields, where descriptions of acceleration data are frequent, were surveyed. As a result, it was found that processes to remove components regarded as errors from observational data are used in about 30% of those publications. Errors are caused by the resolution, the linearity, the temperature coefficient of sensitivity, the temperature coefficient of zero shift, the transverse sensitivity, the seismometer properties, aliasing, and so on. Those processes can be exploited to estimate the errors individually. For the comparison method, public materials of the ASME V and V Symposium 2012-2015, their references, and the above 144 publications were surveyed. As a result, it was found that six methods have mainly been proposed in existing research. Evaluating those methods against nine criteria, their advantages and disadvantages are laid out. No method is yet well established, so it is necessary to apply the existing methods while compensating for their disadvantages and/or to search for a novel method. (author)

  6. Results of a study assessing teaching methods of faculty after measuring student learning style preference.

    Science.gov (United States)

    Stirling, Bridget V

    2017-08-01

    Learning style preference impacts how well groups of students respond to their curricula. Faculty have many choices in the methods for delivering nursing content, as well as for assessing students. The purpose was to develop knowledge about how faculty delivered curriculum content, and then to consider these findings in the context of the students' learning style preferences. Following an in-service on teaching and learning styles, faculty completed surveys on their methods of teaching and the proportion of time spent teaching with each learning style (visual, aural, read/write and kinesthetic). This study took place at the College of Nursing of a large all-female university in Saudi Arabia. Twenty-four female nursing faculty volunteered to participate in the project. A cross-sectional design was used. Faculty reported teaching mostly with methods that were kinesthetic and visual, although lecture (aural) was also popular. Students preferred kinesthetic and aural learning methods. Read/write was the least preferred by students and the least used method of teaching by faculty. Faculty used visual methods about one third of the time, although these were not preferred by the students. The students' preferred learning style (kinesthetic) was the method most used by faculty. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. On a new iterative method for solving linear systems and comparison results

    Science.gov (United States)

    Jing, Yan-Fei; Huang, Ting-Zhu

    2008-10-01

    In Ujevic [A new iterative method for solving linear systems, Appl. Math. Comput. 179 (2006) 725-730], the author obtained a new iterative method for solving linear systems, which can be considered as a modification of the Gauss-Seidel method. In this paper, we show that this is a special case from the point of view of projection techniques. A different approach is then established, which is both theoretically and numerically proven to be better than (or at least as good as) Ujevic's. As the presented numerical examples show, in most cases the convergence rate is more than one and a half times that of Ujevic's method.
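
    Since the abstract positions both methods against Gauss-Seidel, a plain Gauss-Seidel sweep is the natural baseline to picture. The sketch below is that classical baseline, not Ujevic's modification or the projection-based variant proposed in the paper.

        import numpy as np

        def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=10_000):
            """Classical Gauss-Seidel iteration for Ax = b (nonzero diagonal).
            Each sweep reuses already-updated components of x within the pass."""
            n = len(b)
            x = np.zeros(n) if x0 is None else x0.astype(float).copy()
            for _ in range(max_iter):
                x_old = x.copy()
                for i in range(n):
                    s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
                    x[i] = (b[i] - s) / A[i, i]
                if np.linalg.norm(x - x_old, ord=np.inf) < tol:
                    break
            return x

        # Converges here because A is strictly diagonally dominant
        A = np.array([[4.0, 1.0], [2.0, 5.0]])
        b = np.array([1.0, 2.0])
        print(gauss_seidel(A, b))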

  8. Monitoring intakes of radionuclides by nuclear power plants workers in France: materials, methods and results

    Energy Technology Data Exchange (ETDEWEB)

    Gonin, M.; Le Guen, B.; Bailloeuil, C.; Gerondal, M. [Electricite De France (France)

    2000-05-01

    for 72 hours of counting, 4. Beta and alpha total counts (ZnS scintillators) on nose blows, used for monitoring alpha risks. 5. Quality assurance programs are applied for measurements and also for dose assessments. For the measurements, especially for low activities of alpha emitters, yearly comparison exercises are organized by the PROCORAD association (CEA and COGEMA laboratories). More than 50 European and American laboratories have participated in recent years. In addition, OPRI, the radiation protection office of the Ministry of Health, organises intercomparison measurements among laboratories. For the quality control of dose assessment, the EDF laboratory participates in the comparison exercises planned by EURADOS-CENDOS Working Group 6. Routine, operational and special monitoring programs are carried out. The results are given with the dominant radionuclides indicated. These results show that the collective effective dose is extremely low: 257 man-mSv for 90 cases over 16 years. The protection methods used appear to be effective. Nevertheless, in some special cases, adding the effective dose to external exposure doses could lead to a total dose higher than the regulatory limits. (author)

  9. Monitoring intakes of radionuclides by nuclear power plants workers in France: materials, methods and results

    International Nuclear Information System (INIS)

    Gonin, M.; Le Guen, B.; Bailloeuil, C.; Gerondal, M.

    2000-01-01

    hours of counting, 4. Beta and alpha total counts (ZnS scintillators) on nose blows, used for monitoring alpha risks. 5. Quality assurance programs are applied for measurements and also for dose assessments. For the measurements, especially for low activities of alpha emitters, yearly comparison exercises are organized by the PROCORAD association (CEA and COGEMA laboratories). More than 50 European and American laboratories have participated in recent years. In addition, OPRI, the radiation protection office of the Ministry of Health, organises intercomparison measurements among laboratories. For the quality control of dose assessment, the EDF laboratory participates in the comparison exercises planned by EURADOS-CENDOS Working Group 6. Routine, operational and special monitoring programs are carried out. The results are given with the dominant radionuclides indicated. These results show that the collective effective dose is extremely low: 257 man-mSv for 90 cases over 16 years. The protection methods used appear to be effective. Nevertheless, in some special cases, adding the effective dose to external exposure doses could lead to a total dose higher than the regulatory limits. (author)

  10. Finding ultracool brown dwarfs with MegaCam on CFHT: method and first results

    Science.gov (United States)

    Delorme, P.; Willott, C. J.; Forveille, T.; Delfosse, X.; Reylé, C.; Bertin, E.; Albert, L.; Artigau, E.; Robin, A. C.; Allard, F.; Doyon, R.; Hill, G. J.

    2008-06-01

    Aims: We present the first results of a wide field survey for cool brown dwarfs with the MegaCam camera on the CFHT telescope, the Canada-France Brown Dwarf Survey, hereafter CFBDS. Our objectives are to find ultracool brown dwarfs and to constrain the field-brown dwarf mass function thanks to a larger sample of L and T dwarfs. Methods: We identify candidates in CFHT/MegaCam i' and z' images using optimised psf-fitting within Source Extractor, and follow them up with pointed near-infrared imaging on several telescopes. Results: We have so far analysed over 350 square degrees and found 770 brown dwarf candidates brighter than z'_AB=22.5. We currently have J-band photometry for 220 of these candidates, which confirms 37% as potential L or T dwarfs. Some are among the reddest and farthest brown dwarfs currently known, including an independent identification of the recently published ULAS J003402.77-005206.7 and the discovery of a second brown dwarf later than T8, CFBDS J005910.83-011401.3. Infrared spectra of three T dwarf candidates confirm their nature, and validate the selection process. Conclusions: The completed survey will discover ~100 T dwarfs and ~500 L dwarfs or M dwarfs later than M8, approximately doubling the number of currently known brown dwarfs. The resulting sample will have a very well-defined selection function, and will therefore produce a very clean luminosity function. Based on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/DAPNIA, at the Canada-France-Hawaii Telescope (CFHT) which is operated by the National Research Council (NRC) of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii. This work is based in part on data products produced at TERAPIX and the Canadian Astronomy Data Centre as part of the Canada-France-Hawaii Telescope Legacy Survey, a collaborative project of NRC and CNRS. Based on observations made

  11. Investigation of sol-gel transition by rheological methods. Part II. Results and discussion.

    Directory of Open Access Journals (Sweden)

    KUDRYAVTSEV Pavel Gennadievich

    2017-10-01

    Full Text Available In this work, rheological studies of the gelling process were carried out. We developed a measuring system for studying the rheology of the gelation process. It consisted of several measuring cells of the Weiler-Rebinder type, a system for automatic regulation of the composition of the medium, and a thermostabilization system. This complex is designed to measure the ultimate shear stress as a function of time, from the start of the sol-gel transition to the complete conversion of the sol to the gel. The developed device has a wide range of measurable critical shear stresses, τ0 = 0.05–50000 dyne/cm². Using the developed instrument, it was possible to establish the shape of the initial section of the curve τ0 = f(t) and to develop a methodology for more accurate determination of the gelation time. The developed method proved that the classical method for determining the start time of the sol-gel transition, using the point of intersection of the tangent to the linear part of the rheological curve τ0 = f(t), gives significantly distorted results. A new phenomenon was discovered: the kinetic curves in the coordinates of the Avrami-Erofeev-Bogolyubov equation have an inflection point which separates the kinetic curve into two parts, the initial and the final. It was found that the constant k in the Avrami-Erofeev-Bogolyubov equation does not depend on the temperature and is the same for both the initial and final parts of the kinetic curve. It depends only on the chemical nature of the reacting system. It was found that for the initial section of the kinetic curves, the value of the parameter n in the Avrami-Erofeev-Bogolyubov equation was n = 23.4±2.8 and, unlike in the final section of the rheological curve, does not depend on temperature. A large value of this parameter can be interpreted as the average number of directions of growth of a fractal aggregate during its growth. The value of this parameter
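
    Avrami-type kinetics of the kind invoked above are commonly written as α(t) = 1 − exp(−k·tⁿ) and linearised in double-logarithmic coordinates, where n is the slope and ln(k) the intercept. The sketch below is a minimal illustration of that fitting step, not the authors' instrument pipeline; the synthetic data and parameter values are assumptions.

```python
import numpy as np

# Avrami-type kinetics: alpha(t) = 1 - exp(-k * t**n).
# Linearised form: ln(-ln(1 - alpha)) = ln(k) + n * ln(t),
# so n is the slope and ln(k) the intercept of a straight-line fit.
t = np.linspace(0.1, 10.0, 50)                 # time, arbitrary units (assumed)
k_true, n_true = 0.01, 2.5                     # illustrative parameters
alpha = 1.0 - np.exp(-k_true * t**n_true)      # synthetic conversion data

y = np.log(-np.log(1.0 - alpha))               # Avrami coordinates
x = np.log(t)
n_fit, lnk_fit = np.polyfit(x, y, 1)           # slope = n, intercept = ln(k)
print(f"n = {n_fit:.2f}, k = {np.exp(lnk_fit):.4f}")
```

    An inflection point of the kind the abstract reports would show up as two straight branches with different intercepts in these coordinates, one for the initial and one for the final section of the kinetic curve.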

  12. Structuring scientific works in the “Introduction, Methods, Results and Discussion” format – what a beginner ought to know

    Directory of Open Access Journals (Sweden)

    N. V. Avdeeva

    2016-01-01

    Full Text Available Reference materials about "Introduction, Methods, Results and Discussion" (IMRAD), the commonly used international format for scientific works, have become available to Russian authors, yet gaps in knowledge about the format still appear, especially among beginners. The faults that regularly appear in the structuring of works prompted the present research, the aim of which is to compare the available information about the IMRAD format with the specific difficulties beginning authors often face when preparing their works for publication. The main materials studied were sources in Russian and in English, published mostly in the 2010s, devoted to the problems of structuring works according to the above-mentioned format. In addition, the present research considered the results of plagiarism tests carried out at the Russian State Library between 2013 and 2015 with the help of the software "Automated system of specialized processing of textual documents". The main methods of the research were structural and comparative analysis of texts. The research revealed that the available information on the IMRAD structure is inconsistent and often demands careful interpretation and explanation. Authors of reference editions differ from one another in their interpretation of how necessary each composition element is, of the amount of detail required in descriptions, etc. Moreover, the very structure of a scientific work looks different to different authors. Most often the structure is taken to be the integrity of the contents and its form, yet sometimes its description is replaced by outer elements, such as language clichés. The analysis of the most common faults in text structuring indicates that authors often do not have a clear idea of how to interpret the various demands which are so obscurely described

  13. Revealing Rembrandt

    Directory of Open Access Journals (Sweden)

    Andrew J Parker

    2014-04-01

    Full Text Available The power and significance of artwork in shaping human cognition is self-evident. The starting point for our empirical investigations is the view that the task of neuroscience is to integrate itself with other forms of knowledge, rather than to seek to supplant them. In our recent work, we examined a particular aspect of the appreciation of artwork using present-day functional magnetic resonance imaging (fMRI). Our results emphasised the continuity between viewing artwork and other human cognitive activities. We also showed that appreciation of a particular aspect of artwork, namely authenticity, depends upon co-ordinated activity between the brain regions involved in decision making and those responsible for processing visual information. The findings about brain function probably have no specific consequences for understanding how people respond to the art of Rembrandt in comparison with their response to other artworks. However, the use of images of Rembrandt's portraits, his most intimate and personal works, clearly had a significant impact upon our viewers, even though they were spatially confined to the interior of an MRI scanner at the time of viewing. Neuroscientific studies of humans viewing artwork have the capacity to reveal the diversity of human cognitive responses that may be induced by external advice or context as people view artwork in a variety of frameworks and settings.

  14. Biological dosimetry intercomparison exercise: an evaluation of Triage and routine mode results by robust methods

    International Nuclear Information System (INIS)

    Di Giorgio, M.; Vallerga, M.B.; Radl, A.; Taja, M.R.; Barquinero, J.F.; Seoane, A.; De Luca, J.; Guerrero Carvajal, Y.C.; Stuck Oliveira, M.S.; Valdivia, P.; García Lima, O.; Lamadrid, A.; González Mesa, J.; Romero Aguilera, I.; Mandina Cardoso, T.; Arceo Maldonado, C.; Espinoza, M.E.; Martínez López, W.; Lloyd, D.C.; Méndez Acuña, L.; Di Tomaso, M.V.; Roy, L.; Lindholm, C.; Romm, H.; Güçlü, I.

    2011-01-01

    Well-defined protocols and quality management standards are indispensable for biological dosimetry laboratories. Participation in periodic proficiency testing by interlaboratory comparisons is also required. This harmonization is essential if a cooperative network is used to respond to a mass casualty event. Here we present an international intercomparison based on dicentric chromosome analysis for dose assessment, performed in the framework of the IAEA Regional Latin American RLA/9/054 Project. The exercise involved 14 laboratories, 8 from Latin America and 6 from Europe. The performance of each laboratory and the reproducibility of the exercise were evaluated using robust methods described in ISO standards. The study was based on the analysis of slides from samples irradiated with 0.75 (DI) and 2.5 Gy (DII). Laboratories were required to score the frequency of dicentrics and convert them to estimated doses, using their own dose-effect curves, after the analysis of 50 or 100 cells (triage mode) and after conventional scoring of 500 cells or 100 dicentrics. In the conventional scoring, at both doses, all reported frequencies were considered satisfactory, and two reported doses were considered questionable. The analysis of the data dispersion among the dicentric frequencies and among doses indicated a better reproducibility for estimated doses (15.6% for DI and 8.8% for DII) than for frequencies (24.4% for DI and 11.4% for DII), expressed by the coefficient of variation. In the two triage modes, although robust analysis classified some reported frequencies or doses as unsatisfactory or questionable, all estimated doses were in agreement with the accepted error of ±0.5 Gy. However, at the DI dose and for 50 scored cells, 5 of the 14 laboratories reported confidence intervals that included zero dose and could be interpreted as false negatives. This improved with 100 cells, where only one confidence interval included zero dose. At the DII dose, all estimations fell within
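
    The robust reproducibility statistics quoted above (coefficients of variation across laboratories) can be illustrated with a median/MAD estimator. The ISO standards the abstract cites use an iterative procedure (Algorithm A), so the sketch below is a simplified stand-in, and the dose values are invented:

```python
import numpy as np

# Robust location/scale via median and scaled MAD: a simple stand-in for the
# ISO-style robust statistics used in proficiency testing (the standards use
# an iterative Algorithm A). The reported doses below are made up.
doses = np.array([0.70, 0.78, 0.81, 0.69, 0.75, 0.90, 0.72])  # Gy, illustrative

median = np.median(doses)
mad = 1.4826 * np.median(np.abs(doses - median))   # scaled MAD ~ robust std
cv = 100.0 * mad / median                          # robust coefficient of variation, %
print(f"robust mean ~ {median:.2f} Gy, robust CV ~ {cv:.1f}%")
```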

  15. COMPARISON OF CONSEQUENCE ANALYSIS RESULTS FROM TWO METHODS OF PROCESSING SITE METEOROLOGICAL DATA

    International Nuclear Information System (INIS)

    , D

    2007-01-01

    Consequence analysis to support documented safety analysis requires the use of one or more years of representative meteorological data for atmospheric transport and dispersion calculations. At a minimum, the needed meteorological data for most atmospheric transport and dispersion models consist of hourly samples of wind speed and atmospheric stability class. Atmospheric stability is inferred from measured and/or observed meteorological data. Several methods exist to convert measured and observed meteorological data into atmospheric stability class data. In this paper, one year of meteorological data from a western Department of Energy (DOE) site is processed to determine atmospheric stability class using two methods. The method prescribed by the U.S. Nuclear Regulatory Commission (NRC) for supporting licensing of nuclear power plants makes use of measurements of vertical temperature difference to determine atmospheric stability. Another method, preferred by the U.S. Environmental Protection Agency (EPA), relies upon measurements of incoming solar radiation, vertical temperature gradient, and wind speed. Consequences are calculated and compared using the two sets of processed meteorological data from these two methods as input to the MELCOR Accident Consequence Code System 2 (MACCS2) code.
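
    As a rough illustration of how hourly measurements are converted to stability classes under a delta-T scheme of the kind the NRC method uses, here is a hedged sketch; the class boundaries (in °C per 100 m) follow commonly quoted values and should be checked against the applicable regulatory guide before any real use:

```python
# Pasquill stability class from the vertical temperature difference, using
# delta-T/delta-z class boundaries commonly quoted for the NRC method
# (degrees C per 100 m). Treat the exact thresholds as assumptions.
BOUNDS = [(-1.9, "A"), (-1.7, "B"), (-1.5, "C"), (-0.5, "D"), (1.5, "E"), (4.0, "F")]

def stability_class(dt_per_100m: float) -> str:
    for upper, cls in BOUNDS:
        if dt_per_100m < upper:
            return cls
    return "G"  # strongly stable

# hourly lapse rates -> hourly stability classes
for dt in (-2.3, -1.0, 0.2, 2.1, 5.0):
    print(dt, stability_class(dt))
```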

  16. Using Google Earth to Assess Shade for Sun Protection in Urban Recreation Spaces: Methods and Results.

    Science.gov (United States)

    Gage, R; Wilson, N; Signal, L; Barr, M; Mackay, C; Reeder, A; Thomson, G

    2018-05-16

    Shade in public spaces can lower the risk of sunburn and skin cancer. However, existing methods of auditing shade require travel between sites and sunny weather conditions. This study aimed to evaluate the feasibility of free computer software (Google Earth) for assessing shade in urban open spaces. A shade projection method was developed that uses Google Earth street view and aerial images to estimate shade at solar noon on the summer solstice, irrespective of the date of image capture. Three researchers used the method to separately estimate shade cover over pre-defined activity areas in a sample of 45 New Zealand urban open spaces, including 24 playgrounds, 12 beaches and 9 outdoor pools. Outcome measures included method accuracy (assessed by comparison with field observations of a subsample of 10 of the settings) and inter-rater reliability. Of the 164 activity areas identified in the 45 settings, most (83%) had no shade cover. The method identified most activity areas in playgrounds (85%) and beaches (93%) and was accurate for assessing shade over these areas (predictive values of 100%). Only 8% of activity areas at outdoor pools were identified, due to a lack of street view images. Reliability for shade cover estimates was excellent (intraclass correlation coefficient of 0.97, 95% CI 0.97-0.98). Google Earth appears to be a reasonably accurate and reliable shade audit tool for playgrounds and beaches. The findings are relevant for programmes focused on supporting the development of healthy urban open spaces.

  17. Automatic Tree Data Removal Method for Topography Measurement Result Using Terrestrial Laser Scanner

    Science.gov (United States)

    Yokoyama, H.; Chikatsu, H.

    2017-02-01

    Recently, laser scanning has been receiving greater attention as a useful tool for real-time 3D data acquisition, and various applications such as city modelling, DTM generation and 3D modelling of cultural heritage sites have been proposed. Earlier digital data processing approaches were demanded by past digital archiving techniques for cultural heritage sites. However, robust filtering methods for distinguishing on- and off-terrain points in terrestrial laser scanner data still have many issues. Past investigations reported digital data processing using airborne laser scanners, but efficient methods for removing trees from terrain points at cultural heritage sites were not considered. In this paper, the authors describe a new robust filtering method for cultural heritage sites using a terrestrial laser scanner with "the echo digital processing technology", one of the latest data processing techniques for terrestrial laser scanners.
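
    The abstract does not detail the "echo digital processing" filter itself, so the sketch below shows only a generic baseline for the same task: a grid-based lowest-point filter that keeps returns near the local height minimum and discards vegetation above it. The cell size, tolerance, and toy point cloud are assumptions:

```python
import numpy as np

# Generic grid-based lowest-point filter: a common baseline for separating
# ground from vegetation in laser-scan point clouds. This is NOT the paper's
# echo-based method, just an illustration of the on/off-terrain filtering task.
def ground_filter(points, cell=0.5, tol=0.2):
    """points: (N, 3) array of x, y, z. Returns a boolean mask of ground points."""
    ij = np.floor(points[:, :2] / cell).astype(np.int64)  # grid cell index per point
    keys = ij[:, 0] * 100000 + ij[:, 1]                   # flatten the 2D index (small extents)
    ground = np.zeros(len(points), dtype=bool)
    for key in np.unique(keys):
        sel = keys == key
        zmin = points[sel, 2].min()                       # lowest return in the cell
        ground[sel] = points[sel, 2] <= zmin + tol        # keep points near the local minimum
    return ground

# toy cloud: flat terrain at z ~ 0 plus a 'tree' of points between z = 2 and 5
rng = np.random.default_rng(0)
terrain = np.column_stack([rng.uniform(0, 10, 500), rng.uniform(0, 10, 500), rng.normal(0, 0.05, 500)])
tree = np.column_stack([rng.uniform(4, 5, 100), rng.uniform(4, 5, 100), rng.uniform(2, 5, 100)])
pts = np.vstack([terrain, tree])
print(ground_filter(pts).sum(), "of", len(pts), "points classified as ground")
```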

  18. Review of solution approach, methods, and recent results of the TRAC-PF1 system code

    International Nuclear Information System (INIS)

    Mahaffy, J.H.; Liles, D.R.; Knight, T.D.

    1983-01-01

    The current version of the Transient Reactor Analysis Code (TRAC-PF1) was created to improve on the capabilities of its predecessor (TRAC-PD2) for analyzing slow reactor transients such as small-break loss-of-coolant accidents. TRAC-PF1 continues to use a semi-implicit finite-difference method for modeling three-dimensional flows in the reactor vessel. However, it contains a new stability-enhancing two-step (SETS) finite-difference technique for one-dimensional flow calculations. This method is not restricted by a material Courant stability condition, allowing much larger time-step sizes during slow transients than a semi-implicit method would. These methods have been successfully applied to the analysis of a variety of experiments and hypothetical plant transients covering a full range of two-phase flow regimes

  19. Characterization of the Darwin direct implicit particle-in-cell method and resulting guidelines for operation

    International Nuclear Information System (INIS)

    Gibbons, M.R.; Hewett, D.W.

    1997-01-01

    We investigate the linear dispersion and other properties of the Darwin Direct Implicit Particle-in-cell (DADIPIC) method in order to deduce guidelines for its use in the simulation of long time-scale, kinetic phenomena in plasmas. The Darwin part of this algorithm eliminates the Courant constraint for light propagation across a grid cell in a time step and divides the field solution into several elliptic equations. The direct implicit method is only applied to the electrostatic field relieving the need to resolve plasma oscillations. Linear theory and simulations verifying the theory are used to generate the desired guidelines as well as show the utility of DADIPIC for a wide range of low frequency, electromagnetic phenomena. We find that separation of the fields has made the task of predicting algorithm behavior easier and produced a robust method without restrictive constraints. 20 refs., 11 figs., 3 tabs

  20. Comparison of evaluation results of piping thermal fatigue evaluation method based on equivalent stress amplitude

    International Nuclear Information System (INIS)

    Suzuki, Takafumi; Kasahara, Naoto

    2012-01-01

    In recent years, reports of failures caused by high cycle thermal fatigue have increased, at both light water reactors and fast breeder reactors. One cause of such failures is turbulent mixing at a tee-junction of a coolant system, where hot and cold fluids mix. In order to prevent thermal fatigue failures at tee-junctions, the Japan Society of Mechanical Engineers published a guideline giving an evaluation method for high cycle thermal fatigue damage in nuclear pipes. In order to justify the safety margin and make the procedure of the guideline concise, this paper proposes a new evaluation method of thermal fatigue damage based on the 'equivalent stress amplitude.' Because this new method makes the evaluation procedure clear and concise, it will contribute to improving the guideline for thermal fatigue evaluation. (author)

  1. Personality psychology: lexical approaches, assessment methods, and trait concepts reveal only half of the story--why it is time for a paradigm shift.

    Science.gov (United States)

    Uher, Jana

    2013-03-01

    This article develops a comprehensive philosophy-of-science for personality psychology that goes far beyond the scope of the lexical approaches, assessment methods, and trait concepts that currently prevail. One of the field's most important guiding scientific assumptions, the lexical hypothesis, is analysed from meta-theoretical viewpoints to reveal that it explicitly describes two sets of phenomena that must be clearly differentiated: 1) lexical repertoires and the representations that they encode and 2) the kinds of phenomena that are represented. Thus far, personality psychologists largely explored only the former, but have seriously neglected studying the latter. Meta-theoretical analyses of these different kinds of phenomena and their distinct natures, commonalities, differences, and interrelations reveal that personality psychology's focus on lexical approaches, assessment methods, and trait concepts entails a) erroneous meta-theoretical assumptions about what the phenomena being studied actually are, and thus how they can be analysed and interpreted, b) that contemporary personality psychology is largely based on everyday psychological knowledge, and c) a fundamental circularity in the scientific explanations used in trait psychology. These findings seriously challenge the widespread assumptions about the causal and universal status of the phenomena described by prominent personality models. The current state of knowledge about the lexical hypothesis is reviewed, and implications for personality psychology are discussed. Ten desiderata for future research are outlined to overcome the current paradigmatic fixations that are substantially hampering intellectual innovation and progress in the field.

  2. Semiautomatic volume of interest drawing for 18F-FDG image analysis - method and preliminary results

    International Nuclear Information System (INIS)

    Green, A.J.; Baig, S.; Begent, R.H.J.; Francis, R.J.

    2008-01-01

    Functional imaging of cancer adds important information to conventional measurements in monitoring response. Serial 18F-fluorodeoxyglucose (FDG) positron emission tomography (PET), which indicates changes in glucose metabolism in tumours, shows great promise for this. However, there is a need for a method to quantitate alterations in uptake of FDG which accounts for changes in tumour volume and intensity of FDG uptake. Selection of regions or volumes of interest (ROIs or VOIs) by hand drawing, or simple thresholding, suffers from operator-dependent drawbacks. We present a simple, robust VOI-growing method for this application. The method requires a single seed point within the visualised tumour and another in relevant normal tissue. The drawn tumour VOI is insensitive to operator inconsistency and is thus a suitable basis for comparative measurements. The method is validated using a software phantom. We demonstrate the use of the method in the assessment of tumour response in 31 patients receiving chemotherapy for various carcinomas. Valid assessment of tumour response could be made 2-4 weeks after starting chemotherapy, giving information for clinical decision making which would otherwise have taken 9-12 weeks. Survival was predicted from FDG-PET 2-4 weeks after starting chemotherapy (p = 0.04), and after 9-12 weeks FDG-PET gave a better prediction of survival (p = 0.002) than CT or MRI (p = 0.015). FDG-PET using this method of analysis has potential as a routine tool for optimising the use of chemotherapy and improving its cost effectiveness. It also has potential for increasing the accuracy of response assessment in clinical trials of novel therapies. (orig.)
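
    The abstract specifies a tumour seed and a normal-tissue seed but not the exact growth rule, so the following sketch shows one plausible implementation: threshold-based region growing in which the threshold is interpolated between the two seed uptake values. The 40% mixing fraction and the toy volume are assumptions, not the paper's values:

```python
import numpy as np
from scipy import ndimage

# Threshold-based VOI growing from a tumour seed and a normal-tissue seed.
# The 40% mixing rule for the threshold and the toy volume are assumptions.
def grow_voi(volume, tumour_seed, normal_seed, frac=0.4):
    t_val = volume[tumour_seed]                      # uptake at the tumour seed
    n_val = volume[normal_seed]                      # background uptake
    threshold = n_val + frac * (t_val - n_val)       # assumed threshold rule
    labels, _ = ndimage.label(volume >= threshold)   # connected components (6-connectivity)
    return labels == labels[tumour_seed]             # keep the component holding the seed

vol = np.ones((20, 20, 20))          # dim background
vol[8:12, 8:12, 8:12] = 10.0         # bright 'tumour' blob
voi = grow_voi(vol, (10, 10, 10), (2, 2, 2))
print(voi.sum(), "voxels in the grown VOI")   # -> 64
```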

  3. Comparing Results from Constant Comparative and Computer Software Methods: A Reflection about Qualitative Data Analysis

    Science.gov (United States)

    Putten, Jim Vander; Nolen, Amanda L.

    2010-01-01

    This study compared qualitative research results obtained by manual constant comparative analysis with results obtained by computer software analysis of the same data. An investigation of issues of trustworthiness and accuracy ensued. Results indicated that the inductive constant comparative data analysis generated 51 codes and two coding levels…

  4. Histological Grading of Hepatocellular Carcinomas with Intravoxel Incoherent Motion Diffusion-weighted Imaging: Inconsistent Results Depending on the Fitting Method.

    Science.gov (United States)

    Ichikawa, Shintaro; Motosugi, Utaroh; Hernando, Diego; Morisaka, Hiroyuki; Enomoto, Nobuyuki; Matsuda, Masanori; Onishi, Hiroshi

    2018-04-10

    To compare the abilities of three intravoxel incoherent motion (IVIM) imaging approximation methods to discriminate the histological grade of hepatocellular carcinomas (HCCs). Fifty-eight patients (60 HCCs) underwent IVIM imaging with 11 b-values (0-1000 s/mm²). The slow (D) and fast (D*) diffusion coefficients and the perfusion fraction (f) were calculated for the HCCs using the mean signal intensities in regions of interest drawn by two radiologists. Three approximation methods were used. First, all three parameters were obtained simultaneously using non-linear fitting (method A). Second, D was obtained using linear fitting (b = 500 and 1000), followed by non-linear fitting for D* and f (method B). Third, D was obtained by linear fitting, f was obtained using the regression-line intersection and the signal at b = 0, and non-linear fitting was used for D* (method C). A receiver operating characteristic analysis was performed to reveal the abilities of these methods to distinguish poorly-differentiated from well-to-moderately-differentiated HCCs. Inter-reader agreements were assessed using intraclass correlation coefficients (ICCs). The measurements of D, D*, and f in methods B and C (Az-value, 0.658-0.881) had better discrimination abilities than those in method A (Az-value, 0.527-0.607). The ICCs of D and f were good to excellent (0.639-0.835) with all methods. The ICCs of D* were moderate with methods B (0.580) and C (0.463) and good with method A (0.705). The IVIM parameters may vary depending on the fitting method, and therefore further technical refinement may be needed.
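
    Method B above is a segmented fit: D comes first from a log-linear fit of the high-b data, then D* and f are fitted non-linearly with D held fixed. Below is a minimal sketch of that scheme on synthetic data; the b-values, starting values, and bounds are illustrative assumptions, not the study protocol:

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic b-values (s/mm^2) and ground-truth parameters: illustrative only
bvals = np.array([0, 10, 20, 50, 100, 200, 400, 500, 800, 1000], float)

def ivim(b, D, Dstar, f):
    # Bi-exponential IVIM model: S(b)/S0 = f*exp(-b*D*) + (1-f)*exp(-b*D)
    return f * np.exp(-b * Dstar) + (1 - f) * np.exp(-b * D)

def segmented_fit(bvals, signal):
    # Step 1: D from a log-linear fit at b = 500 and 1000, where the
    # perfusion (D*) term has essentially decayed away.
    hi = np.isin(bvals, (500.0, 1000.0))
    slope, _ = np.polyfit(bvals[hi], np.log(signal[hi]), 1)
    D = -slope
    # Step 2: with D fixed, fit D* and f by nonlinear least squares.
    (Dstar, f), _ = curve_fit(lambda b, Dstar, f: ivim(b, D, Dstar, f),
                              bvals, signal, p0=(0.02, 0.1), bounds=(0, [1, 1]))
    return D, Dstar, f

signal = ivim(bvals, D=1.0e-3, Dstar=2.0e-2, f=0.25)   # noise-free test signal
print(segmented_fit(bvals, signal))                    # should recover the truth
```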

  5. Noninvasive MRI thermometry with the proton resonance frequency (PRF) method: in vivo results in human muscle

    DEFF Research Database (Denmark)

    De Poorter, J; De Wagter, C; De Deene, Y

    1995-01-01

    The noninvasive thermometry method is based on the temperature dependence of the proton resonance frequency (PRF). High-quality temperature images can be obtained from phase information of standard gradient-echo sequences with an accuracy of 0.2 degrees C in phantoms. This work was focused on the...
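
    PRF thermometry maps a gradient-echo phase difference to a temperature change via ΔT = Δφ / (2π·γ·α·B0·TE), with α ≈ −0.01 ppm/°C. A minimal sketch under assumed scanner parameters follows; B0 and TE are illustrative, and the paper's acquisition details are not reproduced here:

```python
import numpy as np

GAMMA_HZ_PER_T = 42.576e6   # proton gyromagnetic ratio / 2*pi
ALPHA_PPM_PER_C = -0.01     # PRF thermal coefficient, ~ -0.01 ppm/degC

def prf_delta_t(phase_hot, phase_ref, b0=1.5, te=0.02):
    """Temperature change map (degC) from two gradient-echo phase images (rad)."""
    dphi = np.angle(np.exp(1j * (phase_hot - phase_ref)))  # wrap to (-pi, pi]
    return dphi / (2 * np.pi * GAMMA_HZ_PER_T * ALPHA_PPM_PER_C * 1e-6 * b0 * te)

# example: a uniform +0.10 rad phase shift over a 4x4 map
dT = prf_delta_t(np.full((4, 4), 0.10), np.zeros((4, 4)))
print(dT[0, 0])   # ~ -1.25 degC for these assumed settings
```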

  6. Review of quantum Monte Carlo methods and results for Coulombic systems

    International Nuclear Information System (INIS)

    Ceperley, D.

    1983-01-01

    The various Monte Carlo methods for calculating ground state energies are briefly reviewed. Then a summary of the charged systems that have been studied with Monte Carlo is given. These include the electron gas, small molecules, a metal slab and many-body hydrogen

  7. Remote Determination of Cloud Temperature and Transmittance from Spectral Radiance Measurements: Method and Results

    Science.gov (United States)

    1996-10-01

    atmospheric temperature and humidity profiles. Validation tests performed on experimental spectra demonstrate the accuracy of the method with typical... Keywords: passive remote sensing; infrared spectra; cloud temperature; cloud transmittance; FTIR spectrometer; icing hazard detection.

  8. Revealing Interactions between Human Resources, Quality of Life and Environmental Changes within Socially-oriented Observations : Results from the IPY PPS Arctic Project in the Russian North

    Science.gov (United States)

    Vlasova, Tatiana

    2010-05-01

    Socially-oriented Observations (SOO) in the Russian North have been carried out within the multidisciplinary IPY PPS Arctic project under the leadership of Norway, supported by the Research Council of Norway as well as the Russian Academy of Sciences. The main objective of SOO is to increase knowledge and observation of changes in quality-of-life conditions (state of the natural environment including climate and biota, safe drinking water and foods, well-being, employment, social relations, access to health care and high-quality education, etc.) and to reveal trends in human capital and capacities (health, demography, education, creativity, spiritual-cultural characteristics and diversity, participation in decision making, etc.). SOO have been carried out in industrial cities as well as in sparsely populated rural and nature-protection areas, in observation sites situated in different biomes (from coastal tundra to the southern taiga zone) of Murmansk and Arkhangelsk Oblast and the Republic of Komi. SOO were conducted according to the international protocol included in the PPS Arctic Manual. SOO approaches, based both on local people's perceptions and on statistics, help to identify the main issues and targets for improving life quality, human capital and the environment, and thus to distinguish leading SOO indicators for further monitoring. SOO have revealed close interaction between human resources, quality of life and environmental changes. Negative changes in human capital (depopulation, increasing unemployment, aging, declining physical and mental health, declining quality of education, loss of traditional knowledge, marginalization, etc.), despite people's high creativity and optimism, are becoming the major driving force affecting both the quality of life and the state of the environment and overall sustainability. Human-induced disturbances such as uncontrolled forest cutting and poaching are increasing. Observed rapid changes in climate and biota (ice and permafrost melting, tundra shrubs getting taller and

  9. Vehicle Speed Determination in Case of Road Accident by Software Method and Comparing of Results with the Mathematical Model

    OpenAIRE

    Hoxha Gezim; Shala Ahmet; Likaj Rame

    2017-01-01

    The paper addresses the problem of vehicle speed calculation at road accidents. The PC Crash and Virtual Crash software packages are used to determine the speed. Concrete cases of road accidents are analysed with both methods, and the calculation methods and their results are compared. These methods consider several factors, such as the front part of the vehicle, the technical features of the vehicle, the car angle, the displacement after the crash, road conditions, etc. Expected results with PC Cr...

  10. Epidemiological Characteristics and Clinical Treatment Outcome of Typhoid Fever in Ningbo, China, 2005-2014: Pulsed-Field Gel Electrophoresis Results Revealing Great Proportion of Common Transmission Sources.

    Science.gov (United States)

    Song, Qifa; Yang, Yuanbin; Lin, Wenping; Yi, Bo; Xu, Guozhang

    2017-09-25

    We aimed to describe the molecular epidemiological characteristics and clinical treatment outcome of typhoid fever in Ningbo, China during 2005-2014. Eighty-eight Salmonella Typhi isolates were obtained from 307 hospitalized patients. Three prevalent pulsed-field gel electrophoresis (PFGE) patterns of 54 isolates from 3 outbreaks were identified. Overall, there were 64 (72.7%) isolates from clustered cases and 24 (27.3%) isolates from sporadic cases. Resistance to nalidixic acid (NAL) (n = 47; 53.4%) and ampicillin (AMP) (n = 40; 45.4%) and rare resistance to tetracycline (TET) (n = 2; 2.3%) and gentamicin (GEN) (n = 2; 2.3%) were observed. No isolates resistant to cefotaxime (CTX), chloramphenicol (CL), ciprofloxacin (CIP), and trimethoprim-sulfamethoxazole (SXT) were found. The occurrence of reduced sensitivity to CIP was 52.3% (n = 46). The medians of fever clearance time in cases with and without complications were 7 (interquartile range (IQR): 4-10) and 5 (IQR: 3-7) days (P = 0.001), respectively, when patients were treated with CIP or levofloxacin (LEV) and/or third-generation cephalosporins (CEP). Rates of serious complications were at low levels: peritonitis (2.3%), intestinal hemorrhage (6.8%), and intestinal perforation (1.1%). The present study revealed a long-term clustering trend with respect to PFGE patterns, occasional outbreaks, and the rapid spread of AMP resistance and decreased CIP susceptibility among S. Typhi isolates in recent years.

  11. A new method for processing INAA results without the necessity of standards

    International Nuclear Information System (INIS)

    Hemon, G.; Philippot, J.C.

    1986-01-01

    When neutron activation analysis is used for elemental determinations in samples taken from the environment, and quite different in origin, certain questions arise: is the method absolute or relative, precise or accurate? How should objects be chosen to represent the subject studied? How should the conclusions of the measurement be used? How are the quality and intensity of the flux to be controlled? What corrections are needed for the effects of the perturbing elements uranium and boron? How sensitive is the method, or, which amounts to the same thing, what is the best time to analyse an element in a given matrix? The authors attempt to answer these questions and illustrate the subject with a few specific examples: mineral and river water, sea and river sediments, aerosols, quartz tools, hair, nodules and Mn deposits, diamonds, wines, and PWR effluents. (author)

  12. Health effects estimation: Methods and results for uranium mill tailings contaminated properties

    International Nuclear Information System (INIS)

    Denham, D.H.; Cross, F.T.; Soldat, J.K.

    1990-01-01

    This paper describes methods for estimating potential health effects from exposure to uranium mill tailings and presents a summary of risk projections for 50 contaminated properties (residences, schools, churches, and businesses) in the US. The methods provide realistic estimates of cancer risk to exposed individuals based on property-specific occupancy and contamination patterns. External exposure to gamma radiation, inhalation of radon daughters, and consumption of food products grown in radium-contaminated soil are considered. Most of the projected risk was from indoor exposure to radon daughters; however, for some properties the risk from consumption of locally grown food products is similar to that from radon daughters. In all cases, the projected number of lifetime cancer deaths for specific properties is less than one, but for some properties the increase in risk over that normally expected is greater than 100%

  13. Method of eliminating undesirable gaseous products resulting in underground uranium ore leaching

    International Nuclear Information System (INIS)

    Krizek, J.; Dedic, K.; Johann, J.; Haas, F.; Sokola, K.

    1980-01-01

    The method described is characterized by the fact that the gases being formed or dissolved are oxidized using a combined oxidation-reduction system consisting of airborne oxygen, oxygen carriers and a strong irreversible oxidant. The oxygen carrier system consists of a mixture of Fe2+ and Fe3+ cations or of Cu+ and Cu2+ cations, introduced in solutions in the form of iron salts at a concentration of 0.0001 to 0.003 M, or copper salts at most 0.0003 M. The irreversible oxidant shows a standard redox potential of at least +1.0 V. In addition to eliminating undesirable products, the method increases the leaching process yield. (J.B.)

  14. Strengthening of limestone by the impregnation - gamma irradiation method. Results of tests

    International Nuclear Information System (INIS)

    Ramiere, R.; Tassigny, C. de

    1975-04-01

    The method developed by the Centre d'Etudes Nucleaires de Grenoble (France) strengthens stones by impregnation with a styrene resin/liquid polystyrene mixture followed by polymerization under gamma irradiation. This method is applicable to stones which can be taken into the laboratory for treatment. The increase in strength of 6 different species of French limestone has been quantitatively recorded. The following parameters were studied: the possibility of water migration inside the stones, the improvement of the mechanical properties of the impregnated stone, resistance to freeze-thaw conditions, and artificial ageing of the stones, which causes only minor changes in the appearance of the stone and a negligible decrease in weight. [fr]

  15. The Ramsey method in high-precision mass spectrometry with Penning traps Experimental results

    CERN Document Server

    George, S; Herfurth, F; Herlert, A; Kretzschmar, M; Nagy, S; Schwarz, S; Schweikhard, L; Yazidjian, C

    2007-01-01

    The highest precision in direct mass measurements is obtained with Penning trap mass spectrometry. Most experiments use the interconversion of the magnetron and cyclotron motional modes of the stored ion due to excitation by external radiofrequency-quadrupole fields. In this work a new excitation scheme, Ramsey's method of time-separated oscillatory fields, has been successfully tested. It has been shown to reduce significantly the uncertainty in the determination of the cyclotron frequency and thus of the ion mass of interest. The theoretical description of the ion motion excited with Ramsey's method in a Penning trap and subsequently the calculation of the resonance line shapes for different excitation times, pulse structures, and detunings of the quadrupole field has been carried out in a quantum mechanical framework and is discussed in detail in the preceding article in this journal by M. Kretzschmar. Here, the new excitation technique has been applied with the ISOLTRAP mass spectrometer at ISOLDE/CERN fo...

  16. Method for covering a spme fibre with carbon nanotubes and resulting spme fibre

    OpenAIRE

    Bertrán, Enric; Jover Comas, Eric; García Céspedes, Jordi; Bayona Termens, Josep María

    2010-01-01

    [EN] The invention relates to a method for covering solid phase microextraction (SPME) fibres with carbon nanotubes (CNT), comprising the following operations: (i) depositing a layer of a metal material on the SPME fibre; (ii) applying a heat treatment in order to form catalytic metal nanoparticles in a reducing atmosphere; and (iii) applying carbon using chemical deposition techniques, thereby forming CNT on top of the metal nanoparticles. The invention also relates to a fibre obtain...

  17. A Simple Method for Closure of Urethrocutaneous Fistula after Tubularized Incised Plate Repair: Preliminary Results.

    Science.gov (United States)

    Shirazi, Mehdi; Ariafar, Ali; Babaei, Amir Hossein; Ashrafzadeh, Abdosamad; Adib, Ali

    2016-11-01

    Urethrocutaneous fistula (UCF) is the most prevalent complication after hypospadias repair surgery. Many methods have been developed for UCF correction, and the best technique for UCF repair is determined based on the size, location, and number of fistulas, as well as the status of the surrounding skin. In this study, we introduced and evaluated a simple method for UCF correction after tubularized incised plate (TIP) repair. This clinical study was conducted on children with UCFs ≤ 4 mm that developed after TIP surgery for hypospadias repair. The skin was incised around the fistula and the tract was released from the surrounding tissues and the dartos fascia, then ligated with 5-0 polydioxanone (PDS) sutures. The dartos fascia, as the second layer, was covered over the fistula tract with PDS thread (gauge 5-0) using the continuous suture method. The skin was closed with 6-0 Vicryl sutures. After six months of follow-up, surgical outcomes were evaluated based on fistula relapse and other complications. After six months, relapse occurred in only one patient, a six-year-old boy with a single 4-mm distal opening, who had undergone no previous fistula repairs. Therefore, in 97.5% of the cases, relapse was non-existent. Other complications, such as urethral stenosis, intraurethral obstruction, and epidermal inclusion cysts, were not seen in the other patients during the six-month follow-up period. This repair method, which is simple, rapid, and easily learned, is highly applicable, with a high success rate for the closure of UCFs measuring up to 4 mm in any location.

  18. "Rehabilitation schools for scoliosis" thematic series: describing the methods and results

    OpenAIRE

    Rigo, Manuel D; Grivas, Theodoros B

    2010-01-01

    Abstract The Scoliosis Rehabilitation model begins with the correct diagnosis and evaluation of the patient, to make treatment decisions oriented to the patient. The treatment is based on observation, education, scoliosis specific exercises, and bracing. The state of research in the field of conservative treatment is insufficient. There is some evidence supporting scoliosis specific exercises as a part of the rehabilitation treatment, however, the evidence is poor and the different methods ar...

  19. The relationship between team climate and interprofessional collaboration: preliminary results of a mixed methods study

    OpenAIRE

    Bailey, Christopher; Agreli, Heloise F.; Peduzzi, Marina

    2016-01-01

    Relational and organisational factors are key elements of interprofessional collaboration (IPC) and team climate. Few studies have explored the relationship between IPC and team climate. This article presents a study that aimed to explore IPC in primary healthcare teams and understand how the assessment of team climate may provide insights into IPC. A mixed methods study design was adopted. In Stage 1 of the study, team climate was assessed using the Team Climate Inventory with 159 profess...

  20. Dark Energy Survey Year 1 results: cross-correlation redshifts - methods and systematics characterization

    Science.gov (United States)

    Gatti, M.; Vielzeuf, P.; Davis, C.; Cawthon, R.; Rau, M. M.; DeRose, J.; De Vicente, J.; Alarcon, A.; Rozo, E.; Gaztanaga, E.; Hoyle, B.; Miquel, R.; Bernstein, G. M.; Bonnett, C.; Carnero Rosell, A.; Castander, F. J.; Chang, C.; da Costa, L. N.; Gruen, D.; Gschwend, J.; Hartley, W. G.; Lin, H.; MacCrann, N.; Maia, M. A. G.; Ogando, R. L. C.; Roodman, A.; Sevilla-Noarbe, I.; Troxel, M. A.; Wechsler, R. H.; Asorey, J.; Davis, T. M.; Glazebrook, K.; Hinton, S. R.; Lewis, G.; Lidman, C.; Macaulay, E.; Möller, A.; O'Neill, C. R.; Sommer, N. E.; Uddin, S. A.; Yuan, F.; Zhang, B.; Abbott, T. M. C.; Allam, S.; Annis, J.; Bechtol, K.; Brooks, D.; Burke, D. L.; Carollo, D.; Carrasco Kind, M.; Carretero, J.; Cunha, C. E.; D'Andrea, C. B.; DePoy, D. L.; Desai, S.; Eifler, T. F.; Evrard, A. E.; Flaugher, B.; Fosalba, P.; Frieman, J.; García-Bellido, J.; Gerdes, D. W.; Goldstein, D. A.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; Hoormann, J. K.; Jain, B.; James, D. J.; Jarvis, M.; Jeltema, T.; Johnson, M. W. G.; Johnson, M. D.; Krause, E.; Kuehn, K.; Kuhlmann, S.; Kuropatkin, N.; Li, T. S.; Lima, M.; Marshall, J. L.; Melchior, P.; Menanteau, F.; Nichol, R. C.; Nord, B.; Plazas, A. A.; Reil, K.; Rykoff, E. S.; Sako, M.; Sanchez, E.; Scarpine, V.; Schubnell, M.; Sheldon, E.; Smith, M.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Thomas, D.; Tucker, B. E.; Tucker, D. L.; Vikram, V.; Walker, A. R.; Weller, J.; Wester, W.; Wolf, R. C.

    2018-06-01

    We use numerical simulations to characterize the performance of a clustering-based method to calibrate photometric redshift biases. In particular, we cross-correlate the weak lensing source galaxies from the Dark Energy Survey Year 1 sample with redMaGiC galaxies (luminous red galaxies with secure photometric redshifts) to estimate the redshift distribution of the former sample. The recovered redshift distributions are used to calibrate the photometric redshift bias of standard photo-z methods applied to the same source galaxy sample. We apply the method to two photo-z codes run in our simulated data: Bayesian Photometric Redshift and Directional Neighbourhood Fitting. We characterize the systematic uncertainties of our calibration procedure, and find that these systematic uncertainties dominate our error budget. The dominant systematics are due to our assumption of unevolving bias and clustering across each redshift bin, and to differences between the shapes of the redshift distributions derived by clustering versus photo-zs. The systematic uncertainty in the mean redshift bias of the source galaxy sample is Δz ≲ 0.02, though the precise value depends on the redshift bin under consideration. We discuss possible ways to mitigate the impact of our dominant systematics in future analyses.

  1. Experimental results and validation of a method to reconstruct forces on the ITER test blanket modules

    International Nuclear Information System (INIS)

    Zeile, Christian; Maione, Ivan A.

    2015-01-01

    Highlights: • An in-operation force measurement system for the ITER EU HCPB TBM has been developed. • The force reconstruction methods are based on strain measurements on the attachment system. • An experimental setup and a corresponding mock-up have been built. • A set of test cases representing ITER-relevant excitations has been used for validation. • The influence of modeling errors on the force reconstruction has been investigated. - Abstract: In order to reconstruct forces on the test blanket modules in ITER, two force reconstruction methods, the augmented Kalman filter and a model predictive controller, have been selected and developed to estimate the forces based on strain measurements on the attachment system. A dedicated experimental setup with a corresponding mock-up has been designed and built to validate these methods. A set of test cases has been defined to represent possible excitations of the system. It has been shown that the errors in the estimated forces mainly depend on the accuracy of the identified model used by the algorithms. Furthermore, it has been found that a minimum of 10 strain gauges is necessary to allow for a low error in the reconstructed forces.
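
    The augmented Kalman filter mentioned in the highlights treats the unknown force as an extra state driven by process noise and estimates it from strain readings. The sketch below is a one-degree-of-freedom toy version under assumed mass, stiffness, damping, and noise levels; it is not the identified model of the TBM attachment system:

```python
import numpy as np

# Augmented Kalman filter on a toy 1-DOF mass-spring-damper: the unknown
# force is appended to the state as a random walk and estimated from a
# strain-like (displacement-proportional) measurement. All parameters are
# illustrative assumptions.
m, c, k = 1.0, 0.5, 200.0      # mass, damping, stiffness (assumed)
dt = 1e-3                      # sample interval (assumed)

A = np.array([[1.0,      dt,            0.0 ],
              [-k/m*dt,  1.0 - c/m*dt,  dt/m],
              [0.0,      0.0,           1.0 ]])   # x = [displacement, velocity, force]
H = np.array([[1.0, 0.0, 0.0]])                   # gauge reads displacement-proportional strain
Q = np.diag([1e-10, 1e-8, 1e-2])                  # large process noise on the force state
R = np.array([[1e-8]])                            # measurement noise

def estimate_force(measurements):
    x, P = np.zeros(3), np.eye(3) * 1e-3
    forces = []
    for z in measurements:
        x, P = A @ x, A @ P @ A.T + Q                    # predict
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
        x = x + (K @ (np.array([z]) - H @ x)).ravel()    # update with innovation
        P = (np.eye(3) - K @ H) @ P
        forces.append(x[2])                              # the augmented force state
    return np.array(forces)

forces = estimate_force(np.full(200, 1e-4))   # constant strain reading
print(forces[-1])   # should settle near k * 1e-4 = 0.02 (static equilibrium)
```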

  2. Different methods to define utility functions yield similar results but engage different neural processes

    Directory of Open Access Journals (Sweden)

    Marcus Heldmann

    2009-10-01

    Full Text Available Although the concept of utility is fundamental to many economic theories, a generally accepted method for determining a subject's utility function is still not available. We investigated two methods that are used in economic sciences for describing utility functions, using response-locked event-related potentials in order to assess their neural underpinnings. For defining the certainty equivalent (CE), we used a lottery game with winning probability p = 0.5; for identifying the subjects' utility functions directly, a standard bisection task was applied. Although the lottery task's payoffs were only hypothetical, a pronounced negativity was observed resembling the error-related negativity (ERN) previously described in action monitoring research, but this occurred only for choices far away from the indifference point between money and lottery. By contrast, the bisection task failed to evoke an ERN irrespective of the responses' correctness. Based on these findings we reason that only decisions made in the lottery task achieved a level of subjective relevance that activates cognitive-emotional monitoring. In terms of economic sciences, our findings support the view that the bisection method is unaffected by any kind of probability valuation or other parameters related to risk and, in combination with the lottery task, can therefore be used to differentiate between payoff and probability valuation.
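
    The certainty-equivalent and bisection procedures described above can be combined in a simple search: bisect on the sure amount until the (modelled) subject is indifferent to the 50/50 lottery. In the sketch, a CRRA utility stands in for a real subject's choices; the utility form and all parameters are assumptions:

```python
# Bisection search for a certainty equivalent (CE): the sure amount m whose
# utility equals the expected utility of a 50/50 lottery over (lo, hi).
# A CRRA utility stands in for a real subject; in the experiment each
# comparison would be an actual choice. Parameters are illustrative.
def utility(x, r=0.5):
    return x ** (1.0 - r) / (1.0 - r)        # CRRA utility (assumed form)

def certainty_equivalent(lo=0.0, hi=100.0, p=0.5, tol=0.01):
    eu = p * utility(hi) + (1.0 - p) * utility(lo)   # lottery's expected utility
    a, b = lo, hi
    while b - a > tol:
        m = 0.5 * (a + b)
        if utility(m) < eu:      # 'subject' prefers the lottery: raise the sure amount
            a = m
        else:                    # 'subject' prefers the sure amount: lower it
            b = m
    return 0.5 * (a + b)

print(certainty_equivalent())    # ~25 here: below the lottery's expected value of 50
```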

  3. The results of STEM education methods in physics at the 11th grade level: Light and visual equipment lesson

    Science.gov (United States)

    Tungsombatsanti, A.; Ponkham, K.; Somtoa, T.

    2018-01-01

    This research aimed: 1) to evaluate the efficiency of the process and the efficiency of the results (E1/E2) of an innovative instructional lesson plan using the STEM Education method in secondary-school physics, against the 70/70 standard criterion; 2) to study the critical thinking skills of students at the 11th grade level, assessed against a criterion of 80 percent; 3) to compare students' learning achievements between pre- and post-testing after being taught with STEM Education; and 4) to evaluate student satisfaction after STEM Education teaching, comparing means on a 5-point Likert scale. The participants were 40 students from grade 11 at Borabu School, Borabu District, Mahasarakham Province, semester 2, academic year 2016. The tools used in this study consisted of: 1) a STEM Education plan about force and the laws of motion for grade 11 students, 1 scheme with a total of 15 hours; 2) an essay-type test of critical thinking skills with 30 items; 3) an achievement test on light and visual equipment, multiple-choice with 4 options, 30 items; and 4) a satisfaction questionnaire on a 5-point rating scale with 16 items. The statistics used in data analysis were percentage, mean, standard deviation, and dependent t-test. The results showed that: 1) the efficiency of the STEM lesson plan was higher than the 70/70 standard criterion, at 71.51/75; 2) 26 students achieved critical thinking scores above the 80 percent criterion; 3) students' learning achievements differed significantly between pretest and posttest at the .05 level; and 4) the students' level of satisfaction toward learning with the STEM Education plan was at a good level (x̄ = 4.33, S.D. = 0.64).

  4. An improved method for interpreting API filter press hydraulic conductivity test results

    International Nuclear Information System (INIS)

    Heslin, G.M.; Baxter, D.Y.; Filz, G.M.; Davidson, R.R.

    1997-01-01

    The American Petroleum Institute (API) filter press is frequently used to measure the hydraulic conductivity of soil-bentonite backfill during the mix design process and as part of construction quality controls. However, interpretation of the test results is complicated by the fact that the seepage-induced consolidation pressure varies from zero at the top of the specimen to a maximum value at the bottom of the specimen. An analytical solution is available which relates the stress, compressibility, and hydraulic conductivity in soil consolidated by seepage forces. This paper presents the results of a laboratory investigation undertaken to support application of this theory to API hydraulic conductivity tests. When the API test results are interpreted using seepage consolidation theory, they are in good agreement with the results of consolidometer permeameter tests. Limitations of the API test are also discussed

  5. Quantitative functional scintigraphy of the salivary glands: A new method of interpreting and clinical results

    International Nuclear Information System (INIS)

    Schneider, P.; Trauring, G.; Haas, J.P.; Noodt, A.; Draf, W.

    1984-01-01

    Tc-99m pertechnetate is injected i.v. and the kinetics of the tracer in the salivary glands is analyzed using a gamma camera and a computer system. To visualize regional gland function, phase images as well as so-called gradient images are generated, which reflect the rate of tracer inflow and outflow. The time-activity curves for the individual glands, obtained with the ROI technique, show an initial rise which reflects the pertechnetate uptake potential of the gland and is superimposed on background activity. After a standardized lemon juice dose the curve drops steeply, with the slope depending on the outflow potential of the gland and the background activity. In the past, attempts at quantifying the uptake and elimination functions have failed because of problems in allowing for the variable background component of the time-activity curves, which normally amounts to about 60%. In 25 patients in whom one gland had been removed surgically, the background activity was examined in terms of its time course and regional pattern, and a patient- and gland-specific subtraction method was developed for obtaining the time-activity curves of isolated glands free of any background activity and describing the uptake and elimination potentials in quantitative terms. Using this new method we evaluated 305 salivary gland scans. Normal ranges for the quantitative parameters were established and their reproducibility was examined. Unlike qualitative functional images of the salivary glands, the new quantitative method offers accurate evidence of the extent of gland function and thus helps to decide whether a gland should be salvaged or not (conservative versus surgical treatment). However, quantitation does not furnish any clues on the benign or malignant nature of a tumor. (Author)

  6. Some elaborating methods of gamma scanning results on irradiated nuclear fuels

    International Nuclear Information System (INIS)

    Sternini, E.

    1979-01-01

    Gamma scanning, as a post-irradiation examination, is a technique which provides a large amount of information on irradiated nuclear fuels. The power profile, fission product distributions, average and local burn-up of single elements, and the structural and nuclear behaviour of fuel materials are examples of the information obtained. In the present work, the experimental methods and theoretical calculations used at the CNEN hot cell laboratory for these purposes are described. Errors arising from the application of the gamma scanning technique are also discussed

  7. Qualitative and quantitative methods for human factor analysis and assessment in NPP. Investigations and results

    International Nuclear Information System (INIS)

    Hristova, R.; Kalchev, B.; Atanasov, D.

    2005-01-01

    A description of the most frequently used approaches for human reliability assessment is given. The relation between different human factor causes for human-induced events in Kozloduy NPP during the period 2000-2003 is discussed. A comparison between the contribution of the causal factors for event occurrences in Kozloduy NPP and in a Japanese NPP is presented. It can be concluded that for both NPPs the most important causal factors are: 1) written procedures and documents; 2) man-machine interface; 3) environmental working conditions; 4) working practice; 5) training and qualification; 6) supervising methods

  8. Results of a survey on accident and safety analysis codes, benchmarks, verification and validation methods

    International Nuclear Information System (INIS)

    Lee, A.G.; Wilkin, G.B.

    1995-01-01

    This report is a compilation of the information submitted by AECL, CIAE, JAERI, ORNL and Siemens in response to a need identified at the 'Workshop on R and D Needs' at the IGORR-3 meeting. The survey compiled information on the national standards applied to the Safety Quality Assurance (SQA) programs undertaken by the participants. Information was assembled for the computer codes and nuclear data libraries used in accident and safety analyses for research reactors and the methods used to verify and validate the codes and libraries. Although the survey was not comprehensive, it provides a basis for exchanging information of common interest to the research reactor community

  9. Vehicle Speed Determination in Case of Road Accident by Software Method and Comparing of Results with the Mathematical Model

    Directory of Open Access Journals (Sweden)

    Hoxha Gezim

    2017-11-01

    Full Text Available The paper addresses the problem of vehicle speed calculation at road accidents. The PC Crash and Virtual Crash software packages are used to determine the speed. Concrete cases of road accidents are analysed with both methods, and the calculation methods and comparative results are presented for analysis. These methods consider several factors, such as the front part of the vehicle, the technical features of the vehicle, the car angle, the displacement after the crash, road conditions, etc. The results obtained with the PC Crash and Virtual Crash software are shown in tables and graphics and compared with the mathematical methods.
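
    One classical "mathematical model" of the kind compared against reconstruction software is the skid-to-stop energy balance, v = sqrt(2·μ·g·d). A minimal sketch with an assumed friction coefficient follows; the paper's actual model and inputs are not reproduced here:

```python
import math

# Skid-to-stop energy balance, one classic hand model compared against the
# software: kinetic energy equals friction work over the skid length d,
# so v = sqrt(2 * mu * g * d). The friction coefficient is an assumption.
def speed_from_skid(d_m, mu=0.7, g=9.81):
    """Pre-braking speed in km/h from skid length in metres."""
    return math.sqrt(2.0 * mu * g * d_m) * 3.6

for d in (10.0, 20.0, 30.0):
    print(f"skid {d:4.1f} m  ->  about {speed_from_skid(d):5.1f} km/h")
```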

  10. Evolution of different reaction methods resulting in the formation of AgI125 for use in brachytherapy sources

    International Nuclear Information System (INIS)

    Souza, C.D.; Peleias Jr, F.S.; Rostelato, M.E.C.M.; Zeituni, C.A.; Benega, M.A.G.; Tiezzi, R.; Mattos, F.R.; Rodrigues, B.T.; Oliveira, T.B.; Feher, A.; Moura, J.A.; Costa, O.L.

    2014-01-01

    Prostate cancer represents about 10% of all cases of cancer in the world. Brachytherapy has been extensively used in the early and intermediate stages of the illness. The radiotherapy method reduces the probability of damage to surrounding healthy tissues. The present study compares several methods for depositing iodine-125 on a silver substrate (the seed core), in order to choose the most suitable one to be implemented at IPEN. Four methods were selected: method 1 (assay based on electrodeposition), which presented an efficiency of 65.16%; method 2 (assay based on chemical reactions, developed by David Kubiatowicz), which presented an efficiency of 70.80%; method 3 (chemical reaction based on the methodology developed by Dr. Maria Elisa Rostelato), which presented an efficiency of 55.80%; and a new method developed by IPEN, with 90.5% efficiency. Based on the results, the new method is suggested for implementation. (authors)

  11. A method to derive fixed budget results from expected optimisation times

    DEFF Research Database (Denmark)

    Doerr, Benjamin; Jansen, Thomas; Witt, Carsten

    2013-01-01

    At last year's GECCO a novel perspective for theoretical performance analysis of evolutionary algorithms and other randomised search heuristics was introduced that concentrates on the expected function value after a pre-defined number of steps, called the budget. This is significantly different from the common perspective, where the expected optimisation time is analysed. While there is a huge body of work and a large collection of tools for the analysis of the expected optimisation time, the new fixed budget perspective introduces new analytical challenges. Here it is shown how results on the expected optimisation time that are strengthened by deviation bounds can be systematically turned into fixed budget results. We demonstrate our approach by considering the (1+1) EA on LeadingOnes and significantly improving previous results. We prove that deviating from the expected time by an additive term of ω(n3...

  12. Genetic relationships among wild and cultivated accessions of curry leaf plant (Murraya koenigii (L.) Spreng.), as revealed by DNA fingerprinting methods.

    Science.gov (United States)

    Verma, Sushma; Rana, T S

    2013-02-01

    Murraya koenigii (L.) Spreng. (Rutaceae) is an aromatic plant much valued for its flavor and its nutritive and medicinal properties. In this study, three DNA fingerprinting methods, random amplification of polymorphic DNA (RAPD), directed amplification of minisatellite DNA (DAMD), and inter-simple sequence repeat (ISSR), were used to unravel the genetic variability and relationships across 92 wild and cultivated M. koenigii accessions. A total of 310, 102, and 184 DNA fragments were amplified using 20 RAPD, 5 DAMD, and 13 ISSR primers, revealing 95.80, 96.07, and 96.73% polymorphism, respectively, across all accessions. The average polymorphic information content value obtained with RAPD, DAMD, and ISSR markers was 0.244, 0.250, and 0.281, respectively. The UPGMA tree, based on Jaccard's similarity coefficient generated from the cumulative (RAPD, DAMD, and ISSR) band data, showed two distinct clusters, clearly separating wild and cultivated accessions in the dendrogram. Percentage polymorphism, gene diversity (H), and Shannon information index (I) estimates were higher in cultivated accessions than in wild accessions. The overall high level of polymorphism and the wide range of genetic distances reveal a broad genetic base in M. koenigii accessions. The study suggests that RAPD, DAMD, and ISSR markers are highly useful for unravelling the genetic variability in wild and cultivated accessions of M. koenigii.
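
    The pipeline in the abstract (binary band scoring, Jaccard similarity, UPGMA clustering) maps directly onto standard tooling. A minimal sketch on a toy band matrix follows; the data are invented, and real matrices would have hundreds of fragments:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist

# Toy binary band matrix: rows = accessions, columns = scored DNA fragments
# (1 = band present, 0 = absent).
bands = np.array([[1, 0, 1, 1, 0, 1],
                  [1, 0, 1, 0, 0, 1],
                  [0, 1, 0, 1, 1, 0],
                  [0, 1, 1, 1, 1, 0]])

# Jaccard distance = 1 - Jaccard similarity; UPGMA = 'average' linkage
dist = pdist(bands, metric="jaccard")
tree = linkage(dist, method="average")
print(tree)   # pass to dendrogram(tree) to draw the UPGMA tree
```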

  13. In silico and experimental methods revealed highly diverse bacteria with quorum sensing and aromatics biodegradation systems--a potential broad application on bioremediation.

    Science.gov (United States)

    Huang, Yili; Zeng, Yanhua; Yu, Zhiliang; Zhang, Jing; Feng, Hao; Lin, Xiuchun

    2013-11-01

    Phylogenetic overlaps between aromatics-degrading bacteria and acyl-homoserine-lactone (AHL) or autoinducer (AI) based quorum-sensing (QS) bacteria are evident in the literature; however, the diversity of bacteria with both activities had never been finely described. In silico searching of the NCBI genome database revealed that more than 11% of the investigated population harbored both an aromatic ring-hydroxylating dioxygenase (RHD) gene and an AHL/AI synthetase gene. These bacteria were distributed in 10 orders, 15 families, 42 genera and 78 species. Horizontal transfers of both genes were common among them. Using enrichment and culture-dependent methods, 6 Sphingomonadales and 4 Rhizobiales with phenanthrene- or pyrene-degrading ability and AHL production were isolated from marine, wetland and soil samples. Thin-layer chromatography and gas chromatography-mass spectrometry revealed that these Sphingomonads produced various AHL molecules. This is the first report of highly diverse bacteria harboring both aromatics-degrading and QS systems. QS regulation may have broad impacts on aromatics biodegradation and would be a new angle for developing bioremediation technology. Copyright © 2013 Elsevier Ltd. All rights reserved.
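
    Conceptually, the in-silico overlap search reduces to intersecting two annotation sets per genome. A toy sketch under that reading (hypothetical genome names and gene labels, not the actual NCBI query used in the study):

```python
# Map each genome to its annotated gene families (toy data, not NCBI records).
genomes = {
    "Sphingomonas sp. A": {"RHD", "AHL_synthetase"},
    "Rhizobium sp. B": {"AHL_synthetase"},
    "Pseudomonas sp. C": {"RHD"},
}

# Genomes harboring both an RHD gene and an AHL/AI synthetase gene.
both = [name for name, genes in genomes.items()
        if {"RHD", "AHL_synthetase"} <= genes]
print(both, f"({len(both) / len(genomes):.0%} of investigated genomes)")
```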

  14. First in-flight results of Pleiades 1A innovative methods for optical calibration

    Science.gov (United States)

    Kubik, Philippe; Lebègue, Laurent; Fourest, Sébastien; Delvit, Jean-Marc; de Lussy, Françoise; Greslou, Daniel; Blanchet, Gwendoline

    2017-11-01

    The PLEIADES program is a space Earth Observation system led by France, under the leadership of the French Space Agency (CNES). Since it was successfully launched on December 17th, 2011, the Pleiades 1A high resolution optical satellite has been thoroughly tested and validated during the commissioning phase led by CNES. The whole system has been designed to deliver submetric optical images to users whose needs were taken into account very early in the design process. This satellite opens a new era in Europe, since its off-nadir viewing capability delivers worldwide 2-day access, and its great agility makes it possible to image numerous targets, strips and stereo coverage from the same orbit. Its imaging capability of more than 450 images of 20 km x 20 km per day can fulfill a broad spectrum of applications for both civilian and defence users. For an earth observing satellite with no on-board calibration source, the commissioning phase is a critical quest for well-characterized earth landscapes and ground patterns that have to be imaged by the camera in order to compute or fit the parameters of the viewing models. It may take a long time to get the required scenes with no cloud, whilst atmosphere corrections need simultaneous measurements that are not always possible. The paper focuses on new in-flight calibration methods that were prepared before the launch in the framework of the PLEIADES program: they take advantage of the satellite agility, which can deeply relax the operational constraints and may improve calibration accuracy. Many performances of the camera were assessed thanks to a dedicated innovative method that was successfully validated during the commissioning period: Modulation Transfer Function (MTF), refocusing, absolute calibration and line-of-sight stability were estimated on stars and on the Moon. Detector normalization and radiometric noise were computed on specific pictures of the Earth with a dedicated guidance profile. Geometric viewing frame was

  15. "Rehabilitation schools for scoliosis" thematic series: describing the methods and results

    Directory of Open Access Journals (Sweden)

    Grivas Theodoros B

    2010-12-01

    The Scoliosis Rehabilitation model begins with the correct diagnosis and evaluation of the patient, to make treatment decisions oriented to the patient. The treatment is based on observation, education, scoliosis-specific exercises, and bracing. The state of research in the field of conservative treatment is insufficient. There is some evidence supporting scoliosis-specific exercises as a part of the rehabilitation treatment; however, the evidence is poor and the different methods are not known by most of the scientific community. The only way to improve the knowledge and understanding of the different physiotherapy methodologies (specific exercises integrated into the whole rehabilitation program) is to establish a single and comprehensive source of information about them. This is what the SCOLIOSIS Journal is going to do through the "Rehabilitation Schools for Scoliosis" Thematic Series, where technical papers coming from the different schools will be published.

  16. Dosimetric methods and results of measurement for total body electron irradiation

    International Nuclear Information System (INIS)

    Feng Ningyuan; Yu Geng; Yu Zihao

    1987-01-01

    A modified Stanford TSEI technique (dual angled gantry, 6 turntable angles and 12 fields) was developed on a PHILIPS SL 75-20 linear accelerator to treat mycosis fungoides. A plastic scatter screen, 5 mm in thickness, was used to reduce the primary electron energy to 4 MeV in order to control the treatment depth (d80 ≈ 1.2 cm) and raise the skin dose up to 89%. The X-ray contamination was at an acceptable level of 2%. The measurements, which involved multiple dosimetric methods, showed that the distance between the scatter screen and the patient, within 10-30 cm, had no influence on the percentage depth dose (PDD), and that the dose distribution on the body surface was reasonably homogeneous but strongly dependent on the anatomic position. For sites located in the electron beam shadows, boost irradiation might be necessary. The preliminary clinical trials indicated that this technique is valid and feasible

  17. A new method for deriving rigorous results on ππ scattering

    International Nuclear Information System (INIS)

    Caprini, I.; Dita, P.

    1979-06-01

    We develop a new approach to the problem of constraining the ππ scattering amplitudes by means of the axiomatically proved properties of unitarity, analyticity and crossing symmetry. The method is based on the solution of an extremal problem on a convex set of analytic functions and provides a global description of the domain of values taken by any finite number of partial waves at an arbitrary set of unphysical energies, compatible with unitarity, the bounds at complex energies derived from generalized dispersion relations and the crossing integral relations. From this domain we obtain new absolute bounds for the amplitudes as well as rigorous correlations between the values of various partial waves. (author)

  18. Radioimmunoassay for human myoglobin: methods and results in patients with skeletal muscle or myocardial disorders

    International Nuclear Information System (INIS)

    Miyoshi, K.; Saito, S.; Kawai, H.; Kondo, A.; Iwasa, M.; Hayashi, T.; Yagita, M.

    1978-01-01

    A sensitive and specific radioimmunoassay has been developed for the measurement of serum Mb. Immunization of a rabbit with human Mb yielded anti-Mb antibody, which was purified by affinity chromatography. Human hemoglobin, CK, and the components of serum per se did not appear to cross-react with the antibody. Mb was radiolabeled by the chloramine T method. The radioimmunoassay could detect as little as 0.3 ng of Mb and was not affected by hemolysis. Information is also given on precision, recovery, and specimen preservation. Mb levels could be detected in all of 120 normal adults, and the values ranged between 1 and 28 ng/ml (mean, 13.1 ± 6.1). No sex difference was observed. Levels were markedly elevated in all the patients with progressive muscular dystrophy, especially in the Duchenne type, at 40 to 1700 ng/ml. It was also noticed that about 70% of female gene carriers of the Duchenne type had a slightly increased Mb level. An elevated serum Mb was also noted in polymyositis. In every case of acute myocardial infarction, serum Mb levels were increased, with peak values ranging from 175 to 4400 ng/ml and averaging 1162 ± 287.9. Mb levels were elevated faster and peaked earlier (within 6 to 12 hr after the attack) than serum CK activity and returned to nearly the normal range within 3 to 4 days. An increase in serum Mb was also noticed in shock and surgery. These data indicate that radioimmunoassay of Mb is a useful test for judging the myolytic state of myogenic myopathies and for early detection of myocardial infarction

  19. Indications and organisational methods for autologous blood transfusion procedures in Italy: results of a national survey.

    Science.gov (United States)

    Catalano, Liviana; Campolongo, Alessandra; Caponera, Maurizio; Berzuini, Alessandra; Bontadini, Andrea; Furlò, Giuseppe; Pasqualetti, Patrizio; Liumbruno, Giancarlo M

    2014-10-01

    Pre-operative donation of autologous blood is a practice that is now being abandoned. Alternative methods of transfusing autologous blood, other than predeposited blood, do however play a role in limiting the need for transfusion of allogeneic blood. This survey of autologous blood transfusion practices, promoted by the Italian Society of Transfusion Medicine and Immunohaematology more than 2 years after the publication of national recommendations on the subject, was intended to acquire information on the indications for predeposit in Italy and on some organisational aspects of the alternative techniques of autotransfusion. A structured questionnaire consisting of 22 questions on the indications and organisational methods of autologous blood transfusion was made available on a web platform from 15 January to 15 March, 2013. The 232 Transfusion Services in Italy were invited by e-mail to complete the online survey. Of the 232 transfusion structures contacted, 160 (69%) responded to the survey, with the response rate decreasing from the North towards the South and the Islands. The use of predeposit has decreased considerably in Italy and about 50% of the units collected are discarded because of lack of use. Alternative techniques (acute isovolaemic haemodilution and peri-operative blood salvage) are used at different frequencies across the country. The data collected in this survey can be considered representative of national practice; they show that the already very limited indications for predeposit autologous blood transfusion must be adhered to even more scrupulously, also to avoid the notable waste of resources due to unused units. Users of alternative autotransfusion techniques must be involved in order to gain a full picture of the degree of use of such techniques; multidisciplinary agreement on the indications for their use is essential in order for these indications to have an effective role in "patient blood management" programmes.

  20. Evaluation and perceived results of moral case deliberation: A mixed methods study

    NARCIS (Netherlands)

    Janssens, R.; van Zadelhoff, E.; van Loo, G.; Widdershoven, G.A.; Molewijk, A.C.

    2015-01-01

    Background: Moral case deliberation is increasingly becoming part of various Dutch healthcare organizations. Although some evaluation studies of moral case deliberation have been carried out, research into the results of moral case deliberation within aged care is scarce. Research questions: How did

  1. Does It Matter? Analyzing the Results of Three Different Learning Delivery Methods

    Science.gov (United States)

    Chernish, William N.; DeFranco, Agnes L.; Lindner, James R.; Dooley, Kim E.

    2005-01-01

    The increasing diversity of learners and their preferences coupled with increasing usage of the computer and Internet prompted the need for testing and verifying the ways that knowledge can be delivered and learned effectively. This research addresses these concerns by comparing the results of a college course, hospitality human resource…

  2. The ATLAS3D project - IX. The merger origin of a fast- and a slow-rotating early-type galaxy revealed with deep optical imaging: first results

    Science.gov (United States)

    Duc, Pierre-Alain; Cuillandre, Jean-Charles; Serra, Paolo; Michel-Dansac, Leo; Ferriere, Etienne; Alatalo, Katherine; Blitz, Leo; Bois, Maxime; Bournaud, Frédéric; Bureau, Martin; Cappellari, Michele; Davies, Roger L.; Davis, Timothy A.; de Zeeuw, P. T.; Emsellem, Eric; Khochfar, Sadegh; Krajnović, Davor; Kuntschner, Harald; Lablanche, Pierre-Yves; McDermid, Richard M.; Morganti, Raffaella; Naab, Thorsten; Oosterloo, Tom; Sarzi, Marc; Scott, Nicholas; Weijmans, Anne-Marie; Young, Lisa M.

    2011-10-01

    The mass assembly of galaxies leaves imprints in their outskirts, such as shells and tidal tails. The frequency and properties of such fine structures depend on the main acting mechanisms - secular evolution, minor or major mergers - and on the age of the last substantial accretion event. We use this to constrain the mass assembly history of two apparently relaxed nearby early-type galaxies (ETGs) selected from the ATLAS3D sample, NGC 680 and NGC 5557. Our ultra-deep optical images obtained with MegaCam on the Canada-France-Hawaii Telescope reach 29 mag arcsec⁻² in the g band. They reveal very low surface brightness (LSB) filamentary structures around these ellipticals. Among them, a gigantic 160 kpc long, narrow tail east of NGC 5557 hosts three gas-rich star-forming objects, previously detected in H I with the Westerbork Synthesis Radio Telescope and in UV with GALEX. NGC 680 exhibits two major diffuse plumes apparently connected to extended H I tails, as well as a series of arcs and shells. Comparing the outer stellar and gaseous morphology of the two ellipticals with that predicted from models of colliding galaxies, we argue that the LSB features are tidal debris and that each of these two ETGs was assembled during a relatively recent, major wet merger, which most likely occurred after the redshift z ≃ 0.5 epoch. Had these mergers been older, the tidal features should have already fallen back or been destroyed by more recent accretion events. However, the absence of molecular gas and of a prominent young stellar population in the core region of the galaxies indicates that the merger is at least 1-2 Gyr old: the memory of any merger-triggered nuclear starburst has indeed been lost. The star-forming objects found towards the collisional debris of NGC 5557 are then likely tidal dwarf galaxies. Such recycled galaxies here appear to be long-lived and continue to form stars while any star formation activity has stopped in their parent galaxy. The inner kinematics of NGC

  3. Measuring age differences among globular clusters having similar metallicities - A new method and first results

    International Nuclear Information System (INIS)

    Vandenberg, D.A.; Bolte, M.; Stetson, P.B.

    1990-01-01

    A color-difference technique for estimating the relative ages of globular clusters with similar chemical compositions on the basis of their color-magnitude (CM) diagrams is described and demonstrated. The theoretical basis and implementation of the procedure are explained, and results for groups of globular clusters with [m/H] ≈ -2, -1.6, and -1.3, and for two special cases (Palomar 12 and NGC 5139), are presented in extensive tables and graphs and discussed in detail. It is found that the more metal-deficient globular clusters are nearly coeval (differences less than 0.5 Gyr), whereas the most metal-rich globular clusters exhibit significant age differences (about 2 Gyr). This result is shown to contradict Galactic evolution models postulating halo collapse in less than a few times 100 Myr. 77 refs

  4. Review of solution approach, methods, and recent results of the RELAP5 system code

    International Nuclear Information System (INIS)

    Trapp, J.A.; Ransom, V.H.

    1983-01-01

    The present RELAP5 code is based on a semi-implicit numerical scheme for the hydrodynamic model. The basic guidelines employed in the development of the semi-implicit numerical scheme are discussed and the numerical features of the scheme are illustrated by analysis of a simple, but analogous, single-equation model. The basic numerical scheme is presented and results from several simulations are shown. The experimental results and code simulations are used in a complementary fashion to develop insights into nuclear-plant response that would not be obtained if either tool were used alone. Further analysis using the simple single-equation model is carried out to yield insights that are presently being used to implement a more implicit multi-step scheme in the experimental version of RELAP5. The multi-step implicit scheme is also described
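
    The idea behind a semi-implicit treatment can be conveyed on a single-equation analogue. For a stiff relaxation equation du/dt = -a*u + s, treating the decay term implicitly and the source explicitly gives an update that remains stable for any step size. The sketch below illustrates that generic idea under these assumptions; it is not RELAP5's actual hydrodynamic scheme.

```python
def semi_implicit_step(u, a, s, dt):
    """One semi-implicit step for du/dt = -a*u + s:
    (u_new - u)/dt = -a*u_new + s  =>  u_new = (u + dt*s) / (1 + dt*a).
    The stiff decay term is implicit, so the update stays stable
    even when dt*a is large."""
    return (u + dt * s) / (1.0 + dt * a)

u = 1.0
for _ in range(10):
    u = semi_implicit_step(u, a=100.0, s=5.0, dt=0.1)
print(u)  # relaxes toward the steady state s/a = 0.05
```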

  5. Performance of various mathematical methods for computer-aided processing of radioimmunoassay results

    International Nuclear Information System (INIS)

    Vogt, W.; Sandel, P.; Langfelder, Ch.; Knedel, M.

    1978-01-01

    The performance of six algorithms for computer-aided determination of radioimmunological end results was compared: weighted and unweighted linear logit-log regression; quadratic logit-log regression; smoothing spline interpolation with a large and a small smoothing factor, respectively; polygonal interpolation; and manual curve fitting, on the basis of three radioimmunoassays with different reference-curve characteristics (digoxin, estriol, human chorionic somatomammotrophin (HCS)). Particular emphasis was placed on the accuracy of the approximation at the intermediate points of the curve, i.e. those points that lie midway between two standard concentrations. These concentrations were obtained by weighing and inserted as unknown samples. In the case of digoxin and estriol the polygonal interpolation provided the best results, while the weighted logit-log regression proved superior in the case of HCS. (Auth.)
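
    As a sketch of the simplest of these algorithms, unweighted linear logit-log regression fits logit(B/B0) against log concentration and inverts the fitted line to read unknowns. The code below is a generic illustration with hypothetical standards, not the authors' implementation.

```python
import numpy as np

def fit_logit_log(conc, bound_fraction):
    """Unweighted linear logit-log regression for an RIA standard curve:
    logit(B/B0) is fitted as a straight line in log(concentration)."""
    b = np.asarray(bound_fraction, dtype=float)
    y = np.log(b / (1.0 - b))                 # logit of the bound fraction
    x = np.log(np.asarray(conc, dtype=float))
    slope, intercept = np.polyfit(x, y, 1)
    return slope, intercept

def read_concentration(bound_fraction, slope, intercept):
    """Invert the fitted line to estimate the concentration of an unknown."""
    y = np.log(bound_fraction / (1.0 - bound_fraction))
    return float(np.exp((y - intercept) / slope))

# Hypothetical standards (concentrations in ug/l, measured bound fractions).
slope, intercept = fit_logit_log([0.5, 1.0, 2.0, 4.0], [0.75, 0.60, 0.42, 0.28])
print(read_concentration(0.50, slope, intercept))
```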

  6. Conversion Method of the Balance Test Results in Open Jet Tunnel on the Free Flow Conditions

    Directory of Open Access Journals (Sweden)

    V. T. Bui

    2015-01-01

    The paper considers the problem of sizing a model and converting balance test results obtained in a low-speed open-jet wind tunnel to free-flow conditions. The ANSYS Fluent commercial code performs flow model calculations in the test section and in the free flow, and the ANSYS ICEM CFD module is used for grid generation. A structured grid is generated for the free flow and an unstructured one for the test section. The changes in the aerodynamic coefficients are determined at different values of the blockage factor for segmental-conical and hemisphere-cylinder-cone model shapes. The blockage factor values are found at which the test section-model interference can be neglected. The paper presents a technique to convert the wind tunnel test results to free-flow conditions.
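
    A minimal sketch of the final selection step, assuming the coefficients from the test-section and free-flow CFD runs are already tabulated; the tolerance is an arbitrary choice for illustration, not a value from the paper.

```python
def negligible_blockage(eps_values, coeff_values, coeff_free, tol=0.01):
    """Return the blockage factors at which the test-section coefficient
    differs from the free-flow value by less than `tol` (relative),
    i.e. test section-model interference can be neglected."""
    return [eps for eps, c in zip(eps_values, coeff_values)
            if abs(c - coeff_free) / abs(coeff_free) < tol]

# Hypothetical drag coefficients at several blockage factors vs. a free-flow run.
print(negligible_blockage([0.02, 0.05, 0.10], [1.002, 1.014, 1.060], 1.000))
```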

  7. Comparison of Different Methods for Transverse Emittance Measurement and Recent Results from LEP

    CERN Document Server

    Bovet, Claude; Jung, R

    1997-01-01

    Knowledge of a beam's transverse position and angular distributions is of utmost interest in assessing its behaviour within an accelerator. After a short reminder of beam "emittance" definitions, a review is made of the various measurement techniques used so far, both in single-pass machines and in colliders. Results of measurements made at CERN in the future LHC injection complex and in LEP are presented and discussed.

  8. USA/FBR program fast flux test facility startup physics and reactor characterization methods and results

    International Nuclear Information System (INIS)

    Bennett, R.A.; Harris, R.A.; Daughtry, J.W.

    1981-09-01

    Final confirmation of much of the engineering mockup work has been achieved in FTR zero-power experiments in February 1980, and in power demonstrations performed in December 1980 and March 1981. Final in-core low-power and high-power irradiation of spatially distributed radioactivants will be completed late in 1981. This paper describes the physics experiments and presents summaries of the extensive results accumulated to date. 53 figures

  9. Methods and results of a probabilistic risk assessment for radioactive waste transports

    International Nuclear Information System (INIS)

    Lange, F.; Gruendler, D.; Schwarz, G.

    1993-01-01

    The radiological risk from accidents has been analyzed for the expected annual transport volume (3400 shipping units) of low-level and partly intermediate-level radioactive wastes to be shipped to a final repository. In order to take account of the variable quantities and conditions involved, a computer code was developed to simulate a wide spectrum of waste transport and accident configurations using Monte Carlo sampling techniques. Typically, some 10,000 source terms were generated to represent possible releases of radionuclides from transport accidents. Accident events in which the integrity of waste packagings is retained, and consequently no releases occur, are included. Potential radiological consequences are then calculated for each of the release categories by using an accident consequence code which takes into account atmospheric dispersion statistics. Finally, cumulative complementary frequency distributions of radiological consequences are generated by superposing the results for all release categories. Radiological consequences are primarily expressed as potential effective individual doses resulting from airborne and deposited radionuclides. The results of the risk analysis show that the expected frequencies of effective doses comparable to one year of natural radiation exposure are quite low, and very low for potential radiation exposures in the range of 50 mSv. (J.P.N.)
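
    The superposition step can be illustrated in a few lines: given release categories with expected annual frequencies and computed doses, the complementary cumulative frequency distribution gives the expected frequency of exceeding each dose level. A toy sketch with made-up numbers, not the study's source terms:

```python
import numpy as np

# Hypothetical release categories: expected annual frequency and dose (mSv).
frequencies = np.array([1e-2, 3e-3, 5e-4, 2e-5])
doses = np.array([0.01, 0.1, 1.0, 50.0])

def ccdf(dose_levels):
    """Expected annual frequency of a consequence exceeding each dose level
    (complementary cumulative frequency distribution)."""
    return [float(frequencies[doses > level].sum()) for level in dose_levels]

print(ccdf([0.05, 0.5, 10.0]))
```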

  10. Method for purification of environmental objects contaminated with radioactive substances as a result of natural disasters

    International Nuclear Information System (INIS)

    Mammadov, Kh.; Shiraliyeva, Kh.; Mirzayev, N.; Garibov, R.; Allahverdiyev, G.; Aliyeva, U.; Farajova, A.

    2017-01-01

    ..., centrifugation and evaporation) and vegetation (firing, treatment with nitric acid and distilled water) taken from all regions of the republic were carried out. Mineral compositions were studied by X-ray fluorescence, γ, α and β spectroscopy, electron microscopy and chemical methods. Four washes (stirring and extraction) with distilled water were used to clean the soil samples. The soil can be re-washed to achieve complete cleaning of deeply contaminated soils. Sedimentation, centrifugation, mass spectrometry and adsorption on activated carbon or on porous organic adsorbents (copolymers of maleic anhydride with styrene hardened by polyethylene polyamines) were used for the purification (separation of heavy elements and radioactive isotopes) of water samples and water extracts of soil.

  11. Pharmaceutical companies' policies on access to trial data, results, and methods: audit study.

    Science.gov (United States)

    Goldacre, Ben; Lane, Síle; Mahtani, Kamal R; Heneghan, Carl; Onakpoya, Igho; Bushfield, Ian; Smeeth, Liam

    2017-07-26

    Objectives To identify the policies of major pharmaceutical companies on transparency of trials, to extract structured data detailing each company's commitments, and to assess concordance with ethical and professional guidance. Design Structured audit. Setting Pharmaceutical companies, worldwide. Participants 42 pharmaceutical companies. Main outcome measures Companies' commitments on sharing summary results, clinical study reports (CSRs), individual patient data (IPD), and trial registration, for prospective and retrospective trials. Results Policies were highly variable. Of 23 companies eligible from the top 25 companies by revenue, 21 (91%) committed to register all trials and 22 (96%) committed to share summary results; however, policies commonly lacked timelines for disclosure, and trials on unlicensed medicines and off-label uses were included in only six (26%). 17 companies (74%) committed to share the summary results of past trials. The median start date for this commitment was 2005. 22 companies (96%) had a policy on sharing CSRs, mostly on request: two committed to share only synopses and only two policies included unlicensed treatments. 22 companies (96%) had a policy to share IPD; 14 included phase IV trials (one included trials on unlicensed medicines and off-label uses). Policies in the exploratory group of smaller companies made fewer transparency commitments. Two companies fell short of industry body commitments on registration, three on summary results. Examples of contradictory and ambiguous language were documented and summarised by theme. 23/42 companies (55%) responded to feedback; 7/1806 scored policy elements were revised in light of feedback from companies (0.4%). Several companies committed to changing policy; some made changes immediately. Conclusions The commitments made by companies to transparency of trials were highly variable. Other than journal submission for all trials within 12 months, all elements of best practice

  12. False positive results using calcitonin as a screening method for medullary thyroid carcinoma

    Directory of Open Access Journals (Sweden)

    Rafael Loch Batista

    2013-01-01

    The role of serum calcitonin in the evaluation of thyroid nodules has been widely discussed in the literature. However, there is still no consensus on measuring calcitonin in the initial evaluation of a patient with a thyroid nodule. Problems concerning cost-benefit, laboratory methods, false positives and the low prevalence of medullary thyroid carcinoma (MTC) are factors that limit this approach. We illustrate two cases in which serum calcitonin was used in the evaluation of a thyroid nodule and levels proved to be high. A stimulation test was performed, using calcium as secretagogue, and calcitonin hyper-stimulation was confirmed, but anatomopathological examination did not show medullary neoplasia. Anatomopathological diagnosis detected Hashimoto thyroiditis in one case and adenomatous goiter plus an occult papillary thyroid carcinoma in the other. Routine use of serum calcitonin in the initial diagnostic evaluation of a thyroid nodule, followed by a confirmatory stimulation test if basal serum calcitonin is high, is the most commonly recommended approach, but questions concerning cost-benefit and the possibility of diagnostic error make the validity of this recommendation debatable.

  13. Accelerated life-test methods and results for implantable electronic devices with adhesive encapsulation.

    Science.gov (United States)

    Huang, Xuechen; Denprasert, Petcharat May; Zhou, Li; Vest, Adriana Nicholson; Kohan, Sam; Loeb, Gerald E

    2017-09-01

    We have developed and applied new methods to estimate the functional life of miniature, implantable, wireless electronic devices that rely on non-hermetic, adhesive encapsulants such as epoxy. A comb pattern board with a high density of interdigitated electrodes (IDE) could be used to detect incipient failure from water vapor condensation. Inductive coupling of an RF magnetic field was used to provide DC bias and to detect deterioration of an encapsulated comb pattern. Diodes in the implant converted part of the received energy into DC bias on the comb pattern. The capacitance of the comb pattern forms a resonant circuit with the inductor by which the implant receives power. Any moisture affects both the resonant frequency and the Q-factor of the resonance of the circuitry, which was detected wirelessly by its effects on the coupling between two orthogonal RF coils placed around the device. Various defects were introduced into the comb pattern devices to demonstrate sensitivity to failures and to correlate these signals with visual inspection of failures. Optimized encapsulation procedures were validated in accelerated life tests of both comb patterns and a functional neuromuscular stimulator under development. Strong adhesive bonding between epoxy and electronic circuitry proved to be necessary and sufficient to predict 1 year packaging reliability of 99.97% for the neuromuscular stimulator.
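
    The wireless readout rests on a standard LC resonance: the comb pattern's capacitance and the power-receiving inductor set the resonant frequency, so moisture-induced capacitance changes shift it. A sketch of that relation with illustrative component values (the 10 µH and 100 pF figures are assumptions, not values from the paper):

```python
import math

def resonant_frequency(L, C):
    """Resonant frequency of an LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# Illustrative values: 10 uH receive coil, 100 pF comb-pattern capacitance.
f_dry = resonant_frequency(10e-6, 100e-12)
f_wet = resonant_frequency(10e-6, 120e-12)  # moisture raises the capacitance
print(f"dry: {f_dry / 1e6:.2f} MHz, wet: {f_wet / 1e6:.2f} MHz")
```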

  14. Generalist palliative care in hospital - Cultural and organisational interactions. Results of a mixed-methods study.

    Science.gov (United States)

    Bergenholtz, Heidi; Jarlbaek, Lene; Hølge-Hazelton, Bibi

    2016-06-01

    It can be challenging to provide generalist palliative care in hospitals, owing to difficulties in integrating disease-oriented treatment with palliative care and the influences of cultural and organisational conditions. However, knowledge on the interactions that occur is sparse. To investigate the interactions between organisation and culture as conditions for integrated palliative care in hospital and, if possible, to suggest workable solutions for the provision of generalist palliative care. A convergent parallel mixed-methods design was chosen using two independent studies: a quantitative study, in which three independent datasets were triangulated to study the organisation and evaluation of generalist palliative care, and a qualitative, ethnographic study exploring the culture of generalist palliative nursing care in medical departments. A Danish regional hospital with 29 department managements and one hospital management. Two overall themes emerged: (1) 'generalist palliative care as a priority at the hospital', suggesting contrasting issues regarding prioritisation of palliative care at different organisational levels, and (2) 'knowledge and use of generalist palliative care clinical guideline', suggesting that the guideline had not reached all levels of the organisation. Contrasting issues in the hospital's provision of generalist palliative care at different organisational levels seem to hamper the interactions between organisation and culture - interactions that appear to be necessary for the provision of integrated palliative care in the hospital. The implementation of palliative care is also hindered by the main focus being on disease-oriented treatment, which is reflected at all the organisational levels. © The Author(s) 2015.

  15. Accelerated stress testing of thin film solar cells: Development of test methods and preliminary results

    Science.gov (United States)

    Lathrop, J. W.

    1985-01-01

    If thin film cells are to be considered a viable option for terrestrial power generation, their reliability attributes will need to be explored and confidence in their stability obtained through accelerated testing. Development of a thin film accelerated test program will be more difficult than was the case for crystalline cells because of the monolithic construction of the cells. Specially constructed test samples will need to be fabricated, requiring commitment to the concept of accelerated testing by the manufacturers. A new test schedule appropriate to thin film cells will need to be developed, different from that used in connection with crystalline cells. Preliminary work has been started to seek thin film schedule variations on two of the simplest tests: unbiased temperature and unbiased temperature-humidity. Still to be examined are tests which involve the passage of current during temperature and/or humidity stress, either by biasing in the forward (or reverse) direction or by the application of light during stress. Investigation of these current (voltage) accelerated tests will involve development of methods of reliably contacting the thin conductive films during stress.

  16. Joint hyperlaxity prevents relapses in clubfeet treated by Ponseti method-preliminary results.

    Science.gov (United States)

    Cosma, Dan Ionuţ; Corbu, Andrei; Nistor, Dan Viorel; Todor, Adrian; Valeanu, Madalina; Morcuende, Jose; Man, Sorin

    2018-05-07

    The aim of the study was to evaluate the role of joint hyperlaxity (by Beighton score) as a protective factor against clubfoot relapse. Patients with idiopathic clubfoot treated with the Ponseti method between January 2004 and December 2012, without other congenital foot deformity, and not previously treated by open surgery were included in either the Relapse group (n = 23), if a clubfoot relapse occurred, or the Control group (n = 19), if no relapse was noted. Joint laxity was evaluated using the Beighton score at the latest follow-up against the Normal group (n = 22, children matched by sex and age without clubfoot deformity). We found significantly higher joint laxity in the Control group (4.58, 95% confidence interval [CI]: 2.1-7.06) compared with the Relapse (3.17, 95% CI: 1.53-4.81, p = 0.032) and Normal (3.14, 95% CI: 1.78-4.5, p = 0.03) groups. Univariate logistic regression showed a 5.28-fold increase in the risk of relapse for a Beighton score lower than 4/9 points (odds ratio = 5.28; 95% CI = 1.29-21.5; p = 0.018). Joint hyperlaxity could be a protective factor against clubfoot relapse.
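
    For reference, an odds ratio of this kind can be reproduced from a 2x2 table with the standard Woolf (log) method. The counts below are placeholders for illustration, not the study's exact table.

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio and 95% CI (Woolf method) for a 2x2 table:
    a = relapse & low laxity,    b = relapse & high laxity,
    c = no relapse & low laxity, d = no relapse & high laxity."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    low = math.exp(math.log(or_) - 1.96 * se)
    high = math.exp(math.log(or_) + 1.96 * se)
    return or_, low, high

print(odds_ratio(15, 8, 7, 12))  # hypothetical counts
```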

  17. Preliminary results of oxygen isotope ratio measurement with a particle-gamma coincidence method

    Energy Technology Data Exchange (ETDEWEB)

    Borysiuk, Maciek, E-mail: maciek.borysiuk@pixe.lth.se; Kristiansson, Per; Ros, Linus; Abdel, Nassem S.; Elfman, Mikael; Nilsson, Charlotta; Pallon, Jan

    2015-04-01

    The possibility of studying variations in the oxygen isotopic ratio with photon-tagged nuclear reaction analysis (pNRA) is evaluated in the current work. The experiment described in the article was performed at the Lund Ion Beam Analysis Facility (LIBAF) with a 2 MeV deuteron beam. Isotopic fractionation of light elements such as carbon, oxygen and nitrogen is the basis of many analytical tools in hydrology, geology, paleobiology and paleogeology. IBA methods provide one possible tool for measurement of isotopic content. During this experimental run we focused on measurement of the oxygen isotopic ratio. The measurement of stable isotopes of oxygen has a number of applications; the particular one driving the current investigation belongs to the field of astrogeology, specifically the evaluation of fossil extraterrestrial material. There are three stable isotopes of oxygen: ¹⁶O, ¹⁷O and ¹⁸O. We procured samples highly enriched in all three isotopes. The isotopes ¹⁶O and ¹⁸O were easily detected in the enriched samples, but no significant signal from ¹⁷O was detected in the same samples. The measured yield was too low to detect ¹⁸O in a sample with natural abundances of the oxygen isotopes, at least in the current experimental setup, but the spectral line from the reaction with ¹⁶O was clearly visible.

  18. Variation in Results of Volume Measurements of Stumps of Lower-Limb Amputees : A Comparison of 4 Methods

    NARCIS (Netherlands)

    de Boer-Wilzing, Vera G.; Bolt, Arjen; Geertzen, Jan H.; Emmelot, Cornelis H.; Baars, Erwin C.; Dijkstra, Pieter U.

    de Boer-Wilzing VG, Bolt A, Geertzen JH, Emmelot CH, Baars EC, Dijkstra PU. Variation in results of volume measurements of stumps of lower-limb amputees: a comparison of 4 methods. Arch Phys Med Rehabil 2011;92:941-6. Objective: To analyze the reliability of 4 methods (water immersion,

  19. Studies on the comparability of the results from different methods for the radioimmunological determination of digoxin

    International Nuclear Information System (INIS)

    Dwenger, A.; Trautschold, I.

    1978-01-01

    Three iodine-125-digoxin radioimmunoassay kits (A: Amersham Buchler; B: Boehringer Mannheim; C: Schwarz Mann/Becton Dickinson) were evaluated with respect to assay quality and comparability of the results. Intra- and interassay variances were calculated for the following types of samples: three media (a: pool serum; b: artificial human serum; c: buffer solution with albumin and globulin) containing pure digoxin, sera from a pharmacokinetic study, sera with different concentrations of proteins, a hemolytic serum, and sera with digitoxin and metabolites of spironolactone. The intra-assay precision depended on the medium of the sample and was higher for samples with identical digoxin concentrations in an identical medium (e.g. CV for 2 μg/l in medium a: 4.3% for kit A, 7.0% for kit B, 2.2% for kit C) than for samples with identical antigen concentrations in different media (CV for 2 μg/l in media a, b and c: 6.4% for kit A, 9.1% for kit B, 4.3% for kit C). The mean recovery in the range 0.5-4 μg/l depended on the kind of medium (a, b or c) and varied from 84.4% to 100.8% for kit A, from 112.0% to 119.6% for kit B, and from 98.0% to 104.5% for kit C. Decreasing serum protein concentrations to less than one half of the physiological concentration gave false negative results for kit A and false positive results for kit C; for kit B this dependency was not observed, but there was a decrease in reproducibility. (orig./AJ)
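
    Intra- and interassay precision in comparisons like this one is conventionally expressed as the coefficient of variation. A one-function sketch of the generic formula (the replicate values are hypothetical, not the kits' data):

```python
import statistics

def cv_percent(replicates):
    """Coefficient of variation (%) = 100 * SD / mean, the usual precision
    measure for repeated determinations of the same sample."""
    return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)

print(cv_percent([1.95, 2.10, 2.02, 1.88, 2.05]))  # hypothetical 2 ug/l replicates
```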

  20. Assessing incentive contracts for reducing residential electricity consumption: new experimental methods for new results

    International Nuclear Information System (INIS)

    Frachet, Laure

    2013-01-01

    Facing economic, political and environmental stakes, electricity providers are nowadays developing incentive tools to reduce consumers' demand, particularly during peak demand periods. For residential customers, these tools can be tariffs (dynamic pricing or time-of-use tariffs) or informative devices and services (feedback on historical or real-time consumption, given on various media). They may be combined with automation systems that can switch off some electric devices when needed. In order to evaluate how customers respond to these settings, electricity utilities have carried out numerous studies, mainly field experiments often called pilots. In these pilots, demand response tools are implemented on a population sample. These long and expensive studies lead to quantitative and qualitative analyses. We compiled about 40 of them and extracted some generalizable lessons from this survey. We have shown what these results were and highlighted the methodological limits of pilot programs. In order to propose a substitute for these heavy experiments, we assessed the capacity of experimental economics. This relatively new discipline's objective is to evaluate the efficiency of institutions, like markets, but also to study what drives economic agents' behaviour, e.g. preferences, beliefs, cognitive biases and willingness to pay. We then elaborated an experimental protocol dedicated to evaluating the acceptability of some demand response contracts. The results collected during 14 experimental sessions gave us some innovative clues and insights into the acceptability of these contracts. Beyond these results, we have demonstrated that even if experimental economics obviously cannot substitute for field experiments, it can be an interesting exploratory methodology. To sum up, experimental economics can contribute to the understanding of residential customers' behaviour

  1. A Survey for hot Central Stars of Planetary Nebulae I. Methods and First Results

    OpenAIRE

    Kanarek, Graham C.; Shara, Michael M.; Faherty, Jacqueline K.; Zurek, David; Moffat, Anthony F. J.

    2015-01-01

    We present the results of initial spectrographic follow-up with the Very Large Telescope (UT3, Melipal) for K_s ≥ 14 Galactic plane C IV emission-line candidates in the near-infrared (NIR). These 7 faint stars all display prominent He I and C IV emission lines characteristic of a carbon-rich Wolf-Rayet star. They have NIR colours which are much too blue to be those of distant, classical WR stars. The magnitudes and colours are compatible with those expected for central stars of planetary nebu...

  2. Study on some factors affecting the results in the use of MIP method in concrete research

    International Nuclear Information System (INIS)

    Kumar, Rakesh; Bhattacharjee, B.

    2003-01-01

    The effects of the rate of pressure application and of the form and type of sample on the porosity and pore size distribution of concrete estimated through mercury intrusion porosimetry (MIP) are presented in this experimental work. Two different forms of concrete sample, namely crushed chunks of concrete and small cores drilled from concrete beam specimens, were used for this study. The results show that the rate of pressure application in mercury porosimetry has little effect on the porosity and pore size distribution of concrete. It is also demonstrated that small cores drilled from large concrete specimens are preferable as samples for porosimetry tests on concrete
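
    MIP converts intrusion pressure to pore size through the Washburn relation d = -4γ cos θ / P. A small sketch with commonly assumed mercury parameters (surface tension ≈ 0.485 N/m, contact angle ≈ 140°; both are assumptions, as the record does not state the values used):

```python
import math

def washburn_diameter(pressure_pa, gamma=0.485, theta_deg=140.0):
    """Pore diameter (m) intruded at a given mercury pressure (Pa):
    d = -4 * gamma * cos(theta) / P (Washburn equation)."""
    return -4.0 * gamma * math.cos(math.radians(theta_deg)) / pressure_pa

print(washburn_diameter(100e6))  # ~1.5e-8 m, i.e. ~15 nm pores at 100 MPa
```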

  3. Longitudinal bunch diagnostics using coherent transition radiation spectroscopy. Physical principles, multichannel spectrometer, experimental results, mathematical methods

    International Nuclear Information System (INIS)

    Schmidt, Bernhard; Wesch, Stephan; Behrens, Christopher; Koevener, Toke; Hass, Eugen; Casalbuoni, Sara

    2018-03-01

    The generation and properties of transition radiation (TR) are thoroughly treated. The spectral energy density, as described by the Ginzburg-Frank formula, is computed analytically, and the modifications caused by the finite size of the TR screen and by near-field diffraction effects are carefully analyzed. The principles of electron bunch shape reconstruction using coherent transition radiation are outlined. Spectroscopic measurements yield only the magnitude of the longitudinal form factor but not its phase. Two phase retrieval methods are investigated and illustrated with model calculations: analytic phase computation by means of the Kramers-Kronig dispersion relation, and iterative phase retrieval. Particular attention is paid to the ambiguities which are unavoidable in the reconstruction of longitudinal charge density profiles from spectroscopic data. The origin of these ambiguities has been identified and a thorough mathematical analysis is presented. The experimental part of the paper comprises a description of our multichannel infrared and THz spectrometer and a selection of measurements at FLASH, comparing the bunch profiles derived from spectroscopic data with those determined with a transversely deflecting microwave structure. A rigorous derivation of the Kramers-Kronig phase formula is presented in Appendix A. Numerous analytic model calculations can be found in Appendix B. The differences between normal and truncated Gaussians are discussed in Appendix C. Finally, Appendix D contains a short description of the propagation of an electromagnetic wave front by two-dimensional fast Fourier transformation. This is the basis of a powerful numerical Mathematica trademark code THzTransport, which permits the propagation of electromagnetic wave fronts through a beam line consisting of drift spaces, lenses, mirrors and apertures.
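
    For reference, the Kramers-Kronig (minimal) phase computed analytically from the measured modulus of the form factor has the standard form below (notation assumed here: F(ω) is the longitudinal form factor, ω₀ the frequency at which the phase is evaluated, and P denotes the principal value):

```latex
\psi(\omega_0) = -\frac{2\,\omega_0}{\pi}\,
  \mathcal{P}\!\int_{0}^{\infty}
  \frac{\ln\lvert F(\omega)\rvert - \ln\lvert F(\omega_0)\rvert}
       {\omega^{2} - \omega_0^{2}}\,\mathrm{d}\omega
```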

  4. Postgraduate Education in Quality Improvement Methods: Initial Results of the Fellows' Applied Quality Training (FAQT) Curriculum.

    Science.gov (United States)

    Winchester, David E; Burkart, Thomas A; Choi, Calvin Y; McKillop, Matthew S; Beyth, Rebecca J; Dahm, Phillipp

    2016-06-01

    Training in quality improvement (QI) is a pillar of the next accreditation system of the Accreditation Council for Graduate Medical Education and a growing expectation of physicians for maintenance of certification. Despite this, many postgraduate medical trainees are not receiving training in QI methods. We created the Fellows' Applied Quality Training (FAQT) curriculum for cardiology fellows, using both didactic and applied components, with the goal of increasing confidence to participate in future QI projects. Fellows completed didactic training from the Institute for Healthcare Improvement's Open School and then designed and completed a project to improve quality of care or patient safety. Self-assessments were completed by the fellows before, during, and after the first year of the curriculum. The primary outcome for our curriculum was the median score reported by the fellows regarding their self-confidence to complete QI activities. Self-assessments were completed by 23 fellows. The majority of fellows (15 of 23, 65.2%) reported no prior formal QI training. The median score on the baseline self-assessment was 3.0 (range, 1.85-4), which significantly increased to 3.27 (range, 2.23-4; P = 0.004) on the final assessment. The distribution of scores reported by the fellows indicates that 30% were only slightly confident at conducting QI activities on their own, which was reduced to 5% after completing the FAQT curriculum. An interim assessment conducted after the fellows completed didactic training only showed median scores not different from baseline (median, 3.0; P = 0.51). After completion of the FAQT, cardiology fellows reported higher self-confidence to complete QI activities. The increase in self-confidence seemed to be limited to the applied component of the curriculum, with no significant change after the didactic component.

  5. Machine learning methods for the classification of gliomas: Initial results using features extracted from MR spectroscopy.

    Science.gov (United States)

    Ranjith, G; Parvathy, R; Vikas, V; Chandrasekharan, Kesavadas; Nair, Suresh

    2015-04-01

    With the advent of new imaging modalities, radiologists are faced with handling increasing volumes of data for diagnosis and treatment planning. The use of automated and intelligent systems is becoming essential in such a scenario. Machine learning, a branch of artificial intelligence, is increasingly being used in medical image analysis applications such as image segmentation, registration and computer-aided diagnosis and detection. Histopathological analysis is currently the gold standard for classification of brain tumors. The use of machine learning algorithms along with extraction of relevant features from magnetic resonance imaging (MRI) holds promise of replacing conventional invasive methods of tumor classification. The aim of the study is to classify gliomas into benign and malignant types using MRI data. Retrospective data from 28 patients who were diagnosed with glioma were used for the analysis. WHO Grade II (low-grade astrocytoma) was classified as benign, while Grade III (anaplastic astrocytoma) and Grade IV (glioblastoma multiforme) were classified as malignant. Features were extracted from MR spectroscopy. The classification was done using four machine learning algorithms: multilayer perceptrons, support vector machine, random forest and locally weighted learning. Three of the four machine learning algorithms gave an area under the ROC curve in excess of 0.80. Random forest gave the best performance in terms of AUC (0.911), while sensitivity was best for locally weighted learning (86.1%). The performance of different machine learning algorithms in the classification of gliomas is promising. An even better performance may be expected by integrating features extracted from other MR sequences. © The Author(s) 2015.
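
    A minimal sketch of this workflow with scikit-learn, using a random-forest classifier and cross-validated AUC. The synthetic feature matrix stands in for the MRS-derived features, which are not public; everything below the comment line is illustrative only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

# Synthetic stand-in for features extracted from MR spectroscopy (28 patients).
rng = np.random.default_rng(0)
X = rng.normal(size=(28, 6))
y = np.array([0] * 14 + [1] * 14)   # 0 = benign (Grade II), 1 = malignant (III/IV)
X[y == 1] += 0.8                    # separate the classes a little

clf = RandomForestClassifier(n_estimators=200, random_state=0)
probs = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
print("cross-validated AUC:", roc_auc_score(y, probs))
```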

  6. Longitudinal bunch diagnostics using coherent transition radiation spectroscopy. Physical principles, multichannel spectrometer, experimental results, mathematical methods

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, Bernhard; Wesch, Stephan; Behrens, Christopher [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Koevener, Toke [Hamburg Univ. (Germany); European Organization for Nuclear Research (CERN), Geneva (Switzerland); Hass, Eugen [Hamburg Univ. (Germany); Casalbuoni, Sara [Karlsruhe Institute of Technology (Germany). Inst. for Beam Physics and Technology; Schmueser, Peter [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Hamburg Univ. (Germany)

    2018-03-15

    The generation and properties of transition radiation (TR) are thoroughly treated. The spectral energy density, as described by the Ginzburg-Frank formula, is computed analytically, and the modifications caused by the finite size of the TR screen and by near-field diffraction effects are carefully analyzed. The principles of electron bunch shape reconstruction using coherent transition radiation are outlined. Spectroscopic measurements yield only the magnitude of the longitudinal form factor but not its phase. Two phase retrieval methods are investigated and illustrated with model calculations: analytic phase computation by means of the Kramers-Kronig dispersion relation, and iterative phase retrieval. Particular attention is paid to the ambiguities which are unavoidable in the reconstruction of longitudinal charge density profiles from spectroscopic data. The origin of these ambiguities has been identified and a thorough mathematical analysis is presented. The experimental part of the paper comprises a description of our multichannel infrared and THz spectrometer and a selection of measurements at FLASH, comparing the bunch profiles derived from spectroscopic data with those determined with a transversely deflecting microwave structure. A rigorous derivation of the Kramers-Kronig phase formula is presented in Appendix A. Numerous analytic model calculations can be found in Appendix B. The differences between normal and truncated Gaussians are discussed in Appendix C. Finally, Appendix D contains a short description of the propagation of an electromagnetic wave front by two-dimensional fast Fourier transformation. This is the basis of a powerful numerical Mathematica trademark code THzTransport, which permits the propagation of electromagnetic wave fronts through a beam line consisting of drift spaces, lenses, mirrors and apertures.

  7. Evaluation of the results of treatment of morbid obesity by the endoscopic intragastric balloon implantation method.

    Science.gov (United States)

    Żurawiński, Wojciech; Sokołowski, Dariusz; Krupa-Kotara, Karolina; Czech, Elżbieta; Sosada, Krystyn

    2017-01-01

    Overweight and obesity rank fifth among the risk factors responsible for the greatest number of deaths in the world. To assess the effects of treatment of patients with morbid obesity using endoscopic intragastric balloon (IGB) implantation. Two hundred and seventy-two patients with obesity were treated using endoscopic intragastric balloon implantation. After applying the inclusion and exclusion criteria, the study covered a group of 63 patients with morbid obesity. The patients were implanted with the LexBal balloon. Reduction of excess body mass, changes in BMI values, and ailments and complications, divided into mild and severe, were assessed. Before intragastric balloon treatment, the average body mass index (BMI) value was 58.3 ± 10.5 kg/m², whereas after 6 months of treatment it had decreased to 49.5 ± 8.7 kg/m². The patients with postoperative BMI equal to or greater than 50.0 kg/m² reported nausea (69.7%), vomiting (51.5%), flatulence (45.5%), upper abdominal pain (36.4%) and general discomfort (42.4%) more frequently. Dehydration (9.1%) was also more frequent in this group, whereas the frequency of such ailments and complications as heartburn (23.3%) and oesophageal candidiasis (10.0%) was higher in the patients with postoperative BMI below 50.0 kg/m². Endoscopic intragastric balloon implantation is an effective and safe method of excess body mass reduction in patients with morbid obesity before a planned bariatric surgical procedure. Pre-operative excess body mass and BMI value and post-operative excess weight loss in patients with morbid obesity have no impact on the frequency of ailments and complications in IGB treatment.

  8. Application of a PCR method for the diagnosis of ulcerative enteritis: preliminary results

    Directory of Open Access Journals (Sweden)

    Fabrizio Agnoletti

    2010-01-01

    Ulcerative enteritis, or "quail disease", is an acute clostridial infection of young birds reported in many avian species, chicken and turkey included. Clostridium colinum is the causative agent of ulcerative enteritis, and because of the difficulties associated with isolating and identifying this bacterium by means of classic bacteriological techniques, its detection is very hard and the prevalence of this disease could be underestimated. To investigate the occurrence of C. colinum in enteric disease of birds, a recently developed PCR protocol was applied to 42 culture broths previously inoculated with organs and intestinal samples collected from diseased subjects. PCR-positive broths were cultivated to attempt the isolation of C. colinum. Samples collected from positive birds were subjected to histological examination. Four birds (3 broiler chickens and 1 pigeon) were PCR-positive and, in one case, C. colinum was isolated. Gross and histological lesions of positive birds were compatible with those described in other ulcerative enteritis outbreaks. These preliminary results demonstrate that C. colinum is sporadically implicated in enteric diseases of broiler chickens (14.2%). In addition, the PCR assay proved to be a useful and reliable instrument to support the diagnosis of ulcerative enteritis and to facilitate the isolation of C. colinum.

  9. UV-VIS Spectroscopy Applied to Stratospheric Chemistry, Methods and Results

    Energy Technology Data Exchange (ETDEWEB)

    Karlsen, K.

    1996-03-01

    This paper was read at the workshop "The Norwegian Climate and Ozone Research Programme" held on 11-12 March 1996. Numerous observations and modeling studies have shown with a very high degree of certainty that the man-made emissions of chlorofluorocarbons (CFCs) and halons are responsible for the Antarctic ozone hole. It is also evident that the ozone layer of the Northern Hemisphere has suffered a certain decline over the last 10-15 years, possibly because of CFCs and halons. 20-30% of the observed reduction is ascribed to coupled chlorine and bromine chemistry via a catalytic cycle resulting in the net conversion of 2O₃ to 3O₂. But the details are not fully understood. The author plans to assemble a UV-VIS spectrometer for measuring the species OClO and BrO and to compare and discuss measured diurnal variations of OClO and BrO with model calculations. The use of Differential Optical Absorption Spectroscopy (DOAS) is discussed and some results from late 1995 are presented. 6 refs., 2 figs.
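
    In DOAS, slant column densities follow from a linear least-squares fit of the measured differential optical depth to reference absorption cross sections (Beer-Lambert law). A generic sketch of that retrieval step, not the author's analysis code; the two-species toy data are made up.

```python
import numpy as np

def doas_fit(optical_depth, cross_sections):
    """Fit tau(lambda) = sum_i sigma_i(lambda) * SCD_i by linear least squares.
    cross_sections: (n_wavelengths, n_species) differential cross sections;
    returns the slant column densities SCD_i."""
    scd, *_ = np.linalg.lstsq(cross_sections, optical_depth, rcond=None)
    return scd

# Toy example with two species (e.g. OClO and BrO) over five wavelengths.
sigma = np.array([[1.0, 0.2], [0.8, 0.5], [0.3, 1.0], [0.6, 0.7], [0.9, 0.1]])
tau = sigma @ np.array([2.0, 0.5])
print(doas_fit(tau, sigma))  # recovers [2.0, 0.5]
```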

  10. Challenges of E-learning in medicine: methods and results of a systematical exploration

    Directory of Open Access Journals (Sweden)

    Spreckelsen, Cord

    2006-11-01

    E-learning in medicine traditionally concentrates on case-oriented or problem-oriented learning scenarios, the development of multimedia courseware, or the implementation of simulators. This paper aims at a systematic exploration of current and new challenges for E-learning in the medical domain. The exploration is based on an analysis of the scientific discourse in the field of Medical Education. The analysis starts from text-based sources: the concept hierarchy of the Medical Subject Headings, the profiles of the relevant scientific associations, and the scientific programs of conferences and annual meetings. These sources are subjected to conceptual analysis, supported by network visualization tools and supplemented by network-theoretic indices (betweenness centrality). As a result, the main concerns of the Medical Education community and their shifts over the last six years can be identified. The analysis uncovers new challenges which result from central issues of Medical Education, e.g. curricular and faculty development or the sustainable integration of postgraduate education and continuing medical education. The main challenges are: (1) the implementation of integrative conceptions for the application of learning management systems (LMS), and (2) the necessity of combining aspects of organizational development, knowledge management and learning management within the scope of a comprehensive learning life-cycle management.
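
    Betweenness centrality, the network-theoretic index mentioned, can be computed with networkx. The concept graph below is a made-up miniature for illustration, not the MeSH-derived network from the study.

```python
import networkx as nx

# Illustrative concept network (assumed edges, not the study's data).
G = nx.Graph()
G.add_edges_from([
    ("e-learning", "curriculum development"),
    ("e-learning", "learning management system"),
    ("curriculum development", "faculty development"),
    ("learning management system", "knowledge management"),
    ("knowledge management", "faculty development"),
])

# Nodes that lie on many shortest paths score highest.
bc = nx.betweenness_centrality(G)
print(sorted(bc.items(), key=lambda kv: -kv[1]))
```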

  11. Polarimetry at 1.3 mm using MILLIPOL - methods and preliminary results for Orion

    International Nuclear Information System (INIS)

    Barvainis, R.; Clemens, D.P.; Leach, R.

    1988-01-01

    This paper describes a polarimeter for use at wavelengths near 1 mm, designed to be self-contained and portable. Only minor modifications should be required to adapt this instrument for use on any of several millimeter and submillimeter telescopes. The polarimeter system and data-taking techniques are described, and a preliminary measurement is reported of the polarized dust emission from the Orion KL region at 1.3 mm using the NRAO 12 m telescope. The results are similar to previous polarization measurements of Orion at far-infrared and submillimeter wavelengths. The magnetic field direction implied by the polarization position angle is parallel to that found in the surrounding Orion region using optical and near- to midinfrared polarimetric techniques. 17 references

  12. Preliminary results for the detection method of perfluoroalkyl substances (PFASs) residues in pork

    Directory of Open Access Journals (Sweden)

    Shih-kuo Lin

    2017-05-01

    Full Text Available Perfluoroalkyl substance (PFAS) residues, which come from environmental pollution, tend to accumulate in the food chain (EFSA, 2008; Guerranti et al., 2013). Seventeen chemicals of the PFAS family were selected for this study. Fresh pork samples were extracted with Waters® WAX SPE (solid phase extraction) cartridges. All extracted samples were analyzed by liquid chromatography tandem mass spectrometry (LC-MS/MS). The calibration curves for each PFAS were good, with R² values ranging from 0.9901 to 0.9993. Recoveries were in the range of 80-119%. The protocol of extraction by Waters® WAX SPE cartridge will be applied in future studies.
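
    To make the reported figures of merit concrete, here is a minimal sketch of how a linear calibration curve's R² and a spike recovery are computed; the concentrations and instrument responses below are invented for illustration, not data from the study.

        import numpy as np

        # Hypothetical calibration standards for one PFAS analyte.
        conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])   # ng/mL
        resp = np.array([0.9, 2.1, 4.0, 10.2, 19.8, 40.5])  # peak area ratio

        slope, intercept = np.polyfit(conc, resp, 1)
        pred = slope * conc + intercept
        r2 = 1 - np.sum((resp - pred) ** 2) / np.sum((resp - resp.mean()) ** 2)

        # Recovery of a spiked sample carried through the WAX SPE clean-up.
        spiked, measured = 10.0, 8.7            # ng/g, hypothetical values
        recovery = 100.0 * measured / spiked    # acceptance window: 80-119%
        print(f"R^2 = {r2:.4f}, recovery = {recovery:.0f}%")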

  13. The value of evaluating parenting groups: a new researcher's perspective on methods and results.

    Science.gov (United States)

    Cabral, Judy

    2013-06-01

    The aim of this research project was to evaluate the impact of the Solihull Approach Understanding Your Child's Behaviour (UYCB) parenting groups on the participants' parenting practice and the reported behaviour of their children. Validated tools that met both the Solihull Child and Adolescent Mental Health Service (CAMHS) and academic requirements were used to establish what changes, if any, in parenting practice and children's behaviour (as perceived by the parent) occur following attendance at a UYCB parenting group. Independent evidence of the efficacy of the Solihull Approach UYCB programme was collated. Results indicated significant increases in self-esteem and parenting sense of competence; improvement in the parental locus of control; a decrease in hyperactivity and conduct problems; and an increase in pro-social behaviour, as measured by the 'Strengths and Difficulties' questionnaire. The qualitative and quantitative findings corroborated each other, demonstrating the impact and effectiveness of the programme and supporting anecdotal feedback on the success of UYCB parenting groups.

  14. PV Performance Modeling Methods and Practices: Results from the 4th PV Performance Modeling Collaborative Workshop.

    Energy Technology Data Exchange (ETDEWEB)

    Stein, Joshua [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-03-01

    In 2014, the IEA PVPS Task 13 added the PVPMC as a formal activity to its technical work plan for 2014-2017. The goal of this activity is to expand the reach of the PVPMC to a broader international audience and help to reduce PV performance modeling uncertainties worldwide. One of the main deliverables of this activity is to host one or more PVPMC workshops outside the US to foster more international participation within this collaborative group. This report reviews the results of the first in a series of these joint IEA PVPS Task 13/PVPMC workshops. The 4th PV Performance Modeling Collaborative Workshop was held in Cologne, Germany at the headquarters of TÜV Rheinland on October 22-23, 2015.

  15. Evaluation and perceived results of moral case deliberation: A mixed methods study.

    Science.gov (United States)

    Janssens, Rien M J P A; van Zadelhoff, Ezra; van Loo, Ger; Widdershoven, Guy A M; Molewijk, Bert A C

    2015-12-01

    Moral case deliberation is increasingly becoming part of various Dutch healthcare organizations. Although some evaluation studies of moral case deliberation have been carried out, research into the results of moral case deliberation within aged care is scarce. How did participants evaluate moral case deliberation? What has moral case deliberation brought to them? What has moral case deliberation contributed to care practice? Should moral case deliberation be further implemented and, if so, how? Quantitative analysis of a questionnaire study among participants of moral case deliberation, both caregivers and team leaders. Qualitative analysis of written answers to open questions, interview study and focus group meetings among caregivers and team leaders. Caregivers and team leaders in a large organization for aged care in the Netherlands. A total of 61 moral case deliberation sessions, carried out on 16 care locations belonging to the organization, were evaluated and perceived results were assessed. Participants gave informed consent and anonymity was guaranteed. In the Netherlands, the law does not prescribe independent ethical review by an Institutional Review Board for this kind of research among healthcare professionals. Moral case deliberation was evaluated positively by the participants. Content and atmosphere of moral case deliberation received high scores, while organizational issues regarding the moral case deliberation sessions scored lower and merit further attention. Respondents indicated that moral case deliberation has the potential to contribute to care practice as relationships among team members improve, more openness is experienced and more understanding for different perspectives is fostered. If moral case deliberation is to be successfully implemented, top-down approaches should go hand in hand with bottom-up approaches. The relevance of moral case deliberation for care practice received wide acknowledgement from the respondents. It can contribute

  16. Detection of leaks in underground storage tanks using electrical resistance methods: 1996 results

    International Nuclear Information System (INIS)

    Ramirez, A.; Daily, W.

    1996-10-01

    This document provides a summary of a field experiment performed under a 15 m diameter steel tank mockup located at the Hanford Reservation, Washington. The purpose of this test was to image a contaminant plume as it develops in soil under a tank already contaminated by previous leakage, and to determine whether contaminant plumes can be detected without the benefit of background data. Measurements of electrical resistance were made before and during a salt water release. These measurements were made in soil which contained the remnants of salt water plumes released during previous tests in 1994 and in 1995. About 11,150 liters of saline solution were released along a portion of the tank's edge in 1996. Changes in electrical resistivity due to the 1996 salt water release were determined in two ways: (1) changes relative to the 1996 pre-spill data, and (2) changes relative to data collected near the middle of the 1996 spill after the release flow rate was increased. In both cases, the observed resistivity changes show clearly defined anomalies caused by the salt water release. These results indicate that when a plume develops over an existing plume and in a geologic environment similar to the test site environment, the resulting resistivity changes are easily detectable. Three-dimensional tomographs of the resistivity of the soil under the tank show that the salt water release caused a region of low soil resistivity which can be observed directly, without the benefit of comparing the tomograph to tomographs or data collected before the spill started. This means that it may be possible to infer the presence of pre-existing plumes if there is other data showing that the regions of low resistivity are correlated with the presence of contaminated soil. However, this approach does not appear reliable in defining the total extent of the plume, due to the confounding effect that natural heterogeneity has on our ability to define the margins of the anomaly.

  17. Mathematics revealed

    CERN Document Server

    Berman, Elizabeth

    1979-01-01

    Mathematics Revealed focuses on the principles, processes, operations, and exercises in mathematics. The book first offers information on whole numbers, fractions, and decimals and percents. Discussions focus on measuring length, percent, decimals, numbers as products, addition and subtraction of fractions, mixed numbers and ratios, division of fractions, addition, subtraction, multiplication, and division. The text then examines positive and negative numbers and powers and computation. Topics include division and averages, multiplication, ratios, and measurements, scientific notation and estimation

  18. Changes in School Food Preparation Methods Result in Healthier Cafeteria Lunches in Elementary Schools.

    Science.gov (United States)

    Behrens, Timothy K; Liebert, Mina L; Peterson, Hannah J; Howard Smith, Jennifer; Sutliffe, Jay T; Day, Aubrey; Mack, Jodi

    2018-05-01

    The purpose of this study is to examine the impact of districtwide food best-practice and preparation changes on elementary school lunches, implemented as part of the LiveWell@School childhood obesity program, funded by the LiveWell Colorado/Kaiser Permanente Community Health Initiative. This longitudinal study examines how school changes in best practices for food preparation impacted the types of side items offered from 2009 to 2015 in elementary school cafeterias in a high-need school district in southern Colorado. Specifically, this study examined changes in side items (fruits, vegetables, potatoes, breads, and desserts). In Phase 1 (2009-2010), baseline data were collected. During Phase 2 (2010-2011), breaded and processed foods (e.g., frozen nuggets, pre-packaged pizza) were removed and school chefs were trained in scratch-cooking methods. Phase 3 (2011-2012) saw an increased use of fresh/frozen fruits and vegetables after a new commodity order. During Phase 4 (2013-2015), chef consulting and training took place. The frequency of side offerings was tracked across phases. Analyses were completed in Fall 2016. Because of limited sample sizes, data from Phases 2 to 4 (intervention phases) were combined for potatoes and desserts. Descriptive statistics were calculated. After adjusting for the length of each phase, Pearson chi-square tests were conducted to examine changes in offerings of side items by phase. Fresh fruit offerings increased and canned fruit decreased from Phase 1 to 4 (p=0.001). A significant difference was observed for vegetables (p=0.001), with raw and steamed vegetables increasing and canned vegetables decreasing from Phase 1 to 4. Fresh potatoes (low in sodium) increased and fried potatoes (high in sodium) decreased from Phase 1 to Phases 2-4 (p=0.001). Breads were eliminated entirely in Phase 2, and dessert changes were not significant (p=0.927). This approach to promoting healthier lunch sides is a promising paradigm for improving elementary
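
    As a sketch of the statistical test named above, the following compares side-offering frequencies between phases with a Pearson chi-square test; the counts are hypothetical stand-ins, not the study's data.

        import numpy as np
        from scipy.stats import chi2_contingency

        # Rows: fresh vs canned fruit offerings; columns: Phase 1 vs Phases 2-4.
        table = np.array([[40, 95],
                          [60, 35]])
        chi2, p, dof, expected = chi2_contingency(table)
        print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")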

  19. Effects of sampling methods on the quantity and quality of dissolved organic matter in sediment pore waters as revealed by absorption and fluorescence spectroscopy.

    Science.gov (United States)

    Chen, Meilian; Lee, Jong-Hyeon; Hur, Jin

    2015-10-01

    Despite literature evidence suggesting the importance of sampling methods on the properties of sediment pore waters, their effects on the dissolved organic matter (PW-DOM) have been unexplored to date. Here, we compared the effects of two commonly used sampling methods (i.e., centrifuge and Rhizon sampler) on the characteristics of PW-DOM for the first time. The bulk dissolved organic carbon (DOC), ultraviolet-visible (UV-Vis) absorption, and excitation-emission matrices coupled with parallel factor analysis (EEM-PARAFAC) of the PW-DOM samples were compared for the two sampling methods with sediments from minimally to severely contaminated sites. The centrifuged samples were found to have higher average values of DOC, UV absorption, and protein-like EEM-PARAFAC components. The samples collected with the Rhizon sampler, however, exhibited generally more humified characteristics than the centrifuged ones, implying a preferential collection of PW-DOM with respect to the sampling methods. Furthermore, the differences between the two sampling methods seem more pronounced in relatively more polluted sites. Our observations were possibly explained by either the filtration effect resulting from the smaller pore size of the Rhizon sampler or the desorption of DOM molecules loosely bound to minerals during centrifugation, or both. Our study suggests that consistent use of one sampling method is crucial for PW-DOM studies, and also that caution should be taken when comparing data collected with different sampling methods.

  20. New archeointensity results from the reconstructed ancient kiln by the Tsunakawa-Shaw method

    Science.gov (United States)

    Yamamoto, Y.; Hatakeyama, T.; Kitahara, Y.; Saito, T.

    2017-12-01

    Yamamoto et al. (2015) reported that baked clay samples from the floor of a reconstructed ancient kiln provided a reliable Tsunakawa-Shaw (LTD-DHT Shaw) archeointensity (AI) estimate of 47.3 +/- 2.2 microT, which is fairly consistent with the in situ geomagnetic field of 46.4 microT at the time of the reconstruction. The reconstruction was conducted to reproduce an excavated kiln of the seventh century in Japan, and Sue-type potteries of contemporary style were also fired (Nakajima et al., 1974). Two of the potteries with reddish color were recently subjected to Tsunakawa-Shaw archeointensity determinations, resulting in reliable AI estimates of 45.4 +/- 2.3 (N=6) and 48.2 +/- 2.7 microT (N=15) when specimens were heated in air in the laboratory (Yamamoto et al., 2017 JpGU-AGU Joint Meeting). We have had another opportunity to take samples from a new reconstructed ancient kiln in Japan which was fired in autumn 2016. The samples were two Sue-type potteries with grayish color (bowl-type and plate-type) and some blocks collected from the inner wall of the kiln body. They were cut into mini specimens and then subjected to the Tsunakawa-Shaw experiment. Heating in the laboratory was done either in air or in vacuum. For the bowl-type pottery, AIs of 46.9 +/- 2.8 (N=6, air) and 45.3 +/- 2.3 microT (N=6, vacuum) are obtained. They are indistinguishable from each other and consistent with the IGRF field of 47.4 microT at the reconstructed location in 2016. For the plate-type pottery, the AIs are 41.8 +/- 1.3 (N=4, air) and 43.9 +/- 3.9 microT (N=4, vacuum). They are also indistinguishable from each other, but the former AI is slightly lower than the IGRF field. For the inner wall, AIs of 45.0 (N=1, air) and 46.8 microT (N=1, vacuum) are obtained from a right-side wall, and those of 45.5 +/- 2.5 (N=2, air) and 47.7 +/- 3.0 microT (N=2, vacuum) are observed from a left-side wall. They are all indistinguishable and consistent with the IGRF field.

  1. TREC 2010 legal track: method and results of the ELK collaboration

    Energy Technology Data Exchange (ETDEWEB)

    Spearing, Shelly [Los Alamos National Laboratory; Roman, Jorge [Los Alamos National Laboratory; Mc Kay, Bain [KAYVIUM; Lindquist, Eric [EWA-IIT

    2010-10-25

    The ELK team ([E]WA-IIT, [L]os Alamos National Laboratory (LANL), and [K]ayvium Corporation (ELK)) used the Legal Track task 302 as an opportunity to compare and integrate advanced semantic-automation strategies. The team members believe that enabling parties to discover, consume, analyze, and make decisions in a noisy and information-overloaded environment requires new tools. Together, as well as independently, they are actively developing these tools and viewed the TREC exercise as an opportunity to test, compare, and complement tools and approaches. Our collaboration is new to TREC, brought together by a shared interest in document relevance, concept-in-context identification and annotation, and the recognition that words out of context do not a match make. The team's intent was to lay the foundation for automating the mining and analysis of large volumes of electronic information by litigants and their lawyers, not only in the context of document discovery, but also to support litigation strategy, motion practice, deposition, trial tactics, etc. The premise was that a Subject Matter Expert- (SME-) built model can be automatically mapped onto various search engines for document retrieval, organization, relevance scoring, analysis and decision support. In the end, we ran nearly a dozen models, mostly, but not exclusively, with Kayvium Corporation's knowledge automation technology. The Sal Database Search Engine we used had a bug in its proximity feature, requiring that we develop a workaround. While the workaround was successful, it left us with insufficient time to converge the models to achieve the expected quality. However, with optimized proximity processing in place, we would be able to run the model many more times, and believe repeatable quality would be a matter of working through a few requests to get the approach right. We believe that with more time, the results we would achieve might point towards a new way of processing documents for litigation

  2. Inequalities and Duality in Gene Coexpression Networks of HIV-1 Infection Revealed by the Combination of the Double-Connectivity Approach and the Gini's Method

    Directory of Open Access Journals (Sweden)

    Chuang Ma

    2011-01-01

    Full Text Available Symbiosis (Sym) and pathogenesis (Pat) form a duality problem of microbial infection, including HIV/AIDS. Statistical analysis of inequalities and duality in gene coexpression networks (GCNs) of HIV-1 infection may provide novel insights into AIDS. In this study, we focused on the analysis of GCNs of uninfected subjects and of HIV-1-infected patients at three different stages of viral infection, based on data deposited in the GEO database of NCBI. The inequalities and duality in these GCNs were analyzed by the combination of the double-connectivity (DC) approach and Gini's method. DC analysis reveals that there are significant differences between positive and negative connectivity in HIV-1 stage-specific GCNs. The inequality measures of negative connectivity and edge weight change more significantly than those of positive connectivity and edge weight in GCNs from the HIV-1 uninfected to the AIDS stages. With the permutation test method, we identified a set of genes with significant changes in the inequality and duality measures of edge weight. Functional analysis shows that these genes are highly enriched for the immune system, which plays an essential role in the Sym-Pat duality (SPD) of microbial infections. Understanding the SPD problems of HIV-1 infection may provide novel intervention strategies for AIDS.
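
    The abstract applies Gini's method to connectivity and edge-weight distributions; a minimal implementation of the Gini coefficient for a vector of edge weights might look like this (random weights used for illustration, not data from the study).

        import numpy as np

        def gini(x):
            """Gini coefficient of non-negative values (0 = perfect
            equality, 1 = maximal inequality)."""
            x = np.sort(np.asarray(x, dtype=float))
            n = x.size
            return (2 * np.arange(1, n + 1) - n - 1).dot(x) / (n * x.sum())

        # Illustrative edge weights only.
        weights = np.abs(np.random.default_rng(0).normal(size=1000))
        print(f"Gini = {gini(weights):.3f}")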

  3. Test results for cables used in nuclear power plants by a new environmental testing method

    Energy Technology Data Exchange (ETDEWEB)

    Handa, Katsue; Fujimura, Shun-ichi; Hayashi, Toshiyasu; Takano, Keiji; Oya, Shingo

    1982-12-01

    In the nuclear power plants using PWRs or BWRs in Japan, environmental tests are provided in which simulated LOCA conditions are considered so as to conform with Japanese conditions, and many cables which passed these tests are presently employed. Lately, a new environmental test, in which a credible accident called MSLB (main steam line break) is taken into account besides LOCA, has been investigated for PWR nuclear power plants. This paper reports the results of evaluating some PWR cables under these new environmental testing conditions. The several cables tested were selected from PH cables (fire-retardant, ethylene propylene rubber insulated, chlorosulfonated polyethylene sheathed cables) intended for safety protection circuits and for use in containment vessels, where the cables are exposed to the severe environmental test conditions of 2 x 10⁸ rad γ-irradiation and simulated LOCA. All these cables had been accepted after the vertical tray burning test provided in IEEE Standard 383. The new testing was carried out by sequentially applying thermal deterioration, γ-irradiation, and exposure to steam (two 300 s exposures to 190°C superheated steam). After completing each step, tensile strength, elongation, insulation resistance and breakdown voltage were measured. Every cable tested showed satisfactory breakdown voltage after the exposure to steam and was thus judged acceptable. In the future, the influence of the rate of temperature rise on cables tested in MSLB simulation needs to be investigated.

  4. Estimating the uncertainty of damage costs of pollution: A simple transparent method and typical results

    International Nuclear Information System (INIS)

    Spadaro, Joseph V.; Rabl, Ari

    2008-01-01

    Whereas the uncertainty of environmental impacts and damage costs is usually estimated by means of a Monte Carlo calculation, this paper shows that most (and in many cases all) of the uncertainty calculation involves products and/or sums of products and can be accomplished with an analytic solution which is simple and transparent. We present our own assessment of the component uncertainties and calculate the total uncertainty for the impacts and damage costs of the classical air pollutants; results for a Monte Carlo calculation for the dispersion part are also shown. The distribution of the damage costs is approximately lognormal and can be characterized in terms of the geometric mean μg and the geometric standard deviation σg, implying that the confidence interval is multiplicative. We find that for the classical air pollutants σg is approximately 3 and the 68% confidence interval is [μg/σg, μg·σg]. Because the lognormal distribution is highly skewed for large σg, the median is significantly smaller than the mean. We also consider the case where several lognormally distributed damage costs are added, for example to obtain the total damage cost due to all the air pollutants emitted by a power plant, and we find that the relative error of the sum can be significantly smaller than the relative errors of the summands. Even though the distribution for such sums is not exactly lognormal, we present a simple lognormal approximation that is quite adequate for most applications.
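
    A minimal sketch of the analytic rule behind such a calculation: for a damage cost modeled as a product of independent lognormal factors, the squared logarithms of the geometric standard deviations add. The factor values below are assumptions for illustration only.

        import numpy as np

        # Hypothetical geometric standard deviations of the factors in the
        # impact-pathway product (emission, dispersion, dose-response, cost).
        sg_factors = [1.5, 2.0, 2.5, 1.3]

        # For a product of lognormals: (ln sg_total)^2 = sum_i (ln sg_i)^2
        sg_total = np.exp(np.sqrt(sum(np.log(s) ** 2 for s in sg_factors)))

        mu_g = 1.0  # geometric mean, arbitrary cost units
        print(f"sigma_g = {sg_total:.2f}; "
              f"68% CI = [{mu_g / sg_total:.2f}, {mu_g * sg_total:.2f}]")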

  5. Test results for cables used in nuclear power plants by a new environmental testing method

    International Nuclear Information System (INIS)

    Handa, Katsue; Fujimura, Shun-ichi; Hayashi, Toshiyasu; Takano, Keiji; Oya, Shingo

    1982-01-01

    In the nuclear power plants using PWRs or BWRs in Japan, environmental tests are provided in which simulated LOCA conditions are considered so as to conform with Japanese conditions, and many cables which passed these tests are presently employed. Lately, a new environmental test, in which a credible accident called MSLB (main steam line break) is taken into account besides LOCA, has been investigated for PWR nuclear power plants. This paper reports the results of evaluating some PWR cables under these new environmental testing conditions. The several cables tested were selected from PH cables (fire-retardant, ethylene propylene rubber insulated, chlorosulfonated polyethylene sheathed cables) intended for safety protection circuits and for use in containment vessels, where the cables are exposed to the severe environmental test conditions of 2 x 10⁸ rad γ-irradiation and simulated LOCA. All these cables had been accepted after the vertical tray burning test provided in IEEE Standard 383. The new testing was carried out by sequentially applying thermal deterioration, γ-irradiation, and exposure to steam (two 300 s exposures to 190°C superheated steam). After completing each step, tensile strength, elongation, insulation resistance and breakdown voltage were measured. Every cable tested showed satisfactory breakdown voltage after the exposure to steam and was thus judged acceptable. In the future, the influence of the rate of temperature rise on cables tested in MSLB simulation needs to be investigated. (Wakatsuki, Y.)

  6. Mean Blood Pressure Assessment during Post-Exercise: Result from Two Different Methods of Calculation

    Directory of Open Access Journals (Sweden)

    Gianmarco Sainas, Raffaele Milia, Girolamo Palazzolo, Gianfranco Ibba, Elisabetta Marongiu, Silvana Roberto, Virginia Pinna, Giovanna Ghiani, Filippo Tocco, Antonio Crisafulli

    2016-09-01

    Full Text Available At rest, the proportions of the systolic and diastolic periods of the cardiac cycle are about 1/3 and 2/3, respectively. Therefore, mean blood pressure (MBP) is usually calculated with a standard formula (SF) as follows: MBP = diastolic blood pressure (DBP) + 1/3 [systolic blood pressure (SBP) - DBP]. However, during exercise this proportion is lost because of tachycardia, which shortens diastole more than systole. We analysed the difference in MBP calculation between the SF and a corrected formula (CF) which takes into account changes in the diastolic and systolic periods caused by exercise-induced tachycardia. Our hypothesis was that the SF potentially induces a systematic error in MBP assessment during recovery after exercise. Ten healthy males underwent two exercise-recovery tests on a cycle ergometer at mild-moderate and moderate-heavy workloads. Hemodynamics and MBP were monitored for 30 minutes after the exercise bouts. The main result was that the SF on average underestimated MBP by 4.1 mmHg with respect to the CF. Moreover, in the period immediately after exercise, when sustained tachycardia occurred, the difference between the SF and the CF was large (on the order of 20-30 mmHg). Likewise, a systematic error in systemic vascular resistance assessment was present. It was concluded that the SF introduces a substantial error in MBP estimation in the period immediately following effort. This equation should not be used in this situation.
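
    A minimal sketch of the two calculations: the standard formula fixes the systolic weight at 1/3, while a corrected formula weights the pulse pressure by the actual fraction of the cycle spent in systole. The corrected form below is a generic parameterisation and the systolic fractions are assumed values; the paper derives them from the measured heart rate.

        def mbp_standard(sbp, dbp):
            """Standard formula: assumes systole occupies ~1/3 of the cycle."""
            return dbp + (sbp - dbp) / 3.0

        def mbp_corrected(sbp, dbp, systolic_fraction):
            """Weights the pulse pressure by the measured systolic fraction
            (hypothetical parameterisation for illustration)."""
            return dbp + (sbp - dbp) * systolic_fraction

        # At rest (~1/3 systole) the two formulas agree; with post-exercise
        # tachycardia systole may occupy ~1/2 of the cycle, and the standard
        # formula underestimates MBP.
        print(mbp_standard(120, 80), mbp_corrected(120, 80, 1 / 3))  # 93.3 93.3
        print(mbp_standard(140, 70), mbp_corrected(140, 70, 0.50))   # 93.3 105.0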

  7. Estimation of the Radon-induced Dose for Russia's Population: Methods and Results

    International Nuclear Information System (INIS)

    Marenny, A.M.; Savkin, M.N.; Shinkarev, S.M.

    2000-01-01

    A model is proposed for inferring the radon-induced annual average collective and personal doses, as well as the dose distribution of the population, all over Russia from selective radon monitoring in some regions of Russia. The model assumptions and the selective radon monitoring results that underlie the numerical estimates obtained for different population groups are presented. The current estimate of the collective radon-induced dose received by the population of Russia (148,100,000 as of 1996) is about 130,000 man Sv, of which 55,000 man Sv is for the rural population (27% of the total population) and 75,000 man Sv for the urban population (73% of the total). The average radon-induced personal dose in Russia is estimated to be about 0.87 mSv. About 1,000,000 people receive annual doses above 10 mSv, including some 200,000 people who receive doses above 20 mSv annually. The ways of making the current estimates more accurate are outlined. (author)
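
    As a consistency check of the quoted figures, dividing the collective dose by the population reproduces the stated average personal dose:

        \frac{1.3 \times 10^{5}\ \text{man Sv}}{1.481 \times 10^{8}\ \text{persons}} \approx 8.8 \times 10^{-4}\ \text{Sv} \approx 0.88\ \text{mSv},

    in line with the reported average of about 0.87 mSv.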

  8. Landscape genetics reveals inbreeding and genetic bottlenecks in the extremely rare short-globose cacti Mammillaria pectinifera (Cactaceae) as a result of habitat fragmentation

    Directory of Open Access Journals (Sweden)

    Reyna Maya-García

    2017-02-01

    Full Text Available Mammillaria pectinifera is an endemic, short-globose cactus species, included in the IUCN list as a threatened species, with only 18 remaining populations in the Tehuacán-Cuicatlán Valley in central Mexico. We evaluated population genetic diversity and structure, connectivity, recent bottlenecks and population size using nuclear microsatellites. M. pectinifera showed high genetic diversity but some evidence of heterozygote deficiency (FIS), recent bottlenecks in some populations and reductions in population size. Also, we found low population genetic differentiation and high values of connectivity for M. pectinifera, as the result of historical events of gene flow through pollen and seed dispersal. M. pectinifera occurs in sites with some degree of disturbance, leading to the isolation of its populations and decreasing the levels of gene flow among them. Excessive deforestation also changes the original vegetation, damaging the natural habitats. This species will become extinct if it is not properly preserved. Furthermore, it has ecological features that make it more vulnerable to disturbance, such as very low growth rates and long life cycles. We suggest in situ conservation to prevent the decrease of population sizes and loss of genetic diversity in natural protected areas such as the Tehuacán-Cuicatlán Biosphere Reserve. In addition, a long-term ex situ conservation program is needed to construct seed banks and to optimize seed germination and plant establishment protocols that restore disturbed habitats. Furthermore, creating a supply of living plants for trade is critical to avoid further extraction of plants from nature.

  9. Pilot Study on Folate Bioavailability from a Camembert Cheese Reveals Contradictory Findings to Recent Results from a Human Short-term Study.

    Science.gov (United States)

    Mönch, Sabine; Netzel, Michael; Netzel, Gabriele; Ott, Undine; Frank, Thomas; Rychlik, Michael

    2016-01-01

    Different dietary sources of folate have differing bioavailabilities, which may affect their nutritional "value." In order to examine if these differences also occur within the same food products, a short-term human pilot study was undertaken as a follow-up study to a previously published human trial to evaluate the relative native folate bioavailabilities from low-fat Camembert cheese compared to pteroylmonoglutamic acid as the reference dose. Two healthy human subjects received the test foods in a randomized cross-over design separated by a 14-day equilibrium phase. Folate body pools were saturated with a pteroylmonoglutamic acid supplement before the first testing and between the testings. Folates in test foods and blood plasma were analyzed by stable isotope dilution assays. The biokinetic parameters Cmax, tmax, and area under the curve (AUC) were determined in plasma within the interval of 0-12 h. When comparing the ratio estimates of AUC and Cmax for the different Camembert cheeses, a higher bioavailability was found for the low-fat Camembert assessed in the present study (≥64%) compared to a different brand in our previous investigation (8.8%). It is suggested that these differences may arise from the different folate distribution in the soft dough and firm rind as well as differing individual folate vitamer proportions. The results clearly underline the importance of the food matrix, even within the same type of food product, in terms of folate bioavailability. Moreover, our findings add to the increasing number of studies questioning the general assumption of 50% bioavailability as the rationale behind the definition of folate equivalents. However, more research is needed to better understand the interactions between individual folate vitamers and other food components and the potential impact on folate bioavailability and metabolism.
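
    For readers unfamiliar with these biokinetic parameters, here is a minimal sketch of how Cmax, tmax and AUC are obtained from a sampled plasma curve; the time points and concentrations are invented for illustration.

        import numpy as np

        t = np.array([0, 0.5, 1, 2, 4, 8, 12])      # sampling times, h
        c = np.array([10, 18, 25, 22, 16, 12, 10])  # plasma folate, nmol/L

        # Trapezoidal area under the curve over the 0-12 h interval.
        auc = np.sum((t[1:] - t[:-1]) * (c[1:] + c[:-1]) / 2.0)
        cmax = c.max()             # peak concentration
        tmax = t[c.argmax()]       # time of the peak
        print(f"AUC = {auc:.1f} nmol*h/L, Cmax = {cmax}, tmax = {tmax} h")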

  10. Pilot Study on Folate Bioavailability from A Camembert Cheese reveals contradictory findings to recent results from a Human Short-term study

    Directory of Open Access Journals (Sweden)

    Sabine eMönch

    2016-04-01

    Full Text Available Different dietary sources of folate have differing bioavailabilities, which may affect their nutritional value. In order to examine if these differences also occur within the same food products, a short-term human pilot study was undertaken as a follow-up to a previously published human trial to evaluate the relative native folate bioavailabilities from low-fat Camembert cheese compared to pteroylmonoglutamic acid as the reference dose. Two healthy human subjects received the test foods in a randomized cross-over design separated by a 14-day equilibrium phase. Folate body pools were saturated with a pteroylmonoglutamic acid supplement before the first testing and between the testings. Folates in test foods and blood plasma were analysed by stable isotope dilution assays. The biokinetic parameters Cmax, tmax and AUC were determined in plasma within the interval of 0 to 12 hours. When comparing the ratio estimates of AUC and Cmax for the different Camembert cheeses, a higher bioavailability was found for the low-fat Camembert assessed in the present study (≥64%) compared to a different brand in our previous investigation (8.8%). It is suggested that these differences may arise from the different folate distribution in the soft dough and firm rind as well as differing individual folate vitamer proportions. The results clearly underline the importance of the food matrix, even within the same type of food product, in terms of folate bioavailability. Moreover, our findings add to the increasing number of studies questioning the general assumption of 50% bioavailability as the rationale behind the definition of folate equivalents. However, more research is needed to better understand the interactions between individual folate vitamers and other food components and the potential impact on folate bioavailability and metabolism.

  11. A Literature Study of Matrix Element Influenced to the Result of Analysis Using Absorption Atomic Spectroscopy Method (AAS)

    International Nuclear Information System (INIS)

    Tyas-Djuhariningrum

    2004-01-01

    Gold sample analyses can deviate by more than 10% from the true value because of matrix elements, so the behaviour of matrix elements needs to be studied in order to reduce this deviation. In rock samples, matrix elements can cause self-quenching, self-absorption and ionization, leading to errors in the analytical results. In geochemical processes, elements of the same group of the periodic system tend to occur together because of their similar characteristics. In atomic absorption spectroscopy, such associated elements can absorb primary radiation of similar wavelength, which can distort the interpretation of results. The aim of this study is to predict the influence of matrix elements in rock samples and to apply standard methods for reducing the deviation. Quantitatively, the absorbed intensity of the primary light is proportional to the atomic concentration in the sample; the relationship between photon intensity and concentration in parts per million (ppm) is linear. Three methods exist for eliminating the influence of matrix elements: the external standard method, the internal standard method, and the standard addition method. The external standard method is used for all matrix elements, the internal standard method for eliminating matrix elements with similar characteristics, and the standard addition method for eliminating matrix elements in Au and Pt samples. These three standard methods possess accuracies of about 95-97%. (author)

  12. Effect of Chemistry Triangle Oriented Learning Media on Cooperative, Individual and Conventional Method on Chemistry Learning Result

    Science.gov (United States)

    Latisma D, L.; Kurniawan, W.; Seprima, S.; Nirbayani, E. S.; Ellizar, E.; Hardeli, H.

    2018-04-01

    The purpose of this study was to determine which methods work well with Chemistry Triangle-oriented learning media. This quasi-experimental study involved first-grade senior high school students in six schools: two SMAN each in Solok city and in Pasaman, and two SMKN in Pariaman. The sampling technique was cluster random sampling. Data were collected by test and analyzed by one-way ANOVA and the Kruskal-Wallis test. The results showed that for the high school students in Solok, learning outcomes under the cooperative method were better than those under the conventional and individual methods, both for students with high initial ability and for those with low initial ability. In the SMK schools, overall learning outcomes under the conventional method were better than those under the cooperative and individual methods. For students with high initial ability, the individual method gave better outcomes than the cooperative method, and for students with low initial ability there was no difference in learning outcomes among the cooperative, individual and conventional methods. In the high schools in Pasaman, there was no significant difference in learning outcomes among the three methods.

  13. The new Inventory of Italian Glaciers: Present knowledge, applied methods and preliminary results

    Science.gov (United States)

    Smiraglia, Claudio; Diolaiuti, Guglielmina; D'Agata, Carlo; Maragno, Davide; Baroni, Carlo; Mortara, Gianni; Perotti, Luigi; Bondesan, Aldino; Salvatore, Cristina; Vagliasindi, Marco; Vuillermoz, Elisa

    2013-04-01

    A new glacier inventory is an indispensable requirement in Italy, given the importance of evaluating the present glacier coverage and the recent changes driven by climate. Furthermore, Alpine glaciers represent a non-negligible water and touristic resource, and managing and promoting them requires knowledge of their distribution, size and features. The first Italian Glacier Inventory dates back to 1959-1962. It was compiled by the Italian Glaciological Committee (CGI) in cooperation with the National Research Council (CNR); this first inventory was mainly based on field data coupled with photographs (acquired in the field) and high-resolution maps. The Italian glaciation turned out to comprise 754 ice bodies which altogether covered 525 km². Moreover, in the Eighties a new inventory was compiled to insert Italian data into the World Glacier Inventory (WGI); aerial photos taken at the end of the Seventies (and in some cases affected by considerable snow coverage) were used as the main source of data. No other national inventory was compiled after that period. Nevertheless, during the last decade most of the Italian Alpine regions have produced regional and local glacier inventories, which in several cases can also be accessed and queried through web sites and web GIS applications. The need now is to obtain a complete, homogeneous and contemporary picture of the Italian glaciation which encompasses the already available regional and local data and all the new updated information coming from new sources of data (e.g. orthophotos, satellite images, etc.). The challenge was accepted by the University of Milan, the EvK2CNR Committee and the Italian Glaciological Committee who, with the sponsorship of Levissima Spa, are presently working to compile the new updated Italian Glacier Inventory. The first project step is to produce a unique homogeneous glacier database including glacier boundary and surface area and the main fundamental

  14. Task-Related Edge Density (TED)-A New Method for Revealing Dynamic Network Formation in fMRI Data of the Human Brain.

    Science.gov (United States)

    Lohmann, Gabriele; Stelzer, Johannes; Zuber, Verena; Buschmann, Tilo; Margulies, Daniel; Bartels, Andreas; Scheffler, Klaus

    2016-01-01

    The formation of transient networks in response to external stimuli or as a reflection of internal cognitive processes is a hallmark of human brain function. However, its identification in fMRI data of the human brain is notoriously difficult. Here we propose a new method of fMRI data analysis that tackles this problem by considering large-scale, task-related synchronisation networks. Networks consist of nodes and edges connecting them, where nodes correspond to voxels in fMRI data, and the weight of an edge is determined via task-related changes in dynamic synchronisation between their respective time series. Based on these definitions, we developed a new data analysis algorithm that identifies edges that show differing levels of synchrony between two distinct task conditions and that occur in dense packs with similar characteristics. Hence, we call this approach "Task-related Edge Density" (TED). TED proved to be a very strong marker for dynamic network formation that easily lends itself to statistical analysis using large scale statistical inference. A major advantage of TED compared to other methods is that it does not depend on any specific hemodynamic response model, and it also does not require a presegmentation of the data for dimensionality reduction as it can handle large networks consisting of tens of thousands of voxels. We applied TED to fMRI data of a fingertapping and an emotion processing task provided by the Human Connectome Project. TED revealed network-based involvement of a large number of brain areas that evaded detection using traditional GLM-based analysis. We show that our proposed method provides an entirely new window into the immense complexity of human brain function.
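
    A toy sketch of the core idea, under the assumption that edge synchrony is measured with sliding-window correlations (the authors' actual estimator and statistics are more elaborate): the weight of an edge is the difference in windowed synchrony between two task conditions.

        import numpy as np

        def edge_weight(ts_a, ts_b, win=30):
            """Difference in mean sliding-window correlation between two
            conditions for one voxel pair (simplified illustration)."""
            def windowed_corr(x, y):
                return np.array([np.corrcoef(x[i:i + win], y[i:i + win])[0, 1]
                                 for i in range(len(x) - win + 1)])
            return windowed_corr(*ts_a).mean() - windowed_corr(*ts_b).mean()

        rng = np.random.default_rng(1)
        shared = rng.normal(size=200)  # common signal in condition A only
        cond_a = (shared + 0.5 * rng.normal(size=200),
                  shared + 0.5 * rng.normal(size=200))
        cond_b = (rng.normal(size=200), rng.normal(size=200))
        print(f"task-related synchrony change: {edge_weight(cond_a, cond_b):.2f}")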

  15. Task-Related Edge Density (TED)-A New Method for Revealing Dynamic Network Formation in fMRI Data of the Human Brain.

    Directory of Open Access Journals (Sweden)

    Gabriele Lohmann

    Full Text Available The formation of transient networks in response to external stimuli or as a reflection of internal cognitive processes is a hallmark of human brain function. However, its identification in fMRI data of the human brain is notoriously difficult. Here we propose a new method of fMRI data analysis that tackles this problem by considering large-scale, task-related synchronisation networks. Networks consist of nodes and edges connecting them, where nodes correspond to voxels in fMRI data, and the weight of an edge is determined via task-related changes in dynamic synchronisation between their respective time series. Based on these definitions, we developed a new data analysis algorithm that identifies edges that show differing levels of synchrony between two distinct task conditions and that occur in dense packs with similar characteristics. Hence, we call this approach "Task-related Edge Density" (TED). TED proved to be a very strong marker for dynamic network formation that easily lends itself to statistical analysis using large scale statistical inference. A major advantage of TED compared to other methods is that it does not depend on any specific hemodynamic response model, and it also does not require a presegmentation of the data for dimensionality reduction as it can handle large networks consisting of tens of thousands of voxels. We applied TED to fMRI data of a fingertapping and an emotion processing task provided by the Human Connectome Project. TED revealed network-based involvement of a large number of brain areas that evaded detection using traditional GLM-based analysis. We show that our proposed method provides an entirely new window into the immense complexity of human brain function.

  16. Task-Related Edge Density (TED)—A New Method for Revealing Dynamic Network Formation in fMRI Data of the Human Brain

    Science.gov (United States)

    Lohmann, Gabriele; Stelzer, Johannes; Zuber, Verena; Buschmann, Tilo; Margulies, Daniel; Bartels, Andreas; Scheffler, Klaus

    2016-01-01

    The formation of transient networks in response to external stimuli or as a reflection of internal cognitive processes is a hallmark of human brain function. However, its identification in fMRI data of the human brain is notoriously difficult. Here we propose a new method of fMRI data analysis that tackles this problem by considering large-scale, task-related synchronisation networks. Networks consist of nodes and edges connecting them, where nodes correspond to voxels in fMRI data, and the weight of an edge is determined via task-related changes in dynamic synchronisation between their respective time series. Based on these definitions, we developed a new data analysis algorithm that identifies edges that show differing levels of synchrony between two distinct task conditions and that occur in dense packs with similar characteristics. Hence, we call this approach “Task-related Edge Density” (TED). TED proved to be a very strong marker for dynamic network formation that easily lends itself to statistical analysis using large scale statistical inference. A major advantage of TED compared to other methods is that it does not depend on any specific hemodynamic response model, and it also does not require a presegmentation of the data for dimensionality reduction as it can handle large networks consisting of tens of thousands of voxels. We applied TED to fMRI data of a fingertapping and an emotion processing task provided by the Human Connectome Project. TED revealed network-based involvement of a large number of brain areas that evaded detection using traditional GLM-based analysis. We show that our proposed method provides an entirely new window into the immense complexity of human brain function. PMID:27341204

  17. A new garnet-orthopyroxene thermometer developed: method, results and applications

    Science.gov (United States)

    Olivotos, Spyros-Christos; Kostopoulos, Dimitrios

    2014-05-01

    The Fe-Mg exchange reaction between garnet and orthopyroxene is a robust geothermometer that has extensively been used to retrieve metamorphic temperatures from granulitic and peridotitic/pyroxenitic lithologies, with important implications for the thermal state of the continental lithosphere. More than 800 experimental mineral pairs from both simple and complex systems were gleaned from the literature, covering the P-T range 0.5-15 GPa / 800-1800°C. Grt was treated as a senary (Py, Alm, Grs, Sps, Kno and Uv) solid solution, and Opx as a septenary (En, Fs, Di, Hd, FeTs, MgTs and MgCrTs) one. For Opx, Al in the M1 site was calculated following Carswell (1991) and Fe/Mg equipartitioning between sites was assumed. A mixing-on-sites model was employed to calculate mole fractions of components for both minerals. With regard to the excess free energy of solution and activity coefficients, the formalism of Mukhopadhyay et al. (1993) was adopted, treating both minerals as symmetric regular solutions. Calibration was achieved in multiple steps; in each step ΔS was allowed to vary until the standard deviation of the differences between experimental and calculated temperatures for all experiments was minimised. The experiment with the largest absolute relative deviation in temperature was then eliminated and the process was repeated. The new thermometer reproduces the experimental data to within 50°C and is independent of P-T-X variations within the bounds of the calibrant data set. Application of our new calibration to metamorphosed crustal and mantle rocks that occur both as massifs and as xenoliths in volcanics suggests the following results. Granulite terranes have recorded differences in temperature between peak and re-equilibration conditions in the range 100-340°C, primarily depending on the mechanism and rate of exhumation. Several provinces retain memory of discrete cooling pulses (e.g. Palni Hills, South Harris, Adirondacks, E. Antarctic Belt, Aldan Shield) whereas
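
    The multi-step calibration lends itself to a simple loop. The sketch below is generic: T_model is a hypothetical stand-in for the real thermometer expression (which involves lnKd and the Grt/Opx activity models named above), the data are synthetic, and the 50°C stopping criterion mirrors the reported reproducibility.

        import numpy as np
        from scipy.optimize import minimize_scalar

        def T_model(dS, x):
            return 1000.0 + dS * x          # hypothetical placeholder model

        def calibrate(x, T_exp, n_min=20, tol=50.0):
            """Fit dS by minimising the st. dev. of (T_exp - T_calc),
            eliminate the worst experiment, and repeat."""
            while True:
                res = minimize_scalar(
                    lambda dS: np.std(T_exp - T_model(dS, x)),
                    bounds=(0.0, 100.0), method="bounded")
                dev = np.abs(T_exp - T_model(res.x, x))
                if dev.max() < tol or len(T_exp) <= n_min:
                    return res.x, x, T_exp
                keep = dev < dev.max()      # drop the worst experiment(s)
                x, T_exp = x[keep], T_exp[keep]

        rng = np.random.default_rng(0)
        x = rng.uniform(1.0, 8.0, 200)                      # synthetic predictor
        T_exp = 1000.0 + 30.0 * x + rng.normal(0, 20, 200)  # synthetic data
        dS_fit, x_kept, T_kept = calibrate(x, T_exp)
        print(f"dS = {dS_fit:.1f}, experiments retained: {len(T_kept)}")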

  18. What does patient feedback reveal about the NHS? A mixed methods study of comments posted to the NHS Choices online service

    Science.gov (United States)

    Brookes, Gavin; Baker, Paul

    2017-01-01

    Objective To examine the key themes of positive and negative feedback in patients' online feedback on NHS (National Health Service) services in England and to understand the specific issues within these themes and how they drive positive and negative evaluation. Design Computer-assisted quantitative and qualitative studies of 228 113 comments (28 971 142 words) of online feedback posted to the NHS Choices website. Comments containing the most frequent positive and negative evaluative words are qualitatively examined to determine the key drivers of positive and negative feedback. Participants Contributors posting comments about the NHS between March 2013 and September 2015. Results Overall, NHS services were evaluated positively approximately three times more often than negatively. The four key areas of focus were: treatment, communication, interpersonal skills and system/organisation. Treatment exhibited the highest proportion of positive evaluative comments (87%), followed by communication (77%), interpersonal skills (44%) and, finally, system/organisation (41%). Qualitative analysis revealed that reference to staff interpersonal skills featured prominently, even in comments relating to treatment and system/organisational issues. Positive feedback was elicited in cases of staff being caring, compassionate and knowing patients' names, while rudeness, apathy and not listening were frequent drivers of negative feedback. Conclusions Although technical competence constitutes an undoubtedly fundamental aspect of healthcare provision, staff members were much more likely to be evaluated both positively and negatively according to their interpersonal skills. Therefore, the findings reported in this study highlight the salience of such 'soft' skills to patients and emphasise the need for these to be focused upon and developed in staff training programmes, as well as ensuring that decisions around NHS funding do not result in demotivated and rushed staff.

  19. Qualitative Analysis Results for Applications of a New Fire Probabilistic Safety Assessment Method to Ulchin Unit 3

    International Nuclear Information System (INIS)

    Kang, Daeil; Kim, Kilyoo; Jang, Seungcheol

    2013-01-01

    The Fire PRA Implementation Guide has been used for performing fire PSAs for NPPs in Korea. Recently, the US NRC and EPRI developed a new fire PSA method, NUREG/CR-6850, to provide state-of-the-art methods, tools, and data for the conduct of a fire PSA for a commercial nuclear power plant (NPP). Due to the limited budget and manpower for the development of KSRP, hybrid PSA approaches, using both NUREG/CR-6850 and the Fire PRA Implementation Guide, will be employed for conducting a fire PSA of Ulchin Unit 3. This paper presents the qualitative analysis results for applications of the new fire PSA method to Ulchin Unit 3. Compared with the previous industry method, the number of fire areas identified for quantification and the amount of equipment selected have increased.

  20. The numerical method of inverse Laplace transform for calculation of overvoltages in power transformers and test results

    Directory of Open Access Journals (Sweden)

    Mikulović Jovan Č.

    2014-01-01

    Full Text Available A methodology for the calculation of overvoltages in transformer windings, based on a numerical method of inverse Laplace transform, is presented. The mathematical model of the transformer windings is described by partial differential equations corresponding to distributed-parameter electrical circuits. The procedure for calculating overvoltages is applied to windings having either an isolated neutral point, a grounded neutral point, or a neutral point grounded through an impedance. A comparative analysis of the calculation results obtained by the proposed numerical method and by an analytical method of calculating overvoltages in transformer windings is presented. The results computed by the proposed method are compared with measured voltage distributions when a voltage surge is applied to a three-phase 30 kVA power transformer. [Project of the Ministry of Science of the Republic of Serbia, nos. TR-33037 and TR-33020]
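
    The abstract does not name the particular inversion algorithm; a widely used textbook choice for numerically inverting a Laplace-domain response F(s) is the Gaver-Stehfest scheme, sketched here and checked against a transform with a known original.

        import math

        def stehfest(F, t, N=12):
            """Gaver-Stehfest numerical inverse Laplace transform
            (N must be even; a generic scheme, not necessarily the
            one used by the authors)."""
            ln2 = math.log(2.0)
            total = 0.0
            for k in range(1, N + 1):
                V = 0.0
                for j in range((k + 1) // 2, min(k, N // 2) + 1):
                    V += (j ** (N // 2) * math.factorial(2 * j) /
                          (math.factorial(N // 2 - j) * math.factorial(j) *
                           math.factorial(j - 1) * math.factorial(k - j) *
                           math.factorial(2 * j - k)))
                total += (-1) ** (k + N // 2) * V * F(k * ln2 / t)
            return total * ln2 / t

        # Check: F(s) = 1/(s+1) inverts to f(t) = exp(-t).
        print(stehfest(lambda s: 1.0 / (s + 1.0), t=1.0))  # ~0.3679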

  1. First results of Minimum Fisher Regularisation as unfolding method for JET NE213 liquid scintillator neutron spectrometry

    International Nuclear Information System (INIS)

    Mlynar, Jan; Adams, John M.; Bertalot, Luciano; Conroy, Sean

    2005-01-01

    At JET, the NE213 liquid scintillator is being validated as a diagnostic tool for spectral measurements of neutrons emitted from the plasma. Neutron spectra have to be unfolded from the measured pulse-height spectra, which is an ill-conditioned problem. Therefore, the use of two independent unfolding methods allows for less ambiguity in the interpretation of the data. In parallel to the routine algorithm MAXED, based on the Maximum Entropy method, the Minimum Fisher Regularisation (MFR) method has been introduced at JET. The MFR method, known from two-dimensional tomography applications, has proved to provide a new transparent tool to validate the JET neutron spectra measured with the NE213 liquid scintillators. In this article, the MFR method applicable to spectra unfolding is briefly explained. After a mention of MFR tests on phantom spectra, experimental neutron spectra are presented that were obtained by applying MFR to NE213 data in selected JET experiments. The results tend to confirm MAXED observations
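
    To indicate the flavour of such regularised unfolding, here is a toy sketch: the measured pulse-height spectrum g is modelled as g = R f for a response matrix R, and the Minimum Fisher penalty weights a smoothness term by 1/f, updated iteratively. Everything below (matrix, parameters) is illustrative, not the JET implementation.

        import numpy as np

        def unfold_mfr(R, g, alpha=1e-2, iters=5):
            """Iteratively reweighted, Fisher-penalised least-squares
            unfolding (simplified sketch)."""
            n = R.shape[1]
            D = np.diff(np.eye(n), axis=0)     # first-difference operator
            f = np.full(n, g.sum() / n)        # flat starting guess
            for _ in range(iters):
                # Minimum Fisher weighting: penalise (f')^2 / f
                w = 1.0 / np.clip((f[:-1] + f[1:]) / 2.0, 1e-6, None)
                A = R.T @ R + alpha * D.T @ (w[:, None] * D)
                f = np.linalg.solve(A, R.T @ g)
                f = np.clip(f, 0.0, None)      # keep the spectrum non-negative
            return f

        # Synthetic demonstration with a triangular response matrix.
        R = np.tril(np.ones((40, 40))) / 40.0
        g = R @ np.exp(-np.linspace(0.0, 3.0, 40))
        f_est = unfold_mfr(R, g)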

  2. Method, equipment and results of determination of element composition of the Venus rock by the Vega-2 space probe

    International Nuclear Information System (INIS)

    Surkov, Yu.A.; Moskaleva, L.P.; Shcheglov, O.P.

    1985-01-01

    Venus rock composition was determined by the X-ray radiometric method at a site in the northeast of Aphrodite Terra. The experiment was performed on the Vega-2 spacecraft. The composition of the Venus rock proved to be close to that of the anorthosite-norite-troctolite rocks widespread in the lunar highland crust. Descriptions of the method, the instrumentation and the results of determining the composition of rocks at the landing site of the Vega-2 spacecraft are given.

  3. A novel method for measuring cellular antibody uptake using imaging flow cytometry reveals distinct uptake rates for two different monoclonal antibodies targeting L1.

    Science.gov (United States)

    Hazin, John; Moldenhauer, Gerhard; Altevogt, Peter; Brady, Nathan R

    2015-08-01

    Monoclonal antibodies (mAbs) have emerged as a promising tool for cancer therapy. Differing approaches utilize mAbs either to deliver a drug to the tumor cells or to modulate the host's immune system to mediate tumor kill. The rate at which a therapeutic antibody is internalized by tumor cells is a decisive feature for choosing the appropriate treatment strategy. We herein present a novel method to effectively quantitate antibody uptake by tumor cells using image-based flow cytometry, which combines image analysis with high throughput in sample numbers and sample size. The use of this method is established by determining the uptake rate of an anti-EpCAM antibody (HEA125) from single-cell measurements of plasma-membrane versus internalized antibody, in conjunction with inhibitors of endocytosis. The method is then applied to two mAbs (L1-9.3, L1-OV52.24) targeting the neural cell adhesion molecule L1 (L1CAM) at two different epitopes. Based on median cell population responses, we find that mAb L1-OV52.24 is rapidly internalized by the ovarian carcinoma cell line SKOV3ip, while L1 mAb 9.3 is mainly retained at the cell surface. These findings suggest the L1 mAb OV52.24 as a candidate to be further developed for drug delivery to cancer cells, while L1-9.3 may be optimized to tag the tumor cells and stimulate immunogenic cancer cell killing. Furthermore, when analyzing cell-to-cell variability, we observed L1 mAb OV52.24 rapidly transition into a subpopulation with high internalization capacity. In summary, this novel high-content method for measuring antibody internalization rate provides a high level of accuracy and sensitivity for cell population measurements and reveals further biologically relevant information when taking into account cellular heterogeneity. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Preferential Interactions between ApoE-containing Lipoproteins and Aβ Revealed by a Detection Method that Combines Size Exclusion Chromatography with Non-Reducing Gel-shift

    Science.gov (United States)

    LaDu, Mary Jo; Munson, Gregory W.; Jungbauer, Lisa; Getz, Godfrey S.; Reardon, Catherine A.; Tai, Leon M.; Yu, Chunjiang

    2012-01-01

    The association between apolipoprotein E (apoE) and amyloid-β peptide (Aβ) may significantly impact the function of both proteins, thus affecting the etiology of Alzheimer’s disease (AD). However, apoE/Aβ interactions remain fundamentally defined by the stringency of the detection method. Here we use size exclusion chromatography (SEC) as a non-stringent approach to the detection of apoE/Aβ interactions in solution, specifically apoE and both endogenous and exogenous Aβ from plasma, CSF and astrocyte conditioned media. By SEC analysis, Aβ association with plasma and CNS lipoproteins is apoE-dependent. While endogenous Aβ elutes to specific human plasma lipoproteins distinct from those containing apoE, it is the apoE-containing lipoproteins that absorb excess amounts of exogenous Aβ40. In human CSF, apoE, endogenous Aβ and phospholipid elute in an almost identical profile, as do apoE, exogenous Aβ and phospholipid from astrocyte conditioned media. Combining SEC fractionation with subsequent analysis for SDS-stable apoE/Aβ complex reveals that apoE-containing astrocyte lipoproteins exhibit the most robust interactions with Aβ. Thus, standardization of the methods for detecting apoE/Aβ complex is necessary to determine its functional significance in the neuropathology characteristic of AD. Importantly, a systematic understanding of the role of apoE-containing plasma and CNS lipoproteins in Aβ homeostasis could potentially contribute to identifying a plasma biomarker currently over-looked because it has multiple components. PMID:22138302

  5. Uranium City radiation reduction program: further efforts at remedial measures for houses with block walls, concrete porosity test results, and intercomparison of Kuznetz method and Tsivoglau method

    International Nuclear Information System (INIS)

    Haubrich, E.; Leung, M.K.; Mackie, R.

    1980-01-01

    An attempt was made to reduce the levels of radon in a house in Uranium City by mechanically venting the plenums in the concrete-block basement walls, with little success. A table compares the results obtained by measuring the radon working level (WL) using the Tsivoglau and the Kuznetz methods.

  6. Air sampling methods to evaluate microbial contamination in operating theatres: results of a comparative study in an orthopaedics department.

    Science.gov (United States)

    Napoli, C; Tafuri, S; Montenegro, L; Cassano, M; Notarnicola, A; Lattarulo, S; Montagna, M T; Moretti, B

    2012-02-01

    To evaluate the level of microbial contamination of air in operating theatres using active [i.e. surface air system (SAS)] and passive [i.e. index of microbial air contamination (IMA) and nitrocellulose membranes positioned near the wound] sampling systems. Sampling was performed between January 2010 and January 2011 in the operating theatre of the orthopaedics department in a university hospital in Southern Italy. During surgery, the mean bacterial loads recorded were 2232.9 colony-forming units (cfu)/m²/h with the IMA method, 123.2 cfu/m³ with the SAS method and 2768.2 cfu/m²/h with the nitrocellulose membranes. Correlation was found between the results of the three methods. Staphylococcus aureus was detected in 12 of 60 operations (20%) with the membranes, five operations (8.3%) with the SAS method, and three operations (5%) with the IMA method. Use of nitrocellulose membranes placed near a wound is a valid method for measuring the microbial contamination of air. This method was more sensitive than the IMA method and was not subject to any calibration bias, unlike active air monitoring systems. Copyright © 2011 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.

  7. Estimated H-atom anisotropic displacement parameters: a comparison between different methods and with neutron diffraction results

    DEFF Research Database (Denmark)

    Munshi, Parthapratim; Madsen, Anders Ø; Spackman, Mark A

    2008-01-01

    ...systems and identify systematic discrepancies for several atom types. A revised and extended library of internal H-atom mean-square displacements is presented for use with Madsen's SHADE web server [J. Appl. Cryst. (2006), 39, 757-758; http://shade.ki.ku.dk], and the improvement over the original SHADE in the agreement with neutron results is substantial, suggesting that this is now the most readily and widely applicable of the three approximate procedures. Using this new library--SHADE2--it is shown that, in line with expectations, a segmented rigid-body description of the heavy atoms yields only a small improvement... The SHADE2 library, now incorporated in the SHADE web server, is recommended as a routine procedure for deriving estimates of H-atom ADPs suitable for use in charge-density studies on molecular crystals, and its widespread use should reveal remaining deficiencies and perhaps...

  8. Android Emotions Revealed

    DEFF Research Database (Denmark)

    Vlachos, Evgenios; Schärfe, Henrik

    2012-01-01

    This work presents a method for designing facial interfaces for sociable android robots with respect to the fundamental rules of human affect expression. Extending the work of Paul Ekman towards a robotic direction, we follow the judgment-based approach for evaluating facial expressions to test in which case an android robot like the Geminoid|DK (a duplicate of an Original person) reveals emotions convincingly: when following an empirical perspective, or when following a theoretical one. The methodology includes the processes of acquiring the empirical data and gathering feedback on them. Our findings are based on the results derived from a number of judgments, and suggest that before programming the facial expressions of a Geminoid, the Original should pass through the proposed procedure. According to our recommendations, the facial expressions of an android should be tested by judges, even...

  9. Locating previously unknown patterns in data-mining results: a dual data- and knowledge-mining method

    Directory of Open Access Journals (Sweden)

    Knaus William A

    2006-03-01

    Full Text Available Abstract Background Data mining can be utilized to automate analysis of substantial amounts of data produced in many organizations. However, data mining produces large numbers of rules and patterns, many of which are not useful. Existing methods for pruning uninteresting patterns have only begun to automate the knowledge acquisition step (which is required for subjective measures of interestingness), hence leaving a serious bottleneck. In this paper we propose a method for automatically acquiring knowledge to shorten the pattern list by locating the novel and interesting ones. Methods The dual-mining method is based on automatically comparing the strength of patterns mined from a database with the strength of equivalent patterns mined from a relevant knowledgebase. When these two estimates of pattern strength do not match, a high "surprise score" is assigned to the pattern, identifying the pattern as potentially interesting. The surprise score captures the degree of novelty or interestingness of the mined pattern. In addition, we show how to compute p-values for each surprise score, thus filtering out noise and attaching statistical significance. Results We have implemented the dual-mining method using scripts written in Perl and R. We applied the method to a large patient database and a biomedical literature citation knowledgebase. The system estimated association scores for 50,000 patterns, composed of disease entities and lab results, by querying the database and the knowledgebase. It then computed the surprise scores by comparing the pairs of association scores. Finally, the system estimated statistical significance of the scores. Conclusion The dual-mining method eliminates more than 90% of patterns with strong associations, thus identifying them as uninteresting. We found that the pruning of patterns using the surprise score matched the biomedical evidence in the 100 cases that were examined by hand. The method automates the acquisition of
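
    The surprise-score idea sketched in this abstract can be illustrated in a few lines of Python. The scoring below (a two-proportion z-test comparing a pattern's strength in the database against the knowledgebase) is an assumed stand-in for the authors' statistic, and the counts are invented.

```python
# Minimal sketch of the dual-mining idea: compare the strength of a
# pattern (co-occurrence of a disease and a lab result) estimated from
# a patient database with the strength of the equivalent pattern in a
# literature knowledgebase, and flag large mismatches as "surprising".
# The two-proportion z-test here is an illustrative choice, not the
# exact statistic used in the paper.
from math import sqrt
from scipy.stats import norm

def association(co_occurrences: int, total: int) -> float:
    """Pattern strength as a simple co-occurrence proportion."""
    return co_occurrences / total

def surprise_score(db_hits, db_total, kb_hits, kb_total):
    """Z statistic for the difference between the two strengths."""
    p_db = association(db_hits, db_total)
    p_kb = association(kb_hits, kb_total)
    p_pool = (db_hits + kb_hits) / (db_total + kb_total)
    se = sqrt(p_pool * (1 - p_pool) * (1 / db_total + 1 / kb_total))
    return (p_db - p_kb) / se

z = surprise_score(db_hits=420, db_total=10_000, kb_hits=12, kb_total=5_000)
p_value = 2 * norm.sf(abs(z))   # two-sided p-value
print(f"surprise z = {z:.2f}, p = {p_value:.3g}")
# A large |z| with a small p marks the pattern as potentially novel.
```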

  10. Comparison of objective methods for assessment of annoyance of low frequency noise with the results of a laboratory listening test

    DEFF Research Database (Denmark)

    Poulsen, Torben

    2003-01-01

    Subjective assessments made by test persons were compared to results from a number of objective measurement and calculation methods for the assessment of low frequency noise. Eighteen young persons with normal hearing listened to eight environmental low frequency noises and evaluated the annoyance...

  11. A method for carrying out radiolysis and chemical reactions by means of the radiations resulting from a thermonuclear reaction

    International Nuclear Information System (INIS)

    Gomberg, H.J.

    1974-01-01

    The invention relates to the use of the radiations resulting from thermonuclear reactions. It deals with a method comprising a combination of thermo-chemical and radiolytic reactions for treating a molecule having a high absorption rate by means of the radiations of a thermonuclear reaction. This is applicable to the dissociation of water into oxygen and hydrogen. [fr]

  12. Chapter 5: Exponential experiments on natural uranium graphite moderated systems. II: Correlation of results with the method of Syrett (1961)

    International Nuclear Information System (INIS)

    Brown, G.; Moore, P.G.F.; Richmond, R.

    1963-01-01

    The results are given of exponential experiments on graphite moderated systems with fuel elements consisting of single rods and tubes of natural uranium metal. A correlation is given with the method of calculation proposed by Syrett (1961) and new consistent values of neutron yield and effective resonance integral are derived. (author)

  13. STANDARDIZATION OF GLYCOHEMOGLOBIN RESULTS AND REFERENCE VALUES IN WHOLE-BLOOD STUDIED IN 103 LABORATORIES USING 20 METHODS

    NARCIS (Netherlands)

    WEYKAMP, CW; PENDERS, TJ; MUSKIET, FAJ; VANDERSLIK, W

    We investigated the effect of calibration with lyophilized calibrators on whole-blood glycohemoglobin (glyHb) results. One hundred three laboratories, using 20 different methods, determined glyHb in two lyophilized calibrators and two whole-blood samples. For whole-blood samples with low (5%) and high (9%) glyHb percentages, respectively, calibration decreased overall interlaboratory variation (CV) from 16% to 9% and from 11% to 6%, and decreased intermethod variation from 14% to 6% and from 12% to 5%.

  14. An optimized method for measuring hypocretin-1 peptide in the mouse brain reveals differential circadian regulation of hypocretin-1 levels rostral and caudal to the hypothalamus

    DEFF Research Database (Denmark)

    Justinussen, J L; Holm, A; Kornum, B R

    2015-01-01

    The hypocretin/orexin system regulates, among other things, sleep and energy homeostasis, and is likely regulated by both homeostatic and circadian mechanisms. Little is known about local differences in the regulation of hypocretin activity. The aim of this study was to establish an optimized peptide quantification method for hypocretin-1 extracted from different mouse brain areas and to use this method for investigating circadian fluctuations of hypocretin-1 levels in these areas. The results show that hypocretin-1 peptide can be extracted from small pieces of intact tissue with sufficient yield for measurements in a standard radioimmunoassay. Utilizing the optimized method, it was found that prepro-hypocretin mRNA and peptide show circadian fluctuations in the mouse brain: the hypocretin-1 peptide level in the frontal brain peaks during dark, as does prepro-hypocretin mRNA in the hypothalamus. However, in midbrain and brainstem tissue caudal to the hypothalamus, there was less circadian fluctuation and a tendency for higher levels during the light phase. These data suggest that regulation of the hypocretin system differs between brain areas.

  15. A Comparison of Result Reliability for Investigation of Milk Composition by Alternative Analytical Methods in Czech Republic

    Directory of Open Access Journals (Sweden)

    Oto Hanuš

    2014-01-01

    Full Text Available The reliability of milk analysis results is important for quality assurance along the foodstuff chain. There are direct and indirect methods for measuring milk composition (fat (F), protein (P), lactose (L) and solids non-fat (SNF) content). The goal was to evaluate some reference and routine milk analytical procedures on the basis of their results. The direct reference analyses were: F, fat content (Röse–Gottlieb method); P, crude protein content (Kjeldahl method); L, lactose (monohydrate; polarimetric method); SNF, solids non-fat (gravimetric method). F, P, L and SNF were also determined by various indirect methods: MIR (infrared (IR) technology with optical filters; 7 instruments in 4 labs); MIR–FT (IR spectroscopy with Fourier transformation; 10 in 6); the ultrasonic method (UM; 3 in 1); analysis by the blue and red box (BRB; 1 in 1). Ten reference milk samples were used. The coefficient of determination (R2), correlation coefficient (r) and standard deviation of the mean of individual differences (MDsd, for n) were evaluated. All correlations (r), for all indirect and alternative methods and all milk components, were significant (P ≤ 0.001). MIR and MIR–FT (conventional methods) explained a considerably higher proportion of the variability in the reference results than the UM and BRB (alternative) methods. All r average values (x̄ − 1.64 × sd, for a 95% confidence interval) can be used as standards for calibration quality evaluation (MIR, MIR–FT, UM and BRB): for F 0.997, 0.997, 0.99 and 0.995; for P 0.986, 0.981, 0.828 and 0.864; for L 0.968, 0.871, 0.705 and 0.761; for SNF 0.992, 0.993, 0.911 and 0.872. Similarly for MDsd (x̄ + 1.64 × sd): for F 0.071, 0.068, 0.132 and 0.101%; for P 0.051, 0.054, 0.202 and 0.14%; for L 0.037, 0.074, 0.113 and 0.11%; for SNF 0.052, 0.068, 0.141 and 0.204%.
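
    The agreement statistics used above (r, R2 and MDsd) are straightforward to compute. A minimal Python illustration, with invented paired fat-content measurements standing in for reference (Röse-Gottlieb) and routine (e.g. MIR) results:

```python
# Illustrative computation of the agreement statistics reported above
# (r, R2, and the standard deviation of individual differences, MDsd)
# for one milk component measured by a reference and a routine method.
# The data are made up; only the statistics mirror the abstract.
import numpy as np

reference = np.array([3.82, 4.10, 3.55, 4.45, 3.98, 4.20, 3.70, 4.05, 3.90, 4.30])  # fat %, reference
routine   = np.array([3.85, 4.05, 3.60, 4.40, 4.02, 4.18, 3.72, 4.00, 3.88, 4.35])  # fat %, routine

r = np.corrcoef(reference, routine)[0, 1]  # correlation coefficient
r2 = r ** 2                                # coefficient of determination
diffs = routine - reference
md_sd = diffs.std(ddof=1)                  # SD of individual differences (MDsd)

print(f"r = {r:.3f}, R2 = {r2:.3f}, MDsd = {md_sd:.3f} %")
```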

  16. An inverse analysis reveals limitations of the soil-CO2 profile method to calculate CO2 production and efflux for well-structured soils

    Directory of Open Access Journals (Sweden)

    M. D. Corre

    2010-08-01

    Full Text Available Soil respiration is the second largest flux in the global carbon cycle, yet the underlying below-ground process, carbon dioxide (CO2) production, is not well understood because it cannot be measured in the field. CO2 production has frequently been calculated from the vertical CO2 diffusive flux divergence, known as the "soil-CO2 profile method". This relatively simple model requires knowledge of soil CO2 concentration profiles and soil diffusive properties. Application of the method to a tropical lowland forest soil in Panama gave inconsistent results when using diffusion coefficients (D) calculated from relationships with soil porosity and moisture ("physically modeled" D). Our objective was to investigate whether these inconsistencies were related to (1) the applied interpolation and solution methods and/or (2) uncertainties in the physically modeled profile of D. First, we show that the calculated CO2 production strongly depends on the function used to interpolate between measured CO2 concentrations. Secondly, using an inverse analysis of the soil-CO2 profile method, we deduce which D would be required to explain the observed CO2 concentrations, assuming the model perception is valid. In the top soil, this inversely modeled D closely resembled the physically modeled D. In the deep soil, however, the inversely modeled D increased sharply while the physically modeled D did not. When imposing a constraint during the fit parameter optimization, a solution could be found where this deviation between the physically and inversely modeled D disappeared. A radon (Rn) mass balance model, in which diffusion was calculated based on the physically modeled or constrained inversely modeled D, simulated observed Rn profiles reasonably well. However, the CO2 concentrations which corresponded to the constrained inversely modeled D were too small compared to the measurements. We suggest that, in well-structured soils, a missing description of steady state CO2...
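
    The flux-divergence calculation at the heart of the soil-CO2 profile method can be written down compactly. The Python sketch below computes layer-wise CO2 production from a concentration profile via Fick's law; the depths, concentrations and diffusivities are invented placeholders, and the simple finite-difference scheme is one choice among the interpolation options the abstract shows to matter.

```python
# Minimal sketch of the "soil-CO2 profile method": CO2 production in a
# soil layer is the divergence of the vertical diffusive flux,
# F = D * dC/dz (Fick's first law, flux positive toward the surface
# with depth z increasing downward). All numbers are placeholders.
import numpy as np

z = np.array([0.00, 0.05, 0.15, 0.30, 0.60])   # depth below surface [m]
C = np.array([0.04, 0.25, 0.60, 1.10, 1.60])   # CO2 concentration [mol m-3]
D = np.array([3e-6, 2e-6, 1.5e-6, 1e-6])       # diffusivity between nodes [m2 s-1]

F = D * np.diff(C) / np.diff(z)                # upward flux at the 4 interfaces
z_mid = 0.5 * (z[:-1] + z[1:])                 # interface depths [m]

# Production in each interior layer = flux out (top) minus flux in
# (bottom), divided by layer thickness -> volumetric source term.
production = (F[:-1] - F[1:]) / np.diff(z_mid)  # [mol m-3 s-1]
efflux = F[0]                                   # surface efflux approximated by the top flux

print("layer production:", production)
print("surface efflux:", efflux)
```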

  17. A normalization method for combination of laboratory test results from different electronic healthcare databases in a distributed research network.

    Science.gov (United States)

    Yoon, Dukyong; Schuemie, Martijn J; Kim, Ju Han; Kim, Dong Ki; Park, Man Young; Ahn, Eun Kyoung; Jung, Eun-Young; Park, Dong Kyun; Cho, Soo Yeon; Shin, Dahye; Hwang, Yeonsoo; Park, Rae Woong

    2016-03-01

    Distributed research networks (DRNs) afford statistical power by integrating observational data from multiple partners for retrospective studies. However, laboratory test results across care sites are derived using different assays from varying patient populations, making it difficult to simply combine data for analysis. Additionally, existing normalization methods are not suitable for retrospective studies. We normalized laboratory results from different data sources by adjusting for heterogeneous clinico-epidemiologic characteristics of the data and called this the subgroup-adjusted normalization (SAN) method. Subgroup-adjusted normalization renders the means and standard deviations of distributions identical under population structure-adjusted conditions. To evaluate its performance, we compared SAN with existing methods for simulated and real datasets consisting of blood urea nitrogen, serum creatinine, hematocrit, hemoglobin, serum potassium, and total bilirubin. Various clinico-epidemiologic characteristics can be applied together in SAN; for simplicity of comparison, age and gender were used to adjust for population heterogeneity in this study. In simulations, SAN had the lowest standardized difference in means (SDM) and Kolmogorov-Smirnov values for all tests, and subgroup-adjusted normalization performed better than normalization using the other methods. The SAN method is applicable in a DRN environment and should facilitate analysis of data integrated across DRN partners for retrospective observational studies. Copyright © 2015 John Wiley & Sons, Ltd.
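
    A rough sketch of the subgroup-adjusted idea helps make it concrete. In the hypothetical Python below, values from a source site are standardized within each (age band, gender) subgroup and rescaled to the corresponding subgroup statistics of a reference site; the column names and two-key subgrouping are assumptions, and this is not the authors' implementation.

```python
# Rough sketch of subgroup-adjusted normalization (SAN) as described
# above: within each clinico-epidemiologic subgroup (here age band and
# gender), laboratory values from a source site are rescaled so their
# subgroup mean and SD match those of a reference site. Column names
# and subgroup keys are hypothetical.
import pandas as pd

def san_normalize(source: pd.DataFrame, reference: pd.DataFrame,
                  value: str, keys=("age_band", "gender")) -> pd.Series:
    keys = list(keys)
    ref_stats = reference.groupby(keys)[value].agg(["mean", "std"])
    src_stats = source.groupby(keys)[value].agg(["mean", "std"])

    def transform(row):
        ref = ref_stats.loc[tuple(row[k] for k in keys)]
        src = src_stats.loc[tuple(row[k] for k in keys)]
        z = (row[value] - src["mean"]) / src["std"]  # standardize within subgroup
        return z * ref["std"] + ref["mean"]          # rescale to reference subgroup

    return source.apply(transform, axis=1)
```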

  18. [Reconsidering children's dreams. A critical review of methods and results in developmental dream research from Freud to contemporary works].

    Science.gov (United States)

    Sándor, Piroska; Bódizs, Róbert

    2014-01-01

    Examining children's dream development is a significant challenge for researchers. Results from studies on children's dreaming may enlighten us about the nature and role of dreaming, as well as broaden our knowledge of consciousness and cognitive development. This review summarizes the main questions and the historical progress of developmental dream research, with the aim of shedding light on the advantages, disadvantages and effects of different settings and methods on research outcomes. A typical example is the dreams of 3 to 5 year-olds: according to laboratory studies they are simple and static, with a relative absence of emotions and of active self-participation; studies using different methodologies, however, found them to be vivid, rich in emotions, and with the self as an active participant. Questions about the validity of the different methods arise and are considered within this review. Given that methodological differences can result in highly divergent outcomes, it is strongly recommended that future research select methodology and treat results more carefully.

  19. Examining Sexual Dysfunction in Non‐Muscle‐Invasive Bladder Cancer: Results of Cross‐Sectional Mixed‐Methods Research

    Directory of Open Access Journals (Sweden)

    Marc A. Kowalkowski, PhD

    2014-08-01

    Conclusions: Survivors' sexual symptoms may result from NMIBC, comorbidities, or both. These results inform literature and practice by raising awareness about the frequency of symptoms and the impact on NMIBC survivors' intimate relationships. Further work is needed to design symptom management education programs to dispel misinformation about contamination post‐treatment and improve quality of life. Kowalkowski MA, Chandrashekar A, Amiel GE, Lerner SP, Wittmann DA, Latini DM, and Goltz HH. Examining sexual dysfunction in non‐muscle‐invasive bladder cancer: Results of cross‐sectional mixed‐methods research. Sex Med 2014;2:141–151.

  20. Standardization of glycohemoglobin results and reference values in whole blood studied in 103 laboratories using 20 methods.

    Science.gov (United States)

    Weykamp, C W; Penders, T J; Miedema, K; Muskiet, F A; van der Slik, W

    1995-01-01

    We investigated the effect of calibration with lyophilized calibrators on whole-blood glycohemoglobin (glyHb) results. One hundred three laboratories, using 20 different methods, determined glyHb in two lyophilized calibrators and two whole-blood samples. For whole-blood samples with low (5%) and high (9%) glyHb percentages, respectively, calibration decreased overall interlaboratory variation (CV) from 16% to 9% and from 11% to 6% and decreased intermethod variation from 14% to 6% and from 12% to 5%. Forty-seven laboratories, using 14 different methods, determined mean glyHb percentages in self-selected groups of 10 nondiabetic volunteers each. With calibration their overall mean (2SD) was 5.0% (0.5%), very close to the 5.0% (0.3%) derived from the reference method used in the Diabetes Control and Complications Trial. In both experiments the Abbott IMx and Vision showed deviating results. We conclude that, irrespective of the analytical method used, calibration enables standardization of glyHb results, reference values, and interpretation criteria.

  1. A validation of direct grey Dancoff factors results for cylindrical cells in cluster geometry by the Monte Carlo method

    International Nuclear Information System (INIS)

    Rodrigues, Leticia Jenisch; Bogado, Sergio; Vilhena, Marco T.

    2008-01-01

    The WIMS code is a well known and one of the most used codes to handle nuclear core physics calculations. Recently, the PIJM module of the WIMS code was modified in order to allow the calculation of Grey Dancoff factors, for partially absorbing materials, using the alternative definition in terms of escape and collision probabilities. Grey Dancoff factors for the Canadian CANDU-37 and CANFLEX assemblies were calculated with PIJM at five symmetrically distinct fuel pin positions. The results, obtained via the Direct Method, i.e. by direct calculation of escape and collision probabilities, were satisfactory when compared with those in the literature. On the other hand, the PIJMC module was developed to calculate escape and collision probabilities using the Monte Carlo method. Modifications in this module were performed to determine Black Dancoff factors, considering perfectly absorbing fuel rods. In this work, we proceed further in the task of validating the Direct Method by the Monte Carlo approach. To this end, the PIJMC routine is modified to compute Grey Dancoff factors using the cited alternative definition. Results are reported for the mentioned CANDU-37 and CANFLEX assemblies obtained with PIJMC, at the same fuel pin positions as with PIJM. A good agreement is observed between the results from the Monte Carlo and Direct methods.
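
    The Monte Carlo ingredient mentioned above, estimating escape and collision probabilities by random sampling, can be illustrated for the simplest case: the first-flight escape probability of neutrons born uniformly in an infinitely long cylindrical pin. The radius, cross-section and sample size below are arbitrary, and this standalone sketch is separate from the actual WIMS/PIJMC code.

```python
# Hedged illustration of a Monte Carlo escape-probability estimate:
# neutrons are born uniformly in an infinite cylinder of radius R with
# total macroscopic cross-section sigma; the first-flight escape
# probability is the mean of exp(-sigma * s), where s is the distance
# to the surface along an isotropic flight direction. Dancoff factors
# are built from such escape/collision probabilities.
import numpy as np

rng = np.random.default_rng(42)

def escape_probability(R=0.6, sigma=0.4, n=200_000):
    r = R * np.sqrt(rng.random(n))         # birth radius, uniform over the disk
    phi = 2 * np.pi * rng.random(n)        # azimuth of flight direction
    mu = 2 * rng.random(n) - 1             # cosine of polar angle, isotropic
    sin_theta = np.sqrt(1 - mu**2) + 1e-12 # guard against division by zero
    # 2-D chord length from (r, 0) along (cos phi, sin phi) to |p| = R
    b = r * np.cos(phi)
    t = -b + np.sqrt(b**2 - r**2 + R**2)
    s = t / sin_theta                      # true 3-D path length to the surface
    return np.exp(-sigma * s).mean()

print(f"P_escape ≈ {escape_probability():.4f}")
```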

  2. METHODS OF MEASURING THE EFFECTS OF LIGHTNING BY SIMULATING ITS STRIKES WITH THE INTERVAL ASSESSMENT OF THE RESULTS OF MEASUREMENTS

    Directory of Open Access Journals (Sweden)

    P. V. Kriksin

    2017-01-01

    Full Text Available The article presents newly developed methods for more accurate interval estimation of the experimental values of the voltages that arise on substation grounding devices and in control-cable circuits when lightning strikes a lightning rod; the improved estimate increased the accuracy of lightning-disturbance studies by 28%. The more accurate interval estimates were achieved by developing a measurement model that accounts, alongside the measured values, for the different measurement errors and includes special processing of the measurement results. As a result, the interval containing the true value of the sought voltage is determined with 95% confidence. The methods can be applied to the IK-1 and IKP-1 measurement complexes, consisting of an aperiodic pulse generator, a high-frequency pulse generator and selective voltmeters. To evaluate the effectiveness of the developed methods, series of experimental voltage assessments of the grounding devices of ten operating high-voltage substations were carried out in accordance with both the developed methods and traditional techniques. The results confirmed that the true values of voltage can lie over a wide range, which ought to be considered in the technical diagnostics of substation lightning protection when analysing measurement results and developing measures to reduce the effects of lightning. A comparative analysis of measurements made in accordance with the developed methods and with traditional techniques demonstrated that the true value of the sought voltage may exceed the measured value by 28% on average, which ought to be considered in further analysis of the lightning-protection parameters at a facility and in the development of corrective actions. The developed methods have been
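
    As a generic illustration of interval estimation that combines repeated-measurement scatter with an instrument error term (standard GUM-style propagation, offered only as an assumed stand-in for the authors' measurement model):

```python
# Illustrative 95% interval for a voltage measured over repeated
# simulated strikes: the type A (repeatability) uncertainty is combined
# in quadrature with an assumed instrument uncertainty. The readings
# and the instrument term are invented; this is not the authors' model.
import numpy as np

readings = np.array([118.0, 124.5, 121.2, 119.8, 123.1, 120.6])  # kV
u_instrument = 2.5   # assumed standard uncertainty of the measurement complex, kV

mean = readings.mean()
u_repeat = readings.std(ddof=1) / np.sqrt(len(readings))  # type A uncertainty
u_combined = np.hypot(u_repeat, u_instrument)             # combine in quadrature
half_width = 1.96 * u_combined                            # ~95% coverage

print(f"U = {mean:.1f} ± {half_width:.1f} kV (95%)")
```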

  3. Summary of EPA's risk assessment results from the analysis of alternative methods of low-level waste disposal

    International Nuclear Information System (INIS)

    Bandrowski, M.S.; Hung, C.Y.; Meyer, G.L.; Rogers, V.C.

    1987-01-01

    Evaluation of the potential health risk and individual exposure from a broad number of disposal alternatives is an important part of EPA's program to develop generally applicable environmental standards for the land disposal of low-level radioactive wastes (LLW). The Agency has completed an analysis of the potential population health risks and maximum individual exposures from ten disposal methods under three different hydrogeological and climatic settings. This paper briefly describes the general input and analysis procedures used in the risk assessment for LLW disposal and presents their preliminary results. Some important lessons learned from simulating LLW disposal under a large variety of methods and conditions are identified

  4. An optimized method for measuring hypocretin-1 peptide in the mouse brain reveals differential circadian regulation of hypocretin-1 levels rostral and caudal to the hypothalamus.

    Science.gov (United States)

    Justinussen, J L; Holm, A; Kornum, B R

    2015-12-03

    The hypocretin/orexin system regulates, among other things, sleep and energy homeostasis. The system is likely regulated by both homeostatic and circadian mechanisms. Little is known about local differences in the regulation of hypocretin activity. The aim of this study was to establish an optimized peptide quantification method for hypocretin-1 extracted from different mouse brain areas and use this method for investigating circadian fluctuations of hypocretin-1 levels in these areas. The results show that hypocretin-1 peptide can be extracted from small pieces of intact tissue, with sufficient yield for measurements in a standard radioimmunoassay. Utilizing the optimized method, it was found that prepro-hypocretin mRNA and peptide show circadian fluctuations in the mouse brain. This study further demonstrates that the hypocretin-1 peptide level in the frontal brain peaks during dark as does prepro-hypocretin mRNA in the hypothalamus. However, in midbrain and brainstem tissue caudal to the hypothalamus, there was less circadian fluctuation and a tendency for higher levels during the light phase. These data suggest that regulation of the hypocretin system differs between brain areas. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.

  5. Four novel prosthodontic methods for managing upper airway resistance syndrome: An investigative analysis revealing the efficacy of the new nasopharyngeal aperture guard appliance

    Directory of Open Access Journals (Sweden)

    Venkat R

    2010-01-01

    Full Text Available Statement of Problem: Obstructive sleep apnea is the most frequent cause of insomnia in the population. Snoring is regarded as a potential precursor of obstructive sleep apnea. Although the etiology of snoring and measures for its prevention are yet to be clearly established, various means of surgical correction have been described and put into practice with a substantial degree of success. Even so, a noninvasive method of managing obstructive sleep apnea is clearly preferable. Purpose: This manuscript sets out how snoring can be controlled prosthodontically by different, scientifically defensible treatment modalities. The most effective modality was identified through investigative analysis of the treatment outcomes with each. Novel Methods: Four new methods of managing obstructive sleep apnea (uvula lift appliance, uvula and velopharynx lift appliance, nasopharyngeal aperture guard, and soft palate lift appliance) are demonstrated in this article. Clinical Reports: The four new modalities and one conventional modality, the mandibular advancement appliance, a total of five appliance therapies, are described with case reports for each. Investigation: Five individuals undergoing appliance therapy were chosen for each modality. The treatment outcome with each modality was examined by analysis of clinical predictors and by standard investigation with nasal and oral endoscopic analyses. Result: Among the five appliance therapies, the nasopharyngeal aperture guard provided the best treatment outcome in terms of clinical predictors and endoscopic analyses. Conclusion: The nasopharyngeal aperture guard, the novel method presented in this article, is the better modality for managing obstructive sleep apnea among the five different appliance

  6. The effect of different methods and image analyzers on the results of the in vivo comet assay.

    Science.gov (United States)

    Kyoya, Takahiro; Iwamoto, Rika; Shimanura, Yuko; Terada, Megumi; Masuda, Shuichi

    2018-01-01

    The in vivo comet assay is a widely used genotoxicity test that can detect DNA damage in a range of organs. It is included in the Organisation for Economic Co-operation and Development Guidelines for the Testing of Chemicals. However, various protocols are still in use for this assay, and several different image analyzers are routinely used to evaluate the results. Here, we verified a protocol that contributes substantially to the equivalence of results, and we assessed the effect on the results when slides made from the same sample were analyzed using two different image analyzers (Comet Assay IV vs Comet Analyzer). Standardizing the agarose concentrations and the DNA unwinding and electrophoresis times had a large impact on the equivalence of the results between the different methods used for the in vivo comet assay. In addition, there was some variation in the sensitivity of the two image analyzers tested; however, this variation was considered minor and became negligible when the test conditions were standardized between the two methods. By standardizing the concentrations of low-melting agarose and the DNA unwinding and electrophoresis times between the two methods used in the current study, the sensitivity for detecting the genotoxicity of a positive control substance in the in vivo comet assay became generally comparable, independently of the image analyzer used. However, other conditions beyond the three described here may still affect the reproducibility of the in vivo comet assay.

  7. Molecular Weights of Bovine and Porcine Heparin Samples: Comparison of Chromatographic Methods and Results of a Collaborative Survey

    Directory of Open Access Journals (Sweden)

    Sabrina Bertini

    2017-07-01

    Full Text Available In a collaborative study involving six laboratories in the USA, Europe, and India the molecular weight distributions of a panel of heparin sodium samples were determined, in order to compare heparin sodium of bovine intestinal origin with that of bovine lung and porcine intestinal origin. Porcine samples met the current criteria as laid out in the USP Heparin Sodium monograph. Bovine lung heparin samples had consistently lower average molecular weights. Bovine intestinal heparin was variable in molecular weight; some samples fell below the USP limits, some fell within these limits and others fell above the upper limits. These data will inform the establishment of pharmacopeial acceptance criteria for heparin sodium derived from bovine intestinal mucosa. The method for MW determination as described in the USP monograph uses a single, broad standard calibrant to characterize the chromatographic profile of heparin sodium on high-resolution silica-based GPC columns. These columns may be short-lived in some laboratories. Using the panel of samples described above, methods based on the use of robust polymer-based columns have been developed. In addition to the use of the USP’s broad standard calibrant for heparin sodium with these columns, a set of conditions have been devised that allow light-scattering detected molecular weight characterization of heparin sodium, giving results that agree well with the monograph method. These findings may facilitate the validation of variant chromatographic methods with some practical advantages over the USP monograph method.

  8. Molecular Weights of Bovine and Porcine Heparin Samples: Comparison of Chromatographic Methods and Results of a Collaborative Survey.

    Science.gov (United States)

    Bertini, Sabrina; Risi, Giulia; Guerrini, Marco; Carrick, Kevin; Szajek, Anita Y; Mulloy, Barbara

    2017-07-19

    In a collaborative study involving six laboratories in the USA, Europe, and India the molecular weight distributions of a panel of heparin sodium samples were determined, in order to compare heparin sodium of bovine intestinal origin with that of bovine lung and porcine intestinal origin. Porcine samples met the current criteria as laid out in the USP Heparin Sodium monograph. Bovine lung heparin samples had consistently lower average molecular weights. Bovine intestinal heparin was variable in molecular weight; some samples fell below the USP limits, some fell within these limits and others fell above the upper limits. These data will inform the establishment of pharmacopeial acceptance criteria for heparin sodium derived from bovine intestinal mucosa. The method for MW determination as described in the USP monograph uses a single, broad standard calibrant to characterize the chromatographic profile of heparin sodium on high-resolution silica-based GPC columns. These columns may be short-lived in some laboratories. Using the panel of samples described above, methods based on the use of robust polymer-based columns have been developed. In addition to the use of the USP's broad standard calibrant for heparin sodium with these columns, a set of conditions have been devised that allow light-scattering detected molecular weight characterization of heparin sodium, giving results that agree well with the monograph method. These findings may facilitate the validation of variant chromatographic methods with some practical advantages over the USP monograph method.
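
    For context, the data reduction behind such GPC/SEC molecular-weight results can be sketched generically: a calibration maps elution time to log M, and the detector trace then yields the number- and weight-average molecular weights. The calibration constants and trace below are invented and are not the USP monograph values.

```python
# Generic GPC/SEC data-reduction sketch relevant to the MW comparisons
# above: an assumed linear calibration converts elution time to
# log10(molecular weight), and the detector trace h(t) gives
# Mn = sum(h) / sum(h/M) and Mw = sum(h*M) / sum(h).
import numpy as np

time = np.linspace(10, 20, 200)              # elution time [min]
h = np.exp(-0.5 * ((time - 15) / 1.2) ** 2)  # detector signal (toy peak)
log_m = 7.0 - 0.25 * time                    # assumed linear calibration
M = 10.0 ** log_m

Mn = h.sum() / (h / M).sum()                 # number-average MW
Mw = (h * M).sum() / h.sum()                 # weight-average MW
print(f"Mn = {Mn:,.0f}, Mw = {Mw:,.0f}, PDI = {Mw/Mn:.2f}")
```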

  9. EIT Imaging of admittivities with a D-bar method and spatial prior: experimental results for absolute and difference imaging.

    Science.gov (United States)

    Hamilton, S J

    2017-05-22

    Electrical impedance tomography (EIT) is an emerging imaging modality that uses harmless electrical measurements taken on electrodes at a body's surface to recover information about the internal electrical conductivity and/or permittivity. The image reconstruction task of EIT is a highly nonlinear inverse problem that is sensitive to noise and modeling errors, which makes it challenging. D-bar methods solve the nonlinear problem directly, bypassing the need for detailed and time-intensive forward models, to provide absolute (static) as well as time-difference EIT images. Coupling the D-bar methodology with the inclusion of high-confidence a priori data results in a noise-robust regularized image reconstruction method. In this work, the a priori D-bar method for complex admittivities is demonstrated effective on experimental tank data for absolute imaging for the first time. Additionally, the method is adjusted for, and tested on, time-difference imaging scenarios. The ability of the method to be used for conductivity, permittivity, absolute as well as time-difference imaging provides the user with great flexibility without a high computational cost.

  10. Assessing Cost-Effectiveness in Obesity (ACE-Obesity: an overview of the ACE approach, economic methods and cost results

    Directory of Open Access Journals (Sweden)

    Swinburn Boyd

    2009-11-01

    Full Text Available Abstract Background The aim of the ACE-Obesity study was to determine the economic credentials of interventions which aim to prevent unhealthy weight gain in children and adolescents. We have reported elsewhere on the modelled effectiveness of 13 obesity prevention interventions in children. In this paper, we report on the cost results and associated methods together with the innovative approach to priority setting that underpins the ACE-Obesity study. Methods The Assessing Cost Effectiveness (ACE) approach combines technical rigour with 'due process' to facilitate evidence-based policy analysis. Technical rigour was achieved through use of standardised evaluation methods, a research team that assembles best available evidence and extensive uncertainty analysis. Cost estimates were based on pathway analysis, with resource usage estimated for the interventions and their 'current practice' comparator, as well as associated cost offsets. Due process was achieved through involvement of stakeholders, consensus decisions informed by briefing papers and 2nd stage filter analysis that captures broader factors that influence policy judgements in addition to cost-effectiveness results. The 2nd stage filters agreed by stakeholders were 'equity', 'strength of the evidence', 'feasibility of implementation', 'acceptability to stakeholders', 'sustainability' and 'potential for side-effects'. Results The intervention costs varied considerably, both in absolute terms (from cost saving [6 interventions] to in excess of AUD50m per annum) and when expressed as a 'cost per child' estimate (from …). Conclusion The use of consistent methods enables valid comparison of potential intervention costs and cost-offsets for each of the interventions. ACE-Obesity informs policy-makers about cost-effectiveness, health impact, affordability and 2nd stage filters for important options for preventing unhealthy weight gain in children. In related articles cost-effectiveness results and

  11. Gradient Correlation Method for the Stabilization of Inversion Results of Aerosol Microphysical Properties Retrieved from Profiles of Optical Data

    Directory of Open Access Journals (Sweden)

    Kolgotin Alexei

    2016-01-01

    Full Text Available Correlation relationships between aerosol microphysical parameters and optical data are investigated. The results show that surface-area concentrations and extinction coefficients are linearly correlated with a correlation coefficient above 0.99 for arbitrary particle size distribution. The correlation relationships that we obtained can be used as constraints in our inversion of optical lidar data. Simulation studies demonstrate a significant stabilization of aerosol microphysical data products if we apply the gradient correlation method in our traditional regularization technique.

  12. Channels Coordination Game Model Based on Result Fairness Preference and Reciprocal Fairness Preference: A Behavior Game Forecasting and Analysis Method

    OpenAIRE

    Ding, Chuan; Wang, Kaihong; Huang, Xiaoying

    2014-01-01

    In a distribution channel, channel members are not always self-interested but are altruistic under some conditions. Based on this assumption, this paper adopts a behavior game method to analyze and forecast channel members' decision behavior under result fairness preference and reciprocal fairness preference, by embedding fairness preference theory in research on channel coordination. The behavior game forecasts that a channel can achieve coordination if channel members consider behavior elemen...

  13. Rapid method of calculating the fluence and spectrum of neutrons from a critical assembly and the resulting dose

    International Nuclear Information System (INIS)

    Bessis, J.

    1977-01-01

    The proposed calculation method is unsophisticated but rapid. The first part (computer code CRITIC), based on the Fermi age equation, evaluates the number of neutrons per fission emitted from a moderated critical assembly and their energy spectrum. The second part (computer code NARCISSE), which uses the current albedo for concrete, evaluates the product of neutron reflection on the walls and calculates the resulting fluence at any point in the room and its energy distribution in 21 groups. The results obtained are shown to compare satisfactorily with those obtained using a Monte Carlo program.

  14. method

    Directory of Open Access Journals (Sweden)

    L. M. Kimball

    2002-01-01

    Full Text Available This paper presents an interior point algorithm to solve the multiperiod hydrothermal economic dispatch (HTED). The multiperiod HTED is a large scale nonlinear programming problem. Various optimization methods have been applied to the multiperiod HTED, but most neglect important network characteristics or require decomposition into thermal and hydro subproblems. The algorithm described here exploits the special bordered block diagonal structure and sparsity of the Newton system for the first order necessary conditions to result in a fast efficient algorithm that can account for all network aspects. Applying this new algorithm challenges a conventional method for the use of available hydro resources known as the peak shaving heuristic.

  15. Four-spacecraft determination of magnetopause orientation, motion and thickness: comparison with results from single-spacecraft methods

    Directory of Open Access Journals (Sweden)

    S. E. Haaland

    2004-04-01

    Full Text Available In this paper, we use Cluster data from one magnetopause event on 5 July 2001 to compare predictions from various methods for determination of the velocity, orientation, and thickness of the magnetopause current layer. We employ established as well as new multi-spacecraft techniques, in which time differences between the crossings by the four spacecraft, along with the duration of each crossing, are used to calculate magnetopause speed, normal vector, and width. The timing is based on data from either the Cluster Magnetic Field Experiment (FGM) or the Electric Field Experiment (EFW) instruments. The multi-spacecraft results are compared with those derived from various single-spacecraft techniques, including minimum-variance analysis of the magnetic field and deHoffmann-Teller, as well as Minimum-Faraday-Residue, analysis of plasma velocities and magnetic fields measured during the crossings. In order to improve the overall consistency between multi- and single-spacecraft results, we have also explored the use of hybrid techniques, in which timing information from the four spacecraft is combined with certain limited results from single-spacecraft methods, the remaining results being left for consistency checks. The results show good agreement between magnetopause orientations derived from appropriately chosen single-spacecraft techniques and those obtained from multi-spacecraft timing. The agreement between magnetopause speeds derived from single- and multi-spacecraft methods is quantitatively somewhat less good, but it is evident that the speed can change substantially from one crossing to the next within an event. The magnetopause thickness also varied substantially from one crossing to the next within an event, ranging from 5 to 10 ion gyroradii. The density profile was sharper than the magnetic profile: most of the density change occurred in the earthward half of the magnetopause.

    Key words. Magnetospheric physics (magnetopause, cusp and
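
    The multi-spacecraft timing technique referred to above has a compact linear-algebra core: each pair of crossings gives n·Δr = V·Δt, so writing m = n/V turns this into a linear system for the boundary normal and speed. A numpy sketch with invented positions and crossing times (a real analysis would take these from FGM or EFW data):

```python
# Sketch of the four-spacecraft timing method: solve A m = dt in the
# least-squares sense, where rows of A are position differences to a
# reference spacecraft, dt are crossing-time differences, and m = n/V.
# Positions and times below are invented placeholders.
import numpy as np

r = np.array([[0.0, 0.0, 0.0],        # spacecraft positions [km]
              [600.0, 100.0, 50.0],
              [120.0, 650.0, 80.0],
              [90.0, 140.0, 700.0]])
t = np.array([0.0, 7.9, 1.8, 1.1])    # crossing times [s]

A = r[1:] - r[0]                      # position differences to reference craft
m, *_ = np.linalg.lstsq(A, t[1:] - t[0], rcond=None)  # m = n / V
V = 1.0 / np.linalg.norm(m)           # boundary speed along its normal [km/s]
n = m * V                             # unit normal vector
print(f"V = {V:.1f} km/s, n = {np.round(n, 3)}")
```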

  16. Separate base usages of genes located on the leading and lagging strands in Chlamydia muridarum revealed by the Z curve method

    Directory of Open Access Journals (Sweden)

    Yu Xiu-Juan

    2007-10-01

    Full Text Available Abstract Background The nucleotide compositional asymmetry between the leading and lagging strands in bacterial genomes has been the subject of intensive study in the past few years. It is interesting to mention that almost all bacterial genomes exhibit the same kind of base asymmetry. This work aims to investigate the strand biases in the Chlamydia muridarum genome and show the potential of the Z curve method for quantitatively differentiating genes on the leading and lagging strands. Results The occurrence frequencies of bases of protein-coding genes in the C. muridarum genome were analyzed by the Z curve method. It was found that genes located on the two strands of replication have distinct base usages in the C. muridarum genome. According to their positions in the 9-D space spanned by the variables u1 – u9 of the Z curve method, the K-means clustering algorithm can assign about 94% of genes to the correct strands, which is a few percent higher than the proportion correctly classified by K-means based on the RSCU. The base usage and codon usage analyses show that genes on the leading strand have more G than C and more T than A, particularly at the third codon position. For genes on the lagging strand the bias is reversed. The y component of the Z curves for the complete chromosome sequences shows that the excesses of G over C and T over A are more remarkable in the C. muridarum genome than in other bacterial genomes without separate base and/or codon usages. Furthermore, for the genomes of Borrelia burgdorferi, Treponema pallidum, Chlamydia muridarum and Chlamydia trachomatis, in which distinct base and/or codon usages have been observed, closer phylogenetic distance is found compared with other bacterial genomes. Conclusion The nature of the strand biases of base composition in C. muridarum is similar to that in most other bacterial genomes. However, the base composition asymmetry between the leading and lagging strands in C. muridarum is more significant than that in
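
    The 9-D Z curve representation used above can be reproduced in a few lines. In the sketch below, each codon position contributes three components (purine/pyrimidine, amino/keto, and weak/strong hydrogen bonding), and genes are then clustered with K-means; the toy CDS strings are placeholders.

```python
# Minimal sketch of the 9-D Z curve representation of a protein-coding
# gene: for each codon position k (1,2,3), the base frequencies
# a_k, c_k, g_k, t_k map to x_k=(a+g)-(c+t), y_k=(a+c)-(g+t),
# z_k=(a+t)-(g+c), giving the variables u1..u9. Genes can then be
# clustered (e.g. with K-means) by replication strand.
import numpy as np
from sklearn.cluster import KMeans

def z_curve_9d(cds: str) -> np.ndarray:
    cds = cds.upper()
    comps = []
    for k in range(3):                   # codon positions 1..3
        bases = cds[k::3]
        n = len(bases) or 1
        a, c, g, t = (bases.count(b) / n for b in "ACGT")
        comps += [(a + g) - (c + t),     # purine vs pyrimidine
                  (a + c) - (g + t),     # amino vs keto
                  (a + t) - (g + c)]     # weak vs strong H-bond
    return np.array(comps)

genes = ["ATGGCTGGTAAATGA", "ATGTTTGGCTTATAA"]   # toy CDS strings
X = np.vstack([z_curve_9d(g) for g in genes])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)
```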

  17. Partial least squares methods for spectrally estimating lunar soil FeO abundance: A stratified approach to revealing nonlinear effect and qualitative interpretation

    Science.gov (United States)

    Li, Lin

    2008-12-01

    Partial least squares (PLS) regressions were applied to lunar highland and mare soil data characterized by the Lunar Soil Characterization Consortium (LSCC) for spectral estimation of the abundance of the lunar soil chemical constituents FeO and Al2O3. The LSCC data set was split into a number of subsets including the total highland, Apollo 16, Apollo 14, and total mare soils, and PLS was then applied to each to investigate the effect of nonlinearity on the performance of the PLS method. The weight-loading vectors resulting from PLS were analyzed to identify mineral species responsible for spectral estimation of the soil chemicals. The results from PLS modeling indicate that PLS performance depends on the correlation of the constituents of interest to their major mineral carriers, and that the Apollo 16 soils are responsible for the large errors of FeO and Al2O3 estimates when these soils were modeled along with other types of soils. These large errors are attributed primarily to the degraded correlation of FeO to pyroxene for the relatively mature Apollo 16 soils as a result of space weathering, and secondarily to the interference of olivine. PLS consistently yields very accurate fits to the two soil chemicals when applied to mare soils. Although Al2O3 has no spectrally diagnostic characteristics, this chemical can be predicted for all subset data by PLS modeling at high accuracies because of its correlation to FeO. This correlation is reflected in the symmetry of the PLS weight-loading vectors for FeO and Al2O3, which prove to be very useful for qualitative interpretation of the PLS results. However, this qualitative interpretation of PLS modeling cannot be achieved using principal component regression loading vectors.
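
    For readers unfamiliar with the machinery, a minimal PLS regression of spectra onto FeO abundance might look as follows; the random matrices merely stand in for LSCC spectra, and inspecting the first weight-loading vector mirrors the interpretation step described above.

```python
# Hedged sketch of the PLS approach described above: spectra (predictor
# matrix X) are regressed onto FeO abundance, and the weight-loading
# vectors are inspected to see which bands drive the estimate. The data
# are random placeholders, not LSCC soil spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.random((40, 200))      # 40 soils x 200 spectral bands
feo = rng.random(40) * 20      # FeO wt%, placeholder values

pls = PLSRegression(n_components=5)
pls.fit(X, feo)
r2 = pls.score(X, feo)         # goodness of fit
weights = pls.x_weights_[:, 0] # first weight-loading vector
print(f"R2 = {r2:.2f}; most influential band index: {np.argmax(np.abs(weights))}")
```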

  18. Validation of a near infrared microscopy method for the detection of animal products in feedingstuffs: results of a collaborative study.

    Science.gov (United States)

    Boix, A; Fernández Pierna, J A; von Holst, C; Baeten, V

    2012-01-01

    The performance characteristics of a near infrared microscopy (NIRM) method, when applied to the detection of animal products in feedingstuffs, were determined via a collaborative study. The method delivers qualitative results in terms of the presence or absence of animal particles in feed and differentiates animal from vegetable feed ingredients on the basis of the evaluation of near infrared spectra obtained from individual particles present in the sample. The specificity ranged from 86% to 100%. The limit of detection obtained on analysis of the sediment fraction, prepared as for the European official method, was 0.1% processed animal proteins (PAPs) in feed, since all laboratories correctly identified the positive samples. This limit has to be raised to 2% for the analysis of samples that are not sedimented. The sensitivity required for official control is therefore achieved in the analysis of the sediment fraction of the samples, where the method can be applied for detecting the presence of animal meal. Criteria for classifying samples as being of animal origin when fewer than five spectra are found need to be established in order to harmonise the approach taken by laboratories when applying NIRM to the detection of animal meal in feed.

  19. Description and pilot results from a novel method for evaluating return of incidental findings from next-generation sequencing technologies.

    Science.gov (United States)

    Goddard, Katrina A B; Whitlock, Evelyn P; Berg, Jonathan S; Williams, Marc S; Webber, Elizabeth M; Webster, Jennifer A; Lin, Jennifer S; Schrader, Kasmintan A; Campos-Outcalt, Doug; Offit, Kenneth; Feigelson, Heather Spencer; Hollombe, Celine

    2013-09-01

    The aim of this study was to develop, operationalize, and pilot test a transparent, reproducible, and evidence-informed method to determine when to report incidental findings from next-generation sequencing technologies. Using evidence-based principles, we proposed a three-stage process. Stage I "rules out" incidental findings below a minimal threshold of evidence and is evaluated using inter-rater agreement and comparison with an expert-based approach. Stage II documents criteria for clinical actionability using a standardized approach to allow experts to consistently consider and recommend whether results should be routinely reported (stage III). We used expert opinion to determine the face validity of stages II and III using three case studies. We evaluated the time and effort for stages I and II. For stage I, we assessed 99 conditions and found high inter-rater agreement (89%) and strong agreement with a separate expert-based method. Case studies for familial adenomatous polyposis, hereditary hemochromatosis, and α1-antitrypsin deficiency were all recommended for routine reporting as incidental findings. The method requires definition of clinically actionable incidental findings; we provide documentation and pilot testing of a feasible method that is scalable to the whole genome.

  20. Comparison of R6 and A16 J estimation methods under combined mechanical and thermal loads with FE results

    International Nuclear Information System (INIS)

    Nam, Hyun-Suk; Oh, Chang-Young; Kim, Yun-Jae; Jerng, Dong Wook; Ainsworth, Robert A.; Budden, Peter J.; Marie, Stéphane

    2015-01-01

    This paper compares elastic–plastic values of J calculated using the methods in the UK R6 and the French A16 fitness-for-service procedures with FE results for a vessel with a circumferential surface crack under axial tension and a radial thermal gradient. In the FE analyses, the relative magnitudes and loading sequence of mechanical and thermal loads are systematically varied, together with the material strain hardening exponent. Fully circumferential and semi-elliptical surface cracks with two relative crack depths are considered. It is found that the R6 estimates are overall accurate but can be non-conservative at large L_r. The A16 estimates are more conservative than the R6 estimates at small L_r and remain conservative even at large L_r. Possible sources of conservatism and non-conservatism in R6 and A16 are discussed. - Highlights: • The accuracy of existing J estimation methods for combined mechanical and thermal loading is compared with FE results. • The methods in the UK R6 and the French A16 procedures are considered. • The R6 estimates are overall accurate but can be non-conservative at large L_r. • The A16 estimates are more conservative than the R6 estimates at small L_r and remain conservative even at large L_r. • Possible sources of conservatism and non-conservatism in R6 and A16 are discussed.

  1. Paleomagnetic intensity of Aso pyroclastic flows: Additional results with LTD-DHT Shaw method, Thellier method with pTRM-tail check

    Science.gov (United States)

    Maruuchi, T.; Shibuya, H.

    2009-12-01

    …, and 42 specimens were submitted to Thellier experiments. Twelve specimens from 4 sites passed the same criteria as Aso-2, and yield a mean paleointensity of 43.1±1.4 uT. This again agrees with the value (45.6±1.7 uT) of Takai et al. (2002). The LTD-DHT Shaw method experiment was also applied to 12 specimens from 3 sites, and 4 passed the criteria, giving 38.2±1.7 uT. Although this is a little smaller than the Thellier results, it is considerably larger than the Sint-800 at the time of Aso-4. The Aso-1 result in this study is more consistent with the Sint-800 at that time than that of Takai et al. (2002). But for Aso-2 and Aso-4, the new reliable paleointensity results suggest that the discrepancy from the Sint-800 is not attributable to experimental problems.

  2. A New Method for Re-Analyzing Evaluation Bias: Piecewise Growth Curve Modeling Reveals an Asymmetry in the Evaluation of Pro and Con Arguments.

    Directory of Open Access Journals (Sweden)

    Jens Jirschitzka

    Full Text Available In four studies we tested a new methodological approach to the investigation of evaluation bias. The usage of piecewise growth curve modeling allowed for investigation into the impact of people's attitudes on their persuasiveness ratings of pro- and con-arguments, measured over the whole range of the arguments' polarity from an extreme con to an extreme pro position. Moreover, this method provided the opportunity to test specific hypotheses about the course of the evaluation bias within certain polarity ranges. We conducted two field studies with users of an existing online information portal as participants (Studies 1a and 2a), and two Internet laboratory studies with mostly student participants (Studies 1b and 2b). In each of these studies we presented pro- and con-arguments, either for the topic of MOOCs (massive open online courses; Studies 1a and 1b) or for the topic of M-learning (mobile learning; Studies 2a and 2b). Our results indicate that using piecewise growth curve models is more appropriate than simpler approaches. An important finding of our studies was an asymmetry of the evaluation bias toward pro- or con-arguments: the evaluation bias appeared over the whole polarity range of pro-arguments and increased with more and more extreme polarity. This clear-cut result pattern appeared only on the pro-argument side. For the con-arguments, in contrast, the evaluation bias did not feature such a systematic picture.
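
    The piecewise idea is easy to prototype. The toy model below fits separate linear slopes for con-arguments (polarity < 0) and pro-arguments (polarity > 0) using a hinge basis; the published analyses used full multilevel growth curve models, so this is only a sketch of the piecewise component, on simulated data.

```python
# Toy piecewise growth curve over argument polarity: one linear segment
# below the knot at 0 (con-arguments) and an extra slope above it
# (pro-arguments), fitted jointly via ordinary least squares. Data are
# simulated; the real studies used multilevel models.
import numpy as np

rng = np.random.default_rng(1)
polarity = rng.uniform(-3, 3, 300)   # -3 = extreme con, +3 = extreme pro
rating = (4 + 0.1 * polarity + 0.6 * np.maximum(polarity, 0)
          + rng.normal(0, 0.5, 300))

# Design matrix: intercept, slope below the knot, extra slope above it
X = np.column_stack([np.ones_like(polarity), polarity,
                     np.maximum(polarity, 0)])
beta, *_ = np.linalg.lstsq(X, rating, rcond=None)
print(f"con-side slope = {beta[1]:.2f}, pro-side slope = {beta[1] + beta[2]:.2f}")
```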

  3. Studies on mycobacterium tuberculosis sensitivity test by using the method of rapid radiometry with appendixes of clinical results

    International Nuclear Information System (INIS)

    Yang Yongqing; Jiang Yimin; Lu Wendong; Zhu Rongen

    1987-01-01

    Three standard strains of mycobacterium tuberculosis (H37Rv, fully sensitive; SM-R 1000 μg/ml; RFP-R 100 μg/ml) were tested with 10 concentrations of 5 antitubercular agents: INH, SM, PAS, RFP and EB. 114 isolates of mycobacterium tuberculosis taken from patients were tested with INH, PAS, SM and RFP. The results agreed with those of the standard Lowenstein-Jensen method in 81.7% of cases. 82% of the isolate tests were completed within 5 days. The method may be used in routine clinical work. The liquid media prepared by the authors do not require human serum albumin and are less expensive and readily available.

  4. The method of producing climate change datasets impacts the resulting policy guidance and chance of mal-adaptation

    Directory of Open Access Journals (Sweden)

    Marie Ekström

    2016-12-01

    Full Text Available Impact, adaptation and vulnerability (IAV) research underpins strategies for adaptation to climate change and helps to conceptualise what life may look like in decades to come. Research draws on information from global climate models (GCMs), though typically post-processed into a secondary product with finer resolution through methods of downscaling. Through worked examples set in an Australian context we assess the influence of GCM sub-setting, geographic area sub-setting and downscaling method on the regional change signal. The examples demonstrate that these choices affect the final results differently depending on factors such as application needs, the range of uncertainty of the projected variable, the amplitude of natural variability, and the size of the study region. For heat extremes, the choice of emissions scenario is of prime importance, but for a given scenario the method of preparing data can affect the magnitude of the projection by a factor of two or more, strongly affecting the indicated adaptation decision. For catchment-level runoff projections, the choice of emission scenario is less dominant. Rather, the method of selecting and producing application-ready datasets is crucial, as demonstrated by results with opposing signs of change, raising the real possibility of mal-adaptive decisions. This work illustrates the potential pitfalls of GCM sub-sampling or the use of a single downscaled product when conducting IAV research. Using the broad range of change from all available model sources, whilst making the application more complex, avoids the larger problem of over-confidence in climate projections and lessens the chance of mal-adaptation.

  5. Public attitudes towards alcohol control policies in Scotland and England: Results from a mixed-methods study.

    Science.gov (United States)

    Li, Jessica; Lovatt, Melanie; Eadie, Douglas; Dobbie, Fiona; Meier, Petra; Holmes, John; Hastings, Gerard; MacKintosh, Anne Marie

    2017-03-01

    The harmful effects of heavy drinking on health have been widely reported, yet public opinion on governmental responsibility for alcohol control remains divided. This study examines UK public attitudes towards alcohol policies, identifies the underlying dimensions that inform these attitudes, and examines their relationships with perceived effectiveness. A cross-sectional mixed-methods study involving a telephone survey of 3477 adult drinkers aged 16-65 and sixteen focus groups with 89 adult drinkers in Scotland and England was conducted between September 2012 and February 2013. Principal components analysis (PCA) was used to reduce twelve policy statements to underlying dimensions. These dimensions were then used in linear regression models examining alcohol policy support by demographics, drinking behaviour and perceptions of UK drinking and government responsibility. The findings were supplemented with a thematic analysis of focus group transcripts. A majority of survey respondents supported all alcohol policies, although the level of support varied by type of policy. Greater enforcement of laws on under-age sales and more police patrolling the streets were strongly supported, while support for pricing policies and restricting access to alcohol was more divided. PCA identified four main dimensions underlying policy support: alcohol availability, provision of health information and treatment services, alcohol pricing, and greater law enforcement. Being female, older, a moderate drinker, and holding a belief that government should do more to reduce alcohol harms were associated with greater support across all policy dimensions. Focus group data revealed that the survey findings may have overstated the level of support for all policies, owing to differences in perceived policy effectiveness. Perceived effectiveness can help explain underlying patterns of policy support and should be considered in conjunction with standard measures of support in future research on alcohol control policies.
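
    A hedged sketch of the analysis pipeline described above: PCA over policy-support items, followed by regression of the resulting dimension scores on respondent characteristics. The data, the number of respondents, and the column meanings are invented stand-ins for the survey, not the study's dataset.

```python
# Sketch of the PCA-then-regression pipeline; all data are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 300
# Twelve policy-support items rated 1-5 (synthetic stand-in for survey data).
items = rng.integers(1, 6, size=(n, 12)).astype(float)

pca = PCA(n_components=4)            # four dimensions, as found in the study
scores = pca.fit_transform(items)    # each respondent's score per dimension
print("variance explained:", pca.explained_variance_ratio_.round(2))

# Regress the first dimension (e.g. availability) on demographics.
X = np.column_stack([
    rng.integers(0, 2, n),           # female = 1 (assumed coding)
    rng.integers(16, 66, n),         # age
    rng.integers(0, 2, n),           # moderate drinker = 1 (assumed coding)
])
reg = LinearRegression().fit(X, scores[:, 0])
print("coefficients (female, age, moderate):", reg.coef_.round(3))
```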

  6. Salicylic and jasmonic acid pathways are necessary for defence against Dickeya solani as revealed by a novel method for Blackleg disease screening of in vitro grown potato.

    Science.gov (United States)

    Burra, D D; Mühlenbock, P; Andreasson, E

    2015-09-01

    Potato is a major crop ensuring food security in Europe, and blackleg disease is increasingly causing losses in yield and during storage. Recently, one blackleg pathogen, Dickeya solani, which causes aggressive disease development, has been shown to be spreading in Northern Europe. To date, identification of tolerant commercial potato varieties has been unsuccessful; this is confounded by the complicated etiology of the disease, a strong environmental influence on disease development, and the current lack of efficient testing systems. Here, we describe a system for quantification of blackleg symptoms on shoots of sterile in vitro potato plants, which saves time and space compared to greenhouse and existing field assays. We found no evidence for differences in infection between the described in vitro-based screening method and existing greenhouse assays. This system facilitates efficient screening of the blackleg disease response of potato plants, independent of other microorganisms and variable environmental conditions. We therefore used the in vitro screening method to increase understanding of the plant mechanisms involved in blackleg disease development by analysing the disease response of hormone-related (salicylic and jasmonic acid) transgenic potato plants. We show that both the jasmonic acid (JA) and salicylic acid (SA) pathways regulate tolerance to blackleg disease in potato, a result unlike previous findings on the Arabidopsis defence response to necrotrophic bacteria. We confirm this by showing induction of an SA marker, pathogenesis-related protein 1 (StPR1), and a JA marker, lipoxygenase (StLOX), in Dickeya solani-infected in vitro potato plants. We also observed that tubers of the transgenic potato plants were more susceptible to soft rot compared to wild type, suggesting a role for the SA and JA pathways in general tolerance to Dickeya. © 2015 German Botanical Society and The Royal Botanical Society of the Netherlands.

  7. Improvement of production layout based on optimum production balancing scale results by using Moodie Young and Comsoal method

    Science.gov (United States)

    Ikhsan, Siregar; Ulina Anastasia Sipangkar, Tri; Prasetio, Aji

    2017-09-01

    This research was conducted at a make-to-order company engaged in building vehicle bodies. One of its products is the dump truck, a goods vehicle equipped with hydraulics to facilitate loading and unloading. The company has 7 work stations with different cycle times and often experiences delays in order delivery. The production process on the shop floor is not optimal: work in process (WIP) builds up at some work centres, notably the welding and painting stations. This stacking on the production line may make the company liable for damages caused by delays in product completion. The WIP arises because the line is unbalanced, as can be seen from the widely varying cycle times of the stations; these differences stem from the uneven allocation of work elements to the work centres. On this basis, the dump truck assembly line was analysed using the Moodie Young and COMSOAL methods to balance the production line, as sketched below. Layout improvement using the systematic layout planning (SLP) method then reduced the number of work centres from 7 to 4, making material movement more effective and efficient, yielding an efficient production line and resolving the existing problems. The line-balancing result was then used as a guide in constructing a new layout based on the most optimal method.
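
    The COMSOAL idea can be sketched in a few lines: repeatedly build random feasible task sequences under precedence and cycle-time constraints, and keep the assignment that uses the fewest stations. The task times, precedence graph and cycle time below are invented for illustration, not the company's data.

```python
# Minimal COMSOAL-style line balancing sketch; all input data are invented.
import random

task_time = {"A": 4, "B": 3, "C": 5, "D": 2, "E": 4, "F": 3}
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B"], "E": ["C"], "F": ["D", "E"]}
CYCLE_TIME = 8
assert max(task_time.values()) <= CYCLE_TIME   # guarantees termination below

def random_balance():
    """Build one random feasible assignment of tasks to stations."""
    done, stations, load = set(), [[]], 0
    remaining = set(task_time)
    while remaining:
        # Tasks whose predecessors are all assigned and that fit the station.
        fit = [t for t in remaining
               if set(preds[t]) <= done and load + task_time[t] <= CYCLE_TIME]
        if not fit:                    # nothing fits: open a new station
            stations.append([])
            load = 0
            continue
        t = random.choice(fit)
        stations[-1].append(t)
        load += task_time[t]
        done.add(t)
        remaining.remove(t)
    return stations

# Keep the best of many random trials (fewest stations).
best = min((random_balance() for _ in range(2000)), key=len)
print(len(best), "stations:", best)
```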

  8. Rapid radiation in spiny lobsters (Palinurus spp) as revealed by classic and ABC methods using mtDNA and microsatellite data.

    Science.gov (United States)

    Palero, Ferran; Lopes, Joao; Abelló, Pere; Macpherson, Enrique; Pascual, Marta; Beaumont, Mark A

    2009-11-09

    Molecular tools may help to uncover closely related and still diverging species from a wide variety of taxa and provide insight into the mechanisms, pace and geography of marine speciation. There is some controversy over the phylogeography and speciation modes of species-groups with an Eastern Atlantic-Western Indian Ocean distribution, with previous studies suggesting that older (Miocene) events and/or more recent (Pleistocene) oceanographic processes could have influenced the phylogeny of marine taxa. The spiny lobster genus Palinurus allows for testing among speciation hypotheses, since it has a particular distribution with two groups of three species each in the Northeastern Atlantic (P. elephas, P. mauritanicus and P. charlestoni) and the Southeastern Atlantic and Southwestern Indian Oceans (P. gilchristi, P. delagoae and P. barbarae). In the present study, we obtain a more complete understanding of the phylogenetic relationships among these species through a combined dataset with both nuclear and mitochondrial markers, testing alternative hypotheses on both the mutation rate and the tree topology under recently developed approximate Bayesian computation (ABC) methods. Our analyses support a North-to-South speciation pattern in Palinurus, with all the South African species forming a monophyletic clade nested within the Northern Hemisphere species. Coalescent-based ABC methods allowed us to reject the previously proposed hypothesis of a Middle Miocene speciation event related to the closure of the Tethyan Seaway. Instead, the divergence times obtained for Palinurus species using the combined mtDNA-microsatellite dataset and standard mutation rates for mtDNA agree with known glaciation-related processes occurring during the last 2 Myr. The Palinurus speciation pattern is a typical example of a series of rapid speciation events occurring within a group, with very short branches separating different species. Our results support the hypothesis that recent climate
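
    For orientation, the following is a minimal sketch of ABC rejection sampling, the family of methods applied above. The one-parameter toy simulator, the mutation rate and the summary statistic are illustrative assumptions; they do not reproduce the paper's coalescent analysis.

```python
# Hedged sketch of ABC rejection sampling; all numbers are illustrative.
import numpy as np

rng = np.random.default_rng(3)
MU = 1e-2               # toy per-generation mutation rate (assumption)
observed_stat = 4.0     # e.g. mean pairwise differences in the real data

def simulate(t_div):
    """Toy simulator: pairwise differences grow with divergence time."""
    return rng.poisson(2 * MU * t_div)

# ABC rejection: draw from the prior, simulate, and keep draws whose simulated
# summary statistic lands close to the observed one.
prior_draws = rng.uniform(0, 1000, size=100_000)    # divergence-time prior
sims = np.array([simulate(t) for t in prior_draws])
accepted = prior_draws[np.abs(sims - observed_stat) <= 1.0]

print(f"posterior mean divergence time: {accepted.mean():.0f} generations "
      f"({accepted.size} of {prior_draws.size} draws accepted)")
```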

  9. Developing a bone mineral density test result letter to send to patients: a mixed-methods study

    Directory of Open Access Journals (Sweden)

    Edmonds SW

    2014-06-01

    Full Text Available Stephanie W Edmonds,1,2 Samantha L Solimeo,3 Xin Lu,1 Douglas W Roblin,4,8 Kenneth G Saag,5 Peter Cram6,7 1Department of Internal Medicine, 2College of Nursing, University of Iowa, Iowa City, IA, USA; 3Center for Comprehensive Access and Delivery Research and Evaluation, Iowa City Veterans Affairs Health Care System, Iowa City, IA, USA; 4Kaiser Permanente of Atlanta, Atlanta, GA, USA; 5Department of Rheumatology, University of Alabama at Birmingham, Birmingham, AL, USA; 6Faculty of Medicine, University of Toronto, Toronto, ON, Canada; 7University Health Network and Mount Sinai Hospital, Toronto, ON, Canada; 8School of Public Health, Georgia State University, Atlanta, GA, USA Purpose: To use a mixed-methods approach to develop a letter that can be used to notify patients of their bone mineral density (BMD) results by mail and that may activate patients in their bone-related health care. Patients and methods: A multidisciplinary team developed three versions of a letter for reporting BMD results to patients. Trained interviewers presented these letters in a random order to a convenience sample of adults, aged 50 years and older, at two different health care systems. We conducted structured interviews to examine the respondents’ preferences and comprehension of the various letters. Results: A total of 142 participants completed the interview. A majority of the participants were female (64.1%) and white (76.1%). A plurality of the participants identified a specific version of the three letters as both their preferred version (45.2%; P<0.001) and as the easiest to understand (44.6%; P<0.01). A majority of participants preferred that the letters include specific next steps for improving their bone health. Conclusion: Using a mixed-methods approach, we were able to develop and optimize a printed letter for communicating a complex test result (BMD) to patients. Our results may offer guidance to clinicians, administrators, and researchers who are

  10. Thermal Response Testing Results of Different Types of Borehole Heat Exchangers: An Analysis and Comparison of Interpretation Methods

    Directory of Open Access Journals (Sweden)

    Angelo Zarrella

    2017-06-01

    Full Text Available The design phase of ground source heat pump systems is an extremely important one, as many of the decisions made at that time can affect the system’s energy performance as well as installation and operating costs. The current study examined the interpretation of thermal response test measurements used to evaluate the equivalent ground thermal conductivity and thus to design the system. All the measurements were taken at the same geological site, located in Molinella, Bologna (Italy), where a variety of borehole heat exchangers (BHEs) had been installed and investigated within the project Cheap-GSHPs (Cheap and efficient application of reliable Ground Source Heat exchangers and Pumps) of the European Union’s Horizon 2020 research and innovation program. The measurements were initially analyzed in accordance with the common interpretation based on the first-order approximation of the solution for the infinite line source model, and then by utilizing the complete solutions of both the infinite line and cylinder source models. An inverse numerical approach based on a detailed model that considers the actual geometry of the BHE and the axial heat transfer, as well as the effect of weather on the ground surface, was also used. Study findings revealed that the best result was generally obtained using the inverse numerical interpretation.
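
    The first-order infinite line source interpretation mentioned above reduces to fitting a straight line to the mean fluid temperature against ln(t): with slope k and heat rate q per unit borehole length, the effective ground thermal conductivity is λ = q/(4πk). Below is a hedged sketch on synthetic test data; the heat rate, durations and noise level are assumptions, not the project's measurements.

```python
# First-order infinite line source (ILS) slope fit on synthetic TRT data.
import numpy as np

rng = np.random.default_rng(4)
q = 50.0                                  # W/m injected during the test (assumed)
t = np.linspace(10, 72, 200) * 3600.0     # 10 h .. 72 h, in seconds
true_lam = 2.0                            # W/(m K), used only to fake the data
T_fluid = 12.0 + q / (4 * np.pi * true_lam) * np.log(t) \
          + rng.normal(0, 0.05, t.size)

# Least-squares line through T_fluid vs ln(t); in practice early times are
# discarded so that the ILS large-time approximation holds.
k, _ = np.polyfit(np.log(t), T_fluid, 1)
print(f"estimated conductivity: {q / (4 * np.pi * k):.2f} W/(m K)")
```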

  11. Proximate method for diagnosis of psychophysical state of personnel of the Kombinat industrial association to reveal the risk groups of boundary nervous-mental and psychosomatic disorders

    International Nuclear Information System (INIS)

    Kazakov, V.I.; Tarapata, M.I.; Kundiev, Yu.I.; Navakatikyan, A.A.; Buzunov, V.A.; Tabachnikov, S.I.

    1989-01-01

    The method consists in the direct reproduction of geometrical figures (triangles) with different internal shading, presented for memorization for 10 s. The proposed proximate method is described. This method for the quantitative evaluation of fatigability under work load is based on the principles of physiological indication of working conditions and on sequential sampling of the information interrelations that characterize working stress and residual after-shift phenomena. The method was tested on persons engaged in operator and manual labour. fig. 1; tabs. 2

  12. Comparison of OpenFOAM and EllipSys3D actuator line methods with (NEW) MEXICO results

    Science.gov (United States)

    Nathan, J.; Meyer Forsting, A. R.; Troldborg, N.; Masson, C.

    2017-05-01

    The Actuator Line Method has existed for more than a decade and has become a well-established choice for simulating wind turbine rotors in computational fluid dynamics. Numerous implementations exist and are used in the wind energy research community. These codes have been validated against experimental data such as the MEXICO experiment, but comparisons between codes have often been made only on a very broad scale. This study therefore first attempts a code-to-code verification by comparing two different implementations, namely an adapted version of SOWFA/OpenFOAM and EllipSys3D, and then a validation by comparing against experimental results from the MEXICO and NEW MEXICO experiments.
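
    The core step common to any actuator line implementation, including the two codes compared here, is the smearing of blade forces onto the flow grid with a Gaussian kernel η_ε(d) = exp(−(d/ε)²)/(ε³π^{3/2}). The sketch below illustrates only this projection step; the blade discretisation, force values and smearing width ε are invented, and neither code's internals are reproduced.

```python
# Gaussian projection of actuator-point forces onto grid cells (sketch only;
# blade geometry, forces and the smearing width eps are illustrative).
import numpy as np

eps = 2.0                                   # smearing width in metres (assumed)

def gaussian_kernel(d):
    """eta_eps(d) = exp(-(d/eps)^2) / (eps^3 * pi^(3/2))."""
    return np.exp(-(d / eps) ** 2) / (eps ** 3 * np.pi ** 1.5)

# One blade discretised into 10 actuator points along the span (x = 0 plane).
blade_pts = np.stack([np.zeros(10), np.zeros(10), np.linspace(1, 40, 10)], axis=1)
blade_force = np.tile([0.0, 500.0, 0.0], (10, 1))   # N per point, tangential

def body_force(cell_center):
    """Force density contribution of all actuator points at one grid cell."""
    d = np.linalg.norm(blade_pts - cell_center, axis=1)
    return (blade_force * gaussian_kernel(d)[:, None]).sum(axis=0)

print(body_force(np.array([0.0, 1.0, 20.0])))   # cell near mid-span
```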

  13. Indication, methods and results of selective arteriography of the A. iliaca interna in case of erectile dysfunction

    Energy Technology Data Exchange (ETDEWEB)

    Baehren, W.; Gall, H.; Scherb, W.; Thon, W.

    1988-01-01

    Erectile dysfunction can very frequently be traced back to its actual cause by means of angiography. Selective angiography is the method of choice in cases where other causes of circulatory disturbance have already been excluded and non-invasive tests are expected to yield information of relevance to therapy. The best angiographic image quality is obtained by examination under peridural anesthesia with intracavernous injection of vasoactive substances. Selective arteriography is indicated in cases of primary or post-traumatic erectile dysfunction and is a prerequisite of surgery for revascularisation of the pudendal-penile vascular bed.

  14. Comparison of OpenFOAM and EllipSys3D actuator line methods with (NEW) MEXICO results

    International Nuclear Information System (INIS)

    Nathan, J; Masson, C; Meyer Forsting, A R; Troldborg, N

    2017-01-01

    The Actuator Line Method has existed for more than a decade and has become a well-established choice for simulating wind turbine rotors in computational fluid dynamics. Numerous implementations exist and are used in the wind energy research community. These codes have been validated against experimental data such as the MEXICO experiment, but comparisons between codes have often been made only on a very broad scale. This study therefore first attempts a code-to-code verification by comparing two different implementations, namely an adapted version of SOWFA/OpenFOAM and EllipSys3D, and then a validation by comparing against experimental results from the MEXICO and NEW MEXICO experiments. (paper)

  15. Discrepancies between qualitative and quantitative evaluation of randomised controlled trial results: achieving clarity through mixed methods triangulation.

    Science.gov (United States)

    Tonkin-Crine, Sarah; Anthierens, Sibyl; Hood, Kerenza; Yardley, Lucy; Cals, Jochen W L; Francis, Nick A; Coenen, Samuel; van der Velden, Alike W; Godycki-Cwirko, Maciek; Llor, Carl; Butler, Chris C; Verheij, Theo J M; Goossens, Herman; Little, Paul

    2016-05-12

    Mixed methods are commonly used in health services research; however, data are not often integrated to explore the complementarity of findings. A triangulation protocol is one approach to integrating such data. A retrospective triangulation protocol was carried out on mixed-methods data collected as part of a process evaluation of a trial. The multi-country randomised controlled trial had found that web-based training in communication skills (including use of a patient booklet) and the use of a C-reactive protein (CRP) point-of-care test decreased antibiotic prescribing by general practitioners (GPs) for acute cough. The process evaluation investigated GPs' and patients' experiences of taking part in the trial. Three analysts independently compared findings across four data sets: qualitative data collected via semi-structured interviews with (1) 62 patients and (2) 66 GPs, and quantitative data collected via questionnaires with (3) 2886 patients and (4) 346 GPs. Pairwise comparisons were made between data sets and were categorised as agreement, partial agreement, dissonance or silence. Three instances of dissonance occurred in 39 independent findings. GPs and patients reported different views on the use of a CRP test: GPs felt that the test was useful in convincing patients to accept a no-antibiotic decision, but patient data suggested that this was unnecessary if a full explanation was given. Whilst the qualitative data indicated that all patients were generally satisfied with their consultation, the quantitative data indicated the highest levels of satisfaction for those receiving a detailed explanation from their GP together with a booklet giving advice on self-care. Both qualitative and quantitative data sets indicated higher patient enablement for those in the communication groups who had received a booklet. Use of CRP tests does not appear to engage patients or influence illness perceptions, and its effect is centred more on changing clinician behaviour. Communication skills and the patient

  16. Evaluation of column flotation results with a film flotation method; Film fusenho wo mochiita column fusen kekkan no hyoka

    Energy Technology Data Exchange (ETDEWEB)

    Fujimoto, H.; Matsukata, M.; Ueyama, K. [Osaka University, Osaka (Japan). Faculty of Engineering Science

    1996-10-28

    Change in the wettability of coal particle surfaces due to kerosene adsorption was studied using a film flotation method. The applicability of the film flotation method to coals modified by kerosene adsorption was first confirmed: film flotation was applied to Illinois coal modified by aqueous methanol solution and kerosene adsorption, and the weight percent of residual particles at the gas-liquid interface and the kerosene concentration in the aqueous methanol solution were analyzed. Film flotation was then applied to Datong and Illinois coals modified by kerosene adsorption, and the weight percent of residual particles at the gas-liquid interface was plotted against the surface tension of the liquid. As a result, with kerosene addition the weight percent of hydrophobic particles at surface tensions below 50 mN/m increased slightly for Datong coal and markedly for Illinois coal. It was thus suggested that, in addition to surface tension, the distributions of hydrophilic and hydrophobic strengths on the surface of each coal particle should be considered to understand the attachment of coal particles to bubbles. 6 refs., 4 figs.

  17. A Kinematic Study of Prosodic Structure in Articulatory and Manual Gestures: Results from a Novel Method of Data Collection

    Directory of Open Access Journals (Sweden)

    Jelena Krivokapić

    2017-03-01

    Full Text Available The primary goal of this work is to examine prosodic structure as expressed concurrently through articulatory and manual gestures. Specifically, we investigated the effects of phrase-level prominence (Experiment 1) and of prosodic boundaries (Experiments 2 and 3) on the kinematic properties of oral constriction and manual gestures. The hypothesis guiding this work is that prosodic structure will be similarly expressed in both modalities. To test this, we have developed a novel method of data collection that simultaneously records speech audio, vocal tract gestures (using electromagnetic articulometry) and manual gestures (using motion capture). This method allows us, for the first time, to investigate the kinematic properties of body movement and vocal tract gestures simultaneously, which in turn allows us to examine the relationship between speech and body gestures with great precision. A second goal of the paper is thus to establish the validity of this method. Results from two speakers show that manual and oral gestures lengthen under prominence and at prosodic boundaries, indicating that the effects of prosodic structure extend beyond the vocal tract to include body movement.

  18. Methods for the decontamination of personnel recommended by a company doctor on the basis of recent research results

    International Nuclear Information System (INIS)

    Heinemann, G.

    1992-01-01

    There is no single panacea for all kinds of contamination and, thus, no standard procedure to be uniformly adopted in the decontamination of individuals. This means that methods of personnel decontamination vary according to the different working conditions encountered in research laboratories and units for the production of nuclear fuel and radionuclides on the one hand, and nuclear power plants on the other. Some knowledge of the chemical properties of the contaminating materials appears indispensable, but is mostly found wanting. A suitable method of personnel decontamination can by no means be defined simply as one that ensures the cleaning of contamination from the skin surface: all decontamination measures, even the less aggressive ones, may result in incorporation, and an intact skin offers the best protection against incorporation. It must be borne in mind that most contaminations occurring in nuclear power plants are of minor importance as regards dose; the damage to the affected individual from aggressive methods of removal will be much greater than that from any radioactivity remaining in the corneal layer. (orig.) [de]

  19. Methods and results of implementing a commercially available videotaped health physics training program in a multi-disciplined DOE facility

    International Nuclear Information System (INIS)

    O'Neal, B.L.

    1979-01-01

    Sandia, a prime contractor for DOE, is a multi-disciplined research and development laboratory. Its various activities include the operation of two nuclear reactors, several multi-kilocurie gamma irradiation facilities, a transuranic hot cell facility, numerous particle accelerators and x-ray generators, and many other areas involving employees working with or around radioactive materials or radiation-producing machines. Since March 1979, Sandia has conducted a formalized basic radiation safety training program using a commercially available videotaped training package. The videotapes are generic in nature and are accompanied by hard-copy text material, vu-graphs, quizzes, and an instructor's guide. Sandia's overall training program and the methods, results, and problem areas of implementing an off-the-shelf, commercially available videotaped training program are described. Results are summarized using an instructor/course/student evaluation form

  20. Use of statistical study methods for the analysis of the results of the imitation modeling of radiation transfer

    Science.gov (United States)

    Alekseenko, M. A.; Gendrina, I. Yu.

    2017-11-01

    Recently, owing to the abundance of various types of observational data in systems for viewing through the atmosphere and the need to process these data, the use of statistical research methods such as correlation-regression analysis, time-series analysis, and analysis of variance has become relevant to the study of such systems. We have attempted to apply elements of correlation-regression analysis to the study and subsequent prediction of the patterns of radiation transfer in these systems, as well as to the construction of radiation models of the atmosphere. In this paper, we present some results of the statistical processing of numerical simulations of the characteristics of vision systems through the atmosphere, obtained with the help of a special software package.
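
    A small illustration of this kind of correlation-regression treatment, applied to synthetic stand-ins for simulation output; the Beer-Lambert toy model and all numbers below are assumptions, not the authors' data or software.

```python
# Correlation-regression analysis of synthetic radiative-transfer output.
import numpy as np

rng = np.random.default_rng(5)
tau = rng.uniform(0.1, 3.0, 100)              # optical depth samples
# Toy stand-in for simulated transmission: Beer-Lambert decay with noise.
signal = np.exp(-tau + rng.normal(0, 0.05, tau.size))

r = np.corrcoef(tau, np.log(signal))[0, 1]    # correlation analysis
slope, intercept = np.polyfit(tau, np.log(signal), 1)   # regression analysis
print(f"r = {r:.3f}; predicted signal at tau = 2: "
      f"{np.exp(intercept + slope * 2):.3f}")
```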

  1. Potential theory for stationary Schrödinger operators: a survey of results obtained with non-probabilistic methods

    Directory of Open Access Journals (Sweden)

    Marco Bramanti

    1992-05-01

    Full Text Available In this paper we deal with a uniformly elliptic operator of the kind Lu = Au + Vu, where the principal part A is in divergence form and V is a function assumed to lie in a “Kato class”. This operator has been studied in different contexts, especially using probabilistic techniques. The aim of the present work is to give a unified and simplified presentation of the results obtained with non-probabilistic methods for the operator L on a bounded Lipschitz domain. These results concern: continuity of the solutions of Lu=0; the Harnack inequality; estimates on the Green's function and the L-harmonic measure; and the boundary behavior of positive solutions of Lu=0, in particular a “Fatou's theorem”.
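
    For reference, a standard formulation of the Kato class condition on V (the record itself does not spell out the definition, so this is supplied as background only) reads, in dimension n ≥ 3:

```latex
% A common formulation of the Kato class K_n for n >= 3; supplied for
% reference, as the record does not state the definition.
\[
V \in K_n
\iff
\lim_{r \to 0} \, \sup_{x \in \mathbb{R}^n}
\int_{|x-y|<r} \frac{|V(y)|}{|x-y|^{\,n-2}} \, dy = 0 .
\]
```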

  2. A convenient method for estimating the contaminated zone of a subsurface aquifer resulting from radioactive waste disposal into ground

    International Nuclear Information System (INIS)

    Fukui, Masami; Katsurayama, Kousuke; Uchida, Shigeo.

    1981-01-01

    Studies were conducted to estimate the spread of contamination resulting from radioactive waste disposal into a subsurface aquifer. A general equation expressing the contaminated zone as a function of radioactive decay and of the physical and chemical parameters of the soil is presented. A distribution coefficient that can be used to judge the suitability of a site for waste disposal was also formulated. M