WorldWideScience

Sample records for methods results show

  1. Comparative study of methods on outlying data detection in experimental results

    International Nuclear Information System (INIS)

    Oliveira, P.M.S.; Munita, C.S.; Hazenfratz, R.

    2009-01-01

    The interpretation of experimental results through multivariate statistical methods may reveal the existence of outliers, which is rarely taken into account by analysts. However, their presence can influence the interpretation of the results and lead to false conclusions. This paper shows the importance of outlier detection for a database of 89 samples of ceramic fragments analyzed by neutron activation analysis. The results were submitted to five procedures for detecting outliers: Mahalanobis distance, cluster analysis, principal component analysis, factor analysis, and standardized residuals. The results showed that although cluster analysis is one of the procedures most often used to identify outliers, it can fail by not flagging samples that are easily identified as outliers by other methods. In general, statistical procedures for the identification of outliers are little known by analysts. (author)
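
    A minimal sketch of one of the procedures listed above, the Mahalanobis-distance screen, follows. This is not the authors' code; the data array, significance level and chi-square cutoff are illustrative assumptions.

```python
# Minimal sketch of Mahalanobis-distance outlier screening (illustrative, not the authors' code).
import numpy as np
from scipy.stats import chi2

def mahalanobis_outliers(X, alpha=0.01):
    """Flag rows of X whose squared Mahalanobis distance exceeds a chi-square cutoff."""
    X = np.asarray(X, dtype=float)
    mean = X.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
    diff = X - mean
    d2 = np.einsum('ij,jk,ik->i', diff, inv_cov, diff)   # squared Mahalanobis distances
    cutoff = chi2.ppf(1.0 - alpha, df=X.shape[1])        # approximate threshold
    return d2, d2 > cutoff

# Illustrative use with random "elemental concentration" data (89 samples, 5 elements, made up):
rng = np.random.default_rng(0)
X = rng.normal(size=(89, 5))
d2, flags = mahalanobis_outliers(X)
print(flags.sum(), "samples flagged as potential outliers")
```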

  2. Gun Shows and Gun Violence: Fatally Flawed Study Yields Misleading Results

    Science.gov (United States)

    Hemenway, David; Webster, Daniel; Pierce, Glenn; Braga, Anthony A.

    2010-01-01

    A widely publicized but unpublished study of the relationship between gun shows and gun violence is being cited in debates about the regulation of gun shows and gun commerce. We believe the study is fatally flawed. A working paper entitled “The Effect of Gun Shows on Gun-Related Deaths: Evidence from California and Texas” outlined this study, which found no association between gun shows and gun-related deaths. We believe the study reflects a limited understanding of gun shows and gun markets and is not statistically powered to detect even an implausibly large effect of gun shows on gun violence. In addition, the research contains serious ascertainment and classification errors, produces results that are sensitive to minor specification changes in key variables and in some cases have no face validity, and is contradicted by 1 of its own authors’ prior research. The study should not be used as evidence in formulating gun policy. PMID:20724672

  3. Two different hematocrit detection methods: Different methods, different results?

    Directory of Open Access Journals (Sweden)

    Schuepbach Reto A

    2010-03-01

    Background: Little is known about the influence of hematocrit detection methodology on transfusion triggers. Therefore, the aim of the present study was to compare two different hematocrit-assessing methods. In a total of 50 critically ill patients, hematocrit was analyzed using (1) a blood gas analyzer (ABLflex 800) and (2) the central laboratory method (ADVIA® 2120), and the results were compared. Findings: Bland-Altman analysis for repeated measurements showed a good correlation with a bias of +1.39% and 2 SD of ±3.12%. The 24%-hematocrit group showed a correlation of r² = 0.87. With a kappa of 0.56, 22.7% of the cases would have been transfused differently. In the 28%-hematocrit group, with a similar correlation (r² = 0.8) and a kappa of 0.58, 21% of the cases would have been transfused differently. Conclusions: Despite a good agreement between the two methods used to determine hematocrit in clinical routine, the calculated difference of 1.4% might substantially influence transfusion triggers depending on the method employed.
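
    For readers unfamiliar with the Bland-Altman figures quoted above (bias +1.39%, 2 SD of ±3.12%), the following minimal sketch shows how bias and limits of agreement are computed from paired measurements; the arrays are invented, not the study data.

```python
# Minimal Bland-Altman sketch: bias and limits of agreement for paired hematocrit readings.
# The numbers below are invented, not data from the study.
import numpy as np

bga = np.array([24.0, 27.5, 31.0, 22.5, 29.0])   # blood gas analyzer (%)
lab = np.array([23.0, 26.0, 29.5, 21.0, 27.5])   # central laboratory (%)

diff = bga - lab
bias = diff.mean()                               # systematic offset between the two methods
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)       # ~95% limits of agreement

print(f"bias = {bias:+.2f}%, limits of agreement = {loa[0]:.2f}% .. {loa[1]:.2f}%")
```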

  4. A result-driven minimum blocking method for PageRank parallel computing

    Science.gov (United States)

    Tao, Wan; Liu, Tao; Yu, Wei; Huang, Gan

    2017-01-01

    Matrix blocking is a common method for improving the computational efficiency of PageRank, but the blocking rules are hard to determine and the subsequent calculation is complicated. To tackle these problems, we propose a minimum blocking method driven by result needs to accomplish a parallel implementation of the PageRank algorithm. The minimum blocking stores only the elements that are necessary for the result matrix. In return, the subsequent calculation becomes simple and the I/O transmission overhead is reduced. We ran experiments on several matrices of different data sizes and sparsity degrees. The results show that the proposed method has better computational efficiency than traditional blocking methods.
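
    The abstract gives no code, but the basic PageRank power iteration that any blocking or parallelization scheme must reproduce can be sketched in a few lines; the 4-node link matrix is a toy example, not one of the paper's test matrices.

```python
# Sketch of the basic PageRank power iteration that a blocked/parallel scheme must reproduce.
# The 4-node link structure is a toy example, not from the paper.
import numpy as np

def pagerank(adj, d=0.85, tol=1e-10, max_iter=200):
    n = adj.shape[0]
    out_deg = adj.sum(axis=1)
    out_deg[out_deg == 0] = 1.0               # crude treatment of dangling nodes
    M = (adj / out_deg[:, None]).T            # column-stochastic transition matrix
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_new = (1 - d) / n + d * M @ r
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r

adj = np.array([[0, 1, 1, 0],
                [0, 0, 1, 0],
                [1, 0, 0, 1],
                [0, 0, 1, 0]], dtype=float)
print(pagerank(adj))
```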

  5. Different methods to quantify Listeria monocytogenes biofilm cells showed different profiles in their viability

    Directory of Open Access Journals (Sweden)

    Lizziane Kretli Winkelströter

    2015-03-01

    Listeria monocytogenes is a foodborne pathogen able to adhere to and form biofilms on several materials commonly present in food processing plants. The aim of this study was to evaluate the resistance of Listeria monocytogenes attached to an abiotic surface, after treatment with sanitizers, by culture method, microscopy and quantitative real-time polymerase chain reaction (qPCR). Biofilms of L. monocytogenes were obtained on stainless steel coupons immersed in Brain Heart Infusion Broth, under agitation at 37 °C for 24 h. The methods selected for this study were based on plate counts, microscopic counts with the aid of viability dyes (CTC-DAPI), and qPCR. Results of the culture method showed that peroxyacetic acid was efficient in killing sessile L. monocytogenes populations, while sodium hypochlorite was only partially effective against attached L. monocytogenes (p < 0.05). When viability dyes (CTC/DAPI) combined with fluorescence microscopy and qPCR were used, lower counts were found after treatments (p < 0.05). Selective quantification of viable cells of L. monocytogenes by qPCR using EMA revealed that the pre-treatment with EMA was not appropriate, since it also inhibited amplification of DNA from live cells by ca. 2 log. Thus, CTC counting was the best method to count viable cells in biofilms.

  6. Evaluating rehabilitation methods - some practical results from Rum Jungle

    International Nuclear Information System (INIS)

    Ryan, P.

    1987-01-01

    Research and analysis of the following aspects of rehabilitation have been conducted at the Rum Jungle mine site over the past three years: drainage structure stability, rock batter stability, soil fauna, tree growth in compacted soils, and rehabilitation costs. The results show that, for future rehabilitation projects adopting refined methods, attention to final construction detail and to biospheric influences is most important. The mine site offers a unique opportunity to evaluate the success of a variety of rehabilitation methods, to the benefit of the industry in Australia and overseas. It is intended that practical, economical research will continue for some considerable time.

  7. Comparison results on preconditioned SOR-type iterative method for Z-matrices linear systems

    Science.gov (United States)

    Wang, Xue-Zhong; Huang, Ting-Zhu; Fu, Ying-Ding

    2007-09-01

    In this paper, we present some comparison theorems on preconditioned iterative methods for solving linear systems with Z-matrices. The comparison results show that the rate of convergence of the Gauss-Seidel-type method is faster than that of the SOR-type iterative method.
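
    As a point of reference for the comparison theorems above, a minimal sketch of the plain SOR iteration (Gauss-Seidel when omega = 1) on a small Z-matrix system is shown below; the preconditioners studied in the paper are not reproduced.

```python
# Sketch of Gauss-Seidel (omega = 1) and SOR iterations on a small Z-matrix system.
# This only illustrates the iterations being compared; the paper's preconditioners are not reproduced.
import numpy as np

def sor(A, b, omega=1.0, tol=1e-10, max_iter=500):
    n = len(b)
    x = np.zeros(n)
    for it in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x_old[i+1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x, it + 1
    return x, max_iter

# A Z-matrix (non-positive off-diagonal entries), diagonally dominant so the iterations converge.
A = np.array([[ 4.0, -1.0, -1.0],
              [-1.0,  4.0, -1.0],
              [-1.0, -1.0,  4.0]])
b = np.array([2.0, 6.0, 2.0])

for omega in (1.0, 1.2):      # omega = 1.0 is Gauss-Seidel
    x, iters = sor(A, b, omega)
    print(f"omega={omega}: x={x.round(6)}, iterations={iters}")
```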

  8. The effect of PBL and film showing, frequent quizzes and lecture-based method on short-term performance of dentistry students

    Directory of Open Access Journals (Sweden)

    Sadr Lahijani M.S

    2004-01-01

    Background: Advocates have proposed that frequent testing increases the effectiveness of instruction by encouraging learners to study and review more often. It has also been argued that in this way student errors can be identified and corrected earlier and good performance can be recognized, leading to more positive attitudes toward the learning process. In problem-based learning (PBL), medical students reportedly take a more active role in learning and have better recall than students in a conventional learning environment. The hypothetical benefits of a PBL and student-based environment and of the use of films in class are the development of self-learning and problem-solving skills and the enhancement of knowledge and motivation. Purpose: To examine the effect of combining the PBL method with film showing on the short-term performance of dentistry students and to compare it with a lecture-based method and with frequent quizzes. Methods: All students of three cohorts (2000 to 2002) who took the theoretical endodontic course (part 1) participated in this descriptive-analytic study. The scores of the final examinations of this course were obtained from their files. Data were analyzed with SPSS software and ANOVA. Results: The results showed that after changing the way of learning (PBL and film showing) in 2001, there was a statistically significant difference between the scores of the students of 2000 and 2001. There was also a statistically significant difference from the students' scores in 2002, the group with frequent quizzes. Conclusion: Variables such as changing the way of learning, using different teaching methods, showing scientific films in class or, as a whole, active learning have significant effects on the results of the final examination. Key Words: PBL, lecture-based method, education, frequent quizzes

  9. Meta-Analysis of Quantification Methods Shows that Archaea and Bacteria Have Similar Abundances in the Subseafloor

    Science.gov (United States)

    May, Megan K.; Kevorkian, Richard T.; Steen, Andrew D.

    2013-01-01

    There is no universally accepted method to quantify bacteria and archaea in seawater and marine sediments, and different methods have produced conflicting results with the same samples. To identify best practices, we compiled data from 65 studies, plus our own measurements, in which bacteria and archaea were quantified with fluorescent in situ hybridization (FISH), catalyzed reporter deposition FISH (CARD-FISH), polyribonucleotide FISH, or quantitative PCR (qPCR). To estimate efficiency, we defined “yield” to be the sum of bacteria and archaea counted by these techniques divided by the total number of cells. In seawater, the yield was high (median, 71%) and was similar for FISH, CARD-FISH, and polyribonucleotide FISH. In sediments, only measurements by CARD-FISH in which archaeal cells were permeabilized with proteinase K showed high yields (median, 84%). Therefore, the majority of cells in both environments appear to be alive, since they contain intact ribosomes. In sediments, the sum of bacterial and archaeal 16S rRNA gene qPCR counts was not closely related to cell counts, even after accounting for variations in copy numbers per genome. However, qPCR measurements were precise relative to other qPCR measurements made on the same samples. qPCR is therefore a reliable relative quantification method. Inconsistent results for the relative abundance of bacteria versus archaea in deep subsurface sediments were resolved by the removal of CARD-FISH measurements in which lysozyme was used to permeabilize archaeal cells and qPCR measurements which used ARCH516 as an archaeal primer or TaqMan probe. Data from best-practice methods showed that archaea and bacteria decreased as the depth in seawater and marine sediments increased, although archaea decreased more slowly. PMID:24096423

  10. German precursor study: methods and results

    International Nuclear Information System (INIS)

    Hoertner, H.; Frey, W.; von Linden, J.; Reichart, G.

    1985-01-01

    This study has been prepared by the GRS by contract of the Federal Minister of Interior. The purpose of the study is to show how the application of system-analytic tools and especially of probabilistic methods on the Licensee Event Reports (LERs) and on other operating experience can support a deeper understanding of the safety-related importance of the events reported in reactor operation, the identification of possible weak points, and further conclusions to be drawn from the events. Additionally, the study aimed at a comparison of its results for the severe core damage frequency with those of the German Risk Study as far as this is possible and useful. The German Precursor Study is a plant-specific study. The reference plant is Biblis NPP with its very similar Units A and B, whereby the latter was also the reference plant for the German Risk Study

  11. Recruitment Methods and Show Rates to a Prostate Cancer Early Detection Program for High-Risk Men: A Comprehensive Analysis

    Science.gov (United States)

    Giri, Veda N.; Coups, Elliot J.; Ruth, Karen; Goplerud, Julia; Raysor, Susan; Kim, Taylor Y.; Bagden, Loretta; Mastalski, Kathleen; Zakrzewski, Debra; Leimkuhler, Suzanne; Watkins-Bruner, Deborah

    2009-01-01

    Purpose: Men with a family history (FH) of prostate cancer (PCA) and African American (AA) men are at higher risk for PCA. Recruitment and retention of these high-risk men into early detection programs has been challenging. We report a comprehensive analysis of recruitment methods, show rates, and participant factors from the Prostate Cancer Risk Assessment Program (PRAP), which is a prospective, longitudinal PCA screening study. Materials and Methods: Men 35–69 years are eligible if they have a FH of PCA, are AA, or have a BRCA1/2 mutation. Recruitment methods were analyzed with respect to participant demographics and show to the first PRAP appointment using standard statistical methods. Results: Out of 707 men recruited, 64.9% showed to the initial PRAP appointment. More individuals were recruited via radio than by referral or other methods (χ2 = 298.13, p < .0001). Men recruited via radio were more likely to be AA (p<0.001), less educated (p=0.003), not married or partnered (p=0.007), and have no FH of PCA (p<0.001). Men recruited via referral had higher incomes (p=0.007). Men recruited via referral were more likely to attend their initial PRAP visit than those recruited by radio or other methods (χ2 = 27.08, p < .0001). Conclusions: This comprehensive analysis finds that radio leads to higher recruitment of AA men with lower socioeconomic status. However, these are the high-risk men that have lower show rates for PCA screening. Targeted motivational measures need to be studied to improve show rates for PCA risk assessment for these high-risk men. PMID:19758657

  12. Image restoration by the method of convex projections: part 2 applications and numerical results.

    Science.gov (United States)

    Sezan, M I; Stark, H

    1982-01-01

    The image restoration theory discussed in a previous paper by Youla and Webb [1] is applied to a simulated image, and the results are compared with those of the well-known Gerchberg-Papoulis algorithm. The results show that the method of image restoration by projection onto convex sets, by providing a convenient technique for utilizing a priori information, performs significantly better than the Gerchberg-Papoulis method.
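
    A minimal 1D sketch of the alternating-projection idea behind both algorithms (enforce the known samples in one domain and the band limit in the other) is given below; the signal, observed region and band limit are toy assumptions, and neither the paper's operators nor its a priori constraints are reproduced.

```python
# 1D sketch of alternating projections in the Gerchberg-Papoulis style: a band-limited signal is
# recovered from samples known only on part of its support. Toy signal and band limit only.
import numpy as np

n = 256
band = 10                                        # band-limit constraint: keep only |k| <= band
freqs = np.abs(np.fft.fftfreq(n, d=1.0 / n))     # integer frequency indices

# Build a band-limited "true" signal from filtered random noise.
rng = np.random.default_rng(1)
spectrum = np.where(freqs <= band, np.fft.fft(rng.normal(size=n)), 0.0)
x_true = np.real(np.fft.ifft(spectrum))

known = np.zeros(n, dtype=bool)
known[: n // 2] = True                           # only the first half of the samples is observed

x = np.zeros(n)
for _ in range(200):
    x[known] = x_true[known]                     # projection 1: agree with the measured samples
    X = np.fft.fft(x)
    X[freqs > band] = 0.0                        # projection 2: enforce the band limit
    x = np.real(np.fft.ifft(X))

err = np.linalg.norm(x[~known] - x_true[~known]) / np.linalg.norm(x_true[~known])
print(f"relative error on the unobserved half after 200 iterations: {err:.3f}")
```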

  13. A simple identification method for spore-forming bacteria showing high resistance against γ-rays

    International Nuclear Information System (INIS)

    Koshikawa, Tomihiko; Sone, Koji; Kobayashi, Toshikazu

    1993-01-01

    A simple identification method was developed for spore-forming bacteria which are highly resistant to γ-rays. Among the 23 species of Bacillus studied, the spores of Bacillus megaterium, B. cereus, B. thuringiensis, B. pumilus and B. aneurinolyticus showed high resistance to γ-rays compared with the spores of other Bacillus species. The combination of seven biochemical tests, namely the citrate utilization test, nitrate reduction test, starch hydrolysis test, Voges-Proskauer reaction test, gelatine hydrolysis test, mannitol utilization test and xylose utilization test, showed a characteristic pattern for each species of Bacillus. The combination pattern of the above tests, with a few supplementary tests if necessary, was useful for identifying Bacillus species showing high radiation resistance to γ-rays. The method is specific for B. megaterium, B. thuringiensis and B. pumilus, and highly selective for B. aneurinolyticus and B. cereus. (author)

  14. Multiband discrete ordinates method: formalism and results

    International Nuclear Information System (INIS)

    Luneville, L.

    1998-06-01

    The multigroup discrete ordinates method is a classical way to solve transport equation (Boltzmann) for neutral particles. Self-shielding effects are not correctly treated due to large variations of cross sections in a group (in the resonance range). To treat the resonance domain, the multiband method is introduced. The main idea is to divide the cross section domain into bands. We obtain the multiband parameters using the moment method; the code CALENDF provides probability tables for these parameters. We present our implementation in an existing discrete ordinates code: SN1D. We study deep penetration benchmarks and show the improvement of the method in the treatment of self-shielding effects. (author)

  15. New method for rearing Spodoptera frugiperda in laboratory shows that larval cannibalism is not obligatory

    Directory of Open Access Journals (Sweden)

    Cherre Sade Bezerra Da Silva

    2013-09-01

    Here we show, for the first time, that larvae of the fall armyworm (FAW), Spodoptera frugiperda (Lepidoptera, Noctuidae), can be successfully reared in a cohort-based manner with virtually no cannibalism. FAW larvae were reared from the second instar to pupation in rectangular plastic containers holding 40 individuals, with a surprisingly high larval survivorship of ca. 90%. Adult females from the cohort-based method showed fecundity similar to that already reported in the literature for larvae reared individually, and fertility higher than 99%, with the advantage of combining economy of time, space and material resources. These findings suggest that the factors affecting cannibalism of FAW larvae in laboratory rearings need to be reevaluated, while the new technique also shows potential to increase the efficiency of both small-scale and mass FAW rearings.

  16. Testing Delays Resulting in Increased Identification Accuracy in Line-Ups and Show-Ups.

    Science.gov (United States)

    Dekle, Dawn J.

    1997-01-01

    Investigated time delays (immediate, two-three days, one week) between viewing a staged theft and attempting an eyewitness identification. Compared lineups to one-person showups in a laboratory analogue involving 412 subjects. Results show that across all time delays, participants maintained a higher identification accuracy with the showup…

  17. Comparison of the analysis result between two laboratories using different methods

    International Nuclear Information System (INIS)

    Sri Murniasih; Agus Taftazani

    2017-01-01

    A comparison was made of the analysis results for a volcanic ash sample between two laboratories using different analysis methods. The research aims to improve testing laboratory quality and to foster cooperation with testing laboratories from other countries. Samples were tested at the Center for Accelerator of Science and Technology (CAST) NAA laboratory using NAA, while at the University of Texas (UT), USA, they were tested using the ICP-MS and ENAA methods. Of the 12 target elements, the CAST NAA laboratory was able to present analysis data for 11. The comparison shows that the results for the K, Mn, Ti and Fe elements from the two laboratories agree very well and are close to one another, as indicated by the RSD values and correlation coefficients of the two laboratories' results. Examination of the differences shows that the results for the Al, Na, K, Fe, V, Mn, Ti, Cr and As elements from the two laboratories are not significantly different. Of the 11 elements reported, only Zn had significantly different values between the two laboratories. (author)

  18. A method to determine the mammographic regions that show early changes due to the development of breast cancer

    Science.gov (United States)

    Karemore, Gopal; Nielsen, Mads; Karssemeijer, Nico; Brandt, Sami S.

    2014-11-01

    It is well understood nowadays that changes in the mammographic parenchymal pattern are an indicator of a risk of breast cancer and we have developed a statistical method that estimates the mammogram regions where the parenchymal changes, due to breast cancer, occur. This region of interest is computed from a score map by utilising the anatomical breast coordinate system developed in our previous work. The method also makes an automatic scale selection to avoid overfitting while the region estimates are computed by a nested cross-validation scheme. In this way, it is possible to recover those mammogram regions that show a significant difference in classification scores between the cancer and the control group. Our experiments suggested that the most significant mammogram region is the region behind the nipple and that can be justified by previous findings from other research groups. This result was conducted on the basis of the cross-validation experiments on independent training, validation and testing sets from the case-control study of 490 women, of which 245 women were diagnosed with breast cancer within a period of 2-4 years after the baseline mammograms. We additionally generalised the estimated region to another, mini-MIAS study and showed that the transferred region estimate gives at least a similar classification result when compared to the case where the whole breast region is used. In all, by following our method, one most likely improves both preclinical and follow-up breast cancer screening, but a larger study population will be required to test this hypothesis.

  19. A method to determine the mammographic regions that show early changes due to the development of breast cancer

    International Nuclear Information System (INIS)

    Karemore, Gopal; Nielsen, Mads; Brandt, Sami S; Karssemeijer, Nico

    2014-01-01

    It is well understood nowadays that changes in the mammographic parenchymal pattern are an indicator of a risk of breast cancer and we have developed a statistical method that estimates the mammogram regions where the parenchymal changes, due to breast cancer, occur. This region of interest is computed from a score map by utilising the anatomical breast coordinate system developed in our previous work. The method also makes an automatic scale selection to avoid overfitting while the region estimates are computed by a nested cross-validation scheme. In this way, it is possible to recover those mammogram regions that show a significant difference in classification scores between the cancer and the control group. Our experiments suggested that the most significant mammogram region is the region behind the nipple and that can be justified by previous findings from other research groups. This result was conducted on the basis of the cross-validation experiments on independent training, validation and testing sets from the case-control study of 490 women, of which 245 women were diagnosed with breast cancer within a period of 2–4 years after the baseline mammograms. We additionally generalised the estimated region to another, mini-MIAS study and showed that the transferred region estimate gives at least a similar classification result when compared to the case where the whole breast region is used. In all, by following our method, one most likely improves both preclinical and follow-up breast cancer screening, but a larger study population will be required to test this hypothesis. (paper)

  20. Survey Shows Variation in Ph.D. Methods Training.

    Science.gov (United States)

    Steeves, Leslie; And Others

    1983-01-01

    Reports on a 1982 survey of journalism graduate studies indicating considerable variation in research methods requirements and emphases in 23 universities offering doctoral degrees in mass communication. (HOD)

  1. On Calculation Methods and Results for Straight Cylindrical Roller Bearing Deflection, Stiffness, and Stress

    Science.gov (United States)

    Krantz, Timothy L.

    2011-01-01

    The purpose of this study was to assess some calculation methods for quantifying the relationships of bearing geometry, material properties, load, deflection, stiffness, and stress. The scope of the work was limited to two-dimensional modeling of straight cylindrical roller bearings. Preparations for studies of dynamic response of bearings with damaged surfaces motivated this work. Studies were selected to exercise and build confidence in the numerical tools. Three calculation methods were used in this work. Two of the methods were numerical solutions of the Hertz contact approach. The third method used was a combined finite element surface integral method. Example calculations were done for a single roller loaded between an inner and outer raceway for code verification. Next, a bearing with 13 rollers and all-steel construction was used as an example to do additional code verification, including an assessment of the leading order of accuracy of the finite element and surface integral method. Results from that study show that the method is at least first-order accurate. Those results also show that the contact grid refinement has a more significant influence on precision as compared to the finite element grid refinement. To explore the influence of material properties, the 13-roller bearing was modeled as made from Nitinol 60, a material with very different properties from steel and showing some potential for bearing applications. The codes were exercised to compare contact areas and stress levels for steel and Nitinol 60 bearings operating at equivalent power density. As a step toward modeling the dynamic response of bearings having surface damage, static analyses were completed to simulate a bearing with a spall or similar damage.
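
    The closed-form Hertz line-contact relations against which such roller-bearing codes are commonly verified can be sketched as follows; the roller geometry, load and material values are illustrative, not those of the 13-roller bearing in the study.

```python
# Sketch of the closed-form Hertz line-contact relations used to verify roller-bearing codes.
# Geometry, load and material values are illustrative only.
import math

E1, nu1 = 200e9, 0.30        # roller (steel, Pa)
E2, nu2 = 200e9, 0.30        # raceway (steel, Pa)
R1, R2 = 0.005, 0.025        # roller and inner-raceway radii (m), both convex
L = 0.010                    # roller effective length (m)
F = 2000.0                   # normal load on the roller (N)

E_star = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)   # contact modulus
R_eff = 1.0 / (1.0 / R1 + 1.0 / R2)                      # effective radius of curvature
w = F / L                                                # load per unit length

b = math.sqrt(4.0 * w * R_eff / (math.pi * E_star))      # contact half-width
p_max = 2.0 * w / (math.pi * b)                          # peak contact pressure

print(f"half-width b = {b*1e6:.1f} um, peak pressure = {p_max/1e9:.2f} GPa")
```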

  2. How the Television Show "Mythbusters" Communicates the Scientific Method

    Science.gov (United States)

    Zavrel, Erik; Sharpsteen, Eric

    2016-01-01

    The importance of understanding and internalizing the scientific method can hardly be exaggerated. Unfortunately, it is all too common for high school--and even university--students to graduate with only a partial or oversimplified understanding of what the scientific method is and how to actually employ it. Help in remedying this situation may…

  3. Effect of Chemistry Triangle Oriented Learning Media on Cooperative, Individual and Conventional Method on Chemistry Learning Result

    Science.gov (United States)

    Latisma D, L.; Kurniawan, W.; Seprima, S.; Nirbayani, E. S.; Ellizar, E.; Hardeli, H.

    2018-04-01

    The purpose of this study was to determine which methods work well with Chemistry Triangle-oriented learning media. This quasi-experimental study involved first-grade senior high school students in six schools: two SMAN each in Solok city and in Pasaman, and two SMKN in Pariaman. The sampling technique was cluster random sampling. Data were collected by test and analyzed by one-way ANOVA and the Kruskal-Wallis test. The results showed that the learning outcomes of the senior high school students in Solok taught by the cooperative method were better than those of students taught by the conventional and individual methods, both for students with high initial ability and for those with low ability. The research in the SMK showed that the overall learning outcomes of students taught by the conventional method were better than those of students taught by the cooperative and individual methods. The learning outcomes of students with high initial ability taught by the individual method were better than those taught by the cooperative method, and for students with low initial ability there was no difference in learning outcomes among the cooperative, individual and conventional methods. Learning in the senior high schools in Pasaman showed no significant difference in learning outcomes among the three methods.

  4. Experimental Results and Numerical Simulation of the Target RCS using Gaussian Beam Summation Method

    Directory of Open Access Journals (Sweden)

    Ghanmi Helmi

    2018-05-01

    This paper presents a numerical and experimental study of the Radar Cross Section (RCS) of radar targets using the Gaussian Beam Summation (GBS) method. The GBS method has several advantages over the ray method, mainly regarding the caustic problem. To evaluate the performance of the chosen method, we started the analysis of the RCS using Gaussian Beam Summation (GBS) and Gaussian Beam Launching (GBL), the asymptotic models Physical Optics (PO) and Geometrical Theory of Diffraction (GTD), and the rigorous Method of Moments (MoM). Then, we showed the experimental validation of the numerical results using measurements carried out in the anechoic chamber of Lab-STICC at ENSTA Bretagne. The numerical and experimental results of the RCS are studied and given as a function of various parameters: polarization type, target size, number of Gaussian beams and Gaussian beam width.

  5. Application of the DSA preconditioned GMRES formalism to the method of characteristics - First results

    International Nuclear Information System (INIS)

    Le Tellier, R.; Hebert, A.

    2004-01-01

    The method of characteristics is well known for its slow convergence; consequently, as is often done for SN methods, the Generalized Minimal Residual approach (GMRES) has been investigated for its practical implementation and its high reliability. GMRES is one of the most effective Krylov iterative methods for solving large linear systems. Moreover, the system has been 'left preconditioned' with the Algebraic Collapsing Acceleration (ACA), a variant of the Diffusion Synthetic Acceleration (DSA) based on I. Suslov's earlier work. This paper presents the first numerical results of these methods in 2D geometries with material discontinuities. Indeed, previous investigations have shown a degraded effectiveness of Diffusion Synthetic Acceleration with this kind of geometry. Results are presented for 9 x 9 Cartesian assemblies in terms of the speed of convergence of the inner (fixed-source) iterations of the method of characteristics. They show a significant improvement in the convergence rate. (authors)
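
    The abstract describes a left-preconditioned GMRES formalism for transport sweeps; below is a generic sketch using SciPy's GMRES with an incomplete-LU factorization standing in for the ACA/DSA operator, applied to a toy diffusion-like matrix rather than a transport system.

```python
# Generic sketch of left-preconditioned GMRES: an incomplete-LU factorization stands in for the
# ACA/DSA preconditioner, and a toy sparse diffusion-like matrix for the transport system.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 500
main = 2.2 * np.ones(n)                  # slight diagonal dominance keeps the toy problem tame
off = -1.0 * np.ones(n - 1)
A = sp.diags([off, main, off], [-1, 0, 1], format="csc")
b = np.ones(n)

ilu = spla.spilu(A)                      # stand-in preconditioner (not ACA/DSA)
M = spla.LinearOperator((n, n), matvec=ilu.solve)

counts = {"plain": 0, "precond": 0}
x1, info1 = spla.gmres(A, b, callback=lambda *_: counts.__setitem__("plain", counts["plain"] + 1))
x2, info2 = spla.gmres(A, b, M=M, callback=lambda *_: counts.__setitem__("precond", counts["precond"] + 1))

print("GMRES iterations without / with preconditioner:", counts["plain"], "/", counts["precond"])
```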

  6. [Adverse events management. Methods and results of a development project].

    Science.gov (United States)

    Rabøl, Louise Isager; Jensen, Elisabeth Brøgger; Hellebek, Annemarie H; Pedersen, Beth Lilja

    2006-11-27

    This article describes the methods and results of a project in the Copenhagen Hospital Corporation (H:S) on preventing adverse events. The aim of the project was to raise awareness about patients' safety, test a reporting system for adverse events, develop and test methods of analysis of events and propagate ideas about how to prevent adverse events. H:S developed an action plan and a reporting system for adverse events, founded an organization and developed an educational program on theories and methods of learning from adverse events for both leaders and employees. During the three-year period from 1 January 2002 to 31 December 2004, the H:S staff reported 6011 adverse events. In the same period, the organization completed 92 root cause analyses. More than half of these dealt with events that had been optional to report, the other half events that had been mandatory to report. The number of reports and the front-line staff's attitude towards reporting shows that the H:S succeeded in founding a safety culture. Future work should be centred on developing and testing methods that will prevent adverse events from happening. The objective is to suggest and complete preventive initiatives which will help increase patient safety.

  7. Raw material consumption of the European Union--concept, calculation method, and results.

    Science.gov (United States)

    Schoer, Karl; Weinzettel, Jan; Kovanda, Jan; Giegrich, Jürgen; Lauwigi, Christoph

    2012-08-21

    This article presents the concept, calculation method, and first results of the "Raw Material Consumption" (RMC) economy-wide material flow indicator for the European Union (EU). The RMC measures the final domestic consumption of products in terms of raw material equivalents (RME), i.e. raw materials used in the complete production chain of consumed products. We employed the hybrid input-output life cycle assessment method to calculate RMC. We first developed a highly disaggregated environmentally extended mixed unit input output table and then applied life cycle inventory data for imported products without appropriate representation of production within the domestic economy. Lastly, we treated capital formation as intermediate consumption. Our results show that services, often considered as a solution for dematerialization, account for a significant part of EU raw material consumption, which emphasizes the need to focus on the full production chains and dematerialization of services. Comparison of the EU's RMC with its domestic extraction shows that the EU is nearly self-sufficient in biomass and nonmetallic minerals but extremely dependent on direct and indirect imports of fossil energy carriers and metal ores. This implies an export of environmental burden related to extraction and primary processing of these materials to the rest of the world. Our results demonstrate that internalizing capital formation has significant influence on the calculated RMC.
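
    The raw-material-equivalent logic described above can be illustrated with a tiny Leontief-type calculation; the two-sector coefficients, material intensities and final demand are invented for illustration, not taken from the EU tables.

```python
# Tiny illustration of the raw-material-equivalent (RME) logic: material embodied in final demand,
# computed through the Leontief inverse. The two-sector numbers are invented, not EU data.
import numpy as np

A = np.array([[0.20, 0.10],      # inter-industry technical coefficients
              [0.05, 0.30]])
e = np.array([2.0, 0.3])         # direct raw-material extraction per unit output (kg per EUR)
y = np.array([100.0, 400.0])     # final demand by sector (EUR)

L = np.linalg.inv(np.eye(2) - A)         # Leontief inverse: total output per unit final demand
x = L @ y                                # total output needed to serve final demand
rmc = e @ x                              # raw material consumption in RME terms

print(f"total output by sector: {x.round(1)}, RMC = {rmc:.1f} kg")
```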

  8. Visual Display of Scientific Studies, Methods, and Results

    Science.gov (United States)

    Saltus, R. W.; Fedi, M.

    2015-12-01

    The need for efficient and effective communication of scientific ideas becomes more urgent each year. A growing number of societal and economic issues are tied to matters of science - e.g., climate change, natural resource availability, and public health. Societal and political debate should be grounded in a general understanding of scientific work in relevant fields. It is difficult for many participants in these debates to access science directly because the formal method for scientific documentation and dissemination is the journal paper, generally written for a highly technical and specialized audience. Journal papers are very effective and important for documentation of scientific results and are essential to the requirements of science to produce citable and repeatable results. However, journal papers are not effective at providing a quick and intuitive summary useful for public debate. Just as quantitative data are generally best viewed in graphic form, we propose that scientific studies also can benefit from visual summary and display. We explore the use of existing methods for diagramming logical connections and dependencies, such as Venn diagrams, mind maps, flow charts, etc., for rapidly and intuitively communicating the methods and results of scientific studies. We also discuss a method, specifically tailored to summarizing scientific papers, that we introduced last year at AGU. Our method diagrams the relative importance and connections between data, methods/models, results/ideas, and implications/importance using a single-page format with connected elements in these four categories. Within each category (e.g., data) the spatial location of individual elements (e.g., seismic, topographic, gravity) indicates relative novelty (e.g., are these new data?) and importance (e.g., how critical are these data to the results of the paper?). The goal is to find ways to rapidly and intuitively share both the results and the process of science, both for communication

  9. The WOMBAT Attack Attribution Method: Some Results

    Science.gov (United States)

    Dacier, Marc; Pham, Van-Hau; Thonnard, Olivier

    In this paper, we present a new attack attribution method that has been developed within the WOMBAT project. We illustrate the method with some real-world results obtained when applying it to almost two years of attack traces collected by low interaction honeypots. This analytical method aims at identifying large scale attack phenomena composed of IP sources that are linked to the same root cause. All malicious sources involved in a same phenomenon constitute what we call a Misbehaving Cloud (MC). The paper offers an overview of the various steps the method goes through to identify these clouds, providing pointers to external references for more detailed information. Four instances of misbehaving clouds are then described in some more depth to demonstrate the meaningfulness of the concept.

  10. Methods uncovering usability issues in medication-related alerting functions: results from a systematic review.

    Science.gov (United States)

    Marcilly, Romaric; Vasseur, Francis; Ammenwerth, Elske; Beuscart-Zephir, Marie-Catherine

    2014-01-01

    This paper aims at listing the methods used to evaluate the usability of medication-related alerting functions and at identifying which types of usability issues those methods can detect. A sub-analysis of data from this systematic review has been performed. The methods applied in the included papers were collected. The included papers were then sorted into four types of evaluation: "expert evaluation", "user-testing/simulation", "on-site observation" and "impact studies". The types of usability issues (usability flaws, usage problems and negative outcomes) uncovered by those evaluations were analyzed. The results show that a large set of methods is used. The largest proportion of papers uses "on-site observation" evaluation. This is the only evaluation type for which every kind of usability flaw, usage problem and outcome is detected. It is somewhat surprising that, in a usability systematic review, most of the included papers use a method that is not often presented as a usability method. The results are discussed with regard to the opportunity of feeding usability information, collected after the implementation of a technology, back into the design process of such technologies, i.e. before their implementation.

  11. Development of methods to measure hemoglobin adducts by gel electrophoresis - Preliminary results

    International Nuclear Information System (INIS)

    Sun, J.D.; McBride, S.M.

    1988-01-01

    Chemical adducts formed on blood hemoglobin may be a useful biomarker for assessing human exposures to these compounds. This paper reports preliminary results in the development of methods to measure such adducts that may be generally applicable for a wide variety of chemicals. Male F344/N rats were intraperitoneally injected with 14C-BaP dissolved in corn oil. Twenty-four hours later, the rats were sacrificed. Blood samples were collected and globin was isolated. Globin protein was then cleaved into peptide fragments using cyanogen bromide and the fragments separated using 2-dimensional gel electrophoresis. The results showed that the adducted 14C-globin fragments migrated to different areas of the gel than did unadducted fragments. Further research is being conducted to develop methods that will allow quantitation of separated adducted globin fragments from human blood samples without the use of a radiolabel. (author)

  12. Development and application of a new deterministic method for calculating computer model result uncertainties

    International Nuclear Information System (INIS)

    Maerker, R.E.; Worley, B.A.

    1989-01-01

    Interest in research into the field of uncertainty analysis has recently been stimulated as a result of a need in high-level waste repository design assessment for uncertainty information in the form of response complementary cumulative distribution functions (CCDFs) to show compliance with regulatory requirements. The solution to this problem must obviously rely on the analysis of computer code models, which, however, employ parameters that can have large uncertainties. The motivation for the research presented in this paper is a search for a method involving a deterministic uncertainty analysis approach that could serve as an improvement over those methods that make exclusive use of statistical techniques. A deterministic uncertainty analysis (DUA) approach based on the use of first derivative information is the method studied in the present procedure. The present method has been applied to a high-level nuclear waste repository problem involving use of the codes ORIGEN2, SAS, and BRINETEMP in series, and the resulting CDF of a BRINETEMP result of interest is compared with that obtained through a completely statistical analysis
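
    A generic sketch of the first-derivative propagation idea behind such a deterministic uncertainty analysis is given below: finite-difference sensitivities of a model response are combined with parameter uncertainties. The model function and all numbers are placeholders, not the ORIGEN2/SAS/BRINETEMP chain used in the study.

```python
# Generic first-derivative uncertainty propagation: finite-difference sensitivities combined with
# parameter standard deviations. The model below is a placeholder, not the code chain cited above.
import numpy as np

def model(p):
    """Placeholder response, e.g. a peak 'brine temperature' depending on three parameters."""
    q, k, h = p
    return 20.0 + q / (4.0 * np.pi * k) * np.log(1.0 + h)

p0    = np.array([500.0, 2.5, 10.0])     # nominal parameter values
sigma = np.array([ 50.0, 0.5,  2.0])     # parameter standard deviations (assumed independent)

# Central-difference sensitivities dR/dp_i at the nominal point.
grad = np.zeros_like(p0)
for i in range(len(p0)):
    dp = 1e-4 * max(abs(p0[i]), 1.0)
    up, dn = p0.copy(), p0.copy()
    up[i] += dp
    dn[i] -= dp
    grad[i] = (model(up) - model(dn)) / (2.0 * dp)

var = np.sum((grad * sigma) ** 2)        # first-order variance of the response
print(f"response = {model(p0):.2f}, 1-sigma uncertainty = {np.sqrt(var):.2f}")
```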

  13. A method for determination of X-ray elastic constants of materials showing non-linear sin²ψ diagrams and its application to Zn-Ni-alloy electroplate

    International Nuclear Information System (INIS)

    Sasaki, Toshihiko; Kuramoto, Makoto; Yoshioka, Yasuo.

    1990-01-01

    This paper describes the method and the experiment for the determination of the X-ray elastic constants of Zn-Ni-alloy electroplate. For this material, the sin²ψ method is not adequate because the material shows severely curved sin²ψ diagrams. Therefore, a new method developed by the authors is explained first. This new method is effective for materials showing non-linear sin²ψ diagrams. Secondly, an experiment was made on the application of this method to the Zn-Ni-alloy electroplate, and it was found that the experimental data agreed well with the theory of the method. As a result, the following values were obtained as the X-ray elastic constants of the measured sample: (1+ν)/E = 8.44 TPa⁻¹, ν/E = 2.02 TPa⁻¹. (author)

  14. A method for modeling laterally asymmetric proton beamlets resulting from collimation

    Energy Technology Data Exchange (ETDEWEB)

    Gelover, Edgar; Wang, Dongxu; Flynn, Ryan T.; Hyer, Daniel E. [Department of Radiation Oncology, University of Iowa, 200 Hawkins Drive, Iowa City, Iowa 52242 (United States); Hill, Patrick M. [Department of Human Oncology, University of Wisconsin, 600 Highland Avenue, Madison, Wisconsin 53792 (United States); Gao, Mingcheng; Laub, Steve; Pankuch, Mark [Division of Medical Physics, CDH Proton Center, 4455 Weaver Parkway, Warrenville, Illinois 60555 (United States)

    2015-03-15

    Purpose: To introduce a method to model the 3D dose distribution of laterally asymmetric proton beamlets resulting from collimation. The model enables rapid beamlet calculation for spot scanning (SS) delivery using a novel penumbra-reducing dynamic collimation system (DCS) with two pairs of trimmers oriented perpendicular to each other. Methods: Trimmed beamlet dose distributions in water were simulated with MCNPX and the collimating effects noted in the simulations were validated by experimental measurement. The simulated beamlets were modeled analytically using integral depth dose curves along with an asymmetric Gaussian function to represent fluence in the beam’s eye view (BEV). The BEV parameters consisted of Gaussian standard deviations (sigmas) along each primary axis (σ_x1, σ_x2, σ_y1, σ_y2) together with the spatial location of the maximum dose (μ_x, μ_y). Percent depth dose variation with trimmer position was accounted for with a depth-dependent correction function. Beamlet growth with depth was accounted for by combining the in-air divergence with Hong’s fit of the Highland approximation along each axis in the BEV. Results: The beamlet model showed excellent agreement with the Monte Carlo simulation data used as a benchmark. The overall passing rate for a 3D gamma test with 3%/3 mm passing criteria was 96.1% between the analytical model and Monte Carlo data in an example treatment plan. Conclusions: The analytical model is capable of accurately representing individual asymmetric beamlets resulting from use of the DCS. This method enables integration of the DCS into a treatment planning system to perform dose computation in patient datasets. The method could be generalized for use with any SS collimation system in which blades, leaves, or trimmers are used to laterally sharpen beamlets.

  15. A method for modeling laterally asymmetric proton beamlets resulting from collimation

    International Nuclear Information System (INIS)

    Gelover, Edgar; Wang, Dongxu; Flynn, Ryan T.; Hyer, Daniel E.; Hill, Patrick M.; Gao, Mingcheng; Laub, Steve; Pankuch, Mark

    2015-01-01

    Purpose: To introduce a method to model the 3D dose distribution of laterally asymmetric proton beamlets resulting from collimation. The model enables rapid beamlet calculation for spot scanning (SS) delivery using a novel penumbra-reducing dynamic collimation system (DCS) with two pairs of trimmers oriented perpendicular to each other. Methods: Trimmed beamlet dose distributions in water were simulated with MCNPX and the collimating effects noted in the simulations were validated by experimental measurement. The simulated beamlets were modeled analytically using integral depth dose curves along with an asymmetric Gaussian function to represent fluence in the beam’s eye view (BEV). The BEV parameters consisted of Gaussian standard deviations (sigmas) along each primary axis (σ_x1, σ_x2, σ_y1, σ_y2) together with the spatial location of the maximum dose (μ_x, μ_y). Percent depth dose variation with trimmer position was accounted for with a depth-dependent correction function. Beamlet growth with depth was accounted for by combining the in-air divergence with Hong’s fit of the Highland approximation along each axis in the BEV. Results: The beamlet model showed excellent agreement with the Monte Carlo simulation data used as a benchmark. The overall passing rate for a 3D gamma test with 3%/3 mm passing criteria was 96.1% between the analytical model and Monte Carlo data in an example treatment plan. Conclusions: The analytical model is capable of accurately representing individual asymmetric beamlets resulting from use of the DCS. This method enables integration of the DCS into a treatment planning system to perform dose computation in patient datasets. The method could be generalized for use with any SS collimation system in which blades, leaves, or trimmers are used to laterally sharpen beamlets.

  16. A method for modeling laterally asymmetric proton beamlets resulting from collimation

    Science.gov (United States)

    Gelover, Edgar; Wang, Dongxu; Hill, Patrick M.; Flynn, Ryan T.; Gao, Mingcheng; Laub, Steve; Pankuch, Mark; Hyer, Daniel E.

    2015-01-01

    Purpose: To introduce a method to model the 3D dose distribution of laterally asymmetric proton beamlets resulting from collimation. The model enables rapid beamlet calculation for spot scanning (SS) delivery using a novel penumbra-reducing dynamic collimation system (DCS) with two pairs of trimmers oriented perpendicular to each other. Methods: Trimmed beamlet dose distributions in water were simulated with MCNPX and the collimating effects noted in the simulations were validated by experimental measurement. The simulated beamlets were modeled analytically using integral depth dose curves along with an asymmetric Gaussian function to represent fluence in the beam’s eye view (BEV). The BEV parameters consisted of Gaussian standard deviations (sigmas) along each primary axis (σx1,σx2,σy1,σy2) together with the spatial location of the maximum dose (μx,μy). Percent depth dose variation with trimmer position was accounted for with a depth-dependent correction function. Beamlet growth with depth was accounted for by combining the in-air divergence with Hong’s fit of the Highland approximation along each axis in the BEV. Results: The beamlet model showed excellent agreement with the Monte Carlo simulation data used as a benchmark. The overall passing rate for a 3D gamma test with 3%/3 mm passing criteria was 96.1% between the analytical model and Monte Carlo data in an example treatment plan. Conclusions: The analytical model is capable of accurately representing individual asymmetric beamlets resulting from use of the DCS. This method enables integration of the DCS into a treatment planning system to perform dose computation in patient datasets. The method could be generalized for use with any SS collimation system in which blades, leaves, or trimmers are used to laterally sharpen beamlets. PMID:25735287
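
    The beam's-eye-view fluence model shared by the three records above, an asymmetric Gaussian with independent sigmas on either side of the peak along each axis, can be sketched as follows; the parameter values are illustrative, not fitted beamlet data.

```python
# Sketch of the asymmetric-Gaussian beam's-eye-view fluence described in the records above:
# independent sigmas on either side of the peak along each axis. Parameter values are illustrative.
import numpy as np

def asymmetric_gaussian_fluence(x, y, mu_x, mu_y, sx1, sx2, sy1, sy2):
    """Fluence on a BEV grid; sx1/sx2 apply below/above mu_x, sy1/sy2 below/above mu_y."""
    sig_x = np.where(x < mu_x, sx1, sx2)
    sig_y = np.where(y < mu_y, sy1, sy2)
    return np.exp(-0.5 * ((x - mu_x) / sig_x) ** 2) * np.exp(-0.5 * ((y - mu_y) / sig_y) ** 2)

# Example: a beamlet trimmed on the +x and +y sides (smaller sigmas there).
xx, yy = np.meshgrid(np.linspace(-15, 15, 121), np.linspace(-15, 15, 121))  # mm
f = asymmetric_gaussian_fluence(xx, yy, mu_x=1.0, mu_y=0.5,
                                sx1=4.0, sx2=2.0, sy1=4.0, sy2=2.5)
print("peak fluence (normalized):", round(float(f.max()), 3))
```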

  17. Tensile strength of concrete under static and intermediate strain rates: Correlated results from different testing methods

    International Nuclear Information System (INIS)

    Wu Shengxing; Chen Xudong; Zhou Jikai

    2012-01-01

    Highlights: Tensile strength of concrete increases with increasing strain rate. The strain rate sensitivity of the tensile strength of concrete depends on the test method. The high stressed volume method can correlate results from various test methods. Abstract: This paper presents a comparative experiment and analysis of three different methods (direct tension, splitting tension and four-point loading flexural tests) for determination of the tensile strength of concrete under low and intermediate strain rates. In addition, the objective of this investigation is to analyze the suitability of the high stressed volume approach and the Weibull effective volume method for correlating the results of different tensile tests of concrete. The test results show that the strain rate sensitivity of tensile strength depends on the type of test: the splitting tensile strength of concrete is more sensitive to an increase in the strain rate than the flexural and direct tensile strengths. The high stressed volume method could be used to obtain a tensile strength value of concrete that is free from the influence of the characteristics of tests and specimens. However, the Weibull effective volume method is inadequate for describing the failure of concrete specimens determined by different testing methods.
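
    The Weibull effective-volume size correction referred to above can be illustrated with the standard scaling relation sigma_1/sigma_2 = (V_2/V_1)^(1/m); the strength, volumes and Weibull modulus below are invented numbers.

```python
# Illustration of the Weibull effective-volume size correction referred to above:
# strength scales as sigma1/sigma2 = (V2/V1)**(1/m). All numbers are invented.
m = 12.0                      # assumed Weibull modulus for concrete in tension
sigma_ref = 3.5               # measured tensile strength (MPa) at the reference effective volume
V_ref = 1.0e5                 # reference effective volume (mm^3)
V_new = 5.0e5                 # effective volume of a different specimen/test type (mm^3)

sigma_new = sigma_ref * (V_ref / V_new) ** (1.0 / m)
print(f"predicted strength at the larger effective volume: {sigma_new:.2f} MPa")
```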

  18. Multiband discrete ordinates method: formalism and results; Methode multibande aux ordonnees discretes: formalisme et resultats

    Energy Technology Data Exchange (ETDEWEB)

    Luneville, L

    1998-06-01

    The multigroup discrete ordinates method is a classical way to solve transport equation (Boltzmann) for neutral particles. Self-shielding effects are not correctly treated due to large variations of cross sections in a group (in the resonance range). To treat the resonance domain, the multiband method is introduced. The main idea is to divide the cross section domain into bands. We obtain the multiband parameters using the moment method; the code CALENDF provides probability tables for these parameters. We present our implementation in an existing discrete ordinates code: SN1D. We study deep penetration benchmarks and show the improvement of the method in the treatment of self-shielding effects. (author) 15 refs.

  19. The estimation of the measurement results with using statistical methods

    International Nuclear Information System (INIS)

    Velychko, O. (State Enterprise Ukrmetrteststandard, 4, Metrologichna Str., 03680, Kyiv, Ukraine); Gordiyenko, T. (State Scientific Institution UkrNDIspirtbioprod, 3, Babushkina Lane, 03190, Kyiv, Ukraine)

    2015-01-01

    A number of international standards and guides describe various statistical methods that are applied for the management, control and improvement of processes, with the purpose of analysing technical measurement results. An analysis of international standards and guides on statistical methods for the estimation of measurement results, and of their recommendations for application in laboratories, is presented. For this analysis, cause-and-effect Ishikawa diagrams concerning the application of statistical methods for the estimation of measurement results were constructed.

  20. The estimation of the measurement results with using statistical methods

    Science.gov (United States)

    Velychko, O.; Gordiyenko, T.

    2015-02-01

    A number of international standards and guides describe various statistical methods that are applied for the management, control and improvement of processes, with the purpose of analysing technical measurement results. An analysis of international standards and guides on statistical methods for the estimation of measurement results, and of their recommendations for application in laboratories, is presented. For this analysis, cause-and-effect Ishikawa diagrams concerning the application of statistical methods for the estimation of measurement results were constructed.

  1. VAN method of short-term earthquake prediction shows promise

    Science.gov (United States)

    Uyeda, Seiya

    Although optimism prevailed in the 1970s, the present consensus on earthquake prediction appears to be quite pessimistic. However, short-term prediction based on geoelectric potential monitoring has stood the test of time in Greece for more than a decade [Varotsos and Kulhanek, 1993; Lighthill, 1996]. The method used is called the VAN method. The geoelectric potential changes constantly due to causes such as magnetotelluric effects, lightning, rainfall, leakage from manmade sources, and electrochemical instabilities of electrodes. All of this noise must be eliminated before preseismic signals are identified, if they exist at all. The VAN group apparently accomplished this task for the first time. They installed multiple short (100-200 m) dipoles with different lengths in both north-south and east-west directions and long (1-10 km) dipoles in appropriate orientations at their stations (one of their mega-stations, Ioannina, for example, now has 137 dipoles in operation) and found that practically all of the noise could be eliminated by applying a set of criteria to the data.

  2. Non-Destructive Evaluation Method Based On Dynamic Invariant Stress Resultants

    Directory of Open Access Journals (Sweden)

    Zhang Junchi

    2015-01-01

    Most vibration-based damage detection methods rely on changes in frequencies, mode shapes, mode shape curvature, and flexibilities. These methods are limited and typically can only detect the presence and location of damage; current methods can seldom identify the exact severity of damage to structures. This paper presents research in the development of a new non-destructive evaluation method to identify the existence, location, and severity of damage for structural systems. The method utilizes the concept of invariant stress resultants (ISR). The basic concept of ISR is that at any given cross section the resultant internal force distribution in a structural member is not affected by the inflicted damage. The method utilizes dynamic analysis of the structure to simulate direct measurements of acceleration, velocity and displacement simultaneously. The proposed dynamic ISR method is developed and utilized to detect damage from the corresponding changes in mass, damping and stiffness. The objectives of this research are to develop the basic theory of the dynamic ISR method, apply it to specific types of structures, and verify the accuracy of the developed theory. Numerical results that demonstrate the application of the method reflect its advanced sensitivity and accuracy in characterizing multiple damage locations.

  3. Comparison of multiple-criteria decision-making methods - results of simulation study

    Directory of Open Access Journals (Sweden)

    Michał Adamczak

    2016-12-01

    Background: Today, both researchers and practitioners have many methods for supporting the decision-making process. Due to the conditions in which supply chains function, the most interesting are multi-criteria methods. The use of sophisticated methods for supporting decisions requires parameterization and the execution of calculations that are often complex. So is it efficient to use sophisticated methods? Methods: The authors of the publication compared two popular multi-criteria decision-making methods: the Weighted Sum Model (WSM) and the Analytic Hierarchy Process (AHP). A simulation study reflects these two decision-making methods. Input data for this study were a set of criteria weights and the value of each alternative in terms of each criterion. Results: The iGrafx Process for Six Sigma simulation software recreated how both multiple-criteria decision-making methods (WSM and AHP) function. The result of the simulation was a numerical value defining the preference of each of the alternatives according to the WSM and AHP methods. The alternative producing a result of higher numerical value was considered preferred, according to the selected method. In the analysis of the results, the relationship between the values of the parameters and the difference in the results presented by both methods was investigated. Statistical methods, including hypothesis testing, were used for this purpose. Conclusions: The simulation study findings prove that the results obtained with the use of the two multiple-criteria decision-making methods are very similar. Differences occurred more frequently for lower-value parameters from the "value of each alternative" group and higher-value parameters from the "weight of criteria" group.
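
    To make the two compared methods concrete, a small sketch of a WSM score and an AHP-style weight derivation (principal eigenvector of a pairwise-comparison matrix) follows; the performance values and comparison matrix are invented, and the paper's simulation setup is not reproduced.

```python
# Small sketch of the two compared methods: a Weighted Sum Model score and AHP-style weights
# from the principal eigenvector of a pairwise-comparison matrix. All numbers are invented.
import numpy as np

# Alternatives x criteria performance matrix, already scaled to [0, 1] (benefit criteria).
perf = np.array([[0.8, 0.6, 0.9],
                 [0.7, 0.9, 0.5],
                 [0.9, 0.5, 0.7]])

# --- WSM: directly supplied criteria weights ---
w_wsm = np.array([0.5, 0.3, 0.2])
wsm_scores = perf @ w_wsm

# --- AHP-style: weights from a pairwise-comparison matrix of the criteria ---
pc = np.array([[1.0, 2.0, 3.0],
               [1/2., 1.0, 2.0],
               [1/3., 1/2., 1.0]])
eigvals, eigvecs = np.linalg.eig(pc)
w_ahp = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w_ahp = w_ahp / w_ahp.sum()              # normalize the principal eigenvector
ahp_scores = perf @ w_ahp

print("WSM ranking :", np.argsort(-wsm_scores) + 1)
print("AHP ranking :", np.argsort(-ahp_scores) + 1)
```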

  4. Decision making with consonant belief functions: Discrepancy resulting with the probability transformation method used

    Directory of Open Access Journals (Sweden)

    Cinicioglu Esma Nur

    2014-01-01

    Full Text Available Dempster-Shafer belief function theory can address a wider class of uncertainty than standard probability theory does, and this fact appeals to researchers in the operations research community as a source of potential application areas. However, the lack of a decision theory for belief functions gives rise to the need to use probability transformation methods for decision making. For the representation of statistical evidence, the class of consonant belief functions is used, which is not closed under Dempster's rule of combination but is closed under Walley's rule of combination. In this research, it is shown that the outcomes obtained using Dempster's and Walley's rules result in different probability distributions when the pignistic transformation is used. However, when the plausibility transformation is used, they result in the same probability distribution. This shows that the choice of the combination rule and probability transformation method may have a significant effect on decision making, since it may change the decision alternative selected. This result is illustrated via an example of missile type identification.
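
    A minimal sketch of the two transformations discussed above, applied to a hypothetical consonant mass function on a three-element frame; it illustrates the definitions only and does not reproduce the paper's combination-rule experiment.

```python
# Sketch: pignistic (BetP) and plausibility transformations of a hypothetical
# consonant mass function on the frame {a, b, c}.
frame = ("a", "b", "c")
mass = {("a",): 0.5, ("a", "b"): 0.3, ("a", "b", "c"): 0.2}   # nested focal sets

def pignistic(mass, frame):
    # BetP(x) = sum over focal sets A containing x of m(A) / |A|
    return {x: sum(m / len(A) for A, m in mass.items() if x in A) for x in frame}

def plausibility_transform(mass, frame):
    # Pl_P(x) = Pl(x) / sum_y Pl(y), with Pl(x) = sum of m(A) over A containing x
    pl = {x: sum(m for A, m in mass.items() if x in A) for x in frame}
    total = sum(pl.values())
    return {x: v / total for x, v in pl.items()}

print("pignistic:    ", pignistic(mass, frame))
print("plausibility: ", plausibility_transform(mass, frame))
```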

  5. Life cycle analysis of electricity systems: Methods and results

    International Nuclear Information System (INIS)

    Friedrich, R.; Marheineke, T.

    1996-01-01

    The two methods for full energy chain analysis, process analysis and input/output analysis, are discussed. A combination of these two methods provides the most accurate results. Such a hybrid analysis of the full energy chains of six different power plants is presented and discussed. The results of such analyses depend on the time, site and technique of each process step and therefore have no general validity. For renewable energy systems the emissions from the generation of a back-up system should be added. (author). 7 figs, 1 fig

  6. EQUITY SHARES EQUATING THE RESULTS OF FCFF AND FCFE METHODS

    Directory of Open Access Journals (Sweden)

    Bartłomiej Cegłowski

    2012-06-01

    Full Text Available The aim of the article is to present a method of establishing the equity share in the weighted average cost of capital (WACC), in which the value of loan capital results from the fixed assumptions accepted in the financial plan (for example, a schedule of loan repayment) and equity is valued by means of a discount method. With the described method, regardless of whether cash flows are calculated as FCFF or FCFE, the result of the company valuation will be identical.
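
    The circularity the article addresses (the equity value depends on the WACC, which in turn depends on the equity share) is commonly resolved by fixed-point iteration; the sketch below illustrates that generic idea with hypothetical cash flows, debt and rates, and is not taken from the article itself.

```python
# Sketch: fixed-point iteration for the equity share in WACC. The equity value is
# obtained by discounting at WACC, while WACC depends on that equity value.
# Cash flows, debt value and rates are hypothetical.
fcff = [100.0, 110.0, 120.0, 900.0]   # free cash flows to the firm (last year incl. terminal value)
debt = 300.0                           # loan capital fixed by the financial plan
cost_of_debt_after_tax = 0.04
cost_of_equity = 0.11

equity = 500.0                         # initial guess
for _ in range(200):
    wacc = (equity * cost_of_equity + debt * cost_of_debt_after_tax) / (equity + debt)
    firm_value = sum(cf / (1 + wacc) ** t for t, cf in enumerate(fcff, start=1))
    new_equity = firm_value - debt
    if abs(new_equity - equity) < 1e-8:
        break
    equity = new_equity

print(f"equity value = {equity:.2f}, consistent WACC = {wacc:.4f}")
```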

  7. CT-guided percutaneous neurolysis methods. State of the art and first results

    International Nuclear Information System (INIS)

    Schneider, B.; Richter, G.M.; Roeren, T.; Kauffmann, G.W.

    1996-01-01

    We used 21G or 22G fine needles. All CT-guided percutaneous neurolysis methods require proper blood coagulation. Most common CT scanners are suitable for neurolysis if there is enough room for maintaining sterile conditions. All neurolysis methods involve sterile puncture of the ganglia under local anesthesia, a test block with anesthetic and contrast agent to assess the clinical effect, and the definitive block with a mixture of 96% ethanol and local anesthetic. This allows us to correct the position of the needle if we see improper distribution of the test block or unwanted side effects. Though inflammatory complications of the peritoneum due to puncture are rarely seen, we prefer the dorsal approach whenever possible. Results: Seven of 20 legs showed at least transient clinical improvement after CT-guided lumbar sympathectomies; 13 legs had to be amputated. Results of the methods in the literature differ. For lumbar sympathectomy, improved perfusion is reported in 39-89% of cases, depending on the pre-selection of the patient group. Discussion: It was recently proved that sympathectomy improves perfusion not only of the skin but also of the muscle. The hypothesis of a steal effect of sympathectomy on skin perfusion was disproved. Modern aggressive surgical and interventional treatment often leaves for sympathectomy only those patients whose reserves of collateralization are nearly exhausted. We presume this is the reason for the different results we found in our patient group. For thoracic sympathectomy the clinical outcome depends very much on the indication. Whereas palmar hyperhidrosis offers nearly 100% success, only 60-70% of patients with disturbance of perfusion have benefited. Results in celiac ganglia block also differ. Patients with carcinoma of the pancreas and other organs of the upper abdomen benefit in 80-100% of all cases, patients with chronic pancreatitis in 60-80%. (orig./VHE) [de

  8. On a new iterative method for solving linear systems and comparison results

    Science.gov (United States)

    Jing, Yan-Fei; Huang, Ting-Zhu

    2008-10-01

    In Ujevic [A new iterative method for solving linear systems, Appl. Math. Comput. 179 (2006) 725-730], the author obtained a new iterative method for solving linear systems, which can be considered as a modification of the Gauss-Seidel method. In this paper, we show that this is a special case from the point of view of projection techniques. A different approach is then established, which is both theoretically and numerically proven to be better than (or at least the same as) Ujevic's. As the presented numerical examples show, in most cases the convergence rate is more than one and a half times that of Ujevic's method.
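
    For reference, the baseline iteration that Ujevic's method modifies is the classical Gauss-Seidel sweep; the sketch below implements that baseline only (not Ujevic's variant or the projection formulation) on a small diagonally dominant system.

```python
# Classical Gauss-Seidel iteration for Ax = b on a diagonally dominant system.
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=1000):
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Use already-updated components x[:i] and old components x_old[i+1:].
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([15.0, 10.0, 10.0])
print(gauss_seidel(A, b))
```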

  9. Steady-state transport equation resolution by particle methods, and numerical results

    International Nuclear Information System (INIS)

    Mercier, B.

    1985-10-01

    A method to solve the steady-state transport equation has been developed. The principles of the method are given. The method is studied in two different cases; estimates given by the theory are compared to numerical results. Results obtained in 1-D (spherical geometry) and in 2-D (axisymmetric geometry) are presented [fr

  10. A Fuzzy Logic Based Method for Analysing Test Results

    Directory of Open Access Journals (Sweden)

    Le Xuan Vinh

    2017-11-01

    Full Text Available Network operators must perform many tasks to ensure smooth operation of the network, such as planning, monitoring, etc. Among those tasks, regular testing of network performance, network errors and troubleshooting is very important. Meaningful test results will allow the operators to evaluate network performance, identify any shortcomings and better plan for network upgrades. Due to the diverse and mainly unquantifiable nature of network testing results, there is a need to develop a method for systematically and rigorously analysing these results. In this paper, we present STAM (System Test-result Analysis Method), which employs a bottom-up hierarchical processing approach using fuzzy logic. STAM is capable of combining all test results into a quantitative description of the network performance in terms of network stability, the significance of various network errors, and the performance of each functional block within the network. The validity of this method has been successfully demonstrated in assisting the testing of a VoIP system at the Research Institute of Post and Telecoms in Vietnam. The paper is organized as follows. The first section gives an overview of fuzzy logic theory, the concepts of which will be used in the development of STAM. The next section describes STAM. The last section, demonstrating STAM's capability, presents a success story in which STAM is successfully applied.
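
    As a rough illustration of the bottom-up fuzzy aggregation idea (not the actual STAM rule base), the sketch below maps two hypothetical test metrics to memberships with triangular functions and combines them into a single stability score; the metric names, breakpoints and weights are all assumptions.

```python
# Sketch: triangular fuzzy memberships for raw test metrics, aggregated into one
# score. Metric names, breakpoints and weights are hypothetical.
def triangular(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

test_results = {"call_setup_delay_ms": 180.0, "packet_loss_pct": 0.8}

memberships = {
    "delay_good": triangular(test_results["call_setup_delay_ms"], 0, 100, 300),
    "loss_good": triangular(test_results["packet_loss_pct"], -1.0, 0.0, 2.0),
}

weights = {"delay_good": 0.6, "loss_good": 0.4}
stability_score = sum(weights[k] * memberships[k] for k in memberships)
print(memberships, round(stability_score, 3))
```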

  11. Standardization of glycohemoglobin results and reference values in whole blood studied in 103 laboratories using 20 methods.

    Science.gov (United States)

    Weykamp, C W; Penders, T J; Miedema, K; Muskiet, F A; van der Slik, W

    1995-01-01

    We investigated the effect of calibration with lyophilized calibrators on whole-blood glycohemoglobin (glyHb) results. One hundred three laboratories, using 20 different methods, determined glyHb in two lyophilized calibrators and two whole-blood samples. For whole-blood samples with low (5%) and high (9%) glyHb percentages, respectively, calibration decreased overall interlaboratory variation (CV) from 16% to 9% and from 11% to 6% and decreased intermethod variation from 14% to 6% and from 12% to 5%. Forty-seven laboratories, using 14 different methods, determined mean glyHb percentages in self-selected groups of 10 nondiabetic volunteers each. With calibration their overall mean (2SD) was 5.0% (0.5%), very close to the 5.0% (0.3%) derived from the reference method used in the Diabetes Control and Complications Trial. In both experiments the Abbott IMx and Vision showed deviating results. We conclude that, irrespective of the analytical method used, calibration enables standardization of glyHb results, reference values, and interpretation criteria.

  12. A statistical method for testing epidemiological results, as applied to the Hanford worker population

    International Nuclear Information System (INIS)

    Brodsky, A.

    1979-01-01

    Some recent reports of Mancuso, Stewart and Kneale claim findings of radiation-produced cancer in the Hanford worker population. These claims are based on statistical computations that use small differences in accumulated exposures between groups dying of cancer and groups dying of other causes; actual mortality and longevity were not reported. This paper presents a statistical method for the evaluation of actual mortality and longevity longitudinally over time, as applied in a primary analysis of the mortality experience of the Hanford worker population. Although available, this method was not utilized in the Mancuso-Stewart-Kneale paper. The author's preliminary longitudinal analysis shows that the gross mortality experience of persons employed at Hanford during the 1943-70 interval did not differ significantly from that of certain controls, when both employees and controls were selected from families with two or more offspring and comparisons were matched by age, sex, race and year of entry into employment. This result is consistent with findings reported by Sanders (Health Phys. vol.35, 521-538, 1978). The method utilizes an approximate chi-square (1 D.F.) statistic for testing population subgroup comparisons, as well as the cumulation of chi-squares (1 D.F.) for testing the overall result of a particular type of comparison. The method is available for computer testing of the Hanford mortality data, and could also be adapted to morbidity or other population studies. (author)
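
    The cumulation idea can be sketched as follows: each matched comparison yields a 1-d.f. chi-square statistic, and their sum is tested against a chi-square distribution with as many degrees of freedom as there are comparisons. The statistics below are hypothetical, not Hanford data.

```python
# Sketch: per-comparison 1-d.f. chi-square tests and a cumulated overall test.
from scipy.stats import chi2

subgroup_chi2 = [0.8, 1.7, 0.3, 2.1, 0.9]        # one hypothetical statistic per matched comparison

per_comparison_p = [chi2.sf(x, df=1) for x in subgroup_chi2]
overall_p = chi2.sf(sum(subgroup_chi2), df=len(subgroup_chi2))

print([round(p, 3) for p in per_comparison_p])
print("overall:", round(overall_p, 3))
```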

  13. The anchors of steel wire ropes, testing methods and their results

    Directory of Open Access Journals (Sweden)

    J. Krešák

    2012-10-01

    Full Text Available The present paper introduces an application of acoustic and thermographic methods in the defectoscopic testing of immobile steel wire ropes at their most critical point, the anchor. The first measurements made with these new defectoscopic methods and their results are presented. In defectoscopic tests at the anchor, the widely used magnetic method gives unreliable results and therefore presents a problem for steel wire defectoscopy. Application of the two new methods in steel wire defectoscopy at the anchor point will enable increased safety at the anchors of steel wire ropes in bridge, roof, tower and aerial cable lift constructions.

  14. Mechanics of Nanostructures: Methods and Results

    Science.gov (United States)

    Ruoff, Rod

    2003-03-01

    We continue to develop and use new tools to measure the mechanics and electromechanics of nanostructures. Here we discuss: (a) methods for making nanoclamps and the resulting nanoclamp geometry, chemical composition, type of chemical bonding, and nanoclamp strength (effectiveness as a nanoclamp for the mechanics measurements to be made); (b) mechanics of carbon nanocoils. We have received carbon nanocoils from colleagues in Japan [1], measured their spring constants, and have observed extensions exceeding 100% relative to the unloaded length, using our scanning electron microscope nanomanipulator tool; (c) several new devices that are essentially MEMS-based and allow for improved measurements of the mechanics of pseudo-1D and planar nanostructures. [1] Zhang M., Nakayama Y., Pan L., Japanese J. Appl. Phys. 39, L1242-L1244 (2000).

  15. Methodics of computing the results of monitoring the exploratory gallery

    Directory of Open Access Journals (Sweden)

    Krúpa Víazoslav

    2000-09-01

    Full Text Available At the building site of the Višňové-Dubná skala motorway tunnel, priority is given to driving an exploration gallery that secures detailed geological, engineering-geological, hydrogeological and geotechnical research. This research is based on gathering information for the intended use of a full-profile driving machine to drive the motorway tunnel. From the part of the exploration gallery which is driven by the TBM method, full information is gathered about the parameters of the driving process; these are collected by a computer monitoring system mounted on the driving machine. This monitoring system is based on the industrial computer PC 104. It records 4 basic values of the driving process: the electromotor performance of the driving machine Voest-Alpine ATB 35HA, the speed of driving advance, the rotation speed of the disintegrating head of the TBM and the total head pressure. The pressure force is evaluated from the pressure in the hydraulic cylinders of the machine. From these values, the strength of the rock mass, the angle of internal friction, etc. are mathematically calculated. These values characterize the rock mass properties as well as their changes. To define the effectiveness of the driving process, the value of specific energy and the working ability of the driving head are used. The article defines the method of computing the gathered monitoring information, prepared for the driving machine Voest-Alpine ATB 35H at the Institute of Geotechnics SAS. It describes the input forms (protocols) of the developed method created in an EXCEL program and shows selected samples of the graphical elaboration of the first monitoring results obtained from the exploratory gallery driving process in the Višňové-Dubná skala motorway tunnel.

  16. Genoa Boat Show – Good Example of Event Management

    Directory of Open Access Journals (Sweden)

    Dunja Demirović

    2012-07-01

    Full Text Available The International Boat Show, a business and tourist event, has been held annually in the Italian city of Genoa since 1962. The fair is one of the oldest, largest and best known in the boating industry worldwide, primarily due to good management of the event, and it can serve as a case study for domestic fair organizers seeking to improve the quality of their business and services. Since Belgrade is a city of fairs, but compared to Genoa still underdeveloped in terms of trade shows, the following tasks arose naturally in this study: to examine how the organizers of the Genoa Boat Show handle the preparation and the fair offer, the selection of and communication with specific target groups (especially visitors), services during the fair, and the functioning of the city during the fair. During the research the authors mostly used the historical method, comparison, synthesis and the interview method. The results of the theoretical research may, in addition, help not only managers of fairs and exhibitions, but also organizers of other events in our country.

  17. The Use of Data Mining Methods to Predict the Result of Infertility Treatment Using the IVF ET Method

    Directory of Open Access Journals (Sweden)

    Malinowski Paweł

    2014-12-01

    Full Text Available The IVF ET method is a scientifically recognized infertility treatment method. The problem, however, is this method's unsatisfactory efficiency. This calls for a more thorough analysis of the information available in the treatment process, in order to detect the factors that have an effect on the results, as well as to effectively predict the result of treatment. Classical statistical methods have proven to be inadequate for this issue. Only the use of modern data mining methods gives hope for a more effective analysis of the collected data. This work provides an overview of the new methods used for the analysis of data on infertility treatment, and formulates a proposal for further research directions aimed at improving the prediction of the treatment outcome.

  18. Aircraft Engine Gas Path Diagnostic Methods: Public Benchmarking Results

    Science.gov (United States)

    Simon, Donald L.; Borguet, Sebastien; Leonard, Olivier; Zhang, Xiaodong (Frank)

    2013-01-01

    Recent technology reviews have identified the need for objective assessments of aircraft engine health management (EHM) technologies. To help address this issue, a gas path diagnostic benchmark problem has been created and made publicly available. This software tool, referred to as the Propulsion Diagnostic Method Evaluation Strategy (ProDiMES), has been constructed based on feedback provided by the aircraft EHM community. It provides a standard benchmark problem enabling users to develop, evaluate and compare diagnostic methods. This paper will present an overview of ProDiMES along with a description of four gas path diagnostic methods developed and applied to the problem. These methods, which include analytical and empirical diagnostic techniques, will be described and associated blind-test-case metric results will be presented and compared. Lessons learned along with recommendations for improving the public benchmarking processes will also be presented and discussed.

  19. Time dependent patient no-show predictive modelling development.

    Science.gov (United States)

    Huang, Yu-Li; Hanauer, David A

    2016-05-09

    Purpose - The purpose of this paper is to develop evidence-based predictive no-show models that consider each of a patient's past appointment statuses, a time-dependent component, as an independent predictor to improve predictability. Design/methodology/approach - A ten-year retrospective data set was extracted from a pediatric clinic. It consisted of 7,291 distinct patients who had at least two visits, along with their appointment characteristics, patient demographics, and insurance information. Logistic regression was adopted to develop the no-show models, using two-thirds of the data for training and the remaining data for validation. The no-show threshold was then determined based on minimizing the misclassification of show/no-show assignments. A total of 26 predictive models were developed based on the number of available past appointments. Simulation was employed to test the effect of each model on the costs of patient wait time, physician idle time, and overtime. Findings - The results demonstrated that the misclassification rate and the area under the receiver operating characteristic curve gradually improved as more appointment history was included, until around the 20th predictive model. The overbooking method with no-show predictive models suggested incorporating up to the 16th model and outperformed other overbooking methods by as much as 9.4 per cent in the cost per patient while allowing two additional patients in a clinic day. Research limitations/implications - The challenge now is to actually implement the no-show predictive model systematically to further demonstrate its robustness and simplicity in various scheduling systems. Originality/value - This paper provides examples of how to build no-show predictive models with time-dependent components to improve the overbooking policy. Accurately identifying scheduled patients' show/no-show status allows clinics to proactively schedule patients to reduce the negative impact of patient no-shows.
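
    In the same spirit, though with synthetic data and hypothetical feature names rather than the clinic's data set, a no-show model with each past appointment status as a separate predictor can be sketched with scikit-learn as follows.

```python
# Sketch: logistic regression with each past appointment's status as its own
# predictor. All data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_patients, n_past = 500, 3
past_status = rng.integers(0, 2, size=(n_patients, n_past))   # 1 = past no-show
lead_time_days = rng.integers(1, 60, size=(n_patients, 1))
X = np.hstack([past_status, lead_time_days])

# Synthetic outcome: more past no-shows and longer lead times raise no-show risk.
logit = -2.0 + past_status @ np.array([0.9, 0.6, 0.4]) + 0.02 * lead_time_days.ravel()
y = rng.random(n_patients) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(X, y)
print(model.coef_, model.intercept_)
print("predicted no-show probability:", model.predict_proba([[1, 1, 0, 30]])[0, 1])
```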

  20. Results from the Application of Uncertainty Methods in the CSNI Uncertainty Methods Study (UMS)

    International Nuclear Information System (INIS)

    Glaeser, H.

    2008-01-01

    Within licensing procedures there is the incentive to replace the conservative requirements for code application by a 'best estimate' concept supplemented by an uncertainty analysis to account for predictive uncertainties of code results. Methods have been developed to quantify these uncertainties. The Uncertainty Methods Study (UMS) Group, following a mandate from CSNI, has compared five methods for calculating the uncertainty in the predictions of advanced 'best estimate' thermal-hydraulic codes. Most of the methods identify and combine input uncertainties. The major differences between the predictions of the methods came from the choice of uncertain parameters and the quantification of the input uncertainties, i.e. the width of the uncertainty ranges. Therefore, suitable experimental and analytical information has to be selected to specify these uncertainty ranges or distributions. After the closure of the Uncertainty Methods Study (UMS) and after the report was issued, comparison calculations of experiment LSTF-SB-CL-18 were performed by the University of Pisa using different versions of the RELAP5 code. It turned out that the version used by two of the participants calculated a 170 K higher peak clad temperature compared with other versions using the same input deck. This may contribute to the differences in the upper limit of the uncertainty ranges.

  1. Finite elements volumes methods: applications to the Navier-Stokes equations and convergence results

    International Nuclear Information System (INIS)

    Emonot, P.

    1992-01-01

    The first chapter describes the equations modeling incompressible fluid flow and gives a quick presentation of the finite volume method. The second chapter is an introduction to the finite element volume method. The box model is described and a method adapted to Navier-Stokes problems is proposed. The third chapter presents an error analysis of the finite element volume method for the Laplacian problem and some examples of one-, two- and three-dimensional calculations. The fourth chapter extends the error analysis of the method to the Navier-Stokes problem

  2. Numerical processing of radioimmunoassay results using logit-log transformation method

    International Nuclear Information System (INIS)

    Textoris, R.

    1983-01-01

    The mathematical model and algorithm are described for the numerical processing of radioimmunoassay results by the logit-log transformation method and by linear regression with weight factors. The limiting value of the curve for zero concentration is optimized with respect to the residual sum by an iterative method involving multiple repeats of the linear regression. Typical examples of the approximation of calibration curves are presented. The method proved suitable for all hitherto used RIA sets and is well suited for small computers with an internal memory of at least 8 Kbyte. (author)
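
    A minimal sketch of the logit-log calibration step is given below with hypothetical standards; the weight factors and the iterative optimisation of the zero-dose response described above are omitted for brevity.

```python
# Sketch: logit-log calibration of an RIA standard curve. B/B0 values and
# concentrations are hypothetical.
import numpy as np

conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0])           # standard concentrations
b_over_b0 = np.array([0.85, 0.74, 0.60, 0.40, 0.25])  # bound fraction relative to zero dose

logit_y = np.log(b_over_b0 / (1.0 - b_over_b0))
log_x = np.log(conc)
slope, intercept = np.polyfit(log_x, logit_y, 1)      # straight line after the transformation
print(f"slope = {slope:.3f}, intercept = {intercept:.3f}")

# Read an unknown sample (B/B0 = 0.5) off the fitted line.
unknown_logit = np.log(0.5 / 0.5)
print("estimated concentration:", np.exp((unknown_logit - intercept) / slope))
```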

  3. Multiple predictor smoothing methods for sensitivity analysis: Example results

    International Nuclear Information System (INIS)

    Storlie, Curtis B.; Helton, Jon C.

    2008-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described in the first part of this presentation: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. In this, the second and concluding part of the presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present
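
    As a small illustration of one of the smoothers named above, the sketch below fits a LOESS curve of a synthetic model output against a single sampled input and reports the fraction of output variance captured by the fit; the full stepwise multi-predictor procedure is not reproduced.

```python
# Sketch: LOESS smoothing of a synthetic model output against one sampled input,
# with a crude R^2-style sensitivity indicator.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 200)                          # sampled model input
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.3, 200)   # nonlinear model response with noise

smoothed = lowess(y, x, frac=0.3)                       # returns sorted (x, fitted y) pairs
print("variance of fit / variance of output:", smoothed[:, 1].var() / y.var())
```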

  4. Convergence results for a class of abstract continuous descent methods

    Directory of Open Access Journals (Sweden)

    Sergiu Aizicovici

    2004-03-01

    Full Text Available We study continuous descent methods for the minimization of Lipschitzian functions defined on a general Banach space. We establish convergence theorems for those methods which are generated by approximate solutions to evolution equations governed by regular vector fields. Since the complement of the set of regular vector fields is $\sigma$-porous, we conclude that our results apply to most vector fields in the sense of Baire's categories.

  5. Method of vacuum correlation functions: Results and prospects

    International Nuclear Information System (INIS)

    Badalian, A. M.; Simonov, Yu. A.; Shevchenko, V. I.

    2006-01-01

    Basic results obtained within the QCD method of vacuum correlation functions over the past 20 years, in the context of investigations into strong-interaction physics at the Institute of Theoretical and Experimental Physics (ITEP, Moscow), are formulated. Emphasis is placed primarily on the prospects of the general theory developed within QCD by employing both nonperturbative and perturbative methods. On the basis of ab initio arguments, it is shown that the lowest two field correlation functions play a dominant role in QCD dynamics. A quantitative theory of confinement and deconfinement, as well as of the spectra of light and heavy quarkonia, glueballs, and hybrids, is given in terms of these two correlation functions. Perturbation theory in a nonperturbative vacuum (background perturbation theory) plays a significant role, not possessing the drawbacks of conventional perturbation theory and leading to the infrared freezing of the coupling constant α_s

  6. 3D ultrasound computer tomography: Hardware setup, reconstruction methods and first clinical results

    Science.gov (United States)

    Gemmeke, Hartmut; Hopp, Torsten; Zapf, Michael; Kaiser, Clemens; Ruiter, Nicole V.

    2017-11-01

    A promising candidate for improved imaging of breast cancer is ultrasound computer tomography (USCT). Current experimental USCT systems are still focused in elevation dimension resulting in a large slice thickness, limited depth of field, loss of out-of-plane reflections, and a large number of movement steps to acquire a stack of images. 3D USCT emitting and receiving spherical wave fronts overcomes these limitations. We built an optimized 3D USCT, realizing for the first time the full benefits of a 3D system. The point spread function could be shown to be nearly isotropic in 3D, to have very low spatial variability and fit the predicted values. The contrast of the phantom images is very satisfactory in spite of imaging with a sparse aperture. The resolution and imaged details of the reflectivity reconstruction are comparable to a 3 T MRI volume. Important for the obtained resolution are the simultaneously obtained results of the transmission tomography. The KIT 3D USCT was then tested in a pilot study on ten patients. The primary goals of the pilot study were to test the USCT device, the data acquisition protocols, the image reconstruction methods and the image fusion techniques in a clinical environment. The study was conducted successfully; the data acquisition could be carried out for all patients with an average imaging time of six minutes per breast. The reconstructions provide promising images. Overlaid volumes of the modalities show qualitative and quantitative information at a glance. This paper gives a summary of the involved techniques, methods, and first results.

  7. A semantics-based method for clustering of Chinese web search results

    Science.gov (United States)

    Zhang, Hui; Wang, Deqing; Wang, Li; Bi, Zhuming; Chen, Yong

    2014-01-01

    Information explosion is a critical challenge to the development of modern information systems. In particular, when the application of an information system is over the Internet, the amount of information on the web has been increasing exponentially and rapidly. Search engines, such as Google and Baidu, are essential tools for people to find information on the Internet. Valuable information, however, is still likely to be submerged in the ocean of search results from those tools. By automatically clustering the results into different groups based on subjects, a search engine with a clustering feature allows users to select the most relevant results quickly. In this paper, we propose an online semantics-based method to cluster Chinese web search results. First, we employ the generalised suffix tree to extract the longest common substrings (LCSs) from search snippets. Second, we use HowNet to calculate the similarities of the words derived from the LCSs, and extract the most representative features by constructing the vocabulary chain. Third, we construct a vector of text features and calculate the snippets' semantic similarities. Finally, we improve the Chameleon algorithm to cluster the snippets. Extensive experimental results have shown that the proposed algorithm outperforms the suffix tree clustering method and other traditional clustering methods.
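
    A greatly simplified stand-in for this pipeline is sketched below: generic TF-IDF vectors, cosine similarities and agglomerative clustering replace the suffix-tree LCS extraction, HowNet similarity and improved Chameleon steps, and the snippets are invented English examples.

```python
# Sketch: cluster a few invented snippets with TF-IDF vectors, cosine similarity
# and agglomerative clustering (a generic stand-in for the pipeline described above).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.cluster import AgglomerativeClustering

snippets = [
    "apple releases new phone with improved camera",
    "smartphone camera comparison of the latest phones",
    "apple pie recipe with cinnamon and fresh apples",
    "easy dessert recipes: apple pie and fruit tart",
]

X = TfidfVectorizer().fit_transform(snippets).toarray()
print(cosine_similarity(X).round(2))                    # pairwise snippet similarity
print(AgglomerativeClustering(n_clusters=2).fit_predict(X))
```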

  8. Analysis of risk of nonconformities and applied quality inspection methods in the process of aluminium profiles coating based on FMEA results

    Directory of Open Access Journals (Sweden)

    Krzysztof Knop

    2017-10-01

    Full Text Available The article presents the results of a risk analysis associated with nonconformities of aluminium profiles in the coating process and the quality inspection methods used for their detection. The risk analysis was done based on the results of the FMEA method. The evaluated quality inspection methods were distinguished based on the term 'inspection' as defined in the ISO 9000:2005 standard. The manufacturing process of an aluminium profile is presented from a micro-technological perspective. A threefold quantification of nonconformity risk based on the FMEA method, using three different approaches, was conducted. An analysis of the nonconformity risks associated with the use of specific quality inspection methods was carried out. In the last part, the analysis of the causes of critical nonconformities, proposals for improvement actions reducing the risk of the critical nonconformities, and the applied critical quality inspection methods are shown.
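
    Risk quantification in FMEA is conventionally done with the risk priority number RPN = severity × occurrence × detection; the sketch below applies that standard formula to hypothetical coating nonconformities, not to the article's data.

```python
# Sketch: risk priority numbers (RPN = S x O x D) for hypothetical coating
# nonconformities, ranked from highest to lowest risk.
nonconformities = {
    "coating thickness out of tolerance": (7, 4, 3),   # (severity, occurrence, detection)
    "surface contamination before coating": (5, 3, 6),
    "colour deviation": (4, 5, 2),
}

rpn = {name: s * o * d for name, (s, o, d) in nonconformities.items()}
for name, value in sorted(rpn.items(), key=lambda kv: kv[1], reverse=True):
    print(f"RPN {value:4d}  {name}")
```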

  9. New method of scoliosis assessment: preliminary results using computerized photogrammetry.

    Science.gov (United States)

    Aroeira, Rozilene Maria Cota; Leal, Jefferson Soares; de Melo Pertence, Antônio Eustáquio

    2011-09-01

    A new method for nonradiographic evaluation of scoliosis was independently compared with the Cobb radiographic method for the quantification of scoliotic curvature. The aims were to develop a protocol for computerized photogrammetry, as a nonradiographic method, for the quantification of scoliosis, and to mathematically relate this proposed method to the Cobb radiographic method. Repeated exposure of children to radiation can be harmful to their health. Nevertheless, no nonradiographic method proposed until now has gained popularity as a routine method for evaluation, mainly due to a low correspondence to the Cobb radiographic method. Patients undergoing standing posteroanterior full-length spine radiographs, who were willing to participate in this study, were submitted to dorsal digital photography in the orthostatic position with special surface markers over the spinous processes of the vertebrae C7 to L5. The radiographic and photographic images were sent separately for independent analysis to two examiners, trained in quantification of scoliosis for the types of images received. The scoliosis curvature angles obtained through computerized photogrammetry (the new method) were compared to those obtained through the Cobb radiographic method. Sixteen individuals were evaluated (14 female and 2 male). All presented idiopathic scoliosis and were 21.4 ± 6.1 years of age, 52.9 ± 5.8 kg in weight and 1.63 ± 0.05 m in height, with a body mass index of 19.8 ± 0.2. There was no statistically significant difference between the scoliosis angle measurements obtained in the comparative analysis of both methods, and a mathematical relationship was formulated between the two methods. The preliminary results presented demonstrate equivalence between the two methods. More studies are needed to firmly assess the potential of this new method as a coadjuvant tool in the routine follow-up of scoliosis treatment.

  10. Locating previously unknown patterns in data-mining results: a dual data- and knowledge-mining method

    Directory of Open Access Journals (Sweden)

    Knaus William A

    2006-03-01

    Full Text Available Abstract Background Data mining can be utilized to automate analysis of substantial amounts of data produced in many organizations. However, data mining produces large numbers of rules and patterns, many of which are not useful. Existing methods for pruning uninteresting patterns have only begun to automate the knowledge acquisition step (which is required for subjective measures of interestingness), hence leaving a serious bottleneck. In this paper we propose a method for automatically acquiring knowledge to shorten the pattern list by locating the novel and interesting ones. Methods The dual-mining method is based on automatically comparing the strength of patterns mined from a database with the strength of equivalent patterns mined from a relevant knowledgebase. When these two estimates of pattern strength do not match, a high "surprise score" is assigned to the pattern, identifying the pattern as potentially interesting. The surprise score captures the degree of novelty or interestingness of the mined pattern. In addition, we show how to compute p values for each surprise score, thus filtering out noise and attaching statistical significance. Results We have implemented the dual-mining method using scripts written in Perl and R. We applied the method to a large patient database and a biomedical literature citation knowledgebase. The system estimated association scores for 50,000 patterns, composed of disease entities and lab results, by querying the database and the knowledgebase. It then computed the surprise scores by comparing the pairs of association scores. Finally, the system estimated the statistical significance of the scores. Conclusion The dual-mining method eliminates more than 90% of patterns with strong associations, thus identifying them as uninteresting. We found that the pruning of patterns using the surprise score matched the biomedical evidence in the 100 cases that were examined by hand. The method automates the acquisition of
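
    The core comparison can be sketched as follows, with a two-proportion z-test standing in for the paper's own surprise-score and p-value machinery, and with hypothetical counts.

```python
# Sketch: compare one pattern's association strength in the database against the
# knowledgebase with a two-proportion z-test (counts are hypothetical).
from math import sqrt
from scipy.stats import norm

db_hits, db_total = 120, 10_000     # pattern co-occurrences in the patient database
kb_hits, kb_total = 15, 50_000      # equivalent pattern in the literature knowledgebase

p1, p2 = db_hits / db_total, kb_hits / kb_total
p_pool = (db_hits + kb_hits) / (db_total + kb_total)
z = (p1 - p2) / sqrt(p_pool * (1 - p_pool) * (1 / db_total + 1 / kb_total))
p_value = 2 * norm.sf(abs(z))

print(f"surprise z = {z:.1f}, p = {p_value:.2g}")   # large z -> potentially novel pattern
```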

  11. Soil Particle Size Analysis by Laser Diffractometry: Result Comparison with Pipette Method

    Science.gov (United States)

    Šinkovičová, Miroslava; Igaz, Dušan; Kondrlová, Elena; Jarošová, Miriam

    2017-10-01

    Soil texture, as a basic soil physical property, provides basic information on the soil grain size distribution as well as the representation of grain size fractions. Currently, there are several methods of particle size measurement available that are based on different physical principles. The pipette method, based on the different sedimentation velocities of particles with different diameters, is considered one of the standard methods for determining the distribution of individual grain size fractions. Following technical advancement, optical methods such as laser diffraction can nowadays also be used for determining the grain size distribution in soil. According to the literature review of domestic as well as international sources related to this topic, it is obvious that the results obtained by laser diffractometry do not correspond with the results obtained by the pipette method. The main aim of this paper was to analyse 132 samples of medium-fine soil, taken from the Nitra River catchment in Slovakia, from depths of 15-20 cm and 40-45 cm, respectively, using the laser analysers ANALYSETTE 22 MicroTec plus (Fritsch GmbH) and Mastersizer 2000 (Malvern Instruments Ltd). The results obtained by laser diffractometry were compared with the pipette method, and regression relationships using linear, exponential, power and polynomial trends were derived. The regressions with the three highest regression coefficients (R2) were further investigated. The tightest fit was observed for the polynomial regression. In view of the results obtained, we recommend using the estimate of the representation of the clay fraction when the analysis is done by laser diffractometry. The advantages of the laser diffraction method comprise the short analysis time, the use of a small sample amount, applicability to various grain size fraction and soil type classification systems, and a wide range of determined fractions. Therefore, it is necessary to focus on this issue further to address the
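
    Deriving such a conversion is a standard curve-fitting exercise; the sketch below fits a polynomial of the pipette clay content against the laser-diffraction value on synthetic pairs (not the study's 132 samples) and reports R2.

```python
# Sketch: polynomial regression relating laser-diffraction clay content to the
# pipette value, on synthetic pairs (percent clay).
import numpy as np

laser_clay = np.array([4.0, 6.5, 8.0, 10.2, 12.5, 15.0])
pipette_clay = np.array([9.5, 13.0, 16.0, 19.8, 23.5, 27.0])

coeffs = np.polyfit(laser_clay, pipette_clay, deg=2)
fitted = np.polyval(coeffs, laser_clay)
r2 = 1 - ((pipette_clay - fitted) ** 2).sum() / ((pipette_clay - pipette_clay.mean()) ** 2).sum()
print("coefficients:", coeffs.round(4), " R2:", round(r2, 4))
```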

  12. Processing method and results of meteor shower radar observations

    International Nuclear Information System (INIS)

    Belkovich, O.I.; Suleimanov, N.I.; Tokhtasjev, V.S.

    1987-01-01

    Studies of meteor showers permit the solving of some principal problems of meteor astronomy: to obtain the structure of a stream in cross section and along its orbits; to retrace the evolution of particle orbits of the stream taking into account gravitational and nongravitational forces and to discover the orbital elements of its parent body; to find out the total mass of solid particles ejected from the parent body taking into account physical and chemical evolution of meteor bodies; and to use meteor streams as natural probes for investigation of the average characteristics of the meteor complex in the solar system. A simple and effective method of determining the flux density and mass exponent parameter was worked out. This method and its results are discussed

  13. Short overview of PSA quantification methods, pitfalls on the road from approximate to exact results

    International Nuclear Information System (INIS)

    Banov, Reni; Simic, Zdenko; Sterc, Davor

    2014-01-01

    Over time, Probabilistic Safety Assessment (PSA) models have become an invaluable companion in the identification and understanding of key nuclear power plant (NPP) vulnerabilities. PSA is an effective tool for this purpose, as it assists plant management in targeting resources where the largest benefit for plant safety can be obtained. PSA has quickly become an established technique to numerically quantify risk measures in nuclear power plants. As the complexity of PSA models increases, the computational approaches become more or less feasible. The various computational approaches can basically be classified in two major groups: approximate and exact (BDD based) methods. In recent times, modern commercially available PSA tools have started to provide both methods for PSA model quantification. Even though both methods are available in proven PSA tools, they must still be used carefully, since there are many pitfalls which can lead to wrong conclusions and prevent efficient use of the PSA tool. For example, typical pitfalls involve using a higher-precision approximation method and getting a less precise result, or mixing minimal cut sets and prime implicants in the exact computation method. The exact methods are sensitive to the selected computational paths; in such cases a simple human-assisted rearrangement may help and even switch from a computationally infeasible to a feasible method. Further improvements to the exact methods are possible and desirable, which opens space for new research. In this paper we will show how these pitfalls may be detected and how carefully one must act, especially when working with large PSA models. (authors)
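
    The approximate-versus-exact pitfall can be illustrated on a toy fault tree: the rare-event sum over minimal cut sets overestimates the exact top-event probability, which an exact (e.g. BDD-based) engine computes without approximation and which is reproduced here by inclusion-exclusion. The basic-event probabilities are hypothetical.

```python
# Sketch: rare-event (minimal cut set) approximation versus the exact union
# probability on a toy model; basic-event probabilities are hypothetical.
from itertools import combinations

basic_events = {"A": 0.2, "B": 0.3, "C": 0.25}
minimal_cut_sets = [{"A", "B"}, {"B", "C"}, {"A", "C"}]

def cut_prob(events):
    p = 1.0
    for e in events:
        p *= basic_events[e]
    return p

approx = sum(cut_prob(c) for c in minimal_cut_sets)     # rare-event approximation

exact = 0.0                                             # inclusion-exclusion over cut-set unions
for k in range(1, len(minimal_cut_sets) + 1):
    for combo in combinations(minimal_cut_sets, k):
        exact += (-1) ** (k + 1) * cut_prob(set().union(*combo))

print(f"approximate = {approx:.4f}, exact = {exact:.4f}")
```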

  14. Method for depth referencing hydrocarbon gas shows on mud logs

    International Nuclear Information System (INIS)

    Dion, E.P.

    1986-01-01

    A method is described for identifying hydrocarbon formations surrounding a borehole, comprising the steps of: a. measuring hydrocarbon gas in the entrained formation cuttings obtained during drilling operations in which a drilling mud is continually circulated past a drill bit to carry the cuttings to the earth's surface, b. simultaneously measuring natural gamma radiation in the cuttings, c. identifying the depths at which the cuttings were obtained within the borehole, d. measuring natural gamma radiation within the borehole following completion of the drilling operations, e. correlating the natural gamma radiation measurements in steps (b) and (d), and f. identifying the depths within the borehole from which the entrained cuttings containing hydrocarbon gas were obtained during drilling operations when there is correlation between the natural gamma radiation measurements in steps (b) and (d)

  15. How the RNA isolation method can affect microRNA microarray results

    DEFF Research Database (Denmark)

    Podolska, Agnieszka; Kaczkowski, Bogumil; Litman, Thomas

    2011-01-01

    The quality of RNA is crucial in gene expression experiments. RNA degradation interferes in the measurement of gene expression, and in this context, microRNA quantification can lead to an incorrect estimation. In the present study, two different RNA isolation methods were used to perform microRNA microarray analysis on porcine brain tissue. One method is a phenol-guanidine isothiocyanate-based procedure that permits isolation of total RNA. The second method, miRVana™ microRNA isolation, is column based and recovers the small RNA fraction alone. We found that microarray analyses give different results that depend on the RNA fraction used, in particular because some microRNAs appear very sensitive to the RNA isolation method. We conclude that precautions need to be taken when comparing microarray studies based on RNA isolated with different methods.

  16. Comparative Analysis of Clinical Samples Showing Weak Serum Reaction on AutoVue System Causing ABO Blood Typing Discrepancies.

    Science.gov (United States)

    Jo, Su Yeon; Lee, Ju Mi; Kim, Hye Lim; Sin, Kyeong Hwa; Lee, Hyeon Ji; Chang, Chulhun Ludgerus; Kim, Hyung Hoi

    2017-03-01

    ABO blood typing in pre-transfusion testing is a major component of the high workload in blood banks and therefore requires automation. We often experienced discrepant results from an automated system, especially weak serum reactions. We evaluated the discrepant results with the reference manual method to confirm ABO blood typing. In total, 13,113 blood samples were tested with the AutoVue system; all samples were run in parallel with the reference manual method according to the laboratory protocol. The AutoVue system confirmed ABO blood typing of 12,816 samples (97.7%), and these results were concordant with those of the manual method. The remaining 297 samples (2.3%) showed discrepant results in the AutoVue system and were confirmed by the manual method. The discrepant results involved weak serum reactions, samples from patients who had received stem cell transplants, ABO subgroups, and specific system error messages. Among the 98 samples showing ≤1+ reaction grade in the AutoVue system, 70 samples (71.4%) showed a normal serum reaction (≥2+ reaction grade) with the manual method, and 28 samples (28.6%) showed a weak serum reaction in both methods. ABO blood typing of 97.7% of samples could be confirmed by the AutoVue system, and a small proportion (2.3%) needed to be re-evaluated by the manual method. Samples with a 2+ reaction grade in serum typing do not need to be evaluated manually, while those with ≤1+ reaction grade do.

  17. Interpreting Statistical Significance Test Results: A Proposed New "What If" Method.

    Science.gov (United States)

    Kieffer, Kevin M.; Thompson, Bruce

    As the 1994 publication manual of the American Psychological Association emphasized, "p" values are affected by sample size. As a result, it can be helpful to interpret the results of statistical significance tests in a sample-size context by conducting so-called "what if" analyses. However, these methods can be inaccurate…
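
    One simple form of such a "what if" re-analysis holds the observed effect size fixed and recomputes the p value for other sample sizes; the sketch below does this for a one-sample t test with a hypothetical effect size.

```python
# Sketch: hold a hypothetical effect size fixed and recompute the one-sample
# t-test p value for several sample sizes.
from math import sqrt
from scipy.stats import t

cohens_d = 0.30                       # observed standardised effect, held fixed
for n in (20, 50, 100, 200):
    t_stat = cohens_d * sqrt(n)       # t = d * sqrt(n) for a one-sample test
    p = 2 * t.sf(abs(t_stat), df=n - 1)
    print(f"n = {n:4d}  t = {t_stat:.2f}  p = {p:.3f}")
```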

  18. Differences in quantitative methods for measuring subjective cognitive decline - results from a prospective memory clinic study

    DEFF Research Database (Denmark)

    Vogel, Asmus; Salem, Lise Cronberg; Andersen, Birgitte Bo

    2016-01-01

    influence reports of cognitive decline. METHODS: The Subjective Memory Complaints Scale (SMC) and The Memory Complaint Questionnaire (MAC-Q) were applied in 121 mixed memory clinic patients with mild cognitive symptoms (mean MMSE = 26.8, SD 2.7). The scales were applied independently and raters were blinded...... decline. Depression scores were significantly correlated to both scales measuring subjective decline. Linear regression models showed that age did not have a significant contribution to the variance in subjective memory beyond that of depressive symptoms. CONCLUSIONS: Measures for subjective cognitive...... decline are not interchangeable when used in memory clinics and the application of different scales in previous studies is an important factor as to why studies show variability in the association between subjective cognitive decline and background data and/or clinical results. Careful consideration...

  19. Automatically classifying sentences in full-text biomedical articles into Introduction, Methods, Results and Discussion.

    Science.gov (United States)

    Agarwal, Shashank; Yu, Hong

    2009-12-01

    Biomedical texts can be typically represented by four rhetorical categories: Introduction, Methods, Results and Discussion (IMRAD). Classifying sentences into these categories can benefit many other text-mining tasks. Although many studies have applied different approaches for automatically classifying sentences in MEDLINE abstracts into the IMRAD categories, few have explored the classification of sentences that appear in full-text biomedical articles. We first evaluated whether sentences in full-text biomedical articles could be reliably annotated into the IMRAD format and then explored different approaches for automatically classifying these sentences into the IMRAD categories. Our results show an overall annotation agreement of 82.14% with a Kappa score of 0.756. The best classification system is a multinomial naïve Bayes classifier trained on manually annotated data that achieved 91.95% accuracy and an average F-score of 91.55%, which is significantly higher than baseline systems. A web version of this system is available online at-http://wood.ims.uwm.edu/full_text_classifier/.
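
    A toy version of the best-performing approach (multinomial naïve Bayes over word counts) can be sketched as follows; the four labelled sentences are invented illustrations, not the study's annotated corpus.

```python
# Sketch: a tiny multinomial naive Bayes sentence classifier over word counts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

sentences = [
    "Cancer incidence has risen steadily over the past decade.",
    "We extracted RNA and performed quantitative PCR in triplicate.",
    "The treated group showed a significant reduction in tumour size.",
    "These findings suggest a role for the pathway in disease progression.",
]
labels = ["Introduction", "Methods", "Results", "Discussion"]

clf = make_pipeline(CountVectorizer(), MultinomialNB()).fit(sentences, labels)
print(clf.predict(["DNA was extracted and PCR performed for each sample."]))
```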

  20. A comparison of genotyping-by-sequencing analysis methods on low-coverage crop datasets shows advantages of a new workflow, GB-eaSy.

    Science.gov (United States)

    Wickland, Daniel P; Battu, Gopal; Hudson, Karen A; Diers, Brian W; Hudson, Matthew E

    2017-12-28

    Genotyping-by-sequencing (GBS), a method to identify genetic variants and quickly genotype samples, reduces genome complexity by using restriction enzymes to divide the genome into fragments whose ends are sequenced on short-read sequencing platforms. While cost-effective, this method produces extensive missing data and requires complex bioinformatics analysis. GBS is most commonly used on crop plant genomes, and because crop plants have highly variable ploidy and repeat content, the performance of GBS analysis software can vary by target organism. Here we focus our analysis on soybean, a polyploid crop with a highly duplicated genome, relatively little public GBS data and few dedicated tools. We compared the performance of five GBS pipelines using low-coverage Illumina sequence data from three soybean populations. To address issues identified with existing methods, we developed GB-eaSy, a GBS bioinformatics workflow that incorporates widely used genomics tools, parallelization and automation to increase the accuracy and accessibility of GBS data analysis. Compared to other GBS pipelines, GB-eaSy rapidly and accurately identified the greatest number of SNPs, with SNP calls closely concordant with whole-genome sequencing of selected lines. Across all five GBS analysis platforms, SNP calls showed unexpectedly low convergence but generally high accuracy, indicating that the workflows arrived at largely complementary sets of valid SNP calls on the low-coverage data analyzed. We show that GB-eaSy is approximately as good as, or better than, other leading software solutions in the accuracy, yield and missing data fraction of variant calling, as tested on low-coverage genomic data from soybean. It also performs well relative to other solutions in terms of the run time and disk space required. In addition, GB-eaSy is built from existing open-source, modular software packages that are regularly updated and commonly used, making it straightforward to install and maintain

  1. A method for data handling numerical results in parallel OpenFOAM simulations

    International Nuclear Information System (INIS)

    Anton, Alin (Faculty of Automatic Control and Computing, Politehnica University of Timişoara, 2nd Vasile Pârvan Ave., 300223, TM Timişoara, Romania, alin.anton@cs.upt.ro); Muntean, Sebastian (Center for Advanced Research in Engineering Science, Romanian Academy – Timişoara Branch, 24th Mihai Viteazu Ave., 300221, TM Timişoara, Romania)

    2015-01-01

    Parallel computational fluid dynamics simulations produce vast amounts of numerical result data. This paper introduces a method for reducing the size of the data by replaying the interprocessor traffic. The results are recovered only in certain regions of interest configured by the user. A known test case is used for several mesh partitioning scenarios using the OpenFOAM® toolkit [1]. The space savings obtained with classic algorithms remain constant for more than 60 Gb of floating point data. Our method is most efficient on large simulation meshes and is much better suited for compressing large-scale simulation results than the regular algorithms

  2. A method for data handling numerical results in parallel OpenFOAM simulations

    Energy Technology Data Exchange (ETDEWEB)

    Anton, Alin [Faculty of Automatic Control and Computing, Politehnica University of Timişoara, 2nd Vasile Pârvan Ave., 300223, TM Timişoara, Romania, alin.anton@cs.upt.ro (Romania); Muntean, Sebastian [Center for Advanced Research in Engineering Science, Romanian Academy – Timişoara Branch, 24th Mihai Viteazu Ave., 300221, TM Timişoara (Romania)

    2015-12-31

    Parallel computational fluid dynamics simulations produce vast amounts of numerical result data. This paper introduces a method for reducing the size of the data by replaying the interprocessor traffic. The results are recovered only in certain regions of interest configured by the user. A known test case is used for several mesh partitioning scenarios using the OpenFOAM® toolkit [1]. The space savings obtained with classic algorithms remain constant for more than 60 Gb of floating point data. Our method is most efficient on large simulation meshes and is much better suited for compressing large-scale simulation results than the regular algorithms.

  3. PALEOEARTHQUAKES IN THE PRIBAIKALIE: METHODS AND RESULTS OF DATING

    Directory of Open Access Journals (Sweden)

    Oleg P. Smekalin

    2010-01-01

    Full Text Available In the Pribaikalie and adjacent territories, seismogeological studies have been underway for almost half a century and have resulted in the discovery of more than 70 dislocations of seismic or presumably seismic origin. With the commencement of paleoseismic studies, dating of paleo-earthquakes became a focus, as an indicator useful for long-term prediction of strong earthquakes. V.P. Solonenko [Solonenko, 1977] distinguished five methods for dating paleoseismogenic deformations, i.e. geological, engineering-geological, historico-archeological, dendrochronological and radiocarbon methods. However, the ages of the majority of seismic deformations, which were studied at the initial stage of the development of seismogeology in Siberia, were defined by methods of relative or correlation age determination. Since the 1980s, studies of seismogenic deformation in the Pribaikalie have been widely conducted with trenching. Mass sampling, followed by radiocarbon analyses and determination of the absolute ages of paleo-earthquakes, provided new data on the seismic regime of the territory and the rates of recent displacements along active faults, and enhanced the validity of methods of relative dating, in particular morphometry. The capacities of the morphometry method have significantly increased with the introduction of laser techniques in surveys and digital processing of 3D relief models. Comprehensive seismogeological studies conducted in the Pribaikalie revealed 43 paleo-events within 16 seismogenic structures. Absolute ages of 18 paleo-events were defined by the radiocarbon age determination method. Judging by their ages, a number of dislocations were related to historical earthquakes which occurred in the 18th and 19th centuries, yet no reliable data on the epicenters of such events are available. The absolute and relative dating methods allowed us to identify sections in some paleoseismogenic structures by differences in ages of activation and thus provided new data for

  4. Four-spacecraft determination of magnetopause orientation, motion and thickness: comparison with results from single-spacecraft methods

    Directory of Open Access Journals (Sweden)

    S. E. Haaland

    2004-04-01

    Full Text Available In this paper, we use Cluster data from one magnetopause event on 5 July 2001 to compare predictions from various methods for determination of the velocity, orientation, and thickness of the magnetopause current layer. We employ established as well as new multi-spacecraft techniques, in which time differences between the crossings by the four spacecraft, along with the duration of each crossing, are used to calculate magnetopause speed, normal vector, and width. The timing is based on data from either the Cluster Magnetic Field Experiment (FGM) or the Electric Field Experiment (EFW) instruments. The multi-spacecraft results are compared with those derived from various single-spacecraft techniques, including minimum-variance analysis of the magnetic field and deHoffmann-Teller, as well as Minimum-Faraday-Residue, analysis of plasma velocities and magnetic fields measured during the crossings. In order to improve the overall consistency between multi- and single-spacecraft results, we have also explored the use of hybrid techniques, in which timing information from the four spacecraft is combined with certain limited results from single-spacecraft methods, the remaining results being left for consistency checks. The results show good agreement between magnetopause orientations derived from appropriately chosen single-spacecraft techniques and those obtained from multi-spacecraft timing. The agreement between magnetopause speeds derived from single- and multi-spacecraft methods is quantitatively somewhat less good, but it is evident that the speed can change substantially from one crossing to the next within an event. The magnetopause thickness also varied substantially from one crossing to the next within an event, ranging from 5 to 10 ion gyroradii. The density profile was sharper than the magnetic profile: most of the density change occurred in the earthward half of the magnetopause.

    Key words. Magnetospheric physics (magnetopause, cusp and
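
    The constant-velocity timing idea behind the multi-spacecraft technique can be sketched as follows: for a planar boundary, n·(r_i − r_1) = V·(t_i − t_1), so solving for m = n/V from position and crossing-time differences yields the normal (direction of m) and the speed (1/|m|). The positions and times below are hypothetical, not the 5 July 2001 event data.

```python
# Sketch of the constant-velocity timing method: solve dr @ m = dt for m = n / V,
# then normal = m / |m| and speed = 1 / |m|. Data are hypothetical.
import numpy as np

r = np.array([[0.0, 0.0, 0.0],        # spacecraft positions, km
              [100.0, 20.0, 10.0],
              [30.0, 110.0, 15.0],
              [25.0, 35.0, 120.0]])
t = np.array([0.0, 2.1, 1.4, 0.9])    # boundary crossing times, s

dr = r[1:] - r[0]
dt = t[1:] - t[0]
m, *_ = np.linalg.lstsq(dr, dt, rcond=None)

speed = 1.0 / np.linalg.norm(m)
normal = m * speed
print("normal:", normal.round(3), " speed [km/s]:", round(speed, 1))
```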

  5. Evaluation of different methods for determining the angle of attack on wind turbine blades with CFD results under axial inflow conditions

    DEFF Research Database (Denmark)

    Rahimi, Vajiheh; Schepers, J.G.; Shen, Wen Zhong

    2018-01-01

    as shortcomings, are presented. The investigations are performed for two 10 MW reference wind turbines under axial inflow conditions, namely the turbines designed in the EU AVATAR and INNWIND.EU projects. The results show that the evaluated methods are in good agreement with each other at the mid-span, though......This work presents an investigation on different methods for the calculation of the angle of attack and the underlying induced velocity on wind turbine blades using data obtained from three-dimensional Computational Fluid Dynamics (CFD). Several methods are examined and their advantages, as well...

  6. Method of eliminating undesirable gaseous products resulting in underground uranium ore leaching

    International Nuclear Information System (INIS)

    Krizek, J.; Dedic, K.; Johann, J.; Haas, F.; Sokola, K.

    1980-01-01

    The method described is characterized by the fact that the gases formed or dissolved are oxidized using a combined oxidation-reduction system consisting of air-borne oxygen, oxygen carriers and a strong irreversible oxidant. The oxygen carrier system consists of a mixture of Fe2+ and Fe3+ cations or of Cu+ and Cu2+ cations, introduced into solution in the form of iron salts at a concentration of 0.0001 to 0.003 M, or copper salts at a maximum of 0.0003 M. The irreversible oxidant shows a standard redox potential of at least +1.0 V. In addition to eliminating the undesirable products, the method also increases the yield of the leaching process. (J.B.)

  7. Differences in quantitative methods for measuring subjective cognitive decline - results from a prospective memory clinic study.

    Science.gov (United States)

    Vogel, Asmus; Salem, Lise Cronberg; Andersen, Birgitte Bo; Waldemar, Gunhild

    2016-09-01

    Cognitive complaints occur frequently in elderly people and may be a risk factor for dementia and cognitive decline. Results from studies on subjective cognitive decline are difficult to compare due to variability in assessment methods, and little is known about how different methods influence reports of cognitive decline. The Subjective Memory Complaints Scale (SMC) and The Memory Complaint Questionnaire (MAC-Q) were applied in 121 mixed memory clinic patients with mild cognitive symptoms (mean MMSE = 26.8, SD 2.7). The scales were applied independently and raters were blinded to results from the other scale. Scales were not used for diagnostic classification. Cognitive performances and depressive symptoms were also rated. We studied the association between the two measures and investigated the scales' relation to depressive symptoms, age, and cognitive status. SMC and MAC-Q were significantly associated (r = 0.44, N = 121, p = 0.015) and both scales had a wide range of scores. In this mixed cohort of patients, younger age was associated with higher SMC scores. There were no significant correlations between cognitive test performances and scales measuring subjective decline. Depression scores were significantly correlated with both scales measuring subjective decline. Linear regression models showed that age did not have a significant contribution to the variance in subjective memory beyond that of depressive symptoms. Measures of subjective cognitive decline are not interchangeable when used in memory clinics, and the application of different scales in previous studies is an important factor in why studies show variability in the association between subjective cognitive decline and background data and/or clinical results. Careful consideration should be given to which questions are relevant and have validity when operationalizing subjective cognitive decline.

  8. Paleomagnetic intensity of Aso pyroclastic flows: Additional results with LTD-DHT Shaw method, Thellier method with pTRM-tail check

    Science.gov (United States)

    Maruuchi, T.; Shibuya, H.

    2009-12-01

    To calibrate the absolute value of the 'relative paleointensity variation curve' drawn from sediment cores, Takai et al. (2002) proposed to use pyroclastic flows co-bearing with widespread tephras. The pyroclastic flows provide volcanic rocks carrying TRM, which allow absolute paleointensity determination, and the tephras provide the correlation with sediment stratigraphy. While 4 out of 6 pyroclastic flows are consistent with the Sint-800 paleointensity variation curve, two flows, Aso-2 and Aso-4, are respectively weaker and stronger than Sint-800 beyond the error. We revisited the paleointensity study of the Aso pyroclastic flows, adding the LTD-DHT Shaw method, the pTRM-tail check in the Thellier experiment, and the LTD-DHT Shaw method applied to volcanic glasses. We prepared 11 specimens from 3 sites of the Aso-1 welded tuff for LTD-DHT Shaw method experiments, and obtained 6 paleointensities satisfying a set of strict criteria. They yield an average paleointensity of 21.3±5.8 uT, which is smaller than the 31.0±3.4 uT provided by Takai et al. (2002). For the Aso-2 welded tuff, 11 samples from 3 sites were submitted to Thellier experiments, and 6 passed a set of rather stringent criteria including the pTRM-tail check, which was not performed by Takai et al. (2002). They give an average paleointensity of 20.2±1.5 uT, which is virtually identical to the 20.2±1.0 uT (27 samples) given by Takai et al. (2002). Although the success rate was not good in the LTD-DHT Shaw method, 2 out of 12 specimens passed the criteria and gave 25.8±3.4 uT, which is consistent with Takai et al. (2002). In addition, we obtained a reliable paleointensity of 23.6 uT from a volcanic glass with the LTD-DHT Shaw method; it is also consistent with Takai et al. (2002). For the Aso-3 welded tuff, we have so far performed only the LTD-DHT Shaw method on one specimen from one site. It gives a paleointensity of 43.0 uT, which is higher than the 31.8±3.6 uT given by Takai et al. (2002). Eight sites were set for the Aso-4 welded tuff

  9. Radioimmunological determination of plasma progesterone. Methods - Results - Indications

    International Nuclear Information System (INIS)

    Gonon-Estrangin, Chantal.

    1978-10-01

    The aim of this work is to describe the radioimmunological determination of plasma progesterone carried out at the Hormonology Laboratory of the Grenoble University Hospital Centre (Professor E. Chambaz), to compare our results with those of the literature, and to present the main clinical indications of this analysis. The measurement method has proved reproducible, specific (the steroid purification stage is unnecessary) and sensitive (detection: 10 picograms of progesterone per tube). In seven normally menstruating women our results agree with published values (in nanograms per millilitre, ng/ml): 0.07 ng/ml to 0.9 ng/ml in the follicular phase, from the start of menstruation until ovulation, then a rapid increase at ovulation with a maximum in the middle of the luteal phase (our values for this maximum range from 7.9 ng/ml to 21.7 ng/ml), and a gradual drop in progesterone secretion until the next menstrual period. In gynecology the radioimmunoassay of plasma progesterone is valuable for diagnostic and therapeutic purposes: to diagnose the absence of a corpus luteum, and to judge the effectiveness of an ovulation induction treatment [fr]

  10. Quantifying the measurement uncertainty of results from environmental analytical methods.

    Science.gov (United States)

    Moser, J; Wegscheider, W; Sperka-Gottlieb, C

    2001-07-01

    The Eurachem-CITAC Guide Quantifying Uncertainty in Analytical Measurement was put into practice in a public laboratory devoted to environmental analytical measurements. In doing so due regard was given to the provisions of ISO 17025 and an attempt was made to base the entire estimation of measurement uncertainty on available data from the literature or from previously performed validation studies. Most environmental analytical procedures laid down in national or international standards are the result of cooperative efforts and put into effect as part of a compromise between all parties involved, public and private, that also encompasses environmental standards and statutory limits. Central to many procedures is the focus on the measurement of environmental effects rather than on individual chemical species. In this situation it is particularly important to understand the measurement process well enough to produce a realistic uncertainty statement. Environmental analytical methods will be examined as far as necessary, but reference will also be made to analytical methods in general and to physical measurement methods where appropriate. This paper describes ways and means of quantifying uncertainty for frequently practised methods of environmental analysis. It will be shown that operationally defined measurands are no obstacle to the estimation process as described in the Eurachem/CITAC Guide if it is accepted that the dominating component of uncertainty comes from the actual practice of the method as a reproducibility standard deviation.
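
    The Eurachem/CITAC approach summarized above combines individual uncertainty contributions into a combined and an expanded uncertainty. A minimal sketch of that combination step is given below; the component values are hypothetical, and the root-sum-of-squares rule assumes the contributions are independent.

```python
import math

# Hypothetical relative standard uncertainties for one environmental method.
components = {
    "calibration standard": 0.012,
    "sample preparation": 0.025,
    "method reproducibility (s_R)": 0.080,   # dominating term, as the record argues
    "volume/weighing": 0.005,
}

# Combined relative standard uncertainty: root sum of squares of independent terms.
u_rel = math.sqrt(sum(u**2 for u in components.values()))

# Expanded uncertainty with coverage factor k = 2 (approx. 95 % coverage).
k = 2
result = 4.6          # measured concentration, hypothetical units (e.g. mg/kg)
U = k * u_rel * result

print(f"combined relative u = {u_rel:.3f}")
print(f"result = {result} +/- {U:.2f} (k = {k})")
```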

  11. Investigation of error estimation method of observational data and comparison method between numerical and observational results toward V and V of seismic simulation

    International Nuclear Information System (INIS)

    Suzuki, Yoshio; Kawakami, Yoshiaki; Nakajima, Norihiro

    2017-01-01

    The method to estimate errors included in observational data and the method to compare numerical results with observational results are investigated, toward the verification and validation (V and V) of a seismic simulation. For the method to estimate errors, 144 publications from the past 5 years (2010 to 2014) in the structural engineering and earthquake engineering fields, where descriptions of acceleration data are frequent, were surveyed. As a result, it was found that processes to remove components regarded as errors from observational data are used in about 30% of those publications. Errors are caused by the resolution, the linearity, the temperature coefficient for sensitivity, the temperature coefficient for zero shift, the transverse sensitivity, the seismometer properties, aliasing, and so on. Those processes can be exploited to estimate errors individually. For the method to compare numerical results with observational results, public materials of the ASME V and V Symposium 2012-2015, their references, and the 144 publications mentioned above were surveyed. As a result, it was found that six methods have mainly been proposed in existing research. Evaluating those methods using nine items, the advantages and disadvantages of each method are summarized. No method is yet well established, so it is necessary to employ the existing methods while compensating for their disadvantages and/or to search for a novel method. (author)

  12. Migraine patients consistently show abnormal vestibular bedside tests

    Directory of Open Access Journals (Sweden)

    Eliana Teixeira Maranhão

    2015-01-01

    Full Text Available Migraine and vertigo are common disorders, with lifetime prevalences of 16% and 7% respectively, and co-morbidity around 3.2%. Vestibular syndromes and dizziness occur more frequently in migraine patients. We investigated bedside clinical signs indicative of vestibular dysfunction in migraineurs. Objective: To test the hypothesis that vestibulo-ocular reflex, vestibulo-spinal reflex and fall risk (FR) responses as measured by 14 bedside tests are abnormal in migraineurs without vertigo, as compared with controls. Method: Cross-sectional study including sixty individuals: thirty migraineurs (25 women, 19-60 years old) and 30 age- and gender-matched healthy controls. Results: Migraineurs showed a tendency to perform worse in almost all tests, although only the Romberg tandem test was statistically different from controls. A combination of four abnormal tests better discriminated the two groups (93.3% specificity). Conclusion: Migraine patients consistently showed abnormal vestibular bedside tests when compared with controls.

  13. A Pragmatic Smoothing Method for Improving the Quality of the Results in Atomic Spectroscopy

    Science.gov (United States)

    Bennun, Leonardo

    2017-07-01

    A new smoothing method for improving the identification and quantification of spectral functions, based on prior knowledge of the signals expected to be quantified, is presented. These signals are used as weighted coefficients in the smoothing algorithm. This smoothing method was conceived to be applied in atomic and nuclear spectroscopies, preferably to those techniques in which net counts are proportional to acquisition time, such as particle induced X-ray emission (PIXE) and other X-ray fluorescence spectroscopic methods, etc. This algorithm, when properly applied, does not distort the form or the intensity of the signal, so it is well suited for all kinds of spectroscopic techniques. This method is extremely effective at reducing high-frequency noise in the signal, much more efficiently than a single rectangular smooth of the same width. As with all smoothing techniques, the proposed method improves the precision of the results, but in this case we also found a systematic improvement in the accuracy of the results. We still have to evaluate the improvement in the quality of the results when this method is applied to real experimental data. We expect better characterization of the net area quantification of the peaks, and smaller detection and quantification limits. We have applied this method to signals that obey Poisson statistics, but with the same ideas and criteria it could be applied to time series. In the general case, when this algorithm is applied to experimental results, the sought characteristic functions required for this weighted smoothing method should be obtained from a system with strong stability. If the sought signals are not perfectly clean, this method should be carefully applied
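
    The record does not spell out exactly how the expected signal enters the algorithm, so the following is only a plausible sketch under that assumption: a moving weighted average in which the weights are taken from a known (here Gaussian) characteristic peak shape rather than from a flat rectangular window.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic spectrum: a Gaussian peak on a flat background with Poisson noise.
x = np.arange(200)
true_signal = 50 + 400 * np.exp(-0.5 * ((x - 100) / 6.0) ** 2)
counts = rng.poisson(true_signal).astype(float)

# Weights derived from the expected (characteristic) peak shape, normalized to unit sum.
half_width = 7
w = np.exp(-0.5 * (np.arange(-half_width, half_width + 1) / 6.0) ** 2)
w /= w.sum()

# Weighted smoothing: convolution of the spectrum with the normalized weights.
smoothed = np.convolve(counts, w, mode="same")

# For comparison, a plain rectangular smooth of the same width.
rect = np.ones(2 * half_width + 1) / (2 * half_width + 1)
rect_smoothed = np.convolve(counts, rect, mode="same")

print("residual std, weighted:", np.std(smoothed - true_signal))
print("residual std, rectangular:", np.std(rect_smoothed - true_signal))
```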

  14. Relationships Between Results Of An Internal And External Match Load Determining Method In Male, Singles Badminton Players.

    Science.gov (United States)

    Abdullahi, Yahaya; Coetzee, Ben; Van den Berg, Linda

    2017-07-03

    The study purpose was to determine relationships between the results of internal and external match load determining methods. Twenty-one players, who participated in selected badminton championships during the 2014/2015 season, served as subjects. The heart rate (HR) values and GPS data of each player were obtained via a fixed Polar HR transmitter belt and a MinimaxX GPS device. Moderate significant Spearman's rank correlations were found between HR and absolute duration (r = 0.43 at a low intensity (LI) and 0.44 at a high intensity (HI)), distance covered (r = 0.42 at a HI) and player load (PL) (r = 0.44 at a HI). Results also revealed an opposite trend for external and internal measures of load, as the average relative HR value was found to be the highest for the HI zone (54.1%), whereas the average values of the relative measures of external load (1.29-9.89%) were the lowest for the HI zone. In conclusion, our findings show that results of an internal and external badminton match load determining method are more related to each other in the HI zone than in other zones, and that the strength of the relationships depends on the duration of activities performed especially in the LI and HI zones. Overall, trivial to moderate relationships between results of an internal and external match load determining method in male, singles badminton players reaffirm the conclusions of others that these constructs measure distinctly different demands and should therefore be measured concurrently to fully understand the true requirements of badminton match play.
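
    As an illustration of the kind of internal-versus-external load association reported above, the sketch below computes a Spearman rank correlation between hypothetical per-match heart-rate and player-load values; it is not the study's data or analysis code.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(7)

# Hypothetical per-match values for 21 players:
# time at high intensity (internal load, %) and player load (external load, arbitrary units).
hr_high_intensity = rng.uniform(40, 70, size=21)
player_load = 0.8 * hr_high_intensity + rng.normal(0, 8, size=21)

rho, p_value = spearmanr(hr_high_intensity, player_load)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```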

  15. Trojan Horse Method: Recent Results

    International Nuclear Information System (INIS)

    Pizzone, R. G.; Spitaleri, C.

    2008-01-01

    Owing to the presence of the Coulomb barrier at astrophysically relevant kinetic energies, it is very difficult, or sometimes impossible, to measure astrophysical reaction rates in the laboratory. This is why different indirect techniques are being used along with direct measurements. The THM is a unique indirect technique allowing one to measure astrophysical rearrangement reactions down to astrophysically relevant energies. The basic principle and a review of the main applications of the Trojan Horse Method are presented. The applications aiming at the extraction of the bare S_b(E) astrophysical factor and electron screening potentials U_e for several two-body processes are discussed

  16. Stem cells show promising results for lymphoedema treatment

    DEFF Research Database (Denmark)

    Toyserkani, Navid Mohamadpour; Quaade, Marlene Louise; Sheikh, Søren Paludan

    2015-01-01

    Abstract Lymphoedema is a debilitating condition, manifesting in excess lymphatic fluid and swelling of subcutaneous tissues. Lymphoedema is as yet an incurable condition and current treatment modalities are not satisfactory. The capacity of mesenchymal stem cells to promote angiogenesis, secrete growth factors, regulate the inflammatory process, and differentiate into multiple cell types makes them a potentially ideal therapy for lymphoedema. Adipose tissue is the richest and most accessible source of mesenchymal stem cells, and they can be harvested, isolated, and used for therapy in a single-stage procedure as an autologous treatment. The aim of this paper was to review all studies using mesenchymal stem cells for lymphoedema treatment with a special focus on the potential use of adipose-derived stem cells. A systematic search was performed and five preclinical and two clinical…

  17. Doppler method leak detection for LMFBR steam generators. Pt. 1. Experimental results of bubble detection using small models

    International Nuclear Information System (INIS)

    Kumagai, Hiromichi

    1999-01-01

    To prevent the expansion of tube damage and to maintain structural integrity in the steam generators (SGs) of fast breeder reactors (FBRs), it is necessary to detect precisely and immediately the leakage of water from heat transfer tubes. Therefore, an active acoustic method was developed. Previous studies have revealed that in practical steam generators the active acoustic method can detect bubbles of 10 l/s within 10 seconds. To prevent the expansion of damage to neighboring tubes, it is necessary to detect smaller leakages of water from the heat transfer tubes. The Doppler method is designed to detect small leakages and to find the source of the leak before damage spreads to neighboring tubes. To evaluate the relationship between the detection sensitivity of the Doppler method and the bubble volume and bubble size, the structural shapes and bubble flow conditions were investigated experimentally, using a small structural model. The results show that the Doppler method can detect the bubbles under bubble flow conditions, and it is sensitive enough to detect small leakages within a short time. The Doppler method thus has strong potential for the detection of water leakage in SGs. (author)

  18. Influence of Specimen Preparation and Test Methods on the Flexural Strength Results of Monolithic Zirconia Materials.

    Science.gov (United States)

    Schatz, Christine; Strickstrock, Monika; Roos, Malgorzata; Edelhoff, Daniel; Eichberger, Marlis; Zylla, Isabella-Maria; Stawarczyk, Bogna

    2016-03-09

    The aim of this work was to evaluate the influence of specimen preparation and test method on the flexural strength results of monolithic zirconia. Different monolithic zirconia materials (Ceramill Zolid (Amann Girrbach, Koblach, Austria), Zenostar ZrTranslucent (Wieland Dental, Pforzheim, Germany), and DD Bio zx² (Dental Direkt, Spenge, Germany)) were tested with three different methods: 3-point, 4-point, and biaxial flexural strength. Additionally, different specimen preparation methods were applied: either dry polishing before sintering or wet polishing after sintering. Each subgroup included 40 specimens. The surface roughness was assessed using scanning electron microscopy (SEM) and a profilometer, whereas monoclinic phase transformation was investigated with X-ray diffraction. The data were analyzed using a three-way Analysis of Variance (ANOVA) with respect to the three factors: zirconia, specimen preparation, and test method. One-way ANOVA was conducted for the test method and zirconia factors within the combination of the two other factors. A 2-parameter Weibull distribution assumption was applied to analyze the reliability under different testing conditions. In general, values measured using the 4-point test method presented the lowest flexural strength values. The flexural strength findings can be grouped in the following order: 4-point < 3-point < biaxial. Specimens polished after sintering showed higher flexural strength values than those prepared before sintering. The Weibull moduli ranged from 5.1 to 16.5. Specimens polished before sintering showed higher surface roughness values than specimens polished after sintering. In contrast, no strong impact of the polishing procedures on the monoclinic surface layer was observed. No impact of zirconia material on flexural strength was found. The test method and the preparation method significantly influenced the flexural strength values.
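
    The Weibull moduli quoted above are typically obtained by fitting a 2-parameter Weibull distribution to the measured strengths. A minimal sketch of that fit, using hypothetical strength data and the common median-rank linearization approach, is shown below.

```python
import numpy as np

# Hypothetical flexural strengths (MPa) for one zirconia / preparation / test-method group.
strengths = np.array([812, 845, 878, 910, 922, 951, 973, 990, 1015, 1042,
                      1060, 1081, 1102, 1125, 1150, 1178, 1203, 1231, 1266, 1310], float)

sigma = np.sort(strengths)
n = len(sigma)

# Median-rank estimate of failure probability for each ranked specimen.
p = (np.arange(1, n + 1) - 0.5) / n

# Linearized 2-parameter Weibull: ln(-ln(1-P)) = m*ln(sigma) - m*ln(sigma_0).
y = np.log(-np.log(1.0 - p))
x = np.log(sigma)
m, c = np.polyfit(x, y, 1)        # slope m is the Weibull modulus
sigma_0 = np.exp(-c / m)          # characteristic strength (63.2 % failure probability)

print(f"Weibull modulus m = {m:.1f}, characteristic strength = {sigma_0:.0f} MPa")
```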

  19. Experimental results showing the internal three-component velocity field and outlet temperature contours for a model gas turbine combustor

    CSIR Research Space (South Africa)

    Meyers, BC

    2011-09-01

    Full Text Available Published by the American Institute of Aeronautics and Astronautics Inc., all rights reserved, ISABE-2011-1129: Experimental results showing the internal three-component velocity field and outlet temperature contours for a model gas turbine combustor (BC Meyers, GC…). From the introduction: there are often inconsistencies when comparing experimental and Computational Fluid Dynamics (CFD) simulations for gas turbine combustors [1…

  20. Correction the Bias of Odds Ratio resulting from the Misclassification of Exposures in the Study of Environmental Risk Factors of Lung Cancer using Bayesian Methods

    Directory of Open Access Journals (Sweden)

    Alireza Abadi

    2015-07-01

    Full Text Available Background & Objective: Inability to measure exact exposure in epidemiological studies is a common problem in many studies, especially cross-sectional studies. Depending on the extent of misclassification, results may be affected. Existing methods for solving this problem require a lot of time and money and are not practical for some exposures. Recently, new methods have been proposed for 1:1 matched case–control studies that have solved these problems to some extent. In the present study we aimed to extend the existing Bayesian method to adjust for misclassification in matched case–control studies with 1:2 matching. Methods: Here, the standard Dirichlet prior distribution for a multinomial model was extended to allow the data on the exposure–disease (OR) parameter to be imported into the model excluding other parameters. Information that exists in the literature about the association between exposure and disease was used as prior information about the OR. In order to correct the misclassification, a sensitivity analysis was performed and the results were obtained under three Bayesian methods. Results: The results of the naïve Bayesian model were similar to those of the classic model. The second Bayesian model, employing prior information about the OR, was heavily affected by this information. The third proposed model provides maximum bias adjustment for the risk of heavy metals, smoking and drug abuse. This model showed that heavy metals are not an important risk factor, although the raw model (classic logistic regression) detected this exposure as a factor influencing the incidence of lung cancer. Sensitivity analysis showed that the third model is robust with regard to different levels of sensitivity and specificity. Conclusion: The present study showed that although for most exposures the results of the second and third models were similar, the proposed model is able to correct the misclassification to some extent.
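
    The record describes a Bayesian adjustment; as a simpler point of reference, the sketch below applies the classical matrix-method correction for non-differential exposure misclassification to hypothetical 2x2 counts, using assumed sensitivity and specificity. It is not the authors' model, only an illustration of why the crude odds ratio is biased.

```python
def corrected_counts(exposed_obs, unexposed_obs, se, sp):
    """Back-correct observed exposed/unexposed counts for misclassification
    with known sensitivity (se) and specificity (sp) of exposure assessment."""
    n = exposed_obs + unexposed_obs
    # Observed exposure proportion -> corrected proportion (matrix method).
    p_obs = exposed_obs / n
    p_true = (p_obs + sp - 1.0) / (se + sp - 1.0)
    p_true = min(max(p_true, 0.0), 1.0)   # keep within [0, 1]
    return p_true * n, (1.0 - p_true) * n

# Hypothetical observed counts: cases and controls, exposed vs unexposed.
case_exp, case_unexp = 60, 140
ctrl_exp, ctrl_unexp = 45, 255

crude_or = (case_exp * ctrl_unexp) / (case_unexp * ctrl_exp)

se, sp = 0.80, 0.95                       # assumed classification performance
a, b = corrected_counts(case_exp, case_unexp, se, sp)
c, d = corrected_counts(ctrl_exp, ctrl_unexp, se, sp)
corrected_or = (a * d) / (b * c)

print(f"crude OR = {crude_or:.2f}, misclassification-corrected OR = {corrected_or:.2f}")
```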

  1. Female Emotional Eaters Show Abnormalities in Consummatory and Anticipatory Food Reward

    Science.gov (United States)

    Bohon, Cara; Stice, Eric; Spoor, Sonja

    2009-01-01

    Objective To test the hypothesis that emotional eaters show greater neural activation in response to food intake and anticipated food intake than nonemotional eaters and whether these differences are amplified during a negative versus neutral mood state. Method Female emotional eaters and nonemotional eaters (N = 21) underwent functional magnetic resonance imaging (fMRI) during receipt and anticipated receipt of chocolate milkshake and a tasteless control solution while in a negative and neutral mood. Results Emotional eaters showed greater activation in the parahippocampal gyrus and anterior cingulate (ACC) in response to anticipated receipt of milkshake and greater activation in the pallidum, thalamus, and ACC in response to receipt of milkshake during a negative relative to a neutral mood. In contrast, nonemotional eaters showed decreased activation in reward regions during a negative versus a neutral mood. Discussion Results suggest that emotional eating is related to increased anticipatory and consummatory food reward, but only during negative mood. PMID:19040270

  2. Multivariate methods for particle identification

    CERN Document Server

    Visan, Cosmin

    2013-01-01

    The purpose of this project was to evaluate several multivariate methods in order to determine which one, if any, offers better results in Particle Identification (PID) than a simple n$\sigma$ cut on the response of the ALICE PID detectors. The particles considered in the analysis were pions, kaons and protons, and the detectors used were the TPC and TOF. When used with the same input n$\sigma$ variables, the results show similar performance between the Rectangular Cuts Optimization method and the simple n$\sigma$ cuts. The MLP and BDT methods show poor results for certain momentum ranges. The KNN method is the best performing, showing results similar to the Cuts method for pions and protons, and better results for kaons. The extension of the methods to include additional input variables leads to poor results, related to instabilities still to be investigated.
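
    For illustration only (not the ALICE/TMVA analysis code), the sketch below compares a simple n-sigma selection with a kNN classifier on a toy three-species dataset whose two features play the role of TPC and TOF n-sigma responses.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 3000

# Toy PID problem: two detector responses expressed as n-sigma deviations
# from the pion hypothesis (roughly mimicking TPC and TOF signals).
def make_species(shift_tpc, shift_tof, label):
    x = np.column_stack([rng.normal(shift_tpc, 1.0, n), rng.normal(shift_tof, 1.0, n)])
    return x, np.full(n, label)

pions, y_pi = make_species(0.0, 0.0, 0)
kaons, y_k = make_species(2.5, 2.0, 1)
protons, y_p = make_species(5.0, 4.5, 2)
X = np.vstack([pions, kaons, protons])
y = np.concatenate([y_pi, y_k, y_p])

# Baseline: a simple |n-sigma| < 2 cut around the pion hypothesis selects pions.
cut_selects_pion = (np.abs(X[:, 0]) < 2) & (np.abs(X[:, 1]) < 2)
purity = np.mean(y[cut_selects_pion] == 0)
efficiency = np.mean(cut_selects_pion[y == 0])
print(f"n-sigma cut: pion purity {purity:.2f}, efficiency {efficiency:.2f}")

# Multivariate alternative: a kNN classifier trained on the same two variables.
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
knn = KNeighborsClassifier(n_neighbors=25).fit(Xtr, ytr)
print(f"kNN: held-out accuracy {knn.score(Xte, yte):.2f}")
```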

  3. Alcohol content in the 'Hyper-Reality' MTV show 'Geordie Shore'

    OpenAIRE

    Lowe, Eden; Britton, John; Cranwell, Jo

    2018-01-01

    Aim: To quantify the occurrence of alcohol content, including alcohol branding, in the popular primetime television UK Reality TV show 'Geordie Shore' Series 11. Methods: A 1-min interval coding content analysis of alcohol content in the entire DVD Series 11 of 'Geordie Shore' (10 episodes). Occurrence of alcohol use, implied use, other alcohol reference/paraphernalia or branding was recorded. Results: All categories of alcohol were present in all episodes. 'Any alcohol' conte...

  4. LOGICAL CONDITIONS ANALYSIS METHOD FOR DIAGNOSTIC TEST RESULTS DECODING APPLIED TO COMPETENCE ELEMENTS PROFICIENCY

    Directory of Open Access Journals (Sweden)

    V. I. Freyman

    2015-11-01

    Full Text Available Subject of Research. Representation features of education results for competence-based educational programs are analyzed. The importance of decoding and proficiency estimation for elements and components of discipline parts of competences is shown. The purpose and objectives of the research are formulated. Methods. The paper deals with methods of mathematical logic, Boolean algebra, and parametric analysis of complex diagnostic test results that control the proficiency of discipline competence elements. Results. A method of logical conditions analysis is created. It makes it possible to formulate logical conditions for proficiency determination of each discipline competence element controlled by a complex diagnostic test. The normalized test result is divided into non-overlapping zones, and a logical condition about the proficiency of the controlled elements is formulated for each of them. Summary characteristics for the test result zones are defined. An example of forming logical conditions for a diagnostic test with preset features is provided. Practical Relevance. The proposed method of logical conditions analysis is applied in the decoding algorithm of proficiency test diagnosis for discipline competence elements. It makes it possible to automate the search procedure for elements with insufficient proficiency, and it is also usable for estimation of education results of a discipline or a component of a competence-based educational program.
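
    A toy illustration of the zone-based decoding idea is sketched below. The zone boundaries and the logical conditions attached to them are hypothetical; the paper derives its own conditions from the test structure.

```python
# Decode a normalized diagnostic test result into logical conditions about
# three competence elements E1, E2, E3 (all values hypothetical).

ZONES = [
    # (lower bound, upper bound, proficiency statement as a dict of booleans)
    (0.00, 0.40, {"E1": False, "E2": False, "E3": False}),
    (0.40, 0.60, {"E1": True,  "E2": False, "E3": False}),
    (0.60, 0.80, {"E1": True,  "E2": True,  "E3": False}),
    (0.80, 1.01, {"E1": True,  "E2": True,  "E3": True}),
]

def decode(normalized_result: float) -> dict:
    """Return the logical condition (element proficiency) for the zone containing the result."""
    for low, high, condition in ZONES:
        if low <= normalized_result < high:
            return condition
    raise ValueError("normalized result must lie in [0, 1]")

# Elements whose condition is False are flagged as insufficiently proficient.
result = decode(0.65)
print([element for element, proficient in result.items() if not proficient])  # ['E3']
```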

  5. Monitoring ambient ozone with a passive measurement technique method, field results and strategy

    NARCIS (Netherlands)

    Scheeren, BA; Adema, EH

    1996-01-01

    A low-cost, accurate and sensitive passive measurement method for ozone has been developed and tested. The method is based on the reaction of ozone with indigo carmine, which results in colourless reaction products that are detected spectrophotometrically after exposure. Coated glass filters are

  6. A method of estimating conceptus doses resulting from multidetector CT examinations during all stages of gestation

    International Nuclear Information System (INIS)

    Damilakis, John; Tzedakis, Antonis; Perisinakis, Kostas; Papadakis, Antonios E.

    2010-01-01

    Purpose: Current methods for the estimation of conceptus dose from multidetector CT (MDCT) examinations performed on the mother provide dose data for typical protocols with a fixed scan length. However, modified low-dose imaging protocols are frequently used during pregnancy. The purpose of the current study was to develop a method for the estimation of conceptus dose from any MDCT examination of the trunk performed during all stages of gestation. Methods: The Monte Carlo N-Particle (MCNP) radiation transport code was employed in this study to model the Siemens Sensation 16 and Sensation 64 MDCT scanners. Four mathematical phantoms were used, simulating women at 0, 3, 6, and 9 months of gestation. The contribution to the conceptus dose from single simulated scans was obtained at various positions across the phantoms. To investigate the effect of maternal body size and conceptus depth on conceptus dose, phantoms of different sizes were produced by adding layers of adipose tissue around the trunk of the mathematical phantoms. To verify MCNP results, conceptus dose measurements were carried out by means of three physical anthropomorphic phantoms, simulating pregnancy at 0, 3, and 6 months of gestation and thermoluminescence dosimetry (TLD) crystals. Results: The results consist of Monte Carlo-generated normalized conceptus dose coefficients for single scans across the four mathematical phantoms. These coefficients were defined as the conceptus dose contribution from a single scan divided by the CTDI free-in-air measured with identical scanning parameters. Data have been produced to take into account the effect of maternal body size and conceptus position variations on conceptus dose. Conceptus doses measured with TLD crystals showed a difference of up to 19% compared to those estimated by mathematical simulations. Conclusions: Estimation of conceptus doses from MDCT examinations of the trunk performed on pregnant patients during all stages of gestation can be made
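
    Given the normalized coefficients described above (conceptus dose per unit CTDI free-in-air for single scans), an estimate for an arbitrary scan range reduces to summing the coefficients of the irradiated table positions and scaling by the measured CTDI. The sketch below illustrates this bookkeeping with entirely hypothetical coefficient values.

```python
# Hypothetical normalized conceptus dose coefficients (mGy per mGy of CTDI free-in-air)
# for single scans centred at successive table positions relative to the conceptus.
coefficients = {-2: 0.08, -1: 0.17, 0: 0.24, 1: 0.18, 2: 0.09}  # hypothetical values

def conceptus_dose(scanned_positions, ctdi_free_in_air_mGy):
    """Sum the per-scan contributions over the positions actually covered by the protocol."""
    total_coefficient = sum(coefficients.get(z, 0.0) for z in scanned_positions)
    return total_coefficient * ctdi_free_in_air_mGy

# Example: a shortened low-dose protocol covering positions -1..2 with CTDI = 10 mGy.
print(f"estimated conceptus dose = {conceptus_dose(range(-1, 3), 10.0):.1f} mGy")
```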

  7. [Results of treatment of milk teeth pulp by modified formocresol method].

    Science.gov (United States)

    Wochna-Sobańska, M

    1989-01-01

    The purpose of the study was to evaluate the results of treatment of molar pulp diseases by the formocresol method from the standpoint of the development of inflammatory complications in periapical tissues, disturbances of physiological root resorption, and disturbances of mineralization of the crowns of homologous permanent teeth. Milk molars diagnosed with grade II pulpopathy in children aged 3 to 9 years were qualified for treatment. The treatment was done using formocresol by a modified method of pulp amputation according to Buckley, after previous devitalization with parapaste. The status of 143 teeth was examined again 1 to 4 years after completion of treatment. The proportion of positive results after one year was 94%, after two years 90%, after three years 87% and after four years 80%. The cause of premature loss of most teeth was acceleration of root resorption by 18-24 months. No harmful action of formocresol on the buds of permanent teeth was noted.

  8. Self-directed learning can outperform direct instruction in the course of a modern German medical curriculum - results of a mixed methods trial.

    Science.gov (United States)

    Peine, Arne; Kabino, Klaus; Spreckelsen, Cord

    2016-06-03

    Modernised medical curricula in Germany (so called "reformed study programs") rely increasingly on alternative self-instructed learning forms such as e-learning and curriculum-guided self-study. However, there is a lack of evidence that these methods can outperform conventional teaching methods such as lectures and seminars. This study was conducted in order to compare extant traditional teaching methods with new instruction forms in terms of learning effect and student satisfaction. In a randomised trial, 244 students of medicine in their third academic year were assigned to one of four study branches representing self-instructed learning forms (e-learning and curriculum-based self-study) and instructed learning forms (lectures and seminars). All groups participated in their respective learning module with standardised materials and instructions. Learning effect was measured with pre-test and post-test multiple-choice questionnaires. Student satisfaction and learning style were examined via self-assessment. Of 244 initial participants, 223 completed the respective module and were included in the study. In the pre-test, the groups showed relatively homogeneous scores. All students showed notable improvements compared with the pre-test results. Participants in the non-self-instructed learning groups reached scores of 14.71 (seminar) and 14.37 (lecture), while the groups of self-instructed learners reached higher scores with 17.23 (e-learning) and 15.81 (self-study). All groups improved significantly; the e-learning group's self-assessment improved by 2.36. The study shows that students in modern study curricula learn better through modern self-instructed methods than through conventional methods. These methods should be used more, as they also show good levels of student acceptance and higher scores in personal self-assessment of knowledge.

  9. [Do different interpretative methods used for evaluation of checkerboard synergy test affect the results?].

    Science.gov (United States)

    Ozseven, Ayşe Gül; Sesli Çetin, Emel; Ozseven, Levent

    2012-07-01

    In recent years, owing to the presence of multi-drug resistant nosocomial bacteria, combination therapies are more frequently applied. Thus, there is a greater need to investigate the in vitro activity of drug combinations against multi-drug resistant bacteria. Checkerboard synergy testing is among the most widely used standard techniques to determine the activity of antibiotic combinations. It is based on microdilution susceptibility testing of antibiotic combinations. Although this test has a standardised procedure, there are many different methods for interpreting the results. In many previous studies carried out with multi-drug resistant bacteria, different rates of synergy have been reported with various antibiotic combinations using the checkerboard technique. These differences might be attributed to the different features of the strains. However, different synergy rates detected by the checkerboard method have also been reported in other studies using the same drug combinations and the same types of bacteria. It was thought that these differences in synergy rates might be due to the different methods of interpretation of synergy test results. In recent years, multi-drug resistant Acinetobacter baumannii has been the most commonly encountered nosocomial pathogen, especially in intensive-care units. For this reason, multi-drug resistant A. baumannii has been the subject of a considerable amount of research on antimicrobial combinations. In the present study, the in vitro activities of combinations frequently preferred in A. baumannii infections, such as imipenem plus ampicillin/sulbactam and meropenem plus ampicillin/sulbactam, were tested by the checkerboard synergy method against 34 multi-drug resistant A. baumannii isolates. Minimum inhibitory concentration (MIC) values for imipenem, meropenem and ampicillin/sulbactam were determined by the broth microdilution method. Subsequently, the activities of the two different combinations were tested in the dilution range of 4 x MIC to 0.03 x MIC in
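
    One of the most common ways to interpret a checkerboard result, and presumably among the interpretative methods compared in the study, is the fractional inhibitory concentration index (FICI); the sketch below computes it for hypothetical MIC values.

```python
def fici(mic_a_alone, mic_b_alone, mic_a_combo, mic_b_combo):
    """Fractional inhibitory concentration index for one well of a checkerboard panel."""
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

def interpret(index):
    # A widely used (though not the only) reading of the FICI.
    if index <= 0.5:
        return "synergy"
    if index <= 4.0:
        return "no interaction / indifference"
    return "antagonism"

# Hypothetical MICs (mg/L) for meropenem (A) and ampicillin/sulbactam (B) against one
# multi-drug resistant isolate, alone and in the most effective combination well.
index = fici(mic_a_alone=32, mic_b_alone=64, mic_a_combo=4, mic_b_combo=8)
print(f"FICI = {index:.2f} -> {interpret(index)}")
```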

  10. Repetitive transcranial magnetic stimulation as an adjuvant method in the treatment of depression: Preliminary results

    Directory of Open Access Journals (Sweden)

    Jovičić Milica

    2014-01-01

    Full Text Available Introduction. Repetitive transcranial magnetic stimulation (rTMS) is a method of brain stimulation which is increasingly used in both clinical practice and research. Studies to date have pointed to a potential antidepressive effect of rTMS, but definitive superiority over placebo has not yet been confirmed. Objective. The aim of the study was to examine the effect of rTMS as an adjuvant treatment with antidepressants during 18 weeks of evaluation starting from the initial application of the protocol. Methods. Four patients with a diagnosis of moderate/severe major depression were included in the study. The protocol involved 2000 stimuli per day (rTMS frequency of 10 Hz, intensity of 120% of motor threshold) administered over the left dorsolateral prefrontal cortex (DLPFC) for 15 days. Subjective and objective depressive symptoms were measured before the initiation of rTMS and repeatedly evaluated at weeks 3, 6, 12 and 18 from the beginning of the stimulation. Results. After completion of the rTMS protocol, two patients demonstrated a reduction of depressive symptoms that was sustained throughout the 15-week follow-up period. One patient showed a tendency toward remission during the first 12 weeks of the study, but relapsed in week 18. One patient showed no significant symptom reduction at any point of follow-up. Conclusion. Preliminary findings suggest that rTMS is well tolerated and can be efficient in accelerating the effect of antidepressants, particularly in individuals with shorter duration of depressive episodes and moderate symptom severity. [Project of the Ministry of Science of the Republic of Serbia, No. III41029 and No. ON175090

  11. Comb-push ultrasound shear elastography of breast masses: initial results show promise.

    Science.gov (United States)

    Denis, Max; Mehrmohammadi, Mohammad; Song, Pengfei; Meixner, Duane D; Fazzio, Robert T; Pruthi, Sandhya; Whaley, Dana H; Chen, Shigao; Fatemi, Mostafa; Alizad, Azra

    2015-01-01

    To evaluate the performance of Comb-push Ultrasound Shear Elastography (CUSE) for classification of breast masses. CUSE is an ultrasound-based quantitative two-dimensional shear wave elasticity imaging technique, which utilizes multiple laterally distributed acoustic radiation force (ARF) beams to simultaneously excite the tissue and induce shear waves. Female patients who were categorized as having suspicious breast masses underwent CUSE evaluations prior to biopsy. An elasticity estimate within the breast mass was obtained from the CUSE shear wave speed map. Elasticity estimates of various types of benign and malignant masses were compared with biopsy results. Fifty-four female patients with suspicious breast masses from our ongoing study are presented. Our cohort included 31 malignant and 23 benign breast masses. Our results indicate that the mean shear wave speed was significantly higher in malignant masses (6 ± 1.58 m/s) in comparison to benign masses (3.65 ± 1.36 m/s). Therefore, the stiffness of the mass quantified by the Young's modulus is significantly higher in malignant masses. According to the receiver operating characteristic curve (ROC), the optimal cut-off value of 83 kPa yields 87.10% sensitivity, 82.61% specificity, and 0.88 for the area under the curve (AUC). CUSE has the potential for clinical utility as a quantitative diagnostic imaging tool adjunct to B-mode ultrasound for differentiation of malignant and benign breast masses.
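
    The Young's modulus values discussed above follow from the shear wave speed under the usual assumptions for soft tissue (incompressible, density about 1000 kg/m³), via E ≈ 3ρc². A small sketch of that conversion and the reported 83 kPa cutoff:

```python
RHO = 1000.0  # assumed tissue density, kg/m^3

def youngs_modulus_kpa(shear_wave_speed_m_s: float) -> float:
    """E = 3 * rho * c^2 for an incompressible, purely elastic medium (result in kPa)."""
    return 3.0 * RHO * shear_wave_speed_m_s ** 2 / 1000.0

CUTOFF_KPA = 83.0  # optimal cut-off value reported in the record

for label, speed in [("benign mean", 3.65), ("malignant mean", 6.0)]:
    e = youngs_modulus_kpa(speed)
    call = "suspicious" if e > CUTOFF_KPA else "likely benign"
    print(f"{label}: c = {speed} m/s -> E = {e:.0f} kPa ({call})")
```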

  12. Visual detection of gas shows from coal core and cuttings using liquid leak detector

    Energy Technology Data Exchange (ETDEWEB)

    Barker, C.E. [United States Geological Survey, Denver, CO (United States)

    2006-09-15

    Coal core descriptions are difficult to obtain, as they must be obtained immediately after the core is retrieved and before the core is closed in a canister. This paper described a method of marking gas shows on a core surface by coating the core with a water-based liquid leak detector and photographing the subsequent foam developed on the core surface while the core is still in the core tray. Coals from a borehole at the Yukon Flats Basin in Alaska and the Maverick Basin in Texas were used to illustrate the method. Drilling mud and debris were removed from the coal samples before the leak detector solution was applied onto the core surfaces. A white froth or dripping foam developed rapidly at gas shows on the sample surfaces. A hand-held lens and a binocular microscope were used to magnify the foaming action. It was noted that foaming was not continuous across the core surface, but was restricted to localized points along the surface. It was suggested that the localized point foaming may have resulted from the coring process. However, the same tendency toward point gas show across the sample surface was found in some hard, well-indurated samples that still had undisturbed bedding and other sedimentary structures. It was concluded that gas shows marked as separate foam centres may indicate a real condition of local permeability paths. Results suggested that the new gas show detection method could be used in core selection studies to reduce the costs of exploration programs. 6 refs., 4 figs.

  13. A holographic method for investigating cylindrical symmetry plasmas resulting from electric discharges

    International Nuclear Information System (INIS)

    Rosu, N.; Ralea, M.; Foca, M.; Iova, I.

    1992-01-01

    A new method based on holographic interferometry in real time with reference fringes for diagnosing gas electric discharges in cylindrical symmetry tubes is presented. A method for obtaining and quantitatively investigating interferograms obtained with a video camera is described. By studying the resulting images frame by frame and introducing the measurements into an adequate computer programme one gets a graphical recording of the radial distribution of the charged particle concentration in the plasma in any region of the tube at a given time, as well as their axial distribution. The real time evolution of certain phenomena occurring in the discharge tube can also be determined by this non-destructive method. The method is used for electric discharges in Ar at average pressures in a discharge tube with hollow cathode effect. (Author)

  14. Comb-push ultrasound shear elastography of breast masses: initial results show promise.

    Directory of Open Access Journals (Sweden)

    Max Denis

    Full Text Available To evaluate the performance of Comb-push Ultrasound Shear Elastography (CUSE) for classification of breast masses. CUSE is an ultrasound-based quantitative two-dimensional shear wave elasticity imaging technique, which utilizes multiple laterally distributed acoustic radiation force (ARF) beams to simultaneously excite the tissue and induce shear waves. Female patients who were categorized as having suspicious breast masses underwent CUSE evaluations prior to biopsy. An elasticity estimate within the breast mass was obtained from the CUSE shear wave speed map. Elasticity estimates of various types of benign and malignant masses were compared with biopsy results. Fifty-four female patients with suspicious breast masses from our ongoing study are presented. Our cohort included 31 malignant and 23 benign breast masses. Our results indicate that the mean shear wave speed was significantly higher in malignant masses (6 ± 1.58 m/s) in comparison to benign masses (3.65 ± 1.36 m/s). Therefore, the stiffness of the mass quantified by the Young's modulus is significantly higher in malignant masses. According to the receiver operating characteristic curve (ROC), the optimal cut-off value of 83 kPa yields 87.10% sensitivity, 82.61% specificity, and 0.88 for the area under the curve (AUC). CUSE has the potential for clinical utility as a quantitative diagnostic imaging tool adjunct to B-mode ultrasound for differentiation of malignant and benign breast masses.

  15. Reinterpretation of the results of a pooled analysis of dietary carotenoid intake and breast cancer risk by using the interval collapsing method

    Directory of Open Access Journals (Sweden)

    Jong-Myon Bae

    2016-06-01

    Full Text Available OBJECTIVES: A pooled analysis of 18 prospective cohort studies reported in 2012 evaluated carotenoid intakes and breast cancer risk defined by estrogen receptor (ER) and progesterone receptor (PR) statuses by using the “highest versus lowest intake” method (HLM). By applying the interval collapsing method (ICM) to maximize the use of the estimated information, we reevaluated the results of the previous analysis in order to reinterpret the inferences made. METHODS: In order to estimate the summary effect size (sES) and its 95% confidence interval (CI), meta-analyses with the random-effects model were conducted for adjusted relative risks and their 95% CIs from the second to the fifth interval according to five kinds of carotenoids and ER/PR status. RESULTS: The following new findings were identified: α-Carotene and β-cryptoxanthin have protective effects on overall breast cancer. All five kinds of carotenoids showed protective effects on ER− breast cancer. β-Carotene level increased the risk of ER+ or ER+/PR+ breast cancer. α-Carotene, β-carotene, lutein/zeaxanthin, and lycopene showed a protective effect on ER−/PR+ or ER−/PR− breast cancer. CONCLUSIONS: The new facts support the hypothesis that carotenoids that show anticancer effects through their antioxidant function might reduce the risk of ER− breast cancer. Based on the new facts, the modification of the effects of α-carotene, β-carotene, and β-cryptoxanthin should be evaluated according to PR and ER statuses.
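
    The summary effect sizes described above come from random-effects meta-analysis of adjusted relative risks. A minimal DerSimonian–Laird implementation over hypothetical interval-specific relative risks is sketched below; it is not the authors' code or data.

```python
import math

# Hypothetical adjusted relative risks and 95% CIs for one carotenoid and one intake
# interval, one entry per cohort study: (RR, lower CI, upper CI).
studies = [(0.92, 0.80, 1.06), (0.85, 0.70, 1.03), (0.97, 0.88, 1.07), (0.78, 0.62, 0.98)]

# Work on the log scale; derive each study's variance from its CI width.
y = [math.log(rr) for rr, lo, hi in studies]
v = [((math.log(hi) - math.log(lo)) / (2 * 1.96)) ** 2 for rr, lo, hi in studies]

# DerSimonian-Laird estimate of the between-study variance tau^2.
w = [1.0 / vi for vi in v]
y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(studies) - 1)) / c)

# Random-effects pooled estimate (sES) and its 95% CI, back-transformed to the RR scale.
w_star = [1.0 / (vi + tau2) for vi in v]
pooled = sum(wi * yi for wi, yi in zip(w_star, y)) / sum(w_star)
se = math.sqrt(1.0 / sum(w_star))
print(f"sES (RR) = {math.exp(pooled):.2f} "
      f"[{math.exp(pooled - 1.96 * se):.2f}, {math.exp(pooled + 1.96 * se):.2f}]")
```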

  16. Video monitoring of brown planthopper predation in rice shows flaws of sentinel methods

    NARCIS (Netherlands)

    Zou, Yi; Kraker, De Joop; Bianchi, Felix J.J.A.; Telgen, Van Mario D.; Xiao, Haijun; Werf, Van Der Wopke

    2017-01-01

    Immobilized prey are routinely used in agro-ecological exposure studies to quantify predation of pests under field conditions, but this method has not been validated. Our purpose was to determine the validity of using immobilized adults of the major rice pest Nilaparvata lugens, the brown planthopper

  17. Best in show but not best shape: a photographic assessment of show dog body condition.

    Science.gov (United States)

    Such, Z R; German, A J

    2015-08-01

    Previous studies suggest that owners often wrongly perceive overweight dogs to be in normal condition. The body shape of dogs attending shows might influence owners' perceptions, with online images of overweight show winners having a negative effect. This was an observational in silico study of canine body condition. 14 obese-prone breeds and 14 matched non-obese-prone breeds were first selected, and one operator then used an online search engine to identify 40 images, per breed, of dogs that had appeared at a major national UK show (Crufts). After images were anonymised and coded, a second observer subjectively assessed body condition, in a single sitting, using a previously validated method. Of 1120 photographs initially identified, 960 were suitable for assessing body condition, with all unsuitable images being from longhaired breeds. None of the dogs (0 per cent) were underweight, 708 (74 per cent) were in ideal condition and 252 (26 per cent) were overweight. Pugs, basset hounds and Labrador retrievers were most likely to be overweight, while standard poodles, Rhodesian ridgebacks, Hungarian vizslas and Dobermanns were least likely to be overweight. Given the proportion of show dogs from some breeds that are overweight, breed standards should be redefined to be consistent with a dog in optimal body condition. British Veterinary Association.

  18. Application of NUREG-1150 methods and results to accident management

    International Nuclear Information System (INIS)

    Dingman, S.; Sype, T.; Camp, A.; Maloney, K.

    1991-01-01

    The use of NUREG-1150 and similar probabilistic risk assessments in the Nuclear Regulatory Commission (NRC) and industry risk management programs is discussed. Risk management is more comprehensive than the commonly used term accident management. Accident management includes strategies to prevent vessel breach, mitigate radionuclide releases from the reactor coolant system, and mitigate radionuclide releases to the environment. Risk management also addresses prevention of accident initiators, prevention of core damage, and implementation of effective emergency response procedures. The methods and results produced in NUREG-1150 provide a framework within which current risk management strategies can be evaluated, and future risk management programs can be developed and assessed. Examples of the use of the NUREG-1150 framework for identifying and evaluating risk management options are presented. All phases of risk management are discussed, with particular attention given to the early phases of accidents. Plans and methods for evaluating accident management strategies that have been identified in the NRC accident management program are discussed

  19. Application of NUREG-1150 methods and results to accident management

    International Nuclear Information System (INIS)

    Dingman, S.; Sype, T.; Camp, A.; Maloney, K.

    1990-01-01

    The use of NUREG-1150 and similar Probabilistic Risk Assessments in NRC and industry risk management programs is discussed. "Risk management" is more comprehensive than the commonly used term "accident management." Accident management includes strategies to prevent vessel breach, mitigate radionuclide releases from the reactor coolant system, and mitigate radionuclide releases to the environment. Risk management also addresses prevention of accident initiators, prevention of core damage, and implementation of effective emergency response procedures. The methods and results produced in NUREG-1150 provide a framework within which current risk management strategies can be evaluated, and future risk management programs can be developed and assessed. Examples of the use of the NUREG-1150 framework for identifying and evaluating risk management options are presented. All phases of risk management are discussed, with particular attention given to the early phases of accidents. Plans and methods for evaluating accident management strategies that have been identified in the NRC accident management program are discussed. 2 refs., 3 figs

  20. New results to BDD truncation method for efficient top event probability calculation

    International Nuclear Information System (INIS)

    Mo, Yuchang; Zhong, Farong; Zhao, Xiangfu; Yang, Quansheng; Cui, Gang

    2012-01-01

    A Binary Decision Diagram (BDD) is a graph-based data structure that allows an exact top event probability (TEP) to be calculated. It has been a very difficult task to develop an efficient BDD algorithm that can solve a large problem, since its memory consumption is very high. Recently, in order to solve a large reliability problem within limited computational resources, Jung presented an efficient method to maintain a small BDD size by truncating the BDD during the calculation. In this paper, it is first identified that Jung's BDD truncation algorithm can be improved for more practical use. Then, a more efficient truncation algorithm is proposed, which can generate a truncated BDD of smaller size and an approximate TEP with smaller truncation error. Empirical results showed that this new algorithm uses slightly less running time and slightly more storage than Jung's algorithm. It was also found that designing a truncation algorithm with ideal features for every possible fault tree is very difficult, if not impossible. The so-called ideal features of this paper would be that, as the truncation limit decreases, the size of the truncated BDD converges to the size of the exact BDD but never exceeds it.
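
    For context, the exact TEP that truncation methods approximate is obtained by Shannon decomposition over the BDD. The sketch below evaluates a tiny hand-built BDD with hypothetical basic-event probabilities; it illustrates that exact calculation only, not Jung's or the authors' truncation algorithm.

```python
from functools import lru_cache

# A tiny hand-built BDD for the Boolean top event f = x1 AND (x2 OR x3),
# with variable order x1 < x2 < x3. Each node is (variable, low_child, high_child);
# terminals are the booleans False and True.
N3 = ("x3", False, True)
N2 = ("x2", N3, True)
ROOT = ("x1", False, N2)

# Basic event probabilities (hypothetical).
P = {"x1": 0.01, "x2": 0.05, "x3": 0.02}

@lru_cache(maxsize=None)
def tep(node):
    """Exact top event probability by Shannon decomposition over the BDD."""
    if node is True:
        return 1.0
    if node is False:
        return 0.0
    var, low, high = node
    p = P[var]
    return p * tep(high) + (1.0 - p) * tep(low)

# Check against the closed form P(x1) * (P(x2) + P(x3) - P(x2)*P(x3)).
print(tep(ROOT), 0.01 * (0.05 + 0.02 - 0.05 * 0.02))
```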

  1. Gradient Correlation Method for the Stabilization of Inversion Results of Aerosol Microphysical Properties Retrieved from Profiles of Optical Data

    Directory of Open Access Journals (Sweden)

    Kolgotin Alexei

    2016-01-01

    Full Text Available Correlation relationships between aerosol microphysical parameters and optical data are investigated. The results show that surface-area concentrations and extinction coefficients are linearly correlated with a correlation coefficient above 0.99 for arbitrary particle size distribution. The correlation relationships that we obtained can be used as constraints in our inversion of optical lidar data. Simulation studies demonstrate a significant stabilization of aerosol microphysical data products if we apply the gradient correlation method in our traditional regularization technique.

  2. The Validation of NAA Method Used as Test Method in Serpong NAA Laboratory

    International Nuclear Information System (INIS)

    Rina-Mulyaningsih, Th.

    2004-01-01

    The Validation Of NAA Method Used As Test Method In Serpong NAA Laboratory. The NAA method is a non-standard testing method. A testing laboratory shall validate the methods it uses to ensure and confirm that they are suitable for the application. Validation of the NAA method has been done with the parameters of accuracy, precision, repeatability and selectivity. The NIST 1573a Tomato Leaves, NIES 10C Rice Flour Unpolished and standard elements were used in this testing program. The results of testing with NIST 1573a showed that the elements Na, Zn, Al and Mn meet the acceptance criteria for accuracy and precision, whereas Co is rejected. The results of testing with NIES 10C showed that the Na and Zn elements meet the acceptance criteria for accuracy and precision, but the Mn element is rejected. The result of the selectivity test showed that the quantity value is between 0.1-2.5 μg, depending on the element. (author)

  3. Interval estimation methods of the mean in small sample situation and the results' comparison

    International Nuclear Information System (INIS)

    Wu Changli; Guo Chunying; Jiang Meng; Lin Yuangen

    2009-01-01

    Methods for interval estimation of the sample mean, namely the classical method, the Bootstrap method, the Bayesian Bootstrap method, the Jackknife method and the spread method of the empirical characteristic distribution function, are described. Numerical calculation of the sample mean intervals is carried out for sample sizes of 4, 5 and 6. The results indicate that the Bootstrap method and the Bayesian Bootstrap method are much more appropriate than the others in small sample situations. (authors)
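
    For reference, a minimal percentile-bootstrap confidence interval for the mean of a very small sample is sketched below; the data values are hypothetical and the record does not specify the exact resampling settings used.

```python
import numpy as np

rng = np.random.default_rng(0)
sample = np.array([4.8, 5.6, 5.1, 4.3, 5.9])   # hypothetical small sample (n = 5)

# Percentile bootstrap: resample with replacement, collect the mean of each resample.
boot_means = np.array([rng.choice(sample, size=sample.size, replace=True).mean()
                       for _ in range(10_000)])

lower, upper = np.percentile(boot_means, [2.5, 97.5])
print(f"mean = {sample.mean():.2f}, 95% bootstrap CI = [{lower:.2f}, {upper:.2f}]")
```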

  4. A new PCR-based method shows that blue crabs (Callinectes sapidus (Rathbun)) consume winter flounder (Pseudopleuronectes americanus (Walbaum)).

    Science.gov (United States)

    Collier, Jackie L; Fitzgerald, Sean P; Hice, Lyndie A; Frisk, Michael G; McElroy, Anne E

    2014-01-01

    Winter flounder (Pseudopleuronectes americanus) once supported robust commercial and recreational fisheries in the New York (USA) region, but since the 1990s populations have been in decline. Available data show that settlement of young-of-the-year winter flounder has not declined as sharply as adult abundance, suggesting that juveniles are experiencing higher mortality following settlement. The recent increase of blue crab (Callinectes sapidus) abundance in the New York region raises the possibility that new sources of predation may be contributing to juvenile winter flounder mortality. To investigate this possibility we developed and validated a method to specifically detect winter flounder mitochondrial control region DNA sequences in the gut contents of blue crabs. A survey of 55 crabs collected from Shinnecock Bay (along the south shore of Long Island, New York) in July, August, and September of 2011 showed that 12 of 42 blue crabs (28.6%) from which PCR-amplifiable DNA was recovered had consumed winter flounder in the wild, empirically supporting the trophic link between these species that has been widely speculated to exist. This technique overcomes difficulties with visual identification of the often unrecognizable gut contents of decapod crustaceans, and modifications of this approach offer valuable tools to more broadly address their feeding habits on a wide variety of species.

  5. SPACE CHARGE SIMULATION METHODS INCORPORATED IN SOME MULTI-PARTICLE TRACKING CODES AND THEIR RESULTS COMPARISON

    International Nuclear Information System (INIS)

    BEEBE-WANG, J.; LUCCIO, A.U.; D'IMPERIO, N.; MACHIDA, S.

    2002-01-01

    Space charge in high-intensity beams is an important issue in accelerator physics. Due to the complexity of the problem, the most effective way of investigating its effect is by computer simulation. In recent years, many space charge simulation methods have been developed and incorporated in various 2D or 3D multi-particle tracking codes. It has become necessary to benchmark these methods against each other and against experimental results. As part of a global effort, we present our initial comparison of the space charge methods incorporated in the simulation codes ORBIT++, ORBIT and SIMPSONS. In this paper, the methods included in these codes are overviewed, the simulation results are presented and compared, and the advantages and disadvantages of each method are discussed.

  6. SPACE CHARGE SIMULATION METHODS INCORPORATED IN SOME MULTI-PARTICLE TRACKING CODES AND THEIR RESULTS COMPARISON.

    Energy Technology Data Exchange (ETDEWEB)

    BEEBE-WANG, J.; LUCCIO, A.U.; D'IMPERIO, N.; MACHIDA, S.

    2002-06-03

    Space charge in high-intensity beams is an important issue in accelerator physics. Due to the complexity of the problem, the most effective way of investigating its effect is by computer simulation. In recent years, many space charge simulation methods have been developed and incorporated in various 2D or 3D multi-particle tracking codes. It has become necessary to benchmark these methods against each other and against experimental results. As part of a global effort, we present our initial comparison of the space charge methods incorporated in the simulation codes ORBIT++, ORBIT and SIMPSONS. In this paper, the methods included in these codes are overviewed, the simulation results are presented and compared, and the advantages and disadvantages of each method are discussed.
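
    The space-charge solvers benchmarked in these codes differ in their details, but a common ingredient of 2D treatments is a grid-based Poisson solve of the deposited charge density. The sketch below shows a generic FFT solve on a periodic grid; it is only an illustration of that building block under assumed parameters, not the algorithm actually used in ORBIT++, ORBIT or SIMPSONS.

    ```python
    # Generic 2D FFT Poisson solve on a periodic grid, a common building block of
    # space-charge PIC codes; not the specific algorithm of ORBIT/ORBIT++/SIMPSONS.
    import numpy as np

    def poisson_periodic_2d(rho, dx, dy, eps0=8.8541878128e-12):
        """Solve laplacian(phi) = -rho/eps0 with periodic boundary conditions."""
        ny, nx = rho.shape
        kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
        ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dy)
        KX, KY = np.meshgrid(kx, ky)
        k2 = KX**2 + KY**2
        k2[0, 0] = 1.0                      # avoid division by zero for the k = 0 mode
        phi_hat = np.fft.fft2(rho) / (eps0 * k2)
        phi_hat[0, 0] = 0.0                 # fix the mean of the potential to zero
        return np.real(np.fft.ifft2(phi_hat))

    # Hypothetical beam: Gaussian charge density on a 128 x 128 grid
    n, L = 128, 0.1                         # grid points per side, domain length [m]
    x = np.linspace(0.0, L, n, endpoint=False)
    X, Y = np.meshgrid(x, x)
    rho = 1e-6 * np.exp(-((X - L / 2)**2 + (Y - L / 2)**2) / (2 * (L / 20)**2))
    phi = poisson_periodic_2d(rho, L / n, L / n)
    Ey, Ex = np.gradient(-phi, L / n, L / n)  # space-charge field components
    print("peak potential [V]:", phi.max())
    ```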

  7. Analytical method and result of radiation exposure for depressurization accident of HTTR

    International Nuclear Information System (INIS)

    Sawa, K.; Shiozawa, S.; Mikami, H.

    1990-01-01

    The Japan Atomic Energy Research Institute (JAERI) is now proceeding with the construction design of the High Temperature Engineering Test Reactor (HTTR). Since the HTTR has some characteristics different from LWRs, the analytical method of radiation exposure in accidents established for LWRs cannot be applied directly. This paper describes the analytical method of radiation exposure developed by JAERI for the depressurization accident, which is the severest of the design basis accidents of the HTTR with respect to radiation exposure. The result is also presented in this paper.

  8. Show-Bix &

    DEFF Research Database (Denmark)

    2014-01-01

    The anti-reenactment 'Show-Bix &' consists of 5 dias (slide) projectors, a dial phone, quintophonic sound, and interactive elements. A responsive interface enables the dias projectors to show copies of original dias slides from the Show-Bix piece ”March på Stedet”, 265 images in total. The copies are...

  9. Qualitative and quantitative methods for human factor analysis and assessment in NPP. Investigations and results

    International Nuclear Information System (INIS)

    Hristova, R.; Kalchev, B.; Atanasov, D.

    2005-01-01

    We consider two basic groups of methods for analysis and assessment of the human factor in the NPP area and give some results from the analyses performed. The human factor covers human interaction with the designed equipment and the working environment, taking into account human capabilities and limits. Within the qualitative methods for human factor analysis we consider concepts and structural methods for classifying information connected with the human factor; emphasis is given to the HPES method for human factor analysis in NPPs. Methods for quantitative assessment of human reliability are then considered. These methods allow probabilities to be assigned to the elements of the already structured information about human performance. This part includes an overview of classical methods for human reliability assessment (HRA, THERP) and of methods taking into account specific information about human capabilities and limits and about the man-machine interface (CHR, HEART, ATHEANA). Quantitative and qualitative results concerning the influence of the human factor on the occurrence of initiating events at the Kozloduy NPP are presented. (authors)

  10. Cobalamin Concentrations in Fetal Liver Show Gender Differences: A Result from Using a High-Pressure Liquid Chromatography-Inductively Coupled Plasma Mass Spectrometry as an Ultratrace Cobalt Speciation Method.

    Science.gov (United States)

    Bosle, Janine; Goetz, Sven; Raab, Andrea; Krupp, Eva M; Scheckel, Kirk G; Lombi, Enzo; Meharg, Andrew A; Fowler, Paul A; Feldmann, Jörg

    2016-12-20

    Maternal diet and lifestyle choices may affect placental transfer of cobalamin (Cbl) to the fetus. The fetal liver concentration of Cbl reflects nutritional status with regard to vitamin B12, but at these low concentrations current Cbl measurement methods lack robustness. An analytical method based on enzymatic extraction with subsequent reversed-phase high-pressure liquid chromatography (RP-HPLC) separation and parallel ICPMS and electrospray ionization (ESI)-Orbitrap-MS detection, determining Cbl species specifically in liver samples of only 10-50 mg, was developed using 14 pig livers. Subsequently, 55 human fetal livers were analyzed. HPLC-ICPMS analysis for cobalt (Co) and Cbl gave detection limits of 0.18 ng/g and 0.88 ng/g d.m. in liver samples, respectively, with a recovery of >95%. The total Co (Co_t) concentration did not reflect the amount of Cbl or vitamin B12 in the liver: Cbl-bound Co contributed only 45 ± 15% of Co_t. XRF mapping and μXANES analysis confirmed the occurrence of non-Cbl cobalt hot spots in pig liver, indicating particulate Co. Neither total cobalt nor Cbl correlated with fetal weight or weeks of gestation in the human fetal livers. Although no gender difference could be identified for the total Co concentration, female livers were significantly higher in Cbl concentration (24.1 ± 7.8 ng/g) than those from male fetuses (19.8 ± 7.1 ng/g) (p = 0.04). This HPLC-ICPMS method was able to quantify total Co_t and Cbl in fetal liver, and it was sensitive and precise enough to identify this gender difference.

  11. THE RESULTS OF THE ANALYSIS OF THE STUDENTS’ BODY COMPOSITION BY BIOIMPEDANCE METHOD

    Directory of Open Access Journals (Sweden)

    Dmitry S. Blinov

    2016-06-01

    Introduction. Tissues of the human body can conduct electricity. Liquid media (water, blood, the contents of hollow organs) have a low impedance, i.e. they are good conductors, while the resistance of denser tissues (muscle, nerves, etc.) is significantly higher; fat and bone tissue have the highest impedance. Bioimpedancemetry is a method which determines the composition of the human body by measuring the electrical resistance (impedance) of its tissues. Relevance. This technique is indispensable to dieticians and fitness trainers, and the results of such studies can also assist physicians, gynecologists, orthopedists and other specialists in prescribing effective treatment. The bioimpedance method helps to determine the risks of developing type 2 diabetes, atherosclerosis, hypertension, diseases of the musculoskeletal system, disorders of the endocrine system, gallstone disease, etc. Materials and Methods. The list of body composition parameters assessed by the bioimpedance analysis method includes absolute and relative indicators. The absolute indicators, determined for the whole body, were: fat and lean body mass, active cell and skeletal muscle mass, total body water, and cellular and extracellular fluid. Along with these, relative indicators of body composition (normalized to body weight, lean mass or other variables) were calculated. Results. The comparison of anthropometric and bioimpedance measurements showed that height, vital capacity, weight, waist circumference, waist and hip circumference, basal metabolism, body fat mass normalized to height, lean mass, and the percentage of skeletal muscle mass differed statistically significantly between boys and girls with normal and excessive body weight. Discussion and Conclusions. In the present study physical development with consideration of body composition in students

  12. Tank 48H Waste Composition and Results of Investigation of Analytical Methods

    Energy Technology Data Exchange (ETDEWEB)

    Walker, D.D. [Westinghouse Savannah River Company, AIKEN, SC (United States)]

    1997-04-02

    This report serves two purposes. First, it documents the analytical results of Tank 48H samples taken between April and August 1996. Second, it describes investigations of the precision of the sampling and analytical methods used on the Tank 48H samples.

  13. Performance of various mathematical methods for calculation of radioimmunoassay results

    International Nuclear Information System (INIS)

    Sandel, P.; Vogt, W.

    1977-01-01

    Interpolation and regression methods are available for computer-aided determination of radioimmunoassay end results. We compared the performance of eight algorithms (weighted and unweighted linear logit-log regression, quadratic logit-log regression, Rodbard's logistic model in weighted and unweighted form, smoothing-spline interpolation with a large and a small smoothing factor, and polygonal interpolation) on the basis of three radioimmunoassays with different reference-curve characteristics (digoxin, estriol, human chorionic somatomammotropin = HCS). Particular weight was given to the accuracy of the approximation at the intermediate points on the curve, i.e. those points that lie midway between two standard concentrations. These concentrations were prepared by weighing and inserted as unknown samples. For digoxin and estriol the polygonal interpolation provided the best results, while the weighted logit-log regression proved superior for HCS. (orig.) [de]
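
    One of the simpler algorithms compared above, unweighted linear logit-log regression, can be sketched as follows. The standard concentrations and responses are invented for illustration, and the weighting scheme and the other seven algorithms of the study are not reproduced.

    ```python
    # Unweighted logit-log calibration of an RIA standard curve:
    # logit(B/B0) is modelled as a linear function of ln(concentration).
    # Standard points are invented for illustration.
    import math

    def logit(p):
        return math.log(p / (1.0 - p))

    def fit_logit_log(concs, b_over_b0):
        """Least-squares fit of logit(y) = a + b*ln(x); returns (a, b)."""
        xs = [math.log(c) for c in concs]
        ys = [logit(y) for y in b_over_b0]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
        a = my - b * mx
        return a, b

    def dose_from_response(y, a, b):
        """Invert the fitted curve to read back an unknown concentration."""
        return math.exp((logit(y) - a) / b)

    concs     = [0.25, 0.5, 1.0, 2.0, 4.0, 8.0]       # standards (arbitrary units)
    b_over_b0 = [0.88, 0.78, 0.62, 0.45, 0.30, 0.18]  # normalized bound fraction
    a, b = fit_logit_log(concs, b_over_b0)
    print("read-back at B/B0 = 0.53:", dose_from_response(0.53, a, b))
    ```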

  14. Davedan Show Di Amphi Theatre Nusa Dua Bali

    Directory of Open Access Journals (Sweden)

    Ni Made Ruastiti

    2018-05-01

    Bali, because it has positive implications for the economy of the parties involved, enriches the local performing arts, and provides an identity for the Nusa Dua tourist area of Bali. This article is compiled from research aimed at understanding the Davedan Show at the Amphi Theatre, Nusa Dua, Bali. The research was prompted by an imbalance between assumption and reality: tourists visiting Bali are generally assumed to be most enthusiastic about tourism performing arts based on local traditional art and culture, yet the reality is different. The research questions are: what is the form of the Davedan Show? why do tourists enjoy watching it? and what are its implications for the performers, the community, and the tourism industry in Nusa Dua, Bali? The research applied qualitative methods, in particular a participative approach that prioritized cooperation between the researchers and the relevant informants. The data sources were the Davedan Show itself, its management, dancers and audiences, and related results produced by previous researchers. All data, collected through observation, interviews, FGD and literature study, were analyzed using postmodern aesthetic theory, theory of practice, and the theory of power relations. The results showed that: (1) the Davedan Show is presented with a new presentation concept within Balinese tourism performing arts, visible in its material, form, manner of presentation, and management; presenting the theme Treasure of the Archipelago and opening a new adventure gate, it is accompanied by a medley of recorded ethnic music of the archipelago and continues with performance sections of Balinese, Sumatran, Sundanese, Solo, Bornean and Papuan art and culture; (2) the Davedan Show attracts many foreign tourists because it is based on the market, aesthetic, and cultural ideologies of the

  15. 3D MRI of the colon: methods and first results of 5 patients

    International Nuclear Information System (INIS)

    Luboldt, W.; Bauerfeind, P.; Pelkonen, P.; Steiner, P.; Krestin, G.P.; Debatin, J.F.

    1997-01-01

    Purpose: 'Exoscopic' and endoscopic identification of colorectal pathologies via MRI. Methods: 5 patients (36-88 years), two normal and three with different colorectal pathologies (diverticular disease, polyps and carcinoma of the colon), were examined by MRI after colonoscopy. Subsequent to filling of the colon with a gadolinium-water mixture under MRI monitoring, 3D data sets of the colon were acquired in prone and supine positions over a 28 s breath-hold interval. Subsequently, multiplanar T1-weighted 2D sequences were acquired before and following i.v. administration of Gd-DTPA (0.1 mmol/kg BW). All imaging was performed in the coronal orientation. The 3D data were interactively analysed using various displays: maximum intensity projection (MIP), surface shadowed display (SSD), multiplanar reconstruction (MPR) and virtual colonoscopy (VC). Results: All of the colorectal pathologies could be diagnosed interactively by MPR. On MIP images some pathologies were missed. VC presented the morphology of the colonic haustra as well as all endoluminally growing lesions in a manner similar to endoscopy. The colonic masses showed uptake of contrast medium and could thus be differentiated from air or faeces. (orig./AJ) [de]

  16. Influence of Meibomian Gland Expression Methods on Human Lipid Analysis Results.

    Science.gov (United States)

    Kunnen, Carolina M E; Brown, Simon H J; Lazon de la Jara, Percy; Holden, Brien A; Blanksby, Stephen J; Mitchell, Todd W; Papas, Eric B

    2016-01-01

    To compare the lipid composition of human meibum across three different meibum expression techniques. Meibum was collected from five healthy non-contact lens wearers (aged 20-35 years) after cleaning the eyelid margin using three meibum expression methods: cotton buds (CB), meibomian gland evaluator (MGE) and meibomian gland forceps (MGF). Meibum was also collected using cotton buds without cleaning the eyelid margin (CBn). Lipids were analyzed by chip-based, nano-electrospray mass spectrometry (ESI-MS). Comparisons were made using linear mixed models. Tandem MS enabled identification and quantification of over 200 lipid species across ten lipid classes. There were significant differences between collection techniques in the relative quantities of polar lipids obtained (P<.05). The MGE method returned smaller polar lipid quantities than the CB approaches. No significant differences were found between techniques for nonpolar lipids. No significant differences were found between cleaned and non-cleaned eyelids for polar or nonpolar lipids. Meibum expression technique influences the relative amount of phospholipids in the resulting sample. The highest amounts of phospholipids were detected with the CB approaches and the lowest with the MGE technique. Cleaning the eyelid margin prior to expression was not found to affect the lipid composition of the sample. This may be a consequence of the more forceful expression resulting in cell membrane contamination or higher risk of tear lipid contamination as a result of reflex tearing. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Improved Method of Detection Falsification Results the Digital Image in Conditions of Attacks

    Directory of Open Access Journals (Sweden)

    Kobozeva A.A.

    2016-08-01

    The modern level of information technology has made previously unheard-of ease of unauthorized modification of digital content possible. Effective expert examination of the authenticity of digital images, video and audio, and the development of methods for identifying and localizing violations of their integrity when such content is used for purposes other than entertainment, are therefore very important. The present paper deals with improving a method for detecting the results of cloning in digital images, one of the falsification tools most frequently used and available in all modern graphics editors. The method is intended to detect the clone and pre-image areas in the presence of additional disturbing influences applied to the image after the cloning operation to "mask" the results, which complicates the search. The improvement aims to reduce the number of "false alarms", i.e. cases where a clone/pre-image area is detected in an unmodified image or where the localization of the identified areas does not correspond to the real clone and pre-image. The proposed improvement, based on analysis of per-pixel image blocks of different sizes with the least difference from each other, enables the method to work efficiently regardless of the specifics of the analyzed digital image.
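
    The block-comparison idea described above, i.e. searching an image for pixel blocks with minimal mutual difference, can be sketched as a naive copy-move search. This is only a generic illustration: the block size, stride, similarity threshold and minimum offset are assumed parameters, and the authors' multi-size improvement is not reproduced.

    ```python
    # Naive copy-move (clone) search: find pairs of equal-size pixel blocks with
    # the smallest mutual difference. A generic illustration only, not the
    # authors' improved method; block size, stride and threshold are assumptions.
    import numpy as np

    def find_similar_blocks(img, block=16, stride=8, threshold=2.0, min_offset=16):
        h, w = img.shape
        positions, blocks = [], []
        for y in range(0, h - block + 1, stride):
            for x in range(0, w - block + 1, stride):
                positions.append((y, x))
                blocks.append(img[y:y + block, x:x + block].astype(np.float64).ravel())
        blocks = np.stack(blocks)
        matches = []
        for i in range(len(blocks)):
            for j in range(i + 1, len(blocks)):
                dy = abs(positions[i][0] - positions[j][0])
                dx = abs(positions[i][1] - positions[j][1])
                if max(dy, dx) < min_offset:                  # skip neighbouring blocks
                    continue
                mad = np.mean(np.abs(blocks[i] - blocks[j]))  # mean absolute difference
                if mad < threshold:
                    matches.append((positions[i], positions[j], mad))
        return sorted(matches, key=lambda m: m[2])

    # Hypothetical grayscale image with a cloned patch (offset chosen on the block grid)
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(96, 96)).astype(np.uint8)
    img[56:72, 56:72] = img[8:24, 8:24]                       # simulate cloning a 16x16 region
    print(find_similar_blocks(img)[:3])
    ```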

  18. The CREATE Method Does Not Result in Greater Gains in Critical Thinking than a More Traditional Method of Analyzing the Primary Literature †

    Science.gov (United States)

    Segura-Totten, Miriam; Dalman, Nancy E.

    2013-01-01

    Analysis of the primary literature in the undergraduate curriculum is associated with gains in student learning. In particular, the CREATE (Consider, Read, Elucidate hypotheses, Analyze and interpret the data, and Think of the next Experiment) method is associated with an increase in student critical thinking skills. We adapted the CREATE method within a required cell biology class and compared the learning gains of students using CREATE to those of students involved in less structured literature discussions. We found that while both sets of students had gains in critical thinking, students who used the CREATE method did not show significant improvement over students engaged in a more traditional method for dissecting the literature. Students also reported similar learning gains for both literature discussion methods. Our study suggests that, at least in our educational context, the CREATE method does not lead to higher learning gains than a less structured way of reading primary literature. PMID:24358379

  19. Comparison between results of solution of Burgers' equation and Laplace's equation by Galerkin and least-square finite element methods

    Science.gov (United States)

    Adib, Arash; Poorveis, Davood; Mehraban, Farid

    2018-03-01

    In this research, two equations are considered as examples of hyperbolic and elliptic equations, and two finite element methods are applied for solving them. The purpose of the research is to select a suitable method for solving each of the two equations. Burgers' equation is a hyperbolic, pure-advection (diffusion-free), one-dimensional and unsteady equation; a sudden shock wave is introduced into the model and moves without deformation. Laplace's equation is an elliptic, steady, two-dimensional equation; its solution in an earth dam is considered, giving the pressure head and the seepage in the X and Y directions at different points of the dam, and finally the water table within the dam. For Burgers' equation, the least-squares method can represent the moving wave, albeit with oscillation, whereas the Galerkin method cannot represent it correctly (the best approach for Burgers' equation is to discretize space with the least-squares finite element method and time with a forward difference). For Laplace's equation, both the Galerkin and least-squares methods reproduce the water table in the earth dam correctly.
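
    For reference, the two model equations compared in the study are, in their usual textbook forms (the symbols below are the standard ones, not notation taken from the paper):

    ```latex
    % Inviscid (pure-advection) 1D Burgers' equation, hyperbolic and unsteady:
    \frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x} = 0
    % 2D Laplace equation for the hydraulic head h in the earth dam, elliptic and steady:
    \frac{\partial^{2} h}{\partial x^{2}} + \frac{\partial^{2} h}{\partial y^{2}} = 0
    ```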

  20. Integrating Quantitative and Qualitative Results in Health Science Mixed Methods Research Through Joint Displays

    Science.gov (United States)

    Guetterman, Timothy C.; Fetters, Michael D.; Creswell, John W.

    2015-01-01

    PURPOSE Mixed methods research is becoming an important methodology to investigate complex health-related topics, yet the meaningful integration of qualitative and quantitative data remains elusive and needs further development. A promising innovation to facilitate integration is the use of visual joint displays that bring data together visually to draw out new insights. The purpose of this study was to identify exemplar joint displays by analyzing the various types of joint displays being used in published articles. METHODS We searched for empirical articles that included joint displays in 3 journals that publish state-of-the-art mixed methods research. We analyzed each of 19 identified joint displays to extract the type of display, mixed methods design, purpose, rationale, qualitative and quantitative data sources, integration approaches, and analytic strategies. Our analysis focused on what each display communicated and its representation of mixed methods analysis. RESULTS The most prevalent types of joint displays were statistics-by-themes and side-by-side comparisons. Innovative joint displays connected findings to theoretical frameworks or recommendations. Researchers used joint displays for convergent, explanatory sequential, exploratory sequential, and intervention designs. We identified exemplars for each of these designs by analyzing the inferences gained through using the joint display. Exemplars represented mixed methods integration, presented integrated results, and yielded new insights. CONCLUSIONS Joint displays appear to provide a structure to discuss the integrated analysis and assist both researchers and readers in understanding how mixed methods provides new insights. We encourage researchers to use joint displays to integrate and represent mixed methods analysis and discuss their value. PMID:26553895

  1. Application of Statistical Methods to Activation Analytical Results near the Limit of Detection

    DEFF Research Database (Denmark)

    Heydorn, Kaj; Wanscher, B.

    1978-01-01

    Reporting actual numbers instead of upper limits for analytical results at or below the detection limit may produce reliable data when these numbers are subjected to appropriate statistical processing. Particularly in radiometric methods, such as activation analysis, where individual standard deviations of analytical results may be estimated, improved discrimination may be based on the Analysis of Precision. Actual experimental results from a study of the concentrations of arsenic in human skin demonstrate the power of this principle.
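
    A precision-consistency check in the spirit of the Analysis of Precision can be sketched as below: replicate results with individually estimated standard deviations are combined into a weighted mean, and the scatter around it is compared with a chi-square distribution. The exact formulation used in the study is not given in the abstract, so this sketch and its data values are assumptions.

    ```python
    # Hedged sketch of a precision-consistency (chi-square) check on results with
    # individually estimated standard deviations; data values are invented.
    import math

    def weighted_mean(values, sigmas):
        w = [1.0 / s**2 for s in sigmas]
        return sum(wi * v for wi, v in zip(w, values)) / sum(w)

    def precision_statistic(values, sigmas):
        """T = sum((x_i - x_w)^2 / sigma_i^2); approximately chi-square with
        n - 1 degrees of freedom if the a priori uncertainties explain the scatter."""
        xw = weighted_mean(values, sigmas)
        return sum((v - xw) ** 2 / s**2 for v, s in zip(values, sigmas))

    # Hypothetical results near the detection limit (value, standard deviation);
    # negative raw results are reported as such rather than censored.
    values = [0.08, -0.02, 0.05, 0.11]
    sigmas = [0.04, 0.05, 0.04, 0.05]
    T = precision_statistic(values, sigmas)
    print(f"T = {T:.2f} with {len(values) - 1} degrees of freedom")
    ```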

  2. A Kinematic Study of Prosodic Structure in Articulatory and Manual Gestures: Results from a Novel Method of Data Collection

    Directory of Open Access Journals (Sweden)

    Jelena Krivokapić

    2017-03-01

    The primary goal of this work is to examine prosodic structure as expressed concurrently through articulatory and manual gestures. Specifically, we investigated the effects of phrase-level prominence (Experiment 1) and of prosodic boundaries (Experiments 2 and 3) on the kinematic properties of oral constriction and manual gestures. The hypothesis guiding this work is that prosodic structure will be similarly expressed in both modalities. To test this, we have developed a novel method of data collection that simultaneously records speech audio, vocal tract gestures (using electromagnetic articulometry) and manual gestures (using motion capture). This method allows us, for the first time, to investigate kinematic properties of body movement and vocal tract gestures simultaneously, which in turn allows us to examine the relationship between speech and body gestures with great precision. A second goal of the paper is thus to establish the validity of this method. Results from two speakers show that manual and oral gestures lengthen under prominence and at prosodic boundaries, indicating that the effects of prosodic structure extend beyond the vocal tract to include body movement.

  3. Ultrasonography-guided core needle biopsy for the thyroid nodule: does the procedure hold any benefit for the diagnosis when fine-needle aspiration cytology analysis shows inconclusive results?

    Science.gov (United States)

    Hahn, S Y; Han, B-K; Ko, E Y; Ko, E S

    2013-01-01

    Objective: We evaluated the diagnostic role of ultrasonography-guided core needle biopsy (CNB) according to ultrasonography features of thyroid nodules that had inconclusive ultrasonography-guided fine-needle aspiration (FNA) results. Methods: A total of 88 thyroid nodules in 88 patients who underwent ultrasonography-guided CNB because of previous inconclusive FNA results were evaluated. The patients were classified into three groups based on ultrasonography findings: Group A, which was suspicious for papillary thyroid carcinoma (PTC); Group B, which was suspicious for follicular (Hurthle cell) neoplasm; and Group C, which was suspicious for lymphoma. The final diagnoses of the thyroid nodules were determined by surgical confirmation or follow-up after ultrasonography-guided CNB. Results: Of the 88 nodules, the malignant rate was 49.1% in Group A, 12.0% in Group B and 90.0% in Group C. The rates of conclusive ultrasonography-guided CNB results after previous incomplete ultrasonography-guided FNA results were 96.2% in Group A, 64.0% in Group B and 90.0% in Group C (p=0.001). 12 cases with inconclusive ultrasonography-guided CNB results were finally diagnosed as 8 benign lesions, 3 PTCs and 1 lymphoma. The number of previous ultrasonography-guided FNA biopsies was not significantly different between the conclusive and the inconclusive result groups of ultrasonography-guided CNB (p=0.205). Conclusion: Ultrasonography-guided CNB has benefit for the diagnosis of thyroid nodules with inconclusive ultrasonography-guided FNA results. However, it is still not helpful for the differential diagnosis in 36% of nodules that are suspicious for follicular neoplasm seen on ultrasonography. Advances in knowledge: This study shows the diagnostic contribution of ultrasonography-guided CNB as an alternative to repeat ultrasonography-guided FNA or surgery. PMID:23564885

  4. This research is to study the factors which influence the business success of small business ‘processed rotan’. The data employed in the study are primary data within the period of July to August 2013, 30 research observations through census method. Method of analysis used in the study is multiple linear regressions. The results of analysis showed that the factors of labor, innovation and promotion have positive and significant influence on the business success of small business ‘processed rotan’ simultaneously. The analysis also showed that partially labor has positive and significant influence on the business success, yet innovation and promotion have insignificant and positive influence on the business success.

    OpenAIRE

    Nasution, Inggrita Gusti Sari; Muchtar, Yasmin Chairunnisa

    2013-01-01

    This research is to study the factors which influence the business success of small business ‘processed rotan’. The data employed in the study are primary data within the period of July to August 2013, 30 research observations through census method. Method of analysis used in the study is multiple linear regressions. The results of analysis showed that the factors of labor, innovation and promotion have positive and significant influence on the business success of small business ‘processed rotan’ simultaneously. The analysis also showed that partially labor has positive and significant influence on the business success, yet innovation and promotion have insignificant and positive influence on the business success.

  5. Aircrew Exposure To Cosmic Radiation Evaluated By Means Of Several Methods; Results Obtained In 2006

    International Nuclear Information System (INIS)

    Ploc, Ondrej; Spurny, Frantisek; Jadrnickova, Iva; Turek, Karel

    2008-01-01

    Routine evaluation of aircraft crew exposure to cosmic radiation in the Czech Republic is performed by means of a calculation method. Measurements onboard aircraft serve as a control tool for the routine method and allow comparison of results obtained by several methods. The following methods were used in 2006: (1) a mobile dosimetry unit (MDU) of the Liulin type, a spectrometer of the energy deposited in a Si detector; (2) two types of LET spectrometers based on chemically etched track detectors (TED); (3) two types of thermoluminescent detectors; and (4) two calculation methods. The MDU currently represents one of the most reliable instruments for evaluation of aircraft crew exposure to cosmic radiation. It is an active device which measures the total energy deposition (E_dep) in a semiconductor unit and, after appropriate calibration, can give separate estimates for the non-neutron and neutron-like components of H*(10). This contribution consists mostly of results acquired by means of this equipment; measurements with passive detectors and calculations are mentioned for comparison. Reasonably good agreement between all data sets was found.

  6. Comparison Of Simulation Results When Using Two Different Methods For Mold Creation In Moldflow Simulation

    Directory of Open Access Journals (Sweden)

    Kaushikbhai C. Parmar

    2017-04-01

    Simulation gives different results when different methods are used for the same case. Autodesk Moldflow Simulation software provides two different facilities for creating the mold for the simulation of the injection molding process: the mold can be created inside Moldflow, or it can be imported as a CAD file. The aim of this paper is to study the differences in the simulation results, such as mold temperature, part temperature, deflection in different directions, simulation time and coolant temperature, between these two methods.

  7. MULTICRITERIA METHODS IN PERFORMING COMPANIES’ RESULTS USING ELECTRONIC RECRUITING, CORPORATE COMMUNICATION AND FINANCIAL RATIOS

    Directory of Open Access Journals (Sweden)

    Ivana Bilić

    2011-02-01

    Human resources represent one of the most important company resources, responsible for creating a company's competitive advantage. In the search for the most valuable resources, companies use different methods. Lately, one of the growing methods is electronic recruiting, not only as a recruitment tool but also as a means of external communication. Additionally, in the process of corporate communication, companies nowadays use electronic corporate communication as the easiest, cheapest and simplest form of business communication. The aim of this paper is to investigate the relationship between three groups of criteria: the main characteristics of the electronic recruiting performed, corporate communication, and selected financial performance measures. The selected companies were ranked separately by each group of criteria using the multicriteria decision-making method PROMETHEE II. The main idea is to examine whether companies that are the highest performers on one group of criteria obtain similar results on the other groups of criteria or performance measures.
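
    The ranking step mentioned above can be illustrated with a minimal PROMETHEE II sketch using the "usual" (strict) preference function and equal criterion weights; the decision matrix below is invented and does not represent the companies or criteria of the study.

    ```python
    # Minimal PROMETHEE II sketch with the usual (strict) preference function and
    # equal weights; the decision matrix is invented, not data from the study.
    import numpy as np

    def promethee_ii(matrix, weights, maximize):
        """matrix: alternatives x criteria; returns net outranking flows."""
        m, k = matrix.shape
        pref = np.zeros((m, m))
        for a in range(m):
            for b in range(m):
                if a == b:
                    continue
                p = 0.0
                for j in range(k):
                    d = matrix[a, j] - matrix[b, j]
                    if not maximize[j]:
                        d = -d
                    p += weights[j] * (1.0 if d > 0 else 0.0)  # usual preference function
                pref[a, b] = p
        phi_plus = pref.sum(axis=1) / (m - 1)    # positive (leaving) flow
        phi_minus = pref.sum(axis=0) / (m - 1)   # negative (entering) flow
        return phi_plus - phi_minus              # net flow: higher is better

    matrix = np.array([[7.0, 3.2, 0.12],         # hypothetical companies x criteria
                       [5.5, 4.1, 0.18],
                       [8.1, 2.7, 0.09]])
    weights = np.array([1 / 3, 1 / 3, 1 / 3])
    net = promethee_ii(matrix, weights, maximize=[True, True, False])
    print("net flows:", net, "ranking (best first):", np.argsort(-net))
    ```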

  8. The Trojan Horse method for nuclear astrophysics: Recent results for direct reactions

    International Nuclear Information System (INIS)

    Tumino, A.; Gulino, M.; Spitaleri, C.; Cherubini, S.; Romano, S.; Cognata, M. La; Pizzone, R. G.; Rapisarda, G. G.; Lamia, L.

    2014-01-01

    The Trojan Horse method is a powerful indirect technique to determine the astrophysical factor for binary rearrangement processes A+x→b+B at astrophysical energies by measuring the cross section for the Trojan Horse (TH) reaction A+a→B+b+s in quasi-free kinematics. The Trojan Horse Method has been successfully applied to many reactions of astrophysical interest, both direct and resonant. In this paper, we focus on direct sub-processes. The theory of the THM for direct binary reactions is presented briefly, based on a few-body approach that takes into account the off-energy-shell effects and the initial- and final-state interactions. Examples of recent results are presented to demonstrate how the THM works experimentally.

  9. The Trojan Horse method for nuclear astrophysics: Recent results for direct reactions

    Energy Technology Data Exchange (ETDEWEB)

    Tumino, A.; Gulino, M. [Laboratori Nazionali del Sud, Istituto Nazionale di Fisica Nucleare, Catania, Italy and Università degli Studi di Enna Kore, Enna (Italy); Spitaleri, C.; Cherubini, S.; Romano, S. [Laboratori Nazionali del Sud, Istituto Nazionale di Fisica Nucleare, Catania, Italy and Dipartimento di Fisica e Astronomia, Università di Catania, Catania (Italy); Cognata, M. La; Pizzone, R. G.; Rapisarda, G. G. [Laboratori Nazionali del Sud, Istituto Nazionale di Fisica Nucleare, Catania (Italy); Lamia, L. [Dipartimento di Fisica e Astronomia, Università di Catania, Catania (Italy)

    2014-05-09

    The Trojan Horse method is a powerful indirect technique to determine the astrophysical factor for binary rearrangement processes A+x→b+B at astrophysical energies by measuring the cross section for the Trojan Horse (TH) reaction A+a→B+b+s in quasi-free kinematics. The Trojan Horse Method has been successfully applied to many reactions of astrophysical interest, both direct and resonant. In this paper, we focus on direct sub-processes. The theory of the THM for direct binary reactions is presented briefly, based on a few-body approach that takes into account the off-energy-shell effects and the initial- and final-state interactions. Examples of recent results are presented to demonstrate how the THM works experimentally.

  10. Monte Carlo Methods in Physics

    International Nuclear Information System (INIS)

    Santoso, B.

    1997-01-01

    The method of Monte Carlo integration is reviewed briefly and some of its applications in physics are explained. A numerical experiment on the random number generators used in Monte Carlo techniques is carried out to show how random the numbers produced by various methods are. To account for the weight function involved in Monte Carlo integration, the Metropolis method is used. The results of the experiment show no regular pattern in the generated numbers, indicating that the generators are reasonably good, and the experimental results follow the expected statistical distribution law. Some applications of Monte Carlo methods in physics are then given. The physical problems are chosen such that the models have known solutions, either exact or approximate, against which the Monte Carlo calculations can be compared. The comparisons show that good agreement is obtained for the models considered.
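
    As a concrete illustration of the plain Monte Carlo integration reviewed above (the Metropolis weighting discussed in the record is not reproduced here), the sketch below estimates a simple integral and its statistical error; the integrand and sample size are arbitrary choices.

    ```python
    # Plain Monte Carlo estimate of an integral with its statistical error,
    # checked against the exact value. Integrand and sample size are arbitrary.
    import math
    import random

    def mc_integrate(f, a, b, n, seed=0):
        rng = random.Random(seed)
        total = total_sq = 0.0
        for _ in range(n):
            fx = f(rng.uniform(a, b))
            total += fx
            total_sq += fx * fx
        mean = total / n
        var = total_sq / n - mean * mean
        return (b - a) * mean, (b - a) * math.sqrt(var / n)

    est, err = mc_integrate(math.sin, 0.0, math.pi, 100_000)
    print(f"estimate {est:.4f} +/- {err:.4f}, exact 2.0000")
    ```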

  11. Results of an interlaboratory comparison of analytical methods for contaminants of emerging concern in water.

    Science.gov (United States)

    Vanderford, Brett J; Drewes, Jörg E; Eaton, Andrew; Guo, Yingbo C; Haghani, Ali; Hoppe-Jones, Christiane; Schluesener, Michael P; Snyder, Shane A; Ternes, Thomas; Wood, Curtis J

    2014-01-07

    An evaluation of existing analytical methods used to measure contaminants of emerging concern (CECs) was performed through an interlaboratory comparison involving 25 research and commercial laboratories. In total, 52 methods were used in the single-blind study to determine method accuracy and comparability for 22 target compounds, including pharmaceuticals, personal care products, and steroid hormones, all at ng/L levels in surface and drinking water. Method biases varied considerably across compounds. Caffeine, NP, OP, and triclosan had false positive rates >15%. In addition, some methods reported false positives for 17β-estradiol and 17α-ethynylestradiol in unspiked drinking water and deionized water, respectively, at levels higher than published predicted no-effect concentrations for these compounds in the environment. False negative results also occurred and, together with the false positives, were attributed to contamination, misinterpretation of background interferences, and/or inappropriate setting of detection/quantification levels for analysis at low ng/L levels. The results of both comparisons were collectively assessed to identify parameters that resulted in the best overall method performance. Liquid chromatography-tandem mass spectrometry coupled with the calibration technique of isotope dilution was able to accurately quantify most compounds with an average bias of <10% for both matrixes. These findings suggest that this method of analysis is suitable at environmentally relevant levels for most of the compounds studied. This work underscores the need for robust, standardized analytical methods for CECs to improve data quality, increase comparability between studies, and help reduce false positive and false negative rates.

  12. A comparison of short-term dispersion estimates resulting from various atmospheric stability classification methods

    International Nuclear Information System (INIS)

    Mitchell, A.E. Jr.

    1982-01-01

    Four methods of classifying atmospheric stability class are applied at four sites to make short-term (1-h) dispersion estimates from a ground-level source based on a model consistent with U.S. Nuclear Regulatory Commission practice. The classification methods include vertical temperature gradient, standard deviation of horizontal wind direction fluctuations (sigma theta), Pasquill-Turner, and modified sigma theta which accounts for meander. Results indicate that modified sigma theta yields reasonable dispersion estimates compared to those produced using methods of vertical temperature gradient and Pasquill-Turner, and can be considered as a potential economic alternative in establishing onsite monitoring programs. (author)
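
    The sigma-theta approach mentioned above assigns a Pasquill stability class from the standard deviation of horizontal wind-direction fluctuations. The class boundaries in the sketch below follow the commonly quoted EPA scheme and are an assumption; the study's exact boundaries, and its meander modification, are not reproduced here.

    ```python
    # Hedged sketch: map sigma-theta (degrees) to a Pasquill stability class using
    # commonly quoted EPA boundaries; the study's exact scheme may differ and the
    # meander correction is omitted.
    def stability_class_from_sigma_theta(sigma_theta_deg):
        boundaries = [(22.5, "A"), (17.5, "B"), (12.5, "C"), (7.5, "D"), (3.8, "E")]
        for limit, pasquill_class in boundaries:
            if sigma_theta_deg >= limit:
                return pasquill_class
        return "F"

    for s in (25.0, 15.0, 9.0, 2.5):
        print(f"sigma-theta = {s:4.1f} deg -> class {stability_class_from_sigma_theta(s)}")
    ```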

  14. Testing the ISP method with the PARIO device: Accuracy of results and influence of homogenization technique

    Science.gov (United States)

    Durner, Wolfgang; Huber, Magdalena; Yangxu, Li; Steins, Andi; Pertassek, Thomas; Göttlein, Axel; Iden, Sascha C.; von Unold, Georg

    2017-04-01

    The particle-size distribution (PSD) is one of the main properties of soils. To determine the proportions of the fine fractions silt and clay, sedimentation experiments are used, most commonly the Pipette and Hydrometer methods. Both need manual sampling at specific times; both are thus time-demanding and rely on experienced operators. Durner et al. (Durner, W., S.C. Iden, and G. von Unold (2017): The integral suspension pressure method (ISP) for precise particle-size analysis by gravitational sedimentation, Water Resources Research, doi:10.1002/2016WR019830) recently developed the integral suspension pressure (ISP) method, which is implemented in the METER Group device PARIOTM. This new method estimates continuous PSDs from sedimentation experiments by recording the temporal evolution of the suspension pressure at a certain measurement depth in a sedimentation cylinder. It requires no manual interaction after the start and thus no specialized training of the lab personnel. The aim of this study was to test the precision and accuracy of the new method with a variety of materials, to answer the following research questions: (1) Are the results obtained by PARIO reliable and stable? (2) Are the results affected by the initial mixing technique used to homogenize the suspension, or by the presence of sand in the experiment? (3) Are the results identical to those obtained with the Pipette method as the reference method? The experiments were performed with a pure quartz silt material and four real soil materials. PARIO measurements were done repetitively on the same samples in a temperature-controlled lab to characterize the repeatability of the measurements. Subsequently, the samples were investigated by the Pipette method to validate the results. We found that the statistical error for the silt fraction from replicate and repetitive measurements was in the range of 1% for the quartz material to 3% for the soil materials. Since the sand fractions, as in any sedimentation method, must
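
    Like the Pipette and Hydrometer methods, the ISP approach rests on Stokes' law for gravitational settling. The sketch below computes the largest particle diameter still in suspension above a given measurement depth after a given time; the fluid properties (water at 20 degrees C), particle density and measurement depth are assumed values, not parameters from the study.

    ```python
    # Stokes'-law relation underlying sedimentation-based particle-size analysis:
    # the largest diameter still present above depth h after time t. Water at
    # 20 degC, quartz particle density and the 0.2 m depth are assumed values.
    import math

    ETA = 1.002e-3     # dynamic viscosity of water at 20 degC [Pa s]
    RHO_F = 998.2      # water density [kg/m^3]
    RHO_S = 2650.0     # assumed particle (quartz) density [kg/m^3]
    G = 9.81           # gravitational acceleration [m/s^2]

    def stokes_diameter(depth_m, time_s):
        """Diameter [m] of a particle that settles exactly depth_m in time_s."""
        return math.sqrt(18.0 * ETA * depth_m / ((RHO_S - RHO_F) * G * time_s))

    for t_h in (1, 24):
        d_um = stokes_diameter(0.2, t_h * 3600) * 1e6
        print(f"after {t_h:>2} h: particles coarser than {d_um:.1f} um have settled below 0.2 m")
    ```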

  15. The review and results of different methods for facial recognition

    Science.gov (United States)

    Le, Yifan

    2017-09-01

    In recent years, facial recognition has drawn much attention due to its wide range of potential applications. As a unique technology within biometric identification, facial recognition represents a significant advance because it can operate without the cooperation of the person being detected. Hence, facial recognition is being adopted in defense systems, medical detection, human behavior understanding, etc. Several theories and methods have been established to advance facial recognition: (1) a novel two-stage facial landmark localization method that achieves more accurate localization on a specific database; (2) a statistical face frontalization method that outperforms state-of-the-art methods for face landmark localization; (3) a general facial landmark detection algorithm that handles images with severe occlusion and with large head poses; (4) three methods for face alignment, including a shape-augmented regression method, a pose-indexed multi-view method, and a learning-based method that regresses local binary features. The aim of this paper is to analyze previous work on different aspects of facial recognition, focusing on the concrete methods and their performance on various databases. In addition, some improvement measures and suggestions for potential applications are put forward.

  16. Implicit methods for equation-free analysis: convergence results and analysis of emergent waves in microscopic traffic models

    DEFF Research Database (Denmark)

    Marschler, Christian; Sieber, Jan; Berkemer, Rainer

    2014-01-01

    We introduce a general formulation for an implicit equation-free method in the setting of slow-fast systems. First, we give a rigorous convergence result for equation-free analysis showing that the implicitly defined coarse-level time stepper converges to the true dynamics on the slow manifold ... against the direction of traffic. Equation-free analysis enables us to investigate the behavior of the microscopic traffic model on a macroscopic level. The standard deviation of cars' headways is chosen as the macroscopic measure of the underlying dynamics, such that traveling wave solutions correspond to equilibria on the macroscopic level in the equation-free setup. The collapse of the traffic jam to free flow then corresponds to a saddle-node bifurcation of this macroscopic equilibrium. We continue this bifurcation in two parameters using equation-free analysis.

  17. Dosimetric methods and results of measurement for total body electron irradiation

    International Nuclear Information System (INIS)

    Feng Ningyuan; Yu Geng; Yu Zihao

    1987-01-01

    A modified 'STANFORD TSEI TECHNIQUE' (dual-angled gantry, 6 turntable angles and 12 fields) was developed on a PHILIPS SL 75-20 linear accelerator to treat mycosis fungoides. A plastic scatter screen, 5 mm in thickness, was used to reduce the primary electron energy to 4 MeV in order to control the treatment depth (d_80 ≈ 1.2 cm) and to bring the skin dose up to 89%. The X-ray contamination was at an acceptable level of 2%. The measurements, which involved multiple dosimetric methods, showed that the distance between the scatter screen and the patient, within 10-30 cm, had no influence on the PDD, and that the dose distribution on the body surface was reasonably homogeneous but strongly dependent on anatomic position. For sites located in electron beam shadows, boost irradiation might be necessary. Preliminary clinical trials indicated that this technique is valid and feasible.

  18. Methods and optical fibers that decrease pulse degradation resulting from random chromatic dispersion

    Science.gov (United States)

    Chertkov, Michael; Gabitov, Ildar

    2004-03-02

    The present invention provides methods and optical fibers for periodically pinning the actual (random) accumulated chromatic dispersion of an optical fiber to the predicted accumulated dispersion of the fiber through relatively simple modifications of fiber-optic manufacturing methods or retrofitting of existing fibers. If the pinning occurs with sufficient frequency (at a distance less than or equal to a correlation scale), pulse degradation resulting from random chromatic dispersion is minimized. Alternatively, pinning may occur quasi-periodically, i.e., the pinning distance is distributed between approximately zero and approximately two to three times the correlation scale.

  19. Using television shows to teach communication skills in internal medicine residency

    Directory of Open Access Journals (Sweden)

    Ma Irene

    2009-02-01

    Background: To address evidence-based effective communication skills in the formal academic half-day curriculum of our core internal medicine residency program, we designed and delivered an interactive session using excerpts taken from medically themed television shows. Methods: We selected two excerpts from the television show House and one from Grey's Anatomy, and featured them in conjunction with a brief didactic presentation of the Kalamazoo consensus statement on doctor-patient communication. To assess the efficacy of this approach, a set of standardized questions was given to our residents once at the beginning and once at the completion of the session. Results: Our residents indicated that their understanding of an evidence-based model of effective communication such as the Kalamazoo model, and their comfort in applying such a model in clinical practice, increased significantly. Furthermore, residents' understanding of the seven essential competencies listed in the Kalamazoo model also improved significantly. Finally, the residents reported that their comfort in three challenging clinical scenarios presented to them improved significantly. Conclusion: We used popular television shows to teach residents in our core internal medicine residency program about effective communication skills with a focus on the Kalamazoo model. The results of the subjective assessment of this approach indicated that it was successful in accomplishing our objectives.

  20. Cell wall proteome of sugarcane stems: comparison of a destructive and a non-destructive extraction method showed differences in glycoside hydrolases and peroxidases.

    Science.gov (United States)

    Calderan-Rodrigues, Maria Juliana; Jamet, Elisabeth; Douché, Thibaut; Bonassi, Maria Beatriz Rodrigues; Cataldi, Thaís Regiani; Fonseca, Juliana Guimarães; San Clemente, Hélène; Pont-Lezica, Rafael; Labate, Carlos Alberto

    2016-01-11

    Sugarcane has been used as the main crop for ethanol production for more than 40 years in Brazil. Recently, the production of bioethanol from bagasse and straw, also called second generation (2G) ethanol, became a reality with the first commercial plants started in the USA and Brazil. However, the industrial processes still need to be improved to generate a low cost fuel. One possibility is the remodeling of cell walls, by means of genetic improvement or transgenesis, in order to make the bagasse more accessible to hydrolytic enzymes. We aimed at characterizing the cell wall proteome of young sugarcane culms, to identify proteins involved in cell wall biogenesis. Proteins were extracted from the cell walls of 2-month-old culms using two protocols, non-destructive by vacuum infiltration vs destructive. The proteins were identified by mass spectrometry and bioinformatics. A predicted signal peptide was found in 84 different proteins, called cell wall proteins (CWPs). As expected, the non-destructive method showed a lower percentage of proteins predicted to be intracellular than the destructive one (33% vs 44%). About 19% of CWPs were identified with both methods, whilst the infiltration protocol could lead to the identification of 75% more CWPs. In both cases, the most populated protein functional classes were those of proteins related to lipid metabolism and oxido-reductases. Curiously, a single glycoside hydrolase (GH) was identified using the non-destructive method whereas 10 GHs were found with the destructive one. Quantitative data analysis allowed the identification of the most abundant proteins. The results highlighted the importance of using different protocols to extract proteins from cell walls to expand the coverage of the cell wall proteome. Ten GHs were indicated as possible targets for further studies in order to obtain cell walls less recalcitrant to deconstruction. Therefore, this work contributed to two goals: enlarge the coverage of the sugarcane

  1. Characterization of the Darwin direct implicit particle-in-cell method and resulting guidelines for operation

    International Nuclear Information System (INIS)

    Gibbons, M.R.; Hewett, D.W.

    1997-01-01

    We investigate the linear dispersion and other properties of the Darwin Direct Implicit Particle-in-cell (DADIPIC) method in order to deduce guidelines for its use in the simulation of long time-scale, kinetic phenomena in plasmas. The Darwin part of this algorithm eliminates the Courant constraint for light propagation across a grid cell in a time step and divides the field solution into several elliptic equations. The direct implicit method is only applied to the electrostatic field relieving the need to resolve plasma oscillations. Linear theory and simulations verifying the theory are used to generate the desired guidelines as well as show the utility of DADIPIC for a wide range of low frequency, electromagnetic phenomena. We find that separation of the fields has made the task of predicting algorithm behavior easier and produced a robust method without restrictive constraints. 20 refs., 11 figs., 3 tabs

  2. Possible distortion of autoradiographic results

    Energy Technology Data Exchange (ETDEWEB)

    Kozlov, A.A.; Tumanushvili, G.D. (AN Gruzinskoj SSR, Tbilisi. Inst. Ehksperimental'noj Morfologii)

    1980-01-01

    The effect of radioactive labelling (³H-thymidine) on infusorian division is studied. The results presented show that introduction of the labelled compound accelerates the infusorian cell division rate. A thorough investigation of the effect of low-activity labelled compounds on the parameters of cell division, and a search for methods to eliminate the distortions that can appear in autoradiographic experiments, are expedient.

  3. Lesion insertion in the projection domain: Methods and initial results

    International Nuclear Information System (INIS)

    Chen, Baiyu; Leng, Shuai; Yu, Lifeng; Yu, Zhicong; Ma, Chi; McCollough, Cynthia

    2015-01-01

    Purpose: To perform task-based image quality assessment in CT, it is desirable to have a large number of realistic patient images with known diagnostic truth. One effective way of achieving this objective is to create hybrid images that combine patient images with inserted lesions. Because conventional hybrid images generated in the image domain fail to reflect the impact of scan and reconstruction parameters on lesion appearance, this study explored a projection-domain approach. Methods: Lesions were segmented from patient images and forward projected to acquire lesion projections. The forward-projection geometry was designed according to a commercial CT scanner and accommodated both axial and helical modes with various focal spot movement patterns. The energy employed by the commercial CT scanner for beam hardening correction was measured and used for the forward projection. The lesion projections were inserted into patient projections decoded from commercial CT projection data. The combined projections were formatted to match those of commercial CT raw data, loaded onto a commercial CT scanner, and reconstructed to create the hybrid images. Two validations were performed. First, to validate the accuracy of the forward-projection geometry, images were reconstructed from the forward projections of a virtual ACR phantom and compared to physically acquired ACR phantom images in terms of CT number accuracy and high-contrast resolution. Second, to validate the realism of the lesion in hybrid images, liver lesions were segmented from patient images and inserted back into the same patients, each at a new location specified by a radiologist. The inserted lesions were compared to the original lesions and visually assessed for realism by two experienced radiologists in a blinded fashion. Results: For the validation of the forward-projection geometry, the images reconstructed from the forward projections of the virtual ACR phantom were consistent with the images physically

  4. Lesion insertion in the projection domain: Methods and initial results

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Baiyu; Leng, Shuai; Yu, Lifeng; Yu, Zhicong; Ma, Chi; McCollough, Cynthia, E-mail: mccollough.cynthia@mayo.edu [Department of Radiology, Mayo Clinic, Rochester, Minnesota 55905 (United States)

    2015-12-15

    Purpose: To perform task-based image quality assessment in CT, it is desirable to have a large number of realistic patient images with known diagnostic truth. One effective way of achieving this objective is to create hybrid images that combine patient images with inserted lesions. Because conventional hybrid images generated in the image domain fail to reflect the impact of scan and reconstruction parameters on lesion appearance, this study explored a projection-domain approach. Methods: Lesions were segmented from patient images and forward projected to acquire lesion projections. The forward-projection geometry was designed according to a commercial CT scanner and accommodated both axial and helical modes with various focal spot movement patterns. The energy employed by the commercial CT scanner for beam hardening correction was measured and used for the forward projection. The lesion projections were inserted into patient projections decoded from commercial CT projection data. The combined projections were formatted to match those of commercial CT raw data, loaded onto a commercial CT scanner, and reconstructed to create the hybrid images. Two validations were performed. First, to validate the accuracy of the forward-projection geometry, images were reconstructed from the forward projections of a virtual ACR phantom and compared to physically acquired ACR phantom images in terms of CT number accuracy and high-contrast resolution. Second, to validate the realism of the lesion in hybrid images, liver lesions were segmented from patient images and inserted back into the same patients, each at a new location specified by a radiologist. The inserted lesions were compared to the original lesions and visually assessed for realism by two experienced radiologists in a blinded fashion. Results: For the validation of the forward-projection geometry, the images reconstructed from the forward projections of the virtual ACR phantom were consistent with the images physically

  5. RESULTS OF THE QUESTIONNAIRE: ANALYSIS METHODS

    CERN Multimedia

    Staff Association

    2014-01-01

    Five-yearly review of employment conditions   Article S V 1.02 of our Staff Rules states that the CERN “Council shall periodically review and determine the financial and social conditions of the members of the personnel. These periodic reviews shall consist of a five-yearly general review of financial and social conditions;” […] “following methods […] specified in § I of Annex A 1”. Then, turning to the relevant part in Annex A 1, we read that “The purpose of the five-yearly review is to ensure that the financial and social conditions offered by the Organization allow it to recruit and retain the staff members required for the execution of its mission from all its Member States. […] these staff members must be of the highest competence and integrity.” And for the menu of such a review we have: “The five-yearly review must include basic salaries and may include any other financial or soc...

  6. Dentoalveolar growth of patients with complete unilateral cleft lip and palate by early two-stage furlow and push-back method: preliminary results.

    Science.gov (United States)

    Kitagawa, Taiji; Kohara, Hiroshi; Sohmura, Taiji; Takahashi, Junzo; Tachimura, Takashi; Wada, Takeshi; Kogo, Mikihiko

    2004-09-01

    This study examined dentoalveolar growth changes prior to the time of palatoplasty up to 3 years of age by the early two-stage Furlow and push-back methods. Thirty-four Japanese patients with complete unilateral cleft lip and palate (UCLP) were treated with either a two-stage Furlow procedure (Furlow group: seven boys, eight girls) from 1998 to 2002 or a push-back procedure (push-back group: 12 boys, 7 girls) from 1993 to 1997. Consecutive plaster models were measured by a three-dimensional laser scanner before primary palatoplasty, before hard palate closure (Furlow group only), and at 3 years of age. Bite measures were taken at 3 years of age. In the Furlow group, arch length, canine width, first and second deciduous molar width and cross-sectional area, and depth and volume at midpoint showed greater growth than in the push-back group. In the Furlow group, the crossbite score was also better than in the push-back group at 3 years of age. In comparison with the push-back group, inhibition of growth impediment in the anterior region was observed in the horizontal direction in the Furlow group. In the midregion, it was observed in the horizontal and vertical directions, and in the posterior region it was observed in the horizontal direction. The results demonstrate that the early two-stage Furlow method showed progressive alveolar growth. Therefore, the early two-stage Furlow method is a more beneficial procedure than the push-back method.

  7. Groundwater Seepage Estimation into Amirkabir Tunnel Using Analytical Methods and DEM and SGR Method

    OpenAIRE

    Hadi Farhadian; Homayoon Katibeh

    2015-01-01

    In this paper, groundwater seepage into the Amirkabir tunnel has been estimated using analytical and numerical methods for 14 different sections of the tunnel. The Site Groundwater Rating (SGR) method has also been applied for qualitative and quantitative classification of the tunnel sections. The results of the above-mentioned methods were compared. The study shows reasonable agreement among the results of all methods except for two sections of the tunnel. In these t...
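
    The truncated abstract does not state which analytical expressions were used. Purely as an illustration of an analytical seepage estimate of this kind, the sketch below evaluates Goodman's classical relation for steady-state inflow into a deep circular tunnel; the choice of formula and the parameter values are assumptions, not details taken from the Amirkabir study.

```python
import math

def goodman_inflow_per_metre(K, h, r):
    """Steady-state groundwater inflow (m^3/s per m of tunnel) into a circular
    tunnel: Q = 2*pi*K*h / ln(2h/r).  K: hydraulic conductivity (m/s),
    h: water-table head above the tunnel axis (m), r: tunnel radius (m)."""
    return 2.0 * math.pi * K * h / math.log(2.0 * h / r)

# Hypothetical tunnel section (values are illustrative only).
K, h, r = 1e-7, 80.0, 3.0
print(f"estimated inflow: {goodman_inflow_per_metre(K, h, r):.2e} m^3/s per metre")
```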

  8. Quantitative measurement of post-irradiation neck fibrosis based on the young modulus: description of a new method and clinical results.

    Science.gov (United States)

    Leung, Sing-Fai; Zheng, Yongping; Choi, Charles Y K; Mak, Suzanne S S; Chiu, Samuel K W; Zee, Benny; Mak, Arthur F T

    2002-08-01

    Postirradiation fibrosis is one of the most common late effects of radiation therapy for patients with head and neck carcinoma. An objective and quantitative method for its measurement is much desired, but the criteria currently used to score fibrosis are mostly semiquantitative and partially subjective. The Young Modulus (YM) is a physical parameter that characterizes the deformability of material to stress. The authors measured the YM in soft tissues of the neck, at defined reference points, using an ultrasound probe and computer algorithm that quantified the indentation (deformation) on tissue due to a measured, applied force. One hundred five patients who had received previous radiation therapy to the entire neck were assessed, and the results were compared with the hand palpation scores and with a functional parameter represented by the range of neck rotation, and all results were correlated with symptoms. The YM was obtained successfully in all patients examined. It had a significant positive correlation with the palpation score and a significant negative correlation with the range of neck rotation. The YM was significantly higher on the side of the neck that received a boost dose of radiation, although the corresponding palpation scores were similar. The results of all three measurement methods were correlated with symptoms. Postirradiation neck fibrosis can be measured in absolute units based on the YM. The results showed a significant correlation with hand palpation scores, with restriction of neck rotation, and with symptoms. Compared with the palpation method, the YM is more quantitative, objective, focused on small subregions, and better discriminates regions subject to differential radiation dose levels. Its inclusion in the Analytic category of the Late Effects of Normal Tissues-SOMA system should be considered to facilitate comparative studies. Copyright 2002 American Cancer Society.
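
    The abstract states that the Young modulus was obtained from a measured applied force and the resulting indentation, without giving the algorithm. The sketch below shows one crude way such a quantity could be estimated, treating the tissue as a thin linear-elastic slab; the slab assumption, the function name, and all numbers are illustrative, not the authors' computer algorithm.

```python
import numpy as np

def young_modulus_slab(force_N, probe_area_m2, indentation_m, thickness_m):
    """Crude slab approximation: E = stress / strain,
    with stress = F / A and strain = indentation / tissue thickness."""
    stress = np.asarray(force_N) / probe_area_m2
    strain = np.asarray(indentation_m) / thickness_m
    # Fit a line through the origin: E is the slope of stress versus strain.
    return float(np.sum(stress * strain) / np.sum(strain * strain))

# Illustrative numbers only (not patient data from the study).
force = [0.5, 1.0, 1.5, 2.0]                     # N
indent = [0.4e-3, 0.8e-3, 1.2e-3, 1.6e-3]        # m
E = young_modulus_slab(force, probe_area_m2=7.9e-5,
                       indentation_m=indent, thickness_m=10e-3)
print(f"estimated Young modulus: {E / 1000:.1f} kPa")
```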

  9. [Analysis of the results of bone healing in femurs lengthened by the gradual distraction method in children and adolescents].

    Science.gov (United States)

    Jochymek, J; Skvaril, J; Ondrus, S

    2009-10-01

    Treatment of leg length inequality via lengthening of the shorter extremity is an infrequent orthopedic procedure due to the requirement of special distraction devices and possible serious complications. Essential qualitative changes in operative technique development are associated with the name of G. A. Ilizarov, who paved the way for the autoregenerate gradual distraction method in the 1950s. In the years 1990 through 2007 a total of 67 patients underwent femur lengthening via gradual distraction using various types of external fixators at the Department of Pediatric Surgery, Orthopedics, and Traumatology, Faculty Hospital in Brno. The quality of bone healing was monitored and a number of parameters followed and statistically evaluated using regularly scheduled X-ray examinations. In 13 cases we had to remove the external fixator following the distraction phase, perform an osteosynthesis via a splint and fill the distraction gap via spongioplasty. The bone healing was satisfactory in the remaining 54 patients and the lengthened bone required no other fixation method. The analysis showed statistically significant deceleration in bone healing following distraction in female patients over 12 years of age, and in boys over 14 years of age. Lack of periosteal callus five weeks after surgery always signified serious problems in further healing. Severe complications were recorded in 11 cases during the distraction phase, and in 12 cases after the removal of the distraction apparatus. Our results fully correspond with the data and experience of other cited authors. In addition, our study showed deceleration in bone healing in girls over 12 years and in boys over 14 years of age, and serious problems in healing when periosteal callus is lacking five weeks after surgery. The aim of this report was to present the results of our study of distraction gap bone healing using the gradual lengthening approach. Key words: leg lengthening, gradual distraction, external fixation, leg

  10. Learning phacoemulsification. Results of different teaching methods.

    Directory of Open Access Journals (Sweden)

    Hennig Albrecht

    2004-01-01

    Full Text Available We report the learning curves of three eye surgeons converting from sutureless extracapsular cataract extraction to phacoemulsification using different teaching methods. Posterior capsule rupture (PCR) as a per-operative complication and visual outcome of the first 100 operations were analysed. The PCR rate was 4% and 15% in supervised and unsupervised surgery respectively. Likewise, an uncorrected visual acuity of ≥ 6/18 on the first postoperative day was seen in 62 (62%) of patients and in 22 (22%) in supervised and unsupervised surgery respectively.

  11. Long-term mental wellbeing of adolescents and young adults diagnosed with venous thromboembolism: results from a multistage mixed methods study.

    Science.gov (United States)

    Højen, A A; Sørensen, E E; Dreyer, P S; Søgaard, M; Larsen, T B

    2017-12-01

    Essentials Long-term mental wellbeing of adolescents and young adults with venous thromboembolism is unclear. This multistage mixed methods study was based on Danish nationwide registry data and interviews. Mental wellbeing is negatively impacted in the long-term and uncertainty of recurrence is pivotal. The perceived health threat is more important than disease severity for long-term mental wellbeing. Background Critical and chronic illness in youth can lead to impaired mental wellbeing. Venous thromboembolism (VTE) is a potentially traumatic and life-threatening condition. Nonetheless, the long-term mental wellbeing of adolescents and young adults (AYAS) with VTE is unclear. Objectives To investigate the long-term mental wellbeing of AYAS (aged 13-33 years) diagnosed with VTE. Methods We performed a multistage mixed method study based on data from the Danish nationwide health registries, and semistructured interviews with 12 AYAS diagnosed with VTE. An integrated mixed methods interpretation of the findings was conducted through narrative weaving and joint displays. Results The integrated mixed methods interpretation showed that the mental wellbeing of AYAS with VTE had a chronic perspective, with a persistently higher risk of psychotropic drug purchase among AYAS with a first-time diagnosis of VTE than among sex-matched and age-matched population controls and AYAS with a first-time diagnosis of insulin-dependent diabetes mellitus. Impaired mental wellbeing was largely connected to a fear of recurrence and concomitant uncertainty. Therefore, it was important for the long-term mental wellbeing to navigate uncertainty. The perceived health threat played a more profound role in long-term mental wellbeing than disease severity, as the potential life threat was the pivot which pointed back to the initial VTE and forward to the perception of future health threat and the potential risk of dying of a recurrent event. Conclusion Our findings show that the long

  12. Radiolabelling of RC-160: preliminary results

    International Nuclear Information System (INIS)

    Verdera, E.S.; Balter Binsky, H.S.; Robles, A.M.; Rodriguez, G.; Souto, B.; Laiz, J.; Oliver, P.; Leon, E.

    1998-01-01

    Vapreotide (RC-160) was labelled with 125I using the Chloramine-T and Iodogen methods and with 99mTc by a direct method with sodium dithionite as reducing agent in the presence of ascorbic acid. Several methods of purification and quality control were evaluated. Yields of the reactions and of the purification steps were calculated. The results obtained for the radioiodination reactions showed higher yields when the limiting Chloramine-T method was used. Labelling of RC-160 with 99mTc gave better yields when a high radioactivity concentration of the radionuclide was used. Stability of the products obtained was assessed at different post-labelling times by selected quality control methods: Sep-Pak cartridge as purification method and chromatography by RP-HPLC and ITLC-SG using saline solution as solvent. It was demonstrated that I-125-RC-160 and Tc-99m-RC-160 were stable for five weeks (at -20 deg. C) and 6 hours (at room temperature), respectively. Preliminary biodistribution studies of Tc-99m-RC-160 in normal rats and mice were performed, showing different biological behaviour compared with control animals injected with pertechnetate. In conclusion, RC-160 was successfully labelled with both radionuclides, with radiochemical purity higher than 95%. These results encourage further research work in animal models as well as investigation of the biochemical behaviour of the radiolabelled peptide. (author)

  13. Comparison of two dietary assessment methods by food consumption: results of the German National Nutrition Survey II.

    Science.gov (United States)

    Eisinger-Watzl, Marianne; Straßburg, Andrea; Ramünke, Josa; Krems, Carolin; Heuer, Thorsten; Hoffmann, Ingrid

    2015-04-01

    To further characterise the performance of the diet history method and the 24-h recalls method, both in an updated version, a comparison was conducted. The National Nutrition Survey II, representative for Germany, assessed food consumption with both methods. The comparison was conducted in a sample of 9,968 participants aged 14-80. Besides calculating mean differences, the statistical agreement measures encompassed Spearman and intraclass correlation coefficients, ranking participants in quartiles and the Bland-Altman method. Mean consumption of 12 out of 18 food groups was assessed higher with the diet history method. Three of these 12 food groups had a medium to large effect size (e.g., raw vegetables) and seven showed at least a small effect, while there was basically no difference for coffee/tea or ice cream. Intraclass correlations were strong only for beverages (>0.50) and were weakest for vegetables. The requirement of the diet history method to remember consumption over the past 4 weeks may be a source of inaccuracy, especially for inhomogeneous food groups. Additionally, social desirability gains significance. There is no assessment method without errors, and attention to specific food groups is a critical issue with every method. Altogether, the 24-h recalls method applied in the presented study offers advantages in approximating food consumption as compared to the diet history method.
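
    The agreement measures named above (mean differences, Spearman correlation, Bland-Altman limits) can be sketched for a single food group as follows; the helper name and the toy intake values are assumptions, and the intraclass correlation and quartile ranking used in the survey are omitted for brevity.

```python
import numpy as np
from scipy.stats import spearmanr

def agreement_summary(diet_history, recalls_24h):
    """Paired intakes (same participants, same food group) from the two methods."""
    x, y = np.asarray(diet_history, float), np.asarray(recalls_24h, float)
    diff = x - y
    rho, _ = spearmanr(x, y)
    lo = diff.mean() - 1.96 * diff.std(ddof=1)   # lower Bland-Altman limit
    hi = diff.mean() + 1.96 * diff.std(ddof=1)   # upper Bland-Altman limit
    return {"mean_difference": diff.mean(),
            "spearman_rho": rho,
            "bland_altman_limits": (lo, hi)}

# Toy intakes (g/day) standing in for one food group.
dh  = [220, 180, 305, 150, 410, 260]
r24 = [190, 175, 280, 160, 350, 240]
print(agreement_summary(dh, r24))
```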

  14. Comparison of microstickies measurement methods. Part II, Results and discussion

    Science.gov (United States)

    Mahendra R. Doshi; Angeles Blanco; Carlos Negro; Concepcion Monte; Gilles M. Dorris; Carlos C. Castro; Axel Hamann; R. Daniel Haynes; Carl Houtman; Karen Scallon; Hans-Joachim Putz; Hans Johansson; R. A. Venditti; K. Copeland; H.-M. Chang

    2003-01-01

    In part I of the article we discussed sample preparation procedure and described various methods used for the measurement of microstickies. Some of the important features of different methods are highlighted in Table 1. Temperatures used in the measurement methods vary from room temperature in some cases, 45 °C to 65 °C in other cases. Sample size ranges from as low as...

  15. Project Oriented Immersion Learning: Method and Results

    DEFF Research Database (Denmark)

    Icaza, José I.; Heredia, Yolanda; Borch, Ole M.

    2005-01-01

    A pedagogical approach called “project oriented immersion learning” is presented and tested on a graduate online course. The approach combines the Project Oriented Learning method with immersion learning in a virtual enterprise. Students assumed the role of authors hired by a fictitious publishing...... house that develops digital products including e-books, tutorials, web sites and so on. The students defined the problem that their product was to solve; chose the type of product and the content; and built the product following a strict project methodology. A wiki server was used as a platform to hold...

  16. Measurement of gamma ray from fuel of high temperature engineering test reactor. Method of measurement and results

    Energy Technology Data Exchange (ETDEWEB)

    Fujimoto, Nozomu; Nojiri, Naoki; Takada, Eiji [Japan Atomic Energy Research Inst., Oarai, Ibaraki (Japan). Oarai Research Establishment] [and others

    2001-02-01

    To obtain information from the HTTR core directly, gamma rays from fuel blocks were measured when the fuel blocks were discharged from the core and reloaded to the core. Gamma rays were measured using a GM detector, a CZT semiconductor detector installed in a door valve, and area monitors installed in a stand pipe compartment. The measurement was carried out for 20 fuel blocks in 4 columns, considering the symmetry of the uranium enrichment distribution in the core. The relative axial distribution in each column obtained by the GM and CZT detectors agreed with calculated results. However, calculated values were higher than measured values in the upper region of the core and lower in the lower region of the core. The axial distributions were also evaluated by the area monitors. The measured values agreed with calculated values. It became clear that it is possible to obtain data from inside the core by this method. (author)

  17. Methods used by Elsam for monitoring precision and accuracy of analytical results

    Energy Technology Data Exchange (ETDEWEB)

    Hinnerskov Jensen, J [Soenderjyllands Hoejspaendingsvaerk, Faelleskemikerne, Aabenraa (Denmark)

    1996-12-01

    Performing round robins at regular intervals is the primary method used by Elsam for monitoring precision and accuracy of analytical results. The first round robin was started in 1974, and today 5 round robins are running. These are focused on: boiler water and steam, lubricating oils, coal, ion chromatography and dissolved gases in transformer oils. Besides the power plant laboratories in Elsam, the participants are power plant laboratories from the rest of Denmark, industrial and commercial laboratories in Denmark, and finally foreign laboratories. The calculated standard deviations or reproducibilities are compared with acceptable values. These values originate from ISO, ASTM and the like, or from own experience. Besides providing the laboratories with a tool to check their momentary performance, the round robins are very suitable for evaluating systematic developments on a long-term basis. By splitting up the uncertainty according to methods, sample preparation/analysis, etc., knowledge can be extracted from the round robins for use in many other situations. (au)

  18. comparison of elastic-plastic FE method and engineering method for RPV fracture mechanics analysis

    International Nuclear Information System (INIS)

    Sun Yingxue; Zheng Bin; Zhang Fenggang

    2009-01-01

    This paper describes the FE analysis of elastic-plastic fracture mechanics for a crack in the RPV beltline using the ABAQUS code. It calculates and evaluates the stress intensity factor and the J integral of the crack under PTS transients. The result is also compared with that obtained by an engineering analysis method. It shows that the results of the engineering analysis method are a little larger than the results of the 3D elastic-plastic fracture mechanics FE analysis; thus, the engineering analysis method is more conservative than the elastic-plastic fracture mechanics method. (authors)

  19. Hydrogen storage in single-walled carbon nanotubes: methods and results

    International Nuclear Information System (INIS)

    Poirier, E.; Chahine, R.; Tessier, A.; Cossement, D.; Lafi, L.; Bose, T.K.

    2004-01-01

    We present high sensitivity gravimetric and volumetric hydrogen sorption measurement systems adapted for in situ conditioning under high temperature and high vacuum. These systems, which allow for precise measurements on small samples and thorough degassing, are used for sorption measurements on carbon nanostructures. We developed one volumetric system for the pressure range 0-1 bar, and two gravimetric systems for 0-1 bar and 0-100 bars. The use of both gravimetric and volumetric methods allows for the cross-checking of the results. The accuracy of the systems has been determined from hydrogen absorption measurements on palladium. The accuracies of the 0-1 bar volumetric and gravimetric systems are about 10 μg and 20 μg respectively. The accuracy of the 0-100 bars gravimetric system is about 20 μg. Hydrogen sorption measurements on single-walled carbon nanotubes (SWNTs) and metal-incorporated- SWNTs are presented. (author)

  20. Dietary fiber showed no preventive effect against colon and rectal cancers in Japanese with low fat intake: an analysis from the results of nutrition surveys from 23 Japanese prefectures

    Directory of Open Access Journals (Sweden)

    Sugawara Kazuo

    2001-10-01

    Full Text Available Abstract Background Since Fuchs' report in 1999, the reported protective effect of dietary fiber from colorectal carcinogenesis has led many researchers to question its real benefit. The aim of this study is to evaluate the association between diet, especially dietary fiber and fat, and colorectal cancer in Japan. Methods A multiple regression analysis (using the stepwise variable selection method) was performed using the standardized mortality ratios (SMRs) of colon and rectal cancer in 23 Japanese prefectures as objective variables and dietary fiber, nutrients and food groups as explanatory variables. Results As for colon cancer, the standardized partial correlation coefficients were positively significant for fat (1.13, P = 0.000), seaweeds (0.41, P = 0.026) and beans (0.45, P = 0.017) and were negatively significant for vitamin A (-0.63, P = 0.003), vitamin C (-0.42, P = 0.019) and yellow-green vegetables (-0.37, P = 0.046). For rectal cancer, the standardized partial correlation coefficient for fat (0.60, P = 0.002) was positively significant. Dietary fiber was not found to have a significant relationship with either colon or rectal cancers. Conclusions This study failed to show any protective effect of dietary fiber in subjects with a low fat intake (Japanese in this analysis), which supports Fuchs' findings in subjects with a high fat intake (US Americans).
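
    The analysis above is a multiple regression with stepwise variable selection, using prefecture-level SMRs as the response. The sketch below implements a simple greedy forward selection by adjusted R^2 as an illustrative stand-in; the study's exact selection criterion and software are not specified here, and the data are synthetic.

```python
import numpy as np

def forward_stepwise(X, y, names, min_gain=0.01):
    """Greedy forward selection of explanatory variables by adjusted R^2
    (an illustrative stand-in for the stepwise procedure named above)."""
    n, chosen, remaining = len(y), [], list(range(X.shape[1]))
    best_adj = -np.inf
    while remaining:
        scores = []
        for j in remaining:
            cols = chosen + [j]
            A = np.column_stack([np.ones(n), X[:, cols]])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            r2 = 1.0 - ((y - A @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
            scores.append((1.0 - (1.0 - r2) * (n - 1) / (n - len(cols) - 1), j))
        adj, j = max(scores)
        if adj - best_adj < min_gain:
            break
        best_adj = adj
        chosen.append(j)
        remaining.remove(j)
    return [names[j] for j in chosen], best_adj

# Toy data standing in for 23 prefectures: SMR explained by standardized intakes.
rng = np.random.default_rng(0)
X = rng.normal(size=(23, 4))                    # fat, dietary fiber, vitamin A, vitamin C
y = 1.1 * X[:, 0] - 0.6 * X[:, 2] + rng.normal(scale=0.3, size=23)
print(forward_stepwise(X, y, ["fat", "dietary fiber", "vitamin A", "vitamin C"]))
```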

  1. Non-invasive approach towards the in vivo estimation of 3D inter-vertebral movements: methods and preliminary results.

    Science.gov (United States)

    Cerveri, P; Pedotti, A; Ferrigno, G

    2004-12-01

    A kinematical model of the lower spine was designed and used to obtain a robust estimation of the vertebral rotations during torso movements from skin-surface markers recorded by video-cameras. Markers were placed in correspondence of the anatomical landmarks of the pelvic bone and vertebral spinous and transverse processes, and acquired during flexion, lateral bending and axial motions. In the model calibration stage, a motion-based approach was used to compute the rotation axes and centres of the functional segmental units. Markers were mirrored into virtual points located on the model surface, expressed in the local reference system of coordinates. The spine motion assessment was solved into the domain of extended Kalman filters: at each frame of the acquisition, the model pose was updated by minimizing the distances between the measured 2D marker projections on the cameras and the corresponding back-projections of virtual points located on the model surface. The novelty of the proposed technique rests on the fact that the varying location of the rotation centres of the functional segmental units can be tracked directly during motion computation. In addition, we show how the effects of skin artefacts on orientation data can be taken into account. As a result, the kinematical estimation of simulated motions shows that orientation artefacts were reduced by a factor of at least 50%. Preliminary experiments on real motion confirmed the reliability of the proposed method with results in agreement with classical studies in literature.
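
    The pose update described above is solved with extended Kalman filters that minimize the residuals between measured 2D marker projections and back-projected model points. A generic EKF measurement update of that form is sketched below; the projection function h, its Jacobian, and the one-dimensional example state are placeholders, not the authors' lower-spine model.

```python
import numpy as np

def ekf_update(x, P, z, h, H_jac, R):
    """One extended-Kalman-filter measurement update.
    x, P  : state (model pose) and covariance
    z     : measured 2D marker projections stacked into a vector
    h     : function mapping pose -> predicted marker projections
    H_jac : function returning the Jacobian of h at x
    R     : measurement noise covariance"""
    H = H_jac(x)
    y = z - h(x)                                   # innovation (projection residuals)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Hypothetical 1-DOF example: state = one flexion angle, markers projected linearly.
h = lambda x: np.array([2.0 * x[0], -1.0 * x[0]])
H_jac = lambda x: np.array([[2.0], [-1.0]])
x, P = np.array([0.1]), np.eye(1) * 0.5
z = np.array([0.42, -0.19])                        # "measured" projections
x, P = ekf_update(x, P, z, h, H_jac, np.eye(2) * 0.01)
print(x, P)
```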

  2. Enhancing activated-peroxide formulations for porous materials: Test methods and results

    Energy Technology Data Exchange (ETDEWEB)

    Krauter, Paula [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Tucker, Mark D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Tezak, Matthew S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Boucher, Raymond [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2012-12-01

    During an urban wide-area incident involving the release of a biological warfare agent, the recovery/restoration effort will require extensive resources and will tax the current capabilities of the government and private contractors. In fact, resources may be so limited that decontamination by facility owners/occupants may become necessary and a simple decontamination process and material should be available for this use. One potential process for use by facility owners/occupants would be a liquid sporicidal decontaminant, such as pH-amended bleach or activated-peroxide, and simple application devices. While pH-amended bleach is currently the recommended low-tech decontamination solution, a less corrosive and toxic decontaminant is desirable. The objective of this project is to provide an operational assessment of an alternative to chlorine bleach for low-tech decontamination applications: activated hydrogen peroxide. This report provides the methods and results for activated-peroxide evaluation experiments. The results suggest that the efficacy of an activated-peroxide decontaminant is similar to pH-amended bleach on many common materials.

  3. Why Different Drought Indexes Show Distinct Future Drought Risk Outcomes in the U.S. Great Plains?

    Science.gov (United States)

    Feng, S.; Hayes, M. J.; Trnka, M.

    2015-12-01

    Vigorous discussions and disagreements about the future changes in drought intensity in the US Great Plains have been taking place recently within the literature. These discussions have involved widely varying estimates based on drought indices and model-based projections of the future. To investigate and understand the causes for such a disparity between these previous estimates, we analyzed 10 commonly-used drought indices using the output from 26 state-of-the-art climate models. These drought indices were computed using potential evapotranspiration estimated by the physically-based Penman-Monteith method (PE_pm) and the empirically-based Thornthwaite method (PE_th). The results showed that the short-term drought indicators are similar to modeled surface soil moisture and show a small but consistent drying trend in the future. The long-term drought indicators and the total column soil moisture, however, are consistent in projecting more intense future drought. When normalized, the drought indices with PE_th all show unprecedented and possibly unrealistic future drying, while the drought indices with PE_pm show comparable dryness with the modeled soil moisture. Additionally, the drought indices with PE_pm are closely related to soil moisture during both the 20th and 21st Centuries. Overall, the drought indices with PE_pm, as well as the modeled total column soil moisture, suggest a widespread and very significant drying of the Great Plains region toward the end of the Century. Our results suggested that the sharp contrasts in future drought risk in the Great Plains discussed in previous studies are caused by 1) comparing the projected changes in short-term droughts with that of the long-term droughts, and/or 2) computing the atmospheric evaporative demand using the empirically-based method (e.g., PE_th). Our analysis may be applied for drought projections in other regions across the globe.
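
    Part of the disparity discussed above comes from how potential evapotranspiration is estimated. As a hedged illustration of why the empirically-based option reacts so strongly to warming, the sketch below evaluates the standard Thornthwaite (1948) monthly PET, which depends only on temperature and day length, unlike Penman-Monteith, which also requires radiation, humidity and wind; the coefficients are the commonly quoted ones and the temperatures are invented.

```python
def thornthwaite_pet(monthly_T_c, day_length_hours=12.0, days_in_month=30):
    """Thornthwaite (1948) monthly potential evapotranspiration (mm).
    It depends only on temperature, which is why PE_th-based drought indices
    respond so strongly to projected warming."""
    T = [max(t, 0.0) for t in monthly_T_c]
    I = sum((t / 5.0) ** 1.514 for t in T if t > 0)            # annual heat index
    a = 6.75e-7 * I**3 - 7.71e-5 * I**2 + 1.792e-2 * I + 0.49239
    return [16.0 * (day_length_hours / 12.0) * (days_in_month / 30.0)
            * ((10.0 * t / I) ** a if I > 0 and t > 0 else 0.0)
            for t in T]

# Toy mid-latitude monthly mean temperatures (deg C), illustrative only.
temps = [-2, 0, 5, 11, 17, 22, 25, 24, 19, 12, 5, 0]
print([round(p, 1) for p in thornthwaite_pet(temps)])
```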

  4. Some results about the dating of pre hispanic mexican ceramics by the thermoluminescence method

    International Nuclear Information System (INIS)

    Gonzalez M, P.; Mendoza A, D.; Ramirez L, A.; Schaaf, P.

    2004-01-01

    One of the most frequently recurring questions in Archaeometry concerns the age of the studied objects. Some of the first dating methods were based on historical narrations, building style and manufacturing techniques. However, it has been observed that, as a consequence of continuous irradiation from naturally occurring radioisotopes and from cosmic rays, some materials, such as archaeological ceramics, accumulate a certain quantity of energy. These types of material can, in principle, be dated through the analysis of this accumulated energy. In that case, ceramic dating can be realized by thermoluminescence (TL) dating. In this work, results obtained by our research group on TL dating of ceramics belonging to several archaeological zones such as Edzna (Campeche), Calixtlahuaca and Teotenango (Mexico State) and Hervideros (Durango) are presented. The analysis was realized using the fine-grained mode in a Daybreak model 1100 TL reader system. The radioisotopes that contribute to the accumulated annual dose in ceramic samples (40K, 238U, 232Th) were determined by means of techniques such as Energy Dispersive X-ray Spectroscopy (EDS) and Neutron Activation Analysis (AAN). Our results agree with results obtained through other methods. (Author) 7 refs., 2 tabs., 5 figs
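
    The dating principle summarized above reduces to dividing the accumulated (palaeo)dose by the annual dose rate contributed by 40K, 238U, 232Th and cosmic rays. A minimal sketch of that arithmetic, with purely illustrative numbers rather than the measured values for these sites, is shown below.

```python
def tl_age_years(paleodose_Gy, dose_rate_mGy_per_year):
    """Thermoluminescence age: accumulated dose divided by the annual dose rate."""
    return paleodose_Gy / (dose_rate_mGy_per_year * 1e-3)

# Hypothetical split of the annual dose rate (mGy/a): 40K, U/Th, internal, cosmic.
annual_dose = 1.2 + 1.8 + 0.9 + 0.2
print(f"TL age: {tl_age_years(2.5, annual_dose):.0f} years")   # illustrative only
```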

  5. Improving the accuracy of myocardial perfusion scintigraphy results by machine learning method

    International Nuclear Information System (INIS)

    Groselj, C.; Kukar, M.

    2002-01-01

    Full text: Machine learning (ML), a rapidly growing subfield of artificial intelligence, has already proven in the last decade to be a useful tool in many fields of decision making, including some fields of medicine. Its decision accuracy usually exceeds the human one. The aim was to assess the applicability of ML in interpreting the results of stress myocardial perfusion scintigraphy for CAD diagnosis. Data from 327 patients who underwent planar stress myocardial perfusion scintigraphy were re-evaluated in the usual way. By comparing them with the results of coronary angiography, the sensitivity, specificity and accuracy of the investigation were computed. The data were digitized and the decision procedure was repeated by the ML program 'Naive Bayesian classifier'. As ML can handle any number of data items simultaneously, all available disease-related data (regarding history, habitus, risk factors, stress results) were added. The sensitivity, specificity and accuracy for scintigraphy were expressed in this way. The results of both decision procedures were compared. With the ML method, 19 more patients out of 327 (5.8%) were correctly diagnosed by stress myocardial perfusion scintigraphy. ML could be an important tool for decision making in myocardial perfusion scintigraphy. (author)
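
    The abstract names the 'Naive Bayesian classifier' as the ML program. A minimal Gaussian naive Bayes classifier in that spirit is sketched below; the feature set, the synthetic data and the class labels are assumptions for illustration, not the 327-patient dataset.

```python
import numpy as np

class GaussianNaiveBayes:
    """Minimal Gaussian naive Bayes: fit per-class feature means/variances,
    then predict the class with the highest posterior (independence assumption)."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.theta_, self.var_, self.prior_ = {}, {}, {}
        for c in self.classes_:
            Xc = X[y == c]
            self.theta_[c] = Xc.mean(axis=0)
            self.var_[c] = Xc.var(axis=0) + 1e-9
            self.prior_[c] = len(Xc) / len(X)
        return self

    def predict(self, X):
        def log_post(x, c):
            ll = -0.5 * np.sum(np.log(2 * np.pi * self.var_[c])
                               + (x - self.theta_[c]) ** 2 / self.var_[c])
            return ll + np.log(self.prior_[c])
        return np.array([max(self.classes_, key=lambda c: log_post(x, c)) for x in X])

# Toy features: [perfusion defect score, age, risk factor count]; label = CAD on angiography.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([2.5, 62, 3], 1.0, (40, 3)),
               rng.normal([0.8, 55, 1], 1.0, (40, 3))])
y = np.array([1] * 40 + [0] * 40)
print(GaussianNaiveBayes().fit(X, y).predict(X[:5]))
```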

  6. Emission-Line Galaxies from the PEARS Hubble Ultra Deep Field: A 2-D Detection Method and First Results

    Science.gov (United States)

    Gardner, J. P.; Straughn, Amber N.; Meurer, Gerhardt R.; Pirzkal, Norbert; Cohen, Seth H.; Malhotra, Sangeeta; Rhoads, James; Windhorst, Rogier A.; Gardner, Jonathan P.; Hathi, Nimish P.

    2007-01-01

    The Hubble Space Telescope (HST) Advanced Camera for Surveys (ACS) grism PEARS (Probing Evolution And Reionization Spectroscopically) survey provides a large dataset of low-resolution spectra from thousands of galaxies in the GOODS North and South fields. One important subset of objects in these data are emission-line galaxies (ELGs), and we have investigated several different methods aimed at systematically selecting these galaxies. Here we present a new methodology and results of a search for these ELGs in the PEARS observations of the Hubble Ultra Deep Field (HUDF) using a 2D detection method that utilizes the observation that many emission lines originate from clumpy knots within galaxies. This 2D line-finding method proves to be useful in detecting emission lines from compact knots within galaxies that might not otherwise be detected using more traditional 1D line-finding techniques. We find in total 96 emission lines in the HUDF, originating from 81 distinct "knots" within 63 individual galaxies. We find in general that [O III] emitters are the most common, comprising 44% of the sample, and on average have high equivalent widths (70% of [O III] emitters having rest-frame EW > 100 Å). There are 12 galaxies with multiple emitting knots; several show evidence of variations in H-alpha flux in the knots, suggesting that the differing star formation properties across a single galaxy can in general be probed at redshifts approximately greater than 0.2 - 0.4. The most prevalent morphologies are large face-on spirals and clumpy interacting systems, many being unique detections owing to the 2D method described here, thus highlighting the strength of this technique.

  7. LEAKAGE CHARACTERISTICS OF BASE OF RIVERBANK BY SELF POTENTIAL METHOD AND EXAMINATION OF EFFECTIVENESS OF SELF POTENTIAL METHOD TO HEALTH MONITORING OF BASE OF RIVERBANK

    Science.gov (United States)

    Matsumoto, Kensaku; Okada, Takashi; Takeuchi, Atsuo; Yazawa, Masato; Uchibori, Sumio; Shimizu, Yoshihiko

    Field measurement by the Self Potential Method using a copper sulfate electrode was performed at the base of a riverbank of the WATARASE River, which has a leakage problem, in order to examine the leakage characteristics. The measurement results showed a typical S-shape, which indicates the existence of flowing groundwater. The results agreed with measurement results by the Ministry of Land, Infrastructure and Transport with good accuracy. Results of 1 m depth ground temperature detection and Chain-Array detection showed good agreement with the results of the Self Potential Method. The correlation between the self potential value and groundwater velocity was examined in a model experiment, and a clear correlation was found. These results indicate that the Self Potential Method is an effective method for examining the characteristics of groundwater at the base of a riverbank with a leakage problem.

  8. Nondestructive methods for the structural evaluation of wood floor systems in historic buildings : preliminary results : [abstract

    Science.gov (United States)

    Zhiyong Cai; Michael O. Hunt; Robert J. Ross; Lawrence A. Soltis

    1999-01-01

    To date, there is no standard method for evaluating the structural integrity of wood floor systems using nondestructive techniques. Current methods of examination and assessment are often subjective and therefore tend to yield imprecise or variable results. For this reason, estimates of allowable wood floor loads are often conservative. The assignment of conservatively...

  9. The numerical method of inverse Laplace transform for calculation of overvoltages in power transformers and test results

    Directory of Open Access Journals (Sweden)

    Mikulović Jovan Č.

    2014-01-01

    Full Text Available A methodology for the calculation of overvoltages in transformer windings, based on a numerical method of inverse Laplace transform, is presented. The mathematical model of the transformer windings is described by partial differential equations corresponding to distributed-parameter electrical circuits. The procedure for calculating overvoltages is applied to windings having either an isolated neutral point, a grounded neutral point, or a neutral point grounded through an impedance. A comparative analysis of the calculation results obtained by the proposed numerical method and by an analytical method for calculating overvoltages in transformer windings is presented. The results computed by the proposed method and measured voltage distributions, when a voltage surge is applied to a three-phase 30 kVA power transformer, are compared. [Project of the Ministry of Science of the Republic of Serbia, No. TR-33037 and No. TR-33020]
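
    The abstract does not say which numerical inverse Laplace transform was implemented. One common choice, the Gaver-Stehfest algorithm, is sketched below purely as an illustration; it recovers f(t) from samples of F(s) on the real axis and is checked here against a transform with a known original.

```python
import math

def stehfest_coefficients(N):
    """Gaver-Stehfest weights V_k for even N (N = 12 is a common choice)."""
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * math.factorial(2 * j) /
                  (math.factorial(N // 2 - j) * math.factorial(j) *
                   math.factorial(j - 1) * math.factorial(k - j) *
                   math.factorial(2 * j - k)))
        V.append((-1) ** (k + N // 2) * s)
    return V

def invert_laplace(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s) via the Stehfest sum."""
    ln2_t = math.log(2.0) / t
    V = stehfest_coefficients(N)
    return ln2_t * sum(V[k - 1] * F(k * ln2_t) for k in range(1, N + 1))

# Check against a known pair: F(s) = 1/(s+1)  <->  f(t) = exp(-t).
for t in (0.5, 1.0, 2.0):
    print(t, invert_laplace(lambda s: 1.0 / (s + 1.0), t), math.exp(-t))
```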

  10. Results of the determination of He in cenozoic aquifers using the GC method.

    Science.gov (United States)

    Kotowski, Tomasz; Najman, Joanna

    2015-04-01

    Applications of the Helium (He) method known so far have consisted mainly of 4He measurements using a special mass spectrometer. 4He measurements for groundwater dating purposes can be replaced by total He (3He+4He) concentration measurements because the content of 3He can be ignored. The concentrations of 3He are very low and 3He/4He ratios do not exceed 1.0·10^-5 in most cases. In this study, the total He concentrations in groundwater were determined using the gas chromatographic (GC) method as an alternative to methods based on spectrometric measurement. He concentrations in groundwater were used for the determination of residence time and groundwater circulation. Additionally, the radiocarbon method was used to determine the value of the external He flux (JHe) in the study area. The low He concentrations obtained, and their small variation within the ca. 65 km long section along which groundwater flows, indicate a likely relatively short residence time and a strong hydraulic connection between the aquifers. The estimated residence time (ca. 3000 years) is heavily dependent on the large uncertainty of the He concentration resulting from the low concentrations of He, the external 4He flux value adopted for calculation purposes, and the 14C ages used to estimate the external 4He flux. © 2015, National Ground Water Association.

  11. GRS Method for Uncertainty and Sensitivity Evaluation of Code Results and Applications

    International Nuclear Information System (INIS)

    Glaeser, H.

    2008-01-01

    In recent years, there has been increasing interest in computational reactor safety analysis in replacing conservative evaluation-model calculations with best-estimate calculations supplemented by uncertainty analysis of the code results. The evaluation of the margin to acceptance criteria, for example, the maximum fuel rod clad temperature, should be based on the upper limit of the calculated uncertainty range. Uncertainty analysis is needed if useful conclusions are to be obtained from best-estimate thermal-hydraulic code calculations; otherwise, single values of unknown accuracy would be presented for comparison with regulatory acceptance limits. Methods have been developed and presented to quantify the uncertainty of computer code results. The basic techniques proposed by GRS are presented together with applications to a large break loss of coolant accident on a reference reactor as well as on an experiment simulating containment behaviour.
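
    The GRS uncertainty method is commonly implemented with non-parametric (order-statistic) tolerance limits; assuming Wilks' formula is the underlying rule, the minimum number of code runs for a one-sided probability/confidence statement can be computed as below. The abstract itself does not quote a run count, so treat the numbers as illustrative.

```python
import math

def wilks_one_sided_runs(coverage=0.95, confidence=0.95):
    """Smallest number of code runs N with 1 - coverage**N >= confidence,
    i.e. the largest calculated value bounds the 'coverage' quantile of the
    output with the stated confidence (one-sided tolerance limit)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(coverage))

print(wilks_one_sided_runs())            # 59 runs for the usual 95%/95% statement
print(wilks_one_sided_runs(0.95, 0.99))  # more runs for a higher confidence level
```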

  12. The possible distortion of autoradiographic results

    International Nuclear Information System (INIS)

    Kozlov, A.A.; Tumanushvili, G.D.

    1980-01-01

    The effect of radioactive labelling (3H-thymidine) on infusorian division is studied. The results presented show that the introduction of labelled compounds accelerates the rate v of infusorian cell division. A thorough investigation of the effect of low-activity labelled compounds on the parameters of cell division, and a search for methods to eliminate distortions that can appear in autoradiographic experiments, is expedient [ru

  13. Integrating Quantitative and Qualitative Results in Health Science Mixed Methods Research Through Joint Displays.

    Science.gov (United States)

    Guetterman, Timothy C; Fetters, Michael D; Creswell, John W

    2015-11-01

    Mixed methods research is becoming an important methodology to investigate complex health-related topics, yet the meaningful integration of qualitative and quantitative data remains elusive and needs further development. A promising innovation to facilitate integration is the use of visual joint displays that bring data together visually to draw out new insights. The purpose of this study was to identify exemplar joint displays by analyzing the various types of joint displays being used in published articles. We searched for empirical articles that included joint displays in 3 journals that publish state-of-the-art mixed methods research. We analyzed each of 19 identified joint displays to extract the type of display, mixed methods design, purpose, rationale, qualitative and quantitative data sources, integration approaches, and analytic strategies. Our analysis focused on what each display communicated and its representation of mixed methods analysis. The most prevalent types of joint displays were statistics-by-themes and side-by-side comparisons. Innovative joint displays connected findings to theoretical frameworks or recommendations. Researchers used joint displays for convergent, explanatory sequential, exploratory sequential, and intervention designs. We identified exemplars for each of these designs by analyzing the inferences gained through using the joint display. Exemplars represented mixed methods integration, presented integrated results, and yielded new insights. Joint displays appear to provide a structure to discuss the integrated analysis and assist both researchers and readers in understanding how mixed methods provides new insights. We encourage researchers to use joint displays to integrate and represent mixed methods analysis and discuss their value. © 2015 Annals of Family Medicine, Inc.

  14. Multidimensional structured data visualization method and apparatus, text visualization method and apparatus, method and apparatus for visualizing and graphically navigating the world wide web, method and apparatus for visualizing hierarchies

    Science.gov (United States)

    Risch, John S [Kennewick, WA; Dowson, Scott T [West Richland, WA; Hart, Michelle L [Richland, WA; Hatley, Wes L [Kennewick, WA

    2008-05-13

    A method of displaying correlations among information objects comprises receiving a query against a database; obtaining a query result set; and generating a visualization representing the components of the result set, the visualization including one of a plane and line to represent a data field, nodes representing data values, and links showing correlations among fields and values. Other visualization methods and apparatus are disclosed.

  15. Semiautomatic volume of interest drawing for 18F-FDG image analysis - method and preliminary results

    International Nuclear Information System (INIS)

    Green, A.J.; Baig, S.; Begent, R.H.J.; Francis, R.J.

    2008-01-01

    Functional imaging of cancer adds important information to the conventional measurements in monitoring response. Serial 18 F-fluorodeoxyglucose (FDG) positron emission tomography (PET), which indicates changes in glucose metabolism in tumours, shows great promise for this. However, there is a need for a method to quantitate alterations in uptake of FDG, which accounts for changes in tumour volume and intensity of FDG uptake. Selection of regions or volumes [ROI or volumes of interest (VOI)] by hand drawing, or simple thresholding, suffers from operator-dependent drawbacks. We present a simple, robust VOI growing method for this application. The method requires a single seed point within the visualised tumour and another in relevant normal tissue. The drawn tumour VOI is insensitive to the operator inconsistency and is, thus, a suitable basis for comparative measurements. The method is validated using a software phantom. We demonstrate the use of the method in the assessment of tumour response in 31 patients receiving chemotherapy for various carcinomas. Valid assessment of tumour response could be made 2-4 weeks after starting chemotherapy, giving information for clinical decision making which would otherwise have taken 9-12 weeks. Survival was predicted from FDG-PET 2-4 weeks after starting chemotherapy (p = 0.04) and after 9-12 weeks FDG-PET gave a better prediction of survival (p = 0.002) than CT or MRI (p = 0.015). FDG-PET using this method of analysis has potential as a routine tool for optimising use of chemotherapy and improving its cost effectiveness. It also has potential for increasing the accuracy of response assessment in clinical trials of novel therapies. (orig.)
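
    The abstract describes a VOI growing method seeded by a single point in the visualised tumour and another in normal tissue, without detailing the algorithm. A toy seed-based 3D region growing routine is sketched below as an illustration; the 6-connectivity, the fixed threshold and the synthetic uptake volume are assumptions, not the validated method.

```python
import numpy as np
from collections import deque

def grow_voi(volume, seed, threshold):
    """Seed-based 3D region growing: collect 6-connected voxels whose value
    exceeds `threshold` (a toy stand-in for the semiautomatic VOI drawing)."""
    mask = np.zeros(volume.shape, dtype=bool)
    if volume[seed] <= threshold:
        return mask
    mask[seed] = True
    queue = deque([seed])
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            nb = (z + dz, y + dy, x + dx)
            if all(0 <= nb[i] < volume.shape[i] for i in range(3)) \
                    and not mask[nb] and volume[nb] > threshold:
                mask[nb] = True
                queue.append(nb)
    return mask

# Toy FDG uptake volume with one "hot" lesion; a second seed in normal tissue
# could be used to set the threshold, as the two-seed description suggests.
vol = np.random.default_rng(2).normal(1.0, 0.1, (32, 32, 32))
vol[14:18, 14:18, 14:18] += 4.0
voi = grow_voi(vol, seed=(16, 16, 16), threshold=3.0)
print("VOI voxels:", int(voi.sum()), "total uptake in VOI:", round(float(vol[voi].sum()), 1))
```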

  16. Using Journals to Show Students What Social Psychology Is All about

    Science.gov (United States)

    Harrod, Wendy J.

    2009-01-01

    Professional journals serve the vital scientific function of disseminating knowledge to colleagues. In so doing, journals become the "face" and "voice" of the professional disciplines they represent. Journal content shows the major topics of interest, the scope, and the boundaries of the profession. It shows the techniques and methods of research…

  17. Some new results on correlation-preserving factor scores prediction methods

    NARCIS (Netherlands)

    Ten Berge, J.M.F.; Krijnen, W.P.; Wansbeek, T.J.; Shapiro, A.

    1999-01-01

    Anderson and Rubin and McDonald have proposed a correlation-preserving method of factor scores prediction which minimizes the trace of a residual covariance matrix for variables. Green has proposed a correlation-preserving method which minimizes the trace of a residual covariance matrix for factors.

  18. Quantifying viruses and bacteria in wastewater—Results, interpretation methods, and quality control

    Science.gov (United States)

    Francy, Donna S.; Stelzer, Erin A.; Bushon, Rebecca N.; Brady, Amie M.G.; Mailot, Brian E.; Spencer, Susan K.; Borchardt, Mark A.; Elber, Ashley G.; Riddell, Kimberly R.; Gellner, Terry M.

    2011-01-01

    Membrane bioreactors (MBR), used for wastewater treatment in Ohio and elsewhere in the United States, have pore sizes small enough to theoretically reduce concentrations of protozoa and bacteria, but not viruses. Sampling for viruses in wastewater is seldom done and not required. Instead, the bacterial indicators Escherichia coli (E. coli) and fecal coliforms are the required microbial measures of effluents for wastewater-discharge permits. Information is needed on the effectiveness of MBRs in removing human enteric viruses from wastewaters, particularly as compared to conventional wastewater treatment before and after disinfection. A total of 73 regular and 28 quality-control (QC) samples were collected at three MBR and two conventional wastewater plants in Ohio during 23 regular and 3 QC sampling trips in 2008-10. Samples were collected at various stages in the treatment processes and analyzed for bacterial indicators E. coli, fecal coliforms, and enterococci by membrane filtration; somatic and F-specific coliphage by the single agar layer (SAL) method; adenovirus, enterovirus, norovirus GI and GII, rotavirus, and hepatitis A virus by molecular methods; and viruses by cell culture. While addressing the main objective of the study-comparing removal of viruses and bacterial indicators in MBR and conventional plants-it was realized that work was needed to identify data analysis and quantification methods for interpreting enteric virus and QC data. Therefore, methods for quantifying viruses, qualifying results, and applying QC data to interpretations are described in this report. During each regular sampling trip, samples were collected (1) before conventional or MBR treatment (post-preliminary), (2) after secondary or MBR treatment (post-secondary or post-MBR), (3) after tertiary treatment (one conventional plant only), and (4) after disinfection (post-disinfection). Glass-wool fiber filtration was used to concentrate enteric viruses from large volumes, and small

  19. Comments on Brodsky's statistical methods for evaluating epidemiological results, and reply by Brodsky, A

    International Nuclear Information System (INIS)

    Frome, E.L.; Khare, M.

    1980-01-01

    Brodsky's paper 'A Statistical Method for Testing Epidemiological Results, as applied to the Hanford Worker Population', (Health Phys., 36, 611-628, 1979) proposed two test statistics for use in comparing the survival experience of a group of employees and controls. This letter states that both of the test statistics were computed using incorrect formulas and concludes that the results obtained using these statistics may also be incorrect. In his reply Brodsky concurs with the comments on the proper formulation of estimates of pooled standard errors in constructing test statistics but believes that the erroneous formulation does not invalidate the major points, results and discussions of his paper. (author)

  20. The impact of secure messaging on workflow in primary care: Results of a multiple-case, multiple-method study.

    Science.gov (United States)

    Hoonakker, Peter L T; Carayon, Pascale; Cartmill, Randi S

    2017-04-01

    Secure messaging is a relatively new addition to health information technology (IT). Several studies have examined the impact of secure messaging on (clinical) outcomes but very few studies have examined the impact on workflow in primary care clinics. In this study we examined the impact of secure messaging on workflow of clinicians, staff and patients. We used a multiple case study design with multiple data collections methods (observation, interviews and survey). Results show that secure messaging has the potential to improve communication and information flow and the organization of work in primary care clinics, partly due to the possibility of asynchronous communication. However, secure messaging can also have a negative effect on communication and increase workload, especially if patients send messages that are not appropriate for the secure messaging medium (for example, messages that are too long, complex, ambiguous, or inappropriate). Results show that clinicians are ambivalent about secure messaging. Secure messaging can add to their workload, especially if there is high message volume, and currently they are not compensated for these activities. Staff is -especially compared to clinicians- relatively positive about secure messaging and patients are overall very satisfied with secure messaging. Finally, clinicians, staff and patients think that secure messaging can have a positive effect on quality of care and patient safety. Secure messaging is a tool that has the potential to improve communication and information flow. However, the potential of secure messaging to improve workflow is dependent on the way it is implemented and used. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Airline Overbooking Problem with Uncertain No-Shows

    Directory of Open Access Journals (Sweden)

    Chunxiao Zhang

    2014-01-01

    Full Text Available This paper considers an airline overbooking problem for a new single-leg flight with a discount fare. Due to the absence of historical no-show data for a new flight, and various uncertain human behaviors or unexpected events that cause a few passengers to miss their aircraft, the probability distribution of no-shows cannot be obtained. In this case, the airlines have to invite some domain experts to provide belief degrees of no-shows to estimate its distribution. However, human beings often overestimate unlikely events, which makes the variance of the belief degree much greater than that of the frequency. If we still regard the belief degree as a subjective probability, the derived results will exceed our expectations. In order to deal with this uncertainty, the number of no-shows for the new flight is assumed to be an uncertain variable in this paper. Given the chance constraint of social reputation, an overbooking model with discount fares is developed to maximize the profit rate based on uncertain programming theory. Finally, the analytic expression of the optimal booking limit is obtained through a numerical example, and the results of sensitivity analysis indicate that the optimal booking limit is affected by flight capacity, discount, confidence level, and parameters of the uncertainty distribution significantly.
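
    The paper models no-shows as an uncertain variable and derives the optimal booking limit analytically. As a rough, hedged counterpart only, the sketch below searches for the profit-maximizing booking limit under a classical assumption that each booked passenger shows up independently with a fixed probability; the fare, bump cost and show probability are invented, and the uncertain-programming chance constraint is not modelled.

```python
import numpy as np

def best_booking_limit(capacity, fare, bump_cost, show_prob,
                       max_extra=30, n_sim=20000, seed=0):
    """Pick the booking limit that maximizes simulated expected profit when each
    booked passenger shows up independently with probability `show_prob`
    (a classical stochastic stand-in for the paper's uncertain-programming model)."""
    rng = np.random.default_rng(seed)
    best = (None, -np.inf)
    for limit in range(capacity, capacity + max_extra + 1):
        shows = rng.binomial(limit, show_prob, size=n_sim)
        profit = (fare * np.minimum(shows, capacity)
                  - bump_cost * np.maximum(shows - capacity, 0))
        if profit.mean() > best[1]:
            best = (limit, profit.mean())
    return best

limit, exp_profit = best_booking_limit(capacity=180, fare=90.0,
                                       bump_cost=350.0, show_prob=0.92)
print(limit, round(exp_profit, 1))
```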

  2. Integrate life-cycle assessment and risk analysis results, not methods.

    Science.gov (United States)

    Linkov, Igor; Trump, Benjamin D; Wender, Ben A; Seager, Thomas P; Kennedy, Alan J; Keisler, Jeffrey M

    2017-08-04

    Two analytic perspectives on environmental assessment dominate environmental policy and decision-making: risk analysis (RA) and life-cycle assessment (LCA). RA focuses on management of a toxicological hazard in a specific exposure scenario, while LCA seeks a holistic estimation of impacts of thousands of substances across multiple media, including non-toxicological and non-chemically deleterious effects. While recommendations to integrate the two approaches have remained a consistent feature of environmental scholarship for at least 15 years, the current perception is that progress is slow largely because of practical obstacles, such as a lack of data, rather than insurmountable theoretical difficulties. Nonetheless, the emergence of nanotechnology presents a serious challenge to both perspectives. Because the pace of nanomaterial innovation far outstrips acquisition of environmentally relevant data, it is now clear that a further integration of RA and LCA based on dataset completion will remain futile. In fact, the two approaches are suited for different purposes and answer different questions. A more pragmatic approach to providing better guidance to decision-makers is to apply the two methods in parallel, integrating only after obtaining separate results.

  3. Comparison of Results according to the treatment Method in Maxillary Sinus Carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Woong Ki; Jo, Jae Sik; Ahn, Sung Ja; Nam, Taek Keun; Nah, Byung Sik [Chonnam National University College of Medicine, Kwangju (Korea, Republic of); Park, Seung Jin [Gyeongsang National Univ., Jinju (Korea, Republic of)

    1995-03-15

    Purpose : A retrospective analysis was performed to investigate the proper management of maxillary sinus carcinoma. Materials and Methods : The authors analysed 33 patients with squamous cell carcinoma of the maxillary sinus treated at Chonnam University Hospital from January 1986 to December 1992. There were 24 men and 9 women with a median age of 55 years. According to the AJCC TNM system of 1988, one patient with T2, 10 patients with T3 and 22 patients with T4 disease were available. Cervical lymph node metastases were observed in 5 patients (N1; 4/33, N2b; 1/33). Patients were classified into 3 groups according to management method. The first group, named 'FAR' (16 patients), received preoperative intra-arterial chemotherapy with 5-fluorouracil (5-FU; mean total dosage 3078 mg) through the superficial temporal artery with concurrent radiation (mean dose delivered 3433 cGy, 180-200 cGy daily) and vitamin A (50,000 IU daily), followed by total maxillectomy and postoperative radiation therapy (mean dose 2351 cGy). The second group, named 'SR' (7 patients), consisted of total maxillectomy followed by postoperative radiation therapy (mean dose 5920 cGy). The third group, named 'R' (6 patients), was treated with radiation alone (mean dose 7164 cGy). The Kaplan-Meier product-limit method was used for survival analysis and the Mantel-Cox test was performed to assess the significance of survival differences between groups. Results : The local recurrence-free survival rate at the end of 2 years was 100%, 5-% and 0% in the FAR, SR and R groups, respectively. The disease-free survival rate at 2 years was 88.9%, 40% and 50% in the FAR, SR and R groups, respectively. There were statistically significant differences between the FAR and SR or FAR and R groups in their local recurrence-free, disease-free and overall survival rates. However, the differences in survival rates between the SR and R groups were not significant. Conclusion : In this study, the FAR group revealed better results than the SR or R groups. In the

  4. Comparison of Results according to the treatment Method in Maxillary Sinus Carcinoma

    International Nuclear Information System (INIS)

    Chung, Woong Ki; Jo, Jae Sik; Ahn, Sung Ja; Nam, Taek Keun; Nah, Byung Sik; Park, Seung Jin

    1995-01-01

    Purpose : A retrospective analysis was performed to investigate the proper management of maxillary sinus carcinoma. Materials and Methods : The authors analysed 33 patients with squamous cell carcinoma of the maxillary sinus treated at Chonnam University Hospital from January 1986 to December 1992. There were 24 men and 9 women with a median age of 55 years. According to the AJCC TNM system of 1988, one patient with T2, 10 patients with T3 and 22 patients with T4 disease were available. Cervical lymph node metastases were observed in 5 patients (N1; 4/33, N2b; 1/33). Patients were classified into 3 groups according to management method. The first group, named 'FAR' (16 patients), received preoperative intra-arterial chemotherapy with 5-fluorouracil (5-FU; mean total dosage 3078 mg) through the superficial temporal artery with concurrent radiation (mean dose delivered 3433 cGy, 180-200 cGy daily) and vitamin A (50,000 IU daily), followed by total maxillectomy and postoperative radiation therapy (mean dose 2351 cGy). The second group, named 'SR' (7 patients), consisted of total maxillectomy followed by postoperative radiation therapy (mean dose 5920 cGy). The third group, named 'R' (6 patients), was treated with radiation alone (mean dose 7164 cGy). The Kaplan-Meier product-limit method was used for survival analysis and the Mantel-Cox test was performed to assess the significance of survival differences between groups. Results : The local recurrence-free survival rate at the end of 2 years was 100%, 5-% and 0% in the FAR, SR and R groups, respectively. The disease-free survival rate at 2 years was 88.9%, 40% and 50% in the FAR, SR and R groups, respectively. There were statistically significant differences between the FAR and SR or FAR and R groups in their local recurrence-free, disease-free and overall survival rates. However, the differences in survival rates between the SR and R groups were not significant. Conclusion : In this study, the FAR group revealed better results than the SR or R groups. In the future prospective randomized
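
    Survival in both of the records above is analysed with the Kaplan-Meier product-limit method. A minimal implementation of that estimator is sketched below with invented follow-up data; it is meant only to show the product-limit calculation, not to reproduce the reported survival rates.

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier product-limit survival estimate.
    times: follow-up (e.g., months); events: 1 = event observed, 0 = censored."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    surv, curve = 1.0, []
    for t in np.unique(times):
        d = int(events[times == t].sum())         # events at time t
        n = int((times >= t).sum())               # subjects still at risk just before t
        if d > 0:
            surv *= 1.0 - d / n
        curve.append((float(t), surv))
    return curve

# Toy follow-up data standing in for one treatment group (months, event flag).
t = [6, 9, 12, 12, 18, 24, 24, 30, 36, 40]
e = [1, 1, 1, 0, 1, 0, 1, 0, 0, 0]
for time, s in kaplan_meier(t, e):
    print(f"{time:5.1f}  S(t) = {s:.2f}")
```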

  5. Vehicle Speed Determination in Case of Road Accident by Software Method and Comparing of Results with the Mathematical Model

    Directory of Open Access Journals (Sweden)

    Hoxha Gezim

    2017-11-01

    Full Text Available The paper addresses the problem of vehicle speed calculation in road accidents. The PC Crash and Virtual Crash software packages are used to determine the speed, and concrete cases of road accidents are analysed with both. The calculation methods and a comparison of their results are presented. These methods consider several factors, such as the front part of the vehicle, the technical features of the vehicle, the angle of the car, the distance the vehicle was displaced after the crash, road conditions, etc. The results obtained with PC Crash and Virtual Crash are shown in tables and graphs and compared with mathematical methods.
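
    As a point of reference for the mathematical model mentioned above, the simplest textbook estimate of pre-braking speed from a measured skid length is v = sqrt(2·μ·g·d). The sketch below illustrates only that basic constant-deceleration relation; it is not the PC Crash or Virtual Crash procedure, and the friction coefficient and skid length are assumed values.

```python
import math

def speed_from_skid(distance_m: float, mu: float = 0.7, g: float = 9.81) -> float:
    """Estimate pre-braking speed (m/s) from skid length using v = sqrt(2*mu*g*d).

    Basic constant-deceleration model; mu is an assumed tyre-road friction coefficient.
    """
    return math.sqrt(2.0 * mu * g * distance_m)

if __name__ == "__main__":
    d = 25.0  # assumed skid mark length in metres
    v = speed_from_skid(d)
    print(f"Skid of {d} m -> approx. {v:.1f} m/s ({v * 3.6:.0f} km/h)")
```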

  6. A Comparison of Result Reliability for Investigation of Milk Composition by Alternative Analytical Methods in Czech Republic

    Directory of Open Access Journals (Sweden)

    Oto Hanuš

    2014-01-01

    Full Text Available The reliability of milk analysis results is important for quality assurance in the foodstuff chain. There are several direct and indirect methods for measuring milk composition (fat (F), protein (P), lactose (L) and solids non-fat (SNF) content). The goal was to evaluate some reference and routine milk analytical procedures on the basis of their results. The direct reference analyses were: F, fat content (Röse–Gottlieb method); P, crude protein content (Kjeldahl method); L, lactose (monohydrate, polarimetric method); SNF, solids non-fat (gravimetric method). F, P, L and SNF were also determined by various indirect methods: – MIR (infrared (IR) technology with optical filters; 7 instruments in 4 labs); – MIR–FT (IR spectroscopy with Fourier transformation; 10 in 6); – the ultrasonic method (UM; 3 in 1); – analysis by the blue and red box (BRB; 1 in 1). Ten reference milk samples were used. The coefficient of determination (R2), correlation coefficient (r) and standard deviation of the mean of individual differences (MDsd, for n) were evaluated. All correlations (r; for all indirect and alternative methods and all milk components) were significant (P ≤ 0.001). MIR and MIR–FT (conventional methods) explained a considerably higher proportion of the variability in the reference results than the UM and BRB (alternative) methods. All average r values (x minus 1.64 × sd, for a 95% confidence interval) can be used as standards for calibration quality evaluation (MIR, MIR–FT, UM and BRB): – for F 0.997, 0.997, 0.99 and 0.995; – for P 0.986, 0.981, 0.828 and 0.864; – for L 0.968, 0.871, 0.705 and 0.761; – for SNF 0.992, 0.993, 0.911 and 0.872. Similarly for MDsd (x plus 1.64 × sd): – for F 0.071, 0.068, 0.132 and 0.101%; – for P 0.051, 0.054, 0.202 and 0.14%; – for L 0.037, 0.074, 0.113 and 0.11%; – for SNF 0.052, 0.068, 0.141 and 0.204.
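
    The statistics reported above (r, R2 and the standard deviation of individual differences between a routine method and the reference) can be reproduced with a few lines of standard scientific Python. The sketch below uses made-up paired fat measurements purely for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical paired fat contents (%) from a reference and a routine method.
reference = np.array([3.52, 3.80, 4.01, 3.65, 3.90, 4.20, 3.75, 3.60, 4.10, 3.95])
routine = np.array([3.55, 3.78, 4.05, 3.60, 3.92, 4.18, 3.80, 3.58, 4.12, 3.99])

r, p_value = stats.pearsonr(reference, routine)  # correlation coefficient and p-value
r_squared = r ** 2                               # coefficient of determination
differences = routine - reference
md_sd = differences.std(ddof=1)                  # SD of individual differences (MDsd)

print(f"r = {r:.3f}, R^2 = {r_squared:.3f}, MDsd = {md_sd:.3f} %")
```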

  7. The Trojan Horse method for nuclear astrophysics: Recent results on resonance reactions

    Energy Technology Data Exchange (ETDEWEB)

    Cognata, M. La; Pizzone, R. G. [Laboratori Nazionali del Sud, Istituto Nazionale di Fisica Nucleare, Catania (Italy); Spitaleri, C.; Cherubini, S.; Romano, S. [Dipartimento di Fisica e Astronomia, Università di Catania, Catania, Italy and Laboratori Nazionali del Sud, Istituto Nazionale di Fisica Nucleare, Catania (Italy); Gulino, M.; Tumino, A. [Kore University, Enna, Italy and Laboratori Nazionali del Sud, Istituto Nazionale di Fisica Nucleare, Catania (Italy); Lamia, L. [Dipartimento di Fisica e Astronomia, Università di Catania, Catania (Italy)

    2014-05-09

    Nuclear astrophysics aims to measure nuclear-reaction cross sections of astrophysical interest, to be included in models of stellar evolution and nucleosynthesis. Low energies, < 1 MeV or even < 10 keV, are required because this is the window where these processes are most effective. Two effects have prevented a satisfactory knowledge of the relevant nuclear processes from being achieved, namely the Coulomb barrier, which exponentially suppresses the cross section, and the presence of atomic electrons. These difficulties have triggered theoretical and experimental investigations to extend our knowledge down to astrophysical energies. For instance, indirect techniques such as the Trojan Horse Method have been devised, yielding new cutting-edge results. In particular, I will focus on the application of this indirect method to resonance reactions. Resonances might dramatically enhance the astrophysical S(E)-factor, so when they occur right at astrophysical energies their measurement is crucial to pin down the astrophysical scenario. Unknown or unpredicted resonances might introduce large systematic errors in nucleosynthesis models. These considerations apply to low-energy resonances and to sub-threshold resonances as well, as they may produce sizable modifications of the S-factor due to, for instance, destructive interference with another resonance.
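
    For context, the astrophysical S(E)-factor mentioned here is conventionally defined by factoring the Coulomb-barrier penetrability out of the cross section; a standard form (not specific to this paper) is:

```latex
\sigma(E) = \frac{S(E)}{E}\, e^{-2\pi\eta(E)},
\qquad
\eta(E) = \frac{Z_1 Z_2 e^2}{\hbar v},
```

    where η is the Sommerfeld parameter and v the relative velocity of the colliding nuclei. S(E) varies slowly with energy except near resonances, which is why unknown or unpredicted resonances can change the extrapolated reaction rates so dramatically.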

  8. The Trojan Horse method for nuclear astrophysics: Recent results on resonance reactions

    International Nuclear Information System (INIS)

    Cognata, M. La; Pizzone, R. G.; Spitaleri, C.; Cherubini, S.; Romano, S.; Gulino, M.; Tumino, A.; Lamia, L.

    2014-01-01

    Nuclear astrophysics aims to measure nuclear-reaction cross sections of astrophysical interest, to be included in models of stellar evolution and nucleosynthesis. Low energies, < 1 MeV or even < 10 keV, are required because this is the window where these processes are most effective. Two effects have prevented a satisfactory knowledge of the relevant nuclear processes from being achieved, namely the Coulomb barrier, which exponentially suppresses the cross section, and the presence of atomic electrons. These difficulties have triggered theoretical and experimental investigations to extend our knowledge down to astrophysical energies. For instance, indirect techniques such as the Trojan Horse Method have been devised, yielding new cutting-edge results. In particular, I will focus on the application of this indirect method to resonance reactions. Resonances might dramatically enhance the astrophysical S(E)-factor, so when they occur right at astrophysical energies their measurement is crucial to pin down the astrophysical scenario. Unknown or unpredicted resonances might introduce large systematic errors in nucleosynthesis models. These considerations apply to low-energy resonances and to sub-threshold resonances as well, as they may produce sizable modifications of the S-factor due to, for instance, destructive interference with another resonance

  9. Qualitative Analysis Results for Applications of a New Fire Probabilistic Safety Assessment Method to Ulchin Unit 3

    International Nuclear Information System (INIS)

    Kang, Daeil; Kim, Kilyoo; Jang, Seungcheol

    2013-01-01

    The Fire PRA Implementation Guide has been used for performing fire PSAs for NPPs in Korea. Recently, the US NRC and EPRI developed a new fire PSA method, NUREG/CR-6850, to provide state-of-the-art methods, tools, and data for the conduct of a fire PSA for a commercial nuclear power plant (NPP). Due to the limited budget and manpower for the development of KSRP, hybrid PSA approaches, using NUREG/CR-6850 and the Fire PRA Implementation Guide, will be employed for conducting a fire PSA of Ulchin Unit 3. This paper presents the qualitative analysis results for the application of the new fire PSA method to Ulchin Unit 3. Compared with the previous industry approach, the number of fire areas identified for quantification and the amount of equipment selected have increased

  10. Algorithms for monitoring warfarin use: Results from Delphi Method.

    Science.gov (United States)

    Kano, Eunice Kazue; Borges, Jessica Bassani; Scomparini, Erika Burim; Curi, Ana Paula; Ribeiro, Eliane

    2017-10-01

    Warfarin stands as the most prescribed oral anticoagulant. New oral anticoagulants have been approved recently; however, their use is limited and the techniques for reversing their anticoagulation effect are little known. Thus, our study's purpose was to develop algorithms for the therapeutic monitoring of patients taking warfarin, based on the opinion of physicians who prescribe this medicine in their clinical practice. The development of the algorithms was performed in two stages, namely: (i) literature review and (ii) algorithm evaluation by physicians using a Delphi method. Based on the articles analyzed, two algorithms were developed: "Recommendations for the use of warfarin in anticoagulation therapy" and "Recommendations for the use of warfarin in anticoagulation therapy: dose adjustment and bleeding control." These algorithms were then analyzed by 19 physicians who responded to the invitation and agreed to participate in the study. Of these, 16 responded to the first round, 11 to the second and eight to the third round. A consensus of 70% or higher was reached for most issues and at least 50% for six questions. We were able to develop algorithms to monitor the use of warfarin by physicians using a Delphi method. The proposed method is inexpensive, involves the participation of specialists, and has proved adequate for the intended purpose. Further studies are needed to validate these algorithms, enabling them to be used in clinical practice.

  11. THE RESULTS OF THE CLINICAL USE OF A NEW METHOD OF OSTEOSYNTHESIS WITH NON-FREE BONE AUTOPLASTY AT THE MEDIAL FEMORAL NECK FRACTURES

    Directory of Open Access Journals (Sweden)

    R. M. Tikhilov

    2013-01-01

    Full Text Available Objective - to improve treatment outcomes in patients with medial fractures of the femoral neck through the development and introduction into clinical practice of a new method of fixation with non-free plasty using an autograft from the iliac crest on a permanent muscular-vascular pedicle. Materials and methods. A comparative analysis of the short- and long-term results of surgical treatment was performed for 57 patients with medial fractures of the femoral neck, who were divided into a study group and a control group. The study group included 24 patients who underwent osteosynthesis with cannulated screws with additional autoplasty using a vascularized graft from the iliac crest. The control group consisted of 33 patients who underwent fixation with cannulated screws by the traditional method. Results. The use of non-free bone autoplasty in the study group provided the best short- and long-term outcomes: fracture healing occurred in all cases within a period of 6 to 8 months. The long-term results of treatment of 22 patients at 2-6 years after the operation showed comparatively better anatomical and functional outcomes. Conclusions. The indications for the clinical use of fixation with non-free bone autotransplantation are medial fractures of the femoral neck that are prognostically unfavorable for union (types II-III by Pauwels or III-IV by Garden) in patients aged under 60 years with no signs of stage II-III deforming arthrosis.

  12. Variation in Results of Volume Measurements of Stumps of Lower-Limb Amputees : A Comparison of 4 Methods

    NARCIS (Netherlands)

    de Boer-Wilzing, Vera G.; Bolt, Arjen; Geertzen, Jan H.; Emmelot, Cornelis H.; Baars, Erwin C.; Dijkstra, Pieter U.

    de Boer-Wilzing VG, Bolt A, Geertzen JH, Emmelot CH, Baars EC, Dijkstra PU. Variation in results of volume measurements of stumps of lower-limb amputees: a comparison of 4 methods. Arch Phys Med Rehabil 2011;92:941-6. Objective: To analyze the reliability of 4 methods (water immersion,

  13. Long-term results of forearm lengthening and deformity correction by the Ilizarov method.

    Science.gov (United States)

    Orzechowski, Wiktor; Morasiewicz, Leszek; Krawczyk, Artur; Dragan, Szymon; Czapiński, Jacek

    2002-06-30

    Background. Shortening and deformity of the forearm is most frequently caused by congenital disorders or posttraumatic injury. Given its complex anatomy and biomechanics, the forearm is clearly the most difficult segment for lengthening and deformity correction. Material and methods. We analyzed 16 patients with shortening and deformity of the forearm, treated surgically using the Ilizarov method in our Department from 1989 to 2001. In 9 cases one-stage surgery was sufficient, while the remaining 7 patients underwent 2-5 stages of treatment. A total of 31 surgical operations were performed. The extent of forearm shortening ranged from 1.5 to 14.5 cm (5-70%). We developed a new fixator based on Schanz half-pins. Results. The length of forearm lengthening per operative stage averaged 2.35 cm. The proportion of lengthening ranged from 6% to 48%, with an average of 18.3%. The mean lengthening index was 48.15 days/cm. The per-patient rate of complications was 88%, compared with 45% per stage of treatment, mostly limited rotational mobility and abnormal consolidation of the regenerated bone. Conclusions. Despite the high complication rate, the Ilizarov method is the method of choice for patients with forearm shortenings and deformities. Treatment is particularly indicated in patients with shortening caused by disproportionate length of the ulna and forearm bones. Treatment should be managed so as to cause the least possible damage to arm function, even at the cost of limited lengthening. Our new stabilizer based on Schanz half-pins makes it possible to preserve forearm rotation.

  14. Labelling of blood cells with radioactive indium-111: method, results, indications

    International Nuclear Information System (INIS)

    Ducassou, D.; Brendel, A.; Nouel, J.P.

    1978-01-01

    A modification of the method of Thakur et al. for labelling polynuclear cells with the 8-hydroxyquinoline-indium complex, utilising the water-soluble sulfate of the substance, was applied. The labelling procedure gave a yield of over 98% with erythrocytes and over 80% with platelets and polynuclear cells, using at least 1 x 10^8 plasma-free cells. The functional capacity of the labelled cells remained unaltered. After injection of double-labelled (111In, 51Cr) red cells, the correlation of values for the red cell volume amounted to r = 0.98 (n = 20); red cell life-span measurements gave comparable results in 5 patients. After injecting labelled platelets, a life-span between 6.5 and 11 days was measured. Scintigraphic visualisation of pulmonary embolism was obtained 30 minutes after injecting labelled platelets. Injection of labelled polynuclear cells allows life-span measurements as well as detection of abscesses. (author)

  15. Three magnetic particles solid phase radioimmunoassay for T4: Comparison of their results with established methods

    International Nuclear Information System (INIS)

    Bashir, T.

    1996-01-01

    The introduction of solid phase separation techniques is an important improvement in radioimmunoassays and immunoradiometric assays. The magnetic particle solid phase method has additional advantages over others, as the separation is rapid and centrifugation is not required. Three types of magnetic particles have been studied in the T4 RIA and the results have been compared with commercial kits and other established methods. (author). 4 refs, 9 figs, 2 tabs

  16. Semantic Dementia Shows both Storage and Access Disorders of Semantic Memory

    Directory of Open Access Journals (Sweden)

    Yumi Takahashi

    2014-01-01

    Full Text Available Objective. Previous studies have shown that some patients with semantic dementia (SD have memory storage disorders, while others have access disorders. Here, we report three SD cases with both disorders. Methods. Ten pictures and ten words were prepared as visual stimuli to determine if the patients could correctly answer names and select pictures after hearing the names of items (Card Presentation Task, assessing memory storage disorder. In a second task, the viewing time was set at 20 or 300 msec (Momentary Presentation Task, evaluating memory access disorder using items for which correct answers were given in the first task. The results were compared with those for 6 patients with Alzheimer’s disease (AD. Results. The SD patients had lower scores than the AD group for both tasks, suggesting both storage and access disorders. The AD group had almost perfect scores on the Card Presentation Task but showed impairment on the Momentary Presentation Task, although to a lesser extent than the SD cases. Conclusions. These results suggest that SD patients have both storage and access disorders and have more severe access disorder than patients with AD.

  17. Results of the naive quark model

    International Nuclear Information System (INIS)

    Gignoux, C.

    1987-10-01

    The hypotheses and limits of the naive quark model are recalled and results on nucleon-nucleon scattering and possible multiquark states are presented. The results show that this model does not produce Roper resonances. For hadron-hadron interactions, the model predicts Van der Waals forces that the resonance group method does not allow. Known many-body forces are not found in the model. The lack of mesons shows up in the absence of a far-reaching force. However, the model does have strengths. It is free from spuriousness of the center of mass, and allows a democratic handling of flavor. It has few parameters, and its predictions are very good [fr

  18. Investigation into Methods for Predicting Connection Temperatures

    Directory of Open Access Journals (Sweden)

    K. Anderson

    2009-01-01

    Full Text Available The mechanical response of connections in fire is largely based on material strength degradation and the interactions between the various components of the connection. In order to predict connection performance in fire, temperature profiles must initially be established in order to evaluate the material strength degradation over time. This paper examines two current methods for predicting connection temperatures: The percentage method, where connection temperatures are calculated as a percentage of the adjacent beam lower-flange, mid-span temperatures; and the lumped capacitance method, based on the lumped mass of the connection. Results from the percentage method do not correlate well with experimental results, whereas the lumped capacitance method shows much better agreement with average connection temperatures. A 3D finite element heat transfer model was also created in Abaqus, and showed good correlation with experimental results
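
    As an illustration of the lumped capacitance idea discussed above, a bare steel connection exposed to a fire can be stepped forward in time with dT/dt = h·(A/V)·(T_gas − T)/(ρ·c). The sketch below is a generic, simplified example; the section factor, the constant combined heat transfer coefficient and the use of the ISO 834 standard fire curve are assumptions, not values taken from the paper.

```python
import math

def iso834_gas_temperature(t_min: float) -> float:
    """ISO 834 standard fire curve: gas temperature in deg C at time t (minutes)."""
    return 20.0 + 345.0 * math.log10(8.0 * t_min + 1.0)

def lumped_capacitance_steel(minutes=60, dt_s=1.0, section_factor=200.0,
                             h_total=35.0, rho=7850.0, c=600.0):
    """March a single lumped steel temperature forward in time (explicit Euler).

    section_factor = A/V in 1/m; h_total is a combined convective+radiative
    coefficient in W/m^2K (simplified to a constant); rho and c are steel
    density and an assumed constant specific heat.
    """
    T = 20.0  # initial steel temperature, deg C
    steps = int(minutes * 60 / dt_s)
    for i in range(steps):
        t_min = (i * dt_s) / 60.0
        T_gas = iso834_gas_temperature(t_min)
        T += h_total * section_factor / (rho * c) * (T_gas - T) * dt_s
    return T

if __name__ == "__main__":
    print(f"Lumped steel temperature after 60 min: {lumped_capacitance_steel():.0f} deg C")
```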

  19. Vehicle Speed Determination in Case of Road Accident by Software Method and Comparing of Results with the Mathematical Model

    OpenAIRE

    Hoxha Gezim; Shala Ahmet; Likaj Rame

    2017-01-01

    The paper addresses the problem of vehicle speed calculation in road accidents. The PC Crash and Virtual Crash software packages are used to determine the speed, and concrete cases of road accidents are analysed with both. The calculation methods and a comparison of their results are presented. These methods consider several factors, such as the front part of the vehicle, the technical features of the vehicle, the angle of the car, the distance the vehicle was displaced after the crash, road conditions, etc. Expected results with PC Cr...

  20. Time delayed Ensemble Nudging Method

    Science.gov (United States)

    An, Zhe; Abarbanel, Henry

    Optimal nudging methods based on time-delay embedding theory have shown potential for analysis and data assimilation in the previous literature. To extend their application and promote practical implementation, a new nudging assimilation method based on the time-delay embedding space is presented and its connection with other standard assimilation methods is studied. Results show that incorporating information from the time series of the data can reduce the number of observations needed to preserve the quality of the numerical prediction, making it a potential alternative in the field of data assimilation for large geophysical models.
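
    To make the nudging idea concrete, the minimal sketch below relaxes a model state toward noisy observations of one variable of the Lorenz-63 system with a constant gain. It is a generic illustration of standard (not time-delayed) nudging, and all parameter values are assumptions.

```python
import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz-63 system."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def nudged_step(state, obs_x, dt=0.01, gain=20.0):
    """One Euler step with a nudging term pulling x toward the observed value."""
    tendency = lorenz63(state)
    tendency[0] += gain * (obs_x - state[0])  # nudge only the observed component
    return state + dt * tendency

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = np.array([1.0, 1.0, 1.0])
    model = np.array([5.0, -5.0, 20.0])  # deliberately wrong initial condition
    for _ in range(2000):
        truth = truth + 0.01 * lorenz63(truth)    # "true" trajectory (Euler)
        obs_x = truth[0] + rng.normal(scale=0.1)  # noisy observation of x only
        model = nudged_step(model, obs_x)
    print("final absolute error:", np.abs(model - truth))
```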

  1. Numerical Solution of the Advection Equation with the Radial Point Interpolation Method and Time Integration with the Discontinuous Galerkin Method

    Directory of Open Access Journals (Sweden)

    Kresno Wikan Sadono

    2016-12-01

    integration is derived for linear DGM and quadratic DGM. The simulation results show that this numerical method agrees well with the exact solution. The numerical simulations with RPIM-DGM show that the more nodes used and the smaller the time increment, the more accurate the numerical results. Other results showed that, for a given time increment and number of nodes, time integration with quadratic DGM further improves accuracy compared with linear DGM.
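
    For reference, the underlying model problem in this entry is the linear advection equation; in one dimension (a standard statement, not taken from the paper itself) it reads:

```latex
\frac{\partial u}{\partial t} + c\,\frac{\partial u}{\partial x} = 0,
\qquad u(x,0) = u_0(x),
\qquad u(x,t) = u_0(x - ct),
```

    so the exact solution simply transports the initial profile with speed c, which is why comparisons against exact results, as reported above, are straightforward for this equation.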

  2. Wide Binaries in TGAS: Search Method and First Results

    Science.gov (United States)

    Andrews, Jeff J.; Chanamé, Julio; Agüeros, Marcel A.

    2018-04-01

    Half of all stars reside in binary systems, many of which have orbital separations in excess of 1000 AU. Such binaries are typically identified in astrometric catalogs by matching the proper motion vectors of close stellar pairs. We present a fully Bayesian method that properly takes into account positions, proper motions, parallaxes, and their correlated uncertainties to identify widely separated stellar binaries. After applying our method to the more than 2 × 10^6 stars in the Tycho-Gaia astrometric solution from Gaia DR1, we identify over 6000 candidate wide binaries. For those pairs with separations less than 40,000 AU, we determine the contamination rate to be ~5%. This sample has an orbital separation (a) distribution that is roughly flat in log space for separations less than ~5000 AU and follows a power law of a^-1.6 at larger separations.
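
    A common simplification of the matching problem described above is to ask whether two stars' proper motion vectors agree within their (possibly correlated) uncertainties, for example via a Mahalanobis distance. The sketch below illustrates that simplified test only; it is not the paper's full Bayesian treatment, and all numbers are invented.

```python
import numpy as np

def proper_motion_consistency(pm1, cov1, pm2, cov2):
    """Squared Mahalanobis distance between two proper motion vectors.

    pm1, pm2: [pmra, pmdec] in mas/yr; cov1, cov2: 2x2 covariance matrices.
    Under the null hypothesis of a common proper motion, this statistic is
    approximately chi-squared distributed with 2 degrees of freedom.
    """
    diff = np.asarray(pm1) - np.asarray(pm2)
    cov = np.asarray(cov1) + np.asarray(cov2)
    return float(diff @ np.linalg.solve(cov, diff))

if __name__ == "__main__":
    # Invented example: two stars with nearly identical proper motions.
    d2 = proper_motion_consistency(
        pm1=[12.3, -45.1], cov1=np.diag([0.5**2, 0.5**2]),
        pm2=[12.0, -44.8], cov2=[[0.4**2, 0.05], [0.05, 0.4**2]],
    )
    print(f"Mahalanobis distance^2 = {d2:.2f}  (compare with chi^2_2 quantiles)")
```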

  3. A Literature Study of Matrix Element Influenced to the Result of Analysis Using Absorption Atomic Spectroscopy Method (AAS)

    International Nuclear Information System (INIS)

    Tyas-Djuhariningrum

    2004-01-01

    The analysis of gold samples can deviate by more than 10% from the true value because of matrix elements, so the behaviour of the matrix elements needs to be studied in order to reduce the deviation. In rock samples, the matrix elements can cause self-quenching, self-absorption and ionization processes, leading to errors in the analytical results. In rock geochemical processes, elements of the same group of the periodic system tend to occur together because of their similar characteristics. In atomic absorption spectroscopy analysis, these associated elements can absorb primary radiation of a similar wavelength, which can cause deviations in the interpretation of the results. The aim of the study is to predict the influence of matrix elements in rock samples and to apply standard methods for reducing the deviation. Quantitatively, the primary light intensity absorbed is proportional to the concentration of atoms in the sample, and the relationship between photon intensity and concentration (in parts per million, ppm) is linear. The methods for eliminating the influence of matrix elements consist of three approaches: the external standard method, the internal standard method, and the standard addition method. The external standard method is used for all matrix elements, the internal standard method for eliminating matrix elements with similar characteristics, and the standard addition method for eliminating matrix elements in Au and Pt samples. The accuracies of these three standard methods are about 95-97%. (author)
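
    Of the three approaches listed, the standard addition method lends itself to a compact worked example: the unknown concentration is obtained by fitting signal versus added concentration and extrapolating the line back to zero signal (C_x = intercept/slope, with a dilution correction if applicable). The sketch below is a generic illustration with invented absorbance readings, not data from the study.

```python
import numpy as np

# Added analyte concentrations (ppm) spiked into equal aliquots of the sample,
# and the corresponding measured absorbances (invented values).
added = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
absorbance = np.array([0.120, 0.205, 0.294, 0.378, 0.465])

slope, intercept = np.polyfit(added, absorbance, 1)  # linear calibration fit
unknown_ppm = intercept / slope                       # extrapolation to zero absorbance

print(f"slope = {slope:.4f} /ppm, intercept = {intercept:.4f}")
print(f"estimated sample concentration ~ {unknown_ppm:.2f} ppm")
```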

  4. COSMIC EVOLUTION OF DUST IN GALAXIES: METHODS AND PRELIMINARY RESULTS

    International Nuclear Information System (INIS)

    Bekki, Kenji

    2015-01-01

    We investigate the redshift (z) evolution of dust mass and abundance, their dependences on initial conditions of galaxy formation, and physical correlations between dust, gas, and stellar contents at different z based on our original chemodynamical simulations of galaxy formation with dust growth and destruction. In this preliminary investigation, we first determine the reasonable ranges of the most important two parameters for dust evolution, i.e., the timescales of dust growth and destruction, by comparing the observed and simulated dust mass and abundances and molecular hydrogen (H2) content of the Galaxy. We then investigate the z-evolution of dust-to-gas ratios (D), H2 gas fraction (f_H2), and gas-phase chemical abundances (e.g., A_O = 12 + log(O/H)) in the simulated disk and dwarf galaxies. The principal results are as follows. Both D and f_H2 can rapidly increase during the early dissipative formation of galactic disks (z ∼ 2-3), and the z-evolution of these depends on initial mass densities, spin parameters, and masses of galaxies. The observed A_O-D relation can be qualitatively reproduced, but the simulated dispersion of D at a given A_O is smaller. The simulated galaxies with larger total dust masses show larger H2 and stellar masses and higher f_H2. Disk galaxies show negative radial gradients of D and the gradients are steeper for more massive galaxies. The observed evolution of dust masses and dust-to-stellar-mass ratios between z = 0 and 0.4 cannot be reproduced so well by the simulated disks. Very extended dusty gaseous halos can be formed during hierarchical buildup of disk galaxies. Dust-to-metal ratios (i.e., dust-depletion levels) are different within a single galaxy and between different galaxies at different z

  5. COSMIC EVOLUTION OF DUST IN GALAXIES: METHODS AND PRELIMINARY RESULTS

    Energy Technology Data Exchange (ETDEWEB)

    Bekki, Kenji [ICRAR, M468, The University of Western Australia, 35 Stirling Highway, Crawley, Western Australia 6009 (Australia)

    2015-02-01

    We investigate the redshift (z) evolution of dust mass and abundance, their dependences on initial conditions of galaxy formation, and physical correlations between dust, gas, and stellar contents at different z based on our original chemodynamical simulations of galaxy formation with dust growth and destruction. In this preliminary investigation, we first determine the reasonable ranges of the most important two parameters for dust evolution, i.e., the timescales of dust growth and destruction, by comparing the observed and simulated dust mass and abundances and molecular hydrogen (H2) content of the Galaxy. We then investigate the z-evolution of dust-to-gas ratios (D), H2 gas fraction (f_H2), and gas-phase chemical abundances (e.g., A_O = 12 + log(O/H)) in the simulated disk and dwarf galaxies. The principal results are as follows. Both D and f_H2 can rapidly increase during the early dissipative formation of galactic disks (z ∼ 2-3), and the z-evolution of these depends on initial mass densities, spin parameters, and masses of galaxies. The observed A_O-D relation can be qualitatively reproduced, but the simulated dispersion of D at a given A_O is smaller. The simulated galaxies with larger total dust masses show larger H2 and stellar masses and higher f_H2. Disk galaxies show negative radial gradients of D and the gradients are steeper for more massive galaxies. The observed evolution of dust masses and dust-to-stellar-mass ratios between z = 0 and 0.4 cannot be reproduced so well by the simulated disks. Very extended dusty gaseous halos can be formed during hierarchical buildup of disk galaxies. Dust-to-metal ratios (i.e., dust-depletion levels) are different within a single galaxy and between different galaxies at different z.

  6. The Vermont oxford neonatal encephalopathy registry: rationale, methods, and initial results

    Science.gov (United States)

    2012-01-01

    Background In 2006, the Vermont Oxford Network (VON) established the Neonatal Encephalopathy Registry (NER) to characterize infants born with neonatal encephalopathy, describe evaluations and medical treatments, monitor hypothermic therapy (HT) dissemination, define clinical research questions, and identify opportunities for improved care. Methods Eligible infants were ≥ 36 weeks with seizures, altered consciousness (stupor, coma) during the first 72 hours of life, a 5 minute Apgar score of ≤ 3, or receiving HT. Infants with central nervous system birth defects were excluded. Results From 2006–2010, 95 centers registered 4232 infants. Of those, 59% suffered a seizure, 50% had a 5 minute Apgar score of ≤ 3, 38% received HT, and 18% had stupor/coma documented on neurologic exam. Some infants met more than one eligibility criterion. Only 53% had a cord gas obtained and only 63% had a blood gas obtained within 24 hours of birth, important components for determining HT eligibility. Sixty-four percent received ventilator support, 65% received anticonvulsants, 66% had a head MRI, 23% had a cranial CT, 67% had a full-channel electroencephalogram (EEG) and 33% an amplitude-integrated EEG. Of all infants, 87% survived. Conclusions The VON NER describes the heterogeneous population of infants with NE, the subset that received HT, their patterns of care, and outcomes. The optimal routine care of infants with neonatal encephalopathy is unknown. The registry method is well suited to identify opportunities for improvement in the care of infants affected by NE and to study interventions such as HT as they are implemented in clinical practice. PMID:22726296

  7. Development of standard methods for activity measurement of natural radionuclides in waterworks as basis for dose and risk assessment—First results of an Austrian study

    International Nuclear Information System (INIS)

    Stietka, M.; Baumgartner, A.; Seidel, C.; Maringer, F.J.

    2013-01-01

    A comprehensive study with the aim to evaluate the risks due to radiation exposure for workers in water supply is conducted in 21 Austrian waterworks. The development of standard methods for the assessment of occupational exposure of water work staff is a part of this study. Preliminary results of this study show a wide range of Rn-222 activity concentration in waterworks with values from (28±10) Bq/m3 to (38,000±4000) Bq/m3. Also seasonal variations of the Rn-222 activity concentration could be observed. - Highlights: • In this study operational exposure of water work staff was evaluated. • The Rn-222 concentration in indoor air in waterworks was measured for 1 year. • Results show a wide range of Rn-222 activity concentration in waterworks. • Seasonal variations of the Rn-222 activity concentration could be observed

  8. A comparative study of the effectiveness of "Star Show" vs. "Participatory Oriented Planetarium" lessons in a middle school Starlab setting

    Science.gov (United States)

    Platco, Nicholas L.., Jr.

    2005-06-01

    In light of the results of this study, it appears that both Star Show and POP methods of instruction should continue to play important roles in planetarium education. A combination of the two methods is clearly the ideal solution when teaching astronomy to middle school students in a Starlab setting.

  9. Variational iteration method for one dimensional nonlinear thermoelasticity

    International Nuclear Information System (INIS)

    Sweilam, N.H.; Khader, M.M.

    2007-01-01

    This paper applies the variational iteration method to solve the Cauchy problem arising in one dimensional nonlinear thermoelasticity. The advantage of this method is to overcome the difficulty of calculation of Adomian's polynomials in the Adomian's decomposition method. The numerical results of this method are compared with the exact solution of an artificial model to show the efficiency of the method. The approximate solutions show that the variational iteration method is a powerful mathematical tool for solving nonlinear problems

  10. Results and current trends of nuclear methods used in agriculture

    International Nuclear Information System (INIS)

    Horacek, P.

    1983-01-01

    The significance of nuclear methods for agricultural research is evaluated. The number of breeds induced by radiation mutations is increasing. The main importance of radiation mutation breeding consists in obtaining sources of the desired genetic properties for further hybridization. Radiostimulation is conducted with the aim of increasing yields. The irradiation of foods has not increased substantially worldwide. Very important is the irradiation of excrement and sludges, which after such inactivation of pathogenic microorganisms may be used as humus-forming manure or as feed additives. In some countries the method of sexual sterilization is successfully being used for the eradication of insect pests. The application of labelled compounds in the nutrition, physiology and protection of plants, farm animals and in food hygiene makes it possible to acquire new and accurate knowledge very quickly. Radioimmunoassay is a highly promising method in this respect. Labelling compounds with the stable 15N isotope is used for research on nitrogen metabolism. (M.D.)

  11. Daily radiotoxicological supervision of personnel at the Pierrelatte industrial complex. Methods and results

    International Nuclear Information System (INIS)

    Chalabreysse, Jacques.

    1978-05-01

    A 13 year experience gained from daily radiotoxicological supervision of personnel at the PIERRELATTE industrial complex is presented. This study is divided into two parts: part one is theoretical: bibliographical synthesis of all scattered documents and publications; a homogeneous survey of all literature on the subject is thus available. Part two reviews the experience gained in professional surroundings: laboratory measurements and analyses (development of methods and daily applications); mathematical formulae to answer the first questions which arise before an individual liable to be contaminated; results obtained at PIERRELATTE [fr

  12. Application of NDE methods to green ceramics: initial results

    International Nuclear Information System (INIS)

    Kupperman, D.S.; Karplus, H.B.; Poeppel, R.B.; Ellingson, W.A.; Berger, H.; Robbins, C.; Fuller, E.

    1984-03-01

    This paper describes a preliminary investigation to assess the effectiveness of microradiography, ultrasonic methods, nuclear magnetic resonance, and neutron radiography for the nondestructive evaluation of green (unfired) ceramics. The objective is to obtain useful information on defects, cracking, delaminations, agglomerates, inclusions, regions of high porosity, and anisotropy.

  13. Five Kepler target stars that show multiple transiting exoplanet candidates

    Energy Technology Data Exchange (ETDEWEB)

    Steffen, Jason H.; /Fermilab; Batalha, Natalie M.; /San Jose State U.; Borucki, William J.; /NASA, Ames; Buchhave, Lars A.; /Harvard-Smithsonian Ctr. Astrophys. /Bohr Inst.; Caldwell, Douglas A.; /NASA, Ames /SETI Inst., Mtn. View; Cochran, William D.; /Texas U.; Endl, Michael; /Texas U.; Fabrycky, Daniel C.; /Harvard-Smithsonian Ctr. Astrophys.; Fressin, Francois; /Harvard-Smithsonian Ctr. Astrophys.; Ford, Eric B.; /Florida U.; Fortney, Jonathan J.; /UC, Santa Cruz, Phys. Dept. /NASA, Ames

    2010-06-01

    We present and discuss five candidate exoplanetary systems identified with the Kepler spacecraft. These five systems show transits from multiple exoplanet candidates. Should these objects prove to be planetary in nature, then these five systems open new opportunities for the field of exoplanets and provide new insights into the formation and dynamical evolution of planetary systems. We discuss the methods used to identify multiple transiting objects from the Kepler photometry as well as the false-positive rejection methods that have been applied to these data. One system shows transits from three distinct objects while the remaining four systems show transits from two objects. Three systems have planet candidates that are near mean motion commensurabilities - two near 2:1 and one just outside 5:2. We discuss the implications that multitransiting systems have on the distribution of orbital inclinations in planetary systems, and hence their dynamical histories; as well as their likely masses and chemical compositions. A Monte Carlo study indicates that, with additional data, most of these systems should exhibit detectable transit timing variations (TTV) due to gravitational interactions - though none are apparent in these data. We also discuss new challenges that arise in TTV analyses due to the presence of more than two planets in a system.

  14. Tokyo Motor Show 2003; Tokyo Motor Show 2003

    Energy Technology Data Exchange (ETDEWEB)

    Joly, E.

    2004-01-01

    The following text presents the different technologies exhibited during the 37th Tokyo Motor Show. The report points out the main development trends of the Japanese automobile industry. Hybrid electric vehicles and those equipped with fuel cells have been highlighted by the Japanese manufacturers, which devote considerable budgets to research on less polluting vehicles. The exhibited models, although all different from one manufacturer to another, always use a hybrid fuel cell/battery system. The manufacturers have also stressed intelligent systems for navigation and safety as well as design and comfort. (O.M.)

  15. Arsenic absorption by members of the Brassicacea family, analysed by neutron activation, k0-method - preliminary results

    International Nuclear Information System (INIS)

    Uemura, George; Matos, Ludmila Vieira da Silva; Silva, Maria Aparecida da; Ferreira, Alexandre Santos Martorano; Menezes, Maria Angela de Barros Correia

    2009-01-01

    Natural arsenic contamination is a cause for concern in many countries of the world, including Argentina, Bangladesh, Chile, China, India, Mexico, Thailand and the United States of America, and also in Brazil, especially in the Iron Quadrangle area, where mining activities have contributed to aggravating the natural contamination. Brassicaceae is a plant family with edible species (arugula, cabbage, cauliflower, cress, kale, mustard, radish), ornamental ones (alyssum, field pennycress, ornamental cabbages and kales), and some species known as metal and metalloid accumulators (Indian mustard, field pennycress), for example of chromium, nickel, and arsenic. The present work aimed at studying other taxa of the Brassicaceae family to verify their capability of absorbing arsenic under controlled conditions, for possible use in remediation activities. The analytical method chosen was neutron activation analysis, k0 method, a routine technique at CDTN that is also very appropriate for arsenic studies. To avoid possible interference from solid substrates, like sand or vermiculite, attempts were made to keep the specimens in 1/4 Murashige and Skoog basal salt solution (M and S). Growth was stunted and the plants withered and perished, showing that modifications in M and S had to be made. The addition of nickel and silicon allowed normal growth of the plant specimens for periods longer than usually achieved (more than two months), yielding samples large enough for further studies with other techniques, like ICP-MS, and other targets, like speciation studies. The results of arsenic absorption are presented here and the need for nickel and silicon in the composition of M and S is discussed. (author)

  16. THE RESULTS OF KINESIOTAPING IN PATIENTS WITH SCOLIOSIS

    Directory of Open Access Journals (Sweden)

    Dmitry Anatolevich Kiselev

    2016-10-01

    Full Text Available The kinesiotaping method has been put into practice by the Rehabilitation and Sports Medicine chair of the RNIMU named after N.I. Pirogov, located in the Medical Rehabilitation Department of the RDKB. The method showed high treatment efficacy, and the results were stable and lasting. Kinesiotaping is not the main treatment method for scoliosis, but its good efficacy in reaching particular rehabilitation goals, its potentiation of other methods of scoliosis treatment, pain elimination and so on support the idea of including this method in the rehabilitation process for such a difficult disease as scoliosis.

  17. Lithium inclusion in indium metal-organic frameworks showing increased surface area and hydrogen adsorption

    Directory of Open Access Journals (Sweden)

    Mathieu Bosch

    2014-12-01

    Full Text Available Investigation of counterion exchange in two anionic In-metal-organic frameworks (In-MOFs) showed that partial replacement of disordered ammonium cations was achieved through the pre-synthetic addition of LiOH to the reaction mixture. This resulted in a surface area increase of over 1600% in {Li[In(1,3-BDC)2]}n and an enhancement of the H2 uptake of approximately 275% at 80 000 Pa at 77 K. This method resulted in frameworks with permanent lithium content after repeated solvent exchange, as confirmed by inductively coupled plasma mass spectrometry. Lithium counterion replacement appears to increase porosity after activation through replacement of bulkier, softer counterions and demonstrates the tuning of pore size and properties in MOFs.

  18. Why do results conflict regarding the prognostic value of the methylation status in colon cancers? the role of the preservation method

    Directory of Open Access Journals (Sweden)

    Tournier Benjamin

    2012-01-01

    Full Text Available Abstract Background In colorectal carcinoma, extensive gene promoter hypermethylation is called the CpG island methylator phenotype (CIMP). Explaining why studies on CIMP and survival yield conflicting results is essential. Most experiments to measure DNA methylation rely on the sodium bisulfite conversion of unmethylated cytosines into uracils. No study has evaluated the performance of bisulfite conversion and methylation levels from matched cryo-preserved and Formalin-Fixed Paraffin Embedded (FFPE) samples using pyrosequencing. Methods Couples of matched cryo-preserved and FFPE samples from 40 colon adenocarcinomas were analyzed. Rates of bisulfite conversion and levels of methylation of LINE-1, MLH1 and MGMT markers were measured. Results For the reproducibility of bisulfite conversion, the mean of bisulfite-to-bisulfite standard deviation (SD) was 1.3%. The mean of run-to-run SD of PCR/pyrosequencing was 0.9%. Of the 40 DNA couples, only 67.5%, 55.0%, and 57.5% of FFPE DNA were interpretable for LINE-1, MLH1, and MGMT markers, respectively, after the first analysis. On frozen samples the proportion of well converted samples was 95.0%, 97.4% and 87.2% respectively. For DNA showing a total bisulfite conversion, 8 couples (27.6%) for LINE-1, 4 couples (15.4%) for MLH1 and 8 couples (25.8%) for MGMT displayed significant differences in methylation levels. Conclusions Frozen samples gave reproducible results for bisulfite conversion and reliable methylation levels. FFPE samples gave unsatisfactory and non reproducible bisulfite conversions leading to random results for methylation levels. The use of FFPE collections to assess DNA methylation by bisulfite methods must not be recommended. This can partly explain the conflicting results on the prognosis of CIMP colon cancers.

  19. Method and equipment for treating waste water resulting from the technological testing processes of NPP equipment

    International Nuclear Information System (INIS)

    Radulescu, M. C.; Valeca, S.; Iorga, C.

    2016-01-01

    Modern methods and technologies, coupled with advanced equipment for treating residual substances resulting from technological processes, are mandatory measures for all industrial facilities. The correct management of the used working agents and of all wastes resulting from the different technological processes (preparation, use, collection, neutralization, discharge) is intended to reduce and, as far as possible, remove their potential negative impact on the environment. The high pressure and temperature testing stands at INR, intended for functional testing of nuclear components (fuel bundles, fuelling machines, etc.), were included in these measures since they use oils, chemically treated demineralized water, greases, etc. This paper is focused on the method and equipment used at INR Pitesti for the chemical treatment of demineralized waters, as well as the equipment for collecting, neutralizing and discharging them after use. (authors)

  20. Online Italian fandoms of American TV shows

    Directory of Open Access Journals (Sweden)

    Eleonora Benecchi

    2015-06-01

    Full Text Available The Internet has changed media fandom in two main ways: it helps fans connect with each other despite physical distance, leading to the formation of international fan communities; and it helps fans connect with the creators of the TV show, deepening the relationship between TV producers and international fandoms. To assess whether Italian fan communities active online are indeed part of transnational online communities and whether the Internet has actually altered their relationship with the creators of the original text they are devoted to, qualitative analysis and narrative interviews of 26 Italian fans of American TV shows were conducted to explore the fan-producer relationship. Results indicated that the online Italian fans surveyed preferred to stay local, rather than using geography-leveling online tools. Further, the sampled Italian fans' relationships with the show runners were mediated or even absent.

  1. Comparison between the Variational Iteration Method and the Homotopy Perturbation Method for the Sturm-Liouville Differential Equation

    OpenAIRE

    Darzi R; Neamaty A

    2010-01-01

    We applied the variational iteration method and the homotopy perturbation method to solve Sturm-Liouville eigenvalue and boundary value problems. The main advantage of these methods is the flexibility to give approximate and exact solutions to both linear and nonlinear problems without linearization or discretization. The results show that both methods are simple and effective.

  2. Human exposure to bisphenol A by biomonitoring: Methods, results and assessment of environmental exposures

    International Nuclear Information System (INIS)

    Dekant, Wolfgang; Voelkel, Wolfgang

    2008-01-01

    Human exposure to bisphenol A is controversially discussed. This review critically assesses methods for biomonitoring of bisphenol A exposures and reported concentrations of bisphenol A in blood and urine of non-occupationally ('environmentally') exposed humans. From the many methods published to assess bisphenol A concentrations in biological media, mass spectrometry-based methods are considered most appropriate due to high sensitivity, selectivity and precision. In human blood, based on the known toxicokinetics of bisphenol A in humans, the expected very low concentrations of bisphenol A due to rapid biotransformation and the very rapid excretion result in severe limitations in the use of reported blood levels of bisphenol A for exposure assessment. Due to the rapid and complete excretion of orally administered bisphenol A, urine samples are considered as the appropriate body fluid for bisphenol A exposure assessment. In urine samples from several cohorts, bisphenol A (as glucuronide) was present in average concentrations in the range of 1-3 μg/L suggesting that daily human exposure to bisphenol A is below 6 μg per person (< 0.1 μg/kg bw/day) for the majority of the population
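
    The back-calculation implicit in the last sentence (urinary concentration to daily intake) can be made explicit with the standard spot-urine scaling intake ≈ C_urine × V_urine / (body weight × F_ue), where F_ue is the fraction of the dose excreted in urine (close to 1 for bisphenol A glucuronide). The urine volume and body weight below are illustrative assumptions, not values from the review.

```python
def daily_intake_ug_per_kg(c_urine_ug_per_l: float,
                           urine_volume_l_per_day: float = 2.0,
                           body_weight_kg: float = 60.0,
                           urinary_excretion_fraction: float = 1.0) -> float:
    """Back-calculate daily intake (ug/kg bw/day) from a urinary concentration.

    Assumes steady state and essentially complete urinary excretion, a common
    simplification for rapidly excreted conjugates such as bisphenol A glucuronide.
    """
    return (c_urine_ug_per_l * urine_volume_l_per_day
            / (body_weight_kg * urinary_excretion_fraction))

if __name__ == "__main__":
    for c in (1.0, 3.0):  # ug/L, roughly the range quoted above
        print(f"{c} ug/L  ->  {daily_intake_ug_per_kg(c):.3f} ug/kg bw/day")
```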

  3. Viscous wing theory development. Volume 1: Analysis, method and results

    Science.gov (United States)

    Chow, R. R.; Melnik, R. E.; Marconi, F.; Steinhoff, J.

    1986-01-01

    Viscous transonic flows at large Reynolds numbers over 3-D wings were analyzed using a zonal viscid-inviscid interaction approach. A new numerical AFZ scheme was developed in conjunction with the finite volume formulation for the solution of the inviscid full-potential equation. A special far-field asymptotic boundary condition was developed and a second-order artificial viscosity included for an improved inviscid solution methodology. The integral method was used for the laminar/turbulent boundary layer and 3-D viscous wake calculation. The interaction calculation included the coupling conditions of the source flux due to the wing surface boundary layer, the flux jump due to the viscous wake, and the wake curvature effect. A method was also devised incorporating the 2-D trailing edge strong interaction solution for the normal pressure correction near the trailing edge region. A fully automated computer program was developed to perform the proposed method with one scalar version to be used on an IBM-3081 and two vectorized versions on Cray-1 and Cyber-205 computers.

  4. Method of fabricating nested shells and resulting product

    Science.gov (United States)

    Henderson, Timothy M.; Kool, Lawrence B.

    1982-01-01

    A multiple shell structure and a method of manufacturing such structure wherein a hollow glass microsphere is surface treated in an organosilane solution so as to render the shell outer surface hydrophobic. The surface treated glass shell is then suspended in the oil phase of an oil-aqueous phase dispersion. The oil phase includes an organic film-forming monomer, a polymerization initiator and a blowing agent. A polymeric film forms at each phase boundary of the dispersion and is then expanded in a blowing operation so as to form an outer homogeneously integral monocellular substantially spherical thermoplastic shell encapsulating an inner glass shell of lesser diameter.

  5. The Biochemistry Show: a new and fun tool for learning

    Directory of Open Access Journals (Sweden)

    A.H Ono

    2006-07-01

    Full Text Available The traditional methods to teach biochemistry in most universities are based on the memorization of chemical structures,  biochemical  pathways  and  reagent  names,  which  is  many  times  dismotivating  for  the  students.  We presently describe an innovative, interactive and alternative method for teaching biochemistry to medical and nutrition undergraduate students, called the Biochemistry Show (BioBio Show.The Biobio show is based on active participation of the students. They are divided in groups and the groups face each other. One group faces another one group at a time, in a game based on true or false questions that involve subjects of applied biochemistry (exercise, obesity, diabetes, cholesterol, free radicals, among others. The questions of the Show are previously elaborated by senior students. The Biobio Show has four phases, the first one is a selection exam, and from the second to the fourth phase, eliminatory confrontations happen. On a confrontation, the first group must select a certain quantity of questions for the opponent to answer.  The group who choses the questions must know how to answer and justify the selected questions. This procedure is repeated on all phases of the show. On the last phase, the questions used are taken from an exam previously performed by the students: either the 9-hour biochemistry exam (Sé et al. A 9-hour biochemistry exam. An iron man competition or a good way of evaluating undergraduate students? SBBq 2005, abstract K-6 or the True-or-False exam (TFE (Sé et al. Are tutor-students capable of writing good biochemistry exams? SBBq 2004, abstract K-18. The winner group receives an extra 0,5 point on the final grade. Over 70% of the students informed on a questionnaire that the Biobio Show is a valuable tool for learning biochemistry.    That is a new way to enrich the discussion of biochemistry in the classroom without the students getting bored. Moreover, learning

  6. Comparison between the Variational Iteration Method and the Homotopy Perturbation Method for the Sturm-Liouville Differential Equation

    Directory of Open Access Journals (Sweden)

    R. Darzi

    2010-01-01

    Full Text Available We applied the variational iteration method and the homotopy perturbation method to solve Sturm-Liouville eigenvalue and boundary value problems. The main advantage of these methods is the flexibility to give approximate and exact solutions to both linear and nonlinear problems without linearization or discretization. The results show that both methods are simple and effective.

  7. Methods and results of a probabilistic risk assessment for radioactive waste transports

    International Nuclear Information System (INIS)

    Lange, F.; Gruendler, D.; Schwarz, G.

    1993-01-01

    The radiological risk from accidents has been analyzed for the expected annual transport volume (3400 shipping units) of low-level and partly intermediate-level radioactive wastes to be shipped to a final repository. In order to take account of the variable quantities and conditions, a computer code was developed to simulate a wide spectrum of waste transport and accident configurations using Monte Carlo sampling techniques. Typically some 10,000 source terms were generated to represent possible releases of radionuclides from transport accidents. Accident events in which the integrity of the waste packaging is retained, and consequently no releases occur, are included. Potential radiological consequences are then calculated for each of the release categories using an accident consequence code which takes atmospheric dispersion statistics into account. Finally, cumulative complementary frequency distributions of radiological consequences are generated by superposing the results for all release categories. Radiological consequences are primarily expressed as potential effective individual doses resulting from airborne and deposited radionuclides. The results of the risk analysis show that the expected frequencies of effective doses comparable to one year of natural radiation exposure are quite low, and very low for potential radiation exposures in the range of 50 mSv. (J.P.N.)
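
    The quoted procedure (Monte Carlo sampling of release events followed by a complementary cumulative frequency distribution of doses) can be illustrated in a few lines. The sketch below samples purely fictitious release/dose outcomes and builds the complementary cumulative frequencies; it is not the code described in the paper, and all probabilities and dose levels are invented.

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples = 10_000

# Fictitious per-accident model: most events give no release,
# the rest give a log-normally distributed effective dose (mSv).
release_occurs = rng.random(n_samples) < 0.05
doses = np.where(release_occurs,
                 rng.lognormal(mean=-3.0, sigma=2.0, size=n_samples), 0.0)

annual_frequency = 1.0e-3  # assumed accident frequency per year (illustrative)
thresholds = np.array([0.01, 0.1, 1.0, 10.0, 50.0])  # dose levels of interest, mSv

# Complementary cumulative frequency: expected annual frequency of exceeding each dose.
ccdf = annual_frequency * np.array([(doses > d).mean() for d in thresholds])
for d, f in zip(thresholds, ccdf):
    print(f"frequency of dose > {d:>5} mSv per year ~ {f:.2e}")
```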

  8. Evolution of different reaction methods resulting in the formation of AgI125 for use in brachytherapy sources

    International Nuclear Information System (INIS)

    Souza, C.D.; Peleias Jr, F.S.; Rostelato, M.E.C.M.; Zeituni, C.A.; Benega, M.A.G.; Tiezzi, R.; Mattos, F.R.; Rodrigues, B.T.; Oliveira, T.B.; Feher, A.; Moura, J.A.; Costa, O.L.

    2014-01-01

    Prostate cancer represents about 10% of all cases of cancer in the world. Brachytherapy has been extensively used in the early and intermediate stages of the illness. The radiotherapy method reduces the damage probability to surrounding healthy tissues. The present study compares several deposition methods of iodine-125 on silver substrate (seed core), in order to choose the most suitable one to be implemented at IPEN. Four methods were selected: method 1 (assay based on electrodeposition) which presented efficiency of 65.16%; method 2 (assay based on chemical reactions, developed by David Kubiatowicz) which presented efficiency of 70.80%; method 3 (chemical reaction based on the methodology developed by Dr. Maria Elisa Rostelato) which presented efficiency of 55.80%; new method developed by IPEN with 90.5% efficiency. Based on the results, the new method is the suggested one to be implemented. (authors)

  9. Water footprint, extended water footprint and virtual water trade of the Cantabria region, Spain. A critical appraisal of results, uncertainties and methods.

    Science.gov (United States)

    Diaz-Alcaide, Silvia; Martinez-Santos, Pedro; Willaarts, Barbara; Hernández-Moreno, Enrique; Llamas, M. Ramon

    2015-04-01

    Water footprint assessments have gradually gained recognition as valuable tools for water management, to the point that they have been officially incorporated to water planning in countries such as Spain. Adequate combinations of the virtual water and water footprint concepts present the potential to link a broad range of sectors and issues, thus providing appropriate frameworks to support optimal water allocation and to inform production and trade decisions from the water perspective. We present the results of a regional study carried out in Cantabria, a 5300 km2 autonomous region located in northern Spain. Our approach deals with the municipal, shire and regional scales, combining different methods to assess each of the main components of Cantabria's water footprint (agriculture, livestock, forestry, industry, mining, tourism, domestic use and reservoirs), as well as exploring the significance of different approaches, assumptions and databases in the overall outcomes. The classic water footprint method is coupled with extended water footprint analyses in order to provide an estimate of the social and economic value of each sector. Finally, virtual water imports and exports are computed between Cantabria and the rest of Spain and between Cantabria and the world. The outcome of our work (a) highlights the paramount importance of green water (mostly embedded in pastures) in the region's water footprint and virtual water exports; (b) establishes the role of the region as a net virtual water exporter; (c) shows the productivity of water (euro/m3 and jobs/m3) to be highest in tourism and lowest in agriculture and livestock; and (d) demonstrates that statistical databases are seldom compiled with water footprint studies in mind, which is likely to introduce uncertainties in the results. Although our work shows that there is still plenty of room for improvement in regional-scale water footprint assessments, we contend that the available information is sufficient to

  10. Application of Semiempirical Methods to Transition Metal Complexes: Fast Results but Hard-to-Predict Accuracy.

    KAUST Repository

    Minenkov, Yury

    2018-05-22

    A series of semiempirical PM6* and PM7 methods has been tested for reproducing the relative conformational energies of 27 realistic-size complexes of 16 different transition metals (TMs). An analysis of relative energies derived from single-point energy evaluations on density functional theory (DFT) optimized conformers revealed pronounced deviations between semiempirical and DFT methods, indicating a fundamental difference in the potential energy surfaces (PES). To identify the origin of the deviation, we compared fully optimized PM7 and respective DFT conformers. For many complexes, the differences in PM7 and DFT conformational energies were confirmed, often manifesting themselves in false coordination of some atoms (H, O) to TMs and in chemical transformations or distortion of the coordination-center geometry in the PM7 structures. Although geometry optimization with a fixed coordination-center geometry leads to some improvement in conformational energies, the resulting accuracy is still too low to recommend the explored semiempirical methods for out-of-the-box conformational search/sampling: careful testing is always needed.

  11. A ChIP-Seq benchmark shows that sequence conservation mainly improves detection of strong transcription factor binding sites.

    Directory of Open Access Journals (Sweden)

    Tony Håndstad

    BACKGROUND: Transcription factors are important controllers of gene expression, and mapping transcription factor binding sites (TFBS) is key to inferring transcription factor regulatory networks. Several methods for predicting TFBS exist, but there are no standard genome-wide datasets on which to assess the performance of these prediction methods. Also, it is believed that information about sequence conservation across different genomes can generally improve the accuracy of motif-based predictors, but it is not clear under what circumstances the use of conservation is most beneficial. RESULTS: Here we use published ChIP-seq data and an improved peak detection method to create comprehensive benchmark datasets for prediction methods which use known descriptors or binding motifs to detect TFBS in genomic sequences. We use this benchmark to assess the performance of five different prediction methods and find that the methods that use information about sequence conservation generally perform better than simpler motif-scanning methods. The difference is greater on high-affinity peaks and when using short and information-poor motifs. However, if the motifs are specific and information-rich, we find that simple motif-scanning methods can perform better than conservation-based methods. CONCLUSIONS: Our benchmark provides a comprehensive test that can be used to rank the relative performance of transcription factor binding site prediction methods. Moreover, our results show that, contrary to previous reports, sequence conservation is better suited to predicting strong than weak transcription factor binding sites.
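
    As an illustration of the simple motif-scanning baseline discussed in this benchmark, the following Python sketch scores every window of a DNA sequence against a position weight matrix built from base counts. The 4-bp motif counts, the example sequence and the score threshold are all hypothetical; the benchmark's actual motifs, peak data and conservation-aware methods are not reproduced here.

        # Minimal sketch of motif scanning with a position weight matrix (PWM).
        # The motif counts below are made up for illustration; a real scan would use
        # a curated motif (e.g. from a motif database) and genome-scale sequence.
        import math

        BASES = "ACGT"

        def pwm_from_counts(counts, pseudocount=0.5, background=0.25):
            """Convert per-position base counts into log-odds scores."""
            pwm = []
            for col in counts:
                total = sum(col.values()) + 4 * pseudocount
                pwm.append({b: math.log2((col.get(b, 0) + pseudocount) / total / background)
                            for b in BASES})
            return pwm

        def scan(sequence, pwm, threshold):
            """Return (position, score) for every window scoring above threshold."""
            w = len(pwm)
            hits = []
            for i in range(len(sequence) - w + 1):
                window = sequence[i:i + w]
                score = sum(pwm[j][base] for j, base in enumerate(window))
                if score >= threshold:
                    hits.append((i, round(score, 2)))
            return hits

        if __name__ == "__main__":
            counts = [{"A": 8, "C": 1, "G": 1, "T": 0},   # hypothetical 4-bp motif
                      {"A": 0, "C": 9, "G": 1, "T": 0},
                      {"A": 1, "C": 0, "G": 9, "T": 0},
                      {"A": 0, "C": 0, "G": 1, "T": 9}]
            pwm = pwm_from_counts(counts)
            print(scan("TTACGTGGACGTAA", pwm, threshold=4.0))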

  12. Intracerebral metastasis showing restricted diffusion: Correlation with histopathologic findings

    Energy Technology Data Exchange (ETDEWEB)

    Duygulu, G. [Radiology Department, Ege University Medicine School, Izmir (Turkey); Ovali, G. Yilmaz [Radiology Department, Celal Bayar University Medicine School, Manisa (Turkey)], E-mail: gulgun.yilmaz@bayar.edu.tr; Calli, C.; Kitis, O.; Yuenten, N. [Radiology Department, Ege University Medicine School, Izmir (Turkey); Akalin, T. [Pathology Department, Ege University Medicine School, Izmir (Turkey); Islekel, S. [Neurosurgery Department, Ege University Medicine School, Izmir (Turkey)

    2010-04-15

    Objective: We aimed to determine the frequency of restricted diffusion in intracerebral metastases and to find out whether there is a correlation between the primary tumor pathology and the diffusion-weighted MR imaging (DWI) findings of these metastases. Material and methods: 87 patients with intracerebral metastases were examined with routine MR imaging and DWI. 11 hemorrhagic metastatic lesions were excluded. The routine MR imaging included three planes before and after contrast enhancement. The DWI was performed with a spin-echo EPI sequence with three b values (0, 500 and 1000), and ADC maps were calculated. 76 patients with metastases were grouped according to primary tumor histology, and the ratios of restricted diffusion were calculated for these groups. ADCmin values were measured within the solid components of the tumors, and the ratio of metastases with restricted diffusion to those without restricted diffusion was calculated. Fisher's exact and Mann-Whitney U tests were used for the statistical analysis. Results: Restricted diffusion was observed in a total of 15 metastatic lesions (19.7%). The primary malignancy was lung carcinoma in 10 of these cases (66.6%) (5 small cell carcinoma, 5 non-small cell carcinoma) and breast carcinoma in three cases (20%). Colon carcinoma and testicular teratocarcinoma were the other two primary tumors in which restricted diffusion in metastasis was detected. There was no statistically significant difference between the primary pathology groups which showed restricted diffusion (p > 0.05). ADCmin values of the solid components of metastases with restricted diffusion and of metastases without restricted diffusion also showed no statistically significant difference (0.72 ± 0.16 × 10⁻³ mm²/s and 0.78 ± 0.21 × 10⁻³ mm²/s, respectively) (p = 0.325). Conclusion: Detection of restricted diffusion on DWI in intracerebral metastasis is not rare, particularly if the primary tumor is lung or breast
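
    The ADC maps mentioned above follow from the standard mono-exponential diffusion model; the sketch below shows a simplified two-point estimate from the b = 0 and b = 1000 s/mm² images. The arrays are synthetic stand-ins, not patient data, and a clinical pipeline would of course operate on the full image volumes.

        # Illustrative computation of an ADC map from two diffusion-weighted images
        # using the mono-exponential model S(b) = S0 * exp(-b * ADC). The arrays here
        # are synthetic stand-ins; a real pipeline would load scanner DICOM/NIfTI data.
        import numpy as np

        def adc_map(s_b0, s_b1000, b=1000.0, eps=1e-6):
            """ADC in mm^2/s from signals at b = 0 and b = 1000 s/mm^2."""
            ratio = np.clip(s_b0, eps, None) / np.clip(s_b1000, eps, None)
            return np.log(ratio) / b

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            s0 = rng.uniform(800, 1200, size=(4, 4))      # b = 0 image
            true_adc = np.full((4, 4), 0.75e-3)           # ~0.75 x 10^-3 mm^2/s
            s1000 = s0 * np.exp(-1000.0 * true_adc)       # simulated b = 1000 image
            print(adc_map(s0, s1000).round(5))            # recovers ~0.00075 everywhere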

  13. Earthquake prediction by Kina Method

    International Nuclear Information System (INIS)

    Kianoosh, H.; Keypour, H.; Naderzadeh, A.; Motlagh, H.F.

    2005-01-01

    Earthquake prediction has been one of the oldest desires of mankind, and scientists have worked hard at it for a long time. The results of these efforts can generally be divided into two methods of prediction: 1) the statistical method, and 2) the empirical method. In the first method, earthquakes are predicted using statistics and probabilities, while the second method utilizes a variety of precursors for earthquake prediction. The latter method is time consuming and more costly; however, neither method has produced fully satisfactory results up to now. In this paper a new method entitled the 'Kiana Method' is introduced for earthquake prediction. This method offers more accurate results at lower cost compared to other conventional methods. In the Kiana method the electrical and magnetic precursors are measured in an area. Then, the time and the magnitude of a future earthquake are calculated using electrical formulas, in particular the formulas for electrical capacitors. In this method, daily measurement of the electrical resistance in an area makes clear whether or not the area is capable of producing an earthquake in the future. If the result shows a positive sign, the occurrence time and the magnitude can be estimated from the measured quantities. This paper explains the procedure and details of this prediction method. (authors)

  14. Dosimetry methods and results for the former residents of Bikini Atoll

    International Nuclear Information System (INIS)

    Greenhouse, N.A.

    1979-01-01

    The US Government utilized Bikini and Enewetak Atolls in the northern Marshall Islands of Micronesia for atmospheric tests of nuclear explosives in the 1940s and 1950s. The original inhabitants of these atolls were relocated prior to the tests. During the early 1970s, a small but growing population of Marshallese people reinhabited Bikini. Environmental and personnel radiological monitoring programs were begun in 1974 to ensure that doses and dose commitments received by Bikini residents remained within US Federal Radiation Council guidelines. Dramatic increases in 137Cs body burdens among the inhabitants between April 1977 and 1978 may have played a significant role in the government decision to move the 140 Bikinians in residence off the atoll in August 1978. The average 137Cs body burden for the population was 2.3 μCi in April 1978. Several individuals, however, exceeded the maximum permissible body burden of 3 μCi, and some approached 6 μCi. The resultant total dose commitment was less than 200 mrem for the average resident. The average total dose for the mean residence interval of approximately 4.5 years was about 1 rem. The sources of exposure, the probable cause of the unexpected increase in 137Cs body burdens, and the methods for calculating radionuclide intake and resultant doses are discussed. Suggestions are offered as to the implications of the most significant exposure pathways for the future inhabitation of Bikini and Enewetak

  15. Effective methods of protection of the intellectual activity results in infosphere of global telematics networks

    Directory of Open Access Journals (Sweden)

    D. A. Lovtsov

    2016-01-01

    The purpose of this article is to improve the methodology for the technological, organizational and legal protection of the results of intellectual activity, and of the related intellectual rights, in the information sphere of global telematics networks (such as «Internet», «Relkom», «Sitek», «Sedab», «Remart», and others). Based on an analysis of the peculiarities and possibilities of different technological, organizational and legal methods and means of protecting information objects, proposals for improving the corresponding organizational and legal safeguards are formulated. Effective protection is achieved by rationally combining the technological, organizational and legal methods and means that are possible in a particular situation.

  16. Feasibility to implement the radioisotopic method of nasal mucociliary transport measurement getting reliable results

    International Nuclear Information System (INIS)

    Troncoso, M.; Opazo, C.; Quilodran, C.; Lizama, V.

    2002-01-01

    Aim: Our goal was to implement the radioisotopic method for measuring the nasal mucociliary velocity of transport (NMVT) in a feasible way, in order to make it easily available, as well as to validate the accuracy of the results. Such a method is needed when primary ciliary dyskinesia (PCD) is suspected, a disorder characterized by low NMVT and non-specific chronic respiratory symptoms that needs to be confirmed by electron microscopic cilia biopsy. Methods: We performed one hundred studies from February 2000 until February 2002. Patients were aged 2 months to 39 years, mean 9 years. All of them were referred from the Respiratory Disease Department. Ninety had upper or lower respiratory symptoms; ten were healthy controls. The procedure, done by the Nuclear Medicine Technologist, consists of placing a 20 μl drop of 99mTc-MAA (0.1 mCi, 4 MBq) behind the head of the inferior turbinate in one nostril using a frontal light, a nasal speculum and a teflon catheter attached to a tuberculin syringe. The drop movement was acquired in a gamma camera-computer system and the velocity was expressed in mm/min. As the patient must not move during the procedure, sedation has to be used in non-cooperative children. Cases with abnormal NMVT values were referred for nasal biopsy. Patients were classified in three groups: normal controls (NC), PCD confirmed by biopsy (PCDB), and cases with respiratory symptoms without biopsy (RSNB). In all patients with NMVT less than 2.4 mm/min, PCD was confirmed by biopsy. There was a clear-cut separation between normal and abnormal values and, interestingly, even the highest NMVT in PCDB cases was lower than the lowest NMVT in NC. The procedure is not as easy as generally described in the literature, because the operator has to acquire some skill and because of the need for sedation in some cases. Conclusion: The procedure gives reliable, reproducible and objective results. It is safe, not expensive and quick in cooperative patients. Although, sometimes

  17. Collaborative assessment and management of suicidality method shows effect

    DEFF Research Database (Denmark)

    Nielsen, Ann Colleen; Alberdi Olano, Francisco Javier Lorenzo; Rosenbaum, Bent

    2011-01-01

    Previous studies confirm the effect of collaborative assessment and management of suicidality (CAMS) in an experimental setup, but there is a need to test CAMS with regard to its effectiveness and feasibility in a real-life clinical context. The purpose of this study was to investigate CAMS in a ...

  18. Results from a survey of the South African GISc community show ...

    African Journals Online (AJOL)

    Serena Coetzee

    the GISc community fulfil roles of data analysis and interpretation, together with data ... The remainder of the article is structured as follows: related work is briefly ...

  19. Examining Sexual Dysfunction in Non‐Muscle‐Invasive Bladder Cancer: Results of Cross‐Sectional Mixed‐Methods Research

    Directory of Open Access Journals (Sweden)

    Marc A. Kowalkowski, PhD

    2014-08-01

    Conclusions: Survivors' sexual symptoms may result from NMIBC, comorbidities, or both. These results inform the literature and practice by raising awareness about the frequency of symptoms and their impact on NMIBC survivors' intimate relationships. Further work is needed to design symptom management education programs to dispel misinformation about contamination post‐treatment and to improve quality of life. Kowalkowski MA, Chandrashekar A, Amiel GE, Lerner SP, Wittmann DA, Latini DM, and Goltz HH. Examining sexual dysfunction in non‐muscle‐invasive bladder cancer: Results of cross‐sectional mixed‐methods research. Sex Med 2014;2:141–151.

  20. Radioactive indium labelling of the figured elements of blood. Method, results, applications

    International Nuclear Information System (INIS)

    Ducassou, D.; Nouel, J.P.

    Following the work of Thakur et al., the authors became interested in labelling red corpuscles, leucocytes and platelets with indium-111 or indium-113m (8-hydroxyquinoline–indium). For easier labelling of the figured elements of blood, the technique described was modified. The chelate is prepared by simple contact, at room temperature, of indium-111 or -113m chloride and water-soluble 8-hydroxyquinoline sulphate in the presence of 0.2M TRIS buffer. The chosen figured element, suspended in physiological serum, is added directly to the solution obtained, the platelets and leucocytes being separated out beforehand by differential centrifugation. While it gives results similar to those of Thakur et al., the proposed method avoids chloroform extraction of the radioactive chelate and the use of alcohol, which is liable to impair the platelet aggregation capacity [fr

  1. A method for purifying air containing radioactive substances resulting from the disintegration of radon

    International Nuclear Information System (INIS)

    Stringer, C.W.

    1974-01-01

    The invention relates to the extraction of radioactive isotopes from air. It refers to a method for removing from air the radioactive substances resulting from the disintegration of radon, said method being of the type comprising filtering the air contaminated by the radon daughter products in a filter wetted with water in order to trap said substances in the water. It is characterized in that it comprises the steps of causing the water contaminated by the radon daughter products to flow through a filtering substance containing a non-hydrosoluble granular substrate, the outer surface of which has been dried and then wetted by a normally-liquid hydrocarbon, and of returning the thus-filtered water so that it wets the air filter again and entraps further radon daughter products. This can be applied to the purification of the air in uranium mines [fr

  2. Assessing Internet energy intensity: A review of methods and results

    Energy Technology Data Exchange (ETDEWEB)

    Coroama, Vlad C., E-mail: vcoroama@gmail.com [Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Hilty, Lorenz M. [Department of Informatics, University of Zurich, Binzmühlestrasse 14, 8050 Zurich (Switzerland); Empa, Swiss Federal Laboratories for Materials Science and Technology, Lerchenfeldstr. 5, 9014 St. Gallen (Switzerland); Centre for Sustainable Communications, KTH Royal Institute of Technology, Lindstedtsvägen 5, 100 44 Stockholm (Sweden)

    2014-02-15

    Assessing the average energy intensity of Internet transmissions is a complex task that has been a controversial subject of discussion. Estimates published over the last decade diverge by up to four orders of magnitude — from 0.0064 kilowatt-hours per gigabyte (kWh/GB) to 136 kWh/GB. This article presents a review of the methodological approaches used so far in such assessments: i) top–down analyses based on estimates of the overall Internet energy consumption and the overall Internet traffic, whereby average energy intensity is calculated by dividing energy by traffic for a given period of time, ii) model-based approaches that model all components needed to sustain an amount of Internet traffic, and iii) bottom–up approaches based on case studies and generalization of the results. Our analysis of the existing studies shows that the large spread of results is mainly caused by two factors: a) the year of reference of the analysis, which has significant influence due to efficiency gains in electronic equipment, and b) whether end devices such as personal computers or servers are included within the system boundary or not. For an overall assessment of the energy needed to perform a specific task involving the Internet, it is necessary to account for the types of end devices needed for the task, while the energy needed for data transmission can be added based on a generic estimate of Internet energy intensity for a given year. Separating the Internet as a data transmission system from the end devices leads to more accurate models and to results that are more informative for decision makers, because end devices and the networking equipment of the Internet usually belong to different spheres of control. -- Highlights: • Assessments of the energy intensity of the Internet differ by a factor of 20,000. • We review top–down, model-based, and bottom–up estimates from literature. • Main divergence factors are the year studied and the inclusion of end devices
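
    The top-down approach (i) reduces to a single division of overall energy use by overall traffic for the same period; the sketch below only illustrates that arithmetic with placeholder figures, not values taken from any of the reviewed studies.

        # Sketch of the top-down estimate described above: average intensity is total
        # network electricity use divided by total traffic over the same period.
        # The figures below are placeholders, not values from the reviewed studies.
        def top_down_intensity(energy_kwh_per_year, traffic_gb_per_year):
            """Average Internet energy intensity in kWh per GB."""
            return energy_kwh_per_year / traffic_gb_per_year

        if __name__ == "__main__":
            energy = 300e9      # hypothetical network electricity use, kWh/year
            traffic = 1000e9    # hypothetical global IP traffic, GB/year
            print(f"{top_down_intensity(energy, traffic):.3f} kWh/GB")  # 0.300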

  3. Assessing Internet energy intensity: A review of methods and results

    International Nuclear Information System (INIS)

    Coroama, Vlad C.; Hilty, Lorenz M.

    2014-01-01

    Assessing the average energy intensity of Internet transmissions is a complex task that has been a controversial subject of discussion. Estimates published over the last decade diverge by up to four orders of magnitude — from 0.0064 kilowatt-hours per gigabyte (kWh/GB) to 136 kWh/GB. This article presents a review of the methodological approaches used so far in such assessments: i) top–down analyses based on estimates of the overall Internet energy consumption and the overall Internet traffic, whereby average energy intensity is calculated by dividing energy by traffic for a given period of time, ii) model-based approaches that model all components needed to sustain an amount of Internet traffic, and iii) bottom–up approaches based on case studies and generalization of the results. Our analysis of the existing studies shows that the large spread of results is mainly caused by two factors: a) the year of reference of the analysis, which has significant influence due to efficiency gains in electronic equipment, and b) whether end devices such as personal computers or servers are included within the system boundary or not. For an overall assessment of the energy needed to perform a specific task involving the Internet, it is necessary to account for the types of end devices needed for the task, while the energy needed for data transmission can be added based on a generic estimate of Internet energy intensity for a given year. Separating the Internet as a data transmission system from the end devices leads to more accurate models and to results that are more informative for decision makers, because end devices and the networking equipment of the Internet usually belong to different spheres of control. -- Highlights: • Assessments of the energy intensity of the Internet differ by a factor of 20,000. • We review top–down, model-based, and bottom–up estimates from literature. • Main divergence factors are the year studied and the inclusion of end devices

  4. The effect of different methods and image analyzers on the results of the in vivo comet assay.

    Science.gov (United States)

    Kyoya, Takahiro; Iwamoto, Rika; Shimanura, Yuko; Terada, Megumi; Masuda, Shuichi

    2018-01-01

    The in vivo comet assay is a widely used genotoxicity test that can detect DNA damage in a range of organs. It is included in the Organisation for Economic Co-operation and Development Guidelines for the Testing of Chemicals. However, various protocols are still used for this assay, and several different image analyzers are used routinely to evaluate the results. Here, we verified a protocol that largely contributes to the equivalence of results, and we assessed the effect on the results when slides made from the same sample were analyzed using two different image analyzers (Comet Assay IV vs Comet Analyzer). Standardizing the agarose concentrations and the DNA unwinding and electrophoresis times had a large impact on the equivalence of the results between the different methods used for the in vivo comet assay. In addition, there was some variation in the sensitivity of the two different image analyzers tested; however, this variation was considered to be minor and became negligible when the test conditions were standardized between the two methods. By standardizing the concentrations of low-melting agarose and the DNA unwinding and electrophoresis times between both methods used in the current study, the sensitivity to detect the genotoxicity of a positive control substance in the in vivo comet assay became generally comparable, independent of the image analyzer used. However, there may still be the possibility that other conditions, apart from the three described here, could affect the reproducibility of the in vivo comet assay.

  5. Do qualitative methods validate choice experiment-results? A case study on the economic valuation of peatland restoration in Central Kalimantan, Indonesia

    Energy Technology Data Exchange (ETDEWEB)

    Schaafsma, M.; Van Beukering, P.J.H.; Davies, O.; Oskolokaite, I.

    2009-05-15

    This study explores the benefits of combining independent results of qualitative focus group discussions (FGD) with a quantitative choice experiment (CE) in a developing-country context. The assessment addresses the compensation needed by local communities in Central Kalimantan to cooperate in peatland restoration programs, using a CE combined with a series of FGD to validate and explain the CE results. The main conclusion of this study is that a combination of qualitative and quantitative methods is necessary to assess the economic value of ecological services in monetary terms and to better understand the underlying attitudes and motives that drive these outcomes. The FGD not only cross-validate the results of the CE, but also help to interpret the differences in respondents' preferences arising from environmental awareness and ecosystem characteristics. The FGD confirm that the CE results provide accurate information for ecosystem valuation. In addition to the advantages of FGD listed in the literature, this study finds that FGD make it possible to identify the specific terms and conditions on which respondents will accept land-use change scenarios. The results also show that FGD may help to address the problems that neo-classical economic theory poses for the interpretation of economic valuation results, given the demands it places on the rationality of trade-offs and on the required calculations, in particular regarding the distribution of costs and benefits over time.

  6. Tomato Fruits Show Wide Phenomic Diversity but Fruit Developmental Genes Show Low Genomic Diversity.

    Directory of Open Access Journals (Sweden)

    Vijee Mohan

    Domestication of tomato has resulted in large diversity in fruit phenotypes. An intensive phenotyping of 127 tomato accessions from 20 countries revealed extensive morphological diversity in fruit traits. The diversity in fruit traits clustered the accessions into nine classes and identified certain promising lines having desirable traits pertaining to total soluble solids (TSS), carotenoids, ripening index, weight and shape. Factor analysis of the morphometric data from Tomato Analyzer showed that fruit shape is a complex trait shared by several factors. 100% of the variance between round and flat fruit shapes was explained by one discriminant function having a canonical correlation of 0.874 in a stepwise discriminant analysis. A set of 10 genes (ACS2, COP1, CYC-B, RIN, MSH2, NAC-NOR, PHOT1, PHYA, PHYB and PSY1) involved in various plant developmental processes was screened for SNP polymorphism by EcoTILLING. The genetic diversity in these genes revealed a total of 36 non-synonymous and 18 synonymous changes, leading to the identification of 28 haplotypes. The average frequency of polymorphism across the genes was 0.038/Kb. A significantly negative Tajima's D statistic in two of the genes, ACS2 and PHOT1, indicated the presence of rare alleles in low frequency. Our study indicates that while there is low polymorphic diversity in the genes regulating plant development, the population shows wider phenotypic diversity. Nonetheless, the morphological and genetic diversity of the present collection can be further exploited as potential resources in future.

  7. (Re)interpreting LHC New Physics Search Results : Tools and Methods, 3rd Workshop

    CERN Document Server

    The quest for new physics beyond the SM is arguably the driving topic for LHC Run2. LHC collaborations are pursuing searches for new physics in a vast variety of channels. Although collaborations provide various interpretations for their search results, the full understanding of these results requires a much wider interpretation scope involving all kinds of theoretical models. This is a very active field, with close theory-experiment interaction. In particular, development of dedicated methodologies and tools is crucial for such scale of interpretation. Recently, a Forum was initiated to host discussions among LHC experimentalists and theorists on topics related to the BSM (re)interpretation of LHC data, and especially on the development of relevant interpretation tools and infrastructure: https://twiki.cern.ch/twiki/bin/view/LHCPhysics/InterpretingLHCresults Two meetings were held at CERN, where active discussions and concrete work on (re)interpretation methods and tools took place, with valuable cont...

  8. Detection of leaks in underground storage tanks using electrical resistance methods: 1996 results

    International Nuclear Information System (INIS)

    Ramirez, A.; Daily, W.

    1996-10-01

    This document provides a summary of a field experiment performed under a 15m diameter steel tank mockup located at the Hanford Reservation, Washington. The purpose of this test was to image a contaminant plume as it develops in soil under a tank already contaminated by previous leakage and to determine whether contaminant plumes can be detected without the benefit of background data. Measurements of electrical resistance were made before and during a salt water release. These measurements were made in soil which contained the remnants of salt water plumes released during previous tests in 1994 and in 1995. About 11,150 liters of saline solution were released along a portion of the tank's edge in 1996. Changes in electrical resistivity due to release of salt water conducted in 1996 were determined in two ways: (1) changes relative to the 1996 pre-spill data, and (2) changes relative to data collected near the middle of the 1996 spill after the release flow rate was increased. In both cases, the observed resistivity changes show clearly defined anomalies caused by the salt water release. These results indicate that when a plume develops over an existing plume and in a geologic environment similar to the test site environment, the resulting resistivity changes are easily detectable. Three dimensional tomographs of the resistivity of the soil under the tank show that the salt water release caused a region of low soil resistivity which can be observed directly without the benefit of comparing the tomograph to tomographs or data collected before the spill started. This means that it may be possible to infer the presence of pre-existing plumes if there is other data showing that the regions of low resistivity are correlated with the presence of contaminated soil. However, this approach does not appear reliable in defining the total extent of the plume due to the confounding effect that natural heterogeneity has on our ability to define the margins of the anomaly

  9. Exploration of a Method to Assess Children's Understandings of a Phenomenon after Viewing a Demonstration Show

    Science.gov (United States)

    DeKorver, Brittland K.; Choi, Mark; Towns, Marcy

    2017-01-01

    Chemical demonstration shows are a popular form of informal science education (ISE), employed by schools, museums, and other institutions in order to improve the public's understanding of science. Just as teachers employ formative and summative assessments in the science classroom to evaluate the impacts of their efforts, it is important to assess…

  10. Monte Carlo Method to Study Properties of Acceleration Factor Estimation Based on the Test Results with Varying Load

    Directory of Open Access Journals (Sweden)

    N. D. Tiannikova

    2014-01-01

    G.D. Kartashov has developed a technique for determining the functions that scale rapid-test results to the normal mode. Its feature is preliminary testing of products from one lot, including tests in alternating modes. The standard procedure for these preliminary tests is as follows: n groups of products with m elements in each are started in the normal mode and, after the failure of one product in a group, the remaining products of that group are tested in the accelerated mode. In addition to the tests in the alternating mode, tests in the constantly normal mode are conducted as well. The acceleration factor of rapid tests for this type of product, identical for any lot, is determined from such testing results for products from the same lot. A drawback of this technique is that the tests in the alternating mode have to be conducted until all products fail, which is not always possible. To avoid this shortcoming, the Renyi criterion is offered: it allows the scaling functions to be determined from right-censored data, giving the opportunity to stop testing before all products have failed. In this work, statistical modeling of the acceleration factor estimate obtained through Renyi statistics minimization is implemented by the Monte Carlo method. The results of the modeling show that this estimate is acceptable for rather large n, but for small sample volumes some systematic bias of the acceleration factor estimate, which decreases as n grows, is observed for both distributions (exponential and Weibull). Therefore the paper also presents calculated correction factors for the cases of the exponential distribution and the Weibull distribution.
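
    The kind of Monte Carlo bias study described above can be sketched as follows. The estimator used here (a simple ratio of mean lifetimes in the normal and accelerated modes, under exponential lifetimes and no censoring) is only a stand-in for the Renyi-statistic minimization of the paper, but it reproduces the qualitative observation that the bias shrinks as n grows.

        # Generic Monte Carlo sketch of how the bias of an acceleration-factor
        # estimator can be studied as a function of sample size. The estimator used
        # here (ratio of mean lifetimes in normal vs accelerated mode) is a simple
        # stand-in, not the Renyi-statistic minimization of the paper.
        import numpy as np

        def simulate_bias(true_factor=3.0, mean_normal=100.0, n_items=10,
                          n_trials=20000, seed=1):
            rng = np.random.default_rng(seed)
            normal = rng.exponential(mean_normal, size=(n_trials, n_items))
            accelerated = rng.exponential(mean_normal / true_factor,
                                          size=(n_trials, n_items))
            estimates = normal.mean(axis=1) / accelerated.mean(axis=1)
            return estimates.mean() - true_factor

        if __name__ == "__main__":
            # bias decreases as the number of items per group grows
            for n in (3, 5, 10, 50):
                print(f"n = {n:3d}  bias ≈ {simulate_bias(n_items=n):+.3f}")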

  11. Sampling methods for low-frequency electromagnetic imaging

    International Nuclear Information System (INIS)

    Gebauer, Bastian; Hanke, Martin; Schneider, Christoph

    2008-01-01

    For the detection of hidden objects by low-frequency electromagnetic imaging the linear sampling method works remarkably well despite the fact that the rigorous mathematical justification is still incomplete. In this work, we give an explanation for this good performance by showing that in the low-frequency limit the measurement operator fulfils the assumptions for the fully justified variant of the linear sampling method, the so-called factorization method. We also show how the method has to be modified in the physically relevant case of electromagnetic imaging with divergence-free currents. We present numerical results to illustrate our findings, and to show that similar performance can be expected for the case of conducting objects and layered backgrounds

  12. Comparative study on γ-ray spectrum by several filtering method

    International Nuclear Information System (INIS)

    Yuan Xinyu; Liu Liangjun; Zhou Jianliang

    2011-01-01

    A comparative study was conducted on the results of gamma-ray spectrum processing with several commonly applied smoothing methods, in order to show their filtering effects. The results showed that, with an energy-domain filter, peaks in the γ-ray spectrum were widened and overlapping peaks increased. In the frequency domain, the filter and its parameters must be chosen carefully. Wavelet transformation preserves the high-frequency components of the signal well. An improved threshold method combined the advantages of the hard- and soft-threshold methods, making it suitable for the detection of weak peaks. A new filter based on a gravity-model approach was also put forward, whose denoising level was controlled by the standard deviation. This method not only preserved the signal and the net peak areas well, but also attained better results and had a simple computer implementation. (authors)
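
    A hedged sketch of wavelet-threshold smoothing of a spectrum is shown below, using the PyWavelets package on a synthetic two-peak spectrum with Poisson-like noise. The wavelet, decomposition level and universal-threshold rule are illustrative choices, not the settings compared in the study.

        # Wavelet-threshold smoothing of a synthetic gamma-ray spectrum
        # (two Gaussian peaks plus Poisson-like noise). Requires PyWavelets.
        import numpy as np
        import pywt

        def wavelet_smooth(spectrum, wavelet="sym8", level=4, mode="soft"):
            coeffs = pywt.wavedec(spectrum, wavelet, level=level)
            # universal threshold estimated from the finest detail coefficients
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745
            thr = sigma * np.sqrt(2.0 * np.log(len(spectrum)))
            coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode=mode) for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)[: len(spectrum)]

        if __name__ == "__main__":
            x = np.arange(1024)
            clean = 50 * np.exp(-((x - 300) / 8.0) ** 2) + 30 * np.exp(-((x - 700) / 6.0) ** 2)
            noisy = np.random.default_rng(0).poisson(clean + 5).astype(float)
            smoothed = wavelet_smooth(noisy)
            print("residual std after soft thresholding:", np.std(smoothed - clean).round(2))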

  13. Interface matrix method in AFEN framework

    Energy Technology Data Exchange (ETDEWEB)

    Pogosbekyan, Leonid; Cho, Jin Young; Kim, Young Jin [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1998-12-31

    In this study, we extend the application of the interface-matrix (IM) method for reflector modeling to the Analytic Flux Expansion Nodal (AFEN) method. This includes modifying the surface-averaged net current continuity and the net leakage balance conditions of the IM method in accordance with the AFEN formulation. The AFEN-interface matrix (AFEN-IM) method has been tested against the ZION-1 benchmark problem. The numerical results of the AFEN-IM method show a maximum error of 1.24% and a root-mean-square error of 0.42% in the assembly power distribution, and an error of 0.006% Δk in the neutron multiplication factor. These results prove that the interface-matrix method for reflector modeling can be useful in the AFEN method. 3 refs., 4 figs. (Author)

  14. Interface matrix method in AFEN framework

    Energy Technology Data Exchange (ETDEWEB)

    Pogosbekyan, Leonid; Cho, Jin Young; Kim, Young Jin [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1997-12-31

    In this study, we extend the application of the interface-matrix (IM) method for reflector modeling to the Analytic Flux Expansion Nodal (AFEN) method. This includes modifying the surface-averaged net current continuity and the net leakage balance conditions of the IM method in accordance with the AFEN formulation. The AFEN-interface matrix (AFEN-IM) method has been tested against the ZION-1 benchmark problem. The numerical results of the AFEN-IM method show a maximum error of 1.24% and a root-mean-square error of 0.42% in the assembly power distribution, and an error of 0.006% Δk in the neutron multiplication factor. These results prove that the interface-matrix method for reflector modeling can be useful in the AFEN method. 3 refs., 4 figs. (Author)

  15. Ultrasonic Digital Communication System for a Steel Wall Multipath Channel: Methods and Results

    Energy Technology Data Exchange (ETDEWEB)

    Murphy, Timothy L. [Rensselaer Polytechnic Inst., Troy, NY (United States)

    2005-12-01

    As of the development of this thesis, no commercially available products have been identified for the digital communication of instrumented data across a thick (approx. 6 in.) steel wall using ultrasound. The specific goal of the current research is to investigate the application of methods for digital communication of instrumented data (i.e., temperature, voltage, etc.) across the wall of a steel pressure vessel. The acoustic transmission of data using ultrasonic transducers avoids the need to breach the wall of such a pressure vessel, which could ultimately affect its safety or lifespan, or void the homogeneity of an experiment under test. Actual digital communication paradigms are introduced and implemented for the successful dissemination of data across such a wall utilizing solely an acoustic ultrasonic link. The first, dubbed the "single-hop" configuration, can communicate bursts of digital data one way across the wall using the Differential Binary Phase-Shift Keying (DBPSK) modulation technique at rates up to 500 bps. The second, dubbed the "double-hop" configuration, transmits a carrier into the vessel, modulates it, and retransmits it externally. Using a pulsed carrier with Pulse Amplitude Modulation (PAM), this technique can communicate digital data at rates up to 500 bps. Using a CW carrier, Least Mean-Squared (LMS) adaptive interference suppression, and DBPSK, this method can communicate data at rates up to 5 kbps. A third technique, dubbed the "reflected-power" configuration, communicates digital data by modulating a pulsed carrier through varying the acoustic impedance at the internal transducer-wall interface. The paradigms of the latter two configurations are believed to be unique. All modulation methods are based on the premise that the wall cannot be breached in any way and can therefore be viably implemented with power delivered wirelessly through the acoustic channel using ultrasound. Methods
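
    The DBPSK scheme used in the single-hop configuration encodes each bit as a phase change between successive symbols, so the receiver only needs the sign of the correlation between neighbouring symbols. The numpy sketch below illustrates this on an additive-noise channel; the carrier frequency, symbol rate and channel model are arbitrary choices for the example, not the parameters of the ultrasonic system described above.

        # Minimal DBPSK modulator and differential detector. Parameters are
        # illustrative; the acoustic channel is modeled as simple additive noise.
        import numpy as np

        FS, FC, SPS = 48000, 6000, 96          # sample rate, carrier freq, samples/symbol

        def dbpsk_modulate(bits):
            # one reference symbol, then a pi phase change for every '1' bit
            phases = np.concatenate(([0.0], np.cumsum(np.where(np.asarray(bits) == 1, np.pi, 0.0))))
            phases = np.repeat(phases, SPS)
            t = np.arange(len(phases)) / FS
            return np.cos(2 * np.pi * FC * t + phases)

        def dbpsk_demodulate(signal, n_bits):
            t = np.arange(len(signal)) / FS
            baseband = signal * np.exp(-2j * np.pi * FC * t)          # mix down to baseband
            symbols = baseband.reshape(n_bits + 1, SPS).sum(axis=1)   # integrate each symbol
            diff = symbols[1:] * np.conj(symbols[:-1])                # symbol-to-symbol phase change
            return (diff.real < 0).astype(int)                        # pi shift -> bit 1

        if __name__ == "__main__":
            rng = np.random.default_rng(3)
            bits = rng.integers(0, 2, 32)
            rx = dbpsk_modulate(bits) + rng.normal(0, 0.3, (len(bits) + 1) * SPS)  # noisy channel
            print("bit errors:", int(np.sum(dbpsk_demodulate(rx, len(bits)) != bits)))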

  16. Comparative analyses reveal discrepancies among results of commonly used methods for Anopheles gambiae molecular form identification

    Directory of Open Access Journals (Sweden)

    Pinto João

    2011-08-01

    Background: Anopheles gambiae M and S molecular forms, the major malaria vectors in the Afro-tropical region, are undergoing a process of ecological diversification and adaptive lineage splitting, which is affecting malaria transmission and vector control strategies in West Africa. These two incipient species are defined on the basis of single nucleotide differences in the IGS and ITS regions of multicopy rDNA located on the X-chromosome. A number of PCR and PCR-RFLP approaches based on form-specific SNPs in the IGS region are used for M and S identification. Moreover, a PCR method to detect the M-specific insertion of a short interspersed transposable element (SINE200) has recently been introduced as an alternative identification approach. However, a large-scale comparative analysis of four widely used PCR or PCR-RFLP genotyping methods for M and S identification had never been carried out to evaluate whether they could be used interchangeably, as commonly assumed. Results: The genotyping of more than 400 A. gambiae specimens from nine African countries, and the sequencing of the IGS amplicon of 115 of them, highlighted discrepancies among the results obtained by the different approaches due to several kinds of bias, which may result in an overestimation of putative M/S hybrids, as follows: i) incorrect matching of the M- and S-specific primers used in the allele-specific PCR approach; ii) presence of polymorphisms in the recognition sequence of the restriction enzymes used in the PCR-RFLP approaches; iii) incomplete cleavage during the restriction reactions; iv) presence of different copy numbers of M- and S-specific IGS arrays in single individuals in areas of secondary contact between the two forms. Conclusions: The results reveal that the PCR and PCR-RFLP approaches most commonly utilized to identify A. gambiae M and S forms are not fully interchangeable, as usually assumed, and highlight limits of the actual definition of the two molecular forms, which might

  17. Reception of Talent Shows in Denmark: First Results from a Trans-National Audience Study of a Global Format Genre

    DEFF Research Database (Denmark)

    Jensen, Pia Majbritt

    This paper will discuss the methodology and present the preliminary findings of the Danish part of a trans-national, comparative audience study of the musical talent show genre undertaken in Denmark, Finland, Germany and Great Britain in Spring 2013. Within the international business model of format adaptation, the musical talent show genre has been particularly successful in crossing cultural borders. Formats such as Idols, X Factor and Voice have sold to a large variety of countries, covering all continents. Such global reach inevitably raises the question of the genre's audience appeal: to what degree its reach has to do with a universal appeal inherent in the genre and/or the innovative character of individual formats, and to what degree its global success is due to local broadcasters' ability to successfully adapt the formats to local audience tastes. A consensus has developed

  18. Developing a bone mineral density test result letter to send to patients: a mixed-methods study

    Directory of Open Access Journals (Sweden)

    Edmonds SW

    2014-06-01

    Stephanie W Edmonds,1,2 Samantha L Solimeo,3 Xin Lu,1 Douglas W Roblin,4,8 Kenneth G Saag,5 Peter Cram6,7 1Department of Internal Medicine, 2College of Nursing, University of Iowa, Iowa City, IA, USA; 3Center for Comprehensive Access and Delivery Research and Evaluation, Iowa City Veterans Affairs Health Care System, Iowa City, IA, USA; 4Kaiser Permanente of Atlanta, Atlanta, GA, USA; 5Department of Rheumatology, University of Alabama at Birmingham, Birmingham, AL, USA; 6Faculty of Medicine, University of Toronto, Toronto, ON, Canada; 7University Health Network and Mount Sinai Hospital, Toronto, ON, Canada; 8School of Public Health, Georgia State University, Atlanta, GA, USA. Purpose: To use a mixed-methods approach to develop a letter that can be used to notify patients of their bone mineral density (BMD) results by mail and that may activate patients in their bone-related health care. Patients and methods: A multidisciplinary team developed three versions of a letter for reporting BMD results to patients. Trained interviewers presented these letters in a random order to a convenience sample of adults, aged 50 years and older, at two different health care systems. We conducted structured interviews to examine the respondents’ preferences and comprehension of the various letters. Results: A total of 142 participants completed the interview. A majority of the participants were female (64.1%) and white (76.1%). A plurality of the participants identified a specific version of the three letters as both their preferred version (45.2%; P<0.001) and as the easiest to understand (44.6%; P<0.01). A majority of participants preferred that the letters include specific next steps for improving their bone health. Conclusion: Using a mixed-methods approach, we were able to develop and optimize a printed letter for communicating a complex test result (BMD) to patients. Our results may offer guidance to clinicians, administrators, and researchers who are

  19. Propagation of internal errors in explicit Runge–Kutta methods and internal stability of SSP and extrapolation methods

    KAUST Repository

    Ketcheson, David I.

    2014-04-11

    In practical computation with Runge--Kutta methods, the stage equations are not satisfied exactly, due to roundoff errors, algebraic solver errors, and so forth. We show by example that propagation of such errors within a single step can have catastrophic effects for otherwise practical and well-known methods. We perform a general analysis of internal error propagation, emphasizing that it depends significantly on how the method is implemented. We show that for a fixed method, essentially any set of internal stability polynomials can be obtained by modifying the implementation details. We provide bounds on the internal error amplification constants for some classes of methods with many stages, including strong stability preserving methods and extrapolation methods. These results are used to prove error bounds in the presence of roundoff or other internal errors.
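
    A toy experiment in the spirit of this analysis is sketched below: small random perturbations are injected into the stages of the classical RK4 method on a linear test problem, and the resulting error is compared with an unperturbed run. It is only meant to show that internal (stage) errors can dominate the final error; it does not reproduce the paper's internal stability polynomials or bounds.

        # Inject small perturbations into the stages of classical RK4 and observe
        # how they propagate into the global error on y' = -y, y(0) = 1.
        import numpy as np

        def rk4_step(f, t, y, h, stage_noise=0.0, rng=None):
            def perturb(k):
                return k + (rng.normal(0.0, stage_noise) if stage_noise else 0.0)
            k1 = perturb(f(t, y))
            k2 = perturb(f(t + h / 2, y + h / 2 * k1))
            k3 = perturb(f(t + h / 2, y + h / 2 * k2))
            k4 = perturb(f(t + h, y + h * k3))
            return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

        def integrate(stage_noise, h=1e-3, steps=5000, seed=0):
            rng = np.random.default_rng(seed)
            f = lambda t, y: -y          # simple linear test problem
            y = 1.0
            for i in range(steps):
                y = rk4_step(f, i * h, y, h, stage_noise, rng)
            return y

        if __name__ == "__main__":
            exact = np.exp(-5.0)         # h * steps = 5
            for noise in (0.0, 1e-12, 1e-8):
                print(f"stage noise {noise:.0e}: error = {abs(integrate(noise) - exact):.3e}")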

  20. First results of Minimum Fisher Regularisation as unfolding method for JET NE213 liquid scintillator neutron spectrometry

    International Nuclear Information System (INIS)

    Mlynar, Jan; Adams, John M.; Bertalot, Luciano; Conroy, Sean

    2005-01-01

    At JET, the NE213 liquid scintillator is being validated as a diagnostic tool for spectral measurements of neutrons emitted from the plasma. Neutron spectra have to be unfolded from the measured pulse-height spectra, which is an ill-conditioned problem. Therefore, the use of two independent unfolding methods allows for less ambiguity in the interpretation of the data. In parallel to the routine algorithm MAXED, based on the Maximum Entropy method, the Minimum Fisher Regularisation (MFR) method has been introduced at JET. The MFR method, known from two-dimensional tomography applications, has proved to provide a new transparent tool to validate the JET neutron spectra measured with the NE213 liquid scintillators. In this article, the MFR method as applied to spectrum unfolding is briefly explained. After a mention of MFR tests on phantom spectra, experimental neutron spectra are presented that were obtained by applying MFR to NE213 data in selected JET experiments. The results tend to confirm the MAXED observations
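
    Unfolding a spectrum from pulse-height data amounts to solving an ill-conditioned linear system with some form of regularization. The sketch below uses plain Tikhonov (smoothness) regularization on a synthetic response matrix as a simplified stand-in; it does not implement the Minimum Fisher Regularisation functional itself, nor MAXED.

        # Simplified illustration of spectrum unfolding as regularized least squares:
        # measured pulse-height spectrum = response matrix @ true spectrum + noise.
        # Tikhonov smoothing is used here as a stand-in for MFR.
        import numpy as np

        def unfold_tikhonov(response, measured, lam=1e-3):
            n = response.shape[1]
            D = np.diff(np.eye(n), n=2, axis=0)          # second-difference roughness penalty
            A = np.vstack([response, np.sqrt(lam) * D])
            b = np.concatenate([measured, np.zeros(D.shape[0])])
            phi, *_ = np.linalg.lstsq(A, b, rcond=None)
            return np.clip(phi, 0.0, None)               # enforce a non-negative spectrum

        if __name__ == "__main__":
            rng = np.random.default_rng(2)
            n_e, n_ch = 40, 80
            e = np.linspace(0, 1, n_e)                   # normalized energy axis
            c = np.linspace(0, 1, n_ch)                  # normalized channel axis
            response = np.exp(-((c[:, None] - e[None, :]) ** 2) / (2 * 0.05 ** 2))
            true_phi = np.exp(-0.5 * ((e - 0.4) / 0.05) ** 2) + 0.5 * np.exp(-0.5 * ((e - 0.7) / 0.03) ** 2)
            measured = response @ true_phi + rng.normal(0, 0.05, n_ch)
            est = unfold_tikhonov(response, measured)
            print("max abs unfolding error:", np.abs(est - true_phi).max().round(3))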

  1. Show Horse Welfare: Horse Show Competitors' Understanding, Awareness, and Perceptions of Equine Welfare.

    Science.gov (United States)

    Voigt, Melissa A; Hiney, Kristina; Richardson, Jennifer C; Waite, Karen; Borron, Abigail; Brady, Colleen M

    2016-01-01

    The purpose of this study was to gain a better understanding of stock-type horse show competitors' understanding of welfare and level of concern for stock-type show horses' welfare. Data were collected through an online questionnaire that included questions relating to (a) interest and general understanding of horse welfare, (b) welfare concerns of the horse show industry and specifically the stock-type horse show industry, (c) decision-making influences, and (d) level of empathic characteristics. The majority of respondents indicated they agree or strongly agree that physical metrics should be a factor when assessing horse welfare, while fewer agreed that behavioral and mental metrics should be a factor. Respondent empathy levels were moderate to high and were positively correlated with the belief that mental and behavioral metrics should be a factor in assessing horse welfare. Respondents indicated the inhumane practices that most often occur at stock-type shows include excessive jerking on reins, excessive spurring, and induced excessive unnatural movement. Additionally, respondents indicated association rules, hired trainers, and hired riding instructors are the most influential regarding the decisions they make related to their horses' care and treatment.

  2. Methods of dealing with co-products of biofuels in life-cycle analysis and consequent results within the U.S. context

    International Nuclear Information System (INIS)

    Wang, Michael; Huo Hong; Arora, Salil

    2011-01-01

    Products other than biofuels are produced in biofuel plants. For example, corn ethanol plants produce distillers' grains and solubles. Soybean crushing plants produce soy meal and soy oil, which is used for biodiesel production. Electricity is generated in sugarcane ethanol plants both for internal consumption and export to the electric grid. Future cellulosic ethanol plants could be designed to co-produce electricity with ethanol. It is important to take co-products into account in the life-cycle analysis of biofuels and several methods are available to do so. Although the International Standard Organization's ISO 14040 advocates the system boundary expansion method (also known as the 'displacement method' or the 'substitution method') for life-cycle analyses, application of the method has been limited because of the difficulty in identifying and quantifying potential products to be displaced by biofuel co-products. As a result, some LCA studies and policy-making processes have considered alternative methods. In this paper, we examine the available methods to deal with biofuel co-products, explore the strengths and weaknesses of each method, and present biofuel LCA results with different co-product methods within the U.S. context.

  3. Methods of dealing with co-products of biofuels in life-cycle analysis and consequent results within the U.S. context

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Michael, E-mail: mqwang@anl.gov [Center for Transportation Research, Argonne National Laboratory, Argonne, IL 60439 (United States); Huo Hong [Institute of Energy, Environment, and Economics, Tsinghua University, Beijing, 100084 (China); Arora, Salil [Center for Transportation Research, Argonne National Laboratory, Argonne, IL 60439 (United States)

    2011-10-15

    Products other than biofuels are produced in biofuel plants. For example, corn ethanol plants produce distillers' grains and solubles. Soybean crushing plants produce soy meal and soy oil, which is used for biodiesel production. Electricity is generated in sugarcane ethanol plants both for internal consumption and export to the electric grid. Future cellulosic ethanol plants could be designed to co-produce electricity with ethanol. It is important to take co-products into account in the life-cycle analysis of biofuels and several methods are available to do so. Although the International Standard Organization's ISO 14040 advocates the system boundary expansion method (also known as the 'displacement method' or the 'substitution method') for life-cycle analyses, application of the method has been limited because of the difficulty in identifying and quantifying potential products to be displaced by biofuel co-products. As a result, some LCA studies and policy-making processes have considered alternative methods. In this paper, we examine the available methods to deal with biofuel co-products, explore the strengths and weaknesses of each method, and present biofuel LCA results with different co-product methods within the U.S. context.

  4. Learning Method and Its Influence on Nutrition Study Results Throwing the Ball

    Science.gov (United States)

    Samsudin; Nugraha, Bayu

    2015-01-01

    This study aimed to determine the difference between the play-based learning method and the exploratory learning method with respect to learning outcomes for throwing the ball. In addition, this study also aimed to determine the effect of nutritional status on these two learning methods mentioned above. This research was conducted at SDN Cipinang Besar Selatan 16 Pagi, East…

  5. An adaptive method for γ spectra smoothing

    International Nuclear Information System (INIS)

    Xiao Gang; Zhou Chunlin; Li Tiantuo; Han Feng; Di Yuming

    2001-01-01

    An adaptive wavelet method and a multinomial-fitting gliding method are used for smoothing γ spectra, and the FWHM of the 1332 keV peak of 60Co and the activities of a 238U standard specimen are then calculated. The calculated results show that the adaptive wavelet method is better than the other

  6. Showing Value (Editorial)

    Directory of Open Access Journals (Sweden)

    Denise Koufogiannakis

    2009-06-01

    librarians on student achievement. Todd notes, “If we do not show value, we will not have a future. Evidence-based practice is not about the survival of school librarians, it’s about the survival of our students” (40). In this issue we feature school libraries and their connection to evidence based practice. Former Editor-in-Chief, Lindsay Glynn, began putting the wheels in motion for this feature almost a year ago. She invited Carol Gordon and Ross Todd to act as guest editors of the section, drawing upon their contacts and previous work in this field. The result is an issue with five feature articles exploring different aspects of the connection between school libraries and evidence based practice, from the theoretical to the practical. In addition, there is a thought-provoking Commentary by David Loertscher, asking whether we need the evolutionary model of evidence based practice, or something more revolutionary! In addition to the Feature section, we have a well-rounded issue with articles on the topics of library human resources, and the development of a scholars’ portal. As well, there are a record 10 evidence summaries and our educational EBL101 column. I hope there is something for everyone in this issue of EBLIP – enjoy, and see you soon in Stockholm!

  7. Joint hyperlaxity prevents relapses in clubfeet treated by Ponseti method-preliminary results.

    Science.gov (United States)

    Cosma, Dan Ionuţ; Corbu, Andrei; Nistor, Dan Viorel; Todor, Adrian; Valeanu, Madalina; Morcuende, Jose; Man, Sorin

    2018-05-07

    The aim of the study was to evaluate the role of joint hyperlaxity (by Beighton score) as a protective factor for clubfoot relapse. Patients with idiopathic clubfoot treated with the Ponseti method between January 2004 and December 2012, without other congenital foot deformity, and not previously treated by open surgery were included in either the Relapse group (n = 23) if it was a clubfoot relapse or the Control group (n = 19) if no relapse was noted. Joint laxity was evaluated using the Beighton score at the latest follow-up against the Normal group (n = 22, children matched by sex and age without clubfoot deformity). We found a significantly higher joint laxity in the Control group (4.58, 95% confidence interval [CI]: 2.1-7.06) as compared to the Relapse (3.17, 95% CI: 1.53-4.81, p = 0.032) and Normal (3.14, 95% CI: 1.78-4.5, p = 0.03) groups. The univariate logistic regression showed a 5.28-times increase in the risk of relapse for a Beighton score lower than 4/9 points (odds ratio = 5.28; 95% CI = 1.29-21.5; p = 0.018). Joint hyperlaxity could be a protective factor for clubfoot relapse.
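
    The odds ratio quoted above is the exponential of the slope in a univariate logistic regression. The sketch below shows that computation with statsmodels on synthetic relapse data (the predictor is an indicator for a Beighton score below 4/9); the numbers are simulated for illustration only and are unrelated to the study's patients.

        # Univariate logistic regression yielding an odds ratio with a 95% CI.
        # The data are synthetic, generated only to demonstrate the computation.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(7)
        n = 200
        low_beighton = rng.integers(0, 2, n)              # 1 = Beighton score < 4/9
        # simulate a higher relapse probability when joint laxity is low
        p_relapse = np.where(low_beighton == 1, 0.45, 0.15)
        relapse = rng.binomial(1, p_relapse)

        X = sm.add_constant(low_beighton.astype(float))
        fit = sm.Logit(relapse, X).fit(disp=0)
        odds_ratio = np.exp(fit.params[1])
        ci_low, ci_high = np.exp(fit.conf_int()[1])
        print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")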

  8. Dimensionality Reduction Methods: Comparative Analysis of methods PCA, PPCA and KPCA

    Directory of Open Access Journals (Sweden)

    Jorge Arroyo-Hernández

    2016-01-01

    Dimensionality reduction methods are algorithms that map a data set into subspaces of fewer dimensions derived from the original space, allowing a description of the data at a lower cost. Due to their importance, they are widely used in processes associated with machine learning. This article presents a comparative analysis of the PCA, PPCA and KPCA dimensionality reduction methods. A reconstruction experiment on worm-shape data was performed using structures of landmarks located on the body contour, with the methods using different numbers of principal components. The results showed that all methods can be seen as alternative processes. Nevertheless, thanks to its potential for analysis in the feature space and the presented method for calculating its preimage, KPCA offers a better method for recognition processes and pattern extraction
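
    A minimal scikit-learn sketch of the comparison is shown below: both PCA and kernel PCA (with a learned preimage, via fit_inverse_transform) reconstruct a noisy curved 2-D data set from a single component. The synthetic curve merely stands in for the landmark-based worm-shape data of the study, and the kernel and its parameters are illustrative.

        # Reconstruction with PCA vs kernel PCA on a synthetic curved data set.
        import numpy as np
        from sklearn.decomposition import PCA, KernelPCA

        rng = np.random.default_rng(0)
        t = rng.uniform(0, 2 * np.pi, 300)
        X = np.column_stack([t, np.sin(t)]) + rng.normal(0, 0.05, (300, 2))  # curved data

        pca = PCA(n_components=1).fit(X)
        X_pca = pca.inverse_transform(pca.transform(X))

        kpca = KernelPCA(n_components=1, kernel="rbf", gamma=0.5,
                         fit_inverse_transform=True).fit(X)
        X_kpca = kpca.inverse_transform(kpca.transform(X))

        for name, rec in [("PCA", X_pca), ("KPCA", X_kpca)]:
            err = np.mean(np.linalg.norm(X - rec, axis=1))
            print(name, "mean reconstruction error:", round(float(err), 4))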

  9. Comparison result of inversion of gravity data of a fault by particle swarm optimization and Levenberg-Marquardt methods.

    Science.gov (United States)

    Toushmalani, Reza

    2013-01-01

    The purpose of this study was to compare the performance of two methods for the gravity inversion of a fault. The first method, particle swarm optimization (PSO), is a heuristic global optimization algorithm based on swarm intelligence; it originates from research on the movement behavior of bird flocks and fish schools. The second method, the Levenberg-Marquardt algorithm (LM), is an approximation to Newton's method that is also used for training ANNs. In this paper we first discuss the gravity field of a fault, then describe the PSO and LM algorithms, and present the application of the Levenberg-Marquardt algorithm and a particle swarm algorithm to solving the inverse problem of a fault. Most importantly, the parameters of the algorithms are given for the individual tests. The inverse solution reveals that the fault model parameters agree quite well with the known results. Better agreement between the predicted model anomaly and the observed gravity anomaly has been found with the PSO method than with the LM method.
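
    A minimal particle swarm optimizer applied to a least-squares misfit is sketched below. The two-parameter arctangent forward model is only a toy stand-in for the gravity response of a fault, and the swarm hyperparameters are illustrative rather than those used in the study.

        # Minimal particle swarm optimizer minimizing a least-squares misfit
        # between a toy forward model and synthetic "observed" data.
        import numpy as np

        def pso(misfit, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds).T
            x = rng.uniform(lo, hi, (n_particles, len(lo)))
            v = np.zeros_like(x)
            pbest, pbest_val = x.copy(), np.array([misfit(p) for p in x])
            gbest = pbest[pbest_val.argmin()].copy()
            for _ in range(n_iter):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, lo, hi)
                vals = np.array([misfit(p) for p in x])
                improved = vals < pbest_val
                pbest[improved], pbest_val[improved] = x[improved], vals[improved]
                gbest = pbest[pbest_val.argmin()].copy()
            return gbest

        if __name__ == "__main__":
            xs = np.linspace(-5, 5, 50)
            true = np.array([2.0, 1.5])                      # toy "depth" and "amplitude"
            forward = lambda p: p[1] * np.arctan(xs / p[0])  # toy anomaly curve
            data = forward(true)
            misfit = lambda p: np.sum((forward(p) - data) ** 2)
            print(pso(misfit, bounds=[(0.1, 10.0), (0.1, 10.0)]).round(3))  # ~[2.0, 1.5]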

  10. Arsenic absorption by members of the Brassicacea family, analysed by neutron activation, k0-method - preliminary results

    Energy Technology Data Exchange (ETDEWEB)

    Uemura, George; Matos, Ludmila Vieira da Silva; Silva, Maria Aparecida da; Ferreira, Alexandre Santos Martorano; Menezes, Maria Angela de Barros Correia [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN-CNEN/MG), Belo Horizonte, MG (Brazil)], e-mail: george@cdtn.br, e-mail: menezes@cdtn.br

    2009-07-01

    Natural arsenic contamination is a cause for concern in many countries of the world, including Argentina, Bangladesh, Chile, China, India, Mexico, Thailand and the United States of America, and also in Brazil, especially in the Iron Quadrangle area, where mining activities have contributed to aggravating the natural contamination. Brassicaceae is a plant family with edible species (arugula, cabbage, cauliflower, cress, kale, mustard, radish), ornamental ones (alyssum, field pennycress, ornamental cabbages and kales) and some species (Indian mustard, field pennycress) known as accumulators of metals and metalloids such as chromium, nickel, and arsenic. The present work aimed at studying other taxa of the Brassicaceae family to verify their capability of absorbing arsenic, under controlled conditions, for possible utilisation in remediation activities. The analytical method chosen was neutron activation analysis, k0 method, a routine technique at CDTN that is also very appropriate for arsenic studies. To avoid possible interference from solid substrates, like sand or vermiculite, attempts were made to keep the specimens in 1/4 Murashige and Skoog basal salt solution (M and S). Growth was stunted and the plants withered and perished, showing that modifications to M and S had to be made. The addition of nickel and silicon allowed normal growth of the plant specimens for periods longer than usually achieved (more than two months), yielding samples large enough for further studies with other techniques, like ICP-MS, and other targets, like speciation studies. The results of arsenic absorption are presented here and the need for nickel and silicon in the composition of M and S is discussed. (author)

  11. Financial time series analysis based on information categorization method

    Science.gov (United States)

    Tian, Qiang; Shang, Pengjian; Feng, Guochen

    2014-12-01

    The paper applies the information categorization method to the analysis of financial time series. The method examines the similarity of different sequences by calculating the distances between them. We apply it to quantify the similarity of different stock markets and report results for the US and Chinese stock markets in the periods 1991-1998 (before the Asian currency crisis), 1999-2006 (after the Asian currency crisis and before the global financial crisis), and 2007-2013 (during and after the global financial crisis). The results show how the similarity between the markets differs across time periods and that the similarity of the two stock markets becomes larger after these two crises. We also obtain similarity results for 10 stock indices in three areas, which shows that the method can distinguish markets from different areas in the resulting phylogenetic trees. The results show that satisfactory information can be extracted from financial markets by this method, which can be applied not only to physiologic time series but also to financial time series.
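
    A minimal sketch of the distance-then-tree idea, with random data standing in for index returns and a plain Euclidean distance standing in for the paper's information-categorization distance (both are assumptions of this sketch):

        # Minimal sketch: pairwise distances between return series and an
        # agglomerative tree built from the resulting distance matrix.
        import numpy as np
        from scipy.spatial.distance import pdist, squareform
        from scipy.cluster.hierarchy import linkage, dendrogram

        rng = np.random.default_rng(2)
        names = ["S&P500", "DJIA", "NASDAQ", "SSE", "SZSE"]
        returns = rng.normal(size=(5, 1000))            # placeholder daily returns

        # Euclidean distance between sorted return profiles as a crude similarity measure.
        profiles = np.sort(returns, axis=1)
        condensed = pdist(profiles, metric="euclidean")
        D = squareform(condensed)                       # full 5x5 distance matrix
        tree = linkage(condensed, method="average")     # "phylogenetic" tree structure

        print(names)
        print(np.round(D, 2))
        # dendrogram(tree, labels=names)                # visualize with matplotlib if desired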

  12. Non-asthmatic patients show increased exhaled nitric oxide concentrations

    Directory of Open Access Journals (Sweden)

    Beatriz M. Saraiva-Romanholo

    2009-01-01

    Full Text Available OBJECTIVE: Evaluate whether exhaled nitric oxide may serve as a marker of intraoperative bronchospasm. INTRODUCTION: Intraoperative bronchospasm remains a challenging event during anesthesia. Previous studies in asthmatic patients suggest that exhaled nitric oxide may represent a noninvasive measure of airway inflammation. METHODS: A total of 146,358 anesthesia information forms, which were received during the period from 1999 to 2004, were reviewed. Bronchospasm was registered on 863 forms. From those, three groups were identified: 9 non-asthmatic patients (Bronchospasm group), 12 asthmatics (Asthma group) and 10 subjects with no previous airway disease or symptoms (Control group). All subjects were submitted to exhaled nitric oxide measurements (parts/billion), spirometry and the induced sputum test. The data were compared by ANOVA followed by the Tukey test and Kruskal-Wallis followed by Dunn's test. RESULTS: The normal lung function test results for the Bronchospasm group were different from those of the Asthma group (p < 0.05). The median percentage of eosinophils in induced sputum was higher for the Asthma group [2.46 (0.45-6.83)] compared with either the Bronchospasm [0.55 (0-1.26)] or the Control group [0.0 (0)] (p < 0.05); exhaled nitric oxide followed a similar pattern for the Asthma [81.55 (57.6-86.85)], Bronchospasm [46.2 (42.0-62.6)] and Control groups [18.7 (16.0-24.7)] (p < 0.05). CONCLUSIONS: Non-asthmatic patients with intraoperative bronchospasm detected during anesthesia and endotracheal intubation showed increased exhaled nitric oxide.

  13. Left and right brain-oriented hemisity subjects show opposite behavioral preferences

    Directory of Open Access Journals (Sweden)

    Bruce Eldine Morton

    2012-11-01

    Full Text Available Introduction: Recently, three independent, intercorrelated biophysical measures have provided the first quantitative measures of a binary form of behavioral laterality called hemisity, a term referring to inherent opposite right or left brain-oriented differences in thinking and behavioral styles. Crucially, the right or left brain-orientation of individuals assessed by these methods was later found to be essentially congruent with the thicker side of their ventral gyrus of the anterior cingulate cortex (vgACC), as revealed by a 3-minute MRI procedure. Laterality of this putative executive structural element has thus become the primary standard defining individual hemisity. Methods: Here, the behavior of 150 subjects, whose hemisity had been calibrated by MRI, was assessed using five MRI-calibrated preference questionnaires, two of which were new. Results: Right and left brain-oriented subjects selected opposite answers (p < 0.05) for 47 of the 107 either-or, forced-choice preference questionnaire items. Hemisity subtype preference differences were present in several areas: a. logical orientation, b. type of consciousness, c. fear level and sensitivity, d. social-professional orientation, and e. pair bonding-spousal dominance style. Conclusions: The right and left brain-oriented hemisity subtype subjects, sorted anatomically according to which side of the brain their vgACC was thicker, showed numerous significant differences in their either-or type of behavioral preferences.

  14. Evaluation and Comparison of the Processing Methods of Airborne Gravimetry Concerning the Errors Effects on Downward Continuation Results: Case Studies in Louisiana (USA) and the Tibetan Plateau (China).

    Science.gov (United States)

    Zhao, Qilong; Strykowski, Gabriel; Li, Jiancheng; Pan, Xiong; Xu, Xinyu

    2017-05-25

    Gravity data gaps in mountainous areas are nowadays often filled in with data from airborne gravity surveys. Because of errors caused by the airborne gravimeter sensors and by rough flight conditions, such errors cannot be completely eliminated; the precision of the gravity disturbances generated by airborne gravimetry is around 3-5 mGal. A major obstacle in using airborne gravimetry is the error caused by the downward continuation. To improve the results, external high-accuracy gravity information, e.g. from surface data, can be used for high-frequency correction, while satellite information can be applied for low-frequency correction. Surface data may be used to reduce the systematic errors, while regularization methods can reduce the random errors in downward continuation. Airborne gravity surveys are sometimes conducted in mountainous areas, and the most extreme area of the world for this type of survey is the Tibetan Plateau. Since no high-accuracy surface gravity data are available for this area, the above error minimization method involving external gravity data cannot be used. We propose a semi-parametric downward continuation method in combination with regularization to suppress the systematic and random error effects in the Tibetan Plateau, i.e., without the use of external high-accuracy gravity data. We use a Louisiana airborne gravity dataset from the USA National Oceanic and Atmospheric Administration (NOAA) to demonstrate that the new method works effectively. Furthermore, for the Tibetan Plateau we show that the numerical experiment is also successfully conducted using synthetic Earth Gravitational Model 2008 (EGM08)-derived gravity data contaminated with synthetic errors. The estimated systematic errors generated by the method are close to the simulated values. In addition, we study the relationship between the downward continuation altitudes and the error effect. The

  15. Evaluation and Comparison of the Processing Methods of Airborne Gravimetry Concerning the Errors Effects on Downward Continuation Results: Case Studies in Louisiana (USA) and the Tibetan Plateau (China)

    Science.gov (United States)

    Zhao, Q.

    2017-12-01

    Gravity data gaps in mountainous areas are nowadays often filled in with data from airborne gravity surveys. Because of errors caused by the airborne gravimeter sensors and by rough flight conditions, such errors cannot be completely eliminated; the precision of the gravity disturbances generated by airborne gravimetry is around 3-5 mGal. A major obstacle in using airborne gravimetry is the error caused by the downward continuation. To improve the results, external high-accuracy gravity information, e.g. from surface data, can be used for high-frequency correction, while satellite information can be applied for low-frequency correction. Surface data may be used to reduce the systematic errors, while regularization methods can reduce the random errors in downward continuation. Airborne gravity surveys are sometimes conducted in mountainous areas, and the most extreme area of the world for this type of survey is the Tibetan Plateau. Since no high-accuracy surface gravity data are available for this area, the above error minimization method involving external gravity data cannot be used. We propose a semi-parametric downward continuation method in combination with regularization to suppress the systematic and random error effects in the Tibetan Plateau, i.e., without the use of external high-accuracy gravity data. We use a Louisiana airborne gravity dataset from the USA National Oceanic and Atmospheric Administration (NOAA) to demonstrate that the new method works effectively. Furthermore, for the Tibetan Plateau we show that the numerical experiment is also successfully conducted using synthetic Earth Gravitational Model 2008 (EGM08)-derived gravity data contaminated with synthetic errors. The estimated systematic errors generated by the method are close to the simulated values. In addition, we study the relationship between the downward continuation altitudes and the error effect. The

  16. SOD1 aggregation in ALS mice shows simplistic test tube behavior.

    Science.gov (United States)

    Lang, Lisa; Zetterström, Per; Brännström, Thomas; Marklund, Stefan L; Danielsson, Jens; Oliveberg, Mikael

    2015-08-11

    A longstanding challenge in studies of neurodegenerative disease has been that the pathologic protein aggregates in live tissue are not amenable to structural and kinetic analysis by conventional methods. The situation is put in focus by the current progress in demarcating protein aggregation in vitro, exposing new mechanistic details that are now calling for quantitative in vivo comparison. In this study, we bridge this gap by presenting a direct comparison of the aggregation kinetics of the ALS-associated protein superoxide dismutase 1 (SOD1) in vitro and in transgenic mice. The results based on tissue sampling by quantitative antibody assays show that the SOD1 fibrillation kinetics in vitro mirror with remarkable accuracy the spinal cord aggregate buildup and disease progression in transgenic mice. This similarity between in vitro and in vivo data suggests that, despite the complexity of live tissue, SOD1 aggregation follows robust and simplistic rules, providing new mechanistic insights into the ALS pathology and organism-level manifestation of protein aggregation phenomena in general.

  17. A simple method for analyzing exome sequencing data shows distinct levels of nonsynonymous variation for human immune and nervous system genes.

    Directory of Open Access Journals (Sweden)

    Jan Freudenberg

    Full Text Available To measure the strength of natural selection that acts upon single nucleotide variants (SNVs in a set of human genes, we calculate the ratio between nonsynonymous SNVs (nsSNVs per nonsynonymous site and synonymous SNVs (sSNVs per synonymous site. We transform this ratio with a respective factor f that corrects for the bias of synonymous sites towards transitions in the genetic code and different mutation rates for transitions and transversions. This method approximates the relative density of nsSNVs (rdnsv in comparison with the neutral expectation as inferred from the density of sSNVs. Using SNVs from a diploid genome and 200 exomes, we apply our method to immune system genes (ISGs, nervous system genes (NSGs, randomly sampled genes (RSGs, and gene ontology annotated genes. The estimate of rdnsv in an individual exome is around 20% for NSGs and 30-40% for ISGs and RSGs. This smaller rdnsv of NSGs indicates overall stronger purifying selection. To quantify the relative shift of nsSNVs towards rare variants, we next fit a linear regression model to the estimates of rdnsv over different SNV allele frequency bins. The obtained regression models show a negative slope for NSGs, ISGs and RSGs, supporting an influence of purifying selection on the frequency spectrum of segregating nsSNVs. The y-intercept of the model predicts rdnsv for an allele frequency close to 0. This parameter can be interpreted as the proportion of nonsynonymous sites where mutations are tolerated to segregate with an allele frequency notably greater than 0 in the population, given the performed normalization of the observed nsSNV to sSNV ratio. A smaller y-intercept is displayed by NSGs, indicating more nonsynonymous sites under strong negative selection. This predicts more monogenically inherited or de-novo mutation diseases that affect the nervous system.
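
    A minimal numerical sketch of the rdnsv calculation and the regression over allele-frequency bins described above; the site totals, SNV counts and the correction factor f below are made up for illustration and are not the study's values.

        # Minimal sketch: rdnsv = f * (nsSNVs per nonsynonymous site) / (sSNVs per
        # synonymous site), then a linear fit of rdnsv against allele-frequency bin
        # whose y-intercept approximates rdnsv near frequency 0.
        import numpy as np

        f = 1.0                                 # placeholder correction factor
        ns_sites, s_sites = 2.0e7, 0.8e7        # hypothetical totals of nonsyn./syn. sites

        freq    = np.array([0.05, 0.15, 0.25, 0.35, 0.45])   # bin midpoints
        ns_snvs = np.array([5200, 2100, 1300,  900,  700])   # hypothetical counts per bin
        s_snvs  = np.array([2600, 1500, 1100,  850,  700])

        rdnsv = f * (ns_snvs / ns_sites) / (s_snvs / s_sites)

        slope, intercept = np.polyfit(freq, rdnsv, deg=1)
        print(f"slope = {slope:.3f}, y-intercept (rdnsv near frequency 0) = {intercept:.3f}")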

  18. INTERDISCIPLINARITY IN PUBLIC SPACE PARTICIPATIVE PROJECTS: METHODS AND RESULTS IN PRACTICE AND TEACHING

    Directory of Open Access Journals (Sweden)

    Pedro Brandão

    2015-06-01

    In the development of design practice and studio teaching methods, we shall see in this paper how interdisciplinary approaches correspond to new and complex urban transformations, focusing on the importance of actors' interaction processes, combining professional and non-professional knowledge and theory-practice relations. We therefore aim at deepening the public space area of knowledge under the growing complexity of urban life, and see it as a base for the further development of collaborative projects and their implications for community empowerment and urban governance at the local level. The motivations for this line of work persist in several ongoing research projects, which aim to: understand public space as a cohesion factor both in urban life and urban form; manage processes and strategies as elements of urban transformation; stimulate the understanding of actors' roles in urban design practices; and favour the questioning of emerging aspects of urban space production. The paper presents and analyses processes, methods and results from civic participation projects developed in the neighbourhood of Barò de Viver (Barcelona) and in the District of Marvila (Lisbon). In the first case, a long process initiated in 2004 and partially completed in 2011, neighbours developed the projects "Memory Wall" and Ciutat d'Asuncion Promenade as part of identity construction in public space, in collaboration with a team of facilitators from the CrPolis group. In the second case, different participatory processes dating from 2001 and 2003 resulted in the implementation of a specific urban identity brand and communication system, with an ongoing project of "maps" construction according to the neighbours' perception and representation systems. We may conclude that processes of urban governance require more active participation of citizens in projects regarding the improvement of quality of life. At the same time, the implementation of these processes requires a clear

  19. A novel method for assessing elbow pain resulting from epicondylitis

    Science.gov (United States)

    Polkinghorn, Bradley S.

    2002-01-01

    Abstract Objective To describe a novel orthopedic test (Polk's test) which can assist the clinician in differentiating between medial and lateral epicondylitis, 2 of the most common causes of elbow pain. This test has not been previously described in the literature. Clinical Features The testing procedure described in this paper is easy to learn, simple to perform and may provide the clinician with a quick and effective method of differentiating between lateral and medial epicondylitis. The test also helps to elucidate normal activities of daily living that the patient may unknowingly be performing on a repetitive basis that are hindering recovery. The results of this simple test allow the clinician to make immediate lifestyle recommendations to the patient that should improve and hasten the response to subsequent treatment. It may be used in conjunction with other orthopedic testing procedures, as it correlates well with other clinical tests for assessing epicondylitis. Conclusion The use of Polk's Test may help the clinician to diagnostically differentiate between lateral and medial epicondylitis, as well as supply information relative to choosing proper instructions for the patient to follow as part of their treatment program. Further research, performed in an academic setting, should prove helpful in more thoroughly evaluating the merits of this test. In the meantime, clinical experience over the years suggests that the practicing physician should find a great deal of clinical utility in utilizing this simple, yet effective, diagnostic procedure. PMID:19674572

  20. AREVA: Operating performance shows distinct improvement; Results heavily impacted by the cost of remedial measures

    International Nuclear Information System (INIS)

    2016-01-01

    The 2015 results illustrate the progress AREVA made in 2015 and open up favorable prospects for 2016 and the following years in view of its fundamentals. The group's competitiveness plan had a very positive impact on its costs and cash, despite the continuing heavy net loss and a market environment that remained difficult in 2015. Half of this loss of 2 billion euros is due to additional provisions for OL3 and half to provisions for restructuring and impairment related to market conditions. Concerning the group's liquidity, 2016 is funded, and the capital increase to be launched in the coming months will enable AREVA to gradually regain a positive profile. A new phase awaits the group in 2016, with clarity and confidence in the implementation of the restructuring announced in 2015, in particular the autonomy of AREVA NP and the creation of New AREVA.

  1. Analysis of Highly Nonlinear Oscillation System Using He's Max-Min Method and Comparison with Homotopy Analysis Method and Energy Balance Methods

    DEFF Research Database (Denmark)

    Ibsen, Lars Bo; Barari, Amin; Kimiaeifar, Amin

    2010-01-01

    of calculations. Results obtained by max–min are compared with the Homotopy Analysis Method (HAM), the energy balance method and the numerical solution, and it is shown that a single term is enough to obtain a highly accurate result, in contrast to HAM with just one term in the series solution. Finally, the phase plane is plotted and discussed to show the stability of the systems.

  2. Analysis of risk of nonconformities and applied quality inspection methods in the process of aluminium profiles coating based on FMEA results

    OpenAIRE

    Krzysztof Knop

    2017-01-01

    The article presents the results of a risk analysis of nonconformities of aluminium profiles in the coating process and of the quality inspection methods used for their detection. The risk analysis was based on the results of the FMEA method. The quality inspection methods evaluated were distinguished according to the definition of inspection in the ISO 9000:2005 standard. The manufacturing process of an aluminium profile is presented from a micro-technological perspective. Triple quantification of nonconformities risk b...

  3. Energy Conservation Program Evaluation : Practical Methods, Useful Results : Proceedings of the 1987 Conference.

    Energy Technology Data Exchange (ETDEWEB)

    Argonne National Laboratory; International Conference on Energy Conservation Program Evaluation (3rd : 1987 : Chicago, ILL.)

    1987-01-01

    The success of cutting-edge evaluation methodologies depends on our ability to merge, manage, and maintain huge amounts of data. Equally important is presenting results of the subsequent analysis in a meaningful way. These topics are addressed at this session. The considerable amounts of data that have been collected about energy conservation programs are rarely used by other researchers, either because they are not available in computerized form or, if they are, because of the difficulties of interpreting someone else's data, format inconsistencies, incompatibility of computers, lack of documentation, data entry errors, and obtaining data use agreements. Even census, RECS, and AHS data can be best used only by a researcher who is intimately familiar with them. Once the data have been accessed and analyzed, the results need to be put in a format that can be readily understood by others. This is a particularly difficult task when submetered data is the basis of the analysis. Stoops and Gilbride will demonstrate their methods of using off-the-shelf graphics software to illustrate complex hourly data from nonresidential buildings.

  4. Further results for crack-edge mappings by ray methods

    International Nuclear Information System (INIS)

    Norris, A.N.; Achenbach, J.D.; Ahlberg, L.; Tittman, B.R.

    1984-01-01

    This chapter discusses further extensions of the local edge mapping method to the pulse-echo case and to configurations of water-immersed specimens and transducers. Crack edges are mapped by the use of arrival times of edge-diffracted signals. Topics considered include local edge mapping in a homogeneous medium, local edge mapping algorithms, local edge mapping through an interface, and edge mapping through an interface using synthetic data. Local edge mapping is iterative, with two or three iterations required for convergence

  5. Changes of forest stands vulnerability to future wind damage resulting from different management methods

    DEFF Research Database (Denmark)

    Panferov, O.; Sogachev, Andrey; Ahrends, B.

    2010-01-01

    The structure of forest stands changes continuously as a result of forest growth and of both natural and anthropogenic disturbances like windthrow or management activities (planting/cutting of trees). These structure changes can stabilize or destabilize forest stands in terms of their resistance to wind damage. The driving force behind the damage is the climate, but the magnitude and sign of the resulting effect depend on tree species, management method and soil conditions. The projected increasing frequency of weather extremes in general and severe storms in particular might produce wide-area damage in European forest ecosystems during the 21st century. To assess the possible wind damage and the stabilization/destabilization effects of forest management, a number of numeric experiments are carried out for the region of Solling, Germany. The coupled small-scale process-based model combining Brook90

  6. Elderly individuals with increased risk of falls show postural balance impairment

    Directory of Open Access Journals (Sweden)

    Márcio Rogério de Oliveira

    Full Text Available Introduction Falls are a serious public health problem. Objective The aim of this study was to evaluate whether elderly individuals with increased risk of falls have a postural balance deficit, evaluated using a force platform during a one-leg stance. Materials and methods The sample consisted of 94 physically independent elderly individuals from the EELO project. The instruments used were the Downton scale, in order to assess the risk as well as the history of falls, and the force platform to measure postural balance through parameters from the center of pressure (COP). Results Elderly individuals were split into two groups according to the score observed with the Downton scale: G1 — low fall risk (score ≤ 2) — and G2 — high fall risk (score > 2). No differences were observed between the groups concerning gender (P > 0.05, Chi Square test). On the other hand, individuals from G2 showed postural instability when compared to individuals from G1, and individuals from G2 showed higher values in all COP parameters analysed (Mann-Whitney test, P < 0.05). Conclusion It can be concluded that the Downton scale has sensitivity for identifying individuals with balance impairment as well as a risk of falls. Therefore, it may be suggested that this scale may be useful in primary health care for detecting falls in the elderly.

  7. Level-set-based reconstruction algorithm for EIT lung images: first clinical results.

    Science.gov (United States)

    Rahmati, Peyman; Soleimani, Manuchehr; Pulletz, Sven; Frerichs, Inéz; Adler, Andy

    2012-05-01

    We show the first clinical results using the level-set-based reconstruction algorithm for electrical impedance tomography (EIT) data. The level-set-based reconstruction method (LSRM) allows the reconstruction of non-smooth interfaces between image regions, which are typically smoothed by traditional voxel-based reconstruction methods (VBRMs). We develop a time difference formulation of the LSRM for 2D images. The proposed reconstruction method is applied to reconstruct clinical EIT data of a slow flow inflation pressure-volume manoeuvre in lung-healthy and adult lung-injury patients. Images from the LSRM and the VBRM are compared. The results show comparable reconstructed images, but with an improved ability to reconstruct sharp conductivity changes in the distribution of lung ventilation using the LSRM.

  8. Level-set-based reconstruction algorithm for EIT lung images: first clinical results

    International Nuclear Information System (INIS)

    Rahmati, Peyman; Adler, Andy; Soleimani, Manuchehr; Pulletz, Sven; Frerichs, Inéz

    2012-01-01

    We show the first clinical results using the level-set-based reconstruction algorithm for electrical impedance tomography (EIT) data. The level-set-based reconstruction method (LSRM) allows the reconstruction of non-smooth interfaces between image regions, which are typically smoothed by traditional voxel-based reconstruction methods (VBRMs). We develop a time difference formulation of the LSRM for 2D images. The proposed reconstruction method is applied to reconstruct clinical EIT data of a slow flow inflation pressure–volume manoeuvre in lung-healthy and adult lung-injury patients. Images from the LSRM and the VBRM are compared. The results show comparable reconstructed images, but with an improved ability to reconstruct sharp conductivity changes in the distribution of lung ventilation using the LSRM. (paper)

  9. Auditory temporal-order thresholds show no gender differences

    NARCIS (Netherlands)

    van Kesteren, Marlieke T. R.; Wiersinga-Post, J. Esther C.

    2007-01-01

    Purpose: Several studies on auditory temporal-order processing showed gender differences. Women needed longer inter-stimulus intervals than men when indicating the temporal order of two clicks presented to the left and right ear. In this study, we examined whether we could reproduce these results in

  10. Auditory temporal-order thresholds show no gender differences

    NARCIS (Netherlands)

    van Kesteren, Marlieke T R; Wiersinga-Post, J Esther C

    2007-01-01

    PURPOSE: Several studies on auditory temporal-order processing showed gender differences. Women needed longer inter-stimulus intervals than men when indicating the temporal order of two clicks presented to the left and right ear. In this study, we examined whether we could reproduce these results in

  11. Ammonia synthesis using magnetic induction method (MIM)

    Science.gov (United States)

    Puspitasari, P.; Razak, J. Abd; Yahya, N.

    2012-09-01

    The most challenging issue in ammonia synthesis is obtaining a high yield. A new approach to ammonia synthesis using the Magnetic Induction Method (MIM) and Helmholtz coils has been proposed. Ammonia detection was performed using the Kjeldahl method and FTIR. The system was designed using AutoCAD software. The magnetic field of the MIM was varied from 100 mT to 200 mT and the magnetic field of the Helmholtz coils was 14 mT. The FTIR result shows that ammonia was successfully formed, with stretching peaks at 1097, 1119, 1162, 1236, 1377, and 1464 cm-1. The UV-VIS result shows the ammonia absorption at a wavelength of 195 nm. The ammonia yield increased to 244.72 μmol/g·h when using the MIM with six pairs of Helmholtz coils. Therefore, this new method is a promising route to high-yield ammonia at ambient conditions (25 °C and 1 atm) under the Magnetic Induction Method (MIM).

  12. A validation of direct grey Dancoff factors results for cylindrical cells in cluster geometry by the Monte Carlo method

    International Nuclear Information System (INIS)

    Rodrigues, Leticia Jenisch; Bogado, Sergio; Vilhena, Marco T.

    2008-01-01

    The WIMS code is well known and one of the most widely used codes for nuclear core physics calculations. Recently, the PIJM module of the WIMS code was modified in order to allow the calculation of Grey Dancoff factors, for partially absorbing materials, using the alternative definition in terms of escape and collision probabilities. Grey Dancoff factors for the Canadian CANDU-37 and CANFLEX assemblies were calculated with PIJM at five symmetrically distinct fuel pin positions. The results, obtained via the Direct Method, i.e., by direct calculation of escape and collision probabilities, were satisfactory when compared with those in the literature. On the other hand, the PIJMC module was developed to calculate escape and collision probabilities using the Monte Carlo method. Modifications in this module were performed to determine Black Dancoff factors, considering perfectly absorbing fuel rods. In this work, we proceed further in the task of validating the Direct Method by the Monte Carlo approach. To this end, the PIJMC routine is modified to compute Grey Dancoff factors using the cited alternative definition. Results are reported for the mentioned CANDU-37 and CANFLEX assemblies obtained with PIJMC, at the same fuel pin positions as with PIJM. Good agreement is observed between the results from the Monte Carlo and Direct methods.

  13. Comparison between ASHRAE and ISO thermal transmittance calculation methods

    DEFF Research Database (Denmark)

    Blanusa, Petar; Goss, William P.; Roth, Hartwig

    2007-01-01

    is proportional to the glazing/frame sightline distance, which is in turn proportional to the total glazing spacer length. An example calculation of the overall heat transfer and thermal transmittance (U-value or U-factor) using the two methods for a thermally broken, aluminum-framed slider window is presented. The fenestration thermal transmittance calculation analyses presented in this paper show that small differences exist between the calculated thermal transmittance values produced by the ISO and ASHRAE methods. The results also show that the overall thermal transmittance difference between the two methodologies decreases as the total window area (glazing plus frame) increases. Thus, the resulting difference in thermal transmittance values for the two methods is negligible for larger windows. This paper also shows algebraically that the differences between the ISO and ASHRAE methods turn out to be due to the way ...
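
    A minimal sketch of an ISO-style whole-window transmittance, illustrating why a spacer-related term proportional to the sightline/spacer length matters less as window area grows; the component U-values, areas and linear transmittance psi are hypothetical, and this is neither the exact ISO nor the exact ASHRAE procedure.

        # Minimal sketch: area-weighted whole-window U-value with a linear spacer
        # correction; the spacer term scales with the glazing perimeter, so its
        # share shrinks for larger windows (all numbers are illustrative).
        def u_window(a_glass, a_frame, l_spacer, u_glass=1.1, u_frame=2.0, psi=0.06):
            """Whole-window thermal transmittance in W/(m^2*K)."""
            return (a_glass * u_glass + a_frame * u_frame + l_spacer * psi) / (a_glass + a_frame)

        # A small and a large window of similar construction.
        print(round(u_window(a_glass=0.8, a_frame=0.3, l_spacer=3.6), 3))
        print(round(u_window(a_glass=3.4, a_frame=0.7, l_spacer=7.6), 3))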

  14. Gastroesophageal reflux - correlation between diagnostic methods

    International Nuclear Information System (INIS)

    Cruz, Maria das Gracas de Almeida; Penas, Maria Exposito; Fonseca, Lea Mirian Barbosa; Lemme, Eponina Maria O.; Martinho, Maria Jose Ribeiro

    1999-01-01

    A group of 97 individuals with typical symptoms of gastroesophageal reflux disease (GERD) was submitted to gastroesophageal reflux scintigraphy (GES), and the results were compared with those obtained from endoscopy, histopathology and 24-hour pH-metry. Twenty-four healthy individuals were used as a control group and underwent only the GES. The results showed that: a) the difference in the reflux index (RI) between the control group and the patients was statistically significant (p < 0.0001); b) the correlation between GES and the other methods showed the following results: sensitivity, 84%; specificity, 95%; positive predictive value, 98%; negative predictive value, 67%; accuracy, 87%. We conclude that the scintigraphic method should be used to confirm the diagnosis of GERD and is also recommended as an initial investigative procedure. (author)
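
    For reference, the diagnostic indices quoted above follow from a standard 2x2 table; the sketch below uses illustrative counts, not the study's raw data.

        # Minimal sketch: sensitivity, specificity, predictive values and accuracy
        # computed from a generic 2x2 table (counts are illustrative).
        def diagnostic_indices(tp, fp, fn, tn):
            return {
                "sensitivity": tp / (tp + fn),
                "specificity": tn / (tn + fp),
                "ppv": tp / (tp + fp),
                "npv": tn / (tn + fn),
                "accuracy": (tp + tn) / (tp + fp + fn + tn),
            }

        print(diagnostic_indices(tp=64, fp=1, fn=12, tn=20))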

  15. METHODS OF MEASURING THE EFFECTS OF LIGHTNING BY SIMULATING ITS STRIKES WITH THE INTERVAL ASSESSMENT OF THE RESULTS OF MEASUREMENTS

    Directory of Open Access Journals (Sweden)

    P. V. Kriksin

    2017-01-01

    Full Text Available The article presents the results of the development of new methods for a more accurate interval estimate of the experimental values of the voltages that arise on substation grounding devices and in control-cable circuits when lightning strikes the lightning rods; this estimate made it possible to increase the accuracy of the results of lightning-disturbance studies by 28 %. The more accurate interval estimate was achieved by developing a measurement model that takes into account, along with the measured values, different measurement errors, and includes special processing of the measurement results. As a result, the interval containing the true value of the sought voltage is determined with 95 % confidence. The methods can be applied to the IK-1 and IKP-1 measurement complexes, consisting of an aperiodic pulse generator and a high-frequency pulse generator, respectively, together with selective voltmeters. To evaluate the effectiveness of the developed methods, series of experimental voltage assessments of the grounding devices of ten operating high-voltage substations were carried out in accordance with both the developed methods and traditional techniques. The evaluation results confirmed the possibility of finding the true voltage values over a wide range, which ought to be considered in the technical diagnostics of substation lightning protection when analysing measurement results and developing measures to reduce the effects of lightning. A comparative analysis of measurements made with the developed methods and with traditional techniques demonstrated that the true value of the sought voltage may exceed the measured value by 28 % on average, which ought to be considered in further analysis of the lightning protection parameters at the facility and in the development of corrective actions. The developed methods have been

  16. Assessing Cost-Effectiveness in Obesity (ACE-Obesity: an overview of the ACE approach, economic methods and cost results

    Directory of Open Access Journals (Sweden)

    Swinburn Boyd

    2009-11-01

    Full Text Available Abstract Background The aim of the ACE-Obesity study was to determine the economic credentials of interventions which aim to prevent unhealthy weight gain in children and adolescents. We have reported elsewhere on the modelled effectiveness of 13 obesity prevention interventions in children. In this paper, we report on the cost results and associated methods together with the innovative approach to priority setting that underpins the ACE-Obesity study. Methods The Assessing Cost Effectiveness (ACE) approach combines technical rigour with 'due process' to facilitate evidence-based policy analysis. Technical rigour was achieved through use of standardised evaluation methods, a research team that assembles best available evidence and extensive uncertainty analysis. Cost estimates were based on pathway analysis, with resource usage estimated for the interventions and their 'current practice' comparator, as well as associated cost offsets. Due process was achieved through involvement of stakeholders, consensus decisions informed by briefing papers and 2nd stage filter analysis that captures broader factors that influence policy judgements in addition to cost-effectiveness results. The 2nd stage filters agreed by stakeholders were 'equity', 'strength of the evidence', 'feasibility of implementation', 'acceptability to stakeholders', 'sustainability' and 'potential for side-effects'. Results The intervention costs varied considerably, both in absolute terms (from cost saving [6 interventions] to in excess of AUD50m per annum) and when expressed as a 'cost per child' estimate (from Conclusion The use of consistent methods enables valid comparison of potential intervention costs and cost-offsets for each of the interventions. ACE-Obesity informs policy-makers about cost-effectiveness, health impact, affordability and 2nd stage filters for important options for preventing unhealthy weight gain in children. In related articles cost-effectiveness results and

  17. Alcohol Content in the ‘Hyper-Reality’ MTV Show ‘Geordie Shore’

    Science.gov (United States)

    Lowe, Eden; Britton, John

    2018-01-01

    Abstract Aim To quantify the occurrence of alcohol content, including alcohol branding, in the popular primetime television UK Reality TV show ‘Geordie Shore’ Series 11. Methods A 1-min interval coding content analysis of alcohol content in the entire DVD Series 11 of ‘Geordie Shore’ (10 episodes). Occurrence of alcohol use, implied use, other alcohol reference/paraphernalia or branding was recorded. Results All categories of alcohol were present in all episodes. ‘Any alcohol’ content occurred in 78%, ‘actual alcohol use’ in 30%, ‘inferred alcohol use’ in 72%, and all ‘other’ alcohol references occurred in 59% of all coding intervals (ACIs), respectively. Brand appearances occurred in 23% of ACIs. The most frequently observed alcohol brand was Smirnoff, which appeared in 43% of all brand appearances. Episodes categorized as suitable for viewing by adolescents below the legal drinking age of 18 years comprised 61% of all brand appearances. Conclusions Alcohol content, including branding, is highly prevalent in the UK Reality TV show ‘Geordie Shore’ Series 11. Two-thirds of all alcohol branding occurred in episodes age-rated by the British Board of Film Classification (BBFC) as suitable for viewers aged 15 years. The organizations OfCom, Advertising Standards Authority (ASA) and the Portman Group should implement more effective policies to reduce adolescent exposure to on-screen drinking. The drinks industry should consider demanding the withdrawal of their brands from the show. Short Summary Alcohol content, including branding, is highly prevalent in the MTV reality TV show ‘Geordie Shore’ Series 11. Current alcohol regulation is failing to protect young viewers from exposure to such content. PMID:29365032

  18. Applying homotopy analysis method for solving differential-difference equation

    International Nuclear Information System (INIS)

    Wang Zhen; Zou Li; Zhang Hongqing

    2007-01-01

    In this Letter, we apply the homotopy analysis method to solve differential-difference equations. A simple but typical example is used to illustrate the validity and the great potential of the generalized homotopy analysis method in solving differential-difference equations. Comparisons are made between the results of the proposed method and exact solutions. The results show that the homotopy analysis method is an attractive method for solving differential-difference equations.

  19. Text Mining of the Classical Medical Literature for Medicines That Show Potential in Diabetic Nephropathy

    Directory of Open Access Journals (Sweden)

    Lei Zhang

    2014-01-01

    Full Text Available Objectives. To apply modern text-mining methods to identify candidate herbs and formulae for the treatment of diabetic nephropathy. Methods. The method we developed includes three steps: (1) identification of candidate ancient terms; (2) systematic search and assessment of medical records written in classical Chinese; (3) preliminary evaluation of the effect and safety of candidates. Results. The ancient terms Xia Xiao, Shen Xiao, and Xiao Shen were determined as the most likely to correspond with diabetic nephropathy and were used in text mining. A total of 80 Chinese formulae for treating conditions congruent with diabetic nephropathy, recorded in medical books from the Tang Dynasty to the Qing Dynasty, were collected. Sao si tang (also called Reeling Silk Decoction) was chosen to show the process of preliminary evaluation of the candidates. It had promising potential for development as a new agent for the treatment of diabetic nephropathy. However, further investigations of its safety in patients with renal insufficiency are still needed. Conclusions. The methods developed in this study offer a targeted approach to identifying traditional herbs and/or formulae as candidates for further investigation in the search for new drugs for modern disease. However, more effort is still required to improve our techniques, especially with regard to compound formulae.

  20. Resource costing for multinational neurologic clinical trials: methods and results.

    Science.gov (United States)

    Schulman, K; Burke, J; Drummond, M; Davies, L; Carlsson, P; Gruger, J; Harris, A; Lucioni, C; Gisbert, R; Llana, T; Tom, E; Bloom, B; Willke, R; Glick, H

    1998-11-01

    We present the results of a multinational resource costing study for a prospective economic evaluation of a new medical technology for treatment of subarachnoid hemorrhage within a clinical trial. The study describes a framework for the collection and analysis of international resource cost data that can contribute to a consistent and accurate intercountry estimation of cost. Of the 15 countries that participated in the clinical trial, we collected cost information in the following seven: Australia, France, Germany, the UK, Italy, Spain, and Sweden. The collection of cost data in these countries was structured through the use of worksheets to provide accurate and efficient cost reporting. We converted total average costs to average variable costs and then aggregated the data to develop study unit costs. When unit costs were unavailable, we developed an index table, based on a market-basket approach, to estimate unit costs. To estimate the cost of a given procedure, the market-basket estimation process required that cost information be available for at least one country. When cost information was unavailable in all countries for a given procedure, we estimated costs using a method based on physician-work and practice-expense resource-based relative value units. Finally, we converted study unit costs to a common currency using purchasing power parity measures. Through this costing exercise we developed a set of unit costs for patient services and per diem hospital services. We conclude by discussing the implications of our costing exercise and suggest guidelines to facilitate more effective multinational costing exercises.
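
    A minimal sketch of the final conversion step described above (country-level unit costs divided by purchasing power parities to reach a common currency); the countries, costs and PPP factors below are placeholders, not the study's estimates.

        # Minimal sketch: convert local-currency unit costs to a common currency
        # using purchasing power parity (PPP) factors expressed as LCU per USD.
        unit_cost_local = {"Australia": 950.0, "Germany": 1100.0, "Sweden": 8300.0}
        ppp_to_usd      = {"Australia": 1.35,  "Germany": 2.05,   "Sweden": 9.6}

        unit_cost_usd = {c: unit_cost_local[c] / ppp_to_usd[c] for c in unit_cost_local}
        for country, cost in unit_cost_usd.items():
            print(f"{country}: {cost:,.0f} USD (PPP)")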

  1. THE METHOD OF SURGICAL TREATMENT OF HUMERAL EPICONDYLITIS

    Directory of Open Access Journals (Sweden)

    S. B. Korolev

    2011-01-01

    Full Text Available A method for the surgical treatment of humeral epicondylitis is described. The method is proposed for use when conservative therapy has not been effective. Experience with this method shows excellent results.

  2. Talk Show Science.

    Science.gov (United States)

    Moore, Mitzi Ruth

    1992-01-01

    Proposes having students perform skits in which they play the roles of the science concepts they are trying to understand. Provides the dialog for a skit in which hot and cold gas molecules are interviewed on a talk show to study how these properties affect wind, rain, and other weather phenomena. (MDH)

  3. The healthy building intervention study: Objectives, methods and results of selected environmental measurements

    Energy Technology Data Exchange (ETDEWEB)

    Fisk, W.J.; Faulkner, D.; Sullivan, D. [and others]

    1998-02-17

    To test proposed methods for reducing SBS symptoms and to learn about the causes of these symptoms, a double-blind controlled intervention study was designed and implemented. This study utilized two different interventions designed to reduce occupants' exposures to airborne particles: (1) high efficiency filters in the building's HVAC systems; and (2) thorough cleaning of carpeted floors and fabric-covered chairs with an unusually powerful vacuum cleaner. The study population was the workers on the second and fourth floors of a large office building with mechanical ventilation, air conditioning, and sealed windows. Interventions were implemented on one floor while the occupants on the other floor served as a control group. For the enhanced-filtration intervention, a multiple crossover design was used (a crossover is a repeat of the experiment with the former experimental group as the control group and vice versa). Demographic and health symptom data were collected via an initial questionnaire on the first study week and health symptom data were obtained each week, for eight additional weeks, via weekly questionnaires. A large number of indoor environmental parameters were measured during the study including air temperatures and humidities, carbon dioxide concentrations, particle concentrations, concentrations of several airborne bioaerosols, and concentrations of several microbiologic compounds within the dust sampled from floors and chairs. This report describes the study methods and summarizes the results of selected environmental measurements.

  4. Fuel- and wood consumption surveys in developing countries: a proposal of an efficient low-cost method and the results of its application in eastern Ethiopia

    Energy Technology Data Exchange (ETDEWEB)

    Poschen, P.; Eiche, G.

    1986-01-01

    The method involves a preliminary survey to establish areas homogeneous for temperature and rainfall regime, natural vegetation, agricultural systems, wood fuel substitutes, housing and cooking habits. Locations are sampled within such areas to reflect variability in presence of forests and other fuel and wood resources, population density, and access to markets. Households are sampled within a location to cover variation in social structure. Results from a survey in the Hararghe highlands differ markedly from previous Ethiopia-wide estimates and show that remaining wood resources supply less than half of the fuel requirements. An effective community forestry programme is urgently required. (Refs. 9).

  5. Deformed Reality: Proof of concept and preliminary results

    OpenAIRE

    Haouchine, Nazim; Petit, Antoine; Roy, Frederick; Cotin, Stéphane

    2017-01-01

    We introduce "Deformed Reality", a new paradigm for interactively manipulating objects in a scene in a deformable manner. Using the core principle of augmented reality to estimate rigid pose over time, our method enables the user to deform the targeted object while it is being rendered with its natural texture, giving the sense of real-time object editing in the user's environment. The presented results show that our method can open new ways of using augmented reality by n...

  6. Model films of cellulose. I. Method development and initial results

    NARCIS (Netherlands)

    Gunnars, S.; Wågberg, L.; Cohen Stuart, M.A.

    2002-01-01

    This report presents a new method for the preparation of thin cellulose films. NMMO (N-methylmorpholine-N-oxide) was used to dissolve cellulose and addition of DMSO (dimethyl sulfoxide) was used to control viscosity of the cellulose solution. A thin layer of the cellulose solution is spin-coated

  7. A multigrid method for variational inequalities

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, S.; Stewart, D.E.; Wu, W.

    1996-12-31

    Multigrid methods have been used with great success for solving elliptic partial differential equations. Penalty methods have been successful in solving finite-dimensional quadratic programs. In this paper these two techniques are combined to give a fast method for solving obstacle problems. A nonlinear penalized problem is solved using Newton's method for large values of a penalty parameter. Multigrid methods are used to solve the linear systems in Newton's method. The overall numerical method developed is based on an exterior penalty function, and numerical results showing the performance of the method have been obtained.
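
    A minimal 1D sketch of the penalty-plus-Newton idea for an obstacle problem; for brevity a direct sparse solve stands in for the multigrid linear solver of the paper, and the grid, load, obstacle and penalty parameter are illustrative choices, not the authors' setup.

        # Minimal sketch: the obstacle problem -u'' >= f, u >= psi is replaced by the
        # penalized equation -u'' + (1/eps) * min(u - psi, 0) = f and solved by Newton's
        # method; each Newton step solves a linear system (here with a direct solver).
        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        n = 200
        h = 1.0 / (n + 1)
        x = np.linspace(h, 1.0 - h, n)

        # 1D Laplacian with homogeneous Dirichlet boundary conditions.
        A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr") / h**2

        f = -8.0 * np.ones(n)                      # load pushing the membrane down
        psi = -0.1 - 0.5 * (x - 0.5) ** 2          # obstacle below the membrane
        eps = 1e-6                                 # penalty parameter

        u = np.zeros(n)
        for it in range(50):
            gap = np.minimum(u - psi, 0.0)
            F = A @ u + gap / eps - f              # nonlinear residual
            active = (u < psi).astype(float)       # where the penalty term is active
            J = A + sp.diags(active / eps)         # Newton Jacobian
            du = spla.spsolve(J.tocsr(), -F)
            u += du
            if np.linalg.norm(du) < 1e-10:
                break

        print(f"Newton iterations: {it + 1}, min(u - psi) = {np.min(u - psi):.2e}")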

  8. Standardization of the 24-hour diet recall calibration method used in the european prospective investigation into cancer and nutrition (EPIC): general concepts and preliminary results.

    Science.gov (United States)

    Slimani, N; Ferrari, P; Ocké, M; Welch, A; Boeing, H; Liere, M; Pala, V; Amiano, P; Lagiou, A; Mattisson, I; Stripp, C; Engeset, D; Charrondière, R; Buzzard, M; Staveren, W; Riboli, E

    2000-12-01

    Despite increasing interest in the concept of calibration in dietary surveys, there is still little experience in the use and standardization of a common reference dietary method, especially in international studies. In this paper, we present the general theoretical framework and the approaches developed to standardize the computer-assisted 24 h diet recall method (EPIC-SOFT) used to collect about 37 000 24-h dietary recall measurements (24-HDR) from the 10 countries participating in the European Prospective Investigation into Cancer and Nutrition (EPIC). In addition, an analysis of variance was performed to examine the level of standardization of EPIC-SOFT across the 90 interviewers involved in the study. The analysis of variance used a random effects model in which mean energy intake per interviewer was used as the dependent variable, while age, body mass index (BMI), energy requirement, week day, season, special diet, special day, physical activity and the EPIC-SOFT version were used as independent variables. The analysis was performed separately for men and women. The results show no statistical difference between interviewers in all countries for men and five out of eight countries for women, after adjustment for physical activity and the EPIC-SOFT program version used, and the exclusion of one interviewer in Germany (for men), and one in Denmark (for women). These results showed an interviewer effect in certain countries and a significant difference between gender, suggesting an underlying respondent's effect due to the higher under-reporting among women that was consistently observed in EPIC. However, the actual difference between interviewer and country mean energy intakes is about 10%. Furthermore, no statistical differences in mean energy intakes were observed across centres from the same country, except in Italy and Germany for men, and France and Spain for women, where the populations were recruited from areas scattered throughout the countries. Despite
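
    A minimal sketch of how such an interviewer effect can be examined with a random-effects model; the simulated data, covariates and the statsmodels MixedLM workflow below are assumptions of this sketch, not the EPIC analysis itself.

        # Minimal sketch (simulated data): random intercept per interviewer, fixed
        # effects for adjustment variables, outcome is mean energy intake.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(3)
        n = 1200
        df = pd.DataFrame({
            "energy": rng.normal(2200, 450, n),                 # kcal/day, simulated
            "age": rng.integers(35, 70, n),
            "bmi": rng.normal(26, 4, n),
            "weekday": rng.integers(0, 7, n),
            "interviewer": rng.integers(0, 90, n).astype(str),  # 90 interviewers
        })

        model = smf.mixedlm("energy ~ age + bmi + C(weekday)", df, groups=df["interviewer"])
        fit = model.fit()
        print(fit.summary())   # fixed effects plus the interviewer variance component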

  9. False positive results using calcitonin as a screening method for medullary thyroid carcinoma

    Directory of Open Access Journals (Sweden)

    Rafael Loch Batista

    2013-01-01

    Full Text Available The role of serum calcitonin as part of the evaluation of thyroid nodules has been widely discussed in the literature. However, there is still no consensus on the measurement of calcitonin in the initial evaluation of a patient with a thyroid nodule. Problems concerning cost-benefit, laboratory methods, false positives and the low prevalence of medullary thyroid carcinoma (MTC) are factors that limit this approach. We illustrate two cases in which serum calcitonin was used in the evaluation of a thyroid nodule and levels proved to be high. A stimulation test was performed, using calcium as the secretagogue, and calcitonin hyper-stimulation was confirmed, but anatomopathologic examination did not show medullary neoplasia. The anatomopathologic diagnosis was Hashimoto thyroiditis in one case and adenomatous goiter plus an occult papillary thyroid carcinoma in the other. Routine use of serum calcitonin in the initial diagnostic evaluation of a thyroid nodule, followed by a confirmatory stimulation test if basal serum calcitonin proves to be high, is the currently recommended approach, but questions concerning cost-benefit and the possibility of diagnostic error make the validity of this recommendation debatable.

  10. Obesity in show cats.

    Science.gov (United States)

    Corbee, R J

    2014-12-01

    Obesity is an important disease with a high prevalence in cats. Because obesity is related to several other diseases, it is important to identify the population at risk. Several risk factors for obesity have been described in the literature. A higher incidence of obesity in certain cat breeds has been suggested. The aim of this study was to determine whether obesity occurs more often in certain breeds. The second aim was to relate the increased prevalence of obesity in certain breeds to the official standards of those breeds. To this end, 268 cats of 22 different breeds were investigated by determining their body condition score (BCS) on a nine-point scale by inspection and palpation, at two different cat shows. Overall, 45.5% of the show cats had a BCS > 5, and 4.5% of the show cats had a BCS > 7. There were significant differences between breeds, which could be related to the breed standards. Most overweight and obese cats were in the neutered group. This warrants firm discussions with breeders and cat show judges to arrive at different interpretations of the standards in order to prevent overweight conditions in certain breeds from being the standard of beauty. Neutering predisposes cats to obesity and requires early nutritional intervention to prevent obese conditions. Journal of Animal Physiology and Animal Nutrition © 2014 Blackwell Verlag GmbH.

  11. Calibration Method to Eliminate Zeroth Order Effect in Lateral Shearing Interferometry

    Science.gov (United States)

    Fang, Chao; Xiang, Yang; Qi, Keqi; Chen, Dawei

    2018-04-01

    In this paper, a calibration method is proposed which eliminates the zeroth order effect in lateral shearing interferometry. An analytical expression of the calibration error function is deduced, and the relationship between the phase-restoration error and calibration error is established. The analytical results show that the phase-restoration error introduced by the calibration error is proportional to the phase shifting error and zeroth order effect. The calibration method is verified using simulations and experiments. The simulation results show that the phase-restoration error is approximately proportional to the phase shift error and zeroth order effect, when the phase shifting error is less than 2° and the zeroth order effect is less than 0.2. The experimental result shows that compared with the conventional method with 9-frame interferograms, the calibration method with 5-frame interferograms achieves nearly the same restoration accuracy.

  12. Computational methods for nuclear criticality safety analysis

    International Nuclear Information System (INIS)

    Maragni, M.G.

    1992-01-01

    Nuclear criticality safety analyses require methods that have been tested and verified against benchmark results. In this work, criticality calculations based on the KENO-IV and MCNP codes are studied with the aim of qualifying these methods at IPEN-CNEN/SP and COPESP. Variance reduction techniques are important for reducing computer execution time, and several of them are analysed. As a practical example, a criticality safety analysis was carried out for the storage tubes for irradiated fuel elements from the IEA-R1 research reactor. This analysis showed that the MCNP code is more suitable for problems with complex geometries, while the KENO-IV code gives conservative results when the generalized geometry option is not used. (author)

  13. The method of producing climate change datasets impacts the resulting policy guidance and chance of mal-adaptation

    Directory of Open Access Journals (Sweden)

    Marie Ekström

    2016-12-01

    Full Text Available Impact, adaptation and vulnerability (IAV) research underpins strategies for adaptation to climate change and helps to conceptualise what life may look like in decades to come. Research draws on information from global climate models (GCMs), though typically post-processed into a secondary product with finer resolution through methods of downscaling. Through worked examples set in an Australian context we assess the influence of GCM sub-setting, geographic area sub-setting and downscaling method on the regional change signal. The examples demonstrate that these choices impact the final results differently depending on factors such as application needs, the range of uncertainty of the projected variable, the amplitude of natural variability, and the size of the study region. For heat extremes, the choice of emissions scenario is of prime importance, but for a given scenario the method of preparing data can affect the magnitude of the projection by a factor of two or more, strongly affecting the indicated adaptation decision. For catchment-level runoff projections, the choice of emissions scenario is less dominant. Rather, the method of selecting and producing application-ready datasets is crucial, as demonstrated by results with opposing signs of change, raising the real possibility of mal-adaptive decisions. This work illustrates the potential pitfalls of GCM sub-sampling or the use of a single downscaled product when conducting IAV research. Using the broad range of change from all available model sources, whilst making the application more complex, avoids the larger problem of over-confidence in climate projections and lessens the chance of mal-adaptation.

  14. STANDARDIZATION OF GLYCOHEMOGLOBIN RESULTS AND REFERENCE VALUES IN WHOLE-BLOOD STUDIED IN 103 LABORATORIES USING 20 METHODS

    NARCIS (Netherlands)

    WEYKAMP, CW; PENDERS, TJ; MUSKIET, FAJ; VANDERSLIK, W

    We investigated the effect of calibration with lyophilized calibrators on whole-blood glycohemoglobin (glyHb) results. One hundred three laboratories, using 20 different methods, determined glyHb in two lyophilized calibrators and two whole-blood samples. For whole-blood samples with low (5%) and

  15. Dolphin shows and interaction programs: benefits for conservation education?

    Science.gov (United States)

    Miller, L J; Zeigler-Hill, V; Mellen, J; Koeppel, J; Greer, T; Kuczaj, S

    2013-01-01

    Dolphin shows and dolphin interaction programs are two types of education programs within zoological institutions used to educate visitors about dolphins and the marine environment. The current study examined the short- and long-term effects of these programs on visitors' conservation-related knowledge, attitude, and behavior. Participants of both dolphin shows and interaction programs demonstrated a significant short-term increase in knowledge, attitudes, and behavioral intentions. Three months following the experience, participants of both dolphin shows and interaction programs retained the knowledge learned during their experience and reported engaging in more conservation-related behaviors. Additionally, the number of dolphin shows attended in the past was a significant predictor of recent conservation-related behavior suggesting that repetition of these types of experiences may be important in inspiring people to conservation action. These results suggest that both dolphin shows and dolphin interaction programs can be an important part of a conservation education program for visitors of zoological facilities. © 2012 Wiley Periodicals, Inc.

  16. A normalization method for combination of laboratory test results from different electronic healthcare databases in a distributed research network.

    Science.gov (United States)

    Yoon, Dukyong; Schuemie, Martijn J; Kim, Ju Han; Kim, Dong Ki; Park, Man Young; Ahn, Eun Kyoung; Jung, Eun-Young; Park, Dong Kyun; Cho, Soo Yeon; Shin, Dahye; Hwang, Yeonsoo; Park, Rae Woong

    2016-03-01

    Distributed research networks (DRNs) afford statistical power by integrating observational data from multiple partners for retrospective studies. However, laboratory test results across care sites are derived using different assays from varying patient populations, making it difficult to simply combine data for analysis. Additionally, existing normalization methods are not suitable for retrospective studies. We normalized laboratory results from different data sources by adjusting for heterogeneous clinico-epidemiologic characteristics of the data and called this the subgroup-adjusted normalization (SAN) method. Subgroup-adjusted normalization renders the means and standard deviations of distributions identical under population structure-adjusted conditions. To evaluate its performance, we compared SAN with existing methods for simulated and real datasets consisting of blood urea nitrogen, serum creatinine, hematocrit, hemoglobin, serum potassium, and total bilirubin. Various clinico-epidemiologic characteristics can be applied together in SAN. For simplicity of comparison, age and gender were used to adjust population heterogeneity in this study. In simulations, SAN had the lowest standardized difference in means (SDM) and Kolmogorov-Smirnov values for all tests, and SAN normalization performed better than normalization using other methods. The SAN method is applicable in a DRN environment and should facilitate analysis of data integrated across DRN partners for retrospective observational studies. Copyright © 2015 John Wiley & Sons, Ltd.
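
    The record describes SAN only at a high level (make each subgroup's mean and standard deviation match under a population structure adjustment), so the following is a minimal sketch of that idea rather than the authors' implementation; the function name, the pandas layout, and the ref_stats lookup of target statistics per (age band, gender) subgroup are assumptions for illustration.

```python
import pandas as pd

def subgroup_adjusted_normalize(df, value_col, subgroup_cols, ref_stats):
    """Rescale lab values so each subgroup (e.g. age band x gender) has the
    reference mean and SD; ref_stats maps a subgroup key (a tuple of the
    subgroup column values) to a (target_mean, target_sd) pair."""
    out = df.copy()
    for key, grp in df.groupby(subgroup_cols):
        target_mean, target_sd = ref_stats[key]
        local_mean, local_sd = grp[value_col].mean(), grp[value_col].std()
        z = (grp[value_col] - local_mean) / local_sd   # assumes local_sd > 0
        out.loc[grp.index, value_col] = z * target_sd + target_mean
    return out

# Hypothetical usage: harmonize serum creatinine from one DRN partner towards
# pooled reference statistics computed across all partners.
site_a = pd.DataFrame({"age_band": ["40-60"] * 4 + ["60+"] * 2,
                       "gender":   ["F", "F", "M", "M", "F", "F"],
                       "creatinine": [0.80, 0.90, 1.00, 1.10, 1.10, 1.30]})
ref = {("40-60", "F"): (0.85, 0.15), ("40-60", "M"): (1.00, 0.20),
       ("60+", "F"): (1.05, 0.25)}
print(subgroup_adjusted_normalize(site_a, "creatinine", ["age_band", "gender"], ref))
```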

  17. CT-guided percutaneous neurolysis methods. State of the art and first results; CT-gesteuerte Neurolysen. Stand der Technik und aktuelle Ergebnisse

    Energy Technology Data Exchange (ETDEWEB)

    Schneider, B. [Abt. Radiodiagnostik, Radiologische Universitaetsklinik Heidelberg (Germany); Richter, G.M. [Abt. Radiodiagnostik, Radiologische Universitaetsklinik Heidelberg (Germany); Roeren, T. [Abt. Radiodiagnostik, Radiologische Universitaetsklinik Heidelberg (Germany); Kauffmann, G.W. [Abt. Radiodiagnostik, Radiologische Universitaetsklinik Heidelberg (Germany)

    1996-09-01

    We used 21G or 22G fine needles. All CT-guided percutaneous neurolysis methods require proper blood coagulation. Most common CT scanners are suitable for neurolysis if there is enough room for maintaining sterile conditions. All neurolysis methods involve sterile puncture of the ganglia under local anesthesia, a test block with anesthetic and contrast agent to assess the clinical effect, and the definitive block with a mixture of 96% ethanol and local anesthetic. This allows us to correct the position of the needle if we see improper distribution of the test block or unwanted side effects. Though inflammatory complications of the peritoneum due to puncture are rarely seen, we prefer the dorsal approach whenever possible. Results: Seven of 20 legs showed at least transient clinical improvement after CT-guided lumbar sympathectomies; 13 legs had to be amputated. Results of the methods in the literature differ. For lumbar sympathectomy, improved perfusion is reported in 39-89%, depending on the pre-selection of the patient group. Discussion: It was recently proved that sympathectomy improves perfusion not only of the skin but also of the muscle. The hypothesis of a steal effect of sympathectomy on skin perfusion was disproved. Modern aggressive surgical and interventional treatment often refers for sympathectomy only those patients whose reserves of collateralization are nearly exhausted. We presume this is the reason for the differing results we found in our patient group. For thoracic sympathectomy the clinical outcome depends very much on the indication. Whereas palmar hyperhidrosis offers nearly 100% success, only 60-70% of patients with disturbance of perfusion have benefited. Results of celiac ganglion block also differ. Patients with carcinoma of the pancreas and other organs of the upper abdomen benefit in 80-100% of all cases, patients with chronic pancreatitis in 60-80%. (orig./VHE)

  18. Pion emission from the T2K replica target: method, results and application

    CERN Document Server

    Abgrall, N.; Anticic, T.; Antoniou, N.; Argyriades, J.; Baatar, B.; Blondel, A.; Blumer, J.; Bogomilov, M.; Bravar, A.; Brooks, W.; Brzychczyk, J.; Bubak, A.; Bunyatov, S.A.; Busygina, O.; Christakoglou, P.; Chung, P.; Czopowicz, T.; Davis, N.; Debieux, S.; Di Luise, S.; Dominik, W.; Dumarchez, J.; Dynowski, K.; Engel, R.; Ereditato, A.; Esposito, L.S.; Feofilov, G.A.; Fodor, Z.; Ferrero, A.; Fulop, A.; Gazdzicki, M.; Golubeva, M.; Grabez, B.; Grebieszkow, K.; Grzeszczuk, A.; Guber, F.; Haesler, A.; Hakobyan, H.; Hasegawa, T.; Idczak, R.; Igolkin, S.; Ivanov, Y.; Ivashkin, A.; Kadija, K.; Kapoyannis, A.; Katrynska, N.; Kielczewska, D.; Kikola, D.; Kirejczyk, M.; Kisiel, J.; Kiss, T.; Kleinfelder, S.; Kobayashi, T.; Kochebina, O.; Kolesnikov, V.I.; Kolev, D.; Kondratiev, V.P.; Korzenev, A.; Kowalski, S.; Krasnoperov, A.; Kuleshov, S.; Kurepin, A.; Lacey, R.; Larsen, D.; Laszlo, A.; Lyubushkin, V.V.; Mackowiak-Pawlowska, M.; Majka, Z.; Maksiak, B.; Malakhov, A.I.; Maletic, D.; Marchionni, A.; Marcinek, A.; Maris, I.; Marin, V.; Marton, K.; Matulewicz, T.; Matveev, V.; Melkumov, G.L.; Messina, M.; Mrowczynski, St.; Murphy, S.; Nakadaira, T.; Nishikawa, K.; Palczewski, T.; Palla, G.; Panagiotou, A.D.; Paul, T.; Peryt, W.; Petukhov, O.; Planeta, R.; Pluta, J.; Popov, B.A.; Posiadala, M.; Pulawski, S.; Puzovic, J.; Rauch, W.; Ravonel, M.; Renfordt, R.; Robert, A.; Rohrich, D.; Rondio, E.; Rossi, B.; Roth, M.; Rubbia, A.; Rustamov, A.; Rybczynski, M.; Sadovsky, A.; Sakashita, K.; Savic, M.; Sekiguchi, T.; Seyboth, P.; Shibata, M.; Sipos, M.; Skrzypczak, E.; Slodkowski, M.; Staszel, P.; Stefanek, G.; Stepaniak, J.; Strabel, C.; Strobele, H.; Susa, T.; Szuba, M.; Tada, M.; Taranenko, A.; Tereshchenko, V.; Tolyhi, T.; Tsenov, R.; Turko, L.; Ulrich, R.; Unger, M.; Vassiliou, M.; Veberic, D.; Vechernin, V.V.; Vesztergombi, G.; Wilczek, A.; Wlodarczyk, Z.; Wojtaszek-Szwarc, A.; Wyszynski, O.; Zambelli, L.; Zipper, W.; Hartz, M.; Ichikawa, A.K.; Kubo, H.; Marino, A.D.; Matsuoka, K.; Murakami, A.; Nakaya, T.; Suzuki, K.; Yuan, T.; Zimmerman, E.D.

    2013-01-01

    The T2K long-baseline neutrino oscillation experiment in Japan needs precise predictions of the initial neutrino flux. The highest precision can be reached based on detailed measurements of hadron emission from the same target as used by T2K exposed to a proton beam of the same kinetic energy of 30 GeV. The corresponding data were recorded in 2007-2010 by the NA61/SHINE experiment at the CERN SPS using a replica of the T2K graphite target. In this paper details of the experiment, data taking, data analysis method and results from the 2007 pilot run are presented. Furthermore, the application of the NA61/SHINE measurements to the predictions of the T2K initial neutrino flux is described and discussed.

  19. Wielandt method applied to the diffusion equations discretized by finite element nodal methods

    International Nuclear Information System (INIS)

    Mugica R, A.; Valle G, E. del

    2003-01-01

    The numerical solution of the diffusion equation nowadays requires extensive algorithms and computer programs because of the large number of routines and calculations that must be carried out; this directly affects the execution times of these programs, so that reliable results are obtained only after relatively long run times. This work shows the application of a method that accelerates the convergence of the classic power method and notably reduces the number of iterations needed to obtain reliable results, which means that computing times are greatly reduced. This method is known in the literature as the Wielandt method, and it has been incorporated into a computer program based on the discretization of the neutron diffusion equations, in slab geometry and steady state, by polynomial nodal methods. In this work the neutron diffusion equations are written for several energy groups and discretized by means of the so-called physical nodal methods, the quadratic case being illustrated in particular. A model problem widely described in the literature is solved for the physical nodal schemes of degree 1, 2, 3 and 4 in three different ways: a) with the classic power method, b) with the power method and Wielandt acceleration, and c) with the power method and the modified Wielandt acceleration. Results are reported for the model problem as well as for two additional problems known as benchmark problems. The acceleration method can also be implemented for geometries other than the one proposed in this work, and its application can be extended to problems in two or three dimensions. (Author)
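
    The record describes Wielandt acceleration only in general terms, so here is a small, generic illustration of the underlying idea: applying the power method to the shifted-and-inverted operator (A - shift*I)^-1, with the shift chosen close to the dominant eigenvalue, so that the dominance ratio, and hence the iteration count, drops sharply. The toy diagonal matrix and the shift value are assumptions for demonstration; the nodal diffusion codes in the paper work with the fission-source formulation rather than a generic matrix.

```python
import numpy as np

def power_method(A, tol=1e-10, max_it=100000):
    """Classic power iteration; returns the dominant eigenvalue (Rayleigh
    quotient estimate) and the number of iterations used."""
    x = np.ones(A.shape[0])
    lam_old = 0.0
    for it in range(1, max_it + 1):
        y = A @ x
        lam = x @ y / (x @ x)            # Rayleigh quotient (A symmetric here)
        x = y / np.linalg.norm(y)
        if abs(lam - lam_old) < tol:
            return lam, it
        lam_old = lam
    return lam, max_it

def wielandt_power_method(A, shift, **kwargs):
    """Power iteration applied to (A - shift*I)^-1.  A shift close to (but not
    equal to) the dominant eigenvalue improves the dominance ratio and thus the
    convergence rate; the eigenvalue of A is recovered as shift + 1/mu."""
    B = np.linalg.inv(A - shift * np.eye(A.shape[0]))
    mu, it = power_method(B, **kwargs)
    return shift + 1.0 / mu, it

# Toy operator whose plain power iteration converges slowly (dominance ratio 0.98)
A = np.diag([1.00, 0.98, 0.50, 0.10])
print(power_method(A))                   # many iterations
print(wielandt_power_method(A, 1.05))    # far fewer iterations, same eigenvalue
```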

  20. A Review of Spectral Methods for Variable Amplitude Fatigue Prediction and New Results

    Science.gov (United States)

    Larsen, Curtis E.; Irvine, Tom

    2013-01-01

    A comprehensive review of the available methods for estimating fatigue damage from variable amplitude loading is presented. The dependence of fatigue damage accumulation on power spectral density (psd) is investigated for random processes relevant to real structures such as in offshore or aerospace applications. Beginning with the Rayleigh (or narrow band) approximation, attempts at improved approximations or corrections to the Rayleigh approximation are examined by comparison to rainflow analysis of time histories simulated from psd functions representative of simple theoretical and real world applications. Spectral methods investigated include corrections by Wirsching and Light, Ortiz and Chen, the Dirlik formula, and the Single-Moment method, among other more recently proposed methods. Good agreement is obtained between the spectral methods and the time-domain rainflow identification for most cases, with some limitations. Guidelines are given for using the several spectral methods to increase confidence in the damage estimate.
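
    Since the Rayleigh (narrow-band) approximation is the baseline that every correction in this review modifies, a small numerical sketch of it may help. It computes spectral moments from a one-sided stress PSD and turns them into an expected damage rate using an S-N curve of the form N = C*S^(-k) written in terms of stress range; the flat example PSD, the S-N constants and the stress-range convention are illustrative assumptions, and other amplitude/range conventions exist.

```python
import numpy as np
from scipy.special import gamma
from scipy.integrate import trapezoid

def spectral_moment(f, psd, n):
    """n-th spectral moment m_n = integral of f**n * G(f) df."""
    return trapezoid(f**n * psd, f)

def narrowband_damage_rate(f, psd, k, C):
    """Rayleigh (narrow-band) expected damage per unit time for a Gaussian
    stress process with one-sided PSD G(f) and S-N curve N = C * S**(-k),
    with S interpreted as stress range."""
    m0 = spectral_moment(f, psd, 0)
    m2 = spectral_moment(f, psd, 2)
    nu0 = np.sqrt(m2 / m0)                        # zero up-crossing rate [Hz]
    return nu0 / C * (2.0 * np.sqrt(2.0 * m0))**k * gamma(1.0 + 0.5 * k)

# Band-limited white stress PSD between 5 and 50 Hz, in (MPa^2)/Hz
f = np.linspace(5.0, 50.0, 2000)
psd = np.full_like(f, 100.0)
print(narrowband_damage_rate(f, psd, k=3.0, C=1.0e12))    # damage per second
```

    Corrections such as the Wirsching-Light factor then scale this narrow-band estimate downward as a function of spectral bandwidth and the S-N exponent.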

  1. Effects of patient safety auditing in hospital care: results of a mixed-method evaluation (part 1).

    Science.gov (United States)

    Hanskamp-Sebregts, Mirelle; Zegers, Marieke; Westert, Gert P; Boeijen, Wilma; Teerenstra, Steven; van Gurp, Petra J; Wollersheim, Hub

    2018-06-15

    To evaluate the effectiveness of internal auditing in hospital care focussed on improving patient safety. A before-and-after mixed-method evaluation study was carried out in eight departments of a university medical center in the Netherlands. Internal auditing and feedback focussed on improving patient safety. The effect of internal auditing was assessed 15 months after the audit, using linear mixed models, on the patient, professional, team and departmental levels. The measurement methods were patient record review on adverse events (AEs), surveys regarding patient experiences, safety culture and team climate, analysis of administrative hospital data (standardized mortality rate, SMR) and safety walk rounds (SWRs) to observe frontline care processes on safety. The AE rate decreased from 36.1% to 31.3% and the preventable AE rate from 5.5% to 3.6%; however, the differences before and after auditing were not statistically significant. The patient-reported experience measures regarding patient safety improved slightly over time after the audit. The SWRs showed that medication safety and information security were improved. Internal auditing was associated with improved patient experiences and observed safety on wards. No effects were found on adverse outcomes, safety culture and team climate 15 months after the internal audit.

  2. Quantitative functional scintigraphy of the salivary glands: A new method of interpreting and clinical results

    International Nuclear Information System (INIS)

    Schneider, P.; Trauring, G.; Haas, J.P.; Noodt, A.; Draf, W.

    1984-01-01

    Tc-99m pertechnetate is injected i.v. and the kinetics of the tracer in the salivary glands is analyzed using a gamma camera and a computer system. To visualize regional gland function, phase images as well as so-called gradient images are generated, which reflect the rate of tracer inflow and outflow. The time activity curves for the individual glands which are obtained with the ROI technique show an initial rise which reflects the pertechnetate uptake potential of the gland and is superimposed by background activity. After a standardized lemon juice dose the curve drops steeply, with the slope depending on the outflow potential of the gland and the background activity. In the past, attempts at quantifying the uptake and elimination functions have failed because of problems in allowing for the variable background component of the time activity curves, which normally amounts to about 60%. In 25 patients in whom one gland had been removed surgically, the background activity was examined in terms of its time course and regional pattern, and a patient- and gland-specific subtraction method was developed for visualizing the time activity curves of isolated glands devoid of any background activity and for describing the uptake and elimination potentials in quantitative terms. Using this new method we evaluated 305 salivary gland scans. Normal ranges for the quantitative parameters were established and their reproducibility was examined. Unlike qualitative functional images of the salivary glands, the new quantitative method offers accurate evidence of the extent of gland function and thus helps to decide whether a gland should be salvaged or not (conservative versus surgical treatment). However, quantitation does not furnish any clues on the benign or malignant nature of a tumor. (Author)

  3. Evaluation of the successive approximations method for acoustic streaming numerical simulations.

    Science.gov (United States)

    Catarino, S O; Minas, G; Miranda, J M

    2016-05-01

    This work evaluates the successive approximations method commonly used to predict acoustic streaming by comparing it with a direct method. The successive approximations method solves both the acoustic wave propagation and acoustic streaming by solving the first and second order Navier-Stokes equations, ignoring the first order convective effects. This method was applied to acoustic streaming in a 2D domain and the results were compared with results from the direct simulation of the Navier-Stokes equations. The velocity results showed qualitative agreement between both methods, which indicates that the successive approximations method can describe the formation of flows with recirculation. However, a large quantitative deviation was observed between the two methods. Further analysis showed that the successive approximations method solution is sensitive to the initial flow field. The direct method showed that the instantaneous flow field changes significantly due to reflections and wave interference. It was also found that convective effects contribute significantly to the wave propagation pattern. These effects must be taken into account when solving acoustic streaming problems, since they affect the global flow. By adequately calculating the initial condition for the first order step, the acoustic streaming prediction by the successive approximations method can be improved significantly.

  4. Reference satellite selection method for GNSS high-precision relative positioning

    Directory of Open Access Journals (Sweden)

    Xiao Gao

    2017-03-01

    Full Text Available Selecting the optimal reference satellite is an important component of high-precision relative positioning because the reference satellite directly influences the strength of the normal equation. Reference satellite selection methods based on elevation and on the positional dilution of precision (PDOP) value were compared. The results show that neither of these methods always selects the optimal reference satellite. We therefore introduce the condition number of the design matrix into the reference satellite selection method to improve the structure of the normal equation, because the condition number indicates how ill-conditioned the normal equation is. The experimental results show that the new method can improve positioning accuracy and reliability in precise relative positioning.
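
    The abstract does not spell out the selection algorithm, so the sketch below is one plausible, purely geometric reading of it: for every candidate reference satellite, build the double-difference design matrix from differenced unit line-of-sight vectors and keep the candidate whose matrix has the smallest condition number. The function names and the simplifications (single epoch, no weighting, ambiguities ignored) are assumptions for illustration.

```python
import numpy as np

def unit_vectors(sat_positions, receiver_position):
    """Unit line-of-sight vectors from the receiver to each satellite (ECEF)."""
    los = np.asarray(sat_positions, float) - np.asarray(receiver_position, float)
    return los / np.linalg.norm(los, axis=1, keepdims=True)

def best_reference_satellite(sat_positions, receiver_position):
    """Pick the reference satellite minimizing the condition number of the
    double-difference design matrix (rows e_i - e_ref for i != ref)."""
    e = unit_vectors(sat_positions, receiver_position)
    best, best_cond = None, np.inf
    for ref in range(e.shape[0]):
        design = np.delete(e, ref, axis=0) - e[ref]
        c = np.linalg.cond(design)
        if c < best_cond:
            best, best_cond = ref, c
    return best, best_cond

# Toy constellation (ECEF metres) seen from a receiver near the Earth's surface
sats = [[15600e3, 7540e3, 20140e3], [18760e3, 2750e3, 18610e3],
        [17610e3, 14630e3, 13480e3], [19170e3, 610e3, 18390e3],
        [13000e3, 18000e3, 15000e3]]
receiver = [3875e3, 333e3, 5028e3]
print(best_reference_satellite(sats, receiver))
```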

  5. Assessment of South African uranium resources: methods and results

    International Nuclear Information System (INIS)

    Camisani-Calzolari, F.A.G.M.; De Klerk, W.J.; Van der Merwe, P.J.

    1985-01-01

    This paper deals primarily with the methods used by the Atomic Energy Corporation of South Africa in arriving at the assessment of the South African uranium resources. The Resource Evaluation Group is responsible for this task, which is carried out on a continuous basis. The evaluation is done on a property-by-property basis and relies upon data submitted to the Nuclear Development Corporation of South Africa by the various companies involved in uranium mining and prospecting in South Africa. Resources are classified into Reasonably Assured (RAR), Estimated Additional (EAR) and Speculative (SR) categories as defined by the NEA/IAEA Steering Group on Uranium Resources. Each category is divided into three cost categories, viz. resources exploitable at less than $80/kg uranium, at $80-130/kg uranium and at $130-260/kg uranium. Resources are reported in quantities of uranium metal that could be recovered after mining and metallurgical losses have been taken into consideration. Resources in the RAR and EAR categories exploitable at costs of less than $130/kg uranium are now estimated at 460 000 t uranium, which represents some 14 per cent of WOCA's (World Outside the Centrally Planned Economies Area) resources. The evaluation of a uranium venture is carried out in various steps, of which the most important, in order of implementation, are: geological interpretation, assessment of in situ resources using techniques varying from manual contouring of values to geostatistics, feasibility studies and estimation of recoverable resources. Because the choice of an evaluation method is, to some extent, dictated by statistical considerations, frequency distribution curves of the uranium grade variable are illustrated and discussed for characteristic deposits.

  6. Talking with TV shows

    DEFF Research Database (Denmark)

    Sandvik, Kjetil; Laursen, Ditte

    2014-01-01

    User interaction with radio and television programmes is not a new thing. However, with new cross-media production concepts such as X Factor and Voice, this is changing dramatically. The second-screen logic of these productions encourages viewers, alongside TV's traditional one-way communication mode, to communicate on interactive (dialogue-enabling) devices such as laptops, smartphones and tablets. Using the TV show Voice as our example, this article shows how the technological and situational set-up of the production invites viewers to engage in new ways of interaction and communication...

  7. spa Typing and Multilocus Sequence Typing Show Comparable Performance in a Macroepidemiologic Study of Staphylococcus aureus in the United States.

    Science.gov (United States)

    O'Hara, F Patrick; Suaya, Jose A; Ray, G Thomas; Baxter, Roger; Brown, Megan L; Mera, Robertino M; Close, Nicole M; Thomas, Elizabeth; Amrine-Madsen, Heather

    2016-01-01

    A number of molecular typing methods have been developed for characterization of Staphylococcus aureus isolates. The utility of these systems depends on the nature of the investigation for which they are used. We compared two commonly used methods of molecular typing, multilocus sequence typing (MLST) (and its clustering algorithm, Based Upon Related Sequence Type [BURST]) with the staphylococcal protein A (spa) typing (and its clustering algorithm, Based Upon Repeat Pattern [BURP]), to assess the utility of these methods for macroepidemiology and evolutionary studies of S. aureus in the United States. We typed a total of 366 clinical isolates of S. aureus by these methods and evaluated indices of diversity and concordance values. Our results show that, when combined with the BURP clustering algorithm to delineate clonal lineages, spa typing produces results that are highly comparable with those produced by MLST/BURST. Therefore, spa typing is appropriate for use in macroepidemiology and evolutionary studies and, given its lower implementation cost, this method appears to be more efficient. The findings are robust and are consistent across different settings, patient ages, and specimen sources. Our results also support a model in which the methicillin-resistant S. aureus (MRSA) population in the United States comprises two major lineages (USA300 and USA100), which each consist of closely related variants.

  8. Simulation of a Centrifugal Pump by Using the Harmonic Balance Method

    Directory of Open Access Journals (Sweden)

    Franco Magagnato

    2015-01-01

    Full Text Available The harmonic balance method was used for the flow simulation in a centrifugal pump. Independence studies were done to choose a proper number of harmonic modes and inlet eddy viscosity ratio value. The results from the harmonic balance method show good agreement with PIV experiments and with unsteady calculation results (which are based on the dual time stepping method) for the predicted head and the phase-averaged velocity. A detailed analysis of the flow fields at different flow rates shows that the flow rate has an evident influence on the flow fields. At 0.6Qd, some vortices begin to appear in the impeller, and at 0.4Qd some vortices have blocked the flow passage. The flow fields at different positions at 0.6Qd and 0.4Qd show how the complicated flow phenomena form, develop, and even disappear. The harmonic balance method can be used for flow simulation in pumps, showing the same accuracy as unsteady methods while being considerably faster.

  9. marker development for two novel rice genes showing differential ...

    Indian Academy of Sciences (India)

    2014-08-19

    School of Crop Improvement, College of PostGraduate Studies, Central Agricultural University ... from the root transcriptome data for tolerance to low P ... Values show a representative result of three independent experiments ...

  10. Improvement of human cell line activation test (h-CLAT) using short-time exposure methods for prevention of false-negative results.

    Science.gov (United States)

    Narita, Kazuto; Ishii, Yuuki; Vo, Phuc Thi Hong; Nakagawa, Fumiko; Ogata, Shinichi; Yamashita, Kunihiko; Kojima, Hajime; Itagaki, Hiroshi

    2018-01-01

    Recently, animal testing has been affected by increasing ethical, social, and political concerns regarding animal welfare. Several in vitro safety tests for evaluating skin sensitization, such as the human cell line activation test (h-CLAT), have been proposed. However, similar to other tests, the h-CLAT has produced false-negative results, including in tests for acid anhydride and water-insoluble chemicals. In a previous study, we demonstrated that the cause of false-negative results from phthalic anhydride was hydrolysis by an aqueous vehicle, with IL-8 release from THP-1 cells, and that short-time exposure to liquid paraffin (LP) dispersion medium could reduce false-negative results from acid anhydrides. In the present study, we modified the h-CLAT by applying this exposure method. We found that the modified h-CLAT is a promising method for reducing false-negative results obtained from acid anhydrides and chemicals with octanol-water partition coefficients (logKow) greater than 3.5. Based on the outcomes from the present study, a combination of the original and the modified h-CLAT is suggested for reducing false-negative results. Notably, the combination method provided a sensitivity of 95% (overall chemicals) or 93% (chemicals with logKow > 2.0), and an accuracy of 88% (overall chemicals) or 81% (chemicals with logKow > 2.0). We found that the combined method is a promising evaluation scheme for reducing false-negative results seen in existing in vitro skin-sensitization tests. In the future, we expect a combination of original and modified h-CLAT to be applied in a newly developed in vitro test for evaluating skin sensitization.

  11. Standardisation of a European measurement method for organic carbon and elemental carbon in ambient air: results of the field trial campaign and the determination of a measurement uncertainty and working range.

    Science.gov (United States)

    Brown, Richard J C; Beccaceci, Sonya; Butterfield, David M; Quincey, Paul G; Harris, Peter M; Maggos, Thomas; Panteliadis, Pavlos; John, Astrid; Jedynska, Aleksandra; Kuhlbusch, Thomas A J; Putaud, Jean-Philippe; Karanasiou, Angeliki

    2017-10-18

    The European Committee for Standardisation (CEN) Technical Committee 264 'Air Quality' has recently produced a standard method for the measurements of organic carbon and elemental carbon in PM2.5 within its working group 35 in response to the requirements of European Directive 2008/50/EC. It is expected that this method will be used in future by all Member States making measurements of the carbonaceous content of PM2.5. This paper details the results of a laboratory and field measurement campaign and the statistical analysis performed to validate the standard method, assess its uncertainty and define its working range to provide clarity and confidence in the underpinning science for future users of the method. The statistical analysis showed that the expanded combined uncertainty for transmittance protocol measurements of OC, EC and TC is expected to be below 25%, at the 95% level of confidence, above filter loadings of 2 μg cm⁻². An estimation of the detection limit of the method for total carbon was 2 μg cm⁻². As a result of the laboratory and field measurement campaign the EUSAAR2 transmittance measurement protocol was chosen as the basis of the standard method EN 16909:2017.

  12. Nonlinear conjugate gradient methods in micromagnetics

    Directory of Open Access Journals (Sweden)

    J. Fischbacher

    2017-04-01

    Full Text Available Conjugate gradient methods for energy minimization in micromagnetics are compared. The comparison of analytic results with numerical simulation shows that the standard conjugate gradient method may fail to produce correct results. A method that restricts the step length in the line search is introduced in order to avoid this problem. When the step length in the line search is controlled, conjugate gradient techniques are a fast and reliable way to compute the hysteresis properties of permanent magnets. The method is applied to investigate demagnetizing effects in NdFe12-based permanent magnets. The reduction of the coercive field by demagnetizing effects is μ0ΔH = 1.4 T at 450 K.
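
    The abstract only states that the line-search step length is restricted; the following is a generic sketch of that idea on a standard test function, not the micromagnetic implementation. The Polak-Ribiere update, the Armijo backtracking, the cap max_step on the step norm, and the Rosenbrock example are all illustrative assumptions.

```python
import numpy as np

def nlcg_capped(f, grad, x0, max_step=0.1, tol=1e-8, max_it=5000):
    """Polak-Ribiere nonlinear conjugate gradient with the line-search step
    length capped at `max_step`, a simple stand-in for a restricted-step
    strategy."""
    x = np.asarray(x0, float)
    g = grad(x)
    d = -g
    for _ in range(max_it):
        # backtracking (Armijo) line search, starting from the capped step
        t = max_step / max(np.linalg.norm(d), 1e-30)
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d) and t > 1e-16:
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        if np.linalg.norm(g_new) < tol:
            return x_new
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PR+ restart rule
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Quick check on the Rosenbrock function
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
print(nlcg_capped(f, grad, np.array([-1.2, 1.0])))
```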

  13. THE USEFULNESS OF USER TESTING METHODS IN IDENTIFYING PROBLEMS ON UNIVERSITY WEBSITES

    Directory of Open Access Journals (Sweden)

    Layla Hasan

    2014-10-01

    Full Text Available This paper aims to investigate the usefulness of three user testing methods (observation, and the use of quantitative and of qualitative data from a post-test questionnaire) in terms of their ability or inability to find specific usability problems on university websites. The results showed that observation was the best method, compared to the other two, in identifying large numbers of major and minor usability problems on university websites. The results also showed that employing qualitative data from a post-test questionnaire was a useful complementary method, since this identified additional usability problems that were not identified by the observation method. However, the results showed that the quantitative data from the post-test questionnaire were inaccurate and ineffective in terms of identifying usability problems on such websites.

  14. Divergence-Conforming Discontinuous Galerkin Methods and $C^0$ Interior Penalty Methods

    KAUST Repository

    Kanschat, Guido

    2014-01-01

    © 2014 Society for Industrial and Applied Mathematics. In this paper, we show that recently developed divergence-conforming methods for the Stokes problem have discrete stream functions. These stream functions in turn solve a continuous interior penalty problem for biharmonic equations. The equivalence is established for the most common methods in two dimensions based on interior penalty terms. Then, extensions of the concept to discontinuous Galerkin methods defined through lifting operators, for different weak formulations of the Stokes problem, and to three dimensions are discussed. Application of the equivalence result yields an optimal error estimate for the Stokes velocity without involving the pressure. Conversely, combined with a recent multigrid method for Stokes flow, we obtain a simple and uniform preconditioner for harmonic problems with simply supported and clamped boundary.

  15. Implantable central venous chemoport: comparison of results according to approach routes and methods

    International Nuclear Information System (INIS)

    Shin, Byung Suck; Ahn, Moon Sang

    2003-01-01

    To evaluate the results and complications of implantable port placement according to approach routes and methods. Between April 2001 and October 2002, a total of 103 implantable chemoports were placed in 95 patients for chemotherapy, using the preconnected type (n=39) and the attachable type (n=64). Puncture sites were the left subclavian vein (n=35), right subclavian vein (n=5), left internal jugular vein (n=9) and right internal jugular vein (n=54). We evaluated the duration of catheterization and the complications according to approach routes and methods. Chemoports were placed successfully in all cases. Duration of catheterization ranged from 8 to 554 days (mean 159, total 17,872 catheter days). Procedure-related complications were transient pulmonary air embolism (n=1), small hematoma (n=1) and malposition when using the preconnected type (n=2). Late complications were catheter migration (n=5), catheter malfunction (n=3), occlusion (n=1) and infection (n=11). Among them, 15 chemoports were removed (14.5%). Catheter migration occurred via the subclavian vein in all cases (13%, p=.008). Infection developed in 10.7% of patients (0.61 per 1000 catheter days). There was no catheter-related central vein thrombosis. Implantation of a chemoport is a safe procedure. Choosing the right internal jugular vein rather than the subclavian vein as the puncture site results in fewer complications, and the attachable type of chemoport is more convenient than the preconnected type. Adequate care of the chemoport is essential for long patency.

  16. Direct measurement of tritium in urine by liquid scintillation method

    International Nuclear Information System (INIS)

    Zhang Caihong; Wen Qinghua; Chen Kefei; Li Huaixin

    1999-01-01

    The author introduces a method for the direct measurement of tritium concentration in urine using liquid scintillation counting. The effects of sampling containers, storage conditions and storage time are studied, and the results of two methods, direct measurement and oxidation distillation, are compared. The results show that the direct measurement method is an economical and simple method that can meet the needs of urine tritium determination for NPP workers, with no significant difference from the data obtained by the oxidation distillation method.

  17. European external quality control study on the competence of laboratories to recognize rare sequence variants resulting in unusual genotyping results.

    Science.gov (United States)

    Márki-Zay, János; Klein, Christoph L; Gancberg, David; Schimmel, Heinz G; Dux, László

    2009-04-01

    Depending on the method used, rare sequence variants adjacent to the single nucleotide polymorphism (SNP) of interest may cause unusual or erroneous genotyping results. Because such rare variants are known for many genes commonly tested in diagnostic laboratories, we organized a proficiency study to assess their influence on the accuracy of reported laboratory results. Four external quality control materials were processed and sent to 283 laboratories through 3 EQA organizers for analysis of the prothrombin 20210G>A mutation. Two of these quality control materials contained sequence variants introduced by site-directed mutagenesis. One hundred eighty-nine laboratories participated in the study. When samples gave a usual result with the method applied, the error rate was 5.1%. Detailed analysis showed that more than 70% of the failures were reported from only 9 laboratories. Allele-specific amplification-based PCR had a much higher error rate than other methods (18.3% vs 2.9%). The variants 20209C>T and [20175T>G; 20179_20180delAC] resulted in unusual genotyping results in 67 and 85 laboratories, respectively. Eighty-three (54.6%) of these unusual results were not recognized, 32 (21.1%) were attributed to technical issues, and only 37 (24.3%) were recognized as another sequence variant. Our findings revealed that some of the participating laboratories were not able to recognize and correctly interpret unusual genotyping results caused by rare SNPs. Our study indicates that the majority of the failures could be avoided by improved training and careful selection and validation of the methods applied.

  18. The information seeking and procurement needs of attendees at an industrial trade show

    Directory of Open Access Journals (Sweden)

    N. C. Bresler

    2009-12-01

    Full Text Available Purpose: The purpose of this article is to describe what attracts visitors to an industrial trade show, and to profile them. This will enable the show organisers to attract the right mix of exhibitors, fulfil their dual role of satisfying the needs of both attendees and exhibitors, and improve the role of exhibitions in the latter's marketing mix. Problem investigated: The research seeks to elicit the attendance objectives of participants in order to ascertain their information seeking and procurement needs. This can be used to improve the marketing communication of both the organisers and exhibitors. Research methodology: The research design is a multi-method, descriptive study. A non-probability, judgemental sample was drawn; 1020 interviews were conducted at each Electra Mining Africa expo in 2004 and 2006. Both open-ended and fixed-response questions were posed and, due to the similarity of responses, 300 per show were analysed. The researcher fulfilled a participant observer role to enhance the validity and reliability of the findings. Findings/implications: The prime reason for visitation was to see what is new, to discover, and to gather information, and attendees were not disappointed in that. The trade shows attracted an informed, niche audience. Exhibitors gained access to key decision makers with buying influence. Attendees represented all roles in the buying process and intended to buy some capital items exhibited within the following year. Business contacts were made. The attraction and contact efficiency of the exhibitions were high and may result in conversion efficiency. In terms of market and geographical coverage it is a vertical international show. Originality: This paper contributes to the limited research conducted on trade shows, especially in South Africa. It is unique in that it describes the effectiveness of an industrial trade show from a demand perspective in order to improve the facilitating role of exhibition organisers. It ...

  19. Summary of EPA's risk assessment results from the analysis of alternative methods of low-level waste disposal

    International Nuclear Information System (INIS)

    Bandrowski, M.S.; Hung, C.Y.; Meyer, G.L.; Rogers, V.C.

    1987-01-01

    Evaluation of the potential health risk and individual exposure from a broad number of disposal alternatives is an important part of EPA's program to develop generally applicable environmental standards for the land disposal of low-level radioactive wastes (LLW). The Agency has completed an analysis of the potential population health risks and maximum individual exposures from ten disposal methods under three different hydrogeological and climatic settings. This paper briefly describes the general input and analysis procedures used in the risk assessment for LLW disposal and presents their preliminary results. Some important lessons learned from simulating LLW disposal under a large variety of methods and conditions are identified

  20. Nodal spectrum method for solving neutron diffusion equation

    International Nuclear Information System (INIS)

    Sanchez, D.; Garcia, C. R.; Barros, R. C. de; Milian, D.E.

    1999-01-01

    Presented here is a new numerical nodal method for solving the static multidimensional neutron diffusion equation in rectangular geometry. Our method is based on a spectral analysis of the nodal diffusion equations. These equations are obtained by integrating the diffusion equation in the X and Y directions and then considering flat approximations for the current. These flat approximations are the only approximations considered in this method; as a result, the numerical solutions are completely free from truncation errors. We show numerical results to illustrate the method's accuracy for coarse mesh calculations.

  1. Finding protein sites using machine learning methods

    Directory of Open Access Journals (Sweden)

    Jaime Leonardo Bobadilla Molina

    2003-07-01

    Full Text Available The increasing number of protein three-dimensional (3D) structures determined by X-ray and NMR technologies, as well as structures predicted by computational methods, results in the need for automated methods to provide initial annotations. We have developed a new method for recognizing sites in three-dimensional protein structures. Our method is based on a previously reported algorithm for creating descriptions of protein microenvironments using physical and chemical properties at multiple levels of detail. The recognition method takes three inputs: 1. A set of control sites that share some structural or functional role. 2. A set of control nonsites that lack this role. 3. A single query site. A support vector machine classifier is built using feature vectors in which each component represents a property in a given volume. Validation against an independent test set shows that this recognition approach has high sensitivity and specificity. We also describe the results of scanning four calcium-binding proteins (with the calcium removed) using a three-dimensional grid of probe points at 1.25 angstrom spacing. The system finds the sites in the proteins, giving points at or near the binding sites. Our results show that property-based descriptions along with support vector machines can be used for recognizing protein sites in unannotated structures.
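
    As an illustration of the classifier component only (not the microenvironment descriptor itself), the sketch below trains a support vector machine on feature vectors standing in for property-per-volume descriptions of control sites and control nonsites, then labels a query point. The random synthetic features, the 120-dimensional size and the RBF kernel are assumptions; the published method builds its feature vectors from physicochemical properties accumulated at multiple radii around each point.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical inputs: each row is a microenvironment descriptor
rng = np.random.default_rng(0)
control_sites = rng.normal(1.0, 1.0, size=(50, 120))      # e.g. known calcium sites
control_nonsites = rng.normal(0.0, 1.0, size=(200, 120))  # random background points
query_site = rng.normal(1.0, 1.0, size=(1, 120))

X = np.vstack([control_sites, control_nonsites])
y = np.hstack([np.ones(len(control_sites)), np.zeros(len(control_nonsites))])

# Standardize features, then fit an RBF-kernel SVM with class balancing
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", class_weight="balanced"))
clf.fit(X, y)
print("site" if clf.predict(query_site)[0] == 1 else "nonsite")
```

    Scanning a structure then amounts to evaluating this classifier on the descriptor of every probe point of a 3D grid and reporting the points predicted as sites.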

  2. Results from 2+1 flavours of SLiNC fermions

    International Nuclear Information System (INIS)

    Bietenholz, W.; Cundy, N.

    2009-10-01

    QCD results are presented for a 2+1 flavour fermion clover action (which we call the SLiNC action). A method of tuning the quark masses to their physical values is discussed. In this method the singlet quark mass is kept fixed, which solves the problem of different renormalisations (for singlet and non-singlet quark masses) occurring for non-chirally invariant lattice fermions. This procedure enables a wide range of quark masses to be probed, including the case with a heavy up-down quark mass and a light strange quark mass. Preliminary results show the correct splittings for the baryon (octet and) decuplet spectrum. (orig.)

  3. Numerical methods for modeling photonic-crystal VCSELs

    DEFF Research Database (Denmark)

    Dems, Maciej; Chung, Il-Sug; Nyakas, Peter

    2010-01-01

    We show a comparison of four different numerical methods for simulating Photonic-Crystal (PC) VCSELs. We present the theoretical basis behind each method and analyze the differences by studying a benchmark VCSEL structure, where the PC structure penetrates all VCSEL layers and the entire top-mirror DBR ... to the effective index method. The simulation results elucidate the strengths and weaknesses of the analyzed methods and outline the limits of applicability of the different models...

  4. Internal scanning method as unique imaging method of optical vortex scanning microscope

    Science.gov (United States)

    Popiołek-Masajada, Agnieszka; Masajada, Jan; Szatkowski, Mateusz

    2018-06-01

    The internal scanning method is specific to the optical vortex microscope. It allows the vortex point to be moved inside the focused vortex beam with nanometer resolution while the whole beam stays in place. Thus the sample illuminated by the focused vortex beam can be scanned just by the vortex point. We show that this method enables high-resolution imaging. The paper presents the preliminary experimental results obtained with the first basic image recovery procedure. A prospect of developing more powerful tools for topography recovery with the optical vortex scanning microscope is briefly discussed.

  5. Methods of RVD object pose estimation and experiments

    Science.gov (United States)

    Shang, Yang; He, Yan; Wang, Weihua; Yu, Qifeng

    2007-11-01

    Methods of measuring an RVD (rendezvous and docking) cooperative object's pose from monocular and binocular images, respectively, are presented. The methods solve for the initial values first and then optimize the object pose parameters by bundle adjustment. In the disturbance-rejecting binocular method, chosen measurement system parameters (one camera's exterior parameters) are modified simultaneously. The methods need three or more cooperative target points to measure the object's pose accurately. Experimental data show that the methods converge quickly and stably, provide accurate results and do not need accurate initial values. Even when the chosen measurement system parameters are subjected to some amount of disturbance, the binocular method manages to provide fairly accurate results.

  6. Lesion insertion in the projection domain: Methods and initial results.

    Science.gov (United States)

    Chen, Baiyu; Leng, Shuai; Yu, Lifeng; Yu, Zhicong; Ma, Chi; McCollough, Cynthia

    2015-12-01

    phantom in terms of Hounsfield unit and high-contrast resolution. For the validation of the lesion realism, lesions of various types were successfully inserted, including well circumscribed and invasive lesions, homogeneous and heterogeneous lesions, high-contrast and low-contrast lesions, isolated and vessel-attached lesions, and small and large lesions. The two experienced radiologists who reviewed the original and inserted lesions could not identify the lesions that were inserted. The same lesion, when inserted into the projection domain and reconstructed with different parameters, demonstrated a parameter-dependent appearance. A framework has been developed for projection-domain insertion of lesions into commercial CT images, which can be potentially expanded to all geometries of CT scanners. Compared to conventional image-domain methods, the authors' method reflected the impact of scan and reconstruction parameters on lesion appearance. Compared to prior projection-domain methods, the authors' method has the potential to achieve higher anatomical complexity by employing clinical patient projections and real patient lesions.

  7. New nonlinear methods for linear transport calculations

    International Nuclear Information System (INIS)

    Adams, M.L.

    1993-01-01

    We present a new family of methods for the numerical solution of the linear transport equation. With these methods an iteration consists of an 'S_N sweep' followed by an 'S_2-like' calculation. We show, by analysis as well as numerical results, that iterative convergence is always rapid. We show that this rapid convergence does not depend on a consistent discretization of the S_2-like equations - they can be discretized independently from the S_N equations. We show further that independent discretizations can offer significant advantages over consistent ones. In particular, we find that in a wide range of problems, an accurate discretization of the S_2-like equation can be combined with a crude discretization of the S_N equations to produce an accurate S_N answer. We demonstrate this by analysis as well as numerical results. (orig.)

  8. Measuring performance at trade shows

    DEFF Research Database (Denmark)

    Hansen, Kåre

    2004-01-01

    Trade shows are an increasingly important marketing activity for many companies, but current measures of trade show performance do not adequately capture dimensions important to exhibitors. Based on the marketing literature's outcome- and behavior-based control system taxonomy, a model is built that captures an outcome-based sales dimension and four behavior-based dimensions (i.e. information-gathering, relationship-building, image-building, and motivation activities). A 16-item instrument is developed for assessing exhibitors' perceptions of their trade show performance. The paper presents evidence...

  9. Improved meta-analytic methods show no effect of chromium supplements on fasting glucose.

    Science.gov (United States)

    Bailey, Christopher H

    2014-01-01

    The trace mineral chromium has been extensively researched over the years in its role in glucose metabolism. Dietary supplement companies have attempted to make claims that chromium may be able to treat or prevent diabetes. Previous meta-analyses/systematic reviews have indicated that chromium supplementation results in a significant lowering of fasting glucose in diabetics but not in nondiabetics. A meta-analysis was conducted using an alternative measure of effect size, d_ppc2, in order to account for changes in the control group as well as the chromium group. The literature search included MEDLINE, the Cochrane Controlled Trials Register, and previously published article reviews, systematic reviews, and meta-analyses. Included studies were randomized, placebo-controlled trials in the English language with subjects that were nonpregnant adults, both with and without diabetes. Sixteen studies with 809 participants (440 diabetics and 369 nondiabetics) were included in the analysis. Screening for publication bias indicated symmetry of the data. Tests of heterogeneity indicated the use of a fixed-effect model (I² = 0%). The analysis indicated that there was no significant effect of chromium supplementation in diabetics or nondiabetics, with a weighted average effect size of 0.02 (SE = 0.07), p = 0.787, 95% CI = -0.12 to 0.16. Chromium supplementation appears to provide no benefits to populations where chromium deficiency is unlikely.
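
    For readers unfamiliar with the pre-post-control effect size used here, the sketch below computes d_ppc2 as it is commonly defined (e.g. following Morris, 2008): the difference in pre-to-post change between the chromium and placebo arms, divided by the pooled pre-test standard deviation, with a small-sample bias correction. The example numbers are hypothetical, not data from the analysed trials.

```python
import numpy as np

def d_ppc2(m_pre_t, m_post_t, sd_pre_t, n_t,
           m_pre_c, m_post_c, sd_pre_c, n_c):
    """Pre-post-control effect size d_ppc2: bias-corrected difference in
    change scores scaled by the pooled pre-test standard deviation."""
    df = n_t + n_c - 2
    sd_pre_pooled = np.sqrt(((n_t - 1) * sd_pre_t**2 + (n_c - 1) * sd_pre_c**2) / df)
    c_p = 1.0 - 3.0 / (4.0 * df - 1.0)          # small-sample bias correction
    return c_p * ((m_post_t - m_pre_t) - (m_post_c - m_pre_c)) / sd_pre_pooled

# Hypothetical trial: fasting glucose (mmol/L) in chromium vs placebo arms
print(d_ppc2(7.8, 7.5, 1.2, 30, 7.9, 7.7, 1.1, 28))
```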

  10. A Hair & a Fungus: Showing Kids the Size of a Microbe

    Science.gov (United States)

    Richter, Dana L.

    2013-01-01

    A simple method is presented to show kids the size of a microbe--a fungus hypha--compared to a human hair. Common household items are used to make sterile medium on a stove or hotplate, which is dispensed in the cells of a weekly plastic pill box. Mold fungi can be easily and safely grown on the medium from the classroom environment. A microscope…

  11. The energy show

    International Nuclear Information System (INIS)

    1988-01-01

    The Energy Show is a new look at the problems of world energy, where our supplies come from, now and in the future. The programme looks at how we need energy to maintain our standards of living. Energy supply is shown as the complicated set of problems it is - that Fossil Fuels are both raw materials and energy sources, that some 'alternatives' so readily suggested as practical options are in reality a long way from being effective. (author)

  12. A method of emotion contagion for crowd evacuation

    Science.gov (United States)

    Cao, Mengxiao; Zhang, Guijuan; Wang, Mengsi; Lu, Dianjie; Liu, Hong

    2017-10-01

    Current evacuation models do not consider the impact of emotion and personality on crowd evacuation. Thus, there is a large difference between evacuation results and the real-life behavior of the crowd. In order to generate more realistic crowd evacuation results, we present a method of emotion contagion for crowd evacuation. First, we combine the OCEAN (Openness, Extroversion, Agreeableness, Neuroticism, Conscientiousness) model and the SIS (Susceptible Infected Susceptible) model to construct the P-SIS (Personalized SIS) emotional contagion model. The P-SIS model effectively captures the diversity of individuals in the crowd. Second, we couple the P-SIS model with the social force model to simulate emotional contagion during crowd evacuation. Finally, a photo-realistic rendering method is employed to obtain the animation of crowd evacuation. Experimental results show that our method can simulate crowd evacuation realistically and has guiding significance for crowd evacuation in emergency circumstances.
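
    The abstract gives no equations for P-SIS, so the following is a toy, illustrative coupling of an SIS-style contagion with OCEAN traits rather than the authors' model: each agent's infection probability grows with its neuroticism score and its recovery probability with its conscientiousness score, and contagion acts only between neighbours within a fixed radius. The trait-to-rate mapping, the radius, and the rate constants are all assumptions; in the paper this emotional state would additionally feed into the social force model's desired velocities.

```python
import numpy as np

def sis_emotion_step(panic, positions, traits, dt=0.1, radius=2.0,
                     beta0=0.6, gamma0=0.3):
    """One SIS-style contagion step. `panic` is a 0/1 state array,
    `positions` an (N, 2) array, `traits` an (N, 5) OCEAN matrix in [0, 1]
    ordered as in the abstract (index 3 = neuroticism, 4 = conscientiousness)."""
    n = len(panic)
    new_panic = panic.copy()
    dists = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=2)
    for i in range(n):
        neighbours = (dists[i] < radius) & (np.arange(n) != i)
        infected_nb = np.count_nonzero(panic[neighbours])
        if panic[i] == 0 and infected_nb > 0:
            beta = beta0 * (0.5 + traits[i, 3])           # susceptibility
            p_inf = 1.0 - (1.0 - beta * dt) ** infected_nb
            if np.random.rand() < p_inf:
                new_panic[i] = 1
        elif panic[i] == 1:
            gamma = gamma0 * (0.5 + traits[i, 4])         # recovery
            if np.random.rand() < gamma * dt:
                new_panic[i] = 0
    return new_panic

# Hypothetical crowd of 50 agents with one initially panicked individual
positions = np.random.rand(50, 2) * 10.0
traits = np.random.rand(50, 5)
panic = np.zeros(50, dtype=int); panic[0] = 1
for _ in range(100):
    panic = sis_emotion_step(panic, positions, traits)
print(panic.sum(), "agents panicked after 100 steps")
```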

  13. Air sampling methods to evaluate microbial contamination in operating theatres: results of a comparative study in an orthopaedics department.

    Science.gov (United States)

    Napoli, C; Tafuri, S; Montenegro, L; Cassano, M; Notarnicola, A; Lattarulo, S; Montagna, M T; Moretti, B

    2012-02-01

    To evaluate the level of microbial contamination of air in operating theatres using active [i.e. surface air system (SAS)] and passive [i.e. index of microbial air contamination (IMA) and nitrocellulose membranes positioned near the wound] sampling systems. Sampling was performed between January 2010 and January 2011 in the operating theatre of the orthopaedics department in a university hospital in Southern Italy. During surgery, the mean bacterial loads recorded were 2232.9 colony-forming units (cfu)/m²/h with the IMA method, 123.2 cfu/m³ with the SAS method and 2768.2 cfu/m²/h with the nitrocellulose membranes. Correlation was found between the results of the three methods. Staphylococcus aureus was detected in 12 of 60 operations (20%) with the membranes, five (8.3%) operations with the SAS method, and three operations (5%) with the IMA method. Use of nitrocellulose membranes placed near a wound is a valid method for measuring the microbial contamination of air. This method was more sensitive than the IMA method and was not subject to any calibration bias, unlike active air monitoring systems. Copyright © 2011 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.

  14. Transgene silencing of the Hutchinson-Gilford progeria syndrome mutation results in a reversible bone phenotype, whereas resveratrol treatment does not show overall beneficial effects

    DEFF Research Database (Denmark)

    Strandgren, Charlotte; Nasser, Hasina Abdul; McKenna, Tomás

    2015-01-01

    model to study the possibility of recovering from HGPS bone disease upon silencing of the HGPS mutation, and the potential benefits from treatment with resveratrol. We show that complete silencing of the transgenic expression of progerin normalized bone morphology and mineralization already after 7 weeks. The improvements included lower frequencies of rib fractures and callus formation, an increased number of osteocytes in remodeled bone, and normalized dentinogenesis. The beneficial effects from resveratrol treatment were less significant and to a large extent similar to mice treated with sucrose alone. However, the reversal of the dental phenotype of overgrown and laterally displaced lower incisors in HGPS mice could be attributed to resveratrol. Our results indicate that the HGPS bone defects were reversible upon suppressed transgenic expression and suggest that treatments targeting aberrant...

  15. Internal Error Propagation in Explicit Runge--Kutta Methods

    KAUST Repository

    Ketcheson, David I.

    2014-09-11

    In practical computation with Runge--Kutta methods, the stage equations are not satisfied exactly, due to roundoff errors, algebraic solver errors, and so forth. We show by example that propagation of such errors within a single step can have catastrophic effects for otherwise practical and well-known methods. We perform a general analysis of internal error propagation, emphasizing that it depends significantly on how the method is implemented. We show that for a fixed method, essentially any set of internal stability polynomials can be obtained by modifying the implementation details. We provide bounds on the internal error amplification constants for some classes of methods with many stages, including strong stability preserving methods and extrapolation methods. These results are used to prove error bounds in the presence of roundoff or other internal errors.
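
For context, here is a hedged sketch, in assumed but standard notation, of the linear-model setting in which internal stability functions arise: for the scalar test problem $y'=\lambda y$ with $z=h\lambda$ and perturbations $r_j$ committed in the stages,

$$
\mathbf{y} = u_n\,\mathbf{e} + zA\,\mathbf{y} + \mathbf{r},\qquad
u_{n+1} = u_n + z\,\mathbf{b}^{T}\mathbf{y}
        = P(z)\,u_n + \sum_{j=1}^{s} Q_j(z)\,r_j,
$$

where $P(z)=1+z\,\mathbf{b}^{T}(I-zA)^{-1}\mathbf{e}$ is the usual stability polynomial, $\mathbf{e}_j$ is the $j$-th unit vector, and $Q_j(z)=z\,\mathbf{b}^{T}(I-zA)^{-1}\mathbf{e}_j$ are the internal stability functions that govern how a stage perturbation is amplified over one step; how large the $Q_j$ can become is precisely what depends on the implementation details discussed in the abstract.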

  16. Saccharomyces cerevisiae show low levels of traversal across human endothelial barrier in vitro [version 2; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Roberto Pérez-Torrado

    2017-09-01

    Full Text Available Background:  Saccharomyces cerevisiae is generally considered safe, and is involved in the production of many types of foods and dietary supplements. However, some isolates, which are genetically related to strains used in brewing and baking, have shown virulent traits, being able to produce infections in humans, mainly in immunodeficient patients. This can lead to systemic infections in humans. Methods: In this work, we studied S. cerevisiae isolates in an in vitro human endothelial barrier model, comparing their behaviour with that of several strains of the related pathogens Candida glabrata and Candida albicans. Results: The results showed that this food related yeast is able to cross the endothelial barrier in vitro. However, in contrast to C. glabrata and C. albicans, S. cerevisiae showed very low levels of traversal. Conclusions: We conclude that using an in vitro human endothelial barrier model with S. cerevisiae can be useful to evaluate the safety of S. cerevisiae strains isolated from foods.

  17. Method paper--distance and travel time to casualty clinics in Norway based on crowdsourced postcode coordinates: a comparison with other methods.

    Science.gov (United States)

    Raknes, Guttorm; Hunskaar, Steinar

    2014-01-01

    We describe a method that uses crowdsourced postcode coordinates and Google maps to estimate average distance and travel time for inhabitants of a municipality to a casualty clinic in Norway. The new method was compared with methods based on population centroids, median distance and town hall location, and we used it to examine how distance affects the utilisation of out-of-hours primary care services. At short distances our method showed good correlation with mean travel time and distance. The utilisation of out-of-hours services correlated with postcode based distances similar to previous research. The results show that our method is a reliable and useful tool for estimating average travel distances and travel times.
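
As a simplified sketch of the averaging step, the snippet below computes a population-weighted mean crow-flies distance from postcode coordinates to a clinic. The study itself used Google Maps road distances and travel times; the haversine shortcut, the coordinates and the population figures here are illustrative assumptions only.

```python
# Population-weighted average straight-line distance from postcode centroids
# to a casualty clinic (illustrative data, not the study's Google Maps routing).
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

clinic = (60.393, 5.324)               # hypothetical clinic location
postcodes = [                          # (lat, lon, inhabitants), made up
    (60.39, 5.32, 1200),
    (60.42, 5.35, 800),
    (60.35, 5.25, 450),
]
total_pop = sum(p for _, _, p in postcodes)
avg_km = sum(haversine_km(lat, lon, *clinic) * p for lat, lon, p in postcodes) / total_pop
print(f"population-weighted mean distance: {avg_km:.1f} km")
```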

  18. Method paper--distance and travel time to casualty clinics in Norway based on crowdsourced postcode coordinates: a comparison with other methods.

    Directory of Open Access Journals (Sweden)

    Guttorm Raknes

    Full Text Available We describe a method that uses crowdsourced postcode coordinates and Google maps to estimate average distance and travel time for inhabitants of a municipality to a casualty clinic in Norway. The new method was compared with methods based on population centroids, median distance and town hall location, and we used it to examine how distance affects the utilisation of out-of-hours primary care services. At short distances our method showed good correlation with mean travel time and distance. The utilisation of out-of-hours services correlated with postcode based distances similar to previous research. The results show that our method is a reliable and useful tool for estimating average travel distances and travel times.

  19. Simulation methods for multiperiodic and aperiodic nanostructured dielectric waveguides

    DEFF Research Database (Denmark)

    Paulsen, Moritz; Neustock, Lars Thorben; Jahns, Sabrina

    2017-01-01

    on Rudin–Shapiro, Fibonacci, and Thue–Morse binary sequences. The near-field and far-field properties are computed employing the finite-element method (FEM), the finite-difference time-domain (FDTD) method as well as a rigorous coupled wave algorithm (RCWA). The results show that all three methods...

  20. [Preoperative psychoprophylaxis in childhood. Results of a hospital program].

    Science.gov (United States)

    Admetlla i Admetlla, I A; Jover i Fulgueira, S

    1988-05-01

    Results of a surgical psychoprophylaxis program, theoretically and technically framed within psychoanalytic theory, are presented, together with a description of the method used and the criteria by which the authors determined whether or not a child was ready for surgery. Results obtained with 134 children and a description of those who showed post-surgical disturbances are presented. The percentage of disorders is analysed by age group, showing that the highest risk is among children up to five years of age, which coincides with findings put forth by other authors. Finally, some conclusions regarding the prevention of psychologic iatrogenic disorders in pediatric surgery are drawn.

  1. Optimization on Measurement Method for Neutron Moisture Meter

    International Nuclear Information System (INIS)

    Gong Yalin; Wu Zhiqiang; Li Yanfeng; Wang Wei; Song Qingfeng; Liu Hui; Wei Xiaoyun; Zhao Zhonghua

    2010-01-01

    When the water in the measured material is distributed nonuniformly, field measurements with a neutron moisture meter may be in error, so the measurement errors of the moisture meter associated with water nonuniformity in the material were simulated by the Monte Carlo method. A new measurement method for the moisture meter, named 'transmission plus scatter', was put forward. The experimental results show that the new measurement method can reduce the error even if the water in the material is distributed nonuniformly. (authors)

  2. Rainfall assimilation in RAMS by means of the Kuo parameterisation inversion: method and preliminary results

    Science.gov (United States)

    Orlandi, A.; Ortolani, A.; Meneguzzo, F.; Levizzani, V.; Torricella, F.; Turk, F. J.

    2004-03-01

    In order to improve high-resolution forecasts, a specific method for assimilating rainfall rates into the Regional Atmospheric Modelling System model has been developed. It is based on the inversion of the Kuo convective parameterisation scheme. A nudging technique is applied to 'gently' increase with time the weight of the estimated precipitation in the assimilation process. A rough but manageable technique is explained for estimating the partition between convective and stratiform precipitation without requiring any ancillary measurement. The method is general purpose, but it is tuned for the assimilation of geostationary satellite rainfall estimates. Preliminary results are presented and discussed, both through fully simulated experiments and through experiments assimilating real satellite-based precipitation observations. For every case study, rainfall data are computed with a rapid-update satellite precipitation estimation algorithm based on IR and MW satellite observations. This research was carried out in the framework of the EURAINSAT project (an EC research project co-funded by the Energy, Environment and Sustainable Development Programme within the topic 'Development of generic Earth observation technologies', Contract number EVG1-2000-00030).

  3. Modelling lung cancer due to radon and smoking in WISMUT miners: Preliminary results

    International Nuclear Information System (INIS)

    Bijwaard, H.; Dekkers, F.; Van Dillen, T.

    2011-01-01

    A mechanistic two-stage carcinogenesis model has been applied to model lung-cancer mortality in the largest uranium-miner cohort available. Models with and without smoking action both fit the data well. As smoking information is largely missing from the cohort data, a method has been devised to project this information from a case-control study onto the cohort. Model calculations using 256 projections show that the method works well. Preliminary results show that if an explicit smoking action is absent in the model, this is compensated by the values of the baseline parameters. This indicates that in earlier studies performed without smoking information, the results obtained for the radiation parameters are still valid. More importantly, the inclusion of smoking-related parameters shows that these mainly influence the later stages of lung-cancer development. (authors)

  4. Application of Nemerow Index Method and Integrated Water Quality Index Method in Water Quality Assessment of Zhangze Reservoir

    Science.gov (United States)

    Zhang, Qian; Feng, Minquan; Hao, Xiaoyan

    2018-03-01

    [Objective] Based on historical water quality data from the Zhangze Reservoir over the last five years, the water quality was assessed by the integrated water quality identification index method and the Nemerow pollution index method. The results of the different evaluation methods were analyzed and compared, and the characteristics of each method were identified. [Methods] The suitability of the water quality assessment methods was compared and analyzed on the basis of these results. [Results] The water quality tended to decrease over time, with 2016 being the year with the worst water quality; the sections with the worst water quality were the southern and northern sections. [Conclusion] The results produced by the traditional Nemerow index method fluctuated greatly across the water quality monitoring sections and therefore could not effectively reveal the trend of water quality at each section. The combination of qualitative and quantitative measures in the comprehensive pollution index identification method meant it could evaluate the degree of water pollution as well as determine whether the river water was black and odorous; however, its evaluation results indicated relatively low water pollution. The results from the improved Nemerow index evaluation were better, as the single indicators and evaluation results are in strong agreement; the method is therefore able to objectively reflect the water quality of each monitoring section and is more suitable for the water quality evaluation of the reservoir.
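
To make the index concrete, here is a minimal sketch of the standard Nemerow pollution index for a single monitoring section; the indicator values and standards below are invented for illustration and are not Zhangze Reservoir data.

```python
# Standard Nemerow pollution index for one monitoring section
# (generic formula; example values are assumptions, not reservoir data).
def nemerow_index(concentrations, standards):
    """P_i = C_i / S_i for each indicator; P = sqrt((P_avg^2 + P_max^2) / 2)."""
    ratios = [c / s for c, s in zip(concentrations, standards)]
    p_avg = sum(ratios) / len(ratios)
    p_max = max(ratios)
    return ((p_avg ** 2 + p_max ** 2) / 2) ** 0.5

# e.g. three indicators compared against hypothetical water quality limits
print(round(nemerow_index([18.0, 0.9, 0.25], [20.0, 1.0, 0.2]), 2))
```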

  5. Risk Aversion in Game Shows

    DEFF Research Database (Denmark)

    Andersen, Steffen; Harrison, Glenn W.; Lau, Morten I.

    2008-01-01

    We review the use of behavior from television game shows to infer risk attitudes. These shows provide evidence when contestants are making decisions over very large stakes, and in a replicated, structured way. Inferences are generally confounded by the subjective assessment of skill in some games......, and the dynamic nature of the task in most games. We consider the game shows Card Sharks, Jeopardy!, Lingo, and finally Deal Or No Deal. We provide a detailed case study of the analyses of Deal Or No Deal, since it is suitable for inference about risk attitudes and has attracted considerable attention....

  6. Monocytes of patients with familial hypercholesterolemia show alterations in cholesterol metabolism

    Directory of Open Access Journals (Sweden)

    Soufi Muhidien

    2008-11-01

    Full Text Available Abstract Background: Elevated plasma cholesterol promotes the formation of atherosclerotic lesions in which monocyte-derived lipid-laden macrophages are frequently found. To analyze whether circulating monocytes already show increased lipid content and differences in lipoprotein metabolism, we compared monocytes from patients with Familial Hypercholesterolemia (FH) with those from healthy individuals. Methods: Cholesterol and oxidized cholesterol metabolite serum levels of FH and of healthy, gender/age matched control subjects were measured by combined gas chromatography – mass spectroscopy. Monocytes from patients with FH and from healthy subjects were isolated by antibody-assisted density centrifugation. Gene expression profiles of isolated monocytes were measured using Affymetrix HG-U 133 Plus 2.0 microarrays. We compared monocyte gene expression profiles from FH patients with healthy controls using a Welch T-test with correction for multiple testing. Results: Using microarray analysis we found in FH patients a significant up-regulation of 1,617 genes and a down-regulation of 701 genes compared to monocytes from healthy individuals. These include genes of proteins that are involved in the uptake, biosynthesis, disposition, and cellular efflux of cholesterol. In addition, plasma from FH patients contains elevated amounts of sterols and oxysterols. An increased uptake of oxidized as well as of native LDL by FH monocytes combined with a down-regulation of NPC1 and ABCA1 explains the lipid accumulation observed in these cells. Conclusion: Our data demonstrate that circulating FH monocytes show differences in cell physiology that may contribute to the early onset of atherosclerosis in this disease.

  7. Synthesis and characterization of TiO2 nanoparticles by the method Pechini

    International Nuclear Information System (INIS)

    Zoccal, Joao Victor Marques; Arouca, Fabio de Oliveira; Goncalves, Jose Antonio Silveira

    2009-01-01

    In recent years, scientific research has shown increasing interest in the field of nanotechnology, resulting in several techniques for the production of nanoparticles, such as methods of chemical synthesis. Among the various existing methods, the Pechini method has been used to obtain nanoparticles of titanium dioxide (TiO2). Thus, this work aims to synthesize and characterize TiO2 nanoparticles obtained by this method. The technique consists of the reaction between citric acid and titanium isopropoxide, which yields titanium citrate as the product. With the addition of ethylene glycol, polymerization occurs, resulting in a polymeric resin. At the end of the process, the resin is calcined to remove organic matter, creating TiO2 nanoparticles. The resulting powders were characterized by thermogravimetric analysis (TGA) and differential thermal analysis (DTA), X-ray diffraction, infrared absorption spectrophotometry, nitrogen/helium adsorption (BET method) and scanning electron microscopy. The results obtained with these characterization techniques showed that the Pechini method is promising for obtaining nanosized TiO2. (author)

  8. Sensitivity of Spaceborne and Ground Radar Comparison Results to Data Analysis Methods and Constraints

    Science.gov (United States)

    Morris, Kenneth R.; Schwaller, Mathew

    2011-01-01

    categorization applied to the data. In this paper, we will show results comparing the 3-D gridded analysis "black box" approach to the GPM prototype volume-matching approach, using matching TRMM PR and WSR-88D ground radar data. The effects on the results of applying data constraints and data categorizations to the volume-matched data will be shown, and explanations of the differences in terms of data and analysis algorithm characteristics will be presented. Implications of the differences for the determination of PR/DPR calibration differences and for the use of ground radar data to evaluate the PR and DPR attenuation correction algorithms will be discussed.

  9. A novel method of S-box design based on chaotic map and composition method

    International Nuclear Information System (INIS)

    Lambić, Dragan

    2014-01-01

    Highlights: • Novel chaotic S-box generation method is presented. • Presented S-box has better cryptographic properties than other examples of chaotic S-boxes. • The advantages of the proposed method are the low complexity and large key space. -- Abstract: An efficient algorithm for obtaining random bijective S-boxes based on chaotic maps and the composition method is presented. The proposed method is based on compositions of S-boxes from a fixed starting set. The sequence of the indices of the starting S-boxes used is obtained by using chaotic maps. The results of performance tests show that the S-box presented in this paper has good cryptographic properties. The advantages of the proposed method are the low complexity and the possibility of achieving a large key space.
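
The sketch below illustrates the general idea of composing permutations from a fixed starting set in an order driven by a chaotic map. The starting permutations, the choice of the logistic map and its parameters are assumptions for illustration; they are not the paper's specific construction.

```python
# Bijective 8-bit S-box built by composing fixed starting permutations in an
# order selected by a logistic map (illustrative construction, assumed details).
import random

def logistic_sequence(x0, r, n, k):
    """Return n indices in [0, k) from iterates of the logistic map x -> r*x*(1-x)."""
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        seq.append(int(x * k) % k)
    return seq

def compose(p, q):
    """Composition of two permutations given as lists: (p o q)[i] = p[q[i]]."""
    return [p[q[i]] for i in range(len(q))]

random.seed(1)
starting_set = [random.sample(range(256), 256) for _ in range(8)]  # fixed start S-boxes
sbox = list(range(256))                                            # identity permutation
for idx in logistic_sequence(x0=0.387, r=3.99, n=12, k=len(starting_set)):
    sbox = compose(starting_set[idx], sbox)

assert sorted(sbox) == list(range(256))   # composition of bijections stays bijective
```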

  10. Comparison of new immunofluorescence method for detection of soy protein in meat products with immunohistochemical, histochemical, and ELISA methods

    Directory of Open Access Journals (Sweden)

    Michaela Petrášová

    2014-01-01

    Full Text Available Soy proteins are commonly used in the food industry thanks to their technological properties. However, soy is, along with cow's milk, eggs, wheat, peanuts, tree nuts, fish, crustaceans, and molluscs, responsible for around 90% of food allergies, and is also one of the foodstuffs that can cause anaphylaxis. The aim of this work was to compare the immunofluorescence method for the detection of soy protein in meat products purchased from the retail market with other microscopic methods (immunohistochemical and histochemical), with the ELISA reference method and with the confirmatory results. Within the research, 127 meat products purchased in the retail network were examined using the immunofluorescence method for the detection of soy protein. The method was compared to Enzyme-Linked ImmunoSorbent Assay (ELISA), immunohistochemical, and histochemical methods. According to McNemar's test, non-compliance between the immunofluorescence method and the immunohistochemical method was low. In addition, a significant difference (P < 0.01) was found between the fluorescence method and ELISA. The immunofluorescence method was also compared with confirmatory results. According to McNemar's test, non-compliance between the immunofluorescence method and confirmatory results was low. The results showed the potential of this new method to detect soy protein content in meat products.

  11. [Aggressive behavior at music shows: analyzing print newspaper reports]

    Directory of Open Access Journals (Sweden)

    Carlos Eduardo Pimentel

    2011-08-01

    Full Text Available This paper reports a content analysis of newspaper articles dealing with aggressive behavior at music shows. This analysis permits an understanding of the characteristics of these situations, the aggressive behaviors at shows, and the way in which the print media treat the subject. Despite the newspapers' influence on public opinion formation, that is, the transmission of attitudes and behavior repertoires, this kind of analysis is rarely reported in the Brazilian social psychological literature. Some 31 newspaper articles were content analyzed with respect to aggressive behavior during music shows. Results indicate that aggressive behaviors occur at shows of the most varied musical styles. Results are discussed on the basis of social cognitive theory.

  12. New implementation method for essential boundary condition to extended element-free Galerkin method. Application to nonlinear problem

    International Nuclear Information System (INIS)

    Saitoh, Ayumu; Matsui, Nobuyuki; Itoh, Taku; Kamitani, Atsushi; Nakamura, Hiroaki

    2011-01-01

    A new method has been proposed for implementing essential boundary conditions to the Element-Free Galerkin Method (EFGM) without using the Lagrange multiplier. Furthermore, the performance of the proposed method has been investigated for a nonlinear Poisson problem. The results of computations show that, as interpolation functions become closer to delta functions, the accuracy of the solution is improved on the boundary. In addition, the accuracy of the proposed method is higher than that of the conventional EFGM. Therefore, it might be concluded that the proposed method is useful for solving the nonlinear Poisson problem. (author)

  13. Method, equipment and results of determination of element composition of the Venus rock by the Vega-2 space probe

    International Nuclear Information System (INIS)

    Surkov, Yu.A.; Moskaleva, L.P.; Shcheglov, O.P.

    1985-01-01

    The composition of Venus rock was determined by the X-ray radiometric method in the northeast part of Aphrodite Terra. The experiment was performed on the Vega-2 spacecraft. The composition of the Venus rock proved to be close to that of the anorthosite-norite-troctolite rocks widespread in the lunar highland crust. Descriptions of the method, the instrumentation and the results of determining the composition of rocks at the landing site of the Vega-2 spacecraft are given.

  14. Using video games for volcanic hazard education and communication: an assessment of the method and preliminary results

    Science.gov (United States)

    Mani, Lara; Cole, Paul D.; Stewart, Iain

    2016-07-01

    This paper presents the findings from a study aimed at understanding whether video games (or serious games) can be effective in enhancing volcanic hazard education and communication. Using the eastern Caribbean island of St. Vincent, we have developed a video game - St. Vincent's Volcano - for use in existing volcano education and outreach sessions. Its twin aims are to improve residents' knowledge of potential future eruptive hazards (ash fall, pyroclastic flows and lahars) and to integrate traditional methods of education in a more interactive manner. Here, we discuss the process of game development including concept design through to the final implementation on St. Vincent. Preliminary results obtained from the final implementation (through pre- and post-test knowledge quizzes) for both student and adult participants provide indications that a video game of this style may be effective in improving a learner's knowledge. Both groups of participants demonstrated a post-test increase in their knowledge quiz score of 9.3 % for adults and 8.3 % for students and, when plotted as learning gains (Hake, 1998), show similar overall improvements (0.11 for adults and 0.09 for students). These preliminary findings may provide a sound foundation for the increased integration of emerging technologies within traditional education sessions. This paper also shares some of the challenges and lessons learnt throughout the development and testing processes and provides recommendations for researchers looking to pursue a similar study.
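
For reference, the learning gain cited above follows Hake's (1998) normalized gain, which for pre- and post-test percentage scores is (standard definition, not restated in the record):

$$
\langle g\rangle=\frac{\%_{\text{post}}-\%_{\text{pre}}}{100-\%_{\text{pre}}},
$$

so the reported gains of 0.11 (adults) and 0.09 (students) express the raw improvements of 9.3 % and 8.3 % as fractions of the maximum possible improvement.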

  15. First characterization of the expiratory flow increase technique: method development and results analysis

    International Nuclear Information System (INIS)

    Maréchal, L; Barthod, C; Jeulin, J C

    2009-01-01

    This study provides an important contribution to the definition of the expiratory flow increase technique (EFIT). So far, no measuring means has been suited to assessing the manual EFIT performed on infants. The proposed method aims at objectively defining the EFIT based on the quantification of pertinent cognitive parameters used by physiotherapists when practicing. We designed and realized customized instrumented gloves endowed with pressure and displacement sensors, together with the associated electronics and software. This new system is specific to the manoeuvre and to the user, and is innocuous for the patient. Data were collected and analysed on infants with bronchiolitis managed by an expert physiotherapist. The analysis presented was carried out on a group of seven subjects (mean age: 6.1 months, SD: 1.1; mean chest circumference: 44.8 cm, SD: 1.9). The results are consistent with the physiotherapist's tactility. In spite of inevitable variability due to measurements on infants, repeatable quantitative data could be reported regarding the manoeuvre characteristics: the magnitudes of displacements do not exceed 10 mm on either hand; the movement of the thoracic hand is more vertical than the movement of the abdominal hand; the maximum applied pressure with the thoracic hand is about twice as high as with the abdominal hand; the thrust of the manual compression lasts (590 ± 62) ms. Inter-operator measurements are in progress in order to generalize these results.

  16. A new method of MR total spine imaging for showing the brace effect in scoliosis

    Energy Technology Data Exchange (ETDEWEB)

    Schmitz, A.; Kandyba, J.; Koenig, R.; Jaeger, U.E.; Gieseke, J.; Schmitt, O. [Univ. of Bonn (Germany)

    2001-07-01

    Bracing is a method of early, nonsurgical treatment for scoliosis, but a hypokyphotic effect on the thoracic spine is reported. We developed a magnetic resonance tomography (MR) procedure presenting an image of the whole spine in the coronal and sagittal planes (MR total spine imaging), and studied the brace effect, using this technique. We examined 26 female patients with idiopathic scoliosis treated with a Cheneau brace (mean age, 13.2 years; mean duration of brace treatment at the time of investigation, 1.5 years). The MR examinations were performed with the patient in the supine position with and without the brace in direct sequence. As measured on the coronal MR images, the thoracic curve was corrected, on average, from 29 deg to 22 deg (mean correction, 24%). There was a slight reduction in the sagittal Cobb angle measured between T4 and T12 (mean sagittal Cobb angle without brace, 14 deg ; with brace, 12 deg), which was still a significant change. MR total spine imaging could be a useful tool for studying the brace effect in scoliosis in two planes. Using this technique, we found reduced sagittal Cobb angles for the thoracic kyphosis with brace. Because there is no radiation exposure, the MR procedure has a potential use in the monitoring of brace treatment. (author)

  17. Pharmaceutical companies' policies on access to trial data, results, and methods: audit study.

    Science.gov (United States)

    Goldacre, Ben; Lane, Síle; Mahtani, Kamal R; Heneghan, Carl; Onakpoya, Igho; Bushfield, Ian; Smeeth, Liam

    2017-07-26

    Objectives  To identify the policies of major pharmaceutical companies on transparency of trials, to extract structured data detailing each company's commitments, and to assess concordance with ethical and professional guidance. Design  Structured audit. Setting  Pharmaceutical companies, worldwide. Participants  42 pharmaceutical companies. Main outcome measures  Companies' commitments on sharing summary results, clinical study reports (CSRs), individual patient data (IPD), and trial registration, for prospective and retrospective trials. Results  Policies were highly variable. Of 23 companies eligible from the top 25 companies by revenue, 21 (91%) committed to register all trials and 22 (96%) committed to share summary results; however, policies commonly lacked timelines for disclosure, and trials on unlicensed medicines and off-label uses were only included in six (26%). 17 companies (74%) committed to share the summary results of past trials. The median start date for this commitment was 2005. 22 companies (96%) had a policy on sharing CSRs, mostly on request: two committed to share only synopses and only two policies included unlicensed treatments. 22 companies (96%) had a policy to share IPD; 14 included phase IV trials (one included trials on unlicensed medicines and off-label uses). Policies in the exploratory group of smaller companies made fewer transparency commitments. Two companies fell short of industry body commitments on registration, three on summary results. Examples of contradictory and ambiguous language were documented and summarised by theme. 23/42 companies (55%) responded to feedback; 7/1806 scored policy elements were revised in light of feedback from companies (0.4%). Several companies committed to changing policy; some made changes immediately. Conclusions  The commitments made by companies to transparency of trials were highly variable. Other than journal submission for all trials within 12 months, all elements of best practice

  18. Application of X-ray methods to assess grain vulnerability to damage resulting from multiple loads

    International Nuclear Information System (INIS)

    Zlobecki, A.

    1995-01-01

    The aim of the work is to describe wheat grain behavior under multiple dynamic loads with various multipliers. The experiments were conducted on Almari variety grain. Grain moisture was 11, 16, 21 and 28%. A special ram stand was used for loading the grain. The experiments were carried out using an 8 g weight, equivalent to an impact energy of 4.6 × 10^-3 J. The X-ray method was used to assess damage. The exposure time was 8 minutes with an X-ray lamp voltage of 15 kV. The position index was used as the measure of the damage. The investigation results were elaborated statistically. Based on the results of analysis of variance, regression analysis, the d-Duncan test and the Kolmogorov-Smirnov test, the damage number was shown to depend greatly on the number of impacts for the whole range of moisture of the grain loaded. (author)

  19. Another method of dead time correction

    International Nuclear Information System (INIS)

    Sabol, J.

    1988-01-01

    A new method of the correction of counting losses caused by a non-extended dead time of pulse detection systems is presented. The approach is based on the distribution of time intervals between pulses at the output of the system. The method was verified both experimentally and by using the Monte Carlo simulations. The results show that the suggested technique is more reliable and accurate than other methods based on a separate measurement of the dead time. (author) 5 refs
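
For orientation only, the snippet below Monte Carlo-checks the classical non-extending (non-paralyzable) dead-time correction n = m / (1 - m*tau). It is a generic illustration of this kind of simulation, not the interval-distribution method the record proposes, and the rates and dead time are invented.

```python
# Monte Carlo check of the classical non-extending dead-time correction
# (generic sketch; not the interval-distribution method described above).
import random

def simulate_measured_rate(true_rate, tau, t_total=10.0):
    """Simulate a Poisson pulse train and count pulses surviving a non-extending dead time."""
    t, last_accepted, counted = 0.0, -float('inf'), 0
    while t < t_total:
        t += random.expovariate(true_rate)      # time to next true event
        if t - last_accepted >= tau:            # accepted only if outside dead time
            counted += 1
            last_accepted = t
    return counted / t_total

true_rate, tau = 2.0e4, 5.0e-6                  # 20 kcps, 5 microsecond dead time (assumed)
m = simulate_measured_rate(true_rate, tau)
n_corrected = m / (1.0 - m * tau)               # classical correction formula
print(f"measured {m:.0f} cps, corrected {n_corrected:.0f} cps, true {true_rate:.0f} cps")
```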

  20. Application of Homotopy Analysis Method to Solve Relativistic Toda Lattice System

    International Nuclear Information System (INIS)

    Wang Qi

    2010-01-01

    In this letter, the homotopy analysis method is successfully applied to solve the relativistic Toda lattice system. Comparisons are made between the results of the proposed method and exact solutions. The analysis shows that the homotopy analysis method is a powerful and easy-to-use analytic tool for solving systems of differential-difference equations. (general)
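
For readers unfamiliar with the technique, the homotopy analysis method is built on a zeroth-order deformation equation of the generic form below (standard HAM notation; the specific linear operator, nonlinear operator and initial guess for the relativistic Toda lattice are not reproduced here):

$$
(1-q)\,\mathcal{L}\!\left[\phi(t;q)-u_0(t)\right]=q\,\hbar\,\mathcal{N}\!\left[\phi(t;q)\right],\qquad q\in[0,1],
$$

with $\phi(t;0)=u_0(t)$ and $\phi(t;1)=u(t)$. Expanding $\phi$ as a power series in $q$ and equating like powers yields the higher-order deformation equations from which the series terms $u_m(t)$ are computed, with the auxiliary parameter $\hbar$ controlling convergence.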

  1. A method for calculating Bayesian uncertainties on internal doses resulting from complex occupational exposures

    International Nuclear Information System (INIS)

    Puncher, M.; Birchall, A.; Bull, R. K.

    2012-01-01

    Estimating uncertainties on doses from bioassay data is of interest in epidemiology studies that estimate cancer risk from occupational exposures to radionuclides. Bayesian methods provide a logical framework to calculate these uncertainties. However, occupational exposures often consist of many intakes, and this can make the Bayesian calculation computationally intractable. This paper describes a novel strategy for increasing the computational speed of the calculation by simplifying the intake pattern to a single composite intake, termed the complex intake regime (CIR). In order to assess whether this approximation is accurate and fast enough for practical purposes, the method is implemented by the Weighted Likelihood Monte Carlo Sampling (WeLMoS) method and evaluated by comparing its performance with a Markov Chain Monte Carlo (MCMC) method. The MCMC method gives the full solution (all intakes are independent), but is very computationally intensive to apply routinely. Posterior distributions of model parameter values, intakes and doses are calculated for a representative sample of plutonium workers from the United Kingdom Atomic Energy cohort using the WeLMoS method with the CIR and the MCMC method. The distributions are in good agreement: posterior means and the 0.025 and 0.975 quantiles are typically within 20%. Furthermore, the WeLMoS method using the CIR converges quickly: a typical case history takes around 10-20 min on a fast workstation, whereas the MCMC method took around 12 hr. The advantages and disadvantages of the method are discussed. (authors)

  2. [Reconsidering children's dreams. A critical review of methods and results in developmental dream research from Freud to contemporary works].

    Science.gov (United States)

    Sándor, Piroska; Bódizs, Róbert

    2014-01-01

    Examining children's dream development is a significant challenge for researchers. Results from studies on children's dreaming may enlighten us on the nature and role of dreaming as well as broaden our knowledge of consciousness and cognitive development. This review summarizes the main questions and historical progress in developmental dream research, with the aim of shedding light on the advantages, disadvantages and effects of different settings and methods on research outcomes. A typical example would be the dreams of 3 to 5 year-olds: they are simple and static, with a relative absence of emotions and active self participation according to laboratory studies; studies using different methodology however found them to be vivid, rich in emotions, with the self as an active participant. Questions about the validity of different methods arise, and are considered within this review. Given that methodological differences can result in highly divergent outcomes, it is strongly recommended for future research to select methodology and treat results more carefully.

  3. Application of improved AHP method to radiation protection optimization

    International Nuclear Information System (INIS)

    Wang Chuan; Zhang Jianguo; Yu Lei

    2014-01-01

    To address the deficiencies of the traditional AHP method, a hierarchy model for selecting the optimum radiation protection project was established with an improved AHP method. A comparison between the improved AHP method and the traditional AHP method shows that the improved method reduces the subjectivity of personal judgment, and its calculation process is compact and reasonable. The improved AHP method can provide a scientific basis for radiation protection optimization. (authors)
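
As background, here is a minimal sketch of the standard AHP calculation (priority vector from the principal eigenvector and the consistency ratio). The pairwise comparison matrix is invented, and the sketch does not reproduce the paper's specific improvement.

```python
# Standard AHP: priority weights from the principal eigenvector of a pairwise
# comparison matrix, plus Saaty's consistency ratio (example matrix assumed).
import numpy as np

A = np.array([[1,   3,   5  ],
              [1/3, 1,   2  ],
              [1/5, 1/2, 1  ]], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                    # Perron (largest real) eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                   # priority weights

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)           # consistency index
ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]   # Saaty's random index
cr = ci / ri                                   # consistency ratio (< 0.1 is acceptable)
print("weights:", np.round(w, 3), " CR:", round(cr, 3))
```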

  4. Results of Propellant Mixing Variable Study Using Precise Pressure-Based Burn Rate Calculations

    Science.gov (United States)

    Stefanski, Philip L.

    2014-01-01

    A designed experiment was conducted in which three mix processing variables (pre-curative addition mix temperature, pre-curative addition mixing time, and mixer speed) were varied to estimate their effects on within-mix propellant burn rate variability. The chosen discriminator for the experiment was the 2-inch diameter by 4-inch long (2x4) Center-Perforated (CP) ballistic evaluation motor. Motor nozzle throat diameters were sized to produce a common targeted chamber pressure. Initial data analysis did not show a statistically significant effect. Because propellant burn rate must be directly related to chamber pressure, a method was developed that showed statistically significant effects on chamber pressure (either maximum or average) by adjustments to the process settings. Burn rates were calculated from chamber pressures and these were then normalized to a common pressure for comparative purposes. The pressure-based method of burn rate determination showed significant reduction in error when compared to results obtained from the Brooks' modification of the propellant web-bisector burn rate determination method. Analysis of effects using burn rates calculated by the pressure-based method showed a significant correlation of within-mix burn rate dispersion to mixing duration and the quadratic of mixing duration. The findings were confirmed in a series of mixes that examined the effects of mixing time on burn rate variation, which yielded the same results.

  5. Crack/cocaine users show more family problems than other substance users

    Directory of Open Access Journals (Sweden)

    Helena Ferreira Moura

    2014-07-01

    Full Text Available OBJECTIVES: To evaluate family problems among crack/cocaine users compared with alcohol and other substance users. METHODS: A cross-sectional multi-center study selected 741 current adult substance users from outpatient and inpatient Brazilian specialized clinics. Subjects were evaluated with the sixth version of the Addiction Severity Index, and 293 crack users were compared with 126 cocaine snorters and 322 alcohol and other drug users. RESULTS: Cocaine users showed more family problems when compared with other drug users, with no significant difference between routes of administration. These problems included arguing (crack 66.5%, powder cocaine 63.3%, other drugs 50.3%, p = 0.004), having trouble getting along with partners (61.5% vs 64.6% vs 48.7%, p = 0.013), and the need for additional childcare services in order to attend treatment (13.3% vs 10.3% vs 5.1%, p = 0.002). Additionally, the majority of crack/cocaine users had spent time with relatives in the last month (84.6% vs 86.5% vs 76.6%, p = 0.011). CONCLUSIONS: Brazilian treatment programs should enhance family treatment strategies, and childcare services need to be included.

  6. Application of collocation meshless method to eigenvalue problem

    International Nuclear Information System (INIS)

    Saitoh, Ayumu; Matsui, Nobuyuki; Itoh, Taku; Kamitani, Atsushi; Nakamura, Hiroaki

    2012-01-01

    The numerical method for solving the nonlinear eigenvalue problem has been developed by using the collocation Element-Free Galerkin Method (EFGM) and its performance has been numerically investigated. The results of computations show that the approximate solution of the nonlinear eigenvalue problem can be obtained stably by using the developed method. Therefore, it can be concluded that the developed method is useful for solving the nonlinear eigenvalue problem. (author)

  7. Review of quantum Monte Carlo methods and results for Coulombic systems

    International Nuclear Information System (INIS)

    Ceperley, D.

    1983-01-01

    The various Monte Carlo methods for calculating ground state energies are briefly reviewed. Then a summary of the charged systems that have been studied with Monte Carlo is given. These include the electron gas, small molecules, a metal slab and many-body hydrogen

  8. COMPARISON OF CONSEQUENCE ANALYSIS RESULTS FROM TWO METHODS OF PROCESSING SITE METEOROLOGICAL DATA

    International Nuclear Information System (INIS)

    , D

    2007-01-01

    Consequence analysis to support documented safety analysis requires the use of one or more years of representative meteorological data for atmospheric transport and dispersion calculations. At minimum, the needed meteorological data for most atmospheric transport and dispersion models consist of hourly samples of wind speed and atmospheric stability class. Atmospheric stability is inferred from measured and/or observed meteorological data. Several methods exist to convert measured and observed meteorological data into atmospheric stability class data. In this paper, one year of meteorological data from a western Department of Energy (DOE) site is processed to determine atmospheric stability class using two methods. The method that is prescribed by the U.S. Nuclear Regulatory Commission (NRC) for supporting licensing of nuclear power plants makes use of measurements of vertical temperature difference to determine atmospheric stability. Another method that is preferred by the U.S. Environmental Protection Agency (EPA) relies upon measurements of incoming solar radiation, vertical temperature gradient, and wind speed. Consequences are calculated and compared using the two sets of processed meteorological data from these two methods as input data into the MELCOR Accident Consequence Code System 2 (MACCS2) code

  9. The Determination of Sugars by Chromatographic Method

    OpenAIRE

    Sumartini, Sri; Kantasubrata, Julia

    1992-01-01

    Experiments have been carried out to analyse sugars using TLC and HPLC methods. In the TLC method, separation of sugars was performed on silica plates impregnated with monosodium phosphate, using a mixture of ethyl acetate/pyridine/water as the eluent. In the HPLC method, the use of three column types, i.e. diol, RP-18 and modified silica columns, was tested. The results showed that the TLC method was able to measure three sugars, i.e. sucrose, glucose and fructose, with standard deviations o...

  10. Results from preliminary FlowAct measurements during June 1995

    International Nuclear Information System (INIS)

    Linden, P.

    1997-02-01

    Flow measurements based on the pulsed neutron activation (PNA) method have been done and analysed. The results show that the accuracy of the PNA based FlowAct method is, under certain conditions, in the same range as the reference flow meter used. Also, the behaviour of the time distributions obtained is discussed, though the influence of velocity profile, radial mixing or other hydrodynamical questions is not taken into account. However, the objective of this work was to gain sufficient confidence in the method, and sufficient experience to be able to design and build a dedicated loop with stable flow and high-accuracy calibration. 4 refs, 12 figs

  11. A practical comparison of methods to assess sum-of-products

    International Nuclear Information System (INIS)

    Rauzy, A.; Chatelet, E.; Dutuit, Y.; Berenguer, C.

    2003-01-01

    Many methods have been proposed in the literature to assess the probability of a sum-of-products. This problem has been shown to be computationally hard (namely #P-hard). Therefore, algorithms can be compared only from a practical point of view. In this article, we first propose an efficient implementation of the pivotal decomposition method. This kind of algorithm is widely used in the Artificial Intelligence framework, but it is unfortunately almost never considered in the reliability engineering framework, except as a pedagogical tool. We report experimental results that show that this method is in general much more efficient than classical methods that rewrite the sum-of-products under study into an equivalent sum of disjoint products. Then, we derive from our method a factorization algorithm to be used as a preprocessing method for binary decision diagrams. We show by means of experimental results that this latter approach outperforms the former approaches.
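
A minimal sketch of pivotal (Shannon) decomposition for the probability of a sum-of-products over independent Boolean variables is given below. The representation of products as sets of variable identifiers and the pivot-selection heuristic are illustrative assumptions, not the article's implementation.

```python
# Pivotal (Shannon) decomposition for P(sum-of-products) with independent variables.
# products: list of sets of variable ids (product = AND of its variables,
#           formula = OR of the products); p: dict var -> P(var is True).
def prob_sop(products, p):
    if not products:
        return 0.0
    if any(len(s) == 0 for s in products):      # an empty product is identically True
        return 1.0
    # pivot on the most frequent variable (simple heuristic)
    counts = {}
    for s in products:
        for v in s:
            counts[v] = counts.get(v, 0) + 1
    x = max(counts, key=counts.get)
    pos = [s - {x} for s in products]           # cofactor with x = True
    neg = [s for s in products if x not in s]   # cofactor with x = False
    return p[x] * prob_sop(pos, p) + (1 - p[x]) * prob_sop(neg, p)

# example: P(AB + AC + BC) with independent A, B, C
print(prob_sop([{'A', 'B'}, {'A', 'C'}, {'B', 'C'}], {'A': 0.1, 'B': 0.2, 'C': 0.3}))
```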

  12. Comparison between Two Linear Supervised Learning Machines' Methods with Principle Component Based Methods for the Spectrofluorimetric Determination of Agomelatine and Its Degradants.

    Science.gov (United States)

    Elkhoudary, Mahmoud M; Naguib, Ibrahim A; Abdel Salam, Randa A; Hadad, Ghada M

    2017-05-01

    Four accurate, sensitive and reliable stability-indicating chemometric methods were developed for the quantitative determination of Agomelatine (AGM), whether in pure form or in pharmaceutical formulations. Two supervised learning machine methods, linear artificial neural networks preceded by principal component analysis (PC-linANN) and linear support vector regression (linSVR), were compared with two principal-component-based methods, principal component regression (PCR) and partial least squares (PLS), for the spectrofluorimetric determination of AGM and its degradants. The results showed the benefits of using linear learning machine methods and the inherent merits of their algorithms in handling overlapped noisy spectral data, especially during the challenging determination of the AGM alkaline and acidic degradants (DG1 and DG2). Root mean squared errors of prediction (RMSEP) for the proposed models in the determination of AGM were 1.68, 1.72, 0.68 and 0.22 for PCR, PLS, SVR and PC-linANN, respectively. The results showed the superiority of supervised learning machine methods over principal-component-based methods. Besides, the results suggested that linANN is the method of choice for determination of components in low amounts with similar overlapped spectra and a narrow linearity range. Comparison between the proposed chemometric models and a reported HPLC method revealed the comparable performance and quantification power of the proposed models.
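
To make RMSEP and the PCR/PLS comparison concrete, here is a hedged sketch on synthetic "spectra" using scikit-learn. The data, component counts and model settings are stand-ins, not the authors' AGM calibration.

```python
# Comparing PCR and PLS on simulated spectra and reporting RMSEP
# (synthetic data only; illustrates the metrics, not the published models).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 200))                           # 80 "spectra", 200 wavelengths
y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=80)     # concentration-like response
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# PCR: PCA scores followed by ordinary least squares
pca = PCA(n_components=5).fit(X_tr)
pcr = LinearRegression().fit(pca.transform(X_tr), y_tr)
rmsep_pcr = mean_squared_error(y_te, pcr.predict(pca.transform(X_te))) ** 0.5

# PLS with the same number of latent variables
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
rmsep_pls = mean_squared_error(y_te, pls.predict(X_te).ravel()) ** 0.5

print(f"RMSEP  PCR: {rmsep_pcr:.3f}   PLS: {rmsep_pls:.3f}")
```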

  13. 20 CFR 220.27 - What is needed to show an impairment.

    Science.gov (United States)

    2010-04-01

    A physical or mental impairment must result from anatomical... diagnostic techniques. A physical or mental impairment must be established by medical evidence consisting of...

  14. Universal Image Steganalytic Method

    Directory of Open Access Journals (Sweden)

    V. Banoci

    2014-12-01

    Full Text Available In this paper we introduce a new universal steganalytic method for the JPEG file format that detects both well-known and newly developed steganographic methods. The steganalytic model is trained using the MHF-DZ steganographic algorithm previously designed by the same authors. The calibration technique with Feature Based Steganalysis (FBS) was employed in order to identify statistical changes caused by embedding secret data into an original image. The steganalyzer concept utilizes Support Vector Machine (SVM) classification for training a model that is later used by the same steganalyzer in order to distinguish between a clean (cover) and a steganographic image. The aim of the paper was to analyze the variation in detection accuracy (ACR) when detecting test steganographic algorithms such as F5, Outguess, Model Based Steganography without deblocking, and JP Hide and Seek, which represent commonly used steganographic tools. The comparison of four feature vectors with different lengths, FBS(22), FBS(66), FBS(274) and FBS(285), shows promising results for the proposed universal steganalytic method compared to binary methods.
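
A minimal sketch of the SVM training and testing step of such a feature-based steganalyzer is given below. The random feature vectors are stand-ins for the FBS features, and the dimensions and kernel settings are assumptions for illustration.

```python
# SVM cover-vs-stego classification on simulated feature vectors
# (illustrative pipeline; the features are not real FBS(274) features).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
cover = rng.normal(0.0, 1.0, size=(300, 274))     # features of clean (cover) images
stego = rng.normal(0.15, 1.0, size=(300, 274))    # slightly shifted: embedding artefacts
X = np.vstack([cover, stego])
y = np.array([0] * 300 + [1] * 300)               # 0 = cover, 1 = stego

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel='rbf', C=10, gamma='scale').fit(X_tr, y_tr)
print("detection accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```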

  15. Purification of crude glycerol from transesterification reaction of palm oil using direct method and multistep method

    Science.gov (United States)

    Nasir, N. F.; Mirus, M. F.; Ismail, M.

    2017-09-01

    Crude glycerol produced from the transesterification reaction has limited usage if it does not undergo a purification process, as it contains excess methanol, catalyst and soap. Conventional purification of crude glycerol involves high cost and complex processes. This study aimed to determine the effects of using different purification methods: a direct method (comprising ion exchange and methanol removal steps) and a multistep method (comprising neutralization, filtration, ion exchange and methanol removal steps). Two crude glycerol samples were investigated: a sample self-produced through the transesterification of palm oil and a sample obtained from a biodiesel plant. Samples were analysed using Fourier Transform Infrared Spectroscopy, Gas Chromatography and High Performance Liquid Chromatography. The results for both samples after purification showed that pure glycerol was successfully produced and fatty acid salts were eliminated. The results also indicated the absence of methanol in both samples after the purification process. In short, the combination of four purification steps contributed to a higher quality of glycerol. The multistep purification method gave a better result than the direct method, as the neutralization and filtration steps helped remove most of the excess salt, fatty acid and catalyst.

  16. Fat stigmatization in television shows and movies: a content analysis.

    Science.gov (United States)

    Himes, Susan M; Thompson, J Kevin

    2007-03-01

    To examine the phenomenon of fat stigmatization messages presented in television shows and movies, a content analysis was used to quantify and categorize fat-specific commentary and humor. Fat stigmatization vignettes were identified using a targeted sampling procedure, and 135 scenes were excised from movies and television shows. The material was coded by trained raters. Reliability indices were uniformly high for the seven categories (percentage agreement ranged from 0.90 to 0.98; kappas ranged from 0.66 to 0.94). Results indicated that fat stigmatization commentary and fat humor were often verbal, directed toward another person, and often presented directly in the presence of the overweight target. Results also indicated that male characters were three times more likely to engage in fat stigmatization commentary or fat humor than female characters. To our knowledge, these findings provide the first information regarding the specific gender, age, and types of fat stigmatization that occur frequently in movies and television shows. The stimuli should prove useful in future research examining the role of individual difference factors (e.g., BMI) in the reaction to viewing such vignettes.
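
For reference, the two inter-rater statistics reported above can be computed as in the short sketch below; the rater labels are invented illustration data, not the study's codings.

```python
# Percentage agreement and Cohen's kappa for two raters (illustrative data).
from collections import Counter

def percent_agreement(r1, r2):
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    n = len(r1)
    p_o = percent_agreement(r1, r2)                      # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    p_e = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / (n * n)   # chance agreement
    return (p_o - p_e) / (1 - p_e)

rater1 = ['verbal', 'verbal', 'nonverbal', 'verbal', 'nonverbal']
rater2 = ['verbal', 'verbal', 'nonverbal', 'nonverbal', 'nonverbal']
print(percent_agreement(rater1, rater2), round(cohens_kappa(rater1, rater2), 2))
```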

  17. The Method for Assessing and Forecasting Value of Knowledge in SMEs – Research Results

    Directory of Open Access Journals (Sweden)

    Justyna Patalas-Maliszewska

    2010-10-01

    Full Text Available Decisions by SMEs regarding knowledge development are made at a strategic level (Haas-Edersheim, 2007). Related to knowledge management are approaches to "measure" knowledge, where the literature distinguishes between qualitative and quantitative methods of valuating intellectual capital. Although there is quite a range of such methods for building an intellectual capital reporting system, none of them is widely recognized. This work presents a method for assessing the effectiveness of investing in human resources that takes existing methods into consideration. The method focuses on SMEs (taking into account their importance for regional development in particular). It consists of four parts: an SME reference model, an indicator matrix to assess investments in knowledge, innovation indicators, and the GMDH algorithm for decision making. The method presented is exemplified by a case study including 10 companies.

  18. A new method for dosimetry with radiochromic films

    International Nuclear Information System (INIS)

    Mendez Carot, I.

    2013-01-01

    In this paper a new method is presented. The results of the comparison between a calibration based on a reference treatment plan and a calibration obtained from irradiated film fragments measuring different dose levels are summarized; multichannel dosimetry is compared with the weighted-average dosimetry described by Micke et al. (present in the FilmQAPro software); and, finally, different results obtained with the proposed method in several clinical applications are shown. (Author)

  19. TMI-2 core debris analytical methods and results

    International Nuclear Information System (INIS)

    Akers, D.W.; Cook, B.A.

    1984-01-01

    A series of six grab samples was taken from the debris bed of the TMI-2 core in early September 1983. Five of these samples were sent to the Idaho National Engineering Laboratory for analysis. Presented is the analysis strategy for the samples and some of the data obtained from the early stages of examination of the samples (i.e., particle size-analysis, gamma spectrometry results, and fissile/fertile material analysis)

  20. National implementation of the UNECE convention on long-range transboundary air pollution (effects). Pt. 1. Deposition loads: methods, modelling and mapping results, trends

    Energy Technology Data Exchange (ETDEWEB)

    Gauger, Thomas [Federal Agricultural Research Centre, Braunschweig (DE). Inst. of Agroecology (FAL-AOE); Stuttgart Univ. (Germany). Inst. of Navigation; Haenel, Hans-Dieter; Roesemann, Claus [Federal Agricultural Research Centre, Braunschweig (DE). Inst. of Agroecology (FAL-AOE)

    2008-09-15

    The report on the implementation of the UNECE Convention on Long-range Transboundary Air Pollution, Part 1, deposition loads (methods, modelling and mapping results, trends), includes the following chapters: introduction; deposition of air pollutants used as input for critical load and exceedance calculations; methods applied for mapping total deposition loads; mapping wet deposition; wet deposition mapping results; mapping dry deposition; dry deposition mapping results; cloud and fog mapping results; total deposition mapping results; modelling the air concentration of acidifying components and heavy metals; agricultural emissions of acidifying and eutrophying species.

  1. Radio-iodination of a rabbit fibrinogen by the chloramine-T method

    Energy Technology Data Exchange (ETDEWEB)

    Moza, A K; Kumar, M; Belavalgidad, M I; Sapru, R P [Post-Graduate Inst. of Medical Education and Research, Chandigarh (India). Dept. of Experimental Medicine

    1974-01-01

    A method for radio-iodination of fibrinogen using chloramine-T has been described. Samples of greater than 90% clottable counts were obtained. Electrophoretic mobility and immunodiffusion showed that the entire radioactivity was present in the fibrinogen band. In vivo studies on the turnover of this labelled product in rabbits showed a half-life of 52.8 to 61.7 hrs in two batches of animals. The results compare very well with the reported results obtained from fibrinogen labelled with radioactive iodine by the iodine-monochloride method. The advantages of the new method have been pointed out.

  2. Comparison of a New Cobinamide-Based Method to a Standard Laboratory Method for Measuring Cyanide in Human Blood

    Science.gov (United States)

    Swezey, Robert; Shinn, Walter; Green, Carol; Drover, David R.; Hammer, Gregory B.; Schulman, Scott R.; Zajicek, Anne; Jett, David A.; Boss, Gerry R.

    2013-01-01

    Most hospital laboratories do not measure blood cyanide concentrations, and samples must be sent to reference laboratories. A simple method is needed for measuring cyanide in hospitals. The authors previously developed a method to quantify cyanide based on the high binding affinity of the vitamin B12 analog, cobinamide, for cyanide and a major spectral change observed for cyanide-bound cobinamide. This method is now validated in human blood, and the findings include a mean inter-assay accuracy of 99.1%, precision of 8.75% and a lower limit of quantification of 3.27 µM cyanide. The method was applied to blood samples from children treated with sodium nitroprusside and it yielded measurable results in 88 of 172 samples (51%), whereas the reference laboratory yielded results in only 19 samples (11%). In all 19 samples, the cobinamide-based method also yielded measurable results. The two methods showed reasonable agreement when analyzed by linear regression, but not when analyzed by a standard error of the estimate or paired t-test. Differences in results between the two methods may be because samples were assayed at different times on different sample types. The cobinamide-based method is applicable to human blood, and can be used in hospital laboratories and emergency rooms. PMID:23653045
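
A hedged sketch of the two comparison analyses mentioned above (linear regression and a paired t-test) is shown below on made-up paired measurements in micromolar; the values are not the study's data.

```python
# Method-comparison statistics on invented paired cyanide measurements (microM).
from scipy import stats

cobinamide = [4.1, 6.8, 10.2, 15.5, 22.0, 30.3, 41.0]   # new method (assumed values)
reference  = [3.8, 7.2,  9.5, 16.1, 20.8, 31.5, 39.2]   # reference laboratory (assumed)

slope, intercept, r, p_reg, se = stats.linregress(reference, cobinamide)
t_stat, p_paired = stats.ttest_rel(cobinamide, reference)
print(f"regression: slope={slope:.2f}, r={r:.3f};  paired t-test p={p_paired:.3f}")
```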

  3. Hybrid Vortex Method for the Aerodynamic Analysis of Wind Turbine

    Directory of Open Access Journals (Sweden)

    Hao Hu

    2015-01-01

    Full Text Available The hybrid vortex method, in which the vortex panel method is combined with the viscous vortex particle method (HPVP), was established to model wind turbine aerodynamics, and the relevant numerical procedure was developed to solve the flow equations. The panel method was used to calculate the blade surface vortex sheets and the vortex particle method was employed to simulate the blade wake vortices. Numerical calculations of the flow over a wind turbine show that the HPVP method has significant advantages in accuracy while consuming fewer computational resources. Validation of the aerodynamic parameters against the Phase VI wind turbine experimental data is performed, showing reasonable agreement.

  4. Basic studies on gastrin-radioimmunoassay and the results of its clinical application. Comparative studies between the double antibody method using Wilson's anti-gastrin serum and a gastrin kit (CIS) method

    Energy Technology Data Exchange (ETDEWEB)

    Yabana, T; Uchiya, T; Kakumoto, Y; Waga, Y; Konta, M [Sapporo Medical Coll. (Japan)

    1975-03-01

    Fundamental and practical problems in carrying out the radioimmunoassay of gastrin were studied by comparing the double antibody method, using guinea pig anti-porcine gastrin serum (Wilson Lab.), with the gastrin kit method (G-K, CIS). The former method was found to have a measurable gastrin concentration range between 60 and 1,000 pg/ml, whereas the range of the latter method was between 25 and 800 pg/ml. The reproducibility of each method was satisfactory. The G-K method was affected more readily by co-existing proteins, whereas the interferences by other biologically active factors, e.g., CCK/PZ, caerulein, etc., were negligible. While there was a highly significant correlation between the values, values obtained by the G-K method were generally slightly lower than those obtained by the double antibody method. Fractionation analysis employing gel filtration of blood and tissue immunoreactive gastrin showed that the value of big gastrin as determined with the G-K method was lower than that obtained by the double antibody method, and that the difference was especially remarkable for gastrin in blood.

  5. Ecological content validation of the Information Assessment Method for parents (IAM-parent): A mixed methods study.

    Science.gov (United States)

    Bujold, M; El Sherif, R; Bush, P L; Johnson-Lafleur, J; Doray, G; Pluye, P

    2018-02-01

    This mixed methods study content validated the Information Assessment Method for parents (IAM-parent) that allows users to systematically rate and comment on online parenting information. Quantitative data and results: 22,407 IAM ratings were collected; of the initial 32 items, descriptive statistics showed that 10 had low relevance. Qualitative data and results: IAM-based comments were collected, and 20 IAM users were interviewed (maximum variation sample); the qualitative data analysis assessed the representativeness of IAM items, and identified items with problematic wording. Researchers, the program director, and Web editors integrated quantitative and qualitative results, which led to a shorter and clearer IAM-parent. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  6. A New Method for a Virtue-Based Responsible Conduct of Research Curriculum: Pilot Test Results.

    Science.gov (United States)

    Berling, Eric; McLeskey, Chet; O'Rourke, Michael; Pennock, Robert T

    2018-02-03

    Drawing on Pennock's theory of scientific virtues, we are developing an alternative curriculum for training scientists in the responsible conduct of research (RCR) that emphasizes internal values rather than externally imposed rules. This approach focuses on the virtuous characteristics of scientists that lead to responsible and exemplary behavior. We have been pilot-testing one element of such a virtue-based approach to RCR training by conducting dialogue sessions, modeled upon the approach developed by the Toolbox Dialogue Initiative, that focus on a specific virtue, e.g., curiosity or objectivity. During these structured discussions, small groups of scientists explore the roles they think the focus virtue plays and should play in the practice of science. Preliminary results have shown that participants strongly prefer this virtue-based model over traditional methods of RCR training. While we cannot yet definitively say that participation in these RCR sessions contributes to responsible conduct, these pilot results are encouraging and warrant continued development of this virtue-based approach to RCR training.

  7. Resonance interference method in lattice physics code stream

    International Nuclear Information System (INIS)

    Choi, Sooyoung; Khassenov, Azamat; Lee, Deokjung

    2015-01-01

    A newly developed resonance interference model is implemented in the lattice physics code STREAM, and the model shows a significant improvement in computing accurate eigenvalues. Equivalence theory is widely used in production calculations to generate effective multigroup (MG) cross sections (XS) for commercial reactors. Although many methods have been developed to enhance the accuracy of effective XS calculations, current resonance treatment methods still lack a clear resonance interference model. The conventional resonance interference model simply adds the absorption XSs of the resonance isotopes to the background XS. However, the conventional models show non-negligible errors in computing effective XSs and eigenvalues. In this paper, a resonance interference factor (RIF) library method is proposed. Rather than solving the time-consuming slowing-down calculation, this method interpolates the RIFs in a pre-generated RIF library and corrects the effective XS, as sketched below. The RIF library method is verified for homogeneous and heterogeneous problems. The verification results using the proposed method show significant improvements in accuracy in treating the interference effect. (author)
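
    As a rough illustration of the library-lookup idea described above, the following Python sketch interpolates a pre-tabulated RIF against a background cross-section grid and applies it as a multiplicative correction to a single-isotope effective cross-section. The grid values, table values, and function names are hypothetical placeholders, not data or code from STREAM.

```python
import numpy as np

# Hypothetical RIF library: tabulated interference factors indexed by
# background cross-section (barns). Values are placeholders for illustration.
sigma_b_grid = np.array([10.0, 50.0, 100.0, 500.0, 1000.0])
rif_table = np.array([0.82, 0.90, 0.94, 0.98, 0.99])

def corrected_effective_xs(sigma_eff_single, sigma_b):
    """Correct a single-isotope effective XS with an interpolated RIF."""
    rif = np.interp(sigma_b, sigma_b_grid, rif_table)  # linear interpolation in the library
    return rif * sigma_eff_single

# Example: correct a 30-barn effective absorption XS at a 200-barn background XS.
print(corrected_effective_xs(30.0, 200.0))
```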

  8. Analytical method for the isotopic characterization of soils

    International Nuclear Information System (INIS)

    Sibello Hernandez, Rita; Cozzella, Maria Letizia; Mariani, Mario

    2014-01-01

    The aim of this work was to develop an analytical method to determine the isotopic composition of different elements in soil samples and to establish whether contamination exists. The samples were digested by the EPA 3050B method, and the concentrations of several metals, including uranium and thorium, were determined. For elements present at even lower concentrations, such as plutonium and radium, an additional treatment after the EPA mineralization was necessary. The measurement technique was inductively coupled plasma mass spectrometry with a quadrupole analyzer (ICP-MS). Results of the analyses performed in two laboratories showed good agreement. This method allowed the isotopic characterization of the studied soils; the results showed that the soils do not present any local pollution and that the presence of plutonium-239 is due to global fallout

  9. The quality of reporting methods and results of cost-effectiveness analyses in Spain: a methodological systematic review.

    Science.gov (United States)

    Catalá-López, Ferrán; Ridao, Manuel; Alonso-Arroyo, Adolfo; García-Altés, Anna; Cameron, Chris; González-Bermejo, Diana; Aleixandre-Benavent, Rafael; Bernal-Delgado, Enrique; Peiró, Salvador; Tabarés-Seisdedos, Rafael; Hutton, Brian

    2016-01-07

    Cost-effectiveness analysis has been recognized as an important tool to determine the efficiency of healthcare interventions and services. There is a need for evaluating the reporting of methods and results of cost-effectiveness analyses and establishing their validity. We describe and examine reporting characteristics of methods and results of cost-effectiveness analyses conducted in Spain during more than two decades. A methodological systematic review was conducted with the information obtained through an updated literature review in PubMed and complementary databases (e.g. Scopus, ISI Web of Science, National Health Service Economic Evaluation Database (NHS EED) and Health Technology Assessment (HTA) databases from the Centre for Reviews and Dissemination (CRD), Índice Médico Español (IME) and Índice Bibliográfico Español en Ciencias de la Salud (IBECS)). We identified cost-effectiveness analyses conducted in Spain that used quality-adjusted life years (QALYs) as outcome measures (period 1989-December 2014). Two reviewers independently extracted the data from each paper. The data were analysed descriptively. In total, 223 studies were included. Very few studies (10; 4.5 %) reported working from a protocol. Most studies (200; 89.7 %) were simulation models and included a median of 1000 patients. Only 105 (47.1 %) studies presented an adequate description of the characteristics of the target population. Most study interventions were categorized as therapeutic (189; 84.8 %) and nearly half (111; 49.8 %) considered an active alternative as the comparator. Effectiveness data were derived from a single study in 87 (39.0 %) reports, and only a few (40; 17.9 %) used evidence synthesis-based estimates. Few studies (42; 18.8 %) reported a full description of methods for QALY calculation. The majority of the studies (147; 65.9 %) reported that the study intervention produced "more costs and more QALYs" than the comparator. Most studies (200; 89.7 %) reported favourable

  10. MLFMA-accelerated Nyström method for ultrasonic scattering - Numerical results and experimental validation

    Science.gov (United States)

    Gurrala, Praveen; Downs, Andrew; Chen, Kun; Song, Jiming; Roberts, Ron

    2018-04-01

    Full wave scattering models for ultrasonic waves are necessary for the accurate prediction of voltage signals received from complex defects/flaws in practical nondestructive evaluation (NDE) measurements. We propose the high-order Nyström method accelerated by the multilevel fast multipole algorithm (MLFMA) as an improvement to the state-of-the-art full-wave scattering models that are based on boundary integral equations. We present numerical results demonstrating improvements in simulation time and memory requirement. Particularly, we demonstrate the need for higher order geometry and field approximation in modeling NDE measurements. Also, we illustrate the importance of full-wave scattering models using experimental pulse-echo data from a spherical inclusion in a solid, which cannot be modeled accurately by approximation-based scattering models such as the Kirchhoff approximation.

  11. Spectrophotometric methods for simultaneous estimation of pantoprazole and itopride hydrochloride in capsules

    Directory of Open Access Journals (Sweden)

    Krishna R. Gupta

    2010-12-01

    Full Text Available Three simple, accurate and economical methods for the simultaneous estimation of pantoprazole and itopride hydrochloride in two-component solid dosage forms have been developed. The proposed methods employ the simultaneous equation method (Method A), the absorbance ratio method (Method B) and the multicomponent mode of analysis (Method C). All of these methods use distilled water as the solvent. In distilled water, pantoprazole shows maximum absorbance at 289.0 nm, while itopride hydrochloride shows maximum absorbance at 258.0 nm; the drugs also show an isoabsorptive point at 270.0 nm. For the multicomponent method, sampling wavelengths of 289.0 nm, 270.0 nm and 239.5 nm were selected. All of these methods showed linearity in the ranges of 4-20 µg/mL for pantoprazole and 15-75 µg/mL for itopride hydrochloride. The results of the analysis have been validated statistically and by recovery studies.
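
    As a hedged illustration of the simultaneous equation approach (Method A): for two drugs X and Y measured in a 1 cm cell at the two analytical wavelengths, with A_1 and A_2 the absorbances of the mixture and a_x1, a_x2, a_y1, a_y2 the corresponding absorptivities (generic symbols, not values reported in the paper), the concentrations follow from the pair of Beer-Lambert equations:

```latex
\[
  A_1 = a_{x1} C_x + a_{y1} C_y , \qquad
  A_2 = a_{x2} C_x + a_{y2} C_y ,
\]
\[
  C_x = \frac{A_1 a_{y2} - A_2 a_{y1}}{a_{x1} a_{y2} - a_{x2} a_{y1}} , \qquad
  C_y = \frac{A_2 a_{x1} - A_1 a_{x2}}{a_{x1} a_{y2} - a_{x2} a_{y1}} .
\]
```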

  12. Recent results from the Oxford EBIT

    Energy Technology Data Exchange (ETDEWEB)

    Crosby, David N [Clarendon Laboratory, Parks Road, Oxford, OX1 3PU (United Kingdom); Ezekiel, Toleme Z [Clarendon Laboratory, Parks Road, Oxford, OX1 3PU (United Kingdom); Green, Felicia M [National Physical Laboratory, Teddington, TW11 0LW (United Kingdom); Smith, Claire J [Clarendon Laboratory, Parks Road, Oxford, OX1 3PU (United Kingdom); Silver, Joshua D [Clarendon Laboratory, Parks Road, Oxford, OX1 3PU (United Kingdom)

    2004-01-01

    Here we summarise the present status of the experimental programme of the Oxford electron beam ion trap. Most notably this research has recently culminated in the successful measurement of the 2s_1/2 - 2p_3/2 transition in hydrogenlike nitrogen by a laser resonance method. We also introduce preliminary results from some computational investigations of both electron beam transport and the trapped ion ensemble. In particular, we show that the contribution of the magnetic field to ion confinement has a potentially measurable effect on the ion phase space distribution.

  13. Classes evaluation: Methods and tools

    Directory of Open Access Journals (Sweden)

    Grabiński Tadeusz

    2013-01-01

    Full Text Available This study presents the method, tools, course and results of an evaluation of foreign language classes conducted in the summer semester of 2012/2013 at the Andrzej Frycz Modrzewski Krakow University. Because a new evaluation procedure had been implemented at the University, the former method, based on paper forms filled in by the students, was abandoned. A free account was registered on the surveyanyplace.com website and the evaluation questionnaire form was set up there. This report presents the results of a taxometric analysis aimed at checking the degree of mutual correspondence (correlation) between the grading criteria and gives a graphic presentation of the evaluation results in a multidimensional perspective. To classify the grading criteria, Ward's agglomerative method was used, with the Euclidean metric as the measure of criteria similarity; a minimal sketch of this step is given below. Calculations were made with the Statistica package. The results of the questionnaire show that foreign language teaching at the Andrzej Frycz Modrzewski Krakow University is conducted professionally and at a high substantive level.
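
    The clustering step can be sketched as follows in Python; the rating matrix below is a random placeholder standing in for the survey data, and the number of criteria and clusters is assumed for illustration only.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Each row holds one grading criterion's ratings across respondents (placeholder data).
rng = np.random.default_rng(0)
criteria_scores = rng.integers(1, 6, size=(8, 40)).astype(float)  # 8 criteria x 40 students

# Ward's agglomerative method requires the Euclidean metric, matching the study's choice.
Z = linkage(criteria_scores, method="ward", metric="euclidean")

# Cut the dendrogram into, e.g., three groups of mutually similar criteria.
labels = fcluster(Z, t=3, criterion="maxclust")
print(labels)
```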

  14. Fission track method for uranium ore exploration

    International Nuclear Information System (INIS)

    Guo Shilun; Deng Xinlu; Sun Shengfen; Meng Wu; Zhang Pengfa; Hao Xiuhong

    1986-01-01

    The uranium concentrations of natural water collected in uranium ore exploration fields have been determined by the fission track method. The results of the fission track method are consistent with those of fluoro-colorimetry and laser fluorometry for the same water samples with uranium concentrations in the region of 10^-4 to 10^-8 g/l. For water samples with lower uranium concentrations (≤10^-8 g/l), the fission track method can still give accurate or at least indicative results, whereas the other two methods fail. The reproducibility of the fission track method was checked and discussed using samples collected in the same uranium ore exploration fields. The effects of the concentration of impurities in natural water on the determination of uranium concentration were analysed and discussed as well

  15. Uranium City radiation reduction program: further efforts at remedial measures for houses with block walls, concrete porosity test results, and intercomparison of Kuznetz method and Tsivoglau method

    International Nuclear Information System (INIS)

    Haubrich, E.; Leung, M.K.; Mackie, R.

    1980-01-01

    An attempt was made to reduce the levels of radon in a house in Uranium City by mechanically venting the plenums in the concrete block basement walls, with little success. A table compares the results obtained by measuring the radon WL using the Tsivoglau and the Kuznetz methods

  16. Diamagnetic measurements on ISX-B: method and results

    International Nuclear Information System (INIS)

    Neilson, G.H.

    1983-10-01

    A diamagnetic loop is used on the ISX-B tokamak to measure the change in toroidal magnetic flux, ΔΦ, caused by finite plasma current and perpendicular pressure. From this measurement, the perpendicular poloidal beta β_I⊥ is determined. The principal difficulty encountered is in identifying and making corrections for various noise components which appear in the measured flux. These result from coupling between the measuring loops and the toroidal and poloidal field windings, both directly and through currents induced in the vacuum vessel and coils themselves. An analysis of these couplings is made and techniques for correcting them are developed. Results from the diamagnetic measurement, employing some of these correction techniques, are presented and compared with other data. The obtained values of β_I⊥ agree with those obtained from the equilibrium magnetic analysis (β_IΔ) in ohmically heated plasmas, indicating no anisotropy. However, with 0.3 to 2.0 MW of tangential neutral beam injection, β_IΔ is consistently greater than β_I⊥ and qualitatively consistent with the formation of an anisotropic ion velocity distribution and with toroidal rotation. Quantitatively, the difference between β_IΔ and β_I⊥ is more than can be accounted for on the basis of the usual classical fast ion calculations and spectroscopic rotation measurements

  17. Application of NDE methods to green ceramics: initial results

    International Nuclear Information System (INIS)

    Kupperman, D.S.; Karplus, H.B.; Poeppel, R.B.; Ellingson, W.A.; Berger, H.; Robbins, C.; Fuller, E.

    1983-01-01

    The effectiveness of microradiography, ultrasonic methods, nuclear magnetic resonance, and neutron radiography was assessed for the nondestructive evaluation of green (unfired) ceramics. The application of microradiography to ceramics is reviewed, and preliminary experiments with a commercial microradiography unit are described. Conventional ultrasonic techniques are difficult to apply to flaw detection in green ceramics because of the high attenuation, fragility, and couplant-absorbing properties of these materials. However, velocity, attenuation, and spectral data were obtained with pressure-coupled transducers and provided useful information related to density variations and the presence of agglomerates. Nuclear magnetic resonance (NMR) imaging techniques and neutron radiography were considered for detection of anomalies in the distribution of porosity. With NMR, areas of high porosity might be detected after the samples are doped with water. In the case of neutron radiography, although imaging the binder distribution throughout the sample may not be feasible because of the low overall concentration of binder, regions of high binder concentration (and thus high porosity) should be detectable

  18. RESULTS OF ANALYSIS OF BENCHMARKING METHODS OF INNOVATION SYSTEMS ASSESSMENT IN ACCORDANCE WITH AIMS OF SUSTAINABLE DEVELOPMENT OF SOCIETY

    Directory of Open Access Journals (Sweden)

    A. Vylegzhanina

    2016-01-01

    Full Text Available In this work, we present the results of a comparative analysis of international innovation system rating indexes with respect to their compliance with the purposes of sustainable development. The purpose of this research is to define requirements for benchmarking methods used to assess national or regional innovation systems and to compare such methods based on the assumption that an innovation system should be aligned with the sustainable development concept. Analysis of the goal sets and concepts underlying the observed international composite innovation indexes, and comparison of their metrics and calculation techniques, allowed us to reveal the opportunities and limitations of using these methods within the sustainable development framework. We formulated targets of innovation development on the basis of the innovation priorities of sustainable socio-economic development. Using a comparative analysis of the indexes against these targets, we identified two methods of assessing innovation systems that are most closely connected with the goals of sustainable development. Nevertheless, no existing benchmarking method yet meets the need to assess innovation systems in compliance with the sustainable development concept to a sufficient extent. We suggest practical directions for developing methods that assess innovation systems in compliance with the goals of sustainable development of society.

  19. EIT Imaging of admittivities with a D-bar method and spatial prior: experimental results for absolute and difference imaging.

    Science.gov (United States)

    Hamilton, S J

    2017-05-22

    Electrical impedance tomography (EIT) is an emerging imaging modality that uses harmless electrical measurements taken on electrodes at a body's surface to recover information about the internal electrical conductivity and/or permittivity. The image reconstruction task of EIT is a highly nonlinear inverse problem that is sensitive to noise and modeling errors, which makes it challenging. D-bar methods solve the nonlinear problem directly, bypassing the need for detailed and time-intensive forward models, to provide absolute (static) as well as time-difference EIT images. Coupling the D-bar methodology with the inclusion of high-confidence a priori data results in a noise-robust regularized image reconstruction method. In this work, the a priori D-bar method for complex admittivities is demonstrated to be effective on experimental tank data for absolute imaging for the first time. Additionally, the method is adjusted for, and tested on, time-difference imaging scenarios. The ability of the method to be used for conductivity, permittivity, absolute as well as time-difference imaging provides the user with great flexibility without a high computational cost.

  20. Rhabdomyosarcoma cells show an energy producing anabolic metabolic phenotype compared with primary myocytes

    Directory of Open Access Journals (Sweden)

    Higashi Richard M

    2008-10-01

    Full Text Available Abstract Background The functional status of a cell is expressed in its metabolic activity. We have applied stable isotope tracing methods to determine the differences in metabolic pathways in proliferating Rhabdomyosarcoma cells (Rh30) and human primary myocytes in culture. Uniformly 13C-labeled glucose was used as a source molecule to follow the incorporation of 13C into more than 40 marker metabolites using NMR and GC-MS. These include metabolites that report on the activity of glycolysis, Krebs' cycle, pentose phosphate pathway and pyrimidine biosynthesis. Results The Rh30 cells proliferated faster than the myocytes. Major differences in flux through glycolysis were evident from incorporation of label into secreted lactate, which accounts for a substantial fraction of the glucose carbon utilized by the cells. Krebs' cycle activity as determined by 13C isotopomer distributions in glutamate, aspartate, malate and pyrimidine rings was considerably higher in the cancer cells than in the primary myocytes. Large differences were also evident in de novo biosynthesis of riboses in the free nucleotide pools, as well as entry of glucose carbon into the pyrimidine rings in the free nucleotide pool. Specific labeling patterns in these metabolites show the increased importance of anaplerotic reactions in the cancer cells to maintain the high demand for anabolic and energy metabolism compared with the slower growing primary myocytes. Serum-stimulated Rh30 cells showed higher degrees of labeling than serum-starved cells, but they retained their characteristic anabolic metabolism profile. The myocytes showed evidence of de novo synthesis of glycogen, which was absent in the Rh30 cells. Conclusion The specific 13C isotopomer patterns showed that the major difference between the transformed and the primary cells is the shift from energy and maintenance metabolism in the myocytes toward increased energy and anabolic metabolism for proliferation in the Rh30 cells.

  1. On multiple level-set regularization methods for inverse problems

    International Nuclear Information System (INIS)

    DeCezaro, A; Leitão, A; Tai, X-C

    2009-01-01

    We analyze a multiple level-set method for solving inverse problems with piecewise constant solutions. This method corresponds to an iterated Tikhonov method for a particular Tikhonov functional G_α based on TV–H¹ penalization. We define generalized minimizers for our Tikhonov functional and establish an existence result. Moreover, we prove convergence and stability results for the proposed Tikhonov method. A multiple level-set algorithm is derived from the first-order optimality conditions for the Tikhonov functional G_α, similarly to the iterated Tikhonov method. The proposed multiple level-set method is tested on an inverse potential problem. Numerical experiments show that the method is able to recover multiple objects as well as multiple contrast levels
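
    Schematically, a functional of this general type combines a data-misfit term with TV and H¹ penalties on the level-set functions; the notation and weights below are illustrative only and the paper's precise functional should be consulted for details.

```latex
\[
  \mathcal{G}_{\alpha}(\phi)
    = \bigl\| F\bigl(P(\phi)\bigr) - y^{\delta} \bigr\|_{Y}^{2}
      + \alpha \Bigl( \beta_1 \, \bigl| P(\phi) \bigr|_{\mathrm{TV}}
      + \beta_2 \, \| \phi \|_{H^1}^{2} \Bigr)
\]
% F: forward operator, P: map from level-set functions to the piecewise
% constant coefficient, y^delta: noisy data, alpha: regularization parameter,
% beta_1, beta_2: weights of the TV and H^1 penalty terms (illustrative).
```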

  2. Studies on mycobacterium tuberculosis sensitivity test by using the method of rapid radiometry with appendixes of clinical results

    International Nuclear Information System (INIS)

    Yang Yongqing; Jiang Yimin; Lu Wendong; Zhu Rongen

    1987-01-01

    Three standard strains of Mycobacterium tuberculosis (H37Rv, fully sensitive; SM-R, 1000 μg/ml; RFP-R, 100 μg/ml) were tested with 10 concentrations of 5 antitubercular agents: INH, SM, PAS, RFP and EB. 114 isolates of Mycobacterium tuberculosis taken from patients were tested with INH, PAS, SM and RFP. The results agreed with those of the standard Lowenstein-Jensen method in 81.7% of cases. 82% of the isolate tests were completed within 5 days. The method may be used in routine clinical work. The liquid media prepared by the authors do not require human serum albumin and are less expensive and readily available

  3. Metachronous metastasis- and survival-analysis show prognostic importance of lymphadenectomy for colon carcinomas

    Directory of Open Access Journals (Sweden)

    Laubert Tilman

    2012-03-01

    Full Text Available Abstract Background Lymphadenectomy is performed to assess patient prognosis and to prevent metastasizing. Recently, it was questioned whether lymph node metastases were capable of metastasizing and therefore, whether lymphadenectomy was still adequate. We evaluated whether the nodal status impacts on the occurrence of distant metastases by analyzing a highly selected cohort of colon cancer patients. Methods 1,395 patients underwent surgery exclusively for colon cancer at the University of Lübeck between 01/1993 and 12/2008. The following exclusion criteria were applied: synchronous metastasis, R1-resection, prior/synchronous second carcinoma, age Results Five-year survival rates for TM + and TM- were 21% and 73%, respectively (p Conclusions Besides a higher T-category, a positive N-stage independently implies a higher probability to develop distant metastases and correlates with poor survival. Our data thus show a prognostic relevance of lymphadenectomy, which should therefore be retained until conclusive studies suggest that lymphadenectomy is unimportant.

  4. Investigation of e-Linac tube construction method and implementation suitable method for IPM e-Linac

    Directory of Open Access Journals (Sweden)

    F ghasemi

    2015-09-01

    Full Text Available The goal of the electron linear accelerator project at the Institute for Research in Fundamental Sciences (IPM) is to build as many of its components as possible in Iran. This accelerator is of the traveling wave type. Investigations show that there are various techniques for forming and connecting the accelerating tube cavities. The shrinking method applied for constructing the accelerating tube was selected based on the one applied for Stanford University's Mark III accelerator. After successfully building an 8-cavity test tube and identifying the problems of the method, the final accelerating tube with 24 cavities was constructed. The results show that the obtained frequency of 2996.5 MHz and quality factor of 11200 satisfy the desired design values.

  5. The improved design method of shear strength of reinforced concrete beams without transverse reinforcement

    Directory of Open Access Journals (Sweden)

    Vegera Pavlo

    2017-12-01

    Full Text Available In this article, results of experimental testing of reinforced concrete beams without transverse shear reinforcement are given. Three prototypes were tested using an improved testing method. The variable test parameter was the shear span to effective depth ratio. The tests showed that the bearing capacity of RC beams increases as the shear span to effective depth ratio decreases. The design method according to current codes was applied to the test samples and showed a significant discrepancy with the results. We then proposed an improved design method using an adjusted value of the concrete shear strength coefficient C_Rd,c. The results obtained by the improved design method showed satisfactory reproducibility.
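
    For context, C_Rd,c is the empirical coefficient appearing in the Eurocode 2 type expression for the shear resistance of members without shear reinforcement; a minimal sketch of that expression is given below (axial-stress and minimum-resistance terms omitted, usual EC2 symbols assumed; the article's adjusted value is not reproduced here).

```latex
\[
  V_{Rd,c} = C_{Rd,c}\, k\, \bigl(100\, \rho_l\, f_{ck}\bigr)^{1/3}\, b_w\, d ,
  \qquad k = 1 + \sqrt{\tfrac{200}{d}} \le 2.0 \;(d \text{ in mm}),
  \qquad \rho_l \le 0.02
\]
% C_Rd,c: empirical coefficient, rho_l: longitudinal reinforcement ratio,
% f_ck: characteristic concrete strength (MPa), b_w, d: web width and
% effective depth of the section.
```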

  6. Phytoceramide Shows Neuroprotection and Ameliorates Scopolamine-Induced Memory Impairment

    Directory of Open Access Journals (Sweden)

    Seikwan Oh

    2011-10-01

    Full Text Available The function and role of phytoceramide (PCER) and phytosphingosine (PSO) in the central nervous system have not been well studied. This study was aimed at investigating the possible roles of PCER and PSO in glutamate-induced neurotoxicity in cultured neuronal cells and in memory function in mice. Phytoceramide showed neuroprotective activity against glutamate-induced toxicity in cultured cortical neuronal cells. Neither phytosphingosine nor tetraacetylphytosphingosine (TAPS) showed neuroprotective effects in neuronal cells. PCER (50 mg/kg, p.o.) recovered the scopolamine-induced reduction in step-through latency in the passive avoidance test; however, PSO did not modulate memory function in this task. The ameliorating effects of PCER on spatial memory were confirmed by the Morris water maze test. In conclusion, through behavioral and neurochemical experiments, it was demonstrated that central administration of PCER ameliorates memory impairment. These results suggest that PCER plays an important role in neuroprotection and memory enhancement and that PCER could be a potential new therapeutic agent for the treatment of neurodegenerative diseases such as Alzheimer's disease.

  7. Adaptive governance good practice: Show me the evidence!

    Science.gov (United States)

    Sharma-Wallace, Lisa; Velarde, Sandra J; Wreford, Anita

    2018-09-15

    Adaptive governance has emerged in the last decade as an intriguing avenue of theory and practice for the holistic management of complex environmental problems. Research on adaptive governance has flourished since the field's inception, probing the process and mechanisms underpinning the new approach while offering various justifications and prescriptions for empirical use. Nevertheless, recent reviews of adaptive governance reveal some important conceptual and practical gaps in the field, particularly concerning challenges in its application to real-world cases. In this paper, we respond directly to the empirical challenge of adaptive governance, specifically asking: which methods contribute to the implementation of successful adaptive governance processes and outcomes in practice and across cases and contexts? We adopt a systematic literature review methodology which considers the current body of empirical literature on adaptive governance of social-ecological systems in order to assess and analyse the methods affecting successful adaptive governance practice across the range of existing cases. We find that methods contributing to adaptive governance in practice resemble the design recommendations outlined in previous adaptive governance scholarship, including meaningful collaboration across actors and scales; effective coordination between stakeholders and levels; building social capital; community empowerment and engagement; capacity development; linking knowledge and decision-making through data collection and monitoring; promoting leadership capacity; and exploiting or creating governance opportunities. However, we critically contextualise these methods by analysing and summarising their patterns-in-use, drawing examples from the cases to explore the specific ways they were successfully or unsuccessfully applied to governance issues on-the-ground. Our results indicate some important underlying shared patterns, trajectories, and lessons learned for evidence

  8. Ringing Artefact Reduction By An Efficient Likelihood Improvement Method

    Science.gov (United States)

    Fuderer, Miha

    1989-10-01

    In MR imaging, the extent of the acquired spatial frequencies of the object is necessarily finite. The resulting image shows artefacts caused by "truncation" of its Fourier components. These are known as Gibbs artefacts or ringing artefacts. These artefacts are particularly visible when the time-saving reduced acquisition method is used, say, when scanning only the lowest 70% of the 256 data lines. Filtering the data results in loss of resolution. A method is described that estimates the high-frequency data from the low-frequency data lines, with the likelihood of the image as the criterion. It is a computationally very efficient method, since it requires practically only two extra Fourier transforms in addition to the normal reconstruction. The results of this method on MR images of human subjects are promising. Evaluations on a 70% acquisition image show about a 20% decrease of the error energy after processing. "Error energy" is defined as the total power of the difference to a 256-data-lines reference image. The elimination of ringing artefacts then appears almost complete.
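
    The truncation effect itself is easy to reproduce; the Python sketch below reconstructs a 1-D profile from only the lowest 70% of its Fourier lines with zero filling and reports the resulting "error energy". This only illustrates the artefact, not the likelihood-based high-frequency estimator described in the abstract, and the profile is a synthetic placeholder.

```python
import numpy as np

n = 256
profile = np.zeros(n)
profile[96:160] = 1.0        # a sharp-edged synthetic "object"

spectrum = np.fft.fft(profile)
keep = int(0.7 * n)          # number of retained (lowest) frequency lines
half = keep // 2
mask = np.zeros(n)
mask[: half + 1] = 1.0       # DC and positive low frequencies
mask[n - half :] = 1.0       # negative low frequencies

truncated = np.fft.ifft(spectrum * mask).real
error_energy = np.sum((truncated - profile) ** 2)  # power of the difference image
print(f"error energy after truncation: {error_energy:.4f}")
```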

  9. Adaptive BDDC Deluxe Methods for H(curl)

    KAUST Repository

    Zampini, Stefano

    2017-03-17

    The work presents numerical results using adaptive BDDC deluxe methods for preconditioning the linear systems arising from finite element discretizations of the time-domain, quasi-static approximation of the Maxwell’s equations. The provided results, obtained using the BDDC implementation of the PETSc library, show that these methods are poly-logarithmic in the polynomial degree of the Nédélec elements of first and second kind, and robust with respect to arbitrary distributions of the magnetic permeability and the conductivity of the medium.

  10. Studies of LMFBR: method of analysis and some results

    International Nuclear Information System (INIS)

    Ishiguro, Y.; Dias, A.F.; Nascimento, J.A. do.

    1983-01-01

    Some results of recent studies of LMFBR characteristics are summarized. A two-dimensional model of the LMFBR is taken from a publication and used as the base model for the analysis. Axial structures are added to the base model and a three-dimensional (Δ - Z) calculation has been done. Two dimensional (Δ and RZ) calculations are compared with the three-dimensional and published results. The eigenvalue, flux and power distributions, breeding characteristics, control rod worth, sodium-void and Doppler reactivities are analysed. Calculations are done by CITATION using six-group cross sections collapsed regionwise by EXPANDA in one-dimensional geometries from the 70-group JFS library. Burnup calculations of a simplified thorium-cycle LMFBR have also been done in the RZ geometry. Principal results of the studies are: (1) the JFS library appears adequate for predicting overall characteristics of an LMFBR, (2) the sodium void reactivity is negative within - 25 cm from the outer boundary of the core, (3) the halflife of Pa-233 must be considered explicitly in burnup analyses, and (4) two-dimensional (RZ and Δ) calculations can be used iteratively to analyze three-dimensional reactor systems. (Author) [pt

  11. Strengthening of limestone by the impregnation - gamma irradiation method. Results of tests

    International Nuclear Information System (INIS)

    Ramiere, R.; Tassigny, C. de

    1975-04-01

    The method developed by the Centre d'Etudes Nucleaires de Grenoble (France) strengthens the stones by impregnation with a styrene resin/liquid polystyrene mixture followed by polymerization under gamma irradiation. This method is applicable to stones which can be taken into the laboratory for treatment. The increase in strength of 6 different species of French limestone has been quantitatively recorded. The following parameters were studied: possibility of water migration inside the stones, improvements of the mechanical properties of the impregnated stone, standing up to freeze-thaw conditions and artificial ageing of the stones which causes only minor changes in the appearance of the stone and a negligible decrease in weight [fr

  12. Bonobos (Pan paniscus) show an attentional bias toward conspecifics' emotions.

    Science.gov (United States)

    Kret, Mariska E; Jaasma, Linda; Bionda, Thomas; Wijnen, Jasper G

    2016-04-05

    In social animals, the fast detection of group members' emotional expressions promotes swift and adequate responses, which is crucial for the maintenance of social bonds and ultimately for group survival. The dot-probe task is a well-established paradigm in psychology, measuring emotional attention through reaction times. Humans tend to be biased toward emotional images, especially when the emotion is of a threatening nature. Bonobos have rich social emotional lives and are known for their soft and friendly character. In the present study, we investigated (i) whether bonobos, similar to humans, have an attentional bias toward emotional scenes compared with conspecifics showing a neutral expression, and (ii) which emotional behaviors attract their attention the most. As predicted, results consistently showed that bonobos' attention was biased toward the location of the emotional versus the neutral scene. Interestingly, their attention was grabbed most by images showing conspecific behaviors such as sexual behavior, yawning, or grooming, and not as much, as is often observed in humans, by signs of distress or aggression. The results suggest that protective and affiliative behaviors are pivotal in bonobo society and therefore attract immediate attention in this species.

  13. Method validation in plasma source optical emission spectroscopy (ICP-OES) - From samples to results

    International Nuclear Information System (INIS)

    Pilon, Fabien; Vielle, Karine; Birolleau, Jean-Claude; Vigneau, Olivier; Labet, Alexandre; Arnal, Nadege; Adam, Christelle; Camilleri, Virginie; Amiel, Jeanine; Granier, Guy; Faure, Joel; Arnaud, Regine; Beres, Andre; Blanchard, Jean-Marc; Boyer-Deslys, Valerie; Broudic, Veronique; Marques, Caroline; Augeray, Celine; Bellefleur, Alexandre; Bienvenu, Philippe; Delteil, Nicole; Boulet, Beatrice; Bourgarit, David; Brennetot, Rene; Fichet, Pascal; Celier, Magali; Chevillotte, Rene; Klelifa, Aline; Fuchs, Gilbert; Le Coq, Gilles; Mermet, Jean-Michel

    2017-01-01

    Even though ICP-OES (Inductively Coupled Plasma - Optical Emission Spectroscopy) is now a routine analysis technique, requirements for measuring processes impose a complete control and mastering of the operating process and of the associated quality management system. The aim of this (collective) book is to guide the analyst through the whole measurement validation procedure and to help guarantee mastery of its different steps: administrative and physical management of samples in the laboratory, preparation and treatment of the samples before measuring, qualification and monitoring of the apparatus, instrument setting and calibration strategy, and exploitation of results in terms of accuracy, reliability and data covariance (with the practical determination of the accuracy profile). The most recent terminology is used in the book, and numerous examples and illustrations are given in order to facilitate a better understanding and to help with the elaboration of method validation documents

  14. Methods of Teaching Reading to EFL Learners: A Case Study

    Science.gov (United States)

    Sanjaya, Dedi; Rahmah; Sinulingga, Johan; Lubis, Azhar Aziz; Yusuf, Muhammad

    2014-01-01

    Methods of teaching reading skills are not the same in different countries; they depend on the condition and situation of the learners. The purpose of this study was to observe the methods of teaching used in Malaysia, and the results show that 5 methods are applied in classroom activities, namely the Grammar Translation Method (GTM),…

  15. Radon in Austrian tourist mines and show caves

    International Nuclear Information System (INIS)

    Ringer, W.; Graeser, J.

    2009-01-01

    The radon situation in tourist mines and show caves is barely investigated in Austria. This paper investigates the influence of its determining factors, such as climate, structure and geology. For this purpose, long-term time-resolved measurements over 6 to 12 months in 4 tourist mines and 2 show caves - with 5 to 9 measuring points each - have been carried out to obtain the course of radon concentration throughout the year. In addition, temperature and air-pressure were measured and compared to the data outside where available. Results suggest that the dominating factors of the average radon concentration are structure and location (geology) of the tunnel-system, whereas the diurnal and annual variation is mainly caused by the changing airflow, which is driven by the difference in temperature inside and outside. Downcast air is connected with very low radon concentrations, upcast air with high concentrations. In some locations the maximum values appear when the airflow ceases. But airflow can be different in different parts of mines and caves. Systems close to the surface show generally lower radon levels than the ones located deeper underground. Due to variation of structure, geology and local climate, the radon situation in mines and caves can only be described by simultaneous measurements at several measuring points. (orig.)

  16. Two cases of Tolosa-Hunt syndrome showing interesting CT findings

    International Nuclear Information System (INIS)

    Abe, Masahiro; Hara, Yuzo; Ito, Noritaka; Nishimura, Mieko; Onishi, Yoshitaka; Hasuo, Kanehiro

    1982-01-01

    CT showed the lesion at the orbital apex in both of the 2 cases of Tolosa-Hunt syndrome. Steroid therapy resulted in improvement of clinical symptoms and regression of the lesion which was confirmed by CT. (Chiba, N.)

  17. Classification Method to Define Synchronization Capability Limits of Line-Start Permanent-Magnet Motor Using Mesh-Based Magnetic Equivalent Circuit Computation Results

    Directory of Open Access Journals (Sweden)

    Bart Wymeersch

    2018-04-01

    Full Text Available Line-start permanent magnet synchronous motors (LS-PMSM) are energy-efficient synchronous motors that can start asynchronously due to a squirrel cage in the rotor. The drawback with this motor type, however, is the chance of failure to synchronize after start-up. To identify the problem and the stable operation limits, synchronization at various parameter combinations is investigated. For accurate knowledge of the operation limits that assure synchronization with the utility grid, an accurate classification of parameter combinations is needed. Because many simulations have to be executed for this, a rapid evaluation method is indispensable. Several modeling methods exist to simulate the dynamic behavior in the time domain, and this paper discusses the different options. In order to include spatial factors and magnetic nonlinearities on the one hand, and to restrict the computation time on the other, a magnetic equivalent circuit (MEC) modeling method is developed. To accelerate numerical convergence, a mesh-based analysis method is applied. The novelty in this paper is the use of a support vector machine (SVM) to classify the results of simulations at various parameter combinations into successful or unsuccessful synchronization, in order to define the synchronization capability limits; a minimal sketch of this classification step is given below. It is explained how these techniques benefit the simulation time and the evaluation process. The results of the MEC modeling correspond to those obtained with finite element analysis (FEA), despite the reduced computation time. In addition, simulation results obtained with MEC modeling are experimentally validated.
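
    The classification step can be sketched as follows with a standard SVM; the feature names, labeling rule, and data below are hypothetical placeholders, not results of the MEC simulations in the paper.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Assumed features per simulation: e.g. load inertia (p.u.) and load torque (p.u.).
X = rng.uniform(0.0, 2.0, size=(200, 2))
# Toy rule standing in for the simulated outcome (1 = synchronized, 0 = failed).
y = (X[:, 0] + 0.5 * X[:, 1] < 2.0).astype(int)

clf = SVC(kernel="rbf", C=10.0, gamma="scale")
clf.fit(X, y)

# The decision boundary of the trained SVM approximates the synchronization
# capability limit in the (inertia, torque) plane.
print(clf.predict([[0.8, 0.6], [1.8, 1.5]]))
```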

  18. Honored Teacher Shows Commitment.

    Science.gov (United States)

    Ratte, Kathy

    1987-01-01

    Part of the acceptance speech of the 1985 National Council for the Social Studies Teacher of the Year, this article describes the censorship experience of this honored social studies teacher. The incident involved the showing of a videotape version of the feature film entitled "The Seduction of Joe Tynan." (JDH)

  19. Prospects of an alternative treatment against Trypanosoma cruzi based on abietic acid derivatives show promising results in Balb/c mouse model.

    Science.gov (United States)

    Olmo, F; Guardia, J J; Marin, C; Messouri, I; Rosales, M J; Urbanová, K; Chayboun, I; Chahboun, R; Alvarez-Manzaneda, E J; Sánchez-Moreno, M

    2015-01-07

    Chagas disease, caused by the protozoan parasite Trypanosoma cruzi, is an example of an extended parasitaemia with unmet medical needs. Current treatments, based on the long-established drugs benznidazole (Bz) and nifurtimox, are expensive and do not fulfil the criteria of effectiveness and lack of toxicity expected of modern drugs. In this work, a group of abietic acid derivatives that are chemically stable and well characterised were introduced as candidates for the treatment of Chagas disease. In vitro and in vivo assays were performed in order to test the effectiveness of these compounds. Finally, those which showed the best activity underwent additional studies in order to elucidate the possible mechanism of action. In vitro results indicated that some compounds combine low toxicity (i.e. >150 μM against Vero cells) with high efficacy (i.e. <20 μM) against some forms of T. cruzi. Further in vivo studies in mouse models confirmed the expected improvements in infected mice. In vivo tests in the acute phase gave parasitaemia inhibition values higher than those of Bz, and a remarkable decrease in the reactivation of parasitaemia was found in the chronic phase after immunosuppression of the mice treated with one of the compounds. The morphological alterations found in parasites treated with our derivatives confirmed extensive damage; disturbances of energy metabolism were also registered by (1)H NMR. The demonstrated in vivo activity and low toxicity, together with the use of affordable starting products and the lack of synthetic complexity, put these abietic acid derivatives in a remarkable position toward the development of an anti-Chagasic agent. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  20. Construct solitary solutions of discrete hybrid equation by Adomian Decomposition Method

    International Nuclear Information System (INIS)

    Wang Zhen; Zhang Hongqing

    2009-01-01

    In this paper, we apply the Adomian Decomposition Method to solve differential-difference equations. A typical example is used to illustrate the validity and the great potential of the Adomian Decomposition Method for solving differential-difference equations. Kink-shaped and bell-shaped solitary solutions are presented. Comparisons are made between the results of the proposed method and exact solutions. The results show that the Adomian Decomposition Method is an attractive method for solving differential-difference equations.
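
    In outline, the Adomian decomposition writes the equation as Lu + Ru + Nu = g (L an easily invertible linear part, R the remaining linear terms, N the nonlinearity) and builds the series solution recursively; the notation below is the standard one and is not specific to the equation treated in the paper.

```latex
\[
  u = \sum_{n=0}^{\infty} u_n , \qquad
  N(u) = \sum_{n=0}^{\infty} A_n , \qquad
  A_n = \frac{1}{n!} \frac{d^{\,n}}{d\lambda^{\,n}}
        \Bigl[ N\Bigl( \sum_{k=0}^{\infty} \lambda^{k} u_k \Bigr) \Bigr]_{\lambda=0},
\]
\[
  u_0 = \Phi + L^{-1} g , \qquad
  u_{n+1} = -L^{-1}\bigl( R\, u_n + A_n \bigr), \quad n \ge 0 ,
\]
% A_n: Adomian polynomials, g: source term,
% Phi: term carrying the initial/boundary data.
```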

  1. Rhizosphere microbial communities from resistant and susceptible watermelon cultivars showed different response to fusarium oxysporum f. sp. niveum inoculation

    International Nuclear Information System (INIS)

    Zhi, W.F.; Can, C.S.; Ling, C.; Hui, X.W.

    2015-01-01

    Fusarium oxysporum f. sp. niveum (FON), a soil-borne pathogen of watermelon (Citrullus lanatus), can cause substantial production losses worldwide. In this study, plate culture and PCR-denaturing gradient gel electrophoresis (DGGE) methods were used to evaluate the effects of inoculation of Fusarium oxysporum f.sp. niveum on rhizosphere microbial communities of different watermelon cultivars to FON. Two methods indicated that the effects of watermelon rhizosphere microbial community of different resistance cultivars to FON were much different. Populations of culturable bacteria and actinomycetes in the rhizosphere of susceptible watermelon cultivar were significantly lower than in the resistant cultivar after inoculation (P<0.05), but the opposite result was observed for fungi. Principal component analysis of bacterial and fungal community structure also showed that the cultivar of FON-inoculated soil treatment were separated from the non-inoculated controls after inoculation, and there was clear discrimination between the susceptible cultivars and the resistant cultivars. Sequence analysis of specific bands from DGGE profiles showed that specific rhizosphere bacterial and fungal groups differed between watermelon cultivars after inoculation . Both methods demonstrated that different resistant watermelon cultivars occupied different rhizosphere microbial communities, and and disease suppression might be correlated with high microbial diversity. F. oxysporum f. sp. Niveum alters the structure and functional diversity of microbial communities associated with watermelon rhizosphere. (author)

  2. Application of the homotopy perturbation method and the homotopy analysis method for the dynamics of tobacco use and relapse

    Directory of Open Access Journals (Sweden)

    Anant Kant Shukla

    2014-11-01

    Full Text Available We obtain approximate analytical solutions of two mathematical models of the dynamics of tobacco use and relapse including peer pressure using the homotopy perturbation method (HPM and the homotopy analysis method (HAM. To enlarge the domain of convergence we apply the Padé approximation to the HPM and HAM series solutions. We show graphically that the results obtained by both methods are very accurate in comparison with the numerical solution for a period of 30 years.
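
    Schematically, for an equation written as L(u) + N(u) = f with L linear and N nonlinear (generic notation, not the specific tobacco-use model of the paper), the homotopy perturbation method constructs

```latex
\[
  H(v,p) = (1-p)\,\bigl[ L(v) - L(u_0) \bigr] + p\,\bigl[ L(v) + N(v) - f \bigr] = 0 ,
  \qquad p \in [0,1],
\]
\[
  v = v_0 + p\,v_1 + p^{2} v_2 + \cdots , \qquad
  u = \lim_{p \to 1} v = \sum_{i \ge 0} v_i ,
\]
% u_0: initial guess; matching powers of p yields the terms v_i. A Pade
% approximant of the truncated series is then applied, as in the paper,
% to enlarge the domain of convergence.
```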

  3. Improvement of production layout based on optimum production balancing scale results by using Moodie Young and Comsoal method

    Science.gov (United States)

    Ikhsan, Siregar; Ulina Anastasia Sipangkar, Tri; Prasetio, Aji

    2017-09-01

    This research was conducted at a make-to-order company engaged in building vehicle bodies. One of the products is the dump truck, a type of transportation for goods that is equipped with hydraulics to facilitate loading and unloading. The company has 7 work stations with different cycle times and often experiences delays in order delivery. The production process on the shop floor has not been carried out optimally: work in process (WIP) builds up in some work centres, which is visible at the welding and painting stations. This build-up along the production line may make the company liable for damages due to delays in product completion. The WIP occurs because the line is unbalanced, as can be seen from the widely varying cycle times of the stations, which in turn result from an uneven allocation of work elements to the work centres. On this basis, the dump truck assembly line was rebalanced. The analysis was done using the Moodie Young and COMSOAL methods to balance the production line (a minimal sketch of a COMSOAL-style pass is given below). The layout improvement obtained with the systematic layout planning (SLP) method changes the composition from 7 work centres to 4, which enables more effective and efficient material movement, so that an efficient and effective production line is obtained and the existing problems can be solved. The result of the line balancing was then used as a guide in constructing the new layout.
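
    The COMSOAL idea can be sketched as repeatedly building a random feasible assignment of tasks to stations (respecting precedence and a cycle time) and keeping the assignment with the fewest stations. The task times, precedence relations and cycle time below are hypothetical placeholders, not the dump-truck assembly data.

```python
import random

task_time = {"A": 4, "B": 6, "C": 3, "D": 5, "E": 2}
predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"], "E": ["D"]}
cycle_time = 8

def random_feasible_assignment():
    done, stations = set(), []
    station, load = [], 0
    while len(done) < len(task_time):
        # Candidates: unassigned tasks whose predecessors are done and that still fit.
        candidates = [t for t in task_time
                      if t not in done
                      and all(p in done for p in predecessors[t])
                      and load + task_time[t] <= cycle_time]
        if not candidates:             # current station is full: open a new one
            stations.append(station)
            station, load = [], 0
            continue
        t = random.choice(candidates)  # COMSOAL: random pick among feasible tasks
        station.append(t)
        load += task_time[t]
        done.add(t)
    stations.append(station)
    return stations

random.seed(0)
best = min((random_feasible_assignment() for _ in range(500)), key=len)
print(len(best), best)
```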

  4. A new ART iterative method and a comparison of performance among various ART methods

    International Nuclear Information System (INIS)

    Tan, Yufeng; Sato, Shunsuke

    1993-01-01

    Many algebraic reconstruction technique (ART) image reconstruction algorithms, for instance the simultaneous iterative reconstruction technique (SIRT), the relaxation method and multiplicative ART (MART), have been proposed and their convergence properties have been studied. SIRT and the underrelaxed relaxation method converge to the least-squares solution, but their convergence is very slow. The Kaczmarz method converges very quickly, but the reconstructed images contain a lot of noise. Comparative studies of these algorithms have been done by Gilbert and others, but are not adequate. In this paper, we (1) propose a new method, which is a modified Kaczmarz method, and prove its convergence property, and (2) study the performance of 7 algorithms, including the one proposed here, by computer simulation for 3 kinds of typical phantoms. The method proposed here does not give the least-squares solution, but the root mean square errors of its reconstructed images decrease very quickly after a few iterations. The results show that the method proposed here gives a better reconstructed image. (author)
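
    For orientation, the relaxed Kaczmarz update that underlies many ART variants sweeps over the rows a_i x = b_i and projects onto each hyperplane with a relaxation factor; the sketch below uses a random test system, not the phantoms or the modified method of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(40, 20))
x_true = rng.normal(size=20)
b = A @ x_true

def kaczmarz(A, b, lam=1.0, sweeps=50):
    """Relaxed Kaczmarz (row-action ART) iteration."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            # Relaxed projection onto the hyperplane a . x = b[i].
            x = x + lam * (b[i] - a @ x) / (a @ a) * a
    return x

x = kaczmarz(A, b, lam=0.8)
print(np.linalg.norm(A @ x - b))   # residual after the sweeps
```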

  5. Fault detection of gearbox using time-frequency method

    Science.gov (United States)

    Widodo, A.; Satrijo, Dj.; Prahasto, T.; Haryanto, I.

    2017-04-01

    This research deals with fault detection and diagnosis of a gearbox using vibration signatures. In this work, fault detection and diagnosis are approached by employing a time-frequency method, and the results are then compared with cepstrum analysis. Experimental work was conducted to acquire vibration signals through a self-designed gearbox test rig. This test rig is able to demonstrate normal and faulty gearbox conditions, i.e., wear and tooth breakage. Three accelerometers were used for vibration signal acquisition from the gearbox, and an optical tachometer was used for shaft rotation speed measurement. The results show that frequency-domain analysis using the fast Fourier transform was less sensitive to the wear and tooth breakage conditions. However, the short-time Fourier transform was able to monitor the faults in the gearbox. The wavelet transform (WT) method also showed good performance in gearbox fault detection after time synchronous averaging (TSA) of the vibration signal.
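
    A short-time Fourier transform pass over a vibration signal can be sketched as follows; the sampling rate, gear-mesh frequency and synthetic impact train below are assumptions for illustration, not data from the test rig.

```python
import numpy as np
from scipy.signal import stft

fs = 20_000                                      # assumed sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
mesh = np.sin(2 * np.pi * 1_200 * t)             # assumed gear-mesh component
impacts = (np.sin(2 * np.pi * 30 * t) > 0.995) * 2.0   # crude periodic impacts
signal = mesh + impacts + 0.1 * np.random.default_rng(3).normal(size=t.size)

f, tau, Z = stft(signal, fs=fs, nperseg=1024)
# A localized rise of energy across many frequency bins at the impact instants
# is the kind of signature inspected for tooth breakage.
print(Z.shape, f.max())
```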

  6. An assessment of the validity of inelastic design analysis methods by comparisons of predictions with test results

    International Nuclear Information System (INIS)

    Corum, J.M.; Clinard, J.A.; Sartory, W.K.

    1976-01-01

    The use of computer programs that employ relatively complex constitutive theories and analysis procedures to perform inelastic design calculations on fast reactor system components introduces questions of validation and acceptance of the analysis results. We may ask ourselves, "How valid are the answers?" These questions, in turn, involve the concepts of verification of computer programs as well as qualification of the computer programs and of the underlying constitutive theories and analysis procedures. This paper addresses the latter - the qualification of the analysis methods for inelastic design calculations. Some of the work underway in the United States to provide the necessary information to evaluate inelastic analysis methods and computer programs is described, and typical comparisons of analysis predictions with inelastic structural test results are presented. It is emphasized throughout that rather than asking ourselves how valid, or correct, are the analytical predictions, we might more properly question whether or not the combination of the predictions and the associated high-temperature design criteria leads to an acceptable level of structural integrity. It is believed that in this context the analysis predictions are generally valid, even though exact correlations between predictions and actual behavior are not obtained and cannot be expected. Final judgment, however, must be reserved for the design analyst in each specific case. (author)

  7. [Comparisons of manual and automatic refractometry with subjective results].

    Science.gov (United States)

    Wübbolt, I S; von Alven, S; Hülssner, O; Erb, C

    2006-11-01

    Refractometry is very important in everyday clinical practice. The aim of this study is to compare the precision of three objective methods of refractometry with subjective dioptometry (Phoropter). The objective methods with the smallest deviation from the subjective refractometry results are identified. The objective methods/instruments used were retinoscopy, the Prism Refractometer PR 60 (Rodenstock) and the Auto Refractometer RM-A 7000 (Topcon). The results of monocular dioptometry (sphere, cylinder and axis) of each objective method were compared to the results of the subjective method. The examination was carried out on 178 eyes, which were divided into 3 age-related groups: 6 - 12 years (103 eyes), 13 - 18 years (38 eyes) and older than 18 years (37 eyes). All measurements were made in cycloplegia. The smallest standard deviation of the measurement error was found for the Auto Refractometer RM-A 7000. Both the PR 60 and retinoscopy had a clearly higher standard deviation. Furthermore, the RM-A 7000 showed a significant bias in the measurement error in three, and retinoscopy in four, of the nine comparisons. The Auto Refractometer provides measurements with the smallest deviation compared to the subjective method. Here it has to be taken into account that the measurements for the sphere have an average deviation of +0.2 dpt. In comparison to retinoscopy, the examination of children with the RM-A 7000 is difficult. An advantage of the Auto Refractometer is its fast and easy handling, so that measurements can be performed by medical staff.

  8. Comparison of Hi-C results using in-solution versus in-nucleus ligation.

    Science.gov (United States)

    Nagano, Takashi; Várnai, Csilla; Schoenfelder, Stefan; Javierre, Biola-Maria; Wingett, Steven W; Fraser, Peter

    2015-08-26

    Chromosome conformation capture and various derivative methods such as 4C, 5C and Hi-C have emerged as standard tools to analyze the three-dimensional organization of the genome in the nucleus. These methods employ ligation of diluted cross-linked chromatin complexes, intended to favor proximity-dependent, intra-complex ligation. During development of single-cell Hi-C, we devised an alternative Hi-C protocol with ligation in preserved nuclei rather than in solution. Here we directly compare Hi-C methods employing in-nucleus ligation with the standard in-solution ligation. We show in-nucleus ligation results in consistently lower levels of inter-chromosomal contacts. Through chromatin mixing experiments we show that a significantly large fraction of inter-chromosomal contacts are the result of spurious ligation events formed during in-solution ligation. In-nucleus ligation significantly reduces this source of experimental noise, and results in improved reproducibility between replicates. We also find that in-nucleus ligation eliminates restriction fragment length bias found with in-solution ligation. These improvements result in greater reproducibility of long-range intra-chromosomal and inter-chromosomal contacts, as well as enhanced detection of structural features such as topologically associated domain boundaries. We conclude that in-nucleus ligation captures chromatin interactions more consistently over a wider range of distances, and significantly reduces both experimental noise and bias. In-nucleus ligation creates higher quality Hi-C libraries while simplifying the experimental procedure. We suggest that the entire range of 3C applications are likely to show similar benefits from in-nucleus ligation.

  9. Results of a survey on accident and safety analysis codes, benchmarks, verification and validation methods

    International Nuclear Information System (INIS)

    Lee, A.G.; Wilkin, G.B.

    1996-03-01

    During the 'Workshop on R and D needs' at the 3rd Meeting of the International Group on Research Reactors (IGORR-III), the participants agreed that it would be useful to compile a survey of the computer codes and nuclear data libraries used in accident and safety analyses for research reactors and the methods various organizations use to verify and validate their codes and libraries. Five organizations, Atomic Energy of Canada Limited (AECL, Canada), China Institute of Atomic Energy (CIAE, People's Republic of China), Japan Atomic Energy Research Institute (JAERI, Japan), Oak Ridge National Laboratory (ORNL, USA), and Siemens (Germany), responded to the survey. The results of the survey are compiled in this report. (author) 36 refs., 3 tabs

  10. Personality, Study Methods and Academic Performance

    Science.gov (United States)

    Entwistle, N. J.; Wilson, J. D.

    1970-01-01

    A questionnaire measuring four student personality types--stable introvert, unstable introvert, stable extrovert and unstable extrovert--along with the Eysenck Personality Inventory (Form A) was given to 72 graduate students at Aberdeen University, and the results showed recognizable interaction between study methods, motivation and personality…

  11. Effective Methods of Teaching Moon Phases

    Science.gov (United States)

    Jones, Heather; Hintz, E. G.; Lawler, M. J.; Jones, M.; Mangrubang, F. R.; Neeley, J. E.

    2010-01-01

    This research investigates the effectiveness of several commonly used methods for teaching the causes of moon phases to sixth grade students. The teaching methods investigated are the use of diagrams, animations, modeling/kinesthetics and direct observation of moon phases in a planetarium. Data for each method will be measured by pre- and post-assessments of students' understanding of moon phases taught using one of the methods. The data will then be used to evaluate the effectiveness of each teaching method individually and comparatively, as well as each method's ability to discourage common misconceptions about moon phases. Results from this research will provide foundational data for the development of educational planetarium shows for deaf or otherwise linguistically disadvantaged children.

  12. Methods and introductory results of the Greek national health and nutrition survey - HYDRIA

    Directory of Open Access Journals (Sweden)

    Georgia Martimianaki

    2018-06-01

    Full Text Available Background: According to a large prospective cohort study (with baseline examination in the 1990s) and smaller studies that followed, the population in Greece has been gradually deprived of the favorable morbidity and mortality indices recorded in the 1960s. The HYDRIA survey, conducted in 2013-14, is the first nationally representative survey to collect data on the health and nutrition of the population in Greece. Methods: The survey sample consists of 4011 males (47%) and females aged 18 years and over. Data collection included interviewer-administered questionnaires on personal characteristics, lifestyle choices, dietary habits and medical history; measurements of somatometry and blood pressure; and blood drawing. Weighting factors were applied to ensure national representativeness of the results. Results: Three out of five adults in Greece reported suffering from a chronic disease, with diabetes mellitus and chronic depression being the most frequent among older individuals. The population is also experiencing an overweight/obesity epidemic, with seven out of 10 adults being either overweight or obese. In addition, 40% of the population shows indications of hypertension. Smoking is still common, and among women the prevalence was higher in younger age groups. Social disparities were observed in the prevalence of chronic diseases and mortality risk factors (hypertension, obesity, impaired lipid profile and high blood glucose levels). Conclusion: Excess body weight, hypertension, smoking and the population's limited physical activity are the predominant challenges that public health officials have to deal with in formulating policies and designing actions for the population in Greece.

  13. Cytokinesis-block micronucleus method in micro-blood cultures

    International Nuclear Information System (INIS)

    Liu Jinwen; Wang Lianzhi; Yang Cangzhen; Yao Yanyu

    1991-01-01

    This paper reports on the cytokinesis-block (CB) micronucleus method in micro-blood cultures. Micronuclei induced by different doses of 60Co γ-ray irradiation and spontaneous micronuclei at different ages were detected with the CB method and compared with the conventional micronucleus (CM) method. The results showed that cytokinesis-blocked micronuclei can also be obtained with direct peripheral micro-blood cultures. Using the CB method, the micronucleus frequency followed a linear relationship, Y = 1.62 + 0.74 D; the spontaneous micronucleus frequency at different ages was 4.14%; and the induced micronuclei also followed a linear relationship, Y = 6.01 + 0.692 D. Using the CM method, the induced micronuclei followed a linear relationship, Y = 0.486 D - 1.968, but there was no significant difference in micronucleus frequency between different ages. Comparison with the CM and direct blood smear methods confirmed that the cytokinesis-block method in micro-blood cultures is more sensitive and precise.
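
    The linear relationships quoted above, Y = a + b D, are ordinary least-squares fits of micronucleus frequency against dose. A minimal sketch of such a fit, using hypothetical dose points and frequencies rather than the paper's data:

        import numpy as np

        dose = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0])           # Gy (hypothetical)
        frequency = np.array([4.1, 6.0, 8.3, 12.9, 17.2, 21.5])   # micronuclei per 1000 cells (hypothetical)

        slope, intercept = np.polyfit(dose, frequency, 1)         # least-squares line Y = intercept + slope * D
        print(f"Y = {intercept:.2f} + {slope:.3f} D")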

  14. Methods and results of radiotherapy in case of medulloblastoma

    International Nuclear Information System (INIS)

    Bamberg, M.; Sauerwein, W.; Scherer, E.

    1982-01-01

    The prognosis of medulloblastoma, with its marked tendency towards early metastasis via the cerebrospinal fluid, can be decisively improved by post-surgical homogeneous irradiation. Successful radiotherapy is only possible with new irradiation methods developed for high-voltage units in recent years, which require great experience and skill on the part of the radiotherapist. At the Radiological Centre of Essen, 26 patients with medulloblastoma have undergone such specially developed post-surgical radiotherapy since 1974. After a follow-up period of at most seven years, 16 patients have survived (two of them with recurrences) and 10 patients died of a local recurrence. Considering the patients' state of health after surgery and before irradiation, the neurologic state and physical condition of these patients appear favorable after this post-operative radiotherapy. New therapeutic possibilities are offered by radiosensitizing substances; however, the currently most effective radiosensitizer, Misonidazole, has so far not met clinical expectations. (orig.) [de]

  15. The Accident Sequence Precursor program: Methods improvements and current results

    International Nuclear Information System (INIS)

    Minarick, J.W.; Manning, F.M.; Harris, J.D.

    1987-01-01

    Changes in the US NRC Accident Sequence Precursor program methods since the initial program evaluations of 1969-81 operational events are described, along with insights from the review of 1984-85 events. For 1984-85, the number of significant precursors was consistent with the number observed in 1980-81, dominant sequences associated with significant events were reasonably consistent with PRA estimates for BWRs, but lacked the contribution due to small-break LOCAs previously observed and predicted in PWRs, and the frequency of initiating events and non-recoverable system failures exhibited some reduction compared to 1980-81. Operational events which provide information concerning additional PRA modeling needs are also described

  16. Examining mixing methods in an evaluation of a smoking cessation program.

    Science.gov (United States)

    Betzner, Anne; Lawrenz, Frances P; Thao, Mao

    2016-02-01

    Three different methods were used in an evaluation of a smoking cessation study: surveys, focus groups, and phenomenological interviews. The results of each method were analyzed separately and then combined using both a pragmatic and dialectic stance to examine the effects of different approaches to mixing methods. Results show that the further apart the methods are philosophically, the more diverse the findings. Comparisons of decision maker opinions and costs of the different methods are provided along with recommendations for evaluators' uses of different methods. Copyright © 2015. Published by Elsevier Ltd.

  17. Global morphological analysis of marine viruses shows minimal regional variation and dominance of non-tailed viruses.

    Science.gov (United States)

    Brum, Jennifer R; Schenck, Ryan O; Sullivan, Matthew B

    2013-09-01

    Viruses influence oceanic ecosystems by causing mortality of microorganisms, altering nutrient and organic matter flux via lysis and auxiliary metabolic gene expression and changing the trajectory of microbial evolution through horizontal gene transfer. Limited host range and differing genetic potential of individual virus types mean that investigations into the types of viruses that exist in the ocean and their spatial distribution throughout the world's oceans are critical to understanding the global impacts of marine viruses. Here we evaluate viral morphological characteristics (morphotype, capsid diameter and tail length) using a quantitative transmission electron microscopy (qTEM) method across six of the world's oceans and seas sampled through the Tara Oceans Expedition. Extensive experimental validation of the qTEM method shows that neither sample preservation nor preparation significantly alters natural viral morphological characteristics. The global sampling analysis demonstrated that morphological characteristics did not vary consistently with depth (surface versus deep chlorophyll maximum waters) or oceanic region. Instead, temperature, salinity and oxygen concentration, but not chlorophyll a concentration, were more explanatory in evaluating differences in viral assemblage morphological characteristics. Surprisingly, given that the majority of cultivated bacterial viruses are tailed, non-tailed viruses appear to numerically dominate the upper oceans as they comprised 51-92% of the viral particles observed. Together, these results document global marine viral morphological characteristics, show that their minimal variability is more explained by environmental conditions than geography and suggest that non-tailed viruses might represent the most ecologically important targets for future research.

  18. Predicting chaos in memristive oscillator via harmonic balance method.

    Science.gov (United States)

    Wang, Xin; Li, Chuandong; Huang, Tingwen; Duan, Shukai

    2012-12-01

    This paper studies the possible chaotic behaviors of a memristive oscillator with cubic nonlinearities via the harmonic balance method, also called the describing function method, which was originally proposed to detect chaos in the classical Chua's circuit. We first transform the memristive oscillator system into a Lur'e model and then predict the existence of chaotic behaviors. To ensure that the prediction is correct, the distortion index is also measured. Numerical simulations are presented to show the effectiveness of the theoretical results.
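
    The core of the describing function (harmonic balance) approach is to look for an amplitude A and frequency w satisfying 1 + N(A)·G(jw) = 0, where G is the linear (Lur'e) part and N(A) is the describing function of the nonlinearity; for a cubic nonlinearity f(x) = x^3 driven by A·sin(wt), N(A) = 3A^2/4. The sketch below scans numerically for such a solution using a hypothetical third-order linear part, not the memristive oscillator of the paper:

        import numpy as np

        def G(w, K=12.0):
            """Hypothetical Lur'e linear part G(s) = K / (s (s + 1) (s + 2)) evaluated at s = jw."""
            s = 1j * w
            return K / (s * (s + 1.0) * (s + 2.0))

        # Harmonic balance: G(jw) = -1/N(A) with N(A) = 3A^2/4 real and positive,
        # so we need Im G(jw) = 0 and Re G(jw) < 0.
        ws = np.linspace(0.1, 10.0, 200000)
        g = G(ws)
        crossings = np.where(np.diff(np.sign(g.imag)) != 0)[0]   # phase-crossover candidates

        for i in crossings:
            w0 = ws[i]
            re = G(w0).real
            if re < 0:
                A = np.sqrt(-4.0 / (3.0 * re))                   # from 3A^2/4 = -1/Re G(jw0)
                print(f"predicted limit cycle: omega ~ {w0:.3f} rad/s, amplitude A ~ {A:.3f}")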

  19. Potential theory for stationary Schrödinger operators: a survey of results obtained with non-probabilistic methods

    Directory of Open Access Journals (Sweden)

    Marco Bramanti

    1992-05-01

    Full Text Available In this paper we deal with a uniformly elliptic operator of the kind Lu = Au + Vu, where the principal part A is in divergence form and V is a function assumed to be in a “Kato class”. This operator has been studied in different contexts, especially using probabilistic techniques. The aim of the present work is to give a unified and simplified presentation of the results obtained with non-probabilistic methods for the operator L on a bounded Lipschitz domain. These results concern: continuity of the solutions of Lu = 0; the Harnack inequality; estimates on the Green's function and the L-harmonic measure; and the boundary behavior of positive solutions of Lu = 0, in particular a “Fatou's theorem”.
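
    For context, the Kato class condition referred to above is usually stated as follows (a standard formulation, not quoted from the paper): in dimension n ≥ 3, a potential V belongs to the Kato class K_n when

        \lim_{r \to 0} \; \sup_{x \in \mathbb{R}^n} \int_{|x - y| \le r} \frac{|V(y)|}{|x - y|^{n-2}} \, dy = 0 .

    Heuristically, the condition says that V is locally small, in an averaged sense, relative to the principal part, which is what allows continuity of solutions and Harnack-type inequalities to survive the perturbation Vu.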

  20. Quantity and quality of black carbon molecular markers as obtained by two chromatographic methods (GC-FID and HPLC-DAD) - How do results compare?

    Science.gov (United States)

    Schneider, M. P. W.; Smittenberg, R. H.; Dittmar, T.; Schmidt, M. W. I.

    2009-04-01

    biomass is being used for this purpose. We seek to establish a conversion factor between the two methods, if required. Results show that both the GC and the HPLC method can be used for organic samples containing some silica, such as grass char. Further tests include silica-rich materials, such as soils. Ongoing methodological work aims at carbon isotope analysis (13C and 14C) on individual BPCAs isolated via HPLC. At present the HPLC method employs tetrabutyl ammonium bromide (TBAB) as a modifier for the liquid phase; TBAB is not volatile and contains carbon, and it therefore prevents carbon isotope analysis on isolated BPCAs. References: Brodowski, S., Rodionov, A., Haumeier, L., Glaser, B., Amelung, W. (2005) Revised black carbon assessment using benzene polycarboxylic acids. Organic Geochemistry, 36(9), 1299-1310. Dittmar, T. (2008) The molecular level determination of black carbon in marine dissolved organic matter. Organic Geochemistry, 39(4), 396-407. Glaser, B., Haumeier, L., Guggenberger, G., Zech, W. (1998) Black carbon in soils: the use of benzenecarboxylic acids as specific markers. Organic Geochemistry, 29(4), 811-819. Hammes, K., Smernik, R. J., Skjemstad, J. O., Herzog, A., Vogt, U. F., Schmidt, M. W. I. (2006) Synthesis and characterisation of laboratory-charred grass straw (Oryza sativa) and chestnut wood (Castanea sativa) as reference materials for black carbon quantification. Organic Geochemistry, 37(11), 1629-1633.

  1. Analysis of Piezoelectric Solids using Finite Element Method

    Science.gov (United States)

    Aslam, Mohammed; Nagarajan, Praveen; Remanan, Mini

    2018-03-01

    Piezoelectric materials are extensively used in smart structures as sensors and actuators. In this paper, static analysis of three piezoelectric solids is carried out using the general-purpose finite element software Abaqus. The simulation results from Abaqus are compared with the results obtained using numerical methods such as the Boundary Element Method (BEM) and the meshless point collocation method (PCM). The BEM and PCM are cumbersome for complex shapes and complicated boundary conditions. This paper shows that Abaqus can be used to solve the governing equations of piezoelectric solids in a much simpler and faster way than the BEM and PCM.
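
    The governing equations referred to above couple the mechanical and electrical fields through the linear piezoelectric constitutive relations, given here in the common stress-charge form for reference (standard notation, not reproduced from the paper):

        \{T\} = [c^{E}]\{S\} - [e]^{T}\{E\}, \qquad \{D\} = [e]\{S\} + [\varepsilon^{S}]\{E\},

    where {T} and {S} are the stress and strain vectors, {E} and {D} the electric field and electric displacement, [c^E] the elastic stiffness at constant electric field, [e] the piezoelectric coupling matrix and [ε^S] the permittivity at constant strain. A finite element package solves these coupled equations for the displacement and electric potential degrees of freedom.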

  2. Immune Algorithm Complex Method for Transducer Calibration

    Directory of Open Access Journals (Sweden)

    YU Jiangming

    2014-08-01

    Full Text Available As a key link in engineering test tasks, transducer calibration has a significant influence on the accuracy and reliability of test results. Because of unknown and complex nonlinear characteristics, conventional methods cannot achieve satisfactory accuracy. An immune-algorithm complex modeling approach is therefore proposed, and simulation studies on the calibration of three multiple-output transducers are carried out using the developed complex model. The simulated and experimental results show that the immune-algorithm complex modeling approach can significantly improve calibration precision in comparison with traditional calibration methods.

  3. Long term results of radiotherapy of degenerative joint diseases

    Energy Technology Data Exchange (ETDEWEB)

    Lindner, H; Freislederer, R

    1982-04-01

    At the Radiologic Department of the Staedt. Krankenhaus Passau, 473 patients with degenerative diseases of the large joints and the spine were irradiated with the caesium unit between 1971 and 1979. Of these patients, 249 could be followed up over a prolonged period (0.5 to 9 years, 4.2 years on average). According to the categories of v. Pannewitz, 11% were pain-free at follow-up, 21% showed an essential improvement, 29% showed an improvement, and 39% were not influenced by the treatment. 13.5% showed recurrent pain; these were counted as 'not influenced' in the statistical analysis. Pain relief was shown to depend not on the age of the patients, but on the length of the history, the results of the X-ray examination, and the degree of restriction of mobility. Owing to the delay of irradiation, preliminary treatment mostly produces a less favorable radiotherapeutic result. Compared with other therapeutic methods, the long-term results of radiotherapy of degenerative joint diseases are generally favorable. This conclusion is also confirmed by the results of patients checked more than five years after the treatment.

  4. Measuring the accuracy of automatic shoeprint recognition methods.

    Science.gov (United States)

    Luostarinen, Tapio; Lehmussola, Antti

    2014-11-01

    Shoeprints are an important source of information in criminal investigation. Therefore, an increasing number of automatic shoeprint recognition methods have been proposed for detecting the corresponding shoe models. However, comprehensive comparisons among the methods have not previously been made. In this study, an extensive set of methods proposed in the literature was implemented, and their performance was studied under varying conditions. Three datasets of shoeprints of different quality were used, and the methods were also evaluated with partial and rotated prints. The results show clear differences between the algorithms: while the best-performing method, based on local image descriptors and RANSAC, provides rather good results in most of the experiments, some methods are not at all robust against any non-idealities in the images. Finally, the results demonstrate that there is still a need for extensive research to improve the accuracy of automatic recognition of crime scene prints. © 2014 American Academy of Forensic Sciences.
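
    The best-performing family of methods in this comparison, local image descriptors combined with RANSAC, can be sketched with standard OpenCV building blocks. The snippet below is a generic illustration of that pipeline (ORB descriptors, brute-force matching, RANSAC homography), not the exact algorithm evaluated in the paper; the file names are placeholders.

        import cv2
        import numpy as np

        # Placeholder file names for a crime-scene print and a reference shoeprint.
        scene = cv2.imread("scene_print.png", cv2.IMREAD_GRAYSCALE)
        reference = cv2.imread("reference_print.png", cv2.IMREAD_GRAYSCALE)

        orb = cv2.ORB_create(nfeatures=2000)                   # local keypoints + binary descriptors
        kp1, des1 = orb.detectAndCompute(scene, None)
        kp2, des2 = orb.detectAndCompute(reference, None)

        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(des1, des2)

        src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

        # RANSAC rejects geometrically inconsistent matches; the inlier count can serve as a match score.
        H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=5.0)
        score = int(inlier_mask.sum()) if inlier_mask is not None else 0
        print(f"RANSAC inliers: {score} of {len(matches)} matches")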

  5. A Comparative Study of Potential Evapotranspiration Estimation by Eight Methods with FAO Penman–Monteith Method in Southwestern China

    Directory of Open Access Journals (Sweden)

    Dengxiao Lang

    2017-09-01

    Full Text Available Potential evapotranspiration (PET) is crucial for water resources assessment. In this regard, the FAO (Food and Agriculture Organization) Penman–Monteith method (PM) is commonly recognized as the standard method for PET estimation. However, because it requires detailed meteorological data, the application of PM is often constrained in many regions. Under such circumstances, an alternative method with efficiency similar to that of PM needs to be identified. In this study, three radiation-based methods, Makkink (Mak), Abtew (Abt), and Priestley–Taylor (PT), and five temperature-based methods, Hargreaves–Samani (HS), Thornthwaite (Tho), Hamon (Ham), Linacre (Lin), and Blaney–Criddle (BC), were compared with PM at the yearly and seasonal scale, using long-term (50 years) data from 90 meteorology stations in southwest China. The indicators Nash–Sutcliffe efficiency (NSE), relative error (Re), normalized root mean squared error (NRMSE), and coefficient of determination (R2) were used to evaluate the performance of the PET estimations by the above-mentioned eight methods. The results showed that the performance of the methods in PET estimation varied among regions; HS, PT, and Abt overestimated PET, while the others underestimated it. In the Sichuan basin, Mak, Abt and HS yielded estimations similar to those of PM, while in the Yun-Gui plateau, Abt, Mak, HS, and PT showed better performance. Mak performed the best in the east Tibetan Plateau at the yearly and seasonal scale, while HS showed good performance in summer and autumn. In the arid river valley, HS, Mak, and Abt performed better than the others. On the other hand, Tho, Ham, Lin, and BC could not be used to estimate PET in some regions. In general, among the selected methods, radiation-based methods for PET estimation performed better than temperature-based methods in the study area. Among the radiation-based methods, Mak performed the best, while HS showed the best performance among the temperature-based methods.
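
    As an illustration of one of the temperature-based alternatives compared here, the Hargreaves–Samani estimate needs only daily extreme temperatures and extraterrestrial radiation, and its agreement with a reference series can be scored with the Nash–Sutcliffe efficiency. The sketch below uses hypothetical values, not the study's data; Ra must be supplied in equivalent-evaporation units (mm/day).

        import numpy as np

        def hargreaves_samani(tmax, tmin, ra):
            """Hargreaves-Samani PET (mm/day); ra is extraterrestrial radiation in mm/day equivalent."""
            tmean = (tmax + tmin) / 2.0
            return 0.0023 * ra * (tmean + 17.8) * np.sqrt(tmax - tmin)

        def nash_sutcliffe(simulated, observed):
            """Nash-Sutcliffe efficiency: 1 is a perfect match, 0 is no better than the observed mean."""
            observed = np.asarray(observed, dtype=float)
            simulated = np.asarray(simulated, dtype=float)
            return 1.0 - np.sum((simulated - observed) ** 2) / np.sum((observed - observed.mean()) ** 2)

        # Hypothetical daily inputs and a hypothetical Penman-Monteith reference series (mm/day).
        tmax = np.array([28.0, 30.5, 27.2, 31.0])
        tmin = np.array([17.5, 19.0, 16.8, 20.2])
        ra = np.array([15.2, 15.3, 15.4, 15.5])
        pet_pm = np.array([4.1, 4.8, 3.9, 5.0])

        pet_hs = hargreaves_samani(tmax, tmin, ra)
        print("HS estimates:", np.round(pet_hs, 2), " NSE vs PM:", round(nash_sutcliffe(pet_hs, pet_pm), 3))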

  6. Gastric cancers of Western European and African patients show different patterns of genomic instability

    Directory of Open Access Journals (Sweden)

    Mulder Chris JJ

    2011-01-01

    Full Text Available Abstract Background Infection with H. pylori is important in the etiology of gastric cancer. Gastric cancer is infrequent in Africa, despite high frequencies of H. pylori infection, a discrepancy referred to as the African enigma. Variation between populations in the environmental and host factors that influence gastric cancer risk has been reported, but little is known about the biological differences between gastric cancers from different geographic locations. We aimed to study the genomic instability patterns of gastric cancers obtained from patients from the United Kingdom (UK) and South Africa (SA), in an attempt to support the African enigma hypothesis at the biological level. Methods DNA was isolated from 67 gastric adenocarcinomas, from 33 UK patients, 9 Caucasian SA patients and 25 native SA patients. Microsatellite instability and chromosomal instability were analyzed by PCR and microarray comparative genomic hybridization, respectively. Data were analyzed by supervised univariate and multivariate analyses as well as unsupervised hierarchical cluster analysis. Results Tumors from Caucasian and native SA patients showed significantly more microsatellite-instable tumors (p Conclusions Gastric cancers from SA and UK patients show differences in genetic instability patterns, indicating possibly different biological mechanisms in patients of different geographical origins. This is of future clinical relevance for the stratification of gastric cancer therapy.

  7. Epsilon topological accelerating algorithms for difference method for initial-value problems

    International Nuclear Information System (INIS)

    Hristea, V.; Posirca, M.

    1992-01-01

    Linear and nonlinear parabolic equations can be solved by discretization methods which lead to linear and nonlinear algebraic systems. Iterative methods (e.g. Gauss-Seidel) show very slow convergence and, in the case of nonlinear equations, instability. This paper proposes an ε-topological algorithm for accelerating the slow iterative methods used in the thermohydraulic code COBRA and the dynamic code ADEP. The results show an execution time approximately ten times lower than that of the original algorithms. (Author)
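
    For readers unfamiliar with ε-type acceleration, the scalar version of Wynn's ε-algorithm is shown below as a generic illustration; it is not the topological variant implemented by the authors. Applied to the partial sums of a slowly converging series, it produces a much better estimate of the limit from a handful of terms.

        import math

        def wynn_epsilon(seq):
            """Scalar Wynn epsilon algorithm: accelerated estimate of the limit of a sequence."""
            n = len(seq)
            prev_prev = [0.0] * (n + 1)     # column eps_{-1}, identically zero
            prev = list(seq)                # column eps_0, the sequence itself
            for k in range(1, n):
                cur = []
                for i in range(n - k):
                    diff = prev[i + 1] - prev[i]
                    if diff == 0.0:         # sequence already converged at this depth
                        return prev[i]
                    cur.append(prev_prev[i + 1] + 1.0 / diff)
                prev_prev, prev = prev, cur
            # only the even-numbered columns approximate the limit
            return prev[0] if (n - 1) % 2 == 0 else prev_prev[0]

        # Partial sums of the alternating harmonic series, which converges slowly to ln(2).
        partial_sums = [sum((-1) ** (j + 1) / j for j in range(1, m + 1)) for m in range(1, 10)]
        print("last partial sum  :", partial_sums[-1])     # ~0.7456 after 9 terms
        print("epsilon estimate  :", wynn_epsilon(partial_sums))
        print("exact value ln(2) :", math.log(2))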

  8. Calculation of concrete shielding wall thickness for 450kVp X-ray tube with MCNP simulation and result comparison with half value layer method calculation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Sang Heon; Lee, Eun Joong; Kim, Chan Kyu; Cho, Gyu Seong [Dept. of Nuclear and Quantum Engineering, KAIST, Daejeon (Korea, Republic of); Hur, Sam Suk [Sam Yong Inspection Engineering Co., Ltd., Seoul (Korea, Republic of)

    2016-11-15

    Radiation generating devices must be properly shielded for their safe application. Although institutes such as the US National Bureau of Standards and the National Council on Radiation Protection and Measurements (NCRP) have provided guidelines for shielding X-ray tubes of various purposes, industry practitioners tend to rely on the 'Half Value Layer (HVL) method', which requires relatively simple calculation compared with those guidelines. The method is based on the fact that the intensity, dose, and air kerma of a narrow beam incident on a shielding wall decrease by about half for each HVL thickness of the wall the beam penetrates. With this calculation one can adjust the shielding wall thickness to satisfy outside-wall dose or air kerma requirements. However, this may not always be adequate because 1) the strict definition of HVL deals only with intensity, and 2) the situation is different when the beam is not 'narrow': the beam quality inside the wall is distorted, and related changes in the outside-wall dose or air kerma, such as the buildup effect, occur. Therefore, more careful study is sometimes needed to verify the shielding of a specific radiation generating device. High-energy X-ray tubes operated at voltages above 400 kV, which are used for 'heavy' nondestructive inspection, are an example: people have less experience in running and shielding such devices than with the widely used low-energy X-ray tubes operated at voltages below 300 kV. In this study, the air kerma per week outside concrete shielding walls of various thicknesses surrounding a 450 kVp X-ray tube was calculated using MCNP simulation with the aid of geometry splitting, a well-known variance reduction technique. The comparison between the simulated result, the HVL method result, and the NCRP Report 147 safety goal of 0.02 mGy wk-1 air kerma for places where the public are free to pass showed that a concrete wall of thickness 80 cm is needed to achieve the safety goal.

  9. Calculation of concrete shielding wall thickness for 450kVp X-ray tube with MCNP simulation and result comparison with half value layer method calculation

    International Nuclear Information System (INIS)

    Lee, Sang Heon; Lee, Eun Joong; Kim, Chan Kyu; Cho, Gyu Seong; Hur, Sam Suk

    2016-01-01

    Radiation generating devices must be properly shielded for their safe application. Although institutes such as the US National Bureau of Standards and the National Council on Radiation Protection and Measurements (NCRP) have provided guidelines for shielding X-ray tubes of various purposes, industry practitioners tend to rely on the 'Half Value Layer (HVL) method', which requires relatively simple calculation compared with those guidelines. The method is based on the fact that the intensity, dose, and air kerma of a narrow beam incident on a shielding wall decrease by about half for each HVL thickness of the wall the beam penetrates. With this calculation one can adjust the shielding wall thickness to satisfy outside-wall dose or air kerma requirements. However, this may not always be adequate because 1) the strict definition of HVL deals only with intensity, and 2) the situation is different when the beam is not 'narrow': the beam quality inside the wall is distorted, and related changes in the outside-wall dose or air kerma, such as the buildup effect, occur. Therefore, more careful study is sometimes needed to verify the shielding of a specific radiation generating device. High-energy X-ray tubes operated at voltages above 400 kV, which are used for 'heavy' nondestructive inspection, are an example: people have less experience in running and shielding such devices than with the widely used low-energy X-ray tubes operated at voltages below 300 kV. In this study, the air kerma per week outside concrete shielding walls of various thicknesses surrounding a 450 kVp X-ray tube was calculated using MCNP simulation with the aid of geometry splitting, a well-known variance reduction technique. The comparison between the simulated result, the HVL method result, and the NCRP Report 147 safety goal of 0.02 mGy wk-1 air kerma for places where the public are free to pass showed that a concrete wall of thickness 80 cm is needed to achieve the safety goal
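
    The HVL calculation that the simulation is compared against is a one-line estimate: the number of half-value layers needed is the base-2 logarithm of the required attenuation factor, and the wall thickness is that number times the HVL. A minimal sketch with placeholder numbers; the unshielded air kerma and the concrete HVL at 450 kVp below are illustrative assumptions, not values from the report.

        import math

        def hvl_wall_thickness(unshielded_kerma, kerma_limit, hvl_cm):
            """Required wall thickness from the half-value-layer rule of thumb."""
            attenuation = unshielded_kerma / kerma_limit        # factor the wall must provide
            n_hvl = math.log2(attenuation)                      # number of half-value layers
            return n_hvl * hvl_cm

        # Placeholder inputs: weekly air kerma at the wall without shielding, the NCRP Report 147
        # public goal of 0.02 mGy per week, and an assumed concrete HVL for a 450 kVp beam.
        thickness = hvl_wall_thickness(unshielded_kerma=200.0,  # mGy/week, hypothetical
                                       kerma_limit=0.02,        # mGy/week (NCRP Report 147 goal)
                                       hvl_cm=3.5)              # cm of concrete, assumed
        print(f"required concrete thickness ~ {thickness:.1f} cm")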

  10. Temporal quadratic expansion nodal Green's function method

    International Nuclear Information System (INIS)

    Liu Cong; Jing Xingqing; Xu Xiaolin

    2000-01-01

    A new approach is presented to efficiently solve the three-dimensional space-time reactor dynamics equation, overcoming the disadvantages of current methods. In the Temporal Quadratic Expansion Nodal Green's Function Method (TQE/NGFM), the Quadratic Expansion Method (QEM) is used for the temporal solution and the Nodal Green's Function Method (NGFM) for the spatial solution. Test calculations with TQE/NGFM show that its time step size can be 5-20 times larger than that of the Fully Implicit Method (FIM) for similar precision. Additionally, the spatial mesh size with NGFM can be nearly 20 times larger than with the finite difference method. TQE/NGFM is therefore shown to be an efficient reactor dynamics analysis method

  11. Visualizing MCNP Tally Segment Geometry and Coupling Results with ABAQUS

    International Nuclear Information System (INIS)

    J. R. Parry; J. A. Galbraith

    2007-01-01

    The Advanced Graphite Creep test, AGC-1, is planned for irradiation in the Advanced Test Reactor (ATR) in support of the Next Generation Nuclear Plant program. The experiment requires very detailed neutronics and thermal hydraulics analyses to show compliance with programmatic and ATR safety requirements. The MCNP model used for the neutronics analysis required hundreds of tally regions to provide the desired detail. A method for visualizing the hundreds of tally region geometries and the tally region results in 3 dimensions has been created to support the AGC-1 irradiation. Additionally, a method was created which would allow ABAQUS to access the results directly for the thermal analysis of the AGC-1 experiment

  12. Different percentages of false-positive results obtained using five methods for the calculation of reference change values based on simulated normal and ln-normal distributions of data

    DEFF Research Database (Denmark)

    Lund, Flemming; Petersen, Per Hyltoft; Fraser, Callum G

    2016-01-01

    a homeostatic set point that follows a normal (Gaussian) distribution. This set point (or baseline in steady-state) should be estimated from a set of previous samples, but, in practice, decisions based on reference change value are often based on only two consecutive results. The original reference change value......-positive results. The aim of this study was to investigate false-positive results using five different published methods for calculation of reference change value. METHODS: The five reference change value methods were examined using normally and ln-normally distributed simulated data. RESULTS: One method performed...... best in approaching the theoretical false-positive percentages on normally distributed data and another method performed best on ln-normally distributed data. The commonly used reference change value method based on two results (without use of estimated set point) performed worst both on normally...
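
    For reference, the reference change value that these methods elaborate on is classically computed from the analytical and within-subject biological variation as RCV = 2^(1/2) · Z · (CV_A^2 + CV_I^2)^(1/2); this is the standard textbook formula, not one of the five methods compared in the paper. A minimal sketch:

        import math

        def reference_change_value(cv_analytical, cv_within_subject, z=1.96):
            """Classical RCV (%) for two consecutive results; z = 1.96 for 95 % two-sided significance."""
            return math.sqrt(2.0) * z * math.sqrt(cv_analytical ** 2 + cv_within_subject ** 2)

        # Hypothetical coefficients of variation, in percent.
        print(f"RCV = {reference_change_value(cv_analytical=3.0, cv_within_subject=6.0):.1f} %")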

  13. Machine learning methods can replace 3D profile method in classification of amyloidogenic hexapeptides

    Directory of Open Access Journals (Sweden)

    Stanislawski Jerzy

    2013-01-01

    Full Text Available Abstract Background Amyloids are proteins capable of forming fibrils. Many of them underlie serious diseases, such as Alzheimer's disease, and the number of amyloid-associated diseases is constantly increasing. Recent studies indicate that amyloidogenic properties can be associated with short segments of amino acids, which transform the structure when exposed. A few hundred such peptides have been found experimentally. Experimental testing of all possible amino acid combinations is currently not feasible; instead, they can be predicted by computational methods. The 3D profile is a physicochemically based method that has generated the most extensive dataset, ZipperDB. However, it is computationally very demanding. Here, we show that dataset generation can be accelerated. Two methods to increase the classification efficiency of amyloidogenic candidates are presented and tested: simplified 3D profile generation and machine learning methods. Results We generated a new dataset of hexapeptides using a more economical 3D profile algorithm, which showed very good classification overlap with ZipperDB (93.5%). The new part of our dataset contains 1779 segments, with 204 classified as amyloidogenic. The dataset of 6-residue sequences with their binary classification, based on the energy of the segment, was applied for training machine learning methods. A separate set of sequences from ZipperDB was used as a test set. The most effective methods were the Alternating Decision Tree and the Multilayer Perceptron. Both methods obtained an area under the ROC curve of 0.96, accuracy 91%, true positive rate ca. 78%, and true negative rate 95%. A few other machine learning methods also achieved good performance. The computational time was reduced from 18-20 CPU-hours (full 3D profile) to 0.5 CPU-hours (simplified 3D profile) to seconds (machine learning). Conclusions We showed that the simplified profile generation method does not introduce an error with regard to the original method, while
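
    A minimal sketch of the machine-learning side of such a pipeline: hexapeptides are one-hot encoded (6 positions x 20 amino acids = 120 binary features) and fed to a multilayer perceptron. The peptides and labels below are placeholders with arbitrary labels for illustration, not entries from ZipperDB.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

        def one_hot(hexapeptide):
            """Encode a 6-residue peptide as a flat 6 x 20 binary vector."""
            vec = np.zeros((6, len(AMINO_ACIDS)))
            for pos, residue in enumerate(hexapeptide):
                vec[pos, AMINO_ACIDS.index(residue)] = 1.0
            return vec.ravel()

        # Placeholder toy examples with arbitrary labels: 1 = amyloidogenic, 0 = non-amyloidogenic.
        peptides = ["VQIVYK", "NNQQNY", "GGVVIA", "KLVFFA", "PESDKS", "RQDQEG", "STVIIE", "DAPEEK"]
        labels = np.array([1, 1, 1, 1, 0, 0, 1, 0])

        X = np.array([one_hot(p) for p in peptides])
        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
        clf.fit(X, labels)

        # Prediction for one of the training peptides, purely to show the call pattern.
        print("predicted class for STVIIE:", clf.predict([one_hot("STVIIE")])[0])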

  14. Could Daylight Glare Be Defined Mathematically? Results of Testing the DGIN Method in Japan

    Science.gov (United States)

    Nazzal, Ali; Oki, Masato

    Discomfort glare from daylight is a common problem for which no valid prediction methods have so far existed. The new mathematical DGIN (New Daylight Glare Index) method attempts to respond to this challenge. This paper reports on experiments carried out in a daylit office environment in Japan to test the applicability of the method. A slight positive correlation was found between the DGIN and the subjective evaluation. Additionally, a high Ladaptation value together with a small ratio of Lwindow to Ladaptation was evidently sufficient to neutralize glare discomfort. However, subjective assessments are poor glare indicators and are not reliable for testing glare prediction methods. The DGIN is a good indicator of daylight glare, and when the DGIN value is analyzed together with the measured illuminance ratios, discomfort glare from daylight can be analyzed in a quantitative manner. The DGIN method could serve architects and lighting designers in testing daylighting systems, and could also guide the action of daylight-responsive lighting controls.

  15. Extrapolation methods theory and practice

    CERN Document Server

    Brezinski, C

    1991-01-01

    This volume is a self-contained, exhaustive exposition of extrapolation methods theory and of the various algorithms and procedures for accelerating the convergence of scalar and vector sequences. Many subroutines (written in FORTRAN 77) with instructions for their use are provided on a floppy disk in order to demonstrate to those working with sequences the advantages of the use of extrapolation methods. Many numerical examples showing the effectiveness of the procedures and a consequent chapter on applications are also provided, including some never before published results and applications

  16. Use of results from microscopic methods in optical model calculations

    International Nuclear Information System (INIS)

    Lagrange, C.

    1985-11-01

    A concept of vectorization for coupled-channel programs based upon conventional methods is first presented. This has been implemented in our program for use on the CRAY-1 computer. In a second part we investigate the capabilities of a semi-microscopic optical model involving fewer adjustable parameters than phenomenological ones. The two main ingredients of our calculations are, for spherical or well-deformed nuclei, the microscopic optical-model calculations of Jeukenne, Lejeune and Mahaux, and nuclear densities from Hartree-Fock-Bogoliubov calculations using the density-dependent force D1. For transitional nuclei, deformation-dependent nuclear structure wave functions are employed to weight the scattering potentials for different shapes and channels [fr]

  17. A numerical test of the collective coordinate method

    International Nuclear Information System (INIS)

    Dobrowolski, T.; Tatrocki, P.

    2008-01-01

    The purpose of this Letter is to compare the dynamics of a kink interacting with an imperfection, as derived from the collective coordinate method, with the numerical results obtained from the field-theoretical model. We showed that for weakly interacting kinks the collective coordinate method works similarly well for low and extremely large speeds

  18. Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques

    Science.gov (United States)

    Mai, J.; Tolson, B.

    2017-12-01

    The increasing complexity and runtime of environmental models lead to the current situation in which the calibration of all model parameters, or the estimation of all of their uncertainties, is often computationally infeasible. Hence, techniques to determine the sensitivity of model parameters are used to identify the most important parameters. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While examining the convergence of calibration and uncertainty methods is state of the art, the convergence of the sensitivity methods is usually not checked. If anything, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indexes. Bootstrapping, however, might as well become computationally expensive in the case of large model outputs and a high number of bootstraps. We therefore present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indexes without performing any additional model run. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards. The latter case enables the checking of already processed sensitivity indexes. To demonstrate the method's independence of the convergence testing method, we applied it to two widely used global SA methods: the screening method known as the Morris method or Elementary Effects (Morris 1991) and the variance-based Sobol' method (Sobol' 1993). The new convergence testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) where the true indexes of the aforementioned methods are known. This proof of principle shows that the method reliably determines the uncertainty of the SA results when different budgets are used for the SA. The results show that the new frugal method is able to test the convergence and therefore the reliability of SA results in an
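
    As a point of reference for the screening method mentioned above, a bare-bones elementary-effects (Morris-type) computation is sketched below for a toy function; it is a generic illustration of the SA technique, not the authors' MVA convergence test.

        import numpy as np

        def toy_model(x):
            """Toy model on the unit cube: parameter 0 matters most, parameter 2 hardly at all."""
            return 4.0 * x[0] + 2.0 * x[1] ** 2 + 0.1 * x[2] + x[0] * x[1]

        def elementary_effects(model, n_params, n_trajectories=200, delta=0.25, seed=1):
            """Radial one-at-a-time design; returns mu* (mean |EE|) and sigma per parameter."""
            rng = np.random.default_rng(seed)
            effects = np.zeros((n_trajectories, n_params))
            for r in range(n_trajectories):
                base = rng.uniform(0.0, 1.0 - delta, size=n_params)   # keep base + delta inside [0, 1]
                f_base = model(base)
                for i in range(n_params):
                    perturbed = base.copy()
                    perturbed[i] += delta
                    effects[r, i] = (model(perturbed) - f_base) / delta
            mu_star = np.abs(effects).mean(axis=0)
            sigma = effects.std(axis=0, ddof=1)
            return mu_star, sigma

        mu_star, sigma = elementary_effects(toy_model, n_params=3)
        for i, (m, s) in enumerate(zip(mu_star, sigma)):
            print(f"parameter {i}: mu* = {m:.2f}, sigma = {s:.2f}")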

  19. Spatial Heterodyne Observation of Water (SHOW) from a high altitude aircraft

    Science.gov (United States)

    Bourassa, A. E.; Langille, J.; Solheim, B.; Degenstein, D. A.; Letros, D.; Lloyd, N. D.; Loewen, P.

    2017-12-01

    The Spatial Heterodyne Observations of Water (SHOW) instrument is a limb-sounding satellite prototype being developed in collaboration between the University of Saskatchewan, York University, the Canadian Space Agency and ABB. The SHOW instrument combines a field-widened spatial heterodyne spectrometer (SHS) with an imaging system to observe limb-scattered sunlight in a vibrational band of water (1363 nm - 1366 nm). Currently, the instrument has been optimized for deployment on NASA's ER-2 aircraft. Flying at an altitude of 70,000 ft, the ER-2 configuration and SHOW viewing geometry provide high-spatial-resolution limb measurements of water vapor in the upper troposphere and lower stratosphere region. During an observation campaign from July 15 - July 22, the SHOW instrument performed 10 hours of observations from the ER-2. This paper describes the SHOW measurement technique and presents the preliminary analysis and results from these flights. These observations are used to validate the SHOW measurement technique and demonstrate the sampling capabilities of the instrument.

  20. Molecular Weights of Bovine and Porcine Heparin Samples: Comparison of Chromatographic Methods and Results of a Collaborative Survey

    Directory of Open Access Journals (Sweden)

    Sabrina Bertini

    2017-07-01

    Full Text Available In a collaborative study involving six laboratories in the USA, Europe, and India, the molecular weight distributions of a panel of heparin sodium samples were determined in order to compare heparin sodium of bovine intestinal origin with that of bovine lung and porcine intestinal origin. Porcine samples met the current criteria as laid out in the USP Heparin Sodium monograph. Bovine lung heparin samples had consistently lower average molecular weights. Bovine intestinal heparin was variable in molecular weight; some samples fell below the USP limits, some fell within these limits and others fell above the upper limits. These data will inform the establishment of pharmacopeial acceptance criteria for heparin sodium derived from bovine intestinal mucosa. The method for MW determination described in the USP monograph uses a single, broad standard calibrant to characterize the chromatographic profile of heparin sodium on high-resolution silica-based GPC columns, which may be short-lived in some laboratories. Using the panel of samples described above, methods based on robust polymer-based columns have been developed. In addition to the use of the USP's broad standard calibrant for heparin sodium with these columns, a set of conditions has been devised that allows light-scattering-detected molecular weight characterization of heparin sodium, giving results that agree well with the monograph method. These findings may facilitate the validation of variant chromatographic methods with some practical advantages over the USP monograph method.
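
    The molecular weight averages that such chromatographic methods report are computed from the detector signal of each elution slice and the molecular weight assigned to that slice by the column calibration. A minimal sketch with hypothetical slice data; this is the generic GPC calculation, not the USP procedure itself.

        import numpy as np

        # Hypothetical elution slices: detector signal heights and calibrated molecular weights (Da).
        signal = np.array([0.2, 0.8, 1.5, 1.9, 1.4, 0.7, 0.2])
        mol_weight = np.array([38000, 29000, 22000, 17000, 13000, 9000, 6000], dtype=float)

        mn = signal.sum() / (signal / mol_weight).sum()        # number-average molecular weight
        mw = (signal * mol_weight).sum() / signal.sum()        # weight-average molecular weight

        print(f"Mn ~ {mn:,.0f} Da, Mw ~ {mw:,.0f} Da, polydispersity ~ {mw / mn:.2f}")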