WorldWideScience

Sample records for methods secondary analyses

  1. Comparative Analyses of the Teaching Methods and Evaluation Practices in English Subject at Secondary School Certificate (SSC) and General Certificate of Education (GCE O-Level) in Pakistan

    Science.gov (United States)

    Behlol, Malik Ghulam; Anwar, Mohammad

    2011-01-01

    The study was conducted to compare the teaching methods and evaluation practices in the English subject at Secondary School Certificate (SSC) and General Certificate of Education (GCE O-level) in Pakistan. The population of the study comprised students, teachers and experts at SSC and O-level in the Punjab province. Purposive and random sampling techniques…

  2. Examining the Impact of a Video Case-Based Mathematics Methods Course on Secondary Pre-Service Teachers' Skills at Analysing Students' Strategies

    Science.gov (United States)

    Martinez, Mara Vanina; Superfine, Alison Castro; Carlton, Theresa; Dasgupta, Chandan

    2015-01-01

    This paper focuses on results from a study conducted with two cohorts of pre-service teachers (PSTs) in a video case-based mathematics methods course at a large Midwestern university in the US. The motivation for this study was to look beyond whether or not PSTs pay attention to mathematical thinking of students, as shown by previous studies when…

  3. Secondary structural analyses of ITS1 in Paramecium.

    Science.gov (United States)

    Hoshina, Ryo

    2010-01-01

    The nuclear ribosomal RNA gene operon is interrupted by internal transcribed spacer (ITS) 1 and ITS2. Although the secondary structure of ITS2 has been widely investigated, less is known about ITS1 and its structure. In this study, the secondary structures of ITS1 sequences from Paramecium and other ciliates were predicted. Each Paramecium ITS1 forms an open loop with three helices, A through C. Helix B was highly conserved among Paramecium, and similar helices were found in other ciliates. A phylogenetic analysis using the ITS1 sequences showed high resolution, implying that ITS1 is a good tool for species-level analyses.

  4. Method to perform radioimmunological analyses

    International Nuclear Information System (INIS)

    Friedel, R.

    1976-01-01

    The invention concerns a method for the radioimmunological detection of antigens. According to the invention, antibodies are adsorbed onto water-insoluble high-polymer compounds on the inner surfaces of a capillary device; a labelled antigen is then added and, following incubation, suction removal of the test mixture, and washing of the coated surfaces, the latter are measured for radioactivity. (VJ) [de

  5. Classic Grounded Theory to Analyse Secondary Data: Reality and Reflections

    Directory of Open Access Journals (Sweden)

    Lorraine Andrews

    2012-06-01

    Full Text Available This paper draws on the experiences of two researchers and discusses how they conducted a secondary data analysis using classic grounded theory. The aim of the primary study was to explore first-time parents’ postnatal educational needs. A subset of the data from the primary study (eight transcripts from interviews with fathers) was used for the secondary data analysis. The objectives of the secondary data analysis were to identify the challenges of using classic grounded theory with secondary data and to explore whether the re-analysis of primary data using a different methodology would yield a different outcome. Through the process of re-analysis a tentative theory emerged on ‘developing competency as a father’. Challenges encountered during this re-analysis included the small dataset, the pre-framed data, and limited ability for theoretical sampling. This re-analysis proved to be a very useful learning tool for author 1 (LA), who was a novice in classic grounded theory.

  6. Authentic Teaching Experiences in Secondary Mathematics Methods

    Science.gov (United States)

    Stickles, Paula R.

    2015-01-01

    Often secondary mathematics methods courses include classroom peer teaching, but many pre-service teachers find it challenging to teach their classmate peers as there are no discipline issues and little mathematical discourse as the "students" know the content. We will share a recent change in our methods course where pre-service…

  7. How Big of a Problem is Analytic Error in Secondary Analyses of Survey Data?

    Directory of Open Access Journals (Sweden)

    Brady T West

    Full Text Available Secondary analyses of survey data collected from large probability samples of persons or establishments further scientific progress in many fields. The complex design features of these samples improve data collection efficiency, but also require analysts to account for these features when conducting analysis. Unfortunately, many secondary analysts from fields outside of statistics, biostatistics, and survey methodology do not have adequate training in this area, and as a result may apply incorrect statistical methods when analyzing these survey data sets. This in turn could lead to the publication of incorrect inferences based on the survey data that effectively negate the resources dedicated to these surveys. In this article, we build on the results of a preliminary meta-analysis of 100 peer-reviewed journal articles presenting analyses of data from a variety of national health surveys, which suggested that analytic errors may be extremely prevalent in these types of investigations. We first perform a meta-analysis of a stratified random sample of 145 additional research products analyzing survey data from the Scientists and Engineers Statistical Data System (SESTAT), which describes features of the U.S. Science and Engineering workforce, and examine trends in the prevalence of analytic error across the decades used to stratify the sample. We once again find that analytic errors appear to be quite prevalent in these studies. Next, we present several example analyses of real SESTAT data, and demonstrate that a failure to perform these analyses correctly can result in substantially biased estimates with standard errors that do not adequately reflect complex sample design features. Collectively, the results of this investigation suggest that reviewers of this type of research need to pay much closer attention to the analytic methods employed by researchers attempting to publish or present secondary analyses of survey data.

  8. How Big of a Problem is Analytic Error in Secondary Analyses of Survey Data?

    Science.gov (United States)

    West, Brady T.; Sakshaug, Joseph W.; Aurelien, Guy Alain S.

    2016-01-01

    Secondary analyses of survey data collected from large probability samples of persons or establishments further scientific progress in many fields. The complex design features of these samples improve data collection efficiency, but also require analysts to account for these features when conducting analysis. Unfortunately, many secondary analysts from fields outside of statistics, biostatistics, and survey methodology do not have adequate training in this area, and as a result may apply incorrect statistical methods when analyzing these survey data sets. This in turn could lead to the publication of incorrect inferences based on the survey data that effectively negate the resources dedicated to these surveys. In this article, we build on the results of a preliminary meta-analysis of 100 peer-reviewed journal articles presenting analyses of data from a variety of national health surveys, which suggested that analytic errors may be extremely prevalent in these types of investigations. We first perform a meta-analysis of a stratified random sample of 145 additional research products analyzing survey data from the Scientists and Engineers Statistical Data System (SESTAT), which describes features of the U.S. Science and Engineering workforce, and examine trends in the prevalence of analytic error across the decades used to stratify the sample. We once again find that analytic errors appear to be quite prevalent in these studies. Next, we present several example analyses of real SESTAT data, and demonstrate that a failure to perform these analyses correctly can result in substantially biased estimates with standard errors that do not adequately reflect complex sample design features. Collectively, the results of this investigation suggest that reviewers of this type of research need to pay much closer attention to the analytic methods employed by researchers attempting to publish or present secondary analyses of survey data. PMID:27355817
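    The core analytic error this abstract describes, treating complex-sample data as if it came from a simple random sample, can be sketched with a toy example (invented numbers, not SESTAT data):

    ```python
    import numpy as np

    # Hypothetical two-stratum sample: stratum 2 was undersampled,
    # so its respondents carry larger design weights.
    y = np.array([10.0, 12.0, 30.0, 32.0])   # survey responses (invented)
    w = np.array([1.0, 1.0, 4.0, 4.0])       # design weights (invented)

    naive_mean = y.mean()                      # unweighted: treats data as SRS
    weighted_mean = np.sum(w * y) / np.sum(w)  # design-weighted estimate

    print(naive_mean, weighted_mean)           # 21.0 vs 27.0
    ```

    Design-based standard errors additionally require the stratum and cluster identifiers; ignoring them typically understates the uncertainty of the estimates.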

  9. GOTHIC MODEL OF BWR SECONDARY CONTAINMENT DRAWDOWN ANALYSES

    International Nuclear Information System (INIS)

    Hansen, P.N.

    2004-01-01

    This article introduces a GOTHIC version 7.1 model of the Secondary Containment Reactor Building post-LOCA drawdown analysis for a BWR. GOTHIC is an EPRI-sponsored thermal-hydraulic code. This analysis is required by the utility to demonstrate an ability to restore and maintain the Secondary Containment Reactor Building negative pressure condition. The technical and regulatory issues associated with this modeling are presented. The analysis includes the effects of wind, elevation, and thermal conditions on building pressure. The model includes a multiple-volume representation which includes the spent fuel pool. In addition, heat sources and sinks are modeled as one-dimensional heat conductors. The leakage into the building is modeled to include both laminar and turbulent behavior, as established by actual plant test data. The GOTHIC code provides components to model the heat exchangers used for fuel pool cooling as well as area cooling via air coolers. The results of the evaluation are used to determine the time during which the Reactor Building is at a pressure that exceeds external conditions. This time period is established with the GOTHIC model based on the worst-case pressure conditions on the building. For this time period the utility must assume the primary containment leakage goes directly to the environment. Once the building pressure is restored below outside conditions, the release to the environment can be credited as a filtered release.

  10. Matrix Summaries Improve Research Reports: Secondary Analyses Using Published Literature

    Science.gov (United States)

    Zientek, Linda Reichwein; Thompson, Bruce

    2009-01-01

    Correlation matrices and standard deviations are the building blocks of many of the commonly conducted analyses in published research, and AERA and APA reporting standards recommend their inclusion when reporting research results. The authors argue that the inclusion of correlation/covariance matrices, standard deviations, and means can enhance…

  11. Fast and accurate methods for phylogenomic analyses

    Directory of Open Access Journals (Sweden)

    Warnow Tandy

    2011-10-01

    Full Text Available Abstract Background Species phylogenies are not estimated directly, but rather through phylogenetic analyses of different gene datasets. However, true gene trees can differ from the true species tree (and hence from one another) due to biological processes such as horizontal gene transfer, incomplete lineage sorting, and gene duplication and loss, so that no single gene tree is a reliable estimate of the species tree. Several methods have been developed to estimate species trees from estimated gene trees, differing according to the specific algorithmic technique used and the biological model used to explain differences between species and gene trees. Relatively little is known about the relative performance of these methods. Results We report on a study evaluating several different methods for estimating species trees from sequence datasets, simulating sequence evolution under a complex model including indels (insertions and deletions), substitutions, and incomplete lineage sorting. The most important finding of our study is that some fast and simple methods are nearly as accurate as the most accurate methods, which employ sophisticated statistical methods and are computationally quite intensive. We also observe that methods that explicitly consider errors in the estimated gene trees produce more accurate trees than methods that assume the estimated gene trees are correct. Conclusions Our study shows that highly accurate estimations of species trees are achievable, even when gene trees differ from each other and from the species tree, and that these estimations can be obtained using fairly simple and computationally tractable methods.
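    The "estimate a species tree from many gene trees" idea can be sketched with a deliberately naive example: a majority vote over the sister pair of a three-taxon set. This is an invented toy, not one of the published methods the study evaluates, but it reflects the principle that under incomplete lineage sorting the most frequent gene-tree topology for a triplet is expected to match the species tree:

    ```python
    from collections import Counter

    # Each estimated gene tree on taxa {A, B, C} is summarized by its sister pair
    # (invented data; real methods parse full Newick trees over many taxa).
    gene_trees = [("A", "B"), ("A", "B"), ("A", "C"), ("B", "C"), ("A", "B")]

    def majority_species_triplet(trees):
        """Return the sister pair appearing most often among the gene trees."""
        counts = Counter(frozenset(t) for t in trees)
        return counts.most_common(1)[0][0]

    print(sorted(majority_species_triplet(gene_trees)))  # ['A', 'B']
    ```

    Real summary methods apply this kind of voting or pseudo-likelihood logic over all triplets or quartets and then assemble a full tree, which is why some of them remain fast while staying accurate.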

  12. Direct Analyses of Secondary Metabolites by Mass Spectrometry Imaging (MSI) from Sunflower (Helianthus annuus L.) Trichomes.

    Science.gov (United States)

    Brentan Silva, Denise; Aschenbrenner, Anna-Katharina; Lopes, Norberto Peporine; Spring, Otmar

    2017-05-10

    Helianthus annuus (sunflower) displays non-glandular trichomes (NGT), capitate glandular trichomes (CGT), and linear glandular trichomes (LGT), which reveal different chemical compositions and locations in different plant tissues. With matrix-assisted laser desorption/ionization (MALDI) and laser desorption/ionization (LDI) mass spectrometry imaging (MSI) techniques, efficient methods were developed to analyze the tissue distribution of secondary metabolites (flavonoids and sesquiterpenes) and proteins inside of trichomes. Herein, we analyzed sesquiterpene lactones, present in CGT, from leaf transversal sections using the matrix 2,5-dihydroxybenzoic acid (DHB) and α-cyano-4-hydroxycinnamic acid (CHCA) (mixture 1:1) with sodium ions added to increase the ionization in positive ion mode. The results observed for sesquiterpenes and polymethoxylated flavones from LGT were similar. However, upon desiccation, LGT changed their shape in the ionization source, complicating analyses by MSI mainly after matrix application. An alternative method could be applied to LGT regions by employing LDI (without matrix) in negative ion mode. The polymethoxylated flavones were easily ionized by LDI, producing images with higher resolution, but the sesquiterpenes were not observed in spectra. Thus, the application and viability of MALDI imaging for the analyses of protein and secondary metabolites inside trichomes were confirmed, highlighting the importance of optimization parameters.

  13. Method for the secondary recovery of petroleum

    Energy Technology Data Exchange (ETDEWEB)

    Roth, H H

    1966-10-11

    A method for the secondary recovery of petroleum from subterranean formations consists of flooding these formations with aqueous fluids. These aqueous fluids contain one or more saline solutes which are either present before the flooding or which are dissolved from the formation during flooding. These fluids contain, as a thickening agent, a substantially linear, high molecular weight, water-soluble alkenylaromatic polymer which has sulfonic acid or sulfonate groups on the aromatic nuclei. This saline solute and polymer are mutually compatible. (5 claims)

  14. Uncertainty Analyses for Back Projection Methods

    Science.gov (United States)

    Zeng, H.; Wei, S.; Wu, W.

    2017-12-01

    So far few comprehensive error analyses for back projection methods have been conducted, although it is evident that high frequency seismic waves can be easily affected by earthquake depth, focal mechanisms and the Earth's 3D structures. Here we perform 1D and 3D synthetic tests for two back projection methods, MUltiple SIgnal Classification (MUSIC) (Meng et al., 2011) and Compressive Sensing (CS) (Yao et al., 2011). We generate synthetics for both point sources and finite rupture sources with different depths, focal mechanisms, as well as 1D and 3D structures in the source region. The 3D synthetics are generated through a hybrid scheme of Direct Solution Method and Spectral Element Method. Then we back project the synthetic data using MUSIC and CS. The synthetic tests show that the depth phases can be back projected as artificial sources both in space and time. For instance, for a source depth of 10km, back projection gives a strong signal 8km away from the true source. Such bias increases with depth, e.g., the error of horizontal location could be larger than 20km for a depth of 40km. If the array is located around the nodal direction of direct P-waves the teleseismic P-waves are dominated by the depth phases. Therefore, back projections are actually imaging the reflection points of depth phases more than the rupture front. Besides depth phases, the strong and long lasted coda waves due to 3D effects near trench can lead to additional complexities tested here. The strength contrast of different frequency contents in the rupture models also produces some variations to the back projection results. In the synthetic tests, MUSIC and CS derive consistent results. While MUSIC is more computationally efficient, CS works better for sparse arrays. In summary, our analyses indicate that the impact of various factors mentioned above should be taken into consideration when interpreting back projection images, before we can use them to infer the earthquake rupture physics.

  15. Wall thinning trend analyses for secondary side piping of Korean NPPs

    International Nuclear Information System (INIS)

    Hwang, K.M.; Jin, T.E.; Lee, S.H.; Jeon, S.C.

    2003-01-01

    Since the mid-1990s, nuclear power plants in Korea have experienced wall thinning, leaks, and ruptures of secondary side piping caused by flow-accelerated corrosion (FAC). The pipe failures have increased as operating time progresses. In order to prevent the FAC-induced pipe failures and to develop an effective FAC management strategy, KEPRI and KOPEC have conducted a study for developing systematic FAC management technology for secondary side piping of all Korean nuclear power plants. As a part of the study, FAC analyses were performed using the CHECWORKS code. The analysis results were used to select components for inspection and to determine inspection intervals on each nuclear power plant. This paper describes the introduction of the FAC analysis method and the wall thinning trend analysis results by reactor type, system, and water treatment amine. This paper also represents the site application feasibility for secondary side piping management. The site application feasibility of the analysis results was proven by comparisons of predicted and measured wear rates. (author)

  16. The secondary stress analyses in the fuel pin cladding due to the swelling gradient across the wall thickness

    International Nuclear Information System (INIS)

    Uwaba, Tomoyuki; Ukai, Shigeharu

    2002-01-01

    Irradiation deformation analyses of FBR fuel cladding were made by using the finite element method. In these analyses the history of the stress occurring in the cladding was evaluated, paying attention to the secondary stress induced by the swelling difference across the wall thickness. It was revealed that the difference in swelling incubation dose across the thickness and the irradiation creep deformation play an important role in the history of the secondary stress. The effect of stress-enhanced swelling was also analyzed in this study.

  17. Testing Mediation Using Multiple Regression and Structural Equation Modeling Analyses in Secondary Data

    Science.gov (United States)

    Li, Spencer D.

    2011-01-01

    Mediation analysis in child and adolescent development research is possible using large secondary data sets. This article provides an overview of two statistical methods commonly used to test mediated effects in secondary analysis: multiple regression and structural equation modeling (SEM). Two empirical studies are presented to illustrate the…
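    The multiple-regression (product-of-coefficients) approach to mediation that the article overviews can be sketched on synthetic data; the variable names, effect sizes, and sample size below are invented for illustration:

    ```python
    import numpy as np

    # Simulate X -> M -> Y with a direct X -> Y path (all parameters invented).
    rng = np.random.default_rng(0)
    n = 500
    x = rng.normal(size=n)                                  # predictor
    m = 0.5 * x + rng.normal(scale=0.5, size=n)             # mediator: a-path = 0.5
    y = 0.7 * m + 0.2 * x + rng.normal(scale=0.5, size=n)   # b-path = 0.7, direct = 0.2

    def ols(X, y):
        """Least-squares slope coefficients, with an intercept column added."""
        X = np.column_stack([np.ones(len(X)), X])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return beta[1:]  # drop the intercept

    a = ols(x.reshape(-1, 1), m)[0]          # regress M on X
    b = ols(np.column_stack([m, x]), y)[0]   # regress Y on M, controlling for X
    indirect = a * b                          # mediated (indirect) effect, ~0.35
    ```

    SEM packages estimate the same paths simultaneously and add fit indices and standard errors for the indirect effect (e.g., via the delta method or bootstrapping), which is the main practical difference from this piecewise regression sketch.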

  18. The use of secondary ion mass spectrometry in forensic analyses of ultra-small samples

    Science.gov (United States)

    Cliff, John

    2010-05-01

    It is becoming increasingly important in forensic science to perform chemical and isotopic analyses on very small sample sizes. Moreover, in some instances the signature of interest may be incorporated in a vast background, making analyses impossible by bulk methods. Recent advances in instrumentation make secondary ion mass spectrometry (SIMS) a powerful tool to apply to these problems. As an introduction, we present three types of forensic analyses in which SIMS may be useful. The causal organism of anthrax (Bacillus anthracis) chelates Ca and other metals during spore formation. Thus, the spores contain a trace element signature related to the growth medium that produced the organisms. Although other techniques have been shown to be useful in analyzing these signatures, the sample size requirements are generally relatively large. We have shown that time-of-flight SIMS (TOF-SIMS), combined with multivariate analysis, can clearly separate Bacillus sp. cultures prepared in different growth media using analytical spot sizes containing approximately one nanogram of spores. An important emerging field in forensic analysis is that of provenance of fecal pollution. The strategy of choice for these analyses, developing host-specific nucleic acid probes, has met with considerable difficulty due to lack of specificity of the probes. One potentially fruitful strategy is to combine in situ nucleic acid probing with high-precision isotopic analyses. Bulk analyses of human and bovine fecal bacteria, for example, indicate a relative difference in δ13C content of about 4 per mil. We have shown that sample sizes of several nanograms can be analyzed with the IMS 1280 with precisions capable of separating two per mil differences in δ13C. The NanoSIMS 50 is capable of much better spatial resolution than the IMS 1280, albeit at a cost of analytical precision. Nevertheless, we have documented precision capable of separating five per mil differences in δ13C using analytical spots containing

  19. Methods for Handling Missing Secondary Respondent Data

    Science.gov (United States)

    Young, Rebekah; Johnson, David

    2013-01-01

    Secondary respondent data are underutilized because researchers avoid using these data in the presence of substantial missing data. The authors reviewed, evaluated, and tested solutions to this problem. Five strategies of dealing with missing partner data were reviewed: (a) complete case analysis, (b) inverse probability weighting, (c) correction…

  20. Methods and procedures for shielding analyses for the SNS

    International Nuclear Information System (INIS)

    Popova, I.; Ferguson, F.; Gallmeier, F.X.; Iverson, E.; Lu, Wei

    2011-01-01

    In order to provide radiologically safe Spallation Neutron Source operation, shielding analyses are performed in accordance with Oak Ridge National Laboratory internal regulations and with the Code of Federal Regulations. An overview of on-going shielding work for the accelerator facility and neutron beam lines is presented, together with the methods used to perform the analyses and the associated procedures and regulations. (author)

  1. Secondary air injection system and method

    Science.gov (United States)

    Wu, Ko-Jen; Walter, Darrell J.

    2014-08-19

    According to one embodiment of the invention, a secondary air injection system includes a first conduit in fluid communication with at least one first exhaust passage of the internal combustion engine and a second conduit in fluid communication with at least one second exhaust passage of the internal combustion engine, wherein the at least one first and second exhaust passages are in fluid communication with a turbocharger. The system also includes an air supply in fluid communication with the first and second conduits and a flow control device that controls fluid communication between the air supply and the first conduit and the second conduit and thereby controls fluid communication to the first and second exhaust passages of the internal combustion engine.

  2. The Physically Active Lifestyle of Flemish Secondary School Teachers: A Mixed-Methods Approach towards Developing a Physical Activity Intervention

    Science.gov (United States)

    Bogaert, Inge; De Martelaer, Kristine; Deforche, Benedicte; Clarys, Peter; Zinzen, Evert

    2015-01-01

    Objective: The primary aim of this study was to describe and analyse the physical activity and sedentary levels of secondary school teachers in Flanders. A secondary aim was to collect information regarding a possible worksite intervention of special relevance to secondary school teachers. Design: Mixed-methods quantitative and qualitative…

  3. Surface potential measurement of negative-ion-implanted insulators by analysing secondary electron energy distribution

    International Nuclear Information System (INIS)

    Toyota, Yoshitaka; Tsuji, Hiroshi; Nagumo, Syoji; Gotoh, Yasuhito; Ishikawa, Junzo; Sakai, Shigeki.

    1994-01-01

    The negative-ion implantation method we have proposed is a novel technique which can reduce surface charging of isolated electrodes by a large margin. In this paper, the way to determine the surface potential of negative-ion-implanted insulators by secondary electron energy analysis is described. The secondary electron energy distribution is obtained by a retarding-field type energy analyzer. The result shows that the surface of fused quartz implanted with negative ions (C⁻ at energies of 10 keV to 40 keV) is negatively charged by only several volts. This surface potential is extremely low compared with that from positive-ion implantation. Therefore, negative-ion implantation is a very effective method for charge-up-free implantation without charge compensation. (author)

  4. Secondary waste minimization in analytical methods

    International Nuclear Information System (INIS)

    Green, D.W.; Smith, L.L.; Crain, J.S.; Boparai, A.S.; Kiely, J.T.; Yaeger, J.S.; Schilling, J.B.

    1995-01-01

    The characterization phase of site remediation is an important and costly part of the process. Because toxic solvents and other hazardous materials are used in common analytical methods, characterization is also a source of new waste, including mixed waste. Alternative analytical methods can reduce the volume or form of hazardous waste produced either in the sample preparation step or in the measurement step. The authors are examining alternative methods in the areas of inorganic, radiological, and organic analysis. For determining inorganic constituents, alternative methods were studied for sample introduction into inductively coupled plasma spectrometers. Figures of merit for the alternative methods, as well as their associated waste volumes, were compared with the conventional approaches. In the radiological area, the authors are comparing conventional methods for gross α/β measurements of soil samples to an alternative method that uses high-pressure microwave dissolution. For determination of organic constituents, microwave-assisted extraction was studied for RCRA-regulated semivolatile organics in a variety of solid matrices, including spiked samples in blank soil; polynuclear aromatic hydrocarbons in soils, sludges, and sediments; and semivolatile organics in soil. Extraction efficiencies were determined under varying conditions of time, temperature, microwave power, moisture content, and extraction solvent. Solvent usage was cut from the 300 mL used in conventional extraction methods to about 30 mL. Extraction results varied from one matrix to another. In most cases, the microwave-assisted extraction technique was as efficient as the more common Soxhlet or sonication extraction techniques.

  5. Using the Socratic Method in Secondary Teaching.

    Science.gov (United States)

    Schoeman, Stephen

    1997-01-01

    Students are more accustomed to receiving knowledge than to questioning knowledge, challenging underlying assumptions, and seeing inconsistencies and irrelevancies. The Socratic method requires teachers to challenge students' critical thinking abilities by developing questions based on analogies and hypothetical situations. Although the Socratic…

  6. Methods for analysing cardiovascular studies with repeated measures

    NARCIS (Netherlands)

    Cleophas, T. J.; Zwinderman, A. H.; van Ouwerkerk, B. M.

    2009-01-01

    Background. Repeated measurements in a single subject are generally more similar than unrepeated measurements in different subjects. Unrepeated analyses of repeated data cause underestimation of the treatment effects. Objective. To review methods adequate for the analysis of cardiovascular studies
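    The abstract's point, that repeated measurements within a subject are more similar than measurements across subjects, and that ignoring this weakens the analysis, can be illustrated with invented data (a sketch, not the authors' analysis):

    ```python
    import numpy as np

    # Simulate a before/after cardiovascular measurement per subject
    # (all parameters invented). Between-subject spread dwarfs the noise.
    rng = np.random.default_rng(1)
    n = 30
    subject = rng.normal(scale=3.0, size=n)                 # stable subject level
    before = subject + rng.normal(scale=0.5, size=n)
    after = subject + 1.0 + rng.normal(scale=0.5, size=n)   # true effect = 1.0

    # Paired (repeated-measures) analysis: within-subject differences
    d = after - before
    t_paired = d.mean() / (d.std(ddof=1) / np.sqrt(n))

    # Unpaired analysis: treats the occasions as independent samples,
    # so the large between-subject variance inflates the error term
    se = np.sqrt(before.var(ddof=1) / n + after.var(ddof=1) / n)
    t_unpaired = (after.mean() - before.mean()) / se
    ```

    With correlated observations the paired t statistic is far larger than the unpaired one, which is the underestimation of treatment effects the authors warn about.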

  7. A vector matching method for analysing logic Petri nets

    Science.gov (United States)

    Du, YuYue; Qi, Liang; Zhou, MengChu

    2011-11-01

    Batch processing function and passing value indeterminacy in cooperative systems can be described and analysed by logic Petri nets (LPNs). To directly analyse the properties of LPNs, the concept of transition enabling vector sets is presented and a vector matching method used to judge the enabling transitions is proposed in this article. The incidence matrix of LPNs is defined; an equation about marking change due to a transition's firing is given; and a reachable tree is constructed. The state space explosion is mitigated to a certain extent from directly analysing LPNs. Finally, the validity and reliability of the proposed method are illustrated by an example in electronic commerce.

  8. Simplified Hybrid-Secondary Uncluttered Machine And Method

    Science.gov (United States)

    Hsu, John S [Oak Ridge, TN

    2005-05-10

    An electric machine (40, 40') has a stator (43), a rotor (46), and a primary air gap (48), and has secondary coils (47c, 47d) separated from the rotor (46) by a secondary air gap (49) so as to induce a slip current in the secondary coils (47c, 47d). The rotor (46, 76) has magnetic brushes (A, B, C, D) or wires (80) which couple flux in through the rotor (46) to the secondary coils (47c, 47d) without inducing a current in the rotor (46) and without coupling a stator rotational energy component to the secondary coils (47c, 47d). The machine can be operated as a motor or a generator in multi-phase or single-phase embodiments. A method of providing a slip energy controller is also disclosed.

  9. A method for rapid similarity analysis of RNA secondary structures

    Directory of Open Access Journals (Sweden)

    Liu Na

    2006-11-01

    Full Text Available Abstract Background Owing to the rapid expansion of RNA structure databases in recent years, efficient methods for structure comparison are in demand for function prediction and evolutionary analysis. Usually, the similarity of RNA secondary structures is evaluated based on tree models and dynamic programming algorithms. We present here a new method for the similarity analysis of RNA secondary structures. Results Three sets of real data have been used as input for the example applications. Set I includes the structures from 5S rRNAs. Set II includes the secondary structures from RNase P and RNase MRP. Set III includes the structures from 16S rRNAs. Reasonable phylogenetic trees are derived for these three sets of data by using our method. Moreover, our program runs faster as compared to some existing ones. Conclusion The famous Lempel-Ziv algorithm can efficiently extract the information on repeated patterns encoded in RNA secondary structures and makes our method an alternative to analyze the similarity of RNA secondary structures. This method will also be useful to researchers who are interested in evolutionary analysis.
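    The Lempel-Ziv idea in this abstract can be sketched as follows. This is a generic LZ-complexity distance on dot-bracket strings, assumed for illustration rather than the authors' exact algorithm: concatenating two similar structures adds little new "vocabulary" to the parse, so the normalized complexity gain is small.

    ```python
    def lz_complexity(s):
        """Count phrases in a simple incremental (LZ78-style) parse of s."""
        seen, phrase, count = set(), "", 0
        for ch in s:
            phrase += ch
            if phrase not in seen:
                seen.add(phrase)
                count += 1
                phrase = ""
        return count + (1 if phrase else 0)  # count any trailing partial phrase

    def lz_distance(x, y):
        """Normalized extra complexity of parsing one string after the other."""
        cx, cy = lz_complexity(x), lz_complexity(y)
        cxy, cyx = lz_complexity(x + y), lz_complexity(y + x)
        return max(cxy - cx, cyx - cy) / max(cx, cy)

    # Dot-bracket secondary structures (invented examples)
    hairpin = "((((....))))"
    internal = "((..((...))..))"
    same = lz_distance(hairpin, hairpin)
    diff = lz_distance(hairpin, internal)
    # identical structures yield a smaller distance than different ones
    ```

    Pairwise distances of this kind can feed a standard distance-based tree builder (e.g., neighbor joining), which is how a structure-similarity measure turns into the phylogenetic trees the abstract reports.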

  10. Hybrid-secondary uncluttered permanent magnet machine and method

    Science.gov (United States)

    Hsu, John S.

    2005-12-20

    An electric machine (40) has a stator (43), a permanent magnet rotor (38) with permanent magnets (39) and a magnetic coupling uncluttered rotor (46) for inducing a slip energy current in secondary coils (47). A dc flux can be produced in the uncluttered rotor when the secondary coils are fed with dc currents. The magnetic coupling uncluttered rotor (46) has magnetic brushes (A, B, C, D) which couple flux in through the rotor (46) to the secondary coils (47c, 47d) without inducing a current in the rotor (46) and without coupling a stator rotational energy component to the secondary coils (47c, 47d). The machine can be operated as a motor or a generator in multi-phase or single-phase embodiments and is applicable to the hybrid electric vehicle. A method of providing a slip energy controller is also disclosed.

  11. The ad-libitum alcohol 'taste test': secondary analyses of potential confounds and construct validity

    OpenAIRE

    Jones, Andrew; Button, Emily; Rose, Abigail K.; Robinson, Eric; Christiansen, Paul; Di Lemma, Lisa; Field, Matt

    2015-01-01

    Rationale Motivation to drink alcohol can be measured in the laboratory using an ad-libitum 'taste test', in which participants rate the taste of alcoholic drinks whilst their intake is covertly monitored. Little is known about the construct validity of this paradigm. Objective The objective of this study was to investigate variables that may compromise the validity of this paradigm and its construct validity. Methods We re-analysed data from 12 studies from our laboratory that incorporated a...

  12. JUPITER and satellites: Clinical implications of the JUPITER study and its secondary analyses.

    Science.gov (United States)

    Kostapanos, Michael S; Elisaf, Moses S

    2011-07-26

    THE JUSTIFICATION FOR THE USE OF STATINS IN PREVENTION: an intervention trial evaluating rosuvastatin (JUPITER) study was a real breakthrough in primary cardiovascular disease prevention with statins, since it was conducted in apparently healthy individuals with normal levels of low-density lipoprotein cholesterol (LDL-C). In JUPITER, rosuvastatin was associated with significant reductions in cardiovascular outcomes as well as in overall mortality compared with placebo. In this paper the most important secondary analyses of the JUPITER trial are discussed, focusing on their novel findings regarding the role of statins in primary prevention. Also discussed are the characteristics of otherwise healthy normocholesterolemic subjects who are anticipated to benefit more from statin treatment in the clinical setting, such as subjects at "intermediate" or "high" 10-year risk according to the Framingham score and those who exhibit low post-treatment levels of both LDL-C and high-sensitivity C-reactive protein. JUPITER added to our knowledge that statins may be effective drugs in the primary prevention of cardiovascular disease in normocholesterolemic individuals at moderate-to-high risk. Also, statin treatment may reduce the risk of venous thromboembolism and preserve renal function. An increase in physician-reported diabetes represents a major safety concern associated with the use of the most potent statins.

  13. Secondary Data Analyses of Subjective Outcome Evaluation Data Based on Nine Databases

    Directory of Open Access Journals (Sweden)

    Daniel T. L. Shek

    2012-01-01

    Full Text Available The purpose of this study was to evaluate the effectiveness of the Tier 1 Program of the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes in Hong Kong by analyzing 1,327 school-based program reports submitted by program implementers. In each report, program implementers were invited to write down five conclusions based on an integration of the subjective outcome evaluation data collected from the program participants and program implementers. Secondary data analyses were carried out by aggregating nine databases, with 14,390 meaningful units extracted from 6,618 conclusions. Results showed that most of the conclusions were positive in nature. The findings generally showed that the workers perceived the program and program implementers to be positive, and they also pointed out that the program could promote holistic development of the program participants in societal, familial, interpersonal, and personal aspects. However, difficulties encountered during program implementation (2.15%) and recommendations for improvement (16.26%) were also reported. In conjunction with the evaluation findings based on other strategies, the present study suggests that the Tier 1 Program of the Project P.A.T.H.S. is beneficial to the holistic development of the program participants.

  14. A review of bioinformatic methods for forensic DNA analyses.

    Science.gov (United States)

    Liu, Yao-Yuan; Harbison, SallyAnn

    2018-03-01

    Short tandem repeats, single nucleotide polymorphisms, and whole mitochondrial analyses are three classes of markers which will play an important role in the future of forensic DNA typing. The arrival of massively parallel sequencing platforms in forensic science reveals new information such as insights into the complexity and variability of the markers that were previously unseen, along with amounts of data too immense for analyses by manual means. Along with the sequencing chemistries employed, bioinformatic methods are required to process and interpret this new and extensive data. As more is learnt about the use of these new technologies for forensic applications, development and standardization of efficient, favourable tools for each stage of data processing is being carried out, and faster, more accurate methods that improve on the original approaches have been developed. As forensic laboratories search for the optimal pipeline of tools, sequencer manufacturers have incorporated pipelines into sequencer software to make analyses convenient. This review explores the current state of bioinformatic methods and tools used for the analyses of forensic markers sequenced on the massively parallel sequencing (MPS) platforms currently most widely used. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Study of thermal-hydraulic analyses with CIP method

    International Nuclear Information System (INIS)

    Doi, Yoshihiro

    1996-09-01

    A new type of numerical scheme, CIP, has been proposed for solving hyperbolic equations and has attracted attention as a scheme with low numerical diffusion. The C-CUP method based on the CIP scheme has been adopted in numerical simulations treating compressible and incompressible fluids, phase-change phenomena and mixture fluids. To evaluate the applicability of the CIP scheme and the C-CUP method to thermal-hydraulic analyses related to Fast Breeder Reactors (FBRs), the scheme and the method were reviewed. The features of the CIP scheme and the procedure of the C-CUP method are presented. The CIP scheme is used to solve the linear hyperbolic equations for the advection terms in the basic fluid equations. Its key feature is that the profile between grid points is described by a cubic polynomial constrained by the nodal values and their spatial derivatives, which allows the scheme to capture steep changes in the solution and suppress numerical error. In the C-CUP method, the basic fluid equations are split into advection terms and the remaining terms; the advection terms are solved with the CIP scheme and the remaining terms with a difference method. The C-CUP method is robust against numerical instability, but mass conservation can be violated because the fluid equations are solved in nonconservative form. Numerical analyses with the CIP scheme and the C-CUP method have been performed for phase change, mixtures and moving objects. These analyses rely on the scheme's robustness to steep density changes and its usefulness for interface tracking. (author)
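The CIP idea described in the abstract — a cubic profile between grid points constrained by nodal values and their spatial derivatives — can be illustrated for 1-D constant-velocity advection. This is a textbook-style sketch of the standard Yabe-type update (periodic boundary, u > 0), not the reviewed FBR code:

```python
import numpy as np

def cip_advect(f, g, u, dx, dt, nsteps):
    """1-D constant-velocity advection with the CIP scheme (u > 0, periodic).

    f holds nodal values, g their spatial derivatives; both are advanced
    together, which is what lets CIP keep steep profiles sharp.
    """
    xi = -u * dt   # departure-point offset from each node
    D = -dx        # signed distance to the upwind node i-1
    for _ in range(nsteps):
        fup = np.roll(f, 1)   # f[i-1] with periodic wrap
        gup = np.roll(g, 1)
        # cubic a*xi^3 + b*xi^2 + g*xi + f matching f, g at both nodes
        a = (g + gup) / D**2 + 2.0 * (f - fup) / D**3
        b = 3.0 * (fup - f) / D**2 - (2.0 * g + gup) / D
        f, g = (a * xi**3 + b * xi**2 + g * xi + f,
                3.0 * a * xi**2 + 2.0 * b * xi + g)
    return f, g

# demo: advect a Gaussian pulse 10 cells to the right;
# at CFL = 1 the cubic interpolation reduces to an exact one-cell shift
n = 64
x = np.arange(n, dtype=float)
f0 = np.exp(-0.05 * (x - 20.0) ** 2)
g0 = -0.1 * (x - 20.0) * f0          # analytic derivative of f0
f1, g1 = cip_advect(f0, g0, u=1.0, dx=1.0, dt=1.0, nsteps=10)
```

At CFL below one the same update interpolates inside the upwind cell, which is where the scheme's low numerical diffusion shows up.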

  16. Dynamics of energy systems: Methods of analysing technology change

    Energy Technology Data Exchange (ETDEWEB)

    Neij, Lena

    1999-05-01

    Technology change will have a central role in achieving a sustainable energy system. This calls for methods of analysing the dynamics of energy systems in view of technology change, and policy instruments for effecting and accelerating technology change. In this thesis, such methods have been developed, applied and assessed. Two types of methods have been considered: methods of analysing and projecting the dynamics of future technology change, and methods of evaluating policy instruments effecting technology change, i.e. market transformation programmes. Two methods are focused on analysing the dynamics of future technology change: vintage models and experience curves. Vintage models, which allow for complex analysis of annual streams of energy and technological investments, are applied to the analysis of the time dynamics of electricity demand for lighting and air-distribution in Sweden. The results of the analyses show that the Swedish electricity demand for these purposes could decrease over time, relative to a reference scenario, if policy instruments are used. Experience curves are used to provide insight into the prospects of diffusion of wind turbines and photovoltaic (PV) modules due to cost reduction. The results show potential for considerable cost reduction for wind-generated electricity, which, in turn, could lead to major diffusion of wind turbines. The results also show that major diffusion of PV modules, and a reduction of the cost of PV-generated electricity down to the level of conventional base-load electricity, will depend on large investments in bringing the costs down (through RD&D, market incentives and investments in niche markets) or on the introduction of new generations of PV modules (e.g. high-efficiency mass-produced thin-film cells). Moreover, a model has been developed for the evaluation of market transformation programmes, i.e. policy instruments that effect technology change and the introduction and commercialisation of energy
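An experience curve of the kind used in the thesis relates unit cost to cumulative production through a constant progress ratio (the cost multiplier per doubling of cumulative output). A minimal sketch with illustrative numbers rather than the thesis's wind/PV data:

```python
import math

def experience_curve_cost(c0, x0, x, progress_ratio):
    """Unit cost after cumulative production x, given cost c0 at cumulative x0.

    progress_ratio is the cost multiplier per doubling of cumulative output
    (e.g. 0.8 means a 20% cost reduction each time output doubles).
    """
    b = -math.log2(progress_ratio)       # experience index
    return c0 * (x / x0) ** (-b)

# illustrative: 100 cost units at the first unit, 80% progress ratio
cost_2x = experience_curve_cost(100.0, 1.0, 2.0, 0.8)   # after one doubling
cost_4x = experience_curve_cost(100.0, 1.0, 4.0, 0.8)   # after two doublings
```

Each doubling multiplies cost by the progress ratio, so two doublings at 0.8 leave 64% of the initial cost.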

  17. APPLYING SPECTROSCOPIC METHODS ON ANALYSES OF HAZARDOUS WASTE

    OpenAIRE

    Dobrinić, Julijan; Kunić, Marija; Ciganj, Zlatko

    2000-01-01

    Abstract The paper presents results of measuring the content of heavy and other metals in waste samples from the hazardous waste disposal site of Sovjak near Rijeka. The preliminary design elaboration and the choice of the waste disposal sanification technology were preceded by the sampling and physico-chemical analyses of disposed waste, enabling its categorization. The following spectroscopic methods were applied on metal content analysis: Atomic absorption spectroscopy (AAS) and plas...

  18. Rtools: a web server for various secondary structural analyses on single RNA sequences.

    Science.gov (United States)

    Hamada, Michiaki; Ono, Yukiteru; Kiryu, Hisanori; Sato, Kengo; Kato, Yuki; Fukunaga, Tsukasa; Mori, Ryota; Asai, Kiyoshi

    2016-07-08

    The secondary structures, as well as the nucleotide sequences, are important features of RNA molecules that characterize their functions. According to the thermodynamic model, however, the probability of any single secondary structure is very small. As a consequence, any tool to predict the secondary structures of RNAs has limited accuracy. On the other hand, there are a few tools that compensate for the imperfect predictions by calculating and visualizing secondary structural information from RNA sequences. It is desirable to obtain this rich information through a friendly interface. We implemented a web server of tools to predict secondary structures and to calculate various structural features based on the energy models of secondary structures. By just giving an RNA sequence to the web server, the user can get different types of solutions of the secondary structures, the marginal probabilities such as base-pairing probabilities, loop probabilities and accessibilities of the local bases, the energy changes caused by arbitrary base mutations, as well as measures for validating the predicted secondary structures. The web server is available at http://rtools.cbrc.jp, which integrates the software tools CentroidFold, CentroidHomfold, IPKnot, CapR, Raccess, Rchange and RintD. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  19. Role of BWR secondary containments in severe accident mitigation: issues and insights from recent analyses

    International Nuclear Information System (INIS)

    Greene, S.R.

    1988-01-01

    All commercial boiling water reactor (BWR) plants in the US employ primary containments of the pressure suppression design. These primary containments are surrounded and enclosed by a secondary containment consisting of a reactor building and refueling bay (MK I and MK II designs), a shield building, auxiliary building and fuel building (MK III), or an auxiliary building and enclosure building (Grand Gulf style MK III). Although secondary containment designs are highly plant specific, their purpose is to minimize the ground level release of radioactive material for a spectrum of traditional design basis accidents. While not designed for severe accident mitigation, these secondary containments might also reduce the radiological consequences of severe accidents. This issue is receiving increasing attention due to concerns that BWR MK I primary containment integrity would be lost should a significant mass of molten debris escape the reactor vessel during a severe accident. This paper presents a brief overview of domestic BWR secondary containment designs and highlights plant-specific features that could influence secondary containment severe accident survivability and accident mitigation effectiveness. Current issues surrounding secondary containment performance are discussed, and insights gained from recent ORNL secondary containment studies of Browns Ferry, Peach Bottom, and Shoreham are presented. Areas of significant uncertainty are identified and recommendations for future research are presented

  20. Analyses of the Secondary Particle Radiation and the DNA Damage it Causes to Human Keratinocytes

    Energy Technology Data Exchange (ETDEWEB)

    Lebel E. A.; Tafrov S.; Rusek, A.; Sivertz, M. B.; Yip, K.; Thompson, K. H.

    2011-11-01

    High-energy protons, and high mass and energy ions, along with the secondary particles they produce, are the main contributors to the radiation hazard during space explorations. Skin, particularly the epidermis, consisting mainly of keratinocytes with potential for proliferation and malignant transformation, absorbs the majority of the radiation dose. Therefore, we used normal human keratinocytes to investigate and quantify the DNA damage caused by secondary radiation. Its manifestation depends on the presence of retinol in the serum-free media, and is regulated by phosphatidylinositol 3-kinases. We simulated the generation of secondary radiation after the impact of protons and iron ions on an aluminum shield. We also measured the intensity and the type of the resulting secondary particles at two sample locations; our findings agreed well with our predictions. We showed that secondary particles inflict DNA damage to different extents, depending on the type of primary radiation. Low-energy protons produce fewer secondary particles and cause less DNA damage than do high-energy protons. However, both generate fewer secondary particles and inflict less DNA damage than do high mass and energy ions. The majority of cells repaired the initial damage, as denoted by the presence of 53BP1 foci, within the first 24 hours after exposure, but some cells maintained the 53BP1 foci longer.

  1. Global earthquake casualties due to secondary effects: A quantitative analysis for improving rapid loss analyses

    Science.gov (United States)

    Marano, K.D.; Wald, D.J.; Allen, T.I.

    2010-01-01

    This study presents a quantitative and geospatial description of global losses due to earthquake-induced secondary effects, including landslide, liquefaction, tsunami, and fire for events during the past 40 years. These processes are of great importance to the US Geological Survey's (USGS) Prompt Assessment of Global Earthquakes for Response (PAGER) system, which is currently being developed to deliver rapid earthquake impact and loss assessments following large/significant global earthquakes. An important question is how dominant are losses due to secondary effects (and under what conditions, and in which regions)? Thus, which of these effects should receive higher priority research efforts in order to enhance PAGER's overall assessment of earthquakes losses and alerting for the likelihood of secondary impacts? We find that while 21.5% of fatal earthquakes have deaths due to secondary (non-shaking) causes, only rarely are secondary effects the main cause of fatalities. The recent 2004 Great Sumatra-Andaman Islands earthquake is a notable exception, with extraordinary losses due to tsunami. The potential for secondary hazards varies greatly, and systematically, due to regional geologic and geomorphic conditions. Based on our findings, we have built country-specific disclaimers for PAGER that address potential for each hazard (Earle et al., Proceedings of the 14th World Conference on Earthquake Engineering, Beijing, China, 2008). We will now focus on ways to model casualties from secondary effects based on their relative importance as well as their general predictability. © Springer Science+Business Media B.V. 2009.

  2. Classification of methods and equipment recovery secondary waters

    Directory of Open Access Journals (Sweden)

    G. V. Kalashnikov

    2017-01-01

    Full Text Available The purification of secondary waters from industrial production occupies an important place in the environmental activities of all food and chemical industries. For cleaning the transporter-washing water of beet-sugar production, the key role is played by the equipment of treatment plants. The wide variety of wastewater-treatment equipment is classified according to various methods. Typical structures used are sedimentation tanks, hydrocyclones, separators and centrifuges. These, in turn, differ in degree of purification and in throughput of the incoming suspension and purified secondary water, and the equipment is divided into designs depending on the range of particles to be removed. A general classification of methods for cleaning transporter-washing water, and of the corresponding equipment, is given. Based on an analysis of processes and instrumentation, the main methods of wastewater treatment are identified: mechanical, physicochemical, combined, biological and disinfection. To increase the degree of purification and reduce technical and economic costs, a combined method is widely used. The main task of the site for cleaning the transporter-washing waters of sugar-beet production is to provide the enterprise with water of the required quantity and quality, with economical use of water resources and without pollution of surface water and groundwater by industrial wastewater. New types of foreign-made washing equipment, which require a large amount of high-quality purified transporter-washing water for normal operation, are currently widely used in the sugar industry. The proposed classification makes it possible to carry out a comparative technical and economic analysis when choosing methods and equipment for the recuperation of secondary waters. The main equipment for secondary water recovery used at the beet-sugar plant is considered. The most common beet processing plant is a

  3. Methodical treatment of dependent failures in risk analyses

    International Nuclear Information System (INIS)

    Hennings, W.; Mertens, J.

    1987-06-01

    In this report the state-of-the-art regarding dependent failures is compiled and commented on. Among others, the following recommendations are inferred: The term 'common mode failures' should be restricted to failures of redundant, similar components; the generic term is 'dependent failures' with the subsets 'causal failures' and 'common cause failures'. In risk studies, dependent failures should be covered as far as possible by 'explicit methods'. Nevertheless an uncovered rest remains, which should be accounted for by sensitivity analyses using 'implicit methods'. For this the homogeneous Marshall-Olkin model is recommended. Because the available reports on operating experiences only record 'common mode failures' systematically, it is recommended to additionally apply other methods, e.g. carry out a 'precursor study'. (orig.) [de

  4. International intercalibration as a method for control of radiochemical analyses

    International Nuclear Information System (INIS)

    Angelova, A.; Totseva, R.; Karaivanova, R.; Dandulova, Z.; Botsova, L.

    1994-01-01

    The participation of the Radioecology Section at the National Centre for Radiology and Radiation Protection (NCRRP) in the International Interlaboratory Comparison of radiochemical analyses organized by WHO is reported. The method of evaluating the accuracy of results from intercalibrations concerning radionuclide determination in environmental samples is outlined. The data from analyses of cesium-137, strontium-90 and radium-226 in milk, sediments, soil and seaweed made by 21 laboratories are presented. They show good accuracy of the NCRRP results. 1 tab., 2 figs., 6 refs.

  5. Quantitative numerical method for analysing slip traces observed by AFM

    International Nuclear Information System (INIS)

    Veselý, J; Cieslar, M; Coupeau, C; Bonneville, J

    2013-01-01

    Atomic force microscopy (AFM) is used more and more routinely to study, at the nanometre scale, the slip traces produced on the surface of deformed crystalline materials. Taking full advantage of the quantitative height data of the slip traces, which can be extracted from these observations, requires, however, an adequate and robust processing of the images. In this paper an original method is presented, which allows the fitting of AFM scan-lines with a specific parameterized step function without any averaging treatment of the original data. This yields a quantitative and full description of the changes in step shape along the slip trace. The strength of the proposed method is demonstrated on several typical examples encountered in plasticity by analysing nano-scale structures formed on the sample surface by emerging dislocations. (paper)
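The paper's specific parameterized step function is not given in the abstract. As an illustration, a single scan line can be fitted with a generic smoothed-step model (a tanh shape is assumed here), using a grid search over the nonlinear parameters and linear least squares for the baseline and step height. All data below are synthetic:

```python
import numpy as np

def fit_step(x, y, x0_grid, w_grid):
    """Fit y ~ base + height * 0.5*(1 + tanh((x - x0)/w)).

    For each candidate step position x0 and width w, the baseline and
    height follow from linear least squares; the best residual wins.
    """
    best = None
    for x0 in x0_grid:
        for w in w_grid:
            s = 0.5 * (1.0 + np.tanh((x - x0) / w))
            A = np.column_stack([np.ones_like(x), s])
            coef = np.linalg.lstsq(A, y, rcond=None)[0]
            r = np.sum((A @ coef - y) ** 2)
            if best is None or r < best[0]:
                best = (r, coef[0], coef[1], x0, w)
    return best[1:]   # base, height, x0, w

# synthetic scan line: a 2 nm step at x = 50 nm plus instrument noise
rng = np.random.default_rng(0)
x = np.linspace(0.0, 100.0, 400)
clean = 1.0 + 2.0 * 0.5 * (1.0 + np.tanh((x - 50.0) / 3.0))
y = clean + rng.normal(0.0, 0.05, x.size)

base, height, x0, w = fit_step(x, y, np.linspace(40.0, 60.0, 81),
                               [1.0, 2.0, 3.0, 4.0])
```

Fitting the raw line rather than an averaged profile is the point the abstract emphasizes: the step height and position are recovered per scan line, so their variation along the slip trace is preserved.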

  6. The ad-libitum alcohol 'taste test': secondary analyses of potential confounds and construct validity.

    Science.gov (United States)

    Jones, Andrew; Button, Emily; Rose, Abigail K; Robinson, Eric; Christiansen, Paul; Di Lemma, Lisa; Field, Matt

    2016-03-01

    Motivation to drink alcohol can be measured in the laboratory using an ad-libitum 'taste test', in which participants rate the taste of alcoholic drinks whilst their intake is covertly monitored. Little is known about the construct validity of this paradigm. The objective of this study was to investigate variables that may compromise the validity of this paradigm and its construct validity. We re-analysed data from 12 studies from our laboratory that incorporated an ad-libitum taste test. We considered time of day and participants' awareness of the purpose of the taste test as potential confounding variables. We examined whether gender, typical alcohol consumption, subjective craving, scores on the Alcohol Use Disorders Identification Test and perceived pleasantness of the drinks predicted ad-libitum consumption (construct validity). We included 762 participants (462 female). Participant awareness and time of day were not related to ad-libitum alcohol consumption. Males drank significantly more alcohol than females. Typical alcohol consumption (p = 0.04), craving and perceived pleasantness of the drinks predicted ad-libitum alcohol consumption. The construct validity of the taste test was supported by relationships between ad-libitum consumption and typical alcohol consumption, craving and pleasantness ratings of the drinks. The ad-libitum taste test is a valid method for the assessment of alcohol intake in the laboratory.

  7. A wavenumber approach to analysing the active control of plane waves with arrays of secondary sources

    Science.gov (United States)

    Elliott, Stephen J.; Cheer, Jordan; Bhan, Lam; Shi, Chuang; Gan, Woon-Seng

    2018-04-01

    The active control of an incident sound field with an array of secondary sources is a fundamental problem in active noise control. In this paper the optimal performance of an infinite array of secondary sources in controlling a plane incident sound wave is first considered in free space. An analytic solution for normal incidence plane waves is presented, indicating a clear cut-off frequency for good performance, when the separation distance between the uniformly-spaced sources is equal to a wavelength. The extent of the near field pressure close to the source array is also quantified, since this determines the positions of the error microphones in a practical arrangement. The theory is also extended to oblique incident waves. This result is then compared with numerical simulations of controlling the sound power radiated through an open aperture in a rigid wall, subject to an incident plane wave, using an array of secondary sources in the aperture. In this case the diffraction through the aperture becomes important when its size is comparable to the acoustic wavelength, in which case only a few sources are necessary for good control. When the aperture is large compared to the wavelength, diffraction is less important but more secondary sources are needed for good control, and the results then become similar to those for the free-field problem with an infinite source array.
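The stated cut-off — good control up to the frequency at which the source spacing equals one acoustic wavelength — translates into a one-line estimate. The speed of sound and the spacing below are illustrative values, not taken from the paper:

```python
def array_cutoff_frequency(spacing_m, c=343.0):
    """Frequency at which the acoustic wavelength equals the source spacing.

    Below this frequency a uniformly spaced secondary-source array can, in
    principle, control a normally incident plane wave; above it, grating
    effects degrade performance.
    """
    return c / spacing_m

f_c = array_cutoff_frequency(0.1)   # 0.1 m spacing in air
```

Halving the spacing doubles the usable bandwidth, which is why aperture size relative to wavelength governs how many sources are needed.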

  8. A Fuzzy Logic Based Method for Analysing Test Results

    Directory of Open Access Journals (Sweden)

    Le Xuan Vinh

    2017-11-01

    Full Text Available Network operators must perform many tasks to ensure smooth operation of the network, such as planning, monitoring, etc. Among those tasks, regular testing of network performance and network errors, and troubleshooting, is very important. Meaningful test results allow the operators to evaluate network performance, identify any shortcomings and better plan for network upgrades. Due to the diverse and mainly unquantifiable nature of network testing results, there is a need to develop a method for systematically and rigorously analysing these results. In this paper, we present STAM (System Test-result Analysis Method), which employs a bottom-up hierarchical processing approach using fuzzy logic. STAM is capable of combining all test results into a quantitative description of the network performance in terms of network stability, the significance of various network errors and the performance of each function block within the network. The validity of this method has been successfully demonstrated in assisting the testing of a VoIP system at the Research Institute of Posts and Telecoms in Vietnam. The paper is organized as follows. The first section gives an overview of fuzzy logic theory, the concepts of which are used in the development of STAM. The next section describes STAM. The last section, demonstrating STAM's capability, presents a success story in which STAM is successfully applied.
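STAM's actual membership functions and aggregation rules are not specified in the abstract; the bottom-up idea — map raw test results to fuzzy memberships, then aggregate sub-scores into higher-level scores — can be sketched generically. The triangular membership function and weighted averaging below are assumptions for illustration:

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership on [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def aggregate(scores_weights):
    """Combine normalized sub-scores (score, weight) into a parent score."""
    total_w = sum(w for _, w in scores_weights)
    return sum(s * w for s, w in scores_weights) / total_w

# hypothetical leaf: membership of a 150 ms call-setup delay in "good latency"
mu_latency = triangular(150.0, 0.0, 100.0, 400.0)

# hypothetical parent node: "network stability" from two weighted sub-scores
mu_loss = 0.9   # assumed packet-loss sub-score
stability = aggregate([(mu_latency, 2.0), (mu_loss, 1.0)])
```

Repeating the aggregation level by level yields a single quantitative score per top-level category, mirroring the hierarchical combination the abstract describes.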

  9. Findings from analysing and quantifying human error using current methods

    International Nuclear Information System (INIS)

    Dang, V.N.; Reer, B.

    1999-01-01

    In human reliability analysis (HRA), the scarcity of data means that, at best, judgement must be applied to transfer to the domain of the analysis whatever data are available for similar tasks. In particular, for the quantification of tasks involving decisions, the analyst has to choose among quantification approaches that all depend to a significant degree on expert judgement. The use of expert judgement can be made more reliable by eliciting relative rather than absolute judgements. These approaches, which are based on multiple-criterion decision theory, focus on ranking the tasks to be analysed by difficulty. While they at least partially remedy the poor performance of experts in estimating probabilities, they nevertheless require calibration of the relative scale on which the actions are ranked in order to obtain the probabilities of interest. This paper presents some results from a comparison of current HRA methods performed in the framework of a study of SLIM calibration options. The HRA quantification methods THERP, HEART and INTENT were applied to derive calibration human error probabilities for two groups of operator actions. (author)

  10. Analysing Symbolic Expressions in Secondary School Chemistry: Their Functions and Implications for Pedagogy

    Science.gov (United States)

    Liu, Yu; Taber, Keith S.

    2016-01-01

    Symbolic expressions are essential resources for producing knowledge, yet they are a source of learning difficulties in chemistry education. This study aims to employ social semiotics to analyse the symbolic representation of chemistry from two complementary perspectives, referred to here as contextual (i.e., historical) and functional. First, the…

  11. Metabolomics methods for the synthetic biology of secondary metabolism

    NARCIS (Netherlands)

    Quoc-Thai Nguyen, [No Value; Merlo, Maria E.; Medema, Marnix H.; Jankevics, Andris; Breitling, Rainer; Takano, Eriko; Just, Wilhelm; Reiss, Thomas

    2012-01-01

    Many microbial secondary metabolites are of high biotechnological value for medicine, agriculture, and the food industry. Bacterial genome mining has revealed numerous novel secondary metabolite biosynthetic gene clusters, which encode the potential to synthesize a large diversity of compounds that

  12. Interpretive focus groups: a participatory method for interpreting and extending secondary analysis of qualitative data

    Directory of Open Access Journals (Sweden)

    Michelle Redman-MacLaren

    2014-08-01

    Full Text Available Background: Participatory approaches to qualitative research practice constantly change in response to evolving research environments. Researchers are increasingly encouraged to undertake secondary analysis of qualitative data, despite epistemological and ethical challenges. Interpretive focus groups can be described as a more participative method for groups to analyse qualitative data. Objective: To facilitate interpretive focus groups with women in Papua New Guinea to extend analysis of existing qualitative data and co-create new primary data. The purpose of this was to inform a transformational grounded theory and subsequent health promoting action. Design: A two-step approach was used in a grounded theory study about how women experience male circumcision in Papua New Guinea. Participants analysed portions or ‘chunks’ of existing qualitative data in story circles and built upon this analysis by using the visual research method of storyboarding. Results: New understandings of the data were evoked when women in interpretive focus groups analysed the data ‘chunks’. Interpretive focus groups encouraged women to share their personal experiences about male circumcision. The visual method of storyboarding enabled women to draw pictures to represent their experiences. This provided an additional focus for whole-of-group discussions about the research topic. Conclusions: Interpretive focus groups offer opportunity to enhance trustworthiness of findings when researchers undertake secondary analysis of qualitative data. The co-analysis of existing data and co-generation of new data between research participants and researchers informed an emergent transformational grounded theory and subsequent health promoting action.

  13. Interpretive focus groups: a participatory method for interpreting and extending secondary analysis of qualitative data.

    Science.gov (United States)

    Redman-MacLaren, Michelle; Mills, Jane; Tommbe, Rachael

    2014-01-01

    Participatory approaches to qualitative research practice constantly change in response to evolving research environments. Researchers are increasingly encouraged to undertake secondary analysis of qualitative data, despite epistemological and ethical challenges. Interpretive focus groups can be described as a more participative method for groups to analyse qualitative data. To facilitate interpretive focus groups with women in Papua New Guinea to extend analysis of existing qualitative data and co-create new primary data. The purpose of this was to inform a transformational grounded theory and subsequent health promoting action. A two-step approach was used in a grounded theory study about how women experience male circumcision in Papua New Guinea. Participants analysed portions or 'chunks' of existing qualitative data in story circles and built upon this analysis by using the visual research method of storyboarding. New understandings of the data were evoked when women in interpretive focus groups analysed the data 'chunks'. Interpretive focus groups encouraged women to share their personal experiences about male circumcision. The visual method of storyboarding enabled women to draw pictures to represent their experiences. This provided an additional focus for whole-of-group discussions about the research topic. Interpretive focus groups offer opportunity to enhance trustworthiness of findings when researchers undertake secondary analysis of qualitative data. The co-analysis of existing data and co-generation of new data between research participants and researchers informed an emergent transformational grounded theory and subsequent health promoting action.

  14. Apparatus and method for controlling the secondary injection of fuel

    Science.gov (United States)

    Martin, Scott M.; Cai, Weidong; Harris, Jr., Arthur J.

    2013-03-05

    A combustor (28) for a gas turbine engine is provided comprising a primary combustion chamber (30) for combusting a first fuel to form a combustion flow stream (50) and a transition piece (32) located downstream from the primary combustion chamber (30). The transition piece (32) comprises a plurality of injectors (66) located around a circumference of the transition piece (32) for injecting a second fuel into the combustion flow stream (50). The injectors (66) are effective to create a radial temperature profile (74) at an exit (58) of the transition piece (32) having a reduced coefficient of variation relative to a radial temperature profile (64) at an inlet (54) of the transition piece (32). Methods for controlling the temperature profile of a secondary injection are also provided.

  15. Analysing annual financial statements of public ordinary secondary schools in the Tshwane north district, South Africa

    Directory of Open Access Journals (Sweden)

    Frank Doussy

    2015-08-01

    Full Text Available This paper presents the results from an analysis of the annual financial statements of public ordinary secondary schools in the Tshwane North District, South Africa. The analysis was done to assess the quality of these annual financial statements as well as their apparent usefulness for the parents of the learners in the school. These users are probably most concerned with the quality and usefulness of the information presented to them for providing the necessary assurance that the funds received by the school are properly accounted for and used to the advantage of their children. The results suggest that assurance in this regard is lacking, as audits are not done at all or are of extremely poor quality. The quality of the financial statements is also poor, with scant regard for Generally Accepted Accounting Practice or the South African Schools Act. Urgent intervention from the Education Departments is needed to ensure that the South African Schools Act is adhered to and that proper audits are conducted by suitably qualified accountants and auditors. The South African Institute of Chartered Accountants (SAICA) should also play a more positive role in this regard by ensuring that audit practices are enforced and quality annual financial statements are presented.

  16. Log sampling methods and software for stand and landscape analyses.

    Science.gov (United States)

    Lisa J. Bate; Torolf R. Torgersen; Michael J. Wisdom; Edward O. Garton; Shawn C. Clabough

    2008-01-01

    We describe methods for efficient, accurate sampling of logs at landscape and stand scales to estimate density, total length, cover, volume, and weight. Our methods focus on optimizing the sampling effort by choosing an appropriate sampling method and transect length for specific forest conditions and objectives. Sampling methods include the line-intersect method and...

  17. Secondary structure analyses of the nuclear rRNA internal transcribed spacers and assessment of its phylogenetic utility across the Brassicaceae (mustards).

    Directory of Open Access Journals (Sweden)

    Patrick P Edger

    Full Text Available The internal transcribed spacers of the nuclear ribosomal RNA gene cluster, termed ITS1 and ITS2, are the most frequently used nuclear markers for phylogenetic analyses across many eukaryotic groups including most plant families. The reasons for the popularity of these markers include: 1. Ease of amplification due to high copy number of the gene clusters, 2. Available cost-effective methods and highly conserved primers, 3. Rapidly evolving markers (i.e. variable between closely related species), and 4. The assumption (and/or treatment) that these sequences are non-functional, neutrally evolving phylogenetic markers. Here, our analyses of ITS1 and ITS2 for 50 species suggest that both sequences are instead under selective constraints to preserve proper secondary structure, likely to maintain complete self-splicing functions, and thus are not neutrally-evolving phylogenetic markers. Our results indicate the majority of sequence sites are co-evolving with other positions to form proper secondary structure, which has implications for phylogenetic inference. We also found that the lowest energy state and total number of possible alternate secondary structures are highly significantly different between ITS regions and random sequences with an identical overall length and Guanine-Cytosine (GC) content. Lastly, we review recent evidence highlighting some additional problematic issues with using these regions as the sole markers for phylogenetic studies, and thus strongly recommend additional markers and cost-effective approaches for future studies to estimate phylogenetic relationships.

  18. [Methods, challenges and opportunities for big data analyses of microbiome].

    Science.gov (United States)

    Sheng, Hua-Fang; Zhou, Hong-Wei

    2015-07-01

    Microbiome is a novel research field associated with a variety of chronic inflammatory diseases. Technically, there are two major approaches to the analysis of the microbiome: metataxonomics, by sequencing the 16S rRNA variable tags, and metagenomics, by shotgun sequencing of the total microbial (mainly bacterial) genome mixture. The 16S rRNA sequencing analysis pipeline includes sequence quality control, diversity analyses, taxonomy and statistics; metagenome analyses further include gene annotation and functional analyses. With the development of sequencing techniques, the cost of sequencing will decrease, and big data analyses will become the central task. Data standardization, accumulation, modeling and disease prediction are crucial for the future exploitation of these data. Meanwhile, the informational properties of these data, and their functional verification with culture-dependent and culture-independent experiments, remain the focus of future research. Studies of the human microbiome will bring a better understanding of the relations between the human body and the microbiome, especially in the context of disease diagnosis and therapy, which promise rich research opportunities.
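The diversity-analysis step mentioned above often reduces to simple index computations. A minimal, hedged sketch of the Shannon diversity index over an invented OTU count table (the counts are illustrative, not data from the article):

```python
import math

def shannon(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over nonzero OTU counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Invented OTU abundance vectors for two hypothetical samples
even_sample = [50, 30, 15, 5]    # several taxa share the community
skewed_sample = [97, 1, 1, 1]    # one taxon dominates

print(shannon(even_sample))    # higher diversity
print(shannon(skewed_sample))  # lower diversity
```

A more even community yields a higher index; a perfectly even community of k taxa reaches the maximum ln(k).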

  19. Reading Ability Development from Kindergarten to Junior Secondary: Latent Transition Analyses with Growth Mixture Modeling

    Directory of Open Access Journals (Sweden)

    Yuan Liu

    2016-10-01

    Full Text Available The present study examined the reading ability development of children in the large-scale Early Childhood Longitudinal Study (Kindergarten Class of 1998-99 data; Tourangeau, Nord, Lê, Pollack, & Atkins-Burnett, 2006) under a dynamic systems framework. To depict children's growth pattern, we extended the measurement part of latent transition analysis to the growth mixture model and found that the new model fitted the data well. Results also revealed that most of the children stayed in the same ability group, with few cross-level changes in their classes. After adding the environmental factors as predictors, analyses showed that children receiving higher teachers' ratings, with higher socioeconomic status, and of above-average poverty status, had a higher probability of transitioning into the higher ability group.

  20. Application of digital image correlation method for analysing crack ...

    Indian Academy of Sciences (India)

    centrated strain by imitating the treatment of micro-cracks using the finite element ... water and moisture to penetrate the concrete leading to serious rust of the ... The correlations among various grey values of digital images are analysed for ...

  1. Measurement System Analyses - Gauge Repeatability and Reproducibility Methods

    Science.gov (United States)

    Cepova, Lenka; Kovacikova, Andrea; Cep, Robert; Klaput, Pavel; Mizera, Ondrej

    2018-02-01

    The submitted article focuses on a detailed explanation of the average and range method (the Automotive Industry Action Group's Measurement System Analysis approach) and of the honest Gauge Repeatability and Reproducibility method (the Evaluating the Measurement Process approach). The measured data (thickness of plastic parts) were evaluated by both methods and their results were compared on the basis of numerical evaluation. Both methods were additionally compared and their advantages and disadvantages were discussed. One difference between the methods is the calculation of variation components: the AIAG method calculates the variation components from standard deviations (so the variation components do not sum to 100 %), whereas the honest GRR study calculates the variation components from variances, where all variation components (part-to-part variation, EV and AV) sum to the total variation of 100 %. Acceptance of both methods in the professional community, future use, and acceptance by the manufacturing industry were also discussed. Nowadays, the AIAG method is the leading method in industry.
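The contrast between the two ways of apportioning variation can be sketched numerically (the variance components below are invented for illustration, not taken from the measured data in the article):

```python
import math

# Invented variance components, as would come from an ANOVA decomposition
components = {
    "EV": 0.04,  # repeatability (equipment variation)
    "AV": 0.01,  # reproducibility (appraiser variation)
    "PV": 0.20,  # part-to-part variation
}
var_total = sum(components.values())

# Variance-based contributions (honest GRR style): these sum to 100 %
pct_var = {k: 100 * v / var_total for k, v in components.items()}

# AIAG-style %Study Variation uses standard deviations: does NOT sum to 100 %
sd_total = math.sqrt(var_total)
pct_sv = {k: 100 * math.sqrt(v) / sd_total for k, v in components.items()}

print(sum(pct_var.values()))  # exactly 100
print(sum(pct_sv.values()))   # more than 100
```

The standard-deviation percentages exceed 100 % in total because standard deviations, unlike variances, are not additive.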

  2. Evaluation Of Three Methods Of Sugar Analyses For Determination ...

    African Journals Online (AJOL)

    Chemical methods developed by Lane and Eynon, Knight and Allen, and a colorimetric method by Dubois et al were used to determine reducing sugar in eight fruit samples. The methods showed detection limits as follows: Lane and Eynon (1 ppt); Knight and Allen (0.1 ppt); and Dubois et al (<2 ppm). The coefficients of ...

  3. CompaRNA: a server for continuous benchmarking of automated methods for RNA secondary structure prediction

    Science.gov (United States)

    Puton, Tomasz; Kozlowski, Lukasz P.; Rother, Kristian M.; Bujnicki, Janusz M.

    2013-01-01

    We present a continuous benchmarking approach for the assessment of RNA secondary structure prediction methods implemented in the CompaRNA web server. As of 3 October 2012, the performance of 28 single-sequence and 13 comparative methods has been evaluated on RNA sequences/structures released weekly by the Protein Data Bank. We also provide a static benchmark generated on RNA 2D structures derived from the RNAstrand database. Benchmarks on both data sets offer insight into the relative performance of RNA secondary structure prediction methods on RNAs of different size and with respect to different types of structure. According to our tests, on the average, the most accurate predictions obtained by a comparative approach are generated by CentroidAlifold, MXScarna, RNAalifold and TurboFold. On the average, the most accurate predictions obtained by single-sequence analyses are generated by CentroidFold, ContextFold and IPknot. The best comparative methods typically outperform the best single-sequence methods if an alignment of homologous RNA sequences is available. This article presents the results of our benchmarks as of 3 October 2012, whereas the rankings presented online are continuously updated. We will gladly include new prediction methods and new measures of accuracy in the new editions of CompaRNA benchmarks. PMID:23435231

  4. CompaRNA: a server for continuous benchmarking of automated methods for RNA secondary structure prediction.

    Science.gov (United States)

    Puton, Tomasz; Kozlowski, Lukasz P; Rother, Kristian M; Bujnicki, Janusz M

    2013-04-01

    We present a continuous benchmarking approach for the assessment of RNA secondary structure prediction methods implemented in the CompaRNA web server. As of 3 October 2012, the performance of 28 single-sequence and 13 comparative methods has been evaluated on RNA sequences/structures released weekly by the Protein Data Bank. We also provide a static benchmark generated on RNA 2D structures derived from the RNAstrand database. Benchmarks on both data sets offer insight into the relative performance of RNA secondary structure prediction methods on RNAs of different size and with respect to different types of structure. According to our tests, on the average, the most accurate predictions obtained by a comparative approach are generated by CentroidAlifold, MXScarna, RNAalifold and TurboFold. On the average, the most accurate predictions obtained by single-sequence analyses are generated by CentroidFold, ContextFold and IPknot. The best comparative methods typically outperform the best single-sequence methods if an alignment of homologous RNA sequences is available. This article presents the results of our benchmarks as of 3 October 2012, whereas the rankings presented online are continuously updated. We will gladly include new prediction methods and new measures of accuracy in the new editions of CompaRNA benchmarks.
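Benchmarks of this kind ultimately compare predicted base pairs against reference base pairs. A minimal sketch under assumed dot-bracket inputs (the scoring function is a generic illustration, not CompaRNA's actual code):

```python
def pairs(dot_bracket):
    """Return the set of (i, j) base pairs encoded in a dot-bracket string."""
    stack, bp = [], set()
    for i, ch in enumerate(dot_bracket):
        if ch == '(':
            stack.append(i)
        elif ch == ')':
            bp.add((stack.pop(), i))
    return bp

def score(reference, predicted):
    """Sensitivity and PPV of predicted base pairs against the reference."""
    ref, pred = pairs(reference), pairs(predicted)
    tp = len(ref & pred)
    sensitivity = tp / len(ref) if ref else 0.0
    ppv = tp / len(pred) if pred else 0.0
    return sensitivity, ppv

# Toy reference vs prediction: one reference pair is missed, none invented.
print(score("((((....))))", "(((......)))"))  # (0.75, 1.0)
```

Sensitivity penalizes missed reference pairs, while PPV penalizes predicted pairs absent from the reference; ranking methods typically combines both.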

  5. Statistical methods for analysing responses of wildlife to human disturbance.

    Science.gov (United States)

    Haiganoush K. Preisler; Alan A. Ager; Michael J. Wisdom

    2006-01-01

    1. Off-road recreation is increasing rapidly in many areas of the world, and effects on wildlife can be highly detrimental. Consequently, we have developed methods for studying wildlife responses to off-road recreation with the use of new technologies that allow frequent and accurate monitoring of human-wildlife interactions. To illustrate these methods, we studied the...

  6. Assessing Commercial and Alternative Poultry Processing Methods using Microbiome Analyses

    Science.gov (United States)

    Assessing poultry processing methods/strategies has historically used culture-based methods to assess bacterial changes or reductions, both in terms of general microbial communities (e.g. total aerobic bacteria) or zoonotic pathogens of interest (e.g. Salmonella, Campylobacter). The advent of next ...

  7. An efficient method for studying and analysing the propagation ...

    African Journals Online (AJOL)

    The paper describes a method, based on the solution of travelling-wave phenomena in polyphase systems by the use of matrix methods, of deriving the basic matrices of the conductor system taking into account the effect of conductor geometry, conductor internal impedance and the earth-return path. It is then shown how ...

  8. Omega-3 Supplementation and Loneliness-Related Memory Problems: Secondary Analyses Of A Randomized Controlled Trial

    Science.gov (United States)

    Jaremka, Lisa M.; Derry, Heather M.; Bornstein, Robert; Prakash, Ruchika Shaurya; Peng, Juan; Belury, Martha A.; Andridge, Rebecca R.; Malarkey, William B.; Kiecolt-Glaser, Janice K.

    2014-01-01

    Objective: Loneliness enhances risk for episodic memory declines over time. Omega-3 supplementation can improve cognitive function for people experiencing mild cognitive difficulties. Accordingly, we explored whether omega-3 supplementation would attenuate loneliness-related episodic memory problems. Methods: Participants (N=138) from a parent randomized controlled trial (RCT) were randomized to the placebo, 1.25 grams/day of omega-3, or 2.50 grams/day of omega-3 conditions for a 4-month period. They completed a baseline loneliness questionnaire and a battery of cognitive tests both at baseline and at the end of the RCT. Results: Controlling for baseline verbal episodic memory scores, lonelier people within the placebo condition had poorer verbal episodic memory post-supplementation, as measured by immediate (b = −0.28, t(117) = −2.62, p = .010) and long-delay (b = −0.06, t(116) = −2.07, p = .040) free recall, than their less lonely counterparts. This effect was not observed in the 1.25 grams/day and 2.50 grams/day supplementation groups, all p values > .10. The plasma omega-6:omega-3 ratio data mirrored these results. There were no loneliness-related effects of omega-3 supplementation on short-delay recall or the other cognitive tests, all p values > .32. Conclusion: These results suggest that omega-3 supplementation attenuates loneliness-related verbal episodic memory declines over time and support the utility of exploring novel interventions for treating episodic memory problems among lonely people. ClinicalTrials.gov identifier: NCT00385723 PMID:25264972

  9. Development of digital image correlation method to analyse crack ...

    Indian Academy of Sciences (India)

    samples were performed to verify the performance of the digital image correlation method. ... development cannot be measured accurately. ...

  10. Simplified elastoplastic methods of analysing fatigue in notches

    International Nuclear Information System (INIS)

    Autrusson, B.

    1993-01-01

    The aim of this study is to show the state of the art concerning methods of mechanical analysis available in the literature for evaluating notch root elastoplastic strain. The components of fast breeder reactors are subjected to numerous thermal transients, which can cause fatigue failure. To prevent this from happening, it is necessary to know the local strain range and to use it to estimate the number of cycles to crack initiation. Practical methods have been developed for the calculation of the local strain range, and have led to the drafting of design rules. Direct methods of determining the local strain range of the 'inelastic analysis' type have also been described. In conclusion a series of recommendations is made on the applicability and the conservatism of these methods.

  11. Kernel based eigenvalue-decomposition methods for analysing ham

    DEFF Research Database (Denmark)

    Christiansen, Asger Nyman; Nielsen, Allan Aasbjerg; Møller, Flemming

    2010-01-01

    methods, such as PCA, MAF or MNF. We therefore investigated the applicability of kernel based versions of these transformations. This meant implementing the kernel based methods and developing new theory, since kernel based MAF and MNF are not described in the literature yet. The traditional methods only ... have two factors that are useful for segmentation and none of them can be used to segment the two types of meat. The kernel based methods have a lot of useful factors and they are able to capture the subtle differences in the images. This is illustrated in Figure 1. You can see a comparison of the most ... useful factor of PCA and kernel based PCA respectively in Figure 2. The factor of the kernel based PCA turned out to be able to segment the two types of meat and in general that factor is much more distinct, compared to the traditional factor. After the orthogonal transformation a simple thresholding ...

  12. Short-wavelength magnetic recording new methods and analyses

    CERN Document Server

    Ruigrok, JJM

    2013-01-01

    Short-wavelength magnetic recording presents a series of practical solutions to a wide range of problems in the field of magnetic recording. It features many new and original results, all derived from fundamental principles as a result of up-to-date research. A special section is devoted to the playback process, including the calculations of head efficiency and head impedance, derived from new theorems. Features include: A simple and fast method for measuring efficiency; a simple method for the accurate separation of the read and write behaviour of magnetic heads; a new concept - the bandpass head ...

  13. Comparisons and Analyses of Gifted Students' Characteristics and Learning Methods

    Science.gov (United States)

    Lu, Jiamei; Li, Daqi; Stevens, Carla; Ye, Renmin

    2017-01-01

    Using PISA 2009, an international education database, this study compares gifted and talented (GT) students in three groups with normal (non-GT) students by examining student characteristics, reading, schooling, learning methods, and use of strategies for understanding and memorizing. Results indicate that the GT and non-GT gender distributions…

  14. Non-Statistical Methods of Analysing of Bankruptcy Risk

    Directory of Open Access Journals (Sweden)

    Pisula Tomasz

    2015-06-01

    Full Text Available The article focuses on assessing the effectiveness of a non-statistical approach to bankruptcy modelling in enterprises operating in the logistics sector. In order to describe the issue more comprehensively, the aforementioned prediction of the possible negative results of business operations was carried out for companies functioning in the Polish region of Podkarpacie, and in Slovakia. The bankruptcy predictors selected for the assessment of companies operating in the logistics sector included 28 financial indicators characterizing these enterprises in terms of their financial standing and management effectiveness. The purpose of the study was to identify factors (models describing the bankruptcy risk in enterprises in the context of their forecasting effectiveness in a one-year and two-year time horizon. In order to assess their practical applicability the models were carefully analysed and validated. The usefulness of the models was assessed in terms of their classification properties, and the capacity to accurately identify enterprises at risk of bankruptcy and healthy companies as well as proper calibration of the models to the data from training sample sets.
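The classification properties referred to above are conventionally summarized with a confusion matrix. A minimal sketch with invented labels (1 = at risk of bankruptcy, 0 = healthy), not the study's data:

```python
def confusion(y_true, y_pred):
    """Sensitivity and specificity from true vs predicted binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)  # at-risk firms correctly flagged
    specificity = tn / (tn + fp)  # healthy firms correctly passed
    return sensitivity, specificity

# Invented labels for eight firms
print(confusion([1, 1, 1, 0, 0, 0, 0, 0],
                [1, 1, 0, 0, 0, 0, 1, 0]))
```

Evaluating both rates on out-of-sample data for each forecasting horizon is one common way to validate such models.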

  15. Entropy resistance minimization: An alternative method for heat exchanger analyses

    International Nuclear Information System (INIS)

    Cheng, XueTao

    2013-01-01

    In this paper, the concept of entropy resistance is proposed based on the entropy generation analyses of heat transfer processes. It is shown that smaller entropy resistance leads to larger heat transfer rate with fixed thermodynamic force difference and smaller thermodynamic force difference with fixed heat transfer rate, respectively. For the discussed two-stream heat exchangers in which the heat transfer rates are not given and the three-stream heat exchanger with prescribed heat capacity flow rates and inlet temperatures of the streams, smaller entropy resistance leads to larger heat transfer rate. For the two-stream heat exchangers with fixed heat transfer rate, smaller entropy resistance leads to larger effectiveness. Furthermore, it is shown that smaller values of the concepts of entropy generation numbers and modified entropy generation number do not always correspond to better performance of the discussed heat exchangers. - Highlights: • The concept of entropy resistance is defined for heat exchangers. • The concepts based on entropy generation are used to analyze heat exchangers. • Smaller entropy resistance leads to better performance of heat exchangers. • The applicability of entropy generation minimization is conditional.
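The abstract does not spell out the defining relation; one reading consistent with the stated properties, by analogy with electrical resistance (an assumption, not necessarily the paper's notation), is:

```latex
% Assumed defining relations for entropy resistance R_s:
% the thermodynamic force driving heat transfer is the inverse-temperature difference.
\Delta\left(\frac{1}{T}\right) = \frac{1}{T_c} - \frac{1}{T_h}, \qquad
\dot{Q} = \frac{\Delta(1/T)}{R_s}, \qquad
\dot{S}_{\mathrm{gen}} = \dot{Q}\,\Delta\left(\frac{1}{T}\right) = \dot{Q}^{2} R_s .
```

On this reading, a smaller R_s yields a larger heat transfer rate at fixed force difference, and a smaller force difference at fixed heat transfer rate, matching both statements in the abstract.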

  16. Mathematical methods for B0 anti B0 oscillation analyses

    International Nuclear Information System (INIS)

    Moser, H.G.; Roussarie, A.

    1996-01-01

    The measurement of the B_s^0 anti-B_s^0 mixing frequency Δm_s requires the search for a periodic pattern in the time distribution of the data. Using Fourier analysis the consequences of vertex and boost resolution, mistag and statistical fluctuations are treated analytically and a general expression to estimate the significance of a B^0 anti-B^0 mixing analysis is derived. With the help of Fourier analysis the behaviour of a classical maximum likelihood analysis in time space is studied, too. It can be shown that a naive maximum likelihood fit fails in general to give correct confidence levels. This is especially important if limits are calculated. Alternative methods, based on the likelihood, which give correct limits are discussed. A new method, the amplitude fit, is introduced which combines the advantages of a Fourier analysis with the power and simplicity of a maximum likelihood fit. (orig.)
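The amplitude-fit idea can be illustrated with a toy Monte Carlo (the simulation and the linearized estimator below are illustrative assumptions, not the authors' implementation): mixing tags follow P(unmixed) = (1 + cos(Δm t))/2, and the fitted amplitude should be close to 1 when the test frequency matches the true Δm and much smaller far away from it.

```python
import math
import random

random.seed(42)

TAU = 1.0      # assumed B lifetime (arbitrary units)
DM_TRUE = 0.7  # assumed true mixing frequency

# Simulate tagged decays: proper time is exponential, and an event is
# tagged unmixed (y = +1) with probability (1 + cos(DM_TRUE * t)) / 2.
events = []
for _ in range(50_000):
    t = random.expovariate(1.0 / TAU)
    unmixed = random.random() < 0.5 * (1.0 + math.cos(DM_TRUE * t))
    events.append((t, 1 if unmixed else -1))

def amplitude(omega):
    """Linearized maximum-likelihood estimate of the oscillation amplitude
    at test frequency omega (no resolution or mistag effects included)."""
    num = sum(y * math.cos(omega * t) for t, y in events)
    den = sum(math.cos(omega * t) ** 2 for t, _ in events)
    return num / den

print(amplitude(DM_TRUE))  # close to 1 at the true frequency
print(amplitude(6.0))      # much smaller far away from it
```

Scanning the test frequency and plotting the fitted amplitude with its error is what turns this into the amplitude scan used for setting mixing limits.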

  17. Emergy and exergy analyses: Complementary methods or irreducible ideological options?

    International Nuclear Information System (INIS)

    Sciubba, Enrico; Ulgiati, Sergio

    2005-01-01

    The paper discusses the similarities and the incompatibilities between two forms of Energy Analysis (exergy and emergy, 'EXA' and 'EMA' in the following), both of which try to represent the behavior of physical systems by means of cumulative energy input/output methods that result in a double integration over space and time domains. Theoretical background, definitions and balance algebra are discussed first, in a 'statement-counterstatement' format that helps pinpoint differences and similarities. A significant, albeit simplified, benchmark case (ethanol production from corn) is used to compare the results and analytically assess the merits of each approach as well as possible synergic aspects. Corn production, transport and industrial conversion to ethanol are included in the analysis. First, mass balance and energy accounting are performed in each step of the process; then, exergy and emergy evaluations are carried out separately to lead to a set of performance indicators, the meaning of which is discussed with reference to their proper scale of application. The authors underline that each method has its own preferred field of application and conclude that the two approaches appear to be characterized not so much as different (and therefore competing) tools, but as different paradigms, whose meta-levels (their 'philosophies') substantially differ. In particular, EXA is found to provide the most correct and insightful assessment of the thermodynamic features of any process and to offer a clear quantitative indication of both the irreversibilities and the degree of matching between the used resources and the end-use material or energy flows. EXA combined with costing considerations results in Thermo-Economics (TE), presently the best engineering method for system optimization. One of EXA's recent extensions, Extended Exergy Accounting (EEA), includes all externalities in the exergy resource accounting, thus providing a more complete picture of how a process is interacting

  18. A test method for analysing disturbed ethernet data streams

    Science.gov (United States)

    Kreitlow, M.; Sabath, F.; Garbe, H.

    2015-11-01

    Ethernet connections, which are widely used in many computer networks, can suffer from electromagnetic interference. Typically, a degradation of the data transmission rate can be perceived, as electromagnetic disturbances lead to corruption of data frames on the network media. In this paper a software-based measuring method is presented, which allows a direct assessment of the effects on the link layer. The results can be linked directly to the physical interaction, without the influence of software-related effects on higher protocol layers. This provides a simple tool for a quantitative analysis of the disturbance of an Ethernet connection based on time-domain data. An example shows how the data can be used for further investigation of interference mechanisms and for the detection of intentional electromagnetic attacks.

  19. Secondary Data Analyses of Conclusions Drawn by the Program Implementers of a Positive Youth Development Program in Hong Kong

    Directory of Open Access Journals (Sweden)

    Andrew M. H. Siu

    2010-01-01

    Full Text Available The Tier 2 Program of the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) is designed for adolescents with significant psychosocial needs, and its various programs are designed and implemented by social workers (program implementers) for specific student groups in different schools. Using subjective outcome evaluation data collected from the program participants (Form C) at 207 schools, the program implementers were asked to aggregate data and write down five conclusions (n = 1,035) in their evaluation reports. The conclusions stated in the evaluation reports were further analyzed via secondary data analyses in this study. Results showed that the participants regarded the Tier 2 Program as a success, and that it was effective in enhancing self-understanding, interpersonal skills, and self-management. They liked the experiential learning approach and activities that are novel, interesting, diversified, adventure-based, and outdoor in nature. They also liked instructors who were friendly, supportive, well-prepared, and able to bring challenges and give positive recognition. Most of the difficulties encountered in running the programs were related to time constraints, clashes with other activities, and motivation of participants. Consistent with the previous evaluation findings, the present study suggests that the Tier 2 Program was well received by the participants and that it was beneficial to the development of the program participants.

  20. Impact of the Project P.A.T.H.S. in the Junior Secondary School Years: Individual Growth Curve Analyses

    Directory of Open Access Journals (Sweden)

    Daniel T. L. Shek

    2011-01-01

    Full Text Available The Tier 1 Program of the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programs) is a positive youth development program implemented in school settings utilizing a curricular-based approach. In the third year of the Full Implementation Phase, 19 experimental schools (n = 3,006 students) and 24 control schools (n = 3,727 students) participated in a randomized group trial. Analyses based on linear mixed models via SPSS showed that participants in the experimental schools displayed better positive youth development than did participants in the control schools based on different indicators derived from the Chinese Positive Youth Development Scale, including positive self-identity, prosocial behavior, and general positive youth development attributes. Differences between experimental and control participants were also found when students who joined the Tier 1 Program and perceived the program to be beneficial were employed as participants of the experimental schools. The present findings strongly suggest that the Project P.A.T.H.S. is making an important positive impact for junior secondary school students in Hong Kong.

  1. Impact of the project P.A.T.H.S. in the junior secondary school years: individual growth curve analyses.

    Science.gov (United States)

    Shek, Daniel T L; Ma, Cecilia M S

    2011-02-03

    The Tier 1 Program of the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programs) is a positive youth development program implemented in school settings utilizing a curricular-based approach. In the third year of the Full Implementation Phase, 19 experimental schools (n = 3,006 students) and 24 control schools (n = 3,727 students) participated in a randomized group trial. Analyses based on linear mixed models via SPSS showed that participants in the experimental schools displayed better positive youth development than did participants in the control schools based on different indicators derived from the Chinese Positive Youth Development Scale, including positive self-identity, prosocial behavior, and general positive youth development attributes. Differences between experimental and control participants were also found when students who joined the Tier 1 Program and perceived the program to be beneficial were employed as participants of the experimental schools. The present findings strongly suggest that the Project P.A.T.H.S. is making an important positive impact for junior secondary school students in Hong Kong.

  2. Effects of an instruction method in thinking skills with students from compulsory secondary education.

    Science.gov (United States)

    de Acedo Lizarraga, María Luisa Sanz; de Acedo Baquedano, María Teresa Sanz; Pollán Rufo, Milagros

    2010-05-01

    The purpose of this study was to assess the effects caused by the instruction method "Think actively in academic contexts, TAAC", an adaptation of Wallace and Adams' (1993) method of thinking skills, creativity, self-regulation, and academic learning, with students from the second grade of Compulsory Secondary Education (CSE). We used a pretest-intervention-posttest design with control group. The sample was made up of 110 participants, aged between 13 and 15 years, 58 of them in the experimental group and 52 in the control group. Six assessment instruments were administered before and after applying the method in order to measure the dependent variables. The method, divided into eight stages, was used in all the didactic units of the syllabus content of Natural Sciences, Social Sciences, and Language, during one academic course, and allowed the conjoint teaching of thinking skills and the syllabus content. The results of the analyses of variance indicate positive impact of the intervention, as the experimental subjects improved significantly in thinking skills and academic achievement. Some interesting reflections for research and education are derived from this study.

  3. Key Competencies and Characteristics for Innovative Teaching among Secondary School Teachers: A Mixed-Methods Research

    Science.gov (United States)

    Zhu, Chang; Wang, Di

    2014-01-01

    This research aims to understand the key competencies and characteristics for innovative teaching as perceived by Chinese secondary teachers. A mixed-methods research was used to investigate secondary teachers' views. First, a qualitative study was conducted with interviews of teachers to understand the perceived key competencies and…

  4. Computational RNA secondary structure design: empirical complexity and improved methods

    Directory of Open Access Journals (Sweden)

    Condon Anne

    2007-01-01

    Full Text Available Abstract Background We investigate the empirical complexity of the RNA secondary structure design problem, that is, the scaling of the typical difficulty of the design task for various classes of RNA structures as the size of the target structure is increased. The purpose of this work is to understand better the factors that make RNA structures hard to design for existing, high-performance algorithms. Such understanding provides the basis for improving the performance of one of the best algorithms for this problem, RNA-SSD, and for characterising its limitations. Results To gain insights into the practical complexity of the problem, we present a scaling analysis on random and biologically motivated structures using an improved version of the RNA-SSD algorithm, and also the RNAinverse algorithm from the Vienna package. Since primary structure constraints are relevant for designing RNA structures, we also investigate the correlation between the number and the location of the primary structure constraints when designing structures and the performance of the RNA-SSD algorithm. The scaling analysis on random and biologically motivated structures supports the hypothesis that the running time of both algorithms scales polynomially with the size of the structure. We also found that the algorithms are in general faster when constraints are placed only on paired bases in the structure. Furthermore, we prove that, according to the standard thermodynamic model, for some structures that the RNA-SSD algorithm was unable to design, there exists no sequence whose minimum free energy structure is the target structure. Conclusion Our analysis helps to better understand the strengths and limitations of both the RNA-SSD and RNAinverse algorithms, and suggests ways in which the performance of these algorithms can be further improved.
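    A small helper of the kind any secondary structure design search relies on makes the problem concrete. The sketch below is invented for illustration and is not part of RNA-SSD or RNAinverse: it parses a target structure in dot-bracket notation and checks whether a candidate sequence can form every required base pair (Watson-Crick or GU wobble), which is the basic feasibility test inside a design loop.

```python
# Compatibility check between a candidate RNA sequence and a target
# secondary structure in dot-bracket notation. Illustrative only; real
# design algorithms additionally evaluate thermodynamic stability.

PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def pair_table(structure: str):
    """Map each '(' position to its matching ')' position."""
    stack, pairs = [], []
    for i, ch in enumerate(structure):
        if ch == "(":
            stack.append(i)
        elif ch == ")":
            pairs.append((stack.pop(), i))
    if stack:
        raise ValueError("unbalanced structure")
    return pairs

def compatible(seq: str, structure: str) -> bool:
    """Can seq form every base pair the target structure demands?"""
    return all((seq[i], seq[j]) in PAIRS for i, j in pair_table(structure))

target = "((..))"
print(compatible("GGAACC", target))  # True: two G-C pairs close the helix
print(compatible("GGAAAC", target))  # False: G-A cannot pair
```

Constraints placed only on paired bases, as the abstract discusses, restrict exactly the positions this check inspects.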

  5. Toward a unified method for analysing and teaching Human Robot Interaction

    DEFF Research Database (Denmark)

    Dinesen, Jens Vilhelm

    This abstract presents key aspects of a future paper, which outlines the ongoing development of a unified method for analysing and teaching Human-Robot Interaction. The paper will propose a novel method for analysing HRI as well as interaction with other forms of technology and with fellow humans, drawing on key theories and methods from both communication and interaction theory. The aim is to provide a single unified method for analysing interaction, through means of video analysis and then applying theories, with proven mutual compatibility, to reach a desired granularity of study.

  6. Supporting Optimal Child Development through Early Head Start and Head Start Programs: Reflections on Secondary Data Analyses of FACES and EHSREP

    Science.gov (United States)

    Chazan-Cohen, Rachel; Halle, Tamara G.; Barton, Lauren R.; Winsler, Adam

    2012-01-01

    We are delighted to reflect on the 10 papers highlighted in this important special issue of "Early Childhood Research Quarterly" devoted to recent secondary data analyses of the FACES and EHSREP datasets. First, we provide some background on Head Start research and give an overview of the large-scale Head Start and Early Head Start…

  7. Methods to assess secondary volatile lipid oxidation products in complex food matrices

    DEFF Research Database (Denmark)

    Jacobsen, Charlotte; Yesiltas, Betül

    A range of different methods are available to determine secondary volatile lipid oxidation products. These methods include e.g. spectrophotometric determination of anisidine values and TBARS as well as GC based methods for determination of specific volatile oxidation products such as pentanal...... headspace methods on the same food matrices will be presented....

  8. Standard Test Method for Calibration of Non-Concentrator Photovoltaic Secondary Reference Cells

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This test method covers calibration and characterization of secondary terrestrial photovoltaic reference cells to a desired reference spectral irradiance distribution. The recommended physical requirements for these reference cells are described in Specification E1040. Reference cells are principally used in the determination of the electrical performance of a photovoltaic device. 1.2 Secondary reference cells are calibrated indoors using simulated sunlight or outdoors in natural sunlight by reference to a primary reference cell previously calibrated to the same desired reference spectral irradiance distribution. 1.3 Secondary reference cells calibrated according to this test method will have the same radiometric traceability as that of the primary reference cell used for the calibration. Therefore, if the primary reference cell is traceable to the World Radiometric Reference (WRR, see Test Method E816), the resulting secondary reference cell will also be traceable to the WRR. 1.4 This test method appli...

  9. A method for analysing secondary economic effects generated by big research centres.

    CERN Document Server

    Bianchi-Streit, M.; Budde, R.; Reitz, H.; Sagnell, B.; Schmied, H.; Schorr, B.

    Research activities in the natural sciences, and especially those in the field of pure research work as opposed to applied research, are being financially supported for various reasons, probably the least of which is the hope for a quick economic return. It has, nevertheless, been realised for a number of years that benefits of one sort or another may appear in various and sometimes unexpected ways, where these benefits are not the direct consequence of the application of a research result. They are rather to be compared with the well-known "spin-off" effects obtained while pursuing the research work. An example may help to illustrate what is meant.

  10. Quantitative proteome and phosphoproteome analyses of Streptomyces coelicolor reveal proteins and phosphoproteins modulating differentiation and secondary metabolism

    DEFF Research Database (Denmark)

    Rioseras, Beatriz; Sliaha, Pavel V; Gorshkov, Vladimir

    2018-01-01

    …Ser/Thr/Tyr kinases, making this genus an outstanding model for the study of bacterial protein phosphorylation events. We used mass spectrometry based quantitative proteomics and phosphoproteomics to characterize bacterial differentiation and activation of secondary metabolism of Streptomyces coelicolor. We identified and quantified 3461 proteins corresponding to 44.3% of the S. coelicolor proteome across three developmental stages: vegetative hyphae (MI); secondary metabolite producing hyphae (MII); and sporulating hyphae. A total of 1350 proteins exhibited more than 2-fold expression changes during the bacterial differentiation process. These proteins include 136 regulators (transcriptional regulators, transducers, Ser/Thr/Tyr kinases, signalling proteins), as well as 542 putative proteins with no clear homology to known proteins which are likely to play a role in differentiation and secondary metabolism…

  11. Estimation method for first excursion probability of secondary system with impact and friction using maximum response

    International Nuclear Information System (INIS)

    Shigeru Aoki

    2005-01-01

    Secondary systems such as piping, tanks and other mechanical equipment are installed in primary systems such as buildings. Important secondary systems should be designed to maintain their function even if they are subjected to destructive earthquake excitations. Secondary systems have many nonlinear characteristics. Impact and friction characteristics, which are observed in mechanical supports and joints, are common nonlinear characteristics. In impact dampers and friction dampers, impact and friction characteristics are used for reduction of seismic response. In this paper, analytical methods for the first excursion probability of a secondary system with impact and friction, subjected to earthquake excitation, are proposed. Using these methods, the effects of impact force, gap size and friction force on the first excursion probability are examined. When the tolerance level is normalized by the maximum response of the secondary system without impact or friction characteristics, the variation of the first excursion probability is very small for various values of the natural period. In order to examine the effectiveness of the proposed method, the obtained results are compared with those obtained by the simulation method. Some estimation methods for the maximum response of the secondary system with nonlinear characteristics have been developed. (author)
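    The simulation method against which the analytical estimates are checked can be illustrated with a minimal sketch. Everything below is invented for illustration (the abstract does not give the response model): peak responses to random excitations are sampled, a crude friction-type reduction is applied, and the first excursion probability is the fraction of samples whose peak exceeds the tolerance level.

```python
# Hedged Monte Carlo sketch of a first excursion probability estimate.
# The "peak response" distribution and the friction reduction are
# stand-ins, not the paper's structural model.
import random

def first_excursion_probability(tolerance, friction=0.2, n=20000, seed=1):
    rng = random.Random(seed)
    exceed = 0
    for _ in range(n):
        # stand-in for the peak linear response to one random excitation
        peak = abs(rng.gauss(0.0, 1.0))
        # friction dissipates energy, reducing the effective peak
        peak_with_friction = max(peak - friction, 0.0)
        if peak_with_friction > tolerance:
            exceed += 1
    return exceed / n

# Tabulate the estimate for several tolerance levels (normalized, as in
# the paper, by the peak response of the system without friction):
for tol in (0.5, 1.0, 2.0):
    print(tol, first_excursion_probability(tol))
```

As expected, the estimated probability decreases monotonically as the tolerance level rises.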

  12. Protein secondary structure assignment revisited: a detailed analysis of different assignment methods

    Directory of Open Access Journals (Sweden)

    de Brevern Alexandre G

    2005-09-01

    Full Text Available Abstract Background A number of methods are now available to perform automatic assignment of periodic secondary structures from atomic coordinates, based on different characteristics of the secondary structures. In general these methods exhibit a broad consensus as to the location of most helix and strand core segments in protein structures. However, the termini of the segments are often ill-defined and it is difficult to decide unambiguously which residues at the edge of the segments have to be included. In addition, there is a "twilight zone" where secondary structure segments depart significantly from the idealized models of Pauling and Corey. For these segments, one has to decide whether the observed structural variations are merely distortions or whether they constitute a break in the secondary structure. Methods To address these problems, we have developed a method for secondary structure assignment, called KAKSI. Assignments made by KAKSI are compared with assignments given by DSSP, STRIDE, XTLSSTR, PSEA and SECSTR, as well as secondary structures found in PDB files, on 4 datasets (X-ray structures with different resolution ranges, NMR structures). Results A detailed comparison of KAKSI assignments with those of STRIDE and PSEA reveals that KAKSI assigns slightly longer helices and strands than STRIDE in case of one-to-one correspondence between the segments. However, KAKSI tends also to favor the assignment of several short helices when STRIDE and PSEA assign longer, kinked, helices. Helices assigned by KAKSI have geometrical characteristics close to those described in the PDB. They are more linear than helices assigned by other methods. The same tendency to split long segments is observed for strands, although less systematically. We present a number of cases of secondary structure assignments that illustrate this behavior. Conclusion Our method provides valuable assignments which favor the regularity of secondary structure segments.
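    The kind of per-residue and per-segment comparison described above can be sketched in a few lines. The helpers below are illustrative only (the assignment strings are invented and this is not the KAKSI algorithm): one computes the fraction of residues on which two assignments agree, the other extracts maximal helix or strand runs so segment lengths can be compared between methods.

```python
# Compare two per-residue secondary structure assignments
# (H = helix, E = strand, C = coil), as done when benchmarking
# assignment methods such as KAKSI vs. STRIDE.

def agreement(assign_a: str, assign_b: str) -> float:
    """Fraction of residues on which two assignments agree."""
    if len(assign_a) != len(assign_b):
        raise ValueError("assignments must cover the same residues")
    matches = sum(a == b for a, b in zip(assign_a, assign_b))
    return matches / len(assign_a)

def segments(assign: str, state: str = "H"):
    """(start, end) indices of maximal runs of one state, e.g. helices."""
    runs, start = [], None
    for i, s in enumerate(assign):
        if s == state and start is None:
            start = i
        elif s != state and start is not None:
            runs.append((start, i - 1))
            start = None
    if start is not None:
        runs.append((start, len(assign) - 1))
    return runs

kaksi_like  = "CCHHHHHCCEEEECC"
stride_like = "CCHHHHCCCEEEECC"
print(round(agreement(kaksi_like, stride_like), 3))
print(segments(kaksi_like, "H"))  # one helix spanning residues 2-6
```

Differences at segment termini, the "ill-defined" residues the abstract discusses, show up directly as shifted run boundaries.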

  13. A quantitative method to analyse an open answer questionnaire: A case study about the Boltzmann Factor

    International Nuclear Information System (INIS)

    Battaglia, Onofrio Rosario; Di Paola, Benedetto

    2015-01-01

    This paper describes a quantitative method to analyse an open-ended questionnaire. Student responses to a specially designed written questionnaire are quantitatively analysed by a non-hierarchical clustering technique, the k-means method. Through this we can characterise students' behaviour with respect to their expertise in formulating explanations for phenomena or processes and/or using a given model in different contexts. The physics topic is the Boltzmann Factor, which allows the students to have a unifying view of different phenomena in different contexts.
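    The analysis pipeline described, coding answers as numeric vectors and grouping them by non-hierarchical k-means clustering, can be sketched minimally. The two-dimensional coding of answers and the data below are invented for illustration; the paper's actual coding scheme is not given in the abstract.

```python
# Pure-Python k-means on coded questionnaire responses. Each student's
# answer is coded as a small numeric vector; clusters then separate
# qualitatively different explanation profiles.
import random

def kmeans(points, k, iters=100, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centroids[j])))
            clusters[i].append(p)
        new = [tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centroids[i]
               for i, cl in enumerate(clusters)]
        if new == centroids:
            break
        centroids = new
    return centroids, clusters

# Each row: (uses-model score, quality-of-explanation score), coded 0-5.
answers = [(0, 1), (1, 0), (1, 1), (4, 5), (5, 4), (5, 5)]
centroids, clusters = kmeans(answers, k=2)
print(sorted(len(c) for c in clusters))  # two groups of three students
```

With well-separated codings like these, the two clusters correspond to novice-like and expert-like response profiles.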

  14. Effects of Yoga on cancer patients: Secondary analysis of Systematic Reviews/Meta-Analyses

    Directory of Open Access Journals (Sweden)

    Dimos Mastrogiannis

    2016-09-01

    Full Text Available Introduction: Western medicine's model has been enriched with alternative management and treatment methods. Yoga is a technique that incorporates physical exercises, breathing methods, meditation and relaxation, which has been proposed to have positive effects on cancer patients. Aim: The aim of the present study is to critically review findings of systematic reviews/meta-analyses that are available in the international literature, regarding the effects of yoga on cancer patients. Methodology: A literature review was performed in five (5) databases: PubMed, Scopus, EMBASE, Web of Science and the Cochrane Database of Systematic Reviews. The authors adopted the Preferred Reporting Items for Systematic Reviews and Meta-Analyses model in order to perform a critical review of the existing literature. Methodological assessment of the papers under review was performed according to the criteria of the Assessment of Multiple Systematic Reviews instrument. Inclusion criteria were the English language, research undertaken in adults, and evaluation of the effects of yoga as the primary intervention. Results: Although there is a lack of guidelines regarding the assessment of systematic reviews using the above-mentioned instruments, 16 papers were included in the present study. The findings are confounding, even though there is evidence supporting the use of yoga in certain types of cancer. Specifically, improvements in quality of life, psychological wellbeing and anxiety, though not in pain, have been reported in women suffering from breast cancer; however, no statistically significant effect has been documented in patients with haematological malignancies. Conclusions: The majority of studies about the effects of yoga on quality of life, fatigue, sleep quality, anxiety, and depression have small samples. There is a trend towards an overall quality of life improvement; nonetheless, the lack of sufficiently powered randomized controlled trials prevents the extraction of a safe conclusion.

  15. Novel evolutionary lineages revealed in the Chaetothyriales (Fungi) based on multigene phylogenetic analyses and comparison of ITS secondary structure

    Czech Academy of Sciences Publication Activity Database

    Réblová, Martina; Untereiner, W. A.; Réblová, K.

    2013-01-01

    Roč. 8, č. 5 (2013), e63547 E-ISSN 1932-6203 R&D Projects: GA ČR GAP506/12/0038 Institutional support: RVO:67985939 Keywords : Cyphelophora * Phialophora * secondary structure Subject RIV: EF - Botanics Impact factor: 3.534, year: 2013

  16. Application of Monte Carlo method in determination of secondary characteristic X radiation in XFA

    International Nuclear Information System (INIS)

    Roubicek, P.

    1982-01-01

    Secondary characteristic radiation is excited by primary radiation from the X-ray tube and by secondary radiation of other elements so that excitations of several orders result. The Monte Carlo method was used to consider all these possibilities and the resulting flux of characteristic radiation was simulated for samples of silicate raw materials. A comparison of the results of these computations with experiments allows to determine the effect of sample preparation on the characteristic radiation flux. (M.D.)

  17. The intervals method: a new approach to analyse finite element outputs using multivariate statistics

    Directory of Open Access Journals (Sweden)

    Jordi Marcé-Nogué

    2017-10-01

    Full Text Available Background In this paper, we propose a new method, named the intervals’ method, to analyse data from finite element models in a comparative multivariate framework. As a case study, several armadillo mandibles are analysed, showing that the proposed method is useful to distinguish and characterise biomechanical differences related to diet/ecomorphology. Methods The intervals’ method consists of generating a set of variables, each one defined by an interval of stress values. Each variable is expressed as a percentage of the area of the mandible occupied by those stress values. Afterwards these newly generated variables can be analysed using multivariate methods. Results Applying this novel method to the biological case study of whether armadillo mandibles differ according to dietary groups, we show that the intervals’ method is a powerful tool to characterize biomechanical performance and how this relates to different diets. This allows us to positively discriminate between specialist and generalist species. Discussion We show that the proposed approach is a useful methodology not affected by the characteristics of the finite element mesh. Additionally, the positive discriminating results obtained when analysing a difficult case study suggest that the proposed method could be a very useful tool for comparative studies in finite element analysis using multivariate statistical approaches.
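    The core of the intervals' method as described, one variable per stress interval, each expressed as the percentage of total area occupied by elements whose stress falls in that interval, can be sketched directly. The element stresses, areas, and interval breaks below are invented for illustration.

```python
# Turn finite-element stress output into interval variables suitable
# for multivariate analysis, as in the intervals' method.

def interval_variables(stresses, areas, breaks):
    """breaks = [b0, b1, ..., bn] defines intervals [b0,b1), [b1,b2), ...
    Returns one percentage-of-total-area variable per interval."""
    total = sum(areas)
    variables = []
    for lo, hi in zip(breaks, breaks[1:]):
        covered = sum(a for s, a in zip(stresses, areas) if lo <= s < hi)
        variables.append(100.0 * covered / total)
    return variables

# Von Mises stress (MPa) and area per finite element of one model:
stresses = [2.0, 5.0, 7.5, 12.0, 3.0]
areas    = [1.0, 2.0, 1.0, 0.5, 0.5]
row = interval_variables(stresses, areas, breaks=[0, 5, 10, 15])
print(row)  # one multivariate observation for this specimen
```

Each analysed model contributes one such row; PCA or discriminant analysis can then be run on the resulting table, which is what makes the variables independent of the mesh.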

  18. The intervals method: a new approach to analyse finite element outputs using multivariate statistics

    Science.gov (United States)

    De Esteban-Trivigno, Soledad; Püschel, Thomas A.; Fortuny, Josep

    2017-01-01

    Background In this paper, we propose a new method, named the intervals’ method, to analyse data from finite element models in a comparative multivariate framework. As a case study, several armadillo mandibles are analysed, showing that the proposed method is useful to distinguish and characterise biomechanical differences related to diet/ecomorphology. Methods The intervals’ method consists of generating a set of variables, each one defined by an interval of stress values. Each variable is expressed as a percentage of the area of the mandible occupied by those stress values. Afterwards these newly generated variables can be analysed using multivariate methods. Results Applying this novel method to the biological case study of whether armadillo mandibles differ according to dietary groups, we show that the intervals’ method is a powerful tool to characterize biomechanical performance and how this relates to different diets. This allows us to positively discriminate between specialist and generalist species. Discussion We show that the proposed approach is a useful methodology not affected by the characteristics of the finite element mesh. Additionally, the positive discriminating results obtained when analysing a difficult case study suggest that the proposed method could be a very useful tool for comparative studies in finite element analysis using multivariate statistical approaches. PMID:29043107

  19. A Methodology for Conducting Integrative Mixed Methods Research and Data Analyses

    Science.gov (United States)

    Castro, Felipe González; Kellison, Joshua G.; Boyd, Stephen J.; Kopak, Albert

    2011-01-01

    Mixed methods research has gained visibility within the last few years, although limitations persist regarding the scientific caliber of certain mixed methods research designs and methods. The need exists for rigorous mixed methods designs that integrate various data analytic procedures for a seamless transfer of evidence across qualitative and quantitative modalities. Such designs can offer the strength of confirmatory results drawn from quantitative multivariate analyses, along with “deep structure” explanatory descriptions as drawn from qualitative analyses. This article presents evidence generated from over a decade of pilot research in developing an integrative mixed methods methodology. It presents a conceptual framework and methodological and data analytic procedures for conducting mixed methods research studies, and it also presents illustrative examples from the authors' ongoing integrative mixed methods research studies. PMID:22167325

  20. Investigation of the spray characteristics for a secondary fuel injection nozzle using a digital image processing method

    Science.gov (United States)

    Jeong, Haeyoung; Lee, Kihyung; Ikeda, Yuji

    2007-05-01

    There are many ways to reduce diesel engine exhaust emissions. However, NOx emission is difficult to reduce because the hydrocarbon (HC) concentration in a diesel engine is not sufficient for NOx conversion. Therefore, in order to create stoichiometric conditions in the De-NOx catalyst, a secondary injection system is designed to inject liquid HC into the exhaust pipe. The atomization and distribution characteristics of the HC injected from a secondary injector are key technologies to obtain a high NOx conversion because inhomogeneous droplets of injected HC cause not only high fuel consumption but also deterioration of NOx emission. This paper describes the spray characteristics of a secondary injector including the spray angle, penetration length and breakup behaviour of the spray to optimize the reduction rate of the NOx catalyst. In this study, various optical diagnostics were applied to investigate these spray characteristics, the atomization mechanism and spray developing process. The visualization and image processing method for the spray pulsation were developed by high speed photography. The influence of the fuel supply pressure on the spray behaviour and a more detailed spray developing process have been analysed experimentally using image processing. Finally, the experimental results were used to correlate the spray structure to the injection system performance and to provide a design guide for a secondary injector nozzle.

  1. Teaching for Conceptual Change in Elementary and Secondary Science Methods Courses.

    Science.gov (United States)

    Marion, Robin; Hewson, Peter W.; Tabachnick, B. Robert; Blomker, Kathryn B.

    1999-01-01

    Describes and analyzes two science methods courses at the elementary and secondary levels for how they addressed four ideas: (1) how students learn science; (2) how teachers teach science to students; (3) how prospective science teachers learn about the first two ideas; and (4) how methods instructors teach prospective science teachers about the…

  2. Methods and Beyond: Learning to Teach Latino Bilingual Learners in Mainstream Secondary Classes

    Science.gov (United States)

    Schall-Leckrone, Laura; Pavlak, Christina

    2014-01-01

    This article reports empirical evidence about the influence of a pre-service methods course on preparing aspiring and practicing content teachers to work with adolescent bilingual learners in secondary schools. Qualitative methods were used to analyze the extent to which participants developed abilities to plan instruction and to think complexly…

  3. Comparing Management Models of Secondary Schools in Tamaulipas, Mexico: An Exploration with a Delphi Method

    Science.gov (United States)

    Navarro-Leal, Marco Aurelio; Garcia, Concepcion Nino; Saldivar, Luisa Caballero

    2012-01-01

    For a preliminary exploration of management models between two secondary schools, a Delphi method was used in order to identify and focus relevant topics for a larger research. A first approximation with this method proved to be a heuristic tool to focus and define some categories and guidelines of enquiry. It was found that in both of the schools…

  4. The surface analysis methods; Les methodes d`analyse des surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Deville, J.P. [Institut de Physique et Chimie, 67 - Strasbourg (France)

    1998-11-01

    Nowadays, there are many surface analysis methods, each having its specificity, its qualities, its constraints (for instance, vacuum) and its limits. Expensive in time and in investment, these methods have to be used deliberately. This article is addressed to non-specialists. It gives some elements of choice according to the information sought, the sensitivity, the constraints of use or the answer to a precise question. After recalling the fundamental principles which govern these analysis methods, based on the interaction of radiation (ultraviolet, X-rays) or particles (ions, electrons) with matter, two methods are described in more detail: Auger electron spectroscopy (AES) and X-ray photoemission spectroscopy (ESCA or XPS). Indeed, they are the most widespread methods in laboratories, the easiest to use and probably the most productive for the analysis of surfaces of industrial materials or samples subjected to treatments in aggressive media. (O.M.) 11 refs.

  5. Classifying Secondary Task Driving Safety Using Method of F-ANP

    Directory of Open Access Journals (Sweden)

    Lisheng Jin

    2015-02-01

    Full Text Available This study was designed to build an evaluation system for secondary task driving safety using the method of Fuzzy Analytic Network Process (F-ANP). Forty drivers completed drives on a driving simulator while interacting with or without a secondary task. Measures of fixations, saccades, and vehicle running status were analyzed. Based on five experts' opinions, a hierarchical model for secondary task driving safety evaluation was built. The hierarchical model is divided into three levels: goal, assessment dimension, and criteria. Seven indexes make up the criteria level, and the assessment dimension includes two clusters: vehicle control risk and driver eye movement risk. By the method of F-ANP, the priorities of the criteria and the subcriteria were determined. Furthermore, to rank driving safety, an approach based on the principle of maximum membership degree was adopted. Finally, a case study of secondary task driving safety evaluation for forty drivers using the proposed method was conducted. The results indicate that the proposed method is practically feasible and adoptable for secondary task driving safety evaluation.
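    The final classification step described, aggregating criterion-level risk memberships with F-ANP priority weights and grading by the principle of maximum membership degree, can be sketched as follows. The grade labels, priority weights, and membership values are all invented for illustration; the paper's actual criteria and priorities are not given in the abstract.

```python
# Maximum-membership-degree grading over F-ANP-weighted criteria.
# Each criterion contributes a fuzzy membership vector over the grades;
# priorities come from the F-ANP supermatrix (here: made-up values).

GRADES = ["safe", "moderate", "risky"]

def classify(priorities, memberships):
    """priorities: {criterion: weight}; memberships: {criterion: [mu per grade]}.
    Aggregate weighted membership per grade, then pick the maximum."""
    agg = [sum(priorities[c] * memberships[c][g] for c in priorities)
           for g in range(len(GRADES))]
    best = max(range(len(GRADES)), key=lambda g: agg[g])
    return GRADES[best], agg

priorities = {"fixation_duration": 0.4, "lane_deviation": 0.35, "speed_var": 0.25}
memberships = {
    "fixation_duration": [0.1, 0.2, 0.7],   # long off-road glances
    "lane_deviation":    [0.2, 0.5, 0.3],
    "speed_var":         [0.6, 0.3, 0.1],
}
grade, agg = classify(priorities, memberships)
print(grade)
```

Because each membership vector sums to one and the priorities sum to one, the aggregated memberships also sum to one, so the chosen grade is simply the largest component.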

  6. SAAS: Short Amino Acid Sequence - A Promising Protein Secondary Structure Prediction Method of Single Sequence

    Directory of Open Access Journals (Sweden)

    Zhou Yuan Wu

    2013-07-01

    Full Text Available In statistical methods for predicting protein secondary structure, many researchers focus on single amino acid frequencies in α-helices, β-sheets, and so on, or on the influence of neighbouring amino acids on an amino acid forming a secondary structure. This paper instead treats a short sequence of amino acids (3, 4, 5 or 6 residues) as a single unit, and computes the statistical probability of the short sequence forming each secondary structure. Moreover, whereas many researchers select low-homology sequences as the statistical database, this paper selects the whole PDB database. We propose a strategy to predict protein secondary structure using this simple statistical method. Numerical computation shows that treating short amino acid sequences as units makes it easy to see the trend of a short sequence forming secondary structure, that it works well to select a large statistical database (the whole PDB database) without considering homology, and that Q3 accuracy is ca. 74% using the proposed simple statistical method, whereas the accuracy of other statistical methods is less than 70%.
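    The statistical idea, tabulating how often an exact short sequence occurs with each secondary structure and predicting by the most frequent outcome, can be shown with a toy. The tiny three-protein "database" below stands in for the whole PDB and is invented for illustration; the paper's actual tables are far larger.

```python
# Toy short-sequence statistics for secondary structure prediction:
# count, for each 3-residue window, which state (H/E/C) the centre
# residue adopts, then predict by majority vote.
from collections import Counter, defaultdict

train = [  # (sequence, per-residue secondary structure)
    ("ALAAL", "HHHHH"),
    ("VLVIV", "EEEEE"),
    ("GLAAG", "CHHHC"),
]

counts = defaultdict(Counter)
for seq, ss in train:
    for i in range(1, len(seq) - 1):
        counts[seq[i - 1:i + 2]][ss[i]] += 1   # triplet -> centre-state counts

def predict_centre(triplet: str) -> str:
    """Most frequent centre state for this exact triplet; 'C' if unseen."""
    c = counts.get(triplet)
    return c.most_common(1)[0][0] if c else "C"

print(predict_centre("LAA"))  # seen with centre H in the toy set
print(predict_centre("QQQ"))  # unseen triplet falls back to coil
```

With the whole PDB as the database, most windows of length 3 to 6 are seen often enough for these counts to be informative, which is the paper's point about not filtering for homology.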

  7. Genomic and transcriptomic analyses reveal differential regulation of diverse terpenoid and polyketides secondary metabolites in Hericium erinaceus.

    Science.gov (United States)

    Chen, Juan; Zeng, Xu; Yang, Yan Long; Xing, Yong Mei; Zhang, Qi; Li, Jia Mei; Ma, Ke; Liu, Hong Wei; Guo, Shun Xing

    2017-08-31

    The lion's mane mushroom Hericium erinaceus is a famous traditional medicinal fungus credited with anti-dementia activity and a producer of cyathane diterpenoid natural products (erinacines) useful against nervous system diseases. To date, few studies have explored the biosynthesis of these compounds, although their chemical synthesis is known. Here, we report the first genome and transcriptome sequence of the medicinal fungus H. erinaceus. The size of the genome is 39.35 Mb, containing 9895 gene models. The genome of H. erinaceus reveals diverse enzymes and a large family of cytochrome P450 (CYP) proteins involved in the biosynthesis of terpenoid backbones, diterpenoids, sesquiterpenes and polyketides. Three gene clusters related to terpene biosynthesis and one gene cluster for polyketide biosynthesis (PKS) were predicted. Genes involved in terpenoid biosynthesis were generally upregulated in mycelia, while the PKS gene was upregulated in the fruiting body. Comparative genome analysis of 42 fungal species of Basidiomycota revealed that most edible and medicinal mushrooms show many more gene clusters involved in terpenoid and polyketide biosynthesis compared to the pathogenic fungi. None of the gene clusters for terpenoid or polyketide biosynthesis were predicted in the poisonous mushroom Amanita muscaria. Our findings may facilitate future discovery and biosynthesis of bioactive secondary metabolites from H. erinaceus and provide fundamental information for exploring the secondary metabolites in other Basidiomycetes.

  8. Performing dynamic time history analyses by extension of the response spectrum method

    International Nuclear Information System (INIS)

    Hulbert, G.M.

    1983-01-01

    A method is presented to calculate the dynamic time history response of finite-element models using results from response spectrum analyses. The proposed modified time history method does not represent a new mathematical approach to dynamic analysis but suggests a more efficient ordering of the analytical equations and procedures. The modified time history method is considerably faster and less expensive to use than normal time history methods. This paper presents the theory and implementation of the modified time history approach along with comparisons of the modified and normal time history methods for a prototypic seismic piping design problem

  9. Genomic and secondary metabolite analyses of Streptomyces sp. 2AW provide insight into the evolution of the cycloheximide pathway

    Directory of Open Access Journals (Sweden)

    Elizabeth eStulberg

    2016-05-01

    Full Text Available The dearth of new antibiotics in the face of widespread antimicrobial resistance makes developing innovative strategies for discovering new antibiotics critical for the future management of infectious disease. Understanding the genetics and evolution of antibiotic producers will help guide the discovery and bioengineering of novel antibiotics. We discovered an isolate in Alaskan boreal forest soil that had broad antimicrobial activity. We elucidated the corresponding antimicrobial natural products and sequenced the genome of this isolate, designated Streptomyces sp. 2AW. This strain illustrates the chemical virtuosity typical of the Streptomyces genus, producing cycloheximide as well as two other biosynthetically unrelated antibiotics, neutramycin and hygromycin A. Combining bioinformatic and chemical analyses, we identified the gene clusters responsible for antibiotic production. Interestingly, 2AW appears dissimilar from other cycloheximide producers in that the gene encoding the polyketide synthase resides on a separate part of the chromosome from the genes responsible for tailoring cycloheximide-specific modifications. This gene arrangement and our phylogenetic analyses of the gene products suggest that 2AW holds an evolutionarily ancestral lineage of the cycloheximide pathway. Our analyses support the hypothesis that the 2AW glutaramide gene cluster is basal to the lineage wherein cycloheximide production diverged from other glutarimide antibiotics. This study illustrates the power of combining modern biochemical and genomic analyses to gain insight into the evolution of antibiotic-producing microorganisms.

  10. Method of accounting for code safety valve setpoint drift in safety analyses

    International Nuclear Information System (INIS)

    Rousseau, K.R.; Bergeron, P.A.

    1989-01-01

    In performing the safety analyses for transients that result in a challenge to the reactor coolant system (RCS) pressure boundary, the general acceptance criterion is that the peak RCS pressure not exceed the American Society of Mechanical Engineers limit of 110% of the design pressure. Without crediting non-safety-grade pressure mitigating systems, protection from this limit is mainly provided by the primary and secondary code safety valves. In theory, the combination of relief capacity and setpoints for these valves is designed to provide this protection. Generally, banks of valves are set at varying setpoints staggered by 15- to 20-psid increments to minimize the number of valves that would open by an overpressure challenge. In practice, however, when these valves are removed and tested (typically during a refueling outage), setpoints are sometimes found to have drifted by >50 psid. This drift should be accounted for during the performance of the safety analysis. This paper describes analyses performed by Yankee Atomic Electric Company (YAEC) to account for setpoint drift in safety valves from testing. The results of these analyses are used to define safety valve operability or acceptance criteria

  11. Can adverse maternal and perinatal outcomes be predicted when blood pressure becomes elevated? Secondary analyses from the CHIPS (Control of Hypertension In Pregnancy Study) randomized controlled trial

    NARCIS (Netherlands)

    Magee, Laura A.; von Dadelszen, Peter; Singer, Joel; Lee, Terry; Rey, Evelyne; Ross, Susan; Asztalos, Elizabeth; Murphy, Kellie E.; Menzies, Jennifer; Sanchez, Johanna; Gafni, Amiram; Gruslin, Andrée; Helewa, Michael; Hutton, Eileen; Lee, Shoo K.; Logan, Alexander G.; Ganzevoort, Wessel; Welch, Ross; Thornton, Jim G.; Moutquin, Jean Marie

    2016-01-01

    Introduction. For women with chronic or gestational hypertension in CHIPS (Control of Hypertension In Pregnancy Study, NCT01192412), we aimed to examine whether clinical predictors collected at randomization could predict adverse outcomes. Material and methods. This was a planned, secondary analysis

  12. Reliability analyses to detect weak points in secondary-side residual heat removal systems of KWU PWR plants

    International Nuclear Information System (INIS)

    Schilling, R.

    1983-01-01

    Requirements made by Federal German licensing authorities called for the analysis of the secondary-side residual heat removal systems of new PWR plants with regard to availability, possible weak points and the balanced nature of the overall system for different incident sequences. Following a description of the generic concept and the process and safety-related systems for steam generator feed and main steam discharge, the reliability of the latter is analyzed for the small break LOCA and emergency power mode incidents, weak points in the process systems identified, remedial measures of a system-specific and test-strategic nature presented and their contribution to improving system availability quantified. A comparison with the results of the German Risk Study on Nuclear Power Plants (GRS) shows a distinct reduction in core meltdown frequency. (orig.)

  13. Experience in Use of Project Method during Technology Lessons in Secondary Schools of the USA

    Science.gov (United States)

    Sheludko, Inna

    2015-01-01

    The article examines the opportunities and prospects for the use of experience of project method during "technology lessons" in US secondary schools, since the value of project technology implementation experience into the educational process in the USA for ensuring holistic development of children, preparing them for adult life, in…

  14. Motivation Beliefs of Secondary School Teachers in Canada and Singapore: A Mixed Methods Study

    Science.gov (United States)

    Klassen, Robert M.; Chong, Wan Har; Huan, Vivien S.; Wong, Isabella; Kates, Allison; Hannok, Wanwisa

    2008-01-01

    A mixed methods approach was used to explore secondary teachers' motivation beliefs in Canada and Singapore. Results from Study 1 revealed that socio-economic status (SES) was the strongest predictor of school climate in Canada, and that collective efficacy mediated the effect of SES on school climate in Singapore, but not in Canada. In Study 2,…

  15. How Do We Know What Is Happening Online?: A Mixed Methods Approach to Analysing Online Activity

    Science.gov (United States)

    Charalampidi, Marina; Hammond, Michael

    2016-01-01

    Purpose: The purpose of this paper is to discuss the process of analysing online discussion and argue for the merits of mixed methods. Much research of online participation and e-learning has been either message-focused or person-focused. The former covers methodologies such as content and discourse analysis, the latter interviewing and surveys.…

  16. SQA of finite element method (FEM) codes used for analyses of pit storage/transport packages

    Energy Technology Data Exchange (ETDEWEB)

    Russel, E. [Lawrence Livermore National Lab., CA (United States)

    1997-11-01

    This report contains viewgraphs on the software quality assurance of finite element method codes used for analyses of pit storage and transport projects. This methodology utilizes the ISO 9000-3: Guideline for application of 9001 to the development, supply, and maintenance of software, for establishing well-defined software engineering processes to consistently maintain high quality management approaches.

  17. Deciphering the Cryptic Genome: Genome-wide Analyses of the Rice Pathogen Fusarium fujikuroi Reveal Complex Regulation of Secondary Metabolism and Novel Metabolites

    Science.gov (United States)

    Studt, Lena; Niehaus, Eva-Maria; Espino, Jose J.; Huß, Kathleen; Michielse, Caroline B.; Albermann, Sabine; Wagner, Dominik; Bergner, Sonja V.; Connolly, Lanelle R.; Fischer, Andreas; Reuter, Gunter; Kleigrewe, Karin; Bald, Till; Wingfield, Brenda D.; Ophir, Ron; Freeman, Stanley; Hippler, Michael; Smith, Kristina M.; Brown, Daren W.; Proctor, Robert H.; Münsterkötter, Martin; Freitag, Michael; Humpf, Hans-Ulrich; Güldener, Ulrich; Tudzynski, Bettina

    2013-01-01

    The fungus Fusarium fujikuroi causes “bakanae” disease of rice due to its ability to produce gibberellins (GAs), but it is also known for producing harmful mycotoxins. However, the genetic capacity for the whole arsenal of natural compounds and their role in the fungus' interaction with rice remained unknown. Here, we present a high-quality genome sequence of F. fujikuroi that was assembled into 12 scaffolds corresponding to the 12 chromosomes described for the fungus. We used the genome sequence along with ChIP-seq, transcriptome, proteome, and HPLC-FTMS-based metabolome analyses to identify the potential secondary metabolite biosynthetic gene clusters and to examine their regulation in response to nitrogen availability and plant signals. The results indicate that expression of most but not all gene clusters correlate with proteome and ChIP-seq data. Comparison of the F. fujikuroi genome to those of six other fusaria revealed that only a small number of gene clusters are conserved among these species, thus providing new insights into the divergence of secondary metabolism in the genus Fusarium. Noteworthy, GA biosynthetic genes are present in some related species, but GA biosynthesis is limited to F. fujikuroi, suggesting that this provides a selective advantage during infection of the preferred host plant rice. Among the genome sequences analyzed, one cluster that includes a polyketide synthase gene (PKS19) and another that includes a non-ribosomal peptide synthetase gene (NRPS31) are unique to F. fujikuroi. The metabolites derived from these clusters were identified by HPLC-FTMS-based analyses of engineered F. fujikuroi strains overexpressing cluster genes. In planta expression studies suggest a specific role for the PKS19-derived product during rice infection. Thus, our results indicate that combined comparative genomics and genome-wide experimental analyses identified novel genes and secondary metabolites that contribute to the evolutionary success of F

  18. Numerical and Experimental Characterization of a Composite Secondary Bonded Adhesive Lap Joint Using the Ultrasonics method

    Science.gov (United States)

    Kumar, M. R.; Ghosh, A.; Karuppannan, D.

    2018-05-01

    The construction of aircraft using advanced composites has become very popular during the past two decades, with many innovative manufacturing processes, such as cocuring, cobonding, and secondary bonding, being adopted. The secondary bonding process has become less popular than the other two because of the nonavailability of a process database and certification issues. In this article, an attempt is made to classify the quality of bonding using nondestructive ultrasonic inspection methods. Specimens were prepared and tested using the nondestructive ultrasonic Through Transmission (TT), Pulse Echo (PE), and air-coupled guided wave techniques. It is concluded that the ultrasonic pulse echo technique is the best one for inspecting composite secondary bonded adhesive joints.

  19. Aerodynamic and cephalometric analyses of velopharyngeal structure and function following re-pushback surgery for secondary correction in cleft palate.

    Science.gov (United States)

    Nakamura, Norifumi; Ogata, Yuko; Sasaguri, Masaaki; Suzuki, Akira; Kikuta, Rumiko; Ohishi, Masamichi

    2003-01-01

    The goal of this study was to clarify the efficacy of and indication for re-pushback surgery as secondary treatment for cleft palate. Fifteen patients treated by re-pushback surgery involving intravelar veloplasty (IVV) with buccal mucosal grafting on the nasal surface and followed up for more than 6 months were enrolled in this study. Pre- and postoperative velopharyngeal functions were analyzed by perceptual voice analysis, blowing ratio, and nasalance scores during phonation of /i/ and /tsu/. Cephalometric analysis was used to evaluate the relationship between velopharyngeal structure and the outcome of re-pushback surgery. Control data were obtained from the longitudinal files of normal 10-year-old children in Kyushu University Dental Hospital. Eight of 15 patients obtained complete velopharyngeal closure (complete group), five patients improved remarkably (improved group), and no effective result was seen in two patients (ineffective group). Nasality disappeared or remarkably improved after the operation in 13 patients. Effective surgical results were found in 86.7% of the patients. Partial flap necrosis was seen in two patients in whom re-pushback surgery was performed using mucosal palatal flaps instead of mucoperiosteal flaps. Preoperative velar length and the length/depth ratio of the re-pushback group were significantly smaller than in the controls, but there was no difference after the operation. Furthermore, the preoperative length/depth ratio of the complete group (above 100%) was significantly greater than those of the other two groups (below 100%). Re-pushback surgery by IVV with free mucosal grafting on the nasal surface was effective in secondarily managing velopharyngeal incompetence, improving velopharyngeal structure and function.

  20. A new online secondary path modeling method for adaptive active structure vibration control

    International Nuclear Information System (INIS)

    Pu, Yuxue; Zhang, Fang; Jiang, Jinhui

    2014-01-01

    This paper proposes a new variable step size FXLMS algorithm with an auxiliary noise power scheduling strategy for online secondary path modeling. The step size of the secondary-path modeling filter and the gain of the auxiliary noise are varied in accordance with directly available parameters. The proposed method has a low computational complexity. Computer simulations show that an active vibration control system with the proposed method gives much better vibration attenuation and modeling accuracy at a faster convergence rate than existing methods. National Instruments' CompactRIO is used as an embedded processor to control the vibration of a simply supported beam. Experimental results indicate that the vibration of the beam has been effectively attenuated. (papers)

  1. Interdependencies of aortic arch secondary flow patterns, geometry, and age analysed by 4-dimensional phase contrast magnetic resonance imaging at 3 Tesla

    Energy Technology Data Exchange (ETDEWEB)

    Frydrychowicz, Alex [University Hospital Schleswig-Holstein, Clinic for Radiology and Nuclear Medicine, Luebeck (Germany); Berger, Alexander; Russe, Maximilian F.; Bock, Jelena [University Hospital Freiburg, Department of Radiology, Medical Physics, Freiburg (Germany); Munoz del Rio, Alejandro [University of Wisconsin - Madison, Departments of Radiology and Medical Physics, Madison, WI (United States); Harloff, Andreas [University Hospital Freiburg, Department of Neurology and Clinical Neurophysiology, Freiburg (Germany); Markl, Michael [University Hospital Freiburg, Department of Radiology, Medical Physics, Freiburg (Germany); Northwestern University, Departments of Radiology and Biomedical Engineering, Chicago, IL (United States)

    2012-05-15

    It was the aim to analyse the impact of age, aortic arch geometry, and size on secondary flow patterns such as helix and vortex flow derived from flow-sensitive magnetic resonance imaging (4D PC-MRI). 62 subjects (age range = 20-80 years) without circumscribed pathologies of the thoracic aorta (ascending aortic (AAo) diameter: 3.2 ± 0.6 cm [range 2.2-5.1]) were examined by 4D PC-MRI after IRB-approval and written informed consent. Blood flow visualisation based on streamlines and time-resolved 3D particle traces was performed. Aortic diameter, shape (gothic, crook-shaped, cubic), angle, and age were correlated with existence and extent of secondary flow patterns (helicity, vortices); statistical modelling was performed. Helical flow was the typical pattern in standard crook-shaped aortic arches. With altered shapes and increasing age, helicity was less common. AAo diameter and age had the highest correlation (r = 0.69 and 0.68, respectively) with number of detected vortices. None of the other arch geometric or demographic variables (for all, P ≥ 0.177) improved statistical modelling. Substantially different secondary flow patterns can be observed in the normal thoracic aorta. Age and the AAo diameter were the parameters correlating best with presence and amount of vortices. Findings underline the importance of age- and geometry-matched control groups for haemodynamic studies. (orig.)

  2. School belongingness and mental health functioning across the primary-secondary transition in a mainstream sample: multi-group cross-lagged analyses.

    Science.gov (United States)

    Vaz, Sharmila; Falkmer, Marita; Parsons, Richard; Passmore, Anne Elizabeth; Parkin, Timothy; Falkmer, Torbjörn

    2014-01-01

    The relationship between school belongingness and mental health functioning before and after the primary-secondary school transition has not been previously investigated in students with and without disabilities. This study used a prospective longitudinal design to test the bi-directional relationships between these constructs, by surveying 266 students with and without disabilities and their parents, 6-months before and after the transition to secondary school. Cross-lagged multi-group analyses found student perception of belongingness in the final year of primary school to contribute to change in their mental health functioning a year later. The beneficial longitudinal effects of school belongingness on subsequent mental health functioning were evident in all student subgroups; even after accounting for prior mental health scores and the cross-time stability in mental health functioning and school belongingness scores. Findings of the current study substantiate the role of school contextual influences on early adolescent mental health functioning. They highlight the importance for primary and secondary schools to assess students' school belongingness and mental health functioning and transfer these records as part of the transition process, so that appropriate scaffolds are in place to support those in need. Longer term longitudinal studies are needed to increase the understanding of the temporal sequencing between school belongingness and mental health functioning of all mainstream students.
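
    The cross-lagged estimate reported here (belongingness predicting later mental health net of prior mental health) amounts to a lagged regression. A toy sketch with simulated two-wave data; the sample size matches the abstract, but all coefficients are invented:

```python
import numpy as np

rng = np.random.default_rng(9)
n = 266  # students, as in the study; the data below are simulated

# Wave 1 (final year of primary school), invented associations.
belong1 = rng.standard_normal(n)
mental1 = 0.4 * belong1 + rng.standard_normal(n)

# Wave 2 (after transition): stability path plus a cross-lagged effect
# of earlier belongingness (true value 0.3 in this simulation).
mental2 = 0.5 * mental1 + 0.3 * belong1 + 0.5 * rng.standard_normal(n)

# Cross-lagged regression: mental2 on prior mental health and belongingness.
X = np.column_stack([np.ones(n), mental1, belong1])
beta, *_ = np.linalg.lstsq(X, mental2, rcond=None)
print("estimated cross-lagged effect:", beta[2])
```

    A full cross-lagged panel analysis would also fit the reverse path (mental health predicting later belongingness) and compare the two, as the study's bi-directional design does.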

  3. School Belongingness and Mental Health Functioning across the Primary-Secondary Transition in a Mainstream Sample: Multi-Group Cross-Lagged Analyses

    Science.gov (United States)

    Vaz, Sharmila; Falkmer, Marita; Parsons, Richard; Passmore, Anne Elizabeth; Parkin, Timothy; Falkmer, Torbjörn

    2014-01-01

    The relationship between school belongingness and mental health functioning before and after the primary-secondary school transition has not been previously investigated in students with and without disabilities. This study used a prospective longitudinal design to test the bi-directional relationships between these constructs, by surveying 266 students with and without disabilities and their parents, 6-months before and after the transition to secondary school. Cross-lagged multi-group analyses found student perception of belongingness in the final year of primary school to contribute to change in their mental health functioning a year later. The beneficial longitudinal effects of school belongingness on subsequent mental health functioning were evident in all student subgroups; even after accounting for prior mental health scores and the cross-time stability in mental health functioning and school belongingness scores. Findings of the current study substantiate the role of school contextual influences on early adolescent mental health functioning. They highlight the importance for primary and secondary schools to assess students’ school belongingness and mental health functioning and transfer these records as part of the transition process, so that appropriate scaffolds are in place to support those in need. Longer term longitudinal studies are needed to increase the understanding of the temporal sequencing between school belongingness and mental health functioning of all mainstream students. PMID:24967580

  4. School belongingness and mental health functioning across the primary-secondary transition in a mainstream sample: multi-group cross-lagged analyses.

    Directory of Open Access Journals (Sweden)

    Sharmila Vaz

    Full Text Available The relationship between school belongingness and mental health functioning before and after the primary-secondary school transition has not been previously investigated in students with and without disabilities. This study used a prospective longitudinal design to test the bi-directional relationships between these constructs, by surveying 266 students with and without disabilities and their parents, 6-months before and after the transition to secondary school. Cross-lagged multi-group analyses found student perception of belongingness in the final year of primary school to contribute to change in their mental health functioning a year later. The beneficial longitudinal effects of school belongingness on subsequent mental health functioning were evident in all student subgroups; even after accounting for prior mental health scores and the cross-time stability in mental health functioning and school belongingness scores. Findings of the current study substantiate the role of school contextual influences on early adolescent mental health functioning. They highlight the importance for primary and secondary schools to assess students' school belongingness and mental health functioning and transfer these records as part of the transition process, so that appropriate scaffolds are in place to support those in need. Longer term longitudinal studies are needed to increase the understanding of the temporal sequencing between school belongingness and mental health functioning of all mainstream students.

  5. Interdependencies of aortic arch secondary flow patterns, geometry, and age analysed by 4-dimensional phase contrast magnetic resonance imaging at 3 Tesla

    International Nuclear Information System (INIS)

    Frydrychowicz, Alex; Berger, Alexander; Russe, Maximilian F.; Bock, Jelena; Munoz del Rio, Alejandro; Harloff, Andreas; Markl, Michael

    2012-01-01

    It was the aim to analyse the impact of age, aortic arch geometry, and size on secondary flow patterns such as helix and vortex flow derived from flow-sensitive magnetic resonance imaging (4D PC-MRI). 62 subjects (age range = 20-80 years) without circumscribed pathologies of the thoracic aorta (ascending aortic (AAo) diameter: 3.2 ± 0.6 cm [range 2.2-5.1]) were examined by 4D PC-MRI after IRB-approval and written informed consent. Blood flow visualisation based on streamlines and time-resolved 3D particle traces was performed. Aortic diameter, shape (gothic, crook-shaped, cubic), angle, and age were correlated with existence and extent of secondary flow patterns (helicity, vortices); statistical modelling was performed. Helical flow was the typical pattern in standard crook-shaped aortic arches. With altered shapes and increasing age, helicity was less common. AAo diameter and age had the highest correlation (r = 0.69 and 0.68, respectively) with number of detected vortices. None of the other arch geometric or demographic variables (for all, P ≥ 0.177) improved statistical modelling. Substantially different secondary flow patterns can be observed in the normal thoracic aorta. Age and the AAo diameter were the parameters correlating best with presence and amount of vortices. Findings underline the importance of age- and geometry-matched control groups for haemodynamic studies. (orig.)

  6. Comparative transcriptome analyses of three medicinal Forsythia species and prediction of candidate genes involved in secondary metabolisms.

    Science.gov (United States)

    Sun, Luchao; Rai, Amit; Rai, Megha; Nakamura, Michimi; Kawano, Noriaki; Yoshimatsu, Kayo; Suzuki, Hideyuki; Kawahara, Nobuo; Saito, Kazuki; Yamazaki, Mami

    2018-05-07

    The three Forsythia species, F. suspensa, F. viridissima and F. koreana, have been used as herbal medicines in China, Japan and Korea for centuries and they are known to be rich sources of numerous pharmaceutical metabolites, forsythin, forsythoside A, arctigenin, rutin and other phenolic compounds. In this study, de novo transcriptome sequencing and assembly was performed on these species. Using leaf and flower tissues of F. suspensa, F. viridissima and F. koreana, 1.28-2.45-Gbp sequences of Illumina based pair-end reads were obtained and assembled into 81,913, 88,491 and 69,458 unigenes, respectively. Classification of the annotated unigenes in gene ontology terms and KEGG pathways was used to compare the transcriptome of three Forsythia species. The expression analysis of orthologous genes across all three species showed the expression in leaf tissues being highly correlated. The candidate genes presumably involved in the biosynthetic pathway of lignans and phenylethanoid glycosides were screened as co-expressed genes. They express highly in the leaves of F. viridissima and F. koreana. Furthermore, the three unigenes annotated as acyltransferase were predicted to be associated with the biosynthesis of acteoside and forsythoside A from the expression pattern and phylogenetic analysis. This study is the first report on comparative transcriptome analyses of medicinally important Forsythia genus and will serve as an important resource to facilitate further studies on biosynthesis and regulation of therapeutic compounds in Forsythia species.
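
    The co-expression screen mentioned above can be illustrated in miniature. This is not the authors' pipeline; the expression matrix, gene indices, and correlation threshold are all invented:

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy expression matrix: 6 unigenes x 20 samples (simulated values).
# Genes 0-2 follow a shared "pathway" profile; genes 3-5 are unrelated.
profile = rng.normal(size=20)
expr = np.vstack([profile + 0.1 * rng.normal(size=20) for _ in range(3)] +
                 [rng.normal(size=20) for _ in range(3)])

bait = expr[0]                          # a gene already linked to the pathway
r = np.array([np.corrcoef(bait, g)[0, 1] for g in expr])
candidates = np.where(r > 0.9)[0]       # co-expressed candidate genes
print(candidates)
```

    On real transcriptome data the screen would run over tens of thousands of unigenes, and candidates would then be filtered by annotation and phylogenetic analysis, as in the study.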

  7. Development of an ellipse fitting method with which to analyse selected area electron diffraction patterns

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, D.R.G., E-mail: dmitchel@uow.edu.au [Electron Microscopy Centre, Australian Institute for Innovative Materials, Innovation Campus, University of Wollongong, North Wollongong, NSW 2500 (Australia); Van den Berg, J.A. [Electron Microscopy Centre, Australian Institute for Innovative Materials, Innovation Campus, University of Wollongong, North Wollongong, NSW 2500 (Australia); Catalyst Fundamentals, Fischer-Tropsch and Syngas Conversion Research, Sasol Technology R & D, Sasolburg 1947 (South Africa)

    2016-01-15

    A software method has been developed which uses ellipse fitting to analyse electron diffraction patterns from polycrystalline materials. The method, which requires minimal user input, can determine the pattern centre and the diameter of diffraction rings with sub-pixel precision. This enables accurate crystallographic information to be obtained in a rapid and consistent manner. Since the method fits ellipses, it can detect, quantify and correct any elliptical distortion introduced by the imaging system. Distortion information derived from polycrystalline patterns as a function of camera length can be subsequently recalled and applied to single crystal patterns, resulting in improved precision and accuracy. The method has been implemented as a plugin for the DigitalMicrograph software by Gatan, and is freely available via the internet. - Highlights: • A robust ellipse fitting method is developed. • Freely available software for automated diffraction pattern analysis is demonstrated. • Measurement and correction of elliptical distortion is routinely achieved.
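
    The centre- and distortion-recovery step can be illustrated with a plain algebraic conic fit. This is not the plugin's actual algorithm (a constrained fit such as Fitzgibbon's is usually preferred for noisy data), and the ring coordinates below are synthetic:

```python
import numpy as np

def fit_ellipse(x, y):
    """Least-squares fit of a conic A x^2 + B xy + C y^2 + D x + E y + F = 0.

    Plain SVD null-space fit; returns the coefficients and the centre.
    """
    M = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(M)
    A, B, C, D, E, F = Vt[-1]
    centre = np.linalg.solve([[2 * A, B], [B, 2 * C]], [-D, -E])
    return (A, B, C, D, E, F), centre

# Synthetic "diffraction ring": a circle with 2% elliptical distortion,
# centred slightly off the image centre, plus pixel-level noise.
rng = np.random.default_rng(1)
t = rng.uniform(0, 2 * np.pi, 400)
cx, cy, r = 512.3, 509.8, 200.0
x = cx + 1.02 * r * np.cos(t) + rng.normal(0, 0.2, t.size)
y = cy + r * np.sin(t) + rng.normal(0, 0.2, t.size)

(A, B, C, *_), centre = fit_ellipse(x, y)
lam = np.abs(np.linalg.eigvalsh([[A, B / 2], [B / 2, C]]))
ratio = np.sqrt(lam.max() / lam.min())   # recovered axis ratio (~1.02)
print(centre, ratio)
```

    Once the axis ratio is known, single-crystal patterns taken at the same camera length can be rescaled along the distortion axis, which is the recall-and-apply step the abstract describes.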

  8. Statistical methods for analysing the relationship between bank profitability and liquidity

    OpenAIRE

    Boguslaw Guzik

    2006-01-01

    The article analyses the most popular methods for the empirical estimation of the relationship between bank profitability and liquidity. Owing to the fact that profitability depends on various factors (both economic and non-economic), a simple correlation coefficient, two-dimensional (profitability/liquidity) graphs, or models in which profitability depends only on a liquidity variable do not provide good and reliable results. Quite good results can be obtained only when multifactorial profitabilit...
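
    The abstract's point can be shown numerically with invented data: when a confounder (say, bank size) drives both variables, the simple two-dimensional correlation can even take the wrong sign, while a multifactor regression recovers the partial effect:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000

# Simulated banks: size confounds the profitability/liquidity relationship.
size = rng.standard_normal(n)
liquidity = -0.8 * size + 0.5 * rng.standard_normal(n)
profitability = 0.5 * liquidity + 1.0 * size + 0.3 * rng.standard_normal(n)

# Naive two-dimensional view: the simple correlation is negative here...
r = np.corrcoef(liquidity, profitability)[0, 1]

# ...but a multifactor model recovers the true positive partial effect (0.5).
X = np.column_stack([np.ones(n), liquidity, size])
beta, *_ = np.linalg.lstsq(X, profitability, rcond=None)
print(f"simple r = {r:.2f}, partial effect of liquidity = {beta[1]:.2f}")
```
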

  9. ObStruct: a method to objectively analyse factors driving population structure using Bayesian ancestry profiles.

    Directory of Open Access Journals (Sweden)

    Velimir Gayevskiy

    Full Text Available Bayesian inference methods are extensively used to detect the presence of population structure given genetic data. The primary output of software implementing these methods are ancestry profiles of sampled individuals. While these profiles robustly partition the data into subgroups, currently there is no objective method to determine whether the fixed factor of interest (e.g. geographic origin correlates with inferred subgroups or not, and if so, which populations are driving this correlation. We present ObStruct, a novel tool to objectively analyse the nature of structure revealed in Bayesian ancestry profiles using established statistical methods. ObStruct evaluates the extent of structural similarity between sampled and inferred populations, tests the significance of population differentiation, provides information on the contribution of sampled and inferred populations to the observed structure and crucially determines whether the predetermined factor of interest correlates with inferred population structure. Analyses of simulated and experimental data highlight ObStruct's ability to objectively assess the nature of structure in populations. We show the method is capable of capturing an increase in the level of structure with increasing time since divergence between simulated populations. Further, we applied the method to a highly structured dataset of 1,484 humans from seven continents and a less structured dataset of 179 Saccharomyces cerevisiae from three regions in New Zealand. Our results show that ObStruct provides an objective metric to classify the degree, drivers and significance of inferred structure, as well as providing novel insights into the relationships between sampled populations, and adds a final step to the pipeline for population structure analyses.
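
    ObStruct itself is not reproduced here, but the flavour of its test (does a predetermined factor explain inferred ancestry profiles?) can be sketched with a one-way variance decomposition plus a permutation test on simulated Dirichlet ancestry profiles; all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy ancestry profiles (each row sums to 1) for 60 individuals
# sampled from two regions, the predetermined factor of interest.
n = 60
region = np.repeat([0, 1], n // 2)
alpha = np.where(region[:, None] == 0, [8.0, 2.0], [3.0, 7.0])
g = rng.gamma(alpha)                      # row-wise Dirichlet via Gamma draws
Q = g / g.sum(axis=1, keepdims=True)

def eta_sq(q, groups):
    """Share of variance in one ancestry component explained by the factor."""
    grand = q.mean()
    ss_tot = ((q - grand) ** 2).sum()
    ss_between = sum((groups == k).sum() * (q[groups == k].mean() - grand) ** 2
                     for k in np.unique(groups))
    return ss_between / ss_tot

obs = eta_sq(Q[:, 0], region)
perm = np.array([eta_sq(Q[:, 0], rng.permutation(region)) for _ in range(999)])
p_value = (1 + (perm >= obs).sum()) / (1 + len(perm))
print(f"eta^2 = {obs:.2f}, permutation p = {p_value:.3f}")
```

    ObStruct generalises this idea to all inferred ancestry components at once and additionally reports which sampled and inferred populations drive the observed structure.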

  10. Analytical methods used by the geochemical section: water; Methodes d'analyses utilisees par la section de geochimie: les eaux

    Energy Technology Data Exchange (ETDEWEB)

    Berthollet, P; Cavalier, G [Commissariat a l' Energie Atomique, Fontenay-aux-Roses (France). Centre d' Etudes Nucleaires

    1971-07-01

    The authors describe the analytical methods used by the C.E.A. Geochemical Section to determine the chemical composition of natural waters encountered during the prospecting of uraniferous deposits or in the course of mining operations. Because of the diversity of the samples and the different items of information requested, methods were selected and adapted to answer the demands peculiar to mining research. Methods for the quantitative analysis of surface and ground waters for the following constituents are reviewed: carbonates and bicarbonates, calcium, magnesium, chlorides, sodium and potassium, sulfates, nitrates, silica, phosphates, ferrous and ferric iron, manganese, aluminium, fluorides, dissolved oxygen, free CO2, H2S and total sulphur, and uranium. (authors)

  11. Partial differential equation techniques for analysing animal movement: A comparison of different methods.

    Science.gov (United States)

    Wang, Yi-Shan; Potts, Jonathan R

    2017-03-07

    Recent advances in animal tracking have allowed us to uncover the drivers of movement in unprecedented detail. This has enabled modellers to construct ever more realistic models of animal movement, which aid in uncovering detailed patterns of space use in animal populations. Partial differential equations (PDEs) provide a popular tool for mathematically analysing such models. However, their construction often relies on simplifying assumptions which may greatly affect the model outcomes. Here, we analyse the effect of various PDE approximations on the analysis of some simple movement models, including a biased random walk, central-place foraging processes and movement in heterogeneous landscapes. Perhaps the most commonly-used PDE method dates back to a seminal paper of Patlak from 1953. However, our results show that this can be a very poor approximation in even quite simple models. On the other hand, more recent methods, based on transport equation formalisms, can provide more accurate results, as long as the kernel describing the animal's movement is sufficiently smooth. When the movement kernel is not smooth, we show that both the older and newer methods can lead to quantitatively misleading results. Our detailed analysis will aid future researchers in the appropriate choice of PDE approximation for analysing models of animal movement. Copyright © 2017 Elsevier Ltd. All rights reserved.
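
    For the simplest case mentioned above, a biased random walk with a smooth (Gaussian) step kernel, the classical drift-diffusion approximation u_t = -v u_x + D u_xx, with v = E[step]/tau and D = Var[step]/(2 tau), predicts the mean and spread of positions well. The sketch below checks that against simulation with invented parameters; as the abstract cautions, the agreement can break down when the movement kernel is not smooth:

```python
import numpy as np

rng = np.random.default_rng(5)

# 2000 independent walkers, 500 steps each, Gaussian step kernel with drift.
tau = 1.0
mean_step, sd_step = 0.1, 1.0
steps = rng.normal(mean_step, sd_step, size=(2000, 500))
paths = np.cumsum(steps, axis=1)

# Drift-diffusion (Patlak-type) approximation of the walk.
v = mean_step / tau
D = sd_step ** 2 / (2 * tau)
t_end = steps.shape[1] * tau

sim_mean = paths[:, -1].mean()
sim_var = paths[:, -1].var()
print(sim_mean, v * t_end)        # simulated vs predicted mean displacement
print(sim_var, 2 * D * t_end)     # simulated vs predicted variance
```
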

  12. Advanced computational tools and methods for nuclear analyses of fusion technology systems

    International Nuclear Information System (INIS)

    Fischer, U.; Chen, Y.; Pereslavtsev, P.; Simakov, S.P.; Tsige-Tamirat, H.; Loughlin, M.; Perel, R.L.; Petrizzi, L.; Tautges, T.J.; Wilson, P.P.H.

    2005-01-01

    An overview is presented of advanced computational tools and methods developed recently for nuclear analyses of Fusion Technology systems such as the experimental device ITER ('International Thermonuclear Experimental Reactor') and the intense neutron source IFMIF ('International Fusion Material Irradiation Facility'). These include Monte Carlo based computational schemes for the calculation of three-dimensional shut-down dose rate distributions, methods, codes and interfaces for the use of CAD geometry models in Monte Carlo transport calculations, algorithms for Monte Carlo based sensitivity/uncertainty calculations, as well as computational techniques and data for IFMIF neutronics and activation calculations. (author)

  13. An application of the explicit method for analysing intersystem dependencies in the evaluation of event trees

    International Nuclear Information System (INIS)

    Oliveira, L.F.S. de; Frutuoso e Melo, P.F.F.; Lima, J.E.P.; Stal, I.L.

    1985-01-01

A computational application of the explicit method for analysing event trees in the context of probabilistic risk assessments is discussed. A detailed analysis of the explicit method is presented, including the train level analysis (TLA) of safety systems and the impact vector method. It is shown that the penalty for not adopting TLA is that in some cases non-conservative results may be reached. The impact vector method can significantly reduce the number of sequences to be considered, and its use has inspired the definition of a dependency matrix, which enables the proper running of a computer code developed especially for analysing event trees. The code has been extensively used in the Angra 1 PRA currently underway. In its present version it gives as output the dominant sequences for each given initiator, properly classifying them into core-degradation classes as specified by the user. (Author)
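The explicit method quantifies each event-tree sequence as the product of the initiator frequency and conditional branch probabilities; intersystem dependencies enter by conditioning one system's failure probability on the state of another. A toy sketch of that bookkeeping (systems and numbers are hypothetical, unrelated to the Angra 1 study):

```python
from itertools import product

# Toy event tree: initiator I followed by two safety systems A and B.
# Intersystem dependency: if A fails, B's failure probability increases.
p_initiator = 1e-2
p_fail_A = 1e-3
p_fail_B = {"A_ok": 1e-3, "A_fail": 1e-1}  # conditional on A's state

sequences = []
for a_fails, b_fails in product([False, True], repeat=2):
    pA = p_fail_A if a_fails else 1 - p_fail_A
    pB_fail = p_fail_B["A_fail" if a_fails else "A_ok"]
    pB = pB_fail if b_fails else 1 - pB_fail
    label = f"I-{'A' if a_fails else 'a'}{'B' if b_fails else 'b'}"  # caps = failed
    sequences.append((label, p_initiator * pA * pB))

# Dominant sequences first, as the code described in the abstract reports them.
for label, freq in sorted(sequences, key=lambda s: -s[1]):
    print(f"{label}: {freq:.3e}")
```

Ignoring the dependency (using 1e-3 for B unconditionally) would understate the I-AB sequence frequency by two orders of magnitude, which is the kind of non-conservatism the abstract warns about.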

  14. Stress and deflection analyses of floating roofs based on a load-modifying method

    International Nuclear Information System (INIS)

    Sun Xiushan; Liu Yinghua; Wang Jianbin; Cen Zhangzhi

    2008-01-01

This paper proposes a load-modifying method for the stress and deflection analyses of floating roofs used in cylindrical oil storage tanks. The formulations of loads and deformations are derived from the equilibrium analysis of floating roofs. Based on these formulations, the load-modifying method is developed to conduct a geometrically nonlinear analysis of floating roofs within a finite element (FE) simulation. In this procedure, the analysis is carried out through a series of iterative computations until convergence is achieved within the error tolerance. Numerical examples are given to demonstrate the validity and reliability of the proposed method, which provides an effective and practical numerical solution for the design and analysis of floating roofs.
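The load-modifying iteration described above alternates between solving for the deflection under the current load and updating the load from that deflection, until successive deflections agree within tolerance. A scalar stand-in for the FE solve illustrates the scheme (the linear model and all constants are invented):

```python
# Hypothetical scalar stand-in for the FE solve: deflection w = q / k.
# The buoyant load on a floating roof depends on the deflection itself,
# q = q0 - c*w, so load and deflection are found by iterative load modification.
k, c, q0 = 50.0, 10.0, 120.0   # stiffness, buoyancy coupling, nominal load (made up)
tol, w = 1e-10, 0.0
for iteration in range(200):
    q = q0 - c * w             # modify the load using the current deflection
    w_new = q / k              # "FE" solve under the modified load
    if abs(w_new - w) < tol:   # converged within the error tolerance
        break
    w = w_new

exact = q0 / (k + c)           # closed-form fixed point of this linear toy model
print(iteration, w, exact)
```

The iteration converges because the coupling ratio c/k is below one; the real method replaces the scalar solve with a geometrically nonlinear FE solution.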

  15. Determining Methods used in Teaching Geography in Secondary Schools in Rongo District, Kenya

    OpenAIRE

    Omoro Benjamin; Luke Wakhungu Nato

    2014-01-01

This article deals with methods of teaching Geography in Kenya and beyond. The importance of Geography in the secondary school curriculum cannot be overemphasized. Improving the performance of Geography education is a great societal need in Kenya, not only for the industrialization of the country as envisaged in Vision 2030 but also for ensuring food security through practices such as land reclamation and irrigation farming. The objective of this article was to find out th...

  16. Radiation leakage monitoring method and device from primary to secondary coolant systems in nuclear reactor

    International Nuclear Information System (INIS)

    Tajiri, Yoshiaki; Umehara, Toshihiro; Yamada, Masataka.

    1993-01-01

The present invention monitors radiation leaking from any of the primary cooling systems into the secondary cooling systems of a plurality of steam generators. That is, radiation monitoring means, each corresponding to one steam generator, are disposed upstream of the position where the main steam pipes are joined. With such a constitution, since the detection object of each radiation monitoring means is secondary coolant before mixing with, or dilution by, the secondary coolants of the other secondary loops, lowering of detection accuracy can be avoided. In the normal case, that is, with neither radiation leakage nor background change, the device serves as a convenient measuring system requiring only calculation performance. Once abnormality occurs, the loop having a value exceeding a standard value is identified by a single-channel analyzer function. The amount of radiation leakage from the steam generator belonging to the identified loop is then monitored quantitatively by a multichannel analyzer function. According to the method of the present invention, since a specific spectrum analysis is conducted upon occurrence of abnormality, the presence of radiation leakage and its scale can be judged rapidly. (I.S.)

  17. SVM-PB-Pred: SVM based protein block prediction method using sequence profiles and secondary structures.

    Science.gov (United States)

    Suresh, V; Parthasarathy, S

    2014-01-01

We developed a support vector machine based web server called SVM-PB-Pred to predict the Protein Block for any given amino acid sequence. The input features of SVM-PB-Pred include (i) sequence profiles (PSSM) and (ii) actual secondary structures (SS) from the DSSP method or predicted secondary structures from the NPS@ and GOR4 methods. Three combined input features, PSSM+SS(DSSP), PSSM+SS(NPS@) and PSSM+SS(GOR4), were used to train and test the SVM models. Similarly, four datasets, RS90, DB433, LI1264 and SP1577, were used to develop the SVM models. The four SVM models were evaluated using three different benchmarks, namely (i) self-consistency, (ii) seven-fold cross-validation and (iii) an independent case test. The maximum prediction accuracy of ~70% was observed in the self-consistency test for the SVM models of both the LI1264 and SP1577 datasets, where the PSSM+SS(DSSP) input features were used. The prediction accuracies were reduced to ~53% for PSSM+SS(NPS@) and ~43% for PSSM+SS(GOR4) in the independent case test for the SVM models of the same two datasets. Using our method, it is possible to predict the protein block letters for any query protein sequence with ~53% accuracy when the SP1577 dataset and predicted secondary structure from the NPS@ server are used. The SVM-PB-Pred server can be freely accessed through http://bioinfo.bdu.ac.in/~svmpbpred.
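The training setup described, an SVM over concatenated profile and secondary-structure features with seven-fold cross-validation, can be sketched with scikit-learn on synthetic stand-in data (the features and labels below are random placeholders, not real PSSM or DSSP output, and the task is reduced to two classes for brevity):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_samples = 300
pssm = rng.normal(size=(n_samples, 20))        # stand-in PSSM profile (20 aa scores)
ss = rng.integers(0, 3, size=(n_samples, 1))   # stand-in secondary structure H/E/C as 0/1/2
X = np.hstack([pssm, ss])
# Make labels weakly depend on the features so the SVM has something to learn.
y = (pssm[:, 0] + 0.5 * ss[:, 0] > 0.5).astype(int)

model = SVC(kernel="rbf", C=1.0, gamma="scale")
scores = cross_val_score(model, X, y, cv=7)    # seven-fold cross-validation, as in the paper
print(f"mean 7-fold accuracy: {scores.mean():.2f}")
```

The real server predicts 16 Protein Block letters per residue; the pipeline shape (feature concatenation, one model per dataset, cross-validated accuracy) is what this sketch mirrors.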

  18. Physical analytical methods for uranium hexafluoride; Methodes physiques d'analyse de l'hexafluorure d'uranium

    Energy Technology Data Exchange (ETDEWEB)

    Vandenbussche, G [Commissariat a l' Energie Atomique, Fontenay-aux-Roses (France). Centre d' Etudes Nucleaires

    1965-12-15

Various physical methods of analysis currently used or still under investigation are reviewed: measurement of the speed of sound, vapor pressure measurements, fractional distillation, cryometry, micro-sublimation, ultraviolet, visible and infrared absorption spectrophotometry, nuclear magnetic resonance and mass spectrometry. For each method, the principle and its applications are given, and the results obtained concerning reproducibility, limits of application and speed of measurement are discussed. (author)

  19. Sobol method application in dimensional sensitivity analyses of different AFM cantilevers for biological particles

    Science.gov (United States)

    Korayem, M. H.; Taheri, M.; Ghahnaviyeh, S. D.

    2015-08-01

Due to the delicate nature of biological micro/nanoparticles, it is necessary to compute the critical manipulation force. The modeling and simulation of reactions and nanomanipulator dynamics in a precise manipulation process require exact modeling of cantilever stiffness, especially for dagger-shaped cantilevers, because previous models are not applicable to this investigation. The stiffness values for V-shaped cantilevers can be obtained through several methods, one of which is the PBA method. In another approach, the cantilever is divided into two sections, a triangular head section and two slanted rectangular beams; deformations along different directions are then computed and used to obtain the stiffness values in each direction. Since stiffness formulations for the dagger cantilever are needed for the sensitivity analyses, these formulations were derived first and the sensitivity analyses carried out afterwards. In examining the stiffness of the dagger-shaped cantilever, the micro-beam was divided into triangular and rectangular sections, and by computing the displacements along different directions and using the existing relations, the stiffness values for the dagger cantilever were obtained. In this paper, after investigating the stiffness of common types of cantilevers, Sobol sensitivity analyses of the effects of various geometric parameters on the stiffness of these cantilevers were carried out. The effects of different cantilevers on the dynamic behavior of nanoparticles were also studied, and the dagger-shaped cantilever was deemed more suitable for the manipulation of biological particles.
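Sobol sensitivity analysis, as used in the paper, attributes fractions of the output variance to each input parameter. A sketch using a standard first-order Monte Carlo estimator on the textbook rectangular-cantilever bending stiffness k = Ewt³/(4L³) (the nominal values and the ±10% ranges are illustrative, not the paper's):

```python
import numpy as np

def stiffness(E, w, t, L):
    # Bending stiffness of a rectangular cantilever: k = E*w*t^3 / (4*L^3).
    return E * w * t**3 / (4 * L**3)

rng = np.random.default_rng(2)
n, d = 50_000, 4
# Uniform +/-10% ranges around nominal values (illustrative only).
nominal = np.array([170e9, 30e-6, 2e-6, 200e-6])   # E, w, t, L
lo, hi = 0.9 * nominal, 1.1 * nominal

A = rng.uniform(lo, hi, size=(n, d))
B = rng.uniform(lo, hi, size=(n, d))
fA = stiffness(*A.T)
fB = stiffness(*B.T)
var = np.var(np.concatenate([fA, fB]))

S = {}
for i, name in enumerate(["E", "w", "t", "L"]):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                      # resample only parameter i
    fABi = stiffness(*ABi.T)
    S[name] = np.mean(fB * (fABi - fA)) / var  # Saltelli-style first-order estimator
    print(f"first-order Sobol index S_{name} = {S[name]:.2f}")
```

Because stiffness scales with t³ and L⁻³ but only linearly with E and w, the cubed parameters dominate the variance, which is the kind of ranking the paper extracts for each cantilever geometry.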

  20. An application of the explicit method for analysing intersystem dependencies in the evaluation of event trees

    International Nuclear Information System (INIS)

    Oliveira, L.F.S.; Frutuoso e Melo, P.F.; Lima, J.E.P.; Stal, I.L.

    1985-01-01

We discuss in this paper a computational application of the explicit method for analyzing event trees in the context of probabilistic risk assessments. A detailed analysis of the explicit method is presented, including the train level analysis (TLA) of safety systems and the impact vector method. It is shown that the penalty for not adopting TLA is that in some cases non-conservative results may be reached. The impact vector method can significantly reduce the number of sequences to be considered, and its use has inspired the definition of a dependency matrix, which enables the proper running of a computer code developed especially for analysing event trees. This code constructs and quantifies the event trees in the fashion just discussed, by receiving as input the construction and quantification dependencies defined in the dependency matrix. The code has been extensively used in the Angra 1 PRA currently underway. In its present version it gives as output the dominant sequences for each given initiator, properly classifying them into core-degradation classes as specified by the user. This calculation is made in a pointwise fashion. Extensions of this code are being developed in order to perform uncertainty analyses on the dominant sequences and also to compute risk importance measures of the safety systems involved. (orig.)

  1. Chemistry and liquid chromatography methods for the analyses of primary oxidation products of triacylglycerols.

    Science.gov (United States)

    Zeb, A

    2015-05-01

Triacylglycerols (TAGs) are one of the major components of cells in higher biological systems and act as an energy reservoir in living cells. The unsaturated fatty acid moiety is the key site of oxidation and of the formation of oxidation compounds. The TAG free radical generates several primary oxidation compounds, including hydroperoxides, hydroxides, epidioxides, hydroperoxy epidioxides, hydroxy epidioxides, and epoxides. The presence of these oxidized TAGs in the cell increases the likelihood of several detrimental processes. Several liquid chromatography (LC) methods have been reported for their analysis. This review therefore focuses on the chemistry, oxidation, extraction, and the LC methods reported for the analysis of oxidized TAGs. Studies using thin-layer chromatography mostly focused on the separation of total oxidized TAGs and employed hexane as the major solvent. High-performance LC (HPLC) methods are discussed in detail along with their merits and demerits. Most of the HPLC methods employed isocratic elution with methanol and acetonitrile as the major solvents and an ultraviolet detector. Coupling HPLC with mass spectrometry (MS) greatly increases the efficiency of analysis and enables reliable structural elucidation. MS has proved helpful in studying the oxidation chemistry of TAGs, and its use needs to be extended to complex biological systems.

  2. Effectiveness of Demonstration and Lecture Methods in Learning Concept in Economics among Secondary School Students in Borno State, Nigeria

    Science.gov (United States)

    Muhammad, Amin Umar; Bala, Dauda; Ladu, Kolomi Mutah

    2016-01-01

This study investigated the effectiveness of demonstration and lecture methods in learning concepts in Economics among secondary school students in Borno State, Nigeria. Five objectives were set: to determine the effectiveness of the demonstration method in learning economics concepts among secondary school students in Borno State, determine the effectiveness…

  3. A simplified method for obtaining high-purity perchlorate from groundwater for isotope analyses.

    Energy Technology Data Exchange (ETDEWEB)

    vonKiparski, G; Hillegonds, D

    2011-04-04

Investigations into the occurrence and origin of perchlorate (ClO{sub 4}{sup -}) found in groundwater across North America were sparse until recent years, and there is mounting evidence that natural formation mechanisms are important. New opportunities for identifying groundwater perchlorate and its origin have arisen with improved detection methods and sampling techniques. Additionally, the forensic potential of isotopic measurements has begun to elucidate sources, potential formation mechanisms and natural attenuation processes. The procedures developed appear amenable to high-precision stable isotope analyses, as well as lower-precision AMS analyses of {sup 36}Cl. Immediate work lies in analyzing perchlorate isotope standards and developing full expectations for analytical accuracy and uncertainty. Field samples have also been collected and will be analyzed once the final QA/QC samples are deemed acceptable.

  4. Hybrid and Parallel Domain-Decomposition Methods Development to Enable Monte Carlo for Reactor Analyses

    International Nuclear Information System (INIS)

    Wagner, John C.; Mosher, Scott W.; Evans, Thomas M.; Peplow, Douglas E.; Turner, John A.

    2010-01-01

    This paper describes code and methods development at the Oak Ridge National Laboratory focused on enabling high-fidelity, large-scale reactor analyses with Monte Carlo (MC). Current state-of-the-art tools and methods used to perform real commercial reactor analyses have several undesirable features, the most significant of which is the non-rigorous spatial decomposition scheme. Monte Carlo methods, which allow detailed and accurate modeling of the full geometry and are considered the gold standard for radiation transport solutions, are playing an ever-increasing role in correcting and/or verifying the deterministic, multi-level spatial decomposition methodology in current practice. However, the prohibitive computational requirements associated with obtaining fully converged, system-wide solutions restrict the role of MC to benchmarking deterministic results at a limited number of state-points for a limited number of relevant quantities. The goal of this research is to change this paradigm by enabling direct use of MC for full-core reactor analyses. The most significant of the many technical challenges that must be overcome are the slow, non-uniform convergence of system-wide MC estimates and the memory requirements associated with detailed solutions throughout a reactor (problems involving hundreds of millions of different material and tally regions due to fuel irradiation, temperature distributions, and the needs associated with multi-physics code coupling). To address these challenges, our research has focused on the development and implementation of (1) a novel hybrid deterministic/MC method for determining high-precision fluxes throughout the problem space in k-eigenvalue problems and (2) an efficient MC domain-decomposition (DD) algorithm that partitions the problem phase space onto multiple processors for massively parallel systems, with statistical uncertainty estimation. The hybrid method development is based on an extension of the FW-CADIS method, which

  5. Hybrid and parallel domain-decomposition methods development to enable Monte Carlo for reactor analyses

    International Nuclear Information System (INIS)

    Wagner, J.C.; Mosher, S.W.; Evans, T.M.; Peplow, D.E.; Turner, J.A.

    2010-01-01

    This paper describes code and methods development at the Oak Ridge National Laboratory focused on enabling high-fidelity, large-scale reactor analyses with Monte Carlo (MC). Current state-of-the-art tools and methods used to perform 'real' commercial reactor analyses have several undesirable features, the most significant of which is the non-rigorous spatial decomposition scheme. Monte Carlo methods, which allow detailed and accurate modeling of the full geometry and are considered the 'gold standard' for radiation transport solutions, are playing an ever-increasing role in correcting and/or verifying the deterministic, multi-level spatial decomposition methodology in current practice. However, the prohibitive computational requirements associated with obtaining fully converged, system-wide solutions restrict the role of MC to benchmarking deterministic results at a limited number of state-points for a limited number of relevant quantities. The goal of this research is to change this paradigm by enabling direct use of MC for full-core reactor analyses. The most significant of the many technical challenges that must be overcome are the slow, non-uniform convergence of system-wide MC estimates and the memory requirements associated with detailed solutions throughout a reactor (problems involving hundreds of millions of different material and tally regions due to fuel irradiation, temperature distributions, and the needs associated with multi-physics code coupling). To address these challenges, our research has focused on the development and implementation of (1) a novel hybrid deterministic/MC method for determining high-precision fluxes throughout the problem space in k-eigenvalue problems and (2) an efficient MC domain-decomposition (DD) algorithm that partitions the problem phase space onto multiple processors for massively parallel systems, with statistical uncertainty estimation. The hybrid method development is based on an extension of the FW-CADIS method
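The domain-decomposition idea, in which each spatial domain tracks its own particles and banks those that cross into a neighbour, can be illustrated with a toy 1D absorber split into two domains (a deliberately trivial sketch; real DD-MC codes exchange particle banks between parallel ranks and tally far more than leakage):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy 1D slab, absorption only, split into two spatial domains [0,1) and [1,2).
sigma_t = 1.0                      # total (here purely absorbing) cross-section
n_particles = 100_000

# All particles born at x = 0, moving in +x; flight lengths are exponential.
flight = rng.exponential(1.0 / sigma_t, size=n_particles)

# Domain 0 processes its particles; boundary crossers go into a bank for domain 1.
absorbed_d0 = flight < 1.0
bank_to_d1 = np.sum(~absorbed_d0)

# Domain 1 receives the bank; remaining flight is re-sampled (memoryless property).
flight2 = rng.exponential(1.0 / sigma_t, size=bank_to_d1)
absorbed_d1 = flight2 < 1.0
leaked = np.sum(~absorbed_d1)

# Analytic check: P(reach x = 2) = exp(-sigma_t * 2)
print(f"MC leakage fraction: {leaked / n_particles:.4f}")
print(f"analytic exp(-2)   : {np.exp(-2):.4f}")
```

The sketch shows why decomposition is exact for this process: banking and re-sampling at the boundary reproduces the undecomposed answer, while letting each domain hold only its own geometry and tallies in memory.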

  6. MODEL OF METHODS OF FORMING BIOLOGICAL PICTURE OF THE WORLD OF SECONDARY SCHOOL PUPILS

    Directory of Open Access Journals (Sweden)

    Mikhail A. Yakunchev

    2016-12-01

    Full Text Available Introduction: the problem of development of a model of methods of forming the biological picture of the world of pupils as a multicomponent and integrative expression of the complete educational process is considered in the article. It is stated that the results of the study have theoretical and practical importance for effective subject preparation of senior pupils based on acquiring of systematic and generalized knowledge about wildlife. The correspondence of the main idea of the article to the scientific profile of the journal “Integration of Education” determines the choice of the periodical for publication. Materials and Methods: the results of the analysis of materials on modeling of the educational process, on specific models of the formation of a complete comprehension of the scientific picture of the world and its biological component make it possible to suggest a lack of elaboration of the aspect of pedagogical research under study. Therefore, the search for methods to overcome these gaps and to substantiate a particular model, relevant for its practical application by a teacher, is important. The study was based on the use of methods of theoretical level, including the analysis of pedagogical and methodological literature, modeling and generalized expression of the model of forming the biological picture of the world of secondary school senior pupils, which were of higher priority. Results: the use of models of organization of subject preparation of secondary school pupils takes a priority position, as they help to achieve the desired results of training, education and development. The model of methods of forming a biological picture of the world is represented as a theoretical construct in the unity of objective, substantive, procedural, diagnostic and effective blocks. Discussion and Conclusions: in a generalized form the article expresses the model of methods of forming the biological picture of the world of secondary school

  7. A method for energy and exergy analyses of product transformation processes in industry

    International Nuclear Information System (INIS)

    Abou Khalil, B.

    2008-12-01

After a literature survey establishing the advantages and drawbacks of existing methods for assessing the potential energy gains of an industrial site, this research report presents a newly developed method named Energy and Exergy Analysis of Transformation Processes (AEEP, for Analyse energetique et exergetique des procedes de transformation), applied to actual industrial operations in order to demonstrate its systematic character. The different steps of the method are presented and detailed; one of them, the process analysis, is critical for the application of the method. This step is then applied to several industrial unit operations, both as a basis for future energy audits in the industry sectors concerned and to demonstrate its generic and systematic character. The method is then applied as a whole to a cheese manufacturing plant, covering all the steps of the AEEP. The author demonstrates that AEEP is a systematic method that can be applied at all energy audit levels, including the lowest levels, at a relatively low cost.
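Exergy analysis of the kind AEEP performs rests on the flow-exergy relation ex = (h - h0) - T0 (s - s0) evaluated against a dead state. A sketch for an incompressible liquid stream with constant cp (all values illustrative, not from the report):

```python
import math

# Physical (flow) exergy of an incompressible liquid stream relative to a
# dead state at T0, using ex = (h - h0) - T0*(s - s0) with a constant-cp model.
cp, T0 = 4.18, 298.15             # kJ/(kg*K) and dead-state temperature (K); illustrative

def specific_flow_exergy(T):
    """Specific exergy (kJ/kg) of a liquid stream at temperature T in kelvin."""
    return cp * (T - T0) - T0 * cp * math.log(T / T0)

print(f"80 degC stream: {specific_flow_exergy(353.15):.1f} kJ/kg")
```

The exergy is far smaller than the enthalpy difference cp(T - T0), which is exactly the gap between energy and exergy accounting that motivates the method.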

  8. Quantitative methods for analysing cumulative effects on fish migration success: a review.

    Science.gov (United States)

    Johnson, J E; Patterson, D A; Martins, E G; Cooke, S J; Hinch, S G

    2012-07-01

It is often recognized, but seldom addressed, that a quantitative assessment of the cumulative effects, both additive and non-additive, of multiple stressors on fish survival would provide a more realistic representation of the factors that influence fish migration. This review presents a compilation of analytical methods applied to a well-studied fish migration, a more general review of quantitative multivariable methods, and a synthesis on how to apply new analytical techniques in fish migration studies. A compilation of adult migration papers from Fraser River sockeye salmon Oncorhynchus nerka revealed a limited number of multivariable methods being applied and the sub-optimal reliance on univariable methods for multivariable problems. The literature review of fisheries science, general biology and medicine identified a large number of alternative methods for dealing with cumulative effects, with a limited number of techniques being used in fish migration studies. An evaluation of the different methods revealed that certain classes of multivariable analyses will probably prove useful in future assessments of cumulative effects on fish migration. This overview and evaluation of quantitative methods gathered from the disparate fields should serve as a primer for anyone seeking to quantify cumulative effects on fish migration survival.
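One class of multivariable methods suited to cumulative effects is regression with interaction terms, where a non-zero interaction coefficient indicates a non-additive joint effect of two stressors. A sketch on simulated data (the stressors, effect sizes and survival model are invented for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 5000
temp = rng.normal(0, 1, n)        # standardized water temperature (stressor 1)
flow = rng.normal(0, 1, n)        # standardized river discharge (stressor 2)

# Simulate survival with additive effects plus a non-additive interaction term.
logit = 1.0 - 0.8 * temp - 0.5 * flow - 0.6 * temp * flow
survived = rng.random(n) < 1 / (1 + np.exp(-logit))

# Fitting with an explicit interaction column recovers the non-additive effect.
X = np.column_stack([temp, flow, temp * flow])
model = LogisticRegression().fit(X, survived)
b_temp, b_flow, b_interact = model.coef_[0]
print(f"temp {b_temp:+.2f}, flow {b_flow:+.2f}, temp x flow {b_interact:+.2f}")
```

A univariable analysis of either stressor alone would miss the interaction entirely, which is the shortfall the review identifies in the migration literature.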

  9. Development and verification of an efficient spatial neutron kinetics method for reactivity-initiated event analyses

    International Nuclear Information System (INIS)

    Ikeda, Hideaki; Takeda, Toshikazu

    2001-01-01

A space/time nodal diffusion code based on the nodal expansion method (NEM), EPISODE, was developed in order to evaluate transient neutron behavior in light water reactor cores. The present code employs the improved quasistatic (IQS) method for spatial neutron kinetics, and the neutron flux distribution is obtained numerically by solving the neutron diffusion equation with a nonlinear iteration scheme to achieve fast computation. A predictor-corrector (PC) method developed in the present study made it possible to apply a coarser time mesh to the transient spatial neutron calculation than is applicable in the conventional IQS model, further improving computational efficiency. This computational advantage was demonstrated by application to numerical benchmark problems simulating reactivity-initiated events, showing reductions in computational time of up to a factor of three relative to the conventional IQS. A thermohydraulics model was also incorporated in EPISODE, and the capability for realistic reactivity event analyses was verified using the SPERT-III/E-Core experimental data. (author)
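The reactivity-initiated events such codes target are classically described by the point reactor kinetics equations, which quasistatic methods like IQS factorize around. A one-delayed-group sketch with SciPy (parameters are illustrative, not EPISODE's):

```python
import numpy as np
from scipy.integrate import solve_ivp

# One-delayed-group point reactor kinetics (illustrative parameters):
#   dn/dt = (rho - beta)/Lambda * n + lam * C
#   dC/dt = beta/Lambda * n - lam * C
beta, Lambda, lam = 0.0065, 1e-4, 0.08
rho = 0.001                        # step reactivity insertion, well below prompt critical

def kinetics(t, y):
    n, C = y
    return [(rho - beta) / Lambda * n + lam * C,
            beta / Lambda * n - lam * C]

y0 = [1.0, beta / (Lambda * lam)]  # equilibrium precursor concentration at n = 1
sol = solve_ivp(kinetics, (0.0, 0.1), y0, method="Radau", rtol=1e-8, atol=1e-10)

n_end = sol.y[0, -1]
prompt_jump = beta / (beta - rho)  # prompt-jump approximation for the early plateau
print(f"n(0.1 s) = {n_end:.3f}, prompt-jump estimate = {prompt_jump:.3f}")
```

The stiff fast/slow split visible here (a millisecond prompt transient followed by slow delayed-neutron growth) is exactly what IQS exploits by separating amplitude and shape time scales.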

  10. Improved Method for In Vitro Secondary Amastigogenesis of Trypanosoma cruzi: Morphometrical and Molecular Analysis of Intermediate Developmental Forms

    Directory of Open Access Journals (Sweden)

    L. A. Hernández-Osorio

    2010-01-01

Trypanosoma cruzi undergoes a biphasic life cycle that consists of four alternating developmental stages. In vitro conditions for obtaining synchronous transformation and efficient yields of pure intermediate forms (IFs), which are indispensable for further biochemical, biological, and molecular studies, have not been reported. In the present study, we established an improved method to obtain IFs from secondary amastigogenesis. During the transformation kinetics, we observed progressive decreases in the size of the parasite body, undulating membrane and flagellum, concomitant with nucleus remodeling and kinetoplast displacement. In addition, a gradual reduction in parasite movement and acquisition of the amastigote-specific Ssp4 antigen were observed. Therefore, our results showed that the in vitro conditions used yielded large quantities of highly synchronous and pure IFs that were clearly distinguished by morphometrical and molecular analyses. Obtaining these IFs represents a first step towards understanding the molecular mechanisms involved in amastigogenesis.

  11. Improvement of a separation method for the reduction of secondary waste from the water-jet abrasive suspension cutting technique

    International Nuclear Information System (INIS)

    Brandauer, M.; Gentes, S.; Heneka, A.; Krauss, C.O.; Geckeis, H.; Plaschke, M.; Schild, D.; Tobie, W.

    2017-01-01

    microscopic analysis. In addition, the microscopic analyses showed that abrasive particles are considerably larger than steel particles. Therefore, the grade of separation is related to the particle size distribution. For this reason, a measurement device was installed to measure the particle size distribution during the separation process. Based on the particle size distribution a good estimate of the separation grade could be given during the separation process. However, further optimization and additional understanding of the separation process is still needed to improve the performance of the setup. This is the main goal of the subsequent project MASK ('Magnetic Separation of granular mixtures from water-jet cutting to minimize secondary waste from the decommissioning of nuclear facilities', funded by BMBF), i.e. the improvement of the separation grade. For this purpose, a numerical simulation of the fluid flow inside the magnetic filter unit will be performed. Furthermore, experimental studies using radioactive abrasive-steel grain mixtures are envisaged, to demonstrate the applicability of the method to real radioactive contaminated secondary waste. For this purpose, a small scale separation device for the use in a radioactive controlled area will be constructed and tested. In this presentation the MASK separation device and the experimental procedures will be explained. In addition, new results obtained with the NENAWAS separation device will be presented. (authors)

  12. Methods in pharmacoepidemiology: a review of statistical analyses and data reporting in pediatric drug utilization studies.

    Science.gov (United States)

    Sequi, Marco; Campi, Rita; Clavenna, Antonio; Bonati, Maurizio

    2013-03-01

To evaluate the quality of data reporting and the statistical methods used in drug utilization studies in the pediatric population, drug utilization studies evaluating all drug prescriptions to children and adolescents published between January 1994 and December 2011 were retrieved and analyzed. For each study, information on measures of exposure/consumption, the covariates considered, descriptive and inferential analyses, statistical tests, and methods of data reporting was extracted. An overall quality score was created for each study using a 12-item checklist that took into account the presence of outcome measures, covariates of measures, descriptive measures, statistical tests, and graphical representation. A total of 22 studies were reviewed and analyzed. Of these, 20 studies reported at least one descriptive measure. The mean was the most commonly used measure (18 studies), but only five of these also reported the standard deviation. Statistical analyses were performed in 12 studies, with the chi-square test being the most commonly performed. Graphs were presented in 14 papers. Sixteen papers reported the number of drug prescriptions and/or packages, and ten reported the prevalence of drug prescription. The mean quality score was 8 (median 9). Only seven of the 22 studies received a score of ≥10, while four studies received a lower score; few studies applied statistical methods and reported data in a satisfactory manner. We therefore conclude that the methodology of drug utilization studies needs to be improved.
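A 12-item checklist score like the one described reduces to counting satisfied items per study. A sketch (the item names below are invented for illustration; the paper's actual checklist items are not reproduced here):

```python
# Hypothetical 12-item checklist in the spirit of the review (items invented).
CHECKLIST = [
    "outcome_measure", "prevalence_reported", "prescription_counts",
    "covariate_age", "covariate_sex", "covariate_region",
    "mean_reported", "sd_reported", "median_reported",
    "statistical_test", "confidence_intervals", "graphical_representation",
]

def quality_score(study: dict) -> int:
    """One point per checklist item the study satisfies (maximum 12)."""
    return sum(1 for item in CHECKLIST if study.get(item, False))

study = {"outcome_measure": True, "mean_reported": True, "sd_reported": False,
         "statistical_test": True, "graphical_representation": True}
print(quality_score(study))  # 4 items satisfied
```

Scoring every reviewed study this way yields the distribution (mean, median, counts above a threshold) that the abstract summarizes.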

  13. Assessing the Risk of Secondary Transfer Via Fingerprint Brush Contamination Using Enhanced Sensitivity DNA Analysis Methods.

    Science.gov (United States)

    Bolivar, Paula-Andrea; Tracey, Martin; McCord, Bruce

    2016-01-01

Experiments were performed to determine the extent of cross-contamination of DNA resulting from secondary transfer due to fingerprint brushes used on multiple items of evidence. Both standard and low copy number (LCN) STR analyses were performed. Two different procedures were used to enhance sensitivity: post-PCR cleanup and an increased cycle number. Under standard STR typing procedures, some additional alleles were produced that were not present in the controls or blanks; however, there were insufficient data to include the contaminant donor as a contributor. Inclusion of the contaminant donor did occur for one sample using post-PCR cleanup. Detection of the contaminant donor occurred for every replicate of the 31-cycle amplifications; however, using LCN interpretation recommendations for consensus profiles, only one sample would include the contaminant donor. Our results indicate that detection of secondary transfer of DNA can occur through fingerprint brush contamination and is enhanced using LCN-DNA methods.
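The LCN consensus-profile recommendations mentioned typically retain an allele only if it appears in a minimum number of replicate amplifications, which suppresses sporadic drop-in such as a transferred contaminant allele. A sketch of that rule (locus data invented):

```python
from collections import Counter

def consensus_profile(replicates, min_count=2):
    """LCN-style consensus: keep an allele only if it appears in at least
    `min_count` replicate amplifications (a common interpretation rule)."""
    counts = Counter(allele for rep in replicates for allele in set(rep))
    return sorted(a for a, c in counts.items() if c >= min_count)

# Three replicate amplifications at one STR locus; "17" is a drop-in seen once.
replicates = [{"14", "16"}, {"14", "16", "17"}, {"14"}]
print(consensus_profile(replicates))  # ['14', '16']
```

This is why, in the study, contaminant alleles detected in every 31-cycle replicate still led to only one consensus inclusion: the rule demands repeatability across replicates, not a single detection.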

  14. Computational methods, tools and data for nuclear analyses of fusion technology systems

    International Nuclear Information System (INIS)

    Fischer, U.

    2006-01-01

    An overview is presented of the Research and Development work conducted at Forschungszentrum Karlsruhe in co-operation with other associations in the framework of the European Fusion Technology Programme on the development and qualification of computational tools and data for nuclear analyses of Fusion Technology systems. The focus is on the development of advanced methods and tools based on the Monte Carlo technique for particle transport simulations, and the evaluation and qualification of dedicated nuclear data to satisfy the needs of the ITER and the IFMIF projects. (author)

  15. Standard method for economic analyses of inertial confinement fusion power plants

    International Nuclear Information System (INIS)

    Meier, W.R.

    1986-01-01

    A standard method for calculating the total capital cost and the cost of electricity for a typical inertial confinement fusion electric power plant has been developed. A standard code of accounts at the two-digit level is given for the factors making up the total capital cost of the power plant. Equations are given for calculating the indirect capital costs, the project contingency, and the time-related costs. Expressions for calculating the fixed charge rate, which is necessary to determine the cost of electricity, are also described. Default parameters are given to define a reference case for comparative economic analyses
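The cost-of-electricity calculation described follows the standard fixed-charge-rate form, COE = (fixed charge rate × total capital cost + annual O&M) / annual net generation. The sketch below uses that generic relationship with purely illustrative numbers, not the report's default parameters:

```python
def cost_of_electricity(capital_cost, fixed_charge_rate, annual_om, annual_mwh):
    """Levelized cost of electricity in $/MWh from a fixed-charge-rate model."""
    annual_capital_charge = fixed_charge_rate * capital_cost
    return (annual_capital_charge + annual_om) / annual_mwh

# Illustrative numbers only (not the report's reference case):
# $3e9 total capital cost, 10% fixed charge rate, $60e6/yr O&M,
# 1000 MWe at a 75% capacity factor.
mwh = 1000 * 8760 * 0.75
coe = cost_of_electricity(3e9, 0.10, 60e6, mwh)
print(round(coe, 2))  # $/MWh
```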

  16. Methods to reduce intraocular pressure on secondary glaucoma after severe eye burns

    Directory of Open Access Journals (Sweden)

    A. V. Solovieva

    2014-07-01

    Purpose: To present the results of treatment of secondary glaucoma after severe eye burns. Methods: We observed 70 patients (108 eyes) with severe eye burns and their consequences; secondary glaucoma was observed in 40 patients (58 eyes). All patients with secondary glaucoma received conventional antihypertensive therapy and, when it failed, underwent antiglaucomatous surgery. Cataract extraction was performed in 24 cases, 16 of them in combination with other surgery: reconstruction of the anterior chamber, penetrating keratoplasty, sinustrabeculectomy, or diode laser cyclocoagulation. Diode laser cyclocoagulation was performed 42 times, 8 of them in combination with other antiglaucomatous surgery: cataract surgery or reconstruction of the anterior chamber. Sinustrabeculectomy was performed in 7 cases, 4 of them with a collagen drainage implant. Ahmed glaucoma drainage implantation was performed in 5 cases. Results: In 23 of 58 eyes (39.6%), long-term compensation of IOP was achieved with antihypertensive therapy without surgery. After cataract extraction, stable IOP compensation was achieved in 10 cases, temporary compensation (1 to 42 months) in 11 cases, and no IOP reduction in 2 cases. After diode laser cyclocoagulation, stable normalization of IOP occurred in 16 cases, temporary normalization (from 1 month to 2 years) in 20 cases, and no reduction in 4 cases. After sinustrabeculectomy, IOP decreased in 4 cases; in one case no hypotensive effect was obtained. After Ahmed glaucoma valve implantation, stable normalization of IOP was achieved in 2 cases and temporary normalization in 2 cases; in 1 case endophthalmitis developed and the device was removed. Conclusion: The immediate effect of antiglaucomatous treatment was 96.6%, but the high incidence of IOP decompensation (73.7%) suggests the need for continuous follow-up of patients after severe eye burn injury and readiness to use other methods to reduce IOP.

  18. An axisymmetric method of creep analysis for primary and secondary creep

    International Nuclear Information System (INIS)

    Jahed, Hamid; Bidabadi, Jalal

    2003-01-01

    A general axisymmetric method for elastic-plastic analysis was previously proposed by Jahed and Dubey [ASME J Pressure Vessels Technol 119 (1997) 264]. In the present work the method is extended to the time domain. General rate type governing equations are derived and solved in terms of rate of change of displacement as a function of rate of change in loading. Different types of loading, such as internal and external pressure, centrifugal loading and temperature gradient, are considered. To derive specific equations and employ the proposed formulation, the problem of an inhomogeneous non-uniform rotating disc is worked out. Primary and secondary creep behaviour is predicted using the proposed method and results are compared to FEM results. The problem of creep in pressurized vessels is also solved. Several numerical examples show the effectiveness and robustness of the proposed method
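The primary and secondary creep behaviour discussed above can be illustrated with a common phenomenological strain law: a saturating primary term plus a constant secondary rate. The functional form and parameters below are generic assumptions for illustration, not the paper's constitutive model:

```python
import math

def creep_strain(t, eps_p=0.002, tau=100.0, rate_s=1e-6):
    """Phenomenological primary + secondary creep strain at time t.

    eps_p * (1 - exp(-t/tau)) is a saturating primary-creep term;
    rate_s * t adds a constant secondary (steady-state) creep rate.
    All parameters are illustrative, not taken from the paper.
    """
    return eps_p * (1.0 - math.exp(-t / tau)) + rate_s * t

# Early on the primary term dominates; at long times the strain rate
# tends to the steady secondary value rate_s.
print(creep_strain(10.0), creep_strain(1000.0))
```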

  19. Forward-Weighted CADIS Method for Variance Reduction of Monte Carlo Reactor Analyses

    International Nuclear Information System (INIS)

    Wagner, John C.; Mosher, Scott W.

    2010-01-01

    Current state-of-the-art tools and methods used to perform 'real' commercial reactor analyses use high-fidelity transport codes to produce few-group parameters at the assembly level for use in low-order methods applied at the core level. Monte Carlo (MC) methods, which allow detailed and accurate modeling of the full geometry and energy details and are considered the 'gold standard' for radiation transport solutions, are playing an ever-increasing role in correcting and/or verifying the several-decade-old methodology used in current practice. However, the prohibitive computational requirements associated with obtaining fully converged system-wide solutions restrict the role of MC to benchmarking deterministic results at a limited number of state-points for a limited number of relevant quantities. A goal of current research at Oak Ridge National Laboratory (ORNL) is to change this paradigm by enabling the direct use of MC for full-core reactor analyses. The most significant of the many technical challenges that must be overcome is the slow non-uniform convergence of system-wide MC estimates and the memory requirements associated with detailed solutions throughout a reactor (problems involving hundreds of millions of different material and tally regions due to fuel irradiation, temperature distributions, and the needs associated with multi-physics code coupling). To address these challenges, research has focused on development in the following two areas: (1) a hybrid deterministic/MC method for determining high-precision fluxes throughout the problem space in k-eigenvalue problems and (2) an efficient MC domain-decomposition algorithm that partitions the problem phase space onto multiple processors for massively parallel systems, with statistical uncertainty estimation. The focus of this paper is limited to the first area mentioned above. It describes the FW-CADIS method applied to variance reduction of MC reactor analyses and provides initial results for calculating

  20. A New Optimization Method for Centrifugal Compressors Based on 1D Calculations and Analyses

    Directory of Open Access Journals (Sweden)

    Pei-Yuan Li

    2015-05-01

    This paper presents an optimization design method for centrifugal compressors based on one-dimensional calculations and analyses. It consists of two parts: (1) centrifugal compressor geometry optimization based on one-dimensional calculations and (2) matching optimization of the vaned diffuser with an impeller based on the required throat area. A low-pressure-stage centrifugal compressor in an MW-class gas turbine is optimized by this method. One-dimensional calculation results show that D3/D2 is too large in the original design, resulting in low efficiency of the entire stage. Based on the one-dimensional optimization results, the geometry of the diffuser has been redesigned: the outlet diameter of the vaneless diffuser has been reduced, and the original single-stage diffuser has been replaced by a tandem vaned diffuser. After optimization, the entire stage pressure ratio is increased by approximately 4%, and the efficiency is increased by approximately 2%.

  1. Analyses of Dynamics in Dairy Products and Identification of Lactic Acid Bacteria Population by Molecular Methods

    Directory of Open Access Journals (Sweden)

    Aytül Sofu

    2017-01-01

    Lactic acid bacteria (LAB), which occupy diverse ecological niches, are widely found in fermented meat, vegetables, dairy products and cereals as well as in fermented beverages. They are the most important group of bacteria in the dairy industry because of their probiotic characteristics and their role as starter cultures in fermentation. For the taxonomy of lactic acid bacteria, rep-PCR analysis of repetitive sequences based on the 16S ribosomal RNA (rRNA) gene sequence makes it possible to conduct structural microbial community analyses such as Restriction Fragment Length Polymorphism (RFLP) analysis of DNA fragments of different sizes cut with enzymes, Random Amplified Polymorphic DNA (RAPD) analysis of DNA amplified randomly at low annealing temperatures, and Amplified Fragment Length Polymorphism (AFLP-PCR) analysis of digested genomic DNA. In recent years, culture-independent molecular methods such as Pulsed Field Gel Electrophoresis (PFGE), Denaturing Gradient Gel Electrophoresis (DGGE), Temperature Gradient Gel Electrophoresis (TGGE) and Fluorescence In Situ Hybridization (FISH) have replaced the classical methods once used for the identification of LAB. Pyrosequencing and other Next Generation Sequencing (NGS) techniques are likely to become among the most important culture-independent identification methods in the future. This paper reviews molecular-method based studies conducted on the identification of LAB species in dairy products.

  2. The multiple imputation method: a case study involving secondary data analysis.

    Science.gov (United States)

    Walani, Salimah R; Cleland, Charles M

    2015-05-01

    To illustrate with the example of a secondary data analysis study the use of the multiple imputation method to replace missing data. Most large public datasets have missing data, which need to be handled by researchers conducting secondary data analysis studies. Multiple imputation is a technique widely used to replace missing values while preserving the sample size and sampling variability of the data. The 2004 National Sample Survey of Registered Nurses. The authors created a model to impute missing values using the chained equation method. They used imputation diagnostics procedures and conducted regression analysis of imputed data to determine the differences between the log hourly wages of internationally educated and US-educated registered nurses. The authors used multiple imputation procedures to replace missing values in a large dataset with 29,059 observations. Five multiple imputed datasets were created. Imputation diagnostics using time series and density plots showed that imputation was successful. The authors also present an example of the use of multiple imputed datasets to conduct regression analysis to answer a substantive research question. Multiple imputation is a powerful technique for imputing missing values in large datasets while preserving the sample size and variance of the data. Even though the chained equation method involves complex statistical computations, recent innovations in software and computation have made it possible for researchers to conduct this technique on large datasets. The authors recommend nurse researchers use multiple imputation methods for handling missing data to improve the statistical power and external validity of their studies.
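Chained-equation multiple imputation of the kind described above can be sketched with scikit-learn's IterativeImputer (an assumption for illustration; the study itself used different software, and the data here are synthetic). Each of the five imputations draws from the posterior so the imputed datasets reflect sampling variability:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
X[rng.random(X.shape) < 0.1] = np.nan  # ~10% of values missing at random

# Five imputed datasets, as in the study; a different seed per imputation
# makes each draw from the posterior distinct.
imputed = [
    IterativeImputer(sample_posterior=True, random_state=s).fit_transform(X)
    for s in range(5)
]
print(len(imputed), imputed[0].shape)  # -> 5 (200, 3)
```

In practice, the substantive model (here, the wage regression) would be fit on each imputed dataset and the estimates pooled with Rubin's rules.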

  3. Novel citation-based search method for scientific literature: application to meta-analyses.

    Science.gov (United States)

    Janssens, A Cecile J W; Gwinn, M

    2015-10-13

    Finding eligible studies for meta-analysis and systematic reviews relies on keyword-based searching as the gold standard, despite its inefficiency. Searching based on direct citations is not sufficiently comprehensive. We propose a novel strategy that ranks articles on their degree of co-citation with one or more "known" articles before reviewing their eligibility. In two independent studies, we aimed to reproduce the results of literature searches for sets of published meta-analyses (n = 10 and n = 42). For each meta-analysis, we extracted co-citations for the randomly selected 'known' articles from the Web of Science database, counted their frequencies and screened all articles with a score above a selection threshold. In the second study, we extended the method by retrieving direct citations for all selected articles. In the first study, we retrieved 82% of the studies included in the meta-analyses while screening only 11% as many articles as were screened for the original publications. Articles that we missed were published in non-English languages, published before 1975, published very recently, or available only as conference abstracts. In the second study, we retrieved 79% of included studies while screening half the original number of articles. Citation searching appears to be an efficient and reasonably accurate method for finding articles similar to one or more articles of interest for meta-analysis and reviews.
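The ranking step of the proposed strategy can be sketched as counting, for each candidate article, the papers that cite it together with at least one "known" article, then screening candidates whose score exceeds a threshold. The data structures below are assumptions for illustration:

```python
from collections import Counter

def cocitation_scores(known_articles, citing_papers):
    """Score candidate articles by co-citation with a set of known articles.

    citing_papers: iterable of reference lists, one list per citing paper.
    A candidate gains one point for each paper that cites it together
    with at least one known article.
    """
    known = set(known_articles)
    scores = Counter()
    for refs in citing_papers:
        refs = set(refs)
        if refs & known:  # this paper co-cites at least one known article
            scores.update(refs - known)
    return scores

# Toy citation data: "X" is co-cited twice with known article "A".
papers = [["A", "X", "Y"], ["A", "X"], ["B", "Z"], ["Q", "R"]]
scores = cocitation_scores(["A", "B"], papers)
print(scores.most_common(2))
```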

  4. Do pregnant women prefer timing of elective cesarean section prior to versus after 39 weeks of gestation? Secondary analyses from a randomized controlled trial.

    Science.gov (United States)

    Glavind, J; Henriksen, T B; Kindberg, S F; Uldbjerg, N

    2014-11-01

    To evaluate women's preferences for timing of elective cesarean section (ECS) scheduled prior to versus after 39 completed weeks. Secondary analyses from a randomized controlled open-label trial were conducted at seven Danish tertiary hospitals from March 2009 to June 2011, with inclusion of singleton pregnant women with a healthy fetus. The women were allocated by a computerized telephone system to ECS scheduled at 38(+3) weeks or 39(+3) weeks of gestation. Dissatisfaction with the timing of ECS and the preferred timing of the procedure in a proposed future ECS delivery were evaluated. Data analyses were done by intention-to-treat, using logistic regression. A total of 1196 women (94%) completed an online questionnaire at follow-up eight weeks postpartum. In the 38 weeks group, 61 of 601 women (10%) were dissatisfied with the timing of their ECS, whereas in the 39 weeks group 157 of 595 (26%) were dissatisfied (adjOR 3.18, 95% CI 2.30; 4.40). The proportions of women who preferred the same timing in a future ECS were 272 (45%) in the 38 weeks group and 232 (39%) in the 39 weeks group (adjOR 0.75, 95% CI 0.60; 0.95). The women in this trial preferred ECS scheduled prior to 39 weeks of gestation.
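From the counts reported above (61/601 dissatisfied in the 38 weeks group vs. 157/595 in the 39 weeks group), a crude (unadjusted) odds ratio with a Wald 95% CI can be recomputed as a sketch; the trial's adjOR of 3.18 additionally adjusts for covariates, so the values differ slightly:

```python
import math

def odds_ratio_ci(a, n1, b, n2, z=1.96):
    """Crude odds ratio of group 1 vs. group 2 with a Wald 95% CI.

    a events out of n1 in group 1; b events out of n2 in group 2.
    """
    c, d = n1 - a, n2 - b
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Dissatisfaction in the 39-weeks group relative to the 38-weeks group.
or_, lo, hi = odds_ratio_ci(157, 595, 61, 601)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```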

  5. Perceived Effectiveness of Identified Methods and Techniques Teachers Adopt in Prose Literature Lessons in some Secondary Schools in Owerri

    Directory of Open Access Journals (Sweden)

    F. O. Ezeokoli

    2016-07-01

    The study determined the methods adopted by teachers in prose literature-in-English classrooms, the activities of teachers and students, and teachers' perceived effectiveness of the techniques used. It also examined the objectives of teaching prose literature that teachers should address and the extent to which teachers believe in student-identified difficulties of studying prose literature. The study adopted the descriptive survey research design. A purposive sampling technique was used to select 85 schools in the Owerri metropolis, and in each school all literature teachers of senior secondary I and II were involved. In all, 246 literature teachers participated, of whom 15 were purposively selected for observation. The two instruments were the Teachers' Questionnaire (r = 0.87) and the Classroom Observation Schedule (r = 0.73). Data were analysed using frequency counts and percentages. Results revealed that teachers adopted lecture (28.4%), reading (10.9%) and discussion (7.3%) methods. Teachers' activities during the lesson included giving background information, summarizing, dictating notes, reading aloud, explaining and asking questions. The adopted techniques included questioning, oral reading, silent reading and discussion. Teachers perceived questioning as the most effective technique, followed by debating and summarizing. Teachers identified the development of students' critical faculties, analytical skills, literary appreciation and language skills as being of utmost concern. It was concluded that the methods adopted by teachers are not diverse enough to cater for the needs and backgrounds of students. Keywords: Methods, Techniques, Perceived Effectiveness, Objectives, Literature-in-English

  6. Structural validity of the Wechsler Intelligence Scale for Children-Fifth Edition: Confirmatory factor analyses with the 16 primary and secondary subtests.

    Science.gov (United States)

    Canivez, Gary L; Watkins, Marley W; Dombrowski, Stefan C

    2017-04-01

    The factor structure of the Wechsler Intelligence Scale for Children-Fifth Edition (WISC-V; Wechsler, 2014a) standardization sample (N = 2,200) was examined using confirmatory factor analyses (CFA) with maximum likelihood estimation for all reported models from the WISC-V Technical and Interpretation Manual (Wechsler, 2014b). Additionally, alternative bifactor models were examined and variance estimates and model-based reliability estimates (ω coefficients) were provided. Results from analyses of the 16 primary and secondary WISC-V subtests found that all higher-order CFA models with 5 group factors (VC, VS, FR, WM, and PS) produced model specification errors where the Fluid Reasoning factor produced negative variance and were thus judged inadequate. Of the 16 models tested, the bifactor model containing 4 group factors (VC, PR, WM, and PS) produced the best fit. Results from analyses of the 10 primary WISC-V subtests also found the bifactor model with 4 group factors (VC, PR, WM, and PS) produced the best fit. Variance estimates from both 16 and 10 subtest based bifactor models found dominance of general intelligence (g) in accounting for subtest variance (except for PS subtests) and large ω-hierarchical coefficients supporting general intelligence interpretation. The small portions of variance uniquely captured by the 4 group factors and low ω-hierarchical subscale coefficients likely render the group factors of questionable interpretive value independent of g (except perhaps for PS). Present CFA results confirm the EFA results reported by Canivez, Watkins, and Dombrowski (2015); Dombrowski, Canivez, Watkins, and Beaujean (2015); and Canivez, Dombrowski, and Watkins (2015). (PsycINFO Database Record (c) 2017 APA, all rights reserved).
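The ω-hierarchical coefficients referred to above follow the standard bifactor formula ω_h = (Σλ_g)² / [(Σλ_g)² + Σ_f (Σλ_f)² + Σθ], i.e. the share of total variance attributable to the general factor. The loadings below are made-up illustrations for a toy four-subtest model, not WISC-V estimates:

```python
def omega_hierarchical(general, group_factors, uniquenesses):
    """Omega-hierarchical: proportion of total variance due to the general factor.

    general: standardized general-factor loadings, one per subtest.
    group_factors: list of lists of loadings, one inner list per group factor.
    uniquenesses: residual (unique) variances, one per subtest.
    """
    g = sum(general) ** 2
    grp = sum(sum(f) ** 2 for f in group_factors)
    return g / (g + grp + sum(uniquenesses))

# Purely illustrative loadings for a 4-subtest toy bifactor model.
g_loads = [0.7, 0.6, 0.65, 0.7]
group = [[0.3, 0.3], [0.2, 0.25]]
uniq = [1 - 0.7**2 - 0.3**2, 1 - 0.6**2 - 0.3**2,
        1 - 0.65**2 - 0.2**2, 1 - 0.7**2 - 0.25**2]
oh = omega_hierarchical(g_loads, group, uniq)
print(round(oh, 3))  # -> 0.736
```

A high ω_h such as this indicates that summed scores predominantly reflect the general factor, which is the pattern the article reports for the WISC-V.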

  7. LITERATURE SEARCH FOR METHODS FOR HAZARD ANALYSES OF AIR CARRIER OPERATIONS.

    Energy Technology Data Exchange (ETDEWEB)

    MARTINEZ - GURIDI,G.; SAMANTA,P.

    2002-07-01

    Representatives of the Federal Aviation Administration (FAA) and several air carriers under Title 14 of the Code of Federal Regulations (CFR) Part 121 developed a system-engineering model of the functions of air-carrier operations. Their analyses form the foundation or basic architecture upon which other task areas are based: hazard analyses, performance measures, and risk indicator design. To carry out these other tasks, models may need to be developed using the basic architecture of the Air Carrier Operations System Model (ACOSM). Since ACOSM encompasses various areas of air-carrier operations and can be used to address different task areas with differing but interrelated objectives, the modeling needs are broad. A literature search was conducted to identify and analyze the existing models that may be applicable for pursuing the task areas in ACOSM. The intent of the literature search was not necessarily to identify a specific model that can be directly used, but rather to identify relevant ones that have similarities with the processes and activities defined within ACOSM. Such models may provide useful inputs and insights in structuring ACOSM models. ACOSM simulates processes and activities in air-carrier operation, but, in a general framework, it has similarities with other industries where attention also has been paid to hazard analyses, emphasizing risk management, and in designing risk indicators. To ensure that efforts in other industries are adequately considered, the literature search includes publications from other industries, e.g., the chemical, nuclear, and process industries. This report discusses the literature search and the relevant methods identified, and provides a preliminary assessment of their use in developing the models needed for the ACOSM task areas. A detailed assessment of the models has not been made. Defining those applicable for ACOSM will need further analyses of both the models and tools identified. The report is organized in four chapters

  8. Application of geostatistical methods to long-term safety analyses for radioactive waste repositories

    International Nuclear Information System (INIS)

    Roehlig, K.J.

    2001-01-01

    Long-term safety analyses are an important part of the design and optimisation process as well as of the licensing procedure for final repositories for radioactive waste in deep geological formations. For selected scenarios describing possible evolutions of the repository system in the post-closure phase, quantitative consequence analyses are performed. Due to the complexity of the phenomena of concern and the large timeframes under consideration, several types of uncertainties have to be taken into account. The modelling work for the far-field (geosphere) surrounding or overlying the repository is based on model calculations concerning the groundwater movement and the resulting migration of radionuclides which may be released from the repository. In contrast to engineered systems, the geosphere shows a strong spatial variability of facies, materials and material properties. The paper presented here describes the first steps towards a quantitative approach for an uncertainty assessment taking this variability into account. Due to the availability of a large amount of data and information of several types, the Gorleben site (Germany) has been used for a case study in order to demonstrate the method. (orig.)

  9. On statistical methods for analysing the geographical distribution of cancer cases near nuclear installations

    International Nuclear Information System (INIS)

    Bithell, J.F.; Stone, R.A.

    1989-01-01

    This paper sets out to show that the epidemiological methods most commonly used can be improved. When analysing geographical data it is necessary to consider location. The most obvious quantification of location is ranked distance, though other measures which may be more meaningful in relation to aetiology may be substituted. A test based on distance ranks, the "Poisson maximum test", depends on the maximum of observed relative risk in regions of increasing size, but with the significance level adjusted for selection. Applying this test to data from Sellafield and Sizewell shows that the excess of leukaemia incidence observed at Seascale, near Sellafield, is not an artefact due to data selection by region, and that the excess probably results from a genuine, if as yet unidentified, cause (there being little evidence of any other locational association once the Seascale cases have been removed). So far as Sizewell is concerned, geographical proximity to the nuclear power station does not seem particularly important. (author)

  10. Statistical parameters of random heterogeneity estimated by analysing coda waves based on finite difference method

    Science.gov (United States)

    Emoto, K.; Saito, T.; Shiomi, K.

    2017-12-01

    Short-period (<2 s) seismograms. We found that the energy of the coda of long-period seismograms shows a spatially flat distribution. This phenomenon is well known in short-period seismograms and results from the scattering by small-scale heterogeneities. We estimate the statistical parameters that characterize the small-scale random heterogeneity by modelling the spatiotemporal energy distribution of long-period seismograms. We analyse three moderate-size earthquakes that occurred in southwest Japan. We calculate the spatial distribution of the energy density recorded by a dense seismograph network in Japan at the period bands of 8-16 s, 4-8 s and 2-4 s and model them by using 3-D finite difference (FD) simulations. Compared to conventional methods based on statistical theories, we can calculate more realistic synthetics by using the FD simulation. It is not necessary to assume a uniform background velocity, body or surface waves and scattering properties considered in general scattering theories. By taking the ratio of the energy of the coda area to that of the entire area, we can separately estimate the scattering and the intrinsic absorption effects. Our result reveals the spectrum of the random inhomogeneity in a wide wavenumber range including the intensity around the corner wavenumber as P(m) = 8πε²a³/(1 + a²m²)², where ε = 0.05 and a = 3.1 km, even though past studies analysing higher-frequency records could not detect the corner. Finally, we estimate the intrinsic attenuation by modelling the decay rate of the energy. The method proposed in this study is suitable for quantifying the statistical properties of long-wavelength subsurface random inhomogeneity, which leads the way to characterizing a wider wavenumber range of spectra, including the corner wavenumber.

  11. Do regional methods really help reduce uncertainties in flood frequency analyses?

    Science.gov (United States)

    Cong Nguyen, Chi; Payrastre, Olivier; Gaume, Eric

    2013-04-01

    Flood frequency analyses are often based on continuous measured series at gauge sites. However, the length of the available data sets is usually too short to provide reliable estimates of extreme design floods. To reduce the estimation uncertainties, the analyzed data sets have to be extended either in time, making use of historical and paleoflood data, or in space, merging data sets considered as statistically homogeneous to build large regional data samples. Nevertheless, the advantage of the regional analyses, the important increase of the size of the studied data sets, may be counterbalanced by the possible heterogeneities of the merged sets. The application and comparison of four different flood frequency analysis methods to two regions affected by flash floods in the south of France (Ardèche and Var) illustrates how this balance between the number of records and possible heterogeneities plays out in real-world applications. The four tested methods are: (1) a local statistical analysis based on the existing series of measured discharges, (2) a local analysis valuing the existing information on historical floods, (3) a standard regional flood frequency analysis based on existing measured series at gauged sites and (4) a modified regional analysis including estimated extreme peak discharges at ungauged sites. Monte Carlo simulations are conducted to simulate a large number of discharge series with characteristics similar to the observed ones (type of statistical distributions, number of sites and records) to evaluate to what extent the results obtained on these case studies can be generalized. These two case studies indicate that even small statistical heterogeneities, which are not detected by the standard homogeneity tests implemented in regional flood frequency studies, may drastically limit the usefulness of such approaches. On the other hand, these results show that the valuation of information on extreme events, either historical flood events at gauged

  12. Comparative analyses reveal discrepancies among results of commonly used methods for Anopheles gambiae molecular form identification

    Directory of Open Access Journals (Sweden)

    Pinto João

    2011-08-01

    Background: Anopheles gambiae M and S molecular forms, the major malaria vectors in the Afro-tropical region, are undergoing a process of ecological diversification and adaptive lineage splitting, which is affecting malaria transmission and vector control strategies in West Africa. These two incipient species are defined on the basis of single nucleotide differences in the IGS and ITS regions of multicopy rDNA located on the X-chromosome. A number of PCR and PCR-RFLP approaches based on form-specific SNPs in the IGS region are used for M and S identification. Moreover, a PCR method to detect the M-specific insertion of a short interspersed transposable element (SINE200) has recently been introduced as an alternative identification approach. However, a large-scale comparative analysis of four widely used PCR or PCR-RFLP genotyping methods for M and S identification had never been carried out to evaluate whether they could be used interchangeably, as commonly assumed. Results: The genotyping of more than 400 A. gambiae specimens from nine African countries, and the sequencing of the IGS amplicon of 115 of them, highlighted discrepancies among results obtained by the different approaches due to several kinds of biases, which may result in an overestimation of M/S putative hybrids, as follows: (i) incorrect match of M- and S-specific primers used in the allele-specific PCR approach; (ii) presence of polymorphisms in the recognition sequence of the restriction enzymes used in the PCR-RFLP approaches; (iii) incomplete cleavage during the restriction reactions; (iv) presence of different copy numbers of M- and S-specific IGS arrays in single individuals in areas of secondary contact between the two forms. Conclusions: The results reveal that the PCR and PCR-RFLP approaches most commonly utilized to identify A. gambiae M and S forms are not fully interchangeable as usually assumed, and highlight limits of the actual definition of the two molecular forms, which might

  13. Constraints on primary and secondary particulate carbon sources using chemical tracer and 14C methods during CalNex-Bakersfield

    Data.gov (United States)

    U.S. Environmental Protection Agency — The present study investigates primary and secondary sources of organic carbon for Bakersfield, CA, USA as part of the 2010 CalNex study. The method used here...

  14. Numerical design and testing of a sound source for secondary calibration of microphones using the Boundary Element Method

    DEFF Research Database (Denmark)

    Cutanda Henriquez, Vicente; Juhl, Peter Møller; Barrera Figueroa, Salvador

    2009-01-01

    Secondary calibration of microphones in free field is performed by placing the microphone under calibration in an anechoic chamber with a sound source, and exposing it to a controlled sound field. A calibrated microphone is also measured as a reference. While the two measurements are usually made...... apart to avoid acoustic interaction. As a part of the project Euromet-792, aiming to investigate and improve methods for secondary free-field calibration of microphones, a sound source suitable for simultaneous secondary free-field calibration has been designed using the Boundary Element Method...... of the Danish Fundamental Metrology Institute (DFM). The design and verification of the source are presented in this communication....

  15. Normalization Methods and Selection Strategies for Reference Materials in Stable Isotope Analyses - Review

    International Nuclear Information System (INIS)

    Skrzypek, G.; Sadler, R.; Paul, D.; Forizs, I.

    2011-01-01

    A stable isotope analyst has to make a number of important decisions about how best to determine the 'true' stable isotope composition of analysed samples in reference to an international scale: which reference materials should be used, how many reference materials and how many repetitions of each standard are most appropriate for a desired level of precision, and which normalization procedure should be selected. In this paper we summarise what is known about the propagation of uncertainties associated with normalization procedures and with the reference materials used as anchors for the determination of 'true' values for δ13C and δ18O. Normalization methods: Several normalization methods transforming the 'raw' value obtained from mass spectrometers to one of the internationally recognized scales have been developed. However, as summarised by Paul et al., different normalization transforms alone may lead to inconsistencies between laboratories. The most common normalization procedures are: single-point anchoring (versus working gas and certified reference standard), modified single-point normalization, linear shift between the measured and the true isotopic composition of two certified reference standards, and two-point and multipoint linear normalization methods. The accuracy of these various normalization methods has been compared by Paul et al. using analytical laboratory data, with the single-point and normalization-versus-tank calibrations producing the largest normalization errors, which also exceed the analytical uncertainty recommended for δ13C. The normalization error depends greatly on the relative difference between the stable isotope composition of the reference material and the sample. On the other hand, normalization methods using two or more certified reference standards produce a smaller normalization error, if the reference materials are bracketing the whole range of
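The two-point normalization the review favours reduces to a short linear mapping. The sketch below uses the accepted VPDB anchor values for the reference materials NBS 19 (+1.95 permil) and LSVEC (-46.6 permil); the 'measured' raw readings and the sample value are hypothetical, for illustration only.

```python
def two_point_normalize(raw, raw_lo, true_lo, raw_hi, true_hi):
    """Map a raw instrument delta value onto the certified scale using a
    linear fit through two reference materials that bracket the samples."""
    slope = (true_hi - true_lo) / (raw_hi - raw_lo)
    return true_lo + slope * (raw - raw_lo)

# Accepted anchor values on the VPDB scale: NBS 19 = +1.95 permil,
# LSVEC = -46.6 permil. The raw readings below are hypothetical.
raw_nbs19, raw_lsvec = 2.10, -46.10
sample_raw = -25.30
sample_true = two_point_normalize(sample_raw, raw_lsvec, -46.6, raw_nbs19, 1.95)
print(round(sample_true, 2))  # normalized delta 13C of the sample
```

By construction the two anchors map exactly onto their certified values, which is why the normalization error shrinks when the anchors bracket the samples.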

  16. Practicable methods for histological section thickness measurement in quantitative stereological analyses.

    Science.gov (United States)

    Matenaers, Cyrill; Popper, Bastian; Rieger, Alexandra; Wanke, Rüdiger; Blutke, Andreas

    2018-01-01

    The accuracy of quantitative stereological analysis tools such as the (physical) disector method substantially depends on the precise determination of the thickness of the analyzed histological sections. One conventional method for measurement of histological section thickness is to re-embed the section of interest vertically to its original section plane. The section thickness is then measured in a subsequently prepared histological section of this orthogonally re-embedded sample. However, the orthogonal re-embedding (ORE) technique is quite work- and time-intensive and may produce inaccurate section thickness measurements due to unintentional, slightly oblique (non-orthogonal) positioning of the re-embedded sample section. Here, an improved ORE method is presented, allowing for determination of the factual section plane angle of the re-embedded section and correction of measured section thickness values for oblique (non-orthogonal) sectioning. For this, the analyzed section is mounted flat on a foil of known thickness (calibration foil) and both the section and the calibration foil are then vertically (re-)embedded. The section angle of the re-embedded section is then calculated from the deviation of the measured section thickness of the calibration foil from its factual thickness, using basic geometry. To find a practicable, fast, and accurate alternative to ORE, the suitability of spectral reflectance (SR) measurement for determination of plastic section thicknesses was evaluated. Using a commercially available optical reflectometer (F20, Filmetrics®, USA), the thicknesses of 0.5 μm thick semi-thin Epon (glycid ether) sections and of 1-3 μm thick plastic sections (glycol methacrylate/methyl methacrylate, GMA/MMA), as regularly used in physical disector analyses, could be measured precisely within a few seconds. Compared to the section thicknesses determined by ORE, SR measurements displayed less than 1% deviation. Our results prove the applicability

  17. Exploratory study on a statistical method to analyse time resolved data obtained during nanomaterial exposure measurements

    International Nuclear Information System (INIS)

    Clerc, F; Njiki-Menga, G-H; Witschger, O

    2013-01-01

    Most of the measurement strategies suggested at the international level to assess workplace exposure to nanomaterials rely on devices measuring airborne particle concentrations (according to different metrics) in real time. Since none of the instruments used to measure aerosols can distinguish a particle of interest from the background aerosol, the statistical analysis of time-resolved data requires special attention. So far, very few approaches have been used for statistical analysis in the literature, ranging from simple qualitative analysis of graphs to the implementation of more complex statistical models. To date, there is still no consensus on a particular approach, and the search continues for an appropriate and robust method. In this context, this exploratory study investigates a statistical method to analyse time-resolved data based on a Bayesian probabilistic approach. To investigate and illustrate the use of this statistical method, particle number concentration data were taken from a workplace study that investigated the potential for exposure via inhalation during clean-out operations, by sandpapering, of a reactor producing nanocomposite thin films. In this workplace study, the background issue was addressed through near-field and far-field approaches, and several size-integrated and time-resolved devices were used. The analysis presented here focuses only on data obtained with two handheld condensation particle counters: one measuring at the source of the released particles, the other measuring in parallel far-field. The Bayesian probabilistic approach allows probabilistic modelling of data series, with the observed task modelled in the form of probability distributions. The probability distributions issuing from time-resolved data obtained at the source can then be compared with the probability distributions issuing from the time-resolved data obtained far-field, leading in a
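The abstract does not give the authors' actual model, but the idea of comparing probability distributions fitted to near-field and far-field count series can be illustrated with a minimal conjugate Poisson-Gamma sketch; the one-second particle counts below are invented, and the flat Gamma(1, 1) prior is an assumption of this sketch.

```python
import random

def posterior_rate_samples(counts, n_draws=10000, seed=0):
    """Monte Carlo draws from the Gamma posterior of a Poisson count rate,
    assuming a Gamma(1, 1) prior (conjugate update: shape += total count,
    rate += number of counting intervals)."""
    rng = random.Random(seed)
    shape = 1 + sum(counts)
    rate = 1 + len(counts)
    return [rng.gammavariate(shape, 1 / rate) for _ in range(n_draws)]

# Hypothetical 1-second counts from the two handheld condensation
# particle counters (near the source vs far-field background).
near_field = [52, 61, 58, 70, 66]
far_field = [31, 28, 35, 30, 33]
near = posterior_rate_samples(near_field)
far = posterior_rate_samples(far_field, seed=1)

# Posterior probability that the near-field rate exceeds the background rate.
p_elevated = sum(n > f for n, f in zip(near, far)) / len(near)
print(p_elevated)
```

Comparing the two posteriors directly, rather than the raw traces, is what lets the background issue be handled probabilistically.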

  18. Can adverse maternal and perinatal outcomes be predicted when blood pressure becomes elevated? Secondary analyses from the CHIPS (Control of Hypertension In Pregnancy Study) randomized controlled trial.

    Science.gov (United States)

    Magee, Laura A; von Dadelszen, Peter; Singer, Joel; Lee, Terry; Rey, Evelyne; Ross, Susan; Asztalos, Elizabeth; Murphy, Kellie E; Menzies, Jennifer; Sanchez, Johanna; Gafni, Amiram; Gruslin, Andrée; Helewa, Michael; Hutton, Eileen; Lee, Shoo K; Logan, Alexander G; Ganzevoort, Wessel; Welch, Ross; Thornton, Jim G; Moutquin, Jean Marie

    2016-07-01

    For women with chronic or gestational hypertension in CHIPS (Control of Hypertension In Pregnancy Study, NCT01192412), we aimed to examine whether clinical predictors collected at randomization could predict adverse outcomes. This was a planned, secondary analysis of data from the 987 women in the CHIPS Trial. Logistic regression was used to examine the impact of 19 candidate predictors on the probability of adverse perinatal (pregnancy loss or high level neonatal care for >48 h, or birthweight hypertension, preeclampsia, or delivery at blood pressure within 1 week before randomization. Continuous variables were represented continuously or dichotomized based on the smaller p-value in univariate analyses. An area-under-the-receiver-operating-curve (AUC ROC) of ≥0.70 was taken to reflect a potentially useful model. Point estimates for AUC ROC were hypertension (0.70, 95% CI 0.67-0.74) and delivery at hypertension develop an elevated blood pressure in pregnancy, or formerly normotensive women develop new gestational hypertension, maternal and current pregnancy clinical characteristics cannot predict adverse outcomes in the index pregnancy. © 2016 The Authors. Acta Obstetricia et Gynecologica Scandinavica published by John Wiley & Sons Ltd on behalf of Nordic Federation of Societies of Obstetrics and Gynecology (NFOG).
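The AUC ROC criterion of >=0.70 used above has a direct rank interpretation (the probability that a randomly chosen case with the adverse outcome scores higher than one without), which can be computed without a statistics library; the labels and risk scores below are illustrative, not CHIPS data.

```python
def auc_roc(labels, scores):
    """Rank-based AUC: fraction of positive/negative pairs in which the
    positive case receives the higher score (ties count as 0.5)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical predicted risks: 1 = adverse outcome occurred, 0 = did not.
labels = [1, 0, 1, 0, 1, 0, 0, 1]
scores = [0.82, 0.40, 0.65, 0.35, 0.55, 0.60, 0.20, 0.75]
print(round(auc_roc(labels, scores), 2))
```

A model with AUC near 0.5 is no better than chance, which is why the authors treat 0.70 as the threshold for a potentially useful model.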

  19. Effect of heating on the behaviors of hydrogen in C-TiC films with auger electron spectroscopy and secondary ion mass spectroscopy analyses

    International Nuclear Information System (INIS)

    Zou, Y.; Wang, L.W.; Huang, N.K.

    2007-01-01

    C-TiC films with a content of 75% TiC were prepared by magnetron sputtering deposition followed by Ar+ ion bombardment. The effect of heating on the behavior of hydrogen in the C-TiC films was studied before and after heating with Auger Electron Spectroscopy and Secondary Ion Mass Spectroscopy (SIMS). SIMS depth profiles of hydrogen after H+ ion implantation and thermal treatment show different hydrogen concentrations in the C-TiC coatings and the stainless steel. SIMS measurements show the existence of TiH, TiH2, CH3, CH4 and C2H2 bonds in the films after H+ ion irradiation, and the changes in the Ti LMM, Ti LMV and C KLL Auger line shapes reveal a good hydrogen retention ability after heating up to 393 K. All the results show that C-TiC coatings can be used as a hydrogen retainer or hydrogen-permeation barrier on stainless steel to protect it from hydrogen embrittlement

  20. Factors predicting the development of pressure ulcers in an at-risk population who receive standardized preventive care: secondary analyses of a multicentre randomised controlled trial.

    Science.gov (United States)

    Demarre, Liesbet; Verhaeghe, Sofie; Van Hecke, Ann; Clays, Els; Grypdonck, Maria; Beeckman, Dimitri

    2015-02-01

    To identify predictive factors associated with the development of pressure ulcers in patients at risk who receive standardized preventive care. Numerous studies have examined factors that predict risk for pressure ulcer development. Only a few studies identified risk factors associated with pressure ulcer development in hospitalized patients receiving standardized preventive care. Secondary analyses of data collected in a multicentre randomized controlled trial. The sample consisted of 610 consecutive patients at risk for pressure ulcer development (Braden Score Pressure ulcers in category II-IV were significantly associated with non-blanchable erythema, urogenital disorders and higher body temperature. Predictive factors significantly associated with superficial pressure ulcers were admission to an internal medicine ward, incontinence-associated dermatitis, non-blanchable erythema and a lower Braden score. Superficial sacral pressure ulcers were significantly associated with incontinence-associated dermatitis. Despite the standardized preventive measures they received, hospitalized patients with non-blanchable erythema, urogenital disorders and a higher body temperature were at increased risk for developing pressure ulcers. Improved identification of at-risk patients can be achieved by taking into account specific predictive factors. Even if preventive measures are in place, continuous assessment and tailoring of interventions is necessary in all patients at risk. Daily skin observation can be used to continuously monitor the effectiveness of the intervention. © 2014 John Wiley & Sons Ltd.

  1. Evaluation of the Tier 1 Program of Project P.A.T.H.S.: Secondary Data Analyses of Conclusions Drawn by the Program Implementers

    Directory of Open Access Journals (Sweden)

    Daniel T. L. Shek

    2008-01-01

    Full Text Available The Tier 1 Program of the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) is a curricula-based positive youth development program. In the experimental implementation phase, 52 schools participated in the program. Based on subjective outcome evaluation data collected from the program participants (Form A) and program implementers (Form B) in each school, the program implementers were invited to write down five conclusions based on an integration of the evaluation findings (N = 52). The conclusions stated in the 52 evaluation reports were further analyzed via secondary data analyses in this paper. Results showed that most of the conclusions concerning perceptions of the Tier 1 Program, instructors, and effectiveness of the programs were positive in nature. There were also conclusions reflecting the respondents’ appreciation of the program. Finally, responses on the difficulties encountered and suggestions for improvement were observed. In conjunction with the previous evaluation findings, the present study suggests that the Tier 1 Program was well received by the stakeholders and beneficial to the development of the program participants.

  2. Evaluation of the Teaching Methods Used in Secondary School Biology Lessons

    Directory of Open Access Journals (Sweden)

    Porozovs Juris

    2015-06-01

    Full Text Available The teacher’s skills in conducting the lesson and choice of teaching methods play an essential role in creating students’ interest in biology. The aim of the research was to study the opinion of secondary school students and biology teachers regarding the most successful teaching methods used in biology lessons and viable options to make biology lessons more interesting. The research comprised polling students and biology teachers from several schools, namely: 2 secondary schools in Jelgava, 2 in Riga and 1 in Vecumnieki. The responses revealed that 58% of students find biology lessons interesting, and 56% of students indicated that their ability to focus attention during biology lessons depends on the task presented to them. Most of all they prefer watching the teacher’s presentations, listening to their teacher telling about the actual topic, as well as performing laboratory work and group work. Many students like participating in discussions, whereas a far smaller number would do various exercises, individual tasks, fill out worksheets or complete projects. Least of all, students wish to work with the textbook. The methods most frequently applied by teachers are lecture, explanation, demonstration, and discussion. Teachers believe that their students prefer laboratory work and discussions as well as listening to their teacher and watching presentations or films. They also indicate the necessity to link theory with practice and to involve information technologies. While teaching their subject, biology teachers try to establish a relationship between theory and real life in order to develop their students’ interest in natural processes.

  3. The viability of alternative assessment methods in the Greek upper secondary school: the oral portfolio

    Directory of Open Access Journals (Sweden)

    Angeliki Daphni

    2012-02-01

    Full Text Available The final examination of the English language subject, in the context of the Greek state upper secondary education, is a traditional paper-and-pencil test which does not include any assessment of oracy skills. This article explores the viability of the oral portfolio as an alternative assessment and pedagogic method that can facilitate the assessment of speaking and listening skills and create a more motivating learning environment. To this effect, three methodological tools were designed, namely, a questionnaire addressing upper secondary English teachers in Greek state schools, a case study involving an oral portfolio implementation and finally, a questionnaire for students to record their experience. The study demonstrates that implementation of the portfolio contributed to a successful assessment of oracy skills and that it was a stimulating experience for students. The results of the study also showed that the pedagogical value of the portfolio counterbalanced its practical constraints. The paper concludes by putting forward recommendations for the future application of this assessment technique in state school education.

  4. Validation of Method Performance of pH, PCO2, PO2, Na(+), K(+) of Cobas b121 ABG Analyser.

    Science.gov (United States)

    Nanda, Sunil Kumar; Ray, Lopamudra; Dinakaran, Asha

    2014-06-01

    The introduction of a new method or new analyser is a common occurrence in the clinical biochemistry laboratory. Blood gas measurements and electrolytes are often performed in Point-of-Care (POC) settings. When a new POC analyser is obtained, its performance should be evaluated by comparison with measurements from the laboratory's reference analyser. The aim was to evaluate the method performance for pH, PCO2, PO2, Na(+) and K(+) of the Cobas b121 ABG analyser. The evaluation was done by comparing the results of 50 patient samples run on the Cobas b121 with the results obtained from the Rapid Lab (reference analyser). The correlation coefficient was calculated from the results obtained from both analysers. Precision was calculated by running Bio-Rad ABG control samples. The correlation coefficient values obtained for the parameters were close to 1.0, indicating good correlation. The CVs obtained for all the parameters were less than 5, indicating good precision. The new ABG analyser, the Cobas b121, correlated well with the reference ABG analyser (Rapid Lab) and could be used for patient samples.
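The two checks reported, method comparison by correlation coefficient and precision as coefficient of variation (CV) from repeated control runs, can be sketched in a few lines; the paired pH results and control values below are hypothetical, not the study's data.

```python
from statistics import mean, pstdev

def pearson_r(x, y):
    """Pearson correlation coefficient between paired measurements."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return cov / var

def cv_percent(values):
    """Coefficient of variation of repeated control measurements, in %."""
    return 100 * pstdev(values) / mean(values)

# Hypothetical paired pH results: new analyser vs reference analyser.
new_analyser = [7.35, 7.42, 7.28, 7.50, 7.31]
reference = [7.36, 7.41, 7.29, 7.49, 7.32]
# Hypothetical repeated K+ control runs, mmol/L.
control_runs = [4.01, 4.03, 3.99, 4.02, 4.00]
print(round(pearson_r(new_analyser, reference), 3), round(cv_percent(control_runs), 2))
```

A correlation close to 1.0 and a CV under the acceptance limit are the two criteria the abstract reports for accepting the new analyser.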

  5. Method of forming components for a high-temperature secondary electrochemical cell

    Science.gov (United States)

    Mrazek, Franklin C.; Battles, James E.

    1983-01-01

    A method of forming a component for a high-temperature secondary electrochemical cell having a positive electrode including a sulfide selected from the group consisting of iron sulfides, nickel sulfides, copper sulfides and cobalt sulfides, a negative electrode including an alloy of aluminum, and an electrically insulating porous separator between said electrodes. The improvement comprises forming a slurry of solid particles dispersed in a liquid electrolyte such as the lithium chloride-potassium chloride eutectic, casting the slurry into a form having the shape of one of the components and smoothing the exposed surface of the slurry, cooling the cast slurry to form the solid component, and removing same. Electrodes and separators can thus be formed.

  6. An ontology-based method for secondary use of electronic dental record data

    Science.gov (United States)

    Schleyer, Titus KL; Ruttenberg, Alan; Duncan, William; Haendel, Melissa; Torniai, Carlo; Acharya, Amit; Song, Mei; Thyvalikakath, Thankam P.; Liu, Kaihong; Hernandez, Pedro

    A key question for healthcare is how to operationalize the vision of the Learning Healthcare System, in which electronic health record data become a continuous information source for quality assurance and research. This project presents an initial, ontology-based, method for secondary use of electronic dental record (EDR) data. We defined a set of dental clinical research questions; constructed the Oral Health and Disease Ontology (OHD); analyzed data from a commercial EDR database; and created a knowledge base, with the OHD used to represent clinical data about 4,500 patients from a single dental practice. Currently, the OHD includes 213 classes and reuses 1,658 classes from other ontologies. We have developed an initial set of SPARQL queries to allow extraction of data about patients, teeth, surfaces, restorations and findings. Further work will establish a complete, open and reproducible workflow for extracting and aggregating data from a variety of EDRs for research and quality assurance. PMID:24303273

  7. An ontology-based method for secondary use of electronic dental record data.

    Science.gov (United States)

    Schleyer, Titus Kl; Ruttenberg, Alan; Duncan, William; Haendel, Melissa; Torniai, Carlo; Acharya, Amit; Song, Mei; Thyvalikakath, Thankam P; Liu, Kaihong; Hernandez, Pedro

    2013-01-01

    A key question for healthcare is how to operationalize the vision of the Learning Healthcare System, in which electronic health record data become a continuous information source for quality assurance and research. This project presents an initial, ontology-based, method for secondary use of electronic dental record (EDR) data. We defined a set of dental clinical research questions; constructed the Oral Health and Disease Ontology (OHD); analyzed data from a commercial EDR database; and created a knowledge base, with the OHD used to represent clinical data about 4,500 patients from a single dental practice. Currently, the OHD includes 213 classes and reuses 1,658 classes from other ontologies. We have developed an initial set of SPARQL queries to allow extraction of data about patients, teeth, surfaces, restorations and findings. Further work will establish a complete, open and reproducible workflow for extracting and aggregating data from a variety of EDRs for research and quality assurance.

  8. Processing method for chemical cleaning liquid on the secondary side of steam generator

    International Nuclear Information System (INIS)

    Nishihara, Yukio; Inagaki, Yuzo.

    1993-01-01

    Upon processing the Fe liquid wastes, mainly comprising nitrilotriacetate (NTA) and Fe, and the Cu liquid wastes, mainly comprising ethylenediamine and Cu, that are generated upon chemical cleaning of the secondary side of a steam generator, the pH of the Fe liquid wastes is first lowered to deposit and separate the NTA. Then, Fe ions in the filtrate are deposited on a cathode by electrolysis, while remaining NTA is decomposed by oxidation at an anode by O2 gas. The Cu liquid wastes are reacted with naphthalene disulfate and Ba ions, and the reaction products are separated by deposition as sludges. Remaining Cu ions in the filtrate are deposited on the cathode by electrolysis. With such procedures, the concentrations of COD (NTA), Fe ions and Cu ions can be greatly reduced. Further, since the capacity of the device can easily be increased in this method, a great amount of liquid wastes can be processed in a relatively short period of time. (T.M.)

  9. Secondary iris recognition method based on local energy-orientation feature

    Science.gov (United States)

    Huo, Guang; Liu, Yuanning; Zhu, Xiaodong; Dong, Hongxing

    2015-01-01

    This paper proposes a secondary iris recognition method based on local features. First, an energy-orientation feature (EOF), extracted from the iris with a two-dimensional Gabor filter, is used for a first recognition pass against a similarity threshold, which divides the whole iris database into two categories: a correctly recognized class and a class still to be recognized. The former are accepted, while the latter are transformed by histogram into an energy-orientation histogram feature (EOHF), which is used for a second recognition pass based on the chi-square distance. Experiments show that, owing to its higher correct recognition rate, the proposed method is among the most efficient and effective of comparable iris recognition algorithms.
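The second-stage matching by chi-square distance between orientation histograms is the standard histogram-comparison formula; the 8-bin histograms below are invented for illustration and are not derived from real iris codes.

```python
def chi_square_distance(h1, h2, eps=1e-10):
    """Chi-square distance between two normalized histograms; 0 for identical
    histograms, larger for more dissimilar ones (eps avoids division by zero)."""
    return 0.5 * sum((a - b) ** 2 / (a + b + eps) for a, b in zip(h1, h2))

# Hypothetical 8-bin energy-orientation histograms.
probe = [0.20, 0.15, 0.10, 0.05, 0.10, 0.15, 0.15, 0.10]
same_eye = [0.19, 0.16, 0.11, 0.05, 0.09, 0.14, 0.16, 0.10]
other_eye = [0.05, 0.05, 0.25, 0.20, 0.20, 0.10, 0.05, 0.10]
print(chi_square_distance(probe, same_eye) < chi_square_distance(probe, other_eye))
```

The genuine match yields the smaller distance, which is the decision rule of the second recognition pass.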

  10. Instructional Methods and Students' End of Term Achievement in Biology in Selected Secondary Schools in Sokoto Metropolis, Sokoto State Nigeria

    Science.gov (United States)

    Shamsuddeen, Abdulrahman; Amina, Hassan

    2016-01-01

    This study investigated the correlation between instructional methods and students' end-of-term achievement in Biology in selected secondary schools in Sokoto Metropolis, Sokoto State, Nigeria. The study addressed three specific objectives: to examine the relationship between cooperative learning methods, guided discovery, simulation method and…

  11. Improving the safety of a body composition analyser based on the PGNAA method

    Energy Technology Data Exchange (ETDEWEB)

    Miri-Hakimabad, Hashem; Izadi-Najafabadi, Reza; Vejdani-Noghreiyan, Alireza; Panjeh, Hamed [FUM Radiation Detection And Measurement Laboratory, Ferdowsi University of Mashhad (Iran, Islamic Republic of)

    2007-12-15

    The 252Cf radioisotope and 241Am-Be are intense neutron emitters that are readily encapsulated in compact, portable and sealed sources. Features such as a high neutron emission flux and a reliable neutron spectrum make these sources suitable for the prompt gamma neutron activation analysis (PGNAA) method. The PGNAA method can be used in medicine for neutron radiography and body chemical composition analysis. However, 252Cf and 241Am-Be sources generate not only neutrons but also intense gamma-rays. Furthermore, the sample in medical treatments is a human body, so it may be exposed to the bombardment of these gamma-rays. Moreover, accumulation of these high-rate gamma-rays in the detector volume causes simultaneous pulses that can pile up and distort the spectra in the region of interest (ROI). In order to remove these disadvantages in a practical way, without being concerned about losing the thermal neutron flux, a gamma-ray filter made of Pb must be employed. The paper suggests a relatively safe body chemical composition analyser (BCCA) machine that uses a spherical Pb shield enclosing the neutron source. Gamma-ray shielding effects and the optimum radius of the spherical Pb shield have been investigated using the MCNP-4C code and compared with the unfiltered case, the bare source. Finally, experimental results demonstrate that an optimised gamma-ray shield for the neutron source in a BCCA can effectively reduce the risk of exposure to the 252Cf and 241Am-Be sources.

  12. Correlating tephras and cryptotephras using glass compositional analyses and numerical and statistical methods: Review and evaluation

    Science.gov (United States)

    Lowe, David J.; Pearce, Nicholas J. G.; Jorgensen, Murray A.; Kuehn, Stephen C.; Tryon, Christian A.; Hayward, Chris L.

    2017-11-01

    We define tephras and cryptotephras and their components (mainly ash-sized particles of glass ± crystals in distal deposits) and summarize the basis of tephrochronology as a chronostratigraphic correlational and dating tool for palaeoenvironmental, geological, and archaeological research. We then document and appraise recent advances in analytical methods used to determine the major, minor, and trace elements of individual glass shards from tephra or cryptotephra deposits to aid their correlation and application. Protocols developed recently for the electron probe microanalysis of major elements in individual glass shards help to improve data quality and standardize reporting procedures. A narrow electron beam (diameter ∼3-5 μm) can now be used to analyze smaller glass shards than previously attainable. Reliable analyses of 'microshards' (defined here as glass shards T2 test). Randomization tests can be used where distributional assumptions such as multivariate normality underlying parametric tests are doubtful. Compositional data may be transformed and scaled before being subjected to multivariate statistical procedures including calculation of distance matrices, hierarchical cluster analysis, and PCA. Such transformations may make the assumption of multivariate normality more appropriate. A sequential procedure using Mahalanobis distance and the Hotelling two-sample T2 test is illustrated using glass major element data from trachytic to phonolitic Kenyan tephras. All these methods require a broad range of high-quality compositional data which can be used to compare 'unknowns' with reference (training) sets that are sufficiently complete to account for all possible correlatives, including tephras with heterogeneous glasses that contain multiple compositional groups. Currently, incomplete databases are tending to limit correlation efficacy. The development of an open, online global database to facilitate progress towards integrated, high
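The Mahalanobis-distance step of the sequential procedure can be illustrated for two glass variables with a hand-rolled 2x2 covariance inverse; the (SiO2, K2O) shard analyses below are hypothetical, not the Kenyan trachytic-phonolitic data the review uses.

```python
from statistics import mean

def mahalanobis2d(point, ref):
    """Squared Mahalanobis distance of a two-variable glass analysis
    (e.g. SiO2 and K2O wt%) from a reference tephra population, inverting
    the 2x2 sample covariance matrix directly."""
    n = len(ref)
    mx = mean(p[0] for p in ref)
    my = mean(p[1] for p in ref)
    sxx = sum((p[0] - mx) ** 2 for p in ref) / (n - 1)
    syy = sum((p[1] - my) ** 2 for p in ref) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in ref) / (n - 1)
    det = sxx * syy - sxy * sxy
    dx, dy = point[0] - mx, point[1] - my
    return (syy * dx * dx - 2 * sxy * dx * dy + sxx * dy * dy) / det

# Hypothetical reference shard analyses (SiO2 wt%, K2O wt%).
reference = [(74.8, 4.1), (75.1, 4.3), (74.6, 4.0), (75.3, 4.2), (74.9, 4.1)]
print(mahalanobis2d((75.0, 4.15), reference) < mahalanobis2d((71.0, 2.0), reference))
```

An 'unknown' close to the reference population yields a small distance and passes on to the Hotelling two-sample T2 test; a compositional outlier is rejected early.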

  13. Iron interference in arsenic absorption by different plant species, analysed by neutron activation, k0-method

    International Nuclear Information System (INIS)

    Uemura, George; Matos, Ludmila Vieira da Silva; Silva, Maria Aparecida da; Menezes, Maria Angela de Barros Correia

    2009-01-01

    Natural arsenic contamination is a cause for concern in many countries of the world, including Argentina, Bangladesh, Chile, China, India, Mexico, Thailand, the United States of America and also Brazil, especially in the Iron Quadrangle area, where mining activities have been contributing to aggravate the natural contamination. Among other elements, iron is capable of interfering with the absorption of arsenic by plants; iron ore has been proposed to remediate areas contaminated by this metalloid. In order to verify whether iron can interfere with arsenic absorption by different taxa of plants, specimens of Brassicaceae and Equisetaceae were kept in a 1/4-strength Murashige and Skoog basal salt solution (M and S) with 10 μgL-1 of arsenic acid and varying concentrations of iron. The specimens were analysed by neutron activation analysis, k0-method, a routine technique at CDTN that is also very appropriate for arsenic studies. The preliminary results were quite surprising, showing that iron can interfere with arsenic absorption by plants, but in different ways according to the species studied. (author)

  14. Network methods to support user involvement in qualitative data analyses: an introduction to Participatory Theme Elicitation.

    Science.gov (United States)

    Best, Paul; Badham, Jennifer; Corepal, Rekesh; O'Neill, Roisin F; Tully, Mark A; Kee, Frank; Hunter, Ruth F

    2017-11-23

    While Patient and Public Involvement (PPI) is encouraged throughout the research process, engagement is typically limited to the intervention design and post-analysis stages. There are few approaches to participatory data analysis within complex health interventions. Using qualitative data from a feasibility randomised controlled trial (RCT), this proof-of-concept study tests the value of a new approach to participatory data analysis called Participatory Theme Elicitation (PTE). Forty excerpts were given to eight members of a youth advisory PPI panel to sort into piles based on their perception of related thematic content. Using algorithms to detect communities in networks, excerpts were then assigned to a thematic cluster that combined the panel members' perspectives. Network analysis techniques were also used to identify key excerpts in each grouping, which were then further explored qualitatively. While the PTE analysis was, for the most part, consistent with the researcher-led analysis, young people also identified new emerging thematic content. PTE appears promising for encouraging user-led identification of themes arising from qualitative data collected during complex interventions. Further work is required to validate and extend this method. ClinicalTrials.gov, ID: NCT02455986. Retrospectively registered on 21 May 2015.
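The PTE pipeline, pooling panel members' pile sorts into a weighted co-occurrence network and then grouping excerpts, can be approximated with a much simpler stand-in: thresholded links plus connected components instead of the community-detection algorithms the authors used. All excerpt IDs and sorts below are invented.

```python
from collections import defaultdict

def cluster_excerpts(sortings, threshold):
    """Link two excerpts when at least `threshold` panel members sorted them
    into the same pile, then return the connected components of that network
    (a simplified stand-in for community detection)."""
    weight = defaultdict(int)
    excerpts = set()
    for piles in sortings:  # one panel member's complete sort
        for pile in piles:
            excerpts.update(pile)
            for i, a in enumerate(pile):
                for b in pile[i + 1:]:
                    weight[frozenset((a, b))] += 1
    # Union-find over the thresholded co-occurrence links.
    parent = {e: e for e in excerpts}
    def find(e):
        while parent[e] != e:
            parent[e] = parent[parent[e]]
            e = parent[e]
        return e
    for pair, w in weight.items():
        if w >= threshold:
            a, b = tuple(pair)
            parent[find(a)] = find(b)
    groups = defaultdict(set)
    for e in excerpts:
        groups[find(e)].add(e)
    return sorted(map(sorted, groups.values()))

# Hypothetical sorts of five excerpts by three panel members.
sortings = [
    [["e1", "e2"], ["e3", "e4", "e5"]],
    [["e1", "e2", "e3"], ["e4", "e5"]],
    [["e1", "e2"], ["e3"], ["e4", "e5"]],
]
print(cluster_excerpts(sortings, threshold=2))
```

Excerpts consistently sorted together by a majority of members end up in the same thematic cluster; excerpts sorted inconsistently fall out on their own.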

  15. Who runs public health? A mixed-methods study combining qualitative and network analyses.

    Science.gov (United States)

    Oliver, Kathryn; de Vocht, Frank; Money, Annemarie; Everett, Martin

    2013-09-01

    Persistent health inequalities encourage researchers to identify new ways of understanding the policy process. Informal relationships are implicated in finding evidence and making decisions for public health policy (PHP), but few studies use specialized methods to identify key actors in the policy process. We combined network and qualitative data to identify the most influential individuals in PHP in a UK conurbation and to describe their strategies for influencing policy. Network data were collected by asking for nominations of powerful and influential people in PHP (n = 152, response rate 80%), and 23 semi-structured interviews were analysed using a framework approach. The most influential PHP makers in this conurbation were mid-level managers in the National Health Service and local government, characterized by managerial skills: controlling policy processes by gatekeeping key organizations, providing policy content and managing selected experts and executives to lead on policies. Public health professionals and academics are connected to policy only indirectly, via managers. The most powerful individuals in public health are managers, who are not usually considered targets for research. As we show, they are highly influential at all stages of the policy process. This study shows the importance of understanding the daily activities of influential policy individuals.
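The nomination method described above, asking respondents to name powerful and influential people and ranking nominees by how often they are named, amounts to building a directed graph and scoring nodes by in-degree. A minimal sketch follows; the names and nomination lists are invented, and in-degree is only the simplest of the centrality measures such a study might use.

```python
from collections import Counter

def influence_ranking(nominations):
    """Rank nominees by in-degree: the number of distinct respondents
    who named them as powerful or influential."""
    indegree = Counter()
    for nominator, nominees in nominations.items():
        for nominee in sorted(set(nominees)):   # each nominator counts once
            if nominee != nominator:            # ignore self-nominations
                indegree[nominee] += 1
    return indegree.most_common()

# Hypothetical survey responses: respondent -> people they nominated.
nominations = {
    "alice": ["dana", "erin"],
    "bob":   ["dana", "erin", "frank"],
    "carol": ["dana"],
    "dana":  ["erin"],
}
print(influence_ranking(nominations))
# [('dana', 3), ('erin', 3), ('frank', 1)]
```

In a real analysis this ranking would be triangulated against interview data, as the study does, since in-degree alone cannot distinguish reputation from actual gatekeeping behaviour.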

  16. Effect of high-dose simvastatin on cognitive, neuropsychiatric, and health-related quality-of-life measures in secondary progressive multiple sclerosis: secondary analyses from the MS-STAT randomised, placebo-controlled trial.

    Science.gov (United States)

    Chan, Dennis; Binks, Sophie; Nicholas, Jennifer M; Frost, Chris; Cardoso, M Jorge; Ourselin, Sebastien; Wilkie, David; Nicholas, Richard; Chataway, Jeremy

    2017-08-01

    In the 24-month MS-STAT phase 2 trial, we showed that high-dose simvastatin significantly reduced the annualised rate of whole brain atrophy in patients with secondary progressive multiple sclerosis (SPMS). We now describe the results of the MS-STAT cognitive substudy, in which we investigated the treatment effect on cognitive, neuropsychiatric, and health-related quality-of-life (HRQoL) outcome measures. We did a secondary analysis of MS-STAT, a 24-month, double-blind, controlled trial of patients with SPMS done at three neuroscience centres in the UK between Jan 28, 2008, and Nov 4, 2011. Patients were randomly assigned (1:1) to either 80 mg simvastatin (n=70) or placebo (n=70). The cognitive assessments done were the National Adult Reading Test, Wechsler Abbreviated Scale of Intelligence, Graded Naming Test, Birt Memory and Information Processing Battery (BMIPB), Visual Object and Space Perception battery (cube analysis), Frontal Assessment Battery (FAB), and Paced Auditory Serial Addition Test. Neuropsychiatric status was assessed using the Hamilton Depression Rating Scale and the Neuropsychiatric Inventory Questionnaire. HRQoL was assessed using the self-reported 36-Item Short Form Survey (SF-36) version 2. Assessments were done at study entry, 12 months, and 24 months. Patients, treating physicians, and outcome assessors were masked to treatment allocation. Analyses were by intention to treat. MS-STAT is registered with ClinicalTrials.gov, number NCT00647348. Baseline assessment revealed impairments in 60 (45%) of 133 patients on the test of frontal lobe function (FAB), and in between 13 (10%) and 43 (33%) of 130 patients in tests of non-verbal and verbal memory (BMIPB). Over the entire trial, we noted significant worsening on tests of verbal memory (T score decline of 5·7 points, 95% CI 3·6-7·8; p …) … multiple sclerosis treatment trials. The Moulton Foundation, the Berkeley Foundation, the Multiple Sclerosis Trials Collaboration, the Rosetrees Trust, a

  17. Combining sequence-based prediction methods and circular dichroism and infrared spectroscopic data to improve protein secondary structure determinations

    Directory of Open Access Journals (Sweden)

    Lees Jonathan G

    2008-01-01

    Full Text Available Abstract Background A number of sequence-based methods exist for protein secondary structure prediction. Protein secondary structures can also be determined experimentally from circular dichroism, and infrared spectroscopic data using empirical analysis methods. It has been proposed that comparable accuracy can be obtained from sequence-based predictions as from these biophysical measurements. Here we have examined the secondary structure determination accuracies of sequence prediction methods with the empirically determined values from the spectroscopic data on datasets of proteins for which both crystal structures and spectroscopic data are available. Results In this study we show that the sequence prediction methods have accuracies nearly comparable to those of spectroscopic methods. However, we also demonstrate that combining the spectroscopic and sequences techniques produces significant overall improvements in secondary structure determinations. In addition, combining the extra information content available from synchrotron radiation circular dichroism data with sequence methods also shows improvements. Conclusion Combining sequence prediction with experimentally determined spectroscopic methods for protein secondary structure content significantly enhances the accuracy of the overall results obtained.

  18. Effect of a medical food on body mass index and activities of daily living in patients with Alzheimer's disease: secondary analyses from a randomized, controlled trial.

    Science.gov (United States)

    Kamphuis, P J G H; Verhey, F R J; Olde Rikkert, M G M; Twisk, J W R; Swinkels, S H N; Scheltens, P

    2011-08-01

    To investigate the effect of a medical food (Souvenaid) on body mass index (BMI) and functional abilities in patients with mild Alzheimer's disease (AD). DESIGN/SETTING/PARTICIPANTS/INTERVENTION/MEASUREMENTS: These analyses were performed on data from a 12-week, double-blind, randomized, controlled, multicenter, proof-of-concept study with a similarly designed and exploratory 12-week extension period. Patients with mild AD (Mini-Mental State Examination score of 20-26) were randomized to receive either the active product or an iso-caloric control product. While primary outcomes included measures of cognition, the 23-item Alzheimer's Disease Cooperative Study-Activities of Daily Living (ADCS-ADL) scale was included as a secondary outcome. Both ADCS-ADL and BMI were assessed at baseline and Weeks 6, 12 and 24. Data were analyzed using a repeated-measures mixed model. Overall, data suggested an increased BMI in the active versus the control group at Week 24 (ITT: p = 0.07; PP: p = 0.03), but no treatment effect on ADCS-ADL was observed. However, baseline BMI was found to be a significant treatment effect modifier (ITT: p = 0.04; PP: p = 0.05), and an increase in ADCS-ADL was observed at Week 12 in patients with a 'low' baseline BMI (ITT: p = 0.02; PP: p = 0.04). These data indicate that baseline BMI significantly impacts the effect of Souvenaid on functional abilities. In addition, there was a suggestion that Souvenaid increased BMI.

  19. No Exacerbation of Knee Joint Pain and Effusion Following Preoperative Progressive Resistance Training in Patients Scheduled for Total Knee Arthroplasty: Secondary Analyses From a Randomized Controlled Trial.

    Science.gov (United States)

    Skoffer, Birgit; Dalgas, Ulrik; Maribo, Thomas; Søballe, Kjeld; Mechlenburg, Inger

    2017-11-09

    Preoperative progressive resistance training (PRT) is controversial in patients scheduled for total knee arthroplasty (TKA) because of the concern that it may exacerbate knee joint pain and effusion. To examine whether preoperative PRT initiated 5 weeks prior to TKA would exacerbate pain and knee effusion, and whether it would allow a progressively increased training load throughout the training period that would subsequently increase muscle strength. Secondary analyses from a randomized controlled trial. University hospital and a regional hospital. A total of 30 patients who were scheduled for TKA due to osteoarthritis and assigned to the intervention group. Patients underwent unilateral PRT (3 sessions per week). Exercise loading was 12 repetitions maximum (RM), with progression toward 8 RM. The training program consisted of 6 exercises performed unilaterally. Before and after each training session, knee joint pain was rated on an 11-point scale, effusion was assessed by measuring the knee joint circumference, and training load was recorded. The first and last training sessions began with 1 RM testing of unilateral leg press, unilateral knee extension, and unilateral knee flexion. The median pain change score from before to after each training session was 0 at all training sessions. The average increase in knee joint effusion across the 12 training sessions was 0.16 cm ± 0.23 cm, and no consistent increase in knee joint effusion after training sessions was found over the training period (P = .21). Training load generally increased, and maximal muscle strength improved: unilateral leg press by 18% ± 30% (P = .03), unilateral knee extension by 81% ± 156% (P …), and unilateral knee flexion by 53% ± 57% (P …). Preoperative PRT did not exacerbate knee joint pain and effusion, despite a substantial progression in loading and increased muscle strength. Concerns about side effects such as pain and effusion after PRT seem unfounded. To be determined. Copyright © 2017. Published by Elsevier Inc.

  20. Erythrocyte omega-3 fatty acids are inversely associated with incident dementia: Secondary analyses of longitudinal data from the Women's Health Initiative Memory Study (WHIMS).

    Science.gov (United States)

    Ammann, Eric M; Pottala, James V; Robinson, Jennifer G; Espeland, Mark A; Harris, William S

    2017-06-01

    To assess whether red blood cell (RBC) docosahexaenoic acid and eicosapentaenoic acid (DHA+EPA) levels have a protective association with the risk of dementia in older women. RBC DHA+EPA levels were assessed at baseline, and cognitive status was evaluated annually in a cohort of 6706 women aged ≥65 years who participated in the Women's Health Initiative Memory Study (WHIMS). Cox regression was used to quantify the association between RBC DHA+EPA and the risk of probable dementia, independent of major dementia risk factors. During a median follow-up period of 9.8 years, 587 incident cases of probable dementia were identified. After adjusting for demographic, clinical, and behavioral risk factors, a one standard deviation increase in DHA+EPA levels was associated with a significantly lower risk of dementia (HR = 0.92, 95% CI: 0.84, 1.00; p < 0.05). This effect estimate did not meaningfully change after further adjustment for baseline cognitive function and APOE genotype. For women with high DHA+EPA exposure (1SD above mean) compared to low exposure (1SD below mean), the adjusted 15-year absolute risk difference for dementia was 2.1% (95% CI: 0.2%, 4.0%). In secondary analyses, we also observed a protective association with longitudinal change in Modified Mini-Mental State (3MS) Exam scores, but no significant association with incident MCI, PD/MCI, or baseline 3MS scores. Higher levels of DHA+EPA may help protect against the development of dementia. Results from prospective randomized controlled trials of DHA+EPA supplementation are needed to help clarify whether this association is causal. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Individualising Chronic Care Management by Analysing Patients' Needs - A Mixed Method Approach.

    Science.gov (United States)

    Timpel, P; Lang, C; Wens, J; Contel, J C; Gilis-Januszewska, A; Kemple, K; Schwarz, P E

    2017-11-13

    Modern health systems are increasingly faced with the challenge to provide effective, affordable and accessible health care for people with chronic conditions. As evidence on the specific unmet needs and their impact on health outcomes is limited, practical research is needed to tailor chronic care to the individual needs of patients with diabetes. Qualitative approaches to describe professional and informal caregiving will support understanding the complexity of chronic care. Results are intended to provide practical recommendations to be used for the systematic implementation of sustainable chronic care models. A mixed method study was conducted. A standardised survey (n = 92) of experts in chronic care using mail responses to open-ended questions was conducted to analyse existing chronic care programs focusing on effective, problematic and missing components. An expert workshop (n = 22) of professionals and scientists of the European funded research project MANAGE CARE was used to define a limited number of unmet needs and priorities of elderly patients with type 2 diabetes mellitus and comorbidities. This list was validated and ranked using a multilingual online survey (n = 650). Participants of the online survey included patients, health care professionals and other stakeholders from 56 countries. The survey indicated that current care models need to be improved in terms of financial support, case management and the consideration of social care. The expert workshop identified 150 patient needs, which were summarised in 13 needs dimensions. The online survey of these pre-defined dimensions revealed that financial issues, education of both patients and professionals, availability of services as well as health promotion are the most important unmet needs for both patients and professionals. The study uncovered competing demands which are not limited to medical conditions. The findings emphasise that future care models need to focus more strongly on individual patient needs.

  2. Sustainability of outdoor school ground smoking bans at secondary schools: a mixed-method study.

    Science.gov (United States)

    Rozema, A D; Mathijssen, J J P; Jansen, M W J; van Oers, J A M

    2018-02-01

    Although increasing numbers of countries are implementing outdoor school ground smoking bans at secondary schools, less attention is paid to the post-implementation period, even though the sustainability of a policy is essential for long-term effectiveness. Therefore, this study assesses the level of sustainability and examines perceived barriers/facilitators related to the sustainability of an outdoor school ground smoking ban at secondary schools. A mixed-method design was used with a sequential explanatory approach. In phase I, 438 online surveys were conducted, and in phase II, 15 semi-structured interviews were obtained from directors of relevant schools. ANOVA (phase I) and a thematic approach (phase II) were used to analyze the data. The level of sustainability of an outdoor school ground smoking ban was high at the 48% of Dutch schools that had such a ban. Furthermore, school size was significantly associated with sustainability. The perceived barriers/facilitators fell into three categories: (i) smoking ban implementation factors (side-effects, enforcement, communication, guidelines and collaboration), (ii) school factors (physical environment, school culture, education type and school policy) and (iii) community environment factors (legislation and social environment). Internationally, the spread of outdoor school ground smoking bans could be further promoted. Once implemented, the ban has become 'normal' practice and investments tend to endure. Moreover, involvement of all staff is important for sustainability, as they function as role models, have an interrelationship with students, and share responsibility for enforcement. These findings are promising for the sustainability of future tobacco control initiatives to further protect against the morbidity/mortality associated with smoking. © The Author 2017. Published by Oxford University Press on behalf of the European Public Health Association.

  3. Individualising Chronic Care Management by Analysing Patients’ Needs – A Mixed Method Approach

    Directory of Open Access Journals (Sweden)

    P. Timpel

    2017-11-01

    Full Text Available Background: Modern health systems are increasingly faced with the challenge to provide effective, affordable and accessible health care for people with chronic conditions. As evidence on the specific unmet needs and their impact on health outcomes is limited, practical research is needed to tailor chronic care to individual needs of patients with diabetes. Qualitative approaches to describe professional and informal caregiving will support understanding the complexity of chronic care. Results are intended to provide practical recommendations to be used for systematic implementation of sustainable chronic care models. Method: A mixed method study was conducted. A standardised survey (n = 92) of experts in chronic care using mail responses to open-ended questions was conducted to analyse existing chronic care programs focusing on effective, problematic and missing components. An expert workshop (n = 22) of professionals and scientists of a European funded research project MANAGE CARE was used to define a limited number of unmet needs and priorities of elderly patients with type 2 diabetes mellitus and comorbidities. This list was validated and ranked using a multilingual online survey (n = 650). Participants of the online survey included patients, health care professionals and other stakeholders from 56 countries. Results: The survey indicated that current care models need to be improved in terms of financial support, case management and the consideration of social care. The expert workshop identified 150 patient needs which were summarised in 13 needs dimensions. The online survey of these pre-defined dimensions revealed that financial issues, education of both patients and professionals, availability of services as well as health promotion are the most important unmet needs for both patients and professionals. Conclusion: The study uncovered competing demands which are not limited to medical conditions. The findings emphasise that future care

  4. Development of the evaluation methods in reactor safety analyses and core characteristics

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2013-08-15

    In order to support the safety reviews by the NRA of reactor safety design, including phenomena with multiple failures, computer codes are developed and safety evaluations with analyses are performed in the areas of thermal hydraulics and core characteristics. In the code preparation for safety analyses, the TRACE and RELAP5 codes were prepared to conduct safety analyses of LOCA and beyond-design-basis accidents with multiple failures. In the core physics code preparation, sensitivity and uncertainty analysis functions were incorporated into the lattice physics code CASMO-4. The verification of the improved CASMO-4/SIMULATE-3 was continued using core physics data. (author)

  5. Calculation methods for analysing nuclear power plant accidents and its qualification

    International Nuclear Information System (INIS)

    Sacco, W.

    1986-01-01

    A methodology for transient and accident analyses, able to carry out calculations for all transients and accidents required to support the operation and operational licensing of Angra-1 reactor reloads, is presented. (M.C.K.) [pt

  6. Method of chemical analysis of silicate rocks (1962); Methode d'analyse chimique des roches silicatees (1962)

    Energy Technology Data Exchange (ETDEWEB)

    Pouget, R [Commissariat a l' Energie Atomique, Fontenay-aux-Roses (France). Centre d' Etudes Nucleaires

    1962-07-01

    A rapid method of analysis for the physico-chemical determination of the major constituents of silicate rocks is described. Water losses at 100 deg. C and losses of volatile matter at 1000 deg. C are estimated after heating in a furnace at these temperatures, or by means of a thermobalance. Silica is determined by double insolubilization with hydrochloric acid on a sodium carbonate attack solution. Total iron and aluminium, as well as calcium and magnesium, are determined on the silica filtrate, after ammoniacal precipitation of Fe and Al, by titrimetry-photometry of their complexes with EDTA. The alkalis sodium and potassium are determined by flame spectrophotometry, manganese by colorimetry of the permanganate, and titanium via its complex with H{sub 2}O{sub 2}, all on a fluosulfuric attack solution. Phosphorus is determined via its 'molybdenum blue' complex on a fluoro-nitro-boric attack solution, and iron is determined by potentiometry with dichromate on a hydrofluoric solution. (author)

  7. Effect of Simulation Techniques and Lecture Method on Students' Academic Performance in Mafoni Day Secondary School Maiduguri, Borno State, Nigeria

    Science.gov (United States)

    Bello, Sulaiman; Ibi, Mustapha Baba; Bukar, Ibrahim Bulama

    2016-01-01

    The study examined the effect of simulation technique and lecture method on students' academic performance in Mafoni Day Secondary School, Maiduguri. The study used both simulation technique and lecture methods of teaching at the basic level of education in the teaching/learning environment. The study aimed at determining the best predictor among…

  8. Factor structure of the Wechsler Intelligence Scale for Children-Fifth Edition: Exploratory factor analyses with the 16 primary and secondary subtests.

    Science.gov (United States)

    Canivez, Gary L; Watkins, Marley W; Dombrowski, Stefan C

    2016-08-01

    The factor structure of the 16 Primary and Secondary subtests of the Wechsler Intelligence Scale for Children-Fifth Edition (WISC-V; Wechsler, 2014a) standardization sample was examined with exploratory factor analytic methods (EFA) not included in the WISC-V Technical and Interpretive Manual (Wechsler, 2014b). Factor extraction criteria suggested 1 to 4 factors and results favored 4 first-order factors. When this structure was transformed with the Schmid and Leiman (1957) orthogonalization procedure, the hierarchical g-factor accounted for large portions of total and common variance while the 4 first-order factors accounted for small portions of total and common variance; rendering interpretation at the factor index level less appropriate. Although the publisher favored a 5-factor model where the Perceptual Reasoning factor was split into separate Visual Spatial and Fluid Reasoning dimensions, no evidence for 5 factors was found. It was concluded that the WISC-V provides strong measurement of general intelligence and clinical interpretation should be primarily, if not exclusively, at that level. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
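The Schmid and Leiman (1957) orthogonalization used in the study above re-expresses a higher-order factor solution as a general factor plus residualized, orthogonal group factors, which is what lets the authors apportion total and common variance between g and the first-order factors. A minimal numpy sketch of the transformation follows; the loading matrices are invented toy values, not the WISC-V results.

```python
import numpy as np

# Hypothetical first-order pattern loadings: 8 subtests on 2 group factors
F = np.array([
    [0.80, 0.00], [0.70, 0.10], [0.75, 0.00], [0.60, 0.20],
    [0.10, 0.70], [0.00, 0.80], [0.20, 0.65], [0.00, 0.70],
])
# Hypothetical loadings of the 2 first-order factors on the general factor
g = np.array([0.8, 0.7])

# Schmid-Leiman: subtest loadings on g, then residualized group loadings
general = F @ g                          # general-factor loadings
U = np.sqrt(1.0 - g**2)                  # sqrt of factor uniquenesses
group = F * U                            # residualized group-factor loadings
SL = np.column_stack([general, group])   # full orthogonalized solution

# Sanity check: the SL solution reproduces the common variance implied by
# the hierarchical model, F (g g' + diag(1 - g^2)) F'
common = F @ (np.outer(g, g) + np.diag(1.0 - g**2)) @ F.T
assert np.allclose(SL @ SL.T, common)

# Share of common variance carried by the general factor
print(round((general**2).sum() / (SL**2).sum(), 2))
```

Even with these made-up loadings, the general factor absorbs well over half of the common variance, which mirrors the paper's argument that interpretation is safest at the level of general intelligence.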

  9. Relationships of Indoor, Outdoor, and Personal Air (RIOPA). Part I. Collection methods and descriptive analyses.

    Science.gov (United States)

    Weisel, Clifford P; Zhang, Junfeng; Turpin, Barbara J; Morandi, Maria T; Colome, Steven; Stock, Thomas H; Spektor, Dalia M; Korn, Leo; Winer, Arthur M; Kwon, Jaymin; Meng, Qing Yu; Zhang, Lin; Harrington, Robert; Liu, Weili; Reff, Adam; Lee, Jong Hoon; Alimokhtari, Shahnaz; Mohan, Kishan; Shendell, Derek; Jones, Jennifer; Farrar, L; Maberti, Slivia; Fan, Tina

    2005-11-01

    This study on the relationships of indoor, outdoor, and personal air (RIOPA) was undertaken to collect data for use in evaluating the contribution of outdoor sources of air toxics and particulate matter (PM) to personal exposure. The study was not designed to obtain a population-based sample, but rather to provide matched indoor, outdoor, and personal concentrations in homes that varied in their proximity to outdoor pollution sources and had a wide range of air exchange rates (AERs). This design allowed examination of relations among indoor, outdoor, and personal concentrations of air toxics and PM across a wide range of environmental conditions; the resulting data set obtained for a wide range of environmental pollutants and AERs can be used to evaluate exposure models. Approximately 100 households with residents who do not smoke participated in each of three cities in distinct locations expected to have different climates and housing characteristics: Elizabeth, New Jersey; Houston, Texas; and Los Angeles County, California. Questionnaires were administered to characterize homes, neighborhoods, and personal activities that might affect exposures. The concentrations of a suite of volatile organic compounds (VOCs) and carbonyl compounds, as well as the fraction of airborne particulate matter with a mass median aerodynamic diameter of 2.5 μm or less (PM2.5), were measured; indoor, outdoor, and personal air samples were collected simultaneously. During the same 48-hour period, the AER (exchanges/hr; x hr(-1)) was determined in each home, and carbonyl compounds were measured inside vehicle cabins driven by a subset of the participants. In most of the homes, measurements were made twice, during two different seasons, to obtain a wide distribution of AERs. This report presents in detail the data collection methods, quality control measures, and initial analyses of data distributions and relations among indoor, outdoor, and personal concentrations. The results show that indoor sources dominated personal and indoor air concentrations.

  10. Methods and Materials in Teaching Secondary School Mathematics - Syllabus. Revised Edition.

    Science.gov (United States)

    Gallia, Thomas J.

    This syllabus describes a course designed for the student interested in teaching mathematics at the secondary level and includes both campus centered activities and a field experience. The professor teaching this class is expected to "bridge the gap" between theory in the college classroom and practice as viewed in the secondary school. The…

  11. Research with secondary data : Different matching methods and does it matter.

    NARCIS (Netherlands)

    de Leeuw, Tim; Keijl, Stefan

    Our review of all SMJ studies of the last six years reveals that 63 percent use multiple secondary databases, but only 11 percent report how the connections between these databases were made. This limits our knowledge of how information across secondary databases can be combined and restricts the

  12. Impact of Including Authentic Inquiry Experiences in Methods Courses for Pre-Service Secondary Teachers

    Science.gov (United States)

    Slater, T. F.; Elfring, L.; Novodvorsky, I.; Talanquer, V.; Quintenz, J.

    2007-12-01

    Science education reform documents universally call for students to have authentic and meaningful experiences using real data in the context of their science education. The underlying philosophical position is that students analyzing data can have experiences that mimic actual research. In short, research experiences that reflect the scientific spirit of inquiry potentially can: prepare students to address real-world complex problems; develop students' ability to use scientific methods; prepare students to critically evaluate the validity of data or evidence and of the consequent interpretations or conclusions; teach quantitative skills, technical methods, and scientific concepts; increase verbal, written, and graphical communication skills; and train students in the values and ethics of working with scientific data. However, it is unclear what the broader pre-service teacher preparation community is doing to prepare future teachers to promote, manage, and successfully facilitate their own students in conducting authentic scientific inquiry. Surveys of undergraduates in secondary science education programs suggest that students have had almost no experience themselves in conducting open scientific inquiry, in which they develop researchable questions, design strategies to pursue evidence, and communicate data-based conclusions. In response, the College of Science Teacher Preparation Program at the University of Arizona requires all students enrolled in its various science teaching methods courses to complete an open inquiry research project and defend their findings at a specially designed inquiry science mini-conference at the end of the term. End-of-term surveys show that students enjoy their research experience and believe that this experience enhances their ability to facilitate their own future students in conducting open inquiry.

  13. Revisit of analytical methods for the process and plant control analyses during reprocessing of fast reactor fuels

    International Nuclear Information System (INIS)

    Subba Rao, R.V.

    2016-01-01

    CORAL (COmpact facility for Reprocessing of Advanced fuels in Lead cell) is an experimental facility for demonstrating the reprocessing of irradiated fast reactor fuels discharged from the Fast Breeder Test Reactor (FBTR). The objective of the reprocessing plant is to achieve nuclear-grade plutonium and uranium oxides with minimum process waste volumes. The process flow sheet for the reprocessing of spent fast reactor fuel consists of transport of the spent fuel, chopping, dissolution, feed conditioning, a solvent extraction cycle, a partitioning cycle, and re-conversion of plutonium nitrate and uranium nitrate to the respective oxides. The efficiency and performance of the plant in achieving the desired objective depend on the analyses of various species in the different steps of reprocessing. The analytical requirements in the plant can be broadly classified as: 1. process control analyses (analyses which affect the performance of the plant - PCA); 2. plant control analyses (analyses which indicate the efficiency of the plant - PLCA); 3. nuclear material accounting samples (analyses which have a bearing on nuclear material accounting in the plant - NUMAC); and 4. quality control analyses (quality of the input bulk chemicals as well as of the products - QCA). The analytical methods are selected on the basis of the duration of the analysis and the precision and accuracy required for each type of analytical requirement. The process and plant control analyses require lower precision and accuracy than NUMAC analyses, which require very high precision and accuracy, and the time taken should be as low as possible for process and plant control analyses compared with NUMAC analyses. The analytical methods required for determining U and Pu in process and plant samples from FRFR differ from those for samples from TRFR (Thermal Reactor Fuel Reprocessing) because of the higher Pu-to-U ratio in FRFR, and they should be such that they can be easily

  14. HIGH THROUGHPUT SCREENING METHOD AND APPARATUS FOR ANALYSING INTERACTIONS BETWEEN SURFACES WITH DIFFERENT TOPOGRAPHY AND THE ENVIRONMENT

    NARCIS (Netherlands)

    de Boer, Jan; van Blitterswijk, Clemens; Unadkat, H.V.; Stamatialis, Dimitrios; Papenburg, B.J.; Wessling, Matthias

    2009-01-01

    The invention is directed to a high throughput screening method for analysing an interaction between a surface of a material and an environment. The screening method of the invention comprises: providing a micro-array comprising said material and having a multitude of units at least part of which

  15. A new automated assign and analysing method for high-resolution rotationally resolved spectra using genetic algorithms

    NARCIS (Netherlands)

    Meerts, W.L.; Schmitt, M.

    2006-01-01

    This paper describes a numerical technique that has recently been developed to automatically assign and fit high-resolution spectra. The method makes use of genetic algorithms (GA). The current algorithm is compared with previously used analysing methods. The general features of the GA and its

  16. A consistent modelling methodology for secondary settling tanks: a reliable numerical method.

    Science.gov (United States)

    Bürger, Raimund; Diehl, Stefan; Farås, Sebastian; Nopens, Ingmar; Torfs, Elena

    2013-01-01

    The consistent modelling methodology for secondary settling tanks (SSTs) leads to a partial differential equation (PDE) of nonlinear convection-diffusion type as a one-dimensional model for the solids concentration as a function of depth and time. This PDE includes a flux that depends discontinuously on spatial position modelling hindered settling and bulk flows, a singular source term describing the feed mechanism, a degenerating term accounting for sediment compressibility, and a dispersion term for turbulence. In addition, the solution itself is discontinuous. A consistent, reliable and robust numerical method that properly handles these difficulties is presented. Many constitutive relations for hindered settling, compression and dispersion can be used within the model, allowing the user to switch on and off effects of interest depending on the modelling goal as well as investigate the suitability of certain constitutive expressions. Simulations show the effect of the dispersion term on effluent suspended solids and total sludge mass in the SST. The focus is on correct implementation whereas calibration and validation are not pursued.
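
    The governing PDE is described only qualitatively in the abstract. As a rough illustration of the purely convective part of such a model, here is a minimal explicit finite-volume step for a scalar conservation law with a hindered-settling flux. The Vesilind flux form and all parameter values are illustrative assumptions, not the constitutive functions of the paper, and the consistent method of Bürger et al. is considerably more elaborate (discontinuous coefficients, compression, dispersion):

    ```python
    import numpy as np

    def vesilind_flux(C, v0=6.0, rh=0.4):
        # Hindered-settling batch flux, Vesilind form: f(C) = v0 * C * exp(-rh * C)
        # (v0, rh are made-up parameter values, not taken from the paper)
        return v0 * C * np.exp(-rh * C)

    def lax_friedrichs_step(C, dz, dt, flux=vesilind_flux, alpha=6.0):
        """One explicit finite-volume step for dC/dt + d f(C)/dz = 0 on a closed
        batch column, using the local Lax-Friedrichs (Rusanov) interface flux.
        alpha must bound |f'(C)|; dt must satisfy the CFL condition dt <= dz/alpha."""
        F = flux(C)
        # numerical flux at the interior cell interfaces i+1/2
        Fi = 0.5 * (F[:-1] + F[1:]) - 0.5 * alpha * (C[1:] - C[:-1])
        # zero-flux boundaries: closed top and bottom of the settling column
        Fi = np.concatenate(([0.0], Fi, [0.0]))
        return C - dt / dz * (Fi[1:] - Fi[:-1])
    ```

    With the zero-flux boundaries the interface fluxes telescope, so total mass in the column is conserved by construction, one of the consistency properties the paper emphasises.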

  17. Spurious effects of electron emission from the grids of a retarding field analyser on secondary electron emission measurements. Results on a (111) copper single crystal

    International Nuclear Information System (INIS)

    Pillon, J.; Roptin, D.; Cailler, M.

    1976-01-01

    Spurious effects of a four-grid retarding field analyser were studied for low energy secondary electron measurements. Their behavior was investigated and two peaks in the energy spectrum were interpreted as resulting from tertiary electrons from the grids. It was shown that the true secondary electron peak has to be separated from these spurious peaks. The spectrum and the yields sigma and eta obtained for a Cu(111) crystal after a surface cleanliness check by Auger spectroscopy are given.

  18. A Review of Sparsity-Based Methods for Analysing Radar Returns from Helicopter Rotor Blades

    Science.gov (United States)

    2016-09-01

    performance study of these algorithms in the particular problem of analysing backscatter signals from rotating blades. The report is organised as follows...provide further insight into the behaviour of the techniques. Here, the algorithms for MP, OMP, CGP, gOMP and ROMP terminate when 10 atoms are

  19. Analysis methods used by the geochemistry department; Methodes d'analyses utilisees par la section de geochimie

    Energy Technology Data Exchange (ETDEWEB)

    Berthollet, P.

    1958-06-15

    This note presents the analytical techniques used for the determination of uranium in soils (fluorescence method, chromatographic method), in natural waters (ion exchange method, evaporation method), and in plants. Principles, equipment and products, reactant preparation, operating mode, sample preparation and measurements, and expression of results and calculations are indicated for each of these methods.

  20. Utilisation of best estimate system codes and best estimate methods in safety analyses of VVER reactors in the Czech Republic

    International Nuclear Information System (INIS)

    Macek, Jiri; Kral, Pavel

    2010-01-01

    The content of the presentation was as follows: Conservative versus best estimate approach, Brief description and selection of methodology, Description of uncertainty methods, Examples of the BE methodology. It is concluded that where BE computer codes are used, uncertainty and sensitivity analyses should be included; if best estimate codes + uncertainty are used, the safety margins increase; and BE + BSA is the next step in licensing analyses. (P.A.)

  1. Surgical management of pilonidal sinus patients by primary and secondary repair methods: a comparative study

    Directory of Open Access Journals (Sweden)

    Haji Barati B

    2010-12-01

    Full Text Available Background: Gross differences in return to work exist between pilonidal sinus patients operated on by primary and secondary repair. This survey evaluated the results of surgical management of pilonidal sinus with primary or secondary closure. Methods: In a randomized clinical trial, patients with pilonidal sinus referred to the surgical clinic of Shariati Hospital in Tehran, Iran between March 2007 and March 2009 underwent either excision with midline closure (primary, n=40) or excision without closure (secondary, n=40). The recorded outcomes were hospital stay, healing time, time off work, postoperative pain, patient satisfaction and the recurrence rate. Results: The majority of the patients were male (87.50%). There was no significant difference in hospital stay. Time off work (8.65±1.73 vs. 11.53±2.33 days, p=0.001) and healing time (3.43±0.92 vs. 5.3±0.79 days, p=0.001) were shorter in the primary group, but there were no significant differences in hospital stay and number of visits. Intensity of postoperative pain on the 1st (37.75±6.5 vs. 43.63±5.06, p=0.001), 2nd (26.75±6.66 vs. 34.63±5.48, p=0.001), 3rd (18.25±6.05 vs. 27.88±6.88, p=0.001), and 7th (8.45±3.85 vs. 17.88±6.19, p=0.001) days were

  2. A comparative method for finding and folding RNA secondary structures within protein-coding regions

    DEFF Research Database (Denmark)

    Pedersen, Jakob Skou; Meyer, Irmtraud Margret; Forsberg, Roald

    2004-01-01

    that RNA-DECODER's parameters can be automatically trained to successfully fold known secondary structures within the HCV genome. We scan the genomes of HCV and polio virus for conserved secondary-structure elements, and analyze performance as a function of available evolutionary information. On known...... secondary structures, RNA-DECODER shows a sensitivity similar to the programs MFOLD, PFOLD and RNAALIFOLD. When scanning the entire genomes of HCV and polio virus for structure elements, RNA-DECODER's results indicate a markedly higher specificity than MFOLD, PFOLD and RNAALIFOLD....

  3. Neuroprotection and secondary damage following spinal cord injury: concepts and methods.

    Science.gov (United States)

    Hilton, Brett J; Moulson, Aaron J; Tetzlaff, Wolfram

    2017-06-23

    Neuroprotection refers to the attenuation of pathophysiological processes triggered by acute injury to minimize secondary damage. The development of neuroprotective treatments represents a major goal in the field of spinal cord injury (SCI) research. In this review, we discuss the strengths and limitations of the methodologies employed to assess secondary damage and neuroprotection in preclinical models of traumatic SCI. We also discuss modelling issues and how new tools might be exploited to study secondary damage and neuroprotection. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  4. A method for reducing memory errors in the isotopic analyses of uranium hexafluoride by mass spectrometry; Methode de reduction des erreurs de memoire dans les analyses isotopiques de l'hexafluorure d'uranium par spectrometrie de masse

    Energy Technology Data Exchange (ETDEWEB)

    Bir, R [Commissariat a l' Energie Atomique, Saclay (France).Centre d' Etudes Nucleaires

    1961-07-01

    One of the most serious causes of systematic error in isotopic analyses of uranium from UF{sub 6} is the tendency of this material to become fixed in various ways in the mass spectrometer. As a result the value indicated by the instrument is influenced by the isotopic composition of the substances previously analysed. The resulting error is called a memory error. Making use of an elementary mathematical theory, the various methods used to reduce memory errors are analysed and compared. A new method is then suggested, which reduces the memory errors to an extent where they become negligible over a wide range of {sup 235}U concentration. The method is given in full, together with examples of its application. (author)

  5. Application of annual ring analyses to the detection of smoke damage. I. A methodical contribution to the treatment of annual ring analyses

    Energy Technology Data Exchange (ETDEWEB)

    Vins, B

    1961-01-01

    Losses in growth of silvicultural stands caused by smoke can be measured by annual ring analysis. The method is advantageous mainly because the annual gains can be checked far into the past and thus compared with gains before the onset of the pollution. Experience gained in the Krusne Hory area of Czechoslovakia with the methodical processing of 2000 annual ring analyses is reviewed. The principal problem was that more than half the trees exposed to pollution failed to grow annual rings. At first no rings are added from the ground up to a certain height; then the defect spreads all the way into the crowns of the affected trees. This observation is of fundamental importance in the calculation of losses in growth gains due to industrial emissions because hitherto the last annual ring next to the bark was always identified with the test year, while in reality a number of annual rings might already have failed to grow due to the effects of pollution. Errors far exceeding permissible limits might have occurred in the analysis.

  6. Comparison of Genome-Wide Association Methods in Analyses of Admixed Populations with Complex Familial Relationships

    DEFF Research Database (Denmark)

    Kadri, Naveen; Guldbrandtsen, Bernt; Sørensen, Peter

    2014-01-01

    Population structure is known to cause false-positive detection in association studies. We compared the power, precision, and type-I error rates of various association models in analyses of a simulated dataset with structure at the population (admixture from two populations; P) and family (K......) levels. We also compared type-I error rates among models in analyses of publicly available human and dog datasets. The models corrected for none, one, or both structure levels. Correction for K was performed with linear mixed models incorporating familial relationships estimated from pedigrees or genetic...... corrected for P. In contrast, correction for P alone in linear models was insufficient. The power and precision of linear mixed models with and without correction for P were similar. Furthermore, power, precision, and type-I error rate were comparable in linear mixed models incorporating pedigree...

  7. An approach to automated chromosome analysis; Etudes pour une methode d'automatisation des analyses chromosomiques

    Energy Technology Data Exchange (ETDEWEB)

    Le Go, Roland

    1972-05-03

    The methods of approach developed with a view to automatic processing of the different stages of chromosome analysis are described in this study, which is divided into three parts. Part 1 covers the automated selection of metaphase spreads, which operates a decision process in order to reject all the non-pertinent images and keep the good ones. This approach was achieved by writing a simulation program that made it possible to establish the proper selection algorithms and to design a kit of electronic logic units. Part 2 deals with the automatic processing of the morphological study of the chromosome complements in a metaphase: the metaphase photographs are processed by an optical-to-digital converter which extracts the image information and writes it out as a digital data set on a magnetic tape. For one metaphase image this data set includes some 200 000 grey values, encoded on a 16-, 32- or 64-level grey scale, and is processed by a pattern recognition program that isolates the chromosomes and investigates their characteristic features (arm tips, centromere areas) in order to obtain measurements equivalent to the lengths of the four arms. Part 3 describes a program for automated karyotyping by optimized pairing of human chromosomes. The data are derived from direct digitizing of the arm lengths by means of a BENSON digital reader. The program supplies: 1/ a list of the pairs; 2/ a graphic representation of the pairs so constituted according to their respective lengths and centromeric indexes; and 3/ another BENSON graphic drawing according to the author's own representation of the chromosomes, i.e. crosses with orthogonal arms, each branch being the accurate measurement of the corresponding chromosome arm. This conventionalized karyotype indicates on the last line the really abnormal or non-standard images left unpaired by the program, which are of special interest for the biologist. (author)

  8. Comparison of genome-wide association methods in analyses of admixed populations with complex familial relationships.

    Directory of Open Access Journals (Sweden)

    Naveen K Kadri

    Full Text Available Population structure is known to cause false-positive detection in association studies. We compared the power, precision, and type-I error rates of various association models in analyses of a simulated dataset with structure at the population (admixture from two populations; P and family (K levels. We also compared type-I error rates among models in analyses of publicly available human and dog datasets. The models corrected for none, one, or both structure levels. Correction for K was performed with linear mixed models incorporating familial relationships estimated from pedigrees or genetic markers. Linear models that ignored K were also tested. Correction for P was performed using principal component or structured association analysis. In analyses of simulated and real data, linear mixed models that corrected for K were able to control for type-I error, regardless of whether they also corrected for P. In contrast, correction for P alone in linear models was insufficient. The power and precision of linear mixed models with and without correction for P were similar. Furthermore, power, precision, and type-I error rate were comparable in linear mixed models incorporating pedigree and genomic relationships. In summary, in association studies using samples with both P and K, ancestries estimated using principal components or structured assignment were not sufficient to correct type-I errors. In such cases type-I errors may be controlled by use of linear mixed models with relationships derived from either pedigree or from genetic markers.
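
    The key correction described here is fitting the marker effect under a covariance structure that absorbs familial relatedness K. A minimal sketch of a single-marker test via generalized least squares, assuming the variance components are already known (in practice they would be estimated, e.g. by REML, and the function name and parameters here are hypothetical):

    ```python
    import numpy as np

    def gls_association(y, x, K, sg2=1.0, se2=1.0):
        """Test one marker x for association with phenotype y while correcting
        for relatedness via the covariance V = sg2 * K + se2 * I.
        K is a pedigree- or marker-derived relationship matrix."""
        n = len(y)
        V = sg2 * K + se2 * np.eye(n)
        Vinv = np.linalg.inv(V)
        X = np.column_stack([np.ones(n), x])      # intercept + marker genotype
        XtVi = X.T @ Vinv
        cov = np.linalg.inv(XtVi @ X)             # GLS covariance of estimates
        beta = cov @ (XtVi @ y)
        se = np.sqrt(cov[1, 1])
        return beta[1], beta[1] / se              # marker effect and Wald z-score
    ```

    An ordinary linear model is the special case sg2 = 0; the abstract's finding is that ignoring K in this way inflates type-I error, whereas the GLS/mixed-model form controls it.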

  9. A new method for analysing socio-ecological patterns of vulnerability

    OpenAIRE

    Kok, M.; Lüdeke, M.; Lucas, P.; Sterzel, T.; Walther, C.; Janssen, P.; Sietz, D.; de Soysa, I.

    2016-01-01

    This paper presents a method for the analysis of socio-ecological patterns of vulnerability of people being at risk of losing their livelihoods as a consequence of global environmental change. This method fills a gap in methodologies for vulnerability analysis by providing generalizations of the factors that shape vulnerability in specific socio-ecological systems and showing their spatial occurrence. The proposed method consists of four steps that include both quantitative and qualitative an...

  10. Effects of Lecture Method Supplemented with Music and Computer Animation on Senior Secondary School Students' Academic Achievement in Electrochemistry

    Science.gov (United States)

    Akpoghol, T. V.; Ezeudu, F. O.; Adzape, J. N.; Otor, E. E.

    2016-01-01

    The study investigated the effects of Lecture Method Supplemented with Music (LMM) and Computer Animation (LMC) on senior secondary school students' academic achievement in electrochemistry in Makurdi metropolis. Six research questions and six hypotheses guided the study. The design of the study was quasi experimental, specifically the pre-test,…

  11. Relative Effect of Lecture Method Supplemented with Music and Computer Animation on Senior Secondary School Students' Retention in Electrochemistry

    Science.gov (United States)

    Akpoghol, T. V.; Ezeudu, F. O.; Adzape, J. N.; Otor, E. E.

    2016-01-01

    The study investigated the effects of Lecture Method Supplemented with Music (LMM) and Computer Animation (LMC) on senior secondary school students' retention in electrochemistry in Makurdi metropolis. Three research questions and three hypotheses guided the study. The design of the study was quasi experimental, specifically the pre-test,…

  12. A new method for analysing socio-ecological patterns of vulnerability

    NARCIS (Netherlands)

    Kok, M.; Lüdeke, M.; Lucas, P.; Sterzel, T.; Walther, C.; Janssen, P.; Sietz, D.; Soysa, de I.

    2016-01-01

    This paper presents a method for the analysis of socio-ecological patterns of vulnerability of people being at risk of losing their livelihoods as a consequence of global environmental change. This method fills a gap in methodologies for vulnerability analysis by providing generalizations of the

  13. Quantitative Research Methods in Chaos and Complexity: From Probability to Post Hoc Regression Analyses

    Science.gov (United States)

    Gilstrap, Donald L.

    2013-01-01

    In addition to qualitative methods presented in chaos and complexity theories in educational research, this article addresses quantitative methods that may show potential for future research studies. Although much in the social and behavioral sciences literature has focused on computer simulations, this article explores current chaos and…

  14. Method for analysing radium in powder samples and its application to uranium prospecting

    International Nuclear Information System (INIS)

    Gong Xinxi; Hu Minzhi.

    1987-01-01

    The decay daughters of Rn released from the powder sample (soil) in a sealed bottle are collected on a piece of copper, and the radium in the sample can be measured by counting α-particles with an alphameter for uranium prospecting; the technique is therefore called the radium method. This method has many advantages, such as high sensitivity (the lowest limit of detection for radium is 2.7 x 10 -15 g per gram of sample), high efficiency, low cost and ease of use. On the basis of measuring more than 700 samples taken along 20 sections in 8 deposits, the results show that the radium method is better than γ-measurement and equal to the 210 Po method in its capability to discover anomalies. The author also summarizes the anomaly intensities of the radium method, the 210 Po method and γ-measurement at the surface above deep blind ores, with or without surficial mineralization, together with the shapes of their profiles and the variation of Ra/ 210 Po ratios. According to these distinguishing features, uranium mineralization located at depth and/or in shallow parts can be distinguished. The combined application of the radium, 210 Po and γ-measurement methods may be regarded as one of the important methods for anomaly assessment. Based on radium measurements of 771 stream sediment samples in an area of 100 km 2 , it is demonstrated that the radium method can be used in the stages of uranium reconnaissance and prospecting

  15. An implementation of multiple multipole method in the analyse of elliptical objects to enhance backscattering light

    Science.gov (United States)

    Jalali, T.

    2015-07-01

    In this paper, we present the modelling of dielectric elliptical shapes with respect to a highly confined power distribution in the resulting nanojet, parameterized according to the beam waist and its beam divergence. The method is based on spherical Bessel functions as basis functions, adapted to the standard multiple multipole method. It can handle elliptically shaped particles over a range of sizes and refractive indices, which have been studied under plane wave illumination with the two- and three-dimensional multiple multipole method. Because of its fast and good convergence, the results obtained from simulation are highly accurate and reliable. The simulation time is less than a minute in both two and three dimensions. The proposed method is therefore found to be computationally efficient, fast and accurate.

  16. Method for analysing the adequacy of electric power systems with wind power plants and energy storages

    Directory of Open Access Journals (Sweden)

    Perzhabinsky Sergey

    2017-01-01

    Full Text Available Currently, renewable energy sources and energy storage devices are being actively introduced into electric power systems. We developed a method to analyze the adequacy of such electric power systems. The method takes into account the uncertainty of electricity generation by wind power plants and the processes of energy storage. It is based on the Monte Carlo method and allows the use of openly accessible long-term meteorological data. An experimental study of an electric power system was carried out on the basis of real technical and meteorological data. The method makes it possible to estimate the effectiveness of introducing generators based on renewable energy sources and energy storages into electric power systems.
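
    The abstract does not give the sampling scheme, so as an illustration only, a Monte Carlo adequacy calculation of this general kind can be sketched as follows: sample forced outages of conventional capacity per hour, add wind output from a capacity-factor series, and let a simple energy store charge on surplus and discharge on deficit. All function names, parameters and the dispatch rule are hypothetical, not the authors' model:

    ```python
    import numpy as np

    def lolp_monte_carlo(load, wind_cf, conv_cap=80.0, conv_for=0.06,
                         wind_cap=40.0, store_cap=20.0, store_power=10.0,
                         n_trials=2000, seed=1):
        """Estimate loss-of-load probability (LOLP): the fraction of trials
        in which generation plus storage cannot cover load in some hour.
        conv_for is the forced-outage rate of the conventional block."""
        rng = np.random.default_rng(seed)
        T = len(load)
        shortfalls = 0
        for _ in range(n_trials):
            avail = rng.random(T) >= conv_for        # hourly outage sampling
            gen = conv_cap * avail + wind_cap * wind_cf
            soc = store_cap / 2                      # storage state of charge
            short = False
            for t in range(T):
                balance = gen[t] - load[t]
                if balance >= 0:                     # surplus: charge the store
                    soc = min(store_cap, soc + min(balance, store_power))
                else:                                # deficit: discharge
                    discharge = min(-balance, store_power, soc)
                    soc -= discharge
                    if -balance - discharge > 1e-9:
                        short = True
            shortfalls += short
        return shortfalls / n_trials
    ```

    The long-term meteorological data mentioned in the abstract would enter through `wind_cf`, a per-hour capacity-factor series derived from measured wind speeds.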

  17. Relative conservatisms of combination methods used in response spectrum analyses of nuclear piping systems

    International Nuclear Information System (INIS)

    Gupta, S.; Kustu, O.; Jhaveri, D.P.; Blume, J.A.

    1983-01-01

    The paper presents the conclusions of a comprehensive study that investigated the relative conservatisms represented by various combination techniques. Two approaches were taken for the study, producing mutually consistent results. In the first, 20 representative nuclear piping systems were systematically analyzed using the response spectrum method. The total response was obtained using nine different combination methods. One procedure, using the SRSS method for combining spatial components of response and the 10% method for combining the responses of different modes (which is currently acceptable to the U.S. NRC), was the standard for comparison. Responses computed by the other methods were normalized to this standard method. These response ratios were then used to develop cumulative frequency-distribution curves, which were used to establish the relative conservatism of the methods in a probabilistic sense. In the second approach, 30 single-degree-of-freedom (SDOF) systems that represent different modes of hypothetical piping systems and have natural frequencies varying from 1 Hz to 30 Hz, were analyzed for 276 sets of three-component recorded ground motion. A set of hypothetical systems assuming a variety of modes and frequency ranges was developed. The responses of these systems were computed from the responses of the SDOF systems by combining the spatial response components by algebraic summation and the individual mode responses by the Navy method, or combining both spatial and modal response components using the SRSS method. Probability density functions and cumulative distribution functions were developed for the ratio of the responses obtained by both methods. (orig./HP)
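
    The two modal-combination rules compared above are short formulas. As a sketch (following the common statement of the rules in NRC Regulatory Guide 1.92, not code from this study): SRSS takes the square root of the sum of squares of the peak modal responses, while the 10% method adds absolute cross terms for mode pairs whose frequencies lie within 10% of each other.

    ```python
    import numpy as np

    def srss(r):
        # square root of the sum of squares of peak modal responses
        return np.sqrt(np.sum(np.asarray(r, float) ** 2))

    def ten_percent(r, f):
        """10% method: SRSS augmented by 2*|Ri*Rj| for each pair of modes
        whose natural frequencies are within 10% of the lower frequency."""
        r = np.asarray(r, float)
        f = np.asarray(f, float)
        total = np.sum(r ** 2)
        for i in range(len(r)):
            for j in range(i + 1, len(r)):
                if abs(f[i] - f[j]) <= 0.1 * min(f[i], f[j]):
                    total += 2.0 * abs(r[i] * r[j])
        return np.sqrt(total)
    ```

    For well-separated frequencies the 10% method reduces to SRSS; for closely spaced modes it approaches the absolute sum, which is why the choice of rule shifts the conservatism measured in the study.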

  18. Evaluation method of corrosive conditions in cooling systems of nuclear power plants by combined analyses of flow dynamics and corrosion

    Energy Technology Data Exchange (ETDEWEB)

    Uchida, Shunsuke [Nuclear Power Engineering Corporation (NUPEC), Tokyo (Japan); Atomic Energy Society of Japan (AESJ) (Japan). Research Committee on Water Chemistry Standard; Naitoh, Masanori [Nuclear Power Engineering Corporation (NUPEC), Tokyo (Japan); Atomic Energy Society of Japan (AESJ) (Japan). Computational Science and Engineering Div.; Uehara, Yasushi; Okada, Hidetoshi [Nuclear Power Engineering Corporation (NUPEC), Tokyo (Japan); Hotta, Koji [ITOCHU Techno-Solutions Corporation (Japan); Ichikawa, Ryoko [Mizuho Information and Research Inst., Inc. (Japan); Koshizuka, Seiichi [Tokyo Univ. (Japan)

    2007-03-15

    Problems in major components and structural materials in nuclear power plants have often been caused by flow induced vibration, corrosion and their overlapping effects. In order to establish safe and reliable plant operation, it is necessary to predict future problems for structural materials based on combined analyses of flow dynamics and corrosion and to mitigate them before they become serious issues for plant operation. The analysis models are divided into two types. 1. Prediction models for future problems with structural materials: Distributions of oxidant concentrations along flow paths are obtained by solving water radiolysis reactions in the boiling water reactor (BWR) primary cooling water and hydrazine-oxygen reactions in the pressurized water reactor (PWR) secondary cooling water. Then, the electrochemical corrosion potential (ECP) at the point of interest is also obtained by the mixed potential model using oxidant concentration. Higher ECP enhances the possibility of intergranular stress corrosion cracking (IGSCC) in the BWR primary system, while lower ECP enhances flow accelerated corrosion (FAC) in the PWR secondary system. 2. Evaluation models of wall thinning caused by flow accelerated corrosion: The degree of wall thinning is evaluated at a location with a higher possibility of FAC occurrence, and lifetime is estimated for preventive maintenance. General features of models are reviewed in this paper and the prediction models for oxidant concentrations are briefly introduced. (orig.)

  19. Evaluation method of corrosive conditions in cooling systems of nuclear power plants by combined analyses of flow dynamics and corrosion

    International Nuclear Information System (INIS)

    Uchida, Shunsuke; Hotta, Koji; Ichikawa, Ryoko; Koshizuka, Seiichi

    2007-01-01

    Problems in major components and structural materials in nuclear power plants have often been caused by flow induced vibration, corrosion and their overlapping effects. In order to establish safe and reliable plant operation, it is necessary to predict future problems for structural materials based on combined analyses of flow dynamics and corrosion and to mitigate them before they become serious issues for plant operation. The analysis models are divided into two types. 1. Prediction models for future problems with structural materials: Distributions of oxidant concentrations along flow paths are obtained by solving water radiolysis reactions in the boiling water reactor (BWR) primary cooling water and hydrazine-oxygen reactions in the pressurized water reactor (PWR) secondary cooling water. Then, the electrochemical corrosion potential (ECP) at the point of interest is also obtained by the mixed potential model using oxidant concentration. Higher ECP enhances the possibility of intergranular stress corrosion cracking (IGSCC) in the BWR primary system, while lower ECP enhances flow accelerated corrosion (FAC) in the PWR secondary system. 2. Evaluation models of wall thinning caused by flow accelerated corrosion: The degree of wall thinning is evaluated at a location with a higher possibility of FAC occurrence, and lifetime is estimated for preventive maintenance. General features of models are reviewed in this paper and the prediction models for oxidant concentrations are briefly introduced. (orig.)

  20. Analysing Infinite-State Systems by Combining Equivalence Reduction and the Sweep-Line Method

    DEFF Research Database (Denmark)

    Mailund, Thomas

    2002-01-01

    The sweep-line method is a state space exploration method for on-the-fly verification aimed at systems exhibiting progress. Presence of progress in the system makes it possible to delete certain states during state space generation, which reduces the memory used for storing the states. Unfortunately, the same progress that is used to improve memory performance in state space exploration often leads to an infinite state space: the progress in the system is carried over to the states, resulting in infinitely many states distinguished only through the progress. A finite state space can...... property essential for the sweep-line method. We evaluate the new method on two case studies, showing significant improvements in performance, and we briefly discuss the new method in the context of Timed Coloured Petri Nets, where the “increasing global time” semantics can be exploited for more efficient

  1. The evaluation method of soil-spring for the analyses of foundation structures on layered bedsoil

    International Nuclear Information System (INIS)

    Satoh, S.; Sasaki, F.

    1985-01-01

    When performing finite element analysis of foundation structures, such as the mat slabs of reactor buildings and turbine buildings, it is very important to evaluate and model the soil-spring mechanism between foundation and soil correctly. This paper presents a method in which the soil spring is evaluated from the theoretical solution, the semi-infinite elastic solid being assumed to consist of multi-layered soil systems. From the analytical example, it is concluded that the stress analysis of foundation structures on multi-layered soil systems cannot be evaluated by the conventional methods. (orig.)

  2. On stream radioisotope X-ray fluorescence analyser and a method for the determination of copper in slurry

    International Nuclear Information System (INIS)

    Holynska, B.; Lankosz, M.; Lacki, E.; Ostachowicz, J.; Baran, W.; Owsiak, T.

    1975-01-01

    The paper presents an ''on stream'' analyser and a radioisotope X-ray fluorescence method for the continuous determination of copper content in feed (0.5-2.5% Cu), concentrates (15-25% Cu) and tailings (0.01-0.03% Cu). The analyser consists essentially of a radioisotope X-ray fluorescence measuring head, a γ-density gauge, an electronic unit, an analog processor and recorders. The method is based on the measurement of the characteristic radiation of the Cu series, selected by nickel-cobalt filters. The total relative error (1s) of the determination of copper in feed is 6-8%, in concentrates 5-7% and in tailings about 18%. The ''on stream'' analyser has been successfully operated in a pilot plant. (author)

  3. Development of advanced methods and related software for human reliability evaluation within probabilistic safety analyses

    International Nuclear Information System (INIS)

    Kosmowski, K.T.; Mertens, J.; Degen, G.; Reer, B.

    1994-06-01

    Human Reliability Analysis (HRA) is an important part of Probabilistic Safety Analysis (PSA). The first part of this report consists of an overview of types of human behaviour and human error including the effect of significant performance shaping factors on human reliability. Particularly with regard to safety assessments for nuclear power plants a lot of HRA methods have been developed. The most important of these methods are presented and discussed in the report, together with techniques for incorporating HRA into PSA and with models of operator cognitive behaviour. Based on existing HRA methods the concept of a software system is described. For the development of this system the utilization of modern programming tools is proposed; the essential goal is the effective application of HRA methods. A possible integration of computeraided HRA within PSA is discussed. The features of Expert System Technology and examples of applications (PSA, HRA) are presented in four appendices. (orig.) [de

  4. Novel method for equivalent stiffness and Coulomb's damping ratio analyses of leaf spring

    International Nuclear Information System (INIS)

    Wen Jun, Wu; Yu, Xiang; Le Mei, Zhu; Li Jun, He

    2012-01-01

    The leaf spring is a representative type of laminated structure. Based on the linear theory of curved beams, the first derivatives of the status vectors of the individual leaves of the leaf spring are derived. The first derivatives of the combined status vector are obtained by properly treating the nonlinear interaction forces between adjacent leaves. Moreover, precise integration technology and the transfer matrix method are introduced to solve the equations. The force-displacement curve of a leaf spring is then calculated separately with the present method and with the finite element software ANSYS. The results demonstrate the precision and advantages of the present method for analyzing the leaf spring. The Coulomb's damping ratio of the leaf spring is also studied by using the present method

  5. Protein fraction heterogeneity in donkey’s milk analysed by proteomic methods

    Directory of Open Access Journals (Sweden)

    G. D'Urso

    2010-04-01

    Donkey's milk is often well tolerated by patients affected by cow's milk protein allergy, probably thanks to its protein composition. This empirical evidence, confirmed by some clinical trials, needs to be better investigated. A preliminary survey of the protein fraction of donkey's milk was carried out: fifty-six individual milk samples were collected and analysed by IEF and SDS-PAGE. Five different IEF patterns were identified, showing a marked heterogeneity in both the casein and whey protein fractions. A single IEF pattern showed an apparently reduced amount of the casein fraction, highlighted by SDS-PAGE. Three of the five IEF patterns were further investigated by Matrix-Assisted Laser Desorption Ionization-Time of Flight Mass Spectrometry (MALDI-TOF MS).

  6. Calculation methods and algorithms development of dynamic loadings on NPP secondary circuit equipment at shock and pulse actions

    International Nuclear Information System (INIS)

    Kuznetsov, D.V.; Kormilitsyn, V.M.; Proskuryakov, K.N.

    2010-01-01

    Calculation results for acoustic parameter fluctuations in the low-pressure regenerative heating system of an NPP with a WWER-1000 type reactor are presented. The spectral structure of the acoustic fluctuations was shown to depend on the configuration of the secondary circuit equipment, its geometrical dimensions and the operation mode. Estimates of the natural oscillation frequencies of working-medium pressure in the secondary circuit equipment are given. The developed calculation methods and algorithms are intended for revealing and preventing the conditions under which vibrations in elements of the secondary circuit equipment resonate with acoustic oscillations in the working medium, both under operating conditions and in the design stage of the secondary circuit of an NPP with a WWER-1000 type reactor. An analysis of the dependence of the pass-band on the operation mode was carried out to solve the given problem [ru

  7. Validity and reliability of methods for the detection of secondary caries around amalgam restorations in primary teeth

    Directory of Open Access Journals (Sweden)

    Mariana Minatel Braga

    2010-03-01

    Secondary caries has been reported as the main reason for restoration replacement. The aim of this in vitro study was to evaluate the performance of different methods - visual inspection, laser fluorescence (DIAGNOdent), radiography and tactile examination - for secondary caries detection in primary molars restored with amalgam. Fifty-four primary molars were photographed and 73 suspect sites adjacent to amalgam restorations were selected. Two examiners evaluated these sites independently using all methods. Agreement between examiners was assessed by the Kappa test. To validate the methods, a caries-detector dye was used after restoration removal. The best cut-off points for the sample were found by a Receiver Operating Characteristic (ROC) analysis, and the area under the ROC curve (Az), and the sensitivity, specificity and accuracy of the methods, were calculated for the enamel (D2) and dentine (D3) thresholds. These parameters were found for each method and then compared by the McNemar test. The tactile examination and visual inspection presented the highest inter-examiner agreement for the D2 and D3 thresholds, respectively. The visual inspection also showed better performance than the other methods for both thresholds (Az = 0.861 and Az = 0.841, respectively). In conclusion, visual inspection presented the best performance for detecting enamel and dentine secondary caries in primary teeth restored with amalgam.
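The ROC-based validation used in the study (Az, best cut-off, sensitivity and specificity) can be sketched as follows. The readings and gold-standard labels are invented for illustration; this is the generic technique, not the study's data or code.

```python
import numpy as np

def roc_analysis(scores, truth):
    """Empirical Az (area under the ROC curve) via pairwise comparison
    of diseased and sound sites, plus the cut-off maximising Youden's
    index (sensitivity + specificity - 1)."""
    scores = np.asarray(scores, float)
    truth = np.asarray(truth)
    pos, neg = scores[truth == 1], scores[truth == 0]
    # Az equals P(score of a diseased site > score of a sound site)
    diff = pos[:, None] - neg[None, :]
    az = float(np.mean(diff > 0) + 0.5 * np.mean(diff == 0))
    best_cut, best_j = None, -1.0
    for c in np.unique(scores):
        j = np.mean(pos >= c) + np.mean(neg < c) - 1.0   # Youden index
        if j > best_j:
            best_cut, best_j = float(c), j
    return az, best_cut

# invented laser-fluorescence readings with a histological gold standard
az, cutoff = roc_analysis([5, 8, 12, 15, 20, 25, 30, 40, 55, 70],
                          [0, 0, 0,  1,  0,  1,  1,  1,  1,  1])
```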

  8. A method of mounting multiple otoliths for beam-based microchemical analyses

    Science.gov (United States)

    Donohoe, C.J.; Zimmerman, C.E.

    2010-01-01

    Beam-based analytical methods are widely used to measure the concentrations of elements and isotopes in otoliths. These methods usually require that otoliths be individually mounted and prepared to properly expose the desired growth region to the analytical beam. Most analytical instruments, such as LA-ICPMS and ion and electron microprobes, have sample holders that will accept only one to six slides or mounts at a time. We describe a method of mounting otoliths that allows for easy transfer of many otoliths to a single mount after they have been prepared. Such an approach increases the number of otoliths that can be analyzed in a single session by reducing the need to open the sample chamber to exchange slides - a particularly time-consuming step on instruments that operate under vacuum. For ion and electron microprobes, the method also greatly reduces the number of slides that must be coated with an electrical conductor prior to analysis. In this method, a narrow strip of cover glass is first glued at one end to a standard microscope slide. The otolith is then mounted in thermoplastic resin on the opposite, free end of the strip. The otolith can then be ground and flipped, if needed, by reheating the mounting medium. After otolith preparation is complete, the cover glass is cut with a scribe to free the otolith, and up to 20 small otoliths can be arranged on a single petrographic slide. © 2010 The Author(s).

  9. Methods to induce primary and secondary traumatic damage in organotypic hippocampal slice cultures.

    Science.gov (United States)

    Adamchik, Y; Frantseva, M V; Weisspapir, M; Carlen, P L; Perez Velazquez, J L

    2000-04-01

    Organotypic brain slice cultures have been used in a variety of studies on neurodegenerative processes [K.M. Abdel-Hamid, M. Tymianski, Mechanisms and effects of intracellular calcium buffering on neuronal survival in organotypic hippocampal cultures exposed to anoxia/aglycemia or to excitotoxins, J. Neurosci. 17, 1997, pp. 3538-3553; D.W. Newell, A. Barth, V. Papermaster, A.T. Malouf, Glutamate and non-glutamate receptor mediated toxicity caused by oxygen and glucose deprivation in organotypic hippocampal cultures, J. Neurosci. 15, 1995, pp. 7702-7711; J.L. Perez Velazquez, M.V. Frantseva, P.L. Carlen, In vitro ischemia promotes glutamate mediated free radical generation and intracellular calcium accumulation in pyramidal neurons of cultured hippocampal slices, J. Neurosci. 23, 1997, pp. 9085-9094; L. Stoppini, L.A. Buchs, D. Muller, A simple method for organotypic cultures of nervous tissue, J. Neurosci. Methods 37, 1991, pp. 173-182; R.C. Tasker, J.T. Coyle, J.J. Vornov, The regional vulnerability to hypoglycemia induced neurotoxicity in organotypic hippocampal culture: protection by early tetrodotoxin or delayed MK 801, J. Neurosci. 12, 1992, pp. 4298-4308.]. We describe two methods to induce traumatic cell damage in hippocampal organotypic cultures. Primary trauma injury was induced by rolling a stainless steel cylinder (0.9 g) over the organotypic slices. Secondary injury was induced by dropping a weight (0.137 g) on a localised area of the organotypic slice from a height of 2 mm. The time course and extent of cell death were determined by measuring the fluorescence of the viability indicator propidium iodide (PI) at several time points after the injury. The initial localised impact damage spread 24 and 67 h after injury, cell death being 25% and 54%, respectively, when slices were kept at 37 degrees C. To validate these methods as models to assess neuroprotective strategies, similar insults were applied to slices at relatively low temperatures (30

  10. Alarm analysis of secondary loop system based on MFM and SDG methods

    International Nuclear Information System (INIS)

    Yang Ning; Lu Gubing; Chen Pan

    2014-01-01

    The multilevel flow model (MFM) and the signed directed graph (SDG) were combined to analyze the alarm signals of the secondary loop system of a nuclear power plant. The MFM was used to decompose and describe the nuclear power plant by levels, and the SDG was used to analyze the logic of the function signs in the MFM. Two kinds of faults in the secondary loop system of the nuclear power plant were simulated and the alarm signals were analyzed. The simulation results show that the fault source can be identified exactly and the propagation route of the alarm signals can be described clearly, which is helpful for operators' judgment. (authors)

  11. Application of the SPH method in nodal diffusion analyses of SFR cores

    Energy Technology Data Exchange (ETDEWEB)

    Nikitin, Evgeny; Fridman, Emil [Helmholtz-Zentrum Dresden-Rossendorf e.V., Dresden (Germany). Div. Reactor Safety; Mikityuk, K. [Paul Scherrer Institut, Villigen (Switzerland)

    2016-07-01

    The current study investigated the potential of the SPH method, applied to correct the few-group XS produced by Serpent, to further improve the accuracy of the nodal diffusion solutions. The procedure for the generation of SPH-corrected few-group XS is presented in the paper. The performance of the SPH method was tested on a large oxide SFR core from the OECD/NEA SFR benchmark. The reference SFR core was modeled with the DYN3D and PARCS nodal diffusion codes using the SPH-corrected few-group XS generated by Serpent. The nodal diffusion results obtained with and without SPH correction were compared to the reference full-core Serpent MC solution. It was demonstrated that the application of the SPH method improves the accuracy of the nodal diffusion solutions, particularly for the rodded core state.

  12. Analyses of Methods and Algorithms for Modelling and Optimization of Biotechnological Processes

    Directory of Open Access Journals (Sweden)

    Stoyan Stoyanov

    2009-08-01

    A review of the problems in modeling, optimization and control of biotechnological processes and systems is given in this paper. Existing and some new practical optimization methods for searching for the global optimum, based on various advanced strategies - heuristic, stochastic, genetic and combined - are analyzed. Methods based on sensitivity theory, and stochastic and mixed strategies for optimization with partial knowledge about kinetic, technical and economic parameters, are discussed. Several approaches to multi-criteria optimization tasks are analyzed. Problems concerning the optimal control of biotechnological systems are also discussed.

  13. The use of bootstrap methods for analysing health-related quality of life outcomes (particularly the SF-36)

    Directory of Open Access Journals (Sweden)

    Campbell Michael J

    2004-12-01

    Health-Related Quality of Life (HRQoL) measures are becoming increasingly used in clinical trials as primary outcome measures. Investigators are now asking statisticians for advice on how to analyse studies that have used HRQoL outcomes. HRQoL outcomes, like the SF-36, are usually measured on an ordinal scale. However, most investigators assume that there exists an underlying continuous latent variable that measures HRQoL, and that the actual measured outcomes (the ordered categories) reflect contiguous intervals along this continuum. The ordinal scaling of HRQoL measures means they tend to generate data that have discrete, bounded and skewed distributions. Thus, standard methods of analysis such as the t-test and linear regression that assume Normality and constant variance may not be appropriate. For this reason, conventional statistical advice would suggest that non-parametric methods be used to analyse HRQoL data. The bootstrap is one such computer-intensive non-parametric method for analysing data. We used the bootstrap for hypothesis testing and for the estimation of standard errors and confidence intervals for parameters, in four datasets (which illustrate different aspects of study design). We then compared and contrasted the bootstrap with standard methods of analysing HRQoL outcomes. The standard methods included t-tests, linear regression, summary measures and General Linear Models. Overall, in the datasets we studied, using the SF-36 outcome, bootstrap methods produce results similar to conventional statistical methods. This is likely because the t-test and linear regression are robust to the violations of assumptions that HRQoL data are likely to cause (i.e. non-Normality). While particular to our datasets, these findings are likely to generalise to other HRQoL outcomes, which have discrete, bounded and skewed distributions. Future research with other HRQoL outcome measures, interventions and populations, is required to [...]
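A percentile bootstrap of the kind discussed in this record can be sketched as follows. The two groups of SF-36-style scores are invented, and the sketch illustrates the general technique rather than the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_ci(a, b, n_boot=2000, alpha=0.05):
    """Percentile-bootstrap confidence interval for the difference in
    group means: resample each group with replacement, recompute the
    mean difference, and take the empirical quantiles.  No Normality
    assumption is needed, which suits the discrete, bounded and skewed
    distributions typical of HRQoL scores."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        diffs[i] = (rng.choice(a, a.size).mean()
                    - rng.choice(b, b.size).mean())
    lo, hi = np.quantile(diffs, [alpha / 2, 1 - alpha / 2])
    return float(lo), float(hi)

# invented SF-36-style dimension scores (0-100 scale) for two trial arms
group_a = [70, 75, 80, 85, 90, 95, 100, 100]
group_b = [40, 50, 55, 60, 65, 70, 75, 80]
lo, hi = bootstrap_ci(group_a, group_b)
```

An interval excluding zero corresponds to a significant group difference at the chosen level, the bootstrap analogue of the t-test comparison made in the paper.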

  14. Handbook of methods for risk-based analyses of technical specifications

    International Nuclear Information System (INIS)

    Samanta, P.K.; Kim, I.S.; Mankamo, T.; Vesely, W.E.

    1994-12-01

    Technical Specifications (TS) requirements for nuclear power plants define the Limiting Conditions for Operation (LCOs) and Surveillance Requirements (SRs) to assure safety during operation. In general, these requirements are based on deterministic analysis and engineering judgments. Experiences with plant operation indicate that some elements of the requirements are unnecessarily restrictive, while a few may not be conducive to safety. The US Nuclear Regulatory Commission (USNRC) Office of Research has sponsored research to develop systematic risk-based methods to improve various aspects of TS requirements. This handbook summarizes these risk-based methods. The scope of the handbook includes reliability and risk-based methods for evaluating allowed outage times (AOTs), scheduled or preventive maintenance, action statements requiring shutdown where shutdown risk may be substantial, surveillance test intervals (STIs), and management of plant configurations resulting from outages of systems, or components. For each topic, the handbook summarizes analytic methods with data needs, outlines the insights to be gained, lists additional references, and gives examples of evaluations
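One evaluation of the kind the handbook covers, the single-event risk contribution of an allowed outage time, reduces to simple arithmetic. The sketch below uses invented plant values and the common incremental conditional core damage probability (ICCDP) formulation; it is an illustration of the idea, not a formula taken from the handbook itself.

```python
# Single-event AOT risk: the increase in core damage frequency while
# the component is out of service, times the fraction of a year it is
# allowed to stay down.  All numbers below are hypothetical.
baseline_cdf    = 2.0e-5          # core damage frequency, per year
conditional_cdf = 8.0e-5          # per year, with the component down
aot_hours       = 72.0            # allowed outage time

iccdp = (conditional_cdf - baseline_cdf) * (aot_hours / 8760.0)
# a yearly AOT risk estimate would further multiply this by the
# expected number of such outages per year
```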

  15. Sparse PCA, a new method for unsupervised analyses of fMRI data

    DEFF Research Database (Denmark)

    Sjöstrand, Karl; Lund, Torben E.; Madsen, Kristoffer Hougaard

    2006-01-01

    [...] Under favorable circumstances, one or more of these signals describe activation patterns, while others model noise and other nuisance factors. This work introduces a competing method for fMRI analysis known as sparse principal component analysis (SPCA). We argue that SPCA is less committed than ICA and show that similar results, with better suppression of noise, are obtained.
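A minimal SPCA sketch on invented fMRI-like data, using scikit-learn's SparsePCA as a stand-in for the authors' implementation, illustrates the key property: the sparsity penalty drives most loadings to exactly zero, so each component involves only a subset of voxels. The data sizes, noise level and penalty value are assumptions made for the example.

```python
import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.default_rng(1)

# toy "fMRI" data: 60 time points x 50 voxels, with a boxcar activation
# confined to the first 10 voxels plus Gaussian noise (values invented)
boxcar = np.tile([0.0] * 5 + [1.0] * 5, 6)      # on/off paradigm
data = rng.normal(0.0, 0.3, size=(60, 50))
data[:, :10] += boxcar[:, None]

# the l1 penalty (alpha) forces many loadings to exactly zero,
# unlike ordinary PCA or ICA, whose components are dense
spca = SparsePCA(n_components=3, alpha=1.0, random_state=0)
spca.fit(data)
zero_fraction = float((spca.components_ == 0).mean())
```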

  16. Handbook of methods for risk-based analyses of technical specifications

    Energy Technology Data Exchange (ETDEWEB)

    Samanta, P.K.; Kim, I.S. [Brookhaven National Lab., Upton, NY (United States); Mankamo, T. [Avaplan Oy, Espoo (Finland); Vesely, W.E. [Science Applications International Corp., Dublin, OH (United States)

    1994-12-01

    Technical Specifications (TS) requirements for nuclear power plants define the Limiting Conditions for Operation (LCOs) and Surveillance Requirements (SRs) to assure safety during operation. In general, these requirements are based on deterministic analysis and engineering judgments. Experiences with plant operation indicate that some elements of the requirements are unnecessarily restrictive, while a few may not be conducive to safety. The US Nuclear Regulatory Commission (USNRC) Office of Research has sponsored research to develop systematic risk-based methods to improve various aspects of TS requirements. This handbook summarizes these risk-based methods. The scope of the handbook includes reliability and risk-based methods for evaluating allowed outage times (AOTs), scheduled or preventive maintenance, action statements requiring shutdown where shutdown risk may be substantial, surveillance test intervals (STIs), and management of plant configurations resulting from outages of systems, or components. For each topic, the handbook summarizes analytic methods with data needs, outlines the insights to be gained, lists additional references, and gives examples of evaluations.

  17. N-Nitrosodiethanolamine (NDELA) in cosmetics: method development and sample analysis

    NARCIS (Netherlands)

    Rijst EC van; Schothorst RC; ARO

    1997-01-01

    In 1995, at the request of the Health Protection Inspectorate, the N-nitrosamine problem in cosmetic products was surveyed. For the determination of NDELA in cosmetics, the GC-TEA method was developed and validated. The mean recovery of NDELA is 99% (N=4), and the [...]

  18. Structural and magnetic properties of multi-core nanoparticles analysed using a generalised numerical inversion method

    Science.gov (United States)

    Bender, P.; Bogart, L. K.; Posth, O.; Szczerba, W.; Rogers, S. E.; Castro, A.; Nilsson, L.; Zeng, L. J.; Sugunan, A.; Sommertune, J.; Fornara, A.; González-Alonso, D.; Barquín, L. Fernández; Johansson, C.

    2017-01-01

    The structural and magnetic properties of magnetic multi-core particles were determined by numerical inversion of small angle scattering and isothermal magnetisation data. The investigated particles consist of iron oxide nanoparticle cores (9 nm) embedded in poly(styrene) spheres (160 nm). A thorough physical characterisation of the particles included transmission electron microscopy, X-ray diffraction and asymmetrical flow field-flow fractionation. Their structure was ultimately disclosed by an indirect Fourier transform of static light scattering, small angle X-ray scattering and small angle neutron scattering data of the colloidal dispersion. The extracted pair distance distribution functions clearly indicated that the cores were mostly accumulated in the outer surface layers of the poly(styrene) spheres. To investigate the magnetic properties, the isothermal magnetisation curves of the multi-core particles (immobilised and dispersed in water) were analysed. The study stands out by applying the same numerical approach to extract the apparent moment distributions of the particles as for the indirect Fourier transform. It could be shown that the main peak of the apparent moment distributions correlated to the expected intrinsic moment distribution of the cores. Additional peaks were observed which signaled deviations of the isothermal magnetisation behavior from the non-interacting case, indicating weak dipolar interactions. PMID:28397851
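The numerical inversion of isothermal magnetisation data into an apparent moment distribution can be sketched as a non-negative least-squares fit over a bank of Langevin kernels. This is a simplified, single-moment illustration with invented field and moment grids, not the authors' code or data.

```python
import numpy as np
from scipy.optimize import nnls

def langevin(x):
    """Langevin function L(x) = coth(x) - 1/x, with a small-x guard."""
    x = np.asarray(x, float)
    out = x / 3.0                                 # small-argument limit
    big = np.abs(x) > 1e-6
    out[big] = 1.0 / np.tanh(x[big]) - 1.0 / x[big]
    return out

kT = 4.1e-21                          # thermal energy near 297 K, in J
B = np.linspace(0.001, 1.0, 50)       # applied field in tesla
mu_bins = np.logspace(-20, -18, 40)   # trial moments in A m^2

# synthetic magnetisation curve of particles with one true moment
m_true = 1.0e-19
M = langevin(m_true * B / kT)

# each kernel column is the Langevin curve of one trial moment;
# non-negative least squares recovers the apparent moment distribution
K = langevin(mu_bins[None, :] * B[:, None] / kT)
weights, residual = nnls(K, M)
recovered = mu_bins[np.argmax(weights)]
```

With noise-free synthetic data the recovered distribution peaks near the true moment; interacting or polydisperse samples, as in the paper, produce additional peaks.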

  19. Improved Method for the Qualitative Analyses of Palm Oil Carotenes Using UPLC.

    Science.gov (United States)

    Ng, Mei Han; Choo, Yuen May

    2016-04-01

    Palm oil is the richest natural source of carotenes, comprising 500-700 ppm in crude palm oil (CPO). The concentration is much higher in oil extracted from palm-pressed fiber, a by-product of the milling of oil palm fruits. There are 11 types of carotenes in palm oil, excluding the cis/trans isomers of some of the carotenes. Qualitative separation of the individual carotenes is particularly useful for the identification and confirmation of different types of oil, as the carotene profile is unique to each type of vegetable oil. Previous studies on HPLC separation of the individual palm carotenes reported total analysis times of up to 100 min using a C30 stationary phase. In this study, the separation was completed in <5 min. The qualitative separation was successfully carried out using a commonly used stationary phase, C18. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  20. Shielding analysis method applied to nuclear ship 'MUTSU' and its evaluation based on experimental analyses

    International Nuclear Information System (INIS)

    Yamaji, Akio; Miyakoshi, Jun-ichi; Iwao, Yoshiaki; Tsubosaka, Akira; Saito, Tetsuo; Fujii, Takayoshi; Okumura, Yoshihiro; Suzuoki, Zenro; Kawakita, Takashi.

    1984-01-01

    The procedures of shielding analysis used for the shielding modification design of the Nuclear Ship ''MUTSU'' are described. The calculations of the radiation distribution on board were made using the Sn codes ANISN and TWOTRAN, the point kernel code QAD and the Monte Carlo code MORSE. The accuracies of these calculations were investigated through the analysis of various shielding experiments: the shield tank experiment of the Nuclear Ship ''Otto Hahn'', the shielding mock-up experiment for ''MUTSU'' performed in JRR-4, the shielding benchmark experiment using the 16N radiation facility of AERE Harwell, and the shielding effect experiment on the ship structure performed in the training ship ''Shintoku-Maru''. The values calculated by ANISN agree with the data measured at ''Otto Hahn'' within a factor of 2 for fast neutrons and within a factor of 3 for epithermal and thermal neutrons. The γ-ray dose rates calculated by QAD agree with the measured values within 30% for the analysis of the experiment in JRR-4. The design values for ''MUTSU'' were determined as a consequence of these experimental analyses. (author)

  1. Analyses of Research Topics in the Field of Informetrics Based on the Method of Topic Modeling

    Directory of Open Access Journals (Sweden)

    Sung-Chien Lin

    2014-07-01

    In this study, we used the approach of topic modeling to uncover the possible structure of research topics in the field of Informetrics, to explore the distribution of the topics over the years, and to compare the core journals. In order to infer the structure of the topics in the field, the data of the papers published in the Journal of Informetrics and Scientometrics during 2007 to 2013 were retrieved from the database of the Web of Science as input to the topic modeling approach. The results of this study show that when the number of topics was set to 10, the topic model has the smallest perplexity. Although the data scope and analysis methods differ from previous studies, the topics generated in this study are consistent with results produced by expert analyses. Empirical case studies and measurements of bibliometric indicators were considered important in every year of the analytic period, and the field showed increasing stability. Both core journals broadly paid attention to all of the topics in the field of Informetrics; the Journal of Informetrics put particular emphasis on the construction and application of bibliometric indicators, while Scientometrics focused on the evaluation and the factors of productivity of countries, institutions, domains, and journals.
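The perplexity-based choice of topic number used in the study can be sketched as follows, with a toy corpus and scikit-learn's LDA as stand-ins; the study's own data and implementation are not reproduced here.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# tiny invented corpus standing in for Informetrics abstracts
docs = [
    "citation analysis of journal impact factors",
    "impact factor and citation counts of journals",
    "index measures of researcher productivity",
    "productivity indicators for research evaluation",
    "topic models uncover latent research themes",
    "latent dirichlet allocation finds topics in papers",
]
X = CountVectorizer().fit_transform(docs)

# fit LDA for several candidate topic numbers and keep the lowest
# perplexity, mirroring the model-selection criterion in the study
perplexities = {}
for k in (2, 3, 4):
    lda = LatentDirichletAllocation(n_components=k, random_state=0)
    lda.fit(X)
    perplexities[k] = lda.perplexity(X)
best_k = min(perplexities, key=perplexities.get)
```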

  2. Sustainability of outdoor school ground smoking bans at secondary schools : A mixed-method study

    NARCIS (Netherlands)

    Rozema, A. D.; Mathijssen, J. J. P.; Jansen, M. W. J.; Van Oers, J. A. M.

    2017-01-01

    Although increasing numbers of countries are implementing outdoor school ground smoking bans at secondary schools, less attention is paid to the post-implementation period even though sustainability of a policy is essential for long-term effectiveness. Therefore, this study assesses the level of

  3. Mixed Methods Study Using Constructive Learning Team Model for Secondary Mathematics Teachers

    Science.gov (United States)

    Ritter, Kristy L.

    2010-01-01

    The constructive learning team model for secondary mathematics teachers (CLTM) was created to provide students with learning opportunities and experiences that address deficiencies in oral and written communication, logical processes and analysis, mathematical operations, independent learning, teamwork, and technology utilization. This study…

  4. Developing and Validating a Competence Framework for Secondary Mathematics Student Teachers through a Delphi Method

    Science.gov (United States)

    Muñiz-Rodríguez, Laura; Alonso, Pedro; Rodríguez-Muñiz, Luis J.; Valcke, Martin

    2017-01-01

    Initial teacher education programmes provide student teachers with the desired competences to develop themselves as teachers. Although a generic framework for teaching competences is available covering all school subjects in Spain, the initial teacher education programmes curriculum does not specify which competences secondary mathematics student…

  5. A review of methods for analysing the whipping movement of pipework

    International Nuclear Information System (INIS)

    Nicholson, M.D.

    1977-11-01

    A postulated rupture of a pipe carrying high energy fluid results in large forces acting on the piping run due to the reactive forces of the escaping fluid. This can cause a large whipping movement of the pipe. It is necessary to design and position restraints to contain this motion, which, although it only occurs in the event of a low probability accident, may produce damage in systems essential for the safe shutdown of a nuclear reactor. Hence an understanding of the dynamic response of the piping and supports is required. A discussion is presented of the pipe whip problem and present methods of investigation are reviewed. In order to assess their applicability, some general methods of non-linear dynamics are surveyed. (author)

  6. A METHOD FOR PREPARING A SUBSTRATE BY APPLYING A SAMPLE TO BE ANALYSED

    DEFF Research Database (Denmark)

    2017-01-01

    The invention relates to a method for preparing a substrate (105a) comprising a sample reception area (110) and a sensing area (111). The method comprises the steps of: 1) applying a sample on the sample reception area; 2) rotating the substrate around a predetermined axis; 3) during rotation, letting at least part of the liquid travel from the sample reception area to the sensing area due to capillary forces acting between the liquid and the substrate; and 4) removing the wave of particles and liquid formed at one end of the substrate. The sensing area is closer to the predetermined axis than the sample reception area. The sample comprises a liquid part and particles suspended therein.

  7. Handbook of methods for acid-deposition studies. Laboratory analyses for soil chemistry

    International Nuclear Information System (INIS)

    Blume, L.J.; Schumacher, P.W.; Schaffer, K.A.; Cappo, K.A.; Papp, M.L.

    1990-09-01

    The handbook describes methods used to process and analyze soil samples. It is intended as a guidance document for groups involved in acid deposition monitoring activities similar to those implemented by the Aquatic Effects Research Program of the National Acid Precipitation Assessment Program. These methods were developed for use in the Direct/Delayed Response Project, a component project of the Aquatic Effects Research Program within the Office of Ecological Processes and Effects Research. The program addresses the following issues relating to the effects of acid deposition on aquatic ecosystems: The extent and magnitude of past change; The change to be expected in the future under various deposition scenarios; The maximum rates of deposition below which further change is not expected; and The rate of change or recovery of aquatic ecosystems if deposition rates are decreased. Chemical and physical parameters were measured during the Direct/Delayed Response Project and are described in the document

  8. Method for performing diversity and defense-in-depth analyses of reactor protection systems

    International Nuclear Information System (INIS)

    Preckshot, G.G.

    1994-12-01

    The purpose of this NUREG is to describe a method for analyzing computer-based nuclear reactor protection systems that discovers design vulnerabilities to common-mode failure. The potential for common-mode failure has become an important issue as the software content of protection systems has increased. This potential was not present in earlier analog protection systems because it could usually be assumed that common-mode failure, if it did occur, was due to slow processes such as corrosion or premature wear-out. This assumption is no longer true for systems containing software. It is the purpose of the analysis method described here to determine points of a design for which credible common-mode failures are uncompensated either by diversity or defense-in-depth

  9. Application of chaos analyses methods on East Anatolian Fault Zone fractures

    Energy Technology Data Exchange (ETDEWEB)

    Kamışlıoğlu, Miraç, E-mail: m.kamislioglu@gmail.com; Külahcı, Fatih, E-mail: fatihkulahci@firat.edu.tr [Nuclear Physics Division, Department of Physics, Faculty of Science, Fırat University, Elazig, TR-23119 (Turkey)

    2016-06-08

    Nonlinear time series analysis techniques have broad application in the geoscience and geophysics fields, and modern nonlinear methods provide considerable evidence for explaining seismicity phenomena. In this study, nonlinear time series analysis, fractal analysis and spectral analysis were carried out to investigate the chaotic behavior of radon gas ({sup 222}Rn) concentrations released during seismic events. Nonlinear time series analysis methods (Lyapunov exponent, Hurst phenomenon, correlation dimension and false nearest neighbor) were applied to the East Anatolian Fault Zone (EAFZ), Turkey, and its surroundings, where about 35,136 radon measurements are available for each region. The behavior of {sup 222}Rn, which is used in earthquake prediction studies, was thus investigated.
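One of the listed methods, the Hurst phenomenon, can be sketched with classical rescaled-range (R/S) analysis. This is a generic illustration on synthetic data, not the authors' radon series; the window sizes and series length are arbitrary choices.

```python
import numpy as np

def hurst_rs(series, min_window=8):
    """Hurst exponent by rescaled-range (R/S) analysis: the slope of
    log(R/S) versus log(window length) over dyadic window sizes.
    H near 0.5 indicates an uncorrelated series; H near 1 indicates
    strong persistence."""
    x = np.asarray(series, float)
    n, sizes, rs = min_window, [], []
    while n <= x.size // 2:
        chunks = x[: (x.size // n) * n].reshape(-1, n)
        devs = chunks - chunks.mean(axis=1, keepdims=True)
        z = np.cumsum(devs, axis=1)
        R = z.max(axis=1) - z.min(axis=1)   # range of cumulative dev.
        S = chunks.std(axis=1)              # chunk standard deviation
        ok = S > 0
        sizes.append(n)
        rs.append(np.mean(R[ok] / S[ok]))
        n *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return float(slope)

rng = np.random.default_rng(42)
h_white = hurst_rs(rng.normal(size=4096))   # near 0.5 for white noise
```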

  10. Genetic analyses of the human eye colours using a novel objective method for eye colour classification

    DEFF Research Database (Denmark)

    Andersen, Jeppe D.; Johansen, Peter; Harder, Stine

    2013-01-01

    In this study, we present a new objective method for measuring the eye colour on a continuous scale that allows researchers to associate genetic markers with different shades of eye colour. With the use of the custom designed software Digital Iris Analysis Tool (DIAT), the iris was automatically...... and TYR rs1393350) on the eye colour. We evaluated the two published prediction models for eye colour (IrisPlex [1] and Snipper[2]) and compared the predictions with the PIE-scores. We found good concordance with the prediction from individuals typed as HERC2 rs12913832 G. However, both methods had......-score ranged from −1 to 1 (brown to blue). The software eliminated the need for user based interpretation and qualitative eye colour categories. In 94% (570) of 605 analyzed eye images, the iris region was successfully extracted and a PIE-score was calculated. A very high correlation between the PIE...

  11. Reconciling PM10 analyses by different sampling methods for Iron King Mine tailings dust.

    Science.gov (United States)

    Li, Xu; Félix, Omar I; Gonzales, Patricia; Sáez, Avelino Eduardo; Ela, Wendell P

    2016-03-01

    The overall project objective at the Iron King Mine Superfund site is to determine the level and potential risk associated with heavy metal exposure of the population proximate to the site's tailings pile. To provide sufficient size-fractionated dust for multi-discipline research studies, a dust generator was built and is now being used to generate size-fractionated dust samples for toxicity investigations using in vitro cell culture and animal exposure experiments, as well as studies on geochemical characterization and bioassay solubilization with simulated lung and gastric fluid extractants. The objective of this study is to provide a robust method for source identification by comparing the tailings sample produced by the dust generator with that collected by a MOUDI sampler. The As and Pb concentrations of the PM10 fraction in the MOUDI sample were much lower than in tailings samples produced by the dust generator, indicating a dilution of Iron King tailings dust by dust from other sources. For source apportionment purposes, a single-element concentration method was used, based on the assumption that the PM10 fraction comes from a background source plus the Iron King tailings source. The method's conclusion that nearly all arsenic and lead in the PM10 dust fraction originated from the tailings substantiates our previous Pb and Sr isotope study conclusion. As and Pb showed similar mass fractions from Iron King for all sites, suggesting that As and Pb have the same major emission source. Further validation of this simple source apportionment method, based on other elements and sites, is needed.
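
The single-element, two-source mass balance described above reduces to one line of algebra: the fraction of an element attributable to the tailings is the sample's enrichment over background divided by the tailings' enrichment over background. A minimal sketch, with purely hypothetical concentrations:

```python
def tailings_fraction(c_sample, c_tailings, c_background):
    """Two-source mass balance: fraction of an element in ambient PM10 that is
    attributable to the tailings, assuming the sample is a mix of background
    dust and tailings dust only."""
    return (c_sample - c_background) / (c_tailings - c_background)

# hypothetical As concentrations (mg/kg), for illustration only
f_as = tailings_fraction(c_sample=38.0, c_tailings=40.0, c_background=2.0)
print(f"{f_as:.2f}")  # 0.95
```

A fraction near 1 for both As and Pb is the situation the abstract reports: nearly all of those elements in the PM10 fraction originate from the tailings.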

  12. Qualitative Elemental Analyses of a Meteorite Sample Found in Turkey by Photo-activation Analysis Method

    International Nuclear Information System (INIS)

    Ertugay, C; Boztosun, I; Ozmen, S F; Dapo, H

    2015-01-01

    In this paper, a meteorite sample found in Turkey and provided by the TÜBITAK National Observatory has been investigated for qualitative elemental analysis using the photo-activation analysis method, with a clinical linear accelerator that has an endpoint energy of 18 MeV and a high-purity germanium detector. Twenty-one nuclei ranging from 24Na to 149Nd have been identified in the meteorite sample. (paper)

  13. HOW DO FIRMS SOURCE EXTERNAL KNOWLEDGE FOR INNOVATION? ANALYSING EFFECTS OF DIFFERENT KNOWLEDGE SOURCING METHODS

    OpenAIRE

    KI H. KANG; JINA KANG

    2009-01-01

    In the era of "open innovation", external knowledge is a very important source for technology innovation. In this paper, we investigate the relationship between external knowledge and performance of technology innovation. The effect of external knowledge on the performance of technology innovation can vary with different external knowledge sourcing methods. We identify three ways of external knowledge sourcing: information transfer from informal network, R&D collaboration and technology acqui...

  14. The MDI Method as a Generalization of Logit, Probit and Hendry Analyses in Marketing.

    Science.gov (United States)

    1980-04-01

    model involves nothing more than fitting a normal distribution function (Hanushek and Jackson (1977)). For a given value of x, the probit model...preference shifts within the soft drink category. --For applications of probit models relevant for marketing, see Hausman and Wise (1978) and Hanushek and...Marketing Research" JMR XIV, Feb. (1977). Hanushek, E.A., and J.E. Jackson, Statistical Methods for Social Scientists. Academic Press, New York (1977

  15. Experiment and analyses on intentional secondary-side depressurization during PWR small break LOCA. Effects of depressurization rate and break area on core liquid level behavior

    International Nuclear Information System (INIS)

    Asaka, Hideaki; Ohtsu, Iwao; Anoda, Yoshinari; Kukita, Yutaka

    1997-01-01

    The effects of the secondary-side depressurization rate and break area on the core liquid level behavior during a PWR small-break LOCA were studied using experimental data from the Large Scale Test Facility (LSTF) and analysis results obtained with a JAERI-modified version of the RELAP5/MOD3 code. The LSTF is a 1/48 volumetrically scaled, full-height integral model of a Westinghouse-type PWR. The code reproduced the thermal-hydraulic responses observed in the experiment for important parameters such as the primary- and secondary-side pressures and the core liquid level behavior. The sensitivity of the minimum core liquid level to the depressurization rate and break area was then studied using the code assessed above. It was found that the core liquid level took a local minimum value for a given break area as a function of the secondary-side depressurization rate. Further efforts are, however, needed to quantitatively define the maximum core temperature as a function of break area and depressurization rate. (author)

  16. Determination of the spatial response of neutron based analysers using a Monte Carlo based method

    International Nuclear Information System (INIS)

    Tickner, James

    2000-01-01

    One of the principal advantages of using thermal neutron capture (TNC, also called prompt gamma neutron activation analysis or PGNAA) or neutron inelastic scattering (NIS) techniques for measuring elemental composition is the high penetrating power of both the incident neutrons and the resultant gamma-rays, which means that large sample volumes can be interrogated. Gauges based on these techniques are widely used in the mineral industry for on-line determination of the composition of bulk samples. However, attenuation of both neutrons and gamma-rays in the sample and geometric (source/detector distance) effects typically result in certain parts of the sample contributing more to the measured composition than others. In turn, this introduces errors in the determination of the composition of inhomogeneous samples. This paper discusses a combined Monte Carlo/analytical method for estimating the spatial response of a neutron gauge. Neutron propagation is handled using a Monte Carlo technique which allows an arbitrarily complex neutron source and gauge geometry to be specified. Gamma-ray production and detection is calculated analytically which leads to a dramatic increase in the efficiency of the method. As an example, the method is used to study ways of reducing the spatial sensitivity of on-belt composition measurements of cement raw meal
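
A hedged sketch of the combined approach described above: the neutron side is handled by Monte Carlo sampling, while gamma-ray escape to the detector is weighted analytically. The 1-D geometry, attenuation coefficients and function name below are illustrative assumptions, not the paper's gauge model:

```python
import numpy as np

def spatial_response(depth_edges, mu_n, mu_g, n_samples=200_000, seed=1):
    """Toy 1-D hybrid: neutrons enter the sample at x = 0 and are captured with
    attenuation coefficient mu_n (Monte Carlo part); the capture gamma must then
    escape back to a detector at x = 0, attenuated with coefficient mu_g
    (analytical weight).  Returns the relative contribution of each depth bin
    to the detected signal."""
    rng = np.random.default_rng(seed)
    # Monte Carlo: capture depths follow the exponential attenuation law
    x = rng.exponential(1.0 / mu_n, n_samples)
    # analytical: probability the capture gamma reaches the detector at x = 0
    w = np.exp(-mu_g * x)
    hist, _ = np.histogram(x, bins=depth_edges, weights=w)
    return hist / hist.sum()

bins = np.linspace(0.0, 30.0, 7)              # depth bins in cm (illustrative)
resp = spatial_response(bins, mu_n=0.1, mu_g=0.05)
print(resp.round(3))                           # near-surface bins dominate
```

The strongly front-weighted response this produces is exactly the inhomogeneity problem the paper's method is designed to quantify and reduce.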

  17. Beam transient analyses of Accelerator Driven Subcritical Reactors based on neutron transport method

    Energy Technology Data Exchange (ETDEWEB)

    He, Mingtao; Wu, Hongchun [School of Nuclear Science and Technology, Xi’an Jiaotong University, Xi’an 710049, Shaanxi (China); Zheng, Youqi, E-mail: yqzheng@mail.xjtu.edu.cn [School of Nuclear Science and Technology, Xi’an Jiaotong University, Xi’an 710049, Shaanxi (China); Wang, Kunpeng [Nuclear and Radiation Safety Center, PO Box 8088, Beijing 100082 (China); Li, Xunzhao; Zhou, Shengcheng [School of Nuclear Science and Technology, Xi’an Jiaotong University, Xi’an 710049, Shaanxi (China)

    2015-12-15

    Highlights: • A transport-based kinetics code for Accelerator Driven Subcritical Reactors is developed. • The performance of different kinetics methods adapted to the ADSR is investigated. • The impacts of neutronic parameters deteriorating with fuel depletion are investigated. - Abstract: The Accelerator Driven Subcritical Reactor (ADSR) is almost entirely external-source dominated, since there is no additional reactivity control mechanism in most designs. This paper focuses on beam-induced transients, studied with an in-house developed dynamic analysis code. The performance of different kinetics methods adapted to the ADSR is investigated, including the point kinetics approximation and space–time kinetics methods. Then, the transient responses to beam trip and beam overpower are calculated and analyzed for an ADSR design dedicated to minor actinide transmutation. The impacts of some safety-related neutronics parameters that deteriorate with fuel depletion are also investigated. The results show that the power distribution varying with burnup leads to large differences in temperature responses during transients, while the impacts of kinetic parameters and feedback coefficients are less obvious. Classification: Core physics.
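
The point kinetics approximation mentioned above, extended with an external source term S, is the simplest model of a beam trip: setting S to zero makes the power drop promptly toward the level sustained by delayed neutrons, a factor of roughly beta/(beta - rho). A minimal forward-Euler sketch with illustrative one-group parameters (not the paper's code or data):

```python
# One-group delayed-neutron point kinetics with an external source S, a minimal
# sketch of an ADSR beam trip; all parameter values are illustrative assumptions.
beta, lam, Lam = 0.0065, 0.08, 1.0e-5  # delayed fraction, precursor decay (1/s), generation time (s)
rho = -0.03                             # subcritical reactivity

def derivs(n, c, S):
    dn = (rho - beta) / Lam * n + lam * c + S
    dc = beta / Lam * n - lam * c
    return dn, dc

# source-driven steady state: rho*n/Lam + S = 0  =>  n0 = -S*Lam/rho
S0 = 1.0
n0 = -S0 * Lam / rho
n, c = n0, beta * n0 / (lam * Lam)

dt = 1.0e-5
for _ in range(5000):                   # 0.05 s after the beam trip (S = 0)
    dn, dc = derivs(n, c, 0.0)
    n, c = n + dt * dn, c + dt * dc

ratio = n / n0                          # settles near beta/(beta - rho) ~ 0.18
print(round(ratio, 3))
```

The near-instantaneous drop to roughly 18% of nominal power is why beam trips are the dominant transient concern for subcritical, source-driven cores.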

  18. Combustion water purification techniques influence on OBT analysing using liquid scintillation counting method

    Energy Technology Data Exchange (ETDEWEB)

    Varlam, C.; Vagner, I.; Faurescu, I.; Faurescu, D. [National Institute for Cryogenics and Isotopic Technologies, Valcea (Romania)

    2015-03-15

    In order to determine organically bound tritium (OBT) in environmental samples, the samples must be converted into water measurable by liquid scintillation counting (LSC). For this purpose we conducted experiments to determine the OBT level of a grass sample collected from an uncontaminated area. The grass sample was combusted in a Parr bomb. However, the usual interfering phenomena were identified: color or chemical quench, chemiluminescence, overlap of other radionuclides present as impurities over the tritium spectrum ({sup 14}C from organic compounds, {sup 36}Cl as chloride and free chlorine, {sup 40}K as potassium cations) and emulsion separation. Purification of the combustion water before scintillation counting therefore appeared to be essential. Five purification methods were tested: distillation with chemical treatment (Na{sub 2}O{sub 2} and KMnO{sub 4}), lyophilization, chemical treatment (Na{sub 2}O{sub 2} and KMnO{sub 4}) followed by lyophilization, azeotropic distillation with toluene, and treatment with a volcanic tuff followed by lyophilization. After the purification step each sample was measured, and the measured OBT concentration, together with physico-chemical analysis of the water, revealed that the most efficient purification method for the combustion water was chemical treatment followed by lyophilization.

  19. Linear time delay methods and stability analyses of the human spine. Effects of neuromuscular reflex response.

    Science.gov (United States)

    Franklin, Timothy C; Granata, Kevin P; Madigan, Michael L; Hendricks, Scott L

    2008-08-01

    Linear stability methods were applied to a biomechanical model of the human musculoskeletal spine to investigate effects of reflex gain and reflex delay on stability. Equations of motion represented a dynamic 18 degrees-of-freedom rigid-body model with time-delayed reflexes. Optimal muscle activation levels were identified by minimizing metabolic power with the constraints of equilibrium and stability at zero reflex time delay. Muscle activation levels and associated muscle forces were used to find the delay margin, i.e., the maximum reflex delay for which the system was stable. Results demonstrated that the stiffness due to antagonistic co-contraction necessary for stability declined with increased proportional reflex gain. Reflex delay limited the maximum acceptable proportional reflex gain, i.e., a long reflex delay required a smaller maximum reflex gain to avoid instability. As differential reflex gain increased, there was a small increase in acceptable reflex delay. However, differential reflex gain with values near intrinsic damping caused the delay margin to approach zero. Forward-dynamic simulations of the fully nonlinear time-delayed system verified the linear results. The linear methods accurately found the delay margin below which the nonlinear system was asymptotically stable. These methods may aid future investigations into the role of reflexes in musculoskeletal stability.
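
The delay-margin idea can be illustrated on a scalar delayed-feedback system, a drastic simplification of the paper's 18-degree-of-freedom model. For x'(t) = a·x(t) - k·x(t - tau), the margin follows from the imaginary-axis crossing of the characteristic equation s = a - k·exp(-s·tau); the toy example below (ours, not the authors' model) reproduces the qualitative finding that a larger feedback gain shortens the acceptable delay:

```python
import math

def delay_margin(a, k):
    """Delay margin of the scalar delayed-feedback system
        x'(t) = a*x(t) - k*x(t - tau).
    The system is stable at tau = 0 when k > a; stability is lost once tau
    exceeds the margin, found from the imaginary-axis crossing of the
    characteristic equation s = a - k*exp(-s*tau)."""
    if k <= abs(a):
        raise ValueError("need feedback gain k > |a| for a finite delay margin")
    w_c = math.sqrt(k * k - a * a)       # crossing frequency on the imaginary axis
    return math.acos(a / k) / w_c

# larger reflex gain -> shorter acceptable reflex delay, as the study reports
for gain in (2.0, 5.0, 10.0):
    print(f"k = {gain:4.1f}  tau_margin = {delay_margin(1.0, gain):.3f} s")
```

The crossing condition comes from substituting s = i·omega: the real part gives a = k·cos(omega·tau) and the imaginary part gives omega = k·sin(omega·tau), hence omega_c = sqrt(k^2 - a^2) and tau_margin = acos(a/k)/omega_c.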

  20. Molecular analyses of two bacterial sampling methods in ligature-induced periodontitis in rats.

    Science.gov (United States)

    Fontana, Carla Raquel; Grecco, Clovis; Bagnato, Vanderlei Salvador; de Freitas, Laura Marise; Boussios, Constantinos I; Soukos, Nikolaos S

    2018-02-01

    The prevalence profile of periodontal pathogens in dental plaque can vary as a function of the detection method; however, the sampling technique may also play a role in determining dental plaque microbial profiles. We sought to determine the bacterial composition obtained with two sampling methods, one well established and a new one proposed here. In this study, a ligature-induced periodontitis model was used in 30 rats. Twenty-seven days later, the ligatures were removed and microbiological samples were obtained directly from the ligatures as well as from the periodontal pockets using absorbent paper points. Microbial analysis was performed using DNA probes to a panel of 40 periodontal species in the checkerboard assay. The bacterial composition patterns were similar for both sampling methods. However, detection levels for all species were markedly higher for ligatures compared with paper points. Ligature samples provided higher bacterial counts than paper points, suggesting that the technique for induction of periodontitis could also be applied for sampling in rats. Our findings may be helpful in designing studies of induced periodontal disease-associated microbiota.

  1. Combustion water purification techniques influence on OBT analysing using liquid scintillation counting method

    International Nuclear Information System (INIS)

    Varlam, C.; Vagner, I.; Faurescu, I.; Faurescu, D.

    2015-01-01

    In order to determine organically bound tritium (OBT) in environmental samples, the samples must be converted into water measurable by liquid scintillation counting (LSC). For this purpose we conducted experiments to determine the OBT level of a grass sample collected from an uncontaminated area. The grass sample was combusted in a Parr bomb. However, the usual interfering phenomena were identified: color or chemical quench, chemiluminescence, overlap of other radionuclides present as impurities over the tritium spectrum (14C from organic compounds, 36Cl as chloride and free chlorine, 40K as potassium cations) and emulsion separation. Purification of the combustion water before scintillation counting therefore appeared to be essential. Five purification methods were tested: distillation with chemical treatment (Na2O2 and KMnO4), lyophilization, chemical treatment (Na2O2 and KMnO4) followed by lyophilization, azeotropic distillation with toluene, and treatment with a volcanic tuff followed by lyophilization. After the purification step each sample was measured, and the measured OBT concentration, together with physico-chemical analysis of the water, revealed that the most efficient purification method for the combustion water was chemical treatment followed by lyophilization

  2. Magnetotomography—a new method for analysing fuel cell performance and quality

    Science.gov (United States)

    Hauer, Karl-Heinz; Potthast, Roland; Wüster, Thorsten; Stolten, Detlef

    Magnetotomography is a new method for the measurement and analysis of the current density distribution of fuel cells. The method is based on the measurement of the magnetic flux surrounding the fuel cell stack caused by the current inside the stack. As it is non-invasive, magnetotomography overcomes the shortcomings of traditional methods for the determination of current density in fuel cells [J. Stumper, S.A. Campell, D.P. Wilkinson, M.C. Johnson, M. Davis, In situ methods for the determination of current distributions in PEM fuel cells, Electrochem. Acta 43 (1998) 3773; S.J.C. Cleghorn, C.R. Derouin, M.S. Wilson, S. Gottesfeld, A printed circuit board approach to measuring current distribution in a fuel cell, J. Appl. Electrochem. 28 (1998) 663; Ch. Wieser, A. Helmbold, E. Gülzow, A new technique for two-dimensional current distribution measurements in electro-chemical cells, J. Appl. Electrochem. 30 (2000) 803; Grinzinger, Methoden zur Ortsaufgelösten Strommessung in Polymer Elektrolyt Brennstoffzellen, Diploma thesis, TU-München, 2003; Y.-G. Yoon, W.-Y. Lee, T.-H. Yang, G.-G. Park, C.-S. Kim, Current distribution in a single cell of PEMFC, J. Power Sources 118 (2003) 193-199; M.M. Mench, C.Y. Wang, An in situ method for determination of current distribution in PEM fuel cells applied to a direct methanol fuel cell, J. Electrochem. Soc. 150 (2003) A79-A85; S. Schönbauer, T. Kaz, H. Sander, E. Gülzow, Segmented bipolar plate for the determination of current distribution in polymer electrolyte fuel cells, in: Proceedings of the Second European PEMFC Forum, vol. 1, Lucerne/Switzerland, 2003, pp. 231-237; G. Bender, S.W. Mahlon, T.A. Zawodzinski, Further refinements in the segmented cell approach to diagnosing performance in polymer electrolyte fuel cells, J. Power Sources 123 (2003) 163-171]. After several years of research a complete prototype system is now available for research on single cells and stacks. This paper describes the basic system (fundamentals

  3. A method for calorimetric analysis in variable conditions heating; Methode d'analyse calorimetrique en regime variable

    Energy Technology Data Exchange (ETDEWEB)

    Berthier, G [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1965-07-01

    By analysing the transient thermal regime obtained by quenching a sample in a furnace maintained at a very high temperature, it is possible to study the thermal diffusivity of certain materials and solid-state structural transformations, from both a qualitative and a quantitative standpoint. For instance, the transformation energy of {alpha}-quartz into {beta}-quartz and the Wigner energy stored within neutron-irradiated beryllium oxide have been measured. (author)

  4. Systematic selection method for probabilistic fire analyses. Final report; Systematisches Auswahlverfahren fuer probabilistische Brandanalysen. Abschlussbericht

    Energy Technology Data Exchange (ETDEWEB)

    Tuerschmann, M.; Linden, J. von; Roewekamp, M.

    2005-07-01

    A PSA for the plant-internal fire hazard is carried out in several steps. The first step is a selection process ('screening'). The screening can be performed qualitatively, quantitatively, or by means of a combined qualitative and quantitative approach as developed by GRS in the frame of a research project. During the revision of the PSA guidance documents it turned out that the GRS screening approach needs further automation and development, in particular regarding the systems-specific part, in order to reduce the influence of expert judgment as far as possible. Therefore, the combined approach has been further improved. The improved screening approach outlined in this report provides estimated values for damage frequencies. By this means, it is possible to identify relevant fire scenarios and to apply the cut-off criteria defined in the PSA for fire analyses. The approach corresponds as far as possible to the existing PSA models. The event and fault trees of these models describe in detail the correlation between component failures and the occurrence of damage states. The screening process combines fire- and compartment-specific information for estimating fire-induced component failures with the PSA models for determining damage frequencies. The screening process is carried out in three steps, starting with a largely automated fire-specific screening based on a comprehensive plant-specific information collection. In a second step, qualitative PSA-specific information is considered. The rooms and/or plant areas not screened out, ranked by fire-specific ranking criteria, are then subject to a quantitative systems-specific selection. (orig.)

  5. Scientific Drilling of Impact Craters - Well Logging and Core Analyses Using Magnetic Methods (Invited)

    Science.gov (United States)

    Fucugauchi, J. U.; Perez-Cruz, L. L.; Velasco-Villarreal, M.

    2013-12-01

    Drilling projects of impact structures provide data on the structure and stratigraphy of target, impact and post-impact lithologies, providing insight on the impact dynamics and cratering. Studies have successfully included magnetic well logging and analyses in core and cuttings, directed to characterize the subsurface stratigraphy and structure at depth. There are 170-180 impact craters documented in the terrestrial record, which is a small proportion compared to expectations derived from what is observed on the Moon, Mars and other bodies of the solar system. Knowledge of the internal 3-D deep structure of craters, critical for understanding impacts and crater formation, can best be studied by geophysics and drilling. On Earth, few craters have yet been investigated by drilling. Craters have been drilled as part of industry surveys and/or academic projects, including notably Chicxulub, Sudbury, Ries, Vredefort, Manson and many other craters. As part of the Continental ICDP program, drilling projects have been conducted on the Chicxulub, Bosumtwi, Chesapeake, Ries and El gygytgyn craters. Inclusion of continuous core recovery expanded the range of paleomagnetic and rock magnetic applications, with direct core laboratory measurements, which are part of the tools available in the ocean and continental drilling programs. Drilling studies are here briefly reviewed, with emphasis on the Chicxulub crater formed by an asteroid impact 66 Ma ago at the Cretaceous/Paleogene boundary. Chicxulub crater has no surface expression, covered by a kilometer of Cenozoic sediments, thus making drilling an essential tool. As part of our studies we have drilled eleven wells with continuous core recovery. Magnetic susceptibility logging, magnetostratigraphic, rock magnetic and fabric studies have been carried out and results used for lateral correlation, dating, formation evaluation, azimuthal core orientation and physical property contrasts. 
Contributions of magnetic studies on impact

  6. Performance Analyses of Counter-Flow Closed Wet Cooling Towers Based on a Simplified Calculation Method

    Directory of Open Access Journals (Sweden)

    Xiaoqing Wei

    2017-02-01

    As one of the most widely used units in water cooling systems, closed wet cooling towers (CWCTs) have two typical counter-flow constructions, in which the spray water flows from the top to the bottom, and the moist air and cooling water flow in the opposite direction vertically (parallel) or horizontally (cross), respectively. This study aims to present a simplified calculation method for conveniently and accurately analyzing the thermal performance of the two types of counter-flow CWCTs, viz. the parallel counter-flow CWCT (PCFCWCT) and the cross counter-flow CWCT (CCFCWCT). A simplified cooling capacity model that includes just two characteristic parameters is developed. The Levenberg–Marquardt method is employed to determine the model parameters by curve fitting of experimental data. Based on the proposed model, the predicted outlet temperatures of the process water are compared with the measurements of a PCFCWCT and a CCFCWCT, respectively, reported in the literature. The results indicate that the predicted values agree well with the experimental data in previous studies. The maximum absolute errors in predicting the process water outlet temperatures are 0.20 and 0.24 °C for the PCFCWCT and CCFCWCT, respectively. These results indicate that the simplified method is reliable for performance prediction of counter-flow CWCTs. Although the flow patterns of the two towers are different, the variation trends of thermal performance are similar to each other under various operating conditions. The inlet air wet-bulb temperature, inlet cooling water temperature, air flow rate, and cooling water flow rate are crucial for determining the cooling capacity of a counter-flow CWCT, while the cooling tower effectiveness is mainly determined by the flow rates of air and cooling water. Compared with the CCFCWCT, the PCFCWCT is much more applicable in a large-scale cooling water system, and the superiority would be amplified when the scale of water
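
The abstract does not give the two-parameter cooling-capacity model itself, so the sketch below fits a generic two-parameter stand-in (q = c1·dT^c2, our assumption) to synthetic data with SciPy's Levenberg–Marquardt solver, just to illustrate the parameter-identification step:

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic "measurements": cooling capacity q versus the inlet-water-to-wet-bulb
# temperature difference dT, generated from known parameters plus small noise.
rng = np.random.default_rng(3)
dT = np.linspace(4.0, 16.0, 25)
q_meas = 1.8 * dT**1.2 + rng.normal(0.0, 0.2, dT.size)

def residuals(p):
    c1, c2 = p
    return c1 * dT**c2 - q_meas

# method="lm" selects the Levenberg-Marquardt algorithm named in the abstract
fit = least_squares(residuals, x0=[1.0, 1.0], method="lm")
c1, c2 = fit.x
print(f"c1 = {c1:.2f}, c2 = {c2:.2f}")  # should recover roughly 1.80 and 1.20
```

With the two characteristic parameters identified this way, the model can predict process-water outlet temperatures under other operating conditions, which is the comparison the paper reports.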

  7. A method for analysing incidents due to human errors on nuclear installations

    International Nuclear Information System (INIS)

    Griffon, M.

    1980-01-01

    This paper deals with the development of a methodology adapted to the detailed analysis of incidents considered to be due to human errors. An identification of the human errors and a search for their possibly multiple causes is then needed. The causes are categorized into eight classes: education and training of personnel, installation design, work organization, time and work duration, physical environment, social environment, history of the plant, and performance of the operator. The method is illustrated by the analysis of a handling incident caused by multiple human errors. (author)

  8. Steady and transient analyses of natural convection in a horizontal porous annulus with Galerkin method

    International Nuclear Information System (INIS)

    Rao, Y.F.; Fukuda, K.; Hasegawa, S.

    1986-01-01

    A steady and transient analytical investigation using the Galerkin method has been performed on natural convection in a horizontal porous annulus heated from the inner surface. Three families of convergent solutions, appearing one after another with increasing Rayleigh–Darcy (RaDa) numbers, were obtained corresponding to different initial conditions. Although the flow structures of the two branching solutions are quite different, there exists a critical RaDa number at which their overall heat transfer rates have the same value. The bifurcation point was determined numerically and coincided very well with that from experimental observation. The solutions in which higher-wavenumber modes are dominant agree better with experimental overall heat transfer data.

  9. Evaluation of Two Surface Sampling Methods for Microbiological and Chemical Analyses To Assess the Presence of Biofilms in Food Companies.

    Science.gov (United States)

    Maes, Sharon; Huu, Son Nguyen; Heyndrickx, Marc; Weyenberg, Stephanie van; Steenackers, Hans; Verplaetse, Alex; Vackier, Thijs; Sampers, Imca; Raes, Katleen; Reu, Koen De

    2017-12-01

    Biofilms are an important source of contamination in food companies, yet the composition of biofilms in practice is still mostly unknown. The chemical and microbiological characterization of surface samples taken after cleaning and disinfection is very important to distinguish free-living bacteria from the attached bacteria in biofilms. In this study, sampling methods that are potentially useful for both chemical and microbiological analyses of surface samples were evaluated. In the manufacturing facilities of eight Belgian food companies, surfaces were sampled after cleaning and disinfection using two sampling methods: the scraper-flocked swab method and the sponge stick method. Microbiological and chemical analyses were performed on these samples to evaluate the suitability of the sampling methods for the quantification of extracellular polymeric substance components and microorganisms originating from biofilms in these facilities. The scraper-flocked swab method was most suitable for chemical analyses of the samples because the material in these swabs did not interfere with determination of the chemical components. For microbiological enumerations, the sponge stick method was slightly but not significantly more effective than the scraper-flocked swab method. In all but one of the facilities, at least 20% of the sampled surfaces had more than 10² CFU/100 cm². Proteins were found in 20% of the chemically analyzed surface samples, and carbohydrates and uronic acids were found in 15 and 8% of the samples, respectively. When chemical and microbiological results were combined, 17% of the sampled surfaces were contaminated with both microorganisms and at least one of the analyzed chemical components; thus, these surfaces were characterized as carrying biofilm. Overall, microbiological contamination in the food industry is highly variable by food sector and even within a facility at various sampling points and sampling times.
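
The combined criterion used above (microbial contamination plus at least one detected EPS component) can be written as a small classifier; the CFU threshold follows the abstract's reporting level of 10² CFU per 100 cm², and the function name is ours:

```python
def carries_biofilm(cfu_per_100cm2, proteins, carbohydrates, uronic_acids):
    """True when a surface meets the study's combined criterion: microbial
    contamination (here taken as > 10**2 CFU/100 cm**2, an assumed threshold)
    plus at least one detected extracellular polymeric substance component."""
    microbial = cfu_per_100cm2 > 100
    chemical = any([proteins, carbohydrates, uronic_acids])
    return microbial and chemical

print(carries_biofilm(5000, proteins=True, carbohydrates=False, uronic_acids=False))   # True
print(carries_biofilm(5000, proteins=False, carbohydrates=False, uronic_acids=False))  # False
```

Requiring both lines of evidence is what separates surfaces carrying true biofilm from those with merely surviving free-living bacteria after cleaning and disinfection.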

  10. An optimized and simplified method for analysing urea and ammonia in freshwater aquaculture systems

    DEFF Research Database (Denmark)

    Larsen, Bodil Katrine; Dalsgaard, Anne Johanne Tang; Pedersen, Per Bovbjerg

    2015-01-01

    This study presents a simple urease method for analysis of ammonia and urea in freshwater aquaculture systems. Urea is hydrolysed into ammonia using urease, followed by analysis of the released ammonia using the salicylate-hypochlorite method. The hydrolysis of urea is performed at room temperature...... and without addition of a buffer. A number of tests were performed on water samples obtained from a commercial rainbow trout farm to determine the optimal urease concentration and time for complete hydrolysis. One mL of water sample was spiked with 1.3 mL urea at three different concentrations: 50 µg L⁻¹, 100...... µg L⁻¹ and 200 µg L⁻¹ urea-N. In addition, five concentrations of urease were tested, ranging from 0.1 U mL⁻¹ to 4 U mL⁻¹. Samples were hydrolysed for various time periods ranging from 5 to 120 min. A urease concentration of 0.4 U mL⁻¹ and a hydrolysis period of 120 min gave the best results, with 99...
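
Urease assays of this kind quantify urea indirectly: ammonia is measured with and without enzymatic hydrolysis, and urea-N is taken as the difference. A minimal sketch (the difference step is our reading of the method; the numbers are hypothetical):

```python
def urea_n_by_difference(ammonia_n_after_urease, free_ammonia_n):
    """Urea-N (ug/L) by difference: total ammonia-N measured after urease
    hydrolysis minus the free ammonia-N measured without the enzyme."""
    return ammonia_n_after_urease - free_ammonia_n

# hypothetical readings from the salicylate-hypochlorite assay (ug/L)
print(urea_n_by_difference(250.0, 150.0))  # 100.0
```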

  11. A new method for odour impact assessment based on spatial and temporal analyses of community response

    International Nuclear Information System (INIS)

    Henshaw, P.; Nicell, J.; Sikdar, A.

    2002-01-01

    Odorous emissions from stationary sources account for the majority of air pollution complaints to regulatory agencies. Regulators sometimes rely on the nuisance provisions of common law to assess odour impact, which is highly subjective. The other commonly used approach, the dilution-to-threshold principle, assumes that an odour is a problem simply if detected, without regard to the fact that a segment of the population can detect the odour at concentrations below the threshold. The odour impact model (OIM) represents a significant improvement over current methods for quantifying odours by characterizing the dose-response relationship of the odour. Dispersion modelling can be used in conjunction with the OIM to estimate the probability of response in the surrounding vicinity, taking into account local meteorological conditions. The objective of this research is to develop an objective method of assessing the impact of odorous airborne emissions. To this end, several metrics were developed to quantify the impact of an odorous stationary source on the surrounding community. These 'odour impact parameters' are: maximum concentration, maximum probability of response, footprint area, probability-weighted footprint area and the number of people responding to the odour. These impact parameters were calculated for a stationary odour source in Canada. Several remediation scenarios for reducing the odour impact were proposed and their effects on the impact parameters calculated. (author)
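
The five odour impact parameters can be computed from any modelled concentration field. The sketch below uses a log-logistic dose-response curve and a Gaussian-blob concentration field as stand-ins; all parameter values, including the population density, are illustrative assumptions, not the OIM itself:

```python
import numpy as np

def prob_response(c, c50=1.0, k=2.0):
    """Log-logistic dose-response stand-in: fraction of the population
    responding at odour concentration c (odour units); c50 and slope k
    are illustrative parameters, not fitted values."""
    c = np.asarray(c, dtype=float)
    return c**k / (c**k + c50**k)

# toy annual-mean concentration field on a 1 km x 1 km grid of 10 m cells
x = np.linspace(5.0, 995.0, 100)
X, Y = np.meshgrid(x, x)
C = 20.0 * np.exp(-((X - 200.0)**2 + (Y - 500.0)**2) / (2.0 * 120.0**2))

P = prob_response(C)
cell = 10.0 * 10.0                # m^2 per grid cell
density = 2.0e-3                  # persons per m^2 (assumed)

impact = {
    "max_concentration": C.max(),
    "max_probability": P.max(),
    "footprint_m2": cell * np.count_nonzero(P > 0.05),
    "weighted_footprint_m2": cell * P.sum(),
    "people_responding": density * cell * P.sum(),
}
for name, value in impact.items():
    print(f"{name}: {value:.1f}")
```

Comparing these parameters before and after a proposed emission reduction is exactly how the remediation scenarios in the study can be ranked objectively.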

  12. Reliability of an experimental method to analyse the impact point on a golf ball during putting.

    Science.gov (United States)

    Richardson, Ashley K; Mitchell, Andrew C S; Hughes, Gerwyn

    2015-06-01

    This study aimed to examine the reliability of an experimental method identifying the location of the impact point on a golf ball during putting. Forty trials were completed using a mechanical putting robot set to reproduce a putt of 3.2 m, with four different putter-ball combinations. After locating the centre of the dimple pattern (centroid), the following variables were tested: distance of the impact point from the centroid, angle of the impact point from the centroid and distance of the impact point from the centroid derived from the X, Y coordinates. Good to excellent reliability was demonstrated in all impact variables, reflected in very strong relative (ICC = 0.98-1.00) and absolute reliability (SEM% = 0.9-4.3%). The highest SEM% observed was 7% for the angle of the impact point from the centroid. In conclusion, the experimental method was shown to be reliable at locating the centroid location of a golf ball, therefore allowing for the identification of the point of impact with the putter head, and is suitable for use in subsequent studies.
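Deriving the distance and angle of an impact point from centroid X, Y coordinates, as described above, reduces to elementary trigonometry; a minimal sketch (not the authors' code, coordinates invented):

```python
import math

def impact_from_centroid(x, y, cx=0.0, cy=0.0):
    """Return (distance, angle in degrees) of an impact point (x, y)
    relative to the dimple-pattern centroid (cx, cy)."""
    dx, dy = x - cx, y - cy
    return math.hypot(dx, dy), math.degrees(math.atan2(dy, dx))

distance, angle = impact_from_centroid(3.0, 4.0)
print(distance, angle)  # 5.0 53.13010235415598
```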

  13. Alpins and thibos vectorial astigmatism analyses: proposal of a linear regression model between methods

    Directory of Open Access Journals (Sweden)

    Giuliano de Oliveira Freitas

    2013-10-01

    PURPOSE: To determine linear regression models between Alpins descriptive indices and Thibos astigmatic power vectors (APV), assessing the validity and strength of such correlations. METHODS: This case series prospectively assessed 62 eyes of 31 consecutive cataract patients with preoperative corneal astigmatism between 0.75 and 2.50 diopters in both eyes. Patients were randomly assigned to two phacoemulsification groups: one assigned to receive AcrySof® Toric intraocular lens (IOL) in both eyes and another assigned to have AcrySof Natural IOL associated with limbal relaxing incisions, also in both eyes. All patients were reevaluated postoperatively at 6 months, when refractive astigmatism analysis was performed using both Alpins and Thibos methods. The ratio between Thibos postoperative APV and preoperative APV (APVratio) and its linear regression to Alpins percentage of success of astigmatic surgery, percentage of astigmatism corrected and percentage of astigmatism reduction at the intended axis were assessed. RESULTS: A significant negative correlation between the Thibos APVratio and the Alpins percentage of success (%Success) was found (Spearman's ρ=-0.93); the linear regression is given by the following equation: %Success = (-APVratio + 1.00) x 100. CONCLUSION: The linear regression we found between APVratio and %Success permits a validated mathematical inference concerning the overall success of astigmatic surgery.
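The reported regression is simple enough to evaluate directly. A sketch applying the abstract's equation, %Success = (-APVratio + 1.00) x 100, with APVratio taken as postoperative APV over preoperative APV (the numeric values below are illustrative, not from the study):

```python
def percent_success(apv_pre, apv_post):
    """Alpins %Success estimated from the regression reported in the abstract:
    %Success = (-APVratio + 1.00) * 100, where APVratio = APV_post / APV_pre."""
    apv_ratio = apv_post / apv_pre
    return (-apv_ratio + 1.00) * 100.0

# Illustrative: preoperative APV of 1.50 D reduced to 0.30 D postoperatively.
print(percent_success(1.50, 0.30))  # 80.0
```

Note the two limiting cases this form implies: an unchanged APV (ratio 1.0) gives 0% success, and full elimination of the APV (ratio 0.0) gives 100%.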

  14. Sensitivity and uncertainty analyses applied to criticality safety validation, methods development. Volume 1

    International Nuclear Information System (INIS)

    Broadhead, B.L.; Hopper, C.M.; Childs, R.L.; Parks, C.V.

    1999-01-01

    This report presents the application of sensitivity and uncertainty (S/U) analysis methodologies to the code/data validation tasks of a criticality safety computational study. Sensitivity and uncertainty analysis methods were first developed for application to fast reactor studies in the 1970s. This work has revitalized and updated the available S/U computational capabilities such that they can be used as prototypic modules of the SCALE code system, which contains criticality analysis tools currently used by criticality safety practitioners. After complete development, simplified tools are expected to be released for general use. The S/U methods that are presented in this volume are designed to provide a formal means of establishing the range (or area) of applicability for criticality safety data validation studies. The development of parameters that are analogous to the standard trending parameters forms the key to the technique. These parameters are the D parameters, which represent the differences by group of sensitivity profiles, and the ck parameters, which are the correlation coefficients for the calculational uncertainties between systems; each set of parameters gives information relative to the similarity between pairs of selected systems, e.g., a critical experiment and a specific real-world system (the application)
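The abstract does not give the exact definitions of its trending parameters, but the flavour of a D-style parameter (an aggregate of group-wise differences between two sensitivity profiles) can be sketched as below; the functional form, the 1/2 factor, and all names are assumptions for illustration only, not the report's definitions.

```python
def d_parameter(profile_a, profile_b):
    """Hypothetical D-style trending parameter: one half of the sum, over
    energy groups, of the absolute differences between two group-wise
    sensitivity profiles. Identical profiles give 0 (maximal similarity)."""
    return 0.5 * sum(abs(a - b) for a, b in zip(profile_a, profile_b))

# Illustrative group-wise sensitivities for two systems.
print(d_parameter([0.2, 0.5, 0.3], [0.2, 0.5, 0.3]))  # 0.0
```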

  15. An approach of sensitivity and uncertainty analyses methods installation in a safety calculation

    International Nuclear Information System (INIS)

    Pepin, G.; Sallaberry, C.

    2003-01-01

    Simulation of migration in deep geological formations leads to solving convection-diffusion equations in porous media, associated with the computation of hydrogeologic flow. Different time scales (simulation over 1 million years), scales of space, and contrasts of properties in the calculation domain are taken into account. This document deals more particularly with uncertainties in the input data of the model. These uncertainties are taken into account in the overall analysis through uncertainty and sensitivity analysis. ANDRA (French national agency for the management of radioactive wastes) carries out studies on the treatment of input-data uncertainties and their propagation through the safety models, in order to quantify the influence of input-data uncertainties on the various safety indicators selected. ANDRA's approach initially consists of 2 studies undertaken in parallel: - the first is an international review of the choices made by ANDRA's foreign counterparts in carrying out their uncertainty and sensitivity analyses, - the second is a review of the various methods that can be used for sensitivity and uncertainty analysis in the context of ANDRA's safety calculations. These studies are then supplemented by a comparison of the principal methods on a test case which gathers all the specific constraints (physical, numerical and data-processing) of the problem studied by ANDRA

  16. Towards an Industrial Application of Statistical Uncertainty Analysis Methods to Multi-physical Modelling and Safety Analyses

    International Nuclear Information System (INIS)

    Zhang, Jinzhao; Segurado, Jacobo; Schneidesch, Christophe

    2013-01-01

    Since the 1980s, Tractebel Engineering (TE) has been developing and applying a multi-physical modelling and safety analysis capability, based on a code package consisting of the best-estimate 3D neutronics (PANTHER), system thermal-hydraulics (RELAP5), core sub-channel thermal-hydraulics (COBRA-3C), and fuel thermal-mechanics (FRAPCON/FRAPTRAN) codes. A series of methodologies has been developed to perform and to license reactor safety analysis and core reload design, based on the deterministic bounding approach. Following the recent trends in research and development as well as in industrial applications, TE has been working since 2010 towards the application of statistical sensitivity and uncertainty analysis methods to multi-physical modelling and licensing safety analyses. In this paper, the TE multi-physical modelling and safety analysis capability is first described, followed by the proposed TE best estimate plus statistical uncertainty analysis method (BESUAM). The chosen statistical sensitivity and uncertainty analysis methods (non-parametric order statistic method or bootstrap) and tool (DAKOTA) are then presented, followed by some preliminary results of their application to FRAPCON/FRAPTRAN simulation of the OECD RIA fuel rod codes benchmark and RELAP5/MOD3.3 simulation of THTF tests. (authors)

  17. Method to develop data supporting consequence analyses of transporting nuclear materials in the United States

    International Nuclear Information System (INIS)

    Reese, R.T.; Sandoval, R.P.

    1980-01-01

    The Transportation System Safety Evaluation (TSSE) program at Sandia National Laboratories' Transportation Technology Center was initiated to provide the necessary information on source terms for nuclear materials subjected to extreme environments. The techniques for derivation of source terms for the fuel alone have been described, as has the outline for package response. An additional facet of this problem is the development of analytical methods to describe the transport of the released radionuclides from the fuel rods to possible release points. This work is also covered in the TSSE program. Not all the required work will be performed or funded by Sandia; rather, existing work will be sought out and ongoing work will be utilized in an attempt to unify the presentation of data and thus increase its usefulness

  18. Flank wear analysing of high speed end milling for hardened steel D2 using Taguchi Method

    Science.gov (United States)

    Hazza Faizi Al-Hazza, Muataz; Ibrahim, Nur Asmawiyah bt; Adesta, Erry T. Y.; Khan, Ahsan Ali; Abdullah Sidek, Atiah Bt.

    2017-03-01

    One of the main challenges for any manufacturer is how to decrease the machining cost without affecting the final quality of the product. One of the new advanced machining processes in industry is the high speed hard end milling process, which merges three advanced machining processes: high speed milling, hard milling and dry milling. However, one of the most important challenges in this process is to control the flank wear rate. Therefore, the flank wear rate during machining should be analysed in order to determine the best cutting levels that will not affect the final quality of the product. In this research the Taguchi method has been used to investigate the effect of cutting speed, feed rate and depth of cut, and to determine the best levels to minimize the flank wear rate up to a total length of 0.3 mm, based on the ISO standard, to maintain the finishing requirements.
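For a minimize-the-response characteristic like flank wear, Taguchi analysis conventionally scores each factor-level combination with the smaller-the-better signal-to-noise ratio, SN = -10 log10(mean(y^2)). A sketch of that calculation (the wear readings are invented for illustration, not data from the study):

```python
import math

def sn_smaller_is_better(values):
    """Taguchi smaller-the-better signal-to-noise ratio in dB:
    SN = -10 * log10(mean(y^2)). Larger SN means less (and less scattered) wear."""
    return -10.0 * math.log10(sum(v * v for v in values) / len(values))

# Hypothetical flank-wear readings (mm) replicated at one factor-level combination.
print(round(sn_smaller_is_better([0.10, 0.12, 0.11]), 2))  # 19.15
```

The factor levels whose average SN is highest are then selected as the best cutting levels.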

  19. Analysing flow structures around a blade using spectral/hp method and HPIV

    International Nuclear Information System (INIS)

    Stoevesandt, Bernhard; Steigerwald, Christian; Shishkin, Andrei; Wagner, Claus; Peinke, Joachim

    2007-01-01

    A still difficult, yet pressing task for blade manufacturers and turbine producers is the correct prediction of the effects of turbulent winds on the blade. Reynolds-averaged Navier-Stokes (RANS) simulations are a limited tool for calculating these effects. For large eddy simulations (LES), boundary layer calculations are still difficult; therefore the spectral element method seems to be an approach to improve numerical calculations of flow separation. The flow field around an fx79-w151a airfoil has been calculated with the spectral element code NεκTαr using a direct numerical simulation (DNS) solver. In a first step, laminar inflow onto the airfoil at an angle of attack of α = 12° and a Reynolds number of Re = 33,000 was simulated using the 2D version of the code. The flow pattern was compared to measurements using holographic particle image velocimetry (HPIV) in a wind tunnel

  20. Cancer risks, risk-cost-benefit analyses, and the scientific method

    International Nuclear Information System (INIS)

    Seiler, F.A.; Alvarez, J.L.

    1995-01-01

    Two main changes in risk analysis are increasingly beginning to influence the manner in which, in the perception of scientists, low-dose modeling of radiation carcinogenesis is supposed to be done. In the past, efforts to model radiation risks have been carried out under the banner of scientific endeavors. On closer inspection, however, it has become obvious that these efforts were not guided by the scientific method and that a change in approach is needed. We realize increasingly that risk analysis is not done in a vacuum and that any action taken due to the result of the analysis not only has a benefit in the form of a risk reduction but leads inevitably to an increase in cost and an increase in the risks of persons effecting the benefit. Thus, a risk-cost-benefit analysis should be done and show a clear-cut net benefit before a remedial action is taken

  1. Comparative exergy analyses of Jatropha curcas oil extraction methods: Solvent and mechanical extraction processes

    International Nuclear Information System (INIS)

    Ofori-Boateng, Cynthia; Keat Teong, Lee; JitKang, Lim

    2012-01-01

    Highlights: ► Exergy analysis detects locations of resource degradation within a process. ► Solvent extraction is six times more exergetically destructive than mechanical extraction. ► Mechanical extraction of jatropha oil is 95.93% exergetically efficient. ► Solvent extraction of jatropha oil is 79.35% exergetically efficient. ► Exergy analysis of oil extraction processes allows room for improvements. - Abstract: Vegetable oil extraction processes are found to be energy intensive. Thermodynamically, any energy intensive process is considered to degrade the most useful part of energy that is available to produce work. This study uses literature values to compare the efficiencies and the degradation of the useful energy within Jatropha curcas oil during oil extraction, taking into account solvent and mechanical extraction methods. According to this study, J. curcas seed, on processing into J. curcas oil, is upgraded by mechanical extraction but degraded by solvent extraction. For mechanical extraction, the total internal exergy destroyed is 3006 MJ, which is about six times less than that for solvent extraction (18,072 MJ) per 1 ton of J. curcas oil produced. The pretreatment processes of the J. curcas seeds recorded a total internal exergy destruction of 5768 MJ, accounting for 24% of the total internal exergy destroyed for solvent extraction processes and 66% for mechanical extraction. The exergetic efficiencies recorded are 79.35% and 95.93% for solvent and mechanical extraction processes of J. curcas oil, respectively. Hence, mechanical oil extraction processes are more exergetically efficient than solvent extraction processes. Possible improvement methods are also elaborated in this study.
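As a generic illustration of the efficiency figures quoted above, exergetic efficiency can be expressed as the useful exergy leaving a process over the exergy supplied to it. This is a simplified textbook definition with invented numbers, not the paper's full exergy balance:

```python
def exergetic_efficiency(exergy_out_mj, exergy_in_mj):
    """Exergetic efficiency (%) = useful exergy out / exergy supplied * 100.
    Simplified form; a full balance would itemize utilities, pretreatment
    streams and internal exergy destruction."""
    return 100.0 * exergy_out_mj / exergy_in_mj

# Illustrative values only (MJ per ton of oil produced).
print(exergetic_efficiency(950.0, 1000.0))  # 95.0
```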

  2. Problems of method of technology assessment. A methodological analysis; Methodenprobleme des Technology Assessment; Eine methodologische Analyse

    Energy Technology Data Exchange (ETDEWEB)

    Zimmermann, V

    1993-03-01

    The study undertakes to analyse the theoretical and methodological structure of Technology Assessment (TA). It is based on a survey of TA studies, which provided an important basis for theoretically sound statements on methodological aspects of TA. It was established that the main basic theoretical problems of TA lie in the field of dealing with complexity. This is also apparent in the constitution of problems, the most elementary and central approach of TA. Scientifically founded constitution of problems and the corresponding construction of models call for interdisciplinary scientific work. Interdisciplinarity in the TA research process is achieved at the level of virtual networks, these networks being composed of individuals suited to teamwork. The emerging network structures have an objective-organizational and an ideational basis. The objective-organizational basis is mainly the result of team composition and the external affiliations of the team members. The ideational basis of the virtual network is represented by the team members' mode of thinking, which is individually located at a multidisciplinary level. The theoretical 'skeleton' of the TA knowledge system, which is represented by process-knowledge-based linkage structures, can be generated and also processed in connection with the knowledge on types of problems, areas of analysis and procedures to deal with complexity. Within this process, disciplinary knowledge is a necessary but not a sufficient condition. Metatheoretical and metadisciplinary knowledge, and the correspondingly processed complexity of models, are the basis for the necessary methodological awareness that allows TA to become designable as a research procedure. (orig./HP)

  3. Calculation of primary and secondary dose in proton therapy of brain tumors using Monte Carlo method

    International Nuclear Information System (INIS)

    Moghbel Esfahani, F.; Alamatsaz, M.; Karimian, A.

    2012-01-01

    High-energy beams of protons offer significant advantages for the treatment of deep-seated local tumors. Their physical depth-dose distribution in tissue is characterized by a small entrance dose and a distinct maximum - the Bragg peak - near the end of range, with a sharp falloff at the distal edge. Therefore, research must be done to investigate the possible negative and positive effects of using proton therapy as a treatment modality. In proton therapy, protons do account for the vast majority of dose. However, when protons travel through matter, secondary particles are created by the interactions of protons and matter en route to and within the patient. It is believed that secondary dose can lead to secondary cancer, especially in pediatric cases. Therefore, the focus of this work is determining both primary and secondary dose. Dose calculations were performed by MCNPX in tumoral and healthy parts of the brain. The brain tumor has a 10 mm diameter and is located 16 cm under the skin surface. The brain was simulated by a cylindrical water phantom with dimensions of 19 x 19 cm (length x diameter), with a 0.5 cm thickness of plexiglass (C4H6O2). Beam characteristics were then investigated to ensure the accuracy of the model. Simulations were initially validated against packages such as SRIM/TRIM. Dose calculations were performed using different configurations to evaluate depth-dose profiles and 2D dose distributions. The results of the simulation show that the best proton energy interval, to cover the brain tumor completely, is from 152 to 154 MeV. (authors)

  4. Analyses of Crime Patterns in NIBRS Data Based on a Novel Graph Theory Clustering Method: Virginia as a Case Study

    Directory of Open Access Journals (Sweden)

    Peixin Zhao

    2014-01-01

    This paper suggests a novel clustering method for analyzing the National Incident-Based Reporting System (NIBRS) data, which includes the determination of correlation of different crime types, the development of a likelihood index for crimes to occur in a jurisdiction, and the clustering of jurisdictions based on crime type. The method was tested by using the 2005 assault data from 121 jurisdictions in Virginia as a test case. The analyses of these data show that some different crime types are correlated and some different crime parameters are correlated with different crime types. The analyses also show that certain jurisdictions within Virginia share certain crime patterns. This information assists with constructing a pattern for a specific crime type and can be used to determine whether a jurisdiction may be more likely to see this type of crime occur in their area.
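The correlation step can be illustrated with a plain Pearson coefficient computed between two crime-type count vectors across jurisdictions. This is a sketch of the general idea, not the paper's clustering code, and the counts are hypothetical:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-jurisdiction counts of two crime types.
print(pearson([12, 30, 45, 60], [25, 62, 88, 121]))
```

Pairs of crime types with coefficients near 1 would then be treated as co-occurring when building jurisdiction clusters.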

  5. A method for multiple sequential analyses of macrophage functions using a small single cell sample

    Directory of Open Access Journals (Sweden)

    F.R.F. Nascimento

    2003-09-01

    Microbial pathogens such as bacillus Calmette-Guérin (BCG) induce the activation of macrophages. Activated macrophages can be characterized by the increased production of reactive oxygen and nitrogen metabolites, generated via NADPH oxidase and inducible nitric oxide synthase, respectively, and by the increased expression of major histocompatibility complex class II molecules (MHC II). Multiple microassays have been developed to measure these parameters. Usually each assay requires 2-5 × 10⁵ cells per well. In some experimental conditions the number of cells is the limiting factor for the phenotypic characterization of macrophages. Here we describe a method whereby this limitation can be circumvented. Using a single 96-well microassay and a very small number of peritoneal cells obtained from C3H/HePas mice, containing as few as ≤2 × 10⁵ macrophages per well, we determined sequentially the oxidative burst (H2O2), nitric oxide production and MHC II (IAk) expression of BCG-activated macrophages. More specifically, with 100 µl of cell suspension it was possible to quantify H2O2 release and nitric oxide production after 1 and 48 h, respectively, and IAk expression after 48 h of cell culture. In addition, this microassay is easy to perform, highly reproducible and more economical.

  6. Progress in the methods for analyses and measurements of environmental radionuclides

    International Nuclear Information System (INIS)

    1984-01-01

    The tenth seminar on environment of the National Institute of Radiological Sciences was held in Chiba on December 9 and 10, 1982, under the joint auspices of the Japan Health Physics Society. The recent progress of measuring techniques for environmental radioactive substances is remarkable. The Japanese data on environmental radiation presented to the UN Scientific Committee on the Effects of Atomic Radiation have obtained very high esteem because the data have been reliable due to the progress of measuring techniques. However, this field is in steady progress and changes rapidly; therefore, this seminar was planned. In this report, the history of the analysis and measurement of environmental radioactivity, the methods of sampling and pretreatment for such environmental specimens as gaseous radionuclides, atmospheric floating dust, soil, agricultural products, sea water and sea bottom sediment, marine life, foods and living bodies, the progress of chemical separation processes, the automation of analysis and measurement, the progress of the analysis of low-level nuclides with long half-lives, the manual for analysis and measurement, and the quality of the analysis and measurement and its assurance are described. (Kako, I.)

  7. A model for asymmetric ballooning and analyses of ballooning behaviour of single rods with probabilistic methods

    International Nuclear Information System (INIS)

    Keusenhoff, J.G.; Schubert, J.D.; Chakraborty, A.K.

    1985-01-01

    Plastic deformation behaviour of Zircaloy cladding has been extensively examined in the past and can be described best by a model for asymmetric deformation. Slight displacement between the pellet and cladding will always exist and this will lead to the formation of azimuthal temperature differences. The ballooning process is strongly temperature dependent and, as a result of the built up temperature differences, differing deformation behaviours along the circumference of the cladding result. The calculated ballooning of cladding is mainly influenced by its temperature, the applied burst criterion and the parameters used in the deformation model. All these influencing parameters possess uncertainties. In order to quantify these uncertainties and to estimate distribution functions of important parameters such as temperature and deformation the response surface method was applied. For a hot rod the calculated standard deviation of cladding temperature amounts to 50 K. From this high value the large influence of the external cooling conditions on the deformation and burst behaviour of cladding can be estimated. In an additional statistical examination the parameters of deformation and burst models have been included and their influence on the deformation of the rod has been studied. (author)

  8. Methods of Identifying, Collecting and Analysing Accelerants in Arson Fires in the Kingdom of Saudi Arabia

    Directory of Open Access Journals (Sweden)

    Abdulrhman M. Dhabbah

    2015-12-01

    If there is a suspicion of arson, analysis of fire debris and identification of potential accelerants is considered to be one of the most essential examinations of the investigation. The existence of any traces of potential accelerants in a sample taken from the fire scene is crucial in determining whether the fire was started deliberately or not. This study is divided into four parts: the first part describes the most important ignition accelerants which are used in arson fires in Saudi Arabia. The second part is devoted to determining the methods that are used to collect and store trace evidence from fire scenes in Saudi Arabia, if there is a suspicion that accelerants have been used to ignite the fire. The most important techniques used in the extraction and analysis of ignitable liquid residue (ILR) in arson cases are presented in the third section. Finally, the fourth part discusses the problems and difficulties which both experts and employees in The General Department of Forensic Evidence in Saudi Arabia face when collecting and sampling traces, as well as some recommendations to address these issues. The results obtained from this study indicate that the most common accelerant used to start fires is gasoline, specifically ‘Octane 91’, followed by kerosene, thereafter diesel and finally paint thinner. Experts also agree on the difficulty of obtaining evidence from this type of crime scene, especially after the fire has been extinguished and the scene is released for investigation by the Civil Defense. They also agree that the best technique for extracting and analyzing ignitable liquid residue (ILR) in the solid phase is Gas Chromatography coupled with Headspace (GC-Headspace). For liquid samples, either Gas Chromatography coupled with Mass Spectroscopy (GC-MS) or Fourier transform infrared (FT-IR) spectroscopy can be used.

  9. Methods and results for stress analyses on 14-ton, thin-wall depleted UF6 cylinders

    International Nuclear Information System (INIS)

    Kirkpatrick, J.R.; Chung, C.K.; Frazier, J.L.; Kelley, D.K.

    1996-10-01

    Uranium enrichment operations at the three US gaseous diffusion plants produce depleted uranium hexafluoride (DUF6) as a residual product. At the present time, the inventory of DUF6 in this country is more than half a million tons. The inventory of DUF6 is contained in metal storage cylinders, most of which are located at the gaseous diffusion plants. The principal objective of the project is to ensure the integrity of the cylinders to prevent causing an environmental hazard by releasing the contents of the cylinders into the atmosphere. Another objective is to maintain the cylinders in such a manner that the DUF6 may eventually be converted to a less hazardous material for final disposition. An important task in the DUF6 cylinders management project is determining how much corrosion of the walls can be tolerated before the cylinders are in danger of being damaged during routine handling and shipping operations. Another task is determining how to handle cylinders that have already been damaged in a manner that will minimize the chance that a breach will occur or that the size of an existing breach will be significantly increased. A number of finite element stress analysis (FESA) calculations have been done to analyze the stresses for three conditions: (1) while the cylinder is being lifted, (2) when a cylinder is resting on two cylinders under it in the customary two-tier stacking array, and (3) when a cylinder is resting on its chocks on the ground. Various documents describe some of the results and discuss some of the methods whereby they have been obtained. The objective of the present report is to document as many of the FESA cases done at Oak Ridge for 14-ton thin-wall cylinders as possible, giving results and a description of the calculations in some detail

  10. A comparative study of conventional and supercritical fluid extraction methods for the recovery of secondary metabolites from Syzygium campanulatum Korth

    Science.gov (United States)

    Memon, Abdul Hakeem; Hamil, Mohammad Shahrul Ridzuan; Laghari, Madeeha; Rithwan, Fahim; Zhari, Salman; Saeed, Mohammed Ali Ahmed; Ismail, Zhari; Majid, Amin Malik Shah Abdul

    2016-01-01

    Syzygium campanulatum Korth is a plant, which is a rich source of secondary metabolites (especially flavanones, chalcone, and triterpenoids). In our present study, three conventional solvent extraction (CSE) techniques and supercritical fluid extraction (SFE) techniques were performed to achieve a maximum recovery of two flavanones, chalcone, and two triterpenoids from S. campanulatum leaves. Furthermore, a Box-Behnken design was constructed for the SFE technique using pressure, temperature, and particle size as independent variables, and yields of crude extract, individual and total secondary metabolites as the dependent variables. In the CSE procedure, twenty extracts were produced using ten different solvents and three techniques (maceration, soxhletion, and reflux). An enriched extract of five secondary metabolites was collected using n-hexane:methanol (1:1) soxhletion. Using food-grade ethanol as a modifier, the SFE methods produced a higher recovery (25.5%‒84.9%) of selected secondary metabolites as compared to the CSE techniques (0.92%‒66.00%). PMID:27604860

  11. A comparative study of conventional and supercritical fluid extraction methods for the recovery of secondary metabolites from Syzygium campanulatum Korth.

    Science.gov (United States)

    Memon, Abdul Hakeem; Hamil, Mohammad Shahrul Ridzuan; Laghari, Madeeha; Rithwan, Fahim; Zhari, Salman; Saeed, Mohammed Ali Ahmed; Ismail, Zhari; Majid, Amin Malik Shah Abdul

    2016-09-01

    Syzygium campanulatum Korth is a plant, which is a rich source of secondary metabolites (especially flavanones, chalcone, and triterpenoids). In our present study, three conventional solvent extraction (CSE) techniques and supercritical fluid extraction (SFE) techniques were performed to achieve a maximum recovery of two flavanones, chalcone, and two triterpenoids from S. campanulatum leaves. Furthermore, a Box-Behnken design was constructed for the SFE technique using pressure, temperature, and particle size as independent variables, and yields of crude extract, individual and total secondary metabolites as the dependent variables. In the CSE procedure, twenty extracts were produced using ten different solvents and three techniques (maceration, soxhletion, and reflux). An enriched extract of five secondary metabolites was collected using n-hexane:methanol (1:1) soxhletion. Using food-grade ethanol as a modifier, the SFE methods produced a higher recovery (25.5%‒84.9%) of selected secondary metabolites as compared to the CSE techniques (0.92%‒66.00%).
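A three-factor Box-Behnken design like the one described above (pressure, temperature, particle size) consists of the twelve edge-midpoint runs, where two factors sit at their ±1 coded levels and the third at 0, plus centre replicates. A minimal sketch in coded units (the number of centre runs is an assumption, not taken from the study):

```python
from itertools import combinations, product

def box_behnken_3(center_runs=3):
    """Coded Box-Behnken design for 3 factors: every (+/-1, +/-1) combination
    on each pair of factors with the third factor held at 0, plus centre runs."""
    runs = []
    for i, j in combinations(range(3), 2):      # the 3 factor pairs
        for a, b in product((-1, 1), repeat=2):  # 4 corner settings per pair
            row = [0, 0, 0]
            row[i], row[j] = a, b
            runs.append(tuple(row))
    runs += [(0, 0, 0)] * center_runs            # centre replicates
    return runs

design = box_behnken_3()
print(len(design))  # 15 runs: 12 edge midpoints + 3 centre points
```

Each coded row is then mapped to real factor values (e.g. -1/0/+1 to low/mid/high pressure) before running the extractions.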

  12. Secondary Analysis of National Longitudinal Transition Study 2 Data

    Science.gov (United States)

    Hicks, Tyler A.; Knollman, Greg A.

    2015-01-01

    This review examines published secondary analyses of National Longitudinal Transition Study 2 (NLTS2) data, with a primary focus upon statistical objectives, paradigms, inferences, and methods. Its primary purpose was to determine which statistical techniques have been common in secondary analyses of NLTS2 data. The review begins with an…

  13. Effect of a medical food on body mass index and activities of daily living in patients with Alzheimer's disease: secondary analyses from a randomized, controlled trial

    NARCIS (Netherlands)

    Kamphuis, P.J.G.H.; Verhey, F.R.; Olde Rikkert, M.G.; Twisk, J.W.; Swinkels, S.H.; Scheltens, P.

    2011-01-01

    Objectives: To investigate the effect of a medical food (Souvenaid) on body mass index (BMI) and functional abilities in patients with mild Alzheimer's disease (AD). Design/setting/participants/intervention /measurements: These analyses were performed on data from a 12-week, double-blind,

  14. Effect of a medical food on body mass index and activities of daily living in patients with Alzheimer's disease: secondary analyses from a randomized, controlled trial

    NARCIS (Netherlands)

    Kamphuis, P.J.; Verhey, F.R.J.; Olde Rikkert, M.G.M.; Twisk, J.W.R.; Swinkels, S.H.N.; Scheltens, P.

    2011-01-01

    OBJECTIVES: To investigate the effect of a medical food (Souvenaid) on body mass index (BMI) and functional abilities in patients with mild Alzheimer's disease (AD). DESIGN/SETTING/PARTICIPANTS/INTERVENTION/MEASUREMENTS: These analyses were performed on data from a 12-week, double-blind,

  15. Suitability of Secondary PEEK Telescopic Crowns on Zirconia Primary Crowns: The Influence of Fabrication Method and Taper.

    Science.gov (United States)

    Merk, Susanne; Wagner, Christina; Stock, Veronika; Eichberger, Marlis; Schmidlin, Patrick R; Roos, Malgorzata; Stawarczyk, Bogna

    2016-11-08

    This study investigates the retention load (RL) between ZrO₂ primary crowns and secondary polyetheretherketone (PEEK) crowns made by different fabrication methods with three different tapers. Standardized primary ZrO₂ crowns were fabricated with three different tapers: 0°, 1°, and 2° (n = 10/group). Ten secondary crowns were fabricated (i) milled from breCam BioHPP blanks (PM); (ii) pressed from industrially fabricated PEEK pellets (PP) (BioHPP Pellet); or (iii) pressed from granular PEEK (PG) (BioHPP Granulat). One calibrated operator adjusted all crowns. In total, the RL of 90 secondary crowns was measured in pull-off tests at 50 mm/min, and each specimen was tested 20 times. Two- and one-way ANOVAs followed by Scheffé's post-hoc test were used for data analysis (p < 0.05). Within crowns with a 0° taper, the PP group showed significantly higher retention load values compared with the other groups. Among the 1° taper, the PM group presented significantly lower retention loads than the PP group. However, the pressing type had no impact on the results. Within the 2° taper, the fabrication method had no influence on the RL. Within the PM group, the 2° taper showed significantly higher retention load compared with the 1° taper. The taper with 0° was in the same range as the 1° and 2° tapers. No impact of the taper on the retention value was observed between the PP groups. Within the PG groups, the 0° taper presented significantly lower RL than the 1° taper, whereas the 2° taper showed no differences. The fabrication method of the secondary PEEK crowns and taper angles showed no consistent effect within all tested groups.
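
    The ANOVA used in this record can be sketched with a minimal one-way F-statistic. The retention-load numbers below are made up for illustration; only the F computation itself reflects the stated analysis (the study additionally used two-way ANOVA and Scheffé's post-hoc test):

```python
def one_way_anova_f(groups):
    """Return the one-way ANOVA F statistic for a list of samples."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n_total
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, n_total - k
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical retention loads (N) for three fabrication methods
f_stat = one_way_anova_f([[1, 2, 3], [2, 3, 4], [3, 4, 5]])  # F = 3.0
```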

  16. Suitability of Secondary PEEK Telescopic Crowns on Zirconia Primary Crowns: The Influence of Fabrication Method and Taper

    Directory of Open Access Journals (Sweden)

    Susanne Merk

    2016-11-01

    Full Text Available This study investigates the retention load (RL) between ZrO₂ primary crowns and secondary polyetheretherketone (PEEK) crowns made by different fabrication methods with three different tapers. Standardized primary ZrO₂ crowns were fabricated with three different tapers: 0°, 1°, and 2° (n = 10/group). Ten secondary crowns were fabricated (i) milled from breCam BioHPP blanks (PM); (ii) pressed from industrially fabricated PEEK pellets (PP) (BioHPP Pellet); or (iii) pressed from granular PEEK (PG) (BioHPP Granulat). One calibrated operator adjusted all crowns. In total, the RL of 90 secondary crowns was measured in pull-off tests at 50 mm/min, and each specimen was tested 20 times. Two- and one-way ANOVAs followed by Scheffé's post-hoc test were used for data analysis (p < 0.05). Within crowns with a 0° taper, the PP group showed significantly higher retention load values compared with the other groups. Among the 1° taper, the PM group presented significantly lower retention loads than the PP group. However, the pressing type had no impact on the results. Within the 2° taper, the fabrication method had no influence on the RL. Within the PM group, the 2° taper showed significantly higher retention load compared with the 1° taper. The taper with 0° was in the same range as the 1° and 2° tapers. No impact of the taper on the retention value was observed between the PP groups. Within the PG groups, the 0° taper presented significantly lower RL than the 1° taper, whereas the 2° taper showed no differences. The fabrication method of the secondary PEEK crowns and taper angles showed no consistent effect within all tested groups.

  17. Alkalisation agent measurement with differential conductivity method in secondary water system

    International Nuclear Information System (INIS)

    Wuhrmann, Peter; Lendi, Marco

    2012-09-01

    Besides ammonium hydroxide, morpholine and ethanolamine (ETA) are also widely used as pH-regulating agents on the secondary water side [1]. The concentration of the alkalisation agent can only be calculated if the chemical composition of the sample is known [2]. A reliable alkalisation agent measurement therefore involves three major steps: a reliable specific and (degassed) acid conductivity measurement, pH calculation, and selection of the chemical model for calculating the concentration of the alkalisation agent (authors)

  18. In-situ measurements of the secondary electron yield in an accelerator environment: Instrumentation and methods

    International Nuclear Information System (INIS)

    Hartung, W.H.; Asner, D.M.; Conway, J.V.; Dennett, C.A.; Greenwald, S.; Kim, J.-S.; Li, Y.; Moore, T.P.; Omanovic, V.; Palmer, M.A.; Strohman, C.R.

    2015-01-01

    The performance of a particle accelerator can be limited by the build-up of an electron cloud (EC) in the vacuum chamber. Secondary electron emission from the chamber walls can contribute to EC growth. An apparatus for in-situ measurements of the secondary electron yield (SEY) in the Cornell Electron Storage Ring (CESR) was developed in connection with EC studies for the CESR Test Accelerator program. The CESR in-situ system, in operation since 2010, allows for SEY measurements as a function of incident electron energy and angle on samples that are exposed to the accelerator environment, typically 5.3 GeV counter-rotating beams of electrons and positrons. The system was designed for periodic measurements to observe beam conditioning of the SEY with discrimination between exposure to direct photons from synchrotron radiation versus scattered photons and cloud electrons. The samples can be exchanged without venting the CESR vacuum chamber. Measurements have been done on metal surfaces and EC-mitigation coatings. The in-situ SEY apparatus and improvements to the measurement tools and techniques are described

  19. In-situ measurements of the secondary electron yield in an accelerator environment: Instrumentation and methods

    Energy Technology Data Exchange (ETDEWEB)

    Hartung, W.H., E-mail: wh29@cornell.edu; Asner, D.M.; Conway, J.V.; Dennett, C.A.; Greenwald, S.; Kim, J.-S.; Li, Y.; Moore, T.P.; Omanovic, V.; Palmer, M.A.; Strohman, C.R.

    2015-05-21

    The performance of a particle accelerator can be limited by the build-up of an electron cloud (EC) in the vacuum chamber. Secondary electron emission from the chamber walls can contribute to EC growth. An apparatus for in-situ measurements of the secondary electron yield (SEY) in the Cornell Electron Storage Ring (CESR) was developed in connection with EC studies for the CESR Test Accelerator program. The CESR in-situ system, in operation since 2010, allows for SEY measurements as a function of incident electron energy and angle on samples that are exposed to the accelerator environment, typically 5.3 GeV counter-rotating beams of electrons and positrons. The system was designed for periodic measurements to observe beam conditioning of the SEY with discrimination between exposure to direct photons from synchrotron radiation versus scattered photons and cloud electrons. The samples can be exchanged without venting the CESR vacuum chamber. Measurements have been done on metal surfaces and EC-mitigation coatings. The in-situ SEY apparatus and improvements to the measurement tools and techniques are described.

  20. A study on the Stress Corrosion Cracking reduction method of Steam Generator secondary side of KSNP

    International Nuclear Information System (INIS)

    Kim, June Hoon; Lee, Goune Jin

    2014-01-01

    In order to avoid sludge accumulation affecting the life of the steam generator, the best approach is to prevent sludge inflow in advance by optimizing water quality management through chemical concentration and pH control. However, it is very difficult to prevent sludge accumulation under corrosion-prone conditions such as condensation, boiling, and high feed-water temperature in NPPs. In particular, stress corrosion cracking occurs in the top-of-tubesheet area of the steam generator as the number of operating years of the Korea Standard Nuclear Plant (KSNP) increases... The purpose of this study is to suppress stress corrosion cracking, extend steam generator life, and improve plant efficiency by performing full-length bulk high-temperature chemical cleaning to remove iron oxide from the steam generator secondary side of KSNP Hanbit Unit 6. This study analyzed the free EDTA (ethylenediaminetetraacetic acid) and Fe concentrations and the sludge removal after performing this cleaning at Hanbit Unit 6. 1) A typical pattern was observed in which the Fe concentration increased as the free EDTA concentration decreased. 2) Sludge removal in terms of iron oxide after performing the full-length bulk high-temperature chemical cleaning was 3001 kg, and additional sludge removal by lancing was 200.1 kg

  1. Evaluation of Pentachlorophenol Residues in Some Hygienic Papers Prepared from Virgin and Secondary Pulp by Electron Capture Gas Chromatographic Method

    Directory of Open Access Journals (Sweden)

    Behrouz Akbari-adergani

    2016-01-01

    Full Text Available In this study, the residual amount of pentachlorophenol (PCP), the most important paper preservative and an extremely hazardous pollutant, was determined in some tissue papers and napkins. Twenty-five samples from two hygienic-paper factories, prepared from virgin and secondary pulp, were analyzed for trace amounts of PCP. The analytical procedure involved direct extraction of PCP from hygienic paper and its determination by gas chromatography with electron capture detection. The statistical results for the analysis of all samples revealed significant differences between the mean PCP content of hygienic papers prepared from virgin and secondary pulp (P < 0.05). The method gave recoveries of 86-98% for hygienic paper made from virgin pulp and 79-92% for hygienic paper made from secondary pulp. The limit of detection (LOD) and limit of quantification (LOQ) for PCP were 6.3 and 21.0 mg/kg, respectively. The analytical method has the requisite sensitivity, accuracy, precision and specificity to assay PCP in hygienic papers. This study demonstrates a concern with exposure to PCP considering that hygienic paper is largely consumed in society.
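
    The abstract reports LOD and LOQ values but not how they were derived. A common convention (the ICH-style 3.3σ/S and 10σ/S rules from a calibration line; an assumption here, not necessarily the authors' procedure) can be sketched as follows. The calibration points are hypothetical:

```python
def calibration_lod_loq(x, y):
    """Fit y = a + b*x by least squares and return (LOD, LOQ) using the
    common 3.3*sigma/slope and 10*sigma/slope rules (sigma = residual SD)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    sigma = (sum(r * r for r in resid) / (n - 2)) ** 0.5
    return 3.3 * sigma / b, 10 * sigma / b

# Hypothetical PCP calibration: concentration (mg/kg) vs. detector response
lod, loq = calibration_lod_loq([5, 10, 20, 40, 80], [11, 19, 42, 79, 161])
```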

  2. The quality of reporting methods and results of cost-effectiveness analyses in Spain: a methodological systematic review.

    Science.gov (United States)

    Catalá-López, Ferrán; Ridao, Manuel; Alonso-Arroyo, Adolfo; García-Altés, Anna; Cameron, Chris; González-Bermejo, Diana; Aleixandre-Benavent, Rafael; Bernal-Delgado, Enrique; Peiró, Salvador; Tabarés-Seisdedos, Rafael; Hutton, Brian

    2016-01-07

    Cost-effectiveness analysis has been recognized as an important tool to determine the efficiency of healthcare interventions and services. There is a need for evaluating the reporting of methods and results of cost-effectiveness analyses and establishing their validity. We describe and examine reporting characteristics of methods and results of cost-effectiveness analyses conducted in Spain during more than two decades. A methodological systematic review was conducted with the information obtained through an updated literature review in PubMed and complementary databases (e.g. Scopus, ISI Web of Science, National Health Service Economic Evaluation Database (NHS EED) and Health Technology Assessment (HTA) databases from the Centre for Reviews and Dissemination (CRD), Índice Médico Español (IME) and Índice Bibliográfico Español en Ciencias de la Salud (IBECS)). We identified cost-effectiveness analyses conducted in Spain that used quality-adjusted life years (QALYs) as outcome measures (period 1989-December 2014). Two reviewers independently extracted the data from each paper. The data were analysed descriptively. In total, 223 studies were included. Very few studies (10; 4.5 %) reported working from a protocol. Most studies (200; 89.7 %) were simulation models and included a median of 1000 patients. Only 105 (47.1 %) studies presented an adequate description of the characteristics of the target population. Most study interventions were categorized as therapeutic (189; 84.8 %) and nearly half (111; 49.8 %) considered an active alternative as the comparator. Effectiveness data were derived from a single study in 87 (39.0 %) reports, and only a few (40; 17.9 %) used evidence synthesis-based estimates. Few studies (42; 18.8 %) reported a full description of methods for QALY calculation. The majority of the studies (147; 65.9 %) reported that the study intervention produced "more costs and more QALYs" than the comparator. Most studies (200; 89.7 %) reported favourable
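
    The "more costs and more QALYs" finding refers to the cost-effectiveness plane; in that quadrant the trade-off is summarized by the incremental cost-effectiveness ratio (ICER = ΔC/ΔQALY). A minimal sketch with hypothetical numbers:

```python
def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
    return delta_cost / delta_qaly

def quadrant(delta_cost, delta_qaly):
    """Classify the cost-effectiveness plane quadrant of an intervention."""
    if delta_qaly > 0:
        return ("more costs, more QALYs" if delta_cost > 0
                else "dominant (cheaper, more QALYs)")
    return ("dominated (costlier, fewer QALYs)" if delta_cost > 0
            else "fewer costs, fewer QALYs")

# A hypothetical intervention: +3,000 euros and +0.25 QALYs vs. comparator
ratio = icer(3000, 0.25)  # 12,000 euros per QALY gained
```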

  3. A comparative study between xerographic, computer-assisted overlay generation and animated-superimposition methods in bite mark analyses.

    Science.gov (United States)

    Tai, Meng Wei; Chong, Zhen Feng; Asif, Muhammad Khan; Rahmat, Rabiah A; Nambiar, Phrabhakaran

    2016-09-01

    This study compared the suitability and precision of xerographic and computer-assisted methods for bite mark investigations. Eleven subjects were asked to bite on their forearm and the bite marks were photographically recorded. Alginate impressions of the subjects' dentition were taken and their casts were made using dental stone. The overlays generated by the xerographic method were obtained by photocopying the subjects' casts and then transferring the incisal edge outlines onto a transparent sheet. The bite mark images were imported into Adobe Photoshop® software and printed to life-size. Bite mark analyses using xerographically generated overlays were done by manually comparing an overlay to the corresponding printed bite mark images. In the computer-assisted method, the subjects' casts were scanned into Adobe Photoshop®. Bite mark analyses using computer-assisted overlay generation were done by digitally matching an overlay to the corresponding bite mark images using Adobe Photoshop®. Another comparison method superimposed the cast images on the corresponding bite mark images using Adobe Photoshop® CS6 and GIF-Animator©. During analysis, each precision-determining criterion was given a score from 0 to 3, with higher scores indicating better matching. The Kruskal-Wallis H test showed a significant difference between the three sets of data (H=18.761, p<0.05). In conclusion, bite mark analysis using the computer-assisted animated-superimposition method was the most accurate, followed by the computer-assisted overlay generation and lastly the xerographic method. The superior precision contributed by the digital method is discernible despite the human skin being a poor recording medium of bite marks. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
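
    The Kruskal-Wallis H statistic reported above can be computed with a short function. This tie-free sketch omits the tie correction (which the study's 0-3 scores would in practice require), and the sample scores are invented:

```python
def kruskal_wallis_h(groups):
    """Kruskal-Wallis H statistic (no tie correction) for a list of samples."""
    pooled = sorted(x for g in groups for x in g)
    n = len(pooled)
    rank = {v: i + 1 for i, v in enumerate(pooled)}  # assumes no tied values
    r = sum(sum(rank[x] for x in g) ** 2 / len(g) for g in groups)
    return 12.0 / (n * (n + 1)) * r - 3 * (n + 1)

# Toy precision scores for three bite-mark comparison methods (no ties)
h = kruskal_wallis_h([[1, 2], [3, 4], [5, 6]])  # H = 32/7 ≈ 4.571
```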

  4. Analysing radio-frequency coil arrays in high-field magnetic resonance imaging by the combined field integral equation method

    Energy Technology Data Exchange (ETDEWEB)

    Wang Shumin; Duyn, Jeff H [Laboratory of Functional and Molecular Imaging, National Institute of Neurological Disorders and Stroke, National Institutes of Health, 10 Center Drive, 10/B1D728, Bethesda, MD 20892 (United States)

    2006-06-21

    We present the combined field integral equation (CFIE) method for analysing radio-frequency coil arrays in high-field magnetic resonance imaging (MRI). Three-dimensional models of coils and the human body were used to take into account the electromagnetic coupling. In the method of moments formulation, we applied triangular patches and the Rao-Wilton-Glisson basis functions to model arbitrarily shaped geometries. We first examined a rectangular loop coil to verify the CFIE method and also demonstrate its efficiency and accuracy. We then studied several eight-channel receive-only head coil arrays for 7.0 T SENSE functional MRI. Numerical results show that the signal dropout and the average SNR are two major concerns in SENSE coil array design. A good design should be a balance of these two factors.

  5. Analysis of the accuracy of certain methods used for measuring very low reactivities

    Energy Technology Data Exchange (ETDEWEB)

    Valat, J; Stern, T E

    1964-07-01

    The rapid measurement of anti-reactivities, in particular very low ones (i.e. a few tens of β), appears to be an interesting approach for the automatic start-up of a reactor and its optimisation. With this in view, the present report explores various methods, examined essentially from the point of view of the time required to make the measurement with a given statistical accuracy, especially for very low reactivities. The statistical analysis is applied in turn to: the natural background noise methods (auto-correlation and spectral density); the sinusoidal excitation methods for the reactivity or the source, with synchronous detection; and the periodic source excitation method using pulsed neutrons. Finally, the statistical analysis leads to the suggestion of a new method of source excitation using random neutron square waves combined with an intercorrelation between the random excitation and the resulting output. (authors)
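
    The proposed intercorrelation of a random square-wave excitation with the resulting output is the classic PRBS-style identification idea: for a zero-mean ±1 input, the input-output cross-correlation converges to the system's impulse response. A generic sketch with a toy linear system (not a reactor kinetics model; the filter coefficients are illustrative):

```python
import random

def estimate_impulse_response(x, y, n_lags):
    """Estimate h[k] by cross-correlating input x with output y; for a
    zero-mean +/-1 input this converges to the impulse response."""
    n = len(x)
    return [sum(y[i] * x[i - k] for i in range(k, n)) / (n - k)
            for k in range(n_lags)]

random.seed(0)
h_true = [1.0, 0.5, 0.25, 0.125]                    # toy system
x = [random.choice((-1, 1)) for _ in range(20000)]  # random square wave
y = [sum(h_true[k] * x[i - k] for k in range(len(h_true)) if i - k >= 0)
     for i in range(len(x))]
h_est = estimate_impulse_response(x, y, 4)          # ≈ h_true
```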

  6. A Statistical Method for Aggregated Wind Power Plants to Provide Secondary Frequency Control

    DEFF Research Database (Denmark)

    Hu, Junjie; Ziras, Charalampos; Bindner, Henrik W.

    2017-01-01

    The increasing penetration of wind power brings significant challenges to power system operators due to the wind's inherent uncertainty and variability. Traditionally, power plants and more recently demand response have been used to balance the power system. However, the use of wind power as a balancing-power source has also been investigated, especially for wind-power-dominated power systems such as Denmark. The main drawback is that wind power must be curtailed by setting a lower operating point in order to offer upward regulation. We propose a statistical approach to reduce wind power curtailment for aggregated wind power plants providing secondary frequency control (SFC) to the power system. By using historical SFC signals and wind speed data, we calculate metrics for the reserve provision error as a function of the scheduled wind power. We show that wind curtailment can be significantly…
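
    The reserve-provision-error idea can be illustrated with a toy metric: for a candidate scheduled operating point, count how often the plant could not deliver the schedule plus the requested upward regulation given the historically available wind power. This is a simplified sketch under assumed data, not the paper's actual metric:

```python
def reserve_shortfall(available, scheduled, up_requests):
    """Mean shortfall when the plant must deliver the scheduled power plus
    the requested upward SFC regulation but is limited by available wind."""
    shortfalls = [max(0.0, scheduled + r - a)
                  for a, r in zip(available, up_requests)]
    return sum(shortfalls) / len(shortfalls)

# Toy series (MW): available wind power and SFC up-regulation requests
err = reserve_shortfall([10, 8, 6], scheduled=7, up_requests=[1, 2, 0])
# per-step shortfalls: [0, 1, 1] -> mean ≈ 0.667 MW
```

    Sweeping `scheduled` over a range of values would trade curtailment (a lower setpoint wastes wind) against this shortfall metric, which is the balance the paper's statistical approach addresses.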

  7. Becoming pregnant during secondary school: findings from concurrent mixed methods research in Anambra State, Nigeria.

    Science.gov (United States)

    Onyeka, Ifeoma N; Miettola, Juhani; Ilika, Amobi L; Vaskilampi, Tuula

    2012-03-01

    Pregnancies among teenagers and problems associated with premarital births have raised concerns in many countries. It is important to explore unintended pregnancy from the viewpoints of local stakeholders such as students, schools/teachers, and community members. This study assessed reported cases of unintended pregnancy among students and perceptions of these pregnancies by members of the community. This study took place in a rural community in Anambra state, southeastern Nigeria. A cross-sectional survey of 1,234 students and 46 teachers in five secondary schools was carried out using self-administered questionnaires. In addition, focus group discussions (FGD) involving 10 parents and in-depth interview (IDI) with a student who became pregnant were conducted. Reports of pregnancy were more common during second and third years of junior secondary school than other school years or level. According to teachers, ignorance was the main reason given by students who became pregnant. Students who became pregnant were reported to have performed poorly academically and lived with both parents, who were either subsistence farmers or petty traders. In the IDI, the ex-student opined that pregnant students faced shame, marital limitations and lack of respect from community members. Participants in the FGD suggested that teenagers should be provided with sex education in schools and in churches; parents should communicate with teenagers about sexual matters and make adequate financial provision; and the male partners should be held more accountable for the pregnancies. Poor sexual knowledge and poor socioeconomic conditions play important roles in teenage pregnancy. Male participation may enhance effectiveness of prevention programmes.

  8. Calculation of the secondary gamma radiation by the Monte Carlo method at displaced sampling from distributed sources

    International Nuclear Information System (INIS)

    Petrov, Eh.E.; Fadeev, I.A.

    1979-01-01

    A possibility to use displaced sampling from a bulk gamma source in calculating secondary gamma fields by the Monte Carlo method is discussed. The algorithm proposed is based on the concept of conjugate functions together with the dispersion minimization technique. For the sake of simplicity a plane source is considered. The algorithm has been implemented on the M-220 computer. The differential gamma current and flux spectra in 21 cm-thick lead have been calculated. The source of secondary gamma quanta was assumed to be distributed, constant, and isotropic, emitting 4 MeV gamma quanta at a rate of 10⁹ quanta/(cm³·s). The calculations have demonstrated that the last 7 cm of lead are responsible for the whole gamma spectral pattern. The spectra practically coincide with those calculated by the ROZ computer code. Thus the algorithm proposed can be effectively used in calculations of secondary gamma radiation transport, reducing the computation time by a factor of 2-4
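
    For context, the simplest Monte Carlo treatment of slab attenuation (which displaced sampling and other variance-reduction schemes improve upon) samples exponential free paths and counts uncollided crossings. This is a generic analog sketch with made-up parameters, not the paper's conjugate-function algorithm:

```python
import math
import random

def uncollided_fraction(mu, thickness, n_samples, rng):
    """Analog Monte Carlo: sample exponential free paths and count the
    photons that cross the slab without a collision."""
    hits = sum(1 for _ in range(n_samples)
               if rng.expovariate(mu) > thickness)
    return hits / n_samples

rng = random.Random(42)
estimate = uncollided_fraction(mu=0.5, thickness=2.0,
                               n_samples=100_000, rng=rng)
exact = math.exp(-0.5 * 2.0)  # analytic answer, ≈ 0.368
```

    Variance-reduction techniques such as the displaced sampling discussed above aim to reach the same answer with far fewer histories, which is where the reported 2-4x speed-up comes from.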

  9. A score system for complete cytoreduction in selected recurrent ovarian cancer patients undergoing secondary cytoreductive surgery: predictors- and nomogram-based analyses.

    Science.gov (United States)

    Bogani, Giorgio; Tagliabue, Elena; Signorelli, Mauro; Ditto, Antonino; Martinelli, Fabio; Chiappa, Valentina; Mosca, Lavinia; Sabatucci, Ilaria; Leone Roberti Maggiore, Umberto; Lorusso, Domenica; Raspagliesi, Francesco

    2018-05-01

    To test the applicability of the Arbeitsgemeinschaft Gynäkologische Onkologie (AGO) and Memorial Sloan Kettering (MSK) criteria in predicting complete cytoreduction (CC) in patients undergoing secondary cytoreductive surgery (SCS) for recurrent ovarian cancer (ROC). Data of consecutive patients undergoing SCS were reviewed. The Arbeitsgemeinschaft Gynäkologische Onkologie OVARian cancer study group (AGO-OVAR) and MSK criteria were retrospectively applied. Nomograms based on the AGO criteria, the MSK criteria, and both sets of criteria were built in order to assess the probability of achieving CC at SCS. Overall, 194 patients met the inclusion criteria. CC was achieved in 161 (82.9%) patients. According to the AGO-OVAR criteria, CC was achieved in 87.0% of patients with a positive AGO score. However, 45 out of 71 (63.4%) patients who did not fulfil the AGO score had CC. Similarly, CC was achieved in 87.1%, 61.9% and 66.7% of patients for whom SCS was recommended, had to be considered, and was not recommended, respectively. In order to evaluate the predictive value of the AGO-OVAR and MSK criteria, we built 2 separate nomograms (c-index: 0.5900 and 0.5989, respectively) to test the probability of achieving CC at SCS. Additionally, we built a nomogram using both of the aforementioned criteria (c-index: 0.5857). The AGO and MSK criteria help identify patients deserving SCS. However, these criteria might be too strict, thus withholding a beneficial treatment from patients who do not meet them. Further studies are needed to clarify factors predicting CC at SCS. Copyright © 2018. Asian Society of Gynecologic Oncology, Korean Society of Gynecologic Oncology.
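
    The nomograms above are evaluated by c-index. For a binary endpoint such as achieving CC, the c-index is the fraction of concordant (event, non-event) pairs, equivalent to the area under the ROC curve. A minimal sketch with invented probabilities:

```python
def c_index(scores, outcomes):
    """Concordance index for a binary outcome: among all (event, non-event)
    pairs, the fraction where the event case got the higher score; tied
    scores count 0.5."""
    pos = [s for s, o in zip(scores, outcomes) if o == 1]
    neg = [s for s, o in zip(scores, outcomes) if o == 0]
    total = concordant = 0.0
    for p in pos:
        for q in neg:
            total += 1
            concordant += 1.0 if p > q else 0.5 if p == q else 0.0
    return concordant / total

# Hypothetical nomogram probabilities of complete cytoreduction vs. outcome
c = c_index([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1])  # 0.75
```

    The reported c-indexes near 0.59 indicate only modest discrimination (0.5 is chance, 1.0 is perfect), consistent with the authors' call for better predictors.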

  10. Proteomic and Metabolomic Analyses Reveal Contrasting Anti-Inflammatory Effects of an Extract of Mucor Racemosus Secondary Metabolites Compared to Dexamethasone.

    Science.gov (United States)

    Meier, Samuel M; Muqaku, Besnik; Ullmann, Ronald; Bileck, Andrea; Kreutz, Dominique; Mader, Johanna C; Knasmüller, Siegfried; Gerner, Christopher

    2015-01-01

    Classical drug assays are often confined to single molecules and targeting single pathways. However, it is also desirable to investigate the effects of complex mixtures on complex systems such as living cells including the natural multitude of signalling pathways. Evidence based on herbal medicine has motivated us to investigate potential beneficial health effects of Mucor racemosus (M rac) extracts. Secondary metabolites of M rac were collected using a good-manufacturing process (GMP) approved production line and a validated manufacturing process, in order to obtain a stable product termed SyCircue (National Drug Code USA: 10424-102). Toxicological studies confirmed that this product does not contain mycotoxins and is non-genotoxic. Potential effects on inflammatory processes were investigated by treating stimulated cells with M rac extracts and the effects were compared to the standard anti-inflammatory drug dexamethasone on the levels of the proteome and metabolome. Using 2D-PAGE, slight anti-inflammatory effects were observed in primary white blood mononuclear cells, which were more pronounced in primary human umbilical vein endothelial cells (HUVECs). Proteome profiling based on nLC-MS/MS analysis of tryptic digests revealed inhibitory effects of M rac extracts on pro-inflammatory cytoplasmic mediators and secreted cytokines and chemokines in these endothelial cells. This finding was confirmed using targeted proteomics, here treatment of stimulated cells with M rac extracts down-regulated the secretion of IL-6, IL-8, CXCL5 and GROA significantly. Finally, the modulating effects of M rac on HUVECs were also confirmed on the level of the metabolome. Several metabolites displayed significant concentration changes upon treatment of inflammatory activated HUVECs with the M rac extract, including spermine and lysophosphatidylcholine acyl C18:0 and sphingomyelin C26:1, while the bulk of measured metabolites remained unaffected. Interestingly, the effects of M rac

  11. Determination of S₁₇ from systematic analyses of ⁸B Coulomb breakup with the Eikonal-CDCC method

    International Nuclear Information System (INIS)

    Ogata, K.; Matsumoto, T.; Yamashita, N.; Kamimura, M.; Yahiro, M.; Iseri, Y.

    2003-01-01

    Systematic analysis of ⁸B Coulomb dissociation with the Asymptotic Normalization Coefficient (ANC) method is proposed to determine the astrophysical factor S₁₇(0) accurately. An important advantage of the analysis is that uncertainties in the extracted S₁₇(0) coming from the use of the ANC method can be evaluated quantitatively, in contrast to previous analyses using the Virtual Photon Theory (VPT). Calculation of measured spectra in dissociation experiments is done by means of the method of Continuum-Discretized Coupled-Channels (CDCC). From the analysis of ⁵⁸Ni(⁸B,⁷Be+p)⁵⁸Ni at 25.8 MeV, S₁₇(0) = 22.83 ± 0.51(theo) ± 2.28(expt) eV b is obtained; the ANC method turned out to work in this case within 1% error. Preceding systematic analysis of experimental data at intermediate energies, we propose a hybrid (HY) Coupled-Channels (CC) calculation of ⁸B Coulomb dissociation, which makes the numerical calculation much simpler while retaining its accuracy. The validity of the HY calculation is tested for ⁵⁸Ni(⁸B,⁷Be+p)⁵⁸Ni at 240 MeV. The ANC method combined with the HY CC calculation is shown to be a powerful technique to obtain a reliable S₁₇(0).

  12. Development of the mitigation method for carbon steel corrosion with ceramics in PWR secondary system

    International Nuclear Information System (INIS)

    Okamura, Masato; Shibasaki, Osamu; Miyazaki, Toyoaki; Kaneko, Tetsuji

    2012-09-01

    To verify the effect of depositing ceramics (TiO₂, La₂O₃, and Y₂O₃) on carbon steel to mitigate corrosion, corrosion tests were conducted under simulated chemistry conditions of a PWR secondary system. Test specimens (STPT410) were prepared with and without deposited ceramics. The ceramics were deposited on the specimens under high-temperature, high-pressure water conditions. Corrosion tests were conducted under high pH conditions (9.8) with a flow rate of 1.0-4.7 m/s at 185 °C for 200 hours. At a flow rate of 1.0 m/s, the amount of corrosion of the specimens with the ceramics was less than half of that of the specimens without the ceramics. As the flow rate increased, the amount of corrosion increased. However, even at a flow rate of 4.7 m/s, the amount of corrosion was reduced by approximately 30% by depositing the ceramics. After the corrosion tests, the surfaces of the specimens were analyzed with SEM and XRD. When the deposited ceramic was TiO₂, the surface was densely covered with fine particles (less than 1 μm). From XRD analysis, these particles were identified as ilmenite (FeTiO₃). We consider that ilmenite may play an important role in mitigating the corrosion of carbon steel. (authors)

  13. A geometric buckling expression for regular polygons: II. Analyses based on the multiple reciprocity boundary element method

    International Nuclear Information System (INIS)

    Itagaki, Masafumi; Miyoshi, Yoshinori; Hirose, Hideyuki

    1993-01-01

    A procedure is presented for the determination of geometric buckling for regular polygons. A new computation technique, the multiple reciprocity boundary element method (MRBEM), has been applied to solve the one-group neutron diffusion equation. The main difficulty in applying the ordinary boundary element method (BEM) to neutron diffusion problems has been the need to compute a domain integral, resulting from the fission source. The MRBEM has been developed for transforming this type of domain integral into an equivalent boundary integral. The basic idea of the MRBEM is to apply repeatedly the reciprocity theorem (Green's second formula) using a sequence of higher order fundamental solutions. The MRBEM requires discretization of the boundary only rather than of the domain. This advantage is useful for extensive survey analyses of buckling for complex geometries. The results of the survey analyses have indicated that the general form of geometric buckling is B_g² = (a_n/R_c)², where R_c represents the radius of the circumscribed circle of the regular polygon under consideration. The geometric constant a_n depends on the type of regular polygon and takes the value π for a square and 2.405 for a circle, the extreme case with an infinite number of sides. Values of a_n for a triangle, pentagon, hexagon, and octagon have been calculated as 4.190, 2.281, 2.675, and 2.547, respectively
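
    The square case of the buckling formula above can be checked against the textbook result: for a square of side L, the circumscribed radius is R_c = L/√2, so (π/R_c)² equals the classic two-dimensional square buckling 2(π/L)². A short verification sketch:

```python
import math

def geometric_buckling(a_n, r_circ):
    """B_g^2 = (a_n / R_c)^2, with R_c the circumscribed-circle radius."""
    return (a_n / r_circ) ** 2

# Square of side L: R_c = L / sqrt(2); classic result is 2*(pi/L)^2
L = 1.0
b_survey = geometric_buckling(math.pi, L / math.sqrt(2))
b_classic = 2 * (math.pi / L) ** 2
# the two expressions agree, confirming a_n = pi for the square
```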

  14. Problem-Based Learning Method: Secondary Education 10th Grade Chemistry Course Mixtures Topic

    Science.gov (United States)

    Üce, Musa; Ates, Ismail

    2016-01-01

    In this research, the aim was to determine student achievement by comparing the problem-based learning method with the teacher-centered traditional method in teaching the mixtures topic of the 10th grade chemistry course. A pretest-posttest control group research design was implemented. The research sample included two classes (48 students in total) of an Anatolian High School…

  15. Reliability of the k0-standardization method using a geological sample analysed in a proficiency test

    Energy Technology Data Exchange (ETDEWEB)

    Pelaes, Ana Clara O.; Menezes, Maria Ângela de B.C., E-mail: anacpelaes@gmail.com, E-mail: menezes@cdtn.br [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)

    2017-11-01

    Neutron Activation Analysis (NAA) is an analytical technique for determining the elemental chemical composition of samples of several matrices. It has been applied by the Laboratory for Neutron Activation Analysis, located at Centro de Desenvolvimento da Tecnologia Nuclear/Comissão Nacional de Energia Nuclear (Nuclear Technology Development Center/Brazilian Commission for Nuclear Energy), CDTN/CNEN, since the start-up of the TRIGA MARK I IPR-R1 reactor in 1960. Among the methods of applying the technique, the k0-standardization method, which was established at CDTN in 1995, is the most efficient; in 2003 it was reestablished and optimized. In order to verify the reproducibility of the results generated by the application of the k0-standardization method at CDTN, aliquots of a geological sample sent by WEPAL (Wageningen Evaluating Programs for Analytical Laboratories) were analysed, and the results were compared with those obtained through the Intercomparison of Results organized by the International Atomic Energy Agency in 2015. WEPAL is an accredited institution for the organisation of interlaboratory studies, preparing and organizing proficiency testing schemes all over the world. Therefore, the comparison with the results provided aims to contribute to the continuous improvement of the quality of the results obtained by the CDTN. The objective of this study was to verify the reliability of the method applied two years after the intercomparison round. (author)

  16. Prevalence and Social Determinants of Smoking in 15 Countries from North Africa, Central and Western Asia, Latin America and Caribbean: Secondary Data Analyses of Demographic and Health Surveys.

    Science.gov (United States)

    Sreeramareddy, Chandrashekhar T; Pradhan, Pranil Man Singh

    2015-01-01

    Article 20 of the World Health Organisation Framework Convention on Tobacco Control calls for a cross-country surveillance of tobacco use through population-based surveys. We aimed to provide country-level prevalence estimates for current smoking and current smokeless tobacco use and to assess social determinants of smoking. Data from Demographic and Health Surveys done between 2005 and 2012, among men and women from nine North African, Central and West Asian countries and six Latin American and Caribbean countries, were analyzed. Weighted country-level prevalence rates were estimated for 'current smoking' and 'current use of smokeless tobacco (SLT) products' among men and women. In each country, social determinants of smoking among men and women were assessed by binary logistic regression analyses, including men's and women's sampling weights to account for the complex survey design. Prevalence of smoking among men was higher than 40% in Armenia (63.1%), Moldova (51.1%), Ukraine (52%), Azerbaijan (49.8%), Kyrgyz Republic (44.3%) and Albania (42.52%), but the prevalence of smoking among women was less than 10% in most countries except Ukraine (14.81%) and Jordan (17.96%). The prevalence of smokeless tobacco use among men and women was less than 5% in all countries except among men in the Kyrgyz Republic (10.6%). Smoking was associated with older age, lower education and poverty among men, and with higher education and higher wealth among women. Smoking among both men and women was associated with unskilled work, living in urban areas and being single. Smoking among men was very high in Central and West Asian countries. The social pattern of smoking among women, which differed from that of men in education and wealth, should be considered while formulating tobacco control policies in some Central and West Asian countries.
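The weighted country-level prevalence described above reduces to a ratio of weighted sums. A minimal sketch in Python (the respondent flags and weights below are invented for illustration, not survey values):

```python
def weighted_prevalence(smoker_flags, weights):
    """Weighted prevalence (%) of current smoking: the weighted share of
    respondents flagged as smokers, using each respondent's sampling weight."""
    total = sum(weights)
    smokers = sum(w for flag, w in zip(smoker_flags, weights) if flag)
    return 100.0 * smokers / total

# five hypothetical respondents: 1 = current smoker
flags = [1, 0, 1, 0, 0]
weights = [1.5, 0.8, 1.2, 1.0, 0.5]
print(weighted_prevalence(flags, weights))  # → 54.0
```

Ignoring the weights here (an unweighted 2/5 = 40%) would misstate the rate, which is why the surveys' complex design must be carried into both the prevalence estimates and the regression analyses.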

  17. Prevalence and Social Determinants of Smoking in 15 Countries from North Africa, Central and Western Asia, Latin America and Caribbean: Secondary Data Analyses of Demographic and Health Surveys.

    Directory of Open Access Journals (Sweden)

    Chandrashekhar T Sreeramareddy

    Full Text Available Article 20 of the World Health Organisation Framework Convention on Tobacco Control calls for a cross-country surveillance of tobacco use through population-based surveys. We aimed to provide country-level prevalence estimates for current smoking and current smokeless tobacco use and to assess social determinants of smoking. Data from Demographic and Health Surveys done between 2005 and 2012, among men and women from nine North African, Central and West Asian countries and six Latin American and Caribbean countries, were analyzed. Weighted country-level prevalence rates were estimated for 'current smoking' and 'current use of smokeless tobacco (SLT) products' among men and women. In each country, social determinants of smoking among men and women were assessed by binary logistic regression analyses by including men's and women's sampling weights to account for the complex survey design. Prevalence of smoking among men was higher than 40% in Armenia (63.1%), Moldova (51.1%), Ukraine (52%), Azerbaijan (49.8%), Kyrgyz Republic (44.3%) and Albania (42.52%) but the prevalence of smoking among women was less than 10% in most countries except Ukraine (14.81%) and Jordan (17.96%). The prevalence of smokeless tobacco use among men and women was less than 5% in all countries except among men in the Kyrgyz Republic (10.6%). Smoking was associated with older age, lower education and poverty among men and higher education and higher wealth among women. Smoking among both men and women was associated with unskilled work, living in urban areas and being single. Smoking among men was very high in Central and West Asian countries. Social pattern of smoking among women that was different from men in education and wealth should be considered while formulating tobacco control policies in some Central and West Asian countries.

  18. Analysing malaria drug trials on a per-individual or per-clone basis: a comparison of methods.

    Science.gov (United States)

    Jaki, Thomas; Parry, Alice; Winter, Katherine; Hastings, Ian

    2013-07-30

    There are a variety of methods used to estimate the effectiveness of antimalarial drugs in clinical trials, invariably on a per-person basis. A person, however, may have more than one malaria infection present at the time of treatment. We evaluate currently used methods for analysing malaria trials on a per-individual basis and introduce a novel method to estimate the cure rate on a per-infection (clone) basis. We used simulated and real data to highlight the differences of the various methods. We give special attention to classifying outcomes as cured, recrudescent (infections that never fully cleared) or ambiguous on the basis of genetic markers at three loci. To estimate cure rates on a per-clone basis, we used the genetic information within an individual before treatment to determine the number of clones present. We used the genetic information obtained at the time of treatment failure to classify clones as recrudescence or new infections. On the per-individual level, we find that the most accurate methods of classification label an individual as newly infected if all alleles are different at the beginning and at the time of failure and as a recrudescence if all or some alleles were the same. The most appropriate analysis method is survival analysis or alternatively for complete data/per-protocol analysis a proportion estimate that treats new infections as successes. We show that the analysis of drug effectiveness on a per-clone basis estimates the cure rate accurately and allows more detailed evaluation of the performance of the treatment. Copyright © 2012 John Wiley & Sons, Ltd.
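The per-individual classification rule the authors identify as most accurate (new infection only when all alleles differ between baseline and failure) can be sketched as a set comparison. The function name and the single-marker simplification are ours; the study classifies on genetic markers at three loci:

```python
def classify_failure(baseline_alleles, failure_alleles):
    """Per-individual rule: 'new infection' if all alleles at the time of
    failure differ from baseline; 'recrudescence' if all or some match."""
    shared = set(baseline_alleles) & set(failure_alleles)
    return "recrudescence" if shared else "new infection"

print(classify_failure({"a1", "a2"}, {"a3"}))        # → new infection
print(classify_failure({"a1", "a2"}, {"a1", "a4"}))  # → recrudescence
```

The per-clone analysis the paper introduces goes one step further: the baseline genotypes determine how many clones are present, and each clone at failure is classified individually, rather than collapsing the whole individual into a single outcome.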

  19. The Threat of Unexamined Secondary Data: A Critical Race Transformative Convergent Mixed Methods

    Science.gov (United States)

    Garcia, Nichole M.; Mayorga, Oscar J.

    2018-01-01

    This article uses a critical race theory framework to conceptualize a Critical Race Transformative Convergent Mixed Methods (CRTCMM) in education. CRTCMM is a methodology that challenges normative educational research practices by acknowledging that racism permeates educational institutions and marginalizes Communities of Color. The focus of this…

  20. Teaching Basic Quantum Mechanics in Secondary School Using Concepts of Feynman Path Integrals Method

    Science.gov (United States)

    Fanaro, Maria de los Angeles; Otero, Maria Rita; Arlego, Marcelo

    2012-01-01

    This paper discusses the teaching of basic quantum mechanics in high school. Rather than following the usual formalism, our approach is based on Feynman's path integral method. Our presentation makes use of simulation software and avoids sophisticated mathematical formalism. (Contains 3 figures.)

  1. Teaching Politics in Secondary Education: Analyzing Instructional Methods from the 2008 Presidential Election

    Science.gov (United States)

    Journell, Wayne

    2011-01-01

    This article describes the instructional methods of four high school government teachers during their coverage of the 2008 presidential election. By analyzing the ways in which these teachers attempted to generate interest in the election and further their students' conceptualization of politics, the author seeks to better understand political…

  2. Teaching Two Basic Nanotechnology Concepts in Secondary School by Using a Variety of Teaching Methods

    Science.gov (United States)

    Blonder, Ron; Sakhnini, Sohair

    2012-01-01

    A nanotechnology module was developed for ninth grade students in the context of teaching chemistry. Two basic concepts in nanotechnology were chosen: (1) size and scale and (2) surface-area-to-volume ratio (SA/V). A wide spectrum of instructional methods (e.g., game-based learning, learning with multimedia, learning with models, project based…

  3. Appreciative Inquiry as a Method for Participatory Change in Secondary Schools in Lebanon

    Science.gov (United States)

    Shuayb, Maha

    2014-01-01

    Appreciative inquiry is a strategy which takes a positive approach to organizational development. It aims to identify good practice, design effective development plans, and ensure implementation. This article examines the potentials and limitations of using the appreciative inquiry in a mixed methods research design for developing school…

  4. Ab initio O(N) elongation-counterpoise method for BSSE-corrected interaction energy analyses in biosystems

    Energy Technology Data Exchange (ETDEWEB)

    Orimoto, Yuuichi; Xie, Peng; Liu, Kai [Department of Material Sciences, Faculty of Engineering Sciences, Kyushu University, 6-1 Kasuga-Park, Fukuoka 816-8580 (Japan); Yamamoto, Ryohei [Department of Molecular and Material Sciences, Interdisciplinary Graduate School of Engineering Sciences, Kyushu University, 6-1 Kasuga-Park, Fukuoka 816-8580 (Japan); Imamura, Akira [Hiroshima Kokusai Gakuin University, 6-20-1 Nakano, Aki-ku, Hiroshima 739-0321 (Japan); Aoki, Yuriko, E-mail: aoki.yuriko.397@m.kyushu-u.ac.jp [Department of Material Sciences, Faculty of Engineering Sciences, Kyushu University, 6-1 Kasuga-Park, Fukuoka 816-8580 (Japan); Japan Science and Technology Agency, CREST, 4-1-8 Hon-chou, Kawaguchi, Saitama 332-0012 (Japan)

    2015-03-14

    An elongation-counterpoise (ELG-CP) method was developed for performing accurate and efficient interaction energy analysis and correcting the basis set superposition error (BSSE) in biosystems. The method was achieved by combining our ab initio O(N) elongation method with the conventional counterpoise method proposed for solving the BSSE problem. As a test, the ELG-CP method was applied to the analysis of DNA inter-strand interaction energies with respect to the alkylation-induced base-pair mismatch phenomenon that causes a transition from G⋯C to A⋯T. It was found that the ELG-CP method showed high efficiency (nearly linear scaling) and high accuracy, with a negligibly small energy error in the total energy calculations (on the order of 10^−7–10^−8 hartree/atom) as compared with the conventional method during the counterpoise treatment. Furthermore, the magnitude of the BSSE was found to be ca. −290 kcal/mol for the calculation of a DNA model with 21 base pairs. This emphasizes the importance of BSSE correction when a limited-size basis set is used to study the DNA models and compare small energy differences between them. In this work, we quantitatively estimated the inter-strand interaction energy for each possible step in the transition process from G⋯C to A⋯T by the ELG-CP method. It was found that the base-pair replacement in the process only affects the interaction energy for a limited area around the mismatch position with a few adjacent base pairs. From the interaction energy point of view, our results showed that a base-pair sliding mechanism possibly occurs after the alkylation of guanine to gain the maximum possible number of hydrogen bonds between the bases. In addition, the steps leading to the A⋯T replacement accompanied with replications were found to be unfavorable processes corresponding to ca. 10 kcal/mol loss in stabilization energy. The present study indicated that the ELG-CP method is promising for
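The counterpoise bookkeeping underlying a BSSE estimate such as the reported ca. −290 kcal/mol is plain arithmetic on five energies: the dimer, each monomer in the full dimer basis, and each monomer in its own basis. A sketch of the standard counterpoise formulas (the example energies are invented for illustration, not values from the paper):

```python
HARTREE_TO_KCAL = 627.509  # hartree → kcal/mol conversion factor

def cp_interaction_energy(e_ab, e_a_in_ab, e_b_in_ab):
    """Counterpoise-corrected interaction energy: monomer energies are
    evaluated in the full dimer (AB) basis, so basis-set borrowing cancels."""
    return e_ab - e_a_in_ab - e_b_in_ab

def bsse(e_a_in_ab, e_a_own, e_b_in_ab, e_b_own):
    """BSSE: spurious stabilisation each monomer gains from the partner's
    basis functions (negative when the extra functions lower the energy)."""
    return (e_a_in_ab - e_a_own) + (e_b_in_ab - e_b_own)

# invented example energies in hartree
e_ab, e_a_ab, e_b_ab = -200.10, -100.02, -100.03
e_a, e_b = -100.00, -100.01
print(cp_interaction_energy(e_ab, e_a_ab, e_b_ab) * HARTREE_TO_KCAL)
print(bsse(e_a_ab, e_a, e_b_ab, e_b) * HARTREE_TO_KCAL)
```

The ELG-CP contribution described in the abstract is not a change to these formulas but a way of evaluating the component energies with near-linear scaling via the elongation method.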

  5. An overview of translation in language teaching methods: implications for EFL in secondary education in the region of Murcia

    Directory of Open Access Journals (Sweden)

    Teresa Marqués Aguado

    2013-07-01

    Full Text Available Various activities and resources have been used across time to promote and enhance the learning of foreign languages. Among these, translation has been cherished or dismissed depending on the preferred teaching method at each period. With the arrival of the Communicative approach, which focuses on communicative competence, its role has apparently become even more unstable. This article seeks to explore the role of translation in the main teaching methods used in Spain. This will in turn serve as the background against which the current educational scenario (with the communicative approach and the tenets of the Common European Framework of Reference for Languages) will be measured, with a view to ascertaining the role that translation may currently play. The particular situation of Secondary Education in the Region of Murcia will be discussed in the light of the curricula for this stage.

  6. A comparison of two microscale laboratory reporting methods in a secondary chemistry classroom

    Science.gov (United States)

    Martinez, Lance Michael

    This study attempted to determine if there was a difference between the laboratory achievement of students who used a modified reporting method and those who used traditional laboratory reporting. The study also determined the relationships between laboratory performance scores and the independent variables score on the Group Assessment of Logical Thinking (GALT) test, chronological age in months, gender, and ethnicity for each of the treatment groups. The study was conducted using 113 high school students who were enrolled in first-year general chemistry classes at Pueblo South High School in Colorado. The research design used was the quasi-experimental Nonequivalent Control Group Design. The statistical treatment consisted of the Multiple Regression Analysis and the Analysis of Covariance. Based on the GALT, students in the two groups were generally in the concrete and transitional stages of the Piagetian cognitive levels. The findings of the study revealed that the traditional and the modified methods of laboratory reporting did not have any effect on the laboratory performance outcome of the subjects. However, the students who used the traditional method of reporting showed a higher laboratory performance score when evaluation was conducted using the New Standards rubric recommended by the state. Multiple Regression Analysis revealed that there was a significant relationship between the criterion variable student laboratory performance outcome of individuals who employed traditional laboratory reporting methods and the composite set of predictor variables. On the contrary, there was no significant relationship between the criterion variable student laboratory performance outcome of individuals who employed modified laboratory reporting methods and the composite set of predictor variables.

  7. A DNA microarray-based methylation-sensitive (MS)-AFLP hybridization method for genetic and epigenetic analyses.

    Science.gov (United States)

    Yamamoto, F; Yamamoto, M

    2004-07-01

    We previously developed a PCR-based DNA fingerprinting technique named the Methylation Sensitive (MS)-AFLP method, which permits comparative genome-wide scanning of methylation status with a manageable number of fingerprinting experiments. The technique uses the methylation sensitive restriction enzyme NotI in the context of the existing Amplified Fragment Length Polymorphism (AFLP) method. Here we report the successful conversion of this gel electrophoresis-based DNA fingerprinting technique into a DNA microarray hybridization technique (DNA Microarray MS-AFLP). By performing a total of 30 (15 x 2 reciprocal labeling) DNA Microarray MS-AFLP hybridization experiments on genomic DNA from two breast and three prostate cancer cell lines in all pairwise combinations, and Southern hybridization experiments using more than 100 different probes, we have demonstrated that the DNA Microarray MS-AFLP is a reliable method for genetic and epigenetic analyses. No statistically significant differences were observed in the number of differences between the breast-prostate hybridization experiments and the breast-breast or prostate-prostate comparisons.

  8. A Rapid and Efficient Method for Purifying High Quality Total RNA from Peaches (Prunus persica for Functional Genomics Analyses

    Directory of Open Access Journals (Sweden)

    LEE MEISEL

    2005-01-01

    Full Text Available Prunus persica has been proposed as a genomic model for deciduous trees and the Rosaceae family. Optimized protocols for RNA isolation are necessary to further advance studies in this model species so that functional genomics analyses may be performed. Here we present an optimized protocol to rapidly and efficiently purify high-quality total RNA from peach fruits (Prunus persica). Isolating high-quality RNA from fruit tissue is often difficult due to large quantities of polysaccharides and polyphenolic compounds that accumulate in this tissue and co-purify with the RNA. Here we demonstrate that a modified version of the method used to isolate RNA from pine trees and the woody plant Cinnamomum tenuipilum is ideal for isolating high-quality RNA from the fruits of Prunus persica. This RNA may be used for many functional genomics-based experiments such as RT-PCR and the construction of large-insert cDNA libraries.

  9. Preliminary investigation of fuel cycle in fast reactors by the correlations method and sensitivity analyses of nuclear characteristics

    International Nuclear Information System (INIS)

    Amorim, E.S. do; Castro Lobo, P.D. de.

    1980-11-01

    A reduction of computing effort was achieved by applying space-independent continuous slowing-down theory to the spectrum-averaged cross sections and then expressing them as a quadratic correlation with temperature and composition. The decoupling between the variables that express some of the important nuclear characteristics made it possible to introduce a sensitivity-analysis treatment for the full prediction of the behavior, over the fuel cycle, of the LMFBR considered. A potential application of the method developed here is to predict the nuclear characteristics of another reactor relative to a reference reactor of the family considered. Excellent agreement with exact calculation is observed only when perturbations occur in nuclear data and/or fuel isotopic characteristics, but fair results are obtained with variations in system components other than the fuel. (Author) [pt

  10. Calculating ellipse area by the Monte Carlo method and analysing dice poker with Excel at high school

    Science.gov (United States)

    Benacka, Jan

    2016-08-01

    This paper reports on lessons in which 18-19-year-old high school students modelled random processes with Excel. In the first lesson, 26 students formulated a hypothesis on the area of an ellipse by using the analogy between the areas of the circle, square and rectangle. They verified the hypothesis by the Monte Carlo method with a spreadsheet model developed in the lesson. In the second lesson, 27 students analysed the dice poker game. First, they calculated the probability of the hands by combinatorial formulae. Then, they verified the result with a spreadsheet model developed in the lesson. The students were given a questionnaire to find out if they found the lessons interesting and contributing to their mathematical and technological knowledge.
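The verification step from the first lesson can be reproduced outside Excel as well. A minimal Monte Carlo sketch in Python (function name and sample size are ours): sample uniform points in one quadrant of the ellipse's bounding box and count the fraction that falls inside.

```python
import math
import random

def mc_ellipse_area(a, b, n=100_000, seed=42):
    """Monte Carlo estimate of the area of an ellipse with semi-axes a, b."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x = rng.uniform(0.0, a)  # one quadrant of the bounding box
        y = rng.uniform(0.0, b)
        if (x / a) ** 2 + (y / b) ** 2 <= 1.0:
            hits += 1
    return 4.0 * a * b * hits / n  # quadrant box area is a*b

estimate = mc_ellipse_area(3.0, 2.0)
print(estimate, math.pi * 3.0 * 2.0)  # estimate vs. exact pi*a*b
```

With 100,000 samples the estimate typically lands within about 1% of π·a·b, which is exactly the kind of numerical evidence the students used to support their hypothesis.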

  11. The SPHERE Study. Secondary prevention of heart disease in general practice: protocol of a randomised controlled trial of tailored practice and patient care plans with parallel qualitative, economic and policy analyses. [ISRCTN24081411].

    LENUS (Irish Health Repository)

    Murphy, Andrew W

    2005-07-29

    BACKGROUND: The aim of the SPHERE study is to design, implement and evaluate tailored practice and personal care plans to improve the process of care and objective clinical outcomes for patients with established coronary heart disease (CHD) in general practice across two different health systems on the island of Ireland. CHD is a common cause of death and a significant cause of morbidity in Ireland. Secondary prevention has been recommended as a key strategy for reducing levels of CHD mortality and general practice has been highlighted as an ideal setting for secondary prevention initiatives. Current indications suggest that there is considerable room for improvement in the provision of secondary prevention for patients with established heart disease on the island of Ireland. The review literature recommends structured programmes with continued support and follow-up of patients; the provision of training, tailored to practice needs of access to evidence of effectiveness of secondary prevention; structured recall programmes that also take account of individual practice needs; and patient-centred consultations accompanied by attention to disease management guidelines. METHODS: SPHERE is a cluster randomised controlled trial, with practice-level randomisation to intervention and control groups, recruiting 960 patients from 48 practices in three study centres (Belfast, Dublin and Galway). Primary outcomes are blood pressure, total cholesterol, physical and mental health status (SF-12) and hospital re-admissions. The intervention takes place over two years and data is collected at baseline, one-year and two-year follow-up. Data is obtained from medical charts, consultations with practitioners, and patient postal questionnaires. The SPHERE intervention involves the implementation of a structured systematic programme of care for patients with CHD attending general practice. 
It is a multi-faceted intervention that has been developed to respond to barriers and solutions to

  12. Risk analysis for decision support in electricity distribution system asset management: methods and frameworks for analysing intangible risks

    Energy Technology Data Exchange (ETDEWEB)

    Nordgaard, Dag Eirik

    2010-04-15

    During the last 10 to 15 years, electricity distribution companies throughout the world have been ever more focused on asset management as the guiding principle for their activities. Within asset management, risk is a key issue for distribution companies, together with handling of cost and performance. There is now an increased awareness of the need to include risk analyses in the companies' decision-making processes. Much of the work on risk in electricity distribution systems has focused on aspects of reliability. This is understandable, since reliability is surely an important feature of the product delivered by the electricity distribution infrastructure, and it is high on the agenda for regulatory authorities in many countries. However, electricity distribution companies are also concerned with other risks relevant to their decision making. This typically involves intangible risks, such as safety, environmental impacts and company reputation. In contrast to the numerous methodologies developed for reliability risk analysis, there are relatively few applications of structured analyses to support decisions concerning intangible risks, even though they represent an important motivation for decisions taken in electricity distribution companies. The overall objective of this PhD work has been to explore risk analysis methods that can be used to improve and support decision making in electricity distribution system asset management, with an emphasis on the analysis of intangible risks. The main contributions of this thesis can be summarised as: an exploration and testing of quantitative risk analysis (QRA) methods to support decisions concerning intangible risks; the development of a procedure for using life curve models to provide input to QRA models; and the development of a framework for risk-informed decision making where QRAs are used to analyse selected problems. In addition, the results contribute to clarifying the basic concepts of risk, and highlight challenges

  13. Multiple methods of surgical treatment combined with primary IOL implantation on traumatic lens subluxation/dislocation in patients with secondary glaucoma

    OpenAIRE

    Wang, Rui; Bi, Chun-Chao; Lei, Chun-Ling; Sun, Wen-Tao; Wang, Shan-Shan; Dong, Xiao-Juan

    2014-01-01

    AIM: To describe clinical findings and complications from cases of traumatic lens subluxation/dislocation in patients with secondary glaucoma, and to discuss multiple surgical treatment methods combined with primary intraocular lens (IOL) implantation. METHODS: Non-comparative retrospective observational case series. Participants: 30 cases (30 eyes) of lens subluxation/dislocation in patients with secondary glaucoma that underwent surgical treatment by the author in the Opht...

  14. Learners' experiences of teachers' aggression in a secondary ...

    African Journals Online (AJOL)

    2014-10-17

    Oct 17, 2014 ... to one or more types of aggression. It has been observed ... Method: The population consisted of school learners at a secondary school. Inclusion .... qualitative research who analysed the data independently. The researcher ...

  15. A QUANTITATIVE METHOD FOR ANALYSING 3-D BRANCHING IN EMBRYONIC KIDNEYS: DEVELOPMENT OF A TECHNIQUE AND PRELIMINARY DATA

    Directory of Open Access Journals (Sweden)

    Gabriel Fricout

    2011-05-01

    Full Text Available The normal human adult kidney contains between 300,000 and 1 million nephrons (the functional units of the kidney). Nephrons develop at the tips of the branching ureteric duct, and therefore ureteric duct branching morphogenesis is critical for normal kidney development. Current methods for analysing ureteric branching are mostly qualitative, and those quantitative methods that do exist do not account for the 3-dimensional (3D) shape of the ureteric "tree". We have developed a method for measuring the total length of the ureteric tree in 3D. This method is described and preliminary data are presented. The algorithm allows for performing a semi-automatic segmentation of a set of grey-level confocal images and an automatic skeletonisation of the resulting binary object. Measurements of length are automatically obtained, and numbers of branch points are manually counted. The final representation can be reconstructed by means of 3D volume rendering software, providing a fully rotating 3D perspective of the skeletonised tree, making it possible to identify and accurately measure branch lengths. Preliminary data show the total length estimates obtained with the technique to be highly reproducible. Repeat estimates of total tree length vary by just 1-2%. We will now use this technique to further define the growth of the ureteric tree in vitro, under both normal culture conditions and in the presence of various levels of specific molecules suspected of regulating ureteric growth. The data obtained will provide fundamental information on the development of renal architecture, as well as the regulation of nephron number.

  16. Methods and theory in bone modeling drift: comparing spatial analyses of primary bone distributions in the human humerus.

    Science.gov (United States)

    Maggiano, Corey M; Maggiano, Isabel S; Tiesler, Vera G; Chi-Keb, Julio R; Stout, Sam D

    2016-01-01

    This study compares two novel methods quantifying bone shaft tissue distributions, and relates observations on human humeral growth patterns for applications in anthropological and anatomical research. Microstructural variation in compact bone occurs due to developmental and mechanically adaptive circumstances that are 'recorded' by forming bone and are important for interpretations of growth, health, physical activity, adaptation, and identity in the past and present. Those interpretations hinge on a detailed understanding of the modeling process by which bones achieve their diametric shape, diaphyseal curvature, and general position relative to other elements. Bone modeling is a complex aspect of growth, potentially causing the shaft to drift transversely through formation and resorption on opposing cortices. Unfortunately, the specifics of modeling drift are largely unknown for most skeletal elements. Moreover, bone modeling has seen little quantitative methodological development compared with secondary bone processes, such as intracortical remodeling. The techniques proposed here, starburst point-count and 45° cross-polarization hand-drawn histomorphometry, permit the statistical and populational analysis of human primary tissue distributions and provide similar results despite being suitable for different applications. This analysis of a pooled archaeological and modern skeletal sample confirms the importance of extreme asymmetry in bone modeling as a major determinant of microstructural variation in diaphyses. Specifically, humeral drift is posteromedial in the human humerus, accompanied by a significant rotational trend. In general, results encourage the usage of endocortical primary bone distributions as an indicator and summary of bone modeling drift, enabling quantitative analysis by direction and proportion in other elements and populations. © 2015 Anatomical Society.

  17. Chemicals of emerging concern in water and bottom sediment in Great Lakes areas of concern, 2010 to 2011-Collection methods, analyses methods, quality assurance, and data

    Science.gov (United States)

    Lee, Kathy E.; Langer, Susan K.; Menheer, Michael A.; Foreman, William T.; Furlong, Edward T.; Smith, Steven G.

    2012-01-01

    The U.S. Geological Survey (USGS) cooperated with the U.S. Environmental Protection Agency and the U.S. Fish and Wildlife Service on a study to identify the occurrence of chemicals of emerging concern (CECs) in water and bottom-sediment samples collected during 2010–11 at sites in seven areas of concern (AOCs) throughout the Great Lakes. Study sites include tributaries to the Great Lakes in AOCs located near Duluth, Minn.; Green Bay, Wis.; Rochester, N.Y.; Detroit, Mich.; Toledo, Ohio; Milwaukee, Wis.; and Ashtabula, Ohio. This report documents the collection methods, analyses methods, quality-assurance data and analyses, and provides the data for this study. Water and bottom-sediment samples were analyzed at the USGS National Water Quality Laboratory in Denver, Colo., for a broad suite of CECs. During this study, 135 environmental and 23 field duplicate samples of surface water and wastewater effluent, 10 field blank water samples, and 11 field spike water samples were collected and analyzed. Sixty-one of the 69 wastewater indicator chemicals (laboratory method 4433) analyzed were detected at concentrations ranging from 0.002 to 11.2 micrograms per liter. Twenty-eight of the 48 pharmaceuticals (research method 8244) analyzed were detected at concentrations ranging from 0.0029 to 22.0 micrograms per liter. Ten of the 20 steroid hormones and sterols analyzed (research method 4434) were detected at concentrations ranging from 0.16 to 10,000 nanograms per liter. During this study, 75 environmental, 13 field duplicate samples, and 9 field spike samples of bottom sediment were collected and analyzed for a wide variety of CECs. Forty-seven of the 57 wastewater indicator chemicals (laboratory method 5433) analyzed were detected at concentrations ranging from 0.921 to 25,800 nanograms per gram. Seventeen of the 20 steroid hormones and sterols (research method 6434) analyzed were detected at concentrations ranging from 0.006 to 8,921 nanograms per gram. Twelve of

  18. A Systematic Protein Refolding Screen Method using the DGR Approach Reveals that Time and Secondary TSA are Essential Variables.

    Science.gov (United States)

    Wang, Yuanze; van Oosterwijk, Niels; Ali, Ameena M; Adawy, Alaa; Anindya, Atsarina L; Dömling, Alexander S S; Groves, Matthew R

    2017-08-24

    Refolding of proteins derived from inclusion bodies is very promising as it can provide a reliable source of target proteins of high purity. However, inclusion body-based protein production is often limited by the lack of techniques for the detection of correctly refolded protein. Thus, the selection of refolding conditions is mostly achieved using trial and error approaches, which is a time-consuming process. In this study, we use the latest developments in the differential scanning fluorimetry guided refolding approach as an analytical method to detect correctly refolded protein. We describe a systematic buffer screen that contains a 96-well primary pH-refolding screen in conjunction with a secondary additive screen. Our research demonstrates that this approach could be applied for determining refolding conditions for several proteins. In addition, it revealed which "helper" molecules, such as arginine, and which additives are essential. Four different proteins: HA-RBD, MDM2, IL-17A and PD-L1 were used to validate our refolding approach. Our systematic protocol evaluates the impact of the "helper" molecules, the pH, the buffer system and time on the protein refolding process in a high-throughput fashion. Finally, we demonstrate that refolding time and a secondary thermal shift assay buffer screen are critical factors for improving refolding efficiency.

  19. Growth of Bi doped cadmium zinc telluride single crystals by Bridgman oscillation method and its structural, optical, and electrical analyses

    International Nuclear Information System (INIS)

    Carcelen, V.; Rodriguez-Fernandez, J.; Dieguez, E.; Hidalgo, P.

    2010-01-01

    The II-VI compound semiconductor cadmium zinc telluride (CZT) is very useful for room temperature radiation detection applications. In the present research, we have successfully grown Bi doped CZT single crystals with two different zinc concentrations (8 and 14 at.%) by the Bridgman oscillation method, in which one experiment was carried out with a platinum (Pt) tube as the ampoule support. The Pt also acts as a cold finger, reducing the growth velocity and enhancing crystalline perfection. The grown single crystals have been studied with different analysis methods. The stoichiometry was confirmed by energy-dispersive X-ray and inductively coupled plasma mass spectrometry analyses, and it was found that no impurities were incorporated in the grown crystal. The presence of Cd and Te vacancies was determined by cathodoluminescence studies. Electrical properties were assessed by I-V analysis and indicated a higher resistivity (8.53×10⁸ Ω cm) for the crystal grown with the higher zinc concentration (with Cd excess) compared to the other (3.71×10⁵ Ω cm).

  20. Research and Development of a Control Method for Pathogenic Prion Infections in Secondary Raw Materials of the Meat Industry

    Directory of Open Access Journals (Sweden)

    A. Y. Prosekov

    2016-01-01

    Full Text Available A highly sensitive and specific method for identification of pathogenic prion protein was developed. It was found that the water-soluble fractions of beef proteins and plasma proteins of farm animals are normal prion proteins in cattle. Aligning gene sequences of the pathogenic and normal prion protein of sheep (Ovis aries) revealed that the nucleotide sequences of PrPc and PrPsc are identical. Murine monoclonal antibody 15B3 was selected. A synthetic sequence of 194 bp (the DNA-tail) was randomly produced; it has no homologues among database sequences. Two primers of 20 bp were selected for the synthesized DNA-tail. The experimental data indicate that, using the AGTCAGTCCTTGGCCTCCTT (left) and CAGTTTCGATCCTCCTCCAG (right) primers, the amplification should be performed as follows: pre-denaturation, 95 °C, 60 seconds, 1 cycle; denaturation, 95 °C, 30 seconds, 30 cycles; annealing, 56 °C, 60 seconds, 30 cycles; elongation, 72 °C, 30 seconds, 30 cycles; additional elongation, 1 cycle, 600 seconds. The optimum concentration of reaction mixture components for PCR was established. The high specificity of the developed test system and oligonucleotide primers was confirmed by electrophoretic separation of ground beef samples containing pathogenic prion protein, as well as by comparative analysis of the results of pathogenic prion protein determination. These results were obtained using the PCR test system and the TeSeE™ ELISA system.
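    The primer pair quoted above can be sanity-checked programmatically. The sketch below (illustrative only; the sequences are taken verbatim from the abstract, while the 20 bp length and GC-content checks are my own and not part of the published test system):

```python
# Illustrative check of the primers reported in the abstract. The sequences
# are copied from the text; the GC-content heuristic is an assumption added
# here, not part of the published PCR test system.
def gc_content(seq):
    """Fraction of G and C bases in a DNA sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

LEFT = "AGTCAGTCCTTGGCCTCCTT"    # left primer from the abstract
RIGHT = "CAGTTTCGATCCTCCTCCAG"   # right primer from the abstract

for name, primer in (("left", LEFT), ("right", RIGHT)):
    assert len(primer) == 20     # the abstract reports 20 bp primers
    print(name, len(primer), round(gc_content(primer), 2))
```

    Both primers come out at 55% GC, consistent with typical PCR primer design ranges.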

  1. Place of the imaging methods for diagnosis and treatment of primary and secondary hepatic tumors

    Energy Technology Data Exchange (ETDEWEB)

    Mircheva, M [Meditsinska Akademiya, Sofia (Bulgaria). Nauchen Institut po Vytreshni Bolesti

    1991-01-01

    A review is presented of the methods in which the imaging and treatment of liver tumors are mutually connected: ultrasound study, radioimmunoscintigraphy with ¹¹¹In-F(ab)₂-fragments from anti-CEA monoclonal antibodies, and computerized tomography (CT). Special attention has been paid to different approaches for increasing the sensitivity and specificity of CT: contrast intensification of the image with water-soluble contrast media; bolus administration of the contrast media with dynamic CT-scanning; introduction of the contrast medium into truncus coeliacus and/or a. hepatica; late CT. From the therapeutic point of view, the suspension of different cytostatics in Lipiodol and their intraarterial administration are discussed. The angiographic study is a prerequisite for arterial chemotherapy. Its advantages over transcatheter introduction of antimitotic drugs (by percutaneous catheters through a. brachialis or a. femoralis, or by surgical implantation) are stressed. The creation of conjugated chemotherapeutic polymers, which help to obtain a depot effect in conservative treatment, is mentioned as a new approach to improving regional chemotherapy. The first results from nuclear magnetic resonance tomography are evaluated. The tendency toward integration of the individual visualization and therapeutic approaches is stressed. 58 refs.

  2. Application of microbiological methods for secondary oil recovery from the Carpathian crude oil reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Karaskiewicz, J

    1974-01-01

    The investigation made it possible to isolate, from different ecological environments (soil, crude oil, formation water, industrial wastes), bacterial cultures of the genera Arthrobacter, Clostridium, Mycobacterium, Peptococcus, and Pseudomonas. These heterotrophic bacteria are characterized by a high metabolic and biogeochemical activity in hydrocarbon transformation. Experiments on a technical scale were conducted from 1961 to 1971 in 20 wells; in this study, only the 16 most typical examples are discussed. The experiments were conducted in Carpathian crude oil reservoirs. To each well, a 500:1 mixture of the so-called bacteria vaccine (containing an active biomass of cultures obtained by a specific cultivation method and holding 6×10⁵ bacteria cells in 1 ml of fluid, 2,000 kg of molasses, and 50 cu m of water originating from the reservoir submitted to treatment) was injected at 500 to 1,200 m. An intensification of the microbiological processes in the reservoir was observed. This phenomenon occurred not only in the wells into which the bacteria vaccine was injected, but also in the surrounding producing wells. At the same time, an increase in crude oil production occurred, on average in the range of 20 to 200%, and the surpluses of crude oil production continued for 2 to 8 yr. (92 refs.)

  3. Secondary successions of biota in oil-polluted peat soil upon different biological remediation methods

    Science.gov (United States)

    Melekhina, E. N.; Markarova, M. Yu.; Shchemelinina, T. N.; Anchugova, E. M.; Kanev, V. A.

    2015-06-01

    The effects of different bioremediation methods on the restoration of oil-polluted peat soil (Histosol) in the northernmost taiga subzone of European Russia were studied. The population dynamics of microorganisms belonging to different trophic groups (hydrocarbon-oxidizing, ammonifying, nitrifying, and oligonitrophilic) were analyzed together with data on soil enzyme (catalase and dehydrogenase) activities, population densities of soil microfauna groups, their structures, and the states of phytocenoses during a seven-year-long succession. Remediation with biopreparations composed of oil-oxidizing microorganisms (Roder, with Rhodococcus ruber and R. erythropolis, and Universal, with Rhodotorula glutinis and Rhodococcus sp.) was more efficient than agrochemical and technical remediation. It was concluded that the biopreparations activate microbiological oil destruction, thereby accelerating the restoration succession of phytocenosis and zoocenosis. A succession of dominant microfauna groups was observed: the dipteran larvae and Mesostigmata mites predominant at the early stages were replaced by collembolans at later stages. The pioneer oribatid mite species were Tectocepheus velatus, Oppiella nova, Liochthonius sellnicki, Oribatula tibialis, and Eupelops sp.

  4. Digital Badges for STEM Learning in Secondary Contexts: A Mixed Methods Study

    Science.gov (United States)

    Elkordy, Angela

    The deficit in STEM skills is a matter of concern for national economies and a major focus for educational policy makers. The development of Information and Communications Technologies (ICT) has resulted in a rapidly changing workforce of global scale. In addition, ICT have fostered the growth of digital and mobile technologies, which have been the formal and informal learning context for a generation of youth. The purpose of this study was to design a competency-based, digitally mediated learning intervention: digital badges for learning STEM habits of mind and practices. Designed purposefully, digital badge learning trajectories and criteria can be flexible tools for scaffolding, measuring, and communicating the acquisition of knowledge, skills, or competencies. One of the most often discussed attributes of digital badges is their ability to motivate learners. However, the research base to support this claim is in its infancy; there is little empirical evidence. A skills-based digital badge intervention was designed to demonstrate mastery learning in key, age-appropriate STEM competencies aligned with the Next Generation Science Standards (NGSS) and other educational standards. A mixed methods approach was used to study the impact of the digital badge intervention in the sample middle and high school population. Among the findings were statistically significant measures substantiating that, in this student population, the digital badges increased perceived competence and motivated learners to persist at tasks.

  5. Measurement of Primary and Secondary Stability of Dental Implants by Resonance Frequency Analysis Method in Mandible

    Science.gov (United States)

    Shokri, Mehran; Daraeighadikolaei, Arash

    2013-01-01

    Background. There is no doubt that the success of dental implants depends on their stability. The aim of this work was to measure the stability of dental implants prior to loading, using resonance frequency analysis (RFA) with the Osstell Mentor device. Methods. Ten healthy, nonsmoking patients over 40 years of age, with at least six months of complete or partial edentulism, received screw-type dental implants in a 1-stage procedure. RFA measurements were obtained at surgery and 1, 2, 3, 4, 5, 7, and 11 weeks after the implant surgery. Results. Among fifteen implants, the lowest mean stability measurement was at the 4th week after surgery in all bone types. At placement, the mean ISQ obtained with the magnetic device was 77.2 with 95% confidence interval (CI) = 2.49; it then decreased until the 4th week to 72.13 (95% CI = 2.88), and at the last measurement the mean implant stability had increased significantly (P value < 0.05) compared with the value at implant placement. These suggestions need to be further assessed through future studies. PMID:23737790

  6. Comparison of a point-of-care analyser for the determination of HbA1c with HPLC method.

    Science.gov (United States)

    Grant, D A; Dunseath, G J; Churm, R; Luzio, S D

    2017-08-01

    As the use of Point of Care Testing (POCT) devices for measurement of glycated haemoglobin (HbA1c) increases, it is imperative to determine how their performance compares to laboratory methods. This study compared the performance of the automated Quo-Test POCT device (EKF Diagnostics), which uses boronate fluorescence quenching technology, with a laboratory based High Performance Liquid Chromatography (HPLC) method (Biorad D10) for measurement of HbA1c. Whole blood EDTA samples from subjects (n=100) with and without diabetes were assayed using a BioRad D10 and a Quo-Test analyser. Intra-assay variation was determined by measuring six HbA1c samples in triplicate and inter-assay variation was determined by assaying four samples on 4 days. Stability was determined by assaying three samples stored at -20 °C for 14 and 28 days post collection. Median (IQR) HbA1c was 60 (44.0-71.2) mmol/mol (7.6 (6.17-8.66) %) and 62 (45.0-69.0) mmol/mol (7.8 (6.27-8.46) %) for D10 and Quo-Test, respectively, with very good agreement (R2=0.969, P<0.0001). Mean (range) intra- and inter-assay variation was 1.2% (0.0-2.7%) and 1.6% (0.0-2.7%) for the D10 and 3.5% (0.0-6.7%) and 2.7% (0.7-5.1%) for the Quo-Test. Mean change in HbA1c after 28 days storage at -20 °C was -0.7% and +0.3% for D10 and Quo-Test, respectively. Compared to the D10, the Quo-Test showed 98% agreement for diagnosis of glucose intolerance (IGT and T2DM) and 100% for diagnosis of T2DM. Good agreement between the D10 and Quo-Test was seen across a wide HbA1c range. The Quo-Test POCT device provided similar performance to a laboratory based HPLC method.

  7. Secondary Hypertension

    Science.gov (United States)

    Secondary hypertension (secondary high blood pressure) is high blood pressure that's caused by another medical condition. Secondary hypertension can be caused by conditions that affect your kidneys, ...

  8. Precursor analyses - The use of deterministic and PSA based methods in the event investigation process at nuclear power plants

    International Nuclear Information System (INIS)

    2004-09-01

    The efficient feedback of operating experience (OE) is a valuable source of information for improving the safety and reliability of nuclear power plants (NPPs). It is therefore essential to collect information on abnormal events from both internal and external sources. Internal operating experience is analysed to obtain a complete understanding of an event and of its safety implications. Corrective or improvement measures may then be developed, prioritized and implemented in the plant if considered appropriate. Information from external events may also be analysed in order to learn lessons from others' experience and prevent similar occurrences at our own plant. The traditional ways of investigating operational events have been predominantly qualitative. In recent years, a PSA-based method called probabilistic precursor event analysis has been developed, used and applied on a significant scale in many places for a number of plants. The method enables a quantitative estimation of the safety significance of operational events to be incorporated. The purpose of this report is to outline a synergistic process that makes more effective use of operating experience event information by combining the insights and knowledge gained from both approaches, traditional deterministic event investigation and PSA-based event analysis. The PSA-based view on operational events and PSA-based event analysis can support the process of operational event analysis at the following stages of the operational event investigation: (1) Initial screening stage. (It introduces an element of quantitative analysis into the selection process. Quantitative analysis of the safety significance of nuclear plant events can be a very useful measure when it comes to selecting internal and external operating experience information for its relevance.) (2) In-depth analysis. (PSA based event evaluation provides a quantitative measure for judging the significance of operational events, contributors to

  9. Theoretical and Experimental Studies of Dissimilar Secondary Metallurgy Methods for Improving Steel Cleanliness

    Science.gov (United States)

    Pitts-Baggett, April

    Due to continually increasing industry demand for clean steels, a multi-depth sampling approach was developed to gain a more detailed depiction of the reactions occurring in the ladle throughout Ladle Metallurgy Furnace (LMF) processing. This sampling technique allows samples to be taken approximately 1.5 m below the slag layer, at depths that could not be captured before. These samples were taken in conjunction with samples taken just under the slag layer, as well as in between. Additional samples were also taken during processing, including multi-point slag sampling. The heats were divided into five key processing steps: start of heat (S), after alloying (A), after desulfurization/start of pre-rinse (R), prior to Ca treatment (C), and end of heat (E). Sampling sets were collected to compare the effects of silicon, desulfurization rates, slag emulsification, slag evolution, and inclusion evolution. By sampling at multiple depths, it was determined that slag emulsification can follow the flow pattern of the ladle deeper into the ladle than previously seen in the literature. Inclusion evolution has been shown by numerous researchers; however, this study showed differences in inclusion grouping and distribution at different depths of the ladle through Automated Feature Analysis (AFA). The inclusion path was also seen to change depending on both the silicon content and the sulfur content of the steel. This method was applied to develop a desulfurization model at Nucor Steel Tuscaloosa, Inc. (NSTI). In addition to the desulfurization model, a calcium (Ca) model was also developed. The Ca model was applied to target a finished inclusion region based on the conditions up to the wire treatment. These conditions included time, silicon content, and sulfur concentration. Due to the inability of this model to handle every process variable, a new procedure was created to

  10. Primary and secondary battery consumption trends in Sweden 1996–2013: Method development and detailed accounting by battery type

    International Nuclear Information System (INIS)

    Patrício, João; Kalmykova, Yuliya; Berg, Per E.O.; Rosado, Leonardo; Åberg, Helena

    2015-01-01

    Highlights: • Developed MFA method was validated against national statistics. • Exponential increase in EEE sales leads to an increase in integrated battery consumption. • Digital convergence is a likely cause of the decline in primary battery consumption. • Factors for estimating integrated batteries in EEE are provided. • Sweden reached the collection rates defined by the European Union. - Abstract: In this article, a new method based on Material Flow Accounting is proposed to study detailed material flows in battery consumption that can be replicated for other countries. The method uses regularly available statistics on import, industrial production and export of batteries and battery-containing electric and electronic equipment (EEE). To promote method use by other scholars with no access to such data, several empirical results, and their trends over time, for the occurrence of different battery types among EEE types are provided. The information provided by the method can be used to identify drivers of battery consumption and to study the dynamic behavior of battery flows due to technology development, policies, consumer behavior and infrastructures. The method is exemplified by a study of battery flows in Sweden for the years 1996–2013. The batteries were accounted, both in units and weight, as primary and secondary; loose and integrated; by electrochemical composition; and by share of battery use between different types of EEE. Results show that, despite a fivefold increase in the consumption of rechargeable batteries, they account for only about 14% of total use of portable batteries. The recent increase in digital convergence has resulted in a sharp decline in the consumption of primary batteries, which has now stabilized at a fairly low level. Conversely, the consumption of integrated batteries has increased sharply. In 2013, 61% of the total weight of batteries sold in Sweden was collected, and for the particular case of alkaline manganese
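    The core accounting step that such an MFA method rests on can be sketched as apparent consumption per battery type. The snippet below is a minimal illustration with invented figures (not Swedish statistics); the function name and data layout are assumptions:

```python
# Minimal MFA sketch: apparent consumption = production + imports - exports,
# computed per battery type. All figures below are invented placeholders.
def apparent_consumption(production, imports, exports):
    keys = set(production) | set(imports) | set(exports)
    return {k: production.get(k, 0.0) + imports.get(k, 0.0) - exports.get(k, 0.0)
            for k in keys}

production = {"primary": 120.0, "secondary": 30.0}   # tonnes/year, hypothetical
imports    = {"primary": 400.0, "secondary": 210.0}
exports    = {"primary": 90.0,  "secondary": 55.0}

print(apparent_consumption(production, imports, exports))
```

    A full accounting would add batteries integrated in EEE, estimated from EEE sales and per-product battery factors, as the abstract describes.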

  11. Primary and secondary battery consumption trends in Sweden 1996–2013: Method development and detailed accounting by battery type

    Energy Technology Data Exchange (ETDEWEB)

    Patrício, João, E-mail: joao.patricio@chalmers.se [Department of Civil and Environmental Engineering, Chalmers University of Technology, 412 96 Gothenburg (Sweden); Kalmykova, Yuliya; Berg, Per E.O.; Rosado, Leonardo [Department of Civil and Environmental Engineering, Chalmers University of Technology, 412 96 Gothenburg (Sweden); Åberg, Helena [The Faculty of Education, University of Gothenburg, 40530 Gothenburg (Sweden)

    2015-05-15

    Highlights: • Developed MFA method was validated against national statistics. • Exponential increase in EEE sales leads to an increase in integrated battery consumption. • Digital convergence is a likely cause of the decline in primary battery consumption. • Factors for estimating integrated batteries in EEE are provided. • Sweden reached the collection rates defined by the European Union. - Abstract: In this article, a new method based on Material Flow Accounting is proposed to study detailed material flows in battery consumption that can be replicated for other countries. The method uses regularly available statistics on import, industrial production and export of batteries and battery-containing electric and electronic equipment (EEE). To promote method use by other scholars with no access to such data, several empirical results, and their trends over time, for the occurrence of different battery types among EEE types are provided. The information provided by the method can be used to identify drivers of battery consumption and to study the dynamic behavior of battery flows due to technology development, policies, consumer behavior and infrastructures. The method is exemplified by a study of battery flows in Sweden for the years 1996–2013. The batteries were accounted, both in units and weight, as primary and secondary; loose and integrated; by electrochemical composition; and by share of battery use between different types of EEE. Results show that, despite a fivefold increase in the consumption of rechargeable batteries, they account for only about 14% of total use of portable batteries. The recent increase in digital convergence has resulted in a sharp decline in the consumption of primary batteries, which has now stabilized at a fairly low level. Conversely, the consumption of integrated batteries has increased sharply. In 2013, 61% of the total weight of batteries sold in Sweden was collected, and for the particular case of alkaline manganese

  12. Comparison of a point-of-care analyser for the determination of HbA1c with HPLC method

    Directory of Open Access Journals (Sweden)

    D.A. Grant

    2017-08-01

    Full Text Available Aims: As the use of Point of Care Testing (POCT) devices for measurement of glycated haemoglobin (HbA1c) increases, it is imperative to determine how their performance compares to laboratory methods. This study compared the performance of the automated Quo-Test POCT device (EKF Diagnostics), which uses boronate fluorescence quenching technology, with a laboratory based High Performance Liquid Chromatography (HPLC) method (Biorad D10) for measurement of HbA1c. Methods: Whole blood EDTA samples from subjects (n=100) with and without diabetes were assayed using a BioRad D10 and a Quo-Test analyser. Intra-assay variation was determined by measuring six HbA1c samples in triplicate and inter-assay variation was determined by assaying four samples on 4 days. Stability was determined by assaying three samples stored at −20 °C for 14 and 28 days post collection. Results: Median (IQR) HbA1c was 60 (44.0–71.2) mmol/mol (7.6 (6.17–8.66) %) and 62 (45.0–69.0) mmol/mol (7.8 (6.27–8.46) %) for D10 and Quo-Test, respectively, with very good agreement (R2=0.969, P<0.0001). Mean (range) intra- and inter-assay variation was 1.2% (0.0–2.7%) and 1.6% (0.0–2.7%) for the D10 and 3.5% (0.0–6.7%) and 2.7% (0.7–5.1%) for the Quo-Test. Mean change in HbA1c after 28 days storage at −20 °C was −0.7% and +0.3% for D10 and Quo-Test respectively. Compared to the D10, the Quo-Test showed 98% agreement for diagnosis of glucose intolerance (IGT and T2DM) and 100% for diagnosis of T2DM. Conclusion: Good agreement between the D10 and Quo-Test was seen across a wide HbA1c range. The Quo-Test POCT device provided similar performance to a laboratory based HPLC method. Keywords: Point of care testing, HbA1c measurement
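    Agreement statistics of the kind reported here (R², mean bias between methods) can be computed from paired results as sketched below. The five sample pairs are invented for illustration and are not the study's data; the study itself used n=100 pairs:

```python
# Hedged sketch: agreement between a reference analyser and a POCT device
# from paired HbA1c results (mmol/mol). Values are invented, not study data.
def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

d10 = [44.0, 52.0, 60.0, 66.5, 71.2]   # reference HPLC method, hypothetical
quo = [45.0, 51.5, 62.0, 65.0, 69.0]   # POCT device, hypothetical

r2 = pearson_r(d10, quo) ** 2
bias = sum(q - d for q, d in zip(quo, d10)) / len(d10)  # mean difference
print(round(r2, 3), round(bias, 2))
```

    A mean-difference (Bland-Altman) plot of the same pairs would be the usual companion to the correlation figure.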

  13. Relating system-to-CFD coupled code analyses to theoretical framework of a multi-scale method

    International Nuclear Information System (INIS)

    Cadinu, F.; Kozlowski, T.; Dinh, T.N.

    2007-01-01

    Over past decades, analyses of transient processes and accidents in a nuclear power plant have been performed, to a significant extent and with great success, by means of so-called system codes, e.g. the RELAP5, CATHARE and ATHLET codes. These computer codes, based on a multi-fluid model of two-phase flow, provide an effective, one-dimensional description of the coolant thermal-hydraulics in the reactor system. For some components in the system, wherever needed, the effect of multi-dimensional flow is accounted for through approximate models. The latter are derived from scaled experiments conducted for selected accident scenarios. Increasingly, however, we have to deal with newer and ever more complex accident scenarios. In some such cases the system codes fail to serve as a simulation vehicle, largely due to their deficient treatment of multi-dimensional flow (e.g. in the downcomer and lower plenum). A possible way of improvement is to use the techniques of Computational Fluid Dynamics (CFD). Based on solving the Navier-Stokes equations, CFD codes have been developed and used broadly to perform analysis of multi-dimensional flow, predominantly in non-nuclear industry and for single-phase flow applications. It is clear that CFD simulations cannot substitute for system codes but only complement them. Given the intrinsic multi-scale nature of this problem, we propose to relate it to the more general field of research on multi-scale simulations. Even though multi-scale methods are developed on a case-by-case basis, the need for a unified framework led to the development of the heterogeneous multi-scale method (HMM)

  14. Flux decay during thermonuclear X-ray bursts analysed with the dynamic power-law index method

    Science.gov (United States)

    Kuuttila, J.; Kajava, J. J. E.; Nättilä, J.; Motta, S. E.; Sánchez-Fernández, C.; Kuulkers, E.; Cumming, A.; Poutanen, J.

    2017-08-01

    The cooling of type-I X-ray bursts can be used to probe the nuclear burning conditions in neutron star envelopes. The flux decay of the bursts has traditionally been modelled with an exponential, even though theoretical considerations predict power-law-like decays. We have analysed a total of 540 type-I X-ray bursts from five low-mass X-ray binaries observed with the Rossi X-ray Timing Explorer. We grouped the bursts according to the source spectral state during which they were observed (hard or soft), flagging those bursts that showed signs of photospheric radius expansion (PRE). The decay phases of all the bursts were then fitted with a dynamic power-law index method. This method provides a new way of probing the chemical composition of the accreted material. Our results show that in the hydrogen-rich sources the power-law decay index is variable during the burst tails and that simple cooling models qualitatively describe the cooling of the presumably helium-rich sources 4U 1728-34 and 3A 1820-303. The cooling in the hydrogen-rich sources 4U 1608-52, 4U 1636-536, and GS 1826-24, instead, is clearly different and depends on the spectral states and on whether PRE occurred or not. Especially the hard state bursts behave differently than the models predict, exhibiting a peculiar rise in the cooling index at low burst fluxes, which suggests that the cooling in the tail is much faster than expected. Our results indicate that the drivers of the bursting behaviour are not only the accretion rate and chemical composition of the accreted material, but also the cooling that is somehow linked to the spectral states. The latter suggests that the properties of the burning layers deep in the neutron star envelope might be impacted differently depending on the spectral state.
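    The central quantity in a dynamic power-law index analysis is the local logarithmic decay slope: for a decay F(t) ∝ t^(-α), the index is α(t) = -d ln F / d ln t. The sketch below only loosely follows the published method and uses a synthetic light curve, not RXTE data:

```python
import math

# For a decay F(t) ∝ t^(-alpha), the local (dynamic) index is
# alpha(t) = -d ln F / d ln t. Central differences in log-log space
# estimate it at interior points of a light curve.
def local_power_law_index(t, f):
    out = []
    for i in range(1, len(t) - 1):
        dlnf = math.log(f[i + 1]) - math.log(f[i - 1])
        dlnt = math.log(t[i + 1]) - math.log(t[i - 1])
        out.append(-dlnf / dlnt)
    return out

times = [2.0, 4.0, 8.0, 16.0, 32.0]   # seconds after burst peak, synthetic
flux = [t ** -1.3 for t in times]     # exact power law with alpha = 1.3

print(local_power_law_index(times, flux))  # each value close to 1.3
```

    On real burst tails the recovered α(t) varies with flux, which is exactly the diagnostic the abstract exploits.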

  15. Coupled 230Th/234U-ESR analyses for corals: A new method to assess sealevel change

    International Nuclear Information System (INIS)

    Blackwell, Bonnie A.B.; Teng, Steve J.T.; Lundberg, Joyce A.; Blickstein, Joel I.B.; Skinner, Anne R.

    2007-01-01

    Although coupled ²³⁰Th/²³⁴U-ESR analyses have become routine for dating teeth, they have never been used for corals. While the ESR age depends on, and requires assumptions about, the time-averaged cosmic dose rate, D̄cos(t), ²³⁰Th/²³⁴U dates do not. Since the D̄cos(t) received by corals depends on the attenuation by any intervening material, the D̄cos(t) response reflects changing water depths and sediment cover. By coupling the two methods, one can determine the age and a unique D̄cos,coupled(t) simultaneously. From a coral's water depth and sedimentary history as predicted by a given sealevel curve, one can predict D̄cos,sealevel(t). If D̄cos,coupled(t) agrees well with D̄cos,sealevel(t), this provides independent validation for the curve used to build D̄cos,sealevel(t). For six corals dated at 7-128 ka from Florida Platform reef crests, the sealevel curve by Waelbroeck et al. [2002. Sea-level and deep water temperature changes derived from benthonic foraminifera isotopic records. Quat. Sci. Rev. 21, 295-305] predicted their D̄cos,coupled(t) values as well as, or better than, the SPECMAP sealevel curve. Where a whole reef can be sampled over a transect, a precise test for sealevel curves could be developed

  16. Eddy Heat Conduction and Nonlinear Stability of a Darcy Lapwood System Analysed by the Finite Spectral Method

    Directory of Open Access Journals (Sweden)

    Jónas Elíasson

    2014-01-01

    Full Text Available A finite Fourier transform is used to perform both linear and nonlinear stability analyses of a Darcy-Lapwood system of convective rolls. The method shows how many modes are unstable, the wave-number instability band within each mode, the maximum growth rate (most critical wave numbers) on each mode, and the nonlinear growth rates for each amplitude as a function of the porous Rayleigh number. A single amplitude controls the nonlinear growth rates, and thereby the physical flow rate and fluid velocity, on each mode; these are called the flak amplitudes. A discrete Fourier transform is used for numerical simulations, and here frequency combinations appear that the traditional cut-off infinite transforms do not have. The discrete transforms show a stationary solution in the weak instability phase, but when carried past 2 unstable modes they show fluctuating motion where all amplitudes except the flak amplitude may be zero on average. This leads to a flak-amplitude scaling process for the heat conduction, producing an eddy heat conduction coefficient for which a Nu-RaL relationship is found. It fits experiments better than previously found solutions, but lies below the experimental values.

  17. Seasonal analyses of carbon dioxide and energy fluxes above an oil palm plantation using the eddy covariance method

    Science.gov (United States)

    Ibrahim, Anis; Haniff Harun, Mohd; Yusup, Yusri

    2017-04-01

    This study presents measurements of carbon dioxide, latent heat, and sensible heat fluxes above a mature oil palm plantation on mineral soil in Keratong, Pahang, Peninsular Malaysia. The sampling campaign was conducted over a 25-month period, from September 2013 to February 2015 and May 2016 to November 2016, using the eddy covariance method. The main aim of this work is to assess carbon dioxide and energy fluxes over this plantation at different time scales, seasonal and diurnal, and to determine the effects of season and relevant meteorological parameters on these fluxes. Energy balance closure analysis gave a slope of 0.69 between the sum of the latent and sensible heat fluxes and the total incoming energy, with an R2 value of 0.86 and an energy balance ratio of 0.80. The average net radiation was 108 W m-2. The results show that, at the diurnal scale, the carbon dioxide, latent, and sensible heat fluxes exhibited a clear diurnal trend: the carbon dioxide flux reached its minimum of -3.59 μmol m-2 s-1 in the mid-afternoon and its maximum in the morning, while the latent and sensible heat fluxes behaved conversely. The average carbon dioxide flux was -0.37 μmol m-2 s-1. At the seasonal timescale, carbon dioxide fluxes did not show any apparent trend except during the Northeast Monsoon, when the monthly means of carbon dioxide were most variable.
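
    The closure statistics quoted above (slope, R2, energy balance ratio) come from a straight-line fit of the turbulent fluxes against available energy; a minimal sketch on synthetic half-hourly data (all values invented, not the Keratong data):

```python
import numpy as np

# Hypothetical half-hourly flux data in W m^-2
rng = np.random.default_rng(7)
available_energy = rng.uniform(50.0, 600.0, 500)                       # Rn - G
turbulent_flux = 0.69 * available_energy + rng.normal(0.0, 20.0, 500)  # LE + H

# Closure slope and intercept from ordinary least squares
slope, intercept = np.polyfit(available_energy, turbulent_flux, 1)

# Coefficient of determination R^2 of the fit
pred = slope * available_energy + intercept
r2 = 1.0 - np.sum((turbulent_flux - pred) ** 2) / \
           np.sum((turbulent_flux - turbulent_flux.mean()) ** 2)

# Energy balance ratio: cumulative turbulent flux over cumulative available energy
ebr = turbulent_flux.sum() / available_energy.sum()
```

    A slope below 1 and an energy balance ratio below 1, as in the record, indicate the common eddy-covariance underclosure of the surface energy budget.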

  18. Ionic secondary emission SIMS principles and instrumentation

    International Nuclear Information System (INIS)

    Darque-Ceretti, E.; Migeon, H.N.; Aucouturier, M.

    1998-01-01

    Secondary ion mass spectrometry (SIMS) is a material analysis technique based on ion bombardment. It is a micro-analysis method, given that the dimensions of the analysed volume are below one micrometer. This paper first details some principles of secondary ion emission and then describes the instrumentation: microprobe, ion production, and spectrometers. (A.L.B.)

  19. Primary and secondary battery consumption trends in Sweden 1996-2013: method development and detailed accounting by battery type.

    Science.gov (United States)

    Patrício, João; Kalmykova, Yuliya; Berg, Per E O; Rosado, Leonardo; Åberg, Helena

    2015-05-01

    In this article, a new method based on Material Flow Accounting is proposed to study detailed material flows in battery consumption; it can be replicated for other countries. The method uses regularly available statistics on the import, industrial production and export of batteries and battery-containing electric and electronic equipment (EEE). To promote use of the method by other scholars with no access to such data, several empirical results and their trends over time, for the occurrence of different types of batteries among the EEE types, are provided. The information provided by the method can be used to identify drivers of battery consumption and to study the dynamic behavior of battery flows due to technology development, policies, consumer behavior and infrastructure. The method is exemplified by a study of battery flows in Sweden for the years 1996-2013. The batteries were accounted for, both in units and by weight, as primary and secondary batteries; loose and integrated; by electrochemical composition; and by share of battery use between different types of EEE. Results show that, despite a fivefold increase in the consumption of rechargeable batteries, they account for only about 14% of the total use of portable batteries. The recent increase in digital convergence has resulted in a sharp decline in the consumption of primary batteries, which has now stabilized at a fairly low level. Conversely, the consumption of integrated batteries has increased sharply. In 2013, 61% of the total weight of batteries sold in Sweden was collected, and for the particular case of alkaline manganese dioxide batteries, the value reached 74%. Copyright © 2015 Elsevier Ltd. All rights reserved.
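
    The accounting identity at the core of such Material Flow Accounting, apparent consumption from production and trade statistics, can be sketched as follows; the flow numbers are invented for illustration:

```python
def apparent_consumption(production, imports, exports):
    """Material Flow Accounting identity: domestic (apparent) consumption
    equals industrial production plus imports minus exports."""
    return production + imports - exports

# Hypothetical battery flows for one year, in tonnes (values invented)
loose = apparent_consumption(production=120.0, imports=4300.0, exports=260.0)
integrated = apparent_consumption(production=0.0, imports=980.0, exports=75.0)
total = loose + integrated
```

    In the actual method, the integrated flows must additionally be derived from per-EEE-type battery occurrence factors, since batteries inside equipment do not appear in battery trade statistics directly.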

  20. Automated correlation and classification of secondary ion mass spectrometry images using a k-means cluster method.

    Science.gov (United States)

    Konicek, Andrew R; Lefman, Jonathan; Szakal, Christopher

    2012-08-07

    We present a novel method for correlating and classifying ion-specific time-of-flight secondary ion mass spectrometry (ToF-SIMS) images within a multispectral dataset by grouping images with similar pixel intensity distributions. Binary centroid images are created by employing a k-means-based custom algorithm. Centroid images are compared to grayscale SIMS images using a newly developed correlation method that assigns the SIMS images to classes that have similar spatial (rather than spectral) patterns. Image features of both large and small spatial extent are identified without the need for image pre-processing, such as normalization or fixed-range mass-binning. A subsequent classification step tracks the class assignment of SIMS images over multiple iterations of increasing n classes per iteration, providing information about groups of images that have similar chemistry. Details are discussed while presenting data acquired with ToF-SIMS on a model sample of laser-printed inks. This approach can lead to the identification of distinct ion-specific chemistries for mass spectral imaging by ToF-SIMS, as well as matrix-assisted laser desorption ionization (MALDI), and desorption electrospray ionization (DESI).
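
    The custom centroid algorithm of the record is not reproduced here, but the underlying idea of grouping images by their pixel-intensity distributions can be sketched with a plain Lloyd's k-means over per-image histograms (all data synthetic, function names hypothetical):

```python
import numpy as np

def kmeans(X, k=2, iters=50):
    """Minimal Lloyd's k-means with deterministic farthest-point initialisation."""
    centroids = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centroids], axis=0)
        centroids.append(X[d.argmax()])
    centroids = np.array(centroids, dtype=float)
    for _ in range(iters):
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels

def intensity_histogram(img, bins=16):
    """Normalised pixel-intensity distribution of one ion image."""
    h, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    return h / h.sum()

# Synthetic "ion images": one group dim, one group bright
rng = np.random.default_rng(1)
dim = [rng.uniform(0.0, 0.3, (32, 32)) for _ in range(5)]
bright = [rng.uniform(0.7, 1.0, (32, 32)) for _ in range(5)]
hists = np.array([intensity_histogram(im) for im in dim + bright])
labels = kmeans(hists, k=2)
```

    Clustering histograms rather than raw pixels is what makes the grouping insensitive to absolute intensity scaling, which is why no normalisation or fixed-range mass-binning is needed in the approach described above.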

  1. Application of artificial neural network method to exergy and energy analyses of fluidized bed dryer for potato cubes

    International Nuclear Information System (INIS)

    Azadbakht, Mohsen; Aghili, Hajar; Ziaratban, Armin; Torshizi, Mohammad Vahedi

    2017-01-01

    Drying of the samples was performed at inlet temperatures of 45, 50, and 55 °C, air velocities of 3.2, 6.8, and 9.1 m s"−"1, and bed depths of 1.5, 2.2, and 3 cm. The effects of these parameters on energy utilization, energy efficiency, energy utilization ratio, and exergy loss and efficiency were evaluated. Furthermore, an artificial neural network (ANN) was employed to predict the energy and exergy parameters, and a simulation of the thermodynamic drying process was carried out using the created ANN. A network was constructed from learning algorithms and transfer functions that could predict, with good accuracy, the exergy and energy parameters related to the drying process. The results revealed that energy utilization, efficiency, and utilization ratio increased with increasing air velocity and bed depth; energy utilization and efficiency also increased with temperature, whereas the energy utilization ratio decreased as the temperature rose. It was also found that exergy loss and efficiency improved with increasing air velocity, temperature, and bed depth. Finally, the results of the statistical analyses indicated that neural networks can be utilized in intelligent drying processes, which account for a large share of energy use in the food industry. - Highlights: • Energy utilization increased with increasing temperature, air velocity and bed depth. • Exergy loss increased with increasing air velocity, temperature and bed depth. • Prediction by a trained neural network is faster than with usual mathematical models. • ANN is a suitable method to predict energy and exergy in various driers.

  2. Coupled {sup 230}Th/{sup 234}U-ESR analyses for corals: A new method to assess sealevel change

    Energy Technology Data Exchange (ETDEWEB)

    Blackwell, Bonnie A.B. [Department of Chemistry, Williams College, Williamstown, MA 01267 (United States); RFK Science Research Institute, Glenwood Landing, NY 11547 (United States)], E-mail: bonnie.a.b.blackwell@williams.edu; Teng, Steve J.T. [RFK Science Research Institute, Glenwood Landing, NY 11547 (United States)], E-mail: mteng1584@gmail.com; Lundberg, Joyce A. [Department of Geography and Environmental Studies, Carleton University, Ottawa, ON, Canada K1S 5B6 (Canada)], E-mail: joyce_lundberg@carleton.ca; Blickstein, Joel I.B. [Department of Chemistry, Williams College, Williamstown, MA 01267 (United States); RFK Science Research Institute, Glenwood Landing, NY 11547 (United States); Skinner, Anne R. [Department of Chemistry, Williams College, Williamstown, MA 01267 (United States); RFK Science Research Institute, Glenwood Landing, NY 11547 (United States)], E-mail: anne.r.skinner@williams.edu

    2007-07-15

    Although coupled {sup 230}Th/{sup 234}U-ESR analyses have become routine for dating teeth, they have never been used for corals. While the ESR age depends on, and requires assumptions about, the time-averaged cosmic dose rate, D-bar{sub cos}(t), {sup 230}Th/{sup 234}U dates do not. Since D-bar{sub cos}(t) received by corals depends on the attenuation by any intervening material, D-bar{sub cos}(t) response reflects changing water depths and sediment cover. By coupling the two methods, one can determine the age and a unique D-bar{sub cos,coupled}(t) simultaneously. From a coral's water depth and sedimentary history as predicted by a given sealevel curve, one can predict D-bar{sub cos,sealevel}(t). If D-bar{sub cos,coupled}(t) agrees well with D-bar{sub cos,sealevel}(t), this provides independent validation for the curve used to build D-bar{sub cos,sealevel}(t). For six corals dated at 7-128 ka from Florida Platform reef crests, the sealevel curve by Waelbroeck et al. [2002. Sea-level and deep water temperature changes derived from benthonic foraminifera isotopic records. Quat. Sci. Rev. 21, 295-305] predicted their D-bar{sub cos,coupled}(t) values as well as, or better than, the SPECMAP sealevel curve. Where a whole reef can be sampled over a transect, a precise test for sealevel curves could be developed.

  3. Japanese standard method for safety evaluation using best estimate code based on uncertainty and scaling analyses with statistical approach

    International Nuclear Information System (INIS)

    Mizokami, Shinya; Hotta, Akitoshi; Kudo, Yoshiro; Yonehara, Tadashi; Watada, Masayuki; Sakaba, Hiroshi

    2009-01-01

    Current licensing practice in Japan consists of using conservative boundary and initial conditions (BIC), assumptions and analytical codes. The safety analyses for licensing purposes are inherently deterministic; therefore, conservative BIC and assumptions, such as single failure, must be employed for the analyses. However, using conservative analytical codes is not considered essential. The standard committee of the Atomic Energy Society of Japan (AESJ) drew up the standard for using best estimate codes for safety analyses in 2008, after three years of discussions reflecting recent domestic and international findings. (author)
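
    Best-estimate-plus-uncertainty licensing analyses of this kind typically rely on Wilks' non-parametric tolerance limits to size the number of code runs. A minimal sketch of the first-order, one-sided rule (the classic 95%/95% case); the AESJ standard itself is not quoted here, so treat this as background on the statistical approach rather than its exact prescription:

```python
def wilks_sample_size(coverage=0.95, confidence=0.95):
    """Smallest N such that the largest of N random code runs bounds the
    `coverage` quantile with one-sided `confidence`: 1 - coverage**N >= confidence."""
    n = 1
    while 1.0 - coverage ** n < confidence:
        n += 1
    return n
```

    For 95% coverage at 95% confidence this gives the well-known N = 59 code runs, with higher-order or two-sided variants requiring more.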

  4. MILROY, Lesley. Observing and Analysing Natural Language: A Critical Account of Sociolinguistic Method. Oxford: Basil Blackwell, 1987. 230pp.

    Directory of Open Access Journals (Sweden)

    Iria Werlang Garcia

    2008-04-01

    Full Text Available Lesley Milroy's Observing and Analysing Natural Language is a recent addition to an ever-growing number of publications in the field of Sociolinguistics. It carries the weight of one of the most experienced authors currently working in the field and should offer basic information to both newcomers and established investigators of natural language.

  5. Reform-based science teaching: A mixed-methods approach to explaining variation in secondary science teacher practice

    Science.gov (United States)

    Jetty, Lauren E.

    The purpose of this two-phase, sequential explanatory mixed-methods study was to understand and explain the variation seen in secondary science teachers' enactment of reform-based instructional practices. Utilizing teacher socialization theory, this mixed-methods analysis was conducted to determine the relative influence of secondary science teachers' characteristics, backgrounds and experiences across their teacher development to explain the range of teaching practices exhibited by graduates of three reform-oriented teacher preparation programs. Data for this study were obtained from the Investigating the Meaningfulness of Preservice Programs Across the Continuum of Teaching (IMPPACT) Project, a multi-university, longitudinal study funded by NSF. In the first, quantitative phase of the study, data for the sample (N=120) were collected from three surveys in the IMPPACT Project database. Hierarchical multiple regression analysis was used to examine the separate as well as the combined influence of factors such as teachers' personal and professional background characteristics, beliefs about reform-based science teaching, feelings of preparedness to teach science, school context, school culture and climate of professional learning, and influences of the policy environment on the teachers' use of reform-based instructional practices. Findings indicate that three blocks of variables (professional background, beliefs/efficacy, and local school context) contributed significantly, together explaining nearly 38% of the variation in secondary science teachers' use of reform-based instructional practices. The five variables that significantly contributed to explaining variation in teachers' use of reform-based instructional practices in the full model were: university of teacher preparation, sense of preparedness for teaching science, the quality of professional development, science content-focused professional development, and the perceived level of professional autonomy. Using the results
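
    Hierarchical multiple regression of the kind described enters predictor blocks cumulatively and attributes explained variance to each block via the change in R2. A sketch on synthetic data (predictor blocks and effect sizes invented, not the IMPPACT data):

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least squares fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

rng = np.random.default_rng(1)
n = 120                                   # sample size matching the record
block1 = rng.normal(size=(n, 2))          # e.g. professional background (hypothetical)
block2 = rng.normal(size=(n, 2))          # e.g. beliefs/efficacy (hypothetical)
y = block1 @ np.array([0.5, 0.3]) + block2 @ np.array([0.4, 0.2]) \
    + rng.normal(scale=1.0, size=n)

r2_block1 = r_squared(block1, y)                      # block 1 alone
r2_full = r_squared(np.hstack([block1, block2]), y)   # blocks 1 + 2
delta_r2 = r2_full - r2_block1                        # variance credited to block 2
```

    Because the models are nested, R2 can only grow as blocks are added; the increment delta_r2 is what is tested for significance at each step.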

  6. Characterization of biogenic secondary organic aerosols using statistical methods; Charakterisierung Biogener Sekundaerer Organischer Aerosole mit Statistischen Methoden

    Energy Technology Data Exchange (ETDEWEB)

    Spindler, Christian

    2010-07-01

    Atmospheric aerosols have important influence on the radiation balance of the Earth, on visibility and human health. Secondary organic aerosol is formed from gas-to-particle conversion of oxidized volatile organic compounds. A dominant fraction of the gases originates from plant emissions, making biogenic secondary organic aerosol (BSOA) an especially important constituent of the atmosphere. Knowing the chemical composition of BSOA particles is crucial for a thorough understanding of aerosol processes in the environment. In this work, the chemical composition of BSOA particles was measured with aerosol mass spectrometry and analyzed with statistical methods. The experimental part of the work comprises process studies of the formation and aging of biogenic aerosols in simulation chambers. Using a plant chamber, real tree emissions were used to produce particles in a way close to conditions in forest environments. In the outdoor chamber SAPHIR, OH-radicals were produced from the photooxidation of ozone under illumination with natural sunlight. Here, BSOA was produced from defined mixtures of mono- and sesquiterpenes that represent boreal forest emissions. A third kind of experiments was performed in the indoor chamber AIDA. Here, particles were produced from ozonolysis of single monoterpenes and aged by condensing OH-oxidation products. Two aerosol mass spectrometers (AMS) were used to measure the chemical composition of the particles. One of the instruments is equipped with a quadrupole mass spectrometer providing unit mass resolution. The second instrument contains a time-of-flight mass spectrometer and provides mass resolution sufficient to distinguish different fragments with the same nominal mass. Aerosol mass spectra obtained with these instruments are strongly fragmented due to electron impact ionization of the evaporated molecules. In addition, typical BSOA mass spectra are very similar to each other. 
In order to get a more detailed knowledge about the mass

  7. Secondary mediation and regression analyses of the PTClinResNet database: determining causal relationships among the International Classification of Functioning, Disability and Health levels for four physical therapy intervention trials.

    Science.gov (United States)

    Mulroy, Sara J; Winstein, Carolee J; Kulig, Kornelia; Beneck, George J; Fowler, Eileen G; DeMuth, Sharon K; Sullivan, Katherine J; Brown, David A; Lane, Christianne J

    2011-12-01

    Each of the 4 randomized clinical trials (RCTs) hosted by the Physical Therapy Clinical Research Network (PTClinResNet) targeted a different disability group (low back disorder in the Muscle-Specific Strength Training Effectiveness After Lumbar Microdiskectomy [MUSSEL] trial, chronic spinal cord injury in the Strengthening and Optimal Movements for Painful Shoulders in Chronic Spinal Cord Injury [STOMPS] trial, adult stroke in the Strength Training Effectiveness Post-Stroke [STEPS] trial, and pediatric cerebral palsy in the Pediatric Endurance and Limb Strengthening [PEDALS] trial for children with spastic diplegic cerebral palsy) and tested the effectiveness of a muscle-specific or functional activity-based intervention on primary outcomes that captured pain (STOMPS, MUSSEL) or locomotor function (STEPS, PEDALS). The focus of these secondary analyses was to determine causal relationships among outcomes across levels of the International Classification of Functioning, Disability and Health (ICF) framework for the 4 RCTs. With the database from PTClinResNet, we used 2 separate secondary statistical approaches, mediation analysis for the MUSSEL and STOMPS trials and regression analysis for the STEPS and PEDALS trials, to test relationships among muscle performance, primary outcomes (pain related and locomotor related), activity and participation measures, and overall quality of life. Predictive models were stronger for the 2 studies with pain-related primary outcomes. Change in muscle performance mediated or predicted reductions in pain for the MUSSEL and STOMPS trials and, to some extent, walking speed for the STEPS trial. Changes in primary outcome variables were significantly related to changes in activity and participation variables for all 4 trials. Improvement in activity and participation outcomes mediated or predicted increases in overall quality of life for the 3 trials with adult populations.
Variables included in the statistical models were limited to those
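
    Mediation analyses of this kind typically follow the product-of-coefficients logic: the indirect effect of X on Y through mediator M is the product of the X-to-M slope (a) and the M-to-Y slope controlling for X (b). A sketch with simulated variables (variable names and effect sizes hypothetical, not the PTClinResNet data):

```python
import numpy as np

def ols_coefs(X, y):
    """OLS coefficients with the intercept prepended."""
    X1 = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0]

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)                         # e.g. change in muscle performance
m = 0.6 * x + rng.normal(scale=0.5, size=n)    # mediator, e.g. pain reduction
y = 0.5 * m + 0.1 * x + rng.normal(scale=0.5, size=n)   # e.g. quality of life

a = ols_coefs(x, m)[1]                          # path a: X -> M
b = ols_coefs(np.column_stack([x, m]), y)[2]    # path b: M -> Y, controlling for X
indirect_effect = a * b                         # mediated (indirect) effect of X on Y
```

    In practice the indirect effect a*b is then tested with a Sobel test or bootstrap confidence interval rather than read off directly.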

  8. Development and application of neutron transport methods and uncertainty analyses for reactor core calculations. Technical report; Entwicklung und Einsatz von Neutronentransportmethoden und Unsicherheitsanalysen fuer Reaktorkernberechnungen. Technischer Bericht

    Energy Technology Data Exchange (ETDEWEB)

    Zwermann, W.; Aures, A.; Bernnat, W.; and others

    2013-06-15

    This report documents the status of the research and development goals reached within the reactor safety research project RS1503 ''Development and Application of Neutron Transport Methods and Uncertainty Analyses for Reactor Core Calculations'' as of the 1{sup st} quarter of 2013. The superordinate goal of the project is the development, validation, and application of neutron transport methods and uncertainty analyses for reactor core calculations. These calculation methods will mainly be applied to problems related to the core behaviour of light water reactors and innovative reactor concepts. The contributions of this project towards achieving this goal are the further development, validation, and application of deterministic and stochastic calculation programmes and of methods for uncertainty and sensitivity analyses, as well as the assessment of artificial neural networks, for providing a complete nuclear calculation chain. This comprises processing basic nuclear data, creating multi-group data for diffusion and transport codes, obtaining reference solutions for stationary states with Monte Carlo codes, performing coupled 3D full core analyses in diffusion approximation and with other deterministic and also Monte Carlo transport codes, and implementing uncertainty and sensitivity analyses with the aim of propagating uncertainties through the whole calculation chain from fuel assembly, spectral and depletion calculations to coupled transient analyses. This calculation chain shall be applicable to light water reactors and also to innovative reactor concepts, and therefore has to be extensively validated with the help of benchmarks and critical experiments.

  9. Secondary dentine as a sole parameter for age estimation: Comparison and reliability of qualitative and quantitative methods among North Western adult Indians

    Directory of Open Access Journals (Sweden)

    Jasbir Arora

    2016-06-01

    Full Text Available The indestructible nature of teeth against most environmental abuses makes them useful in disaster victim identification (DVI). The present study was undertaken to examine the reliability of Gustafson's qualitative method and Kedici's quantitative method of measuring secondary dentine for age estimation among North Western adult Indians. 196 (M = 85; F = 111) single-rooted teeth were collected from the Department of Oral Health Sciences, PGIMER, Chandigarh. Ground sections were prepared, and the amount of secondary dentine formed was scored qualitatively according to Gustafson's 0–3 scoring system (method 1) and quantitatively following Kedici's micrometric measurement method (method 2). Out of the 196 teeth, 180 samples (M = 80; F = 100) were found to be suitable for measuring secondary dentine following Kedici's method. The absolute mean error of age was calculated for both methodologies. Results clearly showed that, in the pooled data, method 1 gave an error of ±10.4 years whereas method 2 exhibited an error of approximately ±13 years. A statistically significant difference was noted in the absolute mean error of age between the two methods of measuring secondary dentine for age estimation. Further, it was also revealed that teeth extracted for periodontal reasons severely decreased the accuracy of Kedici's method; however, the disease had no effect when estimating age by Gustafson's method. No significant gender differences were noted in the absolute mean error of age for either method, which suggests that there is no need to separate data on the basis of gender.
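
    The "absolute mean error of age" used to compare the two methods is simply the mean absolute difference between estimated and chronological ages; a minimal sketch (ages invented for illustration):

```python
import numpy as np

def absolute_mean_error(estimated_ages, actual_ages):
    """Mean absolute difference between estimated and chronological age, in years."""
    est = np.asarray(estimated_ages, dtype=float)
    act = np.asarray(actual_ages, dtype=float)
    return float(np.mean(np.abs(est - act)))

# Hypothetical age estimates from two scoring methods for the same four donors
actual = [25.0, 40.0, 55.0, 70.0]
method1 = [30.0, 35.0, 60.0, 65.0]    # per-donor errors: 5, 5, 5, 5
method2 = [40.0, 30.0, 70.0, 80.0]    # per-donor errors: 15, 10, 15, 10

mae1 = absolute_mean_error(method1, actual)
mae2 = absolute_mean_error(method2, actual)
```

    A lower value, as reported for Gustafson's method in the record, indicates the more reliable estimator on average.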

  10. Comparison of Classic vs. Role plays Teaching Methods on the Menstrual Hygiene Behavior of Secondary School Girls in Iran

    Directory of Open Access Journals (Sweden)

    R Ostovar

    2013-09-01

    Background & aim: Awareness of the different aspects of health during puberty plays an important role in the health of girls and ultimately in the health of their future pregnancies. The aim of the present study was to compare the effect of role-playing and classical training methods on the improvement of puberty health among secondary school girls in Yasouj City, Iran. Methods: In this study, the educational needs of schoolgirls during puberty were determined through interviews. Two girls' schools were randomly selected (60 students per school). Next, a knowledge and attitude questionnaire and a behavior checklist related to the main puberty health problems were completed. One of the schools was then randomly selected as the educational intervention school and the other served as the control. After grouping the students into four groups of 15, the intervention was conducted in four sessions including role-play, question and answer, and lecture. In the control group, all students received training on puberty health through classical education (lectures). The results were subsequently compared. Data were analyzed by Student's t-test, paired t-test, and analysis of variance. Results: The results of this study showed that the levels of knowledge, attitude and behavior related to health matters during puberty improved significantly in the girls after the implementation of the educational intervention through role-play (p<0.05). The mean score in the role-play group was 2.35±1.53 before the intervention and 3.96±1.27 after, and the mean performance score was 6.04±2.34 before the intervention and 8.61±1.55 after, while in the classical group the differences were not statistically significant. Conclusion: In comparison with the classical method of health education, teaching through role-play significantly improved the level of knowledge, attitude and practice related to puberty health among adolescent girls. Key Words: Education, Adolescent Girls

  11. Enhancing Academic Achievement and Retention in Senior Secondary School Chemistry through Discussion and Lecture Methods: A Case Study of Some Selected Secondary Schools in Gboko, Benue State, Nigeria

    Science.gov (United States)

    Omwirhiren, Efe M.

    2015-01-01

    The present study was initiated to determine how academic achievement and retention in chemistry is enhanced using the two instructional methods among SSII students and ascertained the differential performance of male and female students in chemistry with a view of improving student performance in chemistry. The study adopted a non-equivalent…

  12. Attitudes, Motivations and Beliefs about L2 Reading in the Filipino Secondary School Classroom: A Mixed-methods Study

    Directory of Open Access Journals (Sweden)

    Andrzej Cirocki

    2016-11-01

    Full Text Available This study is a two-part investigation. The first part focuses on ESL learners' attitudes and motivations for reading in the target language. The second part deals with ESL teachers' beliefs about motivating L2 learners to read. The study involved 100 ESL learners (N=100) and 30 teachers (N=30) from rural schools in Mindanao, the Philippines. All the participants were recruited through convenience sampling; in other words, participants were selected based on their convenient accessibility and proximity. The current study is a mixed-methods project: both quantitative and qualitative methods were employed to collect different types of data. The instruments used were an L2 reading attitude survey, a questionnaire dealing with motivations for L2 reading, a survey on beliefs about motivating L2 learners to read in English, a semi-structured interview, and an L2-reading-lesson observation. The quantitative data were statistically analysed and, whenever appropriate, presented in tables and graphs. The qualitative data were analysed through thematic coding and used to support the quantitative data. The findings show that students have both positive and negative attitudes towards various aspects of L2 reading. They also have different levels of motivation for reading in English, with female participants scoring higher than male participants. The teachers, on the other hand, hold diverse beliefs about motivating learners to read in English. No significant correlation was found between teacher beliefs and students' motivations for reading in English. After the findings are described, implications for teacher education and instructional practice are offered.

  13. Self-compensation in ZnO thin films: An insight from X-ray photoelectron spectroscopy, Raman spectroscopy and time-of-flight secondary ion mass spectroscopy analyses

    International Nuclear Information System (INIS)

    Saw, K.G.; Ibrahim, K.; Lim, Y.T.; Chai, M.K.

    2007-01-01

    As-grown ZnO typically exhibits n-type conductivity, and the difficulty of synthesizing p-type ZnO for the realization of ZnO-based optoelectronic devices is mainly due to the compensation effect of a large background n-type carrier concentration. The cause of this self-compensation effect has not been conclusively identified, although oxygen vacancies, zinc interstitials and hydrogen have been suggested. In this work, typical n-type ZnO thin films were prepared by sputtering and investigated using X-ray photoelectron spectroscopy, Raman spectroscopy and time-of-flight secondary ion mass spectroscopy to gain insight into the possible cause of the self-compensation effect. The analyses found that the native defect most likely behaving as the donor was the zinc interstitial, but some contribution to the n-type conductivity could also come from electronegative carbonates or hydrogen carbonates incorporated in the ZnO thin films.

  14. Serum anti-Müllerian hormone and ovarian morphology assessed by magnetic resonance imaging in response to acupuncture and exercise in women with polycystic ovary syndrome: secondary analyses of a randomized controlled trial.

    Science.gov (United States)

    Leonhardt, Henrik; Hellström, Mikael; Gull, Berit; Lind, Anna-Karin; Nilsson, Lars; Janson, Per Olof; Stener-Victorin, Elisabet

    2015-03-01

    To investigate whether electro-acupuncture or physical exercise influence serum anti-Müllerian hormone (AMH), antral follicle count (AFC) or ovarian volume in women with polycystic ovary syndrome (PCOS). Secondary analyses of a prospective, randomized controlled clinical trial. University Hospital, Sweden. Seventy-four women with PCOS recruited from the general population. Women with PCOS were randomized to 16 weeks of electro-acupuncture (14 treatments), exercise (at least three times/week), or no intervention. Serum AMH recorded at baseline, after 16 weeks of intervention, and at follow up at 32 weeks. AFC, and ovarian volume assessed by magnetic resonance imaging at baseline and at follow up at 32 weeks. After 16 weeks of intervention, serum levels of AMH were significantly decreased in the electro-acupuncture group by 17.5% (p ovarian volume between baseline and follow up in the electro-acupuncture group, and by 11.7% (p = 0.01) in AFC in the physical exercise group. No other variables were affected. This study is the first to demonstrate that acupuncture reduces serum AMH levels and ovarian volume. Physical exercise did not influence circulating AMH or ovarian volume. Despite a within-group decrease in AFC, exercise did not lead to a between-group difference. © 2014 Nordic Federation of Societies of Obstetrics and Gynecology.

  15. Development of modern methods with respect to neutron transport and uncertainty analyses for reactor core calculations. Interim report; Weiterentwicklung moderner Verfahren zu Neutronentransport und Unsicherheitsanalysen fuer Kernberechnungen. Zwischenbericht

    Energy Technology Data Exchange (ETDEWEB)

    Zwermann, Winfried; Aures, Alexander; Bostelmann, Friederike; Pasichnyk, Ihor; Perin, Yann; Velkov, Kiril; Zilly, Matias

    2016-12-15

    This report documents the status of the research and development goals reached within the reactor safety research project RS1536 "Development of modern methods with respect to neutron transport and uncertainty analyses for reactor core calculations" as of the 3rd quarter of 2016. The superordinate goal of the project is the development, validation, and application of neutron transport methods and uncertainty analyses for reactor core calculations. These calculation methods will mainly be applied to problems related to the core behaviour of light water reactors and innovative reactor concepts, in particular fast reactors cooled by liquid metal. The contributing individual goals are the further optimization and validation of deterministic calculation methods with high spatial and energy resolution; the development of a coupled calculation system using the Monte Carlo method for neutron transport to describe time-dependent reactor core states; the processing and validation of nuclear data, particularly with regard to covariance data; the development, validation, and application of sampling-based methods for uncertainty and sensitivity analyses; the creation of a platform for performing systematic uncertainty analyses for fast reactor systems; and the description of states of severe core damage with the Monte Carlo method. Moreover, work regarding the European NURESAFE project, started in the preceding project RS1503, is being continued and completed.

  16. [Development of a quantitative analysis method for the determination of the alkaloid cytisine in Spartium junceum L. growing in Georgia].

    Science.gov (United States)

    Iavich, P A; Churadze, L I; Suladze, T Sh; Rukhadze, T A

    2011-12-01

    The aim of the research was to develop a method for the quantitative determination of cytisine in Spartium junceum L. We used the above-ground parts of the plant. In developing the method of analysis we used three-phase extraction; the best results were obtained in the system chopped raw material - aqueous ammonia solution - chloroform, in which the alkaloids are extracted almost entirely from the plant and pass into the chloroform phase. The results were evaluated by validation. The proposed method for the determination of cytisine in the raw product comprises the following steps: extraction of the raw material, separation and evaporation of the chloroform phase, transfer of the solids into methanol, chromatographic separation of cytisine, and its quantification by spectrophotometry. The method is reproducible, has the required accuracy, and is simple to perform (less than 9 hours).

  17. Response margins investigation of piping dynamic analyses using the independent support motion method and PVRC [Pressure Vessel Research Committee] damping

    International Nuclear Information System (INIS)

    Bezler, P.; Wang, Y.K.; Reich, M.

    1988-03-01

    An evaluation of Independent Support Motion (ISM) response spectrum methods of analysis, coupled with the Pressure Vessel Research Committee (PVRC) recommendation for damping, to compute the dynamic component of the seismic response of piping systems was completed. Response estimates for five piping/structural systems were developed using fourteen variants of the ISM response spectrum method, the Uniform Support Motion response spectrum method and the ISM time history analysis method, all based on the PVRC recommendations for damping. The ISM/PVRC calculational procedures were found to exhibit orderly characteristics with levels of conservatism comparable to those obtained with the ISM/uniform damping procedures. Using the ISM/PVRC response spectrum method with absolute combination between group contributions provided consistently conservative results, while using it with square-root-sum-of-squares combination between group contributions provided estimates of response which were deemed acceptable.

  18. Comparison of a point-of-care analyser for the determination of HbA1c with HPLC method

    OpenAIRE

    Grant, D.A.; Dunseath, G.J.; Churm, R.; Luzio, S.D.

    2017-01-01

    Aims: As the use of Point of Care Testing (POCT) devices for measurement of glycated haemoglobin (HbA1c) increases, it is imperative to determine how their performance compares to laboratory methods. This study compared the performance of the automated Quo-Test POCT device (EKF Diagnostics), which uses boronate fluorescence quenching technology, with a laboratory-based High Performance Liquid Chromatography (HPLC) method (Biorad D10) for measurement of HbA1c. Methods: Whole blood EDTA samples...

  19. Gross alpha and beta activity analyses in urine-a routine laboratory method for internal human radioactivity detection.

    Science.gov (United States)

    Chen, Xiaowen; Zhao, Luqian; Qin, Hongran; Zhao, Meijia; Zhou, Yirui; Yang, Shuqiang; Su, Xu; Xu, Xiaohua

    2014-05-01

    The aim of this work was to develop a method to provide rapid results for humans with internal radioactive contamination. The authors hypothesized that valuable information could be obtained from gas proportional counter techniques by rapidly screening urine samples from potentially exposed individuals. Recommended gross alpha and beta activity screening methods generally employ gas proportional counting techniques. Based on International Standards Organization (ISO) methods, improvements were made in the evaporation process to develop a method providing rapid results, adequate sensitivity, and minimum sample preparation and operator intervention for humans with internal radioactive contamination. The method described by an American National Standards Institute publication was used to calibrate the gas proportional counter, and urine samples from patients with or without radionuclide treatment were measured to validate the method. By improving the evaporation process, the time required to perform the assay was reduced dramatically. Compared with the reference data, the results of the validation samples were very satisfactory with respect to gross-alpha and gross-beta activities. The gas flow proportional counting method described here has the potential for radioactivity monitoring in the body. This method was easy, efficient, and fast, and its application is of great utility in determining whether a sample should be analyzed by a more complicated method, for example radiochemical and/or γ-spectroscopy. In the future, it may be used commonly in medical examination and nuclear emergency treatment. Health Phys. 106(5):000-000; 2014.
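
The unit arithmetic behind such gross-activity screening is simple enough to sketch. The function and all numbers below are illustrative assumptions, not values from the study: the net count rate is divided by counting efficiency and sample volume to give an activity concentration.

```python
# Sketch: converting gross counts from a gas proportional counter into an
# activity concentration for a urine sample. All numbers are illustrative.

def activity_concentration(gross_cpm, background_cpm, efficiency, volume_l):
    """Net activity concentration in Bq/L.

    gross_cpm, background_cpm: counts per minute for sample and blank
    efficiency: counting efficiency (counts per disintegration, 0-1)
    volume_l: volume of urine evaporated onto the planchet, in litres
    """
    net_cps = (gross_cpm - background_cpm) / 60.0   # counts per second
    return net_cps / (efficiency * volume_l)        # becquerels per litre

# Example: 52 cpm gross, 10 cpm background, 30% efficiency, 0.1 L aliquot
alpha_bq_per_l = activity_concentration(52.0, 10.0, 0.30, 0.1)
```

A result like this would then be compared against a screening level to decide whether the sample warrants radiochemical or γ-spectroscopic follow-up.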

  20. Refining cost-effectiveness analyses using the net benefit approach and econometric methods: an example from a trial of anti-depressant treatment.

    Science.gov (United States)

    Sabes-Figuera, Ramon; McCrone, Paul; Kendricks, Antony

    2013-04-01

    Economic evaluation analyses can be enhanced by employing regression methods, which allow the identification of important sub-groups, adjustment for imperfect randomisation in clinical trials, and analysis of non-randomised data. The aim was to explore the benefits of combining regression techniques and the standard Bayesian approach to refine cost-effectiveness analyses using data from randomised clinical trials. Data from a randomised trial of anti-depressant treatment were analysed, and a regression model was used to explore the factors that have an impact on the net benefit (NB) statistic, with the aim of using these findings to adjust the cost-effectiveness acceptability curves. Exploratory sub-sample analyses were carried out to explore possible differences in cost-effectiveness. The analysis found that having suffered a previous similar depression is strongly correlated with a lower NB, independent of the outcome measure or follow-up point. In patients with previous similar depression, adding a selective serotonin reuptake inhibitor (SSRI) to supportive care for mild-to-moderate depression is probably cost-effective at the threshold used by the English National Institute for Health and Clinical Excellence to make recommendations. This analysis highlights the need for incorporation of econometric methods into cost-effectiveness analyses using the NB approach.
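
The core of the net benefit approach is easy to sketch. The toy regression below uses invented data and variable names, not the trial's: it forms NB = λ·effect − cost at a given willingness to pay λ and regresses NB on treatment and a covariate such as previous depression.

```python
# Sketch of a net-benefit regression: NB_i = lambda * effect_i - cost_i,
# regressed on treatment and a prognostic covariate. Data are simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 200
treatment = rng.integers(0, 2, n)          # 1 = SSRI + supportive care (invented)
prev_depression = rng.integers(0, 2, n)    # 1 = previous similar episode (invented)
effect = 0.6 + 0.1 * treatment - 0.2 * prev_depression + rng.normal(0, 0.1, n)
cost = 300.0 + 150.0 * treatment + rng.normal(0, 30, n)

lam = 20000.0                              # willingness to pay per unit of effect
nb = lam * effect - cost                   # net benefit statistic

# OLS: nb ~ intercept + treatment + prev_depression
X = np.column_stack([np.ones(n), treatment, prev_depression])
beta, *_ = np.linalg.lstsq(X, nb, rcond=None)
# beta[1]: incremental NB of treatment; beta[2]: shift associated with
# previous depression (strongly negative here by construction)
```

Coefficients like `beta[2]` are what let the acceptability curve be adjusted for, or stratified by, the covariate.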

  1. Computer modeling in free spreadsheets OpenOffice.Calc as one of the modern methods of teaching physics and mathematics cycle subjects in primary and secondary schools

    Directory of Open Access Journals (Sweden)

    Markushevich M.V.

    2016-10-01

    The article details the use of a modern teaching method, computer simulation, applied to modelling various kinds of mechanical motion of a material point in the free spreadsheet OpenOffice.org Calc when designing physics and computer science lessons in primary and secondary schools. Particular attention is paid to the application of computer modelling integrated with other modern teaching methods.
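
The kind of row-by-row model a spreadsheet expresses can be sketched in a few lines. Below is a hypothetical Euler-stepped simulation of a point mass thrown upward, mirroring what successive Calc rows would compute; all parameter values are illustrative.

```python
# Spreadsheet-style simulation of uniformly accelerated motion of a material
# point, stepped with the (semi-implicit) Euler method. Each tuple plays the
# role of one spreadsheet row: (t, v, y).
g = 9.81          # gravitational acceleration, m/s^2
dt = 0.1          # time step, s (one row per step)
v0 = 20.0         # initial upward velocity, m/s

rows = []
t, v, y = 0.0, v0, 0.0
while y >= 0.0:
    rows.append((t, v, y))
    v -= g * dt   # next-row velocity
    y += v * dt   # next-row height
    t += dt

peak_height = max(r[2] for r in rows)   # analytic peak is v0**2 / (2*g)
```

In Calc, the same model is three columns of formulas dragged downward, which is precisely what makes it accessible at school level.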

  2. The Strategy of New Product Introduction in Durable Goods with Secondary Market: Application of the Optimization Method to Supply Chain Problem

    Directory of Open Access Journals (Sweden)

    Pei Zhao

    2014-01-01

    The aim of this paper is to address how the secondary market affects the manufacturer's new product introduction strategy by using an optimization method. To do so, we develop a two-period model in which a monopolistic manufacturer sells its new durable products directly to end consumers in both periods, while an entrant operates a reverse channel selling used products in the secondary market. We assume that the manufacturer launches a higher-quality product in the second period owing to technological innovation. We find that the secondary market can actually increase the manufacturer's profitability and drives the new product introduction in the second period. We also derive the effect of the durability and the degree of quality improvement on the pricing of supply chain partners.
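
A stylized sketch of the pricing logic, not the paper's actual model: assume consumer valuations uniform on [0, 1], treat the two periods as separable for illustration, and let the secondary market's resale value r raise period-1 willingness to pay. A grid search then recovers the closed-form optima p1* = (1 + r)/2 and p2* = s2/2.

```python
# Toy two-period monopoly pricing with a resale value from the secondary
# market. All structure and numbers are illustrative assumptions.
import itertools

s2 = 1.3   # quality of the improved period-2 product (assumed)
r = 0.2    # resale value recovered via the secondary market (assumed)

def profit(p1, p2):
    # valuations theta ~ U[0, 1]; demand = mass of consumers who buy
    q1 = max(0.0, min(1.0, 1.0 - (p1 - r)))  # buy if theta - p1 + r >= 0
    q2 = max(0.0, min(1.0, 1.0 - p2 / s2))   # buy if theta * s2 - p2 >= 0
    return p1 * q1 + p2 * q2

grid = [i / 100 for i in range(151)]          # candidate prices 0.00 .. 1.50
best = max(itertools.product(grid, grid), key=lambda pp: profit(*pp))
# resale value raises the optimal period-1 price: p1* = (1+r)/2, p2* = s2/2
```

The way r shifts p1* upward is the mechanism by which a secondary market can raise, rather than cannibalize, the manufacturer's profit.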

  3. CSSI-PRO: a method for secondary structure type editing, assignment and estimation in proteins using linear combination of backbone chemical shifts

    International Nuclear Information System (INIS)

    Swain, Monalisa; Atreya, Hanudatta S.

    2009-01-01

    Estimation of secondary structure in polypeptides is important for studying their structure, folding and dynamics. In NMR spectroscopy, such information is generally obtained after sequence specific resonance assignments are completed. We present here a new methodology for assignment of secondary structure type to spin systems in proteins directly from NMR spectra, without prior knowledge of resonance assignments. The methodology, named Combination of Shifts for Secondary Structure Identification in Proteins (CSSI-PRO), involves detection of a specific linear combination of backbone ¹Hα and ¹³C' chemical shifts in a two-dimensional (2D) NMR experiment based on G-matrix Fourier transform (GFT) NMR spectroscopy. Such linear combinations of shifts facilitate editing of residues belonging to α-helical/β-strand regions into distinct spectral regions nearly independent of the amino acid type, thereby allowing the estimation of overall secondary structure content of the protein. Comparison of the predicted secondary structure content with those estimated based on their respective 3D structures and/or the method of Chemical Shift Index for 237 proteins gives a correlation of more than 90% and an overall rmsd of 7.0%, which is comparable to other biophysical techniques used for structural characterization of proteins. Taken together, this methodology has a wide range of applications in NMR spectroscopy such as rapid protein structure determination, monitoring conformational changes in protein-folding/ligand-binding studies and automated resonance assignment.
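
The underlying editing idea can be illustrated with a toy classifier. This mimics the spirit of chemical-shift-based secondary structure identification, not CSSI-PRO's exact GFT combination, and the random-coil values are approximate: helices shift Hα upfield and C' downfield relative to random coil, strands the opposite, so a difference of secondary shifts separates the two.

```python
# Toy secondary-structure classifier from backbone chemical shifts.
# NOT the exact CSSI-PRO linear combination; random-coil reference
# shifts below are approximate literature values (assumptions).

RANDOM_COIL = {"ALA": (4.32, 177.8), "LEU": (4.34, 177.6)}  # (Halpha, C') ppm

def classify(residue, halpha, cprime):
    rc_h, rc_c = RANDOM_COIL[residue]
    # helix: Halpha upfield (negative delta), C' downfield (positive delta)
    score = (cprime - rc_c) - (halpha - rc_h)
    if score > 0.5:
        return "helix"
    if score < -0.5:
        return "strand"
    return "coil"

helical = classify("ALA", 4.05, 179.5)   # upfield Halpha, downfield C'
extended = classify("LEU", 4.70, 175.8)  # downfield Halpha, upfield C'
```

In the actual experiment the combination is formed physically by the pulse sequence, so the editing happens in the spectrum itself rather than in post-processing.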

  4. A new and standardized method to sample and analyse vitreous samples by the Cellient automated cell block system.

    Science.gov (United States)

    Van Ginderdeuren, Rita; Van Calster, Joachim; Stalmans, Peter; Van den Oord, Joost

    2014-08-01

    In this prospective study, a universal protocol for sampling and analysing vitreous material was investigated. Vitreous biopsies are difficult to handle because of the paucity of cells and the gelatinous structure of the vitreous. Histopathological analysis of the vitreous is useful in difficult uveitis cases to differentiate uveitis from lymphoma or infection and to define the type of cellular reaction. One hundred consecutive vitreous samples were analysed with the Cellient tissue processor (Hologic), a fully automated processor that takes cells from a specified container of PreservCyt (fixative fluid) through to paraffin. Cytology was compared between the fixatives CytoLyt (which contains a mucolytic) and PreservCyt. Routine histochemical and immunostainings were evaluated. In 92% of the cases, sufficient material was found for diagnosis. In 14%, a CytoLyt wash was necessary to prevent clogging of the tubes in the Cellient due to the viscosity of the sample. In 23%, the diagnosis was an acute inflammation (presence of granulocytes); in 33%, chronic active inflammation (presence of T lymphocytes); in 33%, low-grade inflammation (presence of CD68 cells without T lymphocytes); and in 3%, a malignant process. A standardized protocol for sampling and handling vitreous biopsies, fixing in PreservCyt and processing by the Cellient gives a satisfactory result in morphology, number of cells and possibility of immunohistochemical staining. The diagnosis can be established or confirmed in more than 90% of cases. © 2014 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.

  5. Verification of Bioanalytical Method for Quantification of Exogenous Insulin (Insulin Aspart) by the Analyser Advia Centaur® XP.

    Science.gov (United States)

    Mihailov, Rossen; Stoeva, Dilyana; Pencheva, Blagovesta; Pentchev, Eugeni

    2018-03-01

    In a number of cases the monitoring of patients with type I diabetes mellitus requires measurement of exogenous insulin levels. For the purpose of a clinical investigation of the efficacy of a medical device for administration of exogenous insulin aspart, verification of the method for measuring this synthetic analogue of the hormone was needed. The information in the available medical literature on the measurement of the different exogenous insulin analogues is insufficient; thus, verification was required to be in compliance with the active standards in the Republic of Bulgaria. A manufacturer's method developed for the ADVIA Centaur XP Immunoassay, Siemens Healthcare, was used, which we verified using standard solutions and a patient serum pool spiked with the appropriate quantity of exogenous insulin aspart. The method was verified in accordance with the bioanalytical method verification criteria and regulatory requirements for using a standard method: the CLIA chemiluminescence immunoassay ADVIA Centaur® XP. The following parameters were determined and monitored: intra-day precision and accuracy, inter-day precision and accuracy, limit of detection and lower limit of quantification, linearity, and analytical recovery. The routine application of the method for measurement of immunoreactive insulin using the analyzer ADVIA Centaur® XP is directed to the measurement of endogenous insulin. The method is applicable for measuring different types of exogenous insulin, including insulin aspart.
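
Two of the verification statistics named above, intra-day precision (as CV%) and analytical recovery, reduce to simple formulas; the replicate values below are invented for illustration.

```python
# Sketch of two verification statistics on invented replicate measurements:
# intra-day precision as coefficient of variation (CV%) and analytical
# recovery against a nominal spiked concentration.
import statistics

replicates = [98.2, 101.5, 99.7, 100.9, 97.8]   # measured insulin, pmol/L (invented)
nominal = 100.0                                  # spiked concentration (invented)

mean = statistics.mean(replicates)
cv_percent = 100.0 * statistics.stdev(replicates) / mean   # precision
recovery_percent = 100.0 * mean / nominal                   # accuracy/recovery
```

Acceptance limits for such parameters come from the applicable bioanalytical verification criteria, not from the calculation itself.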

  6. Improvement of spectrographic analyses by the use of a mechanical packer in the arc distillation technique; Amelioration de l'analyse spectrograpique par l'utilisation d'un tasseur mecanique dans la methode de distillation dans l'arc

    Energy Technology Data Exchange (ETDEWEB)

    Buffereau, M; Deniaud, S; Pichotin, B; Violet, R [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1965-07-01

    The improvement of spectrographic analyses by the 'carrier distillation' method through the use of a mechanical packing device is studied. The experiments performed and the advantages of such an apparatus are described (improved precision and reproducibility, elimination of the operator factor). The routine apparatus covered by French patent no. 976.493 of 29 May 1964 is described. (authors)

  7. 40 CFR Appendix A to Subpart C of... - Alternative Testing Methods Approved for Analyses Under the Safe Drinking Water Act

    Science.gov (United States)

    2010-07-01

    [Table excerpt; only fragments survived extraction: EPA Method 200.5, Revision 4.2; conductivity/conductance by Standard Methods 2510 B; cyanide by manual distillation followed by D2036-06, and by Method ME355.01, Revision 1.0, "Determination of Cyanide in Drinking Water by GC/MS"; magnesium by 3111 B, D 511-09 B, inductively coupled plasma 3120 B, and complexation titrimetric methods 3500-Mg B.]

  8. NUMERICAL MODELLING AS NON-DESTRUCTIVE METHOD FOR THE ANALYSES AND DIAGNOSIS OF STONE STRUCTURES: MODELS AND POSSIBILITIES

    Directory of Open Access Journals (Sweden)

    Nataša Štambuk-Cvitanović

    1999-12-01

    Given the need for analysis, diagnosis and preservation of valuable existing stone masonry structures and ancient monuments in today's European urban cores, numerical modelling has become an efficient tool for investigating structural behaviour. It should be supported by experimentally obtained input data and taken as part of a general combined approach, particularly with non-destructive techniques applied to the structure or a model of it. For structures or details that require more complex analyses, three numerical models based on the finite element technique are suggested: (1) a standard linear model; (2) a linear model with contact (interface) elements; and (3) a non-linear elasto-plastic and orthotropic model. The applicability of these models depends on the required accuracy and the type of problem, and is presented on some characteristic examples.
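
A minimal instance of model (1), a standard linear finite element analysis, can be sketched for a 1D axially loaded bar; material, section, and load values are illustrative, not from the paper.

```python
# Standard linear FE analysis of a 1D bar: two-node elements, fixed at one
# end, axial point load at the other. Values are illustrative assumptions.
import numpy as np

E, A, L = 3.0e9, 0.04, 2.0      # Pa, m^2, m (assumed stone-like pier)
n_el = 4
le = L / n_el
k = E * A / le                   # element axial stiffness

K = np.zeros((n_el + 1, n_el + 1))
for e in range(n_el):            # assemble global stiffness matrix
    K[e:e + 2, e:e + 2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])

F = np.zeros(n_el + 1)
F[-1] = 1.0e5                    # 100 kN axial load at the free end

u = np.zeros(n_el + 1)           # fixed support at node 0: u[0] = 0
u[1:] = np.linalg.solve(K[1:, 1:], F[1:])
tip = u[-1]                      # matches the analytic answer F*L/(E*A)
```

Models (2) and (3) extend exactly this assembly-and-solve structure with interface elements and with iteration over a non-linear constitutive law, respectively.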

  9. Greenhouse Facility Management Experts Identification of Competencies and Teaching Methods to Support Secondary Agricultural Education Instructors: A Modified Delphi Study

    Science.gov (United States)

    Franklin, Edward A.

    2011-01-01

    In this study the Delphi technique has been used to develop a list of educational competencies for preparing secondary agricultural education instructors to effectively manage their school greenhouse facilities. The use of specialized facilities in agricultural education requires appropriate preparation of agricultural education teachers. The…

  10. Comparison and validation of Fourier transform infrared spectroscopic methods for monitoring secondary cell wall cellulose from cotton fibers

    Science.gov (United States)

    The amount of secondary cell wall (SCW) cellulose in the fiber affects the quality and commercial value of cotton. Accurate assessments of SCW cellulose are essential for improving cotton fibers. Fourier Transform Infrared (FT-IR) spectroscopy enables distinguishing SCW from other cell wall componen...

  11. Efficient method of protein extraction from Theobroma cacao L. roots for two-dimensional gel electrophoresis and mass spectrometry analyses.

    Science.gov (United States)

    Bertolde, F Z; Almeida, A-A F; Silva, F A C; Oliveira, T M; Pirovani, C P

    2014-07-04

    Theobroma cacao is a woody and recalcitrant plant with a very high level of interfering compounds. Standard protocols for protein extraction have been proposed for various types of samples, but the presence of interfering compounds in many samples prevents the isolation of proteins suitable for two-dimensional gel electrophoresis (2-DE). An efficient method to extract root proteins for 2-DE was established to overcome these problems. The main features of this protocol are: i) precipitation with trichloroacetic acid/acetone overnight to prepare the acetone dry powder (ADP); ii) several additional steps of sonication in the ADP preparation and extraction with dense sodium dodecyl sulfate and phenol; and iii) two stages of phenol extraction. Proteins were extracted from roots using this new protocol (Method B) and a protocol described in the literature for T. cacao leaves and meristems (Method A). Using these methods, we obtained protein yields of about 0.7 and 2.5 mg per 1.0 g lyophilized root, and a total of 60 and 400 spots could be separated, respectively. Through Method B, it was possible to isolate high-quality protein at high yield from T. cacao roots for high-quality 2-DE gels. To demonstrate the quality of the proteins extracted from roots of T. cacao using Method B, several protein spots were cut from the 2-DE gels, analyzed by tandem mass spectrometry, and identified. Method B was further tested on Citrus roots, with a protein yield of about 2.7 mg per 1.0 g lyophilized root and 800 detected spots.

  12. Transitions in Secondary Education

    DEFF Research Database (Denmark)

    Larsen, Britt Østergaard; Jensen, Leif; Pilegaard Jensen, Torben

    2014-01-01

    The purpose of this article is to investigate educational choices and attainment of children who experience social problems during their upbringing. The study explores the extent to which social problems can help explain the gaps in entry and dropout rates in upper secondary education in Denmark between students from different socioeconomic backgrounds. Population-based registers are used to include information on family upbringing, e.g. alcohol abuse, criminality, use of psychopharmaca and out-of-home placement. We estimate a parsimonious version of Cameron and Heckman's (2001) dynamic statistical model of educational progression. By using this method, we parcel educational attainment into a series of transitions and the model is able to control for educational selection and unobserved heterogeneity. We apply counterfactual analyses to allow a formal decomposition of the effects of social ...

  13. Comparison of some Structural Analyses Methods used for the Test Pavement in the Danish Road Testing Machine

    DEFF Research Database (Denmark)

    Baltzer, S.; Zhang, W.; Macdonald, R.

    1998-01-01

    A flexible test pavement, instrumented to measure stresses and strains in the three primary axes within the upper 400 mm of the subgrade, has been constructed and load tested in the Danish Road Testing Machine (RTM). One objective of this research, which is part of the International Pavement Subgrade ... Pressure Cells, Thermistors and Pore Pressure Sensors. Routine monitoring of instrument responses and surface profiles with a Profilometer and FWD/LWD structural testing were undertaken at regular intervals during the construction and load testing programmes. This paper compares various structural analysis methods used for the RTM test pavement with data from FWD testing undertaken after the construction and loading programmes. Multilayer linear elastic forward and backcalculation methods, a finite element program and MS Excel spreadsheet-based methods are compared.

  14. A novel method for analysing key corticosteroids in polar bear (Ursus maritimus) hair using liquid chromatography tandem mass spectrometry

    DEFF Research Database (Denmark)

    Weisser, Johan; Hansen, Martin; Björklund, Erland

    2016-01-01

    This paper presents the development and evaluation of a methodology for extraction, clean-up and analysis of three key corticosteroids (aldosterone, cortisol and corticosterone) in polar bear hair. Such a methodology can be used to monitor stress biomarkers in polar bears and may provide ... This procedure allows for the simultaneous determination of multiple steroids, which is in contrast to previous polar bear studies based on ELISA techniques. Absolute method recoveries were 81%, 75% and 60% for cortisol, corticosterone and aldosterone, respectively. We applied the developed method on a hair ...

  15. A quick seismic assessment method for jacket type offshore structures by combining push-over and nonlinear time history analyses

    Energy Technology Data Exchange (ETDEWEB)

    Karimiyan, S.; Hosseini, M. [International Inst. of Earthquake Engineering and Seismology, Tehran (Iran, Islamic Republic of); Karimiyan, M. [Islamic Azad Univ., Tehran (Iran, Islamic Republic of). Earthquake Eng. Dept., School of Engineering

    2010-07-01

    Several offshore structures are located in seismic regions. In order to upgrade their seismic behaviour, their seismic vulnerability must be evaluated. The most reliable type of analysis for seismic evaluation is considered to be nonlinear time history analysis (NLTHA); however, it is known to be a very time-consuming method. This paper presents a quick procedure that combines push-over analysis (POA) with NLTHA, and discusses both methods in detail. In order to identify the more critical members of the structure, based on the range of their plastic deformations, a number of push-over analyses were first performed. NLTHA was then performed, focusing on the critical members, to obtain their vulnerability with higher reliability. An offshore structure of jacket type, installed in the Lavan oil field in the Persian Gulf in 1970, was considered in order to demonstrate the efficiency of the proposed method. It was concluded from the numerical results that combining POA and NLTHA provides a quick and reliable seismic evaluation method. The results demonstrated that although the vulnerability of the jacket structure was not very high, the level of damage was not the same for different members, and depended on their location in the structure as well as their geometric orientation and load-bearing situation. 6 refs., 1 tab., 8 figs.

  16. Advances in methods of commercial FBR core characteristics analyses. Investigations of a treatment of the double-heterogeneity and a method to calculate homogenized control rod cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Sugino, Kazuteru [Power Reactor and Nuclear Fuel Development Corp., Oarai, Ibaraki (Japan). Oarai Engineering Center; Iwai, Takehiko

    1998-07-01

    A standard database for FBR core nuclear design is under development in order to improve the accuracy of FBR design calculations. As part of the development, we investigated an improved treatment of the double-heterogeneity and a method to calculate homogenized control rod cross sections in a commercial reactor geometry, to improve the analytical accuracy of commercial FBR core characteristics. As an improved treatment of the double-heterogeneity, we derived a new method (the direct method) and compared both this and the conventional method with continuous-energy Monte Carlo calculations. In addition, we investigated the applicability of the reaction rate ratio preservation method as an advanced method to calculate homogenized control rod cross sections. The present studies gave the following information: (1) Improved treatment of the double-heterogeneity: for criticality, the conventional method showed good agreement with the Monte Carlo result within one standard deviation, and the direct method was consistent with the conventional one. Preliminary evaluation of effects on core characteristics other than criticality showed that the effect of the double-heterogeneity on sodium void reactivity (coolant reactivity) was large. (2) Advanced method to calculate homogenized control rod cross sections: for control rod worths, the reaction rate ratio preservation method agreed with calculations in which the control rod heterogeneity was included in the core geometry; in Monju control rod worth analysis, the present method overestimated control rod worths by 1 to 2% compared with the conventional method, but these differences were caused by the more accurate model in the present method, and it is considered that this method is more reliable than the conventional one. These two methods can be directly applied to core characteristics other than criticality or control rod worth. Thus it is concluded that these methods will ...

  17. Uncertainty analysis for secondary energy distributions

    International Nuclear Information System (INIS)

    Gerstl, S.A.W.

    1978-01-01

    In many transport calculations the integral design parameter of interest (response) is determined mainly by secondary particles such as gamma rays from (n,γ) reactions or secondary neutrons from inelastic scattering events or (n,2n) reactions. Standard sensitivity analysis usually allows one to calculate the sensitivities to the production cross sections of such secondaries, but an extended formalism is needed to also obtain the sensitivities to the energy distribution of the generated secondary particles. For a 30-group standard cross-section set, 84% of all non-zero table positions pertain to the description of secondary energy distributions (SED's) and only 16% to the actual reaction cross sections. Therefore, any sensitivity/uncertainty analysis which does not consider the effects of SED's is incomplete and neglects most of the input data. This paper describes how sensitivity profiles for SED's are obtained and used to estimate the uncertainty of an integral response due to uncertainties in these SED's. The detailed theory is documented elsewhere and implemented in the LASL sensitivity code SENSIT. SED sensitivity profiles have proven particularly valuable in cross-section uncertainty analyses for fusion reactors. Even when the production cross sections for secondary neutrons were assumed to be without error, the uncertainties in the energy distribution of these secondaries produced appreciable uncertainties in the calculated tritium breeding rate. However, complete error files for SED's are presently nonexistent. Therefore, methods will be described that allow rough error estimates, due to estimated SED uncertainties, based on integral SED sensitivities.
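
The way sensitivity profiles feed an uncertainty estimate is first-order "sandwich" propagation: the relative response variance is SᵀCS, with S the vector of sensitivities and C the relative covariance matrix of the SED/cross-section parameters. The numbers below are illustrative, not SENSIT output.

```python
# Sketch of first-order "sandwich" uncertainty propagation:
# relative variance of the response R is S^T C S, where
# S_i = d(ln R)/d(ln p_i) and C is the relative covariance of parameters p.
# All numbers are illustrative assumptions.
import numpy as np

S = np.array([0.8, -0.3, 0.1])          # sensitivity profile (3 parameters)
C = np.array([[0.04, 0.01, 0.00],       # relative covariance matrix
              [0.01, 0.09, 0.00],
              [0.00, 0.00, 0.01]])

rel_variance = S @ C @ S
rel_std_percent = 100.0 * np.sqrt(rel_variance)   # ~17% response uncertainty
```

With complete SED covariance files unavailable, the same arithmetic run on estimated covariances is what yields the "rough error estimates" the abstract refers to.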

  18. A method for analysing small samples of floral pollen for free and protein-bound amino acids.

    Science.gov (United States)

    Stabler, Daniel; Power, Eileen F; Borland, Anne M; Barnes, Jeremy D; Wright, Geraldine A

    2018-02-01

    Pollen provides floral visitors with essential nutrients including proteins, lipids, vitamins and minerals. As an important nutrient resource for pollinators, including honeybees and bumblebees, pollen quality is of growing interest in assessing the nutrition available to foraging bees. To date, quantifying the protein-bound amino acids in pollen has been difficult, and methods rely on large amounts of pollen, typically more than 1 g. More usual is to estimate a crude protein value based on the nitrogen content of pollen; however, such methods provide no information on the distribution of the essential and non-essential amino acids constituting the proteins. Here, we describe a method of microwave-assisted acid hydrolysis using low amounts of pollen that allows exploration of amino acid composition, quantified using ultra-high-performance liquid chromatography (UHPLC), and a back-calculation to estimate the crude protein content of pollen. Reliable analysis of protein-bound and free amino acids, as well as an estimation of crude protein concentration, was obtained from pollen samples as low as 1 mg. Greater variation in both protein-bound and free amino acids was found in smaller pollen sample sizes; to account for this, we suggest a correction factor to apply to specific sample sizes of pollen in order to estimate total crude protein content. The method described in this paper will allow researchers to explore the composition of amino acids in pollen and will aid research assessing the nutrition available to pollinating animals. It will be particularly useful in assaying the pollen of wild plants, from which it is difficult to obtain large sample weights.
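
The back-calculation from quantified amino acids to crude protein can be sketched as summing amino-acid residue masses, i.e. subtracting the water lost on peptide-bond formation for the protein-bound fraction. This is a common approximation stated here as an assumption, not the paper's exact formula, and the amounts are invented.

```python
# Sketch: crude protein estimate from protein-bound amino acids measured
# after hydrolysis. Assumption: each bound amino acid contributes its mass
# minus the water released on peptide-bond formation (residue mass).
WATER = 18.015  # g/mol of water lost per peptide bond

def protein_mg(bound_aa):
    """bound_aa maps amino acid -> (mg measured after hydrolysis, molar mass g/mol)."""
    return sum(mg * (mm - WATER) / mm for mg, mm in bound_aa.values())

# invented example quantities for a small pollen sample
bound = {"Pro": (0.40, 115.13), "Glu": (0.55, 147.13)}
protein = protein_mg(bound)   # mg crude protein estimate
```

Free amino acids, quantified separately, are reported as such rather than folded into the protein figure, since they are not peptide-bound.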

  19. Dry-air drying at room temperature - a practical pre-treatment method of tree leaves for quantitative analyses of phenolics?

    Science.gov (United States)

    Tegelberg, Riitta; Virjamo, Virpi; Julkunen-Tiitto, Riitta

    2018-03-09

    In ecological experiments, storage of plant material is often needed between harvesting and laboratory analyses when the number of samples is too large for immediate, fresh analyses. Accuracy and comparability of the results therefore call for pre-treatment methods in which the chemical composition remains unaltered and a large number of samples can be treated efficiently. To study whether fast dry-air drying provides an efficient pre-treatment method for quantitative analyses of phenolics. Dry-air drying of mature leaves was done in a drying room equipped with a dehumidifier (10% relative humidity, room temperature) and the results were compared to freeze-drying or freeze-drying after pre-freezing in liquid nitrogen. The quantities of methanol-soluble phenolics of Betula pendula Roth, Betula pubescens Ehrh., Salix myrsinifolia Salisb., Picea abies L. Karsten and Pinus sylvestris L. were analysed with HPLC, and condensed tannins were analysed using the acid-butanol test. In deciduous tree leaves (Betula, Salix), the yield of most of the phenolic compounds was equal to or higher in samples dried in the dry-air room than in freeze-dried samples. In Picea abies needles, however, dry-air drying caused severe reductions in picein, stilbene, condensed tannin and (+)-catechin concentrations compared to freeze-drying. In Pinus sylvestris, the highest yields of neolignans but the lowest yields of acetylated flavonoids were obtained from samples freeze-dried after pre-freezing. The results show that dry-air drying provides an effective pre-treatment method for quantifying soluble phenolics in deciduous tree leaves, but when analysing coniferous species, the different responses between structural classes of phenolics should be taken into account. Copyright © 2018 John Wiley & Sons, Ltd.

  20. Development of the temperature field at the WWER-440 core outlet monitoring system and application of the data analyses methods

    International Nuclear Information System (INIS)

    Spasova, V.; Georgieva, N.; Haralampieva, Tz.

    2001-01-01

    On-line in-core monitoring by 216 thermocouples, located at the reactor core outlet, is carried out during power operation of WWER-440 Units 1 and 2 at Kozloduy NPP. Automatic monitoring of the technological process is performed by the IB-500MA system, which collects and performs initial data processing (discretization and conversion of analogue signals into digital form). The paper also presents the results and analyses of power distribution monitoring during the previous 21st and the current 22nd fuel cycles of Kozloduy NPP Unit 1, using the archiving system capacity and related software. The possibility of performing operational assessment and analysis of the power distribution in the reactor core at each point of the fuel cycle is checked by comparing the neutron-physical calculation results with reactor coolant system parameters. The paper shows that the processing and analysis of the significant amount of data accumulated in the archive files increases the accuracy and reliability of power distribution monitoring in the reactor core at each moment of the fuel cycle of the WWER-440 reactors at Kozloduy NPP.

  1. Analyses of disruption of cerebral white matter integrity in schizophrenia with MR diffusion tensor fiber tracking method

    International Nuclear Information System (INIS)

    Yamamoto, Utako; Kobayashi, Tetsuo; Kito, Shinsuke; Koga, Yoshihiko

    2010-01-01

    We analyzed cerebral white matter using magnetic resonance diffusion tensor imaging (MR-DTI) to measure the diffusion anisotropy of water molecules, with the goal of quantitatively evaluating schizophrenia. Diffusion tensor images were acquired for patients with schizophrenia and healthy comparison subjects, group-matched for age, sex, and handedness. Fiber tracking was performed on the superior longitudinal fasciculus for the comparison between the patient and comparison groups. We analyzed and compared the cross-sectional area on the starting coronal plane and the mean and standard deviation of the fractional anisotropy and the apparent diffusion coefficient along fibers in the right and left hemispheres. In the right hemisphere, the cross-sectional areas in the patient group were significantly smaller than those in the comparison group. Furthermore, in the comparison group, the cross-sectional areas in the right hemisphere were significantly larger than those in the left hemisphere, whereas there was no significant difference in the patient group. These results suggest that the disruption of white matter integrity in schizophrenic patients may be evaluated quantitatively by comparing the cross-sectional areas of the superior longitudinal fasciculus in the right and left hemispheres. (author)

  2. Analyses on interaction of internal and external surface cracks in a pressurized cylinder by hybrid boundary element method

    International Nuclear Information System (INIS)

    Chai Guozhong; Fang Zhimin; Jiang Xianfeng; Li Gan

    2004-01-01

    This paper presents a comprehensive range of analyses of the interaction of two identical semi-elliptical surface cracks at the internal and external surfaces of a pressurized cylinder. The considered ratios of the crack depth to crack length are b/a=0.25, 0.5, 0.75 and 1.0; the ratios of the crack depth to the wall thickness of the cylinder are 2b/t=0.2, 0.4, 0.6, 0.7 and 0.8. Forty crack configurations are analyzed and the stress intensity factors along the crack front are presented. The numerical results show that for 2b/t<0.7, the interaction leads to a decrease in the stress intensity factors for both internal and external surface cracks, compared with a single internal or external surface crack. Thus, for the fracture analysis of a practical pressurized cylinder with two identical semi-elliptical surface cracks at its internal and external surfaces, a conservative result is obtained by ignoring the interaction.

  3. AGROBEST: an efficient Agrobacterium-mediated transient expression method for versatile gene function analyses in Arabidopsis seedlings

    Science.gov (United States)

    2014-01-01

    Background Transient gene expression via Agrobacterium-mediated DNA transfer offers a simple and fast method to analyze transgene functions. Although Arabidopsis is the most-studied model plant with powerful genetic and genomic resources, achieving highly efficient and consistent transient expression for gene function analysis in Arabidopsis remains challenging. Results We developed a highly efficient and robust Agrobacterium-mediated transient expression system, named AGROBEST (Agrobacterium-mediated enhanced seedling transformation), which achieves versatile analysis of diverse gene functions in intact Arabidopsis seedlings. Using β-glucuronidase (GUS) as a reporter for the Agrobacterium-mediated transformation assay, we show that the use of a specific disarmed Agrobacterium strain with vir gene pre-induction resulted in homogeneous GUS staining in cotyledons of young Arabidopsis seedlings. Optimization with AB salts in plant culture medium buffered at acidic pH 5.5 during Agrobacterium infection greatly enhanced the transient expression levels, which were significantly higher than with two existing methods. Importantly, the optimized method conferred highly increased transient expression in the shoots of 100% of infected seedlings, as well as transformation events in the roots of ~70% of infected seedlings, in both the immune receptor mutant efr-1 and wild-type Col-0 seedlings. Finally, we demonstrated the versatile applicability of the method for examining transcription factor action and circadian reporter-gene regulation as well as protein subcellular localization and protein–protein interactions in physiological contexts. Conclusions AGROBEST is a simple, fast, reliable, and robust transient expression system enabling high transient expression and transformation efficiency in Arabidopsis seedlings. Demonstration of the proof-of-concept experiments elevates the transient expression technology to the level of functional studies in Arabidopsis seedlings in addition to previous

  4. SCIENTIFIC METHODOLOGY FOR THE APPLIED SOCIAL SCIENCES: CRITICAL ANALYSES ABOUT RESEARCH METHODS, TYPOLOGIES AND CONTRIBUTIONS FROM MARX, WEBER AND DURKHEIM

    Directory of Open Access Journals (Sweden)

    Mauricio Corrêa da Silva

    2015-06-01

    This study aims to discuss the importance of the scientific method for conducting and publicizing research in the applied social sciences and the research typologies, as well as to highlight the contributions of Marx, Weber and Durkheim to scientific methodology. To reach this objective, we conducted a review of the literature on the term research, the scientific method, research techniques and scientific methodologies. The results of the investigation revealed that it is fundamental that the academic investigator use a scientific method to conduct and publicize his/her academic work in the applied social sciences, just as in the biochemical or computer sciences, and in the indicated literature. Regarding the contributions to scientific methodology: from Marx, the dialectical method, a striking, explicative analysis of social phenomena and the need to understand phenomena as historical and concrete totalities; from Weber, the distinction between "facts" and "value judgments" to provide objectivity to the social sciences; and from Durkheim, the need to conceptualize one's object of study very well, to reject sensible data, and to be imbued with the spirit of discovery and of being surprised by the results.

  5. Cloning of transgenic tobacco BY-2 cells; an efficient method to analyse and reduce high natural heterogeneity of transgene expression.

    Science.gov (United States)

    Nocarova, Eva; Fischer, Lukas

    2009-04-22

    Phenotypic characterization of transgenic cell lines, frequently used in plant biology studies, is complicated because transgene expression in individual cells is often heterogeneous and unstable. To identify the sources of this heterogeneity and to reduce it, we transformed tobacco (Nicotiana tabacum L.) BY-2 cells with a gene encoding green fluorescent protein (GFP) using Agrobacterium tumefaciens, and then introduced a simple cloning procedure to generate cell lines derived from individual transformed cells. Expression of the transgene was monitored by analysing GFP fluorescence in the cloned lines and also in lines obtained directly after transformation. The majority (approximately 90%) of suspension culture lines derived from calli obtained directly from transformation consisted of cells with various levels of GFP fluorescence. In contrast, nearly 50% of lines generated by cloning cells from the primary heterogeneous suspensions consisted of cells with homogeneous GFP fluorescence. The rest of the lines exhibited "permanent heterogeneity" that could not be resolved by cloning. The extent of fluorescence heterogeneity often varied, even among genetically identical clones derived from the primary transformed lines. In contrast, the offspring of subsequent cloning of the cloned lines was uniform, showing GFP fluorescence intensity and heterogeneity corresponding to the original clone. The results demonstrate that, besides the genetic heterogeneity detected in some lines, the primary lines often contained a mixture of epigenetically different cells that could be separated by cloning. This indicates that a single integration event frequently results in various heritable expression patterns, which are probably accidental and become stabilized in the offspring of the primary transformed cells early after the integration event. Because heterogeneity in transgene expression has proven to be a serious problem, it is highly advisable to use transgenes tagged with

  6. Two Model-Based Methods for Policy Analyses of Fine Particulate Matter Control in China: Source Apportionment and Source Sensitivity

    Science.gov (United States)

    Li, X.; Zhang, Y.; Zheng, B.; Zhang, Q.; He, K.

    2013-12-01

    Anthropogenic emissions have been controlled in recent years in China to mitigate fine particulate matter (PM2.5) pollution. Recent studies show that sulfur dioxide (SO2)-only control cannot reduce total PM2.5 levels efficiently. Other species such as nitrogen oxides, ammonia, black carbon, and organic carbon may be equally important during particular seasons. Furthermore, each species is emitted from several anthropogenic sectors (e.g., industry, power plants, transportation, residential and agriculture); conversely, the contribution of one emission sector to PM2.5 represents the contributions of all species emitted by that sector. In this work, two model-based methods are used to identify the emission sectors and areas most influential on PM2.5. The first method is source apportionment (SA) based on the Particulate Source Apportionment Technology (PSAT) available in the Comprehensive Air Quality Model with extensions (CAMx), driven by meteorological predictions of the Weather Research and Forecast (WRF) model. The second method is source sensitivity (SS) based on an adjoint integration technique (AIT) available in the GEOS-Chem model. The SA method attributes simulated PM2.5 concentrations to each emission group, while the SS method calculates their sensitivity to each emission group, accounting for the non-linear relationship between PM2.5 and its precursors. Despite their differences, the complementary nature of the two methods enables a complete analysis of source-receptor relationships to support emission control policies. Our objectives are to quantify the contributions of each emission group/area to PM2.5 in the receptor areas and to intercompare results from the two methods to gain a comprehensive understanding of the role of emission sources in PM2.5 formation. The results will be compared in terms of the magnitudes and rankings of SS or SA of emitted species and emission groups/areas. GEOS-Chem with AIT is applied over East Asia at a horizontal grid
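    The distinction drawn above, attribution of mass (SA) versus sensitivity to an emission change (SS), can be illustrated with a toy non-linear model. This sketch is not the CAMx/PSAT or GEOS-Chem implementation; the response function and all numbers are invented to show why the two measures can disagree:

    ```python
    # Toy illustration of SA vs SS under a non-linear PM2.5 response.
    def pm25(so2, nh3):
        """Hypothetical response: ammonia-limited sulfate plus a direct SO2 term."""
        return min(so2, nh3) + 0.3 * so2

    base = pm25(10.0, 6.0)                   # 6.0 (sulfate) + 3.0 (direct) = 9.0

    # SS-style: relative response to a 20 % SO2 emission cut (finite difference)
    ss_so2 = (base - pm25(8.0, 6.0)) / base  # sulfate unchanged -> small effect

    # SA-style: share of PM2.5 mass tagged to SO2-derived species at base state
    sa_so2 = (6.0 + 3.0) / base              # all mass traces back to SO2

    print(f"SS: {ss_so2:.1%} reduction, SA: {sa_so2:.0%} attribution")
    ```

    Because sulfate formation is ammonia-limited in this toy model, SA tags essentially all the mass to SO2-derived species while a 20 % SO2 cut removes only about 7 % of PM2.5, which is the kind of discrepancy the complementary use of the two methods is meant to expose.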

  7. Seed storage at elevated partial pressure of oxygen, a fast method for analysing seed ageing under dry conditions

    Science.gov (United States)

    Groot, S. P. C.; Surki, A. A.; de Vos, R. C. H.; Kodde, J.

    2012-01-01

    Background and Aims Despite differences in physiology between dry and relatively moist seeds, seed ageing tests most often use a temperature and seed moisture level that are higher than during the dry storage used in commercial practice and gene banks. This study aimed to test whether seed ageing under dry conditions can be accelerated by storing under high-pressure oxygen. Methods Dry barley (Hordeum vulgare), cabbage (Brassica oleracea), lettuce (Lactuca sativa) and soybean (Glycine max) seeds were stored between 2 and 7 weeks in steel tanks under 18 MPa partial pressure of oxygen. Storage under high-pressure nitrogen gas or under ambient air pressure served as controls. The method was compared with storage at 45 °C after equilibration at 85 % relative humidity and with long-term storage at the laboratory bench. Germination behaviour, seedling morphology and tocopherol levels were assessed. Key Results The ageing of the dry seeds was indeed accelerated by storing under high-pressure oxygen. The morphological ageing symptoms of the stored seeds resembled those observed after ageing under long-term dry storage conditions. Barley appeared more tolerant of this storage treatment than lettuce and soybean. Less-mature harvested cabbage seeds were more sensitive, as were primed compared with non-primed lettuce seeds. Under high-pressure oxygen storage the tocopherol levels of dry seeds decreased, in a linear way with the decline in seed germination, but remained unchanged in seeds deteriorated during storage at 45 °C after equilibration at 85 % RH. Conclusions Seed storage under high-pressure oxygen offers a novel and relatively fast method to study the physiology and biochemistry of seed ageing at different seed moisture levels and temperatures, including those representative of the dry storage conditions used in gene banks and commercial practice. PMID:22967856
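    The linear relation reported between tocopherol decline and loss of germination can be summarized with an ordinary least-squares slope and a correlation coefficient. The data points below are invented for illustration, not the study's measurements:

    ```python
    import statistics

    # Illustrative fit of a linear tocopherol-vs-germination relation
    # (all data points hypothetical).
    germination = [98, 90, 75, 60, 42]            # % germination
    tocopherol = [1.00, 0.93, 0.80, 0.68, 0.52]   # relative tocopherol level

    mx = statistics.fmean(germination)
    my = statistics.fmean(tocopherol)
    sxy = sum((x - mx) * (y - my) for x, y in zip(germination, tocopherol))
    sxx = sum((x - mx) ** 2 for x in germination)
    syy = sum((y - my) ** 2 for y in tocopherol)

    slope = sxy / sxx                     # tocopherol change per % germination
    r = sxy / (sxx ** 0.5 * syy ** 0.5)   # Pearson correlation coefficient
    print(f"slope = {slope:.4f} per %, r = {r:.3f}")
    ```

    A correlation near 1 for the oxygen-stored seeds, and its absence in the 45 °C / 85 % RH treatment, would reproduce the contrast the abstract describes.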

  8. Seed storage at elevated partial pressure of oxygen, a fast method for analysing seed ageing under dry conditions.

    Science.gov (United States)

    Groot, S P C; Surki, A A; de Vos, R C H; Kodde, J

    2012-11-01

    Despite differences in physiology between dry and relatively moist seeds, seed ageing tests most often use a temperature and seed moisture level that are higher than during the dry storage used in commercial practice and gene banks. This study aimed to test whether seed ageing under dry conditions can be accelerated by storing under high-pressure oxygen. Methods: Dry barley (Hordeum vulgare), cabbage (Brassica oleracea), lettuce (Lactuca sativa) and soybean (Glycine max) seeds were stored between 2 and 7 weeks in steel tanks under 18 MPa partial pressure of oxygen. Storage under high-pressure nitrogen gas or under ambient air pressure served as controls. The method was compared with storage at 45 °C after equilibration at 85 % relative humidity and with long-term storage at the laboratory bench. Germination behaviour, seedling morphology and tocopherol levels were assessed. The ageing of the dry seeds was indeed accelerated by storing under high-pressure oxygen. The morphological ageing symptoms of the stored seeds resembled those observed after ageing under long-term dry storage conditions. Barley appeared more tolerant of this storage treatment than lettuce and soybean. Less-mature harvested cabbage seeds were more sensitive, as were primed compared with non-primed lettuce seeds. Under high-pressure oxygen storage the tocopherol levels of dry seeds decreased, in a linear way with the decline in seed germination, but remained unchanged in seeds deteriorated during storage at 45 °C after equilibration at 85 % RH. Seed storage under high-pressure oxygen offers a novel and relatively fast method to study the physiology and biochemistry of seed ageing at different seed moisture levels and temperatures, including those representative of the dry storage conditions used in gene banks and commercial practice.

  9. Accuracy Improvement of the Method of Multiple Scales for Nonlinear Vibration Analyses of Continuous Systems with Quadratic and Cubic Nonlinearities

    Directory of Open Access Journals (Sweden)

    Akira Abe

    2010-01-01

    Ω and ω are the driving and natural frequencies, respectively. The application of Galerkin's procedure to the equation of motion yields nonlinear ordinary differential equations with quadratic and cubic nonlinear terms. The steady-state responses are obtained by using the discretization approach of the MMS, in which the definition of the detuning parameter, expressing the relationship between the natural frequency and the driving frequency, is changed in an attempt to improve the accuracy of the solutions. The validity of the solutions is discussed by comparing them with solutions of the direct approach of the MMS and the finite difference method.

  10. A novel method for analysing key corticosteroids in polar bear (Ursus maritimus) hair using liquid chromatography tandem mass spectrometry.

    Science.gov (United States)

    Weisser, Johan J; Hansen, Martin; Björklund, Erland; Sonne, Christian; Dietz, Rune; Styrishave, Bjarne

    2016-04-01

    This paper presents the development and evaluation of a methodology for the extraction, clean-up and analysis of three key corticosteroids (aldosterone, cortisol and corticosterone) in polar bear hair. Such a methodology can be used to monitor stress biomarkers in polar bears and may serve as a useful tool for obtaining long-term and retrospective information. We developed a combined pressurized liquid extraction (PLE)-solid phase extraction (SPE) procedure for corticosteroid extraction and clean-up, followed by high-pressure liquid chromatography tandem mass spectrometry (HPLC-MS/MS) analysis. This procedure allows the simultaneous determination of multiple steroids, in contrast to previous polar bear studies based on ELISA techniques. Absolute method recoveries were 81%, 75% and 60% for cortisol, corticosterone and aldosterone, respectively. We applied the developed method to a hair sample pooled from four East Greenland polar bears. Herein cortisol and corticosterone were successfully determined at levels of 0.32 ± 0.02 ng/g hair and 0.13 ± 0.02 ng/g hair, respectively. Aldosterone was below the limit of detection (LOD). The cortisol level determined in these polar bears was consistent with cortisol levels previously determined in the Southern Hudson Bay and James Bay in Canada using ELISA kits. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. An autoradiographical method using an imaging plate for the analyses of plutonium contamination in a plutonium handling facility

    International Nuclear Information System (INIS)

    Takasaki, Koji; Sagawa, Naoki; Kurosawa, Shigeyuki; Mizuniwa, Harumi

    2011-01-01

    An autoradiographical method using an imaging plate (IP) was developed to analyze plutonium contamination in a plutonium handling facility. The IPs were exposed to ten specimens each having a single plutonium particle. Photostimulated luminescence (PSL) images of the specimens were taken using a laser scanning machine. One relatively large spot induced by α-radioactivity from plutonium was observed in each PSL image. The plutonium-induced spots were discriminated by a threshold derived from the background and from the spot size. A good relationship between the PSL intensities of the spots and the α-radioactivities measured using a radiation counter was obtained by least-squares fitting, taking the fading effect into consideration. This method was applied to workplace monitoring in an actual uranium-plutonium mixed oxide (MOX) fuel fabrication facility. Plutonium contamination was analyzed in ten other specimens having more than two plutonium spots. The α-radioactivities of the plutonium contamination were derived from the PSL images and their relative errors were evaluated from the exposure time. (author)
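    The calibration step described above, relating background-subtracted PSL spot intensities to counter-measured α-activities after correcting for fading, can be sketched as a least-squares fit. The fading model (single exponential with an assumed half-time) and every number below are hypothetical, not from the paper:

    ```python
    import numpy as np

    def fading_factor(hours, half_time=40.0):
        """Assumed single-exponential fading of the stored PSL signal."""
        return 0.5 ** (hours / half_time)

    psl = np.array([120.0, 260.0, 515.0, 1020.0])   # spot PSL intensities
    delay_h = np.array([10.0, 10.0, 20.0, 20.0])    # exposure-to-scan delay (h)
    activity = np.array([5.1, 10.8, 24.9, 50.2])    # Bq from a radiation counter

    # Correct each spot back to its intensity at the end of exposure
    psl_corrected = psl / fading_factor(delay_h)

    # Least-squares slope through the origin: activity ≈ k * PSL_corrected
    k = (psl_corrected @ activity) / (psl_corrected @ psl_corrected)
    print(f"calibration factor k = {k:.4f} Bq per PSL unit")
    ```

    With k in hand, the α-activity of an unknown contamination spot follows from its fading-corrected PSL intensity alone.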

  12. Sensitivity analyses on natural convection in an 8:1 tall enclosure using finite-volume methods

    International Nuclear Information System (INIS)

    Ambrosini, Walter; Forgione, N.; Ferreri, Juan C.

    2004-01-01

    The results presented here are an extension of those obtained in previous work by the authors on a benchmark problem dealing with buoyancy-driven flow in an 8:1 tall enclosure. A simple finite-volume model purposely set up for this application provided the preliminary results reported. The modelling technique was a direct extension of the one previously used by the authors to deal with single-phase natural convection and boiling channel instabilities. This extension to two-dimensional flow is based on a finite-volume scheme using first-order approximation in time and space. Despite its simplicity, the results were reasonably good and detected the flow instabilities, thanks to proper selection of the cell Courant number and a semi-implicit solution algorithm. In this paper, results obtained using the same code with different discretisations are presented in more detail and are further discussed. They show proper capture of all the main characteristics of the flow, also reported by other authors and considered as 'converged' solutions. The results show that, as expected, first-order explicit or semi-implicit methods can be considered reliable tools when dealing with stability problems, if properly used. Some initial results obtained using a second-order upwind method are also presented for the purpose of comparison. Additionally, results obtained using a commercial code (FLUENT) are reported. (author)

  13. New method to analyse internal disruptions with five-camera soft x-ray tomography on RTP

    Energy Technology Data Exchange (ETDEWEB)

    Tanzi, C.P. [FOM-Instituut voor Plasmafysica, Rijnhuizen (Netherlands); Blank, H.J. de [Max-Planck-Institut fuer Plasmaphysik, Garching (Germany)

    1994-12-31

    The five-camera soft x-ray diagnostic on the Rijnhuizen Tokamak Project (RTP) offers a wealth of information on sawteeth. Using four or five cameras, tomographic images with 7 poloidal harmonics have been obtained throughout sawtooth crashes and precursor oscillations. The purpose of this paper is to determine whether the precursors are ideal MHD modes or can be attributed to the resistive growth of a magnetic island. In practice, the detection of the topology of magnetic surfaces from the reconstructed tomographic images is complicated by the fact that (except during the final phase of the collapse) the time dependence is dominated by rotation of the m = 1 displacement. A novel method allows to define quantities, e.g. the plasma volume where the emissivity is within a certain range, whose change is only determined by cross-field transport or reconnection, and is not affected by m = 1 convection and by rotation. (author) 6 refs., 2 figs.

  14. New method to analyse internal disruptions with five-camera soft x-ray tomography on RTP

    International Nuclear Information System (INIS)

    Tanzi, C.P.; Blank, H.J. de

    1994-01-01

    The five-camera soft x-ray diagnostic on the Rijnhuizen Tokamak Project (RTP) offers a wealth of information on sawteeth. Using four or five cameras, tomographic images with 7 poloidal harmonics have been obtained throughout sawtooth crashes and precursor oscillations. The purpose of this paper is to determine whether the precursors are ideal MHD modes or can be attributed to the resistive growth of a magnetic island. In practice, the detection of the topology of magnetic surfaces from the reconstructed tomographic images is complicated by the fact that (except during the final phase of the collapse) the time dependence is dominated by rotation of the m = 1 displacement. A novel method makes it possible to define quantities, e.g. the plasma volume in which the emissivity lies within a certain range, whose change is determined only by cross-field transport or reconnection and is not affected by m = 1 convection or by rotation. (author) 6 refs., 2 figs

  15. Arsenic absorption by members of the Brassicacea family, analysed by neutron activation, k0-method - preliminary results

    International Nuclear Information System (INIS)

    Uemura, George; Matos, Ludmila Vieira da Silva; Silva, Maria Aparecida da; Ferreira, Alexandre Santos Martorano; Menezes, Maria Angela de Barros Correia

    2009-01-01

    Natural arsenic contamination is a cause for concern in many countries of the world, including Argentina, Bangladesh, Chile, China, India, Mexico, Thailand and the United States of America, and also in Brazil, especially in the Iron Quadrangle area, where mining activities have contributed to aggravating the natural contamination. Brassicaceae is a plant family with edible species (arugula, cabbage, cauliflower, cress, kale, mustard, radish), ornamental ones (alyssum, field pennycress, ornamental cabbages and kales), and some species known as metal and metalloid accumulators (Indian mustard, field pennycress) of elements such as chromium, nickel and arsenic. The present work aimed at studying other taxa of the Brassicaceae family to verify their capability of absorbing arsenic under controlled conditions, for possible utilisation in remediation activities. The analytical method chosen was neutron activation analysis, k 0 method, a routine technique at CDTN that is also very appropriate for arsenic studies. To avoid possible interference from solid substrates, like sand or vermiculite, attempts were made to keep the specimens in 1/4-strength Murashige and Skoog basal salt solution (M and S). Growth was stunted and the plants withered and perished, showing that modifications to the M and S solution had to be made. The addition of nickel and silicon allowed normal growth of the plant specimens for periods longer than usually achieved (more than two months), yielding samples large enough for further studies with other techniques, like ICP-MS, and other targets, like speciation studies. The results of arsenic absorption are presented here and the need for nickel and silicon in the composition of the M and S solution is discussed. (author)

  16. Arsenic absorption by members of the Brassicacea family, analysed by neutron activation, k{sub 0}-method - preliminary results

    Energy Technology Data Exchange (ETDEWEB)

    Uemura, George; Matos, Ludmila Vieira da Silva; Silva, Maria Aparecida da; Ferreira, Alexandre Santos Martorano; Menezes, Maria Angela de Barros Correia [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN-CNEN/MG), Belo Horizonte, MG (Brazil)], e-mail: george@cdtn.br, e-mail: menezes@cdtn.br

    2009-07-01

    Natural arsenic contamination is a cause for concern in many countries of the world, including Argentina, Bangladesh, Chile, China, India, Mexico, Thailand and the United States of America, and also in Brazil, especially in the Iron Quadrangle area, where mining activities have contributed to aggravating the natural contamination. Brassicaceae is a plant family with edible species (arugula, cabbage, cauliflower, cress, kale, mustard, radish), ornamental ones (alyssum, field pennycress, ornamental cabbages and kales), and some species known as metal and metalloid accumulators (Indian mustard, field pennycress) of elements such as chromium, nickel and arsenic. The present work aimed at studying other taxa of the Brassicaceae family to verify their capability of absorbing arsenic under controlled conditions, for possible utilisation in remediation activities. The analytical method chosen was neutron activation analysis, k{sub 0} method, a routine technique at CDTN that is also very appropriate for arsenic studies. To avoid possible interference from solid substrates, like sand or vermiculite, attempts were made to keep the specimens in 1/4-strength Murashige and Skoog basal salt solution (M and S). Growth was stunted and the plants withered and perished, showing that modifications to the M and S solution had to be made. The addition of nickel and silicon allowed normal growth of the plant specimens for periods longer than usually achieved (more than two months), yielding samples large enough for further studies with other techniques, like ICP-MS, and other targets, like speciation studies. The results of arsenic absorption are presented here and the need for nickel and silicon in the composition of the M and S solution is discussed. (author)

  17. Analyses of the internal structure of the oscillating vibro-packed fuels by the micro focus X-rays CT method

    International Nuclear Information System (INIS)

    Mizuta, Yasutoshi

    2003-02-01

    The purpose of this study is to support the development of vibro-packed fuel technology at the Japan Nuclear Cycle Development Institute. Three-dimensional (3-D) data were built from the multiple cross-sectional images obtained by the micro-focus X-ray CT method on vibro-packed fuel models. Structural analyses were carried out on the obtained 3-D CT images. The packing-rate distribution and the density distribution were measured, as well as the number distribution of particles, etc. Consequently, it was found that the vibration conditions and the resulting packing state are strongly correlated, and it is also shown that 3-D analyses of the internal structure by the micro-focus X-ray CT method are effective for the performance evaluation of vibro-packed fuels. (author)

  18. Experimental and numerical analyses of pure copper during ECFE process as a novel severe plastic deformation method

    Directory of Open Access Journals (Sweden)

    M. Ebrahimi

    2014-02-01

    Full Text Available In this paper, a new severe plastic deformation method called the equal channel forward extrusion (ECFE) process has been proposed and investigated by experimental and numerical approaches on commercial pure copper billets. The experimental results indicated that the magnitudes of yield strength, ultimate tensile strength and Vickers micro-hardness markedly improved from 114 MPa, 204 MPa and 68 HV in the annealed condition to 269 MPa, 285 MPa and 126 HV after the fourth pass of the ECFE process, respectively. In addition, scanning electron microscopy observation of the samples showed that the average grain size of the as-received state, about 22 μm, was reduced to 1.4 μm after the final pass. The numerical investigation suggested that although one pass of the ECFE process fabricates material with a mean effective strain magnitude of about 1, the level of imposed effective plastic strain gradually diminishes from the circumference to the center of the deformed billet.

  19. Developments based on stochastic and determinist methods for studying complex nuclear systems; Developpements utilisant des methodes stochastiques et deterministes pour l'analyse de systemes nucleaires complexes

    Energy Technology Data Exchange (ETDEWEB)

    Giffard, F.X

    2000-05-19

    In the field of reactor and fuel cycle physics, particle transport plays an important role. Neutronic design, operation and evaluation calculations of nuclear systems make use of large and powerful computer codes. However, current limitations in terms of computer resources make it necessary to introduce simplifications and approximations in order to keep calculation time and cost within reasonable limits. Two different types of methods are available in these codes. The first is the deterministic method, which is applicable in most practical cases but requires approximations. The other is the Monte Carlo method, which does not make these approximations but which generally requires exceedingly long running times. The main motivation of this work is to investigate the possibility of a combined use of the two methods in such a way as to retain their advantages while avoiding their drawbacks. Our work has mainly focused on the speed-up of 3-D continuous-energy Monte Carlo calculations (TRIPOLI-4 code) by means of an optimized biasing scheme derived from importance maps obtained with the deterministic code ERANOS. The application of this method to two different practical shielding-type problems has demonstrated its efficiency: speed-up factors of 100 have been reached. In addition, the method offers the advantage of being easily implemented, as it is not very sensitive to the choice of the importance mesh grid. It has also been demonstrated that significant speed-ups can be achieved by this method in the case of coupled neutron-gamma transport problems, provided that the interdependence of the neutron and photon importance maps is taken into account. Complementary studies are necessary to tackle a problem brought out by this work, namely undesirable jumps in the Monte Carlo variance estimates. (author)
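    The biasing idea can be illustrated on a toy deep-penetration problem: sample path lengths from a stretched (biased) density and correct each tally with the likelihood ratio, a stand-in for the importance maps the thesis derives from ERANOS. All numbers are illustrative, not from the work itself:

```python
import math
import random

def transmission_probability(d, n_samples, bias_rate=1.0, seed=0):
    """Estimate P(path > d) for unit-cross-section exponential free paths.

    bias_rate < 1 stretches the sampled paths (importance biasing toward
    deep penetration); each contributing sample carries the weight
    f(x)/g(x) so the estimator stays unbiased."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        x = rng.expovariate(bias_rate)          # sample from the biased density g
        if x > d:
            # weight = true density / biased density at the sampled point
            w = math.exp(-x) / (bias_rate * math.exp(-bias_rate * x))
            total += w
    return total / n_samples

exact = math.exp(-5.0)                          # analytic answer for d = 5
biased = transmission_probability(5.0, 20000, bias_rate=0.3)
```

    With the analog density (bias_rate = 1) only a fraction exp(-5) of histories score; the biased density makes deep histories frequent and compensates through the weights, which is the essence of the speed-up factors reported above.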

  1. Initial placement and secondary displacement of a new suture-method catheter for sciatic nerve block in healthy volunteers

    DEFF Research Database (Denmark)

    Lyngeraa, T S; Rothe, C; Steen-Hansen, C

    2017-01-01

    electromyography and cold sensation. After return of motor and sensory function, volunteers performed standardised physical exercises; injection of the same study medication was repeated in the same leg and followed by motor and sensory assessments. Fifteen of 16 (94%; 95%CI 72-99%) initial catheter placements...... displacement was 5 mm. Catheters with secondary block failure were displaced between 6 and 10 mm. One catheter was displaced 1.8 mm that resulted in a decrease in maximum voluntary isometric contraction of less than 20%. After repeat test injection, 14 of the 16 volunteers had loss of cold sensation. Neither...

  2. Optimized methods to measure acetoacetate, 3-hydroxybutyrate, glycerol, alanine, pyruvate, lactate and glucose in human blood using a centrifugal analyser with a fluorimetric attachment

    OpenAIRE

    Stappenbeck, R.; Hodson, A. W.; Skillen, A. W.; Agius, L.; Alberti, K. G. M. M.

    1990-01-01

    Optimized methods are described for the analysis of glucose, lactate, pyruvate, alanine, glycerol, D-3-hydroxybutyrate and acetoacetate in perchloric acid extracts of human blood using the Cobas Bio centrifugal analyser. Glucose and lactate are measured using the photometric mode and other metabolites using the fluorimetric mode. The intra-assay coefficients of variation ranged from 0.7 to 4.1%, except with very low levels of pyruvate and acetoacetate where the coefficients of variation were ...

  3. A systemic gene silencing method suitable for high throughput, reverse genetic analyses of gene function in fern gametophytes

    Directory of Open Access Journals (Sweden)

    Tanurdzic Milos

    2004-04-01

    Full Text Available Abstract Background Ceratopteris richardii is a useful experimental system for studying gametophyte development and sexual reproduction in plants. However, few tools for cloning mutant genes or disrupting gene function exist for this species. The feasibility of systemic gene silencing as a reverse genetics tool was examined in this study. Results Several DNA constructs targeting a Ceratopteris protoporphyrin IX magnesium chelatase (CrChlI) gene that is required for chlorophyll biosynthesis were each introduced into young gametophytes by biolistic delivery. Their transient expression in individual cells resulted in a colorless cell phenotype that affected most cells of the mature gametophyte, including the meristem and gametangia. The colorless phenotype was associated with a 7-fold decrease in the abundance of the endogenous transcript. While a construct designed to promote the transient expression of a CrChlI double-stranded, potentially hairpin-forming RNA was found to be the most efficient in systemically silencing the endogenous gene, a plasmid containing the CrChlI cDNA insert alone was sufficient to induce silencing. Bombarded, colorless hermaphroditic gametophytes produced colorless embryos following self-fertilization, demonstrating that the silencing signal could be transmitted through gametogenesis and fertilization. Bombardment of young gametophytes with constructs targeting the Ceratopteris filamentous temperature sensitive (CrFtsZ) and uroporphyrin dehydrogenase (CrUrod) genes also produced the expected mutant phenotypes. Conclusion A method that induces the systemic silencing of target genes in the Ceratopteris gametophyte is described. It provides a simple, inexpensive and rapid means to test the functions of genes involved in gametophyte development, especially those involved in cellular processes common to all plants.

  4. Quantitative analyses of impurities in silicon-carbide (SiC) and high-purity titanium by neutron activation analysis based on the k0-standardization method. Development of irradiation silicon technology in productivity using research reactor (Joint research)

    International Nuclear Information System (INIS)

    Motohashi, Jun; Takahashi, Hiroyuki; Magome, Hirokatsu; Sasajima, Fumio; Tokunaga, Okihiro; Kawasaki, Kozo; Onizawa, Koji; Isshiki, Masahiko

    2009-07-01

    JRR-3 and JRR-4 have been providing neutron-transmutation-doped silicon (NTD-Si) using the silicon NTD process, a method to produce high-quality semiconductors. The domestic supply of NTD-Si is insufficient for the demand, and the market for NTD-Si is growing significantly at present, so it is very important to increase production. To fulfill this requirement, we have been investigating a neutron filter made of high-purity titanium for uniform doping. Silicon-carbide (SiC) semiconductors doped with the NTD technology are considered suitable for high-power devices with performances superior to conventional Si-based devices, so we are very interested in SiC as well. This report presents the results obtained after the impurity contents of the high-purity titanium and SiC were analyzed by neutron activation analysis (NAA) using the k0-standardization method. There were 6 and 9 impurity elements detected in the high-purity titanium and SiC, respectively. Among these, Sc in the high-purity titanium and Fe in SiC were comparatively long half-life nuclides. From the viewpoint of exposure during handling, the impurity control of these materials needs to be examined. (author)

  5. On Rigorous Drought Assessment Using Daily Time Scale: Non-Stationary Frequency Analyses, Revisited Concepts, and a New Method to Yield Non-Parametric Indices

    Directory of Open Access Journals (Sweden)

    Charles Onyutha

    2017-10-01

    Full Text Available Some of the problems in drought assessments are that: analyses tend to focus on coarse temporal scales, many of the methods yield skewed indices, a few terminologies are ambiguously used, and analyses comprise an implicit assumption that the observations come from a stationary process. To solve these problems, this paper introduces non-stationary frequency analyses of quantiles. How to use non-parametric rescaling to obtain robust indices that are not (or minimally) skewed is also introduced. To avoid ambiguity, some concepts on, e.g., incidence, extremity, etc., were revisited through a shift from monthly to daily time scales. Demonstrations of the introduced methods were made using daily flow and precipitation insufficiency (precipitation minus potential evapotranspiration) from the Blue Nile basin in Africa. Results show that, when a significant trend exists in extreme events, stationarity-based quantiles can be far different from those when non-stationarity is considered. The introduced non-parametric indices were found to closely agree with the well-known standardized precipitation evapotranspiration indices in many aspects except skewness. Apart from revisiting some concepts, the advantages of using fine instead of coarse time scales in drought assessment are given. Links for obtaining freely downloadable tools that implement the introduced methods are provided.
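    A minimal sketch of one ingredient, non-parametric rescaling: ranks are converted to empirical probabilities via the Weibull plotting position and mapped through the standard-normal quantile, so the resulting index inherits no skewness from the raw data. This is a generic rank-based construction, not the paper's exact procedure:

```python
from statistics import NormalDist

def nonparametric_index(values):
    """Rank-based standardized index: empirical probability p = r/(n+1)
    (Weibull plotting position), then the standard-normal quantile.
    Ties are not treated specially in this sketch."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    ranks = [0] * n
    for r, i in enumerate(order, start=1):
        ranks[i] = r                        # 1-based rank of each observation
    nd = NormalDist()
    return [nd.inv_cdf(r / (n + 1)) for r in ranks]

# hypothetical daily precipitation-insufficiency values (arbitrary units)
deficits = [12.0, 3.5, 8.1, 0.4, 15.2, 6.6, 9.9]
index = nonparametric_index(deficits)
```

    Because only ranks enter the transform, an extreme outlier shifts its own index value but cannot skew the whole distribution, which is the robustness property the paper emphasizes.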

  6. Quantitative secondary electron detection

    Science.gov (United States)

    Agrawal, Jyoti; Joy, David C.; Nayak, Subuhadarshi

    2018-05-08

    Quantitative Secondary Electron Detection (QSED) using an array of solid-state-device (SSD) based electron counters enables critical-dimension metrology measurements in materials such as semiconductors, nanomaterials, and biological samples (FIG. 3). Methods and devices effect a quantitative detection of secondary electrons with an array comprising a number of solid state detectors. The array senses the secondary electrons with a plurality of solid state detectors, counting the number of secondary electrons with a time-to-digital converter circuit in counter mode.

  7. Statistical properties of interval mapping methods on quantitative trait loci location: impact on QTL/eQTL analyses

    Directory of Open Access Journals (Sweden)

    Wang Xiaoqiang

    2012-04-01

    Full Text Available Abstract Background Quantitative trait loci (QTL) detection on a huge number of phenotypes, as in eQTL detection on transcriptomic data, can be dramatically impaired by the statistical properties of interval mapping methods. One major outcome is the high number of QTL detected at marker locations. The present study aims at identifying and specifying the sources of this bias, in particular in the case of the analysis of data from outbred populations. Analytical developments were carried out in a backcross situation in order to specify the bias and to propose an algorithm to control it. The outbred population context was studied through simulated data sets in a wide range of situations. The likelihood ratio test was first analyzed under the "one QTL" hypothesis in a backcross population. Designs of sib families were then simulated and analyzed using the QTL Map software. On the basis of the theoretical results in backcross, parameters such as the population size, the density of the genetic map, the QTL effect and the true location of the QTL were taken into account under the "no QTL" and the "one QTL" hypotheses. A combination of two non-parametric tests - the Kolmogorov-Smirnov test and the Mann-Whitney-Wilcoxon test - was used in order to identify the parameters that affected the bias and to specify how much they influenced the estimation of QTL location. Results A theoretical expression of the bias of the estimated QTL location was obtained for a backcross-type population. We demonstrated a common source of bias under the "no QTL" and the "one QTL" hypotheses and qualified the possible influence of several parameters. Simulation studies confirmed that the bias exists in outbred populations under both the "no QTL" and "one QTL" hypotheses on a linkage group. The QTL location was systematically closer to marker locations than expected, particularly in the case of a low QTL effect, small population size or low marker density.
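    Of the two non-parametric tests combined above, the Mann-Whitney-Wilcoxon test can be sketched as follows, using a large-sample normal approximation and no tie correction; the sample values are made up for illustration:

```python
import math
from statistics import NormalDist

def mann_whitney_u(x, y):
    """Mann-Whitney-Wilcoxon U statistic with a large-sample normal
    approximation for the two-sided p-value. Ties are ignored for
    simplicity; real implementations use mid-ranks and a tie correction."""
    pooled = sorted(x + y)
    # 1-based rank of each x value in the pooled sample (valid without ties)
    r1 = sum(pooled.index(v) + 1 for v in x)
    n1, n2 = len(x), len(y)
    u1 = r1 - n1 * (n1 + 1) / 2
    mean_u = n1 * n2 / 2
    sd_u = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u1 - mean_u) / sd_u
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return u1, p

# made-up estimated QTL locations (cM) from two hypothetical simulation settings
u, p = mann_whitney_u([41.2, 44.8, 39.9, 43.1], [52.0, 49.5, 55.3, 50.8])
```

    A location shift between the two groups of estimated positions, like the marker-ward bias described above, drives U toward an extreme value and shrinks the p-value.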

  8. Development of evaluation method of long-term confinement performance for canister. Part 1. Fundamental study of analyses method for helium leak detection

    International Nuclear Information System (INIS)

    Takeda, Hirofumi; Toriu, Daisuke; Ushijima, Satoru

    2014-01-01

    The storage management of spent nuclear fuel with respect to ageing degradation is becoming a global issue, so we surveyed the present status and measures of such management in each country. In particular, for concrete cask storage, a leak detection method that detects a leak from the change in canister surface temperature has been proposed. We performed thermal-hydraulics analyses to clarify the phenomenon and to work toward practical use of the detection method. To analyze the leak phenomenon with high accuracy, it is necessary to stably solve the low-Mach-number flow problem considering the compressibility of the gas. Therefore, we modified the conventional compressible flow solution method and proposed a new method that is applicable to thermo-hydraulic phenomena and satisfies the mass conservation law with high accuracy. For the cavity natural convection analysis, mass conservation in the calculation domain was satisfied with high accuracy. As for the analysis of the leak from the cavity, the helium leak phenomenon could be calculated stably using the proposed method, and the pressure in the cavity and the change of mass could also be analyzed validly. As for the temperature distribution in the cavity, it was confirmed that the temperature changes before and after the leak. (author)

  9. Guidelines for secondary analysis in search of response shift

    NARCIS (Netherlands)

    Schwartz, Carolyn E.; Ahmed, Sara; Sawatzky, Richard; Sajobi, Tolulope; Mayo, Nancy; Finkelstein, Joel; Lix, Lisa; Verdam, Mathilde G. E.; Oort, Frans J.; Sprangers, Mirjam A. G.

    2013-01-01

    Response shift methods have developed substantially in the past decade, with a notable emphasis on model-based methods for response shift detection that are appropriate for the analysis of existing data sets. These secondary data analyses have yielded useful insights and motivated the continued

  11. Measurements of secondary electron cross sections by the pulsed electron beam time-of-flight method. I. Molecular nitrogen

    International Nuclear Information System (INIS)

    Goruganthu, R.R.; Wilson, W.G.; Bonham, R.A.

    1983-01-01

    The secondary electron cross sections for gaseous molecular nitrogen are reported at ejection angles of 30, 45, 60, 75, 90, 105, 120, 135 and 150°, for the energy range 1.5 eV to 20 eV and an incident electron energy of 1 keV. The pulsed electron beam time-of-flight method was employed. The results were placed on an absolute scale by normalization to the elastic scattering. They were compared, where possible, with those reported by Opal, Beaty, and Peterson (OBP). The agreement is somewhat better when the OBP data are divided by 0.53 + 0.47 sinθ, as suggested by Rudd and DuBois. Fits of our data by Legendre-polynomial expansions are used to estimate the low-energy portion of the cross section, dσ/dE. This work suggests that existing experimental cross sections for secondary electron ejection as a function of angle and ejected energy may be no better known than ±40%, especially in the low-energy region. 7 references, 14 figures, 2 tables
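    The Rudd and DuBois renormalization quoted above is a simple pointwise division of the angular data; a sketch with placeholder cross-section values (not the measured N2 data):

```python
import math

# the ejection angles used in the paper; the cross-section values here
# are placeholders, not the reported measurements
angles_deg = [30, 45, 60, 75, 90, 105, 120, 135, 150]
sigma_obp = [1.0] * len(angles_deg)

def rudd_dubois_correction(sigma, theta_deg):
    """Divide an OBP-style cross section by 0.53 + 0.47*sin(theta),
    the angular renormalization suggested by Rudd and DuBois."""
    theta = math.radians(theta_deg)
    return sigma / (0.53 + 0.47 * math.sin(theta))

corrected = [rudd_dubois_correction(s, a) for s, a in zip(sigma_obp, angles_deg)]
```

    The correction factor is 1 at 90° and grows toward forward and backward angles, where sinθ is small, which is where the two data sets disagreed most.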

  12. Analytical procedures used by the uranium - radon - radium geochemistry group; Methodes d'analyses utilisees par la section de geochimie uranium, radon, radium

    Energy Technology Data Exchange (ETDEWEB)

    Berthollet, P [Commissariat a l' Energie Atomique, Fontenay-aux-Roses (France). Centre d' Etudes Nucleaires

    1968-07-01

    The analytical methods described are applied to the geochemical prospecting of uranium. The nature of the material under investigation, which may be soil, alluvium, rock, plant or water, and the particular requirements of geochemical exploration, have prompted us to adjust the widely used conventional methods to the demands of large-scale operation, without lowering their standards of accuracy and reliability. These procedures are explained in great detail. Though most of this technical information may appear superfluous to the chemical engineer well versed in trace element determination, it will serve a useful purpose both for the operator in charge of routine testing and for the chemist called upon to interpret the results. (author)

  13. [Secondary hypertension].

    Science.gov (United States)

    Yoshida, Yuichi; Shibata, Hirotaka

    2015-11-01

    Hypertension is a common disease and a crucial predisposing factor for cardiovascular diseases. Approximately 10% of hypertensive patients have secondary hypertension, for which a pathogenetic factor can be identified. Secondary hypertension comprises endocrine, renal, and other diseases. Primary aldosteronism, Cushing's syndrome, pheochromocytoma, hyperthyroidism, and hypothyroidism result in endocrine hypertension. Renal parenchymal hypertension and renovascular hypertension result in renal hypertension. Other diseases such as obstructive sleep apnea syndrome are also very prevalent in secondary hypertension. It is crucial to find and treat secondary hypertension at earlier stages, since most secondary hypertension is curable or can be dramatically improved by specific treatment. One should keep in mind that screening for secondary hypertension should be done at least once in daily clinical practice.

  14. EFFECTS OF TWO METHODS OF INSTRUCTION ON STUDENTS’ CRITICAL RESPONSE TO PROSE LITERATURE TEXT IN ENGLISH IN SOME SECONDARY SCHOOLS IN BENIN CITY

    Directory of Open Access Journals (Sweden)

    F. O. EZEOKOLI

    2016-08-01

    Full Text Available This study investigated the effects of two methods of instruction on secondary school students' critical response to Prose Literature text. The study adopted a pretest, posttest, control group quasi-experimental design. The participants in the study were 84 Senior Secondary II students of Literature-in-English purposively selected from four schools in Ikpoba-Okha Local Government Area of Edo State. Two intact classes were randomly assigned to each of the treatment and control groups. Three hypotheses were tested at the 0.05 alpha level. The instruments used were: Critical Response to Prose Literature Test (r = .75), Questionnaire on Home Background of Students (r = .82), and Critical Response to Prose Literature Test Marking Guide. Data obtained were subjected to Analysis of Covariance and graphs. The results showed a significant main effect of treatment on students' critical response to Prose Literature (F(1, 77) = 44.731; p < .05). Students exposed to the Engagement Strategies Method performed better than those exposed to the Conventional Method of instruction. Further, home background of students had no significant effect on students' critical response to Prose Literature text (F(2, 77) = 4.902; p > .05). There was a significant interaction effect of treatment and home background of students on students' critical response to Prose Literature text (F(2, 77) = 3.508; p < .05). It was concluded that the Engagement Strategies Method is effective in promoting students' critical response to Prose Literature text. Teachers of Literature-in-English should employ the Engagement Strategies Method in teaching Prose Literature to students in Senior Secondary Schools.

  15. An efficient method for the prediction of deleterious multiple-point mutations in the secondary structure of RNAs using suboptimal folding solutions

    Directory of Open Access Journals (Sweden)

    Barash Danny

    2008-04-01

    Full Text Available Abstract Background RNAmute is an interactive Java application which, given an RNA sequence, calculates the secondary structure of all single point mutations and organizes them into categories according to their similarity to the predicted structure of the wild type. The secondary structure predictions are performed using the Vienna RNA package. A more efficient implementation of RNAmute is needed, however, to extend from the case of single point mutations to the general case of multiple point mutations, which may often be desired for computational predictions alongside mutagenesis experiments. But analyzing multiple point mutations, a process that requires traversing all possible mutations, becomes highly expensive since the running time is O(n^m) for a sequence of length n with m-point mutations. Using Vienna's RNAsubopt, we present a method that selects only those mutations, based on stability considerations, which are likely to be conformationally rearranging. The approach is best examined using the dot plot representation for RNA secondary structure. Results Using RNAsubopt, the suboptimal solutions for a given wild-type sequence are calculated once. Then, specific mutations are selected that are most likely to cause a conformational rearrangement. For an RNA sequence of about 100 nts and 3-point mutations (n = 100, m = 3), for example, the proposed method reduces the running time from several hours or even days to several minutes, thus enabling the practical application of RNAmute to the analysis of multiple-point mutations. Conclusion A highly efficient addition to RNAmute that is as user friendly as the original application but that facilitates the practical analysis of multiple-point mutations is presented. Such an extension can now be exploited prior to site-directed mutagenesis experiments by virologists, for example, who investigate the change of function in an RNA virus via mutations that disrupt important motifs in its secondary structure.
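    The O(n^m) growth that motivates the method is easy to make concrete: the number of m-point mutants of a length-n sequence is C(n, m) * 3^m. A small sketch of the combinatorics (generic, not RNAmute's code):

```python
from itertools import combinations, product
from math import comb

BASES = "ACGU"

def count_m_point_mutants(n, m):
    """Number of distinct m-point mutants of a length-n RNA: choose m
    positions, then 3 alternative bases at each -- C(n, m) * 3**m,
    i.e. O(n**m) growth in the exhaustive search."""
    return comb(n, m) * 3 ** m

def enumerate_mutants(seq, m):
    """Explicitly generate every m-point mutant of a short sequence."""
    for positions in combinations(range(len(seq)), m):
        alternatives = [[b for b in BASES if b != seq[p]] for p in positions]
        for subst in product(*alternatives):
            mutant = list(seq)
            for p, b in zip(positions, subst):
                mutant[p] = b
            yield "".join(mutant)

total = count_m_point_mutants(100, 3)          # the n = 100, m = 3 case above
mutants = list(enumerate_mutants("ACGU", 1))   # all 12 single-point mutants
```

    For n = 100 and m = 3 the exhaustive set already exceeds four million candidate structures to fold, which is why filtering on RNAsubopt stability pays off.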

  16. Secondary ion mass spectroscopy (SIMS)

    International Nuclear Information System (INIS)

    Naik, P.K.

    1975-01-01

    Secondary Ion Mass Spectrometry (SIMS), which is primarily a method for investigating the chemical composition of the uppermost atomic layers of solid surfaces, is explained. In this method, the specimen is bombarded with a primary positive ion beam of small current density, so that only a fraction of a monolayer is consumed. Positive and negative ions sputtered from the specimen are mass analysed to give the surface chemical composition. The analytical system, which consists of a primary ion source, a target manipulator and a mass spectrometer housed in an ultrahigh vacuum system, is described. This method can also be used for profile measurements in thin films by using higher current densities of the primary ions. Fields of application such as surface reactions, semiconductors, thin films, emission processes, chemistry and metallurgy are touched upon. Various aspects of this method, such as the sputtering process, instrumentation, and applications, are discussed. (K.B.)

  17. Determination of detection limits for a VPD ICPMS method of analysis; Determination des limites de detection d'une methode d'analyse VPD ICPMS

    Energy Technology Data Exchange (ETDEWEB)

    Badard, M.; Veillerot, M

    2007-07-01

    This training course report presents the different methods for the detection and quantification of metallic impurities in semiconductors. One of the most precise techniques is the collection of metal impurities by vapor phase decomposition (VPD) followed by their analysis by ICPMS (inductively coupled plasma mass spectrometry). The study shows the importance of detection limits in the domain of chemical analysis and the way to determine them for ICPMS analysis. The results found for the detection limits are excellent. Even if the detection limits reached with ICPMS performed after manual or automatic VPD are much higher than the detection limits of ICPMS alone, this method remains one of the most sensitive for ultra-trace analysis. (J.S.)
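    A common way to determine such detection limits, and plausibly the one meant here, is the 3-sigma criterion on repeated blank measurements divided by the calibration sensitivity; the blank counts and sensitivity value below are hypothetical:

```python
from statistics import stdev

def detection_limit(blank_signals, sensitivity):
    """3-sigma-style estimate: LOD = 3 * s_blank / sensitivity, where
    s_blank is the standard deviation of repeated blank measurements and
    sensitivity is the calibration slope (signal per concentration unit)."""
    return 3 * stdev(blank_signals) / sensitivity

# hypothetical blank counts from repeated ICPMS runs of a clean sample
blanks = [120.0, 131.0, 118.0, 125.0, 122.0, 128.0]
lod = detection_limit(blanks, sensitivity=2.0e4)  # counts per unit concentration, assumed
```

    The estimate improves (drops) as the blank scatter shrinks or the calibration slope steepens, which is why VPD preconcentration raises the effective sensitivity even though it adds handling steps.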

  18. The Yusuf-Peto method was not a robust method for meta-analyses of rare events data from antidepressant trials

    DEFF Research Database (Denmark)

    Sharma, Tarang; Gøtzsche, Peter C.; Kuss, Oliver

    2017-01-01

    Objectives The aim of the study was to identify the validity of effect estimates for serious rare adverse events in clinical study reports of antidepressant trials, across different meta-analysis methods. Study Design and Setting Four serious rare adverse events (all-cause mortality, suicidality......, aggressive behavior, and akathisia) were meta-analyzed using different methods. The Yusuf-Peto odds ratio ignores studies with no events and was compared with the alternative approaches of generalized linear mixed models (GLMMs), conditional logistic regression, a Bayesian approach using Markov Chain Monte...... from 1. For example, the odds ratio for suicidality for children and adolescents was 2.39 (95% confidence interval = 1.32–4.33) using the Yusuf-Peto method, but increased to 2.64 (1.33–5.26) using conditional logistic regression, to 2.69 (1.19–6.09) using beta-binomial, to 2.73 (1.37–5.42) using
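    The Yusuf-Peto estimator under discussion pools observed-minus-expected event counts over hypergeometric variances; a sketch on made-up sparse-event tables shows why double-zero studies drop out:

```python
import math

def peto_odds_ratio(studies):
    """Pooled Peto odds ratio for a list of 2x2 tables given as
    (events_trt, n_trt, events_ctl, n_ctl). A study with zero events in
    both arms contributes O - E = 0 and V = 0, so it is effectively
    ignored -- the behavior the paper criticizes for rare events."""
    sum_oe, sum_v = 0.0, 0.0
    for a, n1, c, n2 in studies:
        n = n1 + n2
        e_total = a + c
        expected = n1 * e_total / n
        variance = n1 * n2 * e_total * (n - e_total) / (n ** 2 * (n - 1))
        sum_oe += a - expected
        sum_v += variance
    return math.exp(sum_oe / sum_v)

# hypothetical sparse-event trials: (events, size) per arm; the last has no events
trials = [(2, 100, 0, 100), (1, 50, 1, 50), (0, 80, 0, 80)]
pooled_or = peto_odds_ratio(trials)
```

    Dropping the double-zero trial leaves the estimate unchanged, which is exactly the information loss that motivates the GLMM, conditional-logistic, and Bayesian alternatives compared above.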

  19. Optimal Control Method of Parabolic Partial Differential Equations and Its Application to Heat Transfer Model in Continuous Cast Secondary Cooling Zone

    Directory of Open Access Journals (Sweden)

    Yuan Wang

    2015-01-01

    Full Text Available Our work is devoted to a class of optimal control problems for parabolic partial differential equations. Because of the partial differential equation constraints, it is rather difficult to solve the optimization problem. The gradient of the cost function can be found by the adjoint problem approach. Based on the adjoint problem approach, the gradient of the cost function is proved to be Lipschitz continuous. An improved conjugate gradient method is applied to solve this optimization problem and the algorithm is proved to be convergent. The method is applied to set-point values in the continuous cast secondary cooling zone. Based on real data from a plant, simulation experiments show that the method can ensure steel billet quality. From these experimental results, it is concluded that the improved conjugate gradient algorithm is convergent and the method is effective in the optimal control of partial differential equations.
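    For the quadratic model cost J(x) = 0.5*x'Ax - b'x, whose gradient Ax - b plays the role of the adjoint-computed gradient, the conjugate gradient iteration (the classical linear variant, not the paper's improved algorithm) looks like:

```python
def conjugate_gradient(matvec, b, x0, iters=50, tol=1e-10):
    """Linear conjugate gradient minimizing J(x) = 0.5*x'Ax - b'x for a
    symmetric positive definite A supplied as a matrix-vector product.
    The residual r = b - Ax is the negative gradient of J."""
    x = x0[:]
    r = [bi - ai for bi, ai in zip(b, matvec(x))]
    p = r[:]
    rs_old = sum(ri * ri for ri in r)
    for _ in range(iters):
        ap = matvec(p)
        alpha = rs_old / sum(pi * api for pi, api in zip(p, ap))  # exact line search
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs_old) * pi for ri, pi in zip(r, p)]  # conjugate direction
        rs_old = rs_new
    return x

# toy symmetric positive definite system: A = [[4, 1], [1, 3]], b = [1, 2]
A = [[4.0, 1.0], [1.0, 3.0]]
solution = conjugate_gradient(
    lambda v: [sum(a * vi for a, vi in zip(row, v)) for row in A],
    [1.0, 2.0], [0.0, 0.0])
```

    In the PDE-constrained setting each `matvec`-like gradient evaluation requires a forward solve plus an adjoint solve, which is why a fast-converging direction update matters.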

  20. Methods for Functional Connectivity Analyses

    Science.gov (United States)

    2012-12-13

    motor, or hand motor function (green, red, or blue shading, respectively). Thus, this work produced the first comprehensive analysis of ECoG... Affiliations: New York State Department of Health, Albany, NY, USA; Department of Electrical and Computer Engineering, University of Texas at El Paso, TX, USA; Department of Neurology, Albany Medical College, Albany, NY, USA

  1. Methods and Techniques Used to Convey Total System Performance Assessment Analyses and Results for Site Recommendation at Yucca Mountain, Nevada, USA

    International Nuclear Information System (INIS)

    Mattie, Patrick D.; McNeish, Jerry A.; Sevougian, S. David; Andrews, Robert W.

    2001-01-01

    Total System Performance Assessment (TSPA) is used as a key decision-making tool for the potential geologic repository of high level radioactive waste at Yucca Mountain, Nevada USA. Because of the complexity and uncertainty involved in a post-closure performance assessment, an important goal is to produce a transparent document describing the assumptions, the intermediate steps, the results, and the conclusions of the analyses. An important objective for a TSPA analysis is to illustrate confidence in performance projections of the potential repository given a complex system of interconnected process models, data, and abstractions. The methods and techniques used for the recent TSPA analyses demonstrate an effective process to portray complex models and results with transparency and credibility

  2. Assessing the accuracy and reliability of ultrasonographic three-dimensional parathyroid volume measurement in a patient with secondary hyperparathyroidism: a comparison with the two-dimensional conventional method

    Directory of Open Access Journals (Sweden)

    Sung-Hye You

    2017-01-01

    Full Text Available Purpose The purpose of this study was to investigate the accuracy and reliability of the semi-automated ultrasonographic volume measurement tool, virtual organ computer-aided analysis (VOCAL), for measuring the volume of parathyroid glands. Methods Volume measurements for 40 parathyroid glands were performed in patients with secondary hyperparathyroidism caused by chronic renal failure. The volume of the parathyroid glands was measured twice by experienced radiologists by two-dimensional (2D) and three-dimensional (3D) methods using conventional sonograms and the VOCAL with 30° angle increments before parathyroidectomy. The specimen volume was also measured postoperatively. Intraclass correlation coefficients (ICCs) and the absolute percentage error were used for estimating the reproducibility and accuracy of the two different methods. Results The ICC value between two measurements of the 2D method and the 3D method was 0.956 and 0.999, respectively. The mean absolute percentage error of the 2D method and the 3D VOCAL technique was 29.56% and 5.78%, respectively. For accuracy and reliability, the plots of the 3D method showed a more compact distribution than those of the 2D method on the Bland-Altman graph. Conclusion The rotational VOCAL method for measuring the parathyroid gland is more accurate and reliable than the conventional 2D measurement. This VOCAL method could be used as a more reliable follow-up imaging modality in a patient with hyperparathyroidism.
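
The two agreement statistics reported here are straightforward to reproduce; a minimal sketch (one-way random-effects ICC and mean absolute percentage error, assuming two repeated measurements per gland — the study's exact ICC model is not stated in the abstract):

```python
import numpy as np

def icc_oneway(x):
    """One-way random-effects ICC(1,1) for an (n_subjects, k_raters) array."""
    n, k = x.shape
    grand = x.mean()
    subj_means = x.mean(axis=1)
    bms = k * ((subj_means - grand) ** 2).sum() / (n - 1)          # between-subject MS
    wms = ((x - subj_means[:, None]) ** 2).sum() / (n * (k - 1))   # within-subject MS
    return (bms - wms) / (bms + (k - 1) * wms)

def mape(estimated, reference):
    """Mean absolute percentage error against the (postoperative) reference."""
    e, r = np.asarray(estimated, float), np.asarray(reference, float)
    return float(np.mean(np.abs(e - r) / r) * 100)

# Perfectly repeatable paired measurements give ICC = 1
perfect = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
```

An ICC near 0.999, as reported for the 3D method, means virtually all variance comes from true between-gland differences rather than measurement noise.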

  3. Secondary Evaluations.

    Science.gov (United States)

    Cook, Thomas D.

    Secondary evaluations, in which an investigator takes a body of evaluation data collected by a primary evaluation researcher and examines the data to see if the original conclusions about the program correspond with his own, are discussed. The different kinds of secondary evaluations and the advantages and disadvantages of each are pointed out,…

  4. Stochastic methods for the quantification of sensitivities and uncertainties in criticality analyses; Stochastische Methoden zur Quantifizierung von Sensitivitaeten und Unsicherheiten in Kritikalitaetsanalysen

    Energy Technology Data Exchange (ETDEWEB)

    Behler, Matthias; Bock, Matthias; Stuke, Maik; Wagner, Markus

    2014-06-15

    This work describes statistical analyses based on Monte Carlo sampling methods for criticality safety analyses. The methods analyse a large number of calculations of a given problem with statistically varied model parameters to determine uncertainties and sensitivities of the computed results. The GRS development SUnCISTT (Sensitivities and Uncertainties in Criticality Inventory and Source Term Tool) is a modular, easily extensible abstract interface program, designed to perform such Monte Carlo sampling based uncertainty and sensitivity analyses in the field of criticality safety. It couples different criticality and depletion codes commonly used in nuclear criticality safety assessments to the well-established GRS tool SUSA for sensitivity and uncertainty analyses. For uncertainty analyses of criticality calculations, SUnCISTT couples various SCALE sequences developed at Oak Ridge National Laboratory and the general Monte Carlo N-particle transport code MCNP from Los Alamos National Laboratory to SUSA. The impact of manufacturing tolerances of a fuel assembly configuration on the neutron multiplication factor for the various sequences is shown. Uncertainties in nuclear inventories, dose rates, or decay heat can be investigated via the coupling of the GRS depletion system OREST to SUSA. Some results for a simplified irradiated Pressurized Water Reactor (PWR) UO{sub 2} fuel assembly are shown. SUnCISTT also combines the two aforementioned modules for burnup credit criticality analysis of spent nuclear fuel, to ensure an uncertainty and sensitivity analysis that applies the variations of manufacturing tolerances in the burn-up code and the criticality code simultaneously. Calculations and results for a storage cask loaded with typical irradiated PWR UO{sub 2} fuel are shown, including Monte Carlo sampled axial burn-up profiles. The application of SUnCISTT in the field of code validation, specifically, how it is applied to compare a simulation model to available benchmark
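
The sampling-based workflow SUnCISTT automates can be sketched in a few lines: draw model parameters from their tolerance ranges, run the (here, mock) criticality code for each sample, and summarise the spread of the output. The model function, nominal values, and tolerances below are invented for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

def mock_keff(radius_cm, enrichment_pct):
    """Hypothetical stand-in for a criticality code (NOT a real physics model)."""
    return 0.90 + 0.02 * radius_cm + 0.01 * enrichment_pct

# Sample manufacturing tolerances uniformly around assumed nominal values
n = 1000
radius = rng.uniform(0.45, 0.47, n)   # fuel pellet radius, cm (assumed tolerance)
enrich = rng.uniform(3.9, 4.1, n)     # enrichment, wt-% (assumed tolerance)

keff = mock_keff(radius, enrich)
k_mean, k_std = keff.mean(), keff.std(ddof=1)
# Simple sensitivity indicator: correlation between an input and the output
sens_radius = np.corrcoef(radius, keff)[0, 1]
```

Tools like SUSA compute more refined sensitivity measures (e.g. rank correlations over all inputs), but the structure — sample, evaluate, correlate — is the same.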

  5. Optimized methods to measure acetoacetate, 3-hydroxybutyrate, glycerol, alanine, pyruvate, lactate and glucose in human blood using a centrifugal analyser with a fluorimetric attachment.

    Science.gov (United States)

    Stappenbeck, R; Hodson, A W; Skillen, A W; Agius, L; Alberti, K G

    1990-01-01

    Optimized methods are described for the analysis of glucose, lactate, pyruvate, alanine, glycerol, D-3-hydroxybutyrate and acetoacetate in perchloric acid extracts of human blood using the Cobas Bio centrifugal analyser. Glucose and lactate are measured using the photometric mode and other metabolites using the fluorimetric mode. The intra-assay coefficients of variation ranged from 0.7 to 4.1%, except with very low levels of pyruvate and acetoacetate where the coefficients of variation were 7.1 and 12% respectively. All seven metabolites can be measured in a perchloric acid extract of 20 μl of blood. The methods have been optimized with regard to variation in the perchloric acid content of the samples. These variations arise from the method of sample preparation used to minimize changes occurring in metabolite concentration after venepuncture.
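
The intra-assay coefficient of variation quoted above is simply the replicate standard deviation expressed relative to the replicate mean; a minimal sketch (the replicate values are hypothetical):

```python
import numpy as np

def intra_assay_cv(replicates):
    """Intra-assay coefficient of variation (%) from replicate measurements."""
    r = np.asarray(replicates, dtype=float)
    return float(r.std(ddof=1) / r.mean() * 100)

# Example: duplicate lactate readings (illustrative values, mmol/l)
cv = intra_assay_cv([0.98, 1.02])
```

The higher CVs reported for pyruvate and acetoacetate follow directly from this definition: at very low concentrations the mean in the denominator shrinks while the absolute measurement noise does not.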

  6. A novel electrostatic ion-energy spectrometer by the use of a proposed "self-collection" method for secondary-electron emission from a metal collector

    Science.gov (United States)

    Hirata, M.; Nagashima, S.; Cho, T.; Kohagura, J.; Yoshida, M.; Ito, H.; Numakura, T.; Minami, R.; Kondoh, T.; Nakashima, Y.; Yatsu, K.; Miyoshi, S.

    2003-03-01

    For the purpose of end-loss-ion energy analyses in open-field plasmas, a newly developed electrostatic ion-energy spectrometer is proposed on the basis of a "self-collection" principle for secondary-electron emission from a metal collector. The ion-energy spectrometer is designed with multiple grids for analyzing incident ion energies, and a set of metal plates placed parallel to the lines of the ambient magnetic force in an open-ended device. One of the most important characteristics of this spectrometer is the proposed "self-collection" mechanism, in which E×B drifts return secondary electrons emitted from the grounded metal-plate collector, requiring no additional magnetic systems beyond the ambient open-ended fields B. Proof-of-principle and characterization experiments are carried out using a test-ion-beam line together with a Helmholtz coil system that forms open magnetic fields similar to those in the GAMMA 10 end region. Applications of the developed ion-energy spectrometer to end-loss-ion diagnostics in the GAMMA 10 plasma experiments are demonstrated under simultaneous incidence of energetic electrons produced by electron-cyclotron heating for end-loss-plugging potential formation, since such electrons disturb the ion signals from conventional end-loss-ion detectors.
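
The "self-collection" mechanism relies on the standard E×B guiding-centre drift, v = (E × B)/|B|²; a quick numerical check of the drift direction (field values are illustrative):

```python
import numpy as np

def exb_drift(E, B):
    """E-cross-B drift velocity v = (E x B) / |B|^2 (SI units)."""
    E = np.asarray(E, dtype=float)
    B = np.asarray(B, dtype=float)
    return np.cross(E, B) / np.dot(B, B)

# Electric field along x, magnetic field along z:
# the drift is along -y, perpendicular to both fields
v = exb_drift([1.0, 0.0, 0.0], [0.0, 0.0, 1.0])
```

Because the drift is perpendicular to both fields and independent of charge sign, emitted secondary electrons can be steered back toward the collector plates using only the ambient open-ended field, which is the point of the "self-collection" design.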

  7. DETECTION OF LOCAL SITE CONDITIONS INFLUENCING EARTHQUAKE SHOCK AND SECONDARY EFFECTS IN THE VALPARAISO AREA IN CENTRAL-CHILE USING REMOTE SENSING AND GIS METHODS

    Directory of Open Access Journals (Sweden)

    Barbara Theilen-Willige

    2011-01-01

    Full Text Available The potential contribution of remote sensing and GIS techniques to earthquake hazard analysis was investigated in Valparaiso, Chile, in order to improve the systematic, standardized inventory of those areas that are more susceptible to earthquake ground motions or to earthquake-related secondary effects such as landslides, liquefaction, soil amplification, compaction or even tsunami waves. Geophysical, topographical and geological data and satellite images were collected, processed, and integrated into a spatial database using Geoinformation Systems (GIS) and image processing techniques. The GIS-integrated evaluation of satellite imagery, digital topographic data and various open-source geodata can contribute to the acquisition of the specific tectonic and geomorphologic/topographic settings influencing local site conditions in Valparaiso, Chile. Using weighted overlay techniques in GIS, susceptibility maps were produced indicating areas where causal factors influencing near-surface earthquake shock occur in aggregate. Causal factors (such as unconsolidated sedimentary layers within a basin's topography, higher groundwater tables, etc.) that accumulate and interfere with each other raise the susceptibility to soil amplification and to earthquake-related secondary effects. This approach was also used to create a tsunami flooding susceptibility map. LANDSAT Thermal Band 6 imagery was analysed to obtain information on surface water currents in this area.
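
At its core, the GIS weighted overlay technique reduces to a normalised weighted sum of reclassified factor rasters; a minimal sketch with invented factor scores and weights (real workflows would use GIS software and calibrated weights):

```python
import numpy as np

def weighted_overlay(layers, weights):
    """Susceptibility score as a weighted sum of reclassified raster layers.

    layers:  (n_layers, rows, cols) array of factor scores
    weights: relative importance of each factor (normalised internally)
    """
    layers = np.asarray(layers, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return np.tensordot(w, layers, axes=1)

# Two 2x2 factor rasters (e.g. a sediment score and a groundwater score),
# combined with equal weight — purely illustrative values
sediment = [[2.0, 4.0], [6.0, 8.0]]
groundwater = [[0.0, 0.0], [0.0, 0.0]]
susceptibility = weighted_overlay([sediment, groundwater], [1.0, 1.0])
```

Cells where several high-scoring factors coincide end up with the highest combined score, which is exactly the "factors occurring in aggregate" effect the susceptibility maps are meant to capture.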

  8. Radiochemical methods and spectroscopical analyses for investigating the catalytic effects of 2-methyltetrahydro-anthraquinone and phenanthraquinone in wood pulp production using the soda additive method

    International Nuclear Information System (INIS)

    Besser, R. v.

    1982-01-01

    The studies were intended to show whether 2-methyltetrahydroanthraquinone or phenanthraquinone, two additives obtainable at low cost, have a suitable catalytic effect on delignification in the soda additive pulping method. For this purpose, soda cookings were made in a 7 l rotary autoclave. The results showed that 2-MeTHAQ is by far the better catalytic agent. Further experiments were made to investigate the mode of action of the redox additives, intended to reveal further characteristics that can be correlated with the knowledge obtained from the preceding soda cookings. The analysis shows that there is a connection between the analytical lignin characteristics and the effectiveness of quinoid additives. (orig./PW) [de

  9. Multiple methods of surgical treatment combined with primary IOL implantation on traumatic lens subluxation/dislocation in patients with secondary glaucoma.

    Science.gov (United States)

    Wang, Rui; Bi, Chun-Chao; Lei, Chun-Ling; Sun, Wen-Tao; Wang, Shan-Shan; Dong, Xiao-Juan

    2014-01-01

    To describe clinical findings and complications from cases of traumatic lens subluxation/dislocation in patients with secondary glaucoma, and to discuss the multiple surgical treatment methods combined with primary intraocular lens (IOL) implantation. Non-comparative retrospective observational case series. 30 cases (30 eyes) of lens subluxation/dislocation in patients with secondary glaucoma that received surgical treatment by the author in the Ophthalmology Department of Xi'an No.4 Hospital from 2007 to 2011 were investigated. According to the different situations of lens subluxation/dislocation, various surgical procedures were performed, such as crystalline lens phacoemulsification, crystalline lens phacoemulsification combined with anterior vitrectomy, intracapsular cataract extraction combined with anterior vitrectomy, lensectomy combined with anterior vitrectomy through a peripheral transparent cornea incision, pars plana lensectomy combined with pars plana vitrectomy, and intravitreal-cavity crystalline lens phacofragmentation combined with pars plana vitrectomy. Whether to perform trabeculectomy depended on the situation of the secondary glaucoma. The posterior chamber intraocular lenses (PC-IOLs) were implanted in the capsular bag or transsclerally sutured in the sulcus, depending on whether the capsule was present. Outcome measures were visual acuity, intraocular pressure, the condition of the intraocular lens, and complications after the operations. The follow-up time was 11-36mo (21.4±7.13). Postoperative visual acuity of all eyes improved; 28 cases maintained IOP below 21 mm Hg; 2 cases had slight IOL subluxation; 4 cases had a slightly tilted lens optical area; 1 case had postoperative choroidal detachment; 4 cases had postoperative corneal edema for more than 1wk but eventually recovered transparency; 2 cases had mild postoperative vitreous hemorrhage, absorbed 4wk later. There was no postoperative retinal detachment, IOL dislocation, or endophthalmitis. To take early treatment of traumatic lens

  10. Assessing the accuracy and reliability of ultrasonographic three-dimensional parathyroid volume measurement in a patient with secondary hyperparathyroidism: a comparison with the two-dimensional conventional method

    Energy Technology Data Exchange (ETDEWEB)

    You, Sung Hye; Son, Gyu Ri; Lee, Nam Joon [Dept. of Radiology, Korea University Anam Hospital, Seoul (Korea, Republic of); Suh, Sangil; Ryoo, In Seon; Seol, Hae Young [Dept. of Radiology, Korea University Guro Hospital, Seoul (Korea, Republic of); Lee, Young Hen; Seo, Hyung Suk [Dept. of Radiology, Korea University Ansan Hospital, Ansan (Korea, Republic of)

    2017-01-15

    The purpose of this study was to investigate the accuracy and reliability of the semi-automated ultrasonographic volume measurement tool, virtual organ computer-aided analysis (VOCAL), for measuring the volume of parathyroid glands. Volume measurements for 40 parathyroid glands were performed in patients with secondary hyperparathyroidism caused by chronic renal failure. The volume of the parathyroid glands was measured twice by experienced radiologists by two-dimensional (2D) and three-dimensional (3D) methods using conventional sonograms and the VOCAL with 30° angle increments before parathyroidectomy. The specimen volume was also measured postoperatively. Intraclass correlation coefficients (ICCs) and the absolute percentage error were used for estimating the reproducibility and accuracy of the two different methods. The ICC value between two measurements of the 2D method and the 3D method was 0.956 and 0.999, respectively. The mean absolute percentage error of the 2D method and the 3D VOCAL technique was 29.56% and 5.78%, respectively. For accuracy and reliability, the plots of the 3D method showed a more compact distribution than those of the 2D method on the Bland-Altman graph. The rotational VOCAL method for measuring the parathyroid gland is more accurate and reliable than the conventional 2D measurement. This VOCAL method could be used as a more reliable follow-up imaging modality in a patient with hyperparathyroidism.

  11. Assessing the accuracy and reliability of ultrasonographic three-dimensional parathyroid volume measurement in a patient with secondary hyperparathyroidism: a comparison with the two-dimensional conventional method

    International Nuclear Information System (INIS)

    You, Sung Hye; Son, Gyu Ri; Lee, Nam Joon; Suh, Sangil; Ryoo, In Seon; Seol, Hae Young; Lee, Young Hen; Seo, Hyung Suk

    2017-01-01

    The purpose of this study was to investigate the accuracy and reliability of the semi-automated ultrasonographic volume measurement tool, virtual organ computer-aided analysis (VOCAL), for measuring the volume of parathyroid glands. Volume measurements for 40 parathyroid glands were performed in patients with secondary hyperparathyroidism caused by chronic renal failure. The volume of the parathyroid glands was measured twice by experienced radiologists by two-dimensional (2D) and three-dimensional (3D) methods using conventional sonograms and the VOCAL with 30° angle increments before parathyroidectomy. The specimen volume was also measured postoperatively. Intraclass correlation coefficients (ICCs) and the absolute percentage error were used for estimating the reproducibility and accuracy of the two different methods. The ICC value between two measurements of the 2D method and the 3D method was 0.956 and 0.999, respectively. The mean absolute percentage error of the 2D method and the 3D VOCAL technique was 29.56% and 5.78%, respectively. For accuracy and reliability, the plots of the 3D method showed a more compact distribution than those of the 2D method on the Bland-Altman graph. The rotational VOCAL method for measuring the parathyroid gland is more accurate and reliable than the conventional 2D measurement. This VOCAL method could be used as a more reliable follow-up imaging modality in a patient with hyperparathyroidism

  12. EPR Technology as Sensitive Method for Oxidative Stress Detection in Primary and Secondary Keratinocytes Induced by Two Selected Nanoparticles.

    Science.gov (United States)

    Lohan, S B; Ahlberg, S; Mensch, A; Höppe, D; Giulbudagian, M; Calderón, M; Grether-Beck, S; Krutmann, J; Lademann, J; Meinke, M C

    2017-12-01

    Exogenous factors can cause an imbalance in the redox state of biological systems, promoting the development of oxidative stress, in particular through reactive oxygen species (ROS). To monitor the intensity of ROS production induced in secondary keratinocytes (HaCaT) by diesel exhaust particles and thermoresponsive nanogels (tNG), electron paramagnetic resonance (EPR) spectroscopy was applied after 1 and 24 h of incubation, respectively. Cytotoxicity was analyzed by a cell viability assay (XTT). For tNG, an increase in cell viability and ROS production of 10% was visible after 24 h, whereas 1 h showed no effect. A ten times lower concentration of diesel exhaust particles exhibited no significant toxic effects on HaCaT cells for either incubation time, so normal adult human keratinocytes (NHK) were additionally analyzed by XTT and EPR spectroscopy. Here, after 24 h a slight increase of 18% in metabolic activity was observed. However, this effect could not be explained by ROS formation. A slight increase in ROS production was only visible after 1 h of incubation for HaCaT (9%) and NHK (14%).

  13. Simple method for preparation of secondary amides of phosphorylacetic acids and their use for actinide extraction and sorption from nitric acid solutions

    International Nuclear Information System (INIS)

    Artyushin, O.I.; Sharova, E.V.; Odinets, I.L.; Lenevich, S.V.; Mastruykova, T.A.; Morgalyuk, V.P.; Tananaev, I.G.; Pribylova, G.V.; Myasoedova, G.V.; Myasoedov, B.F.

    2004-01-01

    An effective method for the synthesis of secondary alkylamides of phosphorylacetic acids (APA), based on amidation of ethyl esters of phosphorylacetic acids with primary aliphatic amines, was developed. Extraction of americium(III) complexes with APA solutions in dichloroethane and uranium(VI) sorption by sorbents with non-covalently fixed APA from nitric acid solutions were studied. In the course of americium(III) extraction there is no correlation between the Am(III) distribution factor and the APA structure, whereas during uranium(VI) sorption a dependence of the U(VI) extraction degree on the complexing agent structure is observed [ru

  14. Propensity-score matching in economic analyses: comparison with regression models, instrumental variables, residual inclusion, differences-in-differences, and decomposition methods.

    Science.gov (United States)

    Crown, William H

    2014-02-01

    This paper examines the use of propensity score matching in economic analyses of observational data. Several excellent papers have previously reviewed practical aspects of propensity score estimation and other aspects of the propensity score literature. The purpose of this paper is to compare the conceptual foundation of propensity score models with alternative estimators of treatment effects. References are provided to empirical comparisons among methods that have appeared in the literature. These comparisons are available for a subset of the methods considered in this paper. However, in some cases, no pairwise comparisons of particular methods are yet available, and there are no examples of comparisons across all of the methods surveyed here. Irrespective of the availability of empirical comparisons, the goal of this paper is to provide some intuition about the relative merits of alternative estimators in health economic evaluations where nonlinearity, sample size, availability of pre/post data, heterogeneity, and missing variables can have important implications for choice of methodology. Also considered is the potential combination of propensity score matching with alternative methods such as differences-in-differences and decomposition methods that have not yet appeared in the empirical literature.
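
The core mechanics the paper compares — estimating propensity scores and matching treated to control units on them — can be sketched from scratch. The logistic fit, matching rule, and simulated data below are a minimal illustration, not any of the specific estimators surveyed:

```python
import numpy as np

def propensity_scores(X, t, iters=500, lr=0.5):
    """Logistic regression P(treatment | X) fitted by plain gradient ascent."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (t - p) / len(t)
    return 1.0 / (1.0 + np.exp(-Xb @ w))

def att_nearest_neighbour(y, t, ps):
    """Average treatment effect on the treated via 1:1 matching on the score."""
    treated = np.where(t == 1)[0]
    control = np.where(t == 0)[0]
    diffs = [y[i] - y[control[np.argmin(np.abs(ps[control] - ps[i]))]]
             for i in treated]
    return float(np.mean(diffs))

# Simulated data: treatment probability and outcome both depend on x,
# so a naive treated-vs-control mean difference would be confounded
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 500)
t = (rng.uniform(size=500) < 1.0 / (1.0 + np.exp(-x))).astype(int)
y = 2.0 * t + x                     # true treatment effect = 2
ps = propensity_scores(x[:, None], t)
att = att_nearest_neighbour(y, t, ps)
```

Matching on the score balances the confounder between the matched groups, so the estimate recovers roughly the true effect of 2 despite treated units having systematically higher x.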

  15. Application of the Random Forest method to analyse epidemiological and phenotypic characteristics of Salmonella 4,[5],12:i:- and Salmonella Typhimurium strains

    DEFF Research Database (Denmark)

    Barco, L.; Mancin, M.; Ruffa, M.

    2012-01-01

    in Italy, particularly as far as veterinary isolates are concerned. For this reason, a data set of 877 strains isolated in the north-east of Italy from foodstuffs, animals and environment was analysed during 2005-2010. The Random Forests (RF) method was used to identify the most important epidemiological...... and phenotypic variables to show the difference between the two serovars. Both descriptive analysis and RF revealed that S. 4,[5],12:i:- is less heterogeneous than S. Typhimurium. RF highlighted that phage type was the most important variable to differentiate the two serovars. The most common phage types...
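
The idea of ranking variables by how much a fitted classifier depends on them can be illustrated with a permutation-importance computation on toy data. Everything below is hypothetical: a 1-nearest-neighbour classifier stands in for the Random Forest, and the simulated features stand in for the epidemiological and phenotypic variables:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: feature 0 separates two "serovars" cleanly, feature 1 is pure noise
n = 200
labels = rng.integers(0, 2, n)
X = np.column_stack([labels * 5.0 + rng.normal(0.0, 0.3, n),
                     rng.uniform(0.0, 1.0, n)])
X_train, y_train = X[:100], labels[:100]
X_test, y_test = X[100:], labels[100:]

def predict_1nn(X_tr, y_tr, X_te):
    """1-nearest-neighbour classifier standing in for the fitted forest."""
    d = np.abs(X_te[:, None, :] - X_tr[None, :, :]).sum(axis=2)
    return y_tr[d.argmin(axis=1)]

def permutation_importance(X_te, y_te):
    """Accuracy drop when each feature is shuffled in the test set."""
    base = (predict_1nn(X_train, y_train, X_te) == y_te).mean()
    drops = []
    for f in range(X_te.shape[1]):
        X_perm = X_te.copy()
        X_perm[:, f] = rng.permutation(X_perm[:, f])
        drops.append(base - (predict_1nn(X_train, y_train, X_perm) == y_te).mean())
    return drops

importance = permutation_importance(X_test, y_test)
```

Shuffling the informative feature destroys accuracy while shuffling the noise feature leaves it unchanged — the same logic by which the study singles out phage type as the most discriminating variable.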

  16. Phylogenetic analyses of Vitis (Vitaceae) based on complete chloroplast genome sequences: effects of taxon sampling and phylogenetic methods on resolving relationships among rosids.

    Science.gov (United States)

    Jansen, Robert K; Kaittanis, Charalambos; Saski, Christopher; Lee, Seung-Bum; Tomkins, Jeffrey; Alverson, Andrew J; Daniell, Henry

    2006-04-09

    The Vitaceae (grape) is an economically important family of angiosperms whose phylogenetic placement is currently unresolved. Recent phylogenetic analyses based on one to several genes have suggested several alternative placements of this family, including sister to Caryophyllales, asterids, Saxifragales, Dilleniaceae or to rest of rosids, though support for these different results has been weak. There has been a recent interest in using complete chloroplast genome sequences for resolving phylogenetic relationships among angiosperms. These studies have clarified relationships among several major lineages but they have also emphasized the importance of taxon sampling and the effects of different phylogenetic methods for obtaining accurate phylogenies. We sequenced the complete chloroplast genome of Vitis vinifera and used these data to assess relationships among 27 angiosperms, including nine taxa of rosids. The Vitis vinifera chloroplast genome is 160,928 bp in length, including a pair of inverted repeats of 26,358 bp that are separated by small and large single copy regions of 19,065 bp and 89,147 bp, respectively. The gene content and order of Vitis is identical to many other unrearranged angiosperm chloroplast genomes, including tobacco. Phylogenetic analyses using maximum parsimony and maximum likelihood were performed on DNA sequences of 61 protein-coding genes for two datasets with 28 or 29 taxa, including eight or nine taxa from four of the seven currently recognized major clades of rosids. Parsimony and likelihood phylogenies of both data sets provide strong support for the placement of Vitaceae as sister to the remaining rosids. However, the position of the Myrtales and support for the monophyly of the eurosid I clade differs between the two data sets and the two methods of analysis. In parsimony analyses, the inclusion of Gossypium is necessary to obtain trees that support the monophyly of the eurosid I clade. However, maximum likelihood analyses place

  17. Phylogenetic analyses of Vitis (Vitaceae) based on complete chloroplast genome sequences: effects of taxon sampling and phylogenetic methods on resolving relationships among rosids

    Directory of Open Access Journals (Sweden)

    Alverson Andrew J

    2006-04-01

    Full Text Available Abstract Background The Vitaceae (grape) is an economically important family of angiosperms whose phylogenetic placement is currently unresolved. Recent phylogenetic analyses based on one to several genes have suggested several alternative placements of this family, including sister to Caryophyllales, asterids, Saxifragales, Dilleniaceae or to the rest of the rosids, though support for these different results has been weak. There has been a recent interest in using complete chloroplast genome sequences for resolving phylogenetic relationships among angiosperms. These studies have clarified relationships among several major lineages but they have also emphasized the importance of taxon sampling and the effects of different phylogenetic methods for obtaining accurate phylogenies. We sequenced the complete chloroplast genome of Vitis vinifera and used these data to assess relationships among 27 angiosperms, including nine taxa of rosids. Results The Vitis vinifera chloroplast genome is 160,928 bp in length, including a pair of inverted repeats of 26,358 bp that are separated by small and large single copy regions of 19,065 bp and 89,147 bp, respectively. The gene content and order of Vitis is identical to many other unrearranged angiosperm chloroplast genomes, including tobacco. Phylogenetic analyses using maximum parsimony and maximum likelihood were performed on DNA sequences of 61 protein-coding genes for two datasets with 28 or 29 taxa, including eight or nine taxa from four of the seven currently recognized major clades of rosids. Parsimony and likelihood phylogenies of both data sets provide strong support for the placement of Vitaceae as sister to the remaining rosids. However, the position of the Myrtales and support for the monophyly of the eurosid I clade differs between the two data sets and the two methods of analysis. In parsimony analyses, the inclusion of Gossypium is necessary to obtain trees that support the monophyly of the eurosid I clade

  18. Multivariate Analyses and Evaluation of Heavy Metals by Chemometric BCR Sequential Extraction Method in Surface Sediments from Lingdingyang Bay, South China

    Directory of Open Access Journals (Sweden)

    Linglong Cao

    2015-04-01

    Full Text Available Sediments in estuary areas are recognized as the ultimate reservoirs for numerous contaminants, e.g., toxic metals. Multivariate analyses by chemometric evaluation were performed to classify metal ions (Cu, Zn, As, Cr, Pb, Ni and Cd) in superficial sediments from Lingdingyang Bay and to determine whether or not there were potential contamination risks based on the BCR sequential extraction scheme. The results revealed that Cd was mainly in acid-soluble form, with an average of 75.99% of its total content, and thus of high potential availability, indicating significant anthropogenic sources, while Cr, As and Ni were enriched in the residual fraction, which can be considered the safest ingredient for the environment. According to the proportion of secondary to primary phases (KRSP), Cd had the highest bioavailable fraction and represented high or very high risk, followed by Pb and Cu with medium risks in most of the samples. The combined evaluation of the Pollution Load Index (PLI) and the mean Effect Range Median Quotient (mERM-Q) highlighted that the greatest potential environmental risk area was in the northwest of Lingdingyang Bay. Almost all of the sediments had a 21% probability of toxicity. Additionally, Principal Component Analysis (PCA) revealed that the survey region was significantly affected by two main sources of anthropogenic contributions: PC1 showed increased loadings of variables in the acid-soluble and reducible fractions, consistent with input from industrial wastes (such as manufacturing, metallurgy and the chemical industry) and domestic sewage; PC2 was characterized by increased loadings of variables in the residual fraction, which could be attributed to leaching and weathering of parent rocks. The results obtained demonstrate the need for appropriate remediation measures to alleviate the soil pollution problem due to the accumulation of potentially risky metals. Therefore, it is of crucial significance to implement the targeted
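
The Pollution Load Index used above is the geometric mean of contamination factors (measured concentration over background); a minimal sketch with invented, illustrative numbers rather than the study's data:

```python
import numpy as np

def pollution_load_index(conc, background):
    """Tomlinson PLI: geometric mean of contamination factors C_i / B_i."""
    cf = np.asarray(conc, dtype=float) / np.asarray(background, dtype=float)
    return float(np.exp(np.log(cf).mean()))

# Illustrative (not measured) concentrations and background values, mg/kg
pli = pollution_load_index([60.0, 200.0], [30.0, 50.0])  # CFs: 2 and 4
```

A PLI above 1 indicates overall enrichment relative to background; the geometric mean keeps one extreme metal from dominating the index the way an arithmetic mean would.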

  19. Secondary fuel delivery system

    Science.gov (United States)

    Parker, David M.; Cai, Weidong; Garan, Daniel W.; Harris, Arthur J.

    2010-02-23

    A secondary fuel delivery system for delivering a secondary stream of fuel and/or diluent to a secondary combustion zone located in the transition piece of a combustion engine, downstream of the engine primary combustion region is disclosed. The system includes a manifold formed integral to, and surrounding a portion of, the transition piece, a manifold inlet port, and a collection of injection nozzles. A flowsleeve augments fuel/diluent flow velocity and improves the system cooling effectiveness. Passive cooling elements, including effusion cooling holes located within the transition boundary and thermal-stress-dissipating gaps that resist thermal stress accumulation, provide supplemental heat dissipation in key areas. The system delivers a secondary fuel/diluent mixture to a secondary combustion zone located along the length of the transition piece, while reducing the impact of elevated vibration levels found within the transition piece and avoiding the heat dissipation difficulties often associated with traditional vibration reduction methods.

  20. Secondary Headaches

    Science.gov (United States)

    ... in the medical history or examination to suggest secondary headache. Headache can be caused by general medical conditions such as severe hypertension, or by conditions that affect the brain and ...

  1. An investigation into the disciplinary methods used by teachers in a secondary township school in South Africa

    Directory of Open Access Journals (Sweden)

    Numvula J. Serame

    2013-11-01

    Full Text Available The research that this article reports on investigated the incidence of learner discipline problems, their effect on teachers, the teachers' methods of maintaining discipline, and the effectiveness of those methods in one township school, in Jouberton in Klerksdorp, South Africa. The experiences of both teachers and learners were surveyed. It was found that discipline at the school is far from satisfactory and that problems with discipline are more serious than the international norm indicates. These problems seriously affect a large part of the teachers' family life, personal health, job satisfaction and morale. Whilst both teachers and learners commendably prefer the educationally sound preventive and positive methods of maintaining discipline, the application of these methods appears not to be effective: it seems as if teacher education falls short in the area of maintaining discipline, particularly regarding the successful application of proactive methods. Finally, the learners' views on the maintenance of discipline are an alarming indictment of the principles of democracy, human rights and human dignity, and of rationality as a tool for conflict resolution. Recommendations are made for follow-up research with the objective of amelioration.

  2. A Secondary Voltage Control Method for an AC/DC Coupled Transmission System Based on Model Predictive Control

    DEFF Research Database (Denmark)

    Xu, Fengda; Guo, Qinglai; Sun, Hongbin

    2015-01-01

    For an AC/DC coupled transmission system, a change of transmission power on the DC lines will significantly influence the AC system's voltage. This paper describes a method for coordinated control of the reactive power of power plants and shunt capacitors at nearby DC converter stations, in order t

  3. Multiple methods of surgical treatment combined with primary IOL implantation on traumatic lens subluxation/dislocation in patients with secondary glaucoma

    Directory of Open Access Journals (Sweden)

    Rui Wang

    2014-04-01

Full Text Available AIM: To describe clinical findings and complications from cases of traumatic lens subluxation/dislocation in patients with secondary glaucoma, and to discuss the multiple surgical treatment methods combined with primary intraocular lens (IOL) implantation. METHODS: Non-comparative retrospective observational case series. Participants: 30 cases (30 eyes) of lens subluxation/dislocation in patients with secondary glaucoma that received surgical treatment by the author in the Ophthalmology Department of Xi'an No.4 Hospital from 2007 to 2011. According to the different situations of lens subluxation/dislocation, various surgical procedures were performed, such as crystalline lens phacoemulsification; crystalline lens phacoemulsification combined with anterior vitrectomy; intracapsular cataract extraction combined with anterior vitrectomy; lensectomy combined with anterior vitrectomy through a peripheral transparent cornea incision; pars plana lensectomy combined with pars plana vitrectomy; and intravitreal crystalline lens phacofragmentation combined with pars plana vitrectomy. Whether to perform trabeculectomy depended on the situation of the secondary glaucoma. The posterior chamber intraocular lenses (PC-IOLs) were implanted in the capsular bag or transsclerally sutured in the sulcus, depending on whether the capsule was present. Main outcome measures: visual acuity, intraocular pressure, the condition of the intraocular lens and complications after the operations. RESULTS: The follow-up time was 11-36mo (21.4±7.13). Postoperative visual acuity improved in all eyes; 28 cases maintained IOP below 21 mm Hg; 2 cases had slight IOL subluxation; 4 cases had a slightly tilted lens optical area; 1 case had postoperative choroidal detachment; 4 cases had postoperative corneal edema for more than 1wk but eventually recovered transparency; 2 cases had mild postoperative vitreous hemorrhage, absorbed 4wk later. There was no postoperative retinal detachment, IOL

  4. Development of methods to predict both the dynamic and the pseudo-static response of secondary structures subjected to seismic excitations

    International Nuclear Information System (INIS)

    Subudhi, M.; Bezler, P.

    1984-01-01

Multiple independent support excitation time history formulations have been used to investigate simplified methods to predict the inertial (or dynamic) component of response, as well as the pseudo-static (or static) component of response, of secondary structures subjected to seismic excitations. For the dynamic component, the independent response spectrum method is used, with current industry practice adopted for the modal and direction-of-excitation combinations, and various procedures investigated for the group combination and sequence. SRSS combination between support groups is found to yield satisfactory results. For the static component, support grouping by elevation for preliminary design, followed by support grouping by attachment point for final design, assures overall safety in the design.
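
    The SRSS (square-root-of-the-sum-of-squares) combination between support groups described above reduces to a one-line computation. A minimal sketch, with hypothetical per-group peak responses:

    ```python
    import math

    def srss_combine(group_responses):
        """Combine per-support-group peak responses by SRSS:
        the square root of the sum of squared group responses."""
        return math.sqrt(sum(r ** 2 for r in group_responses))

    # Hypothetical peak responses from three independent support groups
    print(srss_combine([3.0, 4.0, 12.0]))  # 13.0
    ```

    SRSS is appropriate when the group responses are essentially uncorrelated; an absolute-sum combination would give 19.0 for the same inputs and is more conservative.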

  5. Precursor systems analyses of automated highway systems. Knowledge based systems and learning methods for AHS. Volume 10. Final report, September 1993-February 1995

    Energy Technology Data Exchange (ETDEWEB)

    Schmoltz, J.; Blumer, A.; Noonan, J.; Shedd, D.; Twarog, J.

    1995-06-01

Managing each AHS vehicle and the AHS system as a whole is an extremely complex undertaking. The authors have investigated and now report on Artificial Intelligence (AI) approaches that can help. In particular, we focus on AI technologies known as Knowledge Based Systems (KBSs) and Learning Methods (LMs). Our primary purpose is to identify opportunities: we identify several problems in AHS and AI technologies that can solve them. Our secondary purpose is to examine in some detail a subset of these opportunities: we examine how KBSs and LMs can help in controlling the high-level movements--e.g., keep in lane, change lanes, speed up, slow down--of an automated vehicle. This detailed examination includes the implementation of a prototype system having three primary components. The Tufts Automated Highway System Kit (TAHSK) discrete-time micro-level traffic simulator is a generic AHS simulator. TAHSK interfaces with the Knowledge Based Controller (KBCon), a knowledge-based high-level controller that controls the high-level actions of individual AHS vehicles. Finally, TAHSK also interfaces with a reinforcement learning (RL) module that was used to explore the possibilities of RL techniques in an AHS environment.

  6. Teaching secondary mathematics

    CERN Document Server

    Rock, David

    2013-01-01

Solidly grounded in up-to-date research, theory and technology, Teaching Secondary Mathematics is a practical, student-friendly, and popular text for secondary mathematics methods courses. It provides clear and useful approaches for mathematics teachers, and shows how concepts typically found in a secondary mathematics curriculum can be taught in a positive and encouraging way. The thoroughly revised fourth edition combines this pragmatic approach with truly innovative and integrated technology content throughout. Synthesized content between the book and comprehensive companion websi

  7. A new method to discriminate secondary organic aerosols from different sources using high-resolution aerosol mass spectra

    Directory of Open Access Journals (Sweden)

    M. F. Heringa

    2012-02-01

Full Text Available Organic aerosol (OA) represents a significant and often major fraction of the non-refractory PM1 (particulate matter with an aerodynamic diameter da < 1 μm) mass. Secondary organic aerosol (SOA) is an important contributor to the OA and can be formed from biogenic and anthropogenic precursors. Here we present results from the characterization of SOA produced from the emissions of three different anthropogenic sources. SOA from a log wood burner, a Euro 2 diesel car and a two-stroke Euro 2 scooter was characterized with an Aerodyne high-resolution time-of-flight aerosol mass spectrometer (HR-TOF-AMS) and compared to SOA from α-pinene.

    The emissions were sampled from the chimney/tailpipe by a heated inlet system and filtered before injection into a smog chamber. The gas phase emissions were irradiated by xenon arc lamps to initiate photo-chemistry which led to nucleation and subsequent particle growth by SOA production.

Duplicate experiments were performed for each SOA type, with the averaged organic mass spectra showing Pearson's r values >0.94 for the correlations between the four different SOA types after five hours of aging. High-resolution mass spectra (HR-MS) showed that the dominant peaks in the MS, m/z 43 and 44, are dominated by the oxygenated ions C2H3O+ and CO2+, respectively, similarly to the relatively fresh semi-volatile oxygenated OA (SV-OOA) observed in the ambient aerosol. The atomic O:C ratios were found to be in the range of 0.25–0.55, with no major increase during the first five hours of aging. On average, the diesel SOA showed the lowest O:C ratio, followed by SOA from wood burning, α-pinene and the scooter emissions. Grouping the fragment ions revealed that the SOA source with the highest O:C ratio had the largest fraction of small ions.

    The HR data of the four sources could be clustered and separated using
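
    The Pearson's r values quoted above are ordinary correlations between fragment-ion intensity vectors from the averaged mass spectra. A minimal sketch, using hypothetical four-channel intensity profiles rather than data from the study:

    ```python
    import math

    def pearson_r(x, y):
        """Pearson correlation coefficient between two equal-length
        intensity vectors (e.g., binned mass spectra)."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in x))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)

    # Hypothetical relative intensities at four m/z channels
    spec_a = [0.10, 0.35, 0.40, 0.15]
    spec_b = [0.12, 0.33, 0.38, 0.17]
    print(round(pearson_r(spec_a, spec_b), 3))  # 0.999
    ```

    An r above 0.94, as reported for the four SOA types, indicates near-identical spectral shapes, which is why the study needed additional discriminating features beyond whole-spectrum correlation.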

  8. Medicine authentication technology as a counterfeit medicine-detection tool: a Delphi method study to establish expert opinion on manual medicine authentication technology in secondary care.

    Science.gov (United States)

    Naughton, Bernard; Roberts, Lindsey; Dopson, Sue; Brindley, David; Chapman, Stephen

    2017-05-06

    This study aims to establish expert opinion and potential improvements for the Falsified Medicines Directive mandated medicines authentication technology. A two-round Delphi method study using an online questionnaire. Large National Health Service (NHS) foundation trust teaching hospital. Secondary care pharmacists and accredited checking technicians. Seven-point rating scale answers which reached a consensus of 70-80% with a standard deviation (SD) of <1.0. Likert scale questions which reached a consensus of 70-80%, a SD of <1.0 and classified as important according to study criteria. Consensus expert opinion has described database cross-checking technology as quick and user friendly and suggested the inclusion of an audio signal to further support the detection of counterfeit medicines in secondary care (70% consensus, 0.9 SD); other important consensus with a SD of <1.0 included reviewing the colour and information in warning pop up screens to ensure they were not mistaken for the 'already dispensed here' pop up, encouraging the dispenser/checker to act on the warnings and making it mandatory to complete an 'action taken' documentation process to improve the quarantine of potentially counterfeit, expired or recalled medicines. This paper informs key opinion leaders and decision makers as to the positives and negatives of medicines authentication technology from an operator's perspective and suggests the adjustments which may be required to improve operator compliance and the detection of counterfeit medicines in the secondary care sector. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
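
    The consensus rule described above (70-80% agreement with a standard deviation below 1.0) can be expressed as a simple filter over panel ratings. A minimal sketch; treating ratings of 6-7 on the seven-point scale as the "agree" band is an assumption for illustration:

    ```python
    import statistics

    def reaches_consensus(ratings, agree_threshold=0.70, max_sd=1.0):
        """Delphi consensus check: the fraction of panellists rating the
        item 6 or 7 (assumed 'agree' band) must reach the threshold,
        and the standard deviation of ratings must stay below max_sd."""
        agree_frac = sum(1 for r in ratings if r >= 6) / len(ratings)
        return agree_frac >= agree_threshold and statistics.stdev(ratings) < max_sd

    # Hypothetical seven-point ratings from a panel of ten
    print(reaches_consensus([7, 6, 7, 6, 7, 6, 7, 6, 5, 6]))  # True
    ```

    The SD criterion filters out items where the mean looks favourable but the panel is polarized, which a bare percentage threshold would miss.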

  9. How Genes Modulate Patterns of Aging-Related Changes on the Way to 100: Biodemographic Models and Methods in Genetic Analyses of Longitudinal Data

    Science.gov (United States)

    Yashin, Anatoliy I.; Arbeev, Konstantin G.; Wu, Deqing; Arbeeva, Liubov; Kulminski, Alexander; Kulminskaya, Irina; Akushevich, Igor; Ukraintseva, Svetlana V.

    2016-01-01

Background and Objective: To clarify mechanisms of genetic regulation of human aging and longevity traits, a number of genome-wide association studies (GWAS) of these traits have been performed. However, the results of these analyses did not meet the expectations of the researchers. Most detected genetic associations have not reached a genome-wide level of statistical significance and suffered from a lack of replication in studies of independent populations. The reasons for slow progress in this research area include the low efficiency of statistical methods used in data analyses, genetic heterogeneity of aging- and longevity-related traits, the possibility of pleiotropic (e.g., age-dependent) effects of genetic variants on such traits, underestimation of the effects of (i) mortality selection in genetically heterogeneous cohorts and (ii) external factors and differences in the genetic backgrounds of individuals in the populations under study, and the weakness of a conceptual biological framework that does not fully account for the above-mentioned factors. One more limitation of the conducted studies is that they did not fully realize the potential of longitudinal data, which allow for evaluating how genetic influences on life span are mediated by physiological variables and other biomarkers during the life course. The objective of this paper is to address these issues. Data and Methods: We performed GWAS of human life span using different subsets of data from the original Framingham Heart Study cohort, corresponding to different quality control (QC) procedures, and used one subset of selected genetic variants for further analyses. We used a simulation study to show that this approach to combining data improves the quality of GWAS. We used FHS longitudinal data to compare average age trajectories of physiological variables in carriers and non-carriers of selected genetic variants. We used a stochastic process model of human mortality and aging to investigate genetic influence on hidden biomarkers of aging

  10. Stable isotope analyses of oxygen (18O:17O:16O) and chlorine (37Cl:35Cl) in perchlorate: reference materials, calibrations, methods, and interferences

    Science.gov (United States)

    Böhlke, John Karl; Mroczkowski, Stanley J.; Sturchio, Neil C.; Heraty, Linnea J.; Richman, Kent W.; Sullivan, Donald B.; Griffith, Kris N.; Gu, Baohua; Hatzinger, Paul B.

    2017-01-01

Rationale: Perchlorate (ClO4−) is a common trace constituent of water, soils, and plants; it has both natural and synthetic sources and is subject to biodegradation. The stable isotope ratios of Cl and O provide three independent quantities for ClO4− source attribution and natural attenuation studies: δ37Cl, δ18O, and δ17O (or Δ17O or 17Δ) values. Documented reference materials, calibration schemes, methods, and interferences will improve the reliability of such studies. Methods: Three large batches of KClO4 with contrasting isotopic compositions were synthesized and analyzed against VSMOW-SLAP, atmospheric O2, and international nitrate and chloride reference materials. Three analytical methods were tested for O isotopes: conversion of ClO4− to CO for continuous-flow IRMS (CO-CFIRMS), decomposition to O2 for dual-inlet IRMS (O2-DIIRMS), and decomposition to O2 with a molecular-sieve trap (O2-DIIRMS+T). For Cl isotopes, KCl produced by thermal decomposition of KClO4 was reprecipitated as AgCl and converted into CH3Cl for DIIRMS. Results: KClO4 isotopic reference materials (USGS37, USGS38, USGS39) represent a wide range of Cl and O isotopic compositions, including non-mass-dependent O isotopic variation. Isotopic fractionation and exchange can affect O isotope analyses of ClO4− depending on the decomposition method. Routine analyses can be adjusted for such effects by normalization, using reference materials prepared and analyzed as samples. Analytical errors caused by SO42−, NO3−, ReO42−, and C-bearing contaminants include isotope mixing and fractionation effects on CO and O2, plus direct interference from CO2 in the mass spectrometer. The results highlight the importance of effective purification of ClO4− from environmental samples. Conclusions: KClO4 reference materials are available for testing methods and calibrating isotopic data for ClO4− and other substances with widely varying Cl or O isotopic compositions. Current ClO4− extraction, purification
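
    The δ and Δ17O quantities in this record follow standard isotope-delta conventions: a per-mil deviation of the sample ratio from a reference ratio, and the deviation of δ17O from the mass-dependent fractionation line. A minimal sketch; the ratio values and the line slope of 0.525 are illustrative assumptions, not values from the study:

    ```python
    def delta_permil(r_sample, r_reference):
        """Delta value in per mil: relative deviation of the sample
        isotope ratio from the reference ratio, times 1000."""
        return (r_sample / r_reference - 1.0) * 1000.0

    def cap_delta_17O(d17O, d18O, slope=0.525):
        """Linear approximation of the 17O anomaly: deviation of d17O
        from the mass-dependent line d17O = slope * d18O."""
        return d17O - slope * d18O

    # Hypothetical 18O/16O ratios for a sample and a reference
    print(round(delta_permil(0.0020100, 0.0020052), 2))  # 2.39
    ```

    A non-zero Δ17O is the "non-mass-dependent O isotopic variation" the abstract mentions, and is what makes δ17O a third independent tracer alongside δ18O and δ37Cl.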

  11. The Cartographic Method of Research in Exploring the Real Estate Market - A Case of Using Maps for the Introductory Analyses of the Lublin Suburban Commune of Konopnica

    Directory of Open Access Journals (Sweden)

    Nieścioruk Kamil

    2015-06-01

Full Text Available The paper deals with an introductory analysis of the real estate market of a suburban commune located near a big (voivodeship capital) city. The analysis is based mainly on the cartographic method of research. Besides data mining and preparation, maps play an important role here, presenting values acquired directly from the register of notarial deeds of estate sales and purchases, as well as values resulting from statistical computation, for example mean values of area or price, absolute numbers of transactions or real estate type. The spatial factor is also taken into consideration when it comes to more complex or specific analyses. The influence of distance, understood as a metric and time factor, as well as regression analysis results are also visualized on maps. Such presentation is a good step towards advanced analyses, provided maps are prepared according to the rules of cartography. The paper stresses that a map can be a great tool for aiding every stage of research, but may also cause misinterpretations and false conclusions when even basic rules are not complied with.

  12. Microwave and thermal pretreatment as methods for increasing the biogas potential of secondary sludge from municipal wastewater treatment plants

    DEFF Research Database (Denmark)

    Kuglarz, Mariusz; Karakashev, Dimitar Borisov; Angelidaki, Irini

    2013-01-01

In the present study, the sludge was pretreated with microwave irradiation and a low-temperature thermal method, both conducted within the same temperature range (30–100°C). Microwave pretreatment was found to be superior to the thermal treatment with respect to sludge solubilization and biogas pr...... experiments indicated that pre-treated sludge (microwave irradiation: 900W, temperature: 60–70°C) gave 35% more methane compared to untreated sludge. Moreover, the results of this study clearly demonstrated that microwave-pretreated sludge showed a better degree of sanitation.

  13. Decision support for environmental management of industrial non-hazardous secondary materials: New analytical methods combined with simulation and optimization modeling.

    Science.gov (United States)

    Little, Keith W; Koralegedara, Nadeesha H; Northeim, Coleen M; Al-Abed, Souhail R

    2017-07-01

Non-hazardous solid materials from industrial processes, once regarded as waste and disposed of in landfills, offer numerous environmental and economic advantages when put to beneficial uses (BUs). Proper management of these industrial non-hazardous secondary materials (INSM) requires estimates of their probable environmental impacts across disposal as well as BU options. The U.S. Environmental Protection Agency (EPA) has recently approved new analytical methods (EPA Methods 1313-1316) to assess the leachability of constituents of potential concern in these materials. These new methods are more realistic for many disposal and BU options than historical methods, such as the toxicity characteristic leaching protocol. Experimental data from these new methods are used to parameterize a chemical fate and transport (F&T) model to simulate long-term environmental releases from flue gas desulfurization gypsum (FGDG) when disposed of in an industrial landfill or beneficially used as an agricultural soil amendment. The F&T model is also coupled with optimization algorithms in the Beneficial Use Decision Support System (BUDSS), under development by EPA to enhance INSM management. Published by Elsevier Ltd.
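
    The long-term release simulation that such an F&T model performs can be illustrated, in drastically simplified form, by a generic first-order release model; this is not the EPA model itself, and all constants here are hypothetical:

    ```python
    import math

    def cumulative_release(m0, k, t):
        """Cumulative mass released by time t under first-order kinetics:
        M(t) = M0 * (1 - exp(-k * t)), where M0 is the leachable inventory
        and k is a lumped release-rate constant."""
        return m0 * (1.0 - math.exp(-k * t))

    # Hypothetical: 100 mg/kg leachable constituent, k = 0.05 per year
    print(round(cumulative_release(100.0, 0.05, 10.0), 1))  # 39.3
    ```

    Leaching-test data (such as from EPA Methods 1313-1316) would be used to fit parameters like k for each disposal or beneficial-use scenario; a real F&T model adds transport, chemistry and site geometry on top of the release term.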

  14. Patterns and Prevalence of School Access, Transitions and Equity in South Africa: Secondary Analyses of BT20 Large-Scale Data Sources. CREATE Pathways to Access. Research Monograph No. 27

    Science.gov (United States)

    Fleisch, Brahm; Shindler, Jennifer

    2009-01-01

This monograph looks at patterns and prevalence of initial school enrolment, late entry, attainment, promotion, and repetition in urban South Africa. The paper pays special attention to the particular gendered nature of the patterns of school participation. The study analyses data generated in the representative cohort study, Birth-to-Twenty…

  15. Evaluating public involvement in research design and grant development: Using a qualitative document analysis method to analyse an award scheme for researchers.

    Science.gov (United States)

    Baxter, Susan; Muir, Delia; Brereton, Louise; Allmark, Christine; Barber, Rosemary; Harris, Lydia; Hodges, Brian; Khan, Samaira; Baird, Wendy

    2016-01-01

money was used, including a description of the aims and outcomes of the public involvement activities. The purpose of this study was to analyse the content of these reports. We aimed to find out what researchers' views and experiences of public involvement activities were, and what lessons might be learned. Methods: We used an innovative method of data analysis, drawing on group participatory approaches, qualitative content analysis and Framework Analysis to sort and label the content of the reports. We developed a framework of categories and sub-categories (or themes and sub-themes) from this process. Results: Twenty-five documents were analysed. Four main themes were identified in the data: the added value of public involvement; planning and designing involvement; the role of public members; and valuing public member contributions. Within these themes, sub-themes related to the timing of involvement (prior to the research study/intended during the research study), and to specific benefits of public involvement such as validating ideas, ensuring appropriate outcomes, ensuring the acceptability of data collection methods/tools, and advice regarding research processes. Other sub-themes related to: finding and approaching public members; timing of events; training/support; the format of sessions; setting up public involvement panels; use of public contributors in analysis and interpretation of data; and using public members to assist with dissemination and translation into practice. 
Conclusions: The analysis of reports submitted by researchers following involvement events provides evidence of the value of public involvement during the development of applications for research funding, and details a method for involving members of the public in data analysis which could be of value to other researchers. The findings of the analysis indicate recognition amongst researchers of the variety of potential roles for public members in research, and also an acknowledgement of how

  16. Improving validation methods for molecular diagnostics: application of Bland-Altman, Deming and simple linear regression analyses in assay comparison and evaluation for next-generation sequencing.

    Science.gov (United States)

    Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L

    2018-02-01

A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R²), using R² as the primary metric of assay agreement. However, the use of R² alone does not adequately quantify the constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing (NGS) assays. NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. The Bland-Altman statistical approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of the performance characteristics of quantitative molecular assays prior to implementation in the clinical molecular laboratory. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
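
    The core Bland-Altman quantities discussed here (bias and the 95% limits of agreement) are straightforward to compute from paired measurements. A minimal sketch, with hypothetical variant-allele-fraction values rather than the study's data:

    ```python
    import statistics

    def bland_altman(method_a, method_b):
        """Return the mean difference (bias) and the 95% limits of
        agreement (bias +/- 1.96 * SD of the paired differences)."""
        diffs = [a - b for a, b in zip(method_a, method_b)]
        bias = statistics.mean(diffs)
        sd = statistics.stdev(diffs)
        return bias, bias - 1.96 * sd, bias + 1.96 * sd

    # Hypothetical paired variant allele fractions from two assays
    new_assay = [0.12, 0.25, 0.40, 0.51, 0.33]
    ref_assay = [0.10, 0.24, 0.41, 0.49, 0.31]
    bias, lo, hi = bland_altman(new_assay, ref_assay)
    print(round(bias, 3), round(lo, 3), round(hi, 3))  # 0.012 -0.014 0.038
    ```

    A non-zero bias reveals constant error that R² alone cannot see; a Deming regression would additionally separate constant (intercept) from proportional (slope) error, which is why the paper pairs the two approaches.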

  17. Assessing the Influence of Seasonal and Spatial Variations on the Estimation of Secondary Organic Carbon in Urban Particulate Matter by Applying the EC-Tracer Method

    Directory of Open Access Journals (Sweden)

    Sandra Wagener

    2014-04-01

Full Text Available The elemental carbon (EC) tracer method was applied to PM10 and PM1 data from three sampling sites in the City of Berlin from February to October 2010. The sites were characterized by differing exposure to traffic and vegetation. The aim was to determine the secondary organic carbon (SOC) concentration and to describe the parameters influencing the application of the EC-tracer method. The evaluation was based on comparisons with results obtained from positive matrix factorization (PMF) applied to the same samples. To obtain site- and season-representative primary OC/EC ratios ([OC/EC]p), the EC-tracer method was performed separately for each station, and additionally separately for samples with high and low contributions of biomass burning. Estimated SOC concentrations for all stations were between 11% and 33% of total OC. SOC concentrations obtained with PMF exceeded the EC-tracer results by more than 100% at the park site in the period with low biomass burning emissions in PM10. The deviations were attributed, among other factors, to the high ratio of biogenic to combustion emissions and to direct exposure to vegetation. In contrast, the occurrence of biomass burning emissions led to increased SOC concentrations compared to PMF in PM10. The results indicate that the EC-tracer method provides results well comparable with PMF if sites are strongly influenced by one characteristic primary combustion source, but it was found to be adversely influenced by direct and relatively high biogenic emissions.
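
    The EC-tracer estimate itself is a subtraction: primary OC is approximated as EC times the primary OC/EC ratio, and the remainder of total OC is attributed to SOC. A minimal sketch with hypothetical concentrations:

    ```python
    def soc_ec_tracer(oc_total, ec, oc_ec_primary):
        """Secondary organic carbon by the EC-tracer method:
        SOC = OC_total - EC * (OC/EC)_primary, floored at zero."""
        return max(oc_total - ec * oc_ec_primary, 0.0)

    # Hypothetical concentrations in ug C per m3
    soc = soc_ec_tracer(oc_total=6.0, ec=2.0, oc_ec_primary=2.2)
    print(round(soc, 2))  # 1.6, i.e. about 27% of total OC
    ```

    The method's sensitivity to [OC/EC]p is visible directly in this formula, which is why the study derives site- and season-specific ratios before applying it.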

  18. Assessment methods as effective tools for learning outcomes of students in senior secondary schools in Ila-Orangun, south western Nigeria

    Directory of Open Access Journals (Sweden)

    Lamidi W.A

    2013-06-01

Full Text Available Different methods of assessment of students' learning outcomes in Agricultural Science were studied at five different secondary schools in Ila-Orangun, Osun State. An arm of a class was used for each test; Continuous Assessment (CA) and the Conventional Method (CM) were used for each arm. Students were taught during their normal school times for a maximum of forty minutes thrice a week. There were ten objective questions weekly for each assessment of the students in the CA method for six weeks. The same questions were used throughout for all the schools, administered simultaneously for CA, and sixty questions at once at the end of the sixth week for CM. Standard deviation and regression equations for the mean values were used in the analysis. The results show that CA could be adjudged better than CM because of its higher mean values in all the schools. The higher R2 values of 0.99 and 0.88 revealed stronger correla