WorldWideScience

Sample records for univariate analysis high

  1. Handbook of univariate and multivariate data analysis with IBM SPSS

    Ho, Robert

    2013-01-01

    Using the same accessible, hands-on approach as its best-selling predecessor, the Handbook of Univariate and Multivariate Data Analysis with IBM SPSS, Second Edition explains how to apply statistical tests to experimental findings, identify the assumptions underlying the tests, and interpret the findings. This second edition now covers more topics and has been updated with the SPSS statistical package for Windows. New to the Second Edition: three new chapters on multiple discriminant analysis, logistic regression, and canonical correlation; a new section on how to deal with missing data; coverage of te

  2. [A SAS macro program for batch processing of univariate Cox regression analysis for large databases].

    Yang, Rendong; Xiong, Jie; Peng, Yangqin; Peng, Xiaoning; Zeng, Xiaomin

    2015-02-01

    To realize batch processing of univariate Cox regression analysis for large databases with a SAS macro program. We wrote a SAS macro program in SAS 9.2 that can filter and integrate results and export P values to Excel. The program was used for screening survival-correlated RNA molecules in ovarian cancer. The SAS macro program could finish the batch processing of univariate Cox regression analyses, as well as the selection and export of the results. It has potential applications in reducing the workload of statistical analysis and provides a basis for batch processing of univariate Cox regression analysis.
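
    The same batch idea can be sketched outside SAS. The following Python sketch (our illustration, not the paper's macro; it assumes the lifelines package and hypothetical column names such as "time" and "status") loops univariate Cox fits over candidate variables and exports the p-values to Excel:

        import pandas as pd
        from lifelines import CoxPHFitter

        def batch_univariate_cox(df, duration_col, event_col, candidates):
            """Fit one univariate Cox model per candidate variable, collect p-values."""
            rows, cph = [], CoxPHFitter()
            for var in candidates:
                sub = df[[duration_col, event_col, var]].dropna()
                cph.fit(sub, duration_col=duration_col, event_col=event_col)
                s = cph.summary.loc[var]          # summary has one row per covariate
                rows.append({"variable": var, "HR": s["exp(coef)"], "p": s["p"]})
            return pd.DataFrame(rows).sort_values("p")

        # results = batch_univariate_cox(data, "time", "status", rna_columns)
        # results.to_excel("univariate_cox_pvalues.xlsx", index=False)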

  3. A Guideline to Univariate Statistical Analysis for LC/MS-Based Untargeted Metabolomics-Derived Data

    Maria Vinaixa

    2012-10-01

    Several metabolomic software programs provide methods for peak picking, retention time alignment and quantification of metabolite features in LC/MS-based metabolomics. Statistical analysis, however, is needed in order to discover those features significantly altered between samples. By comparing the retention time and MS/MS data of a model compound to those from the altered feature of interest in the research sample, metabolites can then be unequivocally identified. This paper provides a comprehensive overview of a workflow for statistical analysis to rank relevant metabolite features that will be selected for further MS/MS experiments. We focus on univariate data analysis applied in parallel on all detected features. Characteristics and challenges of this analysis are discussed and illustrated using four different real LC/MS untargeted metabolomic datasets. We demonstrate the influence of considering or violating the mathematical assumptions on which univariate statistical tests rely, using high-dimensional LC/MS datasets. Issues in data analysis such as determination of sample size, analytical variation, assumptions of normality and homoscedasticity, and correction for multiple testing are discussed and illustrated in the context of our four untargeted LC/MS working examples.
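
    The core of such a workflow is easy to sketch. The snippet below (illustrative only; it assumes a features-by-samples intensity matrix split into two groups) runs Welch's t-test in parallel over all features and applies Benjamini-Hochberg correction, two of the issues the guideline discusses:

        import numpy as np
        from scipy import stats
        from statsmodels.stats.multitest import multipletests

        def rank_features(X_case, X_ctrl, alpha=0.05):
            """X_*: (n_features, n_samples) intensity matrices for the two groups."""
            _, p = stats.ttest_ind(X_case, X_ctrl, axis=1, equal_var=False)  # Welch
            reject, p_adj, _, _ = multipletests(p, alpha=alpha, method="fdr_bh")
            order = np.argsort(p_adj)         # ranked features for MS/MS follow-up
            return order, p_adj, reject

        # Assumption checks (e.g., stats.shapiro for normality, stats.levene for
        # homoscedasticity) decide whether stats.mannwhitneyu is preferable.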

  4. Univariate and multivariate analysis on processing tomato quality under different mulches

    Carmen Moreno

    2014-04-01

    The use of eco-friendly mulch materials as alternatives to standard polyethylene (PE) has become increasingly prevalent worldwide. Consequently, a comparison of mulch materials from different origins is necessary to evaluate their feasibility. Several researchers have compared the effects of mulch materials on each crop variable through univariate analysis (ANOVA). However, it is important to focus on the effect of these materials on fruit quality, because this factor decisively influences the acceptance of the final product by consumers and the industrial sector. This study aimed to analyze the information supplied by a randomized complete block experiment combined over two seasons, a principal component analysis (PCA) and a cluster analysis (CA) when studying the effects of mulch materials on the quality of processing tomato (Lycopersicon esculentum Mill.). The study focused on the variability in the quality measurements and on the determination of mulch materials with a similar response to them. A comparison of the results from both types of analysis yielded complementary information. ANOVA showed the similarity of certain materials; however, considering the totality of the variables analyzed, the final interpretation was somewhat complicated. PCA indicated that juice color, fruit firmness and soluble solids content were the most influential factors in the total variability of a set of 12 juice and fruit variables, and CA allowed us to establish four categories of treatment: plastics (PE and oxo- and biodegradable materials), papers, manual weeding, and barley (Hordeum vulgare L.) straw. Oxo-biodegradable materials and PE were most closely related based on CA.
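
    A compact sketch of the PCA-plus-cluster-analysis step (our illustration with placeholder data, not the authors' code; average linkage stands in for the UPGMA-style grouping):

        import numpy as np
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA
        from scipy.cluster.hierarchy import linkage, fcluster

        X = np.random.default_rng(0).normal(size=(10, 12))  # treatments x 12 variables
        Z = StandardScaler().fit_transform(X)

        pca = PCA(n_components=3).fit(Z)
        print(pca.explained_variance_ratio_)   # which components drive variability
        scores = pca.transform(Z)

        tree = linkage(Z, method="average")     # average linkage (UPGMA-style CA)
        groups = fcluster(tree, t=4, criterion="maxclust")  # e.g., four categories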

  5. Univariate and Cross Tabulation Analysis of Construction Accidents in the Aegean Region

    BARADAN, Selim; AKBOĞA, Özge; ÇETİNKAYA, Ufuk; USMEN, Mümtaz A.

    2016-01-01

    It is crucial to investigate case studies and analyze accident statistics to establish a safety and health culture in the construction industry, which exhibits high fatality rates. However, it is difficult to find reliable and accurate construction accident data in Turkey due to the inadequate accident reporting and recordkeeping system, which hinders statistical safety research. Therefore, an independent database was generated by using inspection reports in this research study. Data mining was performed...

  6. Who uses nursing theory? A univariate descriptive analysis of five years' research articles.

    Bond, A Elaine; Eshah, Nidal Farid; Bani-Khaled, Mohammed; Hamad, Atef Omar; Habashneh, Samira; Kataua', Hussein; al-Jarrah, Imad; Abu Kamal, Andaleeb; Hamdan, Falastine Rafic; Maabreh, Roqia

    2011-06-01

    Since the early 1950s, nursing leaders have worked diligently to build the scientific discipline of nursing, integrating theory, research and practice. Recently, the role of theory has again come into question, with some scientists claiming nurses are not using theory to guide their research, with which to improve practice. The purposes of this descriptive study were to determine: (i) Were nursing scientists' research articles in leading nursing journals based on theory? (ii) If so, were the theories nursing theories or borrowed theories? (iii) Were the theories integrated into the studies, or were they used as organizing frameworks? Research articles from seven top ISI journals were analysed, excluding regularly featured columns, meta-analyses, secondary analyses, case studies and literature reviews. The authors used King's dynamic interacting systems framework and Goal Attainment Theory as an organizing framework. They developed consensus on how to identify the integration of theory, searching the title, abstract, aims, methods, discussion and conclusion sections of each research article, whether quantitative or qualitative. Of 2857 articles published in the seven journals from 2002 through 2006, 2184 (76%) were research articles. Of the 837 (38%) authors who used theories, 460 (55%) used nursing theories and 377 (45%) used other theories; 776 (93%) of those who used theory integrated it into their studies, including qualitative studies, while 51 (7%) reported they used theory as an organizing framework for their studies. Closer analysis revealed that theory principles were implicit even in research reports that did not explicitly report theory usage. Increasing numbers of nursing research articles (though not percentagewise) continue to be guided by theory, and not always by nursing theory. Newer nursing research methods may not explicitly state the use of nursing theory, though it is implicit. © 2010 The Authors. Scandinavian Journal of Caring

  7. Evaluation of genetic diversity among soybean (Glycine max) genotypes using univariate and multivariate analysis.

    Oliveira, M M; Sousa, L B; Reis, M C; Silva Junior, E G; Cardoso, D B O; Hamawaki, O T; Nogueira, A P O

    2017-05-31

    Genetic diversity studies are of paramount importance in breeding programs because they guide the selection of genetically divergent parents carrying the agronomic traits desired by the breeder. This study aimed to characterize the genetic divergence among 24 soybean genotypes through their agronomic traits, using multivariate clustering methods to select potential genitors for promising hybrid combinations. The six agronomic traits evaluated were number of days to flowering and to maturity, plant height at flowering and at maturity, insertion height of the first pod, and yield. Genetic divergence was evaluated by multivariate analysis: Mahalanobis' generalized distance (D²) was estimated first, followed by clustering using Tocher's optimization method and the unweighted pair-group method with arithmetic average (UPGMA). Tocher's optimization method and UPGMA agreed on the constitution of the groups, with eight distinct groups formed according to Tocher's method and seven distinct groups using UPGMA. The trait number of days to flowering (45.66%) was the most efficient in explaining dissimilarity between genotypes, and must be one of the main traits considered by the breeder when choosing genitors in soybean-breeding programs. The genetic variability allowed the identification of dissimilar genotypes with superior performances. The hybridizations UFU 18 x UFUS CARAJÁS, UFU 15 x UFU 13, and UFU 13 x UFUS CARAJÁS are promising for obtaining superior segregating populations, which would enable the development of more productive genotypes.
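
    The D²-plus-UPGMA pipeline is straightforward to reproduce. A sketch under stated assumptions (placeholder genotype means and a placeholder pooled covariance; Tocher's optimization step is omitted):

        import numpy as np
        from scipy.spatial.distance import squareform
        from scipy.cluster.hierarchy import linkage

        def mahalanobis_d2_matrix(M, S):
            """M: genotypes x traits means; S: pooled trait covariance."""
            VI = np.linalg.inv(S)
            diff = M[:, None, :] - M[None, :, :]
            return np.einsum("ijk,kl,ijl->ij", diff, VI, diff)

        M = np.random.default_rng(1).normal(size=(24, 6))  # 24 genotypes, 6 traits
        S = np.cov(M, rowvar=False)                        # placeholder covariance
        D2 = mahalanobis_d2_matrix(M, S)
        tree = linkage(squareform(np.sqrt(D2), checks=False), method="average")  # UPGMA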

  8. What do differences between multi-voxel and univariate analysis mean? How subject-, voxel-, and trial-level variance impact fMRI analysis.

    Davis, Tyler; LaRocque, Karen F; Mumford, Jeanette A; Norman, Kenneth A; Wagner, Anthony D; Poldrack, Russell A

    2014-08-15

    Multi-voxel pattern analysis (MVPA) has led to major changes in how fMRI data are analyzed and interpreted. Many studies now report both MVPA results and results from standard univariate voxel-wise analysis, often with the goal of drawing different conclusions from each. Because MVPA results can be sensitive to latent multidimensional representations and processes whereas univariate voxel-wise analysis cannot, one conclusion that is often drawn when MVPA and univariate results differ is that the activation patterns underlying MVPA results contain a multidimensional code. In the current study, we conducted simulations to formally test this assumption. Our findings reveal that MVPA tests are sensitive to the magnitude of voxel-level variability in the effect of a condition within subjects, even when the same linear relationship is coded in all voxels. We also find that MVPA is insensitive to subject-level variability in mean activation across an ROI, which is the primary variance component of interest in many standard univariate tests. Together, these results illustrate that differences between MVPA and univariate tests do not afford conclusions about the nature or dimensionality of the neural code. Instead, targeted tests of the informational content and/or dimensionality of activation patterns are critical for drawing strong conclusions about the representational codes that are indicated by significant MVPA results. Copyright © 2014 Elsevier Inc. All rights reserved.
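
    The paper's central point can be reproduced in a toy simulation. In the sketch below (our illustration, not the authors' code; a linear SVM stands in for the MVPA classifier and a t-test on ROI means for the univariate test), every voxel carries a stable condition effect, but the voxel-level effects average to roughly zero, so decoding succeeds while the ROI-mean test does not:

        import numpy as np
        from scipy import stats
        from sklearn.svm import LinearSVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n_trials, n_vox = 100, 50
        beta_v = rng.normal(0.0, 1.0, n_vox)     # voxel-level effects, mean ~ 0
        cond = np.repeat([0, 1], n_trials // 2)
        X = rng.normal(size=(n_trials, n_vox)) + np.outer(cond, beta_v)

        acc = cross_val_score(LinearSVC(max_iter=5000), X, cond, cv=5).mean()
        roi = X.mean(axis=1)                     # ROI-mean "univariate" signal
        _, p = stats.ttest_ind(roi[cond == 1], roi[cond == 0])
        print(f"MVPA accuracy: {acc:.2f}, ROI-mean t-test p: {p:.2f}")
        # Typically: accuracy well above chance, t-test far from significance.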

  9. [Retrospective statistical analysis of clinical factors of recurrence in chronic subdural hematoma: correlation between univariate and multivariate analysis].

    Takayama, Motoharu; Terui, Keita; Oiwa, Yoshitsugu

    2012-10-01

    Chronic subdural hematoma is common in elderly individuals, and the surgical procedures are simple. The recurrence rate of chronic subdural hematoma, however, varies from 9.2 to 26.5% after surgery. The authors studied factors of recurrence using univariate and multivariate analyses in patients with chronic subdural hematoma. We retrospectively reviewed 239 consecutive cases of chronic subdural hematoma that received burr-hole surgery with irrigation and closed-system drainage. We analyzed the relationships between recurrence of chronic subdural hematoma and factors such as sex, age, laterality, bleeding tendency, other complicating diseases, density on CT, volume of the hematoma, residual air in the hematoma cavity, and use of artificial cerebrospinal fluid. Twenty-one patients (8.8%) experienced a recurrence of chronic subdural hematoma. Multiple logistic regression found that the recurrence rate was higher in patients with a large volume of residual air, and lower in patients in whom artificial cerebrospinal fluid was used. No statistical differences were found for bleeding tendency. Techniques to reduce the air in the hematoma cavity are important for a good outcome in surgery of chronic subdural hematoma. Also, the use of artificial cerebrospinal fluid reduces recurrence of chronic subdural hematoma. The surgical procedures can be the same for patients with bleeding tendencies.

  10. A comparison between univariate probabilistic and multivariate (logistic regression) methods for landslide susceptibility analysis: the example of the Febbraro valley (Northern Alps, Italy)

    Rossi, M.; Apuani, T.; Felletti, F.

    2009-04-01

    .40). Geological and land use maps were also used, with geological and land use properties treated as categorical variables. Applying the univariate probabilistic method, the Landslide Susceptibility Index (LSI) is defined as the sum over all predisposing factors of the ratio Ra/Rb, where Ra is the ratio between the number of pixels in a class and the total number of pixels in the study area, and Rb is the ratio of the number of landslides to the number of pixels in the interval area. From the analysis of the Ra/Rb ratios, the relationships between landslide occurrence and the predisposing factors were defined. The LSI equation was then used in GIS to produce the landslide susceptibility maps. The multivariate method for landslide susceptibility analysis, based on logistic regression, was performed starting from the density maps of the predisposing factors, calculated over the intervals defined above using the equation Rb/Rbtot, where Rbtot is the sum of all Rb values. Using stepwise forward algorithms, the logistic regression was performed in two successive steps: first, a univariate logistic regression was used to choose the most significant predisposing factors; then the multivariate logistic regression was performed. The univariate regression highlighted the importance of the following factors: elevation, accumulation flow, drainage density, lineament density, geology and land use. When the multivariate regression was applied, the number of controlling factors was reduced, the geological properties being dropped. The resulting final susceptibility equation is: P = 1 / (1 + exp(-(6.46 - 22.34*elevation - 5.33*accumulation flow - 7.99*drainage density - 4.47*lineament density - 17.31*land use))), and this equation was used to obtain the susceptibility maps. To ease comparison of the results of the two methodologies, the susceptibility maps were reclassified into five susceptibility intervals (very high, high, moderate, low and very low) using natural breaks. Then the maps were validated using two
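
    The final equation can be applied directly to the normalized factor grids. A minimal sketch (the coefficients are those quoted above; the input arrays are assumed to be the density-normalized factor maps):

        import numpy as np

        def susceptibility(elev, acc_flow, drain_d, lin_d, land_use):
            """Inputs: density-normalized predisposing-factor maps (arrays)."""
            z = (6.46 - 22.34 * elev - 5.33 * acc_flow - 7.99 * drain_d
                 - 4.47 * lin_d - 17.31 * land_use)
            return 1.0 / (1.0 + np.exp(-z))

        # P = susceptibility(...); the five classes then follow from np.digitize
        # with natural-break thresholds (very low ... very high).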

  11. A Java-based fMRI processing pipeline evaluation system for assessment of univariate general linear model and multivariate canonical variate analysis-based pipelines.

    Zhang, Jing; Liang, Lichen; Anderson, Jon R; Gatewood, Lael; Rottenberg, David A; Strother, Stephen C

    2008-01-01

    As functional magnetic resonance imaging (fMRI) becomes widely used, the demands for evaluation of fMRI processing pipelines and validation of fMRI analysis results are increasing rapidly. The current NPAIRS package, an IDL-based fMRI processing pipeline evaluation framework, lacks system interoperability and the ability to evaluate general linear model (GLM)-based pipelines using prediction metrics. Thus, it cannot fully evaluate fMRI analytical software modules such as FSL.FEAT and NPAIRS.GLM. To overcome these limitations, a Java-based fMRI processing pipeline evaluation system was developed. It integrated YALE (a machine learning environment) into Fiswidgets (an fMRI software environment) to obtain system interoperability and applied an algorithm to measure GLM prediction accuracy. The results demonstrated that the system can evaluate fMRI processing pipelines with univariate GLM and multivariate canonical variates analysis (CVA)-based models on real fMRI data, based on prediction accuracy (classification accuracy) and statistical parametric image (SPI) reproducibility. In addition, a preliminary study was performed in which four fMRI processing pipelines with GLM and CVA modules, such as FSL.FEAT and NPAIRS.CVA, were evaluated with the system. The results indicated that (1) the system can compare different fMRI processing pipelines with heterogeneous models (NPAIRS.GLM, NPAIRS.CVA and FSL.FEAT) and rank their performance by automatic performance scoring, and (2) the ranking of pipeline performance is highly dependent on the preprocessing operations. These results suggest that the system will be of value for the comparison, validation, standardization and optimization of functional neuroimaging software packages and fMRI processing pipelines.

  12. Comparison between the univariate and multivariate analysis on the partial characterization of the endoglucanase produced in the solid state fermentation by Aspergillus oryzae ATCC 10124.

    de Brito, Aila Riany; Santos Reis, Nadabe Dos; Silva, Tatielle Pereira; Ferreira Bonomo, Renata Cristina; Trovatti Uetanabaro, Ana Paula; de Assis, Sandra Aparecida; da Silva, Erik Galvão Paranhos; Aguiar-Oliveira, Elizama; Oliveira, Julieta Rangel; Franco, Marcelo

    2017-11-26

    Endoglucanase production by Aspergillus oryzae ATCC 10124 cultivated on rice husks or peanut shells was optimized by experimental design as a function of humidity, time, and temperature. The optimum temperature for endoglucanase activity was estimated by univariate analysis (one factor at a time) as 50°C (rice husks) and 60°C (peanut shells); by multivariate analysis (synergism of factors), however, a different optimum temperature (56°C) was determined for the endoglucanase from peanut shells. The optimum pH values determined by univariate and multivariate analysis were 5.0 and 5.2 (rice husks) and 5.0 and 7.6 (peanut shells), respectively. In addition, the best half-lives were observed at 50°C: 22.8 hr (rice husks) and 7.3 hr (peanut shells). Residual activities above 80% were obtained between 30 and 50°C for both substrates, and pH stability was improved at pH 5-7 (rice husks) and 6-9 (peanut shells). The two endoglucanases obtained presented different characteristics as a result of the versatility of the fungus on different substrates.

  13. Evaluation of in-line Raman data for end-point determination of a coating process: Comparison of Science-Based Calibration, PLS-regression and univariate data analysis.

    Barimani, Shirin; Kleinebudde, Peter

    2017-10-01

    A multivariate analysis method, Science-Based Calibration (SBC), was used for the first time for endpoint determination of a tablet coating process using Raman data. Two types of tablet cores, placebo and caffeine cores, received a coating suspension comprising a polyvinyl alcohol-polyethylene glycol graft copolymer and titanium dioxide up to a maximum coating thickness of 80 µm. Raman spectroscopy was used as the in-line PAT tool. The spectra were acquired every minute and correlated to the amount of applied aqueous coating suspension. SBC was compared to another well-known multivariate analysis method, Partial Least Squares regression (PLS), and a simpler approach, Univariate Data Analysis (UVDA). All developed calibration models had coefficient of determination values (R²) higher than 0.99. The coating endpoints could be predicted with root mean square errors of prediction (RMSEP) of less than 3.1% of the applied coating suspension. Compared to PLS and UVDA, SBC proved to be an alternative multivariate calibration method with high predictive power. Copyright © 2017 Elsevier B.V. All rights reserved.
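
    The univariate-versus-multivariate comparison can be illustrated on synthetic spectra (sketch only; SBC itself is not reproduced, so scikit-learn's PLS and a single-band linear fit stand in for PLS and UVDA):

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        coating = np.linspace(0, 100, 40)               # applied suspension (a.u.)
        spectra = 0.01 * np.outer(coating, rng.normal(size=300)) \
                  + 0.1 * rng.normal(size=(40, 300))    # fake in-line spectra

        uvda = LinearRegression().fit(spectra[:, [150]], coating)  # one band
        pls = PLSRegression(n_components=3).fit(spectra, coating)  # full spectrum

        def rmsep(y, yhat):
            return float(np.sqrt(np.mean((y - np.ravel(yhat)) ** 2)))

        print(rmsep(coating, uvda.predict(spectra[:, [150]])),
              rmsep(coating, pls.predict(spectra)))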

  14. Development of a Univariate Membrane-Based Mid-Infrared Method for Protein Quantitation and Total Lipid Content Analysis of Biological Samples

    Ivona Strug

    2014-01-01

    Biological samples present a range of complexities, from homogeneous purified protein to multicomponent mixtures. Accurate qualification of such samples is paramount to downstream applications. We describe the development of an MIR spectroscopy-based analytical method offering simultaneous protein quantitation (0.25–5 mg/mL) and analysis of total lipid or detergent species, as well as the identification of other biomolecules present in biological samples. The method utilizes a hydrophilic PTFE membrane engineered for presentation of aqueous samples in a dried format compatible with fast infrared analysis. Unlike classical quantification techniques, the reported method is amino acid sequence independent and thus applicable to complex samples of unknown composition. By comparison to existing platforms, this MIR-based method enables direct quantification using minimal sample volume (2 µL); it is well suited where repeat access and limited sample size are critical parameters. Further, accurate results can be derived without specialized training or knowledge of IR spectroscopy. Overall, the simplified application and analysis system provides a more cost-effective alternative to high-throughput IR systems for research laboratories with minimal throughput demands. In summary, the MIR-based system provides a viable alternative to current protein quantitation methods; it also uniquely offers simultaneous qualification of other components, notably lipids and detergents.

  15. Comparison of spectrum normalization techniques for univariate ...

    Keywords: laser-induced breakdown spectroscopy; univariate study; normalization models; stainless steel; standard error of prediction. Abstract: Analytical performance of six different spectrum normalization techniques, namely internal normalization, normalization with total light, normalization with background, along with their ...

  16. VC-dimension of univariate decision trees.

    Yildiz, Olcay Taner

    2015-02-01

    In this paper, we give and prove lower bounds on the Vapnik-Chervonenkis (VC)-dimension of the univariate decision tree hypothesis class. The VC-dimension of a univariate decision tree depends on the VC-dimension values of its subtrees and the number of inputs. Via a search algorithm that calculates the VC-dimension of univariate decision trees exhaustively, we show that our VC-dimension bounds are tight for simple trees. To verify that the VC-dimension bounds are useful, we also use them to get VC-generalization bounds for complexity control using structural risk minimization in decision trees, i.e., pruning. Our simulation results show that structural risk minimization pruning using the VC-dimension bounds finds trees that are as accurate as those pruned using cross-validation.

  17. Univariate characterization of the German business cycle 1955-1994

    Weihs, Claus; Garczarek, Ursula

    2002-01-01

    We present a descriptive analysis of stylized facts for the German business cycle. We demonstrate that simple ad-hoc instructions for identifying univariate rules characterizing the German business cycle 1955-1994 lead to an error rate comparable to that of standard multivariate methods.

  18. Evaluation of droplet size distributions using univariate and multivariate approaches

    Gauno, M.H.; Larsen, C.C.; Vilhelmsen, T.

    2013-01-01

    of the distribution. The current study aimed to compare univariate and multivariate approaches in evaluating droplet size distributions. As a model system, the atomization of a coating solution from a two-fluid nozzle was investigated. The effect of three process parameters (concentration of ethyl cellulose in ethanol, atomizing air pressure, and flow rate of coating solution) on the droplet size and droplet size distribution was studied using a full mixed factorial design. The droplet size produced by a two-fluid nozzle was measured by laser diffraction and reported as a volume-based size distribution. Investigation of loading and score plots from principal component analysis (PCA) revealed additional information on the droplet size distributions, and it was possible to identify univariate statistics (volume median droplet size) that were similar yet originated from varying droplet size distributions.

  19. Evaluation of droplet size distributions using univariate and multivariate approaches.

    Gaunø, Mette Høg; Larsen, Crilles Casper; Vilhelmsen, Thomas; Møller-Sonnergaard, Jørn; Wittendorff, Jørgen; Rantanen, Jukka

    2013-01-01

    Pharmaceutically relevant material characteristics are often analyzed on the basis of univariate descriptors instead of utilizing the whole information available in the full distribution. One example is droplet size distribution, which is often described by the median droplet size and the width of the distribution. The current study aimed to compare univariate and multivariate approaches in evaluating droplet size distributions. As a model system, the atomization of a coating solution from a two-fluid nozzle was investigated. The effect of three process parameters (concentration of ethyl cellulose in ethanol, atomizing air pressure, and flow rate of coating solution) on the droplet size and droplet size distribution was studied using a full mixed factorial design. The droplet size produced by a two-fluid nozzle was measured by laser diffraction and reported as a volume-based size distribution. Investigation of loading and score plots from principal component analysis (PCA) revealed additional information on the droplet size distributions, and it was possible to identify univariate statistics (volume median droplet size) that were similar yet originated from varying droplet size distributions. The multivariate data analysis proved to be an efficient tool for evaluating the full information contained in a distribution.

  20. Prognostic factors in children and adolescents with acute myeloid leukemia (excluding children with Down syndrome and acute promyelocytic leukemia): univariate and recursive partitioning analysis of patients treated on Pediatric Oncology Group (POG) Study 8821.

    Chang, M; Raimondi, S C; Ravindranath, Y; Carroll, A J; Camitta, B; Gresik, M V; Steuber, C P; Weinstein, H

    2000-07-01

    The purpose of this paper was to define clinical or biological features associated with the risk of treatment failure for children with acute myeloid leukemia. Data from 560 children and adolescents with newly diagnosed acute myeloid leukemia who entered the Pediatric Oncology Group Study 8821 from June 1988 to March 1993 were analyzed by univariate and recursive partitioning methods. Children with Down syndrome or acute promyelocytic leukemia were excluded from the study. Factors examined included age, leukocyte count, sex, FAB morphologic subtype, cytogenetic findings, and extramedullary disease at the time of diagnosis. The overall event-free survival (EFS) rate at 4 years was 32.7% (s.e. = 2.2%). Age ≥2 years, fewer than 50 × 10⁹/L leukocytes, t(8;21) or inv(16), and normal chromosomes were associated with higher rates of EFS (P = 0.003, 0.049, 0.0003, and 0.031, respectively), whereas the M5 subtype of AML (P = 0.0003) and chromosome abnormalities other than t(8;21) and inv(16) (P = 0.0001) were associated with lower rates of EFS. Recursive partitioning analysis defined three groups of patients with widely varied prognoses: female patients with t(8;21), inv(16), or a normal karyotype (n = 89) had the best prognosis (4-year EFS = 55.1%, s.e. = 5.7%); male patients with t(8;21), inv(16) or normal chromosomes (n = 106) had an intermediate prognosis (4-year EFS = 38.1%, s.e. = 5.3%); and patients with chromosome abnormalities other than t(8;21) and inv(16) (n = 233) had the worst prognosis (4-year EFS = 27.0%, s.e. = 3.2%). One hundred and thirty-two patients (24%) could not be grouped because of missing cytogenetic data, mainly due to inadequate marrow samples. The results suggest that pediatric patients with acute myeloid leukemia can be categorized into three potential risk groups for prognosis and that differences in sex and chromosomal abnormalities are associated with differences in estimates of EFS. These results are tentative and

  1. Univariate models for air temperature time series

    Leon Aristizabal Gloria esperanza

    2000-01-01

    The theoretical framework for the study of air temperature time series is the theory of stochastic processes, particularly the class known as ARIMA, which makes it possible to carry out a univariate analysis. ARIMA models are built to explain the structure of the monthly temperatures corresponding to the mean, absolute maximum, absolute minimum, maximum mean and minimum mean temperatures for four stations in Colombia. By means of those models, the possible evolution of these variables is estimated with predictive aims in mind. The application and utility of the models are discussed.
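
    A minimal sketch of this kind of univariate model (synthetic monthly series; statsmodels' seasonal ARIMA with an assumed order stands in for the models fitted to the Colombian stations):

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(0)
        months = pd.date_range("1980-01", periods=240, freq="MS")
        temp = 14 + 8 * np.sin(2 * np.pi * months.month / 12) + rng.normal(0, 1, 240)
        series = pd.Series(temp, index=months)

        fit = ARIMA(series, order=(1, 0, 1), seasonal_order=(1, 1, 1, 12)).fit()
        forecast = fit.forecast(steps=12)   # predictive aim: the next 12 months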

  2. The pathways for intelligible speech: multivariate and univariate perspectives.

    Evans, S; Kyong, J S; Rosen, S; Golestani, N; Warren, J E; McGettigan, C; Mourão-Miranda, J; Wise, R J S; Scott, S K

    2014-09-01

    An anterior pathway, concerned with extracting meaning from sound, has been identified in nonhuman primates. An analogous pathway has been suggested in humans, but controversy exists concerning the degree of lateralization and the precise location where responses to intelligible speech emerge. We have demonstrated that the left anterior superior temporal sulcus (STS) responds preferentially to intelligible speech (Scott SK, Blank CC, Rosen S, Wise RJS. 2000. Identification of a pathway for intelligible speech in the left temporal lobe. Brain. 123:2400-2406.). A functional magnetic resonance imaging study in Cerebral Cortex used equivalent stimuli and univariate and multivariate analyses to argue for the greater importance of the bilateral posterior STS when compared with the left anterior STS in responding to intelligible speech (Okada K, Rong F, Venezia J, Matchin W, Hsieh IH, Saberi K, Serences JT, Hickok G. 2010. Hierarchical organization of human auditory cortex: evidence from acoustic invariance in the response to intelligible speech. Cereb Cortex. 20:2486-2495.). Here, we also replicate our original study, demonstrating that the left anterior STS exhibits the strongest univariate response and, in decoding using the bilateral temporal cortex, contains the most informative voxels showing an increased response to intelligible speech. In contrast, in classifications using local "searchlights" and a whole-brain analysis, we find greater classification accuracy in posterior rather than anterior temporal regions. Thus, we show that the precise nature of the multivariate analysis used will emphasize different response profiles associated with complex sound-to-speech processing. © The Author 2013. Published by Oxford University Press.

  3. Comparison of spectrum normalization techniques for univariate ...

    2016-02-29

    Feb 29, 2016 ... Fuel Chemistry Division, Bhabha Atomic Research Centre, Mumbai 400 085, India ... three-point smoothing methods were studied using LIBS for quantification of Cr, Mn and Ni ... technique for the qualitative and quantitative analysis of the samples ... SEP is a type of mean square error.

  4. Univariate and multivariate skewness and kurtosis for measuring nonnormality: Prevalence, influence and estimation.

    Cain, Meghan K; Zhang, Zhiyong; Yuan, Ke-Hai

    2017-10-01

    Nonnormality of univariate data has been extensively examined previously (Blanca et al., Methodology: European Journal of Research Methods for the Behavioral and Social Sciences, 9(2), 78-84, 2013; Micceri, Psychological Bulletin, 105(1), 156, 1989). However, less is known about the potential nonnormality of multivariate data, although multivariate analysis is commonly used in psychological and educational research. Using univariate and multivariate skewness and kurtosis as measures of nonnormality, this study examined 1,567 univariate distributions and 254 multivariate distributions collected from authors of articles published in Psychological Science and the American Education Research Journal. We found that 74% of univariate distributions and 68% of multivariate distributions deviated from normal distributions. In a simulation study using typical values of skewness and kurtosis that we collected, we found that the resulting type I error rates were 17% in a t-test and 30% in a factor analysis under some conditions. Hence, we argue that it is time to routinely report skewness and kurtosis along with other summary statistics such as means and variances. To facilitate future reporting of skewness and kurtosis, we provide a tutorial on how to compute univariate and multivariate skewness and kurtosis in SAS, SPSS, R and a newly developed Web application.
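
    Skewness and kurtosis are easy to add to a results section. The sketch below computes univariate values with SciPy and Mardia's multivariate coefficients directly from their definitions (our Python analogue of the paper's SAS/SPSS/R tutorial):

        import numpy as np
        from scipy import stats

        def mardia(X):
            """X: n x p data matrix; returns Mardia's b1p (skewness), b2p (kurtosis)."""
            n, _ = X.shape
            Xc = X - X.mean(axis=0)
            S_inv = np.linalg.inv(np.cov(X, rowvar=False, bias=True))
            D = Xc @ S_inv @ Xc.T            # (xi - mean)' S^-1 (xj - mean)
            return (D ** 3).sum() / n ** 2, (np.diag(D) ** 2).mean()

        X = np.random.default_rng(0).normal(size=(500, 4))
        print(stats.skew(X, axis=0), stats.kurtosis(X, axis=0))  # univariate
        print(mardia(X))   # under normality b2p is near p(p + 2) = 24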

  5. Comparison of multivariate and univariate statistical process control and monitoring methods

    Leger, R.P.; Garland, WM.J.; Macgregor, J.F.

    1996-01-01

    Work in recent years has led to the development of multivariate process monitoring schemes which use Principal Component Analysis (PCA). This research compares the performance of a univariate scheme and a multivariate PCA scheme used for monitoring a simple process with 11 measured variables. The multivariate PCA scheme was able to adequately represent the process using two principal components. This resulted in a PCA monitoring scheme which used two charts as opposed to 11 charts for the univariate scheme, and therefore had distinct advantages in terms of data representation, presentation, and fault diagnosis capabilities. (author)
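
    The flavor of the comparison can be sketched in a few lines: a single Hotelling T² chart built on two principal components replaces 11 univariate charts (illustrative code with synthetic in-control data; the control limit is a chi-square approximation, an assumption of this sketch):

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(0)
        X_ref = rng.normal(size=(200, 11))      # in-control reference data
        pca = PCA(n_components=2).fit(X_ref)

        def t2(X):
            scores = pca.transform(X)
            return (scores ** 2 / pca.explained_variance_).sum(axis=1)

        limit = 10.6                            # ~ chi-square(2) at alpha = 0.005
        alarms = t2(rng.normal(size=(50, 11))) > limit   # one chart, not eleven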

  6. Decoding auditory spatial and emotional information encoding using multivariate versus univariate techniques.

    Kryklywy, James H; Macpherson, Ewan A; Mitchell, Derek G V

    2018-04-01

    Emotion can have diverse effects on behaviour and perception, modulating function in some circumstances, and sometimes having little effect. Recently, it was identified that part of the heterogeneity of emotional effects could be due to a dissociable representation of emotion in dual-pathway models of sensory processing. Our previous fMRI experiment using traditional univariate analyses showed that emotion modulated processing in the auditory 'what' but not 'where' processing pathway. The current study aims to further investigate this dissociation using the more recently emerging multi-voxel pattern analysis searchlight approach. While undergoing fMRI, participants localized sounds of varying emotional content. A searchlight multi-voxel pattern analysis was conducted to identify activity patterns predictive of sound location and/or emotion. Relative to the prior univariate analysis, MVPA indicated larger overlapping spatial and emotional representations of sound within early secondary regions associated with auditory localization. However, consistent with the univariate analysis, these two dimensions were increasingly segregated in late secondary and tertiary regions of the auditory processing streams. These results, while complementary to our original univariate analyses, highlight the utility of multiple analytic approaches for neuroimaging, particularly for neural processes with known representations dependent on population coding.

  7. Regression Is a Univariate General Linear Model Subsuming Other Parametric Methods as Special Cases.

    Vidal, Sherry

    Although the concept of the general linear model (GLM) has existed since the 1960s, other univariate analyses such as the t-test and the analysis of variance models have remained popular. The GLM produces an equation that minimizes the squared differences between observed and predicted values of a dependent variable modeled as a linear function of the independent variables. From a computer printout…
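
    The subsumption is easy to demonstrate numerically: a two-group t-test is exactly an OLS regression on a dummy-coded predictor (sketch with synthetic data):

        import numpy as np
        import statsmodels.api as sm
        from scipy import stats

        rng = np.random.default_rng(0)
        y = np.concatenate([rng.normal(0.0, 1, 30), rng.normal(0.8, 1, 30)])
        group = np.repeat([0, 1], 30)           # dummy-coded independent variable

        ols = sm.OLS(y, sm.add_constant(group)).fit()
        t_classic, _ = stats.ttest_ind(y[group == 1], y[group == 0])
        print(ols.tvalues[1], t_classic)        # identical t statistics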

  8. New Riemannian Priors on the Univariate Normal Model

    Salem Said

    2014-07-01

    The current paper introduces new prior distributions on the univariate normal model, with the aim of applying them to the classification of univariate normal populations. These new prior distributions are entirely based on the Riemannian geometry of the univariate normal model, so that they can be thought of as "Riemannian priors". Precisely, if {pθ ; θ ∈ Θ} is any parametrization of the univariate normal model, the paper considers prior distributions G(θ̄, γ) with hyperparameters θ̄ ∈ Θ and γ > 0, whose density with respect to Riemannian volume is proportional to exp(−d²(θ, θ̄)/2γ²), where d²(θ, θ̄) is the square of Rao's Riemannian distance. The distributions G(θ̄, γ) are termed Gaussian distributions on the univariate normal model. The motivation for considering a distribution G(θ̄, γ) is that this distribution gives a geometric representation of a class or cluster of univariate normal populations. Indeed, G(θ̄, γ) has a unique mode θ̄ (precisely, θ̄ is the unique Riemannian center of mass of G(θ̄, γ), as shown in the paper), and its dispersion away from θ̄ is given by γ. Therefore, one thinks of members of the class represented by G(θ̄, γ) as being centered around θ̄ and lying within a typical distance determined by γ. The paper defines rigorously the Gaussian distributions G(θ̄, γ) and describes an algorithm for computing maximum likelihood estimates of their hyperparameters. Based on this algorithm and on the Laplace approximation, it describes how the distributions G(θ̄, γ) can be used as prior distributions for Bayesian classification of large univariate normal populations. In a concrete application to texture image classification, it is shown that this leads to an improvement in performance over the use of conjugate priors.
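
    In this notation the prior, together with the standard closed form of Rao's distance on the univariate normal model (stated here as background implied by the Fisher metric ds² = (dμ² + 2dσ²)/σ², not quoted from the paper), can be written as:

        p(\theta \mid \bar\theta, \gamma) \propto \exp\!\left(-\frac{d^2(\theta, \bar\theta)}{2\gamma^2}\right), \qquad \theta = (\mu, \sigma) \in \Theta,

        d\big((\mu_1,\sigma_1),(\mu_2,\sigma_2)\big) = \sqrt{2}\,\operatorname{arccosh}\!\left(1 + \frac{(\mu_1-\mu_2)^2/2 + (\sigma_1-\sigma_2)^2}{2\,\sigma_1\sigma_2}\right).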

  9. Univariate normalization of bispectrum using Hölder's inequality.

    Shahbazi, Forooz; Ewald, Arne; Nolte, Guido

    2014-08-15

    Considering that many biological systems, including the brain, are complex non-linear systems, suitable methods capable of detecting these non-linearities are required to study their dynamical properties. One of these tools is the third-order cumulant or cross-bispectrum, which is a measure of interfrequency interactions between three signals. For convenient interpretation, interaction measures are most commonly normalized to be independent of constant scales of the signals, such that their absolute values are bounded by one, with this limit reflecting perfect coupling. Although many different normalization factors for cross-bispectra have been suggested in the literature, these either do not lead to bounded measures or are themselves dependent on the coupling and not only on the scale of the signals. In this paper we suggest a normalization factor which is univariate, i.e., dependent only on the amplitude of each signal and not on the interactions between signals. Using a generalization of Hölder's inequality, it is proven that the absolute value of this univariate bicoherence is bounded by zero and one. We compared three widely used normalizations to the univariate normalization concerning the significance of bicoherence values gained from resampling tests. Bicoherence values were calculated from real EEG data recorded in an eyes-closed experiment from 10 subjects. The results show slightly more significant values for the univariate normalization, but in general the differences are very small or even vanishing in some subjects. Therefore, we conclude that the normalization factor does not play an important role in the bicoherence values with regard to statistical power, although a univariate normalization is the only normalization factor which fulfills all the required conditions of a proper normalization. Copyright © 2014 Elsevier B.V. All rights reserved.
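
    The construction is simple to sketch: segment-wise FFTs give the bispectrum numerator, and the univariate denominator uses only each signal's own third-moment amplitudes, so Hölder's inequality with exponents (3, 3, 3) bounds the ratio by one (illustrative code, not the authors' implementation):

        import numpy as np

        def univariate_bicoherence(x, fs, f1, f2, nseg=64):
            n = len(x) // nseg
            F = np.fft.rfft(x[:n * nseg].reshape(nseg, n), axis=1)  # segment FFTs
            freqs = np.fft.rfftfreq(n, 1 / fs)
            i1 = np.argmin(np.abs(freqs - f1))
            i2 = np.argmin(np.abs(freqs - f2))
            X1, X2, X3 = F[:, i1], F[:, i2], F[:, i1 + i2]
            num = np.abs(np.mean(X1 * X2 * np.conj(X3)))            # bispectrum
            den = (np.mean(np.abs(X1) ** 3) * np.mean(np.abs(X2) ** 3)
                   * np.mean(np.abs(X3) ** 3)) ** (1.0 / 3.0)       # univariate
            return num / den    # <= 1 by the generalized Hoelder inequality

        x = np.random.default_rng(0).normal(size=2 ** 14)
        print(univariate_bicoherence(x, fs=256.0, f1=10.0, f2=15.0))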

  10. Univariate and Bivariate Empirical Mode Decomposition for Postural Stability Analysis

    Jacques Duchêne

    2008-05-01

    The aim of this paper was to compare empirical mode decomposition (EMD) and two new extended methods of EMD named complex empirical mode decomposition (complex-EMD) and bivariate empirical mode decomposition (bivariate-EMD). All methods were used to analyze stabilogram center of pressure (COP) time series. The two new methods are suitable for application to complex time series to extract complex intrinsic mode functions (IMFs) before the Hilbert transform is subsequently applied on the IMFs. The trace of the analytic IMF in the complex plane has a circular form, with each IMF having its own rotation frequency. The area of the circle and the average rotation frequency of IMFs represent efficient indicators of the postural stability status of subjects. Experimental results show the effectiveness of these indicators to identify differences in standing posture between groups.

  11. Which DTW Method Applied to Marine Univariate Time Series Imputation

    Phan , Thi-Thu-Hong; Caillault , Émilie; Lefebvre , Alain; Bigand , André

    2017-01-01

    Missing data are ubiquitous in all domains of applied science. Processing datasets containing missing values can lead to a loss of efficiency and unreliable results, especially for large missing sub-sequence(s). Therefore, the aim of this paper is to build a framework for filling missing values in univariate time series and to perform a comparison of different similarity metrics used for the imputation task. This allows us to suggest the most suitable methods for the imp...
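
    The similarity metric at the heart of the comparison is classical dynamic time warping. A textbook O(nm) implementation (the imputation logic built on top of it is omitted here):

        import numpy as np

        def dtw_distance(a, b):
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        # Imputation idea: find the historical window most DTW-similar to the data
        # around a gap, then copy that window's continuation into the gap.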

  12. Univariate decision tree induction using maximum margin classification

    Yıldız, Olcay Taner

    2012-01-01

    In many pattern recognition applications, decision trees are used first due to their simplicity and easily interpretable nature. In this paper, we propose a new decision tree learning algorithm called the univariate margin tree in which, for each continuous attribute, the best split is found using convex optimization. Our simulation results on 47 data sets show that the novel margin tree classifier performs at least as well as C4.5 and the linear discriminant tree (LDT) with a similar time complexity. F...

  13. Acceleration techniques in the univariate Lipschitz global optimization

    Sergeyev, Yaroslav D.; Kvasov, Dmitri E.; Mukhametzhanov, Marat S.; De Franco, Angela

    2016-10-01

    Univariate box-constrained Lipschitz global optimization problems are considered in this contribution. Geometric and information statistical approaches are presented. Novel, powerful local tuning and local improvement techniques are described, as well as traditional ways to estimate the Lipschitz constant. The advantages of the presented local tuning and local improvement techniques are demonstrated using the operational characteristics approach for comparing deterministic global optimization algorithms on a class of 100 widely used test functions.
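
    The geometric approach can be sketched with the classical Piyavskii-Shubert scheme, where a saw-tooth lower bound built from a Lipschitz constant L decides where to sample next (illustrative implementation; the paper's local tuning refinements are not included):

        import numpy as np

        def shubert_minimize(f, a, b, L, iters=50):
            xs, fs = [a, b], [f(a), f(b)]
            for _ in range(iters):
                order = np.argsort(xs)
                xs = [xs[i] for i in order]
                fs = [fs[i] for i in order]
                best, x_new = np.inf, None
                for (x1, f1), (x2, f2) in zip(zip(xs, fs), zip(xs[1:], fs[1:])):
                    r = 0.5 * (f1 + f2) - 0.5 * L * (x2 - x1)  # envelope minimum
                    if r < best:
                        best = r
                        x_new = 0.5 * (x1 + x2) + (f1 - f2) / (2 * L)
                xs.append(x_new)
                fs.append(f(x_new))
            i = int(np.argmin(fs))
            return xs[i], fs[i]

        # Classic test problem: the minimum is about -1.90 near x = 5.15
        print(shubert_minimize(lambda x: np.sin(x) + np.sin(10 * x / 3), 2.7, 7.5, L=6))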

  14. Stress assessment based on EEG univariate features and functional connectivity measures.

    Alonso, J F; Romero, S; Ballester, M R; Antonijoan, R M; Mañanas, M A

    2015-07-01

    The biological response to stress originates in the brain but involves different biochemical and physiological effects. Many common clinical methods to assess stress are based on the presence of specific hormones and on features extracted from different signals, including electrocardiogram, blood pressure, skin temperature, or galvanic skin response. The aim of this paper was to assess stress using EEG-based variables obtained from univariate analysis and functional connectivity evaluation. Two different stressors, the Stroop test and sleep deprivation, were applied to 30 volunteers to find common EEG patterns related to stress effects. Results showed a decrease of the high alpha power (11 to 12 Hz), an increase in the high beta band (23 to 36 Hz, considered a busy brain indicator), and a decrease in the approximate entropy. Moreover, connectivity showed that the high beta coherence and the interhemispheric nonlinear couplings, measured by the cross mutual information function, increased significantly for both stressors, suggesting that useful stress indexes may be obtained from EEG-based features.
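
    Both feature families are available in SciPy. A sketch on synthetic signals (the band edges follow the paper; the channel pairing and segment length are assumptions of this sketch):

        import numpy as np
        from scipy.signal import welch, coherence

        fs = 256.0
        rng = np.random.default_rng(0)
        eeg1, eeg2 = rng.normal(size=(2, 60 * int(fs)))   # two channels, 60 s

        def band_power(x, lo, hi):
            f, pxx = welch(x, fs=fs, nperseg=512)
            sel = (f >= lo) & (f <= hi)
            return np.trapz(pxx[sel], f[sel])

        high_alpha = band_power(eeg1, 11, 12)   # decreases under stress (paper)
        high_beta = band_power(eeg1, 23, 36)    # increases; "busy brain" indicator

        f, cxy = coherence(eeg1, eeg2, fs=fs, nperseg=512)
        beta_coherence = cxy[(f >= 23) & (f <= 36)].mean()   # connectivity feature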

  15. Effect Sizes for Research Univariate and Multivariate Applications

    Grissom, Robert J

    2011-01-01

    Noted for its comprehensive coverage, this greatly expanded new edition now covers the use of univariate and multivariate effect sizes. Many measures and estimators are reviewed along with their application, interpretation, and limitations. Noted for its practical approach, the book features numerous examples using real data for a variety of variables and designs, to help readers apply the material to their own data. Tips on the use of SPSS, SAS, R, and S-Plus are provided. The book's broad disciplinary appeal results from its inclusion of a variety of examples from psychology, medicine, educa

  16. Nuisance forecasting. Univariate modelling and very-short-term forecasting of winter smog episodes; Immissionsprognose. Univariate Modellierung und Kuerzestfristvorhersage von Wintersmogsituationen

    Schlink, U.

    1996-12-31

    The work evaluates the nuisance data provided by the measuring station in the centre of Leipzig (Leipzig-Mitte) during the period from 1980 to 1993, with the aim of developing an algorithm for making very-short-term forecasts of excessive nuisance levels. Forecasting was to be univariate, i.e., based exclusively on the half-hourly readings of SO₂ concentrations taken in the past. As shown by Fourier analysis, there exist three main and mutually independent spectral regions: the high-frequency sector (period < 12 hours) of unstable irregularities, the seasonal sector with the periods of 24 and 12 hours, and the low-frequency sector (period > 24 hours). After breaking the measuring series up into components, the low-frequency sector is termed the trend component, or trend for short. A Kalman filter is used to obtain the components. It was found that smog episodes are most adequately described by the trend component, which is therefore investigated more closely. The phase representation then shows characteristic trajectories of the trends.

  17. Automatic Image Segmentation Using Active Contours with Univariate Marginal Distribution

    I. Cruz-Aceves

    2013-01-01

    This paper presents a novel automatic image segmentation method based on the theory of active contour models and estimation of distribution algorithms. The proposed method uses the univariate marginal distribution model to infer statistical dependencies between the control points on different active contours. These contours have been generated through an alignment process of reference shape priors, in order to increase the exploration and exploitation capabilities relative to different interactive segmentation techniques. The proposed method is applied to the segmentation of the hollow core in microscopic images of photonic crystal fibers, and it is also used to segment the human heart and ventricular areas from datasets of computed tomography and magnetic resonance images, respectively. Moreover, to evaluate the performance of the medical image segmentations compared to regions outlined by experts, a set of similarity measures has been adopted. The experimental results suggest that the proposed image segmentation method outperforms the traditional active contour model and the interactive Tseng method in terms of segmentation accuracy and stability.

  18. Univariate/multivariate genome-wide association scans using data from families and unrelated samples.

    Lei Zhang

    2009-08-01

    As genome-wide association studies (GWAS) are becoming more popular, two approaches, among others, could be considered in order to improve statistical power for identifying genes contributing subtle to moderate effects to human diseases. The first approach is to increase sample size, which could be achieved by combining both unrelated and familial subjects together. The second approach is to jointly analyze multiple correlated traits. In this study, by extending generalized estimating equations (GEEs), we propose a simple approach for performing univariate or multivariate association tests for the combined data of unrelated subjects and nuclear families. In particular, we correct for population stratification by integrating principal component analysis and transmission disequilibrium test strategies. The proposed method allows for multiple siblings as well as missing parental information. Simulation studies show that the proposed test has improved power compared to two popular methods, EIGENSTRAT and FBAT, by analyzing the combined data, while correcting for population stratification. In addition, joint analysis of bivariate traits has improved power over univariate analysis when pleiotropic effects are present. Application to the Genetic Analysis Workshop 16 (GAW16) data sets attests to the feasibility and applicability of the proposed method.
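
    The GEE part of such a test is readily sketched with statsmodels (synthetic data; the exchangeable working correlation, the column names, and the reduction of the PCA/TDT machinery to two principal components are our simplifying assumptions):

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n = 400
        df = pd.DataFrame({
            "family_id": np.repeat(np.arange(100), 4),   # nuclear families of four
            "snp": rng.integers(0, 3, n),                # 0/1/2 allele dosage
            "pc1": rng.normal(size=n),
            "pc2": rng.normal(size=n),
        })
        df["trait"] = 0.3 * df.snp + rng.normal(size=n)

        gee = smf.gee("trait ~ snp + pc1 + pc2", groups="family_id", data=df,
                      cov_struct=sm.cov_struct.Exchangeable(),
                      family=sm.families.Gaussian()).fit()
        print(gee.pvalues["snp"])    # robust Wald test of the SNP effect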

  19. A comparison of bivariate and univariate QTL mapping in livestock populations

    Sorensen Daniel

    2003-11-01

    This study presents a multivariate, variance component-based QTL mapping model implemented via restricted maximum likelihood (REML). The method was applied to investigate bivariate and univariate QTL mapping analyses, using simulated data. Specifically, we report results on the statistical power to detect a QTL and on the precision of parameter estimates using univariate and bivariate approaches. The model and methodology were also applied to study the effectiveness of partitioning the overall genetic correlation between two traits into a component due to many genes of small effect, and one due to the QTL. It is shown that when the QTL has a pleiotropic effect on two traits, a bivariate analysis leads to a higher statistical power of detecting the QTL and to a more precise estimate of the QTL's map position, in particular in the case when the QTL has a small effect on the trait. The increase in power is most marked in cases where the contributions of the QTL and of the polygenic components to the genetic correlation have opposite signs. The bivariate REML analysis can successfully partition the two components contributing to the genetic correlation between traits.

  20. Wind Speed Prediction Using a Univariate ARIMA Model and a Multivariate NARX Model

    Erasmo Cadenas

    2016-02-01

    Two one-step-ahead wind speed forecasting models were compared. A univariate model was developed using a linear autoregressive integrated moving average (ARIMA); this method's performance is well studied for a large number of prediction problems. The other is a multivariate model developed using a nonlinear autoregressive exogenous artificial neural network (NARX). This uses the variables barometric pressure, air temperature, wind direction and solar radiation or relative humidity, as well as delayed wind speed. Both models were developed from two databases from two sites: an hourly average measurements database from La Mata, Oaxaca, Mexico, and a ten-minute average measurements database from Metepec, Hidalgo, Mexico. The main objective was to compare the impact of the various meteorological variables on the performance of the multivariate model of wind speed prediction with respect to the high-performance univariate linear model. The NARX model gave better results, with improvements over the ARIMA model of between 5.5% and 10.6% for the hourly database and of between 2.3% and 12.8% for the ten-minute database for mean absolute error and mean squared error, respectively.
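
    The NARX idea reduces to regressing the next wind speed on lagged wind speed plus lagged exogenous inputs. A sketch with a small neural network on synthetic series (not the authors' architecture or data):

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        T, lags = 2000, 3
        wind = np.abs(rng.normal(6, 2, T))           # stand-in wind speed series
        press = rng.normal(1013, 5, T)               # stand-in exogenous input

        X = np.column_stack([wind[i:T - lags + i] for i in range(lags)]
                            + [press[i:T - lags + i] for i in range(lags)])
        y = wind[lags:]

        model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                             random_state=0).fit(X[:-200], y[:-200])
        mae = np.mean(np.abs(model.predict(X[-200:]) - y[-200:]))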

  1. Influence of microclimatic ammonia levels on productive performance of different broilers' breeds estimated with univariate and multivariate approaches.

    Soliman, Essam S; Moawed, Sherif A; Hassan, Rania A

    2017-08-01

    Broiler litter contains unutilized nitrogen in the form of uric acid that is converted into ammonia, a fact that not only affects poultry performance but also has a negative effect on the health of people around the farm and contributes to environmental degradation. The influence of microclimatic ammonia emissions on Ross and Hubbard broilers reared in different housing systems in two consecutive seasons (fall and winter) was evaluated using a discriminant function analysis to differentiate between the Ross and Hubbard breeds. A total of 400 air samples were collected and analyzed for ammonia levels during the experimental period. Data were analyzed using univariate and multivariate statistical methods. Ammonia levels were significantly higher (p < 0.05) …; no significant differences (p > 0.05) were found between the two farms in body weight, body weight gain, feed intake, feed conversion ratio, or performance index (PI) of broilers. Body weight, weight gain, and PI had increased values (p < 0.05) … broiler breed. Ammonia emissions were positively (although weakly) correlated with the ambient relative humidity (r = 0.383; p …). The test of significance of the discriminant function analysis did not show a classification based on the studied traits, suggesting that they cannot be used as predictor variables. The percentage of correct classification was 52%, improving to 57% after deletion of highly correlated traits. The study revealed that broilers' growth was negatively affected by increased microclimatic ammonia concentrations, and the study recommended analyzing broilers' growth performance data using multivariate discriminant function analysis.

  2. Characteristics of genomic signatures derived using univariate methods and mechanistically anchored functional descriptors for predicting drug- and xenobiotic-induced nephrotoxicity.

    Shi, Weiwei; Bugrim, Andrej; Nikolsky, Yuri; Nikolskya, Tatiana; Brennan, Richard J

    2008-01-01

    The ideal toxicity biomarker combines prediction (it is detected prior to traditional pathological signs of injury), accuracy (high sensitivity and specificity), and a mechanistic relationship to the endpoint measured (biological relevance). Gene expression-based toxicity biomarkers ("signatures") have shown good predictive power and accuracy, but are difficult to interpret biologically. We have compared different statistical methods of feature selection with knowledge-based approaches, using GeneGo's database of canonical pathway maps, to generate gene sets for the classification of renal tubule toxicity. The gene set selection algorithms include four univariate analyses: t-statistics, fold change, B-statistics, and RankProd, and their combination and overlap for the identification of differentially expressed probes. Enrichment analysis following the results of the four univariate analyses, the Hotelling T-square test, and, finally, out-of-bag selection, a variant of cross-validation, were used to identify canonical pathway maps (sets of genes coordinately involved in key biological processes) with classification power. Differentially expressed genes identified by the different statistical univariate analyses all generated reasonably performing classifiers of tubule toxicity. Maps identified by enrichment analysis or Hotelling T-square had lower classification power, but highlighted perturbed lipid homeostasis as a common discriminator of nephrotoxic treatments. The out-of-bag method yielded the best functionally integrated classifier. The map "ephrins signaling" performed comparably to a classifier derived using sparse linear programming, a machine learning algorithm, and represents a signaling network specifically involved in renal tubule development and integrity. Such functional descriptors of toxicity promise to better integrate predictive toxicogenomics with mechanistic analysis, facilitating the interpretation and risk assessment of

  3. Forecasting electric vehicles sales with univariate and multivariate time series models: The case of China.

    Zhang, Yong; Zhong, Miner; Geng, Nana; Jiang, Yunjian

    2017-01-01

    The market demand for electric vehicles (EVs) has increased in recent years. Suitable models are necessary to understand and forecast EV sales. This study presents a singular spectrum analysis (SSA) as a univariate time-series model and vector autoregressive model (VAR) as a multivariate model. Empirical results suggest that SSA satisfactorily indicates the evolving trend and provides reasonable results. The VAR model, which comprised exogenous parameters related to the market on a monthly basis, can significantly improve the prediction accuracy. The EV sales in China, which are categorized into battery and plug-in EVs, are predicted in both short term (up to December 2017) and long term (up to 2020), as statistical proofs of the growth of the Chinese EV industry.
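
    The VAR half of this comparison is easy to sketch with statsmodels; the sketch below fits a VAR on two simulated monthly sales series and produces a 12-month forecast. The series names, lag settings, and data are invented for illustration, and the exogenous market variables the authors include are omitted.

        # Minimal VAR sketch, assuming statsmodels is installed; data are simulated.
        import numpy as np
        import pandas as pd
        from statsmodels.tsa.api import VAR

        rng = np.random.default_rng(0)
        n = 84  # 7 years of monthly observations
        trend = np.linspace(1.0, 4.0, n)
        data = pd.DataFrame({
            "bev_sales":  trend + rng.normal(scale=0.2, size=n),         # hypothetical BEV series
            "phev_sales": 0.5 * trend + rng.normal(scale=0.2, size=n),   # hypothetical PHEV series
        })

        model = VAR(data)
        fit = model.fit(maxlags=6, ic="aic")                       # lag order chosen by AIC
        forecast = fit.forecast(data.values[-fit.k_ar:], steps=12) # 12-month-ahead forecast
        print(fit.k_ar, forecast.shape)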

  4. Lower bounds on the run time of the univariate marginal distribution algorithm on OneMax

    Krejca, Martin S.; Witt, Carsten

    2017-01-01

    The Univariate Marginal Distribution Algorithm (UMDA), a popular estimation of distribution algorithm, is studied from a run time perspective. On the classical OneMax benchmark function, a lower bound of Ω(μ√n + n log n), where μ is the population size, on its expected run time is proved. This is the first direct lower bound on the run time of the UMDA. It is stronger than the bounds that follow from general black-box complexity theory and is matched by the run time of many evolutionary algorithms. The results are obtained through advanced analyses of the stochastic change of the frequencies of bit values maintained by the algorithm, including carefully designed potential functions. These techniques may prove useful in advancing the field of run time analysis for estimation of distribution algorithms in general.
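
    For readers unfamiliar with the algorithm being bounded, a minimal UMDA on OneMax looks roughly as follows; the population sizes and the border handling follow the usual textbook variant with frequency borders [1/n, 1 - 1/n], not necessarily the exact model analyzed in the paper.

        # Minimal UMDA sketch on OneMax; parameter values are illustrative.
        import numpy as np

        def umda_onemax(n=50, lam=100, mu=25, max_iters=2000, seed=1):
            rng = np.random.default_rng(seed)
            p = np.full(n, 0.5)                        # marginal frequencies
            for it in range(max_iters):
                pop = rng.random((lam, n)) < p         # sample lam bit strings
                fitness = pop.sum(axis=1)              # OneMax: number of ones
                best = pop[np.argsort(fitness)[-mu:]]  # select the mu best
                p = best.mean(axis=0)                  # re-estimate marginals
                p = np.clip(p, 1.0 / n, 1.0 - 1.0 / n) # keep frequencies off the borders
                if fitness.max() == n:
                    return it                          # iterations until optimum sampled
            return max_iters

        print(umda_onemax())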

  5. Detection of biomarkers for Hepatocellular Carcinoma using a hybrid univariate gene selection methods

    Abdel Samee Nagwan M

    2012-08-01

    Full Text Available Abstract Background Discovering new biomarkers has a great role in improving early diagnosis of Hepatocellular carcinoma (HCC). The experimental determination of biomarkers needs a lot of time and money. This motivates the use in this work of in-silico prediction of biomarkers to reduce the number of experiments required for detecting new ones. This is achieved by extracting the most representative genes in microarrays of HCC. Results In this work, we provide a method for extracting the differentially expressed genes, up-regulated ones, that can be considered candidate biomarkers in high-throughput microarrays of HCC. We examine the power of several gene selection methods (such as Pearson's correlation coefficient, Cosine coefficient, Euclidean distance, Mutual information and Entropy, with different estimators) in selecting informative genes. A biological interpretation of the highly ranked genes is done using the KEGG (Kyoto Encyclopedia of Genes and Genomes) pathways, ENTREZ and DAVID (Database for Annotation, Visualization, and Integrated Discovery) databases. The top ten genes selected using Pearson's correlation coefficient and the Cosine coefficient contained six genes that have been implicated in cancer (often multiple cancers) genesis in previous studies. A smaller number of genes was obtained by the other methods (4 genes using Mutual information, 3 genes using Euclidean distance and only one gene using Entropy). A better result was obtained by the utilization of a hybrid approach based on intersecting the highly ranked genes in the output of all investigated methods. This hybrid combination yielded seven genes (2 genes for HCC and 5 genes in different types of cancer) in the top ten genes of the list of intersected genes. Conclusions To strengthen the effectiveness of the univariate selection methods, we propose a hybrid approach by intersecting several of these methods in a cascaded manner. This approach surpasses all of the univariate selection methods when
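
    The cascaded intersection idea can be sketched in a few lines: rank genes by several univariate scores and intersect the top-k lists. The data, the value of k, and the two scoring functions below are simplified stand-ins for those examined in the paper.

        # Sketch of hybrid univariate gene selection by top-k intersection; data invented.
        import numpy as np

        rng = np.random.default_rng(2)
        X = rng.normal(size=(40, 500))         # 40 samples x 500 genes
        y = np.r_[np.zeros(20), np.ones(20)]   # tumor vs normal labels
        X[y == 1, :10] += 1.5                  # make the first 10 genes informative

        def pearson_scores(X, y):
            Xc = (X - X.mean(0)) / X.std(0)
            yc = (y - y.mean()) / y.std()
            return np.abs(Xc.T @ yc / len(y))  # |correlation| per gene

        def euclid_scores(X, y):
            # distance between the two class mean profiles, per gene
            return np.abs(X[y == 0].mean(0) - X[y == 1].mean(0))

        k = 50
        tops = [set(np.argsort(s(X, y))[-k:]) for s in (pearson_scores, euclid_scores)]
        print(sorted(set.intersection(*tops)))  # candidate biomarkers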

  6. Trend and forecasting rate of cancer deaths at a public university hospital using univariate modeling

    Ismail, A.; Hassan, Noor I.

    2013-09-01

    Cancer is one of the principal causes of death in Malaysia. This study was performed to determine the pattern of the rate of cancer deaths at a public hospital in Malaysia over an 11-year period from 2001 to 2011, to determine the best-fitted model for forecasting the rate of cancer deaths using univariate modeling, and to forecast the rates for the next two years (2012 to 2013). The medical records of patients with cancer who died at this hospital over the 11-year period were reviewed, with a total of 663 cases. The cancers were classified according to the 10th Revision International Classification of Diseases (ICD-10). Data collected include the socio-demographic background of patients such as registration number, age, gender, ethnicity, ward and diagnosis. Data entry and analysis were accomplished using SPSS 19.0 and Minitab 16.0. The five univariate models used were the Naïve with Trend Model, Average Percent Change Model (APCM), Single Exponential Smoothing, Double Exponential Smoothing and Holt's Method. The overall 11-year rates of cancer deaths at this hospital showed that Malay patients had the highest percentage (88.10%) compared to other ethnic groups, with males (51.30%) higher than females. Lung and breast cancer accounted for the most cancer deaths by gender. About 29.60% of the patients who died due to cancer were aged 61 years old and above. The best univariate model for forecasting the rate of cancer deaths is the Single Exponential Smoothing technique with an alpha of 0.10. The forecast for the rate of cancer deaths shows a horizontal, or flat, value: the forecasted mortality trend remains at 6.84% from January 2012 to December 2013. All government and private sectors and non-governmental organizations need to highlight issues on cancer, especially lung and breast cancers, to the public through campaigns using mass media, electronic media, posters and pamphlets in an attempt to decrease the rate of cancer deaths in Malaysia.
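
    The winning technique reduces to a one-line recurrence; below is a minimal single exponential smoothing sketch with alpha = 0.10, whose forecast is flat, matching the horizontal trend reported above. The input rates are invented, not the hospital's data.

        # Single exponential smoothing as a plain recurrence; inputs are made up.
        def ses(series, alpha=0.10):
            level = series[0]
            for x in series[1:]:
                level = alpha * x + (1 - alpha) * level  # smoothed level update
            return level  # SES forecast is this level, flat for all future periods

        rates = [6.2, 7.1, 6.8, 7.4, 6.5, 6.9]
        print(ses(rates))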

  7. Influence of microclimatic ammonia levels on productive performance of different broilers’ breeds estimated with univariate and multivariate approaches

    Soliman, Essam S.; Moawed, Sherif A.; Hassan, Rania A.

    2017-01-01

    Background and Aim: Birds' litter contains unutilized nitrogen in the form of uric acid that is converted into ammonia, a fact that not only affects poultry performance but also has a negative effect on people's health around the farm and contributes to environmental degradation. The influence of microclimatic ammonia emissions on Ross and Hubbard broilers reared in different housing systems at two consecutive seasons (fall and winter) was evaluated using a discriminant function analysis to differentiate between the Ross and Hubbard breeds. Materials and Methods: A total of 400 air samples were collected and analyzed for ammonia levels during the experimental period. Data were analyzed using univariate and multivariate statistical methods. Results: Ammonia levels were significantly higher (p<0.05) in winter than in fall, whereas no significant differences (p>0.05) were found between the two farms in body weight, body weight gain, feed intake, feed conversion ratio, and performance index (PI) of broilers. Body weight, weight gain, and PI showed significantly increased values (p<0.05) depending on broiler breed. Ammonia emissions were positively (although weakly) correlated with the ambient relative humidity (r=0.383; p<0.05). The test of significance of the discriminant function analysis did not support a classification based on the studied traits, suggesting that they cannot be used as predictor variables. The percentage of correct classification was 52%, improving to 57% after deletion of highly correlated traits. Conclusion: The study revealed that broiler growth was negatively affected by increased microclimatic ammonia concentrations and recommended analyzing broilers' growth performance data using multivariate discriminant function analysis. PMID:28919677

  8. Combinatorial bounds on the α-divergence of univariate mixture models

    Nielsen, Frank; Sun, Ke

    2017-01-01

    We derive lower- and upper-bounds of α-divergence between univariate mixture models with components in the exponential family. Three pairs of bounds are presented in order with increasing quality and increasing computational cost. They are verified empirically through simulated Gaussian mixture models.

  9. Comparison of different Methods for Univariate Time Series Imputation in R

    Moritz, Steffen; Sardá, Alexis; Bartz-Beielstein, Thomas; Zaefferer, Martin; Stork, Jörg

    2015-01-01

    Missing values in datasets are a well-known problem, and there are quite a lot of R packages offering imputation functions. But while imputation in general is well covered within R, it is hard to find functions for the imputation of univariate time series. The problem is that most standard imputation techniques cannot be applied directly: most algorithms rely on inter-attribute correlations, while univariate time series imputation needs to employ time dependencies. This paper provides an overview of ...
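
    As a flavour of the time-dependency-based techniques the paper surveys, the sketch below imputes a univariate series by interpolating over its time index with pandas; the series itself is invented.

        # Time-aware linear interpolation for a univariate series; data invented.
        import numpy as np
        import pandas as pd

        s = pd.Series([1.0, np.nan, np.nan, 4.0, 5.0],
                      index=pd.date_range("2015-01-01", periods=5, freq="D"))
        print(s.interpolate(method="time"))  # fills 2.0 and 3.0 from the neighbours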

  10. Visual classification of very fine-grained sediments: Evaluation through univariate and multivariate statistics

    Hohn, M. Ed; Nuhfer, E.B.; Vinopal, R.J.; Klanderman, D.S.

    1980-01-01

    Classifying very fine-grained rocks through fabric elements provides information about depositional environments, but is subject to the biases of visual taxonomy. To evaluate the statistical significance of an empirical classification of very fine-grained rocks, samples from Devonian shales in four cored wells in West Virginia and Virginia were measured for 15 variables: quartz, illite, pyrite and expandable clays determined by X-ray diffraction; total sulfur, organic content, inorganic carbon, matrix density, bulk density, porosity, and silt, as well as density, sonic travel time, resistivity, and γ-ray response measured from well logs. The four lithologic types comprised: (1) sharply banded shale, (2) thinly laminated shale, (3) lenticularly laminated shale, and (4) nonbanded shale. Univariate and multivariate analyses of variance showed that the lithologic classification reflects significant differences for the variables measured, differences that can be detected independently of stratigraphic effects. Little-known statistical methods found useful in this work included: the multivariate analysis of variance with more than one effect, simultaneous plotting of samples and variables on canonical variates, and the use of parametric ANOVA and MANOVA on ranked data. © 1980 Plenum Publishing Corporation.

  11. Segmentation of Coronary Angiograms Using Gabor Filters and Boltzmann Univariate Marginal Distribution Algorithm

    Fernando Cervantes-Sanchez

    2016-01-01

    Full Text Available This paper presents a novel method for improving the training step of the single-scale Gabor filters by using the Boltzmann univariate marginal distribution algorithm (BUMDA) in X-ray angiograms. Since the single-scale Gabor filters (SSG) are governed by three parameters, the optimal selection of the SSG parameters is highly desirable in order to maximize the detection performance of coronary arteries while reducing the computational time. To obtain the best set of parameters for the SSG, the area (Az) under the receiver operating characteristic curve is used as the fitness function. Moreover, to classify vessel and nonvessel pixels from the Gabor filter response, the interclass variance thresholding method has been adopted. The experimental results using the proposed method obtained the highest detection rate with Az=0.9502 over a training set of 40 images and Az=0.9583 with a test set of 40 images. In addition, the experimental results of vessel segmentation provided an accuracy of 0.944 with the test set of angiograms.

  12. The Use of Univariate and Multivariate Analyses in the Geochemical Exploration, Ravanj Lead Mine, Delijan, Iran

    Mostafa Nejadhadad

    2017-11-01

    Full Text Available A geochemical exploration program was applied to recognize the anomalous geochemical haloes at the Ravanj lead mine, Delijan, Iran. Sampling of unweathered rocks was undertaken across rock exposures on a 10 × 10 meter grid (n = 302) as well as in the accessible parts of underground mine A (n = 42). First, the threshold values of all elements were determined using cut-off values from the exploratory data analysis (EDA) method. Then, for further studies, elements with lognormal distributions (Pb, Zn, Ag, As, Cd, Co, Cu, Sb, S, Sr, Th, Ba, Bi, Fe, Ni and Mn) were selected. Robustness against outliers is achieved by applying a centred log-ratio transformation to address the closure problem with compositional data prior to principal component analysis (PCA). Results of these analyses show that, in the Ravanj deposit, Pb mineralization is characterized by a Pb-Ba-Ag-Sb ± Zn ± Cd association. The supra-mineralization haloes are characterized by barite and tetrahedrite in a Ba-Th-Ag-Cu-Sb-As-Sr association, and the sub-mineralization haloes comprise pyrite and tetrahedrite, probably reflecting a Fe-Cu-As-Bi-Ni-Co-Mo-Mn association. Using univariate and multivariate geostatistical analyses (e.g., EDA and robust PCA), four anomalies were detected and mapped in Block A of the Ravanj deposit. Anomalies 1 and 2 are around the ancient orebodies. Anomaly 3 is located in a thin-bedded limestone-shale intercalation unit that does not show significant mineralization. Drilling of the fourth anomaly suggested a low-grade, non-economic Pb mineralization.
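
    The clr-then-PCA workflow described above can be sketched as follows; the element table is simulated, and the closure-aware transform is the standard row-wise centred log-ratio.

        # Centred log-ratio transform followed by PCA; assumes scikit-learn; data invented.
        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(3)
        X = rng.lognormal(mean=2.0, sigma=0.5, size=(302, 16))   # samples x elements

        clr = np.log(X) - np.log(X).mean(axis=1, keepdims=True)  # row-wise clr transform
        scores = PCA(n_components=3).fit_transform(clr)
        print(scores.shape)  # (302, 3): component scores used for anomaly mapping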

  13. An exercise in model validation: Comparing univariate statistics and Monte Carlo-based multivariate statistics

    Weathers, J.B.; Luck, R.; Weathers, J.W.

    2009-01-01

    The complexity of mathematical models used by practicing engineers is increasing due to the growing availability of sophisticated mathematical modeling tools and ever-improving computational power. For this reason, the need to define a well-structured process for validating these models against experimental results has become a pressing issue in the engineering community. This validation process is partially characterized by the uncertainties associated with the modeling effort as well as the experimental results. The net impact of the uncertainties on the validation effort is assessed through the 'noise level of the validation procedure', which can be defined as an estimate of the 95% confidence uncertainty bounds for the comparison error between actual experimental results and model-based predictions of the same quantities of interest. Although general descriptions associated with the construction of the noise level using multivariate statistics exist in the literature, a detailed procedure outlining how to account for the systematic and random uncertainties is not available. In this paper, the methodology used to derive the covariance matrix associated with the multivariate normal pdf based on random and systematic uncertainties is examined, and a procedure used to estimate this covariance matrix using Monte Carlo analysis is presented. The covariance matrices are then used to construct approximate 95% confidence constant probability contours associated with comparison error results for a practical example. In addition, the example is used to show the drawbacks of using a first-order sensitivity analysis when nonlinear local sensitivity coefficients exist. Finally, the example is used to show the connection between the noise level of the validation exercise calculated using multivariate and univariate statistics.
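
    A minimal Monte Carlo sketch of the covariance-estimation step might look like the following, assuming a shared systematic bias and independent random errors for two quantities of interest; all magnitudes are invented.

        # Monte Carlo covariance of comparison errors and the 95% contour radius.
        import numpy as np
        from scipy.stats import chi2

        rng = np.random.default_rng(4)
        n_mc = 10000
        systematic = rng.normal(0.0, 0.5, size=(n_mc, 1))         # shared bias draw
        random_err = rng.normal(0.0, [0.3, 0.4], size=(n_mc, 2))  # per-quantity noise
        errors = systematic + random_err                          # comparison errors

        cov = np.cov(errors, rowvar=False)    # estimated covariance matrix
        r2 = chi2.ppf(0.95, df=2)             # squared Mahalanobis radius of 95% contour
        print(cov, r2)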

  14. An exercise in model validation: Comparing univariate statistics and Monte Carlo-based multivariate statistics

    Weathers, J.B. [Shock, Noise, and Vibration Group, Northrop Grumman Shipbuilding, P.O. Box 149, Pascagoula, MS 39568 (United States)], E-mail: James.Weathers@ngc.com; Luck, R. [Department of Mechanical Engineering, Mississippi State University, 210 Carpenter Engineering Building, P.O. Box ME, Mississippi State, MS 39762-5925 (United States)], E-mail: Luck@me.msstate.edu; Weathers, J.W. [Structural Analysis Group, Northrop Grumman Shipbuilding, P.O. Box 149, Pascagoula, MS 39568 (United States)], E-mail: Jeffrey.Weathers@ngc.com

    2009-11-15

    The complexity of mathematical models used by practicing engineers is increasing due to the growing availability of sophisticated mathematical modeling tools and ever-improving computational power. For this reason, the need to define a well-structured process for validating these models against experimental results has become a pressing issue in the engineering community. This validation process is partially characterized by the uncertainties associated with the modeling effort as well as the experimental results. The net impact of the uncertainties on the validation effort is assessed through the 'noise level of the validation procedure', which can be defined as an estimate of the 95% confidence uncertainty bounds for the comparison error between actual experimental results and model-based predictions of the same quantities of interest. Although general descriptions associated with the construction of the noise level using multivariate statistics exist in the literature, a detailed procedure outlining how to account for the systematic and random uncertainties is not available. In this paper, the methodology used to derive the covariance matrix associated with the multivariate normal pdf based on random and systematic uncertainties is examined, and a procedure used to estimate this covariance matrix using Monte Carlo analysis is presented. The covariance matrices are then used to construct approximate 95% confidence constant probability contours associated with comparison error results for a practical example. In addition, the example is used to show the drawbacks of using a first-order sensitivity analysis when nonlinear local sensitivity coefficients exist. Finally, the example is used to show the connection between the noise level of the validation exercise calculated using multivariate and univariate statistics.

  15. Forecasting electricity spot-prices using linear univariate time-series models

    Cuaresma, Jesus Crespo; Hlouskova, Jaroslava; Kossmeier, Stephan; Obersteiner, Michael

    2004-01-01

    This paper studies the forecasting abilities of a battery of univariate models on hourly electricity spot prices, using data from the Leipzig Power Exchange. The specifications studied include autoregressive models, autoregressive-moving average models and unobserved component models. The results show that specifications where each hour of the day is modelled separately present uniformly better forecasting properties than specifications for the whole time series, and that the inclusion of simple probabilistic processes for the arrival of extreme price events can lead to improvements in the forecasting abilities of univariate models for electricity spot prices. (Author)

  16. Investigating univariate temporal patterns for intrinsic connectivity networks based on complexity and low-frequency oscillation: a test-retest reliability study.

    Wang, X; Jiao, Y; Tang, T; Wang, H; Lu, Z

    2013-12-19

    Intrinsic connectivity networks (ICNs) are composed of spatial components and time courses. The spatial components of ICNs have been found to show moderate-to-high reliability. As far as we know, few studies have focused on the reliability of the temporal patterns of ICNs based on their individual time courses. The goals of this study were twofold: to investigate the test-retest reliability of temporal patterns for ICNs, and to analyze these informative univariate metrics. Additionally, a correlation analysis was performed to enhance interpretability. Our study included three datasets: (a) short- and long-term scans, (b) multi-band echo-planar imaging (mEPI), and (c) eyes open or closed. Using dual regression, we obtained the time courses of ICNs for each subject. To produce temporal patterns for ICNs, we applied two categories of univariate metrics: network-wise complexity and network-wise low-frequency oscillation. Furthermore, we validated the test-retest reliability for each metric. The network-wise temporal patterns for most ICNs (especially for the default mode network, DMN) exhibited moderate-to-high reliability and reproducibility under different scan conditions. Network-wise complexity for the DMN exhibited fair reliability (ICC<0.5) based on eyes-closed sessions. Notably, our results support mEPI as a useful method with high reliability and reproducibility. In addition, these temporal patterns carried physiological meaning, and certain temporal patterns were correlated with the node strength of the corresponding ICN. Overall, network-wise temporal patterns of ICNs were reliable and informative and could be complementary to spatial patterns of ICNs for further study. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.

  17. Cellulose I crystallinity determination using FT-Raman spectroscopy : univariate and multivariate methods

    Umesh P. Agarwal; Richard S. Reiner; Sally A. Ralph

    2010-01-01

    Two new methods based on FT–Raman spectroscopy, one simple, based on band intensity ratio, and the other using a partial least squares (PLS) regression model, are proposed to determine cellulose I crystallinity. In the simple method, crystallinity in cellulose I samples was determined based on univariate regression that was first developed using the Raman band...

  18. Combinatorial bounds on the α-divergence of univariate mixture models

    Nielsen, Frank

    2017-06-20

    We derive lower- and upper-bounds of α-divergence between univariate mixture models with components in the exponential family. Three pairs of bounds are presented in order with increasing quality and increasing computational cost. They are verified empirically through simulated Gaussian mixture models. The presented methodology generalizes to other divergence families relying on Hellinger-type integrals.

  19. A comparison of multivariate and univariate time series approaches to modelling and forecasting emergency department demand in Western Australia.

    Aboagye-Sarfo, Patrick; Mai, Qun; Sanfilippo, Frank M; Preen, David B; Stewart, Louise M; Fatovich, Daniel M

    2015-10-01

    To develop multivariate vector-ARMA (VARMA) forecast models for predicting emergency department (ED) demand in Western Australia (WA) and compare them to the benchmark univariate autoregressive moving average (ARMA) and Winters' models. Seven-year monthly WA state-wide public hospital ED presentation data from 2006/07 to 2012/13 were modelled. Graphical and VARMA modelling methods were used for descriptive analysis and model fitting. The VARMA models were compared to the benchmark univariate ARMA and Winters' models to determine their accuracy in predicting ED demand. The best models were evaluated using error correction methods for accuracy. Descriptive analysis of all the dependent variables showed an increasing pattern of ED use with seasonal trends over time. The VARMA models provided more precise and accurate forecasts, with smaller confidence intervals and better measures of accuracy in predicting ED demand in WA than the ARMA and Winters' models. VARMA models are a reliable forecasting method to predict ED demand for strategic planning and resource allocation. While the ARMA models are a closely competing alternative, they underestimated future ED demand. Copyright © 2015 Elsevier Inc. All rights reserved.
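
    A univariate ARMA benchmark of the kind used here is a few lines with statsmodels; the monthly demand series below is simulated, and the (2, 0, 1) order is arbitrary rather than the paper's fitted order.

        # Univariate ARMA benchmark sketch, assuming statsmodels; data simulated.
        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(5)
        # random walk plus a 12-month seasonal cycle as a stand-in for ED demand
        y = 100 + np.cumsum(rng.normal(size=84)) \
            + 10 * np.sin(np.arange(84) * 2 * np.pi / 12)

        fit = ARIMA(y, order=(2, 0, 1)).fit()
        print(fit.forecast(steps=12))  # 12-month-ahead demand forecast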

  20. Modeling the potential risk factors of bovine viral diarrhea prevalence in Egypt using univariable and multivariable logistic regression analyses

    Abdelfattah M. Selim

    2018-03-01

    Full Text Available Aim: The present cross-sectional study was conducted to determine the seroprevalence and potential risk factors associated with Bovine viral diarrhea virus (BVDV) disease in cattle and buffaloes in Egypt, to model the potential risk factors associated with the disease using logistic regression (LR) models, and to fit the best predictive model for the current data. Materials and Methods: A total of 740 blood samples were collected between November 2012 and March 2013 from animals aged between 6 months and 3 years. The potential risk factors studied were species, age, sex, and herd location. All serum samples were examined with an indirect ELISA test for antibody detection. Data were analyzed with different statistical approaches such as the Chi-square test, odds ratios (OR), univariable LR models, and multivariable LR models. Results: Results revealed a non-significant association between being seropositive for BVDV and all risk factors, except for species of animal. Seroprevalence percentages were 40% and 23% for cattle and buffaloes, respectively. ORs for all categories were close to one, with the highest OR (2.237) for cattle relative to buffaloes. Likelihood ratio tests showed a significant drop of the -2LL from the univariable LR to the multivariable LR model. Conclusion: There was evidence of high seroprevalence of BVDV among cattle as compared with buffaloes, with the possibility of infection in different age groups of animals. In addition, the multivariable LR model was proved to provide more information for association and prediction purposes relative to univariable LR models and Chi-square tests when more than one predictor is available.
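
    A univariable LR model and its odds ratio can be sketched with statsmodels as below; the serology data are fabricated so that cattle are seropositive at roughly 40% and buffaloes at 23%, so the estimated OR should land near the reported value of about 2.2.

        # Univariable logistic regression and odds ratio; assumes statsmodels; data fabricated.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(6)
        species = rng.integers(0, 2, size=200)      # 0 = buffalo, 1 = cattle
        p = np.where(species == 1, 0.40, 0.23)      # species-specific seropositivity
        seropos = rng.random(200) < p

        X = sm.add_constant(species.astype(float))
        fit = sm.Logit(seropos.astype(float), X).fit(disp=0)
        print(np.exp(fit.params[1]))                # odds ratio, cattle vs buffalo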

  1. Study of Ecotype and Sowing Date Interaction in Cumin (Cuminum cyminum L. using Different Univariate Stability Parameters

    J Ghanbari

    2017-06-01

    Full Text Available Introduction Cumin is one of the most important medicinal plants in Iran and is today the second most popular spice in the world after black pepper. Cumin is an aromatic plant used as a flavoring and seasoning agent in foods. Cumin seeds have been found to possess significant biological activity and have been used for treatment of toothache, dyspepsia, diarrhoea, epilepsy and jaundice. Knowledge of GEI is advantageous for obtaining a cultivar that gives consistently high yield in a broad range of environments and for increasing the efficiency of breeding programs and the selection of the best genotypes. A genotype that has stable trait expression across environments contributes little to GEI, and its performance should be more predictable from the main effects. Several statistical methods have been proposed for stability analysis, with the aim of explaining the information contained in the GEI. The regression technique was proposed by Finlay and Wilkinson (1963) and was improved by Eberhart and Russell (1966). Generally, genotype stability is estimated by the slope of, and deviation from, the regression line for each of the genotypes. This is a popular method in stability analysis and has been applied in many crops. Non-parametric methods (rank mean (R), standard deviation rank (SDR) and yield index ratio (YIR)), environmental variance (S2i), genotypic variation coefficient (CVi), Wricke's ecovalence and Shukla's stability variance (Shukla, 1972) have been used to determine genotype-by-environment interaction in many studies. This study aimed to evaluate the ecotype × sowing date interaction in cumin and the genotypic response of cumin to different sowing dates using univariate stability parameters. Materials and Methods In order to study the ecotype × sowing date interaction, different cumin ecotypes (Semnan, Fars, Yazd, Golestan, Khorasan-Razavi, Khorasan-Shomali, Khorasan-Jonoubi, Isfahan and Kerman) were evaluated at 5 different sowing dates (26th December, 10th January
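
    The Finlay-Wilkinson regression parameter mentioned above is straightforward to compute: regress each ecotype's yield on the environmental mean and read off the slope and the deviation variance. The yield matrix below is invented.

        # Finlay-Wilkinson stability sketch; the 9 x 5 yield table is simulated.
        import numpy as np

        rng = np.random.default_rng(7)
        yields = rng.normal(2.0, 0.3, size=(9, 5))      # 9 ecotypes x 5 sowing dates
        env_index = yields.mean(axis=0)                 # environmental mean yield

        for g in range(yields.shape[0]):
            coef = np.polyfit(env_index, yields[g], 1)  # [slope b_i, intercept]
            resid = yields[g] - np.polyval(coef, env_index)
            print(f"ecotype {g}: b={coef[0]:+.2f}, S2d={resid.var():.4f}")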

  2. R package imputeTestbench to compare imputations methods for univariate time series

    Bokde, Neeraj; Kulat, Kishore; Beck, Marcus W; Asencio-Cortés, Gualberto

    2016-01-01

    This paper describes the R package imputeTestbench that provides a testbench for comparing imputation methods for missing data in univariate time series. The imputeTestbench package can be used to simulate the amount and type of missing data in a complete dataset and compare filled data using different imputation methods. The user has the option to simulate missing data by removing observations completely at random or in blocks of different sizes. Several default imputation methods are includ...

  3. Univariate time series modeling and an application to future claims amount in SOCSO's invalidity pension scheme

    Chek, Mohd Zaki Awang; Ahmad, Abu Bakar; Ridzwan, Ahmad Nur Azam Ahmad; Jelas, Imran Md.; Jamal, Nur Faezah; Ismail, Isma Liana; Zulkifli, Faiz; Noor, Syamsul Ikram Mohd

    2012-09-01

    The main objective of this study is to forecast the future claims amount of the Invalidity Pension Scheme (IPS). All data were derived from SOCSO annual reports from 1972 to 2010. These claims comprise all claim amounts from the 7 benefits offered by SOCSO, namely Invalidity Pension, Invalidity Grant, Survivors Pension, Constant Attendance Allowance, Rehabilitation, Funeral and Education. Predictions of the future claims of the Invalidity Pension Scheme will be made using univariate forecasting models to predict the future claims among the workforce in Malaysia.

  4. Validated univariate and multivariate spectrophotometric methods for the determination of pharmaceuticals mixture in complex wastewater

    Riad, Safaa M.; Salem, Hesham; Elbalkiny, Heba T.; Khattab, Fatma I.

    2015-04-01

    Five accurate, precise, and sensitive univariate and multivariate spectrophotometric methods were developed for the simultaneous determination of a ternary mixture containing Trimethoprim (TMP), Sulphamethoxazole (SMZ) and Oxytetracycline (OTC) in wastewater samples collected from different sites (either production wastewater or livestock wastewater) after their solid phase extraction using OASIS HLB cartridges. In the univariate methods, OTC was determined at its λmax 355.7 nm (0D), while TMP and SMZ were determined by three different univariate methods. Method (A) is based on the successive spectrophotometric resolution technique (SSRT); the technique starts with the ratio subtraction method followed by the ratio difference method for the determination of TMP and SMZ. Method (B) is the successive derivative ratio technique (SDR). Method (C) is mean centering of the ratio spectra (MCR). The developed multivariate methods are principal component regression (PCR) and partial least squares (PLS). The specificity of the developed methods was investigated by analyzing laboratory-prepared mixtures containing different ratios of the three drugs. The obtained results were statistically compared with those obtained by the official methods, showing no significant difference with respect to accuracy and precision at p = 0.05.

  5. Handbook of univariate and multivariate data analysis and interpretation with SPSS

    Ho, Robert

    2006-01-01

    Many statistics texts tend to focus more on the theory and mathematics underlying statistical tests than on their applications and interpretation. This can leave readers with little understanding of how to apply statistical tests or how to interpret their findings. While the SPSS statistical software has done much to alleviate the frustrations of social science professionals and students who must analyze data, they still face daunting challenges in selecting the proper tests, executing the tests, and interpreting the test results.With emphasis firmly on such practical matters, this handbook se

  6. High order depletion sensitivity analysis

    Naguib, K.; Adib, M.; Morcos, H.N.

    2002-01-01

    A high-order depletion sensitivity method was applied to calculate the sensitivities of the build-up of actinides in irradiated fuel due to cross-section uncertainties. An iteration method based on Taylor series expansion was applied to construct a stationary principle, from which all orders of perturbations were calculated. The irradiated EK-10 and MTR-20 fuels at their maximum burn-ups of 25% and 65%, respectively, were considered for sensitivity analysis. The results show that, in the case of the EK-10 fuel (low burn-up), first-order sensitivity was found to be enough to achieve an accuracy of 1%, while in the case of the MTR-20 fuel (high burn-up) the fifth order was found to provide 3% accuracy. A computer code, SENS, was developed to perform the required calculations.

  7. QRS complex detection based on continuous density hidden Markov models using univariate observations

    Sotelo, S.; Arenas, W.; Altuve, M.

    2018-04-01

    In the electrocardiogram (ECG), the detection of QRS complexes is a fundamental step in the ECG signal processing chain, since it allows the determination of other characteristic waves of the ECG and provides information about heart rate variability. In this work, an automatic QRS complex detector based on continuous density hidden Markov models (HMMs) is proposed. HMMs were trained using univariate observation sequences taken either from QRS complexes or their derivatives. The detection approach is based on comparing the log-likelihood of the observation sequence against a fixed threshold. A sliding window was used to obtain the observation sequence to be evaluated by the model. The threshold was optimized using receiver operating characteristic curves. Sensitivity (Sen), specificity (Spc) and the F1 score were used to evaluate the detection performance. The approach was validated using ECG recordings from the MIT-BIH Arrhythmia database. A 6-fold cross-validation shows that the best detection performance was achieved with a 2-state HMM trained with QRS complex sequences (Sen = 0.668, Spc = 0.360 and F1 = 0.309). We concluded that these univariate sequences provide enough information to characterize the QRS complex dynamics from HMMs. Future work is directed to the use of multivariate observations to increase the detection performance.
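
    The detection rule can be sketched with the third-party hmmlearn package as below; the window length, state count, training data, and threshold are illustrative placeholders (the study tunes the threshold via ROC curves).

        # Log-likelihood thresholding with a Gaussian HMM; assumes hmmlearn is installed.
        import numpy as np
        from hmmlearn.hmm import GaussianHMM

        rng = np.random.default_rng(8)
        train = [rng.normal(size=(40, 1)) for _ in range(50)]  # stand-in QRS windows
        X = np.vstack(train)
        lengths = [len(w) for w in train]

        hmm = GaussianHMM(n_components=2, covariance_type="diag", n_iter=50)
        hmm.fit(X, lengths)                     # train on concatenated sequences

        window = rng.normal(size=(40, 1))       # sliding-window observation sequence
        threshold = -60.0                       # placeholder; tuned via ROC in practice
        print("QRS detected" if hmm.score(window) > threshold else "no QRS")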

  8. Univariate and multivariate forecasting of hourly solar radiation with artificial intelligence techniques

    Sfetsos, A. [7 Pirsou Str., Athens (Greece); Coonick, A.H. [Imperial Coll. of Science Technology and Medicine, Dept. of Electrical and Electronic Engineering, London (United Kingdom)

    2000-07-01

    This paper introduces a new approach for the forecasting of mean hourly global solar radiation received by a horizontal surface. In addition to the traditional linear methods, several artificial-intelligence-based techniques are studied. These include linear, feed-forward, recurrent Elman and Radial Basis neural networks alongside the adaptive neuro-fuzzy inference scheme. The problem is examined initially for the univariate case, and is extended to include additional meteorological parameters in the process of estimating the optimum model. The results indicate that the developed artificial intelligence models predict the solar radiation time series more effectively compared to the conventional procedures based on the clearness index. The forecasting ability of some models can be further enhanced with the use of additional meteorological parameters. (Author)

  9. Robust assignment of cancer subtypes from expression data using a uni-variate gene expression average as classifier

    Lauss, Martin; Frigyesi, Attila; Ryden, Tobias; Höglund, Mattias

    2010-01-01

    Genome-wide gene expression data are a rich source for the identification of gene signatures suitable for clinical purposes, and a number of statistical algorithms have been described for both identification and evaluation of such signatures. Some employed algorithms are fairly complex and hence sensitive to over-fitting, whereas others are simpler and more straightforward. Here we present a new type of simple algorithm based on ROC analysis and the use of metagenes that we believe will be a good complement to existing algorithms. The basis for the proposed approach is the use of metagenes, instead of collections of individual genes, and a feature selection using AUC values obtained by ROC analysis. Each gene in a data set is assigned an AUC value relative to the tumor class under investigation, and the genes are ranked according to these values. Metagenes are then formed by calculating the mean expression level for an increasing number of ranked genes, and the metagene expression value that optimally discriminates tumor classes in the training set is used for classification of new samples. The performance of the metagene is then evaluated using LOOCV and balanced accuracies. We show that the simple univariate gene expression average algorithm performs as well as several alternative algorithms, such as discriminant analysis, and more complex approaches, such as SVM and neural networks. The R package rocc is freely available at http://cran.r-project.org/web/packages/rocc/index.html
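
    The metagene construction reduces to ranking genes by per-gene AUC and averaging the top of the list; the sketch below uses scikit-learn on simulated data, with the cut-off of 50 genes chosen arbitrarily.

        # AUC-ranked metagene sketch; assumes scikit-learn; expression data simulated.
        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(9)
        X = rng.normal(size=(60, 1000))          # samples x genes
        y = np.r_[np.zeros(30), np.ones(30)]     # tumor class labels
        X[y == 1, :20] += 1.0                    # make 20 genes informative

        aucs = np.array([roc_auc_score(y, X[:, j]) for j in range(X.shape[1])])
        top = np.argsort(aucs)[-50:]             # the 50 highest-AUC genes
        metagene = X[:, top].mean(axis=1)        # mean expression, one value per sample
        print(roc_auc_score(y, metagene))        # discriminative power of the metagene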

  10. Application of Raman Spectroscopy and Univariate Modelling As a Process Analytical Technology for Cell Therapy Bioprocessing

    Baradez, Marc-Olivier; Biziato, Daniela; Hassan, Enas; Marshall, Damian

    2018-01-01

    Cell therapies offer unquestionable promises for the treatment, and in some cases even the cure, of complex diseases. As we start to see more of these therapies gaining market authorization, attention is turning to the bioprocesses used for their manufacture, in particular the challenge of gaining higher levels of process control to help regulate cell behavior, manage process variability, and deliver product of a consistent quality. Many processes already incorporate the measurement of key markers such as nutrient consumption, metabolite production, and cell concentration, but these are often performed off-line and only at set time points in the process. Having the ability to monitor these markers in real-time using in-line sensors would offer significant advantages, allowing faster decision-making and a finer level of process control. In this study, we use Raman spectroscopy as an in-line optical sensor for bioprocess monitoring of an autologous T-cell immunotherapy model produced in a stirred tank bioreactor system. Using reference datasets generated on a standard bioanalyzer, we develop chemometric models from the Raman spectra for glucose, glutamine, lactate, and ammonia. These chemometric models can accurately monitor donor-specific increases in nutrient consumption and metabolite production as the primary T-cells transition from a recovery phase and begin proliferating. Using a univariate modeling approach, we then show how changes in peak intensity within the Raman spectra can be correlated with cell concentration and viability. These models, which act as surrogate markers, can be used to monitor cell behavior including cell proliferation rates, proliferative capacity, and transition of the cells to a quiescent phenotype. Finally, using the univariate models, we also demonstrate how Raman spectroscopy can be applied for real-time monitoring. The ability to measure these key parameters using an in-line Raman optical sensor makes it possible to have immediate

  11. Application of Raman Spectroscopy and Univariate Modelling As a Process Analytical Technology for Cell Therapy Bioprocessing.

    Baradez, Marc-Olivier; Biziato, Daniela; Hassan, Enas; Marshall, Damian

    2018-01-01

    Cell therapies offer unquestionable promises for the treatment, and in some cases even the cure, of complex diseases. As we start to see more of these therapies gaining market authorization, attention is turning to the bioprocesses used for their manufacture, in particular the challenge of gaining higher levels of process control to help regulate cell behavior, manage process variability, and deliver product of a consistent quality. Many processes already incorporate the measurement of key markers such as nutrient consumption, metabolite production, and cell concentration, but these are often performed off-line and only at set time points in the process. Having the ability to monitor these markers in real-time using in-line sensors would offer significant advantages, allowing faster decision-making and a finer level of process control. In this study, we use Raman spectroscopy as an in-line optical sensor for bioprocess monitoring of an autologous T-cell immunotherapy model produced in a stirred tank bioreactor system. Using reference datasets generated on a standard bioanalyzer, we develop chemometric models from the Raman spectra for glucose, glutamine, lactate, and ammonia. These chemometric models can accurately monitor donor-specific increases in nutrient consumption and metabolite production as the primary T-cells transition from a recovery phase and begin proliferating. Using a univariate modeling approach, we then show how changes in peak intensity within the Raman spectra can be correlated with cell concentration and viability. These models, which act as surrogate markers, can be used to monitor cell behavior including cell proliferation rates, proliferative capacity, and transition of the cells to a quiescent phenotype. Finally, using the univariate models, we also demonstrate how Raman spectroscopy can be applied for real-time monitoring. The ability to measure these key parameters using an in-line Raman optical sensor makes it possible to have immediate

  12. Application of Raman Spectroscopy and Univariate Modelling As a Process Analytical Technology for Cell Therapy Bioprocessing

    Marc-Olivier Baradez

    2018-03-01

    Full Text Available Cell therapies offer unquestionable promises for the treatment, and in some cases even the cure, of complex diseases. As we start to see more of these therapies gaining market authorization, attention is turning to the bioprocesses used for their manufacture, in particular the challenge of gaining higher levels of process control to help regulate cell behavior, manage process variability, and deliver product of a consistent quality. Many processes already incorporate the measurement of key markers such as nutrient consumption, metabolite production, and cell concentration, but these are often performed off-line and only at set time points in the process. Having the ability to monitor these markers in real-time using in-line sensors would offer significant advantages, allowing faster decision-making and a finer level of process control. In this study, we use Raman spectroscopy as an in-line optical sensor for bioprocess monitoring of an autologous T-cell immunotherapy model produced in a stirred tank bioreactor system. Using reference datasets generated on a standard bioanalyzer, we develop chemometric models from the Raman spectra for glucose, glutamine, lactate, and ammonia. These chemometric models can accurately monitor donor-specific increases in nutrient consumption and metabolite production as the primary T-cells transition from a recovery phase and begin proliferating. Using a univariate modeling approach, we then show how changes in peak intensity within the Raman spectra can be correlated with cell concentration and viability. These models, which act as surrogate markers, can be used to monitor cell behavior including cell proliferation rates, proliferative capacity, and transition of the cells to a quiescent phenotype. Finally, using the univariate models, we also demonstrate how Raman spectroscopy can be applied for real-time monitoring. The ability to measure these key parameters using an in-line Raman optical sensor makes it possible

  13. Impact of BCL2 and p53 on postmastectomy radiotherapy response in high-risk breast cancer. A subgroup analysis of DBCG82 b&c

    Kyndi, Marianne; Sørensen, Flemming Brandt; Knudsen, Helle

    2008-01-01

    PURPOSE: To examine p53 and BCL2 expression in high-risk breast cancer patients randomized to postmastectomy radiotherapy (PMRT). PATIENTS AND METHODS: The present analysis included 1 000 of 3 083 high-risk breast cancer patients randomly assigned to PMRT in the DBCG82 b&c studies. Tissue ... tests, Kaplan-Meier probability plots, Log-rank test, and Cox univariate and multivariate regression analyses. RESULTS: p53 accumulation was not significantly associated with increased overall mortality, DM or LRR probability in univariate or multivariate Cox regression analyses. Kaplan-Meier probability plots showed a significantly improved overall survival after PMRT for the BCL2 positive subgroup, whereas practically no survival improvement was seen after PMRT for the BCL2 negative subgroup. In multivariate analysis of OS, however, no significant interaction was found between BCL2 ...

  14. Impact of BCL2 and p53 on postmastectomy radiotherapy response in high-risk breast cancer. A subgroup analysis of DBCG82 b

    Kyndi, M.; Sorensen, F.B.; Alsner, J.

    2008-01-01

    Purpose. To examine p53 and BCL2 expression in high-risk breast cancer patients randomized to postmastectomy radiotherapy (PMRT). Patients and methods. The present analysis included 1000 of 3 083 high-risk breast cancer patients randomly assigned to PMRT in the DBCG82 b&c studies. Tissue microarray ..., Kaplan-Meier probability plots, Log-rank test, and Cox univariate and multivariate regression analyses. Results. p53 accumulation was not significantly associated with increased overall mortality, DM or LRR probability in univariate or multivariate Cox regression analyses. Kaplan-Meier probability plots showed a significantly improved overall survival after PMRT for the BCL2 positive subgroup, whereas practically no survival improvement was seen after PMRT for the BCL2 negative subgroup. In multivariate analysis of OS, however, no significant interaction was found between BCL2 ...

  15. Comparison of univariate and multivariate calibration for the determination of micronutrients in pellets of plant materials by laser induced breakdown spectrometry

    Batista Braga, Jez Willian; Trevizan, Lilian Cristina; Nunes, Lidiane Cristina; Aparecida Rufini, Iolanda; Santos, Dario; Krug, Francisco Jose

    2010-01-01

    The application of laser-induced breakdown spectrometry (LIBS) for the direct analysis of plant materials is a great challenge that still needs effort for its development and validation. To this end, a series of experimental approaches has been carried out to show that LIBS can be used as an alternative to methods based on wet acid digestion for the analysis of agricultural and environmental samples. The large amount of information provided by LIBS spectra for these complex samples increases the difficulty of selecting the most appropriate wavelengths for each analyte. Some applications have suggested that improvements in both accuracy and precision can be achieved by applying multivariate calibration to LIBS data, compared to univariate regression developed with line emission intensities. In the present work, the performance of univariate and multivariate calibration, the latter based on partial least squares regression (PLSR), was compared for the analysis of pellets of plant materials made from an appropriate mixture of cryogenically ground samples with cellulose as the binding agent. The development of a specific PLSR model for each analyte and the selection of spectral regions containing only lines of the analyte of interest gave the best conditions for the analysis. In this particular application, the two approaches showed similar performance, but PLSR seemed to be more robust due to a lower occurrence of outliers in comparison to the univariate method. The data suggest that efforts dealing with sample presentation and the fitness of standards for LIBS analysis must be made in order to fulfill the boundary conditions for matrix-independent development and validation.
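
    The two calibration routes being compared can be contrasted in a few lines: a univariate line fitted to one emission channel versus PLS on the full spectrum. The spectra, channel index, and component count below are simulated and arbitrary.

        # Univariate line vs PLS regression on full spectra; assumes scikit-learn.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(10)
        conc = rng.uniform(1, 100, size=30)                 # analyte concentrations
        spectra = np.outer(conc, rng.random(200)) \
                  + rng.normal(scale=5, size=(30, 200))     # 30 spectra x 200 channels

        # univariate: intensity of one "analyte line" regressed on concentration
        slope, intercept = np.polyfit(spectra[:, 10], conc, 1)

        # multivariate: PLS regression on all 200 channels
        pls = PLSRegression(n_components=3).fit(spectra, conc)
        print(slope, pls.score(spectra, conc))              # score() returns R^2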

  16. Improving the performance of univariate control charts for abnormal detection and classification

    Yiakopoulos, Christos; Koutsoudaki, Maria; Gryllias, Konstantinos; Antoniadis, Ioannis

    2017-03-01

    Bearing failures in rotating machinery can cause machine breakdown and economic loss if no effective actions are taken in time. Therefore, it is of prime importance to detect accurately the presence of faults, especially at an early stage, to prevent subsequent damage and reduce costly downtime. Machinery fault diagnosis follows a roadmap of data acquisition, feature extraction and diagnostic decision making, in which mechanical vibration fault feature extraction is the foundation and the key to obtaining an accurate diagnostic result. A challenge in this area is the selection of the most sensitive features for various types of fault, especially when the characteristics of failures are difficult to extract. Thus, a plethora of complex data-driven fault diagnosis methods are fed by prominent features, which are extracted and reduced through traditional or modern algorithms. Since most of the available datasets are captured during normal operating conditions, over the last decade a number of novelty detection methods, able to work when only normal data are available, have been developed. In this study, a hybrid method combining univariate control charts and a feature extraction scheme is introduced, focusing on abnormal change detection and classification, under the assumption that measurements under normal operating conditions of the machinery are available. The feature extraction method integrates morphological operators and Morlet wavelets. The effectiveness of the proposed methodology is validated on two different experimental cases with bearing faults, demonstrating that the proposed approach can improve the fault detection and classification performance of conventional control charts.

  17. The use of principal components and univariate charts to control multivariate processes

    Marcela A. G. Machado

    2008-04-01

    Full Text Available In this article, we evaluate the performance of the T² chart based on principal components (PC chart) and of the simultaneous univariate control charts based on the original variables (SU X̄ charts) or on the principal components (SUPC charts). The main reason to consider the PC chart lies in the dimensionality reduction. However, depending on the disturbance and on the way the original variables are related, the chart is very slow in signaling, except when all variables are negatively correlated and the principal component is wisely selected. Comparing the SU X̄, the SUPC and the T² charts, we conclude that the SU X̄ charts (SUPC charts) have a better overall performance when the variables are positively (negatively) correlated. We also develop the expression to obtain the power of two S² charts designed for monitoring the covariance matrix. These joint S² charts are, in the majority of cases, more efficient than the generalized variance chart.
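
    The statistic behind such charts is the Hotelling T², the squared Mahalanobis distance of a new observation from the in-control mean; a minimal sketch with simulated Phase-I data follows.

        # Hotelling T^2 statistic for one new observation; Phase-I data simulated.
        import numpy as np

        rng = np.random.default_rng(11)
        phase1 = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=200)
        mean = phase1.mean(axis=0)
        inv_cov = np.linalg.inv(np.cov(phase1, rowvar=False))

        x_new = np.array([2.5, -1.0])
        d = x_new - mean
        t2 = d @ inv_cov @ d      # compare against the chart's control limit
        print(t2)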

  18. A comparison of univariate and multivariate methods for analyzing clinal variation in an invasive species

    Edwards, K.R.; Bastlová, D.; Edwards-Jonášová, Magda; Květ, J.

    2011-01-01

    Roč. 674, č. 1 (2011), s. 119-131 ISSN 0018-8158 Institutional research plan: CEZ:AV0Z60870520 Keywords : common garden * life history traits * local adaptation * principal components analysis * purple loosestrife * redundancy analysis Subject RIV: EH - Ecology, Behaviour Impact factor: 1.784, year: 2011 http://www.springerlink.com/content/71r10n3367m98jxl/

  19. Dynamical analysis of highly excited molecular spectra

    Kellman, M.E. [Univ. of Oregon, Eugene (United States)

    1993-12-01

    The goal of this program is to develop new methods for the analysis of spectra and dynamics of highly excited vibrational states of molecules. In these systems, strong mode coupling and anharmonicity give rise to complicated classical dynamics and make the simple normal modes analysis unsatisfactory. New methods of spectral analysis, pattern recognition, and assignment are sought using techniques of nonlinear dynamics, including bifurcation theory, phase space classification, and quantization of phase space structures. The emphasis is on chaotic systems and systems with many degrees of freedom.

  20. MULTIVARIANT AND UNIVARIANT INTERGROUP DIFFERENCES IN THE INVESTIGATED SPECIFIC MOTOR SPACE BETWEEN RESPONDENTS JUNIORS AND SENIORS MEMBERS OF THE MACEDONIAN NATIONAL KARATE TEAM

    Kenan Asani

    2013-07-01

    Full Text Available The aim is to establish intergroup multivariate and univariate differences in the investigated specific motor space between junior and senior members of the Macedonian national karate team. The sample of 30 male karate respondents covers juniors aged 16-17 and seniors over 18 years. Twenty specific motor tests were applied in the research. Based on Graph 1, which presents the multivariate analysis of variance (MANOVA and ANOVA), it can be noted that the junior and senior respondents, although not belonging to the same population, do not differ in the multivariate investigated area: a Wilks' lambda of .19 and a Rao's R approximation of 1.91, with degrees of freedom df1 = 20 and df2 = 9, give a significance level of p = .16. Univariate analysis for each variable separately shows statistically significant intergroup differences in seven of the twenty applied manifest variables: SMAEGERI (kicks into the bag with the preferred leg, mae geri, in 10 sec.), SMAVASI (kicks into the bag with the preferred leg, mawashi geri, in 10 sec.), SUSIRO (kicks into the bag with the preferred leg, ushiro geri, in 10 sec.), SKIZAME (punches into the bag with the preferred hand, kizame tsuki, in 10 sec.), STAPNSR (foot tapping in the sagittal plane in 15 sec.), SUDMNR (hitting a moving target with the weaker hand) and SUDMPN (hitting a moving target with the preferred foot). Thus, there are no intergroup differences in the multivariate investigated specific motor space between the junior and senior members of the Macedonian karate team, while the univariate analysis reveals significant differences in the seven variables listed above.

  1. Developing a univariate approach to phase-I monitoring of fuzzy quality profiles

    Kazem Noghondarian

    2012-10-01

    Full Text Available In many real-world applications, the quality of a process or a particular product can be characterized by a functional relationship called a profile. A profile describes the relationship between a response quality characteristic and one or more explanatory variables. Monitoring the quality of a profile is implemented to understand and verify the stability of this functional relationship over time. In some real applications, a fuzzy linear regression model can represent the profile adequately where the response quality characteristic is fuzzy. The purpose of this paper is to develop an approach for monitoring process/product profiles in a fuzzy environment. A fuzzy linear regression model is developed to construct the quality profiles using linear programming, and then fuzzy individuals and moving-range (I-MR) control charts are developed to monitor both the intercept and the slope of the fuzzy profiles to achieve an in-control process. A case study in customer satisfaction is presented to show the application of our approach and to present a sensitivity analysis of the parameters used in building a fuzzy profile.
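
    The crisp skeleton behind the proposed fuzzy charts is the classical I-MR chart; the sketch below computes its control limits from the standard constants 2.66 and 3.267, using invented observations.

        # Classical individuals and moving-range (I-MR) limits; observations invented.
        import numpy as np

        x = np.array([10.2, 9.8, 10.5, 10.1, 9.9, 10.4, 10.0])
        mr = np.abs(np.diff(x))                  # moving ranges of consecutive points

        x_bar, mr_bar = x.mean(), mr.mean()
        ucl_i = x_bar + 2.66 * mr_bar            # individuals chart upper limit
        lcl_i = x_bar - 2.66 * mr_bar            # individuals chart lower limit
        ucl_mr = 3.267 * mr_bar                  # moving-range chart upper limit
        print((lcl_i, ucl_i), ucl_mr)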

  2. Real-time prediction of extreme ambient carbon monoxide concentrations due to vehicular exhaust emissions using univariate linear stochastic models

    Sharma, P.; Khare, M.

    2000-01-01

    Historical time-series data of carbon monoxide (CO) concentration were analysed using the Box-Jenkins modelling approach. Univariate Linear Stochastic Models (ULSMs) were developed to examine the degree of prediction possible in situations where only a limited data set, restricted to the past record of the pollutant, is available. The developed models can be used to provide short-term, real-time forecasts of extreme CO concentrations for an Air Quality Control Region (AQCR) comprising a major traffic intersection in a Central Business District of Delhi City, India. (author)
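
    A minimal sketch of the Box-Jenkins workflow on a synthetic CO series follows; the ARIMA order and the data are illustrative stand-ins, not the models fitted in the paper.

        # Fit a univariate linear stochastic model and issue a short-term forecast.
        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(1)
        # Synthetic autocorrelated "CO concentration" series (arbitrary units)
        co = 4.0 + 0.3 * np.cumsum(rng.normal(0, 0.1, 200)) + rng.normal(0, 0.2, 200)

        res = ARIMA(co, order=(1, 1, 1)).fit()   # (p, d, q) chosen for illustration
        print(res.summary().tables[1])
        print("Next 6 forecast values:", res.forecast(steps=6))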

  3. In situ calibration using univariate analyses based on the onboard ChemCam targets: first prediction of Martian rock and soil compositions

    Fabre, C.; Cousin, A.; Wiens, R.C.; Ollila, A.; Gasnault, O.; Maurice, S.; Sautter, V.; Forni, O.; Lasue, J.; Tokar, R.; Vaniman, D.; Melikechi, N.

    2014-01-01

    Curiosity rover landed on August 6th, 2012 in Gale Crater, Mars, and it possesses unique analytical capabilities to investigate the chemistry and mineralogy of the Martian soil. In particular, the LIBS technique is being used for the first time on another planet with the ChemCam instrument, and more than 75,000 spectra were returned in the first year on Mars. Curiosity carries body-mounted calibration targets specially designed for the ChemCam instrument, some of which are homogeneous glasses and others fine-grained glass-ceramics. We present direct calibrations that use these onboard standards to infer elements and element ratios by ratioing relative peak areas. As the laser spot size is around 300 μm, the LIBS technique provides measurements of the silicate glass compositions representing homogeneous material and measurements of the ceramic targets that are comparable to fine-grained rock or soil. The laser energy and the auto-focus are controlled for all sequences used for calibration. The univariate calibration curves show relatively good to very good correlation coefficients, with low RSDs for major-element and ratio calibrations. Trace element calibration curves (Li, Sr, and Mn), down to several ppm, can be used as a rapid tool to draw attention to remarkable rocks and soils along the traverse. First comparisons to alpha-particle X-ray spectroscopy (APXS) data, on selected targets, show good agreement for most elements and for Mg# and Al/Si estimates. SiO2 estimates from the univariate calibration cannot yet be used. Na2O and K2O estimates are relevant for high alkali contents, but are probably underestimated due to the CCCT initial compositions. Very good results for CaO and Al2O3 estimates and satisfactory results for FeO are obtained. - Highlights: • In situ LIBS univariate calibrations are done using the Curiosity onboard standards. • Major and minor element contents can be rapidly obtained. • Trace element contents can be used as a rapid tool along the
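
    The univariate calibration described here is, at heart, a straight-line fit of relative peak area against the known composition of each onboard standard, inverted to predict unknowns. A minimal sketch with invented numbers (not ChemCam values) follows.

        # Univariate calibration curve: peak-area ratio vs. known concentration.
        import numpy as np

        conc = np.array([0.5, 2.0, 5.0, 10.0, 15.0])         # wt% in the standards
        ratio = np.array([0.04, 0.15, 0.38, 0.74, 1.12])     # measured peak-area ratios

        slope, intercept = np.polyfit(conc, ratio, 1)
        pred = np.polyval([slope, intercept], conc)
        r2 = np.corrcoef(ratio, pred)[0, 1] ** 2
        print(f"slope = {slope:.4f}, intercept = {intercept:.4f}, R^2 = {r2:.4f}")

        # Invert the calibration for an unknown target's measured ratio
        unknown_ratio = 0.55
        print("Predicted concentration:", (unknown_ratio - intercept) / slope)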

  4. In situ calibration using univariate analyses based on the onboard ChemCam targets: first prediction of Martian rock and soil compositions

    Fabre, C. [GeoRessources lab, Université de Lorraine, Nancy (France); Cousin, A.; Wiens, R.C. [Los Alamos National Laboratory, Los Alamos, NM (United States); Ollila, A. [University of NM, Albuquerque (United States); Gasnault, O.; Maurice, S. [IRAP, Toulouse (France); Sautter, V. [Museum National d'Histoire Naturelle, Paris (France); Forni, O.; Lasue, J. [IRAP, Toulouse (France); Tokar, R.; Vaniman, D. [Planetary Science Institute, Tucson, AZ (United States); Melikechi, N. [Delaware State University (United States)

    2014-09-01

    Curiosity rover landed on August 6th, 2012 in Gale Crater, Mars, and it possesses unique analytical capabilities to investigate the chemistry and mineralogy of the Martian soil. In particular, the LIBS technique is being used for the first time on another planet with the ChemCam instrument, and more than 75,000 spectra were returned in the first year on Mars. Curiosity carries body-mounted calibration targets specially designed for the ChemCam instrument, some of which are homogeneous glasses and others fine-grained glass-ceramics. We present direct calibrations that use these onboard standards to infer elements and element ratios by ratioing relative peak areas. As the laser spot size is around 300 μm, the LIBS technique provides measurements of the silicate glass compositions representing homogeneous material and measurements of the ceramic targets that are comparable to fine-grained rock or soil. The laser energy and the auto-focus are controlled for all sequences used for calibration. The univariate calibration curves show relatively good to very good correlation coefficients, with low RSDs for major-element and ratio calibrations. Trace element calibration curves (Li, Sr, and Mn), down to several ppm, can be used as a rapid tool to draw attention to remarkable rocks and soils along the traverse. First comparisons to alpha-particle X-ray spectroscopy (APXS) data, on selected targets, show good agreement for most elements and for Mg# and Al/Si estimates. SiO{sub 2} estimates from the univariate calibration cannot yet be used. Na{sub 2}O and K{sub 2}O estimates are relevant for high alkali contents, but are probably underestimated due to the CCCT initial compositions. Very good results for CaO and Al{sub 2}O{sub 3} estimates and satisfactory results for FeO are obtained. - Highlights: • In situ LIBS univariate calibrations are done using the Curiosity onboard standards. • Major and minor element contents can be rapidly obtained. • Trace element contents can be used as a

  5. Inflation persistence in central and southeastern Europe: Evidence from univariate and structural time series approaches

    Mladenović Zorica

    2012-01-01

    Full Text Available The purpose of this paper is to measure inflation persistence in the following countries of Central and Southeastern Europe: Slovakia, the Czech Republic, Poland, Hungary, Romania and Serbia. The study sample covers monthly data from January 1995 to May 2010 for Poland, Hungary and Slovakia, from January 1994 to May 2010 for the Czech Republic, and from January 2002 to June 2010 for Romania. The shortest sample, from January 2003 to September 2010, was used for Serbia and is due to its late start in the transition process. The results of this study enrich the existing literature on this topic by extending the sample period to cover the recent years of relatively higher inflation rates and by including Romania and Serbia, which were not previously considered. The study led to two main findings. First, within the Markov switching model approach, inflation persistence of moderate to high magnitude was detected in Hungary, Poland, Romania and Serbia, and persistence of smaller order in Slovakia and the Czech Republic; in addition, the changes in inflation persistence often correspond to changes in the variability and mean of inflation. Second, the New Keynesian Phillips Curve represents a valid structural approach to describe the inflation dynamics in this region. In all six cases studied, the weights on backward and forward looking behaviors were significant, while the impact of the driving variable was insignificant only once. It is found that a significant influence of the economic driving variable can be captured by real gross wage inflation and real broad money growth. The estimates show that the backward-looking term plays an important role in determining the inflation dynamics. Similar conclusions are drawn by using quarterly data in the econometric estimations for the selected countries.
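
    A minimal sketch of the Markov switching measurement of persistence follows, using statsmodels on a synthetic monthly inflation series in which the AR(1) coefficient (the persistence measure) changes mid-sample; it is an illustration, not a re-estimation of the paper's models.

        # Two-regime Markov switching AR(1) for inflation persistence.
        import numpy as np
        from statsmodels.tsa.regime_switching.markov_autoregression import (
            MarkovAutoregression,
        )

        rng = np.random.default_rng(2)
        n, infl = 180, np.empty(180)
        infl[0] = 0.5
        for t in range(1, n):
            rho = 0.8 if t < 90 else 0.3          # persistence falls mid-sample
            infl[t] = 0.2 + rho * infl[t - 1] + rng.normal(0, 0.2)

        res = MarkovAutoregression(infl, k_regimes=2, order=1, switching_ar=True).fit()
        print(res.summary())
        print("Expected regime durations:", res.expected_durations)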

  6. Neutron activation analysis of high purity substances

    Gil'bert, Eh.N.

    1987-01-01

    Peculiarities of neutron activation analysis (NAA) of high purity substances are considered. The advantages of NAA include simultaneous determination of a wide series of elements, high sensitivity (lower bounds of determined contents of 10^-9 to 10^-10 %), high selectivity and accuracy (Sr = 0.10-0.15, which may be decreased to 0.001), the possibility of analysing samples from several micrograms to hundreds of grams, and simplicity of calibration. Also considered are the accounting of NAA systematic errors associated with neutron flux screening by the analysed matrix and with production of radionuclides of the determined elements from accompanying elements through concurrent nuclear reactions, as well as the accounting of errors due to self-absorption of the recorded radiation by compact samples.

  7. High priority tank sampling and analysis report

    Brown, T.M.

    1998-01-01

    In July 1993, the Defense Nuclear Facilities Safety Board issued Recommendation 93-5 (Conway 1993), which noted that there was insufficient tank waste technical information and that the pace to obtain it was too slow to ensure that Hanford Site wastes could be safely stored, that associated operations could be conducted safely, and that future disposal data requirements could be met. In response, the US Department of Energy, in May 1996, issued Revision 1 of the Recommendation 93-5 Implementation Plan (DOE-RL 1996). The Implementation Plan presented a modified approach to achieve the original plan's objectives, concentrating on actions necessary to ensure that wastes can be safely stored, that operations can be safely conducted, and that timely characterization information for the tank waste Disposal Program could be obtained. The Implementation Plan proposed 28 High Priority tanks for near-term core sampling and analysis, which, along with sampling and analysis of other non-High Priority tanks, could provide the scientific and technical data to confirm assumptions, calibrate models, and measure safety related phenomenology of the waste. When the analysis results of the High Priority and other-tank sampling were reviewed, it was expected that a series of 12 questions, 9 related to safety issues and 3 related to planning for the disposal process, should be answered, allowing key decisions to be made. This report discusses the execution of the Implementation Plan and the results achieved in addressing the questions. Through sampling and analysis, all nine safety related questions have been answered, and extensive data for the three disposal planning related questions have been collected, allowing for key decision making. Many more tanks than the original 28 High Priority tanks identified in the Implementation Plan were sampled and analyzed. Twenty-one High Priority tanks and 85 other tanks were core sampled and used to address the questions. Thirty-eight additional tanks were auger

  8. The issue of multiple univariate comparisons in the context of neuroelectric brain mapping: an application in a neuromarketing experiment.

    Vecchiato, G; De Vico Fallani, F; Astolfi, L; Toppi, J; Cincotti, F; Mattia, D; Salinari, S; Babiloni, F

    2010-08-30

    This paper presents some considerations about the use of adequate statistical techniques in the framework of neuroelectromagnetic brain mapping. With the use of advanced EEG/MEG recording setups involving hundreds of sensors, the issue of protection against the type I errors that can occur during the execution of hundreds of univariate statistical tests has gained interest. In the present experiment, we investigated the EEG signals from a mannequin acting as an experimental subject. Data were collected while performing a neuromarketing experiment and analyzed with state-of-the-art computational tools adopted in the specialized literature. Results showed that the electric data from the mannequin's head present statistically significant differences in power spectra during the visualization of a commercial advertisement when compared to the power spectra gathered during a documentary, when no adjustments are made to the alpha level of the multiple univariate tests performed. The use of the Bonferroni or Bonferroni-Holm adjustments correctly returned no differences between the signals gathered from the mannequin in the two experimental conditions. A partial sample of recently published literature in different neuroscience journals suggested that at least 30% of the papers do not use statistical protection against type I errors. While the occurrence of type I errors can be easily managed with appropriate statistical techniques, the use of such techniques is still not widely adopted in the literature. Copyright (c) 2010 Elsevier B.V. All rights reserved.
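
    A minimal sketch of the adjustment at issue follows: many univariate tests run on null data, with and without Bonferroni/Holm protection. The 64 "channels" and sample sizes are hypothetical.

        # Type I error protection for many univariate tests via multipletests.
        import numpy as np
        from scipy import stats
        from statsmodels.stats.multitest import multipletests

        rng = np.random.default_rng(3)
        # Two "conditions" with no true difference, tested on 64 channels
        p_vals = np.array([
            stats.ttest_ind(rng.normal(size=30), rng.normal(size=30)).pvalue
            for _ in range(64)
        ])

        print("Uncorrected 'significant' tests:", int(np.sum(p_vals < 0.05)))
        for method in ("bonferroni", "holm"):
            reject = multipletests(p_vals, alpha=0.05, method=method)[0]
            print(f"{method}: significant after correction = {int(reject.sum())}")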

  9. Univariate Lp and lp Averaging, 0 < p < 1, in Polynomial Time by Utilization of Statistical Structure

    John E. Lavery

    2012-10-01

    Full Text Available We present evidence that one can calculate generically combinatorially expensive Lp and lp averages, 0 < p < 1, in polynomial time by restricting the data to come from a wide class of statistical distributions. Our approach differs from the approaches in the previous literature, which are based on a priori sparsity requirements or on accepting a local minimum as a replacement for a global minimum. The functionals by which Lp averages are calculated are not convex but are radially monotonic, and the functionals by which lp averages are calculated are nearly so, which are the keys to solvability in polynomial time. Analytical results for symmetric, radially monotonic univariate distributions are presented. An algorithm for univariate lp averaging is presented. Computational results for a Gaussian distribution, a class of symmetric heavy-tailed distributions and a class of asymmetric heavy-tailed distributions are presented. Many phenomena in human-based areas are increasingly known to be represented by data that have large numbers of outliers and belong to very heavy-tailed distributions. When tails of distributions are so heavy that even medians (L1 and l1 averages) do not exist, one needs to consider using lp minimization principles with 0 < p < 1.
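
    The structure that makes lp averaging tractable can be seen in a brute-force sketch: for 0 < p < 1 the objective is concave between consecutive data points, so its global minimum is attained at a data point and can be found by enumeration. This is an illustration on synthetic heavy-tailed data, not the paper's polynomial-time algorithm.

        # Univariate lp averaging, 0 < p < 1, by enumerating the data points.
        import numpy as np

        rng = np.random.default_rng(4)
        x = np.concatenate([rng.normal(0, 1, 95), rng.normal(40, 1, 5)])  # outliers
        p = 0.5

        def objective(a):
            return np.sum(np.abs(x - a) ** p)

        # Each term |x_i - a|**p is concave in a on either side of x_i, so the
        # sum is concave between consecutive order statistics and is minimized
        # at one of the data points.
        lp_avg = min(x, key=objective)
        print(f"lp average (p = {p}): {lp_avg:.3f}")
        print(f"mean: {x.mean():.3f}, median: {np.median(x):.3f}")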

  10. Analysis of high burnup fuel safety issues

    Lee, Chan Bock; Kim, D. H.; Bang, J. G.; Kim, Y. M.; Yang, Y. S.; Jung, Y. H.; Jeong, Y. H.; Nam, C.; Baik, J. H.; Song, K. W.; Kim, K. S

    2000-12-01

    Safety issues in the steady state and transient behavior of high burnup LWR fuel above 50-60 MWD/kgU were analyzed. Effects of burnup extension upon fuel performance parameters were reviewed, and the validity of both the fuel safety criteria and the performance analysis models, which were based upon lower-burnup fuel test results, was analyzed. It was found that further tests would be necessary in such areas as fuel failure and dispersion for RIA, and high temperature cladding corrosion and mechanical deformation for LOCA. Since domestic fuels have been irradiated in PWRs up to rod-average burnups higher than 55 MWD/kgU, Korea is in the same situation as the other countries regarding high burnup fuel safety issues; therefore, the necessary research areas to be performed in Korea were derived. Considering that post-irradiation examination (PIE) of domestic fuel with burnup higher than 30 MWD/kgU has not been done so far at all, it is primarily necessary to perform PIE for high burnup fuel, and then simulation tests for RIA and LOCA could be performed using high burnup fuel specimens. For the areas which cannot be performed in Korea, international cooperation will be helpful to obtain the test results. With this database, the safety of high burnup domestic fuels will be confirmed, current fuel safety criteria will be re-evaluated, and finally a transient high burnup fuel behavior analysis technology will be developed through fuel performance analysis code development.

  12. Advanced Analysis Methods in High Energy Physics

    Pushpalatha C. Bhat

    2001-10-03

    During the coming decade, high energy physics experiments at the Fermilab Tevatron and around the globe will use very sophisticated equipment to record unprecedented amounts of data in the hope of making major discoveries that may unravel some of Nature's deepest mysteries. The discovery of the Higgs boson and signals of new physics may be around the corner. The use of advanced analysis techniques will be crucial in achieving these goals. The author discusses some of the novel methods of analysis that could prove to be particularly valuable for finding evidence of any new physics, for improving precision measurements and for exploring parameter spaces of theoretical models.

  13. High-capacity neutron activation analysis facility

    Hochel, R.C.

    1979-01-01

    A high-capacity neutron activation analysis facility, the Reactor Activation Facility, was designed and built and has been in operation for about a year at one of the Savannah River Plant's production reactors. The facility determines uranium and about 19 other trace elements in hydrogeochemical samples collected in the National Uranium Resource Evaluation program. The facility has a demonstrated average analysis rate of over 10,000 samples per month, and a peak rate of over 16,000 samples per month. Uranium is determined by cyclic activation and delayed neutron counting of the U-235 fission products; other elements are determined from gamma-ray spectra recorded in subsequent irradiation, decay, and counting steps. The method relies on the absolute activation technique and is highly automated for round-the-clock unattended operation

  14. High-capacity neutron activation analysis facility

    Hochel, R.C.; Bowman, W.W.; Zeh, C.W.

    1980-01-01

    A high-capacity neutron activation analysis facility, the Reactor Activation Facility, was designed and built and has been in operation for about a year at one of the Savannah River Plant's production reactors. The facility determines uranium and about 19 other elements in hydrogeochemical samples collected in the National Uranium Resource Evaluation program, which is sponsored and funded by the United States Department of Energy, Grand Junction Office. The facility has a demonstrated average analysis rate of over 10,000 samples per month, and a peak rate of over 16,000 samples per month. Uranium is determined by cyclic activation and delayed neutron counting of the U-235 fission products; other elements are determined from gamma-ray spectra recorded in subsequent irradiation, decay, and counting steps. The method relies on the absolute activation technique and is highly automated for round-the-clock unattended operation

  15. Will initiatives to promote hydroelectricity consumption be effective? Evidence from univariate and panel LM unit root tests with structural breaks

    Lean, Hooi Hooi; Smyth, Russell

    2014-01-01

    This paper examines whether initiatives to promote hydroelectricity consumption are likely to be effective by applying univariate and panel Lagrange Multiplier (LM) unit root tests to hydroelectricity consumption in 55 countries over the period 1965–2011. We find, for the panel as well as for about four-fifths of the individual countries, that hydroelectricity consumption is stationary. This result implies that shocks to hydroelectricity consumption in most countries will only result in temporary deviations from the long-run growth path. An important consequence of this finding is that initiatives designed to have permanent positive effects on hydroelectricity consumption, such as large-scale dam construction, are unlikely to be effective in increasing the share of hydroelectricity relative to consumption of fossil fuels. - Highlights: • Applies unit root tests to hydroelectricity consumption. • Hydroelectricity consumption is stationary. • Shocks to hydroelectricity consumption result in temporary deviations from the long-run growth path
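
    A minimal sketch of the kind of stationarity check involved follows; statsmodels' augmented Dickey-Fuller test stands in here for the LM unit root tests with structural breaks actually used in the paper, and the 47-observation series is synthetic.

        # Unit root test on a stationary AR(1) stand-in for log consumption.
        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        rng = np.random.default_rng(5)
        y = np.empty(47)                       # 47 annual observations, 1965-2011
        y[0] = 0.0
        for t in range(1, 47):
            y[t] = 0.5 * y[t - 1] + rng.normal()

        stat, pvalue = adfuller(y, regression="c")[:2]
        print(f"ADF statistic = {stat:.3f}, p-value = {pvalue:.3f}")
        print("Reject unit root (series stationary) at 5%?", pvalue < 0.05)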

  16. Guaranteed Bounds on Information-Theoretic Measures of Univariate Mixtures Using Piecewise Log-Sum-Exp Inequalities

    Nielsen, Frank

    2016-12-09

    Information-theoretic measures, such as the entropy, the cross-entropy and the Kullback-Leibler divergence between two mixture models, are core primitives in many signal processing tasks. Since the Kullback-Leibler divergence of mixtures provably does not admit a closed-form formula, it is in practice either estimated using costly Monte Carlo stochastic integration, approximated, or bounded using various techniques. We present a fast and generic method that builds algorithmically closed-form lower and upper bounds on the entropy, the cross-entropy, the Kullback-Leibler and the α-divergences of mixtures. We illustrate the versatile method by reporting our experiments for approximating the Kullback-Leibler and the α-divergences between univariate exponential mixtures, Gaussian mixtures, Rayleigh mixtures and Gamma mixtures.
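
    For context, the costly baseline that such bounds replace is Monte Carlo estimation; a minimal sketch of estimating KL(f||g) between two univariate Gaussian mixtures (with illustrative parameters) follows, using a log-sum-exp evaluation of the mixture densities.

        # Monte Carlo estimate of the KL divergence between Gaussian mixtures.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(6)

        def mixture_logpdf(x, weights, means, sigmas):
            # log density via log-sum-exp over the weighted components
            comp = np.stack([np.log(w) + stats.norm.logpdf(x, m, s)
                             for w, m, s in zip(weights, means, sigmas)])
            return np.logaddexp.reduce(comp, axis=0)

        f = dict(weights=[0.6, 0.4], means=[0.0, 3.0], sigmas=[1.0, 0.5])
        g = dict(weights=[0.5, 0.5], means=[0.5, 2.5], sigmas=[1.2, 0.8])

        # Sample from f: pick a component, then draw from it
        n = 100_000
        idx = rng.choice(2, size=n, p=f["weights"])
        xs = rng.normal(np.asarray(f["means"])[idx], np.asarray(f["sigmas"])[idx])

        kl = np.mean(mixture_logpdf(xs, **f) - mixture_logpdf(xs, **g))
        print(f"Monte Carlo KL(f||g) ~ {kl:.4f}")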

  17. Univariate and multiple linear regression analyses for 23 single nucleotide polymorphisms in 14 genes predisposing to chronic glomerular diseases and IgA nephropathy in Han Chinese.

    Wang, Hui; Sui, Weiguo; Xue, Wen; Wu, Junyong; Chen, Jiejing; Dai, Yong

    2014-09-01

    Immunoglobulin A nephropathy (IgAN) is a complex trait regulated by the interaction among multiple physiologic regulatory systems and probably involving numerous genes, which leads to inconsistent findings in genetic studies. One possible reason for the failure to replicate some single-locus results is that the underlying genetics of IgAN is based on multiple genes with minor effects. To study the association between 23 single nucleotide polymorphisms (SNPs) in 14 genes predisposing to chronic glomerular diseases and IgAN in Han males, the genotypes of the 23 SNPs in 21 Han males were detected and analyzed with a BaiO gene chip, and their associations were analyzed with univariate analysis and multiple linear regression analysis. The analysis showed that CTLA4 rs231726 and CR2 rs1048971 revealed a significant association with IgAN. These findings support the multi-gene nature of the etiology of IgAN and propose a potential gene-gene interactive model for future studies.

  18. High energy backscattering analysis using RUMP

    Doolittle, L.R.

    1990-01-01

    A backscattering analysis program such as RUMP fundamentally requires two reference sets of data in order to accomplish anything useful: stopping powers and scattering cross sections. Users of original versions of RUMP had to be satisfied with polynomial stopping powers geared for 1 to 3 MeV, and purely Rutherford scattering cross sections. As people increasingly turn to high beam energies to solve difficult materials analysis problems, RUMP has evolved greater flexibility for its reference data. It now allows data files to be loaded describing different stopping powers and arbitrary scattering cross sections. Auxiliary programs have been written to generate the reference data files, either from a theory or from measured reference data. Descriptions are given of both the underlying physics and the operational details of the software

  19. Evaluation of the efficiency of continuous wavelet transform as processing and preprocessing algorithm for resolution of overlapped signals in univariate and multivariate regression analyses; an application to ternary and quaternary mixtures

    Hegazy, Maha A.; Lotfy, Hayam M.; Mowaka, Shereen; Mohamed, Ekram Hany

    2016-07-01

    Wavelets have been adapted for a vast number of signal-processing applications due to the amount of information that can be extracted from a signal. In this work, a comparative study was conducted on the efficiency of continuous wavelet transform (CWT) as a signal-processing tool in univariate regression and as a pre-processing tool in multivariate analysis using partial least squares (CWT-PLS). These were applied to complex spectral signals of ternary and quaternary mixtures. The CWT-PLS method succeeded in the simultaneous determination of a quaternary mixture of drotaverine (DRO), caffeine (CAF), paracetamol (PAR) and p-aminophenol (PAP, the major impurity of paracetamol). The univariate CWT, by contrast, failed to determine the quaternary mixture components simultaneously; it was able to determine only PAR and PAP in the quaternary mixture, and the ternary mixtures of DRO, CAF and PAR, and of CAF, PAR and PAP. During the CWT calculations, different wavelet families were tested. The univariate CWT method was validated according to the ICH guidelines. For the development of the CWT-PLS model, a calibration set was prepared by means of an orthogonal experimental design, and the absorption spectra were recorded and processed by CWT. The CWT-PLS model was constructed by regression between the wavelet coefficients and the concentration matrices, and validation was performed by both cross-validation and an external validation set. Both methods were successfully applied for the determination of the studied drugs in pharmaceutical formulations.
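
    A minimal sketch of the CWT-as-preprocessing idea follows, using PyWavelets and scikit-learn on simulated three-component spectra; the wavelet, the scale and the band positions are illustrative assumptions, not the paper's settings.

        # CWT pre-processing of spectra followed by PLS calibration.
        import numpy as np
        import pywt
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(7)
        wl = np.linspace(205, 278, 200)                 # "wavelength" axis (nm)
        bands = [230, 243, 257]                         # component band centres

        def spectrum(conc):
            pure = np.stack([np.exp(-0.5 * ((wl - b) / 6.0) ** 2) for b in bands])
            return conc @ pure + rng.normal(0, 0.002, wl.size)

        C = rng.uniform(0.1, 1.0, size=(25, 3))         # calibration concentrations
        X = np.stack([spectrum(c) for c in C])

        # CWT of every spectrum at a single scale (Mexican hat wavelet)
        Xcwt = np.stack([pywt.cwt(x, scales=[8], wavelet="mexh")[0][0] for x in X])

        pls = PLSRegression(n_components=3).fit(Xcwt, C)
        print("Calibration R^2:", pls.score(Xcwt, C))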

  20. High priority tank sampling and analysis report

    Brown, T.M.

    1998-03-05

    In July 1993, the Defense Nuclear Facilities Safety Board (DNFSB) transmitted Recommendation 93-5 (Conway 1993) to the US Department of Energy (DOE). Recommendation 93-5 noted that there was insufficient tank waste technical information and the pace to obtain it was too slow to ensure that Hanford Site wastes could be safely stored, that associated operations could be conducted safely, and that future disposal data requirements could be met. In May 1996, the DOE issued Revision 1 of the Recommendation 93-5 Implementation Plan (DOE-RL 1996). The Implementation Plan revision presented a modified approach to achieve the original plan's objectives. The approach concentrated on actions necessary to ensure that wastes can be safely stored, that operations can be safely conducted, and that timely characterization information for the tank waste Disposal Program could be obtained. The Implementation Plan proposed 28 High Priority tanks, which, if sampled and analyzed, were expected to provide information to answer questions regarding safety and disposal issues. The High Priority tank list was originally developed in Section 9.0 of the Tank Waste Characterization Basis (Brown et al. 1995) by integrating the needs of the various safety and disposal programs. The High Priority tank list represents a set of tanks that were expected to provide the highest information return for characterization resources expended. The High Priority tanks were selected for near-term core sampling and were not expected to be the only tanks that would provide meaningful information. Sampling and analysis of non-High Priority tanks also could be used to provide scientific and technical data to confirm assumptions, calibrate models, and measure safety related phenomenological characteristics of the waste. When the sampling and analysis results of the High Priority and other tanks were reviewed, it was expected that a series of questions should be answered allowing key decisions to be made. The first

  1. High priority tank sampling and analysis report

    Brown, T.M.

    1998-01-01

    In July 1993, the Defense Nuclear Facilities Safety Board (DNFSB) transmitted Recommendation 93-5 (Conway 1993) to the US Department of Energy (DOE). Recommendation 93-5 noted that there was insufficient tank waste technical information and the pace to obtain it was too slow to ensure that Hanford Site wastes could be safely stored, that associated operations could be conducted safely, and that future disposal data requirements could be met. In May 1996, the DOE issued Revision 1 of the Recommendation 93-5 Implementation Plan (DOE-RL 1996). The Implementation Plan revision presented a modified approach to achieve the original plan's objectives. The approach concentrated on actions necessary to ensure that wastes can be safely stored, that operations can be safely conducted, and that timely characterization information for the tank waste Disposal Program could be obtained. The Implementation Plan proposed 28 High Priority tanks, which, if sampled and analyzed, were expected to provide information to answer questions regarding safety and disposal issues. The High Priority tank list was originally developed in Section 9.0 of the Tank Waste Characterization Basis (Brown et al. 1995) by integrating the needs of the various safety and disposal programs. The High Priority tank list represents a set of tanks that were expected to provide the highest information return for characterization resources expended. The High Priority tanks were selected for near-term core sampling and were not expected to be the only tanks that would provide meaningful information. Sampling and analysis of non-High Priority tanks also could be used to provide scientific and technical data to confirm assumptions, calibrate models, and measure safety related phenomenological characteristics of the waste. When the sampling and analysis results of the High Priority and other tanks were reviewed, it was expected that a series of questions should be answered allowing key decisions to be made. The first

  2. Effects of univariate and multivariate regression on the accuracy of hydrogen quantification with laser-induced breakdown spectroscopy

    Ytsma, Cai R.; Dyar, M. Darby

    2018-01-01

    Hydrogen (H) is a critical element to measure on the surface of Mars because its presence in mineral structures is indicative of past hydrous conditions. The Curiosity rover uses the laser-induced breakdown spectrometer (LIBS) on the ChemCam instrument to analyze rocks for their H emission signal at 656.6 nm, from which H can be quantified. Previous LIBS calibrations for H used small data sets measured on standards and/or manufactured mixtures of hydrous minerals and rocks, and applied univariate regression to spectra normalized in a variety of ways. However, matrix effects common to LIBS make these calibrations of limited usefulness when applied to the broad range of compositions on the Martian surface. In this study, 198 naturally-occurring hydrous geological samples covering a broad range of bulk compositions, with directly-measured H content, are used to create more robust prediction models for measuring H in LIBS data acquired under Mars conditions. Both univariate and multivariate prediction models, including partial least squares (PLS) and the least absolute shrinkage and selection operator (Lasso), are compared using several different methods for normalization of H peak intensities. Data from the ChemLIBS Mars-analog spectrometer at Mount Holyoke College are compared against spectra from the same samples acquired using a ChemCam-like instrument at Los Alamos National Laboratory and the ChemCam instrument on Mars. Results show that all current normalization and data preprocessing variations for quantifying H result in models with statistically indistinguishable prediction errors (accuracies) ca. ± 1.5 weight percent (wt%) H2O, limiting the applications of LIBS in these implementations for geological studies. This error is too large to allow distinctions among the most common hydrous phases (basalts, amphiboles, micas) to be made, though some clays (e.g., chlorites with ≈ 12 wt% H2O, smectites with 15-20 wt% H2O) and hydrated phases (e.g., gypsum with ≈ 20
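
    A minimal sketch contrasting the two families of models follows, on simulated LIBS-like spectra in which one channel plays the role of the 656.6 nm H line and another carries a matrix-effect interference; the dimensions and noise levels are invented for illustration.

        # Univariate H-peak calibration vs. multivariate Lasso on simulated spectra.
        import numpy as np
        from sklearn.linear_model import LassoCV
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(8)
        n, p = 150, 300
        h = rng.uniform(0, 20, n)                      # reference "wt% H2O" values
        X = rng.normal(0, 0.05, (n, p))
        X[:, 120] += 0.04 * h                          # H emission channel
        X[:, 45] += 0.02 * h + rng.normal(0, 0.3, n)   # channel with matrix effect

        # Univariate: straight-line calibration on the H channel alone
        slope, intercept = np.polyfit(X[:, 120], h, 1)
        uni_pred = slope * X[:, 120] + intercept

        # Multivariate: Lasso with cross-validated regularization strength
        multi_pred = cross_val_predict(LassoCV(cv=5), X, h, cv=5)

        for name, pred in [("univariate", uni_pred), ("Lasso", multi_pred)]:
            rmse = float(np.sqrt(np.mean((pred - h) ** 2)))
            print(f"{name}: RMSE = {rmse:.2f} wt%")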

  3. A high resolution jet analysis for LEP

    Hariri, S.

    1992-11-01

    A high resolution multijet analysis of hadronic events produced in e+e- annihilation at a c.m.s. energy of 91.2 GeV is described. Hadronic events produced in e+e- annihilations are generated using the Monte Carlo program JETSET 7.3 with its two options: Matrix Element (M.E.) and Parton Showers (P.S.). The shower option is used with its default parameter values, while the M.E. option is used with an invariant mass cut Y_CUT = 0.01 instead of 0.02. This choice ensures a better continuity in the evolution of the event shape variables. (K.A.) 3 refs.; 26 figs.; 1 tab

  4. RICH High Voltages & PDF Analysis @ LHCb

    Fanchini, E

    2009-01-01

    In the LHCb experiment an important issue is the identification of the hadrons in the final states of B meson decays. Two RICH subdetectors are devoted to this task, and Hybrid Photon Detectors (HPDs) are the photodetectors used to detect the Cherenkov light. This poster describes how the stability of the very high voltage (-18 kV) supply used to power the HPDs is monitored. It also presents the basics of a study which can be done with the first collision data: the analysis of dimuons from the Drell-Yan process. This process is well known, and the acceptance of the LHCb detector in terms of pseudorapidity will be very useful to improve the knowledge of the proton structure functions or, alternatively, to try to estimate the luminosity from it.

  5. Neutron activation analysis of high purity tellurium

    Gil'bert, Eh.N.; Verevkin, G.V.; Obrazovskij, E.G.; Shatskaya, S.S.

    1980-01-01

    A scheme of neutron activation analysis of high purity tellurium is developed. A weighed amount of Te (0.5 g) is irradiated for 20-40 hr in a flux of 2x10^13 neutrons/(cm^2 s). After decomposition of the sample, impurities of gold and palladium are determined by extraction with organic sulphides. Separation of tellurium from the remaining impurities is carried out by extraction with monothiobenzoic acid from weakly acidic HCl solutions in the presence of iodide ions, which suppress silver extraction. The remaining impurity elements in the refined product are determined γ-spectrometrically. The method allows the determination of 34 impurities, with determination limits of 10^-6 to 10^-11 g.

  6. Comparative study of the efficiency of computed univariate and multivariate methods for the estimation of the binary mixture of clotrimazole and dexamethasone using two different spectral regions

    Fayez, Yasmin Mohammed; Tawakkol, Shereen Mostafa; Fahmy, Nesma Mahmoud; Lotfy, Hayam Mahmoud; Shehata, Mostafa Abdel-Aty

    2018-04-01

    Three methods of analysis that require computational procedures in the Matlab® software are presented. The first is the univariate mean centering method, which eliminates the interfering signal of one component at a selected wavelength, leaving the measured amplitude to represent the component of interest only. The other two, the multivariate methods PLS and PCR, depend on a large number of variables that lead to extraction of the maximum amount of information required to determine the component of interest in the presence of the other. Accurate and precise results are obtained from the three methods for determining clotrimazole in the linearity ranges 1-12 μg/mL and 75-550 μg/mL with dexamethasone acetate at 2-20 μg/mL in synthetic mixtures and a pharmaceutical formulation, using two different spectral regions, 205-240 nm and 233-278 nm. The results obtained are compared statistically to each other and to the official methods.
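
    Of the two multivariate methods compared above, PCR is the more easily assembled from generic parts; a minimal sketch on simulated two-component mixture spectra follows (band positions, noise level and component count are illustrative assumptions).

        # Principal component regression: PCA scores fed to linear regression.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(9)
        wl = np.linspace(205, 278, 150)
        pure = np.stack([np.exp(-0.5 * ((wl - c) / 7.0) ** 2) for c in (225, 255)])

        C = rng.uniform(1, 12, size=(30, 2))     # two analyte concentrations
        X = C @ pure + rng.normal(0, 0.003, (30, wl.size))

        pcr = make_pipeline(PCA(n_components=4), LinearRegression()).fit(X, C)
        print("Calibration R^2:", pcr.score(X, C))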

  7. Highly multiparametric analysis by mass cytometry.

    Ornatsky, Olga; Bandura, Dmitry; Baranov, Vladimir; Nitz, Mark; Winnik, Mitchell A; Tanner, Scott

    2010-09-30

    This review paper describes a new technology, mass cytometry, that addresses applications typically run by flow cytometer analyzers, but extends the capability to highly multiparametric analysis. The detection technology is based on atomic mass spectrometry. It offers quantitation, specificity and dynamic range of mass spectrometry in a format that is familiar to flow cytometry practitioners. The mass cytometer does not require compensation, allowing the application of statistical techniques; this has been impossible given the constraints of fluorescence noise with traditional cytometry instruments. Instead of "colors" the mass cytometer "reads" the stable isotope tags attached to antibodies using metal-chelating labeling reagents. Because there are many available stable isotopes, and the mass spectrometer provides exquisite resolution between detection channels, many parameters can be measured as easily as one. For example, in a single tube the technique allows for the ready detection and characterization of the major cell subsets in blood or bone marrow. Here we describe mass cytometric immunophenotyping of human leukemia cell lines and leukemia patient samples, differential cell analysis of normal peripheral and umbilical cord blood; intracellular protein identification and metal-encoded bead arrays. Copyright © 2010 Elsevier B.V. All rights reserved.

  8. Advanced high conversion PWR: preliminary analysis

    Golfier, H.; Bellanger, V.; Bergeron, A.; Dolci, F.; Gastaldi, B.; Koberl, O.; Mignot, G.; Thevenot, C.

    2007-01-01

    In this paper, physical aspects of a HCPWR (High Conversion Light Water Reactor) are presented; this is an innovative PWR fuelled with mixed oxide that achieves a higher conversion ratio through a lower moderation ratio. Moderation ratios lower than unity are considered, which has led to low-moderation PWR fuel assembly designs. The objectives of this parametric study are to define a feasibility area with regard to the following neutronic aspects: moderation ratio, Pu loading, reactor spectrum, irradiation time, and neutronic coefficients. Important thermohydraulic parameters are the pressure drop, the critical heat flux, the maximum temperature in the fuel rod and the pumping power. The thermohydraulic analysis shows that a range of moderation ratios from 0.8 to 1.2 is technically possible. A compromise between improved fuel utilization and research and development effort has been found at a moderation ratio of about 1. The parametric study shows that there are two ranges of interest for the moderation ratio: -) moderation ratios between 0.8 and 1.2 with reduced fissile heights (> 3 m), for which both hexagonal and square fuel assembly arrangements are possible; and -) moderation ratios between 0.6 and 0.7 with a modification of the reactor operating conditions (reduction of the primary flow and of the thermal power), for which the fuel rods could be arranged in a hexagonal fuel rod assembly. (A.C.)

  9. High-Throughput Analysis of Enzyme Activities

    Lu, Guoxin [Iowa State Univ., Ames, IA (United States)

    2007-01-01

    High-throughput screening (HTS) techniques are applied to many research fields nowadays. Robotic microarray printing and automated microtiter handling allow HTS to be performed in both heterogeneous and homogeneous formats, with minimal sample required for each assay element. In this dissertation, new HTS techniques for enzyme activity analysis were developed. First, patterns of immobilized enzyme on a nylon screen were detected by a multiplexed capillary system. The imaging resolution is limited by the outer diameter of the capillaries; in order to obtain finer images, capillaries with smaller outer diameters can be used to form the imaging probe. Application of capillary electrophoresis allows separation of the product from the substrate in the reaction mixture, so that the product does not have to have optical properties different from those of the substrate. UV absorption detection allows almost universal detection of organic molecules; thus, no modification of either the substrate or the product molecules is necessary. This technique has the potential to be used in screening of local distribution variations of specific bio-molecules in a tissue or in screening of multiple immobilized catalysts. Another high-throughput screening technique was developed by directly monitoring the light intensity of the immobilized-catalyst surface using a scientific charge-coupled device (CCD). Briefly, the surface of the enzyme microarray is focused onto a scientific CCD using an objective lens. By carefully choosing the detection wavelength, generation of product on an enzyme spot can be seen by the CCD, and analyzing the light intensity change over time on an enzyme spot gives information on the reaction rate. The same microarray can be used many times; thus, high-throughput kinetic studies of hundreds of catalytic reactions are made possible. Finally, we studied the fluorescence emission spectra of ADP and obtained the detection limits for ADP under three different

  10. Structural analysis with high brilliance synchrotron radiation

    Ohno, Hideo [Japan Atomic Energy Research Inst., Kamigori, Hyogo (Japan). Kansai Research Establishment

    1997-11-01

    The research subjects in diffraction and scattering of materials with high brilliance synchrotron radiation such as SPring-8 (Super Photon ring 8 GeV) are summarized. The SPring-8 project is going well and 10 public beamlines will be opened for all users in October, 1997. Three JAERI beamlines are also under construction for researches of heavy element science, physical and structural properties under extreme conditions such as high temperature and high pressure. (author)

  11. Gait analysis by high school students

    Heck, A.; van Dongen, C.

    2008-01-01

    Human walking is a complicated motion. Movement scientists have developed various research methods to study gait. This article describes how a high school student collected and analysed high quality gait data in much the same way that movement scientists do, via the recording and measurement of

  12. Chemical analysis of high purity graphite

    1993-03-01

    The Sub-Committee on Chemical Analysis of Graphite was organized in April 1989, under the Committee on Chemical Analysis of Nuclear Fuels and Reactor Materials, JAERI. The Sub-Committee carried out collaborative analyses among eleven participating laboratories for the certification of the Certified Reference Materials (CRMs), JAERI-G5 and G6, after developing and evaluating analytical methods during the period of September 1989 to March 1992. The certified values were given for ash, boron and silicon in the CRM based on the collaborative analysis. The values for ten elements (Al, Ca, Cr, Fe, Mg, Mo, Ni, Sr, Ti, V) were not certified, but given for information. Preparation, homogeneity testing and chemical analyses for certification of reference materials were described in this paper. (author) 52 refs

  13. High resolution synchrotron light analysis at ELSA

    Switka, Michael; Zander, Sven; Hillert, Wolfgang [Bonn Univ. (Germany). Elektronen-Stretcher Anlage ELSA-Facility (ELSA)

    2013-07-01

    The pulse stretcher ring ELSA provides polarized electrons with energies up to 3.5 GeV for external hadron experiments. In order to meet the demand for stored beam intensities towards 200 mA, advanced beam instability studies need to be carried out. An external diagnostic beamline for synchrotron light analysis has been set up; it provides the space for multiple diagnostic tools, including a streak camera with a time resolution of <1 ps. Beam profile measurements are expected to identify instabilities and reveal their thresholds, and the effect of adequate countermeasures is subject to analysis. The current status of the beamline development is presented.

  14. High-Level Overview of Data Needs for RE Analysis

    Lopez, Anthony

    2016-12-22

    This presentation provides a high-level overview of analysis topics and associated data needs. Types of renewable energy analysis are grouped into two buckets: first, analysis of renewable energy potential, and second, analysis for other goals. Data requirements are similar, and they build upon one another.

  15. Analysis of high-fold gamma data

    Radford, D. C.; Cromaz, M.; Beyer, C. J.

    1999-01-01

    Historically, γ-γ and γ-γ-γ coincidence spectra were utilized to build nuclear level schemes. With the development of large detector arrays, it has become possible to analyze higher-fold coincidence data sets. This paper briefly reports on software to analyze 4-fold coincidence data sets that allows creation of 4-fold histograms (hypercubes) of at least 1024 channels per side (corresponding to a 43 gigachannel data space) that will fit onto a few gigabytes of disk space, and extraction of triple-gated spectra in a few seconds. Future detector arrays may have much higher efficiencies and detect as many as 15 or 20 γ rays simultaneously; such data will require very different algorithms for storage and analysis. Difficulties inherent in the analysis of such data are discussed, and two possible new solutions are presented, namely adaptive list-mode systems and 'list-list-mode' storage

  16. Analysis of cadmium in high alpha solutions

    Gray, L.W.; Overman, L.A.; Hodgens, H.F.

    1977-07-01

    Cadmium nitrate is occasionally used as a neutron poison for convenience in the separation of uranium, neptunium, and plutonium. As the classical methods of analysis for cadmium are very time-consuming, a method to isolate it in solution using solvent extraction of uranium, neptunium, and plutonium with TBP in an n-paraffin hydrocarbon was investigated. After removal of the radionuclides, the cadmium is determined by atomic absorption spectroscopy. Precision of the method at the 95 percent confidence level is ±2.4 percent. Alpha content of the solutions was typically reduced from 1-10 x 10^11 dis/(min ml) of 238Pu to 1-15 x 10^4 dis/(min ml). Analysis time was typically reduced from approximately 24 hours per sample to less than 1 hour.

  17. Cost minimization analysis of high-dose-rate versus low-dose-rate brachytherapy in endometrial cancer

    Pinilla, James

    1998-01-01

    Purpose: Endometrial cancer is a common, usually curable malignancy whose treatment frequently involves low-dose-rate (LDR) or high-dose-rate (HDR) brachytherapy. These treatments involve substantial resource commitments, and this is increasingly important. This paper presents a cost minimization analysis of HDR versus LDR brachytherapy in the treatment of endometrial cancer. Methods and Materials: The perspective of the analysis is that of the payor, in this case the Ministry of Health. One course of LDR treatment is compared to two courses of HDR treatment. The two alternatives are considered to be comparable with respect to local control, survival, and toxicities. Labor, overhead, and capital costs are accounted for and carefully measured. A 5% inflation rate is used where applicable. A univariate sensitivity analysis is performed. Results: The HDR regime is 22% less expensive than the LDR regime. This is $991.66 per patient or, based on the current workload of this department (30 patients per year) over the useful lifetime of the afterloader, $297,498 over 10 years in 1997 dollars. Conclusion: HDR brachytherapy minimizes costs in the treatment of endometrial cancer relative to LDR brachytherapy. These results may be used by other centers to make rational decisions regarding brachytherapy equipment replacement or acquisition

  18. CMMI High Maturity Measurement and Analysis Workshop Report: March 2008

    Stoddard, II, Robert W; Goldenson, Dennis R; Zubrow, Dave; Harper, Erin

    2008-01-01

    .... In response to the need for clarification and guidance on implementing measurement and analysis in the context of high maturity processes, members of the SEI's Software Engineering Measurement and Analysis (SEMA...

  19. High Throughput Analysis of Photocatalytic Water Purification

    Sobral Romao, J.I.; Baiao Barata, David; Habibovic, Pamela; Mul, Guido; Baltrusaitis, Jonas

    2014-01-01

    We present a novel high throughput photocatalyst efficiency assessment method based on 96-well microplates and UV-Vis spectroscopy. We demonstrate the reproducibility of the method using methyl orange (MO) decomposition, and compare kinetic data obtained with those provided in the literature for

  20. Analysis of high-pressure safety valves

    Beune, A.

    2009-01-01

    In presently used safety valve sizing standards the gas discharge capacity is based on a nozzle flow derived from ideal gas theory. At high pressures or low temperatures real gas effects can no longer be neglected, so the discharge coefficient corrected for flow losses cannot be assumed constant

  1. Analysis of high school students’ environmental literacy

    Wardani, R. A. K.; Karyanto, P.; Ramli, M.

    2018-05-01

    The student's environmental literacy (EL) is a vital component in improving students' awareness of environmental issues. This research aims to measure and analyse the EL of high school students and how the topic of the environment has been taught in high school. The research was conducted from February to April 2017. EL was measured on three aspects: knowledge, attitude and concern. The participants were sixty-five students (21 boys, 44 girls) purposively selected from grades X, XI and XII of one senior high school in Karanganyar Regency, Indonesia. Students' knowledge of concepts related to environmental issues was tested with fourteen main questions followed by supporting questions. The results showed that 80% of students were classified in the inadequate category. Attitude was measured by the New Ecological Paradigm (NEP), consisting of fifteen items, on which the students' average score was 46.42 (medium). Concern was measured by fifteen statements about the environment, with scores ranging from 2.58 to 4.18. The students' EL may be low due to their limited understanding of environmental concepts, the limited theories and concepts transferred to students, and lesson plans that are not designed to address the EL components.

  2. Cost-effectiveness Analysis of Nutritional Support for the Prevention of Pressure Ulcers in High-Risk Hospitalized Patients.

    Tuffaha, Haitham W; Roberts, Shelley; Chaboyer, Wendy; Gordon, Louisa G; Scuffham, Paul A

    2016-06-01

    To evaluate the cost-effectiveness of nutritional support compared with standard care in preventing pressure ulcers (PrUs) in high-risk hospitalized patients. An economic model was constructed using data from a systematic literature review, and a meta-analysis of randomized controlled trials on the efficacy of nutritional support in reducing the incidence of PrUs was conducted. A modeled cohort of hospitalized patients at high risk of developing PrUs and malnutrition was simulated during their hospital stay and up to 1 year. Standard care included PrU prevention strategies, such as redistribution surfaces, repositioning, and skin protection strategies, along with a standard hospital diet. In addition to standard care, the intervention group received nutritional support comprising patient education, nutrition goal setting, and the consumption of high-protein supplements. The analysis was from a healthcare payer perspective. Key outcomes of the model included the average costs and quality-adjusted life years. Model results were tested in univariate sensitivity analyses, and decision uncertainty was characterized using a probabilistic sensitivity analysis. Compared with standard care, nutritional support was cost saving at AU $425 per patient and marginally more effective, with an average 0.005 quality-adjusted life years gained. The probability of nutritional support being cost-effective was 87%. Nutritional support to prevent PrUs in high-risk hospitalized patients is cost-effective, with substantial cost savings predicted. Hospitals should implement the recommendations from the current PrU practice guidelines and offer nutritional support to high-risk patients.

  3. Nanomechanical analysis of high performance materials

    2014-01-01

    This book is intended for researchers who are interested in investigating the nanomechanical properties of materials using advanced instrumentation techniques. The chapters of the book are written in an easy-to-follow format, just like solved examples. The book comprehensively covers a broad range of materials such as polymers, ceramics, hybrids, biomaterials, metal oxides, nanoparticles, minerals, carbon nanotubes and welded joints. Each chapter describes the application of techniques on the selected material and also mentions the methodology adopted for the extraction of information from the raw data. This is a unique book in which both equipment manufacturers and equipment users have contributed chapters. Novices will learn the techniques directly from the inventors and senior researchers will gain in-depth information on the new technologies that are suitable for advanced analysis. On the one hand, fundamental concepts that are needed to understand the nanomechanical behavior of materials is included in t...

  4. Development of automatic image analysis methods for high-throughput and high-content screening

    Di, Zi

    2013-01-01

    This thesis focuses on the development of image analysis methods for ultra-high content analysis of high-throughput screens in which cellular phenotype responses to various genetic or chemical perturbations are under investigation. Our primary goal is to deliver efficient and robust image analysis

  5. Reciprocal Benefits of Mass-Univariate and Multivariate Modeling in Brain Mapping: Applications to Event-Related Functional MRI, H2(15)O-, and FDG-PET

    James R. Moeller

    2006-01-01

    Full Text Available In brain mapping studies of sensory, cognitive, and motor operations, specific waveforms of dynamic neural activity are predicted based on theoretical models of human information processing. For example, in event-related functional MRI (fMRI), the general linear model (GLM) is employed in mass-univariate analyses to identify the regions whose dynamic activity closely matches the expected waveforms. By comparison, multivariate analyses based on PCA or ICA provide greater flexibility in detecting spatiotemporal properties of experimental data that may strongly support alternative neuroscientific explanations. We investigated conjoint multivariate and mass-univariate analyses that combine the capabilities to (1) verify activation of neural machinery we already understand and (2) discover reliable signatures of new neural machinery. We examined combinations of GLM and PCA that recover latent neural signals (waveforms and footprints) with greater accuracy than either method alone. Comparative results are illustrated with analyses of real fMRI data, adding to Monte Carlo simulation support.
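
    A minimal sketch of the conjoint strategy follows: a mass-univariate GLM fit at every voxel by least squares, then PCA (via SVD) of the residuals to look for reliable structure the model missed. The design matrix and data are simulated for illustration.

        # Mass-univariate GLM per voxel, then PCA of the residuals.
        import numpy as np

        rng = np.random.default_rng(10)
        n_time, n_vox = 120, 500
        design = np.column_stack([
            np.ones(n_time),                             # intercept
            np.sin(np.linspace(0, 8 * np.pi, n_time)),   # expected task waveform
        ])
        beta_true = rng.normal(0, 1, (2, n_vox))
        Y = design @ beta_true + rng.normal(0, 1, (n_time, n_vox))

        # One least-squares GLM fit per voxel, done in a single call
        beta_hat = np.linalg.lstsq(design, Y, rcond=None)[0]
        resid = Y - design @ beta_hat

        # Multivariate step: principal components of the residuals
        U, s, Vt = np.linalg.svd(resid - resid.mean(0), full_matrices=False)
        print("Top 3 residual PCs explain:", np.round(s[:3] ** 2 / np.sum(s ** 2), 3))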

  6. On the Use of Running Trends as Summary Statistics for Univariate Time Series and Time Series Association

    Trottini, Mario; Vigo, Isabel; Belda, Santiago

    2015-01-01

    Given a time series, running trends analysis (RTA) involves evaluating least squares trends over overlapping time windows of L consecutive time points, with overlap by all but one observation. This produces a new series called the “running trends series,” which is used as summary statistics of the original series for further analysis. In recent years, RTA has been widely used in climate applied research as summary statistics for time series and time series association. There is no doubt that ...
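
    RTA reduces to fitting a least squares line in every overlapping window and keeping the slope; a minimal sketch on an illustrative series follows.

        # Running trends: least squares slopes over overlapping windows of L points.
        import numpy as np

        rng = np.random.default_rng(11)
        t = np.arange(100, dtype=float)
        y = 0.02 * t + np.sin(t / 8.0) + rng.normal(0, 0.3, t.size)  # toy series

        L = 21  # window length
        running_trends = np.array([
            np.polyfit(t[i:i + L], y[i:i + L], 1)[0]   # slope in window i
            for i in range(t.size - L + 1)
        ])
        print("Running trends series length:", running_trends.size)
        print("First five trends:", np.round(running_trends[:5], 4))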

  7. Impact of BCL2 and p53 on postmastectomy radiotherapy response in high-risk breast cancer. A subgroup analysis of DBCG82 b and c

    Kyndi, M.; Alsner, J.; Nielsen, H.M.; Overgaard, J.; Soerensen, F.B.; Knudsen, H.; Overgaard, M.

    2008-01-01

    Purpose. To examine p53 and BCL2 expression in high-risk breast cancer patients randomized to postmastectomy radiotherapy (PMRT). Patients and methods. The present analysis included 1 000 of 3 083 high-risk breast cancer patients randomly assigned to PMRT in the DBCG82 b and c studies. Tissue microarray sections were stained with immunohistochemistry for p53 and BCL2. Median potential follow-up was 17 years. Clinical endpoints were locoregional recurrence (LRR), distant metastases (DM), overall mortality, and overall survival (OS). Statistical analyses included Kappa statistics, χ2 or exact tests, Kaplan-Meier probability plots, Log-rank test, and Cox univariate and multivariate regression analyses. Results. p53 accumulation was not significantly associated with increased overall mortality, DM or LRR probability in univariate or multivariate Cox regression analyses. Kaplan-Meier probability plots showed reduced OS and improved DM and LRR probabilities after PMRT within subgroups of both p53 negative and p53 positive patients. Negative BCL2 expression was significantly associated with increased overall mortality, DM and LRR probability in multivariate Cox regression analyses. Kaplan-Meier probability plots showed a significantly improved overall survival after PMRT for the BCL2 positive subgroup, whereas practically no survival improvement was seen after PMRT for the BCL2 negative subgroup. In multivariate analysis of OS, however, no significant interaction was found between BCL2 and randomization status. Significant reductions in LRR probability after PMRT were recorded within both the BCL2 positive and BCL2 negative subgroups. Conclusion. p53 was not associated with survival after radiotherapy in high-risk breast cancer, but BCL2 might be
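
    A minimal sketch of the univariate and multivariate Cox regressions used above follows, built with the lifelines package on simulated data; the covariate names echo the paper's factors, but every number is invented.

        # Univariate then multivariate Cox proportional hazards models.
        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        rng = np.random.default_rng(12)
        n = 300
        df = pd.DataFrame({
            "bcl2_positive": rng.integers(0, 2, n),
            "p53_positive": rng.integers(0, 2, n),
            "pmrt": rng.integers(0, 2, n),
        })
        # Simulated hazard lowered by BCL2 positivity and by PMRT
        hazard = 0.05 * np.exp(-0.7 * df.bcl2_positive - 0.4 * df.pmrt)
        df["time"] = rng.exponential(1.0 / hazard)
        df["event"] = (df.time < 15).astype(int)       # events before year 15
        df.loc[df.time > 15, "time"] = 15.0            # administrative censoring

        for var in ["bcl2_positive", "p53_positive", "pmrt"]:   # univariate models
            cph = CoxPHFitter().fit(df[[var, "time", "event"]], "time", "event")
            print(var, "HR =", round(float(np.exp(cph.params_[var])), 2))

        CoxPHFitter().fit(df, "time", "event").print_summary()  # multivariate model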

  8. National high-level waste systems analysis

    Kristofferson, K.; O'Holleran, T.P.

    1996-01-01

    Previously, no mechanism existed that provided a systematic, interrelated view or national perspective of all high-level waste treatment and storage systems that the US Department of Energy manages. The impacts of budgetary constraints and repository availability on storage and treatment must be assessed against existing and pending negotiated milestones for their impact on the overall HLW system. This assessment can give DOE a complex-wide view of the availability of waste treatment and help project the time required to prepare HLW for disposal. Facilities, throughputs, schedules, and milestones were modeled to ascertain the treatment and storage systems resource requirements at the Hanford Site, Savannah River Site, Idaho National Engineering Laboratory, and West Valley Demonstration Project. The impacts of various treatment system availabilities on schedule and throughput were compared to repository readiness to determine the prudent application of resources. To assess the various impacts, the model was exercised against a number of plausible scenarios as discussed in this paper

  9. High sensitivity analysis of atmospheric gas elements

    Miwa, Shiro; Nomachi, Ichiro; Kitajima, Hideo

    2006-01-01

    We have investigated the detection limit of H, C and O in Si, GaAs and InP using a Cameca IMS-4f instrument equipped with a modified vacuum system to improve the detection limit with a lower sputtering rate. We found that the detection limits for H, O and C are improved by employing a primary ion bombardment before the analysis. Background levels of 1 x 10^17 atoms/cm^3 for H, of 3 x 10^16 atoms/cm^3 for C and of 2 x 10^16 atoms/cm^3 for O could be achieved in silicon with a sputtering rate of 2 nm/s after a primary ion bombardment for 160 h. We also found that the use of a 20 K He cryo-panel near the sample holder was effective for obtaining better detection limits in a shorter time, although the final detection limits using the panel are identical to those achieved without it

  10. High sensitivity analysis of atmospheric gas elements

    Miwa, Shiro [Materials Analysis Lab., Sony Corporation, 4-16-1 Okata, Atsugi 243-0021 (Japan)]. E-mail: Shiro.Miwa@jp.sony.com; Nomachi, Ichiro [Materials Analysis Lab., Sony Corporation, 4-16-1 Okata, Atsugi 243-0021 (Japan); Kitajima, Hideo [Nanotechnos Corp., 5-4-30 Nishihashimoto, Sagamihara 229-1131 (Japan)

    2006-07-30

    We have investigated the detection limit of H, C and O in Si, GaAs and InP using a Cameca IMS-4f instrument equipped with a modified vacuum system to improve the detection limit with a lower sputtering rate. We found that the detection limits for H, O and C are improved by employing a primary ion bombardment before the analysis. Background levels of 1 x 10^17 atoms/cm^3 for H, of 3 x 10^16 atoms/cm^3 for C and of 2 x 10^16 atoms/cm^3 for O could be achieved in silicon with a sputtering rate of 2 nm/s after a primary ion bombardment for 160 h. We also found that the use of a 20 K He cryo-panel near the sample holder was effective for obtaining better detection limits in a shorter time, although the final detection limits using the panel are identical to those achieved without it.

  11. High resolving power spectrometer for beam analysis

    Moshammer, H.W.; Spencer, J.E.

    1992-03-01

    We describe a system designed to analyze the high energy, closely spaced bunches from individual RF pulses. Neither a large solid angle nor a large momentum range is required, which allows characteristics that appear useful for other applications such as ion beam lithography. The spectrometer is a compact, double-focusing QBQ design whose symmetry allows the quads to range between F or D with a correspondingly large range of magnifications, dispersion and resolving power. This flexibility ensures the possibility of spatially separating all of the bunches along the focal plane with minimal transverse kicks and bending angle for differing input conditions. The symmetry of the system allows a simple geometric interpretation of the resolving power in terms of thin lenses and ray optics. We discuss the optics and the hardware that is proposed to measure emittance, energy, energy spread and bunch length for each bunch in an RF pulse train for small bunch separations. We also discuss how to use such measurements for feedback and feedforward control of these bunch characteristics as well as to maintain their stability. 2 refs

  12. Liquid Scintillation High Resolution Spectral Analysis

    Grau Carles, A.; Grau Malonda, A.

    2010-08-06

    The CIEMAT/NIST and the TDCR methods in liquid scintillation counting are based on the determination of the efficiency for total counting. This paper tries to expand these methods by analysing the pulse-height spectrum of radionuclides. To reach this objective we have to generalize the equations used in the model and to analyse the influence of ionization and chemical quench on both spectra and counting efficiency. We present equations to study the influence of different photomultiplier responses in systems with one, two or three photomultipliers. We study the effect of the electronic noise discriminator level on both spectra and counting efficiency. The described method permits one to study problems that up to now were not possible to approach, such as the high uncertainty in the standardization of pure beta-ray emitters with low energy when we apply the TDCR method, or the discrepancies in the standardization of some electron capture radionuclides when the CIEMAT/NIST method is applied. (Author) 107 refs.

  13. Liquid Scintillation High Resolution Spectral Analysis

    Grau Carles, A.; Grau Malonda, A.

    2010-01-01

    The CIEMAT/NIST and the TDCR methods in liquid scintillation counting are based on the determination of the efficiency for total counting. This paper tries to expand these methods by analysing the pulse-height spectrum of radionuclides. To reach this objective we have to generalize the equations used in the model and to analyse the influence of ionization and chemical quench on both spectra and counting efficiency. We present equations to study the influence of different photomultiplier responses in systems with one, two or three photomultipliers. We study the effect of the electronic noise discriminator level on both spectra and counting efficiency. The described method permits one to study problems that up to now were not possible to approach, such as the high uncertainty in the standardization of pure beta-ray emitters with low energy when we apply the TDCR method, or the discrepancies in the standardization of some electron capture radionuclides when the CIEMAT/NIST method is applied. (Author) 107 refs.

  14. Noncentral Chi-Square versus Normal Distributions in Describing the Likelihood Ratio Statistic: The Univariate Case and Its Multivariate Implication

    Yuan, Ke-Hai

    2008-01-01

    In the literature of mean and covariance structure analysis, noncentral chi-square distribution is commonly used to describe the behavior of the likelihood ratio (LR) statistic under alternative hypothesis. Due to the inaccessibility of the rather technical literature for the distribution of the LR statistic, it is widely believed that the…
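
    As background to the comparison in the title, the sketch below (an illustration, not taken from the paper) tabulates the density of a noncentral chi-square against a normal density matched to its first two moments, mean df + nc and variance 2(df + 2nc); the df and nc values are arbitrary assumptions.

        import numpy as np
        from scipy import stats

        df, nc = 10, 25.0
        mean = df + nc              # E[ncx2] = df + nc
        var = 2 * (df + 2 * nc)     # Var[ncx2] = 2(df + 2nc)

        x = np.linspace(mean - 4 * np.sqrt(var), mean + 4 * np.sqrt(var), 7)
        print("x       ncx2 pdf   normal pdf")
        for xi in x:
            print(f"{xi:7.2f} {stats.ncx2.pdf(xi, df, nc):9.5f} "
                  f"{stats.norm.pdf(xi, mean, np.sqrt(var)):9.5f}")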

  15. Ultra-high Performance Liquid Chromatography in Steroid Analysis

    Salonen, Fanny

    2017-01-01

    The latest version of liquid chromatography is ultra-high performance (or pressure) liquid chromatography (UHPLC). The technique uses short, narrow-bore columns with particle sizes below 3 µm. The extremely high pressure used results in very short analysis times, excellent separation, and good resolution. This makes UHPLC a good choice for steroid analysis. Steroids are a highly interesting area of study; they can be recognized as biomarkers for several diseases and are a relevant topic...

  16. Comparison study of inelastic analysis codes for high temperature structure

    Kim, Jong Bum; Lee, H. Y.; Park, C. K.; Geon, G. P.; Lee, J. H

    2004-02-01

    LMR high temperature structures subjected to operating and transient loadings may exhibit very complex deformation behaviors due to the use of ductile materials such as 316SS, so systematic analysis technology for high temperature structures is essential for reliable safety assessment. In this project, a comparative study of the developed inelastic analysis program NONSTA and existing analysis codes was performed, applying various types of loading including non-proportional loading. The performance of NONSTA was confirmed and the effect of the inelastic constants on the analysis results was analyzed. The applicability of inelastic analysis was also extended by applying both the developed program and the existing codes to analyses of the enhanced creep behavior and the elastic follow-up behavior of high temperature structures, and items needing improvement were identified. Further studies on the improvement of the NONSTA program and on the choice of proper values for the inelastic constants are necessary.

  17. NOAA High-Resolution Sea Surface Temperature (SST) Analysis Products

    National Oceanic and Atmospheric Administration, Department of Commerce — This archive covers two high resolution sea surface temperature (SST) analysis products developed using an optimum interpolation (OI) technique. The analyses have a...

  18. Prospective multifactorial analysis of preseason risk factors for shoulder and elbow injuries in high school baseball pitchers.

    Shitara, Hitoshi; Kobayashi, Tsutomu; Yamamoto, Atsushi; Shimoyama, Daisuke; Ichinose, Tsuyoshi; Tajika, Tsuyoshi; Osawa, Toshihisa; Iizuka, Haku; Takagishi, Kenji

    2017-10-01

    To prospectively identify preseason physical factors for shoulder and elbow injuries during the season in high school baseball pitchers. The study included 105 high school baseball pitchers [median age 16 (15-17) years]. The range of motion of the shoulder (90° abducted external and internal rotation) and elbow (extension/flexion), shoulder muscle strength (abduction and prone internal and external rotation), shoulder and elbow laxity, horizontal flexion, and scapular dyskinesis were assessed. After the season, the participants completed questionnaires regarding shoulder and/or elbow injuries, with injury defined as an inability to play for ≥1 week due to elbow/shoulder problems. The results of the two groups (injured and noninjured) were compared using t tests and Chi-square analyses. Stepwise forward logistic regression models were developed to identify risk factors. Twenty-one injuries were observed. In univariate analysis, 90° abducted internal rotation and total arc of the dominant shoulder and the ratio of prone external rotation on the dominant to nondominant sides were significantly lower in the injured group than in the noninjured group (P = 0.02, 0.04, and 0.01, respectively). In logistic regression analysis, 90° abducted internal rotation in the dominant shoulder and the prone external rotation ratio were significantly associated with injuries (P = 0.02 and 0.03, respectively). A low prone external rotation ratio and decreased 90° abducted internal rotation in the dominant shoulder in the preseason were significant risk factors for shoulder and elbow injuries in high school baseball pitchers. The results may contribute to reducing the incidence of these injuries. Level of evidence: II.
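
    A hedged sketch of the two-stage analysis described above (univariate screening followed by a multivariable logistic model) on synthetic data; the variable names, effect sizes, and screening threshold are hypothetical, not the study's.

        import numpy as np
        from scipy import stats
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(2)
        n = 105
        X = rng.normal(size=(n, 5))        # e.g., range-of-motion measures
        risk = X[:, 0] - 0.8 * X[:, 1]     # two predictors truly matter
        y = (risk + rng.normal(size=n) > 1.0).astype(int)   # injured yes/no

        # Stage 1: univariate t tests comparing injured vs. noninjured.
        keep = [j for j in range(X.shape[1])
                if stats.ttest_ind(X[y == 1, j], X[y == 0, j]).pvalue < 0.10]

        # Stage 2: multivariable logistic regression on the screened predictors.
        model = LogisticRegression().fit(X[:, keep], y)
        print("retained predictors:", keep)
        print("odds ratios:", np.exp(model.coef_).round(2))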

  19. Expectation-maximization algorithms for learning a finite mixture of univariate survival time distributions from partially specified class values

    Lee, Youngrok [Iowa State Univ., Ames, IA (United States)]

    2013-05-15

    Heterogeneity exists in a data set when samples from different classes are merged into it. Finite mixture models can represent a survival time distribution on a heterogeneous patient group by the proportions of each class and by the survival time distribution within each class as well. A heterogeneous data set cannot be explicitly decomposed into homogeneous subgroups unless all the samples are precisely labeled by their origin classes; such impossibility of decomposition is a barrier to overcome in estimating finite mixture models. The expectation-maximization (EM) algorithm has been used to obtain maximum likelihood estimates of finite mixture models by soft decomposition of heterogeneous samples without labels for a subset or the entire set of data. In medical surveillance databases we can find partially labeled data; that is, while not completely unlabeled, there is only imprecise information about class values. In this study we propose new EM algorithms that take advantage of such partial labels, and thus incorporate more information than traditional EM algorithms. In particular, we propose four variants of the EM algorithm, named EM-OCML, EM-PCML, EM-HCML and EM-CPCML, each of which assumes a specific mechanism of missing class values. We conducted a simulation study on exponential survival trees with five classes and showed that the advantages of incorporating a substantial amount of partially labeled data can be highly significant. We also showed that model selection based on AIC values works fairly well in selecting the best proposed algorithm for each specific data set. A case study on a real-world data set of gastric cancer provided by the Surveillance, Epidemiology and End Results (SEER) program showed the superiority of EM-CPCML not only to the other proposed EM algorithms but also to conventional supervised, unsupervised and semi-supervised learning algorithms.
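
    The proposed EM-OCML/EM-PCML/EM-HCML/EM-CPCML variants are tied to specific missing-label mechanisms; the sketch below shows only the generic ingredient they share, an EM fit of a two-component exponential survival mixture in which labeled samples keep fixed responsibilities. All data and settings are synthetic assumptions.

        import numpy as np

        rng = np.random.default_rng(3)
        t = np.r_[rng.exponential(2.0, 300), rng.exponential(8.0, 300)]
        labels = np.full(600, -1)        # -1 means unlabeled
        labels[:50] = 0                  # a few known short-survival cases
        labels[300:350] = 1              # a few known long-survival cases

        pi, lam = np.array([0.5, 0.5]), np.array([1.0, 0.1])  # initial guesses
        for _ in range(200):
            # E-step: responsibilities from the exponential densities.
            dens = pi * lam * np.exp(-np.outer(t, lam))       # shape (n, 2)
            r = dens / dens.sum(axis=1, keepdims=True)
            r[labels == 0] = [1.0, 0.0]  # partial labels override soft assignment
            r[labels == 1] = [0.0, 1.0]
            # M-step: update mixing proportions and exponential rates.
            pi = r.mean(axis=0)
            lam = r.sum(axis=0) / (r * t[:, None]).sum(axis=0)

        print("mixing proportions:", pi.round(3))
        print("mean survival times:", (1 / lam).round(2))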

  20. BEAMGAA. A chance for high precision analysis of big samples

    Goerner, W.; Berger, A.; Haase, O.; Segebade, Chr.; Alber, D.; Monse, G.

    2005-01-01

    In activation analysis of traces in small samples, the non-equivalence of the activating radiation doses of the sample and the calibration material gives rise to sometimes tolerable systematic errors. Conversely, analysis of major components usually demands high trueness and precision. To meet this demand, beam geometry activation analysis (BEAMGAA) procedures have been developed for instrumental photon (IPAA) and neutron activation analysis (INAA) in which the activating neutron/photon beam exhibits broad, flat-topped characteristics. This results in a very low lateral activating flux gradient compared to known radiation facilities, albeit at significantly lower flux density. The axial flux gradient can be accounted for by a monitor-sample-monitor assembly. As a first approach, major components were determined in high purity substances, as well as selenium in a cattle fodder additive. (author)

  1. Monitoring endemic livestock diseases using laboratory diagnostic data: A simulation study to evaluate the performance of univariate process monitoring control algorithms.

    Lopes Antunes, Ana Carolina; Dórea, Fernanda; Halasa, Tariq; Toft, Nils

    2016-05-01

    Surveillance systems are critical for accurate, timely monitoring and effective disease control. In this study, we investigated the performance of univariate process monitoring control algorithms in detecting changes in seroprevalence for endemic diseases. We also assessed the effect of sample size (number of sentinel herds tested in the surveillance system) on the performance of the algorithms. Three univariate process monitoring control algorithms were compared: the Shewhart p chart (PSHEW), the Cumulative Sum (CUSUM) and the Exponentially Weighted Moving Average (EWMA). Increases in seroprevalence were simulated from 0.10 to 0.15 and 0.20 over 4, 8, 24, 52 and 104 weeks. Each epidemic scenario was run with 2000 iterations. The cumulative sensitivity (CumSe) and timeliness were used to evaluate the algorithms' performance with a 1% false alarm rate. Using these performance evaluation criteria, it was possible to assess the accuracy and timeliness of the surveillance system working in real time. The results showed that EWMA and PSHEW had higher CumSe (when compared with CUSUM) from week 1 until the end of the period for all simulated scenarios. Changes in seroprevalence from 0.10 to 0.20 were more easily detected (higher CumSe) than changes from 0.10 to 0.15 for all three algorithms. Similar results were found with EWMA and PSHEW, based on the median time to detection. Changes in seroprevalence were detected later with CUSUM, compared to EWMA and PSHEW, for the different scenarios. Increasing the sample size 10-fold halved the time to detection (CumSe = 1), whereas increasing the sample size 100-fold reduced the time to detection by a factor of 6. This study investigated the performance of three univariate process monitoring control algorithms in monitoring endemic diseases. It was shown that automated systems based on these detection methods identified changes in seroprevalence at different times. Increasing the number of tested herds would lead to faster
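
    As a hedged illustration of one of the three algorithms compared above, the sketch below applies an EWMA chart to a simulated weekly seroprevalence series; the smoothing weight, control-limit width, and noise level are illustrative choices, not the study's settings.

        import numpy as np

        rng = np.random.default_rng(4)
        baseline, shifted, sigma = 0.10, 0.15, 0.01
        # One year in control, then a sustained increase in seroprevalence.
        prev = np.r_[rng.normal(baseline, sigma, 52),
                     rng.normal(shifted, sigma, 26)]

        lam, L = 0.2, 2.7          # EWMA weight and control-limit width
        # Steady-state upper control limit for the EWMA statistic.
        ucl = baseline + L * sigma * np.sqrt(lam / (2 - lam))

        z, alarms = baseline, []
        for week, p in enumerate(prev):
            z = lam * p + (1 - lam) * z    # exponentially weighted mean
            if z > ucl:
                alarms.append(week)

        print("upper control limit:", round(ucl, 4))
        print("first alarm at week:", alarms[0] if alarms else None)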

  2. High temperature structure design for FBRs and analysis technology

    Iwata, Koji

    1986-01-01

    In the case of FBRs, the operating temperature exceeds 500 deg C; therefore, a design that takes the inelastic characteristics of structural materials, such as plasticity and creep, into account is required, and high-grade, detailed design evaluation is demanded. This new high temperature structure design technology has been advanced in the respective countries, taking experimental, prototype and demonstration reactors as the targets. The development of FBRs in Japan began with the experimental reactor 'Joyo', which has been operated since 1977, and now the prototype FBR 'Monju' of 280 MWe is under construction, expected to attain criticality in 1992. In order to realize FBRs which can compete with LWRs through the construction of a demonstration FBR, the construction of large scale plants and improvements in economy and reliability are necessary. The features and role of FBR structural design, the method of high temperature structure design and the trend of its standardization, the trend of structural analysis technology for FBRs such as inelastic analysis, buckling analysis and coupled fluid-structure vibration analysis, the present status of structural analysis programs, and subjects for the future of high temperature structure design are explained. (Kako, I.)

  3. Visualization and Data Analysis for High-Performance Computing

    Sewell, Christopher Meyer [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-27

    This is a set of slides from a guest lecture for a class at the University of Texas, El Paso on visualization and data analysis for high-performance computing. The topics covered are the following: trends in high-performance computing; scientific visualization, such as OpenGL, ray tracing and volume rendering, VTK, and ParaView; data science at scale, such as in-situ visualization, image databases, distributed memory parallelism, shared memory parallelism, VTK-m, "big data", and then an analysis example.

  4. Analysis of High Plains Resource Risk and Economic Impacts

    Tidwell, Vincent C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vargas, Vanessa N [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jones, Shannon M [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dealy, Bern Caudill [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Shaneyfelt, Calvin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Smith, Braeton James [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Moreland, Barbara Denise [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-04-01

    The importance of the High Plains Aquifer is broadly recognized, as is its vulnerability to continued overuse. This study explores how continued depletion of the High Plains Aquifer might impact both critical infrastructure and the economy at the local, regional, and national scale. This analysis is conducted at the county level over a broad geographic region within the states of Kansas and Nebraska. In total, 140 counties that overlie the High Plains Aquifer in these two states are analyzed. The analysis utilizes future climate projections to estimate crop production. Current water use and management practices are projected into the future to explore their related impact on the High Plains Aquifer, barring any changes in water management practices, regulation, or policy. Finally, the impact of declining water levels and even exhaustion of groundwater resources is projected for specific sectors of the economy as well as particular elements of the region's critical infrastructure.

  5. Data analysis in high-dimensional sparse spaces

    Clemmensen, Line Katrine Harder

    ...classification techniques for high-dimensional problems are presented: sparse discriminant analysis, sparse mixture discriminant analysis and orthogonality constrained support vector machines. The first two introduce sparseness to the well-known linear and mixture discriminant analysis and thereby provide low... The methods are applied to classifications of fish species, ear canal impressions used in the hearing aid industry, microbiological fungi species, and various cancerous and healthy tissues. In addition, novel applications of sparse regressions (also called the elastic net) to the medical, concrete, and food...

  6. System and method for high precision isotope ratio destructive analysis

    Bushaw, Bruce A; Anheier, Norman C; Phillips, Jon R

    2013-07-02

    A system and process are disclosed that provide high accuracy and high precision destructive analysis measurements for isotope ratio determination of relative isotope abundance distributions in liquids, solids, and particulate samples. The invention utilizes a collinear probe beam to interrogate a laser ablated plume. This invention provides enhanced single-shot detection sensitivity approaching the femtogram range, and isotope ratios that can be determined at approximately 1% or better precision and accuracy (relative standard deviation).

  7. Statistical learning methods in high-energy and astrophysics analysis

    Zimmermann, J. [Forschungszentrum Juelich GmbH, Zentrallabor fuer Elektronik, 52425 Juelich (Germany) and Max-Planck-Institut fuer Physik, Foehringer Ring 6, 80805 Munich (Germany)]. E-mail: zimmerm@mppmu.mpg.de; Kiesling, C. [Max-Planck-Institut fuer Physik, Foehringer Ring 6, 80805 Munich (Germany)

    2004-11-21

    We discuss several popular statistical learning methods used in high-energy and astrophysics analysis. After a short motivation for statistical learning, we present the most popular algorithms and discuss several examples from current research in particle physics and astrophysics. The statistical learning methods are compared with each other and with standard methods for the respective application.

  8. Statistical learning methods in high-energy and astrophysics analysis

    Zimmermann, J.; Kiesling, C.

    2004-01-01

    We discuss several popular statistical learning methods used in high-energy and astrophysics analysis. After a short motivation for statistical learning, we present the most popular algorithms and discuss several examples from current research in particle physics and astrophysics. The statistical learning methods are compared with each other and with standard methods for the respective application.

  9. Quantitative coronary plaque analysis predicts high-risk plaque morphology on coronary computed tomography angiography: results from the ROMICAT II trial.

    Liu, Ting; Maurovich-Horvat, Pál; Mayrhofer, Thomas; Puchner, Stefan B; Lu, Michael T; Ghemigian, Khristine; Kitslaar, Pieter H; Broersen, Alexander; Pursnani, Amit; Hoffmann, Udo; Ferencik, Maros

    2018-02-01

    Semi-automated software can provide quantitative assessment of atherosclerotic plaques on coronary CT angiography (CTA). The relationship between established qualitative high-risk plaque features and quantitative plaque measurements has not been studied. We analyzed the association between quantitative plaque measurements and qualitative high-risk plaque features on coronary CTA. We included 260 patients with plaque who underwent coronary CTA in the Rule Out Myocardial Infarction/Ischemia Using Computer Assisted Tomography (ROMICAT) II trial. Quantitative plaque assessment and qualitative plaque characterization were performed on a per coronary segment basis. Quantitative coronary plaque measurements included plaque volume, plaque burden, remodeling index, and diameter stenosis. In qualitative analysis, high-risk plaque was present if positive remodeling, low CT attenuation plaque, napkin-ring sign or spotty calcium were detected. Univariable and multivariable logistic regression analyses were performed to assess the association between quantitative and qualitative high-risk plaque assessment. Among 888 segments with coronary plaque, high-risk plaque was present in 391 (44.0%) segments by qualitative analysis. In quantitative analysis, segments with high-risk plaque had higher total plaque volume, low CT attenuation plaque volume, plaque burden and remodeling index. Quantitatively assessed low CT attenuation plaque volume (odds ratio 1.12 per 1 mm^3, 95% CI 1.04-1.21), positive remodeling (odds ratio 1.25 per 0.1, 95% CI 1.10-1.41) and plaque burden (odds ratio 1.53 per 0.1, 95% CI 1.08-2.16) were associated with high-risk plaque. Quantitative coronary plaque characteristics (low CT attenuation plaque volume, positive remodeling and plaque burden) measured by semi-automated software correlated with qualitative assessment of high-risk plaque features.

  10. Analysis of chaos in high-dimensional wind power system.

    Wang, Cong; Zhang, Hongli; Fan, Wenhui; Ma, Ping

    2018-01-01

    A comprehensive analysis of the chaos of a high-dimensional wind power system is performed in this study. A high-dimensional wind power system is more complex than most power systems. An 11-dimensional wind power system proposed by Huang, which has not been analyzed in previous studies, is investigated. When the system is affected by external disturbances, including single-parameter and periodic disturbances, or when its parameters change, the chaotic dynamics of the wind power system are analyzed and the parameter ranges for chaos are obtained. The existence of chaos is confirmed by calculating and analyzing the Lyapunov exponents of all state variables and the state variable sequence diagram. Theoretical analysis and numerical simulations show that chaos will occur in the wind power system when parameter variations and external disturbances change to a certain degree.
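
    The 11-dimensional wind power model itself is beyond a short example, but the Lyapunov-exponent test for chaos used above can be illustrated on the one-dimensional logistic map; everything in this sketch (map, parameter values) is an illustrative stand-in, not the paper's system.

        import numpy as np

        def lyapunov_logistic(r, n=100_000, x0=0.4):
            """Estimate the largest Lyapunov exponent of x -> r x (1 - x)
            by averaging log|f'(x)| = log|r (1 - 2x)| along an orbit."""
            x, acc = x0, 0.0
            for _ in range(n):
                x = r * x * (1 - x)
                acc += np.log(abs(r * (1 - 2 * x)))
            return acc / n

        for r in (2.9, 3.5, 3.9):
            print(f"r = {r}: lambda ~ {lyapunov_logistic(r):+.3f}")
        # A positive exponent (r = 3.9) signals chaos; negative values do not.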

  11. An analysis of respondent-driven sampling with injecting drug users in a high HIV prevalent state of India.

    Phukan, Sanjib Kumar; Medhi, Gajendra Kumar; Mahanta, Jagadish; Adhikary, Rajatashuvra; Thongamba, Gay; Paranjape, Ramesh S; Akoijam, Brogen S

    2017-07-03

    Personal networks are significant social spaces for the spread of HIV or other blood-borne infections among hard-to-reach populations, viz. injecting drug users, female sex workers, etc. Sharing of infected needles or syringes among drug users is one of the major routes of HIV transmission in Manipur, a high HIV prevalence state in India. This study was carried out to describe the network characteristics and recruitment patterns of injecting drug users and to assess the association of personal networks with injecting risk behaviors in Manipur. A total of 821 injecting drug users were recruited into the study using respondent-driven sampling (RDS) from the Bishnupur and Churachandpur districts of Manipur; data on demographic characteristics, HIV risk behaviors, and network size were collected from them. Transition probability matrices and homophily indices were used to describe the network characteristics and recruitment patterns of injecting drug users. Univariate and multivariate binary logistic regression models were fitted to analyze the association between personal networks and sharing of needles or syringes. The average network size was similar in both districts. Recruitment analysis indicates injecting drug users were mostly engaged in mixed age-group settings for injecting practice. Ever-married and new injectors showed a lack of in-group ties. Younger injecting drug users had mainly recruited older injecting drug users from their personal network. In the logistic regression analysis, a larger personal network was found to be significantly associated with an increased likelihood of injecting risk behaviors. Because of the mixed personal networks of new injectors and the higher network density associated with HIV exposure, older injecting drug users may act as a link for transmission of HIV or other blood-borne infections to new injectors and also to their sexual partners. The information from this study may be useful to understanding the network pattern of injecting drug users

  12. Safety analysis of a high temperature gas-cooled reactor

    Shimazu, Akira; Morimoto, Toshio

    1975-01-01

    In recent years, in order to satisfy the social requirements of environment and safety and also to cope with the current energy stringency, the installation of safe nuclear power plants is indispensable. Hence, safety analysis and evaluation to confirm quantitatively the safety design of a nuclear power plant become more and more important. The safety analysis and its methods for a high temperature gas-cooled reactor are described, with emphasis placed on the practices of Fuji Electric Manufacturing Co.: the fundamental rules for securing plant safety; safety analysis in normal operation regarding plant dynamic characteristics and radioactivity evaluation; and safety analysis at the time of accidents regarding plant response to the accidents and radioactivity evaluation. (Mori, K.)

  13. High throughput on-chip analysis of high-energy charged particle tracks using lensfree imaging

    Luo, Wei; Shabbir, Faizan; Gong, Chao; Gulec, Cagatay; Pigeon, Jeremy; Shaw, Jessica; Greenbaum, Alon; Tochitsky, Sergei; Joshi, Chandrashekhar [Electrical Engineering Department, University of California, Los Angeles, California 90095 (United States); Ozcan, Aydogan, E-mail: ozcan@ucla.edu [Electrical Engineering Department, University of California, Los Angeles, California 90095 (United States); Bioengineering Department, University of California, Los Angeles, California 90095 (United States); California NanoSystems Institute (CNSI), University of California, Los Angeles, California 90095 (United States)

    2015-04-13

    We demonstrate a high-throughput charged particle analysis platform, which is based on lensfree on-chip microscopy for rapid ion track analysis using allyl diglycol carbonate, i.e., CR-39 plastic polymer as the sensing medium. By adopting a wide-area opto-electronic image sensor together with a source-shifting based pixel super-resolution technique, a large CR-39 sample volume (i.e., 4 cm × 4 cm × 0.1 cm) can be imaged in less than 1 min using a compact lensfree on-chip microscope, which detects partially coherent in-line holograms of the ion tracks recorded within the CR-39 detector. After the image capture, using highly parallelized reconstruction and ion track analysis algorithms running on graphics processing units, we reconstruct and analyze the entire volume of a CR-39 detector within ∼1.5 min. This significant reduction in the entire imaging and ion track analysis time not only increases our throughput but also allows us to perform time-resolved analysis of the etching process to monitor and optimize the growth of ion tracks during etching. This computational lensfree imaging platform can provide a much higher throughput and more cost-effective alternative to traditional lens-based scanning optical microscopes for ion track analysis using CR-39 and other passive high energy particle detectors.

  14. Analysis and Design of High-Order Parallel Resonant Converters

    Batarseh, Issa Eid

    1990-01-01

    In this thesis, a special state variable transformation technique has been derived for the analysis of high-order dc-to-dc resonant converters. Converters comprised of high-order resonant tanks have the advantage of utilizing the parasitic elements by making them part of the resonant tank. A new set of state variables is defined in order to make use of two-dimensional state-plane diagrams in the analysis of high-order converters. Such a method has been successfully used for the analysis of the conventional Parallel Resonant Converter (PRC). Consequently, two-dimensional state-plane diagrams are used to analyze the steady-state response for third- and fourth-order PRCs when these converters are operated in the continuous conduction mode. Based on this analysis, a set of control characteristic curves for the LCC-, LLC- and LLCC-type PRC are presented from which various converter design parameters are obtained. Various design curves for component value selection and device ratings are given. This analysis of high-order resonant converters shows that the addition of reactive components to the resonant tank results in converters with better performance characteristics when compared with the conventional second-order PRC. A complete design procedure along with design examples for 2nd-, 3rd- and 4th-order converters is presented. Practical power supply units, normally used for computer applications, were built and tested using the LCC-, LLC- and LLCC-type commutation schemes. In addition, computer simulation results are presented for these converters in order to verify the theoretical results.

  15. Multiscale Thermo-Mechanical Design and Analysis of High Frequency and High Power Vacuum Electron Devices

    Gamzina, Diana

    A methodology for performing thermo-mechanical design and analysis of high frequency and high average power vacuum electron devices is presented. This methodology results in a "first-pass" engineering design directly ready for manufacturing. The methodology includes establishment of thermal and mechanical boundary conditions, evaluation of convective film heat transfer coefficients, identification of material options, evaluation of temperature and stress field distributions, assessment of microscale effects on the stress state of the material, and fatigue analysis. The feature size of vacuum electron devices operating in the high frequency regime of 100 GHz to 1 THz is comparable to the microstructure of the materials employed for their fabrication. As a result, the thermo-mechanical performance of a device is affected by the local material microstructure. Such multiscale effects on the stress state are considered in the range of scales from about 10 microns up to a few millimeters. The design and analysis methodology is demonstrated on three separate microwave devices: a 95 GHz 10 kW cw sheet beam klystron, a 263 GHz 50 W long pulse wide-bandwidth sheet beam travelling wave tube, and a 346 GHz 1 W cw backward wave oscillator.

  16. High resolution melting (HRM) analysis of DNA--its role and potential in food analysis.

    Druml, Barbara; Cichna-Markl, Margit

    2014-09-01

    DNA based methods play an increasing role in food safety control and food adulteration detection. Recent papers show that high resolution melting (HRM) analysis is an interesting approach. It involves amplification of the target of interest in the presence of a saturation dye by the polymerase chain reaction (PCR) and subsequent melting of the amplicons by gradually increasing the temperature. Since the melting profile depends on the GC content, length, sequence and strand complementarity of the product, HRM analysis is highly suitable for the detection of single-base variants and small insertions or deletions. The review gives an introduction into HRM analysis, covers important aspects in the development of an HRM analysis method and describes how HRM data are analysed and interpreted. Then we discuss the potential of HRM analysis based methods in food analysis, i.e. for the identification of closely related species and cultivars and the identification of pathogenic microorganisms. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. High enrichment to low enrichment core's conversion. Accidents analysis

    Abbate, P.; Rubio, R.; Doval, A.; Lovotti, O.

    1990-01-01

    This work analyzes the different accidents that may occur in the reactor facility after conversion of the core to 20% enriched uranium. The reactor (of 5 MW thermal), built in the 1950s and 60s, is of the 'swimming pool' type, with light water and MTR-type curved-plate fuel elements enriched to 93.15%. The analysis includes: a) accidents by reactivity insertion; b) accidents by coolant loss; c) accidents by flow loss; and d) fission product release. (Author)

  18. Pressurizer pump reliability analysis high flux isotope reactor

    Merryman, L.; Christie, B.

    1993-01-01

    During a prolonged outage from November 1986 to May 1990, numerous changes were made at the High Flux Isotope Reactor (HFIR). Some of these changes involved the pressurizer pumps. An analysis was performed to calculate the impact of these changes on the pressurizer system availability. The analysis showed that the availability of the pressurizer system dropped from essentially 100% to approximately 96%. The primary reason for the decrease in availability is that off-site power grid disturbances sometimes result in a reactor trip with the present pressurizer pump configuration. Changes are being made to the present pressurizer pump configuration to regain some of the lost availability

  19. Neutron analysis of the fuel of high temperature nuclear reactors

    Bastida O, G. E.; Francois L, J. L.

    2014-10-01

    In this work a neutron analysis of the fuel of some high temperature nuclear reactors is presented, studying their main features, along with some alternatives for fuel composed of uranium and plutonium and for coolant: sodium and helium. This study required a code able to carry out a reliable calculation of the main parameters of the fuel. The Monte Carlo method was convenient for simulating neutron transport in the reactor core, and it is the basis of the Serpent code, with which the calculations for the analysis were made. (Author)

  20. Solving nonlinear, High-order partial differential equations using a high-performance isogeometric analysis framework

    Cortes, Adriano Mauricio; Vignal, Philippe; Sarmiento, Adel; García, Daniel O.; Collier, Nathan; Dalcin, Lisandro; Calo, Victor M.

    2014-01-01

    In this paper we present PetIGA, a high-performance implementation of Isogeometric Analysis built on top of PETSc. We show its use in solving nonlinear and time-dependent problems, such as phase-field models, by taking advantage of the high-continuity of the basis functions granted by the isogeometric framework. In this work, we focus on the Cahn-Hilliard equation and the phase-field crystal equation.

  1. High-dimensional cluster analysis with the Masked EM Algorithm

    Kadir, Shabnam N.; Goodman, Dan F. M.; Harris, Kenneth D.

    2014-01-01

    Cluster analysis faces two problems in high dimensions: first, the “curse of dimensionality” that can lead to overfitting and poor generalization performance; and second, the sheer time taken for conventional algorithms to process large amounts of high-dimensional data. We describe a solution to these problems, designed for the application of “spike sorting” for next-generation high channel-count neural probes. In this problem, only a small subset of features provide information about the cluster membership of any one data vector, but this informative feature subset is not the same for all data points, rendering classical feature selection ineffective. We introduce a “Masked EM” algorithm that allows accurate and time-efficient clustering of up to millions of points in thousands of dimensions. We demonstrate its applicability to synthetic data, and to real-world high-channel-count spike sorting data. PMID:25149694

  2. Modeling high temperature materials behavior for structural analysis

    Naumenko, Konstantin

    2016-01-01

    This monograph presents approaches to characterize inelastic behavior of materials and structures at high temperature. Starting from experimental observations, it discusses basic features of inelastic phenomena including creep, plasticity, relaxation, low cycle and thermal fatigue. The authors formulate constitutive equations to describe the inelastic response for the given states of stress and microstructure. They introduce evolution equations to capture hardening, recovery, softening, ageing and damage processes. Principles of continuum mechanics and thermodynamics are presented to provide a framework for modeling materials behavior with the aim of structural analysis of high-temperature engineering components.

  3. Analysis of slug tests in formations of high hydraulic conductivity.

    Butler, James J; Garnett, Elizabeth J; Healey, John M

    2003-01-01

    A new procedure is presented for the analysis of slug tests performed in partially penetrating wells in formations of high hydraulic conductivity. This approach is a simple, spreadsheet-based implementation of existing models that can be used for analysis of tests from confined or unconfined aquifers. Field examples of tests exhibiting oscillatory and nonoscillatory behavior are used to illustrate the procedure and to compare results with estimates obtained using alternative approaches. The procedure is considerably simpler than recently proposed methods for this hydrogeologic setting. Although the simplifications required by the approach can introduce error into hydraulic-conductivity estimates, this additional error becomes negligible when appropriate measures are taken in the field. These measures are summarized in a set of practical field guidelines for slug tests in highly permeable aquifers.

  4. Topics in statistical data analysis for high-energy physics

    Cowan, G.

    2011-01-01

    These lectures concern two topics that are becoming increasingly important in the analysis of high-energy physics data: Bayesian statistics and multivariate methods. In the Bayesian approach, we extend the interpretation of probability not only to cover the frequency of repeatable outcomes but also to include a degree of belief. In this way we are able to associate probability with a hypothesis and thus to answer directly questions that cannot be addressed easily with traditional frequentist methods. In multivariate analysis, we try to exploit as much information as possible from the characteristics that we measure for each event to distinguish between event types. In particular we will look at a method that has gained popularity in high-energy physics in recent years: the boosted decision tree. Finally, we give a brief sketch of how multivariate methods may be applied in a search for a new signal process. (author)
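
    As a brief illustration of the boosted decision tree mentioned above, the sketch below trains scikit-learn's gradient boosting classifier on synthetic two-class "event" data; it is a generic example, not the lecture's own, and the dataset and hyperparameters are arbitrary assumptions.

        from sklearn.datasets import make_classification
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.model_selection import train_test_split

        # Synthetic signal-vs-background classification problem.
        X, y = make_classification(n_samples=2000, n_features=10,
                                   n_informative=5, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        # An ensemble of shallow decision trees built by gradient boosting.
        bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3)
        bdt.fit(X_tr, y_tr)
        print("held-out classification accuracy:", bdt.score(X_te, y_te))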

  5. Quantitative high-resolution genomic analysis of single cancer cells.

    Hannemann, Juliane; Meyer-Staeckling, Sönke; Kemming, Dirk; Alpers, Iris; Joosse, Simon A; Pospisil, Heike; Kurtz, Stefan; Görndt, Jennifer; Püschel, Klaus; Riethdorf, Sabine; Pantel, Klaus; Brandt, Burkhard

    2011-01-01

    During cancer progression, specific genomic aberrations arise that can determine the scope of the disease and can be used as predictive or prognostic markers. The detection of specific gene amplifications or deletions in single blood-borne or disseminated tumour cells that may give rise to the development of metastases is of great clinical interest but technically challenging. In this study, we present a method for quantitative high-resolution genomic analysis of single cells. Cells were isolated under permanent microscopic control followed by high-fidelity whole genome amplification and subsequent analyses by fine tiling array-CGH and qPCR. The assay was applied to single breast cancer cells to analyze the chromosomal region centred on the therapeutically relevant EGFR gene. This method allows precise quantitative analysis of copy number variations in single cell diagnostics.

  6. Quantitative high-resolution genomic analysis of single cancer cells.

    Juliane Hannemann

    Full Text Available During cancer progression, specific genomic aberrations arise that can determine the scope of the disease and can be used as predictive or prognostic markers. The detection of specific gene amplifications or deletions in single blood-borne or disseminated tumour cells that may give rise to the development of metastases is of great clinical interest but technically challenging. In this study, we present a method for quantitative high-resolution genomic analysis of single cells. Cells were isolated under permanent microscopic control followed by high-fidelity whole genome amplification and subsequent analyses by fine tiling array-CGH and qPCR. The assay was applied to single breast cancer cells to analyze the chromosomal region centred on the therapeutically relevant EGFR gene. This method allows precise quantitative analysis of copy number variations in single cell diagnostics.

  7. High Performance Descriptive Semantic Analysis of Semantic Graph Databases

    Joslyn, Cliff A.; Adolf, Robert D.; al-Saffar, Sinan; Feo, John T.; Haglin, David J.; Mackey, Greg E.; Mizell, David W.

    2011-06-02

    As semantic graph database technology grows to address components ranging from extant large triple stores to SPARQL endpoints over SQL-structured relational databases, it will become increasingly important to be able to understand their inherent semantic structure, whether codified in explicit ontologies or not. Our group is researching novel methods for what we call descriptive semantic analysis of RDF triplestores, to serve purposes of analysis, interpretation, visualization, and optimization. But data size and computational complexity make it increasingly necessary to bring high performance computational resources to bear on this task. Our research group built a novel high performance hybrid system comprising computational capability for semantic graph database processing utilizing the large multi-threaded architecture of the Cray XMT platform, conventional servers, and large data stores. In this paper we describe that architecture and our methods, and present the results of our analyses of basic properties, connected components, namespace interaction, and typed paths for the Billion Triple Challenge 2010 dataset.

  8. High-dimensional data in economics and their (robust) analysis

    Kalina, Jan

    2017-01-01

    Roč. 12, č. 1 (2017), s. 171-183 ISSN 1452-4864 R&D Projects: GA ČR GA17-07384S Institutional support: RVO:67985556 Keywords : econometrics * high-dimensional data * dimensionality reduction * linear regression * classification analysis * robustness Subject RIV: BA - General Mathematics OBOR OECD: Business and management http://library.utia.cas.cz/separaty/2017/SI/kalina-0474076.pdf

  9. High-dimensional Data in Economics and their (Robust) Analysis

    Kalina, Jan

    2017-01-01

    Roč. 12, č. 1 (2017), s. 171-183 ISSN 1452-4864 R&D Projects: GA ČR GA17-07384S Grant - others:GA ČR(CZ) GA13-01930S Institutional support: RVO:67985807 Keywords : econometrics * high-dimensional data * dimensionality reduction * linear regression * classification analysis * robustness Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Statistics and probability

  10. High accuracy 3D electromagnetic finite element analysis

    Nelson, E.M.

    1996-01-01

    A high accuracy 3D electromagnetic finite element field solver employing quadratic hexahedral elements and quadratic mixed-order one-form basis functions will be described. The solver is based on an object-oriented C++ class library. Test cases demonstrate that frequency errors less than 10 ppm can be achieved using modest workstations, and that the solutions have no contamination from spurious modes. The role of differential geometry and geometrical physics in finite element analysis will also be discussed

  11. High accuracy 3D electromagnetic finite element analysis

    Nelson, Eric M.

    1997-01-01

    A high accuracy 3D electromagnetic finite element field solver employing quadratic hexahedral elements and quadratic mixed-order one-form basis functions will be described. The solver is based on an object-oriented C++ class library. Test cases demonstrate that frequency errors less than 10 ppm can be achieved using modest workstations, and that the solutions have no contamination from spurious modes. The role of differential geometry and geometrical physics in finite element analysis will also be discussed

  12. Bulk Materials Analysis Using High-Energy Positron Beams

    Glade, S C; Asoka-Kumar, P; Nieh, T G; Sterne, P A; Wirth, B D; Dauskardt, R H; Flores, K M; Suh, D; Odette, G.R.

    2002-01-01

    This article reviews some recent materials analysis results using high-energy positron beams at Lawrence Livermore National Laboratory. We are combining positron lifetime and orbital electron momentum spectroscopic methods to provide electron number densities and electron momentum distributions around positron annihilation sites. Topics covered include: correlation of positron annihilation characteristics with structural and mechanical properties of bulk metallic glasses, compositional studies of embrittling features in nuclear reactor pressure vessel steel, pore characterization in Zeolites, and positron annihilation characteristics in alkali halides

  13. Identifying individuals at high risk of psychosis: predictive utility of Support Vector Machine using structural and functional MRI data

    Isabel Valli

    2016-04-01

    Full Text Available The identification of individuals at high risk of developing psychosis is entirely based on clinical assessment, which is associated with limited predictive potential. There is therefore increasing interest in the development of biological markers that could be used in clinical practice for this purpose. We studied 25 individuals with an At Risk Mental State for psychosis and 25 healthy controls using structural MRI, and functional MRI in conjunction with a verbal memory task. Data were analysed using a standard univariate analysis, and with Support Vector Machine (SVM), a multivariate pattern recognition technique that enables statistical inferences to be made at the level of the individual, yielding results with high translational potential. The application of SVM to structural MRI data permitted the identification of individuals at high risk of psychosis with a sensitivity of 68% and a specificity of 76%, resulting in an accuracy of 72% (p<0.001). Univariate volumetric between-group differences did not reach statistical significance. In contrast, the univariate fMRI analysis identified between-group differences (p<0.05 corrected) while the application of SVM to the same data did not. Since SVM is well suited to identifying the pattern of abnormality that distinguishes two groups, whereas univariate methods are more likely to identify regions that individually are most different between two groups, our results suggest the presence of focal functional abnormalities in the context of a diffuse pattern of structural abnormalities in individuals at high clinical risk of psychosis.
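
    A hedged sketch of the SVM scheme described above follows, with synthetic features standing in for the MRI data; the sensitivity and specificity printed by this code refer only to the toy data and are unrelated to the 68%/76% reported in the study.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(5)
        n_per_group, n_features = 25, 100
        X = rng.normal(size=(2 * n_per_group, n_features))
        X[:n_per_group, :10] += 0.6   # a weak multivariate group difference
        y = np.r_[np.ones(n_per_group), np.zeros(n_per_group)]  # 1 = at-risk

        # Cross-validated predictions, so each subject is classified
        # by a model that never saw it during training.
        pred = cross_val_predict(SVC(kernel="linear"), X, y, cv=5)
        sens = pred[y == 1].mean()    # at-risk subjects correctly labeled
        spec = 1 - pred[y == 0].mean()
        print(f"sensitivity {sens:.2f}, specificity {spec:.2f}")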

  14. Neutron activation analysis of high pure uranium using preconcentration

    Sadikov, I.I.; Rakhimov, A.V.; Salimov, M.I.; Zinov'ev, V.G.

    2006-01-01

    Full text: Uranium and its compounds are used as nuclear fuel, and the requirements for the purity of initial uranium are very high. Therefore highly sensitive and multielemental analysis of uranium is required. One such method is neutron activation analysis (NAA). During irradiation of uranium by nuclear reactor neutrons, the induced radioactivity of a sample is formed by the uranium radionuclide ^239U (T_1/2 = 23.4 min) and its daughter radionuclide ^239Np (T_1/2 = 2.39 d). Short-lived ^239U almost completely decays within 24 hours after irradiation, and the radioactivity of the sample is then mainly due to ^239Np, amounting to more than 10^9 Bq for a 0.1 g uranium sample (F = 1 x 10^14 cm^-2 s^-1, t_irr = 5 h). That is why nondestructive determination of the impurities is impossible and they should be separated from ^239Np. When irradiated, uranium yields fission products - radionuclides of some elements with mass numbers 91-104 and 131-144. The main problem in NAA of uranium is to correctly take into account the influence of fission products on the analysis results. We have developed a radiochemical separation procedure for RNAA of uranium [1]. Comparing the results of analyses carried out by radiochemical NAA and by instrumental NAA with preconcentration of trace elements can be used to evaluate the interference of fission products in the uranium analysis results. Preconcentration of trace elements has been carried out by extraction chromatography in the 'TBP - 6M HNO3' system [1]. Experiments have shown that if a 0.1 g uranium sample is taken for analysis (F = 1 x 10^14 cm^-2 s^-1, t_irr = 5 h), the apparent concentrations of Y, Zr, Mo, Cs, La, Ce, Pr and Nd exceed the true concentrations by 2500-3000 times, so determination of these elements is not possible by radiochemical NAA. (author)

  15. Dynamic Stability Analysis Using High-Order Interpolation

    Juarez-Toledo C.

    2012-10-01

    Full Text Available A non-linear model with robust precision for transient stability analysis in multimachine power systems is proposed. The proposed formulation uses Lagrange interpolation and Newton's divided differences. The high-order interpolation technique developed can be used for evaluation of the critical conditions of the dynamic system. The technique is applied to a 5-area, 45-machine model of the Mexican interconnected system. As a particular case, this paper shows the application of the high-order procedure for identifying the slow-frequency mode for a critical contingency. Numerical examples illustrate the method and demonstrate the ability of the high-order technique to isolate and extract temporal modal behavior.
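
    As a sketch of one of the two interpolation techniques named above, the following generic implementation of Newton's divided differences is purely illustrative and unrelated to the authors' power-system code; the sample nodes are arbitrary.

        import numpy as np

        def newton_coefficients(x, y):
            """Divided-difference coefficients of Newton's interpolating polynomial."""
            c = np.array(y, dtype=float)
            for j in range(1, len(x)):
                c[j:] = (c[j:] - c[j - 1:-1]) / (x[j:] - x[:-j])
            return c

        def newton_eval(x, c, t):
            """Evaluate the Newton form at t via a Horner-like scheme."""
            p = c[-1]
            for xk, ck in zip(x[-2::-1], c[-2::-1]):
                p = p * (t - xk) + ck
            return p

        x = np.array([0.0, 1.0, 2.0, 4.0])
        y = np.sin(x)
        c = newton_coefficients(x, y)
        print(newton_eval(x, c, 3.0), np.sin(3.0))  # interpolant vs. true value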

  16. Evaluating statistical and clinical significance of intervention effects in single-case experimental designs: an SPSS method to analyze univariate data.

    Maric, Marija; de Haan, Else; Hogendoorn, Sanne M; Wolters, Lidewij H; Huizenga, Hilde M

    2015-03-01

    Single-case experimental designs are useful methods in clinical research practice to investigate individual client progress. Their proliferation might have been hampered by methodological challenges such as the difficulty applying existing statistical procedures. In this article, we describe a data-analytic method to analyze univariate (i.e., one symptom) single-case data using the common package SPSS. This method can help the clinical researcher to investigate whether an intervention works as compared with a baseline period or another intervention type, and to determine whether symptom improvement is clinically significant. First, we describe the statistical method in a conceptual way and show how it can be implemented in SPSS. Simulation studies were performed to determine the number of observation points required per intervention phase. Second, to illustrate this method and its implications, we present a case study of an adolescent with anxiety disorders treated with cognitive-behavioral therapy techniques in an outpatient psychotherapy clinic, whose symptoms were regularly assessed before each session. We provide a description of the data analyses and results of this case study. Finally, we discuss the advantages and shortcomings of the proposed method. Copyright © 2014. Published by Elsevier Ltd.
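
    The article's procedure is implemented in SPSS; as a rough Python analogue (an assumption-laden sketch, not the authors' method, and ignoring the autocorrelation corrections such analyses may require), one can test a baseline-versus-intervention phase effect by regression on a phase indicator.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(6)
        baseline = rng.normal(20, 2, 15)           # 15 baseline (A) sessions
        treatment = rng.normal(14, 2, 15)          # 15 intervention (B) sessions
        y = np.r_[baseline, treatment]
        phase = np.r_[np.zeros(15), np.ones(15)]   # 0 = A phase, 1 = B phase

        # OLS of symptom score on the phase dummy: the slope estimates the
        # change from baseline to intervention.
        fit = sm.OLS(y, sm.add_constant(phase)).fit()
        print(fit.params)                          # intercept, phase effect
        print("p-value for the phase effect:", fit.pvalues[1])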

  17. Acquisition and analysis strategies in functional MRI at high fields

    Windischberger, C.

    2001-08-01

    Functional magnetic resonance imaging represents a non-invasive technique to examine neuronal activity in the brain. It applies radio waves to excite nuclear spins, using the emitted signal during relaxation for image generation. Signal modulations from local blood flow and oxygenation level changes caused by neuronal activity are the basis for calculating functional brain maps with high spatial resolution. The present work discusses concepts for improving the spatial and temporal resolution, as well as sophisticated analysis approaches. Besides an exhaustive description of image reconstruction algorithms, computational simulations on echo-shifting in echo-planar imaging are presented and effects on spatial resolution are quantified. The results demonstrate that echo-shifting causes only minimal resolution losses for high signal-to-noise data, but leads to severe resolution degradation (up to 30%) in images with low signal-to-noise ratios. After an overview of the mechanisms that cause fMRI signal changes subsequent to neuronal activity, explorative analysis algorithms like Fuzzy Cluster Analysis, as well as parametric approaches, are described and discussed. In the context of fMRI artifacts, effects of respiratory motion are examined. For the first time, well-defined breathing patterns are used to quantify the influences on fMRI signal intensity. Also, the variability of fMRI activation in a mental rotation paradigm is investigated using single-trial analysis. In this way, intra-subject activation consistency was determined successfully. Finally, in a second study on mental rotation, explorative data analysis was applied to retrieve neuro-functional hypotheses. (author)

  18. Recent advances in quantitative high throughput and high content data analysis.

    Moutsatsos, Ioannis K; Parker, Christian N

    2016-01-01

    High throughput screening has become a basic technique with which to explore biological systems. Advances in technology, including increased screening capacity, as well as methods that generate multiparametric readouts, are driving the need for improvements in the analysis of data sets derived from such screens. This article covers the recent advances in the analysis of high throughput screening data sets from arrayed samples, as well as the recent advances in the analysis of cell-by-cell data sets derived from image or flow cytometry applications. Screening multiple genomic reagents targeting any given gene creates additional challenges, and so methods that prioritize individual gene targets have been developed. The article reviews many of the open source data analysis methods that are now available and which are helping to define a consensus on the best practices to use when analyzing screening data. As data sets become larger and more complex, the need for easily accessible data analysis tools will continue to grow. The presentation of such complex data sets to facilitate quality control monitoring and interpretation of the results will require the development of novel visualizations. In addition, advanced statistical and machine learning algorithms that can help identify patterns, correlations and the best features in massive data sets will be required. The ease of use for these tools will be important, as they will need to be used iteratively by laboratory scientists to improve the outcomes of complex analyses.

  19. Transport analysis of high radiation and high density plasmas in the ASDEX Upgrade tokamak

    Casali L.

    2014-01-01

    Full Text Available Future fusion reactors, foreseen in the “European road map” such as DEMO, will operate under more demanding conditions compared to present devices. They will require high divertor and core radiation by impurity seeding to reduce heat loads on divertor target plates. In addition, DEMO will have to work at high core densities to reach adequate fusion performance. The performance of fusion reactors depends on three essential parameters: temperature, density and energy confinement time. The latter characterizes the loss rate due to both radiation and transport processes. The foreseen DEMO scenarios described above have not been investigated so far, but are now addressed at the ASDEX Upgrade tokamak. In this work we present the transport analysis of such scenarios. Plasmas with high radiation by impurity seeding: transport analysis taking into account the radiation distribution shows no change in transport during impurity seeding. The observed confinement improvement is an effect of higher pedestal temperatures which extend to the core via stiffness. A non-coronal radiation model was developed and compared to the bolometric measurements in order to provide a reliable radiation profile for transport calculations. High density plasmas with pellets: the analysis of kinetic profiles reveals a transient phase at the start of the pellet fuelling due to a slower density build-up compared to the temperature decrease. The low particle diffusion can explain the confinement behaviour.

  20. Local buckling failure analysis of high-strength pipelines

    Yan Li; Jian Shuai; Zhong-Li Jin; Ya-Tong Zhao; Kui Xu

    2017-01-01

    Pipelines in geological disaster regions typically run the risk of local buckling failure because of their slender structure and complex loads. This paper reveals the local buckling behavior of buried pipelines with a large diameter and high strength under different conditions, including pure bending and bending combined with internal pressure. Finite element analysis was built according to previous data to study the local buckling behavior of pressurized and unpressurized pipes under bending conditions and their differences in local buckling failure modes. In the parametric analysis, a series of parameters, including pipe geometrical dimensions, pipe material properties and internal pressure, were selected to study their influences on the critical bending moment, critical compressive stress and critical compressive strain of pipes. In particular, the hardening exponent of the pipe material was introduced to the parametric analysis by using the Ramberg-Osgood constitutive model. Results showed that geometrical dimensions, material and internal pressure exert similar effects on the critical bending moment and critical compressive stress, but different, even opposite, effects on the critical compressive strain. Based on these analyses, more accurate design models of critical bending moment and critical compressive stress have been proposed for high-strength pipelines under bending conditions, which provide theoretical methods for high-strength pipeline engineering.
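
    The Ramberg-Osgood model mentioned above relates stress and total strain through a hardening exponent; a minimal sketch, with illustrative X80-like parameter values rather than the grades studied in the paper:

```python
# Ramberg-Osgood total strain for a given stress; parameter values are
# illustrative assumptions, not the pipe grades studied in the paper.
E = 210e3        # Young's modulus, MPa
sigma_y = 555.0  # nominal yield stress for an X80-like steel, MPa
n = 12.0         # hardening exponent

def ramberg_osgood_strain(sigma):
    # elastic part + 0.2%-offset plastic part
    return sigma / E + 0.002 * (sigma / sigma_y) ** n

for s in (300.0, 450.0, 555.0):
    print(f"stress {s:.0f} MPa -> strain {ramberg_osgood_strain(s):.5f}")
```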

  1. Uncertainty Instability Risk Analysis of High Concrete Arch Dam Abutments

    Xin Cao

    2017-01-01

    Full Text Available The uncertainties associated with concrete arch dams rise with the increased height of dams. Given the uncertainties associated with influencing factors, the stability of high arch dam abutments as a fuzzy random event was studied. In addition, given the randomness and fuzziness of calculation parameters as well as the failure criterion, hazard point and hazard surface uncertainty instability risk ratio models were proposed for high arch dam abutments on the basis of credibility theory. The uncertainty instability failure criterion was derived through the analysis of the progressive instability failure process on the basis of Shannon’s entropy theory. The uncertainties associated with influencing factors were quantized by probability or possibility distribution assignments. Gaussian random theory was used to generate random realizations for influence factors with spatial variability. The uncertainty stability analysis method was proposed by combining the finite element analysis and the limit equilibrium method. The instability risk ratio was calculated using the Monte Carlo simulation method and fuzzy random postprocessing. Results corroborate that the modeling approach is sound and that the calculation method is feasible.
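
    The Monte Carlo step of such an analysis reduces, in skeleton form, to sampling the uncertain factors and counting unstable realizations; a deliberately simplified sketch with illustrative distributions, ignoring the FEM/limit-equilibrium and fuzzy postprocessing stages of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Hypothetical random resistance and load on an abutment sliding surface;
# the distributions are illustrative, not those of the cited dam study.
resistance = rng.normal(loc=1.8e6, scale=0.30e6, size=N)  # kN
load = rng.normal(loc=1.2e6, scale=0.25e6, size=N)        # kN

# Instability = factor of safety below 1; the risk ratio is its frequency.
factor_of_safety = resistance / load
risk_ratio = np.mean(factor_of_safety < 1.0)
print(f"estimated instability risk ratio: {risk_ratio:.4f}")
```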

  2. Nonlinear dynamic analysis of high energy line pipe whip

    Hsu, L.C.; Kuo, A.Y.; Tang, H.T.

    1983-01-01

    To facilitate potential cost savings in pipe whip protection design, TVA conducted a 1'' high pressure line break test to investigate the pipe whip behavior. The test results are available to EPRI as a data base for a generic study on nonlinear dynamic behavior of piping systems and pipe whip phenomena. This paper describes a nonlinear dynamic analysis of the TVA high energy line tests using ABAQUS-EPGEN code. The analysis considers the effects of large deformation and high strain rate on resisting moment and energy absorption capability of the analyzed piping system. The numerical results of impact forces, impact velocities, and reaction forces at pipe supports are compared to the TVA test data. The pipe whip impact time and forces have also been calculated per the current NRC guidelines and compared. The calculated pipe support reaction forces prior to impact have been found to be in good agreement with the TVA test data except for some peak values at the very beginning of the pipe break. These peaks are believed to be due to stress wave propagation which cannot be addressed by the ABAQUS code. Both the effects of elbow crushing and strain rate have been approximately simulated. The results are found to be important on pipe whip impact evaluation. (orig.)

  3. High-Speed Video Analysis in a Conceptual Physics Class

    Desbien, Dwain M.

    2011-09-01

    The use of probeware and computers has become quite common in introductory physics classrooms. Video analysis is also becoming more popular and is available to a wide range of students through commercially available and/or free software. Video analysis allows for the study of motions that cannot be easily measured in the traditional lab setting and also allows real-world situations to be analyzed. Many motions are too fast to be captured easily at the standard video frame rate of 30 frames per second (fps) employed by most video cameras. This paper will discuss using a consumer camera that can record high-frame-rate video in a college-level conceptual physics class. In particular this will involve the use of model rockets to determine the acceleration during the boost period right at launch and compare it to a simple model of the expected acceleration.
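
    The boost acceleration can be recovered from frame-by-frame positions with a quadratic fit; a minimal sketch with invented altitude readings consistent with roughly 60 m/s^2:

```python
import numpy as np

fps = 240.0                      # high-frame-rate consumer camera
frame_step = 8                   # digitize every 8th frame
t = np.arange(12) * frame_step / fps

# Hypothetical rocket altitudes (m) read off the video during boost
# (illustrative values, not measured data).
y = np.array([0.00, 0.03, 0.13, 0.30, 0.53, 0.83,
              1.20, 1.63, 2.13, 2.70, 3.33, 4.03])

# Fit y = y0 + v0*t + (a/2)*t^2; twice the quadratic coefficient is the acceleration.
c2, c1, c0 = np.polyfit(t, y, 2)
print(f"boost acceleration ≈ {2 * c2:.1f} m/s^2")
```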

  4. Thermal analysis of high temperature phase transformations of steel

    K. Gryc

    2013-10-01

    Full Text Available A series of thermal analysis measurements of high temperature phase transformations of a real grain-oriented electrical steel grade was carried out on two analytical devices (Netzsch STA 449 F3 Jupiter; Setaram SETSYS 18TM). Two thermoanalytical methods were used (DTA and direct thermal analysis), with different sample weights (200 mg, 23 g). The stability/reproducibility of the results obtained by the methodologies used was verified. The liquidus and solidus temperatures were determined for close-to-equilibrium conditions and during cooling (20 °C/min; 80 °C/min). It has been shown that the higher cooling rate leads to lower temperatures for the start and end of the solidification process of the studied steel grade.

  5. Acquisition and Analysis of Data from High Concentration Solutions

    Besong, Tabot M.D.

    2016-05-13

    The problems associated with ultracentrifugal analysis of macromolecular solutions at high concentration (>10 mg/ml) are reviewed. Especially in the case of solutes which are non-monodisperse, meaningful results are not readily achievable using sedimentation velocity approaches. It is shown, however, by both simulation and analysis of practical data, that using a modified form of an algorithm (INVEQ) published in other contexts, sedimentation equilibrium (SE) profiles can be analysed successfully, enabling topics such as oligomer presence or formation to be defined. To achieve this, it is necessary to employ an approach in which the solution density, which in an SE profile is radius-dependent, is taken into consideration. Simulation suggests that any reasonable level of solute concentration can be analysed.

  6. Acquisition and Analysis of Data from High Concentration Solutions

    Besong, Tabot M.D.; Rowe, Arthur J.

    2016-01-01

    The problems associated with ultracentrifugal analysis of macromolecular solutions at high concentration (>10 mg/ml) are reviewed. Especially in the case of solutes which are non-monodisperse, meaningful results are not readily achievable using sedimentation velocity approaches. It is shown, however, by both simulation and analysis of practical data, that using a modified form of an algorithm (INVEQ) published in other contexts, sedimentation equilibrium (SE) profiles can be analysed successfully, enabling topics such as oligomer presence or formation to be defined. To achieve this, it is necessary to employ an approach in which the solution density, which in an SE profile is radius-dependent, is taken into consideration. Simulation suggests that any reasonable level of solute concentration can be analysed.

  7. Thermal design and analysis of high power star sensors

    Fan Jiang

    2015-09-01

    Full Text Available The requirement for temperature stability is very high in star sensors because of the high precision needed for attitude information. Thermal design and analysis is thus important for high power star sensors and their supporters. The CCD, normally with a Peltier thermoelectric cooler (PTC), is the most important sensor component in star sensors, and also the main heat source in the star sensor suite. The major objective of the thermal design in this paper is to design a radiator to optimize the heat diffusion for the CCD and PTC. The structural configuration of the star sensors, the heat sources and the orbit parameters are first introduced in this paper. The influences of the geometrical parameters and coating material characteristics of the radiators on heat diffusion were investigated by heat flux analysis. Carbon–carbon composites were then chosen to improve the thermal conductivity of the sensor supporters by studying the heat transfer path. The design is validated by simulation analysis and experiments on orbit. The satellite data show that the temperatures of the three star sensors are from 17.8 °C to 19.6 °C, while the simulation results are from 18.1 °C to 20.1 °C. The temperatures of the radiator are from 16.1 °C to 16.8 °C and the corresponding simulation results are from 16.0 °C to 16.5 °C. The temperature variation of each star sensor is less than 2 °C, which satisfies the design objectives.

  8. Neutron activation analysis of high-purity zinc

    Khodzhamberdyeva, A.A.; Usmanova, M.M.; Gil'bert, Eh.N.; Ivanov, I.M.; Yankovskaya, T.A.; Kholyavko, E.P.

    1987-01-01

    A method of neutron activation analysis of high-purity zinc, with preliminary separation of the zinc base by extraction with trialkylbenzylammonium rhodanide in carbon tetrachloride from 0.5-2.0 M nitric acid solutions, is developed. Only rhenium is quantitatively extracted together with zinc. Gold, iridium and molybdenum are extracted to 50-60%, and selenium to 20%. The Na, K, La, Cr, Sc, Co, Cs, Rb, Fe, Zr, Sn, Te, As, Cd, Hf, W, Sb, Sm impurities remain in the aqueous phase. The method permits determination of the above impurities with detection limits from 1x10^-6 to 4x10^-11 g

  9. High accuracy 3D electromagnetic finite element analysis

    Nelson, E.M.

    1997-01-01

    A high accuracy 3D electromagnetic finite element field solver employing quadratic hexahedral elements and quadratic mixed-order one-form basis functions will be described. The solver is based on an object-oriented C++ class library. Test cases demonstrate that frequency errors less than 10 ppm can be achieved using modest workstations, and that the solutions have no contamination from spurious modes. The role of differential geometry and geometrical physics in finite element analysis will also be discussed. copyright 1997 American Institute of Physics

  10. CCF analysis of high redundancy systems safety/relief valve data analysis and reference BWR application

    Mankamo, T.; Bjoere, S.; Olsson, Lena

    1992-12-01

    Dependent failure analysis and modeling were developed for high redundancy systems. The study included a comprehensive data analysis of safety and relief valves at the Finnish and Swedish BWR plants, resulting in improved understanding of Common Cause Failure mechanisms in these components. The reference application on the Forsmark 1/2 reactor relief system, consisting of twelve safety/relief lines and two regulating relief lines, covered different safety criteria cases of the reactor depressurization and overpressure protection functions, and failure-to-reclose sequences. For the quantification of dependencies, the Alpha Factor Model, the Binomial Probability Model and the Common Load Model were compared for applicability in high redundancy systems

  11. High frequency vibration analysis by the complex envelope vectorization.

    Giannini, O; Carcaterra, A; Sestieri, A

    2007-06-01

    The complex envelope displacement analysis (CEDA) is a procedure to solve high frequency vibration and vibro-acoustic problems, providing the envelope of the physical solution. CEDA is based on a variable transformation mapping the high frequency oscillations into signals of low frequency content and has been successfully applied to one-dimensional systems. However, the extension to plates and vibro-acoustic fields met serious difficulties, so that a general revision of the theory was carried out, leading finally to a new method, the complex envelope vectorization (CEV). In this paper the CEV method is described, outlining the merits and limits of the procedure, and a set of applications to vibration and vibro-acoustic problems of increasing complexity is presented.
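
    The variable transformation underlying CEDA/CEV maps a fast oscillation to its slowly varying complex envelope; the following sketch (an illustration of the idea, not the CEV algorithm itself) does this with a Hilbert transform and a frequency shift on a synthetic signal:

```python
import numpy as np
from scipy.signal import hilbert

# Illustration of the variable transformation behind envelope methods:
# a fast oscillation is shifted down to its slowly varying complex envelope.
fs = 10_000.0
t = np.arange(0.0, 0.2, 1.0 / fs)
carrier_hz = 2_000.0
envelope_true = 1.0 + 0.5 * np.cos(2 * np.pi * 25.0 * t)   # slow modulation
x = envelope_true * np.cos(2 * np.pi * carrier_hz * t)     # fast oscillation

analytic = hilbert(x)                                      # x + i*H[x]
complex_envelope = analytic * np.exp(-2j * np.pi * carrier_hz * t)
err = np.abs(np.abs(complex_envelope) - envelope_true)[200:-200].max()
print(f"max envelope error away from the edges: {err:.3f}")
```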

  12. High beta and second stability region transport and stability analysis

    1990-01-01

    This document summarizes progress made on the research of high beta and second region transport and stability. In the area second stability region studies we report on an investigation of the possibility of second region access in the center of TFTR ''supershots.'' The instabilities found may coincide with experimental observation. Significant progress has been made on the resistive stability properties of high beta poloidal ''supershot'' discharges. For these studies profiles were taken from the TRANSP transport analysis code which analyzes experimental data. Invoking flattening of the pressure profile on mode rational surfaces causes tearing modes to persist into the experimental range of interest. Further, the experimental observation of the modes seems to be consistent with the predictions of the MHD model. In addition, code development in several areas has proceeded

  13. Remote ignitability analysis of high-level radioactive waste

    Lundholm, C.W.; Morgan, J.M.; Shurtliff, R.M.; Trejo, L.E.

    1992-09-01

    The Idaho Chemical Processing Plant (ICPP) was used to reprocess nuclear fuel from government-owned reactors to recover the unused uranium-235. These processes generated highly radioactive liquid wastes which are stored in large underground tanks prior to being calcined into a granular solid. The Resource Conservation and Recovery Act (RCRA) and state/federal clean air statutes require waste characterization of these high level radioactive wastes for regulatory permitting and waste treatment purposes. The determination of the characteristic of ignitability is part of the required analyses prior to calcination and waste treatment. To perform this analysis in a radiologically safe manner, a remoted instrument was needed. The remote ignitability method and instrument will meet the 60 °C requirement prescribed for ignitability in Method 1020 of SW-846. The method for remote use will be equivalent to Method 1020 of SW-846

  14. Creep and Shrinkage of High Strength Concretes: an Experimental Analysis

    Berenice Martins Toralles Carbonari

    2002-01-01

    Full Text Available The creep and shrinkage behaviour of high strength silica fume concretes is significantly different from that of conventional concretes. In order to represent the proper time-dependent response of the material in structural analysis and design, these aspects should be adequately quantified. This paper discusses an experimental setup that is able to determine the creep and shrinkage of concrete from the time of placing. It also compares different gages that can be used for measuring the strains. The method is applied to five different concretes in the laboratory under controlled environmental conditions. The phenomena that are quantified can be classified as basic shrinkage, drying shrinkage, basic creep and drying creep. The relative importance of these mechanisms in high strength concrete will also be presented.

  15. Fractal dimension analysis in a highly granular calorimeter

    Ruan, M; Brient, J.C; Jeans, D; Videau, H

    2015-01-01

    The concept of “particle flow” has been developed to optimise the jet energy resolution by distinguishing the different jet components. A highly granular calorimeter designed for the particle flow algorithm provides an unprecedented level of detail for the reconstruction of calorimeter showers and enables new approaches to shower analysis. In this paper the measurement and use of the fractal dimension of showers is described. The fractal dimension is a characteristic number that measures the global compactness of the shower. It is highly dependent on the primary particle type and energy. Its application in identifying particles and estimating their energy is described in the context of a calorimeter designed for the International Linear Collider.
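
    A box-counting estimate is the textbook way to compute such a fractal dimension; a minimal sketch on an invented 2-D hit pattern (a uniform cloud, for which the estimate approaches 2):

```python
import numpy as np

def box_counting_dimension(points, box_sizes):
    """Estimate the fractal (box-counting) dimension of a 2-D hit pattern."""
    counts = []
    for s in box_sizes:
        # count occupied boxes of side s
        occupied = {(int(x // s), int(y // s)) for x, y in points}
        counts.append(len(occupied))
    # slope of log(count) vs log(1/size) estimates the dimension
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# Hypothetical shower hit coordinates (an illustrative random cloud,
# not detector data).
rng = np.random.default_rng(1)
hits = rng.random((2000, 2)) * 100.0
print(box_counting_dimension(hits, box_sizes=[1, 2, 4, 8, 16]))
```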

  16. Advanced High Temperature Reactor Systems and Economic Analysis

    Holcomb, David Eugene [ORNL; Peretz, Fred J [ORNL; Qualls, A L [ORNL

    2011-09-01

    The Advanced High Temperature Reactor (AHTR) is a design concept for a large-output [3400 MW(t)] fluoride-salt-cooled high-temperature reactor (FHR). FHRs, by definition, feature low-pressure liquid fluoride salt cooling, coated-particle fuel, a high-temperature power cycle, and fully passive decay heat rejection. The AHTR's large thermal output enables direct comparison of its performance and requirements with other high output reactor concepts. As high-temperature plants, FHRs can support either high-efficiency electricity generation or industrial process heat production. The AHTR analysis presented in this report is limited to the electricity generation mission. FHRs, in principle, have the potential to be low-cost electricity producers while maintaining full passive safety. However, no FHR has been built, and no FHR design has reached the stage of maturity where realistic economic analysis can be performed. The system design effort described in this report represents early steps along the design path toward being able to predict the cost and performance characteristics of the AHTR as well as toward being able to identify the technology developments necessary to build an FHR power plant. While FHRs represent a distinct reactor class, they inherit desirable attributes from other thermal power plants whose characteristics can be studied to provide general guidance on plant configuration, anticipated performance, and costs. Molten salt reactors provide experience on the materials, procedures, and components necessary to use liquid fluoride salts. Liquid metal reactors provide design experience on using low-pressure liquid coolants, passive decay heat removal, and hot refueling. High temperature gas-cooled reactors provide experience with coated particle fuel and graphite components. Light water reactors (LWRs) show the potentials of transparent, high-heat capacity coolants with low chemical reactivity. Modern coal-fired power plants provide design experience

  17. Thermal spike analysis of highly charged ion tracks

    Karlušić, M.; Jakšić, M.

    2012-01-01

    The irradiation of a material using a swift heavy ion or a highly charged ion causes excitation of the electron subsystem at the nanometer scale along the ion trajectory. According to the thermal spike model, energy deposited into the electron subsystem leads to a temperature increase due to electron–phonon coupling. If the ion-induced excitation is sufficiently intense, melting of the material can occur, and permanent damage (i.e., an ion track) can be formed upon rapid cooling. We present an extension of the analytical thermal spike model of Szenes for the analysis of surface ion tracks produced after the impact of highly charged ions. By applying the model to existing experimental data, more than 60% of the potential energy of the highly charged ion was shown to be retained in the material during the impact and transformed into the energy of the thermal spike. This value is much higher than the 20-40% of the transferred energy that goes into the thermal spike for a swift heavy ion. Thresholds for the formation of highly charged ion tracks in different materials show uniform behavior depending only on a few material parameters.

  18. Data intensive high energy physics analysis in a distributed cloud

    Charbonneau, A.; Agarwal, A.; Anderson, M.; Armstrong, P.; Fransham, K.; Gable, I.; Harris, D.; Impey, R.; Leavett-Brown, C.; Paterson, M.; Podaima, W.; Sobie, R. J.; Vliet, M.

    2012-02-01

    We show that distributed Infrastructure-as-a-Service (IaaS) compute clouds can be effectively used for the analysis of high energy physics data. We have designed a distributed cloud system that works with any application using large input data sets requiring a high throughput computing environment. The system uses IaaS-enabled science and commercial clusters in Canada and the United States. We describe the process in which a user prepares an analysis virtual machine (VM) and submits batch jobs to a central scheduler. The system boots the user-specific VM on one of the IaaS clouds, runs the jobs and returns the output to the user. The user application accesses a central database for calibration data during the execution of the application. Similarly, the data is located in a central location and streamed by the running application. The system can easily run one hundred simultaneous jobs in an efficient manner and should scale to many hundreds and possibly thousands of user jobs.

  19. Data intensive high energy physics analysis in a distributed cloud

    Charbonneau, A; Impey, R; Podaima, W; Agarwal, A; Anderson, M; Armstrong, P; Fransham, K; Gable, I; Harris, D; Leavett-Brown, C; Paterson, M; Sobie, R J; Vliet, M

    2012-01-01

    We show that distributed Infrastructure-as-a-Service (IaaS) compute clouds can be effectively used for the analysis of high energy physics data. We have designed a distributed cloud system that works with any application using large input data sets requiring a high throughput computing environment. The system uses IaaS-enabled science and commercial clusters in Canada and the United States. We describe the process in which a user prepares an analysis virtual machine (VM) and submits batch jobs to a central scheduler. The system boots the user-specific VM on one of the IaaS clouds, runs the jobs and returns the output to the user. The user application accesses a central database for calibration data during the execution of the application. Similarly, the data is located in a central location and streamed by the running application. The system can easily run one hundred simultaneous jobs in an efficient manner and should scale to many hundreds and possibly thousands of user jobs.

  20. Analysis and control of high power synchronous rectifier

    Singh Tejinder.

    1993-01-01

    The description, steady state/dynamic analysis and control design of a high power synchronous rectifier are presented. The proposed rectifier system exploits selective harmonic elimination modulation techniques to minimize filtering requirements, and overcomes the dc voltage limitations of prior art equipment. A detailed derivation of the optimum pulse width modulation switching patterns in the low frequency range for high power applications is presented. A general mathematical model of the rectifier is established which is non-linear and time-invariant. Reference frame transformation and small signal linearization techniques are used to obtain closed form solutions from the mathematical model. The modelling procedure is verified by computer simulation. The closed loop design of the synchronous rectifier based on a phase and amplitude control strategy is investigated. The transfer functions derived from this analysis are used for the design of the regulators. The steady-state and dynamic results predicted by computer simulation are verified by PECAN. A systematic design procedure is developed and a detailed design example of a 1 MVA rectifier system is presented. 23 refs., 33 figs.
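
    Selective harmonic elimination reduces to solving a small nonlinear system for the switching angles; the sketch below uses a textbook bipolar three-angle formulation (a common formulation, not necessarily the thesis' specific pattern derivation), setting the fundamental to a target and nulling the 5th and 7th harmonics:

```python
import numpy as np
from scipy.optimize import fsolve

# Bipolar PWM with three switching angles per quarter cycle (Patel-Hoft form):
# b_n = (4/(n*pi)) * (1 - 2cos(n*a1) + 2cos(n*a2) - 2cos(n*a3)), n odd.
def residuals(alpha, m):
    a1, a2, a3 = alpha
    def b(n):
        return (4 / (n * np.pi)) * (1 - 2 * np.cos(n * a1)
                                      + 2 * np.cos(n * a2)
                                      - 2 * np.cos(n * a3))
    return [b(1) - m, b(5), b(7)]   # fundamental = m; kill 5th and 7th

m = 0.8  # desired fundamental, per unit of the dc-bus voltage (illustrative)
alpha = fsolve(residuals, x0=np.radians([20.0, 35.0, 50.0]), args=(m,))
print(np.degrees(alpha))  # switching angles in degrees
```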

  1. Fault analysis and strategy of high pulsed power supply for high power laser

    Liu Kefu; Qin Shihong; Li Jin; Pan Yuan; Yao Zonggan; Zheng Wanguo; Guo Liangfu; Zhou Peizhang; Li Yizheng; Chen Dehuai

    2001-01-01

    According to the requirements of driving flash-lamps, a high pulsed power supply (PPS) based on capacitors as energy storage elements is designed. The author analyzes in detail the faults of a high pulsed power supply for a high power laser, such as a capacitor internal short-circuit, main bus breakdown to ground, or a flashlamp suddenly shorting or breaking. The fault current and voltage waveforms were given by circuit simulations. Based on the analysis and computation, a protection strategy using a fast fuse and ZnO was put forward, which can reduce damage to the PPS and protect personnel and collateral property from these threats. Preliminary experiments demonstrated that the design of the PPS can satisfy the project requirements

  2. Porous Au-Ag Nanospheres with High-Density and Highly Accessible Hotspots for SERS Analysis.

    Liu, Kai; Bai, Yaocai; Zhang, Lei; Yang, Zhongbo; Fan, Qikui; Zheng, Haoquan; Yin, Yadong; Gao, Chuanbo

    2016-06-08

    Colloidal plasmonic metal nanoparticles have enabled surface-enhanced Raman scattering (SERS) for a variety of analytical applications. While great efforts have been made to create hotspots for amplifying Raman signals, it remains a great challenge to ensure their high density and accessibility for improved sensitivity of the analysis. Here we report a dealloying process for the fabrication of porous Au-Ag alloy nanoparticles containing abundant inherent hotspots, which were encased in ultrathin hollow silica shells so that the need of conventional organic capping ligands for stabilization is eliminated, producing colloidal plasmonic nanoparticles with clean surface and thus high accessibility of the hotspots. As a result, these novel nanostructures show excellent SERS activity with an enhancement factor of ~1.3 × 10^7 on a single particle basis (off-resonant condition), promising high applicability in many SERS-based analytical and biomedical applications.

  3. Avo analysis in the high impedance reservoir of Chuchupa Field

    Cediel Mauricio; Almanza Ovidio; Montes Luis

    2012-01-01

    The technique of the bright spot as a direct indicator of hydrocarbons has been widely used since the work of Ostrander (1984), particularly in gas fields. Located in the north of Colombia, the Chuchupa field has produced gas continuously for 30 years, but despite the coverage with 2D seismic, amplitude anomalies associated with gas accumulation have not been observed. In order to find the relationships between the amplitude information and the gas accumulation, an AVO analysis was performed to describe the seismic reservoir response. The raw data of a 2D seismic line that crosses the field from east to west and a well log data set were used. In a first approach the seismic response was modeled using well logs, and a comparative analysis between the resulting synthetic seismograms and the real CDP gathers was done. The results indicated that the reservoir's top is represented by a low amplitude peak which decreases when the offset increases but whose phase remains unchanged. In the well, where the reservoir has 100% gas saturation, a high correlation between the synthetic and real CDP gathers was observed. In a second approach, anomalous clustered points in the IV quadrant were discriminated through intercept versus gradient crossplot analysis. A weak Class-I anomaly was identified, which could not be observed in stacked sections and hence should be analyzed using pre-stack data.
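
    The intercept-gradient crossplot comes from fitting the two-term Shuey approximation R(θ) ≈ A + B·sin²θ to angle-dependent amplitudes; a minimal sketch with invented amplitudes that plot in the IV quadrant (A > 0, B < 0):

```python
import numpy as np

# Invented angle-dependent reflection amplitudes, not Chuchupa data.
theta = np.radians([5, 10, 15, 20, 25, 30])
R = np.array([0.082, 0.078, 0.071, 0.062, 0.051, 0.038])

# Least-squares fit of R = A + B*sin^2(theta): intercept A, gradient B.
X = np.column_stack([np.ones_like(theta), np.sin(theta) ** 2])
(A, B), *_ = np.linalg.lstsq(X, R, rcond=None)
print(f"intercept A = {A:.3f}, gradient B = {B:.3f}")  # place in A-B crossplot
```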

  4. On the advancement of highly cited research in China: An analysis of the Highly Cited database.

    Li, John Tianci

    2018-01-01

    This study investigates the progress of highly cited research in China from 2001 to 2016 through an analysis of the Highly Cited database. The Highly Cited database, compiled by Clarivate Analytics, comprises the world's most influential researchers in the 22 Essential Science Indicator fields as catalogued by the Web of Science. The database is considered an international standard for the measurement of national and institutional highly cited research output. Overall, we found a consistent and substantial increase in Highly Cited Researchers from China during the timespan. The Chinese institutions with the most Highly Cited Researchers (the Chinese Academy of Sciences, Tsinghua University, Peking University, Zhejiang University, the University of Science and Technology of China, and BGI Shenzhen) are all top ten universities or primary government research institutions. Further evaluation of separate fields of research and government funding data from the National Natural Science Foundation of China revealed disproportionate growth efficiencies among the separate divisions of the National Natural Science Foundation. The most development occurred in the fields of Chemistry, Materials Sciences, and Engineering, whereas the least development occurred in Economics and Business, Health Sciences, and Life Sciences.

  5. High school students presenting science: An interactional sociolinguistic analysis

    Bleicher, Robert

    Presenting science is an authentic activity of practicing scientists. Thus, effective communication of science is an important skill to nurture in high school students who are learning science. This study examines strategies employed by high school students as they make science presentations; it assesses students' conceptual understandings of particular science topics through their presentations and investigates gender differences. Data are derived from science presentations given by eight high school students, three females and five males, who attended a summer science program. Data sources included videotaped presentations, ethnographic fieldnotes, interviews with presenters and members of the audience, and presenter notes and overheads. Presentations were transcribed and submitted to discourse analysis from an interactional sociolinguistic perspective. This article focuses on the methodology employed and how it helps inform the above research questions. The author argues that use of this methodology leads to findings that inform important social-communicative issues in the learning of science. Practical advice for teaching students to present science, implications for use of presentations to assess conceptual learning, and indications of some possible gender differences are discussed.

  6. Analysis of High Power IGBT Short Circuit Failures

    Pappas, G.

    2005-02-11

    The Next Linear Collider (NLC) accelerator proposal at SLAC requires a highly efficient and reliable, low cost, pulsed-power modulator to drive the klystrons. A solid-state induction modulator has been developed at SLAC to power the klystrons; this modulator uses commercial high voltage and high current Insulated Gate Bipolar Transistor (IGBT) modules. Testing of these IGBT modules under pulsed conditions was very successful; however, the IGBTs failed when tests were performed into a low inductance short circuit. The internal electrical connections of a commercial IGBT module have been analyzed to extract self and mutual partial inductances for the main current paths as well as for the gate structure. The IGBT module, together with the partial inductances, has been modeled using PSpice. Predictions for electrical paths that carry the highest current correlate with the sites of failed die under short circuit tests. A similar analysis has been carried out for a SLAC proposal for an IGBT module layout. This paper discusses the mathematical model of the IGBT module geometry and presents simulation results.

  7. A parallel solution for high resolution histological image analysis.

    Bueno, G; González, R; Déniz, O; García-Rojo, M; González-García, J; Fernández-Carrobles, M M; Vállez, N; Salido, J

    2012-10-01

    This paper describes a general methodology for developing parallel image processing algorithms based on message passing for high resolution images (on the order of several gigabytes). These algorithms have been applied to histological images and must be executed on massively parallel processing architectures. Advances in new technologies for complete slide digitalization in pathology have been combined with developments in biomedical informatics. However, the efficient use of these digital slide systems is still a challenge. The image processing that these slides are subject to is still limited both in terms of data processed and processing methods. The work presented here focuses on the need to design and develop parallel image processing tools capable of obtaining and analyzing the entire gamut of information included in digital slides. Tools have been developed to assist pathologists in image analysis and diagnosis, and they cover low- and high-level image processing methods applied to histological images. Code portability, reusability and scalability have been tested by using the following parallel computing architectures: distributed memory with massively parallel processors and two networks, INFINIBAND and Myrinet, composed of 17 and 1024 nodes respectively. The proposed parallel framework is a flexible, high performance solution, and it shows that the efficient processing of digital microscopic images is possible and may offer important benefits to pathology laboratories. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
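
    A message-passing scatter/process/gather skeleton captures the structure of such algorithms; a minimal sketch assuming mpi4py and an MPI launcher (not the authors' framework), run e.g. with mpiexec -n 4:

```python
from mpi4py import MPI   # assumes an MPI environment with mpi4py installed
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Root splits a (toy) slide into horizontal strips and scatters them;
# each rank filters its strip; root gathers and reassembles the result.
if rank == 0:
    slide = np.random.rand(4096, 4096).astype(np.float32)  # stand-in image
    strips = np.array_split(slide, size, axis=0)
else:
    strips = None

strip = comm.scatter(strips, root=0)
processed = np.clip(strip * 1.2, 0.0, 1.0)   # placeholder per-strip operation
result = comm.gather(processed, root=0)

if rank == 0:
    full = np.vstack(result)
    print(full.shape)
```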

  8. Results of radiotherapy for meningeomas with high risk for local recurrence. A retrospective analysis; Ergebnisse der Strahlentherapie bei Meningeomen mit hohem Rezidivrisiko. Eine retrospektive Analyse

    Winkler, C.; Dornfeld, S.; Friedrich, S.; Baumann, M. [Technische Univ. Dresden (Germany). Klinik und Poliklinik fuer Strahlentherapie und Radioonkologie; Schwarz, R. [Universitaetskrankenhaus Hamburg-Eppendorf (Germany). Abt. fuer Strahlentherapie

    1998-12-01

    Aim: Retrospective assessment of the efficacy of radiotherapy for meningeomas with high risk for local recurrence. Patients and methods: Records of 67 patients with meningeomas treated from 1974 to 1995 at 2 centres were analyzed. Follow-up time ranged from 0.8 to 213 months (median: 61 months). Radiation therapy was given either after local failure or after biopsy or subtotal resection. The ratio between malignant (n=20) and benign (n=47) meningeoma was 1:2.4. Median age of the patients was 55 years (7 to 77 years). Radiation treatment was given at 1.5 to 2 Gy per fraction to 36 to 79.5 Gy. Survival rates were calculated by the Kaplan-Meier method. Statistical comparisons were performed with the log-rank test and the Cox proportional hazards model. The Bonferroni method was used to correct for multiple comparisons. Results: Five- and 10-year disease-free survival rates were 82%{+-}5% (standard error) and 70%{+-}9%. Local control rates at 5 and 10 years were 78%{+-}5% and 68%{+-}9%. In uni- and multivariate analysis, histology, sex, total dose and centre showed no significant influence on the results. Patient age was significant for local control (univariate p=0.02; multivariate p=0.03) and disease-free survival (univariate/multivariate p=0.04). The postoperative tumor burden had a significant influence on disease-free survival (multivariate p=0.04). After Bonferroni correction no significant influence remained. We did not observe late side effects, in particular brain necrosis. Conclusions: Despite the negative selection of our patients we observed high survival and local control rates after radiation therapy. This underscores the role of radiation therapy in the treatment of meningeomas with high risk of local failure. (orig.)
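
    The survival methodology cited above (Kaplan-Meier curves plus a log-rank comparison) can be sketched in Python with the lifelines package; the follow-up data below are invented for illustration, not the study's data:

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical follow-up times (months) and event flags (1 = recurrence)
# for two age groups; illustrative values only.
t_young = np.array([12, 30, 45, 61, 80, 95, 120, 150])
e_young = np.array([0, 1, 0, 0, 1, 0, 0, 0])
t_old = np.array([8, 15, 22, 40, 55, 61, 70, 90])
e_old = np.array([1, 1, 0, 1, 1, 0, 1, 0])

kmf = KaplanMeierFitter()
kmf.fit(t_young, event_observed=e_young, label="younger")
print(kmf.survival_function_.tail(1))   # Kaplan-Meier estimate at last time

res = logrank_test(t_young, t_old,
                   event_observed_A=e_young, event_observed_B=e_old)
print(f"log-rank p = {res.p_value:.3f}")
```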

  9. Time-dependent spectrum analysis of high power gyrotrons

    Schlaich, Andreas

    2015-01-01

    In this work, an investigation of vacuum electronic oscillators capable of generating multi-megawatt continuous wave output power in the millimeter-wave range (so-called gyrotrons) through spectral measurements is presented. The centerpiece is the development of a measurement system with a high dynamic range (50-60 dB) for time-dependent spectrum analysis, covering the frequency range 100-170 GHz with instantaneous bandwidths of 6-12 GHz. Despite relying on heterodyne reception through harmonic mixers, the Pulse Spectrum Analysis (PSA) system maintains RF unambiguity in the spectrogram output through the application of a novel RF reconstruction technique. Using the new possibilities, a wide range of spectral phenomena in gyrotrons has been investigated, such as cavity mode jumps, low-frequency modulation, frequency tuning in long pulses and the spectral behavior during the presence of an RF window arc. A dedicated investigation on parasitic RF oscillations in W7-X gyrotrons combining several analysis techniques led to the conclusion that after-cavity oscillations can be a physical reality in high power gyrotrons, and are the probable cause for the undesired signals observed. Apart from systematic parameter sweeps using the PSA system, an analytical dispersion analysis in the Brillouin diagram was applied, and numerical gyrotron interaction simulations of unprecedented extent were conducted. Furthermore, the improved frequency measurement capabilities were employed to analyze the frequency tuning through thermal expansion and electrostatic neutralization caused by ionization inside the tube in long-pulse operation. By macroscopically modeling the gas dynamics and ionization processes in combination with a fitting process, the time dependences of the two processes could be investigated. In doing so, indication was found that the neutralization in W7-X gyrotrons amounts to only 60% of the electrostatic depression voltage, instead of 100% as widely believed for
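
    Time-dependent spectrum analysis of a down-converted signal amounts to computing a spectrogram and following its ridge; a minimal sketch with a synthetic drifting tone and an abrupt mode jump (illustrative, not measured gyrotron data):

```python
import numpy as np
from scipy.signal import spectrogram

# Synthetic down-converted signal: a slowly drifting tone with a mode jump.
fs = 1e6
t = np.arange(0.0, 0.1, 1.0 / fs)
f_inst = 200e3 - 300e3 * t          # slow frequency drift, Hz
f_inst[t > 0.06] += 50e3            # abrupt "mode jump"
x = np.cos(2 * np.pi * np.cumsum(f_inst) / fs)  # phase = integral of f dt

f, tt, Sxx = spectrogram(x, fs=fs, nperseg=4096, noverlap=2048)
peak = f[np.argmax(Sxx, axis=0)]    # spectrogram ridge ~ instantaneous frequency
print(peak[:5], peak[-5:])
```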

  10. Time-dependent spectrum analysis of high power gyrotrons

    Schlaich, Andreas

    2015-07-01

    In this work, an investigation of vacuum electronic oscillators capable of generating multi-megawatt continuous wave output power in the millimeter-wave range (so-called gyrotrons) through spectral measurements is presented. The centerpiece is the development of a measurement system with a high dynamic range (50-60 dB) for time-dependent spectrum analysis, covering the frequency range 100-170 GHz with instantaneous bandwidths of 6-12 GHz. Despite relying on heterodyne reception through harmonic mixers, the Pulse Spectrum Analysis (PSA) system maintains RF unambiguity in the spectrogram output through the application of a novel RF reconstruction technique. Using the new possibilities, a wide range of spectral phenomena in gyrotrons has been investigated, such as cavity mode jumps, low-frequency modulation, frequency tuning in long pulses and the spectral behavior during the presence of an RF window arc. A dedicated investigation on parasitic RF oscillations in W7-X gyrotrons combining several analysis techniques led to the conclusion that after-cavity oscillations can be a physical reality in high power gyrotrons, and are the probable cause for the undesired signals observed. Apart from systematic parameter sweeps using the PSA system, an analytical dispersion analysis in the Brillouin diagram was applied, and numerical gyrotron interaction simulations of unprecedented extent were conducted. Furthermore, the improved frequency measurement capabilities were employed to analyze the frequency tuning through thermal expansion and electrostatic neutralization caused by ionization inside the tube in long-pulse operation. By macroscopically modeling the gas dynamics and ionization processes in combination with a fitting process, the time dependences of the two processes could be investigated. In doing so, indication was found that the neutralization in W7-X gyrotrons amounts to only 60% of the electrostatic depression voltage, instead of 100% as widely believed for

  11. High Accuracy, High Energy He-Erd Analysis of H,C, and T

    Browning, James F.; Langley, Robert A.; Doyle, Barney L.; Banks, James C.; Wampler, William R.

    1999-01-01

    A new analysis technique using high-energy helium ions for the simultaneous elastic recoil detection of all three hydrogen isotopes in metal hydride systems, extending to depths of several μm, is presented. Analysis shows that it is possible to separate each hydrogen isotope in a heavy matrix such as erbium to depths of 5 μm using incident 11.48 MeV 4He2+ ions with a detection system composed of a range foil and a ΔE-E telescope detector. Newly measured cross sections for the elastic recoil scattering of 4He2+ ions from protons and deuterons are presented in the energy range 10 to 11.75 MeV for the laboratory recoil angle of 30°

  12. High beta and second stability region transport and stability analysis

    Hughes, M.H.; Phillps, M.W.; Todd, A.M.M.; Krishnaswami, J.; Hartley, R.

    1992-09-01

    This report describes ideal and resistive studies of high-beta plasmas and of the second stability region. Emphasis is focused on ''supershot'' plasmas in TFTR where MHD instabilities are frequently observed and which spoil their confinement properties. Substantial results are described from the analysis of these high beta poloidal plasmas. During these studies, initial pressure and safety factor profiles were obtained from the TRANSP code, which is used extensively to analyze experimental data. Resistive MHD stability studies of supershot equilibria show that finite pressure stabilization of tearing modes is very strong in these high βp plasmas. This has prompted a detailed re-examination of linear tearing mode theory in which we participated in collaboration with Columbia University and General Atomics. This finite pressure effect is shown to be highly sensitive to small scale details of the pressure profile. Even when an ad hoc method of removing this stabilizing mechanism is implemented, however, it is shown that there is only superficial agreement between resistive MHD stability computation and the experimental data. While the mode structures observed experimentally can be found computationally, there is no convincing correlation with the experimental observations when the computed results are compared with a large set of supershot data. We also describe both the ideal and resistive stability properties of TFTR equilibria near the transition to the second region. It is shown that the highest β plasmas, although stable to infinite-n ideal ballooning modes, can be unstable to the so called ''infernal'' modes associated with small shear. The sensitivity of these results to the assumed pressure and current density profiles is discussed. Finally, we describe results from two collaborative studies with PPPL. The first involves exploratory studies of the role of the 1/1 mode in tokamaks and, secondly, a study of sawtooth stabilization using ICRF

  13. Objective high Resolution Analysis over Complex Terrain with VERA

    Mayer, D.; Steinacker, R.; Steiner, A.

    2012-04-01

    VERA (Vienna Enhanced Resolution Analysis) is a model-independent, high resolution objective analysis of meteorological fields over complex terrain. This system consists of a specially developed quality control procedure and a combination of an interpolation and a downscaling technique. Whereas the so-called VERA-QC is presented at this conference in the contribution titled "VERA-QC, an approved Data Quality Control based on Self-Consistency" by Andrea Steiner, this presentation will focus on the method and the characteristics of the VERA interpolation scheme, which enables one to compute grid point values of a meteorological field based on irregularly distributed observations and topography-related a priori knowledge. Over complex topography, meteorological fields are not smooth in general. The roughness which is induced by the topography can be explained physically. The knowledge about this behavior is used to define the so-called Fingerprints (e.g. a thermal Fingerprint reproducing heating or cooling over mountainous terrain, or a dynamical Fingerprint reproducing a positive pressure perturbation on the windward side of a ridge) under idealized conditions. If the VERA algorithm recognizes patterns of one or more Fingerprints at a few observation points, the corresponding patterns are used to downscale the meteorological information over a larger surrounding area. This technique makes it possible to achieve an analysis with a resolution much higher than that of the observational network. The interpolation of irregularly distributed stations to a regular grid (in space and time) is based on a variational principle applied to first and second order spatial and temporal derivatives. Mathematically, this can be formulated as a cost function that is equivalent to the penalty function of a thin plate smoothing spline. After the analysis field has been divided into the Fingerprint components and the unexplained part respectively, the requirement of a smooth distribution is applied to the
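
    The cost-function idea (data misfit plus a thin-plate-like smoothness penalty on derivatives) can be illustrated in one dimension; a minimal sketch with hypothetical station values, not the actual VERA implementation:

```python
import numpy as np

# Minimal 1-D analogue of a variational analysis: find grid values g that fit
# scattered observations while penalizing squared second differences
# (a thin-plate-spline-like smoothness term).
n = 101
x_grid = np.linspace(0.0, 10.0, n)

# Scattered observations (hypothetical station values)
x_obs = np.array([0.5, 2.0, 3.3, 5.1, 7.4, 9.2])
y_obs = np.array([1.0, 2.4, 2.0, 3.5, 2.8, 3.9])

# Observation operator: nearest grid point, for simplicity
H = np.zeros((len(x_obs), n))
H[np.arange(len(x_obs)), np.searchsorted(x_grid, x_obs)] = 1.0

# Second-difference operator (discrete second derivative)
D2 = np.zeros((n - 2, n))
for i in range(n - 2):
    D2[i, i:i + 3] = [1.0, -2.0, 1.0]

lam = 1e-3  # smoothness weight
A = H.T @ H + lam * D2.T @ D2
g = np.linalg.solve(A, H.T @ y_obs)   # minimizer of misfit + penalty
print(g[::20])  # analysis values on a coarse subset of the grid
```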

  14. Primitive Path Analysis and Stress Distribution in Highly Strained Macromolecules.

    Hsu, Hsiao-Ping; Kremer, Kurt

    2018-01-16

    Polymer material properties are strongly affected by entanglement effects. For long polymer chains and composite materials, they are expected to be at the origin of many technically important phenomena, such as shear thinning or the Mullins effect, which microscopically can be related to topological constraints between chains. Starting from fully equilibrated highly entangled polymer melts, we investigate the effect of isochoric elongation on the entanglement structure and force distribution of such systems. Theoretically, the related viscoelastic response usually is discussed in terms of the tube model. We relate stress relaxation in the linear and nonlinear viscoelastic regimes to a primitive path analysis (PPA) and show that tension forces both along the original paths and along primitive paths, that is, the backbone of the tube, in the stretching direction correspond to each other. Unlike homogeneous relaxation along the chain contour, the PPA reveals a so far not observed long-lived clustering of topological constraints along the chains in the deformed state.

  15. Large capacity, high-speed multiparameter multichannel analysis system

    Hendricks, R.W.; Seeger, P.A.; Scheer, J.W.; Suehiro, S.

    1980-01-01

    A data acquisition system for recording multiparameter digital data into a large memory array at over 2.5 MHz is described. The system consists of a MOSTEK MK8600 2048K x 24-bit memory system, I/O ports to various external devices including the CAMAC dataway, a memory incrementer/adder and a daisy-chain of experiment-specific modules which calculate the memory address which is to be incremented. The design of the daisy-chain permits multiple modules and provides for easy modification as experimental needs change. The system has been designed for use in multiparameter, multichannel analysis of high-speed data gathered by position-sensitive detectors at conventional and synchrotron x-ray sources as well as for fixed energy and time-of-flight diffraction at continuous and pulsed neutron sources

  16. CONDITIONED ANALYSIS OF HIGH-LATITUDE SOLAR WIND INTERMITTENCY

    D'Amicis, R.; Consolini, G.; Bavassano, B.; Bruno, R.

    2012-01-01

    The solar wind is a turbulent medium displaying intermittency. Its intermittent features have been widely documented and studied, showing how the intermittent character is different in fast and slow wind. In this paper, a statistical conditioned analysis of the solar wind intermittency for a period of high-latitude fast solar wind is presented. In particular, the intermittent features are investigated as a function of the Alfvénic degree of fluctuations at a given scale. The results show that the main contribution to solar wind intermittency is due to non-Alfvénic structures, while Alfvénic increments are found to be characterized by a smaller level of intermittency than the previous ones. Furthermore, the lifetime statistics of Alfvénic periods are discussed in terms of a multiscale texture of randomly oriented flux tubes.

  17. On discriminant analysis techniques and correlation structures in high dimensions

    Clemmensen, Line Katrine Harder

    This paper compares several recently proposed techniques for performing discriminant analysis in high dimensions, and illustrates that the various sparse methods differ in prediction abilities depending on their underlying assumptions about the correlation structures in the data. The techniques can be separated in two groups: those which assume independence between the variables and thus use a diagonal estimate of the within-class covariance matrix, and those which assume dependence between the variables and thus use an estimate of the within-class covariance matrix which also estimates the correlations between variables. The two groups of methods are compared and the pros and cons are exemplified using different cases of simulated data. The results illustrate that the estimate of the covariance matrix is an important factor with respect to choice of method, and the choice of method should thus be driven by the nature of the data.
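
    The contrast between the two groups can be reproduced with off-the-shelf classifiers: linear discriminant analysis estimates the full within-class covariance, while Gaussian naive Bayes corresponds to the diagonal assumption. A toy sketch on correlated simulated data (ignoring the sparsity aspect of the compared methods):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB

# Two classes sharing an AR(1)-correlated within-class covariance.
rng = np.random.default_rng(0)
n, p, rho = 200, 50, 0.7
cov = rho ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
X0 = rng.multivariate_normal(np.zeros(p), cov, n)
X1 = rng.multivariate_normal(np.full(p, 0.5), cov, n)
X = np.vstack([X0, X1])
y = np.repeat([0, 1], n)

# LDA models the correlations; GaussianNB assumes a diagonal covariance.
for clf in (LinearDiscriminantAnalysis(), GaussianNB()):
    print(type(clf).__name__, clf.fit(X, y).score(X, y))
```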

  18. Activation analysis of high pure quartz used as packing materials

    Luknitskij, V.A.; Morozov, B.A.

    1979-01-01

    A γ-spectrometric technique of neutron activation determination of microelements in quartz tubes used as a packing material for irradiation in reactors is reported. The analysis of 29 micro-admixtures in quartz tubes of the USSR brands ''Spectrosil'' and ''KV'' was carried out. The γ-spectra of ''KV'' quartz irradiated by thermal and epithermal neutrons are presented. Activation by epithermal neutrons provides an activity gain for nuclei whose resonance integral is high enough compared to the activation cross-section for thermal neutrons. Activation by epithermal neutrons permits additional determination of W, Cd, V, Th, Mn and Ni, and provides for a substantial decrease in the activity of 24Na, 42K, 140La, 46Sc, 141Ce, 51Cr, and 59Fe, which hinder the determination of the above-mentioned elements. The microelement composition of Soviet-made quartz varieties is compared to that of foreign-made quartz brands

  19. Startup analysis for a high temperature gas loaded heat pipe

    Sockol, P. M.

    1973-01-01

    A model for the rapid startup of a high-temperature gas-loaded heat pipe is presented. A two-dimensional diffusion analysis is used to determine the rate of energy transport by the vapor between the hot and cold zones of the pipe. The vapor transport rate is then incorporated in a simple thermal model of the startup of a radiation-cooled heat pipe. Numerical results for an argon-lithium system show that radial diffusion to the cold wall can produce large vapor flow rates during a rapid startup. The results also show that startup is not initiated until the vapor pressure p_v in the hot zone reaches a precise value proportional to the initial gas pressure p_i. Through proper choice of p_i, startup can be delayed until p_v is large enough to support a heat-transfer rate sufficient to overcome a thermal load on the heat pipe.

  20. Isotopic analysis of uranium hexafluoride highly enriched in U-235

    Chaussy, L.; Boyer, R.

    1968-01-01

    Isotopic analysis of uranium in the form of the hexafluoride by mass spectrometry gives raw results which are not very accurate. Using a linear interpolation method applied to two standards, it is possible to correct for this inaccuracy as long as the isotopic concentrations are less than about 10 per cent in U-235. Above this level, the interpolation formula overestimates the results, especially if the enrichment of the analyzed samples is higher than 1.3 with respect to the standards. A formula is proposed for correcting the interpolation equation and for extending its field of application to high values of the enrichment (≅2) and of the concentration. It is shown that by using this correction the results obtained have an accuracy which depends practically only on that of the standards, taking into account the dispersion in the measurements. (authors) [fr]
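
    The two-standard linear interpolation amounts to mapping the measured values of two bracketing standards onto their certified values; a minimal sketch with invented numbers:

```python
# Two-standard linear interpolation: correct a measured isotopic abundance
# using two bracketing standards of known composition. All values invented.
std_true = (0.7110, 2.0450)   # certified U-235 atom % of the two standards
std_meas = (0.7162, 2.0561)   # values the spectrometer reports for them

def correct(measured):
    # linear map sending the measured standard values onto the true ones
    slope = (std_true[1] - std_true[0]) / (std_meas[1] - std_meas[0])
    return std_true[0] + slope * (measured - std_meas[0])

print(correct(1.4020))  # corrected abundance of an unknown sample
```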

  1. Quantitative XPS analysis of high Tc superconductor surfaces

    Jablonski, A.; Sanada, N.; Suzuki, Y.; Fukuda, Y.; Nagoshi, M.

    1993-01-01

    The procedure of quantitative XPS analysis involving the relative sensitivity factors is most convenient to apply to high-Tc superconductor surfaces because this procedure does not require standards. However, a considerable limitation of such an approach is its relatively low accuracy. In the present work, a proposition is made to use for this purpose a modification of the relative sensitivity factor approach accounting for the matrix and the instrumental effects. The accuracy of this modification when applied to the binary metal alloys is 2% or better. A quantitative XPS analysis was made for surfaces of the compounds Bi2Sr2CuO6, Bi2Sr2CaCu2O8, and YBa2Cu3Oy. The surface composition determined for the polycrystalline samples corresponds reasonably well to the bulk stoichiometry. Slight deficiency of oxygen was found for the Bi-based compounds. The surface exposed on cleavage of the Bi2Sr2CaCu2O8 single crystal was found to be enriched with bismuth, which indicates that the cleavage occurs along the BiO planes. This result is in agreement with the STM studies published in the literature
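
    The relative-sensitivity-factor procedure computes atomic fractions as (I_i/S_i)/Σ_j(I_j/S_j); a minimal sketch with invented peak areas and sensitivity factors:

```python
# Relative-sensitivity-factor quantification: the atomic fraction of element i
# is (I_i/S_i) / sum_j (I_j/S_j). Intensities and factors below are invented.
peaks = {          # element: (measured peak area, relative sensitivity factor)
    "Bi": (5200.0, 9.1),
    "Sr": (1800.0, 1.8),
    "Ca": (250.0, 1.6),
    "Cu": (900.0, 4.8),
    "O":  (2100.0, 0.7),
}

weighted = {el: area / rsf for el, (area, rsf) in peaks.items()}
total = sum(weighted.values())
for el, w in weighted.items():
    print(f"{el}: {100 * w / total:.1f} at.%")
```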

  2. Quantitative phase analysis of a highly textured industrial sample using a Rietveld profile analysis

    Shin, Eunjoo; Huh, Moo-Young; Seong, Baek-Seok; Lee, Chang-Hee

    2001-01-01

    For quantitative phase analysis of highly textured two-phase materials, samples with known weight fractions of zirconium and aluminum were prepared. Strong texture components prevailed in both the zirconium and the aluminum sheet. The diffraction patterns of the samples were measured by neutron diffraction and refined by the Rietveld method. The preferred-orientation correction of the diffraction patterns was carried out by means of pole figures recalculated from the ODF. The present Rietveld analysis of samples with different weight fractions showed that the absolute error of the calculated weight fractions was less than 7.1%. (author)
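
    Rietveld-based quantitative phase analysis conventionally converts the refined scale factors into weight fractions through the Hill-Howard relation; a minimal sketch, assuming hypothetical scale factors and textbook cell data for Zr and Al:

```python
# Hill-Howard relation: W_i = S_i*(Z*M*V)_i / sum_j S_j*(Z*M*V)_j, with
# S the refined Rietveld scale factor, Z formula units per cell, M the
# formula mass and V the unit-cell volume. Scale factors are hypothetical.
def weight_fractions(scales, Z, M, V):
    zmv = [s * z * m * v for s, z, m, v in zip(scales, Z, M, V)]
    total = sum(zmv)
    return [x / total for x in zmv]

# Two-phase Zr/Al mixture: Z, M (g/mol) and V (cubic angstroms) are
# textbook values for hcp Zr and fcc Al.
print(weight_fractions(scales=[1.2e-6, 3.4e-6],
                       Z=[2, 4], M=[91.22, 26.98], V=[46.6, 66.4]))
```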

  3. A Bibliometric Analysis of Highly Cited and High Impact Occupational Therapy Publications by American Authors.

    Gutman, Sharon A; Brown, Ted; Ho, Yuh-Shan

    2017-07-01

    A bibliometric analysis was completed of peer-reviewed literature from 1991-2015 written by American occupational therapists, in order to examine US high-impact scholarship; "occupational therapy" and "occupational therapist(s)" were used as keywords to search journal articles' publication title, abstract, author details, and keywords. Results included 1,889 journal articles from 1991-2015 published by American occupational therapists as first or corresponding author. Sixty-nine articles attained a total citation count through 2015 (TC2015) of ≥ 50, and 151 attained a 2015 citation count (C2015) of ≥ 5, indicating that they were the most highly cited literature produced in this period. Although the majority (58%) of this literature was published in occupational therapy-specific journals, 41% was published in interdisciplinary journals. Results illustrate that the volume of highly cited American occupational therapy peer-reviewed literature has grown over the last two decades. There is a need for the profession to strategize methods to enhance the publication metrics of occupational therapy-specific journals to reduce the loss of high-quality publications to external periodicals.

  4. High temperature hall effect measurement system design, measurement and analysis

    Berkun, Isil

    -toxic thermoelectric materials made from abundant elements that are suited for power generation applications in the intermediate temperature range (600 K - 800 K). In this work the thermoelectric materials were synthesized by a solid-state reaction using a molten-salt sealing method. The ingots produced were then powder processed, followed by pulsed electric current sintering (PECS) densification. A set of Mg2.08Si0.4-xSn0.6Sbx (0 ≤ x ≤ 0.072) compounds was investigated and a peak ZT of 1.50 was obtained at 716 K in Mg2.08Si0.364Sn0.6Sb0.036 [2]. The high ZT value is related to a high electrical conductivity in these samples, possibly caused by a magnesium deficiency in the final product. Analysis of the measured results using programs developed in LabVIEW and MATLAB showed good agreement with expected results and gave insight into mixed carrier dopant concentrations. [1] I. Berkun, S. N. Demlow, N. Suwanmonkha, T. P. Hogan, and T. A. Grotjohn, "Hall Effect Measurement System for Characterization of Doped Single Crystal Diamond," in MRS Proceedings, vol. 1511, Cambridge Univ Press, 2013. [2] P. Gao, I. Berkun, R. D. Schmidt, M. F. Luzenski, X. Lu, P. B. Sarac, E. D. Case, and T. P. Hogan, "Transport and Mechanical Properties of High-ZT Mg2.08Si0.4-xSn0.6Sbx Thermoelectric Materials," Journal of Electronic Materials, pp. 1-14, 2013.

  5. A large capacity, high-speed multiparameter multichannel analysis system

    Hendricks, R.W.; Suehiro, S.; Seeger, P.A.; Scheer, J.W.

    1982-01-01

    A data acquisition system for recording multiparameter digital data into a large memory array at over 2.5 MHz is described. The system consists of a MOSTEK MK 8600 2048 K x 24-bit memory system, I/O ports to various external devices including the CAMAC dataway, a memory incrementer/adder and a daisy-chain of experiment-specific modules that calculate the memory address to be incremented. The design of the daisy-chain permits multiple modules and provides for easy modification as experimental needs change. The system has been designed for use in multiparameter, multichannel analysis of high-speed data gathered by position-sensitive detectors at conventional and synchrotron X-ray sources as well as for fixed-energy and time-of-flight diffraction at continuous and pulsed neutron sources. Modules developed to date include a buffer for two-dimensional position-sensitive detectors, a mapper for high-speed coordinate transformations, a buffered time-of-flight clock, a time-correlator for synchronized diffraction experiments, and a display unit for data bus diagnostics. (orig.)

  6. High-Reliable PLC RTOS Development and RPS Structure Analysis

    Sohn, H. S.; Song, D. Y.; Sohn, D. S.; Kim, J. H. [Enersys Co., Daejeon (Korea, Republic of)

    2008-04-15

    One of the KNICS objectives is to develop a platform for Nuclear Power Plant (NPP) I and C (Instrumentation and Control) systems, especially the plant protection system. The developed platform is POSAFE-Q, and this work supports it through the development of a highly reliable real-time operating system (RTOS) and programmable logic device (PLD) software. Another KNICS objective is to develop safety I and C systems, such as the Reactor Protection System (RPS) and the Engineered Safety Feature-Component Control System (ESF-CCS). This work plays an important role in the structure analysis for the RPS. Validation and verification (V and V) of safety-critical software is essential to make a digital plant protection system highly reliable and safe. Generally, the reliability and safety of a software-based system can be improved by a strict quality assurance framework encompassing the software development itself. In other words, reliability and safety are improved through V and V, and development activities such as the software requirement specification, software design specification, component tests, integration tests, and system tests shall be appropriately documented for V and V.

  8. LSD-based analysis of high-resolution stellar spectra

    Tsymbal, V.; Tkachenko, A.; Van Reeth, T.

    2014-11-01

    We present a generalization of the method of least-squares deconvolution (LSD), a powerful tool for extracting high-S/N average line profiles from stellar spectra. The method is generalized by extending it towards multiprofile LSD and by introducing the possibility of correcting the line strengths of the initial mask. We illustrate the new approach with two examples: (a) the detection of asteroseismic signatures in low-S/N spectra of single stars, and (b) the disentangling of spectra of multiple stellar objects. The analysis is applied to spectra obtained with 2-m-class telescopes in the course of spectroscopic ground-based support for space missions such as CoRoT and Kepler. Usually rather high S/N is required, so smaller telescopes can compete successfully with more advanced ones only when a technique is applied that markedly increases the S/N of the spectra they observe. Since LSD profiles can reconstruct what is common to all the spectral line profiles, the technique should be of particular practical use for faint stars observed with 2-m-class telescopes whose spectra show pronounced line-profile variations (LPVs).
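
    In matrix form, classical LSD solves a weighted linear least-squares problem for the common profile; a minimal synthetic sketch follows (the mask, line depths and noise level are placeholders, and the multiprofile extension of the paper is not shown).

```python
import numpy as np

# Least-squares deconvolution in matrix form (after Donati et al. 1997):
# the spectrum Y is modeled as M @ Z, where M encodes the line mask
# (positions and depths) and Z is the common mean profile, solved from
# the weighted normal equations Z = (M' S2 M)^-1 M' S2 Y.
rng = np.random.default_rng(0)
n_pix, n_vel, n_lines = 400, 21, 30

M = np.zeros((n_pix, n_vel))
positions = rng.choice(n_pix - n_vel, size=n_lines, replace=False)
for p, depth in zip(positions, rng.uniform(0.2, 1.0, n_lines)):
    M[p:p + n_vel, :] += depth * np.eye(n_vel)   # each line copies Z, scaled

Z_true = np.exp(-np.linspace(-3, 3, n_vel) ** 2)  # "true" mean profile
sigma = 1e-2
Y = M @ Z_true + rng.normal(0.0, sigma, n_pix)    # noisy synthetic spectrum

S2 = np.eye(n_pix) / sigma**2                     # inverse-variance weights
Z = np.linalg.solve(M.T @ S2 @ M, M.T @ S2 @ Y)   # recovered LSD profile
print(f"max recovery error: {np.abs(Z - Z_true).max():.4f}")
```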

  9. High-density polyethylene dosimetry by transvinylene FTIR analysis

    McLaughlin, W.L.; Silverman, J.; Al-Sheikhly, M.

    1999-01-01

    The formation of transvinylene unsaturation, -CH=CH-, due to free-radical or cationic-initiated dehydrogenation by irradiation, is a basic reaction in polyethylene and is useful for dosimetry at high absorbed doses. The radiation-enhanced infrared absorption, with a maximum at ν = 965 cm⁻¹ (λ = 10.36 μm), is stable in air and can be measured by Fourier-transform infrared (FTIR) spectrophotometry. The quantitative analysis is a useful means of product end-point dosimetry for radiation processing with gamma rays and electrons, where polyethylene is a component of the processed product. The useful dose range of 0.053 cm thick high-density polyethylene film (ρ = 0.961 g cm⁻³; melt index = 0.8 dg min⁻¹), for irradiations by Co-60 gamma radiation and 2.0 and 0.4 MeV electron beams in a deaerated atmosphere (N₂ gas), is about 50-10³ kGy for FTIR transvinylene analysis.

  10. Fluorescent foci quantitation for high-throughput analysis

    Elena Ledesma-Fernández

    2015-06-01

    A number of cellular proteins localize to discrete foci within cells, for example DNA repair proteins, microtubule organizing centers, P bodies or kinetochores. It is often possible to measure the fluorescence emission from tagged proteins within these foci as a surrogate for the concentration of that specific protein. We wished to develop tools that would allow quantitation of fluorescence foci intensities in high-throughput studies. As proof of principle we have examined the kinetochore, a large multi-subunit complex that is critical for the accurate segregation of chromosomes during cell division. Kinetochore perturbations lead to aneuploidy, which is a hallmark of cancer cells. Hence, understanding kinetochore homeostasis and regulation is important for a global understanding of cell division and genome integrity. The 16 budding yeast kinetochores colocalize within the nucleus to form a single focus. Here we have created a set of freely-available tools to allow high-throughput quantitation of kinetochore foci fluorescence. We use this ‘FociQuant’ tool to compare methods of kinetochore quantitation and we show proof of principle that FociQuant can be used to identify changes in kinetochore protein levels in a mutant that affects kinetochore function. This analysis can be applied to any protein that forms discrete foci in cells.

  11. High Order Differential Frequency Hopping: Design and Analysis

    Yong Li

    2015-01-01

    This paper considers spectrally efficient differential frequency hopping (DFH) system design. Relying on time-frequency diversity over a large spectrum and high-speed frequency hopping, DFH systems are robust against hostile jamming interference. However, the spectral efficiency of conventional DFH systems is very low because only the frequency of each channel is used. To improve the system capacity, in this paper we propose an innovative high order differential frequency hopping (HODFH) scheme. Unlike in traditional DFH, where the message is carried by the frequency relationship between adjacent hops using first-order differential coding, in HODFH the message is carried by the frequency and phase relationship using second- or higher-order differential coding. As a result, system efficiency increases significantly, since the additional information transmission is achieved by the higher-order differential coding at no extra cost in either bandwidth or power. Quantitative performance analysis of the proposed scheme demonstrates that transmission through the frequency and phase relationship using second- or higher-order differential coding essentially introduces another dimension to the signal space, and the corresponding coding gain can increase the system efficiency.
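
    The idea can be caricatured in a few lines: ordinary DFH selects the next hop frequency as a function G of the current frequency and the data, while a higher-order variant lets extra bits ride on the hop-to-hop phase relationship as well. The transition function and alphabet sizes below are toy choices for illustration, not the authors' algorithm.

```python
# Toy differential-hopping encoder: in ordinary DFH the next frequency is
# f[n+1] = G(f[n], data); in a higher-order variant additional bits also
# select the transmitted phase, so information rides on the
# (frequency, phase) relationship between adjacent hops.
N_FREQ, N_PHASE = 64, 4            # hypothetical channel/phase alphabets

def hop(prev_freq: int, freq_bits: int, phase_bits: int):
    """One hop: 6 bits choose the frequency transition, 2 more the phase."""
    next_freq = (prev_freq * 31 + freq_bits + 1) % N_FREQ  # toy G function
    phase = phase_bits * (360 // N_PHASE)                  # degrees
    return next_freq, phase

f = 0
for symbol in [(5, 1), (17, 3), (42, 0)]:   # (freq_bits, phase_bits) pairs
    f, ph = hop(f, *symbol)
    print(f"hop to channel {f:2d} with phase {ph:3d} deg")
```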

  12. Forensic Analysis of High Explosive Residues from Selected Cloth

    Mohamad Afiq Mohamed Huri; Umi Kalthom Ahmad

    2014-01-01

    Increased terrorist activity around the Asian region has resulted in the need for improved analytical techniques in forensic analysis. High explosive residues on post-blast clothing are often encountered as physical evidence submitted to a forensic laboratory. This study was therefore initiated to detect high explosive residues of cyclotrimethylenetrinitramine (RDX) and pentaerythritol tetranitrate (PETN) on selected cloth. A cotton swabbing technique was employed as a simple and rapid method for recovering analytes from the sample matrix. Analytes were analyzed using the Griess spot test, TLC and HPLC. TLC separation employed toluene-ethyl acetate (9:1) as the solvent system. Reversed-phase HPLC separation employed acetonitrile-water (65:35) as the mobile phase, with analytes detected using a programmed wavelength: RDX was detected at 235 nm for the first 3.5 min, after which the detector switched to 215 nm for PETN. Limits of detection (LODs) were in the low ppm range (0.05 ppm for RDX and 0.25 ppm for PETN). Analyte recovery studies revealed that the type of cloth has a profound effect on the extraction efficiency: analytes were recovered better from nylon than from cotton cloth, and no analytes could be recovered from denim. For post-blast samples, only RDX was detected, at low concentration, on both nylon and cotton cloth. (author)

  13. msBiodat analysis tool, big data analysis for high-throughput experiments.

    Muñoz-Torres, Pau M; Rokć, Filip; Belužic, Robert; Grbeša, Ivana; Vugrek, Oliver

    2016-01-01

    Mass spectrometry (MS) comprises a group of high-throughput techniques used to increase knowledge about biomolecules. It produces a large amount of data, presented as lists of hundreds or thousands of proteins. Filtering those data efficiently is the first step for extracting biologically relevant information. The filtering can be enriched by merging experimental data with data obtained from public databases, resulting in an accurate list of proteins that meet the predetermined conditions. In this article we present msBiodat Analysis Tool, a web-based application designed to bring big data analysis closer to proteomics. With this tool, researchers can easily select the most relevant information from their MS experiments using an easy-to-use web interface. An interesting feature of msBiodat Analysis Tool is the possibility of selecting proteins by their Gene Ontology annotation using their Gene ID, Ensembl or UniProt codes. The msBiodat Analysis Tool is a web-based application that allows researchers with any level of programming experience to take advantage of efficient database querying. Its versatility and user-friendly interface make it easy to perform fast and accurate data screening using complex queries. Once the analysis is finished, the result is delivered by e-mail. msBiodat Analysis Tool is freely available at http://msbiodata.irb.hr.

  14. High-dose-rate brachytherapy in the treatment of uterine cervix cancer. Analysis of dose effectiveness and late complications

    Ferrigno, Robson; Novaes, Paulo Eduardo Ribeiro dos Santos; Pellizzon, Antonio Cassio Assis; Maia, Maria Aparecida Conte; Fogarolli, Ricardo Cesar; Gentil, Andre Cavalcanti; Salvajoli, Joao Victor

    2001-01-01

    Purpose: This retrospective analysis aims to report results of patients with cervix cancer treated by external beam radiotherapy (EBR) and high-dose-rate (HDR) brachytherapy. Methods and Materials: From September 1992 to December 1996, 138 patients with FIGO Stages II and III and a mean age of 56 years were treated. Median EBR to the whole pelvis was 45 Gy in 25 fractions. A parametrial boost was performed in 93% of patients, with a median dose of 14.4 Gy. Brachytherapy with HDR was performed during EBR or following its completion with a dose of 24 Gy in four weekly fractions of 6 Gy to point A. Median overall treatment time was 60 days. Patient age, tumor stage, and overall treatment time were the variables analyzed for survival and local control. Cumulative biologic effective dose (BED) at the rectal and bladder reference points was correlated with late complications in these organs, and the EBR dose at the parametrium was correlated with small bowel complications. Results: Median follow-up time was 38 months. Overall survival, disease-free survival, and local control at 5 years were 53.7%, 52.7%, and 62%, respectively. By multivariate and univariate analysis, overall treatment time up to 50 days was the only statistically significant adverse variable for overall survival (p=0.003) and actuarial local control (p=0.008). The 5-year actuarial incidence of rectal, bladder, and small bowel late complications was 16%, 11%, and 14%, respectively. Patients treated with cumulative BED above 110 Gy3 at the rectum points and above 125 Gy3 at the bladder point had a higher but not statistically significant 5-year actuarial rate of complications at these organs (18% vs. 12%, p=0.49 and 17% vs. 9%, p=0.20, respectively). Patients who received parametrial doses larger than 59 Gy had a higher 5-year actuarial rate of complications in the small bowel; however, this was not statistically significant (19% vs. 10%, p=0.260). Conclusion: This series suggests that 45 Gy to the whole pelvis combined with

  15. Speciation analysis of cobalt in foods by high-performance liquid chromatography and neutron activation analysis

    Muto, Toshio; Koyama, Motoko

    1994-01-01

    A combined method coupling high-performance liquid chromatography (HPLC, as the separation method) with neutron activation analysis (as the detection method) has been applied to the speciation analysis of cobalt in daily foods (e.g. egg, fish and milk). Cobalt species including free cobalt, vitamin B12 and protein-bound cobalt were separated with a preparative HPLC system and a centrifuge. Subsequently, the cobalt in the separated species was determined by neutron activation analysis. The results showed that the total cobalt content of the foods lay in the range 0.4-11 ng/g (0.4-11 ppb) on a wet-weight basis. The proportions of free cobalt, vitamin B12 and protein-bound cobalt ranged over 16-43%, 55-73% and 2.3-17%, respectively. This experimental evidence suggests that the combination of HPLC and neutron activation analysis can be a useful tool for the speciation analysis of trace elements in biological as well as environmental materials. (author)

  16. Genetic high throughput screening in Retinitis Pigmentosa based on high resolution melting (HRM) analysis.

    Anasagasti, Ander; Barandika, Olatz; Irigoyen, Cristina; Benitez, Bruno A; Cooper, Breanna; Cruchaga, Carlos; López de Munain, Adolfo; Ruiz-Ederra, Javier

    2013-11-01

    Retinitis Pigmentosa (RP) involves a group of genetically determined retinal diseases caused by a large number of mutations that result in rod photoreceptor cell death followed by gradual death of cone cells. Most cases of RP are monogenic, with more than 80 associated genes identified so far. The high number of genes and variants involved in RP, among other factors, makes the molecular characterization of RP a real challenge for many patients. Although HRM has been used for the analysis of isolated variants or single RP genes, to the best of our knowledge this is the first study that uses HRM analysis for high-throughput screening of several RP genes. Our main goal was to test the suitability of HRM analysis as a genetic screening technique in RP, and to compare its performance with two of the most widely used NGS platforms, the Illumina and PGM-Ion Torrent technologies. RP patients (n = 96) were clinically diagnosed at the Ophthalmology Department of Donostia University Hospital, Spain. We analyzed a total of 16 RP genes that met the following inclusion criteria: 1) size: genes with transcripts of less than 4 kb; 2) number of exons: genes with up to 22 exons; and 3) prevalence: genes reported to account for at least 0.4% of total RP cases worldwide. For comparison purposes, the RHO gene was also sequenced with Illumina (GAII; Illumina), ion semiconductor technology (PGM; Life Technologies) and Sanger sequencing (ABI 3130xl platform; Applied Biosystems). Detected variants were confirmed in all cases by Sanger sequencing and tested for co-segregation in the families of affected probands. We identified a total of 65 genetic variants, 15 of which (23%) were novel, in 49 out of 96 patients. Among them, 14 (4 novel) are probable disease-causing genetic variants in 7 RP genes, affecting 15 patients. Our HRM analysis-based study proved to be a cost-effective and rapid method that provides accurate identification of genetic RP variants. This approach is effective for

  17. Transport modelling and gyrokinetic analysis of advanced high performance discharges

    Kinsey, J.E.; Imbeaux, F.; Staebler, G.M.; Budny, R.; Bourdelle, C.; Fukuyama, A.; Garbet, X.; Tala, T.; Parail, V.

    2005-01-01

    Predictive transport modelling and gyrokinetic stability analyses of demonstration hybrid (HYBRID) and advanced tokamak (AT) discharges from the International Tokamak Physics Activity (ITPA) profile database are presented. Both regimes have exhibited enhanced core confinement (above the conventional ITER reference H-mode scenario) but differ in their current density profiles. Recent contributions to the ITPA database have facilitated an effort to study the underlying physics governing confinement in these advanced scenarios. In this paper, we assess the level of commonality of the turbulent transport physics and the relative roles of the transport suppression mechanisms (i.e. E x B shear and Shafranov shift (α) stabilization) using data for select HYBRID and AT discharges from the DIII-D, JET and AUG tokamaks. GLF23 transport modelling and gyrokinetic stability analysis indicate that E x B shear and Shafranov shift stabilization play essential roles in producing the improved core confinement in both HYBRID and AT discharges. Shafranov shift stabilization is found to be more important in AT discharges than in HYBRID discharges. We have also examined the competition between the stabilizing effects of E x B shear and Shafranov shift stabilization and the destabilizing effects of higher safety factors and parallel velocity shear. Linear and nonlinear gyrokinetic simulations of idealized low and high safety factor cases reveal some interesting consequences. A low safety factor (i.e. HYBRID relevant) is directly beneficial in reducing the transport, and E x B shear stabilization can dominate parallel velocity shear destabilization allowing the turbulence to be quenched. However, at low-q/high current, Shafranov shift stabilization plays less of a role. Higher safety factors (as found in AT discharges), on the other hand, have larger amounts of Shafranov shift stabilization, but parallel velocity shear destabilization can prevent E x B shear quenching of the turbulent

  18. Transport modeling and gyrokinetic analysis of advanced high performance discharges

    Kinsey, J.; Imbeaux, F.; Bourdelle, C.; Garbet, X.; Staebler, G.; Budny, R.; Fukuyama, A.; Tala, T.; Parail, V.

    2005-01-01

    Predictive transport modeling and gyrokinetic stability analyses of demonstration hybrid (HYBRID) and Advanced Tokamak (AT) discharges from the International Tokamak Physics Activity (ITPA) profile database are presented. Both regimes have exhibited enhanced core confinement (above the conventional ITER reference H-mode scenario) but differ in their current density profiles. Recent contributions to the ITPA database have facilitated an effort to study the underlying physics governing confinement in these advanced scenarios. In this paper, we assess the level of commonality of the turbulent transport physics and the relative roles of the transport suppression mechanisms (i.e. ExB shear and Shafranov shift (α) stabilization) using data for select HYBRID and AT discharges from the DIII-D, JET, and AUG tokamaks. GLF23 transport modeling and gyrokinetic stability analysis indicates that ExB shear and Shafranov shift stabilization play essential roles in producing the improved core confinement in both HYBRID and AT discharges. Shafranov shift stabilization is found to be more important in AT discharges than in HYBRID discharges. We have also examined the competition between the stabilizing effects of ExB shear and Shafranov shift stabilization and the destabilizing effects of higher safety factors and parallel velocity shear. Linear and nonlinear gyrokinetic simulations of idealized low and high safety factor cases reveal some interesting consequences. A low safety factor (i.e. HYBRID relevant) is directly beneficial in reducing the transport, and ExB shear stabilization can win out over parallel velocity shear destabilization allowing the turbulence to be quenched. However, at low-q/high current, Shafranov shift stabilization plays less of a role. Higher safety factors (as found in AT discharges), on the other hand, have larger amounts of Shafranov shift stabilization, but parallel velocity shear destabilization can prevent ExB shear quenching of the turbulent

  19. 18F-FET PET prior to recurrent high-grade glioma re-irradiation-additional prognostic value of dynamic time-to-peak analysis and early static summation images?

    Fleischmann, Daniel F; Unterrainer, Marcus; Bartenstein, Peter; Belka, Claus; Albert, Nathalie L; Niyazi, Maximilian

    2017-04-01

    Most high-grade gliomas (HGG) recur after initial multimodal therapy, and re-irradiation (Re-RT) has been shown to be a valuable re-treatment option in selected patients. We evaluated the prognostic value of dynamic time-to-peak analysis and early static summation images in O-(2-[18F]fluoroethyl)-L-tyrosine ([18F]FET) PET for patients treated with Re-RT ± concomitant bevacizumab. We retrospectively analyzed 72 patients suffering from recurrent HGG with [18F]FET PET prior to Re-RT. PET analysis yielded the maximal tumor-to-background ratio (TBRmax), the biological tumor volume, the number of PET foci and the pattern of time-activity curves (TACs; increasing vs. decreasing). Furthermore, the novel PET parameters early TBRmax (at 5-15 min post-injection) and minimal time-to-peak (TTPmin) were evaluated. Additional analysis was performed for gender, age, KPS, O6-methylguanine-DNA methyltransferase methylation status, isocitrate dehydrogenase 1 mutational status, WHO grade and concomitant bevacizumab therapy. The influence of PET and clinical parameters on post-recurrence survival (PRS) was investigated. Shorter TTPmin was related to shorter PRS after Re-RT, with 6 months for TTPmin ≤ 25 min (p = 0.027). TTPmin had a significant impact on PRS both on univariate (p = 0.027; continuous) and multivariate analysis (p = 0.011; continuous). Other factors significantly related to PRS on multivariate analysis were increasing vs. decreasing TACs (p = 0.008) and Karnofsky Performance Score (p = 0.015); the remaining PET parameters were not significantly related to PRS on univariate analysis. Dynamic [18F]FET PET with TTPmin provides high prognostic value for recurrent HGG prior to Re-RT, whereas early TBRmax does not. Dynamic [18F]FET PET using TTPmin might help to personalize Re-RT treatment regimens in future through voxelwise TTPmin analysis for dose-painting purposes and PET-guided dose escalation.

  20. Development of high-throughput analysis system using highly-functional organic polymer monoliths

    Umemura, Tomonari; Kojima, Norihisa; Ueki, Yuji

    2008-01-01

    The growing demand for high-throughput analysis in the current competitive life sciences and industries has promoted the development of high-speed HPLC techniques and tools. As one such tool, monolithic columns have attracted increasing attention and interest in the last decade due to their low flow-resistance and excellent mass transfer, allowing for rapid separations and reactions at high flow rates with minimal loss of column efficiency. Monolithic materials are classified into two main groups: silica- and organic polymer-based monoliths, each with their own advantages and disadvantages. Organic polymer monoliths have several distinct advantages in life-science research, including wide pH stability, less irreversible adsorption, and facile preparation and modification. Thus, we have so far tried to develop organic polymer monoliths for various chemical operations, such as separation, extraction, preconcentration, and reaction. In the present paper, recent progress in the development of organic polymer monoliths is discussed. In particular, the procedure for the preparation of methacrylate-based monoliths with various functional groups is described, where the influence of different compositional and processing parameters on the monolithic structure is also addressed. Furthermore, the performance of the produced monoliths is demonstrated through the results for (1) rapid separations of alkylbenzenes at high flow rates, (2) flow-through enzymatic digestion of cytochrome c on a trypsin-immobilized monolithic column, and (3) separation of the tryptic digest on a reversed-phase monolithic column. The flexibility and versatility of organic polymer monoliths will be beneficial for further enhancing analytical performance, and will open the way for new applications and opportunities both in scientific and industrial research. (author)

  1. Thermal hydraulics analysis of the Advanced High Temperature Reactor

    Wang, Dean, E-mail: Dean_Wang@uml.edu [University of Massachusetts Lowell, One University Avenue, Lowell, MA 01854 (United States); Yoder, Graydon L.; Pointer, David W.; Holcomb, David E. [Oak Ridge National Laboratory, 1 Bethel Valley RD #6167, Oak Ridge, TN 37831 (United States)

    2015-12-01

    Highlights: • The TRACE AHTR model was developed and used to define and size the DRACS and the PHX. • A LOFF transient was simulated to evaluate the reactor performance during the transient. • Some recommendations for modifying FHR reactor system component designs are discussed. - Abstract: The Advanced High Temperature Reactor (AHTR) is a liquid salt-cooled nuclear reactor design concept, featuring low-pressure molten fluoride salt coolant, a carbon composite fuel form with embedded coated particle fuel, passively triggered negative reactivity insertion mechanisms, and fully passive decay heat rejection. This paper describes an AHTR system model developed using the Nuclear Regulatory Commission (NRC) thermal hydraulic transient code TRAC/RELAP Advanced Computational Engine (TRACE). The TRACE model includes all of the primary components: the core, downcomer, hot legs, cold legs, pumps, direct reactor auxiliary cooling system (DRACS), the primary heat exchangers (PHXs), etc. The TRACE model was used to help define and size systems such as the DRACS and the PHX. A loss of flow transient was also simulated to evaluate the performance of the reactor during an anticipated transient event. Some initial recommendations for modifying system component designs are also discussed. The TRACE model will be used as the basis for developing more detailed designs and ultimately will be used to perform transient safety analysis for the reactor.

  2. High order effects in cross section sensitivity analysis

    Greenspan, E.; Karni, Y.; Gilai, D.

    1978-01-01

    Two types of high order effects associated with perturbations in the flux shape are considered: Spectral Fine Structure Effects (SFSE) and non-linearity between changes in performance parameters and data uncertainties. SFSE are investigated in Part I using a simple single-resonance model. Results obtained for each of the resolved and for representative unresolved resonances of U-238 in a ZPR-6/7-like environment indicate that SFSE can contribute significantly to the sensitivity of group constants to resonance parameters. Methods to account for SFSE, both for the propagation of uncertainties and for the adjustment of nuclear data, are discussed. A Second Order Sensitivity Theory (SOST) is presented, and its accuracy relative to that of first-order sensitivity theory and of the direct substitution method is investigated in Part II. The investigation addresses the non-linear problem of the effect of changes in the 297 keV sodium minimum cross section on the transport of neutrons in a deep-penetration problem. It is found that the SOST provides satisfactory accuracy for cross section uncertainty analysis. For the same degree of accuracy, the SOST can be significantly more efficient than the direct substitution method
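
    Generically, the nonlinearity contrasted with first-order theory corresponds to keeping the quadratic term of the Taylor expansion of a response R in the cross-section perturbations; a hedged sketch in standard sensitivity notation (not necessarily the paper's own):

```latex
\[
\frac{\delta R}{R}
  \;\approx\;
  \sum_i S_i \,\frac{\delta\sigma_i}{\sigma_i}
  \;+\;
  \frac{1}{2}\sum_{i,j} S_{ij}\,
  \frac{\delta\sigma_i}{\sigma_i}\,\frac{\delta\sigma_j}{\sigma_j},
\qquad
S_i = \frac{\sigma_i}{R}\,\frac{\partial R}{\partial \sigma_i},
\quad
S_{ij} = \frac{\sigma_i \sigma_j}{R}\,
         \frac{\partial^2 R}{\partial \sigma_i \,\partial \sigma_j}.
\]
```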

  3. Gender Inequalities in Highly Qualified Professions: A Social Psychological Analysis

    Maria Helena Santos

    2016-06-01

    Research in social and political psychology contributes towards understanding the persistence of job market gender segregation prevailing in recent decades, the consequences for those involved and their reactions when having to cope with gender inequality. Within the framework of the literature on shared ideologies that justify and legitimize discrimination against women, this article focuses on Portugal and analyses the particular case of women in two highly qualified professions traditionally carried out by men – politics and medicine. Drawing on the results of quantitative and qualitative studies, our analytical approach demonstrates that, while a majority of participants show awareness of the existence of gender inequality in these markedly masculine professions, meritocratic individualism and personal attributions of discrimination are the recurring explanations rather than any gender-based account. These results allow us to highlight the relevance of gender-based analysis as an ideology and furthermore to argue that ignoring this perspective not only diminishes individual responsibility for social change but also perpetuates gender asymmetries.

  4. High-Throughput Analysis and Automation for Glycomics Studies.

    Shubhakar, Archana; Reiding, Karli R; Gardner, Richard A; Spencer, Daniel I R; Fernandes, Daryl L; Wuhrer, Manfred

    This review covers advances in analytical technologies for high-throughput (HTP) glycomics. Our focus is on structural studies of glycoprotein glycosylation to support biopharmaceutical realization and the discovery of glycan biomarkers for human disease. For biopharmaceuticals, there is increasing use of glycomics in Quality by Design studies to help optimize glycan profiles of drugs with a view to improving their clinical performance. Glycomics is also used in comparability studies to ensure consistency of glycosylation both throughout product development and between biosimilars and innovator drugs. In clinical studies there is also expanding interest in the use of glycomics, for example in Genome Wide Association Studies, to follow changes in glycosylation patterns of biological tissues and fluids with the progress of certain diseases. These include cancers, neurodegenerative disorders and inflammatory conditions. Despite rising activity in this field, there are significant challenges in performing large-scale glycomics studies. The requirement is accurate identification and quantitation of individual glycan structures. However, glycoconjugate samples are often very complex and heterogeneous and contain many diverse branched glycan structures. In this article we cover HTP sample preparation and derivatization methods, sample purification, robotization, optimized glycan profiling by UHPLC, MS and multiplexed CE, as well as hyphenated techniques and automated data analysis tools. Throughout, we summarize the advantages and challenges with each of these technologies. The issues considered include reliability of the methods for glycan identification and quantitation, sample throughput, labor intensity, and affordability for large sample numbers.

  5. Isothermal pumping analysis for high-altitude tethered balloons.

    Kuo, Kirsty A; Hunt, Hugh E M

    2015-06-01

    High-altitude tethered balloons have potential applications in communications, surveillance, meteorological observations and climate engineering. To maintain balloon buoyancy, power fuel cells and perturb atmospheric conditions, fluids could be pumped from ground level to altitude using the tether as a hose. This paper examines the pumping requirements of such a delivery system. Cases considered include delivery of hydrogen, sulfur dioxide (SO2) and powders as fluid-based slurries. Isothermal analysis is used to determine the variation of pressures and velocities along the pipe length. Results show that transport of small quantities of hydrogen to power fuel cells and maintain balloon buoyancy can be achieved at pressures and temperatures that are tolerable in terms of both the pipe strength and the current state of pumping technologies. To avoid solidification, transport of SO2 would require elevated temperatures that cannot be tolerated by the strength fibres in the pipe. While the use of particle-based slurries rather than SO2 for climate engineering can reduce the pipe size significantly, the pumping pressures are close to the maximum bursting pressure of the pipe.
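
    The isothermal analysis described above amounts to integrating a pressure gradient that balances wall friction against the gravity head, with density tied to pressure through the gas law; a minimal Euler-march sketch for hydrogen follows, in which all numbers (bore, friction factor, mass flux, temperature, ground pressure) are chosen purely for illustration and are not taken from the paper.

```python
# Isothermal sketch of gas delivery up a vertical hose, assuming ideal-gas
# density, constant mass flux G = rho*v and a Darcy friction factor f_D.
R, T, M_H2, g = 8.314, 240.0, 2.016e-3, 9.81   # SI units; assumed mean T
D, f_D, G = 0.05, 0.02, 10.0                   # bore (m), friction, kg/m^2/s

def climb(p0: float, height: float, dz: float = 10.0) -> float:
    """Euler-march the pressure from ground level (p0, Pa) to altitude."""
    p, z = p0, 0.0
    while z < height:
        rho = p * M_H2 / (R * T)               # ideal-gas density
        dpdz = -f_D * G**2 / (2 * D * rho) - rho * g   # friction + gravity
        p += dpdz * dz
        z += dz
        if p <= 0:
            raise ValueError("pressure exhausted before reaching altitude")
    return p

print(f"delivery pressure at 20 km: {climb(40e6, 20e3)/1e6:.1f} MPa")
```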

  6. Teaching Advanced Data Analysis Tools to High School Astronomy Students

    Black, David V.; Herring, Julie; Hintz, Eric G.

    2015-01-01

    A major barrier to becoming an astronomer is learning how to analyze astronomical data, such as using photometry to compare the brightness of stars. Most fledgling astronomers learn observation, data reduction, and analysis skills through an upper division college class. If the same skills could be taught in an introductory high school astronomy class, then more students would have an opportunity to do authentic science earlier, with implications for how many choose to become astronomers. Several software tools have been developed that can analyze astronomical data ranging from fairly straightforward (AstroImageJ and DS9) to very complex (IRAF and DAOphot). During the summer of 2014, a study was undertaken at Brigham Young University through a Research Experience for Teachers (RET) program to evaluate the effectiveness and ease-of-use of these four software packages. Standard tasks tested included creating a false-color IR image using WISE data in DS9, Adobe Photoshop, and The Gimp; a multi-aperture analyses of variable stars over time using AstroImageJ; creating Spectral Energy Distributions (SEDs) of stars using photometry at multiple wavelengths in AstroImageJ and DS9; and color-magnitude and hydrogen alpha index diagrams for open star clusters using IRAF and DAOphot. Tutorials were then written and combined with screen captures to teach high school astronomy students at Walden School of Liberal Arts in Provo, UT how to perform these same tasks. They analyzed image data using the four software packages, imported it into Microsoft Excel, and created charts using images from BYU's 36-inch telescope at their West Mountain Observatory. The students' attempts to complete these tasks were observed, mentoring was provided, and the students then reported on their experience through a self-reflection essay and concept test. Results indicate that high school astronomy students can successfully complete professional-level astronomy data analyses when given detailed

  7. Brain Network Analysis from High-Resolution EEG Signals

    de Vico Fallani, Fabrizio; Babiloni, Fabio

    lattice and a random structure. Such a model has been designated as "small-world" network in analogy with the concept of the small-world phenomenon observed more than 30 years ago in social systems. In a similar way, many types of functional brain networks have been analyzed according to this mathematical approach. In particular, several studies based on different imaging techniques (fMRI, MEG and EEG) have found that the estimated functional networks showed small-world characteristics. In the functional brain connectivity context, these properties have been demonstrated to reflect an optimal architecture for the information processing and propagation among the involved cerebral structures. However, the performance of cognitive and motor tasks as well as the presence of neural diseases has been demonstrated to affect such a small-world topology, as revealed by the significant changes of L and C. Moreover, some functional brain networks have been mostly found to be very unlike the random graphs in their degree-distribution, which gives information about the allocation of the functional links within the connectivity pattern. It was demonstrated that the degree distributions of these networks follow a power-law trend. For this reason those networks are called "scale-free". They still exhibit the small-world phenomenon but tend to contain few nodes that act as highly connected "hubs". Scale-free networks are known to show resistance to failure, facility of synchronization and fast signal processing. Hence, it would be important to see whether the scaling properties of the functional brain networks are altered under various pathologies or experimental tasks. The present Chapter proposes a theoretical graph approach in order to evaluate the functional connectivity patterns obtained from high-resolution EEG signals. In this way, the "Brain Network Analysis" (in analogy with the Social Network Analysis that has emerged as a key technique in modern sociology) represents an
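
    The path-length and clustering metrics discussed here are straightforward to compute with standard graph tooling; below is a sketch on a synthetic Watts-Strogatz graph standing in for an EEG-derived functional network, with the usual analytic random-graph baselines (all parameters are illustrative).

```python
import math
import networkx as nx

# Characteristic path length L and clustering coefficient C on a synthetic
# "small-world" graph. Random-graph baselines use the common analytic
# estimates L_rand ~ ln(n)/ln(k) and C_rand ~ k/n.
n, k = 64, 6
G = nx.connected_watts_strogatz_graph(n=n, k=k, p=0.1, seed=1)

L = nx.average_shortest_path_length(G)
C = nx.average_clustering(G)
L_rand = math.log(n) / math.log(k)
C_rand = k / n

sigma = (C / C_rand) / (L / L_rand)   # >1 suggests small-world organization
print(f"L={L:.2f} (random ~{L_rand:.2f}), C={C:.2f} (random ~{C_rand:.2f})")
print(f"small-world index sigma = {sigma:.2f}")
```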

  8. Framework for Multidisciplinary Analysis, Design, and Optimization with High-Fidelity Analysis Tools

    Orr, Stanley A.; Narducci, Robert P.

    2009-01-01

    A plan is presented for the development of a high fidelity multidisciplinary optimization process for rotorcraft. The plan formulates individual disciplinary design problems, identifies practical high-fidelity tools and processes that can be incorporated in an automated optimization environment, and establishes statements of the multidisciplinary design problem including objectives, constraints, design variables, and cross-disciplinary dependencies. Five key disciplinary areas are selected in the development plan. These are rotor aerodynamics, rotor structures and dynamics, fuselage aerodynamics, fuselage structures, and propulsion / drive system. Flying qualities and noise are included as ancillary areas. Consistency across engineering disciplines is maintained with a central geometry engine that supports all multidisciplinary analysis. The multidisciplinary optimization process targets the preliminary design cycle where gross elements of the helicopter have been defined. These might include number of rotors and rotor configuration (tandem, coaxial, etc.). It is at this stage that sufficient configuration information is defined to perform high-fidelity analysis. At the same time there is enough design freedom to influence a design. The rotorcraft multidisciplinary optimization tool is built and substantiated throughout its development cycle in a staged approach by incorporating disciplines sequentially.

  9. An analysis of predictors of enrollment and successful achievement for girls in high school Advanced Placement physics

    Depalma, Darlene M.

    A problem within science education in the United States persists. U.S. students rank lower in science than most other students from participating countries on international tests of achievement (National Center for Education Statistics, 2003). In addition, the overall enrollment rate of U.S. students in high school Advanced Placement (AP) physics is still low compared to other academic domains, especially for females. This problem forms the background for the purpose of this study. This investigation examined cognitive and motivational variables thought to play a part in the under-representation of females in AP physics. Cognitive variables consisted of mathematics, reading, and science knowledge, as measured by scores on the 10th and 11th grade Florida Comprehensive Assessment Tests (FCAT). The motivational factors of attitude, stereotypical views toward science, self-efficacy, and epistemological beliefs were measured by a questionnaire developed with questions taken from instruments previously shown to be reliable and valid. A general survey regarding participation in extracurricular activities was also included. The sample included 12th grade students from two high schools located in Seminole County, Florida. Of the 106 participants, 20 girls and 27 boys were enrolled in AP physics, and 39 girls and 20 boys were enrolled in other elective science courses. Differences between males and females enrolled in AP physics were examined, as well as differences between females enrolled in AP physics and females who chose not to participate in AP physics, in order to determine predictors that apply exclusively to female enrollment in high school AP physics and predictors of an anticipated science-related college major. Data were first analyzed by Exploratory Factor Analysis, followed by Analysis of Variance (ANOVA), independent t-tests, univariate analysis, and logistic regression analysis. One overall theme that emerged from this research was a set of findings that refute the ideas that

  10. High performance computing enabling exhaustive analysis of higher order single nucleotide polymorphism interaction in Genome Wide Association Studies.

    Goudey, Benjamin; Abedini, Mani; Hopper, John L; Inouye, Michael; Makalic, Enes; Schmidt, Daniel F; Wagner, John; Zhou, Zeyu; Zobel, Justin; Reumann, Matthias

    2015-01-01

    Genome-wide association studies (GWAS) are a common approach for systematic discovery of single nucleotide polymorphisms (SNPs) which are associated with a given disease. Univariate analysis approaches commonly employed may miss important SNP associations that only appear through multivariate analysis in complex diseases. However, multivariate SNP analysis is currently limited by its inherent computational complexity. In this work, we present a computational framework that harnesses supercomputers. Based on our results, we estimate that a three-way interaction analysis of 1.1-million-SNP GWAS data would require over 5.8 years on the full "Avoca" IBM Blue Gene/Q installation at the Victorian Life Sciences Computation Initiative. This is hundreds of times faster than estimates for other CPU-based methods and four times faster than runtimes estimated for GPU methods, indicating how the level of hardware applied to interaction analysis may alter the types of analysis that can be performed. Furthermore, the same analysis would take under 3 months on the currently largest IBM Blue Gene/Q supercomputer, "Sequoia", at the Lawrence Livermore National Laboratory, assuming linear scaling is maintained, as our results suggest. Given that the implementation used in this study can be further optimised, this runtime means it is becoming feasible to carry out exhaustive analysis of higher order interaction studies on large modern GWAS.
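
    The quoted scale is easy to sanity-check, since the number of SNP triples grows as C(n, 3); a back-of-envelope sketch:

```python
import math

# Exhaustive three-way interaction scan over ~1.1 million SNPs.
n = 1_100_000
triples = math.comb(n, 3)
print(f"{triples:.3e} SNP triples")            # ~2.2e17 combinations

# Implied sustained throughput if the full scan takes 5.8 years:
seconds = 5.8 * 365.25 * 24 * 3600
print(f"~{triples / seconds:.2e} triple-tests per second")
```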

  11. Analysis of smear in high-resolution remote sensing satellites

    Wahballah, Walid A.; Bazan, Taher M.; El-Tohamy, Fawzy; Fathy, Mahmoud

    2016-10-01

    High-resolution remote sensing satellites (HRRSS) that use time delay and integration (TDI) CCDs have the potential to introduce large amounts of image smear. Clocking and velocity-mismatch smear are two of the key factors inducing image smear. Clocking smear is caused by the discrete manner in which the charge is clocked in the TDI-CCDs. The relative motion between the HRRSS and the observed object demands that the image motion velocity be strictly synchronized with the velocity of charge packet transfer (line rate) throughout the integration time. When imaging an object off-nadir, the image motion velocity changes, resulting in asynchronization between the image velocity and the CCD's line rate. A model for estimating the image motion velocity in HRRSS is derived. The influence of this velocity mismatch combined with clocking smear on the modulation transfer function (MTF) is investigated using MATLAB simulation. The analysis is performed for cross-track and along-track imaging with different satellite attitude angles and TDI steps. The results reveal that the velocity mismatch ratio and the number of TDI steps have a serious impact on the smear MTF; a velocity mismatch ratio of 2% degrades the MTF_smear by 32% at the Nyquist frequency when the TDI steps change from 32 to 96. In addition, the results show that to achieve the requirement MTF_smear ≥ 0.95, for TDI steps of 16 and 64 the allowable roll angles are 13.7° and 6.85°, and the permissible pitch angles are no more than 9.6° and 4.8°, respectively.
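
    A common first-order description of such smear multiplies the system MTF by a sinc of the smear extent, with the velocity-mismatch smear scaling as the mismatch ratio times the number of TDI stages; the sketch below uses this generic model for illustration only and is not the authors' exact formulation.

```python
import numpy as np

# Generic first-order smear model: uniform smear of extent d (in pixels)
# multiplies the MTF by |sinc(f*d)|, with d = eps * N_tdi for a velocity
# mismatch ratio eps. Note np.sinc(x) = sin(pi*x)/(pi*x).
def smear_mtf(f_cycles_per_pixel: np.ndarray, eps: float, n_tdi: int):
    d = eps * n_tdi                                  # smear extent, pixels
    return np.abs(np.sinc(f_cycles_per_pixel * d))

f_nyq = np.array([0.5])                              # Nyquist frequency
for n in (16, 32, 64, 96):
    print(f"N_TDI={n:3d}: MTF_smear(Nyquist) = {smear_mtf(f_nyq, 0.02, n)[0]:.3f}")
```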

  12. Spectral analysis of highly aliased sea-level signals

    Ray, Richard D.

    1998-10-01

    Observing high-wavenumber ocean phenomena with a satellite altimeter generally calls for "along-track" analyses of the data: measurements along a repeating satellite ground track are analyzed in a point-by-point fashion, as opposed to spatially averaging data over multiple tracks. The sea-level aliasing problems encountered in such analyses can be especially challenging. For TOPEX/POSEIDON, all signals with frequency greater than 18 cycles per year (cpy), including both tidal and subdiurnal signals, are folded into the 0-18 cpy band. Because the tidal bands are wider than 18 cpy, residual tidal cusp energy, plus any subdiurnal energy, is capable of corrupting any low-frequency signal of interest. The practical consequences of this are explored here by using real sea-level measurements from conventional tide gauges, for which the true oceanographic spectrum is known and to which a simulated "satellite-measured" spectrum, based on coarsely subsampled data, may be compared. At many locations the spectrum is sufficiently red that interannual frequencies remain unaffected. Intra-annual frequencies, however, must be interpreted with greater caution, and even interannual frequencies can be corrupted if the spectrum is flat. The results also suggest that whenever tides must be estimated directly from the altimetry, response methods of analysis are preferable to harmonic methods, even in nonlinear regimes; this will remain so for the foreseeable future. We concentrate on three example tide gauges: two coastal stations on the Malay Peninsula where the closely aliased K1 and Ssa tides are strong and at Canton Island where trapped equatorial waves are aliased.
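
    Folding a tidal frequency into the sampler's Nyquist band gives the alias period directly; the sketch below reproduces the well-known near-collision of the aliased K1 tide with the semiannual (Ssa) band under the roughly 9.9156-day TOPEX/POSEIDON repeat.

```python
# Alias period of a tidal constituent sampled at a satellite repeat
# interval: fold the tidal frequency into the Nyquist band [0, fs/2].
def alias_period_days(tide_period_hours: float, repeat_days: float) -> float:
    f = 24.0 / tide_period_hours          # tidal frequency, cycles/day
    fs = 1.0 / repeat_days                # sampling frequency, cycles/day
    f_alias = f % fs                      # fold into [0, fs)
    if f_alias > fs / 2:                  # reflect into [0, fs/2]
        f_alias = fs - f_alias
    return 1.0 / f_alias

# K1 (period ~23.9345 h) under the ~9.9156-day T/P repeat: ~173 days,
# uncomfortably close to the 182.6-day semiannual (Ssa) signal.
print(f"K1 alias: {alias_period_days(23.9345, 9.9156):.1f} days")
```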

  13. Analysis of Biochemical Control and Prognostic Factors in Patients Treated With Either Low-Dose Three-Dimensional Conformal Radiation Therapy or High-Dose Intensity-Modulated Radiotherapy for Localized Prostate Cancer

    Vora, Sujay A.; Wong, William W.; Schild, Steven E.; Ezzell, Gary A.; Halyard, Michele Y.

    2007-01-01

    Purpose: To identify prognostic factors and evaluate biochemical control rates for patients with localized prostate cancer treated with either high-dose intensity-modulated radiotherapy (IMRT) or conventional-dose three-dimensional conformal radiotherapy (3D-CRT). Methods: Four hundred sixteen patients with a minimum follow-up of 3 years (median, 5 years) were included. Two hundred seventy-one patients received 3D-CRT with a median dose of 68.4 Gy (range, 66-71 Gy). The next 145 patients received IMRT with a median dose of 75.6 Gy (range, 70.2-77.4 Gy). Biochemical control rates were calculated according to both American Society for Therapeutic Radiology and Oncology (ASTRO) consensus definitions. Prognostic factors were identified using both univariate and multivariate analyses. Results: The 5-year biochemical control rate was 60.4% for 3D-CRT and 74.1% for IMRT (p < 0.0001, first ASTRO Consensus definition). Using the ASTRO Phoenix definition, the 5-year biochemical control rate was 74.4% and 84.6% with 3D-CRT and IMRT, respectively (p = 0.0326). Univariate analyses determined that PSA level, T stage, Gleason score, perineural invasion, and radiation dose were predictive of biochemical control. On multivariate analysis, dose, Gleason score, and perineural invasion remained significant. Conclusion: On the basis of both ASTRO definitions, dose, Gleason score, and perineural invasion were predictive of biochemical control. Intensity-modulated radiotherapy allowed delivery of higher doses of radiation with very low toxicity, resulting in improved biochemical control

  14. Textural analysis of pre-therapeutic [18F]-FET-PET and its correlation with tumor grade and patient survival in high-grade gliomas

    Pyka, Thomas; Hiob, Daniela; Wester, Hans-Juergen [Klinikum Rechts der Isar der TU Muenchen, Department of Nuclear Medicine, Munich (Germany); Gempt, Jens; Ringel, Florian; Meyer, Bernhard [Klinikum Rechts der Isar der TU Muenchen, Neurosurgic Department, Munich (Germany); Schlegel, Juergen [Klinikum Rechts der Isar der TU Muenchen, Institute of Pathology and Neuropathology, Munich (Germany); Bette, Stefanie [Klinikum Rechts der Isar der TU Muenchen, Neuroradiologic department, Munich (Germany); Foerster, Stefan [Klinikum Rechts der Isar der TU Muenchen, Department of Nuclear Medicine, Munich (Germany); Klinikum Rechts der Isar der TU Muenchen, TUM Neuroimaging Center (TUM-NIC), Munich (Germany)

    2016-01-15

    Amino acid positron emission tomography (PET) with [18F]-fluoroethyl-L-tyrosine (FET) is well established in the diagnostic work-up of malignant brain tumors. Analysis of FET-PET data using tumor-to-background ratios (TBR) has been shown to be highly valuable for the detection of viable hypermetabolic brain tumor tissue; however, it has not proven equally useful for tumor grading. Recently, textural features in [18F]-fluorodeoxyglucose PET have been proposed as a method to quantify the heterogeneity of glucose metabolism in a variety of tumor entities. Herein we evaluate whether textural FET-PET features are of utility for grading and prognostication in patients with high-grade gliomas. One hundred thirteen patients (70 men, 43 women) with histologically proven high-grade gliomas were included in this retrospective study. All patients received static FET-PET scans prior to first-line therapy. TBR (max and mean), volumetric parameters and textural parameters based on gray-level neighborhood difference matrices were derived from static FET-PET images. Receiver operating characteristic (ROC) and discriminant function analyses were used to assess the value for tumor grading. Kaplan-Meier curves and univariate and multivariate Cox regression were employed for analysis of progression-free and overall survival. All FET-PET textural parameters showed the ability to differentiate between World Health Organization (WHO) grade III and IV tumors (p < 0.001; AUC 0.775). Further improvement in discriminatory power was possible through a combination of texture and metabolic tumor volume, classifying 85% of tumors correctly (AUC 0.830). TBR and volumetric parameters alone were correlated with tumor grade, but showed lower AUC values (0.644 and 0.710, respectively). Furthermore, a correlation of FET-PET texture but not TBR was shown with patient PFS and OS, proving significant in multivariate analysis as well. Volumetric parameters were predictive for OS, but this correlation did not

  15. Fuel analysis code FAIR and its high burnup modelling capabilities

    Prasad, P.S.; Dutta, B.K.; Kushwaha, H.S.; Mahajan, S.C.; Kakodkar, A.

    1995-01-01

    A computer code, FAIR, has been developed for analysing the performance of water-cooled reactor fuel pins. It is capable of analysing high-burnup fuels, and has recently been used to analyse ten high-burnup fuel rods irradiated at the Halden reactor. In the present paper, the code FAIR and its various high-burnup models are described. The performance of the code FAIR in analysing high-burnup fuels, and its other applications, are highlighted. (author). 21 refs., 12 figs

  16. Time series analysis in the social sciences the fundamentals

    Shin, Youseop

    2017-01-01

    Time Series Analysis in the Social Sciences is a practical and highly readable introduction written exclusively for students and researchers whose mathematical background is limited to basic algebra. The book focuses on fundamental elements of time series analysis that social scientists need to understand so they can employ time series analysis for their research and practice. Through step-by-step explanations and using monthly violent crime rates as case studies, this book explains univariate time series from the preliminary visual analysis through the modeling of seasonality, trends, and re
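
    The seasonality-and-trend modeling the book covers can be previewed in a few lines of statsmodels. The monthly series below is simulated, not the book's crime-rate data.

      # First-pass univariate decomposition of a monthly series,
      # assuming statsmodels; the data are simulated, not the book's.
      import numpy as np
      import pandas as pd
      from statsmodels.tsa.seasonal import seasonal_decompose

      rng = np.random.default_rng(1)
      idx = pd.date_range("2000-01", periods=120, freq="MS")
      trend = np.linspace(50, 40, 120)                  # slow decline
      season = 5 * np.sin(2 * np.pi * idx.month / 12)   # annual cycle
      series = pd.Series(trend + season + rng.normal(0, 2, 120), index=idx)

      result = seasonal_decompose(series, model="additive", period=12)
      print(result.seasonal.head(12))            # estimated monthly effects
      print(result.trend.dropna().iloc[[0, -1]]) # start and end of trend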

  17. Multi-physics methods development for high temperature gas cooled reactor analysis

    Seker, V.; Downar, T. J.

    2007-01-01

    Gas cooled reactors have been characterized as one of the most promising nuclear reactor concepts in the Generation-IV technology road map. Considerable research has been performed on the design and safety analysis of these reactors. However, the calculational tools being used to perform these analyses are not state-of-the-art and are not capable of performing detailed three-dimensional analyses. This paper presents the results of an effort to develop an improved thermal-hydraulic solver for pebble-bed-type high temperature gas cooled reactors. The solution method is based on the porous medium approach, and the momentum equation, including the modified Ergun resistance model for pebble beds, is solved in three-dimensional geometry. The heat transfer in the pebble bed is modeled considering the local thermal non-equilibrium between the solid and gas, which results in a separate energy equation for each medium. The effective thermal conductivity of the pebble bed can be calculated from both the Zehner-Schluender and Robold correlations. Both the fluid flow and the heat transfer are modeled in three-dimensional cylindrical coordinates and can be solved in steady-state and time-dependent modes. The spatial discretization is performed using the finite volume method, and the theta-method is used in the temporal discretization. A preliminary verification was performed by comparing the results with the experiments conducted at the SANA test facility. This facility is located at the Institute for Safety Research and Reactor Technology (ISR), Julich, Germany. Various experimental cases are modeled and good agreement in the gas and solid temperatures is observed. An on-going effort is to model the control rod ejection scenarios as described in the OECD/NEA/NSC PBMR-400 benchmark problem. In order to perform these analyses, the PARCS reactor simulator code will be coupled with the new thermal-hydraulic solver. Furthermore, some of the other anticipated accident scenarios in the benchmark
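
    The theta-method named above interpolates between explicit Euler (theta = 0) and implicit Euler (theta = 1), with Crank-Nicolson at theta = 0.5. A minimal sketch on 1-D transient conduction, unrelated to the actual pebble-bed solver, illustrates the scheme:

      # Theta-method time stepping for 1-D transient heat conduction;
      # a minimal sketch only -- not the pebble-bed solver described above.
      import numpy as np

      nx, nt = 50, 200
      alpha, dx, dt = 1e-4, 0.01, 0.5
      theta = 0.5              # 0: explicit, 1: implicit, 0.5: Crank-Nicolson
      r = alpha * dt / dx**2

      # Second-difference operator; boundary rows left zero (Dirichlet).
      A = np.zeros((nx, nx))
      for i in range(1, nx - 1):
          A[i, i - 1], A[i, i], A[i, i + 1] = r, -2 * r, r

      I = np.eye(nx)
      T = np.full(nx, 300.0)   # initial temperature [K]
      T[0] = T[-1] = 600.0     # hot walls

      lhs = I - theta * A
      for _ in range(nt):
          rhs = (I + (1 - theta) * A) @ T
          T = np.linalg.solve(lhs, rhs)
          T[0] = T[-1] = 600.0 # re-impose boundary values

      print(f"centerline temperature after {nt * dt:.0f} s: {T[nx // 2]:.1f} K")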

  18. Neutron activation analysis of high purity silver using high resolution gamma-spectrometry

    Gilbert, E.N.; Veriovkin, G.V.; Botchkaryov, B.N.; Godovikov, A.A.; Zhavoronkov, V.Ya.; Mikhailov, V.A.

    1975-01-01

    A method for the neutron activation determination of microimpurities in high-purity silver has been developed. For matrix activity separation, the extraction of silver by dibutyl sulfide (DBS) was employed. The purification coefficient was 10^8 after triple extraction. To study the behaviour of microimpurities in the extraction procedure and to determine their chemical yields, tracer experiments were undertaken with radionuclides of Na, Se, Fe, Co, Cu, As, Sc, Te, Zr, Hf, Mo, W, Cd, In, Sb, La, Ce, Eu, Ta, Re, Ir, Ru. All the elements studied were found to remain in the aqueous phase up to 96-99% after triple extraction with DBS. To estimate the accuracy of the method, and to study the mutual influence of the elements present in various relative amounts on the accuracy of the analysis, a number of 'added-found' experiments were performed and the results were treated statistically. In these experiments, model mixtures of 30 nuclides were analysed after triple DBS extraction. The t-criterion values for the confidence interval at P=0.95 show the absence of systematic errors. Variation coefficient values do not exceed 15%. Using a Ge(Li) detector it was possible to determine 30 elements simultaneously in silver samples. (T.G.)

  19. Freud: a software suite for high-throughput simulation analysis

    Harper, Eric; Spellings, Matthew; Anderson, Joshua; Glotzer, Sharon

    Computer simulation is an indispensable tool for the study of a wide variety of systems. As simulations scale to fill petascale and exascale supercomputing clusters, so too does the size of the data produced, as well as the difficulty in analyzing these data. We present Freud, an analysis software suite for efficient analysis of simulation data. Freud makes no assumptions about the system being analyzed, allowing for general analysis methods to be applied to nearly any type of simulation. Freud includes standard analysis methods such as the radial distribution function, as well as new methods including the potential of mean force and torque and local crystal environment analysis. Freud combines a Python interface with fast, parallel C++ analysis routines to run efficiently on laptops, workstations, and supercomputing clusters. Data analysis on clusters reduces data transfer requirements, a prohibitive cost for petascale computing. Used in conjunction with simulation software, Freud allows for smart simulations that adapt to the current state of the system, enabling the study of phenomena such as nucleation and growth, intelligent investigation of phases and phase transitions, and determination of effective pair potentials.
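
    As a taste of the kind of per-frame analysis described, the sketch below computes a radial distribution function on random points. It assumes the freud 2.x Python API (freud.box.Box, freud.density.RDF); real use would pass simulation trajectory frames instead.

      # Radial distribution function of random points, a minimal sketch
      # assuming the freud 2.x API; real use would pass simulation frames.
      import freud
      import numpy as np

      rng = np.random.default_rng(2)
      box = freud.box.Box.cube(10.0)
      points = rng.uniform(-5.0, 5.0, size=(1000, 3)).astype(np.float32)

      rdf = freud.density.RDF(bins=50, r_max=4.0)
      rdf.compute(system=(box, points))

      # For an ideal gas g(r) ~ 1; structure would show up as peaks.
      print(rdf.bin_centers[:5])
      print(rdf.rdf[:5])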

  20. Modular high voltage power supply for chemical analysis

    Stamps, James F [Livermore, CA; Yee, Daniel D [Dublin, CA

    2008-07-15

    A high voltage power supply for use in a system such as a microfluidics system uses a DC-DC converter in parallel with a voltage-controlled resistor. A feedback circuit provides a control signal for the DC-DC converter and voltage-controlled resistor so as to regulate the output voltage of the high voltage power supply, as well as to sink or source current from the high voltage supply.

  1. High-throughput Transcriptome analysis, CAGE and beyond

    Kodzius, Rimantas

    2008-01-01

    1. Current research - PhD work on discovery of new allergens - Postdoctoral work on Transcriptional Start Sites a) Tag based technologies allow higher throughput b) CAGE technology to define promoters c) CAGE data analysis to understand Transcription - Wo

  3. Linear and nonlinear analysis of high-power rf amplifiers

    Puglisi, M.

    1983-01-01

    After a survey of the state-variable analysis method, the final amplifier for the CBA is analyzed taking into account the real beam waveshape. An empirical method for checking the stability of a nonlinear system is also considered

  4. 3-D Experimental Fracture Analysis at High Temperature

    John H. Jackson; Albert S. Kobayashi

    2001-09-14

    T*e, which is an elastic-plastic fracture parameter based on the incremental theory of plasticity, was determined numerically and experimentally. The T*e integral of a tunneling crack in a 2024-T3 aluminum three-point bend specimen was obtained through a hybrid analysis of moire interferometry and 3-D elastic-plastic finite element analysis. The results were verified by the good agreement between the experimentally and numerically determined T*e on the specimen surface.

  5. A kernel version of spatial factor analysis

    Nielsen, Allan Aasbjerg

    2009-01-01

    Schölkopf et al. introduce kernel PCA. Shawe-Taylor and Cristianini is an excellent reference for kernel methods in general. Bishop and Press et al. describe kernel methods among many other subjects. Nielsen and Canty use kernel PCA to detect change in univariate airborne digital camera images. The kernel version of PCA handles nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply kernel versions of PCA, maximum autocorrelation factor (MAF) analysis ...
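
    A hedged sketch of the kernel PCA idea, using scikit-learn's KernelPCA rather than the authors' implementation: an RBF kernel implicitly maps the data into feature space and the leading components are extracted there, separating classes that linear PCA cannot.

      # Kernel PCA on a nonlinearly separable toy set, a sketch using
      # scikit-learn in place of the authors' implementation.
      import numpy as np
      from sklearn.datasets import make_circles
      from sklearn.decomposition import PCA, KernelPCA

      X, y = make_circles(n_samples=400, factor=0.3, noise=0.05,
                          random_state=0)

      linear = PCA(n_components=2).fit_transform(X)
      kernel = KernelPCA(n_components=2, kernel="rbf",
                         gamma=10.0).fit_transform(X)

      # The two rings stay mixed along the linear PCs but separate
      # along the first kernel PC:
      for name, Z in [("linear PCA", linear), ("kernel PCA", kernel)]:
          gap = abs(Z[y == 0, 0].mean() - Z[y == 1, 0].mean())
          print(f"{name}: class separation along PC1 = {gap:.3f}")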

  6. Analysis of surface degradation of high density polyethylene (HDPE ...

    Unknown

    results from tracking. Having this in view, in the present work, tracking studies ... tween the high voltage and the ground electrode was adjusted to be equal to 50 ... tures, extra loading and high strain rates due to wind or impacts and the effect ...

  7. Aggressive Students and High School Dropout: An Event History Analysis

    Orozco, Steven R.

    2016-01-01

    Aggressive students often struggle in multiple domains of their school functioning and are at increased risk for high school dropout. Research has identified a variety of warning flags which are strong predictors of high school dropout. While it is known that aggressive students exhibit many of these warning flags, there is little research which…

  8. Exome Sequence Analysis of 14 Families With High Myopia

    Kloss, Bethany A.; Tompson, Stuart W.; Whisenhunt, Kristina N.

    2017-01-01

    Purpose: To identify causal gene mutations in 14 families with autosomal dominant (AD) high myopia using exome sequencing. Methods: Select individuals from 14 large Caucasian families with high myopia were exome sequenced. Gene variants were filtered to identify potential pathogenic changes. Sang...

  9. Steady State Structural Analysis of High Pressure Gas Turbine Blade using Finite Element Analysis

    Mazarbhuiya, Hussain Mahamed Sahed Mostafa; Murari Pandey, Krishna

    2017-08-01

    In gas turbines, the major portion of performance dependency lies upon turbine blade design. Turbine blades experience very high centrifugal, axial and tangential forces during power generation. While withstanding these forces, blades undergo elongation. Different methods have been proposed for enhancing the mechanical properties of blades to withstand extreme conditions. The present paper describes the stress and elongation for blades having the properties of different materials. Steady-state structural analysis has been performed in the present work for different materials (IN 625, IN 718, IN 738, IN 738 LC, MAR M246, Ni-Cr, Ti-alloy, Ti-Al, Ti-T6, U500). A remarkable finding is that the root of the blade is subjected to maximum stress for all blade materials, and the blade made of MAR M246 has the least stress and deformation among all the blade materials, so it can be selected as a suitable material for gas turbine blades.
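
    The qualitative conclusion (stress peaking at the blade root and scaling with material density) can be previewed with the closed-form centrifugal stress of a uniform rotating blade, sigma_root = rho * omega^2 * (r_tip^2 - r_root^2) / 2. The sketch below uses representative densities and an assumed rotor speed and geometry, not the paper's FEA inputs.

      # Root centrifugal stress of a uniform rotating blade,
      # sigma = rho * omega^2 * (r_tip^2 - r_root^2) / 2.
      # Speed, geometry and densities are assumed, not the paper's data.
      import math

      omega = 2 * math.pi * 9000 / 60   # 9000 rpm in rad/s (assumed)
      r_root, r_tip = 0.30, 0.42        # metres (assumed geometry)

      densities = {                     # kg/m^3, approximate handbook values
          "IN 718":    8190,
          "MAR M246":  8440,
          "Ti-6Al-4V": 4430,
      }
      for name, rho in densities.items():
          sigma = rho * omega**2 * (r_tip**2 - r_root**2) / 2
          print(f"{name:10s} root stress ~ {sigma / 1e6:6.1f} MPa")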

  10. Radiochemical neutron activation analysis based multi-elemental analysis of high purity gallium

    Tashimova, F.A.; Sadikov, I.I; Salimov, M.I.; Zinov'ev, V.G.

    2006-01-01

    Gallium is one of the widely used materials in the semiconductor and optoelectronics industry. Gallium is used to produce infrared detectors, piezoelectric sensors, and high- and low-temperature transistors for space and defense technology. One of the most important requirements for semiconductor materials of gallium compounds is extremely high purity of layers and films. Information on impurities (type of impurity, concentration, character of distribution) is important both for a better understanding of the physical and chemical processes taking place in the formed semiconductor structures and for the know-how of devices based on them. The object of this work is to develop a radiochemical neutron activation technique for the analysis of high-purity gallium. Irradiation of 0.1 g of a gallium sample in a neutron flux of 5·10^13 cm^-2·s^-1 for 5 hours results in an induced activity of more than 10^8 Bq due to the 72Ga radionuclide, whose half-life is 14.1 hours. Therefore, to perform instrumental NAA of gallium, a long cooling period (10 days) is required, and highly sensitive determination of elements producing short- and long-lived radionuclides is hampered by the matrix activity of 72Ga. We have studied the behaviour of gallium in the extraction-chromatographic system 'TBP-HCl'. The experiments have shown that a higher distribution factor (D) and capacity for gallium can be achieved when the 'TBP-4M HCl' system is used. However, more than 10 trace elements have high D and thus cannot be separated from 72Ga. To resolve the problem and increase the number of separated trace elements, we have used preliminary saturation of the chromatographic column with tellurium, which has a D higher than most of the elements in the 'TBP-4M HCl' system and thus suppresses extraction of the elements. The distribution profile of gallium along the column and the elution curves of 25 trace elements have been measured. Chemical yields of the separated elements, measured using radiotracers, are more than 93%. On the basis of the carried out researches

  11. Trend analysis of modern high-rise construction

    Radushinsky, Dmitry; Gubankov, Andrey; Mottaeva, Asiiat

    2018-03-01

    The article reviews the main trends of modern high-rise construction, considering a number of architectural, engineering, technological, economic and image factors that have influenced the intensification of high-rise building construction in the 21st century. The key factors of modern high-rise construction are identified: an attractive image component for businessmen and politicians, the ability to embody current views on architecture and innovations in construction technologies, the lobbying of relevant structures, and the opportunity to serve as an effective driver in the development of a complex of national economy sectors with the achievement of a multiplicative effect. The priority given to foreign architectural bureaus in the design of super-high buildings in Russia at the present stage is assessed. The issue of the economic expediency of constructing high-rise buildings, including those with only a residential function, is investigated. The article also discusses the role of skyscrapers in the marketing of places and territories as an important component of a city's image, the connection between the availability of a high-rise center (the City) and the possibility of attracting a "creative class", and the features of creating a large working space for specialists on the basis of the territorial proximity and density of high-rise buildings.

  12. Exploring charge density analysis in crystals at high pressure: data collection, data analysis and advanced modelling.

    Casati, Nicola; Genoni, Alessandro; Meyer, Benjamin; Krawczuk, Anna; Macchi, Piero

    2017-08-01

    The possibility to determine electron-density distribution in crystals has been an enormous breakthrough, stimulated by a favourable combination of equipment for X-ray and neutron diffraction at low temperature, by the development of simplified, though accurate, electron-density models refined from the experimental data and by the progress in charge density analysis often in combination with theoretical work. Many years after the first successful charge density determination and analysis, scientists face new challenges, for example: (i) determination of the finer details of the electron-density distribution in the atomic cores, (ii) simultaneous refinement of electron charge and spin density or (iii) measuring crystals under perturbation. In this context, the possibility of obtaining experimental charge density at high pressure has recently been demonstrated [Casati et al. (2016). Nat. Commun. 7, 10901]. This paper reports on the necessities and pitfalls of this new challenge, focusing on the species syn-1,6:8,13-biscarbonyl[14]annulene. The experimental requirements, the expected data quality and data corrections are discussed in detail, including warnings about possible shortcomings. At the same time, new modelling techniques are proposed, which could enable specific information to be extracted, from the limited and less accurate observations, like the degree of localization of double bonds, which is fundamental to the scientific case under examination.

  13. Safety analysis of high pressure gaseous fuel container punctures

    Swain, M.R. [Univ. of Miami, Coral Gables, FL (United States)]

    1995-09-01

    The following report is divided into two sections. The first section describes the results of ignitability tests of high pressure hydrogen and natural gas leaks. The volume of ignitable gases formed by leaking hydrogen or natural gas was measured. Leaking high pressure hydrogen produced a cone of ignitable gases with a 28° included angle. Leaking high pressure methane produced a cone of ignitable gases with a 20° included angle. Ignition of hydrogen produced larger overpressures than did natural gas. The largest overpressures produced by hydrogen were the same as the overpressures produced by inflating an 11-inch child's balloon until it burst.

  14. National high-level waste systems analysis report

    Kristofferson, K.; Oholleran, T.P.; Powell, R.H.

    1995-09-01

    This report documents the assessment of budgetary impacts, constraints, and repository availability on the storage and treatment of high-level waste and on both existing and pending negotiated milestones. The impacts of the availabilities of various treatment systems on schedule and throughput at four Department of Energy sites are compared to repository readiness in order to determine the prudent application of resources. The information modeled for each of these sites is integrated with a single national model. The report suggests a high-level-waste model that offers a national perspective on all high-level waste treatment and storage systems managed by the Department of Energy.

  16. Analysis of high-quality modes in open chaotic microcavities

    Fang, W.; Yamilov, A.; Cao, H.

    2005-01-01

    We present a numerical study of the high-quality modes in two-dimensional dielectric stadium microcavities. Although the classical ray mechanics is fully chaotic in a stadium billiard, all of the high-quality modes show a 'strong scar' around unstable periodic orbits. When the deformation (ratio of the length of the straight segments over the diameter of the half circles) is small, the high-quality modes correspond to whispering-gallery-type trajectories and their quality factors decrease monotonically with increasing deformation. At large deformation, each high-quality mode is associated with multiple unstable periodic orbits. Its quality factor changes nonmonotonically with the deformation, and there exists an optimal deformation for each mode at which its quality factor reaches a local maximum. This unusual behavior is attributed to the interference of waves propagating along different constituent orbits that could minimize light leakage out of the cavity

  17. Multidimensional analysis of high resolution γ-ray data

    Flibotte, S.; Huettmeier, U.J.; France, G. de; Haas, B.; Romain, P.; Theisen, C.; Vivien, J.P.; Zen, J. (Centre de Recherches Nucleaires, 67 - Strasbourg (France)); Bednarczyk, P. (Inst. of Nuclear Physics, Krakow (Poland))

    1992-08-15

    Algorithms are developed to analyze high-fold γ-ray coincidences. The performance of the programs has been tested in 3, 4 and 5 dimensions using events generated with a Monte Carlo simulation. (orig.)

  18. Multicast Performance Analysis for High-Speed Torus Networks

    Oral, S; George, A

    2002-01-01

    ... for unicast-based and path-based multicast communication on high-speed torus networks. Software-based multicast performance results of selected algorithms on a 16-node Scalable Coherent Interface (SCI) torus are given...

  19. High-Throughput Analysis and Automation for Glycomics Studies

    Shubhakar, A.; Reiding, K.R.; Gardner, R.A.; Spencer, D.I.R.; Fernandes, D.L.; Wuhrer, M.

    2015-01-01

    This review covers advances in analytical technologies for high-throughput (HTP) glycomics. Our focus is on structural studies of glycoprotein glycosylation to support biopharmaceutical realization and the discovery of glycan biomarkers for human disease. For biopharmaceuticals, there is increasing

  20. MIPHENO: Data normalization for high throughput metabolic analysis.

    High throughput methodologies such as microarrays, mass spectrometry and plate-based small molecule screens are increasingly used to facilitate discoveries from gene function to drug candidate identification. These large-scale experiments are typically carried out over the course...

  1. Electrospray Ionization Mass Spectrometric Analysis of Highly Reactive Glycosyl Halides

    Lajos Kovács

    2012-07-01

    Highly reactive glycosyl chlorides and bromides have been analysed by a routine mass spectrometric method using electrospray ionization and lithium salt adduct-forming agents in anhydrous acetonitrile solution, providing salient lithiated molecular ions [M+Li]+, [2M+Li]+ etc. The role of other adduct-forming salts has also been evaluated. The lithium salt method is useful for accurate mass determination of these highly sensitive compounds.

  2. Analysis of high heat flux testing of mock-ups

    Salavy, J.-F.; Giancarli, L.; Merola, M.; Picard, F.; Roedig, M.

    2003-01-01

    ITER EU Home Team is performing a large R and D effort in support of the development of high heat flux components for ITER. In this framework, this paper describes the thermal analyses, the fatigue lifetime evaluation and the transient VDE with material melting related to the high heat flux thermo-mechanical tests performed in the JUDITH facility. It reports on several mock-ups representative of different proposed component designs based on Be, W and CFC as armour materials

  3. Failure analysis of a barrel exposed to high temperature

    Usman, A.; Salam, I.; Rizvi, S.A.; Qasir, S.

    2005-01-01

    The paper deals with the study of a tank gun barrel which had failed after firing only a few rounds. The failure was in the form of bulging at the muzzle end (ME). The material of the barrel was characterized using different techniques including chemical and mechanical testing, optical microscopy and electron microscopy. The study disclosed that the barrel was subjected to excessively high temperature, which resulted in its softening and consequent bulging under the high pressure of the round. (author)

  4. Adjoint sensitivity analysis of high frequency structures with Matlab

    Bakr, Mohamed; Demir, Veysel

    2017-01-01

    This book covers the theory of adjoint sensitivity analysis and uses the popular FDTD (finite-difference time-domain) method to show how wideband sensitivities can be efficiently estimated for different types of materials and structures. It includes a variety of MATLAB® examples to help readers absorb the content more easily.

  5. High speed analysis of high pressure combustion in a constant volume cell

    Frijters, P.J.M.; Klein-Douwel, R.J.H.; Manski, S.S.; Somers, L.M.T.; Baert, R.S.G.; Dias, V.

    2005-01-01

    A combustion process with N2, O2 and C2H4 as fuel, used in an optically accessible, high pressure, high temperature, constant volume cell for research on diesel fuel spray formation, is studied. The flame front speed Vf,HS is determined using high speed imaging. The pressure trace of the combustion

  6. High-performance computing in accelerating structure design and analysis

    Li Zenghai; Folwell, Nathan; Ge Lixin; Guetz, Adam; Ivanov, Valentin; Kowalski, Marc; Lee, Lie-Quan; Ng, Cho-Kuen; Schussman, Greg; Stingelin, Lukas; Uplenchwar, Ravindra; Wolf, Michael; Xiao, Liling; Ko, Kwok

    2006-01-01

    Future high-energy accelerators such as the Next Linear Collider (NLC) will accelerate multi-bunch beams of high current and low emittance to obtain high luminosity, which put stringent requirements on the accelerating structures for efficiency and beam stability. While numerical modeling has been quite standard in accelerator R and D, designing the NLC accelerating structure required a new simulation capability because of the geometric complexity and level of accuracy involved. Under the US DOE Advanced Computing initiatives (first the Grand Challenge and now SciDAC), SLAC has developed a suite of electromagnetic codes based on unstructured grids and utilizing high-performance computing to provide an advanced tool for modeling structures at accuracies and scales previously not possible. This paper will discuss the code development and computational science research (e.g. domain decomposition, scalable eigensolvers, adaptive mesh refinement) that have enabled the large-scale simulations needed for meeting the computational challenges posed by the NLC as well as projects such as the PEP-II and RIA. Numerical results will be presented to show how high-performance computing has made a qualitative improvement in accelerator structure modeling for these accelerators, either at the component level (single cell optimization), or on the scale of an entire structure (beam heating and long-range wakefields)

  7. High Sensitivity and High Detection Specificity of Gold-Nanoparticle-Grafted Nanostructured Silicon Mass Spectrometry for Glucose Analysis.

    Tsao, Chia-Wen; Yang, Zhi-Jie

    2015-10-14

    Desorption/ionization on silicon (DIOS) is a high-performance matrix-free mass spectrometry (MS) analysis method that involves using silicon nanostructures as a matrix for MS desorption/ionization. In this study, gold nanoparticles grafted onto a nanostructured silicon (AuNPs-nSi) surface were demonstrated as a DIOS-MS analysis approach with high sensitivity and high detection specificity for glucose detection. A glucose sample deposited on the AuNPs-nSi surface was directly catalyzed to negatively charged gluconic acid molecules on a single AuNPs-nSi chip for MS analysis. The AuNPs-nSi surface was fabricated using two electroless deposition steps and one electroless etching step. The effects of the electroless fabrication parameters on the glucose detection efficiency were evaluated. Practical application of AuNPs-nSi MS glucose analysis in urine samples was also demonstrated in this study.

  8. Analysis of production factors in high performance concrete

    Gilberto Carbonari

    2003-01-01

    The incorporation of silica fume and superplasticizers in high strength and high performance concrete, along with a low water-cement ratio, leads to significant changes in the workability and the energy needed to homogenize and compact the concrete. Moreover, several aspects of concrete production that are not critical for conventional concrete are important for high strength concrete. This paper will discuss the need for controlling the humidity of the aggregates, optimizing the mixing sequence used in the fabrication, and the slump loss. The application of a silica fume concrete in typical building columns will be analyzed considering the required consolidation, the variability of the material strength within the structural element and the relation between core and molded specimen strength. Comparisons will also be made with conventional concrete.

  9. High beta and second stability region transport and stability analysis

    1991-01-01

    This document describes ideal and resistive MHD studies of high-beta plasmas and of the second stability region. Significant progress is reported on the resistive stability properties of high beta poloidal ''supershot'' discharges. For these studies initial profiles were taken from the TRANSP code which is used extensively to analyze experimental data. When an ad hoc method of removing the finite pressure stabilization of tearing modes is implemented it is shown that there is substantial agreement between MHD stability computation and experiment. In particular, the mode structures observed experimentally are consistent with the predictions of the resistive MHD model. We also report on resistive stability near the transition to the second region in TFTR. Tearing modes associated with a nearby infernal mode may explain the increase in MHD activity seen in high beta supershots and which impede the realization of Q∼1. We also report on a collaborative study with PPPL involving sawtooth stabilization with ICRF

  10. Additional EIPC Study Analysis: Interim Report on High Priority Topics

    Hadley, Stanton W [ORNL

    2013-11-01

    Between 2010 and 2012 the Eastern Interconnection Planning Collaborative (EIPC) conducted a major long-term resource and transmission study of the Eastern Interconnection (EI). With guidance from a Stakeholder Steering Committee (SSC) that included representatives from the Eastern Interconnection States Planning Council (EISPC) among others, the project was conducted in two phases. Phase 1 was a long-term capacity expansion analysis involving the creation of eight major futures plus 72 sensitivities. Three scenarios were selected for more extensive transmission-focused evaluation in Phase 2. Five power flow analyses, nine production cost model runs (including six sensitivities), and three capital cost estimations were developed during this second phase. The results from Phases 1 and 2 provided a wealth of data that could be examined further to address energy-related questions. A list of 13 topics was developed for further analysis; this paper discusses the first five.

  11. An analysis of curriculum implementation on high schools in Yogyakarta

    Febriana, Beta Wulan; Arlianty, Widinda Normalia; Diniaty, Artina; Fauzi'ah, Lina

    2017-12-01

    This study aims to find out how the curriculum is implemented at three schools in Yogyakarta. The selection of these three schools is based on the use of a different curriculum in each school. The analysis was done by distributing a questionnaire covering the eight national education standards (NES). The purpose of this questionnaire is to find out how the curriculum is implemented in the schools, and whether or not the implementation is in accordance with the expectations of the curriculum. The questionnaire consists of indicators for each NES: Content Standards, Process Standards, Graduate Competency Standards, Teacher and Education Staff Standards, Facility and Infrastructure Standards, Management Standards, Financing Standards and Assessment Standards. Results of the observation indicate that there is a discrepancy between expectations and reality in the three schools observed.

  12. Hera: High Energy Astronomical Data Analysis via the Internet

    Valencic, Lynne A.; Chai, P.; Pence, W.; Snowden, S.

    2011-09-01

    The HEASARC at NASA Goddard Space Flight Center has developed Hera, a data processing facility for analyzing high energy astronomical data over the internet. Hera provides all the software packages, disk space, and computing resources needed to do general processing of and advanced research on publicly available data from High Energy Astrophysics missions. The data and data products are kept on a server at GSFC and can be downloaded to a user's local machine. This service is provided for free to students, educators, and researchers for educational and research purposes.

  13. Fault Analysis and Detection in Microgrids with High PV Penetration

    El Khatib, Mohamed [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hernandez Alvidrez, Javier [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ellis, Abraham [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-05-01

    In this report we focus on analyzing the behaviour of current-controlled PV inverters under faults in order to develop fault detection schemes for microgrids with high PV penetration. An inverter model suitable for steady-state fault studies is presented, and the impact of PV inverters on two protection elements is analyzed. The studied protection elements are the superimposed-quantities-based directional element and the negative-sequence directional element. Additionally, several non-overcurrent fault detection schemes are discussed in this report for microgrids with high PV penetration. A detailed time-domain simulation study is presented to assess the performance of the presented fault detection schemes under different microgrid modes of operation.

  14. Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks.

    Arampatzis, Georgios; Katsoulakis, Markos A; Pantazis, Yannis

    2015-01-01

    Existing sensitivity analysis approaches are not able to handle efficiently stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis for such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step of the proposed strategy, a finite-difference method is applied only for the sensitivity estimation of the (potentially) sensitive parameters that have not been screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis with eighty parameters demonstrate that the proposed strategy is able to quickly discover and discard the insensitive parameters and in the remaining potentially sensitive parameters it accurately estimates the sensitivities. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in "sloppy" systems. In particular, the computational acceleration is quantified by the ratio between the total number of parameters over the
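
    The variance-reduction idea behind the second step can be illustrated with common random numbers: driving the nominal and perturbed simulations from the same random stream couples them and shrinks the variance of the finite-difference estimator. The toy birth-death sketch below is only an illustration of that idea, not the authors' coupling construction.

      # Common-random-number finite differences for a toy birth-death
      # process: sensitivity of the population at t_end to the birth rate.
      # An illustration of coupled variance reduction, not the paper's method.
      import numpy as np

      def simulate(birth, death=1.0, x0=50, t_end=5.0, rng=None):
          x, t = x0, 0.0
          while t < t_end and x > 0:
              total = birth * x + death * x
              t += rng.exponential(1.0 / total)
              if t >= t_end:
                  break
              x += 1 if rng.random() < birth * x / total else -1
          return x

      h, n = 0.01, 500
      coupled, independent = [], []
      for i in range(n):
          # Coupled: the same seed drives nominal and perturbed runs.
          up = simulate(1.0 + h, rng=np.random.default_rng(i))
          dn = simulate(1.0 - h, rng=np.random.default_rng(i))
          coupled.append((up - dn) / (2 * h))
          # Independent: fresh randomness for each run.
          up = simulate(1.0 + h, rng=np.random.default_rng(10_000 + i))
          dn = simulate(1.0 - h, rng=np.random.default_rng(20_000 + i))
          independent.append((up - dn) / (2 * h))

      print("coupled     mean:", np.mean(coupled),
            " variance:", np.var(coupled))
      print("independent mean:", np.mean(independent),
            " variance:", np.var(independent))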

  15. Analysis of thermodynamic properties for high-temperature superconducting oxides

    Kushwah, S.S.; Shanker, J.

    1993-01-01

    Analysis of thermodynamic properties such as specific heat, Debye temperature, Einstein temperature, thermal expansion coefficient, bulk modulus, and Grueneisen parameter is performed for rare-earth-based, Tl-based, and Bi-based superconducting copper oxides. Values of thermodynamic parameters are calculated and reported. The relationship between the Debye temperature and the superconducting transition temperature is used to estimate the values of Tc using the interaction parameters from Ginzburg. (orig.)

  17. Highly Robust Statistical Methods in Medical Image Analysis

    Kalina, Jan

    2012-01-01

    Roč. 32, č. 2 (2012), s. 3-16 ISSN 0208-5216 R&D Projects: GA MŠk(CZ) 1M06014 Institutional research plan: CEZ:AV0Z10300504 Keywords : robust statistics * classification * faces * robust image analysis * forensic science Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.208, year: 2012 http://www.ibib.waw.pl/bbe/bbefulltext/BBE_32_2_003_FT.pdf

  18. High Throughput Analysis of Breast Cancer Specimens on the Grid

    Yang, Lin; Chen, Wenjin; Meer, Peter; Salaru, Gratian; Feldman, Michael D.; Foran, David J.

    2007-01-01

    Breast cancer accounts for about 30% of all cancers and 15% of all cancer deaths in women in the United States. Advances in computer-assisted diagnosis (CAD) hold promise for early detection and staging of disease progression. In this paper we introduce a Grid-enabled CAD to perform automatic analysis of imaged histopathology breast tissue specimens. More than 100,000 digitized samples (1200 × 1200 pixels) have already been processed on the Grid. We have analyzed results for 3744 breast tissue ...

  19. IDAL: an interactive analysis language for high energy physics

    Burnett, T.H.

    1990-01-01

    The SLAC e+e- experiment SLD has adopted a unique off-line software environment, IDA. It provides a command processor shell for all code, from reconstruction and Monte Carlo production to user DST physics analysis. An essential component is an incrementally-compiled language, IDAL. IDAL allows symbolic access to SLD data structures, and supports special loop constructs to allow examination of all banks of a given type. IDAL also recognizes statements that simultaneously define histograms and generate code to fill them

  20. Analysis of Pacific oyster larval proteome and its response to high-CO2

    Dineshram, R.; Wong, Kelvin K.W.; Xiao, Shu; Yu, Ziniu; Qian, Pei Yuan; Thiyagarajan, Vengatesen

    2012-01-01

    Most calcifying organisms show depressed metabolic, growth and calcification rates as symptoms of high CO2 exposure due to the ocean acidification (OA) process. Analysis of the global expression pattern of proteins (proteome analysis) represents a powerful tool

  1. A highly optimized grid deployment: the metagenomic analysis example.

    Aparicio, Gabriel; Blanquer, Ignacio; Hernández, Vicente

    2008-01-01

    Computational resources and computationally expensive processes are two topics that are not growing at the same rate. The availability of large amounts of computing resources in Grid infrastructures does not mean that efficiency is not an important issue. It is necessary to analyze the whole process to improve partitioning and submission schemas, especially in the most critical experiments. This is the case of metagenomic analysis, and this text shows the work done to optimize a Grid deployment, which has led to a reduction of the response time and the failure rates. Metagenomic studies aim at processing samples of multiple specimens to extract the genes and proteins that belong to the different species. In many cases, the sequencing of the DNA of many microorganisms is hindered by the impossibility of growing significant samples of isolated specimens. Many bacteria cannot survive alone and require interaction with other organisms. In such cases, the DNA information available belongs to different kinds of organisms. One important stage in metagenomic analysis is the extraction of fragments, followed by a comparison and functional-analysis stage. By comparison to existing chains whose function is well known, fragments can be classified. This process is computationally intensive and requires several iterations of alignment and phylogeny classification steps. Source samples reach several million sequences, each of which could comprise thousands of nucleotides. These sequences are compared to a selected part of the "non-redundant" database, which includes only the information from eukaryotic species. From this first analysis, a refining process is performed and alignment analysis is restarted from the results. This process requires several CPU-years. The article describes and analyzes the difficulties of fragmenting, automating and checking the above operations in current Grid production environments. This environment has been

  2. High Powered Rocketry: Design, Construction, and Launching Experience and Analysis

    Paulson, Pryce; Curtis, Jarret; Bartel, Evan; Cyr, Waycen Owens; Lamsal, Chiranjivi

    2018-01-01

    In this study, the nuts and bolts of designing and building a high powered rocket have been presented. A computer simulation program called RockSim was used to design the rocket. Simulation results are consistent with time variations of altitude, velocity, and acceleration obtained in the actual flight. The actual drag coefficient was determined…

  3. Analysis of key variables controlling phosphorus removal in high ...

    This study evaluates the influence of hydraulic retention time (HRT), solar radiation, and water temperature on phosphorus removal from two experimental high rate oxidation ponds (HROP) with clarifiers. Both HROPs were operated for a period of one year with different HRTs (3 to 10 d), but under the same environmental ...

  4. Approximate analysis of high-rise frames with flexible connections

    Hoenderkamp, J.C.D.; Snijder, H.H.

    2000-01-01

    An approximate hand method for estimating horizontal deflections in high-rise steel frames with flexible beam–column connections subjected to horizontal loading is presented. The method is developed from the continuous medium theory for coupled walls which is expressed in non-dimensional structural

  5. Analysis of Indexed-Guided Highly Birefringent Photonic Crystal ...

    In this paper, a comparative study of three geometries of highly birefringent photonic crystal fibers (HB PCF) is presented. The proposed geometries are: V-type PCF, Pseudo-Panda PCF and selectively liquid-filled PCF. Based on the well-known Finite Difference Time Domain (FDTD) method with the perfectly matched layer ...

  6. Analysis of High School German Textbooks through Rasch Measurement Model

    Batdi, Veli; Elaldi, Senel

    2016-01-01

    The purpose of the present study is to analyze German teacher trainers' views on high school German textbooks through the Rasch measurement model. A survey research design was employed and study group consisted of a total of 21 teacher trainers, three from each region and selected randomly from provinces which are located in seven regions and…

  7. Monte Carlo analysis of highly compressed fissile assemblies. Pt. 1

    Raspet, R.; Baird, G.E.

    1978-01-01

    Laser-induced fission of highly compressed bare fissionable spheres is analyzed using Monte Carlo techniques. The critical mass and critical radius as functions of density are calculated, and the fission energy yield is calculated and compared with the input laser energy necessary to achieve compression to criticality. (orig.)

  8. A rigorous analysis of high-order electromagnetic invisibility cloaks

    Weder, Ricardo

    2008-01-01

    There is currently a great deal of interest in the invisibility cloaks recently proposed by Pendry et al that are based on the transformation approach. They obtained their results using first-order transformations. In recent papers, Hendi et al and Cai et al considered invisibility cloaks with high-order transformations. In this paper, we study high-order electromagnetic invisibility cloaks in transformation media obtained by high-order transformations from general anisotropic media. We consider the case where there is a finite number of spherical cloaks located in different points in space. We prove that for any incident plane wave, at any frequency, the scattered wave is identically zero. We also consider the scattering of finite-energy wave packets. We prove that the scattering matrix is the identity, i.e., that for any incoming wave packet the outgoing wave packet is the same as the incoming one. This proves that the invisibility cloaks cannot be detected in any scattering experiment with electromagnetic waves in high-order transformation media, and in particular in the first-order transformation media of Pendry et al. We also prove that the high-order invisibility cloaks, as well as the first-order ones, cloak passive and active devices. The cloaked objects completely decouple from the exterior. Actually, the cloaking outside is independent of what is inside the cloaked objects. The electromagnetic waves inside the cloaked objects cannot leave the concealed regions and vice versa, the electromagnetic waves outside the cloaked objects cannot go inside the concealed regions. As we prove our results for media that are obtained by transformation from general anisotropic materials, we prove that it is possible to cloak objects inside general crystals

  9. Comparative Study of Univariate Spectrophotometry and Multivariate Calibration for the Determination of Levamisole Hydrochloride and Closantel Sodium in a Binary Mixture.

    Abdel-Aziz, Omar; Hussien, Emad M; El Kosasy, Amira M; Ahmed, Neven

    2016-07-01

    Six simple, accurate, reproducible, and selective derivative spectrophotometric and chemometric methods have been developed and validated for the determination of levamisole HCl (Lev) either alone or in combination with closantel sodium (Clo) in the pharmaceutical dosage form. Lev was determined by first-derivative, first-derivative ratio, and mean-centering methods by measuring the peak amplitude at 220.8, 243.8, and 210.4 nm, respectively. The methods were linear over the concentration range 2.0-10.0 μg/mL Lev. The methods exhibited a high accuracy, with recovery data within ±1.9% and RSD <1.3% (n = 9) for the determination of Lev in the presence of Clo. Fortunately, Lev showed no significant UV absorbance at 370.6 nm, which allowed the determination of Clo over the concentration range 16.0-80.0 μg/mL using zero-order spectra, with a high precision (RSD <1.5%, n = 9). Furthermore, principal component regression and partial least-squares with optimized parameters were used for the determination of Lev in the presence of Clo. The recovery was within ±1%, with RSD <1.0% (n = 9) and root mean square error of prediction ≤1.0. The proposed methods were validated according to the International Conference on Harmonization guidelines. The proposed methods were used in the determination of Lev and Clo in a binary mixture and a pharmaceutical formulation, with high accuracy and precision.
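
    The chemometric step can be sketched with scikit-learn's PLS implementation. The two-component spectra below are synthetic (Gaussian bands at assumed positions), standing in for the measured Lev/Clo UV spectra.

      # PLS calibration on synthetic two-component UV spectra, a sketch of
      # the chemometric approach; band positions and widths are hypothetical.
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(3)
      wl = np.linspace(200, 400, 201)            # wavelength grid, nm

      def band(center, width):
          return np.exp(-0.5 * ((wl - center) / width) ** 2)

      n = 40
      c_lev = rng.uniform(2, 10, n)              # ug/mL, hypothetical range
      c_clo = rng.uniform(16, 80, n)
      X = (np.outer(c_lev, band(221, 12)) +      # Lev band (assumed)
           np.outer(c_clo, band(371, 20)) +      # Clo band (assumed)
           rng.normal(0, 0.002, (n, wl.size)))   # noisy mixture spectra

      pls = PLSRegression(n_components=2).fit(X, c_lev)
      pred = pls.predict(X).ravel()
      rmse = np.sqrt(np.mean((pred - c_lev) ** 2))
      print(f"calibration RMSE for Lev: {rmse:.3f} ug/mL")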

  11. Improvement of the cost-benefit analysis algorithm for high-rise construction projects

    Gafurov, Andrey; Skotarenko, Oksana; Plotnikov, Vladimir

    2018-03-01

    The specific nature of high-rise investment projects entailing long-term construction, high risks, etc. implies a need to improve the standard algorithm of cost-benefit analysis. An improved algorithm is described in the article. For development of the improved algorithm of cost-benefit analysis for high-rise construction projects, the following methods were used: weighted average cost of capital, dynamic cost-benefit analysis of investment projects, risk mapping, scenario analysis, sensitivity analysis of critical ratios, etc. This comprehensive approach helped to adapt the original algorithm to feasibility objectives in high-rise construction. The authors put together the algorithm of cost-benefit analysis for high-rise construction projects on the basis of risk mapping and sensitivity analysis of critical ratios. The suggested project risk management algorithms greatly expand the standard algorithm of cost-benefit analysis in investment projects, namely: the "Project analysis scenario" flowchart, improving quality and reliability of forecasting reports in investment projects; the main stages of cash flow adjustment based on risk mapping for better cost-benefit project analysis provided the broad range of risks in high-rise construction; analysis of dynamic cost-benefit values considering project sensitivity to crucial variables, improving flexibility in implementation of high-rise projects.

  12. Dimensionality analysis of multiparticle production at high energies

    Chilingaryan, A.A.

    1989-01-01

    An algorithm for the analysis of multiparticle final states is offered. From the Rényi dimensionalities, calculated from experimental data, whether for hadron distributions over rapidity intervals or for particle distributions in an N-dimensional momentum space, we can judge the degree of correlation of particles and identify the momentum-space projections and regions where probability-measure singularities are observed. The method is tested in a series of calculations with samples of points from fractal objects and with samples obtained by means of different generators of pseudo- and quasi-random numbers. 27 refs.; 11 figs
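
    Rényi (generalized) dimensions D_q follow from box-counting probabilities, D_q = lim_{eps->0} [1/(q-1)] log(sum_i p_i^q) / log(eps). The sketch below estimates D_q for a uniform random 2-D point set (hypothetical data), for which all D_q should be close to 2.

      # Box-counting estimate of Renyi dimensions D_q for a 2-D point set;
      # a sketch on uniform random data, where D_q ~ 2 for all q.
      import numpy as np

      rng = np.random.default_rng(4)
      pts = rng.uniform(0, 1, size=(20000, 2))

      def renyi_sum(points, eps, q):
          # Occupation probabilities of boxes of side eps.
          idx = np.floor(points / eps).astype(int)
          _, counts = np.unique(idx, axis=0, return_counts=True)
          p = counts / counts.sum()
          return np.sum(p ** q)

      eps = np.array([0.2, 0.1, 0.05, 0.025])
      for q in (0, 2, 4):  # q = 1 needs the entropy limit, omitted here
          y = [np.log(renyi_sum(pts, e, q)) / (q - 1) for e in eps]
          slope = np.polyfit(np.log(eps), y, 1)[0]  # slope vs log eps = D_q
          print(f"D_{q} ~ {slope:.2f}")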

  13. In-situ nitrite analysis in high level waste tanks

    O'Rourke, P.E.; Prather, W.S.; Livingston, R.R.

    1992-01-01

    The Savannah River Site produces special nuclear materials used in the defense of the United States. Most of the processes at SRS are primarily chemical separations and purifications. In-situ chemical analyses help improve the safety, efficiency and quality of these operations. One area where in-situ fiberoptic spectroscopy can have a great impact is the management of high level radioactive waste. High level radioactive waste at SRS is stored in more than 50 large waste tanks. The waste exists as a slurry of nitrate salts and metal hydroxides at pH values higher than 10. Sodium nitrite is added to the tanks as a corrosion inhibitor. In-situ fiberoptic probes are being developed to measure the nitrate, nitrite and hydroxide concentrations in both liquid and solid fractions. Nitrite levels can be measured between 0.01 M and 1 M in a 1 mm pathlength optical cell

  14. Structural analysis technology for high-temperature design

    Greenstreet, W.L.

    1977-01-01

    Results from an ongoing program devoted to the development of verified high-temperature structural design technology applicable to nuclear reactor systems are described. The major aspects addressed by the program are (1) deformation behavior; (2) failure associated with creep rupture, brittle fracture, fatigue, creep-fatigue interactions, and crack propagation; and (3) the establishment of appropriate design criteria. This paper discusses information developed in the deformation behavior category. The material considered is type 304 stainless steel, and the temperatures range to 1100°F (593°C). In essence, the paper considers the ingredients necessary for predicting relatively high-temperature inelastic deformation behavior of engineering structures under time-varying temperature and load conditions and gives some examples. These examples illustrate the utility and acceptability of the computational methods identified and developed for predicting essential features of complex inelastic behaviors. Conditions and responses that can be encountered under nuclear reactor service conditions are invoked in the examples. (Auth.)

  15. High Energy Astronomical Data Processing and Analysis via the Internet

    Valencic, Lynne A.; Snowden, S.; Pence, W.

    2012-01-01

    The HEASARC at NASA Goddard Space Flight Center and the US XMM-Newton GOF has developed Hera, a data processing facility for analyzing high energy astronomical data over the internet. Hera provides all the disk space and computing resources needed to do general processing of and advanced research on publicly available data from High Energy Astrophysics missions. The data and data products are kept on a server at GSFC and can be downloaded to a user's local machine. Further, the XMM-GOF has developed scripts to streamline XMM data reduction. These are available through Hera, and can also be downloaded to a user's local machine. These are free services provided to students, educators, and researchers for educational and research purposes.

  16. High resolution radar satellite imagery analysis for safeguards applications

    Minet, Christian; Eineder, Michael [German Aerospace Center, Remote Sensing Technology Institute, Department of SAR Signal Processing, Wessling, (Germany); Rezniczek, Arnold [UBA GmbH, Herzogenrath, (Germany); Niemeyer, Irmgard [Forschungszentrum Juelich, Institue of Energy and Climate Research, IEK-6: Nuclear Waste Management and Reactor Safety, Juelich, (Germany)

    2011-12-15

    For monitoring nuclear sites, the use of Synthetic Aperture Radar (SAR) imagery shows essential promise. Unlike optical remote sensing instruments, radar sensors operate under almost all weather conditions and independently of the sunlight, i.e. the time of day. Such technical specifications are required both for continuous and for ad-hoc, timed surveillance tasks. With Cosmo-SkyMed, TerraSAR-X and Radarsat-2, high-resolution SAR imagery with a spatial resolution of up to 1 m has recently become available. Our work therefore aims to investigate the potential of high-resolution TerraSAR data for nuclear monitoring. This paper focuses on exploiting the amplitude of a single acquisition, assessing amplitude changes and phase differences between two acquisitions, and PS-InSAR processing of an image stack.

  17. High-performance analysis of filtered semantic graphs

    Buluç, A; Fox, A; Gilbert, JR; Kamil, S; Lugowski, A; Oliker, L; Williams, S

    2012-01-01

    High performance is a crucial consideration when executing a complex analytic query on a massive semantic graph. In a semantic graph, vertices and edges carry "attributes" of various types. Analytic queries on semantic graphs typically depend on the values of these attributes; thus, the computation must either view the graph through a filter that passes only those individual vertices and edges of interest, or else must first materialize a subgraph or subgraphs consisting of only the vertices ...
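
    A minimal sketch (Python, toy in-memory graph; the edge list, the "cites"/"mentions" attribute names, and the filter predicate are all invented for illustration) of the two strategies the abstract contrasts: applying the attribute filter on the fly during traversal versus first materializing the filtered subgraph and traversing it unfiltered:

      from collections import deque

      edges = [  # (src, dst, edge_type) -- hypothetical attributed graph
          (0, 1, "cites"), (1, 2, "cites"), (2, 3, "mentions"),
          (0, 3, "mentions"), (3, 4, "cites"),
      ]
      wanted = lambda etype: etype == "cites"  # the filter predicate

      def bfs_filtered(start):
          """Traverse while applying the filter to every edge touched."""
          adj = {}
          for s, d, t in edges:
              adj.setdefault(s, []).append((d, t))
          seen, queue = {start}, deque([start])
          while queue:
              node = queue.popleft()
              for nbr, etype in adj.get(node, []):
                  if wanted(etype) and nbr not in seen:  # filter inline
                      seen.add(nbr)
                      queue.append(nbr)
          return seen

      def bfs_materialized(start):
          """Materialize the filtered subgraph once, then traverse it."""
          adj = {}
          for s, d, t in edges:
              if wanted(t):
                  adj.setdefault(s, []).append(d)
          seen, queue = {start}, deque([start])
          while queue:
              for nbr in adj.get(queue.popleft(), []):
                  if nbr not in seen:
                      seen.add(nbr)
                      queue.append(nbr)
          return seen

      assert bfs_filtered(0) == bfs_materialized(0)
      print("reachable via 'cites' edges:", sorted(bfs_filtered(0)))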

  18. Video incident analysis of concussions in boys' high school lacrosse.

    Lincoln, Andrew E; Caswell, Shane V; Almquist, Jon L; Dunn, Reginald E; Hinton, Richard Y

    2013-04-01

    Boys' lacrosse has one of the highest rates of concussion among boys' high school sports. A thorough understanding of injury mechanisms and game situations associated with concussions in boys' high school lacrosse is necessary to target injury prevention efforts. To characterize common game-play scenarios and mechanisms of injury associated with concussions in boys' high school lacrosse using game video. Descriptive epidemiological study. In 25 public high schools of a single school system, 518 boys' lacrosse games were videotaped by trained videographers during the 2008 and 2009 seasons. Video of concussion incidents was examined to identify game characteristics and injury mechanisms using a lacrosse-specific coding instrument. A total of 34 concussions were captured on video. All concussions resulted from player-to-player bodily contact. Players were most often injured when contact was unanticipated or players were defenseless (n = 19; 56%), attempting to pick up a loose ball (n = 16; 47%), and/or ball handling (n = 14; 41%). Most frequently, the striking player's head (n = 27; 79%) was involved in the collision, and the struck player's head was the initial point of impact in 20 incidents (59%). In 68% (n = 23) of cases, a subsequent impact with the playing surface occurred immediately after the initial impact. A penalty was called in 26% (n = 9) of collisions. Player-to-player contact was the mechanism for all concussions. Most commonly, injured players were unaware of the pending contact, and the striking player used his head to initiate contact. Further investigation of preventive measures such as education of coaches and officials and enforcement of rules designed to prevent intentional head-to-head contact is warranted to reduce the incidence of concussions in boys' lacrosse.

  19. Gender Inequalities in Highly Qualified Professions: A Social Psychological Analysis

    Santos, Maria Helena; Amâncio, Lígia

    2016-01-01

    Research in social and political psychology contributes towards understanding the persistence of job market gender segregation prevailing in recent decades, the consequences for those involved and their reactions when having to cope with gender inequality. Within the framework of the literature on shared ideologies that justify and legitimize discrimination against women, this article focuses on Portugal and analyses the particular case of women in two highly qualified professions traditional...

  20. [High-performance liquid-liquid chromatography in beverage analysis].

    Bricout, J; Koziet, Y; de Carpentrie, B

    1978-01-01

    Liquid-liquid chromatography was performed with columns packed with stationary phases chemically bonded to silica microparticles. These columns show a high efficiency and are used very easily. Flavouring compounds like aromatic aldehydes, which have a low volatility, were analyzed in brandy using a polar alkylnitrile phase. Sapid substances like amarogentin in Gentiana lutea or glycyrrhizin in Glycyrrhiza glabra were determined by reversed-phase chromatography. Finally, ionizable substances like synthetic dyes can be analyzed by paired-ion chromatography with a nonpolar stationary phase.

  1. Montecarlo simulation for a new high resolution elemental analysis methodology

    Figueroa S, Rodolfo; Brusa, Daniel; Riveros, Alberto [Universidad de La Frontera, Temuco (Chile). Facultad de Ingenieria y Administracion

    1996-12-31

    Full text. Spectra generated by binary, ternary and multielement matrixes when irradiated by a variable energy photon beam are simulated by means of a Monte Carlo code. Significant jumps in the counting rate are shown when the photon energy is just over the edge associated to each element, because of the emission of characteristic X rays. For a given associated energy, the net height of these jumps depends mainly on the concentration and on the sample absorption coefficient. The spectra were obtained by a monochromatic energy scan considering all the radiation emitted by the sample in a 2{pi} solid angle, associating a single multichannel spectrometer channel to each incident energy (Multichannel Scaling (MCS) mode). The simulated spectra were produced with an adaptation of the Monte Carlo simulation package PENELOPE (Penetration and Energy Loss of Positrons and Electrons in matter). The results show that it is possible to implement a new high resolution spectroscopy methodology, where a synchrotron would be an ideal source, due to the high intensity and the ability to control the energy of the incident beam. The high energy resolution would be determined by the monochromating system and not by the detection system, which would basically be a photon counter. (author)
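
    A toy illustration (Python; not PENELOPE, and the edge energies, concentrations and cross-section scaling are invented placeholders) of the MCS-mode energy scan idea: the count rate jumps when the incident energy crosses an element's absorption edge, and the jump height tracks that element's concentration:

      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical binary matrix: element A (edge at 7.11 keV) and
      # element B (edge at 8.98 keV), with assumed concentrations.
      edges_keV = {"A": 7.11, "B": 8.98}
      conc = {"A": 0.3, "B": 0.7}

      def fluorescence_counts(energy_keV, n_photons=20000):
          """Count simulated fluorescence events for one scan channel."""
          counts = 0
          for element, edge in edges_keV.items():
              if energy_keV < edge:
                  continue  # below the edge: no characteristic emission
              # Crude photoelectric probability ~ concentration * (edge/E)^3
              p = 0.05 * conc[element] * (edge / energy_keV) ** 3
              counts += rng.binomial(n_photons, min(p, 1.0))
          return counts

      # Monochromatic energy scan, one MCS channel per incident energy.
      energies = np.linspace(6.0, 11.0, 200)
      spectrum = np.array([fluorescence_counts(E) for E in energies])

      # The jump height at each edge tracks the element's concentration.
      for element, edge in edges_keV.items():
          below = spectrum[energies < edge][-5:].mean()
          above = spectrum[energies > edge][:5].mean()
          print(f"element {element}: edge jump ~ {above - below:.0f} counts")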

  2. Stability analysis of high temperature superconducting coil in liquid hydrogen

    Nakayama, T.; Yagai, T.; Tsuda, M.; Hamajima, T.

    2007-01-01

    Recently, it is expected that hydrogen will play an important role as an energy source, including for electric power, in the near future. Liquid hydrogen has high potential for cooling superconducting coils wound with high temperature superconductors (HTS), such as BSCCO and YBCO. In this paper, we study the stability of coils wound with BSCCO tapes, which are immersed in liquid hydrogen, and compare the stability results with those of coils cooled by liquid helium. We apply minimum propagation zone (MPZ) theory to evaluate coil stability, considering the boiling heat flux of liquid hydrogen, and the specific heat, heat conduction and resistivity of HTS materials as functions of temperature. It is found that the coil cooled by liquid hydrogen has a higher stability margin than that cooled by liquid helium. We compare the stability margins of coils wound with Bi-2223/Ag tape and Bi-2212/Ag tape in liquid hydrogen. As a result, it is found that the stability of the Bi-2212 coil is equivalent to that of the Bi-2223 coil in low and high magnetic field, while the maximum current of the Bi-2212 coil slightly exceeds that of the Bi-2223 coil in both magnetic fields.
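
    For reference, the textbook one-dimensional form of the MPZ balance (a sketch assuming temperature-independent properties; the paper's actual evaluation uses temperature-dependent properties and the boiling heat flux):

      % One-dimensional minimum propagating zone (MPZ) estimate: a normal
      % zone shorter than l_MPZ collapses because axial conduction removes
      % heat faster than Joule heating generates it.
      \[
        l_{\mathrm{MPZ}} \;=\; \frac{1}{J}\,
        \sqrt{\frac{2\,k\,(T_c - T_b)}{\rho}}
      \]
      % J   : operating current density
      % k   : thermal conductivity of the conductor
      % T_c : critical (current-sharing) temperature of the HTS tape
      % T_b : bath temperature (about 20 K for liquid hydrogen, 4.2 K for helium)
      % rho : normal-state resistivity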

  3. Montecarlo simulation for a new high resolution elemental analysis methodology

    Figueroa S, Rodolfo; Brusa, Daniel; Riveros, Alberto

    1996-01-01

    Full text. Spectra generated by binary, ternary and multielement matrixes when irradiated by a variable energy photon beam are simulated by means of a Monte Carlo code. Significant jumps in the counting rate are shown when the photon energy is just over the edge associated to each element, because of the emission of characteristic X rays. For a given associated energy, the net height of these jumps depends mainly on the concentration and on the sample absorption coefficient. The spectra were obtained by a monochromatic energy scan considering all the radiation emitted by the sample in a 2π solid angle, associating a single multichannel spectrometer channel to each incident energy (Multichannel Scaling (MCS) mode). The simulated spectra were produced with an adaptation of the Monte Carlo simulation package PENELOPE (Penetration and Energy Loss of Positrons and Electrons in matter). The results show that it is possible to implement a new high resolution spectroscopy methodology, where a synchrotron would be an ideal source, due to the high intensity and the ability to control the energy of the incident beam. The high energy resolution would be determined by the monochromating system and not by the detection system, which would basically be a photon counter. (author)

  4. Reliability test and failure analysis of high power LED packages

    Chen Zhaohui; Zhang Qin; Wang Kai; Luo Xiaobing; Liu Sheng

    2011-01-01

    A new type application-specific light emitting diode (LED) package (ASLP) with a freeform polycarbonate lens for street lighting is developed, whose manufacturing processes are compatible with a typical LED packaging process. The reliability test methods and failure criteria from different vendors are reviewed and compared. It is found that test methods and failure criteria are quite different; rapid reliability assessment standards are urgently needed for the LED industry. Our LED modules and those of three other vendors were tested at 85 °C/85% RH with 700 mA for 1000 h; our modules showed no visible degradation in optical performance, while the modules of two of the other vendors showed significant degradation. Failure analysis methods such as C-SAM, nano X-ray CT and optical microscopy are used for the LED packages. Failure mechanisms such as delaminations and cracks are detected in the LED packages after the accelerated reliability testing. The finite element simulation method is helpful for the failure analysis and reliability design of the LED packaging. One example shows that a module currently used in industry is vulnerable and may not easily pass harsh thermal cycle testing. (semiconductor devices)

  5. Multivariate statistical analysis a high-dimensional approach

    Serdobolskii, V

    2000-01-01

    In the last few decades the accumulation of large amounts of information in numerous applications has stimulated an increased interest in multivariate analysis. Computer technologies allow one to use multi-dimensional and multi-parametric models successfully. At the same time, an interest arose in statistical analysis with a deficiency of sample data. Nevertheless, it is difficult to describe the recent state of affairs in applied multivariate methods as satisfactory. Unimprovable (dominating) statistical procedures are still unknown except for a few specific cases. The simplest problem of estimating the mean vector with minimum quadratic risk is unsolved, even for normal distributions. Commonly used standard linear multivariate procedures based on the inversion of sample covariance matrices can lead to unstable results or provide no solution, depending on the data. Programs included in standard statistical packages cannot process 'multi-collinear data' and there are no theoretical recommendations ...

  6. Analysis of Institutional Competitiveness of Junior High Schools through the Admission Test to High School Education

    Armendáriz, Joyzukey; Tarango, Javier; Machin-Mastromatteo, Juan Daniel

    2018-01-01

    This descriptive and correlational research studies 15,658 students from 335 secondary schools in the state of Chihuahua, Mexico, through the results of the examination of admission to high school education (National High School Admission Test--EXANI I from the National Assessment Center for Education--CENEVAL) on logical-mathematical and verbal…

  7. Need for High Radiation Dose (≥70 Gy) in Early Postoperative Irradiation After Radical Prostatectomy: A Single-Institution Analysis of 334 High-Risk, Node-Negative Patients

    Cozzarini, Cesare; Montorsi, Francesco; Fiorino, Claudio; Alongi, Filippo; Bolognesi, Angelo; Da Pozzo, Luigi Filippo; Guazzoni, Giorgio; Freschi, Massimo; Roscigno, Marco; Scattoni, Vincenzo; Rigatti, Patrizio; Di Muzio, Nadia

    2009-01-01

    Purpose: To determine the clinical benefit of high-dose early adjuvant radiotherapy (EART) in high-risk prostate cancer (hrCaP) patients submitted to radical retropubic prostatectomy plus pelvic lymphadenectomy. Patients and Methods: The clinical outcome of 334 hrCaP (pT3-4 and/or positive resection margins) node-negative patients submitted to radical retropubic prostatectomy plus pelvic lymphadenectomy before 2004 was analyzed according to the EART dose delivered to the prostatic bed, <70.2 Gy (lower dose, median 66.6 Gy, n = 153) or ≥70.2 Gy (higher dose [HD], median 70.2 Gy, n = 181). Results: The two groups were comparable except for a significant difference in terms of median follow-up (10 vs. 7 years, respectively) owing to the gradual increase of EART doses over time. Nevertheless, median time to prostate-specific antigen (PSA) failure was almost identical, 38 and 36 months, respectively. At univariate analysis, both 5-year biochemical relapse-free survival (bRFS) and disease-free survival (DFS) were significantly higher (83% vs. 71% [p = 0.001] and 94% vs. 88% [p = 0.005], respectively) in the HD group. Multivariate analysis confirmed EART dose ≥70 Gy to be independently related to both bRFS (hazard ratio 2.5, p = 0.04) and DFS (hazard ratio 3.6, p = 0.004). Similar results were obtained after the exclusion of patients receiving any androgen deprivation. After grouping the hormone-naive patients by postoperative PSA level, the statistically significant impact of high-dose EART on both 5-year bRFS and DFS was maintained only for those with undetectable values, possibly owing to micrometastatic disease outside the irradiated area in case of detectable postoperative PSA values. Conclusion: This series provides strong support for the use of EART doses ≥70 Gy after radical retropubic prostatectomy in hrCaP patients with undetectable postoperative PSA levels.

  8. Theoretical analysis of quantum dot amplifiers with high saturation power and low noise figure

    Berg, Tommy Winther; Mørk, Jesper

    2002-01-01

    Semiconductor quantum dot amplifiers are predicted to exhibit superior characteristics such as high gain, high output power and low noise. The analysis provides criteria and design guidelines for the realization of high quality amplifiers.

  9. A content analysis of tweets about high-potency marijuana.

    Cavazos-Rehg, Patricia A; Sowles, Shaina J; Krauss, Melissa J; Agbonavbare, Vivian; Grucza, Richard; Bierut, Laura

    2016-09-01

    "Dabbing" involves heating extremely concentrated forms of marijuana to high temperatures and inhaling the resulting vapor. We studied themes describing the consequences of using highly concentrated marijuana by examining the dabbing-related content on Twitter. Tweets containing dabbing-related keywords were collected from 1/1-1/31/2015 (n=206,854). A random sample of 5000 tweets was coded for content according to pre-determined categories about dabbing-related behaviors and effects experienced using a crowdsourcing service. An examination of tweets from the full sample about respiratory effects and passing out was then conducted by selecting tweets with relevant keywords. Among the 5000 randomly sampled tweets, 3540 (71%) were related to dabbing marijuana concentrates. The most common themes included mentioning current use of concentrates (n=849; 24%), the intense high and/or extreme effects from dabbing (n=763; 22%) and excessive/heavy dabbing (n=517; 15%). Extreme effects included both physiological (n=124/333; 37%) and psychological effects (n=55/333; 17%). The most common physiologic effects, passing out (n=46/333; 14%) and respiratory effects (n=30/333; 9%), were then further studied in the full sample of tweets. Coughing was the most common respiratory effect mentioned (n=807/1179; 68%), and tweeters commonly expressed dabbing with intentions to pass out (416/915; 45%). This study adds to the limited understanding of marijuana concentrates and highlights self-reported physical and psychological effects from this type of marijuana use. Future research should further examine these effects and the potential severity of health consequences associated with concentrates. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  10. Software Tools for Robust Analysis of High-Dimensional Data

    Valentin Todorov

    2014-06-01

    Full Text Available The present work discusses robust multivariate methods specifically designed for high dimensions. Their implementation in R is presented and their application is illustrated on examples. The first group are algorithms for outlier detection, already introduced elsewhere and implemented in other packages. The value added of the new package is that all methods follow the same design pattern and thus can use the same graphical and diagnostic tools. The next topic covered is sparse principal components, including an object-oriented interface to the standard method proposed by Zou, Hastie, and Tibshirani (2006) and the robust one proposed by Croux, Filzmoser, and Fritz (2013). Robust partial least squares (see Hubert and Vanden Branden 2003) as well as partial least squares for discriminant analysis conclude the scope of the new package.

  11. Variability Bugs in Highly Configurable Systems: A Qualitative Analysis

    Abal, Iago; Melo, Jean; Stanciulescu, Stefan

    2018-01-01

    Variability-sensitive verification pursues effective analysis of the exponentially many variants (in the number of features) of a program family. Several variability-aware techniques have been proposed, but researchers still lack examples of concrete bugs induced by variability, occurring in real large-scale systems. A collection of real world bugs is needed to evaluate tool implementations of variability-sensitive analyses by testing them on real bugs. We present a qualitative study of 98 diverse variability bugs collected from bug-fixing commits in the Apache, BusyBox, Linux kernel and Marlin repositories. We analyze each of the bugs, and record the results in a database. For each bug, we create a self-contained simplified C99 version and a simplified patch, in order to help researchers who are not experts on these subject studies to understand them, so that they can use them for evaluation.

  12. High-level waste canister envelope study: structural analysis

    1977-11-01

    The structural integrity of waste canisters, fabricated from standard weight Type 304L stainless steel pipe, was analyzed for sizes ranging from 8 to 24 in. diameter and 10 to 16 feet long under normal, abnormal, and improbable life cycle loading conditions. The canisters are assumed to be filled with vitrified high-level nuclear waste, stored temporarily at a fuel reprocessing plant, and then transported for storage in an underground salt bed or other geologic storage. In each of the three impact conditions studied, the resulting impact force is far greater than the elastic limit capacity of the material. Recommendations are made for further study.

  13. Analysis of artificial fireplace logs by high temperature gas chromatography.

    Kuk, Raymond J

    2002-11-01

    High temperature gas chromatography is used to analyze the wax of artificial fireplace logs (firelogs). Firelogs from several different manufacturers are studied and compared. This study shows that the wax within a single firelog is homogeneous and that the wax is also uniform throughout a multi-firelog package. Different brands are shown to have different wax compositions. Firelogs of the same brand, but purchased in different locations, also have different wax compositions. With this information it may be possible to associate an unknown firelog sample to a known sample, but a definitive statement of the origin cannot be made.

  14. Shower fractal dimension analysis in a highly-granular calorimeter

    Ruan, M

    2014-01-01

    We report on an investigation of the self-similar structure of particle showers recorded at a highly-granular calorimeter. On both simulated and experimental data, a strong correlation between the number of hits and the spatial scale of the readout channels is observed, from which we define the shower fractal dimension. The measured fractal dimension turns out to be strongly dependent on particle type, which enables new approaches for particle identification. A logarithmic dependence of the particle energy on the fractal dimension is also observed.
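
    A minimal sketch (Python) of the measurement procedure described: count fired readout cells at progressively coarser granularity and take the fractal dimension from the log-log slope. The synthetic point cloud below merely stands in for a real shower and carries no shower physics:

      import numpy as np

      rng = np.random.default_rng(1)
      steps = rng.normal(scale=1.0, size=(20000, 3))
      hits = np.cumsum(steps, axis=0)  # toy "shower": correlated point cloud

      def n_hits(points, cell_size):
          """Number of distinct readout cells fired at a given granularity."""
          cells = np.floor(points / cell_size).astype(np.int64)
          return len({tuple(c) for c in cells})

      scales = np.array([1, 2, 4, 8, 16, 32], dtype=float)
      counts = np.array([n_hits(hits, s) for s in scales])

      # N(s) ~ s^(-D)  =>  D is minus the slope of log N versus log s.
      D = -np.polyfit(np.log(scales), np.log(counts), 1)[0]
      print(f"estimated fractal dimension: {D:.2f}")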

  15. Neutron activation analysis of high-purity iron in comparison with chemical analysis

    Kinomura, Atsushi; Horino, Yuji; Takaki, Seiichi; Abiko, Kenji

    2000-01-01

    Neutron activation analysis of iron samples of three different purity levels has been performed and compared with chemical analysis for 30 metallic and metalloid impurity elements. The concentration of As, Cl, Cu, Sb and V detected by neutron activation analysis was mostly in agreement with that obtained by chemical analysis. The sensitivity limits of neutron activation analysis of three kinds of iron samples were calculated and found to be reasonable compared with measured values or detection limits of chemical analysis; however, most of them were above the detection limits of chemical analysis. Graphite-shielded irradiation to suppress fast neutron reactions was effective for Mn analysis without decreasing sensitivity to the other impurity elements. (author)

  16. High Resolution Gamma Ray Analysis of Medical Isotopes

    Chillery, Thomas

    2015-10-01

    Compton-suppressed high-purity Germanium detectors at the University of Massachusetts Lowell have been used to study medical radioisotopes produced at Brookhaven Linac Isotope Producer (BLIP), in particular isotopes such as Pt-191 used for cancer therapy in patients. The ability to precisely analyze the concentrations of such radio-isotopes is essential for both production facilities such as Brookhaven and consumer hospitals across the U.S. Without accurate knowledge of the quantities and strengths of these isotopes, it is possible for doctors to administer incorrect dosages to patients, thus leading to undesired results. Samples have been produced at Brookhaven and shipped to UML, and the advanced electronics and data acquisition capabilities at UML have been used to extract peak areas in the gamma decay spectra. Levels of Pt isotopes in diluted samples have been quantified, and reaction cross-sections deduced from the irradiation parameters. These provide both cross checks with published work, as well as a rigorous quantitative framework with high quality state-of-the-art detection apparatus in use in the experimental nuclear physics community.

  17. PSYCHOLOGICAL ANALYSIS OF PATRIOTISM MANIFESTATIONS IN HIGH SCHOOL STUDENTS

    G A Shurukhina

    2015-12-01

    Full Text Available In modern society, patriotism is manifested in all spheres of human life and is understood as love for the motherland and its people, as a sense of duty and pride for the country, a sense of honor and personal dignity, personal responsibility for occurring events, and devotion to the country and its people. Patriotism is a complex concept including the patriotic feelings of an individual. In the article, the peculiarities of the manifestation of patriotic feelings in high school students are analyzed on the basis of the system-functional approach. The study involved a group of senior students of a secondary comprehensive school, a group of representatives of the Ministry of Emergency Situations and cadets of a Cadet Corps. Statistically significant differences in sixteen variables characterizing the manifestation of patriotic feelings were obtained in the group of cadets; the differences were observed in the degrees of manifestation of both harmonic and inharmonic variables. High values were obtained for the variables of the reflective-evaluative component. The variables reflecting personal difficulties had higher values, which indicates a serious and responsible approach of the cadets to the manifestation of patriotic feelings. These results were used to formulate correction programs aimed at the harmonization and development of patriotism among representatives of different age groups.

  18. Quantitative analysis of cholesteatoma using high resolution computed tomography

    Kikuchi, Shigeru; Yamasoba, Tatsuya; Iinuma, Toshitaka.

    1992-01-01

    Seventy-three cases of adult cholesteatoma, including 52 cases of pars flaccida type cholesteatoma and 21 of pars tensa type cholesteatoma, were examined using high resolution computed tomography, in both axial (lateral semicircular canal plane) and coronal sections (cochlear, vestibular and antral plane). These cases were classified into two subtypes according to the presence of extension of cholesteatoma into the antrum. Sixty cases of chronic otitis media with central perforation (COM) were also examined as controls. Various locations of the middle ear cavity were measured in terms of size in comparison with pars flaccida type cholesteatoma, pars tensa type cholesteatoma and COM. The width of the attic was significantly larger in both pars flaccida type and pars tensa type cholesteatoma than in COM. With pars flaccida type cholesteatoma there was a significantly larger distance between the malleus and lateral wall of the attic than with COM. In contrast, the distance between the malleus and medial wall of the attic was significantly larger with pars tensa type cholesteatoma than with COM. With cholesteatoma extending into the antrum, regardless of the type of cholesteatoma, there were significantly larger distances than with COM at the following sites: the width and height of the aditus ad antrum, and the width, height and anterior-posterior diameter of the antrum. However, these distances were not significantly different between cholesteatoma without extension into the antrum and COM. The hitherto demonstrated qualitative impressions of bone destruction in cholesteatoma were quantitatively verified in detail using high resolution computed tomography. (author)

  19. High resolution visualization and analysis of nasal spray drug delivery.

    Inthavong, Kiao; Fung, Man Chiu; Tong, Xuwen; Yang, William; Tu, Jiyuan

    2014-08-01

    Effective nasal drug delivery of new-generation systemic drugs requires efficient devices that can achieve targeted drug delivery. It has been established that droplet size, spray plume, and droplet velocity are major contributors to drug deposition. Continual effort is needed to better understand and characterise the physical mechanisms underpinning droplet formation in nasal spray devices. Using high speed laser photography combined with an in-house designed automated actuation system and a highly precise traversing unit, measurements and magnified imaging of small field-of-view regions within the spray were performed. The qualitative results showed a swirling liquid sheet at the near-nozzle region as the liquid is discharged, before ligaments of fluid separate off the liquid sheet. Droplets are formed and continue to deform as they travel downstream at velocities of up to 20 m/s. Increasing the actuation pressure produces more rapid atomization and a shorter discharge time, during which finer droplets are produced. The results suggest that device designs should consider reducing droplet inertia to penetrate the nasal valve region, but find a way to deposit droplets in the main nasal passage without escaping through to the lungs.

  20. Microscopic Analysis of Bacterial Motility at High Pressure

    Nishiyama, Masayoshi; Sowa, Yoshiyuki

    2012-01-01

    The bacterial flagellar motor is a molecular machine that converts an ion flux to the rotation of a helical flagellar filament. Counterclockwise rotation of the filaments allows them to join in a bundle and propel the cell forward. Loss of motility can be caused by environmental factors such as temperature, pH, and solvation. Hydrostatic pressure is also a physical inhibitor of bacterial motility, but the detailed mechanism of this inhibition is still unknown. Here, we developed a high-pressure microscope that enables us to acquire high-resolution microscopic images, regardless of applied pressures. We also characterized the pressure dependence of the motility of swimming Escherichia coli cells and the rotation of single flagellar motors. The fraction and speed of swimming cells decreased with increased pressure. At 80 MPa, all cells stopped swimming and simply diffused in solution. After the release of pressure, most cells immediately recovered their initial motility. Direct observation of the motility of single flagellar motors revealed that at 80 MPa, the motors generate torque that should be sufficient to join rotating filaments in a bundle. The discrepancy in the behavior of free swimming cells and individual motors could be due to the applied pressure inhibiting the formation of rotating filament bundles that can propel the cell body in an aqueous environment. PMID:22768943

  1. Synchrotron applications in archaeometallurgy: analysis of high zinc brass astrolabes

    Newbury, B.; Stephenson, B.; Almer, J.; Notis, M.; Cargill, G. S. III; Stephenson, G. B.; Haeffner, D.

    2003-01-01

    Astrolabes represent the ingenious application of mathematics and astronomy in creating a single instrument that was used for both mapping the heavens and solving everyday problems in medieval Europe and Islamic lands. Constructed as a sort of analog computer to map the heavens, astrolabes were widely used for 700 years in Europe and 1000 years in Islamic lands. In this study, 14 astrolabes (5 European and 9 Islamic) have been analyzed non-destructively utilizing a high-energy collimated x-ray beam produced at the Advanced Photon Source synchrotron at Argonne National Laboratory. By impinging a high energy (71 keV) beam of x-rays capable of transmitting through the brass astrolabes (up to 1 cm thick), metallurgical data can be produced from the bulk of the samples without any harm to them. Diffraction, fluorescence, and radiography experiments were performed on the astrolabes. Diffraction experiments allowed the composition of the bulk samples as well as mechanical deformation and forming histories to be determined. X-ray fluorescence experiments allowed the near surface (∼ 20 μm) compositions to be determined, while radiography allowed mapping of the relative thickness. From these experiments and the forming history, it is possible to obtain information about microstructural characteristics and compositions of the astrolabes that no other non-destructive technique could furnish. These data were used to learn more about technical and metalworking techniques used by the astrolabe manufacturers, as well as to determine if any astrolabe parts were later (or modern) replacements. Of the Islamic astrolabes studied, a group of four from the Lahore region in current day Pakistan were found to have abnormally high (> 35%) zinc compositions. These astrolabes, signed by Diya al-Din Muhammad and dating from 1637-1662 AD, provide evidence for direct alloying of metallic zinc and copper to form brass. While metallic zinc is believed to have been produced in the region for many

  2. Analysis of transistor and snubber turn-off dynamics in high-frequency high-voltage high-power converters

    Wilson, P. M.; Wilson, T. G.; Owen, H. A., Jr.

    DC-to-DC converters which operate reliably and efficiently at switching frequencies high enough to effect substantial reductions in the size and weight of converter energy storage elements are studied. A two-winding current- or voltage-step-up (buck-boost) DC-to-DC converter power stage submodule designed to operate in the 2.5-kW range, with an input voltage range of 110 to 180 V DC and an output voltage of 250 V DC, is emphasized. In order to assess the limitations of present-day component and circuit technologies, a design goal switching frequency of 10 kHz was maintained. The converter design requirements represent a unique combination of high-frequency, high-voltage, and high-power operation. The turn-off dynamics of the primary circuit power switching transistor and its associated turn-off snubber circuitry are investigated.

  3. Emergency diesel generator reliability analysis high flux isotope reactor

    Merryman, L.; Christie, B.

    1993-01-01

    A program to apply some of the techniques of reliability engineering to the High Flux Isotope Reactor (HFIR) was started on August 8, 1992. Part of the program was to track the conditional probabilities of the emergency diesel generators responding to a valid demand. This was done to determine if the performance of the emergency diesel generators (which are more than 25 years old) has deteriorated. The conditional probabilities of the diesel generators were computed and trended for the period from May 1990 to December 1992. The calculations indicate that the performance of the emergency diesel generators has not deteriorated in recent years, i.e., the conditional probabilities of the emergency diesel generators have been fairly stable over the last few years. This information will be one factor that may be considered in the decision to replace the emergency diesel generators.
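
    A minimal sketch (Python; the monthly demand history and the 98% unit reliability are fabricated) of how a conditional start-on-demand probability can be trended over a rolling window, in the spirit of the tracking described:

      import numpy as np

      rng = np.random.default_rng(5)
      months = 32                              # May 1990 - Dec 1992
      demands = rng.integers(1, 5, size=months)        # valid demands per month
      successes = rng.binomial(demands, 0.98)          # assumed ~98% reliable unit

      window = 12                              # 12-month rolling estimate
      for end in range(window, months + 1):
          d = demands[end - window:end].sum()
          s = successes[end - window:end].sum()
          print(f"month {end:2d}: P(start | demand) = {s / d:.3f} ({s}/{d})")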

  4. Tree-indexed processes: a high level crossing analysis

    Mark Kelbert

    2003-01-01

    Full Text Available Consider a branching diffusion process on R^1 starting at the origin. Take a high level u > 0 and count the number R(u,n) of branches reaching u by generation n. Let F_{k,n}(u) be the probability P(R(u,n) ...

  5. Concepts on high temperature design analysis for SNR 300

    Bieniussa, K.; Zolti, E.

    1976-01-01

    The paper briefly describes the evolution, the present situation and the next activities on the design of high temperature components of the DEBENELUX prototype fast breeder reactor SNR-300 with particular regard to the design criteria. Elastic structural analyses are performed for the basic design of the components and are supplied by the manufacturer. In agreement with the Safety Experts simplified and/or detailed inelastic analyses of the critical areas are supplied by the prime contractor of the plant. The elastic computations are evaluated on the basis of a set of design rules derived from ASME Code Case Interpretation 1331-4 but with more conservative limits, and the inelastic ones on the basis of the ASME Code Case Interpretation 1592

  6. Analytical Model for High Impedance Fault Analysis in Transmission Lines

    S. Maximov

    2014-01-01

    Full Text Available A high impedance fault (HIF) normally occurs when an overhead power line physically breaks and falls to the ground. Such faults are difficult to detect because they often draw small currents which cannot be detected by conventional overcurrent protection. Furthermore, an electric arc accompanies HIFs, resulting in fire hazard, damage to electrical devices, and risk to human life. This paper presents an analytical model to analyze the interaction between the electric arc associated with HIFs and a transmission line. A joint analytical solution to the wave equation for a transmission line and a nonlinear equation for the arc model is presented. The analytical model is validated by means of comparisons between measured and calculated results. Several case studies are presented which support the foundation and accuracy of the proposed model.
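
    One common way to pose such a joint problem, shown here only as a hedged sketch since the abstract does not reproduce the authors' exact formulation, couples the telegrapher's equations for the line with a Mayr-type dynamic arc conductance at the fault point:

      % Telegrapher's equations for line voltage v(x,t) and current i(x,t),
      % with a Mayr-type arc conductance g(t) as the fault boundary
      % condition. A sketch, not necessarily the paper's exact model.
      \begin{align*}
        -\frac{\partial v}{\partial x} &= R\,i + L\,\frac{\partial i}{\partial t}, &
        -\frac{\partial i}{\partial x} &= G\,v + C\,\frac{\partial v}{\partial t},\\[4pt]
        \frac{1}{g}\frac{dg}{dt} &= \frac{1}{\tau}\left(\frac{v_f\, i_f}{P_0} - 1\right), &
        i_f &= g\,v_f ,
      \end{align*}
      % R, L, G, C : per-unit-length line parameters
      % v_f, i_f   : voltage and current at the fault point
      % tau, P_0   : arc time constant and cooling power (model parameters)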

  7. Machining Chatter Analysis for High Speed Milling Operations

    Sekar, M.; Kantharaj, I.; Amit Siddhappa, Savale

    2017-10-01

    Chatter in high speed milling is characterized by time delay differential equations (DDEs). Since a closed form solution exists only for simple cases, the governing non-linear DDEs of chatter problems are solved by various numerical methods. Custom codes to solve DDEs are tedious to build and implement, and are rarely error-free and robust. On the other hand, software packages provide solutions to DDEs, but they are not straightforward to implement. In this paper an easy way to solve the DDE of chatter in milling is proposed and implemented with MATLAB. A time domain solution permits the study and modelling of non-linear effects of chatter vibration with ease. Time domain results are presented for various stable and unstable cutting conditions and compared with stability lobe diagrams.
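
    A minimal time-domain sketch in Python (the paper used MATLAB; the one-degree-of-freedom regenerative model and all parameter values below are illustrative assumptions, not the authors' data) showing the core trick for chatter DDEs: keep a ring buffer of past displacements so the delayed term x(t - T) is available at every step:

      import numpy as np

      # One-degree-of-freedom regenerative chatter model,
      #   m x'' + c x' + k x = w * Kc * (x(t - T) - x(t)),
      # integrated with fixed-step Euler and a ring buffer for the delay.
      m, c, k = 0.03, 15.0, 2.0e6        # modal mass, damping, stiffness (SI)
      Kc = 6.0e8                          # cutting force coefficient (N/m^2)
      w = 1.0e-3                          # depth of cut (m)
      T = 60.0 / (10000 * 2)              # tooth period: 10000 rpm, 2 teeth (s)

      dt = T / 500.0                      # step chosen so T is a whole number of steps
      delay_steps = int(round(T / dt))
      history = np.zeros(delay_steps)     # x over the last delay interval

      x, v = 1e-6, 0.0                    # small initial perturbation
      amplitude = 0.0
      for n in range(20 * delay_steps):   # simulate 20 tooth periods
          x_delayed = history[n % delay_steps]   # x from one period ago
          history[n % delay_steps] = x           # overwrite oldest sample
          a = (w * Kc * (x_delayed - x) - c * v - k * x) / m
          x, v = x + dt * v, v + dt * a
          amplitude = max(amplitude, abs(x))

      # A growing amplitude over the run indicates an unstable (chattering) cut.
      print(f"peak displacement over run: {amplitude:.3e} m")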

  8. National high-level waste systems analysis plan

    Kristofferson, K.; Oholleran, T.P.; Powell, R.H.; Thiel, E.C.

    1995-05-01

    This document details the development of modeling capabilities that can provide a system-wide view of all US Department of Energy (DOE) high-level waste (HLW) treatment and storage systems. This model can assess the impact of budget constraints on storage and treatment system schedules and throughput. These impacts can then be assessed against existing and pending milestones to determine the impact to the overall HLW system. A nation-wide view of waste treatment availability will help project the time required to prepare HLW for disposal. The impacts of the availability of various treatment systems and throughput can be compared to repository readiness to determine the prudent application of resources or the need to renegotiate milestones

  9. Analysis of elastic interactions of hadrons at high energies

    Yuldashev, B.S.; Fazilova, Z.F.; Ismatov, E.I.; Kurmanbai, M.S.; Ajniyazova, G.T.; Tskhay, K.V.; Medeuova, A.B.

    2004-01-01

    Study of elastic interactions of hadrons at high energies is of great interest because the amplitude of this process is the simplest one and, at the same time, a fundamental object for theoretical and experimental research. Study of this process allows one to check various theories and models quantitatively and to make a critical selection among them. By using a fundamental property of the theory - the unitarity condition of the scattering matrix - elastic scattering can be connected with inelastic reactions. The S-channel unitarity condition, which expresses the elastic amplitude via the inelastic overlapping function, is used to study the latter, to describe the experimentally measured characteristics of hadron-nucleon interactions at high energies, and to predict results. By using experimental data on the differential cross-section of elastic scattering of hadrons at various energies, together with theoretical information on the ratio δ(t) of the real part to the imaginary part of the scattering amplitude, the t-dependence of the inelastic and elastic overlapping functions is studied. The influence of a zigzag form of the differential cross-section of elastic pp (p̄p) scattering on the profile function, and of the inelastic overlapping function on the violation of geometric scaling, was studied. In the framework of the scaling, general expressions for the s- and t-dependences of the inelastic overlapping function are derived. A comparison of this function in three elastic scattering models was carried out. It was demonstrated that one needs to assume that hadrons become blacker in the central part in order to correctly describe the experimental angular distribution data. The dependence of the differential cross-section on the square of the transferred momentum for elastic hadron scattering at ISR and SPS energies is studied in the inelastic overlapping function model. (author)
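
    For reference, the impact-parameter form of the s-channel unitarity condition the abstract invokes can be written as follows (a standard statement; the authors' notation may differ):

      % Unitarity in impact-parameter space relates the elastic amplitude
      % h(s,b) to the inelastic overlap function G_in(s,b).
      \[
        2\,\operatorname{Im} h(s,b) \;=\; \bigl|h(s,b)\bigr|^{2} \;+\; G_{\mathrm{in}}(s,b)
      \]
      % Since G_in(s,b) >= 0, measured elastic differential cross-sections
      % constrain the inelastic overlap, and vice versa.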

  10. High excitation rovibrational molecular analysis in warm environments

    Zhang, Ziwei; Stancil, Phillip C.; Cumbee, Renata; Ferland, Gary J.

    2017-06-01

    Inspired by advances in infrared observation (e.g., Spitzer, Herschel and ALMA), we investigate rovibrational emission of CO and SiO in warm astrophysical environments. With recent innovations in collisional rate coefficients and rescaling methods, we are able to construct more comprehensive collisional data with high rovibrational states (vibration up to v=5 and rotation up to J=40) and multiple colliders (H2, H and He). These comprehensive data sets are used in spectral simulations with the radiative transfer codes RADEX and Cloudy. We obtained line ratio diagnostic plots and line spectra for both near- and far-infrared emission lines over a broad range of density and temperature for the case of a uniform medium. Considering the importance of both molecules in probing conditions and activities of UV-irradiated interstellar gas, we model rovibrational emission in photodissociation regions (PDRs) and AGB star envelopes (such as VY Canis Majoris, IK Tau and IRC +10216) with Cloudy. Rotational diagrams, energy distribution diagrams, and spectra are produced to examine relative state abundances, line emission intensity, and other properties. With these diverse models, we expect to gain a better understanding of PDRs and expand our scope in the chemical architecture and evolution of AGB stars and other UV-irradiated regions. The soon to be launched James Webb Space Telescope (JWST) will provide high resolution observations at near- to mid-infrared wavelengths, which opens a new window on molecular vibrational emission, calling for more detailed chemical modeling and comprehensive laboratory astrophysics data on more molecules. This work was partially supported by NASA grants NNX12AF42G and NNX15AI61G. We thank Benhui Yang, Kyle Walker, Robert Forrey, and N. Balakrishnan for collaborating on the collisional data adopted in the current work.

  11. Analysis of elastic interactions of hadrons at high energies

    Fazylov, M.I.; Yuldashev, B.S.; Azhniyazova, G.T.; Ismatov, E.I.; Sartbay, T.; Kurmanbay, M.S.; Tskhay, K.V.

    2004-01-01

    Full text: Study of elastic interactions of hadrons at high energies is of great interest because the amplitude of this process is the simplest one and, at the same time, a fundamental object for theoretical and experimental research. Study of this process allows one to check various theories and models quantitatively and to make a critical selection among them. By using a fundamental property of the theory - the unitarity condition of the scattering matrix - elastic scattering can be connected with inelastic reactions. The S-channel unitarity condition, which expresses the elastic amplitude via the inelastic overlapping function, is used to study the latter, to describe the experimentally measured characteristics of hadron-nucleon interactions at high energies, and to predict results. By using experimental data on the differential cross-section of elastic scattering of hadrons at various energies, together with theoretical information on the ratio δ(t) of the real part to the imaginary part of the scattering amplitude, the t-dependence of the inelastic and elastic overlapping functions is studied. The influence of a zigzag form of the differential cross-section of elastic pp (p̄p) scattering on the profile function, and of the inelastic overlapping function on the violation of geometric scaling, was studied. In the framework of the scaling, general expressions for the s- and t-dependences of the inelastic overlapping function are derived. A comparison of this function in three elastic scattering models was carried out. It was demonstrated that one needs to assume that hadrons become blacker in the central part in order to correctly describe the experimental angular distribution data. The dependence of the differential cross-section on the square of the transferred momentum for elastic hadron scattering at ISR and SPS energies is studied in the inelastic overlapping function model.

  12. Regulatory pathway analysis by high-throughput in situ hybridization.

    Axel Visel

    2007-10-01

    Full Text Available Automated in situ hybridization enables the construction of comprehensive atlases of gene expression patterns in mammals. Such atlases can become Web-searchable digital expression maps of individual genes and thus offer an entryway to elucidate genetic interactions and signaling pathways. Towards this end, an atlas housing approximately 1,000 spatial gene expression patterns of the midgestation mouse embryo was generated. Patterns were textually annotated using a controlled vocabulary comprising >90 anatomical features. Hierarchical clustering of annotations was carried out using distance scores calculated from the similarity between pairs of patterns across all anatomical structures. This process ordered hundreds of complex expression patterns into a matrix that reflects the embryonic architecture and the relatedness of patterns of expression. Clustering yielded 12 distinct groups of expression patterns. Because of the similarity of expression patterns within a group, members of each group may be components of regulatory cascades. We focused on the group containing Pax6, an evolutionary conserved transcriptional master mediator of development. Seventeen of the 82 genes in this group showed a change of expression in the developing neocortex of Pax6-deficient embryos. Electromobility shift assays were used to test for the presence of Pax6 paired-domain binding sites. This led to the identification of 12 genes not previously known as potential targets of Pax6 regulation. These findings suggest that cluster analysis of annotated gene expression patterns obtained by automated in situ hybridization is a novel approach for identifying components of signaling cascades.
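
    A minimal sketch (Python/SciPy; random binary annotations stand in for the real atlas, while the vocabulary size and the 12-group cut follow the numbers quoted above) of the clustering step: binary pattern-by-term vectors, a pairwise distance built from annotation overlap, and a hierarchical tree cut into groups:

      import numpy as np
      from scipy.cluster.hierarchy import fcluster, linkage
      from scipy.spatial.distance import pdist

      rng = np.random.default_rng(42)
      n_genes, n_terms = 1000, 90
      annotations = rng.random((n_genes, n_terms)) < 0.15  # binary matrix

      # Jaccard distance: 1 - |A ∩ B| / |A ∪ B| over annotated structures.
      dist = pdist(annotations, metric="jaccard")
      tree = linkage(dist, method="average")

      groups = fcluster(tree, t=12, criterion="maxclust")  # cut into 12 groups
      sizes = np.bincount(groups)[1:]
      print("group sizes:", sizes)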

  13. Numerical Analysis of Film Cooling at High Blowing Ratio

    El-Gabry, Lamyaa; Heidmann, James; Ameri, Ali

    2009-01-01

    Computational Fluid Dynamics is used in the analysis of a film cooling jet in crossflow. Predictions of film effectiveness are compared with experimental results for a circular jet at blowing ratios ranging from 0.5 to 2.0. Film effectiveness is a surface quantity which alone is insufficient in understanding the source and finding a remedy for shortcomings of the numerical model. Therefore, in addition, comparisons are made to flow field measurements of temperature along the jet centerline. These comparisons show that the CFD model is accurately predicting the extent and trajectory of the film cooling jet; however, there is a lack of agreement in the near-wall region downstream of the film hole. The effects of main stream turbulence conditions, boundary layer thickness, turbulence modeling, and numerical artificial dissipation are evaluated and found to have an insufficient impact in the wake region of separated films (i.e. cannot account for the discrepancy between measured and predicted centerline fluid temperatures). Analyses of low and moderate blowing ratio cases are carried out and results are in good agreement with data.
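
    For reference, the standard definitions used when reporting such film cooling comparisons (the paper's exact conventions may differ slightly):

      % Adiabatic film effectiveness and blowing ratio.
      \[
        \eta \;=\; \frac{T_{\infty} - T_{aw}}{T_{\infty} - T_{c}},
        \qquad
        M \;=\; \frac{\rho_{c}\,U_{c}}{\rho_{\infty}\,U_{\infty}}
      \]
      % eta  : adiabatic film effectiveness (1 = perfectly protected wall)
      % T_aw : adiabatic wall temperature; T_c : coolant temperature
      % T_inf, U_inf, rho_inf : mainstream temperature, velocity, density
      % M    : blowing ratio, the coolant-to-mainstream mass flux ratio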

  14. High-Bandwidth Tactical-Network Data Analysis in a High-Performance-Computing (HPC) Environment: Packet-Level Analysis

    2015-09-01

    individual fragments using the hash-based method. In general, fragments appear in order and relatively close to each other in the file. A fragment ... data product derived from the data model is shown in Fig. 5, a Google Earth Keyhole Markup Language (KML) file. This product includes aggregate ... Abbreviations: BLOb, binary large object; FPGA, field-programmable gate array; HPC, high-performance computing; IP, Internet Protocol; KML, Keyhole Markup Language.

  15. Preliminary Toxicity Analysis of 3-Dimensional Conformal Radiation Therapy Versus Intensity Modulated Radiation Therapy on the High-Dose Arm of the Radiation Therapy Oncology Group 0126 Prostate Cancer Trial

    Michalski, Jeff M., E-mail: jmichalski@radonc.wustl.edu [Department of Radiation Oncology Washington University Medical Center, St. Louis, Missouri (United States); Yan, Yan [Radiation Therapy Oncology Group Statistical Center, Philadelphia, Pennsylvania (United States); Watkins-Bruner, Deborah [Emory University School of Nursing, Atlanta, Georgia (United States); Bosch, Walter R. [Department of Radiation Oncology Washington University Medical Center, St. Louis, Missouri (United States); Winter, Kathryn [Radiation Therapy Oncology Group Statistical Center, Philadelphia, Pennsylvania (United States); Galvin, James M. [Department of Radiation Oncology Thomas Jefferson University Hospital, Philadelphia, Pennsylvania (United States); Bahary, Jean-Paul [Department of Radiation Oncology Centre Hospitalier de l'Université de Montréal-Notre Dame, Montreal, QC (Canada); Morton, Gerard C. [Department of Radiation Oncology Toronto-Sunnybrook Regional Cancer Centre, Toronto, ON (Canada); Parliament, Matthew B. [Department of Oncology Cross Cancer Institute, Edmonton, AB (Canada); Sandler, Howard M. [Department of Radiation Oncology Samuel Oschin Comprehensive Cancer Institute, Cedars-Sinai Medical Center, Los Angeles, California (United States)

    2013-12-01

    Purpose: To give a preliminary report of clinical and treatment factors associated with toxicity in men receiving high-dose radiation therapy (RT) on a phase 3 dose-escalation trial. Methods and Materials: The trial was initiated with 3-dimensional conformal RT (3D-CRT) and amended after 1 year to allow intensity modulated RT (IMRT). Patients treated with 3D-CRT received 55.8 Gy to a planning target volume that included the prostate and seminal vesicles, then 23.4 Gy to the prostate only. The IMRT patients were treated to the prostate and proximal seminal vesicles to 79.2 Gy. Common Toxicity Criteria, version 2.0, and Radiation Therapy Oncology Group/European Organization for Research and Treatment of Cancer late morbidity scores were used for acute and late effects. Results: Of 763 patients randomized to the 79.2-Gy arm of the Radiation Therapy Oncology Group 0126 protocol, 748 were eligible and evaluable: 491 and 257 were treated with 3D-CRT and IMRT, respectively. For both bladder and rectum, the volumes receiving 65, 70, and 75 Gy were significantly lower with IMRT (all P<.0001). Both univariate and multivariate analyses showed a statistically significant decrease in grade (G) 2+ acute collective gastrointestinal/genitourinary (GI/GU) toxicity for IMRT. There were no significant differences between 3D-CRT and IMRT for acute or late G2+ or G3+ GU toxicities. Univariate analysis showed a statistically significant decrease in late G2+ GI toxicity for IMRT (P=.039). On multivariate analysis, IMRT showed a 26% reduction in G2+ late GI toxicity (P=.099). Acute G2+ toxicity was associated with late G3+ toxicity (P=.005). With dose-volume histogram data in the multivariate analysis, RT modality was not significant, whereas white race (P=.001) and rectal V70 ≥15% were associated with G2+ rectal toxicity (P=.034). Conclusions: Intensity modulated RT is associated with a significant reduction in acute G2+ GI/GU toxicity. There is a trend for a

  16. An analysis of the development of high temperature cavitation damage

    Tinivella, R.

    1986-07-01

    The objective of the paper is the investigation of creep cavitation damage in copper. Radii distribution curves obtained from small angle neutron scattering experiments conducted on crept specimens were analyzed and compared with calculated curves. The latter were derived from cavity nucleation and growth models. From the comparison the appropriateness of particular models can be inferred. Valuable information is obtained about the nucleation behaviour. In crept and fatigued specimens, already after very short loading times, cavities appear with remarkably different radii, an observation which contradicts the concept of a critical radius. The analysis of the nucleation behaviour emphasizes the influence of the stress dependence of the nucleation rate upon the stress dependence of damage and hence upon the stress dependence of the lifetime. In most damage theories the latter is attributed to the stress dependence of cavity growth. A strong argument is derived in this paper in favour of the idea that both mechanisms - growth and nucleation - contribute to the stress dependence of the lifetime. The damage development in Cu (as well as in alpha-Fe, AISI 304 and AISI 347) is compared with the prediction of the phenomenological A-model, which assumes that the damage rate is proportional to the damage itself. The experiments show that the damage increases in time more slowly (Cu, alpha-Fe, AISI 304) or faster (AISI 347) than predicted by the model. In copper the damage rate turns out to be constant, independent of time. Accordingly, the A-model is modified and the respective consequences are briefly discussed. (orig./GSCH)

  17. High performance thermal stress analysis on the earth simulator

    Noriyuki, Kushida; Hiroshi, Okuda; Genki, Yagawa

    2003-01-01

    In this study, a thermal stress finite element analysis code optimized for the Earth Simulator was developed. A processor node of the Earth Simulator is an 8-way vector processor, and processors can communicate using the Message Passing Interface. Thus, there are two ways to parallelize the finite element method on the Earth Simulator. The first method is to assign one processor to one sub-domain, and the second is to assign one node (= 8 processors) to one sub-domain, considering shared-memory parallelization. Considering that the preconditioned conjugate gradient (PCG) method, which is one of the suitable linear equation solvers for large-scale parallel finite element methods, shows better convergence behavior when the number of domains is smaller, we decided to employ PCG with hybrid parallelization, based on both shared- and distributed-memory parallelization. It has been said that it is hard to obtain good parallel or vector performance, since the finite element method is based on unstructured grids. In such a situation, reordering is indispensable for improving computational performance [2]. In this study, we used three reordering methods, i.e. Reverse Cuthill-McKee (RCM), cyclic multicolor (CM) and diagonal jagged descending storage (DJDS) [3]. RCM provides good convergence of the incomplete lower-upper (ILU) PCG but causes load imbalance. On the other hand, CM provides good load balance but worsens the convergence of ILU PCG if the vector length is too long. Therefore, we used a combined RCM and CM method. DJDS is a method of storing sparse matrices such that a longer vector length can be obtained. For attaining efficient inter-node parallelization, partitioning methods such as recursive coordinate bisection (RCB) or MeTIS have been used. Computational performance on practical large-scale engineering problems will be shown at the meeting. (author)
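
    A small illustration (Python/SciPy, toy sparse system; only plain RCM is shown, whereas the paper combines RCM with cyclic multicoloring and DJDS storage) of why reordering matters: RCM compresses the matrix bandwidth, which generally helps ILU-type preconditioned solvers:

      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.csgraph import reverse_cuthill_mckee
      from scipy.sparse.linalg import cg

      rng = np.random.default_rng(7)
      n = 2000
      A = sp.random(n, n, density=0.002, random_state=7, format="csr")
      A = (A + A.T + sp.identity(n) * 10.0).tocsr()   # symmetric positive definite

      perm = reverse_cuthill_mckee(A, symmetric_mode=True)
      A_rcm = A[perm, :][:, perm]

      def bandwidth(m):
          """Maximum distance of a nonzero from the diagonal."""
          coo = m.tocoo()
          return int(np.max(np.abs(coo.row - coo.col)))

      print("bandwidth before RCM:", bandwidth(A))
      print("bandwidth after  RCM:", bandwidth(A_rcm))

      b = rng.random(n)
      x, info = cg(A_rcm, b[perm])            # unpreconditioned CG for brevity
      print("CG converged:", info == 0)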

  18. Analysis of fatigue reliability for high temperature and high pressure multi-stage decompression control valve

    Yu, Long; Xu, Juanjuan; Zhang, Lifang; Xu, Xiaogang

    2018-03-01

    A reliability mathematical model for the high temperature and high pressure multi-stage decompression control valve (HMDCV) is established based on stress-strength interference theory, and a temperature correction coefficient is introduced to revise the material fatigue limit at high temperature. The reliability of key dangerous components and the fatigue sensitivity curve of each component are calculated and analyzed by combining the fatigue life analysis of the control valve with reliability theory. The proportional impact of each component on the fatigue failure of the control valve system was obtained. The results show that the temperature correction factor makes the theoretical reliability calculations more accurate, that the predicted life expectancy of the main pressure parts accords with the technical requirements, and that the valve body and the sleeve have an obvious influence on control system reliability; the stress concentration in key parts of the control valve can be reduced in the design process by improving the structure.
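
    A minimal numeric sketch (Python; all numbers are invented, not the paper's valve data) of the stress-strength interference calculation, with a temperature correction factor applied to the room-temperature fatigue limit:

      from math import erf, sqrt

      def reliability(mu_strength, sd_strength, mu_stress, sd_stress):
          """R = P(strength > stress) for independent normal variables."""
          beta = (mu_strength - mu_stress) / sqrt(sd_strength**2 + sd_stress**2)
          return 0.5 * (1.0 + erf(beta / sqrt(2.0)))

      k_T = 0.85                                # hypothetical high-temperature derating
      mu_fatigue_rt, sd_fatigue = 420.0, 35.0   # room-temp fatigue limit (MPa)
      mu_stress, sd_stress = 260.0, 30.0        # operating stress (MPa)

      R = reliability(k_T * mu_fatigue_rt, sd_fatigue, mu_stress, sd_stress)
      print(f"component reliability at temperature: {R:.4f}")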

  19. High-consequence analysis, evaluation, and application of select criteria

    Gutmanis, I.; Jaksch, J.A.

    1984-01-01

    A number of characteristics distinguish environmental risk from pollution problems. These characteristics make environmental risk problems harder to manage through existing regulatory, legal, and economic institutions. Hence, technologies involving environmental risk impose on society extremely difficult collective decisions. This paper is concerned with the process of reaching social decisions that involve low-probability, high-consequence outcomes. It is divided into five major parts. Part I contains the introduction. Part II reviews the two main classes of criteria that have been proposed for social decisions: approaches based on market mechanisms and their extensions, and approaches associated with Rawls and Buchanan, which not only focus on outcomes but also impose a set of minimal constraints on the process for reaching decisions and social consensus. Part III proposes a set of eight criteria for evaluating social decision processes. In Parts IV and V we investigate applying the criteria to two case studies -- one on nuclear waste disposal and the other on transportation of liquefied natural gas.

  20. High performance computing environment for multidimensional image analysis.

    Rao, A Ravishankar; Cecchi, Guillermo A; Magnasco, Marcelo

    2007-07-10

    The processing of images acquired through microscopy is a challenging task due to the large size of datasets (several gigabytes) and the fast turnaround time required. If the throughput of the image processing stage is significantly increased, it can have a major impact in microscopy applications. We present a high performance computing (HPC) solution to this problem. This involves decomposing the spatial 3D image into segments that are assigned to unique processors, and matched to the 3D torus architecture of the IBM Blue Gene/L machine. Communication between segments is restricted to the nearest neighbors. When running on a 2 GHz Intel CPU, the task of 3D median filtering on a typical 256 megabyte dataset takes two and a half hours, whereas by using 1024 nodes of Blue Gene, this task can be performed in 18.8 seconds, a 478x speedup. Our parallel solution dramatically improves the performance of image processing, feature extraction and 3D reconstruction tasks. This increased throughput permits biologists to conduct unprecedented large scale experiments with massive datasets.
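
A hedged serial sketch of the decomposition idea (not the Blue Gene/L implementation): split the volume into slabs, filter each slab with a halo of neighbouring voxels, and reassemble -- the single-process analogue of nearest-neighbour exchange between segments.

```python
# Slab decomposition of a 3D median filter with halo overlap.
import numpy as np
from scipy.ndimage import median_filter

def median_filter_by_slabs(vol, size=3, n_slabs=4):
    halo = size // 2                      # voxels needed from neighbouring slabs
    out = np.empty_like(vol)
    edges = np.linspace(0, vol.shape[0], n_slabs + 1, dtype=int)
    for lo, hi in zip(edges[:-1], edges[1:]):
        a, b = max(lo - halo, 0), min(hi + halo, vol.shape[0])
        filtered = median_filter(vol[a:b], size=size)
        out[lo:hi] = filtered[lo - a : lo - a + (hi - lo)]  # discard halo rows
    return out

vol = np.random.default_rng(1).random((64, 64, 64)).astype(np.float32)
assert np.allclose(median_filter_by_slabs(vol), median_filter(vol, size=3))
```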

  1. High-resolution analysis of the mechanical behavior of tissue

    Hudnut, Alexa W.; Armani, Andrea M.

    2017-06-01

    The mechanical behavior and properties of biomaterials, such as tissue, have been directly and indirectly connected to numerous malignant physiological states. For example, an increase in the Young's Modulus of tissue can be indicative of cancer. Due to the heterogeneity of biomaterials, it is extremely important to perform these measurements using whole or unprocessed tissue because the tissue matrix contains important information about the intercellular interactions and the structure. Thus, developing high-resolution approaches that can accurately measure the elasticity of unprocessed tissue samples is of great interest. Unfortunately, conventional elastography methods such as atomic force microscopy, compression testing, and ultrasound elastography either require sample processing or have poor resolution. In the present work, we demonstrate the characterization of unprocessed salmon muscle using an optical polarimetric elastography system. We compare the results of compression testing within different samples of salmon skeletal muscle with different numbers of collagen membranes to characterize differences in heterogeneity. Using the intrinsic collagen membranes as markers, we determine the resolution of the system when testing biomaterials. The device reproducibly measures the stiffness of the tissues at variable strains. By analyzing the amount of energy lost by the sample during compression, collagen membranes that are 500 μm in size are detected.

  2. High precision isotopic ratio analysis of volatile metal chelates

    Hachey, D.L.; Blais, J.C.; Klein, P.D.

    1980-01-01

    High precision isotope ratio measurements have been made for a series of volatile alkaline earth and transition metal chelates using conventional GC/MS instrumentation. Electron ionization was used for alkaline earth chelates, whereas isobutane chemical ionization was used for transition metal studies. Natural isotopic abundances were determined for a series of Mg, Ca, Cr, Fe, Ni, Cu, Cd, and Zn chelates. Absolute accuracy ranged between 0.01 and 1.19 at.%. Absolute precision ranged between ±0.01-0.27 at.% (RSD ±0.07-10.26%) for elements that contained as many as eight natural isotopes. Calibration curves were prepared using natural abundance metals and their enriched 50Cr, 60Ni, and 65Cu isotopes covering the range 0.1-1010.7 at.% excess. A separate multiple isotope calibration curve was similarly prepared using enriched 60Ni (0.02-2.15 at.% excess) and 62Ni (0.23-18.5 at.% excess). The samples were analyzed by GC/CI/MS. Human plasma, containing enriched 26Mg and 44Ca, was analyzed by EI/MS. 1 figure, 5 tables

  3. Computational Fluid Dynamics Analysis of High Injection Pressure Blended Biodiesel

    Khalid, Amir; Jaat, Norrizam; Faisal Hushim, Mohd; Manshoor, Bukhari; Zaman, Izzuddin; Sapit, Azwan; Razali, Azahari

    2017-08-01

    Biodiesel has great potential as a substitute for petroleum fuel for the purpose of achieving clean energy production and emission reduction. Among the methods that can control combustion properties, controlling the fuel injection conditions is one of the most successful. The purpose of this study is to investigate the effect of high injection pressure of biodiesel blends on spray characteristics using Computational Fluid Dynamics (CFD). Injection pressures of 220 MPa, 250 MPa and 280 MPa were examined. The ambient temperature was held at 1050 K and the ambient pressure at 8 MPa in order to simulate the effect of boost pressure or a turbocharger during the combustion process. Computational Fluid Dynamics was used to investigate the spray characteristics of biodiesel blends such as spray penetration length, spray angle and the mixture formation of fuel-air mixing. The results show that as injection pressure increases, a wider spray angle is produced by both biodiesel blends and diesel fuel. The injection pressure strongly affects the mixture formation and the characteristics of the fuel spray; a longer spray penetration length promotes fuel-air mixing.

  4. The analysis of energy efficiency in water electrolysis under high temperature and high pressure

    Hourng, L. W.; Tsai, T. T.; Lin, M. Y.

    2017-11-01

    This paper aims to analyze the energy efficiency of water electrolysis under high pressure and high temperature conditions. The effects of temperature and pressure on four different reaction mechanisms, namely, reversible voltage, activation polarization, ohmic polarization, and concentration polarization, are investigated in detail. Results show that the ohmic and concentration over-potentials increase with temperature, whereas the reversible and activation over-potentials decrease; the net efficiency is therefore enhanced as temperature is increased. The efficiency of water electrolysis at 350°C/100 bar is increased by about 17% compared with that at 80°C/1 bar.
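
A hedged sketch of the voltage bookkeeping described above: cell voltage is the reversible voltage plus activation, ohmic and concentration over-potentials. Every parameter value below (exchange current, activation energy, cell resistance, limiting current, reversible voltages) is an assumed placeholder chosen only to reproduce the qualitative trend, not a value from the paper.

```python
# Cell voltage = E_rev + eta_act + eta_ohm + eta_conc; HHV efficiency ~ 1.48 V / V_cell.
import numpy as np

F, R = 96485.0, 8.314                                        # C/mol, J/(mol K)

def cell_voltage(T, i, E_rev):
    i0 = 1e-4 * np.exp(-40e3 / R * (1.0 / T - 1.0 / 353.0))  # exchange current rises with T (assumed Arrhenius)
    eta_act = (R * T / (0.5 * F)) * np.log(i / i0)           # Tafel approximation
    eta_ohm = i * 0.15                                       # ohmic drop, assumed 0.15 ohm cm^2
    eta_conc = (R * T / (2 * F)) * np.log(2.0 / (2.0 - i))   # assumed limiting current 2 A/cm^2
    return E_rev + eta_act + eta_ohm + eta_conc

for T, E_rev in [(353.0, 1.18), (623.0, 1.05)]:              # ~80 C and ~350 C (E_rev assumed)
    V = cell_voltage(T, 1.0, E_rev)                          # at 1 A/cm^2
    print(f"T = {T:.0f} K: V_cell = {V:.3f} V, HHV efficiency ~ {1.48 / V:.1%}")
```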

  5. High-pressure liquid chromatographic analysis of pramoxine hydrochloride in high lipoid aerosol foam dosage form.

    Weinberger, R; Mann, B; Posluszny, J

    1980-04-01

    A rapid and quantitative method for the determination of pramoxine hydrochloride by high-pressure liquid chromatography is presented. The drug is extracted as the salt from a preparation with a high lipoid composition by partitioning it to the aqueous phase of an ether-methanol-water-acetic acid system. The extract is chromatographed on an octadecylsilane bonded packing with a methanol-water-acetic acid-methanesulfonic acid mobile phase. The time required for each separation is approximately 6 min. Analytical recoveries of 100.4 +/- 1.5% were obtained.

  6. Risk of tuberculosis in high-rise and high density dwellings: An exploratory spatial analysis

    Lai, Poh-Chin; Low, Chien-Tat; Tse, Wing-Sze Cindy; Tsui, Chun-Kan; Lee, Herman; Hui, Pak-Kwan

    2013-01-01

    Studies have shown that socioeconomic and environmental factors have direct/indirect influences on TB. This research focuses on TB prevalence in Hong Kong in relation to its compact urban development comprising high-rise and high-density residential dwellings caused by rapid population growth and limited land resources. It has been postulated that occupants living on higher levels of a building would benefit from better ventilation and direct sunlight and would thus be less likely to contract infectious respiratory diseases. On the contrary, those on lower floors amid the dense clusters of high-rises are more susceptible to TB infection because of poorer air quality from street-level pollution and lesser exposure to direct sunlight. However, there have been no published studies to support these claims. As TB continues to threaten public health in Hong Kong, this study seeks to understand the effects of housing development on TB occurrences in an urban setting. -- Highlights: ► We examined the association between TB prevalence and floor levels using the sky view factor. ► TB is more prevalent on lower floors and the relationship is manifested in taller buildings. ► Floor level and building height jointly affect sky view factors at diseased locations. ► A GIS framework is effective in associating disease prevalence in an urban setting. -- Research on TB prevalence in Hong Kong and its compact urban development, with public health implications for Asian cities in pursuit of high-rise urban living

  7. A study on structural analysis of highly corrosive melts at high temperature

    Ohtori, N

    2002-01-01

    When sodium is burned at high temperature in the atmosphere, it reacts simultaneously with H2O in the atmosphere, so that it can produce a high temperature melt of sodium hydroxide as a solvent. If this melt includes peroxide ion (O2^2-), it is considerably active and corrosive toward iron, so that several sodium iron double oxides are produced as corrosion products after the reaction with steel structures. The present study was carried out in order to investigate the possible presence of peroxide ion in the sodium hydroxide solvent at high temperature and the identification of the several corrosion products using laser Raman spectroscopy. A measurement system with an ultraviolet laser was also developed in the present work to improve the measurement capability at high temperature. As results from the measurements, the possibility of the presence of peroxide ion was shown up to 823 K in sodium peroxide and 823 K in the melt of sodium hydroxide mixed with sodium peroxide. A...

  8. Analysis and topology optimization design of high-speed driving spindle

    Wang, Zhilin; Yang, Hai

    2018-04-01

    The three-dimensional model of a high-speed driving spindle is established using SOLIDWORKS. The model is imported into ABAQUS, and a finite element analysis model of the high-speed driving spindle is established using spring elements to simulate the bearing boundary conditions. A static analysis of the high-speed driving spindle yields the stress, strain and displacement contour plots of the spindle, and on the basis of these results topology optimization of the spindle is performed, completing the lightweight design of the high-speed driving spindle. The design scheme provides guidance for the design of shaft parts with similar structures.

  9. Analysis of technology and seminar on economic trends about High-intensity LED

    2003-09-01

    This report is divided into two parts. The first part covers technical trends in high-intensity LEDs, including an introduction to the LED as a compound semiconductor, white LEDs, patent issues, and reviews of high-intensity LED technology and reliability. The second part deals with economic trends for high-intensity LEDs. The seminar was held by the Korea Industrial Education Institute in 2003 to report on analysis and economic trends for high-intensity LEDs.

  10. Econometric analysis of realised covariation: high frequency covariance, regression and correlation in financial economics

    Ole E. Barndorff-Nielsen; Neil Shephard

    2002-01-01

    This paper analyses multivariate high frequency financial data using realised covariation. We provide a new asymptotic distribution theory for standard methods such as regression, correlation analysis and covariance. It will be based on a fixed interval of time (e.g. a day or week), allowing the number of high frequency returns during this period to go to infinity. Our analysis allows us to study how high frequency correlations, regressions and covariances change through time. In particular w...
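
A sketch of the basic estimator on assumed data: over a fixed interval, the realised covariance is the sum of outer products of the high-frequency return vectors, and realised regression and correlation follow from its entries.

```python
# Realised covariation from high-frequency returns (synthetic data).
import numpy as np

rng = np.random.default_rng(2)
r = rng.normal(scale=1e-3, size=(288, 2))        # 5-minute returns, 2 assets, one day

RC = r.T @ r                                     # realised covariance matrix
beta = RC[0, 1] / RC[1, 1]                       # realised regression coefficient of asset 0 on 1
rho = RC[0, 1] / np.sqrt(RC[0, 0] * RC[1, 1])    # realised correlation
print(RC, beta, rho)
```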

  11. Some aspects of ICP-AES analysis of high purity rare earths

    Murty, P.S.; Biswas, S.S.

    1991-01-01

    Inductively coupled plasma atomic emission spectrometry (ICP-AES) is a technique capable of giving high sensitivity in trace elemental analysis. While the technique possesses high sensitivity, it lacks high selectivity. Selectivity is important where substances emitting complex spectra are to be analysed for trace elements. Rare earths emit highly complex spectra in a plasma source, and the determination of adjacent rare earths in a high purity rare earth matrix, with high sensitivity, is not possible due to the inadequate selectivity of ICP-AES. One approach that has yielded reasonably good spectral selectivity in high purity rare earth analysis by ICP-AES is to employ a combination of wavelength modulation techniques and a high resolution echelle grating. However, it was found that by using a high resolution monochromator, sensitivities either comparable to or better than those reported by the wavelength modulation technique could be obtained. (author). 2 refs., 2 figs., 2 tabs

  12. High spatial resolution and high brightness ion beam probe for in-situ elemental and isotopic analysis

    Long, Tao; Clement, Stephen W. J.; Bao, Zemin; Wang, Peizhi; Tian, Di; Liu, Dunyi

    2018-03-01

    A high spatial resolution and high brightness ion beam from a cold cathode duoplasmatron source and primary ion optics are presented and applied to in-situ analysis of micro-scale geological material with complex structural and chemical features. The magnetic field in the source as well as the influence of relative permeability of magnetic materials on source performance was simulated using COMSOL to confirm the magnetic field strength of the source. Based on SIMION simulation, a high brightness and high spatial resolution negative ion optical system has been developed to achieve Critical (Gaussian) illumination mode. The ion source and primary column are installed on a new Time-of-Flight secondary ion mass spectrometer for analysis of geological samples. The diameter of the ion beam was measured by the knife-edge method and a scanning electron microscope (SEM). Results show that an O2- beam of ca. 5 μm diameter with a beam intensity of ∼5 nA and an O- beam of ca. 5 μm diameter with a beam intensity of ∼50 nA were obtained, respectively. This design will open new possibilities for in-situ elemental and isotopic analysis in geological studies.
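
A hedged sketch of the knife-edge analysis mentioned above: as the edge crosses a Gaussian beam, the transmitted current follows an error-function profile, and fitting it yields the beam diameter. The model and all numbers are assumptions for illustration.

```python
# Knife-edge scan fit: recover the 1/e^2 beam diameter from an erf profile.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def knife_edge(x, I0, x0, w):
    """Transmitted intensity as a knife edge at position x occludes a Gaussian beam."""
    return 0.5 * I0 * (1 - erf(np.sqrt(2) * (x - x0) / w))

x = np.linspace(-10, 10, 81)                                  # edge position, um (assumed)
data = knife_edge(x, 5.0, 0.3, 2.5) \
     + np.random.default_rng(3).normal(0, 0.05, x.size)       # synthetic measurement

(I0, x0, w), _ = curve_fit(knife_edge, x, data, p0=[4.0, 0.0, 1.0])
print(f"1/e^2 beam diameter ~ {2 * w:.2f} um")
```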

  13. High-Temperature Structural Analysis of a Small-Scale Prototype of a Process Heat Exchanger (IV) - Macroscopic High-Temperature Elastic-Plastic Analysis -

    Song, Kee Nam; Hong, Sung Deok; Park, Hong Yoon

    2011-01-01

    A PHE (Process Heat Exchanger) is a key component required to transfer heat energy of 950 °C generated in a VHTR (Very High Temperature Reactor) to a chemical reaction that yields a large quantity of hydrogen. A small-scale PHE prototype made of Hastelloy-X was scheduled for testing in a small-scale gas loop at the Korea Atomic Energy Research Institute. In this study, as a part of the evaluation of the high-temperature structural integrity of the PHE prototype, high-temperature structural analysis modeling, and macroscopic thermal and elastic-plastic structural analysis of the PHE prototype were carried out under the gas-loop test conditions as a preliminary study before carrying out the performance test in the gas loop. The results obtained in this study will be used to design the performance test setup for the modified PHE prototype

  14. High sensitivity and high resolution element 3D analysis by a combined SIMS–SPM instrument

    Yves Fleming

    2015-04-01

    Using the recently developed SIMS–SPM prototype, secondary ion mass spectrometry (SIMS) data was combined with topographical data from the scanning probe microscopy (SPM) module for five test structures in order to obtain accurate chemical 3D maps: a polystyrene/polyvinylpyrrolidone (PS/PVP) polymer blend, a nickel-based super-alloy, a titanium carbonitride-based cermet, a reticle test structure and Mg(OH)2 nanoclusters incorporated inside a polymer matrix. The examples illustrate the potential of this combined approach to track and eliminate artefacts related to inhomogeneities of the sputter rates (caused by samples containing various materials, different phases or having a non-flat surface) and inhomogeneities of the secondary ion extraction efficiencies due to local field distortions (caused by topography with high aspect ratios). In this respect, this paper presents the measured relative sputter rates between PVP and PS as well as between the different phases of the TiCN cermet.

  15. Development and validation of a method for the determination of regulated fragrance allergens by High-Performance Liquid Chromatography and Parallel Factor Analysis 2.

    Pérez-Outeiral, Jessica; Elcoroaristizabal, Saioa; Amigo, Jose Manuel; Vidal, Maider

    2017-12-01

    This work presents the development and validation of a multivariate method for quantitation of 6 potentially allergenic substances (PAS) related to fragrances by ultrasound-assisted emulsification microextraction coupled with HPLC-DAD and PARAFAC2 in the presence of 18 other PAS. The objective is the extension of a previously proposed univariate method to be able to determine the 24 PAS currently considered as allergens. The suitability of the multivariate approach for the qualitative and quantitative analysis of the analytes is discussed through datasets of increasing complexity, comprising the assessment and validation of the method performance. PARAFAC2 was shown to adequately model the data in the face of different instrumental and chemical issues, such as co-eluting profiles, overlapping spectra, unknown interfering compounds, retention time shifts and baseline drifts. Satisfactory quality parameters of the model performance were obtained (R2 ≥ 0.94), as well as meaningful chromatographic and spectral profiles (r ≥ 0.97). Moreover, low errors of prediction in external validation standards (below 15% in most cases) as well as acceptable quantification errors in real spiked samples (recoveries from 82 to 119%) confirmed the suitability of PARAFAC2 for resolution and quantification of the PAS. The combination of the previously proposed univariate approach, for the well-resolved peaks, with the developed multivariate method allows the determination of the 24 regulated PAS. Copyright © 2017 Elsevier B.V. All rights reserved.
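
A sketch only, not the authors' workflow: fitting a PARAFAC2 model to a stack of chromatographic runs (elution time x wavelength slices) with TensorLy. The data here is synthetic, and the call and the unpacking of the result follow the tensorly.decomposition.parafac2 documentation; treat the exact signature as an assumption.

```python
# PARAFAC2 on a list of (time x wavelength) slices; PARAFAC2, unlike PARAFAC,
# allows the elution profiles to shift from run to run.
import numpy as np
from tensorly.decomposition import parafac2

rng = np.random.default_rng(4)
slices = [rng.random((60, 30)) for _ in range(8)]   # 8 runs (assumed dimensions)

model = parafac2(slices, rank=3, n_iter_max=200, random_state=0)
# model is a Parafac2Tensor; per the TensorLy docs it carries weights,
# factor matrices and per-slice projections (unpacking assumed below).
weights, factors, projections = model
print([f.shape for f in factors])
```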

  16. Towards high resolution polarisation analysis using double polarisation and ellipsoidal analysers

    Martin-Y-Marero, D

    2002-01-01

    Classical polarisation analysis methods lack the combination of high resolution and high count rate necessary to cope with the demands of modern condensed-matter experiments. In this work, we present a method to achieve high resolution polarisation analysis based on a double polarisation system. By coupling this method with an ellipsoidal wavelength analyser, a high count rate can be achieved whilst delivering a resolution of around 10 μeV. This method is ideally suited to pulsed sources, although it can be adapted to continuous sources as well. (orig.)

  17. A study on thermal characteristics analysis model of high frequency switching transformer

    Yoo, Jin-Hyung; Jung, Tae-Uk

    2015-05-01

    Recently, interest has been shown in research on the module-integrated converter (MIC) in small-scale photovoltaic (PV) generation. In an MIC, the voltage-boosting high frequency transformer should be designed to be compact in size and have high efficiency. To satisfy these requirements, this paper presents a coupled electromagnetic analysis model of a transformer connected with a high frequency switching DC-DC converter circuit, considering the thermal characteristics due to copper and core losses. A design optimization procedure for high efficiency is also presented using this analysis method, and it is verified by experimental results.

  18. Computational Environments and Analysis methods available on the NCI High Performance Computing (HPC) and High Performance Data (HPD) Platform

    Evans, B. J. K.; Foster, C.; Minchin, S. A.; Pugh, T.; Lewis, A.; Wyborn, L. A.; Evans, B. J.; Uhlherr, A.

    2014-12-01

    The National Computational Infrastructure (NCI) has established a powerful in-situ computational environment to enable both high performance computing and data-intensive science across a wide spectrum of national environmental data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods to support data analysis, and 3) the progress in addressing harmonisation of the underlying data collections for future transdisciplinary research that enables accurate climate projections. NCI makes available 10+ PB major data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the national scientific records), major research communities, and collaborating overseas organisations. The data is accessible within an integrated HPC-HPD environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected large scale and high-bandwidth Lustre filesystems. This computational environment supports a catalogue of integrated reusable software and workflows from earth system and ecosystem modelling, weather research, satellite and other observed data processing and analysis. To enable transdisciplinary research on this scale, data needs to be harmonised so that researchers can readily apply techniques and software across the corpus of data available and not be constrained to work within artificial disciplinary boundaries. Future challenges will

  19. Coastal Change Analysis Program (C-CAP) High Resolution Land Cover and Change Data

    National Oceanic and Atmospheric Administration, Department of Commerce — The NOAA Coastal Change Analysis Program (C-CAP) produces national standardized high resolution land cover and change products for the coastal regions of the U.S....

  20. Laboratory testing and economic analysis of high RAP warm mixed asphalt.

    2009-03-24

    This report contains laboratory testing, economic analysis, literature review, and information obtained from multiple producers throughout the state of Mississippi regarding the use of high RAP (50 % to 100%) mixtures containing warm mix additives. T...

  1. Analysis technique of impurity in high purity deuterium by cryogenic gas-chromatography

    Zhou Junbo; Gao Liping

    2007-01-01

    An accurate and practical quantitative analysis method for O2, N2 and H2, HD in high purity deuterium was developed using chromatographic columns packed with 5A molecular sieve and alumina, operated at ambient temperature and 77 K, respectively. The minimum detection limit of the present method is (150-200) × 10⁻⁶ for H2 and HD, which meets the need for quantitative analysis of the impurities during high purity deuterium preparation. (authors)

  2. NiftyPET: a High-throughput Software Platform for High Quantitative Accuracy and Precision PET Imaging and Analysis.

    Markiewicz, Pawel J; Ehrhardt, Matthias J; Erlandsson, Kjell; Noonan, Philip J; Barnes, Anna; Schott, Jonathan M; Atkinson, David; Arridge, Simon R; Hutton, Brian F; Ourselin, Sebastien

    2018-01-01

    We present a standalone, scalable and high-throughput software platform for PET image reconstruction and analysis. We focus on high fidelity modelling of the acquisition processes to provide high accuracy and precision quantitative imaging, especially for large axial field of view scanners. All the core routines are implemented using parallel computing available from within the Python package NiftyPET, enabling easy access, manipulation and visualisation of data at any processing stage. The pipeline of the platform starts from MR and raw PET input data and is divided into the following processing stages: (1) list-mode data processing; (2) accurate attenuation coefficient map generation; (3) detector normalisation; (4) exact forward and back projection between sinogram and image space; (5) estimation of reduced-variance random events; (6) high accuracy fully 3D estimation of scatter events; (7) voxel-based partial volume correction; (8) region- and voxel-level image analysis. We demonstrate the advantages of this platform using an amyloid brain scan where all the processing is executed from a single and uniform computational environment in Python. The high accuracy acquisition modelling is achieved through span-1 (no axial compression) ray tracing for true, random and scatter events. Furthermore, the platform offers uncertainty estimation of any image derived statistic to facilitate robust tracking of subtle physiological changes in longitudinal studies. The platform also supports the development of new reconstruction and analysis algorithms through restricting the axial field of view to any set of rings covering a region of interest and thus performing fully 3D reconstruction and corrections using real data significantly faster. All the software is available as open source with the accompanying wiki-page and test data.

  3. Local level epidemiological analysis of TB in people from a high incidence country of birth

    Massey Peter D; Durrheim David N; Stephens Nicola; Christensen Amanda

    2013-01-01

    Abstract Background The setting for this analysis is the low tuberculosis (TB) incidence state of New South Wales (NSW), Australia. Local level analysis of TB epidemiology in people from high incidence countries-of-birth (HIC) in a low incidence setting has not been conducted in Australia and has not been widely reported. Local level analysis could inform measures such as active case finding and targeted earlier diagnosis. The aim of this study was to use a novel approach to identify local ar...

  4. In situ flash x-ray high-speed computed tomography for the quantitative analysis of highly dynamic processes

    Moser, Stefan; Nau, Siegfried; Salk, Manfred; Thoma, Klaus

    2014-02-01

    The in situ investigation of dynamic events, ranging from car crashes to ballistics, is often key to the understanding of dynamic material behavior. In many cases the important processes and interactions happen on the scale of milli- to microseconds at speeds of 1000 m/s or more. Often, 3D information is necessary to fully capture and analyze all relevant effects. High-speed 3D-visualization techniques are thus required for in situ analysis. 3D-capable optical high-speed methods are often impaired by luminous effects and dust, while flash x-ray based methods usually deliver only 2D data. In this paper, a novel 3D-capable flash x-ray based method, in situ flash x-ray high-speed computed tomography, is presented. The method is capable of producing 3D reconstructions of high-speed processes based on an undersampled dataset consisting of only a few (typically 3 to 6) x-ray projections. The major challenges are identified and discussed, and the chosen solution is outlined. The method is illustrated with an exemplary application to a 1000 m/s high-speed impact event on the scale of microseconds. A quantitative analysis of the in situ measurement of the material fragments with a 3D reconstruction at 1 mm voxel size is presented and the results are discussed. The results show that the HSCT method allows gaining valuable visual and quantitative mechanical information for the understanding and interpretation of high-speed events.
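
As a generic stand-in for reconstruction from very few projections (not the authors' algorithm or geometry), here is a minimal Kaczmarz/ART sketch: iterative row-wise projections that find a solution consistent with an underdetermined set of ray sums.

```python
# Kaczmarz sweeps for an underdetermined linear system A x = b (synthetic stand-in).
import numpy as np

def kaczmarz(A, b, n_sweeps=50, relax=0.5):
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            ai = A[i]
            x += relax * (b[i] - ai @ x) / (ai @ ai) * ai  # project onto row i
    return x

rng = np.random.default_rng(5)
x_true = rng.random(100)
A = rng.normal(size=(30, 100))        # 30 "ray sums" for 100 unknowns (assumed)
b = A @ x_true
x_rec = kaczmarz(A, b)
print("residual:", np.linalg.norm(A @ x_rec - b))
```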

  5. One-point fluctuation analysis of the high-energy neutrino sky

    Feyereisen, Michael R.; Tamborra, Irene; Ando, Shin'ichiro

    2017-01-01

    We perform the first one-point fluctuation analysis of the high-energy neutrino sky. This method proves especially well suited to contemporary neutrino data, as it allows study of the properties of the astrophysical components of the high-energy flux detected by the IceCube telescope, even...

  6. Innovation management and marketing in the high-tech sector: A content analysis of advertisements

    Gerhard, D.; Brem, Alexander; Baccarella, Ch.

    2011-01-01

    Advertising high-technology products is a tricky and critical task for every company, since it means operating in an environment with high market uncertainty. The work presents results of a content analysis of 110 adverts for consumer electronics products which examines how these products and the...

  7. Remaking Poems: Combining Translation and Digital Media to Interest High School Students in Poetry Analysis

    Simpson, Amy Beth

    2017-01-01

    In American high schools, the practice of poetry analysis as a study of language art has declined. Outworn methods have contributed to the trend away from close interactions with the text, to the unfortunate end that millennial high school students neither understand nor enjoy poetry. Digital technology coupled with principles of translation…

  8. Analysis strategies for high-resolution UHF-fMRI data.

    Polimeni, Jonathan R; Renvall, Ville; Zaretskaya, Natalia; Fischl, Bruce

    2018-03-01

    Functional MRI (fMRI) benefits from both increased sensitivity and specificity with increasing magnetic field strength, making it a key application for Ultra-High Field (UHF) MRI scanners. Most UHF-fMRI studies utilize the dramatic increases in sensitivity and specificity to acquire high-resolution data reaching sub-millimeter scales, which enable new classes of experiments to probe the functional organization of the human brain. This review article surveys advanced data analysis strategies developed for high-resolution fMRI at UHF. These include strategies designed to mitigate distortion and artifacts associated with higher fields in ways that attempt to preserve spatial resolution of the fMRI data, as well as recently introduced analysis techniques that are enabled by these extremely high-resolution data. Particular focus is placed on anatomically-informed analyses, including cortical surface-based analysis, which are powerful techniques that can guide each step of the analysis from preprocessing to statistical analysis to interpretation and visualization. New intracortical analysis techniques for laminar and columnar fMRI are also reviewed and discussed. Prospects for single-subject individualized analyses are also presented and discussed. Altogether, there are both specific challenges and opportunities presented by UHF-fMRI, and the use of proper analysis strategies can help these valuable data reach their full potential. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Linking Automated Data Analysis and Visualization with Applications in Developmental Biology and High-Energy Physics

    Ruebel, Oliver [Technical Univ. of Darmstadt (Germany)

    2009-11-20

    Knowledge discovery from large and complex collections of today's scientific datasets is a challenging task. With the ability to measure and simulate more processes at increasingly finer spatial and temporal scales, the increasing number of data dimensions and data objects is presenting tremendous challenges for data analysis and effective data exploration methods and tools. Researchers are overwhelmed with data and standard tools are often insufficient to enable effective data analysis and knowledge discovery. The main objective of this thesis is to provide important new capabilities to accelerate scientific knowledge discovery from large, complex, and multivariate scientific data. The research covered in this thesis addresses these scientific challenges using a combination of scientific visualization, information visualization, automated data analysis, and other enabling technologies, such as efficient data management. The effectiveness of the proposed analysis methods is demonstrated via applications in two distinct scientific research fields, namely developmental biology and high-energy physics. Advances in microscopy, image analysis, and embryo registration enable for the first time measurement of gene expression at cellular resolution for entire organisms. Analysis of high-dimensional spatial gene expression datasets is a challenging task. By integrating data clustering and visualization, analysis of complex, time-varying, spatial gene expression patterns and their formation becomes possible. The analysis framework has been integrated with MATLAB and the visualization, making advanced analysis tools accessible to biologists and enabling bioinformatics researchers to directly integrate their analysis with the visualization. Laser wakefield particle accelerators (LWFAs) promise to be a new compact source of high-energy particles and radiation, with wide applications ranging from medicine to physics. To gain insight into the complex physical processes of particle

  11. XPS analysis for cubic boron nitride crystal synthesized under high pressure and high temperature using Li3N as catalyst

    Guo, Xiaofei; Xu, Bin; Zhang, Wen; Cai, Zhichao; Wen, Zhenxing

    2014-01-01

    Highlights: • cBN was synthesized using Li3N as catalyst under high pressure and high temperature (HPHT). • The film coating the as-grown cBN crystals was studied by XPS. • The variation of the electronic structure in the film was investigated. • The growth mechanism of cubic boron nitride crystal is analyzed briefly. - Abstract: Cubic boron nitride (cBN) single crystals are synthesized with lithium nitride (Li3N) as catalyst under high pressure and high temperature. The variation of the electronic structures of boron nitride in different layers of the coating film on the cBN single crystal has been investigated by X-ray photoelectron spectroscopy. Combined with atomic concentration analysis, it was shown that from the film/cBN crystal interface to the inner film, the sp2 fractions decrease while the sp3 fractions increase. Moreover, by transmission electron microscopy, many cBN microparticles are found at the interface. Since there is no Li3N in the film, it is possible that Li3N first reacts with hexagonal boron nitride to produce Li3BN2 during cBN crystal synthesis under high pressure and high temperature (HPHT). The boron and nitrogen atoms required for cBN crystal growth could come from the direct conversion of hexagonal boron nitride with the catalysis of Li3BN2 under high pressure and high temperature, rather than directly from the decomposition of Li3BN2

  12. Denaturing high-performance liquid chromatography mutation analysis in patients with reduced Protein S levels

    Bathum, Lise; Münster, Anna-Marie; Nybo, Mads

    2008-01-01

    BACKGROUND: Patients with congenital Protein S deficiency have increased risk of venous thromboembolism. However, Protein S levels show large intra-individual variation, and the biochemical assays have low accuracy and a high interlaboratory variability. Genetic analysis might aid in a more precise diagnosis and risk estimation. The aim was to design a high-throughput genetic analysis based on denaturing high-performance liquid chromatography to identify sequence variations in the gene coding for Protein S, giving a precise diagnosis and subsequently a better risk estimation. PATIENTS: In total, 55 patients referred to the Section of Thrombosis and Haemostasis, Odense...

  13. Econometric analysis of realized covariation: high frequency based covariance, regression, and correlation in financial economics

    Barndorff-Nielsen, Ole Eiler; Shephard, N.

    2004-01-01

    This paper analyses multivariate high frequency financial data using realized covariation. We provide a new asymptotic distribution theory for standard methods such as regression, correlation analysis, and covariance. It will be based on a fixed interval of time (e.g., a day or week), allowing the number of high frequency returns during this period to go to infinity. Our analysis allows us to study how high frequency correlations, regressions, and covariances change through time. In particular we provide confidence intervals for each of these quantities....

  14. Evaluation of 10 AMD Associated Polymorphisms as a Cause of Choroidal Neovascularization in Highly Myopic Eyes.

    Alvaro Velazquez-Villoria

    Choroidal neovascularization (CNV) commonly occurs in age related macular degeneration and pathological myopia patients. In this study we conducted a case-control prospective study including 431 participants. The aim of this study was to determine the potential association between 10 single nucleotide polymorphisms (SNPs) located in 4 different genetic regions (CFI, COL8A1, LIPC, and APOE) and choroidal neovascularization in age-related macular degeneration and the development of choroidal neovascularization in highly myopic eyes of a Caucasian population. Univariate and multivariate logistic regression analysis adjusted for age, sex and hypertension was performed for each allele, genotype and haplotype frequency analysis. We found in the univariate analysis that both single-nucleotide polymorphisms in the COL8A1 gene (rs13095226 and rs669676), together with age, sex and hypertension, were significantly associated with myopic CNV development in Spanish patients (p < 0.05) ... (p > 0.05); however, analysis of the axial length between genotypes of rs13095226 revealed an important influence of COL8A1 in the development of CNV in high myopia. Furthermore we conducted a meta-analysis of COL8A1, CFI and LIPC gene SNPs (rs669676, rs10033900 and rs10468017) and found that only rs669676 of these SNPs was associated with high myopia neovascularization.

  15. Technical Training on High-Order Spectral Analysis and Thermal Anemometry Applications

    Maslov, A. A.; Shiplyuk, A. N.; Sidirenko, A. A.; Bountin, D. A.

    2003-01-01

    The topics of thermal anemometry and high-order spectral analyses were the subject of the technical training. Specifically, the objective of the technical training was to study: (i) the recently introduced constant voltage anemometer (CVA) for high-speed boundary layer; and (ii) newly developed high-order spectral analysis techniques (HOSA). Both CVA and HOSA are relevant tools for studies of boundary layer transition and stability.

  16. Detection of somatic mutations by high-resolution DNA melting (HRM) analysis in multiple cancers.

    Gonzalez-Bosquet, Jesus; Calcei, Jacob; Wei, Jun S; Garcia-Closas, Montserrat; Sherman, Mark E; Hewitt, Stephen; Vockley, Joseph; Lissowska, Jolanta; Yang, Hannah P; Khan, Javed; Chanock, Stephen

    2011-01-17

    Identification of somatic mutations in cancer is a major goal for understanding and monitoring the events related to cancer initiation and progression. High resolution melting (HRM) curve analysis represents a fast, post-PCR high-throughput method for scanning somatic sequence alterations in target genes. The aim of this study was to assess the sensitivity and specificity of HRM analysis for tumor mutation screening in a range of tumor samples, which included 216 frozen pediatric small rounded blue-cell tumors as well as 180 paraffin-embedded tumors from breast, endometrial and ovarian cancers (60 of each). HRM analysis was performed in exons of the following candidate genes known to harbor established commonly observed mutations: PIK3CA, ERBB2, KRAS, TP53, EGFR, BRAF, GATA3, and FGFR3. Bi-directional sequencing analysis was used to determine the accuracy of the HRM analysis. For the 39 mutations observed in frozen samples, the sensitivity and specificity of HRM analysis were 97% and 87%, respectively. There were 67 mutation/variants in the paraffin-embedded samples, and the sensitivity and specificity for the HRM analysis were 88% and 80%, respectively. Paraffin-embedded samples require a higher quantity of purified DNA for high performance. In summary, HRM analysis is a promising moderate-throughput screening test for mutations among known candidate genomic regions. Although the overall accuracy appears to be better in frozen specimens, somatic alterations were detected in DNA extracted from paraffin-embedded samples.
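
A toy sketch of the core HRM signal processing on assumed data: the melting temperature is the peak of the negative derivative -dF/dT of the fluorescence curve, and a shifted or reshaped melt curve is what flags a sequence variant.

```python
# Locate Tm as the peak of -dF/dT for two synthetic melt curves.
import numpy as np

T = np.linspace(70.0, 95.0, 251)                    # temperature axis, deg C (assumed)
rng = np.random.default_rng(6)

def melt_curve(Tm, width=1.2):
    """Sigmoidal fluorescence-vs-temperature curve (toy model)."""
    return 1.0 / (1.0 + np.exp((T - Tm) / width))

for label, Tm in [("wild type", 84.0), ("variant", 83.2)]:
    F = melt_curve(Tm) + rng.normal(0.0, 0.002, T.size)
    dF = -np.gradient(F, T)                         # melt peak = max of -dF/dT
    print(label, "Tm ~", round(float(T[np.argmax(dF)]), 2), "deg C")
```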

  18. High-Resolution Melt Analysis for Rapid Comparison of Bacterial Community Compositions

    Hjelmsø, Mathis Hjort; Hansen, Lars Hestbjerg; Bælum, Jacob

    2014-01-01

    In the study of bacterial community composition, 16S rRNA gene amplicon sequencing is today among the preferred methods of analysis. The cost of nucleotide sequence analysis, including requisite computational and bioinformatic steps, however, takes up a large part of many research budgets. High-resolution melt (HRM) analysis is the study of the melt behavior of specific PCR products. Here we describe a novel high-throughput approach in which we used HRM analysis targeting the 16S rRNA gene to rapidly screen multiple complex samples for differences in bacterial community composition. We hypothesized that HRM analysis of amplified 16S rRNA genes from a soil ecosystem could be used as a screening tool to identify changes in bacterial community structure. This hypothesis was tested using a soil microcosm setup exposed to a total of six treatments representing different combinations of pesticide...

  19. Shielding analysis of high level waste water storage facilities using MCNP code

    Yabuta, Naohiro [Mitsubishi Research Inst., Inc., Tokyo (Japan)

    2001-01-01

    Neutron and gamma-ray transport analysis was made for a facility such as a reprocessing facility, with large buildings having thick shielding. Radiation shielding analysis consists of a deep transmission calculation for the concrete walls and a skyshine calculation for the space outside the buildings. An efficient analysis with a short running time and high accuracy needs a variance reduction technique suitable for all the calculation regions and structures. In this report, the shielding analysis using MCNP and a discrete ordinate transport code is explained, and the rationale and procedure for deciding the variance reduction parameters are described. (J.P.N.)

  20. Computer aided seismic and fire retrofitting analysis of existing high rise reinforced concrete buildings

    Hussain, Raja Rizwan; Hasan, Saeed

    2016-01-01

    This book details the analysis and design of high rise buildings for gravity and seismic loads. It provides the knowledge structural engineers need to retrofit existing structures in order to meet safety requirements and better prevent potential damage from such disasters as earthquakes and fires. Coverage includes actual case studies of existing buildings, reviews of current knowledge of damage and its mitigation, protective design technologies, and analytical and computational techniques. The monograph also provides an experimental investigation of the properties of fiber reinforced concrete containing natural fibres such as coconut coir, as well as steel fibres used for comparison, in both Normal Strength Concrete (NSC) and High Strength Concrete (HSC). In addition, the authors examine the use of various repair techniques for damaged high rise buildings. The book will help upcoming structural design engineers learn the computer aided analysis and design of real existing high rise buildings ...

  1. A Delay Time Measurement of ULTRAS (Ultra-high Temperature Ultrasonic Response Analysis System) for a High Temperature Experiment

    Koo, Kil Mo; Kim, Sang Baik

    2010-01-01

    The temperature measurement of very high temperature core melt is of importance in molten pool experiments, in which the gap formation between the core melt and the reactor lower head, and the effect of the gap on thermal behavior, are to be measured. Existing temperature measurement techniques have some problems: the thermocouple, one of the contact methods, is restricted to below 2000 °C, and infrared thermometry, one of the non-contact methods, cannot measure an internal temperature and is very sensitive to interference from reacted gases. To solve these problems, the delay-time technique for ultrasonic wavelets at high temperature was developed in two stages. As a first stage, a delay time measurement of ULTRAS (Ultra-high Temperature Ultrasonic Response Analysis System) is suggested. As a second stage, a molten material temperature was measured up to 2300 °C. The optimization design of the UTS (ultrasonic temperature sensor) with persistence at high temperature is also suggested in this paper. The utilization of the theory suggested in this paper and the efficiency of the developed system are demonstrated with special equipment and experiments supported by KRISS (Korea Research Institute of Standards and Science)
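
A hedged sketch of the inversion step implied above: if the ultrasonic delay time grows monotonically with temperature, a measured delay can be mapped back to temperature through a calibration table. All numbers below are assumed placeholders, not values from the paper.

```python
# Invert a monotonic delay-vs-temperature calibration with interpolation.
import numpy as np

cal_T = np.array([500.0, 1000.0, 1500.0, 2000.0, 2300.0])   # deg C (assumed)
cal_dt = np.array([10.2, 10.9, 11.8, 13.0, 13.9])           # delay, us (assumed)

measured_dt = 12.4                                          # us, hypothetical reading
T_est = np.interp(measured_dt, cal_dt, cal_T)               # cal_dt must be increasing
print(f"estimated melt temperature ~ {T_est:.0f} deg C")
```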

  2. Maintaining high precision of isotope ratio analysis over extended periods of time.

    Brand, Willi A

    2009-06-01

    Stable isotope ratios are reliable and long lasting process tracers. In order to compare data from different locations or different sampling times at a high level of precision, a measurement strategy must include reliable traceability to an international stable isotope scale via a reference material (RM). Since these international RMs are available in low quantities only, we have developed our own analysis schemes involving laboratory working RMs. In addition, quality assurance RMs are used to control the long-term performance of the delta-value assignments. The analysis schemes allow the construction of quality assurance performance charts over years of operation. In this contribution, the performance of three typical techniques established in IsoLab at the MPI-BGC in Jena is discussed. The techniques are (1) isotope ratio mass spectrometry with an elemental analyser for δ15N and δ13C analysis of bulk (organic) material, (2) high precision δ13C and δ18O analysis of CO2 in clean-air samples, and (3) stable isotope analysis of water samples using a high-temperature reaction with carbon. In addition, reference strategies on a laser ablation system for high spatial resolution δ13C analysis in tree rings are exemplified briefly.
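
A minimal sketch of the performance-chart idea, with assumed numbers: each run of a quality-assurance RM is compared against control limits derived from its assigned delta value and the method's long-term standard deviation.

```python
# Flag quality-assurance runs that fall outside a 3-sigma control limit.
import numpy as np

assigned, sigma = -25.10, 0.08            # per-mil delta value and 1-sd (assumed)
rng = np.random.default_rng(7)
runs = assigned + rng.normal(0.0, sigma, 60)
runs[42] += 0.35                          # inject one out-of-control run for illustration

z = (runs - assigned) / sigma
for i in np.flatnonzero(np.abs(z) > 3):
    print(f"run {i}: delta = {runs[i]:.2f} per mil exceeds the 3-sigma limit")
```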

  3. Seismic analysis of high-rise buildings with composite metal dampers

    Chen Ruixue

    2015-01-01

    This paper studies the mechanical characteristics and application effect of composite metal dampers in high-rise buildings via numerical simulation analysis. The research adopts elastic and elastic-plastic dynamic approaches, and quantities such as the displacement time history response and the damper energy dissipation capacity of the high-rise building are compared and analyzed before and after installation. The analysis found that the energy dissipation characteristic of metallic dampers is good: the story drift of the high-rise building is significantly reduced and the extent of damage to the walls and coupling beams is decreased, achieving a good energy dissipation effect. Composite metal dampers can effectively and economically improve the seismic performance of high-rise buildings and meet the requirements of the three-level design for seismic resistance. The result has certain reference significance for the application of metallic dampers in high-rise buildings.
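
A minimal sketch of the story-drift comparison described above, with assumed displacement values: the interstory drift ratio is the difference of adjacent floor displacements divided by the storey height, the quantity the dampers are shown to reduce.

```python
# Peak interstory drift ratio with and without dampers (assumed data).
import numpy as np

h = 3.0                                          # storey height, m (assumed)
u = np.array([[0.000, 0.012, 0.021, 0.027],      # floor displacements, m, no dampers
              [0.000, 0.009, 0.015, 0.019]])     # with dampers (both assumed)

drift = np.diff(u, axis=1) / h                   # interstory drift ratio per storey
print("peak drift ratio without dampers:", drift[0].max())
print("peak drift ratio with dampers:   ", drift[1].max())
```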

  4. Analysis of Workforce Skills in High School Graduates: Self Report of High School Seniors in Northwest Ohio

    Jason A. Hedrick

    2015-03-01

    Analysis of workforce competencies at the conclusion of high school is discussed in this paper. Researchers sampled over 875 graduating seniors from 16 high schools within six counties throughout Northwestern Ohio. Results highlight the future career and educational goals of these young people and a self-report of skills based on the SCANS competencies and basic foundation skills. When evaluating the Foundation Skills of Personal Qualities, Basic Skills, and Thinking Skills, students indicated the highest ratings in Personal Qualities and the overall lowest ratings in Basic Skills. A series of five Workforce Competencies were also evaluated, including Using Resources, Using Information, Using Technology, Interpersonal Skills, and Working in Systems. The highest ratings for Competencies were reported in Interpersonal Skills and the lowest in Using Resources.

  5. Trace analysis measurements in high-purity aluminium by means of radiochemical neutron and proton activation analysis

    Egger, K.P.

    1987-01-01

    The aim of the study was to develop efficient radiochemical composite processes and activation methods for the multi-element determination of traces within the lower ng range in high-purity aluminium. More than 50 elements were determined with the help of activation with reactor neutrons; the selective separation of the matrix activity (adsorption with hydrated antimony pentoxide) led to a noticeable improvement of detectability, as compared with instrumental neutron activation analysis. Further improvements were achieved with the help of radiochemical group separations in ion exchangers or with the help of the selective separation of the pure beta-emitting elements. Over 20 elements up to high atomic numbers were determined by means of activation with 13 MeV and 23 MeV protons. In this connection, improvements of the detection limit by a factor of 10 were achieved with radiochemical separation techniques, as compared with purely instrumental proton activation analysis. (RB)

  6. Detailed Analysis of Torque Ripple in High Frequency Signal Injection based Sensor less PMSM Drives

    Ravikumar Setty A.

    2017-01-01

    High frequency signal injection based techniques are robust and well proven for estimating the rotor position from standstill to low speed. However, the injected high frequency signal introduces high frequency harmonics in the motor phase currents and results in significant output torque ripple. No detailed analysis exists in the literature of the effect of the injected signal frequency on torque ripple. The objective of this work is to study the torque ripple resulting from high frequency signal injection in PMSM motor drives. Detailed MATLAB/Simulink simulations are carried out to quantify the torque ripple at different signal frequencies.
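
A sketch with synthetic waveforms of how such a quantification might look: a ripple factor (T_max - T_min) / T_mean computed for several injection frequencies. The assumed inverse-frequency ripple amplitude and all numbers are placeholders, not the paper's model or results.

```python
# Torque ripple factor vs injection frequency on synthetic torque waveforms.
import numpy as np

t = np.linspace(0.0, 0.1, 10000)                    # time, s
T_mean = 5.0                                        # N*m, assumed load torque
for f_inj in (500.0, 1000.0, 2000.0):               # injection frequencies, Hz (assumed)
    ripple_amp = 0.4 * 1000.0 / f_inj               # assumed: amplitude falls with frequency
    torque = T_mean + ripple_amp * np.sin(2 * np.pi * f_inj * t)
    ripple = (torque.max() - torque.min()) / torque.mean()
    print(f"{f_inj:6.0f} Hz -> ripple factor {ripple:.3f}")
```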

  7. High Performance Liquid Chromatography-mass Spectrometry Analysis of High Antioxidant Australian Fruits with Antiproliferative Activity Against Cancer Cells.

    Sirdaarta, Joseph; Maen, Anton; Rayan, Paran; Matthews, Ben; Cock, Ian Edwin

    2016-05-01

    A total of 145 unique mass signals were detected in the lemon aspen methanolic and aqueous extracts by nonbiased high-performance liquid chromatography-mass spectrometry analysis. Of these, 20 compounds were identified as being of particular interest due to their reported antioxidant and/or anticancer activities. The lack of toxicity and the antiproliferative activity of the high antioxidant plant extracts against HeLa and CaCo2 cancer cell lines indicate their potential in the treatment and prevention of some cancers. Australian fruit extracts with high antioxidant contents were potent inhibitors of CaCo2 and HeLa carcinoma cell proliferation. Methanolic lemon aspen extract was particularly potent, with IC50 values of 480 μg/mL (HeLa) and 769 μg/mL (CaCo2). High-performance liquid chromatography-mass spectrometry-quadrupole time-of-flight analysis highlighted and putatively identified 20 compounds in the antiproliferative lemon aspen extracts. In contrast, lower antioxidant content extracts stimulated carcinoma cell proliferation. All extracts with antiproliferative activity were nontoxic in the Artemia nauplii assay. Abbreviations used: DPPH: di (phenyl)- (2,4,6-trinitrophenyl) iminoazanium; HPLC: high-performance liquid chromatography; IC50: the concentration required to inhibit by 50%; LC50: the concentration required to achieve 50% mortality; MS: mass spectrometry.

  8. High Performance Liquid Chromatography-mass Spectrometry Analysis of High Antioxidant Australian Fruits with Antiproliferative Activity Against Cancer Cells

    Sirdaarta, Joseph; Maen, Anton; Rayan, Paran; Matthews, Ben; Cock, Ian Edwin

    2016-01-01

    g/mL). All other extracts were nontoxic. A total of 145 unique mass signals were detected in the lemon aspen methanolic and aqueous extracts by nonbiased high-performance liquid chromatography-mass spectrometry analysis. Of these, 20 compounds were identified as being of particular interest due to their reported antioxidant and/or anticancer activities. Conclusions: The lack of toxicity and the antiproliferative activity of the high antioxidant plant extracts against HeLa and CaCo2 cancer cell lines indicate their potential in the treatment and prevention of some cancers. SUMMARY: Australian fruit extracts with high antioxidant contents were potent inhibitors of CaCo2 and HeLa carcinoma cell proliferation. Methanolic lemon aspen extract was particularly potent, with IC50 values of 480 μg/mL (HeLa) and 769 μg/mL (CaCo2). High-performance liquid chromatography-mass spectrometry-quadrupole time-of-flight analysis highlighted and putatively identified 20 compounds in the antiproliferative lemon aspen extracts. In contrast, lower antioxidant content extracts stimulated carcinoma cell proliferation. All extracts with antiproliferative activity were nontoxic in the Artemia nauplii assay. Abbreviations used: DPPH: di (phenyl)- (2,4,6-trinitrophenyl) iminoazanium; HPLC: high-performance liquid chromatography; IC50: the concentration required to inhibit by 50%; LC50: the concentration required to achieve 50% mortality; MS: mass spectrometry. PMID:27279705

  9. Content analysis to detect high stress in oral interviews and text documents

    Thirumalainambi, Rajkumar (Inventor); Jorgensen, Charles C. (Inventor)

    2012-01-01

    A system of interrogation that estimates whether a subject is likely experiencing high stress, emotional volatility, and/or internal conflict in the subject's responses to an interviewer's questions. The system applies one or more of four procedures (a first statistical analysis, a second statistical analysis, a third analysis, and a heat map analysis) to identify one or more documents containing the subject's responses for which further examination is recommended. Words in the documents are characterized in terms of dimensions representing different classes of emotions and states of mind, and the subject's responses that manifest high stress, emotional volatility, and/or internal conflict are identified. A heat map visually displays the dimensions manifested by the subject's responses in different colors, textures, geometric shapes, or other visually distinguishable indicia.
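
    The flow of such a system can be sketched in a few lines; the mini-lexicon, dimension names, and flagging rule below are invented for illustration and are not taken from the patent.

```python
from collections import Counter

# Hypothetical mini-lexicon: word -> emotional dimension (illustrative only)
LEXICON = {"afraid": "fear", "worried": "fear", "angry": "anger",
           "furious": "anger", "never": "denial", "didn't": "denial"}

def dimension_scores(document: str) -> Counter:
    """Count how often each emotional dimension is manifested in a document."""
    words = document.lower().split()
    return Counter(LEXICON[w] for w in words if w in LEXICON)

def flag_for_review(docs, threshold=2):
    """Recommend documents whose total dimension score meets a threshold."""
    return [d for d in docs if sum(dimension_scores(d).values()) >= threshold]

answers = ["I never said that, I didn't do it", "We met on Tuesday"]
print(flag_for_review(answers))   # only the first answer is flagged
```

    The per-document Counter is exactly the kind of dimension-by-document matrix a heat map would display.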

  10. Experience with conventional inelastic analysis procedures in very high temperature applications

    Mallett, R.H.; Thompson, J.M.; Swindeman, R.W.

    1991-01-01

    Conventional incremental plasticity and creep analysis procedures for inelastic analysis are applied to hot flue gas cleanup system components. These flue gas systems operate at temperatures where plasticity and creep are very much intertwined while the two phenomena are treated separately in the conventional inelastic analysis procedure. Data for RA333 material are represented in forms appropriate for the conventional inelastic analysis procedures. Behavior is predicted for typical operating cycles. Creep-fatigue damage is estimated based upon usage fractions. Excessive creep damage is predicted; the major contributions occur during high stress short term intervals caused by rapid temperature changes. In this paper these results are presented for discussion of the results and their interpretation in terms of creep-fatigue damage for very high temperature applications
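
    The usage-fraction estimate referred to above is commonly written as a linear summation of cycle and time fractions (the standard textbook/code form, not necessarily the exact rule used in this paper):

```latex
D \;=\; \sum_{j} \frac{n_j}{N_{f,j}} \;+\; \sum_{k} \frac{t_k}{t_{r,k}} \;\le\; D_{\text{allow}}
```

    where n_j is the number of cycles applied at condition j, N_{f,j} the cycles to fatigue failure at that condition, t_k the hold time at stress/temperature condition k, t_{r,k} the corresponding creep-rupture time, and D_allow the allowable combined damage. The high-stress, short-term intervals cited in the abstract enter through small t_{r,k} values that dominate the creep sum.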

  11. High-Temperature Structural Analysis Model of the Process Heat Exchanger for Helium Gas Loop (II)

    Song, Kee Nam; Lee, Heong Yeon; Kim, Chan Soo; Hong, Seong Duk; Park, Hong Yoon

    2010-01-01

    PHE (Process Heat Exchanger) is a key component required to transfer the heat energy of 950 °C generated in a VHTR (Very High Temperature Reactor) to the chemical reaction that yields a large quantity of hydrogen. The Korea Atomic Energy Research Institute established a helium gas loop for the performance testing of components used in the VHTR, and manufactured a PHE prototype to be tested in the loop. In this study, as part of the high-temperature structural-integrity evaluation of the PHE prototype, which is scheduled to be tested in the helium gas loop, we carried out high-temperature structural-analysis modeling, thermal analysis, and thermal expansion analysis of the PHE prototype. The results obtained in this study will be used to design the performance test setup for the PHE prototype

  12. High-speed Vibrational Imaging and Spectral Analysis of Lipid Bodies by Compound Raman Microscopy

    Slipchenko, Mikhail N.; Le, Thuc T.; Chen, Hongtao; Cheng, Ji-Xin

    2009-01-01

    Cells store excess energy in the form of cytoplasmic lipid droplets. At present, it is unclear how different types of fatty acids contribute to the formation of lipid-droplets. We describe a compound Raman microscope capable of both high-speed chemical imaging and quantitative spectral analysis on the same platform. We use a picosecond laser source to perform coherent Raman scattering imaging of a biological sample and confocal Raman spectral analysis at points of interest. The potential of t...

  13. High precision analysis of trace lithium isotope by thermal ionization mass spectrometry

    Tang Lei; Liu Xuemei; Long Kaiming; Liu Zhao; Yang Tianli

    2010-01-01

    A high precision method for the analysis of nanogram quantities of lithium by thermal ionization mass spectrometry is developed. Through double-filament measurement, a phosphine acid ion enhancer, and a sample pre-baking technique, the precision of trace lithium analysis is improved. For a 100 ng lithium isotope standard sample, the relative standard deviation is better than 0.086%; for a 10 ng lithium isotope standard sample, the relative standard deviation is better than 0.90%. (authors)

  14. Two- and three-dimensional heat analysis inside a high pressure electrical discharge tube

    Aghanajafi, C.; Dehghani, A. R.; Fallah Abbasi, M.

    2005-01-01

    This article presents the heat transfer analysis for a horizontal high pressure mercury vapor tube. To obtain a more realistic numerical simulation, heat radiation in different wavelength bands has been included in addition to convection and conduction heat transfer. The analysis for different gases at different pressures in two- and three-dimensional cases has been investigated, and the results are compared with empirical and semi-empirical values. The effect of the environmental temperature on the arc tube temperature is also studied

  15. The high energy behavior of the forward scattering parameters - an amplitude analysis update

    Block, M.M.; Margolis, B.; White, A.R.

    1995-01-01

    Utilizing the most recent experimental data, we reanalyze high energy pp and p̄p data using the asymptotic amplitude analysis, under the assumption that we have reached "asymptopia". This analysis gives strong evidence for a log(s/s₀) dependence at current energies, and not log²(s/s₀), and also demonstrates that odderons are not necessary to explain the experimental data

  16. The element analysis of high purity beryllium by method of laser mass-spectrometry

    Virich, V.D.; Kisel', O.V.; Kovtun, K.V.; Pugachev, N.S.; Yakobson, L.A.

    2003-01-01

    This work examines the feasibility of analyzing the elemental composition of pure and high purity beryllium samples by laser mass spectrometry. The advantages of the method for detecting small amounts of impurities, in comparison with other modes of analysis, are demonstrated. The possibility of quantitative determination of the gas-forming impurities C, N, and O in beryllium samples is surveyed

  17. Toxic Compounds Analysis With High Performance Liquid Chromatography Detected By Electro Chemical Detector (Ecd)

    Hideharu Shintani

    2014-01-01

    The principal area of application of high performance liquid chromatography with electrochemical detection (HPLC-ECD) has been the analysis of naturally occurring analytes, such as catecholamines, and of pharmaceuticals in biological samples. HPLC-ECD has also been applied to the analysis of pesticides and other analytes of interest to the toxicologist. This paper describes the toxicological applications, including amatoxins, aromatic amines, nitro-compounds, algal toxins, fungal toxins, pesticides, veterinary drug ...

  18. Computational and statistical methods for high-throughput analysis of post-translational modifications of proteins

    Schwämmle, Veit; Braga, Thiago Verano; Roepstorff, Peter

    2015-01-01

    The investigation of post-translational modifications (PTMs) represents one of the main research focuses for the study of protein function and cell signaling. Mass spectrometry instrumentation with increasing sensitivity, improved protocols for PTM enrichment, and recently established pipelines for high-throughput experiments allow large-scale identification and quantification of several PTM types. This review addresses the concurrently emerging challenges for the computational analysis of the resulting data and presents PTM-centered approaches for spectra identification, statistical analysis...

  19. Cost-Effectiveness Analysis of Test-Based versus Presumptive Treatment of Uncomplicated Malaria in Children under Five Years in an Area of High Transmission in Central Ghana

    Tawiah, Theresa; Hansen, Kristian Schultz; Baiden, Frank

    2016-01-01

    Presumptive treatment of malaria involves prescribing artemisinin-based combination therapy (ACT) in all suspected malaria patients. The use of malaria rapid diagnostic tests (mRDTs) would make it possible for prescribers to diagnose malaria at point-of-care and better target the use of antimalarials. Therefore, a cost-effectiveness analysis was performed on the introduction of mRDTs. Testing by mRDT (intervention) or clinical judgement (control) was used to measure the effect of mRDTs on appropriate treatment: 'a child with a positive reference diagnosis prescribed a course of ACT or a child with a negative reference diagnosis not given an ACT'. Cost data was collected from five purposively selected health centres, and households were asked about costs incurred on transport, drugs, fees, and special food during a period of one week after the health centre visit, as well as days unable to work. A decision model approach was used to calculate the incremental cost-effectiveness ratios (ICERs). Univariate and multivariate sensitivity analyses were performed.
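
    For reference, the ICER in such a decision model is the cost difference divided by the effect difference between intervention and control; a toy calculation follows (the numbers are invented for illustration, not results from this trial).

```python
def icer(cost_int, eff_int, cost_ctl, eff_ctl):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of
    effect (here, per additional appropriately treated child)."""
    return (cost_int - cost_ctl) / (eff_int - eff_ctl)

# Hypothetical: mRDT arm costs $5.60 per child with 65% appropriately treated;
# clinical-judgement arm costs $4.20 per child with 52% appropriately treated.
print(round(icer(5.60, 0.65, 4.20, 0.52), 2))  # dollars per appropriately treated child
```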

  20. Economic analysis of multiple-module high temperature gas-cooled reactor (MHTR) nuclear power plants

    Liu Yu; Dong Yujie

    2011-01-01

    In recent years, with the increasing demand for energy all over the world and the pressure to reduce greenhouse gas emissions, there is a new opportunity for the development of nuclear energy. The Modular High Temperature Gas-cooled Reactor (MHTR) has received recognition for its inherent safety features and high outlet temperature. Whether the MHTR will be accepted extensively depends largely on its economics. In this paper, methods of qualitative and quantitative analysis, the economic models designed by the Economic Modeling Working Group (EMWG) of the Generation IV International Forum (GIF), and the HTR-PM's main technical features are used to analyze the economics of the MHTR. A prediction is made on the basis of summarizing the MHTR module characteristics, construction cost, total capital cost, fuel cost, and operation and maintenance (O&M) cost. A comparative analysis is then made of the economics and cost structure of different designs, to explore the impacts of modularization and standardization on the construction of multiple-module nuclear power plants. The analysis also addresses key factors such as the learning effect and yield, to find out their impacts on the large-scale development of the MHTR. These analyses provide a reference for its wide application. (author)

  1. Progress in element analysis on a high-voltage electron microscope

    Tivol, W.F.; Barnard, D.; Guha, T.

    1985-01-01

    X-Ray microprobe (XMA) and electron energy-loss (EELS) spectrometers have been installed on the high-voltage electron microscope (HVEM). The probe size has been measured and background reduction is in progress for XMA and EELS as are improvements in electron optics for EELS and sensitivity measurements. XMA is currently useful for qualitative analysis and has been used by several investigators from our laboratory and outside laboratories. However, EELS background levels are still too high for meaningful results to be obtained. Standards suitable for biological specimens are being measured, and a library for quantitative analysis is being compiled

  2. Source-driven noise analysis measurements with neptunium metal reflected by high enriched uranium

    Valentine, Timothy E.; Mattingly, John K.

    2003-01-01

    Subcritical noise analysis measurements have been performed with a neptunium (237Np) sphere reflected by highly enriched uranium. These measurements were performed at the Los Alamos Critical Experiments Facility in December 2002 to provide an estimate of the subcriticality of 237Np reflected by various amounts of highly enriched uranium. This paper provides a description of the measurements and presents some preliminary results of their analysis. The measured and calculated spectral ratios differ by 15%, whereas the 'interpreted' and calculated k_eff values differ by approximately 1%. (author)

  3. High-speed image analysis reveals chaotic vibratory behaviors of pathological vocal folds

    Zhang Yu, E-mail: yuzhang@xmu.edu.c [Key Laboratory of Underwater Acoustic Communication and Marine Information Technology of the Ministry of Education, Xiamen University, Xiamen Fujian 361005 (China); Shao Jun [Shanghai EENT Hospital of Fudan University, Shanghai (China); Krausert, Christopher R. [Department of Surgery, Division of Otolaryngology - Head and Neck Surgery, University of Wisconsin School of Medicine and Public Health, Madison, WI 53792-7375 (United States); Zhang Sai [Key Laboratory of Underwater Acoustic Communication and Marine Information Technology of the Ministry of Education, Xiamen University, Xiamen Fujian 361005 (China); Jiang, Jack J. [Shanghai EENT Hospital of Fudan University, Shanghai (China); Department of Surgery, Division of Otolaryngology - Head and Neck Surgery, University of Wisconsin School of Medicine and Public Health, Madison, WI 53792-7375 (United States)

    2011-01-15

    Research highlights: Low-dimensional human glottal area data. Evidence of chaos in human laryngeal activity from high-speed digital imaging. Traditional perturbation analysis should be cautiously applied to aperiodic high speed image signals. Nonlinear dynamic analysis may be helpful for understanding disordered behaviors in pathological laryngeal systems. - Abstract: Laryngeal pathology is usually associated with irregular dynamics of laryngeal activity. High-speed imaging facilitates direct observation and measurement of vocal fold vibrations. However, chaotic dynamic characteristics of aperiodic high-speed image data have not yet been investigated in previous studies. In this paper, we will apply nonlinear dynamic analysis and traditional perturbation methods to quantify high-speed image data from normal subjects and patients with various laryngeal pathologies including vocal fold nodules, polyps, bleeding, and polypoid degeneration. The results reveal the low-dimensional dynamic characteristics of human glottal area data. In comparison to periodic glottal area series from a normal subject, aperiodic glottal area series from pathological subjects show complex reconstructed phase space, fractal dimension, and positive Lyapunov exponents. The estimated positive Lyapunov exponents provide the direct evidence of chaos in pathological human vocal folds from high-speed digital imaging. Furthermore, significant differences between the normal and pathological groups are investigated for nonlinear dynamic and perturbation analyses. Jitter in the pathological group is significantly higher than in the normal group, but shimmer does not show such a difference. This finding suggests that the traditional perturbation analysis should be cautiously applied to high speed image signals. However, the correlation dimension and the maximal Lyapunov exponent reveal a statistically significant difference between normal and pathological groups. Nonlinear dynamic analysis is capable of

  4. High-speed image analysis reveals chaotic vibratory behaviors of pathological vocal folds

    Zhang Yu; Shao Jun; Krausert, Christopher R.; Zhang Sai; Jiang, Jack J.

    2011-01-01

    Research highlights: → Low-dimensional human glottal area data. → Evidence of chaos in human laryngeal activity from high-speed digital imaging. → Traditional perturbation analysis should be cautiously applied to aperiodic high speed image signals. → Nonlinear dynamic analysis may be helpful for understanding disordered behaviors in pathological laryngeal systems. - Abstract: Laryngeal pathology is usually associated with irregular dynamics of laryngeal activity. High-speed imaging facilitates direct observation and measurement of vocal fold vibrations. However, chaotic dynamic characteristics of aperiodic high-speed image data have not yet been investigated in previous studies. In this paper, we will apply nonlinear dynamic analysis and traditional perturbation methods to quantify high-speed image data from normal subjects and patients with various laryngeal pathologies including vocal fold nodules, polyps, bleeding, and polypoid degeneration. The results reveal the low-dimensional dynamic characteristics of human glottal area data. In comparison to periodic glottal area series from a normal subject, aperiodic glottal area series from pathological subjects show complex reconstructed phase space, fractal dimension, and positive Lyapunov exponents. The estimated positive Lyapunov exponents provide the direct evidence of chaos in pathological human vocal folds from high-speed digital imaging. Furthermore, significant differences between the normal and pathological groups are investigated for nonlinear dynamic and perturbation analyses. Jitter in the pathological group is significantly higher than in the normal group, but shimmer does not show such a difference. This finding suggests that the traditional perturbation analysis should be cautiously applied to high speed image signals. However, the correlation dimension and the maximal Lyapunov exponent reveal a statistically significant difference between normal and pathological groups. Nonlinear dynamic

  5. Analysis of gas turbine engines using water and oxygen injection to achieve high Mach numbers and high thrust

    Henneberry, Hugh M.; Snyder, Christopher A.

    1993-01-01

    An analysis of gas turbine engines using water and oxygen injection to enhance performance by increasing Mach number capability and by increasing thrust is described. The liquids are injected, either separately or together, into the subsonic diffuser ahead of the engine compressor. A turbojet engine and a mixed-flow turbofan engine (MFTF) are examined, and in pursuit of maximum thrust, both engines are fitted with afterburners. The results indicate that water injection alone can extend the performance envelope of both engine types by one and one-half Mach numbers at which point water-air ratios reach 17 or 18 percent and liquid specific impulse is reduced to some 390 to 470 seconds, a level about equal to the impulse of a high energy rocket engine. The envelope can be further extended, but only with increasing sacrifices in liquid specific impulse. Oxygen-airflow ratios as high as 15 percent were investigated for increasing thrust. Using 15 percent oxygen in combination with water injection at high supersonic Mach numbers resulted in thrust augmentation as high as 76 percent without any significant decrease in liquid specific impulse. The stoichiometric afterburner exit temperature increased with increasing oxygen flow, reaching 4822 deg R in the turbojet engine at a Mach number of 3.5. At the transonic Mach number of 0.95 where no water injection is needed, an oxygen-air ratio of 15 percent increased thrust by some 55 percent in both engines, along with a decrease in liquid specific impulse of 62 percent. Afterburner temperature was approximately 4700 deg R at this high thrust condition. Water and/or oxygen injection are simple and straightforward strategies to improve engine performance and they will add little to engine weight. However, if large Mach number and thrust increases are required, liquid flows become significant, so that operation at these conditions will necessarily be of short duration.

  6. Development of NONSTA code for the design and analysis of LMR high temperature structure

    Kim, Jong Bum; Lee, H. Y.; Yoo, B.

    1999-02-01

    Liquid metal reactors (LMRs) operate at high temperature (500-550 °C), and structural materials undergo complex deformation behavior such as diffusion, dislocation glide, and dislocation climb in this high temperature environment. Material life is rapidly reduced by the interaction between cavities created inside structural materials and high temperature fatigue cracks. The establishment of high temperature structural analysis techniques is therefore necessary for the reliability and safety evaluation of such structures. The objectives of this study are to develop the NONSTA code as a subprogram of the ABAQUS code, adopting constitutive equations that can precisely predict high temperature material behavior, and to build systematic analysis procedures. The developed program was applied to example problems such as a tensile analysis using an exponential creep model and a repeated tension-compression analysis using the Chaboche unified viscoplastic model. In addition, the problem of a plate with a center hole subjected to tensile load was solved to show the applicability of the program to multiaxial problems, and the time dependent stress redistribution was observed. (Author). 40 refs., 2 tabs., 24 figs
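
    As a point of reference for the constitutive models mentioned, a typical power-law creep rate with exponential temperature dependence (the generic textbook form; NONSTA's exact equations are not given here) is:

```latex
\dot{\varepsilon}_{c} \;=\; A\,\sigma^{n}\,\exp\!\left(-\frac{Q}{RT}\right)
```

    where A and n are material constants, σ the applied stress, Q the activation energy for creep, R the gas constant, and T the absolute temperature; at 500-550 °C the exponential term makes the creep rate, and hence the stress redistribution, strongly temperature sensitive.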

  7. Analysis of traces at ORNL's new high-flux neutron activation laboratory

    Ricci, E.; Handley, T.H.; Dyer, F.F.

    1974-01-01

    The investigations carried out to develop (preferably instrumental) methods for the multielement analysis of various trace elements are outlined. For this purpose, a new High-Flux NAA Laboratory was constructed at ORNL. A general review of the Laboratory is given, and some methods and applications are shown. In the field of comparator activation analysis, comparative data are given on mercury determinations in various matrices and on arsenic determination in grasshoppers. The latter method was used to trace the transport of arsenic-containing pesticides. Some data are also given on the absolute activation analysis of Na, Cl, Mn, Br, and Au. (K.A.)

  8. SAMPO 90 high resolution interactive gamma-spectrum analysis including automation with macros

    Aarnio, P.A.; Nikkinen, M.T.; Routti, J.T.

    1992-01-01

    SAMPO 90 is a high performance gamma-spectrum analysis program for personal computers. It uses color graphics to display calibrations, spectra, fitting results as multiplet components, and analysis results. All the analysis phases can be performed either under full interactive user control or, with macros and programmable function keys, as completely automated measurement and analysis sequences, including the control of MCAs and sample changers. Accurate peak area determination of even the most complex multiplets, of up to 32 components, is accomplished using linear and mixed mode fitting. Nuclide identification is done using associated-lines techniques, allowing interference correction for fully overlapping peaks. Peaked background subtraction can be performed and minimum detectable activities calculated. The analysis reports and program parameters are fully customizable. (author) 13 refs.; 1 fig

  9. Genotyping of Listeria monocytogenes isolates from poultry carcasses using high resolution melting (HRM) analysis.

    Sakaridis, Ioannis; Ganopoulos, Ioannis; Madesis, Panagiotis; Tsaftaris, Athanasios; Argiriou, Anagnostis

    2014-01-02

    An outbreak of human listeriosis requires a fast and accurate protocol for typing Listeria monocytogenes. Existing techniques are either characterized by low discriminatory power or are laborious and require several days to give a final result. Polymerase chain reaction (PCR) coupled with high resolution melting (HRM) analysis was investigated in this study as an alternative tool for rapid and precise genotyping of L. monocytogenes isolates. Fifty-five L. monocytogenes isolates from poultry carcasses and the environment of four slaughterhouses were typed by HRM analysis using two specific markers, the internalin B and ssrA genes. The analysis of the genotype confidence percentages of L. monocytogenes isolates produced by HRM analysis generated dendrograms with two major groups and several subgroups. Furthermore, the analysis of the HRM curves revealed that all L. monocytogenes isolates could easily be distinguished. In conclusion, HRM was proven to be a fast and powerful tool for genotyping isolates of L. monocytogenes.

  10. High resolution gamma-ray spectroscopy applied to bulk sample analysis

    Kosanke, K.L.; Koch, C.D.; Wilson, R.D.

    1980-01-01

    A high resolution Ge(Li) gamma-ray spectrometer has been installed and made operational for use in routine bulk sample analysis by the Bendix Field Engineering Corporation (BFEC) geochemical analysis department. The Ge(Li) spectrometer provides bulk sample analyses for potassium, uranium, and thorium that are superior to those obtained by the BFEC sodium iodide spectrometer. The near term analysis scheme permits a direct assay for uranium that corrects for bulk sample self-absorption effects and is independent of the uranium/radium disequilibrium condition of the sample. A more complete analysis scheme has been developed that fully utilizes the gamma-ray data provided by the Ge(Li) spectrometer and that more properly accounts for the sample self-absorption effect. This new analysis scheme should be implemented on the BFEC Ge(Li) spectrometer at the earliest date

  11. Spatio-spectral analysis of ionization times in high-harmonic generation

    Soifer, Hadas, E-mail: hadas.soifer@weizmann.ac.il [Department of Physics of Complex Systems, Weizmann Institute of Science, Rehovot 76100 (Israel); Dagan, Michal; Shafir, Dror; Bruner, Barry D. [Department of Physics of Complex Systems, Weizmann Institute of Science, Rehovot 76100 (Israel); Ivanov, Misha Yu. [Department of Physics, Imperial College London, South Kensington Campus, SW7 2AZ London (United Kingdom); Max-Born Institute for Nonlinear Optics and Short Pulse Spectroscopy, Max-Born-Strasse 2A, D-12489 Berlin (Germany); Serbinenko, Valeria; Barth, Ingo; Smirnova, Olga [Max-Born Institute for Nonlinear Optics and Short Pulse Spectroscopy, Max-Born-Strasse 2A, D-12489 Berlin (Germany); Dudovich, Nirit [Department of Physics of Complex Systems, Weizmann Institute of Science, Rehovot 76100 (Israel)

    2013-03-12

    Graphical abstract: A spatio-spectral analysis of the two-color oscillation phase allows us to accurately separate short and long trajectories and reconstruct their ionization times. Highlights: We perform a complete spatio-spectral analysis of the high harmonic generation process. We analyze the ionization times across the entire spatio-spectral plane of the harmonics. We apply this analysis to reconstruct the ionization times of both short and long trajectories. - Abstract: Recollision experiments have been very successful in resolving attosecond scale dynamics. However, such schemes rely on the single atom response, neglecting the macroscopic properties of the interaction and the effects of using multi-cycle laser fields. In this paper we perform a complete spatio-spectral analysis of the high harmonic generation process and resolve the distribution of the subcycle dynamics of the recolliding electron. Specifically, we focus on the measurement of ionization times. Recently, we demonstrated that the addition of a weak, cross-polarized second harmonic field allows us to resolve the moment of ionization (Shafir, 2012) [1]. In this paper we extend this measurement and perform a complete spatio-spectral analysis. We apply this analysis to reconstruct the ionization times of both short and long trajectories, showing good agreement with the quantum path analysis.

  12. Conference Analysis Report of Assessments on Defect and Damage for a High Temperature Structure

    Lee, Hyeong Yeon

    2008-11-01

    This report presents an analysis of state-of-the-art research trends in creep-fatigue damage, defect assessment of high temperature structures, and the development of heat resistant materials and their behavior at high temperature, based on the papers presented at two international conferences: ASME PVP 2008, held in Chicago in July 2008, and CF-5 (5th International Conference on Creep, Fatigue and Creep-Fatigue), held in Kalpakkam, India in September 2008

  13. Analysis of short-chain acids from anaerobic bacteria by high-performance liquid chromatography.

    Guerrant, G O; Lambert, M A; Moss, C W

    1982-01-01

    A standard mixture of 25 short-chain fatty acids was resolved by high-performance liquid chromatography, using an Aminex HPX-87 column. The acids produced in culture media by anaerobic bacteria were analyzed by high-performance liquid chromatography after extraction with ether and reextraction into a small volume of 0.1 N NaOH. The presence of fumaric acid in culture extracts of Peptostreptococcus anaerobius was confirmed by gas chromatography-mass spectrometry analysis of the trapped eluent ...

  14. Report on Ultra-high Resolution Gamma-/X-ray Analysis of Uranium Skull Oxide

    Friedrich, S.; Velazquez, M.; Drury, O.; Salaymeh, S.

    2009-01-01

    We have utilized the high energy resolution and high peak-to-background ratio of superconducting TES γ-detectors at very low energies for non-destructive analysis of a skull oxide derived from reprocessed nuclear fuel. Specifically, we demonstrate that superconducting detectors can separate and analyze the strong actinide emission lines in the spectral region below 60 keV that are often obscured in γ-measurements with conventional Ge detectors.

  15. Conference Analysis Report of Assessments on Defect and Damage for a High Temperature Structure

    Lee, Hyeong Yeon

    2008-11-15

    This report presents an analysis of state-of-the-art research trends in creep-fatigue damage, defect assessment of high temperature structures, and the development of heat resistant materials and their behavior at high temperature, based on the papers presented at two international conferences: ASME PVP 2008, held in Chicago in July 2008, and CF-5 (5th International Conference on Creep, Fatigue and Creep-Fatigue), held in Kalpakkam, India in September 2008.

  16. Analysis of administrative barriers in the industry of the high-rise construction in Russian Federation

    Zaychenko, Irina; Borremans, Alexandra; Gutman, Svetlana

    2018-03-01

    The article describes the concept and types of administrative barriers encountered in various areas of enterprise activity. The particularities of the Russian high-rise construction industry are described, and a comparative analysis of administrative barriers in this sector is performed. The main stages and administrative procedures that developers face when implementing investment and construction projects in the field of high-rise construction are determined. The regulatory and legal framework for investment and project activities in the high-rise construction industry has been studied, and conclusions are drawn on its low level of precision with respect to the formation of competitive and efficient high-rise construction markets. The average number of administrative procedures for the implementation of an investment and construction project in the field of high-rise construction is determined. The factors preventing the reduction of administrative barriers in the high-rise construction industry are revealed.

  17. High frequency analysis of cough sounds in pediatric patients with respiratory diseases.

    Kosasih, K; Abeyratne, U R; Swarnkar, V

    2012-01-01

    Cough is a common symptom in a range of respiratory diseases and is considered a natural defense mechanism of the body. Despite its critical importance in the diagnosis of illness, there are no gold-standard methods to objectively assess cough. In a typical consultation session, a physician may briefly listen to cough sounds using a stethoscope placed against the chest. The physician may also listen to spontaneous cough sounds via the naked ear, as they naturally propagate through air. Cough sounds carry vital information on the state of the respiratory system, but the field of cough analysis in clinical medicine is in its infancy. All existing cough analysis approaches are severely handicapped by the limitations of the human hearing range and by simplified analysis techniques. In this paper, we address these problems and explore the use of frequencies covering a range well beyond human perception (up to 90 kHz), using wavelet analysis to extract diagnostically important information from coughs. Our data set comes from a pediatric respiratory ward in Indonesia, from subjects diagnosed with asthma, pneumonia, and rhinopharyngitis. We analyzed over 90 cough samples from 4 patients and explored whether high frequencies carried useful information in separating these disease groups. Multiple regression analysis resulted in coefficients of determination (R²) of 77-82% at high frequencies (15 kHz-90 kHz), indicating that they carry useful information. When the high frequencies were combined with frequencies below 15 kHz, the R² performance increased to 85-90%.
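
    A minimal sketch of this kind of pipeline, with PyWavelets and scikit-learn as generic stand-ins (the paper's exact wavelet family, decomposition levels, and regression design are not specified here, and the recordings below are simulated):

```python
import numpy as np
import pywt
from sklearn.linear_model import LinearRegression

def wavelet_band_energies(x, wavelet="db4", level=5):
    """Relative energy in each wavelet sub-band of one cough recording."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    energies = np.array([np.sum(c**2) for c in coeffs])
    return energies / energies.sum()

# Hypothetical data: 90 coughs, each a 1-second recording at 192 kHz
rng = np.random.default_rng(0)
X = np.array([wavelet_band_energies(rng.standard_normal(192_000))
              for _ in range(90)])
y = rng.integers(0, 3, size=90)                 # disease-group labels (toy)
print(LinearRegression().fit(X, y).score(X, y))  # R^2 of the multiple regression
```

    On real recordings, the highest-frequency sub-bands would supply the features whose R² the abstract reports.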

  18. Structural vibration passive control and economic analysis of a high-rise building in Beijing

    Chen, Yongqi; Cao, Tiezhu; Ma, Liangzhe; Luo, Chaoying

    2009-12-01

    Performance analysis of the Pangu Plaza under earthquake and wind loads is described in this paper. The plaza is a 39-story steel high-rise building, 191 m high, located in Beijing close to the 2008 Olympic main stadium. It has both fluid viscous dampers (FVDs) and buckling-restrained braces, or unbonded braces (BRBs or UBBs), installed. A repeated iteration procedure was adopted in its design and analysis for optimization. Results from the seismic response analysis in the horizontal and vertical directions show that the FVDs are highly effective in reducing the response of both the main structure and the secondary system. A comparative analysis of structural seismic performance and economic impact was conducted for traditional methods, i.e., increased size of steel columns and beams and/or an increased number of seismic braces, versus the use of FVDs. Both the structural response and the economic analysis show that using FVDs to absorb seismic energy not only satisfies the Chinese seismic design code for a "rare" earthquake, but is also the most economical way to improve seismic performance, both in one-time direct investment and in long-term maintenance.

  19. Differential Expression and Functional Analysis of High-Throughput -Omics Data Using Open Source Tools.

    Kebschull, Moritz; Fittler, Melanie Julia; Demmer, Ryan T; Papapanou, Panos N

    2017-01-01

    Today, -omics analyses, including the systematic cataloging of messenger RNA and microRNA sequences or DNA methylation patterns in a cell population, organ, or tissue sample, allow for an unbiased, comprehensive genome-level analysis of complex diseases, offering a large advantage over earlier "candidate" gene or pathway analyses. A primary goal in the analysis of these high-throughput assays is the detection of those features among several thousand that differ between different groups of samples. In the context of oral biology, our group has successfully utilized -omics technology to identify key molecules and pathways in different diagnostic entities of periodontal disease. A major issue when inferring biological information from high-throughput -omics studies is the fact that the sheer volume of high-dimensional data generated by contemporary technology is not appropriately analyzed using common statistical methods employed in the biomedical sciences. In this chapter, we outline a robust and well-accepted bioinformatics workflow for the initial analysis of -omics data generated using microarrays or next-generation sequencing technology using open-source tools. Starting with quality control measures and necessary preprocessing steps for data originating from different -omics technologies, we next outline a differential expression analysis pipeline that can be used for data from both microarray and sequencing experiments, and offers the possibility to account for random or fixed effects. Finally, we present an overview of the possibilities for a functional analysis of the obtained data.
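
    The core of such a differential expression step can be sketched as a per-feature test with multiple-testing correction. This is a generic Python illustration with simulated data; the chapter itself works with dedicated open-source packages, and the matrix and group labels below are invented.

```python
import numpy as np
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
expr = rng.normal(size=(5000, 12))        # 5000 features x 12 samples (toy data)
groups = np.array([0]*6 + [1]*6)          # two groups of six samples
expr[:50, groups == 1] += 2.0             # spike in 50 truly different features

# Feature-wise t-tests, then Benjamini-Hochberg control of the FDR
_, pvals = ttest_ind(expr[:, groups == 0], expr[:, groups == 1], axis=1)
rejected, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(int(rejected.sum()), "features called differentially expressed")
```

    The FDR step is what keeps the several-thousand-feature scale from producing a flood of false positives.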

  20. Improved spectrophotometric analysis of fullerenes C60 and C70 in high-solubility organic solvents.

    Törpe, Alexander; Belton, Daniel J

    2015-01-01

    Fullerenes are among a number of recently discovered carbon allotropes that exhibit unique and versatile properties. The analysis of these materials is of great importance and interest. We present previously unreported spectroscopic data for C60 and C70 fullerenes in high-solubility solvents, including error bounds, so as to allow reliable colorimetric analysis of these materials. The Beer-Lambert-Bouguer law is found to be valid at all wavelengths. The measured data were highly reproducible and yielded high-precision molar absorbance coefficients for C60 and C70 in o-xylene and o-dichlorobenzene, which both exhibit high solubility for these fullerenes and offer the prospect of improved extraction efficiency. A photometric method for C60/C70 mixture analysis was validated with standard mixtures and subsequently improved for real samples by correcting for light scattering using a power-law fit. The method was successfully applied to the analysis of C60/C70 mixtures extracted from fullerene soot.
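
    The two-component photometric step rests on additivity of absorbances at two wavelengths, so the concentrations follow from a 2x2 linear solve. The molar absorptivities below are placeholders, not the calibrated coefficients reported in the paper.

```python
import numpy as np

# Hypothetical molar absorptivities eps[i, j] of species j at wavelength i
# (L mol^-1 cm^-1); replace with calibrated values for the chosen solvent.
eps = np.array([[55_000.0, 20_000.0],    # wavelength 1: C60, C70
                [ 5_000.0, 30_000.0]])   # wavelength 2: C60, C70
path = 1.0                                # cuvette path length [cm]
A = np.array([0.62, 0.35])                # measured absorbances (toy values)

# Beer-Lambert for a mixture: A_i = path * sum_j eps[i, j] * c_j
c60, c70 = np.linalg.solve(eps * path, A)
print(f"C60: {c60:.2e} M, C70: {c70:.2e} M")
```

    The scattering correction mentioned in the abstract would subtract a power-law baseline from A before this solve.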

  1. High-density EEG coherence analysis using functional units applied to mental fatigue

    Caat, Michael ten; Lorist, Monicque M.; Bezdan, Eniko; Roerdink, Jos B.T.M.; Maurits, Natasha M.

    2008-01-01

    Electroencephalography (EEG) coherence provides a quantitative measure of functional brain connectivity, calculated between pairs of signals as a function of frequency. Without hypotheses, traditional coherence analysis would be cumbersome for high-density EEG, which employs a large number of electrodes.
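
    Pairwise coherence of this kind can be computed with standard spectral estimators; a generic sketch with synthetic channels follows (the paper's functional-unit method itself is not reproduced here, and the sampling rate and segment length are assumptions).

```python
import numpy as np
from scipy.signal import coherence

fs = 250.0                                  # assumed sampling rate [Hz]
t = np.arange(0, 10, 1/fs)
rng = np.random.default_rng(2)
common = np.sin(2*np.pi*10*t)               # shared 10 Hz alpha-band component
x = common + rng.standard_normal(t.size)    # two channels sharing that component
y = common + rng.standard_normal(t.size)

f, Cxy = coherence(x, y, fs=fs, nperseg=512)  # magnitude-squared coherence
print(f[np.argmax(Cxy)], Cxy.max())           # peaks near 10 Hz
```

    For high-density EEG, this pairwise computation scales quadratically with the number of electrodes, which is exactly why hypothesis-free analysis becomes cumbersome.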

  2. Modeling Phase-transitions Using a High-performance, Isogeometric Analysis Framework

    Vignal, Philippe; Dalcin, Lisandro; Collier, Nathan; Calo, Victor M.

    2014-01-01

    In this paper, we present a high-performance framework for solving partial differential equations using Isogeometric Analysis, called PetIGA, and show how it can be used to solve phase-field problems. We specifically chose the Cahn-Hilliard equation as a model problem.
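
    For context, the Cahn-Hilliard equation referred to above is usually written in mixed form (standard notation, not copied from the paper):

```latex
\frac{\partial c}{\partial t} = \nabla \cdot \bigl( M \, \nabla \mu \bigr),
\qquad
\mu = f'(c) - \kappa \, \nabla^{2} c
```

    where c is the phase variable (concentration), M a mobility, f(c) a double-well free energy, and κ an interface-energy coefficient; the fourth-order character of the combined equation is what makes the smooth, high-continuity bases of Isogeometric Analysis attractive for it.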

  3. High-Dimensional Exploratory Item Factor Analysis by a Metropolis-Hastings Robbins-Monro Algorithm

    Cai, Li

    2010-01-01

    A Metropolis-Hastings Robbins-Monro (MH-RM) algorithm for high-dimensional maximum marginal likelihood exploratory item factor analysis is proposed. The sequence of estimates from the MH-RM algorithm converges with probability one to the maximum likelihood solution. Details on the computer implementation of this algorithm are provided. The…
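
    Schematically, MH-RM pairs Metropolis-Hastings imputation of the latent traits with a Robbins-Monro update of the item parameters; in generic stochastic-approximation notation (not the paper's exact recursion):

```latex
\theta_{k+1} \;=\; \theta_k \;+\; \gamma_k \,\tilde{s}(\theta_k; z_k),
\qquad
\sum_k \gamma_k = \infty, \quad \sum_k \gamma_k^2 < \infty
```

    where z_k are latent-trait draws from a Metropolis-Hastings sampler, s̃ is the complete-data score (log-likelihood gradient) evaluated at those draws, and the decreasing gains γ_k average out the Monte Carlo noise, giving the convergence with probability one cited above.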

  4. Time Series in Education: The Analysis of Daily Attendance in Two High Schools

    Koopmans, Matthijs

    2011-01-01

    This presentation discusses the use of a time series approach to the analysis of daily attendance in two urban high schools over the course of one school year (2009-10). After establishing that the series for both schools were stationary, they were examined for moving average processes, autoregression, seasonal dependencies (weekly cycles),…

  5. Using High Speed Smartphone Cameras and Video Analysis Techniques to Teach Mechanical Wave Physics

    Bonato, Jacopo; Gratton, Luigi M.; Onorato, Pasquale; Oss, Stefano

    2017-01-01

    We propose the use of smartphone-based slow-motion video analysis techniques as a valuable tool for investigating physics concepts ruling mechanical wave propagation. The simple experimental activities presented here, suitable for both high school and undergraduate students, allows one to measure, in a simple yet rigorous way, the speed of pulses…

  6. Multi-Scale Factor Analysis of High-Dimensional Brain Signals

    Ting, Chee-Ming; Ombao, Hernando; Salleh, Sh-Hussain

    2017-01-01

    In this paper, we develop an approach to modeling high-dimensional networks with a large number of nodes arranged in a hierarchical and modular structure. We propose a novel multi-scale factor analysis (MSFA) model which partitions the massive

  7. Cost-Effectiveness Analysis in Practice: Interventions to Improve High School Completion

    Hollands, Fiona; Bowden, A. Brooks; Belfield, Clive; Levin, Henry M.; Cheng, Henan; Shand, Robert; Pan, Yilin; Hanisch-Cerda, Barbara

    2014-01-01

    In this article, we perform cost-effectiveness analysis on interventions that improve the rate of high school completion. Using the What Works Clearinghouse to select effective interventions, we calculate cost-effectiveness ratios for five youth interventions. We document wide variation in cost-effectiveness ratios between programs and between…

  8. Cultural Parallax and Content Analysis: Images of Black Women in High School History Textbooks

    Woyshner, Christine; Schocker, Jessica B.

    2015-01-01

    This study investigates the representation of Black women in high school history textbooks. To examine the extent to which Black women are represented visually and to explore how they are portrayed, the authors use a mixed-methods approach that draws on analytical techniques in content analysis and from visual culture studies. Their findings…

  9. Organization of pulse-height analysis programs for high event rates

    Cohn, C E [Argonne National Lab., Ill. (USA)

    1976-09-01

    The ability of a pulse-height analysis program to handle high event rates can be enhanced by organizing it so as to minimize the time spent in interrupt housekeeping. Specifically, the routine that services the data-ready interrupt from the ADC should test whether another event is ready before performing the interrupt return.
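
    The saving can be illustrated with a toy cost model (the overhead numbers are invented, purely illustrative): draining all pending ADC events before returning from the interrupt pays the entry/exit overhead once per burst rather than once per event.

```python
def burst_time(n_events, t_overhead=5.0, t_event=1.0, drain=True):
    """Total service time for a burst of ADC events (arbitrary time units).

    drain=True : the ISR loops, testing for further ready events before
                 returning, so interrupt entry/exit overhead is paid once.
    drain=False: the ISR returns after each event; overhead is paid n times.
    """
    if drain:
        return t_overhead + n_events * t_event
    return n_events * (t_overhead + t_event)

for n in (1, 4, 16):
    print(n, burst_time(n, drain=False), burst_time(n, drain=True))
```

    At high event rates the bursts grow, so the amortized scheme's advantage grows with exactly the load that breaks the naive one.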

  10. Multiple Logistic Regression Analysis of Cigarette Use among High School Students

    Adwere-Boamah, Joseph

    2011-01-01

    A binary logistic regression analysis was performed to predict high school students' cigarette smoking behavior from selected predictors from the 2009 CDC Youth Risk Behavior Surveillance Survey. The specific target student behavior of interest was frequent cigarette use. Five predictor variables included in the model were: a) race, b) frequency of…
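
    A minimal version of such a model, with simulated data standing in for the survey (the predictor names and the data-generating process below are placeholders, not the study's variables):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 500
X = np.column_stack([rng.integers(0, 2, n),     # placeholder: race indicator
                     rng.integers(0, 2, n),     # placeholder: peer smoking
                     rng.normal(size=n)])       # placeholder: risk score
logit = -1.0 + 0.8*X[:, 1] + 0.5*X[:, 2]        # toy data-generating process
y = rng.random(n) < 1/(1 + np.exp(-logit))      # frequent cigarette use (0/1)

model = LogisticRegression().fit(X, y)
print(model.coef_, model.intercept_)            # fitted log-odds coefficients
```

    Each fitted coefficient is a log-odds ratio: exponentiating it gives the multiplicative change in the odds of frequent cigarette use per unit change in that predictor.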

  11. VATE: VAlidation of high TEchnology based on large database analysis by learning machine

    Meldolesi, E; Van Soest, J; Alitto, A R; Autorino, R; Dinapoli, N; Dekker, A; Gambacorta, M A; Gatta, R; Tagliaferri, L; Damiani, A; Valentini, V

    2014-01-01

    The interaction between the implementation of new technologies and different outcomes can allow a broad range of research to be expanded. The purpose of this paper is to introduce the VAlidation of high TEchnology based on large database analysis by learning machine (VATE) project, which aims to combine

  12. The operating experience and incident analysis for High Flux Engineering Test Reactor

    Zhao Guang

    1999-01-01

    The paper describes the incident analysis for the High Flux Engineering Test Reactor (HFETR) and introduces operating experience. Some suggestions have been made to reduce incidents at the HFETR. It is necessary to adopt new improvements that enhance the safety and reliability of operation. (author)

  13. A general purpose program system for high energy physics experiment data acquisition and analysis

    Li Shuren; Xing Yuguo; Jin Bingnian

    1985-01-01

    This paper introduces the functions, structure, and system generation of a general purpose program system (Fermilab MULTI) for high energy physics experiment data acquisition and analysis. Work concerning the reconstruction of MULTI system level 0.5, which can be run on a PDP-11/23 computer, is also briefly introduced

  14. Meta-Analysis on Dating Violence Prevention among Middle and High Schools

    Ting, Siu-Man Raymond

    2009-01-01

    Meta-analysis was applied to study the empirical research from 1990-2007 regarding the effectiveness of the dating violence prevention programs in middle and high schools on students' knowledge and attitudes. The results show that overall the program participants improved their knowledge and attitudes towards dating violence. Implications for…

  15. High frequency analysis of lead-lag relationships between financial markets

    de Jong, F.C.J.M.; Nijman, T.E.

    1995-01-01

    High frequency data are often observed at irregular intervals, which complicates the analysis of lead-lag relationships between financial markets. Frequently, estimators have been used that are based on observations at regular intervals, which are adapted to the irregular observations case by

  16. Analysis and design of a slotless tubular permanent magnet actuator for high acceleration applications

    Meessen, K.J.; Paulides, J.J.H.; Lomonova, E.A.

    2009-01-01

    This paper presents the design of a linear actuator for high acceleration applications. In the analysis, a slotless tubular permanent magnet actuator is modeled by means of semianalytical field solutions. Several slotless topologies are modeled and compared to achieve the highest acceleration. A

  17. Impedance-Based High Frequency Resonance Analysis of DFIG System in Weak Grids

    Song, Yipeng; Wang, Xiongfei; Blaabjerg, Frede

    2017-01-01

    ...Sub-Synchronous Resonance (SSR). However, the High Frequency Resonance (HFR) of DFIG systems due to the impedance interaction between the DFIG system and a parallel compensated weak network is often overlooked. This paper thus investigates the impedance characteristics of DFIG systems for the analysis of HFR. The influences...

  18. The high performance cluster computing system for BES offline data analysis

    Sun Yongzhao; Xu Dong; Zhang Shaoqiang; Yang Ting

    2004-01-01

    A high performance cluster computing system (EPCfarm), used for BES offline data analysis, is introduced. The setup and characteristics of the hardware and software of EPCfarm are described. PBS, a queue management package, and the performance of EPCfarm are also presented. (authors)

  19. DESIGN ANALYSIS FOR THE DEFENSE HIGH-LEVEL WASTE DISPOSAL CONTAINER

    G. Radulesscu; J.S. Tang

    2000-06-07

    The purpose of the ''Design Analysis for the Defense High-Level Waste Disposal Container'' analysis is to technically define the defense high-level waste (DHLW) disposal container/waste package using the Waste Package Department's (WPD) design methods, as documented in ''Waste Package Design Methodology Report'' (CRWMS M&O [Civilian Radioactive Waste Management System Management and Operating Contractor] 2000a). The DHLW disposal container is intended for disposal of commercial high-level waste (HLW) and DHLW (including immobilized plutonium waste forms), placed within disposable canisters. The U.S. Department of Energy (DOE)-managed spent nuclear fuel (SNF) in disposable canisters may also be placed in a DHLW disposal container along with HLW forms. The objective of this analysis is to demonstrate that the DHLW disposal container/waste package satisfies the project requirements, as embodied in the Defense High Level Waste Disposal Container System Description Document (SDD) (CRWMS M&O 1999a), and additional criteria, as identified in the Waste Package Design Sensitivity Report (CRWMS M&O 2000b, Table 4). The analysis briefly describes the analytical methods appropriate for the design of the DHLW disposal container/waste package, and summarizes the results of the calculations that illustrate the analytical methods. However, the analysis is limited to the calculations selected for the DHLW disposal container in support of the Site Recommendation (SR) (CRWMS M&O 2000b, Section 7). The scope of this analysis is restricted to the design of the codisposal waste package of the Savannah River Site (SRS) DHLW glass canisters and the Training, Research, Isotopes General Atomics (TRIGA) SNF loaded in a short 18-in.-outer diameter (OD) DOE standardized SNF canister. This waste package is representative of the waste packages that consist of the DHLW disposal container, the DHLW/HLW glass canisters, and the DOE-managed SNF in disposable canisters.

  20. DESIGN ANALYSIS FOR THE DEFENSE HIGH-LEVEL WASTE DISPOSAL CONTAINER

    Radulesscu, G.; Tang, J.S.

    2000-01-01

    The purpose of the ''Design Analysis for the Defense High-Level Waste Disposal Container'' analysis is to technically define the defense high-level waste (DHLW) disposal container/waste package using the Waste Package Department's (WPD) design methods, as documented in ''Waste Package Design Methodology Report'' (CRWMS M&O [Civilian Radioactive Waste Management System Management and Operating Contractor] 2000a). The DHLW disposal container is intended for disposal of commercial high-level waste (HLW) and DHLW (including immobilized plutonium waste forms), placed within disposable canisters. The U.S. Department of Energy (DOE)-managed spent nuclear fuel (SNF) in disposable canisters may also be placed in a DHLW disposal container along with HLW forms. The objective of this analysis is to demonstrate that the DHLW disposal container/waste package satisfies the project requirements, as embodied in the Defense High Level Waste Disposal Container System Description Document (SDD) (CRWMS M&O 1999a), and additional criteria, as identified in the Waste Package Design Sensitivity Report (CRWMS M&O 2000b, Table 4). The analysis briefly describes the analytical methods appropriate for the design of the DHLW disposal container/waste package, and summarizes the results of the calculations that illustrate the analytical methods. However, the analysis is limited to the calculations selected for the DHLW disposal container in support of the Site Recommendation (SR) (CRWMS M&O 2000b, Section 7). The scope of this analysis is restricted to the design of the codisposal waste package of the Savannah River Site (SRS) DHLW glass canisters and the Training, Research, Isotopes General Atomics (TRIGA) SNF loaded in a short 18-in.-outer diameter (OD) DOE standardized SNF canister. This waste package is representative of the waste packages that consist of the DHLW disposal container, the DHLW/HLW glass canisters, and the DOE-managed SNF in disposable canisters. The intended use of this

  1. Compressed sensing cine imaging with high spatial or high temporal resolution for analysis of left ventricular function.

    Goebel, Juliane; Nensa, Felix; Schemuth, Haemi P; Maderwald, Stefan; Gratz, Marcel; Quick, Harald H; Schlosser, Thomas; Nassenstein, Kai

    2016-08-01

    To assess two compressed sensing cine magnetic resonance imaging (MRI) sequences with high spatial or high temporal resolution in comparison to a reference steady-state free precession cine (SSFP) sequence for reliable quantification of left ventricular (LV) volumes. LV short axis stacks of two compressed sensing breath-hold cine sequences with high spatial resolution (SPARSE-SENSE HS: temporal resolution: 40 msec, in-plane resolution: 1.0 × 1.0 mm(2) ) and high temporal resolution (SPARSE-SENSE HT: temporal resolution: 11 msec, in-plane resolution: 1.7 × 1.7 mm(2) ) and of a reference cine SSFP sequence (standard SSFP: temporal resolution: 40 msec, in-plane resolution: 1.7 × 1.7 mm(2) ) were acquired in 16 healthy volunteers on a 1.5T MR system. LV parameters were analyzed semiautomatically twice by one reader and once by a second reader. The volumetric agreement between sequences was analyzed using paired t-test, Bland-Altman plots, and Passing-Bablock regression. Small differences were observed between standard SSFP and SPARSE-SENSE HS for stroke volume (SV; -7 ± 11 ml; P = 0.024), ejection fraction (EF; -2 ± 3%; P = 0.019), and myocardial mass (9 ± 9 g; P = 0.001), but not for end-diastolic volume (EDV; P = 0.079) and end-systolic volume (ESV; P = 0.266). No significant differences were observed between standard SSFP and SPARSE-SENSE HT regarding EDV (P = 0.956), SV (P = 0.088), and EF (P = 0.103), but for ESV (3 ± 5 ml; P = 0.039) and myocardial mass (8 ± 10 ml; P = 0.007). Bland-Altman analysis showed good agreement between the sequences (maximum bias ≤ -8%). Two compressed sensing cine sequences, one with high spatial resolution and one with high temporal resolution, showed good agreement with standard SSFP for LV volume assessment. J. Magn. Reson. Imaging 2016;44:366-374. © 2016 Wiley Periodicals, Inc.

  2. Specialized surveillance for individuals at high risk for melanoma: a cost analysis of a high-risk clinic.

    Watts, Caroline G; Cust, Anne E; Menzies, Scott W; Coates, Elliot; Mann, Graham J; Morton, Rachael L

    2015-02-01

    Regular surveillance of individuals at high risk for cutaneous melanoma improves early detection and reduces unnecessary excisions; however, a cost analysis of this specialized service has not been undertaken. To determine the mean cost per patient of surveillance in a high-risk clinic from the health service and societal perspectives. We used a bottom-up microcosting method to measure resource use in a consecutive sample of 102 patients treated in a high-risk hospital-based clinic in Australia during a 12-month period. Surveillance and treatment of melanoma. All surveillance and treatment procedures were identified through direct observation, review of medical records, and interviews with staff and were valued using scheduled fees from the Australian government. Societal costs included transportation and loss of productivity. The mean number of clinic visits per year was 2.7 (95% CI, 2.5-2.8) for surveillance and 3.8 (95% CI, 3.4-4.1) for patients requiring surgical excisions. The mean annual cost per patient to the health system was A $882 (95% CI, A $783-$982) (US $599 [95% CI, US $532-$665]); the cost discounted across 20 years was A $11,546 (95% CI, A $10,263-$12,829) (US $7839 [95% CI, US $6969-$8710]). The mean annual societal cost per patient (excluding health system costs) was A $972 (95% CI, A $899-$1045) (US $660 [95% CI, US $611-$710]); the cost discounted across 20 years was A $12,721 (95% CI, A $12,554-$14,463) (US $8637 [95% CI, US $8523-$9820]). Diagnosis of melanoma or nonmelanoma skin cancer and frequent excisions for benign lesions in a relatively small number of patients was responsible for positively skewed health system costs. Microcosting techniques provide an accurate cost estimate for the provision of a specialized service. The high societal cost reflects the time that patients are willing to invest to attend the high-risk clinic. This alternative model of care for a high-risk population has relevance for decision making about health policy.

  3. Efficient analysis for nonlinear microwave characteristics of high-power HTS thin film microstrip resonators

    Kedar, Ashutosh; Kataria, N D

    2005-01-01

    This paper investigates the nonlinear effects of high-Tc superconducting (HTS) thin films in high-power applications. A nonlinear model for the complex surface impedance has been proposed for the efficient analysis of the nonlinearity of HTS thin films. Further, using the developed model, analysis of an HTS microstrip resonator (HTS-MSR) has been done using the spectral domain method (SDM). The SDM formulation has been modified to account for the finite conductivity and thickness of HTS films by incorporating a complex resistive boundary condition. The results have been validated against experiments performed with microstrip resonators (MSRs) based on YBa2Cu3O7-x (YBCO) thin films made by laser ablation on LaAlO3 substrates, characterized in terms of resonant frequency and quality factor measured as functions of temperature and input RF power. A close agreement between the theoretical and measured results has been achieved, validating the analysis

  4. Californium-252 neutron activation analysis of high-level processed nuclear tank waste

    Troyer, G.L.; Purcell, M.A.

    2000-01-01

    The basis for production assessment of the vitrification of Hanford nuclear fuel reprocessing wastes will be high-precision measurements of the elemental sodium content. However, the chemical analysis of both radioactive and nonradioactive components in nuclear waste can be challenged by high radiation dose rates. The dose rates compromise many analytical techniques as well as pose personnel dosimetry risks. In many cases, reduction of dose rates through dilution compromises the precision and sensitivity for certain key components. The use of neutron activation analysis (NAA) provides a method of analysis that avoids the need for dilutions or extensive sample preparation. These waste materials also contain trace quantities of fissionable isotopes, which, through neutron activation, can be estimated by delayed neutron counting of fissioned fragments

  5. Analysis of Pacific oyster larval proteome and its response to high-CO2

    Dineshram, R.

    2012-10-01

    Most calcifying organisms show depressed metabolic, growth, and calcification rates as symptoms of exposure to high CO2 due to the ocean acidification (OA) process. Analysis of the global expression pattern of proteins (proteome analysis) represents a powerful tool to examine these physiological symptoms at the molecular level, but its applications are inadequate. To address this knowledge gap, 2-DE coupled with mass spectrometry was used to compare the global protein expression patterns of oyster larvae exposed to ambient and high CO2 conditions. Exposure to OA resulted in a marked reduction of global protein expression, with a decrease or loss of 71 proteins (18% of the expressed proteins in the control), indicating a widespread depression of metabolic gene expression in larvae reared under OA. This is, to our knowledge, the first proteome analysis that provides insights into the link between physiological suppression and protein down-regulation in oyster larvae under OA. © 2012 Elsevier Ltd.

  6. Multi-scale Analysis of High Resolution Topography: Feature Extraction and Identification of Landscape Characteristic Scales

    Passalacqua, P.; Sangireddy, H.; Stark, C. P.

    2015-12-01

    With the advent of digital terrain data, detailed information on terrain characteristics and on the scale and location of geomorphic features is available over extended areas. Our ability to observe landscapes and quantify topographic patterns has greatly improved, including the estimation of fluxes of mass and energy across landscapes. Challenges still remain in the analysis of high resolution topography data: the presence of features such as roads, for example, defeats classic methods for feature extraction, and large data volumes require computationally efficient extraction and analysis methods. Moreover, opportunities exist to define new robust metrics of landscape characterization for landscape comparison and model validation. In this presentation we cover recent research in multi-scale and objective analysis of high resolution topography data. We show how the probability density function of topographic attributes such as slope, curvature, and topographic index contains useful information for feature localization and extraction. Analyzing how these distributions change across scales, quantified by the behavior of modal values and interquartile range, allows the identification of landscape characteristic scales, such as terrain roughness. The methods are introduced on synthetic signals in one and two dimensions and then applied to a variety of landscapes with different characteristics. Validation of the methods includes the analysis of modeled landscapes where the noise distribution is known and features of interest are easily measured.
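
    A sketch of the scale analysis described above: smooth the terrain over increasing kernel scales and track the median and interquartile range of the slope distribution, assuming Gaussian smoothing as the multi-scale operator and a synthetic surface in place of real lidar data:

    ```python
    # Scale-dependent statistics of a topographic attribute (slope), sketched
    # on a synthetic DEM; a kink in IQR vs. scale suggests a characteristic scale.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(0)
    x = np.linspace(0, 10, 512)
    dem = np.sin(np.add.outer(x, x)) + 0.2 * rng.standard_normal((512, 512))  # hills + roughness

    for sigma in [1, 2, 4, 8, 16]:                 # smoothing scales in pixels
        gy, gx = np.gradient(gaussian_filter(dem, sigma))
        slope = np.hypot(gx, gy)
        q25, q50, q75 = np.percentile(slope, [25, 50, 75])
        print(f"sigma={sigma:2d}  median slope={q50:.4f}  IQR={q75 - q25:.4f}")
    ```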

  7. Text mining and network analysis to find functional associations of genes in high altitude diseases.

    Bhasuran, Balu; Subramanian, Devika; Natarajan, Jeyakumar

    2018-05-02

    Travel to elevations above 2500 m is associated with the risk of developing one or more forms of acute altitude illness such as acute mountain sickness (AMS), high altitude cerebral edema (HACE) or high altitude pulmonary edema (HAPE). Our work aims to identify the functional association of genes involved in high altitude diseases. In this work we identified the gene networks responsible for high altitude diseases by using the principle of gene co-occurrence statistics from literature and network analysis. First, we mined the literature data from PubMed on high-altitude diseases and extracted the co-occurring gene pairs. Next, gene pairs were ranked by their co-occurrence frequency. Finally, a gene association network was created using statistical measures to explore potential relationships. Network analysis results revealed that EPO, ACE, IL6 and TNF are among the top five genes found to co-occur with 20 or more genes, while the association between the EPAS1 and EGLN1 genes is strongly substantiated. The network constructed in this study proposes a large number of genes that act together under high altitude conditions. Overall, the result provides a good reference for further study of the genetic relationships in high altitude diseases. Copyright © 2018 Elsevier Ltd. All rights reserved.
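
    The co-occurrence ranking step lends itself to a compact sketch; the per-abstract gene sets below are hypothetical, and the entity extraction from PubMed is assumed to have been done already:

    ```python
    # Rank gene pairs by how often they co-occur in the same abstract.
    from collections import Counter
    from itertools import combinations

    abstracts = [                       # hypothetical extracted gene mentions
        {"EPO", "EPAS1", "EGLN1"},
        {"EPO", "ACE", "IL6"},
        {"IL6", "TNF", "EPO"},
        {"EPAS1", "EGLN1"},
    ]

    pair_counts = Counter()
    for genes in abstracts:
        pair_counts.update(combinations(sorted(genes), 2))

    for (g1, g2), n in pair_counts.most_common(5):   # edges of the association network
        print(f"{g1}-{g2}: {n}")
    ```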

  8. Ultra-high performance, solid-state, autoradiographic image digitization and analysis system

    Lear, J.L.; Pratt, J.P.; Ackermann, R.F.; Plotnick, J.; Rumley, S.

    1990-01-01

    We developed a Macintosh II-based, charge-coupled device (CCD), image digitization and analysis system for high-speed, high-resolution quantification of autoradiographic image data. A linear CCD array with 3,500 elements was attached to a precision drive assembly and mounted behind a high-uniformity lens. The drive assembly was used to sweep the array perpendicularly to its axis so that an entire 20 x 25-cm autoradiographic image-containing film could be digitized into 256 gray levels at 50-micron resolution in less than 30 sec. The scanner was interfaced to a Macintosh II computer through a specially constructed NuBus circuit board, and software was developed for autoradiographic data analysis. The system was evaluated by scanning individual films multiple times, then measuring the variability of the digital data between the different scans. Image data were found to be virtually noise free. The coefficient of variation averaged less than 1%, an accuracy significantly exceeding that of both high-speed, low-resolution video camera (VC) systems and low-speed, high-resolution rotating drum densitometers (RDD). Thus, the CCD scanner-Macintosh computer analysis system offers the advantage over VC systems of the ability to digitize entire films containing many autoradiograms, but with much greater speed and accuracy than achievable with RDD scanners.

  9. A genome-wide analysis of putative functional and exonic variation associated with extremely high intelligence.

    Spain, S L; Pedroso, I; Kadeva, N; Miller, M B; Iacono, W G; McGue, M; Stergiakouli, E; Davey Smith, G; Putallaz, M; Lubinski, D; Meaburn, E L; Plomin, R; Simpson, M A

    2016-08-01

    Although individual differences in intelligence (general cognitive ability) are highly heritable, molecular genetic analyses to date have had limited success in identifying specific loci responsible for this heritability. This study is the first to investigate exome variation in individuals of extremely high intelligence. Under the quantitative genetic model, sampling from the high extreme of the distribution should provide increased power to detect associations. We therefore performed a case-control association analysis with 1409 individuals drawn from the top 0.0003 (IQ > 170) of the population distribution of intelligence and 3253 unselected population-based controls. Our analysis focused on putative functional exonic variants assayed on the Illumina HumanExome BeadChip. We did not observe any individual protein-altering variants reproducibly associated either with extremely high intelligence or within the entire distribution of intelligence. Moreover, no significant associations were found for multiple rare alleles within individual genes. However, analyses using genome-wide similarity between unrelated individuals (genome-wide complex trait analysis) indicate that the genotyped functional protein-altering variation yields a heritability estimate of 17.4% (s.e. 1.7%) based on a liability model. In addition, investigation of nominally significant associations revealed fewer rare alleles associated with extremely high intelligence than would be expected under the null hypothesis. This observation is consistent with the hypothesis that rare functional alleles are more frequently detrimental than beneficial to intelligence.

  10. Error tolerance analysis of wave diagnostic based on coherent modulation imaging in high power laser system

    Pan, Xingchen; Liu, Cheng; Zhu, Jianqiang

    2018-02-01

    Coherent modulation imaging, which provides fast convergence and high resolution from a single diffraction pattern, is a promising technique to satisfy the urgent demand for on-line multi-parameter diagnostics with a single setup in high power laser facilities (HPLF). However, the influence of noise on the final calculated parameters has not been investigated yet. Based on a series of simulations with twenty different sampling beams generated from the practical parameters and performance of the HPLF, a quantitative statistical analysis considering five different error sources was first carried out. We found that detector background noise and high quantization error seriously affect the final accuracy, and that different parameters have different sensitivities to different noise sources. The simulation results and the corresponding analysis suggest directions for further improving the final accuracy of parameter diagnostics, which is critically important for formal application in the daily routines of the HPLF.

  11. Development of high performance liquid chromatography method for miconazole analysis in powder sample

    Hermawan, D.; Suwandri; Sulaeman, U.; Istiqomah, A.; Aboul-Enein, H. Y.

    2017-02-01

    A simple high performance liquid chromatography (HPLC) method has been developed in this study for the analysis of miconazole, an antifungal drug, in a powder sample. The optimized HPLC system used a C8 column with a mobile phase of methanol:water (85:15, v/v), a flow rate of 0.8 mL/min, and UV detection at 220 nm. The calibration graph was linear in the range from 10 to 50 mg/L with an r² of 0.9983. The limit of detection (LOD) and limit of quantitation (LOQ) obtained were 2.24 mg/L and 7.47 mg/L, respectively. The present HPLC method is applicable for the determination of miconazole in powder samples with a recovery of 101.28% (RSD = 0.96%, n = 3). The developed HPLC method provides short analysis time, high reproducibility and high sensitivity.
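
    The reported LOQ/LOD ratio (7.47/2.24 ≈ 3.3) is consistent with the common ICH 3.3·s/slope and 10·s/slope rules. A sketch under that assumption, with hypothetical calibration data (the abstract does not give the raw standards):

    ```python
    # Linear calibration with LOD/LOQ from the ICH 3.3*s/slope and 10*s/slope
    # rules -- an assumption consistent with the reported figures, not a
    # statement of the authors' exact procedure. The data are hypothetical.
    import numpy as np

    conc = np.array([10, 20, 30, 40, 50], dtype=float)       # standards, mg/L
    area = np.array([101, 205, 298, 402, 509], dtype=float)  # detector response

    slope, intercept = np.polyfit(conc, area, 1)
    resid = area - (slope * conc + intercept)
    s = resid.std(ddof=2)                       # residual standard deviation
    r2 = np.corrcoef(conc, area)[0, 1] ** 2

    print(f"r^2 = {r2:.4f}")
    print(f"LOD = {3.3 * s / slope:.2f} mg/L, LOQ = {10 * s / slope:.2f} mg/L")
    ```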

  12. Is the prognostic significance of O6-methylguanine-DNA methyltransferase promoter methylation equally important in glioblastomas of patients from different continents? A systematic review with meta-analysis.

    Meng, Wei; Jiang, Yangyang; Ma, Jie

    2017-01-01

    O6-methylguanine-DNA methyltransferase (MGMT) is an independent predictor of therapeutic response and potential prognosis in patients with glioblastoma multiforme (GBM). However, its prognostic significance across different continents still needs to be explored. To explore the effects of MGMT promoter methylation on both progression-free survival (PFS) and overall survival (OS) among GBM patients from different continents, a systematic review of published studies was conducted. A total of 5103 patients from 53 studies were included in the systematic review, and the overall rate of MGMT promoter methylation was 45.53%. Of these studies, 16 performed univariate analyses and 17 performed multivariate analyses of MGMT promoter methylation on PFS. The pooled hazard ratio (HR) estimated for PFS was 0.55 (95% CI 0.50, 0.60) by univariate analysis and 0.43 (95% CI 0.38, 0.48) by multivariate analysis. The effect of MGMT promoter methylation on OS was explored in 30 studies by univariate analysis and in 30 studies by multivariate analysis; the combined HRs were 0.48 (95% CI 0.44, 0.52) and 0.42 (95% CI 0.38, 0.45), respectively. In each geographic subgroup, the prognostic significance remained highly significant. The proportion of methylation in each group was inversely related to the corresponding HR in the univariate and multivariate analyses of PFS. However, in terms of OS, and in contrast with the data from Europe and the US, the higher methylation rates in Asia did not translate into better outcomes.
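
    Pooled HRs of this kind are conventionally obtained by fixed-effect inverse-variance weighting of the log hazard ratios. A sketch with hypothetical study inputs:

    ```python
    # Fixed-effect inverse-variance pooling of hazard ratios; the three
    # study-level HRs and CIs below are hypothetical.
    import math

    studies = [(0.55, 0.45, 0.67), (0.43, 0.33, 0.56), (0.60, 0.48, 0.75)]  # HR, lo, hi

    num = den = 0.0
    for hr, lo, hi in studies:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE of log HR from the 95% CI
        w = 1.0 / se ** 2
        num += w * math.log(hr)
        den += w

    half = 1.96 / math.sqrt(den)
    print(f"pooled HR = {math.exp(num / den):.2f} "
          f"(95% CI {math.exp(num / den - half):.2f}, {math.exp(num / den + half):.2f})")
    ```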

  13. A meta-analysis

    Chrissa G. Tsiara

    2018-03-13

    A meta-analysis of case–control studies was conducted. Univariate and … recent hepatitis C virus: potential benefit for ribavirin use in HCV/HIV … C/G polymorphism in breast pathologies and in HIV-infected patients.

  14. A thermal, thermoelastic, and wear analysis of high-energy disk brakes

    Kennedy, F. E., Jr.; Wu, J. J.; Ling, F. F.

    1974-01-01

    A thermomechanical investigation of the sliding contact problem encountered in high-energy disk brakes is described. The analysis includes modelling, using the finite element method, of the thermoelastic instabilities that cause transient changes in contact area on the friction surface. In order to include the effect of wear at the contact surface, a wear criterion is proposed that yields predicted wear rates for disk brakes quite close to experimentally determined rates. The thermal analysis shows that the transient temperature distribution in a disk brake assembly can be determined more accurately by this thermomechanical analysis than by a more conventional analysis that assumes constant contact conditions. It also shows that lower, more desirable, temperatures in disk brakes can be attained by increasing the volume, the thermal conductivity, and, especially, the heat capacity of the brake components.

  15. A hazard and probabilistic safety analysis of a high-level waste transfer process

    Bott, T.F.; Sasser, M.K.

    1996-01-01

    This paper describes a safety analysis of a transfer process for high-level radioactive and toxic waste. The analysis began with a hazard assessment that used elements of What If, Checklist, Failure Modes and Effects Analysis, and Hazards and Operability Study (HAZOP) techniques to identify and rough-in accident sequences. Based on this preliminary analysis, the most significant accident sequences were developed further using event trees. Quantitative frequency estimates for the accident sequences were based on operational data taken from the historical record of the site where the process is performed. Several modeling challenges were encountered in the course of the study. These included linked initiating and accident progression events, fire propagation modeling, accounting for administrative control violations, and handling mission-phase effects

  16. High serum uric acid concentration predicts poor survival in patients with breast cancer.

    Yue, Cai-Feng; Feng, Pin-Ning; Yao, Zhen-Rong; Yu, Xue-Gao; Lin, Wen-Bin; Qian, Yuan-Min; Guo, Yun-Miao; Li, Lai-Sheng; Liu, Min

    2017-10-01

    Uric acid is a product of purine metabolism. Recently, uric acid has gained much attention in cancer research. In this study, we aimed to investigate the clinicopathological and prognostic significance of serum uric acid concentration in breast cancer patients. A total of 443 female patients with histopathologically diagnosed breast cancer were included. After a mean follow-up time of 56 months, survival was analysed using the Kaplan-Meier method. To further evaluate the prognostic significance of uric acid concentrations, univariate and multivariate Cox regression analyses were applied. Of the clinicopathological parameters, uric acid concentration was associated with age, body mass index, ER status and PR status. Univariate analysis identified that patients with increased uric acid concentration had significantly inferior overall survival (HR 2.13, 95% CI 1.15-3.94, p=0.016). In multivariate analysis, we found that high uric acid concentration is an independent prognostic factor predicting death, but is insufficient to predict local relapse or distant metastasis. Kaplan-Meier analysis indicated that high uric acid concentration is related to poor overall survival (p=0.013). High uric acid concentration predicts poor survival in patients with breast cancer, and might serve as a potential marker for appropriate management of breast cancer patients. Copyright © 2017 Elsevier B.V. All rights reserved.
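
    A minimal sketch of the univariate-then-multivariate Cox workflow described above, assuming the lifelines package and entirely synthetic data (all covariates and effect sizes are hypothetical):

    ```python
    # Univariate vs. multivariate Cox proportional hazards regression.
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(1)
    n = 200
    df = pd.DataFrame({
        "high_uric_acid": rng.integers(0, 2, n),
        "age": rng.normal(55, 10, n),
        "bmi": rng.normal(24, 3, n),
    })
    # Synthetic survival times with a built-in adverse effect of high uric acid.
    hazard = 0.01 * np.exp(0.7 * df["high_uric_acid"])
    df["time"] = rng.exponential(1.0 / hazard)
    df["event"] = rng.integers(0, 2, n)          # 1 = death observed, 0 = censored

    # Univariate analysis: one covariate at a time.
    CoxPHFitter().fit(df[["time", "event", "high_uric_acid"]],
                      duration_col="time", event_col="event").print_summary()

    # Multivariate analysis: adjust for age and BMI simultaneously.
    CoxPHFitter().fit(df, duration_col="time", event_col="event").print_summary()
    ```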

  17. Multivariate two-part statistics for analysis of correlated mass spectrometry data from multiple biological specimens.

    Taylor, Sandra L; Ruhaak, L Renee; Weiss, Robert H; Kelly, Karen; Kim, Kyoungmi

    2017-01-01

    High-throughput mass spectrometry (MS) is now being used to profile small molecular compounds across multiple biological sample types from the same subjects, with the goal of leveraging information across biospecimens. Multivariate statistical methods that combine information from all biospecimens could be more powerful than the usual univariate analyses. However, missing values are common in MS data, and imputation can impact between-biospecimen correlation and multivariate analysis results. We propose two multivariate two-part statistics that accommodate missing values and combine data from all biospecimens to identify differentially regulated compounds. Statistical significance is determined using a multivariate permutation null distribution. Relative to univariate tests, the multivariate procedures detected more significant compounds in three biological datasets. In a simulation study, we showed that multi-biospecimen testing procedures were more powerful than single-biospecimen methods when compounds are differentially regulated in multiple biospecimens, but univariate methods can be more powerful if compounds are differentially regulated in only one biospecimen. We provide R functions to implement and illustrate our method as supplementary information. Contact: sltaylor@ucdavis.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
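
    The permutation-null machinery can be sketched generically; this illustrates the permutation idea only, not the authors' exact two-part statistic:

    ```python
    # Multivariate permutation test across several biospecimens (synthetic data).
    import numpy as np

    rng = np.random.default_rng(2)
    n, k = 20, 3                                     # subjects per group, biospecimens
    data = np.vstack([rng.normal(0.0, 1.0, (n, k)),  # controls
                      rng.normal(0.5, 1.0, (n, k))]) # cases, shifted in every specimen
    labels = np.array([0] * n + [1] * n)

    def stat(d, lab):
        # Combine per-specimen t-like statistics into one multivariate statistic.
        a, b = d[lab == 0], d[lab == 1]
        t = (b.mean(0) - a.mean(0)) / np.sqrt(a.var(0, ddof=1) / len(a)
                                              + b.var(0, ddof=1) / len(b))
        return float((t ** 2).sum())

    observed = stat(data, labels)
    null = [stat(data, rng.permutation(labels)) for _ in range(2000)]
    p = (1 + sum(s >= observed for s in null)) / (1 + len(null))
    print(f"observed = {observed:.2f}, permutation p = {p:.4f}")
    ```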

  18. elegantRingAnalysis: An Interface for High-Throughput Analysis of Storage Ring Lattices Using elegant

    Borland, Michael

    2005-01-01

    The code elegant is widely used for simulation of linacs for drivers for free-electron lasers. Less well known is that elegant is also a very capable code for simulation of storage rings. In this paper, we show a newly-developed graphical user interface that allows the user to easily take advantage of these capabilities. The interface is designed for use on a Linux cluster, providing very high throughput. It can also be used on a single computer. Among the features it gives access to are basic calculations (Twiss parameters, radiation integrals), phase-space tracking, nonlinear dispersion, dynamic aperture (on- and off-momentum), frequency map analysis, and collective effects (IBS, bunch-lengthening). Using a cluster, it is easy to get highly detailed dynamic aperture and frequency map results in a surprisingly short time.

  19. Accelerated Synchrotron X-ray Diffraction Data Analysis on a Heterogeneous High Performance Computing System

    Qin, J; Bauer, M A, E-mail: qin.jinhui@gmail.com, E-mail: bauer@uwo.ca [Computer Science Department, University of Western Ontario, London, ON N6A 5B7 (Canada)

    2010-11-01

    The analysis of synchrotron X-ray Diffraction (XRD) data has been used by scientists and engineers to understand and predict properties of materials. However, the large volume of XRD image data and the intensive computations involved in the data analysis make it hard for researchers to quickly reach any conclusions about the images from an experiment when using conventional XRD data analysis software. Synchrotron time is valuable, and delays in XRD data analysis can impact decisions about subsequent experiments or about the materials being investigated. In order to improve data analysis performance, ideally to achieve near real-time data analysis during an XRD experiment, we designed and implemented software for accelerated XRD data analysis. The software has been developed for a heterogeneous high performance computing (HPC) system, comprised of IBM PowerXCell 8i processors and Intel quad-core Xeon processors. This paper describes the software and reports on the improved performance. The results indicate that it is possible for XRD data to be analyzed at the rate it is being produced.

  1. iScreen: Image-Based High-Content RNAi Screening Analysis Tools.

    Zhong, Rui; Dong, Xiaonan; Levine, Beth; Xie, Yang; Xiao, Guanghua

    2015-09-01

    High-throughput RNA interference (RNAi) screening has opened up a path to investigating functional genomics on a genome-wide scale. However, such studies are often restricted to assays that have a single readout format. Recently, advanced imaging technologies have been coupled with high-throughput RNAi screening to develop high-content screening, in which one or more cell images, instead of a single readout, are generated from each well. This image-based high-content screening technology has led to genome-wide functional annotation in a wider spectrum of biological research studies, as well as in drug and target discovery, so that complex cellular phenotypes can be measured in a multiparametric format. Despite these advances, data analysis and visualization tools are still largely lacking for these types of experiments. Therefore, we developed iScreen (Image-based High-content RNAi Screening Analysis Tool), an R package for the statistical modeling and visualization of image-based high-content RNAi screening. Two case studies are used to demonstrate the capability and efficiency of the iScreen package. iScreen is available for download on CRAN (http://cran.cnr.berkeley.edu/web/packages/iScreen/index.html). The user manual is also available as a supplementary document. © 2014 Society for Laboratory Automation and Screening.

  2. Comparison of Imputation Methods for Handling Missing Categorical Data with Univariate Pattern

    Torres Munguía, Juan Armando

    2014-06-01

    This paper examines sample proportion estimates in the presence of univariate missing categorical data. A database on smoking habits (2011 National Addiction Survey of Mexico) was used to create simulated yet realistic datasets with missingness rates of 5% and 15%, each under the MCAR, MAR and MNAR mechanisms. The performance of six methods for addressing missingness was then evaluated: listwise deletion, mode imputation, random imputation, hot-deck, imputation by polytomous regression, and random forests. Results showed that the most effective methods for dealing with missing categorical data in most of the scenarios assessed in this paper were the hot-deck and polytomous regression approaches.
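
    Hot-deck imputation, one of the two best performers above, replaces each missing value with an observed value from a randomly drawn donor, typically within adjustment classes. A minimal sketch with hypothetical data:

    ```python
    # Random hot-deck imputation of a categorical variable within donor classes.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(3)
    df = pd.DataFrame({
        "age_group": ["18-29", "18-29", "30-44", "30-44", "45+", "45+"] * 50,
        "smokes": rng.choice(["yes", "no"], 300, p=[0.3, 0.7]),
    })
    df.loc[rng.choice(300, 30, replace=False), "smokes"] = np.nan   # ~10% MCAR

    def hot_deck(group: pd.Series) -> pd.Series:
        donors = group.dropna()
        out = group.copy()
        # Each missing value receives a randomly drawn observed donor value.
        out[out.isna()] = rng.choice(donors, size=out.isna().sum())
        return out

    df["smokes_imputed"] = df.groupby("age_group")["smokes"].transform(hot_deck)
    print(df["smokes_imputed"].value_counts(normalize=True))
    ```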

  3. Safety analysis of the transportation of high-level radioactive waste

    Murphy, E.S.; Winegardner, W.K.

    1975-01-01

    An analysis of the risk from transportation of solidified high-level waste is being performed at Battelle-Northwest as part of a comprehensive study of the management of high-level waste. The risk analysis study makes use of fault trees to identify failure events and to specify combinations of events which could result in breach of containment and a release of radioactive material to the environment. Contributions to risk analysis methodology which have been made in connection with this study include procedures for identification of dominant failure sequences, methods for quantifying the effects of probabilistic failure events, and computer code development. Preliminary analysis based on evaluation of the rail transportation fault tree indicates that the dominant failure sequences for transportation of solidified high-level waste will be those related to railroad accidents. Detailed evaluation of rail accident failure sequences is proceeding and is making use of the limited frequency-severity data which is available in the literature. (U.S.)

  5. Optical system error analysis and calibration method of high-accuracy star trackers.

    Sun, Ting; Xing, Fei; You, Zheng

    2013-04-08

    The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Research in this field has so far lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker that avoids complicated theoretical derivation. The approach can determine the error propagation relationships of the star tracker and can intuitively and systematically build an error model. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment was designed and conducted, and excellent calibration results were achieved based on the calibration model. In summary, the error analysis approach and the calibration method prove to be adequate and precise, and can provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers.

  6. High fidelity analysis of BWR fuel assembly with COBRA-TF/PARCS and TRACE codes

    Abarca, A.; Miro, R.; Barrachina, T.; Verdu, G.; Soler, A.

    2013-01-01

    The growing importance of detailed reactor core and fuel assembly descriptions for light water reactors (LWRs), as well as sub-channel safety analysis, requires high fidelity models and coupled neutronic/thermalhydraulic codes. Hand in hand with advances in computer technology, nuclear safety analysis is beginning to use more detailed thermal hydraulics and neutronics. Previously, PWR core and 16 by 16 fuel assembly models were developed to test and validate our COBRA-TF/PARCS v2.7 (CTF/PARCS) coupled code. In this work, a comparison of the modeling and simulation advantages and disadvantages of a modern 10 by 10 BWR fuel assembly with the CTF/PARCS and TRACE codes has been carried out. The objective of the comparison is to make known the main advantages of using sub-channel codes to perform high resolution nuclear safety analysis. Sub-channel codes like CTF permit accurate predictions, in two-phase flow regimes, of the thermalhydraulic parameters important to safety, with high local resolution. The modeled BWR fuel assembly has 91 fuel rods (81 full-length and 10 partial-length fuel rods) and a large square central water rod. This assembly has been modeled in great detail with the CTF code, using the BWR modeling parameters provided by TRACE. The same neutronic PARCS model has been used for the simulation with both codes. To compare the codes, a coupled steady state calculation has been performed. (author)

  7. High Birth Weight Increases the Risk for Bone Tumor: A Systematic Review and Meta-Analysis

    Songfeng Chen

    2015-09-01

    There have been several epidemiologic studies on the relationship between high birth weight and the risk for bone tumor in the past decades. However, due to the rarity of bone tumors, the sample size of individual studies was generally too small for reliable conclusions. Therefore, we performed a meta-analysis pooling all published data from electronic databases with the purpose of clarifying the potential relationship. According to the inclusion and exclusion criteria, 18 independent studies with more than 2796 cases were included. As a result, high birth weight was found to increase the risk for bone tumor, with an odds ratio (OR) of 1.13 and a 95% confidence interval (95% CI) ranging from 1.01 to 1.27. The OR of bone tumor for an increase of 500 grams of birth weight was 1.01 (95% CI 1.00-1.02; p = 0.048 for linear trend). Interestingly, individuals with high birth weight had a greater risk for osteosarcoma (OR = 1.22, 95% CI 1.06-1.40, p = 0.006) than those with normal birth weight. In addition, in the subgroup analysis by geographical region, elevated risk was detected among Europeans (OR = 1.14, 95% CI 1.00-1.29, p = 0.049). The present meta-analysis supports a positive association between high birth weight and bone tumor risk.

  8. TEACHER-STUDENTS DISCOURSE IN ENGLISH TEACHING AT HIGH SCHOOL (CLASSROOM DISCOURSE ANALYSIS)

    Alamsyah Harahap

    2015-12-01

    The process of teaching and learning in the English classroom is an important aspect of successful English teaching and learning, and the analysis of classroom discourse is a very important form of classroom process research. The present study focuses on SMA (high school) English classroom discourse. Spradley's microethnography was the research method deployed. Through a detailed description and analysis of the collected data, referring to Sinclair and Coulthard's classroom discourse analysis model, the patterns of classroom discourse are made clear. On the basis of the discourse-pattern problems found, a few strategies for high school English teachers are put forward through teacher training, in order to improve English teaching and learning at high schools in Indonesia. The research results showed that teacher talk strongly dominated the English classroom discourse, accounting for 94% of teacher-student talk. The full IRF model of Sinclair and Coulthard was not found in the English classroom (only the IF pattern appeared, and no complete lesson was achieved).

  9. Macroscopic High-Temperature Structural Analysis Model of Small-Scale PCHE Prototype (II)

    Song, Kee Nam; Lee, Heong Yeon; Hong, Sung Deok; Park, Hong Yoon

    2011-01-01

    The IHX (intermediate heat exchanger) of a VHTR (very high-temperature reactor) is a core component that transfers the high heat generated by the VHTR at 950 °C to a hydrogen production plant. Korea Atomic Energy Research Institute manufactured a small-scale prototype of a PCHE (printed circuit heat exchanger) that was being considered as a candidate for the IHX. In this study, as a part of high-temperature structural integrity evaluation of the small-scale PCHE prototype, we carried out high-temperature structural analysis modeling and macroscopic thermal and elastic structural analysis for the small-scale PCHE prototype under small-scale gas-loop test conditions. The modeling and analysis were performed as a precedent study prior to the performance test in the small-scale gas loop. The results obtained in this study will be compared with the test results for the small-scale PCHE. Moreover, these results will be used in the design of a medium-scale PCHE prototype.

  10. Analysis of local warm forming of high strength steel using near infrared ray energy

    Yang, W. H., E-mail: whyang21@hyundai.com [Hyundai Motor Company, 700 Yeompo-ro, Buk-Gu, Ulsan, 683-791 (Korea, Republic of); Lee, K., E-mail: klee@deform.co.kr [Solution Lab, 502, 102, Dunsan-daero 117 beon-gil, Seo-Gu, Daejeon, 302-834 (Korea, Republic of); Lee, E. H., E-mail: mtgs2@kaist.ac.kr, E-mail: dyyang@kaist.ac.kr; Yang, D. Y., E-mail: mtgs2@kaist.ac.kr, E-mail: dyyang@kaist.ac.kr [KAIST, Science Town291, Daehak-ro, Yuseong-Gu, Daejeon 305-701 (Korea, Republic of)

    2013-12-16

    The automotive industry has been pressed to satisfy more rigorous fuel efficiency requirements to promote energy conservation, safety features and cost containment. To satisfy this need, high strength steel has been developed and used for many different vehicle parts. The use of high strength steels, however, requires careful analysis and creativity in order to accommodate its relatively high springback behavior. An innovative method, called local warm forming with near infrared ray, has been developed to help promote the use of high strength steels in sheet metal forming. For this method, local regions of the work piece are heated using infrared ray energy, thereby promoting the reduction of springback behavior. In this research, a V-bend test is conducted with DP980. After springback, the bend angles for specimens without local heating are compared to those with local heating. Numerical analysis has been performed using the commercial program, DEFORM-2D. This analysis is carried out with the purpose of understanding how changes to the local stress distribution will affect the springback during the unloading process. The results between experimental and computational approaches are evaluated to assure the accuracy of the simulation. Subsequent numerical simulation studies are performed to explore best practices with respect to thermal boundary conditions, timing, and applicability to the production environment.

  11. Nonlinear analysis of reinforced concrete structures subjected to high temperature and external load

    Sugawara, Y.; Goto, M.; Saito, K.; Suzuki, N.; Muto, A.; Ueda, M.

    1993-01-01

    A quarter of a century has passed since the finite element method was first applied to nonlinear problems concerning reinforced concrete structures, and the reliability of the analysis at ordinary temperature has been enhanced accordingly. By contrast, few studies have tried to deal with the nonlinear behavior of reinforced concrete structures subjected to high temperature and external loads simultaneously. It is generally known that the mechanical properties of concrete and steel are affected greatly by temperature. Therefore, in order to analyze the nonlinear behavior of reinforced concrete subjected to external loads at high temperature, it is necessary to construct constitutive models of the materials reflecting the influence of temperature. In this study, constitutive models of concrete and reinforcement that can express decreases in strength and stiffness at high temperature have been developed. A two-dimensional nonlinear finite element analysis program has been developed by use of these material models. The behavior of reinforced concrete beams subjected simultaneously to high temperature and shear forces were simulated using the developed analytical method. The results of the simulation agreed well with the experimental results, evidencing the validity of the developed material models and the finite element analysis program

  13. Experiences of High School Students about the Predictors of Tobacco Use: a Directed Qualitative Content Analysis

    Mahmoud Ghasemi

    2015-12-01

    Background and Objectives: Tobacco use is one of the most important risk factors increasing the burden of disease worldwide. Given the increasing rate of tobacco use, the aim of the present study was to explain the experiences of high school students regarding the determinants of use and non-use of tobacco (cigarettes and hookah), based on protection motivation theory. Materials and Methods: The present study is a qualitative study based on content analysis, carried out over five months, from 22 November 2014 to 20 April 2015, in male high schools in Noshahr. Data were collected through semi-structured interviews with 21 male high school students, of whom 7 smoked cigarettes, 7 used hookah, and 7 did not use any type of tobacco. Data analysis was carried out through directed qualitative content analysis. Results: Data analysis led to the extraction of 99 primary codes that were categorized into the 9 predetermined constructs of protection motivation theory: perceived vulnerability, perceived severity, fear, perceived self-efficacy, response cost, perceived response efficacy, extrinsic rewards, intrinsic rewards, and protection motivation. The findings showed that the most important predictors of tobacco use were response cost and high perceived rewards, while the most important predictors of non-use were perceived vulnerability, perceived severity, and high self-efficacy. Conclusions: The findings also showed that peer pressure, being present in a group using tobacco, and the absence of alternative recreational activities are among the most important factors in tobacco use. It is therefore suggested that health-department planners undertake comprehensive interventions addressing the individual and environmental determinants of tobacco use in order to reduce smoking.

  14. Data analysis in high energy physics. A practical guide to statistical methods

    Behnke, Olaf; Schoerner-Sadenius, Thomas; Kroeninger, Kevin; Schott, Gregory

    2013-01-01

    This practical guide covers the essential tasks in statistical data analysis encountered in high energy physics and provides comprehensive advice for typical questions and problems. The basic methods for inferring results from data are presented as well as tools for advanced tasks such as improving the signal-to-background ratio, correcting detector effects, determining systematics and many others. Concrete applications are discussed in analysis walkthroughs. Each chapter is supplemented by numerous examples and exercises and by a list of literature and relevant links. The book targets a broad readership at all career levels - from students to senior researchers.

  15. Precise Model Analysis for 3-phase High Power Converter using the Harmonic State Space Modeling

    Kwon, Jun Bum; Wang, Xiongfei; Blaabjerg, Frede

    2015-01-01

    This paper presents a generalized multi-frequency modeling and analysis methodology that can be used in control loop design and stability analysis. In terms of the switching frequency of a high power converter, harmonic interactions can arise if the voltage source converter has a low switching frequency ratio or multi-sampling frequency; the control bandwidth can then include switching components, and the system becomes unstable. This paper applies the Harmonic State Space (HSS) modeling method in order to find the transfer function for each harmonic term...
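
    For background, the linear time-periodic expansion underlying HSS modeling (a textbook sketch in Wereley's formulation, not the paper's specific converter model) is:

    ```latex
    % Linear time-periodic plant with Fourier-expanded matrices:
    \dot{x}(t) = A(t)\,x(t) + B(t)\,u(t), \qquad
    A(t) = \sum_{k \in \mathbb{Z}} A_k \, e^{j k \omega_0 t}.
    % Substituting x(t) = \sum_k x_k(t)\, e^{j k \omega_0 t} and matching harmonics:
    \dot{x}_k = \sum_m A_{k-m}\, x_m \;-\; j k \omega_0\, x_k \;+\; \sum_m B_{k-m}\, u_m .
    % Truncated and stacked, this yields the time-invariant HSS form
    \dot{X} = (\mathcal{A} - \mathcal{N})\, X + \mathcal{B}\, U, \qquad
    \mathcal{N} = \operatorname{blkdiag}(\ldots,\, -j\omega_0 I,\, 0,\, j\omega_0 I,\, \ldots),
    % with \mathcal{A}, \mathcal{B} block-Toeplitz in the Fourier coefficients;
    % per-harmonic transfer functions follow from this representation.
    ```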

  16. New mass-spectrometric facility for the analysis of highly radioactive samples

    Warmack, R.J.; Landau, L.; Christie, W.H.; Carter, J.A.

    1981-01-01

    A new facility has been completed for the analysis of highly radioactive, gamma-emitting solid samples. A commercial spark-source mass spectrometer was adapted for remote handling and loading. Electrodes are prepared in a hot cell and transported to the adjacent lead-shielded source for analysis. The source was redesigned for ease of shielding, loading, and maintenance. Both solutions and residues from irradiated nuclear fuel dissolutions have been analyzed for elemental concentrations to < 1 ppm; isotopic data have also been obtained.

  17. Device for high-temperature X-ray diffraction analysis

    Epifanov, V G; Zavilinskij, A V; Pet'kov, V V; Polenur, A V

    1975-01-07

    A device for high-temperature X-ray diffraction analysis is proposed, containing a vacuum chamber with a window for X-ray transit, in which sample and standard holders, a heater, thermal shields, and means for measuring sample and standard temperature are located. In order to increase the working temperature level and the accuracy of structural change detection, the heater is located between the sample and standard holders. The standard holder is linked with a mechanism that controls its position relative to the heater. The device is intended for investigating phase transformations by the differential thermal analysis method with simultaneous diffraction pattern detection using X-ray diffractometry.

  18. Software systems for processing and analysis at the NOVA high-energy laser facility

    Auerbach, J.M.; Montgomery, D.S.; McCauley, E.W.; Stone, G.F.

    1986-01-01

    A typical laser interaction experiment at the NOVA high-energy laser facility produces in excess of 20 Mbytes of digitized data. Extensive processing and analysis of this raw data from a wide variety of instruments is necessary to produce results that can be readily used to interpret the experiment. Using VAX-based computer hardware, software systems have been set up to convert the digitized instrument output to physics quantities describing the experiment. A relational data-base management system is used to coordinate all levels of processing and analysis. Software development emphasizes structured design, flexibility, automation, and ease of use

  19. Recommended HPI [High Pressure Injection] rates for the TMI-2 analysis exercise (0 to 300 minutes)

    Anderson, J.L.

    1987-09-01

    An international analysis exercise has been organized to evaluate the ability of nuclear reactor severe accident computer codes to predict the TMI-2 accident sequence and core damage progression during the first 300 minutes of the accident. A required boundary condition for the analysis exercise is the High Pressure Injection or make-up rates into the primary system during the accident. Recommended injection rates for the first 300 minutes of the accident are presented. Recommendations for several sensitivity studies are also presented. 6 refs., 5 figs., 1 tab

  20. Analysis of lipid experiments (ALEX): a software framework for analysis of high-resolution shotgun lipidomics data.

    Peter Husen

    Global lipidomics analysis across large sample sizes produces high-content datasets that require dedicated software tools supporting lipid identification and quantification, efficient data management and lipidome visualization. Here we present a novel software-based platform for streamlined data processing, management and visualization of shotgun lipidomics data acquired using high-resolution Orbitrap mass spectrometry. The platform features the ALEX framework, designed for automated identification and export of lipid species intensity directly from proprietary mass spectral data files, and an auxiliary workflow using database exploration tools for integration of sample information, computation of lipid abundance and lipidome visualization. A key feature of the platform is the organization of lipidomics data in "database table format", which provides the user with unsurpassed flexibility for rapid lipidome navigation using selected features within the dataset. To demonstrate the efficacy of the platform, we present a comparative neurolipidomics study of cerebellum, hippocampus and somatosensory barrel cortex (S1BF) from wild-type and knockout mice devoid of the putative lipid phosphate phosphatase PRG-1 (plasticity-related gene-1). The presented framework is generic, extendable to processing and integration of other lipidomic data structures, can be interfaced with post-processing protocols supporting statistical testing and multivariate analysis, and can serve as an avenue for disseminating lipidomics data within the scientific community. The ALEX software is available at www.msLipidomics.info.