WorldWideScience

Sample records for analyses predict sistergroup

  1. The sister-group relationships of the largest family of lichenized fungi, Parmeliaceae (Lecanorales, Ascomycota).

    Science.gov (United States)

    Singh, Garima; Divakar, Pradeep K; Dal Grande, Francesco; Otte, Jürgen; Parnmen, Sittiporn; Wedin, Mats; Crespo, Ana; Lumbsch, H Thorsten; Schmitt, Imke

    2013-10-01

    Parmeliaceae is the largest family of lichen-forming fungi. In spite of its importance for fungal diversity, its relationships with other families in Lecanorales remain poorly known. To better understand the evolutionary history of the diversification of lineages and species richness in Parmeliaceae, it is important to know the phylogenetic relationships of the closest relatives of the family. A recent study based on two molecular loci suggested that either Protoparmelia s. str. or a group consisting of Gypsoplaca and Protoparmelia s. str. was the sister group of Parmeliaceae, but that study could not distinguish between these two alternatives. Here, we used a four-locus phylogeny (nuLSU, ITS, RPB1, MCM7) to reveal relationships of Parmeliaceae with other potential relatives in Lecanorales. Maximum likelihood and Bayesian analyses showed that Protoparmelia is polyphyletic, with Protoparmelia s. str. (including Protoparmelia badia and Protoparmelia picea) being most closely related to Parmeliaceae s. str., while the Protoparmelia atriseda-group formed the sister-group to Miriquidica. Gypsoplaca formed the sister-group to the Parmeliaceae s. str. + Protoparmelia s. str. clade. Monophyly of Protoparmelia as currently circumscribed, and Gypsoplaca as sister-group to Parmeliaceae s. str., were both significantly rejected by alternative hypothesis testing. PMID:24119410

  2. The first record of a trans-oceanic sister-group relationship between obligate vertebrate troglobites.

    Directory of Open Access Journals (Sweden)

    Prosanta Chakrabarty

    Full Text Available We show, using the most complete phylogeny of one of the most species-rich orders of vertebrates (Gobiiformes) and calibrations from the rich fossil record of teleost fishes, that the genus Typhleotris, endemic to subterranean karst habitats in southwestern Madagascar, is the sister group to Milyeringa, endemic to similar subterranean systems in northwestern Australia. Both groups are eyeless, and our phylogenetic and biogeographic results show that these obligate cave fishes, now found on opposite ends of the Indian Ocean (separated by nearly 7,000 km), are each other's closest relatives and owe their origins to the break-up of the southern supercontinent, Gondwana, at the end of the Cretaceous period. Trans-oceanic sister-group relationships are otherwise unknown between blind, cave-adapted vertebrates, and our results provide an extraordinary case of Gondwanan vicariance.

  3. Benchmark analyses of prediction models for pipe wall thinning

    International Nuclear Information System (INIS)

    In recent years, the importance of utilizing a prediction model or code for the management of pipe wall thinning has been recognized. In the Japan Society of Mechanical Engineers (JSME), a working group on prediction methods has been set up within a research committee studying the management of pipe wall thinning. Some prediction models for pipe wall thinning were reviewed by benchmark analyses in terms of their prediction characteristics and the specifications required for their use in the management of pipe wall thinning in power generation facilities. This paper introduces the prediction models selected from the existing flow-accelerated corrosion and/or liquid droplet impingement erosion models. The experimental results and an example of the wall thickness measurement results used as benchmark data are also presented. (author)

  4. Multiple regression analyses in the prediction of aerospace instrument costs

    Science.gov (United States)

    Tran, Linh

    The aerospace industry has been investing for decades in ways to improve its efficiency in estimating the project life cycle cost (LCC). One of the major focuses in the LCC is the cost prediction of aerospace instruments during the early conceptual design phase of the project. The accuracy of early cost predictions affects project scheduling and funding, and inaccurate early estimates are often the major cause of project cost overruns. The prediction of instruments' cost is based on the statistical analysis of these independent variables: Mass (kg), Power (watts), Instrument Type, Technology Readiness Level (TRL), Destination: earth orbiting or planetary, Data rates (kbps), Number of bands, Number of channels, Design life (months), and Development duration (months). The author proposes a cost prediction approach for aerospace instruments based on these statistical analyses: Clustering Analysis, Principal Components Analysis (PCA), Bootstrap, and multiple regressions (both linear and non-linear). In the proposed approach, the Cost Estimating Relationship (CER) will be developed for the dependent variable Instrument Cost by using a combination of multiple independent variables. "The Full Model" will be developed and executed to estimate the full set of nine variables. The SAS program, Excel, Automated Cost Estimating Integrated Tools (ACEIT) and Minitab are the tools used to aid the analysis. Through the analysis, the cost drivers will be identified, which will help develop an ultimate cost estimating software tool for instrument cost prediction and optimization of future missions.

  5. Uncertainty and Sensitivity Analyses of Model Predictions of Solute Transport

    Science.gov (United States)

    Skaggs, T. H.; Suarez, D. L.; Goldberg, S. R.

    2012-12-01

    Soil salinity reduces crop production on about 50% of irrigated lands worldwide. One roadblock to increased use of advanced computer simulation tools for better managing irrigation water and soil salinity is that the models usually do not provide an estimate of the uncertainty in model predictions, which can be substantial. In this work, we investigate methods for putting confidence bounds on HYDRUS-1D simulations of solute leaching in soils. Uncertainties in model parameters estimated with pedotransfer functions are propagated through simulation model predictions using Monte Carlo simulation. Generalized sensitivity analyses indicate which parameters are most significant for quantifying uncertainty. The simulation results are compared with experimentally observed transport variability in a number of large, replicated lysimeters.
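
    As a minimal illustration of the Monte Carlo step described above (a sketch only, in Python): a toy one-parameter leaching model stands in for HYDRUS-1D, whose actual inputs and outputs are not reproduced here, and the parameter distribution is hypothetical.

        import numpy as np

        rng = np.random.default_rng(42)

        # Hypothetical pedotransfer estimate: saturated hydraulic conductivity
        # Ks (cm/day) treated as log-normal with an assumed mean and spread.
        ks_draws = rng.lognormal(mean=np.log(25.0), sigma=0.4, size=1000)

        def leached_fraction(ks, depth_cm=100.0, flux_cm_day=1.0, days=365):
            # Stand-in for a HYDRUS-1D run: a toy exponential leaching model.
            return 1.0 - np.exp(-flux_cm_day * days * ks / (depth_cm * 25.0))

        outputs = leached_fraction(ks_draws)
        lo, hi = np.percentile(outputs, [2.5, 97.5])
        print(f"95% band on predicted leached fraction: [{lo:.3f}, {hi:.3f}]")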

  6. Climate Prediction Center (CPC) US daily temperature analyses

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The U.S. daily temperature analyses are maps depicting various temperature quantities utilizing daily maximum and minimum temperature data across the US. Maps are...

  7. Predicting item popularity: Analysing local clustering behaviour of users

    Science.gov (United States)

    Liebig, Jessica; Rao, Asha

    2016-01-01

    Predicting the popularity of items in rating networks is an interesting but challenging problem. This is especially so when an item has first appeared and has received very few ratings. In this paper, we propose a novel approach to predicting the future popularity of new items in rating networks, defining a new bipartite clustering coefficient to predict the popularity of movies and stories in the MovieLens and Digg networks respectively. We show that the clustering behaviour of the first user who rates a new item gives insight into the future popularity of that item. Our method predicts, with a success rate of over 65% for the MovieLens network and over 50% for the Digg network, the future popularity of an item. This is a major improvement on current results.
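
    The record above does not reproduce the paper's new bipartite clustering coefficient, so the Python sketch below computes a standard Latapy-style pairwise-overlap coefficient on a toy user-item rating graph as a stand-in; the graph and node names are invented.

        import networkx as nx

        def bipartite_cc(G, u):
            # Latapy-style clustering of node u in a bipartite graph: the mean
            # Jaccard overlap of u's neighbourhood with that of every node
            # sharing at least one neighbour with u.
            nbrs = set(G[u])
            peers = {w for v in nbrs for w in G[v]} - {u}
            if not peers:
                return 0.0
            return sum(len(nbrs & set(G[w])) / len(nbrs | set(G[w]))
                       for w in peers) / len(peers)

        # Toy rating network: users u1-u3 rate items m1-m3 (an edge = a rating).
        G = nx.Graph([("u1", "m1"), ("u1", "m2"), ("u2", "m1"),
                      ("u2", "m3"), ("u3", "m2"), ("u3", "m3")])
        print(bipartite_cc(G, "u1"))  # clustering behaviour of an item's first rater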

  8. Analysing Twitter and web queries for flu trend prediction

    Science.gov (United States)

    2014-01-01

    Background: Social media platforms encourage people to share diverse aspects of their daily life. Among these, shared health-related information might be used to infer health status and incidence rates for specific conditions or symptoms. In this work, we present an infodemiology study that evaluates the use of Twitter messages and search engine query logs to estimate and predict the incidence rate of influenza-like illness in Portugal. Results: Based on a manually classified dataset of 2704 tweets from Portugal, we selected a set of 650 textual features to train a Naïve Bayes classifier to identify tweets mentioning flu or flu-like illness or symptoms. We obtained a precision of 0.78 and an F-measure of 0.83, based on cross validation over the complete annotated set. Furthermore, we trained a multiple linear regression model to estimate the health-monitoring data from the Influenzanet project, using as predictors the relative frequencies obtained from the tweet classification results and from query logs, and achieved a correlation ratio of 0.89 (p < 0.001). These classification and regression models were also applied to estimate the flu incidence in the following flu season, achieving a correlation of 0.72. Conclusions: Previous studies addressing the estimation of disease incidence based on user-generated content have mostly focused on the English language. Our results further validate those studies and show that by changing the initial steps of data preprocessing and feature extraction and selection, the proposed approaches can be adapted to other languages. Additionally, we investigated whether the predictive model created can be applied to data from the subsequent flu season. In this case, although the prediction result was good, an initial phase to adapt the regression model could be necessary to achieve more robust results. PMID:25077431
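
    A compressed Python sketch of the two-stage pipeline described above (tweet classification, then regression on relative frequencies), using scikit-learn; the tweets, labels, and incidence figures are invented for illustration, and the actual study used 650 selected features plus query-log predictors.

        import numpy as np
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.linear_model import LinearRegression

        # Stage 1: Naive Bayes classifier flagging flu-related tweets (toy data).
        tweets = ["estou com febre e tosse", "hoje o jogo foi otimo",
                  "gripe forte esta semana", "novo filme no cinema"]
        labels = [1, 0, 1, 0]
        vec = CountVectorizer(ngram_range=(1, 2))
        clf = MultinomialNB().fit(vec.fit_transform(tweets), labels)

        # Stage 2: regress weekly ILI incidence on the relative frequency of
        # flu-positive tweets (hypothetical weekly values).
        flu_tweet_rate = np.array([[0.01], [0.03], [0.08], [0.05]])
        ili_incidence = np.array([12.0, 30.0, 85.0, 50.0])
        reg = LinearRegression().fit(flu_tweet_rate, ili_incidence)

        new_week = ["muita tosse e febre hoje", "concerto incrivel ontem"]
        rate = clf.predict(vec.transform(new_week)).mean()
        print(reg.predict([[rate]]))  # estimated incidence for the new week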

  9. TRAC analyses and GIRAFFE tests for PCCS performance prediction

    International Nuclear Information System (INIS)

    The passive containment cooling system (PCCS) would remove decay heat by steam condensation, without any electric power supply or operator action, should an accident occur in a nuclear reactor. There is, however, concern that non-condensable gas might influence the PCCS performance in the event of an accident. This paper summarizes Toshiba's activities in PCCS development, in particular those relating to TRAC qualification for PCCS performance prediction and the GIRAFFE tests. TRAC is a best-estimate thermal hydraulic analysis code. GIRAFFE is a full-height test facility simulating the SBWR containment with the PCCS, located at Toshiba's Ukishima site. (author)

  10. Analysing earthquake slip models with the spatial prediction comparison test

    KAUST Repository

    Zhang, L.

    2014-11-10

    Earthquake rupture models inferred from inversions of geophysical and/or geodetic data exhibit remarkable variability due to uncertainties in modelling assumptions, the use of different inversion algorithms, or variations in data selection and data processing. A robust statistical comparison of different rupture models obtained for a single earthquake is needed to quantify the intra-event variability, both for benchmark exercises and for real earthquakes. The same approach may be useful to characterize (dis-)similarities in events that are typically grouped into a common class of events (e.g. moderate-size crustal strike-slip earthquakes or tsunamigenic large subduction earthquakes). For this purpose, we examine the performance of the spatial prediction comparison test (SPCT), a statistical test developed to compare spatial (random) fields by means of a chosen loss function that describes an error relation between a 2-D field (‘model’) and a reference model. We implement and calibrate the SPCT approach for a suite of synthetic 2-D slip distributions, generated as spatial random fields with various characteristics, and then apply the method to results of a benchmark inversion exercise with known solution. We find the SPCT to be sensitive to different spatial correlation lengths and different heterogeneity levels of the slip distributions. The SPCT approach proves to be a simple and effective tool for ranking the slip models with respect to a reference model.
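
    A toy Python sketch of the comparison idea: score two candidate slip models against a reference with a chosen loss function and test the mean loss differential. The real SPCT replaces the naive test below with one whose variance estimate accounts for spatial correlation in the loss-differential field; all fields here are synthetic.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        ref = rng.random((32, 32))                   # reference slip field (toy)
        model_a = ref + 0.05 * rng.standard_normal(ref.shape)
        model_b = ref + 0.15 * rng.standard_normal(ref.shape)

        loss = lambda f, g: (f - g) ** 2             # one possible loss function
        d = loss(model_a, ref) - loss(model_b, ref)  # loss-differential field

        # Naive test of "mean loss differential == 0"; the SPCT corrects this
        # for spatial dependence among the grid cells.
        t, p = stats.ttest_1samp(d.ravel(), 0.0)
        print(f"t = {t:.2f}, p = {p:.3g} (negative t favours model_a)")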

  11. On the use of uncertainty analyses to test hypotheses regarding deterministic model predictions of environmental processes

    International Nuclear Information System (INIS)

    This paper illustrates the use of Monte Carlo parameter uncertainty and sensitivity analyses to test hypotheses regarding predictions of deterministic models of environmental transport, dose, risk and other phenomena. The methodology is illustrated by testing whether 238Pu is transferred more readily than 239+240Pu from the gastrointestinal (GI) tract of cattle to their tissues (muscle, liver and blood). This illustration is based on a study wherein beef cattle grazed for up to 1064 days on a fenced plutonium (Pu)-contaminated arid site in Area 13 near the Nevada Test Site in the United States. Periodically, cattle were sacrificed and their tissues analyzed for Pu and other radionuclides. Conditional sensitivity analyses of the model predictions were also conducted. These analyses indicated that the Pu cattle tissue concentrations had the largest impact of any model parameter on the pdf of predicted Pu fractional transfers. Issues that arise in conducting uncertainty and sensitivity analyses of deterministic models are discussed. (author)

  12. Crack growth prediction analyses of a RPV prototype under PTS loading

    International Nuclear Information System (INIS)

    This work presents the numerical finite element analysis and fracture mechanics procedure carried out to predict crack growth behavior of a reactor pressure vessel (RPV) prototype during a pressurized thermal shock (PTS) experiment. A brief description of the PTS experiment is given, followed by a presentation of the numerical models used for thermal structural analysis and to obtain the crack driving force parameters. Fracture mechanics procedures, with different levels of complexity, are presented for crack growth prediction. The results obtained using a simplified procedure are compared with those based on 3D finite element analyses. (author)

  13. Bacterial regulon modeling and prediction based on systematic cis regulatory motif analyses

    Science.gov (United States)

    Liu, Bingqiang; Zhou, Chuan; Li, Guojun; Zhang, Hanyuan; Zeng, Erliang; Liu, Qi; Ma, Qin

    2016-03-01

    Regulons are the basic units of the response system in a bacterial cell, and each consists of a set of transcriptionally co-regulated operons. Regulon elucidation is the basis for studying the bacterial global transcriptional regulation network. In this study, we designed a novel co-regulation score between a pair of operons based on accurate operon identification and cis regulatory motif analyses, which can capture their co-regulation relationship much better than other scores. Taking full advantage of this discovery, we developed a new computational framework and built a novel graph model for regulon prediction. This model integrates the motif comparison and clustering and makes the regulon prediction problem substantially more solvable and accurate. To evaluate our prediction, a regulon coverage score was designed based on the documented regulons and their overlap with our prediction; and a modified Fisher Exact test was implemented to measure how well our predictions match the co-expressed modules derived from E. coli microarray gene-expression datasets collected under 466 conditions. The results indicate that our program consistently performed better than others in terms of the prediction accuracy. This suggests that our algorithms substantially improve the state-of-the-art, leading to a computational capability to reliably predict regulons for any bacteria.

  14. A multigene phylogeny of the fly superfamily Asiloidea (Insecta): Taxon sampling and additional genes reveal the sister-group to all higher flies (Cyclorrhapha).

    Science.gov (United States)

    Trautwein, Michelle D; Wiegmann, Brian M; Yeates, David K

    2010-09-01

    Asiloidea are a group of nine lower brachyceran fly families, considered to be the closest relatives of the large fly radiation Eremoneura (Cyclorrhapha + Empidoidea). The evidence for asiloid monophyly is limited, and few characters define the relationships between the families of Asiloidea and Eremoneura. Additionally, the enigmatic genera Hilarimorpha and Apystomyia retain morphological characters of both asiloids and higher flies. We use the nuclear protein-coding gene CAD and 28S rDNA to test the monophyly of Asiloidea and to resolve its relationship to Eremoneura. We explore the effects of taxon sampling on support values and topological stability, the resolving power of additional genes, and hypothesis testing using four-cluster likelihood mapping. We find that: (1) the 'asiloid' genus Apystomyia is sister to Cyclorrhapha, (2) the remaining asiloids are monophyletic to the exclusion of the family Bombyliidae, and (3) our best estimate of relationships places the asiloid flies excluding Bombyliidae as the sister-group to Eremoneura, though high support is lacking. PMID:20399874

  15. The mitochondrial genome of Paraspadella gotoi is highly reduced and reveals that chaetognaths are a sister-group to protostomes

    Energy Technology Data Exchange (ETDEWEB)

    Helfenbein, Kevin G.; Fourcade, H. Matthew; Vanjani, Rohit G.; Boore, Jeffrey L.

    2004-05-01

    We report the first complete mitochondrial (mt) DNA sequence from a member of the phylum Chaetognatha (arrow worms). The Paraspadella gotoi mtDNA is highly unusual, missing 23 of the genes commonly found in animal mtDNAs, including atp6, which has otherwise been found universally to be present. Its 14 genes are unusually arranged into two groups, one on each strand. One group is punctuated by numerous non-coding intergenic nucleotides, while the other group is tightly packed, having no non-coding nucleotides, leading to speculation that there are two transcription units with differing modes of expression. The phylogenetic position of the Chaetognatha within the Metazoa has long been uncertain, with conflicting or equivocal results from various morphological analyses and rRNA sequence comparisons. Comparisons here of amino acid sequences from mitochondrially encoded proteins give a single most parsimonious tree that supports a position of Chaetognatha as sister to the protostomes studied here. From this, one can more clearly interpret the patterns of evolution of various developmental features, especially regarding the embryological fate of the blastopore.

  16. Design and Antigenic Epitopes Prediction of a New Trial Recombinant Multiepitopic Rotaviral Vaccine: In Silico Analyses.

    Science.gov (United States)

    Jafarpour, Sima; Ayat, Hoda; Ahadi, Ali Mohammad

    2015-01-01

    Rotavirus is the major etiologic factor of severe diarrheal disease. Natural infection provides protection against subsequent rotavirus infection and diarrhea. This research presents a new vaccine designed using computational models. In this study, three types of epitopes are considered (linear, conformational, and combinational) in a proposed model protein. Several studies on rotavirus vaccines have shown that VP6 and VP4 proteins are good candidates for vaccine production. In the present study, a fusion protein was designed as a new generation of rotavirus vaccines by bioinformatics analyses. This model-based study, using the ABCpred, BCPREDS, Bcepred, and Ellipro web servers, showed that the peptide presented in this article has the necessary properties to act as a vaccine. Prediction of linear B-cell epitopes of peptides is helpful to investigate whether these peptides are able to activate humoral immunity. PMID:25965449

  17. PASMet: a web-based platform for prediction, modelling and analyses of metabolic systems.

    Science.gov (United States)

    Sriyudthsak, Kansuporn; Mejia, Ramon Francisco; Arita, Masanori; Hirai, Masami Yokota

    2016-07-01

    PASMet (Prediction, Analysis and Simulation of Metabolic networks) is a web-based platform for proposing and verifying mathematical models to understand the dynamics of metabolism. The advantages of PASMet include user-friendliness and accessibility, which enable biologists and biochemists to easily perform mathematical modelling. PASMet offers a series of user-functions to handle the time-series data of metabolite concentrations. The functions are organised into four steps: (i) Prediction of a probable metabolic pathway and its regulation; (ii) Construction of mathematical models; (iii) Simulation of metabolic behaviours; and (iv) Analysis of metabolic system characteristics. Each function contains various statistical and mathematical methods that can be used independently. Users who may not have enough knowledge of computing or programming can easily and quickly analyse their local data without software downloads, updates or installations. Users only need to upload their files in comma-separated values (CSV) format or enter their model equations directly into the website. Once the time-series data or mathematical equations are uploaded, PASMet automatically performs the computation on the server side. Then, users can interactively view their results and directly download them to their local computers. PASMet is freely available with no login requirement at http://pasmet.riken.jp/ from major web browsers on Windows, Mac and Linux operating systems. PMID:27174940

  18. Analyses of Optimal Embedding Dimension and Delay for Local Linear Prediction Model

    Institute of Scientific and Technical Information of China (English)

    MENG Qing-Fang; PENG Yu-Hua; LIU Yun-Xia; SUN Wei-Feng

    2007-01-01

    In the reconstructed phase space, a novel local linear prediction model is proposed to predict chaotic time series. The parameters of the proposed model take values that are different from those of the phase space reconstruction. We propose a criterion based on prediction error to determine the optimal parameters of the proposed model. The simulation results show that the proposed model can effectively make one-step and multi-step predictions for chaotic time series, and the one-step and multi-step prediction accuracy of the proposed model is superior to that of the traditional local linear prediction.
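
    A minimal Python sketch of a local linear predictor of the kind described above: delay-embed the series, find the nearest neighbours of the current state, fit an affine map to their successors, and apply it. The embedding dimension, delay, and neighbourhood size below are illustrative; the paper's point is precisely that such parameters should be tuned by prediction error.

        import numpy as np

        def delay_embed(x, dim, tau):
            n = len(x) - (dim - 1) * tau
            return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

        def local_linear_predict(x, dim=3, tau=1, k=10):
            X = delay_embed(x, dim, tau)
            query = X[-1]
            # Neighbours of the current state (the last row has no successor).
            dists = np.linalg.norm(X[:-1] - query, axis=1)
            idx = np.argsort(dists)[:k]
            A = np.column_stack([X[idx], np.ones(k)])      # affine local model
            successors = x[idx + (dim - 1) * tau + 1]
            coef, *_ = np.linalg.lstsq(A, successors, rcond=None)
            return np.append(query, 1.0) @ coef

        # Logistic map as a toy chaotic series.
        x = np.empty(500); x[0] = 0.4
        for i in range(1, 500):
            x[i] = 3.9 * x[i - 1] * (1.0 - x[i - 1])
        print(local_linear_predict(x), 3.9 * x[-1] * (1.0 - x[-1]))  # predicted vs true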

  19. Uncertainty and Sensitivity Analyses of a Two-Parameter Impedance Prediction Model

    Science.gov (United States)

    Jones, M. G.; Parrott, T. L.; Watson, W. R.

    2008-01-01

    This paper presents comparisons of predicted impedance uncertainty limits derived from Monte-Carlo-type simulations with a Two-Parameter (TP) impedance prediction model and measured impedance uncertainty limits based on multiple tests acquired in NASA Langley test rigs. These predicted and measured impedance uncertainty limits are used to evaluate the effects of simultaneous randomization of each input parameter for the impedance prediction and measurement processes. A sensitivity analysis is then used to further evaluate the TP prediction model by varying its input parameters on an individual basis. The variation imposed on the input parameters is based on measurements conducted with multiple tests in the NASA Langley normal incidence and grazing incidence impedance tubes; thus, the input parameters are assigned uncertainties commensurate with those of the measured data. These same measured data are used with the NASA Langley impedance measurement (eduction) processes to determine the corresponding measured impedance uncertainty limits, such that the predicted and measured impedance uncertainty limits (95% confidence intervals) can be compared. The measured reactance 95% confidence intervals encompass the corresponding predicted reactance confidence intervals over the frequency range of interest. The same is true for the confidence intervals of the measured and predicted resistance at near-resonance frequencies, but the predicted resistance confidence intervals are lower than the measured resistance confidence intervals (no overlap) at frequencies away from resonance. A sensitivity analysis indicates the discharge coefficient uncertainty is the major contributor to uncertainty in the predicted impedances for the perforate-over-honeycomb liner used in this study. This insight regarding the relative importance of each input parameter will be used to guide the design of experiments with test rigs currently being brought on-line at NASA Langley.

  20. Prediction of hybrid performance in maize using molecular markers and joint analyses of hybrids and parental inbreds.

    Science.gov (United States)

    Schrag, Tobias A; Möhring, Jens; Melchinger, Albrecht E; Kusterer, Barbara; Dhillon, Baldev S; Piepho, Hans-Peter; Frisch, Matthias

    2010-01-01

    The identification of superior hybrids is important for the success of a hybrid breeding program. However, field evaluation of all possible crosses among inbred lines requires extremely large resources. Therefore, efforts have been made to predict hybrid performance (HP) by using field data of related genotypes and molecular markers. In the present study, the main objective was to assess the usefulness of pedigree information in combination with the covariance between general combining ability (GCA) and per se performance of parental lines for HP prediction. In addition, we compared the prediction efficiency of AFLP and SSR marker data, estimated marker effects separately for reciprocal allelic configurations (among heterotic groups) of heterozygous marker loci in hybrids, and imputed missing AFLP marker data for marker-based HP prediction. Unbalanced field data of 400 maize dent × flint hybrids from 9 factorials and of 79 inbred parents were subjected to joint analyses with mixed linear models. The inbreds were genotyped with 910 AFLP and 256 SSR markers. Efficiency of prediction (R²) was estimated by cross-validation for hybrids having no or one parent evaluated in testcrosses. Best linear unbiased prediction of GCA and specific combining ability resulted in the highest efficiencies for HP prediction for both traits (R² = 0.6-0.9), if pedigree and line per se data were used. However, without such data, HP for grain yield was more efficiently predicted using molecular markers. The additional modifications of the marker-based approaches had no clear effect. Our study showed the high potential of joint analyses of hybrids and parental inbred lines for the prediction of performance of untested hybrids. PMID:19916002
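
    A toy Python illustration of the combining-ability logic behind HP prediction: estimate GCA effects from tested crosses and predict an untested hybrid as mu + GCA(dent) + GCA(flint). This is a deliberately crude least-squares version; the study itself used BLUP with pedigree, marker, and line per se information, and the data below are invented.

        import pandas as pd

        # Hypothetical dent x flint factorial with one untested cross (D3 x F2).
        obs = pd.DataFrame({
            "dent":  ["D1", "D1", "D2", "D2", "D3"],
            "flint": ["F1", "F2", "F1", "F2", "F1"],
            "yield": [9.1, 8.7, 9.6, 9.0, 8.5],
        })
        mu = obs["yield"].mean()
        gca_dent = obs.groupby("dent")["yield"].mean() - mu
        gca_flint = obs.groupby("flint")["yield"].mean() - mu

        # Predicted performance of the untested hybrid, ignoring SCA.
        print(mu + gca_dent["D3"] + gca_flint["F2"])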

  1. Analysing the Relevance of Experience Partitions to the Prediction of Players’ Self-Reports of Affect

    DEFF Research Database (Denmark)

    Martínez, Héctor Pérez; Yannakakis, Georgios N.

    2011-01-01

    A common practice in modeling affect from physiological signals consists of reducing the signals to a set of statistical features that feed predictors of self-reported emotions. This paper analyses the impact of various time-windows, used for the extraction of physiological features, to the...

  2. Measuring Usable Knowledge: Teachers' Analyses of Mathematics Classroom Videos Predict Teaching Quality and Student Learning

    Science.gov (United States)

    Kersting, Nicole B.; Givvin, Karen B.; Thompson, Belinda J.; Santagata, Rossella; Stigler, James W.

    2012-01-01

    This study explores the relationships between teacher knowledge, teaching practice, and student learning in mathematics. It extends previous work that developed and evaluated an innovative approach to assessing teacher knowledge based on teachers' analyses of classroom video clips. Teachers watched and commented on 13 fraction clips. These written…

  3. Map on predicted deposition of Cs-137 in Spanish soils from geostatistical analyses

    International Nuclear Information System (INIS)

    The knowledge of the distribution of 137Cs deposition over Spanish mainland soils, along with the geographical, physical and morphological terrain information, enables us to know the 137Cs background content in soil. This could be useful as a tool in a hypothetical situation of an accident involving a radioactive discharge or in soil erosion studies. A Geographic Information System (GIS) would allow the gathering of all the mentioned information. In this work, gamma measurements of 137Cs on 34 Spanish mainland soils, rainfall data taken from 778 weather stations, soil types and geographical and physical terrain information were input into a GIS. Geostatistical techniques were applied to interpolate values of 137Cs activity at unsampled places, obtaining prediction maps of 137Cs deposition. Up to now, geostatistical methods have been used to model spatial continuity of data. Through semivariance and cross-covariance functions the spatial correlation of such data can be studied and described. Ordinary and simple kriging techniques were carried out to map spatial patterns of 137Cs deposition, and ordinary and simple co-kriging were used to improve the prediction map obtained through a second related variable: namely the rainfall. To choose the best prediction map of 137Cs deposition, the spatial dependence of the variable, the correlation coefficient and the prediction errors were evaluated using the different models previously mentioned. The best result for the 137Cs deposition map was obtained when applying the co-kriging techniques. - Highlights: ► 137Cs activity data for Spanish soils were implemented in a GIS. ► Prediction maps of 137Cs fallout were produced with kriging techniques. ► More accurate prediction surfaces were obtained using co-kriging techniques. ► Rainfall is the second variable used in the co-kriging technique.
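
    A short Python sketch of the ordinary kriging step, assuming the third-party pykrige package; the sample coordinates and activities are invented, and the co-kriging with rainfall that gave the best map is not shown.

        import numpy as np
        from pykrige.ok import OrdinaryKriging

        # Hypothetical sampling locations (km) and 137Cs activities (Bq/m2).
        x = np.array([10.0, 40.0, 80.0, 120.0, 200.0])
        y = np.array([15.0, 90.0, 30.0, 160.0, 70.0])
        cs137 = np.array([1200.0, 2100.0, 900.0, 1800.0, 1500.0])

        ok = OrdinaryKriging(x, y, cs137, variogram_model="spherical")
        gridx = np.arange(0.0, 220.0, 10.0)
        gridy = np.arange(0.0, 180.0, 10.0)
        z_pred, z_var = ok.execute("grid", gridx, gridy)  # prediction map + variances
        print(z_pred.shape)  # (len(gridy), len(gridx))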

  4. Improving retention: predicting at-risk students by analysing clicking behaviour in a virtual learning environment

    OpenAIRE

    Wolff, Annika; Zdrahal, Zdenek; Nikolov, Andriy; Pantucek, Michal

    2013-01-01

    One of the key interests for learning analytics is how it can be used to improve retention. This paper focuses on work conducted at the Open University (OU) into predicting students who are at risk of failing their module. The Open University is one of the world's largest distance learning institutions. Since tutors do not interact face to face with students, it can be difficult for tutors to identify and respond to students who are struggling in time to try to resolve the difficulty. Predict...

  5. Finite Element Creep Damage Analyses and Life Prediction of P91 Pipe Containing Local Wall Thinning Defect

    Science.gov (United States)

    Xue, Jilin; Zhou, Changyu

    2016-03-01

    Creep continuum damage finite element (FE) analyses were performed for P91 steel pipe containing a local wall thinning (LWT) defect subjected to monotonic internal pressure, monotonic bending moment, and combined internal pressure and bending moment, using an orthogonal experimental design method. The creep damage lives of the pipe containing an LWT defect under different load conditions were obtained. Then, creep damage life formulas were regressed based on the creep damage life results from the FE method. At the same time, a skeletal point rupture stress was found and used for life prediction, which was compared with the creep damage lives obtained by continuum damage analyses. From the results, the failure lives of a pipe containing an LWT defect can be obtained accurately by using the skeletal point rupture stress method. Finally, the influence of the LWT defect geometry was analysed, which indicated that the relative defect depth was the most significant factor for the creep damage life of a pipe containing an LWT defect.

  6. Aeromechanics and Aeroacoustics Predictions of the Boeing-SMART Rotor Using Coupled-CFD/CSD Analyses

    Science.gov (United States)

    Bain, Jeremy; Sim, Ben W.; Sankar, Lakshmi; Brentner, Ken

    2010-01-01

    This paper will highlight helicopter aeromechanics and aeroacoustics prediction capabilities developed by Georgia Institute of Technology, the Pennsylvania State University, and Northern Arizona University under the Helicopter Quieting Program (HQP) sponsored by the Tactical Technology Office of the Defense Advanced Research Projects Agency (DARPA). First initiated in 2004, the goal of the HQP was to develop high fidelity, state-of-the-art computational tools for designing advanced helicopter rotors with reduced acoustic perceptibility and enhanced performance. A critical step towards achieving this objective is the development of rotorcraft prediction codes capable of assessing a wide range of helicopter configurations and operations for future rotorcraft designs. This includes novel next-generation rotor systems that incorporate innovative passive and/or active elements to meet future challenging military performance and survivability goals.

  7. Predictability of Regional Climate: A Bayesian Approach to Analysing a WRF Model Ensemble

    Science.gov (United States)

    Bruyere, C. L.; Mesquita, M. D. S.; Paimazumder, D.

    2013-12-01

    This study investigates aspects of climate predictability with a focus on climatic variables and different characteristics of extremes over nine North American climatic regions and two selected Atlantic sectors. An ensemble of state-of-the-art Weather Research and Forecasting Model (WRF) simulations is used for the analysis. The ensemble is comprised of a combination of various physics schemes, initial conditions, domain sizes, boundary conditions and breeding techniques. The main objectives of this research are: 1) to increase our understanding of the ability of WRF to capture regional climate information, both for individual ensemble members and for the ensemble collectively; 2) to investigate the role of different members and their synergy in reproducing regional climate; and 3) to estimate the associated uncertainty. In this study, we propose a Bayesian framework to study the predictability of extremes and associated uncertainties in order to provide a wealth of knowledge about WRF reliability and provide further clarity and understanding of the sensitivities and optimal combinations. The choice of the Bayesian model, as opposed to standard methods, is made because: a) this method has a smaller mean square error than standard statistics, which makes it a more robust method; b) it allows for the use of small sample sizes, which are typical in high-resolution modeling; c) it provides a probabilistic view of uncertainty, which is useful when making decisions concerning ensemble members.
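
    As a minimal illustration of point (c) above, the Python sketch below applies a conjugate normal-normal update to a small, invented ensemble of simulated temperatures; it is not the authors' model, only a demonstration of how a Bayesian posterior expresses uncertainty from few members.

        import numpy as np

        # Small ensemble of WRF-simulated seasonal-maximum temperatures (toy values).
        ensemble = np.array([34.1, 35.0, 33.6, 34.8, 35.4])  # degC, n = 5

        # Conjugate normal-normal update with known variance: how a Bayesian
        # posterior quantifies uncertainty even from a handful of members.
        prior_mean, prior_var = 33.0, 4.0    # e.g. from a climatological prior
        like_var = 1.0                       # assumed member-to-member variance
        n = len(ensemble)
        post_var = 1.0 / (1.0 / prior_var + n / like_var)
        post_mean = post_var * (prior_mean / prior_var + ensemble.sum() / like_var)
        print(f"posterior: N({post_mean:.2f}, {post_var:.3f})")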

  8. Development of a feasibility prediction tool for solar power plant installation analyses

    International Nuclear Information System (INIS)

    Highlights: → An agglomerative hierarchical clustering tool for renewable energy sources is designed in this study. → In the model, the nearest neighbor approach is used as the clustering algorithm, with the Euclidean, Manhattan, and Minkowski distance metrics as distance measures. → The developed tool assists the knowledge domain expert in analysing extensive datasets. → The developed tool clusters the given sample data efficiently and successfully using each distance metric. → The clustering results are compared according to success rates. -- Abstract: Solar energy is a challenging area among renewable sources, since solar energy systems have the advantages of not causing pollution, having low maintenance costs, and not producing noise owing to the absence of moving parts. Despite these advantages, the installation cost of a solar power plant is considerably high. Feasibility analyses therefore play a major role before installation in determining the most appropriate power plant site. Although many methods are used in feasibility analysis, this paper focuses on a new intelligent method based on an agglomerative hierarchical clustering approach. The solar irradiation and insolation parameters of the Central Anatolian Region of Turkey are evaluated using the intelligent feasibility analysis tool developed in this study. The clustering operation in the tool is performed using the nearest neighbor algorithm. At the stage of determining the optimum hierarchical clustering results, the Euclidean, Manhattan and Minkowski distance metrics are adapted to the tool. The clustering results based on the Minkowski distance metric provide the most feasible inferences for the knowledge domain expert, compared with the other distance metrics.
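
    A brief Python sketch of the clustering machinery named in the highlights: single-linkage (nearest neighbour) agglomeration under Euclidean, Manhattan, and Minkowski metrics via SciPy; the site feature values are invented.

        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage
        from scipy.spatial.distance import pdist

        # Hypothetical (irradiation, insolation) features for candidate sites.
        sites = np.array([[5.1, 7.2], [5.0, 7.0], [3.2, 4.1],
                          [3.0, 4.3], [4.2, 5.5]])

        for name, kw in [("Euclidean", dict(metric="euclidean")),
                         ("Manhattan", dict(metric="cityblock")),
                         ("Minkowski p=3", dict(metric="minkowski", p=3))]:
            d = pdist(sites, **kw)
            # 'single' linkage implements the nearest-neighbour agglomeration rule.
            labels = fcluster(linkage(d, method="single"), t=2, criterion="maxclust")
            print(name, labels)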

  9. GenoMatrix: A Software Package for Pedigree-Based and Genomic Prediction Analyses on Complex Traits.

    Science.gov (United States)

    Nazarian, Alireza; Gezan, Salvador Alejandro

    2016-07-01

    Genomic and pedigree-based best linear unbiased prediction methodologies (G-BLUP and P-BLUP) have proven themselves efficient for partitioning the phenotypic variance of complex traits into its components, estimating the individuals' genetic merits, and predicting unobserved (or yet-to-be observed) phenotypes in many species and fields of study. The GenoMatrix software, presented here, is a user-friendly package to facilitate the process of using genome-wide marker data and parentage information for G-BLUP and P-BLUP analyses on complex traits. It provides users with a collection of applications which help them with a range of tasks, from performing quality control on data to constructing and manipulating the genomic and pedigree-based relationship matrices and obtaining their inverses. Such matrices will then be used in downstream analyses by other statistical packages. The package also enables users to obtain predicted values for unobserved individuals based on the genetic values of observed related individuals. GenoMatrix is available to the research community as a Windows 64-bit executable and can be downloaded free of charge at: http://compbio.ufl.edu/software/genomatrix/. PMID:27025440
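
    GenoMatrix's internals are not published in this record, so the Python sketch below shows one widely used construction of a genomic relationship matrix (VanRaden's first method) of the kind such G-BLUP tools build; the genotypes are simulated.

        import numpy as np

        def vanraden_G(M):
            # M: n x m matrix of genotypes coded 0/1/2 (allele counts).
            p = M.mean(axis=0) / 2.0                 # allele frequencies
            Z = M - 2.0 * p                          # centred genotypes
            return (Z @ Z.T) / (2.0 * np.sum(p * (1.0 - p)))

        rng = np.random.default_rng(1)
        M = rng.binomial(2, 0.3, size=(6, 500))      # 6 individuals, 500 markers
        G = vanraden_G(M)
        print(G.diagonal().round(2))  # near 1 for non-inbred, unrelated individuals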

  10. Circulating biomarkers for predicting cardiovascular disease risk; a systematic review and comprehensive overview of meta-analyses.

    Directory of Open Access Journals (Sweden)

    Thijs C van Holten

    Full Text Available BACKGROUND: Cardiovascular disease is one of the major causes of death worldwide. Assessing the risk for cardiovascular disease is an important aspect of clinical decision making and setting a therapeutic strategy, and the use of serological biomarkers may improve this. Despite an overwhelming number of studies and meta-analyses on biomarkers and cardiovascular disease, there are no comprehensive studies comparing the relevance of each biomarker. We performed a systematic review of meta-analyses on levels of serological biomarkers for atherothrombosis to compare the relevance of the most commonly studied biomarkers. METHODS AND FINDINGS: Medline and Embase were screened using search terms related to "arterial ischemic events" and "meta-analyses". The meta-analyses were sorted into patient groups without pre-existing cardiovascular disease, with cardiovascular disease, and heterogeneous groups covering general populations, groups with and without cardiovascular disease, or miscellaneous populations. These were subsequently sorted by end-point for cardiovascular disease or stroke and summarized in tables. We identified 85 relevant full-text articles, with 214 meta-analyses. Markers for primary cardiovascular events include, from strongest to weakest association: C-reactive protein, fibrinogen, cholesterol, apolipoprotein B, the apolipoprotein A/apolipoprotein B ratio, high density lipoprotein, and vitamin D. Markers for secondary cardiovascular events include, from strongest to weakest association: cardiac troponins I and T, C-reactive protein, serum creatinine, and cystatin C. For primary stroke, fibrinogen and serum uric acid are strong risk markers. A limitation is that there is no acknowledged search strategy for prognostic studies or meta-analyses. CONCLUSIONS: For primary cardiovascular events, markers with strong predictive potential are mainly associated with lipids. For secondary cardiovascular events, markers are more associated with ischemia. Fibrinogen is a

  11. Theoretical analyses predict A20 regulates period of NF-kB oscillation

    CERN Document Server

    Mengel, Benedicte; Jensen, Mogens H; Trusina, Ala

    2009-01-01

    The nuclear-cytoplasmic shuttling of NF-kB is characterized by damped oscillations of the nuclear concentration with a time period of around 1-2 hours. The NF-kB network contains several feedback loops modulating the overall response of NF-kB activity. While IkBa is known to drive and IkBe is known to dampen the oscillations, the precise role of A20 negative feedback remains to be elucidated. Here we propose a model of the NF-kB system focusing on three negative feedback loops (IkBa, IkBe and A20) which capture the experimentally observed responses in wild-type and knockout cells. We find that A20, like IkBe, efficiently dampens the oscillations albeit through a distinct mechanism. In addition, however, we have discovered a new functional role of A20 by which it controls the oscillation period of nuclear NF-kB. The design based on three nested feedback loops allows independent control of period and amplitude decay in the oscillatory response. Based on these results we predict that adjusting the expression lev...

  12. Accuracy of finite element analyses of CT scans in predictions of vertebral failure patterns under axial compression and anterior flexion.

    Science.gov (United States)

    Jackman, Timothy M; DelMonaco, Alex M; Morgan, Elise F

    2016-01-25

    Finite element (FE) models built from quantitative computed tomography (QCT) scans can provide patient-specific estimates of bone strength and fracture risk in the spine. While prior studies demonstrate accurate QCT-based FE predictions of vertebral stiffness and strength, the accuracy of the predicted failure patterns, i.e., the locations where failure occurs within the vertebra and the way in which the vertebra deforms as failure progresses, is less clear. This study used digital volume correlation (DVC) analyses of time-lapse micro-computed tomography (μCT) images acquired during mechanical testing (compression and anterior flexion) of thoracic spine segments (T7-T9, n=28) to measure displacements occurring throughout the T8 vertebral body at the ultimate point. These displacements were compared to those simulated by QCT-based FE analyses of T8. We hypothesized that the FE predictions would be more accurate when the boundary conditions are based on measurements of pressure distributions within intervertebral discs with a similar level of disc degeneration, rather than boundary conditions representing rigid platens. The FE simulations captured some of the general, qualitative features of the failure patterns; however, displacement errors ranged from 12% to 279%. Contrary to our hypothesis, no differences in displacement errors were found when using boundary conditions representing measurements of disc pressure versus rigid platens. The smallest displacement errors were obtained using boundary conditions that were measured directly by DVC at the T8 endplates. These findings indicate that further work is needed to develop methods of identifying physiological loading conditions for the vertebral body, for the purpose of achieving robust, patient-specific FE analyses of failure mechanisms. PMID:26792288

  13. Mitogenomic analyses of eutherian relationships.

    Science.gov (United States)

    Arnason, U; Janke, A

    2002-01-01

    Reasonably correct phylogenies are fundamental to the testing of evolutionary hypotheses. Here, we present phylogenetic findings based on analyses of 67 complete mammalian mitochondrial (mt) genomes. The analyses, irrespective of whether they were performed at the amino acid (aa) level or on nucleotides (nt) of first and second codon positions, placed Erinaceomorpha (hedgehogs and their kin) as the sister group of remaining eutherians. Thus, the analyses separated Erinaceomorpha from other traditional lipotyphlans (e.g., tenrecs, moles, and shrews), making traditional Lipotyphla polyphyletic. Both the aa and nt data sets identified the two order-rich eutherian clades, the Cetferungulata (comprising Pholidota, Carnivora, Perissodactyla, Artiodactyla, and Cetacea) and the African clade (Tenrecomorpha, Macroscelidea, Tubulidentata, Hyracoidea, Proboscidea, and Sirenia). The study corroborated recent findings that have identified a sister-group relationship between Anthropoidea and Dermoptera (flying lemurs), thereby making our own order, Primates, a paraphyletic assembly. Molecular estimates using paleontologically well-established calibration points, placed the origin of most eutherian orders in Cretaceous times, 70-100 million years before present (MYBP). The same estimates place all primate divergences much earlier than traditionally believed. For example, the divergence between Homo and Pan is estimated to have taken place approximately 10 MYBP, a dating consistent with recent findings in primate paleontology. PMID:12438776

  14. Benchmark of SCALE (SAS2H) isotopic predictions of depletion analyses for San Onofre PWR MOX fuel

    Energy Technology Data Exchange (ETDEWEB)

    Hermann, O.W.

    2000-02-01

    The isotopic composition of mixed-oxide (MOX) fuel, fabricated with both uranium and plutonium, after discharge from reactors is of significant interest to the Fissile Materials Disposition Program. The validation of the SCALE (SAS2H) depletion code for use in the prediction of isotopic compositions of MOX fuel, similar to previous validation studies on uranium-only fueled reactors, has corresponding significance. The EEI-Westinghouse Plutonium Recycle Demonstration Program examined the use of MOX fuel in the San Onofre PWR, Unit 1, during cycles 2 and 3. Isotopic analyses of the MOX spent fuel were conducted on 13 actinides and 148Nd by either mass or alpha spectrometry. Six fuel pellet samples were taken from four different fuel pins of an irradiated MOX assembly. The measured actinide inventories from those samples have been used to benchmark SAS2H for MOX fuel applications. The average percentage differences in the code results compared with the measurements were -0.9% for 235U and 5.2% for 239Pu. The differences for most of the isotopes were significantly larger than in the cases of uranium-only fueled reactors. In general, comparisons of code results with alpha spectrometry data showed extreme differences, although the differences relative to mass spectrometry analyses were not much larger than those for uranium-only fueled reactors. This benchmark study should be useful in estimating uncertainties of inventory, criticality and dose calculations of MOX spent fuel.

  15. ATOP - The Advanced Taiwan Ocean Prediction System Based on the mpiPOM. Part 1: Model Descriptions, Analyses and Results

    Directory of Open Access Journals (Sweden)

    Leo Oey

    2013-01-01

    Full Text Available A data-assimilated Taiwan Ocean Prediction (ATOP) system is being developed at the National Central University, Taiwan. The model simulates sea-surface height, three-dimensional currents, temperature and salinity and turbulent mixing. The model has options for tracer and particle-tracking algorithms, as well as for wave-induced Stokes drift and wave-enhanced mixing and bottom drag. Two different forecast domains have been tested: a large-grid domain that encompasses the entire North Pacific Ocean at 0.1° × 0.1° horizontal resolution and 41 vertical sigma levels, and a smaller western North Pacific domain which at present also has the same horizontal resolution. In both domains, 25-year spin-up runs from 1988 - 2011 were first conducted, forced by six-hourly Cross-Calibrated Multi-Platform (CCMP) winds and NCEP reanalysis Global Forecast System (GFS) winds. The results are then used as initial conditions to conduct ocean analyses from January 2012 through February 2012, when updated hindcasts and real-time forecasts begin using the GFS winds. This paper describes the ATOP system and compares the forecast results against satellite altimetry data for assessing model skills. The model results are also shown to compare well with observations of (i) the Kuroshio intrusion in the northern South China Sea, and (ii) the subtropical counter current. A review of and comparison with other models in the literature are also given.

  16. An Approach for Validating Actinide and Fission Product Burnup Credit Criticality Safety Analyses-Isotopic Composition Predictions

    International Nuclear Information System (INIS)

    The expanded use of burnup credit in the United States (U.S.) for storage and transport casks, particularly in the acceptance of credit for fission products, has been constrained by the limited availability of experimental fission product data to support code validation. The U.S. Nuclear Regulatory Commission (NRC) staff has noted that the rationale for restricting the Interim Staff Guidance on burnup credit for storage and transportation casks (ISG-8) to actinide-only is based largely on the lack of clear, definitive experiments that can be used to estimate the bias and uncertainty for computational analyses associated with using burnup credit. To address the issues of burnup credit criticality validation, the NRC initiated a project with the Oak Ridge National Laboratory to (1) develop and establish a technically sound validation approach for commercial spent nuclear fuel (SNF) criticality safety evaluations based on best-available data and methods and (2) apply the approach for representative SNF storage and transport configurations/conditions to demonstrate its usage and applicability, as well as to provide reference bias results. The purpose of this paper is to describe the isotopic composition (depletion) validation approach and resulting observations and recommendations. Validation of the criticality calculations is addressed in a companion paper at this conference. For isotopic composition validation, the approach is to determine burnup-dependent bias and uncertainty in the effective neutron multiplication factor (keff) due to bias and uncertainty in isotopic predictions, via comparisons of isotopic composition predictions (calculated) and measured isotopic compositions from destructive radiochemical assay utilizing as much assay data as is available, and a best-estimate Monte Carlo based method. This paper (1) provides a detailed description of the burnup credit isotopic validation approach and its technical bases, (2) describes the application of the approach for

  17. Application of neural networks and its prospect. 4. Prediction of major disruptions in tokamak plasmas, analyses of time series data

    International Nuclear Information System (INIS)

    Disruption prediction in tokamak plasmas has been studied using neural networks. The prediction performance of the neural networks is evaluated in terms of the prediction success rate, the false alarm rate, and the warning time prior to disruption. Current-driven disruptions are predicted from time-series data together with the plasma lifetime, the risk of disruption, and the plasma stability. Some disruptions, caused by the density limit, impurity mixture, or error magnetic fields, can be predicted from their premonitory symptoms with a 100% prediction success rate. Because pressure-driven disruption phenomena develop only a few hundred microseconds in advance, operational limits such as the βN limit of DIII-D and the density limit of ADITYA were investigated. The false alarm rate was decreased by training on the βN limit during stable discharges. Pressure-driven disruptions, generated with increasing plasma pressure, can be predicted with about 90% success by evaluating plasma stability. (S.Y.)

  18. Analyses of Potential Predictive Markers and Response to Targeted Therapy in Patients with Advanced Clear-cell Renal Cell Carcinoma

    Institute of Scientific and Technical Information of China (English)

    Yan Song; Jing Huang; Ling Shan; Hong-Tu Zhang

    2015-01-01

    Background: Vascular endothelial growth factor-targeted agents are standard treatments in advanced clear-cell renal cell carcinoma (ccRCC), but biomarkers of activity are lacking. The aim of this study was to investigate the association of Von Hippel-Lindau (VHL) gene status and vascular endothelial growth factor receptor (VEGFR) or stem cell factor receptor (KIT) expression, and their relationships with the characteristics and clinical outcome of advanced ccRCC. Methods: A total of 59 patients who received targeted treatment with sunitinib or pazopanib were evaluated at the Cancer Hospital and Institute, Chinese Academy of Medical Sciences, between January 2010 and November 2012. Paraffin-embedded tumor samples were collected, and the status of the VHL gene and the expression of VEGFR and KIT were determined by VHL sequence analysis and immunohistochemistry. Clinical-pathological features were collected, and efficacy measures such as response rate, median progression-free survival (PFS), and overall survival (OS) were calculated and then compared based on expression status. The chi-square test, the Kaplan-Meier method, and the log-rank test were used for statistical analyses. Results: Of 59 patients, objective responses were observed in 28 patients (47.5%). The median PFS was 13.8 months and median OS was 39.9 months. There was an improved PFS in patients with the following clinical features: male gender, number of metastatic sites 2 or less, VEGFR-2 positive, or KIT positive. Eleven patients (18.6%) had evidence of VHL mutation, with an objective response rate of 45.5%, which showed no difference from patients with no VHL mutation (47.9%). VHL mutation status did not correlate with either overall response rate (P = 0.938) or PFS (P = 0.277). The PFS was 17.6 months and 22.2 months in VEGFR-2 positive patients and KIT positive patients, respectively, which was significantly longer than that of VEGFR-2 or KIT negative patients (P = 0.026 and P = 0.043). Conclusion: VHL mutation status could not predict

  19. Analyses of Potential Predictive Markers and Response to Targeted Therapy in Patients with Advanced Clear-cell Renal Cell Carcinoma

    Directory of Open Access Journals (Sweden)

    Yan Song

    2015-01-01

    Full Text Available Background: Vascular endothelial growth factor-targeted agents are standard treatments in advanced clear-cell renal cell carcinoma (ccRCC), but biomarkers of activity are lacking. The aim of this study was to investigate the association of Von Hippel-Lindau (VHL) gene status and vascular endothelial growth factor receptor (VEGFR) or stem cell factor receptor (KIT) expression, and their relationships with the characteristics and clinical outcome of advanced ccRCC. Methods: A total of 59 patients who received targeted treatment with sunitinib or pazopanib were evaluated at the Cancer Hospital and Institute, Chinese Academy of Medical Sciences, between January 2010 and November 2012. Paraffin-embedded tumor samples were collected and the status of the VHL gene and expression of VEGFR and KIT were determined by VHL sequence analysis and immunohistochemistry. Clinical-pathological features were collected, and efficacy measures such as response rate, median progression-free survival (PFS), and overall survival (OS) were calculated and then compared based on expression status. The chi-square test, the Kaplan-Meier method, and the log-rank test were used for statistical analyses. Results: Of 59 patients, objective responses were observed in 28 patients (47.5%). The median PFS was 13.8 months and median OS was 39.9 months. There was an improved PFS in patients with the following clinical features: male gender, number of metastatic sites 2 or less, VEGFR-2 positive, or KIT positive. Eleven patients (18.6%) had evidence of VHL mutation, with an objective response rate of 45.5%, which showed no difference from patients with no VHL mutation (47.9%). VHL mutation status did not correlate with either overall response rate (P = 0.938) or PFS (P = 0.277). The PFS was 17.6 months and 22.2 months in VEGFR-2 positive patients and KIT positive patients, respectively, which was significantly longer than that of VEGFR-2 or KIT negative patients (P = 0.026 and P = 0.043). Conclusion

  20. Serial and panel analyses of biomarkers do not improve the prediction of bacteremia compared to one procalcitonin measurement

    NARCIS (Netherlands)

    Tromp, M.; Lansdorp, B.; Bleeker-Rovers, C.P.; Klein Gunnewiek, J.M.; Kullberg, B.J.; Pickkers, P.

    2012-01-01

    Objectives We evaluated the value of a single biomarker, biomarker panels, biomarkers combined with clinical signs of sepsis, and serial determinations of biomarkers in the prediction of bacteremia in patients with sepsis. Methods Adult patients visiting the emergency department because of a susp

  1. Serial and panel analyses of biomarkers do not improve the prediction of bacteremia compared to one procalcitonin measurement.

    NARCIS (Netherlands)

    Tromp, M.; Lansdorp, B.; Bleeker-Rovers, C.P.; Gunnewiek, J.M.; Kullberg, B.J.; Pickkers, P.

    2012-01-01

    OBJECTIVES: We evaluated the value of a single biomarker, biomarker panels, biomarkers combined with clinical signs of sepsis, and serial determinations of biomarkers in the prediction of bacteremia in patients with sepsis. METHODS: Adult patients visiting the emergency department because of a suspe

  2. Can the lifetime of the superheater tubes be predicted according to the fuel analyses? Assessment from field and laboratory data

    Energy Technology Data Exchange (ETDEWEB)

    Salmenoja, K. [Kvaerner Pulping Oy, Tampere (Finland)]

    1998-12-31

    The lifetime of superheaters in power boilers is still more or less a mystery. This is especially true when firing biomass-based fuels (biofuels), such as bark, forest residues, and straw. Due to the inhomogeneous nature of biofuels, the lifetime of the superheaters may vary from case to case. Sometimes the lifetime is significantly shorter than originally expected; sometimes no corrosion is observed even in the hottest tubes. This is one of the main reasons why boiler operators often demand better predictability of the corrosion resistance of the materials, to avoid unscheduled shutdowns. (orig.) 9 refs.

  3. Standardized Software for Wind Load Forecast Error Analyses and Predictions Based on Wavelet-ARIMA Models - Applications at Multiple Geographically Distributed Wind Farms

    Energy Technology Data Exchange (ETDEWEB)

    Hou, Zhangshuan; Makarov, Yuri V.; Samaan, Nader A.; Etingov, Pavel V.

    2013-03-19

    Given the multi-scale variability and uncertainty of wind generation and forecast errors, it is a natural choice to use a time-frequency representation (TFR) as a view of the corresponding time series represented over both time and frequency. Here we use the wavelet transform (WT) to expand the signal in terms of wavelet functions which are localized in both time and frequency. Each WT component is more stationary and has a more consistent auto-correlation pattern. We combined wavelet analyses with time series forecast approaches such as ARIMA, and tested the approach at three different wind farms located far away from each other. The prediction capability is satisfactory: the day-ahead prediction of errors matches the original error values very well, including the patterns. The observations are well located within the predictive intervals. Integrating our wavelet-ARIMA ('stochastic') model with the weather forecast model ('deterministic') will significantly improve our ability to predict wind power generation and reduce predictive uncertainty.
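
    As a concrete illustration of the hybrid scheme, the sketch below decomposes a series of forecast errors into multiresolution components with a discrete wavelet transform, fits a low-order ARIMA model to each component, and sums the component forecasts. This is a minimal stand-in for the standardized software described above: the wavelet choice ('db4'), the model order, and the synthetic input series are placeholder assumptions.

    ```python
    # Minimal wavelet-ARIMA sketch (assumptions: 'db4' wavelet, ARIMA(1,0,1)).
    import numpy as np
    import pywt
    from statsmodels.tsa.arima.model import ARIMA

    def wavelet_arima_forecast(errors, wavelet="db4", level=3, steps=24):
        """Forecast by modelling each wavelet component with its own ARIMA."""
        coeffs = pywt.wavedec(errors, wavelet, level=level)
        forecast = np.zeros(steps)
        for i in range(len(coeffs)):
            # Isolate the i-th band (multiresolution analysis): zero out all
            # other coefficient bands before the inverse transform.
            isolated = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
            component = pywt.waverec(isolated, wavelet)[: len(errors)]
            # Each band is closer to stationary, so a low-order model suffices.
            fit = ARIMA(component, order=(1, 0, 1)).fit()
            forecast += fit.forecast(steps=steps)
        return forecast

    rng = np.random.default_rng(0)
    history = 0.1 * np.cumsum(rng.normal(0, 1, 512))  # synthetic error series
    print(wavelet_arima_forecast(history)[:5])        # first 5 day-ahead steps
    ```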

  4. The GENOTEND chip: a new tool to analyse gene expression in muscles of beef cattle for beef quality prediction

    Science.gov (United States)

    2012-01-01

    30 Charolais young bulls slaughtered in year 2, and in the 21 Charolais steers slaughtered in year 1, but not in the group of 19 steers slaughtered in year 2 which differ from the reference group by two factors (gender and year). When the first three groups of animals were analysed together, this subset of genes explained a 4-fold higher proportion of the variability in tenderness than muscle biochemical traits. Conclusion This study underlined the relevance of the GENOTEND chip to identify markers of beef quality, mainly by confirming previous results and by detecting other genes of the heat shock family as potential markers of beef quality. However, it was not always possible to extrapolate the relevance of these markers to all animal groups which differ by several factors (such as gender or environmental conditions of production) from the initial population of reference in which these markers were identified. PMID:22894653

  5. The GENOTEND chip: a new tool to analyse gene expression in muscles of beef cattle for beef quality prediction

    Directory of Open Access Journals (Sweden)

    Hocquette Jean-Francois

    2012-08-01

    validated in the groups of 30 Charolais young bulls slaughtered in year 2, and in the 21 Charolais steers slaughtered in year 1, but not in the group of 19 steers slaughtered in year 2, which differ from the reference group by two factors (gender and year). When the first three groups of animals were analysed together, this subset of genes explained a 4-fold higher proportion of the variability in tenderness than muscle biochemical traits. Conclusion This study underlined the relevance of the GENOTEND chip to identify markers of beef quality, mainly by confirming previous results and by detecting other genes of the heat shock family as potential markers of beef quality. However, it was not always possible to extrapolate the relevance of these markers to all animal groups which differ by several factors (such as gender or environmental conditions of production) from the initial population of reference in which these markers were identified.

  6. Basic Diagnosis and Prediction of Persistent Contrail Occurrence using High-resolution Numerical Weather Analyses/Forecasts and Logistic Regression. Part I: Effects of Random Error

    Science.gov (United States)

    Duda, David P.; Minnis, Patrick

    2009-01-01

    Straightforward application of the Schmidt-Appleman contrail formation criteria to diagnose persistent contrail occurrence from numerical weather prediction data is hindered by significant bias errors in the upper tropospheric humidity. Logistic models of contrail occurrence have been proposed to overcome this problem, but basic questions remain about how random measurement error may affect their accuracy. A set of 5000 synthetic contrail observations is created to study the effects of random error in these probabilistic models. The simulated observations are based on distributions of temperature, humidity, and vertical velocity derived from Advanced Regional Prediction System (ARPS) weather analyses. The logistic models created from the simulated observations were evaluated using two common statistical measures of model accuracy, the percent correct (PC) and the Hanssen-Kuipers discriminant (HKD). To convert the probabilistic results of the logistic models into a dichotomous yes/no choice suitable for the statistical measures, two critical probability thresholds are considered. The HKD scores are higher when the climatological frequency of contrail occurrence is used as the critical threshold, while the PC scores are higher when the critical probability threshold is 0.5. For both thresholds, typical random errors in temperature, relative humidity, and vertical velocity are found to be small enough to allow for accurate logistic models of contrail occurrence. The accuracy of the models developed from synthetic data is over 85 percent for both the prediction of contrail occurrence and non-occurrence, although in practice, larger errors would be anticipated.
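
    The two accuracy measures named above are easy to reproduce. The sketch below fits a logistic model to 5000 synthetic stand-ins for the ARPS-derived predictors (temperature, humidity, vertical velocity) and scores it with the percent correct (PC) and the Hanssen-Kuipers discriminant (HKD) at the two critical thresholds discussed: 0.5 and the climatological frequency of occurrence. All coefficients and distributions are invented for illustration.

    ```python
    # PC and HKD for a logistic contrail-occurrence model on synthetic data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    n = 5000
    X = np.column_stack([
        rng.normal(220, 8, n),    # temperature (K) - placeholder distribution
        rng.uniform(0, 130, n),   # relative humidity over ice (%)
        rng.normal(0, 5, n),      # vertical velocity (cm/s)
    ])
    # Synthetic "truth": contrails favoured by cold, moist, rising air.
    logit = -0.4 * (X[:, 0] - 225) + 0.08 * (X[:, 1] - 100) + 0.1 * X[:, 2]
    y = (logit + rng.logistic(0, 1, n) > 0).astype(int)

    model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)
    prob = model.predict_proba(X)[:, 1]

    for threshold in (0.5, y.mean()):  # 0.5 vs. climatological frequency
        pred = prob >= threshold
        hits = np.sum(pred & (y == 1)); misses = np.sum(~pred & (y == 1))
        fa = np.sum(pred & (y == 0)); cn = np.sum(~pred & (y == 0))
        pc = (hits + cn) / n                           # percent correct
        hkd = hits / (hits + misses) - fa / (fa + cn)  # Hanssen-Kuipers
        print(f"threshold={threshold:.3f}  PC={pc:.3f}  HKD={hkd:.3f}")
    ```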

  7. Prediction

    OpenAIRE

    Woollard, W.J.

    2006-01-01

    In this chapter we will look at the ways in which you can use ICT in the classroom to support hypothesis and prediction, and how modern technology enables pattern seeking, extrapolation and interpolation to meet the challenges of the information explosion of the 21st century.

  8. Computational fluid dynamics analyses of lateral heat conduction, coolant azimuthal mixing and heat transfer predictions in a BR2 fuel assembly geometry.

    Energy Technology Data Exchange (ETDEWEB)

    Tzanos, C. P.; Dionne, B. (Nuclear Engineering Division)

    2011-05-23

    To support the analyses related to the conversion of the BR2 core from highly-enriched (HEU) to low-enriched (LEU) fuel, the thermal-hydraulics codes PLTEMP and RELAP-3D are used to evaluate the safety margins during steady-state operation (PLTEMP), as well as after a loss-of-flow, loss-of-pressure, or loss-of-coolant event (RELAP). In the 1-D PLTEMP and RELAP simulations, conduction in the azimuthal and axial directions is not accounted for. The very good thermal conductivity of the cladding and the fuel meat, and significant temperature gradients in the lateral (axial and azimuthal) directions, could lead to a heat flux distribution that is significantly different from the power distribution. To evaluate the significance of the lateral heat conduction, 3-D computational fluid dynamics (CFD) simulations, using the CFD code STAR-CD, were performed. Safety margin calculations are typically performed for a hot stripe, i.e., an azimuthal region of the fuel plates/coolant channel containing the power peak. In a RELAP model, for example, a channel between two plates could be divided into a number of RELAP channels (stripes) in the azimuthal direction. In a PLTEMP model, the effect of azimuthal power peaking could be taken into account by using engineering factors. However, if the thermal mixing in the azimuthal direction of a coolant channel is significant, a striping approach could be overly conservative by not taking this mixing into account. STAR-CD simulations were also performed to study the thermal mixing in the coolant. Section II of this document presents the results of the analyses of the lateral heat conduction and azimuthal thermal mixing in a coolant channel. Finally, PLTEMP and RELAP simulations rely on the use of correlations to determine heat transfer coefficients. Previous analyses showed that the Dittus-Boelter correlation gives significantly more conservative (lower) predictions than the correlations of Sieder-Tate and Petukhov. STAR-CD 3-D

  9. Prediction

    CERN Document Server

    Sornette, Didier

    2010-01-01

    This chapter first presents a rather personal view of some different aspects of predictability, going in crescendo from simple linear systems to high-dimensional nonlinear systems with stochastic forcing, which exhibit emergent properties such as phase transitions and regime shifts. Then, a detailed correspondence between the phenomenology of earthquakes, financial crashes and epileptic seizures is offered. The presented statistical evidence provides the substance of a general phase diagram for understanding the many facets of the spatio-temporal organization of these systems. A key insight is to organize the evidence and mechanisms in terms of two summarizing measures: (i) amplitude of disorder or heterogeneity in the system and (ii) level of coupling or interaction strength among the system's components. On the basis of the recently identified remarkable correspondence between earthquakes and seizures, we present detailed information on a class of stochastic point processes that has been found to be particu...

  10. A systematic study of coordinate precision in X-ray structure analyses. Pt. 1. Descriptive statistics and predictive estimates of E.S.D.'s for C atoms

    International Nuclear Information System (INIS)

    This study examines the relationship of structure precision, as expressed by the e.s.d.'s of atomic coordinates, to the R factor and chemical constitution of a given crystal structure. On the basis of the work of Cruickshank [Acta Cryst. (1960), 13, 774-777], it is shown that σ̄(C-C), the mean e.s.d. of a C-C bond length in a structure, or σ̄(C), the mean isotropic e.s.d. of a C atom, can be estimated by expressions of the form σ̄ = kRN_c^(1/2). Here, N_c is taken as ΣZ_i²/Z_C², with the atomic numbers Z_i summed over all atoms in the asymmetric unit and Z_C = 6. It is also shown that σ̄(E), the mean isotropic e.s.d. of a non-C atom, can be estimated by σ̄(E) = kRN_c^(1/2)/Z_E. Values of k were determined by regression analyses based on subsets of 25 984 and 20 334 entries in the Cambridge Structural Database (CSD) that contain atomic coordinate e.s.d.'s. 95% of coordinate e.s.d.'s for C atoms can be estimated to within 0.005 Å of their published value and 78% to within 0.0025 Å. These predicted σ̄ values provide useful estimates of precision for those 39 000 structures for which coordinate e.s.d.'s are not available in the CSD. Details of the diffraction experiment, which might provide an improved estimating function in Cruickshank's (1960) treatment, are not available in any CSD entries. However, values of N_r (the number of reflections) and N_p (the number of parameters) used in refinement were added manually for 817 entries, and the variation of σ̄(C-C) with decreasing N_r/N_p ratios is examined: there is a rapid increase in σ̄(C-C) as N_r/N_p decreases below circa 6.0. A method for approximating s̄, the r.m.s. reciprocal radius for the reflections observed, is presented, but it is found that a function of the form σ̄(C-C) = kRN_c^(1/2)/s̄(N_r − N_p)^(1/2) [directly analogous to Cruickshank's (1960) equation] had only slightly improved predictive ability for this data set by comparison with functions based upon R and N_c^(1/2) alone. Possible
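
    The estimating equations above are simple enough to sketch directly; in the snippet below the regression constant k is a placeholder, not the published fitted value.

    ```python
    # Cruickshank-style precision estimates, as described above:
    #   sigma(C) ~ k * R * sqrt(Nc)         for a C atom
    #   sigma(E) ~ k * R * sqrt(Nc) / Z_E   for a non-C atom of atomic number Z_E
    # with Nc = sum(Zi^2) / Zc^2 over the asymmetric unit and Zc = 6.
    # The constant k below is a placeholder, not the paper's regression value.

    def n_c(atomic_numbers):
        return sum(z * z for z in atomic_numbers) / 6.0 ** 2

    def sigma_c(r_factor, atomic_numbers, k=1.0):
        """Estimated mean isotropic e.s.d. of a C atom."""
        return k * r_factor * n_c(atomic_numbers) ** 0.5

    def sigma_e(r_factor, atomic_numbers, z_e, k=1.0):
        """Estimated mean isotropic e.s.d. of a non-C atom."""
        return sigma_c(r_factor, atomic_numbers, k) / z_e

    # Hypothetical structure: 20 C, 4 N, 2 O in the asymmetric unit, R = 0.05.
    atoms = [6] * 20 + [7] * 4 + [8] * 2
    print(sigma_c(0.05, atoms), sigma_e(0.05, atoms, z_e=8))
    ```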

  11. Using meta-analytic path analysis to test theoretical predictions in health behavior: An illustration based on meta-analyses of the theory of planned behavior

    OpenAIRE

    Hagger, Martin; Chan, Dervin K. C.; Protogerou, Cleo; Chatzisarantis, Nikos L. D.

    2016-01-01

    Objective Synthesizing research on social cognitive theories applied to health behavior is an important step in the development of an evidence base of psychological factors as targets for effective behavioral interventions. However, few meta-analyses of research on social cognitive theories in health contexts have conducted simultaneous tests of theoretically-stipulated pattern effects using path analysis. We argue that conducting path analyses of meta-analytic effects among constructs fr...

  12. Application of pathways analyses for site performance prediction for the Gas Centrifuge Enrichment Plant and Oak Ridge Central Waste Disposal Facility

    International Nuclear Information System (INIS)

    The suitability of the Gas Centrifuge Enrichment Plant and the Oak Ridge Central Waste Disposal Facility for shallow-land burial of low-level radioactive waste is evaluated using pathways analyses. The analyses rely on conservative scenarios to describe the generation and migration of contamination and the potential human exposure to the waste. Conceptual and numerical models are developed using data from comprehensive laboratory and field investigations and are used to simulate the long-term transport of contamination to man. Conservatism is built into the analyses when assumptions concerning future events have to be made or when uncertainties concerning site or waste characteristics exist. Maximum potential doses to man are calculated and compared to the appropriate standards. The sites are found to provide adequate buffer to persons outside the DOE reservations. Conclusions concerning site capacity and site acceptability are drawn. In reaching these conclusions, some consideration is given to the uncertainties and conservatisms involved in the analyses. Analytical methods to quantitatively assess the probability of future events to occur and the sensitivity of the results to data uncertainty may prove useful in relaxing some of the conservatism built into the analyses. The applicability of such methods to pathways analyses is briefly discussed. 18 refs., 9 figs

  13. Comparison of and limits of accuracy for statistical analyses of vibrational and electronic circular dichroism spectra in terms of correlations to and predictions of protein secondary structure.

    OpenAIRE

    Pancoska, P.; Bitto, E.; Janota, V.; Urbanova, M.; Gupta, V. P.; Keiderling, T. A.

    1995-01-01

    This work provides a systematic comparison of vibrational CD (VCD) and electronic CD (ECD) methods for spectral prediction of secondary structure. The VCD and ECD data are simplified to a small set of spectral parameters using the principal component method of factor analysis (PC/FA). Regression fits of these parameters are made to the X-ray-determined fractional components (FC) of secondary structure. Predictive capability is determined by computing structures for proteins sequentially left ...

  14. A systematic study of coordinate precision in X-ray structure analyses. Pt. 2. Predictive estimates of E.S.D.'s for the general-atom case

    International Nuclear Information System (INIS)

    The relationship between the mean isotropic e.s.d. σ̄(A)_o of any element type A in a crystal structure and the R factor and atomic constitution of that structure is explored for 124 905 element-type occurrences calculated from 33 955 entries in the Cambridge Structural Database. On the basis of the work of Cruickshank [Acta Cryst. (1960), 13, 774-777], it is shown that σ̄(A)_p values can be estimated by equations of the form σ̄(A)_p = KRN_c^(1/2)/Z_A, where N_c is taken as ΣZ_i²/Z_C², the Z_i are atomic numbers and the summation is over all atoms in the asymmetric unit. Values of K were obtained by regression techniques using the σ̄(A)_o as basis. The constant K_nc for noncentrosymmetric structures is found to be larger than K_c for centrosymmetric structures by a factor of ∼2^(1/2), as predicted by Cruickshank (1960). Two predictive equations are generated, one for first-row elements and the second for elements with Z_A > 10. The relationship between the different constants K that arise in these two situations is linked to shape differentials in scattering-factor (f_i) curves for light and heavy atoms. It is found that predictive equations in which the Z_i are selectively replaced by f_i at a constant sinθ/λ of 0.30 Å⁻¹ generate closely similar values of K for the light-atom and heavy-atom subsets. The overall analysis indicates that atomic e.s.d.'s may be seriously underestimated in the more precise structure determinations, that e.s.d.'s for the heaviest atoms may be less reliable than those for lighter atoms and that e.s.d.'s in noncentrosymmetric structures may be less accurate than those in centrosymmetric structures. (orig.)

  15. Analysing EWviews

    DEFF Research Database (Denmark)

    Jelsøe, Erling; Jæger, Birgit

    2015-01-01

    When analysing the results of a Europe-wide citizen consultation on sustainable consumption it is necessary to take a number of issues into account, such as the question of representativity and the tensions between national and European identities and between consumer and citizen orientations regarding...

  16. Barriers to predicting changes in global terrestrial methane fluxes: analyses using CLM4Me, a methane biogeochemistry model integrated in CESM

    Directory of Open Access Journals (Sweden)

    W. J. Riley

    2011-07-01

    Full Text Available Terrestrial net CH4 surface fluxes often represent the difference between much larger gross production and consumption fluxes and depend on multiple physical, biological, and chemical mechanisms that are poorly understood and represented in regional- and global-scale biogeochemical models. To characterize uncertainties, study feedbacks between CH4 fluxes and climate, and to guide future model development and experimentation, we developed and tested a new CH4 biogeochemistry model (CLM4Me) integrated in the land component (Community Land Model; CLM4) of the Community Earth System Model (CESM1). CLM4Me includes representations of CH4 production, oxidation, aerenchyma transport, ebullition, aqueous and gaseous diffusion, and fractional inundation. As with most global models, CLM4 lacks important features for predicting current and future CH4 fluxes, including: vertical representation of soil organic matter, accurate subgrid scale hydrology, realistic representation of inundated system vegetation, anaerobic decomposition, thermokarst dynamics, and aqueous chemistry. We compared the seasonality and magnitude of predicted CH4 emissions to observations from 18 sites and three global atmospheric inversions. Simulated net CH4 emissions using our baseline parameter set were 270, 160, 50, and 70 Tg CH4 yr−1 globally, in the tropics, in the temperate zone, and north of 45° N, respectively; these values are within the range of previous estimates. We then used the model to characterize the sensitivity of regional and global CH4 emission estimates to uncertainties in model parameterizations. Of the parameters we tested, the temperature sensitivity of CH4 production, oxidation parameters, and aerenchyma properties had the largest impacts on net CH4 emissions, up to a factor of 4 and 10 at the regional and gridcell scales

  17. Barriers to predicting changes in global terrestrial methane fluxes: analyses using CLM4Me, a methane biogeochemistry model integrated in CESM

    Directory of Open Access Journals (Sweden)

    W. J. Riley

    2011-02-01

    Full Text Available Terrestrial net CH4 surface fluxes often represent the difference between much larger gross production and consumption fluxes and depend on multiple physical, biological, and chemical mechanisms that are poorly understood and represented in regional- and global-scale biogeochemical models. To characterize uncertainties, study feedbacks between CH4 fluxes and climate, and to guide future model development and experimentation, we developed and tested a new CH4 biogeochemistry model (CLM4Me) integrated in the land component (Community Land Model; CLM4) of the Community Earth System Model (CESM1). CLM4Me includes representations of CH4 production, oxidation, aerenchymous transport, ebullition, aqueous and gaseous diffusion, and fractional inundation. As with most global models, CLM4Me lacks important features for predicting current and future CH4 fluxes, including: vertical representation of soil organic matter, accurate subgrid scale hydrology, realistic representation of inundated system vegetation, anaerobic decomposition, thermokarst dynamics, and aqueous chemistry. We compared the seasonality and magnitude of predicted CH4 emissions to observations from 18 sites and three global atmospheric inversions. Simulated net CH4 emissions using our baseline parameter set were 270, 160, 50, and 70 Tg CH4 yr−1 globally, in the tropics, in the temperate zone, and north of 45° N, respectively; these values are within the range of previous estimates. We then used the model to characterize the sensitivity of regional and global CH4 emission estimates to uncertainties in model parameterizations. Of the parameters we tested, the temperature sensitivity of CH4 production, oxidation parameters, and aerenchyma properties had the largest impacts on net CH4 emissions, up to a factor of 4 and 10 at the regional and gridcell

  18. Generation of a predicted protein database from EST data and application to iTRAQ analyses in grape (Vitis vinifera cv. Cabernet Sauvignon) berries at ripening initiation

    Directory of Open Access Journals (Sweden)

    Smith Derek

    2009-01-01

    Full Text Available Abstract Background iTRAQ is a proteomics technique that uses isobaric tags for relative and absolute quantitation of tryptic peptides. In proteomics experiments, the detection and high confidence annotation of proteins and the significance of corresponding expression differences can depend on the quality and the species specificity of the tryptic peptide map database used for analysis of the data. For species for which finished genome sequence data are not available, identification of proteins relies on similarity to proteins from other species using comprehensive peptide map databases such as the MSDB. Results We were interested in characterizing ripening initiation ('veraison') in grape berries at the protein level in order to better define the molecular control of this important process for grape growers and wine makers. We developed a bioinformatic pipeline for processing EST data in order to produce a predicted tryptic peptide database specifically targeted to the wine grape cultivar, Vitis vinifera cv. Cabernet Sauvignon, and lacking truncated N- and C-terminal fragments. By searching iTRAQ MS/MS data generated from berry exocarp and mesocarp samples at ripening initiation, we determined that implementation of the custom database afforded a large improvement in high confidence peptide annotation in comparison to the MSDB. We used iTRAQ MS/MS in conjunction with custom peptide database searches to quantitatively characterize several important pathway components for berry ripening previously described at the transcriptional level and confirmed expression patterns for these at the protein level. Conclusion We determined that a predicted peptide database for MS/MS applications can be derived from EST data using advanced clustering and trimming approaches and successfully implemented for quantitative proteome profiling. Quantitative shotgun proteome profiling holds great promise for characterizing biological processes such as fruit ripening

  19. Comparative analyses of genetic risk prediction methods reveal extreme diversity of genetic predisposition to nonalcoholic fatty liver disease (NAFLD) among ethnic populations of India

    Indian Academy of Sciences (India)

    Ankita Chatterjee; Analabha Basu; Abhijit Chowdhury; Kausik Das; Neeta Sarkar-Roy; Partha P. Majumder; Priyadarshi Basu

    2015-03-01

    Nonalcoholic fatty liver disease (NAFLD) is a distinct pathologic condition characterized by a disease spectrum ranging from simple steatosis to steato-hepatitis, cirrhosis and hepatocellular carcinoma. Prevalence of NAFLD varies in different ethnic groups, ranging from 12% in Chinese to 45% in Hispanics. Among Indian populations, the diversity in prevalence is high, ranging from 9% in rural populations to 32% in urban populations, with geographic differences as well. Here, we wished to find out if this difference is reflected in their genetic makeup. To date, several candidate gene studies and a few genome-wide association studies (GWAS) have been carried out, and many associations between single nucleotide polymorphisms (SNPs) and NAFLD have been observed. In this study, the risk allele frequencies (RAFs) of NAFLD-associated SNPs in 20 Indian ethnic populations (376 individuals) were analysed. We used two different measures for calculating genetic risk scores and compared their performance. The correlation of additive risk scores of NAFLD for three HapMap populations with their weighted mean prevalence was found to be high (R² = 0.93). Later we used this method to compare NAFLD risk among ethnic Indian populations. Based on our observations, the Indian caste populations have high risk scores compared to Caucasians (who are often used as surrogates for Indian caste populations in disease gene association studies), and significantly higher risk scores than the Indian tribal populations.
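
    As a concrete illustration of the additive scoring idea, the sketch below computes an unweighted additive risk score and an effect-size-weighted variant from a matrix of risk allele frequencies. The SNP counts, frequencies and effect sizes are made-up placeholders, not the NAFLD markers analysed in the study.

    ```python
    # Additive genetic risk scores from risk allele frequencies (RAFs).
    import numpy as np

    # One row per population, one column per (hypothetical) NAFLD-associated SNP.
    raf = np.array([
        [0.45, 0.30, 0.12, 0.60],   # population A (placeholder values)
        [0.25, 0.55, 0.08, 0.40],   # population B
    ])
    effect_sizes = np.array([0.20, 0.15, 0.35, 0.10])  # e.g. log odds ratios

    additive_score = 2 * raf.sum(axis=1)     # expected risk allele count (diploid)
    weighted_score = 2 * raf @ effect_sizes  # effect-size-weighted variant

    print(additive_score, weighted_score)
    ```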

  20. Effects of pharmacists' interventions on appropriateness of prescribing and evaluation of the instruments' (MAI, STOPP and START) ability to predict hospitalization - analyses from a randomized controlled trial.

    Directory of Open Access Journals (Sweden)

    Ulrika Gillespie

    Full Text Available BACKGROUND: Appropriateness of prescribing can be assessed by various measures and screening instruments. The aims of this study were to investigate the effects of pharmacists' interventions on appropriateness of prescribing in elderly patients, and to explore the relationship between these results and hospital care utilization during a 12-month follow-up period. METHODS: The study population from a previous randomized controlled study, in which the effects of a comprehensive pharmacist intervention on re-hospitalization were investigated, was used. The criteria from the instruments MAI, STOPP and START were applied retrospectively to the 368 study patients (intervention group (I) n = 182, control group (C) n = 186). The assessments were done on admission and at discharge to detect differences over time and between the groups. Hospital care consumption was recorded and the association between scores for appropriateness and hospitalization was analysed. RESULTS: The number of Potentially Inappropriate Medicines (PIMs) per patient as identified by STOPP was reduced for I but not for C (1.42 to 0.93 vs. 1.46 to 1.66 respectively, p<0.01). The number of Potential Prescription Omissions (PPOs) per patient as identified by START was reduced for I but not for C (0.36 to 0.09 vs. 0.42 to 0.45 respectively, p<0.001). The summated score for MAI was reduced for I but not for C (8.5 to 5.0 and 8.7 to 10.0 respectively, p<0.001). There was a positive association between scores for MAI and STOPP and drug-related readmissions (RR 8-9% and 30-34% respectively). No association was detected between the scores of the tools and total re-visits to hospital. CONCLUSION: The interventions significantly improved the appropriateness of prescribing for patients in the intervention group as evaluated by the instruments MAI, STOPP and START. High scores in MAI and STOPP were associated with a higher number of drug-related readmissions.

  1. A new tool for prediction and analysis of thermal comfort in steady and transient states; Un nouvel outil pour la prediction et l'analyse du confort thermique en regime permanent et variable

    Energy Technology Data Exchange (ETDEWEB)

    Megri, A.Ch. [Illinois Institute of Technology, Civil and Architectural Engineering Dept., Chicago, Illinois (United States)]; Megri, A.F. [Centre Universitaire de Tebessa, Dept. d'Electronique (Algeria)]; El Naqa, I. [Washington Univ., School of Medicine, Dept. of Radiation Oncology, Saint Louis, Missouri (United States)]; Achard, G. [Universite de Savoie, Lab. Optimisation de la Conception et Ingenierie de L'Environnement (LOCIE) - ESIGEC, 73 - Le Bourget du Lac (France)]

    2006-02-15

    Thermal comfort is influenced by psychological as well as physiological factors. This paper proposes the use of support vector machine (SVM) learning for automated prediction of human thermal comfort in steady and transient states. The SVM is an artificial intelligence approach that can capture the input/output mapping from given data. Support vector machines were developed based on the Structural Risk Minimization principle. Different sets of representative experimental environmental factors that affect a homogeneous person's thermal balance were used for training the SVM. The SVM is a very efficient, fast, and accurate technique to identify thermal comfort. This technique permits the determination of thermal comfort indices for different sub-categories of people, such as the sick and elderly in extreme climatic conditions, when experimental data for such a sub-category are available. The experimental data have been used for the learning and testing processes. The results show a good correlation between SVM-predicted values and those obtained from conventional thermal comfort models, such as the Fanger and Gagge models. Once trained on representative data, the machine can be used easily and effectively in comparison with other conventional methods for estimating the various indices. (author)
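
    A minimal sketch of the idea, with support vector regression from scikit-learn standing in for the authors' implementation; the feature set and the synthetic comfort votes below are placeholder assumptions.

    ```python
    # SVM regression of a thermal comfort index from environmental factors.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    rng = np.random.default_rng(2)
    n = 300
    X = np.column_stack([
        rng.uniform(16, 32, n),    # air temperature (deg C)
        rng.uniform(20, 80, n),    # relative humidity (%)
        rng.uniform(0.0, 1.0, n),  # air velocity (m/s)
        rng.uniform(0.8, 2.0, n),  # clothing insulation (clo)
    ])
    # Synthetic comfort vote: warmer, moister, stiller air feels warmer.
    y = (0.25 * (X[:, 0] - 24) + 0.01 * (X[:, 1] - 50)
         - 1.2 * X[:, 2] + rng.normal(0, 0.3, n))

    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
    model.fit(X, y)
    print(model.predict([[28.0, 60.0, 0.1, 1.0]]))  # predicted comfort index
    ```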

  2. Phylogenetic analyses of complete mitochondrial genome sequences suggest a basal divergence of the enigmatic rodent Anomalurus

    Directory of Open Access Journals (Sweden)

    Gissi Carmela

    2007-02-01

    Full Text Available Abstract Background Phylogenetic relationships between Lagomorpha, Rodentia and Primates and their allies (Euarchontoglires) have long been debated. While it is now generally agreed that Rodentia constitutes a monophyletic sister-group of Lagomorpha and that this clade (Glires) is sister to Primates and Dermoptera, higher-level relationships within Rodentia remain contentious. Results We have sequenced and performed extensive evolutionary analyses on the mitochondrial genome of the scaly-tailed flying squirrel Anomalurus sp., an enigmatic rodent whose phylogenetic affinities have been obscure and extensively debated. Our phylogenetic analyses of the coding regions of available complete mitochondrial genome sequences from Euarchontoglires suggest that Anomalurus is a sister taxon to the Hystricognathi, and that this clade represents the most basal divergence among sampled Rodentia. Bayesian dating methods incorporating a relaxed molecular clock provide divergence-time estimates which are consistently in agreement with the fossil record and which indicate a rapid radiation within Glires around 60 million years ago. Conclusion Taken together, the data presented provide a working hypothesis as to the phylogenetic placement of Anomalurus, underline the utility of mitochondrial sequences in the resolution of even relatively deep divergences and go some way to explaining the difficulty of conclusively resolving higher-level relationships within Glires with available data and methodologies.

  3. Uncertainty and Sensitivity Analyses Plan

    Energy Technology Data Exchange (ETDEWEB)

    Simpson, J.C.; Ramsdell, J.V. Jr.

    1993-04-01

    Hanford Environmental Dose Reconstruction (HEDR) Project staff are developing mathematical models to be used to estimate the radiation dose that individuals may have received as a result of emissions since 1944 from the US Department of Energy's (DOE) Hanford Site near Richland, Washington. An uncertainty and sensitivity analyses plan is essential to understand and interpret the predictions from these mathematical models. This is especially true in the case of the HEDR models where the values of many parameters are unknown. This plan gives a thorough documentation of the uncertainty and hierarchical sensitivity analysis methods recommended for use on all HEDR mathematical models. The documentation includes both technical definitions and examples. In addition, an extensive demonstration of the uncertainty and sensitivity analysis process is provided using actual results from the Hanford Environmental Dose Reconstruction Integrated Codes (HEDRIC). This demonstration shows how the approaches used in the recommended plan can be adapted for all dose predictions in the HEDR Project.
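
    The kind of propagation such a plan prescribes can be pictured with a toy Monte Carlo: sample the uncertain inputs, push them through a (here deliberately trivial) dose model, and rank the inputs with a simple sensitivity measure. The three-factor model and all distributions below are invented stand-ins, not the HEDR codes.

    ```python
    # Toy Monte Carlo uncertainty propagation with rank-correlation sensitivities.
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(5)
    n = 10_000
    release = rng.lognormal(mean=0.0, sigma=0.5, size=n)  # source term (arbitrary)
    dispersion = rng.uniform(0.5, 1.5, n)                 # transport factor
    uptake = rng.normal(1.0, 0.1, n)                      # exposure pathway factor

    dose = release * dispersion * uptake                  # trivial stand-in model

    print(f"median {np.median(dose):.2f}, 95th percentile {np.percentile(dose, 95):.2f}")
    for name, x in [("release", release), ("dispersion", dispersion), ("uptake", uptake)]:
        rho = spearmanr(x, dose)[0]   # rank correlation as a sensitivity index
        print(f"sensitivity to {name}: {rho:.2f}")
    ```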

  4. Uncertainty and Sensitivity Analyses Plan

    International Nuclear Information System (INIS)

    Hanford Environmental Dose Reconstruction (HEDR) Project staff are developing mathematical models to be used to estimate the radiation dose that individuals may have received as a result of emissions since 1944 from the US Department of Energy's (DOE) Hanford Site near Richland, Washington. An uncertainty and sensitivity analyses plan is essential to understand and interpret the predictions from these mathematical models. This is especially true in the case of the HEDR models where the values of many parameters are unknown. This plan gives a thorough documentation of the uncertainty and hierarchical sensitivity analysis methods recommended for use on all HEDR mathematical models. The documentation includes both technical definitions and examples. In addition, an extensive demonstration of the uncertainty and sensitivity analysis process is provided using actual results from the Hanford Environmental Dose Reconstruction Integrated Codes (HEDRIC). This demonstration shows how the approaches used in the recommended plan can be adapted for all dose predictions in the HEDR Project

  5. Periodic safety analyses

    International Nuclear Information System (INIS)

    The IAEA Safety Guide 50-SG-S8, devoted to 'Safety Aspects of Foundations of Nuclear Power Plants', indicates that the operator of an NPP should establish a programme of inspections of safe operation during construction, start-up and the service life of the plant, to obtain the data needed for estimating the lifetime of structures and components. At the same time the programme should ensure that the safety margins are appropriate. Periodic safety analyses are an important part of the safety inspection programme. Periodic safety reporting is a method for testing the whole safety system, or a part of it, against precise criteria. Periodic safety analyses are not meant for qualification of the plant components. Separate analyses are devoted to: start-up, qualification of components and materials, and aging. All these analyses are described in this presentation. The last chapter describes the experience obtained for the PWR-900 and PWR-1300 units from 1986 to 1989.

  6. Laser Beam Focus Analyser

    DEFF Research Database (Denmark)

    Nielsen, Peter Carøe; Hansen, Hans Nørgaard; Olsen, Flemming Ove;

    2007-01-01

    The quantitative and qualitative description of laser beam characteristics is important for process implementation and optimisation. In particular, a need for quantitative characterisation of beam diameter was identified when using fibre lasers for micro manufacturing. Here the beam diameter limits the obtainable features in direct laser machining as well as heat affected zones in welding processes. This paper describes the development of a measuring unit capable of analysing beam shape and diameter of lasers to be used in manufacturing processes. The analyser is based on the principle of a rotating mechanical wire being swept through the laser beam at varying Z-heights. The reflected signal is analysed and the resulting beam profile determined. The development comprised the design of a flexible fixture capable of providing both rotation and Z-axis movement, control software including data capture...

  7. Report sensory analyses veal

    OpenAIRE

    Veldman, M.; Schelvis-Smit, A.A.M.

    2005-01-01

    On behalf of a client of the Animal Sciences Group, different varieties of veal were analysed by both instrumental and sensory analyses. The sensory evaluation was performed with a sensory analytical panel between 13 May and 31 May 2005. The three varieties of veal were: young bull, pink veal and white veal. The sensory descriptive analyses show that the three groups (young bull, pink veal and white veal) differ significantly in red colour for the raw meat as well as the baked

  8. Wavelet Analyses and Applications

    Science.gov (United States)

    Bordeianu, Cristian C.; Landau, Rubin H.; Paez, Manuel J.

    2009-01-01

    It is shown how a modern extension of Fourier analysis known as wavelet analysis is applied to signals containing multiscale information. First, a continuous wavelet transform is used to analyse the spectrum of a nonstationary signal (one whose form changes in time). The spectral analysis of such a signal gives the strength of the signal in each…
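
    In the same spirit, a continuous wavelet transform of a nonstationary test signal takes only a few lines with PyWavelets; the Morlet wavelet and the scale range below are arbitrary illustrative choices.

    ```python
    # Continuous wavelet transform of a signal whose frequency changes in time.
    import numpy as np
    import pywt

    fs = 1000.0
    t = np.arange(0, 1, 1 / fs)
    # Nonstationary signal: 50 Hz in the first half, 120 Hz in the second.
    signal = np.where(t < 0.5, np.sin(2 * np.pi * 50 * t), np.sin(2 * np.pi * 120 * t))

    scales = np.arange(1, 128)
    coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)
    power = np.abs(coeffs) ** 2   # signal strength over both time and scale

    # The dominant frequency differs between the two halves:
    print(freqs[np.argmax(power[:, 250])], freqs[np.argmax(power[:, 750])])
    ```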

  9. Probabilistic safety analyses (PSA)

    International Nuclear Information System (INIS)

    The guide shows how probabilistic safety analyses (PSA) are used in the design, construction and operation of light water reactor plants to help ensure that plant safety is adequate in all operational states.

  10. Cost-Benefit Analyses of Transportation Investments

    DEFF Research Database (Denmark)

    Næss, Petter

    2006-01-01

    environment. In addition, main input data are based on transport modelling analyses resting on a misleading 'local ontology' among the model makers. The ontological misconceptions translate into erroneous epistemological assumptions about the possibility of precise predictions and the validity of willingness...

  11. Possible future HERA analyses

    CERN Document Server

    Geiser, Achim

    2015-01-01

    A variety of possible future analyses of HERA data in the context of the HERA data preservation programme is collected, motivated, and commented. The focus is placed on possible future analyses of the existing $ep$ collider data and their physics scope. Comparisons to the original scope of the HERA programme are made, and cross references to topics also covered by other participants of the workshop are given. This includes topics on QCD, proton structure, diffraction, jets, hadronic final states, heavy flavours, electroweak physics, and the application of related theory and phenomenology topics like NNLO QCD calculations, low-x related models, nonperturbative QCD aspects, and electroweak radiative corrections. Synergies with other collider programmes are also addressed. In summary, the range of physics topics which can still be uniquely covered using the existing data is very broad and of considerable physics interest, often matching the interest of results from colliders currently in operation. Due to well-e...

  12. Statistisk analyse med SPSS

    OpenAIRE

    Linnerud, Kristin; Oklevik, Ove; Slettvold, Harald

    2004-01-01

    This note has its origin in lectures and teaching for third-year students of economics and administration at Sogn og Fjordane University College. It is particularly oriented towards the SPSS teaching in the two courses 'OR 685 Marknadsanalyse og merkevarestrategi' (Market Analysis and Brand Strategy) and 'BD 616 Økonomistyring og analyse med programvare' (Financial Management and Analysis with Software).

  13. Biomass feedstock analyses

    Energy Technology Data Exchange (ETDEWEB)

    Wilen, C.; Moilanen, A.; Kurkela, E. [VTT Energy, Espoo (Finland). Energy Production Technologies]

    1996-12-31

    The overall objectives of the project 'Feasibility of electricity production from biomass by pressurized gasification systems' within the EC Research Programme JOULE II were to evaluate the potential of advanced power production systems based on biomass gasification and to study the technical and economic feasibility of these new processes with different types of biomass feedstocks. This report was prepared as part of this R and D project. The objectives of this task were to perform fuel analyses of potential woody and herbaceous biomasses with specific regard to the gasification properties of the selected feedstocks. The analyses of 15 Scandinavian and European biomass feedstocks included density, proximate and ultimate analyses, trace compounds, ash composition and fusion behaviour in oxidizing and reducing atmospheres. The wood-derived fuels, such as whole-tree chips, forest residues, bark and to some extent willow, can be expected to have good gasification properties. Difficulties caused by ash fusion and sintering in straw combustion and gasification are generally known. The ash and alkali metal contents of the European biomasses harvested in Italy resembled those of the Nordic straws, and they are expected to behave largely like straw in gasification. No direct relation between the ash fusion behaviour (determined according to the standard method) and, for instance, the alkali metal content was found in the laboratory determinations. A more profound characterisation of the fuels would require gasification experiments in a thermobalance and a PDU (Process Development Unit) rig. (orig.) (10 refs.)

  14. Possible future HERA analyses

    Energy Technology Data Exchange (ETDEWEB)

    Geiser, Achim

    2015-12-15

    A variety of possible future analyses of HERA data in the context of the HERA data preservation programme is collected, motivated, and commented. The focus is placed on possible future analyses of the existing ep collider data and their physics scope. Comparisons to the original scope of the HERA programme are made, and cross references to topics also covered by other participants of the workshop are given. This includes topics on QCD, proton structure, diffraction, jets, hadronic final states, heavy flavours, electroweak physics, and the application of related theory and phenomenology topics like NNLO QCD calculations, low-x related models, nonperturbative QCD aspects, and electroweak radiative corrections. Synergies with other collider programmes are also addressed. In summary, the range of physics topics which can still be uniquely covered using the existing data is very broad and of considerable physics interest, often matching the interest of results from colliders currently in operation. Due to well-established data and MC sets, calibrations, and analysis procedures, the manpower and expertise needed for a particular analysis is often very much smaller than that needed for an ongoing experiment. Since centrally funded manpower to carry out such analyses is no longer available, this contribution not only targets experienced self-funded experimentalists, but also theorists and master-level students who might wish to carry out such an analysis.

  15. Analysis of K-net and Kik-net data: implications for ground motion prediction - acceleration time histories, response spectra and nonlinear site response; Analyse des donnees accelerometriques de K-net et Kik-net: implications pour la prediction du mouvement sismique - accelerogrammes et spectres de reponse - et la prise en compte des effets de site non-lineaire

    Energy Technology Data Exchange (ETDEWEB)

    Pousse, G

    2005-10-15

    This thesis intends to characterize ground motion during earthquakes. This work is based on two Japanese networks. It deals with databases of shallow events (depth less than 25 km) with magnitudes between 4.0 and 7.3. The analysis of K-net data allows a spectral ground-motion prediction equation to be computed and the shape of the Eurocode 8 design spectra to be reviewed. We show the larger amplification at short periods for Japanese data and bring to light the soil amplification that takes place at long periods. In addition, we develop a new empirical model for simulating synthetic stochastic nonstationary acceleration time histories. By specifying magnitude, distance and site effect, this model can produce many of the time histories that a seismic event is liable to generate at the place of interest. Furthermore, the study of near-field borehole records of the Kik-net allows the validity domain of predictive equations to be explored and explains what happens when ground-motion predictions are extrapolated. Finally, we show that nonlinearity reduces the dispersion of ground motion at the surface. (author)
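
    A generic, much simpler stand-in for such an empirical simulation model is band-passed white noise shaped by a time envelope; the band limits and the Saragoni-Hart-style envelope parameters below are placeholders, not the fitted model of the thesis.

    ```python
    # Generic stochastic nonstationary acceleration time history (illustrative).
    import numpy as np
    from scipy.signal import butter, lfilter

    def synthetic_accelerogram(duration=20.0, fs=100.0, peak_time=4.0, seed=0):
        rng = np.random.default_rng(seed)
        t = np.arange(0, duration, 1 / fs)
        noise = rng.normal(0, 1, t.size)
        # Band-pass to a plausible strong-motion band (placeholder corner freqs).
        b, a = butter(4, [0.5, 20.0], btype="band", fs=fs)
        filtered = lfilter(b, a, noise)
        # Saragoni-Hart-style envelope: quadratic rise, exponential decay,
        # normalised to peak at t = peak_time.
        envelope = (t / peak_time) ** 2 * np.exp(2 * (1 - t / peak_time))
        return t, filtered * envelope

    t, acc = synthetic_accelerogram()
    print(f"PGA (arbitrary units): {np.max(np.abs(acc)):.3f}")
    ```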

  16. Digital differential analysers

    CERN Document Server

    Shilejko, A V; Higinbotham, W

    1964-01-01

    Digital Differential Analysers presents the principles, operations, design, and applications of digital differential analyzers, machines able to represent initial quantities and divide them into separate functional units performing a number of basic mathematical operations. The book discusses the theoretical principles underlying the operation of digital differential analyzers, such as the use of the delta-modulation method and function-generator units. Digital integration methods and the classes of digital differential analyzer designs are also reviewed. The te

  17. Analysing Access Control Specifications

    DEFF Research Database (Denmark)

    Probst, Christian W.; Hansen, René Rydhof

    2009-01-01

    common tool to answer this question, analysis of log files, faces the problem that the amount of logged data may be overwhelming. This problem gets even worse in the case of insider attacks, where the attacker's actions usually will be logged as permissible, standard actions - if they are logged at all. Recent events have revealed intimate knowledge of surveillance and control systems on the side of the attacker, making it often impossible to deduce the identity of an inside attacker from logged data. In this work we present an approach that analyses the access control configuration to identify the set...

  18. AMS analyses at ANSTO

    Energy Technology Data Exchange (ETDEWEB)

    Lawson, E.M. [Australian Nuclear Science and Technology Organisation, Lucas Heights, NSW (Australia). Physics Division]

    1998-03-01

    The major use of ANTARES is Accelerator Mass Spectrometry (AMS) with 14C being the most commonly analysed radioisotope - presently about 35 % of the available beam time on ANTARES is used for 14C measurements. The accelerator measurements are supported by, and dependent on, a strong sample preparation section. The ANTARES AMS facility supports a wide range of investigations into fields such as global climate change, ice cores, oceanography, dendrochronology, anthropology, and classical and Australian archaeology. Described here are some examples of the ways in which AMS has been applied to support research into the archaeology, prehistory and culture of this continent's indigenous Aboriginal peoples. (author)

  19. AMS analyses at ANSTO

    International Nuclear Information System (INIS)

    The major use of ANTARES is Accelerator Mass Spectrometry (AMS) with 14C being the most commonly analysed radioisotope - presently about 35 % of the available beam time on ANTARES is used for 14C measurements. The accelerator measurements are supported by, and dependent on, a strong sample preparation section. The ANTARES AMS facility supports a wide range of investigations into fields such as global climate change, ice cores, oceanography, dendrochronology, anthropology, and classical and Australian archaeology. Described here are some examples of the ways in which AMS has been applied to support research into the archaeology, prehistory and culture of this continent's indigenous Aboriginal peoples. (author)

  20. Systemdynamisk analyse av vannkraftsystem

    OpenAIRE

    Rydning, Anja

    2007-01-01

    In this thesis, a dynamic analysis of the Fortun hydropower plant is carried out. Three phenomena are considered in particular: surge-shaft oscillations between the surge shaft and the reservoir, pressure transients (water hammer) at the turbine caused by retardation pressure when the turbine discharge changes, and governing stability. The shaft oscillations and pressure transients are computed analytically from the continuity and momentum equations. Models of the Fortun plant have been built to compute pressure transients and shaft oscillations. A model ...

  1. Chapter 5. Safety analyses

    International Nuclear Information System (INIS)

    In 2000 the safety analyses of the Nuclear Regulatory Authority of the Slovak Republic (UJD) were focused on verification of the safety analysis report and the probabilistic safety assessment study for NPP V-1 Bohunice after its reconstruction, on review of the suggested changes to the Limits and Conditions for NPP V-2 Bohunice, and on the assessment of operational events. Important work was also performed in solving scientific and technical tasks within bilateral co-operation projects between UJD and its international partner organisations, i.e. within the international PHARE programme as well as the 5th Framework Programme of the European Commission. Verification of the safety analysis part of the safety report for NPP V-1 Bohunice after the gradual reconstruction focused on checking and judging the completeness of the considered initiating events, safety criteria and input data, the adequacy of the calculation models used, and the overall quality of the submitted documentation. The suitability of the methodology and calculation programmes used, the level of their verification, and the correctness and interpretation of the results were assessed. The review showed that the safety analyses were performed in compliance with internationally accepted practice and the recommendations of UJD and the IAEA. The required safety level of NPP V-1 Bohunice was confirmed. A document with the results and findings of the review was prepared; it includes details of the independent calculations performed, their results, and a comparison with the results given in the safety report. Special attention was paid to the review of the level 1 probabilistic safety assessment study for NPP V-1 Bohunice after its gradual reconstruction. The study elaborated the probabilistic safety analysis of the NPP at full power operation and quantified the impact of the gradual reconstruction on risk reduction. The

  2. Uncertainty Analyses and Strategy

    International Nuclear Information System (INIS)

    The DOE identified a variety of uncertainties, arising from different sources, during its assessment of the performance of a potential geologic repository at the Yucca Mountain site. In general, the number and detail of process models developed for the Yucca Mountain site, and the complex coupling among those models, make the direct incorporation of all uncertainties difficult. The DOE has addressed these issues in a number of ways using an approach to uncertainties that is focused on producing a defensible evaluation of the performance of a potential repository. The treatment of uncertainties oriented toward defensible assessments has led to analyses and models with so-called "conservative" assumptions and parameter bounds, where conservative implies lower performance than might be demonstrated with a more realistic representation. The varying maturity of the analyses and models, and uneven level of data availability, result in total system level analyses with a mix of realistic and conservative estimates (for both probabilistic representations and single values). That is, some inputs have realistically represented uncertainties, and others are conservatively estimated or bounded. However, this approach is consistent with the "reasonable assurance" approach to compliance demonstration, which was called for in the U.S. Nuclear Regulatory Commission's (NRC) proposed 10 CFR Part 63 regulation (64 FR 8640 [DIRS 101680]). A risk analysis that includes conservatism in the inputs will result in conservative risk estimates. Therefore, the approach taken for the Total System Performance Assessment for the Site Recommendation (TSPA-SR) provides a reasonable representation of processes and conservatism for purposes of site recommendation. However, mixing unknown degrees of conservatism in models and parameter representations reduces the transparency of the analysis and makes the development of coherent and consistent probability statements about projected repository

  3. Automated Quality Assurance of Online NIR Analysers

    Directory of Open Access Journals (Sweden)

    Kari Aaljoki

    2005-01-01

    Full Text Available Modern NIR analysers produce valuable data for closed-loop process control and optimisation practically in real time. Thus it is highly important to keep them in the best possible shape. Quality assurance (QA) of NIR analysers is an interesting and complex issue, because it is not only the instrument and sample handling that have to be monitored; at the same time, the validity of the prediction models has to be assured. A system for fully automated QA of NIR analysers is described. The system takes care of collecting and organising spectra from various instruments, and relevant laboratory and process management system (PMS) data. Validation of spectra is based on simple diagnostic values derived from the spectra. Predictions are validated against laboratory (LIMS) or other online analyser results (collected from the PMS). The system features automated alarming, reporting, trending, and charting functions for major key variables for easy visual inspection. Various textual and graphical reports are sent to maintenance people through email. The software was written with Borland Delphi 7 Enterprise. Oracle and PMS ODBC interfaces were used for accessing LIMS and PMS data using appropriate SQL queries. It will be shown that it is possible to take action even before the quality of predictions is seriously affected, thus maximising the overall uptime of the instrument.
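
    The prediction-validation step can be pictured as a running comparison of analyser output against laboratory reference values with a control limit; the sketch below is a generic illustration with invented names and limits, not the Delphi/Oracle system described.

    ```python
    # Running-bias check of online analyser predictions against lab references.
    import numpy as np

    def bias_alarm(predicted, lab_reference, bias_limit=0.5, window=10):
        """True (alarm) if the mean bias over the last `window` samples exceeds the limit."""
        residuals = np.asarray(predicted) - np.asarray(lab_reference)
        return abs(residuals[-window:].mean()) > bias_limit

    rng = np.random.default_rng(3)
    lab = rng.normal(80.0, 2.0, 50)             # e.g. LIMS reference values
    online = lab + rng.normal(0.1, 0.2, 50)     # analyser tracking the lab
    drifted = lab + np.linspace(0.0, 1.2, 50)   # slow calibration drift

    print(bias_alarm(online, lab))    # False: within the control limit
    print(bias_alarm(drifted, lab))   # True: alarm before quality collapses
    ```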

  4. APROS nuclear plant analyser

    International Nuclear Information System (INIS)

    The paper describes the build-up of the Loviisa plant primary circuit model using a graphical user interface and generic components. The secondary circuit model of Loviisa is constructed in the same manner. The entire power plant model thus obtained is used for the calculation of two example transients. These examples originate from the Loviisa Unit 2 dynamic tests in 1980. The Modular Plant Analyser results are compared with the Loviisa Unit 2 measurement data; this comparison indicates good agreement with the data. The present work has been performed using the Alliant FX/40 minisupercomputer, with which the Loviisa model currently fulfils the real-time requirement with a 0.5 s timestep. (orig./DG)

  5. EEG analyses with SOBI.

    Energy Technology Data Exchange (ETDEWEB)

    Glickman, Matthew R.; Tang, Akaysha (University of New Mexico, Albuquerque, NM)

    2009-02-01

    The motivating vision behind Sandia's MENTOR/PAL LDRD project has been that of systems which use real-time psychophysiological data to support and enhance human performance, both of individuals and of groups. Relevant and significant psychophysiological data being a necessary prerequisite to such systems, this LDRD has focused on identifying and refining such signals. The project has focused in particular on EEG (electroencephalogram) data as a promising candidate signal because it (potentially) provides a broad window on brain activity with relatively low cost and logistical constraints. We report here on two analyses performed on EEG data collected in this project using the SOBI (Second Order Blind Identification) algorithm to identify two independent sources of brain activity: one in the frontal lobe and one in the occipital lobe. The first study looks at directional influences between the two components, while the second study looks at inferring gender based upon the frontal component.
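
    For readers unfamiliar with second-order blind identification, the sketch below implements AMUSE, a single-lag relative of SOBI (SOBI proper jointly diagonalises lagged covariances at many lags); the two-channel synthetic mixture merely stands in for EEG recordings.

    ```python
    # AMUSE: blind separation via one time-lagged covariance (simplified SOBI).
    import numpy as np

    def amuse(X, lag=1):
        """X: (channels, samples). Returns estimated source signals."""
        X = X - X.mean(axis=1, keepdims=True)
        # Whiten using the zero-lag covariance.
        d, E = np.linalg.eigh(X @ X.T / X.shape[1])
        Z = (E @ np.diag(d ** -0.5) @ E.T) @ X
        # Symmetrised lagged covariance of the whitened data; its eigenvectors
        # give the remaining rotation to the sources.
        C = Z[:, lag:] @ Z[:, :-lag].T / (Z.shape[1] - lag)
        _, V = np.linalg.eigh((C + C.T) / 2)
        return V.T @ Z

    rng = np.random.default_rng(4)
    t = np.arange(0, 10, 0.01)
    sources = np.vstack([np.sin(2 * np.pi * 1.1 * t),
                         np.sign(np.sin(2 * np.pi * 0.3 * t))])
    mixed = np.array([[0.7, 0.3], [0.4, 0.6]]) @ sources  # unknown mixing
    recovered = amuse(mixed)
    corr = np.corrcoef(np.vstack([recovered, sources]))[:2, 2:]
    print(np.abs(corr).round(2))  # each recovered row should match one source
    ```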

  6. Micromechanical Analyses of Sturzstroms

    Science.gov (United States)

    Imre, Bernd; Laue, Jan; Springman, Sarah M.

    2010-05-01

    Sturzstroms are very fast landslides of very large initial volume. As type features they display extreme run-out, paired with intensive fragmentation of the involved blocks of rock within a collisional flow. The inherent danger to the growing communities in alpine valleys below future potential sites of sturzstroms must be examined, and predictions of endangered zones can then inform the planning processes in these areas. This calls for the ability to make Type A predictions, according to Lambe (1973), which are done before an event. But Type A predictions are only possible if sufficient understanding of the mechanisms involved in a process is available. The motivation of the doctoral thesis research project presented is therefore to reveal the mechanics of sturzstroms in more detail in order to contribute to the development of a Type A run-out prediction model. It is obvious that a sturzstrom represents a highly dynamic collisional granular regime. Thus particles not only collide but will eventually crush each other. Erismann and Abele (2001) describe this process as dynamic disintegration, where kinetic energy is the main driver for fragmenting the rock mass. In this case an approach combining the type features of long run-out and fragmentation within a single hypothesis is represented by the dynamic fragmentation-spreading model (Davies and McSaveney, 2009; McSaveney and Davies, 2009). Unfortunately, sturzstroms, and fragmentation within sturzstroms, cannot be observed directly in a real event because of their long "recurrence time" and the obvious difficulties in placing measuring devices within such a rock flow. Therefore, rigorous modelling is required, in particular of the transition from static to dynamic behaviour, to achieve better knowledge of the mechanics of sturzstroms and to provide empirical evidence to confirm the dynamic fragmentation-spreading model. Within this study fragmentation and its effects on the mobility of sturzstroms

  7. The application analyses for primary spectrum pyrometer

    Institute of Scientific and Technical Information of China (English)

    FU; TaiRan

    2007-01-01

    In the applications of primary spectrum pyrometry, based on the dynamic range and the minimum sensitivity of the sensor, application issues such as the measurement range and the measurement partition were investigated through theoretical analyses. For a developed primary spectrum pyrometer, theoretical predictions of the measurement range and the distributions of the measurement partition were presented through numerical simulations. Measurement experiments with a high-temperature blackbody and a standard temperature lamp were performed to further verify the above theoretical analyses and numerical results. The research in this paper therefore provides helpful support for the applications of the primary spectrum pyrometer and other radiation pyrometers.
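
    For orientation, radiation pyrometry rests on Planck's law for blackbody spectral radiance; the sketch below shows how a measured radiance maps back to temperature at a single wavelength. This is a generic single-wavelength illustration with an assumed working wavelength, not the specific primary-spectrum method of the paper.

```python
# Generic single-wavelength radiation pyrometry sketch (not the paper's method).
# Planck's law gives spectral radiance; inverting it recovers temperature.
import math

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(wavelength_m, T):
    """Blackbody spectral radiance, W / (m^2 sr m)."""
    a = 2 * H * C**2 / wavelength_m**5
    b = H * C / (wavelength_m * KB * T)
    return a / (math.exp(b) - 1.0)

def temperature_from_radiance(wavelength_m, L):
    """Invert Planck's law at a single wavelength."""
    a = 2 * H * C**2 / wavelength_m**5
    return H * C / (wavelength_m * KB * math.log(a / L + 1.0))

lam = 650e-9                              # assumed working wavelength, 650 nm
L = planck_radiance(lam, 1800.0)          # simulate an 1800 K blackbody
print(temperature_from_radiance(lam, L))  # -> 1800.0 (round trip)
```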

  8. Severe Accident Recriticality Analyses (SARA)

    International Nuclear Information System (INIS)

    Recriticality in a BWR has been studied for a total loss of electric power accident scenario. In a BWR, the B4C control rods would melt and relocate from the core before the fuel during core uncovery and heat-up. If electric power returns during this time window, unborated water from the ECCS systems will start to reflood the partly control-rod-free core. Recriticality might then take place, for which the only mitigating mechanisms are the Doppler effect and void formation. In order to assess the impact of recriticality on reactor safety, including accident management measures, the following issues have been investigated in the SARA project: 1. the energy deposition in the fuel during the super-prompt power burst, 2. the quasi steady-state reactor power following the initial power burst and 3. the containment response to elevated quasi steady-state reactor power. The approach was to use three computer codes and to further develop and adapt them for the task. The codes were SIMULATE-3K, APROS and RECRIT. Recriticality analyses were carried out for a number of selected reflooding transients for the Oskarshamn 3 plant in Sweden with SIMULATE-3K and for the Olkiluoto 1 plant in Finland with all three codes. The core state initial and boundary conditions prior to recriticality have been studied with the severe accident codes SCDAP/RELAP5, MELCOR and MAAP4. The results of the analyses show that all three codes predict recriticality, both super-prompt power bursts and quasi steady-state power generation, for the studied range of parameters, i.e. with core uncovery and heat-up to maximum core temperatures around 1800 K and water flow rates of 45 kg/s to 2000 kg/s injected into the downcomer. Since the recriticality takes place in a small fraction of the core, the power densities are high, which results in large energy deposition in the fuel during the power burst in some accident scenarios. The highest value, 418 cal/g, was obtained with SIMULATE-3K for an Oskarshamn 3 case with reflooding rate

  9. Severe Accident Recriticality Analyses (SARA)

    Energy Technology Data Exchange (ETDEWEB)

    Frid, W. [Swedish Nuclear Power Inspectorate, Stockholm (Sweden); Hoejerup, F. [Risoe National Lab. (Denmark); Lindholm, I.; Miettinen, J.; Puska, E.K. [VTT Energy, Helsinki (Finland); Nilsson, Lars [Studsvik Eco and Safety AB, Nykoeping (Sweden); Sjoevall, H. [Teoliisuuden Voima Oy (Finland)

    1999-11-01

    Recriticality in a BWR has been studied for a total loss of electric power accident scenario. In a BWR, the B4C control rods would melt and relocate from the core before the fuel during core uncovery and heat-up. If electric power returns during this time window, unborated water from the ECCS systems will start to reflood the partly control-rod-free core. Recriticality might then take place, for which the only mitigating mechanisms are the Doppler effect and void formation. In order to assess the impact of recriticality on reactor safety, including accident management measures, the following issues have been investigated in the SARA project: 1. the energy deposition in the fuel during the super-prompt power burst, 2. the quasi steady-state reactor power following the initial power burst and 3. the containment response to elevated quasi steady-state reactor power. The approach was to use three computer codes and to further develop and adapt them for the task. The codes were SIMULATE-3K, APROS and RECRIT. Recriticality analyses were carried out for a number of selected reflooding transients for the Oskarshamn 3 plant in Sweden with SIMULATE-3K and for the Olkiluoto 1 plant in Finland with all three codes. The core state initial and boundary conditions prior to recriticality have been studied with the severe accident codes SCDAP/RELAP5, MELCOR and MAAP4. The results of the analyses show that all three codes predict recriticality, both super-prompt power bursts and quasi steady-state power generation, for the studied range of parameters, i.e. with core uncovery and heat-up to maximum core temperatures around 1800 K and water flow rates of 45 kg/s to 2000 kg/s injected into the downcomer. Since the recriticality takes place in a small fraction of the core, the power densities are high, which results in large energy deposition in the fuel during the power burst in some accident scenarios. The highest value, 418 cal/g, was obtained with SIMULATE-3K for an Oskarshamn 3 case with reflooding

  10. Climate prediction and predictability

    Science.gov (United States)

    Allen, Myles

    2010-05-01

    Climate prediction is generally accepted to be one of the grand challenges of the Geophysical Sciences. What is less widely acknowledged is that fundamental issues have yet to be resolved concerning the nature of the challenge, even after decades of research in this area. How do we verify or falsify a probabilistic forecast of a singular event such as anthropogenic warming over the 21st century? How do we determine the information content of a climate forecast? What does it mean for a modelling system to be "good enough" to forecast a particular variable? How will we know when models and forecasting systems are "good enough" to provide detailed forecasts of weather at specific locations or, for example, the risks associated with global geo-engineering schemes? This talk will provide an overview of these questions in the light of recent developments in multi-decade climate forecasting, drawing on concepts from information theory, machine learning and statistics. I will draw extensively but not exclusively from the experience of the climateprediction.net project, running multiple versions of climate models on personal computers.

  11. Predictable return distributions

    DEFF Research Database (Denmark)

    Pedersen, Thomas Quistgaard

    This paper provides detailed insights into predictability of the entire stock and bond return distribution through the use of quantile regression. This allows us to examine specific parts of the return distribution such as the tails or the center, and for a sufficiently fine grid of quantiles we can ... predictable as a function of economic state variables. The results are, however, very different for stocks and bonds. The state variables primarily predict only location shifts in the stock return distribution, while they also predict changes in higher-order moments in the bond return distribution. Out-of-sample analyses show that the relative accuracy of the state variables in predicting future returns varies across the distribution. A portfolio study shows that an investor with power utility can obtain economic gains by applying the empirical return distribution in portfolio decisions instead of imposing an...
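
    A minimal sketch of the quantile-regression idea, using statsmodels, is shown below. The state variable, return series and quantile grid are invented for illustration and do not reproduce the paper's data or results.

```python
# Quantile regression of returns on a state variable (illustrative data only).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
state = rng.standard_normal(500)                 # e.g. a demeaned state variable
# Toy returns whose dispersion (not just location) depends on the state:
returns = 0.2 * state + (1.0 + 0.5 * np.abs(state)) * rng.standard_normal(500)

X = sm.add_constant(state)
for q in (0.05, 0.50, 0.95):                     # tails and center
    fit = sm.QuantReg(returns, X).fit(q=q)
    print(f"q={q:.2f}  slope={fit.params[1]:+.3f}")
# Slopes that differ across quantiles indicate the state variable moves
# more than just the location of the return distribution.
```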

  12. Predictive Data Mining in KPP

    Directory of Open Access Journals (Sweden)

    Dr. R.K. Chauhan

    2012-09-01

    In this paper, we have provided the Genetic Algorithm (GA) used for the prediction process in the Knowledge Penetration Process (KPP). The said GA is implemented and its efficiency is analysed.
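
    The entry gives no implementation detail; as a generic reminder of how a genetic algorithm works, here is a minimal skeleton with selection, one-point crossover and mutation. Everything in it (fitness function, rates, bit-string representation) is an assumption for illustration, not the authors' KPP algorithm.

```python
# Generic genetic-algorithm skeleton (illustrative; not the KPP implementation).
import random

def evolve(fitness, n_bits=16, pop_size=30, generations=50,
           p_cross=0.7, p_mut=0.01):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        pop = [scored[0][:], scored[1][:]]           # elitism: keep the two best
        while len(pop) < pop_size:
            a, b = random.sample(scored[:10], 2)     # select among the top ten
            if random.random() < p_cross:
                cut = random.randrange(1, n_bits)
                child = a[:cut] + b[cut:]            # one-point crossover
            else:
                child = a[:]
            child = [1 - g if random.random() < p_mut else g for g in child]
            pop.append(child)
    return max(pop, key=fitness)

best = evolve(fitness=sum)    # toy objective: maximize the number of 1-bits
print(best, sum(best))
```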

  13. Data for decay Heat Predictions

    International Nuclear Information System (INIS)

    These proceedings of a specialists' meeting on data for decay heat predictions cover fission product yields, delayed neutrons, and comparative evaluations of evaluated and experimental data for thermal and fast fission. Fourteen conference contributions were analysed
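
    For context, summation-type decay heat predictions combine exactly the quantities discussed at such meetings, fission product yields and nuclide decay data, in a sum over fission products. A standard form of the summation equation (not quoted from the proceedings) is:

```latex
% Summation method for fission product decay heat:
%   P(t)        total decay heat power at cooling time t
%   N_i(t)      inventory of fission product i (from yields and irradiation history)
%   \lambda_i   decay constant of nuclide i
%   \bar{E}_{\beta,i}, \bar{E}_{\gamma,i}  mean beta and gamma energies per decay
P(t) \;=\; \sum_i \lambda_i \, N_i(t) \, \bigl( \bar{E}_{\beta,i} + \bar{E}_{\gamma,i} \bigr)
```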

  14. Prismatic analyser concept for neutron spectrometers

    Energy Technology Data Exchange (ETDEWEB)

    Birk, Jonas O.; Jacobsen, Johan; Hansen, Rasmus L.; Lefmann, Kim [Nano Science Center, Niels Bohr Institute, University of Copenhagen, DK-2100 Copenhagen Ø (Denmark); Markó, Márton; Niedermayer, Christof [Laboratory for Neutron Scattering and Imaging, Paul Scherrer Institute, CH-5232 Villigen PSI (Switzerland); Freeman, Paul G. [Laboratory for Quantum Magnetism, École Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne (Switzerland); Christensen, Niels B. [Institute of Physics, Technical University of Denmark, DK-2800-Kgs. Lyngby (Denmark); Månsson, Martin [Laboratory for Neutron Scattering and Imaging, Paul Scherrer Institute, CH-5232 Villigen PSI (Switzerland); Laboratory for Quantum Magnetism, École Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne (Switzerland); Rønnow, Henrik M. [Nano Science Center, Niels Bohr Institute, University of Copenhagen, DK-2100 Copenhagen Ø (Denmark); Laboratory for Quantum Magnetism, École Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne (Switzerland)

    2014-11-15

    Developments in modern neutron spectroscopy have led to typical sample sizes decreasing from a few centimetres to several millimetres in diameter. We demonstrate how small samples, together with the right choice of analyser and detector components, make distance collimation an important concept in crystal analyser spectrometers. We further show that this opens new possibilities where neutrons with different energies are reflected by the same analyser but counted in different detectors, thus improving both energy resolution and total count rate compared to conventional spectrometers. The technique can readily be combined with advanced focussing geometries and with multiplexing instrument designs. We present a combination of simulations and data showing three different energies simultaneously reflected from one analyser. Experiments were performed on a cold triple-axis instrument and on a prototype inverse-geometry time-of-flight spectrometer installed at PSI, Switzerland, and show excellent agreement with the predictions. Typical improvements will be 2.0 times finer resolution and a factor of 1.9 in flux gain compared to a focussing Rowland geometry, or 3.3 times finer resolution and a factor of 2.4 in flux gain compared to a single flat analyser slab.
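
    The several-energies-per-analyser idea follows from Bragg's law: slightly different take-off angles on the same crystal select different neutron wavelengths and hence energies. A small numeric illustration, assuming a pyrolytic graphite (002) d-spacing and invented angles, follows.

```python
# Neutron energies selected by one analyser crystal at several Bragg angles.
# Assumes pyrolytic graphite (002), d = 3.355 Angstrom; angles are illustrative.
import math

D = 3.355  # d-spacing in Angstrom

def neutron_energy_meV(theta_deg, d=D, n=1):
    lam = 2 * d * math.sin(math.radians(theta_deg)) / n  # Bragg: n*lambda = 2d sin(theta)
    return 81.81 / lam**2      # E[meV] ~= 81.81 / lambda[Angstrom]^2

for theta in (35.0, 37.5, 40.0):   # nearby take-off angles on the same slab
    print(f"theta = {theta:4.1f} deg -> E = {neutron_energy_meV(theta):5.2f} meV")
```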

  15. Fouling analyses of heat exchangers for PSR

    International Nuclear Information System (INIS)

    Fouling of heat exchangers is caused by water-borne deposits, commonly known as foulants, including particulate matter from the air; migrated corrosion products; silt, clays, and sand suspended in water; organic contaminants; and boron-based deposits in plants. This fouling is known to interfere with normal flow characteristics and reduce the thermal efficiency of heat exchangers. This paper focuses on fouling analyses for six heat exchangers of two primary systems in two nuclear power plants: the regenerative heat exchangers of the chemical and volume control system and the component cooling water heat exchangers of the component cooling water system. To analyze the fouling of these heat exchangers, a fouling factor was introduced based on the ASME O&M codes and TEMA standards. Based on the results of the fouling analyses, the present thermal performance and fouling levels of the six heat exchangers were predicted
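
    The fouling factor referred to above is conventionally the difference between the inverse overall heat-transfer coefficients in the fouled and clean states. A small sketch with invented service data:

```python
# Fouling factor from clean vs. service overall heat-transfer coefficients.
# R_f = 1/U_fouled - 1/U_clean  (m^2 K / W); the example numbers are invented.

def overall_U(duty_W, area_m2, lmtd_K):
    """U from Q = U * A * LMTD."""
    return duty_W / (area_m2 * lmtd_K)

U_clean = overall_U(duty_W=2.0e6, area_m2=180.0, lmtd_K=22.0)   # commissioning test
U_fouled = overall_U(duty_W=2.0e6, area_m2=180.0, lmtd_K=27.5)  # current service
R_f = 1.0 / U_fouled - 1.0 / U_clean
print(f"U_clean = {U_clean:.0f} W/m2K, U_fouled = {U_fouled:.0f} W/m2K")
print(f"fouling factor R_f = {R_f:.2e} m2K/W")
```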

  16. Analyses and characterization of double shell tank

    International Nuclear Information System (INIS)

    Evaporator candidate feed from tank 241-AP-108 (108-AP) was sampled under prescribed protocol. Physical, inorganic, and radiochemical analyses were performed on tank 108-AP. Characterization of evaporator feed tank waste is needed primarily for an evaluation of its suitability to be safely processed through the evaporator. Such analyses should provide sufficient information regarding the waste composition to confidently determine whether constituent concentrations are within not only safe operating limits, but should also be relevant to functional limits for operation of the evaporator. Characterization of tank constituent concentrations should provide data which enable a prediction of where the types and amounts of environmentally hazardous waste are likely to occur in the evaporator product streams

  17. Analyses and characterization of double shell tank

    Energy Technology Data Exchange (ETDEWEB)

    1994-10-04

    Evaporator candidate feed from tank 241-AP-108 (108-AP) was sampled under prescribed protocol. Physical, inorganic, and radiochemical analyses were performed on tank 108-AP. Characterization of evaporator feed tank waste is needed primarily for an evaluation of its suitability to be safely processed through the evaporator. Such analyses should provide sufficient information regarding the waste composition to confidently determine whether constituent concentrations are within not only safe operating limits, but should also be relevant to functional limits for operation of the evaporator. Characterization of tank constituent concentrations should provide data which enable a prediction of where the types and amounts of environmentally hazardous waste are likely to occur in the evaporator product streams.

  18. Analyse

    DEFF Research Database (Denmark)

    Dubgaard, Alex

    2009-01-01

    Restrictions on agriculture are good business. On the other hand, it does not pay to reduce greenhouse gas emissions.

  19. STRATEGY PATTERNS PREDICTION MODEL

    OpenAIRE

    Aram Baruch Gonzalez Perez; Jorge Adolfo Ramirez Uresti

    2014-01-01

    Multi-agent systems are broadly known for being able to simulate real-life situations which require the interaction and cooperation of individuals. Opponent modeling can be used along with multi-agent systems to model complex situations such as competitions like soccer games. In this study, a model for predicting opponent moves based on their target is presented. The model is composed of an offline step (learning phase) and an online one (execution phase). The offline step gets and analyses p...

  20. Prediction, Regression and Critical Realism

    DEFF Research Database (Denmark)

    Næss, Petter

    2004-01-01

    This paper considers the possibility of prediction in land use planning, and the use of statistical research methods in analyses of relationships between urban form and travel behaviour. Influential writers within the tradition of critical realism reject the possibility of predicting social phenomena. This position is fundamentally problematic to public planning. Without at least some ability to predict the likely consequences of different proposals, the justification for public sector intervention into market mechanisms will be frail. Statistical methods like regression analyses are commonly ... of prediction necessary and possible in spatial planning of urban development. Finally, the political implications of positions within theory of science rejecting the possibility of predictions about social phenomena are addressed.

  1. Conjoint-Analyse und Marktsegmentierung

    OpenAIRE

    Steiner, Winfried J.; Baumgartner, Bernhard

    2003-01-01

    Market segmentation, along with new-product planning and pricing, is one of the main areas of application of conjoint analysis. Alongside the traditional two-stage procedures, in which conjoint analysis and segmentation are carried out in two separate steps, newer developments such as clusterwise regression and mixture models are now available that allow simultaneous segmentation and preference estimation. The article gives an overview...

  2. Prediction Markets

    DEFF Research Database (Denmark)

    Horn, Christian Franz; Ivens, Bjørn Sven; Ohneberg, Michael;

    2014-01-01

    In recent years, Prediction Markets gained growing interest as a forecasting tool among researchers as well as practitioners, which resulted in an increasing number of publications. In order to track the latest development of research, comprising the extent and focus of research, this article provides a comprehensive review and classification of the literature related to the topic of Prediction Markets. Overall, 316 relevant articles, published in the timeframe from 2007 through 2013, were identified and assigned to a herein presented classification scheme, differentiating between descriptive works, articles of theoretical nature, application-oriented studies and articles dealing with the topic of law and policy. The analysis of the research results reveals that more than half of the literature pool deals with the application and actual function tests of Prediction Markets. The results are...

  3. Mitogenomic analyses from ancient DNA

    DEFF Research Database (Denmark)

    Paijmans, Johanna L.A.; Gilbert, M Thomas P; Hofreiter, Michael

    2013-01-01

    To date, at least 124 partially or fully assembled mitogenomes from more than 20 species have been obtained, and, given the rapid progress in sequencing technology, this number is likely to dramatically increase in the future. The increased information content offered by analysing full mitogenomes...

  4. Analysing student teachers’ lesson plans

    DEFF Research Database (Denmark)

    Carlsen, Louise Meier

    2015-01-01

    I investigate 17 mathematics student teachers’ productions, in view of examining the synergy and interaction between their mathematical and didactical knowledge. The concrete data material consists of lesson plans elaborated for the final exam of a unit on “numbers, arithmetic and algebra”. The anthropological theory of the didactic is used as a framework to analyse these components of practical and theoretical knowledge.

  5. Beskrivende analyse af mekaniske systemer

    DEFF Research Database (Denmark)

    Andreasen, Mogens Myrup; Hansen, Claus Thorp

    Descriptive analysis is the activity where a given product is analysed to obtain insight into different aspects, leading to an explicit description of each of these aspects. This textbook was developed for course 72101 Produktanalyse (Analysis of Products) given at DTU.

  6. The CAMAC logic state analyser

    CERN Document Server

    Centro, Sandro

    1981-01-01

    Summary form only given, as follows. Large electronic experiments using distributed processors for parallel readout and data reduction need to analyse the status of the data acquisition components and to monitor the dead-time constants of each active readout module and processor. For the UA1 experiment, a microprocessor-based CAMAC logic state analyser (CLSA) has been developed in order to implement these functions autonomously. CLSA is a single-unit CAMAC module, able to record, up to 256 times, the logic status of 32 TTL inputs gated by a common clock, internal or external, with a maximum frequency of 2 MHz. The data stored in the internal CLSA memory can be read directly via a CAMAC function or preprocessed by the CLSA 6800 microprocessor. The 6800 resident firmware (4 Kbyte) expands the module features to include an interactive monitor, data recording control, data reduction and histogram accumulation with statistical parameter evaluation. The microprocessor memory and the resident firmware can be externally extended using st...

  7. Workload analyse of assembling process

    Science.gov (United States)

    Ghenghea, L. D.

    2015-11-01

    The workload is the most important indicator for managers responsible for industrial technological processes, no matter whether these are automated, mechanized or simply manual; in each case, machines or workers will be the focus of workload measurements. The paper deals with workload analyses of a largely manual assembly technology for a roller bearing assembly process, executed in a big company with integrated bearing manufacturing processes. In these analyses the delay sampling technique was used to identify and divide all bearing assemblers' activities, to get information about how much of the 480-minute working day the workers allot to each activity. The study shows some ways to increase process productivity without supplementary investment, and also indicates that process automation could be the solution to gain maximum productivity.

  8. Analyse du discours et archive

    OpenAIRE

    Maingueneau, Dominique

    2007-01-01

    Research carried out under the banner of "discourse analysis" is developing considerably throughout the world; by contrast, the "French school of discourse analysis" (AD) has been going through an identity crisis since the beginning of the 1980s. In this paper we explore the reasons for this crisis, and then clarify the concept of the archive which, in our view, makes it possible to extend the path opened at the end of the 1960s. But this is only one of the possible paths, given that, as...

  9. Learner as Statistical Units of Analyses

    Directory of Open Access Journals (Sweden)

    Vivek Venkatesh

    2011-01-01

    Educational psychologists have researched the generality and specificity of metacognitive monitoring in the context of college-level multiple-choice tests, but fairly little is known as to how learners monitor their performance on more complex academic tasks. Even less is known about how monitoring proficiencies such as discrimination and bias might be related to key self-regulatory processes associated with task understanding. This quantitative study explores the relationship between monitoring proficiencies and task understanding in 39 adult learners tackling ill-structured writing tasks for a graduate “theories of e-learning” course. Using the learner as the unit of analysis, the generality of monitoring is confirmed through intra-measure correlation analyses, while facets of its specificity stand out due to the absence of inter-measure correlations. Unsurprisingly, learner-based correlational and repeated-measures analyses did not reveal how monitoring proficiencies and task understanding might be related. However, using the essay as the unit of analysis, ordinal and multinomial regressions reveal how monitoring influences different levels of task understanding. Results are interpreted not only in light of the novel procedures undertaken in calculating performance prediction capability but also in light of the application of essay-based, intra-sample statistical analysis that reveals heretofore unseen relationships between academic self-regulatory constructs.

  10. Analyses of containment structures with corrosion damage

    Energy Technology Data Exchange (ETDEWEB)

    Cherry, J.L. [Sandia National Labs., Albuquerque, NM (United States)

    1997-01-01

    Corrosion damage that has been found in a number of nuclear power plant containment structures can degrade the pressure capacity of the vessel. This has prompted concerns regarding the capacity of corroded containments to withstand accident loadings. To address these concerns, finite element analyses have been performed for a typical PWR Ice Condenser containment structure. Using ABAQUS, the pressure capacity was calculated for a typical vessel with no corrosion damage. Multiple analyses were then performed with the location of the corrosion and the amount of corrosion varied in each analysis. Using a strain-based failure criterion, a "lower bound", "best estimate", and "upper bound" failure level was predicted for each case. These limits were established by: determining the amount of variability that exists in material properties of typical containments, estimating the amount of uncertainty associated with the level of modeling detail and modeling assumptions, and estimating the effect of corrosion on the material properties.

  11. Analyses of containment structures with corrosion damage

    International Nuclear Information System (INIS)

    Corrosion damage that has been found in a number of nuclear power plant containment structures can degrade the pressure capacity of the vessel. This has prompted concerns regarding the capacity of corroded containments to withstand accident loadings. To address these concerns, finite element analyses have been performed for a typical PWR Ice Condenser containment structure. Using ABAQUS, the pressure capacity was calculated for a typical vessel with no corrosion damage. Multiple analyses were then performed with the location of the corrosion and the amount of corrosion varied in each analysis. Using a strain-based failure criterion, a "lower bound", "best estimate", and "upper bound" failure level was predicted for each case. These limits were established by: determining the amount of variability that exists in material properties of typical containments, estimating the amount of uncertainty associated with the level of modeling detail and modeling assumptions, and estimating the effect of corrosion on the material properties

  12. Analysing Protocol Stacks for Services

    DEFF Research Database (Denmark)

    Gao, Han; Nielson, Flemming; Nielson, Hanne Riis

    2011-01-01

    We show an approach, CaPiTo, to model service-oriented applications using process algebras such that, on the one hand, we can achieve a certain level of abstraction without being overwhelmed by the underlying implementation details and, on the other hand, we respect the concrete industrial standards... financial case study taken from Chapter 0-3. Finally, we develop a static analysis to analyse the security properties as they emerge at the level of concrete industrial protocols.

  13. Tematisk analyse af amerikansk hiphop

    OpenAIRE

    Tranberg-Hansen, Katrine; Bøgh Larsen, Cecilie; Jeppsson,Louise Emilie; Lindberg Kirkegaard, Nanna; Funch Madsen, Signe; Bülow Bach, Maria

    2013-01-01

    This paper examines the possible development in the function of American hiphop. It focuses on specific themes like the ghetto, freedom, rebellion, and racial discrimination in hiphop music. To investigate this possible development, two text analysis methods are used, a pragmatic and a stylistic text analysis, together with a historical method, source criticism. A minimal amount of literature has been published on how hiphop culture arose. These studies, however, make it possible to analyse...

  14. STRATEGY PATTERNS PREDICTION MODEL

    Directory of Open Access Journals (Sweden)

    Aram Baruch Gonzalez Perez

    2014-01-01

    Multi-agent systems are broadly known for being able to simulate real-life situations which require the interaction and cooperation of individuals. Opponent modeling can be used along with multi-agent systems to model complex situations such as competitions like soccer games. In this study, a model for predicting opponent moves based on their target is presented. The model is composed of an offline step (learning phase) and an online one (execution phase). The offline step gets and analyses previous experiences, while the online step uses the data generated by the offline analysis to predict opponent moves. This model is illustrated by an experiment with the RoboCup 2D Soccer Simulator. The proposed model was tested using 22 games to create the knowledge base, achieving an accuracy rate of over 80%.
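
    A toy version of the offline/online split described here might look as follows; the move representation, targets and counts are invented and do not reflect the RoboCup implementation.

```python
# Toy offline/online opponent-move predictor (illustrative; not the RoboCup model).
from collections import Counter, defaultdict

class OpponentModel:
    def __init__(self):
        self.table = defaultdict(Counter)   # target -> Counter of observed moves

    def learn(self, games):
        """Offline step: tally moves per inferred target from past games."""
        for target, move in games:
            self.table[target][move] += 1

    def predict(self, target):
        """Online step: most frequent move for this target, if seen before."""
        moves = self.table.get(target)
        return moves.most_common(1)[0][0] if moves else None

model = OpponentModel()
model.learn([("goal_left", "dribble"), ("goal_left", "dribble"),
             ("goal_left", "pass"), ("goal_right", "shoot")])
print(model.predict("goal_left"))    # -> 'dribble'
```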

  15. Comparison of elastic and inelastic analyses

    International Nuclear Information System (INIS)

    The use of inelastic analysis methods instead of the traditional elastic analysis methods in the design of radioactive material (RAM) transport packagings leads to a better understanding of the response of the package to mechanical loadings. Thus, better assessment of the containment, thermal protection, and shielding integrity of the package after a structural accident event can be made. A more accurate prediction of the package response can lead to enhanced safety and also allow for a more efficient use of materials, possibly leading to a package with higher capacity or lower weight. This paper discusses the advantages and disadvantages of using inelastic analysis in the design of RAM shipping packages. The use of inelastic analysis presents several problems to the package designer. When using inelastic analysis, the entire nonlinear response of the material must be known, including the effects of temperature changes and strain rate. Another problem is that there currently is no acceptance criterion for this type of analysis that is approved by regulatory agencies. Inelastic analysis acceptance criteria based on failure stress, failure strain, or plastic energy density could be developed. For both elastic and inelastic analyses it is also important to include other sources of stress in the analyses, such as fabrication stresses, thermal stresses, stresses from bolt preloading, and contact stresses at material interfaces. Offsetting these added difficulties is the improved knowledge of the package behavior. This allows for incorporation of a more uniform margin of safety, which can result in weight savings and a higher level of confidence in the post-accident configuration of the package. In this paper, comparisons between elastic and inelastic analyses are made for a simple ring structure and for a package to transport a large quantity of RAM by rail (rail cask) with lead gamma shielding, to illustrate the differences in the two analysis techniques

  16. Biological aerosol warner and analyser

    Science.gov (United States)

    Schlemmer, Harry; Kürbitz, Gunther; Miethe, Peter; Spieweck, Michael

    2006-05-01

    The development of an integrated sensor device BiSAM (Biological Sampling and Analysing Module) is presented which is designed for rapid detection of aerosol or dust particles potentially loaded with biological warfare agents. All functional steps from aerosol collection via immuno analysis to display of results are fully automated. The core component of the sensor device is an ultra sensitive rapid analyser PBA (Portable Benchtop Analyser) based on a 3 dimensional immuno filtration column of large internal area, Poly HRP marker technology and kinetic optical detection. High sensitivity despite of the short measuring time, high chemical stability of the micro column and robustness against interferents make the PBA an ideal tool for fielded sensor devices. It is especially favourable to combine the PBA with a bio collector because virtually no sample preparation is necessary. Overall, the BiSAM device is capable to detect and identify living micro organisms (bacteria, spores, viruses) as well as toxins in a measuring cycle of typically half an hour duration. In each batch up to 12 different tests can be run in parallel together with positive and negative controls to keep the false alarm rate low.

  17. Analyses of a Virtual World

    CERN Document Server

    Holovatch, Yurij; Szell, Michael; Thurner, Stefan

    2016-01-01

    We present an overview of a series of results obtained from the analysis of human behavior in a virtual environment. We focus on the massive multiplayer online game (MMOG) Pardus which has a worldwide participant base of more than 400,000 registered players. We provide evidence for striking statistical similarities between social structures and human-action dynamics in the real and virtual worlds. In this sense MMOGs provide an extraordinary way for accurate and falsifiable studies of social phenomena. We further discuss possibilities to apply methods and concepts developed in the course of these studies to analyse oral and written narratives.

  18. Chapter No.4. Safety analyses

    International Nuclear Information System (INIS)

    In 2001, activity in the field of safety analyses focused on verification of the safety analysis reports for NPP V-2 Bohunice and NPP Mochovce concerning the new profiled fuel, and on the probabilistic safety assessment study for NPP Mochovce. Calculational safety analyses were performed and expert reviews for internal UJD needs were elaborated. An important part of the work was also performed in solving scientific and technical tasks appointed within bilateral projects of co-operation between UJD and its international partner organisations, as well as within international projects ordered and financed by the European Commission. All these activities served as independent support for UJD in its deterministic and probabilistic safety assessment of nuclear installations. Special attention was paid to the review of the Level 1 probabilistic safety assessment study for NPP Mochovce. The study elaborated the probabilistic safety analysis of the NPP at full power operation, and quantified the contribution of the technical and operational improvements to risk reduction. The core damage frequency of the reactor was calculated, and the dominant initiating events and accident sequences with the major contributions to the risk were determined. The target of the review was to determine the acceptability of the sources of input information, assumptions, models, data, analyses and obtained results, so that the probabilistic model could give a realistic picture of the NPP. The review of the study was performed by UJD in co-operation with the IAEA (IPSART mission) as well as with other external organisations that were not involved in the elaboration of the reviewed document and the probabilistic model of the NPP. The review was made in accordance with IAEA guidelines and methodical documents of UJD and US NRC. In the field of calculational safety analyses, UJD activity focused on the analysis of an operational event, analyses of the selected accident scenarios

  19. Fracturing and brittleness index analyses of shales

    Science.gov (United States)

    Barnhoorn, Auke; Primarini, Mutia; Houben, Maartje

    2016-04-01

    The formation of a fracture network in rocks has a crucial control on the flow behaviour of fluids. In addition, an existing network of fractures influences the propagation of new fractures during e.g. hydraulic fracturing or a seismic event. Understanding the type and characteristics of the fracture network that will be formed during e.g. hydraulic fracturing is thus crucial to better predict the outcome of a hydraulic fracturing job. For this, knowledge of the rock properties is crucial. The brittleness index is often used as a rock property to predict the fracturing behaviour of a rock, e.g. for hydraulic fracturing of shales. Various definitions of the brittleness index (BI1, BI2 and BI3) exist, based on elastic constants, stress-strain behaviour and mineralogy (Jin et al., 2014; Jarvie et al., 2007; Holt et al., 2011). A maximum brittleness index of 1 predicts very good and efficient fracturing behaviour, while a minimum brittleness index of 0 predicts much more ductile shale behaviour. Here, we have performed systematic petrophysical, acoustic and geomechanical analyses on a set of shale samples from Whitby (UK) and determined the three different brittleness indices for each sample. We show that the three brittleness indices are very different for the same sample, and as such it can be concluded that the brittleness index is not a good predictor of the fracturing behaviour of shales. The brittleness indices based on the acoustic data (BI1) all lie around values of 0.5, while the brittleness index based on the stress-strain data (BI2) gives an average around 0.75, whereas the mineralogy brittleness index (BI3) predicts values below 0.2. This shows that by using different estimates of the brittleness index, different decisions can be made for hydraulic fracturing. If we were to rely on the mineralogy (BI3), the Whitby mudstone is not a suitable
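
    Two of the brittleness-index families mentioned have simple closed forms: a mineralogical index as the brittle-mineral fraction, and an elastic index as the average of normalized Young's modulus and Poisson's ratio. The sketch below uses assumed normalization bounds and invented sample values; it follows the general shape of these definitions rather than the exact formulas of the cited papers.

```python
# Two common brittleness-index forms (assumed normalization bounds, toy values).

def bi_mineralogy(quartz, carbonate, clay):
    """Brittle-mineral fraction: quartz over total mineral content."""
    return quartz / (quartz + carbonate + clay)

def bi_elastic(E_gpa, nu, E_range=(10.0, 80.0), nu_range=(0.15, 0.40)):
    """Average of normalized Young's modulus (high = brittle) and
    Poisson's ratio (low = brittle); the bounds here are assumptions."""
    e_n = (E_gpa - E_range[0]) / (E_range[1] - E_range[0])
    nu_n = (nu_range[1] - nu) / (nu_range[1] - nu_range[0])
    return 0.5 * (e_n + nu_n)

print(bi_mineralogy(quartz=0.15, carbonate=0.10, clay=0.75))  # clay-rich -> low BI
print(bi_elastic(E_gpa=25.0, nu=0.30))
```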

  20. Predicting supersymmetry

    Energy Technology Data Exchange (ETDEWEB)

    Heinemeyer, S. [Instituto de Fisica de Cantabria (CSIC-UC), Santander (Spain); Weiglein, G. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2010-07-15

    We review the results of SUSY parameter fits based on frequentist analyses of experimental constraints from electroweak precision data, (g−2)_μ, B physics and cosmological data. We investigate the parameters of the constrained MSSM (CMSSM) with universal soft supersymmetry-breaking mass parameters, and a model with common non-universal Higgs mass parameters in the superpotential (NUHM1). Shown are the results for the SUSY and Higgs spectrum of the models. Many sparticle masses are highly correlated in both the CMSSM and NUHM1, and parts of the regions preferred at the 68% C.L. are accessible to early LHC running. The best-fit points could be tested even with 1 fb⁻¹ at √s = 7 TeV. (orig.)

  1. Analysing ESP Texts, but How?

    Directory of Open Access Journals (Sweden)

    Borza Natalia

    2015-03-01

    English as a second language (ESL) teachers instructing general English and English for specific purposes (ESP) in bilingual secondary schools face various challenges when it comes to choosing the main linguistic foci of language preparatory courses enabling non-native students to study academic subjects in English. ESL teachers intending to analyse English-language subject textbooks written for secondary school students, with the aim of gaining information about what bilingual secondary school students need to know in terms of language to process academic textbooks, cannot avoid dealing with a dilemma. It needs to be decided which way is most appropriate to analyse the texts in question. Handbooks of English applied linguistics are not immensely helpful with regard to this problem, as they tend not to give recommendations as to which major text-analytical approaches are advisable to follow in a pre-college setting. The present theoretical research aims to address this lacuna. Respectively, the purpose of this pedagogically motivated theoretical paper is to investigate two major approaches of ESP text analysis, register analysis and genre analysis, in order to find the more suitable one for exploring the language use of secondary school subject texts from the point of view of an English as a second language teacher. Comparing and contrasting the merits and limitations of the two approaches allows for a better understanding of the nature of the two different perspectives of text analysis. The study examines the goals, the scope of analysis, and the achievements of the register perspective and those of the genre approach alike. The paper also investigates and reviews in detail the starkly different methods of ESP text analysis applied by the two perspectives. Discovering text analysis from a theoretical and methodological angle supports a practical aspect of English teaching, namely making an informed choice when setting out to analyse

  2. HGCal Simulation Analyses for CMS

    CERN Document Server

    Bruno, Sarah Marie

    2015-01-01

    This summer, I approached the topic of fast-timing detection of photons from Higgs decays via simulation analyses, working under the supervision of Dr. Adolf Bornheim of the California Institute of Technology. My specific project focused on simulating the high granularity calorimeter for the Compact Muon Solenoid (CMS) experiment. CMS detects particles using calorimeters. The Electromagnetic Calorimeter (ECal) is arranged cylindrically to form a barrel section and two “endcaps.” Previously, both the barrel and endcap have employed lead tungstate crystal detectors, known as the “shashlik” design. The crystal detectors, however, rapidly degrade from exposure to radiation. This effect is most pronounced in the endcaps. To avoid the high expense of frequently replacing degraded detectors, it was recently decided to eliminate the endcap crystals in favor of an arrangement of silicon detectors known as the “High Granularity Calorimeter” (HGCal), while leaving the barrel detector technology unchanged. T...

  3. Computational Analyses of Arabic Morphology

    CERN Document Server

    Kiraz, G A

    1994-01-01

    This paper demonstrates how a (multi-tape) two-level formalism can be used to write two-level grammars for Arabic non-linear morphology using a high level, but computationally tractable, notation. Three illustrative grammars are provided based on CV-, moraic- and affixational analyses. These are complemented by a proposal for handling the hitherto computationally untreated problem of the broken plural. It will be shown that the best grammars for describing Arabic non-linear morphology are moraic in the case of templatic stems, and affixational in the case of a-templatic stems. The paper will demonstrate how the broken plural can be derived under two-level theory via the `implicit' derivation of the singular.

  4. Economical analyses in interventional radiology

    International Nuclear Information System (INIS)

    Considerations about the relation between benefit and expenses are also gaining increasing importance in interventional radiology. This review aims at providing a survey about the published data concerning economical analyses of some of the more frequently employed interventions in radiology excluding neuroradiological and coronary interventions. Because of the relative scarcity of literature in this field, all identified articles (n=46) were included without selection for methodological quality. For a number of radiological interventions the cost-effectiveness has already been demonstrated, e.g., PTA of femoropopliteal and iliac artery stenoses, stenting of renal artery stenoses, placement of vena-cava filters, as well as metal stents in malignant biliary and esophageal obstructions. Conflicting data exist for the treatment of abdominal aortic aneurysms. So far, no analysis could be found that directly compares bypass surgery versus PTA+stent in iliac arteries. (orig.)

  5. Analyse des besoins des usagers

    OpenAIRE

    KHOUDOUR,L; LANGLAIS,A; Charpentier, C.; MOTTE,C; PIAN,C

    2002-01-01

    The aim is to extend the video surveillance of metro premises into the interior of the trains. The captured images provide views of the events taking place inside the vehicles, in particular in order to improve the safety of the passengers being transported. It is possible to store the images of the few moments preceding a passenger incident, to analyse these images off-line, and to better understand in real time the behaviour of passengers faced with events or...

  6. Analysing the Wrongness of Killing

    DEFF Research Database (Denmark)

    Di Nucci, Ezio

    2016-01-01

    This article provides an in-depth analysis of the wrongness of killing by comparing different versions of three influential views: the traditional view that killing is always wrong; the liberal view that killing is wrong if and only if the victim does not want to be killed; and Don Marquis' future of value account of the wrongness of killing. In particular, I illustrate the advantages that a basic version of the liberal view and a basic version of the future of value account have over competing alternatives. Still, ultimately none of the views analysed here are satisfactory; but the different reasons why those competing views fail provide important insights into the ethics of killing.

  7. Analysing performance through value creation

    Directory of Open Access Journals (Sweden)

    Adrian TRIFAN

    2015-12-01

    This paper draws a parallel between measuring financial performance in two variants: the first using data offered by accounting, which lays emphasis on maximizing profit, and the second aiming to create value. The traditional approach to performance is based on indicators from accounting data: ROI, ROE, EPS. Traditional management, based on analysing accounting data, has shown its limits, and a new approach is needed, based on creating value. The evaluation of value-based performance tries to avoid the errors due to accounting data by using other specific indicators: EVA, MVA, TSR, CVA. The main objective is shifted from maximizing income to maximizing the value created for shareholders. The theoretical part is accompanied by a practical analysis regarding the creation of value and an analysis of the main indicators which evaluate this concept.
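
    Of the value-based indicators listed, EVA has the simplest closed form: residual income after a charge for the capital employed. A small illustration with invented figures:

```python
# Economic Value Added (EVA) with invented figures.
# EVA = NOPAT - WACC * invested capital; positive EVA means value was created.

def eva(nopat, wacc, invested_capital):
    return nopat - wacc * invested_capital

nopat = 12.0e6            # net operating profit after tax
wacc = 0.09               # weighted average cost of capital
capital = 100.0e6         # invested capital

print(f"EVA = {eva(nopat, wacc, capital):,.0f}")   # 12M - 9M = 3M created
```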

  8. Isotopic signatures by bulk analyses

    International Nuclear Information System (INIS)

    Los Alamos National Laboratory has developed a series of measurement techniques for identification of nuclear signatures by analyzing bulk samples. Two specific applications for isotopic fingerprinting to identify the origin of anthropogenic radioactivity in bulk samples are presented. The first example is the analyses of environmental samples collected in the US Arctic to determine the impact of dumping of radionuclides in this polar region. Analyses of sediment and biota samples indicate that for the areas sampled the anthropogenic radionuclide content of sediments was predominantly the result of the deposition of global fallout. The anthropogenic radionuclide concentrations in fish, birds and mammals were very low. It can be surmised that marine food chains are presently not significantly affected. The second example is isotopic fingerprinting of water and sediment samples from the Rocky Flats Facility (RFP). The largest source of anthropogenic radioactivity presently affecting surface-waters at RFP is the sediments that are currently residing in the holding ponds. One gram of sediment from a holding pond contains approximately 50 times more plutonium than 1 liter of water from the pond. Essentially 100% of the uranium in Ponds A-1 and A-2 originated as depleted uranium. The largest source of radioactivity in the terminal Ponds A-4, B-5 and C-2 was naturally occurring uranium and its decay product radium. The uranium concentrations in the waters collected from the terminal ponds contained 0.05% or less of the interim standard calculated derived concentration guide for uranium in waters available to the public. All of the radioactivity observed in soil, sediment and water samples collected at RFP was naturally occurring, the result of processes at RFP or the result of global fallout. No extraneous anthropogenic alpha, beta or gamma activities were detected. The plutonium concentrations in Pond C-2 appear to vary seasonally

  9. BN-600 hybrid core benchmark analyses

    International Nuclear Information System (INIS)

    Benchmark analyses for the hybrid BN-600 reactor that contains three uranium enrichment zones and one plutonium zone in the core, have been performed within the frame of an IAEA sponsored Coordinated Research Project. The results for several relevant reactivity parameters obtained by the participants with their own state-of-the-art basic data and codes, were compared in terms of calculational uncertainty, and their effects on the ULOF transient behavior of the hybrid BN-600 core were evaluated. The comparison of the diffusion and transport results obtained for the homogeneous representation generally shows good agreement for most parameters between the RZ and HEX-Z models. The burnup effect and the heterogeneity effect on most reactivity parameters also show good agreement for the HEX-Z diffusion and transport theory results. A large difference noticed for the sodium and steel density coefficients is mainly due to differences in the spatial coefficient predictions for non fuelled regions. The burnup reactivity loss was evaluated to be 0.025 (4.3 $) within ∼ 5.0% standard deviation. The heterogeneity effect on most reactivity coefficients was estimated to be small. The heterogeneity treatment reduced the control rod worth by 2.3%. The heterogeneity effect on the k-eff and control rod worth appeared to differ strongly depending on the heterogeneity treatment method. A substantial spread noticed for several reactivity coefficients did not give a significant impact on the transient behavior prediction. This result is attributable to compensating effects between several reactivity effects and the specific design of the partially MOX fuelled hybrid core. (author)

  10. Thermal and hydraulic analyses of the System 81 cold traps

    Energy Technology Data Exchange (ETDEWEB)

    Kim, K.

    1977-06-15

    Thermal and hydraulic analyses of the System 81 Type I and II cold traps were completed, except for the thermal transient analysis. Results are evaluated, discussed, and reported. Analytical models were developed to determine the physical dimensions of the cold traps and to predict the performance. The FFTF cold trap crystallizer performances were simulated using the thermal model. This simulation shows that the analytical model developed predicts reasonably conservative temperatures. Pressure drop and sodium residence time calculations indicate that the present design will meet the requirements specified in the E-Specification. Steady-state temperature data for the critical regions were generated to assess the magnitude of the thermal stress.

  11. Reliability analyses used by maintenance

    Energy Technology Data Exchange (ETDEWEB)

    Rusek, S.; Gono, R.; Kral, V.; Kratky, M. [VSB Technical Univ. of Ostrava, Poruba (Czech Republic)

    2008-07-01

    A series of studies have been conducted to analyse failures experienced by most power distribution companies in the Czech Republic and by one in the Slovak Republic. The purpose was to find ways to optimize the maintenance of distribution network devices. Data were compiled to enable a comparison of results and to create a statistically more significant database. Since the number of failures in the area of electrical power engineering has been rather small, results on element reliability will only become available over the next several years. The main challenge with reliability analysis is to find reliable and updated input data. As such, the primary task is to change the existing structure of the databases of power distribution companies. These databases must be adjusted to provide the input data for the calculation functions of reliability-centred maintenance (RCM). This paper describes the programs designed for analyses of reliability indices and for the optimization of maintenance of distribution system equipment, which will provide basic data for responsible and logical decisions regarding maintenance, for the preparation of an effective maintenance schedule, and for the creation of a feedback system. 7 refs., 4 figs.

  12. Partitioning Uncertainty for Non-Ergodic Probabilistic Seismic Hazard Analyses

    OpenAIRE

    Dawood, Haitham Mohamed Mahmoud Mousad

    2014-01-01

    Properly accounting for the uncertainties in predicting ground motion parameters is critical for Probabilistic Seismic Hazard Analyses (PSHA). This is particularly important for critical facilities that are designed for long return period motions. Non-ergodic PSHA is a framework that allows for this proper accounting of uncertainties. This, in turn, allows for more informed decisions by designers, owners and regulating agencies. The ergodic assumption implies that the standard deviation ...

  13. Budget-Impact Analyses: A Critical Review of Published Studies

    OpenAIRE

    Ewa Orlewska; Laszlo Gulcsi

    2009-01-01

    This article reviews budget-impact analyses (BIAs) published to date in peer-reviewed bio-medical journals with reference to current best practice, and discusses where future research needs to be directed. Published BIAs were identified by conducting a computerized search on PubMed using the search term 'budget impact analysis'. The years covered by the search included January 2000 through November 2008. Only studies (i) named by authors as BIAs and (ii) predicting financial consequences of a...

  14. Measuring Quality Across Three Child Care Quality Rating and Improvement Systems: Findings from Secondary Analyses.

    OpenAIRE

    Lizabeth Malone; Gretchen Kirby; Pia Caronongan; Kimberly Boller; Kathryn Tout

    2011-01-01

    This report presents findings from an exploratory analysis of administrative data from three QRISs. The analyses examine the prevalence of quality components across centers and how they combine to result in an overall rating level and to predict observed quality.

  15. NOx analyser interefence from alkenes

    Science.gov (United States)

    Bloss, W. J.; Alam, M. S.; Lee, J. D.; Vazquez, M.; Munoz, A.; Rodenas, M.

    2012-04-01

    Nitrogen oxides (NO and NO2, collectively NOx) are critical intermediates in atmospheric chemistry. NOx abundance controls the levels of the primary atmospheric oxidants OH, NO3 and O3, and regulates the ozone production which results from the degradation of volatile organic compounds. NOx are also atmospheric pollutants in their own right, and NO2 is commonly included in air quality objectives and regulations. In addition to their role in controlling ozone formation, NOx levels affect the production of other pollutants such as the lachrymator PAN, and the nitrate component of secondary aerosol particles. Consequently, accurate measurement of nitrogen oxides in the atmosphere is of major importance for understanding our atmosphere. The most widely employed approach for the measurement of NOx is chemiluminescent detection of NO2* from the NO + O3 reaction, combined with NO2 reduction by either a heated catalyst or photoconvertor. The reaction between alkenes and ozone is also chemiluminescent; therefore alkenes may contribute to the measured NOx signal, depending upon the instrumental background subtraction cycle employed. This interference has been noted previously, and indeed the effect has been used to measure both alkenes and ozone in the atmosphere. Here we report the results of a systematic investigation of the response of a selection of NOx analysers, ranging from systems used for routine air quality monitoring to atmospheric research instrumentation, to a series of alkenes ranging from ethene to the biogenic monoterpenes, as a function of conditions (co-reactants, humidity). Experiments were performed in the European Photoreactor (EUPHORE) to ensure common calibration, a common sample for the monitors, and to unequivocally confirm the alkene (via FTIR) and NO2 (via DOAS) levels present. The instrument responses ranged from negligible levels up to 10 % depending upon the alkene present and conditions used. Such interferences may be of substantial importance
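
    If the fractional response of a given analyser to each alkene has been quantified, as in the chamber experiments described here, a first-order correction of the measured signal is straightforward. The response factors below are invented placeholders; real values are instrument- and condition-specific.

```python
# First-order correction of a chemiluminescence NO2 reading for alkene
# interference. Response factors (ppb NO2-equivalent per ppb alkene) are
# invented placeholders, not measured values.

response_factors = {"ethene": 0.001, "isoprene": 0.02, "limonene": 0.05}

def corrected_no2(measured_no2_ppb, alkene_ppb):
    interference = sum(response_factors[a] * c for a, c in alkene_ppb.items())
    return measured_no2_ppb - interference

print(corrected_no2(5.0, {"isoprene": 10.0, "limonene": 2.0}))  # -> 4.7 ppb
```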

  16. Nonlinear Analyses of the Dynamic Properties of Hydrostatic Bearing Systems

    Institute of Scientific and Technical Information of China (English)

    LIU Wei(刘伟); WU Xiujiang(吴秀江); V.A. Prokopenko

    2003-01-01

    Nonlinear analyses of hydrostatic bearing systems are necessary to adequately model the fluid-solid interaction. The dynamic properties of linear and nonlinear analytical models of hydrostatic bearings are compared in this paper. The analyses were based on the determination of the aperiodic border of transient processes with external step loads. The results show that the dynamic properties can be most effectively improved by increasing the hydrostatic bearing crosspiece width. Additional pocket volume in a bearing can extend the load range for which the transient process is aperiodic, but an additional restrictor and capacitor (RC) chain must be introduced to increase damping. The nonlinear analyses can also be used to predict typical design parameters for a hydrostatic bearing.

  17. 78 FR 26847 - Including Specific Pavement Types in Federal-aid Highway Traffic Noise Analyses

    Science.gov (United States)

    2013-05-08

    ... Federal Highway Administration Including Specific Pavement Types in Federal-aid Highway Traffic Noise... types used in Federal-aid highway traffic noise analyses. Current highway traffic noise analyses rely on... (OGAC), and Portland cement concrete (PCC). Prediction of future noise levels is based on the...

  18. Residual Strength Analyses of Monolithic Structures

    Science.gov (United States)

    Forth, Scott (Technical Monitor); Ambur, Damodar R. (Technical Monitor); Seshadri, B. R.; Tiwari, S. N.

    2003-01-01

    Finite-element fracture simulation methodology predicts the residual strength of damaged aircraft structures. The methodology uses the critical crack-tip-opening-angle (CTOA) fracture criterion to characterize the fracture behavior of the material. The CTOA fracture criterion assumes that stable crack growth occurs when the crack-tip angle reaches a constant critical value. The use of the CTOA criterion requires an elastic-plastic, finite-element analysis. The critical CTOA value is determined by simulating fracture behavior in laboratory specimens, such as a compact specimen, to obtain the angle that best fits the observed test behavior. The critical CTOA value appears to be independent of loading, crack length, and in-plane dimensions. However, it is a function of material thickness and local crack-front constraint. Modeling the local constraint requires either a three-dimensional analysis or a two-dimensional analysis with an approximation to account for the constraint effects. In recent times, as the aircraft industry has leaned towards monolithic structures with the intention of reducing part count and manufacturing cost, there has been a consistent effort at NASA Langley to extend the critical-CTOA-based numerical methodology to the analysis of integrally-stiffened panels. In this regard, a series of fracture tests were conducted on both flat and curved aluminum alloy integrally-stiffened panels. The flat panels were subjected to uniaxial tension, and during the tests the applied load versus crack extension, out-of-plane displacements and local deformations around the crack-tip region were measured. Compact and middle-crack tension specimens were tested to determine the critical angle (psi_c) using a three-dimensional code (ZIP3D) and the plane-strain core height (h_c) using a two-dimensional code (STAGS). These values were then used in the STAGS analysis to predict the fracture behavior of the integrally-stiffened panels. The analyses modeled stable tearing, buckling, and crack
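
    As a minimal illustration of the criterion itself (our own Python sketch, not the ZIP3D/STAGS implementation; the 1 mm measurement distance and the 5.5 degree critical angle are typical literature values, not taken from this abstract):

        import math

        def ctoa_deg(opening, dist_behind_tip=1.0e-3):
            """Crack-tip opening angle (degrees) from the total crack-opening
            displacement measured a fixed distance behind the tip."""
            return 2.0 * math.degrees(math.atan(0.5 * opening / dist_behind_tip))

        def crack_grows(opening, ctoa_crit_deg=5.5):
            """Stable-tearing rule of the abstract: advance the crack front
            whenever the CTOA reaches the constant critical value (in
            practice psi_c is fitted to lab-specimen tests)."""
            return ctoa_deg(opening) >= ctoa_crit_deg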

  19. Pawnee Nation Energy Option Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Matlock, M.; Kersey, K.; Riding In, C.

    2009-07-21

    Pawnee Nation of Oklahoma Energy Option Analyses In 2003, the Pawnee Nation leadership identified the need for the tribe to comprehensively address its energy issues. During a strategic energy planning workshop a general framework was laid out and the Pawnee Nation Energy Task Force was created to work toward further development of the tribe’s energy vision. The overarching goals of the “first steps” project were to identify the most appropriate focus for its strategic energy initiatives going forward, and to provide information necessary to take the next steps in pursuit of the “best fit” energy options. Description of Activities Performed The research team reviewed existing data pertaining to the availability of biomass (focusing on woody biomass, agricultural biomass/bio-energy crops, and methane capture), solar, wind and hydropower resources on the Pawnee-owned lands. Using these data, combined with assumptions about costs and revenue streams, the research team performed preliminary feasibility assessments for each resource category. The research team also reviewed available funding resources and made recommendations to Pawnee Nation highlighting those resources with the greatest potential for financially-viable development, both in the near-term and over a longer time horizon. Findings and Recommendations Due to a lack of financial incentives for renewable energy, particularly at the state level, combined with mediocre renewable energy resources, renewable energy development opportunities are limited for Pawnee Nation. However, near-term potential exists for development of solar hot water at the gym, and an exterior wood-fired boiler system at the tribe’s main administrative building. Pawnee Nation should also explore options for developing LFGTE resources in collaboration with the City of Pawnee. Significant potential may also exist for development of bio-energy resources within the next decade. Pawnee Nation representatives should closely monitor

  20. 10 CFR 436.24 - Uncertainty analyses.

    Science.gov (United States)

    2010-01-01

    ... 10 Energy 3 2010-01-01 2010-01-01 false Uncertainty analyses. 436.24 Section 436.24 Energy... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... impact of uncertainty on the calculation of life cycle cost effectiveness or the assignment of rank...

  1. The ASSET intercomparison of ozone analyses: method and first results

    Directory of Open Access Journals (Sweden)

    A. J. Geer

    2006-06-01

    Full Text Available This paper examines 11 sets of ozone analyses from 7 different data assimilation systems. Two are numerical weather prediction (NWP systems based on general circulation models (GCMs; the other five use chemistry transport models (CTMs. These systems contain either linearised or detailed ozone chemistry, or no chemistry at all. In most analyses, MIPAS (Michelson Interferometer for Passive Atmospheric Sounding ozone data are assimilated. Two examples assimilate SCIAMACHY (Scanning Imaging Absorption Spectrometer for Atmospheric Chartography observations. The analyses are compared to independent ozone observations covering the troposphere, stratosphere and lower mesosphere during the period July to November 2003.

    Through most of the stratosphere (50 hPa to 1 hPa, biases are usually within ±10% and standard deviations less than 10% compared to ozonesondes and HALOE (Halogen Occultation Experiment. Biases and standard deviations are larger in the upper-troposphere/lower-stratosphere, in the troposphere, the mesosphere, and the Antarctic ozone hole region. In these regions, some analyses do substantially better than others, and this is mostly due to differences in the models. At the tropical tropopause, many analyses show positive biases and excessive structure in the ozone fields, likely due to known deficiencies in assimilated tropical wind fields and a degradation in MIPAS data at these levels. In the southern hemisphere ozone hole, only the analyses which correctly model heterogeneous ozone depletion are able to reproduce the near-complete ozone destruction over the pole. In the upper-stratosphere and mesosphere (above 5 hPa, some ozone photochemistry schemes caused large but easily remedied biases. The diurnal cycle of ozone in the mesosphere is not captured, except by the one system that includes a detailed treatment of mesospheric chemistry.

    In general, similarly good results are obtained no matter what the assimilation

  2. Uncertainty and Sensitivity Analyses of Duct Propagation Models

    Science.gov (United States)

    Nark, Douglas M.; Watson, Willie R.; Jones, Michael G.

    2008-01-01

    This paper presents results of uncertainty and sensitivity analyses conducted to assess the relative merits of three duct propagation codes. Results from this study are intended to support identification of a "working envelope" within which to use the various approaches underlying these propagation codes. This investigation considers a segmented liner configuration that models the NASA Langley Grazing Incidence Tube, for which a large set of measured data was available. For the uncertainty analysis, the selected input parameters (source sound pressure level, average Mach number, liner impedance, exit impedance, static pressure and static temperature) are randomly varied over a range of values. Uncertainty limits (95% confidence levels) are computed for the predicted values from each code, and are compared with the corresponding 95% confidence intervals in the measured data. Generally, the mean values of the predicted attenuation are observed to track the mean values of the measured attenuation quite well and predicted confidence intervals tend to be larger in the presence of mean flow. A two-level, six factor sensitivity study is also conducted in which the six inputs are varied one at a time to assess their effect on the predicted attenuation. As expected, the results demonstrate the liner resistance and reactance to be the most important input parameters. They also indicate the exit impedance is a significant contributor to uncertainty in the predicted attenuation.
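
    The two analyses lend themselves to a compact Monte Carlo sketch. The surrogate model below is purely hypothetical (the paper exercises three actual propagation codes), but the random sampling and the one-at-a-time toggling mirror the procedures described:

        import numpy as np

        rng = np.random.default_rng(0)

        def toy_attenuation(spl, mach, resistance, reactance, p_static, t_static):
            # Hypothetical smooth surrogate standing in for a duct
            # propagation code; the study runs real solvers instead.
            return (20.0 * np.exp(-((resistance - 1.0) ** 2 + reactance ** 2))
                    * (1.0 - 0.4 * mach)
                    + 0.01 * (spl - 130.0)
                    + 1e-6 * (p_static - 101325.0)
                    + 0.005 * (t_static - 293.0))

        nominal = dict(spl=130.0, mach=0.3, resistance=1.0, reactance=0.2,
                       p_static=101325.0, t_static=293.0)
        spread = dict(spl=1.0, mach=0.02, resistance=0.1, reactance=0.1,
                      p_static=500.0, t_static=3.0)

        # uncertainty analysis: vary all six inputs at random, report 95% limits
        draws = {k: rng.uniform(v - spread[k], v + spread[k], 10_000)
                 for k, v in nominal.items()}
        att = toy_attenuation(**draws)
        print("95% limits:", np.percentile(att, [2.5, 97.5]))

        # two-level, six-factor sensitivity: toggle one factor at a time
        for k in nominal:
            lo, hi = dict(nominal), dict(nominal)
            lo[k] -= spread[k]; hi[k] += spread[k]
            print(k, toy_attenuation(**hi) - toy_attenuation(**lo))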

  3. Computer analyses on loop seal clearing experiment at PWR PACTEL

    International Nuclear Information System (INIS)

    Highlights: • Code analyses of loop seal clearing experiment with PWR PACTEL are introduced. • TRACE and APROS system codes are used in the analyses. • Main events of the experiment are well predicted with both codes. • Discrepancies are observed on the secondary side and in the core region. • Loop seal clearing phenomenon is well simulated with both codes. - Abstract: Water seal formation in the loop seal in pressurized water reactors can occur during a small or intermediate break loss-of-coolant accident, causing temporary fuel overheating. Quantification of the accuracy of overheating prediction is of interest in the best-estimate safety analyses, even though the peak cladding temperatures due to the water seal formation in the loop seal seldom approach acceptance criteria as such. The aim of this study was to test and evaluate the accuracy with which the thermal–hydraulic system code nodalizations of the PWR PACTEL predict loop seal clearing in a small break loss-of-coolant-accident test performed with the PWR PACTEL facility. PWR PACTEL is a thermal–hydraulic test facility with two loops and vertical inverted U-tube steam generators. Post-test simulations were performed with the TRACE and APROS system codes. In the post-test simulations, the main events of the transient such as the decrease in the core water level, depressurization of the primary circuit, and the behavior of the water seal formation and clearing in the loop seal were predicted satisfactorily by both codes. However, discrepancies with the experiment results were observed in the analyses with both codes, for example the core temperature excursions were halted too early and the peak temperature predictions were too low. The core water level increase caused by loop seal clearing was overestimated with both codes, and the pressure and temperature were overestimated on the secondary side of the steam generators. Loop Seal 2 was evidently cleared out while Loop Seal 1 remained closed

  4. Identifying null meta-analyses that are ripe for updating

    Directory of Open Access Journals (Sweden)

    Fang Manchun

    2003-07-01

    Full Text Available Background: As an increasingly large number of meta-analyses are published, quantitative methods are needed to help clinicians and systematic review teams determine when meta-analyses are not up to date. Methods: We propose new methods for determining when non-significant meta-analytic results might be overturned, based on a prediction of the number of participants required in new studies. To guide decision making, we introduce the "new participant ratio", the ratio of the actual number of participants in new studies to the predicted number required to obtain statistical significance. A simulation study was conducted to study the performance of our methods and a real meta-analysis provides further evidence. Results: In our three simulation configurations, our diagnostic test for determining whether a meta-analysis is out of date had sensitivity of 55%, 62%, and 49% with corresponding specificity of 85%, 80%, and 90%, respectively. Conclusions: Simulations suggest that our methods are able to detect out-of-date meta-analyses. These quick and approximate methods show promise for use by systematic review teams to help decide whether to commit the considerable resources required to update a meta-analysis. Further investigation and evaluation of the methods is required before they can be recommended for general use.
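
    The "new participant ratio" can be sketched in a few lines. The Python snippet below is our own rough reading of the idea, assuming a fixed-effect pooled estimate whose statistical information grows in proportion to the number of participants; the authors' actual prediction method may differ:

        import math

        def predicted_new_participants(theta_hat, se, n_total, z_crit=1.96):
            """Predict how many new participants a null fixed-effect
            meta-analysis would need before the pooled estimate reaches
            significance. Assumes the pooled effect theta_hat stays fixed
            and information scales with participant count (a sketch)."""
            z = theta_hat / se                    # current test statistic
            if abs(z) >= z_crit:
                return 0.0                        # already significant
            # required information scales as (z_crit / z)^2
            return n_total * ((z_crit / z) ** 2 - 1.0)

        def new_participant_ratio(actual_new, theta_hat, se, n_total):
            """Ratio >= 1 suggests enough new data to justify updating."""
            needed = predicted_new_participants(theta_hat, se, n_total)
            return math.inf if needed == 0 else actual_new / needed

        # illustrative: pooled effect 0.15 (SE 0.10) from 2,400 participants,
        # with 1,800 participants in studies published since
        print(new_participant_ratio(1800, 0.15, 0.10, 2400))

    On these illustrative numbers the ratio is about 1.06, i.e. slightly more new participants have accrued than the prediction says are needed, flagging the meta-analysis as ripe for updating.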

  5. Moisture advection to the Arctic : forecasted, analysed and observed

    Science.gov (United States)

    Dufour, Ambroise; Zolina, Olga

    2015-04-01

    Besides its contribution to the Arctic hydrological budget, moisture imports from mid-latitudes are also influential on shorter time scales since water vapour advection tends to occur together with extratropical cyclones. Influx of moisture to the Arctic cause the formation of clouds that have an immediate impact on the surface energy budget especially in winter. In the long run, inaccuracies in the description of cloud cover and phase lead to temperature biases in CMIP5 models. The ECMWF workshop on polar prediction has highlighted moisture advection as one of the problematic physical processes limiting the quality of forecasts. Verifying the accuracy of medium-term forecasts is of interest beyond weather prediction : it points to the ability of models to bring adequate quantities of moisture to the Arctic when they are less constrained by observations than in analyses. In this study, we have compared forecasted moisture flux fields with analyses and observations over the period 2000-2010. ECMWF's ERA-Interim provided the forecasts, extending to ten days. For the analyses, in addition to ERA-Interim, we used the Arctic System Reanalysis whose forecast model is optimized for the polar regions and runs at high resolution (30 km). Finally, the Integrated Global Radiosonde Archive data over the Arctic allowed a validation by observations.

  6. 神经元蜡样质脂褐质沉积病(NCL)的基因型与表型相关性研究%Genotype-phenotype analyses of classic neuronal ceroid lipofuscinosis (NCLs): genetic predictions from clinical and pathological findings

    Institute of Scientific and Technical Information of China (English)

    Weina JU; W. Ted BROWN; Nanbert ZHONG; Anetta WRONSKA; Dorota N. MOROZIEWICZ; Rocksheng ZHONG; Natalia WISNIEWSKI; Anna JURKIEWICZ; Michael FIORY; Krystyna E. WISNIEWSKI; Lance JOHNSTON

    2006-01-01

    Objective: Genotype-phenotype associations were studied in 517 subjects clinically affected by classical neuronal ceroid lipofuscinosis (NCL). Methods: Genetic loci CLN1-3 were analyzed in regard to age of onset, initial neurological symptoms, and electron microscope (EM) profiles. Results: The most common initial symptom leading to a clinical evaluation was developmental delay (30%) in NCL1, seizures (42.4%) in NCL2, and vision problems (53.5%) in NCL3. Eighty-two percent of NCL1 cases had granular osmiophilic deposits (GRODs) or mixed-GROD-containing EM profiles; 94% of NCL2 cases had curvilinear (CV) or mixed-CV-containing profiles; and 91% of NCL3 had fingerprint (FP) or mixed-FP-containing profiles. The mixed-type EM profile was found in approximately one-third of the NCL cases. DNA mutations within a specific CLN gene were further correlated with NCL phenotypes. Seizures were noticed to associate with the common mutations 523G>A and 636C>T of CLN2 in NCL2 but not with the common mutations 223G>A and 451C>T of CLN1 in NCL1. Vision loss was the initial symptom in all types of mutations in NCL3. Surprisingly, our data showed that the age of onset was atypical in 51.3% of NCL1 (infantile form) cases, 19.7% of NCL2 (late-infantile form) cases, and 42.8% of NCL3 (juvenile form) cases. Conclusion: Our data provide an overall picture regarding the clinical recognition of classical childhood NCLs. This may assist in the prediction and genetic identification of NCL1-3 via their characteristic clinical features.

  7. Assessment of protein disorder region predictions in CASP10

    KAUST Repository

    Monastyrskyy, Bohdan

    2013-11-22

    The article presents the assessment of disorder region predictions submitted to CASP10. The evaluation is based on the three measures tested in previous CASPs: (i) balanced accuracy, (ii) the Matthews correlation coefficient for the binary predictions, and (iii) the area under the curve in the receiver operating characteristic (ROC) analysis of predictions using probability annotation. We also performed new analyses such as comparison of the submitted predictions with those obtained with a Naïve disorder prediction method and with predictions from the disorder prediction databases D2P2 and MobiDB. On average, the methods participating in CASP10 demonstrated slightly better performance than those in CASP9.
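
    The three measures are standard and easy to reproduce. A sketch (Python; the per-residue binary labels, probabilities, and the 0.5 threshold are assumed illustrative inputs):

        import numpy as np
        from scipy.stats import rankdata

        def disorder_scores(y_true, y_prob, threshold=0.5):
            """Balanced accuracy and MCC on thresholded predictions, plus
            ROC AUC on the raw probabilities via the rank-sum identity."""
            y_true = np.asarray(y_true, dtype=bool)
            y_prob = np.asarray(y_prob, dtype=float)
            y_pred = y_prob >= threshold
            tp = np.sum(y_pred & y_true); tn = np.sum(~y_pred & ~y_true)
            fp = np.sum(y_pred & ~y_true); fn = np.sum(~y_pred & y_true)
            bacc = 0.5 * (tp / (tp + fn) + tn / (tn + fp))
            denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
            mcc = (tp * tn - fp * fn) / denom if denom else 0.0
            ranks = rankdata(y_prob)            # average ranks handle ties
            n_pos, n_neg = y_true.sum(), (~y_true).sum()
            auc = (ranks[y_true].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
            return bacc, mcc, auc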

  8. Predicting protein structure classes from function predictions

    DEFF Research Database (Denmark)

    Sommer, I.; Rahnenfuhrer, J.; de Lichtenberg, Ulrik;

    2004-01-01

    We introduce a new approach to using the information contained in sequence-to-function prediction data in order to recognize protein template classes, a critical step in predicting protein structure. The data on which our method is based comprise probabilities of functional categories; for given......-to-structure prediction methods....

  9. Economische analyse van de Nederlandse biotechnologiesector

    OpenAIRE

    Giessen, A.M. van der; Gijsbers, G.W.; Koops, R.; Zee, F.A. van der

    2014-01-01

    Commissioned by the Netherlands Commission on Genetic Modification (COGEM), TNO carried out a desk study entitled "Economische analyse van de Nederlandse biotechnologiesector" (Economic analysis of the Dutch biotechnology sector). This analysis is one of the preliminary studies that COGEM is having carried out in preparation for the Trend Analysis Biotechnology, which is expected to be conducted in 2015. For this analysis, COGEM asked TNO to map out the developments, trends and opportunities of biotechnology anew, with an emphasis on economic...

  10. 49 CFR 1180.7 - Market analyses.

    Science.gov (United States)

    2010-10-01

    ... OF TRANSPORTATION RULES OF PRACTICE RAILROAD ACQUISITION, CONTROL, MERGER, CONSOLIDATION PROJECT, TRACKAGE RIGHTS, AND LEASE PROCEDURES General Acquisition Procedures § 1180.7 Market analyses. (a)...

  11. [Anne Arold. Kontrastive Analyse...] / Paul Alvre

    Index Scriptorium Estoniae

    Alvre, Paul, 1921-2008

    2001-01-01

    Review of: Arold, Anne. Kontrastive Analyse der Wortbildungsmuster im Deutschen und im Estnischen (am Beispiel der Aussehensadjektive) [Contrastive analysis of word-formation patterns in German and Estonian (the case of adjectives of appearance)]. Tartu, 2000. (Dissertationes philologiae germanicae Universitatis Tartuensis)

  12. The ASSET intercomparison of ozone analyses: method and first results

    Directory of Open Access Journals (Sweden)

    A. J. Geer

    2006-01-01

    Full Text Available This paper aims to summarise the current performance of ozone data assimilation (DA systems, to show where they can be improved, and to quantify their errors. It examines 11 sets of ozone analyses from 7 different DA systems. Two are numerical weather prediction (NWP systems based on general circulation models (GCMs; the other five use chemistry transport models (CTMs. The systems examined contain either linearised or detailed ozone chemistry, or no chemistry at all. In most analyses, MIPAS (Michelson Interferometer for Passive Atmospheric Sounding ozone data are assimilated; two assimilate SCIAMACHY (Scanning Imaging Absorption Spectrometer for Atmospheric Chartography observations instead. Analyses are compared to independent ozone observations covering the troposphere, stratosphere and lower mesosphere during the period July to November 2003. Biases and standard deviations are largest, and show the largest divergence between systems, in the troposphere, in the upper-troposphere/lower-stratosphere, in the upper-stratosphere and mesosphere, and the Antarctic ozone hole region. However, in any particular area, apart from the troposphere, at least one system can be found that agrees well with independent data. In general, none of the differences can be linked to the assimilation technique (Kalman filter, three or four dimensional variational methods, direct inversion or the system (CTM or NWP system. Where results diverge, a main explanation is the way ozone is modelled. It is important to correctly model transport at the tropical tropopause, to avoid positive biases and excessive structure in the ozone field. In the southern hemisphere ozone hole, only the analyses which correctly model heterogeneous ozone depletion are able to reproduce the near-complete ozone destruction over the pole. In the upper-stratosphere and mesosphere (above 5 hPa, some ozone photochemistry schemes caused large but easily remedied biases. The diurnal cycle of ozone in

  13. Development of SAWEC, version 1.22. A simulation and Analysis model to explain and predict energy consumption and CO2 emission in residential buildings; Ontwikkeling van SAWEC, Versie 1.22. Een Simulatie en Analyse model voor verklaring en voorspelling van het Woninggebonden Energieverbruik en CO2-emissie

    Energy Technology Data Exchange (ETDEWEB)

    Jeeninga, H.; Volkers, C.H. [ECN Beleidsstudies, Petten (Netherlands)

    2003-07-01

    SAWEC is a model for the simulation and analysis of energy consumption and CO2 emissions of residential energy use. Unlike its predecessor, the SAVE-Households model, SAWEC is based on the KWR survey, an extensive survey of the quality of dwellings that is conducted every five years. The development of SAWEC is to a large extent based on the expertise developed over the past decade with SAVE-Households. However, the dwelling stock is modelled in more detail and the vintage approach is improved. A distinction is made between ownership (three types), type of dwelling (four types), date of construction (five types) and infrastructure (three types). Furthermore, a new approach to the development of investment costs is implemented and the database of energy conservation measures has been re-designed. When designing the model, specific attention was paid to the flexibility of the model to incorporate new features in the near future, such as endogenous modelling of life-style changes (i.e. as a result of demographic changes) and learning curves. In this report, the design of the SAWEC model is described. The user guide of the SAWEC model can be found in chapter 6. [Dutch] Commissioned by VROM-DGW, ECN Policy Studies developed the simulation model SAWEC, a simulation and analysis model for explaining and predicting residential energy consumption and CO2 emissions. Several reasons underlay the wish to develop a successor to the SAVE-Households model in use at ECN. SAVE-Households is based on the BEK and BAK surveys of EnergieNed, whose results are only comparable to a limited extent with the KWR survey commissioned by VROM-DGW, because both the penetration rate of measures in a given reference year and the change of the penetration rate over a given period deviate from the KWR results. In developing the SAWEC model, insofar

  14. Disruption prediction at JET

    International Nuclear Information System (INIS)

    The sudden loss of plasma magnetic confinement, known as a disruption, is one of the major issues in a nuclear fusion machine such as JET (Joint European Torus). Disruptions pose very serious problems for the safety of the machine. The energy stored in the plasma is released to the machine structure in a few milliseconds, resulting in forces that at JET reach several meganewtons. The problem is even more severe for a nuclear fusion power station, where the forces would be of the order of one hundred meganewtons. The events that occur during a disruption are still not well understood, even if some mechanisms that can lead to a disruption have been identified and can be used to predict them. Unfortunately, it is always a combination of these events that generates a disruption, and therefore it is not possible to use simple algorithms to predict it. This thesis analyses the possibility of using neural network algorithms to predict plasma disruptions in real time. This involves the determination of plasma parameters every few milliseconds. A plasma boundary reconstruction algorithm, XLOC, capable of determining the plasma-wall distance every 2 milliseconds, has been developed in collaboration with Dr. D. O'Brien and Dr. J. Ellis. The XLOC output has been used to develop a multilayer perceptron network to determine plasma parameters such as li and qψ, with which a machine operational space has been experimentally defined. If the limits of this operational space are breached, the disruption probability increases considerably. Another approach for predicting disruptions is to use neural network classification methods to define the JET operational space. Two methods have been studied. The first method uses a multilayer perceptron network with a softmax activation function for the output layer. This method can be used for classifying the input patterns into various classes. In this case the plasma input patterns have been divided between disrupting and safe patterns, giving the possibility of
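
    For the classification approach, the essential machinery is a perceptron whose output layer is a softmax over the classes (here, safe vs. disrupting). The sketch below is a generic NumPy implementation under that description, not the JET code; the input features (e.g. li, qψ and other plasma parameters) are assumed to arrive as rows of X:

        import numpy as np

        rng = np.random.default_rng(0)

        def softmax(z):
            z = z - z.max(axis=1, keepdims=True)
            e = np.exp(z)
            return e / e.sum(axis=1, keepdims=True)

        def train_mlp(X, y, hidden=16, lr=0.1, epochs=500):
            """Tiny two-layer perceptron with a softmax output, trained by
            gradient descent on cross-entropy; y holds 0 (safe) / 1
            (disrupting) labels. A generic sketch of the approach."""
            n, d = X.shape
            W1 = rng.normal(0, 0.1, (d, hidden)); b1 = np.zeros(hidden)
            W2 = rng.normal(0, 0.1, (hidden, 2)); b2 = np.zeros(2)
            Y = np.eye(2)[y]                              # one-hot targets
            for _ in range(epochs):
                H = np.tanh(X @ W1 + b1)                  # hidden layer
                P = softmax(H @ W2 + b2)                  # class probabilities
                dZ2 = (P - Y) / n                         # cross-entropy gradient
                dW2, db2 = H.T @ dZ2, dZ2.sum(0)
                dH = dZ2 @ W2.T * (1 - H ** 2)
                dW1, db1 = X.T @ dH, dH.sum(0)
                W1 -= lr * dW1; b1 -= lr * db1
                W2 -= lr * dW2; b2 -= lr * db2
            return lambda Xn: softmax(np.tanh(Xn @ W1 + b1) @ W2 + b2)[:, 1]

    A network of this shape returns a disruption probability per time slice; thresholding that probability would give the real-time alarm.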

  15. Star 48 solid rocket motor nozzle analyses and instrumented firings

    Science.gov (United States)

    Porter, R. L.

    1986-01-01

    The analyses and testing performed by NASA in support of an expanded and improved nozzle design database for use by the U.S. solid rocket motor industry are presented. A production nozzle with a history of one ground failure and two flight failures was selected for analyses and testing. The stress analysis was performed with the Champion computer code developed by the U.S. Navy. Several improvements were made to the code. Strain predictions were made and compared to test data. Two short duration motor firings were conducted with highly instrumented nozzles. The first nozzle had 58 thermocouples, 66 strain gages, and 8 bondline pressure measurements. The second nozzle had 59 thermocouples, 68 strain measurements, and 8 bondline pressure measurements. Most of this instrumentation was on the nonmetallic parts, and provided significantly more thermal and strain data on the nonmetallic components of a nozzle than has been accumulated in a solid rocket motor test to date.

  16. Prediction of coefficients of thermal expansion for unidirectional composites

    Science.gov (United States)

    Bowles, David E.; Tompkins, Stephen S.

    1989-01-01

    Several analyses for predicting the longitudinal, alpha(1), and transverse, alpha(2), coefficients of thermal expansion of unidirectional composites were compared with each other, and with experimental data on different graphite-fiber-reinforced resin, metal, and ceramic matrix composites. Analytical and numerical analyses that accurately accounted for Poisson restraining effects in the transverse direction were in consistently better agreement with experimental data for alpha(2) than the less rigorous analyses. All of the analyses predicted similar values of alpha(1) and were in good agreement with the experimental data. A sensitivity analysis was conducted to determine the relative influence of constituent properties on the predicted values of alpha(1) and alpha(2). As would be expected, the prediction of alpha(1) was most sensitive to longitudinal fiber properties and the prediction of alpha(2) was most sensitive to matrix properties.
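
    For orientation, the simplest closed-form estimates of this kind are Schapery's (quoted here for context; not necessarily among the specific analyses compared in the paper). With fiber/matrix moduli E, expansion coefficients alpha, Poisson ratios nu and volume fractions V:

        \alpha_1 = \frac{E_f \alpha_f V_f + E_m \alpha_m V_m}{E_f V_f + E_m V_m}

        \alpha_2 \approx (1 + \nu_f)\,\alpha_f V_f + (1 + \nu_m)\,\alpha_m V_m - \alpha_1 (\nu_f V_f + \nu_m V_m)

    The first expression is stiffness-weighted, which is why alpha(1) is dominated by fiber properties; the second carries the Poisson terms whose treatment the more rigorous transverse analyses refine.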

  17. Making detailed predictions makes (some) predictions worse

    Science.gov (United States)

    Kelly, Theresa F.

    In this paper, we investigate whether making detailed predictions about an event makes other predictions worse. Across 19 experiments, 10,895 participants, and 415,960 predictions about 724 professional sports games, we find that people who made detailed predictions about sporting events (e.g., how many hits each baseball team would get) made worse predictions about more general outcomes (e.g., which team would win). We rule out that this effect is caused by inattention or fatigue, thinking too hard, or a differential reliance on holistic information about the teams. Instead, we find that thinking about game-relevant details before predicting winning teams causes people to give less weight to predictive information, presumably because predicting details makes information that is relatively useless for predicting the winning team more readily accessible in memory and therefore incorporated into forecasts. Furthermore, we show that this differential use of information can be used to predict what kinds of games will and will not be susceptible to the negative effect of making detailed predictions.

  18. Genome-Facilitated Analyses of Geomicrobial Processes

    Energy Technology Data Exchange (ETDEWEB)

    Kenneth H. Nealson

    2012-05-02

    This project had the goal(s) of understanding the mechanism(s) of extracellular electron transport (EET) in the microbe Shewanella oneidensis MR-1, and a number of other strains and species in the genus Shewanella. The major accomplishments included sequencing, annotation, and analysis of more than 20 Shewanella genomes. The comparative genomics enabled the beginning of a systems biology approach to this genus. Another major contribution involved the study of gene regulation, primarily in the model organism, MR-1. As part of this work, we took advantage of special facilities at the DOE: e.g., the synchrotron radiation facility at ANL, where we successfully used this system for elemental characterization of single cells in different metabolic states (1). We began work with purified enzymes, and identification of partially purified enzymes, leading to initial characterization of several of the 42 c-type cytochromes from MR-1 (2). As the genome became annotated, we began experiments on transcriptome analysis under different conditions of growth, the first step towards systems biology (3,4). Conductive appendages of Shewanella, called bacterial nanowires were identified and characterized during this work (5, 11, 20,21). For the first time, it was possible to measure the electron transfer rate between single cells and a solid substrate (20), a rate that has been confirmed by several other laboratories. We also showed that MR-1 cells preferentially attach to cells at a given charge, and are not attracted, or even repelled by other charges. The interaction with the charged surfaces begins with a stimulation of motility (called electrokinesis), and eventually leads to attachment and growth. One of the things that genomics allows is the comparative analysis of the various Shewanella strains, which led to several important insights. First, while the genomes predicted that none of the strains looked like they should be able to degrade N-acetyl glucosamine (NAG), the monomer

  19. Longitudinal Analyses of Early Lesions by Fluorescence: An Observational Study

    OpenAIRE

    Ferreira Zandoná, A.; Ando, M.; Gomez, G.F.; Garcia-Corretjer, M.; Eckert, G.J.; Santiago, E; Katz, B P; Zero, D.T.

    2013-01-01

    Previous caries experience correlates to future caries risk; thus, early identification of lesions has importance for risk assessment and management. In this study, we aimed to determine if Quantitative Light-induced Fluorescence (QLF) parameters—area (A [mm2]), fluorescence loss (∆F [%]), and ∆Q [%×mm2]—obtained by image analyses can predict lesion progression. We secured consent from 565 children (from 5-13 years old) and their parents/guardians and examined them at baseline and regular int...

  20. Approximate analyses of inelastic effects in pipework

    International Nuclear Information System (INIS)

    This presentation shows figures concerned with analyses of inelastic effects in pipework, as follows: comparison of experimental and calculated simplified-analysis results for free end rotation and for circumferential strain; interrupted stress relaxation; regenerated relaxation caused by reversed yield; buckling of straight pipe under combined bending and torsion; and results of fatigue tests of pipe bends

  1. Novel Algorithms for Astronomical Plate Analyses

    Indian Academy of Sciences (India)

    Rene Hudec; Lukas Hudec

    2011-03-01

    Powerful computers and dedicated software allow effective data mining and scientific analyses in astronomical plate archives. We give and discuss examples of newly developed algorithms for astronomical plate analyses, e.g., searches for optical transients, as well as for major spectral and brightness changes.

  2. Predictive assessment of reading.

    Science.gov (United States)

    Wood, Frank B; Hill, Deborah F; Meyer, Marianne S; Flowers, D Lynn

    2005-12-01

    Study 1 retrospectively analyzed neuropsychological and psychoeducational tests given to N=220 first graders, with follow-up assessments in third and eighth grade. Four predictor constructs were derived: (1) Phonemic Awareness, (2) Picture Vocabulary, (3) Rapid Naming, and (4) Single Word Reading. Together, these accounted for 88%, 76%, 69%, and 69% of the variance, respectively, in first, third, and eighth grade Woodcock Johnson Broad Reading and eighth grade Gates-MacGinitie. When Single Word Reading was excluded from the predictors, the remaining predictors still accounted for 71%, 65%, 61%, and 65% of variance in the respective outcomes. Secondary analyses of risk of low outcome showed sensitivities/specificities of 93.0/91.0, and 86.4/84.9, respectively, for predicting which students would be in the bottom 15% and 30% of actual first grade WJBR. Sensitivities/specificities were 84.8/83.3 and 80.2/81.3, respectively, for predicting the bottom 15% and 30% of actual third grade WJBR outcomes; eighth grade outcomes had sensitivities/specificities of 80.0/80.0 and 85.7/83.1, respectively, for the bottom 15% and 30% of actual eighth grade WJBR scores. Study 2 cross-validated the concurrent predictive validities in an N=500 geographically diverse sample of late kindergartners through third graders, whose ethnic and racial composition closely approximated the national early elementary school population. New tests of the same four predictor domains were used, together taking only 15 minutes to administer by teachers; the new Woodcock-Johnson III Broad Reading standard score was the concurrent criterion, whose testers were blind to the predictor results. This cross-validation showed 86% of the variance accounted for, using the same regression weights as used in Study 1. With these weights, sensitivity/specificity values for the 15% and 30% thresholds were, respectively, 91.3/88.0 and 94.1/89.1. These validities and accuracies are stronger than others reported for

  3. Downstream prediction using a nonlinear prediction method

    Science.gov (United States)

    Adenan, N. H.; Noorani, M. S. M.

    2013-11-01

    The estimation of river flow is significantly related to the impact of urban hydrology, as this could provide information to solve important problems, such as flooding downstream. The nonlinear prediction method has been employed for analysis of four years of daily river flow data for the Langat River at Kajang, Malaysia, which is located in a downstream area. The nonlinear prediction method involves two steps; namely, the reconstruction of phase space and prediction. The reconstruction of phase space involves reconstruction from a single variable to the m-dimensional phase space in which the dimension m is based on optimal values from two methods: the correlation dimension method (Model I) and false nearest neighbour(s) (Model II). The selection of an appropriate method for selecting a combination of preliminary parameters, such as m, is important to provide an accurate prediction. From our investigation, we gather that via manipulation of the appropriate parameters for the reconstruction of the phase space, Model II provides better prediction results. In particular, we have used Model II together with the local linear prediction method to achieve the prediction results for the downstream area with a high correlation coefficient. In summary, the results show that Langat River in Kajang is chaotic, and, therefore, predictable using the nonlinear prediction method. Thus, the analysis and prediction of river flow in this area can provide river flow information to the proper authorities for the construction of flood control, particularly for the downstream area.
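
    The two steps translate directly into code. Below is a minimal sketch (our own, with unit delay tau = 1 for brevity, and k chosen larger than m + 1 so the least-squares fit is overdetermined): reconstruct the m-dimensional phase space by the method of delays, then make a one-step local linear prediction from the k nearest neighbours of the current state:

        import numpy as np

        def embed(x, m, tau=1):
            """Method of delays: map a scalar series onto m-dimensional
            delay vectors (the reconstructed phase space)."""
            n = len(x) - (m - 1) * tau
            return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

        def local_linear_predict(x, m, k=10):
            """One-step-ahead forecast: fit an affine map from the k nearest
            neighbours of the current state to their successors (tau = 1)."""
            x = np.asarray(x, dtype=float)
            X = embed(x, m)
            current, history = X[-1], X[:-1]   # successors exist for history
            dist = np.linalg.norm(history - current, axis=1)
            nn = np.argsort(dist)[:k]
            succ = x[nn + m]                   # value one step after each neighbour
            A = np.column_stack([np.ones(k), history[nn]])
            coef, *_ = np.linalg.lstsq(A, succ, rcond=None)
            return coef[0] + current @ coef[1:]

    In the paper's terms, m would come from the correlation dimension (Model I) or the false-nearest-neighbour count (Model II) before the local linear step is applied.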

  4. Predictability of blocking

    International Nuclear Information System (INIS)

    Tibaldi and Molteni (1990, hereafter referred to as TM) had previously investigated operational blocking predictability by the ECMWF model and the possible relationships between model systematic error and blocking in the winter season of the Northern Hemisphere, using seven years of ECMWF operational archives of analyses and day 1 to 10 forecasts. They showed that fewer blocking episodes than in the real atmosphere were generally simulated by the model, and that this deficiency increased with increasing forecast time. As a consequence of this, a major contribution to the systematic error in the winter season was shown to derive from the inability of the model to properly forecast blocking. In this study, the analysis performed in TM for the first seven winter seasons of the ECMWF operational model is extended to the subsequent five winters, during which model development, reflecting both resolution increases and parametrisation modifications, continued unabated. In addition the objective blocking index developed by TM has been applied to the observed data to study the natural low frequency variability of blocking. The ability to simulate blocking of some climate models has also been tested

  5. Nuclear analyses of Indian LLCB test blanket system in ITER

    International Nuclear Information System (INIS)

    Heading towards its nuclear fusion reactor program, India is developing a Lead Lithium Ceramic Breeder (LLCB) tritium breeding blanket for its future fusion reactor. A mock-up of the LLCB blanket is proposed to be tested in ITER equatorial port no. 2, to ensure the overall performance of the blanket in a reactor-relevant nuclear fusion environment. Nuclear analyses play an important role in LLCB Test Blanket System development: they are required for tritium breeding estimation, thermal-hydraulic design, coolant process design, radioactive waste management, equipment maintenance and replacement strategies, and nuclear safety. To predict the nuclear behaviour of the LLCB test blanket module in ITER, nuclear responses such as tritium production, nuclear heating, neutron fluxes and radiation damage are estimated. As a part of the ITER machine, the LLCB TBS has to meet certain nuclear shielding requirements, i.e. shutdown dose rates should not exceed the defined limits on the ITER premises (inside the bio-shield ∼100 μSv/hr after 12 days of cooling and outside the bio-shield ∼10 μSv/hr after 1 day of cooling). Hence nuclear analyses are performed to assess and optimize the shielding capability of the LLCB TBS inside and outside the bio-shield. To establish the radioactivity levels of LLCB TBS components, which support the rad-waste and safety assessments, nuclear activation analyses are executed. Nuclear analyses of the LLCB TBS are performed using ITER-recommended nuclear analysis codes (i.e. MCNP, EASY), nuclear cross-section data libraries (i.e. FENDL 2.1, EAF) and the neutronic model (ITER C-lite v.1). The paper describes the comprehensive nuclear performance of the LLCB TBS in ITER. (author)

  6. The psychological status of phonological analyses

    Directory of Open Access Journals (Sweden)

    David Eddington

    2015-09-01

    Full Text Available This paper casts doubt on the psychological relevance of many phonological analyses. There are four reasons for this: (1) theoretical adequacy does not necessarily imply psychological significance; (2) most approaches are nonempirical in that they are not subject to potential spatiotemporal falsification; (3) phonological analyses are established with little or no recourse to the speakers of the language via experimental psychology; (4) the limited base of evidence on which most analyses are founded is further cause for skepticism.

  7. L’Analyse de discours des Sociologues

    OpenAIRE

    Demailly, Lise

    2013-01-01

    Sociologists use discourse analysis as a method of analysis. The research presented here examined this method, its specific features and its contributions to training in expression techniques (T.E.). It emerges that the sociologist first produces discourses (through interviews and observation) and then analyses and processes them. These discourses are difficult to use in T.E., saturated as they are with theoretical and even ideological stakes.

  8. Safety analyses for the planned Konrad repository

    International Nuclear Information System (INIS)

    The safety analyses for the planned federal Konrad repository are described; they serve to check and demonstrate observance of the protection goals. The safety analyses lead to the definition of requirements for the plant and the radioactive waste. As a large number of papers dealing with the safety analyses for the repository's operational phase have already been published, the present report concentrates on the investigations into the post-operational phase, which were carried out, among others, by the Bundesanstalt für Geowissenschaften und Rohstoffe (Federal Institute for Geosciences and Natural Resources), Hanover, and the Gesellschaft für Strahlen- und Umweltforschung (Radiological and Environmental Research Corporation), Braunschweig and Munich, on behalf of the PTB

  9. Moving Crystal Slow-Neutron Wavelength Analyser

    DEFF Research Database (Denmark)

    Buras, B.; Kjems, Jørgen

    1973-01-01

    Experimental proof that a moving single crystal can serve as a slow-neutron wavelength analyser of special features is presented. When the crystal moves with a velocity h/(2md) (h - Planck constant, m - neutron mass, d - interplanar spacing) perpendicular to the diffracting plane and the analysed neutron beam is parallel to the diffracting plane, then neutrons of different wavelengths contained in the incident beam are simultaneously diffracted under different reflection angles and recorded by a position-sensitive detector. Special features of this analysing system are briefly discussed.
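
    As a worked example (our own numbers, not from the abstract): for the (002) planes of pyrolytic graphite, d ≈ 3.35 Å, the matching crystal velocity is

        v = \frac{h}{2md} = \frac{6.626 \times 10^{-34}}{2 \times 1.675 \times 10^{-27} \times 3.35 \times 10^{-10}} \approx 590 \text{ m/s},

    a mechanically attainable speed.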

  10. Learning predictive clustering rules

    OpenAIRE

    Ženko, Bernard; Džeroski, Sašo; Struyf, Jan

    2005-01-01

    The two most commonly addressed data mining tasks are predictive modelling and clustering. Here we address the task of predictive clustering, which contains elements of both and generalizes them to some extent. We propose a novel approach to predictive clustering called predictive clustering rules, present an initial implementation and its preliminary experimental evaluation.

  11. 7 CFR 94.102 - Analyses available.

    Science.gov (United States)

    2010-01-01

    ... analyses for total ash, fat by acid hydrolysis, moisture, salt, protein, beta-carotene, catalase... plate count, direct microscopic count, Campylobacter, coliforms, presumptive Escherichia coli, Listeria monocytogenes, proteolytic count, psychrotrophic bacteria, Salmonella, Staphylococcus, thermoduric bacteria,...

  12. Comparison with Russian analyses of meteor impact

    Energy Technology Data Exchange (ETDEWEB)

    Canavan, G.H.

    1997-06-01

    The inversion model for meteor impacts is used to discuss Russian analyses and compare principal results. For common input parameters, the models produce consistent estimates of impactor parameters. Directions for future research are discussed and prioritized.

  13. Understanding Human Error Based on Automated Analyses

    Data.gov (United States)

    National Aeronautics and Space Administration — This is a report on a continuing study of automated analyses of experiential textual reports to gain insight into the causal factors of human errors in aviation...

  14. Anthocyanin analyses of Vaccinium fruit dietary supplements

    Science.gov (United States)

    Vaccinium fruit ingredients within dietary supplements were identified by comparisons with anthocyanin analyses of known Vaccinium profiles (demonstration of anthocyanin fingerprinting). Available Vaccinium supplements were purchased and analyzed; their anthocyanin profiles (based on HPLC separation...

  15. A digital image analyser for RIMS studies

    International Nuclear Information System (INIS)

    Resonance Ionisation Mass Spectrometry (RIMS) is now playing a vital role in various areas of physics and chemistry. A digital image analyser for quantitative analysis of RIMS experiments has been developed

  16. Predicting School Board Member Incumbent Defeat.

    Science.gov (United States)

    Lutz, Frank W.; Hunt, Brook P.

    Researchers attempted to predict the defeat of school board incumbents, using variables which had already been shown to account for incumbent defeat in statistical analyses performed after board elections in many different states. A global model was constructed based on 20 social, economic, and political variables as well as on school districts'…

  17. Monitoring and prediction of natural disasters

    International Nuclear Information System (INIS)

    The problems of predicting natural disasters and of synthesising environmental monitoring systems to collect, store, and process the information relevant to their solution are analysed. A three-level methodology is proposed for making decisions concerning natural disaster dynamics. The methodology is based on the assessment of environmental indicators and the use of numerical models of the environment

  18. Safety analyses for reprocessing and waste processing

    International Nuclear Information System (INIS)

    Presentation of an incident analysis of process steps of the RP, simplified considerations concerning safety, and safety analyses of the storage and solidification facilities of the RP. A release tree method is developed and tested. An incident analysis of process steps, the evaluation of the SRL-study and safety analyses of the storage and solidification facilities of the RP are performed in particular. (DG)

  19. Functional Analyses and Treatment of Precursor Behavior

    OpenAIRE

    Najdowski, Adel C; Wallace, Michele D; Ellsworth, Carrie L; MacAleese, Alicia N; Cleveland, Jackie M

    2008-01-01

    Functional analysis has been demonstrated to be an effective method to identify environmental variables that maintain problem behavior. However, there are cases when conducting functional analyses of severe problem behavior may be contraindicated. The current study applied functional analysis procedures to a class of behavior that preceded severe problem behavior (precursor behavior) and evaluated treatments based on the outcomes of the functional analyses of precursor behavior. Responding fo...

  20. Thermal Analyses of Cross-Linked Polyethylene

    Directory of Open Access Journals (Sweden)

    Radek Polansky

    2007-01-01

    Full Text Available The paper summarizes results obtained from structural analysis measurements: Differential Scanning Calorimetry (DSC), Thermogravimetry (TG), Thermomechanical Analysis (TMA) and Fourier transform infrared spectroscopy (FT-IR). Samples of cross-linked polyethylene cable insulation were tested via these analyses. The DSC and TG were carried out using a TA Instruments SDT Q600 simultaneous thermal analyzer coupled to a Nicolet 380 Fourier transform infrared spectrometer. Thermomechanical analysis was carried out with a TA Instruments TMA Q400EM apparatus.

  1. Nonparametric bootstrap prediction

    OpenAIRE

    Fushiki, Tadayoshi; Komaki, Fumiyasu; Aihara, Kazuyuki

    2005-01-01

    Ensemble learning has recently been intensively studied in the field of machine learning. 'Bagging' is a method of ensemble learning and uses bootstrap data to construct various predictors. The required prediction is then obtained by averaging the predictors. Harris proposed using this technique with the parametric bootstrap predictive distribution to construct predictive distributions, and showed that the parametric bootstrap predictive distribution gives asymptotically better prediction tha...
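
    The bagging recipe in the first three sentences translates directly into code. A minimal sketch (our own; the cubic-polynomial base predictor and the noisy-sine demo are arbitrary illustrative choices, and the parametric variant discussed in the paper would resample from a fitted model rather than from the data):

        import numpy as np

        rng = np.random.default_rng(1)

        def bagged_predict(x, y, x_new, n_boot=200, degree=3):
            """Bagging: fit one predictor per bootstrap resample of the
            data, then average the predictions."""
            preds = []
            for _ in range(n_boot):
                idx = rng.integers(0, len(x), len(x))      # bootstrap sample
                coef = np.polyfit(x[idx], y[idx], degree)  # one bootstrap predictor
                preds.append(np.polyval(coef, x_new))
            return np.mean(preds, axis=0)                  # bagged prediction

        # illustrative data: a noisy sine
        x = np.linspace(0.0, 3.0, 60)
        y = np.sin(x) + rng.normal(0.0, 0.1, 60)
        print(bagged_predict(x, y, np.array([1.5])))       # close to sin(1.5)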

  2. Predictability of social interactions

    OpenAIRE

    Xu, Kevin S.

    2013-01-01

    The ability to predict social interactions between people has profound applications including targeted marketing and prediction of information diffusion and disease propagation. Previous work has shown that the location of an individual at any given time is highly predictable. This study examines the predictability of social interactions between people to determine whether interaction patterns are similarly predictable. I find that the locations and times of interactions for an individual are...

  3. Summary of dynamic analyses of the advanced neutron source reactor inner control rods

    International Nuclear Information System (INIS)

    A summary of the structural dynamic analyses that were instrumental in providing design guidance to the Advanced Neutron Source (ANS) inner control element system is presented in this report. The structural analyses and the functional constraints that required certain performance parameters were combined to shape and guide the design effort toward a prediction of successful and reliable control and scram operation to be provided by these inner control rods

  4. Regression Analyses of Self-Regulatory Concepts to Predict Community College Math Achievement and Persistence

    Science.gov (United States)

    Gramlich, Stephen Peter

    2010-01-01

    Open door admissions at community colleges bring returning adults, first timers, low achievers, disabled persons, and immigrants. Passing and retention rates for remedial and non-developmental math courses can be comparatively inadequate (LAVC, 2005; CCPRDC, 2000; SBCC, 2004; Seybert & Soltz, 1992; Waycaster, 2002). Mathematics achievement…

  5. Integrative genomic analyses of a novel cytokine, interleukin-34 and its potential role in cancer prediction

    OpenAIRE

    Wang, Bo; Xu, Wenming; TAN, MIAOLIAN; Xiao, Yan; Yang, Haiwei; Xia, Tian-Song

    2014-01-01

    Interleukin-34 (IL-34) is a novel cytokine, which is composed of 222 amino acids and forms homodimers. It binds to the macrophage colony-stimulating factor (M-CSF) receptor and plays an important role in innate immunity and inflammatory processes. In the present study, we identified the complete IL-34 gene in 25 mammalian genomes and found that IL-34 existed in all types of vertebrates, including fish, amphibians, birds and mammals. These species have a similar 7 exon/6 intron gene o...

  6. ANALYSING URBAN EFFECTS IN BUDAPEST USING THE WRF NUMERICAL WEATHER PREDICTION MODEL

    Directory of Open Access Journals (Sweden)

    JÚLIA GÖNDÖCS

    2016-03-01

    Full Text Available Continuously growing cities significantly modify the entire environment through air pollution and modification of the land surface, resulting in altered energy budgets and land-atmosphere exchange processes over built-up areas. These effects appear mainly in cities or metropolitan areas, leading to the Urban Heat Island (UHI) phenomenon, which occurs due to the temperature difference between the built-up areas and their cooler surroundings. The Weather Research and Forecasting (WRF) mesoscale model, coupled to a multilayer urban canopy parameterisation, is used to investigate this phenomenon for Budapest and its surroundings with actual land surface properties. In this paper the basic ideas of our research and the methodology are presented in brief. The simulation is completed for one week in summer 2015, with initial meteorological fields from Global Forecasting System (GFS) outputs, under atmospheric conditions of weak wind and clear sky over the Pannonian Basin. Then, to improve the WRF model and its settings, the calculated skin temperature is compared to remotely sensed measurements from the Aqua and Terra satellites, and the temporal and spatial bias values are estimated.

  7. An Earthquake Prediction System Using The Time Series Analyses of Earthquake Property And Crust Motion

    International Nuclear Information System (INIS)

    We have developed a short-term deterministic earthquake (EQ) forecasting system similar to those used for typhoons and hurricanes, which has been under test operation at the website http://www.tec21.jp/ since June 2003. We use the focus and crust displacement data recently opened to the public by the Japanese seismograph and global positioning system (GPS) networks, respectively. Our system divides the forecasting area into five regional areas of Japan, each of which is about 5 deg. by 5 deg. We have found that it can forecast the focus, date of occurrence and magnitude (M) of an impending EQ (whose M is larger than about 6), all within narrow limits. We describe the system with two examples. One is the 2003/09/26 EQ of M 8 in the Hokkaido area, which is a hindcast. The other is a successful rollout of the most recent forecast, of the 2004/05/30 EQ of M 6.7 off the coast of the southern Kanto (Tokyo) area

  8. Fertility prediction of frozen boar sperm using novel and conventional analyses

    Science.gov (United States)

    Frozen-thawed boar sperm is seldom used for artificial insemination (AI) because fertility is lower than fresh or cooled semen. Despite the many advantages of AI including reduced pathogen exposure and ease of semen transport, cryo-induced damage to sperm usually results in decreased litter sizes a...

  9. Use of CFD Analyses to Predict Disk Friction Loss of Centrifugal Compressor Impellers

    Science.gov (United States)

    Cho, Leesang; Lee, Seawook; Cho, Jinsoo

    To improve the total efficiency of centrifugal compressors, it is necessary to reduce the disk friction loss, which manifests as a power loss. In this study, the disk friction loss due to the effects of axial clearance and surface roughness is analyzed, and methods to reduce it are proposed. The rotating-reference-frame technique in a commercial CFD tool (FLUENT) is used for steady-state analysis of the centrifugal compressor. Numerical results of the CFD analysis are compared with theoretical results from established experimental empirical equations. The disk friction loss of the impeller decreases with increasing axial clearance, as long as the axial clearance between the impeller disk and the casing is smaller than the boundary layer thickness. In addition, the disk friction loss of the impeller increases with increasing surface roughness, in a pattern similar to that of existing experimental empirical formulas. The disk friction loss of the impeller is more affected by the surface roughness than by changes in the axial clearance. To minimize the disk friction loss on the centrifugal compressor impeller, the axial clearance should be designed to equal the theoretical boundary layer thickness.
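
    For context, disk friction is conventionally reported through a moment coefficient, and the conclusions above can be read against that form. A small sketch (generic textbook relation, not the paper's CFD model; the coefficient value is illustrative and would in practice come from correlations in clearance ratio, roughness and Reynolds number):

        import math

        def disk_friction_power(c_m, rho, omega, radius):
            """Windage power from the moment-coefficient form
            M = 0.5 * c_m * rho * omega**2 * R**5, P = M * omega.
            c_m is empirical and depends on clearance ratio s/R, surface
            roughness and Reynolds number -- the quantities varied above."""
            torque = 0.5 * c_m * rho * omega ** 2 * radius ** 5
            return torque * omega

        # illustrative: air at 1.2 kg/m^3, 30,000 rpm, R = 0.1 m, c_m = 0.004
        omega = 30000 * 2 * math.pi / 60
        print(disk_friction_power(0.004, 1.2, omega, 0.1))   # ~7e2 W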

  10. Analyses of the predicted changes of the global oceans under the increased greenhouse gases scenarios

    Institute of Scientific and Technical Information of China (English)

    MU Lin; WU Dexing; CHEN Xue'en; J Jungclaus

    2006-01-01

    A new climate model (ECHAM5/MPIOM1), developed for the fourth assessment report of the Intergovernmental Panel on Climate Change (IPCC) at the Max Planck Institute for Meteorology, is used to study climate change under different increased-CO2 scenarios (B1, A1B and A2). Based on the corresponding model results, the sea surface temperature and salinity structure, the variations of the thermohaline circulation (THC) and the changes of sea ice in the northern hemisphere are analyzed. It is concluded that from 2000 to 2100, under the B1, A1B and A2 scenarios, the global mean sea surface temperature (SST) would increase by 2.5℃, 3.5℃ and 4.0℃ respectively; in the Arctic region the SST increase would even exceed 10.0℃. The largest negative change in the freshwater flux is located in the subtropical oceans, while precipitation in the eastern tropical Pacific increases. The strength of the THC decreases under the B1, A1B and A2 scenarios, with reductions of about 20%, 25% and 25.1% of the present THC strength respectively. In the northern hemisphere, the area of sea ice cover would decrease by about 50% under the A1B scenario.

  11. Behavioral and Physiological Neural Network Analyses: A Common Pathway toward Pattern Recognition and Prediction

    Science.gov (United States)

    Ninness, Chris; Lauter, Judy L.; Coffee, Michael; Clary, Logan; Kelly, Elizabeth; Rumph, Marilyn; Rumph, Robin; Kyle, Betty; Ninness, Sharon K.

    2012-01-01

    Using 3 diverse datasets, we explored the pattern-recognition ability of the Self-Organizing Map (SOM) artificial neural network as applied to diverse nonlinear data distributions in the areas of behavioral and physiological research. Experiment 1 employed a dataset obtained from the UCI Machine Learning Repository. Data for this study…
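
    For readers unfamiliar with the technique, a self-contained toy SOM shows the two ingredients the abstract relies on: competitive selection of a best-matching unit and a neighbourhood-weighted update of the map weights. This is a generic sketch trained on synthetic data, not the authors' implementation.

```python
import numpy as np

def train_som(data, grid=(10, 10), iters=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Train a toy Self-Organizing Map on the rows of `data`."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    yy, xx = np.mgrid[0:h, 0:w]      # node coordinates for the neighbourhood
    for t in range(iters):
        x = data[rng.integers(len(data))]
        # Best-matching unit: node whose weight vector is closest to x.
        dist = np.linalg.norm(weights - x, axis=2)
        by, bx = np.unravel_index(np.argmin(dist), dist.shape)
        # Learning rate and neighbourhood radius decay over training.
        frac = t / iters
        lr = lr0 * (1 - frac)
        sigma = sigma0 * (1 - frac) + 1e-3
        g = np.exp(-((yy - by)**2 + (xx - bx)**2) / (2 * sigma**2))
        weights += lr * g[..., None] * (x - weights)
    return weights

# Synthetic data: two Gaussian clusters in a 4-D feature space.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 1, (100, 4)), rng.normal(5, 1, (100, 4))])
som = train_som(data)   # nearby map nodes end up representing one cluster
```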

  12. Integrative genomic analyses of a novel cytokine, interleukin-34 and its potential role in cancer prediction.

    Science.gov (United States)

    Wang, Bo; Xu, Wenming; Tan, Miaolian; Xiao, Yan; Yang, Haiwei; Xia, Tian-Song

    2015-01-01

    Interleukin-34 (IL-34) is a novel cytokine, which is composed of 222 amino acids and forms homodimers. It binds to the macrophage colony-stimulating factor (M-CSF) receptor and plays an important role in innate immunity and inflammatory processes. In the present study, we identified the complete IL-34 gene in 25 mammalian genomes and found that IL-34 exists in all types of vertebrates, including fish, amphibians, birds and mammals. These species have a similar 7 exon/6 intron gene organization. The phylogenetic tree indicated that the IL-34 genes from the primate lineage, rodent lineage and teleost lineage form species-specific clusters. It was found that mammalian IL-34 was under positive selection pressure, with the identified positively selected site, 196Val. Fifty-five functionally relevant single nucleotide polymorphisms (SNPs), including 32 SNPs causing missense mutations, 3 exonic splicing enhancer SNPs and 20 SNPs causing nonsense mutations, were identified from 2,141 available SNPs in the human IL-34 gene. IL-34 was expressed in various types of cancer, including blood, brain, breast, colorectal, eye, head and neck, lung, ovarian and skin cancer. A total of 5 out of 40 tests (1 blood cancer, 1 brain cancer, 1 colorectal cancer and 2 lung cancer) revealed an association between IL-34 gene expression and cancer prognosis. It was found that the association between the expression of IL-34 and cancer prognosis varied in different types of cancer, even in the same type of cancer from different databases. This suggests that the function of IL-34 in these tumors may be multidimensional. Binding sites for the regulatory transcription factors upstream transcription factor 1 (USF1), regulatory factor X-1 (RFX1), the Sp1 transcription factor (SP1), POU class 3 homeobox 2 (POU3F2) and forkhead box L1 (FOXL1) were identified in the IL-34 gene upstream (promoter) region, which may be involved in the effects of IL-34 in tumors. PMID:25395235

  13. Numerical earthquake prediction

    International Nuclear Information System (INIS)

    Can earthquakes be predicted? How should people overcome the difficulties encountered in the study of earthquake prediction? This issue can draw inspiration from the experience of weather forecasting. Although weather forecasting took about half a century to advance from empirical to numerical methods, it has achieved significant success. A consensus has been reached among the Chinese seismological community that earthquake prediction must also develop from empirical forecasting to physical prediction. However, it is seldom mentioned that physical prediction is characterized by quantitative numerical predictions based on physical laws. This article discusses five key components for numerical earthquake prediction and their current status. We conclude that numerical earthquake prediction should now be put on the planning agenda and its roadmap designed, seismic stations should be deployed and observations made according to the needs of numerical prediction, and theoretical research should be carried out. (authors)

  14. Finite element analyses of CCAT preliminary design

    Science.gov (United States)

    Sarawit, Andrew T.; Kan, Frank W.

    2014-07-01

    This paper describes the development of the CCAT telescope finite element model (FEM) and the analyses performed to support the preliminary design work. CCAT will be a 25 m diameter telescope operating in the 0.2 to 2 mm wavelength range. It will be located at an elevation of 5600 m on Cerro Chajnantor in Northern Chile, near ALMA. The telescope will be equipped with wide-field cameras and spectrometers mounted at the two Nasmyth foci. The telescope will be inside an enclosure to protect it from wind buffeting, direct solar heating, and bad weather. The main structures of the telescope include a steel Mount and a carbon-fiber-reinforced-plastic (CFRP) primary truss. The finite element model developed in this study was used to perform modal, frequency response, seismic response spectrum, stress, and deflection analyses of the telescope. Modal analyses were performed to compute the structure's natural frequencies and mode shapes and to obtain reduced-order modal output at selected locations in the telescope structure to support the design of the Mount control system. Modal frequency response analyses were also performed to compute transfer functions at these selected locations. Seismic response spectrum analyses of the telescope subject to the Maximum Likely Earthquake were performed to compute peak accelerations and seismic demand stresses. Stress analyses were performed for gravity load to obtain gravity demand stresses. Deflection analyses for gravity load, thermal load, and differential elevation drive torque were performed so that the CCAT Observatory can verify that the structures meet the stringent telescope surface and pointing error requirements.
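
    At the heart of the modal analyses mentioned here is the generalized eigenproblem K*phi = omega^2 * M*phi for the structure's stiffness and mass matrices. The toy three-degree-of-freedom system below (values invented, vastly simpler than the CCAT FEM) shows the computation:

```python
import numpy as np
from scipy.linalg import eigh

# Toy 3-DOF spring-mass chain standing in for a telescope FEM.
k = 1.0e6  # spring stiffness, N/m (illustrative value)
K = k * np.array([[ 2.0, -1.0,  0.0],
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  1.0]])
M = np.diag([100.0, 100.0, 50.0])   # lumped masses, kg

# Generalized symmetric eigenproblem: eigenvalues are omega^2.
w2, phi = eigh(K, M)
freqs_hz = np.sqrt(w2) / (2 * np.pi)
print(freqs_hz.round(2))            # natural frequencies, Hz
print(phi.round(3))                 # mode shapes (one per column)
```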

  15. The prediction of different experiences of longterm illness

    DEFF Research Database (Denmark)

    Blank, N; Diderichsen, Finn

    1996-01-01

    To analyse the role played by socioeconomic factors and self rated general health in the prediction of the reporting of severe longterm illness, and the extent to which these factors explain social class differences in the reporting of such illness.

  16. Regional Scale Analyses of Climate Change Impacts on Agriculture

    Science.gov (United States)

    Wolfe, D. W.; Hayhoe, K.

    2006-12-01

    New statistically downscaled climate modeling techniques provide an opportunity for improved regional analysis of climate change impacts on agriculture. Climate modeling outputs can often simultaneously meet the needs of those studying impacts on natural as well as managed ecosystems. Climate outputs can be used to drive existing forest or crop models, or livestock models (e.g., temperature-humidity index model predicting dairy milk production) for improved information on regional impact. High spatial resolution climate forecasts, combined with knowledge of seasonal temperatures or rainfall constraining species ranges, can be used to predict shifts in suitable habitat for invasive weeds, insects, and pathogens, as well as cash crops. Examples of climate thresholds affecting species range and species composition include: minimum winter temperature, duration of winter chilling (vernalization) hours (e.g., hours below 7.2 C), frost-free period, and frequency of high temperature stress days in summer. High resolution climate outputs can also be used to drive existing integrated pest management models predicting crop insect and disease pressure. Collectively, these analyses can be used to test hypotheses or provide insight into the impact of future climate change scenarios on species range shifts and threat from invasives, shifts in crop production zones, and timing and regional variation in economic impacts.
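
    One of the climate thresholds mentioned above, winter chilling (vernalization) hours below 7.2 C, is simple to compute from an hourly temperature series; the temperatures in the sketch below are synthetic and purely illustrative.

```python
import numpy as np

def chilling_hours(hourly_temps_c, threshold=7.2):
    """Count chilling hours: hourly temperatures below the threshold.

    hourly_temps_c: array of hourly air temperatures in deg C over the
    period of interest; 7.2 C is the classic chilling threshold cited
    in the abstract.
    """
    temps = np.asarray(hourly_temps_c)
    return int(np.sum(temps < threshold))

# Hypothetical winter: sinusoidal daily cycle around 5 C for 90 days.
hours = np.arange(90 * 24)
temps = 5.0 + 6.0 * np.sin(2 * np.pi * hours / 24)
print(chilling_hours(temps), "chilling hours")
```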

  17. PREDICTING TURBINE STAGE PERFORMANCE

    Science.gov (United States)

    Boyle, R. J.

    1994-01-01

    This program was developed to predict turbine stage performance taking into account the effects of complex passage geometries. The method uses a quasi-3D inviscid-flow analysis iteratively coupled to calculated losses so that changes in losses result in changes in the flow distribution. In this manner the effects of both the geometry on the flow distribution and the flow distribution on losses are accounted for. The flow may be subsonic or shock-free transonic. The blade row may be fixed or rotating, and the blades may be twisted and leaned. This program has been applied to axial and radial turbines, and is helpful in the analysis of mixed flow machines. This program is a combination of the flow analysis programs MERIDL and TSONIC coupled to the boundary layer program BLAYER. The subsonic flow solution is obtained by a finite difference, stream function analysis. Transonic blade-to-blade solutions are obtained using information from the finite difference, stream function solution with a reduced flow factor. Upstream and downstream flow variables may vary from hub to shroud and provision is made to correct for loss of stagnation pressure. Boundary layer analyses are made to determine profile and end-wall friction losses. Empirical loss models are used to account for incidence, secondary flow, disc windage, and clearance losses. The total losses are then used to calculate stator, rotor, and stage efficiency. This program is written in FORTRAN IV for batch execution and has been implemented on an IBM 370/3033 under TSS with a central memory requirement of approximately 4.5 Megs of 8 bit bytes. This program was developed in 1985.

  18. Predictive modeling of complications.

    Science.gov (United States)

    Osorio, Joseph A; Scheer, Justin K; Ames, Christopher P

    2016-09-01

    Predictive analytic algorithms are designed to identify patterns in the data that allow for accurate predictions without the need for a hypothesis. Therefore, predictive modeling can provide detailed and patient-specific information that can be readily applied when discussing the risks of surgery with a patient. There are few studies using predictive modeling techniques in the adult spine surgery literature. These types of studies represent the beginning of the use of predictive analytics in spine surgery outcomes. We will discuss the advancements in the field of spine surgery with respect to predictive analytics, the controversies surrounding the technique, and the future directions. PMID:27286683

  19. A theoretical framework for analysing preschool teaching

    DEFF Research Database (Denmark)

    Chaiklin, Seth

    2014-01-01

    This article introduces a theoretical framework for analysing preschool teaching as a historically-grounded societal practice. The aim is to present a unified framework that can be used to analyse and compare both historical and contemporary examples of preschool teaching practice within and across national traditions. The framework has two main components: an analysis of preschool teaching as a practice, formed in relation to societal needs, and an analysis of the categorical relations which necessarily must be addressed in preschool teaching activity. The framework is introduced and illustrated through the analysis of one of the earliest recorded examples of preschool education (initiated by J. F. Oberlin in northeastern France in 1767). The general idea of societal need is elaborated as a way of analysing practices, and a general analytic schema is presented for characterising preschool teaching.

  20. Advanced Toroidal Facility vacuum vessel stress analyses

    International Nuclear Information System (INIS)

    The complex geometry of the Advanced Toroidal Facility (ATF) vacuum vessel required special analysis techniques in investigating the structural behavior of the design. The response of a large-scale finite element model was found for transportation and operational loading. Several computer codes and systems, including the National Magnetic Fusion Energy Computer Center Cray machines, were implemented in accomplishing these analyses. The work combined complex methods that taxed the limits of both the codes and the computer systems involved. Using MSC/NASTRAN cyclic-symmetry solutions permitted using only 1/12 of the vessel geometry to mathematically analyze the entire vessel. This allowed the greater detail and accuracy demanded by the complex geometry of the vessel. Critical buckling-pressure analyses were performed with the same model. The development, results, and problems encountered in performing these analyses are described. 5 refs., 3 figs

  1. Empirical Prediction Intervals for County Population Forecasts.

    Science.gov (United States)

    Rayer, Stefan; Smith, Stanley K; Tayman, Jeff

    2009-12-01

    Population forecasts entail a significant amount of uncertainty, especially for long-range horizons and for places with small or rapidly changing populations. This uncertainty can be dealt with by presenting a range of projections or by developing statistical prediction intervals. The latter can be based on models that incorporate the stochastic nature of the forecasting process, on empirical analyses of past forecast errors, or on a combination of the two. In this article, we develop and test prediction intervals based on empirical analyses of past forecast errors for counties in the United States. Using decennial census data from 1900 to 2000, we apply trend extrapolation techniques to develop a set of county population forecasts; calculate forecast errors by comparing forecasts to subsequent census counts; and use the distribution of errors to construct empirical prediction intervals. We find that empirically-based prediction intervals provide reasonably accurate predictions of the precision of population forecasts, but provide little guidance regarding their tendency to be too high or too low. We believe the construction of empirically-based prediction intervals will help users of small-area population forecasts measure and evaluate the uncertainty inherent in population forecasts and plan more effectively for the future. PMID:19936030
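
    The core of the method can be sketched in a few lines: collect the empirical distribution of past percentage errors for comparable forecasts, then apply its quantiles to a new point forecast. The error distribution below is synthetic, not the article's county data.

```python
import numpy as np

def empirical_interval(point_forecast, past_errors, level=0.90):
    """Build a prediction interval from the distribution of past errors.

    past_errors: algebraic percentage errors of comparable past
    forecasts, e.g. (forecast - census) / census. The interval applies
    the empirical error quantiles to the new point forecast.
    """
    alpha = (1 - level) / 2
    lo, hi = np.quantile(past_errors, [alpha, 1 - alpha])
    return point_forecast * (1 + lo), point_forecast * (1 + hi)

# Hypothetical errors from past county forecasts at a 20-year horizon:
# on average 2% too high, with a 15% standard deviation.
rng = np.random.default_rng(42)
errors = rng.normal(0.02, 0.15, size=500)
print(empirical_interval(100_000, errors))  # 90% interval for a new forecast
```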

  2. TOGGLE : toolbox for generic NGS analyses

    OpenAIRE

    Monat, Cécile; Tranchant-Dubreuil, Christine; Kougbeadjo, Ayité; Farcy, Cédric; Ortega-Abboud, Enrique; Amanzougarene, Souhila; Ravel, Sébastien; Agbessi, Mawussé; Orjuela-Bouniol, Julie; Summo, Maryline; Sabot, François

    2015-01-01

    Background The explosion of NGS (Next Generation Sequencing) sequence data requires a huge effort in Bioinformatics methods and analyses. The creation of dedicated, robust and reliable pipelines able to handle dozens of samples from raw FASTQ data to relevant biological data is a time-consuming task in all projects relying on NGS. To address this, we created a generic and modular toolbox for developing such pipelines. Results TOGGLE (TOolbox for Generic nGs anaLysEs) is a suite of tools able ...

  3. TOGGLE: toolbox for generic NGS analyses

    OpenAIRE

    Monat, Cécile; Tranchant-Dubreuil, Christine; Kougbeadjo, Ayité; Farcy, Cédric; Ortega-Abboud, Enrique; Amanzougarene, Souhila; Ravel, Sébastien; Agbessi, Mawusse; Orjuela-Bouniol, Julie; Summo, Marilyne; Sabot, François

    2015-01-01

    Background: The explosion of NGS (Next Generation Sequencing) sequence data requires a huge effort in Bioinformatics methods and analyses. The creation of dedicated, robust and reliable pipelines able to handle dozens of samples from raw FASTQ data to relevant biological data is a time-consuming task in all projects relying on NGS. To address this, we created a generic and modular toolbox for developing such pipelines. Results: TOGGLE (TOolbox for Generic nGs anaLysEs) is a suite of tools abl...

  4. TOGGLE: toolbox for generic NGS analyses

    OpenAIRE

    Monat, Cécile; Tranchant-Dubreuil, Christine; Kougbeadjo, Ayité; Farcy, Cédric; Ortega-Abboud, Enrique; Amanzougarene, Souhila; Ravel, Sébastien; Agbessi, Mawusse; Orjuela-Bouniol, Julie; Summo, Marilyne; Sabot, François

    2015-01-01

    Background The explosion of NGS (Next Generation Sequencing) sequence data requires a huge effort in Bioinformatics methods and analyses. The creation of dedicated, robust and reliable pipelines able to handle dozens of samples from raw FASTQ data to relevant biological data is a time-consuming task in all projects relying on NGS. To address this, we created a generic and modular toolbox for developing such pipelines. Results TOGGLE (TOolbox for Generic nGs anaLysEs) is a suite of tools able ...

  5. Analyse de discours et demande sociale

    OpenAIRE

    Cislaru, Georgeta; Garnier, Sylvie; Matras, Marie-Thérèse; Pugnière-Saavedra, Frédéric; Rousseau, Patrick; Sitri, Frédérique; Veniard, Marie

    2010-01-01

    What can discourse analysis reveal about societal practices and the discursive practices that underlie them? By questioning discourse, discourse analysis also questions the bodies that produce it: political, media and institutional ones. It has thus engaged, over the past forty years or so, in a fruitful interdisciplinary dialogue. With five contributions from discourse analysts and two from child protection professionals, this issue of the Carnets du ...

  6. Interferences in reactor neutron activation analyses

    International Nuclear Information System (INIS)

    It has been shown that interfering reactions may occur in neutron activation analyses of aluminum and zinc matrices, commonly used in nuclear areas. The interferences analysed were 27Al(n,α)24Na and 64Zn(n,p)64Cu. The method used was non-destructive neutron activation analysis, and the spectra were obtained with a 1024-channel multichannel system coupled to a Ge(Li) detector. Sodium was detected in aluminum samples from the reactor tank and pneumatic transfer system. The independence of the sodium concentration in samples in the range of 0-100 ppm is shown by the attenuation obtained with the samples encapsulated in cadmium. (Author)

  7. Prosjektering og analyse av en spennarmert betongbru

    OpenAIRE

    Strand, Elin Holsten; Kaldbekkdalen, Ann-Kristin

    2014-01-01

    The purpose of this report is to carry out the analysis and design of a post-tensioned concrete bridge. Modelling and analysis were performed in NovaFrame 5. Part of the task was to determine the tendon system and the cross-section height of the bridge. Six tendons were assumed in the span, and twelve over the support. The cross-section height was set to 1.3 metres. The design was carried out in accordance with the applicable Eurocodes, relevant documents and Håndbok 185, which is...

  8. Optimal predictive model selection

    OpenAIRE

    Barbieri, Maria Maddalena; Berger, James O.

    2004-01-01

    Often the goal of model selection is to choose a model for future prediction, and it is natural to measure the accuracy of a future prediction by squared error loss. Under the Bayesian approach, it is commonly perceived that the optimal predictive model is the model with highest posterior probability, but this is not necessarily the case. In this paper we show that, for selection among normal linear models, the optimal predictive model is often the median probability model, which is defined a...
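
    The median probability model is straightforward to extract once posterior model probabilities are in hand: include a variable exactly when its posterior inclusion probability is at least one half. A small hypothetical posterior over three candidate predictors illustrates this:

```python
import numpy as np

def median_probability_model(models, posterior_probs, n_vars):
    """Select the median probability model of Barbieri & Berger.

    models: list of variable-index tuples; posterior_probs: their
    posterior model probabilities. A variable enters the selected
    model iff its posterior inclusion probability is >= 1/2.
    """
    incl = np.zeros(n_vars)
    for m, p in zip(models, posterior_probs):
        for j in m:
            incl[j] += p
    return tuple(j for j in range(n_vars) if incl[j] >= 0.5), incl

# Hypothetical posterior over models built from 3 candidate predictors.
models = [(0,), (0, 1), (0, 2), (0, 1, 2)]
probs = [0.30, 0.35, 0.20, 0.15]
selected, incl = median_probability_model(models, probs, 3)
print(selected, incl)  # -> (0, 1), inclusion probs [1.0, 0.5, 0.35]
```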

  9. Predictive software design measures

    OpenAIRE

    Love, Randall James

    1994-01-01

    This research develops a set of predictive measures enabling software testers and designers to identify and target potential problem areas for additional and/or enhanced testing. Predictions are available as early in the design process as requirements allocation and as late as code walk-throughs. These predictions are based on characteristics of the design artifacts prior to coding. Prediction equations are formed at established points in the software development process...

  10. Comparison of veterinary import risk analyses studies

    NARCIS (Netherlands)

    Vos-de Jong, de C.J.; Conraths, F.J.; Adkin, A.; Jones, E.M.; Hallgren, G.S.; Paisley, L.G.

    2011-01-01

    Twenty-two veterinary import risk analyses (IRAs) were audited: a) for inclusion of the main elements of risk analysis; b) between different types of IRAs; c) between reviewers' scores. No significant differences were detected between different types of IRAs, although quantitative IRAs and IRAs publ

  11. UMTS signal measurements with digital spectrum analysers

    International Nuclear Information System (INIS)

    The launch of the Universal Mobile Telecommunications System (UMTS), the most recent mobile telecommunications standard, has imposed the requirement of updating measurement instrumentation and methodologies. In order to define the most reliable measurement procedure, aimed at assessing exposure to electromagnetic fields, modern spectrum analysers' features for correct signal characterisation have been reviewed. (authors)

  12. A gamma model for DNA mixture analyses

    OpenAIRE

    Cowell, R. G.; Lauritzen, S L; Mortera, J.

    2007-01-01

    We present a new methodology for analysing forensic identification problems involving DNA mixture traces where several individuals may have contributed to the trace. The model used for identification and separation of DNA mixtures is based on a gamma distribution for peak area values. In this paper we illustrate the gamma model and apply it on several real examples from forensic casework.
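
    As a rough illustration of the modelling idea, one can fit a gamma distribution to observed peak areas; the data below are simulated, and the actual mixture model ties the gamma parameters to the contributors' DNA amounts in a way not reproduced here.

```python
import numpy as np
from scipy import stats

# Simulated peak-area data for one allele across replicate runs.
rng = np.random.default_rng(7)
peak_areas = rng.gamma(shape=8.0, scale=250.0, size=200)

# Fit a gamma distribution to the observed peak areas, fixing the
# location at zero since areas are non-negative.
shape, loc, scale = stats.gamma.fit(peak_areas, floc=0.0)
print(f"shape = {shape:.2f}, scale = {scale:.1f}")
```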

  13. En Billig GPS Data Analyse Platform

    DEFF Research Database (Denmark)

    Andersen, Ove; Christiansen, Nick; Larsen, Niels T.; Torp, Kristian

    2011-01-01

    This article presents a complete software platform for the analysis of GPS data. The platform is built entirely from open-source components. The individual components of the platform are described in detail. The advantages and disadvantages of using open source are discussed, including which IT-policy measures...

  14. Comparing functional annotation analyses with Catmap

    Directory of Open Access Journals (Sweden)

    Krogh Morten

    2004-12-01

    Abstract Background Ranked gene lists from microarray experiments are usually analysed by assigning significance to predefined gene categories, e.g., based on functional annotations. Tools performing such analyses are often restricted to a category score based on a cutoff in the ranked list and a significance calculation based on random gene permutations as the null hypothesis. Results We analysed three publicly available data sets, in each of which samples were divided into two classes and genes ranked according to their correlation to class labels. We developed a program, Catmap (available for download at http://bioinfo.thep.lu.se/Catmap), to compare different scores and null hypotheses in gene category analysis, using Gene Ontology annotations for category definition. When a cutoff-based score was used, results depended strongly on the choice of cutoff, introducing an arbitrariness in the analysis. Comparing results using random gene permutations and random sample permutations, respectively, we found that the assigned significance of a category depended strongly on the choice of null hypothesis. Compared to sample label permutations, gene permutations gave much smaller p-values for large categories with many coexpressed genes. Conclusions In gene category analyses of ranked gene lists, a cutoff-independent score is preferable. The choice of null hypothesis is very important; random gene permutations do not work well as an approximation to sample label permutations.
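
    The contrast drawn in the abstract can be illustrated with a cutoff-independent score (here, the mean rank of a category's genes) evaluated against a gene-permutation null. This is a generic sketch, not Catmap itself, and the ranks are hypothetical; note that for coexpressed categories this null tends to overstate significance relative to sample-label permutation.

```python
import numpy as np

def category_pvalue(ranks_in_category, n_genes, n_perm=10_000, seed=0):
    """Cutoff-independent category score with a gene-permutation null.

    Score = mean rank of the category's genes in the full ranked list
    (lower = more associated). The null redraws category members at
    random from the whole list.
    """
    rng = np.random.default_rng(seed)
    k = len(ranks_in_category)
    observed = np.mean(ranks_in_category)
    null = np.array([rng.choice(n_genes, size=k, replace=False).mean()
                     for _ in range(n_perm)])
    return (np.sum(null <= observed) + 1) / (n_perm + 1)

# Hypothetical: a 20-gene category concentrated near the top of a
# 5,000-gene ranked list.
print(category_pvalue(np.arange(10, 410, 20), n_genes=5000))
```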

  15. Analyses of hydraulic performance of velocity caps

    DEFF Research Database (Denmark)

    Christensen, Erik Damgaard; Degn Eskesen, Mark Chr.; Buhrkall, Jeppe;

    2014-01-01

    The hydraulic performance of a velocity cap has been investigated. Velocity caps are often used in connection with offshore intakes. CFD (computational fluid dynamics) examined the flow through the cap openings and further down into the intake pipes. This was combined with dimension analyses...

  16. FAME: Software for analysing rock microstructures

    Science.gov (United States)

    Hammes, Daniel M.; Peternell, Mark

    2016-05-01

    Determination of rock microstructures leads to a better understanding of the formation and deformation of polycrystalline solids. Here, we present FAME (Fabric Analyser based Microstructure Evaluation), an easy-to-use MATLAB®-based software for processing datasets recorded by an automated fabric analyser microscope. FAME is provided as a MATLAB®-independent Windows® executable with an intuitive graphical user interface. Raw data from the fabric analyser microscope can be automatically loaded, filtered and cropped before analysis. Accurate and efficient rock microstructure analysis is based on an advanced user-controlled grain labelling algorithm. The preview and testing environments simplify the determination of appropriate analysis parameters. Various statistic and plotting tools allow a graphical visualisation of the results such as grain size, shape, c-axis orientation and misorientation. The FAME2elle algorithm exports fabric analyser data to an elle (modelling software)-supported format. FAME supports batch processing for multiple thin section analysis or large datasets that are generated for example during 2D in-situ deformation experiments. The use and versatility of FAME is demonstrated on quartz and deuterium ice samples.

  17. Uncertainty quantification approaches for advanced reactor analyses.

    Energy Technology Data Exchange (ETDEWEB)

    Briggs, L. L.; Nuclear Engineering Division

    2009-03-24

    The original approach to nuclear reactor design or safety analyses was to make very conservative modeling assumptions so as to ensure meeting the required safety margins. Traditional regulation, as established by the U.S. Nuclear Regulatory Commission, required conservatisms which have subsequently been shown to be excessive. The Commission has therefore moved away from excessively conservative evaluations and has determined best-estimate calculations to be an acceptable alternative to conservative models, provided the best-estimate results are accompanied by an uncertainty evaluation which can demonstrate that, when a set of analysis cases which statistically account for uncertainties of all types are generated, there is a 95% probability that at least 95% of the cases meet the safety margins. To date, nearly all published work addressing uncertainty evaluations of nuclear power plant calculations has focused on light water reactors and on large-break loss-of-coolant accident (LBLOCA) analyses. However, there is nothing in the uncertainty evaluation methodologies that is limited to a specific type of reactor or to specific types of plant scenarios. These same methodologies can be equally well applied to analyses for high-temperature gas-cooled reactors and to liquid metal reactors, and they can be applied to steady-state calculations, operational transients, or severe accident scenarios. This report reviews and compares both statistical and deterministic uncertainty evaluation approaches. Recommendations are given for selection of an uncertainty methodology and for considerations to be factored into the process of evaluating uncertainties for advanced reactor best-estimate analyses.

  18. Hybrid Logical Analyses of the Ambient Calculus

    DEFF Research Database (Denmark)

    Bolander, Thomas; Hansen, Rene Rydhof

    2010-01-01

    In this paper, hybrid logic is used to formulate three control flow analyses for Mobile Ambients, a process calculus designed for modelling mobility. We show that hybrid logic is very well-suited to express the semantic structure of the ambient calculus and how features of hybrid logic can be...

  19. Meta-analyses on viral hepatitis

    DEFF Research Database (Denmark)

    Gluud, Lise L; Gluud, Christian

    2009-01-01

    This article summarizes the meta-analyses of interventions for viral hepatitis A, B, and C. Some of the interventions assessed are described in small trials with unclear bias control. Other interventions are supported by large, high-quality trials. Although attempts have been made to adjust...

  20. Modelling and Analysing Socio-Technical Systems

    DEFF Research Database (Denmark)

    Aslanyan, Zaruhi; Ivanova, Marieta Georgieva; Nielson, Flemming;

    2015-01-01

    and assessing attacks. In our work we model all relevant levels of socio-technical systems, and propose evaluation techniques for analysing the security properties of the model. Our approach simplifies the identification of possible attacks and provides qualified assessment and ranking of attacks based...

  1. Masonry: Task Analyses. Competency-Based Education.

    Science.gov (United States)

    Henrico County Public Schools, Glen Allen, VA. Virginia Vocational Curriculum Center.

    These task analyses are designed to be used in combination with the "Trade and Industrial Education Service Area Resource" in order to implement competency-based education in the masonry program in Virginia. The task analysis document contains the task inventory, suggested task sequence lists, and content outlines for the secondary courses Masonry…

  2. The Economic Cost of Homosexuality: Multilevel Analyses

    Science.gov (United States)

    Baumle, Amanda K.; Poston, Dudley, Jr.

    2011-01-01

    This article builds on earlier studies that have examined "the economic cost of homosexuality," by using data from the 2000 U.S. Census and by employing multilevel analyses. Our findings indicate that partnered gay men experience a 12.5 percent earnings penalty compared to married heterosexual men, and a statistically insignificant earnings…

  3. Micromechanical photothermal analyser of microfluidic samples

    DEFF Research Database (Denmark)

    2014-01-01

    The present invention relates to a micromechanical photothermal analyser of microfluidic samples comprising an oblong micro-channel extending longitudinally from a support element, the micro-channel is made from at least two materials with different thermal expansion coefficients, wherein the...

  4. Random error in cardiovascular meta-analyses

    DEFF Research Database (Denmark)

    Albalawi, Zaina; McAlister, Finlay A; Thorlund, Kristian; Wong, Michelle; Wetterslev, Jørn

    2013-01-01

    and/or their cumulative Z-curve crossed the O'Brien-Fleming monitoring boundaries for detecting a RRR of at least 25%. We classified meta-analyses that did not achieve statistical significance as true negatives if their pooled sample size was sufficient to reject a RRR of 25%. RESULTS: Twenty three...

  5. Chemical Analyses of Silicon Aerogel Samples

    CERN Document Server

    van der Werf, I; De Leo, R; Marrone, S

    2008-01-01

    After five years of operation, two aerogel counters, A1 and A2, taking data in Hall A at Jefferson Lab, suffered a loss of performance. In this note, possible causes of the degradation are studied. In particular, various chemical and physical analyses have been carried out on several aerogel tiles and on adhesive tape in order to reveal the presence of contaminants.

  6. Cosmetology: Task Analyses. Competency-Based Education.

    Science.gov (United States)

    Henrico County Public Schools, Glen Allen, VA. Virginia Vocational Curriculum Center.

    These task analyses are designed to be used in combination with the "Trade and Industrial Education Service Area Resource" in order to implement competency-based education in the cosmetology program in Virginia. The task analysis document contains the task inventory, suggested task sequence lists, and content outlines for the secondary courses…

  7. How to Establish Clinical Prediction Models.

    Science.gov (United States)

    Lee, Yong Ho; Bang, Heejung; Kim, Dae Jung

    2016-03-01

    A clinical prediction model can be applied to several challenging clinical scenarios: screening high-risk individuals for asymptomatic disease, predicting future events such as disease or death, and assisting medical decision-making and health education. Despite the impact of clinical prediction models on practice, prediction modeling is a complex process requiring careful statistical analyses and sound clinical judgement. Although there is no definite consensus on the best methodology for model development and validation, a few recommendations and checklists have been proposed. In this review, we summarize five steps for developing and validating a clinical prediction model: preparation for establishing clinical prediction models; dataset selection; handling variables; model generation; and model evaluation and validation. We also review several studies that detail methods for developing clinical prediction models with comparable examples from real practice. After model development and vigorous validation in relevant settings, possibly with evaluation of utility/usability and fine-tuning, good models can be ready for use in practice. We anticipate that this framework will revitalize the use of predictive or prognostic research in endocrinology, leading to active applications in real clinical practice. PMID:26996421
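
    As a bare-bones illustration of steps 4 and 5 (model generation, then evaluation on held-out data), the sketch below fits a logistic model to a simulated cohort and reports validation discrimination; it deliberately omits the preparation, dataset-selection and variable-handling steps, and the data are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Simulated cohort: two risk factors and a binary outcome whose true
# risk follows a logistic model.
rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 2))
logit = -1.0 + 1.2 * X[:, 0] + 0.8 * X[:, 1]
y = (rng.random(1000) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# Model generation on a development set, then evaluation of
# discrimination (AUC) on a held-out validation set.
X_dev, X_val, y_dev, y_val = train_test_split(
    X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_dev, y_dev)
auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
print(f"validation AUC = {auc:.2f}")
```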

  8. Testing earthquake predictions

    Science.gov (United States)

    Luen, Brad; Stark, Philip B.

    2008-01-01

    Statistical tests of earthquake predictions require a null hypothesis to model occasional chance successes. To define and quantify 'chance success' is knotty. Some null hypotheses ascribe chance to the Earth: Seismicity is modeled as random. The null distribution of the number of successful predictions - or any other test statistic - is taken to be its distribution when the fixed set of predictions is applied to random seismicity. Such tests tacitly assume that the predictions do not depend on the observed seismicity. Conditioning on the predictions in this way sets a low hurdle for statistical significance. Consider this scheme: When an earthquake of magnitude 5.5 or greater occurs anywhere in the world, predict that an earthquake at least as large will occur within 21 days and within an epicentral distance of 50 km. We apply this rule to the Harvard centroid-moment-tensor (CMT) catalog for 2000-2004 to generate a set of predictions. The null hypothesis is that earthquake times are exchangeable conditional on their magnitudes and locations and on the predictions - a common "nonparametric" assumption in the literature. We generate random seismicity by permuting the times of events in the CMT catalog. We consider an event successfully predicted only if (i) it is predicted and (ii) there is no larger event within 50 km in the previous 21 days. The P-value for the observed success rate is <0.001: The method successfully predicts about 5% of earthquakes, far better than 'chance' because the predictor exploits the clustering of earthquakes - occasional foreshocks - which the null hypothesis lacks. Rather than condition on the predictions and use a stochastic model for seismicity, it is preferable to treat the observed seismicity as fixed, and to compare the success rate of the predictions to the success rate of simple-minded predictions like those just described. If the proffered predictions do no better than a simple scheme, they have little value.
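
    The mechanics of such a test are easy to sketch: count the successes of the clustering rule on the observed catalog, then compare against counts obtained after permuting event times. The catalog below is synthetic and unclustered, so it demonstrates only the machinery, not the reported result; the success criterion is also simplified relative to the abstract's condition (ii).

```python
import numpy as np

def successes(times, x, y, mags, window=21.0, radius=50.0, m_min=5.5):
    """Count events 'predicted' by a simplified clustering rule:
    event j counts if some event i with mag >= m_min and
    mag_j >= mag_i occurred within `window` days before it and
    within `radius` km."""
    n, hits = len(times), 0
    for j in range(n):
        for i in range(n):
            dt = times[j] - times[i]
            if (i != j and 0 < dt <= window and mags[i] >= m_min
                    and mags[j] >= mags[i]
                    and np.hypot(x[j] - x[i], y[j] - y[i]) <= radius):
                hits += 1
                break
    return hits

# Synthetic catalog: 200 events over five years in a 2000 x 2000 km
# region. The null permutes event times while keeping magnitudes and
# locations fixed (the exchangeability null).
rng = np.random.default_rng(0)
n = 200
times = np.sort(rng.uniform(0.0, 1825.0, n))
x, y = rng.uniform(0, 2000, n), rng.uniform(0, 2000, n)
mags = 5.5 + rng.exponential(0.5, n)
obs = successes(times, x, y, mags)
null = [successes(rng.permutation(times), x, y, mags) for _ in range(99)]
p = (1 + sum(s >= obs for s in null)) / 100
print(obs, p)  # with unclustered data, p should be unremarkable
```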

  9. Disentangling the relationship of the Australian marsupial orders using retrotransposon and evolutionary network analyses.

    Science.gov (United States)

    Gallus, Susanne; Janke, Axel; Kumar, Vikas; Nilsson, Maria A

    2015-04-01

    The ancestors of the Australian marsupials entered Australia around 60 (54-72) Ma from Antarctica, and radiated into the four living orders Peramelemorphia, Dasyuromorphia, Diprotodontia, and Notoryctemorphia. The relationship between the four Australian marsupial orders has been a long-standing question, because different phylogenetic studies have not been able to consistently reconstruct the same topology. Initial in silico analysis of the Tasmanian devil genome and experimental screening in the seven marsupial orders revealed 20 informative transposable element insertions for resolving the inter- and intraordinal relationships of Australian and South American orders. However, the retrotransposon insertions support three conflicting topologies regarding Peramelemorphia, Dasyuromorphia, and Notoryctemorphia, indicating that the split between the three orders may be best understood as a network. This finding is supported by a phylogenetic reanalysis of nuclear gene sequences, using a consensus network approach that allows depicting hidden phylogenetic conflict, otherwise lost when forcing the data into a bifurcating tree. The consensus network analysis agrees with the transposable element analysis in that all possible topologies regarding Peramelemorphia, Dasyuromorphia, and Notoryctemorphia in a rooted four-taxon topology are equally well supported. In addition, retrotransposon insertion data support the South American order Didelphimorphia being the sister-group to all other living marsupial orders. The four Australian orders originated within 3 Myr at the Cretaceous-Paleogene boundary. The rapid divergences left conflicting phylogenetic information in the genome, possibly generated by incomplete lineage sorting or introgressive hybridization, leaving the relationship among Australian marsupial orders unresolvable as a bifurcating process millions of years later. PMID:25786431

  10. Predicting Predictable about Natural Catastrophic Extremes

    Science.gov (United States)

    Kossobokov, Vladimir

    2015-04-01

    By definition, an extreme event is a rare one in a series of kindred phenomena. Usually (e.g. in geophysics), it implies investigating a small sample of case histories with the help of delicate statistical methods applied to data of varying quality, collected under various conditions. Many extreme events are clustered (far from independent) and follow fractal or some other "strange" distribution (far from uniform). Evidently, such an "unusual" situation complicates the search for and definition of reliable precursory behaviors to be used for forecast/prediction purposes. Making forecast/prediction claims reliable and quantitatively probabilistic, within the frame of the most popular objectivists' viewpoint on probability, requires a long series of "yes/no" forecast/prediction outcomes, which cannot be obtained without an extended rigorous test of the candidate method. The set of errors ("success/failure" scores and the space-time measure of alarms) and other information obtained in such a control test supplies us with the data necessary to judge the candidate's potential as a forecast/prediction tool and, eventually, to find its improvements. This is to be done first in comparison against random guessing, which yields confidence (measured in terms of statistical significance). Note that the application of forecast/prediction tools can be very different for different natural hazards, and for the costs and benefits that determine risks; it therefore requires different optimal strategies that minimize reliable estimates of realistic levels of accepted losses. In turn, case-specific costs and benefits may suggest a modification of the forecast/prediction tools for a more adequate "optimal" application. Fortunately, the situation is not hopeless due to the state-of-the-art understanding of the complexity and non-linear dynamics of the Earth as a Physical System and pattern recognition approaches applied to available geophysical evidences, specifically, when intending to predict

  11. Finite-element creep damage analyses of P91 pipes

    International Nuclear Information System (INIS)

    In this paper, uniaxial and notched bar creep test data are used to establish the material behaviour models for two P91 steels of differing strength. The two steels are denoted here as Bar 257 steel, tested at 650 deg. C and A-369 steel, tested at 625 deg. C. Single-state variable and three-state variable creep damage constitutive models were used in the investigation. Methods for determining the material properties in the two sets of equations are briefly described. Finite-element analyses are performed using these material properties for a P91 pipe, subjected to internal pressure and end loading. The failure lives of the pipe were obtained, and on this basis, a preliminary assessment of using the two different sets of constitutive equations for failure predictions of high-temperature components under creep damage conditions can be made

  12. Probabilistic fuel rod analyses using the TRANSURANUS code

    International Nuclear Information System (INIS)

    After more than 25 years of fuel rod modelling research, the basic concepts are well established and the limitations of the specific approaches are known. However, the widely used mechanistic approach leads in many cases to discrepancies between theoretical predictions and experimental evidence indicating that models are not exact and that some of the physical processes encountered are of stochastic nature. To better understand uncertainties and their consequences, the mechanistic approach must therefore be augmented by statistical analyses. In the present paper the basic probabilistic methods are briefly discussed. Two such probabilistic approaches are included in the fuel rod performance code TRANSURANUS: the Monte Carlo method and the Numerical Noise Analysis. These two techniques are compared and their capabilities are demonstrated. (author). 12 refs, 4 figs, 2 tabs
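
    A Monte Carlo analysis of this kind wraps any deterministic rod model in a sampling loop over uncertain inputs. The toy model and input distributions below are placeholders, not the TRANSURANUS physics, and serve only to show the structure of the method.

```python
import numpy as np

def centreline_temperature(power, gap_conductance, conductivity):
    """Toy stand-in for a deterministic fuel rod model (not the
    TRANSURANUS physics): a crude centreline temperature estimate."""
    return (600.0 + power / gap_conductance
            + power / (4.0 * np.pi * conductivity))

# Monte Carlo loop: sample uncertain inputs from assumed distributions
# and collect the resulting distribution of the model output.
rng = np.random.default_rng(0)
n = 10_000
power = rng.normal(20e3, 1e3, n)           # linear power, W/m
gap = rng.lognormal(np.log(5e3), 0.2, n)   # gap conductance (toy units)
cond = rng.normal(3.0, 0.3, n)             # fuel conductivity, W/(m K)

temps = centreline_temperature(power, gap, cond)
print(f"mean = {temps.mean():.0f} K, "
      f"95th percentile = {np.percentile(temps, 95):.0f} K")
```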

  13. Compilation of Sandia coal char combustion data and kinetic analyses

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, R.E.; Hurt, R.H.; Baxter, L.L.; Hardesty, D.R.

    1992-06-01

    An experimental project was undertaken to characterize the physical and chemical processes that govern the combustion of pulverized coal chars. The experimental endeavor establishes a database on the reactivities of coal chars as a function of coal type, particle size, particle temperature, gas temperature, and gas composition. The project also provides a better understanding of the mechanism of char oxidation, and yields quantitative information on the release rates of nitrogen- and sulfur-containing species during char combustion. An accurate predictive engineering model of the overall char combustion process under technologically relevant conditions is a primary product of this experimental effort. This document summarizes the experimental effort, the approach used to analyze the data, and individual compilations of data and kinetic analyses for each of the parent coals investigated.
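
    A typical kinetic analysis of such reactivity data is an Arrhenius fit: ln k is linear in 1/T, so least squares on (1/T, ln k) recovers the activation energy and pre-exponential factor. The rate values below are invented for illustration, not taken from the Sandia database.

```python
import numpy as np

# Hypothetical char reactivity measurements: rate coefficients at
# several particle temperatures.
T = np.array([1400.0, 1500.0, 1600.0, 1700.0, 1800.0])  # K
k = np.array([0.012, 0.031, 0.070, 0.145, 0.275])       # arbitrary units

# Arrhenius analysis: ln k = ln A - E/(R*T), linear in 1/T.
R = 8.314                                   # J/(mol K)
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
E = -slope * R                              # activation energy, J/mol
A = np.exp(intercept)                       # pre-exponential factor
print(f"E = {E / 1000:.0f} kJ/mol, A = {A:.3g}")
```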

  14. Loss of feed water analyses of advanced Heavy Water Reactor

    International Nuclear Information System (INIS)

    The proposed Advanced Heavy Water Reactor (AHWR) is a 750 MWt vertical pressure tube type boiling light water cooled and heavy water moderated reactor. A passive design feature of this reactor is that heat removal is achieved through natural circulation of the primary coolant at all power levels, with no primary coolant pumps. The case analysed in this paper is the loss of feedwater to the steam drum, which results in a decrease in heat removal from the core. This also causes an increase in reactor pressure. Further consequences depend upon various protective and engineered safeguard systems, such as the relief system, reactor trip, isolation condenser and advanced accumulator. The analysis has been done using the code RELAP5/MOD3.2. Various modeling aspects are discussed in this paper, and predictions are made for different parameters such as pressure, temperature, quality and flow in different parts of the Primary Heat Transport (PHT) system. (author)

  15. Spent fuel shipping costs for transportation logistics analyses

    International Nuclear Information System (INIS)

    Logistics analyses supplied to the nuclear waste management programs of the U.S. Department of Energy through the Transportation Technology Center (TTC) at Sandia National Laboratories are used to predict nuclear waste material logistics, transportation packaging demands, shipping and receiving rates and transportation-related costs for alternative strategies. This study is an in-depth analysis of the problems and contingencies associated with the costs of shipping irradiated reactor fuel. These costs are extremely variable, however, and have changed frequently (sometimes monthly) during the past few years due to changes in capital, fuel, and labor costs. All costs and charges reported in this study are based on January 1982 data using existing transport cask systems and should be used as relative indices only. Actual shipping costs would be negotiable for each origin-destination combination

  16. Predictable or not predictable? The MOV question

    International Nuclear Information System (INIS)

    Over the past 8 years, the nuclear industry has struggled to understand the dynamic phenomena experienced during motor-operated valve (MOV) operation under differing flow conditions. For some valves and designs, operational functionality has been found to be predictable; for others, unpredictable. Although much has been accomplished over this period of time, especially on modeling valve dynamics, the unpredictability of many valves and designs still exists. A few valve manufacturers are focusing on improving design and fabrication techniques to enhance product reliability and predictability. However, this approach does not address these issues for installed, unpredictable valves. This paper presents some of the more promising techniques that Wyle Laboratories has explored with potential for transforming unpredictable valves into predictable valves and for retrofitting installed MOVs. These techniques include optimized valve tolerancing, surrogate material evaluation, and enhanced surface treatments.

  17. STACE: Source Term Analyses for Containment Evaluations of transport casks

    International Nuclear Information System (INIS)

    The development of the Source Term Analyses for Containment Evaluations (STACE) methodology provides a unique means for estimating the probability of cladding breach within transport casks, quantifying the amount of radioactive material released into the cask interior, and calculating the releasable radionuclide concentrations and corresponding maximum permissible leakage rates. Following the guidance of ANSI N14.5, the STACE methodology provides a technically defensible means for estimating maximum permissible leakage rates. These containment criteria attempt to reflect the true radiological hazard by performing a detailed examination of the spent fuel, CRUD, and residual contamination contributions to the releasable source term. The spent fuel contribution to the source term has been modeled fairly accurately using the STACE methodology. The structural model predicts the cask drop load history, the mechanical response of the fuel assembly, and the probability of cladding breach. These data are then used to predict the amount of fission gas, volatile species, and fuel fines that are releasable from the cask. In some areas where data are sparse or lacking, experimental validation is planned. Finally, the ANSI N14.5 recommendation that 3% and 100% of the fuel rods fail during normal and hypothetical accident conditions of transport, respectively, has been shown to be overly conservative by several orders of magnitude for these example analyses. Furthermore, the maximum permissible leakage rates for this example assembly under normal and hypothetical accident conditions are significantly higher than the leaktight requirements. By relaxing the maximum permissible leakage rates, the source term methodology is expected to significantly improve cask economics and safety.

  18. Does the Repressor Coping Style Predict Lower Posttraumatic Stress Symptoms?

    OpenAIRE

    McNally, Richard J.; Hatch, John P.; Cedillos, Elizabeth M.; Luethcke, Cynthia A.; Baker, Monty T.; Peterson, Alan L.; Litz, Brett T.

    2011-01-01

    We tested whether a continuous measure of repressor coping style predicted lower posttraumatic stress disorder (PTSD) symptoms in 122 health care professionals serving in Operation Iraqi Freedom. Zero-order correlational analyses indicated that predeployment repressor coping scores negatively predicted postdeployment PTSD symptoms, r_s = -0.29, p = 0.001, whereas predeployment Connor-Davidson Resilience Scale (CD-RISC) scores did not predict postdeployment PTSD symptoms, r_s = -0.13, p ...

  19. Albedo Pattern Recognition and Time-Series Analyses in Malaysia

    Science.gov (United States)

    Salleh, S. A.; Abd Latif, Z.; Mohd, W. M. N. Wan; Chan, A.

    2012-07-01

    Pattern recognition and time-series analyses enable one to evaluate and generate predictions of specific phenomena. Albedo pattern and time-series analyses are particularly useful in relation to climate condition monitoring. This study was conducted to identify albedo pattern changes over Malaysia. The patterns and changes identified will be useful for a variety of environmental and climate monitoring studies, such as carbon budgeting and aerosol mapping. Ten years (2000-2009) of MODIS satellite images were used for the analyses and interpretation. The images were processed using ERDAS Imagine remote sensing software, ArcGIS 9.3, the 6S code for atmospheric calibration and several MODIS tools (MRT, HDF2GIS, albedo tools). Several methods for time-series analysis were explored; this paper demonstrates trend and seasonal time-series analyses using the converted HDF-format MODIS MCD43A3 albedo land product. The results revealed significant changes in albedo percentages over the past 10 years, and in the pattern with regard to Malaysia's nebulosity index (NI) and aerosol optical depth (AOD). A noticeable trend can be identified with regard to the maximum and minimum values of the albedo. The rises and falls of the line graph show a trend similar to the daily observations, differing in the value or percentage of the rises and falls of albedo. It can thus be concluded that the temporal behaviour of land surface albedo in Malaysia is uniform with regard to the local monsoons. However, although the average albedo shows a linear trend with the nebulosity index, the pattern of albedo changes with respect to the nebulosity index indicates that external factors also influence the albedo values, as the plotted sky conditions and diffusion do not have a uniform trend over the years, especially when the trend at 5-year intervals is examined; 2000 shows a high negative linear

  20. QUANTUM MECHANICAL CONFORMATION ANALYSES OF CELLOBIOSE

    Science.gov (United States)

    Rotations about the bonds to the glycosidic oxygen atom are the primary determinants of the shape properties of cellobiose and cellulose. Their preferred values can be predicted by consulting the classical Ramachandran map, or φ, ψ energy surface. Early work was followed by Simon, Scheraga and Manl...

  1. Photovoltaic System Modeling. Uncertainty and Sensitivity Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Clifford W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Martin, Curtis E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-08-01

    We report an uncertainty and sensitivity analysis for modeling AC energy from photovoltaic systems. Output from a PV system is predicted by a sequence of models. We quantify uncertainty in the output of each model using empirical distributions of each model's residuals. We propagate uncertainty through the sequence of models by sampling these distributions to obtain an empirical distribution of a PV system's output. We consider models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance; (3) predict cell temperature; (4) estimate DC voltage, current and power; (5) reduce DC power for losses due to inefficient maximum power point tracking or mismatch among modules; and (6) convert DC to AC power. Our analysis considers a notional PV system comprising an array of FirstSolar FS-387 modules and a 250 kW AC inverter; we use measured irradiance and weather at Albuquerque, NM. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. We found that the uncertainties in the models for POA irradiance and effective irradiance are the dominant contributors to uncertainty in predicted daily energy. Our analysis indicates that efforts to reduce the uncertainty in PV system output predictions may yield the greatest improvements by focusing on the POA and effective irradiance models.
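
    The propagation scheme described, sampling empirical residual distributions through a sequence of models, can be sketched generically; the two-stage chain and residual pools below are hypothetical stand-ins for the paper's six models.

```python
import numpy as np

def propagate(inputs, models, residual_pools, n=5000, seed=0):
    """Propagate model uncertainty by resampling empirical residuals.

    models: ordered list of deterministic functions forming the chain;
    residual_pools: for each model, an array of its observed residuals.
    Each Monte Carlo pass adds a randomly drawn residual to each
    model's output, yielding an empirical distribution of the final
    system output.
    """
    rng = np.random.default_rng(seed)
    out = np.empty(n)
    for i in range(n):
        x = inputs
        for model, pool in zip(models, residual_pools):
            x = model(x) + rng.choice(pool)
        out[i] = x
    return out

# Hypothetical two-stage chain with made-up residual pools.
poa = lambda ghi: 1.1 * ghi      # plane-of-array irradiance, W/m^2
power = lambda g: 0.18 * g       # DC power per unit area
rng = np.random.default_rng(1)
pools = [rng.normal(0, 20, 300), rng.normal(0, 5, 300)]
dist = propagate(800.0, [poa, power], pools)
print(f"median = {np.median(dist):.0f}, "
      f"5-95% = {np.percentile(dist, [5, 95]).round(0)}")
```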

  2. EVA Performance Prediction

    Science.gov (United States)

    Peacock, Brian; Maida, James; Rajulu, Sudhakar

    2004-01-01

    Astronaut physical performance capabilities in microgravity EVA or on planetary surfaces, when encumbered by a life support suit and debilitated by long exposure to microgravity, will be less than unencumbered preflight capabilities. The big question addressed by human factors engineers is: what can the astronaut be expected to do on EVA or when we arrive at a planetary surface? A second question is: what aids to performance will be needed to enhance human physical capability? These questions are important for a number of reasons. First, it is necessary to carry out accurate planning of human physical demands to ensure that time- and energy-critical tasks can be carried out with confidence. Second, it is important that the crew members (and their ground or planetary base monitors) have a realistic picture of their own capabilities, as excessive fatigue can lead to catastrophic failure. Third, it is important to design appropriate equipment to enhance human sensory capabilities, locomotion, materials handling and manipulation. The evidence from physiological research points to musculoskeletal, cardiovascular and neurovestibular degradation during long-duration exposure to microgravity. The evidence from the biomechanics laboratory (and the Neutral Buoyancy Laboratory) points to a reduction in range of motion, strength and stamina when encumbered by a pressurized suit. The evidence from a long history of EVAs is that crewmembers are indeed restricted in their physical capabilities. There is a wealth of evidence in the literature on the causes and effects of degraded human performance in the laboratory, in sports and athletics, in industry and in other physically demanding jobs. One approach to this challenge is through biomechanical and performance modeling. Such models must be based on thorough task analysis, reliable human performance data from controlled studies, and functional extrapolations validated in analog contexts. The task analyses currently carried

  3. Reliability of chemical analyses of water samples

    Energy Technology Data Exchange (ETDEWEB)

    Beardon, R.

    1989-11-01

    Ground-water quality investigations require reliable chemical analyses of water samples. Unfortunately, laboratory analytical results are often unreliable. The Uranium Mill Tailings Remedial Action (UMTRA) Project's solution to this problem was to establish a two-phase quality assurance program for the analysis of water samples. In the first phase, eight laboratories analyzed three solutions of known composition. The analytical accuracy of each laboratory was ranked and three laboratories were awarded contracts. The second phase consists of on-going monitoring of the reliability of the selected laboratories. The following conclusions are based on two years' experience with the UMTRA Project's Quality Assurance Program. The reliability of laboratory analyses should not be taken for granted. Analytical reliability may be independent of the prices charged by laboratories. Quality assurance programs benefit both the customer and the laboratory.

  4. Pratique de l'analyse fonctionelle

    CERN Document Server

    Tassinari, Robert

    1997-01-01

    Developing a product or service that is perfectly suited to the customer's needs and requirements is essential for a company. To leave nothing to chance, a rigorous methodology must be followed: that of functional analysis. This book defines the method precisely, together with its fields of application. It describes the most effective approaches to product design and the pursuit of quality, and introduces the notion of internal functional analysis. A key work for optimizing product design processes within a company. -- Key ideas, by Business Digest

  5. Causal Mediation Analyses for Randomized Trials.

    Science.gov (United States)

    Lynch, Kevin G; Cary, Mark; Gallop, Robert; Ten Have, Thomas R

    2008-01-01

    In the context of randomized intervention trials, we describe causal methods for analyzing how post-randomization factors constitute the process through which randomized baseline interventions act on outcomes. Traditionally, such mediation analyses have been undertaken with great caution, because they assume that the mediating factor is also randomly assigned to individuals in addition to the randomized baseline intervention (i.e., sequential ignorability). Because the mediating factors are typically not randomized, such analyses are unprotected from unmeasured confounders that may lead to biased inference. We review several causal approaches that attempt to reduce such bias without assuming that the mediating factor is randomized. However, these causal approaches require certain interaction assumptions that may be assessed if there is enough treatment heterogeneity with respect to the mediator. We describe available estimation procedures in the context of several examples from the literature and provide resources for software code. PMID:19484136
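
    As a point of reference for the methods reviewed here, the sketch below shows the conventional product-of-coefficients mediation estimate, which rests on exactly the sequential-ignorability assumption the record cautions about. The data and coefficients are simulated for illustration only.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 500
        z = rng.integers(0, 2, n).astype(float)      # randomized baseline intervention
        m = 0.8 * z + rng.normal(size=n)             # post-randomization mediator
        y = 0.5 * m + 0.3 * z + rng.normal(size=n)   # outcome

        def ols(X, y):
            """Least-squares coefficients with an intercept prepended."""
            X = np.column_stack([np.ones(len(y)), X])
            return np.linalg.lstsq(X, y, rcond=None)[0]

        a = ols(z, m)[1]                         # intervention -> mediator
        b = ols(np.column_stack([m, z]), y)[1]   # mediator -> outcome, adjusting for z
        print("estimated indirect (mediated) effect a*b:", a * b)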

  6. DCH analyses using the CONTAIN code

    International Nuclear Information System (INIS)

    This report describes CONTAIN analyses performed during participation in the project 'DCH issue resolution for ice condenser plants', which is sponsored by NRC at SNL. Even though the calculations were performed for the ice condenser plant, the CONTAIN code has been used for analyses of many phenomena in the PWR containment, and the DCH module can be applied to any plant type. The present ice condenser issue resolution effort was intended to provide guidance as to what might be needed to resolve DCH for ice condenser plants. It includes both a screening analysis and a scoping study if the screening analysis cannot provide a complete resolution. The following are the results concerning DCH loads, in descending order: 1. availability of ignition sources prior to vessel breach; 2. availability and effectiveness of ice in the ice condenser; 3. loads modeling uncertainties related to co-ejected RPV water; 4. other loads modeling uncertainties. 10 tabs., 3 figs., 14 refs. (Author)

  7. Standardized analyses of nuclear shipping containers

    International Nuclear Information System (INIS)

    This paper describes improved capabilities for analyses of nuclear fuel shipping containers within SCALE -- a modular code system for Standardized Computer Analyses for Licensing Evaluation. Criticality analysis improvements include the new KENO V, a code which contains an enhanced geometry package, and a new control module which uses KENO V and allows a criticality search on optimum pitch (maximum k-effective) to be performed. The SAS2 sequence is a new shielding analysis module which couples fuel burnup, source term generation, and radial cask shielding. The SAS5 shielding sequence allows a multidimensional Monte Carlo analysis of a shipping cask with code-generated biasing of the particle histories. The thermal analysis sequence (HTAS1) provides an easy-to-use tool for evaluating a shipping cask's response to accident conditions. These capabilities of the SCALE system provide the cask designer or evaluator with a computational system of automated procedures and easy-to-understand input that leads to standardization.

  8. Methodology development for statistical evaluation of reactor safety analyses

    International Nuclear Information System (INIS)

    In February 1975, Westinghouse Electric Corporation, under contract to Electric Power Research Institute, started a one-year program to develop methodology for statistical evaluation of nuclear-safety-related engineering analyses. The objectives of the program were to develop an understanding of the relative efficiencies of various computational methods which can be used to compute probability distributions of output variables due to input parameter uncertainties in analyses of design basis events for nuclear reactors and to develop methods for obtaining reasonably accurate estimates of these probability distributions at an economically feasible level. A series of tasks was set up to accomplish these objectives. Two of the tasks were to investigate the relative efficiencies and accuracies of various Monte Carlo and analytical techniques for obtaining such estimates for a simple thermal-hydraulic problem whose output variable of interest is given in a closed-form relationship of the input variables and to repeat the above study on a thermal-hydraulic problem in which the relationship between the predicted variable and the inputs is described by a short-running computer program. The purpose of the report presented is to document the results of the investigations completed under these tasks, giving the rationale for choices of techniques and problems, and to present interim conclusions

  9. Analyses of the OSU-MASLWR Experimental Test Facility

    International Nuclear Information System (INIS)

    Today, considering the sustainability of nuclear technology in the energy mix policy of developing and developed countries, the international community has started the development of new advanced reactor designs. In this framework, Oregon State University (OSU) has constructed a system-level test facility to examine natural circulation phenomena of importance to the multi-application small light water reactor (MASLWR) design, a small modular pressurized water reactor (PWR) relying on natural circulation during both steady-state and transient operation. The aim of this paper is to review the main characteristics of the experimental facility, to analyse the main phenomena characterizing the tests already performed and the potential transients that could be investigated in the facility, and to describe the current IAEA International Collaborative Standard Problem being hosted at OSU, for which experimental data will be collected at the OSU-MASLWR test facility. A summary of the best-estimate thermal-hydraulic system code analyses already performed, to assess the codes' capability in predicting the phenomena typical of the MASLWR prototype as thermal-hydraulically characterized in the OSU-MASLWR facility, is presented as well.

  10. Sequencing and comparative analyses of the genomes of zoysiagrasses.

    Science.gov (United States)

    Tanaka, Hidenori; Hirakawa, Hideki; Kosugi, Shunichi; Nakayama, Shinobu; Ono, Akiko; Watanabe, Akiko; Hashiguchi, Masatsugu; Gondo, Takahiro; Ishigaki, Genki; Muguerza, Melody; Shimizu, Katsuya; Sawamura, Noriko; Inoue, Takayasu; Shigeki, Yuichi; Ohno, Naoki; Tabata, Satoshi; Akashi, Ryo; Sato, Shusei

    2016-04-01

    Zoysia is a warm-season turfgrass, which comprises 11 allotetraploid species (2n = 4x = 40), each possessing different morphological and physiological traits. To characterize the genetic systems of Zoysia plants and to analyse their structural and functional differences in individual species and accessions, we sequenced the genomes of Zoysia species using HiSeq and MiSeq platforms. As a reference sequence of Zoysia species, we generated a high-quality draft sequence of the genome of Z. japonica accession 'Nagirizaki' (334 Mb) in which 59,271 protein-coding genes were predicted. In parallel, draft genome sequences of Z. matrella 'Wakaba' and Z. pacifica 'Zanpa' were also generated for comparative analyses. To investigate the genetic diversity among the Zoysia species, genome sequence reads of three additional accessions, Z. japonica 'Kyoto', Z. japonica 'Miyagi' and Z. matrella 'Chiba Fair Green', were accumulated, and aligned against the reference genome of 'Nagirizaki' along with those from 'Wakaba' and 'Zanpa'. As a result, we detected 7,424,163 single-nucleotide polymorphisms and 852,488 short indels among these species. The information obtained in this study will be valuable for basic studies on zoysiagrass evolution and genetics as well as for the breeding of zoysiagrasses, and is made available in the 'Zoysia Genome Database' at http://zoysia.kazusa.or.jp. PMID:26975196

  11. Mass spectrometer for the analyses of gases

    International Nuclear Information System (INIS)

    A 6-in.-radius, 60° magnetic-sector mass spectrometer (designated as the MS-200) has been constructed for the quantitative and qualitative analyses of fixed gases and volatile organics in the concentration range from 1 ppm (by volume) to 100%. A partial pressure of 1 × 10⁻⁶ torr in the inlet expansion volume is required to achieve a useful signal at an electron-multiplier gain of 10,000

  14. Pathway Analyses Implicate Glial Cells in Schizophrenia

    OpenAIRE

    Duncan, Laramie E.; Holmans, Peter A.; Lee, Phil H.; O'Dushlaine, Colm T; Kirby, Andrew W.; Smoller, Jordan W.; Öngür, Dost; Cohen, Bruce M.

    2014-01-01

    Background: The quest to understand the neurobiology of schizophrenia and bipolar disorder is ongoing, with multiple lines of evidence indicating abnormalities of glia, mitochondria, and glutamate in both disorders. Despite high heritability estimates of 81% for schizophrenia and 75% for bipolar disorder, compelling links between findings from neurobiological studies and findings from large-scale genetic analyses are only beginning to emerge. Method: Ten publicly available gene sets (pathwa...

  15. Mikromechanische Analyse der Wirkungsmechanismen elektrischer Dehnungsmessstreifen

    OpenAIRE

    Stockmann, Martin

    2000-01-01

    Electrical strain measurement based on discrete strain gauges (DMS) is today one of the most important methods of experimental stress analysis. Precise measurements outside the calibration conditions, in particular at large deformations or with high transverse-strain components, require nonlinear relationships between the component strains to be determined and the resistance change of the measuring grid to be taken into account. ...

  16. A database system for RCM analyses

    International Nuclear Information System (INIS)

    A proposal for a database system to record and document Reliability Centered Maintenance (RCM) analyses is presented. The database is conceived so as to enable its application to large industrial units, which can be granulated into specific parts (systems, nodes) to be analyzed in detail by the RCM methodology at the level of components of the systems (nodes). A proposal for an algorithm to be used for the selection of suitable components for optimization of preventive maintenance is also included. (author)

  17. El Cours d’Analyse de Cauchy

    OpenAIRE

    Pérez, Javier; Aizpuru, Antonio

    1999-01-01

    In this article we present a contextualized study of Cauchy's Cours d'Analyse, analysing its significance and importance. We pay special attention to the degree of theoretical elaboration of limits, continuity, series, real numbers, functions and complete series, relating Cauchy's contributions to the conceptual level that preceded this work.

  18. Conditions and applicational preferences of reliability analyses

    International Nuclear Information System (INIS)

    This VDI guideline addresses the tasks of reliability analyses within a given project and their integration into the systems engineering process. It presents principles and rules for the application of analytical methods to reliability problems, indicates in general terms the mathematical reliability models that are preferably to be applied to specific problems together with the necessary relevant information, and explains the limits of applicability of the various analytical methods. (orig./HP)

  19. Delvis drenert analyse av innvendig avstivet utgraving

    OpenAIRE

    Myhrvold, Michael F

    2013-01-01

    This master's thesis deals with analyses of the partially drained effects that can arise in internally braced excavations. Its purpose is to carry out a numerical study of the process that governs the time-dependent development of braced excavations in low-permeability soils. This makes it possible to assess the partially drained effects and the influence they have on this type of excavation. Since the soil's behaviour at small...

  20. Investigation into the methodology of safety analyses

    International Nuclear Information System (INIS)

    The common methods of systems analysis were investigated with respect to the question whether they are appropriate for the detection of potential sources of hazard in industrial plants, in particular in chemical plants, and whether this can be verified. The quantification of accidents and the risk assessment that can be derived therefrom are discussed. In order to allow quantitative safety analyses, the simulation model SYSP was developed. To support this, reliability data were compiled. (DG)

  1. Pegasys: software for executing and integrating analyses of biological sequences

    Directory of Open Access Journals (Sweden)

    Lett Drew

    2004-04-01

    Background: We present Pegasys, a flexible, modular and customizable software system that facilitates the execution of, and data integration from, heterogeneous biological sequence analysis tools. Results: The Pegasys system includes numerous tools for pair-wise and multiple sequence alignment, ab initio gene prediction, RNA gene detection, and masking repetitive sequences in genomic DNA, as well as filters for database formatting and processing raw output from various analysis tools. We introduce a novel data structure for creating workflows of sequence analyses and a unified data model to store its results. The software allows users to dynamically create analysis workflows at run-time by manipulating a graphical user interface. All non-serial dependent analyses are executed in parallel on a compute cluster for efficiency of data generation. The uniform data model and backend relational database management system of Pegasys allow results of heterogeneous programs included in the workflow to be integrated and exported into General Feature Format for further analyses in GFF-dependent tools, or GAME XML for import into the Apollo genome editor. The modularity of the design allows new tools to be added to the system with little programmer overhead. The database application programming interface allows programmatic access to the data stored in the backend through SQL queries. Conclusions: The Pegasys system enables biologists and bioinformaticians to create and manage sequence analysis workflows. The software is released under the Open Source GNU General Public License. All source code and documentation are available for download at http://bioinformatics.ubc.ca/pegasys/.
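
    The record's point about running all non-serially-dependent analyses in parallel can be illustrated with a toy dependency-aware scheduler. The task names below are hypothetical and the print call stands in for invoking an actual analysis tool; this is a sketch of the scheduling idea, not Pegasys code.

        from concurrent.futures import ThreadPoolExecutor

        # Hypothetical workflow: each task lists the tasks it depends on.
        tasks = {
            "mask_repeats": [],
            "gene_prediction": ["mask_repeats"],
            "rna_detection": ["mask_repeats"],
            "merge_gff": ["gene_prediction", "rna_detection"],
        }

        def run(name):
            print("running", name)   # stand-in for invoking an analysis tool
            return name

        done = set()
        with ThreadPoolExecutor() as pool:
            while len(done) < len(tasks):
                # every task whose dependencies are all satisfied runs in parallel
                ready = [t for t, deps in tasks.items()
                         if t not in done and all(d in done for d in deps)]
                done.update(pool.map(run, ready))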

  2. The Strepsiptera-Odyssey: the history of the systematic placement of an enigmatic parasitic insect order

    Directory of Open Access Journals (Sweden)

    H. Pohl

    2013-09-01

    The history of the phylogenetic placement of the parasitic insect order Strepsiptera is outlined. The first species was described in 1793 by P. Rossi and assigned to the hymenopteran family Ichneumonidae. A position close to the cucujiform beetle family Rhipiphoridae was suggested by several earlier authors. Others proposed a close relationship with Diptera or even a group Pupariata including Diptera, Strepsiptera and Coccoidea. A subordinate placement within the polyphagan series Cucujiformia close to the wood-associated Lymexylidae was favored by the coleopterist R.A. Crowson. W. Hennig considered a sistergroup relationship with Coleoptera as the most likely hypothesis but emphasized the uncertainty. Cladistic analyses of morphological data sets yielded very different placements, alternatively as sistergroup of Coleoptera, Antliophora, or all other holometabolan orders. Results based on ribosomal genes suggested a sistergroup relationship with Diptera (the Halteria concept). A clade Coleopterida (Strepsiptera and Coleoptera) was supported in two studies based on different combinations of protein-coding nuclear genes. Analyses of data sets comprising seven or nine single-copy nuclear genes yielded either a subordinate placement within Coleoptera or a sistergroup relationship with Neuropterida. Several early hypotheses based on a typological approach (affinities with Diptera, Coleoptera, a coleopteran subgroup, or Neuropterida) were revived using either a Hennigian approach or formal analyses of morphological characters or different molecular data sets. A phylogenomic approach finally supported a sistergroup relationship with monophyletic Coleoptera.

  3. Application of RUNTA code in flood analyses

    International Nuclear Information System (INIS)

    Flood probability analyses carried out to date indicate the need to evaluate a large number of flood scenarios. This necessity is due to a variety of reasons, the most important of which include: a large number of potential flood sources; a wide variety of characteristics of flood sources; a large possibility of flood-affected areas becoming interlinked, depending on the location of the potential flood sources; diversity of flood flows from one flood source, depending on the size of the rupture and mode of operation; the applicable isolation times; uncertainties in respect of the structural resistance of doors, penetration seals and floors; and the applicable degrees of obstruction of the floor drainage system. Consequently, a tool which carries out the large number of calculations usually required in flood analyses, with speed and flexibility, is considered necessary. The RUNTA code enables the range of possible scenarios to be calculated numerically, in accordance with all those parameters which, as a result of previous flood analyses, it is necessary to take into account in order to cover all the possible floods associated with each flood area

  4. Phylogenetic analyses of Andromedeae (Ericaceae subfam. Vaccinioideae).

    Science.gov (United States)

    Kron, K A; Judd, W S; Crayn, D M

    1999-09-01

    Phylogenetic relationships within the Andromedeae and closely related taxa were investigated by means of cladistic analyses based on phenotypic (morphology, anatomy, chromosome number, and secondary chemistry) and molecular (rbcL and matK nucleotide sequences) characters. An analysis based on combined molecular and phenotypic characters indicates that the tribe is composed of two major clades: the Gaultheria group (incl. Andromeda, Chamaedaphne, Diplycosia, Gaultheria, Leucothoë, Pernettya, Tepuia, and Zenobia) and the Lyonia group (incl. Agarista, Craibiodendron, Lyonia, and Pieris). Andromedeae are shown to be paraphyletic in all analyses because the Vaccinieae link with some or all of the genera of the Gaultheria group. Oxydendrum is sister to the clade containing the Vaccinieae, Gaultheria group, and Lyonia group. The monophyly of Agarista, Lyonia, Pieris, and Gaultheria (incl. Pernettya) is supported, while that of Leucothoë is problematic. The close relationship of Andromeda and Zenobia is novel and was strongly supported in the molecular (but not morphological) analyses. Diplycosia, Tepuia, Gaultheria, and Pernettya form a well-supported clade, which can be diagnosed by the presence of fleshy calyx lobes and methyl salicylate. Recognition of Andromedeae is not reflective of our understanding of genealogical relationships and should be abandoned; the Lyonia group is formally recognized at the tribal level. PMID:10487817

  5. Visualizing Risk Prediction Models

    OpenAIRE

    Vanya Van Belle; Ben Van Calster

    2015-01-01

    Objective: Risk prediction models can assist clinicians in making decisions. To boost the uptake of these models in clinical practice, it is important that end-users understand how the model works and can efficiently communicate its results. We introduce novel methods for interpretable model visualization. Methods: The proposed visualization techniques are applied to two prediction models from the Framingham Heart Study for the prediction of intermittent claudication and stroke after atrial fib...

  6. Pyroshock prediction procedures

    Science.gov (United States)

    Piersol, Allan G.

    2002-05-01

    Given sufficient effort, pyroshock loads can be predicted by direct analytical procedures using hydrocodes that analytically model the details of the pyrotechnic explosion and its interaction with adjacent structures, including nonlinear effects. However, it is more common to predict pyroshock environments using empirical procedures based upon extensive studies of past pyroshock data. Various empirical pyroshock prediction procedures are discussed, including those developed by the Jet Propulsion Laboratory, Lockheed-Martin, and Boeing.

  7. Predicting transformers oil parameters

    OpenAIRE

    Shaban, K.; El-Hag, A.; Matveev, A.

    2009-01-01

    In this paper different configurations of artificial neural networks are applied to predict various transformer oil parameters. The prediction is performed by modeling the relationship between the transformer insulation resistance extracted from the Megger test and the breakdown strength, interfacial tension, acidity and water content of the transformer oil. The process of predicting the status of these oil parameters is carried out using two different configurations of neural networks...
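
    A sketch of the kind of mapping described, using scikit-learn's MLPRegressor. The two-feature input (insulation resistance plus temperature) and the synthetic data are assumptions made for illustration, not the authors' network configuration.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(1)
        # Hypothetical training data: [insulation resistance (Gohm), oil temperature (C)]
        X = rng.uniform([0.5, 20.0], [10.0, 60.0], size=(200, 2))
        # Toy target: breakdown strength (kV) loosely tied to the inputs
        y = 30.0 + 2.5 * X[:, 0] - 0.1 * X[:, 1] + rng.normal(0.0, 1.0, 200)

        net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
        net.fit(X, y)
        print(net.predict([[5.0, 40.0]]))   # predicted breakdown strength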

  8. Is Suicide Predictable?

    OpenAIRE

    Asmaee, S; Mosavi, N; R Abdul Rashid; H Habi; Seghatoleslam, T; Naseri, A.

    2012-01-01

    Background: The current study aimed to test the hypothesis 'Is suicide predictable?' and to classify the predictive factors in multiple suicide attempts. Methods: A cross-sectional study was administered to 223 multiple attempters, women who came to a medical poison centre after a suicide attempt. The participants were young, poor, and single. A logistic regression analysis was used to classify the predictive factors of suicide. Results: Women who had multiple suicide attempts exhibited a...

  9. Prediction methods environmental-effect reporting

    International Nuclear Information System (INIS)

    This report provides a survey of prediction methods which can be applied to the calculation of emissions in nuclear-reactor accidents, in the framework of environmental-effect reports (Dutch m.e.r.) or risk analyses. Emissions during normal operation are also important for m.e.r.; these can be derived from measured emissions of power plants in operation, and data concerning the latter are reported. The report consists of: an introduction to reactor technology, including a description of some reactor types, the corresponding fuel cycle and dismantling scenarios; a discussion of risk analyses for nuclear power plants and the physical processes which can play a role during accidents; a discussion of the prediction methods to be employed and the expected developments in this area; and some background information. (author). 145 refs.; 21 figs.; 20 tabs

  10. Machine learning algorithms for datasets popularity prediction

    CERN Document Server

    Kancys, Kipras

    2016-01-01

    This report presents a continued study in which ML algorithms were used to predict dataset popularity. Three topics are covered. First, there was a discrepancy between the old and new meta-data collection procedures, so a reason for it had to be found. Second, different parameters were analysed and dropped to make the algorithms perform better. Third, it was decided to move the modelling part to Spark.

  11. Empirical Prediction Intervals for County Population Forecasts

    OpenAIRE

    Rayer, Stefan; Smith, Stanley K.; Tayman, Jeff

    2009-01-01

    Population forecasts entail a significant amount of uncertainty, especially for long-range horizons and for places with small or rapidly changing populations. This uncertainty can be dealt with by presenting a range of projections or by developing statistical prediction intervals. The latter can be based on models that incorporate the stochastic nature of the forecasting process, on empirical analyses of past forecast errors, or on a combination of the two. In this article, we develop and tes...
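
    One simple version of the empirical-error route mentioned above is to read an interval half-width off the percentiles of past forecast errors. The error sample and coverage level below are invented for illustration.

        import numpy as np

        # Hypothetical absolute percentage errors from comparable past forecasts
        past_ape = np.array([1.2, 3.5, 2.1, 8.0, 4.4, 0.9, 5.3, 2.7])

        forecast = 125_000                     # point forecast of county population
        half_width = np.percentile(past_ape, 90) / 100.0 * forecast
        print(f"~90% empirical interval: {forecast - half_width:,.0f}"
              f" to {forecast + half_width:,.0f}")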

  12. Stable isotopic analyses in paleoclimatic reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Wigand, P.E. [Univ. and Community College System of Nevada, Reno, NV (United States)

    1995-09-01

    Most traditional paleoclimatic proxy data have inherent time lags between climatic input and system response that constrain their use in accurate reconstruction of paleoclimate chronology, scaling of its variability, and the elucidation of the processes that determine its impact on the biotic and abiotic environment. With the exception of dendroclimatology, and studies of short-lived organisms and pollen recovered from annually varved lacustrine sediments, significant periods of time ranging from years, to centuries, to millennia may intervene between climate change and its first manifestation in paleoclimatic proxy data records. Reconstruction of past climate through changes in plant community composition derived from pollen sequences and plant remains from ancient woodrat middens, wet environments and dry caves all suffer from these lags. However, stable isotopic analyses can provide a more immediate indication of biotic response to climate change. Evidence of past physiological response of organisms to changes in effective precipitation as climate varies can be provided by analyses of the stable isotopic content of plant macrofossils from various contexts. These analyses consider variation in the stable isotopic (hydrogen, oxygen and carbon) content of plant tissues as it reflects (1) past global or local temperature through changes in meteoric (rainfall) water chemistry in the case of the first two isotopes, and (2) plant stress through changes in plant respiration/transpiration processes under differing water availability and varying atmospheric CO2 composition (which itself may actually be a net result of biotic response to climate change). Studies currently being conducted in the Intermountain West indicate both long- and short-term responses that, when calibrated with modern analogue studies, have the potential of revealing not only the timing of climate events, but their direction, magnitude and rapidity.

  13. Evaluation of Model Operational Analyses during DYNAMO

    Science.gov (United States)

    Ciesielski, Paul; Johnson, Richard

    2013-04-01

    A primary component of the observing system in the DYNAMO-CINDY2011-AMIE field campaign was an atmospheric sounding network comprising two sounding quadrilaterals, one north and one south of the equator over the central Indian Ocean. During the experiment a major effort was undertaken to ensure the real-time transmission of these data onto the GTS (Global Telecommunication System) for dissemination to the operational centers (ECMWF, NCEP, JMA, etc.). Preliminary estimates indicate that ~95% of the soundings from the enhanced sounding network were successfully transmitted and potentially used in their data assimilation systems. Because of the wide use of operational and reanalysis products (e.g., in process studies, initializing numerical simulations, construction of large-scale forcing datasets for CRMs, etc.), their validity will be examined by comparing a variety of basic and diagnosed fields from two operational analyses (ECMWF and NCEP) to similar analyses based solely on sounding observations. Particular attention will be given to the vertical structures of apparent heating (Q1) and drying (Q2) from the operational analyses (OA), which are strongly influenced by cumulus parameterizations, a source of model infidelity. Preliminary results indicate that the OA products did a reasonable job at capturing the mean and temporal characteristics of convection during the DYNAMO enhanced observing period, which included the passage of two significant MJO events during the October-November 2011 period. For example, temporal correlations between Q2-budget derived rainfall from the OA products and that estimated from the TRMM satellite (i.e., the 3B42V7 product) were greater than 0.9 over the Northern Sounding Array of DYNAMO. However, closer inspection of the budget profiles shows notable differences between the OA products and the sounding-derived results in low-level (surface to 700 hPa) heating and drying structures. This presentation will examine these differences and...
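
    For reference, the apparent heat source Q1 and apparent moisture sink Q2 mentioned above are conventionally defined as budget residuals of the large-scale temperature and moisture equations; the standard definitions following Yanai and co-workers are supplied here, not quoted from the record:

        Q_1 = c_p\!\left(\frac{\partial \bar{T}}{\partial t}
              + \bar{\mathbf{V}}\cdot\nabla\bar{T}
              + \Bigl(\frac{p}{p_0}\Bigr)^{\kappa}\bar{\omega}
                \frac{\partial\bar{\theta}}{\partial p}\right),
        \qquad
        Q_2 = -L\!\left(\frac{\partial \bar{q}}{\partial t}
              + \bar{\mathbf{V}}\cdot\nabla\bar{q}
              + \bar{\omega}\frac{\partial\bar{q}}{\partial p}\right)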

  14. IDEA: Interactive Display for Evolutionary Analyses

    Directory of Open Access Journals (Sweden)

    Carlton Jane M

    2008-12-01

    Background: The availability of complete genomic sequences for hundreds of organisms promises to make obtaining genome-wide estimates of substitution rates, selective constraints and other molecular evolution variables of interest an increasingly important approach to addressing broad evolutionary questions. Two of the programs most widely used for this purpose are codeml and baseml, parts of the PAML (Phylogenetic Analysis by Maximum Likelihood) suite. A significant drawback of these programs is their lack of a graphical user interface, which can limit their user base and considerably reduce their efficiency. Results: We have developed IDEA (Interactive Display for Evolutionary Analyses), an intuitive graphical input and output interface which interacts with PHYLIP for phylogeny reconstruction and with codeml and baseml for molecular evolution analyses. IDEA's graphical input and visualization interfaces eliminate the need to edit and parse text input and output files, reducing the likelihood of errors and improving processing time. Further, its interactive output display gives the user immediate access to results. Finally, IDEA can process data in parallel on a local machine or computing grid, allowing genome-wide analyses to be completed quickly. Conclusion: IDEA provides a graphical user interface that allows the user to follow a codeml or baseml analysis from parameter input through to the exploration of results. Novel options streamline the analysis process, and post-analysis visualization of phylogenies, evolutionary rates and selective constraint along protein sequences simplifies the interpretation of results. The integration of these functions into a single tool eliminates the need for lengthy data handling and parsing, significantly expediting access to global patterns in the data.

  15. Cost/benefit analyses of environmental impact

    International Nuclear Information System (INIS)

    Various aspects of cost-benefit analyses are considered. Some topics discussed are: regulations of the National Environmental Policy Act (NEPA); statement of AEC policy and procedures for implementation of NEPA; Calvert Cliffs decision; AEC Regulatory Guide; application of risk-benefit analysis to nuclear power; application of the as low as practicable (ALAP) rule to radiation discharges; thermal discharge restrictions proposed by EPA under the 1972 Amendment to the Water Pollution Control Act; estimates of somatic and genetic insult per unit population exposure; occupational exposure; EPA Point Source Guidelines for Discharges from Steam Electric Power Plants; and costs of closed-cycle cooling using cooling towers. (U.S.)

  16. FEM-ANALYSE AV INDUSTRIELL ALUMINIUMSPROFILEKSTRUDERING

    OpenAIRE

    Christenssen, Wenche

    2014-01-01

    This thesis was written to increase understanding of, and knowledge about, material flow in the extrusion of complex, thin-walled aluminium profiles. It also reviews how uneven material flow out of a die can be balanced by the use of a feeder chamber. The report covers the building of models and simulations for two different profile geometries. The first profile is a U-profile that had previously been analysed using model material. This was done in a Diplom...

  17. Erregerspektrum bei tiefen Halsinfektionen: Eine retrospektive Analyse

    OpenAIRE

    Sömmer, C; Haid, M; Hommerich, C; Laskawi, R; Canis, M; Matthias, C

    2014-01-01

    Introduction: Deep neck infections are among the most dangerous diseases in otorhinolaryngology. This analysis gives an overview of the microbiology of deep neck infections and of factors that can lead to a change in the spectrum of pathogens. Methods: From January 2002 to December 2012, 63 patients with deep neck infections were treated at the ENT clinic of the University Medical Center Göttingen. Intraoperative swabs were taken. The incidence of the most common pathogens...

  18. Fully Coupled FE Analyses of Buried Structures

    Directory of Open Access Journals (Sweden)

    James T. Baylot

    1994-01-01

    Current procedures for determining the response of buried structures to the effects of the detonation of buried high explosives recommend decoupling the free-field stress analysis from the structure response analysis. A fully coupled (explosive-soil-structure) finite element analysis procedure was developed so that the accuracies of current decoupling procedures could be evaluated. Comparisons of the results of analyses performed using this procedure with scale-model experiments indicate that this finite element procedure can be used to effectively evaluate the accuracies of the methods currently being used to decouple the free-field stress analysis from the structure response analysis.

  19. Rod Ellis, Gary Barkhuizen, Analysing Learner Language

    OpenAIRE

    Narcy-Combes, Marie-Françoise

    2014-01-01

    This book comes at just the right time to complete the toolkit available to young researchers in applied linguistics and language didactics, as well as to practitioners in the field wishing to conduct action research. As is often the case with Rod Ellis's works, it is a comprehensive survey: a diachronic study of the tools used since the 1960s by researchers in language acquisition to analyse the written and oral productions of language learners...

  20. En analyse av Yoga-kundalini-upanisad

    OpenAIRE

    2006-01-01

    The thesis 'An analysis of Yoga-kundalini-upanisad' is based on the Indian ascetic Narayanaswamy Aiyer's English translation of the Yoga-kundalini-upanisad, published in Thirty Minor Upanisad-s, Including the Yoga Upanisad-s (Oklahoma, Santarasa Publications, 1980). This Hindu text is referred to as one of the 21 yoga upanishads, the eighty-sixth of the 108 classical upanishads, and forms part of the Krsna-Yajurveda text corpus. The text serves as a manual of exercises from the disciplines of hathayoga,...

  1. Use of Geospatial Analyses for Semantic Reasoning

    OpenAIRE

    Karmacharya, Ashish; Cruz, Christophe; Boochs, Frank; Marzani, Franck

    2010-01-01

    This work focuses on the integration of spatial analyses for semantic reasoning in order to compute new axioms of an existing OWL ontology. To make it concrete, we have defined Spatial Built-ins, an extension of the existing Built-ins of the SWRL rule language. It permits deductive rules to be run with the help of a translation rule engine. Thus, the Spatial SWRL rules are translated to standard SWRL rules. Once the spatial functions of the Spatial SWRL rules are comput...

  2. Implementing partnerships in nonreactor facility safety analyses

    International Nuclear Information System (INIS)

    Faculty and students from LSU have been participating in nuclear safety analyses and radiation protection projects at ANL-W at INEL since 1973. A mutually beneficial relationship has evolved that has resulted in generation of safety-related studies acceptable to Argonne and DOE, NRC, and state regulatory groups. Most of the safety projects have involved the Hot Fuel Examination Facility or the Fuel Conditioning Facility; both are hot cells that receive spent fuel from EBR-II. A table shows some of the major projects at ANL-W that involved LSU students and faculty

  3. Ion accelerators for ionometrical analyses of solids

    International Nuclear Information System (INIS)

    An ion accelerator for ionometrical analyses of solids is described. The following problems are treated: high-vacuum systems and their operation, small energy spread and beam collimation, and the system for automatic transmission of the ion beam. Owing to the careful optimization of the discussed parameters and to the automatic beam transmission system, beam currents of 5-10 × 10⁻⁹ A could be measured over more than 300 hours of operation time, with deviations of less than 20%. In a one-year period the accelerator was in operation for more than 2300 hours. (T.G.)

  4. En kvantitativ analyse af danskernes huskesedler

    OpenAIRE

    Schmidt, Marcus

    2005-01-01

    The present report is based on an analysis of 871 shopping lists, collected partly in Jutland and partly in Copenhagen. Consumers' discarded shopping lists were gathered both inside the grocery stores (shopping baskets, waste bins) and outside the stores (car parks, shopping trolleys). The data collection covers the largest supermarkets and discount stores as well as Bilka. The appendix at the end contains a more detailed account of the methodological approach...

  5. Deux perspectives pour analyser les relations professionnelles

    OpenAIRE

    Dunlop, John T.; Whyte, William F.; Mias, Arnaud

    2016-01-01

    This article is a translation of a piece published in the journal Industrial and Labor Relations Review, following a debate held at Princeton University in early 1949 between William Foote Whyte (1914-2000) and John Thomas Dunlop (1914-2003) on the analytical framework of industrial relations, which were then the subject of a rapidly growing body of research in the United States. This controversy between one of the leading figures of the...

  6. Analysing development to shape the future

    OpenAIRE

    Andreas Novy; Lukas Lengauer

    2008-01-01

    This article links theory and politics in a systematic way by proposing Is-Shall-Do as a didactical model for analysing a concrete conjuncture, relating it to the desired future in the form of a concrete utopia. Aware of structural limits and the potential space of manoeuvre for political agency, adequate practical steps to implement the concrete utopia are elaborated. The paper is divided into a first section which exposes three interwoven aspects of development: the idea of a good life, the co...

  7. Large scale breeder reactor pump dynamic analyses

    International Nuclear Information System (INIS)

    The lateral natural frequency and vibration response analyses of the Large Scale Breeder Reactor (LSBR) primary pump were performed as part of the total dynamic analysis effort to obtain the fabrication release. The special features of pump modeling are outlined in this paper. The analysis clearly demonstrates the method of increasing the system natural frequency by reducing the generalized mass without significantly changing the generalized stiffness of the structure. Also, a method of computing the maximum relative and absolute steady state responses and associated phase angles at given locations is provided. This type of information is very helpful in generating response versus frequency and phase angle versus frequency plots
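
    The frequency-raising tactic described here follows from the textbook single-mode relation between generalized stiffness K* and generalized mass M* (standard modal analysis, added for reference): reducing M* while leaving K* essentially unchanged raises the natural frequency.

        f_n = \frac{1}{2\pi}\sqrt{\frac{K^{*}}{M^{*}}}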

  8. Analyses of containment structures with corrosion damage

    International Nuclear Information System (INIS)

    Corrosion damage to a nuclear power plant containment structure can degrade the pressure capacity of the vessel. For the low-carbon, low-strength steels used in containments, the effect of corrosion on material properties is discussed. Strain-to-failure tests, in uniaxial tension, have been performed on corroded material samples. Results were used to select strain-based failure criteria for corroded steel. Using the ABAQUS finite element analysis code, the capacity of a typical PWR ice condenser containment with corrosion damage has been studied. Multiple analyses were performed, with the locations of the corrosion in the containment and the amount of corrosion varied in each analysis

  9. Introduction: Analysing Emotion and Theorising Affect

    Directory of Open Access Journals (Sweden)

    Peta Tait

    2016-08-01

    This discussion introduces ideas of emotion and affect for a volume of articles demonstrating the scope of approaches used in their study within the humanities and creative arts. The volume offers multiple perspectives on emotion and affect within 20th-century and 21st-century texts, arts and organisations and their histories. The discussion explains how emotion encompasses the emotions, emotional feeling, sensation and mood and how these can be analysed particularly in relation to literature, art and performance. It briefly summarises concepts of affect theory within recent approaches before introducing the articles.

  10. Externalizing Behaviour for Analysing System Models

    DEFF Research Database (Denmark)

    Ivanova, Marieta Georgieva; Probst, Christian W.; Hansen, René Rydhof;

    2013-01-01

    ...attackers. Therefore, many attacks are considerably easier to be performed for insiders than for outsiders. However, current models do not support explicit specification of different behaviours. Instead, behaviour is deeply embedded in the analyses supported by the models, meaning that it is a complex, if not impossible task to change behaviours. Especially when considering social engineering or the human factor in general, the ability to use different kinds of behaviours is essential. In this work we present an approach to make the behaviour a separate component in system models, and explore how to...

  11. Neutron dose measurements and the analyses

    International Nuclear Information System (INIS)

    This paper describes mainly the skyshine neutron dose distributions and MCNP analyses of the experiments. D-T neutron skyshine experiments were carried out at FNS with a port in the roof. Neutron and secondary gamma-ray dose rates were measured as far as 550 m and 400 m away, respectively. The experimental results were analyzed with the Monte Carlo code MCNP-4C using the nuclear data library JENDL-3.2, where the FNS building and the measurement field, including the pine forest, were modeled with simplified cylindrical geometries. The MCNP calculation agreed well with both the neutron and secondary gamma-ray dose rate distributions, within an uncertainty of 30%. (author)

  12. Visuelle Analyse von Eye-Tracking-Daten

    OpenAIRE

    Chen, Xuemei

    2011-01-01

    Eye tracking is one of the most frequently used techniques for analysing human-computer interaction and for investigating perception. The captured eye-tracking data are usually analysed with heat maps or scan paths in order to assess the usability of the tested application or to draw conclusions about higher cognitive processes. The goal of this diploma thesis is the development of new visualization techniques for eye-tracking data and of a study concept...

  13. 'Red Flag' Predictions

    DEFF Research Database (Denmark)

    Hallin, Carina Antonia; Andersen, Torben Juul; Tveterås, Sigbjørn

    ...-generation prediction markets and outline its unique features as a third-generation prediction market. It is argued that frontline employees gain deep insights when they execute operational activities on an ongoing basis in the organization. The experiential learning from close interaction with internal and external...

  14. Predicting the MJO

    Science.gov (United States)

    Hendon, H.

    2003-04-01

    Extended range prediction of the Madden Julian Oscillation (MJO) and seasonal prediction of MJO activity are reviewed. Skillful prediction of individual MJO events offers the possibility of forecasting increased risk of cyclone development throughout the global tropics, altered risk of extreme rainfall events in both tropics and extratropics, and displacement of storm tracks with 3-4 week lead times. The level of MJO activity within a season, which affects the mean intensity of the Australian summer monsoon and possibly the evolution of ENSO, may be governed by variations of sea surface temperature that are predictable with lead times of a few seasons. The limit of predictability for individual MJO events is unknown. Empirical-statistical schemes are skillful out to about 3 weeks and have better skill than dynamical forecast models at lead times longer than about 5 days. The dynamical forecast models typically suffer from a poor representation (or complete lack) of the MJO and large initial error. They are better used to ascertain the global impacts of the lack of the MJO rather than for determination of the limit of predictability. Dynamical extended range prediction within a GCM that has a good representation of the MJO indicates potential skill comparable to the empirical schemes. Examples of operational extended range prediction with POAMA, the new coupled seasonal forecast model at the Bureau of Meteorology that also reasonably simulates the MJO, will be presented.

  15. Improved nonlinear prediction method

    Science.gov (United States)

    Adenan, Nur Hamiza; Md Noorani, Mohd Salmi

    2014-06-01

    The analysis and prediction of time series data have been addressed by researchers. Many techniques have been developed to be applied in various areas, such as weather forecasting, financial markets and hydrological phenomena involving data that are contaminated by noise. Therefore, various techniques to improve the method have been introduced to analyze and predict time series data. In view of the importance of analysis and the accuracy of the prediction result, a study was undertaken to test the effectiveness of the improved nonlinear prediction method for data that contain noise. The improved nonlinear prediction method involves the formation of composite serial data based on the successive differences of the time series. Then, phase space reconstruction is performed on the composite (one-dimensional) data to reconstruct a number of space dimensions. Finally, the local linear approximation method is employed to make a prediction based on the phase space. This improved method was tested with logistic map data series containing 0%, 5%, 10%, 20% and 30% noise. The results show that by using the improved method, the predictions were found to be in close agreement with the observed ones. The correlation coefficient was close to one when the improved method was applied to data with up to 10% noise. Thus, an improvement for analyzing data with noise, without involving any noise reduction method, was introduced to predict time series data.
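
    A compact sketch of the pipeline the record describes: difference the series, delay-embed the differences into a phase space, then predict from nearby states. The embedding parameters are arbitrary, and the nearest-neighbour averaging below is a simplification of the local linear approximation used in the paper.

        import numpy as np

        def predict_next(series, dim=3, lag=1, k=5):
            """Difference the series, delay-embed the differences, then average
            the successors of the k nearest phase-space neighbours."""
            d = np.diff(series)                  # composite series of successive differences
            m = len(d) - (dim - 1) * lag         # number of embedding vectors
            emb = np.column_stack([d[i * lag:i * lag + m] for i in range(dim)])
            query, hist = emb[-1], emb[:-1]      # current state vs. earlier states
            nn = np.argsort(np.linalg.norm(hist - query, axis=1))[:k]
            next_diff = d[nn + (dim - 1) * lag + 1].mean()
            return series[-1] + next_diff

        rng = np.random.default_rng(0)
        x = [0.4]
        for _ in range(300):                     # logistic map as a test series
            x.append(4.0 * x[-1] * (1.0 - x[-1]))
        x = np.asarray(x) + rng.normal(0.0, 0.01, size=len(x))  # add mild noise
        print("predicted next value:", predict_next(x))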

  16. Zephyr - the prediction models

    DEFF Research Database (Denmark)

    Nielsen, Torben Skov; Madsen, Henrik; Nielsen, Henrik Aalborg;

    2001-01-01

    ...utilities as partners and users. The new models are evaluated for five wind farms in Denmark as well as one wind farm in Spain. It is shown that the predictions based on conditional parametric models are superior to the predictions obtained by state-of-the-art parametric models...

  17. Predicting AD conversion

    DEFF Research Database (Denmark)

    Liu, Yawu; Mattila, Jussi; Ruiz, Miguel Ángel Muñoz;

    2013-01-01

    To compare the accuracies of predicting AD conversion by using a decision support system (PredictAD tool) and current research criteria of prodromal AD as identified by combinations of episodic memory impairment of hippocampal type and visual assessment of medial temporal lobe atrophy (MTA) on MRI...

  18. Prediction of Antibody Epitopes

    DEFF Research Database (Denmark)

    Nielsen, Morten; Marcatili, Paolo

    2015-01-01

    ...self-proteins. Given the sequence or the structure of a protein of interest, several methods exploit such features to predict the residues that are more likely to be recognized by an immunoglobulin. Here, we present two methods (BepiPred and DiscoTope) to predict linear and discontinuous antibody...

  19. Error mode prediction.

    Science.gov (United States)

    Hollnagel, E; Kaarstad, M; Lee, H C

    1999-11-01

    The study of accidents ('human errors') has been dominated by efforts to develop 'error' taxonomies and 'error' models that enable the retrospective identification of likely causes. In the field of Human Reliability Analysis (HRA) there is, however, a significant practical need for methods that can predict the occurrence of erroneous actions--qualitatively and quantitatively. The present experiment tested an approach for qualitative performance prediction based on the Cognitive Reliability and Error Analysis Method (CREAM). Predictions of possible erroneous actions were made for operators using different types of alarm systems. The data were collected as part of a large-scale experiment using professional nuclear power plant operators in a full scope simulator. The analysis showed that the predictions were correct in more than 70% of the cases, and also that the coverage of the predictions depended critically on the comprehensiveness of the preceding task analysis. PMID:10582035

  20. Evaluating prediction uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    McKay, M.D. [Los Alamos National Lab., NM (United States)

    1995-03-01

    The probability distribution of a model prediction is presented as a proper basis for evaluating the uncertainty in a model prediction that arises from uncertainty in input values. Determination of important model inputs and subsets of inputs is made through comparison of the prediction distribution with conditional prediction probability distributions. Replicated Latin hypercube sampling and variance ratios are used in estimation of the distributions and in construction of importance indicators. The assumption of a linear relation between model output and inputs is not necessary for the indicators to be effective. A sequential methodology which includes an independent validation step is applied in two analysis applications to select subsets of input variables which are the dominant causes of uncertainty in the model predictions. Comparison with results from methods which assume linearity shows how those methods may fail. Finally, suggestions for treating structural uncertainty for submodels are presented.
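
    A small numpy illustration of the idea: draw a Latin hypercube sample, then score each input with the variance ratio Var[E(Y|Xi)]/Var(Y), estimated from bin-conditional means. The stand-in model and sample sizes are invented, and the report's actual methodology (replication, independent validation step) is richer than this sketch.

        import numpy as np

        rng = np.random.default_rng(7)

        def lhs(n, k):
            """Basic Latin hypercube sample on [0,1]^k: each column is stratified."""
            return np.column_stack([(rng.permutation(n) + rng.uniform(size=n)) / n
                                    for _ in range(k)])

        def model(x):                     # stand-in model: input 0 dominates
            return 3.0 * x[:, 0] ** 2 + 0.5 * x[:, 1] + 0.1 * x[:, 2]

        n, k, bins = 2000, 3, 20
        x = lhs(n, k)
        y = model(x)

        for i in range(k):
            # variance of bin-conditional means approximates Var[E(Y|X_i)]
            idx = np.digitize(x[:, i], np.linspace(0.0, 1.0, bins + 1)[1:-1])
            cond_means = [y[idx == b].mean() for b in range(bins)]
            print(f"input {i}: importance ~ {np.var(cond_means) / np.var(y):.2f}")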

  2. Is Time Predictability Quantifiable?

    DEFF Research Database (Denmark)

    Schoeberl, Martin

    2012-01-01

    Computer architects and researchers in the real-time domain have started to investigate processors and architectures optimized for real-time systems. Optimized for real-time systems means time predictable, i.e., architectures where it is possible to statically derive a tight bound of the worst-case execution time. To compare different approaches we would like to quantify time predictability. That means we need to measure time predictability. In this paper we discuss the different approaches for these measurements and conclude that time predictability is practically not quantifiable. We can only...

  3. Ground motion predictions

    International Nuclear Information System (INIS)

    Nuclear generated ground motion is defined and then related to the physical parameters that cause it. Techniques employed for prediction of ground motion peak amplitude, frequency spectra and response spectra are explored, with initial emphasis on the analysis of data collected at the Nevada Test Site (NTS). NTS postshot measurements are compared with pre-shot predictions. Applicability of these techniques to new areas, for example, Plowshare sites, must be questioned. Fortunately, the Atomic Energy Commission is sponsoring complementary studies to improve prediction capabilities primarily in new locations outside the NTS region. Some of these are discussed in the light of anomalous seismic behavior, and comparisons are given showing theoretical versus experimental results. In conclusion, current ground motion prediction techniques are applied to events off the NTS. Predictions are compared with measurements for the event Faultless and for the Plowshare events, Gasbuggy, Cabriolet, and Buggy I. (author)

  4. Ethics of cost analyses in medical education.

    Science.gov (United States)

    Walsh, Kieran

    2013-11-01

    Cost analyses in medical education are rarely straightforward, and rarely lead to clear-cut conclusions. Occasionally they do lead to clear conclusions, but even when that happens, some stakeholders will ask difficult but valid questions about what to do following cost analyses: specifically, about distributive justice in the allocation of resources. At present there are few or no debates about these issues, and rationing decisions that are taken in medical education are largely made subconsciously. Distributive justice 'concerns the nature of a socially just allocation of goods in a society'. Inevitably there is a large degree of subjectivity in the judgment as to whether an allocation is seen as socially just or ethical. There are different principles by which we can view distributive justice and which therefore affect the prism of subjectivity through which we see certain problems. For example, we might say that distributive justice at a certain institution or in a certain medical education system operates according to the principle that resources must be divided equally amongst learners. Another system may say that resources should be distributed according to the needs of learners or even of patients. No ethical system or model is inherently right or wrong; they depend on the context in which the educator is working. PMID:24203859

  5. ISFSI site boundary radiation dose rate analyses

    International Nuclear Information System (INIS)

    Across the globe nuclear utilities are in the process of designing and analysing Independent Spent Fuel Storage Installations (ISFSI) for the purpose of above-ground spent-fuel storage, primarily to mitigate the filling of spent-fuel pools. Using a conjoining of discrete ordinates transport theory (DORT) and Monte Carlo (MCNP) techniques, an ISFSI was analysed to determine neutron and photon dose rates for a generic overpack and ISFSI pad configuration and design at distances ranging from 1 to ∼1700 m from the ISFSI array. The calculated dose rates are used to address the requirements of 10CFR72.104, which provides limits to be enforced for the protection of the public by the NRC in regard to ISFSI facilities. For this overpack, dose rates decrease by three orders of magnitude through the first 200 m moving away from the ISFSI. In addition, the contributions from different source terms change over distance. It can be observed that although side photons provide the majority of the dose rate in this calculation, scattered photons and side neutrons take on more importance as the distance from the ISFSI is increased. (authors)

  6. Hierarchical regression for analyses of multiple outcomes.

    Science.gov (United States)

    Richardson, David B; Hamra, Ghassan B; MacLehose, Richard F; Cole, Stephen R; Chu, Haitao

    2015-09-01

    In cohort mortality studies, there often is interest in associations between an exposure of primary interest and mortality due to a range of different causes. A standard approach to such analyses involves fitting a separate regression model for each type of outcome. However, the statistical precision of some estimated associations may be poor because of sparse data. In this paper, we describe a hierarchical regression model for estimation of parameters describing outcome-specific relative rate functions and associated credible intervals. The proposed model uses background stratification to provide flexible control for the outcome-specific associations of potential confounders, and it employs a hierarchical "shrinkage" approach to stabilize estimates of an exposure's associations with mortality due to different causes of death. The approach is illustrated in analyses of cancer mortality in 2 cohorts: a cohort of dioxin-exposed US chemical workers and a cohort of radiation-exposed Japanese atomic bomb survivors. Compared with standard regression estimates of associations, hierarchical regression yielded estimates with improved precision that tended to have less extreme values. The hierarchical regression approach also allowed the fitting of models with effect-measure modification. The proposed hierarchical approach can yield estimates of association that are more precise than conventional estimates when one wishes to estimate associations with multiple outcomes. PMID:26232395
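
    The "shrinkage" step can be illustrated as a precision-weighted compromise between each cause-specific estimate and a pooled mean. The sketch below is a minimal empirical-Bayes analogue, not the authors' full hierarchical model (which also stratifies on confounders); all numbers, including the between-cause variance tau2, are made up.

```python
import numpy as np

beta_hat = np.array([0.40, 1.20, -0.15, 0.80])  # cause-specific log relative rates
se = np.array([0.10, 0.60, 0.20, 0.50])         # their standard errors
tau2 = 0.09                                     # assumed between-cause variance

mu = np.average(beta_hat, weights=1.0 / (se**2 + tau2))  # pooled mean
w = tau2 / (tau2 + se**2)          # weight kept by each cause-specific estimate
beta_shrunk = w * beta_hat + (1 - w) * mu

for b, s, bs in zip(beta_hat, se, beta_shrunk):
    print(f"estimate {b:+.2f} (SE {s:.2f}) -> shrunk {bs:+.2f}")
# the imprecise estimates (large SE) move the most toward the pooled mean
```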

  7. ANALYSES ON SYSTEMATIC CONFRONTATION OF FIGHTER AIRCRAFT

    Institute of Scientific and Technical Information of China (English)

    Huai Jinpeng; Wu Zhe; Huang Jun

    2002-01-01

    Analyses of the systematic confrontation between two military forces are the highest hierarchy of operational effectiveness studies of weapon systems. The physical model for tactical many-on-many engagements of an aerial warfare with heterogeneous fighter aircraft is established. On the basis of Lanchester multivariate equations of the square law, a mathematical model corresponding to the established physical model is given. A superiority parameter is then derived directly from the mathematical model. In view of the high-tech conditions of modern warfare, the concept of the superiority parameter, which more truly reflects the essentials of an air-to-air engagement, is further formulated. The attrition coefficients, which are key to the differential equations, are determined by using tactics of random target assignment and the air-to-air capability index of the fighter aircraft. Hereby, taking the mathematical model and superiority parameter as cores, calculations and analyses of complicated systemic problems such as evaluation of battle superiority, prognostication of combat process and optimization of collocations have been accomplished. Results indicate that a classical combat theory with its certain recent development has received newer applications in military operations research for complicated confrontation analysis issues.
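
    As a minimal concrete instance of the underlying Lanchester square law (the paper's multivariate equations generalize it to heterogeneous forces), the sketch below integrates the homogeneous two-force case and shows how the sign of the superiority parameter decides the outcome; the attrition coefficients and force sizes are illustrative.

```python
a, b = 0.8, 0.6          # a: y-force kill rate against x; b: x-force against y
x, y = 100.0, 110.0      # initial force sizes
dt, t = 0.01, 0.0

# square-law invariant: b*x^2 - a*y^2 is conserved; its sign picks the winner
print(f"superiority parameter b*x0^2 - a*y0^2 = {b * x**2 - a * y**2:+.0f}")
while x > 0 and y > 0:   # explicit Euler integration of dx/dt=-a*y, dy/dt=-b*x
    x, y = x - a * y * dt, y - b * x * dt
    t += dt
print(f"t = {t:.1f}: x = {max(x, 0):.1f}, y = {max(y, 0):.1f}")
```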

  8. Sensitivity in risk analyses with uncertain numbers.

    Energy Technology Data Exchange (ETDEWEB)

    Tucker, W. Troy; Ferson, Scott

    2006-06-01

    Sensitivity analysis is a study of how changes in the inputs to a model influence the results of the model. Many techniques have recently been proposed for use when the model is probabilistic. This report considers the related problem of sensitivity analysis when the model includes uncertain numbers that can involve both aleatory and epistemic uncertainty and the method of calculation is Dempster-Shafer evidence theory or probability bounds analysis. Some traditional methods for sensitivity analysis generalize directly for use with uncertain numbers, but, in some respects, sensitivity analysis for these analyses differs from traditional deterministic or probabilistic sensitivity analyses. A case study of a dike reliability assessment illustrates several methods of sensitivity analysis, including traditional probabilistic assessment, local derivatives, and a "pinching" strategy that hypothetically reduces the epistemic uncertainty or aleatory uncertainty, or both, in an input variable to estimate the reduction of uncertainty in the outputs. The prospects for applying the methods to black box models are also considered.
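
    The "pinching" strategy can be sketched in a plain Monte Carlo setting: hypothetically fix one input at a point value and measure how much the output spread contracts. This is a simplified probabilistic analogue of the report's approach, which works with uncertain numbers (probability boxes/Dempster-Shafer structures) rather than point distributions; the model and input distributions are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

def model(u, v, w):
    return u * v + w**2

u = rng.normal(1.0, 0.3, n)
v = rng.uniform(0.0, 2.0, n)
w = rng.normal(0.0, 0.5, n)
base = np.std(model(u, v, w))

# pinch each input to a point value and measure the contraction of the output
for name, pinched in [("u", model(np.full(n, 1.0), v, w)),
                      ("v", model(u, np.full(n, 1.0), w)),
                      ("w", model(u, v, np.full(n, 0.0)))]:
    print(f"pinching {name}: output std reduced by "
          f"{100 * (1 - np.std(pinched) / base):.0f}%")
```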

  9. Waste Stream Analyses for Nuclear Fuel Cycles

    Energy Technology Data Exchange (ETDEWEB)

    N. R. Soelberg

    2010-08-01

    A high-level study was performed in Fiscal Year 2009 for the U.S. Department of Energy (DOE) Office of Nuclear Energy (NE) Advanced Fuel Cycle Initiative (AFCI) to provide information for a range of nuclear fuel cycle options (Wigeland 2009). At that time, some fuel cycle options could not be adequately evaluated since they were not well defined and lacked sufficient information. As a result, five families of these fuel cycle options are being studied during Fiscal Year 2010 by the Systems Analysis Campaign for the DOE NE Fuel Cycle Research and Development (FCRD) program. The quality and completeness of data available to date for the fuel cycle options is insufficient to perform quantitative radioactive waste analyses using recommended metrics. This study has been limited thus far to qualitative analyses of waste streams from the candidate fuel cycle options, because quantitative data for wastes from the front end, fuel fabrication, reactor core structure, and used fuel for these options is generally not yet available.

  10. Assessment of ERANOS for HPLWR core analyses

    International Nuclear Information System (INIS)

    The High Performance Light Water Reactor (HPLWR) is an innovative thermal spectrum nuclear reactor concept in which water at supercritical pressure is used both as neutron moderator and as coolant. The usage of a deterministic tool for neutronic analyses of the HPLWR core is preferred to that of Monte Carlo techniques mainly because of computational time reduction, but also because of higher flexibility in dealing with temperature-dependent cross-sections; for these reasons, ERANOS has been chosen. Verification of the developed geometry models and selected calculation procedure is mandatory when applying ERANOS to this innovative reactor concept, since the code was originally developed for fast reactors. This task is achieved by means of code-to-code comparison, choosing MCNP5, which ensures correct geometry representation and provides a continuous energy treatment, even if, to the authors' knowledge, it lacks extensive validation of the thermal scattering data for the considered water temperature and pressure ranges. Two main spatial scales, associated with the usage of a deterministic code, require attention: 1) cell calculations, in which macroscopic self-shielded cross-sections are generated, 2) 3D calculations. A very good agreement between the codes is obtained for both cell and 3D fuel assembly calculations. The developed 3D core model has been verified by a comparison of different ERANOS modules because of the high computational power required by MCNP5. The results ensure the applicability of ERANOS to HPLWR analyses when adequate calculation procedures are used. (authors)

  11. Structural prediction in aphasia

    Directory of Open Access Journals (Sweden)

    Tessa Warren

    2015-05-01

    Full Text Available There is considerable evidence that young healthy comprehenders predict the structure of upcoming material, and that their processing is facilitated when they encounter material matching those predictions (e.g., Staub & Clifton, 2006; Yoshida, Dickey & Sturt, 2013). However, less is known about structural prediction in aphasia. There is evidence that lexical prediction may be spared in aphasia (Dickey et al., 2014; Love & Webb, 1977; cf. Mack et al., 2013). However, predictive mechanisms supporting facilitated lexical access may not necessarily support structural facilitation. Given that many people with aphasia (PWA) exhibit syntactic deficits (e.g., Goodglass, 1993), PWA with such impairments may not engage in structural prediction. However, recent evidence suggests that some PWA may indeed predict upcoming structure (Hanne, Burchert, De Bleser, & Vasishth, 2015). Hanne et al. tracked the eyes of PWA (n=8) with sentence-comprehension deficits while they listened to reversible subject-verb-object (SVO) and object-verb-subject (OVS) sentences in German, in a sentence-picture matching task. Hanne et al. manipulated case and number marking to disambiguate the sentences' structure. Gazes to an OVS or SVO picture during the unfolding of a sentence were assumed to indicate prediction of the structure congruent with that picture. According to this measure, the PWA's structural prediction was impaired compared to controls, but they did successfully predict upcoming structure when morphosyntactic cues were strong and unambiguous. Hanne et al.'s visual-world evidence is suggestive, but their forced-choice sentence-picture matching task places tight constraints on possible structural predictions. Clearer evidence of structural prediction would come from paradigms where the content of upcoming material is not as constrained. The current study used a self-paced reading paradigm to examine structural prediction among PWA in less constrained contexts. PWA (n=17) who

  12. Castor-1C spent fuel storage cask decay heat, heat transfer, and shielding analyses

    International Nuclear Information System (INIS)

    This report documents the decay heat, heat transfer, and shielding analyses of the Gesellschaft fuer Nuklear Services (GNS) CASTOR-1C cask used in a spent fuel storage demonstration performed at Preussen Elektra's Wurgassen nuclear power plant. The demonstration was performed between March 1982 and January 1984, and resulted in cask and fuel temperature data and cask exterior surface gamma-ray and neutron radiation dose rate measurements. The purpose of the analyses reported here was to evaluate decay heat, heat transfer, and shielding computer codes. The analyses consisted of (1) performing pre-look predictions (predictions performed before the analysts were provided the test data), (2) comparing ORIGEN2 (decay heat), COBRA-SFS and HYDRA (heat transfer), and QAD and DOT (shielding) results to data, and (3) performing post-test analyses if appropriate. Even though two heat transfer codes were used to predict CASTOR-1C cask test data, no attempt was made to compare the two codes. The codes are being evaluated with other test data (single-assembly data and other cask data), and to compare the codes based on one set of data may be premature and lead to erroneous conclusions

  13. Analyses of cavitation instabilities in ductile metals

    DEFF Research Database (Denmark)

    Tvergaard, Viggo

    Cavitation instabilities have been predicted for a single void in a ductile metal stressed under high triaxiality conditions. In experiments for a ceramic reinforced by metal particles a single dominant void has been observed on the fracture surface of some of the metal particles bridging a crack. ... Analyses for the influence of such size-effects on cavitation instabilities are presented. When a metal contains a distribution of micro voids, and the void spacing compared to void size is not extremely large, the surrounding voids may affect the occurrence of a cavitation instability at one of the voids...

  14. Micromechanical Failure Analyses for Finite Element Polymer Modeling

    Energy Technology Data Exchange (ETDEWEB)

    CHAMBERS,ROBERT S.; REEDY JR.,EARL DAVID; LO,CHI S.; ADOLF,DOUGLAS B.; GUESS,TOMMY R.

    2000-11-01

    Polymer stresses around sharp corners and in constrained geometries of encapsulated components can generate cracks leading to system failures. Often, analysts use maximum stresses as a qualitative indicator for evaluating the strength of encapsulated component designs. Although this approach has been useful for making relative comparisons screening prospective design changes, it has not been tied quantitatively to failure. Accurate failure models are needed for analyses to predict whether encapsulated components meet life cycle requirements. With Sandia's recently developed nonlinear viscoelastic polymer models, it has been possible to examine more accurately the local stress-strain distributions in zones of likely failure initiation, looking for physically based failure mechanisms and continuum metrics that correlate with the cohesive failure event. This study has identified significant differences between rubbery and glassy failure mechanisms that suggest reasonable alternatives for cohesive failure criteria and metrics. Rubbery failure seems best characterized by the mechanisms of finite extensibility and appears to correlate with maximum strain predictions. Glassy failure, however, seems driven by cavitation and correlates with the maximum hydrostatic tension. Using these metrics, two three-point bending geometries were tested and analyzed under variable loading rates, different temperatures and comparable mesh resolution (i.e., accuracy) to make quantitative failure predictions. The resulting predictions and observations agreed well, suggesting the need for additional research. In a separate, additional study, the asymptotically singular stress state found at the tip of a rigid, square inclusion embedded within a thin, linear elastic disk was determined for uniform cooling. The singular stress field is characterized by a single stress intensity factor K_a and the applicable K_a calibration relationship has been determined for both fully bonded and
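
    The two continuum metrics contrasted above are easy to evaluate at a material point; the sketch below computes the maximum principal strain (the rubbery metric) and the hydrostatic tension (the glassy metric) from illustrative stress and strain tensors, which are placeholders rather than values from the study.

```python
import numpy as np

strain = np.array([[0.020, 0.003,  0.000],
                   [0.003, 0.010,  0.000],
                   [0.000, 0.000, -0.005]])
stress = np.array([[80.0, 10.0,  0.0],    # MPa
                   [10.0, 60.0,  0.0],
                   [ 0.0,  0.0, 40.0]])

max_principal_strain = np.linalg.eigvalsh(strain).max()  # rubbery metric
hydrostatic_tension = np.trace(stress) / 3.0   # glassy metric (>0 is tension)

print(f"max principal strain: {max_principal_strain:.4f}")
print(f"hydrostatic tension:  {hydrostatic_tension:.1f} MPa")
```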

  15. Predicting geomagnetic activity indices

    International Nuclear Information System (INIS)

    Complete text of publication follows. Magnetically active times, e.g., Kp > 5, are notoriously difficult to predict, precisely the times when such predictions are crucial to the space weather users. Taking advantage of the routinely available solar wind measurements at the Lagrangian point (L1) and nowcast Kps, Kp and Dst forecast models based on neural networks were developed with the focus on improving the forecast for active times. To satisfy different needs and operational constraints, three models were developed: (1) a model that inputs nowcast Kp and solar wind parameters and predicts Kp 1 hr ahead; (2) a model with the same input as model 1 that predicts Kp 4 hr ahead; and (3) a model that inputs only solar wind parameters and predicts Kp 1 hr ahead (the exact prediction lead time depends on the solar wind speed and the location of the solar wind monitor). Extensive evaluations of these models and other major operational Kp forecast models show that, while the new models can predict Kps more accurately for all activities, the most dramatic improvements occur for moderate and active times. Similar Dst models were developed. Information dynamics analysis of Kp suggests that geospace is more dominated by internal dynamics near solar minimum than near solar maximum, when it is more directly driven by external inputs, namely solar wind and interplanetary magnetic field (IMF).
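
    A minimal sketch of the shape of model (1) is given below: a small feed-forward network mapping the nowcast Kp plus L1 solar wind parameters to Kp one hour ahead. The data are random placeholders and the feature set is an assumption; the operational models' architecture and training data are not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 5000
# assumed features: nowcast Kp, solar wind speed, density, |B|, Bz (GSM)
X = np.column_stack([rng.uniform(0, 9, n), rng.normal(450, 100, n),
                     rng.normal(5, 2, n), rng.normal(6, 3, n),
                     rng.normal(0, 3, n)])
# synthetic target: activity persists, rises with speed and southward Bz
y = np.clip(0.6 * X[:, 0] + 0.004 * (X[:, 1] - 300) - 0.3 * X[:, 4]
            + rng.normal(0, 0.5, n), 0, 9)

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X[:4000], y[:4000])                 # train on the first 4000 samples
print("held-out R^2:", round(net.score(X[4000:], y[4000:]), 2))
```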

  16. Nuclear Analyses For ITER NB System

    International Nuclear Information System (INIS)

    Full text: Detailed nuclear analyses for the latest ITER NB system are required to ensure that the NB design conforms to the nuclear regulations and licensing. A variety of nuclear analyses was conducted for the NB system, including the tokamak building and the area outside the building, by using the Monte Carlo code MCNP5.14, the activation code ACT-4 and the Fusion Evaluated Nuclear Data Library FENDL-2.1. A special “Direct 1-step Monte Carlo” method is adopted for the shutdown dose rate calculation. The NB system and the tokamak building are very complicated, and it is practically impossible to make geometry input data manually. We used the automatic converter code GEOMIT to convert CAD data into MCNP geometry input data. GEOMIT was improved for these analyses, and the conversion performance was drastically enhanced. Void cells in MCNP input data were generated by subtracting solid cell data from simple rectangular void cells. The CAD data were successfully converted to MCNP geometry input data, and void data were also adequately produced with GEOMIT. The effective dose rates at external zones (non-controlled areas) should be less than 80 μSv/month according to French regulations. Shielding structures are under analysis to reduce the radiation streaming through the openings. We are confirming that the criterion is satisfied for the NB system. The effective dose rate data in the NB cell after shutdown are necessary to check the dose rate during possible rad-works for maintenance. Dose rates for workers must be maintained as low as reasonably achievable, and at locations where hands-on maintenance is performed they should be below a target of 100 μSv/h at 12 days after shutdown. We are specifying the adequate zoning and areas where hands-on maintenance can be allowed, based on the analysis results. The cask design for transporting activated NB components is an important issue, and we are calculating the effective dose rates. The target of the effective dose rate from the activated NB components is less

  17. Flowtran assessment for predicting flow instability

    International Nuclear Information System (INIS)

    FLOWTRAN is a thermal-hydraulic assembly code for simulating Savannah River Site (SRS) reactor assemblies and predicting flow instability. Reactor power and flow transient modelling is critical in determining safe operating limits at which a reactor could be shut down without damage to the fuel assemblies. FLOWTRAN models an individual assembly's thermal-hydraulic behavior and can determine the operating power limit to avoid flow instability when the flow regime through the assembly is single-phase. Tests were conducted at Columbia University in 1988-89 with downward flow through single tubes to examine fluid flow instability. FLOWTRAN cannot predict actual flow instability because it cannot model two-phase flow. FLOWTRAN modelled the heated tubes to predict Onset of Significant Voiding (OSV) using the Saha-Zuber correlation modified for SRS reactors; data analyses for the Columbia tests showed that OSV from the modified correlation is a conservative predictor for downward flow instability

  18. Analysing lawyers’ attitude towards knowledge sharing

    Directory of Open Access Journals (Sweden)

    Wole M. Olatokun

    2012-09-01

    Full Text Available Objectives: The study examined and identified the factors that affect lawyers' attitudes to knowledge sharing, and their knowledge sharing behaviour. Specifically, it investigated the relationship between the salient beliefs affecting the knowledge sharing attitude of lawyers, and applied a modified version of the Theory of Reasoned Action (TRA) in the knowledge sharing context, to predict how these factors affect their knowledge sharing behaviour. Method: A field survey of 273 lawyers was carried out, using a questionnaire for data collection. Collected data on all variables were structured into grouped frequency distributions. Principal Component Factor Analysis was applied to reduce the constructs, and Simple Regression was applied to test the hypotheses. These were tested at the 0.05 level of significance. Results: Results showed that expected associations and contributions were the major determinants of lawyers' attitudes towards knowledge sharing. Expected reward was not significantly related to lawyers' attitudes towards knowledge sharing. A positive attitude towards knowledge sharing was found to lead to a positive intention to share knowledge, although a positive intention to share knowledge did not significantly predict a positive knowledge sharing behaviour. The level of Information Technology (IT) usage was also found to significantly affect the knowledge sharing behaviour of lawyers. Conclusion: It was recommended that law firms in the study area should deploy more IT infrastructure and services that encourage effective knowledge sharing amongst lawyers.

  19. On Prediction of EOP

    CERN Document Server

    Malkin, Z

    2009-01-01

    Two methods of predicting the pole coordinates and TAI-UTC were tested: extrapolation of the deterministic components and ARIMA. It was found that each of these methods is most effective for a certain length of prognosis. For short-term prediction the ARIMA algorithm yields a more accurate prognosis, while for long-term prediction extrapolation is preferable. So the combined algorithm is used in the practice of the IAA EOP Service. The accuracy of the prognosis is close to the accuracy of the IERS algorithms. For prediction of nutation the program KSV-1996-1 by T. Herring is used.
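
    The "extrapolation of the deterministic components" half of the combined method can be sketched as a least-squares fit of a trend plus annual and Chandler (~435-day) harmonics to a pole-coordinate series, evaluated into the future. The series below is synthetic; in practice an ARIMA model of the residuals would supply the short-term part.

```python
import numpy as np

t = np.arange(3000.0)                      # daily series, time in days
rng = np.random.default_rng(2)
xp = (0.05 + 1e-5 * t
      + 0.10 * np.sin(2 * np.pi * t / 365.25)
      + 0.15 * np.sin(2 * np.pi * t / 435.0)
      + rng.normal(0, 0.01, t.size))       # synthetic pole coordinate, arcsec

def basis(t):
    cols = [np.ones_like(t), t]            # offset and linear trend
    for period in (365.25, 435.0):         # annual and Chandler periods
        cols += [np.sin(2 * np.pi * t / period),
                 np.cos(2 * np.pi * t / period)]
    return np.column_stack(cols)

coef, *_ = np.linalg.lstsq(basis(t), xp, rcond=None)
t_future = np.arange(3000.0, 3090.0)       # 90-day prognosis
prediction = basis(t_future) @ coef
print("predicted x_p, first 5 future days:", np.round(prediction[:5], 3))
```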

  20. Analysing transfer phenomena in osmotic evaporation

    Directory of Open Access Journals (Sweden)

    Freddy Forero Longas

    2011-12-01

    Full Text Available Osmotic evaporation is a modification of traditional processes using membranes; by means of a vapour pressure differential, produced by a highly concentrated extraction solution, water is transferred through a hydrophobic membrane as vapour. This technique has many advantages over traditional processes, allowing work at atmospheric pressure and low temperatures, this being ideal for heat-sensitive products. This paper presents and synthetically analyses the phenomena of heat and mass transfer which occur in the process and describes the models used for estimating the parameters of interest, such as flow, temperature, heat transfer rate and the relationships that exist amongst them when hollow fibre modules are used, providing a quick reference tool and specific information about this process.
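
    The driving force described above is the water vapour pressure difference across the membrane. The sketch below evaluates it with the common Antoine constants for water and an assumed linear flux law J = K·ΔP; the water activities and the membrane coefficient K are invented for illustration.

```python
def p_water_mmHg(t_celsius):
    # Antoine equation for water, roughly valid for 1-100 deg C
    return 10 ** (8.07131 - 1730.63 / (233.426 + t_celsius))

a_w_feed, a_w_brine = 0.99, 0.75   # assumed water activities: feed vs brine
t = 30.0                           # isothermal operation, deg C
dp = p_water_mmHg(t) * (a_w_feed - a_w_brine)   # vapour pressure difference
K = 2.0e-3                         # assumed membrane coefficient, kg/(m^2 h mmHg)
print(f"dP = {dp:.2f} mmHg, flux ~ {K * dp:.3f} kg/(m^2 h)")
```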

  1. Seismic analyses of structures. 1st draft

    International Nuclear Information System (INIS)

    The dynamic analysis presented in this paper refers to the seismic analysis of the main building of the Paks NPP. The aim of the analysis was to determine the floor response spectra as the response to seismic input. The analysis was performed with a 3-dimensional calculation model, and the floor response spectra were determined for a number of levels from the floor response time histories; no other adjustments were applied. The following results of the seismic analysis are presented: the 3-dimensional finite element model; the basic assumptions of the dynamic analyses; the table of frequencies and included factors; the modal masses for all modes; the floor response spectra in all the selected nodes, with figures of the indicated nodes and important nodes of free vibration

  2. Genetic Analyses of Meiotic Recombination in Arabidopsis

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Meiosis is essential for sexual reproduction and recombination is a critical step required for normal meiosis. Understanding the underlying molecular mechanisms that regulate recombination is important for medical, agricultural and ecological reasons. Readily available molecular and cytological tools make Arabidopsis an excellent system to study meiosis. Here we review recent developments in molecular genetic analyses of meiotic recombination. These include studies on plant homologs of yeast and animal genes, as well as novel genes that were first identified in plants. The characterizations of these genes have demonstrated essential functions from the initiation of recombination by double-strand breaks to the repair of such breaks, and from the formation of double Holliday junctions to the possible resolution of these junctions, both of which are critical for crossover formation. The recent advances have ushered in a new era in plant meiosis, in which the combination of genetics, genomics, and molecular cytology can uncover important gene functions.

  3. Deterministic analyses of severe accident issues

    International Nuclear Information System (INIS)

    Severe accidents in light water reactors involve complex physical phenomena. In the past there has been a heavy reliance on simple assumptions regarding physical phenomena, alongside probability methods, to evaluate risks associated with severe accidents. Recently GE has developed realistic methodologies that permit deterministic evaluations of severe accident progression and of some of the associated phenomena in the case of Boiling Water Reactors (BWRs). These deterministic analyses indicate that with appropriate system modifications and operator actions, core damage can be prevented in most cases. Furthermore, in cases where core-melt is postulated, containment failure can either be prevented or significantly delayed to allow sufficient time for recovery actions to mitigate severe accidents.

  4. Spatial Analyses of Harappan Urban Settlements

    Directory of Open Access Journals (Sweden)

    Hirofumi Teramura

    2006-12-01

    Full Text Available The Harappan Civilization occupies a unique place among the early civilizations of the world with its well planned urban settlements, advanced handicraft and technology, and religious and trade activities. Using a Geographical Information System (GIS), this study presents spatial analyses that locate urban settlements on a digital elevation model (DEM) according to the three phases of early, mature and late. Understanding the relationship between the spatial distribution of Harappan sites and the change in some factors, such as topographic features, river passages or sea level changes, will lead to an understanding of the dynamism of this civilization. It will also afford a glimpse of the factors behind the formation, development, and decline of the Harappan Civilization.

  5. Angular analyses in relativistic quantum mechanics

    International Nuclear Information System (INIS)

    This work describes the angular analysis of reactions between particles with spin in a fully relativistic fashion. One-particle states are introduced, following Wigner's method, as representations of the inhomogeneous Lorentz group. In order to perform the angular analyses, the reduction of the product of two representations of the inhomogeneous Lorentz group is studied. Clebsch-Gordan coefficients are computed for the following couplings: l-s coupling, helicity coupling, multipolar coupling, and symmetric coupling for more than two particles. Massless and massive particles are handled simultaneously. Along the way we construct spinorial amplitudes and free fields, and we recall how to establish convergence theorems for angular expansions from analyticity hypotheses. Finally we substitute these hypotheses for the idea of a 'potential radius', which gives at low energy the usual 'centrifugal barrier' factors. The presence of such factors had never been deduced from hypotheses compatible with relativistic invariance. (author)

  6. Correlation analyses of deep galaxy samples

    International Nuclear Information System (INIS)

    Estimates of the two-point angular correlation function, w(theta), are presented for galaxy samples obtained from COSMOS machine measurements of 1.2-m UK Schmidt telescope (UKST) and 4-m Anglo-Australian telescope (AAT) plates. All of the estimated w(theta) are consistent with a -0.8 power-law slope at small scales. At larger angular scales a break from the power-law behaviour is seen in the UKST w(theta), corresponding to a spatial separation of 3 h^-1 Mpc, in agreement with earlier results. The AAT plates allow the correlation analyses to be carried out to 24 mag in the blue passband and 22 mag in the red passband. It is observed that the correlation function amplitude scaling relation in both passbands is very similar. (author)
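
    The w(theta) estimation itself reduces to pair counting. The sketch below uses the simple DD/RR - 1 estimator on a fabricated clustered "catalogue" over a small flat patch and fits a power-law slope for comparison with the -0.8 value quoted above; real analyses use survey masks and the Landy-Szalay estimator.

```python
import numpy as np

rng = np.random.default_rng(3)

def pair_angles(p):
    # small-angle approximation on a flat patch: separations in degrees
    d = p[:, None, :] - p[None, :, :]
    theta = np.sqrt((d**2).sum(-1))
    return theta[np.triu_indices(len(p), k=1)]

# crude clustered sample: galaxies scattered around random cluster centres
centres = rng.uniform(0, 5, (60, 2))
data = centres[rng.integers(0, 60, 1500)] + rng.normal(0, 0.08, (1500, 2))
rand = rng.uniform(0, 5, (1500, 2))

bins = np.logspace(-2, 0, 12)                  # 0.01 to 1 degree
dd, _ = np.histogram(pair_angles(data), bins)
rr, _ = np.histogram(pair_angles(rand), bins)
w = dd / np.maximum(rr, 1) - 1                 # simple DD/RR - 1 estimator
mid = np.sqrt(bins[1:] * bins[:-1])
slope = np.polyfit(np.log10(mid[w > 0]), np.log10(w[w > 0]), 1)[0]
print("fitted power-law slope:", round(slope, 2))
```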

  7. Analysing weak orbital signals in Gaia data

    CERN Document Server

    Lucy, L B

    2014-01-01

    Anomalous orbits are found when minimum-chi^{2} estimation is applied to synthetic Gaia data for weak orbital signals - i.e., orbits whose astrometric signatures are comparable to the single-scan measurement error (Pourbaix 2002). These orbits are nearly parabolic, edge-on, and their major axes align with the line-of-sight to the observer. Such orbits violate the Copernican principle (CPr) and as such could be rejected. However, the preferred alternative is to develop a statistical technique that incorporates the CPr as a fundamental postulate. This can be achieved in the context of Bayesian estimation by defining a Copernican prior. With this development, Pourbaix's anomalous orbits no longer arise. Instead, orbits with a somewhat higher chi^{2} but which do not violate the CPr are selected. Other areas of astronomy where the investigator must analyse data from 'imperfect experiments' might similarly benefit from appropriately-defined Copernican priors.

  8. Preserving the nuclear option: analyses and recommendations

    International Nuclear Information System (INIS)

    It is certain that a future role for nuclear power will depend on substantial changes in the management and regulation of the enterprise. It is widely believed that institutional, rather than technological, change is, at least in the short term, the key to resuscitating the nuclear option. Several recent analyses of the problems facing nuclear power, together with the current congressional hearing on the Nuclear Regulatory Commission's fiscal year 1986 budget request, have examined both the future of nuclear power and what can be done to address present institutional shortcomings. The congressional sessions have provided an indication of the views of both legislators and regulators, and this record, although mixed, generally shows continued optimism about the prospects of the nuclear option if needed reforms are accomplished

  9. Thermal hydraulic reactor safety analyses and experiments

    International Nuclear Information System (INIS)

    The report introduces the results of the thermal hydraulic reactor safety research performed in the Nuclear Engineering Laboratory of the Technical Research Centre of Finland (VTT) during the years 1972-1987. Also practical applications, i.e. analyses for the safety authorities and power companies, are presented. The emphasis is on description of the state-of-the-art know-how. The report describes VTT's most important computer codes, both those of foreign origin and those developed at VTT, and their assessment work, VTT's own experimental research, as well as international experimental projects and other forms of cooperation VTT has participated in. Appendix 8 contains a comprehensive list of the most important publications and technical reports produced. They present the content and results of the research in detail. (orig.)

  10. Reliability and safety analyses under fuzziness

    International Nuclear Information System (INIS)

    Fuzzy theory, for example possibility theory, is compatible with probability theory. What has been shown so far is that probability theory need not be replaced by fuzzy theory, but rather that the former works much better in applications if it is combined with the latter. In fact, it is said that there are two essential uncertainties in the field of reliability and safety analyses: one is a probabilistic uncertainty which is more relevant for mechanical systems and the natural environment, and the other is fuzziness (imprecision) caused by the existence of human beings in systems. Classical probability theory alone is therefore not sufficient to deal with uncertainties in humanistic systems. In such a context this collection of works sets a milestone in the debate between probability theory and fuzzy theory. This volume covers fault analysis, lifetime analysis, reliability, quality control, safety analysis and risk analysis. (orig./DG). 106 figs

  11. First international intercomparison of image analysers

    CERN Document Server

    Pálfalvi, J; Eoerdoegh, I

    1999-01-01

    Image analyser systems used for evaluating solid state nuclear track detectors (SSNTD) were compared in order to establish the minimum hardware and software requirements and the methodology necessary in different fields of radiation dosimetry. For this purpose, CR-39 detectors (TASL, Bristol, U.K.) were irradiated with different (n,alpha) and (n,p) converters in a reference Pu-Be neutron field, in an underground laboratory with high radon concentration, and by different alpha sources at the Atomic Energy Research Institute (AERI) in Budapest, Hungary. Six sets of etched and pre-evaluated detectors, and a seventh one without etching, were distributed among the 14 laboratories from 11 countries. The participants measured the different track parameters and statistically evaluated the results to determine the performance of their system. The statistical analysis of results showed high deviations from the mean values in many cases. At the conclusion of the intercomparison, recommendations were given to fulfill those requirements ...

  12. Fouling analyses for heat exchangers of NPP

    International Nuclear Information System (INIS)

    Fouling of heat exchangers is generated by water-borne deposits, commonly known as foulants, including particulate matter from the air; migrated corrosion products; silt, clays, and sand suspended in water; organic contaminants; and boron-based deposits in plants. This fouling is known to interfere with normal flow characteristics and reduce the thermal efficiency of heat exchangers. In order to analyze the fouling for heat exchangers of a nuclear power plant, the fouling factor is introduced based on the ASME O and M codes and TEMA standards. This paper focuses on the fouling analyses for the heat exchangers of several primary systems: the RHR heat exchanger of the residual heat removal system, the letdown heat exchanger of the chemical and volume control system, and the CCW heat exchanger of the component cooling water system. Based on the results, the fouling levels for the three heat exchangers are assessed.
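
    The fouling factor mentioned above is conventionally the difference of inverse overall heat-transfer coefficients between fouled and clean conditions; a one-line sketch with illustrative numbers:

```python
def fouling_factor(u_clean, u_fouled):
    """Fouling resistance R_f in m^2*K/W from overall coefficients in W/(m^2*K)."""
    return 1.0 / u_fouled - 1.0 / u_clean

print(fouling_factor(u_clean=1200.0, u_fouled=950.0))  # ~2.2e-4 m^2*K/W
```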

  13. Communication analyses of plant operator crews

    International Nuclear Information System (INIS)

    Elucidation of crew communication aspects is required to improve the man-man interface which supports operators' diagnoses and decisions. Experiments to clarify operator performance under abnormal conditions were evaluated by protocol analyses, interviews, etc., using a training simulator. We had the working hypothesis, based on experimental observations, that operator performance can be evaluated by analysis of crew communications. The following four approaches were tried to evaluate operator performance. (1) Crew performance was quantitatively evaluated by the number of tasks undertaken by an operator crew. (2) The group thinking process was clarified by cognition-communication flow. (3) The group response process was clarified by movement flow. (4) Quantitative indexes for evaluating crew performance were considered to be represented by the amount of information effectively exchanged among operators. (author)

  14. Analysing Medieval Urban Space; a methodology

    Directory of Open Access Journals (Sweden)

    Marlous L. Craane MA

    2007-08-01

    Full Text Available This article has been written in reaction to recent developments in medieval history and archaeology, to study not only the buildings in a town but also the spaces that hold them together. It discusses a more objective and interdisciplinary approach for analysing urban morphology and use of space. It proposes a 'new' methodology by combining town plan analysis and space syntax. This methodology was trialled on the city of Utrecht in the Netherlands. By comparing the results of this 'new' methodology with the results of previous, more conventional, research, this article shows that space syntax can be applied successfully to medieval urban contexts. It does this by demonstrating a strong correlation between medieval economic spaces and the most integrated spaces, just as is found in the study of modern urban environments. It thus provides a strong basis for the use of this technique in future research of medieval urban environments.

  15. Preclosure Consequence Analyses for License Application

    International Nuclear Information System (INIS)

    The purpose of this calculation is to demonstrate that the preclosure performance objectives, specified in 10 CFR 63.111(a) and 10 CFR 63.111(b) [DIRS 173273], have been met for the proposed design and operations in the geologic repository operations area. Radiological consequence analyses are performed for potential releases and direct radiation from normal operations in surface and subsurface facilities and from Category 1 and Category 2 event sequences during the preclosure period. Surface releases from normal repository operations are primarily from radionuclides released from opening a transportation cask during dry transfer operations of spent nuclear fuel (SNF) in Dry Transfer Facility 1 (DTF 1), Dry Transfer Facility 2 (DTF 2), or the Fuel Handling Facility (FHF). Subsurface releases from normal repository operations are from resuspension of waste package surface contamination and neutron activation of ventilated air and silica dust from host rock in the emplacement drifts. Preclosure performance objectives are discussed and summarized

  16. DEPUTY: analysing architectural structures and checking style

    International Nuclear Information System (INIS)

    The DepUty (dependencies utility) can be classified as a project and process management tool. The main goal of DepUty is to assist, by means of source code analysis and graphical representation using UML, in understanding dependencies of sub-systems and packages in CMS Object Oriented software, to understand architectural structure, and to schedule code release in modularised integration. It also allows a newcomer to more easily understand the global structure of CMS software, and to avoid circular dependencies up-front or re-factor the code, in case it was already too close to the edge of non-maintainability. The authors discuss the various views DepUty provides to analyse package dependencies, and illustrate both the metrics and style checking facilities it provides.

  17. Attitude stability analyses for small artificial satellites

    International Nuclear Information System (INIS)

    The objective of this paper is to analyze the stability of the rotational motion of a symmetrical spacecraft in a circular orbit. The equilibrium points and regions of stability are established when components of the gravity gradient torque acting on the spacecraft are included in the equations of rotational motion, which are described by Andoyer variables. The nonlinear stability of the equilibrium points of the rotational motion is analysed here by the Kovalev-Savchenko theorem, which makes it possible to verify whether they remain stable under the influence of higher-order terms of the normal Hamiltonian. In this paper, numerical simulations are made for a small hypothetical artificial satellite. Several stable equilibrium points were determined, and regions around these points have been established by variations in the orbital inclination and in the spacecraft's principal moments of inertia. The present analysis can directly contribute to the maintenance of the spacecraft's attitude

  18. Analysing Attrition in Outsourced Software Project

    Directory of Open Access Journals (Sweden)

    Umesh Rao Hodeghatta

    2015-01-01

    Full Text Available Information systems (IS) outsourcing has grown as a major business phenomenon, and is widely accepted as a business tool. Software outsourcing companies provide expertise, knowledge and capabilities to their clients by taking up projects both onsite and offsite. These companies face numerous challenges, including attrition of project members. Attrition is a major challenge experienced by the outsourcing companies as it has severe impact on business, revenues and profitability. In this paper, attrition data of a major software outsourcing company were analysed, and an attempt to find the reason for attrition is also made. The data analysis was based on the data collected by an outsourcing company over a period of two years for a major client. The results show that client-initiated attrition can have an impact on the project, and that members quit the outsourcing company due to client-initiated ramp-down without revealing the reason.

  19. Digital analyses of cartometric Fruska Gora guidelines

    Directory of Open Access Journals (Sweden)

    Živković Dragica

    2013-01-01

    Full Text Available Modern geomorphological research has been using quantitative statistical and cartographic methods for analysing topographic relief features and their mutual connections on the grounds of good-quality numeric parameters. Topographic features are important for natural activities. Important morphological characteristics are precisely the slope angle, hypsometry, topographic exposition and so on. Small yet unknown relief slants can deeply affect land configuration, hypsometry, topographic exposition etc. Expositions modify the light and heat of interconnected phenomena: soil and air temperature, soil disintegration, the length of the vegetation period, the complexity of photosynthesis, the fruitfulness of agricultural crops, the height of the snow limit etc. [Project of the Ministry of Science of the Republic of Serbia, No. 176008 and No. III44006]

  20. Design and analyses of clinical trials

    International Nuclear Information System (INIS)

    The course will use RTOG studies to illustrate design and analysis issues for phase I, phase II, and phase III trials. The issues discussed will include types of statistical errors, the selection of study endpoints, choice of the appropriate study population, determination of sample sizes, randomization, and plans for statistical analyses. Estimation of the sample sizes will be discussed for both absolute survival (alive or dead) and cause-specific failure (local failure). For phase III trials, Data Monitoring Committees are now widely used in multi-centered trials. Their main purpose is to determine if there is sufficient evidence to terminate a study for efficacy or safety reasons. The results of two such terminated trials will be used to illustrate the DMC's function. The question of when a trial should be reported at a medical meeting and in the literature will be explored. The emphasis will be on concepts, and statistical notation will be kept to a minimum
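
    For the survival endpoints mentioned above, a standard sample-size building block is Schoenfeld's formula for the required number of events under 1:1 randomization. The sketch below is a generic illustration of that formula, not an RTOG-specific rule.

```python
import math
from scipy.stats import norm

def required_events(hr, alpha=0.05, power=0.80):
    """Schoenfeld: events needed to detect hazard ratio `hr`, 1:1 allocation."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(4 * z**2 / math.log(hr) ** 2)

print(required_events(hr=0.75))  # about 380 events for a 25% hazard reduction
```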

  2. The radiation analyses of ITER lower ports

    Energy Technology Data Exchange (ETDEWEB)

    Petrizzi, L., E-mail: petrizzi@frascati.enea.it [Associazione EURATOM-ENEA sulla Fusione, Via Enrico Fermi 45, 00044 Frascati, Rome (Italy); Brolatti, G. [Associazione EURATOM-ENEA sulla Fusione, Via Enrico Fermi 45, 00044 Frascati, Rome (Italy); Martin, A.; Loughlin, M. [ITER Organization, Cadarache, 13108 St Paul-lez-Durance (France); Moro, F.; Villari, R. [Associazione EURATOM-ENEA sulla Fusione, Via Enrico Fermi 45, 00044 Frascati, Rome (Italy)

    2010-12-15

    The ITER Vacuum Vessel has upper, equatorial, and lower ports used for equipment installation, diagnostics, heating and current drive systems, cryo-vacuum pumping, and access inside the vessel for maintenance. At the level of the divertor, the nine lower ports for remote handling, cryo-vacuum pumping and diagnostics are inclined downwards and toroidally located every 40°. The cryopump port additionally has a branch to accommodate a second cryopump. The ports, as openings in the Vacuum Vessel, permit radiation streaming out of the vessel, which affects the heating in the components in the outer regions of the machine inside and outside the ports. Safety concerns are also raised with respect to the dose after shutdown at the cryostat behind the ports: in such zones the radiation dose level must be kept below the regulatory limit to allow personnel access for maintenance purposes. Neutronic analyses have been required to qualify the ITER project related to the lower ports. A 3-D model was used to take into account full details of the ports and the lower machine surroundings. MCNP version 5 1.40 has been used with the FENDL 2.1 nuclear data library. The ITER 40° model distributed by the ITER Organization was developed in the lower part to include the relevant details. The results of a first analysis, focused on the cryopump system only, were recently published. In this paper more complete data on the cryopump port and analyses for the remote handling port and the diagnostic rack are presented; the results of both analyses give a complete map of the radiation loads in the outer divertor ports. Nuclear heating, dpa, tritium production, and dose rates after shutdown are provided and the implications for the design are discussed.

  3. TRAC analyses for CCTF and SCTF tests and UPTF design/operation

    International Nuclear Information System (INIS)

    The 2D/3D Program is a multinational (Germany, Japan, and the United States) experimental and analytical nuclear reactor safety research program. The Los Alamos analysis effort is functioning as a vital part of the 2D/3D program. The CCTF and SCTF analyses have demonstrated that TRAC-PF1 can correctly predict multidimensional, nonequilibrium behavior in large-scale facilities prototypical of actual PWRs. Through these and future TRAC analyses the experimental findings can be related from facility to facility, and the results of this research program can be directly related to licensing concerns affecting actual PWRs.

  4. Proposed Testing to Assess the Accuracy of Glass-To-Metal Seal Stress Analyses.

    Energy Technology Data Exchange (ETDEWEB)

    Chambers, Robert S.; Emery, John M; Tandon, Rajan; Antoun, Bonnie R.; Stavig, Mark E.; Newton, Clay S.; Gibson, Cory S; Bencoe, Denise N.

    2014-09-01

    The material characterization tests conducted on 304L VAR stainless steel and Schott 8061 glass have provided higher fidelity data for calibration of material models used in Glass-to-Metal (GTM) seal analyses. Specifically, a Thermo-Multi-Linear Elastic Plastic (thermo-MLEP) material model has been defined for SS304L and the Simplified Potential Energy Clock nonlinear viscoelastic model has been calibrated for the S8061 glass. To assess the accuracy of finite element stress analyses of GTM seals, a suite of tests is proposed to provide data for comparison to model predictions.

  5. Nonlinear dynamic analyses of seismic tests of a modified scale model PWR primary coolant loop

    International Nuclear Information System (INIS)

    Simplified and detailed nonlinear piping analysis methods were used in post-test predictions of the test behavior of a 1/2.5-scale modified primary loop of a Japanese PWR system at different levels of seismic loading. The testing was conducted using the Tadotsu Large-Scale Vibration Table Facility in Japan. The simplified nonlinear analyses used the refined incremental hinge method and the detailed nonlinear analyses used the implicit integration time history option of the ABAQUS computer code. This paper describes the tests, analysis techniques, and computer models. The analysis-to-test comparisons are discussed and conclusions and recommendations are provided

  7. Genomic Prediction in Barley

    DEFF Research Database (Denmark)

    Edriss, Vahid; Cericola, Fabio; Jensen, Jens D;

    2015-01-01

    Genomic prediction uses markers (SNPs) across the whole genome to predict individual breeding values at an early growth stage, potentially before large-scale phenotyping. One of the applications of genomic prediction in plant breeding is to identify the best individual candidate lines to contribute to the next generation. The main goal of this study was to assess the potential of using genomic prediction in a commercial barley breeding program. The data used in this study were from the Nordic Seed company, which is located in Denmark. Around 350 advanced lines were genotyped with the 9K barley chip from Illumina. Traits used in this study were grain yield, plant height and heading date. Heading date is the number of days it takes after 1st June for a plant to head. Heritabilities were 0.33, 0.44 and 0.48 for yield, height and heading, respectively, for the average of nine plots. The GBLUP model was used for genomic...
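
    A minimal sketch of the GBLUP computation is given below: a VanRaden-style genomic relationship matrix built from SNP genotypes, and breeding values from the mixed-model equations in their ridge form. The dimensions echo the ~350 lines and 9K markers above, but the data, heritability and coding details are placeholders, not the study's.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 350, 9000
M = rng.integers(0, 3, (n, m)).astype(float)   # genotypes coded 0/1/2
p = M.mean(axis=0) / 2                         # allele frequencies
Z = M - 2 * p                                  # centred genotypes
G = Z @ Z.T / (2 * (p * (1 - p)).sum())        # genomic relationship matrix

h2 = 0.4                                       # assumed heritability
y = rng.normal(0, 1, n)                        # placeholder phenotypes
lam = (1 - h2) / h2                            # variance ratio sigma_e^2/sigma_g^2
mu = y.mean()
# GBLUP breeding values: G (G + lambda I)^(-1) (y - mu)
gebv = G @ np.linalg.solve(G + lam * np.eye(n), y - mu)
print("top 5 candidate lines:", np.argsort(gebv)[::-1][:5])
```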

  8. Predicted value of $0\,\nu\beta\beta$ ...

    CERN Document Server

    Maedan, Shinji

    2016-01-01

    Assuming that the lightest neutrino mass $m_0$ is measured, we study the influence of the error of the measured $m_0$ on the uncertainty of the predicted value of the neutrinoless double beta decay ($0\,\nu\beta\beta$) ...
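
    For orientation, the standard effective-mass relation behind such predictions is reproduced below in its generic textbook form; the paper's exact parametrization is not reproduced here.

```latex
\begin{align}
  \langle m_{\beta\beta}\rangle &= \Big|\sum_{i=1}^{3} U_{ei}^{2}\, m_i\Big|,\\
  m_1 &= m_0,\quad
  m_2 = \sqrt{m_0^2 + \Delta m_{21}^2},\quad
  m_3 = \sqrt{m_0^2 + \Delta m_{31}^2}
  \quad\text{(normal ordering)},
\end{align}
```

    so an error in the measured $m_0$ propagates into $\langle m_{\beta\beta}\rangle$ through all three mass eigenvalues.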

  9. Predictable grammatical constructions

    DEFF Research Database (Denmark)

    Lucas, Sandra

    2015-01-01

    My aim in this paper is to provide evidence from diachronic linguistics for the view that some predictable units are entrenched in grammar and consequently in human cognition, in a way that makes them functionally and structurally equal to nonpredictable grammatical units, suggesting that these predictable units should be considered grammatical constructions on a par with the nonpredictable constructions. Frequency has usually been seen as the only possible argument speaking in favor of viewing some formally and semantically fully predictable units as grammatical constructions. However, this paper ... semantically and formally predictable. Despite this difference, [méllo INF], like the other future periphrases, seems to be highly entrenched in the cognition (and grammar) of Early Medieval Greek language users, and consequently a grammatical construction. The syntactic evidence speaking in favor of [méllo...

  10. Robust Distributed Online Prediction

    CERN Document Server

    Dekel, Ofer; Shamir, Ohad; Xiao, Lin

    2010-01-01

    The standard model of online prediction deals with serial processing of inputs by a single processor. However, in large-scale online prediction problems, where inputs arrive at a high rate, an increasingly common necessity is to distribute the computation across several processors. A non-trivial challenge is to design distributed algorithms for online prediction, which maintain good regret guarantees. In \\cite{DMB}, we presented the DMB algorithm, which is a generic framework to convert any serial gradient-based online prediction algorithm into a distributed algorithm. Moreover, its regret guarantee is asymptotically optimal for smooth convex loss functions and stochastic inputs. On the flip side, it is fragile to many types of failures that are common in distributed environments. In this companion paper, we present variants of the DMB algorithm, which are resilient to many types of network failures, and tolerant to varying performance of the computing nodes.

  11. Insights from Severe Accident Analyses for Verification of VVER SAMG

    International Nuclear Information System (INIS)

    The severe accident analyses of a simultaneous rupture of all four steam lines (case a), simultaneous occurrence of a LOCA with SBO (case b), and station blackout alone (case c) were performed with the computer code ASTEC V2r2 for a typical VVER-1000. The results obtained will be used for verification of severe accident provisions and Severe Accident Management Guidelines (SAMG). Auxiliary feedwater and emergency core cooling systems are modelled as boundary conditions. The ICARE module is used to simulate the reactor core, which is divided into five radial regions by grouping similarly powered fuel assemblies together. Initially, the CESAR module computes the thermal hydraulics in the primary and secondary circuits. As soon as core uncovery begins, the ICARE module is actuated based on certain parameters; from then on, ICARE computes the thermal hydraulics in the core, bypass, downcomer and lower plenum, while CESAR handles the remaining components in the primary and secondary loops. The CPA module is used to simulate the containment and to predict the thermal-hydraulic and hydrogen behaviour there. The accident sequences were selected so that they cover low/high pressure and slow/fast core damage progression events: the simulations included slow progression events at high pressure and fast progression at low primary pressure. An analysis was also carried out for the case of SBO with opening of the PORVs when the core exit temperature exceeds a certain value, as part of the SAMG. A time step sensitivity study was carried out for the LOCA with SBO. In general, the trends and magnitudes of the parameters are as expected. The key results of the above analyses are presented in this paper. (author)

  12. Nuclear level density predictions

    OpenAIRE

    Bucurescu Dorel; von Egidy Till

    2015-01-01

    Simple formulas depending only on nuclear masses were previously proposed for the parameters of the Back-Shifted Fermi Gas (BSFG) model and of the Constant Temperature (CT) model of the nuclear level density, respectively. They are now applied for the prediction of the level density parameters of all nuclei with available masses. Both masses from the new 2012 mass table and from different models are considered and the predictions are discussed in connection with nuclear regions most affected ...
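
    For reference, the two level-density models named above have the following standard textbook forms (the cited work predicts the parameters $a$, $E_1$ and $T$, $E_0$ from nuclear masses; $\sigma$ is the spin cut-off factor):

    ```latex
    \rho_{\mathrm{BSFG}}(E) =
        \frac{\exp\!\left(2\sqrt{a\,(E - E_1)}\right)}
             {12\sqrt{2}\,\sigma\, a^{1/4}\,(E - E_1)^{5/4}},
    \qquad
    \rho_{\mathrm{CT}}(E) = \frac{1}{T}\,\exp\!\left(\frac{E - E_0}{T}\right)
    ```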

  13. Predictive graph mining

    OpenAIRE

    Karwath, Andreas; De Raedt, Luc

    2004-01-01

    Graph mining approaches are extremely popular and effective in molecular databases. The vast majority of these approaches first derive interesting, i.e. frequent, patterns and then use these as features to build predictive models. Rather than building these models in a two step indirect way, the SMIREP system introduced in this paper, derives predictive rule models from molecular data directly. SMIREP combines the SMILES and SMARTS representation languages that are popular in computational ch...

  14. Operational Dust Prediction

    Science.gov (United States)

    Benedetti, Angela; Baldasano, Jose M.; Basart, Sara; Benincasa, Francesco; Boucher, Olivier; Brooks, Malcolm E.; Chen, Jen-Ping; Colarco, Peter R.; Gong, Sunlin; Huneeus, Nicolas; Jones, Luke; Lu, Sarah; Menut, Laurent; Morcrette, Jean-Jacques; Mulcahy, Jane; Nickovic, Slobodan; Garcia-Pando, Carlos P.; Reid, Jeffrey S.; Sekiyama, Thomas T.; Tanaka, Taichu Y.; Terradellas, Enric; Westphal, Douglas L.; Zhang, Xiao-Ye; Zhou, Chun-Hong

    2014-01-01

    Over the last few years, numerical prediction of dust aerosol concentration has become prominent at several research and operational weather centres due to growing interest from diverse stakeholders, such as solar energy plant managers, health professionals, aviation and military authorities and policymakers. Dust prediction in numerical weather prediction-type models faces a number of challenges owing to the complexity of the system. At the centre of the problem is the vast range of scales required to fully account for all of the physical processes related to dust. Another limiting factor is the paucity of suitable dust observations available for model evaluation and assimilation. This chapter discusses in detail numerical prediction of dust with examples from systems that are currently providing dust forecasts in near real-time or are part of international efforts to establish daily provision of dust forecasts based on multi-model ensembles. The various models are introduced and described along with an overview on the importance of dust prediction activities and a historical perspective. Assimilation and evaluation aspects in dust prediction are also discussed.

  15. Experimental PVT property analyses for Athabasca bitumen

    Energy Technology Data Exchange (ETDEWEB)

    Ashrafi, Mohammad; Souraki, Yaser; Karimaie, Hassan; Torsaeter, Ole [Norwegian University of Science and Technology (Norway); Bjorkvik, Bard J.A. [SINTEF Petroleum Research (Norway)

    2011-07-01

    Studying fluid behavior in a reservoir requires accurate and complete data on the rock system, fluid properties, and rock-fluid interactions. PVT properties are among the most critical data that reservoir engineers need to evaluate a reservoir. This paper presents an experimental study of selected PVT properties of Athabasca bitumen. The viscosity of the Athabasca heavy crude was measured using a rotational viscometer; these viscosity data are a reliable input for simulation purposes. The Athabasca oil was characterized using gas chromatography analysis. The whole-sample molar mass was measured at 534 g/mol by cryoscopy, and density was also measured. Based on the experimental study, a formula was derived for predicting Athabasca bitumen density in the temperature and pressure range studied. The interfacial tension between oil and steam was measured using the pendant drop method and found to be between 18 and 25 mN/m.

  16. Database-Driven Analyses of Astronomical Spectra

    Science.gov (United States)

    Cami, Jan

    2012-03-01

    Spectroscopy is one of the most powerful tools to study the physical properties and chemical composition of very diverse astrophysical environments. In principle, each nuclide has a unique set of spectral features; thus, establishing the presence of a specific material at astronomical distances requires no more than finding a laboratory spectrum of the right material that perfectly matches the astronomical observations. Once the presence of a substance is established, a careful analysis of the observational characteristics (wavelengths or frequencies, intensities, and line profiles) allows one to determine many physical parameters of the environment in which the substance resides, such as temperature, density, velocity, and so on. Because of this great diagnostic potential, ground-based and space-borne astronomical observatories often include instruments to carry out spectroscopic analyses of various celestial objects and events. Of particular interest is molecular spectroscopy at infrared wavelengths. From the spectroscopic point of view, molecules differ from atoms in their ability to vibrate and rotate, and quantum physics inevitably causes those motions to be quantized. The energies required to excite vibrations or rotations are such that vibrational transitions generally occur at infrared wavelengths, whereas pure rotational transitions typically occur at sub-mm wavelengths. Molecular vibration and rotation are coupled though, and thus at infrared wavelengths, one commonly observes a multitude of ro-vibrational transitions (see Figure 13.1). At lower spectral resolution, all transitions blend into one broad ro-vibrational molecular band whose precise position depends on, among other things, the isotopes involved. Molecular spectroscopy thus allows us to see a difference of one neutron in an atomic nucleus that is located at astronomical distances! Since the detection of the first interstellar molecules (the CH [21] and CN [14] radicals), more than 150 species have been detected in space, ranging in size from diatomic
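
    For readers unfamiliar with the notation, the ro-vibrational term values described above follow, to first order, the standard vibrating-rotor expression (a textbook formula, not taken from the chapter itself); the P and R branches seen in spectra such as Figure 13.1 correspond to ΔJ = −1 and ΔJ = +1:

    ```latex
    E(v, J)/hc = \omega_e\!\left(v + \tfrac{1}{2}\right) + B_v\, J(J + 1)
    ```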

  17. High performance liquid chromatography in pharmaceutical analyses.

    Science.gov (United States)

    Nikolin, Branko; Imamović, Belma; Medanhodzić-Vuk, Saira; Sober, Miroslav

    2004-05-01

    Over the last ten years, in pre-market testing of drugs and their quality control, high performance liquid chromatography has replaced numerous spectroscopic methods and gas chromatography in quantitative and qualitative analysis. In the first period of HPLC application it was thought that it would become a method complementary to gas chromatography; today, however, it has nearly completely replaced gas chromatography in pharmaceutical analysis. The use of a liquid mobile phase, with the possibility of modifying its polarity during chromatography and all other modifications of the mobile phase depending on the characteristics of the substance being tested, is a great advantage in the separation process compared to other methods. The wide choice of stationary phases is the next factor enabling good separation. The separation column is connected to specific and sensitive detector systems - spectrofluorimetric, diode-array and electrochemical detectors, as well as hyphenated systems such as HPLC-MS and HPLC-NMR - and these are the basic elements on which the wide and effective application of the HPLC method is based. The purpose of HPLC analysis of any drug is to confirm the identity of the drug, provide quantitative results, and monitor the progress of therapy of a disease. Figure 1 shows a chromatogram obtained for the plasma of depressed patients 12 h before oral administration of dexamethasone. HPLC may also be used to further our understanding of normal and disease processes in the human body through biomedical and therapeutic research during investigations prior to drug registration. The analysis of drugs and metabolites in biological fluids, particularly plasma, serum or urine, is one of the most demanding but also one of the most common uses of high performance liquid chromatography. Blood, plasma or serum contains numerous endogenous

  18. Hungarian approach to LOCA analyses for SARs

    International Nuclear Information System (INIS)

    The Hungarian AGNES project, carried out in the period 1992-94, aimed to reassess the safety of the Paks NPP using state-of-the-art techniques. The project comprised - among others - a complete design basis accident (DBA) analysis. The major part of the thermal-hydraulic analyses was performed with the RELAP5/mod2.5/V251 code version using a conservative approach. In the medium-size LOCA calculations and the PTS studies, the six reactor cooling loops of the WWER-440/213 system were modelled by three loops (a single, a double and a triple loop). In the further developed version of the input model used in small break LOCA and other DBA analyses, the six loops were modelled separately. The nodalisation schemes of the reactor vessel and the pressurizer, as well as of the single primary loops, are identical in the two input models. For the six-loop input model, the trip cards, general tables and control variables are generated using TROPIC 4.0, a RELAP5 object-oriented pre-processing interactive code received from TRACTEBEL Belgium. The six-loop input model for the WWER-440/V213 system was verified against data from two operational transients measured at the Paks NPP. The analysis of large break LOCAs, where the combined simultaneous upper plenum and downcomer injection results in a rather complicated process during the reflooding phase, was carried out using the ATHLET mod 1.1 Cycle code version (developed by GRS) in the framework of a bilateral German-Hungarian cooperation agreement, using a two-loop (1+5) input model. Later, the application of best estimate methodology gained ground in our safety analysis activities. In recent years AEKI, in the framework of different projects such as the US CAMP activity, the EU PHARE and 5th Framework Programmes, as well as national projects supporting plant operation, has also performed many LOCA analyses, including primary-to-secondary leakages and feedwater and steam line breaks. These can serve as preparation for a new DBA analysis project

  19. Seismic Soil-Structure Interaction Analyses of a Deeply Embedded Model Reactor – SASSI Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Nie J.; Braverman J.; Costantino, M.

    2013-10-31

    This report summarizes the SASSI analyses of a deeply embedded reactor model performed by BNL and CJC and Associates, as part of the seismic soil-structure interaction (SSI) simulation capability project for the NEAMS (Nuclear Energy Advanced Modeling and Simulation) Program of the Department of Energy. The SASSI analyses included three cases: 0.2 g, 0.5 g, and 0.9 g, all of which refer to nominal peak accelerations at the top of the bedrock. The analyses utilized the modified subtraction method (MSM) for performing the seismic SSI evaluations. Each case consisted of two analyses: input motion in one horizontal direction (X) and input motion in the vertical direction (Z), both of which utilized the same in-column input motion. Besides providing SASSI results for use in comparison with the time domain SSI results obtained using the DIABLO computer code, this study also leads to the recognition that the frequency-domain method should be modernized so that it can better serve its mission-critical role for analysis and design of nuclear power plants.

  20. Uncertainty analyses in systems modeling process

    International Nuclear Information System (INIS)

    In the context of Probabilistic Safety Assessment (PSA), uncertainty analyses play an important role. The objective is to ensure qualitative evaluation and quantitative estimation of PSA Level 1 results (the core damage frequency, the accident sequence frequencies, the top event probabilities, etc.). An application that enables uncertainty calculations by propagating probability distributions through the fault tree model has been developed. It uses the moment method and the Monte Carlo method. The application has been integrated into a computer program that allocates the reliability data, quantifies the human errors and labels the components in a unique way. The reliability data used at the Institute for Nuclear Research (INR) Pitesti for the Cernavoda Probabilistic Safety Evaluation (CPSE) studies is a generic database. Taking into account the status of the reliability database and the way an error factor for a lognormal failure rate distribution is calculated, the database has been completed with an error factor for each record. This paper presents the module that performs the uncertainty analysis and gives an example of uncertainty analysis at the fault tree level. (authors)
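
    As a rough illustration of the Monte Carlo propagation described above (a generic sketch, not the INR application), the snippet below samples lognormal failure probabilities, parameterized by a median and an error factor EF defined so that the 95th percentile equals median × EF, and propagates them through a small invented fault tree.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    N = 100_000

    def lognormal_samples(median, error_factor, n):
        """EF = 95th percentile / median, hence sigma = ln(EF) / 1.645."""
        sigma = np.log(error_factor) / 1.645
        return rng.lognormal(np.log(median), sigma, n)

    # hypothetical fault tree: TOP = (A AND B) OR C
    a = lognormal_samples(1e-3, 3.0, N)
    b = lognormal_samples(5e-3, 3.0, N)
    c = lognormal_samples(1e-4, 10.0, N)
    top = 1.0 - (1.0 - a * b) * (1.0 - c)      # OR gate via the complement rule

    print("mean  :", top.mean())
    print("median:", np.median(top))
    print("95th  :", np.percentile(top, 95))
    ```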

  1. Analyses of demand response in Denmark

    International Nuclear Information System (INIS)

    Due to characteristics of the power system, costs of producing electricity vary considerably over short time intervals. Yet, many consumers do not experience corresponding variations in the price they pay for consuming electricity. The topic of this report is: are consumers willing and able to respond to short-term variations in electricity prices, and if so, what is the social benefit of consumers doing so? Taking Denmark and the Nord Pool market as a case, the report focuses on what is known as short-term consumer flexibility or demand response in the electricity market. With focus on market efficiency, efficient allocation of resources and security of supply, the report describes demand response from a micro-economic perspective and provides empirical observations and case studies. The report aims at evaluating benefits from demand response. However, only elements contributing to an overall value are presented. In addition, the analyses are limited to benefits for society, and costs of obtaining demand response are not considered. (au)

  2. Causal mediation analyses with rank preserving models.

    Science.gov (United States)

    Have, Thomas R Ten; Joffe, Marshall M; Lynch, Kevin G; Brown, Gregory K; Maisto, Stephen A; Beck, Aaron T

    2007-09-01

    We present a linear rank preserving model (RPM) approach for analyzing mediation of a randomized baseline intervention's effect on a univariate follow-up outcome. Unlike standard mediation analyses, our approach does not assume that the mediating factor is also randomly assigned to individuals in addition to the randomized baseline intervention (i.e., sequential ignorability), but does make several structural interaction assumptions that currently are untestable. The G-estimation procedure for the proposed RPM represents an extension of the work on direct effects of randomized intervention effects for survival outcomes by Robins and Greenland (1994, Journal of the American Statistical Association 89, 737-749) and on intervention non-adherence by Ten Have et al. (2004, Journal of the American Statistical Association 99, 8-16). Simulations show good estimation and confidence interval performance by the proposed RPM approach under unmeasured confounding relative to the standard mediation approach, but poor performance under departures from the structural interaction assumptions. The trade-off between these assumptions is evaluated in the context of two suicide/depression intervention studies. PMID:17825022

  3. Phylogenomic Analyses Support Traditional Relationships within Cnidaria.

    Directory of Open Access Journals (Sweden)

    Felipe Zapata

    Full Text Available Cnidaria, the sister group to Bilateria, is a highly diverse group of animals in terms of morphology, lifecycles, ecology, and development. How this diversity originated and evolved is not well understood because phylogenetic relationships among major cnidarian lineages are unclear, and recent studies present contrasting phylogenetic hypotheses. Here, we use transcriptome data from 15 newly-sequenced species in combination with 26 publicly available genomes and transcriptomes to assess phylogenetic relationships among major cnidarian lineages. Phylogenetic analyses using different partition schemes and models of molecular evolution, as well as topology tests for alternative phylogenetic relationships, support the monophyly of Medusozoa, Anthozoa, Octocorallia, Hydrozoa, and a clade consisting of Staurozoa, Cubozoa, and Scyphozoa. Support for the monophyly of Hexacorallia is weak due to the equivocal position of Ceriantharia. Taken together, these results further resolve deep cnidarian relationships, largely support traditional phylogenetic views on relationships, and provide a historical framework for studying the evolutionary processes involved in one of the most ancient animal radiations.

  4. Isolation and analyses of axonal ribonucleoprotein complexes.

    Science.gov (United States)

    Doron-Mandel, Ella; Alber, Stefanie; Oses, Juan A; Medzihradszky, Katalin F; Burlingame, Alma L; Fainzilber, Mike; Twiss, Jeffery L; Lee, Seung Joon

    2016-01-01

    Cytoskeleton-dependent RNA transport and local translation in axons are gaining increased attention as key processes in the maintenance and functioning of neurons. Specific axonal transcripts have been found to play roles in many aspects of axonal physiology including axon guidance, axon survival, axon to soma communication, injury response and regeneration. This axonal transcriptome requires long-range transport that is achieved by motor proteins carrying transcripts as messenger ribonucleoprotein (mRNP) complexes along microtubules. Other than transport, the mRNP complex plays a major role in the generation, maintenance, and regulation of the axonal transcriptome. Identification of axonal RNA-binding proteins (RBPs) and analyses of the dynamics of their mRNPs are of high interest to the field. Here, we describe methods for the study of interactions between RNA and proteins in axons. First, we describe a protocol for identifying binding proteins for an RNA of interest by using RNA affinity chromatography. Subsequently, we discuss immunoprecipitation (IP) methods allowing the dissection of protein-RNA and protein-protein interactions in mRNPs under various physiological conditions. PMID:26794529

  5. Activation analyses for different fusion structural alloys

    International Nuclear Information System (INIS)

    The leading candidate structural materials, viz., the vanadium alloys, the nickel- or manganese-stabilized austenitic steels, and the ferritic steels, are analysed in terms of their induced activation in the TPSS fusion power reactor. The TPSS reactor has 1950 MW fusion power and inboard and outboard average neutron wall loadings of 3.75 and 5.35 MW/m2, respectively. The results show that, after one year of continuous operation, the vanadium alloys have the least radioactivity at reactor shutdown. The maximum difference between the induced radioactivity in the vanadium alloys and in the other iron-based alloys occurs at about 10 years after reactor shutdown. At this time, the total reactor radioactivity, using the vanadium alloys, is about two orders of magnitude less than the total reactor radioactivity utilizing any other alloy. The difference is even larger in the first wall, where the vanadium first-wall activation is three orders of magnitude less than that of the other alloys. 2 refs., 7 figs

  6. Reliability Analyses of Groundwater Pollutant Transport

    Energy Technology Data Exchange (ETDEWEB)

    Dimakis, Panagiotis

    1997-12-31

    This thesis develops a probabilistic finite element model for the analysis of groundwater pollution problems. Two computer codes were developed: (1) one using the finite element technique to solve the two-dimensional steady state equations of groundwater flow and pollution transport, and (2) a first order reliability method code that can perform a probabilistic analysis of any given analytical or numerical equation. The two codes were connected into one model, PAGAP (Probability Analysis of Groundwater And Pollution). PAGAP can be used to obtain (1) the probability that the concentration at a given point at a given time will exceed a specified value, (2) the probability that the maximum concentration at a given point will exceed a specified value, and (3) the probability that the residence time at a given point will exceed a specified period. PAGAP can be used as a tool for assessment purposes and risk analyses, for instance to assess the efficiency of a proposed remediation technique or to study the effects of parameter distributions for a given problem (sensitivity study). The model has been applied to study the largest self-sustained, precipitation-controlled aquifer in northern Europe, which underlies Oslo's new major airport. 92 refs., 187 figs., 26 tabs.
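
    The kind of exceedance probability PAGAP computes can be illustrated without FORM or finite elements. The sketch below answers question (1) above by plain Monte Carlo over a simplified one-dimensional advection-dispersion solution; all parameter values are invented for the example.

    ```python
    import numpy as np
    from scipy.special import erfc

    rng = np.random.default_rng(7)
    N = 50_000

    # hypothetical input uncertainties
    v = rng.lognormal(np.log(0.5), 0.3, N)     # seepage velocity [m/day]
    D = rng.lognormal(np.log(2.0), 0.5, N)     # dispersion coefficient [m^2/day]

    x, t, C0, limit = 100.0, 365.0, 1.0, 0.1   # receptor at 100 m, one year
    C = 0.5 * C0 * erfc((x - v * t) / (2.0 * np.sqrt(D * t)))

    print("P(C > limit) =", np.mean(C > limit))
    ```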

  7. Consumption patterns and perception analyses of hangwa.

    Science.gov (United States)

    Kwock, Chang Geun; Lee, Min A; Park, So Hyun

    2012-03-01

    Hangwa is a traditional Korean confectionery which, to match current consumption trends, needs marketing strategies to extend its consumption. Therefore, the purpose of this study was to analyze consumers' consumption patterns and perception of Hangwa in order to increase its consumption in the market. A questionnaire was sent to 250 consumers by e-mail from Oct 8 to 23, 2009, and the data from 231 respondents were analyzed in this study. Descriptive statistics, paired-samples t-tests, and importance-performance analyses were conducted using SPSS WIN 17.0. According to the results, Hangwa was purchased mainly 'as a present' (39.8%) and the main reasons for buying it were its 'traditional image' (33.3%) and 'taste' (22.5%). When the importance and performance of attributes considered in purchasing Hangwa were evaluated, performance was assessed to be lower than importance for all attributes. The attributes in the first quadrant, with high importance and high performance, were 'a sanitary process', 'a rigorous quality mark' and 'taste', all related to product quality. The attributes with high importance but low performance were 'popularization through advertisement', 'promotion through mass media', 'conversion of thought on traditional foods', 'a reasonable price' and 'a wide range of price'. In conclusion, Hangwa manufacturers need to diversify products and extend the expiration date based on technologies to promote consumption. In terms of price, Hangwa should become more affordable by lowering the price barrier for consumers who are sensitive to price. PMID:24471065
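
    The importance-performance analysis (IPA) used above classifies attributes into four quadrants around the mean importance and mean performance scores. A minimal sketch, with attribute names taken from the abstract but invented scores:

    ```python
    import pandas as pd

    df = pd.DataFrame({
        "attribute":   ["sanitary process", "quality mark", "taste",
                        "advertisement", "reasonable price"],
        "importance":  [4.6, 4.4, 4.5, 4.2, 4.3],   # hypothetical 5-point scores
        "performance": [4.1, 4.0, 4.2, 2.9, 3.1],
    })

    imp_c, perf_c = df["importance"].mean(), df["performance"].mean()

    def quadrant(row):
        key = (row["importance"] >= imp_c, row["performance"] >= perf_c)
        return {(True, True):   "keep up the good work",
                (True, False):  "concentrate here",
                (False, True):  "possible overkill",
                (False, False): "low priority"}[key]

    df["quadrant"] = df.apply(quadrant, axis=1)
    print(df)
    ```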

  8. Kinematic gait analyses in healthy Golden Retrievers

    Directory of Open Access Journals (Sweden)

    Gabriela C.A. Silva

    2014-12-01

    Full Text Available Kinematic analysis concerns the relative movement between rigid bodies and finds application in gait analysis and other body movements; interpretation of these data, when changes are found, guides the choice of treatment to be instituted. The objective of this study was to standardize the gait of healthy Golden Retriever dogs to assist in the diagnosis and treatment of musculoskeletal disorders. We used a kinematic analysis system to analyse the gait of seven clinically normal female Golden Retrievers, aged between 2 and 4 years and weighing 21.5 to 28 kg. Flexion and extension were described for the shoulder, elbow, carpal, hip, femorotibial and tarsal joints. The gait was characterized as lateral, and the hypothesis of normality was accepted for all variables except the stance phase of the hip and elbow, considering a confidence level of 95% (significance level α = 0.05). The variations were attributed to displacement of the markers during movement and to the duplicated number of evaluations. Kinematic analysis proved to be a consistent method for evaluating movement during canine gait, and the data can be used in the diagnosis and evaluation of canine gait, in comparison with other studies, and in the treatment of dogs with musculoskeletal disorders.

  9. Parametric analyses of fusion-fission systems

    International Nuclear Information System (INIS)

    After a short review of the nuclear reactions relevant to fusion-fission systems, the various types of blankets and characteristic model cases are presented. The fusion-fission system is modelled by its energy flow diagram. The system components and the system as a whole are characterized by 'component parameters' and 'system parameters', all of which are energy ratios. A cost estimate is given for the net energy delivered by the system, and a collection of formulas for the various energies flowing in the system, in terms of the thermal energy delivered by the fusion part, is presented. For sensitivity analysis four reference cases are defined, which combine two plasma confinement schemes (mirror and tokamak) with two fissile fuel cycles (thorium-uranium and uranium-plutonium). The sensitivity of the critical plasma energy multiplication, of the circulating energy fraction, and of the energy cost with respect to changes of the component parameters is analysed. For the mirror case only superconducting magnets are considered, whereas the two tokamak cases take into account both superconducting and normal-conducting coils. A section presenting relations between the plasma energy multiplication and the confinement parameter $n\tau_E$ of driven tokamak plasmas is added for reference. The conclusions summarize the results which could be obtained within the framework of energy balances, cost estimates and their parametric sensitivities. This is supplemented by a list of those issues which lie beyond this scope but have to be taken into account when assessments of fusion-fission systems are made. (orig.)

  10. ANALYSES AND INFLUENCES OF GLAZED BUILDING ENVELOPES

    Directory of Open Access Journals (Sweden)

    Sabina Jordan

    2011-01-01

    Full Text Available The article presents the results of an analytical study of the functioning of glazing at two different yet interacting levels: at the level of the building as a whole, and at that of glazing as a building element. At the building level, analyses were performed on a sample of high-rise business buildings in Slovenia, where the glazing's share of the building envelope was calculated, and estimates of the proportion of shade provided by external blinds were made. It is shown that, especially in the case of modern buildings with large proportions of glazing and buildings with no shading devices, careful glazing design is needed, together with a sound knowledge of energy performance. In the second part of the article, the energy balance values relating to selected types of glazing are presented, including solar control glazing. The paper demonstrates the need for a holistic energy approach to glazing problems, as well as how different types of glazing can be methodically compared, thus improving the design of sustainability-orientated buildings.

  11. Statistical analyses of extreme food habits

    International Nuclear Information System (INIS)

    This report summarizes the results of the project 'Statistical analyses of extreme food habits', which was commissioned by the National Office for Radiation Protection as a contribution to the amendment of the 'General Administrative Regulation to paragraph 45 of the Decree on Radiation Protection: determination of the radiation exposure due to emission of radioactive substances from facilities of nuclear technology'. Its aim is to determine whether the calculation of the radiation ingested by 95% of the population via food intake, as planned in a provisional draft, overestimates the true exposure. If such an overestimation exists, its magnitude should be determined. It was possible to prove the existence of this overestimation, but its magnitude could only roughly be estimated. To identify its real extent, it would be necessary to include the specific activities of the nuclides, which were not available for this investigation. In addition, the report shows how the consumption amounts of different food groups influence each other, and which connections between these amounts should be taken into account in order to estimate the radiation exposure as precisely as possible. (orig.)

  12. Achieving reasonable conservatism in nuclear safety analyses

    International Nuclear Information System (INIS)

    In the absence of methods that explicitly account for uncertainties, seeking reasonable conservatism in nuclear safety analyses can quickly lead to extreme conservatism. The rate of divergence to extreme conservatism is often beyond the expert analysts’ intuitive feeling, but can be demonstrated mathematically. Too much conservatism in addressing the safety of nuclear facilities is not beneficial to society. Using certain properties of lognormal distributions for representation of input parameter uncertainties, example calculations for the risk and consequence of a fictitious facility accident scenario are presented. Results show that there are large differences between the calculated 95th percentiles and the extreme bounding values derived from using all input variables at their upper-bound estimates. Showing the relationship of the mean values to the key parameters of the output distributions, the paper concludes that the mean is the ideal candidate for representation of the value of an uncertain parameter. The mean value is proposed as the metric that is consistent with the concept of reasonable conservatism in nuclear safety analysis, because its value increases towards higher percentiles of the underlying positively skewed distribution with increasing levels of uncertainty. Insensitivity of the results to the actual underlying distributions is briefly demonstrated. - Highlights: • Multiple conservative assumptions can quickly diverge into extreme conservatism. • Mathematics and attractive properties provide basis for wide use of lognormal distribution. • Mean values are ideal candidates for representation of parameter uncertainties. • Mean values are proposed as reasonably conservative estimates of parameter uncertainties
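
    The lognormal properties the paper relies on are standard results (not taken from the paper itself). For X ~ Lognormal(μ, σ²):

    ```latex
    \mathrm{median}(X) = e^{\mu}, \qquad
    \mathbb{E}[X] = e^{\mu + \sigma^{2}/2}, \qquad
    P_{95}(X) = e^{\mu + 1.645\,\sigma}
    ```

    Since $P(X \le \mathbb{E}[X]) = \Phi(\sigma/2)$, the mean sits at an ever higher percentile of the distribution as the uncertainty σ grows, which is exactly the behaviour cited above in favour of the mean as a reasonably conservative estimate.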

  13. WIND SPEED AND ENERGY POTENTIAL ANALYSES

    Directory of Open Access Journals (Sweden)

    A. TOKGÖZLÜ

    2013-01-01

    Full Text Available This paper provides a case study on the application of wavelet techniques to analyze wind speed and energy (a renewable and environmentally friendly form of energy). Solar and wind are the main energy sources that allow farmers to transfer the kinetic energy captured by a windmill to pumping water, drying crops, heating greenhouses, rural electrification or cooking. Larger wind turbines (over 1 MW) can pump enough water for small-scale irrigation. This study initiated a data gathering process for wavelet analyses, considering different scale effects and their role in wind speed and direction variations. The wind data gathering system is mounted at latitude 37° 50' N, longitude 30° 33' E, at a height of 1200 m above mean sea level, on a hill near the Süleyman Demirel University campus. Ten-minute average values of wind speed and direction at two levels (10 m and 30 m above ground level) were recorded by a data logger between July 2001 and February 2002. Wind speed values ranged between 0 m/s and 54 m/s. The annual mean speed value is 4.5 m/s at 10 m above ground level. Prevalent wind
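
    A multilevel discrete wavelet decomposition of the kind applied above can be sketched with PyWavelets; the series below is a synthetic stand-in for the 10-minute wind speed record, and the wavelet (db4) and decomposition level are arbitrary choices.

    ```python
    import numpy as np
    import pywt  # PyWavelets

    rng = np.random.default_rng(3)
    t = np.arange(4096)                        # synthetic 10-minute samples
    speed = 4.5 + 2.0 * np.sin(2 * np.pi * t / 144) + rng.normal(0, 1.0, t.size)

    # one approximation band plus detail bands from coarse to fine
    coeffs = pywt.wavedec(speed, wavelet="db4", level=5)

    labels = ["approximation"] + [f"detail level {5 - i}" for i in range(5)]
    for label, c in zip(labels, coeffs):
        print(f"{label:15s} energy = {np.sum(c**2):12.1f}")
    ```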

  14. PRECLOSURE CONSEQUENCE ANALYSES FOR LICENSE APPLICATION

    Energy Technology Data Exchange (ETDEWEB)

    S. Tsai

    2005-01-12

    Radiological consequence analyses are performed for potential releases from normal operations in surface and subsurface facilities and from Category 1 and Category 2 event sequences during the preclosure period. Surface releases from normal repository operations are primarily from radionuclides released from opening a transportation cask during dry transfer operations of spent nuclear fuel (SNF) in Dry Transfer Facility 1 (DTF 1), Dry Transfer Facility 2 (DTF 2), the Canister Handling facility (CHF), or the Fuel Handling Facility (FHF). Subsurface releases from normal repository operations are from resuspension of waste package surface contamination and neutron activation of ventilated air and silica dust from host rock in the emplacement drifts. The purpose of this calculation is to demonstrate that the preclosure performance objectives, specified in 10 CFR 63.111(a) and 10 CFR 63.111(b), have been met for the proposed design and operations in the geologic repository operations area. Preclosure performance objectives are discussed in Section 6.2.3 and are summarized in Tables 1 and 2.

  15. Textual analysis of discourse: levels or planes of analysis

    Directory of Open Access Journals (Sweden)

    Jean-Michel Adam

    2012-12-01

    Full Text Available The article deals with the theory of Textual Discourse Analysis (ATD), revisiting the Brazilian translation of La linguistique textuelle: introduction à l'analyse textuelle des discours (Cortez, 2008). ATD is framed by three preliminary observations: text linguistics is one of the disciplines of discourse analysis; the text is the object of analysis of ATD; and wherever there is text - that is, recognition of the fact that a sequence of utterances forms a communicative whole - there is an effect of genericity, that is, the inscription of that sequence of utterances in a class of discourses. The theoretical model of ATD is clarified through a re-examination of its Schema 4, which represents eight levels of analysis. ATD is approached from the angle of a twofold requirement - the theoretical reasons and the methodological and didactic reasons that lead to these levels - and the five planes or levels of textual analysis are detailed and illustrated. Finally, parts of the work are taken up and expanded, with further analyses in which new theoretical aspects are detailed.

  16. Use of EBSD Data in Numerical Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Becker, R; Wiland, H

    2000-01-14

    obtained for comparison with the model predictions. More recent work has taken advantage of automated data collection on deformed specimens as a means of collecting detailed and spatially correlated data for model validation. Although it will not be discussed in detail here, another area in which EBSD data is having a great impact is on recrystallization modeling. EBSD techniques can be used to collect data for quantitative microstructural analysis. This data can be used to infer growth kinetics of specific orientations, and this information can be synthesized into more accurate grain growth or recrystallization models. Another role which EBSD techniques may play is in determining initial structures for recrystallization models. A realistic starting structure is vital for evaluating the models, and attempts at predicting realistic structures with finite element simulations are not yet successful. As methodologies and equipment resolution continue to improve, it is possible that measured structures will serve as input for recrystallization models. Simulations have already been run using information obtained manually from a TEM.

  17. Prediction of Factors Determining Changes in Stability in Protein Mutants

    OpenAIRE

    Parthiban, Vijayarangakannan

    2006-01-01

    Analysing the factors behind protein stability is a key research topic in molecular biology and has direct implications for protein structure prediction and protein-protein docking solutions. Changes in protein stability upon point mutations were analysed using a distance-dependent pair potential, representing mainly through-space interactions, and a torsion angle potential, representing neighbouring effects, as a basic statistical mechanical setup for the analysis. The synergetic effect of accessible surface ...

  18. Aircraft noise prediction

    Science.gov (United States)

    Filippone, Antonio

    2014-07-01

    This contribution addresses the state-of-the-art in the field of aircraft noise prediction, simulation and minimisation. The point of view taken in this context is that of comprehensive models that couple the various aircraft systems with the acoustic sources, the propagation and the flight trajectories. After an exhaustive review of the present predictive technologies in the relevant fields (airframe, propulsion, propagation, aircraft operations, trajectory optimisation), the paper addresses items for further research and development. Examples are shown for several airplanes, including the Airbus A319-100 (CFM engines), the Bombardier Dash8-Q400 (PW150 engines, Dowty R408 propellers) and the Boeing B737-800 (CFM engines). Predictions are done with the flight mechanics code FLIGHT. The transfer function between flight mechanics and the noise prediction is discussed in some detail, along with the numerical procedures for validation and verification. Some code-to-code comparisons are shown. It is contended that the field of aircraft noise prediction has not yet reached a sufficient level of maturity. In particular, some parametric effects cannot be investigated, issues of accuracy are not currently addressed, and validation standards are still lacking.

  19. Solar Cycle Prediction

    CERN Document Server

    Petrovay, K

    2010-01-01

    A review of solar cycle prediction methods and their performance is given, including forecasts for cycle 24 and focusing on aspects of the solar cycle prediction problem that have a bearing on dynamo theory. The scope of the review is further restricted to the issue of predicting the amplitude (and optionally the epoch) of an upcoming solar maximum no later than right after the start of the given cycle. Prediction methods form three main groups. Precursor methods rely on the value of some measure of solar activity or magnetism at a specified time to predict the amplitude of the following solar maximum. Their implicit assumption is that each numbered solar cycle is a consistent unit in itself, while solar activity seems to consist of a series of much less tightly intercorrelated individual cycles. Extrapolation methods, in contrast, are based on the premise that the physical process giving rise to the sunspot number record is statistically homogeneous, i.e., the mathematical regularities underlying its variati...

  20. Non-Statistical Methods of Analysing of Bankruptcy Risk

    Directory of Open Access Journals (Sweden)

    Pisula Tomasz

    2015-06-01

    Full Text Available The article focuses on assessing the effectiveness of a non-statistical approach to bankruptcy modelling in enterprises operating in the logistics sector. In order to describe the issue more comprehensively, the aforementioned prediction of the possible negative results of business operations was carried out for companies functioning in the Polish region of Podkarpacie, and in Slovakia. The bankruptcy predictors selected for the assessment of companies operating in the logistics sector included 28 financial indicators characterizing these enterprises in terms of their financial standing and management effectiveness. The purpose of the study was to identify factors (models describing the bankruptcy risk in enterprises in the context of their forecasting effectiveness in a one-year and two-year time horizon. In order to assess their practical applicability the models were carefully analysed and validated. The usefulness of the models was assessed in terms of their classification properties, and the capacity to accurately identify enterprises at risk of bankruptcy and healthy companies as well as proper calibration of the models to the data from training sample sets.

  1. Quantifying Fire Cycle from Dendroecological Records Using Survival Analyses

    Directory of Open Access Journals (Sweden)

    Dominic Cyr

    2016-06-01

    Full Text Available Quantifying fire regimes in the boreal forest ecosystem is crucial for understanding its past and present dynamics, as well as for predicting its future dynamics. Survival analyses have often been used to estimate the fire cycle in eastern Canada because they make it possible to take into account the censored information that is made prevalent by the typically long fire return intervals and the limited scope of the dendroecological methods that are used to quantify them. Here, we assess how the true length of the fire cycle, the short-term temporal variations in fire activity, and the sampling effort affect the accuracy and precision of estimates obtained from two types of parametric survival models, the Weibull and the exponential models, and one non-parametric model obtained with the Cox regression. Then, we apply those results to a case area located in eastern Canada. Our simulation experiment confirms some documented concerns regarding the detrimental effects of temporal variations in fire activity on parametric estimation of the fire cycle. Cox regressions appear to provide the most accurate and robust estimator, being by far the least affected by temporal variations in fire activity. The Cox-based estimate of the fire cycle for the last 300 years in the case study area is 229 years (CI95: 162–407), compared with the likely overestimated 319 years obtained with the commonly used exponential model.
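
    The censoring logic behind these survival estimates can be shown compactly. Under the exponential (constant-hazard) model, the maximum-likelihood fire cycle is simply the total time at risk divided by the number of observed fires; the sketch below uses invented time-since-fire data (a Cox model, which the authors favour, additionally requires covariates and is omitted here).

    ```python
    import numpy as np

    # hypothetical dendroecological records: time since fire (years) per site,
    # right-censored where no fire could be dated within the record's reach
    tsf      = np.array([80, 143, 210, 310, 95, 400, 260, 120, 55, 350], float)
    observed = np.array([1,  1,   0,   0,   1,  0,   1,   1,  1,  0], bool)

    # censored exponential MLE: lambda = events / total exposure time,
    # and the fire cycle is 1 / lambda
    fire_cycle = tsf.sum() / observed.sum()
    print(f"exponential fire-cycle estimate: {fire_cycle:.0f} years")
    ```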

  2. Effects of Anchor Bolts Failures in Steam Explosion Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Seung Hyun; Chang, Yoon-Suk [Kyung Hee University, Yongin (Korea, Republic of); Song, Sungchu; Cho, Yong-Jin [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2015-05-15

    A steam explosion may occur in a nuclear power plant through molten core-coolant interactions when the external reactor vessel cooling strategy fails. This phenomenon can threaten the integrity of the reactor cavity, penetration piping and support structures. Even though extensive research has been performed to predict the influence of a steam explosion, due to the complexity of the physical phenomena and the environmental thermal-hydraulic conditions, it remains one of the possible hazards. A steam explosion can cause intensive and rapid heat transfer and lead to the formation of pressure waves and the production of missiles that may endanger the surrounding reactor cavity wall and associated components due to the resulting dynamic effects. The goal of this research is to examine the structural integrity of RPV (Reactor Pressure Vessel) support structures and anchor bolts under typical ex-vessel steam explosion conditions through FE analyses. In particular, the influence of the failure of the anchor bolts connecting the RPV and the support structures was evaluated. In this paper, the responses of the RPV and support structure to anchor bolt failure were evaluated under typical steam explosion conditions and the following conclusions were derived. The highest maximum stresses were calculated at the support structures under the steam explosion condition with the SVF and without anchor bolt failure. None of the stress values exceeded the corresponding yield strengths. The displacements were high under anchor bolt failure conditions; however, the vertical movements of the major components were small compared to their overall dimensions.

  3. Review of Approximate Analyses of Sheet Forming Processes

    Science.gov (United States)

    Weiss, Matthias; Rolfe, Bernard; Yang, Chunhui; de Souza, Tim; Hodgson, Peter

    2011-08-01

    Approximate models are often used for the following purposes: • in on-line control systems of metal forming processes where calculation speed is critical; • to obtain quick, quantitative information on the magnitude of the main variables in the early stages of process design; • to illustrate the role of the major variables in the process; • as an initial check on numerical modelling; and • as a basis for quick calculations on processes in teaching and training packages. The models often share many similarities; for example, an arbitrary geometric assumption of deformation giving a simplified strain distribution, simple material property descriptions—such as an elastic, perfectly plastic law—and mathematical short cuts such as a linear approximation of a polynomial expression. In many cases, the output differs significantly from experiment and performance or efficiency factors are developed by experience to tune the models. In recent years, analytical models have been widely used at Deakin University in the design of experiments and equipment and as a pre-cursor to more detailed numerical analyses. Examples that are reviewed in this paper include deformation of sandwich material having a weak, elastic core, load prediction in deep drawing, bending of strip (particularly of ageing steel where kinking may occur), process analysis of low-pressure hydroforming of tubing, analysis of the rejection rates in stamping, and the determination of constitutive models by an inverse method applied to bending tests.

  4. Microstructural and compositional analyses of GaN-based nanostructures

    Energy Technology Data Exchange (ETDEWEB)

    Pretorius, Angelika; Mueller, Knut; Rosenauer, Andreas [Section Electron Microscopy, Institute of Solid State Physics, University of Bremen, Otto-Hahn-Allee 1, 28359 Bremen (Germany); Schmidt, Thomas; Falta, Jens [Section Surface Physics, Institute of Solid State Physics, University of Bremen, Otto-Hahn-Allee 1, 28359 Bremen (Germany); Aschenbrenner, Timo; Yamaguchi, Tomohiro; Dartsch, Heiko; Hommel, Detlef [Section Semiconductor Epitaxy, Institute of Solid State Physics, University of Bremen, Otto-Hahn-Allee 1, 28359 Bremen (Germany); Kuebel, Christian [Institute of Nanotechnology, Karlsruher Institute of Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany)

    2011-08-15

    Composition and microstructure of GaN-based island structures and distributed Bragg reflectors (DBRs) were investigated with transmission electron microscopy (TEM). We analysed free-standing InGaN islands and islands capped with GaN. Growth of the islands performed by molecular beam epitaxy (MBE) and metal organic vapour phase epitaxy (MOVPE) resulted in different microstructures. The islands grown by MBE were plastically relaxed. Cap layer deposition resulted in a rapid dissolution of the islands already at early stages of cap layer growth. These findings are confirmed by grazing-incidence X-ray diffraction (GIXRD). In contrast, the islands grown by MOVPE relax only elastically. Strain state analysis (SSA) revealed that the indium concentration increases towards the tips of the islands. For an application as quantum dots, the islands must be embedded into DBRs. Structure and composition of Al$_y$Ga$_{1-y}$N/GaN Bragg reflectors on top of an AlGaN buffer layer and In$_x$Al$_{1-x}$N/GaN Bragg reflectors on top of a GaN buffer layer were investigated. Specifically, structural defects such as threading dislocations (TDs) and inversion domains (IDs) were studied, and we investigated thicknesses, interfaces and interface roughnesses of the layers. As the peak reflectivities of the investigated DBRs do not reach the theoretical predictions, possible reasons are discussed. (Copyright © 2011 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  5. A quantitative approach to analysing cortisol response in the horse.

    Science.gov (United States)

    Ekstrand, C; Ingvast-Larsson, C; Olsén, L; Hedeland, M; Bondesson, U; Gabrielsson, J

    2016-06-01

    The cortisol response to glucocorticoid intervention has, in spite of several studies in horses, not been fully characterized with regard to the determinants of onset, intensity and duration of response. Therefore, dexamethasone and cortisol response data were collected in a study applying a constant rate infusion regimen of dexamethasone (0.17, 1.7 and 17 μg/kg) to six Standardbreds. Plasma was analysed for dexamethasone and cortisol concentrations using UHPLC-MS/MS. Dexamethasone displayed linear kinetics within the concentration range studied. A turnover model with oscillatory behaviour accurately mimicked the cortisol data. The mean baseline concentration range was 34-57 μg/L, the fractional turnover rate 0.47-1.5 1/h, the amplitude parameter 6.8-24 μg/L, the maximum inhibitory capacity 0.77-0.97, the drug potency 6-65 ng/L and the sigmoidicity factor 0.7-30. This analysis provided a better understanding of the time course of the cortisol response in horses, including baseline variability within and between horses and the determinants of the equilibrium concentration-response relationship. The analysis also challenged a protocol for the dexamethasone suppression test design and indicated future improvements to increase the predictability of the test. PMID:26542753
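
    The parameters listed above (fractional turnover rate, amplitude, maximum inhibitory capacity, potency, sigmoidicity) are consistent with a standard indirect-response turnover model in which dexamethasone inhibits cortisol production; a plausible form, not necessarily the authors' exact equation, is:

    ```latex
    \frac{dR}{dt} = k_{\mathrm{in}}(t)\left(1 -
        \frac{I_{\max}\, C^{\gamma}}{IC_{50}^{\gamma} + C^{\gamma}}\right)
        - k_{\mathrm{out}}\, R
    ```

    Here R is the cortisol concentration, C the dexamethasone concentration, $k_{\mathrm{out}}$ the fractional turnover rate, $I_{\max}$ the maximum inhibitory capacity, $IC_{50}$ the potency, $\gamma$ the sigmoidicity factor, and $k_{\mathrm{in}}(t)$ oscillates to reproduce the circadian baseline.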

  6. Genome-wide analyses of small noncoding RNAs in streptococci

    Directory of Open Access Journals (Sweden)

    Nadja Patenge

    2015-05-01

    Full Text Available Streptococci represent a diverse group of Gram-positive bacteria which colonize a wide range of hosts among animals and humans. Streptococcal species occur as commensal as well as pathogenic organisms. Many of the pathogenic species can cause severe, invasive infections in their hosts, leading to high morbidity and mortality. The consequence is tremendous suffering for humans and livestock, besides a significant financial burden in the agricultural and healthcare sectors. An environmentally stimulated and tightly controlled expression of virulence factor genes is of fundamental importance for streptococcal pathogenicity. Bacterial small noncoding RNAs (sRNAs) modulate the expression of genes involved in stress response, sugar metabolism, surface composition, and other properties that are related to bacterial virulence. Even though the regulatory character is shared by this class of RNAs, variation at the molecular level results in a high diversity of functional mechanisms. Knowledge about the role of sRNAs in streptococci is still limited, but in recent years, genome-wide screens for sRNAs have been conducted in an increasing number of species. Bioinformatics prediction approaches have been employed, as well as expression analyses by classical array techniques or next generation sequencing. This review gives an overview of whole genome screens for sRNAs in streptococci, with a focus on describing the different methods and comparing their outcomes in terms of sRNA conservation among species, functional similarities, and relevance for streptococcal infection.

  7. Mortality of atomic bomb survivors predicted from laboratory animals

    Science.gov (United States)

    Carnes, Bruce A.; Grahn, Douglas; Hoel, David

    2003-01-01

    Exposure, pathology and mortality data for mice, dogs and humans were examined to determine whether accurate interspecies predictions of radiation-induced mortality could be achieved. The analyses revealed that (1) days of life lost per unit dose can be estimated for a species even without information on radiation effects in that species, and (2) accurate predictions of age-specific radiation-induced mortality in beagles and the atomic bomb survivors can be obtained from a dose-response model for comparably exposed mice. These findings illustrate the value of comparative mortality analyses and the relevance of animal data to the study of human health effects.

  8. Analysing galaxy clustering for future experiments including the Dark Energy Survey

    OpenAIRE

    Nock, Kelly

    2010-01-01

    The use of Baryon Acoustic Oscillations (BAO) as a standard ruler in the 2-point galaxy clustering signal has proven to be an excellent probe of the cosmological expansion. With the abundance of good quality galaxy data predicted for future large sky surveys, the potential to conduct precision cosmology using clustering analyses is immense. Many of the next generation sky surveys, including the Dark Energy Survey (DES), the Panoramic Survey Telescope and Rapid Response System (PanStarrs), and...

  9. Seismic criteria studies and analyses. Quarterly progress report No. 3. [LMFBR]

    Energy Technology Data Exchange (ETDEWEB)

    1975-01-03

    Information is presented concerning the extent to which vibratory motions at the subsurface foundation level might differ from motions at the ground surface and the effects of the various subsurface materials on the overall Clinch River Breeder Reactor site response; seismic analyses of LMFBR type reactors to establish analytical procedures for predicting structure stresses and deformations; and aspects of the current technology regarding the representation of energy losses in nuclear power plants as equivalent viscous damping.

  10. Evaluation of mixed dentition analyses in north Indian population: A comparative study

    OpenAIRE

    Ravi Kumar Goyal; Vijay P Sharma; Pradeep Tandon; Amit Nagar; Gyan P Singh

    2014-01-01

    Introduction: Mixed dentition regression equation analyses (Moyers, Tanaka-Johnston) are based on European populations, so the reliability of these methods in other populations is questionable. Materials and Methods: The present study was conducted on a total of 260 study models, in two phases. In the first phase, linear regression equations were derived. In the second phase, the actual values of the sum of the mesiodistal widths of the canine and the first and second premolars were compared with the predicte...
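
    For context, the Tanaka-Johnston regression referred to above predicts the summed mesiodistal width of the unerupted canine and premolars in one quadrant from half the combined width of the four mandibular incisors plus a constant (10.5 mm mandibular, 11.0 mm maxillary); a small sketch with an invented 23.0 mm input:

    ```python
    def tanaka_johnston(mand_incisor_sum_mm: float, arch: str = "mandibular") -> float:
        """Predicted canine + premolar mesiodistal width (mm) per quadrant."""
        offset = 10.5 if arch == "mandibular" else 11.0
        return mand_incisor_sum_mm / 2.0 + offset

    print(tanaka_johnston(23.0))               # mandibular: 22.0 mm
    print(tanaka_johnston(23.0, "maxillary"))  # maxillary:  22.5 mm
    ```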

  11. Analysing the Association of Leadership Style, Face-to-Face Communication, and Organizational Effectiveness

    OpenAIRE

    Vijai N. Giri; Tirumala Santra

    2008-01-01

    The present paper analyses the association of leadership styles, face-to-face (FtF) communication, and organizational effectiveness. Data were collected from 324 employees of various organizations in India. It was found that leadership styles significantly predicted organizational effectiveness. The transformational and transactional leadership styles were found to be positively correlated with organizational effectiveness, and the laissez-faire leadership style was found to be negat...

  12. K-West and K-East basin thermal analyses for dry conditions

    International Nuclear Information System (INIS)

    Detailed three-dimensional thermal analyses of the 100 K East and 100 K West basins were conducted to determine the peak fuel temperature for intact fuel in the event of a complete loss of water from the basins. Thermal models for the building, an array of fuel encapsulation canisters on the basin floor, and the fuel within a single canister are described, along with conservative predictions of the maximum expected temperatures for the loss-of-water event

  13. Mathematical Modeling of a SI Engine Cycle with Actual Air-Fuel Cycle Analyses

    OpenAIRE

    Perihan SEKMEN; Yakup SEKMEN

    2007-01-01

    The performance of an engine whose basic design parameters are known can be predicted with the assistance of simulation programs in less time, at lower cost, and with values close to actual measurements. Furthermore, because the effects of design variables on engine performance can be determined in advance, inadequately modelled areas of the current model can guide future research. In this study, thermodynamic cycle and performance analyses were simulated for various engine speeds (1800, 2400 and 3600 1/min) and various excess air coef...

  14. Predictive Techniques for Spacecraft Cabin Air Quality Control

    Science.gov (United States)

    Perry, J. L.; Cromes, Scott D. (Technical Monitor)

    2001-01-01

    As assembly of the International Space Station (ISS) proceeds, predictive techniques are used to determine the best approach for handling a variety of cabin air quality challenges. These techniques use equipment offgassing data collected from each ISS module before flight to characterize the trace chemical contaminant load. Combined with crew metabolic loads, these data serve as input to a predictive model for assessing the capability of the onboard atmosphere revitalization systems to handle the overall trace contaminant load as station assembly progresses. The techniques for predicting in-flight air quality are summarized along with results from early ISS mission analyses. Results from ground-based analyses of in-flight air quality samples are compared to the predictions to demonstrate the technique's relative conservatism.
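
    The report does not reproduce its model equations, but trace-contaminant predictions of this kind are typically built on a well-mixed single-volume mass balance. The sketch below, with hypothetical parameter values, shows the general idea (offgassing plus metabolic generation versus removal by the revitalization system); it is a generic illustration, not the ISS model itself.

        import math

        # Well-mixed cabin mass balance for one trace contaminant (a generic sketch):
        #   V * dC/dt = G - eta * Q * C
        V = 100.0      # cabin free volume, m^3 (hypothetical)
        G = 0.5        # generation rate, offgassing + crew metabolism, mg/h (hypothetical)
        Q = 15.0       # flow through the trace-contaminant control system, m^3/h (hypothetical)
        eta = 0.9      # single-pass removal efficiency (hypothetical)

        def concentration(t_h, C0=0.0):
            """Analytic solution: C(t) relaxes toward the steady state G/(eta*Q)."""
            C_ss = G / (eta * Q)
            return C_ss + (C0 - C_ss) * math.exp(-eta * Q * t_h / V)

        print(concentration(24.0))   # mg/m^3 after one day
        print(G / (eta * Q))         # steady-state concentration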

  15. It's difficult, but important, to make negative predictions.

    Science.gov (United States)

    Williams, Richard V; Amberg, Alexander; Brigo, Alessandro; Coquin, Laurence; Giddings, Amanda; Glowienke, Susanne; Greene, Nigel; Jolly, Robert; Kemper, Ray; O'Leary-Steele, Catherine; Parenty, Alexis; Spirkl, Hans-Peter; Stalford, Susanne A; Weiner, Sandy K; Wichard, Joerg

    2016-04-01

    At the confluence of predictive and regulatory toxicologies, negative predictions may be the thin green line that prevents populations from being exposed to harm. Here, two novel approaches to making confident and robust negative in silico predictions for mutagenicity (as defined by the Ames test) have been evaluated. Analyses of 12 data sets containing >13,000 compounds showed that negative predictivity is high (∼90%) for the best approach, and features that either reduce the accuracy or certainty of negative predictions are identified as misclassified or unclassified, respectively. However, negative predictivity remains high (and in excess of the prevalence of non-mutagens) even in the presence of these features, indicating that they are not flags for mutagenicity. PMID:26785392
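
    Negative predictivity here is the standard conditional probability that a compound predicted non-mutagenic truly is non-mutagenic. A minimal sketch of the bookkeeping follows; the counts are invented for illustration, not the paper's data.

        # Negative predictivity (negative predictive value) from a 2x2 confusion table.
        true_negatives  = 9000   # predicted non-mutagenic, Ames-negative (invented)
        false_negatives = 1000   # predicted non-mutagenic, Ames-positive (invented)

        npv = true_negatives / (true_negatives + false_negatives)
        print(f"negative predictivity = {npv:.1%}")   # 90.0%, the order reported above

        # For the prediction to add value, NPV should exceed the prevalence of
        # non-mutagens in the data set (the no-model baseline).
        prevalence_non_mutagens = 0.55                # invented baseline
        print(npv > prevalence_non_mutagens)          # True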

  16. Integrated Field Analyses of Thermal Springs

    Science.gov (United States)

    Shervais, K.; Young, B.; Ponce-Zepeda, M. M.; Rosove, S.

    2011-12-01

    A group of undergraduate researchers in the SURE internship offered by the Southern California Earthquake Center (SCEC) have examined thermal springs in southern Idaho and northern Utah, as well as mud volcanoes in the Salton Sea, California. We used an integrated approach to estimate the setting and maximum temperature, including water chemistry, iPad-based image and database management, microbiology, and gas analyses with a modified Giggenbach sampler. All springs were characterized using GISRoam (tmCogent3D). We are performing geothermometry calculations as well as comparisons with temperature-gradient data while also analyzing biological samples. Analyses include water temperature, pH, electrical conductivity, and TDS measured in the field. Each sample is sealed, chilled, and delivered to a water lab within 12 hours. Temperatures are continuously monitored with Solinst Levelogger Juniors. Through partnership with a local community college geology club, we receive results on a monthly basis and are able to process initial data earlier, allowing us to evaluate data over a longer time span. The springs and mudpots contained microbial organisms, which were analyzed by single-colony isolation, polymerase chain reaction, and DNA sequencing, showing the impact of the organisms on the springs or vice versa. Soon we will collect gas samples at sites that show signs of gas, using a hybrid of the Giggenbach method and our own methods. Drawing gas samples has proven a challenge; however, we devised a method to draw out gas samples utilizing the Giggenbach flask, transferring samples to glass blood-sample tubes, replacing the NaOH in the Giggenbach flask, and evacuating it in the field for multiple samples using a vacuum pump. We also use a floating platform devised to carry and lower a levelogger, and an in-line fuel filter from a tractor to keep mud from contaminating the equipment. The use of raster

  17. Essays on Earnings Predictability

    DEFF Research Database (Denmark)

    Bruun, Mark

    This dissertation addresses the prediction of corporate earnings. The thesis aims to examine whether the degree of precision in earnings forecasts can be increased by basing them on historical financial ratios. Furthermore, the intent of the dissertation is to analyze whether accounting standards...... forecasts are not more accurate than the simpler forecasts based on a historical timeseries of earnings. Secondly, the dissertation shows how accounting standards affect analysts’ earnings predictions. Accounting conservatism contributes to a more volatile earnings process, which lowers the accuracy of...... analysts’ earnings forecasts. Furthermore, the dissertation shows how the stock market’s reaction to the disclosure of information about corporate earnings depends on how well corporate earnings can be predicted. The dissertation indicates that the stock market’s reaction to the disclosure of earnings...

  18. Neurological abnormalities predict disability

    DEFF Research Database (Denmark)

    Poggesi, Anna; Gouw, Alida; van der Flier, Wiesje;

    2014-01-01

    To investigate the role of neurological abnormalities and magnetic resonance imaging (MRI) lesions in predicting global functional decline in a cohort of initially independent-living elderly subjects. The Leukoaraiosis And DISability (LADIS) Study, involving 11 European centres, was primarily aimed...... at evaluating age-related white matter changes (ARWMC) as an independent predictor of the transition to disability (according to Instrumental Activities of Daily Living scale) or death in independent elderly subjects that were followed up for 3 years. At baseline, a standardized neurological examination...... abnormality independently predicted transition to disability or death [HR (95 % CI) 1.53 (1.01-2.34)]. The hazard increased with increasing number of abnormalities. Among MRI lesions, only ARWMC of severe grade independently predicted disability or death [HR (95 % CI) 2.18 (1.37-3.48)]. In our cohort...

  19. Prediction model Perla

    International Nuclear Information System (INIS)

    The prediction model Perla is a tool for evaluating the ecological status of streams; it enables comparison with a standard. The standard is formed by a dataset of sites from the whole area of the Czech Republic that were influenced by human activity as little as possible. Eight variables were used for prediction (distance from source, elevation, stream width and depth, slope, substrate roughness, longitude and latitude); all of them were statistically important for benthic communities. The results correspond not to ecoregions but rather to stream size (type). The indices B (EQItaxonu), EQISi, EQIASPT and EQIH appear applicable for assessment using the prediction model and for differentiating natural and human stress, and limiting values of these indices for good ecological status are suggested. In contrast, use of the EQIEPT and EQIekoprof indices would be possible only with difficulty. (authors)

  20. Permeability prediction in chalks

    DEFF Research Database (Denmark)

    Alam, Mohammad Monzurul; Fabricius, Ida Lykke; Prasad, Manika

    2011-01-01

    The velocity of elastic waves is the primary datum available for acquiring information about subsurface characteristics such as lithology and porosity. Cheap and quick (spatial coverage, ease of measurement) information on permeability can be achieved if sonic velocity is used for permeability...... prediction, so we have investigated the use of velocity data to predict permeability. The compressional velocity from wireline logs and core plugs of the chalk reservoir in the South Arne field, North Sea, has been used for this study. We compared various methods of permeability prediction from velocities....... The relationships between permeability and porosity from core data were first examined using Kozeny’s equation. The data were analyzed for any correlations to the specific surface of the grain, Sg, and to the hydraulic property defined as the flow zone indicator (FZI). These two methods use two...
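
    For readers unfamiliar with the two porosity-permeability transforms named above, a hedged sketch: a Kozeny-type relation ties permeability to porosity and specific surface, and the flow zone indicator (FZI) groups samples into hydraulic units. Constants and units follow the common Amaefule-style formulation (k in mD, RQI in um), which may differ in detail from the paper's.

        import math

        def kozeny_permeability(phi, S, c=5.0):
            """Kozeny-type estimate: k ~ phi^3 / (c * S^2 * (1 - phi)^2).
            S is specific surface per unit grain volume; c is the Kozeny constant
            (often taken near 5). Units of k follow from 1/S^2."""
            return phi**3 / (c * S**2 * (1.0 - phi)**2)

        def flow_zone_indicator(k_mD, phi):
            """FZI = RQI / phi_z, with RQI = 0.0314*sqrt(k/phi) (k in mD, RQI in um)
            and phi_z = phi/(1-phi); samples with similar FZI form one hydraulic unit."""
            rqi = 0.0314 * math.sqrt(k_mD / phi)
            phi_z = phi / (1.0 - phi)
            return rqi / phi_z

        print(kozeny_permeability(phi=0.30, S=2.0))      # S in 1/um -> k in um^2
        print(flow_zone_indicator(k_mD=5.0, phi=0.30))   # illustrative chalk-like values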

  1. Partially predictable chaos

    CERN Document Server

    Wernecke, Hendrik; Gros, Claudius

    2016-01-01

    For a chaotic system pairs of initially close-by trajectories become eventually fully uncorrelated on the attracting set. This process of decorrelation is split into an initial decrease characterized by the maximal Lyapunov exponent and a subsequent diffusive process on the chaotic attractor causing the final loss of predictability. The time scales of both processes can be either of the same or of very different orders of magnitude. In the latter case the two trajectories linger within a finite but small distance (with respect to the overall size of the attractor) for exceedingly long times and therefore remain partially predictable. We introduce a 0-1 indicator for chaos capable of describing this scenario, arguing, in addition, that the chaotic closed braids found close to a period-doubling transition are generically partially predictable.
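
    The initial, exponential stage of the decorrelation described above is exactly what the maximal Lyapunov exponent measures. A minimal sketch follows, using the logistic map as a stand-in system (the paper deals with attractors of continuous-time systems, but the estimator is the same idea).

        import math

        # Estimate the maximal Lyapunov exponent of the logistic map x -> r*x*(1-x)
        # by averaging the log of the local stretching factor along an orbit.
        r = 4.0                      # fully chaotic parameter; exact exponent is ln 2
        x, log_sum, n = 0.3, 0.0, 100000
        for _ in range(n):
            # |map derivative| = |r*(1-2x)| gives the local stretching of a perturbation
            log_sum += math.log(abs(r * (1.0 - 2.0 * x)))
            x = r * x * (1.0 - x)

        print(log_sum / n)           # ~0.693 = ln 2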

  2. Predicting the Sunspot Cycle

    Science.gov (United States)

    Hathaway, David H.

    2009-01-01

    The 11-year sunspot cycle was discovered by an amateur astronomer in 1844. Visual and photographic observations of sunspots have been made by both amateurs and professionals over the last 400 years. These observations provide key statistical information about the sunspot cycle that allows for predictions of future activity. However, sunspots and the sunspot cycle are magnetic in nature. For the last 100 years these magnetic measurements have been acquired and used exclusively by professional astronomers to gain new information about the nature of the solar activity cycle. Recently, magnetic dynamo models have evolved to the stage where they can assimilate past data and provide predictions. With the advent of the Internet and open data policies, amateurs now have equal access to the same data used by professionals and equal opportunities to contribute (but, alas, without pay). This talk will describe some of the more useful prediction techniques and reveal what they say about the intensity of the upcoming sunspot cycle.

  3. Epitope prediction methods

    DEFF Research Database (Denmark)

    Karosiene, Edita

    introduces the NetMHCIIpan-3.0 predictor based on artificial neural networks, which is capable of giving binding affinities to any human MHC class II molecule. Chapter 4 of this thesis gives an overview of bioinformatics tools developed by the Immunological Bioinformatics group at Center for Biological...... machine learning techniques. Several MHC class I binding prediction algorithms have been developed and due to their high accuracy they are used by many immunologists to facilitate the conventional experimental process of epitope discovery. However, the accuracy of these methods depends on data defining...... the MHC molecule in question, making it difficult for the non-expert end-user to choose the most suitable predictor. The first paper in this thesis presents a new, publicly available, consensus method for MHC class I predictions. The NetMHCcons predictor combines three state-of-the-art prediction...

  4. Scorecard on weather predictions

    Science.gov (United States)

    Richman, Barbara T.

    No matter that several northern and eastern states were pelted by snow and sleet early in March; as far as long-term weather forecasters are concerned, winter ended on February 28. Now is the time to review their winter seasonal forecasts to determine how accurate the predictions issued at the start of winter were. The National Weather Service (NWS) predicted on November 27, 1981, that the winter season would bring colder-than-normal temperatures to the eastern half of the United States, while temperatures were expected to be higher than normal in the westernmost section (see Figure 1). The NWS made no prediction for the middle of the country, labeling the area ‘indeterminate,’ or having the same chance of experiencing above-normal temperatures as below-normal temperatures, explained Donald L. Gilman, chief of the NWS long-range forecasting group.

  5. On study design in neuroimaging heritability analyses

    Science.gov (United States)

    Koran, Mary Ellen; Li, Bo; Jahanshad, Neda; Thornton-Wells, Tricia A.; Glahn, David C.; Thompson, Paul M.; Blangero, John; Nichols, Thomas E.; Kochunov, Peter; Landman, Bennett A.

    2014-03-01

    Imaging genetics is an emerging methodology that combines genetic information with imaging-derived metrics to understand how genetic factors impact observable structural, functional, and quantitative phenotypes. Many of the most well-known genetic studies are based on Genome-Wide Association Studies (GWAS), which use large populations of related or unrelated individuals to associate traits and disorders with individual genetic factors. Merging imaging and genetics may potentially lead to improved power of association in GWAS because imaging traits may be more sensitive phenotypes, being closer to underlying genetic mechanisms, and their quantitative nature inherently increases power. We are developing SOLAR-ECLIPSE (SE) imaging genetics software which is capable of performing genetic analyses with both large-scale quantitative trait data and family structures of variable complexity. This program can estimate the contribution of genetic commonality among related subjects to a given phenotype, and essentially answer the question of whether or not the phenotype is heritable. This central factor of interest, heritability, offers bounds on the direct genetic influence over observed phenotypes. In order for a trait to be a good phenotype for GWAS, it must be heritable: at least some proportion of its variance must be due to genetic influences. A variety of family structures are commonly used for estimating heritability, yet the variability and biases for each as a function of the sample size are unknown. Herein, we investigate the ability of SOLAR to accurately estimate heritability models based on imaging data simulated using Monte Carlo methods implemented in R. We characterize the bias and the variability of heritability estimates from SOLAR as a function of sample size and pedigree structure (including twins, nuclear families, and nuclear families with grandparents).
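
    As a toy version of the Monte Carlo experiment described above, one can simulate twin pairs with a known true heritability and check the bias and spread of Falconer's estimator h2 = 2*(r_MZ - r_DZ) as sample size varies. This is a simplified stand-in for SOLAR's variance-component machinery, with invented parameters throughout.

        import numpy as np

        rng = np.random.default_rng(0)

        def simulate_twin_h2(h2_true, n_pairs):
            """Simulate MZ/DZ twin pairs under a purely additive model and return
            Falconer's estimate h2 = 2*(r_MZ - r_DZ). No shared-environment term;
            h2_true and n_pairs are invented for illustration."""
            def pairs(genetic_corr):
                cov = genetic_corr * h2_true                 # phenotypic covariance
                sigma = np.array([[1.0, cov], [cov, 1.0]])   # unit total variance
                return rng.multivariate_normal([0.0, 0.0], sigma, size=n_pairs)
            mz, dz = pairs(1.0), pairs(0.5)
            r_mz = np.corrcoef(mz[:, 0], mz[:, 1])[0, 1]
            r_dz = np.corrcoef(dz[:, 0], dz[:, 1])[0, 1]
            return 2.0 * (r_mz - r_dz)

        estimates = [simulate_twin_h2(0.6, n_pairs=200) for _ in range(500)]
        print(np.mean(estimates), np.std(estimates))   # bias and variability at n=200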

  6. Preparation of biological samples for SIMS analyses

    International Nuclear Information System (INIS)

    Full text: For the first time at ANSTO, a program of SIMS analysis of biological samples was undertaken. This presentation will discuss how the wide variety of samples were prepared, and the methods used to gain useful information from SIMS analysis. Lack of matrix-matched standards made quantification difficult, but the strength of SIMS lies in the ability to detect a wide range of stable isotopes with good spatial resolution. This makes the technique suitable for studying organisms that archive signature elements in their structure. Samples such as bivalve shells and crocodile osteoderms were vacuum-impregnated in resin to a size suitable for the SIMS sample holder. Polishing was followed by sputter coating with gold to alleviate charging of the sample during SIMS analysis. Some samples were introduced directly on the sample holder, either stuck to a glass slide or simply held in place with a spring and backing plate. The only treatment in this case was gold coating and degassing in a vacuum pumping station. The porous nature of materials such as leaves and stromatolites requires a period of time under vacuum to remove gases which could interfere with the ultra-high vacuum required for SIMS analysis. A calcite standard was used for comparison of oxygen isotopic ratios, but the only matrix-matched standard available was for metal analysis of coral skeletons. Otherwise, the calcium content of the material was assumed to be uniform and acted as an internal standard from which isotopic ratios of other elements could be determined. SIMS analysis of biological samples demonstrated that some matrices could reveal an archive of pollution histories. These samples require matrix-matched standards if the trends observed from analyses are to be quantified.

  7. Pipeline for macro- and microarray analyses

    Directory of Open Access Journals (Sweden)

    R. Vicentini

    2007-05-01

    Full Text Available The pipeline for macro- and microarray analyses (PMmA) is a set of scripts with a web interface developed to analyze DNA array data generated by array image quantification software. PMmA is designed for use with single- or double-color array data and to work as a pipeline in five classes (data format, normalization, data analysis, clustering, and array maps). It can also be used as a plugin in the BioArray Software Environment, an open-source database for array analysis, or used in a local version of the web service. All scripts in PMmA were developed in the PERL programming language and statistical analysis functions were implemented in the R statistical language. Consequently, our package is a platform-independent software. Our algorithms can correctly select almost 90% of the differentially expressed genes, showing a superior performance compared to other methods of analysis. The pipeline software has been applied to 1536 expressed sequence tags macroarray public data of sugarcane exposed to cold for 3 to 48 h. PMmA identified thirty cold-responsive genes previously unidentified in this public dataset. Fourteen genes were up-regulated, two had a variable expression and the other fourteen were down-regulated in the treatments. These new findings certainly were a consequence of using a superior statistical analysis approach, since the original study did not take into account the dependence of data variability on the average signal intensity of each gene. The web interface, supplementary information, and the package source code are available, free, to non-commercial users at http://ipe.cbmeg.unicamp.br/pub/PMmA.

  8. Pipeline for macro- and microarray analyses.

    Science.gov (United States)

    Vicentini, R; Menossi, M

    2007-05-01

    The pipeline for macro- and microarray analyses (PMmA) is a set of scripts with a web interface developed to analyze DNA array data generated by array image quantification software. PMmA is designed for use with single- or double-color array data and to work as a pipeline in five classes (data format, normalization, data analysis, clustering, and array maps). It can also be used as a plugin in the BioArray Software Environment, an open-source database for array analysis, or used in a local version of the web service. All scripts in PMmA were developed in the PERL programming language and statistical analysis functions were implemented in the R statistical language. Consequently, our package is a platform-independent software. Our algorithms can correctly select almost 90% of the differentially expressed genes, showing a superior performance compared to other methods of analysis. The pipeline software has been applied to 1536 expressed sequence tags macroarray public data of sugarcane exposed to cold for 3 to 48 h. PMmA identified thirty cold-responsive genes previously unidentified in this public dataset. Fourteen genes were up-regulated, two had a variable expression and the other fourteen were down-regulated in the treatments. These new findings certainly were a consequence of using a superior statistical analysis approach, since the original study did not take into account the dependence of data variability on the average signal intensity of each gene. The web interface, supplementary information, and the package source code are available, free, to non-commercial users at http://ipe.cbmeg.unicamp.br/pub/PMmA. PMID:17464422

  9. EP 1000 steam generator tube rupture analyses

    International Nuclear Information System (INIS)

    European electrical utility organizations together with Westinghouse and Ansaldo are participating in a program to utilize the Westinghouse passive nuclear plant technology to develop a plant which meets the European Utility Requirements (EUR) and is expected to be licensable in Europe. The program was initiated in 1994 and the plant is designated EP1000. The EP1000 design is notable for the simplicity that comes from a reliance on passive safety systems to enhance plant safety. The use of passive safety systems has provided significant and measurable improvements in plant simplification, safety, reliability, investment protection and plant costs. These systems use only natural forces such as gravity, natural circulation, and compressed gas to provide the driving forces for the systems to adequately cool the reactor core following an initiating event. The EP1000 builds on the Westinghouse passive nuclear plant technology to enhance plant safety and meet European Utility Requirements and specific European national safety criteria. This paper summarizes the main results of the Steam Generator Tube Rupture (SGTR) analysis activity performed in Phase 2B of the European Passive Plant Program. The purpose of the study is to provide evidence that the passive safety system performance provides a significant improvement in terms of safety, providing significant margins to steam generator overfilling and reducing the need for operator actions. The behavior of the EP1000 plant following SGTR accidents has been analyzed by means of the RELAP5/Mod3.2 code. Sensitivity cases were performed to address the impact of varying the number of steam generator tubes that rupture, and the potential adverse interactions that could result from operation of control systems (i.e., Chemical and Volume Control System, Startup Feedwater). Analyses have also been performed to define and verify improved protection system logic to avoid possible steam generator safety valve challenges both in the

  10. A combined computational-experimental analyses of selected metabolic enzymes in Pseudomonas species

    Directory of Open Access Journals (Sweden)

    Deepak Perumal, Chu Sing Lim, Vincent T.K. Chow, Kishore R. Sakharkar, Meena K. Sakharkar

    2008-01-01

    Full Text Available Comparative genomic analysis has revolutionized our ability to predict the metabolic subsystems that occur in newly sequenced genomes, and to explore the functional roles of the set of genes within each subsystem. These computational predictions can considerably reduce the volume of experimental studies required to assess basic metabolic properties of multiple bacterial species. However, experimental validations are still required to resolve the apparent inconsistencies in the predictions by multiple resources. Here, we present combined computational-experimental analyses on eight completely sequenced Pseudomonas species. Comparative pathway analyses reveal that several pathways within the Pseudomonas species show high plasticity and versatility. Potential bypasses in 11 metabolic pathways were identified. We further confirmed the presence of the enzyme O-acetylhomoserine (thiol)-lyase (EC 2.5.1.49) in P. syringae pv. tomato, which revealed inconsistent annotations in KEGG and in the recently published SYSTOMONAS database. These analyses connect and integrate systematic data generation, computational data interpretation, and experimental validation and represent a synergistic and powerful means for conducting biological research.

  11. Analysis methods for predicting the behaviour of isolators and formulation of simplified models for use in predicting response of structures to earthquake type input

    International Nuclear Information System (INIS)

    This report describes the simplified models for predicting the response of high-damping natural rubber bearings (HDNRB) to earthquake ground motions and benchmark problems for assessing the accuracy of finite element analyses in designing base-isolators. (author)

  12. PREDICTION OF RECESSION

    OpenAIRE

    Lee, Young Sub; Zhu, Qian

    2010-01-01

    The purpose of our research is to examine the predictive power of an inverted yield curve for a recession in the near future. The data used in this research span January 1, 1959 to November 2008. There are 8 recessions during this period, including the current one. We conducted two sets of tests. The first set consists of the spread between the 10-year Treasury bond and the 3-month Treasury bill and the spread between the 10-year Treasury bond and 3-month LIBOR; and we find the predictive power of the spread between the 10-ye...
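
    The standard way to quantify the yield spread's predictive power is a probit regression of a recession indicator on the lagged spread (the Estrella-Mishkin approach; the authors' exact test may differ). A sketch with synthetic data follows; the coefficients and sample are invented for illustration.

        import numpy as np
        import statsmodels.api as sm

        # Probit of a 12-month-ahead recession indicator on the 10y-3m term spread.
        # Synthetic data; the study itself uses monthly data for 1959-2008.
        rng = np.random.default_rng(1)
        spread = rng.normal(1.5, 1.2, size=600)            # percentage points, monthly
        latent = -0.8 * spread + rng.normal(size=600)      # inversion raises recession risk
        recession = (latent > 0).astype(int)

        model = sm.Probit(recession, sm.add_constant(spread)).fit(disp=0)
        print(model.params)                                # negative slope on the spread
        # Implied P(recession) when the curve is inverted by 50 basis points:
        print(model.predict([[1.0, -0.5]]))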

  13. Linguistic Structure Prediction

    CERN Document Server

    Smith, Noah A

    2011-01-01

    A major part of natural language processing now depends on the use of text data to build linguistic analyzers. We consider statistical, computational approaches to modeling linguistic structure. We seek to unify across many approaches and many kinds of linguistic structures. Assuming a basic understanding of natural language processing and/or machine learning, we seek to bridge the gap between the two fields. Approaches to decoding (i.e., carrying out linguistic structure prediction) and supervised and unsupervised learning of models that predict discrete structures as outputs are the focus. W

  14. Atmospheric predictability revisited

    Directory of Open Access Journals (Sweden)

    Lizzie S. R. Froude

    2013-06-01

    Full Text Available This article examines the potential to improve numerical weather prediction (NWP by estimating upper and lower bounds on predictability by re-visiting the original study of Lorenz (1982 but applied to the most recent version of the European Centre for Medium Range Weather Forecasts (ECMWF forecast system, for both the deterministic and ensemble prediction systems (EPS. These bounds are contrasted with an older version of the same NWP system to see how they have changed with improvements to the NWP system. The computations were performed for the earlier seasons of DJF 1985/1986 and JJA 1986 and the later seasons of DJF 2010/2011 and JJA 2011 using the 500-hPa geopotential height field. Results indicate that for this field, we may be approaching the limit of deterministic forecasting so that further improvements might only be obtained by improving the initial state. The results also show that predictability calculations with earlier versions of the model may overestimate potential forecast skill, which may be due to insufficient internal variability in the model and because recent versions of the model are more realistic in representing the true atmospheric evolution. The same methodology is applied to the EPS to calculate upper and lower bounds of predictability of the ensemble mean forecast in order to explore how ensemble forecasting could extend the limits of the deterministic forecast. The results show that there is a large potential to improve the ensemble predictions, but for the increased predictability of the ensemble mean, there will be a trade-off in information as the forecasts will become increasingly smoothed with time. From around the 10-d forecast time, the ensemble mean begins to converge towards climatology. Until this point, the ensemble mean is able to predict the main features of the large-scale flow accurately and with high consistency from one forecast cycle to the next. By the 15-d forecast time, the ensemble mean has lost

  15. Is genetic evolution predictable?

    Science.gov (United States)

    Stern, David L; Orgogozo, Virginie

    2009-02-01

    Ever since the integration of Mendelian genetics into evolutionary biology in the early 20th century, evolutionary geneticists have for the most part treated genes and mutations as generic entities. However, recent observations indicate that all genes are not equal in the eyes of evolution. Evolutionarily relevant mutations tend to accumulate in hotspot genes and at specific positions within genes. Genetic evolution is constrained by gene function, the structure of genetic networks, and population biology. The genetic basis of evolution may be predictable to some extent, and further understanding of this predictability requires incorporation of the specific functions and characteristics of genes into evolutionary theory. PMID:19197055

  16. RETAIL BANKRUPTCY PREDICTION

    Directory of Open Access Journals (Sweden)

    Johnny Pang

    2013-01-01

    Full Text Available This study reintroduces the famous discriminant functions from Edward Altman and from Begley, Ming and Watts (BMW) that were used to predict bankruptcies. We formulate three new discriminant functions which differ from Altman's models and from BMW's re-estimated Altman model. Altman's models, as well as Begley, Ming and Watts's re-estimated Altman model, apply to publicly traded industrial firms, whereas the new models formulated in this study are based on retail companies. The three new functions will provide better predictions of retail bankruptcy and will minimize the chance of misclassification.
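
    For reference, the original Altman (1968) discriminant function for publicly traded firms, which the study takes as its starting point, can be written down directly. The sketch below uses the classic coefficients and cutoffs; the retail-specific coefficients derived in the paper are not reproduced here, and the example figures are invented.

        def altman_z(wc, re, ebit, mve, sales, total_assets, total_liabilities):
            """Original Altman (1968) Z-score for publicly traded firms.
            Inputs: working capital, retained earnings, EBIT, market value of
            equity, sales, total assets, total liabilities (same currency units)."""
            x1 = wc / total_assets
            x2 = re / total_assets
            x3 = ebit / total_assets
            x4 = mve / total_liabilities
            x5 = sales / total_assets
            return 1.2*x1 + 1.4*x2 + 3.3*x3 + 0.6*x4 + 1.0*x5

        z = altman_z(wc=20, re=35, ebit=15, mve=120, sales=180,
                     total_assets=200, total_liabilities=80)
        # Classic cutoffs: Z < 1.81 distress zone, Z > 2.99 safe zone.
        print(z)   # ~2.41, the grey zone between the two cutoffs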

  17. Summary Report of Laboratory Critical Experiment Analyses Performed for the Disposal Criticality Analysis Methodology

    International Nuclear Information System (INIS)

    This report, ''Summary Report of Laboratory Critical Experiment Analyses Performed for the Disposal Criticality Analysis Methodology'', contains a summary of the laboratory critical experiment (LCE) analyses used to support the validation of the disposal criticality analysis methodology. The objective of this report is to present a summary of the LCE analyses' results. These results demonstrate the ability of MCNP to accurately predict the critical multiplication factor (keff) for fuel with different configurations. Results from the LCE evaluations will support the development and validation of the criticality models used in the disposal criticality analysis methodology. These models and their validation have been discussed in the ''Disposal Criticality Analysis Methodology Topical Report'' (CRWMS M and O 1998a)

  18. Summary Report of Laboratory Critical Experiment Analyses Performed for the Disposal Criticality Analysis Methodology

    Energy Technology Data Exchange (ETDEWEB)

    J. Scaglione

    1999-09-09

    This report, ''Summary Report of Laboratory Critical Experiment Analyses Performed for the Disposal Criticality Analysis Methodology'', contains a summary of the laboratory critical experiment (LCE) analyses used to support the validation of the disposal criticality analysis methodology. The objective of this report is to present a summary of the LCE analyses' results. These results demonstrate the ability of MCNP to accurately predict the critical multiplication factor (keff) for fuel with different configurations. Results from the LCE evaluations will support the development and validation of the criticality models used in the disposal criticality analysis methodology. These models and their validation have been discussed in the ''Disposal Criticality Analysis Methodology Topical Report'' (CRWMS M&O 1998a).

  19. RNA secondary structure prediction from multi-aligned sequences

    OpenAIRE

    Hamada, Michiaki

    2013-01-01

    It has been well accepted that the RNA secondary structures of most functional non-coding RNAs (ncRNAs) are closely related to their functions and are conserved during evolution. Hence, prediction of conserved secondary structures from evolutionarily related sequences is one important task in RNA bioinformatics; the methods are useful not only to further functional analyses of ncRNAs but also to improve the accuracy of secondary structure predictions and to find novel functional RNAs from the...

  20. Reliable prediction of complex thermal hydraulic parameters by ANN

    International Nuclear Information System (INIS)

    A thermal hydraulic database is very useful in the design and analysis of the proposed Advanced Heavy Water Reactor, which relies on natural circulation for normal core cooling. Compilation of the thermal hydraulic database is in progress. Artificial Neural Networks (ANNs) have been applied to analyse the consistency and accuracy of the database. The ANN predictions are more accurate and cover a wider range of parameters than model-based predictions.
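
    A minimal illustration of the kind of ANN regression described, namely fitting a network to scattered thermal-hydraulic data and flagging records the net reproduces poorly as consistency suspects. scikit-learn stands in for whatever framework the authors used, and the "data" below are synthetic.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.preprocessing import StandardScaler

        # Synthetic stand-in for a natural-circulation data set:
        # inputs (power, pressure), target (mass flow rate). All values invented.
        rng = np.random.default_rng(2)
        X = rng.uniform([0.1, 0.1], [1.0, 7.0], size=(500, 2))
        y = 0.8 * X[:, 0] ** (1 / 3) * X[:, 1] ** 0.1 + rng.normal(0, 0.01, 500)

        scaler = StandardScaler().fit(X)
        net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
        net.fit(scaler.transform(X), y)

        # Consistency check: the points the trained net reproduces worst are
        # candidates for re-examination in the database.
        residuals = np.abs(net.predict(scaler.transform(X)) - y)
        print(np.argsort(residuals)[-5:])   # indices of the 5 most suspect records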

  1. The prediction of coal reservoir permeability in Tiefa coalfield

    Energy Technology Data Exchange (ETDEWEB)

    Li Xiaoyan; Li Jing; Yang Lijun; Lei Chongli [China Coal Research Institute (China). Xian Branch]

    1998-06-01

    The prediction of coal reservoir permeability is necessary for reservoir evaluation before the development of coalbed methane. The adopted method is based on the possibility of finding the fracture-developed zones and their strikes in the region and predicting the highly permeable areas by macro- and micro-statistical analyses of the joints in enclosing rocks and the fractures in coals. 1 fig., 2 tabs.

  2. Aging analyses of aircraft wire insulation

    Energy Technology Data Exchange (ETDEWEB)

    GILLEN,KENNETH T.; CLOUGH,ROGER LEE; CELINA,MATHIAS C.; AUBERT,JAMES H.; MALONE,G. MICHAEL

    2000-05-08

    Over the past two decades, Sandia has developed a variety of specialized analytical techniques for evaluating the long-term aging and stability of cable insulation and other related materials. These techniques have been applied to cable reliability studies involving numerous insulation types and environmental factors. This work has allowed the monitoring of the occurrence and progression of cable material deterioration in application environments, and has provided insights into material degradation mechanisms. It has also allowed development of more reliable lifetime prediction methodologies. As a part of the FAA program for intrusive inspection of aircraft wiring, they are beginning to apply a battery of techniques to assessing the condition of cable specimens removed from retired aircraft. It is anticipated that in a future part of this program, they may employ these techniques in conjunction with accelerated aging methodologies and models that the authors have developed and employed in the past to predict cable lifetimes. The types of materials to be assessed include 5 different wire types: polyimide, PVC/Glass/Nylon, extruded XL-polyalkene/PVDF, Poly-X, and XL-ETFE. This presentation provides a brief overview of the main techniques that will be employed in assessing the state of health of aircraft wire insulation. The discussion will be illustrated with data from their prior cable aging studies, highlighting the methods used and their important conclusions. A few of the techniques that they employ are widely used in aging studies on polymers, but others are unique to Sandia. All of their techniques are non-proprietary, and may be of interest for use by others in terms of application to aircraft wiring analysis. At the end of this report is a list showing some leading references to papers that have been published in the open literature which provide more detailed information on the analytical techniques for elastomer aging studies. The first step in the
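
    The accelerated-aging extrapolation mentioned above is conventionally done with an Arrhenius acceleration factor between the oven-aging temperature and the service temperature; whether Sandia's models add refinements beyond this is not stated here. A generic sketch, with an invented activation energy:

        import math

        R = 8.314e-3   # gas constant, kJ/(mol*K)

        def arrhenius_acceleration(Ea_kJmol, T_aged_C, T_use_C):
            """Acceleration factor a = exp[(Ea/R) * (1/T_use - 1/T_aged)], T in kelvin.
            Time-to-failure at service temperature ~ a * time at oven temperature."""
            T_aged, T_use = T_aged_C + 273.15, T_use_C + 273.15
            return math.exp((Ea_kJmol / R) * (1.0 / T_use - 1.0 / T_aged))

        # E.g. 6 months of oven aging at 120 C with an (invented) Ea of 90 kJ/mol:
        a = arrhenius_acceleration(90.0, T_aged_C=120.0, T_use_C=40.0)
        print(f"equivalent to ~{0.5 * a:.0f} years at 40 C")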

  3. Analyses of hypomethylated oil palm gene space.

    Science.gov (United States)

    Low, Eng-Ti L; Rosli, Rozana; Jayanthi, Nagappan; Mohd-Amin, Ab Halim; Azizi, Norazah; Chan, Kuang-Lim; Maqbool, Nauman J; Maclean, Paul; Brauning, Rudi; McCulloch, Alan; Moraga, Roger; Ong-Abdullah, Meilina; Singh, Rajinder

    2014-01-01

    Demand for palm oil has been increasing by an average of ∼8% the past decade and currently accounts for about 59% of the world's vegetable oil market. This drives the need to increase palm oil production. Nevertheless, due to the increasing need for sustainable production, it is imperative to increase productivity rather than the area cultivated. Studies on the oil palm genome are essential to help identify genes or markers that are associated with important processes or traits, such as flowering, yield and disease resistance. To achieve this, 294,115 and 150,744 sequences from the hypomethylated or gene-rich regions of the Elaeis guineensis and E. oleifera genomes were sequenced and assembled into contigs. An additional 16,427 shot-gun sequences and 176 bacterial artificial chromosomes (BAC) were also generated to check the quality of the libraries constructed. Comparison of these sequences revealed that although the methylation-filtered libraries were sequenced at low coverage, they still tagged at least 66% of the RefSeq-supported genes in the BAC and had a filtration power of at least 2.0. A total of 33,752 microsatellites and 40,820 high-quality single nucleotide polymorphism (SNP) markers were identified. These represent the most comprehensive collection of microsatellites and SNPs to date and would be an important resource for genetic mapping and association studies. The gene models predicted from the assembled contigs were mined for genes of interest, and 242, 65 and 14 oil palm transcription factors, resistance genes and miRNAs were identified, respectively. Examples of the transcription factors tagged include those associated with floral development and tissue culture, such as homeodomain proteins, MADS, Squamosa and Apetala2. The E. guineensis and E. oleifera hypomethylated sequences provide an important resource to understand the molecular mechanisms associated with important agronomic traits in oil palm. PMID:24497974

  4. Analyses of hypomethylated oil palm gene space.

    Directory of Open Access Journals (Sweden)

    Eng-Ti L Low

    Full Text Available Demand for palm oil has been increasing by an average of ∼8% the past decade and currently accounts for about 59% of the world's vegetable oil market. This drives the need to increase palm oil production. Nevertheless, due to the increasing need for sustainable production, it is imperative to increase productivity rather than the area cultivated. Studies on the oil palm genome are essential to help identify genes or markers that are associated with important processes or traits, such as flowering, yield and disease resistance. To achieve this, 294,115 and 150,744 sequences from the hypomethylated or gene-rich regions of the Elaeis guineensis and E. oleifera genomes were sequenced and assembled into contigs. An additional 16,427 shot-gun sequences and 176 bacterial artificial chromosomes (BAC) were also generated to check the quality of the libraries constructed. Comparison of these sequences revealed that although the methylation-filtered libraries were sequenced at low coverage, they still tagged at least 66% of the RefSeq-supported genes in the BAC and had a filtration power of at least 2.0. A total of 33,752 microsatellites and 40,820 high-quality single nucleotide polymorphism (SNP) markers were identified. These represent the most comprehensive collection of microsatellites and SNPs to date and would be an important resource for genetic mapping and association studies. The gene models predicted from the assembled contigs were mined for genes of interest, and 242, 65 and 14 oil palm transcription factors, resistance genes and miRNAs were identified, respectively. Examples of the transcription factors tagged include those associated with floral development and tissue culture, such as homeodomain proteins, MADS, Squamosa and Apetala2. The E. guineensis and E. oleifera hypomethylated sequences provide an important resource to understand the molecular mechanisms associated with important agronomic traits in oil palm.

  5. Quantitative DNA Analyses for Airborne Birch Pollen.

    Directory of Open Access Journals (Sweden)

    Isabell Müller-Germann

    Full Text Available Birch trees produce large amounts of highly allergenic pollen grains that are distributed by wind and impact human health by causing seasonal hay fever, pollen-related asthma, and other allergic diseases. Traditionally, pollen forecasts are based on conventional microscopic counting techniques that are labor-intensive and limited in the reliable identification of species. Molecular biological techniques provide an alternative approach that is less labor-intensive and enables identification of any species by its genetic fingerprint. A particularly promising method is quantitative Real-Time polymerase chain reaction (qPCR), which can be used to determine the number of DNA copies and thus pollen grains in air filter samples. During the birch pollination season in 2010 in Mainz, Germany, we collected air filter samples of fine (<3 μm) and coarse air particulate matter. These were analyzed by qPCR using two different primer pairs: one for a single-copy gene (BP8) and the other for a multi-copy gene (ITS). The BP8 gene was better suited for reliable qPCR results, and the qPCR results obtained for coarse particulate matter were well correlated with the birch pollen forecasting results of the regional air quality model COSMO-ART. As expected due to the size of birch pollen grains (~23 μm), the concentration of DNA in fine particulate matter was lower than in the coarse particle fraction. For the ITS region the factor was 64, while for the single-copy gene BP8 it was only 51. The possible presence of so-called sub-pollen particles in the fine particle fraction is, however, interesting even in low concentrations. These particles are known to be highly allergenic, reach deep into airways and often cause severe health problems. In conclusion, the results of this exploratory study open up the possibility of predicting and quantifying the pollen concentration in the atmosphere more precisely in the future.
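
    Quantification in qPCR rests on the linear relation between the threshold cycle and the log of the starting copy number, calibrated with a standard dilution series. A generic sketch follows; the dilution values and cycle numbers are invented, not the study's calibration.

        import numpy as np

        # Standard-curve quantification: Ct = m*log10(N0) + b, fitted on dilutions
        # of known copy number. All numbers below are invented for illustration.
        log10_copies = np.array([3, 4, 5, 6, 7], dtype=float)
        ct_standards = np.array([31.2, 27.9, 24.5, 21.1, 17.8])

        m, b = np.polyfit(log10_copies, ct_standards, 1)
        efficiency = 10 ** (-1.0 / m) - 1.0        # ~1.0 means perfect doubling per cycle
        print(f"slope {m:.2f}, PCR efficiency {efficiency:.0%}")

        def copies_from_ct(ct):
            """Invert the standard curve to estimate copies in an air-filter extract."""
            return 10 ** ((ct - b) / m)

        # For a single-copy gene like BP8, copies track pollen grain counts.
        print(copies_from_ct(25.0))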

  6. A conceptual DFT approach towards analysing toxicity

    Indian Academy of Sciences (India)

    U Sarkar; D R Roy; P K Chattaraj; R Parthasarathi; J Padmanabhan; V Subramanian

    2005-09-01

    The applicability of DFT-based descriptors for the development of toxicological structure-activity relationships is assessed. Emphasis in the present study is on the quality of DFT-based descriptors for the development of toxicological QSARs and, more specifically, on the potential of the electrophilicity concept in predicting the toxicity of benzidine derivatives and a series of polyaromatic hydrocarbons (PAH) expressed in terms of their biological activity data (pIC50). First, two benzidine derivatives, which act as electron-donating agents in their interactions with biomolecules, are considered. Overall toxicity in general and the most probable site of reactivity in particular are effectively described by the global and local electrophilicity parameters, respectively. The interaction of the two benzidine derivatives with nucleic acid (NA) bases/selected base pairs is determined using Parr’s charge transfer formula. The experimental biological activity data (pIC50) for the family of PAH, namely polychlorinated dibenzofurans (PCDF), polyhalogenated dibenzo-p-dioxins (PHDD) and polychlorinated biphenyls (PCB), are taken as dependent variables, and the HF energy (E), along with the DFT-based global and local descriptors, viz., the electrophilicity index (ω) and the local electrophilic power (ω+), are taken as independent variables. Fairly good correlation is obtained, showing the significance of the selected descriptors in the QSAR on toxins that act as electron acceptors in the presence of biomolecules. Effects of population analysis schemes in the calculation of Fukui functions, as well as that of solvation, are probed. Similarly, some electron-donor aliphatic amines are studied in the present work. We see that the global and local electrophilicities, along with the HF energy, are adequate in explaining the toxicity of several substances.
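
    The global descriptors named above have simple finite-difference working equations. The sketch below uses the common Koopmans-type approximations; conventions for the hardness η differ by a factor of 2 between authors, so treat the constants as one common choice rather than necessarily the paper's.

        def global_dft_descriptors(I, A):
            """Conceptual-DFT descriptors from vertical ionization energy I and
            electron affinity A (eV). mu: chemical potential; eta: hardness;
            omega: Parr electrophilicity index, omega = mu^2 / (2*eta)."""
            mu = -(I + A) / 2.0
            eta = (I - A) / 2.0          # some authors use eta = I - A instead
            omega = mu**2 / (2.0 * eta)
            return mu, eta, omega

        # Illustrative values only (not from the paper):
        print(global_dft_descriptors(I=8.0, A=1.0))   # (-4.5, 3.5, ~2.89)
        # The local electrophilic power distributes omega over atomic sites k via
        # the Fukui function: omega_k(+) = omega * f_k(+).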

  7. Safety Analyses of IRT- Sofia LEU Core: Sensitivity Analyses of Loss of Flow Accident

    International Nuclear Information System (INIS)

    Sensitivity analyses (performed with the PARET/ANL code) of the loss-of-forced-flow accident caused by pump stop following loss of the offsite electricity supply to the IRT - Sofia are presented in this paper. These analyses were carried out because the hydrodynamic evaluation of the period during which the downward coolant flow slows down and stops has a very wide tolerance (from 2 to 25 s). According to the results obtained, the relation between evaluated and allowed fuel operating conditions does not vary significantly within the analysed limits. Moreover, for all analysed conditions the peak cladding temperature reached in this transient is below the temperature for onset of nucleate boiling (117 °C) and far below the safety limit of 425 °C for the fuel cladding temperature, which is fully consistent with the conclusion of the IRT - Sofia SAR for this accident. (authors)

  8. Kuosheng Mark III containment analyses using GOTHIC

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Ansheng, E-mail: samuellin1999@iner.gov.tw; Chen, Yen-Shu; Yuann, Yng-Ruey

    2013-10-15

    Highlights: • The Kuosheng Mark III containment model is established using GOTHIC. • Containment pressure and temperature responses due to LOCA are presented. • The calculated results are all below the design values and compared with the FSAR results. • The calculated results can serve as an analysis reference for an SPU project in the future. -- Abstract: Kuosheng nuclear power plant in Taiwan is a twin-unit BWR/6 plant, and both units utilize the Mark III containment. Currently, the plant is performing a stretch power uprate (SPU) project to increase the core thermal power to 103.7% OLTP (original licensed thermal power). However, the containment response in the Kuosheng Final Safety Analysis Report (FSAR) was completed more than twenty-five years ago. The purpose of this study is to establish a Kuosheng Mark III containment model using the containment program GOTHIC. The containment pressure and temperature responses under the design-basis accidents, which are the main steam line break (MSLB) and the recirculation line break (RCLB) accidents, are investigated. Short-term and long-term analyses are presented in this study. The short-term analysis calculates the drywell peak pressure and temperature, which occur in the early stage of the LOCAs. The long-term analysis calculates the peak pressure and temperature of the reactor building space. In the short-term analysis, the calculated peak drywell-to-wetwell differential pressure is 140.6 kPa for the MSLB, which is below the design value of 189.6 kPa. The calculated peak drywell temperature is 158 °C, which is still below the design value of 165.6 °C. In addition, in the long-term analysis, the calculated peak containment pressure is 47 kPa G, which is below the design value of 103.4 kPa G. The calculated peak containment temperature is 74.7 °C, which is lower than the design value of 93.3 °C. Therefore, the Kuosheng Mark III containment can maintain the integrity after

  9. Prediction method abstracts

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-12-31

    This conference was held December 4--8, 1994 in Asilomar, California. The purpose of this meeting was to provide a forum for exchange of state-of-the-art information concerning the prediction of protein structure. Attention is focused on the following: comparative modeling; sequence to fold assignment; and ab initio folding.

  10. PREDICTION OF OVULATION

    Institute of Scientific and Technical Information of China (English)

    LIU Yong; CHEN Su-Ru; ZHOU Jin-Ting; LIU Ji-Ying

    1989-01-01

    The purpose of this research is: 1) to observe the secretory pattern of five reproductive hormones in Chinese women with normal menstrual cycles, especially in the pre-ovulatory period; 2) to study whether urinary LH measurement could be used instead of serum LH measurement; 3) to evaluate the significance of the LH-EIA kit (Right-Day) for ovulation prediction.

  11. Predicting Lotto Numbers

    DEFF Research Database (Denmark)

    Jørgensen, Claus Bjørn; Suetens, Sigrid; Tyran, Jean-Robert

    numbers based on recent drawings. While most players pick the same set of numbers week after week without regard to the numbers drawn or anything else, we find that those who do change act on average in the way predicted by the law of small numbers as formalized in recent behavioral theory. In particular...

  12. Gate valve performance prediction

    International Nuclear Information System (INIS)

    The Electric Power Research Institute is carrying out a program to improve the performance prediction methods for motor-operated valves. As part of this program, an analytical method to predict the stem thrust required to stroke a gate valve has been developed and has been assessed against data from gate valve tests. The method accounts for the loads applied to the disc by fluid flow and for the detailed mechanical interaction of the stem, disc, guides, and seats. To support development of the method, two separate-effects test programs were carried out. One test program determined friction coefficients for contacts between gate valve parts by using material specimens in controlled environments. The other test program investigated the interaction of the stem, disc, guides, and seat using a special fixture with full-sized gate valve parts. The method has been assessed against flow-loop and in-plant test data. These tests include valve sizes from 3 to 18 in. and cover a considerable range of flow, temperature, and differential pressure. Stem thrust predictions for the method bound measured results. In some cases, the bounding predictions are substantially higher than the stem loads required for valve operation, as a result of the bounding nature of the friction coefficients in the method

  13. Prediction in OLAP Cube

    Directory of Open Access Journals (Sweden)

    Abdellah Sair

    2012-05-01

    Full Text Available Data warehouses now offer an adequate solution for managing large volumes of data. Online analytical processing (OLAP) supports data warehouses in the decision-support process, and visualization tools offer ways to structure and operate on the data warehouse. On the other hand, data mining allows the extraction of knowledge through description, classification, explanation and prediction techniques. It is therefore possible to understand the data better by coupling online analysis with data mining through a unified analysis process. Continuing the work of R. Ben Messaoud, in which the coupling of online analysis and data mining covers description, visualization, classification and explanation, we propose extending OLAP with prediction capabilities. To integrate prediction into the heart of OLAP, an approach based on machine learning with regression trees is proposed in order to predict the value of an aggregate or a measure. As an illustration, using data from course-evaluation records, one could predict the average grade that students of a given department would obtain if a new module were opened.
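
    The core of the proposed approach, learning a regression tree from existing cube cells and then predicting the measure of a new or empty cell from its dimension members, can be sketched with scikit-learn. The toy cube below (departments, modules, semesters, average grades) is invented for illustration.

        import numpy as np
        from sklearn.tree import DecisionTreeRegressor

        # Toy OLAP cube flattened to (dimension members -> measure):
        # dimensions: department id, module id, semester; measure: average grade.
        X = np.array([[0, 0, 1], [0, 1, 1], [0, 1, 2], [1, 0, 1],
                      [1, 2, 2], [2, 0, 1], [2, 1, 2], [2, 2, 1]])
        y = np.array([12.1, 11.4, 11.9, 13.0, 12.6, 10.2, 10.8, 10.5])

        tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)

        # Predict the aggregate for a cell that does not exist yet, e.g.
        # "what average would department 0 get on new module 2 in semester 1?"
        print(tree.predict([[0, 2, 1]]))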

  14. Polarization predictions for LEAR

    International Nuclear Information System (INIS)

    Large polarization effects have recently been found experimentally in quasi-two-body reactions. From these results, the additive quark model and assumptions on the relative size of some participant matrix elements (which will be motivated elsewhere as properties of colour confinement), we present predictions for the reactions p̄p → ȲY. (Author)

  15. Vertebral Fracture Prediction

    DEFF Research Database (Denmark)

    2008-01-01

    Vertebral Fracture Prediction A method of processing data derived from an image of at least part of a spine is provided for estimating the risk of a future fracture in vertebrae of the spine. Position data relating to at least four neighbouring vertebrae of the spine is processed. The curvature of...

  16. Chloride ingress prediction

    DEFF Research Database (Denmark)

    Frederiksen, Jens Mejer; Geiker, Mette Rica

    2008-01-01

    Prediction of chloride ingress into concrete is an important part of the durability design of reinforced concrete structures exposed to chloride-containing environments. This paper presents experimentally based design parameters for Portland cement concretes with and without silica fume and fly ash in...
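
    The usual starting point for such predictions is the error-function solution of Fick's second law with a constant surface concentration; the paper's design parameters refine the diffusion coefficient and surface concentration with time- and material-dependence not shown here. A sketch with illustrative numbers:

        import math

        def chloride_profile(x_mm, t_years, D_mm2_per_year, C_s, C_i=0.0):
            """Error-function solution of Fick's 2nd law for chloride ingress:
            C(x,t) = C_i + (C_s - C_i) * erfc( x / (2*sqrt(D*t)) ).
            Parameter values here are illustrative, not the paper's design values."""
            return C_i + (C_s - C_i) * math.erfc(
                x_mm / (2.0 * math.sqrt(D_mm2_per_year * t_years)))

        # E.g. chloride content (% by mass of binder) at 40 mm cover after 50 years:
        print(chloride_profile(x_mm=40.0, t_years=50.0, D_mm2_per_year=30.0, C_s=0.6))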

  17. Prediction of resonant oscillation

    DEFF Research Database (Denmark)

    2010-01-01

    The invention relates to methods for prediction of parametric rolling of vessels. The methods are based on frequency domain and time domain information in order do set up a detector able to trigger an alarm when parametric roll is likely to occur. The methods use measurements of e.g. pitch and roll...

  18. Prediction of regulatory elements

    DEFF Research Database (Denmark)

    Sandelin, Albin

    2008-01-01

    -lab methods are time consuming and expensive, it is not realistic to identify TFBS for all uncharacterized genes in the genome by purely experimental means. Computational methods aimed at predicting potential regulatory regions can increase the efficiency of wet-lab experiments significantly. Here, methods...
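
    One of the basic computational building blocks for predicting TFBS is scanning a sequence with a position weight matrix (PWM); this is a generic illustration, not necessarily one of the specific methods the chapter reviews. The 4-position motif counts below are invented.

        import math

        # Log-odds PWM for a toy 4-bp motif (counts invented; uniform background).
        counts = {"A": [8, 1, 1, 9], "C": [1, 8, 1, 0],
                  "G": [0, 1, 8, 0], "T": [1, 0, 0, 1]}
        n_sites, pseudo, bg = 10, 0.25, 0.25
        pwm = {b: [math.log2((c + pseudo) / (n_sites + 1) / bg) for c in col]
               for b, col in counts.items()}

        def best_hit(seq):
            """Return (score, offset) of the best motif match on the forward strand."""
            w = 4
            scores = [(sum(pwm[seq[i + j]][j] for j in range(w)), i)
                      for i in range(len(seq) - w + 1)]
            return max(scores)

        print(best_hit("TTACGATGCA"))   # strong hit at the embedded ACGA site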

  19. Predictive models in urology.

    Science.gov (United States)

    Cestari, Andrea

    2013-01-01

    Predictive modeling is emerging as an important knowledge-based technology in healthcare. The interest in the use of predictive modeling reflects advances on different fronts, such as the availability of health information from increasingly complex databases and electronic health records, a better understanding of causal or statistical predictors of health, disease processes and multifactorial models of ill-health, and developments in nonlinear computer models using artificial intelligence or neural networks. These new computer-based forms of modeling are increasingly able to establish technical credibility in clinical contexts. The current state of knowledge is still quite young in understanding the likely future direction of how this so-called 'machine intelligence' will evolve and therefore how current relatively sophisticated predictive models will evolve in response to improvements in technology, which is advancing along a wide front. Predictive models in urology are gaining progressive popularity not only for academic and scientific purposes but also in clinical practice, with the introduction of several nomograms dealing with the main fields of onco-urology. PMID:23423686

  20. Predicting coronary heart disease

    DEFF Research Database (Denmark)

    Sillesen, Henrik; Fuster, Valentin

    2012-01-01

    Atherosclerosis is the leading cause of death and disabling disease. Whereas risk factors are well known and constitute therapeutic targets, they are not useful for prediction of risk of future myocardial infarction, stroke, or death. Therefore, methods to identify atherosclerosis itself have bee...

  1. Predicting Intrinsic Motivation

    Science.gov (United States)

    Martens, Rob; Kirschner, Paul A.

    2004-01-01

    Intrinsic motivation can be predicted from participants' perceptions of the social environment and the task environment (Ryan & Deci, 2000) in terms of control, relatedness and competence. To determine the degree of independence of these factors, 251 students in higher vocational education (physiotherapy and hotel management) indicated the extent to…

  2. Predicting Lotto Numbers

    NARCIS (Netherlands)

    Jorgensen, C.B.; Suetens, S.; Tyran, J.R.

    2011-01-01

    We investigate the "law of small numbers" using a unique panel data set on lotto gambling. Because we can track individual players over time, we can measure how they react to outcomes of recent lotto drawings. We can therefore test whether they behave as if they believe they can predict lotto number

  3. Predicting Lotto Numbers

    DEFF Research Database (Denmark)

    Suetens, Sigrid; Galbo-Jørgensen, Claus B.; Tyran, Jean-Robert Karl

    2016-01-01

    We investigate the ‘law of small numbers’ using a data set on lotto gambling that allows us to measure players’ reactions to draws. While most players pick the same set of numbers week after week, we find that those who do change react on average as predicted by the law of small numbers as...

  4. First Super-Earth Atmosphere Analysed

    Science.gov (United States)

    2010-12-01

    The atmosphere around a super-Earth exoplanet has been analysed for the first time by an international team of astronomers using ESO's Very Large Telescope. The planet, which is known as GJ 1214b, was studied as it passed in front of its parent star and some of the starlight passed through the planet's atmosphere. We now know that the atmosphere is either mostly water in the form of steam or is dominated by thick clouds or hazes. The results will appear in the 2 December 2010 issue of the journal Nature. The planet GJ 1214b was confirmed in 2009 using the HARPS instrument on ESO's 3.6-metre telescope in Chile (eso0950) [1]. Initial findings suggested that this planet had an atmosphere, which has now been confirmed and studied in detail by an international team of astronomers, led by Jacob Bean (Harvard-Smithsonian Center for Astrophysics), using the FORS instrument on ESO's Very Large Telescope. "This is the first super-Earth to have its atmosphere analysed. We've reached a real milestone on the road toward characterising these worlds," said Bean. GJ 1214b has a radius of about 2.6 times that of the Earth and is about 6.5 times as massive, putting it squarely into the class of exoplanets known as super-Earths. Its host star lies about 40 light-years from Earth in the constellation of Ophiuchus (the Serpent Bearer). It is a faint star [2], but it is also small, which means that the size of the planet is large compared to the stellar disc, making it relatively easy to study [3]. The planet travels across the disc of its parent star once every 38 hours as it orbits at a distance of only two million kilometres: about seventy times closer than the Earth orbits the Sun. To study the atmosphere, the team observed the light coming from the star as the planet passed in front of it [4]. During these transits, some of the starlight passes through the planet's atmosphere and, depending on the chemical composition and weather on the planet, specific wavelengths of light are

  5. Exchange Rate Predictions

    OpenAIRE

    Yablonskyy, Karen

    2012-01-01

    The aim of this thesis is to analyse foreign exchange forecasting techniques. The central idea is to develop a forecasting strategy by choosing indicators and techniques to make an own forecast on the currency pair EUR/USD. The thesis is a mixture of theory and practical analysis. The goal during the work on this project was to study different types of forecasting techniques and make an own forecast, practice forecasting and trading on a Forex platform, ba...

  6. Predicting Major Solar Eruptions

    Science.gov (United States)

    Kohler, Susanna

    2016-05-01

    Coronal mass ejections (CMEs) and solar flares are two examples of major explosions from the surface of the Sun, but they're not the same thing, and they don't have to happen at the same time. A recent study examines whether we can predict which solar flares will be closely followed by larger-scale CMEs. [Figure: a solar flare from May 2013, as captured by NASA's Solar Dynamics Observatory. NASA/SDO] Flares as a precursor? A solar flare is a localized burst of energy and X-rays, whereas a CME is an enormous cloud of magnetic flux and plasma released from the Sun. We know that some magnetic activity on the surface of the Sun triggers both a flare and a CME, whereas other activity triggers only a confined flare with no CME. But what makes the difference? Understanding this can help us learn about the underlying physical drivers of flares and CMEs. It might also help us to better predict when a CME (which can pose a risk to astronauts, disrupt radio transmissions, and cause damage to satellites) might occur. In a recent study, Monica Bobra and Stathis Ilonidis (Stanford University) attempt to improve our ability to make these predictions by using a machine-learning algorithm. Classification by computer: Bobra and Ilonidis used magnetic-field data from an instrument on the Solar Dynamics Observatory to build a catalog of solar flares, 56 of which were accompanied by a CME and 364 of which were not. The catalog includes information about 18 different features associated with the photospheric magnetic field of each flaring active region (for example, the mean gradient of the horizontal magnetic field). [Figure: using a combination of 6 or more features results in much better predictive success, measured by the True Skill Statistic (higher positive value = better prediction), for whether a flare will be accompanied by a CME. Bobra & Ilonidis 2016] The authors apply a machine-learning algorithm known as a binary classifier to this catalog. This algorithm tries to predict, given a set of features

  7. Prediction of postoperative pain: a systematic review of predictive experimental pain studies

    DEFF Research Database (Denmark)

    Werner, Mads Utke; Mjöbo, Helena N; Nielsen, Per R;

    2010-01-01

    Quantitative testing of a patient's basal pain perception before surgery has the potential to be of clinical value if it can accurately predict the magnitude of pain and requirement of analgesics after surgery. This review includes 14 studies that have investigated the correlation between...... preoperative responses to experimental pain stimuli and clinical postoperative pain and demonstrates that the preoperative pain tests may predict 4-54% of the variance in postoperative pain experience depending on the stimulation methods and the test paradigm used. The predictive strength is much higher than...... previously reported for single factor analyses of demographics and psychologic factors. In addition, some of these studies indicate that an increase in preoperative pain sensitivity is associated with a high probability of development of sustained postsurgical pain....

  8. Predictive role of the nighttime blood pressure

    DEFF Research Database (Denmark)

    Hansen, Tine W; Li, Yan; Boggia, José;

    2011-01-01

    Numerous studies addressed the predictive value of the nighttime blood pressure (BP) as captured by ambulatory monitoring. However, arbitrary cutoff limits in dichotomized analyses of continuous variables, data dredging across selected subgroups, extrapolation of cross-sectional studies to...... conclusive evidence proving that nondipping is a reversible risk factor, the option whether or not to restore the diurnal blood pressure profile to a normal pattern should be left to the clinical judgment of doctors and should be individualized for each patient. Current guidelines on the interpretation of...

  9. Final report on reliability and lifetime prediction.

    Energy Technology Data Exchange (ETDEWEB)

    Gillen, Kenneth Todd; Wise, Jonathan; Jones, Gary D.; Causa, Al G. [Goodyear Tire and Rubber Co., Akron, OH]; Terrill, Edward R. [Goodyear Tire and Rubber Co., Akron, OH]; Borowczak, Marc [Goodyear Tire and Rubber Co., Akron, OH]

    2012-12-01

    This document highlights the important results obtained from the subtask of the Goodyear CRADA devoted to better understanding reliability of tires and to developing better lifetime prediction methods. The overall objective was to establish the chemical and physical basis for the degradation of tires using standard as well as unique models and experimental techniques. Of particular interest was the potential application of our unique modulus profiling apparatus for assessing tire properties and for following tire degradation. During the course of this complex investigation, extensive relevant information was generated, including experimental results, data analyses and development of models and instruments. Detailed descriptions of the findings are included in this report.

  10. Educational Data Mining & Students’ Performance Prediction

    Directory of Open Access Journals (Sweden)

    Amjad Abu Saa

    2016-05-01

    Full Text Available It is important to study and analyse educational data, especially students' performance. Educational Data Mining (EDM) is the field of study concerned with mining educational data to find out interesting patterns and knowledge in educational organizations. This study is equally concerned with this subject, specifically the students' performance. This study explores multiple factors theoretically assumed to affect students' performance in higher education, and finds a qualitative model which best classifies and predicts the students' performance based on related personal and social factors.
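
    The kind of qualitative classification model the abstract describes can be made concrete with a minimal sketch: a decision-tree classifier over a few invented personal and social factors. This is an illustration, not the study's actual pipeline; every column name and record below is hypothetical.

        # Hedged sketch: a decision tree predicting student performance from
        # hypothetical personal/social factors (illustrative data, not the paper's).
        import pandas as pd
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.metrics import accuracy_score

        data = pd.DataFrame({
            "first_lang":  [1, 0, 1, 1, 0, 0, 1, 0],   # instruction language is first language
            "parent_deg":  [1, 1, 0, 0, 0, 1, 1, 0],   # a parent holds a degree
            "study_hrs":   [10, 2, 8, 1, 3, 9, 7, 2],  # weekly self-study hours
            "performance": ["good", "poor", "good", "poor",
                            "poor", "good", "good", "poor"],
        })

        X, y = data[["first_lang", "parent_deg", "study_hrs"]], data["performance"]
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

        clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
        print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))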

  11. An electronic probe micro-analyser. A linear scan device

    International Nuclear Information System (INIS)

    The Castaing electronic probe micro-analyser makes possible static analysis at successive points. For two years this apparatus has been equipped by its constructor with an automatic device for surface scanning. In order to increase the micro-analyser's efficiency a 'linear' scan device has been incorporated making it possible to obtain semi-quantitative analyses very rapidly. (authors)

  12. Refining intra-protein contact prediction by graph analysis

    Directory of Open Access Journals (Sweden)

    Eyal Eran

    2007-05-01

    Full Text Available Background: Accurate prediction of intra-protein residue contacts from sequence information will allow the prediction of protein structures. Basic predictions of such specific contacts can be further refined by jointly analyzing predicted contacts, and by adding information on the relative positions of contacts in the protein primary sequence. Results: We introduce a method for graph analysis refinement of intra-protein contacts, termed GARP. Our previously presented intra-contact prediction method by means of a pair-to-pair substitution matrix (P2PConPred) was used to test the GARP method. In our approach, the top contact predictions obtained by a basic prediction method were used as edges to create a weighted graph. The edges were scored by a mutual clustering coefficient that identifies highly connected graph regions, and by the density of edges between the sequence regions of the edge nodes. A test set of 57 proteins with known structures was used to determine contacts. GARP improves the accuracy of the P2PConPred basic prediction method in whole proteins from 12% to 18%. Conclusion: Using a simple approach we increased the contact prediction accuracy of a basic method by 1.5 times. Our graph approach is simple to implement, can be used with various basic prediction methods, and can provide input for further downstream analyses.
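
    The edge-scoring idea is compact enough to sketch. The snippet below re-scores each predicted contact by how strongly its endpoints share neighbours in the contact graph; this Jaccard-style support is a simplification standing in for GARP's mutual clustering coefficient, and the sequence-distance term is omitted. The contact list is invented.

        # Hedged sketch: treat top predicted contacts as graph edges and re-score
        # each edge by shared-neighbour support (a simplified stand-in for the
        # paper's mutual clustering coefficient).
        from collections import defaultdict

        predicted_contacts = [(3, 40), (3, 42), (5, 40), (5, 42), (3, 5), (10, 80)]

        adj = defaultdict(set)
        for i, j in predicted_contacts:
            adj[i].add(j)
            adj[j].add(i)

        def edge_support(i, j):
            """Fraction of the endpoints' neighbours (excluding i, j) that are shared."""
            ni, nj = adj[i] - {j}, adj[j] - {i}
            union = ni | nj
            return len(ni & nj) / len(union) if union else 0.0

        # Contacts embedded in a densely connected region score higher.
        for i, j in predicted_contacts:
            print(f"contact ({i:>2},{j:>2}) support = {edge_support(i, j):.2f}")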

  13. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik;

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines the...

  14. Predicting Anthracycline Benefit

    DEFF Research Database (Denmark)

    Bartlett, John M S; McConkey, Christopher C; Munro, Alison F;

    2015-01-01

    PURPOSE: Evidence supporting the clinical utility of predictive biomarkers of anthracycline activity is weak, with a recent meta-analysis failing to provide strong evidence for either HER2 or TOP2A. Having previously shown that duplication of chromosome 17 pericentromeric alpha satellite as...... measured with a centromere enumeration probe (CEP17) predicted sensitivity to anthracyclines, we report here an individual patient-level pooled analysis of data from five trials comparing anthracycline-based chemotherapy with CMF (cyclophosphamide, methotrexate, and fluorouracil) as adjuvant chemotherapy...... for early breast cancer. PATIENTS AND METHODS: Fluorescent in situ hybridization for CEP17, HER2, and TOP2A was performed in three laboratories on samples from 3,846 of 4,864 eligible patients from five trials evaluating anthracycline-containing chemotherapy versus CMF. Methodologic differences did...

  15. Chaos detection and predictability

    CERN Document Server

    Gottwald, Georg; Laskar, Jacques

    2016-01-01

    Distinguishing chaoticity from regularity in deterministic dynamical systems and specifying the subspace of the phase space in which instabilities are expected to occur is of utmost importance in areas as disparate as astronomy, particle physics and climate dynamics. To address these issues there exists a plethora of methods for chaos detection and predictability. The most commonly employed technique for investigating chaotic dynamics, i.e. the computation of Lyapunov exponents, may however suffer from a number of problems and drawbacks, for example when applied to noisy experimental data. In the last two decades, several novel methods have been developed for the fast and reliable determination of the regular or chaotic nature of orbits, aimed at overcoming the shortcomings of more traditional techniques. This set of lecture notes and tutorial reviews serves as an introduction to and overview of modern chaos detection and predictability techniques for graduate students and non-specialists. The book cover...

  16. Predictability of Critical Transitions

    CERN Document Server

    Zhang, Xiaozhu; Hallerberg, Sarah

    2015-01-01

    Critical transitions in multistable systems have been discussed as models for a variety of phenomena ranging from the extinctions of species to socio-economic changes and climate transitions between ice-ages and warm-ages. From bifurcation theory we can expect certain critical transitions to be preceded by a decreased recovery from external perturbations. The consequences of this critical slowing down have been observed as an increase in variance and autocorrelation prior to the transition. However, especially in the presence of noise, it is not clear whether these changes in observation variables are statistically relevant such that they could be used as indicators for critical transitions. In this contribution we investigate the predictability of critical transitions in conceptual models. We study the quadratic integrate-and-fire model and the van der Pol model under the influence of external noise. We focus especially on the statistical analysis of the success of predictions and the overall predictabil...

  17. Comparing Spatial Predictions

    KAUST Repository

    Hering, Amanda S.

    2011-11-01

    Under a general loss function, we develop a hypothesis test to determine whether a significant difference in the spatial predictions produced by two competing models exists on average across the entire spatial domain of interest. The null hypothesis is that of no difference, and a spatial loss differential is created based on the observed data, the two sets of predictions, and the loss function chosen by the researcher. The test assumes only isotropy and short-range spatial dependence of the loss differential but does allow it to be non-Gaussian, non-zero-mean, and spatially correlated. Constant and nonconstant spatial trends in the loss differential are treated in two separate cases. Monte Carlo simulations illustrate the size and power properties of this test, and an example based on daily average wind speeds in Oklahoma is used for illustration. Supplemental results are available online. © 2011 American Statistical Association and the American Society for Quality.
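
    A naive, non-spatial version of such a test is easy to sketch; the paper's contribution is precisely what the sketch omits, namely a variance estimate that respects the spatial correlation of the loss differential. The data, loss function, and models below are simulated placeholders.

        # Hedged sketch: loss differential between two prediction sets, tested for
        # zero mean under a (wrong-in-general) independence assumption.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        observed = rng.normal(10.0, 2.0, 500)             # e.g. wind speeds
        pred_a = observed + rng.normal(0.0, 1.0, 500)     # model A errors
        pred_b = observed + rng.normal(0.3, 1.0, 500)     # model B, slightly biased

        squared_loss = lambda z, p: (z - p) ** 2
        d = squared_loss(observed, pred_a) - squared_loss(observed, pred_b)

        z_stat = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
        p_val = 2 * stats.norm.sf(abs(z_stat))
        print(f"mean loss differential = {d.mean():.3f}, z = {z_stat:.2f}, p = {p_val:.3g}")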

  18. The Predictive Audit Framework

    OpenAIRE

    Kuenkaikaew, Siripan; Vasarhelyi, Miklos A.

    2013-01-01

    Assurance is an essential part of the business process of the modern enterprise. Auditing is a widely used assurance method made mandatory for public companies since 1934. The traditional (retroactive) audit provides after-the-fact audit reports, and is of limited value in the ever changing modern business environment because it is slow and backwards looking. Contemporary auditing and monitoring technologies could shorten the audit and assurance time frame. This paper proposes the predictive ...

  19. Multivariate respiratory motion prediction

    International Nuclear Information System (INIS)

    In extracranial robotic radiotherapy, tumour motion is compensated by tracking external and internal surrogates. To compensate system specific time delays, time series prediction of the external optical surrogates is used. We investigate whether the prediction accuracy can be increased by expanding the current clinical setup by an accelerometer, a strain belt and a flow sensor. Four previously published prediction algorithms are adapted to multivariate inputs—normalized least mean squares (nLMS), wavelet-based least mean squares (wLMS), support vector regression (SVR) and relevance vector machines (RVM)—and evaluated for three different prediction horizons. The measurement involves 18 subjects and consists of two phases, focusing on long term trends (M1) and breathing artefacts (M2). To select the most relevant and least redundant sensors, a sequential forward selection (SFS) method is proposed. Using a multivariate setting, the results show that the clinically used nLMS algorithm is susceptible to large outliers. In the case of irregular breathing (M2), the mean root mean square error (RMSE) of a univariate nLMS algorithm is 0.66 mm and can be decreased to 0.46 mm by a multivariate RVM model (best algorithm on average). To investigate the full potential of this approach, the optimal sensor combination was also estimated on the complete test set. The results indicate that a further decrease in RMSE is possible for RVM (to 0.42 mm). This motivates further research about sensor selection methods. Besides the optical surrogates, the sensors most frequently selected by the algorithms are the accelerometer and the strain belt. These sensors could be easily integrated in the current clinical setup and would allow a more precise motion compensation. (paper)
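
    For reference, the clinical baseline named here, normalized least mean squares, fits in a few lines for the univariate one-step-ahead case. The filter length, step size, and synthetic breathing signal below are illustrative choices, not the study's settings.

        # Hedged sketch of a univariate nLMS one-step-ahead predictor: predict the
        # next surrogate sample from the last n samples, updating the weights with
        # a normalized gradient step after each observation.
        import numpy as np

        def nlms_predict(signal, n_taps=10, mu=0.5, eps=1e-6):
            w = np.zeros(n_taps)
            preds = np.zeros_like(signal)
            for t in range(n_taps, len(signal)):
                x = signal[t - n_taps:t][::-1]       # most recent sample first
                preds[t] = w @ x                     # one-step-ahead prediction
                err = signal[t] - preds[t]
                w += mu * err * x / (eps + x @ x)    # normalized update
            return preds

        t = np.arange(0, 60, 0.1)
        rng = np.random.default_rng(1)
        breathing = np.sin(2 * np.pi * t / 4.0) + 0.05 * rng.normal(size=t.size)
        pred = nlms_predict(breathing)
        rmse = np.sqrt(np.mean((breathing[200:] - pred[200:]) ** 2))
        print(f"one-step RMSE after convergence: {rmse:.3f}")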

  20. Predictive Game Theory

    Science.gov (United States)

    Wolpert, David H.

    2005-01-01

    Probability theory governs the outcome of a game; there is a distribution over mixed strategies, not a single "equilibrium". To predict a single mixed strategy one must use a loss function (external to the game's players). This provides a quantification of any strategy's rationality. We prove that rationality falls as the cost of computation rises (for players who have not previously interacted). All of this extends to games with varying numbers of players.

  1. Predicting helpful product reviews

    OpenAIRE

    O'Mahony, Michael P.; Cunningham, Pádraig; Smyth, Barry

    2010-01-01

    Millions of users are today posting user-generated content online, expressing their opinions on all manner of goods and services, topics and social affairs. While undoubtedly useful, user-generated content presents consumers with significant challenges in terms of information overload and quality considerations. In this paper, we address these issues in the context of product reviews and present a brief survey of our work to date on predicting review helpfulness. In particular, the performa...

  2. Individualizing fracture risk prediction

    OpenAIRE

    van Geel, Tineke A. C. M.; van den Bergh, Joop P. W.; Dinant, Geert Jan; Geusens, Piet

    2010-01-01

    Low bone mineral density (BMD) and clinical factors (CRF) have been identified as factors associated with an increased relative risk of fractures. From this observation and for clinical decision making, the concept of prediction of the individual absolute risk of fractures has emerged. It refers to the individual's risk for fractures over a certain time period, e.g. the next 5 and 10 years. Two individualized fracture risk calculation tools that are increasingly used and are available on the ...

  3. Predicting appointment breaking.

    Science.gov (United States)

    Bean, A G; Talaga, J

    1995-01-01

    The goal of physician referral services is to schedule appointments, but if too many patients fail to show up, the value of the service will be compromised. The authors found that appointment breaking can be predicted by the number of days to the scheduled appointment, the doctor's specialty, and the patient's age and gender. They also offer specific suggestions for modifying the marketing mix to reduce the incidence of no-shows. PMID:10142384

  4. Thinking about Aid Predictability

    OpenAIRE

    Andrews, Matthew; Wilhelm, Vera

    2008-01-01

    Researchers are giving more attention to aid predictability. In part, this is because of increases in the number of aid agencies and aid dollars and the growing complexity of the aid community. A growing body of research is examining key questions: Is aid unpredictable? What causes unpredictability? What can be done about it? This note draws from a selection of recent literature to bring s...

  5. Time-predictable architectures

    CERN Document Server

    Rochange, Christine; Uhrig, Sascha

    2014-01-01

    Building computers that can be used to design embedded real-time systems is the subject of this title. Real-time embedded software requires increasingly higher performance. The authors therefore consider processors that implement advanced mechanisms such as pipelining, out-of-order execution, branch prediction, cache memories, multi-threading, multicore architectures, etc. The authors of this book investigate the time predictability of such schemes.

  6. Multivariate respiratory motion prediction

    Science.gov (United States)

    Dürichen, R.; Wissel, T.; Ernst, F.; Schlaefer, A.; Schweikard, A.

    2014-10-01

    In extracranial robotic radiotherapy, tumour motion is compensated by tracking external and internal surrogates. To compensate system specific time delays, time series prediction of the external optical surrogates is used. We investigate whether the prediction accuracy can be increased by expanding the current clinical setup by an accelerometer, a strain belt and a flow sensor. Four previously published prediction algorithms are adapted to multivariate inputs—normalized least mean squares (nLMS), wavelet-based least mean squares (wLMS), support vector regression (SVR) and relevance vector machines (RVM)—and evaluated for three different prediction horizons. The measurement involves 18 subjects and consists of two phases, focusing on long term trends (M1) and breathing artefacts (M2). To select the most relevant and least redundant sensors, a sequential forward selection (SFS) method is proposed. Using a multivariate setting, the results show that the clinically used nLMS algorithm is susceptible to large outliers. In the case of irregular breathing (M2), the mean root mean square error (RMSE) of a univariate nLMS algorithm is 0.66 mm and can be decreased to 0.46 mm by a multivariate RVM model (best algorithm on average). To investigate the full potential of this approach, the optimal sensor combination was also estimated on the complete test set. The results indicate that a further decrease in RMSE is possible for RVM (to 0.42 mm). This motivates further research about sensor selection methods. Besides the optical surrogates, the sensors most frequently selected by the algorithms are the accelerometer and the strain belt. These sensors could be easily integrated in the current clinical setup and would allow a more precise motion compensation.

  7. Numbers, Predictions and War

    OpenAIRE

    J.W. Grobbelaar

    2012-01-01

    The subtitle of this book, 'Using history to evaluate combat forces and predict the outcome of battles', is a good description of the author's ambitious aim. The book describes a study undertaken at the Historical Evaluation and Research Organization (abbreviated HERO) to construct a mathematical model with which the outcome of any battle can be predicted. Two fundamental assumptions form the basis of the study:

  8. L'analyse qualitative comme approche multiple [Qualitative analysis as a multiple approach]

    Directory of Open Access Journals (Sweden)

    Roberto Cipriani

    2009-11-01

    Full Text Available The example of historical inquiry, which aims to identify the characteristics of the birth and development of a science and the readings it gives of social events, is among the most original. Any historical methodology not only yields a sheer mass of episodes and events, but is also a narration and a critical elaboration of those same facts. Michael Postan rightly writes that the complexity of historical data is such, and the differences and similarities so difficult to pin down, that the efforts of historians and sociologists to construct explicit comparisons have, for the most part, ended in crude and naive attempts. The lesson of the Annales school has contributed to building the idea of a history that can read and explain both what is uniform and what is singular. Nothing is more natural than the union of 'psychical beings', like the assembly of cells into an organism, into a new and different 'psychical being'. A turn is therefore needed towards broader and more rigorous empirical experimentation, in order to have adequate instruments capable of guaranteeing sufficient reliability for micro, qualitative and biographical methodology. The historical approach offers a relevant contribution to finding the features of the birth and development of a science which analyses social events. Historical methodology produces not only a lot of data but also a narrative and an interpretation of facts. According to Michael Postan, history and sociology have made many efforts to compare data that are complex but similar and different at the same time, and the results seem naive. Thanks to the Annales school's suggestion it is possible to read and to explain what is uniform and what is singular. To put together 'psychical beings', like organic cells, in a new

  9. Odor Impression Prediction from Mass Spectra

    Science.gov (United States)

    Nakamoto, Takamichi

    2016-01-01

    The sense of smell arises from the perception of odors from chemicals. However, the relationship between the impression of odor and the numerous physicochemical parameters has yet to be understood owing to its complexity. As such, there is no established general method for predicting the impression of odor of a chemical only from its physicochemical properties. In this study, we designed a novel predictive model based on an artificial neural network with a deep structure for predicting odor impression utilizing the mass spectra of chemicals, and we conducted a series of computational analyses to evaluate its performance. Feature vectors extracted from the original high-dimensional space using two autoencoders equipped with both input and output layers in the model are used to build a mapping function from the feature space of mass spectra to the feature space of sensory data. The results of predictions obtained by the proposed new method have notable accuracy (R≅0.76) in comparison with a conventional method (R≅0.61). PMID:27326765
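
    The two-stage structure (compress the spectra into a feature space, then map that space to sensory space) can be illustrated with a deliberately simplified stand-in: a linear autoencoder computed by SVD in place of the paper's deep nonlinear autoencoders, trained on random surrogate data rather than real mass spectra.

        # Hedged sketch: linear "autoencoder" (truncated SVD) compresses surrogate
        # spectra; least squares then maps spectral features to sensory ratings.
        import numpy as np

        rng = np.random.default_rng(0)
        n, d_spec, d_feat, d_sens = 200, 100, 8, 5
        latent = rng.normal(size=(n, d_feat))   # hidden odour factors
        spectra = latent @ rng.normal(size=(d_feat, d_spec)) + 0.1 * rng.normal(size=(n, d_spec))
        sensory = latent @ rng.normal(size=(d_feat, d_sens)) + 0.1 * rng.normal(size=(n, d_sens))

        # Optimal linear encoder = top right-singular vectors of the centred spectra.
        spec_c = spectra - spectra.mean(0)
        _, _, Vt = np.linalg.svd(spec_c, full_matrices=False)
        feats = spec_c @ Vt[:d_feat].T

        W, *_ = np.linalg.lstsq(feats, sensory - sensory.mean(0), rcond=None)
        pred = feats @ W + sensory.mean(0)
        r = np.corrcoef(pred.ravel(), sensory.ravel())[0, 1]
        print(f"correlation between predicted and 'measured' impressions: R = {r:.2f}")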

  10. Predicting Community Evolution in Social Networks

    Directory of Open Access Journals (Sweden)

    Stanisław Saganowski

    2015-05-01

    Full Text Available Nowadays, sustained development of different social media can be observed worldwide. One of the relevant research domains intensively explored recently is the analysis of social communities existing in social media, as well as the prediction of their future evolution taking into account collected historical evolution chains. The evolution chains proposed in the paper contain group states in the previous time frames and the historical transitions that were identified using one of two methods: Stable Group Changes Identification (SGCI) and Group Evolution Discovery (GED). Based on the observed evolution chains of various lengths, structural network features are extracted, validated and selected, and used to learn classification models. The experimental studies were performed on three real datasets with different profiles: DBLP, Facebook and the Polish blogosphere. The process of group prediction was analysed with respect to different classifiers as well as various descriptive feature sets extracted from evolution chains of different lengths. The results revealed that, in general, the longer the evolution chains, the better the predictive abilities of the classification models. However, chains of length 3 to 7 enabled the GED-based method to almost reach its maximum possible prediction quality. For SGCI, this value was at the level of the last 3-5 periods.

  11. Predicting Human Cooperation

    Science.gov (United States)

    Nay, John J.; Vorobeychik, Yevgeniy

    2016-01-01

    The Prisoner’s Dilemma has been a subject of extensive research due to its importance in understanding the ever-present tension between individual self-interest and social benefit. A strictly dominant strategy in a Prisoner’s Dilemma (defection), when played by both players, is mutually harmful. Repetition of the Prisoner’s Dilemma can give rise to cooperation as an equilibrium, but defection is one as well, and this ambiguity is difficult to resolve. The numerous behavioral experiments investigating the Prisoner’s Dilemma highlight that players often cooperate, but the level of cooperation varies significantly with the specifics of the experimental setting. We present the first computational model of human behavior in repeated Prisoner’s Dilemma games that unifies the diversity of experimental observations in a systematic and quantitatively reliable manner. Our model relies on data we integrated from many experiments, comprising 168,386 individual decisions. The model is composed of two pieces: the first predicts the first-period action using solely the structural game parameters, while the second predicts dynamic actions using both game parameters and history of play. Our model is successful not merely at fitting the data, but in predicting behavior at multiple scales in experimental designs not used for calibration, using only information about the game structure. We demonstrate the power of our approach through a simulation analysis revealing how to best promote human cooperation. PMID:27171417
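
    A toy version of the two-piece structure might look like the sketch below: one logistic model using game-structure features alone, and a second adding a history-of-play feature. The feature names, coefficients, and simulated decisions are hypothetical; this is not the paper's calibrated model.

        # Hedged sketch of a two-piece cooperation predictor on simulated data.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n = 1000
        payoff_ratio = rng.uniform(1.0, 3.0, n)    # game-structure feature (hypothetical)
        continuation = rng.uniform(0.0, 1.0, n)    # repeat-game probability
        partner_coop = rng.integers(0, 2, n)       # history feature (periods > 1)

        logit = -1.0 + 0.5 * payoff_ratio + 1.5 * continuation + 1.0 * partner_coop
        coop = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

        X1 = np.c_[payoff_ratio, continuation]                  # structure only
        X2 = np.c_[payoff_ratio, continuation, partner_coop]    # structure + history
        first_period = LogisticRegression().fit(X1, coop)
        dynamic = LogisticRegression().fit(X2, coop)
        print("structure-only accuracy:", first_period.score(X1, coop))
        print("with history accuracy:  ", dynamic.score(X2, coop))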

  12. Eclipse prediction in Mesopotamia.

    Science.gov (United States)

    Steele, J. M.

    2000-02-01

    Among the many celestial phenomena observed in ancient Mesopotamia, eclipses, particularly eclipses of the Moon, were considered to be among the astrologically most significant events. In Babylon, by at least the middle of the seventh century BC, and probably as early as the middle of the eighth century BC, astronomical observations were being systematically conducted and recorded in a group of texts which we have come to call Astronomical Diaries. These Diaries contain many observations and predictions of eclipses. The predictions generally include the expected time of the eclipse, apparently calculated quite precisely. By the last three centuries BC, the Babylonian astronomers had developed highly advanced mathematical theories of the Moon and planets. This paper outlines the various methods which appear to have been formulated by the Mesopotamian astronomers to predict eclipses of the Sun and the Moon. It also considers the question of which of these methods were actually used in compiling the Astronomical Diaries, and speculates why these particular methods were used.

  13. Prediction in projection

    Science.gov (United States)

    Garland, Joshua; Bradley, Elizabeth

    2015-12-01

    Prediction models that capture and use the structure of state-space dynamics can be very effective. In practice, however, one rarely has access to full information about that structure, and accurate reconstruction of the dynamics from scalar time-series data—e.g., via delay-coordinate embedding—can be a real challenge. In this paper, we show that forecast models that employ incomplete reconstructions of the dynamics—i.e., models that are not necessarily true embeddings—can produce surprisingly accurate predictions of the state of a dynamical system. In particular, we demonstrate the effectiveness of a simple near-neighbor forecast technique that works with a two-dimensional time-delay reconstruction of both low- and high-dimensional dynamical systems. Even though correctness of the topology may not be guaranteed for incomplete reconstructions like this, the dynamical structure that they do capture allows for accurate predictions—in many cases, even more accurate than predictions generated using a traditional embedding. This could be very useful in the context of real-time forecasting, where the human effort required to produce a correct delay-coordinate embedding is prohibitive.
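
    The strategy is compact enough to sketch directly: embed the scalar series in two delay coordinates, find the nearest past states to the current one, and average their successors. The delay tau, neighbour count k, and the logistic-map test signal are illustrative choices, not the paper's experimental setup.

        # Hedged sketch: near-neighbour forecasting on a 2-D delay reconstruction.
        import numpy as np

        def delay_forecast(series, tau=8, k=4):
            pts = np.column_stack([series[tau:-1], series[:-tau - 1]])  # states (x_t, x_{t-tau})
            succ = series[tau + 1:]                                     # their successors
            query = np.array([series[-1], series[-1 - tau]])            # current state
            dist = np.linalg.norm(pts - query, axis=1)
            return succ[np.argsort(dist)[:k]].mean()                    # neighbour average

        # Logistic map as a simple deterministic test signal.
        x = np.empty(2000)
        x[0] = 0.4
        for i in range(1999):
            x[i + 1] = 3.9 * x[i] * (1 - x[i])

        print("forecast:", delay_forecast(x[:-1]), "actual:", x[-1])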

  14. Is Suicide Predictable?

    Directory of Open Access Journals (Sweden)

    S Asmaee

    2012-04-01

    Full Text Available Background: The current study aimed to test the hypothesis: is suicide predictable? It also attempted to classify the predictive factors in multiple suicide attempts. Methods: A cross-sectional study was administered to 223 multiple attempters, women who came to a medical poison centre after a suicide attempt. The participants were young, poor, and single. A logistic regression analysis was used to classify the predictive factors of suicide. Results: Women who had multiple suicide attempts exhibited a significant tendency to attempt suicide again. They had a history of more than two years of multiple suicide attempts, from three to as many as 18 times, plus mental illnesses such as depression and substance abuse. They also had a positive history of mental illnesses. Conclusion: Results indicate that contributing factors for another suicide attempt include previous suicide attempts, mental illness (depression) or a positive history of mental illnesses in the family affecting them at a young age, and substance abuse.

  15. Using a Log Analyser to Assist Research into Haptic Technology

    Science.gov (United States)

    Jónsson, Fannar Freyr; Hvannberg, Ebba Þóra

    Usability evaluations collect subjective and objective measures. An example of the latter is time to complete a task. The paper describes use cases of a log analyser for haptic feedback. The log analyser reads a log file and extracts information such as the time of each practice and assessment session, analyses whether the user goes off curve, and measures the force applied. A case study using the analyser is performed with a PHANToM haptic learning environment application that is used to teach young visually impaired students the subject of polynomials. The paper answers six questions to illustrate further use cases of the log analyser.

  16. Predicting travel time variability for cost-benefit analysis

    NARCIS (Netherlands)

    S. Peer; C. Koopmans; E.T. Verhoef

    2010-01-01

    Unreliable travel times cause substantial costs to travelers. Nevertheless, they are not taken into account in many cost-benefit-analyses (CBA), or only in very rough ways. This paper aims at providing simple rules on how variability can be predicted, based on travel time data from Dutch highways. T

  17. Predicting Alcohol, Cigarette, and Marijuana Use from Preferential Music Consumption

    Science.gov (United States)

    Oberle, Crystal D.; Garcia, Javier A.

    2015-01-01

    This study investigated whether use of alcohol, cigarettes, and marijuana may be predicted from preferential consumption of particular music genres. Undergraduates (257 women and 78 men) completed a questionnaire assessing these variables. Partial correlation analyses, controlling for sensation-seeking tendencies and behaviors, revealed that…

  18. Large-scale prediction of drug-target relationships

    DEFF Research Database (Denmark)

    Kuhn, Michael; Campillos, Mónica; González, Paula;

    2008-01-01

    also provides a more global view on drug-target relations. Here we review recent attempts to apply large-scale computational analyses to predict novel interactions of drugs and targets from molecular and cellular features. In this context, we quantify the family-dependent probability of two proteins to...

  19. Pan-cancer analyses of the nuclear receptor superfamily

    Science.gov (United States)

    Long, Mark D.; Campbell, Moray J.

    2016-01-01

    Nuclear receptors (NRs) act as an integrated conduit for environmental and hormonal signals to govern genomic responses, which relate to cell fate decisions. We review how their integrated actions with each other, shared co-factors and other transcription factors are disrupted in cancer. Steroid hormone nuclear receptors are oncogenic drivers in breast and prostate cancer, and blockade of signaling is a major therapeutic goal. By contrast to blockade of receptors, in other cancers enhanced receptor function is attractive, as illustrated initially with targeting of retinoic acid receptors in leukemia. In the post-genomic era large consortia, such as The Cancer Genome Atlas, have developed a remarkable volume of genomic data with which to examine multiple aspects of nuclear receptor status in a pan-cancer manner. Therefore, to extend the review of NR function we have also undertaken bioinformatics analyses of NR expression in over 3000 tumors, spread across six different tumor types (bladder, breast, colon, head and neck, liver and prostate). Specifically, to ask how NR expression was distorted (altered expression, mutation and CNV) we applied bootstrapping approaches to simulate data for comparison, and also compared these NR findings to 12 other transcription factor families. Nuclear receptors were uniquely and uniformly downregulated across all six tumor types, more than predicted by chance. These approaches also revealed that each tumor type had a specific NR expression profile, but these were most similar between breast and prostate cancer. Some NRs were down-regulated in at least five tumor types (e.g. NR3C2/MR and NR5A2/LRH-1) whereas others were uniquely down-regulated in one tumor (e.g. NR1B3/RARG). The downregulation was not driven by copy number variation or mutation, and epigenetic mechanisms may be responsible for the altered nuclear receptor expression.
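
    The bootstrapping comparison can be shown in miniature: compare the observed fraction of down-regulated family members (e.g. the 48 human NRs) against a null distribution built from equally sized random gene sets. The fold changes below are simulated with a planted family-wide shift; they are not TCGA data.

        # Hedged sketch: bootstrap null for family-wide down-regulation.
        import numpy as np

        rng = np.random.default_rng(0)
        n_genes, family_size = 20000, 48
        log2_fc = rng.normal(0.0, 1.0, n_genes)   # simulated tumour/normal fold changes
        family_idx = rng.choice(n_genes, family_size, replace=False)
        log2_fc[family_idx] -= 0.5                # planted family-wide shift

        observed = (log2_fc[family_idx] < 0).mean()
        null = np.array([
            (log2_fc[rng.choice(n_genes, family_size, replace=False)] < 0).mean()
            for _ in range(10000)
        ])
        p = (null >= observed).mean()
        print(f"observed down-regulated fraction: {observed:.2f}, bootstrap p = {p:.4f}")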

  20. In-pile behaviour analyses of the monitoring fuel rods

    International Nuclear Information System (INIS)

    As test objects for the Proving Test on the Reliability of BWR 8x8 Fuel Assemblies, 10 monitoring fuel assemblies were irradiated under normal conditions in a typical commercial BWR. Seven were subjected to detailed post-irradiation examination, and a large amount of data was obtained on in-pile fuel rod behaviour. On the basis of these data, fuel rod behaviour is discussed. A new computer code for thermal-mechanical analysis of the fuel rods was used to analyse the behaviour of the punctured rods, which had exposures ranging from 4 to 32 GW·d/t. The diameters of these monitoring fuel rods showed a very small decrease due to cladding creep-down, followed by an increase. Pellet-cladding mechanical interaction (PCMI) was first considered to have caused this increase, but no evidence was found of a PCMI large enough to have resulted in permanent strain on the cladding. From the data on diameter change, most of the increase was then attributed to cladding surface corrosion and crud deposition. The fission gas release (FGR) rate showed a distinctive exposure dependency. At exposures lower than 10 GW·d/t all FGR data were less than 1%; however, above 10 GW·d/t they showed large scatter, from 0.1 to 20%. Data from microgamma scanning indicated the existence of a threshold temperature for FGR. This temperature depended on exposure, and the local FGR rate increased rapidly with temperature above the threshold. This rapid increase of the local FGR rate may be the reason for the distinctive data scattering. Pellet density changes were also compared with out-of-pile test results. Through this evaluation work, a better understanding of the in-pile behaviour of commercial fuel rods has been achieved and the reliability of the BWR fuel rods has been verified; the prediction capability of the new code was also confirmed. (author)

  1. Validation of HELIOS for ATR Core Follow Analyses

    International Nuclear Information System (INIS)

    This work summarizes the validation analyses for the HELIOS code to support core design and safety assurance calculations of the Advanced Test Reactor (ATR). Past and current core safety assurance is performed by the PDQ-7 diffusion code; a state of the art reactor physics simulation tool from the nuclear industry's earlier days. Over the past twenty years, improvements in computational speed have enabled the use of modern neutron transport methodologies to replace the role of diffusion theory for simulation of complex systems, such as the ATR. More exact methodologies have enabled a paradigm-shift away from highly tuned codes that force compliance with a bounding safety envelope, and towards codes regularly validated against routine measurements. To validate HELIOS, the 16 ATR operational cycles from late-2009 to present were modeled. The computed power distribution was compared against data collected by the ATR's on-line power surveillance system. It was found that the ATR's lobe-powers could be determined with ±10% accuracy. Also, the ATR's cold startup shim configuration for each of these 16 cycles was estimated and compared against the reported critical position from the reactor log-book. HELIOS successfully predicted criticality within the tolerance set by the ATR startup procedure for 13 out of the 16 cycles. This is compared to 12 times for PDQ (without empirical adjustment). These findings, as well as other insights discussed in this report, suggest that HELIOS is highly suited for replacing PDQ for core safety assurance of the ATR. Furthermore, a modern verification and validation framework has been established that allows reactor and fuel performance data to be computed with a known degree of accuracy and stated uncertainty.

  2. Pan-Cancer Analyses of the Nuclear Receptor Superfamily

    Directory of Open Access Journals (Sweden)

    Mark D. Long

    2015-12-01

    Full Text Available Nuclear receptors (NRs) act as an integrated conduit for environmental and hormonal signals to govern genomic responses, which relate to cell fate decisions. We review how their integrated actions with each other, shared co-factors and other transcription factors are disrupted in cancer. Steroid hormone nuclear receptors are oncogenic drivers in breast and prostate cancer, and blockade of signaling is a major therapeutic goal. By contrast to blockade of receptors, in other cancers enhanced receptor function is attractive, as illustrated initially with targeting of retinoic acid receptors in leukemia. In the post-genomic era large consortia, such as The Cancer Genome Atlas, have developed a remarkable volume of genomic data with which to examine multiple aspects of nuclear receptor status in a pan-cancer manner. Therefore, to extend the review of NR function we have also undertaken bioinformatics analyses of NR expression in over 3000 tumors, spread across six different tumor types (bladder, breast, colon, head and neck, liver and prostate). Specifically, to ask how NR expression was distorted (altered expression, mutation and CNV) we applied bootstrapping approaches to simulate data for comparison, and also compared these NR findings to 12 other transcription factor families. Nuclear receptors were uniquely and uniformly downregulated across all six tumor types, more than predicted by chance. These approaches also revealed that each tumor type had a specific NR expression profile, but these were most similar between breast and prostate cancer. Some NRs were down-regulated in at least five tumor types (e.g., NR3C2/MR and NR5A2/LRH-1) whereas others were uniquely down-regulated in one tumor (e.g., NR1B3/RARG). The downregulation was not driven by copy number variation or mutation, and epigenetic mechanisms may be responsible for the altered nuclear receptor expression.

  3. Validation of HELIOS for ATR Core Follow Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Bays, Samuel E. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Swain, Emily T. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Crawford, Douglas S. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Nigg, David W. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-03-01

    This work summarizes the validation analyses for the HELIOS code to support core design and safety assurance calculations of the Advanced Test Reactor (ATR). Past and current core safety assurance is performed by the PDQ-7 diffusion code; a state of the art reactor physics simulation tool from the nuclear industry’s earlier days. Over the past twenty years, improvements in computational speed have enabled the use of modern neutron transport methodologies to replace the role of diffusion theory for simulation of complex systems, such as the ATR. More exact methodologies have enabled a paradigm-shift away from highly tuned codes that force compliance with a bounding safety envelope, and towards codes regularly validated against routine measurements. To validate HELIOS, the 16 ATR operational cycles from late-2009 to present were modeled. The computed power distribution was compared against data collected by the ATR’s on-line power surveillance system. It was found that the ATR’s lobe-powers could be determined with ±10% accuracy. Also, the ATR’s cold startup shim configuration for each of these 16 cycles was estimated and compared against the reported critical position from the reactor log-book. HELIOS successfully predicted criticality within the tolerance set by the ATR startup procedure for 13 out of the 16 cycles. This is compared to 12 times for PDQ (without empirical adjustment). These findings, as well as other insights discussed in this report, suggest that HELIOS is highly suited for replacing PDQ for core safety assurance of the ATR. Furthermore, a modern verification and validation framework has been established that allows reactor and fuel performance data to be computed with a known degree of accuracy and stated uncertainty.

  4. Aeroacoustic Prediction Codes

    Science.gov (United States)

    Gliebe, P; Mani, R.; Shin, H.; Mitchell, B.; Ashford, G.; Salamah, S.; Connell, S.; Huff, Dennis (Technical Monitor)

    2000-01-01

    This report describes work performed on Contract NAS3-27720 AoI 13 as part of the NASA Advanced Subsonic Transport (AST) Noise Reduction Technology effort. Computer codes were developed to provide quantitative prediction, design, and analysis capability for several aircraft engine noise sources. The objective was to provide improved, physics-based tools for exploration of noise-reduction concepts and understanding of experimental results. Methods and codes focused on fan broadband and 'buzz saw' noise and on low-emissions combustor noise, and complement work done by other contractors under the NASA AST program to develop methods and codes for fan harmonic tone noise and jet noise. The methods and codes developed and reported herein employ a wide range of approaches, from the strictly empirical to the completely computational, with some being semi-empirical, analytical, and/or analytical/computational. Emphasis was on capturing the essential physics while still considering method or code utility as a practical design and analysis tool for everyday engineering use. Codes and prediction models were developed for: (1) an improved empirical correlation model for fan rotor exit flow mean and turbulence properties, for use in predicting broadband noise generated by rotor exit flow turbulence interaction with downstream stator vanes; (2) fan broadband noise models for rotor and stator/turbulence interaction sources including 3D effects, noncompact-source effects, directivity modeling, and extensions to the rotor supersonic tip-speed regime; (3) fan multiple-pure-tone in-duct sound pressure prediction methodology based on computational fluid dynamics (CFD) analysis; and (4) low-emissions combustor prediction methodology and computer code based on CFD and actuator disk theory. In addition, the relative importance of dipole and quadrupole source mechanisms was studied using direct CFD source computation for a simple cascade/gust interaction problem, and an empirical combustor

  5. Integrated Waste Treatment Unit (IWTU) Input Coal Analyses and Off-Gas Filter (OGF) Content Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Jantzen, Carol M. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Missimer, David M. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Guenther, Chris P. [National Energy Technology Lab. (NETL), Morgantown, WV (United States); Shekhawat, Dushyant [National Energy Technology Lab. (NETL), Morgantown, WV (United States); VanEssendelft, Dirk T. [National Energy Technology Lab. (NETL), Morgantown, WV (United States); Means, Nicholas C. [AECOM Technology Corp., Oak Ridge, TN (United States)

    2015-04-23

    in process piping and materials, in excessive off-gas absorbent loading, and in undesired process emissions. The ash content of the coal is important as the ash adds to the DMR and other vessel products, which affect the final waste product mass and composition. The amount and composition of the ash also affect the reaction kinetics. Thus ash content and composition contribute to the mass balance. In addition, sodium, potassium, calcium, sulfur, and possibly silica and alumina in the ash may contribute to wall-scale formation. Sodium, potassium, and alumina in the ash will be overwhelmed by the sodium, potassium, and alumina from the feed, but the impact from the other ash components needs to be quantified. A maximum coal particle size is specified so the feed system does not plug, and a minimum particle size is specified to prevent excess elutriation from the DMR to the Process Gas Filter (PGF). A vendor specification was used to procure the calcined coal for IWTU processing. While the vendor supplied a composite analysis for the 22 tons of coal (Appendix A), this study compares independent analyses of the coal performed at the Savannah River National Laboratory (SRNL) and at the National Energy Technology Laboratory (NETL). Three supersacks were sampled at three different heights within each sack in order to determine within-bag and between-bag variability of the coal. These analyses were compared to the vendor's composite analyses and to the coal specification, as well as to historic data on Bestac coal analyses that had been performed at Hazen Research Inc. (HRI) between 2004 and 2011.

  6. Improvements in Hanford TRU Program Utilizing Systems Modeling and Analyses

    International Nuclear Information System (INIS)

    Hanford's Transuranic (TRU) Program is responsible for certifying contact-handled (CH) TRU waste and shipping the certified waste to the Waste Isolation Pilot Plant (WIPP). Hanford's CH TRU waste includes material that is in retrievable storage as well as above-ground storage, and newly generated waste. Certifying a typical container entails retrieving and then characterizing it (Non-Destructive Examination [NDE], Non-Destructive Assay [NDA], and Head Space Gas Sampling [HSG]), validating records (data review and reconciliation), and designating the container for a payload. The certified payload is then shipped to WIPP. Systems modeling and analysis techniques were applied to Hanford's TRU Program to help streamline the certification process and increase shipping rates. The modeling and analysis yield several benefits: - Maintains visibility on system performance and predicts downstream consequences of production issues. - Predicts future system performance with higher confidence, based on tracking past performance. - Applies speculation analyses to determine the impact of proposed changes (e.g., an apparent shortage of feed should not be used as a basis to reassign personnel if more feed is coming in the queue). - Positively identifies the appropriate queue for all containers (e.g., discovered several containers that were not actively being worked because they were in the wrong 'physical' location - the method used previously for queuing up containers). - Identifies anomalies with the various data systems used to track inventory (e.g., dimensional differences for Standard Waste Boxes). A model of the TRU Program certification process was created using custom queries of the multiple databases for managing waste containers. The model was developed using a simplified process chart based on the expected path for a typical container. The process chart was augmented with the remediation path for containers that do not meet acceptance criteria for WIPP. Containers are sorted

  7. Angular analyses in relativistic quantum mechanics

    Energy Technology Data Exchange (ETDEWEB)

    Moussa, P. [Commissariat a l' Energie Atomique, 91 - Saclay (France). Centre d' Etudes Nucleaires

    1968-06-01

    This work describes the angular analysis of reactions between particles with spin in a fully relativistic fashion. One-particle states are introduced, following Wigner's method, as representations of the inhomogeneous Lorentz group. In order to perform the angular analyses, the reduction of the product of two representations of the inhomogeneous Lorentz group is studied. Clebsch-Gordan coefficients are computed for the following couplings: l-s coupling, helicity coupling, multipolar coupling, and symmetric coupling for more than two particles. Massless and massive particles are handled simultaneously. Along the way we construct spinorial amplitudes and free fields, and we recall how to establish convergence theorems for angular expansions from analyticity hypotheses. Finally we substitute these hypotheses for the idea of a 'potential radius', which at low energy gives the usual 'centrifugal barrier' factors. The presence of such factors had never before been deduced from hypotheses compatible with relativistic invariance. (author)

  8. Prolonged grief and depression after unnatural loss: Latent class analyses and cognitive correlates.

    Science.gov (United States)

    Boelen, Paul A; Reijntjes, Albert; J Djelantik, A A A Manik; Smid, Geert E

    2016-06-30

    This study sought to identify (a) subgroups among people confronted with unnatural/violent loss, characterized by different symptom profiles of prolonged grief disorder (PGD) and depression, and (b) socio-demographic, loss-related, and cognitive variables associated with subgroup membership. We used data from 245 individuals confronted with the death of a loved one due to an accident (47.3%), suicide (49%) or homicide (3.7%). Latent class analysis revealed three classes of participants: a resilient class (25.3%), a predominantly PGD class (39.2%), and a combined PGD/depression class (35.5%). Membership in the resilient class was predicted by longer time since loss and lower age; membership in the combined class was predicted by lower education. Endorsement of negative cognitions about the self, life, the future, and one's own grief reactions was lowest in the resilient class, intermediate in the PGD class, and highest in the combined PGD/depression class. When all socio-demographic, loss-related, and cognitive variables were included in multinomial regression analyses predicting class membership, negative cognitions about one's grief was the only variable predicting membership of the PGD class. Negative cognitions about the self, life, and grief predicted membership of the combined PGD/depression class. These findings provide valuable information for the development of interventions for different subgroups of bereaved individuals confronted with unnatural/violent loss. PMID:27138832
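
    The second analysis step, multinomial regression predicting class membership, might be sketched as follows. The two simulated predictors stand in for the study's socio-demographic, loss-related, and cognitive variables, and the class labels are generated for illustration, not taken from the study.

        # Hedged sketch: multinomial logistic regression over three latent classes.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n = 245
        time_since_loss = rng.uniform(0.5, 10.0, n)       # years (hypothetical)
        neg_grief_cognitions = rng.normal(0.0, 1.0, n)    # standardized score

        # Simulated labels: 0 = resilient, 1 = PGD, 2 = PGD/depression.
        score = 0.3 * neg_grief_cognitions - 0.1 * time_since_loss + rng.normal(0, 1, n)
        classes = np.digitize(score, np.quantile(score, [0.25, 0.65]))

        X = np.c_[time_since_loss, neg_grief_cognitions]
        # With three classes, the default lbfgs solver fits a multinomial model.
        model = LogisticRegression(max_iter=1000).fit(X, classes)
        print("per-class coefficients:\n", model.coef_)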

  9. Integrative analyses shed new light on human ribosomal protein gene regulation.

    Science.gov (United States)

    Li, Xin; Zheng, Yiyu; Hu, Haiyan; Li, Xiaoman

    2016-01-01

    Ribosomal protein genes (RPGs) are important house-keeping genes that are well-known for their coordinated expression. Previous studies on RPGs are largely limited to their promoter regions. Recent high-throughput studies provide an unprecedented opportunity to study how human RPGs are transcriptionally modulated and how such transcriptional regulation may contribute to coordinated gene expression in various tissues and cell types. By analyzing the DNase I hypersensitive sites under 349 experimental conditions, we predicted 217 RPG regulatory regions in the human genome. More than 86.6% of these computationally predicted regulatory regions were partially corroborated by independent experimental measurements. Motif analyses on these predicted regulatory regions identified 31 DNA motifs, including 57.1% of the experimentally validated motifs in the literature that regulate RPGs. Interestingly, we observed that the majority of the predicted motifs were shared by the predicted distal and proximal regulatory regions of the same RPGs, a likely general mechanism for enhancer-promoter interactions. We also found that RPGs may be differently regulated in different cells, indicating that condition-specific RPG regulatory regions still need to be discovered and investigated. Our study advances the understanding of how RPGs are coordinately modulated, which sheds light on the general principles of gene transcriptional regulation in mammals. PMID:27346035
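
    A simple way to quantify the kind of corroboration reported above is to compute the fraction of predicted regions that overlap at least one experimentally measured region. The sketch below is a minimal illustration with invented toy intervals; the paper's genome-wide pipeline is of course more involved.

        # Toy illustration: fraction of predicted regulatory regions that are
        # corroborated by overlap with independently measured regions.
        def overlaps(a, b):
            """True if half-open genomic intervals a and b overlap."""
            return a[0] < b[1] and b[0] < a[1]

        predicted = [(100, 600), (2000, 2400), (5000, 5300)]  # invented
        measured = [(450, 900), (2100, 2150)]                 # invented

        hits = sum(any(overlaps(p, m) for m in measured) for p in predicted)
        print(f"{hits / len(predicted):.1%} corroborated")  # 66.7% on toy data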

  10. New dimension analyses with error analysis for quaking aspen and black spruce

    Science.gov (United States)

    Woods, K. D.; Botkin, D. B.; Feiveson, A. H.

    1987-01-01

    Dimension analyses for black spruce in wetland stands and for trembling aspen are reported, including new approaches to error analysis. Biomass estimates for sacrificed trees have standard errors of 1 to 3%; standard errors for leaf areas are 10 to 20%. Bole biomass estimation accounts for most of the error for biomass, while estimation of branch characteristics and area/weight ratios accounts for the leaf area error. Error analysis provides insight for cost-effective design of future analyses. Predictive equations for biomass and leaf area, with empirically derived estimators of prediction error, are given. Systematic prediction errors for small aspen trees and for leaf area of spruce from different site-types suggest a need for different predictive models within species. Predictive equations are compared with published equations; significant differences may be due to species responses to regional or site differences. Proportional contributions of component biomass in aspen change in ways related to tree size and stand development. Spruce maintains comparatively constant proportions with size, but shows changes corresponding to site. This suggests greater morphological plasticity of aspen and significance for spruce of nutrient conditions.
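
    The paper's predictive equations are not reproduced in this record. As a hedged sketch, dimension-analysis (allometric) equations are commonly fit on a log-log scale, with the residual standard error serving as an empirical estimator of prediction error; the data points below are invented for illustration.

        # Hedged sketch: fit ln(biomass) = a + b*ln(dbh) and report an
        # empirical prediction error; the data points are invented.
        import numpy as np

        dbh = np.array([4.0, 7.5, 11.0, 16.0, 22.0, 30.0])          # cm
        biomass = np.array([3.1, 14.0, 38.0, 110.0, 260.0, 640.0])  # kg dry mass

        b, a = np.polyfit(np.log(dbh), np.log(biomass), 1)  # slope, intercept
        resid = np.log(biomass) - (a + b * np.log(dbh))
        s_pred = resid.std(ddof=2)  # residual SE, 2 fitted parameters

        print(f"ln(M) = {a:.2f} + {b:.2f} ln(D); SE(log scale) = {s_pred:.3f}")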

  11. On identified predictive control

    Science.gov (United States)

    Bialasiewicz, Jan T.

    1993-01-01

    Self-tuning control algorithms are potential successors to the manually tuned PID controllers traditionally used in process control applications. A very attractive design method for self-tuning controllers, developed over recent years, is long-range predictive control (LRPC). The success of LRPC is due to its effectiveness with plants of unknown order and dead-time which may be simultaneously nonminimum phase and unstable or have multiple lightly damped poles (as in the case of flexible structures or flexible robot arms). LRPC is a receding horizon strategy and can be summarized in general terms as follows. Using an assumed long-range (or multi-step) cost function, the optimal control law is found in terms of the unknown parameters of the predictor model of the process, the current input-output sequence, and the future reference signal sequence. The common approach is to assume that the input-output process model is known or separately identified and then to find the parameters of the predictor model. Once these are known, the optimal control law determines the control signal at the current time t, which is applied at the process input, and the whole procedure is repeated at the next time instant. Most of the recent research in this field is centered around the LRPC formulation developed by Clarke et al., known as generalized predictive control (GPC). GPC uses an ARIMAX/CARIMA model of the process in its input-output formulation. In this paper, the GPC formulation is used, but the process predictor model is derived from the state-space formulation of the ARIMAX model and is directly identified over the receding horizon, i.e., using the current input-output sequence. The underlying technique in the design of the identified predictive control (IPC) algorithm is the identification algorithm for observer/Kalman filter Markov parameters developed by Juang et al. at NASA Langley Research Center and successfully applied to the identification of flexible structures.
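
    A minimal sketch of the receding-horizon idea summarized above is given below, for a known first-order ARX plant with an unconstrained quadratic cost. It is not the paper's IPC algorithm, which identifies observer/Kalman filter Markov parameters online; the plant parameters, horizon, and control weight are assumed values.

        # Hedged sketch of a receding-horizon loop for a known plant
        # y[t+1] = a*y[t] + b*u[t]; assumed parameters, unconstrained cost
        # J = sum (y - r)^2 + lam * sum u^2 over an N-step horizon.
        import numpy as np

        a, b = 0.9, 0.5   # assumed plant parameters
        N, lam = 10, 0.1  # horizon length and control weight (assumed)
        r = 1.0           # constant reference

        # Dynamic matrix G: y[t+k] = a^k*y[t] + sum_{j<k} a^(k-1-j)*b*u[t+j]
        G = np.zeros((N, N))
        for k in range(1, N + 1):
            for j in range(k):
                G[k - 1, j] = a ** (k - 1 - j) * b

        y = 0.0
        for t in range(30):
            f = np.array([a ** k * y for k in range(1, N + 1)])  # free response
            # Least-squares minimizer of ||G u + f - r||^2 + lam*||u||^2
            u_seq = np.linalg.solve(G.T @ G + lam * np.eye(N), G.T @ (r - f))
            y = a * y + b * u_seq[0]  # apply only the first move, then repeat

        print(f"output after 30 steps: {y:.3f} (reference {r})")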

  12. Foundations of predictive analytics

    CERN Document Server

    Wu, James

    2012-01-01

    Drawing on the authors' two decades of experience in applied modeling and data mining, Foundations of Predictive Analytics presents the fundamental background required for analyzing data and building models for many practical applications, such as consumer behavior modeling, risk and marketing analytics, and other areas. It also discusses a variety of practical topics that are frequently missing from similar texts. The book begins with the statistical and linear algebra/matrix foundation of modeling methods, from distributions to cumulant and copula functions to Cornish-Fisher expansion and o

  13. Towards Predictive Association Theories

    DEFF Research Database (Denmark)

    Kontogeorgis, Georgios; Tsivintzelis, Ioannis; Michelsen, Michael Locht;

    2011-01-01

    Association equations of state like SAFT, CPA and NRHB have been previously applied to many complex mixtures. In this work we focus on two of these models, the CPA and the NRHB equations of state and the emphasis is on the analysis of their predictive capabilities for a wide range of applications...... phase equilibria in mixtures containing glycols. The importance of considering the solvation of CO2–water (in CPA) when the model is applied to multicomponent mixtures as well as of the multiple associations in heavy glycol–water mixtures (in NRHB) is investigated....

  14. Chloride ingress prediction

    DEFF Research Database (Denmark)

    Frederiksen, Jens Mejer; Geiker, Mette Rica

    Prediction of chloride ingress into concrete is an important part of durability design of reinforced concrete structures exposed to a chloride-containing environment. This paper presents experimentally based design parameters for Portland cement concretes with and without silica fume and fly ash in...... marine atmospheric and submersed South Scandinavian environment. The design parameters are based on sequential measurements of 86 chloride profiles taken over ten years from 13 different types of concrete. The design parameters provide the input for an analytical model for chloride profiles as function...
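
    The analytical model referred to above is not reproduced in this record. A common starting point for chloride-ingress prediction is the error-function solution of Fick's second law, sketched below with illustrative parameter values; the paper's own model, fitted to the measured profiles, may well differ (e.g., through time-dependent parameters).

        # Hedged sketch: error-function solution of Fick's second law,
        # C(x,t) = Ci + (Cs - Ci) * (1 - erf(x / (2*sqrt(Da*t)))).
        # Cs, Ci, Da are illustrative values, not the paper's design parameters.
        import math

        def chloride_profile(x_mm, t_years, Cs=0.6, Ci=0.01, Da=25.0):
            """Chloride content (% binder mass) at depth x_mm after t_years;
            Da is the apparent diffusion coefficient in mm^2/year."""
            arg = x_mm / (2.0 * math.sqrt(Da * t_years))
            return Ci + (Cs - Ci) * (1.0 - math.erf(arg))

        for depth in (5, 15, 30, 50):  # mm
            print(depth, round(chloride_profile(depth, t_years=10), 3))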

  15. Mathematics of Predicting Growth

    OpenAIRE

    Nielsen, Ron W

    2015-01-01

    Abstract: Mathematical methods of analysis of data and of predicting growth are discussed. The starting point is the analysis of the growth rates, which can be expressed as a function of time or as a function of the size of the growing entity. Application of these methods is illustrated using world economic growth, but they can be applied to any type of growth. Keywords: Growth rate, Differential equations, Gross Domestic Product, Economic growth. JEL: C01, C20, C50, C53, C60, C65, C80.
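
    As a minimal illustration of the growth-rate analysis described above, the sketch below estimates the relative growth rate R = (1/S) dS/dt from a time series and prints it alongside the size S, so the rate can be read as a function of time or of size. The series is invented (roughly exponential), not the economic data used in the paper.

        # Sketch: estimate the relative growth rate R = (1/S) dS/dt from a
        # time series; invented, roughly exponential data.
        import numpy as np

        t = np.arange(1950.0, 2001.0, 10.0)       # years
        S = 2.0 * np.exp(0.03 * (t - 1950.0))     # size of the growing entity

        R = np.gradient(S, t) / S                 # relative growth rate

        for ti, Si, Ri in zip(t, S, R):
            print(f"t={ti:.0f}  S={Si:6.2f}  R={Ri:.4f}")
        # R constant => exponential growth; R increasing with S would instead
        # point to hyperbolic growth in the rate-versus-size view.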

  16. Predicting Lotto Numbers

    DEFF Research Database (Denmark)

    Suetens, Sigrid; Galbo-Jørgensen, Claus B.; Tyran, Jean-Robert Karl

    2015-01-01

    We investigate the ‘law of small numbers’ using a data set on lotto gambling that allows us to measure players’ reactions to draws. While most players pick the same set of numbers week after week, we find that those who do change react on average as predicted by the law of small numbers as formalized in recent behavioral theory. In particular, players tend to bet less on numbers that have been drawn in the preceding week, as suggested by the ‘gambler’s fallacy’, and bet more on a number if it was frequently drawn in the recent past, consistent with the ‘hot-hand fallacy’.

  17. Asphalt pavement temperature prediction

    OpenAIRE

    Minhoto, Manuel; Pais, Jorge; Pereira, Paulo

    2006-01-01

    A 3-D finite element model (FEM) was developed to calculate the temperature of an asphalt rubber pavement located in the Northeast of Portugal. The goal of the case study presented in this paper is to show the good accuracy of temperature prediction that can be obtained with this model when compared with the field pavement thermal conditions observed during a year. Input data to the model are the hourly values for solar radiation and temperature and the mean daily value of wind speed obtained fr...
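
    The paper's 3-D FEM is not reproduced in this record. The sketch below illustrates the underlying idea, driving a transient heat-conduction model with a diurnal surface forcing, using a 1-D explicit finite-difference scheme; the material properties and forcing are assumed round numbers, not the measured Portuguese data.

        # Hedged 1-D stand-in for the paper's 3-D FEM: explicit finite
        # differences for transient conduction with an assumed diurnal
        # surface temperature; properties are round illustrative numbers.
        import numpy as np

        alpha = 1.0e-6       # thermal diffusivity of asphalt, m^2/s (assumed)
        dx, dt = 0.02, 60.0  # 2 cm layers, 60 s steps (alpha*dt/dx^2 = 0.15 < 0.5)
        T = np.full(20, 20.0)  # initial temperature profile, deg C

        for step in range(24 * 60):  # one simulated day in one-minute steps
            hour = (step * dt / 3600.0) % 24.0
            # Crude diurnal surface forcing standing in for air temperature
            # plus solar radiation (assumed amplitude and phase)
            T[0] = 15.0 + 10.0 * np.sin(2 * np.pi * (hour - 9.0) / 24.0)
            T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
            T[-1] = T[-2]  # insulated bottom boundary

        print("near-surface profile after one day (deg C):", np.round(T[:5], 1))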

  18. Predicting Sustainable Work Behavior

    DEFF Research Database (Denmark)

    Hald, Kim Sundtoft

    2013-01-01

    Sustainable work behavior is an important issue for operations managers – it has implications for most outcomes of OM. This research explores the antecedents of sustainable work behavior. It revisits and extends the sociotechnical model developed by Brown et al. (2000) on predicting safe behavior....... Employee characteristics and general attitudes towards safety and work condition are included in the extended model. A survey was handed out to 654 employees in Chinese factories. This research contributes by demonstrating how employee characteristics and general attitudes towards safety and work...... condition influence their sustainable work behavior. A new definition of sustainable work behavior is proposed....

  19. Stress Prediction System

    Science.gov (United States)

    1995-01-01

    NASA wanted to know how astronauts' bodies would react under various gravitational pulls and space suit weights. Under contract to NASA, the University of Michigan's Center for Ergonomics developed a model capable of predicting what type of stress and what degree of load a body could stand. The algorithm generated was commercialized as the ISTU (Isometric Strength Testing Unit) Functional Capacity Evaluation System, which simulates tasks such as lifting a heavy box or pushing a cart and evaluates the exertion expended. It also identifies the muscle group that limits the subject's performance. It is an effective tool for personnel evaluation, selection, and job redesign.

  20. Predicting Lotto Numbers

    OpenAIRE

    Jorgensen, C.B.; Suetens, S.; Tyran, J.R.

    2011-01-01

    We investigate the “law of small numbers” using a unique panel data set on lotto gambling. Because we can track individual players over time, we can measure how they react to outcomes of recent lotto drawings. We can therefore test whether they behave as if they believe they can predict lotto numbers based on recent drawings. While most players pick the same set of numbers week after week without regard to the numbers drawn or anything else, we find that those who do change act on average in th...