WorldWideScience

Sample records for analyses predict a20

  1. Theoretical analyses predict A20 regulates period of NF-kB oscillation

    CERN Document Server

    Mengel, Benedicte; Jensen, Mogens H; Trusina, Ala

    2009-01-01

    The nuclear-cytoplasmic shuttling of NF-kB is characterized by damped oscillations of the nuclear concentration with a time period of around 1-2 hours. The NF-kB network contains several feedback loops modulating the overall response of NF-kB activity. While IkBa is known to drive and IkBe is known to dampen the oscillations, the precise role of A20 negative feedback remains to be elucidated. Here we propose a model of the NF-kB system focusing on three negative feedback loops (IkBa, IkBe and A20) which capture the experimentally observed responses in wild-type and knockout cells. We find that A20, like IkBe, efficiently dampens the oscillations albeit through a distinct mechanism. In addition, however, we have discovered a new functional role of A20 by which it controls the oscillation period of nuclear NF-kB. The design based on three nested feedback loops allows independent control of period and amplitude decay in the oscillatory response. Based on these results we predict that adjusting the expression lev...

  2. Benchmark analyses of prediction models for pipe wall thinning

    International Nuclear Information System (INIS)

    In recent years, the importance of utilizing a prediction model or code for the management of pipe wall thinning has been recognized. In Japan Society of Mechanical Engineers (JSME), a working group on prediction methods has been set up within a research committee for studying the management of pipe wall-thinning. Some prediction models for pipe wall thinning were reviewed by benchmark analyses in terms of their prediction characteristics and the specifications required for their use in the management of pipe wall thinning in power generation facilities. This paper introduces the prediction models selected from the existing flow-accelerated corrosion and/or liquid droplet impingement erosion models. The experimental results and example of the results of wall thickness measurement used as benchmark data are also mentioned. (author)

  3. Multiple regression analyses in the prediction of aerospace instrument costs

    Science.gov (United States)

    Tran, Linh

The aerospace industry has been investing for decades in ways to improve its efficiency in estimating the project life cycle cost (LCC). One of the major focuses in the LCC is the cost prediction of aerospace instruments done during the early conceptual design phase of the project. The accuracy of early cost predictions affects the project scheduling and funding, and it is often the major cause for project cost overruns. The prediction of instruments' cost is based on the statistical analysis of these independent variables: Mass (kg), Power (watts), Instrument Type, Technology Readiness Level (TRL), Destination: earth orbiting or planetary, Data rates (kbps), Number of bands, Number of channels, Design life (months), and Development duration (months). The author proposes a cost prediction approach for aerospace instruments based on these statistical analyses: Clustering Analysis, Principal Components Analysis (PCA), Bootstrap, and multiple regressions (both linear and non-linear). In the proposed approach, the Cost Estimating Relationship (CER) will be developed for the dependent variable Instrument Cost by using a combination of multiple independent variables. "The Full Model" will be developed and executed to estimate the full set of nine variables. The SAS program, Excel, Automatic Cost Estimating Integrate Tool (ACEIT) and Minitab are the tools used to aid the analysis. Through the analysis, the cost drivers will be identified, which will help develop an ultimate cost estimating software tool for Instrument Cost prediction and optimization of future missions.

  4. Uncertainty and Sensitivity Analyses of Model Predictions of Solute Transport

    Science.gov (United States)

    Skaggs, T. H.; Suarez, D. L.; Goldberg, S. R.

    2012-12-01

    Soil salinity reduces crop production on about 50% of irrigated lands worldwide. One roadblock to increased use of advanced computer simulation tools for better managing irrigation water and soil salinity is that the models usually do not provide an estimate of the uncertainty in model predictions, which can be substantial. In this work, we investigate methods for putting confidence bounds on HYDRUS-1D simulations of solute leaching in soils. Uncertainties in model parameters estimated with pedotransfer functions are propagated through simulation model predictions using Monte Carlo simulation. Generalized sensitivity analyses indicate which parameters are most significant for quantifying uncertainty. The simulation results are compared with experimentally observed transport variability in a number of large, replicated lysimeters.
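
The Monte Carlo workflow described above (sample uncertain parameters, run the model, read off quantiles) can be sketched as follows. This is a minimal illustration with a toy first-order leaching model, not HYDRUS-1D; the names `leaching_model` and `monte_carlo_bounds` and the parameter values are hypothetical.

```python
import math
import random
import statistics

def leaching_model(k, t=10.0, c0=1.0):
    """Toy first-order solute decay stand-in (not HYDRUS-1D):
    fraction of solute remaining after time t with rate constant k."""
    return c0 * math.exp(-k * t)

def monte_carlo_bounds(n=10_000, k_mean=0.1, k_sd=0.02, seed=42):
    """Propagate a normally distributed parameter through the model
    and report a ~95% prediction interval from the sample quantiles."""
    rng = random.Random(seed)
    outputs = sorted(
        leaching_model(max(rng.gauss(k_mean, k_sd), 0.0)) for _ in range(n)
    )
    lo = outputs[int(0.025 * n)]
    hi = outputs[int(0.975 * n)]
    return lo, statistics.median(outputs), hi

lo, med, hi = monte_carlo_bounds()
print(f"95% bounds: [{lo:.3f}, {hi:.3f}], median {med:.3f}")
```

In the study, the sampled parameters would instead come from pedotransfer-function uncertainty, and each draw would drive a full simulation run; the quantile bookkeeping is the same.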

  5. Climate Prediction Center (CPC) US daily temperature analyses

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The U.S. daily temperature analyses are maps depicting various temperature quantities utilizing daily maximum and minimum temperature data across the US. Maps are...

  6. Predicting item popularity: Analysing local clustering behaviour of users

    Science.gov (United States)

    Liebig, Jessica; Rao, Asha

    2016-01-01

    Predicting the popularity of items in rating networks is an interesting but challenging problem. This is especially so when an item has first appeared and has received very few ratings. In this paper, we propose a novel approach to predicting the future popularity of new items in rating networks, defining a new bipartite clustering coefficient to predict the popularity of movies and stories in the MovieLens and Digg networks respectively. We show that the clustering behaviour of the first user who rates a new item gives insight into the future popularity of that item. Our method predicts, with a success rate of over 65% for the MovieLens network and over 50% for the Digg network, the future popularity of an item. This is a major improvement on current results.
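
A bipartite clustering coefficient of the general kind used here can be sketched as follows. The Jaccard-overlap (Latapy-style) definition below and all names are illustrative assumptions, not the paper's exact coefficient.

```python
from itertools import chain

def bipartite_clustering(adj, u):
    """Bipartite clustering coefficient of node u: mean Jaccard overlap
    between u's neighbourhood and those of nodes two hops away
    (e.g. other users who rated the same items).
    `adj` maps each user to its set of items and each item to its users."""
    nu = adj[u]
    # same-side nodes reachable through a shared neighbour
    second = set(chain.from_iterable(adj[i] for i in nu)) - {u}
    if not second:
        return 0.0
    overlaps = [len(nu & adj[v]) / len(nu | adj[v]) for v in second]
    return sum(overlaps) / len(overlaps)

# tiny hypothetical user->items example; items map back to users
users = {"u1": {"m1", "m2"}, "u2": {"m1"}, "u3": {"m2", "m3"}}
items = {}
for user, rated in users.items():
    for m in rated:
        items.setdefault(m, set()).add(user)
adj = {**users, **items}
print(round(bipartite_clustering(adj, "u1"), 3))  # -> 0.417
```

In the paper's setting, the coefficient of the first user to rate a new item would be the predictor of that item's future popularity.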

  7. Analysing Twitter and web queries for flu trend prediction

    Science.gov (United States)

    2014-01-01

Background Social media platforms encourage people to share diverse aspects of their daily life. Among these, shared health-related information might be used to infer health status and incidence rates for specific conditions or symptoms. In this work, we present an infodemiology study that evaluates the use of Twitter messages and search engine query logs to estimate and predict the incidence rate of influenza-like illness in Portugal. Results Based on a manually classified dataset of 2704 tweets from Portugal, we selected a set of 650 textual features to train a Naïve Bayes classifier to identify tweets mentioning flu or flu-like illness or symptoms. We obtained a precision of 0.78 and an F-measure of 0.83, based on cross validation over the complete annotated set. Furthermore, we trained a multiple linear regression model to estimate the health-monitoring data from the Influenzanet project, using as predictors the relative frequencies obtained from the tweet classification results and from query logs, and achieved a correlation ratio of 0.89 (p < 0.001). These classification and regression models were also applied to estimate the flu incidence in the following flu season, achieving a correlation of 0.72. Conclusions Previous studies addressing the estimation of disease incidence based on user-generated content have mostly focused on the English language. Our results further validate those studies and show that by changing the initial steps of data preprocessing and feature extraction and selection, the proposed approaches can be adapted to other languages. Additionally, we investigated whether the predictive model created can be applied to data from the subsequent flu season. In this case, although the prediction result was good, an initial phase to adapt the regression model could be necessary to achieve more robust results. PMID:25077431
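
The tweet-classification step rests on multinomial Naive Bayes. A minimal sketch with Laplace smoothing is shown below; the toy documents and the `NaiveBayes` class are illustrative assumptions, not the study's 650-feature pipeline.

```python
import math
from collections import Counter, defaultdict

class NaiveBayes:
    """Minimal multinomial Naive Bayes with Laplace (add-one) smoothing."""

    def fit(self, docs, labels):
        self.classes = set(labels)
        self.prior = Counter(labels)          # class frequencies
        self.word_counts = defaultdict(Counter)
        self.vocab = set()
        for doc, y in zip(docs, labels):
            for w in doc.split():
                self.word_counts[y][w] += 1
                self.vocab.add(w)
        return self

    def predict(self, doc):
        def log_posterior(y):
            total = sum(self.word_counts[y].values())
            lp = math.log(self.prior[y] / sum(self.prior.values()))
            for w in doc.split():
                # smoothed per-class word likelihood
                lp += math.log((self.word_counts[y][w] + 1)
                               / (total + len(self.vocab)))
            return lp
        return max(self.classes, key=log_posterior)

docs = ["fever and flu symptoms", "flu shot today",
        "sunny walk today", "nice sunny day"]
labels = ["flu", "flu", "other", "other"]
clf = NaiveBayes().fit(docs, labels)
print(clf.predict("flu symptoms again"))  # -> flu
```

The regression stage of the study would then use the relative frequency of "flu"-labelled tweets per week as one predictor of the incidence rate.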

  8. TRAC analyses and GIRAFFE tests for PCCS performance prediction

    International Nuclear Information System (INIS)

    The passive containment cooling system (PCCS) would remove decay heat by steam condensation without any electric power supply or operator's action if an accident should occur in nuclear reactors. There is, however, concern that non-condensable gas might influence the PCCS performance in the event of an accident. This paper summarizes Toshiba's activities respecting PCCS development, in particular those activities relating to TRAC qualification for PCCS performance prediction and the GIRAFFE tests. TRAC is a best estimate thermal hydraulic analysis code. GIRAFFE is a full-height test facility simulating the SBWR containment with the PCCS, at Toshiba's Ukishima site. (author)

  9. Analysing earthquake slip models with the spatial prediction comparison test

    KAUST Repository

    Zhang, L.

    2014-11-10

Earthquake rupture models inferred from inversions of geophysical and/or geodetic data exhibit remarkable variability due to uncertainties in modelling assumptions, the use of different inversion algorithms, or variations in data selection and data processing. A robust statistical comparison of different rupture models obtained for a single earthquake is needed to quantify the intra-event variability, both for benchmark exercises and for real earthquakes. The same approach may be useful to characterize (dis-)similarities in events that are typically grouped into a common class of events (e.g. moderate-size crustal strike-slip earthquakes or tsunamigenic large subduction earthquakes). For this purpose, we examine the performance of the spatial prediction comparison test (SPCT), a statistical test developed to compare spatial (random) fields by means of a chosen loss function that describes an error relation between a 2-D field (‘model’) and a reference model. We implement and calibrate the SPCT approach for a suite of synthetic 2-D slip distributions, generated as spatial random fields with various characteristics, and then apply the method to results of a benchmark inversion exercise with known solution. We find the SPCT to be sensitive to different spatial correlation lengths and different heterogeneity levels of the slip distributions. The SPCT approach proves to be a simple and effective tool for ranking the slip models with respect to a reference model.

  10. On the use of uncertainty analyses to test hypotheses regarding deterministic model predictions of environmental processes

    International Nuclear Information System (INIS)

    This paper illustrates the use of Monte Carlo parameter uncertainty and sensitivity analyses to test hypotheses regarding predictions of deterministic models of environmental transport, dose, risk and other phenomena. The methodology is illustrated by testing whether 238Pu is transferred more readily than 239+240Pu from the gastrointestinal (GI) tract of cattle to their tissues (muscle, liver and blood). This illustration is based on a study wherein beef-cattle grazed for up to 1064 days on a fenced plutonium (Pu)-contaminated arid site in Area 13 near the Nevada Test Site in the United States. Periodically, cattle were sacrificed and their tissues analyzed for Pu and other radionuclides. Conditional sensitivity analyses of the model predictions were also conducted. These analyses indicated that Pu cattle tissue concentrations had the largest impact of any model parameter on the pdf of predicted Pu fractional transfers. Issues that arise in conducting uncertainty and sensitivity analyses of deterministic models are discussed. (author)

  11. Crack growth prediction analyses of a RPV prototype under PTS loading

    International Nuclear Information System (INIS)

    This work presents the numerical finite element analysis and fracture mechanics procedure carried out to predict crack growth behavior of a reactor pressure vessel (RPV) prototype during a pressurized thermal shock (PTS) experiment. A brief description of the PTS experiment is given, followed by a presentation of the numerical models used for thermal structural analysis and to obtain the crack driving force parameters. Fracture mechanics procedures, with different levels of complexity, are presented for crack growth prediction. The results obtained using a simplified procedure are compared with those based on 3D finite element analyses. (author)

  12. Bacterial regulon modeling and prediction based on systematic cis regulatory motif analyses

    Science.gov (United States)

    Liu, Bingqiang; Zhou, Chuan; Li, Guojun; Zhang, Hanyuan; Zeng, Erliang; Liu, Qi; Ma, Qin

    2016-03-01

    Regulons are the basic units of the response system in a bacterial cell, and each consists of a set of transcriptionally co-regulated operons. Regulon elucidation is the basis for studying the bacterial global transcriptional regulation network. In this study, we designed a novel co-regulation score between a pair of operons based on accurate operon identification and cis regulatory motif analyses, which can capture their co-regulation relationship much better than other scores. Taking full advantage of this discovery, we developed a new computational framework and built a novel graph model for regulon prediction. This model integrates the motif comparison and clustering and makes the regulon prediction problem substantially more solvable and accurate. To evaluate our prediction, a regulon coverage score was designed based on the documented regulons and their overlap with our prediction; and a modified Fisher Exact test was implemented to measure how well our predictions match the co-expressed modules derived from E. coli microarray gene-expression datasets collected under 466 conditions. The results indicate that our program consistently performed better than others in terms of the prediction accuracy. This suggests that our algorithms substantially improve the state-of-the-art, leading to a computational capability to reliably predict regulons for any bacteria.

  13. Design and Antigenic Epitopes Prediction of a New Trial Recombinant Multiepitopic Rotaviral Vaccine: In Silico Analyses.

    Science.gov (United States)

    Jafarpour, Sima; Ayat, Hoda; Ahadi, Ali Mohammad

    2015-01-01

    Rotavirus is the major etiologic factor of severe diarrheal disease. Natural infection provides protection against subsequent rotavirus infection and diarrhea. This research presents a new vaccine designed based on computational models. In this study, three types of epitopes are considered-linear, conformational, and combinational-in a proposed model protein. Several studies on rotavirus vaccines have shown that VP6 and VP4 proteins are good candidates for vaccine production. In the present study, a fusion protein was designed as a new generation of rotavirus vaccines by bioinformatics analyses. This model-based study using ABCpred, BCPREDS, Bcepred, and Ellipro web servers showed that the peptide presented in this article has the necessary properties to act as a vaccine. Prediction of linear B-cell epitopes of peptides is helpful to investigate whether these peptides are able to activate humoral immunity. PMID:25965449

  14. PASMet: a web-based platform for prediction, modelling and analyses of metabolic systems.

    Science.gov (United States)

    Sriyudthsak, Kansuporn; Mejia, Ramon Francisco; Arita, Masanori; Hirai, Masami Yokota

    2016-07-01

    PASMet (Prediction, Analysis and Simulation of Metabolic networks) is a web-based platform for proposing and verifying mathematical models to understand the dynamics of metabolism. The advantages of PASMet include user-friendliness and accessibility, which enable biologists and biochemists to easily perform mathematical modelling. PASMet offers a series of user-functions to handle the time-series data of metabolite concentrations. The functions are organised into four steps: (i) Prediction of a probable metabolic pathway and its regulation; (ii) Construction of mathematical models; (iii) Simulation of metabolic behaviours; and (iv) Analysis of metabolic system characteristics. Each function contains various statistical and mathematical methods that can be used independently. Users who may not have enough knowledge of computing or programming can easily and quickly analyse their local data without software downloads, updates or installations. Users only need to upload their files in comma-separated values (CSV) format or enter their model equations directly into the website. Once the time-series data or mathematical equations are uploaded, PASMet automatically performs computation on server-side. Then, users can interactively view their results and directly download them to their local computers. PASMet is freely available with no login requirement at http://pasmet.riken.jp/ from major web browsers on Windows, Mac and Linux operating systems. PMID:27174940

  15. A20 Inhibits β-Cell Apoptosis by Multiple Mechanisms and Predicts Residual β-Cell Function in Type 1 Diabetes

    DEFF Research Database (Denmark)

    Fukaya, Makiko; Brorsson, Caroline A; Meyerovich, Kira;

    2016-01-01

    Activation of the transcription factor nuclear factor kappa B (NFkB) contributes to β-cell death in type 1 diabetes (T1D). Genome-wide association studies have identified the gene TNF-induced protein 3 (TNFAIP3), encoding for the zinc finger protein A20, as a susceptibility locus for T1D. A20 res...... identify the single nucleotide polymorphism rs2327832 of TNFAIP3 as a possible prognostic marker for diabetes outcome in children with T1D....

  16. Analyses of Optimal Embedding Dimension and Delay for Local Linear Prediction Model

    Institute of Scientific and Technical Information of China (English)

    MENG Qing-Fang; PENG Yu-Hua; LIU Yun-Xia; SUN Wei-Feng

    2007-01-01

In the reconstructed phase space, a novel local linear prediction model is proposed to predict chaotic time series. The parameters of the proposed model take values that are different from those of the phase space reconstruction. We propose a criterion based on prediction error to determine the optimal parameters of the proposed model. The simulation results show that the proposed model can effectively make one-step and multi-step predictions for chaotic time series, and the one-step and multi-step prediction accuracy of the proposed model is superior to that of the traditional local linear prediction model.
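
Local linear prediction in a reconstructed phase space can be sketched as below: delay-embed the series, find the nearest neighbours of the latest state, fit a local linear map by least squares, and apply it one step ahead. The embedding parameters, the tiny ridge term, and the logistic-map test signal are illustrative assumptions, not the paper's exact model.

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def local_linear_predict(series, m=2, tau=1, k=10):
    """Embed with dimension m and delay tau, take the k nearest
    neighbours of the latest state, fit y = b0 + b.x, predict one step."""
    pts = [(tuple(series[i - j * tau] for j in range(m)), series[i + 1])
           for i in range((m - 1) * tau, len(series) - 1)]
    query = tuple(series[len(series) - 1 - j * tau] for j in range(m))
    pts.sort(key=lambda p: math.dist(p[0], query))
    X = [(1.0,) + p[0] for p in pts[:k]]
    y = [p[1] for p in pts[:k]]
    n = m + 1
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(n)]
    for i in range(n):
        XtX[i][i] += 1e-8  # tiny ridge term guards near-singular neighbourhoods
    beta = solve(XtX, Xty)
    return beta[0] + sum(b * q for b, q in zip(beta[1:], query))

# chaotic logistic-map series as a test signal
x, series = 0.3, []
for _ in range(500):
    series.append(x)
    x = 4.0 * x * (1.0 - x)
pred = local_linear_predict(series[:-1], m=2, tau=1, k=10)
print(abs(pred - series[-1]))  # one-step prediction error
```

Multi-step prediction iterates this one-step map, feeding each prediction back into the embedding.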

  17. Uncertainty and Sensitivity Analyses of a Two-Parameter Impedance Prediction Model

    Science.gov (United States)

    Jones, M. G.; Parrott, T. L.; Watson, W. R.

    2008-01-01

    This paper presents comparisons of predicted impedance uncertainty limits derived from Monte-Carlo-type simulations with a Two-Parameter (TP) impedance prediction model and measured impedance uncertainty limits based on multiple tests acquired in NASA Langley test rigs. These predicted and measured impedance uncertainty limits are used to evaluate the effects of simultaneous randomization of each input parameter for the impedance prediction and measurement processes. A sensitivity analysis is then used to further evaluate the TP prediction model by varying its input parameters on an individual basis. The variation imposed on the input parameters is based on measurements conducted with multiple tests in the NASA Langley normal incidence and grazing incidence impedance tubes; thus, the input parameters are assigned uncertainties commensurate with those of the measured data. These same measured data are used with the NASA Langley impedance measurement (eduction) processes to determine the corresponding measured impedance uncertainty limits, such that the predicted and measured impedance uncertainty limits (95% confidence intervals) can be compared. The measured reactance 95% confidence intervals encompass the corresponding predicted reactance confidence intervals over the frequency range of interest. The same is true for the confidence intervals of the measured and predicted resistance at near-resonance frequencies, but the predicted resistance confidence intervals are lower than the measured resistance confidence intervals (no overlap) at frequencies away from resonance. A sensitivity analysis indicates the discharge coefficient uncertainty is the major contributor to uncertainty in the predicted impedances for the perforate-over-honeycomb liner used in this study. This insight regarding the relative importance of each input parameter will be used to guide the design of experiments with test rigs currently being brought on-line at NASA Langley.

  18. Prediction of hybrid performance in maize using molecular markers and joint analyses of hybrids and parental inbreds.

    Science.gov (United States)

    Schrag, Tobias A; Möhring, Jens; Melchinger, Albrecht E; Kusterer, Barbara; Dhillon, Baldev S; Piepho, Hans-Peter; Frisch, Matthias

    2010-01-01

The identification of superior hybrids is important for the success of a hybrid breeding program. However, field evaluation of all possible crosses among inbred lines requires extremely large resources. Therefore, efforts have been made to predict hybrid performance (HP) by using field data of related genotypes and molecular markers. In the present study, the main objective was to assess the usefulness of pedigree information in combination with the covariance between general combining ability (GCA) and per se performance of parental lines for HP prediction. In addition, we compared the prediction efficiency of AFLP and SSR marker data, estimated marker effects separately for reciprocal allelic configurations (among heterotic groups) of heterozygous marker loci in hybrids, and imputed missing AFLP marker data for marker-based HP prediction. Unbalanced field data of 400 maize dent × flint hybrids from 9 factorials and of 79 inbred parents were subjected to joint analyses with mixed linear models. The inbreds were genotyped with 910 AFLP and 256 SSR markers. Efficiency of prediction (R²) was estimated by cross-validation for hybrids having no or one parent evaluated in testcrosses. Best linear unbiased prediction of GCA and specific combining ability resulted in the highest efficiencies for HP prediction for both traits (R² = 0.6-0.9), if pedigree and line per se data were used. However, without such data, HP for grain yield was more efficiently predicted using molecular markers. The additional modifications of the marker-based approaches had no clear effect. Our study showed the high potential of joint analyses of hybrids and parental inbred lines for the prediction of performance of untested hybrids. PMID:19916002

  19. Analysing the Relevance of Experience Partitions to the Prediction of Players’ Self-Reports of Affect

    DEFF Research Database (Denmark)

    Martínez, Héctor Pérez; Yannakakis, Georgios N.

    2011-01-01

    A common practice in modeling affect from physiological signals consists of reducing the signals to a set of statistical features that feed predictors of self-reported emotions. This paper analyses the impact of various time-windows, used for the extraction of physiological features, to the...

  20. Measuring Usable Knowledge: Teachers' Analyses of Mathematics Classroom Videos Predict Teaching Quality and Student Learning

    Science.gov (United States)

    Kersting, Nicole B.; Givvin, Karen B.; Thompson, Belinda J.; Santagata, Rossella; Stigler, James W.

    2012-01-01

    This study explores the relationships between teacher knowledge, teaching practice, and student learning in mathematics. It extends previous work that developed and evaluated an innovative approach to assessing teacher knowledge based on teachers' analyses of classroom video clips. Teachers watched and commented on 13 fraction clips. These written…

  1. Map on predicted deposition of Cs-137 in Spanish soils from geostatistical analyses

    International Nuclear Information System (INIS)

    The knowledge of the distribution of 137Cs deposition over Spanish mainland soils along with the geographical, physical and morphological terrain information enable us to know the 137Cs background content in soil. This could be useful as a tool in a hypothetical situation of an accident involving a radioactive discharge or in soil erosion studies. A Geographic Information System (GIS) would allow the gathering of all the mentioned information. In this work, gamma measurements of 137Cs on 34 Spanish mainland soils, rainfall data taken from 778 weather stations, soil types and geographical and physical terrain information were input into a GIS. Geostatistical techniques were applied to interpolate values of 137Cs activity at unsampled places, obtaining prediction maps of 137Cs deposition. Up to now, geostatistical methods have been used to model spatial continuity of data. Through semivariance and cross-covariance functions the spatial correlation of such data can be studied and described. Ordinary and simple kriging techniques were carried out to map spatial patterns of 137Cs deposition, and ordinary and simple co-kriging were used to improve the prediction map obtained through a second related variable: namely the rainfall. To choose the best prediction map of 137Cs deposition, the spatial dependence of the variable, the correlation coefficient and the prediction errors were evaluated using the different models previously mentioned. The best result for 137Cs deposition map was obtained when applying the co-kriging techniques. - Highlights: ► Implementation of 137Cs activity data, in Spanish soils, in a GIS. ► Prediction models were performed of Cs-137 fallout with kriging techniques. ► More accurate prediction surfaces were obtained using cokriging techniques. ► Rainfall is the second variable used to cokriging technique.

  2. Improving retention: predicting at-risk students by analysing clicking behaviour in a virtual learning environment

    OpenAIRE

    Wolff, Annika; Zdrahal, Zdenek; Nikolov, Andriy; Pantucek, Michal

    2013-01-01

One of the key interests for learning analytics is how it can be used to improve retention. This paper focuses on work conducted at the Open University (OU) into predicting students who are at risk of failing their module. The Open University is one of the world's largest distance learning institutions. Since tutors do not interact face to face with students, it can be difficult for tutors to identify and respond to students who are struggling in time to try to resolve the difficulty. Predict...

  3. Finite Element Creep Damage Analyses and Life Prediction of P91 Pipe Containing Local Wall Thinning Defect

    Science.gov (United States)

    Xue, Jilin; Zhou, Changyu

    2016-03-01

Creep continuum damage finite element (FE) analyses were performed for P91 steel pipe containing local wall thinning (LWT) defect subjected to monotonic internal pressure, monotonic bending moment and combined internal pressure and bending moment by orthogonal experimental design method. The creep damage lives of pipe containing LWT defect under different load conditions were obtained. Then, the creep damage life formulas were regressed based on the creep damage life results from the FE method. At the same time, a skeletal point rupture stress was identified and used for life prediction, and the resulting lives were compared with the creep damage lives obtained from the continuum damage analyses. From the results, the failure lives of pipe containing LWT defect can be obtained accurately by using the skeletal point rupture stress method. Finally, the influence of LWT defect geometry was analysed, which indicated that relative defect depth was the most significant factor for creep damage lives of pipe containing LWT defect.

  4. Aeromechanics and Aeroacoustics Predictions of the Boeing-SMART Rotor Using Coupled-CFD/CSD Analyses

    Science.gov (United States)

    Bain, Jeremy; Sim, Ben W.; Sankar, Lakshmi; Brentner, Ken

    2010-01-01

    This paper will highlight helicopter aeromechanics and aeroacoustics prediction capabilities developed by Georgia Institute of Technology, the Pennsylvania State University, and Northern Arizona University under the Helicopter Quieting Program (HQP) sponsored by the Tactical Technology Office of the Defense Advanced Research Projects Agency (DARPA). First initiated in 2004, the goal of the HQP was to develop high fidelity, state-of-the-art computational tools for designing advanced helicopter rotors with reduced acoustic perceptibility and enhanced performance. A critical step towards achieving this objective is the development of rotorcraft prediction codes capable of assessing a wide range of helicopter configurations and operations for future rotorcraft designs. This includes novel next-generation rotor systems that incorporate innovative passive and/or active elements to meet future challenging military performance and survivability goals.

  5. Predictability of Regional Climate: A Bayesian Approach to Analysing a WRF Model Ensemble

    Science.gov (United States)

    Bruyere, C. L.; Mesquita, M. D. S.; Paimazumder, D.

    2013-12-01

This study investigates aspects of climate predictability with a focus on climatic variables and different characteristics of extremes over nine North American climatic regions and two selected Atlantic sectors. An ensemble of state-of-the-art Weather Research and Forecasting Model (WRF) simulations is used for the analysis. The ensemble is comprised of a combination of various physics schemes, initial conditions, domain sizes, boundary conditions and breeding techniques. The main objectives of this research are: 1) to increase our understanding of the ability of WRF to capture regional climate information, both for individual ensemble members and for the ensemble as a whole; 2) to investigate the role of different members and their synergy in reproducing regional climate; and 3) to estimate the associated uncertainty. In this study, we propose a Bayesian framework to study the predictability of extremes and associated uncertainties in order to provide a wealth of knowledge about WRF reliability and provide further clarity and understanding of the sensitivities and optimal combinations. The choice of the Bayesian model, as opposed to standard methods, is made because: a) this method has a mean square error lower than that of standard statistics, which makes it a more robust method; b) it allows for the use of small sample sizes, which are typical in high-resolution modeling; c) it provides a probabilistic view of uncertainty, which is useful when making decisions concerning ensemble members.

  6. Development of a feasibility prediction tool for solar power plant installation analyses

    International Nuclear Information System (INIS)

Highlights: → An agglomerative hierarchical clustering tool is designed for renewable energy sources in this study. → In the model, the nearest neighbor approach is used as the clustering algorithm, with Euclidean, Manhattan, and Minkowski distance metrics as distance equations. → The developed tool assists the knowledge domain expert in analysing extensive datasets. → The developed tool clusters the given sample data efficiently and successfully using each distance metric. → The clustering results are compared according to success rates. -- Abstract: Solar energy has become a prominent area among renewable sources since it has the advantages of not causing pollution, having low maintenance cost, and not producing noise due to the absence of moving parts. Despite these advantages, the installation cost of a solar power plant is considerably high. Feasibility analyses therefore play a great role before installation in determining the most appropriate power plant site. Although there are many methods used in feasibility analysis, this paper focuses on a new intelligent method based on an agglomerative hierarchical clustering approach. The solar irradiation and insolation parameters of the Central Anatolian Region of Turkey are evaluated utilizing the intelligent feasibility analysis tool developed in this study. The clustering operation in the tool is performed by using the nearest neighbor algorithm. At the stage of determining the optimum hierarchical clustering results, Euclidean, Manhattan and Minkowski distance metrics are adapted to the tool. The achieved clustering results based on the Minkowski distance metric provide the most useful inferences to the knowledge domain expert compared with the other distance metrics.
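
Agglomerative clustering with nearest-neighbour (single) linkage and a Minkowski metric, the combination described above, can be sketched as follows; the toy site data and function names are hypothetical, not the tool's implementation.

```python
def minkowski(a, b, p=2):
    """Minkowski distance; p=1 gives Manhattan, p=2 Euclidean."""
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1 / p)

def single_linkage(points, n_clusters, p=2):
    """Agglomerative clustering with nearest-neighbour (single) linkage:
    repeatedly merge the two clusters whose closest members are closest,
    until n_clusters clusters remain."""
    clusters = [[pt] for pt in points]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(minkowski(a, b, p)
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i].extend(clusters.pop(j))
    return clusters

# hypothetical (irradiation, insolation) feature pairs for candidate sites
sites = [(1.0, 1.1), (1.2, 0.9), (5.0, 5.2), (5.1, 4.8), (9.0, 9.1)]
for c in single_linkage(sites, 3, p=1):  # p=1: Manhattan metric
    print(sorted(c))
```

Running the same data with p=1, 2, and higher p values and scoring the resulting partitions is what the paper's metric comparison amounts to.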

  7. GenoMatrix: A Software Package for Pedigree-Based and Genomic Prediction Analyses on Complex Traits.

    Science.gov (United States)

    Nazarian, Alireza; Gezan, Salvador Alejandro

    2016-07-01

    Genomic and pedigree-based best linear unbiased prediction methodologies (G-BLUP and P-BLUP) have proven themselves efficient for partitioning the phenotypic variance of complex traits into its components, estimating the individuals' genetic merits, and predicting unobserved (or yet-to-be observed) phenotypes in many species and fields of study. The GenoMatrix software, presented here, is a user-friendly package to facilitate the process of using genome-wide marker data and parentage information for G-BLUP and P-BLUP analyses on complex traits. It provides users with a collection of applications which help them on a set of tasks from performing quality control on data to constructing and manipulating the genomic and pedigree-based relationship matrices and obtaining their inverses. Such matrices will be then used in downstream analyses by other statistical packages. The package also enables users to obtain predicted values for unobserved individuals based on the genetic values of observed related individuals. GenoMatrix is available to the research community as a Windows 64bit executable and can be downloaded free of charge at: http://compbio.ufl.edu/software/genomatrix/. PMID:27025440
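    As an illustration of the genomic relationship matrix that G-BLUP analyses rely on, the sketch below implements the standard VanRaden (2008, method 1) construction from a marker matrix. GenoMatrix's internal algorithm is not documented here, so treat this as a generic example on random genotypes.

```python
import numpy as np

def vanraden_grm(M):
    """Genomic relationship matrix from an (individuals x markers) matrix
    of allele counts coded 0/1/2 (VanRaden 2008, method 1).
    This is the standard construction; GenoMatrix's internals may differ."""
    p = M.mean(axis=0) / 2.0             # observed allele frequencies
    Z = M - 2.0 * p                      # center by expected allele count
    denom = 2.0 * np.sum(p * (1.0 - p))  # scales G like a numerator relationship matrix
    return Z @ Z.T / denom

# Random toy genotypes: 5 individuals, 1000 markers.
rng = np.random.default_rng(42)
M = rng.integers(0, 3, size=(5, 1000)).astype(float)
G = vanraden_grm(M)
print(G.shape)               # (5, 5)
print(np.allclose(G, G.T))   # True: G is symmetric
```

    In a real analysis, G (or its inverse) would then be passed to a mixed-model package, which is exactly the downstream hand-off the abstract describes.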

  8. Circulating biomarkers for predicting cardiovascular disease risk; a systematic review and comprehensive overview of meta-analyses.

    Directory of Open Access Journals (Sweden)

    Thijs C van Holten

    Full Text Available BACKGROUND: Cardiovascular disease is one of the major causes of death worldwide. Assessing the risk for cardiovascular disease is an important aspect of clinical decision making and setting a therapeutic strategy, and the use of serological biomarkers may improve this. Despite an overwhelming number of studies and meta-analyses on biomarkers and cardiovascular disease, there are no comprehensive studies comparing the relevance of each biomarker. We performed a systematic review of meta-analyses on levels of serological biomarkers for atherothrombosis to compare the relevance of the most commonly studied biomarkers. METHODS AND FINDINGS: Medline and Embase were screened on search terms related to "arterial ischemic events" and "meta-analyses". The meta-analyses were sorted by patient group: without pre-existing cardiovascular disease, with cardiovascular disease, and heterogeneous groups concerning general populations, groups with and without cardiovascular disease, or miscellaneous. These were subsequently sorted by end-point for cardiovascular disease or stroke and summarized in tables. We identified 85 relevant full-text articles, with 214 meta-analyses. Markers for primary cardiovascular events include, ranked from strongest to weakest association: C-reactive protein, fibrinogen, cholesterol, apolipoprotein B, the apolipoprotein A/apolipoprotein B ratio, high density lipoprotein, and vitamin D. Markers for secondary cardiovascular events include, ranked from strongest to weakest association: cardiac troponins I and T, C-reactive protein, serum creatinine, and cystatin C. For primary stroke, fibrinogen and serum uric acid are strong risk markers. A limitation is that there is no established search strategy for prognostic studies or meta-analyses. CONCLUSIONS: For primary cardiovascular events, markers with strong predictive potential are mainly associated with lipids. For secondary cardiovascular events, markers are more associated with ischemia.
Fibrinogen is a

  9. Accuracy of finite element analyses of CT scans in predictions of vertebral failure patterns under axial compression and anterior flexion.

    Science.gov (United States)

    Jackman, Timothy M; DelMonaco, Alex M; Morgan, Elise F

    2016-01-25

    Finite element (FE) models built from quantitative computed tomography (QCT) scans can provide patient-specific estimates of bone strength and fracture risk in the spine. While prior studies demonstrate accurate QCT-based FE predictions of vertebral stiffness and strength, the accuracy of the predicted failure patterns, i.e., the locations where failure occurs within the vertebra and the way in which the vertebra deforms as failure progresses, is less clear. This study used digital volume correlation (DVC) analyses of time-lapse micro-computed tomography (μCT) images acquired during mechanical testing (compression and anterior flexion) of thoracic spine segments (T7-T9, n=28) to measure displacements occurring throughout the T8 vertebral body at the ultimate point. These displacements were compared to those simulated by QCT-based FE analyses of T8. We hypothesized that the FE predictions would be more accurate when the boundary conditions are based on measurements of pressure distributions within intervertebral discs with a similar level of disc degeneration vs. boundary conditions representing rigid platens. The FE simulations captured some of the general, qualitative features of the failure patterns; however, displacement errors ranged from 12% to 279%. Contrary to our hypothesis, no differences in displacement errors were found when using boundary conditions representing measurements of disc pressure vs. rigid platens. The smallest displacement errors were obtained using boundary conditions that were measured directly by DVC at the T8 endplates. These findings indicate that further work is needed to develop methods of identifying physiological loading conditions for the vertebral body, for the purpose of achieving robust, patient-specific FE analyses of failure mechanisms. PMID:26792288

  10. Benchmark of SCALE (SAS2H) isotopic predictions of depletion analyses for San Onofre PWR MOX fuel

    Energy Technology Data Exchange (ETDEWEB)

    Hermann, O.W.

    2000-02-01

    The isotopic composition of mixed-oxide (MOX) fuel, fabricated with both uranium and plutonium, after discharge from reactors is of significant interest to the Fissile Materials Disposition Program. The validation of the SCALE (SAS2H) depletion code for use in the prediction of isotopic compositions of MOX fuel, similar to previous validation studies on uranium-only fueled reactors, has corresponding significance. The EEI-Westinghouse Plutonium Recycle Demonstration Program examined the use of MOX fuel in the San Onofre PWR, Unit 1, during cycles 2 and 3. Isotopic analyses of the MOX spent fuel were conducted on 13 actinides and ¹⁴⁸Nd by either mass or alpha spectrometry. Six fuel pellet samples were taken from four different fuel pins of an irradiated MOX assembly. The measured actinide inventories from those samples have been used to benchmark SAS2H for MOX fuel applications. The average percentage differences between the code results and the measurements were −0.9% for ²³⁵U and 5.2% for ²³⁹Pu. The differences for most of the isotopes were significantly larger than in the cases for uranium-only fueled reactors. In general, comparisons of code results with alpha spectrometer data showed large discrepancies, whereas the differences relative to mass spectrometer analyses were only moderately larger than those for uranium-only fueled reactors. This benchmark study should be useful in estimating uncertainties in inventory, criticality and dose calculations for MOX spent fuel.

  11. ATOP - The Advanced Taiwan Ocean Prediction System Based on the mpiPOM. Part 1: Model Descriptions, Analyses and Results

    Directory of Open Access Journals (Sweden)

    Leo Oey

    2013-01-01

    Full Text Available A data-assimilated Taiwan Ocean Prediction (ATOP) system is being developed at the National Central University, Taiwan. The model simulates sea-surface height, three-dimensional currents, temperature and salinity, and turbulent mixing. The model has options for tracer and particle-tracking algorithms, as well as for wave-induced Stokes drift and wave-enhanced mixing and bottom drag. Two different forecast domains have been tested: a large-grid domain that encompasses the entire North Pacific Ocean at 0.1° × 0.1° horizontal resolution and 41 vertical sigma levels, and a smaller western North Pacific domain which at present also has the same horizontal resolution. In both domains, 25-year spin-up runs from 1988 to 2011 were first conducted, forced by six-hourly Cross-Calibrated Multi-Platform (CCMP and NCEP reanalysis Global Forecast System (GFS winds. The results are then used as initial conditions to conduct ocean analyses from January 2012 through February 2012, when updated hindcasts and real-time forecasts begin using the GFS winds. This paper describes the ATOP system and compares the forecast results against satellite altimetry data for assessing model skills. The model results are also shown to compare well with observations of (i the Kuroshio intrusion in the northern South China Sea, and (ii the Subtropical Countercurrent. A review of, and comparison with, other models in the literature are also given.

  12. An Approach for Validating Actinide and Fission Product Burnup Credit Criticality Safety Analyses-Isotopic Composition Predictions

    International Nuclear Information System (INIS)

    The expanded use of burnup credit in the United States (U.S.) for storage and transport casks, particularly in the acceptance of credit for fission products, has been constrained by the availability of experimental fission product data to support code validation. The U.S. Nuclear Regulatory Commission (NRC) staff has noted that the rationale for restricting the Interim Staff Guidance on burnup credit for storage and transportation casks (ISG-8) to actinide-only is based largely on the lack of clear, definitive experiments that can be used to estimate the bias and uncertainty for computational analyses associated with using burnup credit. To address the issues of burnup credit criticality validation, the NRC initiated a project with the Oak Ridge National Laboratory to (1) develop and establish a technically sound validation approach for commercial spent nuclear fuel (SNF) criticality safety evaluations based on best-available data and methods and (2) apply the approach for representative SNF storage and transport configurations/conditions to demonstrate its usage and applicability, as well as to provide reference bias results. The purpose of this paper is to describe the isotopic composition (depletion) validation approach and resulting observations and recommendations. Validation of the criticality calculations is addressed in a companion paper at this conference. For isotopic composition validation, the approach is to determine burnup-dependent bias and uncertainty in the effective neutron multiplication factor (keff) due to bias and uncertainty in isotopic predictions, via comparisons of isotopic composition predictions (calculated) and measured isotopic compositions from destructive radiochemical assay utilizing as much assay data as is available, and a best-estimate Monte Carlo based method. 
This paper (1) provides a detailed description of the burnup credit isotopic validation approach and its technical bases, (2) describes the application of the approach for

  13. Application of neural networks and its prospect. 4. Prediction of major disruptions in tokamak plasmas, analyses of time series data

    International Nuclear Information System (INIS)

    Disruption prediction for tokamak plasmas has been studied using neural networks. Prediction performance is assessed in terms of the prediction success rate, the false alarm rate, and the warning time prior to disruption. Current-driven disruptions are predicted from time-series data, together with the plasma lifetime, the risk of disruption, and the plasma stability. Some disruptions caused by the density limit, impurity influx, or error magnetic fields can be predicted with a 100% success rate from their premonitory symptoms. Pressure-driven disruption phenomena develop only a few hundred microseconds in advance, so operational limits such as the βN limit of DIII-D and the density limit of ADITYA were investigated. The false alarm rate was decreased by training on βN-limit data from stable discharges. Pressure-driven disruptions occurring as the plasma pressure rises can be predicted with about 90% accuracy by evaluating plasma stability. (S.Y.)
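    As a toy illustration of this kind of predictor, the sketch below trains a minimal logistic "network" on synthetic precursor features and reports the two headline metrics, prediction success rate and false alarm rate. The features, labels, and model are invented for illustration; the paper's networks and tokamak data are not reproduced.

```python
import numpy as np

# Synthetic stand-in: two precursor features (e.g. density relative to the
# limit, mode amplitude) and a binary disruption label with a little noise.
rng = np.random.default_rng(1)
n = 400
X = rng.normal(size=(n, 2))
y = (X @ np.array([2.0, -1.5]) + rng.normal(scale=0.3, size=n) > 0).astype(float)

# Plain gradient descent on logistic loss (the simplest "neural network").
w = np.zeros(2)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    g = p - y
    w -= 0.1 * X.T @ g / n
    b -= 0.1 * g.mean()

pred = 1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5
success_rate = pred[y == 1].mean()   # fraction of disruptions caught
false_alarms = pred[y == 0].mean()   # alarms raised on non-disruptive cases
print(f"success {success_rate:.2f}, false alarms {false_alarms:.2f}")
```

    The trade-off the abstract describes, lowering the false alarm rate by training on stable discharges, corresponds to shifting the decision threshold or enriching the training set with negative examples.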

  14. Analyses of Potential Predictive Markers and Response to Targeted Therapy in Patients with Advanced Clear-cell Renal Cell Carcinoma

    Institute of Scientific and Technical Information of China (English)

    Yan Song; Jing Huang; Ling Shan; Hong-Tu Zhang

    2015-01-01

    Background: Vascular endothelial growth factor-targeted agents are standard treatments in advanced clear-cell renal cell carcinoma (ccRCC), but biomarkers of activity are lacking. The aim of this study was to investigate the association of Von Hippel-Lindau (VHL) gene status, vascular endothelial growth factor receptor (VEGFR) or stem cell factor receptor (KIT) expression, and their relationships with characteristics and clinical outcome of advanced ccRCC. Methods: A total of 59 patients who received targeted treatment with sunitinib or pazopanib were evaluated at the Cancer Hospital and Institute, Chinese Academy of Medical Sciences between January 2010 and November 2012. Paraffin-embedded tumor samples were collected, and the status of the VHL gene and the expression of VEGFR and KIT were determined by VHL sequence analysis and immunohistochemistry. Clinical-pathological features were collected, and efficacy measures such as response rate, median progression-free survival (PFS) and overall survival (OS) were calculated and then compared based on expression status. The Chi-square test, the Kaplan-Meier method, and the log-rank test were used for statistical analyses. Results: Of 59 patients, objective responses were observed in 28 patients (47.5%). The median PFS was 13.8 months and median OS was 39.9 months. There was an improved PFS in patients with the following clinical features: male gender, number of metastatic sites 2 or less, VEGFR-2 positive or KIT positive. Eleven patients (18.6%) had evidence of VHL mutation, with an objective response rate of 45.5%, which showed no difference from patients with no VHL mutation (47.9%). VHL mutation status did not correlate with either overall response rate (P = 0.938) or PFS (P = 0.277). The PFS was 17.6 months and 22.2 months in VEGFR-2 positive patients and KIT positive patients, respectively, which was significantly longer than that of VEGFR-2 or KIT negative patients (P = 0.026 and P = 0.043). Conclusion: VHL mutation status could not predict

  15. Analyses of Potential Predictive Markers and Response to Targeted Therapy in Patients with Advanced Clear-cell Renal Cell Carcinoma

    Directory of Open Access Journals (Sweden)

    Yan Song

    2015-01-01

    Full Text Available Background: Vascular endothelial growth factor-targeted agents are standard treatments in advanced clear-cell renal cell carcinoma (ccRCC), but biomarkers of activity are lacking. The aim of this study was to investigate the association of Von Hippel-Lindau (VHL) gene status, vascular endothelial growth factor receptor (VEGFR) or stem cell factor receptor (KIT) expression, and their relationships with characteristics and clinical outcome of advanced ccRCC. Methods: A total of 59 patients who received targeted treatment with sunitinib or pazopanib were evaluated at the Cancer Hospital and Institute, Chinese Academy of Medical Sciences between January 2010 and November 2012. Paraffin-embedded tumor samples were collected, and the status of the VHL gene and the expression of VEGFR and KIT were determined by VHL sequence analysis and immunohistochemistry. Clinical-pathological features were collected, and efficacy measures such as response rate, median progression-free survival (PFS) and overall survival (OS) were calculated and then compared based on expression status. The Chi-square test, the Kaplan-Meier method, and the log-rank test were used for statistical analyses. Results: Of 59 patients, objective responses were observed in 28 patients (47.5%). The median PFS was 13.8 months and median OS was 39.9 months. There was an improved PFS in patients with the following clinical features: male gender, number of metastatic sites 2 or less, VEGFR-2 positive or KIT positive. Eleven patients (18.6%) had evidence of VHL mutation, with an objective response rate of 45.5%, which showed no difference from patients with no VHL mutation (47.9%). VHL mutation status did not correlate with either overall response rate (P = 0.938) or PFS (P = 0.277). The PFS was 17.6 months and 22.2 months in VEGFR-2 positive patients and KIT positive patients, respectively, which was significantly longer than that of VEGFR-2 or KIT negative patients (P = 0.026 and P = 0.043). Conclusion

  16. Serial and panel analyses of biomarkers do not improve the prediction of bacteremia compared to one procalcitonin measurement

    NARCIS (Netherlands)

    Tromp, M.; Lansdorp, B.; Bleeker-Rovers, C.P.; Klein Gunnewiek, J.M.; Kullberg, B.J.; Pickkers, P.

    2012-01-01

    Objectives We evaluated the value of a single biomarker, biomarker panels, biomarkers combined with clinical signs of sepsis, and serial determinations of biomarkers in the prediction of bacteremia in patients with sepsis. Methods Adult patients visiting the emergency department because of a susp

  17. Serial and panel analyses of biomarkers do not improve the prediction of bacteremia compared to one procalcitonin measurement.

    NARCIS (Netherlands)

    Tromp, M.; Lansdorp, B.; Bleeker-Rovers, C.P.; Gunnewiek, J.M.; Kullberg, B.J.; Pickkers, P.

    2012-01-01

    OBJECTIVES: We evaluated the value of a single biomarker, biomarker panels, biomarkers combined with clinical signs of sepsis, and serial determinations of biomarkers in the prediction of bacteremia in patients with sepsis. METHODS: Adult patients visiting the emergency department because of a suspe

  18. Can the lifetime of the superheater tubes be predicted according to the fuel analyses? Assessment from field and laboratory data

    Energy Technology Data Exchange (ETDEWEB)

    Salmenoja, K. [Kvaerner Pulping Oy, Tampere (Finland)

    1998-12-31

    The lifetime of superheaters in power boilers is still more or less a mystery. This is especially true when firing biomass-based fuels (biofuels) such as bark, forest residues, and straw. Owing to the inhomogeneous nature of biofuels, superheater lifetime may vary from case to case. Sometimes the lifetime is significantly shorter than originally expected; sometimes no corrosion is observed even in the hottest tubes. This is one of the main reasons why boiler operators often demand better predictability of the corrosion resistance of materials, to avoid unscheduled shutdowns. (orig.) 9 refs.

  19. Standardized Software for Wind Load Forecast Error Analyses and Predictions Based on Wavelet-ARIMA Models - Applications at Multiple Geographically Distributed Wind Farms

    Energy Technology Data Exchange (ETDEWEB)

    Hou, Zhangshuan; Makarov, Yuri V.; Samaan, Nader A.; Etingov, Pavel V.

    2013-03-19

    Given the multi-scale variability and uncertainty of wind generation and forecast errors, it is a natural choice to use a time-frequency representation (TFR) as a view of the corresponding time series over both time and frequency. Here we use the wavelet transform (WT) to expand the signal in terms of wavelet functions that are localized in both time and frequency. Each WT component is more stationary and has a more consistent autocorrelation pattern. We combined wavelet analyses with time series forecast approaches such as ARIMA, and tested the approach at three wind farms located far apart from one another. The prediction capability is satisfactory: the day-ahead prediction of errors matches the original error values well, including their patterns, and the observations fall well within the predictive intervals. Integrating our wavelet-ARIMA ('stochastic') model with the weather forecast model ('deterministic') will significantly improve our ability to predict wind power generation and reduce predictive uncertainty.

  20. The GENOTEND chip: a new tool to analyse gene expression in muscles of beef cattle for beef quality prediction

    Science.gov (United States)

    2012-01-01

    30 Charolais young bulls slaughtered in year 2, and in the 21 Charolais steers slaughtered in year 1, but not in the group of 19 steers slaughtered in year 2 which differ from the reference group by two factors (gender and year). When the first three groups of animals were analysed together, this subset of genes explained a 4-fold higher proportion of the variability in tenderness than muscle biochemical traits. Conclusion This study underlined the relevance of the GENOTEND chip to identify markers of beef quality, mainly by confirming previous results and by detecting other genes of the heat shock family as potential markers of beef quality. However, it was not always possible to extrapolate the relevance of these markers to all animal groups which differ by several factors (such as gender or environmental conditions of production) from the initial population of reference in which these markers were identified. PMID:22894653

  1. The GENOTEND chip: a new tool to analyse gene expression in muscles of beef cattle for beef quality prediction

    Directory of Open Access Journals (Sweden)

    Hocquette Jean-Francois

    2012-08-01

    validated in the groups of 30 Charolais young bulls slaughtered in year 2, and in the 21 Charolais steers slaughtered in year 1, but not in the group of 19 steers slaughtered in year 2 which differ from the reference group by two factors (gender and year. When the first three groups of animals were analysed together, this subset of genes explained a 4-fold higher proportion of the variability in tenderness than muscle biochemical traits. Conclusion This study underlined the relevance of the GENOTEND chip to identify markers of beef quality, mainly by confirming previous results and by detecting other genes of the heat shock family as potential markers of beef quality. However, it was not always possible to extrapolate the relevance of these markers to all animal groups which differ by several factors (such as gender or environmental conditions of production from the initial population of reference in which these markers were identified.

  2. Basic Diagnosis and Prediction of Persistent Contrail Occurrence using High-resolution Numerical Weather Analyses/Forecasts and Logistic Regression. Part I: Effects of Random Error

    Science.gov (United States)

    Duda, David P.; Minnis, Patrick

    2009-01-01

    Straightforward application of the Schmidt-Appleman contrail formation criteria to diagnose persistent contrail occurrence from numerical weather prediction data is hindered by significant bias errors in the upper tropospheric humidity. Logistic models of contrail occurrence have been proposed to overcome this problem, but basic questions remain about how random measurement error may affect their accuracy. A set of 5000 synthetic contrail observations is created to study the effects of random error in these probabilistic models. The simulated observations are based on distributions of temperature, humidity, and vertical velocity derived from Advanced Regional Prediction System (ARPS) weather analyses. The logistic models created from the simulated observations were evaluated using two common statistical measures of model accuracy, the percent correct (PC) and the Hanssen-Kuipers discriminant (HKD). To convert the probabilistic results of the logistic models into a dichotomous yes/no choice suitable for the statistical measures, two critical probability thresholds are considered. The HKD scores are higher when the climatological frequency of contrail occurrence is used as the critical threshold, while the PC scores are higher when the critical probability threshold is 0.5. For both thresholds, typical random errors in temperature, relative humidity, and vertical velocity are found to be small enough to allow for accurate logistic models of contrail occurrence. The accuracy of the models developed from synthetic data is over 85 percent for both the prediction of contrail occurrence and non-occurrence, although in practice, larger errors would be anticipated.
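    The two skill scores and the probability-thresholding step can be sketched as follows. The synthetic probabilities stand in for the logistic-model output, and the 15% climatological frequency used as one threshold is illustrative, not taken from the paper.

```python
import numpy as np

def pc_and_hkd(y_true, y_prob, threshold):
    """Percent correct and Hanssen-Kuipers discriminant for a yes/no
    forecast obtained by thresholding predicted probabilities."""
    yhat = y_prob >= threshold
    a = np.sum(yhat & (y_true == 1))    # hits
    b = np.sum(yhat & (y_true == 0))    # false alarms
    c = np.sum(~yhat & (y_true == 1))   # misses
    d = np.sum(~yhat & (y_true == 0))   # correct negatives
    pc = (a + d) / (a + b + c + d)
    hkd = a / (a + c) - b / (b + d)     # hit rate minus false-alarm rate
    return pc, hkd

# Illustrative synthetic observations and model probabilities.
rng = np.random.default_rng(0)
y = (rng.random(1000) < 0.15).astype(int)   # ~15% occurrence frequency
prob = np.clip(0.15 + 0.5 * (y - 0.15) + rng.normal(0, 0.15, 1000), 0, 1)

for thr in (0.5, 0.15):   # the two critical thresholds considered
    pc, hkd = pc_and_hkd(y, prob, thr)
    print(f"threshold {thr}: PC={pc:.2f}, HKD={hkd:.2f}")
```

    Thresholding at 0.5 favors overall accuracy (PC), while thresholding at the climatological frequency trades false alarms for a higher hit rate, which is why the two scores can rank the thresholds differently.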

  3. Prediction

    OpenAIRE

    Woollard, W.J.

    2006-01-01

    In this chapter we will look at the ways in which you can use ICT in the classroom to support hypothesis and prediction and how modern technology is enabling: pattern seeking, extrapolation and interpolation to meet the challenges of the information explosion of the 21st century.

  4. Computational fluid dynamics analyses of lateral heat conduction, coolant azimuthal mixing and heat transfer predictions in a BR2 fuel assembly geometry.

    Energy Technology Data Exchange (ETDEWEB)

    Tzanos, C. P.; Dionne, B. (Nuclear Engineering Division)

    2011-05-23

    To support the analyses related to the conversion of the BR2 core from highly-enriched (HEU) to low-enriched (LEU) fuel, the thermal-hydraulics codes PLTEMP and RELAP-3D are used to evaluate the safety margins during steady-state operation (PLTEMP), as well as after a loss-of-flow, loss-of-pressure, or loss-of-coolant event (RELAP). In the 1-D PLTEMP and RELAP simulations, conduction in the azimuthal and axial directions is not accounted for. The very good thermal conductivity of the cladding and the fuel meat, together with significant temperature gradients in the lateral (axial and azimuthal) directions, could lead to a heat flux distribution that is significantly different from the power distribution. To evaluate the significance of the lateral heat conduction, 3-D computational fluid dynamics (CFD) simulations, using the CFD code STAR-CD, were performed. Safety margin calculations are typically performed for a hot stripe, i.e., an azimuthal region of the fuel plates/coolant channel containing the power peak. In a RELAP model, for example, a channel between two plates could be divided into a number of RELAP channels (stripes) in the azimuthal direction. In a PLTEMP model, the effect of azimuthal power peaking could be taken into account by using engineering factors. However, if the thermal mixing in the azimuthal direction of a coolant channel is significant, a striping approach could be overly conservative by not taking this mixing into account. STAR-CD simulations were also performed to study the thermal mixing in the coolant. Section II of this document presents the results of the analyses of the lateral heat conduction and azimuthal thermal mixing in a coolant channel. Finally, PLTEMP and RELAP simulations rely on the use of correlations to determine heat transfer coefficients. Previous analyses showed that the Dittus-Boelter correlation gives significantly more conservative (lower) predictions than the correlations of Sieder-Tate and Petukhov. STAR-CD 3-D
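    The conservatism noted at the end of the abstract can be illustrated numerically. The sketch below evaluates the Dittus-Boelter and Sieder-Tate correlations at illustrative, water-like conditions, not BR2 design data.

```python
# Compare two classic turbulent forced-convection correlations.
# All property values below are illustrative (water-like), chosen only
# to show that Dittus-Boelter predicts a lower Nusselt number (hence a
# lower, more conservative heat transfer coefficient) than Sieder-Tate.
Re, Pr = 50_000.0, 7.0        # turbulent channel flow, water-like Prandtl
mu_bulk_over_wall = 1.8       # mu/mu_w > 1 when the wall is hotter than the bulk

nu_db = 0.023 * Re**0.8 * Pr**0.4                              # Dittus-Boelter (heating)
nu_st = 0.027 * Re**0.8 * Pr**(1/3) * mu_bulk_over_wall**0.14  # Sieder-Tate

print(f"Nu Dittus-Boelter = {nu_db:.0f}")
print(f"Nu Sieder-Tate    = {nu_st:.0f}")
print(f"DB/ST ratio       = {nu_db / nu_st:.2f}")  # < 1: DB is more conservative
```

    Since the heat transfer coefficient scales directly with Nu, a lower Nu means higher predicted cladding temperatures, which is the conservative direction for safety margin calculations.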

  5. Prediction

    CERN Document Server

    Sornette, Didier

    2010-01-01

    This chapter first presents a rather personal view of some different aspects of predictability, going in crescendo from simple linear systems to high-dimensional nonlinear systems with stochastic forcing, which exhibit emergent properties such as phase transitions and regime shifts. Then, a detailed correspondence between the phenomenology of earthquakes, financial crashes and epileptic seizures is offered. The presented statistical evidence provides the substance of a general phase diagram for understanding the many facets of the spatio-temporal organization of these systems. A key insight is to organize the evidence and mechanisms in terms of two summarizing measures: (i) amplitude of disorder or heterogeneity in the system and (ii) level of coupling or interaction strength among the system's components. On the basis of the recently identified remarkable correspondence between earthquakes and seizures, we present detailed information on a class of stochastic point processes that has been found to be particu...

  6. A systematic study of coordinate precision in X-ray structure analyses. Pt. 1. Descriptive statistics and predictive estimates of E.S.D.'s for C atoms

    International Nuclear Information System (INIS)

    This study examines the relationship of structure precision, as expressed by the e.s.d.'s of atomic coordinates, to the R factor and chemical constitution of a given crystal structure. On the basis of the work of Cruickshank [Acta Cryst. (1960), 13, 774-777], it is shown that σ̄(C-C), the mean e.s.d. of a C-C bond length in a structure, or σ̄(C), the mean isotropic e.s.d. of a C atom, can be estimated by expressions of the form σ̄ = kR·Nc^(1/2). Here, Nc is taken as ΣZi²/ZC², with the atomic numbers Zi summed over all atoms in the asymmetric unit and ZC = 6. It is also shown that σ̄(E), the mean isotropic e.s.d. of a non-C atom, can be estimated by σ̄(E) = kR·Nc^(1/2)/ZE. Values of k were determined by regression analyses based on subsets of 25 984 and 20 334 entries in the Cambridge Structural Database (CSD) that contain atomic coordinate e.s.d.'s. 95% of coordinate e.s.d.'s for C atoms can be estimated to within 0.005 Å of their published value and 78% to within 0.0025 Å. These predicted σ̄ values provide useful estimates of precision for those 39 000 structures for which coordinate e.s.d.'s are not available in the CSD. Details of the diffraction experiment, which might provide an improved estimating function in Cruickshank's (1960) treatment, are not available in any CSD entries. However, values of Nr (the number of reflections) and Np (the number of parameters) used in refinement were added manually for 817 entries, and the variation of σ̄(C-C) with decreasing Nr/Np ratios is examined: there is a rapid increase in σ̄(C-C) as Nr/Np decreases below circa 6.0. A method for approximating s̄, the r.m.s. reciprocal radius for the reflections observed, is presented, but it is found that a function of the form σ̄(C-C) = kR·Nc^(1/2)/[s̄(Nr − Np)^(1/2)] [directly analogous to Cruickshank's (1960) equation] had only slightly improved predictive ability for this data set by comparison with functions based upon R and Nc^(1/2) alone. Possible
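    The estimating formula σ̄ = kR·Nc^(1/2), with Nc = ΣZi²/ZC², can be sketched directly. The value of k below is a placeholder (the paper obtains k by regression over CSD subsets), and the example composition is hypothetical.

```python
import math

# Atomic numbers for a few common elements.
Z = {"C": 6, "N": 7, "O": 8, "S": 16}

def n_c(atom_counts):
    """Nc = sum of Zi^2 over all atoms in the asymmetric unit, divided by ZC^2."""
    return sum(n * Z[el] ** 2 for el, n in atom_counts.items()) / Z["C"] ** 2

def sigma_bar_c(R, atom_counts, k=0.045):
    """Mean isotropic e.s.d. of a C atom; k = 0.045 is illustrative only."""
    return k * R * math.sqrt(n_c(atom_counts))

# Hypothetical organic structure (H atoms ignored: their Z^2 contribution
# is negligible), refined to R = 0.05.
counts = {"C": 10, "N": 2, "O": 2, "S": 1}
print(f"Nc = {n_c(counts):.2f}")
print(f"estimated sigma_bar(C) = {sigma_bar_c(0.05, counts):.4f} (same units as the fit, here Å)")
```

    The form of the expression makes the two drivers of precision explicit: a higher R factor, or a heavier-atom constitution (larger Nc), both inflate the expected coordinate e.s.d.'s.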

  7. Using meta-analytic path analysis to test theoretical predictions in health behavior: An illustration based on meta-analyses of the theory of planned behavior

    OpenAIRE

    Hagger, Martin; Chan, Dervin K. C.; Protogerou, Cleo; Chatzisarantis, Nikos L. D.

    2016-01-01

    Objective Synthesizing research on social cognitive theories applied to health behavior is an important step in the development of an evidence base of psychological factors as targets for effective behavioral interventions. However, few meta-analyses of research on social cognitive theories in health contexts have conducted simultaneous tests of theoretically-stipulated pattern effects using path analysis. We argue that conducting path analyses of meta-analytic effects among constructs fr...

  8. Application of pathways analyses for site performance prediction for the Gas Centrifuge Enrichment Plant and Oak Ridge Central Waste Disposal Facility

    International Nuclear Information System (INIS)

    The suitability of the Gas Centrifuge Enrichment Plant and the Oak Ridge Central Waste Disposal Facility for shallow-land burial of low-level radioactive waste is evaluated using pathways analyses. The analyses rely on conservative scenarios to describe the generation and migration of contamination and the potential human exposure to the waste. Conceptual and numerical models are developed using data from comprehensive laboratory and field investigations and are used to simulate the long-term transport of contamination to man. Conservatism is built into the analyses when assumptions concerning future events have to be made or when uncertainties concerning site or waste characteristics exist. Maximum potential doses to man are calculated and compared to the appropriate standards. The sites are found to provide an adequate buffer for persons outside the DOE reservations. Conclusions concerning site capacity and site acceptability are drawn. In reaching these conclusions, some consideration is given to the uncertainties and conservatisms involved in the analyses. Analytical methods to quantitatively assess the probability that future events will occur, and the sensitivity of the results to data uncertainty, may prove useful in relaxing some of the conservatism built into the analyses. The applicability of such methods to pathways analyses is briefly discussed. 18 refs., 9 figs

  9. Comparison of and limits of accuracy for statistical analyses of vibrational and electronic circular dichroism spectra in terms of correlations to and predictions of protein secondary structure.

    OpenAIRE

    Pancoska, P.; Bitto, E.; Janota, V.; Urbanova, M.; Gupta, V P; Keiderling, T A

    1995-01-01

    This work provides a systematic comparison of vibrational CD (VCD) and electronic CD (ECD) methods for spectral prediction of secondary structure. The VCD and ECD data are simplified to a small set of spectral parameters using the principal component method of factor analysis (PC/FA). Regression fits of these parameters are made to the X-ray-determined fractional components (FC) of secondary structure. Predictive capability is determined by computing structures for proteins sequentially left ...

  10. A systematic study of coordinate precision in X-ray structure analyses. Pt. 2. Predictive estimates of E.S.D.'s for the general-atom case

    International Nuclear Information System (INIS)

    The relationship between the mean isotropic e.s.d. σ̄(A)o of any element type A in a crystal structure and the R factor and atomic constitution of that structure is explored for 124 905 element-type occurrences calculated from 33 955 entries in the Cambridge Structural Database. On the basis of the work of Cruickshank [Acta Cryst. (1960), 13, 774-777], it is shown that σ̄(A)p values can be estimated by equations of the form σ̄(A)p = K·R·Nc^(1/2)/ZA, where Nc is taken as Σ Zi²/ZC², the Zi are atomic numbers and the summation is over all atoms in the asymmetric unit. Values of K were obtained by regression techniques using the σ̄(A)o as basis. The constant Knc for noncentrosymmetric structures is found to be larger than Kc for centrosymmetric structures by a factor of ~2^(1/2), as predicted by Cruickshank (1960). Two predictive equations are generated, one for first-row elements and the second for elements with ZA > 10. The relationship between the different constants K that arise in these two situations is linked to shape differentials in scattering-factor (fi) curves for light and heavy atoms. It is found that predictive equations in which the Zi are selectively replaced by fi at a constant sin θ/λ of 0.30 Å⁻¹ generate closely similar values of K for the light-atom and heavy-atom subsets. The overall analysis indicates that atomic e.s.d.'s may be seriously underestimated in the more precise structure determinations, that e.s.d.'s for the heaviest atoms may be less reliable than those for lighter atoms and that e.s.d.'s in noncentrosymmetric structures may be less accurate than those in centrosymmetric structures. (orig.)
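
The predictive relation above can be sketched directly. The constant K below is a placeholder, not one of the fitted values from the paper; ZC = 6 (carbon) is assumed as the normalizing atomic number in Nc.

```python
# Sketch of the predictive e.s.d. relation described above:
#   sigma(A) = K * R * sqrt(N_c) / Z_A,  with  N_c = sum(Z_i^2) / Z_C^2,
# where the sum runs over all atoms in the asymmetric unit and Z_C = 6
# (carbon, assumed) normalizes N_c to an "effective number of carbon atoms".
import math

Z_C = 6  # atomic number of carbon

def predicted_esd(K, R, z_numbers, Z_A):
    """Estimate the mean isotropic e.s.d. for element type A."""
    N_c = sum(z * z for z in z_numbers) / Z_C**2
    return K * R * math.sqrt(N_c) / Z_A

# Example: a small light-atom structure (eight C, two O non-H atoms),
# R = 0.05, hypothetical K = 1.0, e.s.d. for a carbon atom:
atoms = [6] * 8 + [8] * 2
print(predicted_esd(1.0, 0.05, atoms, 6))
```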

  11. Analysing EWviews

    DEFF Research Database (Denmark)

    Jelsøe, Erling; Jæger, Birgit

    2015-01-01

    When analysing the results of a European-wide citizen consultation on sustainable consumption it is necessary to take a number of issues into account, such as the question of representativity and tensions between national and European identities and between consumer and citizen orientations regarding...

  12. Barriers to predicting changes in global terrestrial methane fluxes: analyses using CLM4Me, a methane biogeochemistry model integrated in CESM

    Directory of Open Access Journals (Sweden)

    W. J. Riley

    2011-07-01

    Terrestrial net CH4 surface fluxes often represent the difference between much larger gross production and consumption fluxes and depend on multiple physical, biological, and chemical mechanisms that are poorly understood and represented in regional- and global-scale biogeochemical models. To characterize uncertainties, study feedbacks between CH4 fluxes and climate, and to guide future model development and experimentation, we developed and tested a new CH4 biogeochemistry model (CLM4Me) integrated in the land component (Community Land Model; CLM4) of the Community Earth System Model (CESM1). CLM4Me includes representations of CH4 production, oxidation, aerenchyma transport, ebullition, aqueous and gaseous diffusion, and fractional inundation. As with most global models, CLM4 lacks important features for predicting current and future CH4 fluxes, including: vertical representation of soil organic matter, accurate subgrid scale hydrology, realistic representation of inundated system vegetation, anaerobic decomposition, thermokarst dynamics, and aqueous chemistry. We compared the seasonality and magnitude of predicted CH4 emissions to observations from 18 sites and three global atmospheric inversions. Simulated net CH4 emissions using our baseline parameter set were 270, 160, 50, and 70 Tg CH4 yr−1 globally, in the tropics, in the temperate zone, and north of 45° N, respectively; these values are within the range of previous estimates. We then used the model to characterize the sensitivity of regional and global CH4 emission estimates to uncertainties in model parameterizations. Of the parameters we tested, the temperature sensitivity of CH4 production, oxidation parameters, and aerenchyma properties had the largest impacts on net CH4 emissions, up to a factor of 4 and 10 at the regional and gridcell scales.
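
A one-at-a-time parameter sweep of the kind used above to characterize sensitivity can be sketched with a toy Q10-style production term. The model form and all parameter values are illustrative assumptions, not CLM4Me parameterizations.

```python
# Toy one-at-a-time (OAT) sensitivity sweep over a Q10-style temperature
# sensitivity of CH4 production. Illustrative only; not the CLM4Me model.

def ch4_production(q10, t_soil, base_rate=1.0, t_ref=25.0):
    """CH4 production (arbitrary units) with Q10 temperature sensitivity."""
    return base_rate * q10 ** ((t_soil - t_ref) / 10.0)

baseline = ch4_production(q10=2.0, t_soil=15.0)

# Vary the temperature-sensitivity parameter across an assumed range and
# report the flux relative to the baseline parameterization
for q10 in (1.5, 2.0, 3.0, 4.0):
    flux = ch4_production(q10, t_soil=15.0)
    print(q10, round(flux / baseline, 2))
```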

  13. Barriers to predicting changes in global terrestrial methane fluxes: analyses using CLM4Me, a methane biogeochemistry model integrated in CESM

    Directory of Open Access Journals (Sweden)

    W. J. Riley

    2011-02-01

    Terrestrial net CH4 surface fluxes often represent the difference between much larger gross production and consumption fluxes and depend on multiple physical, biological, and chemical mechanisms that are poorly understood and represented in regional- and global-scale biogeochemical models. To characterize uncertainties, study feedbacks between CH4 fluxes and climate, and to guide future model development and experimentation, we developed and tested a new CH4 biogeochemistry model (CLM4Me) integrated in the land component (Community Land Model; CLM4) of the Community Earth System Model (CESM1). CLM4Me includes representations of CH4 production, oxidation, aerenchyma transport, ebullition, aqueous and gaseous diffusion, and fractional inundation. As with most global models, CLM4Me lacks important features for predicting current and future CH4 fluxes, including: vertical representation of soil organic matter, accurate subgrid scale hydrology, realistic representation of inundated system vegetation, anaerobic decomposition, thermokarst dynamics, and aqueous chemistry. We compared the seasonality and magnitude of predicted CH4 emissions to observations from 18 sites and three global atmospheric inversions. Simulated net CH4 emissions using our baseline parameter set were 270, 160, 50, and 70 Tg CH4 yr−1 globally, in the tropics, temperate zone, and north of 45° N, respectively; these values are within the range of previous estimates. We then used the model to characterize the sensitivity of regional and global CH4 emission estimates to uncertainties in model parameterizations. Of the parameters we tested, the temperature sensitivity of CH4 production, oxidation parameters, and aerenchyma properties had the largest impacts on net CH4 emissions, up to a factor of 4 and 10 at the regional and gridcell

  14. Generation of a predicted protein database from EST data and application to iTRAQ analyses in grape (Vitis vinifera cv. Cabernet Sauvignon) berries at ripening initiation

    Directory of Open Access Journals (Sweden)

    Smith Derek

    2009-01-01

    Background: iTRAQ is a proteomics technique that uses isobaric tags for relative and absolute quantitation of tryptic peptides. In proteomics experiments, the detection and high-confidence annotation of proteins and the significance of corresponding expression differences can depend on the quality and the species specificity of the tryptic peptide map database used for analysis of the data. For species for which finished genome sequence data are not available, identification of proteins relies on similarity to proteins from other species using comprehensive peptide map databases such as the MSDB. Results: We were interested in characterizing ripening initiation ('veraison') in grape berries at the protein level in order to better define the molecular control of this important process for grape growers and wine makers. We developed a bioinformatic pipeline for processing EST data in order to produce a predicted tryptic peptide database specifically targeted to the wine grape cultivar, Vitis vinifera cv. Cabernet Sauvignon, and lacking truncated N- and C-terminal fragments. By searching iTRAQ MS/MS data generated from berry exocarp and mesocarp samples at ripening initiation, we determined that implementation of the custom database afforded a large improvement in high-confidence peptide annotation in comparison to the MSDB. We used iTRAQ MS/MS in conjunction with custom peptide database searches to quantitatively characterize several important pathway components for berry ripening previously described at the transcriptional level, and confirmed expression patterns for these at the protein level. Conclusion: We determined that a predicted peptide database for MS/MS applications can be derived from EST data using advanced clustering and trimming approaches and successfully implemented for quantitative proteome profiling. Quantitative shotgun proteome profiling holds great promise for characterizing biological processes such as fruit ripening.
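
The in-silico digestion step of such a pipeline can be sketched as follows: cleave after K or R except before P, and drop the first and last fragments to mimic the removal of potentially truncated N- and C-terminal peptides from partial, EST-predicted proteins. This is a simplified assumption about the pipeline, and the example sequence is invented.

```python
import re

def tryptic_peptides(protein, drop_termini=True, min_len=6):
    """In-silico tryptic digest: cleave after K or R except before P.

    drop_termini discards the first and last fragments, mimicking the
    removal of potentially truncated N- and C-terminal peptides from
    EST-predicted (possibly partial) protein sequences. Simplified sketch;
    the actual pipeline details are not reproduced here.
    """
    peptides = re.split(r'(?<=[KR])(?!P)', protein)
    if drop_termini:
        peptides = peptides[1:-1]
    return [p for p in peptides if len(p) >= min_len]

seq = "MKTAYIAKQRPGLSVR" + "AGHKLMNPQRSTVWYK" + "LLFG"
print(tryptic_peptides(seq))  # note: no cleavage inside "QRP" (R before P)
```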

  15. Comparative analyses of genetic risk prediction methods reveal extreme diversity of genetic predisposition to nonalcoholic fatty liver disease (NAFLD) among ethnic populations of India

    Indian Academy of Sciences (India)

    Ankita Chatterjee; Analabha Basu; Abhijit Chowdhury; Kausik Das; Neeta Sarkar-Roy; Partha P. Majumder; Priyadarshi Basu

    2015-03-01

    Nonalcoholic fatty liver disease (NAFLD) is a distinct pathologic condition characterized by a disease spectrum ranging from simple steatosis to steato-hepatitis, cirrhosis and hepatocellular carcinoma. Prevalence of NAFLD varies in different ethnic groups, ranging from 12% in Chinese to 45% in Hispanics. Among Indian populations, the diversity in prevalence is high, ranging from 9% in rural populations to 32% in urban populations, with geographic differences as well. Here, we wished to find out if this difference is reflected in their genetic makeup. To date, several candidate gene and a few genomewide association studies (GWAS) have been carried out, and many associations between single nucleotide polymorphisms (SNPs) and NAFLD have been observed. In this study, the risk allele frequencies (RAFs) of NAFLD-associated SNPs in 20 Indian ethnic populations (376 individuals) were analysed. We used two different measures for calculating genetic risk scores and compared their performance. The correlation of additive risk scores of NAFLD for three HapMap populations with their weighted mean prevalence was found to be high (r² = 0.93). We then used this method to compare NAFLD risk among ethnic Indian populations. Based on our observations, the Indian caste populations have high risk scores, comparable to those of Caucasians (who are often used as surrogates for Indian caste populations in disease gene association studies) and significantly higher than those of the Indian tribal populations.
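
The two risk-score measures compared above can be sketched as an unweighted additive score (a count of risk alleles) and an effect-size-weighted score. The SNP genotypes and per-allele log odds ratios below are invented for illustration.

```python
import numpy as np

# Hypothetical SNP data: rows = individuals, values = risk-allele counts
# (0, 1 or 2). SNPs and effect sizes are illustrative, not from the study.
genotypes = np.array([
    [2, 1, 0],
    [1, 1, 1],
    [0, 0, 2],
])
log_odds_ratios = np.array([0.30, 0.18, 0.45])  # assumed per-allele effects

# Unweighted additive risk score: total number of risk alleles carried
additive_score = genotypes.sum(axis=1)

# Weighted score: risk alleles weighted by their published effect sizes
weighted_score = genotypes @ log_odds_ratios

print(additive_score, weighted_score)
```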

  16. Effects of pharmacists' interventions on appropriateness of prescribing and evaluation of the instruments' (MAI, STOPP and START) ability to predict hospitalization--analyses from a randomized controlled trial.

    Directory of Open Access Journals (Sweden)

    Ulrika Gillespie

    BACKGROUND: Appropriateness of prescribing can be assessed by various measures and screening instruments. The aims of this study were to investigate the effects of pharmacists' interventions on appropriateness of prescribing in elderly patients, and to explore the relationship between these results and hospital care utilization during a 12-month follow-up period. METHODS: The study population from a previous randomized controlled study, in which the effects of a comprehensive pharmacist intervention on re-hospitalization were investigated, was used. The criteria from the instruments MAI, STOPP and START were applied retrospectively to the 368 study patients (intervention group (I) n = 182, control group (C) n = 186). The assessments were done on admission and at discharge to detect differences over time and between the groups. Hospital care consumption was recorded and the association between scores for appropriateness and hospitalization was analysed. RESULTS: The number of Potentially Inappropriate Medicines (PIMs) per patient as identified by STOPP was reduced for I but not for C (1.42 to 0.93 vs. 1.46 to 1.66, respectively, p<0.01). The number of Potential Prescription Omissions (PPOs) per patient as identified by START was reduced for I but not for C (0.36 to 0.09 vs. 0.42 to 0.45, respectively, p<0.001). The summated score for MAI was reduced for I but not for C (8.5 to 5.0 and 8.7 to 10.0, respectively, p<0.001). There was a positive association between scores for MAI and STOPP and drug-related readmissions (RR 8-9% and 30-34%, respectively). No association was detected between the scores of the tools and total re-visits to hospital. CONCLUSION: The interventions significantly improved the appropriateness of prescribing for patients in the intervention group, as evaluated by the instruments MAI, STOPP and START. High scores in MAI and STOPP were associated with a higher number of drug-related readmissions.

  17. A new tool for prediction and analysis of thermal comfort in steady and transient states; Un nouvel outil pour la prediction et l'analyse du confort thermique en regime permanent et variable

    Energy Technology Data Exchange (ETDEWEB)

    Megri, A.Ch. [Illinois Institute of Technology, Civil and Architectural Engineering Dept., Chicago, Illinois (United States); Megri, A.F. [Centre Universitaire de Tebessa, Dept. d' Electronique (Algeria); El Naqa, I. [Washington Univ., School of Medicine, Dept. of Radiation Oncology, Saint Louis, Missouri (United States); Achard, G. [Universite de Savoie, Lab. Optimisation de la Conception et Ingenierie de L' Environnement (LOCIE) - ESIGEC, 73 - Le Bourget du Lac (France)

    2006-02-15

    Thermal comfort is influenced by psychological as well as physiological factors. This paper proposes the use of support vector machine (SVM) learning for automated prediction of human thermal comfort in steady and transient states. The SVM is an artificial intelligence approach that can capture the input/output mapping from the given data. Support vector machines were developed based on the Structural Risk Minimization principle. Different sets of representative experimental environmental factors that affect a homogeneous person's thermal balance were used for training the SVM. The SVM is a very efficient, fast, and accurate technique to identify thermal comfort. This technique permits the determination of thermal comfort indices for different sub-categories of people, such as the sick and the elderly, in extreme climatic conditions, when the experimental data for such a sub-category are available. The experimental data have been used for the learning and testing processes. The results show a good correlation between SVM-predicted values and those obtained from conventional thermal comfort models, such as the Fanger and Gagge models. The 'trained machine' with representative data could be used easily and effectively in comparison with other conventional methods of estimating the different indices. (author)
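
A hedged sketch of the approach, using scikit-learn's SVR in place of whatever SVM implementation the authors used, with synthetic data standing in for the measured environmental factors (the toy comfort formula below is an assumption, not a validated comfort model):

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n = 200
air_temp = rng.uniform(18.0, 30.0, n)      # deg C
rel_humidity = rng.uniform(30.0, 70.0, n)  # %

# Toy comfort index shaped like a PMV scale (-3..+3); this formula is an
# illustrative assumption, not a validated thermal comfort model.
comfort = 0.25 * (air_temp - 24.0) + 0.01 * (rel_humidity - 50.0)

# Train the SVM regressor on the environmental factors
X = np.column_stack([air_temp, rel_humidity])
model = SVR(kernel="rbf", C=100.0, epsilon=0.01, gamma="scale").fit(X, comfort)

# A neutral condition (24 deg C, 50% RH) should map near the scale's middle
print(model.predict([[24.0, 50.0]]))
```

In a real application the inputs would include all relevant factors (air velocity, mean radiant temperature, clothing and metabolic levels), and the targets would come from experimental comfort ratings rather than a formula.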

  18. Life Course and Intergenerational Continuity of Intimate Partner Aggression and Physical Injury: A 20-Year Study.

    Science.gov (United States)

    Knight, Kelly E; Menard, Scott; Simmons, Sara B; Bouffard, Leana A; Orsi, Rebecca

    2016-01-01

    The objective of this study is to examine continuity of intimate partner aggression (IPA), which is defined as repeated annual involvement in IPA, across respondents' life course and into the next generation, where it may emerge among adult children. A national, longitudinal, and multigenerational sample of 1,401 individuals and their adult children is analyzed. Annual data on IPA severity and physical injury were collected by the National Youth Survey Family Study across a 20-year period from 1984 to 2004. Three hypotheses and biological sex differences are tested and effect sizes are estimated. First, findings reveal evidence for life course continuity (IPA is a strong predictor of subsequent IPA), but the overall trend decreases over time. Second, intergenerational continuity is documented (parents' IPA predicts adult children's IPA), but the effect is stronger for female than for male adult children. Third, results from combined and separate, more restrictive, measures of victimization and perpetration are nearly identical except in the intergenerational analyses. Fourth, evidence for continuity is not found when assessing physical injury alone. Together, these findings imply that some but not all forms of IPA are common, continuous, and intergenerational. Life course continuity appears stronger than intergenerational continuity. PMID:27076093

  19. Structural analysis of point mutations at the Vaccinia virus A20/D4 interface.

    Science.gov (United States)

    Contesto-Richefeu, Céline; Tarbouriech, Nicolas; Brazzolotto, Xavier; Burmeister, Wim P; Peyrefitte, Christophe N; Iseni, Frédéric

    2016-09-01

    The Vaccinia virus polymerase holoenzyme is composed of three subunits: E9, the catalytic DNA polymerase subunit; D4, a uracil-DNA glycosylase; and A20, a protein with no known enzymatic activity. The D4/A20 heterodimer is the DNA polymerase cofactor, the function of which is essential for processive DNA synthesis. The recent crystal structure of D4 bound to the first 50 amino acids of A20 (D4/A20(1-50)) revealed the importance of three residues, forming a cation-π interaction at the dimerization interface, for complex formation. These are Arg167 and Pro173 of D4 and Trp43 of A20. Here, the crystal structures of the three mutants D4-R167A/A20(1-50), D4-P173G/A20(1-50) and D4/A20(1-50)-W43A are presented. The D4/A20 interface of the three structures has been analysed for atomic solvation parameters and cation-π interactions. This study confirms previous biochemical data and also points out the importance for stability of the restrained conformational space of Pro173. Moreover, these new structures will be useful for the design and rational improvement of known molecules targeting the D4/A20 interface. PMID:27599859

  20. Adiponectin induces A20 expression in adipose tissue to confer metabolic benefit.

    Science.gov (United States)

    Hand, Laura E; Usan, Paola; Cooper, Garth J S; Xu, Lance Y; Ammori, Basil; Cunningham, Peter S; Aghamohammadzadeh, Reza; Soran, Handrean; Greenstein, Adam; Loudon, Andrew S I; Bechtold, David A; Ray, David W

    2015-01-01

    Obesity is a major risk factor for metabolic disease, with white adipose tissue (WAT) inflammation emerging as a key underlying pathology. We show that mice lacking Reverbα exhibit enhanced fat storage without the predicted increase in WAT inflammation or loss of insulin sensitivity. In contrast to most animal models of obesity and obese human patients, Reverbα(-/-) mice exhibit elevated serum adiponectin levels and increased adiponectin secretion from WAT explants in vitro, highlighting a potential anti-inflammatory role of this adipokine in hypertrophic WAT. Indeed, adiponectin was found to suppress primary macrophage responses to lipopolysaccharide and proinflammatory fatty acids, and this suppression depended on glycogen synthase kinase 3β activation and induction of A20. Attenuated inflammatory responses in Reverbα(-/-) WAT depots were associated with tonic elevation of A20 protein and were shown ex vivo to depend on A20. We also demonstrate that adipose A20 expression in obese human subjects exhibits a negative correlation with measures of insulin sensitivity. Furthermore, bariatric surgery-induced weight loss was accompanied by enhanced WAT A20 expression, which is positively correlated with increased serum adiponectin and improved metabolic and inflammatory markers, including C-reactive protein. The findings identify A20 as a mediator of adiponectin anti-inflammatory action in WAT and a potential target for mitigating obesity-related pathology. PMID:25190567

  1. Conceptual Nuclear Design of a 20 MW Multipurpose Research Reactor

    International Nuclear Information System (INIS)

    A conceptual nuclear design of a 20 MW multi-purpose research reactor for Vietnam has been jointly done by the KAERI and the DNRI (VAEC). The AHR reference core in this report is a light water cooled and heavy water reflected open-tank-in-pool type multipurpose research reactor with 20 MW. The rod type fuel of a dispersed U3Si2-Al with a density of 4.0 gU/cc is used as a fuel. The core consists of fourteen 36-element assemblies, four 18-element assemblies and has three in-core irradiation sites. The reflector tank filled with heavy water surrounds the core and provides rooms for various irradiation holes. Major analyses have been done for the relevant nuclear design parameters such as the neutron flux and power distributions, reactivity coefficients, control rod worths, etc. For the analysis, the MCNP, MVP, and HELIOS codes were used by KAERI and DNRI (VAEC). The results by MCNP (KAERI) and MVP (DNRI) showed good agreement and can be summarized as follows. For a clean, unperturbed core condition such that the fuels are all fresh and there are no irradiation holes in the reflector region, the fast neutron flux (En ≥ 1.0 MeV) reaches 1.47×10^14 n/cm²·s and the maximum thermal neutron flux (En ≤ 0.625 eV) reaches 4.43×10^14 n/cm²·s in the core region. In the reflector region, the thermal neutron peak occurs about 28 cm from the core center and the maximum thermal neutron flux is estimated to be 4.09×10^14 n/cm²·s. For the analysis of the equilibrium cycle core, the irradiation facilities in the reflector region were considered. The cycle length was estimated as 38 days with a refueling scheme of replacing three 36-element fuel assemblies or replacing two 36-element and one 18-element fuel assemblies. The excess reactivity at a BOC was 103.4 mk, and 24.6 mk at a minimum was reserved at an EOC. The assembly average discharge burnup was 54.6% of initial U-235 loading. For the proposed fuel management scheme, the maximum peaking factor Fq was calculated as 2

  2. Cruise: Climate Variability and Predictability (CLIVAR) A22, A20 (AT20, EM122)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The hydrographic surveys will consist of approximately 180 full water column CTD/LADCP casts along the trackline. Each cast will acquire up to 36 water samples on...

  3. Novel A20-gene-eluting stent inhibits carotid artery restenosis in a porcine model

    Science.gov (United States)

    Zhou, Zhen-hua; Peng, Jing; Meng, Zhao-you; Chen, Lin; Huang, Jia-Lu; Huang, He-qing; Li, Li; Zeng, Wen; Wei, Yong; Zhu, Chu-Hong; Chen, Kang-Ning

    2016-01-01

    Background Carotid artery stenosis is a major risk factor for ischemic stroke. Although carotid angioplasty and stenting using an embolic protection device has been introduced as a less invasive carotid revascularization approach, in-stent restenosis limits its long-term efficacy and safety. The objective of this study was to test the anti-restenosis effects of local stent-mediated delivery of the A20 gene in a porcine carotid artery model. Materials and methods The pCDNA3.1EHA20 was firmly attached onto stents that had been collagen coated and treated with N-succinimidyl-3-(2-pyridyldithiol)propionate solution and anti-DNA immunoglobulin fixation. Anti-restenosis effects of modified vs control (the bare-metal stent and pCDNA3.1 void vector) stents were assessed by Western blot and scanning electron microscopy, as well as by morphological and inflammatory reaction analyses. Results Stent-delivered A20 gene was locally expressed in porcine carotids in association with significantly greater extent of re-endothelialization at day 14 and of neointimal hyperplasia inhibition at 3 months than stenting without A20 gene expression. Conclusion The A20-gene-eluting stent inhibits neointimal hyperplasia while promoting re-endothelialization and therefore constitutes a novel potential alternative to prevent restenosis while minimizing complications. PMID:27540277

  4. Conceptual Nuclear Design of a 20 MW Multipurpose Research Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Seo, Chul Gyo; Kim, Hak Sung; Park, Cheol [KAERI, Daejeon (Korea, Republic of); Nghiem, Huynh Ton; Vinh, Le Vinh; Dang, Vo Doan Hai [Dalat Nuclear Research Reactor, Hanoi (Viet Nam)

    2007-08-15

    A conceptual nuclear design of a 20 MW multi-purpose research reactor for Vietnam has been jointly done by the KAERI and the DNRI (VAEC). The AHR reference core in this report is a light water cooled and heavy water reflected open-tank-in-pool type multipurpose research reactor with 20 MW. The rod type fuel of a dispersed U{sub 3}Si{sub 2}-Al with a density of 4.0 gU/cc is used as a fuel. The core consists of fourteen 36-element assemblies, four 18-element assemblies and has three in-core irradiation sites. The reflector tank filled with heavy water surrounds the core and provides rooms for various irradiation holes. Major analyses have been done for the relevant nuclear design parameters such as the neutron flux and power distributions, reactivity coefficients, control rod worths, etc. For the analysis, the MCNP, MVP, and HELIOS codes were used by KAERI and DNRI (VAEC). The results by MCNP (KAERI) and MVP (DNRI) showed good agreement and can be summarized as follows. For a clean, unperturbed core condition such that the fuels are all fresh and there are no irradiation holes in the reflector region, the fast neutron flux (E{sub n}{>=}1.0 MeV) reaches 1.47x10{sup 14} n/cm{sup 2}s and the maximum thermal neutron flux (E{sub n}{<=}0.625 eV) reaches 4.43x10{sup 14} n/cm{sup 2}s in the core region. In the reflector region, the thermal neutron peak occurs about 28 cm from the core center and the maximum thermal neutron flux is estimated to be 4.09x10{sup 14} n/cm{sup 2}s. For the analysis of the equilibrium cycle core, the irradiation facilities in the reflector region were considered. The cycle length was estimated as 38 days with a refueling scheme of replacing three 36-element fuel assemblies or replacing two 36-element and one 18-element fuel assemblies. The excess reactivity at a BOC was 103.4 mk, and 24.6 mk at a minimum was reserved at an EOC. The assembly average discharge burnup was 54.6% of initial U-235 loading. For the proposed fuel management

  5. Multicultural Counseling Competencies Research: A 20-Year Content Analysis

    Science.gov (United States)

    Worthington, Roger L.; Soth-McNett, Angela M.; Moreno, Matthew V.

    2007-01-01

    The authors conducted a 20-year content analysis of the entire field of empirical research on the multicultural counseling competencies (D. W. Sue et al., 1982). They conducted an exhaustive search for empirical research articles using PsycINFO, as well as complete reviews of the past 20 years of several journals (e.g., Journal of Counseling…

  6. Uncertainty and Sensitivity Analyses Plan

    Energy Technology Data Exchange (ETDEWEB)

    Simpson, J.C.; Ramsdell, J.V. Jr.

    1993-04-01

    Hanford Environmental Dose Reconstruction (HEDR) Project staff are developing mathematical models to be used to estimate the radiation dose that individuals may have received as a result of emissions since 1944 from the US Department of Energy's (DOE) Hanford Site near Richland, Washington. An uncertainty and sensitivity analyses plan is essential to understand and interpret the predictions from these mathematical models. This is especially true in the case of the HEDR models where the values of many parameters are unknown. This plan gives a thorough documentation of the uncertainty and hierarchical sensitivity analysis methods recommended for use on all HEDR mathematical models. The documentation includes both technical definitions and examples. In addition, an extensive demonstration of the uncertainty and sensitivity analysis process is provided using actual results from the Hanford Environmental Dose Reconstruction Integrated Codes (HEDRIC). This demonstration shows how the approaches used in the recommended plan can be adapted for all dose predictions in the HEDR Project.
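
The uncertainty and sensitivity analysis process described can be sketched as a small Monte Carlo exercise: sample uncertain inputs, propagate them through a model, summarize the output distribution, and rank inputs by their influence. The toy dose model and input distributions below are illustrative assumptions, not the HEDR codes.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000

# Uncertain inputs (hypothetical distributions)
release = rng.lognormal(mean=0.0, sigma=0.5, size=n)   # source term
dispersion = rng.uniform(1e-6, 5e-6, size=n)           # transport factor
uptake = rng.uniform(0.1, 0.9, size=n)                 # pathway factor

# Propagate through the toy model: dose = release x dispersion x uptake
dose = release * dispersion * uptake

# Uncertainty analysis: summarize the output distribution
print(np.percentile(dose, [5, 50, 95]))

# Sensitivity analysis: rank inputs by correlation with the output
for name, values in (("release", release),
                     ("dispersion", dispersion),
                     ("uptake", uptake)):
    print(name, round(float(np.corrcoef(values, dose)[0, 1]), 2))
```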

  7. Uncertainty and Sensitivity Analyses Plan

    International Nuclear Information System (INIS)

    Hanford Environmental Dose Reconstruction (HEDR) Project staff are developing mathematical models to be used to estimate the radiation dose that individuals may have received as a result of emissions since 1944 from the US Department of Energy's (DOE) Hanford Site near Richland, Washington. An uncertainty and sensitivity analyses plan is essential to understand and interpret the predictions from these mathematical models. This is especially true in the case of the HEDR models where the values of many parameters are unknown. This plan gives a thorough documentation of the uncertainty and hierarchical sensitivity analysis methods recommended for use on all HEDR mathematical models. The documentation includes both technical definitions and examples. In addition, an extensive demonstration of the uncertainty and sensitivity analysis process is provided using actual results from the Hanford Environmental Dose Reconstruction Integrated Codes (HEDRIC). This demonstration shows how the approaches used in the recommended plan can be adapted for all dose predictions in the HEDR Project

  8. A 20 GHz circularly polarized, fan beam slot array antenna

    Science.gov (United States)

    Weikle, D. C.

    1982-03-01

    An EHF waveguide slot array was developed for possible use as a receive-only paging antenna for ground mobile terminals. The design, fabrication, and measured performance of this antenna are presented. The antenna generates a circularly polarized fan beam that is narrow in azimuth and broad in elevation. When mechanically rotated in azimuth, it can receive a 20 GHz satellite transmission independent of mobile terminal direction. Azimuth plane sidelobe levels, which are typically <-40 dB from the main lobe, provide for discrimination against ground and airborne jammers.
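
The quoted <-40 dB azimuth sidelobes are characteristic of a strongly tapered aperture. As an illustration (not the actual antenna design), a uniformly spaced linear slot array with an assumed 40 dB Dolph-Chebyshev amplitude taper gives an array factor with equiripple sidelobes near -40 dB:

```python
import numpy as np
from scipy.signal.windows import chebwin

N = 32                        # number of radiating slots (assumed)
weights = chebwin(N, at=40)   # 40 dB Dolph-Chebyshev amplitude taper

u = np.linspace(-1.0, 1.0, 4001)            # u = sin(azimuth angle)
phase = np.outer(np.arange(N), np.pi * u)   # half-wavelength element spacing
af = np.abs(weights @ np.exp(1j * phase))   # array factor magnitude
af_db = 20.0 * np.log10(af / af.max())

# Peak sidelobe level outside the main lobe region
sidelobes = af_db[np.abs(u) > 4.0 / N]
print(round(float(sidelobes.max()), 1))
```

The element count, spacing, and taper here are assumptions chosen to reproduce the quoted sidelobe class, not the published design parameters.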

  9. Performance of a 20-in. photoelectric lens image intensifier tube

    International Nuclear Information System (INIS)

    We have evaluated a 20-in. photoelectric lens image intensifier tube (PLI) to be mounted on the spherical focal surface of the Ashra light collectors, where Ashra stands for All-sky Survey High Resolution Air-shower Detector, an unconventional optical collector complex that images air showers produced by very high energy cosmic-ray particles in a 42°-diameter field of view with a resolution of a few arcminutes. The PLI, the world's largest image intensifier, has a very large effective photocathode area of 20-in. diameter and reduces an image size to less than 1-in. diameter using the electric lens effect. This enables us to use a solid-state imager to take focal surface images in the Ashra light collector. Thus, the PLI is a key technology for the Ashra experiment to realize a much lower pixel cost in comparison with other experiments using photomultiplier arrays at the focal surface. In this paper we present the design and performance of the 20-in. PLI. - Highlights: → We have evaluated a 20-in. photoelectric lens image intensifier tube (PLI). → The PLI is the world's largest image intensifier. → The PLI is mounted on the focal surface of the Ashra light collector. → Ashra stands for All-sky Survey High Resolution Air-shower Detector. → The PLI is the key to realize all-sky survey with a few arcmin resolution in Ashra.

  10. Antiferromagnetism in a 20% Ho-80% Tb alloy single crystal

    DEFF Research Database (Denmark)

    Lebech, Bente

    1968-01-01

    20% Ho-80% Tb exhibits two magnetic phases, similar to those of Tb. The spiral turn angle varies from 31.1° to 21.4°. A minimum effective spin for the occurrence of stable simple ferromagnetic structure at low temperatures is predicted.

  11. Periodic safety analyses

    International Nuclear Information System (INIS)

    The IAEA Safety Guide 50-SG-S8, 'Safety Aspects of Foundations of Nuclear Power Plants', indicates that the operator of an NPP should establish a programme for inspection of safe operation during construction, start-up and the service life of the plant, in order to obtain the data needed for estimating the lifetime of structures and components. At the same time, the programme should ensure that the safety margins remain adequate. Periodic safety analyses are an important part of this safety inspection programme. Periodic safety review is a method for testing the whole safety system, or a part of it, against precise criteria. Periodic safety analyses are not meant for qualification of the plant components; separate analyses are devoted to start-up, qualification of components and materials, and ageing. All these analyses are described in this presentation. The last chapter describes the experience obtained for the PWR-900 and PWR-1300 units from 1986-1989.

  12. Laser Beam Focus Analyser

    DEFF Research Database (Denmark)

    Nielsen, Peter Carøe; Hansen, Hans Nørgaard; Olsen, Flemming Ove;

    2007-01-01

    The quantitative and qualitative description of laser beam characteristics is important for process implementation and optimisation. In particular, a need for quantitative characterisation of beam diameter was identified when using fibre lasers for micro manufacturing. Here the beam diameter limits the obtainable features in direct laser machining as well as heat affected zones in welding processes. This paper describes the development of a measuring unit capable of analysing beam shape and diameter of lasers to be used in manufacturing processes. The analyser is based on the principle of a rotating mechanical wire being swept through the laser beam at varying Z-heights. The reflected signal is analysed and the resulting beam profile determined. The development comprised the design of a flexible fixture capable of providing both rotation and Z-axis movement, control software including data capture...

  13. Report sensory analyses veal

    OpenAIRE

    Veldman, M.; Schelvis-Smit, A.A.M.

    2005-01-01

    On behalf of a client of the Animal Sciences Group, different varieties of veal were analyzed by both instrumental and sensory analyses. The sensory evaluation was performed with a sensory analytical panel in the period from 13th of May to 31st of May, 2005. The three varieties of veal were: young bull, pink veal and white veal. The sensory descriptive analyses show that the three groups, young bull, pink veal and white veal, differ significantly in red colour for the raw meat as well as the baked...

  14. Wavelet Analyses and Applications

    Science.gov (United States)

    Bordeianu, Cristian C.; Landau, Rubin H.; Paez, Manuel J.

    2009-01-01

    It is shown how a modern extension of Fourier analysis known as wavelet analysis is applied to signals containing multiscale information. First, a continuous wavelet transform is used to analyse the spectrum of a nonstationary signal (one whose form changes in time). The spectral analysis of such a signal gives the strength of the signal in each…

  15. Probabilistic safety analyses (PSA)

    International Nuclear Information System (INIS)

    The guide shows how probabilistic safety analyses (PSA) are used in the design, construction and operation of light water reactor plants in order to ensure, for their part, that the safety of the plant is adequate in all plant operational states.

  16. Description of a 20 Kilohertz power distribution system

    Science.gov (United States)

    Hansen, I. G.

    1986-01-01

    A single-phase, 440 VRMS, 20 kHz power distribution system with a regulated sinusoidal waveform is discussed. A single-phase power system minimizes the wiring, sensing, and control complexities required in a multi-sourced, redundantly distributed power system. The single phase addresses only the distribution link; techniques for accommodating multiphase, lower-frequency inputs and outputs are described. While the 440 V operating potential was initially selected for aircraft operating below 50,000 ft, this potential also appears suitable for space power systems. This voltage choice recognizes a reasonable upper limit for semiconductor ratings, yet permits direct synthesis of 220 V, 3-phase power. The 20 kHz operating frequency was selected to be above the range of audibility and to minimize the weight of reactive components, yet allow the construction of single power stages of 25 to 30 kW. The regulated sinusoidal distribution system has several advantages. With a regulated voltage, most ac/dc conversions involve rather simple transformer-rectifier applications. A sinusoidal distribution system, when used in conjunction with zero-crossing switching, represents a minimal source of EMI. The present state of 20 kHz power technology includes computer control of voltage and/or frequency, low-inductance cable, current-limiting circuit protection, bi-directional power flow, and motor/generator operation using standard induction machines. A status update and a description of each of these items and their significance are presented.

  17. Performance of a 20-in. photoelectric lens image intensifier tube

    CERN Document Server

    Asaoka, Yoichi; 10.1016/j.nima.2011.05.036

    2011-01-01

    We have evaluated a 20-in. photoelectric lens image intensifier tube (PLI) to be mounted on the spherical focal surface of the Ashra light collectors, where Ashra stands for All-sky Survey High Resolution Air-shower Detector, an unconventional optical collector complex that images air showers produced by very high energy cosmic-ray particles in a 42$^\circ$-diameter field of view with a resolution of a few arcminutes. The PLI, the world's largest image intensifier, has a very large effective photocathode area of 20-in. diameter and reduces the image size to less than 1-in. diameter using the electric lens effect. This enables us to use a solid-state imager to take focal surface images in the Ashra light collector. Thus, the PLI is a key technology for the Ashra experiment to realize a much lower pixel cost in comparison with other experiments using photomultiplier arrays at the focal surface. In this paper we present the design and performance of the 20-in. PLI.

  18. Cost-Benefit Analyses of Transportation Investments

    DEFF Research Database (Denmark)

    Næss, Petter

    2006-01-01

    environment. In addition, main input data are based on transport modelling analyses resting on a misleading 'local ontology' among the model makers. The ontological misconceptions translate into erroneous epistemological assumptions about the possibility of precise predictions and the validity of willingness...

  19. Possible future HERA analyses

    CERN Document Server

    Geiser, Achim

    2015-01-01

    A variety of possible future analyses of HERA data in the context of the HERA data preservation programme is collected, motivated, and commented upon. The focus is placed on possible future analyses of the existing $ep$ collider data and their physics scope. Comparisons to the original scope of the HERA programme are made, and cross references to topics also covered by other participants of the workshop are given. This includes topics on QCD, proton structure, diffraction, jets, hadronic final states, heavy flavours, electroweak physics, and the application of related theory and phenomenology topics like NNLO QCD calculations, low-x related models, nonperturbative QCD aspects, and electroweak radiative corrections. Synergies with other collider programmes are also addressed. In summary, the range of physics topics which can still be uniquely covered using the existing data is very broad and of considerable physics interest, often matching the interest of results from colliders currently in operation. Due to well-established data and MC sets, calibrations, and analysis procedures, the manpower and expertise needed for a particular analysis is often very much smaller than that needed for an ongoing experiment.

  20. Statistisk analyse med SPSS

    OpenAIRE

    Linnerud, Kristin; Oklevik, Ove; Slettvold, Harald

    2004-01-01

    This note grew out of lectures and teaching for third-year students of economics and administration at Sogn og Fjordane University College. The note is particularly oriented towards the teaching of SPSS in the two courses "OR 685 Marknadsanalyse og merkevarestrategi" and "BD 616 Økonomistyring og analyse med programvare".

  1. Biomass feedstock analyses

    Energy Technology Data Exchange (ETDEWEB)

    Wilen, C.; Moilanen, A.; Kurkela, E. [VTT Energy, Espoo (Finland). Energy Production Technologies

    1996-12-31

    The overall objectives of the project 'Feasibility of electricity production from biomass by pressurized gasification systems' within the EC Research Programme JOULE II were to evaluate the potential of advanced power production systems based on biomass gasification and to study the technical and economic feasibility of these new processes with different types of biomass feedstocks. This report was prepared as part of this R and D project. The objectives of this task were to perform fuel analyses of potential woody and herbaceous biomasses with specific regard to the gasification properties of the selected feedstocks. The analyses of 15 Scandinavian and European biomass feedstocks included density, proximate and ultimate analyses, trace compounds, and ash composition and fusion behaviour in oxidizing and reducing atmospheres. The wood-derived fuels, such as whole-tree chips, forest residues, bark and, to some extent, willow, can be expected to have good gasification properties. The difficulties caused by ash fusion and sintering in straw combustion and gasification are generally known. The ash and alkali metal contents of the European biomasses harvested in Italy resembled those of the Nordic straws, and they are expected to behave to a great extent like straw in gasification. No direct relation between the ash fusion behaviour (determined according to the standard method) and, for instance, the alkali metal content was found in the laboratory determinations. A more profound characterisation of the fuels would require gasification experiments in a thermobalance and a PDU (Process Development Unit) rig. (orig.) (10 refs.)

  2. Possible future HERA analyses

    Energy Technology Data Exchange (ETDEWEB)

    Geiser, Achim

    2015-12-15

    A variety of possible future analyses of HERA data in the context of the HERA data preservation programme is collected, motivated, and commented upon. The focus is placed on possible future analyses of the existing ep collider data and their physics scope. Comparisons to the original scope of the HERA programme are made, and cross references to topics also covered by other participants of the workshop are given. This includes topics on QCD, proton structure, diffraction, jets, hadronic final states, heavy flavours, electroweak physics, and the application of related theory and phenomenology topics like NNLO QCD calculations, low-x related models, nonperturbative QCD aspects, and electroweak radiative corrections. Synergies with other collider programmes are also addressed. In summary, the range of physics topics which can still be uniquely covered using the existing data is very broad and of considerable physics interest, often matching the interest of results from colliders currently in operation. Due to well-established data and MC sets, calibrations, and analysis procedures, the manpower and expertise needed for a particular analysis is often very much smaller than that needed for an ongoing experiment. Since centrally funded manpower to carry out such analyses is not available any longer, this contribution not only targets experienced self-funded experimentalists, but also theorists and master-level students who might wish to carry out such an analysis.

  3. A 20-year simulated climatology of global dust aerosol deposition.

    Science.gov (United States)

    Zheng, Yu; Zhao, Tianliang; Che, Huizheng; Liu, Yu; Han, Yongxiang; Liu, Chong; Xiong, Jie; Liu, Jianhui; Zhou, Yike

    2016-07-01

    Based on a 20-year (1991-2010) simulation of dust aerosol deposition with the global climate model CAM5.1 (Community Atmosphere Model, version 5.1), the spatial and temporal variations of dust aerosol deposition were analyzed using climate statistical methods. The results indicated that the annual amount of global dust aerosol deposition was approximately 1161 ± 31 Mt, with a decreasing trend and an interannual variation range of 2.70% over 1991-2010. The 20-year average ratio of global dust dry to wet deposition was 1.12, with an interannual variation of 2.24%, showing that dry deposition of dust aerosol exceeded wet deposition. High dry deposition was centered over continental deserts and surrounding regions, while wet deposition was the dominant deposition process over the North Atlantic, North Pacific and northern Indian Ocean. Furthermore, both dry and wet deposition presented a zonal distribution. To examine the regional changes of dust aerosol deposition over land and sea areas, we chose the North Atlantic, Eurasia, the northern Indian Ocean, the North Pacific and Australia to analyze the interannual and seasonal variations of dust deposition and the dry-to-wet deposition ratio. The deposition amounts of each region showed interannual fluctuations, with the largest variation range, around 26.96%, in the northern Indian Ocean area, followed by the North Pacific (16.47%), Australia (9.76%), the North Atlantic (9.43%) and Eurasia (6.03%). The northern Indian Ocean also had the greatest amplitude of interannual variation in the dry-to-wet deposition ratio, at 22.41%, followed by the North Atlantic (9.69%), Australia (6.82%), the North Pacific (6.31%) and Eurasia (4.36%). Dust aerosol presented a seasonal cycle, with typically strong deposition in spring and summer and weak deposition in autumn and winter. The dust deposition over the northern Indian Ocean exhibited the greatest seasonal change range, at about 118.00%, while the North Atlantic showed the lowest seasonal
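
    The "interannual variation" statistics quoted above can be reproduced under one plausible reading of the term (the sample standard deviation of the annual series expressed as a percentage of its mean; the abstract does not define it, and the series below is invented for illustration, not CAM5.1 output):

```python
import statistics

def interannual_variation_range(annual_totals):
    # One plausible definition: sample standard deviation of the
    # annual series, expressed as a percentage of its mean.
    return 100.0 * statistics.stdev(annual_totals) / statistics.fmean(annual_totals)

def dry_to_wet_ratio(dry, wet):
    # Period-total dry deposition divided by period-total wet deposition.
    return sum(dry) / sum(wet)

# Invented 5-year series (Mt/year), roughly scaled to the abstract's values.
dry = [610, 620, 605, 615, 612]
wet = [548, 545, 552, 550, 551]
totals = [d + w for d, w in zip(dry, wet)]

print(round(interannual_variation_range(totals), 2))
print(round(dry_to_wet_ratio(dry, wet), 2))  # close to the reported 1.12
```

    With these made-up numbers the dry-to-wet ratio comes out near the paper's 20-year average of 1.12; the actual reported percentages would of course require the model's full annual series.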

  4. Analysis of K-net and Kik-net data: implications for ground motion prediction - acceleration time histories, response spectra and nonlinear site response; Analyse des donnees accelerometriques de K-net et Kik-net: implications pour la prediction du mouvement sismique - accelerogrammes et spectres de reponse - et la prise en compte des effets de site non-lineaire

    Energy Technology Data Exchange (ETDEWEB)

    Pousse, G

    2005-10-15

    This thesis aims to characterize ground motion during earthquakes. The work is based on two Japanese networks. It deals with databases of shallow events (depth less than 25 km) with magnitudes between 4.0 and 7.3. The analysis of K-net data allows a spectral ground motion prediction equation to be computed and the shape of the Eurocode 8 design spectra to be reviewed. We show the larger amplification at short periods for the Japanese data and bring to light the soil amplification that takes place at long periods. In addition, we develop a new empirical model for simulating synthetic, stochastic, nonstationary acceleration time histories. By specifying magnitude, distance and site effect, this model can produce many time histories that a seismic event is liable to produce at the place of interest. Furthermore, the study of near-field borehole records from Kik-net allows the validity domain of the predictive equations to be explored and explains what occurs when ground motion predictions are extrapolated. Finally, we show that nonlinearity reduces the dispersion of ground motion at the surface. (author)

  5. Digital differential analysers

    CERN Document Server

    Shilejko, A V; Higinbotham, W

    1964-01-01

    Digital Differential Analysers presents the principles, operations, design, and applications of digital differential analyzers, a machine with the ability to present initial quantities and the possibility of dividing them into separate functional units performing a number of basic mathematical operations. The book discusses the theoretical principles underlying the operation of digital differential analyzers, such as the use of the delta-modulation method and function-generator units. Digital integration methods and the classes of digital differential analyzer designs are also reviewed. The te

  6. Analysing Access Control Specifications

    DEFF Research Database (Denmark)

    Probst, Christian W.; Hansen, René Rydhof

    2009-01-01

    A common tool to answer this question, analysis of log files, faces the problem that the amount of logged data may be overwhelming. This problem gets even worse in the case of insider attacks, where the attacker's actions will usually be logged as permissible, standard actions, if they are logged at all. Recent events have revealed intimate knowledge of surveillance and control systems on the side of the attacker, making it often impossible to deduce the identity of an inside attacker from logged data. In this work we present an approach that analyses the access control configuration to identify the set...

  7. AMS analyses at ANSTO

    Energy Technology Data Exchange (ETDEWEB)

    Lawson, E.M. [Australian Nuclear Science and Technology Organisation, Lucas Heights, NSW (Australia). Physics Division

    1998-03-01

    The major use of ANTARES is Accelerator Mass Spectrometry (AMS) with {sup 14}C being the most commonly analysed radioisotope - presently about 35 % of the available beam time on ANTARES is used for {sup 14}C measurements. The accelerator measurements are supported by, and dependent on, a strong sample preparation section. The ANTARES AMS facility supports a wide range of investigations into fields such as global climate change, ice cores, oceanography, dendrochronology, anthropology, and classical and Australian archaeology. Described here are some examples of the ways in which AMS has been applied to support research into the archaeology, prehistory and culture of this continent's indigenous Aboriginal peoples. (author)

  8. AMS analyses at ANSTO

    International Nuclear Information System (INIS)

    The major use of ANTARES is Accelerator Mass Spectrometry (AMS) with 14C being the most commonly analysed radioisotope - presently about 35 % of the available beam time on ANTARES is used for 14C measurements. The accelerator measurements are supported by, and dependent on, a strong sample preparation section. The ANTARES AMS facility supports a wide range of investigations into fields such as global climate change, ice cores, oceanography, dendrochronology, anthropology, and classical and Australian archaeology. Described here are some examples of the ways in which AMS has been applied to support research into the archaeology, prehistory and culture of this continent's indigenous Aboriginal peoples. (author)

  9. Systemdynamisk analyse av vannkraftsystem

    OpenAIRE

    Rydning, Anja

    2007-01-01

    In this thesis, a dynamic analysis of the Fortun hydropower plant has been carried out. Three phenomena are given particular attention: surge oscillations between the surge shaft and the reservoir, pressure surges at the turbine caused by retardation pressure when the turbine discharge changes, and governing stability. The surge oscillations and pressure surges are calculated analytically from the continuity and momentum equations. Models of the Fortun plant have been built to calculate pressure surges and surge oscillations. One model...

  10. Characterization of bovine A20 gene: Expression mediated by NF-κB pathway in MDBK cells infected with bovine viral diarrhea virus-1.

    Science.gov (United States)

    Fredericksen, Fernanda; Villalba, Melina; Olavarría, Víctor H

    2016-05-01

    Cytokine production for immunological processes is tightly regulated at the transcriptional and posttranscriptional levels. The NF-κB signaling pathway maintains immune homeostasis in the cell through the participation of molecules such as A20 (TNFAIP3), which is a key regulatory factor in the immune response, hematopoietic differentiation, and immunomodulation. Although A20 has been identified in mammals, and despite recent efforts to identify A20 members in other higher vertebrates, relatively little is known about the composition of this regulator in other classes of vertebrates, particularly bovines. In this study, the genetic context of bovine A20 was explored and compared against homologous genes on the human, mouse, chicken, dog, and zebrafish chromosomes. Through in silico analysis, several regions of interest were found to be conserved between even phylogenetically distant species. Additionally, the protein sequence deduced from bovine A20 showed many domains conserved in humans and mice. Furthermore, all potential amino acid residues implicated in the active site of A20 were conserved. Finally, bovine A20 mRNA expression as mediated by bovine viral diarrhea virus and poly (I:C) was evaluated. These analyses evidenced a strong fold increase in A20 expression following virus exposure, a phenomenon blocked by a pharmacological NF-κB inhibitor (BAY 117085). Interestingly, A20 mRNA had a half-life of only 32 min, likely due to adenylate- and uridylate-rich elements in the 3'-untranslated region. Collectively, these data identify bovine A20 as a regulator of immune marker expression. This is also the first report of bovine viral diarrhea virus modulating bovine A20 activation through the NF-κB pathway. PMID:26809100

  11. Chapter 5. Safety analyses

    International Nuclear Information System (INIS)

    In 2000 the safety analyses of the Nuclear Regulatory Authority of the Slovak Republic (UJD) were focused on verification of the safety analysis report and the probabilistic safety assessment study for NPP V-1 Bohunice after the reconstruction, on reviewing the suggested changes of the Limits and Conditions for NPP V-2 Bohunice, and on the assessment of operational events. An important part of the work was also performed in solving scientific and technical tasks appointed within bilateral projects of co-operation between UJD and its international partner organisations, i.e. within the international PHARE programme as well as the 5th Framework Programme of the European Commission. Verification of the safety analysis part of the safety report for NPP V-1 Bohunice after the gradual reconstruction was focused on checking and passing judgement on the completeness of the considered initiating events, safety criteria and input data, the adequacy of the calculation models used, and the overall quality of the submitted documentation. The suitability of the methodology and calculation programmes used, the achieved level of their verification, and the correctness and interpretation of the results were assessed. The review showed that the checked safety analyses were performed in compliance with internationally accepted practice and the recommendations of UJD and the IAEA. The required level of safety of NPP V-1 Bohunice was confirmed. A document with the results and all findings of the review has been prepared, including details of the independent calculations performed, their results, and a comparison with the results given in the safety report. Special attention was paid to the review of the level 1 probabilistic safety assessment study for NPP Bohunice V-1 after its gradual reconstruction. The probabilistic safety analysis of the NPP in full-power operation was elaborated in the study, and the impact of the gradual reconstruction on risk reduction was quantified.

  12. Uncertainty Analyses and Strategy

    International Nuclear Information System (INIS)

    The DOE identified a variety of uncertainties, arising from different sources, during its assessment of the performance of a potential geologic repository at the Yucca Mountain site. In general, the number and detail of process models developed for the Yucca Mountain site, and the complex coupling among those models, make the direct incorporation of all uncertainties difficult. The DOE has addressed these issues in a number of ways using an approach to uncertainties that is focused on producing a defensible evaluation of the performance of a potential repository. The treatment of uncertainties oriented toward defensible assessments has led to analyses and models with so-called 'conservative' assumptions and parameter bounds, where conservative implies lower performance than might be demonstrated with a more realistic representation. The varying maturity of the analyses and models, and the uneven level of data availability, result in total system level analyses with a mix of realistic and conservative estimates (for both probabilistic representations and single values). That is, some inputs have realistically represented uncertainties, and others are conservatively estimated or bounded. However, this approach is consistent with the 'reasonable assurance' approach to compliance demonstration, which was called for in the U.S. Nuclear Regulatory Commission's (NRC) proposed 10 CFR Part 63 regulation (64 FR 8640 [DIRS 101680]). A risk analysis that includes conservatism in the inputs will result in conservative risk estimates. Therefore, the approach taken for the Total System Performance Assessment for the Site Recommendation (TSPA-SR) provides a reasonable representation of processes and conservatism for purposes of site recommendation. However, mixing unknown degrees of conservatism in models and parameter representations reduces the transparency of the analysis and makes the development of coherent and consistent probability statements about projected repository

  13. Automated Quality Assurance of Online NIR Analysers

    Directory of Open Access Journals (Sweden)

    Kari Aaljoki

    2005-01-01

    Modern NIR analysers produce valuable data for closed-loop process control and optimisation practically in real time. Thus it is highly important to keep them in the best possible shape. Quality assurance (QA) of NIR analysers is an interesting and complex issue, because it is not only the instrument and sample handling that have to be monitored; at the same time, the validity of the prediction models has to be assured. A system for fully automated QA of NIR analysers is described. The system takes care of collecting and organising spectra from various instruments, together with relevant laboratory and process management system (PMS) data. Validation of spectra is based on simple diagnostic values derived from the spectra. Predictions are validated against laboratory (LIMS) or other online analyser results (collected from the PMS). The system features automated alarming, reporting, trending, and charting functions for the major key variables, for easy visual inspection. Various textual and graphical reports are sent to maintenance people through email. The software was written in Borland Delphi 7 Enterprise. Oracle and PMS ODBC interfaces were used for accessing LIMS and PMS data using appropriate SQL queries. It will be shown that it is possible to take action even before the quality of predictions is seriously affected, thus maximising the overall uptime of the instrument.
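
    The prediction-validation step described above amounts to a deviation check of each online NIR result against the matching laboratory value. A minimal sketch (the function name and the warning/alarm limits are illustrative assumptions, not taken from the paper):

```python
def validate_prediction(nir_value, lims_value, warn_limit=0.5, alarm_limit=1.0):
    # Classify the deviation between an online NIR prediction and the
    # matching laboratory (LIMS) result. The limits are method-specific;
    # the defaults here are purely illustrative.
    deviation = abs(nir_value - lims_value)
    if deviation > alarm_limit:
        return "alarm"      # e.g. notify maintenance by email
    if deviation > warn_limit:
        return "warning"    # drifting; act before prediction quality suffers
    return "ok"

print(validate_prediction(87.4, 87.6))  # deviation 0.2 is within limits
```

    A "warning" classification is what allows action to be taken before the quality of predictions is seriously affected, as the abstract emphasises.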

  14. Fusion plasma experiments on TFTR: A 20 year retrospective

    International Nuclear Information System (INIS)

    The Tokamak Fusion Test Reactor (TFTR) (R. J. Hawryluk, to be published in Rev. Mod. Phys.) experiments on high-temperature plasmas, which culminated in the study of deuterium-tritium (D-T) plasmas containing significant populations of energetic alpha particles, spanned over two decades from conception to completion. During the design of TFTR, the key physics issues were magnetohydrodynamic (MHD) equilibrium and stability, plasma energy transport, impurity effects, and plasma reactivity. Energetic particle physics was given less attention during this phase, in part because of the necessity to address the issues that would create the conditions for the study of energetic particles, and also because of the lack of diagnostics to study the energetic particles in detail. The worldwide tokamak program, including the contributions from TFTR, made substantial progress during the past two decades in addressing the fundamental issues affecting the performance of high-temperature plasmas and the behavior of energetic particles. This progress has been the result of the construction of new facilities, which enabled the production of high-temperature, well-confined plasmas; the development of sophisticated diagnostic techniques to study both the background plasma and the resulting energetic fusion products; and computational techniques both to interpret the experimental results and to predict the outcome of experiments. copyright 1998 American Institute of Physics

  15. APROS nuclear plant analyser

    International Nuclear Information System (INIS)

    The paper describes the build-up of the Loviisa plant primary circuit model using graphical user interface and generic components. The secondary circuit model of Loviisa is constructed in the same manner. The entire power plant model thus obtained is used for the calculation of two example transients. These examples originate from the Loviisa 2 unit dynamical tests in 1980. The Modular Plant Analyser results are compared with the Loviisa Unit 2 measurement data. This comparison indicates good agreement with the data. The present work has been performed using the Alliant FX/40 minisupercomputer. With this computer the Loviisa model fulfills at present the real-time requirement with 0.5 second timestep. (orig./DG)

  16. EEG analyses with SOBI.

    Energy Technology Data Exchange (ETDEWEB)

    Glickman, Matthew R.; Tang, Akaysha (University of New Mexico, Albuquerque, NM)

    2009-02-01

    The motivating vision behind Sandia's MENTOR/PAL LDRD project has been that of systems which use real-time psychophysiological data to support and enhance human performance, both individually and of groups. Relevant and significant psychophysiological data being a necessary prerequisite to such systems, this LDRD has focused on identifying and refining such signals. The project has focused in particular on EEG (electroencephalogram) data as a promising candidate signal because it (potentially) provides a broad window on brain activity with relatively low cost and logistical constraints. We report here on two analyses performed on EEG data collected in this project using the SOBI (Second Order Blind Identification) algorithm to identify two independent sources of brain activity: one in the frontal lobe and one in the occipital. The first study looks at directional influences between the two components, while the second study looks at inferring gender based upon the frontal component.

  17. Micromechanical Analyses of Sturzstroms

    Science.gov (United States)

    Imre, Bernd; Laue, Jan; Springman, Sarah M.

    2010-05-01

    Sturzstroms are very fast landslides of very large initial volume. As type features they display extreme run-out, paired with intensive fragmentation of the involved blocks of rock within a collisional flow. The inherent danger to the growing communities in alpine valleys below future potential sites of sturzstroms must be examined, and predictions of endangered zones should feed into the planning processes in these areas. This calls for the ability to make Type A predictions, in the sense of Lambe (1973), which are made before an event. But Type A predictions are only possible if sufficient understanding of the mechanisms involved in a process is available. The motivation of the doctoral research project presented here is therefore to reveal the mechanics of sturzstroms in more detail, in order to contribute to the development of a Type A run-out prediction model. It is obvious that a sturzstrom represents a highly dynamic, collisional granular regime. Thus particles not only collide but will eventually crush each other. Erismann and Abele (2001) describe this process as dynamic disintegration, where kinetic energy is the main driver for fragmenting the rock mass. In this case, an approach combining the type features of long run-out and fragmentation within a single hypothesis is represented by the dynamic fragmentation-spreading model (Davies and McSaveney, 2009; McSaveney and Davies, 2009). Unfortunately, sturzstroms, and fragmentation within sturzstroms, cannot be observed directly in a real event, because of their long recurrence time and the obvious difficulties in placing measuring devices within such a rock flow. Therefore, rigorous modelling is required, in particular of the transition from static to dynamic behaviour, to achieve better knowledge of the mechanics of sturzstroms and to provide empirical evidence to confirm the dynamic fragmentation-spreading model.
Within this study, fragmentation and its effects on the mobility of sturzstroms

  18. The application analyses for primary spectrum pyrometer

    Institute of Scientific and Technical Information of China (English)

    FU; TaiRan

    2007-01-01

    In the applications of primary spectrum pyrometry, application issues such as the measurement range and the measurement partition were investigated through theoretical analyses, based on the dynamic range and minimum sensitivity of the sensor. For a developed primary spectrum pyrometer, theoretical predictions of the measurement range and the distribution of measurement partitions were obtained through numerical simulations. Measurement experiments with a high-temperature blackbody and a standard temperature lamp were then performed to further verify the theoretical analyses and numerical results. The research in this paper thus provides helpful support for the application of the primary spectrum pyrometer and other radiation pyrometers.
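The radiance-temperature relationship that any radiation pyrometer ultimately relies on is Planck's law. As a minimal illustration (not the primary spectrum method of the paper), the sketch below inverts Planck's law at a single wavelength to recover a blackbody temperature from its spectral radiance; the 650 nm band and 1800 K source are illustrative values, not instrument parameters from the paper.

```python
import math

# Physical constants (SI)
H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Spectral radiance of a blackbody, W / (m^2 * sr * m)."""
    a = 2.0 * H * C**2 / wavelength_m**5
    b = H * C / (wavelength_m * KB * temp_k)
    return a / math.expm1(b)

def brightness_temperature(wavelength_m, radiance):
    """Invert Planck's law at one wavelength to recover the temperature."""
    a = 2.0 * H * C**2 / wavelength_m**5
    return H * C / (wavelength_m * KB * math.log1p(a / radiance))

wl = 650e-9                        # 650 nm band (illustrative)
L = planck_radiance(wl, 1800.0)    # radiance of an 1800 K blackbody
T = brightness_temperature(wl, L)  # round trip recovers ~1800 K
```

A real pyrometer of course measures a detector signal rather than absolute radiance, which is why calibration against a blackbody and a standard lamp, as described above, is needed.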

  19. Severe Accident Recriticality Analyses (SARA)

    International Nuclear Information System (INIS)

    Recriticality in a BWR has been studied for a total loss of electric power accident scenario. In a BWR, the B4C control rods would melt and relocate from the core before the fuel during core uncovery and heat-up. If electric power returns during this time window, unborated water from the ECCS systems will start to reflood the partly control-rod-free core. Recriticality might then take place, for which the only mitigating mechanisms are the Doppler effect and void formation. In order to assess the impact of recriticality on reactor safety, including accident management measures, the following issues have been investigated in the SARA project: 1. the energy deposition in the fuel during the super-prompt power burst, 2. the quasi-steady-state reactor power following the initial power burst, and 3. the containment response to elevated quasi-steady-state reactor power. The approach was to use three computer codes and to further develop and adapt them for the task. The codes were SIMULATE-3K, APROS and RECRIT. Recriticality analyses were carried out for a number of selected reflooding transients for the Oskarshamn 3 plant in Sweden with SIMULATE-3K and for the Olkiluoto 1 plant in Finland with all three codes. The core state initial and boundary conditions prior to recriticality were studied with the severe accident codes SCDAP/RELAP5, MELCOR and MAAP4. The results of the analyses show that all three codes predict recriticality - both super-prompt power bursts and quasi-steady-state power generation - for the studied range of parameters, i.e. with core uncovery and heat-up to maximum core temperatures around 1800 K and water flow rates of 45 kg/s to 2000 kg/s injected into the downcomer. Since the recriticality takes place in a small fraction of the core, the power densities are high, which results in large energy deposition in the fuel during the power burst in some accident scenarios. 
The highest value, 418 cal/g, was obtained with SIMULATE-3K for an Oskarshamn 3 case with reflooding rate

  20. Severe Accident Recriticality Analyses (SARA)

    Energy Technology Data Exchange (ETDEWEB)

    Frid, W. [Swedish Nuclear Power Inspectorate, Stockholm (Sweden); Hoejerup, F. [Risoe National Lab. (Denmark); Lindholm, I.; Miettinen, J.; Puska, E.K. [VTT Energy, Helsinki (Finland); Nilsson, Lars [Studsvik Eco and Safety AB, Nykoeping (Sweden); Sjoevall, H. [Teoliisuuden Voima Oy (Finland)

    1999-11-01

    Recriticality in a BWR has been studied for a total loss of electric power accident scenario. In a BWR, the B{sub 4}C control rods would melt and relocate from the core before the fuel during core uncovery and heat-up. If electric power returns during this time window, unborated water from the ECCS systems will start to reflood the partly control-rod-free core. Recriticality might then take place, for which the only mitigating mechanisms are the Doppler effect and void formation. In order to assess the impact of recriticality on reactor safety, including accident management measures, the following issues have been investigated in the SARA project: 1. the energy deposition in the fuel during the super-prompt power burst, 2. the quasi-steady-state reactor power following the initial power burst, and 3. the containment response to elevated quasi-steady-state reactor power. The approach was to use three computer codes and to further develop and adapt them for the task. The codes were SIMULATE-3K, APROS and RECRIT. Recriticality analyses were carried out for a number of selected reflooding transients for the Oskarshamn 3 plant in Sweden with SIMULATE-3K and for the Olkiluoto 1 plant in Finland with all three codes. The core state initial and boundary conditions prior to recriticality were studied with the severe accident codes SCDAP/RELAP5, MELCOR and MAAP4. The results of the analyses show that all three codes predict recriticality - both super-prompt power bursts and quasi-steady-state power generation - for the studied range of parameters, i.e. with core uncovery and heat-up to maximum core temperatures around 1800 K and water flow rates of 45 kg/s to 2000 kg/s injected into the downcomer. Since the recriticality takes place in a small fraction of the core, the power densities are high, which results in large energy deposition in the fuel during the power burst in some accident scenarios. 
The highest value, 418 cal/g, was obtained with SIMULATE-3K for an Oskarshamn 3 case with reflooding

  1. 17 CFR 240.13a-20 - Plain English presentation of specified information.

    Science.gov (United States)

    2010-04-01

    ... 17 Commodity and Securities Exchanges 3 2010-04-01 2010-04-01 false Plain English presentation of specified information. 240.13a-20 Section 240.13a-20 Commodity and Securities Exchanges SECURITIES AND... Regulations Under the Securities Exchange Act of 1934 Other Reports § 240.13a-20 Plain English presentation...

  2. Climate prediction and predictability

    Science.gov (United States)

    Allen, Myles

    2010-05-01

    Climate prediction is generally accepted to be one of the grand challenges of the Geophysical Sciences. What is less widely acknowledged is that fundamental issues have yet to be resolved concerning the nature of the challenge, even after decades of research in this area. How do we verify or falsify a probabilistic forecast of a singular event such as anthropogenic warming over the 21st century? How do we determine the information content of a climate forecast? What does it mean for a modelling system to be "good enough" to forecast a particular variable? How will we know when models and forecasting systems are "good enough" to provide detailed forecasts of weather at specific locations or, for example, the risks associated with global geo-engineering schemes? This talk will provide an overview of these questions in the light of recent developments in multi-decade climate forecasting, drawing on concepts from information theory, machine learning and statistics. I will draw extensively but not exclusively from the experience of the climateprediction.net project, running multiple versions of climate models on personal computers.
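The verification question posed above can be made concrete with proper scoring rules. As a hedged illustration (not material from the talk), the sketch below computes the Brier score of a set of probabilistic forecasts of a binary event and compares it with a no-skill climatological forecast; all numbers are invented.

```python
import numpy as np

def brier_score(probs, outcomes):
    """Mean squared difference between forecast probabilities and binary
    outcomes (0/1). Lower is better; the Brier score is a proper scoring rule."""
    probs = np.asarray(probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    return float(np.mean((probs - outcomes) ** 2))

# Invented forecasts of a binary event (e.g. "warmer than average year")
forecasts = [0.9, 0.8, 0.7, 0.2, 0.1]
observed  = [1,   1,   0,   0,   0]

skilled = brier_score(forecasts, observed)
# A no-skill reference forecast always issues the base rate (here 0.4)
climatology = brier_score([0.4] * 5, observed)
```

For a genuinely singular event, however, there is only one outcome to score, which is precisely why such verification measures do not settle the falsifiability question the talk raises.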

  3. Predictable return distributions

    DEFF Research Database (Denmark)

    Pedersen, Thomas Quistgaard

    This paper provides detailed insights into predictability of the entire stock and bond return distribution through the use of quantile regression. This allows us to examine specific parts of the return distribution, such as the tails or the center, and for a sufficiently fine grid of quantiles we can... predictable as a function of economic state variables. The results are, however, very different for stocks and bonds. The state variables primarily predict only location shifts in the stock return distribution, while they also predict changes in higher-order moments in the bond return distribution. Out-of-sample analyses show that the relative accuracy of the state variables in predicting future returns varies across the distribution. A portfolio study shows that an investor with power utility can obtain economic gains by applying the empirical return distribution in portfolio decisions instead of imposing an...
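Quantile regression, the method used in the paper, rests on minimizing the pinball (check) loss. As a minimal sketch with synthetic Gaussian "returns" standing in for real data (nothing here is from the paper's dataset), minimizing the pinball loss over a constant predictor recovers the empirical quantile, here for the 5% left tail:

```python
import numpy as np

def pinball_loss(tau, y, q):
    """Average check loss of predicting quantile level tau with the value q."""
    e = y - q
    return float(np.mean(np.where(e >= 0, tau * e, (tau - 1) * e)))

rng = np.random.default_rng(42)
y = rng.normal(0.0, 1.0, 10_000)           # synthetic stand-in "returns"

tau = 0.05                                 # left tail, a VaR-like quantile level
grid = np.linspace(-4.0, 4.0, 2001)
losses = [pinball_loss(tau, y, q) for q in grid]
q_hat = grid[int(np.argmin(losses))]       # loss-minimizing constant predictor

q_emp = np.quantile(y, tau)                # should closely match q_hat
```

Replacing the constant q with a linear function of state variables turns this minimization into the quantile regression used in the paper.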

  4. A20 Deficiency in Lung Epithelial Cells Protects against Influenza A Virus Infection

    OpenAIRE

    Maelfait, Jonathan; Roose, Kenny; Vereecke, Lars; Mc Guire, Conor; Sze, Mozes; Schuijs, Martijn; Willart, Monique; Ibanez, Lorena; Hammad, Hamida; LAMBRECHT, Bart; Beyaert, Rudi; Saelens, Xavier; Loo, Geert

    2016-01-01

    A20 negatively regulates multiple inflammatory signalling pathways. We here addressed the role of A20 in club cells (also known as Clara cells) of the bronchial epithelium in their response to influenza A virus infection. Club cells provide a niche for influenza virus replication, but little is known about the functions of these cells in antiviral immunity. Using airway epithelial cell-specific A20 knockout (A20(AEC-KO)) mice, we show that A20 in club cells critically controls innate immune r...

  5. Thermal stability test and analysis of a 20-actuator bimorph deformable mirror

    Institute of Scientific and Technical Information of China (English)

    Ning Yu; Zhou Hong; Yu Hao; Rao Chang-Hui; Jiang Wen-Han

    2009-01-01

    One of the important characteristics of adaptive mirrors is the thermal stability of surface flatness. In this paper, the thermal stability of a 20-actuator bimorph deformable mirror between 13 °C and 25 °C is tested with a Shack-Hartmann wavefront sensor. Experimental results show that the surface P-V of the bimorph increases nearly linearly with ambient temperature, at a rate of 0.11 μm/°C. The major component of the surface displacement is defocus; astigmatism, coma and spherical aberration contribute very little. In addition, a finite element model is built to analyse the influence of the thickness, thermal expansion coefficient and Young's modulus of the materials on thermal stability. Calculated results show that the bimorph has the best thermal stability when the materials have the same thermal expansion coefficient, and that the surface instability is most severe when the thickness ratio of glass to PZT is 3 and the Young's modulus ratio is approximately 0.4.

  6. Predictive Data Mining in KPP

    Directory of Open Access Journals (Sweden)

    Dr. R.K. Chauhan

    2012-09-01

    In this paper, we present the Genetic Algorithm (GA) used for the prediction process in the Knowledge Penetration Process (KPP). The GA is implemented and its efficiency is analysed.

  7. Data for decay Heat Predictions

    International Nuclear Information System (INIS)

    These proceedings of a specialists' meeting on data for decay heat predictions cover fission product yields, delayed neutrons, and comparative evaluations of evaluated and experimental data for thermal and fast fission. Fourteen lectures were analysed.

  8. Prismatic analyser concept for neutron spectrometers

    Energy Technology Data Exchange (ETDEWEB)

    Birk, Jonas O.; Jacobsen, Johan; Hansen, Rasmus L.; Lefmann, Kim [Nano Science Center, Niels Bohr Institute, University of Copenhagen, DK-2100 Copenhagen Ø (Denmark); Markó, Márton; Niedermayer, Christof [Laboratory for Neutron Scattering and Imaging, Paul Scherrer Institute, CH-5232 Villigen PSI (Switzerland); Freeman, Paul G. [Laboratory for Quantum Magnetism, École Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne (Switzerland); Christensen, Niels B. [Institute of Physics, Technical University of Denmark, DK-2800-Kgs. Lyngby (Denmark); Månsson, Martin [Laboratory for Neutron Scattering and Imaging, Paul Scherrer Institute, CH-5232 Villigen PSI (Switzerland); Laboratory for Quantum Magnetism, École Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne (Switzerland); Rønnow, Henrik M. [Nano Science Center, Niels Bohr Institute, University of Copenhagen, DK-2100 Copenhagen Ø (Denmark); Laboratory for Quantum Magnetism, École Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne (Switzerland)

    2014-11-15

    Developments in modern neutron spectroscopy have led to typical sample sizes decreasing from a few cm to several mm in diameter. We demonstrate how small samples, together with the right choice of analyser and detector components, make distance collimation an important concept in crystal analyser spectrometers. We further show that this opens new possibilities in which neutrons with different energies are reflected by the same analyser but counted in different detectors, improving both energy resolution and total count rate compared with conventional spectrometers. The technique can readily be combined with advanced focussing geometries and with multiplexing instrument designs. We present a combination of simulations and data showing three different energies simultaneously reflected from one analyser. Experiments performed on a cold triple-axis instrument and on a prototype inverse-geometry time-of-flight spectrometer installed at PSI, Switzerland, show excellent agreement with the predictions. Typical improvements will be 2.0 times finer resolution and a factor of 1.9 in flux gain compared with a focussing Rowland geometry, or 3.3 times finer resolution and a factor of 2.4 in flux gain compared with a single flat analyser slab.
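The energy selected by a crystal analyser follows from Bragg's law, which is what lets slightly different take-off angles deliver different energies to different detectors. A minimal sketch, assuming a PG(002) analyser crystal (d = 3.355 Å, a common choice) and illustrative angles not taken from the paper:

```python
import math

D_PG002 = 3.355      # PG(002) lattice spacing in angstrom (common analyser crystal)

def neutron_energy_mev(d_spacing_aa, theta_deg, order=1):
    """Energy (meV) of neutrons Bragg-reflected at angle theta from a crystal:
    lambda = 2 d sin(theta) / n, and E[meV] ~= 81.81 / lambda[AA]^2."""
    lam = 2.0 * d_spacing_aa * math.sin(math.radians(theta_deg)) / order
    return 81.81 / lam**2

# Slightly different take-off angles reach different detectors, so a single
# analyser can deliver several well-separated energies (illustrative angles):
energies = [neutron_energy_mev(D_PG002, th) for th in (35.0, 37.5, 40.0)]
```

The monotonic angle-energy relation is the basis of the prismatic concept: one analyser slab disperses neutrons of nearby energies onto neighbouring detectors instead of discarding them.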

  9. Diagnostic system for a 20 TESLA single turn coil magnet prototype

    International Nuclear Information System (INIS)

    The Center for Electromechanics at The University of Texas at Austin (CEM-UT) has designed, fabricated, and is testing a prototype 20 T on-axis, single turn, toroidal field (TF) coil. The purpose of this Ignition Technology Demonstration (ITD) is to prove the feasibility of the single-turn coil powered by homopolar generators (HPGs). A scaling factor of 0.06 was selected based on the current capability of CEM-UT's 60 MJ HPG power supply. The Balcones HPG power supply consists of six 10 MJ HPGs, each rated at 1.5 MA at 100 V. When connected in a parallel configuration to the prototype TF coil, they provide a 9 MA, 100 ms, critically damped current pulse. The objective of the diagnostic system for the prototype 20 T TF coil is to determine displacements, temperatures, and magnetic fields at various locations in the coil. The measured values are then compared to predictions of the electromagnetic (EM) analysis to validate the computational results. Operating conditions for instrumentation in a 20 T, cryogenically-cooled magnet are rather severe. Electromechanical simulations show that the 0.06 scale IGNITEX TF prototype will experience a localized temperature rise from liquid-nitrogen temperature (-196 degrees C) to approximately 200 degrees C in less than 100 ms. Close to the inner leg of the coil, where stresses and temperatures are maximum, the instrumentation experiences a 30 T field rise in 26 ms

  10. A20 Deficiency in Lung Epithelial Cells Protects against Influenza A Virus Infection

    Science.gov (United States)

    Vereecke, Lars; Mc Guire, Conor; Sze, Mozes; Schuijs, Martijn J.; Willart, Monique; Itati Ibañez, Lorena; Hammad, Hamida; Lambrecht, Bart N.; Beyaert, Rudi; Saelens, Xavier; van Loo, Geert

    2016-01-01

    A20 negatively regulates multiple inflammatory signalling pathways. We here addressed the role of A20 in club cells (also known as Clara cells) of the bronchial epithelium in their response to influenza A virus infection. Club cells provide a niche for influenza virus replication, but little is known about the functions of these cells in antiviral immunity. Using airway epithelial cell-specific A20 knockout (A20AEC-KO) mice, we show that A20 in club cells critically controls innate immune responses upon TNF or double-stranded RNA stimulation. Surprisingly, A20AEC-KO mice are better protected against influenza A virus challenge than their wild type littermates. This phenotype is not due to decreased viral replication. Instead, host innate and adaptive immune responses and lung damage are reduced in A20AEC-KO mice. These attenuated responses correlate with a dampened cytotoxic T cell (CTL) response at later stages during infection, indicating that A20AEC-KO mice are better equipped to tolerate influenza A virus infection. Expression of the chemokine CCL2 (also named MCP-1) is particularly suppressed in the lungs of A20AEC-KO mice during later stages of infection. When A20AEC-KO mice were treated with recombinant CCL2, the protective effect was abrogated, demonstrating the crucial contribution of this chemokine to the protection of A20AEC-KO mice against influenza A virus infection. Taken together, we propose a mechanism of action by which A20 expression in club cells controls inflammation and antiviral CTL responses in response to influenza virus infection. PMID:26815999

  11. A20 prevents chronic liver inflammation and cancer by protecting hepatocytes from death.

    Science.gov (United States)

    Catrysse, L; Farhang Ghahremani, M; Vereecke, L; Youssef, S A; Mc Guire, C; Sze, M; Weber, A; Heikenwalder, M; de Bruin, A; Beyaert, R; van Loo, G

    2016-01-01

    An important regulator of inflammatory signalling is the ubiquitin-editing protein A20, which acts as a brake on nuclear factor-κB (NF-κB) activation, but also exerts important cytoprotective functions. A20 knockout mice are cachectic and die prematurely due to excessive multi-organ inflammation. To establish the importance of A20 in liver homeostasis and pathology, we developed a novel mouse line lacking A20 specifically in liver parenchymal cells. These mice spontaneously develop chronic liver inflammation but no fibrosis or hepatocellular carcinomas, illustrating an important role for A20 in normal liver tissue homeostasis. Hepatocyte-specific A20 knockout mice show sustained NF-κB-dependent gene expression in the liver upon tumor necrosis factor (TNF) or lipopolysaccharide injection, as well as hepatocyte apoptosis and lethality upon challenge with sublethal doses of TNF, demonstrating an essential role for A20 in the protection of mice against acute liver failure. Finally, chronic liver inflammation and enhanced hepatocyte apoptosis in hepatocyte-specific A20 knockout mice was associated with increased susceptibility to chemically or high fat-diet-induced hepatocellular carcinoma development. Together, these studies establish A20 as a crucial hepatoprotective factor. PMID:27253414

  12. EXPRESSION OF THE INFLAMMATORY REGULATOR A20 CORRELATES WITH LUNG FUNCTION IN PATIENTS WITH CYSTIC FIBROSIS

    OpenAIRE

    Kelly, Catriona; Williams, Mark; Elborn, Stuart; Ennis, Madeleine; Schock, Bettina

    2012-01-01

    Abstract: Background: A20 and TAX1BP1 interact to negatively regulate NF-κB-driven inflammation. A20 expression is altered in F508del/F508del patients. Here we explore the effect of CFTR and CFTR genotype on A20 and TAX1BP1 expression. The relationship with lung function is also assessed. Methods: Primary nasal epithelial cells (NECs) from CF patients (F508del/F508del, n=8; R117H/F508del, n=6) and controls (age-matched, n=8), and 16HBE14o- cells were investigated. A20 and TAX1BP1 gene expression was d...

  13. Musk fragrances, DEHP and heavy metals in a 20 years old sludge treatment reed bed system.

    Science.gov (United States)

    Matamoros, Víctor; Nguyen, Loc Xuan; Arias, Carlos A; Nielsen, Steen; Laugen, Maria Mølmer; Brix, Hans

    2012-08-01

    The Sludge Treatment Reed Bed (STRB) technology is a cost-efficient and environmentally friendly technology to dewater and mineralize surplus sludge from conventional wastewater treatment systems. Primary and secondary liquid sludge is loaded onto the surface of the bed over several years, where it is dewatered, mineralized and turned into a biosolid with a high dry matter content for use as an organic fertilizer on agricultural land. We analysed the concentrations of five organic micropollutants (galaxolide, tonalide, cashmeran, celestolide and DEHP) and six heavy metals (Pb, Ni, Cu, Cd, Zn and Cr) in the accumulated sludge in a 20-year old STRB in Denmark in order to assess the degradation and fate of these contaminants in a STRB and the relation to sludge composition. The results showed that the deposited sludge was dewatered to reach a dry matter content of 29%, and that up to a third of the organic content of the sludge was mineralized. The concentrations of heavy metals generally increased with depth in the vertical sludge profile due to the dewatering and mineralization of organic matter, but in all cases the concentrations were below the European Union legal limits for agricultural land disposal. The concentrations of fragrances and DEHP ranged from 10 to 9000 ng g(-1) dry mass. The attenuation of hydrophobic micropollutants from the top to the bottom layer of the reed bed ranged from 40 to 98%, except for tonalide which increased significantly with sludge depth, and consequently showed an unusual depth distribution of the galaxolide/tonalide ratio. This unexpected pattern may reflect changes imposed by a long storage time and/or different composition of the fresh sludge in the past. The lack of a significant decreasing DEHP concentration with sludge age might indicate that this compound is very persistent in STRBs. In conclusion the STRB was a feasible technology for sludge treatment before its land disposal. PMID:22608611

  14. Fouling analyses of heat exchangers for PSR

    International Nuclear Information System (INIS)

    Fouling of heat exchangers is caused by water-borne deposits, commonly known as foulants, including particulate matter from the air; migrated corrosion products; silt, clays, and sand suspended in water; organic contaminants; and boron-based deposits in plants. This fouling is known to interfere with normal flow characteristics and reduce the thermal efficiency of heat exchangers. This paper focuses on fouling analyses for six heat exchangers of two primary systems in two nuclear power plants: the regenerative heat exchangers of the chemical and volume control system and the component cooling water heat exchangers of the component cooling water system. To analyze the fouling of the heat exchangers, a fouling factor was introduced based on the ASME O&M Code and TEMA standards. Based on the results of the fouling analyses, the present thermal performances and fouling levels of the six heat exchangers were predicted
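The fouling factor used in such analyses is conventionally the difference between the fouled (in-service) and clean overall heat-transfer resistances. A minimal sketch with illustrative coefficients, not values from the plants studied:

```python
def fouling_factor(u_clean, u_fouled):
    """Fouling resistance R_f = 1/U_fouled - 1/U_clean, in m^2*K/W,
    where U is the overall heat-transfer coefficient in W/(m^2*K)."""
    return 1.0 / u_fouled - 1.0 / u_clean

# Illustrative numbers for a water-to-water exchanger (invented for this sketch):
u_clean = 2000.0    # W/(m^2*K), as-commissioned performance
u_fouled = 1600.0   # W/(m^2*K), measured in service
rf = fouling_factor(u_clean, u_fouled)   # 1/1600 - 1/2000 = 1.25e-4 m^2*K/W
```

Comparing the computed R_f against an allowance such as the TEMA design fouling resistance indicates whether the exchanger still meets its thermal duty, which is the kind of judgement the paper's analyses support.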

  15. Analyses and characterization of double shell tank

    International Nuclear Information System (INIS)

    Evaporator candidate feed from tank 241-AP-108 (108-AP) was sampled under prescribed protocol. Physical, inorganic, and radiochemical analyses were performed on tank 108-AP. Characterization of evaporator feed tank waste is needed primarily for an evaluation of its suitability to be safely processed through the evaporator. Such analyses should provide sufficient information regarding the waste composition to confidently determine whether constituent concentrations are within not only safe operating limits, but should also be relevant to functional limits for operation of the evaporator. Characterization of tank constituent concentrations should provide data which enable a prediction of where the types and amounts of environmentally hazardous waste are likely to occur in the evaporator product streams

  16. Analyses and characterization of double shell tank

    Energy Technology Data Exchange (ETDEWEB)

    1994-10-04

    Evaporator candidate feed from tank 241-AP-108 (108-AP) was sampled under prescribed protocol. Physical, inorganic, and radiochemical analyses were performed on tank 108-AP. Characterization of evaporator feed tank waste is needed primarily for an evaluation of its suitability to be safely processed through the evaporator. Such analyses should provide sufficient information regarding the waste composition to confidently determine whether constituent concentrations are within not only safe operating limits, but should also be relevant to functional limits for operation of the evaporator. Characterization of tank constituent concentrations should provide data which enable a prediction of where the types and amounts of environmentally hazardous waste are likely to occur in the evaporator product streams.

  17. Analyse

    DEFF Research Database (Denmark)

    Dubgaard, Alex

    2009-01-01

    Restrictions on agriculture are good business. By contrast, it does not pay to reduce greenhouse gas emissions.

  18. Direct transfer of A20 gene into pancreas protected mice from streptozotocin-induced diabetes

    Institute of Scientific and Technical Information of China (English)

    Lu-yang YU; Bo LIN; Zhen-lin ZHANG; Li-he GUO

    2004-01-01

    AIM: To investigate the efficiency of transfer of the A20 gene into the pancreas against STZ-induced diabetes. METHODS: A PVP-plasmid mixture was directly transferred into the pancreatic parenchyma 2 d before STZ injection. The uptake of plasmid pcDNA3-LacZ or pcDNA3-A20 was detected by PCR and the expression of LacZ was confirmed by histological analysis with X-gal. A20 expression in the pancreas of pcDNA3-A20 transgenic mice was measured by RT-PCR and Western blots. Urine amylase, NO generation, and histological examination were examined. RESULTS: Injection of the PVP-plasmid mixture directly into the pancreatic parenchyma increased the urine amylase concentration 16 h after the operation and it returned to nearly normal 36 h later. On d 33 LacZ expression could be found in spleen, duodenum, and islets. The development of diabetes was prevented by direct A20 gene transfer into the pancreas, and A20-mediated protection was correlated with suppression of NO production. The insulitis was ameliorated in A20-treated mice. CONCLUSION: Injection of the PVP-plasmid mixture directly into the pancreatic parenchyma led to target gene expression in islets. Direct transfer of the A20 gene into the pancreas protected mice from STZ-induced diabetes.

  19. A20 inhibits the motility of HCC cells induced by TNF-α.

    Science.gov (United States)

    Wang, Xianteng; Ma, Chao; Zong, Zhaoyun; Xiao, Ying; Li, Na; Guo, Chun; Zhang, Lining; Shi, Yongyu

    2016-03-22

    Metastasis of hepatocellular carcinoma (HCC) can be facilitated by TNF-α, a prototypical inflammatory cytokine in the HCC microenvironment. A20 is a negative regulator of the NF-κB signaling pathway. In the present study we asked whether A20 plays a role in HCC metastasis. Using immunochemistry, we found that A20 expression was downregulated in the invasive cells of microvascular invasions (MVI) compared with the noninvasive cells in 89 tissue samples from patients with HCC. Overexpression of A20 in HCC cell lines inhibited their motility induced by TNF-α. Furthermore, the overexpression of A20 inhibited epithelial-mesenchymal transition (EMT), FAK activation and RAC1 activity. By contrast, knockdown of A20 in one HCC cell line had the converse effect. In addition, the overexpression of A20 restrained the formation of MVI in HCC xenografts in nude mice treated with TNF-α. All the results suggested that A20 functions as a negative regulator of the TNF-α-induced motility of HCC cells. PMID:26909601

  20. Study of proton and 2 protons emission from light neutron deficient nuclei around A=20

    International Nuclear Information System (INIS)

    Proton and two-proton emission from light neutron-deficient nuclei around A=20 have been studied. A radioactive beam of 18Ne, 17F and 20Mg, produced at the Grand Accelerateur National d'Ions Lourds by fragmentation of a 24Mg primary beam at 95 MeV/A, bombarded a 9Be target to form unbound states. The proton(s) and nuclei from the decay were detected in the MUST array and the SPEG spectrometer, respectively. From the energy and angle measurements, the invariant mass of the decaying nucleus could be reconstructed. Double coincidence events between a proton and 17F, 16O, 15O, 14O and 18Ne were registered to obtain excitation energy spectra of 18Ne, 17F, 16F, 15F and 19Na. The measured masses are generally in agreement with previous experiments. In the case of 18Ne, the excitation energy and angular distributions agree well with the predictions of a break-up model calculation. From 17Ne-proton coincidences, a first experimental measurement of the ground state mass excess of 18Na has been obtained, yielding 24.19(0.15) MeV. Two-proton emission from 17Ne and 18Ne excited states and from the 19Mg ground state was studied through triple coincidences between two protons and 15O, 16O and 17Ne, respectively. In the first case, the proton-proton relative angle distribution in the center of mass has been compared with model calculations. Sequential emission from excited states of 17Ne above the proton emission threshold, through 16F, is dominant, but a 2He decay channel could not be excluded. No 2He emission from the 1.288 MeV 17Ne state, or from the 6.15 MeV 18Ne state, has been observed. Only one coincidence event between 17Ne and two protons was registered, the cross section of the one-neutron stripping reaction of 20Mg being much lower than predicted. (author)
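The invariant-mass reconstruction mentioned above combines the measured energies and momenta of the decay products. A minimal sketch in natural units (c = 1), with a made-up back-to-back two-body decay standing in for a real proton-plus-fragment event; the fragment mass and momentum are illustrative, not experimental values:

```python
import math

def invariant_mass(particles):
    """Invariant mass (MeV) of a set of particles given as
    (E, px, py, pz) four-vectors in MeV, natural units (c = 1)."""
    E = sum(p[0] for p in particles)
    px = sum(p[1] for p in particles)
    py = sum(p[2] for p in particles)
    pz = sum(p[3] for p in particles)
    return math.sqrt(E**2 - px**2 - py**2 - pz**2)

# Illustrative two-body decay in the parent rest frame: a proton and a heavy
# fragment fly apart back-to-back with momentum P (fragment mass invented).
M_P, M_F, P = 938.272, 15800.0, 50.0          # MeV
e_p = math.hypot(M_P, P)                      # sqrt(m^2 + p^2)
e_f = math.hypot(M_F, P)
m_inv = invariant_mass([(e_p, 0, 0, P), (e_f, 0, 0, -P)])

# Decay energy above the proton-emission threshold
e_x = m_inv - (M_P + M_F)
```

Plotting m_inv (or e_x) event by event yields the excitation energy spectra of the unbound parent nuclei described in the abstract.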

  1. STRATEGY PATTERNS PREDICTION MODEL

    OpenAIRE

    Aram Baruch Gonzalez Perez; Jorge Adolfo Ramirez Uresti

    2014-01-01

    Multi-agent systems are broadly known for being able to simulate real-life situations which require the interaction and cooperation of individuals. Opponent modeling can be used along with multi-agent systems to model complex situations such as competitions like soccer games. In this study, a model for predicting opponent moves based on their target is presented. The model is composed by an offline step (learning phase) and an online one (execution phase). The offline step gets and analyses p...

  2. Prediction, Regression and Critical Realism

    DEFF Research Database (Denmark)

    Næss, Petter

    2004-01-01

    This paper considers the possibility of prediction in land use planning, and the use of statistical research methods in analyses of relationships between urban form and travel behaviour. Influential writers within the tradition of critical realism reject the possibility of predicting social phenomena. This position is fundamentally problematic to public planning. Without at least some ability to predict the likely consequences of different proposals, the justification for public sector intervention into market mechanisms will be frail. Statistical methods like regression analyses are commonly... of prediction necessary and possible in spatial planning of urban development. Finally, the political implications of positions within theory of science rejecting the possibility of predictions about social phenomena are addressed.

  3. Two-Variance-Component Model Improves Genetic Prediction in Family Datasets.

    Science.gov (United States)

    Tucker, George; Loh, Po-Ru; MacLeod, Iona M; Hayes, Ben J; Goddard, Michael E; Berger, Bonnie; Price, Alkes L

    2015-11-01

    Genetic prediction based on either identity by state (IBS) sharing or pedigree information has been investigated extensively with best linear unbiased prediction (BLUP) methods. Such methods were pioneered in the plant- and animal-breeding literature and have since been applied to predict human traits, with the aim of eventual clinical utility. However, methods to combine IBS sharing and pedigree information for genetic prediction in humans have not been explored. We introduce a two-variance-component model for genetic prediction: one component for IBS sharing and one for approximate pedigree structure, both estimated with genetic markers. In simulations using real genotypes from the Candidate-gene Association Resource (CARe) and Framingham Heart Study (FHS) family cohorts, we demonstrate that the two-variance-component model achieves gains in prediction r² over standard BLUP at current sample sizes, and we project, based on simulations, that these gains will continue to hold at larger sample sizes. Accordingly, in analyses of four quantitative phenotypes from CARe and two quantitative phenotypes from FHS, the two-variance-component model significantly improves prediction r² in each case, with up to a 20% relative improvement. We also find that standard mixed-model association tests can produce inflated test statistics in datasets with related individuals, whereas the two-variance-component model corrects for inflation. PMID:26544803
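The prediction step in such variance-component models is best linear unbiased prediction from the combined covariance. Below is a minimal sketch of two-variance-component BLUP on simulated data: the GRM is built from random genotypes, a block-diagonal "family" matrix stands in for the pedigree component, and the variance components are assumed known rather than REML-estimated, so none of this reproduces the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, fam = 200, 500, 4              # individuals, markers, family size

# IBS component: genomic relationship matrix (GRM) from marker genotypes
X = rng.binomial(2, 0.5, size=(n, m)).astype(float)
Xc = (X - X.mean(0)) / (X.std(0) + 1e-12)
K_ibs = Xc @ Xc.T / m

# Pedigree-like component: block-diagonal kinship, 0.5 between "sibs"
block = np.full((fam, fam), 0.5) + 0.5 * np.eye(fam)
K_ped = np.kron(np.eye(n // fam), block)

# Simulate a phenotype carrying both genetic components plus noise
g = rng.multivariate_normal(np.zeros(n), 0.5 * K_ibs + 0.2 * K_ped)
y = g + rng.normal(0.0, np.sqrt(0.3), n)

train, test = np.arange(150), np.arange(150, 200)
s1, s2, se = 0.5, 0.2, 0.3           # variance components (assumed known here;
                                     # in practice estimated, e.g. by REML)
G = s1 * K_ibs + s2 * K_ped          # total genetic covariance
V = G + se * np.eye(n)               # phenotypic covariance

# BLUP of test-set genetic values: E[g_test | y_train] = G_ts V_tt^-1 y_train
y_hat = G[np.ix_(test, train)] @ np.linalg.solve(V[np.ix_(train, train)], y[train])
r = np.corrcoef(y_hat, y[test])[0, 1]   # prediction accuracy
```

Collapsing G to a single kinship matrix recovers standard one-component BLUP; the gain reported in the paper comes from letting the two components carry different variance weights.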

  4. Molecular Basis for the Unique Deubiquitinating Activity of the NF-κB Inhibitor A20

    Energy Technology Data Exchange (ETDEWEB)

    Lin, S.; Chung, J; Lamothe, B; Rajashankar, K; Lu, M; Lo, Y; Lam, A; Darnay, B; Wu, H

    2008-01-01

    Nuclear factor κB (NF-κB) activation in tumor necrosis factor, interleukin-1, and Toll-like receptor pathways requires Lys63-linked nondegradative polyubiquitination. A20 is a specific feedback inhibitor of NF-κB activation in these pathways that possesses dual ubiquitin-editing functions. While the N-terminal domain of A20 is a deubiquitinating enzyme (DUB) for Lys63-linked polyubiquitinated signaling mediators such as TRAF6 and RIP, its C-terminal domain is a ubiquitin ligase (E3) for Lys48-linked degradative polyubiquitination of the same substrates. To elucidate the molecular basis for the DUB activity of A20, we determined its crystal structure and performed a series of biochemical and cell biological studies. The structure reveals the potential catalytic mechanism of A20, which may be significantly different from papain-like cysteine proteases. Ubiquitin can be docked onto a conserved A20 surface; this interaction exhibits charge complementarity and no steric clash. Surprisingly, A20 does not have specificity for Lys63-linked polyubiquitin chains. Instead, it effectively removes Lys63-linked polyubiquitin chains from TRAF6 without disassembling the chains themselves. Our studies suggest that A20 does not act as a general DUB but has specificity for particular polyubiquitinated substrates to assure its fidelity in regulating NF-κB activation in the tumor necrosis factor, interleukin-1, and Toll-like receptor pathways.

  5. Conjoint-Analyse und Marktsegmentierung

    OpenAIRE

    Steiner, Winfried J.; Baumgartner, Bernhard

    2003-01-01

    Market segmentation, along with new-product planning and pricing, is one of the principal fields of application of conjoint analysis. Besides the traditional two-stage procedures, in which conjoint analysis and segmentation are carried out in two separate steps, newer developments such as clusterwise regression and mixture models are now available that permit simultaneous segmentation and preference estimation. The article gives an overview…

  6. Tracking log transport and deposition during a 20-year flood in a wide mountain river

    Science.gov (United States)

    Wyżga, Bartłomiej; Mikuś, Paweł; Zawiejska, Joanna; Ruiz-Villanueva, Virginia; Kaczka, Ryszard; Czech, Wiktoria

    2016-04-01

    Distance of large wood transport during floods and conditions for wood deposition in wide mountain rivers are still insufficiently recognised. Tracking logs tagged with radio transmitters was used to investigate differences in depositional conditions and the length of log displacement during a 20-year flood between channel reaches of different morphology in the Czarny Dunajec River, Polish Carpathians. During the rising limb of the flood, logs were placed into the river at the beginning of an incised reach, close to the beginning of a channelized reach, and 1 km upstream from the beginning of a wide, multi-thread reach. The incised, channelized, and multi-thread reaches retained 12.5%, 33%, and 94% of tagged logs introduced to these reaches, and all the logs retained in the multi-thread reach were deposited in its upstream half. Significant differences in the length of displacement existed between the logs delivered to the river at the three locations, with logs placed into the river at the beginning of the incised reach moving the longest distances and those delivered just upstream from the multi-thread reach the shortest. One-fourth of the logs were deposited in a low-flow channel or on channel margin, one-fifth on the floodplain and more than half on gravel bars. After the flood, river cross-sections with deposited logs and a set of cross-sections without wood deposits were surveyed to collect data for one-dimensional modelling of hydraulic conditions at the flood peak. The cross-sections with deposited logs were typified by significantly greater flow width and flow area, and significantly smaller mean flow depth, mean velocity, Froude number, mean bed shear stress and unit stream power. Principal component analysis of the hydraulic parameters in the analysed cross-sections grouped the two types of cross-sections in distinct clusters, indicating that multi-thread cross-sections differed in hydraulic parameters from all the other cross-sections. The experiment…

  7. Ecosystem Development after Mangrove Wetland Creation: Plant-Soil Change across a 20-year Chronosequence

    Science.gov (United States)

    Mangrove wetland restoration and creation efforts are increasingly proposed as mechanisms to compensate for mangrove wetland loss. However, ecosystem development and functional equivalence in restored and created mangrove wetlands are poorly understood. We compared a 20-yr chrono…

  8. Prediction Markets

    DEFF Research Database (Denmark)

    Horn, Christian Franz; Ivens, Bjørn Sven; Ohneberg, Michael;

    2014-01-01

    In recent years, Prediction Markets gained growing interest as a forecasting tool among researchers as well as practitioners, which resulted in an increasing number of publications. In order to track the latest development of research, comprising the extent and focus of research, this article provides a comprehensive review and classification of the literature related to the topic of Prediction Markets. Overall, 316 relevant articles, published in the timeframe from 2007 through 2013, were identified and assigned to a herein presented classification scheme, differentiating between descriptive works, articles of theoretical nature, application-oriented studies and articles dealing with the topic of law and policy. The analysis of the research results reveals that more than half of the literature pool deals with the application and actual function tests of Prediction Markets. The results are…

  9. Mitogenomic analyses from ancient DNA

    DEFF Research Database (Denmark)

    Paijmans, Johanna L.A.; Gilbert, M Thomas P; Hofreiter, Michael

    2013-01-01

    To date, at least 124 partially or fully assembled mitogenomes from more than 20 species have been obtained, and, given the rapid progress in sequencing technology, this number is likely to dramatically increase in the future. The increased information content offered by analysing full mitogenomes…

  10. Analysing student teachers’ lesson plans

    DEFF Research Database (Denmark)

    Carlsen, Louise Meier

    2015-01-01

    I investigate 17 mathematics student teachers' productions, in view of examining the synergy and interaction between their mathematical and didactical knowledge. The concrete data material consists of lesson plans elaborated for the final exam of a unit on "numbers, arithmetic and algebra". The anthropological theory of the didactic is used as a framework to analyse these components of practical and theoretical knowledge.

  11. Beskrivende analyse af mekaniske systemer

    DEFF Research Database (Denmark)

    Andreasen, Mogens Myrup; Hansen, Claus Thorp

    Descriptive analysis is the activity, where a given product is analysed for obtaining insight into different aspects, leading to an explicit description of each of these aspects. This textbook is worked out for course 72101 Produktanalyse (Analysis of products) given at DTU....

  12. The CAMAC logic state analyser

    CERN Document Server

    Centro, Sandro

    1981-01-01

    Summary form only given, as follows. Large electronic experiments using distributed processors for parallel readout and data reduction need to analyse the data acquisition components status and monitor dead time constants of each active readout module and processor. For the UA1 experiment, a microprocessor-based CAMAC logic status analyser (CLSA) has been developed in order to implement these functions autonomously. CLSA is a single unit CAMAC module, able to record, up to 256 times, the logic status of 32 TTL inputs gated by a common clock, internal or external, with a maximum frequency of 2 MHz. The data stored in the internal CLSA memory can be read directly via CAMAC function or preprocessed by CLSA 6800 microprocessor. The 6800 resident firmware (4Kbyte) expands the module features to include an interactive monitor, data recording control, data reduction and histogram accumulation with statistics parameter evaluation. The microprocessor memory and the resident firmware can be externally extended using st...

  13. Workload analyse of assembling process

    Science.gov (United States)

    Ghenghea, L. D.

    2015-11-01

    Workload is the most important indicator for managers responsible for industrial technological processes; whether these are automated, mechanized or simply manual, in each case machines or workers will be the focus of workload measurements. The paper deals with workload analyses of a largely manual assembly technology for a roller-bearing assembly process, executed in a big company with integrated bearing manufacturing processes. In these analyses the delay sampling technique has been used to identify and divide all bearing assemblers' activities, to get information about how the 480 minutes of daily work time are allocated among the activities. The study shows some ways to increase process productivity without supplementary investment, and also indicates that process automation could be the solution for gaining maximum productivity.
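    The delay-sampling (work-sampling) arithmetic used in the study can be illustrated with hypothetical numbers (the observation tallies below are invented for the example, not taken from the paper): the share of random observations in which a worker is seen performing an activity estimates the share of the 480-minute workday spent on it.

```python
# Hypothetical tallies of random-moment observations per activity
observations = {"assembly": 210, "material handling": 60, "waiting": 30, "inspection": 100}

WORKDAY_MIN = 480
total = sum(observations.values())  # 400 snapshots in this example

# Estimated minutes per activity: observed share of snapshots x workday length
minutes = {a: WORKDAY_MIN * n / total for a, n in observations.items()}
print(minutes["assembly"])  # 252.0
```

    The per-activity estimates always sum back to the full workday, which is a useful consistency check on the tallies.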

  14. Analyse du discours et archive

    OpenAIRE

    Maingueneau, Dominique

    2007-01-01

    Research conducted under the banner of "discourse analysis" is developing considerably throughout the world; by contrast, the "French school of discourse analysis" (AD) has been going through an identity crisis since the early 1980s. In this paper we explore the reasons for this crisis, and then clarify the concept of archive which, in our view, makes it possible to extend the path opened at the end of the 1960s. But this is only one of the possible paths, since, as…

  15. Learner as Statistical Units of Analyses

    Directory of Open Access Journals (Sweden)

    Vivek Venkatesh

    2011-01-01

    Educational psychologists have researched the generality and specificity of metacognitive monitoring in the context of college-level multiple-choice tests, but fairly little is known as to how learners monitor their performance on more complex academic tasks. Even less is known about how monitoring proficiencies such as discrimination and bias might be related to key self-regulatory processes associated with task understanding. This quantitative study explores the relationship between monitoring proficiencies and task understanding in 39 adult learners tackling ill-structured writing tasks for a graduate "theories of e-learning" course. Using the learner as unit of analysis, the generality of monitoring is confirmed through intra-measure correlation analyses, while facets of its specificity stand out due to the absence of inter-measure correlations. Unsurprisingly, learner-based correlational and repeated-measures analyses did not reveal how monitoring proficiencies and task understanding might be related. However, using the essay as unit of analysis, ordinal and multinomial regressions reveal how monitoring influences different levels of task understanding. Results are interpreted not only in light of novel procedures undertaken in calculating performance-prediction capability but also in the application of essay-based, intra-sample statistical analysis that reveals heretofore unseen relationships between academic self-regulatory constructs.

  16. Analyses of containment structures with corrosion damage

    Energy Technology Data Exchange (ETDEWEB)

    Cherry, J.L. [Sandia National Labs., Albuquerque, NM (United States)

    1997-01-01

    Corrosion damage that has been found in a number of nuclear power plant containment structures can degrade the pressure capacity of the vessel. This has prompted concerns regarding the capacity of corroded containments to withstand accident loadings. To address these concerns, finite element analyses have been performed for a typical PWR Ice Condenser containment structure. Using ABAQUS, the pressure capacity was calculated for a typical vessel with no corrosion damage. Multiple analyses were then performed with the location of the corrosion and the amount of corrosion varied in each analysis. Using a strain-based failure criterion, a "lower bound", "best estimate", and "upper bound" failure level was predicted for each case. These limits were established by: determining the amount of variability that exists in material properties of typical containments, estimating the amount of uncertainty associated with the level of modeling detail and modeling assumptions, and estimating the effect of corrosion on the material properties.

  17. Analyses of containment structures with corrosion damage

    International Nuclear Information System (INIS)

    Corrosion damage that has been found in a number of nuclear power plant containment structures can degrade the pressure capacity of the vessel. This has prompted concerns regarding the capacity of corroded containments to withstand accident loadings. To address these concerns, finite element analyses have been performed for a typical PWR Ice Condenser containment structure. Using ABAQUS, the pressure capacity was calculated for a typical vessel with no corrosion damage. Multiple analyses were then performed with the location of the corrosion and the amount of corrosion varied in each analysis. Using a strain-based failure criterion, a "lower bound", "best estimate", and "upper bound" failure level was predicted for each case. These limits were established by: determining the amount of variability that exists in material properties of typical containments, estimating the amount of uncertainty associated with the level of modeling detail and modeling assumptions, and estimating the effect of corrosion on the material properties

  18. Analysing Protocol Stacks for Services

    DEFF Research Database (Denmark)

    Gao, Han; Nielson, Flemming; Nielson, Hanne Riis

    2011-01-01

    We show an approach, CaPiTo, to model service-oriented applications using process algebras such that, on the one hand, we can achieve a certain level of abstraction without being overwhelmed by the underlying implementation details and, on the other hand, we respect the concrete industrial standards … financial case study taken from Chapter 0-3. Finally, we develop a static analysis to analyse the security properties as they emerge at the level of concrete industrial protocols.

  19. Tematisk analyse af amerikansk hiphop

    OpenAIRE

    Tranberg-Hansen, Katrine; Bøgh Larsen, Cecilie; Jeppsson, Louise Emilie; Lindberg Kirkegaard, Nanna; Funch Madsen, Signe; Bülow Bach, Maria

    2013-01-01

    This paper examines the possible development in the function of American hiphop. It focuses on specific themes like ghetto, freedom, rebellion, and racial discrimination in hiphop music. To investigate this possible development two text analysis methods are used: a pragmatic and a stylistic text analysis, and a historical method is used: a source criticism. A minimal amount of literature has been published on how hiphop culture arose. These studies, however, make it possible to analyse…

  20. STRATEGY PATTERNS PREDICTION MODEL

    Directory of Open Access Journals (Sweden)

    Aram Baruch Gonzalez Perez

    2014-01-01

    Multi-agent systems are broadly known for being able to simulate real-life situations which require the interaction and cooperation of individuals. Opponent modeling can be used along with multi-agent systems to model complex situations such as competitions like soccer games. In this study, a model for predicting opponent moves based on their target is presented. The model is composed of an offline step (learning phase) and an online one (execution phase). The offline step gathers and analyses previous experiences, while the online step uses the data generated by the offline analysis to predict opponent moves. The model is illustrated by an experiment with the RoboCup 2D Soccer Simulator; the proposed model was tested using 22 games to create the knowledge base, achieving an accuracy rate of over 80%.
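    A minimal sketch of the offline/online split described above, using a simple frequency-count opponent model (the situation and move labels are invented for illustration; the actual RoboCup model is more elaborate): the offline phase tallies which move the opponent made in each observed situation, and the online phase predicts the most frequent move for the current situation.

```python
from collections import Counter, defaultdict

# Offline (learning) phase: tally opponent moves per observed situation
def learn(history):
    model = defaultdict(Counter)
    for situation, move in history:
        model[situation][move] += 1
    return model

# Online (execution) phase: predict the most frequent move for a situation
def predict(model, situation, default="hold"):
    if situation not in model:
        return default
    return model[situation].most_common(1)[0][0]

history = [("near_goal", "shoot"), ("near_goal", "shoot"),
           ("near_goal", "pass"), ("midfield", "dribble")]
model = learn(history)
print(predict(model, "near_goal"))  # shoot
print(predict(model, "corner"))     # hold (unseen situation falls back to the default)
```

    An accuracy figure like the paper's 80% would come from replaying held-out games through `predict` and counting matches against the opponent's actual moves.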

  1. Comparison of elastic and inelastic analyses

    International Nuclear Information System (INIS)

    The use of inelastic analysis methods instead of the traditional elastic analysis methods in the design of radioactive material (RAM) transport packagings leads to a better understanding of the response of the package to mechanical loadings. Thus, better assessment of the containment, thermal protection, and shielding integrity of the package after a structural accident event can be made. A more accurate prediction of the package response can lead to enhanced safety and also allow for a more efficient use of materials, possibly leading to a package with higher capacity or lower weight. This paper discusses the advantages and disadvantages of using inelastic analysis in the design of RAM shipping packages. The use of inelastic analysis presents several problems to the package designer. When using inelastic analysis the entire nonlinear response of the material must be known, including the effects of temperature changes and strain rate. Another problem is that there is currently no acceptance criterion for this type of analysis that is approved by regulatory agencies. Inelastic analysis acceptance criteria based on failure stress, failure strain, or plastic energy density could be developed. For both elastic and inelastic analyses it is also important to include other sources of stress in the analyses, such as fabrication stresses, thermal stresses, stresses from bolt preloading, and contact stresses at material interfaces. Offsetting these added difficulties is the improved knowledge of the package behavior. This allows for incorporation of a more uniform margin of safety, which can result in weight savings and a higher level of confidence in the post-accident configuration of the package. In this paper, comparisons between elastic and inelastic analyses are made for a simple ring structure and for a package to transport a large quantity of RAM by rail (rail cask) with lead gamma shielding, to illustrate the differences in the two analysis techniques

  2. Mitogenomic analyses of eutherian relationships.

    Science.gov (United States)

    Arnason, U; Janke, A

    2002-01-01

    Reasonably correct phylogenies are fundamental to the testing of evolutionary hypotheses. Here, we present phylogenetic findings based on analyses of 67 complete mammalian mitochondrial (mt) genomes. The analyses, irrespective of whether they were performed at the amino acid (aa) level or on nucleotides (nt) of first and second codon positions, placed Erinaceomorpha (hedgehogs and their kin) as the sister group of remaining eutherians. Thus, the analyses separated Erinaceomorpha from other traditional lipotyphlans (e.g., tenrecs, moles, and shrews), making traditional Lipotyphla polyphyletic. Both the aa and nt data sets identified the two order-rich eutherian clades, the Cetferungulata (comprising Pholidota, Carnivora, Perissodactyla, Artiodactyla, and Cetacea) and the African clade (Tenrecomorpha, Macroscelidea, Tubulidentata, Hyracoidea, Proboscidea, and Sirenia). The study corroborated recent findings that have identified a sister-group relationship between Anthropoidea and Dermoptera (flying lemurs), thereby making our own order, Primates, a paraphyletic assembly. Molecular estimates using paleontologically well-established calibration points, placed the origin of most eutherian orders in Cretaceous times, 70-100 million years before present (MYBP). The same estimates place all primate divergences much earlier than traditionally believed. For example, the divergence between Homo and Pan is estimated to have taken place approximately 10 MYBP, a dating consistent with recent findings in primate paleontology. PMID:12438776

  3. Biological aerosol warner and analyser

    Science.gov (United States)

    Schlemmer, Harry; Kürbitz, Gunther; Miethe, Peter; Spieweck, Michael

    2006-05-01

    The development of an integrated sensor device BiSAM (Biological Sampling and Analysing Module) is presented, which is designed for rapid detection of aerosol or dust particles potentially loaded with biological warfare agents. All functional steps, from aerosol collection via immuno-analysis to display of results, are fully automated. The core component of the sensor device is an ultra-sensitive rapid analyser PBA (Portable Benchtop Analyser) based on a three-dimensional immuno-filtration column of large internal area, Poly HRP marker technology and kinetic optical detection. High sensitivity despite the short measuring time, high chemical stability of the micro column and robustness against interferents make the PBA an ideal tool for fielded sensor devices. It is especially favourable to combine the PBA with a bio collector because virtually no sample preparation is necessary. Overall, the BiSAM device is capable of detecting and identifying living micro-organisms (bacteria, spores, viruses) as well as toxins in a measuring cycle of typically half an hour's duration. In each batch up to 12 different tests can be run in parallel, together with positive and negative controls to keep the false alarm rate low.

  4. Reactive mesothelial hyperplasia associated with chronic peritonitis in a 20-year-old Quarter horse.

    Science.gov (United States)

    Hoon-Hanks, Laura L; Rout, Emily D; Vap, Linda M; Aboellail, Tawfik A; Hassel, Diana M; Nout-Lomas, Yvette S

    2016-05-01

    A 20-year-old gelding was diagnosed with peritonitis and severe reactive mesothelial hyperplasia. Exploratory laparotomy findings were suggestive of a neoplastic etiology; however, additional diagnostics ruled this out and the horse made a full recovery. This report demonstrates the difficulty and value of differentiating between reactive and neoplastic mesothelial processes. PMID:27152035

  5. A20 plays a critical role in the immunoregulatory function of mesenchymal stem cells.

    Science.gov (United States)

    Dang, Rui-Jie; Yang, Yan-Mei; Zhang, Lei; Cui, Dian-Chao; Hong, Bangxing; Li, Ping; Lin, Qiuxia; Wang, Yan; Wang, Qi-Yu; Xiao, Fengjun; Mao, Ning; Wang, Changyong; Jiang, Xiao-Xia; Wen, Ning

    2016-08-01

    Mesenchymal stem cells (MSCs) possess an immunoregulatory capacity and are a therapeutic target for many inflammation-related diseases. However, the detailed mechanisms of MSC-mediated immunosuppression remain unclear. In this study, we provide new information to partly explain the molecular mechanisms of immunoregulation by MSCs. Specifically, we found that A20 expression was induced in MSCs by inflammatory cytokines. Knockdown of A20 in MSCs resulted in increased proliferation and reduced adipogenesis, and partly reversed the suppressive effect of MSCs on T cell proliferation in vitro and inhibited tumour growth in vivo. Mechanistic studies indicated that knockdown of A20 in MSCs inhibited activation of the p38 mitogen-activated protein kinase (MAPK) pathway, which potently promoted the production of tumour necrosis factor (TNF)-α and inhibited the production of interleukin (IL)-10. Collectively, these data reveal a crucial role of A20 in regulating the immunomodulatory activities of MSCs by controlling the expression of TNF-α and IL-10 in an inflammatory environment. These findings provide novel insights into the pathogenesis of various inflammatory-associated diseases, and are a new reference for the future development of treatments for such afflictions. PMID:27028905

  6. 17 CFR 240.14a-20 - Shareholder approval of executive compensation of TARP recipients.

    Science.gov (United States)

    2010-04-01

    § 240.14a-20 Shareholder approval of executive compensation of TARP recipients. If a solicitation is… shareholder vote to approve the compensation of executives, as disclosed pursuant to Item 402 of Regulation…

  7. Analyses of a Virtual World

    CERN Document Server

    Holovatch, Yurij; Szell, Michael; Thurner, Stefan

    2016-01-01

    We present an overview of a series of results obtained from the analysis of human behavior in a virtual environment. We focus on the massive multiplayer online game (MMOG) Pardus which has a worldwide participant base of more than 400,000 registered players. We provide evidence for striking statistical similarities between social structures and human-action dynamics in the real and virtual worlds. In this sense MMOGs provide an extraordinary way for accurate and falsifiable studies of social phenomena. We further discuss possibilities to apply methods and concepts developed in the course of these studies to analyse oral and written narratives.

  8. Chapter No.4. Safety analyses

    International Nuclear Information System (INIS)

    In 2001, activity in the field of safety analyses was focused on verification of the safety analysis reports for NPP V-2 Bohunice and NPP Mochovce concerning the new profiled fuel, and on the probabilistic safety assessment study for NPP Mochovce. Calculational safety analyses were performed and expert reviews for internal UJD needs were elaborated. An important part of the work was also performed in solving scientific and technical tasks within bilateral projects of co-operation between UJD and its international partner organisations, as well as within international projects ordered and financed by the European Commission. All these activities served as independent support for UJD in its deterministic and probabilistic safety assessment of nuclear installations. Special attention was paid to a review of the level 1 probabilistic safety assessment study for NPP Mochovce. The study elaborated the probabilistic safety analysis of the NPP at full power operation and quantified the contribution of technical and operational improvements to risk reduction. The core damage frequency of the reactor was calculated, and the dominant initiating events and accident sequences with the major contributions to risk were determined. The aim of the review was to assess the acceptability of the sources of input information, assumptions, models, data, analyses and obtained results, so that the probabilistic model could give a realistic picture of the NPP. The review of the study was performed by UJD in co-operation with the IAEA (IPSART mission) as well as with other external organisations not involved in the elaboration of the reviewed document and the probabilistic model of the NPP. The review was made in accordance with IAEA guidelines and methodical documents of UJD and US NRC.
    In the field of calculational safety analyses, UJD activity was focused on the analysis of an operational event and analyses of selected accident scenarios

  9. Scaling effects predicted by WCOBRA/TRAC for UPI plant best estimate LOCA

    International Nuclear Information System (INIS)

    The WCOBRA/TRAC-MOD7A, Rev. 1 code is currently licensed for best estimate LOCA analyses of 3 and 4 loop PWRs. As part of a licensing effort to extend the code application to plants equipped with upper plenum injection (UPI), scaling effects predicted by the code are investigated. The scaling effects of UPI tests were obtained through data analyses and summarized in Damerell and Simons (1993). The scaling subjects are: breakthrough flow area, downflow rate into the core, hot leg water carryover, and liquid level in upper plenum. The test facilities that supplied the data include UPTF and CCTF. In this report, the scaling trend is obtained from WCOBRA/TRAC analyses of CCTF Run 72 and Run 76 (scaling factor 0.091), UPTF Tests 20A, 20B, and 20C (scaling factor 2.1), and a typical UPI plant (scaling factor 1.0). The predicted scaling trend is found to agree well with the test data. (orig.)

  10. Fracturing and brittleness index analyses of shales

    Science.gov (United States)

    Barnhoorn, Auke; Primarini, Mutia; Houben, Maartje

    2016-04-01

    The formation of a fracture network in rocks has a crucial control on the flow behaviour of fluids. In addition, an existing network of fractures influences the propagation of new fractures during e.g. hydraulic fracturing or during a seismic event. Understanding the type and characteristics of the fracture network that will be formed during e.g. hydraulic fracturing is thus crucial to better predict the outcome of a hydraulic fracturing job. For this, knowledge of the rock properties is crucial. The brittleness index is often used as a rock property to predict the fracturing behaviour of a rock, e.g. for hydraulic fracturing of shales. Various definitions of the brittleness index (BI1, BI2 and BI3) exist, based on mineralogy, elastic constants and stress-strain behaviour (Jin et al., 2014; Jarvie et al., 2007; Holt et al., 2011). A maximum brittleness index of 1 predicts very good and efficient fracturing behaviour, while a minimum brittleness index of 0 predicts much more ductile shale behaviour. Here, we have performed systematic petrophysical, acoustic and geomechanical analyses on a set of shale samples from Whitby (UK) and determined the three brittleness indices for each sample. We show that the three brittleness indices are very different for the same sample, and as such it can be concluded that the brittleness index is not a good predictor of the fracturing behaviour of shales. The brittleness indices based on the acoustic data (BI1) all lie around values of 0.5, the indices based on the stress-strain data (BI2) average around 0.75, whereas the mineralogy-based indices (BI3) predict values below 0.2. This shows that different estimates of the brittleness index can lead to different decisions for hydraulic fracturing. If we relied on the mineralogy (BI3), the Whitby mudstone is not a suitable…
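    For illustration only (the abstract gives no formulas; the sketch below follows the widely used mineralogy-based definition associated with Jarvie et al. (2007), one of the works the authors cite), a BI3-style index can be computed as the quartz fraction of the quartz + carbonate + clay total:

```python
def brittleness_mineralogy(quartz, carbonate, clay):
    """BI3-style index: quartz / (quartz + carbonate + clay), in [0, 1]."""
    total = quartz + carbonate + clay
    if total == 0:
        raise ValueError("mineral fractions sum to zero")
    return quartz / total

# A clay-rich composition (hypothetical numbers) gives a low index,
# consistent with the BI3 < 0.2 values reported above for the Whitby samples.
print(brittleness_mineralogy(quartz=15, carbonate=10, clay=75))  # 0.15
```

    The elastic (BI1) and stress-strain (BI2) definitions use entirely different inputs, which is why the three indices can disagree so strongly for the same sample.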

  11. Predicting supersymmetry

    Energy Technology Data Exchange (ETDEWEB)

    Heinemeyer, S. [Instituto de Fisica de Cantabria (CSIC-UC), Santander (Spain); Weiglein, G. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2010-07-15

    We review the result of SUSY parameter fits based on frequentist analyses of experimental constraints from electroweak precision data, (g-2)_μ, B physics and cosmological data. We investigate the parameters of the constrained MSSM (CMSSM) with universal soft supersymmetry-breaking mass parameters, and a model with common non-universal Higgs mass parameters in the superpotential (NUHM1). Shown are the results for the SUSY and Higgs spectrum of the models. Many sparticle masses are highly correlated in both the CMSSM and NUHM1, and parts of the regions preferred at the 68% C.L. are accessible to early LHC running. The best-fit points could be tested even with 1 fb⁻¹ at √s = 7 TeV. (orig.)

  12. Analysing ESP Texts, but How?

    Directory of Open Access Journals (Sweden)

    Borza Natalia

    2015-03-01

    English as a second language (ESL) teachers instructing general English and English for specific purposes (ESP) in bilingual secondary schools face various challenges when it comes to choosing the main linguistic foci of language preparatory courses enabling non-native students to study academic subjects in English. ESL teachers intending to analyse English-language subject textbooks written for secondary school students, with the aim of gaining information about what bilingual secondary school students need to know in terms of language to process academic textbooks, cannot avoid dealing with a dilemma. It needs to be decided which way is most appropriate to analyse the texts in question. Handbooks of English applied linguistics are not immensely helpful with regard to this problem, as they tend not to give recommendations as to which major text-analytical approaches are advisable to follow in a pre-college setting. The present theoretical research aims to address this lacuna. Respectively, the purpose of this pedagogically motivated theoretical paper is to investigate two major approaches of ESP text analysis, register analysis and genre analysis, in order to find the more suitable one for exploring the language use of secondary school subject texts from the point of view of an English as a second language teacher. Comparing and contrasting the merits and limitations of the two approaches allows for a better understanding of the nature of the two different perspectives of text analysis. The study examines the goals, the scope of analysis, and the achievements of the register perspective and those of the genre approach alike. The paper also investigates and reviews in detail the starkly different methods of ESP text analysis applied by the two perspectives. Discovering text analysis from a theoretical and methodological angle supports a practical aspect of English teaching, namely making an informed choice when setting out to analyse…

  13. HGCal Simulation Analyses for CMS

    CERN Document Server

    Bruno, Sarah Marie

    2015-01-01

    This summer, I approached the topic of fast-timing detection of photons from Higgs decays via simulation analyses, working under the supervision of Dr. Adolf Bornheim of the California Institute of Technology. My specific project focused on simulating the high granularity calorimeter for the Compact Muon Solenoid (CMS) experiment. CMS detects particles using calorimeters. The Electromagnetic Calorimeter (ECal) is arranged cylindrically to form a barrel section and two “endcaps.” Previously, both the barrel and endcap have employed lead tungstate crystal detectors, known as the “shashlik” design. The crystal detectors, however, rapidly degrade from exposure to radiation. This effect is most pronounced in the endcaps. To avoid the high expense of frequently replacing degraded detectors, it was recently decided to eliminate the endcap crystals in favor of an arrangement of silicon detectors known as the “High Granularity Calorimeter” (HGCal), while leaving the barrel detector technology unchanged. T...

  14. Computational Analyses of Arabic Morphology

    CERN Document Server

    Kiraz, G A

    1994-01-01

    This paper demonstrates how a (multi-tape) two-level formalism can be used to write two-level grammars for Arabic non-linear morphology using a high level, but computationally tractable, notation. Three illustrative grammars are provided based on CV-, moraic- and affixational analyses. These are complemented by a proposal for handling the hitherto computationally untreated problem of the broken plural. It will be shown that the best grammars for describing Arabic non-linear morphology are moraic in the case of templatic stems, and affixational in the case of a-templatic stems. The paper will demonstrate how the broken plural can be derived under two-level theory via the `implicit' derivation of the singular.

  15. Economical analyses in interventional radiology

    International Nuclear Information System (INIS)

    Considerations about the relation between benefit and expenses are also gaining increasing importance in interventional radiology. This review aims at providing a survey about the published data concerning economical analyses of some of the more frequently employed interventions in radiology excluding neuroradiological and coronary interventions. Because of the relative scarcity of literature in this field, all identified articles (n=46) were included without selection for methodological quality. For a number of radiological interventions the cost-effectiveness has already been demonstrated, e.g., PTA of femoropopliteal and iliac artery stenoses, stenting of renal artery stenoses, placement of vena-cava filters, as well as metal stents in malignant biliary and esophageal obstructions. Conflicting data exist for the treatment of abdominal aortic aneurysms. So far, no analysis could be found that directly compares bypass surgery versus PTA+stent in iliac arteries. (orig.)

  16. Analyse des besoins des usagers

    OpenAIRE

    KHOUDOUR,L; LANGLAIS,A; Charpentier, C.; MOTTE,C; PIAN,C

    2002-01-01

    The aim is to extend video surveillance of the metro premises into the interior of the trains. The captured images record the events taking place inside the vehicles, notably in order to improve the safety of the passengers being transported. It is possible to store the images from the few moments preceding a passenger incident, to analyse these images off-line, and to better understand in real time the behaviour of passengers confronted with events or ...

  17. Analysing the Wrongness of Killing

    DEFF Research Database (Denmark)

    Di Nucci, Ezio

    2016-01-01

    This article provides an in-depth analysis of the wrongness of killing by comparing different versions of three influential views: the traditional view that killing is always wrong; the liberal view that killing is wrong if and only if the victim does not want to be killed; and Don Marquis' future of value account of the wrongness of killing. In particular, I illustrate the advantages that a basic version of the liberal view and a basic version of the future of value account have over competing alternatives. Still, ultimately none of the views analysed here is satisfactory; but the different reasons why those competing views fail provide important insights into the ethics of killing.

  18. Analysing performance through value creation

    Directory of Open Access Journals (Sweden)

    Adrian TRIFAN

    2015-12-01

    Full Text Available This paper draws a parallel between measuring financial performance in two variants: the first using data offered by accounting, which lays emphasis on maximizing profit, and the second, which aims to create value. The traditional approach to performance is based on indicators drawn from accounting data: ROI, ROE, EPS. Traditional management, based on analysing accounting data, has shown its limits, and a new approach is needed, based on creating value. The evaluation of value-based performance tries to avoid the errors due to accounting data by using other specific indicators: EVA, MVA, TSR, CVA. The main objective is shifted from maximizing income to maximizing the value created for shareholders. The theoretical part is accompanied by a practical analysis regarding the creation of value and an analysis of the main indicators which evaluate this concept.
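
Of the value-based indicators the abstract lists, EVA has the simplest closed form: EVA = NOPAT − WACC × invested capital. A minimal sketch with hypothetical figures (the numbers below are illustrative assumptions, not taken from the paper):

```python
def economic_value_added(nopat, wacc, invested_capital):
    """EVA = NOPAT - WACC * invested capital (consistent monetary units).

    nopat: net operating profit after taxes
    wacc: weighted average cost of capital (as a fraction)
    invested_capital: capital employed in operations
    """
    return nopat - wacc * invested_capital

# Illustrative (hypothetical) company figures:
eva = economic_value_added(nopat=1_200_000, wacc=0.08, invested_capital=10_000_000)
# A positive EVA means value was created beyond the cost of capital.
```

A firm can report a positive accounting profit while still destroying value: here, any NOPAT below 800,000 would make EVA negative despite a positive ROI.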

  19. Isotopic signatures by bulk analyses

    International Nuclear Information System (INIS)

    Los Alamos National Laboratory has developed a series of measurement techniques for identification of nuclear signatures by analyzing bulk samples. Two specific applications for isotopic fingerprinting to identify the origin of anthropogenic radioactivity in bulk samples are presented. The first example is the analyses of environmental samples collected in the US Arctic to determine the impact of dumping of radionuclides in this polar region. Analyses of sediment and biota samples indicate that for the areas sampled the anthropogenic radionuclide content of sediments was predominantly the result of the deposition of global fallout. The anthropogenic radionuclide concentrations in fish, birds and mammals were very low. It can be surmised that marine food chains are presently not significantly affected. The second example is isotopic fingerprinting of water and sediment samples from the Rocky Flats Facility (RFP). The largest source of anthropogenic radioactivity presently affecting surface-waters at RFP is the sediments that are currently residing in the holding ponds. One gram of sediment from a holding pond contains approximately 50 times more plutonium than 1 liter of water from the pond. Essentially 100% of the uranium in Ponds A-1 and A-2 originated as depleted uranium. The largest source of radioactivity in the terminal Ponds A-4, B-5 and C-2 was naturally occurring uranium and its decay product radium. The uranium concentrations in the waters collected from the terminal ponds contained 0.05% or less of the interim standard calculated derived concentration guide for uranium in waters available to the public. All of the radioactivity observed in soil, sediment and water samples collected at RFP was naturally occurring, the result of processes at RFP or the result of global fallout. No extraneous anthropogenic alpha, beta or gamma activities were detected. The plutonium concentrations in Pond C-2 appear to vary seasonally

  20. Mirror energy difference and the structure of loosely bound proton-rich nuclei around A = 20

    CERN Document Server

    Yuan, Cenxi; Xu, Furong; Suzuki, Toshio; Otsuka, Takaharu

    2014-01-01

    The properties of loosely bound proton-rich nuclei around A = 20 are investigated within the framework of the nuclear shell model. In these nuclei, the strength of the effective interactions involving the loosely bound proton s1/2 orbit is significantly reduced in comparison with that in their mirror nuclei. We evaluate the reduction of the effective interaction by calculating the monopole-based universal interaction (VMU) in the Woods-Saxon basis. The shell-model Hamiltonian in the sd shell, such as USD, can thus be modified to reproduce the binding energies and energy levels of the weakly bound proton-rich nuclei around A = 20. The effect of the reduction of the effective interaction on the structure and decay properties of these nuclei is also discussed.

  1. Towards a 20th Century History of Relationships between Theatre and Neuroscience

    OpenAIRE

    Gabriele Sofia

    2014-01-01

    This article considers some preliminary reflections in view of a 20th century theatre-and-neuroscience history. Up to now, the history of the 20th century theatre has been too fragmentary and irregular, missing out on the subterranean links which, either directly or indirectly, bind different experiences. The article aims to highlight the recurrent problems of these encounters. The hypothesis of the essay concerns the possibility of gathering and grouping a great part of the relationsh...

  2. Left lung agenesis discovered by a spontaneous pneumothorax in a 20-year-old girl.

    Science.gov (United States)

    Hentati, Abdessalem; Neifar, Chawki; Abid, Walid; M'saad, Sameh

    2016-01-01

    Lung agenesis is a rare condition whose prognosis largely depends on associated malformations. Clinical presentation is highly variable, and the diagnosis is often made in childhood. Here, we present the case of a 20-year-old girl who was admitted because of a spontaneous pneumothorax. Investigations revealed left lung agenesis and a hyperinflated right lung crossing the midline, with a corresponding pneumothorax. There were no other malformations. This congenital condition and the treatment for this rare presentation are discussed in detail. PMID:27051112

  3. A 20 kV, 5 A, 1 ns Risetime Pulsed Electron Beam Source

    International Nuclear Information System (INIS)

    A 20 kV, 1 ns risetime pulsed electron beam source was developed using an extremely small gap (0.1 mm) diode driven by a sub-nanosecond risetime, 10 kV rectangular pulse generator. A beam current of 5 A was detected using a fast-response Faraday cup at a distance of 2 cm from a grid anode. The shot-to-shot variation of the electron beam pulse was less than 10%.

  4. BN-600 hybrid core benchmark analyses

    International Nuclear Information System (INIS)

    Benchmark analyses for the hybrid BN-600 reactor that contains three uranium enrichment zones and one plutonium zone in the core, have been performed within the frame of an IAEA sponsored Coordinated Research Project. The results for several relevant reactivity parameters obtained by the participants with their own state-of-the-art basic data and codes, were compared in terms of calculational uncertainty, and their effects on the ULOF transient behavior of the hybrid BN-600 core were evaluated. The comparison of the diffusion and transport results obtained for the homogeneous representation generally shows good agreement for most parameters between the RZ and HEX-Z models. The burnup effect and the heterogeneity effect on most reactivity parameters also show good agreement for the HEX-Z diffusion and transport theory results. A large difference noticed for the sodium and steel density coefficients is mainly due to differences in the spatial coefficient predictions for non fuelled regions. The burnup reactivity loss was evaluated to be 0.025 (4.3 $) within ∼ 5.0% standard deviation. The heterogeneity effect on most reactivity coefficients was estimated to be small. The heterogeneity treatment reduced the control rod worth by 2.3%. The heterogeneity effect on the k-eff and control rod worth appeared to differ strongly depending on the heterogeneity treatment method. A substantial spread noticed for several reactivity coefficients did not give a significant impact on the transient behavior prediction. This result is attributable to compensating effects between several reactivity effects and the specific design of the partially MOX fuelled hybrid core. (author)

  5. Thermal and hydraulic analyses of the System 81 cold traps

    Energy Technology Data Exchange (ETDEWEB)

    Kim, K.

    1977-06-15

    Thermal and hydraulic analyses of the System 81 Type I and II cold traps were completed, except for the thermal transient analysis. Results are evaluated, discussed, and reported. Analytical models were developed to determine the physical dimensions of the cold traps and to predict their performance. The FFTF cold trap crystallizer performances were simulated using the thermal model. This simulation shows that the analytical model developed predicts reasonably conservative temperatures. Pressure drop and sodium residence time calculations indicate that the present design will meet the requirements specified in the E-Specification. Steady-state temperature data for the critical regions were generated to assess the magnitude of the thermal stress.

  6. Reliability analyses used by maintenance

    Energy Technology Data Exchange (ETDEWEB)

    Rusek, S.; Gono, R.; Kral, V.; Kratky, M. [VSB Technical Univ. of Ostrava, Poruba (Czech Republic)

    2008-07-01

    A series of studies was conducted to analyse failures experienced by most power distribution companies in the Czech Republic and one in the Slovak Republic. The purpose was to find ways to optimize the maintenance of distribution network devices. Data were compiled to enable a comparison of results and to create a statistically more significant database. Since the number of failures in the area of electrical power engineering has been rather small, results on element reliability will only become available in several more years. The main challenge in reliability analysis is finding reliable and up-to-date input data. As such, the primary task is to change the existing structure of the databases of the power distribution companies. These databases must be adjusted to provide the input data for the calculation functions of reliability-centred maintenance (RCM). This paper described the programs designed for analyses of reliability indices and for the optimization of maintenance of distribution system equipment, which will provide basic data for responsible and logical decisions regarding maintenance, for the preparation of an effective maintenance schedule, and for the creation of a feedback system. 7 refs., 4 figs.
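
The abstract does not name the reliability indices computed; standard distribution indices such as SAIFI and SAIDI (defined in IEEE Std 1366) are typical outputs of such analyses. A minimal sketch, assuming a hypothetical outage log of (customers affected, outage duration) pairs:

```python
def saifi(outages, total_customers):
    """System Average Interruption Frequency Index:
    total customer interruptions / total customers served."""
    return sum(n for n, _ in outages) / total_customers

def saidi(outages, total_customers):
    """System Average Interruption Duration Index:
    sum of (customers affected * outage hours) / total customers served."""
    return sum(n * hours for n, hours in outages) / total_customers

# Illustrative outage log: (customers affected, duration in hours)
outages = [(120, 1.5), (40, 0.5), (300, 2.0)]
customers = 10_000

# CAIDI = SAIDI / SAIFI: average restoration time per interrupted customer
caidi = saidi(outages, customers) / saifi(outages, customers)
```

Trending these indices per feeder over several years is one way such a failure database can feed an RCM maintenance schedule.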

  7. Partitioning Uncertainty for Non-Ergodic Probabilistic Seismic Hazard Analyses

    OpenAIRE

    Dawood, Haitham Mohamed Mahmoud Mousad

    2014-01-01

    Properly accounting for the uncertainties in predicting ground motion parameters is critical for Probabilistic Seismic Hazard Analyses (PSHA). This is particularly important for critical facilities that are designed for long return period motions. Non-ergodic PSHA is a framework that allows for this proper accounting of uncertainties. This, in turn, allows for more informed decisions by designers, owners and regulating agencies. The ergodic assumption implies that the standard deviation ...

  8. Budget-Impact Analyses: A Critical Review of Published Studies

    OpenAIRE

    Ewa Orlewska; Laszlo Gulcsi

    2009-01-01

    This article reviews budget-impact analyses (BIAs) published to date in peer-reviewed bio-medical journals with reference to current best practice, and discusses where future research needs to be directed. Published BIAs were identified by conducting a computerized search on PubMed using the search term 'budget impact analysis'. The years covered by the search included January 2000 through November 2008. Only studies (i) named by authors as BIAs and (ii) predicting financial consequences of a...

  9. Measuring Quality Across Three Child Care Quality Rating and Improvement Systems: Findings from Secondary Analyses.

    OpenAIRE

    Lizabeth Malone; Gretchen Kirby; Pia Caronongan; Kimberly Boller; Kathryn Tout

    2011-01-01

    This report presents findings from an exploratory analysis of administrative data from three QRISs. The analyses examine the prevalence of quality components across centers and how they combine to result in an overall rating level and to predict observed quality.

  10. Novel heterozygous C243Y A20/TNFAIP3 gene mutation is responsible for chronic inflammation in autosomal-dominant Behçet's disease

    Science.gov (United States)

    Shigemura, Tomonari; Kaneko, Naoe; Kobayashi, Norimoto; Kobayashi, Keiko; Takeuchi, Yusuke; Nakano, Naoko; Masumoto, Junya; Agematsu, Kazunaga

    2016-01-01

    Objective Although Behçet's disease (BD) is a chronic inflammatory disorder of uncertain aetiology, the existence of familial BD with autosomal-dominant traits suggests that a responsible gene (or genes) exists. We investigated a Japanese family with a history of BD to search for pathogenic mutations underlying the biological mechanisms of BD. Methods 6 patients over 4 generations who had suffered from frequent oral ulcers, genital ulcers and erythema nodosum-like lesions of the skin were assessed. Whole-exome sequencing was performed on genomic DNA, and cytokine production was determined from stimulated mononuclear cells. Inflammatory cytokine secretion and Nod2-mediated NF-κB activation were analysed using transfected cells. Results By whole-exome sequencing, we identified a common heterozygous missense mutation in A20/TNFAIP3, a gene known to regulate NF-κB signalling, for which all affected family members carried a heterozygous C243Y mutation in the ovarian tumour domain. Mononuclear cells obtained from the proband and his mother produced large amounts of interleukin 1β, IL-6 and tumour necrosis factor α (TNF-α) on stimulation, as compared with those from normal controls. Although inflammatory cytokine secretion was suppressed in wild-type-transfected cells, it was suppressed to a much lesser extent in mutated C243Y A20/TNFAIP3-transfected cells. In addition, impaired suppression of Nod2-mediated NF-κB activation by C243Y A20/TNFAIP3 was observed. Conclusions A C243Y mutation in A20/TNFAIP3 was likely responsible for increased production of human inflammatory cytokines through reduced suppression of NF-κB activation, and may have accounted for the autosomal-dominant Mendelian mode of BD transmission in this family. PMID:27175295

  11. NOx analyser interference from alkenes

    Science.gov (United States)

    Bloss, W. J.; Alam, M. S.; Lee, J. D.; Vazquez, M.; Munoz, A.; Rodenas, M.

    2012-04-01

    Nitrogen oxides (NO and NO2, collectively NOx) are critical intermediates in atmospheric chemistry. NOx abundance controls the levels of the primary atmospheric oxidants OH, NO3 and O3, and regulates the ozone production which results from the degradation of volatile organic compounds. NOx are also atmospheric pollutants in their own right, and NO2 is commonly included in air quality objectives and regulations. In addition to their role in controlling ozone formation, NOx levels affect the production of other pollutants such as the lachrymator PAN, and the nitrate component of secondary aerosol particles. Consequently, accurate measurement of nitrogen oxides in the atmosphere is of major importance for understanding our atmosphere. The most widely employed approach for the measurement of NOx is chemiluminescent detection of NO2* from the NO + O3 reaction, combined with NO2 reduction by either a heated catalyst or photoconvertor. The reaction between alkenes and ozone is also chemiluminescent; therefore alkenes may contribute to the measured NOx signal, depending upon the instrumental background subtraction cycle employed. This interference has been noted previously, and indeed the effect has been used to measure both alkenes and ozone in the atmosphere. Here we report the results of a systematic investigation of the response of a selection of NOx analysers, ranging from systems used for routine air quality monitoring to atmospheric research instrumentation, to a series of alkenes ranging from ethene to the biogenic monoterpenes, as a function of conditions (co-reactants, humidity). Experiments were performed in the European Photoreactor (EUPHORE) to ensure common calibration, a common sample for the monitors, and to unequivocally confirm the alkene (via FTIR) and NO2 (via DOAS) levels present. The instrument responses ranged from negligible levels up to 10 % depending upon the alkene present and conditions used. Such interferences may be of substantial importance
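
An interference of this kind is additive in the measured signal, so a first-order correction subtracts the NOx-equivalent response to the co-sampled alkene. A minimal sketch; the function and the interference factor are illustrative assumptions (in practice the factor is instrument- and species-specific and must come from a calibration like the EUPHORE experiments described above):

```python
def correct_nox(measured_nox_ppb, alkene_ppb, interference_factor):
    """Subtract an alkene-induced chemiluminescence artefact from a
    measured NOx reading.

    interference_factor: NOx-equivalent signal per ppb of alkene
    (dimensionless; determined by co-calibration, not a universal constant).
    """
    return measured_nox_ppb - interference_factor * alkene_ppb

# Illustrative: 5.0 ppb measured NOx, 20 ppb monoterpene, 1 % response
true_nox = correct_nox(5.0, 20.0, 0.01)
```

For clean-air sites with high biogenic alkene loadings, even a 1 % response can be a noticeable fraction of the reported NOx, which is why the abstract's upper bound of 10 % matters.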

  12. Nonlinear Analyses of the Dynamic Properties of Hydrostatic Bearing Systems

    Institute of Scientific and Technical Information of China (English)

    LIU Wei(刘伟); WU Xiujiang(吴秀江); V.A. Prokopenko

    2003-01-01

    Nonlinear analyses of hydrostatic bearing systems are necessary to adequately model the fluid-solid interaction. The dynamic properties of linear and nonlinear analytical models of hydrostatic bearings are compared in this paper. The analyses were based on the determination of the aperiodic border of transient processes with external step loads. The results show that the dynamic properties can be most effectively improved by increasing the hydrostatic bearing crosspiece width and additional pocket volume in a bearing can extend the load range for which the transient process is aperiodic, but an additional restrictor and capacitor (RC) chain must be introduced for increasing damping. The nonlinear analyses can also be used to predict typical design parameters for a hydrostatic bearing.

  13. Left lung agenesis discovered by a spontaneous pneumothorax in a 20-year-old girl

    Directory of Open Access Journals (Sweden)

    Abdessalem Hentati

    2016-01-01

    Full Text Available Lung agenesis is a rare condition whose prognosis largely depends on associated malformations. Clinical presentation is highly variable, and the diagnosis is often made in childhood. Here, we present the case of a 20-year-old girl who was admitted because of a spontaneous pneumothorax. Investigations revealed left lung agenesis and a hyperinflated right lung crossing the midline, with a corresponding pneumothorax. There were no other malformations. This congenital condition and the treatment for this rare presentation are discussed in detail.

  14. [Evaluation of a 20 years' experience of colo-anal anastomoses. Indications, results and pitfalls].

    Science.gov (United States)

    Hautefeuille, P; Saab, M; Valleur, P

    1991-01-01

    Seventy-nine anastomoses were performed over a 20-year period. Indications included 68 rectal adenocarcinomas and 11 benign lesions. There was no operative mortality. Anastomotic leak was the main cause of morbidity: 12 clinical (15%) and 4 radiological leaks. The 5-year actuarial disease-free survival was 70%; 7 local recurrences (10%) were observed, of which 6 were Dukes C and 1 Dukes B. Functional results were assessed in 61 patients. They were considered to be excellent in 35 (57%), good in 24 (39%) and bad in 2 (4%). Six failures were noted: 3 technical, 1 oncologic and 2 functional. Pitfalls of coloanal anastomosis are discussed. PMID:2064292

  15. Towards a 20th Century History of Relationships between Theatre and Neuroscience

    Directory of Open Access Journals (Sweden)

    Gabriele Sofia

    2014-05-01

    Full Text Available This article considers some preliminary reflections in view of a 20th century theatre-and-neuroscience history. Up to now, the history of the 20th century theatre has been too fragmentary and irregular, missing out on the subterranean links which, either directly or indirectly, bind different experiences. The article aims to highlight the recurrent problems of these encounters. The hypothesis of the essay concerns the possibility of gathering and grouping a great part of the relationships between theatre and neuroscience around four trajectories: the physiology of action, the physiology of emotions, ethology, and studies on the spectator’s perception.

  16. A 20-Liter Test Stand with Gas Purification for Liquid Argon Research

    CERN Document Server

    Li, Yichen; Tang, Wei; Joshi, Jyoti; Qian, Xin; Diwan, Milind; Kettell, Steve; Morse, William; Rao, Triveni; Stewart, James; Tsang, Thomas; Zhang, Lige

    2016-01-01

    We describe the design of a 20-liter test stand constructed to study fundamental properties of liquid argon (LAr). This system utilizes a simple, cost-effective gas argon (GAr) purification to achieve ultra-high purity, which is necessary to study electron transport properties in LAr. An electron drift stack with up to 25 cm length is constructed to study electron drift, diffusion, and attachment at various electric fields. A gold photocathode and a pulsed laser are used as a bright electron source. The operational performance of this system is reported.

  17. Operation of a 20 tesla on-axis tokamak toroidal field magnet

    International Nuclear Information System (INIS)

    The Center for Electromechanics at The University of Texas at Austin (CEM-UT) has designed, built, and is presently testing a 20 T on-axis, single turn, toroidal field (TF) coil. The Ignition Technology Demonstration (ITD) is a 0.06-scale IGNITEX (Texas Fusion Ignition Experiment) TF-coil experiment. The purpose of the ITD program is to demonstrate the operation of a 20 T, single turn, TF coil powered by homopolar generators (HPGs). This program is funded by the Advanced Technology Program and the Texas Atomic Energy Research Foundation. Scaling of the prototype 20 T TF coil was selected to be 0.06 on the basis of the maximum current capability of CEM-UT's 60 MJ HPG power supply, which has a rating of 9 MA at 100 V in a parallel configuration. Stresses and temperatures reached in the scale TF coil are representative of those that would be experienced in a full-scale IGNITEX TF coil with a 1.5 m major radius and a 5 s flat top current profile. The 60 MJ HPG system consists of six, 20 MJ, drum-type HPGs each capable of 1.5 MA at 100 V. Only 25% of the available system energy is used to drive the single turn TF coil to 20 T

  18. Toward a 20% Wind Electricity Supply in the United States: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Flowers, L.; Dougherty, P.

    2007-05-01

    Since the U.S. Department of Energy (DOE) initiated the Wind Powering America (WPA) program in 1999, installed wind power capacity in the United States has increased from 2,500 MW to more than 11,000 MW. In 1999, only four states had more than 100 MW of installed wind capacity; now 16 states have more than 100 MW installed. In addition to WPA's efforts to increase deployment, the American Wind Energy Association (AWEA) is building a network of support across the country. In July 2005, AWEA launched the Wind Energy Works! Coalition, which comprises more than 70 organizations. In February 2006, the wind deployment vision was enhanced by President George W. Bush's Advanced Energy Initiative, which refers to a wind energy contribution of up to 20% of the electricity consumption of the United States. A 20% electricity contribution over the next 20 to 25 years represents 300 to 350 gigawatts (GW) of electricity. This paper provides a background of wind energy deployment in the United States and a history of the U.S. DOE's WPA program, as well as the program's approach to increasing deployment through removal of institutional and informational barriers to a 20% wind electricity future.
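
The 300 to 350 GW figure can be sanity-checked from annual consumption and an average capacity factor. The demand and capacity-factor values below are illustrative assumptions for a rough check, not figures from the paper:

```python
HOURS_PER_YEAR = 8760

def wind_capacity_gw(annual_demand_twh, share, capacity_factor):
    """Nameplate wind capacity (GW) needed to supply `share` of annual
    electricity demand at a given fleet-average capacity factor."""
    energy_gwh = annual_demand_twh * share * 1000  # TWh -> GWh
    return energy_gwh / (HOURS_PER_YEAR * capacity_factor)

# Illustrative: ~4,000 TWh annual US demand, 20 % share, 30 % capacity factor
capacity = wind_capacity_gw(annual_demand_twh=4000, share=0.20, capacity_factor=0.30)
```

With these assumptions the result lands at roughly 300 GW, consistent with the 300 to 350 GW range quoted; a lower assumed capacity factor pushes the requirement toward the top of that range.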

  19. Breakdown of the Isobaric Multiplet Mass Equation for the A = 20 and 21 Multiplets

    CERN Document Server

    Gallant, A T; Andreoiu, C; Bader, A; Chaudhuri, A; Chowdhury, U; Grossheim, A; Klawitter, R; Kwiatkowski, A A; Leach, K G; Lennarz, A; Macdonald, T D; Schultz, B E; Lassen, J; Heggen, H; Raeder, S; Teigelhöfer, A; Brown, B A; Magilligan, A; Holt, J D; Menéndez, J; Simonis, J; Schwenk, A; Dilling, J

    2014-01-01

    Using the Penning trap mass spectrometer TITAN, we performed the first direct mass measurements of 20,21Mg, isotopes that are the most proton-rich members of the A = 20 and A = 21 isospin multiplets. These measurements were possible through the use of a unique ion-guide laser ion source, a development that suppressed isobaric contamination by six orders of magnitude. Compared to the latest atomic mass evaluation, we find that the mass of 21Mg is in good agreement but that the mass of 20Mg deviates by 3σ. These measurements reduce the uncertainties in the masses of 20,21Mg by 15 and 22 times, respectively, resulting in a significant departure from the expected behavior of the isobaric multiplet mass equation in both the A = 20 and A = 21 multiplets. This presents a challenge to shell model calculations using either the isospin non-conserving USDA/B Hamiltonians or isospin non-conserving interactions based on chiral two- and three-nucleon forces.
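
The isobaric multiplet mass equation tested above is quadratic in the isospin projection Tz, M(Tz) = a + b·Tz + c·Tz²; a breakdown shows up as non-zero residuals (or a significant cubic d·Tz³ term) once more than three multiplet members are measured. A minimal sketch with illustrative, made-up mass excesses (not the values measured in the paper):

```python
import numpy as np

def fit_imme(tz, mass_excess):
    """Least-squares fit of the quadratic IMME M(Tz) = a + b*Tz + c*Tz**2.
    Returns (a, b, c)."""
    c, b, a = np.polyfit(tz, mass_excess, 2)  # highest power first
    return a, b, c

# Illustrative (hypothetical) mass excesses in keV for a T = 1 triplet
tz = np.array([-1.0, 0.0, 1.0])
me = np.array([17570.0, 10680.0, 3125.0])

a, b, c = fit_imme(tz, me)
residuals = me - (a + b * tz + c * tz**2)
```

With exactly three members the quadratic passes through every point, so residuals are trivially zero; testing for a breakdown, as in the A = 21 quintet discussion, requires four or more measured masses so the fit is over-determined.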

  20. 78 FR 26847 - Including Specific Pavement Types in Federal-aid Highway Traffic Noise Analyses

    Science.gov (United States)

    2013-05-08

    ... Federal Highway Administration Including Specific Pavement Types in Federal-aid Highway Traffic Noise... types used in Federal-aid highway traffic noise analyses. Current highway traffic noise analyses rely on... (OGAC), and Portland cement concrete (PCC). Prediction of future noise levels is based on the...

  1. Residual Strength Analyses of Monolithic Structures

    Science.gov (United States)

    Forth, Scott (Technical Monitor); Ambur, Damodar R. (Technical Monitor); Seshadri, B. R.; Tiwari, S. N.

    2003-01-01

    Finite-element fracture simulation methodology predicts the residual strength of damaged aircraft structures. The methodology uses the critical crack-tip-opening-angle (CTOA) fracture criterion to characterize the fracture behavior of the material. The CTOA fracture criterion assumes that stable crack growth occurs when the crack-tip angle reaches a constant critical value. The use of the CTOA criterion requires an elastic-plastic, finite-element analysis. The critical CTOA value is determined by simulating fracture behavior in laboratory specimens, such as a compact specimen, to obtain the angle that best fits the observed test behavior. The critical CTOA value appears to be independent of loading, crack length, and in-plane dimensions. However, it is a function of material thickness and local crack-front constraint. Modeling the local constraint requires either a three-dimensional analysis or a two-dimensional analysis with an approximation to account for the constraint effects. In recent times, as the aircraft industry has leaned towards monolithic structures with the intention of reducing part count and manufacturing cost, there has been a consistent effort at NASA Langley to extend the critical-CTOA-based numerical methodology to the analysis of integrally-stiffened panels. In this regard, a series of fracture tests was conducted on both flat and curved aluminum alloy integrally-stiffened panels. The flat panels were subjected to uniaxial tension, and during the tests the applied load, crack extension, out-of-plane displacements, and local deformations around the crack-tip region were measured. Compact and middle-crack tension specimens were tested to determine the critical angle (ψc) using a three-dimensional code (ZIP3D) and the plane-strain core height (hc) using a two-dimensional code (STAGS). These values were then used in the STAGS analysis to predict the fracture behavior of the integrally-stiffened panels. The analyses modeled stable tearing, buckling, and crack
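
The CTOA is commonly evaluated from the crack-face opening displacement measured a fixed distance behind the crack tip. A sketch of that geometric relation; the measurement convention and numbers here are illustrative assumptions, not taken from the record above:

```python
import math

def ctoa_degrees(opening, distance):
    """Crack-tip opening angle (degrees) from the total crack-face opening
    `opening` measured at `distance` behind the crack tip (same units):
    psi = 2 * atan(opening / (2 * distance))."""
    return math.degrees(2.0 * math.atan(opening / (2.0 * distance)))

# Illustrative: 0.1 mm total opening measured 1.0 mm behind the tip
angle = ctoa_degrees(0.1, 1.0)
```

In a simulation, stable tearing is then modeled by advancing the crack front whenever this angle reaches the calibrated critical value ψc.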

  2. Pawnee Nation Energy Option Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Matlock, M.; Kersey, K.; Riding In, C.

    2009-07-21

    Pawnee Nation of Oklahoma Energy Option Analyses In 2003, the Pawnee Nation leadership identified the need for the tribe to comprehensively address its energy issues. During a strategic energy planning workshop, a general framework was laid out and the Pawnee Nation Energy Task Force was created to work toward further development of the tribe’s energy vision. The overarching goals of the “first steps” project were to identify the most appropriate focus for its strategic energy initiatives going forward, and to provide the information necessary to take the next steps in pursuit of the “best fit” energy options. Description of Activities Performed The research team reviewed existing data pertaining to the availability of biomass (focusing on woody biomass, agricultural biomass/bio-energy crops, and methane capture), solar, wind and hydropower resources on Pawnee-owned lands. Using these data, combined with assumptions about costs and revenue streams, the research team performed preliminary feasibility assessments for each resource category. The research team also reviewed available funding resources and made recommendations to Pawnee Nation highlighting those resources with the greatest potential for financially viable development, both in the near term and over a longer time horizon. Findings and Recommendations Due to a lack of financial incentives for renewable energy, particularly at the state level, combined with mediocre renewable energy resources, renewable energy development opportunities are limited for Pawnee Nation. However, near-term potential exists for development of solar hot water at the gym, and an exterior wood-fired boiler system at the tribe’s main administrative building. Pawnee Nation should also explore options for developing LFGTE resources in collaboration with the City of Pawnee. Significant potential may also exist for development of bio-energy resources within the next decade. Pawnee Nation representatives should closely monitor

  3. Decreasing Sports Activity with Increasing Age? Findings from a 20-Year Longitudinal and Cohort Sequence Analysis

    Science.gov (United States)

    Breuer, Christoph; Wicker, Pamela

    2009-01-01

    According to cross-sectional studies in sport science literature, decreasing sports activity with increasing age is generally assumed. In this paper, the validity of this assumption is checked by applying more effective methods of analysis, such as longitudinal and cohort sequence analyses. With the help of 20 years' worth of data records from the…

  4. 10 CFR 436.24 - Uncertainty analyses.

    Science.gov (United States)

    2010-01-01

    ... 10 Energy 3 2010-01-01 2010-01-01 false Uncertainty analyses. 436.24 Section 436.24 Energy... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... impact of uncertainty on the calculation of life cycle cost effectiveness or the assignment of rank...

  5. Study of proton and two-proton emission from light neutron-deficient nuclei around A=20

    Energy Technology Data Exchange (ETDEWEB)

    Zerguerras, T

    2001-09-01

    Proton and two-proton emission from light neutron-deficient nuclei around A=20 have been studied. A radioactive beam of {sup 18}Ne, {sup 17}F and {sup 20}Mg, produced at the Grand Accelerateur National d'Ions Lourds by fragmentation of a {sup 24}Mg primary beam at 95 MeV/A, bombarded a {sup 9}Be target to form unbound states. Protons and nuclei from the decay were detected in the MUST array and the SPEG spectrometer, respectively. From energy and angle measurements, the invariant mass of the decaying nucleus could be reconstructed. Double coincidence events between a proton and {sup 17}F, {sup 16}O, {sup 15}O, {sup 14}O and {sup 18}Ne were registered to obtain excitation energy spectra of {sup 18}Ne, {sup 17}F, {sup 16}F, {sup 15}F and {sup 19}Na. Generally, the masses measured are in agreement with previous experiments. In the case of {sup 18}Ne, excitation energy and angular distributions agree well with the predictions of a break-up model calculation. From {sup 17}Ne-proton coincidences, a first experimental measurement of the ground state mass excess of {sup 18}Na has been obtained, yielding 24.19(0.15) MeV. Two-proton emission from {sup 17}Ne and {sup 18}Ne excited states and the {sup 19}Mg ground state was studied through triple coincidences between two protons and {sup 15}O, {sup 16}O and {sup 17}Ne, respectively. In the first case, the proton-proton relative angle distribution in the center of mass has been compared with model calculations. Sequential emission from excited states of {sup 17}Ne above the proton emission threshold, through {sup 16}F, is dominant, but a {sup 2}He decay channel could not be excluded. No {sup 2}He emission from the 1.288 MeV {sup 17}Ne state or from the 6.15 MeV {sup 18}Ne state has been observed. Only one coincidence event between {sup 17}Ne and two protons was registered, the value of the one-neutron stripping reaction cross section of {sup 20}Mg being much lower than predicted. (author)
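    The invariant-mass reconstruction from measured energies and angles described above amounts to the standard relativistic two-body formula. The sketch below is a minimal, hedged illustration of that formula with made-up inputs; it is not the experiment's analysis code.

```python
import math

# Invariant mass of a two-body decay from measured energies, momentum
# magnitudes, and the opening angle between the two momenta:
#   M^2 = (E1 + E2)^2 - |p1 + p2|^2   (natural units: MeV, MeV/c)
def invariant_mass(e1, p1, e2, p2, opening_angle):
    p_sum_sq = p1**2 + p2**2 + 2.0 * p1 * p2 * math.cos(opening_angle)
    return math.sqrt((e1 + e2) ** 2 - p_sum_sq)

# Illustrative check: two particles at rest reconstruct to the sum of their
# rest masses (here two protons, m_p ~ 938.272 MeV/c^2).
m_p = 938.272
print(invariant_mass(m_p, 0.0, m_p, 0.0, 0.0))
```

    In the experiment, the energies and angles come from the MUST array and the SPEG spectrometer; here they are free parameters.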

  6. Antiapoptotic effect both in vivo and in vitro of A20 gene when transfected into rat hippocampal neurons

    Institute of Scientific and Technical Information of China (English)

    Hong-sheng MIAO; Lu-yang YU; Guo-zhen HUI; Li-he GUO

    2005-01-01

    Aim: To evaluate the antiapoptotic effect of the A20 gene in primary hippocampal neurons both in vivo and in vitro. Methods: Primary hippocampal neurons from embryonic day 18 (E18) rats were transfected with the A20 gene by using the new Nucleofector electroporation transfection method. We then examined whether A20-neurons possessed anti-apoptotic abilities after TNF-α stimulation in vitro. A20-neurons and pcDNA3-neurons were transplanted into the penumbra of the brains of rats that had been subjected to 90 min of ischemia induced by left middle cerebral artery occlusion (MCAO). Results: A20-neurons resisted TNF-α-induced apoptosis in vitro. The apoptosis rate of neurons overexpressing A20 (28.46%±3.87%) was lower than that of neurons transfected with pcDNA3 (53.06%±5.36%). More A20-neurons survived in the penumbra both 3 d and 7 d after transplantation than did sham pcDNA3 neurons. Conclusion: The novel function of A20 may make it a potential target for gene therapy for neurological diseases.

  7. Revalidation of the isobaric multiplet mass equation for the $A=20$ quintet

    CERN Document Server

    Glassman, B E; Wrede, C; Allen, J; Bardayan, D W; Bennett, M B; Brown, B A; Chipps, K A; Febbraro, M; Fry, C; Hall, M R; Hall, O; Liddick, S N; O'Malley, P; Ong, W; Pain, S D; Schwartz, S B; Shidling, P; Sims, H; Thompson, P; Zhang, H

    2015-01-01

    An unexpected breakdown of the isobaric multiplet mass equation in the $A=20$, $T=2$ quintet was recently reported, presenting a challenge to modern theories of nuclear structure. In the present work, the excitation energy of the lowest $T = 2$ state in $^{20}$Na has been measured to be $6498.4 \pm 0.2_{\textrm{stat}} \pm 0.4_{\textrm{syst}}$ keV by using the superallowed $0^+ \rightarrow 0^+$ beta decay of $^{20}$Mg to access it and an array of high-purity germanium detectors to detect its $\gamma$-ray deexcitation. This value differs by 27 keV (1.9 standard deviations) from the recommended value of $6525 \pm 14$ keV and is a factor of 28 more precise. The isobaric multiplet mass equation is shown to be revalidated when the new value is adopted.
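    The quantity being revalidated here is the quadratic isobaric multiplet mass equation, M(Tz) = a + b·Tz + c·Tz², fitted across the five members of a T = 2 quintet. The sketch below shows the fit-and-residual check in its simplest form; the coefficients and mass excesses are synthetic placeholders, not the measured A=20 values.

```python
import numpy as np

def fit_imme(tz, mass_excess):
    """Fit the quadratic IMME, M(Tz) = a + b*Tz + c*Tz^2, and return
    (coefficients [c, b, a] from polyfit, residuals in keV)."""
    coeffs = np.polyfit(tz, mass_excess, deg=2)
    residuals = mass_excess - np.polyval(coeffs, tz)
    return coeffs, residuals

# Synthetic quintet: Tz = -2..+2 with made-up coefficients (keV).
tz = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
a, b, c = 10000.0, -3500.0, 240.0
masses = a + b * tz + c * tz**2

coeffs, res = fit_imme(tz, masses)
# For data that obeys the IMME exactly, the residuals are essentially zero;
# a significant residual at one member would signal a "breakdown".
print(coeffs, np.max(np.abs(res)))
```

    A real test would propagate the measurement uncertainties (e.g. the 0.2 keV statistical error quoted above) into the residuals before judging agreement.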

  8. Interaction cross-sections and matter radii of A = 20 isobars

    International Nuclear Information System (INIS)

    High-energy interaction cross-sections of A=20 nuclei (20N, 20O, 20F, 20Ne, 20Na, 20Mg) on carbon were measured with accuracies of ∼1%. The nuclear matter rms radii derived from the measured cross-sections show an irregular dependence on isospin projection. The largest difference in radii, which amounts to approximately 0.2 fm, has been obtained for the mirror nuclei 20O and 20Mg. The influence of nuclear deformation and binding energy on the radii is discussed. By evaluating the difference in rms radii of neutron and proton distributions, evidence has been found for the existence of a proton skin for 20Mg and of a neutron skin for 20N. (orig.)

  9. A 20-KW Wind Energy Conversion System (WECS) at the Marine Corps Air Station, Kaneohe, Hawaii

    Science.gov (United States)

    Pal, D.

    1983-01-01

    The wind turbine generator chosen for the evaluation was a horizontal-axis, downwind propeller rotor driving a three-phase, self-excited alternator through a step-up gearbox. The alternator output is fed into the base power distribution system through a three-phase, line-commutated synchronous inverter using SCRs. The site has moderate wind conditions with an annual average windspeed of 12 to 14 mph, and the WECS turbine has a relatively high (29 mph) rated windspeed. The 20-kW WECS system was primarily designed to obtain operating experience with, and maintenance information on, a 20-kW-sized WECS. This report describes in detail the experience gained and lessons learned during the field evaluation.

  10. Self-Rated Activity Levels and Longevity: Evidence from a 20 Year Longitudinal Study

    Science.gov (United States)

    Mullee, Mark A.; Coleman, Peter G.; Briggs, Roger S. J.; Stevenson, James E.; Turnbull, Joanne C.

    2008-01-01

    The study reports on factors predicting the longevity of 328 people over the age of 65 drawn from an English city and followed over 20 years. Both the reported activities score and the individual's comparative evaluation of their own level of activity independently reduced the risk of death, even when health and cognitive status were taken into…

  11. The ASSET intercomparison of ozone analyses: method and first results

    Directory of Open Access Journals (Sweden)

    A. J. Geer

    2006-06-01

    This paper examines 11 sets of ozone analyses from 7 different data assimilation systems. Two are numerical weather prediction (NWP) systems based on general circulation models (GCMs); the other five use chemistry transport models (CTMs). These systems contain either linearised or detailed ozone chemistry, or no chemistry at all. In most analyses, MIPAS (Michelson Interferometer for Passive Atmospheric Sounding) ozone data are assimilated. Two examples assimilate SCIAMACHY (Scanning Imaging Absorption Spectrometer for Atmospheric Chartography) observations. The analyses are compared to independent ozone observations covering the troposphere, stratosphere and lower mesosphere during the period July to November 2003.

    Through most of the stratosphere (50 hPa to 1 hPa), biases are usually within ±10% and standard deviations less than 10% compared to ozonesondes and HALOE (Halogen Occultation Experiment). Biases and standard deviations are larger in the upper-troposphere/lower-stratosphere, in the troposphere, the mesosphere, and the Antarctic ozone hole region. In these regions, some analyses do substantially better than others, and this is mostly due to differences in the models. At the tropical tropopause, many analyses show positive biases and excessive structure in the ozone fields, likely due to known deficiencies in assimilated tropical wind fields and a degradation in MIPAS data at these levels. In the southern hemisphere ozone hole, only the analyses which correctly model heterogeneous ozone depletion are able to reproduce the near-complete ozone destruction over the pole. In the upper-stratosphere and mesosphere (above 5 hPa), some ozone photochemistry schemes caused large but easily remedied biases. The diurnal cycle of ozone in the mesosphere is not captured, except by the one system that includes a detailed treatment of mesospheric chemistry.

    In general, similarly good results are obtained no matter what the assimilation

  12. Uncertainty and Sensitivity Analyses of Duct Propagation Models

    Science.gov (United States)

    Nark, Douglas M.; Watson, Willie R.; Jones, Michael G.

    2008-01-01

    This paper presents results of uncertainty and sensitivity analyses conducted to assess the relative merits of three duct propagation codes. Results from this study are intended to support identification of a "working envelope" within which to use the various approaches underlying these propagation codes. This investigation considers a segmented liner configuration that models the NASA Langley Grazing Incidence Tube, for which a large set of measured data was available. For the uncertainty analysis, the selected input parameters (source sound pressure level, average Mach number, liner impedance, exit impedance, static pressure and static temperature) are randomly varied over a range of values. Uncertainty limits (95% confidence levels) are computed for the predicted values from each code, and are compared with the corresponding 95% confidence intervals in the measured data. Generally, the mean values of the predicted attenuation are observed to track the mean values of the measured attenuation quite well and predicted confidence intervals tend to be larger in the presence of mean flow. A two-level, six factor sensitivity study is also conducted in which the six inputs are varied one at a time to assess their effect on the predicted attenuation. As expected, the results demonstrate the liner resistance and reactance to be the most important input parameters. They also indicate the exit impedance is a significant contributor to uncertainty in the predicted attenuation.
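    The uncertainty-analysis procedure described above (randomly varying the input parameters over their ranges and taking 95% limits over the predicted attenuation) can be sketched as a plain Monte Carlo loop. Here `attenuation_model` is a hypothetical smooth stand-in for the actual duct propagation codes, and the parameter ranges are illustrative.

```python
import random

def attenuation_model(mach, resistance, reactance):
    # Hypothetical response standing in for a duct-propagation code:
    # attenuation (dB) as a smooth function of three of the six inputs.
    return 30.0 - 8.0 * mach + 2.0 * resistance - 0.5 * reactance

def monte_carlo_interval(n_draws=10_000, seed=1):
    """Draw inputs uniformly from their ranges, run the model for each
    draw, and return the empirical 95% interval of the predictions."""
    rng = random.Random(seed)
    preds = []
    for _ in range(n_draws):
        mach = rng.uniform(0.0, 0.5)          # average Mach number
        resistance = rng.uniform(0.5, 3.0)    # normalized liner resistance
        reactance = rng.uniform(-2.0, 2.0)    # normalized liner reactance
        preds.append(attenuation_model(mach, resistance, reactance))
    preds.sort()
    return preds[int(0.025 * n_draws)], preds[int(0.975 * n_draws)]

low, high = monte_carlo_interval()
print(low, high)
```

    The paper's comparison then checks whether this predicted band overlaps the 95% confidence interval of the measured attenuation; the one-at-a-time sensitivity study replaces the random draws with low/high settings of each input in turn.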

  13. Computer analyses on loop seal clearing experiment at PWR PACTEL

    International Nuclear Information System (INIS)

    Highlights: • Code analyses of loop seal clearing experiment with PWR PACTEL are introduced. • TRACE and APROS system codes are used in the analyses. • Main events of the experiment are well predicted with both codes. • Discrepancies are observed on the secondary side and in the core region. • Loop seal clearing phenomenon is well simulated with both codes. - Abstract: Water seal formation in the loop seal in pressurized water reactors can occur during a small or intermediate break loss-of-coolant accident, causing temporary fuel overheating. Quantification of the accuracy of overheating prediction is of interest in the best-estimate safety analyses, even though the peak cladding temperatures due to the water seal formation in the loop seal seldom approach acceptance criteria as such. The aim of this study was to test and evaluate the accuracy with which the thermal–hydraulic system code nodalizations of the PWR PACTEL predict loop seal clearing in a small break loss-of-coolant-accident test performed with the PWR PACTEL facility. PWR PACTEL is a thermal–hydraulic test facility with two loops and vertical inverted U-tube steam generators. Post-test simulations were performed with the TRACE and APROS system codes. In the post-test simulations, the main events of the transient such as the decrease in the core water level, depressurization of the primary circuit, and the behavior of the water seal formation and clearing in the loop seal were predicted satisfactorily by both codes. However, discrepancies with the experiment results were observed in the analyses with both codes, for example the core temperature excursions were halted too early and the peak temperature predictions were too low. The core water level increase caused by loop seal clearing was overestimated with both codes, and the pressure and temperature were overestimated on the secondary side of the steam generators. Loop Seal 2 was evidently cleared out while Loop Seal 1 remained closed

  14. Identifying null meta-analyses that are ripe for updating

    Directory of Open Access Journals (Sweden)

    Fang Manchun

    2003-07-01

    Background: As an increasingly large number of meta-analyses are published, quantitative methods are needed to help clinicians and systematic review teams determine when meta-analyses are not up to date. Methods: We propose new methods for determining when non-significant meta-analytic results might be overturned, based on a prediction of the number of participants required in new studies. To guide decision making, we introduce the "new participant ratio", the ratio of the actual number of participants in new studies to the predicted number required to obtain statistical significance. A simulation study was conducted to study the performance of our methods and a real meta-analysis provides further evidence. Results: In our three simulation configurations, our diagnostic test for determining whether a meta-analysis is out of date had sensitivity of 55%, 62%, and 49% with corresponding specificity of 85%, 80%, and 90%, respectively. Conclusions: Simulations suggest that our methods are able to detect out-of-date meta-analyses. These quick and approximate methods show promise for use by systematic review teams to help decide whether to commit the considerable resources required to update a meta-analysis. Further investigation and evaluation of the methods is required before they can be recommended for general use.
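    A minimal sketch of the "new participant ratio" idea follows. It assumes a fixed-effect pooled estimate whose standard error scales as 1/√n (homogeneous studies), which is a simplification of the paper's prediction method; the function names and numbers are illustrative, not the authors' implementation.

```python
# Predict how many new participants a non-significant meta-analysis needs
# before its pooled effect could reach significance, then form the ratio of
# participants actually added to that prediction.

def predicted_new_participants(theta, se, n_total, z_crit=1.96):
    """Participants needed so |theta| / se* reaches z_crit, assuming the
    standard error shrinks like 1/sqrt(n) as participants accumulate."""
    if theta == 0.0:
        return float("inf")  # a null effect can never reach significance
    n_required = n_total * (z_crit * se / abs(theta)) ** 2
    return max(n_required - n_total, 0.0)

def new_participant_ratio(actual_new, theta, se, n_total):
    """Ratio >= 1 suggests the meta-analysis may be out of date."""
    return actual_new / predicted_new_participants(theta, se, n_total)

# Illustrative numbers: pooled effect 0.10 (SE 0.08) from 1000 participants,
# with 600 participants since added by new studies.
print(predicted_new_participants(0.10, 0.08, 1000))
print(new_participant_ratio(600, 0.10, 0.08, 1000))
```

    Under these assumptions the update adds fewer than half the participants predicted to be needed, so the null result would likely stand.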

  15. Moisture advection to the Arctic : forecasted, analysed and observed

    Science.gov (United States)

    Dufour, Ambroise; Zolina, Olga

    2015-04-01

    Besides their contribution to the Arctic hydrological budget, moisture imports from mid-latitudes are also influential on shorter time scales, since water vapour advection tends to occur together with extratropical cyclones. Influxes of moisture to the Arctic cause the formation of clouds that have an immediate impact on the surface energy budget, especially in winter. In the long run, inaccuracies in the description of cloud cover and phase lead to temperature biases in CMIP5 models. The ECMWF workshop on polar prediction has highlighted moisture advection as one of the problematic physical processes limiting the quality of forecasts. Verifying the accuracy of medium-term forecasts is of interest beyond weather prediction: it points to the ability of models to bring adequate quantities of moisture to the Arctic when they are less constrained by observations than in analyses. In this study, we have compared forecasted moisture flux fields with analyses and observations over the period 2000-2010. ECMWF's ERA-Interim provided the forecasts, extending to ten days. For the analyses, in addition to ERA-Interim, we used the Arctic System Reanalysis, whose forecast model is optimized for the polar regions and runs at high resolution (30 km). Finally, the Integrated Global Radiosonde Archive data over the Arctic allowed validation against observations.

  16. Genotype-phenotype analyses of classic neuronal ceroid lipofuscinosis (NCLs): genetic predictions from clinical and pathological findings

    Institute of Scientific and Technical Information of China (English)

    Weina JU; W. Ted BROWN; Nanbert ZHONG; Anetta WRONSKA; Dorota N. MOROZIEWICZ; Rocksheng ZHONG; Natalia WISNIEWSKI; Anna JURKIEWICZ; Michael FIORY; Krystyna E. WISNIEWSKI; Lance JOHNSTON

    2006-01-01

    Objective: Genotype-phenotype associations were studied in 517 subjects clinically affected by classical neuronal ceroid lipofuscinosis (NCL). Methods: Genetic loci CLN1-3 were analyzed in regard to age of onset, initial neurological symptoms, and electron microscope (EM) profiles. Results: The most common initial symptom leading to a clinical evaluation was developmental delay (30%) in NCL1, seizures (42.4%) in NCL2, and vision problems (53.5%) in NCL3. Eighty-two percent of NCL1 cases had granular osmiophilic deposits (GRODs) or mixed-GROD-containing EM profiles; 94% of NCL2 cases had curvilinear (CV) or mixed-CV-containing profiles; and 91% of NCL3 had fingerprint (FP) or mixed-FP-containing profiles. The mixed-type EM profile was found in approximately one-third of the NCL cases. DNA mutations within a specific CLN gene were further correlated with NCL phenotypes. Seizures were noticed to associate with common mutations 523G>A and 636C>T of CLN2 in NCL2 but not with common mutations 223G>A and 451C>T of CLN1 in NCL1. Vision loss was the initial symptom in all types of mutations in NCL3. Surprisingly, our data showed that the age of onset was atypical in 51.3% of NCL1 (infantile form) cases, 19.7% of NCL2 (late-infantile form) cases, and 42.8% of NCL3 (juvenile form) cases. Conclusion: Our data provide an overall picture regarding the clinical recognition of classical childhood NCLs. This may assist in the prediction and genetic identification of NCL1-3 via their characteristic clinical features.

  17. Adiponectin Induces A20 Expression in Adipose Tissue To Confer Metabolic Benefit

    OpenAIRE

    Hand LE, Usan P, Cooper GJS, Xu LY, Ammori B, Cunningham PS, Aghamohammadzadeh R, Soran H, Greenstein A, Loudon ASI, Bechtold DA, Ray DW

    2014-01-01

    Obesity is a major risk factor for metabolic disease, with white adipose tissue (WAT) inflammation emerging as a key underlying pathology. We detail that mice lacking Reverbα exhibit enhanced fat storage without the predicted increased WAT inflammation or loss of insulin sensitivity. In contrast to most animal models of obesity and obese human patients, Reverbα−/− mice exhibit elevated serum adiponectin levels and increased adiponectin secretion from WAT explants in vitro, highlighting ...

  18. Involvement of Ubiquitin-Editing Protein A20 in Modulating Inflammation in Rat Cochlea Associated with Silver Nanoparticle-Induced CD68 Upregulation and TLR4 Activation.

    Science.gov (United States)

    Feng, Hao; Pyykkö, Ilmari; Zou, Jing

    2016-12-01

    Silver nanoparticles (AgNPs) were shown to temporarily impair the biological barriers in the skin of the external ear canal, mucosa of the middle ear, and inner ear, causing partially reversible hearing loss after delivery into the middle ear. The current study aimed to elucidate the molecular mechanism, emphasizing the TLR signaling pathways in association with the potential recruitment of macrophages in the cochlea and the modulation of inflammation by ubiquitin-editing protein A20. Molecules potentially involved in these signaling pathways were thoroughly analysed using immunohistochemistry in the rat cochlea exposed to AgNPs at various concentrations through intratympanic injection. The results showed that 0.4 % AgNPs but not 0.02 % AgNPs upregulated the expressions of CD68, TLR4, MCP1, A20, and RNF11 in the strial basal cells, spiral ligament fibrocytes, and non-sensory supporting cells of Corti's organ. 0.4 % AgNPs had no effect on CD44, TLR2, MCP2, Rac1, myosin light chain, VCAM1, Erk1/2, JNK, p38, IL-1β, TNF-α, TNFR1, TNFR2, IL-10, or TGF-β. This study suggested that AgNPs might confer macrophage-like functions on the strial basal cells and spiral ligament fibrocytes and enhance the immune activities of non-sensory supporting cells of Corti's organ through the upregulation of CD68, which might be involved in TLR4 activation. A20 and RNF11 played roles in maintaining cochlear homeostasis via negative regulation of the expressions of inflammatory cytokines. PMID:27142878


  20. Assessment of protein disorder region predictions in CASP10

    KAUST Repository

    Monastyrskyy, Bohdan

    2013-11-22

    The article presents the assessment of disorder region predictions submitted to CASP10. The evaluation is based on the three measures tested in previous CASPs: (i) balanced accuracy, (ii) the Matthews correlation coefficient for the binary predictions, and (iii) the area under the curve in the receiver operating characteristic (ROC) analysis of predictions using probability annotation. We also performed new analyses such as comparison of the submitted predictions with those obtained with a Naïve disorder prediction method and with predictions from the disorder prediction databases D2P2 and MobiDB. On average, the methods participating in CASP10 demonstrated slightly better performance than those in CASP9.

  1. Predicting protein structure classes from function predictions

    DEFF Research Database (Denmark)

    Sommer, I.; Rahnenfuhrer, J.; de Lichtenberg, Ulrik;

    2004-01-01

    We introduce a new approach to using the information contained in sequence-to-function prediction data in order to recognize protein template classes, a critical step in predicting protein structure. The data on which our method is based comprise probabilities of functional categories; for given... ...-to-structure prediction methods.

  2. Economic analysis of the Dutch biotechnology sector (Economische analyse van de Nederlandse biotechnologiesector)

    OpenAIRE

    Giessen, A.M. van der; Gijsbers, G.W.; Koops, R.; Zee, F.A. van der

    2014-01-01

    Commissioned by the Netherlands Commission on Genetic Modification (COGEM), TNO carried out a desk study entitled "Economische analyse van de Nederlandse biotechnologiesector" (Economic analysis of the Dutch biotechnology sector). This analysis is one of the preliminary studies that COGEM is having carried out in preparation for the Trend Analysis Biotechnology, which is expected to be conducted in 2015. For this analysis, COGEM asked TNO to map the developments, trends and opportunities of biotechnology anew, with an emphasis on econo...

  3. TNFAIP3 (A20) is a tumor suppressor gene in Hodgkin lymphoma and primary mediastinal B cell lymphoma

    OpenAIRE

    Schmitz, Roland; Hansmann, Martin-Leo; Bohle, Verena; Martin-Subero, Jose Ignacio; Hartmann, Sylvia; Mechtersheimer, Gunhild; Klapper, Wolfram; Vater, Inga; Giefing, Maciej; Gesk, Stefan; Stanelle, Jens; Siebert, Reiner; Küppers, Ralf

    2009-01-01

    Proliferation and survival of Hodgkin and Reed/Sternberg (HRS) cells, the malignant cells of classical Hodgkin lymphoma (cHL), are dependent on constitutive activation of nuclear factor κB (NF-κB). NF-κB activation through various stimuli is negatively regulated by the zinc finger protein A20. To determine whether A20 contributes to the pathogenesis of cHL, we sequenced TNFAIP3, encoding A20, in HL cell lines and laser-microdissected HRS cells from cHL biopsies. We detected ...

  4. 49 CFR 1180.7 - Market analyses.

    Science.gov (United States)

    2010-10-01

    ... OF TRANSPORTATION RULES OF PRACTICE RAILROAD ACQUISITION, CONTROL, MERGER, CONSOLIDATION PROJECT, TRACKAGE RIGHTS, AND LEASE PROCEDURES General Acquisition Procedures § 1180.7 Market analyses. (a)...

  5. [Anne Arold. Kontrastive Analyse...] / Paul Alvre

    Index Scriptorium Estoniae

    Alvre, Paul, 1921-2008

    2001-01-01

    Review: Arold, Anne. Kontrastive Analyse der Wortbildungsmuster im Deutschen und im Estnischen (am Beispiel der Aussehensadjektive). Tartu, 2000. (Dissertationes philologiae germanicae Universitatis Tartuensis)

  6. The ASSET intercomparison of ozone analyses: method and first results

    Directory of Open Access Journals (Sweden)

    A. J. Geer

    2006-01-01

    This paper aims to summarise the current performance of ozone data assimilation (DA) systems, to show where they can be improved, and to quantify their errors. It examines 11 sets of ozone analyses from 7 different DA systems. Two are numerical weather prediction (NWP) systems based on general circulation models (GCMs); the other five use chemistry transport models (CTMs). The systems examined contain either linearised or detailed ozone chemistry, or no chemistry at all. In most analyses, MIPAS (Michelson Interferometer for Passive Atmospheric Sounding) ozone data are assimilated; two assimilate SCIAMACHY (Scanning Imaging Absorption Spectrometer for Atmospheric Chartography) observations instead. Analyses are compared to independent ozone observations covering the troposphere, stratosphere and lower mesosphere during the period July to November 2003. Biases and standard deviations are largest, and show the largest divergence between systems, in the troposphere, in the upper-troposphere/lower-stratosphere, in the upper-stratosphere and mesosphere, and the Antarctic ozone hole region. However, in any particular area, apart from the troposphere, at least one system can be found that agrees well with independent data. In general, none of the differences can be linked to the assimilation technique (Kalman filter, three- or four-dimensional variational methods, direct inversion) or the system (CTM or NWP system). Where results diverge, a main explanation is the way ozone is modelled. It is important to correctly model transport at the tropical tropopause, to avoid positive biases and excessive structure in the ozone field. In the southern hemisphere ozone hole, only the analyses which correctly model heterogeneous ozone depletion are able to reproduce the near-complete ozone destruction over the pole. In the upper-stratosphere and mesosphere (above 5 hPa), some ozone photochemistry schemes caused large but easily remedied biases. The diurnal cycle of ozone in the mesosphere is not captured, except by the one system that includes a detailed treatment of mesospheric chemistry.

  7. Can NGOs Make a Difference? Revisiting and Reframing a 20-year Debate

    DEFF Research Database (Denmark)

    Opoku-Mensah, Paul Yaw

    2007-01-01

    The article seeks to connect the vibrant debates in the Nordic region on NGOs and the aid system with the international comparative debates on NGOs and development alternatives. It argues for a reformulation of the international debate on NGOs and development alternatives to address the foundational questions related to the formative role and structural impact of the international aid system on NGOs and their roles. This reformulation moves the discussions further and enables analyses that provide understanding of the actual and potential role of NGOs to transform development processes.

  8. Structure of adeno-associated virus-2 in complex with neutralizing monoclonal antibody A20

    Energy Technology Data Exchange (ETDEWEB)

    McCraw, Dustin M. [Department of Biochemistry and Molecular Biology, School of Medicine, Mail code L224, Oregon Health and Science University, 3181 S.W. Sam Jackson Park Road, Portland, OR 97239-3098 (United States); O'Donnell, Jason K. [Institute of Molecular Biophysics, Florida State University, Tallahassee, FL 32306-4380 (United States); Taylor, Kenneth A. [Department of Biological Science, Florida State University, Tallahassee, FL 32306-4295 (United States); Stagg, Scott M. [Institute of Molecular Biophysics, Florida State University, Tallahassee, FL 32306-4380 (United States); Department of Chemistry and Biochemistry, Florida State University, Tallahassee, FL 32306 (United States); Chapman, Michael S., E-mail: chapmami@ohsu.edu [Department of Biochemistry and Molecular Biology, School of Medicine, Mail code L224, Oregon Health and Science University, 3181 S.W. Sam Jackson Park Road, Portland, OR 97239-3098 (United States)

    2012-09-15

    The use of adeno-associated virus (AAV) as a gene therapy vector is limited by the host neutralizing immune response. The cryo-electron microscopy (EM) structure at 8.5 Å resolution is determined for a complex of AAV-2 with the Fab' fragment of monoclonal antibody (MAb) A20, the most extensively characterized AAV MAb. The binding footprint is determined through fitting the cryo-EM reconstruction with a homology model following sequencing of the variable domain, and provides a structural basis for integrating diverse prior epitope mappings. The footprint extends from the previously implicated plateau to the side of the spike, and into the conserved canyon, covering a larger area than anticipated. Comparison with structures of binding and non-binding serotypes indicates that recognition depends on a combination of subtle serotype-specific features. Separation of the neutralizing epitope from the heparan sulfate cell attachment site encourages attempts to develop immune-resistant vectors that can still bind to target cells.

  9. Manipulation of palladium nanoparticles in a 20 nm gap between electrodes for hydrogen sensor application

    International Nuclear Information System (INIS)

    This study reports a promising, cost-effective nanoscale hydrogen sensor fabricated using the dielectrophoresis (DEP) process. Palladium nanoparticles (NPs) of diameter in the range 2-4 nm were assembled in a 20 nm gap between electrodes under optimized DEP parameters of frequency, voltage and assembling time of 1 MHz, 1.5 V and 90 s, respectively. The fabricated nanoscale device was powered by applying a dc voltage of 10 mV across the nanogap electrodes, and the temporal change in resistance at an operating temperature of 160 deg. C was recorded in the presence of 3000 ppm of hydrogen gas. Rise and recovery times of 100 s and 300 s, respectively, in the temporal hydrogen gas response characteristic were observed, which could be attributed to hydride formation due to the strong affinity of the assembled palladium NPs towards hydrogen. The nanoscale device was sensitive enough to respond to hydrogen presence even at 30 deg. C. Preliminary results show the potential of DEP in fabricating cost-effective nanoscale hydrogen sensors.

  10. Manipulation of palladium nanoparticles in a 20 nm gap between electrodes for hydrogen sensor application

    Energy Technology Data Exchange (ETDEWEB)

    Huy, Binh Le; Kim, Gil-Ho [Department of Electronic and Electrical Engineering and Sungkyunkwan Advanced Institute of Nanotechnology, Sungkyunkwan University, Suwon 440-746 (Korea, Republic of); Kumar, Sanjeev, E-mail: ghkim@skku.edu [London Centre for Nanotechnology, University College London, 17-19 Gordon Street, London WC1H0AH (United Kingdom)

    2011-08-17

    This study reports a promising, cost-effective nanoscale hydrogen sensor fabricated using the dielectrophoresis (DEP) process. Palladium nanoparticles (NPs) with diameters in the range of 2-4 nm were assembled in a 20 nm gap between electrodes under optimized DEP parameters of frequency, voltage and assembling time of 1 MHz, 1.5 V and 90 s, respectively. The fabricated nanoscale device was powered by applying a dc voltage of 10 mV across the nanogap electrodes, and the temporal change in resistance at an operating temperature of 160 °C was recorded in the presence of 3000 ppm of hydrogen gas. Rise and recovery times of 100 s and 300 s, respectively, were observed in the temporal hydrogen gas response, which could be attributed to hydride formation due to the strong affinity of the assembled palladium NPs for hydrogen. The nanoscale device was sensitive enough to respond to hydrogen even at 30 °C. These preliminary results show the potential of DEP for fabricating cost-effective nanoscale hydrogen sensors.
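The 100 s rise and 300 s recovery figures come from the measured resistance-time trace. A minimal sketch of how such times can be extracted with a 10-90% criterion, using a synthetic exponential response in place of measured data (the 50 s time constant is hypothetical):

```python
import math

def crossing_time(ts, ys, level):
    """First time the signal crosses `level`, found by linear interpolation."""
    for (t0, y0), (t1, y1) in zip(zip(ts, ys), zip(ts[1:], ys[1:])):
        if (y0 - level) * (y1 - level) <= 0 and y0 != y1:
            return t0 + (level - y0) * (t1 - t0) / (y1 - y0)
    return None

# Synthetic (hypothetical) normalized response: 0 -> 1 with a 50 s time constant.
tau = 50.0
ts = [float(t) for t in range(601)]              # seconds
ys = [1.0 - math.exp(-t / tau) for t in ts]

t10 = crossing_time(ts, ys, 0.1)                 # 10% of full response
t90 = crossing_time(ts, ys, 0.9)                 # 90% of full response
rise_time = t90 - t10                            # 10-90% rise time, seconds
```

For measured data, `ys` would be the baseline-normalized resistance trace; the same crossing search applied to the falling edge after gas removal gives the recovery time.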

  11. Characterization of a 20-nm hard x-ray focus by ptychographic coherent diffractive imaging

    Science.gov (United States)

    Vila-Comamala, Joan; Diaz, Ana; Guizar-Sicairos, Manuel; Gorelick, Sergey; Guzenko, Vitaliy A.; Karvinen, Petri; Kewish, Cameron M.; Färm, Elina; Ritala, Mikko; Mantion, Alexandre; Bunk, Oliver; Menzel, Andreas; David, Christian

    2011-09-01

    Recent advances in the fabrication of diffractive X-ray optics have boosted hard X-ray microscopy to spatial resolutions of 30 nm and below. Here, we demonstrate the fabrication of zone-doubled Fresnel zone plates for multi-keV photon energies (4-12 keV) with outermost zone widths down to 20 nm. However, the characterization of such elements is not straightforward using conventional methods such as knife-edge scans on well-characterized test objects. To overcome this limitation, we have used ptychographic coherent diffractive imaging to characterize a 20 nm-wide X-ray focus produced by a zone-doubled Fresnel zone plate at a photon energy of 6.2 keV. An ordinary scanning transmission X-ray microscope was modified to acquire the ptychographic data from a strongly scattering test object. The ptychographic algorithms allowed for the reconstruction of the image of the test object as well as of the focused hard X-ray beam waist, with high spatial resolution and dynamic range. This method yields a full description of the focusing performance of the Fresnel zone plate, and we demonstrate the usefulness of ptychographic coherent diffractive imaging for metrology and alignment of nanofocusing diffractive X-ray lenses.

  12. Conceptual design of a 20 Tesla pulsed solenoid for a laser solenoid fusion reactor

    International Nuclear Information System (INIS)

    Design considerations are described for a strip-wound solenoid which is pulsed to 20 tesla while immersed in a 20 tesla bias field, so as to achieve within the bore of the pulsed solenoid a net field sequence starting at 20 tesla, going first down to zero, then up to 40 tesla, and finally back to 20 tesla in a period of about 5 × 10⁻³ seconds. The important parameters of the solenoid, e.g., aperture, build, turns, stored and dissipated energy, field intensity and powering circuit, are given. A numerical example for a specific design is presented. Mechanical stresses in the solenoid and the consequent choice of materials for coil construction are discussed. Although several possible design difficulties are not discussed in this preliminary report of a conceptual magnet design (such as uniformity of field, long-term stability of insulation under neutron bombardment, and the choice of structural materials of appropriate tensile strength and elasticity to withstand the magnetic forces developed), these questions are addressed in detail in the complete design report and in part in reference one. Furthermore, the authors feel that the problems encountered in this conceptual design are surmountable and are not a hindrance to the construction of such a magnet system.
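As a feel for the mechanical stresses the abstract alludes to, the equivalent magnetic pressure P = B²/2μ₀ on the windings can be evaluated at the 20 T and 40 T points of the field sequence. This is only an order-of-magnitude check, not the report's actual stress analysis:

```python
# Magnetic pressure P = B^2 / (2 * mu0) at the bias and peak net fields.
# Order-of-magnitude check only; the true coil stress distribution
# depends on the winding geometry treated in the design report.
mu0 = 4.0e-7 * 3.141592653589793        # vacuum permeability, T.m/A

pressures = {B: B**2 / (2.0 * mu0) / 1.0e6 for B in (20.0, 40.0)}  # MPa
# pressures[20.0] ~ 159 MPa; pressures[40.0] ~ 637 MPa, which is why
# material tensile strength dominates the coil design.
```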

  13. Seismic Collapse Assessment of a 20-Story Steel Moment-Resisting Frame Structure

    Directory of Open Access Journals (Sweden)

    Annika Mathiasson

    2014-10-01

    The 2010 edition of the load standard in the United States (U.S.), ASCE 7-10 (Minimum Design Loads for Buildings and Other Structures), introduced risk-targeted spectral acceleration values for the estimation of seismic design loads. In this study, a 20-story steel moment-resisting frame structure located in Century City, CA, USA was designed based on ASCE 7-10 and a probabilistic seismic collapse assessment was conducted. The main goals of this study are: (a) to evaluate whether the design of a typical steel moment-frame structure based on risk-targeted spectral accelerations fulfills the target design collapse level of 1% probability of collapse in 50 years; and (b) to quantify the collapse potential of a tall steel structure designed based on the most current U.S. seismic code provisions. The probability of collapse was estimated for two sets of 104 and 224 recorded ground motions, respectively. An evaluation of the results demonstrated that for this specific structure the code-prescribed collapse performance target was reasonably met.
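The 1%-in-50-years target can be related to an equivalent annual collapse rate under the usual Poisson assumption. A minimal sketch of that conversion (the Poisson assumption is ours; only the target value comes from the abstract):

```python
import math

target_p50 = 0.01                                   # 1% probability in 50 years
annual_rate = -math.log(1.0 - target_p50) / 50.0    # collapses per year
p50_check = 1.0 - math.exp(-annual_rate * 50.0)     # recover the 50-year prob.
# annual_rate is about 2.0e-4 per year.
```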

  14. Preliminary design of a 20 MW ICRF power system for Doublet III-D

    International Nuclear Information System (INIS)

    The heating of plasmas by waves in the ion cyclotron frequency regime is now the preferred approach for driving tokamak reactors to ignition conditions. Among the notable advantages of this approach are ready penetration into a thermonuclear plasma and an established technology for power generation. An ICRF power system for DIII-D (Doublet III reconfigured with a large dee-shaped vacuum vessel) has been designed to test the feasibility of employing such heating in a reactor. The proposed system is a narrow-band rf generator operating at 55 and 40 MHz, constructed in modules capable of delivering 1.25 MW each. A 20 MW, 10 s system is planned for the program envisioned. A system description is presented, from rf excitation through the final power amplifiers. The layout of equipment within the D-III facility is also described. The modular nature of the total system facilitates meaningful testing at intermediate power levels, future system enlargement, and even reuse of the hardware on other facilities.

  15. Transient Structural Analysis of a 20-m Diameter, Hyper-Energetic Lightcraft: Part 1 Axisymmetric Model

    Science.gov (United States)

    Myrabo, Leik N.; Cassenti, Brice N.

    2005-04-01

    An axisymmetric finite element (FEM) structural analysis has been performed on a 20-m diameter hyper-energetic lightcraft designed to transport 6-12 occupants around the planet or directly to low Earth orbit, without resorting to refueling or staging. As proposed, the lenticular double-hull of this super-pressure, balloon-type craft is fabricated from microwave-transparent silicon carbide films of superior strength, inflated with 2 atm of helium. A perimeter toroidal tube, serving as the primary structural 'backbone,' is pressurized to 25 atm. The remote beam-energized MHD propulsion system (with directed-energy airspike) is intimately integrated with the craft's tensile-type structure and is not distinguishable as an item separate from the vehicle, as in conventional spacecraft. The design assumption of liquid-immersion G-suits, individualized escape pods, and (optional) partial liquid ventilation assures super-human levels of crew survivability, enabling accelerations of 25 to 50 Gs, or more. The vehicle dry mass is 1200 kg; payload is 1200 kg (crew and escape pods); expendable coolant is 2400 kg of ultra-pure, deionized water (for waste heat rejection from the rectenna arrays during orbital boosts). For simplicity, the payload is assumed 'distributed' as a thin circular disc directly below the central rectenna. Preliminary findings of this axisymmetric FEM structural analysis are encouraging, and suggest that such craft may indeed be feasible within a generation, perhaps by 2025.

  16. Toward a 20 per cent wind power supply in Quebec by 2020 (10,000 MW)

    International Nuclear Information System (INIS)

    Canada has established high targets for future wind energy production. Quebec's target for 2015 is 4,000 MW. This presentation questioned the feasibility of meeting this target from both a technical and a political point of view. The author cautioned that if the current energy strategy in Quebec is not updated, the wind industry will not have a future in the province for the next 20 years, primarily because hydroelectric power development will supply new demand for the period 2015 to 2025. The presentation included a series of graphs depicting historical sales and forecasts of electricity demand in Quebec; new nuclear, hydro and wind generation projects from 2015 to 2025; net electricity exports in Quebec; Hydro-Quebec expected supply and demand from 2005 to 2025 without any new wind power; and the Hydro-Quebec generation mix anticipated for 2015 to 2025. Solutions for wind power after 2015 were presented. It was concluded that a 20 per cent wind power supply in Quebec by 2020 is feasible, particularly if the energy strategy in the province improves in terms of exports, electricity policies and new approaches for source comparison and choice.

  17. Deletion of the TNFAIP3/A20 gene detected by FICTION analysis in classical Hodgkin lymphoma

    International Nuclear Information System (INIS)

    The TNFAIP3 gene, which encodes a ubiquitin-modifying enzyme (A20) involved in the negative regulation of NF-κB signaling, is frequently inactivated by gene deletions/mutations in a variety of B-cell malignancies. However, detection of such lesions in primary Hodgkin lymphoma (HL) specimens is hampered by the scarcity of Hodgkin Reed-Sternberg (HRS) cells, even after enrichment by micro-dissection. We used anti-CD30 immunofluorescence combined with fluorescence in-situ hybridization (FISH) to evaluate the relative number of TNFAIP3/CEP6 double-positive signals in CD30-positive cells. Of a total of 47 primary classical Hodgkin lymphoma (cHL) specimens, 44 were evaluable. We found that the relative TNFAIP3/CEP6 signal numbers in CD30-positive cells were distributed among three groups, corresponding to homozygous (11%), heterozygous (32%), and no (57%) deletion of TNFAIP3. This shows that TNFAIP3 deletions can be sensitively detected using our chosen methods. Comparison with mutation analysis showed that TNFAIP3 inactivation had escaped detection in many samples with homozygous deletions. This suggests that TNFAIP3 inactivation in primary cHL specimens might be more frequent than previously reported.
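The three groups correspond to per-cell TNFAIP3/CEP6 signal ratios of roughly 1, 0.5 and 0 (two, one, or no gene copies). A toy classifier along those lines — the cut-offs and case ratios below are hypothetical; the study grouped specimens from the observed distribution rather than fixed thresholds:

```python
def classify_deletion(ratio):
    """Call TNFAIP3 status from the TNFAIP3/CEP6 signal ratio in CD30+ cells.

    Hypothetical cut-offs: ~1.0 means two TNFAIP3 copies, ~0.5 one copy,
    ~0.0 none.
    """
    if ratio < 0.25:
        return "homozygous deletion"
    if ratio < 0.75:
        return "heterozygous deletion"
    return "no deletion"

specimens = {"case A": 0.98, "case B": 0.52, "case C": 0.07}  # made-up ratios
calls = {name: classify_deletion(r) for name, r in specimens.items()}
```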

  18. Development of SAWEC, version 1.22. A simulation and Analysis model to explain and predict energy consumption and CO2 emission in residential buildings; Ontwikkeling van SAWEC, Versie 1.22. Een Simulatie en Analyse model voor verklaring en voorspelling van het Woninggebonden Energieverbruik en CO2-emissie

    Energy Technology Data Exchange (ETDEWEB)

    Jeeninga, H.; Volkers, C.H. [ECN Beleidsstudies, Petten (Netherlands)

    2003-07-01

    SAWEC is a model for the simulation and analysis of energy consumption and CO2 emissions from residential energy use. Unlike its predecessor, the model SAVE-Households, SAWEC is based on the KWR survey, an extensive survey of the quality of dwellings that is conducted every five years. The development of SAWEC draws to a large extent on the expertise developed over the past decade with SAVE-Households. However, the dwelling stock is modelled in more detail and the vintage approach has been improved. A distinction is made between ownership (three types), type of dwelling (four types), date of construction (five types) and infrastructure (three types). Furthermore, a new approach to the development of investment costs has been implemented and the database of energy conservation measures has been re-designed. In designing the model, specific attention was paid to the flexibility to incorporate new features in the near future, such as endogenous modelling of lifestyle changes (e.g. as a result of demographic changes) and learning curves. This report describes the design of the SAWEC model; the user guide can be found in chapter 6. [Translated from Dutch] Commissioned by VROM-DGW, ECN Beleidsstudies has developed the simulation model SAWEC, a Simulation and Analysis model for explaining and predicting residential energy consumption and CO2 emission. Several reasons underlay the wish to develop a successor to the SAVE-Households model in use at ECN. SAVE-Households is based on the BEK and BAK surveys of EnergieNed. The results of those surveys are only partly comparable with the KWR survey carried out for VROM-DGW, because both the penetration rate of measures in a given reference year and the change in that penetration rate over a given period deviate from the KWR results. In developing the SAWEC model, insofar as
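The disaggregation described above (three ownership types, four dwelling types, five construction periods, three infrastructure types) implies 3 × 4 × 5 × 3 = 180 stock segments. A sketch of that segmentation; the category labels are our illustrative assumptions, only the counts per dimension come from the abstract:

```python
from itertools import product

# Category labels are illustrative; only the number of categories per
# dimension is taken from the SAWEC model description.
ownership = ("owner-occupied", "social rental", "private rental")
dwelling_type = ("detached", "semi-detached", "terraced", "apartment")
construction = ("pre-1945", "1945-1964", "1965-1974", "1975-1987", "post-1987")
infrastructure = ("gas grid", "district heating", "all-electric")

segments = list(product(ownership, dwelling_type, construction, infrastructure))
# Each model quantity (e.g. energy use, CO2 emission) is tracked per segment.
```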

  19. Disruption prediction at JET

    International Nuclear Information System (INIS)

    The sudden loss of plasma magnetic confinement, known as a disruption, is one of the major issues in a nuclear fusion machine such as JET (Joint European Torus). Disruptions pose very serious problems for the safety of the machine. The energy stored in the plasma is released to the machine structure in a few milliseconds, resulting in forces that at JET reach several meganewtons. The problem is even more severe in a nuclear fusion power station, where the forces would be on the order of one hundred meganewtons. The events that occur during a disruption are still not well understood, even though some mechanisms that can lead to a disruption have been identified and can be used to predict them. Unfortunately, it is always a combination of these events that generates a disruption, and therefore it is not possible to use simple algorithms to predict it. This thesis analyses the possibility of using neural network algorithms to predict plasma disruptions in real time. This involves the determination of plasma parameters every few milliseconds. A plasma boundary reconstruction algorithm, XLOC, has been developed in collaboration with Dr. D. O'Brien and Dr. J. Ellis, capable of determining the plasma-wall distance every 2 milliseconds. The XLOC output has been used to develop a multilayer perceptron network to determine plasma parameters such as li and qψ, with which a machine operational space has been experimentally defined. If the limits of this operational space are breached, the disruption probability increases considerably. Another approach to predicting disruptions is to use neural network classification methods to define the JET operational space. Two methods have been studied. The first method uses a multilayer perceptron network with a softmax activation function for the output layer. This method can be used for classifying the input patterns into various classes. In this case the plasma input patterns have been divided between disrupting and safe patterns, giving the possibility of
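A minimal numpy sketch of the classification approach: a multilayer perceptron whose softmax output layer turns plasma input patterns into "safe" vs "disrupting" probabilities. The layer sizes and random weights are placeholders; the thesis trained on measured parameters such as li and qψ:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    """Row-wise softmax; the outputs of each row sum to 1."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def mlp_forward(x, w1, b1, w2, b2):
    """One hidden layer with tanh, softmax output layer."""
    h = np.tanh(x @ w1 + b1)
    return softmax(h @ w2 + b2)

n_in, n_hidden, n_out = 4, 8, 2          # inputs -> hidden -> {safe, disrupting}
w1 = rng.normal(size=(n_in, n_hidden)); b1 = np.zeros(n_hidden)
w2 = rng.normal(size=(n_hidden, n_out)); b2 = np.zeros(n_out)

patterns = rng.normal(size=(5, n_in))    # five placeholder plasma patterns
probs = mlp_forward(patterns, w1, b1, w2, b2)
```

Trained weights would replace the random ones; the softmax output then gives a per-pattern disruption probability that can be thresholded for a real-time alarm.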

  20. Star 48 solid rocket motor nozzle analyses and instrumented firings

    Science.gov (United States)

    Porter, R. L.

    1986-01-01

    The analyses and testing performed by NASA in support of an expanded and improved nozzle design database for use by the U.S. solid rocket motor industry are presented. A production nozzle with a history of one ground failure and two flight failures was selected for analysis and testing. The stress analysis was performed with the Champion computer code developed by the U.S. Navy, to which several improvements were made. Strain predictions were made and compared to test data. Two short-duration motor firings were conducted with highly instrumented nozzles. The first nozzle had 58 thermocouples, 66 strain gages, and 8 bondline pressure measurements; the second had 59 thermocouples, 68 strain measurements, and 8 bondline pressure measurements. Most of this instrumentation was on the nonmetallic parts, and provided significantly more thermal and strain data on the nonmetallic components of a nozzle than had been accumulated in any solid rocket motor test to date.

  1. Prediction of coefficients of thermal expansion for unidirectional composites

    Science.gov (United States)

    Bowles, David E.; Tompkins, Stephen S.

    1989-01-01

    Several analyses for predicting the longitudinal, alpha(1), and transverse, alpha(2), coefficients of thermal expansion of unidirectional composites were compared with each other and with experimental data on different graphite-fiber-reinforced resin, metal, and ceramic matrix composites. Analytical and numerical analyses that accurately accounted for Poisson restraining effects in the transverse direction were in consistently better agreement with experimental data for alpha(2) than the less rigorous analyses. All of the analyses predicted similar values of alpha(1) and were in good agreement with the experimental data. A sensitivity analysis was conducted to determine the relative influence of constituent properties on the predicted values of alpha(1) and alpha(2). As would be expected, the prediction of alpha(1) was most sensitive to longitudinal fiber properties, and the prediction of alpha(2) was most sensitive to matrix properties.
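One of the simpler analyses of this kind is Schapery's rule-of-mixtures estimate, which already shows the fiber-dominated alpha(1) and matrix-dominated alpha(2) behavior noted above. A sketch with illustrative carbon/epoxy constituent values (our assumptions, not the paper's data):

```python
def schapery_cte(Ef, Em, af, am, nuf, num, Vf):
    """Schapery estimates of unidirectional-composite CTEs.

    alpha1 is stiffness-weighted; alpha2 includes the Poisson term that
    the more rigorous analyses refine.
    """
    Vm = 1.0 - Vf
    a1 = (Ef * af * Vf + Em * am * Vm) / (Ef * Vf + Em * Vm)
    nu12 = nuf * Vf + num * Vm
    a2 = (1 + num) * am * Vm + (1 + nuf) * af * Vf - a1 * nu12
    return a1, a2

# Illustrative high-modulus carbon fiber in epoxy at 60% fiber volume:
a1, a2 = schapery_cte(Ef=230e9, Em=3.5e9, af=-0.5e-6, am=55e-6,
                      nuf=0.20, num=0.35, Vf=0.60)
# a1 comes out near zero (fiber-dominated) and a2 on the order of
# 30e-6 per K (matrix-dominated), matching the sensitivities above.
```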

  2. Making detailed predictions makes (some) predictions worse

    Science.gov (United States)

    Kelly, Theresa F.

    In this paper, we investigate whether making detailed predictions about an event makes other predictions worse. Across 19 experiments, 10,895 participants, and 415,960 predictions about 724 professional sports games, we find that people who made detailed predictions about sporting events (e.g., how many hits each baseball team would get) made worse predictions about more general outcomes (e.g., which team would win). We rule out that this effect is caused by inattention or fatigue, thinking too hard, or a differential reliance on holistic information about the teams. Instead, we find that thinking about game-relevant details before predicting winning teams causes people to give less weight to predictive information, presumably because predicting details makes information that is relatively useless for predicting the winning team more readily accessible in memory and therefore incorporated into forecasts. Furthermore, we show that this differential use of information can be used to predict what kinds of games will and will not be susceptible to the negative effect of making detailed predictions.

  3. Ginkgo biloba extract and long-term cognitive decline: a 20-year follow-up population-based study.

    Directory of Open Access Journals (Sweden)

    Hélène Amieva

    BACKGROUND: Numerous studies have looked at the potential benefits of various nootropic drugs, such as Ginkgo biloba extract (EGb761®; Tanakan®) and piracetam (Nootropyl®), on age-related cognitive decline, often with inconclusive results due to small sample sizes or insufficient follow-up duration. The present study assesses the association between intake of EGb761® and cognitive function of elderly adults over a 20-year period. METHODS AND FINDINGS: The data were gathered from the prospective community-based cohort study 'Paquid'. Within the study sample of 3612 non-demented participants aged 65 and over at baseline, three groups were compared: 589 subjects reporting use of EGb761® at at least one of the ten assessment visits, 149 subjects reporting use of piracetam at one of the assessment visits, and 2874 subjects not reporting use of either EGb761® or piracetam. Decline on the MMSE, verbal fluency and visual memory over the 20-year follow-up was analysed with a multivariate mixed linear effects model. A significant difference in MMSE decline over the 20-year follow-up was observed in the EGb761® and piracetam treatment groups compared to the 'neither treatment' group. These effects were in opposite directions: the EGb761® group declined less rapidly than the 'neither treatment' group, whereas the piracetam group declined more rapidly (β = -0.6). Regarding verbal fluency and visual memory, no difference was observed between the EGb761® group and the 'neither treatment' group (β = 0.21 and β = -0.03, respectively), whereas the piracetam group declined more rapidly (β = -1.40 and β = -0.44, respectively). When comparing the EGb761® and piracetam groups directly, a different decline was observed for the three tests (β = -1.07, β = -1.61 and β = -0.41, respectively). CONCLUSION: Cognitive decline in a non-demented elderly population was lower in subjects who reported using EGb761® than in those who did

  4. Genome-Facilitated Analyses of Geomicrobial Processes

    Energy Technology Data Exchange (ETDEWEB)

    Kenneth H. Nealson

    2012-05-02

    This project had the goal(s) of understanding the mechanism(s) of extracellular electron transport (EET) in the microbe Shewanella oneidensis MR-1, and in a number of other strains and species in the genus Shewanella. The major accomplishments included the sequencing, annotation, and analysis of more than 20 Shewanella genomes; the comparative genomics enabled the beginning of a systems biology approach to this genus. Another major contribution involved the study of gene regulation, primarily in the model organism MR-1. As part of this work, we took advantage of special facilities at the DOE, e.g., the synchrotron radiation facility at ANL, which we successfully used for elemental characterization of single cells in different metabolic states (1). We began work with purified enzymes, and identification of partially purified enzymes, leading to initial characterization of several of the 42 c-type cytochromes of MR-1 (2). As the genome became annotated, we began experiments on transcriptome analysis under different growth conditions, the first step towards systems biology (3,4). Conductive appendages of Shewanella, called bacterial nanowires, were identified and characterized during this work (5, 11, 20, 21). For the first time, it was possible to measure the electron transfer rate between single cells and a solid substrate (20), a rate that has since been confirmed by several other laboratories. We also showed that MR-1 cells preferentially attach to surfaces at a given charge, and are not attracted to, or are even repelled by, other charges. The interaction with charged surfaces begins with a stimulation of motility (called electrokinesis) and eventually leads to attachment and growth. One of the things that genomics allows is the comparative analysis of the various Shewanella strains, which led to several important insights. First, while the genomes predicted that none of the strains looked like they should be able to degrade N-acetyl glucosamine (NAG), the monomer

  5. Regulation of the human SLC25A20 expression by peroxisome proliferator-activated receptor alpha in human hepatoblastoma cells

    Energy Technology Data Exchange (ETDEWEB)

    Tachibana, Keisuke, E-mail: nya@phs.osaka-u.ac.jp [Graduate School of Pharmaceutical Sciences, Osaka University, 1-6 Yamadaoka, Suita, Osaka 565-0871 (Japan); Takeuchi, Kentaro; Inada, Hirohiko [Graduate School of Pharmaceutical Sciences, Osaka University, 1-6 Yamadaoka, Suita, Osaka 565-0871 (Japan); Yamasaki, Daisuke [Graduate School of Pharmaceutical Sciences, Osaka University, 1-6 Yamadaoka, Suita, Osaka 565-0871 (Japan); The Center for Advanced Medical Engineering and Informatics, Osaka University, 2-2 Yamadaoka, Suita, Osaka 565-0871 (Japan); Ishimoto, Kenji [Graduate School of Pharmaceutical Sciences, Osaka University, 1-6 Yamadaoka, Suita, Osaka 565-0871 (Japan); Graduate School of Medicine, Osaka University, 2-2 Yamadaoka, Suita, Osaka 565-0871 (Japan); Tanaka, Toshiya; Hamakubo, Takao; Sakai, Juro; Kodama, Tatsuhiko [Laboratory for System Biology and Medicine, Research Center for Advanced Science and Technology, University of Tokyo, 4-6-1 Komaba, Meguro, Tokyo 153-8904 (Japan); Doi, Takefumi [Graduate School of Pharmaceutical Sciences, Osaka University, 1-6 Yamadaoka, Suita, Osaka 565-0871 (Japan); The Center for Advanced Medical Engineering and Informatics, Osaka University, 2-2 Yamadaoka, Suita, Osaka 565-0871 (Japan); Graduate School of Medicine, Osaka University, 2-2 Yamadaoka, Suita, Osaka 565-0871 (Japan)

    2009-11-20

    Solute carrier family 25, member 20 (SLC25A20) is a key molecule that transfers acylcarnitine esters in exchange for free carnitine across the mitochondrial membrane in mitochondrial β-oxidation. The peroxisome proliferator-activated receptor alpha (PPARα) is a ligand-activated transcription factor that plays an important role in the regulation of β-oxidation. We previously established a tetracycline-regulated human cell line that can be induced to express PPARα and found that PPARα induces SLC25A20 expression. In this study, we analyzed the promoter region of the human slc25a20 gene and showed that PPARα regulates the expression of human SLC25A20 via the peroxisome proliferator responsive element.

  6. Validation of a 20-year forecast of US childhood lead poisoning: Updated prospects for 2010

    International Nuclear Information System (INIS)

    We forecast childhood lead poisoning and residential lead paint hazard prevalence for 1990-2010, based on a previously unvalidated model that combines national blood lead data with three different housing data sets. The housing data sets, which describe trends in housing demolition, rehabilitation, window replacement, and lead paint, are the American Housing Survey, the Residential Energy Consumption Survey, and the National Lead Paint Survey. Blood lead data are principally from the National Health and Nutrition Examination Survey. New data now make it possible to validate the midpoint of the forecast time period. For the year 2000, the model predicted 23.3 million pre-1960 housing units with lead paint hazards, compared to an empirical HUD estimate of 20.6 million units. Further, the model predicted 498,000 children with elevated blood lead levels (EBL) in 2000, compared to a CDC empirical estimate of 434,000. The model predictions were well within 95% confidence intervals of empirical estimates for both residential lead paint hazard and blood lead outcome measures. The model shows that window replacement explains a large part of the dramatic reduction in lead poisoning that occurred from 1990 to 2000. Here, the construction of the model is described and updated through 2010 using new data. Further declines in childhood lead poisoning are achievable, but the goal of eliminating children's blood lead levels ≥10 μg/dL by 2010 is unlikely to be achieved without additional action. A window replacement policy will yield multiple benefits of lead poisoning prevention, increased home energy efficiency, decreased power plant emissions, improved housing affordability, and other previously unrecognized benefits. Finally, combining housing and health data could be applied to forecasting other housing-related diseases and injuries

  7. A20 inhibits human salivary adenoid cystic carcinoma cells invasion via blocking nuclear factor-κB activation

    Institute of Scientific and Technical Information of China (English)

    ZHANG Bin; GUAN Cheng-chao; CHEN Wan-tao; ZHANG Ping; YAN Ming; SHI Jiu-hui; QIN Chun-lin; YANG Qian

    2007-01-01

    Background A20, also known as tumor necrosis factor α-induced protein 3 (TNFAIP3), is a cytoplasmic zinc finger protein that inhibits nuclear factor kappa-B (NF-κB) activity and prevents tumor necrosis factor (TNF)-mediated programmed cell death. NF-κB is a transcription factor that regulates the expression of genes involved in cell proliferation, cell survival and anti-apoptosis. Several studies have implicated the NF-κB signal pathway in the angiogenesis and clinico-pathological behavior of adenoid cystic carcinoma (ACC) of the salivary glands. Methods The ability of A20 overexpression to influence the biological behavior and invasion of ACC cells was examined. The cells were stably transfected with full-length A20 cDNA. Stable gene transfer was verified by real-time polymerase chain reaction (PCR) and Western blot analysis. The change in cell biological behavior was examined by methyl thiazolyl tetrazolium (MTT) and NF-κB luciferase reporter assays, and the invasion of the cells was examined in a Matrigel invasion chamber. Results The pEGFP-N3-A20 construct was stably transferred into ACC-2 cells and overexpressed. When cells were treated with TNFα, the NF-κB activity of ACC-2-A20 cells was down-regulated by about 46.32% relative to ACC-2-GFP cells (P<0.05). A20 potently inhibited the growth of the A20 transfectant ACC-2-A20 compared with the control vector-transfected group and the untransfected ACC-2 control group (P<0.05). The ACC-2-A20 cells showed a significantly reduced ability to invade through Matrigel-coated filters compared to ACC-2-GFP and ACC-2 cells; the inhibition rate was up to 71.05% (P<0.05). Conclusions A20 gene transfer is associated with decreased tumor invasion, in part via the down-regulation of NF-κB expression, providing evidence for a potential application of A20 in designing a treatment modality for salivary gland cancers such as ACC.

  8. Longitudinal Analyses of Early Lesions by Fluorescence: An Observational Study

    OpenAIRE

    Ferreira Zandoná, A.; Ando, M.; Gomez, G.F.; Garcia-Corretjer, M.; Eckert, G.J.; Santiago, E; Katz, B P; Zero, D.T.

    2013-01-01

    Previous caries experience correlates to future caries risk; thus, early identification of lesions has importance for risk assessment and management. In this study, we aimed to determine if Quantitative Light-induced Fluorescence (QLF) parameters—area (A [mm2]), fluorescence loss (∆F [%]), and ∆Q [%×mm2]—obtained by image analyses can predict lesion progression. We secured consent from 565 children (from 5-13 years old) and their parents/guardians and examined them at baseline and regular int...

  9. Approximate analyses of inelastic effects in pipework

    International Nuclear Information System (INIS)

    This presentation shows figures concerned with analyses of inelastic effects in pipework, as follows: comparison of experimental and calculated (simplified-analysis) results for free-end rotation and for circumferential strain; interrupted stress relaxation; regenerated relaxation caused by reversed yield; buckling of straight pipe under combined bending and torsion; and results of fatigue tests of pipe bends.

  10. Novel Algorithms for Astronomical Plate Analyses

    Indian Academy of Sciences (India)

    Rene Hudec; Lukas Hudec

    2011-03-01

    Powerful computers and dedicated software allow effective data mining and scientific analyses in astronomical plate archives. We give and discuss examples of newly developed algorithms for astronomical plate analyses, e.g., searches for optical transients, as well as for major spectral and brightness changes.

  11. Ecosystem development after mangrove wetland creation: plant-soil change across a 20-year chronosequence

    Science.gov (United States)

    Osland, Michael J.; Spivak, Amanda C.; Nestlerode, Janet A.; Lessmann, Jeannine M.; Almario, Alejandro E.; Heitmuller, Paul T.; Russell, Marc J.; Krauss, Ken W.; Alvarez, Federico; Dantin, Darrin D.; Harvey, James E.; From, Andrew S.; Cormier, Nicole; Stagg, Camille L.

    2012-01-01

    Mangrove wetland restoration and creation efforts are increasingly proposed as mechanisms to compensate for mangrove wetland losses. However, ecosystem development and functional equivalence in restored and created mangrove wetlands are poorly understood. We compared a 20-year chronosequence of created tidal wetland sites in Tampa Bay, Florida (USA) to natural reference mangrove wetlands. Across the chronosequence, our sites represent the succession from salt marsh to mangrove forest communities. Our results identify important soil and plant structural differences between the created and natural reference wetland sites; however, they also depict a positive developmental trajectory for the created wetland sites that reflects tightly coupled plant-soil development. Because upland soils and/or dredge spoils were used to create the new mangrove habitats, the soils at younger created sites and at lower depths (10-30 cm) had higher bulk densities, higher sand content, lower soil organic matter (SOM), lower total carbon (TC), and lower total nitrogen (TN) than did natural reference wetland soils. However, in the upper soil layer (0-10 cm), SOM, TC, and TN increased with created wetland site age simultaneously with mangrove forest growth. The rate of created wetland soil C accumulation was comparable to literature values for natural mangrove wetlands. Notably, the time to equivalence for the upper soil layer of created mangrove wetlands appears to be faster than for many other wetland ecosystem types. Collectively, our findings characterize the rate and trajectory of above- and below-ground changes associated with ecosystem development in created mangrove wetlands; this is valuable information for environmental managers planning to sustain existing mangrove wetlands or mitigate for mangrove wetland losses.

  12. Quality assurance for Chinese herbal formulae: standardization of IBS-20, a 20-herb preparation

    Directory of Open Access Journals (Sweden)

    Bensoussan Alan

    2010-02-01

    Background The use of well-characterized test samples prepared from authenticated, high-quality medicinal plant materials is key to reproducible herbal research. The present study aims to demonstrate a quality assurance program covering the acquisition, botanical validation, chemical standardization and good manufacturing practice (GMP) production of IBS-20, a 20-herb Chinese herbal formula under study as a potential agent for the treatment of irritable bowel syndrome. Methods Purity and contaminant tests for the presence of toxic metals, pesticide residues, mycotoxins and microorganisms were performed. Qualitative chemical fingerprint analysis and quantitation of marker compounds of the herbs, as well as of the IBS-20 formula, were carried out with high-performance liquid chromatography (HPLC). Extraction and manufacture of the 20-herb formula were carried out under GMP. Chemical standardization was performed with liquid chromatography-mass spectrometry (LC-MS) analysis. Stability of the formula was monitored with HPLC in real time. Results Quality component herbs, purchased from a GMP supplier, were botanically and chemically authenticated, and quantitative HPLC profiles (fingerprints) of each component herb and of the composite formula were established. An aqueous extract of the mixture of the 20 herbs was prepared and formulated into IBS-20, which was chemically standardized by LC-MS, with 20 chemical compounds serving as reference markers. The formula was monitored and shown to be stable at room temperature. Conclusion A quality assurance program has been developed for the preparation of a standardized 20-herb formulation for use in clinical studies for the treatment of irritable bowel syndrome (IBS). The procedures developed in the present study will serve as a protocol for other poly-herbal Chinese medicine studies.

  13. Predictive assessment of reading.

    Science.gov (United States)

    Wood, Frank B; Hill, Deborah F; Meyer, Marianne S; Flowers, D Lynn

    2005-12-01

    Study 1 retrospectively analyzed neuropsychological and psychoeducational tests given to N=220 first graders, with follow-up assessments in third and eighth grade. Four predictor constructs were derived: (1) Phonemic Awareness, (2) Picture Vocabulary, (3) Rapid Naming, and (4) Single Word Reading. Together, these accounted for 88%, 76%, 69%, and 69% of the variance, respectively, in first, third, and eighth grade Woodcock-Johnson Broad Reading (WJBR) and eighth grade Gates-MacGinitie. When Single Word Reading was excluded from the predictors, the remaining predictors still accounted for 71%, 65%, 61%, and 65% of variance in the respective outcomes. Secondary analyses of risk of low outcome showed sensitivities/specificities of 93.0/91.0 and 86.4/84.9, respectively, for predicting which students would be in the bottom 15% and 30% of actual first grade WJBR. Sensitivities/specificities were 84.8/83.3 and 80.2/81.3, respectively, for predicting the bottom 15% and 30% of actual third grade WJBR outcomes; eighth grade outcomes had sensitivities/specificities of 80.0/80.0 and 85.7/83.1, respectively, for the bottom 15% and 30% of actual eighth grade WJBR scores. Study 2 cross-validated the concurrent predictive validities in an N=500 geographically diverse sample of late kindergartners through third graders, whose ethnic and racial composition closely approximated the national early elementary school population. New tests of the same four predictor domains were used, together taking teachers only 15 minutes to administer; the new Woodcock-Johnson III Broad Reading standard score was the concurrent criterion, whose testers were blind to the predictor results. This cross-validation showed 86% of the variance accounted for, using the same regression weights as used in Study 1. With these weights, sensitivity/specificity values for the 15% and 30% thresholds were, respectively, 91.3/88.0 and 94.1/89.1. These validities and accuracies are stronger than others reported for
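    The sensitivity/specificity pairs quoted in this record come from the usual 2x2 contingency of predicted risk versus actual low outcome. A minimal sketch; the toy cohort below is invented and none of its numbers are from the study:

```python
def sensitivity_specificity(predicted_at_risk, actually_low):
    """Sensitivity = flagged true positives / all true positives;
    specificity = unflagged true negatives / all true negatives."""
    pairs = list(zip(predicted_at_risk, actually_low))
    tp = sum(1 for p, a in pairs if p and a)
    fn = sum(1 for p, a in pairs if not p and a)
    tn = sum(1 for p, a in pairs if not p and not a)
    fp = sum(1 for p, a in pairs if p and not a)
    return tp / (tp + fn), tn / (tn + fp)

# Toy cohort of 10 children: 3 truly in the bottom 15%, 3 flagged by the predictor.
pred = [1, 1, 0, 0, 0, 1, 0, 0, 0, 0]
low  = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
sens, spec = sensitivity_specificity(pred, low)
```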

  14. Downstream prediction using a nonlinear prediction method

    Science.gov (United States)

    Adenan, N. H.; Noorani, M. S. M.

    2013-11-01

    The estimation of river flow is significantly related to the impact of urban hydrology, as this could provide information to solve important problems, such as flooding downstream. The nonlinear prediction method has been employed for analysis of four years of daily river flow data for the Langat River at Kajang, Malaysia, which is located in a downstream area. The nonlinear prediction method involves two steps; namely, the reconstruction of phase space and prediction. The reconstruction of phase space involves reconstruction from a single variable to the m-dimensional phase space in which the dimension m is based on optimal values from two methods: the correlation dimension method (Model I) and false nearest neighbour(s) (Model II). The selection of an appropriate method for selecting a combination of preliminary parameters, such as m, is important to provide an accurate prediction. From our investigation, we gather that via manipulation of the appropriate parameters for the reconstruction of the phase space, Model II provides better prediction results. In particular, we have used Model II together with the local linear prediction method to achieve the prediction results for the downstream area with a high correlation coefficient. In summary, the results show that Langat River in Kajang is chaotic, and, therefore, predictable using the nonlinear prediction method. Thus, the analysis and prediction of river flow in this area can provide river flow information to the proper authorities for the construction of flood control, particularly for the downstream area.
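    The two-step method described in this record (phase-space reconstruction, then prediction) can be sketched as follows. The embedding dimension, the delay, and the zeroth-order nearest-neighbour predictor (a simpler stand-in for the local linear method the authors used) are our assumptions, and the periodic test signal is invented:

```python
import numpy as np

def embed(series, m, tau=1):
    """Delay-embed a scalar series into m-dimensional phase-space vectors."""
    n = len(series) - (m - 1) * tau
    return np.array([series[i:i + (m - 1) * tau + 1:tau] for i in range(n)])

def predict_next(series, m=3, tau=1):
    """Zeroth-order local prediction: find the past state nearest to the
    current one and return the observed value that followed it."""
    series = np.asarray(series, dtype=float)
    vecs = embed(series, m, tau)
    query, past = vecs[-1], vecs[:-1]   # the last state has no known successor
    nearest = int(np.argmin(np.linalg.norm(past - query, axis=1)))
    return series[nearest + (m - 1) * tau + 1]

# A noiseless periodic signal standing in for daily river flow: the
# predictor should recover the next point of the cycle.
flow = [1.0, 2.0, 4.0, 3.0] * 6
print(predict_next(flow, m=3))  # → 1.0
```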

  15. Predictability of blocking

    International Nuclear Information System (INIS)

    Tibaldi and Molteni (1990, hereafter referred to as TM) had previously investigated operational blocking predictability by the ECMWF model and the possible relationships between model systematic error and blocking in the winter season of the Northern Hemisphere, using seven years of ECMWF operational archives of analyses and day 1 to 10 forecasts. They showed that fewer blocking episodes than in the real atmosphere were generally simulated by the model, and that this deficiency increased with increasing forecast time. As a consequence of this, a major contribution to the systematic error in the winter season was shown to derive from the inability of the model to properly forecast blocking. In this study, the analysis performed in TM for the first seven winter seasons of the ECMWF operational model is extended to the subsequent five winters, during which model development, reflecting both resolution increases and parametrisation modifications, continued unabated. In addition the objective blocking index developed by TM has been applied to the observed data to study the natural low frequency variability of blocking. The ability to simulate blocking of some climate models has also been tested

  16. A20 is critical for the induction of Pam3CSK4-tolerance in monocytic THP-1 cells.

    Directory of Open Access Journals (Sweden)

    Jinyue Hu

    A20 functions to terminate Toll-like receptor (TLR)-induced immune responses and plays an important role in the induction of lipopolysaccharide (LPS) tolerance. However, the molecular mechanism of Pam3CSK4 tolerance is uncertain. Here we report that the TLR1/2 ligand Pam3CSK4 induced tolerance in monocytic THP-1 cells. Pre-treatment of THP-1 cells with Pam3CSK4 down-regulated the induction of pro-inflammatory cytokines upon Pam3CSK4 re-stimulation. Pam3CSK4 pre-treatment also down-regulated JNK, p38 and NF-κB signal transduction induced by Pam3CSK4 re-stimulation. Activation of TLR1/2 induced a rapid and robust up-regulation of A20, suggesting that A20 may contribute to the induction of Pam3CSK4 tolerance. This hypothesis was supported by the observation that over-expression of A20 by gene transfer down-regulated Pam3CSK4-induced inflammatory responses, and down-regulation of A20 by RNA interference inhibited the induction of tolerance. Moreover, LPS induced a significant up-regulation of A20, which contributed to the induction of cross-tolerance between LPS and Pam3CSK4. A20 was also induced by treatment of THP-1 cells with TNF-α and IL-1β. Pre-treatment with TNF-α and IL-1β partly down-regulated Pam3CSK4-induced activation of MAPKs. Furthermore, pharmacological inhibition of GSK3 signaling down-regulated Pam3CSK4-induced A20 expression, up-regulated Pam3CSK4-induced inflammatory responses, and partly reversed Pam3CSK4 pre-treatment-induced tolerance, suggesting that GSK3 is involved in TLR1/2-induced tolerance via up-regulation of A20 expression. Taken together, these results indicate that A20 is a critical regulator of TLR1/2-induced pro-inflammatory responses.

  17. Nuclear analyses of Indian LLCB test blanket system in ITER

    International Nuclear Information System (INIS)

    Heading towards its nuclear fusion reactor programme, India is developing a Lead Lithium Ceramic Breeder (LLCB) tritium breeding blanket for its future fusion reactor. A mock-up of the LLCB blanket is proposed to be tested in ITER equatorial port no. 2 to ensure the overall performance of the blanket in a reactor-relevant nuclear fusion environment. Nuclear analyses play an important role in LLCB Test Blanket System (TBS) development. They are required for tritium breeding estimation, thermal-hydraulic design, coolant process design, radioactive waste management, equipment maintenance and replacement strategies, and nuclear safety. To predict the nuclear behaviour of the LLCB test blanket module in ITER, nuclear responses such as tritium production, nuclear heating, neutron fluxes and radiation damage are estimated. As a part of the ITER machine, the LLCB TBS has to meet certain nuclear shielding requirements, i.e. shutdown dose rates should not exceed the defined limits in the ITER premises (inside the bio-shield ∼100 μSv/hr after 12 days of cooling and outside the bio-shield ∼10 μSv/hr after 1 day of cooling). Hence nuclear analyses are performed to assess and optimize the shielding capability of the LLCB TBS inside and outside the bio-shield. To determine the radioactivity levels of LLCB TBS components, in support of rad-waste and safety assessment, nuclear activation analyses are performed. Nuclear analyses of the LLCB TBS are carried out using ITER-recommended nuclear analysis codes (MCNP, EASY), nuclear cross-section data libraries (FENDL 2.1, EAF) and the neutronic model ITER C-lite v.1. The paper describes the comprehensive nuclear performance of the LLCB TBS in ITER. (author)

  18. The psychological status of phonological analyses

    Directory of Open Access Journals (Sweden)

    David Eddington

    2015-09-01

    This paper casts doubt on the psychological relevance of many phonological analyses, for four reasons: (1) theoretical adequacy does not necessarily imply psychological significance; (2) most approaches are non-empirical in that they are not subject to potential spatiotemporal falsification; (3) phonological analyses are established with little or no recourse to the speakers of the language via experimental psychology; (4) the limited base of evidence on which most analyses are founded is further cause for skepticism.

  19. L’Analyse de discours des Sociologues

    OpenAIRE

    Demailly, Lise

    2013-01-01

    Sociologists use discourse analysis as an analytical method. The research presented here examined this method, its specific features and its contributions to training in techniques of expression (T.E.). It emerges that the sociologist first produces discourse (through interviews and observation) and then analyses and processes it. Such discourse is difficult to use in T.E. teaching, saturated as it is with theoretical and even ideological stakes.

  20. Safety analyses for the planned Konrad repository

    International Nuclear Information System (INIS)

    The safety analyses for the planned federal Konrad repository are described; they serve to check and prove observance of the protection goals. The safety analyses lead to the definition of requirements for the plant and the radioactive waste. As a large number of papers dealing with the safety analyses for the repository's operational phase have already been published, the present report concentrates on the investigations of the post-operational phase, which were carried out, among others, by the Bundesanstalt für Geowissenschaften und Rohstoffe (Federal Institute for Geosciences and Natural Resources), Hanover, and the Gesellschaft für Strahlen- und Umweltforschung (Radiological and Environmental Research Corporation), Braunschweig and Munich, on behalf of the PTB.

  1. Moving Crystal Slow-Neutron Wavelength Analyser

    DEFF Research Database (Denmark)

    Buras, B.; Kjems, Jørgen

    1973-01-01

    Experimental proof that a moving single crystal can serve as a slow-neutron wavelength analyser with special features is presented. When the crystal moves with a velocity h/(2md) (h: Planck constant, m: neutron mass, d: interplanar spacing) perpendicular to the diffracting plane and the analysed neutron beam is parallel to the diffracting plane, neutrons of different wavelengths contained in the incident beam are simultaneously diffracted under different reflection angles and recorded by a position-sensitive detector. Special features of this analysing system are briefly discussed.
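    The matching condition v = h/(2md) is straightforward to evaluate. A short sketch; the d-spacing used (graphite 002) is chosen only for illustration and is not from the record:

```python
h = 6.62607015e-34    # Planck constant, J*s
m_n = 1.67492750e-27  # neutron mass, kg
d = 3.355e-10         # interplanar spacing, m (graphite 002, for illustration)

v = h / (2 * m_n * d)  # required crystal velocity, m/s
print(f"{v:.1f} m/s")  # a few hundred m/s for typical d-spacings
```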

  2. Learning predictive clustering rules

    OpenAIRE

    Ženko, Bernard; Džeroski, Sašo; Struyf, Jan

    2005-01-01

    The two most commonly addressed data mining tasks are predictive modelling and clustering. Here we address the task of predictive clustering, which contains elements of both and generalizes them to some extent. We propose a novel approach to predictive clustering called predictive clustering rules, present an initial implementation and its preliminary experimental evaluation.

  3. 7 CFR 94.102 - Analyses available.

    Science.gov (United States)

    2010-01-01

    ... analyses for total ash, fat by acid hydrolysis, moisture, salt, protein, beta-carotene, catalase... plate count, direct microscopic count, Campylobacter, coliforms, presumptive Escherichia coli, Listeria monocytogenes, proteolytic count, psychrotrophic bacteria, Salmonella, Staphylococcus, thermoduric bacteria,...

  4. Comparison with Russian analyses of meteor impact

    Energy Technology Data Exchange (ETDEWEB)

    Canavan, G.H.

    1997-06-01

    The inversion model for meteor impacts is used to discuss Russian analyses and compare principal results. For common input parameters, the models produce consistent estimates of impactor parameters. Directions for future research are discussed and prioritized.

  5. Understanding Human Error Based on Automated Analyses

    Data.gov (United States)

    National Aeronautics and Space Administration — This is a report on a continuing study of automated analyses of experiential textual reports to gain insight into the causal factors of human errors in aviation...

  6. Anthocyanin analyses of Vaccinium fruit dietary supplements

    Science.gov (United States)

    Vaccinium fruit ingredients within dietary supplements were identified by comparisons with anthocyanin analyses of known Vaccinium profiles (demonstration of anthocyanin fingerprinting). Available Vaccinium supplements were purchased and analyzed; their anthocyanin profiles (based on HPLC separation...

  7. A digital image analyser for RIMS studies

    International Nuclear Information System (INIS)

    Resonance Ionisation Mass Spectrometry (RIMS) is now playing a vital role in various areas of physics and chemistry. A digital image analyser for quantitative analysis of RIMS experiments has been developed

  8. Predicting School Board Member Incumbent Defeat.

    Science.gov (United States)

    Lutz, Frank W.; Hunt, Brook P.

    Researchers attempted to predict the defeat of school board incumbents, using variables which had already been shown to account for incumbent defeat in statistical analyses performed after board elections in many different states. A global model was constructed based on 20 social, economic, and political variables as well as on school districts'…

  9. Monitoring and prediction of natural disasters

    International Nuclear Information System (INIS)

    The problems of predicting natural disasters and of synthesizing environmental monitoring systems to collect, store, and process the relevant information are analysed. A three-level methodology is proposed for making decisions concerning natural disaster dynamics. The methodology is based on the assessment of environmental indicators and the use of numerical models of the environment.

  10. Safety analyses for reprocessing and waste processing

    International Nuclear Information System (INIS)

    This report presents an incident analysis of the process steps of the reprocessing plant (RP), simplified safety considerations, and safety analyses of the RP's storage and solidification facilities. A release tree method is developed and tested. In particular, an incident analysis of the process steps, an evaluation of the SRL study, and safety analyses of the storage and solidification facilities are performed. (DG)

  11. Functional Analyses and Treatment of Precursor Behavior

    OpenAIRE

    Najdowski, Adel C; Wallace, Michele D; Ellsworth, Carrie L; MacAleese, Alicia N; Cleveland, Jackie M

    2008-01-01

    Functional analysis has been demonstrated to be an effective method to identify environmental variables that maintain problem behavior. However, there are cases when conducting functional analyses of severe problem behavior may be contraindicated. The current study applied functional analysis procedures to a class of behavior that preceded severe problem behavior (precursor behavior) and evaluated treatments based on the outcomes of the functional analyses of precursor behavior. Responding fo...

  12. Thermal Analyses of Cross-Linked Polyethylene

    Directory of Open Access Journals (Sweden)

    Radek Polansky

    2007-01-01

    The paper summarizes results obtained from structural analyses: Differential Scanning Calorimetry (DSC), Thermogravimetry (TG), Thermomechanical Analysis (TMA) and Fourier transform infrared spectroscopy (FT-IR). Samples of cross-linked polyethylene cable insulation were tested with these analyses. DSC and TG were carried out using a TA Instruments SDT Q600 simultaneous thermal analyzer coupled to a Nicolet 380 Fourier transform infrared spectrometer. Thermomechanical analysis was carried out with a TA Instruments TMA Q400EM apparatus.

  13. The molecular spectrum and distribution of haemoglobinopathies in Cyprus: a 20-year retrospective study

    Science.gov (United States)

    Kountouris, Petros; Kousiappa, Ioanna; Papasavva, Thessalia; Christopoulos, George; Pavlou, Eleni; Petrou, Miranda; Feleki, Xenia; Karitzie, Eleni; Phylactides, Marios; Fanis, Pavlos; Lederer, Carsten W.; Kyrri, Andreani R.; Kalogerou, Eleni; Makariou, Christiana; Ioannou, Christiana; Kythreotis, Loukas; Hadjilambi, Georgia; Andreou, Nicoletta; Pangalou, Evangelia; Savvidou, Irene; Angastiniotis, Michael; Hadjigavriel, Michael; Sitarou, Maria; Kolnagou, Annita; Kleanthous, Marina; Christou, Soteroula

    2016-01-01

    Haemoglobinopathies are the most common monogenic diseases, posing a major public health challenge worldwide. Cyprus has one of the highest prevalences of thalassaemia in the world and was the first country to introduce a successful population-wide prevention programme based on premarital screening. In this study, we report the most significant and comprehensive update on the status of haemoglobinopathies in Cyprus for at least two decades. First, we identified and analysed all known 592 β-thalassaemia patients and 595 Hb H disease patients in Cyprus. Moreover, we report the molecular spectrum of α-, β- and δ-globin gene mutations in the population and their geographic distribution, using a set of 13824 carriers genotyped from 1995 to 2015, and estimate relative allele frequencies in carriers of β- and δ-globin gene mutations. Notably, several mutations are reported for the first time in the Cypriot population, whereas important differences are observed in the distribution of mutations across different districts of the island. PMID:27199182

  14. Insights into a 20-ha multi-contaminated brownfield megasite: An environmental forensics approach.

    Science.gov (United States)

    Gallego, J R; Rodríguez-Valdés, E; Esquinas, N; Fernández-Braña, A; Afif, E

    2016-09-01

    Here we addressed the contamination of soils in an abandoned brownfield located in an industrial area. Detailed soil and waste characterisation guided by historical information about the site revealed pyrite ashes (a residue derived from the roasting of pyrite ores) as the main environmental risk. In fact, the disposal of pyrite ashes and the mixing of these ashes with soils have affected a large area of the site, thereby causing heavy metal(loid) pollution (As and Pb levels reaching several thousands of ppm). A full characterisation of the pyrite ashes was thus performed. In this regard, we determined the bioavailable metal species present and their implications, grain-size distribution, mineralogy, and Pb isotopic signature in order to obtain an accurate conceptual model of the site. We also detected significant concentrations of pyrogenic benzo(a)pyrene and other PAHs, and studied the relation of these compounds with the pyrite ashes. In addition, we examined other waste and spills of minor importance within the study site. The information gathered offered an insight into pollution sources, unravelled evidence from the industrial processes that took place decades ago, and identified the co-occurrence of contaminants by means of multivariate statistics. The environmental forensics study carried out provided greater information than conventional analyses for risk assessment purposes and for the selection of clean-up strategies adapted to future land use. PMID:26475240
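    The record identifies co-occurring contaminants "by means of multivariate statistics"; a generic first step is a correlation matrix of analyte concentrations across samples, before PCA or clustering. A sketch with hypothetical analytes and invented concentrations:

```python
import numpy as np

# Rows = soil samples, columns = hypothetical analytes (As, Pb, benzo(a)pyrene).
conc = np.array([
    [ 850.0, 1200.0, 0.9],
    [2300.0, 3100.0, 2.4],
    [ 120.0,  190.0, 0.2],
    [4100.0, 5600.0, 4.1],
])

# Pearson correlations between analytes (columns), across samples.
corr = np.corrcoef(conc, rowvar=False)
# Strong positive correlations suggest a common source (here, the pyrite ashes).
print(np.round(corr, 2))
```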

  15. Nonparametric bootstrap prediction

    OpenAIRE

    Fushiki, Tadayoshi; Komaki, Fumiyasu; Aihara, Kazuyuki

    2005-01-01

    Ensemble learning has recently been intensively studied in the field of machine learning. `Bagging' is a method of ensemble learning and uses bootstrap data to construct various predictors. The required prediction is then obtained by averaging the predictors. Harris proposed using this technique with the parametric bootstrap predictive distribution to construct predictive distributions, and showed that the parametric bootstrap predictive distribution gives asymptotically better prediction tha...
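    Bagging as described in this record (resample the data with replacement, fit a predictor to each bootstrap sample, then average the predictions) can be sketched with a trivial mean predictor; the data and resample count below are arbitrary:

```python
import random

def bagged_mean(data, n_boot=200, seed=0):
    """Average the predictions (here: sample means) of predictors fitted
    to bootstrap resamples of the data."""
    rng = random.Random(seed)
    preds = []
    for _ in range(n_boot):
        sample = [rng.choice(data) for _ in data]  # resample with replacement
        preds.append(sum(sample) / len(sample))    # 'fit' a trivial predictor
    return sum(preds) / n_boot

data = [2.0, 4.0, 6.0, 8.0]
estimate = bagged_mean(data)
```

Any fitting procedure can stand in for the sample mean; the averaging over bootstrap replicates is what defines bagging.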

  16. Predictability of social interactions

    OpenAIRE

    Xu, Kevin S.

    2013-01-01

    The ability to predict social interactions between people has profound applications including targeted marketing and prediction of information diffusion and disease propagation. Previous work has shown that the location of an individual at any given time is highly predictable. This study examines the predictability of social interactions between people to determine whether interaction patterns are similarly predictable. I find that the locations and times of interactions for an individual are...

  17. A 20-yr Reanalysis Experiment in the Baltic Sea Using the Three-Dimensional Variational (3DVAR) Method

    Directory of Open Access Journals (Sweden)

    W. Fu

    2012-05-01

    A 20-year retrospective reanalysis of the ocean state in the Baltic Sea is constructed using three-dimensional variational (3DVAR) data assimilation, combining an operational numerical model with available historical temperature (T) and salinity (S) profiles. To determine the accuracy of the reanalysis, the authors present a series of comparisons with independent observations on a monthly mean basis. The performance of the assimilation in deep/shallow waters is investigated.

    With assimilation, temperature and salinity in the reanalysis fit better than the free run with independent measurements at different depths. Overall, the mean biases of temperature and salinity are reduced by 0.32 °C and 0.34 psu, respectively. Similarly, the mean root mean square error (RMSE of the reanalysis is decreased by 0.35 °C and 0.3 psu compared to the free run. In space, the model error is inhomogeneous and strongly steered by the model error dynamics. Seasonally varying error of the modeled sea surface temperature is mainly controlled by the weather forcing, and shows the least improvements due to sparse observations. Deep layers, on the other hand, witness significant and stable model error improvements. In particular, the salinity related to saline water intrusions into the Baltic Proper is largely improved in the reanalysis. The major inflow events such as in 1993 and 2003 are captured more accurately in the reanalysis as the model salinity in the bottom layer is increased by 2–3 psu. Sea level is also improved due to an improved density field. The correlation between model and observation is increased by 2 %–5 %, and the RMSE is generally reduced by 10 cm in the reanalysis compared to the free run. The reduction of RMSE is mainly due to the reduction of mean bias. Assimilation of T/S contributes little to the barotropic transport in the shallow Danish Transition zone.

    The mixed layer depth exhibits strong seasonal
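    The skill metrics quoted in this record (mean bias and RMSE against independent observations) are the standard ones. A minimal sketch; the profile values below are invented, not Baltic data:

```python
import math

def bias_and_rmse(model, obs):
    """Mean bias (model minus observation) and root mean square error."""
    diffs = [m - o for m, o in zip(model, obs)]
    bias = sum(diffs) / len(diffs)
    rmse = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return bias, rmse

# Hypothetical temperatures (deg C) at a few depths: free run vs. observations.
obs      = [15.2, 12.8, 8.1, 5.0]
free_run = [16.0, 13.9, 8.8, 5.6]
bias, rmse = bias_and_rmse(free_run, obs)
print(f"bias={bias:+.2f} degC, RMSE={rmse:.2f} degC")
```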

  18. Summary of dynamic analyses of the advanced neutron source reactor inner control rods

    International Nuclear Information System (INIS)

    A summary of the structural dynamic analyses that were instrumental in providing design guidance for the Advanced Neutron Source (ANS) inner control element system is presented in this report. The structural analyses and the functional constraints that required certain performance parameters were combined to shape and guide the design effort toward successful and reliable control and scram operation by these inner control rods.

  19. Discovery of frameshifting in Alphavirus 6K resolves a 20-year enigma

    Directory of Open Access Journals (Sweden)

    Fleeton Marina N

    2008-09-01

    Background The genus Alphavirus includes several potentially lethal human viruses. Additionally, species such as Sindbis virus and Semliki Forest virus are important vectors for gene therapy, vaccination and cancer research, and important models for virion assembly and structural analyses. The genome encodes nine known proteins, including the small '6K' protein. 6K appears to be involved in envelope protein processing, membrane permeabilization, virion assembly and virus budding. In protein gels, 6K migrates as a doublet – a result that, to date, has been attributed to differing degrees of acylation. Nonetheless, despite many years of research, its role is still relatively poorly understood. Results We report that ribosomal -1 frameshifting, with an estimated efficiency of ~10–18%, occurs at a conserved UUUUUUA motif within the sequence encoding 6K, resulting in the synthesis of an additional protein, termed TF (TransFrame protein; ~8 kDa), in which the C-terminal amino acids are encoded by the -1 frame. The presence of TF in the Semliki Forest virion was confirmed by mass spectrometry. The expression patterns of TF and 6K were studied by pulse-chase labelling, immunoprecipitation and immunofluorescence, using both wild-type virus and a TF knockout mutant. We show that it is predominantly TF that is incorporated into the virion, not 6K as previously believed. Investigation of the 3' stimulatory signals responsible for efficient frameshifting at the UUUUUUA motif revealed a remarkable diversity of signals between different alphavirus species. Conclusion Our results provide a surprising new explanation for the 6K doublet, demand a fundamental reinterpretation of existing data on the alphavirus 6K protein, and open the way for future progress in the further characterization of the 6K and TF proteins. The results have implications for alphavirus biology, virion structure, viroporins, ribosomal frameshifting, and bioinformatic
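    Locating the conserved UUUUUUA slippery motif is a simple string scan. A sketch; the RNA fragment below is invented for illustration and is not an alphavirus sequence:

```python
def find_slippery_sites(rna, motif="UUUUUUA"):
    """Return 0-based positions of a -1 frameshift slippery motif in an RNA string."""
    hits, start = [], rna.find(motif)
    while start != -1:
        hits.append(start)
        start = rna.find(motif, start + 1)
    return hits

# Invented fragment with one motif embedded.
fragment = "AUGGCUUUUUUAGCAUCG"
print(find_slippery_sites(fragment))  # → [5]
```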

  20. Three distinct suppressors of RNA silencing encoded by a 20-kb viral RNA genome

    Science.gov (United States)

    Lu, Rui; Folimonov, Alexey; Shintaku, Michael; Li, Wan-Xiang; Falk, Bryce W.; Dawson, William O.; Ding, Shou-Wei

    2004-11-01

    Viral infection in both plant and invertebrate hosts requires a virus-encoded function to block the RNA silencing antiviral defense. Here, we report the identification and characterization of three distinct suppressors of RNA silencing encoded by the 20-kb plus-strand RNA genome of citrus tristeza virus (CTV). When introduced by genetic crosses into plants carrying a silencing transgene, both p20 and p23, but not coat protein (CP), restored expression of the transgene. Although none of the CTV proteins prevented DNA methylation of the transgene, export of the silencing signal (capable of mediating intercellular silencing spread) was detected only from the F1 plants expressing p23 and not from the CP- or p20-expressing F1 plants, demonstrating suppression of intercellular silencing by CP and p20 but not by p23. Thus, intracellular and intercellular silencing are each targeted by a CTV protein, whereas the third, p20, inhibits silencing at both levels. Notably, CP suppresses intercellular silencing without interfering with intracellular silencing. The novel property of CP suggests a mechanism distinct from that of p20 and all of the other viral suppressors known to interfere with intercellular silencing, and that this class of viral suppressors may not be consistently identified by Agrobacterium coinfiltration because it also induces RNA silencing against the infiltrated suppressor transgene. Our analyses reveal a sophisticated viral counter-defense strategy that targets the silencing antiviral pathway at multiple steps and may be essential for protecting CTV, with such a large RNA genome, from antiviral silencing in the perennial tree host.
    Keywords: RNA interference | citrus tristeza virus | virus synergy | antiviral immunity

  1. Regression Analyses of Self-Regulatory Concepts to Predict Community College Math Achievement and Persistence

    Science.gov (United States)

    Gramlich, Stephen Peter

    2010-01-01

    Open door admissions at community colleges bring returning adults, first timers, low achievers, disabled persons, and immigrants. Passing and retention rates for remedial and non-developmental math courses can be comparatively inadequate (LAVC, 2005; CCPRDC, 2000; SBCC, 2004; Seybert & Soltz, 1992; Waycaster, 2002). Mathematics achievement…

  2. Integrative genomic analyses of a novel cytokine, interleukin-34 and its potential role in cancer prediction

    OpenAIRE

    Wang, Bo; Xu, Wenming; TAN, MIAOLIAN; Xiao, Yan; Yang, Haiwei; Xia, Tian-Song

    2014-01-01

    Interleukin-34 (IL-34) is a novel cytokine, which is composed of 222 amino acids and forms homodimers. It binds to the macrophage colony-stimulating factor (M-CSF) receptor and plays an important role in innate immunity and inflammatory processes. In the present study, we identified the completed IL-34 gene in 25 various mammalian genomes and found that IL-34 existed in all types of vertebrates, including fish, amphibians, birds and mammals. These species have a similar 7 exon/6 intron gene o...

  3. ANALYSING URBAN EFFECTS IN BUDAPEST USING THE WRF NUMERICAL WEATHER PREDICTION MODEL

    Directory of Open Access Journals (Sweden)

    JÚLIA GÖNDÖCS

    2016-03-01

    Full Text Available Continuously growing cities significantly modify the entire environment through air pollution and modification of the land surface, resulting in an altered energy budget and altered land-atmosphere exchange processes over built-up areas. These effects appear mainly in cities and metropolitan areas, leading to the Urban Heat Island (UHI) phenomenon, which arises from the temperature difference between built-up areas and their cooler surroundings. The Weather Research and Forecasting (WRF) mesoscale model, coupled to a multilayer urban canopy parameterisation, is used to investigate this phenomenon for Budapest and its surroundings with actual land surface properties. In this paper, the basic ideas of our research and a brief outline of the methodology are presented. The simulation covers one week in summer 2015, with initial meteorological fields from Global Forecasting System (GFS) outputs, under atmospheric conditions of weak wind and clear sky over the Pannonian Basin. Then, to improve the WRF model and its settings, the calculated skin temperature is compared to remotely sensed measurements from the Aqua and Terra satellites, and the temporal and spatial bias values are estimated.
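    The model-to-satellite comparison described above amounts to computing bias and error statistics between two gridded skin-temperature fields. A minimal sketch, assuming hypothetical arrays (the grids and values below are illustrative placeholders, not the study's data):

```python
import numpy as np

# Hypothetical skin-temperature fields (K) on a common 2x2 grid:
# one from a WRF-like simulation, one from a satellite retrieval.
model_tsk = np.array([[301.2, 302.5],
                      [299.8, 303.1]])
satellite_lst = np.array([[300.5, 301.9],
                          [300.2, 302.0]])

diff = model_tsk - satellite_lst           # model-minus-observation field
spatial_bias = diff.mean()                 # mean bias over the domain
rmse = np.sqrt((diff ** 2).mean())         # root-mean-square error

print(f"bias = {spatial_bias:+.2f} K, RMSE = {rmse:.2f} K")
```

    In practice the two datasets must first be regridded to a common projection, and cloud-contaminated satellite pixels masked, before such statistics are meaningful.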

  4. An Earthquake Prediction System Using The Time Series Analyses of Earthquake Property And Crust Motion

    International Nuclear Information System (INIS)

    We have developed a short-term deterministic earthquake (EQ) forecasting system similar to those used for typhoons and hurricanes, which has been under test operation at the website http://www.tec21.jp/ since June 2003. We use the focus and crust displacement data recently opened to the public by the Japanese seismograph and global positioning system (GPS) networks, respectively. Our system divides the forecasting area into five regional areas of Japan, each of which is about 5 deg. by 5 deg. We have found that it can forecast the focus, date of occurrence and magnitude (M) of an impending EQ (whose M is larger than about 6), all within narrow limits. We describe the system with two examples. One is the 2003/09/26 EQ of M 8 in the Hokkaido area, which is a retrospective (hindsight) case. The other is a successful forecast of the most recent event, the 2004/05/30 EQ of M 6.7 off the coast of the southern Kanto (Tokyo) area

  5. Fertility prediction of frozen boar sperm using novel and conventional analyses

    Science.gov (United States)

    Frozen-thawed boar sperm is seldom used for artificial insemination (AI) because fertility is lower than fresh or cooled semen. Despite the many advantages of AI including reduced pathogen exposure and ease of semen transport, cryo-induced damage to sperm usually results in decreased litter sizes a...

  6. Use of CFD Analyses to Predict Disk Friction Loss of Centrifugal Compressor Impellers

    Science.gov (United States)

    Cho, Leesang; Lee, Seawook; Cho, Jinsoo

    To improve the total efficiency of centrifugal compressors, it is necessary to reduce disk friction loss, which manifests as a power loss. In this study, the effects of axial clearance and surface roughness on disk friction loss are analyzed, and methods to reduce this loss are proposed. The rotating reference frame technique in a commercial CFD tool (FLUENT) is used for steady-state analysis of the centrifugal compressor. Numerical results of the CFD analysis are compared with theoretical results from established empirical equations. The disk friction loss of the impeller decreases with increasing axial clearance as long as the axial clearance between the impeller disk and the casing is smaller than the boundary layer thickness. In addition, the disk friction loss of the impeller increases with increasing surface roughness, in a pattern similar to that of existing empirical formulas. The disk friction loss of the impeller is affected more by surface roughness than by changes in axial clearance. To minimize disk friction loss on the centrifugal compressor impeller, the axial clearance should be designed to match the theoretical boundary layer thickness. The design of the impeller therefore requires careful consideration in order to optimize axial clearance and minimize surface roughness.
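    For comparison against such CFD results, disk friction loss is commonly estimated from an empirical moment-coefficient correlation. A minimal sketch, assuming the Daily-Nece form for turbulent flow with merged boundary layers (narrow gaps); the coefficient, exponents, fluid properties and dimensions below are illustrative assumptions, not the paper's values:

```python
import math

def disk_friction_power(rho, omega, r, s, nu):
    """Disk friction power loss (W) from a moment-coefficient correlation.

    Assumed Daily-Nece form for turbulent flow with merged boundary layers:
        Cm = 0.08 / ((s/r)**(1/6) * Re**0.25),  Re = omega * r**2 / nu
        P  = 0.5 * Cm * rho * omega**3 * r**5
    """
    re = omega * r ** 2 / nu                      # rotational Reynolds number
    cm = 0.08 / ((s / r) ** (1 / 6) * re ** 0.25) # moment coefficient
    return 0.5 * cm * rho * omega ** 3 * r ** 5

# Hypothetical impeller back disk: air-like fluid, 30 000 rpm, r = 0.1 m,
# two axial clearances s (m) within the merged-boundary-layer regime.
omega = 30000 * 2 * math.pi / 60
p_narrow = disk_friction_power(1.2, omega, 0.1, 0.0005, 1.5e-5)
p_wide = disk_friction_power(1.2, omega, 0.1, 0.002, 1.5e-5)
```

    In this regime the correlation predicts decreasing loss with increasing clearance, matching the trend reported above for clearances below the boundary layer thickness.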

  7. Analyses of the predicted changes of the global oceans under the increased greenhouse gases scenarios

    Institute of Scientific and Technical Information of China (English)

    MU Lin; WU Dexing; CHEN Xue'en; J Jungclaus

    2006-01-01

    A new climate model (ECHAM5/MPIOM1), developed for the fourth assessment report of the Intergovernmental Panel on Climate Change (IPCC) at the Max Planck Institute for Meteorology, is used to study climate changes under different increased-CO2 scenarios (B1, A1B and A2). Based on the corresponding model results, the sea surface temperature and salinity structure, the variations of the thermohaline circulation (THC) and the changes of sea ice in the northern hemisphere are analyzed. It is concluded that from 2000 to 2100, under the B1, A1B and A2 scenarios, the global mean sea surface temperature (SST) would increase by 2.5℃, 3.5℃ and 4.0℃ respectively; in the Arctic region, the SST increase would even exceed 10.0℃. The maximal negative value of the variation of the fresh water flux is located in the subtropical oceans, while the precipitation in the eastern tropical Pacific increases. The strength of the THC decreases under the B1, A1B and A2 scenarios, with reductions of about 20%, 25% and 25.1% of the present THC strength respectively. In the northern hemisphere, the area of sea ice cover would decrease by about 50% under the A1B scenario.

  8. Behavioral and Physiological Neural Network Analyses: A Common Pathway toward Pattern Recognition and Prediction

    Science.gov (United States)

    Ninness, Chris; Lauter, Judy L.; Coffee, Michael; Clary, Logan; Kelly, Elizabeth; Rumph, Marilyn; Rumph, Robin; Kyle, Betty; Ninness, Sharon K.

    2012-01-01

    Using 3 diversified datasets, we explored the pattern-recognition ability of the Self-Organizing Map (SOM) artificial neural network as applied to diversified nonlinear data distributions in the areas of behavioral and physiological research. Experiment 1 employed a dataset obtained from the UCI Machine Learning Repository. Data for this study…

  9. Integrative genomic analyses of a novel cytokine, interleukin-34 and its potential role in cancer prediction.

    Science.gov (United States)

    Wang, Bo; Xu, Wenming; Tan, Miaolian; Xiao, Yan; Yang, Haiwei; Xia, Tian-Song

    2015-01-01

    Interleukin-34 (IL-34) is a novel cytokine, which is composed of 222 amino acids and forms homodimers. It binds to the macrophage colony-stimulating factor (M-CSF) receptor and plays an important role in innate immunity and inflammatory processes. In the present study, we identified the complete IL-34 gene in 25 various mammalian genomes and found that IL-34 existed in all types of vertebrates, including fish, amphibians, birds and mammals. These species have a similar 7 exon/6 intron gene organization. The phylogenetic tree indicated that the IL-34 genes from the primate lineage, rodent lineage and teleost lineage form species-specific clusters. It was found that mammalian IL-34 was under positive selection pressure, with the identified positively selected site, 196Val. Fifty-five functionally relevant single nucleotide polymorphisms (SNPs), including 32 SNPs causing missense mutations, 3 exonic splicing enhancer SNPs and 20 SNPs causing nonsense mutations, were identified from 2,141 available SNPs in the human IL-34 gene. IL-34 was expressed in various types of cancer, including blood, brain, breast, colorectal, eye, head and neck, lung, ovarian and skin cancer. A total of 5 out of 40 tests (1 blood cancer, 1 brain cancer, 1 colorectal cancer and 2 lung cancer) revealed an association between IL-34 gene expression and cancer prognosis. It was found that the association between the expression of IL-34 and cancer prognosis varied in different types of cancer, even in the same types of cancer from different databases. This suggests that the function of IL-34 in these tumors may be multidimensional. The upstream transcription factor 1 (USF1), regulatory factor X-1 (RFX1), Sp1 transcription factor, POU class 3 homeobox 2 (POU3F2) and forkhead box L1 (FOXL1) regulatory transcription factor binding sites were identified in the IL-34 gene upstream (promoter) region, which may be involved in the effects of IL-34 in tumors. PMID:25395235

  10. PMA/IONO affects diffuse large B-cell lymphoma cell growth through upregulation of A20 expression.

    Science.gov (United States)

    Yang, Wenxiu; Li, Yi; Li, Pinhao; Wang, Lingling

    2016-08-01

    Diffuse large B-cell lymphoma (DLBCL) is a common non-Hodgkin lymphoma. A20 and mucosa-associated lymphoid tissue lymphoma translocation gene 1 (MALT1) are known to be related to DLBCL pathogenesis and progression. This study aimed to assess the effects of phorbol myristate acetate/ionomycin (PMA/IONO) on the growth and apoptosis of the DLBCL cell line OCI-LY1, and their associations with A20, MALT1 and survivin levels. Cell viability was assessed by MTT assay. Cell cycle distribution and apoptosis were evaluated using flow cytometry after incubation with Annexin V-FITC/propidium iodide (PI) and RNase/PI, respectively. Gene and protein expression levels were determined by quantitative real-time PCR and western blotting, respectively. To further determine the role of A20, this gene was silenced in the OCI-LY1 cell line by specific siRNA transfection. A20 protein levels were higher in the OCI-LY1 cells treated with PMA/IONO compared with the controls, and were positively correlated with the concentration and treatment time of IONO, but not with changes of PMA and MALT1. Meanwhile, survivin expression was reduced in the OCI-LY1 cells after PMA/IONO treatment. In addition, OCI-LY1 proliferation was markedly inhibited, with a negative correlation between cell viability and IONO concentration. In concordance, apoptosis rates were higher in the OCI-LY1 cells after PMA + IONO treatment. Cell cycle distribution differed between the OCI-LY1 cells with and without PMA/IONO treatment only at 24 h, with increased cells in the G0/G1 stage after PMA/IONO treatment. These findings indicate that PMA/IONO promotes the apoptosis and inhibits the growth of DLBCL cells, in association with A20 upregulation. Thus, A20 may be a potential therapeutic target for DLBCL. PMID:27349720

  11. Numerical earthquake prediction

    International Nuclear Information System (INIS)

    Can earthquakes be predicted? How should people overcome the difficulties encountered in the study of earthquake prediction? This issue can take inspiration from the experiences of weather forecast. Although weather forecasting took a period of about half a century to advance from empirical to numerical forecast, it has achieved significant success. A consensus has been reached among the Chinese seismological community that earthquake prediction must also develop from empirical forecasting to physical prediction. However, it is seldom mentioned that physical prediction is characterized by quantitatively numerical predictions based on physical laws. This article discusses five key components for numerical earthquake prediction and their current status. We conclude that numerical earthquake prediction should now be put on the planning agenda and its roadmap designed, seismic stations should be deployed and observations made according to the needs of numerical prediction, and theoretical research should be carried out. (authors)

  12. Finite element analyses of CCAT preliminary design

    Science.gov (United States)

    Sarawit, Andrew T.; Kan, Frank W.

    2014-07-01

    This paper describes the development of the CCAT telescope finite element model (FEM) and the analyses performed to support the preliminary design work. CCAT will be a 25 m diameter telescope operating in the 0.2 to 2 mm wavelength range. It will be located at an elevation of 5600 m on Cerro Chajnantor in Northern Chile, near ALMA. The telescope will be equipped with wide-field cameras and spectrometers mounted at the two Nasmyth foci. The telescope will be inside an enclosure to protect it from wind buffeting, direct solar heating, and bad weather. The main structures of the telescope include a steel Mount and a carbon-fiber-reinforced-plastic (CFRP) primary truss. The finite element model developed in this study was used to perform modal, frequency response, seismic response spectrum, stress, and deflection analyses of the telescope. Modal analyses were performed to compute the structure's natural frequencies and mode shapes and to obtain reduced-order modal output at selected locations in the telescope structure to support the design of the Mount control system. Modal frequency response analyses were also performed to compute transfer functions at these selected locations. Seismic response spectrum analyses of the telescope subject to the Maximum Likely Earthquake were performed to compute peak accelerations and seismic demand stresses. Stress analyses were performed for gravity load to obtain gravity demand stresses. Deflection analyses for gravity load, thermal load, and differential elevation drive torque were performed so that the CCAT Observatory can verify that the structures meet the stringent telescope surface and pointing error requirements.

  13. The prediction of different experiences of longterm illness

    DEFF Research Database (Denmark)

    Blank, N; Diderichsen, Finn

    1996-01-01

    To analyse the role played by socioeconomic factors and self rated general health in the prediction of the reporting of severe longterm illness, and the extent to which these factors explain social class differences in the reporting of such illness.

  14. Familial transmission of schizophrenia in Palau: A 20-year genetic epidemiological study in three generations.

    Science.gov (United States)

    Myles-Worsley, Marina; Tiobech, Josepha; Blailes, Francisca; Middleton, Frank A; Vinogradov, Sophia; Byerley, William; Faraone, Stephen V

    2011-04-01

    Our genetic epidemiological studies of schizophrenia and other psychotic disorders (SCZ) in the isolated population of Palau have been ongoing for 20 years. Results from the first decade showed that Palau has an elevated prevalence of SCZ and that cases cluster in extended multigenerational pedigrees interconnected via complex genetic relationships after centuries of endogamous, but not consanguineous, marriages. The aim of our second decade of research, which extended data collection into a third generation of young, high-risk (HR) Palauans, was to identify significant predictors of intergenerational transmission of illness. Our findings revealed that degree of familial loading and gender effects on reproductive fitness are important modifiers of risk for transmission of SCZ. Among 45 distinct multiplex families, we identified 10 high-density (HD) Palauan families, each with 7-29 SCZ cases, which contain half of Palau's 260 SCZ cases and 80% of the 113 SCZ cases with one or more affected first-degree relatives, indicating that familial loading is a major risk factor for SCZ in Palau. Cases that belong to multiply affected sibships are more common than cases with an affected parent. Furthermore, only 6/38 multiply affected sibships have an affected parent, strong evidence that many unaffected parents are obligate carriers of susceptibility genes. Although reproductive fitness is dramatically reduced in affected males, the 30% minority who do become fathers are twice as likely as affected mothers to transmit SCZ to an offspring. As they evolve, these HD families can help to elucidate the genetic mechanisms that predict intergenerational transmission of SCZ. PMID:21294248

  15. Regional Scale Analyses of Climate Change Impacts on Agriculture

    Science.gov (United States)

    Wolfe, D. W.; Hayhoe, K.

    2006-12-01

    New statistically downscaled climate modeling techniques provide an opportunity for improved regional analysis of climate change impacts on agriculture. Climate modeling outputs can often simultaneously meet the needs of those studying impacts on natural as well as managed ecosystems. Climate outputs can be used to drive existing forest or crop models, or livestock models (e.g., temperature-humidity index model predicting dairy milk production) for improved information on regional impact. High spatial resolution climate forecasts, combined with knowledge of seasonal temperatures or rainfall constraining species ranges, can be used to predict shifts in suitable habitat for invasive weeds, insects, and pathogens, as well as cash crops. Examples of climate thresholds affecting species range and species composition include: minimum winter temperature, duration of winter chilling (vernalization) hours (e.g., hours below 7.2 C), frost-free period, and frequency of high temperature stress days in summer. High resolution climate outputs can also be used to drive existing integrated pest management models predicting crop insect and disease pressure. Collectively, these analyses can be used to test hypotheses or provide insight into the impact of future climate change scenarios on species range shifts and threat from invasives, shifts in crop production zones, and timing and regional variation in economic impacts.

  16. PREDICTING TURBINE STAGE PERFORMANCE

    Science.gov (United States)

    Boyle, R. J.

    1994-01-01

    This program was developed to predict turbine stage performance taking into account the effects of complex passage geometries. The method uses a quasi-3D inviscid-flow analysis iteratively coupled to calculated losses so that changes in losses result in changes in the flow distribution. In this manner the effects of both the geometry on the flow distribution and the flow distribution on losses are accounted for. The flow may be subsonic or shock-free transonic. The blade row may be fixed or rotating, and the blades may be twisted and leaned. This program has been applied to axial and radial turbines, and is helpful in the analysis of mixed flow machines. This program is a combination of the flow analysis programs MERIDL and TSONIC coupled to the boundary layer program BLAYER. The subsonic flow solution is obtained by a finite difference, stream function analysis. Transonic blade-to-blade solutions are obtained using information from the finite difference, stream function solution with a reduced flow factor. Upstream and downstream flow variables may vary from hub to shroud and provision is made to correct for loss of stagnation pressure. Boundary layer analyses are made to determine profile and end-wall friction losses. Empirical loss models are used to account for incidence, secondary flow, disc windage, and clearance losses. The total losses are then used to calculate stator, rotor, and stage efficiency. This program is written in FORTRAN IV for batch execution and has been implemented on an IBM 370/3033 under TSS with a central memory requirement of approximately 4.5 Megs of 8 bit bytes. This program was developed in 1985.
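    The iterative coupling described above (losses change the flow distribution, and the flow distribution changes the losses) is a fixed-point scheme that can be sketched abstractly. The toy stand-in models below are assumptions for illustration, not the actual MERIDL/TSONIC/BLAYER coupling:

```python
def coupled_solve(flow_solve, loss_model, tol=1e-9, max_iter=100):
    """Iterate flow solution <-> loss calculation until self-consistent."""
    losses = 0.0
    flow = flow_solve(losses)
    for _ in range(max_iter):
        flow = flow_solve(losses)        # quasi-3D flow given current losses
        new_losses = loss_model(flow)    # losses recomputed from that flow
        converged = abs(new_losses - losses) < tol
        losses = new_losses
        if converged:
            break
    return flow, losses

# Toy stand-ins: the flow speeds up slightly as losses reduce stagnation
# pressure, and losses grow with flow speed; the fixed point is found
# iteratively rather than in a single pass.
flow, losses = coupled_solve(lambda L: 1.0 + 0.1 * L,
                             lambda f: 0.05 * f ** 2)
```

    The design point of the method is exactly this loop: a one-pass loss estimate would ignore the feedback of losses onto the flow distribution.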

  17. Predictive modeling of complications.

    Science.gov (United States)

    Osorio, Joseph A; Scheer, Justin K; Ames, Christopher P

    2016-09-01

    Predictive analytic algorithms are designed to identify patterns in the data that allow for accurate predictions without the need for a hypothesis. Therefore, predictive modeling can provide detailed and patient-specific information that can be readily applied when discussing the risks of surgery with a patient. There are few studies using predictive modeling techniques in the adult spine surgery literature. These types of studies represent the beginning of the use of predictive analytics in spine surgery outcomes. We will discuss the advancements in the field of spine surgery with respect to predictive analytics, the controversies surrounding the technique, and the future directions. PMID:27286683

  18. Virtual surveillance of communicable diseases: a 20-year experience in France.

    Science.gov (United States)

    Flahault, A; Blanchon, T; Dorléans, Y; Toubiana, L; Vibert, J F; Valleron, A J

    2006-10-01

    Inserm has developed, since 1984, an information system based on a computer network of physicians in France. It allows for constitution of large databases on diseases, with individual description of cases, and to explore some aspects of the mathematical theory of communicable diseases. We developed user-friendly interfaces for remote data entry and GIS tools providing real-time atlas of the epidemiologic situation in any location. The continuous and ongoing surveillance network is constituted of about 1200 sentinel voluntary and unpaid investigators. We studied their motivation, reasons for either withdrawal or compliance using survival analyses. We implemented early warning systems for outbreak detection and for time-space forecasting. We conducted epidemiological surveys for investigating outbreaks. Large available time and space series allowed us to calibrate and explore synchronism of influenza epidemics, to test the assumption of panmixing in susceptibles-infectious-removed type models and to study the role of closing school in influenza morbidity and mortality in elderly. More than 250 000 cases of influenza, 150 000 cases of acute diarrheas, 35,000 patients for whom HIV tests have been prescribed by general practitioners and 25,000 cases of chickenpox have been collected. Detection of regional influenza or acute diarrhea outbreaks and forecasting of epidemic trends three weeks ahead are currently broadcasted to the French media and published on Sentiweb on a weekly basis. Age-cohort-period models assessed field effectiveness of mass immunization strategies against measles and influenza in the country. Case-control studies with more than 1200 sets of cases of acute diarrheas and their matched controls showed the role of calicivirus and rotavirus as probable major causes of gastroenteritis during recurrent widespread outbreaks in winter in France. An age-specific model for chickenpox showed the probable role of children in disease transmission to their

  19. A theoretical framework for analysing preschool teaching

    DEFF Research Database (Denmark)

    Chaiklin, Seth

    2014-01-01

    This article introduces a theoretical framework for analysing preschool teaching as a historically-grounded societal practice. The aim is to present a unified framework that can be used to analyse and compare both historical and contemporary examples of preschool teaching practice within and across national traditions. The framework has two main components: an analysis of preschool teaching as a practice, formed in relation to societal needs, and an analysis of the categorical relations which necessarily must be addressed in preschool teaching activity. The framework is introduced and illustrated through the analysis of one of the earliest recorded examples of preschool education (initiated by J. F. Oberlin in northeastern France in 1767). The general idea of societal need is elaborated as a way of analysing practices, and a general analytic schema is presented for characterising preschool teaching.

  20. Advanced toroidal facility vacuum vessel stress analyses

    International Nuclear Information System (INIS)

    The complex geometry of the Advanced Toroidal Facility (ATF) vacuum vessel required special analysis techniques to investigate the structural behavior of the design. The response of a large-scale finite element model was found for transportation and operational loading. Several computer codes and systems, including the National Magnetic Fusion Energy Computer Center Cray machines, were used in these analyses. The work combined complex methods that taxed the limits of both the codes and the computer systems involved. Using MSC/NASTRAN cyclic-symmetry solutions permitted modeling only 1/12 of the vessel geometry while mathematically analyzing the entire vessel. This allowed the greater detail and accuracy demanded by the complex geometry of the vessel. Critical buckling-pressure analyses were performed with the same model. The development, results, and problems encountered in performing these analyses are described. 5 refs., 3 figs

  1. Empirical Prediction Intervals for County Population Forecasts.

    Science.gov (United States)

    Rayer, Stefan; Smith, Stanley K; Tayman, Jeff

    2009-12-01

    Population forecasts entail a significant amount of uncertainty, especially for long-range horizons and for places with small or rapidly changing populations. This uncertainty can be dealt with by presenting a range of projections or by developing statistical prediction intervals. The latter can be based on models that incorporate the stochastic nature of the forecasting process, on empirical analyses of past forecast errors, or on a combination of the two. In this article, we develop and test prediction intervals based on empirical analyses of past forecast errors for counties in the United States. Using decennial census data from 1900 to 2000, we apply trend extrapolation techniques to develop a set of county population forecasts; calculate forecast errors by comparing forecasts to subsequent census counts; and use the distribution of errors to construct empirical prediction intervals. We find that empirically-based prediction intervals provide reasonably accurate predictions of the precision of population forecasts, but provide little guidance regarding their tendency to be too high or too low. We believe the construction of empirically-based prediction intervals will help users of small-area population forecasts measure and evaluate the uncertainty inherent in population forecasts and plan more effectively for the future. PMID:19936030
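    The error-distribution approach described above can be sketched in a few lines. The past errors and the point forecast below are hypothetical, and the error convention (forecast minus actual, as a percentage of the actual count) is an assumption for illustration:

```python
import numpy as np

# Hypothetical past percentage errors at a fixed forecast horizon,
# error = (forecast - actual) / actual * 100, pooled across counties.
past_errors_pct = np.array([-12.0, -7.5, -3.0, -1.0, 0.5,
                            2.0, 4.5, 6.0, 9.0, 15.0])

def empirical_interval(forecast, errors_pct, coverage=0.90):
    """Turn the empirical error distribution into a prediction interval."""
    alpha = (1.0 - coverage) / 2.0
    lo_e, hi_e = np.percentile(errors_pct, [100 * alpha, 100 * (1 - alpha)])
    # A large positive error (overprediction) maps to the low end of the
    # interval, since actual = forecast / (1 + error/100).
    return forecast / (1 + hi_e / 100), forecast / (1 + lo_e / 100)

low, high = empirical_interval(50000, past_errors_pct)  # 90% interval
```

    Note the abstract's caveat: such intervals capture the spread of past errors well, but say little about any systematic tendency to forecast too high or too low.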

  2. TOGGLE : toolbox for generic NGS analyses

    OpenAIRE

    Monat, Cécile; Tranchant-Dubreuil, Christine; Kougbeadjo, Ayité; Farcy, Cédric; Ortega-Abboud, Enrique; Amanzougarene, Souhila; Ravel, Sébastien; Agbessi, Mawussé; Orjuela-Bouniol, Julie; Summo, Maryline; Sabot, François

    2015-01-01

    Background The explosion of NGS (Next Generation Sequencing) sequence data requires a huge effort in Bioinformatics methods and analyses. The creation of dedicated, robust and reliable pipelines able to handle dozens of samples from raw FASTQ data to relevant biological data is a time-consuming task in all projects relying on NGS. To address this, we created a generic and modular toolbox for developing such pipelines. Results TOGGLE (TOolbox for Generic nGs anaLysEs) is a suite of tools able ...

  5. Analyse de discours et demande sociale

    OpenAIRE

    Cislaru, Georgeta; Garnier, Sylvie; Matras, Marie-Thérèse; Pugnière-Saavedra, Frédéric; Rousseau, Patrick; Sitri, Frédérique; Veniard, Marie

    2010-01-01

    What can discourse analysis reveal about societal practices and the discursive practices that underlie them? In questioning discourse, discourse analysis also questions the bodies that produce it: political, media and institutional bodies. In doing so, it has engaged, over the past forty years, in a fruitful interdisciplinary dialogue. With five contributions from discourse analysts and two from child protection professionals, this issue of the Carnets du ...

  6. Interferences in reactor neutron activation analyses

    International Nuclear Information System (INIS)

    It has been shown that interfering reactions may occur in neutron activation analyses of aluminum and zinc matrixes, commonly used in nuclear areas. The interferences analysed were ²⁷Al(n, α)²⁴Na and ⁶⁴Zn(n, p)⁶⁴Cu. The method used was non-destructive neutron activation analysis, and the spectra were obtained with a 1024-channel multichannel system coupled to a Ge(Li) detector. Sodium was detected in aluminum samples from the reactor tank and pneumatic transfer system. The independence of the sodium concentration in samples in the range of 0-100 ppm is shown by the attenuation obtained with the samples encapsulated in cadmium. (Author)
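    The magnitude of such an interfering activity can be estimated with the standard activation equation. The cross section, flux and irradiation time below are illustrative assumptions, not the paper's measured values:

```python
import math

AVOGADRO = 6.022e23

def induced_activity(n_atoms, sigma_cm2, flux, half_life_s, t_irr_s):
    """Activity (Bq) at end of irradiation:
    A = N * sigma * phi * (1 - exp(-lambda * t_irr))."""
    lam = math.log(2) / half_life_s          # decay constant (1/s)
    return n_atoms * sigma_cm2 * flux * (1 - math.exp(-lam * t_irr_s))

# Illustrative case: 1 g of aluminum (Al-27), an assumed fast-flux-averaged
# cross section of ~0.7 mb for Al-27(n, alpha)Na-24, a flux of 1e12
# n/cm^2/s, a 1-hour irradiation; Na-24 half-life is about 15 h.
n_al = AVOGADRO / 26.98
activity = induced_activity(n_al, 0.7e-27, 1e12, 15.0 * 3600, 3600.0)
saturation = n_al * 0.7e-27 * 1e12   # limit for "infinite" irradiation
```

    Because the irradiation here is short relative to the Na-24 half-life, the induced activity stays well below its saturation value.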

  7. Prosjektering og analyse av en spennarmert betongbru

    OpenAIRE

    Strand, Elin Holsten; Kaldbekkdalen, Ann-Kristin

    2014-01-01

    The purpose of this report is to carry out the analysis and design of a post-tensioned concrete bridge. Modelling and analysis were performed in NovaFrame 5. Part of the task was to determine the tendon system and the cross-section height of the bridge. Six tendons were assumed in the spans, and twelve over the supports. The cross-section height was set to 1.3 metres. The design was carried out in accordance with the applicable Eurocodes, relevant documents and Håndbok 185, which was prepared ...

  8. Loss-of-function mutations in TNFAIP3 leading to A20 haploinsufficiency cause an early-onset autoinflammatory disease

    NARCIS (Netherlands)

    Zhou, Qing; Wang, Hongying; Schwartz, Daniella M; Stoffels, Monique; Park, Yong Hwan; Zhang, Yuan; Yang, Dan; Demirkaya, Erkan; Takeuchi, Masaki; Tsai, Wanxia Li; Lyons, Jonathan J; Yu, Xiaomin; Ouyang, Claudia; Chen, Celeste; Chin, David T; Zaal, Kristien; Chandrasekharappa, Settara C; P Hanson, Eric; Yu, Zhen; Mullikin, James C; Hasni, Sarfaraz A; Wertz, Ingrid E; Ombrello, Amanda K; Stone, Deborah L; Hoffmann, Patrycja; Jones, Anne; Barham, Beverly K; Leavis, Helen L; van Royen, Annet; Sibley, Cailin; Batu, Ezgi D; Gül, Ahmet; Siegel, Richard M; Boehm, Manfred; Milner, Joshua D; Ozen, Seza; Gadina, Massimo; Chae, JaeJin; Laxer, Ronald M; Kastner, Daniel L; Aksentijevich, Ivona

    2015-01-01

    Systemic autoinflammatory diseases are driven by abnormal activation of innate immunity. Herein we describe a new disease caused by high-penetrance heterozygous germline mutations in TNFAIP3, which encodes the NF-κB regulatory protein A20, in six unrelated families with early-onset systemic inflamma

  9. Loss-of-function mutations in TNFAIP3 leading to A20 haploinsufficiency cause an early onset autoinflammatory syndrome

    Science.gov (United States)

    Zhou, Qing; Wang, Hongying; Schwartz, Daniella M.; Stoffels, Monique; Park, Yong Hwan; Zhang, Yuan; Yang, Dan; Demirkaya, Erkan; Takeuchi, Masaki; Tsai, Wanxia Li; Lyons, Jonathan J.; Yu, Xiaomin; Ouyang, Claudia; Chen, Celeste; Chin, David T.; Zaal, Kristien; Chandrasekharappa, Settara C.; Hanson, Eric P.; Yu, Zhen; Mullikin, James C.; Hasni, Sarfaraz A.; Wertz, Ingrid; Ombrello, Amanda K.; Stone, Deborah L.; Hoffmann, Patrycja; Jones, Anne; Barham, Beverly K.; Leavis, Helen L.; van Royen-Kerkof, Annet; Sibley, Cailin; Batu, Ezgi D.; Gül, Ahmet; Siegel, Richard M.; Boehm, Manfred; Milner, Joshua D.; Ozen, Seza; Gadina, Massimo; Chae, JaeJin; Laxer, Ronald M.; Kastner, Daniel L.; Aksentijevich, Ivona

    2016-01-01

    Systemic autoinflammatory diseases are driven by abnormal activation of innate immunity1. Herein we describe a new syndrome caused by high penetrance heterozygous germline mutations in the NFκB regulatory protein TNFAIP3 (A20) in six unrelated families with early onset systemic inflammation. The syndrome resembles Behçet’s disease (BD), which is typically considered a polygenic disorder with onset in early adulthood2. A20 is a potent inhibitor of the NFκB signaling pathway3. TNFAIP3 mutant truncated proteins are likely to act by haploinsufficiency since they do not exert a dominant-negative effect in overexpression experiments. Patients’ cells show increased degradation of IκBα and nuclear translocation of NFκB p65, and increased expression of NFκB-mediated proinflammatory cytokines. A20 restricts NFκB signals via deubiquitinating (DUB) activity. In cells expressing the mutant A20 protein, there is defective removal of K63-linked ubiquitin from TRAF6, NEMO, and RIP1 after TNF stimulation. NFκB-dependent pro-inflammatory cytokines are potential therapeutic targets for these patients. PMID:26642243

  10. Pre-operative diagnosis of symptomatic Meckel's diverticulum in a 20-month-old boy using radioisotope scanning

    International Nuclear Information System (INIS)

    A 20-month-old boy who presented to our department with painless rectal bleeding of three days' duration was diagnosed with a bleeding Meckel's diverticulum on a radioisotope scan. The scintigraphic finding was later confirmed at surgery and on histopathology. Radioisotope scanning with pertechnetate is a simple, non-invasive and valuable test for the pre-operative diagnosis of Meckel's diverticulum. (author)

  11. Optimal predictive model selection

    OpenAIRE

    Barbieri, Maria Maddalena; Berger, James O.

    2004-01-01

    Often the goal of model selection is to choose a model for future prediction, and it is natural to measure the accuracy of a future prediction by squared error loss. Under the Bayesian approach, it is commonly perceived that the optimal predictive model is the model with highest posterior probability, but this is not necessarily the case. In this paper we show that, for selection among normal linear models, the optimal predictive model is often the median probability model, which is defined a...
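The distinction the abstract draws between the highest-posterior-probability model and the median probability model can be sketched numerically. The posterior model probabilities below are invented for illustration only; the selection rule (keep every predictor whose posterior inclusion probability is at least 1/2) follows Barbieri and Berger.

```python
# Hypothetical posterior model probabilities over all subsets of three
# predictors {0, 1, 2}; the numbers are invented for illustration and sum to 1.
posterior = {
    (): 0.05, (0,): 0.25, (1,): 0.02, (2,): 0.03,
    (0, 1): 0.30, (0, 2): 0.20, (1, 2): 0.05, (0, 1, 2): 0.10,
}

# Posterior inclusion probability of each predictor: total probability
# of all models that contain it.
incl = {j: sum(p for m, p in posterior.items() if j in m) for j in range(3)}

# Median probability model: keep exactly the predictors whose inclusion
# probability is at least 1/2.
median_model = tuple(j for j in range(3) if incl[j] >= 0.5)

# Highest-posterior-probability (MAP) model, for comparison.
map_model = max(posterior, key=posterior.get)

print(incl)          # approximately {0: 0.85, 1: 0.47, 2: 0.38}
print(median_model)  # (0,)
print(map_model)     # (0, 1) -- the two selections disagree
```

With these (made-up) numbers the MAP model contains predictor 1 even though its marginal inclusion probability is below 1/2, so the two criteria pick different models.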

  12. Predictive software design measures

    OpenAIRE

    Love, Randall James

    1994-01-01

    This research develops a set of predictive measures enabling software testers and designers to identify and target potential problem areas for additional and/or enhanced testing. Predictions are available as early in the design process as requirements allocation and as late as code walk-throughs. These predictions are based on characteristics of the design artifacts prior to coding. Prediction equations are formed at established points in the software development process...

  13. A20 overexpression under control of mouse osteocalcin promoter in MC3T3-E1 cells inhibited tumor necrosis factor-alpha-induced apoptosis

    Institute of Scientific and Technical Information of China (English)

    Yue-juan QIN; Zhen-lin ZHANG; Lu-yang YU; Jin-wei HE; Ya-nan HOU; Tian-jin LIU; Jia-cai WU; Song-hua WU; Li-he GUO

    2006-01-01

    Aim: To construct an A20 expression vector under the control of the mouse osteocalcin promoter (OC-A20), and to investigate whether an osteoblastic MC3T3-E1 cell line stably overexpressing A20 is protected from tumor necrosis factor (TNF)-alpha-induced apoptosis. Methods: The OC-A20 vector was constructed by fusing a fragment of the mouse osteocalcin gene-2 promoter with human A20 complementary DNA. A mouse MC3T3-E1 cell line stably transfected with A20 was then established. The expression of A20 mRNA and A20 protein in the cells was detected by reverse transcription-polymerase chain reaction (RT-PCR) and Western blot analysis, respectively. To determine the osteoblast specificity of A20 expression, the mouse osteoblastic MC3T3-E1 cell line and the mouse embryo fibroblast NIH3T3 cell line were transiently transfected with OC-A20. The anti-apoptotic role of A20 in MC3T3-E1 cells was assessed by flow cytometric analysis (FACS), terminal dUTP nick end-labeling (TUNEL) and DNA gel electrophoresis (DNA ladder) analysis. Results: Weak endogenous A20 expression was found in MC3T3-E1 cells with mouse A20 primers. A20 mRNA and A20 protein expression were identified in MC3T3-E1 cells transfected with OC-A20 by RT-PCR and Western blot analysis. After transient transfection of MC3T3-E1 and NIH3T3 cells with OC-A20, A20 mRNA expression was found only in MC3T3-E1 cells. By FACS, the rate of apoptosis decreased markedly in the OC-A20 group compared with the empty-vector (pcDNA3) group (P<0.001), and TUNEL-positive staining increased significantly in the pcDNA3 group compared with the OC-A20 group (P<0.001). Similar effects were demonstrated by DNA gel electrophoresis. Conclusion: We constructed an osteoblast-specific expression vector that expresses A20 in MC3T3-E1 cells and confirmed that A20 protects osteoblasts against TNF-alpha-induced apoptosis.

  14. Comparison of veterinary import risk analyses studies

    NARCIS (Netherlands)

    Vos-de Jong, de C.J.; Conraths, F.J.; Adkin, A.; Jones, E.M.; Hallgren, G.S.; Paisley, L.G.

    2011-01-01

    Twenty-two veterinary import risk analyses (IRAs) were audited: a) for inclusion of the main elements of risk analysis; b) between different types of IRAs; c) between reviewers' scores. No significant differences were detected between different types of IRAs, although quantitative IRAs and IRAs publ

  15. UMTS signal measurements with digital spectrum analysers

    International Nuclear Information System (INIS)

    The launch of the Universal Mobile Telecommunications System (UMTS), the most recent mobile telecommunications standard, has imposed the requirement of updating measurement instrumentation and methodologies. In order to define the most reliable measurement procedure, aimed at assessing exposure to electromagnetic fields, the features of modern spectrum analysers relevant to correct signal characterisation have been reviewed. (authors)

  16. A gamma model for {DNA} mixture analyses

    OpenAIRE

    Cowell, R. G.; Lauritzen, S L; Mortera, J.

    2007-01-01

    We present a new methodology for analysing forensic identification problems involving DNA mixture traces where several individuals may have contributed to the trace. The model used for identification and separation of DNA mixtures is based on a gamma distribution for peak area values. In this paper we illustrate the gamma model and apply it on several real examples from forensic casework.

  17. En Billig GPS Data Analyse Platform

    DEFF Research Database (Denmark)

    Andersen, Ove; Christiansen, Nick; Larsen, Niels T.; Torp, Kristian

    2011-01-01

    This article presents a complete software platform for analysing GPS data. The platform is built entirely from open-source components. The individual components of the platform are described in detail. Advantages and disadvantages of using open source are discussed, including which IT-policy measures ...

  18. Comparing functional annotation analyses with Catmap

    Directory of Open Access Journals (Sweden)

    Krogh Morten

    2004-12-01

    Full Text Available Abstract Background Ranked gene lists from microarray experiments are usually analysed by assigning significance to predefined gene categories, e.g., based on functional annotations. Tools performing such analyses are often restricted to a category score based on a cutoff in the ranked list and a significance calculation based on random gene permutations as null hypothesis. Results We analysed three publicly available data sets, in each of which samples were divided into two classes and genes ranked according to their correlation to class labels. We developed a program, Catmap (available for download at http://bioinfo.thep.lu.se/Catmap, to compare different scores and null hypotheses in gene category analysis, using Gene Ontology annotations for category definition. When a cutoff-based score was used, results depended strongly on the choice of cutoff, introducing an arbitrariness in the analysis. Comparing results using random gene permutations and random sample permutations, respectively, we found that the assigned significance of a category depended strongly on the choice of null hypothesis. Compared to sample label permutations, gene permutations gave much smaller p-values for large categories with many coexpressed genes. Conclusions In gene category analyses of ranked gene lists, a cutoff-independent score is preferable. The choice of null hypothesis is very important; random gene permutations do not work well as an approximation to sample label permutations.
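The contrast between the two null hypotheses can be illustrated with a toy ranked-list experiment. This is not Catmap itself: the expression matrix, the mean-rank category score and all sizes below are invented for illustration; only the two permutation schemes follow the abstract.

```python
import random
import statistics

random.seed(2)

n_genes, n_samples, cat_size = 200, 12, 20
labels = [0] * 6 + [1] * 6

# Toy expression matrix: the first cat_size genes are coexpressed (they
# share a sample-level random component) but carry no true class signal.
shared = [random.gauss(0, 1) for _ in range(n_samples)]
expr = [[shared[s] + random.gauss(0, 0.3) for s in range(n_samples)]
        if g < cat_size else
        [random.gauss(0, 1) for _ in range(n_samples)]
        for g in range(n_genes)]

def corr(xs, ys):
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den if den else 0.0

def ranks(lab):
    """Rank genes by |correlation| with the class labels (rank 0 = top)."""
    order = sorted(range(n_genes), key=lambda g: -abs(corr(expr[g], lab)))
    return {g: r for r, g in enumerate(order)}

def score(pos, genes):
    """Category score: mean rank of its genes (smaller = more extreme)."""
    return statistics.fmean(pos[g] for g in genes)

category = list(range(cat_size))
obs = score(ranks(labels), category)

# Null 1: random gene permutations (ranking fixed, membership random).
pos = ranks(labels)
gene_null = [score(pos, random.sample(range(n_genes), cat_size))
             for _ in range(200)]

# Null 2: random sample-label permutations (ranking recomputed each time).
label_null = []
for _ in range(200):
    lab = labels[:]
    random.shuffle(lab)
    label_null.append(score(ranks(lab), category))

p_gene = sum(s <= obs for s in gene_null) / len(gene_null)
p_label = sum(s <= obs for s in label_null) / len(label_null)
print(p_gene, p_label)
```

Because the coexpressed category genes move up or down the ranking together under label permutation, the label-permutation null is far more dispersed than the gene-permutation null, which is the mechanism behind the over-optimistic gene-permutation p-values the abstract reports.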

  19. Analyses of hydraulic performance of velocity caps

    DEFF Research Database (Denmark)

    Christensen, Erik Damgaard; Degn Eskesen, Mark Chr.; Buhrkall, Jeppe;

    2014-01-01

    The hydraulic performance of a velocity cap has been investigated. Velocity caps are often used in connection with offshore intakes. CFD (computational fluid dynamics) examined the flow through the cap openings and further down into the intake pipes. This was combined with dimensional analyses...

  20. FAME: Software for analysing rock microstructures

    Science.gov (United States)

    Hammes, Daniel M.; Peternell, Mark

    2016-05-01

    Determination of rock microstructures leads to a better understanding of the formation and deformation of polycrystalline solids. Here, we present FAME (Fabric Analyser based Microstructure Evaluation), an easy-to-use MATLAB®-based software for processing datasets recorded by an automated fabric analyser microscope. FAME is provided as a MATLAB®-independent Windows® executable with an intuitive graphical user interface. Raw data from the fabric analyser microscope can be automatically loaded, filtered and cropped before analysis. Accurate and efficient rock microstructure analysis is based on an advanced user-controlled grain labelling algorithm. The preview and testing environments simplify the determination of appropriate analysis parameters. Various statistic and plotting tools allow a graphical visualisation of the results such as grain size, shape, c-axis orientation and misorientation. The FAME2elle algorithm exports fabric analyser data to an elle (modelling software)-supported format. FAME supports batch processing for multiple thin section analysis or large datasets that are generated for example during 2D in-situ deformation experiments. The use and versatility of FAME is demonstrated on quartz and deuterium ice samples.

  1. Uncertainty quantification approaches for advanced reactor analyses.

    Energy Technology Data Exchange (ETDEWEB)

    Briggs, L. L.; Nuclear Engineering Division

    2009-03-24

    The original approach to nuclear reactor design or safety analyses was to make very conservative modeling assumptions so as to ensure meeting the required safety margins. Traditional regulation, as established by the U.S. Nuclear Regulatory Commission, required conservatisms which have subsequently been shown to be excessive. The Commission has therefore moved away from excessively conservative evaluations and has determined best-estimate calculations to be an acceptable alternative to conservative models, provided the best-estimate results are accompanied by an uncertainty evaluation which can demonstrate that, when a set of analysis cases which statistically account for uncertainties of all types are generated, there is a 95% probability that at least 95% of the cases meet the safety margins. To date, nearly all published work addressing uncertainty evaluations of nuclear power plant calculations has focused on light water reactors and on large-break loss-of-coolant accident (LBLOCA) analyses. However, there is nothing in the uncertainty evaluation methodologies that is limited to a specific type of reactor or to specific types of plant scenarios. These same methodologies can be equally well applied to analyses for high-temperature gas-cooled reactors and liquid metal reactors, and they can be applied to steady-state calculations, operational transients, or severe accident scenarios. This report reviews and compares both statistical and deterministic uncertainty evaluation approaches. Recommendations are given for selection of an uncertainty methodology and for considerations to be factored into the process of evaluating uncertainties for advanced reactor best-estimate analyses.
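The 95%/95% criterion described above is commonly satisfied with nonparametric (Wilks-type) tolerance limits: run the code n times with randomly sampled inputs and take the largest result as the bound. A minimal sketch of the standard sample-size calculation follows; the function name is ours, and the one-sided, first-order case is assumed.

```python
def wilks_sample_size(coverage=0.95, confidence=0.95):
    """Smallest number of code runs n such that the largest observed
    result is a one-sided upper tolerance bound covering `coverage`
    of the output population with probability `confidence`, i.e.
    1 - coverage**n >= confidence (first-order Wilks formula)."""
    n = 1
    while 1.0 - coverage ** n < confidence:
        n += 1
    return n

print(wilks_sample_size())            # 59 runs for 95%/95%
print(wilks_sample_size(0.99, 0.95))  # 299 runs for 99%/95%
```

The familiar figure of 59 best-estimate runs for a 95%/95% statement falls out directly; tightening the coverage to 99% raises the requirement to 299 runs.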

  2. Hybrid Logical Analyses of the Ambient Calculus

    DEFF Research Database (Denmark)

    Bolander, Thomas; Hansen, Rene Rydhof

    2010-01-01

    In this paper, hybrid logic is used to formulate three control flow analyses for Mobile Ambients, a process calculus designed for modelling mobility. We show that hybrid logic is very well-suited to express the semantic structure of the ambient calculus and how features of hybrid logic can be...

  3. Meta-analyses on viral hepatitis

    DEFF Research Database (Denmark)

    Gluud, Lise L; Gluud, Christian

    2009-01-01

    This article summarizes the meta-analyses of interventions for viral hepatitis A, B, and C. Some of the interventions assessed are described in small trials with unclear bias control. Other interventions are supported by large, high-quality trials. Although attempts have been made to adjust...

  4. Modelling and Analysing Socio-Technical Systems

    DEFF Research Database (Denmark)

    Aslanyan, Zaruhi; Ivanova, Marieta Georgieva; Nielson, Flemming;

    2015-01-01

    and assessing attacks. In our work we model all relevant levels of socio-technical systems, and propose evaluation techniques for analysing the security properties of the model. Our approach simplifies the identification of possible attacks and provides qualified assessment and ranking of attacks based...

  5. Masonry: Task Analyses. Competency-Based Education.

    Science.gov (United States)

    Henrico County Public Schools, Glen Allen, VA. Virginia Vocational Curriculum Center.

    These task analyses are designed to be used in combination with the "Trade and Industrial Education Service Area Resource" in order to implement competency-based education in the masonry program in Virginia. The task analysis document contains the task inventory, suggested task sequence lists, and content outlines for the secondary courses Masonry…

  6. The Economic Cost of Homosexuality: Multilevel Analyses

    Science.gov (United States)

    Baumle, Amanda K.; Poston, Dudley, Jr.

    2011-01-01

    This article builds on earlier studies that have examined "the economic cost of homosexuality," by using data from the 2000 U.S. Census and by employing multilevel analyses. Our findings indicate that partnered gay men experience a 12.5 percent earnings penalty compared to married heterosexual men, and a statistically insignificant earnings…

  7. Micromechanical photothermal analyser of microfluidic samples

    DEFF Research Database (Denmark)

    2014-01-01

    The present invention relates to a micromechanical photothermal analyser of microfluidic samples comprising an oblong micro-channel extending longitudinally from a support element, the micro-channel is made from at least two materials with different thermal expansion coefficients, wherein the...

  8. Random error in cardiovascular meta-analyses

    DEFF Research Database (Denmark)

    Albalawi, Zaina; McAlister, Finlay A; Thorlund, Kristian; Wong, Michelle; Wetterslev, Jørn

    2013-01-01

    and/or their cumulative Z-curve crossed the O'Brien-Fleming monitoring boundaries for detecting a RRR of at least 25%. We classified meta-analyses that did not achieve statistical significance as true negatives if their pooled sample size was sufficient to reject a RRR of 25%. RESULTS: Twenty three...

  9. Chemical Analyses of Silicon Aerogel Samples

    CERN Document Server

    van der Werf, I; De Leo, R; Marrone, S

    2008-01-01

    After five years of operation, two aerogel counters, A1 and A2, taking data in Hall A at Jefferson Lab suffered a loss of performance. In this note possible causes of the degradation have been studied. In particular, various chemical and physical analyses have been carried out on several aerogel tiles and on adhesive tape in order to reveal the presence of contaminants.

  10. Cosmetology: Task Analyses. Competency-Based Education.

    Science.gov (United States)

    Henrico County Public Schools, Glen Allen, VA. Virginia Vocational Curriculum Center.

    These task analyses are designed to be used in combination with the "Trade and Industrial Education Service Area Resource" in order to implement competency-based education in the cosmetology program in Virginia. The task analysis document contains the task inventory, suggested task sequence lists, and content outlines for the secondary courses…

  11. How to Establish Clinical Prediction Models.

    Science.gov (United States)

    Lee, Yong Ho; Bang, Heejung; Kim, Dae Jung

    2016-03-01

    A clinical prediction model can be applied to several challenging clinical scenarios: screening high-risk individuals for asymptomatic disease, predicting future events such as disease or death, and assisting medical decision-making and health education. Despite the impact of clinical prediction models on practice, prediction modeling is a complex process requiring careful statistical analyses and sound clinical judgement. Although there is no definite consensus on the best methodology for model development and validation, a few recommendations and checklists have been proposed. In this review, we summarize five steps for developing and validating a clinical prediction model: preparation for establishing clinical prediction models; dataset selection; handling variables; model generation; and model evaluation and validation. We also review several studies that detail methods for developing clinical prediction models with comparable examples from real practice. After model development and vigorous validation in relevant settings, possibly with evaluation of utility/usability and fine-tuning, good models can be ready for the use in practice. We anticipate that this framework will revitalize the use of predictive or prognostic research in endocrinology, leading to active applications in real clinical practice. PMID:26996421

  12. Testing earthquake predictions

    Science.gov (United States)

    Luen, Brad; Stark, Philip B.

    2008-01-01

    Statistical tests of earthquake predictions require a null hypothesis to model occasional chance successes. To define and quantify 'chance success' is knotty. Some null hypotheses ascribe chance to the Earth: Seismicity is modeled as random. The null distribution of the number of successful predictions - or any other test statistic - is taken to be its distribution when the fixed set of predictions is applied to random seismicity. Such tests tacitly assume that the predictions do not depend on the observed seismicity. Conditioning on the predictions in this way sets a low hurdle for statistical significance. Consider this scheme: When an earthquake of magnitude 5.5 or greater occurs anywhere in the world, predict that an earthquake at least as large will occur within 21 days and within an epicentral distance of 50 km. We apply this rule to the Harvard centroid-moment-tensor (CMT) catalog for 2000-2004 to generate a set of predictions. The null hypothesis is that earthquake times are exchangeable conditional on their magnitudes and locations and on the predictions - a common "nonparametric" assumption in the literature. We generate random seismicity by permuting the times of events in the CMT catalog. We consider an event successfully predicted only if (i) it is predicted and (ii) there is no larger event within 50 km in the previous 21 days. The P-value for the observed success rate is <0.001: The method successfully predicts about 5% of earthquakes, far better than 'chance' because the predictor exploits the clustering of earthquakes - occasional foreshocks - which the null hypothesis lacks. Rather than condition on the predictions and use a stochastic model for seismicity, it is preferable to treat the observed seismicity as fixed, and to compare the success rate of the predictions to the success rate of simple-minded predictions like those just described. If the proffered predictions do no better than a simple scheme, they have little value.
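The permutation test described above can be sketched on synthetic data. The catalog below is randomly generated with some injected clustering (it is not the Harvard CMT catalog), and the 21-day / 50-km / M >= 5.5 thresholds follow the scheme in the abstract.

```python
import math
import random

random.seed(1)

# Synthetic catalog of (time_days, magnitude, x_km, y_km) -- invented
# data with occasional nearby follow-on events to mimic clustering.
catalog = []
for _ in range(60):
    t = random.uniform(0, 2000)
    x, y = random.uniform(0, 800), random.uniform(0, 800)
    catalog.append((t, random.uniform(5.5, 7.0), x, y))
    if random.random() < 0.3:  # a close-in-time, close-in-space follow-on
        catalog.append((t + random.uniform(0, 15), random.uniform(5.5, 7.0),
                        x + random.uniform(-20, 20), y + random.uniform(-20, 20)))
catalog.sort()

def success_rate(events):
    """Fraction of events 'predicted' by the automatic scheme: an event
    succeeds if an earlier, no-larger event occurred within 21 days and
    50 km (every M >= 5.5 event issues a prediction)."""
    events = sorted(events)
    hits = 0
    for j, (tj, mj, xj, yj) in enumerate(events):
        if any(tj - ti <= 21 and mj >= mi
               and math.hypot(xj - xi, yj - yi) <= 50
               for ti, mi, xi, yi in events[:j]):
            hits += 1
    return hits / len(events)

observed = success_rate(catalog)

# Null hypothesis: event times are exchangeable given magnitudes and
# locations -- permute the times, keep everything else fixed.
times = [e[0] for e in catalog]
null_rates = []
for _ in range(500):
    random.shuffle(times)
    null_rates.append(success_rate([(t,) + e[1:] for t, e in zip(times, catalog)]))

p_value = sum(r >= observed for r in null_rates) / len(null_rates)
print(observed, p_value)
```

As the abstract warns, a small p-value here reflects only that the scheme exploits clustering that the time-exchangeable null lacks; it is not evidence of genuine predictive skill beyond that of the simple-minded scheme itself.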

  13. Nuclear Performance Analyses for HCPB Test Blanket Modules in ITER

    International Nuclear Information System (INIS)

    The Helium-Cooled Pebble Bed (HCPB) blanket is one of two breeder blanket concepts developed in the framework of the European Fusion Technology Programme for performance tests in ITER. The related efforts currently focus on the design optimisation of suitable Test Blanket Modules (TBM) and associated R-and-D activities. Four different HCPB TBM types are considered for addressing issues related to (i) electromagnetic transients (EM), (ii) neutronics and Tritium (NT), (iii) thermo-mechanical properties of the pebble beds (TM), and (iv) the integral performance of the blanket module (Plant Integration, PI). The lay-out of the NT and the PI modules has been entirely revised to represent the latest HCPB breeder blanket concept for fusion power reactors. A HCPB TBM consists of a steel box with an internal stiffening grid and small breeder units. The stiffening grid forms radially running open cells accommodating the breeder units (BU). The BU consists of a back plate with attached breeder canisters providing space for the breeder pebble beds. The space between the canisters and the stiffening plates is filled with Beryllium pebbles for the neutron multiplication. The latest design assumes two vertically arranged breeder containers per BU with a toroidal bed height of 10 and 24 mm, for NT and PI modules, respectively. Li4SiO4 is assumed as breeder material at 6Li enrichment levels between 40 at % (NT) and 90 at % (PI). This work is devoted to the neutronic, shielding and activation analyses performed recently for NT and PI variants of the HCPB TBM in ITER. The analyses are based on three-dimensional neutronic and activation calculations making use of a 20 degree torus sector model of ITER developed for Monte Carlo calculations with the MCNP code. The model includes a proper representation of the horizontal ITER test blanket port, the water cooled support frame with two integrated HCPB blanket test modules, the radiation shield and the port environment. Monte Carlo

  14. Predicting Predictable about Natural Catastrophic Extremes

    Science.gov (United States)

    Kossobokov, Vladimir

    2015-04-01

    By definition, an extreme event is a rare one in a series of kindred phenomena. Usually (e.g. in geophysics), this implies investigating a small sample of case histories with the help of delicate statistical methods and data of different quality, collected in various conditions. Many extreme events are clustered (far from independent) and follow fractal or some other "strange" distribution (far from uniform). Evidently, such an "unusual" situation complicates the search for and definition of reliable precursory behaviors to be used for forecast/prediction purposes. Making forecast/prediction claims reliable and quantitatively probabilistic, in the frame of the most popular objectivists' viewpoint on probability, requires a long series of "yes/no" forecast/prediction outcomes, which cannot be obtained without an extended rigorous test of the candidate method. The set of errors ("success/failure" scores and the space-time measure of alarms) and other information obtained in such a control test supplies the data necessary to judge the candidate's potential as a forecast/prediction tool and, eventually, to find its improvements. This is to be done first in comparison against random guessing, which yields confidence (measured in terms of statistical significance). Note that the application of forecast/prediction tools could be very different in cases of different natural hazards, costs and benefits that determine risks, and therefore requires the determination of different optimal strategies minimizing reliable estimates of realistic levels of accepted losses. In their turn, case-specific costs and benefits may suggest a modification of the forecast/prediction tools for a more adequate "optimal" application. Fortunately, the situation is not hopeless due to the state-of-the-art understanding of the complexity and non-linear dynamics of the Earth as a physical system and pattern recognition approaches applied to available geophysical evidence, specifically, when intending to predict

  15. Finite-element creep damage analyses of P91 pipes

    International Nuclear Information System (INIS)

    In this paper, uniaxial and notched bar creep test data are used to establish the material behaviour models for two P91 steels of differing strength. The two steels are denoted here as Bar 257 steel, tested at 650 deg. C and A-369 steel, tested at 625 deg. C. Single-state variable and three-state variable creep damage constitutive models were used in the investigation. Methods for determining the material properties in the two sets of equations are briefly described. Finite-element analyses are performed using these material properties for a P91 pipe, subjected to internal pressure and end loading. The failure lives of the pipe were obtained, and on this basis, a preliminary assessment of using the two different sets of constitutive equations for failure predictions of high-temperature components under creep damage conditions can be made

  16. Probabilistic fuel rod analyses using the TRANSURANUS code

    International Nuclear Information System (INIS)

    After more than 25 years of fuel rod modelling research, the basic concepts are well established and the limitations of the specific approaches are known. However, the widely used mechanistic approach leads in many cases to discrepancies between theoretical predictions and experimental evidence indicating that models are not exact and that some of the physical processes encountered are of stochastic nature. To better understand uncertainties and their consequences, the mechanistic approach must therefore be augmented by statistical analyses. In the present paper the basic probabilistic methods are briefly discussed. Two such probabilistic approaches are included in the fuel rod performance code TRANSURANUS: the Monte Carlo method and the Numerical Noise Analysis. These two techniques are compared and their capabilities are demonstrated. (author). 12 refs, 4 figs, 2 tabs
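The Monte Carlo technique the abstract refers to amounts to sampling the uncertain inputs from assumed distributions and propagating each sample through the deterministic model. The sketch below uses a toy stand-in for the thermal model (it is not TRANSURANUS), and all distributions and parameter values are assumptions made for illustration.

```python
import random
import statistics

random.seed(42)

def clad_temperature(power, gap_conductance, coolant_temp):
    """Toy stand-in for a deterministic fuel-rod thermal model (not
    TRANSURANUS): cladding temperature from a linear heat rating [W/cm]
    and a lumped gap/film conductance [W/cm/K]."""
    return coolant_temp + power / gap_conductance

# Monte Carlo: draw the uncertain inputs from assumed distributions and
# push each draw through the model to build the output distribution.
samples = []
for _ in range(10_000):
    power = random.gauss(200.0, 10.0)    # W/cm (assumed uncertainty)
    cond = random.gauss(0.5, 0.05)       # W/cm/K (assumed uncertainty)
    coolant = random.gauss(290.0, 5.0)   # deg C (assumed uncertainty)
    samples.append(clad_temperature(power, cond, coolant))

mean = statistics.fmean(samples)
p95 = sorted(samples)[int(0.95 * len(samples))]
print(round(mean, 1), round(p95, 1))  # mean and 95th percentile
```

Note that the mean output temperature exceeds the value obtained by running the model once at the mean inputs, because the output is nonlinear in the conductance; capturing exactly this kind of effect is why statistical analyses are needed alongside the mechanistic approach.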

  17. Compilation of Sandia coal char combustion data and kinetic analyses

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, R.E.; Hurt, R.H.; Baxter, L.L.; Hardesty, D.R.

    1992-06-01

    An experimental project was undertaken to characterize the physical and chemical processes that govern the combustion of pulverized coal chars. The experimental endeavor establishes a database on the reactivities of coal chars as a function of coal type, particle size, particle temperature, gas temperature, and gas composition. The project also provides a better understanding of the mechanism of char oxidation, and yields quantitative information on the release rates of nitrogen- and sulfur-containing species during char combustion. An accurate predictive engineering model of the overall char combustion process under technologically relevant conditions is a primary product of this experimental effort. This document summarizes the experimental effort, the approach used to analyze the data, and individual compilations of data and kinetic analyses for each of the parent coals investigated.

  18. Loss of feed water analyses of advanced Heavy Water Reactor

    International Nuclear Information System (INIS)

    The proposed Advanced Heavy Water Reactor (AHWR) is a 750 MWt vertical pressure-tube-type reactor, cooled by boiling light water and moderated by heavy water. A passive design feature of this reactor is that heat removal is achieved through natural circulation of the primary coolant at all power levels, with no primary coolant pumps. The case analysed in this paper is the loss of feedwater to the steam drum, which results in a decrease in heat removal from the core and also causes an increase in reactor pressure. Further consequences depend upon the various protective and engineered safeguard systems, such as the relief system, reactor trip, isolation condenser and advanced accumulator. The analysis has been done using the code RELAP5/MOD3.2. Various modelling aspects are discussed in this paper, and predictions are made for parameters such as pressure, temperature, quality and flow in different parts of the Primary Heat Transport (PHT) system. (author)

  19. Spent fuel shipping costs for transportation logistics analyses

    International Nuclear Information System (INIS)

    Logistics analyses supplied to the nuclear waste management programs of the U.S. Department of Energy through the Transportation Technology Center (TTC) at Sandia National Laboratories are used to predict nuclear waste material logistics, transportation packaging demands, shipping and receiving rates, and transportation-related costs for alternative strategies. This study is an in-depth analysis of the problems and contingencies associated with the costs of shipping irradiated reactor fuel. These costs are extremely variable, however, and have changed frequently (sometimes monthly) during the past few years due to changes in capital, fuel, and labor costs. All costs and charges reported in this study are based on January 1982 data using existing transport cask systems and should be used as relative indices only. Actual shipping costs would be negotiable for each origin-destination combination.

  20. Predictable or not predictable? The MOV question

    International Nuclear Information System (INIS)

    Over the past 8 years, the nuclear industry has struggled to understand the dynamic phenomena experienced during motor-operated valve (MOV) operation under differing flow conditions. For some valves and designs, operational functionality has been found to be predictable; for others, unpredictable. Although much has been accomplished over this period, especially on modeling valve dynamics, the unpredictability of many valves and designs still exists. A few valve manufacturers are focusing on improving design and fabrication techniques to enhance product reliability and predictability. However, this approach does not address these issues for installed and unpredictable valves. This paper presents some of the more promising techniques that Wyle Laboratories has explored with potential for transforming unpredictable valves into predictable valves and for retrofitting installed MOVs. These techniques include optimized valve tolerancing, surrogate material evaluation, and enhanced surface treatments

  1. STACE: Source Term Analyses for Containment Evaluations of transport casks

    International Nuclear Information System (INIS)

    The development of the Source Term Analyses for Containment Evaluations (STACE) methodology provides a unique means for estimating the probability of cladding breach within transport casks, quantifying the amount of radioactive material released into the cask interior, and calculating the releasable radionuclide concentrations and corresponding maximum permissible leakage rates. Following the guidance of ANSI N14.5, the STACE methodology provides a technically defensible means for estimating maximum permissible leakage rates. These containment criteria attempt to reflect the true radiological hazard by performing a detailed examination of the spent fuel, CRUD, and residual contamination contributions to the releasable source term. The evaluation of the spent fuel contribution to the source term has been modeled fairly accurately using the STACE methodology. The structural model predicts the cask drop load history, the mechanical response of the fuel assembly, and the probability of cladding breach. These data are then used to predict the amount of fission gas, volatile species, and fuel fines that are releasable from the cask. There are some areas where data are sparse or lacking, for which experimental validation is planned. Finally, the ANSI N14.5 recommendation that 3% and 100% of the fuel rods fail during normal and hypothetical accident conditions of transport, respectively, has been shown to be overly conservative by several orders of magnitude for these example analyses. Furthermore, the maximum permissible leakage rates for this example assembly under normal and hypothetical accident conditions are significantly higher than the leaktight requirements. By relaxing the maximum permissible leakage rates, the source term methodology is expected to significantly improve cask economics and safety

  2. Does the Repressor Coping Style Predict Lower Posttraumatic Stress Symptoms?

    OpenAIRE

    McNally, Richard J.; Hatch, John P.; Cedillos, Elizabeth M.; Luethcke, Cynthia A.; Baker, Monty T.; Peterson, Alan L.; Litz, Brett T.

    2011-01-01

    We tested whether a continuous measure of repressor coping style predicted lower posttraumatic stress disorder (PTSD) symptoms in 122 health care professionals serving in Operation Iraqi Freedom. Zero-order correlational analyses indicated that predeployment repressor coping scores negatively predicted postdeployment PTSD symptoms, \\(r_s = -0.29, p = 0.001\\), whereas predeployment Connor-Davidson Resilience Scale (CD-RISC) scores did not predict postdeployment PTSD symptoms, \\(r_s = -0.13, p ...
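    The zero-order correlations above are Spearman rank coefficients (r_s). As a hedged illustration only (a generic textbook computation, not the authors' analysis code), such a coefficient is the Pearson correlation of the rank-transformed data:

```python
def rank(values):
    # Assign 1-based ranks; ties receive the average (mid) rank.
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    # Spearman r_s = Pearson correlation of the ranks of x and y.
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

A perfectly decreasing pairing yields r_s = -1, matching the sign convention of the negative correlations reported above.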

  3. Albedo Pattern Recognition and Time-Series Analyses in Malaysia

    Science.gov (United States)

    Salleh, S. A.; Abd Latif, Z.; Mohd, W. M. N. Wan; Chan, A.

    2012-07-01

    Pattern recognition and time-series analyses enable one to evaluate and generate predictions of specific phenomena. Albedo pattern and time-series analyses are particularly useful for monitoring climate conditions. This study was conducted to identify changes in Malaysia's albedo pattern. The recognized patterns and changes will be useful for a variety of environmental and climate monitoring studies, such as carbon budgeting and aerosol mapping. Ten years (2000-2009) of MODIS satellite images were used for the analyses and interpretation. These images were processed using the ERDAS Imagine remote sensing software, ArcGIS 9.3, the 6S code for atmospheric calibration, and several MODIS tools (MRT, HDF2GIS, Albedo tools). Several methods of time-series analysis were explored; this paper demonstrates trend and seasonal time-series analyses using the HDF-format MODIS MCD43A3 albedo land product. The results revealed significant changes in albedo percentages over the past 10 years, and in the pattern with regard to Malaysia's nebulosity index (NI) and aerosol optical depth (AOD). A noticeable trend can be identified in the maximum and minimum values of the albedo. The rises and falls of the line graph show a trend similar to the daily observations; the differences appear in the magnitude of the rises and falls of albedo. Thus, it can be concluded that the temporal behaviour of land surface albedo in Malaysia is uniform with respect to the local monsoons. 
However, although the average albedo shows a linear trend with the nebulosity index, the pattern changes of albedo with respect to the nebulosity index indicate that there are external factors influencing the albedo values, as the plotted sky conditions and diffusion do not follow a uniform trend over the years, especially when the trend at 5-year intervals is examined; 2000 shows a high negative linear
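    The trend component of such a time-series analysis can, in its simplest form, be isolated with an ordinary least-squares fit. The sketch below is generic and illustrative, not the authors' 6S/ERDAS workflow; the input list stands in for a sequence of albedo observations:

```python
def linear_trend(series):
    """Ordinary least-squares slope and intercept of y against t = 0..n-1."""
    n = len(series)
    t_mean = (n - 1) / 2.0
    y_mean = sum(series) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(series))
    den = sum((t - t_mean) ** 2 for t in range(n))
    slope = num / den
    intercept = y_mean - slope * t_mean
    return slope, intercept
```

A positive slope over a multiyear albedo series would indicate brightening; a negative slope, darkening.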

  4. QUANTUM MECHANICAL CONFORMATION ANALYSES OF CELLOBIOSE

    Science.gov (United States)

    Rotations about the bonds to the glycosidic oxygen atom are the primary determinants of the shape properties of cellobiose and cellulose. Their preferred values can be predicted by consulting the classical Ramachandran map, or φ, ψ energy surface. Early work was followed by Simon, Scheraga and Manl...

  5. Photovoltaic System Modeling. Uncertainty and Sensitivity Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Clifford W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Martin, Curtis E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-08-01

    We report an uncertainty and sensitivity analysis for modeling AC energy from photovoltaic systems. Output from a PV system is predicted by a sequence of models. We quantify uncertainty in the output of each model using empirical distributions of each model's residuals. We propagate uncertainty through the sequence of models by sampling these distributions to obtain an empirical distribution of a PV system's output. We consider models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance; (3) predict cell temperature; (4) estimate DC voltage, current and power; (5) reduce DC power for losses due to inefficient maximum power point tracking or mismatch among modules; and (6) convert DC to AC power. Our analysis considers a notional PV system comprising an array of FirstSolar FS-387 modules and a 250 kW AC inverter; we use measured irradiance and weather at Albuquerque, NM. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. We found the uncertainty in the models for POA irradiance and effective irradiance to be the dominant contributors to uncertainty in predicted daily energy. Our analysis indicates that efforts to reduce the uncertainty in PV system output predictions may yield the greatest improvements by focusing on the POA and effective irradiance models.
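    The residual-sampling propagation described above can be sketched generically. The model chain, residual pools and numbers below are illustrative stand-ins, not Sandia's actual models or data:

```python
import random

def propagate(input_samples, model_chain, residual_pools, trials=1000, seed=0):
    """Push inputs through a chain of models, perturbing each stage's output
    with a residual drawn from that stage's empirical residual pool."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(trials):
        x = rng.choice(input_samples)
        for model, pool in zip(model_chain, residual_pools):
            # Each stage: deterministic model prediction plus a sampled residual.
            x = model(x) + rng.choice(pool)
        outputs.append(x)
    return outputs
```

With real data, `input_samples` would be measured irradiance, each `model` one stage of the chain (POA translation, effective irradiance, and so on), and each `pool` that model's observed residuals; the returned list is an empirical distribution of system output.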

  6. EVA Performance Prediction

    Science.gov (United States)

    Peacock, Brian; Maida, James; Rajulu, Sudhakar

    2004-01-01

    Astronaut physical performance capabilities in microgravity EVA or on planetary surfaces, when encumbered by a life support suit and debilitated by long exposure to microgravity, will be less than unencumbered preflight capabilities. The big question addressed by human factors engineers is: what can the astronaut be expected to do on EVA or when we arrive at a planetary surface? A second question is: what aids to performance will be needed to enhance the human physical capability? These questions are important for a number of reasons. First, it is necessary to carry out accurate planning of human physical demands to ensure that time- and energy-critical tasks can be carried out with confidence. Second, it is important that the crew members (and their ground or planetary base monitors) have a realistic picture of their own capabilities, as excessive fatigue can lead to catastrophic failure. Third, it is important to design appropriate equipment to enhance human sensory capabilities, locomotion, materials handling and manipulation. The evidence from physiological research points to musculoskeletal, cardiovascular and neurovestibular degradation during long-duration exposure to microgravity. The evidence from the biomechanics laboratory (and the Neutral Buoyancy Laboratory) points to a reduction in range of motion, strength and stamina when encumbered by a pressurized suit. The evidence from a long history of EVAs is that crewmembers are indeed restricted in their physical capabilities. There is a wealth of evidence in the literature on the causes and effects of degraded human performance in the laboratory, in sports and athletics, in industry and in other physically demanding jobs. One approach to this challenge is through biomechanical and performance modeling. Such models must be based on thorough task analysis, reliable human performance data from controlled studies, and functional extrapolations validated in analog contexts. 
The task analyses currently carried

  7. Reliability of chemical analyses of water samples

    Energy Technology Data Exchange (ETDEWEB)

    Beardon, R.

    1989-11-01

    Ground-water quality investigations require reliable chemical analyses of water samples. Unfortunately, laboratory analytical results are often unreliable. The Uranium Mill Tailings Remedial Action (UMTRA) Project's solution to this problem was to establish a two-phase quality assurance program for the analysis of water samples. In the first phase, eight laboratories analyzed three solutions of known composition. The analytical accuracy of each laboratory was ranked and three laboratories were awarded contracts. The second phase consists of ongoing monitoring of the reliability of the selected laboratories. The following conclusions are based on two years' experience with the UMTRA Project's Quality Assurance Program. The reliability of laboratory analyses should not be taken for granted. Analytical reliability may be independent of the prices charged by laboratories. Quality assurance programs benefit both the customer and the laboratory.

  8. Pratique de l'analyse fonctionelle

    CERN Document Server

    Tassinari, Robert

    1997-01-01

    Developing a product or service that is perfectly suited to the customer's needs and requirements is essential for a company. To leave nothing to chance, a rigorous methodology must be followed: that of functional analysis. This book defines the method precisely, together with its fields of application. It describes the most effective techniques for product design and the pursuit of quality, and introduces the notion of internal functional analysis. A key work for optimizing product design processes within a company. -- Key ideas, by Business Digest

  9. Causal Mediation Analyses for Randomized Trials.

    Science.gov (United States)

    Lynch, Kevin G; Cary, Mark; Gallop, Robert; Ten Have, Thomas R

    2008-01-01

    In the context of randomized intervention trials, we describe causal methods for analyzing how post-randomization factors constitute the process through which randomized baseline interventions act on outcomes. Traditionally, such mediation analyses have been undertaken with great caution, because they assume that the mediating factor is also randomly assigned to individuals in addition to the randomized baseline intervention (i.e., sequential ignorability). Because the mediating factors are typically not randomized, such analyses are unprotected from unmeasured confounders that may lead to biased inference. We review several causal approaches that attempt to reduce such bias without assuming that the mediating factor is randomized. However, these causal approaches require certain interaction assumptions that may be assessed if there is enough treatment heterogeneity with respect to the mediator. We describe available estimation procedures in the context of several examples from the literature and provide resources for software code. PMID:19484136
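    For contrast with the causal estimators reviewed above, the traditional product-of-coefficients mediation estimate, which relies on the sequential-ignorability assumption the abstract cautions against, can be sketched as follows. The toy data and variable names are invented purely for illustration:

```python
def ols_slope(x, y):
    # Simple-regression slope of y on x (ordinary least squares).
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
           sum((a - mx) ** 2 for a in x)

# Treatment Z -> mediator M -> outcome Y; noiseless toy data: M = 2Z, Y = 3M.
z = [0, 0, 1, 1, 2, 2]
m = [2 * v for v in z]
y = [3 * v for v in m]

a = ols_slope(z, m)   # effect of treatment on mediator
b = ols_slope(m, y)   # effect of mediator on outcome
indirect = a * b      # mediated (indirect) effect under sequential ignorability
```

The causal approaches discussed in the abstract exist precisely because, with a non-randomized mediator and unmeasured confounding, this simple product can be badly biased.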

  10. DCH analyses using the CONTAIN code

    International Nuclear Information System (INIS)

    This report describes CONTAIN analyses performed during participation in the project 'DCH issue resolution for ice condenser plants', which is sponsored by the NRC at SNL. Even though the calculations were performed for an ice condenser plant, the CONTAIN code has been used for analyses of many phenomena in PWR containments, and the DCH module can be applied to any plant type. The present ice condenser issue resolution effort is intended to provide guidance as to what might be needed to resolve DCH for ice condenser plants. It includes both a screening analysis and a scoping study if the screening analysis cannot provide a complete resolution. The following are the issues concerning DCH loads, in descending order: 1. Availability of ignition sources prior to vessel breach; 2. Availability and effectiveness of ice in the ice condenser; 3. Loads modeling uncertainties related to co-ejected RPV water; 4. Other loads modeling uncertainties. 10 tabs., 3 figs., 14 refs. (Author)

  11. Standardized analyses of nuclear shipping containers

    International Nuclear Information System (INIS)

    This paper describes improved capabilities for analyses of nuclear fuel shipping containers within SCALE -- a modular code system for Standardized Computer Analyses for Licensing Evaluation. Criticality analysis improvements include the new KENO V, a code which contains an enhanced geometry package, and a new control module which uses KENO V and allows a criticality search on optimum pitch (maximum k-effective) to be performed. The SAS2 sequence is a new shielding analysis module which couples fuel burnup, source term generation, and radial cask shielding. The SAS5 shielding sequence allows a multidimensional Monte Carlo analysis of a shipping cask with code-generated biasing of the particle histories. The thermal analysis sequence (HTAS1) provides an easy-to-use tool for evaluating a shipping cask's response to accident conditions. Together these improvements extend the capability of the SCALE system to provide the cask designer or evaluator with a computational system offering automated procedures and easy-to-understand input that lead to standardization

  12. Methodology development for statistical evaluation of reactor safety analyses

    International Nuclear Information System (INIS)

    In February 1975, Westinghouse Electric Corporation, under contract to Electric Power Research Institute, started a one-year program to develop methodology for statistical evaluation of nuclear-safety-related engineering analyses. The objectives of the program were to develop an understanding of the relative efficiencies of various computational methods which can be used to compute probability distributions of output variables due to input parameter uncertainties in analyses of design basis events for nuclear reactors and to develop methods for obtaining reasonably accurate estimates of these probability distributions at an economically feasible level. A series of tasks was set up to accomplish these objectives. Two of the tasks were to investigate the relative efficiencies and accuracies of various Monte Carlo and analytical techniques for obtaining such estimates for a simple thermal-hydraulic problem whose output variable of interest is given in a closed-form relationship of the input variables and to repeat the above study on a thermal-hydraulic problem in which the relationship between the predicted variable and the inputs is described by a short-running computer program. The purpose of the report presented is to document the results of the investigations completed under these tasks, giving the rationale for choices of techniques and problems, and to present interim conclusions
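    The Monte Carlo technique investigated above, estimating the probability distribution of an output variable from input-parameter uncertainties in a closed-form relationship, can be sketched generically. The output function and input distributions below are invented stand-ins, not the thermal-hydraulic problem studied:

```python
import random

def mc_output_distribution(f, input_dists, trials=10000, seed=1):
    """Sample each input from a normal (mean, sigma) distribution, evaluate the
    closed-form output f, and return the sorted sample of outputs."""
    rng = random.Random(seed)
    samples = []
    for _ in range(trials):
        args = [rng.gauss(mu, sigma) for mu, sigma in input_dists]
        samples.append(f(*args))
    samples.sort()
    return samples

def percentile(sorted_samples, p):
    # Nearest-rank percentile of a pre-sorted sample.
    idx = min(len(sorted_samples) - 1, int(p / 100.0 * len(sorted_samples)))
    return sorted_samples[idx]
```

The sorted sample approximates the output's probability distribution; its percentiles give the kind of estimates the study compares against analytical techniques.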

  13. Analyses of the OSU-MASLWR Experimental Test Facility

    International Nuclear Information System (INIS)

    Today, considering the sustainability of nuclear technology in the energy mix policies of developing and developed countries, the international community has begun developing new advanced reactor designs. In this framework, Oregon State University (OSU) has constructed a system-level test facility to examine natural circulation phenomena of importance to the multi-application small light water reactor (MASLWR) design, a small modular pressurized water reactor (PWR) relying on natural circulation during both steady-state and transient operation. The aim of this paper is to review the main characteristics of the experimental facility, to analyse the main phenomena characterizing the tests already performed and the potential transients that could be investigated in the facility, and to describe the current IAEA International Collaborative Standard Problem being hosted at OSU, for which experimental data will be collected at the OSU-MASLWR test facility. A summary of the best-estimate thermal-hydraulic system code analyses already performed, assessing the codes' capability to predict the phenomena typical of the MASLWR prototype as thermal-hydraulically characterized in the OSU-MASLWR facility, is presented as well.

  14. Sequencing and comparative analyses of the genomes of zoysiagrasses.

    Science.gov (United States)

    Tanaka, Hidenori; Hirakawa, Hideki; Kosugi, Shunichi; Nakayama, Shinobu; Ono, Akiko; Watanabe, Akiko; Hashiguchi, Masatsugu; Gondo, Takahiro; Ishigaki, Genki; Muguerza, Melody; Shimizu, Katsuya; Sawamura, Noriko; Inoue, Takayasu; Shigeki, Yuichi; Ohno, Naoki; Tabata, Satoshi; Akashi, Ryo; Sato, Shusei

    2016-04-01

    Zoysia is a warm-season turfgrass, which comprises 11 allotetraploid species (2n = 4x = 40), each possessing different morphological and physiological traits. To characterize the genetic systems of Zoysia plants and to analyse their structural and functional differences in individual species and accessions, we sequenced the genomes of Zoysia species using HiSeq and MiSeq platforms. As a reference sequence of Zoysia species, we generated a high-quality draft sequence of the genome of Z. japonica accession 'Nagirizaki' (334 Mb) in which 59,271 protein-coding genes were predicted. In parallel, draft genome sequences of Z. matrella 'Wakaba' and Z. pacifica 'Zanpa' were also generated for comparative analyses. To investigate the genetic diversity among the Zoysia species, genome sequence reads of three additional accessions, Z. japonica 'Kyoto', Z. japonica 'Miyagi' and Z. matrella 'Chiba Fair Green', were accumulated and aligned against the reference genome of 'Nagirizaki', along with those from 'Wakaba' and 'Zanpa'. As a result, we detected 7,424,163 single-nucleotide polymorphisms and 852,488 short indels among these species. The information obtained in this study will be valuable for basic studies on zoysiagrass evolution and genetics as well as for the breeding of zoysiagrasses, and is made available in the 'Zoysia Genome Database' at http://zoysia.kazusa.or.jp. PMID:26975196

  15. Mass spectrometer for the analyses of gases

    International Nuclear Information System (INIS)

    A 6-in-radius, 60° magnetic-sector mass spectrometer (designated as the MS-200) has been constructed for the quantitative and qualitative analyses of fixed gases and volatile organics in the concentration range from 1 ppm (by volume) to 100%. A partial pressure of 1 × 10^-6 torr in the inlet expansion volume is required to achieve a useful signal at an electron-multiplier gain of 10,000

  16. Ethics of cost analyses in medical education

    OpenAIRE

    Walsh, Kieran

    2013-01-01

    Cost analyses in medical education are rarely straightforward, and rarely lead to clear-cut conclusions. Occasionally they do lead to clear conclusions but even when that happens, some stakeholders will ask difficult but valid questions about what to do following cost analyses–specifically about distributive justice in the allocation of resources. At present there are few or no debates about these issues and rationing decisions that are taken in medical education are largely made subconscious...

  17. Causal Mediation Analyses for Randomized Trials

    OpenAIRE

    Lynch, Kevin G.; Cary, Mark; Gallop, Robert; Ten Have, Thomas R.

    2008-01-01

    In the context of randomized intervention trials, we describe causal methods for analyzing how post-randomization factors constitute the process through which randomized baseline interventions act on outcomes. Traditionally, such mediation analyses have been undertaken with great caution, because they assume that the mediating factor is also randomly assigned to individuals in addition to the randomized baseline intervention (i.e., sequential ignorability). Because the mediating factors are t...

  18. Pathway Analyses Implicate Glial Cells in Schizophrenia

    OpenAIRE

    Duncan, Laramie E.; Holmans, Peter A.; Lee, Phil H.; O'Dushlaine, Colm T; Kirby, Andrew W.; Smoller, Jordan W.; Öngür, Dost; Cohen, Bruce M.

    2014-01-01

    Background: The quest to understand the neurobiology of schizophrenia and bipolar disorder is ongoing, with multiple lines of evidence indicating abnormalities of glia, mitochondria, and glutamate in both disorders. Despite high heritability estimates of 81% for schizophrenia and 75% for bipolar disorder, compelling links between findings from neurobiological studies and findings from large-scale genetic analyses are only beginning to emerge. Method: Ten publicly available gene sets (pathwa...

  19. Mikromechanische Analyse der Wirkungsmechanismen elektrischer Dehnungsmessstreifen

    OpenAIRE

    Stockmann, Martin

    2000-01-01

    Electrical strain measurement based on discrete strain gauges (DMS) is today one of the most important methods of experimental stress analysis. Precise measurements outside the calibration conditions, in particular at large deformations or with high transverse-strain components, require the non-linear relationships between the component strains to be determined and the resistance change of the measuring grid to be taken into account. ...

  20. A database system for RCM analyses

    International Nuclear Information System (INIS)

    A proposal for a database system to record and document Reliability Centered Maintenance (RCM) analyses is presented. The database is conceived so as to enable its application to large industrial units, which can be granulated into specific parts (systems, nodes) to be analyzed in detail by the RCM methodology at the level of components of the systems (nodes). A proposal for an algorithm to be used for the selection of suitable components for optimization of preventive maintenance is also included. (author)

  1. El Cours d’Analyse de Cauchy

    OpenAIRE

    Pérez, Javier; Aizpuru, Antonio

    1999-01-01

    In this article we present a contextualized study of Cauchy's Cours d'Analyse, analysing its significance and importance. We pay special attention to the degree of theoretical elaboration of limits, continuity, series, real numbers, functions and series, relating Cauchy's contributions to the conceptual level that preceded them.

  2. Conditions and applicational preferences of reliability analyses

    International Nuclear Information System (INIS)

    This VDI guide refers to the tasks of reliability analyses within a given project and to their integration into the systems engineering process. It presents principles and rules for the application of analytical methods to reliability problems and, in general, indicates the mathematical reliability models that are preferably to be applied to specific problems, together with the necessary relevant information. It also explains the limits of applicability of the various analytical methods. (orig./HP)

  3. Delvis drenert analyse av innvendig avstivet utgraving

    OpenAIRE

    Myhrvold, Michael F

    2013-01-01

    This master's thesis deals with analyses of the partially drained effects that can arise in internally braced excavations. The aim of the thesis is to carry out a numerical study of the process governing the time-dependent development of braced excavations in low-permeability soils. This makes it possible to assess the partially drained effects and the influence they exert on this type of excavation. Since the behaviour of the soil at small ...

  4. Investigation into the methodology of safety analyses

    International Nuclear Information System (INIS)

    The common methods of systems analysis were investigated with respect to the question of whether they are appropriate for detecting potential sources of hazard in industrial plants, in particular chemical plants, and whether this can be verified. The quantification of accidents and the risk assessment that can be derived from it are discussed. To enable quantitative safety analyses, the simulation model SYSP was developed. To support it, reliability data were compiled. (DG)

  5. How Frequent is Chronic Multiyear Delusional Activity and Recovery in Schizophrenia: A 20-Year Multi–follow-up

    OpenAIRE

    Harrow, Martin; Jobe, Thomas H.

    2008-01-01

    To determine how frequent chronic multiyear delusional activity is in modern-day schizophrenia, we studied 200 patients over a 20-year period. We also studied the relation of delusions to hallucinations and thought disorder-disorganization, to work disability, and to later periods of global recovery and assessed several protective factors against delusional activity. The sample was assessed 6 times over 20 years and includes 43 patients with schizophrenia. Participants were evaluated at each ...

  6. Pegasys: software for executing and integrating analyses of biological sequences

    Directory of Open Access Journals (Sweden)

    Lett Drew

    2004-04-01

    Full Text Available Abstract Background We present Pegasys – a flexible, modular and customizable software system that facilitates the execution and data integration from heterogeneous biological sequence analysis tools. Results The Pegasys system includes numerous tools for pair-wise and multiple sequence alignment, ab initio gene prediction, RNA gene detection, masking repetitive sequences in genomic DNA as well as filters for database formatting and processing raw output from various analysis tools. We introduce a novel data structure for creating workflows of sequence analyses and a unified data model to store its results. The software allows users to dynamically create analysis workflows at run-time by manipulating a graphical user interface. All non-serial dependent analyses are executed in parallel on a compute cluster for efficiency of data generation. The uniform data model and backend relational database management system of Pegasys allow for results of heterogeneous programs included in the workflow to be integrated and exported into General Feature Format for further analyses in GFF-dependent tools, or GAME XML for import into the Apollo genome editor. The modularity of the design allows for new tools to be added to the system with little programmer overhead. The database application programming interface allows programmatic access to the data stored in the backend through SQL queries. Conclusions The Pegasys system enables biologists and bioinformaticians to create and manage sequence analysis workflows. The software is released under the Open Source GNU General Public License. All source code and documentation is available for download at http://bioinformatics.ubc.ca/pegasys/.

  7. Application of RUNTA code in flood analyses

    International Nuclear Information System (INIS)

    Flood probability analyses carried out to date indicate the need to evaluate a large number of flood scenarios. This necessity is due to a variety of reasons, the most important of which include:
    - Large number of potential flood sources
    - Wide variety of characteristics of flood sources
    - Large possibility of flood-affected areas becoming interlinked, depending on the location of the potential flood sources
    - Diversity of flood flows from one flood source, depending on the size of the rupture and mode of operation
    - Isolation times applicable
    - Uncertainties in respect of the structural resistance of doors, penetration seals and floors
    - Applicable degrees of obstruction of the floor drainage system
    Consequently, a tool which carries out, with speed and flexibility, the large number of calculations usually required in flood analyses is considered necessary. The RUNTA code enables the range of possible scenarios to be calculated numerically, in accordance with all those parameters which, as a result of previous flood analyses, it is necessary to take into account in order to cover all the possible floods associated with each flood area
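    The combinatorial growth of scenarios that motivates a tool like RUNTA can be illustrated with a simple enumeration sketch. The parameter names and values below are hypothetical stand-ins, not RUNTA's actual inputs:

```python
from itertools import product

# Hypothetical scenario parameters: each flood scenario is one combination.
sources = ["service water", "fire main"]
rupture_sizes = ["small", "large"]
isolation_times_min = [10, 30]      # assumed isolation times, minutes
drain_blockage = [0.0, 0.5]         # assumed fraction of drain capacity lost

scenarios = [
    {"source": s, "rupture": r, "isolation_min": t, "blockage": b}
    for s, r, t, b in product(sources, rupture_sizes,
                              isolation_times_min, drain_blockage)
]
print(len(scenarios))  # 2 * 2 * 2 * 2 = 16 combinations
```

Even this toy case yields 16 scenarios; with realistic numbers of sources, rupture modes, isolation times and drainage states, the count grows multiplicatively, which is why automated evaluation is needed.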

  8. Phylogenetic analyses of Andromedeae (Ericaceae subfam. Vaccinioideae).

    Science.gov (United States)

    Kron, K A; Judd, W S; Crayn, D M

    1999-09-01

    Phylogenetic relationships within the Andromedeae and closely related taxa were investigated by means of cladistic analyses based on phenotypic (morphology, anatomy, chromosome number, and secondary chemistry) and molecular (rbcL and matK nucleotide sequences) characters. An analysis based on combined molecular and phenotypic characters indicates that the tribe is composed of two major clades-the Gaultheria group (incl. Andromeda, Chamaedaphne, Diplycosia, Gaultheria, Leucothoë, Pernettya, Tepuia, and Zenobia) and the Lyonia group (incl. Agarista, Craibiodendron, Lyonia, and Pieris). Andromedeae are shown to be paraphyletic in all analyses because the Vaccinieae link with some or all of the genera of the Gaultheria group. Oxydendrum is sister to the clade containing the Vaccinieae, Gaultheria group, and Lyonia group. The monophyly of Agarista, Lyonia, Pieris, and Gaultheria (incl. Pernettya) is supported, while that of Leucothoë is problematic. The close relationship of Andromeda and Zenobia is novel and was strongly supported in the molecular (but not morphological) analyses. Diplycosia, Tepuia, Gaultheria, and Pernettya form a well-supported clade, which can be diagnosed by the presence of fleshy calyx lobes and methyl salicylate. Recognition of Andromedeae is not reflective of our understanding of geneological relationships and should be abandoned; the Lyonia group is formally recognized at the tribal level. PMID:10487817

  9. Visualizing Risk Prediction Models

    OpenAIRE

    Vanya Van Belle; Ben Van Calster

    2015-01-01

    Objective Risk prediction models can assist clinicians in making decisions. To boost the uptake of these models in clinical practice, it is important that end-users understand how the model works and can efficiently communicate its results. We introduce novel methods for interpretable model visualization. Methods The proposed visualization techniques are applied to two prediction models from the Framingham Heart Study for the prediction of intermittent claudication and stroke after atrial fib...

  10. Pyroshock prediction procedures

    Science.gov (United States)

    Piersol, Allan G.

    2002-05-01

    Given sufficient effort, pyroshock loads can be predicted by direct analytical procedures using Hydrocodes that analytically model the details of the pyrotechnic explosion and its interaction with adjacent structures, including nonlinear effects. However, it is more common to predict pyroshock environments using empirical procedures based upon extensive studies of past pyroshock data. Various empirical pyroshock prediction procedures are discussed, including those developed by the Jet Propulsion Laboratory, Lockheed-Martin, and Boeing.

  11. Predicting transformers oil parameters

    OpenAIRE

    Shaban, K.; El-Hag, A.; Matveev, A.

    2009-01-01

    In this paper, different configurations of artificial neural networks are applied to predict various transformer oil parameters. The prediction is performed by modeling the relationship between the transformer insulation resistance extracted from the Megger test and the breakdown strength, interfacial tension, acidity and water content of the transformer oil. The process of predicting the status of these oil parameters is carried out using two different configurations of neural networks....
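    The kind of mapping this abstract describes (Megger-test insulation resistance in, an oil parameter out) can be sketched with a small feed-forward network. Everything below is an assumption made for illustration: the data, the log-shaped relationship and the network size are invented, not taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-in data (invented): insulation resistance from a Megger
    # test (GOhm) versus oil breakdown strength (kV).
    resistance = rng.uniform(0.5, 10.0, size=(200, 1))
    breakdown = 20.0 + 4.0 * np.log(resistance) + rng.normal(0.0, 0.5, size=(200, 1))

    # Standardise both variables for stable gradient descent.
    x = (resistance - resistance.mean()) / resistance.std()
    y = (breakdown - breakdown.mean()) / breakdown.std()

    # One hidden layer of tanh units, trained by full-batch gradient descent.
    w1 = rng.normal(0.0, 0.5, (1, 8)); b1 = np.zeros(8)
    w2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
    lr = 0.05
    for _ in range(5000):
        h = np.tanh(x @ w1 + b1)            # forward pass
        err = h @ w2 + b2 - y               # residuals
        gh = (err @ w2.T) * (1.0 - h**2)    # backprop through tanh
        w2 -= lr * h.T @ err / len(x)       # mean-squared-error gradients
        b2 -= lr * err.mean(axis=0)
        w1 -= lr * x.T @ gh / len(x)
        b1 -= lr * gh.mean(axis=0)

    rmse = float(np.sqrt(np.mean((np.tanh(x @ w1 + b1) @ w2 + b2 - y) ** 2)))
    ```

    In practice one would train one such regressor (or one multi-output network) per oil parameter and validate against held-out samples.
    
    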

  12. Is Suicide Predictable?

    OpenAIRE

    Asmaee, S; Mosavi, N; R Abdul Rashid; H Habi; Seghatoleslam, T; Naseri, A.

    2012-01-01

    Background: The current study aimed to test the hypothesis that suicide is predictable, and to classify the predictive factors in multiple suicide attempts. Methods: A cross-sectional study was administered to 223 multiple attempters, women who came to a medical poison centre after a suicide attempt. The participants were young, poor, and single. A logistic regression analysis was used to classify the predictive factors of suicide. Results: Women who had multiple suicide attempts exhibited a...
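    As a generic illustration of the method named here, a logistic regression can be fitted by gradient descent and its coefficients read as odds ratios for candidate predictive factors. The data and effect sizes below are synthetic, chosen only to show the mechanics, and have nothing to do with the study's subjects.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic data: two candidate predictors, only the first truly
    # associated with the binary outcome.
    n = 1000
    X = rng.normal(size=(n, 2))
    logit = 1.5 * X[:, 0] + 0.0 * X[:, 1] - 0.2
    y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

    # Logistic regression fitted by gradient descent on the log-likelihood.
    w = np.zeros(2); b = 0.0
    lr = 0.1
    for _ in range(2000):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / n
        b -= lr * float(np.mean(p - y))

    # Exponentiated coefficients are odds ratios per unit increase in each factor.
    odds_ratios = np.exp(w)
    ```

    A factor with an odds ratio well above 1 (here the first column) would be reported as a predictive factor; one near 1 (the second column) would not.
    
    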

  13. Prediction methods environmental-effect reporting

    International Nuclear Information System (INIS)

    This report provides a survey of prediction methods which can be applied to the calculation of emissions in nuclear-reactor accidents, in the framework of environmental-effect reports (Dutch: m.e.r.) or risk analyses. Emissions during normal operation are also important for the m.e.r.; these can be derived from measured emissions of power plants in operation. Data concerning the latter are reported. The report consists of an introduction to reactor technology, including a description of some reactor types, the corresponding fuel cycle and dismantling scenarios; a discussion of risk analyses for nuclear power plants and the physical processes which can play a role during accidents; a discussion of the prediction methods to be employed and the expected developments in this area; and some background information. (author). 145 refs.; 21 figs.; 20 tabs

  14. Structural and mutational analyses of cis-acting sequences in the 5'-untranslated region of satellite RNA of bamboo mosaic potexvirus

    International Nuclear Information System (INIS)

    The satellite RNA of Bamboo mosaic virus (satBaMV) contains an open reading frame for a 20-kDa protein that is flanked by a 5'-untranslated region (UTR) of 159 nucleotides (nt) and a 3'-UTR of 129 nt. A secondary structure was predicted for the 5'-UTR of satBaMV RNA, which folds into a large stem-loop (LSL) and a small stem-loop. Enzymatic probing confirmed the existence of LSL (nt 8-138) in the 5'-UTR. The essential cis-acting sequences in the 5'-UTR required for satBaMV RNA replication were determined by deletion and substitution mutagenesis. Their replication efficiencies were analyzed in Nicotiana benthamiana protoplasts and Chenopodium quinoa plants coinoculated with helper BaMV RNA. All deletion mutants abolished the replication of satBaMV RNA, whereas mutations introduced in most of the loop regions and stems showed either no replication or a decreased replication efficiency. Mutations that affected the positive-strand satBaMV RNA accumulation also affected the accumulation of negative-strand RNA; however, the accumulation of genomic and subgenomic RNAs of BaMV was not affected. Moreover, covariation analyses of natural satBaMV variants provide substantial evidence that the secondary structure in the 5'-UTR of satBaMV is necessary for efficient replication

  15. Machine learning algorithms for datasets popularity prediction

    CERN Document Server

    Kancys, Kipras

    2016-01-01

    This report represents a continued study in which ML algorithms were used to predict dataset popularity. Three topics were covered. First of all, there was a discrepancy between the old and new meta-data collection procedures, so a reason for it had to be found. Secondly, different parameters were analysed and dropped to make the algorithms perform better. And third, it was decided to move the modelling part to Spark.

  16. Empirical Prediction Intervals for County Population Forecasts

    OpenAIRE

    Rayer, Stefan; Smith, Stanley K.; Tayman, Jeff

    2009-01-01

    Population forecasts entail a significant amount of uncertainty, especially for long-range horizons and for places with small or rapidly changing populations. This uncertainty can be dealt with by presenting a range of projections or by developing statistical prediction intervals. The latter can be based on models that incorporate the stochastic nature of the forecasting process, on empirical analyses of past forecast errors, or on a combination of the two. In this article, we develop and tes...
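    The error-based route mentioned here (prediction intervals built from empirical analyses of past forecast errors) can be sketched as follows. The error distribution, horizon, and point forecast are hypothetical numbers for illustration, not the article's data.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical absolute percent errors from past county forecasts at a
    # given horizon (synthetic, for illustration only).
    past_errors = rng.lognormal(mean=1.5, sigma=0.6, size=500)  # in percent

    # An empirical 90% prediction interval takes the 90th percentile of past
    # absolute percent errors and applies it around the new point forecast.
    p90 = float(np.percentile(past_errors, 90))
    forecast = 25_000  # hypothetical point forecast of county population
    interval = (forecast * (1 - p90 / 100), forecast * (1 + p90 / 100))
    ```

    Stratifying `past_errors` by population size and growth rate, as the article's framing suggests, would simply mean computing the percentile within each stratum.
    
    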

  17. SRM Internal Flow Tests and Computational Fluid Dynamic Analysis. Volume 2; CFD RSRM Full-Scale Analyses

    Science.gov (United States)

    2001-01-01

    This document presents the full-scale analyses of the CFD RSRM. The RSRM model was developed with a 20 second burn time. The following are presented as part of the full-scale analyses: (1) RSRM embedded inclusion analysis; (2) RSRM igniter nozzle design analysis; (3) Nozzle Joint 4 erosion anomaly; (4) RSRM full motor port slag accumulation analysis; (5) RSRM motor analysis of two-phase flow in the aft segment/submerged nozzle region; (6) Completion of 3-D Analysis of the hot air nozzle manifold; (7) Bates Motor distributed combustion test case; and (8) Three Dimensional Polysulfide Bump Analysis.

  18. Stable isotopic analyses in paleoclimatic reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Wigand, P.E. [Univ. and Community College System of Nevada, Reno, NV (United States)

    1995-09-01

    Most traditional paleoclimatic proxy data have inherent time lags between climatic input and system response that constrain their use in accurate reconstruction of paleoclimate chronology, scaling of its variability, and the elucidation of the processes that determine its impact on the biotic and abiotic environment. With the exception of dendroclimatology, and studies of short-lived organisms and pollen recovered from annually varved lacustrine sediments, significant periods of time ranging from years, to centuries, to millennia may intervene between climate change and its first manifestation in paleoclimatic proxy data records. Reconstruction of past climate through changes in plant community composition derived from pollen sequences and plant remains from ancient woodrat middens, wet environments and dry caves all suffer from these lags. However, stable isotopic analyses can provide more immediate indication of biotic response to climate change. Evidence of past physiological response of organisms to changes in effective precipitation as climate varies can be provided by analyses of the stable isotopic content of plant macrofossils from various contexts. These analyses consider variation in the stable isotopic (hydrogen, oxygen and carbon) content of plant tissues as it reflects (1) past global or local temperature through changes in meteoric (rainfall) water chemistry in the case of the first two isotopes, and (2) plant stress through changes in plant respiration/transpiration processes under differing water availability, and varying atmospheric CO2 composition (which itself may actually be a net result of biotic response to climate change). Studies currently being conducted in the Intermountain West indicate both long- and short-term responses that when calibrated with modern analogue studies have the potential of revealing not only the timing of climate events, but their direction, magnitude and rapidity.

  19. Evaluation of Model Operational Analyses during DYNAMO

    Science.gov (United States)

    Ciesielski, Paul; Johnson, Richard

    2013-04-01

    A primary component of the observing system in the DYNAMO-CINDY2011-AMIE field campaign was an atmospheric sounding network comprised of two sounding quadrilaterals, one north and one south of the equator over the central Indian Ocean. During the experiment a major effort was undertaken to ensure the real-time transmission of these data onto the GTS (Global Telecommunication System) for dissemination to the operational centers (ECMWF, NCEP, JMA, etc.). Preliminary estimates indicate that ~95% of the soundings from the enhanced sounding network were successfully transmitted and potentially used in their data assimilation systems. Because of the wide use of operational and reanalysis products (e.g., in process studies, initializing numerical simulations, construction of large-scale forcing datasets for CRMs, etc.), their validity will be examined by comparing a variety of basic and diagnosed fields from two operational analyses (ECMWF and NCEP) to similar analyses based solely on sounding observations. Particular attention will be given to the vertical structures of apparent heating (Q1) and drying (Q2) from the operational analyses (OA), which are strongly influenced by cumulus parameterizations, a source of model infidelity. Preliminary results indicate that the OA products did a reasonable job at capturing the mean and temporal characteristics of convection during the DYNAMO enhanced observing period, which included the passage of two significant MJO events during the October-November 2011 period. For example, temporal correlations between Q2-budget derived rainfall from the OA products and that estimated from the TRMM satellite (i.e., the 3B42V7 product) were greater than 0.9 over the Northern Sounding Array of DYNAMO. However closer inspection of the budget profiles show notable differences between the OA products and the sounding-derived results in low-level (surface to 700 hPa) heating and drying structures. This presentation will examine these differences and
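    For reference, the apparent heat source Q1 and apparent moisture sink Q2 discussed above are conventionally defined as budget residuals of the sounding-array-averaged thermodynamic equations (these are the standard Yanai-type definitions, assumed here rather than quoted from this abstract):

    ```latex
    Q_1 = c_p \left(\frac{p}{p_0}\right)^{\kappa}
          \left( \frac{\partial \bar{\theta}}{\partial t}
               + \bar{\mathbf{V}} \cdot \nabla \bar{\theta}
               + \bar{\omega}\, \frac{\partial \bar{\theta}}{\partial p} \right),
    \qquad
    Q_2 = -L \left( \frac{\partial \bar{q}}{\partial t}
               + \bar{\mathbf{V}} \cdot \nabla \bar{q}
               + \bar{\omega}\, \frac{\partial \bar{q}}{\partial p} \right)
    ```

    where $\theta$ is potential temperature, $q$ specific humidity, $\omega$ vertical p-velocity, $L$ the latent heat of vaporization, and overbars denote averages over the sounding array. The vertical integral of $Q_2$ equals $L(P - E)$, which is how the Q2-budget-derived rainfall mentioned above is obtained.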

  20. IDEA: Interactive Display for Evolutionary Analyses

    Directory of Open Access Journals (Sweden)

    Carlton Jane M

    2008-12-01

    Full Text Available Abstract Background The availability of complete genomic sequences for hundreds of organisms promises to make obtaining genome-wide estimates of substitution rates, selective constraints and other molecular evolution variables of interest an increasingly important approach to addressing broad evolutionary questions. Two of the programs most widely used for this purpose are codeml and baseml, parts of the PAML (Phylogenetic Analysis by Maximum Likelihood) suite. A significant drawback of these programs is their lack of a graphical user interface, which can limit their user base and considerably reduce their efficiency. Results We have developed IDEA (Interactive Display for Evolutionary Analyses), an intuitive graphical input and output interface which interacts with PHYLIP for phylogeny reconstruction and with codeml and baseml for molecular evolution analyses. IDEA's graphical input and visualization interfaces eliminate the need to edit and parse text input and output files, reducing the likelihood of errors and improving processing time. Further, its interactive output display gives the user immediate access to results. Finally, IDEA can process data in parallel on a local machine or computing grid, allowing genome-wide analyses to be completed quickly. Conclusion IDEA provides a graphical user interface that allows the user to follow a codeml or baseml analysis from parameter input through to the exploration of results. Novel options streamline the analysis process, and post-analysis visualization of phylogenies, evolutionary rates and selective constraint along protein sequences simplifies the interpretation of results. The integration of these functions into a single tool eliminates the need for lengthy data handling and parsing, significantly expediting access to global patterns in the data.

  1. Cost/benefit analyses of environmental impact

    International Nuclear Information System (INIS)

    Various aspects of cost-benefit analyses are considered. Some topics discussed are: regulations of the National Environmental Policy Act (NEPA); statement of AEC policy and procedures for implementation of NEPA; Calvert Cliffs decision; AEC Regulatory Guide; application of risk-benefit analysis to nuclear power; application of the as low as practicable (ALAP) rule to radiation discharges; thermal discharge restrictions proposed by EPA under the 1972 Amendment to the Water Pollution Control Act; estimates of somatic and genetic insult per unit population exposure; occupational exposure; EPA Point Source Guidelines for Discharges from Steam Electric Power Plants; and costs of closed-cycle cooling using cooling towers. (U.S.)

  2. FEM-ANALYSE AV INDUSTRIELL ALUMINIUMSPROFILEKSTRUDERING

    OpenAIRE

    Christenssen, Wenche

    2014-01-01

    This thesis was written to increase understanding and knowledge of material flow in the extrusion of complex, thin-walled aluminium profiles. It also reviews how uneven material flow out of a die can be balanced by the use of a pre-chamber. The report covers the construction of models and simulation for two different profile geometries. The first profile is a U-profile that has previously been analysed using model material. This was done in a Diplom...

  3. Erregerspektrum bei tiefen Halsinfektionen: Eine retrospektive Analyse

    OpenAIRE

    Sömmer, C; Haid, M; Hommerich, C; Laskawi, R; Canis, M; Matthias, C

    2014-01-01

    Introduction: Deep neck infections are among the most dangerous diseases in otorhinolaryngology. This analysis provides an overview of the microbiology of deep neck infections and of factors that can lead to a change in the spectrum of pathogens. Methods: From January 2002 to December 2012, 63 patients with deep neck infections were treated at the ENT clinic of the Universitätsmedizin Göttingen. Intraoperative swabs were taken. The incidence of the most common pathogens ...

  4. Fully Coupled FE Analyses of Buried Structures

    Directory of Open Access Journals (Sweden)

    James T. Baylot

    1994-01-01

    Full Text Available Current procedures for determining the response of buried structures to the effects of the detonation of buried high explosives recommend decoupling the free-field stress analysis from the structure response analysis. A fully coupled (explosive–soil structure finite element analysis procedure was developed so that the accuracies of current decoupling procedures could be evaluated. Comparisons of the results of analyses performed using this procedure with scale-model experiments indicate that this finite element procedure can be used to effectively evaluate the accuracies of the methods currently being used to decouple the free-field stress analysis from the structure response analysis.

  5. Rod Ellis, Gary Barkhuizen, Analysing Learner Language

    OpenAIRE

    Narcy-Combes, Marie-Françoise

    2014-01-01

    This book comes at just the right time to complement the tools available to young researchers in applied linguistics and language didactics, as well as to practitioners in the field wishing to conduct action research. As is often the case with Rod Ellis's works, it is a comprehensive survey: a diachronic study of the tools used since the 1960s by language acquisition researchers for the analysis of learners' written and oral productions. ...

  6. En analyse av Yoga-kundalini-upanisad

    OpenAIRE

    2006-01-01

    The thesis En analyse av Yoga-kundalini-upanisad is based on the Indian ascetic Narayanaswamy Aiyer's English translation of the Yoga-kundalini-upanisad, published in Thirty Minor Upanisad-s, Including the Yoga Upanisad-s (Oklahoma, Santarasa Publications, 1980). This Hindu text is described as one of the 21 yoga upanishads, the eighty-sixth of the 108 classical upanishads, and forms part of the text corpus Krsna-Yajurveda. The text functions as a manual of exercises from the disciplines of hathayoga, ...

  7. Use of Geospatial Analyses for Semantic Reasoning

    OpenAIRE

    Karmacharya, Ashish; Cruz, Christophe; Boochs, Frank; Marzani, Franck

    2010-01-01

    This work focuses on the integration of spatial analyses for semantic reasoning in order to compute new axioms of an existing OWL ontology. To make this concrete, we have defined Spatial Built-ins, an extension of the existing Built-ins of the SWRL rule language. It permits deductive rules to be run with the help of a translation rule engine. Thus, the Spatial SWRL rules are translated to standard SWRL rules. Once the spatial functions of the Spatial SWRL rules are comput...

  8. Implementing partnerships in nonreactor facility safety analyses

    International Nuclear Information System (INIS)

    Faculty and students from LSU have been participating in nuclear safety analyses and radiation protection projects at ANL-W at INEL since 1973. A mutually beneficial relationship has evolved that has resulted in generation of safety-related studies acceptable to Argonne and DOE, NRC, and state regulatory groups. Most of the safety projects have involved the Hot Fuel Examination Facility or the Fuel Conditioning Facility; both are hot cells that receive spent fuel from EBR-II. A table shows some of the major projects at ANL-W that involved LSU students and faculty

  9. Ion accelerators for ionometrical analyses of solids

    International Nuclear Information System (INIS)

    An ion accelerator for ionometrical analyses of solids is described. The following problems are treated: high-vacuum systems and their operation, small energy spread and beam collimation, and a system for automatic transmission of the ion beam. Owing to the careful optimization of the discussed parameters and to the automatic beam transmission system, beam currents of 5-10×10^-9 A could be measured for more than 300 hours of operation time, with deviations of less than 20%. In a one-year period the accelerator was in operation for more than 2300 hours. (T.G.)

  10. En kvantitativ analyse af danskernes huskesedler

    OpenAIRE

    Schmidt, Marcus

    2005-01-01

    The present report is based on an analysis of 871 shopping lists. The lists were collected partly in Jutland and partly in Copenhagen. The collection of consumers' discarded shopping lists took place both inside the grocery stores (shopping baskets, waste bins) and outside the stores (car park, shopping trolleys). The data collection covers the largest supermarkets and discount stores as well as Bilka. The attached appendix at the end contains a more detailed account of the methodological ...

  11. Deux perspectives pour analyser les relations professionnelles

    OpenAIRE

    Dunlop, John T.; Whyte, William F.; Mias, Arnaud

    2016-01-01

    This article is a translation of an article published in the Industrial and Labor Relations Review, following a debate organised at Princeton University at the beginning of 1949 between William Foote Whyte (1914-2000) and John Thomas Dunlop (1914-2003) on the analytical framework of industrial relations (Industrial Relations), which were then the subject of a growing body of research in the United States. This controversy between one of the leading figures of the movement of “relatio...

  12. Analysing development to shape the future

    OpenAIRE

    Andreas Novy; Lukas Lengauer

    2008-01-01

    This article links theory and politics in a systematic way by proposing Is-Shall-Do as a didactical model for analysing a concrete conjuncture, relating it to the desired future in the form of a concrete utopia. Aware of structural limits and of the potential room for manoeuvre for political agency, adequate practical steps to implement the concrete utopia are elaborated. The paper is divided into a first section which exposes three interwoven aspects of development: the idea of a good life, the co...

  13. Large scale breeder reactor pump dynamic analyses

    International Nuclear Information System (INIS)

    The lateral natural frequency and vibration response analyses of the Large Scale Breeder Reactor (LSBR) primary pump were performed as part of the total dynamic analysis effort to obtain the fabrication release. The special features of pump modeling are outlined in this paper. The analysis clearly demonstrates the method of increasing the system natural frequency by reducing the generalized mass without significantly changing the generalized stiffness of the structure. Also, a method of computing the maximum relative and absolute steady state responses and associated phase angles at given locations is provided. This type of information is very helpful in generating response versus frequency and phase angle versus frequency plots

  14. Analyses of containment structures with corrosion damage

    International Nuclear Information System (INIS)

    Corrosion damage to a nuclear power plant containment structure can degrade the pressure capacity of the vessel. For the low-carbon, low-strength steels used in containments, the effect of corrosion on material properties is discussed. Strain-to-failure tests, in uniaxial tension, have been performed on corroded material samples. Results were used to select strain-based failure criteria for corroded steel. Using the ABAQUS finite element analysis code, the capacity of a typical PWR Ice Condenser containment with corrosion damage has been studied. Multiple analyses were performed, with the location of the corrosion in the containment and the amount of corrosion varied in each analysis

  15. Introduction: Analysing Emotion and Theorising Affect

    Directory of Open Access Journals (Sweden)

    Peta Tait

    2016-08-01

    Full Text Available This discussion introduces ideas of emotion and affect for a volume of articles demonstrating the scope of approaches used in their study within the humanities and creative arts. The volume offers multiple perspectives on emotion and affect within 20th-century and 21st-century texts, arts and organisations and their histories. The discussion explains how emotion encompasses the emotions, emotional feeling, sensation and mood and how these can be analysed particularly in relation to literature, art and performance. It briefly summarises concepts of affect theory within recent approaches before introducing the articles.

  16. Externalizing Behaviour for Analysing System Models

    DEFF Research Database (Denmark)

    Ivanova, Marieta Georgieva; Probst, Christian W.; Hansen, René Rydhof;

    2013-01-01

    ... attackers. Therefore, many attacks are considerably easier to be performed for insiders than for outsiders. However, current models do not support explicit specification of different behaviours. Instead, behaviour is deeply embedded in the analyses supported by the models, meaning that it is a complex, if not impossible, task to change behaviours. Especially when considering social engineering or the human factor in general, the ability to use different kinds of behaviours is essential. In this work we present an approach to make the behaviour a separate component in system models, and explore how to ...

  17. Neutron dose measurements and the analyses

    International Nuclear Information System (INIS)

    This paper mainly describes the skyshine neutron dose distributions and MCNP analyses of the experiments. D-T neutron skyshine experiments were carried out at FNS through a port in the roof. Neutron and secondary gamma-ray dose rates were measured as far as 550 m and 400 m, respectively. The experimental results were analyzed with the Monte Carlo code MCNP-4C and the nuclear data library JENDL-3.2, where the FNS building and the measurement field, including the pine forest, were modeled with simplified cylindrical geometries. The MCNP calculation reproduced both the neutron and secondary gamma-ray dose rate distributions well, within an uncertainty of 30%. (author)

  18. Visuelle Analyse von Eye-Tracking-Daten

    OpenAIRE

    Chen, Xuemei

    2011-01-01

    Eye tracking is one of the most frequently used techniques for analysing human-computer interaction and for studying perception. The recorded eye-tracking data are usually analysed with heat maps or scan paths in order to determine the usability of the tested application or to draw conclusions about higher cognitive processes. The goal of this Diplom thesis is the development of new visualization techniques for eye-tracking data and the development of a study concept...

  19. Predicted vitamin D status and incidence of tooth loss and periodontitis

    Science.gov (United States)

    Jimenez, Monik; Giovannucci, Edward; Kaye, Elizabeth Krall; Joshipura, Kaumudi J; Dietrich, Thomas

    2016-01-01

    Objective Vitamin D insufficiency is highly prevalent, with particular subgroups at greater risk (e.g. the elderly and those with darker skin). Vitamin D insufficiency may partly explain US racial/ethnic disparities in the prevalence of periodontitis and tooth loss. We evaluated the association between a predictor score of plasma 25-hydroxyvitamin D (25(OH)D) and incidence of periodontitis and tooth loss. Design Detailed biennial questionnaires were collected on medical history, lifestyle practices and incident periodontitis and tooth loss. The predictor score was derived from variables known to influence circulating concentrations of plasma 25(OH)D and validated against plasma concentrations among a sub-sample. Multivariable Cox proportional-hazards models with time-varying covariates estimated the association between the predicted 25(OH)D score and time until first tooth loss. Subjects A total of 42 730 participants of the Health Professionals Follow-Up Study aged 40-75 years at baseline were followed from 1986 to 2006. Setting USA, representing all fifty states and the District of Columbia. Results We observed 13 581 incident tooth loss events from 539 335 person-years. There was a dose-dependent significant inverse association across quintiles of the predicted 25(OH)D score and incidence of tooth loss. In multivariable analyses, the highest quintile of the updated predicted 25(OH)D score compared with the lowest was associated with a 20 % lower incidence of tooth loss (hazard ratio = 0·80; 95 % CI 0·76, 0·85; P value for trend ...). Results for periodontitis were similar. Conclusions These results are suggestive of an association between predictors of vitamin D and lower incidence of tooth loss and periodontitis. PMID:23469936
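    A full Cox model with time-varying covariates needs a survival-analysis library; as a minimal stand-in illustrating the quantity being reported, crude incidence rates per person-year in two exposure groups give a rate ratio, which approximates the hazard ratio when hazards are roughly constant. The counts below are hypothetical, chosen only to reproduce the "20 % lower incidence" arithmetic.

    ```python
    # Hypothetical counts, loosely shaped like the abstract's scale.
    events_low_q, py_low_q = 3000, 100_000    # lowest quintile of predicted 25(OH)D
    events_high_q, py_high_q = 2400, 100_000  # highest quintile

    # Crude incidence rates: events per person-year of follow-up.
    rate_low = events_low_q / py_low_q
    rate_high = events_high_q / py_high_q

    # Crude rate ratio; under roughly constant hazards this approximates the
    # hazard ratio. 0.8 corresponds to a 20% lower incidence.
    rate_ratio = rate_high / rate_low
    ```

    The study's actual estimate additionally adjusts for covariates, which is what the Cox regression provides over this crude calculation.
    
    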

  20. 'Red Flag' Predictions

    DEFF Research Database (Denmark)

    Hallin, Carina Antonia; Andersen, Torben Juul; Tveterås, Sigbjørn

    -generation prediction markets and outline its unique features as a third-generation prediction market. It is argued that frontline employees gain deep insights when they execute operational activities on an ongoing basis in the organization. The experiential learning from close interaction with internal and external...

  1. Predicting the MJO

    Science.gov (United States)

    Hendon, H.

    2003-04-01

    Extended range prediction of the Madden Julian Oscillation (MJO) and seasonal prediction of MJO activity are reviewed. Skillful prediction of individual MJO events offers the possibility of forecasting increased risk of cyclone development throughout the global tropics, altered risk of extreme rainfall events in both tropics and extratropics, and displacement of storm tracks with 3-4 week lead times. The level of MJO activity within a season, which affects the mean intensity of the Australian summer monsoon and possibly the evolution of ENSO, may be governed by variations of sea surface temperature that are predictable with lead times of a few seasons. The limit of predictability for individual MJO events is unknown. Empirical-statistical schemes are skillful out to about 3 weeks and have better skill than dynamical forecast models at lead times longer than about 5 days. The dynamical forecast models typically suffer from a poor representation (or complete lack) of the MJO and large initial error. They are better used to ascertain the global impacts of the lack of the MJO rather than for determination of the limit of predictability. Dynamical extended range prediction within a GCM that has a good representation of the MJO indicates potential skill comparable to the empirical schemes. Examples of operational extended range prediction with POAMA, the new coupled seasonal forecast model at the Bureau of Meteorology that also reasonably simulates the MJO, will be presented.

  2. Improved nonlinear prediction method

    Science.gov (United States)

    Adenan, Nur Hamiza; Md Noorani, Mohd Salmi

    2014-06-01

    The analysis and prediction of time series data have long been addressed by researchers. Many techniques have been developed for application in various areas, such as weather forecasting, financial markets and hydrological phenomena involving data that are contaminated by noise. Therefore, various techniques have been introduced to improve the analysis and prediction of time series data. In view of the importance of analysis and of the accuracy of the prediction result, a study was undertaken to test the effectiveness of the improved nonlinear prediction method for data that contain noise. The improved nonlinear prediction method involves the formation of composite serial data based on the successive differences of the time series. Then, phase space reconstruction is performed on the composite (one-dimensional) data to reconstruct a number of space dimensions. Finally, the local linear approximation method is employed to make a prediction based on the phase space. This improved method was tested with logistic map data series containing 0%, 5%, 10%, 20% and 30% noise. The results show that by using the improved method, the predictions were found to be in close agreement with the observed values. The correlation coefficient was close to one when the improved method was applied to data with up to 10% noise. Thus, an improvement for analysing data with noise, without involving any noise reduction method, was introduced to predict the time series data.
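    The pipeline this abstract describes (successive differences, then phase-space reconstruction, then local prediction) can be sketched as below. A k-nearest-neighbour local average stands in for the local linear approximation, and the map parameter, embedding dimension and noise level are illustrative assumptions, not the paper's settings.

    ```python
    import numpy as np

    def delay_embed(x, dim, tau=1):
        """Phase-space reconstruction: rows are delay vectors [x_t, x_{t+tau}, ...]."""
        n = len(x) - (dim - 1) * tau
        return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

    rng = np.random.default_rng(3)

    # Logistic map series with 5% additive noise (illustrative parameters).
    x = np.empty(600); x[0] = 0.4
    for t in range(599):
        x[t + 1] = 3.9 * x[t] * (1 - x[t])
    x += 0.05 * np.std(x) * rng.normal(size=600)

    # Composite series of successive differences, then embedding.
    d = np.diff(x)
    emb = delay_embed(d, dim=3)
    train, query = emb[:-1], emb[-1]

    # Local prediction: average the successors of the k nearest delay vectors
    # (a simple stand-in for the local linear approximation).
    k = 10
    dist = np.linalg.norm(train[:-1] - query, axis=1)
    idx = np.argsort(dist)[:k]
    pred_diff = train[idx + 1, -1].mean()   # predicted next difference
    pred_next = x[-1] + pred_diff           # back to the original series
    ```

    Predicting the difference series rather than the raw series is the "composite data" step; summing the predicted difference back onto the last observation recovers a forecast of the original variable.
    
    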

  3. Zephyr - the prediction models

    DEFF Research Database (Denmark)

    Nielsen, Torben Skov; Madsen, Henrik; Nielsen, Henrik Aalborg;

    2001-01-01

    utilities as partners and users. The new models are evaluated for five wind farms in Denmark as well as one wind farm in Spain. It is shown that the predictions based on conditional parametric models are superior to the predictions obtained by state-of-the-art parametric models....

  4. Predicting AD conversion

    DEFF Research Database (Denmark)

    Liu, Yawu; Mattila, Jussi; Ruiz, Miguel Ángel Muñoz;

    2013-01-01

    To compare the accuracies of predicting AD conversion by using a decision support system (PredictAD tool) and current research criteria of prodromal AD as identified by combinations of episodic memory impairment of hippocampal type and visual assessment of medial temporal lobe atrophy (MTA) on MRI...

  5. Prediction of Antibody Epitopes

    DEFF Research Database (Denmark)

    Nielsen, Morten; Marcatili, Paolo

    2015-01-01

    self-proteins. Given the sequence or the structure of a protein of interest, several methods exploit such features to predict the residues that are more likely to be recognized by an immunoglobulin.Here, we present two methods (BepiPred and DiscoTope) to predict linear and discontinuous antibody...

  6. Error mode prediction.

    Science.gov (United States)

    Hollnagel, E; Kaarstad, M; Lee, H C

    1999-11-01

    The study of accidents ('human errors') has been dominated by efforts to develop 'error' taxonomies and 'error' models that enable the retrospective identification of likely causes. In the field of Human Reliability Analysis (HRA) there is, however, a significant practical need for methods that can predict the occurrence of erroneous actions--qualitatively and quantitatively. The present experiment tested an approach for qualitative performance prediction based on the Cognitive Reliability and Error Analysis Method (CREAM). Predictions of possible erroneous actions were made for operators using different types of alarm systems. The data were collected as part of a large-scale experiment using professional nuclear power plant operators in a full scope simulator. The analysis showed that the predictions were correct in more than 70% of the cases, and also that the coverage of the predictions depended critically on the comprehensiveness of the preceding task analysis. PMID:10582035

  7. Evaluating prediction uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    McKay, M.D. [Los Alamos National Lab., NM (United States)

    1995-03-01

    The probability distribution of a model prediction is presented as a proper basis for evaluating the uncertainty in a model prediction that arises from uncertainty in input values. Determination of important model inputs and subsets of inputs is made through comparison of the prediction distribution with conditional prediction probability distributions. Replicated Latin hypercube sampling and variance ratios are used in estimation of the distributions and in construction of importance indicators. The assumption of a linear relation between model output and inputs is not necessary for the indicators to be effective. A sequential methodology which includes an independent validation step is applied in two analysis applications to select subsets of input variables which are the dominant causes of uncertainty in the model predictions. Comparison with results from methods which assume linearity shows how those methods may fail. Finally, suggestions for treating structural uncertainty for submodels are presented.
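The replicated Latin hypercube idea in this record (comparing the prediction distribution with conditional distributions to build variance-ratio importance indicators) can be sketched as follows. This is a crude, hedged illustration of the McKay-style scheme, not the report's actual code: function names, sample sizes, and the test model are all invented for the example.

```python
import random

def latin_hypercube(n, d, rng):
    """n stratified samples in [0,1]^d: one draw per stratum per dimension,
    independently shuffled column by column."""
    cols = []
    for _ in range(d):
        col = [(i + rng.random()) / n for i in range(n)]
        rng.shuffle(col)
        cols.append(col)
    return list(zip(*cols))

def replicated_lhs_importance(model, d, j, n=64, reps=30, seed=0):
    """Correlation-ratio importance indicator for input j:
    Var(E[Y | X_j]) / Var(Y), estimated by holding a stratified sample of
    X_j fixed and re-drawing the other inputs across replicates."""
    rng = random.Random(seed)
    xj = [(i + rng.random()) / n for i in range(n)]   # conditioning values
    ys = [[0.0] * reps for _ in range(n)]
    for r in range(reps):
        others = latin_hypercube(n, d - 1, rng)
        rng.shuffle(others)                           # re-pair per replicate
        for i in range(n):
            x = list(others[i])
            x.insert(j, xj[i])
            ys[i][r] = model(x)
    flat = [y for row in ys for y in row]
    mean = sum(flat) / len(flat)
    total = sum((y - mean) ** 2 for y in flat) / len(flat)
    cond_means = [sum(row) / reps for row in ys]
    between = sum((m - mean) ** 2 for m in cond_means) / n
    return between / total
```

Note that this indicator does not assume a linear input-output relation, which is the point the abstract makes against linearity-based methods; an inert input still scores slightly above zero (roughly 1/reps) from replicate noise.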

  8. Evaluating prediction uncertainty

    International Nuclear Information System (INIS)

    The probability distribution of a model prediction is presented as a proper basis for evaluating the uncertainty in a model prediction that arises from uncertainty in input values. Determination of important model inputs and subsets of inputs is made through comparison of the prediction distribution with conditional prediction probability distributions. Replicated Latin hypercube sampling and variance ratios are used in estimation of the distributions and in construction of importance indicators. The assumption of a linear relation between model output and inputs is not necessary for the indicators to be effective. A sequential methodology which includes an independent validation step is applied in two analysis applications to select subsets of input variables which are the dominant causes of uncertainty in the model predictions. Comparison with results from methods which assume linearity shows how those methods may fail. Finally, suggestions for treating structural uncertainty for submodels are presented

  9. Is Time Predictability Quantifiable?

    DEFF Research Database (Denmark)

    Schoeberl, Martin

    2012-01-01

    Computer architects and researchers in the real-time domain have started to investigate processors and architectures optimized for real-time systems. Optimized for real-time systems means time predictable, i.e., architectures where it is possible to statically derive a tight bound of the worst-case execution time. To compare different approaches we would like to quantify time predictability; that means we need to measure it. In this paper we discuss the different approaches for these measurements and conclude that time predictability is practically not quantifiable. We can only...

  10. Ground motion predictions

    International Nuclear Information System (INIS)

    Nuclear generated ground motion is defined and then related to the physical parameters that cause it. Techniques employed for prediction of ground motion peak amplitude, frequency spectra and response spectra are explored, with initial emphasis on the analysis of data collected at the Nevada Test Site (NTS). NTS post-shot measurements are compared with pre-shot predictions. Applicability of these techniques to new areas, for example, Plowshare sites, must be questioned. Fortunately, the Atomic Energy Commission is sponsoring complementary studies to improve prediction capabilities primarily in new locations outside the NTS region. Some of these are discussed in the light of anomalous seismic behavior, and comparisons are given showing theoretical versus experimental results. In conclusion, current ground motion prediction techniques are applied to events off the NTS. Predictions are compared with measurements for the event Faultless and for the Plowshare events, Gasbuggy, Cabriolet, and Buggy I. (author)

  11. Ethics of cost analyses in medical education.

    Science.gov (United States)

    Walsh, Kieran

    2013-11-01

    Cost analyses in medical education are rarely straightforward, and rarely lead to clear-cut conclusions. Occasionally they do lead to clear conclusions but even when that happens, some stakeholders will ask difficult but valid questions about what to do following cost analyses, specifically about distributive justice in the allocation of resources. At present there are few or no debates about these issues and rationing decisions that are taken in medical education are largely made subconsciously. Distributive justice 'concerns the nature of a socially just allocation of goods in a society'. Inevitably there is a large degree of subjectivity in the judgment as to whether an allocation is seen as socially just or ethical. There are different principles by which we can view distributive justice and which therefore affect the prism of subjectivity through which we see certain problems. For example, we might say that distributive justice at a certain institution or in a certain medical education system operates according to the principle that resources must be divided equally amongst learners. Another system may say that resources should be distributed according to the needs of learners or even of patients. No ethical system or model is inherently right or wrong, they depend on the context in which the educator is working. PMID:24203859

  12. ISFSI site boundary radiation dose rate analyses

    International Nuclear Information System (INIS)

    Across the globe nuclear utilities are in the process of designing and analysing Independent Spent Fuel Storage Installations (ISFSI) for the purpose of above ground spent-fuel storage, primarily to relieve the filling of spent-fuel pools. Using a conjoining of discrete ordinates transport theory (DORT) and Monte Carlo (MCNP) techniques, an ISFSI was analysed to determine neutron and photon dose rates for a generic overpack and ISFSI pad configuration and design at distances ranging from 1 to ∼1700 m from the ISFSI array. The calculated dose rates are used to address the requirements of 10CFR72.104, which provides limits to be enforced for the protection of the public by the NRC in regard to ISFSI facilities. For this overpack, dose rates decrease by three orders of magnitude through the first 200 m moving away from the ISFSI. In addition, the contributions from different source terms change with distance. It can be observed that although side photons provide the majority of the dose rate in this calculation, scattered photons and side neutrons take on more importance as the distance from the ISFSI is increased. (authors)

  13. Hierarchical regression for analyses of multiple outcomes.

    Science.gov (United States)

    Richardson, David B; Hamra, Ghassan B; MacLehose, Richard F; Cole, Stephen R; Chu, Haitao

    2015-09-01

    In cohort mortality studies, there often is interest in associations between an exposure of primary interest and mortality due to a range of different causes. A standard approach to such analyses involves fitting a separate regression model for each type of outcome. However, the statistical precision of some estimated associations may be poor because of sparse data. In this paper, we describe a hierarchical regression model for estimation of parameters describing outcome-specific relative rate functions and associated credible intervals. The proposed model uses background stratification to provide flexible control for the outcome-specific associations of potential confounders, and it employs a hierarchical "shrinkage" approach to stabilize estimates of an exposure's associations with mortality due to different causes of death. The approach is illustrated in analyses of cancer mortality in 2 cohorts: a cohort of dioxin-exposed US chemical workers and a cohort of radiation-exposed Japanese atomic bomb survivors. Compared with standard regression estimates of associations, hierarchical regression yielded estimates with improved precision that tended to have less extreme values. The hierarchical regression approach also allowed the fitting of models with effect-measure modification. The proposed hierarchical approach can yield estimates of association that are more precise than conventional estimates when one wishes to estimate associations with multiple outcomes. PMID:26232395
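The "shrinkage" step described above can be illustrated with a minimal normal-normal hierarchical sketch: outcome-specific estimates are pulled toward a precision-weighted common mean, with imprecise estimates pulled hardest. This is a toy illustration of the idea, not the paper's Bayesian model; the function name and the between-outcome variance `tau2` are assumptions of the example.

```python
def shrink_estimates(betas, ses, tau2):
    """Shrink outcome-specific estimates toward their precision-weighted
    common mean. betas: point estimates; ses: standard errors;
    tau2: assumed between-outcome variance of the true effects."""
    ws = [1.0 / (se ** 2 + tau2) for se in ses]
    mu = sum(w * b for w, b in zip(ws, betas)) / sum(ws)
    shrunk = []
    for b, se in zip(betas, ses):
        lam = tau2 / (tau2 + se ** 2)   # shrinkage factor in [0, 1]
        shrunk.append(lam * b + (1 - lam) * mu)
    return shrunk
```

A sparse-data outcome (large standard error) gets a small `lam` and lands near the pooled mean, mirroring the paper's observation that hierarchical estimates are more precise and less extreme than separate regressions.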

  14. ANALYSES ON SYSTEMATIC CONFRONTATION OF FIGHTER AIRCRAFT

    Institute of Scientific and Technical Information of China (English)

    HuaiJinpeng; WuZhe; HuangJun

    2002-01-01

    Analysis of the systematic confrontation between two military forces is the highest hierarchy in the study of weapon-system operational effectiveness. A physical model for tactical many-on-many engagements of an aerial warfare with heterogeneous fighter aircraft is established. On the basis of the Lanchester multivariate equations of the square law, a mathematical model corresponding to the established physical model is given. A superiority parameter is then derived directly from the mathematical model. In view of the high-tech conditions of modern warfare, the concept of the superiority parameter, which more truly reflects the essence of an air-to-air engagement, is further formulated. The attrition coefficients, which are key to the differential equations, are determined using tactics of random target assignment and the air-to-air capability index of the fighter aircraft. Taking the mathematical model and superiority parameter as the core, calculations and analyses of complicated systemic problems, such as evaluation of battle superiority, prognostication of the combat process and optimization of force allocations, are accomplished. The results indicate that classical combat theory, together with its recent developments, has found new applications in military operations research for complicated confrontation analysis.
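For reference, the homogeneous two-force case of the Lanchester square law underlying the record's multivariate model is dR/dt = -a·B, dB/dt = -b·R, with attrition coefficients a, b; the quantity b·R² - a·B² is conserved, so its sign predicts the winner. A minimal Euler-integration sketch (illustrative only, with assumed names and a toy step size):

```python
def lanchester_square(r0, b0, a, b, dt=1e-3):
    """Integrate the Lanchester square-law equations
       dR/dt = -a*B,  dB/dt = -b*R
    until one force is annihilated. Returns the surviving strengths.
    The invariant b*R^2 - a*B^2 predicts the winner: the survivor count
    for a winning R is approximately sqrt((b*r0**2 - a*b0**2) / b)."""
    R, B = float(r0), float(b0)
    while R > 0 and B > 0:
        # simultaneous update from the old state (explicit Euler step)
        R, B = R - a * B * dt, B - b * R * dt
    return max(R, 0.0), max(B, 0.0)
```

With r0=100, b0=80 and a=b=1, the invariant gives 100² - 80² = 3600, so the first force should win with about √3600 = 60 survivors, which the numerical integration reproduces to within the Euler discretization error.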

  15. Sensitivity in risk analyses with uncertain numbers.

    Energy Technology Data Exchange (ETDEWEB)

    Tucker, W. Troy; Ferson, Scott

    2006-06-01

    Sensitivity analysis is a study of how changes in the inputs to a model influence the results of the model. Many techniques have recently been proposed for use when the model is probabilistic. This report considers the related problem of sensitivity analysis when the model includes uncertain numbers that can involve both aleatory and epistemic uncertainty and the method of calculation is Dempster-Shafer evidence theory or probability bounds analysis. Some traditional methods for sensitivity analysis generalize directly for use with uncertain numbers, but, in some respects, sensitivity analysis for these analyses differs from traditional deterministic or probabilistic sensitivity analyses. A case study of a dike reliability assessment illustrates several methods of sensitivity analysis, including traditional probabilistic assessment, local derivatives, and a "pinching" strategy that hypothetically reduces the epistemic uncertainty or aleatory uncertainty, or both, in an input variable to estimate the reduction of uncertainty in the outputs. The prospects for applying the methods to black box models are also considered.
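The "pinching" strategy can be shown in its simplest probabilistic form: replace one input's distribution with a point value and measure how much the output spread shrinks. This is a hedged Monte Carlo sketch of the idea only; the report works with uncertain numbers (Dempster-Shafer structures, p-boxes), and all names and sample sizes here are assumptions of the example.

```python
import random

def output_spread(model, samplers, n=2000, seed=0):
    """Monte Carlo estimate of the output standard deviation.
    samplers: one callable per input, each taking an RNG."""
    rng = random.Random(seed)
    ys = [model([s(rng) for s in samplers]) for _ in range(n)]
    m = sum(ys) / n
    return (sum((y - m) ** 2 for y in ys) / n) ** 0.5

def pinching_sensitivity(model, samplers, j, value, **kw):
    """Fractional reduction in output spread when input j is 'pinched'
    to a single point value."""
    base = output_spread(model, samplers, **kw)
    pinched = list(samplers)
    pinched[j] = lambda rng, v=value: v   # degenerate (point) distribution
    return 1.0 - output_spread(model, pinched, **kw) / base
```

Pinching the input that drives most of the uncertainty yields the largest reduction, which is exactly the ranking the method uses as an importance indicator.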

  16. Waste Stream Analyses for Nuclear Fuel Cycles

    Energy Technology Data Exchange (ETDEWEB)

    N. R. Soelberg

    2010-08-01

    A high-level study was performed in Fiscal Year 2009 for the U.S. Department of Energy (DOE) Office of Nuclear Energy (NE) Advanced Fuel Cycle Initiative (AFCI) to provide information for a range of nuclear fuel cycle options (Wigeland 2009). At that time, some fuel cycle options could not be adequately evaluated since they were not well defined and lacked sufficient information. As a result, five families of these fuel cycle options are being studied during Fiscal Year 2010 by the Systems Analysis Campaign for the DOE NE Fuel Cycle Research and Development (FCRD) program. The quality and completeness of data available to date for the fuel cycle options is insufficient to perform quantitative radioactive waste analyses using recommended metrics. This study has been limited thus far to qualitative analyses of waste streams from the candidate fuel cycle options, because quantitative data for wastes from the front end, fuel fabrication, reactor core structure, and used fuel for these options is generally not yet available.

  17. Assessment of ERANOS for HPLWR core analyses

    International Nuclear Information System (INIS)

    The High Performance Light Water Reactor (HPLWR) is an innovative thermal spectrum nuclear reactor concept in which water at supercritical pressure is used both as neutron moderator and as coolant. The use of a deterministic tool for neutronic analyses of the HPLWR core is preferred to Monte Carlo techniques mainly because of computational time reduction, but also because of higher flexibility in dealing with temperature dependent cross-sections; for these reasons, ERANOS has been chosen. Verification of the developed geometry models and selected calculation procedure is mandatory when applying ERANOS to this innovative reactor concept, since the code was originally developed for fast reactors. This task is achieved by means of code-to-code comparison with MCNP5, which ensures correct geometry representation and provides a continuous energy treatment, even if, to the authors' knowledge, it lacks extensive validation of the thermal scattering data for the considered water temperature and pressure ranges. Two main spatial scales, associated with the use of a deterministic code, require attention: 1) cell calculations, in which macroscopic self-shielded cross-sections are generated, 2) 3D calculations. A very good agreement between the codes is obtained for both cell and 3D fuel assembly calculations. The developed 3D core model has been verified by a comparison of different ERANOS modules because of the high computational power required by MCNP5. The results demonstrate the applicability of ERANOS to HPLWR analyses when adequate calculation procedures are used. (authors)

  18. Immunoregulatory effect of bifidobacteria strains in porcine intestinal epithelial cells through modulation of ubiquitin-editing enzyme A20 expression.

    Directory of Open Access Journals (Sweden)

    Yohsuke Tomosada

    Full Text Available BACKGROUND: We previously showed that evaluation of anti-inflammatory activities of lactic acid bacteria in porcine intestinal epithelial (PIE) cells is useful for selecting potentially immunobiotic strains. OBJECTIVE: The aims of the present study were: (i) to select potentially immunomodulatory bifidobacteria that beneficially modulate the Toll-like receptor (TLR)-4-triggered inflammatory response in PIE cells and; (ii) to gain insight into the molecular mechanisms involved in the anti-inflammatory effect of immunobiotics by evaluating the role of TLR2 and TLR negative regulators in the modulation of proinflammatory cytokine production and activation of mitogen-activated protein kinase (MAPK) and nuclear factor-κB (NF-κB) pathways in PIE cells. RESULTS: Bifidobacterium longum BB536 and B. breve M-16V strains significantly downregulated levels of interleukin (IL)-8, monocyte chemotactic protein (MCP)-1 and IL-6 in PIE cells challenged with heat-killed enterotoxigenic Escherichia coli. Moreover, BB536 and M-16V strains attenuated the proinflammatory response by modulating the NF-κB and MAPK pathways. In addition, our findings provide evidence for a key role for the ubiquitin-editing enzyme A20 in the anti-inflammatory effect of immunobiotic bifidobacteria in PIE cells. CONCLUSIONS: We show new data regarding the mechanism involved in the anti-inflammatory effect of immunobiotics. Several strains with immunoregulatory capabilities used a common mechanism to induce tolerance in PIE cells. Immunoregulatory strains interacted with TLR2, upregulated the expression of A20 in PIE cells, and beneficially modulated the subsequent TLR4 activation by reducing the activation of MAPK and NF-κB pathways and the production of proinflammatory cytokines. We also show that the combination of TLR2 activation and A20 induction can be used as biomarkers to screen and select potential immunoregulatory bifidobacteria strains.

  19. Structural prediction in aphasia

    Directory of Open Access Journals (Sweden)

    Tessa Warren

    2015-05-01

    Full Text Available There is considerable evidence that young healthy comprehenders predict the structure of upcoming material, and that their processing is facilitated when they encounter material matching those predictions (e.g., Staub & Clifton, 2006; Yoshida, Dickey & Sturt, 2013. However, less is known about structural prediction in aphasia. There is evidence that lexical prediction may be spared in aphasia (Dickey et al., 2014; Love & Webb, 1977; cf. Mack et al, 2013. However, predictive mechanisms supporting facilitated lexical access may not necessarily support structural facilitation. Given that many people with aphasia (PWA exhibit syntactic deficits (e.g. Goodglass, 1993, PWA with such impairments may not engage in structural prediction. However, recent evidence suggests that some PWA may indeed predict upcoming structure (Hanne, Burchert, De Bleser, & Vasishth, 2015. Hanne et al. tracked the eyes of PWA (n=8 with sentence-comprehension deficits while they listened to reversible subject-verb-object (SVO and object-verb-subject (OVS sentences in German, in a sentence-picture matching task. Hanne et al. manipulated case and number marking to disambiguate the sentences’ structure. Gazes to an OVS or SVO picture during the unfolding of a sentence were assumed to indicate prediction of the structure congruent with that picture. According to this measure, the PWA’s structural prediction was impaired compared to controls, but they did successfully predict upcoming structure when morphosyntactic cues were strong and unambiguous. Hanne et al.’s visual-world evidence is suggestive, but their forced-choice sentence-picture matching task places tight constraints on possible structural predictions. Clearer evidence of structural prediction would come from paradigms where the content of upcoming material is not as constrained. The current study used a self-paced reading paradigm to examine structural prediction among PWA in less constrained contexts. PWA (n=17 who

  20. Castor-1C spent fuel storage cask decay heat, heat transfer, and shielding analyses

    International Nuclear Information System (INIS)

    This report documents the decay heat, heat transfer, and shielding analyses of the Gesellschaft für Nuklear Services (GNS) CASTOR-1C cask used in a spent fuel storage demonstration performed at PreussenElektra's Würgassen nuclear power plant. The demonstration was performed between March 1982 and January 1984, and resulted in cask and fuel temperature data and cask exterior surface gamma-ray and neutron radiation dose rate measurements. The purpose of the analyses reported here was to evaluate decay heat, heat transfer, and shielding computer codes. The analyses consisted of (1) performing pre-look predictions (predictions performed before the analysts were provided the test data), (2) comparing ORIGEN2 (decay heat), COBRA-SFS and HYDRA (heat transfer), and QAD and DOT (shielding) results to data, and (3) performing post-test analyses if appropriate. Even though two heat transfer codes were used to predict CASTOR-1C cask test data, no attempt was made to compare the two codes. The codes are being evaluated with other test data (single-assembly data and other cask data), and to compare the codes based on one set of data may be premature and lead to erroneous conclusions

  1. Analyses of cavitation instabilities in ductile metals

    DEFF Research Database (Denmark)

    Tvergaard, Viggo

    Cavitation instabilities have been predicted for a single void in a ductile metal stressed under high triaxiality conditions. In experiments for a ceramic reinforced by metal particles a single dominant void has been observed on the fracture surface of some of the metal particles bridging a crack...... for the influence of such size-effects on cavitation instabilities are presented. When a metal contains a distribution of micro voids, and the void spacing compared to void size is not extremely large, the surrounding voids may affect the occurrence of a cavitation instability at one of the voids...

  2. Micromechanical Failure Analyses for Finite Element Polymer Modeling

    Energy Technology Data Exchange (ETDEWEB)

    CHAMBERS,ROBERT S.; REEDY JR.,EARL DAVID; LO,CHI S.; ADOLF,DOUGLAS B.; GUESS,TOMMY R.

    2000-11-01

    Polymer stresses around sharp corners and in constrained geometries of encapsulated components can generate cracks leading to system failures. Often, analysts use maximum stresses as a qualitative indicator for evaluating the strength of encapsulated component designs. Although this approach has been useful for making relative comparisons screening prospective design changes, it has not been tied quantitatively to failure. Accurate failure models are needed for analyses to predict whether encapsulated components meet life cycle requirements. With Sandia's recently developed nonlinear viscoelastic polymer models, it has been possible to examine more accurately the local stress-strain distributions in zones of likely failure initiation looking for physically based failure mechanisms and continuum metrics that correlate with the cohesive failure event. This study has identified significant differences between rubbery and glassy failure mechanisms that suggest reasonable alternatives for cohesive failure criteria and metrics. Rubbery failure seems best characterized by the mechanisms of finite extensibility and appears to correlate with maximum strain predictions. Glassy failure, however, seems driven by cavitation and correlates with the maximum hydrostatic tension. Using these metrics, two three-point bending geometries were tested and analyzed under variable loading rates, different temperatures and comparable mesh resolution (i.e., accuracy) to make quantitative failure predictions. The resulting predictions and observations agreed well suggesting the need for additional research. In a separate, additional study, the asymptotically singular stress state found at the tip of a rigid, square inclusion embedded within a thin, linear elastic disk was determined for uniform cooling. The singular stress field is characterized by a single stress intensity factor K{sub a} and the applicable K{sub a} calibration relationship has been determined for both fully bonded and

  3. Predicting geomagnetic activity indices

    International Nuclear Information System (INIS)

    Complete text of publication follows. Magnetically active times, e.g., Kp > 5, are notoriously difficult to predict, precisely the times when such predictions are crucial to the space weather users. Taking advantage of the routinely available solar wind measurements at Lagrangian point (L1) and nowcast Kps, Kp and Dst forecast models based on neural networks were developed with the focus on improving the forecast for active times. To satisfy different needs and operational constraints, three models were developed: (1) a model that inputs nowcast Kp and solar wind parameters and predicts Kp 1 hr ahead; (2) a model with the same input as model 1 and predicts Kp 4 hr ahead; and (3) a model that inputs only solar wind parameters and predicts Kp 1 hr ahead (the exact prediction lead time depends on the solar wind speed and the location of the solar wind monitor.) Extensive evaluations of these models and other major operational Kp forecast models show that, while the new models can predict Kps more accurately for all activities, the most dramatic improvements occur for moderate and active times. Similar Dst models were developed. Information dynamics analysis of Kp, suggests that geospace is more dominated by internal dynamics near solar minimum than near solar maximum, when it is more directly driven by external inputs, namely solar wind and interplanetary magnetic field (IMF).

  4. Nuclear Analyses For ITER NB System

    International Nuclear Information System (INIS)

    Full text: Detailed nuclear analyses for the latest ITER NB system are required to ensure that the NB design conforms to the nuclear regulations and licensing. A variety of nuclear analyses was conducted for the NB system, including the tokamak building and the area outside the building, by using the Monte Carlo code MCNP5.14, the activation code ACT-4 and the Fusion Evaluated Nuclear Data Library FENDL-2.1. A special “Direct 1-step Monte Carlo” method is adopted for the shutdown dose rate calculation. The NB system and the tokamak building are very complicated, and it is practically impossible to make geometry input data manually. We used the automatic converter code GEOMIT to convert CAD data to MCNP geometry input data. GEOMIT was improved for these analyses, and the conversion performance was drastically enhanced. Void cells in MCNP input data were generated by subtracting solid cell data from simple rectangular void cells. The CAD data were successfully converted to MCNP geometry input data, and void data were also adequately produced with GEOMIT. The effective dose rates at external zones (non-controlled areas) should be less than 80 μSv/month according to French regulations. Shielding structures are under analysis to reduce the radiation streaming through the openings. We are confirming that the criterion is satisfied for the NB system. The effective dose rate data in the NB cell after shutdown are necessary to check the dose rate during possible rad-works for maintenance. Dose rates for workers must be maintained as low as reasonably achievable, and the dose rate at locations where hands-on maintenance is performed should be below a target of 100 μSv/h at 12 days after shutdown. We are specifying the adequate zoning and areas where hands-on maintenance can be allowed, based on the analysis results. The cask design for transporting activated NB components is an important issue, and we are calculating the effective dose rates. The target of the effective dose rate from the activated NB components is less

  5. Flowtran assessment for predicting flow instability

    International Nuclear Information System (INIS)

    FLOWTRAN is a thermal-hydraulic assembly code for simulating Savannah River Site (SRS) reactor assemblies and predicting flow instability. Reactor power and flow transient modelling is critical in determining safe operating limits at which a reactor could be shut down without damage to the fuel assemblies. FLOWTRAN models an individual assembly's thermal-hydraulic behavior and can determine the operating power limit to avoid flow instability when the flow regime through the assembly is single-phase. Tests were conducted at Columbia University in 1988-89 with downward flow through single tubes to examine fluid flow instability. FLOWTRAN cannot predict actual flow instability because it cannot model two-phase flow. FLOWTRAN modelled the heated tubes to predict the Onset of Significant Voiding (OSV) using the Saha-Zuber correlation modified for SRS reactors; data analyses for the Columbia tests showed that the modified correlation's OSV is a conservative predictor of downward flow instability

  6. Analysing lawyers’ attitude towards knowledge sharing

    Directory of Open Access Journals (Sweden)

    Wole M. Olatokun

    2012-09-01

    Full Text Available Objectives: The study examined and identified the factors that affect lawyers' attitudes to knowledge sharing, and their knowledge sharing behaviour. Specifically, it investigated the relationship between the salient beliefs affecting the knowledge sharing attitude of lawyers, and applied a modified version of the Theory of Reasoned Action (TRA) in the knowledge sharing context, to predict how these factors affect their knowledge sharing behaviour. Method: A field survey of 273 lawyers was carried out, using a questionnaire for data collection. Collected data on all variables were structured into grouped frequency distributions. Principal Component Factor Analysis was applied to reduce the constructs, and Simple Regression was applied to test the hypotheses at the 0.05 level of significance. Results: Results showed that expected associations and contributions were the major determinants of lawyers' attitudes towards knowledge sharing. Expected reward was not significantly related to lawyers' attitudes towards knowledge sharing. A positive attitude towards knowledge sharing was found to lead to a positive intention to share knowledge, although a positive intention to share knowledge did not significantly predict a positive knowledge sharing behaviour. The level of Information Technology (IT) usage was also found to significantly affect the knowledge sharing behaviour of lawyers. Conclusion: It was recommended that law firms in the study area should deploy more IT infrastructure and services that encourage effective knowledge sharing amongst lawyers.

  7. On Prediction of EOP

    CERN Document Server

    Malkin, Z

    2009-01-01

    Two methods of predicting the Pole coordinates and TAI-UTC were tested: extrapolation of the deterministic components, and ARIMA. It was found that each of these methods is most effective for a certain prognosis length. For short-term prediction the ARIMA algorithm yields the more accurate prognosis, while for long-term prediction extrapolation is preferable. A combined algorithm is therefore used in practice by the IAA EOP Service. The accuracy of the prognosis is close to that of the IERS algorithms. For prediction of nutation, the program KSV-1996-1 by T. Herring is used.
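The horizon-dependent combination described here (a stochastic model for short leads, deterministic-component extrapolation for long leads) can be sketched in miniature. This is a toy illustration only, not the IAA EOP Service algorithm: the AR(1) stand-in for ARIMA, the linear trend as the "deterministic component", the function names, and the switch-over horizon are all assumptions of the example.

```python
def linear_extrapolation(xs, h):
    """Fit a least-squares linear trend to xs and extrapolate h steps ahead."""
    n = len(xs)
    tbar = (n - 1) / 2.0
    xbar = sum(xs) / n
    num = sum((t - tbar) * (x - xbar) for t, x in enumerate(xs))
    den = sum((t - tbar) ** 2 for t in range(n))
    slope = num / den
    return xbar + slope * ((n - 1 + h) - tbar)

def ar1_forecast(xs, h):
    """AR(1) forecast: deviations from the mean decay by the estimated
    lag-1 coefficient phi at each step ahead."""
    n = len(xs)
    mean = sum(xs) / n
    d = [x - mean for x in xs]
    phi = sum(a * b for a, b in zip(d, d[1:])) / sum(a * a for a in d[:-1])
    return mean + (phi ** h) * d[-1]

def combined_forecast(xs, h, short_horizon=10):
    """Use the stochastic model for short leads and trend extrapolation
    for long leads, mirroring the combined scheme."""
    return ar1_forecast(xs, h) if h <= short_horizon else linear_extrapolation(xs, h)
```

The qualitative behaviour matches the record: the autoregressive forecast reverts to the series mean at long leads (losing any trend), while the extrapolation carries the deterministic component forward indefinitely.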

  8. Analysing transfer phenomena in osmotic evaporation

    Directory of Open Access Journals (Sweden)

    Freddy Forero Longas

    2011-12-01

    Full Text Available Osmotic evaporation is a modification of traditional processes using membranes; by means of a vapour pressure differential, produced by a highly concentrated extraction solution, water is transferred through a hydrophobic membrane as vapour. This technique has many advantages over traditional processes, allowing work at atmospheric pressure and low temperatures, making it ideal for heat-sensitive products. This paper presents and synthetically analyses the phenomena of heat and mass transfer which occur in the process and describes the models used for estimating the parameters of interest, such as flow, temperature, heat transfer rate and the relationships that exist amongst them when hollow fibre modules are used, providing a quick reference tool and specific information about this process.

  9. Seismic analyses of structures. 1st draft

    International Nuclear Information System (INIS)

    The dynamic analysis presented in this paper refers to the seismic analysis of the main building of the Paks NPP. The aim of the analysis was to determine the floor response spectra as the response to seismic input. The analysis was performed with a 3-dimensional calculation model, and the floor response spectra were determined for a number of levels from the floor response time histories; no other adjustments were applied. The following results of the seismic analysis are presented: the 3-dimensional finite element model; basic assumptions of the dynamic analyses; a table of frequencies and included factors; modal masses for all modes; and floor response spectra in all the selected nodes, with figures of the indicated nodes and important nodes of free vibration

  10. Genetic Analyses of Meiotic Recombination in Arabidopsis

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Meiosis is essential for sexual reproduction, and recombination is a critical step required for normal meiosis. Understanding the underlying molecular mechanisms that regulate recombination is important for medical, agricultural and ecological reasons. Readily available molecular and cytological tools make Arabidopsis an excellent system in which to study meiosis. Here we review recent developments in molecular genetic analyses of meiotic recombination. These include studies on plant homologs of yeast and animal genes, as well as novel genes that were first identified in plants. The characterizations of these genes have demonstrated essential functions from the initiation of recombination by double-strand breaks to the repair of such breaks, and from the formation of double Holliday junctions to the possible resolution of these junctions, both of which are critical for crossover formation. The recent advances have ushered in a new era in plant meiosis, in which the combination of genetics, genomics, and molecular cytology can uncover important gene functions.

  11. Deterministic analyses of severe accident issues

    International Nuclear Information System (INIS)

    Severe accidents in light water reactors involve complex physical phenomena. In the past there has been a heavy reliance on simple assumptions regarding physical phenomena alongside probability methods to evaluate risks associated with severe accidents. Recently GE has developed realistic methodologies that permit deterministic evaluations of severe accident progression and of some of the associated phenomena in the case of Boiling Water Reactors (BWRs). These deterministic analyses indicate that with appropriate system modifications and operator actions, core damage can be prevented in most cases. Furthermore, in cases where core-melt is postulated, containment failure can either be prevented or significantly delayed to allow sufficient time for recovery actions to mitigate severe accidents

  12. Spatial Analyses of Harappan Urban Settlements

    Directory of Open Access Journals (Sweden)

    Hirofumi Teramura

    2006-12-01

    Full Text Available The Harappan Civilization occupies a unique place among the early civilizations of the world, with its well planned urban settlements, advanced handicraft and technology, and religious and trade activities. Using a Geographical Information System (GIS), this study presents spatial analyses that locate urban settlements on a digital elevation model (DEM) according to the three phases of early, mature and late. Understanding the relationship between the spatial distribution of Harappan sites and changes in factors such as topographic features, river passages or sea level will lead to an understanding of the dynamism of this civilization. It will also afford a glimpse of the factors behind the formation, development, and decline of the Harappan Civilization.

  13. Angular analyses in relativistic quantum mechanics

    International Nuclear Information System (INIS)

    This work describes the angular analysis of reactions between particles with spin in a fully relativistic fashion. One-particle states are introduced, following Wigner's method, as representations of the inhomogeneous Lorentz group. In order to perform the angular analyses, the reduction of the product of two representations of the inhomogeneous Lorentz group is studied. Clebsch-Gordan coefficients are computed for the following couplings: l-s coupling, helicity coupling, multipolar coupling, and symmetric coupling for more than two particles. Massless and massive particles are handled simultaneously. Along the way we construct spinorial amplitudes and free fields, and we recall how to establish convergence theorems for angular expansions from analyticity hypotheses. Finally we substitute for these hypotheses the idea of a 'potential radius', which gives at low energy the usual 'centrifugal barrier' factors. The presence of such factors had never before been deduced from hypotheses compatible with relativistic invariance. (author)

  14. Correlation analyses of deep galaxy samples

    International Nuclear Information System (INIS)

    Estimates of the two-point angular correlation function, w(theta), are presented for galaxy samples obtained from COSMOS machine measurements of 1.2-m UK Schmidt telescope (UKST) and 4-m Anglo-Australian telescope (AAT) plates. All of the estimated w(theta) are consistent with a -0.8 power-law slope at small scales. At larger angular scales a break from the power-law behaviour is seen in the UKST w(theta), corresponding to a spatial separation of 3 h^-1 Mpc, in agreement with earlier results. The AAT plates allow the correlation analyses to be carried out to 24 mag in the blue passband and 22 mag in the red passband. It is observed that the correlation function amplitude scaling relation in both passbands is very similar. (author)
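
The -0.8 power-law slope quoted above is the kind of parameter recovered by a straight-line fit to w(theta) in log-log space. A minimal sketch with synthetic values; real estimates come from galaxy pair counts, which are not reproduced here, and the amplitude and scatter below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic angular correlation measurements following w ~ A * theta^(-0.8),
# with lognormal scatter standing in for estimator noise.
theta = np.logspace(-2, 0, 20)                       # angular scale, degrees
w = 0.05 * theta ** -0.8 * np.exp(rng.normal(0, 0.05, theta.size))

# A power law is a straight line in log-log space: log w = slope*log theta + const
slope, log_amp = np.polyfit(np.log10(theta), np.log10(w), 1)
print(f"fitted slope = {slope:.2f}  (input slope was -0.8)")
```

A break from power-law behaviour, as seen in the UKST data at larger scales, would show up here as a systematic departure of the points from the fitted line beyond some theta.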

  15. Analysing weak orbital signals in Gaia data

    CERN Document Server

    Lucy, L B

    2014-01-01

    Anomalous orbits are found when minimum-chi^{2} estimation is applied to synthetic Gaia data for weak orbital signals - i.e., orbits whose astrometric signatures are comparable to the single-scan measurement error (Pourbaix 2002). These orbits are nearly parabolic, edge-on, and their major axes align with the line-of-sight to the observer. Such orbits violate the Copernican principle (CPr) and as such could be rejected. However, the preferred alternative is to develop a statistical technique that incorporates the CPr as a fundamental postulate. This can be achieved in the context of Bayesian estimation by defining a Copernican prior. With this development, Pourbaix's anomalous orbits no longer arise. Instead, orbits with a somewhat higher chi^{2} but which do not violate the CPr are selected. Other areas of astronomy where the investigator must analyse data from 'imperfect experiments' might similarly benefit from appropriately-defined Copernican priors.

  16. Preserving the nuclear option: analyses and recommendations

    International Nuclear Information System (INIS)

    It is certain that a future role for nuclear power will depend on substantial changes in the management and regulation of the enterprise. It is widely believed that institutional, rather than technological, change is, at least in the short term, the key to resuscitating the nuclear option. Several recent analyses of the problems facing nuclear power, together with the current congressional hearing on the Nuclear Regulatory Commission's fiscal year 1986 budget request, have examined both the future of nuclear power and what can be done to address present institutional shortcomings. The congressional sessions have provided an indication of the views of both legislators and regulators, and this record, although mixed, generally shows continued optimism about the prospects of the nuclear option if needed reforms are accomplished

  17. Thermal hydraulic reactor safety analyses and experiments

    International Nuclear Information System (INIS)

    The report introduces the results of the thermal hydraulic reactor safety research performed in the Nuclear Engineering Laboratory of the Technical Research Centre of Finland (VTT) during the years 1972-1987. Practical applications, i.e. analyses for the safety authorities and power companies, are also presented. The emphasis is on description of the state-of-the-art know-how. The report describes VTT's most important computer codes, both those of foreign origin and those developed at VTT, and their assessment work; VTT's own experimental research; and the international experimental projects and other forms of cooperation VTT has participated in. Appendix 8 contains a comprehensive list of the most important publications and technical reports produced. They present the content and results of the research in detail. (orig.)

  18. Reliability and safety analyses under fuzziness

    International Nuclear Information System (INIS)

    Fuzzy theory, for example possibility theory, is compatible with probability theory. What has been shown so far is that probability theory need not be replaced by fuzzy theory, but rather that the former works much better in applications if it is combined with the latter. In fact, there are two essential uncertainties in the field of reliability and safety analyses: one is probabilistic uncertainty, which is more relevant for mechanical systems and the natural environment, and the other is fuzziness (imprecision) caused by the presence of human beings in systems. Classical probability theory alone is therefore not sufficient to deal with uncertainties in humanistic systems. In this context, this collection of works marks a milestone in the debate between probability theory and fuzzy theory. The volume covers fault analysis, lifetime analysis, reliability, quality control, safety analysis and risk analysis. (orig./DG). 106 figs

  19. First international intercomparison of image analysers

    CERN Document Server

    Pálfalvi, J; Eoerdoegh, I

    1999-01-01

    Image analyser systems used for evaluating solid state nuclear track detectors (SSNTD) were compared in order to establish minimum hardware and software requirements and methodology necessary in different fields of radiation dosimetry. For the purpose, CR-39 detectors (TASL, Bristol, U.K.) were irradiated with different (n,alpha) and (n,p) converters in a reference Pu-Be neutron field, in an underground laboratory with high radon concentration and by different alpha sources at the Atomic Energy Research Institute (AERI) in Budapest, Hungary. 6 sets of etched and pre-evaluated detectors and the 7th one without etching were distributed among the 14 laboratories from 11 countries. The participants measured the different track parameters and statistically evaluated the results, to determine the performance of their system. The statistical analysis of results showed high deviations from the mean values in many cases. As the conclusion of the intercomparison recommendations were given to fulfill those requirements ...

  20. Fouling analyses for heat exchangers of NPP

    International Nuclear Information System (INIS)

    Fouling of heat exchangers is caused by water-borne deposits, commonly known as foulants, including particulate matter from the air; migrated corrosion products; silt, clays, and sand suspended in water; organic contaminants; and boron-based deposits in plants. This fouling is known to interfere with normal flow characteristics and to reduce the thermal efficiency of heat exchangers. In order to analyze the fouling of heat exchangers in a nuclear power plant, the fouling factor is introduced, based on the ASME O and M codes and TEMA standards. This paper focuses on the fouling analyses for the heat exchangers of several primary systems: the RHR heat exchanger of the residual heat removal system, the letdown heat exchanger of the chemical and volume control system, and the CCW heat exchanger of the component cooling water system. Based on the results, the fouling levels for the three heat exchangers are assessed
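
The fouling factor mentioned above is conventionally defined from the clean and fouled overall heat-transfer coefficients. A minimal sketch of that relation; the coefficient values below are hypothetical, not plant data.

```python
# Fouling resistance from clean vs. fouled overall heat-transfer
# coefficients (the conventional definition used with TEMA-style
# fouling allowances).
def fouling_factor(u_clean, u_fouled):
    """R_f = 1/U_fouled - 1/U_clean, in m^2*K/W."""
    return 1.0 / u_fouled - 1.0 / u_clean

u_clean = 1200.0    # W/(m^2*K), clean heat exchanger (hypothetical value)
u_fouled = 950.0    # W/(m^2*K), after a period in service (hypothetical value)
rf = fouling_factor(u_clean, u_fouled)
print(f"fouling factor = {rf:.2e} m^2*K/W")
```

Monitoring `rf` over time against a design fouling allowance is one way to decide when a heat exchanger's thermal margin is being eroded.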

  1. Communication analyses of plant operator crews

    International Nuclear Information System (INIS)

    Elucidation of crew communication aspects is required to improve the man-man interface which supports operators' diagnoses and decisions. Experiments to clarify operator performance under abnormal conditions were evaluated by protocol analyses, interviews, etc. using a training simulator. We had the working hypothesis, based on experimental observations, that operator performance can be evaluated by analysis of crew communications. The following four approaches were tried to evaluate operator performance. (1) Crew performance was quantitatively evaluated by the number of tasks undertaken by an operator crew. (2) The group thinking process was clarified by cognition-communication flow. (3) The group response process was clarified by movement flow. (4) Quantitative indexes for evaluating crew performance were considered to be represented by the amount of information effectively exchanged among operators. (author)

  2. Analysing Medieval Urban Space; a methodology

    Directory of Open Access Journals (Sweden)

    Marlous L. Craane MA

    2007-08-01

    Full Text Available This article has been written in reaction to recent developments in medieval history and archaeology, to study not only the buildings in a town but also the spaces that hold them together. It discusses a more objective and interdisciplinary approach for analysing urban morphology and use of space. It proposes a 'new' methodology by combining town plan analysis and space syntax. This methodology was trialled on the city of Utrecht in the Netherlands. By comparing the results of this 'new' methodology with the results of previous, more conventional, research, this article shows that space syntax can be applied successfully to medieval urban contexts. It does this by demonstrating a strong correlation between medieval economic spaces and the most integrated spaces, just as is found in the study of modern urban environments. It thus provides a strong basis for the use of this technique in future research of medieval urban environments.

  3. Preclosure Consequence Analyses for License Application

    International Nuclear Information System (INIS)

    The purpose of this calculation is to demonstrate that the preclosure performance objectives, specified in 10 CFR 63.111(a) and 10 CFR 63.111(b) [DIRS 173273], have been met for the proposed design and operations in the geologic repository operations area. Radiological consequence analyses are performed for potential releases and direct radiation from normal operations in surface and subsurface facilities and from Category 1 and Category 2 event sequences during the preclosure period. Surface releases from normal repository operations are primarily from radionuclides released from opening a transportation cask during dry transfer operations of spent nuclear fuel (SNF) in Dry Transfer Facility 1 (DTF 1), Dry Transfer Facility 2 (DTF 2), or the Fuel Handling Facility (FHF). Subsurface releases from normal repository operations are from resuspension of waste package surface contamination and neutron activation of ventilated air and silica dust from host rock in the emplacement drifts. Preclosure performance objectives are discussed and summarized

  4. DEPUTY: analysing architectural structures and checking style

    International Nuclear Information System (INIS)

    The DepUty (dependencies utility) can be classified as a project and process management tool. The main goal of DepUty is to assist, by means of source code analysis and graphical representation using UML, in understanding dependencies of sub-systems and packages in CMS Object Oriented software, to understand architectural structure, and to schedule code release in modularised integration. It also allows a newcomer to more easily understand the global structure of CMS software, and to avoid circular dependencies up-front or re-factor the code, in case it was already too close to the edge of non-maintainability. The authors discuss the various views DepUty provides to analyse package dependencies, and illustrate both the metrics and style checking facilities it provides

  5. Attitude stability analyses for small artificial satellites

    International Nuclear Information System (INIS)

    The objective of this paper is to analyze the stability of the rotational motion of a symmetrical spacecraft in a circular orbit. The equilibrium points and regions of stability are established when components of the gravity gradient torque acting on the spacecraft are included in the equations of rotational motion, which are described by the Andoyer variables. The nonlinear stability of the equilibrium points of the rotational motion is analysed here by the Kovalev-Savchenko theorem, which makes it possible to verify whether they remain stable under the influence of the higher-order terms of the normal Hamiltonian. In this paper, numerical simulations are made for a small hypothetical artificial satellite. Several stable equilibrium points were determined, and regions around these points have been established by variations in the orbital inclination and in the spacecraft's principal moment of inertia. The present analysis can directly contribute to the maintenance of the spacecraft's attitude

  6. Analysing Attrition in Outsourced Software Project

    Directory of Open Access Journals (Sweden)

    Umesh Rao Hodeghatta

    2015-01-01

    Full Text Available Information systems (IS) outsourcing has grown as a major business phenomenon, and is widely accepted as a business tool. Software outsourcing companies provide expertise, knowledge and capabilities to their clients by taking up projects both onsite and offsite. These companies face numerous challenges, including attrition of project members. Attrition is a major challenge experienced by the outsourcing companies as it has a severe impact on business, revenues and profitability. In this paper, attrition data of a major software outsourcing company was analysed and an attempt to find the reason for attrition is also made. The data analysis was based on the data collected by an outsourcing company over a period of two years for a major client. The results show that client-initiated attrition can have an impact on the project, and that members quit the outsourcing company due to client-initiated ramp down without revealing the reason.

  7. Digital analyses of cartometric Fruska Gora guidelines

    Directory of Open Access Journals (Sweden)

    Živković Dragica

    2013-01-01

    Full Text Available Modern geomorphological research uses quantitative statistical and cartographic methods to analyse topographic relief features and their mutual connections on the basis of good-quality numeric parameters. Important morphological characteristics are precisely the angle of topography (slope), hypsometry, topographic exposition and so on. Small and seemingly insignificant relief slants can deeply affect land configuration, hypsometry, topographic exposition etc. Expositions modify the light and heat of interconnected phenomena: soil and air temperature, soil disintegration, the length of the vegetation period, the intensity of photosynthesis, the yield of agricultural crops, the height of the snow limit etc. [Projekat Ministarstva nauke Republike Srbije, br. 176008 i br. III44006]

  8. Design and analyses of clinical trials

    International Nuclear Information System (INIS)

    The course will use RTOG studies to illustrate design and analysis issues for phase I, phase II, and phase III trials. The issues discussed will include types of statistical errors, the selection of study endpoints, choice of the appropriate study population, determination of sample sizes, randomization, and plans for statistical analyses. Estimation of sample sizes will be discussed for both absolute survival (alive or dead) and cause-specific failure (local failure). For phase III trials, Data Monitoring Committees are now widely used in multi-centered trials. Their main purpose is to determine if there is sufficient evidence to terminate a study for efficacy or safety reasons. The results of two such terminated trials will be used to illustrate the DMC's function. The question of when a trial should be reported at a medical meeting and in the literature will be explored. The emphasis will be on concepts, and statistical notation will be kept to a minimum
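
For the survival endpoints mentioned above, a standard back-of-the-envelope sample-size step is Schoenfeld's approximation for the number of events needed in a two-arm log-rank comparison with equal allocation. This is a textbook formula shown for illustration; the hazard ratio below is an assumption, not taken from any RTOG study.

```python
import math

# Schoenfeld's approximation: required number of events for a two-arm
# survival comparison with 1:1 allocation,
#   d = 4 * (z_alpha + z_beta)^2 / (ln HR)^2
def required_events(hazard_ratio, z_alpha=1.96, z_beta=0.84):
    """Events needed for two-sided alpha=0.05 and 80% power (default z's)."""
    return 4 * (z_alpha + z_beta) ** 2 / math.log(hazard_ratio) ** 2

# e.g. to detect a (hypothetical) hazard ratio of 0.67:
print(math.ceil(required_events(0.67)))
```

The total number of patients is then obtained by dividing the required events by the expected event probability over the accrual and follow-up period, which is where endpoint choice (absolute survival vs. cause-specific failure) enters the calculation.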

  9. The radiation analyses of ITER lower ports

    International Nuclear Information System (INIS)

    The ITER Vacuum Vessel has upper, equatorial, and lower ports used for equipment installation, diagnostics, heating and current drive systems, cryo-vacuum pumping, and access inside the vessel for maintenance. At the level of the divertor, the nine lower ports for remote handling, cryo-vacuum pumping and diagnostics are inclined downwards and toroidally located every 40°. The cryopump port additionally has a branch to accommodate a second cryopump. The ports, as openings in the Vacuum Vessel, permit radiation streaming out of the vessel, which affects the heating of the components in the outer regions of the machine, inside and outside the ports. Safety concerns are also raised with respect to the dose after shutdown at the cryostat behind the ports: in such zones the radiation dose level must be kept below the regulatory limit to allow personnel access for maintenance purposes. Neutronic analyses have been required to qualify the ITER project related to the lower ports. A 3-D model was used to take into account full details of the ports and the lower machine surroundings. MCNP version 5 1.40 has been used with the FENDL 2.1 nuclear data library. The ITER 40° model distributed by the ITER Organization was developed in the lower part to include the relevant details. The results of a first analysis, focused on the cryopump system only, were recently published. In this paper more complete data on the cryopump port and analyses for the remote handling port and the diagnostic rack are presented; the results of both analyses give a complete map of the radiation loads in the outer divertor ports. Nuclear heating, dpa, tritium production, and dose rates after shutdown are provided, and the implications for the design are discussed.

  10. The radiation analyses of ITER lower ports

    Energy Technology Data Exchange (ETDEWEB)

    Petrizzi, L., E-mail: petrizzi@frascati.enea.it [Associazione EURATOM-ENEA sulla Fusione, Via Enrico Fermi 45, 00044 Frascati, Rome (Italy); Brolatti, G. [Associazione EURATOM-ENEA sulla Fusione, Via Enrico Fermi 45, 00044 Frascati, Rome (Italy); Martin, A.; Loughlin, M. [ITER Organization, Cadarache, 13108 St Paul-lez-Durance (France); Moro, F.; Villari, R. [Associazione EURATOM-ENEA sulla Fusione, Via Enrico Fermi 45, 00044 Frascati, Rome (Italy)

    2010-12-15

    The ITER Vacuum Vessel has upper, equatorial, and lower ports used for equipment installation, diagnostics, heating and current drive systems, cryo-vacuum pumping, and access inside the vessel for maintenance. At the level of the divertor, the nine lower ports for remote handling, cryo-vacuum pumping and diagnostic are inclined downwards and toroidally located each every 40°. The cryopump port has additionally a branch to allocate a second cryopump. The ports, as openings in the Vacuum Vessel, permit radiation streaming out of the vessel which affects the heating in the components in the outer regions of the machine inside and outside the ports. Safety concerns are also raised with respect to the dose after shutdown at the cryostat behind the ports: in such zones the radiation dose level must be kept below the regulatory limit to allow personnel access for maintenance purposes. Neutronic analyses have been required to qualify the ITER project related to the lower ports. A 3-D model was used to take into account full details of the ports and the lower machine surroundings. MCNP version 5 1.40 has been used with the FENDL 2.1 nuclear data library. The ITER 40° model distributed by the ITER Organization was developed in the lower part to include the relevant details. The results of a first analysis, focused on cryopump system only, were recently published. In this paper more complete data on the cryopump port and analysis for the remote handling port and the diagnostic rack are presented; the results of both analyses give a complete map of the radiation loads in the outer divertor ports. Nuclear heating, dpa, tritium production, and dose rates after shutdown are provided and the implications for the design are discussed.

  11. TRAC analyses for CCTF and SCTF tests and UPTF design/operation

    International Nuclear Information System (INIS)

    The 2D/3D Program is a multinational (Germany, Japan, and the United States) experimental and analytical nuclear reactor safety research program. The Los Alamos analysis effort functions as a vital part of the 2D/3D program. The CCTF and SCTF analyses have demonstrated that TRAC-PF1 can correctly predict multidimensional, nonequilibrium behavior in large-scale facilities prototypical of actual PWRs. Through these and future TRAC analyses the experimental findings can be related from facility to facility, and the results of this research program can be directly related to licensing concerns affecting actual PWRs

  12. Proposed Testing to Assess the Accuracy of Glass-To-Metal Seal Stress Analyses.

    Energy Technology Data Exchange (ETDEWEB)

    Chambers, Robert S.; Emery, John M; Tandon, Rajan; Antoun, Bonnie R.; Stavig, Mark E.; Newton, Clay S.; Gibson, Cory S; Bencoe, Denise N.

    2014-09-01

    The material characterization tests conducted on 304L VAR stainless steel and Schott 8061 glass have provided higher fidelity data for calibration of material models used in Glass-To-Metal (GTM) seal analyses. Specifically, a Thermo-Multi-Linear Elastic Plastic (thermo-MLEP) material model has been defined for SS304L and the Simplified Potential Energy Clock nonlinear viscoelastic model has been calibrated for the S8061 glass. To assess the accuracy of finite element stress analyses of GTM seals, a suite of tests is proposed to provide data for comparison to model predictions.

  13. Nonlinear dynamic analyses of seismic tests of a modified scale model PWR primary coolant loop

    International Nuclear Information System (INIS)

    Simplified and detailed nonlinear piping analysis methods were used in post-test predictions of the test behavior of a 1/2.5-scale modified primary loop of a Japanese PWR system at different levels of seismic loading. The testing was conducted using the Tadotsu Large-Scale Vibration Table Facility in Japan. The simplified nonlinear analyses used the refined incremental hinge method and the detailed nonlinear analyses used the implicit integration time history option of the ABAQUS computer code. This paper describes the tests, analysis techniques, and computer models. The analysis-to-test comparisons are discussed and conclusions and recommendations are provided

  14. Genomic Prediction in Barley

    DEFF Research Database (Denmark)

    Edriss, Vahid; Cericola, Fabio; Jensen, Jens D;

    Genomic prediction uses markers (SNPs) across the whole genome to predict individual breeding values at an early growth stage, potentially before large-scale phenotyping. One of the applications of genomic prediction in plant breeding is to identify the best individual candidate lines to contribute to the next generation. The main goal of this study was to assess the potential of using genomic prediction in a commercial barley breeding program. The data used in this study were from the Nordic Seed company, which is located in Denmark. Around 350 advanced lines were genotyped with the 9K Barley chip from Illumina. Traits used in this study were grain yield, plant height and heading date. Heading date is the number of days it takes after 1st June for a plant to head. Heritabilities were 0.33, 0.44 and 0.48 for yield, height and heading, respectively, for the average of nine plots. The GBLUP model was used for genomic prediction.
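
The GBLUP step mentioned above can be sketched with simulated genotypes: build a VanRaden-style genomic relationship matrix from centred SNP codes and solve a ridge-type mixed-model equation. This is not the Nordic Seed data or analysis code; the dimensions and marker effects are illustrative assumptions, and only the heritability value (0.44, quoted for plant height) is taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 100, 500                               # lines, SNP markers (illustrative)
geno = rng.integers(0, 3, size=(n, m)).astype(float)   # 0/1/2 genotype codes

# VanRaden genomic relationship matrix from centred markers
p = geno.mean(axis=0) / 2                     # allele frequencies
Z = geno - 2 * p                              # centred marker matrix
G = Z @ Z.T / (2 * (p * (1 - p)).sum())

# Simulated true breeding values and phenotypes at heritability h2 = 0.44
true_bv = Z @ rng.normal(0, 0.05, m)
h2 = 0.44
y = true_bv + rng.normal(0, true_bv.std() * np.sqrt((1 - h2) / h2), n)

# GBLUP solve: gebv = G (G + lambda*I)^(-1) (y - mean), lambda = (1-h2)/h2
lam = (1 - h2) / h2
gebv = G @ np.linalg.solve(G + lam * np.eye(n), y - y.mean())
print("accuracy:", round(float(np.corrcoef(gebv, true_bv)[0, 1]), 2))
```

In practice the predictive ability is assessed by cross-validation (masking phenotypes of some lines and predicting them from their genomic relationships), rather than by the in-sample correlation shown here.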


  16. Predicted value of $0\,\nu\beta\beta$ ...

    CERN Document Server

    Maedan, Shinji

    2016-01-01

    Assuming that the lightest neutrino mass $m_0$ is measured, we study the influence of the error of the measured $m_0$ on the uncertainty of the predicted value of the neutrinoless double beta decay ($0\,\nu\beta\beta$) ...

  17. Predictable grammatical constructions

    DEFF Research Database (Denmark)

    Lucas, Sandra

    2015-01-01

    My aim in this paper is to provide evidence from diachronic linguistics for the view that some predictable units are entrenched in grammar and consequently in human cognition, in a way that makes them functionally and structurally equal to nonpredictable grammatical units, suggesting that these predictable units should be considered grammatical constructions on a par with the nonpredictable constructions. Frequency has usually been seen as the only possible argument speaking in favor of viewing some formally and semantically fully predictable units as grammatical constructions. However, this paper ... semantically and formally predictable. Despite this difference, [méllo INF], like the other future periphrases, seems to be highly entrenched in the cognition (and grammar) of Early Medieval Greek language users, and consequently a grammatical construction. The syntactic evidence speaking in favor of [méllo ...

  18. Robust Distributed Online Prediction

    CERN Document Server

    Dekel, Ofer; Shamir, Ohad; Xiao, Lin

    2010-01-01

    The standard model of online prediction deals with serial processing of inputs by a single processor. However, in large-scale online prediction problems, where inputs arrive at a high rate, an increasingly common necessity is to distribute the computation across several processors. A non-trivial challenge is to design distributed algorithms for online prediction, which maintain good regret guarantees. In \\cite{DMB}, we presented the DMB algorithm, which is a generic framework to convert any serial gradient-based online prediction algorithm into a distributed algorithm. Moreover, its regret guarantee is asymptotically optimal for smooth convex loss functions and stochastic inputs. On the flip side, it is fragile to many types of failures that are common in distributed environments. In this companion paper, we present variants of the DMB algorithm, which are resilient to many types of network failures, and tolerant to varying performance of the computing nodes.
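The conversion the abstract describes can be sketched as follows: each of k simulated workers computes a gradient on its own input, the gradients are averaged, and one serial-style update is applied per round, so the serial algorithm's update rule is reused unchanged. This is an illustrative sketch of the distributed mini-batch idea, not the paper's code, and it omits the asynchrony and failure handling the paper addresses.

```python
import numpy as np

def grad(w, x, y):
    # Gradient of the squared loss 0.5 * (w.x - y)^2 for one example
    return (w @ x - y) * x

rng = np.random.default_rng(1)
d, k, rounds, lr = 5, 4, 500, 0.05
w_true = rng.normal(size=d)
w = np.zeros(d)

for _ in range(rounds):
    X = rng.normal(size=(k, d))     # one input per worker this round
    y = X @ w_true                  # noiseless targets for illustration
    # Each "worker" computes its local gradient; the node averages them
    g = np.mean([grad(w, X[i], y[i]) for i in range(k)], axis=0)
    w -= lr * g                     # single serial-style update per round

print(np.linalg.norm(w - w_true))   # approaches 0 as rounds grow
```

The key design point is that regret analysis of the serial algorithm carries over because the averaged gradient is an unbiased, lower-variance estimate of the same quantity the serial update would use.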

  19. Insights from Severe Accident Analyses for Verification of VVER SAMG

    International Nuclear Information System (INIS)

    The severe accident analyses of simultaneous rupture of all four steam lines (case-a), LOCA concurrent with station blackout (case-b) and station blackout alone (case-c) were performed with the computer code ASTEC V2r2 for a typical VVER-1000. The results obtained will be used for verification of severe accident provisions and Severe Accident Management Guidelines (SAMG). Auxiliary feedwater and emergency core cooling systems are modelled as boundary conditions. The ICARE module is used to simulate the reactor core, which is divided into five radial regions by grouping similarly powered fuel assemblies together. Initially, the CESAR module computes the thermal hydraulics in the primary and secondary circuits. As soon as core uncovery begins, the ICARE module is actuated based on certain parameters; from then on, ICARE computes the thermal hydraulics in the core, bypass, downcomer and lower plenum, while CESAR handles the remaining components in the primary and secondary loops. The CPA module is used to simulate the containment and to predict the thermal-hydraulic and hydrogen behaviour there. The accident sequences were selected so as to cover low/high pressure and slow/fast core damage progression: the events simulated included slow progression events at high pressure and fast accident progression at low primary pressure. An analysis was also carried out for the case of SBO with opening of the PORVs when the core exit temperature exceeds a certain value, as part of the SAMG. A time step sensitivity study was carried out for LOCA with SBO. In general, the trends and magnitudes of the parameters are as expected. The key results of the above analyses are presented in this paper. (author)

  20. Research on the corrosion- and scale-inhibition properties of green water treatment agents for A20 steel

    Institute of Scientific and Technical Information of China (English)

    李素云; 王钢; 邵波; 梅其政; 袁曹龙

    2011-01-01

    In simulated circulating water with ρ(Ca²⁺) = ρ(HCO₃⁻) = 250 mg/L, the scale-inhibition performance of the three agents PESA, HEDP and AA/AMPS on A20 steel ranked PESA > HEDP > AA/AMPS. The optimal blend was HEDP 2.5 mg/L, PESA 1.5 mg/L and imidazoline 1.5 mg/L. The blended formulation showed good corrosion- and scale-inhibition performance; the corrosion inhibition acts mainly by suppressing the anodic reaction, while the scale inhibition is achieved through the synergistic action of coordination and lattice distortion.

  1. Nuclear level density predictions

    OpenAIRE

    Bucurescu Dorel; von Egidy Till

    2015-01-01

    Simple formulas depending only on nuclear masses were previously proposed for the parameters of the Back-Shifted Fermi Gas (BSFG) model and of the Constant Temperature (CT) model of the nuclear level density, respectively. They are now applied for the prediction of the level density parameters of all nuclei with available masses. Both masses from the new 2012 mass table and from different models are considered and the predictions are discussed in connection with nuclear regions most affected ...

  2. Predictive graph mining

    OpenAIRE

    Karwath, Andreas; De Raedt, Luc

    2004-01-01

    Graph mining approaches are extremely popular and effective in molecular databases. The vast majority of these approaches first derive interesting, i.e. frequent, patterns and then use these as features to build predictive models. Rather than building these models in a two step indirect way, the SMIREP system introduced in this paper, derives predictive rule models from molecular data directly. SMIREP combines the SMILES and SMARTS representation languages that are popular in computational ch...

  3. Operational Dust Prediction

    Science.gov (United States)

    Benedetti, Angela; Baldasano, Jose M.; Basart, Sara; Benincasa, Francesco; Boucher, Olivier; Brooks, Malcolm E.; Chen, Jen-Ping; Colarco, Peter R.; Gong, Sunlin; Huneeus, Nicolas; Jones, Luke; Lu, Sarah; Menut, Laurent; Morcrette, Jean-Jacques; Mulcahy, Jane; Nickovic, Slobodan; Garcia-Pando, Carlos P.; Reid, Jeffrey S.; Sekiyama, Thomas T.; Tanaka, Taichu Y.; Terradellas, Enric; Westphal, Douglas L.; Zhang, Xiao-Ye; Zhou, Chun-Hong

    2014-01-01

    Over the last few years, numerical prediction of dust aerosol concentration has become prominent at several research and operational weather centres due to growing interest from diverse stakeholders, such as solar energy plant managers, health professionals, aviation and military authorities and policymakers. Dust prediction in numerical weather prediction-type models faces a number of challenges owing to the complexity of the system. At the centre of the problem is the vast range of scales required to fully account for all of the physical processes related to dust. Another limiting factor is the paucity of suitable dust observations available for model evaluation and assimilation. This chapter discusses in detail numerical prediction of dust with examples from systems that are currently providing dust forecasts in near real-time or are part of international efforts to establish daily provision of dust forecasts based on multi-model ensembles. The various models are introduced and described, along with an overview of the importance of dust prediction activities and a historical perspective. Assimilation and evaluation aspects of dust prediction are also discussed.

  4. Experimental PVT property analyses for Athabasca bitumen

    Energy Technology Data Exchange (ETDEWEB)

    Ashrafi, Mohammad; Souraki, Yaser; Karimaie, Hassan; Torsaeter, Ole [Norwegian University of Science and Technology (Norway); Bjorkvik, Bard J.A. [SINTEF Petroleum Research (Norway)

    2011-07-01

    To study fluid behavior in a reservoir it is very important to obtain accurate and complete data on the rock system, fluid properties, and rock-fluid interactions. PVT properties are among the most critical data that reservoir engineers need to evaluate a reservoir. This paper presents an experimental study of several PVT properties of Athabasca bitumen. The viscosity of the Athabasca heavy crude was measured using a rotational viscometer; these viscosity data are a reliable input for simulation purposes. The Athabasca oil was characterized using gas chromatography analysis. The whole-sample molar mass was measured by cryoscopy at 534 g/mol, and density was also measured. Based on the experimental study, a formula was derived for predicting Athabasca bitumen density in the temperature and pressure range studied. The interfacial tension between oil and steam was measured using the pendant drop method and found to lie between 18 and 25 mN/m.

  5. Database-Driven Analyses of Astronomical Spectra

    Science.gov (United States)

    Cami, Jan

    2012-03-01

    Spectroscopy is one of the most powerful tools to study the physical properties and chemical composition of very diverse astrophysical environments. In principle, each nuclide has a unique set of spectral features; thus, establishing the presence of a specific material at astronomical distances requires no more than finding a laboratory spectrum of the right material that perfectly matches the astronomical observations. Once the presence of a substance is established, a careful analysis of the observational characteristics (wavelengths or frequencies, intensities, and line profiles) allows one to determine many physical parameters of the environment in which the substance resides, such as temperature, density, velocity, and so on. Because of this great diagnostic potential, ground-based and space-borne astronomical observatories often include instruments to carry out spectroscopic analyses of various celestial objects and events. Of particular interest is molecular spectroscopy at infrared wavelengths. From the spectroscopic point of view, molecules differ from atoms in their ability to vibrate and rotate, and quantum physics inevitably causes those motions to be quantized. The energies required to excite vibrations or rotations are such that vibrational transitions generally occur at infrared wavelengths, whereas pure rotational transitions typically occur at sub-mm wavelengths. Molecular vibration and rotation are coupled though, and thus at infrared wavelengths, one commonly observes a multitude of ro-vibrational transitions (see Figure 13.1). At lower spectral resolution, all transitions blend into one broad ro-vibrational molecular band. Since the transition frequencies depend on the nuclear masses, they shift measurably with the isotope. Molecular spectroscopy thus allows us to see a difference of one neutron in an atomic nucleus that is located at astronomical distances! Since the detection of the first interstellar molecules (the CH [21] and CN [14] radicals), more than 150 species have been detected in space, ranging in size from diatomic
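The isotope sensitivity described above follows from the reduced-mass dependence of vibrational frequencies: for a diatomic in the harmonic approximation, ω ∝ √(k/μ) with reduced mass μ = m₁m₂/(m₁+m₂). A small illustrative calculation, using integer mass numbers rather than exact atomic masses, shows the size of the shift for CO:

```python
import math

def mu(m1, m2):
    # Reduced mass of a diatomic molecule
    return m1 * m2 / (m1 + m2)

# Frequency ratio omega(13C16O) / omega(12C16O) = sqrt(mu12 / mu13)
ratio = math.sqrt(mu(12, 16) / mu(13, 16))
print(ratio)  # ~0.978: the heavier isotopologue vibrates about 2% slower
```

A 2% frequency shift is easily resolved spectroscopically, which is why isotopologues such as ¹³CO can be identified at astronomical distances.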

  6. High performance liquid chromatography in pharmaceutical analyses.

    Science.gov (United States)

    Nikolin, Branko; Imamović, Belma; Medanhodzić-Vuk, Saira; Sober, Miroslav

    2004-05-01

    In the pre-marketing testing and control of drugs over the last ten years, high performance liquid chromatography has replaced numerous spectroscopic methods and gas chromatography in quantitative and qualitative analysis. In the early period of HPLC application it was thought that it would become a complementary method to gas chromatography; today, however, it has nearly completely replaced gas chromatography in pharmaceutical analysis. The use of a liquid mobile phase, with the possibility of modifying its polarity during chromatography and making all other adjustments of the mobile phase depending on the characteristics of the substance being tested, is a great advantage in the separation process compared to other methods. The wide choice of stationary phases is a further factor enabling good separation. The separation column coupled to specific and sensitive detector systems (spectrofluorimetric, diode-array and electrochemical detectors), as well as hyphenated systems such as HPLC-MS and HPLC-NMR, are the basic elements on which the wide and effective application of the HPLC method is based. The purpose of high performance liquid chromatography (HPLC) analysis of any drug is to confirm the identity of the drug and provide quantitative results, and also to monitor the progress of therapy of a disease. Figure 1 shows a chromatogram obtained for the plasma of depressed patients 12 h before oral administration of dexamethasone. HPLC may also be used to further our understanding of normal and disease processes in the human body through biomedical and therapeutic research during pre-registration investigation of drugs. The analysis of drugs and metabolites in biological fluids, particularly plasma, serum or urine, is one of the most demanding but also one of the most common uses of high performance liquid chromatography. Blood, plasma or serum contains numerous endogenous

  7. Hungarian approach to LOCA analyses for SARs

    International Nuclear Information System (INIS)

    The Hungarian AGNES project (1992-94) was performed with the aim of reassessing the safety of the Paks NPP using state-of-the-art techniques. The project comprised, among others, a complete design basis accident (DBA) analysis. The major part of the thermal-hydraulic analyses was performed with the RELAP5/mod2.5/V251 code version using a conservative approach. In the medium-size LOCA calculations and the PTS studies, the six reactor cooling loops of the WWER-440/213 system were modelled by three loops (a single, a double and a triple loop). In the further developed version of the input model, used in small break LOCA and other DBA analyses, the six loops were modelled separately. The nodalisation schemes of the reactor vessel and the pressurizer, as well as of the single primary loops, are identical in the two input models. For the six-loop input model, the trip cards, general tables and control variables are generated using TROPIC 4.0, a RELAP5 object-oriented pre-processing interactive code received from TRACTEBEL Belgium. The six-loop input model for the WWER-440/V213 system was verified against data from two operational transients measured at Paks NPP. The analysis of large break LOCAs, where the combined simultaneous upper plenum and downcomer injection results in a rather complicated process during the reflooding phase, was carried out with the ATHLET mod 1.1 Cycle code version (developed by GRS) in the framework of a bilateral German-Hungarian cooperation agreement, using a two-loop (1+5) input model. Later, the application of best estimate methodology gained ground in our safety analysis activities. In recent years AEKI, in the framework of different projects such as the US CAMP activity, the EU PHARE and 5th Framework Programmes, as well as national projects supporting plant operation, has also performed many LOCA analyses, including primary-to-secondary leakages and feedwater and steam line breaks. These can serve as the preparation for a new DBA analysis project

  8. Seismic Soil-Structure Interaction Analyses of a Deeply Embedded Model Reactor – SASSI Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Nie J.; Braverman J.; Costantino, M.

    2013-10-31

    This report summarizes the SASSI analyses of a deeply embedded reactor model performed by BNL and CJC and Associates, as part of the seismic soil-structure interaction (SSI) simulation capability project for the NEAMS (Nuclear Energy Advanced Modeling and Simulation) Program of the Department of Energy. The SASSI analyses included three cases: 0.2 g, 0.5 g, and 0.9 g, all of which refer to nominal peak accelerations at the top of the bedrock. The analyses utilized the modified subtraction method (MSM) for performing the seismic SSI evaluations. Each case consisted of two analyses: input motion in one horizontal direction (X) and input motion in the vertical direction (Z), both of which utilized the same in-column input motion. Besides providing SASSI results for use in comparison with the time domain SSI results obtained using the DIABLO computer code, this study also leads to the recognition that the frequency-domain method should be modernized so that it can better serve its mission-critical role for analysis and design of nuclear power plants.

  9. Uncertainty analyses in systems modeling process

    International Nuclear Information System (INIS)

    In the context of Probabilistic Safety Assessment (PSA), uncertainty analyses play an important role. The objective is to ensure the qualitative evaluation and quantitative estimation of PSA Level 1 results (the core damage frequency, the accident sequence frequencies, the top event probabilities, etc.). An application that enables uncertainty calculations by propagating probability distributions through the fault tree model has been developed. It uses the method of moments and the Monte Carlo method. The application has been integrated into a computer program that allocates the reliability data, quantifies the human errors and labels the components in a unique way. The reliability data used at the Institute for Nuclear Research (INR) Pitesti for the Cernavoda Probabilistic Safety Evaluation (CPSE) studies is a generic database. Taking into account the status of the reliability database and the cases in which an error factor for a lognormally distributed failure rate is calculated, the database has been completed with an error factor for each record. This paper presents the module that performs the uncertainty analysis and an example of an uncertainty analysis at the fault tree level. (authors)
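The Monte Carlo propagation described above can be sketched for a toy fault tree. The tree structure (TOP = A OR (B AND C)), the medians, and the error factors below are illustrative stand-ins, not CPSE data; the error-factor convention assumed is EF = exp(1.645σ), i.e. the ratio of the 95th percentile to the median of a lognormal.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100_000  # Monte Carlo samples

def lognormal(median, ef):
    # Sample a lognormal given its median and error factor EF = p95/p50
    sigma = np.log(ef) / 1.645
    return rng.lognormal(np.log(median), sigma, N)

# Basic-event probabilities (illustrative medians and error factors)
pA = lognormal(1e-4, 3)
pB = lognormal(1e-2, 5)
pC = lognormal(2e-2, 5)

# Top event: A OR (B AND C), assuming independent basic events
top = pA + pB * pC - pA * pB * pC

print("mean:", top.mean())
print("median:", np.median(top))
print("95th percentile:", np.percentile(top, 95))
```

The spread between the median, mean, and 95th percentile of `top` is exactly the kind of output such an uncertainty module attaches to each fault-tree result.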

  10. Analyses of demand response in Denmark

    International Nuclear Information System (INIS)

    Due to characteristics of the power system, costs of producing electricity vary considerably over short time intervals. Yet, many consumers do not experience corresponding variations in the price they pay for consuming electricity. The topic of this report is: are consumers willing and able to respond to short-term variations in electricity prices, and if so, what is the social benefit of consumers doing so? Taking Denmark and the Nord Pool market as a case, the report focuses on what is known as short-term consumer flexibility or demand response in the electricity market. With focus on market efficiency, efficient allocation of resources and security of supply, the report describes demand response from a micro-economic perspective and provides empirical observations and case studies. The report aims at evaluating benefits from demand response. However, only elements contributing to an overall value are presented. In addition, the analyses are limited to benefits for society, and costs of obtaining demand response are not considered. (au)

  11. Causal mediation analyses with rank preserving models.

    Science.gov (United States)

    Have, Thomas R Ten; Joffe, Marshall M; Lynch, Kevin G; Brown, Gregory K; Maisto, Stephen A; Beck, Aaron T

    2007-09-01

    We present a linear rank preserving model (RPM) approach for analyzing mediation of a randomized baseline intervention's effect on a univariate follow-up outcome. Unlike standard mediation analyses, our approach does not assume that the mediating factor is also randomly assigned to individuals in addition to the randomized baseline intervention (i.e., sequential ignorability), but does make several structural interaction assumptions that currently are untestable. The G-estimation procedure for the proposed RPM represents an extension of the work on direct effects of randomized intervention effects for survival outcomes by Robins and Greenland (1994, Journal of the American Statistical Association 89, 737-749) and on intervention non-adherence by Ten Have et al. (2004, Journal of the American Statistical Association 99, 8-16). Simulations show good estimation and confidence interval performance by the proposed RPM approach under unmeasured confounding relative to the standard mediation approach, but poor performance under departures from the structural interaction assumptions. The trade-off between these assumptions is evaluated in the context of two suicide/depression intervention studies. PMID:17825022

  12. Phylogenomic Analyses Support Traditional Relationships within Cnidaria.

    Directory of Open Access Journals (Sweden)

    Felipe Zapata

    Cnidaria, the sister group to Bilateria, is a highly diverse group of animals in terms of morphology, lifecycles, ecology, and development. How this diversity originated and evolved is not well understood because phylogenetic relationships among major cnidarian lineages are unclear, and recent studies present contrasting phylogenetic hypotheses. Here, we use transcriptome data from 15 newly-sequenced species in combination with 26 publicly available genomes and transcriptomes to assess phylogenetic relationships among major cnidarian lineages. Phylogenetic analyses using different partition schemes and models of molecular evolution, as well as topology tests for alternative phylogenetic relationships, support the monophyly of Medusozoa, Anthozoa, Octocorallia, Hydrozoa, and a clade consisting of Staurozoa, Cubozoa, and Scyphozoa. Support for the monophyly of Hexacorallia is weak due to the equivocal position of Ceriantharia. Taken together, these results further resolve deep cnidarian relationships, largely support traditional phylogenetic views on relationships, and provide a historical framework for studying the evolutionary processes involved in one of the most ancient animal radiations.

  13. Isolation and analyses of axonal ribonucleoprotein complexes.

    Science.gov (United States)

    Doron-Mandel, Ella; Alber, Stefanie; Oses, Juan A; Medzihradszky, Katalin F; Burlingame, Alma L; Fainzilber, Mike; Twiss, Jeffery L; Lee, Seung Joon

    2016-01-01

    Cytoskeleton-dependent RNA transport and local translation in axons are gaining increased attention as key processes in the maintenance and functioning of neurons. Specific axonal transcripts have been found to play roles in many aspects of axonal physiology including axon guidance, axon survival, axon to soma communication, injury response and regeneration. This axonal transcriptome requires long-range transport that is achieved by motor proteins carrying transcripts as messenger ribonucleoprotein (mRNP) complexes along microtubules. Other than transport, the mRNP complex plays a major role in the generation, maintenance, and regulation of the axonal transcriptome. Identification of axonal RNA-binding proteins (RBPs) and analyses of the dynamics of their mRNPs are of high interest to the field. Here, we describe methods for the study of interactions between RNA and proteins in axons. First, we describe a protocol for identifying binding proteins for an RNA of interest by using RNA affinity chromatography. Subsequently, we discuss immunoprecipitation (IP) methods allowing the dissection of protein-RNA and protein-protein interactions in mRNPs under various physiological conditions. PMID:26794529

  14. Activation analyses for different fusion structural alloys

    International Nuclear Information System (INIS)

    The leading candidate structural materials, viz. the vanadium alloys, the nickel- or manganese-stabilized austenitic steels, and the ferritic steels, are analysed in terms of their induced activation in the TPSS fusion power reactor. The TPSS reactor has 1950 MW fusion power and inboard and outboard average neutron wall loadings of 3.75 and 5.35 MW/m2, respectively. The results show that, after one year of continuous operation, the vanadium alloys have the least radioactivity at reactor shutdown. The maximum difference between the induced radioactivity in the vanadium alloys and in the other iron-based alloys occurs at about 10 years after reactor shutdown. At this time, the total reactor radioactivity using the vanadium alloys is about two orders of magnitude less than with any other alloy. The difference is even larger in the first wall: the activation of a vanadium first wall is three orders of magnitude less than that of the other alloys. 2 refs., 7 figs

  15. Reliability Analyses of Groundwater Pollutant Transport

    Energy Technology Data Exchange (ETDEWEB)

    Dimakis, Panagiotis

    1997-12-31

    This thesis develops a probabilistic finite element model for the analysis of groundwater pollution problems. Two computer codes were developed: (1) one using the finite element technique to solve the two-dimensional steady state equations of groundwater flow and pollution transport, and (2) a first order reliability method code that can perform a probabilistic analysis of any given analytical or numerical equation. The two codes were connected into one model, PAGAP (Probability Analysis of Groundwater And Pollution). PAGAP can be used to obtain (1) the probability that the concentration at a given point at a given time will exceed a specified value, (2) the probability that the maximum concentration at a given point will exceed a specified value, and (3) the probability that the residence time at a given point will exceed a specified period. PAGAP can be used as a tool for assessment purposes and risk analyses, for instance assessing the efficiency of a proposed remediation technique or studying the effects of parameter distributions for a given problem (sensitivity study). The model has been applied to study the greatest self-sustained, precipitation-controlled aquifer in North Europe, which underlies Oslo's new major airport. 92 refs., 187 figs., 26 tabs.

  16. Consumption patterns and perception analyses of hangwa.

    Science.gov (United States)

    Kwock, Chang Geun; Lee, Min A; Park, So Hyun

    2012-03-01

    Hangwa is a traditional food which, given current consumption trends, needs marketing strategies to extend its consumption. Therefore, the purpose of this study was to analyze consumers' consumption patterns and perception of Hangwa in order to increase its consumption in the market. A questionnaire was sent to 250 consumers by e-mail from Oct 8∼23, 2009, and the data from 231 respondents were analyzed in this study. Descriptive statistics, paired samples t-tests, and importance-performance analyses were conducted using SPSS WIN 17.0. According to the results, Hangwa was purchased mainly 'as a present' (39.8%), and the main reasons for buying it were its 'traditional image' (33.3%) and 'taste' (22.5%). When the importance and performance of attributes considered in purchasing Hangwa were evaluated, performance was assessed to be lower than importance for all attributes. The attributes in the first quadrant, with high importance and high performance, were 'a sanitary process', 'a rigorous quality mark' and 'taste', all related to product quality. Those with high importance but low performance were 'popularization through advertisement', 'promotion through mass media', 'conversion of thought on traditional foods', 'a reasonable price' and 'a wide range of prices'. In conclusion, Hangwa manufacturers need to diversify products and extend the expiration date based on technology to promote consumption. In terms of price, Hangwa should become more available by lowering the price barrier for consumers who are sensitive to price. PMID:24471065
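Importance-performance analysis, the method used in the study above, classifies each attribute by comparing its mean importance and mean performance ratings with the grand means, placing it in one of four quadrants. The attribute names and ratings below are invented for illustration, and quadrant labels follow one common convention (naming varies between authors).

```python
# attribute -> (mean importance, mean performance), illustrative data
attrs = {
    "sanitary process": (4.6, 4.2),
    "taste":            (4.5, 4.0),
    "reasonable price": (4.3, 3.1),
    "package design":   (3.4, 3.8),
    "brand awareness":  (3.2, 2.9),
}

# Grand means split the importance-performance plane into four quadrants
imp_mean = sum(i for i, p in attrs.values()) / len(attrs)
perf_mean = sum(p for i, p in attrs.values()) / len(attrs)

labels = {
    (True, True):   "Quadrant I: keep up the good work",
    (True, False):  "Quadrant II: concentrate here",
    (False, True):  "Quadrant III: possible overkill",
    (False, False): "Quadrant IV: low priority",
}

for name, (i, p) in attrs.items():
    print(f"{name}: {labels[(i >= imp_mean, p >= perf_mean)]}")
```

Attributes such as 'a reasonable price' in the abstract (high importance, low performance) land in the "concentrate here" quadrant, which is precisely the managerial reading the study draws.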

  17. Kinematic gait analyses in healthy Golden Retrievers

    Directory of Open Access Journals (Sweden)

    Gabriela C.A. Silva

    2014-12-01

    Kinematic analysis concerns the relative movement between rigid bodies and finds application in the analysis of gait and other body movements; interpretation of its data, when changes are present, guides the choice of treatment to be instituted. The objective of this study was to characterize the gait of healthy Golden Retriever dogs to assist in the diagnosis and treatment of musculoskeletal disorders. We used a kinematic analysis system to analyse the gait of seven female Golden Retriever dogs, aged between 2 and 4 years, weighing 21.5 to 28 kg, clinically normal. Flexion and extension were described for the shoulder, elbow, carpal, hip, femorotibial and tarsal joints. The gait was characterized as lateral, and the hypothesis of normality was accepted for all variables except the stance phase of the hip and elbow, considering a confidence level of 95% (significance level α = 0.05). Variations were attributed to displacement of the markers during movement and to the duplicated number of evaluations. Kinematic analysis proved to be a consistent method for evaluating movement during canine gait, and the data can be used in the diagnosis and evaluation of canine gait, in comparison with other studies, and in the treatment of dogs with musculoskeletal disorders.

  18. Parametric analyses of fusion-fission systems

    International Nuclear Information System (INIS)

    After a short review of the nuclear reactions relevant to fusion-fission systems, the various types of blankets and characteristic model cases are presented. The fusion-fission system is modelled by its energy flow diagram. The system components and the system as a whole are characterized by 'component parameters' and 'system parameters', all of which are energy ratios. A cost estimate is given for the net energy delivered by the system, and a collection of formulas for the various energies flowing in the system, in terms of the thermal energy delivered by the fusion part, is presented. For the sensitivity analysis, four reference cases are defined which combine two plasma confinement schemes (mirror and tokamak) with two fissile fuel cycles (thorium-uranium and uranium-plutonium). The sensitivity of the critical plasma energy multiplication, of the circulating energy fraction, and of the energy cost with respect to changes of the component parameters is analysed. For the mirror case only superconducting magnets are considered, whereas the two tokamak cases take into account both superconducting and normal-conducting coils. A section presenting relations between the plasma energy multiplication and the confinement parameter nτE of driven tokamak plasmas is added for reference. The conclusions summarize the results which could be obtained within the framework of energy balances, cost estimates and their parametric sensitivities. This is supplemented by listing those issues which lie beyond this scope but have to be taken into account when assessments of fusion-fission systems are made. (orig.)

  19. ANALYSES AND INFLUENCES OF GLAZED BUILDING ENVELOPES

    Directory of Open Access Journals (Sweden)

    Sabina Jordan

    2011-01-01

    The article presents the results of an analytical study of the functioning of glazing at two different yet interacting levels: at the level of the building as a whole, and at that of glazing as a building element. At the building level, analyses were performed on a sample of high-rise business buildings in Slovenia, where the glazing's share of the building envelope was calculated and the proportion of shade provided by external blinds was estimated. It is shown that, especially in the case of modern buildings with large proportions of glazing and buildings with no shading devices, careful glazing design is needed, together with a sound knowledge of energy performance. In the second part of the article, the energy balance values relating to selected types of glazing are presented, including solar control glazing. The paper demonstrates the need for a holistic energy approach to glazing problems, as well as how different types of glazing can be methodically compared, thus improving the design of sustainability-orientated buildings.

  20. Statistical analyses of extreme food habits

    International Nuclear Information System (INIS)

    This report summarizes the results of the project ''Statistical analyses of extreme food habits'', which was commissioned by the National Office for Radiation Protection as a contribution to the amendment of the ''General Administrative Regulation to paragraph 45 of the Decree on Radiation Protection: determination of the radiation exposure caused by emission of radioactive substances from facilities of nuclear technology''. Its aim is to show whether the calculation of the radiation dose received via food intake by 95% of the population, as planned in a provisional draft, overestimates the true exposure, and if so, by how much. The existence of this overestimation could be proven, but its magnitude could only be estimated roughly. To identify its real extent, it is necessary to include the specific activities of the nuclides, which were not available for this investigation. In addition, the report shows how the food consumption amounts of different food groups influence each other and which connections between these amounts should be taken into account in order to estimate the radiation exposure as precisely as possible. (orig.)

  1. Achieving reasonable conservatism in nuclear safety analyses

    International Nuclear Information System (INIS)

    In the absence of methods that explicitly account for uncertainties, seeking reasonable conservatism in nuclear safety analyses can quickly lead to extreme conservatism. The rate of divergence to extreme conservatism is often beyond expert analysts' intuition, but can be demonstrated mathematically. Too much conservatism in addressing the safety of nuclear facilities is not beneficial to society. Using certain properties of lognormal distributions for the representation of input parameter uncertainties, example calculations for the risk and consequence of a fictitious facility accident scenario are presented. Results show that there are large differences between the calculated 95th percentiles and the extreme bounding values derived from using all input variables at their upper-bound estimates. Showing the relationship of the mean values to the key parameters of the output distributions, the paper concludes that the mean is the ideal candidate for representing the value of an uncertain parameter. The mean value is proposed as the metric that is consistent with the concept of reasonable conservatism in nuclear safety analysis, because its value increases towards higher percentiles of the underlying positively skewed distribution with increasing levels of uncertainty. Insensitivity of the results to the actual underlying distributions is briefly demonstrated. - Highlights: • Multiple conservative assumptions can quickly diverge into extreme conservatism. • Mathematics and attractive properties provide a basis for wide use of the lognormal distribution. • Mean values are ideal candidates for representation of parameter uncertainties. • Mean values are proposed as reasonably conservative estimates of parameter uncertainties.
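The divergence described above can be illustrated numerically. The following is a minimal sketch with invented parameter values (not the paper's fictitious facility scenario): three independent lognormal inputs, each with median 1.0 and geometric standard deviation 3.0, are multiplied, and the "extreme bounding" product of all upper-bound (95th-percentile) input values is compared with the true 95th percentile of the output:

```python
import math

# Hypothetical illustration (values invented, not from the paper):
# three independent lognormal inputs, each with median 1.0 and
# geometric standard deviation 3.0, multiplied to give the output.
mus = [0.0, 0.0, 0.0]               # log-space means (median 1.0 each)
sigmas = [math.log(3.0)] * 3        # log-space standard deviations

z95 = 1.645  # standard-normal 95th-percentile quantile

# The product of independent lognormals is lognormal:
# ln Y ~ N(sum(mu_i), sqrt(sum(sigma_i^2)))
mu_y = sum(mus)
sigma_y = math.sqrt(sum(s * s for s in sigmas))

mean_y = math.exp(mu_y + sigma_y ** 2 / 2)    # mean of the output
p95_y = math.exp(mu_y + z95 * sigma_y)        # true 95th percentile
# "Extreme" bounding value: every input taken at its own 95th percentile
bounding = math.exp(sum(m + z95 * s for m, s in zip(mus, sigmas)))

print(round(mean_y, 1), round(p95_y, 1), round(bounding, 1))
```

With these assumed inputs the bounding product comes out roughly ten times the true 95th percentile, illustrating how stacking conservative values diverges even for only three parameters.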

  2. WIND SPEED AND ENERGY POTENTIAL ANALYSES

    Directory of Open Access Journals (Sweden)

    A. TOKGÖZLÜ

    2013-01-01

    Full Text Available This paper provides a case study on the application of wavelet techniques to analyse wind speed and energy (a renewable and environmentally friendly energy source). Solar and wind are the main energy sources that give farmers the potential to transfer the kinetic energy captured by a windmill to pumping water, drying crops, heating greenhouses, rural electrification, or cooking. Larger wind turbines (over 1 MW) can pump enough water for small-scale irrigation. This study initiated a data-gathering process for wavelet analyses, different scale effects, and their role in wind speed and direction variations. The wind data gathering system is mounted at latitude 37° 50' N, longitude 30° 33' E, at a height of 1200 m above mean sea level, on a hill near the Süleyman Demirel University campus. Ten-minute average values of wind speed and direction at two levels (10 m and 30 m above ground level) were recorded by a data logger between July 2001 and February 2002. Wind speed values ranged between 0 m/s and 54 m/s. The annual mean speed is 4.5 m/s at 10 m above ground level. Prevalent wind

  3. PRECLOSURE CONSEQUENCE ANALYSES FOR LICENSE APPLICATION

    Energy Technology Data Exchange (ETDEWEB)

    S. Tsai

    2005-01-12

    Radiological consequence analyses are performed for potential releases from normal operations in surface and subsurface facilities and from Category 1 and Category 2 event sequences during the preclosure period. Surface releases from normal repository operations are primarily from radionuclides released by opening a transportation cask during dry transfer operations of spent nuclear fuel (SNF) in Dry Transfer Facility 1 (DTF 1), Dry Transfer Facility 2 (DTF 2), the Canister Handling Facility (CHF), or the Fuel Handling Facility (FHF). Subsurface releases from normal repository operations are from resuspension of waste package surface contamination and neutron activation of ventilated air and silica dust from host rock in the emplacement drifts. The purpose of this calculation is to demonstrate that the preclosure performance objectives, specified in 10 CFR 63.111(a) and 10 CFR 63.111(b), have been met for the proposed design and operations in the geologic repository operations area. Preclosure performance objectives are discussed in Section 6.2.3 and are summarized in Tables 1 and 2.

  4. Reclaiming the individual from Hofstede's ecological analysis--a 20-year odyssey: comment on Oyserman et al. (2002).

    Science.gov (United States)

    Bond, Michael Harris

    2002-01-01

    D. Oyserman, H. M. Coon, and M. Kemmelmeier (2002) challenge the stereotype that European Americans are more individualistic and less collectivistic than persons from most other ethnic groups. The author contends that this stereotype took firm empirical root with G. Hofstede's (1980) monumental publication identifying the United States as the most individualistic of his then 40 nations. This empirical designation arose because of challengeable decisions Hofstede made about the analysis of his data and the labeling of his dimensions. The conflation of concepts under the rubric of cultural individualism plus psychologists' unwarranted psychologizing of the construct then combined with Hofstede's empirical location of America to set a 20-year agenda for data collection. Oyserman et al. disentangle and organize this mass of studies, enabling the discipline of cross-cultural psychology to forge ahead in more productive directions, less reliant on previous assumptions and measures. PMID:11843548

  5. Trichodysplasia Spinulosa in a 20-Month-Old Girl With a Good Response to Topical Cidofovir 1%.

    Science.gov (United States)

    Santesteban, Raquel; Feito, Marta; Mayor, Ander; Beato, María; Ramos, Esther; de Lucas, Raúl

    2015-12-01

    Trichodysplasia spinulosa (TS) is a rare entity, characterized by a follicular digitate keratosis predominantly affecting the face and variable degrees of hair loss, most severely facial hair, that occurs in immunosuppressed individuals, and is considered to be a viral infection caused by a human polyomavirus, the "TS-associated polyomavirus." Histologically it is characterized by hair follicles with excessive inner root-sheath differentiation and intraepithelial viral inclusions. Correlation of these findings with clinical features is required for diagnosis. Treatment with antiviral agents appears to be the most effective. We report the occurrence of TS in a 20-month-old girl with multivisceral transplantation due to short-bowel syndrome secondary to intestinal atresia and gastroschisis. The patient was treated with cidofovir 1% cream, with significant improvement and without any adverse effects. We describe the youngest patient, to our knowledge, with TS. PMID:26620059

  6. Analyse textuelle des discours: Niveaux ou plans d'analyse

    Directory of Open Access Journals (Sweden)

    Jean-Michel Adam

    2012-12-01

    Full Text Available The article deals with the theory of Textual Discourse Analysis (ATD), based on a re-reading of the Brazilian translation of La linguistique textuelle: introduction à l'analyse textuelle des discours (Cortez, 2008). ATD is framed by three preliminary observations: text linguistics is one of the disciplines of discourse analysis; the text is the object of analysis of ATD; and as soon as there is a text, that is, recognition of the fact that a sequence of utterances forms a communicative whole, there is an effect of genericity, that is, the inscription of this sequence of utterances in a class of discourse. The theoretical model of ATD is illuminated through a re-reading of its schema 4, which represents eight levels of analysis. ATD is approached from the angle of a double requirement, the theoretical reasons as well as the methodological and didactic reasons that lead to these levels, and the five planes or levels of textual analysis are detailed and illustrated. Finally, parts of the work are taken up and expanded, with further analyses in which new theoretical aspects are detailed.

  7. Use of EBSD Data in Numerical Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Becker, R; Wiland, H

    2000-01-14

    obtained for comparison with the model predictions. More recent work has taken advantage of automated data collection on deformed specimens as a means of collecting detailed and spatially correlated data for model validation. Although it will not be discussed in detail here, another area in which EBSD data is having a great impact is recrystallization modeling. EBSD techniques can be used to collect data for quantitative microstructural analysis. These data can be used to infer growth kinetics of specific orientations, and this information can be synthesized into more accurate grain growth or recrystallization models. Another role which EBSD techniques may play is in determining initial structures for recrystallization models. A realistic starting structure is vital for evaluating the models, and attempts at predicting realistic structures with finite element simulations are not yet successful. As methodologies and equipment resolution continue to improve, it is possible that measured structures will serve as input for recrystallization models. Simulations have already been run using information obtained manually from a TEM.

  8. Prediction of Factors Determining Changes in Stability in Protein Mutants

    OpenAIRE

    Parthiban, Vijayarangakannan

    2006-01-01

    Analysing the factors behind protein stability is a key research topic in molecular biology and has direct implications for protein structure prediction and protein-protein docking solutions. Changes in protein stability upon point mutations were analysed using a distance-dependent pair potential, representing mainly through-space interactions, and a torsion angle potential, representing neighbouring effects, as a basic statistical mechanical setup for the analysis. The synergetic effect of accessible surface ...

  9. A 20-Year Follow-Up of the Harrington-O'Shea Career Decision-Making System

    Science.gov (United States)

    Harrington, Thomas F.

    2006-01-01

    The interest inventory of the Harrington-O'Shea Career Decision-Making System (T. F. Harrington & A. J. O'Shea, 1980, 1992, 2003) had hit rates of occupational status substantially exceeding chance expectations in the literature's first long-term predictive validity study since 1991. No significant gender differences were found for the Grade 10…

  10. Aircraft noise prediction

    Science.gov (United States)

    Filippone, Antonio

    2014-07-01

    This contribution addresses the state-of-the-art in the field of aircraft noise prediction, simulation and minimisation. The point of view taken in this context is that of comprehensive models that couple the various aircraft systems with the acoustic sources, the propagation and the flight trajectories. After an exhaustive review of the present predictive technologies in the relevant fields (airframe, propulsion, propagation, aircraft operations, trajectory optimisation), the paper addresses items for further research and development. Examples are shown for several airplanes, including the Airbus A319-100 (CFM engines), the Bombardier Dash8-Q400 (PW150 engines, Dowty R408 propellers) and the Boeing B737-800 (CFM engines). Predictions are done with the flight mechanics code FLIGHT. The transfer function between flight mechanics and the noise prediction is discussed in some detail, along with the numerical procedures for validation and verification. Some code-to-code comparisons are shown. It is contended that the field of aircraft noise prediction has not yet reached a sufficient level of maturity. In particular, some parametric effects cannot be investigated, issues of accuracy are not currently addressed, and validation standards are still lacking.

  11. Solar Cycle Prediction

    CERN Document Server

    Petrovay, K

    2010-01-01

    A review of solar cycle prediction methods and their performance is given, including forecasts for cycle 24 and focusing on aspects of the solar cycle prediction problem that have a bearing on dynamo theory. The scope of the review is further restricted to the issue of predicting the amplitude (and optionally the epoch) of an upcoming solar maximum no later than right after the start of the given cycle. Prediction methods form three main groups. Precursor methods rely on the value of some measure of solar activity or magnetism at a specified time to predict the amplitude of the following solar maximum. Their implicit assumption is that each numbered solar cycle is a consistent unit in itself, while solar activity seems to consist of a series of much less tightly intercorrelated individual cycles. Extrapolation methods, in contrast, are based on the premise that the physical process giving rise to the sunspot number record is statistically homogeneous, i.e., the mathematical regularities underlying its variati...

  12. Non-Statistical Methods of Analysing of Bankruptcy Risk

    Directory of Open Access Journals (Sweden)

    Pisula Tomasz

    2015-06-01

    Full Text Available The article focuses on assessing the effectiveness of a non-statistical approach to bankruptcy modelling in enterprises operating in the logistics sector. In order to describe the issue more comprehensively, the aforementioned prediction of the possible negative results of business operations was carried out for companies functioning in the Polish region of Podkarpacie, and in Slovakia. The bankruptcy predictors selected for the assessment of companies operating in the logistics sector included 28 financial indicators characterizing these enterprises in terms of their financial standing and management effectiveness. The purpose of the study was to identify factors (models describing the bankruptcy risk in enterprises in the context of their forecasting effectiveness in a one-year and two-year time horizon. In order to assess their practical applicability the models were carefully analysed and validated. The usefulness of the models was assessed in terms of their classification properties, and the capacity to accurately identify enterprises at risk of bankruptcy and healthy companies as well as proper calibration of the models to the data from training sample sets.

  13. Quantifying Fire Cycle from Dendroecological Records Using Survival Analyses

    Directory of Open Access Journals (Sweden)

    Dominic Cyr

    2016-06-01

    Full Text Available Quantifying fire regimes in the boreal forest ecosystem is crucial for understanding the past and present dynamics, as well as for predicting its future dynamics. Survival analyses have often been used to estimate the fire cycle in eastern Canada because they make it possible to take into account the censored information that is made prevalent by the typically long fire return intervals and the limited scope of the dendroecological methods that are used to quantify them. Here, we assess how the true length of the fire cycle, the short-term temporal variations in fire activity, and the sampling effort affect the accuracy and precision of estimates obtained from two types of parametric survival models, the Weibull and the exponential models, and one non-parametric model obtained with the Cox regression. Then, we apply those results in a case area located in eastern Canada. Our simulation experiment confirms some documented concerns regarding the detrimental effects of temporal variations in fire activity on parametric estimation of the fire cycle. Cox regressions appear to provide the most accurate and robust estimator, being by far the least affected by temporal variations in fire activity. The Cox-based estimate of the fire cycle for the last 300 years in the case study area is 229 years (CI95: 162–407, compared with the likely overestimated 319 years obtained with the commonly used exponential model.
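As a hedged illustration of the censoring issue discussed above (the interval data below are invented, not the paper's dendroecological records), the exponential, constant-hazard model has a closed-form maximum-likelihood estimator: the fire cycle is the total time at risk divided by the number of observed fires, so right-censored intervals still contribute exposure without contributing an event:

```python
# Invented right-censored fire-interval records (years, event_observed);
# not the paper's dendroecological data. Under the exponential
# (constant-hazard) model, the MLE of the mean fire return interval is
# total exposure time divided by the number of observed events.
records = [
    (120, True), (85, True), (300, False), (45, True),
    (250, False), (180, True), (310, False), (60, True),
]

exposure = sum(t for t, _ in records)                   # total time at risk
events = sum(1 for _, observed in records if observed)  # observed fires only

fire_cycle = exposure / events  # estimated mean fire return interval
print(round(fire_cycle, 1))     # -> 270.0
```

Dropping the censored intervals instead would shrink the numerator but not the denominator, biasing the fire cycle low, which is why censoring-aware estimators matter for long return intervals.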

  14. Effects of Anchor Bolts Failures in Steam Explosion Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Seung Hyun; Chang, Yoon-Suk [Kyung Hee University, Yongin (Korea, Republic of); Song, Sungchu; Cho, Yong-Jin [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2015-05-15

    A steam explosion may occur in a nuclear power plant through molten core-coolant interactions when the external reactor vessel cooling strategy fails. This phenomenon can threaten the integrity of the reactor cavity, penetration piping, and support structures. Even though extensive research has been performed to predict the influence of steam explosions, it remains one of the possible hazards due to the complexity of the physical phenomena and the environmental thermal-hydraulic conditions. A steam explosion can cause intensive and rapid heat transfer and lead to the formation of pressure waves and the production of missiles that may endanger the surrounding reactor cavity wall and associated components through the resulting dynamic effects. The goal of this research is to examine the structural integrity of RPV (Reactor Pressure Vessel) support structures and anchor bolts under typical ex-vessel steam explosion conditions through FE analyses. In particular, the influence of the failure of the anchor bolts connecting the RPV and the support structures was evaluated. In this paper, the influence on the RPV and support structure due to anchor bolt failure was evaluated under typical steam explosion conditions, and the following conclusions were derived. The highest maximum stresses were calculated at the support structures under the steam explosion condition with the SVF and without anchor bolt failure. None of the stress values exceeded the yield strengths. The displacements were high under anchor bolt failure conditions; however, the vertical movements of the major components were small compared to their overall dimensions.

  15. Review of Approximate Analyses of Sheet Forming Processes

    Science.gov (United States)

    Weiss, Matthias; Rolfe, Bernard; Yang, Chunhui; de Souza, Tim; Hodgson, Peter

    2011-08-01

    Approximate models are often used for the following purposes: • in on-line control systems of metal forming processes where calculation speed is critical; • to obtain quick, quantitative information on the magnitude of the main variables in the early stages of process design; • to illustrate the role of the major variables in the process; • as an initial check on numerical modelling; and • as a basis for quick calculations on processes in teaching and training packages. The models often share many similarities; for example, an arbitrary geometric assumption of deformation giving a simplified strain distribution, simple material property descriptions—such as an elastic, perfectly plastic law—and mathematical short cuts such as a linear approximation of a polynomial expression. In many cases, the output differs significantly from experiment and performance or efficiency factors are developed by experience to tune the models. In recent years, analytical models have been widely used at Deakin University in the design of experiments and equipment and as a pre-cursor to more detailed numerical analyses. Examples that are reviewed in this paper include deformation of sandwich material having a weak, elastic core, load prediction in deep drawing, bending of strip (particularly of ageing steel where kinking may occur), process analysis of low-pressure hydroforming of tubing, analysis of the rejection rates in stamping, and the determination of constitutive models by an inverse method applied to bending tests.

  16. Microstructural and compositional analyses of GaN-based nanostructures

    Energy Technology Data Exchange (ETDEWEB)

    Pretorius, Angelika; Mueller, Knut; Rosenauer, Andreas [Section Electron Microscopy, Institute of Solid State Physics, University of Bremen, Otto-Hahn-Allee 1, 28359 Bremen (Germany); Schmidt, Thomas; Falta, Jens [Section Surface Physics, Institute of Solid State Physics, University of Bremen, Otto-Hahn-Allee 1, 28359 Bremen (Germany); Aschenbrenner, Timo; Yamaguchi, Tomohiro; Dartsch, Heiko; Hommel, Detlef [Section Semiconductor Epitaxy, Institute of Solid State Physics, University of Bremen, Otto-Hahn-Allee 1, 28359 Bremen (Germany); Kuebel, Christian [Institute of Nanotechnology, Karlsruher Institute of Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany)

    2011-08-15

    Composition and microstructure of GaN-based island structures and distributed Bragg reflectors (DBRs) were investigated with transmission electron microscopy (TEM). We analysed free-standing InGaN islands and islands capped with GaN. Growth of the islands performed by molecular beam epitaxy (MBE) and metal organic vapour phase epitaxy (MOVPE) resulted in different microstructures. The islands grown by MBE were plastically relaxed. Cap layer deposition resulted in a rapid dissolution of the islands already at early stages of cap layer growth. These findings are confirmed by grazing-incidence X-ray diffraction (GIXRD). In contrast, the islands grown by MOVPE relax only elastically. Strain state analysis (SSA) revealed that the indium concentration increases towards the tips of the islands. For an application as quantum dots, the islands must be embedded into DBRs. Structure and composition of Al{sub y}Ga{sub 1-y}N/GaN Bragg reflectors on top of an AlGaN buffer layer and In{sub x}Al{sub 1-x}N/GaN Bragg reflectors on top of a GaN buffer layer were investigated. Specifically, structural defects such as threading dislocations (TDs) and inversion domains (IDs) were studied, and we investigated thicknesses, interfaces and interface roughnesses of the layers. As the peak reflectivities of the investigated DBRs do not reach the theoretical predictions, possible reasons are discussed. (Copyright copyright 2011 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  17. A quantitative approach to analysing cortisol response in the horse.

    Science.gov (United States)

    Ekstrand, C; Ingvast-Larsson, C; Olsén, L; Hedeland, M; Bondesson, U; Gabrielsson, J

    2016-06-01

    The cortisol response to glucocorticoid intervention has, in spite of several studies in horses, not been fully characterized with regard to the determinants of onset, intensity and duration of response. Therefore, dexamethasone and cortisol response data were collected in a study applying a constant rate infusion regimen of dexamethasone (0.17, 1.7 and 17 μg/kg) to six Standardbreds. Plasma was analysed for dexamethasone and cortisol concentrations using UHPLC-MS/MS. Dexamethasone displayed linear kinetics within the concentration range studied. A turnover model of oscillatory behaviour accurately mimicked cortisol data. The mean baseline concentration range was 34-57 μg/L, the fractional turnover rate 0.47-1.5 1/h, the amplitude parameter 6.8-24 μg/L, the maximum inhibitory capacity 0.77-0.97, the drug potency 6-65 ng/L and the sigmoidicity factor 0.7-30. This analysis provided a better understanding of the time course of the cortisol response in horses. This includes baseline variability within and between horses and determinants of the equilibrium concentration-response relationship. The analysis also challenged a protocol for a dexamethasone suppression test design and indicated future improvement to increase the predictability of the test. PMID:26542753

  18. Genome-wide analyses of small noncoding RNAs in streptococci

    Directory of Open Access Journals (Sweden)

    Nadja Patenge

    2015-05-01

    Full Text Available Streptococci represent a diverse group of Gram-positive bacteria, which colonize a wide range of hosts among animals and humans. Streptococcal species occur as commensal as well as pathogenic organisms. Many of the pathogenic species can cause severe, invasive infections in their hosts, leading to high morbidity and mortality. The consequence is tremendous suffering on the part of humans and livestock, besides the significant financial burden in the agricultural and healthcare sectors. An environmentally stimulated and tightly controlled expression of virulence factor genes is of fundamental importance for streptococcal pathogenicity. Bacterial small noncoding RNAs (sRNAs) modulate the expression of genes involved in stress response, sugar metabolism, surface composition, and other properties that are related to bacterial virulence. Even though the regulatory character is shared by this class of RNAs, variation on the molecular level results in a high diversity of functional mechanisms. The knowledge about the role of sRNAs in streptococci is still limited, but in recent years, genome-wide screens for sRNAs have been conducted in an increasing number of species. Bioinformatics prediction approaches have been employed as well as expression analyses by classical array techniques or next generation sequencing. This review will give an overview of whole genome screens for sRNAs in streptococci with a focus on describing the different methods and comparing their outcome considering sRNA conservation among species, functional similarities, and relevance for streptococcal infection.

  19. Mortality of atomic bomb survivors predicted from laboratory animals

    Science.gov (United States)

    Carnes, Bruce A.; Grahn, Douglas; Hoel, David

    2003-01-01

    Exposure, pathology and mortality data for mice, dogs and humans were examined to determine whether accurate interspecies predictions of radiation-induced mortality could be achieved. The analyses revealed that (1) days of life lost per unit dose can be estimated for a species even without information on radiation effects in that species, and (2) accurate predictions of age-specific radiation-induced mortality in beagles and the atomic bomb survivors can be obtained from a dose-response model for comparably exposed mice. These findings illustrate the value of comparative mortality analyses and the relevance of animal data to the study of human health effects.

  20. Analysing galaxy clustering for future experiments including the Dark Energy Survey

    OpenAIRE

    Nock, Kelly

    2010-01-01

    The use of Baryon Acoustic Oscillations (BAO) as a standard ruler in the 2-point galaxy clustering signal has proven to be an excellent probe of the cosmological expansion. With the abundance of good quality galaxy data predicted for future large sky surveys, the potential to conduct precision cosmology using clustering analyses is immense. Many of the next generation sky surveys, including the Dark Energy Survey (DES), the Panoramic Survey Telescope and Rapid Response System (PanStarrs), and...

  1. Seismic criteria studies and analyses. Quarterly progress report No. 3. [LMFBR

    Energy Technology Data Exchange (ETDEWEB)

    1975-01-03

    Information is presented concerning the extent to which vibratory motions at the subsurface foundation level might differ from motions at the ground surface and the effects of the various subsurface materials on the overall Clinch River Breeder Reactor site response; seismic analyses of LMFBR type reactors to establish analytical procedures for predicting structure stresses and deformations; and aspects of the current technology regarding the representation of energy losses in nuclear power plants as equivalent viscous damping.

  2. Evaluation of mixed dentition analyses in north Indian population: A comparative study

    OpenAIRE

    Ravi Kumar Goyal; Vijay P Sharma; Pradeep Tandon; Amit Nagar; Gyan P Singh

    2014-01-01

    Introduction: Mixed dentition regression equation analyses (Moyers, Tanaka-Johnston) are based on European populations, so the reliability of these methods for other populations is questionable. Materials and Methods: The present study was conducted on a total of 260 study models, in two phases. In the first phase, linear regression equations were derived. In the second phase, the actual values of the sum of the mesiodistal widths of the canine and the first and second premolars were compared with the predicte...
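The Tanaka-Johnston analysis mentioned above is a simple regression rule predicting the combined width of the unerupted canine and premolars in one quadrant from the four mandibular incisors. A minimal sketch follows; the 10.5 mm and 11.0 mm constants are the standard published Tanaka-Johnston values, while the input width is invented for illustration:

```python
# Tanaka-Johnston prediction: combined mesiodistal width (mm) of the
# canine and both premolars in one quadrant, estimated from the sum of
# the mesiodistal widths of the four mandibular incisors.
# The 10.5 / 11.0 mm constants are the standard published values;
# the example input below is invented.
def tanaka_johnston(sum_lower_incisors_mm: float, arch: str) -> float:
    half = sum_lower_incisors_mm / 2
    if arch == "mandibular":
        return 10.5 + half
    if arch == "maxillary":
        return 11.0 + half
    raise ValueError("arch must be 'mandibular' or 'maxillary'")

print(tanaka_johnston(23.0, "mandibular"))  # -> 22.0
```

Studies such as the one abstracted here refit the intercept and slope to local samples, which is why population-specific equations can outperform the original constants.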

  3. Analysing the Association of Leadership Style, Face-to-Face Communication, and Organizational Effectiveness

    OpenAIRE

    Vijai N. Giri; Tirumala Santra

    2008-01-01

    The present paper analyses the association of leadership styles, face-to-face (FtF) communication, and organizational effectiveness. Data were collected from 324 employees from various organizations in India. It was found that leadership styles significantly predicted organizational effectiveness. The transformational and transactional leadership styles were found to be positively correlated with organizational effectiveness, and the laissez-faire leadership style was found to be negat...

  4. K-West and K-East basin thermal analyses for dry conditions

    International Nuclear Information System (INIS)

    Detailed three-dimensional thermal analyses of the 100 K East and 100 K West basins were conducted to determine the peak fuel temperature for intact fuel in the event of a complete loss of water from the basins. Thermal models for the building, an array of fuel encapsulation canisters on the basin floor, and the fuel within a single canister are described, along with conservative predictions of the maximum expected temperatures for the loss-of-water event

  5. Mathematical Modeling of a SI Engine Cycle with Actual Air-Fuel Cycle Analyses

    OpenAIRE

    Perihan SEKMEN; Yakup SEKMEN

    2007-01-01

    The performance of an engine whose basic design parameters are known can be predicted with the assistance of simulation programs in less time, at lower cost, and with values close to actual results. Moreover, inadequate areas of the current model can guide future research, because the effects of design variables on engine performance can be determined beforehand. In this study, thermodynamic cycle and performance analyses were simulated for various engine speeds (1800, 2400 and 3600 1/min) and various excess air coef...

  6. Predictive Techniques for Spacecraft Cabin Air Quality Control

    Science.gov (United States)

    Perry, J. L.; Cromes, Scott D. (Technical Monitor)

    2001-01-01

    As assembly of the International Space Station (ISS) proceeds, predictive techniques are used to determine the best approach for handling a variety of cabin air quality challenges. These techniques use equipment offgassing data collected from each ISS module before flight to characterize the trace chemical contaminant load. Combined with crew metabolic loads, these data serve as input to a predictive model for assessing the capability of the onboard atmosphere revitalization systems to handle the overall trace contaminant load as station assembly progresses. The techniques for predicting in-flight air quality are summarized along with results from early ISS mission analyses. Results from ground-based analyses of in-flight air quality samples are compared to the predictions to demonstrate the technique's relative conservatism.
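A first-cut version of such a predictive technique can be sketched as a single well-mixed cabin mass balance. Everything below (the one-compartment model and all numeric values) is an assumption for illustration, not NASA's actual trace-contaminant model:

```python
# Assumed single well-mixed cabin mass balance (illustrative only, not
# NASA's actual trace-contaminant model). At steady state a constant
# generation rate G is balanced by removal through a device of flow Q
# and single-pass efficiency eta: C_ss = G / (Q * eta).
G = 0.5      # mg/h, assumed offgassing + metabolic generation rate
Q = 15.0     # m^3/h, assumed flow through the removal device
eta = 0.95   # assumed single-pass removal efficiency

C_ss = G / (Q * eta)   # steady-state concentration, mg/m^3
print(round(C_ss, 4))
```

The predicted steady-state concentration would then be compared against a spacecraft maximum allowable concentration for each contaminant; a prediction below the limit with margin is the conservative outcome the abstract refers to.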

  7. It's difficult, but important, to make negative predictions.

    Science.gov (United States)

    Williams, Richard V; Amberg, Alexander; Brigo, Alessandro; Coquin, Laurence; Giddings, Amanda; Glowienke, Susanne; Greene, Nigel; Jolly, Robert; Kemper, Ray; O'Leary-Steele, Catherine; Parenty, Alexis; Spirkl, Hans-Peter; Stalford, Susanne A; Weiner, Sandy K; Wichard, Joerg

    2016-04-01

    At the confluence of predictive and regulatory toxicologies, negative predictions may be the thin green line that prevents populations from being exposed to harm. Here, two novel approaches to making confident and robust negative in silico predictions for mutagenicity (as defined by the Ames test) have been evaluated. Analyses of 12 data sets containing >13,000 compounds, showed that negative predictivity is high (∼90%) for the best approach and features that either reduce the accuracy or certainty of negative predictions are identified as misclassified or unclassified respectively. However, negative predictivity remains high (and in excess of the prevalence of non-mutagens) even in the presence of these features, indicating that they are not flags for mutagenicity. PMID:26785392
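The quoted figures follow directly from a confusion matrix. A small sketch with invented counts (not the >13,000-compound data sets analysed in the paper) shows negative predictivity near 90% while exceeding the prevalence of non-mutagens:

```python
# Invented confusion-matrix counts (not the paper's data sets) for an
# in silico Ames mutagenicity model; non-mutagens are the "negative" class.
tp, fp, tn, fn = 900, 150, 2700, 300

# Negative predictivity: fraction of negative calls that are correct.
negative_predictivity = tn / (tn + fn)
# Prevalence of non-mutagens in the whole data set.
prevalence_non_mutagens = (tn + fp) / (tp + fp + tn + fn)

print(round(negative_predictivity, 2), round(prevalence_non_mutagens, 2))
```

The comparison against prevalence is the key check: a negative predictivity no better than the non-mutagen base rate would mean the model's negative calls carry no information.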

  8. Integrated Field Analyses of Thermal Springs

    Science.gov (United States)

    Shervais, K.; Young, B.; Ponce-Zepeda, M. M.; Rosove, S.

    2011-12-01

    A group of undergraduate researchers in the SURE internship offered by the Southern California Earthquake Center (SCEC) have examined thermal springs in southern Idaho and northern Utah, as well as mud volcanoes in the Salton Sea, California. We used an integrated approach to estimate the setting and maximum temperature, including water chemistry, iPad-based image and database management, microbiology, and gas analyses with a modified Giggenbach sampler. All springs were characterized using GISRoam (Cogent3D). We are performing geothermometry calculations as well as comparisons with temperature-gradient data while also analyzing biological samples. Analyses include water temperature, pH, electrical conductivity, and TDS measured in the field. Each sample is sealed, chilled, and delivered to a water lab within 12 hours. Temperatures are continuously monitored with Solinst Levelogger Juniors. Through a partnership with a local community college geology club, we receive results on a monthly basis and are able to process initial data earlier in order to evaluate data over a longer time span. The springs and mudpots contained microbial organisms, which were analyzed by single-colony isolation, polymerase chain reaction, and DNA sequencing, showing the impact of the organisms on the springs or vice versa. We will soon collect gas samples at sites that show signs of gas, using a hybrid of the Giggenbach method and our own methods. Drawing gas samples has proven a challenge; however, we devised a method to draw gas samples utilizing the Giggenbach flask, transferring samples to glass blood-sample tubes, replacing the NaOH in the Giggenbach flask, and evacuating it in the field for multiple samples using a vacuum pump. We also use a floating platform devised to carry and lower a levelogger, and an in-line fuel filter from a tractor to keep mud from contaminating the equipment. The use of raster
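
    The geothermometry calculations mentioned above are typically based on empirical solute geothermometers. As one concrete example (an assumption here; the abstract does not say which geothermometer the group used), the widely cited Fournier quartz geothermometer converts a dissolved-silica concentration into a reservoir temperature estimate:

```python
import math

def quartz_geothermometer(sio2_mg_per_kg):
    """Fournier (1977) quartz geothermometer, no steam loss:
    T(degC) = 1309 / (5.19 - log10([SiO2])) - 273.15, [SiO2] in mg/kg."""
    return 1309.0 / (5.19 - math.log10(sio2_mg_per_kg)) - 273.15

# 100 mg/kg dissolved silica corresponds to roughly 137 degC.
print(round(quartz_geothermometer(100.0), 1))
```

    Comparing such solute-based estimates with measured temperature gradients is one way to judge whether a spring has cooled or mixed on its way to the surface.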

  9. Essays on Earnings Predictability

    DEFF Research Database (Denmark)

    Bruun, Mark

    This dissertation addresses the prediction of corporate earnings. The thesis aims to examine whether the degree of precision in earnings forecasts can be increased by basing them on historical financial ratios. Furthermore, the intent of the dissertation is to analyze whether accounting standards...... forecasts are not more accurate than the simpler forecasts based on a historical timeseries of earnings. Secondly, the dissertation shows how accounting standards affect analysts’ earnings predictions. Accounting conservatism contributes to a more volatile earnings process, which lowers the accuracy of...... analysts’ earnings forecasts. Furthermore, the dissertation shows how the stock market’s reaction to the disclosure of information about corporate earnings depends on how well corporate earnings can be predicted. The dissertation indicates that the stock market’s reaction to the disclosure of earnings...

  10. Neurological abnormalities predict disability

    DEFF Research Database (Denmark)

    Poggesi, Anna; Gouw, Alida; van der Flier, Wiesje;

    2014-01-01

    To investigate the role of neurological abnormalities and magnetic resonance imaging (MRI) lesions in predicting global functional decline in a cohort of initially independent-living elderly subjects. The Leukoaraiosis And DISability (LADIS) Study, involving 11 European centres, was primarily aimed...... at evaluating age-related white matter changes (ARWMC) as an independent predictor of the transition to disability (according to Instrumental Activities of Daily Living scale) or death in independent elderly subjects that were followed up for 3 years. At baseline, a standardized neurological examination...... abnormality independently predicted transition to disability or death [HR (95 % CI) 1.53 (1.01-2.34)]. The hazard increased with increasing number of abnormalities. Among MRI lesions, only ARWMC of severe grade independently predicted disability or death [HR (95 % CI) 2.18 (1.37-3.48)]. In our cohort...

  11. Prediction model Perla

    International Nuclear Information System (INIS)

    Prediction model Perla is a tool for evaluating the ecological status of streams. It enables comparison with a standard, which is formed by a dataset of sites from the whole area of the Czech Republic that were influenced by human activity as little as possible. Eight variables were used for prediction (distance from source, elevation, stream width and depth, slope, substrate roughness, longitude, and latitude). All of them were statistically important for benthic communities. The results correspond not to ecoregions but rather to stream size (type). The indices EQItaxon, EQISi, EQIASPT and EQIH appear applicable for assessment using the prediction model and for differentiating natural and human stress. Limiting values of the indices for good ecological status are suggested. In contrast, the EQIEPT and EQIekoprof indices could be used only with difficulty. (authors)
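
    In predictive systems of this kind, EQI-type indices are observed-to-expected ratios: the model predicts a metric (for example, number of taxa) for a near-natural reference site, and the observed value is divided by that expectation. The sketch below shows the general idea; the exact definitions used by Perla are not given in the abstract, so this is an assumption.

```python
def eqi(observed, expected):
    """Ecological Quality Index as an observed/expected ratio.
    Values near 1 indicate near-reference condition; values well
    below 1 suggest human stress."""
    return observed / expected

# Hypothetical site: 18 taxa observed where the model expects 24.
print(eqi(18, 24))  # 0.75
```

    A "limiting value for good ecological status" is then simply a threshold on this ratio.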

  12. Permeability prediction in chalks

    DEFF Research Database (Denmark)

    Alam, Mohammad Monzurul; Fabricius, Ida Lykke; Prasad, Manika

    2011-01-01

    The velocity of elastic waves is the primary datum available for acquiring information about subsurface characteristics such as lithology and porosity. Cheap and quick (spatial coverage, ease of measurement) information about permeability can be achieved if sonic velocity is used for permeability...... prediction, so we have investigated the use of velocity data to predict permeability. The compressional velocity from wireline logs and core plugs of the chalk reservoir in the South Arne field, North Sea, has been used for this study. We compared various methods of permeability prediction from velocities....... The relationships between permeability and porosity from core data were first examined using Kozeny’s equation. The data were analyzed for any correlations to the specific surface of the grain, Sg, and to the hydraulic property defined as the flow zone indicator (FZI). These two methods use two...
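
    The two porosity-permeability relationships named in the abstract can be sketched as follows. The constants and unit conventions below follow the common formulations (Kozeny's factor c, and the FZI/RQI definitions of Amaefule et al. with permeability in mD); they are assumptions, not necessarily the exact forms used in the study.

```python
import math

def kozeny_permeability(phi, Sg, c=0.25):
    """Kozeny's equation, k = c * phi^3 / (Sg^2 * (1 - phi)^2),
    with porosity phi and specific surface per grain volume Sg.
    c is Kozeny's factor (~0.2-0.3); the units of k follow from 1/Sg^2."""
    return c * phi ** 3 / (Sg ** 2 * (1.0 - phi) ** 2)

def flow_zone_indicator(k_mD, phi):
    """FZI (micrometres) via the reservoir quality index:
    RQI = 0.0314 * sqrt(k / phi) with k in mD, phi_z = phi / (1 - phi),
    FZI = RQI / phi_z."""
    rqi = 0.0314 * math.sqrt(k_mD / phi)
    return rqi * (1.0 - phi) / phi
```

    Samples with similar FZI are taken to share a pore geometry (a flow unit), so a velocity-derived porosity together with an FZI class yields a permeability estimate.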

  13. Partially predictable chaos

    CERN Document Server

    Wernecke, Hendrik; Gros, Claudius

    2016-01-01

    For a chaotic system, pairs of initially nearby trajectories eventually become fully uncorrelated on the attracting set. This process of decorrelation splits into an initial separation characterized by the maximal Lyapunov exponent and a subsequent diffusive process on the chaotic attractor causing the final loss of predictability. The time scales of the two processes can be either of the same or of very different orders of magnitude. In the latter case the two trajectories linger within a finite but small distance (with respect to the overall size of the attractor) for exceedingly long times and therefore remain partially predictable. We introduce a 0-1 indicator for chaos capable of describing this scenario, arguing, in addition, that the chaotic closed braids found close to a period-doubling transition are generically partially predictable.
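
    The initial decorrelation rate, the maximal Lyapunov exponent, can be estimated numerically by averaging the log of the local stretching factor along an orbit. A minimal sketch for the logistic map at r = 4 (an assumed stand-in system with known exponent ln 2, not the systems studied in the paper):

```python
import math

def lyapunov_logistic(r=4.0, n=100_000, x0=0.3):
    """Estimate the maximal Lyapunov exponent of x -> r*x*(1-x)
    by averaging ln|f'(x)| = ln|r*(1 - 2x)| along the orbit."""
    x, s = x0, 0.0
    for _ in range(n):
        x = r * x * (1.0 - x)
        d = abs(r * (1.0 - 2.0 * x))
        s += math.log(max(d, 1e-300))  # guard against log(0)
    return s / n

# For r = 4 the exact value is ln 2, approximately 0.693.
```

    When this exponent is large but the diffusive spreading on the attractor is slow, the two time scales separate and the partially predictable regime described above appears.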

  14. Predicting the Sunspot Cycle

    Science.gov (United States)

    Hathaway, David H.

    2009-01-01

    The 11-year sunspot cycle was discovered by an amateur astronomer in 1844. Visual and photographic observations of sunspots have been made by both amateurs and professionals over the last 400 years. These observations provide key statistical information about the sunspot cycle that allows for predictions of future activity. However, sunspots and the sunspot cycle are magnetic in nature. For the last 100 years these magnetic measurements have been acquired and used exclusively by professional astronomers to gain new information about the nature of the solar activity cycle. Recently, magnetic dynamo models have evolved to the stage where they can assimilate past data and provide predictions. With the advent of the Internet and open data policies, amateurs now have equal access to the same data used by professionals and equal opportunities to contribute (but, alas, without pay). This talk will describe some of the more useful prediction techniques and reveal what they say about the intensity of the upcoming sunspot cycle.

  15. Epitope prediction methods

    DEFF Research Database (Denmark)

    Karosiene, Edita

    introduces the NetMHCIIpan-3.0 predictor based on artificial neural networks, which is capable of giving binding affinities to any human MHC class II molecule. Chapter 4 of this thesis gives an overview of bioinformatics tools developed by the Immunological Bioinformatics group at Center for Biological...... machine learning techniques. Several MHC class I binding prediction algorithms have been developed and due to their high accuracy they are used by many immunologists to facilitate the conventional experimental process of epitope discovery. However, the accuracy of these methods depends on data defining...... the MHC molecule in question, making it difficult for the non-expert end-user to choose the most suitable predictor. The first paper in this thesis presents a new, publicly available, consensus method for MHC class I predictions. The NetMHCcons predictor combines three state-of-the-art prediction...
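
    A consensus predictor such as NetMHCcons combines the outputs of several individual predictors. One simple combination scheme, shown here purely as an illustration (the abstract does not specify how NetMHCcons combines its constituent methods), is the median percentile rank:

```python
def consensus_rank(percentile_ranks):
    """Combine percentile ranks from several predictors by taking
    the median; a lower rank means a stronger predicted binder."""
    s = sorted(percentile_ranks)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

# Three hypothetical predictors ranking the same peptide:
print(consensus_rank([0.5, 2.0, 1.0]))  # 1.0
```

    The appeal of a consensus is robustness: a non-expert user need not decide which single predictor is best for a given MHC molecule.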

  16. Scorecard on weather predictions

    Science.gov (United States)

    Richman, Barbara T.

    No matter that several northern and eastern states were pelted by snow and sleet early in March; as far as long-term weather forecasters are concerned, winter ended on February 28. Now is the time to review their winter seasonal forecasts to determine how accurate the predictions issued at the start of winter were. The National Weather Service (NWS) predicted on November 27, 1981, that the winter season would bring colder-than-normal temperatures to the eastern half of the United States, while temperatures were expected to be higher than normal in the westernmost section (see Figure 1). The NWS made no prediction for the middle of the country, labeling the area ‘indeterminate,’ or having the same chance of experiencing above-normal temperatures as below-normal temperatures, explained Donald L. Gilman, chief of the NWS long-range forecasting group.

  17. On study design in neuroimaging heritability analyses

    Science.gov (United States)

    Koran, Mary Ellen; Li, Bo; Jahanshad, Neda; Thornton-Wells, Tricia A.; Glahn, David C.; Thompson, Paul M.; Blangero, John; Nichols, Thomas E.; Kochunov, Peter; Landman, Bennett A.

    2014-03-01

    Imaging genetics is an emerging methodology that combines genetic information with imaging-derived metrics to understand how genetic factors impact observable structural, functional, and quantitative phenotypes. Many of the most well-known genetic studies are based on Genome-Wide Association Studies (GWAS), which use large populations of related or unrelated individuals to associate traits and disorders with individual genetic factors. Merging imaging and genetics may potentially lead to improved power of association in GWAS because imaging traits may be more sensitive phenotypes, being closer to underlying genetic mechanisms, and their quantitative nature inherently increases power. We are developing SOLAR-ECLIPSE (SE) imaging genetics software which is capable of performing genetic analyses with both large-scale quantitative trait data and family structures of variable complexity. This program can estimate the contribution of genetic commonality among related subjects to a given phenotype, and essentially answer the question of whether or not the phenotype is heritable. This central factor of interest, heritability, offers bounds on the direct genetic influence over observed phenotypes. In order for a trait to be a good phenotype for GWAS, it must be heritable: at least some proportion of its variance must be due to genetic influences. A variety of family structures are commonly used for estimating heritability, yet the variability and biases for each as a function of the sample size are unknown. Herein, we investigate the ability of SOLAR to accurately estimate heritability models based on imaging data simulated using Monte Carlo methods implemented in R. We characterize the bias and the variability of heritability estimates from SOLAR as a function of sample size and pedigree structure (including twins, nuclear families, and nuclear families with grandparents).
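
    The quantity being estimated can itself be illustrated with a small Monte Carlo in the spirit of the abstract: simulate phenotypes with a known heritability, then recover it from family structure. The sketch below uses the classical twin design and Falconer's formula rather than SOLAR's variance-components machinery, and all numbers are illustrative assumptions.

```python
import math
import random

def pearson(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

def simulate_twin_pairs(n, h2, shared_g, rng):
    """Simulate n twin pairs: phenotype = genetic + environmental part,
    total variance 1, genetic variance h2; shared_g is the fraction of
    genetic variance common to both twins (1.0 for MZ, 0.5 for DZ)."""
    sg = math.sqrt(h2 * shared_g)          # shared genetic sd
    su = math.sqrt(h2 * (1.0 - shared_g))  # unshared genetic sd
    se = math.sqrt(1.0 - h2)               # environmental sd
    pairs = []
    for _ in range(n):
        g = rng.gauss(0.0, sg)
        pairs.append((g + rng.gauss(0.0, su) + rng.gauss(0.0, se),
                      g + rng.gauss(0.0, su) + rng.gauss(0.0, se)))
    return pairs

def falconer_h2(mz_pairs, dz_pairs):
    """Falconer's estimate: h2 = 2 * (r_MZ - r_DZ)."""
    r_mz = pearson(*zip(*mz_pairs))
    r_dz = pearson(*zip(*dz_pairs))
    return 2.0 * (r_mz - r_dz)

# Example: simulate a trait with h2 = 0.6 and recover it.
rng = random.Random(1)
mz = simulate_twin_pairs(20_000, 0.6, 1.0, rng)
dz = simulate_twin_pairs(20_000, 0.6, 0.5, rng)
print(round(falconer_h2(mz, dz), 2))
```

    Repeating such simulations across sample sizes and pedigree types is exactly how the bias and variability of an estimator can be characterized.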

  18. Preparation of biological samples for SIMS analyses

    International Nuclear Information System (INIS)

    Full text: For the first time at ANSTO, a program of SIMS analysis of biological samples was undertaken. This presentation will discuss how the wide variety of samples was prepared and the methods used to gain useful information from SIMS analysis. A lack of matrix-matched standards made quantification difficult, but the strength of SIMS lies in its ability to detect a wide range of stable isotopes with good spatial resolution. This makes the technique suitable for studying organisms that archive signature elements in their structure. Samples such as bivalve shells and crocodile osteoderms were vacuum-impregnated in resin to a size suitable for the SIMS sample holder. Polishing was followed by sputter coating with gold to alleviate charging of the sample during SIMS analysis. Some samples were introduced directly on the sample holder, either stuck to a glass slide or simply held in place with a spring and backing plate. The only treatment in this case was gold coating and degassing in a vacuum pumping station. The porous nature of materials such as leaves and stromatolites requires a period of time under vacuum to remove gases that could interfere with the ultra-high vacuum required for SIMS analysis. A calcite standard was used for comparison of oxygen isotopic ratios, but a matrix-matched standard was available only for metal analysis of coral skeletons. Otherwise, the calcium content of the material was assumed to be uniform and acted as an internal standard from which isotopic ratios of other elements could be determined. SIMS analysis of biological samples demonstrated that some matrices can reveal an archive of pollution histories. These samples require matrix-matched standards if the trends observed in the analyses are to be quantified.
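
    The internal-standard approach described above can be sketched simply: assuming a uniform, known calcium content, an element's concentration follows from its count-rate ratio to Ca and a relative sensitivity factor (RSF) calibrated on a matrix-matched standard. All numbers below are hypothetical.

```python
def concentration_via_internal_standard(i_elem, i_ca, rsf, ca_conc):
    """SIMS-style quantification against an internal standard:
    C_elem = (I_elem / I_Ca) * RSF * C_Ca, where RSF is the element's
    relative sensitivity factor versus Ca (from a matrix-matched
    standard) and C_Ca is the assumed-uniform calcium concentration."""
    return (i_elem / i_ca) * rsf * ca_conc

# Hypothetical: 500 counts of a trace metal vs 1e6 counts of Ca,
# RSF = 2.0, Ca content 40 wt% (approximately that of calcite).
print(concentration_via_internal_standard(500.0, 1e6, 2.0, 40.0))  # wt%
```

    Without a matrix-matched standard to fix the RSF, only relative trends, not absolute concentrations, can be reported, which is the limitation the abstract notes.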

  19. Pipeline for macro- and microarray analyses

    Directory of Open Access Journals (Sweden)

    R. Vicentini

    2007-05-01

    The pipeline for macro- and microarray analyses (PMmA) is a set of scripts with a web interface developed to analyze DNA array data generated by array image quantification software. PMmA is designed for use with single- or double-color array data and to work as a pipeline in five classes (data format, normalization, data analysis, clustering, and array maps). It can also be used as a plugin in the BioArray Software Environment, an open-source database for array analysis, or used in a local version of the web service. All scripts in PMmA were developed in the PERL programming language and statistical analysis functions were implemented in the R statistical language. Consequently, our package is platform-independent software. Our algorithms can correctly select almost 90% of the differentially expressed genes, showing a superior performance compared to other methods of analysis. The pipeline software has been applied to 1536 expressed sequence tags macroarray public data of sugarcane exposed to cold for 3 to 48 h. PMmA identified thirty cold-responsive genes previously unidentified in this public dataset. Fourteen genes were up-regulated, two had a variable expression and the other fourteen were down-regulated in the treatments. These new findings certainly were a consequence of using a superior statistical analysis approach, since the original study did not take into account the dependence of data variability on the average signal intensity of each gene. The web interface, supplementary information, and the package source code are available, free, to non-commercial users at http://ipe.cbmeg.unicamp.br/pub/PMmA.
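
    The intensity-dependence issue raised at the end of the abstract is commonly handled by working in MA coordinates (log-ratio M against mean log-intensity A) and then normalizing. A minimal sketch of that first pipeline stage, with a global median normalization standing in for PMmA's actual normalization classes (which the abstract does not detail):

```python
import math

def ma_transform(red, green):
    """MA transform for two-colour array data:
    M = log2(R/G) (expression log-ratio), A = 0.5*log2(R*G) (mean
    log-intensity); plotting M against A exposes intensity-dependent bias."""
    M = [math.log2(r / g) for r, g in zip(red, green)]
    A = [0.5 * math.log2(r * g) for r, g in zip(red, green)]
    return M, A

def median_normalize(M):
    """Global median normalization: shift M so the median log-ratio is 0,
    assuming most genes are not differentially expressed."""
    s = sorted(M)
    n = len(s)
    med = s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])
    return [m - med for m in M]
```

    Differential-expression calls made on normalized M values no longer conflate a gene's fold change with its absolute signal intensity.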

  20. Pipeline for macro- and microarray analyses.

    Science.gov (United States)

    Vicentini, R; Menossi, M

    2007-05-01

    The pipeline for macro- and microarray analyses (PMmA) is a set of scripts with a web interface developed to analyze DNA array data generated by array image quantification software. PMmA is designed for use with single- or double-color array data and to work as a pipeline in five classes (data format, normalization, data analysis, clustering, and array maps). It can also be used as a plugin in the BioArray Software Environment, an open-source database for array analysis, or used in a local version of the web service. All scripts in PMmA were developed in the PERL programming language and statistical analysis functions were implemented in the R statistical language. Consequently, our package is platform-independent software. Our algorithms can correctly select almost 90% of the differentially expressed genes, showing a superior performance compared to other methods of analysis. The pipeline software has been applied to 1536 expressed sequence tags macroarray public data of sugarcane exposed to cold for 3 to 48 h. PMmA identified thirty cold-responsive genes previously unidentified in this public dataset. Fourteen genes were up-regulated, two had a variable expression and the other fourteen were down-regulated in the treatments. These new findings certainly were a consequence of using a superior statistical analysis approach, since the original study did not take into account the dependence of data variability on the average signal intensity of each gene. The web interface, supplementary information, and the package source code are available, free, to non-commercial users at http://ipe.cbmeg.unicamp.br/pub/PMmA. PMID:17464422