WorldWideScience

Sample records for analyses predict a20

  1. Climate Prediction Center (CPC) US daily temperature analyses

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The U.S. daily temperature analyses are maps depicting various temperature quantities utilizing daily maximum and minimum temperature data across the US. Maps are...

  2. Analysing Twitter and web queries for flu trend prediction

    Science.gov (United States)

    2014-01-01

Background Social media platforms encourage people to share diverse aspects of their daily life. Among these, shared health-related information might be used to infer health status and incidence rates for specific conditions or symptoms. In this work, we present an infodemiology study that evaluates the use of Twitter messages and search engine query logs to estimate and predict the incidence rate of influenza-like illness in Portugal. Results Based on a manually classified dataset of 2704 tweets from Portugal, we selected a set of 650 textual features to train a Naïve Bayes classifier to identify tweets mentioning flu or flu-like illness or symptoms. We obtained a precision of 0.78 and an F-measure of 0.83, based on cross-validation over the complete annotated set. Furthermore, we trained a multiple linear regression model to estimate the health-monitoring data from the Influenzanet project, using as predictors the relative frequencies obtained from the tweet classification results and from query logs, and achieved a correlation ratio of 0.89 (p < 0.001). These classification and regression models were also applied to estimate the flu incidence in the following flu season, achieving a correlation of 0.72. Conclusions Previous studies addressing the estimation of disease incidence based on user-generated content have mostly focused on the English language. Our results further validate those studies and show that, by changing the initial steps of data preprocessing and feature extraction and selection, the proposed approaches can be adapted to other languages. Additionally, we investigated whether the predictive model created can be applied to data from the subsequent flu season. In this case, although the prediction result was good, an initial phase to adapt the regression model could be necessary to achieve more robust results. PMID:25077431
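The two-stage pipeline described above (a Naïve Bayes tweet classifier feeding a linear regression against incidence data) can be sketched as follows. This is a minimal illustration, not the study's implementation: the three-token vocabulary, token counts, weekly flu-tweet frequencies, and incidence series are all toy stand-ins for the Portuguese Twitter and Influenzanet data.

```python
import numpy as np

# --- Stage 1: multinomial Naive Bayes on token-count vectors -------------
X = np.array([[3, 0, 1], [2, 1, 0], [0, 3, 2], [1, 2, 3]])  # toy token counts
y = np.array([1, 1, 0, 0])                                   # 1 = flu tweet

def train_nb(X, y, alpha=1.0):
    priors, liks = {}, {}
    for c in np.unique(y):
        Xc = X[y == c]
        priors[c] = np.log(len(Xc) / len(X))
        counts = Xc.sum(axis=0) + alpha           # Laplace smoothing
        liks[c] = np.log(counts / counts.sum())
    return priors, liks

def predict_nb(priors, liks, x):
    scores = {c: priors[c] + (x * liks[c]).sum() for c in priors}
    return max(scores, key=scores.get)

priors, liks = train_nb(X, y)
label = predict_nb(priors, liks, np.array([2, 0, 1]))  # flu-like token counts

# --- Stage 2: linear regression of incidence on flu-tweet frequency -----
flu_freq = np.array([0.01, 0.03, 0.08, 0.05])    # weekly flu-tweet fraction
incidence = np.array([12.0, 30.0, 85.0, 50.0])   # cases per 100k (toy)
slope, intercept = np.polyfit(flu_freq, incidence, 1)
pred = slope * 0.06 + intercept                  # predicted incidence
```

In the study itself the regression used relative frequencies from both the classifier output and query logs as predictors; the sketch keeps a single predictor for brevity.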

  3. Analysing earthquake slip models with the spatial prediction comparison test

    KAUST Repository

    Zhang, L.

    2014-11-10

Earthquake rupture models inferred from inversions of geophysical and/or geodetic data exhibit remarkable variability due to uncertainties in modelling assumptions, the use of different inversion algorithms, or variations in data selection and data processing. A robust statistical comparison of different rupture models obtained for a single earthquake is needed to quantify the intra-event variability, both for benchmark exercises and for real earthquakes. The same approach may be useful to characterize (dis-)similarities in events that are typically grouped into a common class of events (e.g. moderate-size crustal strike-slip earthquakes or tsunamigenic large subduction earthquakes). For this purpose, we examine the performance of the spatial prediction comparison test (SPCT), a statistical test developed to compare spatial (random) fields by means of a chosen loss function that describes an error relation between a 2-D field (‘model’) and a reference model. We implement and calibrate the SPCT approach for a suite of synthetic 2-D slip distributions, generated as spatial random fields with various characteristics, and then apply the method to results of a benchmark inversion exercise with known solution. We find the SPCT to be sensitive to different spatial correlation lengths and different heterogeneity levels of the slip distributions. The SPCT approach proves to be a simple and effective tool for ranking the slip models with respect to a reference model.
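The core idea of the SPCT, comparing two fields through per-cell loss relative to a reference, can be sketched as below. This is only a naive illustration under a squared-error loss with synthetic fields: the real SPCT corrects the variance of the loss-differential statistic for spatial correlation, which the plain z-score here ignores.

```python
import numpy as np

rng = np.random.default_rng(0)
ref = rng.normal(1.0, 0.3, size=(32, 32))             # "true" slip field
model_a = ref + rng.normal(0, 0.05, size=ref.shape)   # close to reference
model_b = ref + rng.normal(0, 0.50, size=ref.shape)   # much noisier model

# Loss fields under a squared-error loss function
loss_a = (model_a - ref) ** 2
loss_b = (model_b - ref) ** 2

# Loss-differential field; the test asks whether its mean differs from zero.
d = loss_a - loss_b
# Naive z-statistic assuming independent cells (the real SPCT accounts
# for spatial dependence in d when estimating the variance).
z = d.mean() / (d.std(ddof=1) / np.sqrt(d.size))
```

A strongly negative z indicates that model_a incurs systematically lower loss than model_b with respect to the reference, which is the basis for ranking slip models.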

  4. Prediction formulas for individual opioid analgesic requirements based on genetic polymorphism analyses.

    Directory of Open Access Journals (Sweden)

    Kaori Yoshida

The analgesic efficacy of opioids is well known to vary widely among individuals, and various factors related to individual differences in opioid sensitivity have been identified. However, a prediction model to calculate appropriate opioid analgesic requirements has not yet been established. The present study sought to construct prediction formulas for individual opioid analgesic requirements based on genetic polymorphisms and clinical data from patients who underwent cosmetic orthognathic surgery, and to validate the utility of the prediction formulas in patients who underwent major open abdominal surgery. To construct the prediction formulas, we performed multiple linear regression analyses using data from subjects who underwent cosmetic orthognathic surgery. The dependent variable was 24-h postoperative or perioperative fentanyl use, and the independent variables were age, gender, height, weight, pain perception latencies (PPL), and genotype data of five single-nucleotide polymorphisms (SNPs). To examine the utility of the prediction formulas, we performed simple linear regression analyses using subjects who underwent major open abdominal surgery. Actual 24-h postoperative or perioperative analgesic use and the predicted values calculated using the multiple regression equations were incorporated as dependent and independent variables, respectively. Multiple linear regression analyses showed that four SNPs, PPL, and weight were retained as independent predictors of 24-h postoperative fentanyl use (R² = 0.145, P = 5.66 × 10⁻¹⁰), and two SNPs and weight were retained as independent predictors of perioperative fentanyl use (R² = 0.185, P = 1.99 × 10⁻¹⁵). Simple linear regression analyses showed that the predicted values were retained as an independent predictor of actual 24-h postoperative analgesic use (R² = 0.033, P = 0.030) and perioperative analgesic use (R² = 0.100, P = 1.09 × 10⁻⁴), respectively. We constructed prediction formulas for individual opioid analgesic requirements based on genetic polymorphism analyses.
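The modelling step above, fitting a multiple linear regression of analgesic use on SNP dosages and clinical covariates and reporting R², can be sketched with least squares. The predictors and the synthetic outcome below are hypothetical, sized to mimic the small R² values reported in the abstract, and stand in for the study's actual patient data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# Hypothetical predictors: two SNP genotype dosages (0/1/2) and body weight (kg)
snp1 = rng.integers(0, 3, n).astype(float)
snp2 = rng.integers(0, 3, n).astype(float)
weight = rng.normal(60, 8, n)

# Synthetic 24-h fentanyl use: weak signal plus large noise, so the fitted
# R^2 stays small, as in the reported models.
fentanyl = 200 + 15 * snp1 - 10 * snp2 + 2 * weight + rng.normal(0, 80, n)

# Design matrix with intercept, then ordinary least squares
X = np.column_stack([np.ones(n), snp1, snp2, weight])
beta, *_ = np.linalg.lstsq(X, fentanyl, rcond=None)

# Coefficient of determination R^2
pred = X @ beta
ss_res = ((fentanyl - pred) ** 2).sum()
ss_tot = ((fentanyl - fentanyl.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot
```

A "prediction formula" in the paper's sense is exactly the fitted `beta`: new patients' predictor values are plugged into `X @ beta` to estimate their analgesic requirement.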

  5. Predictability of the monthly North Atlantic Oscillation index based on fractal analyses and dynamic system theory

    Directory of Open Access Journals (Sweden)

    M. D. Martínez

    2010-03-01

The predictability of the monthly North Atlantic Oscillation (NAO) index is analysed from the point of view of different fractal concepts and dynamic system theory, such as lacunarity, rescaled range analysis (Hurst exponent) and the reconstruction theorem (embedding and correlation dimensions, Kolmogorov entropy and Lyapunov exponents). The main results point out evident signs of randomness and the necessity of stochastic models to represent the time evolution of the NAO index. The results also show that the monthly NAO index behaves as a white-noise Gaussian process. The high minimum number of nonlinear equations needed to describe the physical process governing the NAO index fluctuations is evidence of its complexity. A notable predictive instability is indicated by the positive Lyapunov exponents. Besides corroborating the complex time behaviour of the NAO index, the present results suggest that random Cantor sets would be an interesting tool to model lacunarity and the time evolution of the NAO index.
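The rescaled range (R/S) analysis mentioned above estimates the Hurst exponent as the log-log slope of the average R/S statistic against window size; a white-noise process, which is what the NAO index is found to resemble, gives an exponent near 0.5. The sketch below, with an arbitrary window list and a simulated white-noise series rather than the NAO record, shows the basic computation.

```python
import numpy as np

def hurst_rs(x, windows=(8, 16, 32, 64, 128)):
    """Estimate the Hurst exponent by rescaled-range (R/S) analysis."""
    rs, ns = [], []
    for n in windows:
        vals = []
        for start in range(0, len(x) - n + 1, n):
            seg = x[start:start + n]
            dev = np.cumsum(seg - seg.mean())
            r = dev.max() - dev.min()    # range of cumulative deviations
            s = seg.std(ddof=1)          # segment standard deviation
            if s > 0:
                vals.append(r / s)
        rs.append(np.mean(vals))
        ns.append(n)
    # Hurst exponent = slope of log(R/S) versus log(window size)
    slope, _ = np.polyfit(np.log(ns), np.log(rs), 1)
    return slope

rng = np.random.default_rng(42)
h = hurst_rs(rng.normal(size=4096))   # white noise: H should be near 0.5
```

Note that finite-window R/S estimates are slightly biased upward for short windows, so values modestly above 0.5 are still consistent with white noise.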

  6. A20 Inhibits β-Cell Apoptosis by Multiple Mechanisms and Predicts Residual β-Cell Function in Type 1 Diabetes

    DEFF Research Database (Denmark)

    Fukaya, Makiko; Brorsson, Caroline A; Meyerovich, Kira;

    2016-01-01

    Activation of the transcription factor nuclear factor kappa B (NFkB) contributes to β-cell death in type 1 diabetes (T1D). Genome-wide association studies have identified the gene TNF-induced protein 3 (TNFAIP3), encoding for the zinc finger protein A20, as a susceptibility locus for T1D. A20 res...

  7. Design and Antigenic Epitopes Prediction of a New Trial Recombinant Multiepitopic Rotaviral Vaccine: In Silico Analyses.

    Science.gov (United States)

    Jafarpour, Sima; Ayat, Hoda; Ahadi, Ali Mohammad

    2015-01-01

Rotavirus is the major etiologic factor of severe diarrheal disease. Natural infection provides protection against subsequent rotavirus infection and diarrhea. This research presents a new vaccine designed based on computational models. In this study, three types of epitopes are considered (linear, conformational, and combinational) in a proposed model protein. Several studies on rotavirus vaccines have shown that VP6 and VP4 proteins are good candidates for vaccine production. In the present study, a fusion protein was designed as a new generation of rotavirus vaccines by bioinformatics analyses. This model-based study using ABCpred, BCPREDS, Bcepred, and Ellipro web servers showed that the peptide presented in this article has the necessary properties to act as a vaccine. Prediction of linear B-cell epitopes of peptides is helpful to investigate whether these peptides are able to activate humoral immunity. PMID:25965449

  9. PASMet: a web-based platform for prediction, modelling and analyses of metabolic systems.

    Science.gov (United States)

    Sriyudthsak, Kansuporn; Mejia, Ramon Francisco; Arita, Masanori; Hirai, Masami Yokota

    2016-07-01

PASMet (Prediction, Analysis and Simulation of Metabolic networks) is a web-based platform for proposing and verifying mathematical models to understand the dynamics of metabolism. The advantages of PASMet include user-friendliness and accessibility, which enable biologists and biochemists to easily perform mathematical modelling. PASMet offers a series of user-functions to handle the time-series data of metabolite concentrations. The functions are organised into four steps: (i) Prediction of a probable metabolic pathway and its regulation; (ii) Construction of mathematical models; (iii) Simulation of metabolic behaviours; and (iv) Analysis of metabolic system characteristics. Each function contains various statistical and mathematical methods that can be used independently. Users who may not have enough knowledge of computing or programming can easily and quickly analyse their local data without software downloads, updates or installations. Users only need to upload their files in comma-separated values (CSV) format or enter their model equations directly into the website. Once the time-series data or mathematical equations are uploaded, PASMet automatically performs computation on server-side. Then, users can interactively view their results and directly download them to their local computers. PASMet is freely available with no login requirement at http://pasmet.riken.jp/ from major web browsers on Windows, Mac and Linux operating systems. PMID:27174940

  10. Prevalence and Predictive Factors of Sexual Dysfunction in Iranian Women: Univariate and Multivariate Logistic Regression Analyses

    Science.gov (United States)

    Direkvand-Moghadam, Ashraf; Suhrabi, Zainab; Akbari, Malihe

    2016-01-01

Background Female sexual dysfunction, which can occur during any stage of normal sexual activity, is a serious condition for individuals and couples. The present study aimed to determine the prevalence and predictive factors of female sexual dysfunction in women referred to health centers in Ilam, western Iran, in 2014. Methods In the present cross-sectional study, 444 women who attended health centers in Ilam were enrolled from May to September 2014. Participants were selected according to the simple random sampling method. Univariate and multivariate logistic regression analyses were used to predict the risk factors of female sexual dysfunction. Differences with an alpha error of 0.05 were regarded as statistically significant. Results Overall, 75.9% of the study population exhibited sexual dysfunction. Univariate logistic regression analysis demonstrated that there was a significant association between female sexual dysfunction and age, menarche age, gravidity, parity, and education (P < 0.05). Conclusion A large proportion of Iranian women suffer from sexual dysfunction. A lack of awareness of Iranian women's sexual pleasure and formal training on sexual function and its influencing factors, such as menarche age, gravidity, and level of education, may lead to a high prevalence of female sexual dysfunction. PMID:27688863
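The univariate versus multivariate logistic regression comparison described above can be sketched directly in numpy, fitting by gradient descent rather than a statistics package. Everything here is simulated: the two predictors (age, parity), the effect sizes, and the outcome are hypothetical stand-ins for the study's survey data.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
age = rng.normal(35, 8, n)
parity = rng.integers(0, 5, n).astype(float)
# Synthetic outcome: dysfunction risk rises with age and parity (assumed)
logit = -7 + 0.15 * age + 0.4 * parity
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

def fit_logistic(X, y, lr=0.05, steps=5000):
    """Plain gradient-descent logistic regression; returns [intercept, coefs]."""
    Xs = (X - X.mean(0)) / X.std(0)            # standardize for stable descent
    Xb = np.column_stack([np.ones(len(y)), Xs])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)      # gradient of the log-loss
    return w

w_uni = fit_logistic(age[:, None], y)                       # univariate: age only
w_multi = fit_logistic(np.column_stack([age, parity]), y)   # multivariate
```

Comparing `w_uni` with the corresponding entry of `w_multi` shows how a predictor's estimated effect changes once other covariates are adjusted for, which is the point of running both analyses.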
  12. Prediction of metabolisable energy value of broiler diets and water excretion from dietary chemical analyses.

    Science.gov (United States)

    Carré, B; Lessire, M; Juin, H

    2013-08-01

Thirty different pelleted diets were given to broilers (8/diet) for in vivo measurements of dietary metabolisable energy (ME) value and digestibilities of proteins, lipids, starch and sugars from day 27 to day 31, with ad libitum feeding and total collection of excreta. Water excretion was also measured. Amino acid formulation of diets was done on the basis of ratios to crude proteins. Mean in vivo apparent ME values corrected to zero nitrogen retention (AMEn) were always lower than the AMEn values calculated for adult cockerels using prediction equations from the literature based on the chemical analyses of diets. The difference between mean in vivo AMEn values and these calculated AMEn values increased linearly with increasing amount of wheat in diets (P = 0.0001). Mean digestibilities of proteins, lipids and starch were negatively related to wheat introduction (P = 0.0001). The correlations between mean in vivo AMEn values and diet analytical parameters were the highest with fibre-related parameters, such as water-insoluble cell-walls (WICW) (r = -0.91) or Real Applied Viscosity (RAV) (r = -0.77). Thirteen multiple regression equations relating mean in vivo AMEn values to dietary analytical data were calculated, with R² values ranging from 0.859 to 0.966 (P = 0.0001). The highest R² values were obtained when the RAV parameter was included in independent variables. The direct regression equations obtained with available components (proteins, lipids, starch, sucrose and oligosaccharides) and the indirect regression equations obtained with WICW and ash parameters showed similar R² values. Direct or indirect theoretical equations predicting AMEn values were established using the overall mean in vivo digestibility values. The principle of indirect equations was based on the assumption that WICW and ashes act as diluters. Addition of RAV or wheat content in variables improved the accuracy of theoretical equations. Efficiencies of theoretical equations for predicting AMEn

  13. Analyses of Optimal Embedding Dimension and Delay for Local Linear Prediction Model

    Institute of Scientific and Technical Information of China (English)

    MENG Qing-Fang; PENG Yu-Hua; LIU Yun-Xia; SUN Wei-Feng

    2007-01-01

In the reconstructed phase space, a novel local linear prediction model is proposed to predict chaotic time series. The parameters of the proposed model take values that differ from those of the phase space reconstruction. We propose a criterion based on prediction error to determine the optimal parameters of the proposed model. The simulation results show that the proposed model can effectively make one-step and multi-step predictions for chaotic time series, and that the one-step and multi-step prediction accuracy of the proposed model is superior to that of the traditional local linear prediction model.
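A traditional local linear predictor of the kind discussed above works by embedding the series in a delay phase space, finding the nearest neighbours of the current state, and fitting a linear map from those neighbours to their successors. The sketch below demonstrates one-step prediction on the chaotic logistic map; the embedding dimension, delay, and neighbour count are illustrative choices, not the optimal parameters the paper's criterion would select.

```python
import numpy as np

# Generate a chaotic series from the logistic map x_{t+1} = 4 x_t (1 - x_t)
x = np.empty(600)
x[0] = 0.3
for t in range(599):
    x[t + 1] = 4 * x[t] * (1 - x[t])

m = 2                                  # embedding dimension (delay = 1)
N = len(x) - m                         # delay vectors that have a known successor
emb = np.column_stack([x[i:i + N] for i in range(m)])   # rows (x_t, x_{t+1})
target = x[m:m + N]                                     # successor x_{t+2}

def predict(query, k=8):
    """Fit a local linear map on the k nearest neighbours of the query state."""
    dist = np.linalg.norm(emb - query, axis=1)
    idx = np.argsort(dist)[:k]
    A = np.column_stack([np.ones(k), emb[idx]])
    coef, *_ = np.linalg.lstsq(A, target[idx], rcond=None)
    return coef[0] + coef[1:] @ query

q = np.array([x[-2], x[-1]])           # latest reconstructed state
pred = predict(q)                      # one-step prediction
true_next = 4 * x[-1] * (1 - x[-1])    # exact next value of the map
err = abs(pred - true_next)
```

Multi-step prediction follows by feeding each prediction back in as the newest coordinate of the next query state.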

  14. Analysing the Relevance of Experience Partitions to the Prediction of Players’ Self-Reports of Affect

    DEFF Research Database (Denmark)

    Martínez, Héctor Pérez; Yannakakis, Georgios N.

    2011-01-01

A common practice in modeling affect from physiological signals consists of reducing the signals to a set of statistical features that feed predictors of self-reported emotions. This paper analyses the impact of the various time-windows used for the extraction of physiological features on the accuracy of those predictors.

  15. Measuring Usable Knowledge: Teachers' Analyses of Mathematics Classroom Videos Predict Teaching Quality and Student Learning

    Science.gov (United States)

    Kersting, Nicole B.; Givvin, Karen B.; Thompson, Belinda J.; Santagata, Rossella; Stigler, James W.

    2012-01-01

    This study explores the relationships between teacher knowledge, teaching practice, and student learning in mathematics. It extends previous work that developed and evaluated an innovative approach to assessing teacher knowledge based on teachers' analyses of classroom video clips. Teachers watched and commented on 13 fraction clips. These written…

  16. Map on predicted deposition of Cs-137 in Spanish soils from geostatistical analyses

    International Nuclear Information System (INIS)

The knowledge of the distribution of 137Cs deposition over Spanish mainland soils, along with geographical, physical and morphological terrain information, enables us to know the 137Cs background content in soil. This could be useful as a tool in a hypothetical situation of an accident involving a radioactive discharge, or in soil erosion studies. A Geographic Information System (GIS) allows the gathering of all the mentioned information. In this work, gamma measurements of 137Cs on 34 Spanish mainland soils, rainfall data taken from 778 weather stations, soil types, and geographical and physical terrain information were input into a GIS. Geostatistical techniques were applied to interpolate values of 137Cs activity at unsampled places, obtaining prediction maps of 137Cs deposition. Up to now, geostatistical methods have been used to model spatial continuity of data. Through semivariance and cross-covariance functions the spatial correlation of such data can be studied and described. Ordinary and simple kriging techniques were carried out to map spatial patterns of 137Cs deposition, and ordinary and simple co-kriging were used to improve the prediction map through a second, related variable: rainfall. To choose the best prediction map of 137Cs deposition, the spatial dependence of the variable, the correlation coefficient and the prediction errors were evaluated using the different models previously mentioned. The best result for the 137Cs deposition map was obtained when applying the co-kriging techniques. Highlights: 137Cs activity data for Spanish soils were implemented in a GIS. Prediction maps of 137Cs fallout were produced with kriging techniques. More accurate prediction surfaces were obtained using co-kriging techniques. Rainfall was the secondary variable used in the co-kriging technique.
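The kriging interpolation at the heart of the work above can be sketched in its simplest form. This shows simple kriging only (known stationary mean, assumed covariance model); the study used ordinary kriging and co-kriging with fitted semivariograms, and the coordinates, activity values, mean, and exponential covariance parameters below are all hypothetical.

```python
import numpy as np

# Four sampled locations (arbitrary units) and their deposition values
obs_xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
obs_val = np.array([5.0, 7.0, 6.0, 8.0])     # hypothetical 137Cs activity
mean = 6.5                                    # assumed known stationary mean

def cov(h, sill=1.0, rang=1.5):
    """Exponential covariance model as a function of separation distance h."""
    return sill * np.exp(-h / rang)

# Covariance matrix among observations, and vector to the target point
H = np.linalg.norm(obs_xy[:, None] - obs_xy[None, :], axis=2)
target = np.array([0.5, 0.5])                 # unsampled prediction location
k = cov(np.linalg.norm(obs_xy - target, axis=1))

w = np.linalg.solve(cov(H), k)                # simple-kriging weights
estimate = mean + w @ (obs_val - mean)        # kriged estimate at the target
```

Because the target sits at the centre of a symmetric sample configuration, the weights are equal and the estimate reduces to the mean; co-kriging extends the same linear system with cross-covariances to a secondary variable such as rainfall.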

  17. Finite Element Creep Damage Analyses and Life Prediction of P91 Pipe Containing Local Wall Thinning Defect

    Science.gov (United States)

    Xue, Jilin; Zhou, Changyu

    2016-03-01

    Creep continuum damage finite element (FE) analyses were performed for P91 steel pipe containing local wall thinning (LWT) defect subjected to monotonic internal pressure, monotonic bending moment and combined internal pressure and bending moment by orthogonal experimental design method. The creep damage lives of pipe containing LWT defect under different load conditions were obtained. Then, the creep damage life formulas were regressed based on the creep damage life results from FE method. At the same time a skeletal point rupture stress was found and used for life prediction which was compared with creep damage lives obtained by continuum damage analyses. From the results, the failure lives of pipe containing LWT defect can be obtained accurately by using skeletal point rupture stress method. Finally, the influence of LWT defect geometry was analysed, which indicated that relative defect depth was the most significant factor for creep damage lives of pipe containing LWT defect.

  18. Aeromechanics and Aeroacoustics Predictions of the Boeing-SMART Rotor Using Coupled-CFD/CSD Analyses

    Science.gov (United States)

    Bain, Jeremy; Sim, Ben W.; Sankar, Lakshmi; Brentner, Ken

    2010-01-01

    This paper will highlight helicopter aeromechanics and aeroacoustics prediction capabilities developed by Georgia Institute of Technology, the Pennsylvania State University, and Northern Arizona University under the Helicopter Quieting Program (HQP) sponsored by the Tactical Technology Office of the Defense Advanced Research Projects Agency (DARPA). First initiated in 2004, the goal of the HQP was to develop high fidelity, state-of-the-art computational tools for designing advanced helicopter rotors with reduced acoustic perceptibility and enhanced performance. A critical step towards achieving this objective is the development of rotorcraft prediction codes capable of assessing a wide range of helicopter configurations and operations for future rotorcraft designs. This includes novel next-generation rotor systems that incorporate innovative passive and/or active elements to meet future challenging military performance and survivability goals.

  19. Computational Prediction and Biochemical Analyses of New Inverse Agonists for the CB1 Receptor.

    Science.gov (United States)

    Scott, Caitlin E; Ahn, Kwang H; Graf, Steven T; Goddard, William A; Kendall, Debra A; Abrol, Ravinder

    2016-01-25

Human cannabinoid type 1 (CB1) G-protein coupled receptor is a potential therapeutic target for obesity. The previously predicted and experimentally validated ensemble of ligand-free conformations of CB1 [Scott, C. E. et al. Protein Sci. 2013, 22, 101-113; Ahn, K. H. et al. Proteins 2013, 81, 1304-1317] are used here to predict the binding sites for known CB1-selective inverse agonists including rimonabant and its seven known derivatives. This binding pocket, which differs significantly from previously published models, is used to identify 16 novel compounds expected to be CB1 inverse agonists by exploiting potential new interactions. We show experimentally that two of these compounds exhibit inverse agonist properties including inhibition of basal and agonist-induced G-protein coupling activity, as well as an enhanced level of CB1 cell surface localization. This demonstrates the utility of using the predicted binding sites for an ensemble of CB1 receptor structures for designing new CB1 inverse agonists.

  20. Accuracy of Fall Prediction in Parkinson Disease: Six-Month and 12-Month Prospective Analyses

    Directory of Open Access Journals (Sweden)

    Ryan P. Duncan

    2012-01-01

Introduction. We analyzed the ability of four balance assessments to predict falls in people with Parkinson Disease (PD) prospectively over six and 12 months. Materials and Methods. The BESTest, Mini-BESTest, Functional Gait Assessment (FGA), and Berg Balance Scale (BBS) were administered to 80 participants with idiopathic PD at baseline. Falls were then tracked for 12 months. Ability of each test to predict falls at six and 12 months was assessed using ROC curves and likelihood ratios (LR). Results. Twenty-seven percent of the sample had fallen at six months, and 32% of the sample had fallen at 12 months. At six months, areas under the ROC curve (AUC) for the tests ranged from 0.80 (FGA) to 0.89 (BESTest), with LR+ of 3.4 (FGA) to 5.8 (BESTest). At 12 months, AUCs ranged from 0.68 (BESTest, BBS) to 0.77 (Mini-BESTest), with LR+ of 1.8 (BESTest) to 2.4 (BBS, FGA). Discussion. The various balance tests were effective in predicting falls at six months. All tests were relatively ineffective at 12 months. Conclusion. This pilot study suggests that people with PD should be assessed biannually for fall risk.
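The AUC and positive likelihood ratio (LR+) statistics reported above can be computed as sketched below. The balance-test scores are simulated stand-ins (lower score = worse balance), and the cutoff is an arbitrary illustration, not one of the study's published thresholds.

```python
import numpy as np

rng = np.random.default_rng(3)
fallers = rng.normal(40, 8, 25)         # hypothetical scores of fallers
nonfallers = rng.normal(55, 8, 60)      # hypothetical scores of non-fallers

def auc(pos, neg):
    """AUC via the rank-sum (Mann-Whitney) formulation. Here a LOWER
    score indicates higher fall risk, so count pairs where neg > pos."""
    greater = neg[:, None] > pos[None, :]
    ties = neg[:, None] == pos[None, :]
    return (greater.sum() + 0.5 * ties.sum()) / (len(pos) * len(neg))

area = auc(fallers, nonfallers)

# Likelihood ratios at one illustrative cutoff: "at risk" if score < cutoff
cutoff = 48.0
sens = (fallers < cutoff).mean()        # sensitivity
spec = (nonfallers >= cutoff).mean()    # specificity
lr_plus = sens / (1 - spec)             # positive likelihood ratio
```

An LR+ around 5, as the BESTest achieved at six months, means a positive result multiplies the pre-test odds of falling roughly five-fold.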

  1. Predictability of Regional Climate: A Bayesian Approach to Analysing a WRF Model Ensemble

    Science.gov (United States)

    Bruyere, C. L.; Mesquita, M. D. S.; Paimazumder, D.

    2013-12-01

This study investigates aspects of climate predictability with a focus on climatic variables and different characteristics of extremes over nine North American climatic regions and two selected Atlantic sectors. An ensemble of state-of-the-art Weather Research and Forecasting Model (WRF) simulations is used for the analysis. The ensemble is comprised of a combination of various physics schemes, initial conditions, domain sizes, boundary conditions and breeding techniques. The main objectives of this research are: 1) to increase our understanding of the ability of WRF to capture regional climate information, both for individual ensemble members and for the ensemble collectively; 2) to investigate the role of different members and their synergy in reproducing regional climate; and 3) to estimate the associated uncertainty. In this study, we propose a Bayesian framework to study the predictability of extremes and associated uncertainties in order to provide a wealth of knowledge about WRF reliability and provide further clarity and understanding of the sensitivities and optimal combinations. The choice of the Bayesian model, as opposed to standard methods, is made because: a) this method has a mean square error that is less than standard statistics, which makes it a more robust method; b) it allows for the use of small sample sizes, which are typical in high-resolution modeling; c) it provides a probabilistic view of uncertainty, which is useful when making decisions concerning ensemble members.
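The appeal of a Bayesian treatment for small ensembles, points (b) and (c) above, can be illustrated with the simplest possible case: a conjugate normal-normal update of a quantity from a handful of ensemble members. This is only a toy illustration of the probabilistic-uncertainty idea, not the paper's framework; all values (a regional bias, prior, and member variance) are hypothetical.

```python
import numpy as np

# Five hypothetical ensemble-member estimates of a regional temperature bias (K)
ensemble = np.array([0.8, 1.1, 0.6, 1.3, 0.9])
prior_mu, prior_var = 0.0, 1.0     # vague prior belief about the bias
obs_var = 0.25                     # assumed within-ensemble variance

# Conjugate normal-normal posterior for the mean bias
n = len(ensemble)
post_var = 1 / (1 / prior_var + n / obs_var)
post_mu = post_var * (prior_mu / prior_var + ensemble.sum() / obs_var)
# The posterior mean is pulled from the prior toward the ensemble average,
# and the posterior variance shrinks as members are added -- a full
# distribution over the quantity rather than a point estimate.
```

Even with only five members the posterior is well defined, which is exactly why Bayesian estimates remain usable at the small sample sizes typical of high-resolution ensembles.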

  2. Street-based Topological Representations and Analyses for Predicting Traffic Flow in GIS

    CERN Document Server

    Jiang, Bin

    2007-01-01

It is well accepted in the space syntax community that traffic flow is significantly correlated to a morphological property of streets, which are represented by axial lines forming a so-called axial map. The correlation coefficient (R-squared value) approaches 0.8, or even a higher value, according to the space syntax literature. In this paper, we study the same issue using the Hong Kong street network and the Hong Kong Annual Average Daily Traffic (AADT) datasets, and find surprisingly that street-based topological representations (or street-street topologies) tend to be better representations than the axial map. In other words, vehicle flow correlates with a morphological property of streets better than with that of axial lines. Based on this finding, we suggest street-based topological representations as an alternative GIS representation, and topological analyses as a new analytical means for geographic knowledge discovery.

  3. Development of a feasibility prediction tool for solar power plant installation analyses

    International Nuclear Information System (INIS)

Highlights: An agglomerative hierarchical clustering tool is designed for renewable energy source analyses in this study. In the model, the nearest-neighbour approach is used as the clustering algorithm, with Euclidean, Manhattan, and Minkowski distance metrics as distance equations. The developed tool assists the knowledge-domain expert in analysing extensive datasets. The developed tool clusters the given sample data efficiently and successfully using each distance metric. The clustering results are compared according to success rates. -- Abstract: Solar energy is a challenging area among renewable sources, since solar energy sources have the advantages of not causing pollution, having low maintenance cost, and not producing noise due to the absence of moving parts. Despite these advantages, the installation cost of a solar power plant is considerably high, so feasibility analyses play a great role before installation in determining the most appropriate power plant site. Although many methods are used in feasibility analysis, this paper focuses on a new intelligent method based on an agglomerative hierarchical clustering approach. The solar irradiation and insolation parameters of the Central Anatolian Region of Turkey are evaluated utilizing the intelligent feasibility analysis tool developed in this study. The clustering operation in the tool is performed using the nearest-neighbour algorithm. At the stage of determining the optimum hierarchical clustering results, Euclidean, Manhattan and Minkowski distance metrics are adapted to the tool. The clustering results based on the Minkowski distance metric provide the most feasible inferences to the knowledge-domain expert, compared with the other distance metrics.
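Nearest-neighbour agglomerative clustering with a pluggable Minkowski distance, the combination described above, can be sketched compactly. The site feature vectors below are hypothetical two-dimensional points standing in for irradiation/insolation parameters; with p = 1 the metric is Manhattan distance and with p = 2 it is Euclidean.

```python
import numpy as np

def minkowski(a, b, p=2):
    """Minkowski distance; p=1 gives Manhattan, p=2 gives Euclidean."""
    return (np.abs(a - b) ** p).sum() ** (1 / p)

def single_linkage(points, n_clusters, p=2):
    """Nearest-neighbour (single-linkage) agglomerative clustering:
    repeatedly merge the two clusters with the closest pair of members."""
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(minkowski(points[a], points[b], p)
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)   # merge the closest pair of clusters
    return clusters

# Hypothetical site feature vectors (e.g. normalized irradiation, insolation)
pts = np.array([[1.0, 1.0], [1.2, 0.9], [5.0, 5.1], [5.2, 4.9], [9.0, 1.0]])
groups = single_linkage(pts, 3)
```

Running the same data with p = 1 and p = 2 and comparing cluster quality mirrors the tool's comparison of distance metrics by success rate.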

  4. CFD Analyses and Jet-Noise Predictions of Chevron Nozzles with Vortex Stabilization

    Science.gov (United States)

    Dippold, Vance

    2008-01-01

The WIND computational fluid dynamics code was used to perform a series of analyses on a single-flow plug nozzle with chevrons. Air was injected from tubes tangent to the nozzle outer surface at three different points along the chevron at the nozzle exit: near the chevron notch, at the chevron mid-point, and near the chevron tip. Three injection pressures were used for each injection tube location (10, 30, and 50 psig), giving injection mass flow rates of 0.1, 0.2, and 0.3 percent of the nozzle mass flow. The results showed subtle changes in the jet plume's turbulence and vorticity structure in the region immediately downstream of the nozzle exit. Distinctive patterns in the plume structure emerged from each injection location, and these became more pronounced as the injection pressure was increased. However, no significant changes in centerline velocity decay or turbulent kinetic energy were observed in the jet plume as a result of flow injection. Furthermore, computational acoustics calculations performed with the JeNo code showed no real reduction in jet noise relative to the baseline chevron nozzle.

  5. Phylogenomic analyses predict sistergroup relationship of nucleariids and Fungi and paraphyly of zygomycetes with significant support

    Directory of Open Access Journals (Sweden)

    Steenkamp Emma

    2009-01-01

    Background: Resolving the evolutionary relationships among Fungi remains challenging because of their highly variable evolutionary rates and the lack of a close phylogenetic outgroup. Nucleariida, an enigmatic group of amoeboids, have been proposed to emerge close to the fungal-metazoan divergence and might fulfill this role. Yet published phylogenies with up to five genes are without compelling statistical support, and genome-level data should be used to resolve this question with confidence. Results: Our analyses with nuclear (118 proteins) and mitochondrial (13 proteins) data now robustly associate Nucleariida and Fungi as neighbors, an assemblage that we term 'Holomycota'. With Nucleariida as an outgroup, we revisit unresolved deep fungal relationships. Conclusion: Our phylogenomic analysis provides significant support for the paraphyly of the traditional taxon Zygomycota, and contradicts a recent proposal to include Mortierella in a phylum Mucoromycotina. We further question the introduction of separate phyla for Glomeromycota and Blastocladiomycota, whose phylogenetic positions relative to other phyla remain unresolved even with genome-level datasets. Our results motivate broad sampling of additional genome sequences from these phyla.

  6. Circulating biomarkers for predicting cardiovascular disease risk; a systematic review and comprehensive overview of meta-analyses.

    Directory of Open Access Journals (Sweden)

    Thijs C van Holten

    BACKGROUND: Cardiovascular disease is one of the major causes of death worldwide. Assessing the risk of cardiovascular disease is an important aspect of clinical decision making and setting a therapeutic strategy, and the use of serological biomarkers may improve this. Despite an overwhelming number of studies and meta-analyses on biomarkers and cardiovascular disease, there are no comprehensive studies comparing the relevance of each biomarker. We performed a systematic review of meta-analyses on levels of serological biomarkers for atherothrombosis to compare the relevance of the most commonly studied biomarkers. METHODS AND FINDINGS: Medline and Embase were screened on search terms related to "arterial ischemic events" and "meta-analyses". The meta-analyses were sorted by patient group: without pre-existing cardiovascular disease, with cardiovascular disease, and heterogeneous groups concerning general populations, groups with and without cardiovascular disease, or miscellaneous. These were subsequently sorted by end-point for cardiovascular disease or stroke and summarized in tables. We identified 85 relevant full-text articles, with 214 meta-analyses. Markers for primary cardiovascular events include, from high to low result: C-reactive protein, fibrinogen, cholesterol, apolipoprotein B, the apolipoprotein A/apolipoprotein B ratio, high density lipoprotein, and vitamin D. Markers for secondary cardiovascular events include, from high to low result: cardiac troponins I and T, C-reactive protein, serum creatinine, and cystatin C. For primary stroke, fibrinogen and serum uric acid are strong risk markers. A limitation is that there is no acknowledged search strategy for prognostic studies or meta-analyses. CONCLUSIONS: For primary cardiovascular events, markers with strong predictive potential are mainly associated with lipids. For secondary cardiovascular events, markers are more associated with ischemia. Fibrinogen is a

  7. Accuracy of finite element analyses of CT scans in predictions of vertebral failure patterns under axial compression and anterior flexion.

    Science.gov (United States)

    Jackman, Timothy M; DelMonaco, Alex M; Morgan, Elise F

    2016-01-25

    Finite element (FE) models built from quantitative computed tomography (QCT) scans can provide patient-specific estimates of bone strength and fracture risk in the spine. While prior studies demonstrate accurate QCT-based FE predictions of vertebral stiffness and strength, the accuracy of the predicted failure patterns, i.e., the locations where failure occurs within the vertebra and the way in which the vertebra deforms as failure progresses, is less clear. This study used digital volume correlation (DVC) analyses of time-lapse micro-computed tomography (μCT) images acquired during mechanical testing (compression and anterior flexion) of thoracic spine segments (T7-T9, n=28) to measure displacements occurring throughout the T8 vertebral body at the ultimate point. These displacements were compared to those simulated by QCT-based FE analyses of T8. We hypothesized that the FE predictions would be more accurate when the boundary conditions are based on measurements of pressure distributions within intervertebral discs of similar level of disc degeneration vs. boundary conditions representing rigid platens. The FE simulations captured some of the general, qualitative features of the failure patterns; however, displacement errors ranged 12-279%. Contrary to our hypothesis, no differences in displacement errors were found when using boundary conditions representing measurements of disc pressure vs. rigid platens. The smallest displacement errors were obtained using boundary conditions that were measured directly by DVC at the T8 endplates. These findings indicate that further work is needed to develop methods of identifying physiological loading conditions for the vertebral body, for the purpose of achieving robust, patient-specific FE analyses of failure mechanisms. PMID:26792288

  9. Computational Analyses in Support of Sub-scale Diffuser Testing for the A-3 Facility. Part 1; Steady Predictions

    Science.gov (United States)

    Allgood, Daniel C.; Graham, Jason S.; Ahuja, Vineet; Hosangadi, Ashvin

    2010-01-01

    Simulation technology can play an important role in rocket engine test facility design and development by assessing risks, providing analysis of dynamic pressure and thermal loads, identifying failure modes and predicting anomalous behavior of critical systems. Advanced numerical tools assume greater significance in supporting testing and design of high altitude testing facilities and plume induced testing environments of high thrust engines because of the greater inter-dependence and synergy in the functioning of the different sub-systems. This is especially true for facilities such as the proposed A-3 facility at NASA SSC because of a challenging operating envelope linked to variable throttle conditions at relatively low chamber pressures. Facility designs in this case will require a complex network of diffuser ducts, steam ejector trains, fast operating valves, cooling water systems and flow diverters that need to be characterized for steady state performance. In this paper, we demonstrate, with the use of CFD analyses, an advanced capability to evaluate supersonic diffuser and steam ejector performance in a sub-scale A-3 facility at NASA Stennis Space Center (SSC), where extensive testing was performed. Furthermore, the focus in this paper relates to modeling of critical sub-systems and components used in facilities such as the A-3 facility. The work here addresses deficiencies in empirical models and current CFD analyses that are used for design of supersonic diffusers/turning vanes/ejectors, as well as analyses for confined plumes and venting processes. The primary areas that will be addressed are: (1) supersonic diffuser performance, including analyses of thermal loads; (2) accurate shock capturing in the diffuser duct; (3) the effect of the turning duct on the performance of the facility; (4) prediction of mass flow rates and performance classification for steam ejectors; and (5) comparisons with test data from sub-scale diffuser testing and assessment of confidence

  10. Benchmark of SCALE (SAS2H) isotopic predictions of depletion analyses for San Onofre PWR MOX fuel

    Energy Technology Data Exchange (ETDEWEB)

    Hermann, O.W.

    2000-02-01

    The isotopic composition of mixed-oxide (MOX) fuel, fabricated with both uranium and plutonium, after discharge from reactors is of significant interest to the Fissile Materials Disposition Program. The validation of the SCALE (SAS2H) depletion code for use in the prediction of isotopic compositions of MOX fuel, similar to previous validation studies on uranium-only fueled reactors, has corresponding significance. The EEI-Westinghouse Plutonium Recycle Demonstration Program examined the use of MOX fuel in the San Onofre PWR, Unit 1, during cycles 2 and 3. Isotopic analyses of the MOX spent fuel were conducted on 13 actinides and ¹⁴⁸Nd by either mass or alpha spectrometry. Six fuel pellet samples were taken from four different fuel pins of an irradiated MOX assembly. The measured actinide inventories from those samples have been used to benchmark SAS2H for MOX fuel applications. The average percentage differences in the code results compared with the measurement were −0.9% for ²³⁵U and 5.2% for ²³⁹Pu. The differences for most of the isotopes were significantly larger than in the cases for uranium-only fueled reactors. In general, comparisons of code results with alpha spectrometry data showed extreme differences, although the differences relative to mass spectrometry analyses were not much larger than those for uranium-only fueled reactors. This benchmark study should be useful in estimating uncertainties of inventory, criticality and dose calculations of MOX spent fuel.

  11. ATOP - The Advanced Taiwan Ocean Prediction System Based on the mpiPOM. Part 1: Model Descriptions, Analyses and Results

    Directory of Open Access Journals (Sweden)

    Leo Oey

    2013-01-01

    A data-assimilated Taiwan Ocean Prediction (ATOP) system is being developed at the National Central University, Taiwan. The model simulates sea-surface height, three-dimensional currents, temperature and salinity, and turbulent mixing. The model has options for tracer and particle-tracking algorithms, as well as for wave-induced Stokes drift and wave-enhanced mixing and bottom drag. Two different forecast domains have been tested: a large-grid domain that encompasses the entire North Pacific Ocean at 0.1° × 0.1° horizontal resolution and 41 vertical sigma levels, and a smaller western North Pacific domain which at present also has the same horizontal resolution. In both domains, 25-year spin-up runs from 1988 - 2011 were first conducted, forced by six-hourly Cross-Calibrated Multi-Platform (CCMP) and NCEP reanalysis Global Forecast System (GFS) winds. The results are then used as initial conditions to conduct ocean analyses from January 2012 through February 2012, when updated hindcasts and real-time forecasts begin using the GFS winds. This paper describes the ATOP system and compares the forecast results against satellite altimetry data to assess model skill. The model results are also shown to compare well with observations of (i) the Kuroshio intrusion in the northern South China Sea, and (ii) the Subtropical Counter Current. A review and comparison with other models in the literature are also given.

  12. Analyses of Potential Predictive Markers and Response to Targeted Therapy in Patients with Advanced Clear-cell Renal Cell Carcinoma

    Directory of Open Access Journals (Sweden)

    Yan Song

    2015-01-01

    Background: Vascular endothelial growth factor-targeted agents are standard treatments in advanced clear-cell renal cell carcinoma (ccRCC), but biomarkers of activity are lacking. The aim of this study was to investigate the association of Von Hippel-Lindau (VHL) gene status, vascular endothelial growth factor receptor (VEGFR) or stem cell factor receptor (KIT) expression, and their relationships with characteristics and clinical outcome of advanced ccRCC. Methods: A total of 59 patients who received targeted treatment with sunitinib or pazopanib were evaluated at the Cancer Hospital and Institute, Chinese Academy of Medical Sciences, between January 2010 and November 2012. Paraffin-embedded tumor samples were collected, and the status of the VHL gene and expression of VEGFR and KIT were determined by VHL sequence analysis and immunohistochemistry. Clinical-pathological features were collected, and efficacy measures such as response rate, median progression-free survival (PFS), and overall survival (OS) were calculated and then compared based on expression status. The Chi-square test, the Kaplan-Meier method, and the log-rank test were used for statistical analyses. Results: Of 59 patients, objective responses were observed in 28 patients (47.5%). The median PFS was 13.8 months and median OS was 39.9 months. There was an improved PFS in patients with the following clinical features: male gender, number of metastatic sites 2 or less, VEGFR-2 positive, or KIT positive. Eleven patients (18.6%) had evidence of VHL mutation, with an objective response rate of 45.5%, which showed no difference from patients with no VHL mutation (47.9%). VHL mutation status did not correlate with either overall response rate (P = 0.938) or PFS (P = 0.277). The PFS was 17.6 months and 22.2 months in VEGFR-2 positive patients and KIT positive patients, respectively, which was significantly longer than that of VEGFR-2 or KIT negative patients (P = 0.026 and P = 0.043). Conclusion

  13. Analyses of Potential Predictive Markers and Response to Targeted Therapy in Patients with Advanced Clear-cell Renal Cell Carcinoma

    Institute of Scientific and Technical Information of China (English)

    Yan Song; Jing Huang; Ling Shan; Hong-Tu Zhang

    2015-01-01

    Background: Vascular endothelial growth factor-targeted agents are standard treatments in advanced clear-cell renal cell carcinoma (ccRCC), but biomarkers of activity are lacking. The aim of this study was to investigate the association of Von Hippel-Lindau (VHL) gene status, vascular endothelial growth factor receptor (VEGFR) or stem cell factor receptor (KIT) expression, and their relationships with characteristics and clinical outcome of advanced ccRCC. Methods: A total of 59 patients who received targeted treatment with sunitinib or pazopanib were evaluated at the Cancer Hospital and Institute, Chinese Academy of Medical Sciences, between January 2010 and November 2012. Paraffin-embedded tumor samples were collected, and the status of the VHL gene and expression of VEGFR and KIT were determined by VHL sequence analysis and immunohistochemistry. Clinical-pathological features were collected, and efficacy measures such as response rate, median progression-free survival (PFS), and overall survival (OS) were calculated and then compared based on expression status. The Chi-square test, the Kaplan-Meier method, and the log-rank test were used for statistical analyses. Results: Of 59 patients, objective responses were observed in 28 patients (47.5%). The median PFS was 13.8 months and median OS was 39.9 months. There was an improved PFS in patients with the following clinical features: male gender, number of metastatic sites 2 or less, VEGFR-2 positive, or KIT positive. Eleven patients (18.6%) had evidence of VHL mutation, with an objective response rate of 45.5%, which showed no difference from patients with no VHL mutation (47.9%). VHL mutation status did not correlate with either overall response rate (P = 0.938) or PFS (P = 0.277). The PFS was 17.6 months and 22.2 months in VEGFR-2 positive patients and KIT positive patients, respectively, which was significantly longer than that of VEGFR-2 or KIT negative patients (P = 0.026 and P = 0.043). Conclusion: VHL mutation status could not predict

  14. Palindrome analyser - A new web-based server for predicting and evaluating inverted repeats in nucleotide sequences.

    Science.gov (United States)

    Brázda, Václav; Kolomazník, Jan; Lýsek, Jiří; Hároníková, Lucia; Coufal, Jan; Št'astný, Jiří

    2016-09-30

    DNA cruciform structures play an important role in the regulation of natural processes including gene replication and expression, as well as nucleosome structure and recombination. They have also been implicated in the evolution and development of diseases such as cancer and neurodegenerative disorders. Cruciform structures are formed by inverted repeats, and their stability is enhanced by DNA supercoiling and protein binding. They have received broad attention because of their important roles in biology. Computational approaches to study inverted repeats have allowed detailed analysis of genomes. However, currently there are no easily accessible and user-friendly tools that can analyse inverted repeats, especially among long nucleotide sequences. We have developed a web-based server, Palindrome analyser, which is a user-friendly application for analysing inverted repeats in various DNA (or RNA) sequences including genome sequences and oligonucleotides. It allows users to search and retrieve desired gene/nucleotide sequence entries from the NCBI databases, and provides data on length, sequence, locations and energy required for cruciform formation. Palindrome analyser also features an interactive graphical data representation of the distribution of the inverted repeats, with options for sorting according to the length of inverted repeat, length of loop, and number of mismatches. Palindrome analyser can be accessed at http://bioinformatics.ibp.cz. PMID:27603574
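The core search that Palindrome analyser automates, finding a stem sequence followed, after a short loop, by its reverse complement, is easy to sketch. The function below is a minimal stand-in with our own names and cut-offs; the server additionally scores energy, handles mismatches, and scales to genome-length input:

```python
def revcomp(seq):
    """Reverse complement of a DNA sequence."""
    comp = {'A': 'T', 'T': 'A', 'G': 'C', 'C': 'G'}
    return ''.join(comp[b] for b in reversed(seq))

def find_inverted_repeats(seq, arm=4, max_loop=6):
    """Return (start, arm, loop) triples where seq[start:start+arm] is
    followed, after a loop of 0..max_loop bases, by its reverse complement."""
    hits = []
    n = len(seq)
    for start in range(n - 2 * arm + 1):
        left = seq[start:start + arm]
        for loop in range(max_loop + 1):
            right_start = start + arm + loop
            if right_start + arm > n:
                break
            if seq[right_start:right_start + arm] == revcomp(left):
                hits.append((start, arm, loop))
    return hits

# GAATTC (the EcoRI site) is itself an inverted repeat: GAA + TTC, no loop
print(find_inverted_repeats("AAGAATTCAA", arm=3, max_loop=2))  # → [(2, 3, 0)]
```

A real analysis would also report the free energy of cruciform formation and tolerate mismatched stem bases, as the server's sort options suggest.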

  15. Serial and panel analyses of biomarkers do not improve the prediction of bacteremia compared to one procalcitonin measurement

    NARCIS (Netherlands)

    Tromp, M.; Lansdorp, B.; Bleeker-Rovers, C.P.; Klein Gunnewiek, J.M.; Kullberg, B.J.; Pickkers, P.

    2012-01-01

    Objectives We evaluated the value of a single biomarker, biomarker panels, biomarkers combined with clinical signs of sepsis, and serial determinations of biomarkers in the prediction of bacteremia in patients with sepsis. Methods Adult patients visiting the emergency department because of a susp

  16. Serial and panel analyses of biomarkers do not improve the prediction of bacteremia compared to one procalcitonin measurement.

    NARCIS (Netherlands)

    Tromp, M.; Lansdorp, B.; Bleeker-Rovers, C.P.; Gunnewiek, J.M.; Kullberg, B.J.; Pickkers, P.

    2012-01-01

    OBJECTIVES: We evaluated the value of a single biomarker, biomarker panels, biomarkers combined with clinical signs of sepsis, and serial determinations of biomarkers in the prediction of bacteremia in patients with sepsis. METHODS: Adult patients visiting the emergency department because of a suspe

  17. Basic Diagnosis and Prediction of Persistent Contrail Occurrence using High-resolution Numerical Weather Analyses/Forecasts and Logistic Regression. Part II: Evaluation of Sample Models

    Science.gov (United States)

    Duda, David P.; Minnis, Patrick

    2009-01-01

    Previous studies have shown that probabilistic forecasting may be a useful method for predicting persistent contrail formation. A probabilistic forecast to accurately predict contrail formation over the contiguous United States (CONUS) is created using hourly meteorological analyses from the Advanced Regional Prediction System (ARPS) and the Rapid Update Cycle (RUC), as well as GOES water vapor channel measurements, combined with surface and satellite observations of contrails. Two groups of logistic models were created. The first group of models (SURFACE models) is based on surface-based contrail observations supplemented with satellite observations of contrail occurrence. The second group of models (OUTBREAK models) is derived from a selected subgroup of satellite-based observations of widespread persistent contrails. The mean accuracies for both the SURFACE and OUTBREAK models typically exceeded 75 percent when based on the RUC or ARPS analysis data, but decreased when the logistic models were derived from ARPS forecast data.

  18. Can the lifetime of the superheater tubes be predicted according to the fuel analyses? Assessment from field and laboratory data

    Energy Technology Data Exchange (ETDEWEB)

    Salmenoja, K. [Kvaerner Pulping Oy, Tampere (Finland)

    1998-12-31

    The lifetime of superheater tubes in different power boilers is still more or less a mystery. This is especially true when firing biomass-based fuels (biofuels), such as bark, forest residues, and straw. Due to the inhomogeneous nature of biofuels, the lifetime of the superheaters may vary from case to case. Sometimes the lifetime is significantly shorter than originally expected; sometimes no corrosion is observed even in the hottest tubes. This is one of the main reasons why boiler operators often demand better predictability of the corrosion resistance of the materials, to avoid unscheduled shutdowns. (orig.) 9 refs.

  19. Evaluation of multivariate statistical analyses for monitoring and prediction of processes in an seawater reverse osmosis desalination plant

    Energy Technology Data Exchange (ETDEWEB)

    Kolluri, Srinivas Sahan; Esfahani, Iman Janghorban; Garikiparthy, Prithvi Sai Nadh; Yoo, Chang Kyoo [Kyung Hee University, Yongin (Korea, Republic of)

    2015-08-15

    Our aim was to analyze, monitor, and predict the outcomes of processes in a full-scale seawater reverse osmosis (SWRO) desalination plant using multivariate statistical techniques. Multivariate analysis of variance (MANOVA) was used to investigate the performance and efficiencies of two SWRO processes, namely, pore controllable fiber filter-reverse osmosis (PCF-SWRO) and sand filtration-ultra filtration-reverse osmosis (SF-UF-SWRO). Principal component analysis (PCA) was applied to monitor the two SWRO processes. PCA monitoring revealed that the SF-UF-SWRO process could be analyzed reliably with a low number of outliers and disturbances. Partial least squares (PLS) analysis was then conducted to predict which of the seven input parameters of feed flow rate, PCF/SF-UF filtrate flow rate, feed water temperature, feed turbidity, pH, reverse osmosis (RO) flow rate, and pressure had a significant effect on the outcome variables of permeate flow rate and concentration. Root mean squared errors (RMSEs) of the PLS models for permeate flow rates were 31.5 and 28.6 for the PCF-SWRO and SF-UF-SWRO processes, respectively, while the RMSEs of permeate concentrations were 350.44 and 289.4, respectively. These results indicate that the SF-UF-SWRO process can be modeled more accurately than the PCF-SWRO process, because the RMSE values of permeate flow rate and concentration obtained using a PLS regression model of the SF-UF-SWRO process were lower than those obtained for the PCF-SWRO process.

  20. Standardized Software for Wind Load Forecast Error Analyses and Predictions Based on Wavelet-ARIMA Models - Applications at Multiple Geographically Distributed Wind Farms

    Energy Technology Data Exchange (ETDEWEB)

    Hou, Zhangshuan; Makarov, Yuri V.; Samaan, Nader A.; Etingov, Pavel V.

    2013-03-19

    Given the multi-scale variability and uncertainty of wind generation and forecast errors, it is a natural choice to use time-frequency representation (TFR) as a view of the corresponding time series represented over both time and frequency. Here we use the wavelet transform (WT) to expand the signal in terms of wavelet functions which are localized in both time and frequency. Each WT component is more stationary and has a consistent auto-correlation pattern. We combined wavelet analyses with time series forecast approaches such as ARIMA, and tested the approach at three wind farms located far away from each other. The prediction capability is satisfactory -- the day-ahead predictions of errors match the original error values very well, including the patterns. The observations are well located within the predictive intervals. Integrating our wavelet-ARIMA ('stochastic') model with the weather forecast model ('deterministic') will significantly improve our ability to predict wind power generation and reduce predictive uncertainty.
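The decompose-forecast-reconstruct idea can be illustrated compactly. This sketch substitutes a one-level Haar transform for the report's wavelet choice and an AR(1) fit for ARIMA, so it is a toy analogue of the method, not the PNNL software; all names and the sample series are ours:

```python
def haar_decompose(x):
    """One-level Haar transform: per-pair averages (approximation)
    and half-differences (detail)."""
    approx = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return approx, detail

def ar1_fit(x):
    """Least-squares AR(1) coefficient phi for x[t] ≈ phi * x[t-1]."""
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x))) or 1.0
    return num / den

def forecast_next_pair(x):
    """Forecast the next two samples: forecast each Haar band with AR(1),
    then invert the transform (x1 = a + d, x2 = a - d)."""
    approx, detail = haar_decompose(x)
    a_next = ar1_fit(approx) * approx[-1]
    d_next = ar1_fit(detail) * detail[-1]
    return a_next + d_next, a_next - d_next

# hypothetical trending forecast-error series
series = [10.0, 10.4, 10.8, 11.2, 11.6, 12.0, 12.4, 12.8]
print(forecast_next_pair(series))  # continues the upward trend
```

Because each band is closer to stationary than the raw series, the per-band models are simpler and, as the report argues, their recombined forecast tracks the original error patterns well.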

  1. Landscaping analyses of the ROC predictions of discrete-slots and signal-detection models of visual working memory.

    Science.gov (United States)

    Donkin, Chris; Tran, Sophia Chi; Nosofsky, Robert

    2014-10-01

    A fundamental issue concerning visual working memory is whether its capacity limits are better characterized in terms of a limited number of discrete slots (DSs) or a limited amount of a shared continuous resource. Rouder et al. (2008) found that a mixed-attention, fixed-capacity, DS model provided the best explanation of behavior in a change detection task, outperforming alternative continuous signal detection theory (SDT) models. Here, we extend their analysis in two ways: first, with experiments aimed at better distinguishing between the predictions of the DS and SDT models, and second, using a model-based analysis technique called landscaping, in which the functional-form complexity of the models is taken into account. We find that the balance of evidence supports a DS account of behavior in change detection tasks but that the SDT model is best when the visual displays always consist of the same number of items. In our General Discussion section, we outline, but ultimately reject, a number of potential explanations for the observed pattern of results. We finish by describing future research that is needed to pinpoint the basis for this observed pattern of results.
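The competing predictions are easy to state quantitatively. Below is a minimal sketch of the standard fixed-capacity discrete-slots account (a simplification of the mixed-attention model the abstract credits to Rouder et al., 2008; the function name and parameter values are ours): sweeping the guessing rate traces the model's predicted ROC, a straight line from (0, d) to (1, 1), in contrast to the curved ROC of SDT models.

```python
def ds_rates(k, n, g):
    """Predicted (hit, false-alarm) rates for a fixed-capacity discrete-slots
    model: the probed item occupies a slot with probability d = min(k/n, 1);
    otherwise the observer guesses 'change' with probability g."""
    d = min(k / n, 1.0)
    hit = d + (1 - d) * g          # detect, or fail to detect and guess yes
    false_alarm = (1 - d) * g      # no change present, guess yes anyway
    return hit, false_alarm

# capacity k=3 slots, set size n=6: sweep g to trace the predicted ROC
for g in (0.0, 0.5, 1.0):
    print(ds_rates(k=3, n=6, g=g))
```

The linear ROC with unit slope is the diagnostic signature the landscaping analyses exploit when pitting the DS model against signal detection theory.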

  2. The GENOTEND chip: a new tool to analyse gene expression in muscles of beef cattle for beef quality prediction

    Directory of Open Access Journals (Sweden)

    Hocquette Jean-Francois

    2012-08-01

    validated in the groups of 30 Charolais young bulls slaughtered in year 2, and in the 21 Charolais steers slaughtered in year 1, but not in the group of 19 steers slaughtered in year 2, which differ from the reference group by two factors (gender and year). When the first three groups of animals were analysed together, this subset of genes explained a 4-fold higher proportion of the variability in tenderness than muscle biochemical traits. Conclusion: This study underlined the relevance of the GENOTEND chip to identify markers of beef quality, mainly by confirming previous results and by detecting other genes of the heat shock family as potential markers of beef quality. However, it was not always possible to extrapolate the relevance of these markers to all animal groups which differ by several factors (such as gender or environmental conditions of production) from the initial population of reference in which these markers were identified.

  3. Basic Diagnosis and Prediction of Persistent Contrail Occurrence using High-resolution Numerical Weather Analyses/Forecasts and Logistic Regression. Part I: Effects of Random Error

    Science.gov (United States)

    Duda, David P.; Minnis, Patrick

    2009-01-01

    Straightforward application of the Schmidt-Appleman contrail formation criteria to diagnose persistent contrail occurrence from numerical weather prediction data is hindered by significant bias errors in the upper tropospheric humidity. Logistic models of contrail occurrence have been proposed to overcome this problem, but basic questions remain about how random measurement error may affect their accuracy. A set of 5000 synthetic contrail observations is created to study the effects of random error in these probabilistic models. The simulated observations are based on distributions of temperature, humidity, and vertical velocity derived from Advanced Regional Prediction System (ARPS) weather analyses. The logistic models created from the simulated observations were evaluated using two common statistical measures of model accuracy, the percent correct (PC) and the Hanssen-Kuipers discriminant (HKD). To convert the probabilistic results of the logistic models into a dichotomous yes/no choice suitable for the statistical measures, two critical probability thresholds are considered. The HKD scores are higher when the climatological frequency of contrail occurrence is used as the critical threshold, while the PC scores are higher when the critical probability threshold is 0.5. For both thresholds, typical random errors in temperature, relative humidity, and vertical velocity are found to be small enough to allow for accurate logistic models of contrail occurrence. The accuracy of the models developed from synthetic data is over 85 percent for both the prediction of contrail occurrence and non-occurrence, although in practice, larger errors would be anticipated.
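The two accuracy measures named in the abstract are standard contingency-table scores, computed here from hit/miss/false-alarm/correct-rejection counts after thresholding the logistic model's probabilities. The counts below are hypothetical, chosen only to illustrate the calculation:

```python
def percent_correct(tp, fn, fp, tn):
    """PC: fraction of dichotomous yes/no forecasts that verified."""
    return (tp + tn) / (tp + fn + fp + tn)

def hanssen_kuipers(tp, fn, fp, tn):
    """HKD (true skill statistic): hit rate minus false-alarm rate.
    Unlike PC, it is not inflated by a large count of correct 'no' forecasts."""
    hit_rate = tp / (tp + fn)
    false_alarm_rate = fp / (fp + tn)
    return hit_rate - false_alarm_rate

# hypothetical contrail-occurrence contingency counts
tp, fn, fp, tn = 400, 100, 150, 4350
print(round(percent_correct(tp, fn, fp, tn), 3))   # → 0.95
print(round(hanssen_kuipers(tp, fn, fp, tn), 3))   # → 0.767
```

This asymmetry explains the abstract's finding: PC is maximized by the 0.5 probability threshold, while HKD favors the climatological-frequency threshold, which trades extra false alarms for a higher hit rate on the rare event.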

  4. Computational fluid dynamics analyses of lateral heat conduction, coolant azimuthal mixing and heat transfer predictions in a BR2 fuel assembly geometry.

    Energy Technology Data Exchange (ETDEWEB)

    Tzanos, C. P.; Dionne, B. (Nuclear Engineering Division)

    2011-05-23

    To support the analyses related to the conversion of the BR2 core from highly-enriched (HEU) to low-enriched (LEU) fuel, the thermal-hydraulics codes PLTEMP and RELAP-3D are used to evaluate the safety margins during steady-state operation (PLTEMP), as well as after a loss-of-flow, loss-of-pressure, or loss-of-coolant event (RELAP). In the 1-D PLTEMP and RELAP simulations, conduction in the azimuthal and axial directions is not accounted for. The very good thermal conductivity of the cladding and the fuel meat, together with significant temperature gradients in the lateral (axial and azimuthal) directions, could lead to a heat flux distribution that is significantly different from the power distribution. To evaluate the significance of the lateral heat conduction, 3-D computational fluid dynamics (CFD) simulations, using the CFD code STAR-CD, were performed. Safety margin calculations are typically performed for a hot stripe, i.e., an azimuthal region of the fuel plates/coolant channel containing the power peak. In a RELAP model, for example, a channel between two plates could be divided into a number of RELAP channels (stripes) in the azimuthal direction. In a PLTEMP model, the effect of azimuthal power peaking could be taken into account by using engineering factors. However, if the thermal mixing in the azimuthal direction of a coolant channel is significant, a striping approach could be overly conservative by not taking this mixing into account. STAR-CD simulations were also performed to study the thermal mixing in the coolant. Section II of this document presents the results of the analyses of the lateral heat conduction and azimuthal thermal mixing in a coolant channel. Finally, PLTEMP and RELAP simulations rely on the use of correlations to determine heat transfer coefficients. Previous analyses showed that the Dittus-Boelter correlation gives significantly more conservative (lower) predictions than the correlations of Sieder-Tate and Petukhov. STAR-CD 3-D

  5. Genetically Predicted Body Mass Index and Breast Cancer Risk: Mendelian Randomization Analyses of Data from 145,000 Women of European Descent

    Science.gov (United States)

    Guo, Yan; Warren Andersen, Shaneda; Shu, Xiao-Ou; Michailidou, Kyriaki; Bolla, Manjeet K.; Wang, Qin; Garcia-Closas, Montserrat; Milne, Roger L.; Schmidt, Marjanka K.; Chang-Claude, Jenny; Dunning, Allison; Bojesen, Stig E.; Ahsan, Habibul; Aittomäki, Kristiina; Andrulis, Irene L.; Anton-Culver, Hoda; Beckmann, Matthias W.; Beeghly-Fadiel, Alicia; Benitez, Javier; Bogdanova, Natalia V.; Bonanni, Bernardo; Børresen-Dale, Anne-Lise; Brand, Judith; Brauch, Hiltrud; Brenner, Hermann; Brüning, Thomas; Burwinkel, Barbara; Casey, Graham; Chenevix-Trench, Georgia; Couch, Fergus J.; Cross, Simon S.; Czene, Kamila; Dörk, Thilo; Dumont, Martine; Fasching, Peter A.; Figueroa, Jonine; Flesch-Janys, Dieter; Fletcher, Olivia; Flyger, Henrik; Fostira, Florentia; Gammon, Marilie; Giles, Graham G.; Guénel, Pascal; Haiman, Christopher A.; Hamann, Ute; Hooning, Maartje J.; Hopper, John L.; Jakubowska, Anna; Jasmine, Farzana; Jenkins, Mark; John, Esther M.; Johnson, Nichola; Jones, Michael E.; Kabisch, Maria; Knight, Julia A.; Koppert, Linetta B.; Kosma, Veli-Matti; Kristensen, Vessela; Le Marchand, Loic; Lee, Eunjung; Li, Jingmei; Lindblom, Annika; Lubinski, Jan; Malone, Kathi E.; Mannermaa, Arto; Margolin, Sara; McLean, Catriona; Meindl, Alfons; Neuhausen, Susan L.; Nevanlinna, Heli; Neven, Patrick; Olson, Janet E.; Perez, Jose I. A.; Perkins, Barbara; Phillips, Kelly-Anne; Pylkäs, Katri; Rudolph, Anja; Santella, Regina; Sawyer, Elinor J.; Schmutzler, Rita K.; Seynaeve, Caroline; Shah, Mitul; Shrubsole, Martha J.; Southey, Melissa C.; Swerdlow, Anthony J.; Toland, Amanda E.; Tomlinson, Ian; Torres, Diana; Truong, Thérèse; Ursin, Giske; Van Der Luijt, Rob B.; Verhoef, Senno; Whittemore, Alice S.; Winqvist, Robert; Zhao, Hui; Zhao, Shilin; Hall, Per; Simard, Jacques; Kraft, Peter; Hunter, David; Easton, Douglas F.; Zheng, Wei

    2016-01-01

    Background Observational epidemiological studies have shown that high body mass index (BMI) is associated with a reduced risk of breast cancer in premenopausal women but an increased risk in postmenopausal women. It is unclear whether this association is mediated through shared genetic or environmental factors. Methods We applied Mendelian randomization to evaluate the association between BMI and risk of breast cancer occurrence using data from two large breast cancer consortia. We created a weighted BMI genetic score comprising 84 BMI-associated genetic variants to predict BMI. We evaluated genetically predicted BMI in association with breast cancer risk using individual-level data from the Breast Cancer Association Consortium (BCAC) (cases = 46,325, controls = 42,482). We further evaluated the association between genetically predicted BMI and breast cancer risk using summary statistics from 16,003 cases and 41,335 controls from the Discovery, Biology, and Risk of Inherited Variants in Breast Cancer (DRIVE) Project. Because most studies measured BMI after cancer diagnosis, we could not conduct a parallel analysis to adequately evaluate the association of measured BMI with breast cancer risk prospectively. Results In the BCAC data, genetically predicted BMI was found to be inversely associated with breast cancer risk (odds ratio [OR] = 0.65 per 5 kg/m2 increase, 95% confidence interval [CI]: 0.56–0.75, p = 3.32 × 10−10). The associations were similar for both premenopausal (OR = 0.44, 95% CI: 0.31–0.62, p = 9.91 × 10−8) and postmenopausal breast cancer (OR = 0.57, 95% CI: 0.46–0.71, p = 1.88 × 10−8). This association was replicated in the data from the DRIVE consortium (OR = 0.72, 95% CI: 0.60–0.84, p = 1.64 × 10−7). Single marker analyses identified 17 of the 84 BMI-associated single nucleotide polymorphisms (SNPs) in association with breast cancer risk at p
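    A weighted genetic score of the kind described (risk-allele dosages weighted by per-allele effect sizes) can be sketched as follows; the variants, dosages, and effect sizes below are invented for illustration, not the study's 84 SNPs:

```python
# Hypothetical 3-variant example for 4 individuals; the real analysis
# used 84 BMI-associated variants with published effect sizes.
variants = [
    {"dosage": [0, 1, 2, 1], "beta": 0.08},  # per-allele effect on BMI
    {"dosage": [1, 2, 0, 1], "beta": 0.05},
    {"dosage": [2, 0, 1, 2], "beta": 0.03},
]

def weighted_grs(person):
    # Sum of risk-allele dosages, each weighted by its per-allele effect.
    return sum(v["dosage"][person] * v["beta"] for v in variants)

scores = [weighted_grs(i) for i in range(4)]
```

In the Mendelian randomization step, such a score serves as the instrumental variable for BMI when estimating its association with breast cancer risk.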

  6. Prediction

    CERN Document Server

    Sornette, Didier

    2010-01-01

    This chapter first presents a rather personal view of some different aspects of predictability, going in crescendo from simple linear systems to high-dimensional nonlinear systems with stochastic forcing, which exhibit emergent properties such as phase transitions and regime shifts. Then, a detailed correspondence between the phenomenology of earthquakes, financial crashes and epileptic seizures is offered. The presented statistical evidence provides the substance of a general phase diagram for understanding the many facets of the spatio-temporal organization of these systems. A key insight is to organize the evidence and mechanisms in terms of two summarizing measures: (i) amplitude of disorder or heterogeneity in the system and (ii) level of coupling or interaction strength among the system's components. On the basis of the recently identified remarkable correspondence between earthquakes and seizures, we present detailed information on a class of stochastic point processes that has been found to be particu...

  7. Using meta-analytic path analysis to test theoretical predictions in health behavior: An illustration based on meta-analyses of the theory of planned behavior

    OpenAIRE

    Hagger, Martin; Chan, Dervin K. C.; Protogerou, Cleo; Chatzisarantis, Nikos L. D.

    2016-01-01

    Objective Synthesizing research on social cognitive theories applied to health behavior is an important step in the development of an evidence base of psychological factors as targets for effective behavioral interventions. However, few meta-analyses of research on social cognitive theories in health contexts have conducted simultaneous tests of theoretically-stipulated pattern effects using path analysis. We argue that conducting path analyses of meta-analytic effects among constructs fr...

  8. The Janus-faced nature of time spent on homework : Using latent profile analyses to predict academic achievement over a school year

    NARCIS (Netherlands)

    Flunger, Barbara; Trautwein, Ulrich; Nagengast, Benjamin; Lüdtke, Oliver; Niggli, Alois; Schnyder, Inge

    2015-01-01

    Homework time and achievement are only modestly associated, whereas homework effort has consistently been shown to positively predict later achievement. We argue that time spent on homework can be an important predictor of achievement when combined with measures of homework effort. Latent profile an
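    Latent profile analysis on a single indicator can be sketched as a two-component Gaussian mixture fitted by expectation-maximization; the homework-time data below are synthetic, and the actual study used multiple indicators:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic homework-time data (hours/week): two latent profiles.
x = np.concatenate([rng.normal(2.0, 0.5, 200), rng.normal(6.0, 0.8, 200)])

# Two-component 1-D Gaussian mixture fitted by EM.
mu, sd, w = np.array([1.0, 5.0]), np.array([1.0, 1.0]), np.array([0.5, 0.5])
for _ in range(100):
    # E-step: responsibilities of each component for each observation
    pdf = w * np.exp(-0.5 * ((x[:, None] - mu) / sd)**2) / (sd * np.sqrt(2 * np.pi))
    r = pdf / pdf.sum(axis=1, keepdims=True)
    # M-step: update mixing weights, means, and standard deviations
    n = r.sum(axis=0)
    w = n / len(x)
    mu = (r * x[:, None]).sum(axis=0) / n
    sd = np.sqrt((r * (x[:, None] - mu)**2).sum(axis=0) / n)
```

The fitted profile means and the per-student responsibilities `r` are what a latent profile analysis would then relate to later achievement.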

  9. Logistic Regression Analyses for Predicting Clinically Important Differences in Motor Capacity, Motor Performance, and Functional Independence after Constraint-Induced Therapy in Children with Cerebral Palsy

    Science.gov (United States)

    Wang, Tien-ni; Wu, Ching-yi; Chen, Chia-ling; Shieh, Jeng-yi; Lu, Lu; Lin, Keh-chung

    2013-01-01

    Given the growing evidence for the effects of constraint-induced therapy (CIT) in children with cerebral palsy (CP), there is a need for investigating the characteristics of potential participants who may benefit most from this intervention. This study aimed to establish predictive models for the effects of pediatric CIT on motor and functional…
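    A logistic-regression predictive model of the kind described can be sketched with plain gradient ascent on the log-likelihood; the predictors and outcome below are synthetic stand-ins, not the study's clinical variables:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical standardized predictors (e.g., baseline motor score, age);
# y = 1 marks a clinically important difference after therapy.
X = rng.normal(size=(300, 2))
true_w, true_b = np.array([1.5, -1.0]), 0.3
y = (rng.random(300) < 1 / (1 + np.exp(-(X @ true_w + true_b)))).astype(float)

w, b = np.zeros(2), 0.0
for _ in range(2000):  # gradient ascent on the logistic log-likelihood
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w += 0.1 * X.T @ (y - p) / len(y)
    b += 0.1 * (y - p).mean()

pred = (1 / (1 + np.exp(-(X @ w + b))) > 0.5).astype(float)
accuracy = (pred == y).mean()
```

The fitted coefficients play the role of the study's predictors of responder status, and in-sample accuracy is a crude stand-in for its model evaluation.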

  10. A systematic study of coordinate precision in X-ray structure analyses. Pt. 2. Predictive estimates of E.S.D.'s for the general-atom case

    International Nuclear Information System (INIS)

    The relationship between the mean isotropic e.s.d. σ̄(A)_o of any element type A in a crystal structure and the R factor and atomic constitution of that structure is explored for 124 905 element-type occurrences calculated from 33 955 entries in the Cambridge Structural Database. On the basis of the work of Cruickshank [Acta Cryst. (1960), 13, 774-777], it is shown that σ̄(A)_p values can be estimated by equations of the form σ̄(A)_p = K·R·N_c^(1/2)/Z_A, where N_c is taken as ΣZ_i^2/Z_C^2, the Z_i are atomic numbers and the summation is over all atoms in the asymmetric unit. Values of K were obtained by regression techniques using the σ̄(A)_o as basis. The constant K_nc for noncentrosymmetric structures is found to be larger than K_c for centrosymmetric structures by a factor of ~2^(1/2), as predicted by Cruickshank (1960). Two predictive equations are generated, one for first-row elements and the second for elements with Z_A > 10. The relationship between the different constants K that arise in these two situations is linked to shape differentials in scattering-factor (f_i) curves for light and heavy atoms. It is found that predictive equations in which the Z_i are selectively replaced by f_i at a constant sinθ/λ of 0.30 Å^-1 generate closely similar values of K for the light-atom and heavy-atom subsets. The overall analysis indicates that atomic e.s.d.'s may be seriously underestimated in the more precise structure determinations, that e.s.d.'s for the heaviest atoms may be less reliable than those for lighter atoms and that e.s.d.'s in noncentrosymmetric structures may be less accurate than those in centrosymmetric structures. (orig.)
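    The predictive equation σ̄(A)_p = K·R·N_c^(1/2)/Z_A, with N_c = ΣZ_i^2/Z_C^2, translates directly into code; the K values below are placeholders, since the fitted constants are reported in the paper itself:

```python
def predicted_esd(K, R, atomic_numbers, Z_A, Z_C=6):
    # sigma(A)_p = K * R * Nc**0.5 / Z_A, with Nc = sum(Zi**2) / Zc**2
    Nc = sum(z * z for z in atomic_numbers) / Z_C**2
    return K * R * Nc**0.5 / Z_A

# Illustrative: 10 carbon atoms in the asymmetric unit, R = 0.05.
# K = 1.0 is a placeholder; the paper fits K separately, with the
# noncentrosymmetric constant larger by a factor of ~2**0.5.
esd_c = predicted_esd(1.0, 0.05, [6] * 10, Z_A=6)
esd_nc = predicted_esd(2**0.5, 0.05, [6] * 10, Z_A=6)
```

The ratio of the two results reproduces the ~2^(1/2) centrosymmetric/noncentrosymmetric factor described in the abstract.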

  11. Barriers to predicting changes in global terrestrial methane fluxes: analyses using CLM4Me, a methane biogeochemistry model integrated in CESM

    Directory of Open Access Journals (Sweden)

    W. J. Riley

    2011-07-01

    Terrestrial net CH4 surface fluxes often represent the difference between much larger gross production and consumption fluxes and depend on multiple physical, biological, and chemical mechanisms that are poorly understood and represented in regional- and global-scale biogeochemical models. To characterize uncertainties, study feedbacks between CH4 fluxes and climate, and to guide future model development and experimentation, we developed and tested a new CH4 biogeochemistry model (CLM4Me) integrated in the land component (Community Land Model; CLM4) of the Community Earth System Model (CESM1). CLM4Me includes representations of CH4 production, oxidation, aerenchyma transport, ebullition, aqueous and gaseous diffusion, and fractional inundation. As with most global models, CLM4 lacks important features for predicting current and future CH4 fluxes, including: vertical representation of soil organic matter, accurate subgrid-scale hydrology, realistic representation of inundated-system vegetation, anaerobic decomposition, thermokarst dynamics, and aqueous chemistry. We compared the seasonality and magnitude of predicted CH4 emissions to observations from 18 sites and three global atmospheric inversions. Simulated net CH4 emissions using our baseline parameter set were 270, 160, 50, and 70 Tg CH4 yr−1 globally, in the tropics, in the temperate zone, and north of 45° N, respectively; these values are within the range of previous estimates. We then used the model to characterize the sensitivity of regional and global CH4 emission estimates to uncertainties in model parameterizations. Of the parameters we tested, the temperature sensitivity of CH4 production, oxidation parameters, and aerenchyma properties had the largest impacts on net CH4 emissions, up to a factor of 4 and 10 at the regional and gridcell scales

  12. Barriers to predicting changes in global terrestrial methane fluxes: analyses using CLM4Me, a methane biogeochemistry model integrated in CESM

    Directory of Open Access Journals (Sweden)

    W. J. Riley

    2011-02-01

    Terrestrial net CH4 surface fluxes often represent the difference between much larger gross production and consumption fluxes and depend on multiple physical, biological, and chemical mechanisms that are poorly understood and represented in regional- and global-scale biogeochemical models. To characterize uncertainties, study feedbacks between CH4 fluxes and climate, and to guide future model development and experimentation, we developed and tested a new CH4 biogeochemistry model (CLM4Me) integrated in the land component (Community Land Model; CLM4) of the Community Earth System Model (CESM1). CLM4Me includes representations of CH4 production, oxidation, aerenchyma transport, ebullition, aqueous and gaseous diffusion, and fractional inundation. As with most global models, CLM4Me lacks important features for predicting current and future CH4 fluxes, including: vertical representation of soil organic matter, accurate subgrid-scale hydrology, realistic representation of inundated-system vegetation, anaerobic decomposition, thermokarst dynamics, and aqueous chemistry. We compared the seasonality and magnitude of predicted CH4 emissions to observations from 18 sites and three global atmospheric inversions. Simulated net CH4 emissions using our baseline parameter set were 270, 160, 50, and 70 Tg CH4 yr−1 globally, in the tropics, in the temperate zone, and north of 45° N, respectively; these values are within the range of previous estimates. We then used the model to characterize the sensitivity of regional and global CH4 emission estimates to uncertainties in model parameterizations. Of the parameters we tested, the temperature sensitivity of CH4 production, oxidation parameters, and aerenchyma properties had the largest impacts on net CH4 emissions, up to a factor of 4 and 10 at the regional and gridcell

  13. Generation of a predicted protein database from EST data and application to iTRAQ analyses in grape (Vitis vinifera cv. Cabernet Sauvignon) berries at ripening initiation

    Directory of Open Access Journals (Sweden)

    Smith Derek

    2009-01-01

    Background iTRAQ is a proteomics technique that uses isobaric tags for relative and absolute quantitation of tryptic peptides. In proteomics experiments, the detection and high-confidence annotation of proteins and the significance of corresponding expression differences can depend on the quality and the species specificity of the tryptic peptide map database used for analysis of the data. For species for which finished genome sequence data are not available, identification of proteins relies on similarity to proteins from other species using comprehensive peptide map databases such as the MSDB. Results We were interested in characterizing ripening initiation ('veraison') in grape berries at the protein level in order to better define the molecular control of this important process for grape growers and wine makers. We developed a bioinformatic pipeline for processing EST data in order to produce a predicted tryptic peptide database specifically targeted to the wine grape cultivar Vitis vinifera cv. Cabernet Sauvignon, and lacking truncated N- and C-terminal fragments. By searching iTRAQ MS/MS data generated from berry exocarp and mesocarp samples at ripening initiation, we determined that implementation of the custom database afforded a large improvement in high-confidence peptide annotation in comparison to the MSDB. We used iTRAQ MS/MS in conjunction with custom peptide database searches to quantitatively characterize several important pathway components for berry ripening previously described at the transcriptional level and confirmed expression patterns for these at the protein level. Conclusion We determined that a predicted peptide database for MS/MS applications can be derived from EST data using advanced clustering and trimming approaches and successfully implemented for quantitative proteome profiling. Quantitative shotgun proteome profiling holds great promise for characterizing biological processes such as fruit ripening
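    The core of such a pipeline is an in-silico tryptic digest of predicted protein sequences. A minimal sketch using the common rule of thumb (cleave C-terminal to K or R, except when followed by P); the pipeline's EST clustering and trimming steps are not reproduced:

```python
def tryptic_digest(seq, min_len=6):
    # Cleave C-terminal to K or R, except when the next residue is P,
    # and keep only peptides long enough to be useful for MS/MS search.
    peptides, start = [], 0
    for i, aa in enumerate(seq):
        if aa in "KR" and (i + 1 == len(seq) or seq[i + 1] != "P"):
            peptides.append(seq[start:i + 1])
            start = i + 1
    if start < len(seq):
        peptides.append(seq[start:])
    return [p for p in peptides if len(p) >= min_len]
```

Applied to every predicted protein, this yields the species-specific tryptic peptide map against which iTRAQ MS/MS spectra are searched.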

  14. Comparative analyses of genetic risk prediction methods reveal extreme diversity of genetic predisposition to nonalcoholic fatty liver disease (NAFLD) among ethnic populations of India

    Indian Academy of Sciences (India)

    Ankita Chatterjee; Analabha Basu; Abhijit Chowdhury; Kausik Das; Neeta Sarkar-Roy; Partha P. Majumder; Priyadarshi Basu

    2015-03-01

    Nonalcoholic fatty liver disease (NAFLD) is a distinct pathologic condition characterized by a disease spectrum ranging from simple steatosis to steato-hepatitis, cirrhosis and hepatocellular carcinoma. Prevalence of NAFLD varies in different ethnic groups, ranging from 12% in Chinese to 45% in Hispanics. Among Indian populations, the diversity in prevalence is high, ranging from 9% in rural populations to 32% in urban populations, with geographic differences as well. Here, we wished to find out if this difference is reflected in their genetic makeup. To date, several candidate gene studies and a few genomewide association studies (GWAS) have been carried out, and many associations between single nucleotide polymorphisms (SNPs) and NAFLD have been observed. In this study, the risk allele frequencies (RAFs) of NAFLD-associated SNPs in 20 Indian ethnic populations (376 individuals) were analysed. We used two different measures for calculating genetic risk scores and compared their performance. The correlation of additive risk scores of NAFLD for three HapMap populations with their weighted mean prevalence was found to be high (R2 = 0.93). We then used this method to compare NAFLD risk among ethnic Indian populations. Based on our observations, the Indian caste populations have high risk scores compared to Caucasians, who are often used as surrogates for Indian caste populations in disease gene association studies, and significantly higher risk scores than the Indian tribal populations.
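    An additive risk score of the kind used here can be sketched from risk allele frequencies alone; the population names and RAF values below are invented for illustration:

```python
def additive_risk_score(risk_allele_freqs):
    # Expected count of risk alleles per diploid genome, summed over SNPs:
    # each SNP contributes 2 * RAF on average.
    return sum(2 * f for f in risk_allele_freqs)

# Hypothetical RAFs at four NAFLD-associated SNPs in two populations.
populations = {
    "pop_A": [0.30, 0.45, 0.10, 0.60],
    "pop_B": [0.20, 0.40, 0.05, 0.50],
}
scores = {name: additive_risk_score(rafs) for name, rafs in populations.items()}
```

Population-level scores computed this way can then be correlated with prevalence, as the study does for the HapMap populations.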

  15. In Vitro and in Silico Analyses for Predicting Hepatic Cytochrome P450-Dependent Metabolic Potencies of Polychlorinated Biphenyls in the Baikal Seal.

    Science.gov (United States)

    Yoo, Jean; Hirano, Masashi; Mizukawa, Hazuki; Nomiyama, Kei; Agusa, Tetsuro; Kim, Eun-Young; Tanabe, Shinsuke; Iwata, Hisato

    2015-12-15

    The aim of this study was to understand the cytochrome P450 (CYP)-dependent metabolic pathway and potency of polychlorinated biphenyls (PCBs) in the Baikal seal (Pusa sibirica). In vitro metabolism of 62 PCB congener mixtures was investigated by using liver microsomes of this species. A decreased ratio of over 20% was observed for CB3, CB4, CB8, CB15, CB19, CB22, CB37, CB54, CB77, and CB105, suggesting the preferential metabolism of low-chlorinated PCBs by CYPs. The highly activated metabolic pathways in Baikal seals that were predicted from the decreased PCBs and detected hydroxylated PCBs (OH-PCBs) were CB22 to 4'OH-CB20 and CB77 to 4'OH-CB79. The total amount of OH-PCBs detected as identified and unidentified congeners accounted for only a 3.8 ± 1.7 mol % of loaded PCBs, indicating many unknown PCB metabolic pathways. To explore factors involved in CYP-dependent PCB metabolism, we examined the relationships among the structural and physicochemical properties of PCBs, the in silico PCB-CYP docking parameters, and the in vitro PCB decreased ratios by principal component analysis. Statistical analysis showed that the decreased PCB ratio was at least partly accounted for by the substituted chlorine number of PCBs and the distance from the Cl-unsubstituted carbon of docked PCBs to the heme Fe in CYP2A and 2B. PMID:26579933
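    The principal component analysis step can be sketched in NumPy; the congener descriptors below are synthetic stand-ins for the structural, physicochemical, and docking parameters used in the study:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical descriptors per congener: [chlorine number, docked
# Cl-unsubstituted-carbon-to-heme-Fe distance, decreased ratio];
# correlated synthetic data, not the study's measurements.
n = 50
cl = rng.integers(1, 10, n).astype(float)
dist = 3.0 + 0.5 * cl + rng.normal(0, 0.3, n)
dec = 1.0 - 0.1 * cl + rng.normal(0, 0.05, n)
X = np.column_stack([cl, dist, dec])

# PCA: centre the data, then eigendecompose the covariance matrix.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (n - 1)
eigval, eigvec = np.linalg.eigh(cov)
order = np.argsort(eigval)[::-1]
explained = eigval[order] / eigval.sum()
```

With strongly correlated descriptors, the first component dominates, which is the kind of structure PCA is used to expose in the study's relationship analysis.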

  16. Effects of pharmacists' interventions on appropriateness of prescribing and evaluation of the instruments' (MAI, STOPP and START) ability to predict hospitalization--analyses from a randomized controlled trial.

    Directory of Open Access Journals (Sweden)

    Ulrika Gillespie

    BACKGROUND: Appropriateness of prescribing can be assessed by various measures and screening instruments. The aims of this study were to investigate the effects of pharmacists' interventions on appropriateness of prescribing in elderly patients, and to explore the relationship between these results and hospital care utilization during a 12-month follow-up period. METHODS: The study population from a previous randomized controlled study, in which the effects of a comprehensive pharmacist intervention on re-hospitalization were investigated, was used. The criteria from the instruments MAI, STOPP and START were applied retrospectively to the 368 study patients (intervention group (I) n = 182, control group (C) n = 186). The assessments were done on admission and at discharge to detect differences over time and between the groups. Hospital care consumption was recorded and the association between scores for appropriateness and hospitalization was analysed. RESULTS: The number of Potentially Inappropriate Medicines (PIMs) per patient as identified by STOPP was reduced for I but not for C (1.42 to 0.93 vs. 1.46 to 1.66, respectively; p<0.01). The number of Potential Prescription Omissions (PPOs) per patient as identified by START was reduced for I but not for C (0.36 to 0.09 vs. 0.42 to 0.45, respectively; p<0.001). The summated score for MAI was reduced for I but not for C (8.5 to 5.0 vs. 8.7 to 10.0, respectively; p<0.001). There was a positive association between scores for MAI and STOPP and drug-related readmissions (RR 8-9% and 30-34%, respectively). No association was detected between the scores of the tools and total re-visits to hospital. CONCLUSION: The interventions significantly improved the appropriateness of prescribing for patients in the intervention group as evaluated by the instruments MAI, STOPP and START. High scores in MAI and STOPP were associated with a higher number of drug-related readmissions.

  17. A new tool for prediction and analysis of thermal comfort in steady and transient states

    Energy Technology Data Exchange (ETDEWEB)

    Megri, A.Ch. [Illinois Institute of Technology, Civil and Architectural Engineering Dept., Chicago, Illinois (United States); Megri, A.F. [Centre Universitaire de Tebessa, Dept. d' Electronique (Algeria); El Naqa, I. [Washington Univ., School of Medicine, Dept. of Radiation Oncology, Saint Louis, Missouri (United States); Achard, G. [Universite de Savoie, Lab. Optimisation de la Conception et Ingenierie de L' Environnement (LOCIE) - ESIGEC, 73 - Le Bourget du Lac (France)

    2006-02-15

    Thermal comfort is influenced by psychological as well as physiological factors. This paper proposes the use of support vector machine (SVM) learning for automated prediction of human thermal comfort in steady and transient states. The SVM is an artificial intelligence approach that can capture the input/output mapping from the given data. Support vector machines were developed based on the Structural Risk Minimization principle. Different sets of representative experimental environmental factors that affect a homogeneous person's thermal balance were used for training the SVM. The SVM is a very efficient, fast, and accurate technique for identifying thermal comfort. This technique permits the determination of thermal comfort indices for different sub-categories of people, such as the sick and the elderly, or people in extreme climatic conditions, when experimental data for such a sub-category are available. The experimental data have been used for the learning and testing processes. The results show a good correlation between SVM-predicted values and those obtained from conventional thermal comfort models, such as the Fanger and Gagge models. The 'trained machine', built on representative data, could be used easily and effectively in comparison with other conventional methods for estimating the different indices. (author)
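    As a stand-in for the SVM described (scikit-learn's SVR would be the obvious off-the-shelf choice), a closely related kernel method, RBF kernel ridge regression, can capture the same nonlinear input/output mapping; the environmental factors and comfort votes below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic environmental factors: air temperature (C) and relative
# humidity (%), mapped to an illustrative comfort vote in roughly [-3, 3].
T = rng.uniform(16, 32, 120)
RH = rng.uniform(20, 80, 120)
y = 0.4 * (T - 24) + 0.01 * (RH - 50) + rng.normal(0, 0.1, 120)
X = np.column_stack([(T - 24) / 8, (RH - 50) / 30])  # scaled inputs

def rbf(A, B, gamma=1.0):
    # Gaussian (RBF) kernel matrix between two sets of points.
    d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
    return np.exp(-gamma * d2)

# Kernel ridge regression: fit dual coefficients with a small ridge term.
K = rbf(X, X)
alpha = np.linalg.solve(K + 1e-2 * np.eye(len(X)), y)

def predict(Xnew):
    return rbf(Xnew, X) @ alpha
```

Trained on representative data, such a model predicts a comfort index for new environmental conditions, which is the role the paper assigns to the SVM.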

  18. Life Course and Intergenerational Continuity of Intimate Partner Aggression and Physical Injury: A 20-Year Study.

    Science.gov (United States)

    Knight, Kelly E; Menard, Scott; Simmons, Sara B; Bouffard, Leana A; Orsi, Rebecca

    2016-01-01

    The objective of this study is to examine continuity of intimate partner aggression (IPA), which is defined as repeated annual involvement in IPA, across respondents' life course and into the next generation, where it may emerge among adult children. A national, longitudinal, and multigenerational sample of 1,401 individuals and their adult children is analyzed. Annual data on IPA severity and physical injury were collected by the National Youth Survey Family Study across a 20-year period from 1984 to 2004. Three hypotheses and biological sex differences are tested and effect sizes are estimated. First, findings reveal evidence for life course continuity (IPA is a strong predictor of subsequent IPA), but the overall trend decreases over time. Second, intergenerational continuity is documented (parents' IPA predicts adult children's IPA), but the effect is stronger for female than for male adult children. Third, results from combined and separate, more restrictive, measures of victimization and perpetration are nearly identical except in the intergenerational analyses. Fourth, evidence for continuity is not found when assessing physical injury alone. Together, these findings imply that some but not all forms of IPA are common, continuous, and intergenerational. Life course continuity appears stronger than intergenerational continuity. PMID:27076093

  19. Structural analysis of point mutations at the Vaccinia virus A20/D4 interface.

    Science.gov (United States)

    Contesto-Richefeu, Céline; Tarbouriech, Nicolas; Brazzolotto, Xavier; Burmeister, Wim P; Peyrefitte, Christophe N; Iseni, Frédéric

    2016-09-01

    The Vaccinia virus polymerase holoenzyme is composed of three subunits: E9, the catalytic DNA polymerase subunit; D4, a uracil-DNA glycosylase; and A20, a protein with no known enzymatic activity. The D4/A20 heterodimer is the DNA polymerase cofactor, the function of which is essential for processive DNA synthesis. The recent crystal structure of D4 bound to the first 50 amino acids of A20 (D4/A20(1-50)) revealed the importance of three residues, forming a cation-π interaction at the dimerization interface, for complex formation. These are Arg167 and Pro173 of D4 and Trp43 of A20. Here, the crystal structures of the three mutants D4-R167A/A20(1-50), D4-P173G/A20(1-50) and D4/A20(1-50)-W43A are presented. The D4/A20 interface of the three structures has been analysed for atomic solvation parameters and cation-π interactions. This study confirms previous biochemical data and also points out the importance for stability of the restrained conformational space of Pro173. Moreover, these new structures will be useful for the design and rational improvement of known molecules targeting the D4/A20 interface. PMID:27599859

  20. Adiponectin induces A20 expression in adipose tissue to confer metabolic benefit.

    Science.gov (United States)

    Hand, Laura E; Usan, Paola; Cooper, Garth J S; Xu, Lance Y; Ammori, Basil; Cunningham, Peter S; Aghamohammadzadeh, Reza; Soran, Handrean; Greenstein, Adam; Loudon, Andrew S I; Bechtold, David A; Ray, David W

    2015-01-01

    Obesity is a major risk factor for metabolic disease, with white adipose tissue (WAT) inflammation emerging as a key underlying pathology. We detail that mice lacking Reverbα exhibit enhanced fat storage without the predicted increased WAT inflammation or loss of insulin sensitivity. In contrast to most animal models of obesity and obese human patients, Reverbα(-/-) mice exhibit elevated serum adiponectin levels and increased adiponectin secretion from WAT explants in vitro, highlighting a potential anti-inflammatory role of this adipokine in hypertrophic WAT. Indeed, adiponectin was found to suppress primary macrophage responses to lipopolysaccharide and proinflammatory fatty acids, and this suppression depended on glycogen synthase kinase 3β activation and induction of A20. Attenuated inflammatory responses in Reverbα(-/-) WAT depots were associated with tonic elevation of A20 protein and ex vivo shown to depend on A20. We also demonstrate that adipose A20 expression in obese human subjects exhibits a negative correlation with measures of insulin sensitivity. Furthermore, bariatric surgery-induced weight loss was accompanied by enhanced WAT A20 expression, which is positively correlated with increased serum adiponectin and improved metabolic and inflammatory markers, including C-reactive protein. The findings identify A20 as a mediator of adiponectin anti-inflammatory action in WAT and a potential target for mitigating obesity-related pathology. PMID:25190567

  1. Network class superposition analyses.

    Science.gov (United States)

    Pearson, Carl A B; Zeng, Chen; Simha, Rahul

    2013-01-01

    Networks are often used to understand a whole system by modeling the interactions among its pieces. Examples include biomolecules in a cell interacting to provide some primary function, or species in an environment forming a stable community. However, these interactions are often unknown; instead, the pieces' dynamic states are known, and network structure must be inferred. Because observed function may be explained by many different networks (e.g., ≈ 10^30 for the yeast cell cycle process), considering dynamics beyond this primary function means picking a single network or suitable sample: measuring over all networks exhibiting the primary function is computationally infeasible. We circumvent that obstacle by calculating the network class ensemble. We represent the ensemble by a stochastic matrix T, which is a transition-by-transition superposition of the system dynamics for each member of the class. We present concrete results for T derived from boolean time series dynamics on networks obeying the Strong Inhibition rule, by applying T to several traditional questions about network dynamics. We show that the distribution of the number of point attractors can be accurately estimated with T. We show how to generate Derrida plots based on T. We show that T-based Shannon entropy outperforms other methods at selecting experiments to further narrow the network structure. We also outline an experimental test of predictions based on T. We motivate all of these results in terms of a popular molecular biology boolean network model for the yeast cell cycle, but the methods and analyses we introduce are general. We conclude with open questions for T, for example, application to other models, computational considerations when scaling up to larger systems, and other potential analyses. PMID:23565141
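    The construction of T can be sketched on a toy class of two-node boolean networks: each member contributes its deterministic transition matrix, and T is their average. The single-input update rules below define a hypothetical class, not the Strong Inhibition yeast-cell-cycle networks of the paper:

```python
import itertools, math

N = 2                      # two boolean nodes: 4 possible states
STATES = list(itertools.product([0, 1], repeat=N))

# A toy "class" of networks: every pair of single-input update rules,
# where node i's next value is a function of the other node's value.
RULES = [lambda v: 0, lambda v: 1, lambda v: v, lambda v: 1 - v]

def transition_matrix(f0, f1):
    # Deterministic 4x4 transition matrix for one member network.
    Tm = [[0.0] * 4 for _ in range(4)]
    for s, (a, b) in enumerate(STATES):
        t = STATES.index((f0(b), f1(a)))
        Tm[s][t] = 1.0
    return Tm

members = [transition_matrix(f0, f1) for f0 in RULES for f1 in RULES]
T = [[sum(m[i][j] for m in members) / len(members) for j in range(4)]
     for i in range(4)]

# Diagonal of T: expected number of point attractors (fixed points)
# across the class; row entropy: how unconstrained each transition is.
expected_fixed = sum(T[i][i] for i in range(4))
entropy = [-sum(p * math.log2(p) for p in row if p > 0) for row in T]
```

This mirrors the abstract's two uses of T: estimating the distribution of point attractors from its diagonal, and using Shannon entropy of its rows to rank which transitions an experiment would most constrain.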

  2. Cruise>Climate Variability and Predictability (CLIVAR) A22,A20 (AT20, EM122)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The hydrographic surveys will consist of approximately 180 full water column CTD/LADCP casts along the trackline. Each cast will acquire up to 36 water samples on...

  3. Novel A20-gene-eluting stent inhibits carotid artery restenosis in a porcine model

    Science.gov (United States)

    Zhou, Zhen-hua; Peng, Jing; Meng, Zhao-you; Chen, Lin; Huang, Jia-Lu; Huang, He-qing; Li, Li; Zeng, Wen; Wei, Yong; Zhu, Chu-Hong; Chen, Kang-Ning

    2016-01-01

    Background Carotid artery stenosis is a major risk factor for ischemic stroke. Although carotid angioplasty and stenting using an embolic protection device has been introduced as a less invasive carotid revascularization approach, in-stent restenosis limits its long-term efficacy and safety. The objective of this study was to test the anti-restenosis effects of local stent-mediated delivery of the A20 gene in a porcine carotid artery model. Materials and methods The pCDNA3.1EHA20 was firmly attached onto stents that had been collagen coated and treated with N-succinimidyl-3-(2-pyridyldithiol)propionate solution and anti-DNA immunoglobulin fixation. Anti-restenosis effects of modified vs control (the bare-metal stent and pCDNA3.1 void vector) stents were assessed by Western blot and scanning electron microscopy, as well as by morphological and inflammatory reaction analyses. Results Stent-delivered A20 gene was locally expressed in porcine carotids in association with significantly greater extent of re-endothelialization at day 14 and of neointimal hyperplasia inhibition at 3 months than stenting without A20 gene expression. Conclusion The A20-gene-eluting stent inhibits neointimal hyperplasia while promoting re-endothelialization and therefore constitutes a novel potential alternative to prevent restenosis while minimizing complications. PMID:27540277

  4. Conceptual Nuclear Design of a 20 MW Multipurpose Research Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Seo, Chul Gyo; Kim, Hak Sung; Park, Cheol [KAERI, Daejeon (Korea, Republic of); Nghiem, Huynh Ton; Vinh, Le Vinh; Dang, Vo Doan Hai [Dalat Nuclear Research Reactor, Hanoi (Viet Nam)

    2007-08-15

    A conceptual nuclear design of a 20 MW multipurpose research reactor for Vietnam has been jointly performed by KAERI and the DNRI (VAEC). The AHR reference core in this report is a light-water-cooled, heavy-water-reflected, open-tank-in-pool type multipurpose research reactor with a power of 20 MW. Rod-type fuel of dispersed U{sub 3}Si{sub 2}-Al with a density of 4.0 gU/cc is used. The core consists of fourteen 36-element assemblies and four 18-element assemblies, and has three in-core irradiation sites. The reflector tank filled with heavy water surrounds the core and provides space for various irradiation holes. Major analyses have been done for the relevant nuclear design parameters such as the neutron flux and power distributions, reactivity coefficients, control rod worths, etc. For the analysis, the MCNP, MVP, and HELIOS codes were used by KAERI and DNRI (VAEC). The results by MCNP (KAERI) and MVP (DNRI) showed good agreement and can be summarized as follows. For a clean, unperturbed core condition, in which the fuel is all fresh and there are no irradiation holes in the reflector region, the fast neutron flux (E{sub n}{>=}1.0 MeV) reaches 1.47x10{sup 14} n/cm{sup 2}s and the maximum thermal neutron flux (E{sub n}{<=}0.625 eV) reaches 4.43x10{sup 14} n/cm{sup 2}s in the core region. In the reflector region, the thermal neutron peak occurs about 28 cm from the core center and the maximum thermal neutron flux is estimated to be 4.09x10{sup 14} n/cm{sup 2}s. For the analysis of the equilibrium cycle core, the irradiation facilities in the reflector region were considered. The cycle length was estimated as 38 days with a refueling scheme of replacing three 36-element fuel assemblies or replacing two 36-element and one 18-element fuel assemblies. The excess reactivity at BOC was 103.4 mk, and a minimum of 24.6 mk was reserved at EOC. The assembly-average discharge burnup was 54.6% of the initial U-235 loading. For the proposed fuel management…

  5. Multicultural Counseling Competencies Research: A 20-Year Content Analysis

    Science.gov (United States)

    Worthington, Roger L.; Soth-McNett, Angela M.; Moreno, Matthew V.

    2007-01-01

    The authors conducted a 20-year content analysis of the entire field of empirical research on the multicultural counseling competencies (D. W. Sue et al., 1982). They conducted an exhaustive search for empirical research articles using PsycINFO, as well as complete reviews of the past 20 years of several journals (e.g., Journal of Counseling…

  6. Novel A20-gene-eluting stent inhibits carotid artery restenosis in a porcine model

    Directory of Open Access Journals (Sweden)

    Zhou ZH

    2016-08-01

    Full Text Available Zhen-hua Zhou,1 Jing Peng,1 Zhao-you Meng,1 Lin Chen,1 Jia-Lu Huang,1 He-qing Huang,1 Li Li,2 Wen Zeng,2 Yong Wei,2 Chu-Hong Zhu,2 Kang-Ning Chen1 1Department of Neurology, Cerebrovascular Disease Research Institute, Southwest Hospital, 2Department of Anatomy, Key Laboratory for Biomechanics of Chongqing, Third Military Medical University, Chongqing, People’s Republic of China. Background: Carotid artery stenosis is a major risk factor for ischemic stroke. Although carotid angioplasty and stenting using an embolic protection device has been introduced as a less invasive carotid revascularization approach, in-stent restenosis limits its long-term efficacy and safety. The objective of this study was to test the anti-restenosis effects of local stent-mediated delivery of the A20 gene in a porcine carotid artery model. Materials and methods: The pCDNA3.1EHA20 was firmly attached onto stents that had been collagen coated and treated with N-succinimidyl-3-(2-pyridyldithiol)propionate solution and anti-DNA immunoglobulin fixation. Anti-restenosis effects of modified vs control (the bare-metal stent and pCDNA3.1 void vector) stents were assessed by Western blot and scanning electron microscopy, as well as by morphological and inflammatory reaction analyses. Results: Stent-delivered A20 gene was locally expressed in porcine carotids in association with significantly greater extent of re-endothelialization at day 14 and of neointimal hyperplasia inhibition at 3 months than stenting without A20 gene expression. Conclusion: The A20-gene-eluting stent inhibits neointimal hyperplasia while promoting re-endothelialization and therefore constitutes a novel potential alternative to prevent restenosis while minimizing complications. Keywords: restenosis, A20, gene therapy, stent, endothelialization

  7. Sproglig Metode og Analyse

    DEFF Research Database (Denmark)

    le Fevre Jakobsen, Bjarne

    The publication contains exercise materials, texts, PowerPoint presentations, and handouts for the course Sproglig Metode og Analyse (Linguistic Method and Analysis) in the BA programme and as an elective in Danish/Nordic studies, 2010-2011.

  8. Uncertainty and Sensitivity Analyses Plan

    Energy Technology Data Exchange (ETDEWEB)

    Simpson, J.C.; Ramsdell, J.V. Jr.

    1993-04-01

    Hanford Environmental Dose Reconstruction (HEDR) Project staff are developing mathematical models to be used to estimate the radiation dose that individuals may have received as a result of emissions since 1944 from the US Department of Energy's (DOE) Hanford Site near Richland, Washington. An uncertainty and sensitivity analyses plan is essential to understand and interpret the predictions from these mathematical models. This is especially true in the case of the HEDR models where the values of many parameters are unknown. This plan gives a thorough documentation of the uncertainty and hierarchical sensitivity analysis methods recommended for use on all HEDR mathematical models. The documentation includes both technical definitions and examples. In addition, an extensive demonstration of the uncertainty and sensitivity analysis process is provided using actual results from the Hanford Environmental Dose Reconstruction Integrated Codes (HEDRIC). This demonstration shows how the approaches used in the recommended plan can be adapted for all dose predictions in the HEDR Project.
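The parameter-uncertainty propagation that such a plan formalizes can be sketched with a toy Monte Carlo example. The dose model and input distributions below are hypothetical stand-ins for illustration, not the HEDR models themselves:

```python
import random
import statistics

# Hypothetical toy dose model (illustrative only, not a HEDR model):
# dose grows with release and dispersion, falls with distance.
def dose_model(release, dispersion, distance):
    return release * dispersion / distance

random.seed(42)

# Sample the uncertain inputs from assumed distributions and
# propagate each sample through the model.
doses = []
for _ in range(10_000):
    release = random.lognormvariate(0.0, 0.5)   # unitless toy values
    dispersion = random.uniform(0.5, 1.5)
    distance = random.uniform(5.0, 15.0)
    doses.append(dose_model(release, dispersion, distance))

# Summarize the resulting dose uncertainty with an empirical
# mean and a 90% interval.
mean = statistics.fmean(doses)
doses.sort()
p05 = doses[int(0.05 * len(doses))]
p95 = doses[int(0.95 * len(doses))]
print(f"mean={mean:.3f}, 90% interval=({p05:.3f}, {p95:.3f})")
```

A sensitivity analysis would then rank `release`, `dispersion`, and `distance` by how much of the spread in `doses` each one explains.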

  9. Uncertainty and Sensitivity Analyses Plan

    International Nuclear Information System (INIS)

    Hanford Environmental Dose Reconstruction (HEDR) Project staff are developing mathematical models to be used to estimate the radiation dose that individuals may have received as a result of emissions since 1944 from the US Department of Energy's (DOE) Hanford Site near Richland, Washington. An uncertainty and sensitivity analyses plan is essential to understand and interpret the predictions from these mathematical models. This is especially true in the case of the HEDR models where the values of many parameters are unknown. This plan gives a thorough documentation of the uncertainty and hierarchical sensitivity analysis methods recommended for use on all HEDR mathematical models. The documentation includes both technical definitions and examples. In addition, an extensive demonstration of the uncertainty and sensitivity analysis process is provided using actual results from the Hanford Environmental Dose Reconstruction Integrated Codes (HEDRIC). This demonstration shows how the approaches used in the recommended plan can be adapted for all dose predictions in the HEDR Project

  10. A 20 GHz circularly polarized, fan beam slot array antenna

    Science.gov (United States)

    Weikle, D. C.

    1982-03-01

    An EHF waveguide slot array was developed for possible use as a receive-only paging antenna for ground mobile terminals. The design, fabrication, and measured performance of this antenna are presented. The antenna generates a circularly polarized fan beam that is narrow in azimuth and broad in elevation. When mechanically rotated in azimuth, it can receive a 20 GHz satellite transmission independent of mobile terminal direction. Azimuth plane sidelobe levels, which are typically <-40 dB from the main lobe, provide for discrimination against ground and airborne jammers.

  11. Laser Beam Focus Analyser

    DEFF Research Database (Denmark)

    Nielsen, Peter Carøe; Hansen, Hans Nørgaard; Olsen, Flemming Ove;

    2007-01-01

    The quantitative and qualitative description of laser beam characteristics is important for process implementation and optimisation. In particular, a need for quantitative characterisation of beam diameter was identified when using fibre lasers for micro manufacturing. Here the beam diameter limits the obtainable features in direct laser machining as well as heat affected zones in welding processes. This paper describes the development of a measuring unit capable of analysing beam shape and diameter of lasers to be used in manufacturing processes. The analyser is based on the principle of a rotating mechanical wire being swept through the laser beam at varying Z-heights. The reflected signal is analysed and the resulting beam profile determined. The development comprised the design of a flexible fixture capable of providing both rotation and Z-axis movement, control software including data capture…

  12. Antiferromagnetism in a 20% Ho-80% Tb alloy single crystal

    DEFF Research Database (Denmark)

    Lebech, Bente

    1968-01-01

    20% Ho-80% Tb exhibits two magnetic phases, similar to those of Tb. The spiral turn angle varies from 31.1° to 21.4°. A minimum effective spin for the occurrence of stable simple ferromagnetic structure at low temperatures is predicted.

  13. Report sensory analyses veal

    OpenAIRE

    Veldman, M.; Schelvis-Smit, A.A.M.

    2005-01-01

    On behalf of a client of Animal Sciences Group, different varieties of veal were analyzed by both instrumental and sensory analyses. The sensory evaluation was performed with a sensory analytical panel in the period of 13th of May and 31st of May, 2005. The three varieties of veal were: young bull, pink veal and white veal. The sensory descriptive analyses show that the three groups Young bulls, pink veal and white veal, differ significantly in red colour for the raw meat as well as the baked...

  14. A 20 MHz CMOS reorder buffer for a superscalar microprocessor

    Science.gov (United States)

    Lenell, John; Wallace, Steve; Bagherzadeh, Nader

    1992-01-01

    Superscalar processors can achieve increased performance by issuing instructions out-of-order from the original sequential instruction stream. Implementing an out-of-order instruction issue policy requires a hardware mechanism to prevent incorrectly executed instructions from updating register values. A reorder buffer can be used to allow a superscalar processor to issue instructions out-of-order and maintain program correctness. This paper describes the design and implementation of a 20 MHz CMOS reorder buffer for superscalar processors. The reorder buffer is designed to accept and retire two instructions per cycle. A full-custom layout in a 1.2-micron process has been implemented, measuring 1.1058 mm by 1.3542 mm.
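The in-order-retirement mechanism the abstract describes can be sketched as a behavioral model. The names (`allocate`, `complete`, `retire`) and the FIFO-plus-scoreboard structure below are illustrative assumptions, not the paper's circuit:

```python
from collections import deque

class ReorderBuffer:
    """Behavioral sketch of a reorder buffer: entries are allocated in
    program order, results may complete out of order, and up to
    retire_width finished instructions retire per cycle, in order."""

    def __init__(self, size=8, retire_width=2):
        self.size = size
        self.retire_width = retire_width
        self.entries = deque()   # (tag, dest_reg), oldest first
        self.done = {}           # tag -> completed result value

    def allocate(self, tag, dest_reg):
        if len(self.entries) >= self.size:
            raise RuntimeError("ROB full: issue must stall")
        self.entries.append((tag, dest_reg))

    def complete(self, tag, value):
        self.done[tag] = value   # results may arrive in any order

    def retire(self, regfile):
        retired = 0
        while self.entries and retired < self.retire_width:
            tag, dest = self.entries[0]
            if tag not in self.done:
                break            # head unfinished: retirement stalls
            regfile[dest] = self.done.pop(tag)   # commit state in order
            self.entries.popleft()
            retired += 1
        return retired

regs = {}
rob = ReorderBuffer(size=4, retire_width=2)
rob.allocate("i0", "r1")
rob.allocate("i1", "r2")
rob.complete("i1", 5)            # younger instruction finishes first
assert rob.retire(regs) == 0     # head "i0" unfinished: nothing retires
rob.complete("i0", 3)
assert rob.retire(regs) == 2     # both retire, in program order
assert regs == {"r1": 3, "r2": 5}
```

Because the register file is only updated at retirement, a mispredicted or faulting instruction's entries can simply be discarded before they reach the head.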

  15. Meta-analyses

    NARCIS (Netherlands)

    Hendriks, M.A.; Luyten, J.W.; Scheerens, J.; Sleegers, P.J.C.; Scheerens, J.

    2014-01-01

    In this chapter results of a research synthesis and quantitative meta-analyses of three facets of time effects in education are presented, namely time at school during regular lesson hours, homework, and extended learning time. The number of studies for these three facets of time that could be used

  16. Probabilistic safety analyses (PSA)

    International Nuclear Information System (INIS)

    The guide describes how probabilistic safety analyses (PSA) are used in the design, construction and operation of light water reactor plants in order to help ensure that plant safety is adequate in all operational states.

  17. Wavelet Analyses and Applications

    Science.gov (United States)

    Bordeianu, Cristian C.; Landau, Rubin H.; Paez, Manuel J.

    2009-01-01

    It is shown how a modern extension of Fourier analysis known as wavelet analysis is applied to signals containing multiscale information. First, a continuous wavelet transform is used to analyse the spectrum of a nonstationary signal (one whose form changes in time). The spectral analysis of such a signal gives the strength of the signal in each…

  18. Report sensory analyses veal

    NARCIS (Netherlands)

    Veldman, M.; Schelvis-Smit, A.A.M.

    2005-01-01

    On behalf of a client of Animal Sciences Group, different varieties of veal were analyzed by both instrumental and sensory analyses. The sensory evaluation was performed with a sensory analytical panel in the period of 13th of May and 31st of May, 2005. The three varieties of veal were: young bull,

  19. Possible future HERA analyses

    CERN Document Server

    Geiser, Achim

    2015-01-01

    A variety of possible future analyses of HERA data in the context of the HERA data preservation programme is collected, motivated, and commented. The focus is placed on possible future analyses of the existing $ep$ collider data and their physics scope. Comparisons to the original scope of the HERA programme are made, and cross references to topics also covered by other participants of the workshop are given. This includes topics on QCD, proton structure, diffraction, jets, hadronic final states, heavy flavours, electroweak physics, and the application of related theory and phenomenology topics like NNLO QCD calculations, low-x related models, nonperturbative QCD aspects, and electroweak radiative corrections. Synergies with other collider programmes are also addressed. In summary, the range of physics topics which can still be uniquely covered using the existing data is very broad and of considerable physics interest, often matching the interest of results from colliders currently in operation. Due to well-e...

  20. Statistisk analyse med SPSS

    OpenAIRE

    Linnerud, Kristin; Oklevik, Ove; Slettvold, Harald

    2004-01-01

    This note has its origin in lectures and teaching for third-year students in economics and administration at Sogn og Fjordane University College. The note is particularly oriented toward the SPSS instruction in the two courses ”OR 685 Marknadsanalyse og merkevarestrategi” and ”BD 616 Økonomistyring og analyse med programvare”.

  1. Biomass feedstock analyses

    Energy Technology Data Exchange (ETDEWEB)

    Wilen, C.; Moilanen, A.; Kurkela, E. [VTT Energy, Espoo (Finland). Energy Production Technologies

    1996-12-31

    The overall objectives of the project 'Feasibility of electricity production from biomass by pressurized gasification systems' within the EC Research Programme JOULE II were to evaluate the potential of advanced power production systems based on biomass gasification and to study the technical and economic feasibility of these new processes with different types of biomass feedstocks. This report was prepared as part of this R and D project. The objectives of this task were to perform fuel analyses of potential woody and herbaceous biomasses with specific regard to the gasification properties of the selected feedstocks. The analyses of 15 Scandinavian and European biomass feedstocks included density, proximate and ultimate analyses, trace compounds, ash composition and fusion behaviour in oxidizing and reducing atmospheres. The wood-derived fuels, such as whole-tree chips, forest residues, bark and to some extent willow, can be expected to have good gasification properties. Difficulties caused by ash fusion and sintering in straw combustion and gasification are generally known. The ash and alkali metal contents of the European biomasses harvested in Italy resembled those of the Nordic straws, and they are expected to behave largely like straw in gasification. No direct relation between the ash fusion behaviour (determined according to the standard method) and, for instance, the alkali metal content was found in the laboratory determinations. A more profound characterisation of the fuels would require gasification experiments in a thermobalance and a PDU (Process Development Unit) rig. (orig.) (10 refs.)

  2. Possible future HERA analyses

    Energy Technology Data Exchange (ETDEWEB)

    Geiser, Achim

    2015-12-15

    A variety of possible future analyses of HERA data in the context of the HERA data preservation programme is collected, motivated, and commented. The focus is placed on possible future analyses of the existing ep collider data and their physics scope. Comparisons to the original scope of the HERA programme are made, and cross references to topics also covered by other participants of the workshop are given. This includes topics on QCD, proton structure, diffraction, jets, hadronic final states, heavy flavours, electroweak physics, and the application of related theory and phenomenology topics like NNLO QCD calculations, low-x related models, nonperturbative QCD aspects, and electroweak radiative corrections. Synergies with other collider programmes are also addressed. In summary, the range of physics topics which can still be uniquely covered using the existing data is very broad and of considerable physics interest, often matching the interest of results from colliders currently in operation. Due to well-established data and MC sets, calibrations, and analysis procedures the manpower and expertise needed for a particular analysis is often very much smaller than that needed for an ongoing experiment. Since centrally funded manpower to carry out such analyses is not available any longer, this contribution not only targets experienced self-funded experimentalists, but also theorists and master-level students who might wish to carry out such an analysis.

  3. THOR Turbulence Electron Analyser: TEA

    Science.gov (United States)

    Fazakerley, Andrew; Moore, Tom; Owen, Chris; Pollock, Craig; Wicks, Rob; Samara, Marilia; Rae, Jonny; Hancock, Barry; Kataria, Dhiren; Rust, Duncan

    2016-04-01

    Turbulence Heating ObserveR (THOR) is the first mission ever flown in space dedicated to plasma turbulence. The Turbulence Electron Analyser (TEA) will measure the plasma electron populations in the mission's Regions of Interest. It will collect a 3D electron velocity distribution with cadences as short as 5 ms. The instrument will be capable of measuring energies up to 30 keV. TEA consists of multiple electrostatic analyser heads arranged so as to measure electrons arriving from look directions covering the full sky, i.e. 4 pi solid angle. The baseline concept is similar to the successful FPI-DES instrument currently operating on the MMS mission. TEA is intended to have a similar angular resolution, but a larger geometric factor. In comparison to earlier missions, TEA improves on the measurement cadence. For example, MMS FPI-DES routinely operates at 30 ms cadence. The objective of measuring distributions at rates as fast as 5 ms is driven by the mission's scientific requirements to resolve electron gyroscale size structures, where plasma heating and fluctuation dissipation is predicted to occur. TEA will therefore be capable of making measurements of the evolution of distribution functions across thin (a few km) current sheets travelling past the spacecraft at up to 600 km/s, of the Power Spectral Density of fluctuations of electron moments and of distributions fast enough to match frequencies with waves expected to be dissipating turbulence (e.g. with 100 Hz whistler waves).
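The 5 ms cadence requirement follows from simple crossing-time arithmetic; the 3 km structure size below is an assumed value for the "few km" current sheets mentioned in the abstract:

```python
# How long does a thin current sheet take to sweep past the spacecraft?
sheet_km = 3.0        # assumed "a few km" current-sheet thickness
speed_km_s = 600.0    # convection speed quoted in the abstract
crossing_ms = 1000.0 * sheet_km / speed_km_s
print(crossing_ms)    # 5.0 -> a 5 ms cadence just resolves such a crossing
```

A 30 ms cadence (as on MMS FPI-DES) would see such a structure as a single sample, which is why TEA targets the shorter interval.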

  4. Description of a 20 Kilohertz power distribution system

    Science.gov (United States)

    Hansen, I. G.

    1986-01-01

    A single phase, 440 VRMS, 20 kHz power distribution system with a regulated sinusoidal wave form is discussed. A single phase power system minimizes the wiring, sensing, and control complexities required in a multi-sourced, redundantly distributed power system. The single phase addresses only the distribution link; techniques for accommodating multiphase lower frequency inputs and outputs are described. While the 440 V operating potential was initially selected for aircraft operating below 50,000 ft, this potential also appears suitable for space power systems. This voltage choice recognizes a reasonable upper limit for semiconductor ratings, yet will permit direct synthesis of 220 V, 3-phase power. A 20 kHz operating frequency was selected to be above the range of audibility and to minimize the weight of reactive components, yet allow the construction of single power stages of 25 to 30 kW. The regulated sinusoidal distribution system has several advantages. With a regulated voltage, most ac/dc conversions involve rather simple transformer-rectifier applications. A sinusoidal distribution system, when used in conjunction with zero-crossing switching, represents a minimal source of EMI. The present state of 20 kHz power technology includes computer control of voltage and/or frequency, low inductance cable, current limiting circuit protection, bi-directional power flow, and motor/generator operation using standard induction machines. A status update and description of each of these items and their significance is presented.
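The claim that 20 kHz operation "minimizes the weight of reactive components" follows from X = 2πfL: for a fixed target reactance, the required inductance (and roughly the magnetics mass) scales as 1/f. A sketch, using a conventional 400 Hz avionics bus as the comparison point (an assumed baseline, not from the abstract):

```python
import math

def inductance_for_reactance(x_ohm, f_hz):
    # For a target reactance X at frequency f:  X = 2*pi*f*L  =>  L = X/(2*pi*f)
    return x_ohm / (2 * math.pi * f_hz)

L_400hz = inductance_for_reactance(10.0, 400.0)      # conventional avionics bus
L_20khz = inductance_for_reactance(10.0, 20_000.0)   # the system described here
ratio = L_20khz / L_400hz
print(ratio)   # about 0.02, i.e. ~50x less inductance needed at 20 kHz
```

The same 1/f scaling applies to the capacitance needed for a given capacitive reactance, which is why the distribution frequency was pushed as high as audibility and power-stage limits allow.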

  5. Digital differential analysers

    CERN Document Server

    Shilejko, A V; Higinbotham, W

    1964-01-01

    Digital Differential Analysers presents the principles, operations, design, and applications of digital differential analyzers, a machine with the ability to present initial quantities and the possibility of dividing them into separate functional units performing a number of basic mathematical operations. The book discusses the theoretical principles underlying the operation of digital differential analyzers, such as the use of the delta-modulation method and function-generator units. Digital integration methods and the classes of digital differential analyzer designs are also reviewed. The te

  6. Wavelet analyses and applications

    Energy Technology Data Exchange (ETDEWEB)

    Bordeianu, Cristian C [Faculty of Physics, University of Bucharest, Bucharest, RO 077125 (Romania); Landau, Rubin H [Department of Physics, Oregon State University, Corvallis, OR 97331 (United States); Paez, Manuel J [Department of Physics, University of Antioquia, Medellin (Colombia)], E-mail: cristian.bordeianu@brahms.fizica.unibuc.ro, E-mail: rubin@science.oregonstate.edu, E-mail: mpaez@fisica.udea.edu.co

    2009-09-15

    It is shown how a modern extension of Fourier analysis known as wavelet analysis is applied to signals containing multiscale information. First, a continuous wavelet transform is used to analyse the spectrum of a nonstationary signal (one whose form changes in time). The spectral analysis of such a signal gives the strength of the signal in each frequency as a function of time. Next, the theory is specialized to discrete values of time and frequency, and the resulting discrete wavelet transform is shown to be useful for data compression. This paper is addressed to a broad community, from undergraduate to graduate students to general physicists and to specialists in other fields than wavelets.
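The discrete wavelet transform mentioned for data compression can be illustrated with one level of the Haar wavelet, the simplest case (the paper's examples may use other wavelet families):

```python
import math

def haar_dwt(signal):
    """One level of the discrete Haar wavelet transform: scaled pairwise
    sums (approximation) and differences (detail)."""
    assert len(signal) % 2 == 0
    s = math.sqrt(2.0)
    approx = [(a + b) / s for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) / s for a, b in zip(signal[::2], signal[1::2])]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of one Haar level: reconstructs the original samples."""
    s = math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) / s)
        out.append((a - d) / s)
    return out
```

For a smooth signal most of the energy lands in the approximation coefficients, so many small detail coefficients can be zeroed or coarsely quantized before storage, which is the compression idea the paper describes.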

  7. Systemdynamisk analyse av vannkraftsystem

    OpenAIRE

    Rydning, Anja

    2007-01-01

    I denne oppgaven er det gjennomført en dynamisk analyse av vannkraftverket Fortun kraftverk. Tre fenomener er særlig vurdert i denne oppgaven: Sjaktsvingninger mellom svingesjakt og magasin, trykkstøt ved turbinen som følge av retardasjonstrykk ved endring i turbinvannføringen og reguleringsstabilitet. Sjaktsvingningene og trykkstøt beregnes analytisk ut fra kontinuitets- og bevegelsesligningen. Modeller av Fortun kraftverk er laget for å beregne trykkstøt og sjaktsvingninger. En modell e...

  8. AMS analyses at ANSTO

    Energy Technology Data Exchange (ETDEWEB)

    Lawson, E.M. [Australian Nuclear Science and Technology Organisation, Lucas Heights, NSW (Australia). Physics Division

    1998-03-01

    The major use of ANTARES is Accelerator Mass Spectrometry (AMS) with {sup 14}C being the most commonly analysed radioisotope - presently about 35 % of the available beam time on ANTARES is used for {sup 14}C measurements. The accelerator measurements are supported by, and dependent on, a strong sample preparation section. The ANTARES AMS facility supports a wide range of investigations into fields such as global climate change, ice cores, oceanography, dendrochronology, anthropology, and classical and Australian archaeology. Described here are some examples of the ways in which AMS has been applied to support research into the archaeology, prehistory and culture of this continent`s indigenous Aboriginal peoples. (author)

  9. Analysis of K-net and Kik-net data: implications for ground motion prediction - acceleration time histories, response spectra and nonlinear site response; Analyse des donnees accelerometriques de K-net et Kik-net: implications pour la prediction du mouvement sismique - accelerogrammes et spectres de reponse - et la prise en compte des effets de site non-lineaire

    Energy Technology Data Exchange (ETDEWEB)

    Pousse, G

    2005-10-15

    This thesis aims to characterize ground motion during earthquakes. The work is based on two Japanese networks and deals with databases of shallow events (depth less than 25 km) with magnitudes between 4.0 and 7.3. The analysis of K-net data makes it possible to derive a spectral ground-motion prediction equation and to review the shape of the Eurocode 8 design spectra. We show the larger amplification at short periods for Japanese data and bring to light the soil amplification that takes place at long periods. In addition, we develop a new empirical model for simulating synthetic stochastic nonstationary acceleration time histories. By specifying magnitude, distance and site effect, this model can produce many of the time histories that a seismic event is liable to produce at the site of interest. Furthermore, the study of near-field borehole records from the Kik-net allows us to explore the validity domain of predictive equations and to explain what happens when ground motion predictions are extrapolated. Finally, we show that nonlinearity reduces the dispersion of ground motion at the surface. (author)
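A "synthetic stochastic nonstationary acceleration time history" can be caricatured as envelope-modulated random noise; the envelope shape and parameters below are illustrative assumptions, not the thesis's fitted empirical model:

```python
import math
import random

random.seed(1)
dt, n = 0.01, 1000      # a 10 s record sampled at 100 Hz
t_peak = 2.0            # assumed time of strongest shaking (s)

def envelope(t):
    # Rises from zero, peaks at t_peak with value 1, then decays:
    # this is what makes the signal nonstationary (its form changes in time).
    return (t / t_peak) * math.exp(1.0 - t / t_peak)

# Gaussian noise shaped by the envelope stands in for the stochastic part.
acc = [envelope(i * dt) * random.gauss(0.0, 1.0) for i in range(n)]
```

In the full approach, magnitude, distance and site effect would control the envelope and the noise's frequency content, so that many equally plausible records can be drawn for one scenario.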

  10. A 20-year simulated climatology of global dust aerosol deposition.

    Science.gov (United States)

    Zheng, Yu; Zhao, Tianliang; Che, Huizheng; Liu, Yu; Han, Yongxiang; Liu, Chong; Xiong, Jie; Liu, Jianhui; Zhou, Yike

    2016-07-01

    Based on a 20-year (1991-2010) simulation of dust aerosol deposition with the global climate model CAM5.1 (Community Atmosphere Model, version 5.1), the spatial and temporal variations of dust aerosol deposition were analyzed using climate statistical methods. The results indicated that the annual amount of global dust aerosol deposition was approximately 1161±31 Mt, with a decreasing trend and an interannual variation range of 2.70% over 1991-2010. The 20-year average ratio of global dust dry to wet deposition was 1.12, with an interannual variation of 2.24%, showing that the quantity of dry deposition of dust aerosol was greater than that of wet deposition. High dry deposition was centered over continental deserts and surrounding regions, while wet deposition was the dominant deposition process over the North Atlantic, North Pacific and northern Indian Ocean. Furthermore, both dry and wet deposition presented a zonal distribution. To examine the regional changes of dust aerosol deposition over land and sea areas, we chose the North Atlantic, Eurasia, northern Indian Ocean, North Pacific and Australia to analyze the interannual and seasonal variations of dust deposition and the dry-to-wet deposition ratio. The deposition amounts of each region showed interannual fluctuations, with the largest variation range at around 26.96% in the northern Indian Ocean area, followed by the North Pacific (16.47%), Australia (9.76%), North Atlantic (9.43%) and Eurasia (6.03%). The northern Indian Ocean also had the greatest amplitude of interannual variation in the dry-to-wet deposition ratio, at 22.41%, followed by the North Atlantic (9.69%), Australia (6.82%), North Pacific (6.31%) and Eurasia (4.36%). Dust aerosol presented a seasonal cycle, with typically strong deposition in spring and summer and weak deposition in autumn and winter. The dust deposition over the northern Indian Ocean exhibited the greatest seasonal change range, at about 118.00%, while the North Atlantic showed the lowest seasonal…
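The statistics quoted here (dry-to-wet ratio, interannual variation range) are straightforward to compute from annual series. The sketch below uses invented annual totals and one common definition of "variation range"; both are assumptions, not the CAM5.1 output or the paper's exact formula:

```python
import statistics

def variation_range_pct(series):
    # One common definition of "interannual variation range":
    # (max - min) / mean, in percent. Assumed here; the paper
    # may use a different statistic.
    return 100.0 * (max(series) - min(series)) / statistics.fmean(series)

# Illustrative annual dry and wet deposition totals (Mt); these are
# invented numbers chosen to give a dry-to-wet ratio near 1.12.
dry = [612, 618, 605, 610, 620, 615, 608, 611, 617, 609]
wet = [548, 552, 540, 545, 555, 550, 543, 546, 551, 544]

total = [d + w for d, w in zip(dry, wet)]
ratio = [d / w for d, w in zip(dry, wet)]

mean_ratio = statistics.fmean(ratio)        # average dry-to-wet ratio
total_var = variation_range_pct(total)      # interannual variation range, %
```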

  11. EEG analyses with SOBI.

    Energy Technology Data Exchange (ETDEWEB)

    Glickman, Matthew R.; Tang, Akaysha (University of New Mexico, Albuquerque, NM)

    2009-02-01

    The motivating vision behind Sandia's MENTOR/PAL LDRD project has been that of systems which use real-time psychophysiological data to support and enhance human performance, both individually and of groups. Relevant and significant psychophysiological data being a necessary prerequisite to such systems, this LDRD has focused on identifying and refining such signals. The project has focused in particular on EEG (electroencephalogram) data as a promising candidate signal because it (potentially) provides a broad window on brain activity with relatively low cost and logistical constraints. We report here on two analyses performed on EEG data collected in this project using the SOBI (Second Order Blind Identification) algorithm to identify two independent sources of brain activity: one in the frontal lobe and one in the occipital. The first study looks at directional influences between the two components, while the second study looks at inferring gender based upon the frontal component.

  12. Characterization of bovine A20 gene: Expression mediated by NF-κB pathway in MDBK cells infected with bovine viral diarrhea virus-1.

    Science.gov (United States)

    Fredericksen, Fernanda; Villalba, Melina; Olavarría, Víctor H

    2016-05-01

    Cytokine production for immunological processes is tightly regulated at the transcriptional and posttranscriptional levels. The NF-κB signaling pathway maintains immune homeostasis in the cell through the participation of molecules such as A20 (TNFAIP3), which is a key regulatory factor in the immune response, hematopoietic differentiation, and immunomodulation. Although A20 has been identified in mammals, and despite recent efforts to identify A20 members in other higher vertebrates, relatively little is known about the composition of this regulator in other classes of vertebrates, particularly bovines. In this study, the genetic context of bovine A20 was explored and compared against homologous genes on the human, mouse, chicken, dog, and zebrafish chromosomes. Through in silico analysis, several regions of interest were found to be conserved even between phylogenetically distant species. Additionally, the deduced protein sequence of bovine A20 showed many domains conserved in humans and mice. Furthermore, all potential amino acid residues implicated in the active site of A20 were conserved. Finally, bovine A20 mRNA expression as mediated by the bovine viral diarrhea virus and poly(I:C) was evaluated. These analyses evidenced a strong fold increase in A20 expression following virus exposure, a phenomenon blocked by a pharmacological NF-κB inhibitor (BAY 117085). Interestingly, A20 mRNA had a half-life of only 32 min, likely due to adenylate- and uridylate-rich elements in the 3'-untranslated region. Collectively, these data identify bovine A20 as a regulator of immune marker expression. Finally, this is the first report to find the bovine viral diarrhea virus modulating bovine A20 activation through the NF-κB pathway. PMID:26809100
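The reported 32 min half-life implies a rapid turnover that can be quantified with simple first-order decay arithmetic (the usual model for mRNA degradation; whether bovine A20 decay is strictly first-order is an assumption here):

```python
import math

HALF_LIFE_MIN = 32.0                    # reported A20 mRNA half-life
lam = math.log(2) / HALF_LIFE_MIN       # first-order decay constant (1/min)

def fraction_remaining(t_min):
    # N(t) = N0 * exp(-lambda * t), so the surviving fraction is exp(-lambda*t)
    return math.exp(-lam * t_min)

print(round(fraction_remaining(32), 3))   # one half-life  -> 0.5
print(round(fraction_remaining(64), 3))   # two half-lives -> 0.25
```

After about two hours (roughly four half-lives) less than a tenth of the transcript remains, consistent with the tight, transient regulation the abstract describes.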

  13. Severe Accident Recriticality Analyses (SARA)

    Energy Technology Data Exchange (ETDEWEB)

Frid, W. [Swedish Nuclear Power Inspectorate, Stockholm (Sweden); Hoejerup, F. [Risoe National Lab. (Denmark); Lindholm, I.; Miettinen, J.; Puska, E.K. [VTT Energy, Helsinki (Finland); Nilsson, Lars [Studsvik Eco and Safety AB, Nykoeping (Sweden); Sjoevall, H. [Teollisuuden Voima Oy (Finland)

    1999-11-01

Recriticality in a BWR has been studied for a total loss of electric power accident scenario. In a BWR, the B{sub 4}C control rods would melt and relocate from the core before the fuel during core uncovery and heat-up. If electric power returns during this time window, unborated water from the ECCS will start to reflood the partly control-rod-free core. Recriticality might then take place, for which the only mitigating mechanisms are the Doppler effect and void formation. In order to assess the impact of recriticality on reactor safety, including accident management measures, the following issues have been investigated in the SARA project: 1. the energy deposition in the fuel during the super-prompt power burst, 2. the quasi-steady-state reactor power following the initial power burst, and 3. the containment response to elevated quasi-steady-state reactor power. The approach was to use three computer codes and to further develop and adapt them for the task. The codes were SIMULATE-3K, APROS and RECRIT. Recriticality analyses were carried out for a number of selected reflooding transients for the Oskarshamn 3 plant in Sweden with SIMULATE-3K and for the Olkiluoto 1 plant in Finland with all three codes. The core state initial and boundary conditions prior to recriticality were studied with the severe accident codes SCDAP/RELAP5, MELCOR and MAAP4. The results of the analyses show that all three codes predict recriticality - both super-prompt power bursts and quasi-steady-state power generation - for the studied range of parameters, i.e. with core uncovery and heat-up to maximum core temperatures around 1800 K and water flow rates of 45 kg/s to 2000 kg/s injected into the downcomer. Since the recriticality takes place in a small fraction of the core, the power densities are high, which results in large energy deposition in the fuel during the power burst in some accident scenarios. The highest value, 418 cal/g, was obtained with SIMULATE-3K for an Oskarshamn 3 case with reflooding.

  14. The application analyses for primary spectrum pyrometer

    Institute of Scientific and Technical Information of China (English)

    FU; TaiRan

    2007-01-01

In applications of primary spectrum pyrometry, application issues such as the measurement range and the measurement partition were investigated through theoretical analyses based on the dynamic range and minimum sensitivity of the sensor. For a developed primary spectrum pyrometer, theoretical predictions of the measurement range and the distributions of the measurement partitions were obtained through numerical simulations. Measurement experiments with a high-temperature blackbody and a standard temperature lamp were then performed to further verify the theoretical analyses and numerical results. The research therefore provides helpful support for the application of primary spectrum pyrometers and other radiation pyrometers.

  15. Website-analyse

    DEFF Research Database (Denmark)

    Thorlacius, Lisbeth

    2009-01-01

planning of the functional and content-related aspects of websites. There is a large body of theory and method books specialising in the technical issues of interaction and navigation, as well as the linguistic content of websites. The Danish HCI (Human Computer Interaction...... hyperfunctional websites. The primary concern of the HCI experts is to produce websites that are user-friendly. According to their guidelines, websites should be built with fast and efficient navigation and interaction structures, so that users can retrieve information without being hampered by long download times...... or dead ends when visiting the site. Studies in the design and analysis of the visual and aesthetic aspects of planning and using websites have, however, only been subjected to reflective treatment to a limited extent. That is the background for this chapter, which opens with a review of aesthetics...

  16. Incidence of hepatitis C infection among prisoners by routine laboratory values during a 20-year period.

    Directory of Open Access Journals (Sweden)

    Andrés Marco

Full Text Available BACKGROUND: To estimate the incidence of hepatitis C virus (HCV) infection and its predictive factors through repeated routine laboratory analyses. METHODS: An observational cohort study was carried out in Quatre Camins Prison, Barcelona. The study included subjects with an initial negative HCV result and routine laboratory analyses containing HCV serology from 1992 to 2011. The incidence of infection was calculated for the study population and for sub-groups per 100 person-years of follow-up (100 py). The predictive factors were determined through Kaplan-Meier curves and a Cox regression. Hazard ratios (HR) and 95% confidence intervals (CI) were calculated. RESULTS: A total of 2,377 prisoners were included, with a median follow-up time of 1,540.9 days per patient. Among the total population, 117 HCV seroconversions were detected (incidence of 1.17/100 py). The incidence was higher between 1992 and 1995 (2.57/100 py), among cases with HIV co-infection (8.34/100 py) and among intravenous drug users (IDU) without methadone treatment (MT) during follow-up (6.66/100 py). The incidence rate of HCV seroconversion among cases with a history of IDU and current MT was 1.35/100 py, which is close to that of the total study population. The following variables had a positive predictive value for HCV infection: IDU (p<0.001; HR = 7.30; CI: 4.83-11.04), Spanish ethnicity (p = 0.009; HR = 2.03; CI: 1.93-3.44) and HIV infection (p = 0.015; HR = 1.97; CI: 1.14-3.39). CONCLUSION: The incidence of HCV infection among prisoners was higher during the first part of the study and among IDU during the entire study period. Preventative programs should be directed toward this sub-group of the prison population.
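The rate reported above is a crude incidence: events divided by accumulated person-time. A minimal sketch of the calculation (the ~10,000 person-years figure is back-calculated from the reported 117 events and 1.17/100 py; it is not stated in the record):

```python
def incidence_per_100py(events, person_years):
    """Crude incidence rate per 100 person-years (py) of follow-up."""
    return 100.0 * events / person_years

# 117 seroconversions at 1.17/100 py implies roughly 10,000 py of follow-up.
print(incidence_per_100py(117, 10000.0))  # 1.17
```

Sub-group rates (e.g. 8.34/100 py for HIV co-infection) follow the same formula restricted to that group's events and person-time.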

  17. 17 CFR 240.13a-20 - Plain English presentation of specified information.

    Science.gov (United States)

    2010-04-01

    ... 17 Commodity and Securities Exchanges 3 2010-04-01 2010-04-01 false Plain English presentation of specified information. 240.13a-20 Section 240.13a-20 Commodity and Securities Exchanges SECURITIES AND... Regulations Under the Securities Exchange Act of 1934 Other Reports § 240.13a-20 Plain English presentation...

  18. A20 Deficiency in Lung Epithelial Cells Protects against Influenza A Virus Infection

    OpenAIRE

    Maelfait, Jonathan; Roose, Kenny; Vereecke, Lars; Mc Guire, Conor; Sze, Mozes; Schuijs, Martijn; Willart, Monique; Ibanez, Lorena; Hammad, Hamida; LAMBRECHT, Bart; Beyaert, Rudi; Saelens, Xavier; Loo, Geert

    2016-01-01

    A20 negatively regulates multiple inflammatory signalling pathways. We here addressed the role of A20 in club cells (also known as Clara cells) of the bronchial epithelium in their response to influenza A virus infection. Club cells provide a niche for influenza virus replication, but little is known about the functions of these cells in antiviral immunity. Using airway epithelial cell-specific A20 knockout (A20(AEC-KO)) mice, we show that A20 in club cells critically controls innate immune r...

  19. Predictable return distributions

    DEFF Research Database (Denmark)

    Pedersen, Thomas Quistgaard

This paper provides detailed insights into predictability of the entire stock and bond return distribution through the use of quantile regression. This allows us to examine specific parts of the return distribution such as the tails or the center, and for a sufficiently fine grid of quantiles we can...... predictable as a function of economic state variables. The results are, however, very different for stocks and bonds. The state variables primarily predict only location shifts in the stock return distribution, while they also predict changes in higher-order moments in the bond return distribution. Out......-of-sample analyses show that the relative accuracy of the state variables in predicting future returns varies across the distribution. A portfolio study shows that an investor with power utility can obtain economic gains by applying the empirical return distribution in portfolio decisions instead of imposing an...
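Quantile regression, as used above, fits each quantile level by minimising the asymmetric "pinball" (check) loss rather than squared error. A minimal, self-contained sketch for the constant-only case (the toy return series is invented for illustration): minimising the pinball loss over candidate constants recovers an empirical quantile of the sample.

```python
def pinball_loss(tau, y, q):
    """Check (pinball) loss of a constant forecast q at quantile level tau:
    positive errors are weighted tau, negative errors (1 - tau)."""
    return sum((yi - q) * (tau - (yi < q)) for yi in y)

def fit_constant_quantile(tau, y):
    """Minimise the pinball loss over candidate constants drawn from y;
    the minimiser is an empirical tau-quantile of the sample."""
    return min(y, key=lambda q: pinball_loss(tau, y, q))

returns = [-3.0, -1.0, 0.5, 1.0, 2.0, 4.0, 5.0]
print(fit_constant_quantile(0.5, returns))  # 1.0, the sample median
```

In a full quantile regression the constant q is replaced by a linear function of the state variables, with one fit per quantile level tau; sweeping tau over a fine grid traces out the conditional return distribution.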

  20. Thermal stability test and analysis of a 20-actuator bimorph deformable mirror

    Institute of Scientific and Technical Information of China (English)

    Ning Yu; Zhou Hong; Yu Hao; Rao Chang-Hui; Jiang Wen-Han

    2009-01-01

One of the important characteristics of adaptive mirrors is the thermal stability of their surface flatness. In this paper, the thermal stability of a 20-actuator bimorph deformable mirror from 13°C to 25°C is tested with a Shack-Hartmann wavefront sensor. Experimental results show that the surface P-V of the bimorph increases nearly linearly with ambient temperature, at a ratio of 0.11 μm/°C; the major component of the surface displacement is defocus, while astigmatism, coma and spherical aberration contribute very little. In addition, a finite element model is built to analyse the influence of the thickness, thermal expansion coefficient and Young's modulus of the materials on thermal stability. Calculated results show that the bimorph has the best thermal stability when the materials have the same thermal expansion coefficient, and that the surface instability is most severe when the thickness ratio of glass to PZT is 3 and the Young's modulus ratio is approximately 0.4.
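The reported behaviour is a simple linear relation: surface P-V change of roughly 0.11 μm per °C of ambient change. A minimal sketch (the function name is illustrative; only the 0.11 μm/°C slope and the 13-25 °C test range come from the record):

```python
def surface_pv_change_um(delta_t_c, slope_um_per_c=0.11):
    """Predicted peak-to-valley surface change (um) for an ambient
    temperature step of delta_t_c degrees C, using the measured
    ~0.11 um/C slope; the dominant aberration term is defocus."""
    return slope_um_per_c * delta_t_c

# Across the tested 13 C -> 25 C range the model predicts ~1.32 um of
# mostly defocus-shaped surface change.
print(round(surface_pv_change_um(25 - 13), 2))  # 1.32
```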

  1. Diagnostic system for a 20 TESLA single turn coil magnet prototype

    International Nuclear Information System (INIS)

The Center for Electromechanics at The University of Texas at Austin (CEM-UT) has designed, fabricated, and is testing a prototype 20 T on-axis, single turn, toroidal field (TF) coil. The purpose of this Ignition Technology Demonstration (ITD) is to prove the feasibility of the single-turn coil powered by homopolar generators (HPGs). A scaling factor of 0.06 was selected based on the current capability of CEM-UT's 60 MJ HPG power supply. The Balcones HPG power supply consists of six 10 MJ HPGs, each rated at 1.5 MA at 100 V. When connected in a parallel configuration to the prototype TF coil, they provide a 9 MA, 100 ms, critically damped current pulse. The objective of the diagnostic system for the prototype 20 T TF coil is to determine displacements, temperatures, and magnetic fields at various locations in the coil. The values are then compared to predictions by the electromagnetic (EM) analysis to validate computational results. Operating conditions for instrumentation in a 20 T, cryogenically-cooled magnet are rather severe. Electromechanical simulations show that the 0.06 scale IGNITEX TF prototype will experience a localized temperature rise from liquid-nitrogen temperature (-196 degrees C) to approximately 200 degrees C in less than 100 ms. Close to the inner leg of the coil, where stresses and temperatures are maximum, the instrumentation experiences a 30 T field rise in 26 ms.

  2. A20 restricts wnt signaling in intestinal epithelial cells and suppresses colon carcinogenesis.

    Directory of Open Access Journals (Sweden)

    Ling Shao

Full Text Available Colon carcinogenesis consists of a multistep process during which a series of genetic and epigenetic adaptations occur that lead to malignant transformation. Here, we have studied the role of A20 (also known as TNFAIP3), a ubiquitin-editing enzyme that restricts NFκB and cell death signaling, in intestinal homeostasis and tumorigenesis. We have found that A20 expression is consistently reduced in human colonic adenomas compared with normal colonic tissues. To further investigate A20's potential roles in regulating colon carcinogenesis, we have generated mice lacking A20 specifically in intestinal epithelial cells and interbred these with mice harboring a mutation in the adenomatous polyposis coli gene (APC(min)). While A20(FL/FL) villin-Cre mice exhibit uninflamed intestines without polyps, A20(FL/FL) villin-Cre APC(min/+) mice contain far greater numbers and larger colonic polyps than control APC(min) mice. We find that A20 binds to the β-catenin destruction complex and restricts canonical wnt signaling by supporting ubiquitination and degradation of β-catenin in intestinal epithelial cells. Moreover, acute deletion of A20 from intestinal epithelial cells in vivo leads to enhanced expression of the β-catenin dependent genes cyclinD1 and c-myc, known promoters of colon cancer. Taken together, these findings demonstrate new roles for A20 in restricting β-catenin signaling and preventing colon tumorigenesis.

  3. EXPRESSION OF THE INFLAMMATORY REGULATOR A20 CORRELATES WITH LUNG FUNCTION IN PATIENTS WITH CYSTIC FIBROSIS

    OpenAIRE

    Kelly, Catriona; Williams, Mark; Elborn, Stuart; Ennis, Madeleine; Schock, Bettina

    2012-01-01

Abstract: Background: A20 and TAX1BP1 interact to negatively regulate NF-κB-driven inflammation. A20 expression is altered in F508del/F508del patients. Here we explore the effect of CFTR and CFTR genotype on A20 and TAX1BP1 expression. The relationship with lung function is also assessed. Methods: Primary nasal epithelial cells (NECs) from CF patients (F508del/F508del, n=8; R117H/F508del, n=6) and controls (age-matched, n=8), and 16HBE14o- cells were investigated. A20 and TAX1BP1 gene expression was d...

  4. Seismic stability analyses - embankment dams

    Energy Technology Data Exchange (ETDEWEB)

    Boudreau, Stephane; Boulanger, Pierre; Caron, Louis Philippe [BPR, Montreal, (Canada); Karray, Mourad [Sherbrooke University, Sherbrooke, (Canada)

    2010-07-01

An understanding of the effect of earthquakes is necessary for the design of safe dams. A wide range of methods are currently used or being developed for analysing the dynamic slope stability of embankments and dams. This paper investigated the effects of dynamic aspects (natural period, amplification and intensity of seismic loading) in the analysis of small dams. A procedure was developed to evaluate the performance of pseudo-static analyses by comparison with fully dynamic analyses. Static, pseudo-static, and dynamic analyses were performed using finite elements and the Mohr-Coulomb shear strength criterion. The overall safety factor (FS) was compared using the reduction factor concept. The study considered two examples of small dams located in regions of moderate and high seismicity in the province of Quebec. These examples illustrate the difference between pseudo-static and dynamic analyses. The study also investigated values of the kh coefficient for Eastern Canada seismicity.
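In a pseudo-static analysis, the earthquake is replaced by a constant horizontal force kh·W applied to the sliding mass, and the factor of safety is recomputed by limit equilibrium. A minimal sketch for a single planar slip surface (all numerical parameters below are invented for illustration; the paper's own analyses use finite elements with Mohr-Coulomb strength):

```python
import math

def pseudo_static_fs(weight, beta_deg, c, length, phi_deg, kh):
    """Factor of safety of a planar slip surface under a horizontal
    seismic coefficient kh (classical pseudo-static limit equilibrium).
    weight: block weight per unit width (kN/m); beta_deg: slip-plane
    angle; c: cohesion (kPa); length: slip-surface length (m);
    phi_deg: friction angle; kh: horizontal seismic coefficient."""
    beta, phi = math.radians(beta_deg), math.radians(phi_deg)
    # Seismic force kh*W adds to driving shear and reduces normal force.
    resisting = c * length + (weight * math.cos(beta)
                              - kh * weight * math.sin(beta)) * math.tan(phi)
    driving = weight * math.sin(beta) + kh * weight * math.cos(beta)
    return resisting / driving

fs_static = pseudo_static_fs(1000.0, 30.0, 10.0, 20.0, 35.0, kh=0.0)
fs_seismic = pseudo_static_fs(1000.0, 30.0, 10.0, 20.0, 35.0, kh=0.15)
print(round(fs_static, 2), round(fs_seismic, 2))
```

The drop in FS with increasing kh is what the paper's comparison quantifies against fully dynamic analyses.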

  5. Musk fragrances, DEHP and heavy metals in a 20 years old sludge treatment reed bed system.

    Science.gov (United States)

    Matamoros, Víctor; Nguyen, Loc Xuan; Arias, Carlos A; Nielsen, Steen; Laugen, Maria Mølmer; Brix, Hans

    2012-08-01

The Sludge Treatment Reed Bed (STRB) technology is a cost-efficient and environmentally friendly technology to dewater and mineralize surplus sludge from conventional wastewater treatment systems. Primary and secondary liquid sludge is loaded onto the surface of the bed over several years, where it is dewatered, mineralized and turned into a biosolid with a high dry matter content for use as an organic fertilizer on agricultural land. We analysed the concentrations of five organic micropollutants (galaxolide, tonalide, cashmeran, celestolide and DEHP) and six heavy metals (Pb, Ni, Cu, Cd, Zn and Cr) in the accumulated sludge in a 20-year-old STRB in Denmark in order to assess the degradation and fate of these contaminants in an STRB and their relation to sludge composition. The results showed that the deposited sludge was dewatered to a dry matter content of 29%, and that up to a third of the organic content of the sludge was mineralized. The concentrations of heavy metals generally increased with depth in the vertical sludge profile due to the dewatering and mineralization of organic matter, but in all cases the concentrations were below the European Union legal limits for agricultural land disposal. The concentrations of fragrances and DEHP ranged from 10 to 9000 ng g(-1) dry mass. The attenuation of hydrophobic micropollutants from the top to the bottom layer of the reed bed ranged from 40 to 98%, except for tonalide, which increased significantly with sludge depth and consequently showed an unusual depth distribution of the galaxolide/tonalide ratio. This unexpected pattern may reflect changes imposed by a long storage time and/or a different composition of the fresh sludge in the past. The lack of a significant decrease in DEHP concentration with sludge age might indicate that this compound is very persistent in STRBs. In conclusion, the STRB was a feasible technology for sludge treatment before land disposal. PMID:22608611

  6. Direct transfer of A20 gene into pancreas protected mice from streptozotocin-induced diabetes

    Institute of Scientific and Technical Information of China (English)

    Lu-yang YU; Bo LIN; Zhen-lin ZHANG; Li-he GUO

    2004-01-01

AIM: To investigate the efficiency of A20 gene transfer into the pancreas against STZ-induced diabetes. METHODS: The PVP-plasmid mixture was injected directly into the pancreatic parenchyma 2 d before STZ injection. The uptake of plasmid pcDNA3-LacZ or pcDNA3-A20 was detected by PCR, and the expression of LacZ was confirmed by histological analysis with X-gal. A20 expression in the pancreas of pcDNA3-A20 transgenic mice was measured by RT-PCR and Western blots. Urine amylase, NO generation, and histological findings were examined. RESULTS: Injection of the PVP-plasmid mixture directly into the pancreatic parenchyma increased the urine amylase concentration 16 h after the operation, which returned to nearly normal 36 h later. On day 33, LacZ expression could be found in the spleen, duodenum, and islets. The development of diabetes was prevented by direct A20 gene transfer into the pancreas, and A20-mediated protection was correlated with suppression of NO production. Insulitis was ameliorated in A20-treated mice. CONCLUSION: Injection of the PVP-plasmid mixture directly into the pancreatic parenchyma led to target gene expression in islets. Direct transfer of the A20 gene into the pancreas protected mice from STZ-induced diabetes.

  7. A20 inhibits the motility of HCC cells induced by TNF-α

    Science.gov (United States)

    Xiao, Ying; Li, Na; Guo, Chun; Zhang, Lining; Shi, Yongyu

    2016-01-01

    Metastasis of hepatocellular carcinoma (HCC) can be facilitated by TNF-α, a prototypical inflammatory cytokine in the HCC microenvironment. A20 is a negative regulator of NF-κB signaling pathway. In the present study we ask whether A20 plays a role in HCC metastasis. We found that A20 expression was downregulated in the invasive cells of microvascular invasions (MVI) compared with the noninvasive cells in 89 tissue samples from patients with HCC by immunochemistry methods. Overexpression of A20 in HCC cell lines inhibited their motility induced by TNF-α. Furthermore, the overexpression of A20 inhibited epithelial-mesenchymal transition (EMT), FAK activation and RAC1 activity. By contrast, knockdown of A20 in one HCC cell line results in the converse. In addition, the overexpression of A20 restrained the formation of MVI in HCC xenograft in nude mice treated with TNF-α. All the results suggested that A20 functioned as a negative regulator in motility of HCC cells induced by TNF-α. PMID:26909601

  8. A20 inhibits the motility of HCC cells induced by TNF-α.

    Science.gov (United States)

    Wang, Xianteng; Ma, Chao; Zong, Zhaoyun; Xiao, Ying; Li, Na; Guo, Chun; Zhang, Lining; Shi, Yongyu

    2016-03-22

    Metastasis of hepatocellular carcinoma (HCC) can be facilitated by TNF-α, a prototypical inflammatory cytokine in the HCC microenvironment. A20 is a negative regulator of NF-κB signaling pathway. In the present study we ask whether A20 plays a role in HCC metastasis. We found that A20 expression was downregulated in the invasive cells of microvascular invasions (MVI) compared with the noninvasive cells in 89 tissue samples from patients with HCC by immunochemistry methods. Overexpression of A20 in HCC cell lines inhibited their motility induced by TNF-α. Furthermore, the overexpression of A20 inhibited epithelial-mesenchymal transition (EMT), FAK activation and RAC1 activity. By contrast, knockdown of A20 in one HCC cell line results in the converse. In addition, the overexpression of A20 restrained the formation of MVI in HCC xenograft in nude mice treated with TNF-α. All the results suggested that A20 functioned as a negative regulator in motility of HCC cells induced by TNF-α. PMID:26909601

  9. Feed analyses and their interpretation.

    Science.gov (United States)

    Hall, Mary Beth

    2014-11-01

    Compositional analysis is central to determining the nutritional value of feedstuffs for use in ration formulation. The utility of the values and how they should be used depends on how representative the feed subsample is, the nutritional relevance and analytical variability of the assays, and whether an analysis is suitable to be applied to a particular feedstuff. Commercial analyses presently available for carbohydrates, protein, and fats have improved nutritionally pertinent description of feed fractions. Factors affecting interpretation of feed analyses and the nutritional relevance and application of currently available analyses are discussed.

  10. STRATEGY PATTERNS PREDICTION MODEL

    OpenAIRE

    Aram Baruch Gonzalez Perez; Jorge Adolfo Ramirez Uresti

    2014-01-01

Multi-agent systems are broadly known for being able to simulate real-life situations that require the interaction and cooperation of individuals. Opponent modeling can be used along with multi-agent systems to model complex situations such as competitions, for example soccer games. In this study, a model for predicting opponent moves based on their target is presented. The model is composed of an offline step (learning phase) and an online one (execution phase). The offline step gets and analyses p...

  11. Conjoint-Analyse und Marktsegmentierung

    OpenAIRE

    Steiner, Winfried J.; Baumgartner, Bernhard

    2003-01-01

Market segmentation, alongside new-product planning and pricing, is one of the principal areas of application of conjoint analysis. In addition to the traditional two-stage approaches, in which conjoint analysis and segmentation are carried out in two separate steps, newer developments such as clusterwise regression and mixture models are now available that allow simultaneous segmentation and preference estimation. The article gives an overview...

  12. Prediction, Regression and Critical Realism

    DEFF Research Database (Denmark)

    Næss, Petter

    2004-01-01

This paper considers the possibility of prediction in land use planning, and the use of statistical research methods in analyses of relationships between urban form and travel behaviour. Influential writers within the tradition of critical realism reject the possibility of predicting social phenomena. This position is fundamentally problematic to public planning. Without at least some ability to predict the likely consequences of different proposals, the justification for public sector intervention into market mechanisms will be frail. Statistical methods like regression analyses are commonly...... of prediction necessary and possible in spatial planning of urban development. Finally, the political implications of positions within theory of science rejecting the possibility of predictions about social phenomena are addressed....

  13. Molecular Basis for the Unique Deubiquitinating Activity of the NF-κB Inhibitor A20

    Energy Technology Data Exchange (ETDEWEB)

    Lin, S.; Chung, J; Lamothe, B; Rajashankar, K; Lu, M; Lo, Y; Lam, A; Darnay, B; Wu, H

    2008-01-01

Nuclear factor κB (NF-κB) activation in tumor necrosis factor, interleukin-1, and Toll-like receptor pathways requires Lys63-linked nondegradative polyubiquitination. A20 is a specific feedback inhibitor of NF-κB activation in these pathways that possesses dual ubiquitin-editing functions. While the N-terminal domain of A20 is a deubiquitinating enzyme (DUB) for Lys63-linked polyubiquitinated signaling mediators such as TRAF6 and RIP, its C-terminal domain is a ubiquitin ligase (E3) for Lys48-linked degradative polyubiquitination of the same substrates. To elucidate the molecular basis for the DUB activity of A20, we determined its crystal structure and performed a series of biochemical and cell biological studies. The structure reveals the potential catalytic mechanism of A20, which may be significantly different from that of papain-like cysteine proteases. Ubiquitin can be docked onto a conserved A20 surface; this interaction exhibits charge complementarity and no steric clash. Surprisingly, A20 does not have specificity for Lys63-linked polyubiquitin chains. Instead, it effectively removes Lys63-linked polyubiquitin chains from TRAF6 without disassembling the chains themselves. Our studies suggest that A20 does not act as a general DUB but has specificity for particular polyubiquitinated substrates to assure its fidelity in regulating NF-κB activation in the tumor necrosis factor, interleukin-1, and Toll-like receptor pathways.

  14. The Prognostic Role of SOCS3 and A20 in Human Cholangiocarcinoma.

    Directory of Open Access Journals (Sweden)

    Yimin Wang

Full Text Available As an antagonist of the JAK/STAT pathway, suppressor of cytokine signaling 3 (SOCS3) plays an integral role in shaping the inflammatory environment, tumorigenesis and disease progression in cholangiocarcinoma (CCA); however, its prognostic significance remains unclear. Although tumor necrosis factor α-induced protein 3 (TNFAIP3, also known as A20) can decrease SOCS3 expression and is involved in the regulation of tumorigenesis in certain malignancies, its role in CCA remains unknown. In this study, we investigated the expression of SOCS3 and A20 in human CCA tissues to assess the prognostic significance of these proteins. The expression of SOCS3 and A20 was initially detected by western blot in 22 cases of freshly frozen CCA tumors with corresponding peritumoral tissues and 22 control normal bile duct tissues. Then, these proteins were investigated in 86 CCA patients by immunohistochemistry (IHC) and were evaluated for their association with clinicopathological parameters in human CCA. The results indicated that SOCS3 expression was significantly lower in CCA tumor tissues than in corresponding peritumoral biliary tissues and normal bile duct tissues. Conversely, A20 was overexpressed in CCA tissues. Thus, an inverse correlation between the expression of SOCS3 and A20 was discovered. Furthermore, patients with low SOCS3 expression or high A20 expression showed a dramatically lower overall survival rate. These proteins were both associated with CCA lymph node metastasis, postoperative recurrence and overall survival rate. However, only A20 showed a significant association with the tumor node metastasis (TNM) stage, while SOCS3 showed a significant association with tumor differentiation. Multivariate Cox analysis revealed that SOCS3 and A20 were independent prognostic indicators for overall survival in CCA. Thus, our study demonstrated that SOCS3 and A20 represent novel prognostic factors for human CCA.
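Overall-survival comparisons of the kind reported above are conventionally based on Kaplan-Meier estimates before moving to multivariate Cox modelling. A minimal, self-contained sketch of the Kaplan-Meier estimator (the toy follow-up data are invented; the record does not publish its raw survival times):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate S(t) at each distinct event time.
    times: follow-up times; events: 1 = event (death), 0 = censored."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, curve, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data[i:] if tt == t and e == 1)
        count = sum(1 for tt, _ in data[i:] if tt == t)
        if deaths:
            surv *= 1.0 - deaths / n_at_risk   # step only at event times
            curve.append((t, surv))
        n_at_risk -= count                     # censored cases leave the risk set
        i += count
    return curve

# Toy follow-up data (months); event=0 marks a censored observation.
print(kaplan_meier([5, 8, 8, 12, 16], [1, 1, 0, 1, 0]))
```

Each factor multiplies in (1 - deaths/at-risk) at a distinct event time; censored subjects shrink the risk set without forcing a step, which is what distinguishes the estimator from a naive fraction-surviving curve.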

  15. Escape from transcriptional shutoff during poliovirus infection: NF-κB-responsive genes IκBa and A20.

    Science.gov (United States)

    Doukas, Tammy; Sarnow, Peter

    2011-10-01

    It has been known for a long time that infection of cultured cells with poliovirus results in the overall inhibition of transcription of most host genes. We examined whether selected host genes can escape transcriptional inhibition by thiouridine marking newly synthesized host mRNAs during viral infection. Using cDNA microarrays hybridized to cDNAs made from thiolated mRNAs, a small set of host transcripts was identified and their expression verified by quantitative PCR and Northern and Western blot analyses. These transcripts were synthesized from genes that displayed enrichment for NF-κB binding sites in their promoter regions, suggesting that some NF-κB-regulated promoters can escape the virus-induced inhibition of transcription. In particular, two negative regulators of NF-κB, IκBa and A20, were upregulated during viral infection. Depletion of A20 enhanced viral RNA abundance and viral yield, arguing that cells respond to virus infection by counteracting NF-κB-induced proviral effects.

  16. Dynamic modeling and analysis of a 20-cell PEM fuel cell stack considering temperature and two-phase effects

    Science.gov (United States)

    Park, Sang-Kyun; Choe, Song-Yul

    2008-05-01

Dynamic characteristics and performance of a PEM fuel cell stack are crucial factors to ensure safe, effective and efficient operation. In particular, water and heat at varying loads are important factors that directly influence stack performance and reliability. Herein, we present a new dynamic model that considers temperature and two-phase effects and analyze these effects on the characteristics of a stack. First, a model for a two-cell stack was developed and the simulated results were compared with experimental results. Next, a model for a 20-cell stack was constructed to investigate start-up and transient behavior. Start-up behavior was also analyzed under different conditions in which the amplitude and slope of the load current, the temperature and flow rate of the coolant, and extra heating of the end plates were varied. The transient analyses considered the dynamics of temperature, oxygen and vapor concentration in the gas diffusion media, liquid water saturation, and the variations of water content in the membranes under a multi-step load. Comparative studies revealed that the two-phase effect of water predominantly reduces oxygen concentration in the catalysts and subsequently increases the activation over-potential, while temperature gradients in the cells directly affect the ohmic over-potential. The results showed that the heat-up time at start-up to reach a given reference working temperature was inversely proportional to the amplitude of the applied current density and to the flow rate and temperature of the coolants. In addition, the asymmetric temperature profile in the stack was balanced when the coolant supplied was reheated and its temperature elevated. Analyses of transient behaviors for a 20-cell stack showed that strong temperature gradients formed in the last four end cells, while temperature, oxygen concentration, vapor concentration, liquid water saturation, and membrane water content in the rest of the cells were uniform.
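The reported inverse relation between heat-up time and applied load can be illustrated with a lumped (single-node) thermal balance, a much-reduced stand-in for the paper's full two-phase model; all parameter values below are invented:

```python
def heatup_time(power_w, t_ref, t0=25.0, t_coolant=25.0,
                mc=5000.0, ua=10.0, dt=1.0):
    """Forward-Euler integration of a lumped thermal model
        mc * dT/dt = power_w - ua * (T - t_coolant)
    (mc: heat capacity J/C, ua: coolant heat transfer W/C).
    Returns the time (s) at which the stack first reaches t_ref."""
    t, temp = 0.0, t0
    while temp < t_ref:
        temp += dt * (power_w - ua * (temp - t_coolant)) / mc
        t += dt
        if t > 1e6:
            raise RuntimeError("t_ref unreachable at this power")
    return t

# Doubling the heat input roughly halves the heat-up time,
# mirroring the inverse dependence reported in the abstract.
print(heatup_time(2000.0, t_ref=60.0), heatup_time(4000.0, t_ref=60.0))
```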

  17. Mitogenomic analyses from ancient DNA

    DEFF Research Database (Denmark)

    Paijmans, Johanna L.A.; Gilbert, M Thomas P; Hofreiter, Michael

    2013-01-01

    analyses (whether using modern or ancient DNA) were largely restricted to the analysis of short fragments of the mitochondrial genome. However, due to many technological advances during the past decade, a growing number of studies have explored the power of complete mitochondrial genome sequences...... (mitogenomes). Such studies were initially limited to analyses of extant organisms, but developments in both DNA sequencing technologies and general methodological aspects related to working with degraded DNA have resulted in complete mitogenomes becoming increasingly popular for ancient DNA studies as well....... To date, at least 124 partially or fully assembled mitogenomes from more than 20 species have been obtained, and, given the rapid progress in sequencing technology, this number is likely to dramatically increase in the future. The increased information content offered by analysing full mitogenomes has...

  18. Tracking log transport and deposition during a 20-year flood in a wide mountain river

    Science.gov (United States)

    Wyżga, Bartłomiej; Mikuś, Paweł; Zawiejska, Joanna; Ruiz-Villanueva, Virginia; Kaczka, Ryszard; Czech, Wiktoria

    2016-04-01

    Distance of large wood transport during floods and conditions for wood deposition in wide mountain rivers are still insufficiently recognised. Tracking logs tagged with radio transmitters was used to investigate differences in depositional conditions and the length of log displacement during a 20-year flood between channel reaches of different morphology in the Czarny Dunajec River, Polish Carpathians. During the rising limb of the flood, logs were placed into the river at the beginning of an incised reach, close to the beginning of a channelized reach, and 1 km upstream from the beginning of a wide, multi-thread reach. The incised, channelized, and multi-thread reaches retained 12.5%, 33%, and 94% of the tagged logs introduced to them, and all the logs retained in the multi-thread reach were deposited in its upstream half. Significant differences in the length of displacement existed between the logs delivered to the river at the three locations, with logs placed into the river at the beginning of the incised reach moving the longest distances and those delivered just upstream from the multi-thread reach the shortest. One-fourth of the logs were deposited in a low-flow channel or on the channel margin, one-fifth on the floodplain, and more than half on gravel bars. After the flood, river cross-sections with deposited logs and a set of cross-sections without wood deposits were surveyed to collect data for one-dimensional modelling of hydraulic conditions at the flood peak. The cross-sections with deposited logs were typified by significantly greater flow width and flow area, and significantly smaller mean flow depth, mean velocity, Froude number, mean bed shear stress and unit stream power. Principal component analysis of the hydraulic parameters in the analysed cross-sections grouped the two types of cross-sections in distinct clusters, indicating that multi-thread cross-sections differed in hydraulic parameters from all the other cross-sections.
The experiment
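The hydraulic parameters compared between cross-sections (mean velocity, Froude number, bed shear stress, unit stream power) follow from standard open-channel relations; the sketch below applies the textbook wide-channel formulas to invented inputs, not the study's survey data:

```python
import math

def hydraulic_params(discharge, width, mean_depth, slope,
                     rho=1000.0, g=9.81):
    """Cross-section-averaged hydraulics for a wide channel (textbook forms)."""
    area = width * mean_depth                         # flow area A = w * d
    velocity = discharge / area                       # continuity: v = Q / A
    froude = velocity / math.sqrt(g * mean_depth)     # Fr = v / sqrt(g d)
    shear = rho * g * mean_depth * slope              # tau = rho g d S
    unit_power = rho * g * discharge * slope / width  # omega = rho g Q S / w
    return velocity, froude, shear, unit_power

# Wider, shallower section -> lower velocity, Froude, shear and stream power,
# matching the conditions found at cross-sections with deposited logs
narrow = hydraulic_params(60.0, 30.0, 1.2, 0.007)
wide = hydraulic_params(60.0, 90.0, 0.7, 0.007)
```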

  19. An extensible analysable system model

    DEFF Research Database (Denmark)

    Probst, Christian W.; Hansen, Rene Rydhof

    2008-01-01

    , this does not hold for real physical systems. Approaches such as threat modelling try to target the formalisation of the real-world domain, but still are far from the rigid techniques available in security research. Many currently available approaches to assurance of critical infrastructure security......Analysing real-world systems for vulnerabilities with respect to security and safety threats is a difficult undertaking, not least due to a lack of availability of formalisations for those systems. While both formalisations and analyses can be found for artificial systems such as software...

  20. Descriptive Analyses of Mechanical Systems

    DEFF Research Database (Denmark)

    Andreasen, Mogens Myrup; Hansen, Claus Thorp

    2003-01-01

    Foreword: Product analysis and technology analysis can be carried out with a broad socio-technical aim in order to understand cultural, sociological, design-related, business-related and many other conditions. One sub-area of this is the systemic analysis and description of products and systems. The present compendium...

  1. Evaluation "Risk analyses of agroparks"

    NARCIS (Netherlands)

    Ge, L.

    2011-01-01

    This TransForum project focuses on analysing the uncertainties and opportunities of agroparks. It has produced a risk model that maps the qualitative and/or quantitative uncertainties of an agropark project, on the basis of which measures and management strategies can be identified...

  2. Zinc finger protein A20 protects rats against chronic liver allograft dysfunction

    Institute of Scientific and Technical Information of China (English)

    Jie Yang; Ming-Qing Xu; Lu-Nan Yan; Xiao-Bo Chen; Jiao Liu

    2012-01-01

    AIM: To investigate the effect of zinc finger protein A20 on chronic liver allograft dysfunction in rats. METHODS: Allogeneic liver transplantation from DA rats to Lewis rats was performed. Chronic liver allograft dysfunction was induced in the rats by administering low-dose tacrolimus at postoperative day (POD) 5. Hepatic overexpression of A20 was achieved by recombinant adenovirus (rAd)-mediated gene transfer administered intravenously every 10 d starting from POD 10. The recipient rats were injected with physiological saline, rAdEasy-A20 (1 × 10⁹ pfu/30 g body weight) or rAdEasy (1 × 10⁹ pfu/30 g body weight) every 10 d through the tail vein for 3 mo starting from POD 10. Liver tissue samples were harvested on POD 30 and POD 60. RESULTS: Liver-transplanted rats treated with tacrolimus alone showed chronic allograft dysfunction with severe hepatic fibrosis. A20 overexpression ameliorated liver function, attenuated liver allograft fibrosis and prolonged the survival of the recipient rats. Treatment with A20 suppressed hepatic protein production of transforming growth factor (TGF)-β1, interleukin-1β, caspase-8, CD40, CD40L, intercellular adhesion molecule-1, vascular cell adhesion molecule-1 and E-selectin. A20 treatment suppressed liver cell apoptosis and inhibited nuclear factor-κB activation in Kupffer cells (KCs), liver sinusoidal endothelial cells (LSECs) and hepatic stellate cells (HSCs); it subsequently decreased cytokine mRNA expression in KCs and LSECs and reduced the production of TGF-β1 in HSCs. CONCLUSION: A20 might prevent chronic liver allograft dysfunction by re-establishing functional homeostasis of KCs, LSECs and HSCs.

  3. Accurate renormalization group analyses in neutrino sector

    Energy Technology Data Exchange (ETDEWEB)

    Haba, Naoyuki [Graduate School of Science and Engineering, Shimane University, Matsue 690-8504 (Japan); Kaneta, Kunio [Kavli IPMU (WPI), The University of Tokyo, Kashiwa, Chiba 277-8568 (Japan); Takahashi, Ryo [Graduate School of Science and Engineering, Shimane University, Matsue 690-8504 (Japan); Yamaguchi, Yuya [Department of Physics, Faculty of Science, Hokkaido University, Sapporo 060-0810 (Japan)

    2014-08-15

    We investigate accurate renormalization group analyses in the neutrino sector between the ν-oscillation and seesaw energy scales. We consider the decoupling effects of the top quark and Higgs boson on the renormalization group equations of the light neutrino mass matrix. Since the decoupling effects arise at the standard model scale and are independent of high-energy physics, our method can in principle be applied to any model beyond the standard model. We find that the decoupling effects of the Higgs boson are negligible, while those of the top quark are not. In particular, the decoupling effects of the top quark affect the neutrino mass eigenvalues, which are important for analyzing predictions such as mass squared differences and neutrinoless double beta decay in an underlying theory existing at a high energy scale.

  4. Successful Predictions

    Science.gov (United States)

    Pierrehumbert, R.

    2012-12-01

    In an observational science, it is not possible to test hypotheses through controlled laboratory experiments. One can test parts of the system in the lab (as is done routinely with infrared spectroscopy of greenhouse gases), but the collective behavior cannot be tested experimentally because a star or planet cannot be brought into the lab; it must, instead, itself be the lab. In the case of anthropogenic global warming, this is all too literally true, and the experiment would be quite exciting if it weren't for the unsettling fact that we and all our descendants for the foreseeable future will have to continue making our home in the lab. There are nonetheless many routes through which the validity of a theory of the collective behavior can be determined. A convincing explanation must not be a "just-so" story, but must make additional predictions that can be verified against observations that were not originally used in formulating the theory. The field of Earth and planetary climate has racked up an impressive number of such predictions. I will also admit as "predictions" statements about things that happened in the past, provided that observations or proxies pinning down the past climate state were not available at the time the prediction was made. The basic prediction that burning of fossil fuels would lead to an increase of atmospheric CO2, and that this would in turn alter the Earth's energy balance so as to cause tropospheric warming, is one of the great successes of climate science. It began in the lineage of Fourier, Tyndall and Arrhenius, and was largely complete with the radiative-convective modeling work of Manabe in the 1960s -- all well before the expected warming had progressed far enough to be observable. Similarly, long before the increase in atmospheric CO2 could be detected, Bolin formulated a carbon cycle model and used it to predict atmospheric CO2 out to the year 2000; the actual values come in at the high end of his predicted range, for

  5. Analyse du discours et archive

    OpenAIRE

    Maingueneau, Dominique

    2007-01-01

    Les recherches qui se réclament de "l’analyse du discours" connaissent un développement considérable dans le monde entier ; en revanche, "l’école française d’analyse du discours" (AD) traverse une crise d’identité depuis le début des années 80. Dans cet exposé nous voudrions explorer les raisons de cette crise, puis préciser le concept d’archive qui, à notre sens, permet de prolonger la voie ouverte à la fin des années 1960. Mais il ne s’agit que d’une des voies possibles, dès lors que, comme...

  6. Workload analyse of assembling process

    Science.gov (United States)

    Ghenghea, L. D.

    2015-11-01

    Workload is the most important indicator for managers responsible for industrial technological processes; whether these are automated, mechanized or simply manual, machines or workers will be the focus of workload measurements. The paper presents a workload analysis of a largely manual assembly technology for a roller bearing assembling process, carried out in a large company with integrated bearing manufacturing. In this analysis, the delay (work) sampling technique was used to identify and classify all the activities of the bearing assemblers and to determine how much of the 480-minute working day the workers devote to each activity. The study shows some ways to increase process productivity without supplementary investment, and also indicates that process automation could be the way to reach maximum productivity.
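Work (delay) sampling of this kind estimates each activity's share of the working day from the proportion of random observations in which it is seen. The sketch below uses the standard proportion estimate and its normal-approximation confidence interval; the observation counts are invented for illustration:

```python
import math

def activity_time(observations, activity, day_minutes=480, z=1.96):
    """Estimate minutes per day spent on one activity from sampling counts.

    observations: dict mapping activity -> number of random-instant sightings.
    Returns (estimated minutes, half-width of the 95% CI in minutes).
    """
    n = sum(observations.values())
    p = observations[activity] / n          # observed proportion
    half_width = z * math.sqrt(p * (1 - p) / n)
    return day_minutes * p, day_minutes * half_width

counts = {"assembling": 240, "handling": 90, "delays": 70}  # illustrative
est, ci = activity_time(counts, "assembling")
```

With 400 observations and a 60% proportion, the activity is estimated at 288 minutes per day, with a confidence half-width of roughly 23 minutes; more observations narrow the interval.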

  7. Analysing Protocol Stacks for Services

    DEFF Research Database (Denmark)

    Gao, Han; Nielson, Flemming; Nielson, Hanne Riis

    2011-01-01

    We show an approach, CaPiTo, to model service-oriented applications using process algebras such that, on the one hand, we can achieve a certain level of abstraction without being overwhelmed by the underlying implementation details and, on the other hand, we respect the concrete industrial standa...... to the financial case study taken from Chapter 0-3. Finally, we develop a static analysis to analyse the security properties as they emerge at the level of concrete industrial protocols....

  8. Tematisk analyse af amerikansk hiphop

    OpenAIRE

    Tranberg-Hansen, Katrine; Bøgh Larsen, Cecilie; Jeppsson, Louise Emilie; Lindberg Kirkegaard, Nanna; Funch Madsen, Signe; Bülow Bach, Maria

    2013-01-01

    This paper examines the possible development in the function of American hiphop. It focuses on specific themes in hiphop music such as the ghetto, freedom, rebellion, and racial discrimination. To investigate this possible development, two text analysis methods are used, a pragmatic and a stylistic text analysis, together with a historical method, source criticism. A minimal amount of literature has been published on how hiphop culture arose. These studies, however, make it possible to analyse...

  9. A20 (Tnfaip3) deficiency in myeloid cells protects against influenza A virus infection.

    Directory of Open Access Journals (Sweden)

    Jonathan Maelfait

    Full Text Available The innate immune response provides the first line of defense against viruses and other pathogens by responding to specific microbial molecules. Influenza A virus (IAV) produces double-stranded RNA as an intermediate during its replication life cycle, which activates the intracellular pathogen recognition receptor RIG-I and induces the production of proinflammatory cytokines and antiviral interferon. Understanding the mechanisms that regulate innate immune responses to IAV and other viruses is of key importance for developing novel therapeutic strategies. Here we used myeloid cell-specific A20 knockout mice to examine the role of the ubiquitin-editing protein A20 in the response of myeloid cells to IAV infection. A20-deficient macrophages were hyperresponsive to double-stranded RNA and IAV infection, as illustrated by enhanced NF-κB and IRF3 activation, concomitant with increased production of proinflammatory cytokines, chemokines and type I interferon. In vivo this was associated with an increased number of alveolar macrophages and neutrophils in the lungs of IAV-infected mice. Surprisingly, myeloid cell-specific A20 knockout mice are protected against lethal IAV infection. These results challenge the general belief that an excessive host proinflammatory response is associated with IAV-induced lethality, and suggest that under certain conditions inhibition of A20 might be of interest in the management of IAV infections.

  10. Mitogenomic analyses of eutherian relationships.

    Science.gov (United States)

    Arnason, U; Janke, A

    2002-01-01

    Reasonably correct phylogenies are fundamental to the testing of evolutionary hypotheses. Here, we present phylogenetic findings based on analyses of 67 complete mammalian mitochondrial (mt) genomes. The analyses, irrespective of whether they were performed at the amino acid (aa) level or on nucleotides (nt) of first and second codon positions, placed Erinaceomorpha (hedgehogs and their kin) as the sister group of remaining eutherians. Thus, the analyses separated Erinaceomorpha from other traditional lipotyphlans (e.g., tenrecs, moles, and shrews), making traditional Lipotyphla polyphyletic. Both the aa and nt data sets identified the two order-rich eutherian clades, the Cetferungulata (comprising Pholidota, Carnivora, Perissodactyla, Artiodactyla, and Cetacea) and the African clade (Tenrecomorpha, Macroscelidea, Tubulidentata, Hyracoidea, Proboscidea, and Sirenia). The study corroborated recent findings that have identified a sister-group relationship between Anthropoidea and Dermoptera (flying lemurs), thereby making our own order, Primates, a paraphyletic assembly. Molecular estimates using paleontologically well-established calibration points, placed the origin of most eutherian orders in Cretaceous times, 70-100 million years before present (MYBP). The same estimates place all primate divergences much earlier than traditionally believed. For example, the divergence between Homo and Pan is estimated to have taken place approximately 10 MYBP, a dating consistent with recent findings in primate paleontology.

  12. Learner as Statistical Units of Analyses

    Directory of Open Access Journals (Sweden)

    Vivek Venkatesh

    2011-01-01

    Full Text Available Educational psychologists have researched the generality and specificity of metacognitive monitoring in the context of college-level multiple-choice tests, but fairly little is known as to how learners monitor their performance on more complex academic tasks. Even less is known about how monitoring proficiencies such as discrimination and bias might be related to key self-regulatory processes associated with task understanding. This quantitative study explores the relationship between monitoring proficiencies and task understanding in 39 adult learners tackling ill-structured writing tasks for a graduate “theories of e-learning” course. Using the learner as the unit of analysis, the generality of monitoring is confirmed through intra-measure correlation analyses, while facets of its specificity stand out due to the absence of inter-measure correlations. Unsurprisingly, learner-based correlational and repeated-measures analyses did not reveal how monitoring proficiencies and task understanding might be related. However, using the essay as the unit of analysis, ordinal and multinomial regressions reveal how monitoring influences different levels of task understanding. Results are interpreted not only in light of novel procedures undertaken in calculating performance prediction capability but also in the application of essay-based, intra-sample statistical analysis that reveals heretofore unseen relationships between academic self-regulatory constructs.
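Monitoring proficiencies such as bias and discrimination are conventionally computed from item-level confidence judgments and scored performance. The sketch below shows one common pair of definitions (the signed confidence-performance gap, and the confidence gap between correct and incorrect items); it is a generic illustration, not the study's exact procedure:

```python
def bias(confidence, correct):
    """Mean signed difference between confidence (0-1) and performance (0/1).

    Positive values indicate overconfidence, negative ones underconfidence.
    """
    return sum(c - k for c, k in zip(confidence, correct)) / len(correct)

def discrimination(confidence, correct):
    """Mean confidence on correct items minus mean confidence on incorrect ones.

    Higher values mean the learner distinguishes what they know from what
    they do not.
    """
    right = [c for c, k in zip(confidence, correct) if k]
    wrong = [c for c, k in zip(confidence, correct) if not k]
    return sum(right) / len(right) - sum(wrong) / len(wrong)

# Illustrative item-level data: confidence judgments and scored answers
conf = [0.9, 0.8, 0.6, 0.4]
acc = [1, 1, 0, 0]
```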

  13. Analyses of containment structures with corrosion damage

    Energy Technology Data Exchange (ETDEWEB)

    Cherry, J.L. [Sandia National Labs., Albuquerque, NM (United States)

    1997-01-01

    Corrosion damage that has been found in a number of nuclear power plant containment structures can degrade the pressure capacity of the vessel. This has prompted concerns regarding the capacity of corroded containments to withstand accident loadings. To address these concerns, finite element analyses have been performed for a typical PWR Ice Condenser containment structure. Using ABAQUS, the pressure capacity was calculated for a typical vessel with no corrosion damage. Multiple analyses were then performed with the location of the corrosion and the amount of corrosion varied in each analysis. Using a strain-based failure criterion, a "lower bound", "best estimate", and "upper bound" failure level was predicted for each case. These limits were established by: determining the amount of variability that exists in material properties of typical containments, estimating the amount of uncertainty associated with the level of modeling detail and modeling assumptions, and estimating the effect of corrosion on the material properties.
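The way wall thinning degrades pressure capacity can be seen from the thin-shell hoop-stress relation. This first-order sketch (failure when hoop stress reaches a limit stress) is illustrative only and far simpler than the strain-based finite element criterion used in the study:

```python
def failure_pressure(radius, thickness, sigma_limit, corrosion_loss=0.0):
    """Thin-walled cylinder estimate: p_fail = sigma_limit * t_eff / r.

    corrosion_loss is the wall thickness lost to corrosion (same units
    as thickness). Returns 0.0 if the wall is fully consumed.
    """
    t_eff = thickness - corrosion_loss
    if t_eff <= 0:
        return 0.0
    return sigma_limit * t_eff / radius

# In this linear first-order model, 10% wall loss costs 10% of capacity
p_intact = failure_pressure(20.0, 0.04, 400e6)
p_corroded = failure_pressure(20.0, 0.04, 400e6, corrosion_loss=0.004)
```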

  14. Analyses of containment structures with corrosion damage

    International Nuclear Information System (INIS)

    Corrosion damage that has been found in a number of nuclear power plant containment structures can degrade the pressure capacity of the vessel. This has prompted concerns regarding the capacity of corroded containments to withstand accident loadings. To address these concerns, finite element analyses have been performed for a typical PWR Ice Condenser containment structure. Using ABAQUS, the pressure capacity was calculated for a typical vessel with no corrosion damage. Multiple analyses were then performed with the location of the corrosion and the amount of corrosion varied in each analysis. Using a strain-based failure criterion, a "lower bound", "best estimate", and "upper bound" failure level was predicted for each case. These limits were established by: determining the amount of variability that exists in material properties of typical containments, estimating the amount of uncertainty associated with the level of modeling detail and modeling assumptions, and estimating the effect of corrosion on the material properties.

  15. Prediction Markets

    DEFF Research Database (Denmark)

    Horn, Christian Franz; Ivens, Bjørn Sven; Ohneberg, Michael;

    2014-01-01

    In recent years, Prediction Markets gained growing interest as a forecasting tool among researchers as well as practitioners, which resulted in an increasing number of publications. In order to track the latest development of research, comprising the extent and focus of research, this article...

  16. Comparison of elastic and inelastic analyses

    International Nuclear Information System (INIS)

    The use of inelastic analysis methods instead of the traditional elastic analysis methods in the design of radioactive material (RAM) transport packagings leads to a better understanding of the response of the package to mechanical loadings. Thus, a better assessment of the containment, thermal protection, and shielding integrity of the package after a structural accident event can be made. A more accurate prediction of the package response can lead to enhanced safety and also allow for a more efficient use of materials, possibly leading to a package with higher capacity or lower weight. This paper discusses the advantages and disadvantages of using inelastic analysis in the design of RAM shipping packages. The use of inelastic analysis presents several problems to the package designer. When using inelastic analysis, the entire nonlinear response of the material must be known, including the effects of temperature changes and strain rate. Another problem is that there currently is no acceptance criterion for this type of analysis that is approved by regulatory agencies. Inelastic analysis acceptance criteria based on failure stress, failure strain, or plastic energy density could be developed. For both elastic and inelastic analyses it is also important to include other sources of stress in the analyses, such as fabrication stresses, thermal stresses, stresses from bolt preloading, and contact stresses at material interfaces. Offsetting these added difficulties is the improved knowledge of the package behavior. This allows for the incorporation of a more uniform margin of safety, which can result in weight savings and a higher level of confidence in the post-accident configuration of the package. In this paper, comparisons between elastic and inelastic analyses are made for a simple ring structure and for a package to transport a large quantity of RAM by rail (rail cask) with lead gamma shielding, to illustrate the differences between the two analysis techniques.
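The basic difference between the two approaches can be illustrated with a one-dimensional bar: an elastic analysis extrapolates σ = Eε indefinitely, while an elastoplastic analysis with bilinear hardening caps stress growth beyond yield. The material constants below are arbitrary illustrative values, not data from the paper:

```python
def stress_elastic(strain, e_mod=200e9):
    """Purely elastic prediction: sigma = E * eps."""
    return e_mod * strain

def stress_bilinear(strain, e_mod=200e9, sigma_y=400e6, h_mod=2e9):
    """Bilinear elastoplastic prediction with hardening modulus H.

    Elastic up to the yield strain eps_y = sigma_y / E, then the stress
    grows only at the (much smaller) hardening slope H.
    """
    eps_y = sigma_y / e_mod
    if strain <= eps_y:
        return e_mod * strain
    return sigma_y + h_mod * (strain - eps_y)

# At 1% strain the elastic model predicts 2000 MPa; the elastoplastic
# model predicts about 416 MPa, a far more realistic post-yield stress
s_el = stress_elastic(0.01)
s_pl = stress_bilinear(0.01)
```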

  17. STRATEGY PATTERNS PREDICTION MODEL

    Directory of Open Access Journals (Sweden)

    Aram Baruch Gonzalez Perez

    2014-01-01

    Full Text Available Multi-agent systems are broadly known for being able to simulate real-life situations which require the interaction and cooperation of individuals. Opponent modeling can be used along with multi-agent systems to model complex situations such as competitions like soccer games. In this study, a model for predicting opponent moves based on their target is presented. The model is composed of an offline step (learning phase) and an online one (execution phase). The offline step gathers and analyses previous experiences, while the online step uses the data generated by the offline analysis to predict opponent moves. This model is illustrated by an experiment with the RoboCup 2D Soccer Simulator. The proposed model was tested using 22 games to create the knowledge base, achieving an accuracy rate of over 80%.
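An offline/online split of the kind described can be sketched as a simple frequency model: the offline phase tallies which move an opponent made in each observed situation, and the online phase predicts the most frequent move for the current situation. This is a generic illustration of the idea, not the paper's actual model; the situation labels and the default move are invented:

```python
from collections import Counter, defaultdict

class OpponentModel:
    def __init__(self):
        self.table = defaultdict(Counter)  # situation -> move frequencies

    def learn(self, games):
        """Offline phase: tally (situation, move) pairs from past games."""
        for situation, move in games:
            self.table[situation][move] += 1

    def predict(self, situation, default="hold"):
        """Online phase: return the most frequent move seen in this situation."""
        moves = self.table.get(situation)
        return moves.most_common(1)[0][0] if moves else default

model = OpponentModel()
model.learn([("near_goal", "shoot"), ("near_goal", "shoot"),
             ("near_goal", "pass"), ("midfield", "dribble")])
```

Prediction accuracy of such a model can then be measured exactly as in the paper: replay held-out games and count how often the predicted move matches the observed one.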

  18. Analyses of a Virtual World

    CERN Document Server

    Holovatch, Yurij; Szell, Michael; Thurner, Stefan

    2016-01-01

    We present an overview of a series of results obtained from the analysis of human behavior in a virtual environment. We focus on the massive multiplayer online game (MMOG) Pardus which has a worldwide participant base of more than 400,000 registered players. We provide evidence for striking statistical similarities between social structures and human-action dynamics in the real and virtual worlds. In this sense MMOGs provide an extraordinary way for accurate and falsifiable studies of social phenomena. We further discuss possibilities to apply methods and concepts developed in the course of these studies to analyse oral and written narratives.

  19. Fracturing and brittleness index analyses of shales

    Science.gov (United States)

    Barnhoorn, Auke; Primarini, Mutia; Houben, Maartje

    2016-04-01

    The formation of a fracture network in rocks has a crucial control on the flow behaviour of fluids. In addition, an existing network of fractures influences the propagation of new fractures during e.g. hydraulic fracturing or during a seismic event. Understanding the type and characteristics of the fracture network that will be formed during e.g. hydraulic fracturing is thus crucial to better predict the outcome of a hydraulic fracturing job. For this, knowledge of the rock properties is crucial. The brittleness index is often used as a rock property to predict the fracturing behaviour of a rock, e.g. for hydraulic fracturing of shales. Various formulations of the brittleness index (BI1, BI2 and BI3) exist, based on mineralogy, elastic constants and stress-strain behaviour (Jin et al., 2014, Jarvie et al., 2007 and Holt et al., 2011). A maximum brittleness index of 1 predicts very good and efficient fracturing behaviour, while a minimum brittleness index of 0 predicts much more ductile shale behaviour. Here, we have performed systematic petrophysical, acoustic and geomechanical analyses on a set of shale samples from Whitby (UK) and determined the three brittleness indices for each sample. We show that the three brittleness indices are very different for the same sample, and as such it can be concluded that the brittleness index is not a good predictor of the fracturing behaviour of shales. The brittleness index based on the acoustic data (BI1) lies around 0.5 for all samples, the index based on the stress-strain data (BI2) gives an average around 0.75, whereas the mineralogy brittleness index (BI3) predicts values below 0.2. This shows that different estimates of the brittleness index can lead to different decisions on hydraulic fracturing. If we were to rely on the mineralogy (BI3), the Whitby mudstone is not a suitable
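Two of the common brittleness formulations can be written down directly: a mineralogy ratio in the style of Jarvie et al. (2007) and an elastic index that averages normalized Young's modulus and Poisson's ratio (in the style of Rickman et al., 2008, used here as an illustrative stand-in for an elastic-constants index). The normalization bounds and sample compositions below are illustrative assumptions, not the study's measurements:

```python
def bi_mineralogy(quartz, carbonate, clay):
    """Jarvie et al. (2007)-style index: brittle (quartz) fraction of total."""
    return quartz / (quartz + carbonate + clay)

def bi_elastic(e_mod, poisson, e_min=10.0, e_max=80.0,
               nu_min=0.15, nu_max=0.40):
    """Average of normalized Young's modulus (GPa) and Poisson's ratio.

    High stiffness and low Poisson's ratio map to high brittleness;
    the min/max bounds are illustrative calibration limits.
    """
    e_n = (e_mod - e_min) / (e_max - e_min)
    nu_n = (nu_max - poisson) / (nu_max - nu_min)
    return 0.5 * (e_n + nu_n)

# A clay-rich mudstone: low mineralogical brittleness (~0.15) despite a
# mid-range elastic index (~0.5), mirroring the BI3-vs-BI1 gap reported
bi3 = bi_mineralogy(quartz=15.0, carbonate=10.0, clay=75.0)
bi1 = bi_elastic(e_mod=45.0, poisson=0.275)
```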

  20. The Parent-Child Home Program in Western Manitoba: A 20-Year Evaluation

    Science.gov (United States)

    Gfellner, Barbara M.; McLaren, Lorraine; Metcalfe, Arron

    2008-01-01

    This article is a 20-year evaluation of the Parent-Child Home Program (PCHP) of Child and Family Services in Western Manitoba. Following Levenstein's (1979, 1988) approach, home visitors model parent-child interchanges using books and toys to enhance children's cognitive development through appropriate parenting behaviors. The evaluation provides…

  1. 17 CFR 240.14a-20 - Shareholder approval of executive compensation of TARP recipients.

    Science.gov (United States)

    2010-04-01

    § 240.14a-20 Shareholder approval of executive compensation of TARP recipients. If a solicitation is... shareholder vote to approve the compensation of executives, as disclosed pursuant to Item 402 of Regulation...

  2. A20 plays a critical role in the immunoregulatory function of mesenchymal stem cells.

    Science.gov (United States)

    Dang, Rui-Jie; Yang, Yan-Mei; Zhang, Lei; Cui, Dian-Chao; Hong, Bangxing; Li, Ping; Lin, Qiuxia; Wang, Yan; Wang, Qi-Yu; Xiao, Fengjun; Mao, Ning; Wang, Changyong; Jiang, Xiao-Xia; Wen, Ning

    2016-08-01

    Mesenchymal stem cells (MSCs) possess an immunoregulatory capacity and are a therapeutic target for many inflammation-related diseases. However, the detailed mechanisms of MSC-mediated immunosuppression remain unclear. In this study, we provide new information to partly explain the molecular mechanisms of immunoregulation by MSCs. Specifically, we found that A20 expression was induced in MSCs by inflammatory cytokines. Knockdown of A20 in MSCs resulted in increased proliferation and reduced adipogenesis, and partly reversed the suppressive effect of MSCs on T cell proliferation in vitro and inhibited tumour growth in vivo. Mechanistic studies indicated that knockdown of A20 in MSCs inhibited activation of the p38 mitogen-activated protein kinase (MAPK) pathway, which potently promoted the production of tumour necrosis factor (TNF)-α and inhibited the production of interleukin (IL)-10. Collectively, these data reveal a crucial role of A20 in regulating the immunomodulatory activities of MSCs by controlling the expression of TNF-α and IL-10 in an inflammatory environment. These findings provide novel insights into the pathogenesis of various inflammation-associated diseases, and are a new reference for the future development of treatments for such afflictions. PMID:27028905

  3. Experimental review on moment analyses

    CERN Document Server

    Calvi, M

    2003-01-01

    Moments of the photon energy spectrum in B->Xs gamma decays, and of the hadronic mass spectrum and lepton energy spectrum in B->Xc l nu decays, are sensitive to the masses of the heavy quarks as well as to the non-perturbative parameters of the heavy quark expansion. Several measurements have been performed both at the Upsilon(4S) resonance and at Z0 centre-of-mass energies. They provide constraints on the non-perturbative parameters, test the consistency of the theoretical predictions and of the underlying assumptions, and allow the uncertainty in the |Vcb| extraction to be reduced.

  4. Analysing ESP Texts, but How?

    Directory of Open Access Journals (Sweden)

    Borza Natalia

    2015-03-01

    Full Text Available English as a second language (ESL) teachers instructing general English and English for specific purposes (ESP) in bilingual secondary schools face various challenges when it comes to choosing the main linguistic foci of language preparatory courses enabling non-native students to study academic subjects in English. ESL teachers intending to analyse English-language subject textbooks written for secondary school students, with the aim of learning what bilingual secondary school students need to know in terms of language to process academic textbooks, cannot avoid dealing with a dilemma. It needs to be decided which way is most appropriate to analyse the texts in question. Handbooks of English applied linguistics are not immensely helpful with regard to this problem, as they tend not to give recommendations as to which major text-analytical approaches are advisable to follow in a pre-college setting. The present theoretical research aims to address this lacuna. Accordingly, the purpose of this pedagogically motivated theoretical paper is to investigate two major approaches to ESP text analysis, register analysis and genre analysis, in order to find the more suitable one for exploring the language use of secondary school subject texts from the point of view of an English as a second language teacher. Comparing and contrasting the merits and limitations of the two approaches allows for a better understanding of the nature of the two different perspectives on text analysis. The study examines the goals, the scope of analysis, and the achievements of the register perspective and those of the genre approach alike. The paper also investigates and reviews in detail the starkly different methods of ESP text analysis applied by the two perspectives. Approaching text analysis from a theoretical and methodological angle supports a practical aspect of English teaching, namely making an informed choice when setting out to analyse

  5. Analysing performance through value creation

    Directory of Open Access Journals (Sweden)

    Adrian TRIFAN

    2015-12-01

    Full Text Available This paper draws a parallel between measuring financial performance in two variants: the first one using data offered by accounting, which lays emphasis on maximizing profit, and the second one which aims to create value. The traditional approach to performance is based on indicators drawn from accounting data: ROI, ROE, EPS. Traditional management, based on analysing accounting data, has shown its limits, and a new approach is needed, based on creating value. The evaluation of value-based performance tries to avoid the errors due to accounting data by using other specific indicators: EVA, MVA, TSR, CVA. The main objective is shifted from maximizing income to maximizing the value created for shareholders. The theoretical part is accompanied by a practical analysis regarding the creation of value and an analysis of the main indicators which evaluate this concept.

  6. Economical analyses in interventional radiology

    International Nuclear Information System (INIS)

    Considerations about the relation between benefits and expenses are gaining increasing importance in interventional radiology. This review aims at providing a survey of the published data concerning economic analyses of some of the more frequently employed interventions in radiology, excluding neuroradiological and coronary interventions. Because of the relative scarcity of literature in this field, all identified articles (n=46) were included without selection for methodological quality. For a number of radiological interventions the cost-effectiveness has already been demonstrated, e.g., PTA of femoropopliteal and iliac artery stenoses, stenting of renal artery stenoses, placement of vena-cava filters, as well as metal stents in malignant biliary and esophageal obstructions. Conflicting data exist for the treatment of abdominal aortic aneurysms. So far, no analysis could be found that directly compares bypass surgery versus PTA+stent in iliac arteries. (orig.)

  7. HGCal Simulation Analyses for CMS

    CERN Document Server

    Bruno, Sarah Marie

    2015-01-01

    This summer, I approached the topic of fast-timing detection of photons from Higgs decays via simulation analyses, working under the supervision of Dr. Adolf Bornheim of the California Institute of Technology. My specific project focused on simulating the high granularity calorimeter for the Compact Muon Solenoid (CMS) experiment. CMS detects particles using calorimeters. The Electromagnetic Calorimeter (ECal) is arranged cylindrically to form a barrel section and two “endcaps.” Previously, both the barrel and endcap have employed lead tungstate crystal detectors, known as the “shashlik” design. The crystal detectors, however, rapidly degrade from exposure to radiation. This effect is most pronounced in the endcaps. To avoid the high expense of frequently replacing degraded detectors, it was recently decided to eliminate the endcap crystals in favor of an arrangement of silicon detectors known as the “High Granularity Calorimeter” (HGCal), while leaving the barrel detector technology unchanged. T...

  8. Analyse des besoins des usagers

    OpenAIRE

    KHOUDOUR,L; LANGLAIS,A; Charpentier, C.; MOTTE,C; PIAN,C

    2002-01-01

    The aim is to extend the video surveillance of metro premises into the interior of the trains. The captured images are views of the events taking place inside the vehicles, with the particular goal of improving the safety of transported passengers. It is possible to store the images from the few moments preceding a passenger incident, to analyse these images off-line, and to better understand in real time how passengers behave when faced with events or ...

  9. Isotopic signatures by bulk analyses

    International Nuclear Information System (INIS)

    Los Alamos National Laboratory has developed a series of measurement techniques for identification of nuclear signatures by analyzing bulk samples. Two specific applications for isotopic fingerprinting to identify the origin of anthropogenic radioactivity in bulk samples are presented. The first example is the analyses of environmental samples collected in the US Arctic to determine the impact of dumping of radionuclides in this polar region. Analyses of sediment and biota samples indicate that for the areas sampled the anthropogenic radionuclide content of sediments was predominantly the result of the deposition of global fallout. The anthropogenic radionuclide concentrations in fish, birds and mammals were very low. It can be surmised that marine food chains are presently not significantly affected. The second example is isotopic fingerprinting of water and sediment samples from the Rocky Flats Facility (RFP). The largest source of anthropogenic radioactivity presently affecting surface-waters at RFP is the sediments that are currently residing in the holding ponds. One gram of sediment from a holding pond contains approximately 50 times more plutonium than 1 liter of water from the pond. Essentially 100% of the uranium in Ponds A-1 and A-2 originated as depleted uranium. The largest source of radioactivity in the terminal Ponds A-4, B-5 and C-2 was naturally occurring uranium and its decay product radium. The uranium concentrations in the waters collected from the terminal ponds contained 0.05% or less of the interim standard calculated derived concentration guide for uranium in waters available to the public. All of the radioactivity observed in soil, sediment and water samples collected at RFP was naturally occurring, the result of processes at RFP or the result of global fallout. No extraneous anthropogenic alpha, beta or gamma activities were detected. The plutonium concentrations in Pond C-2 appear to vary seasonally

  10. Predicting supersymmetry

    Energy Technology Data Exchange (ETDEWEB)

    Heinemeyer, S. [Instituto de Fisica de Cantabria (CSIC-UC), Santander (Spain); Weiglein, G. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2010-07-15

    We review the result of SUSY parameter fits based on frequentist analyses of experimental constraints from electroweak precision data, (g−2)_μ, B physics and cosmological data. We investigate the parameters of the constrained MSSM (CMSSM) with universal soft supersymmetry-breaking mass parameters, and a model with common non-universal Higgs mass parameters in the superpotential (NUHM1). Shown are the results for the SUSY and Higgs spectrum of the models. Many sparticle masses are highly correlated in both the CMSSM and NUHM1, and parts of the regions preferred at the 68% C.L. are accessible to early LHC running. The best-fit points could be tested even with 1 fb⁻¹ at √s = 7 TeV. (orig.)

  11. Automatic incrementalization of Prolog based static analyses

    DEFF Research Database (Denmark)

    Eichberg, Michael; Kahl, Matthias; Saha, Diptikalyan;

    2007-01-01

    Modern development environments integrate various static analyses into the build process. Analyses that analyze the whole project whenever the project changes are impractical in this context. We present an approach to automatic incrementalization of analyses that are specified as tabled logic...... incrementalizing a broad range of static analyses....

  12. Mirror energy difference and the structure of loosely bound proton-rich nuclei around A = 20

    CERN Document Server

    Yuan, Cenxi; Xu, Furong; Suzuki, Toshio; Otsuka, Takaharu

    2014-01-01

    The properties of loosely bound proton-rich nuclei around A = 20 are investigated within the framework of the nuclear shell model. In these nuclei, the strength of the effective interactions involving the loosely bound proton s1/2 orbit is significantly reduced in comparison with those in their mirror nuclei. We evaluate the reduction of the effective interaction by calculating the monopole-based universal interaction (VMU) in the Woods-Saxon basis. The shell-model Hamiltonian in the sd shell, such as USD, can thus be modified to reproduce the binding energies and energy levels of the weakly bound proton-rich nuclei around A = 20. The effect of the reduction of the effective interaction on the structure and decay properties of these nuclei is also discussed.

  13. A 20 kV, 5 A, 1 ns Risetime Pulsed Electron Beam Source

    Institute of Scientific and Technical Information of China (English)

    Chen Yulan; Zeng Zhengzhong; Wang Haiyang; Ma Lianying

    2005-01-01

    A 20 kV, 1 ns risetime pulsed electron beam source was developed using an extremely small gap (0.1 mm) diode driven by a sub-nanosecond risetime, 10 kV rectangular pulse generator. A beam current of 5 A was detected by using a fast response Faraday cup at a distance of 2 cm away from a grid anode. The shot to shot variation of the electron beam pulse was less than 10%.

  14. Left lung agenesis discovered by a spontaneous pneumothorax in a 20-year-old girl.

    Science.gov (United States)

    Hentati, Abdessalem; Neifar, Chawki; Abid, Walid; M'saad, Sameh

    2016-01-01

    Lung agenesis is a rare condition whose prognosis depends largely on associated malformations. Clinical presentation is highly variable, and the diagnosis is often made in childhood. Here, we present the case of a 20-year-old girl who was admitted because of a spontaneous pneumothorax. Explorations revealed a left lung agenesis and a hyperinflated right lung crossing the midline, with a corresponding pneumothorax. There were no other malformations. This congenital condition and the treatment for this rare presentation are discussed in detail. PMID:27051112

  15. Congenital Orbital Lymphangioma in a 20-Years Old Girl – Case Report and Review of Literature

    Directory of Open Access Journals (Sweden)

    Mishra A

    2009-01-01

    Full Text Available We report a case of a 20-year-old girl who presented to the out-patients’ department with congenital, progressive unilateral proptosis and reduced vision. Ultrasound, computed tomography (CT) scan and magnetic resonance imaging (MRI) were performed. Diagnosis of orbital lymphangioma was made on imaging. Authors highlight the crucial role of imaging in diagnosis and to plan therapeutic approach. This case is reported because of its extreme rarity and unusual presentation.

  16. A 20 kV, 5 A, 1 ns Risetime Pulsed Electron Beam Source

    International Nuclear Information System (INIS)

    A 20 kV, 1 ns risetime pulsed electron beam source was developed using an extremely small gap (0.1 mm) diode driven by a sub-nanosecond risetime, 10 kV rectangular pulse generator. A beam current of 5 A was detected by using a fast response Faraday cup at a distance of 2 cm away from a grid anode. The shot to shot variation of the electron beam pulse was less than 10%

  17. Zonda downslope winds in the central Andes of South America in a 20-year climate simulation with the Eta model

    Science.gov (United States)

    Antico, Pablo L.; Chou, Sin Chan; Mourão, Caroline

    2015-12-01

    The Zonda wind is a local version of the alpine foehn in the central Andes Mountains in South America. It blows on the eastern slopes and produces extremely warm and dry conditions in Argentina. In this study, the occurrence of Zonda wind events during a 20-year simulation from the regional Eta model is analyzed, and the results are compared to previous studies of Zonda wind events based on weather observations. We define a set of parameters to account for the zonal pressure gradient across the mountain, vertical movement, and air humidity typical of Zonda wind events. These parameters are applied to characterize Zonda wind events in the model run and to classify them as surface-level or high-level episodes. The resulting annual distribution of Zonda occurrences based on composite analyses shows a preference for winter and spring, with rare occurrences during summer. For the surface-level Zonda wind events, the highest frequency occurs during spring. Whereas surface-level Zonda wind episodes more commonly initiate in the afternoon, high-level Zonda wind events show no preference for a given initiation time. Our results are mostly in agreement with previous observational results.
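
    The two-class scheme described above (surface-level vs. high-level episodes, screened by pressure gradient, vertical motion, and humidity) can be sketched as a simple threshold classifier. Note that the variable names and every threshold value below are illustrative assumptions for exposition only; the paper's actual parameter definitions and cutoffs are not given in this abstract.

    ```python
    # Hedged sketch: threshold-based classification of candidate Zonda wind
    # events. All field names and thresholds are illustrative assumptions,
    # not the criteria used in the Eta-model study.
    from dataclasses import dataclass

    @dataclass
    class Sounding:
        dp_zonal_hpa: float  # west-minus-east pressure difference across the Andes
        w_ms: float          # lee-slope vertical velocity (negative = descent)
        rh_surface: float    # lee-side surface relative humidity (%)
        rh_crest: float      # relative humidity near crest level (%)

    def classify_zonda(s: Sounding) -> str:
        """Return 'surface', 'high-level', or 'none' for a candidate event."""
        # Any Zonda-like event requires a cross-mountain pressure gradient
        # and descending air on the lee slope (illustrative thresholds).
        if s.dp_zonal_hpa < 4.0 or s.w_ms > -0.1:
            return "none"
        # Surface-level Zonda: the warm, dry air reaches the surface.
        if s.rh_surface < 30.0:
            return "surface"
        # High-level Zonda: foehn conditions aloft, not felt at the surface.
        if s.rh_crest < 40.0:
            return "high-level"
        return "none"

    print(classify_zonda(Sounding(6.0, -0.5, 20.0, 25.0)))  # → surface
    ```

    Applying such a classifier to every model time step and compositing by month would reproduce the kind of annual and diurnal distributions the study reports.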

  18. Thermal and hydraulic analyses of the System 81 cold traps

    Energy Technology Data Exchange (ETDEWEB)

    Kim, K.

    1977-06-15

    Thermal and hydraulic analyses of the System 81 Type I and II cold traps were completed except for thermal transients analysis. Results are evaluated, discussed, and reported. Analytical models were developed to determine the physical dimensions of the cold traps and to predict the performance. The FFTF cold trap crystallizer performances were simulated using the thermal model. This simulation shows that the analytical model developed predicts reasonably conservative temperatures. Pressure drop and sodium residence time calculations indicate that the present design will meet the requirements specified in the E-Specification. Steady state temperature data for the critical regions were generated to assess the magnitude of the thermal stress.

  19. NOx analyser interference from alkenes

    Science.gov (United States)

    Bloss, W. J.; Alam, M. S.; Lee, J. D.; Vazquez, M.; Munoz, A.; Rodenas, M.

    2012-04-01

    Nitrogen oxides (NO and NO2, collectively NOx) are critical intermediates in atmospheric chemistry. NOx abundance controls the levels of the primary atmospheric oxidants OH, NO3 and O3, and regulates the ozone production which results from the degradation of volatile organic compounds. NOx are also atmospheric pollutants in their own right, and NO2 is commonly included in air quality objectives and regulations. In addition to their role in controlling ozone formation, NOx levels affect the production of other pollutants such as the lachrymator PAN, and the nitrate component of secondary aerosol particles. Consequently, accurate measurement of nitrogen oxides in the atmosphere is of major importance for understanding our atmosphere. The most widely employed approach for the measurement of NOx is chemiluminescent detection of NO2* from the NO + O3 reaction, combined with NO2 reduction by either a heated catalyst or photoconvertor. The reaction between alkenes and ozone is also chemiluminescent; therefore alkenes may contribute to the measured NOx signal, depending upon the instrumental background subtraction cycle employed. This interference has been noted previously, and indeed the effect has been used to measure both alkenes and ozone in the atmosphere. Here we report the results of a systematic investigation of the response of a selection of NOx analysers, ranging from systems used for routine air quality monitoring to atmospheric research instrumentation, to a series of alkenes ranging from ethene to the biogenic monoterpenes, as a function of conditions (co-reactants, humidity). Experiments were performed in the European Photoreactor (EUPHORE) to ensure common calibration, a common sample for the monitors, and to unequivocally confirm the alkene (via FTIR) and NO2 (via DOAS) levels present. The instrument responses ranged from negligible levels up to 10 % depending upon the alkene present and conditions used. Such interferences may be of substantial importance

  20. Efficient ALL vs. ALL collision risk analyses

    Science.gov (United States)

    Escobar, D.; Paskowitz, M.; Agueda, A.; Garcia, G.; Molina, M.

    2011-09-01

    In recent years, space debris has gained a lot of attention due to the increasing amount of uncontrolled man-made objects orbiting the Earth. This population poses a significant and constantly growing threat to operational satellites. In order to face this threat in an independent manner, ESA has launched an initiative for the development of a European SSA System where GMV is participating via several activities. Apart from those activities financed by ESA, GMV has developed closeap, a tool for efficient conjunction assessment and collision probability prediction. ESA's NAPEOS has been selected as computational engine and numerical propagator to be used in the tool, which can be considered as an add-on to the standard NAPEOS package. closeap makes use of the same orbit computation, conjunction assessment and collision risk algorithms implemented in CRASS, but at the same time both systems are completely independent. Moreover, the implementation in closeap has been validated against CRASS with excellent results. This paper describes the performance improvements implemented in closeap at algorithm level to ensure that the most time demanding scenarios (e.g., all catalogued objects are analysed against each other - all vs. all scenarios -) can be analysed in a reasonable amount of time with commercial-off-the-shelf hardware. However, the amount of space debris increases steadily due to human activities. Thus, the number of objects involved in a full collision assessment is expected to increase notably and, consequently, the computational cost, which scales as the square of the number of objects, will increase as well. Additionally, orbit propagation algorithms that are computationally expensive might be needed to predict more accurately the trajectories of the space debris. In order to cope with such computational needs, the next natural step in the development of collision assessment tools is the use of parallelization techniques. In this paper we investigate
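
    The quadratic scaling mentioned above comes from the pair enumeration itself: an all-vs-all screen must test n(n−1)/2 object pairs. The following sketch illustrates that structure with a deliberately simplified miss-distance test (straight-line motion over a short window); a real tool such as the closeap/CRASS systems described in the abstract uses full numerical orbit propagation, so treat this as a cost illustration, not their algorithm.

    ```python
    # Hedged sketch of the O(n^2) pair enumeration behind an all-vs-all
    # conjunction screen. Straight-line motion is an assumption made for
    # brevity; real screens propagate orbits numerically.
    import itertools
    import math

    def min_separation(p1, v1, p2, v2, window):
        """Minimum distance between two straight-line trajectories over [0, window]."""
        dp = [a - b for a, b in zip(p1, p2)]
        dv = [a - b for a, b in zip(v1, v2)]
        dv2 = sum(c * c for c in dv)
        # Time of closest approach, clamped to the screening window.
        t = 0.0 if dv2 == 0 else max(0.0, min(window, -sum(a * b for a, b in zip(dp, dv)) / dv2))
        return math.dist([p + v * t for p, v in zip(p1, v1)],
                         [p + v * t for p, v in zip(p2, v2)])

    def all_vs_all(objects, window=60.0, threshold=10.0):
        """Return index pairs whose minimum separation falls below the threshold."""
        hits = []
        # itertools.combinations enumerates all n*(n-1)/2 pairs -- this loop
        # is the quadratic cost that motivates parallelization.
        for (i, (p1, v1)), (j, (p2, v2)) in itertools.combinations(enumerate(objects), 2):
            if min_separation(p1, v1, p2, v2, window) < threshold:
                hits.append((i, j))
        return hits

    objects = [([0, 0, 0], [1, 0, 0]),
               ([100, 5, 0], [-1, 0, 0]),   # head-on with object 0
               ([0, 1e6, 0], [0, 1, 0])]    # far away, never close
    print(all_vs_all(objects))  # → [(0, 1)]
    ```

    Because each pair test is independent, the loop parallelizes trivially across workers, which is exactly the direction the abstract points to for coping with growing catalogues.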

  1. Budget-Impact Analyses: A Critical Review of Published Studies

    OpenAIRE

    Ewa Orlewska; Laszlo Gulcsi

    2009-01-01

    This article reviews budget-impact analyses (BIAs) published to date in peer-reviewed bio-medical journals with reference to current best practice, and discusses where future research needs to be directed. Published BIAs were identified by conducting a computerized search on PubMed using the search term 'budget impact analysis'. The years covered by the search included January 2000 through November 2008. Only studies (i) named by authors as BIAs and (ii) predicting financial consequences of a...

  2. Partitioning Uncertainty for Non-Ergodic Probabilistic Seismic Hazard Analyses

    OpenAIRE

    Dawood, Haitham Mohamed Mahmoud Mousad

    2014-01-01

    Properly accounting for the uncertainties in predicting ground motion parameters is critical for Probabilistic Seismic Hazard Analyses (PSHA). This is particularly important for critical facilities that are designed for long return period motions. Non-ergodic PSHA is a framework that allows for this proper accounting of uncertainties. This, in turn, allows for more informed decisions by designers, owners and regulating agencies. The ergodic assumption implies that the standard deviation ...

  3. Aerothermodynamic Analyses of Towed Ballutes

    Science.gov (United States)

    Gnoffo, Peter A.; Buck, Greg; Moss, James N.; Nielsen, Eric; Berger, Karen; Jones, William T.; Rudavsky, Rena

    2006-01-01

    A ballute (balloon-parachute) is an inflatable, aerodynamic drag device for application to planetary entry vehicles. Two challenging aspects of aerothermal simulation of towed ballutes are considered. The first challenge, simulation of a complete system including inflatable tethers and a trailing toroidal ballute, is addressed using the unstructured-grid, Navier-Stokes solver FUN3D. Auxiliary simulations of a semi-infinite cylinder using the rarefied flow, Direct Simulation Monte Carlo solver, DSV2, provide additional insight into limiting behavior of the aerothermal environment around tethers directly exposed to the free stream. Simulations reveal pressures higher than stagnation and corresponding large heating rates on the tether as it emerges from the spacecraft base flow and passes through the spacecraft bow shock. The footprint of the tether shock on the toroidal ballute is also subject to heating amplification. Design options to accommodate or reduce these environments are discussed. The second challenge addresses time-accurate simulation to detect the onset of unsteady flow interactions as a function of geometry and Reynolds number. Video of unsteady interactions measured in the Langley Aerothermodynamic Laboratory 20-Inch Mach 6 Air Tunnel and CFD simulations using the structured grid, Navier-Stokes solver LAURA are compared for flow over a rigid spacecraft-sting-toroid system. The experimental data provides qualitative information on the amplitude and onset of unsteady motion which is captured in the numerical simulations. The presence of severe unsteady fluid - structure interactions is undesirable and numerical simulation must be able to predict the onset of such motion.

  4. Measuring Quality Across Three Child Care Quality Rating and Improvement Systems: Findings from Secondary Analyses.

    OpenAIRE

    Lizabeth Malone; Gretchen Kirby; Pia Caronongan; Kimberly Boller; Kathryn Tout

    2011-01-01

    This report presents findings from an exploratory analysis of administrative data from three QRISs. The analyses examine the prevalence of quality components across centers and how they combine to result in an overall rating level and to predict observed quality.

  5. Nonlinear Analyses of the Dynamic Properties of Hydrostatic Bearing Systems

    Institute of Scientific and Technical Information of China (English)

    LIU Wei(刘伟); WU Xiujiang(吴秀江); V.A. Prokopenko

    2003-01-01

    Nonlinear analyses of hydrostatic bearing systems are necessary to adequately model the fluid-solid interaction. The dynamic properties of linear and nonlinear analytical models of hydrostatic bearings are compared in this paper. The analyses were based on the determination of the aperiodic border of transient processes with external step loads. The results show that the dynamic properties can be most effectively improved by increasing the hydrostatic bearing crosspiece width and additional pocket volume in a bearing can extend the load range for which the transient process is aperiodic, but an additional restrictor and capacitor (RC) chain must be introduced for increasing damping. The nonlinear analyses can also be used to predict typical design parameters for a hydrostatic bearing.

  6. Genetic Analyses in Health Laboratories: Current Status and Expectations

    Science.gov (United States)

    Finotti, Alessia; Breveglieri, Giulia; Borgatti, Monica; Gambari, Roberto

    Genetic analyses performed in health laboratories involve adult patients, newborns, embryos/fetuses, pre-implanted pre-embryos, pre-fertilized oocytes and should meet the major medical needs of hospitals and pharmaceutical companies. Recent data support the concept that, in addition to diagnosis and prognosis, genetic analyses might lead to development of personalized therapy. Novel frontiers in genetic testing involve the development of single cell analyses and non-invasive assays, including those able to predict outcome of cancer pathologies by looking at circulating tumor cells, DNA, mRNA and microRNAs. In this respect, PCR-free diagnostics appears to be one of the most interesting and appealing approaches.

  7. Left lung agenesis discovered by a spontaneous pneumothorax in a 20-year-old girl

    Directory of Open Access Journals (Sweden)

    Abdessalem Hentati

    2016-01-01

    Full Text Available Lung agenesis is a rare condition whose prognosis depends largely on associated malformations. Clinical presentation is highly variable, and the diagnosis is often made in childhood. Here, we present the case of a 20-year-old girl who was admitted because of a spontaneous pneumothorax. Explorations revealed a left lung agenesis and a hyperinflated right lung crossing the midline, with a corresponding pneumothorax. There were no other malformations. This congenital condition and the treatment for this rare presentation are discussed in detail.

  8. [Evaluation of a 20 years' experience of colo-anal anastomoses. Indications, results and pitfalls].

    Science.gov (United States)

    Hautefeuille, P; Saab, M; Valleur, P

    1991-01-01

    Seventy-nine anastomoses were performed over a 20-year period. Indications included 68 rectal adenocarcinomas and 11 benign lesions. There was no operative mortality. Anastomotic leak was the main cause of morbidity: 12 clinical (15%) and 4 radiological leaks. The 5-year actuarial disease-free survival was 70%; 7 local recurrences (10%) were observed, of which 6 were Dukes C and 1 Dukes B. Functional results were assessed in 61 patients. They were considered to be excellent in 35 (57%), good in 24 (39%) and bad in 2 (4%). Six failures were noted: 3 technical, 1 oncologic and 2 functional. Pitfalls of coloanal anastomosis are discussed. PMID:2064292

  9. A 20-Liter Test Stand with Gas Purification for Liquid Argon Research

    CERN Document Server

    Li, Yichen; Tang, Wei; Joshi, Jyoti; Qian, Xin; Diwan, Milind; Kettell, Steve; Morse, William; Rao, Triveni; Stewart, James; Tsang, Thomas; Zhang, Lige

    2016-01-01

    We describe the design of a 20-liter test stand constructed to study fundamental properties of liquid argon (LAr). This system utilizes a simple, cost-effective gas argon (GAr) purification to achieve ultra-high purity, which is necessary to study electron transport properties in LAr. An electron drift stack with up to 25 cm length is constructed to study electron drift, diffusion, and attachment at various electric fields. A gold photocathode and a pulsed laser are used as a bright electron source. The operational performance of this system is reported.

  10. Towards a 20th Century History of Relationships between Theatre and Neuroscience

    Directory of Open Access Journals (Sweden)

    Gabriele Sofia

    2014-05-01

    Full Text Available This article considers some preliminary reflections in view of a 20th century theatre-and-neuroscience history. Up to now, the history of the 20th century theatre has been too fragmentary and irregular, missing out on the subterranean links which, either directly or indirectly, bound different experiences. The article aims to put in evidence the recurrent problems of these encounters. The hypothesis of the essay concerns the possibility of gathering and grouping a great part of the relationships between theatre and neuroscience around four trajectories: the physiology of action, the physiology of emotions, ethology, and studies on the spectator’s perception.

  11. Novel heterozygous C243Y A20/TNFAIP3 gene mutation is responsible for chronic inflammation in autosomal-dominant Behçet's disease

    Science.gov (United States)

    Shigemura, Tomonari; Kaneko, Naoe; Kobayashi, Norimoto; Kobayashi, Keiko; Takeuchi, Yusuke; Nakano, Naoko; Masumoto, Junya; Agematsu, Kazunaga

    2016-01-01

    Objective Although Behçet's disease (BD) is a chronic inflammatory disorder of uncertain aetiology, the existence of familial BD with autosomal-dominant traits suggests that a responsible gene (or genes) exists. We investigated a Japanese family with a history of BD to search for pathogenic mutations underlying the biological mechanisms of BD. Methods 6 patients over 4 generations who had suffered from frequent oral ulcers, genital ulcers and erythema nodosum-like lesions in the skin were assessed. Whole-exome sequencing was performed on genomic DNA, and cytokine production was determined from stimulated mononuclear cells. Inflammatory cytokine secretion and Nod2-mediated NF-κB activation were analysed using the transfected cells. Results By whole-exome sequencing, we identified a common heterozygous missense mutation in A20/TNFAIP3, a gene known to regulate NF-κB signalling, for which all affected family members carried a heterozygous C243Y mutation in the ovarian tumour domain. Mononuclear cells obtained from the proband and his mother produced large amounts of interleukin 1β, IL-6 and tumour necrosis factor α (TNF-α) on stimulation as compared with those from normal controls. Although inflammatory cytokine secretion was suppressed by wild-type transfected cells, it was suppressed to a much lesser extent by mutated C243Y A20/TNFAIP3-transfected cells. In addition, impaired suppression of Nod2-mediated NF-κB activation by C243Y A20/TNFAIP3 was observed. Conclusions A C243Y mutation in A20/TNFAIP3 was likely responsible for increased production of human inflammatory cytokines by reduced suppression of NF-κB activation, and may have accounted for the autosomal-dominant Mendelian mode of BD transmission in this family. PMID:27175295

  12. Pawnee Nation Energy Option Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Matlock, M.; Kersey, K.; Riding In, C.

    2009-07-21

    Pawnee Nation of Oklahoma Energy Option Analyses In 2003, the Pawnee Nation leadership identified the need for the tribe to comprehensively address its energy issues. During a strategic energy planning workshop a general framework was laid out and the Pawnee Nation Energy Task Force was created to work toward further development of the tribe’s energy vision. The overarching goals of the “first steps” project were to identify the most appropriate focus for its strategic energy initiatives going forward, and to provide information necessary to take the next steps in pursuit of the “best fit” energy options. Description of Activities Performed The research team reviewed existing data pertaining to the availability of biomass (focusing on woody biomass, agricultural biomass/bio-energy crops, and methane capture), solar, wind and hydropower resources on the Pawnee-owned lands. Using these data, combined with assumptions about costs and revenue streams, the research team performed preliminary feasibility assessments for each resource category. The research team also reviewed available funding resources and made recommendations to Pawnee Nation highlighting those resources with the greatest potential for financially-viable development, both in the near-term and over a longer time horizon. Findings and Recommendations Due to a lack of financial incentives for renewable energy, particularly at the state level, combined with mediocre renewable energy resources, renewable energy development opportunities are limited for Pawnee Nation. However, near-term potential exists for development of solar hot water at the gym, and an exterior wood-fired boiler system at the tribe’s main administrative building. Pawnee Nation should also explore options for developing LFGTE resources in collaboration with the City of Pawnee. Significant potential may also exist for development of bio-energy resources within the next decade. Pawnee Nation representatives should closely monitor

  13. Residual Strength Analyses of Monolithic Structures

    Science.gov (United States)

    Forth, Scott (Technical Monitor); Ambur, Damodar R. (Technical Monitor); Seshadri, B. R.; Tiwari, S. N.

    2003-01-01

    Finite-element fracture simulation methodology predicts the residual strength of damaged aircraft structures. The methodology uses the critical crack-tip-opening-angle (CTOA) fracture criterion to characterize the fracture behavior of the material. The CTOA fracture criterion assumes that stable crack growth occurs when the crack-tip angle reaches a constant critical value. The use of the CTOA criterion requires an elastic-plastic, finite-element analysis. The critical CTOA value is determined by simulating fracture behavior in laboratory specimens, such as a compact specimen, to obtain the angle that best fits the observed test behavior. The critical CTOA value appears to be independent of loading, crack length, and in-plane dimensions. However, it is a function of material thickness and local crack-front constraint. Modeling the local constraint requires either a three-dimensional analysis or a two-dimensional analysis with an approximation to account for the constraint effects. In recent times, as the aircraft industry is leaning towards monolithic structures with the intention of reducing part count and manufacturing cost, there has been a consistent effort at NASA Langley to extend the critical-CTOA-based numerical methodology to the analysis of integrally-stiffened panels. In this regard, a series of fracture tests were conducted on both flat and curved aluminum alloy integrally-stiffened panels. These flat panels were subjected to uniaxial tension and, during the test, applied load-crack extension, out-of-plane displacements and local deformations around the crack tip region were measured. Compact and middle-crack tension specimens were tested to determine the critical angle (ψc) using a three-dimensional code (ZIP3D) and the plane-strain core height (hc) using a two-dimensional code (STAGS). These values were then used in the STAGS analysis to predict the fracture behavior of the integrally-stiffened panels. The analyses modeled stable tearing, buckling, and crack

  14. Toward a 20% Wind Electricity Supply in the United States: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Flowers, L.; Dougherty, P.

    2007-05-01

    Since the U.S. Department of Energy (DOE) initiated the Wind Powering America (WPA) program in 1999, installed wind power capacity in the United States has increased from 2,500 MW to more than 11,000 MW. In 1999, only four states had more than 100 MW of installed wind capacity; now 16 states have more than 100 MW installed. In addition to WPA's efforts to increase deployment, the American Wind Energy Association (AWEA) is building a network of support across the country. In July 2005, AWEA launched the Wind Energy Works! Coalition, which comprises more than 70 organizations. In February 2006, the wind deployment vision was enhanced by President George W. Bush's Advanced Energy Initiative, which refers to a wind energy contribution of up to 20% of the electricity consumption of the United States. A 20% electricity contribution over the next 20 to 25 years represents 300 to 350 gigawatts (GW) of electricity. This paper provides a background of wind energy deployment in the United States and a history of the U.S. DOE's WPA program, as well as the program's approach to increasing deployment through removal of institutional and informational barriers to a 20% wind electricity future.
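The 300-350 GW figure can be sanity-checked by converting an annual energy share to nameplate capacity. The consumption figure and capacity factors below are assumptions for illustration, not numbers from the paper:

```python
HOURS_PER_YEAR = 8760

def capacity_gw(annual_twh: float, share: float, capacity_factor: float) -> float:
    """Nameplate capacity (GW) needed to supply `share` of `annual_twh`."""
    return annual_twh * 1000.0 * share / (capacity_factor * HOURS_PER_YEAR)

# Assumed inputs: ~4,000 TWh/yr US consumption, 30-35% wind capacity factor.
for cf in (0.30, 0.35):
    print(cf, round(capacity_gw(4000.0, 0.20, cf)), "GW")
```

With these assumptions the estimate lands around 260-305 GW, the same order of magnitude as the 300-350 GW cited.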

  15. Operation of a 20 tesla on-axis tokamak toroidal field magnet

    International Nuclear Information System (INIS)

    The Center for Electromechanics at The University of Texas at Austin (CEM-UT) has designed, built, and is presently testing a 20 T on-axis, single turn, toroidal field (TF) coil. The Ignition Technology Demonstration (ITD) is a 0.06-scale IGNITEX (Texas Fusion Ignition Experiment) TF-coil experiment. The purpose of the ITD program is to demonstrate the operation of a 20 T, single turn, TF coil powered by homopolar generators (HPGs). This program is funded by the Advanced Technology Program and the Texas Atomic Energy Research Foundation. Scaling of the prototype 20 T TF coil was selected to be 0.06 on the basis of the maximum current capability of CEM-UT's 60 MJ HPG power supply, which has a rating of 9 MA at 100 V in a parallel configuration. Stresses and temperatures reached in the scale TF coil are representative of those that would be experienced in a full-scale IGNITEX TF coil with a 1.5 m major radius and a 5 s flat top current profile. The 60 MJ HPG system consists of six, 20 MJ, drum-type HPGs each capable of 1.5 MA at 100 V. Only 25% of the available system energy is used to drive the single turn TF coil to 20 T.
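The quoted numbers are mutually consistent with the ideal toroidal-field relation B = μ0·I/(2πR). A quick check, using the 9 MA supply rating and the 0.06-scaled 1.5 m major radius:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (T*m/A)

def toroidal_field(total_current_A: float, major_radius_m: float) -> float:
    """On-axis field of an ideal toroidal-field coil, B = mu0*I/(2*pi*R)."""
    return MU0 * total_current_A / (2.0 * math.pi * major_radius_m)

# 0.06-scale ITD coil: 1.5 m * 0.06 = 0.09 m major radius, single turn
# carrying the full 9 MA from the parallel-connected HPG supply.
B = toroidal_field(9.0e6, 0.06 * 1.5)
print(round(B, 1), "T")
```

The ideal-coil formula recovers the 20 T design field exactly, which is a useful cross-check of the scaling argument in the record.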

  16. Study of proton and two-proton emission from light neutron-deficient nuclei around A=20

    Energy Technology Data Exchange (ETDEWEB)

    Zerguerras, T

    2001-09-01

    Proton and two-proton emission from light neutron-deficient nuclei around A=20 have been studied. A radioactive beam of {sup 18}Ne, {sup 17}F and {sup 20}Mg, produced at the Grand Accelerateur National d'Ions Lourds by fragmentation of a {sup 24}Mg primary beam at 95 MeV/A, bombarded a {sup 9}Be target to form unbound states. Proton(s) and nuclei from the decay were detected in the MUST array and the SPEG spectrometer, respectively. From energy and angle measurements, the invariant mass of the decaying nucleus could be reconstructed. Double-coincidence events between a proton and {sup 17}F, {sup 16}O, {sup 15}O, {sup 14}O and {sup 18}Ne were registered to obtain excitation energy spectra of {sup 18}Ne, {sup 17}F, {sup 16}F, {sup 15}F and {sup 19}Na. Generally, the measured masses are in agreement with previous experiments. In the case of {sup 18}Ne, the excitation energy and angular distributions agree well with the predictions of a breakup model calculation. From {sup 17}Ne + proton coincidences, a first experimental measurement of the ground-state mass excess of {sup 18}Na has been obtained, yielding 24.19(0.15) MeV. Two-proton emission from {sup 17}Ne and {sup 18}Ne excited states and the {sup 19}Mg ground state was studied through triple coincidences between two protons and {sup 15}O, {sup 16}O and {sup 17}Ne, respectively. In the first case, the proton-proton relative-angle distribution in the center of mass was compared with model calculations. Sequential emission from excited states of {sup 17}Ne above the proton emission threshold, through {sup 16}F, is dominant, but a {sup 2}He decay channel could not be excluded. No {sup 2}He emission from the 1.288 MeV {sup 17}Ne state or from the 6.15 MeV {sup 18}Ne state has been observed. Only one coincidence event between {sup 17}Ne and two protons was registered, the cross section of the one-neutron stripping reaction on {sup 20}Mg being much lower than predicted. (author)
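The invariant-mass reconstruction mentioned above combines the measured energies and momenta of the decay products. A generic sketch of the kinematics (toy four-vectors, not experimental data):

```python
import math

def invariant_mass(particles):
    """Invariant mass (MeV/c^2) of particles given as (E, px, py, pz)
    four-vectors, energies in MeV and momenta in MeV/c."""
    E = sum(p[0] for p in particles)
    px = sum(p[1] for p in particles)
    py = sum(p[2] for p in particles)
    pz = sum(p[3] for p in particles)
    return math.sqrt(E**2 - px**2 - py**2 - pz**2)

# Toy check: two back-to-back 500 MeV/c protons (m_p ~ 938.272 MeV/c^2);
# with the momenta cancelling, the invariant mass is simply the total energy.
mp = 938.272
E = math.sqrt(mp**2 + 500.0**2)
m = invariant_mass([(E, 0.0, 0.0, 500.0), (E, 0.0, 0.0, -500.0)])
print(round(m, 1))
```

In the experiment the same quantity, built from the proton(s) measured in MUST and the fragment measured in SPEG, gives the excitation energy spectrum of the unbound parent.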

  17. Revalidation of the isobaric multiplet mass equation for the $A=20$ quintet

    CERN Document Server

    Glassman, B E; Wrede, C; Allen, J; Bardayan, D W; Bennett, M B; Brown, B A; Chipps, K A; Febbraro, M; Fry, C; Hall, M R; Hall, O; Liddick, S N; O'Malley, P; Ong, W; Pain, S D; Schwartz, S B; Shidling, P; Sims, H; Thompson, P; Zhang, H

    2015-01-01

    An unexpected breakdown of the isobaric multiplet mass equation in the $A=20$, $T=2$ quintet was recently reported, presenting a challenge to modern theories of nuclear structure. In the present work, the excitation energy of the lowest $T = 2$ state in $^{20}$Na has been measured to be $6498.4 \\pm 0.2_{\\textrm{stat}} \\pm 0.4_{\\textrm{syst}}$ keV by using the superallowed $0^+ \\rightarrow 0^+$ beta decay of $^{20}$Mg to access it and an array of high-purity germanium detectors to detect its $\\gamma$-ray deexcitation. This value differs by 27 keV (1.9 standard deviations) from the recommended value of $6525 \\pm 14$ keV and is a factor of 28 more precise. The isobaric multiplet mass equation is shown to be revalidated when the new value is adopted.
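The isobaric multiplet mass equation tested here states that the mass excesses of the five quintet members are quadratic in the isospin projection $T_z$. A sketch of the fit-and-residual check, using synthetic coefficients (illustrative values, not the measured $A=20$ data):

```python
import numpy as np

# IMME: within an isospin multiplet the mass excess is (to leading order)
# quadratic in the isospin projection Tz:  M(Tz) = a + b*Tz + c*Tz**2.
Tz = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])

# Synthetic quintet built from assumed coefficients (keV); a validated
# IMME means the five measured masses leave negligible residuals.
a, b, c = 10000.0, -3500.0, 240.0
M = a + b * Tz + c * Tz**2

coeffs = np.polyfit(Tz, M, 2)           # least-squares fit -> [c, b, a]
residuals = M - np.polyval(coeffs, Tz)  # ~0 if the quadratic form holds

print(np.max(np.abs(residuals)))
```

A breakdown of the IMME, such as the one the new $^{20}$Na measurement resolves, would show up as residuals well outside the experimental uncertainties (or as a significant cubic $d\,T_z^3$ term).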

  18. A 20-KW Wind Energy Conversion System (WECS) at the Marine Corps Air Station, Kaneohe, Hawaii

    Science.gov (United States)

    Pal, D.

    1983-01-01

    The wind turbine generator chosen for the evaluation was a horizontal-axis, propeller-type, downwind rotor driving a three-phase, self-excited alternator through a step-up gearbox. The alternator feeds into the base power distribution system through a three-phase, line-commutated synchronous inverter using SCRs. The site has moderate wind conditions with an annual average windspeed of 12 to 14 mph, and the WECS turbine has a relatively high (29 mph) rated windspeed. The 20-kW WECS system was primarily designed to obtain operating experience with, and maintenance information on, a 20-kW-sized WECS. This report describes in detail the experience gained and lessons learned during the field evaluation.

  19. Interaction cross-sections and matter radii of A = 20 isobars

    International Nuclear Information System (INIS)

    High-energy interaction cross-sections of A=20 nuclei (20N, 20O, 20F, 20Ne, 20Na, 20Mg) on carbon were measured with accuracies of ∼1%. The nuclear matter rms radii derived from the measured cross-sections show an irregular dependence on isospin projection. The largest difference in radii, which amounts to approximately 0.2 fm, has been obtained for the mirror nuclei 20O and 20Mg. The influence of nuclear deformation and binding energy on the radii is discussed. By evaluating the difference in rms radii of neutron and proton distributions, evidence has been found for the existence of a proton skin for 20Mg and of a neutron skin for 20N. (orig.)

  20. Antiapoptotic effect both in vivo and in vitro of A20 gene when transfected into rat hippocampal neurons

    Institute of Scientific and Technical Information of China (English)

    Hong-sheng MIAO; Lu-yang YU; Guo-zhen HUI; Li-he GUO

    2005-01-01

    Aim: To evaluate the antiapoptotic effect of the A20 gene in primary hippocampal neurons both in vivo and in vitro. Methods: Primary hippocampal neurons from embryonic day 18 (E18) rats were transfected with the A20 gene by using the new Nucleofector electroporation transfection method. We then examined whether A20-neurons possessed anti-apoptotic abilities after TNF-α stimulation in vitro. A20-neurons and pcDNA3-neurons were transplanted into the penumbra of the brains of rats that had been subjected to 90 min of ischemia induced by left middle cerebral artery occlusion (MCAO). Results: A20-neurons resisted TNF-α-induced apoptosis in vitro. The apoptosis rate of neurons overexpressing A20 (28.46%±3.87%) was lower than that of neurons transfected with pcDNA3 (53.06%±5.36%). More A20-neurons survived in the penumbra both 3 d and 7 d after transplantation than did sham pcDNA3-neurons. Conclusion: The novel function of A20 may make it a potential target for gene therapy for neurological diseases.

  1. Comparison of Two Fluid Replacement Protocols During a 20-km Trail Running Race in the Heat.

    Science.gov (United States)

    Lopez, Rebecca M; Casa, Douglas J; Jensen, Katherine A; Stearns, Rebecca L; DeMartini, Julie K; Pagnotta, Kelly D; Roti, Melissa W; Armstrong, Lawrence E; Maresh, Carl M

    2016-09-01

    Lopez, RM, Casa, DJ, Jensen, K, Stearns, RL, DeMartini, JK, Pagnotta, KD, Roti, MW, Armstrong, LE, and Maresh, CM. Comparison of two fluid replacement protocols during a 20-km trail running race in the heat. J Strength Cond Res 30(9): 2609-2616, 2016-Proper hydration is imperative for athletes striving for peak performance and safety; however, the effectiveness of various fluid replacement strategies in the field setting is unknown. The purpose of this study was to investigate how two hydration protocols affect physiological responses and performance during a 20-km trail running race. A randomized, counter-balanced, crossover design was used in a field setting (mean ± SD: WBGT 28.3 ± 1.9 °C). Well-trained male (n = 8) and female (n = 5) runners (39 ± 14 years; 175 ± 9 cm; 67.5 ± 11.1 kg; 13.4 ± 4.6% BF) completed two 20-km trail races (5 × 4-km loop) with different water hydration protocols: (a) ad libitum (AL) consumption and (b) individualized rehydration (IR). Data were analyzed using repeated-measures ANOVA. Paired t-tests compared pre-race-post-race measures. Main outcome variables were race time, heart rate (HR), gastrointestinal temperature (TGI), fluid consumed, percent body mass loss (BML), and urine osmolality (Uosm). Race times between groups were similar. There was a significant condition × time interaction (p = 0.048) for HR, but TGI was similar between conditions. Subjects replaced 30 ± 14% of their water losses in AL and 64 ± 16% of their losses in IR (p < 0.05), with >2% BML occurring in AL. Ad libitum drinking resulted in 1.3% greater BML over the 20-km race, which resulted in no thermoregulatory or performance differences from IR.
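The study's two hydration outcome measures reduce to simple ratios. A minimal sketch with hypothetical weigh-in values (the 67.5 kg figure echoes the cohort mean; the rest are invented for illustration):

```python
def percent_body_mass_loss(pre_kg: float, post_kg: float) -> float:
    """Percent body mass loss (BML) from pre- and post-race weigh-ins."""
    return 100.0 * (pre_kg - post_kg) / pre_kg

def fraction_losses_replaced(fluid_intake_l: float, sweat_loss_l: float) -> float:
    """Fraction of sweat losses replaced by drinking during the race."""
    return fluid_intake_l / sweat_loss_l

# Hypothetical runner: 67.5 kg pre-race, 66.0 kg post-race, 0.6 L drunk
# against 2.0 L of sweat losses (i.e., 30% replaced, as in the AL mean).
print(round(percent_body_mass_loss(67.5, 66.0), 2),
      round(fraction_losses_replaced(0.6, 2.0), 2))
```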

  2. Self-Rated Activity Levels and Longevity: Evidence from a 20 Year Longitudinal Study

    Science.gov (United States)

    Mullee, Mark A.; Coleman, Peter G.; Briggs, Roger S. J.; Stevenson, James E.; Turnbull, Joanne C.

    2008-01-01

    The study reports on factors predicting the longevity of 328 people over the age of 65 drawn from an English city and followed over 20 years. Both the reported activities score and the individual's comparative evaluation of their own level of activity independently reduced the risk of death, even when health and cognitive status were taken into…

  3. Uncertainty and Sensitivity Analyses of Duct Propagation Models

    Science.gov (United States)

    Nark, Douglas M.; Watson, Willie R.; Jones, Michael G.

    2008-01-01

    This paper presents results of uncertainty and sensitivity analyses conducted to assess the relative merits of three duct propagation codes. Results from this study are intended to support identification of a "working envelope" within which to use the various approaches underlying these propagation codes. This investigation considers a segmented liner configuration that models the NASA Langley Grazing Incidence Tube, for which a large set of measured data was available. For the uncertainty analysis, the selected input parameters (source sound pressure level, average Mach number, liner impedance, exit impedance, static pressure and static temperature) are randomly varied over a range of values. Uncertainty limits (95% confidence levels) are computed for the predicted values from each code, and are compared with the corresponding 95% confidence intervals in the measured data. Generally, the mean values of the predicted attenuation are observed to track the mean values of the measured attenuation quite well and predicted confidence intervals tend to be larger in the presence of mean flow. A two-level, six factor sensitivity study is also conducted in which the six inputs are varied one at a time to assess their effect on the predicted attenuation. As expected, the results demonstrate the liner resistance and reactance to be the most important input parameters. They also indicate the exit impedance is a significant contributor to uncertainty in the predicted attenuation.
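The two analyses described above, random variation of the inputs with 95% limits and a one-at-a-time sensitivity sweep, can be sketched generically. The toy attenuation model and input ranges below are invented stand-ins, not the actual duct propagation codes or their parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a propagation code: a toy linear attenuation model whose
# inputs loosely mirror those varied in the study (purely illustrative).
def attenuation_db(mach, resistance, reactance):
    return 30.0 - 8.0 * mach + 4.0 * resistance - 2.0 * reactance

# Uncertainty analysis: randomly vary the inputs, form 95% limits.
n = 10_000
samples = attenuation_db(rng.uniform(0.0, 0.5, n),
                         rng.uniform(0.5, 3.0, n),
                         rng.uniform(-1.0, 1.0, n))
lo, hi = np.percentile(samples, [2.5, 97.5])

# One-at-a-time sensitivity: swing each input over its range, others held mid.
base = {"mach": 0.25, "resistance": 1.75, "reactance": 0.0}
ranges = {"mach": (0.0, 0.5), "resistance": (0.5, 3.0), "reactance": (-1.0, 1.0)}
sens = {name: (attenuation_db(**{**base, name: high})
               - attenuation_db(**{**base, name: low}))
        for name, (low, high) in ranges.items()}

print(f"95% limits: [{lo:.1f}, {hi:.1f}] dB; sensitivities: {sens}")
```

In this toy model the resistance term dominates the sensitivities by construction, echoing (but not reproducing) the study's finding that liner resistance and reactance are the most important inputs.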

  4. VICTORIA-92 pretest analyses of PHEBUS-FPT0

    Energy Technology Data Exchange (ETDEWEB)

    Bixler, N.E.; Erickson, C.M.

    1994-01-01

    FPT0 is the first of six tests that are scheduled to be conducted in an experimental reactor in Cadarache, France. The test apparatus consists of an in-pile fuel bundle, an upper plenum, a hot leg, a steam generator, a cold leg, and a small containment. Thus, the test is integral in the sense that it attempts to simulate all of the processes that would be operative in a severe nuclear accident. In FPT0, the fuel will be trace-irradiated; in subsequent tests high burn-up fuel will be used. This report discusses separate pretest analyses of the FPT0 fuel bundle and primary circuit conducted using the USNRC's source term code, VICTORIA-92. Predictions for release of fission product, control rod, and structural elements from the test section are compared with those given by CORSOR-M. In general, the releases predicted by VICTORIA-92 occur earlier than those predicted by CORSOR-M. The other notable difference is that U release is predicted to be on a par with that of the control rod elements; CORSOR-M predicts U release to be about 2 orders of magnitude greater.

  5. Social Network Analyses and Nutritional Behavior: An Integrated Modeling Approach

    Directory of Open Access Journals (Sweden)

    Alistair McNair Senior

    2016-01-01

    Animals have evolved complex foraging strategies to obtain a nutritionally balanced diet and associated fitness benefits. Recent advances in nutrition research, combining state-space models of nutritional geometry with agent-based models of systems biology, show how nutrient-targeted foraging behavior can also influence animal social interactions, ultimately affecting collective dynamics and group structures. Here we demonstrate how social network analyses can be integrated into such a modeling framework and provide a tangible and practical analytical tool to compare experimental results with theory. We illustrate our approach by examining the case of nutritionally mediated dominance hierarchies. First we show how nutritionally explicit agent-based models that simulate the emergence of dominance hierarchies can be used to generate social networks. Importantly, the structural properties of our simulated networks bear similarities to dominance networks of real animals (where conflicts are not always directly related to nutrition). Finally, we demonstrate how metrics from social network analyses can be used to predict the fitness of agents in these simulated competitive environments. Our results highlight the potential importance of nutritional mechanisms in shaping dominance interactions in a wide range of social and ecological contexts. Nutrition likely influences social interaction in many species, and yet a theoretical framework for exploring these effects is currently lacking. Combining social network analyses with computational models from nutritional ecology may bridge this divide, representing a pragmatic approach for generating theoretical predictions for nutritional experiments.

  6. Genotype-phenotype analyses of classic neuronal ceroid lipofuscinoses (NCLs): genetic predictions from clinical and pathological findings

    Institute of Scientific and Technical Information of China (English)

    Weina JU; W. Ted BROWN; Nanbert ZHONG; Anetta WRONSKA; Dorota N. MOROZIEWICZ; Rocksheng ZHONG; Natalia WISNIEWSKI; Anna JURKIEWICZ; Michael FIORY; Krystyna E. WISNIEWSKI; Lance JOHNSTON

    2006-01-01

    Objective: Genotype-phenotype associations were studied in 517 subjects clinically affected by classical neuronal ceroid lipofuscinosis (NCL). Methods: Genetic loci CLN1-3 were analyzed in regard to age of onset, initial neurological symptoms, and electron microscope (EM) profiles. Results: The most common initial symptom leading to a clinical evaluation was developmental delay (30%) in NCL1, seizures (42.4%) in NCL2, and vision problems (53.5%) in NCL3. Eighty-two percent of NCL1 cases had granular osmiophilic deposits (GRODs) or mixed-GROD-containing EM profiles; 94% of NCL2 cases had curvilinear (CV) or mixed-CV-containing profiles; and 91% of NCL3 had fingerprint (FP) or mixed-FP-containing profiles. The mixed-type EM profile was found in approximately one-third of the NCL cases. DNA mutations within a specific CLN gene were further correlated with NCL phenotypes. Seizures were noticed to associate with the common mutations 523G>A and 636C>T of CLN2 in NCL2 but not with the common mutations 223G>A and 451C>T of CLN1 in NCL1. Vision loss was the initial symptom in all types of mutations in NCL3. Surprisingly, our data showed that the age of onset was atypical in 51.3% of NCL1 (infantile form) cases, 19.7% of NCL2 (late-infantile form) cases, and 42.8% of NCL3 (juvenile form) cases. Conclusion: Our data provide an overall picture regarding the clinical recognition of classical childhood NCLs. This may assist in the prediction and genetic identification of NCL1-3 via their characteristic clinical features.

  7. Economische analyse van de Nederlandse biotechnologiesector

    OpenAIRE

    Giessen, A.M. van der; Gijsbers, G.W.; Koops, R.; Zee, F.A. van der

    2014-01-01

    Commissioned by the Netherlands Commission on Genetic Modification (COGEM), TNO carried out a desk study entitled "Economische analyse van de Nederlandse biotechnologiesector" (Economic Analysis of the Dutch Biotechnology Sector). This analysis is one of the preliminary studies that COGEM is having carried out in preparation for the Trendanalyse Biotechnologie (Biotechnology Trend Analysis), which is expected to be conducted in 2015. For this analysis, COGEM asked TNO to map out anew the developments, trends and opportunities of biotechnology, with an emphasis on econo...

  8. [Anne Arold. Kontrastive Analyse...] / Paul Alvre

    Index Scriptorium Estoniae

    Alvre, Paul, 1921-2008

    2001-01-01

    Review of: Arold, Anne. Kontrastive Analyse der Wortbildungsmuster im Deutschen und im Estnischen (am Beispiel der Aussehensadjektive). Tartu, 2000. (Dissertationes philologiae germanicae Universitatis Tartuensis)

  9. Adiponectin Induces A20 Expression in Adipose Tissue To Confer Metabolic Benefit

    OpenAIRE

    Hand LE, Usan P, Cooper GJS, Xu LY, Ammori B, Cunningham PS, Aghamohammadzadeh R, Soran H, Greenstein A, Loudon ASI, Bechtold DA, Ray DW

    2014-01-01

    Obesity is a major risk factor for metabolic disease, with white adipose tissue (WAT) inflammation emerging as a key underlying pathology. We show that mice lacking Reverbα exhibit enhanced fat storage without the predicted increase in WAT inflammation or loss of insulin sensitivity. In contrast to most animal models of obesity and obese human patients, Reverbα−/− mice exhibit elevated serum adiponectin levels and increased adiponectin secretion from WAT explants in vitro, highlighting ...

  10. Predicting protein structure classes from function predictions

    DEFF Research Database (Denmark)

    Sommer, I.; Rahnenfuhrer, J.; de Lichtenberg, Ulrik;

    2004-01-01

    We introduce a new approach to using the information contained in sequence-to-function prediction data in order to recognize protein template classes, a critical step in predicting protein structure. The data on which our method is based comprise probabilities of functional categories; for given......-to-structure prediction methods....

  11. Star 48 solid rocket motor nozzle analyses and instrumented firings

    Science.gov (United States)

    Porter, R. L.

    1986-01-01

    The analyses and testing performed by NASA in support of an expanded and improved nozzle design database for use by the U.S. solid rocket motor industry are presented. A production nozzle with a history of one ground failure and two flight failures was selected for analyses and testing. The stress analysis was performed with the Champion computer code developed by the U.S. Navy. Several improvements were made to the code. Strain predictions were made and compared to test data. Two short-duration motor firings were conducted with highly instrumented nozzles. The first nozzle had 58 thermocouples, 66 strain gages, and 8 bondline pressure measurements. The second nozzle had 59 thermocouples, 68 strain measurements, and 8 bondline pressure measurements. Most of this instrumentation was on the nonmetallic parts, and provided significantly more thermal and strain data on the nonmetallic components of a nozzle than had been accumulated in a solid rocket motor test to date.

  12. Frequency and changes in trends of leading risk factors of coronary heart disease in women in the city of Novi Sad during a 20-year period

    Directory of Open Access Journals (Sweden)

    Rakić Dušica

    2012-01-01

    Background/Aim. From 1984 to 2004 the city of Novi Sad participated, through its Health Center "Novi Sad", in the international Multinational MONItoring of Trends and Determinants in CArdiovascular Disease (MONICA) project, as one of the 38 research centers in 21 countries around the world. The aim of this study was to determine the frequency and changes of trends in leading risk factors of coronary heart disease (CHD) and to analyze the trend of coronary events in women in Novi Sad during a 20-year period. Methods. In 2004, the fourth survey within the MONICA project was conducted in the city of Novi Sad. The representative sample included 1,041 women between the ages of 25 and 74. The prevalence of risk factors for CHD such as smoking, high blood pressure, elevated blood cholesterol, elevated blood glucose and obesity was determined. Also, indicators of risk factors and rates of coronary events in women were compared with the results from the three previous MONICA project screens, as well as with the results from other research centres. The χ2-test, linear trend and correlation coefficient were used in the statistical analysis of the results obtained. Results. During the 20-year period covered by the study, the prevalence of the leading risk factors for the development of CHD in the surveyed women increased significantly, in positive correlation with the values of the linear trend. The increases in morbidity and mortality rates of coronary events were also positively correlated. A decrease was recorded only in the period from 1985-1989 (during the implementation of the intervention programme). Conclusion. Analysis of the increase in prevalence of leading risk factors of CHD and the significant increase in the rates of coronary events shows that the health status of women in Novi Sad deteriorated over the 20-year period.

  13. Involvement of Ubiquitin-Editing Protein A20 in Modulating Inflammation in Rat Cochlea Associated with Silver Nanoparticle-Induced CD68 Upregulation and TLR4 Activation

    Science.gov (United States)

    Feng, Hao; Pyykkö, Ilmari; Zou, Jing

    2016-05-01

    Silver nanoparticles (AgNPs) were shown to temporarily impair the biological barriers in the skin of the external ear canal, mucosa of the middle ear, and inner ear, causing partially reversible hearing loss after delivery into the middle ear. The current study aimed to elucidate the molecular mechanism, emphasizing the TLR signaling pathways in association with the potential recruitment of macrophages in the cochlea and the modulation of inflammation by ubiquitin-editing protein A20. Molecules potentially involved in these signaling pathways were thoroughly analysed using immunohistochemistry in the rat cochlea exposed to AgNPs at various concentrations through intratympanic injection. The results showed that 0.4 % AgNPs but not 0.02 % AgNPs upregulated the expressions of CD68, TLR4, MCP1, A20, and RNF11 in the strial basal cells, spiral ligament fibrocytes, and non-sensory supporting cells of Corti's organ. 0.4 % AgNPs had no effect on CD44, TLR2, MCP2, Rac1, myosin light chain, VCAM1, Erk1/2, JNK, p38, IL-1β, TNF-α, TNFR1, TNFR2, IL-10, or TGF-β. This study suggested that AgNPs might confer macrophage-like functions on the strial basal cells and spiral ligament fibrocytes and enhance the immune activities of non-sensory supporting cells of Corti's organ through the upregulation of CD68, which might be involved in TLR4 activation. A20 and RNF11 played roles in maintaining cochlear homeostasis via negative regulation of the expressions of inflammatory cytokines.

  15. Assessment of protein disorder region predictions in CASP10

    KAUST Repository

    Monastyrskyy, Bohdan

    2013-11-22

    The article presents the assessment of disorder region predictions submitted to CASP10. The evaluation is based on the three measures tested in previous CASPs: (i) balanced accuracy, (ii) the Matthews correlation coefficient for the binary predictions, and (iii) the area under the curve in the receiver operating characteristic (ROC) analysis of predictions using probability annotation. We also performed new analyses such as comparison of the submitted predictions with those obtained with a Naïve disorder prediction method and with predictions from the disorder prediction databases D2P2 and MobiDB. On average, the methods participating in CASP10 demonstrated slightly better performance than those in CASP9.
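Two of the three evaluation measures named above, balanced accuracy and the Matthews correlation coefficient, can be computed directly from the binary confusion counts. A self-contained sketch on toy per-residue labels (invented, not CASP data):

```python
import math

def confusion(y_true, y_pred):
    """Binary confusion counts (tp, tn, fp, fn) for 0/1 labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def balanced_accuracy(y_true, y_pred):
    # Mean of sensitivity and specificity; robust to class imbalance,
    # which matters because disordered residues are the minority class.
    tp, tn, fp, fn = confusion(y_true, y_pred)
    return 0.5 * (tp / (tp + fn) + tn / (tn + fp))

def mcc(y_true, y_pred):
    # Matthews correlation coefficient for the binary predictions.
    tp, tn, fp, fn = confusion(y_true, y_pred)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Toy per-residue disorder labels (1 = disordered) and predictions.
y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1, 0, 1]
print(balanced_accuracy(y_true, y_pred), mcc(y_true, y_pred))
```

The third measure, ROC AUC, additionally requires the per-residue probability annotation rather than the thresholded binary calls.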

  16. Can NGOs Make a Difference? Revisiting and Reframing a 20-year Debate

    DEFF Research Database (Denmark)

    Opoku-Mensah, Paul Yaw

    2007-01-01

    The article seeks to connect the vibrant debates in the Nordic region on NGOs and the aid system with the international comparative debates on NGOs and development alternatives. It argues for a reformulation of the international debate on NGOs and development alternatives to address the foundational questions related to the formative role and structural impact of the international aid system on NGOs and their roles. This reformulation moves the discussions further and enables analyses that provide understanding of the actual and potential role of NGOs in transforming development processes.

  17. Economische analyse van de Nederlandse biotechnologiesector

    NARCIS (Netherlands)

    Giessen, A.M. van der; Gijsbers, G.W.; Koops, R.; Zee, F.A. van der

    2014-01-01

    Commissioned by the Netherlands Commission on Genetic Modification (COGEM), TNO carried out a desk study entitled "Economische analyse van de Nederlandse biotechnologiesector" (Economic Analysis of the Dutch Biotechnology Sector). This analysis is one of the preliminary studies that COGEM is having carried out in preparation for the Trendanalyse Biotechnologie (Biotechnology Trend Analysis), which is expected...

  18. New environmental metabarcodes for analysing soil DNA

    DEFF Research Database (Denmark)

    Epp, Laura S.; Boessenkool, Sanne; Bellemain, Eva P.;

    2012-01-01

    Metabarcoding approaches use total and typically degraded DNA from environmental samples to analyse biotic assemblages and can potentially be carried out for any kind of organism in an ecosystem. These analyses rely on specific markers, here called metabarcodes, which should be optimized for ta... The metabarcoding approach has considerable potential for biodiversity screening of modern samples and also as a palaeoecological tool.

  19. Novel Algorithms for Astronomical Plate Analyses

    Indian Academy of Sciences (India)

    Rene Hudec; Lukas Hudec

    2011-03-01

    Powerful computers and dedicated software allow effective data mining and scientific analyses in astronomical plate archives. We give and discuss examples of newly developed algorithms for astronomical plate analyses, e.g., searches for optical transients, as well as for major spectral and brightness changes.

  20. Characterization of a 20-nm hard x-ray focus by ptychographic coherent diffractive imaging

    Science.gov (United States)

    Vila-Comamala, Joan; Diaz, Ana; Guizar-Sicairos, Manuel; Gorelick, Sergey; Guzenko, Vitaliy A.; Karvinen, Petri; Kewish, Cameron M.; Färm, Elina; Ritala, Mikko; Mantion, Alexandre; Bunk, Oliver; Menzel, Andreas; David, Christian

    2011-09-01

    Recent advances in the fabrication of diffractive X-ray optics have boosted hard X-ray microscopy to spatial resolutions of 30 nm and below. Here, we demonstrate the fabrication of zone-doubled Fresnel zone plates for multi-keV photon energies (4-12 keV) with outermost zone widths down to 20 nm. However, the characterization of such elements is not straightforward using conventional methods such as knife-edge scans on well-characterized test objects. To overcome this limitation, we have used ptychographic coherent diffractive imaging to characterize a 20 nm-wide X-ray focus produced by a zone-doubled Fresnel zone plate at a photon energy of 6.2 keV. An ordinary scanning transmission X-ray microscope was modified to acquire the ptychographic data from a strongly scattering test object. The ptychographic algorithms allowed for the reconstruction of the image of the test object as well as of the focused hard X-ray beam waist, with high spatial resolution and dynamic range. This method yields a full description of the focusing performance of the Fresnel zone plate, and we demonstrate the usefulness of ptychographic coherent diffractive imaging for metrology and alignment of nanofocusing diffractive X-ray lenses.

  1. Conceptual design of a 20 Tesla pulsed solenoid for a laser solenoid fusion reactor

    International Nuclear Information System (INIS)

Design considerations are described for a strip wound solenoid which is pulsed to 20 tesla while immersed in a 20 tesla bias field, so as to achieve, within the bore of the pulsed solenoid, a net field sequence starting at 20 tesla and going first down to zero, then up to 40 tesla, and finally back to 20 tesla in a period of about 5 × 10⁻³ seconds. The important parameters of the solenoid, e.g., aperture, build, turns, stored and dissipated energy, field intensity and powering circuit, are given. A numerical example for a specific design is presented. Mechanical stresses in the solenoid and the subsequent choice of materials for coil construction are discussed. Several possible design difficulties are not discussed in this preliminary report of a conceptual magnet design, such as uniformity of field, long-term stability of insulation under neutron bombardment, and choice of structural materials of appropriate tensile strength and elasticity to withstand the magnetic forces developed; these questions are addressed in detail in the complete design report and in part in reference one. Furthermore, the authors feel that the problems encountered in this conceptual design are surmountable and are not a hindrance to the construction of such a magnet system.

  2. Seismic Collapse Assessment of a 20-Story Steel Moment-Resisting Frame Structure

    Directory of Open Access Journals (Sweden)

    Annika Mathiasson

    2014-10-01

    Full Text Available The 2010 edition of the load standard in the United States (U.S.), ASCE 7-10 (Minimum Design Loads for Buildings and Other Structures), introduced risk-targeted spectral acceleration values for the estimation of seismic design loads. In this study, a 20-story steel moment resisting frame structure located in Century City, CA, USA was designed based on ASCE 7-10 and a probabilistic seismic collapse assessment was conducted. The main goals of this study are: (a) to evaluate whether the design of a typical steel moment-frame structure based on risk-targeted spectral accelerations fulfills the target design collapse level of 1% probability of collapse in 50 years; and (b) to quantify the collapse potential of a tall steel structure design based on the most current U.S. seismic code provisions. The probability of collapse was estimated for two sets of 104 and 224 recorded ground motions, respectively. An evaluation of the results demonstrated that for this specific structure the code-prescribed collapse performance target was reasonably met.
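
    The 1%-in-50-years target used above can be restated as an equivalent mean annual collapse rate. A minimal sketch of that conversion, assuming collapses arrive as a Poisson process (an assumption of this illustration, not a statement about the paper's method):

    ```python
    import math

    def annual_rate_from_lifetime_prob(p_lifetime, years=50.0):
        """Annual collapse rate equivalent to a lifetime probability,
        assuming a Poisson occurrence model: p = 1 - exp(-lam * years)."""
        return -math.log(1.0 - p_lifetime) / years

    def lifetime_prob_from_annual_rate(lam, years=50.0):
        """Inverse conversion: lifetime probability from an annual rate."""
        return 1.0 - math.exp(-lam * years)

    # ASCE 7-10 target: 1% probability of collapse in 50 years
    lam = annual_rate_from_lifetime_prob(0.01)   # about 2.0e-4 per year
    p50 = lifetime_prob_from_annual_rate(lam)    # recovers 0.01
    ```

    A fragility-based annual collapse rate estimated from the ground-motion sets can then be compared directly against this target rate.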

  3. Cost-Benefit Analyses of Transportation Investments

    DEFF Research Database (Denmark)

    Næss, Petter

    2006-01-01

    This paper discusses the practice of cost-benefit analyses of transportation infrastructure investment projects from the meta-theoretical perspective of critical realism. Such analyses are based on a number of untenable ontological assumptions about social value, human nature and the natural…-to-pay investigations. Accepting the ontological and epistemological assumptions of cost-benefit analysis involves an implicit acceptance of the ethical and political values favoured by these assumptions. Cost-benefit analyses of transportation investment projects tend to neglect long-term environmental consequences… and needs among population groups with a low ability to pay. Instead of cost-benefit analyses, impact analyses evaluating the likely effects of project alternatives against a wide range of societal goals is recommended, with quantification and economic valorisation only for impact categories where this can…

  4. Ginkgo biloba extract and long-term cognitive decline: a 20-year follow-up population-based study.

    Directory of Open Access Journals (Sweden)

    Hélène Amieva

    Full Text Available BACKGROUND: Numerous studies have looked at the potential benefits of various nootropic drugs, such as Ginkgo biloba extract (EGb761®; Tanakan®) and piracetam (Nootropyl®), on age-related cognitive decline, often leading to inconclusive results due to small sample sizes or insufficient follow-up duration. The present study assesses the association between intake of EGb761® and cognitive function of elderly adults over a 20-year period. METHODS AND FINDINGS: The data were gathered from the prospective community-based cohort study 'Paquid'. Within the study sample of 3612 non-demented participants aged 65 and over at baseline, three groups were compared: 589 subjects reporting use of EGb761® at least one of the ten assessment visits, 149 subjects reporting use of piracetam at one of the assessment visits and 2874 subjects not reporting use of either EGb761® or piracetam. Decline on MMSE, verbal fluency and visual memory over the 20-year follow-up was analysed with a multivariate mixed linear effects model. A significant difference in MMSE decline over the 20-year follow-up was observed in the EGb761® and piracetam treatment groups compared to the 'neither treatment' group. These effects were in opposite directions: the EGb761® group declined less rapidly than the 'neither treatment' group, whereas the piracetam group declined more rapidly (β = -0.6). Regarding verbal fluency and visual memory, no difference was observed between the EGb761® group and the 'neither treatment' group (respectively, β = 0.21 and β = -0.03), whereas the piracetam group declined more rapidly (respectively, β = -1.40 and β = -0.44). When comparing the EGb761® and piracetam groups directly, a different decline was observed for the three tests (respectively, β = -1.07, β = -1.61 and β = -0.41). CONCLUSION: Cognitive decline in a non-demented elderly population was lower in subjects who reported using EGb761® than in those who did

  5. Prediction of coefficients of thermal expansion for unidirectional composites

    Science.gov (United States)

    Bowles, David E.; Tompkins, Stephen S.

    1989-01-01

    Several analyses for predicting the longitudinal, alpha(1), and transverse, alpha(2), coefficients of thermal expansion of unidirectional composites were compared with each other, and with experimental data on different graphite fiber reinforced resin, metal, and ceramic matrix composites. Analytical and numerical analyses that accurately accounted for Poisson restraining effects in the transverse direction were in consistently better agreement with experimental data for alpha(2), than the less rigorous analyses. All of the analyses predicted similar values of alpha(1), and were in good agreement with the experimental data. A sensitivity analysis was conducted to determine the relative influence of constituent properties on the predicted values of alpha(1), and alpha(2). As would be expected, the prediction of alpha(1) was most sensitive to longitudinal fiber properties and the prediction of alpha(2) was most sensitive to matrix properties.
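
    The closed-form estimates that analyses of this kind typically start from can be sketched as follows: a stiffness-weighted rule of mixtures for alpha(1), and Schapery's expression for alpha(2), which builds in the Poisson restraining effect noted above. The constituent values below are assumed, roughly representative of a graphite/epoxy system, not taken from the paper.

    ```python
    # Illustrative constituent properties (assumed, roughly graphite/epoxy)
    E_f, E_m = 230e9, 3.5e9        # fibre / matrix moduli [Pa]
    a_f, a_m = -0.5e-6, 55e-6      # fibre / matrix CTEs [1/K]
    nu_f, nu_m = 0.20, 0.35        # Poisson's ratios
    V_f = 0.60                     # fibre volume fraction
    V_m = 1.0 - V_f

    # Longitudinal CTE: stiffness-weighted rule of mixtures
    alpha1 = (E_f*a_f*V_f + E_m*a_m*V_m) / (E_f*V_f + E_m*V_m)

    # Transverse CTE: Schapery's estimate, which accounts for the
    # Poisson restraining effect in the transverse direction
    nu12 = nu_f*V_f + nu_m*V_m
    alpha2 = (1 + nu_f)*a_f*V_f + (1 + nu_m)*a_m*V_m - alpha1*nu12
    ```

    With these inputs the fibre-dominated alpha(1) comes out near zero while the matrix-dominated alpha(2) is two orders of magnitude larger, matching the sensitivity pattern reported above.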

  6. Regulation of the human SLC25A20 expression by peroxisome proliferator-activated receptor alpha in human hepatoblastoma cells

    Energy Technology Data Exchange (ETDEWEB)

    Tachibana, Keisuke, E-mail: nya@phs.osaka-u.ac.jp [Graduate School of Pharmaceutical Sciences, Osaka University, 1-6 Yamadaoka, Suita, Osaka 565-0871 (Japan); Takeuchi, Kentaro; Inada, Hirohiko [Graduate School of Pharmaceutical Sciences, Osaka University, 1-6 Yamadaoka, Suita, Osaka 565-0871 (Japan); Yamasaki, Daisuke [Graduate School of Pharmaceutical Sciences, Osaka University, 1-6 Yamadaoka, Suita, Osaka 565-0871 (Japan); The Center for Advanced Medical Engineering and Informatics, Osaka University, 2-2 Yamadaoka, Suita, Osaka 565-0871 (Japan); Ishimoto, Kenji [Graduate School of Pharmaceutical Sciences, Osaka University, 1-6 Yamadaoka, Suita, Osaka 565-0871 (Japan); Graduate School of Medicine, Osaka University, 2-2 Yamadaoka, Suita, Osaka 565-0871 (Japan); Tanaka, Toshiya; Hamakubo, Takao; Sakai, Juro; Kodama, Tatsuhiko [Laboratory for System Biology and Medicine, Research Center for Advanced Science and Technology, University of Tokyo, 4-6-1 Komaba, Meguro, Tokyo 153-8904 (Japan); Doi, Takefumi [Graduate School of Pharmaceutical Sciences, Osaka University, 1-6 Yamadaoka, Suita, Osaka 565-0871 (Japan); The Center for Advanced Medical Engineering and Informatics, Osaka University, 2-2 Yamadaoka, Suita, Osaka 565-0871 (Japan); Graduate School of Medicine, Osaka University, 2-2 Yamadaoka, Suita, Osaka 565-0871 (Japan)

    2009-11-20

    Solute carrier family 25, member 20 (SLC25A20) is a key molecule that transfers acylcarnitine esters in exchange for free carnitine across the mitochondrial membrane in mitochondrial β-oxidation. The peroxisome proliferator-activated receptor alpha (PPARα) is a ligand-activated transcription factor that plays an important role in the regulation of β-oxidation. We previously established a tetracycline-regulated human cell line that can be induced to express PPARα and found that PPARα induces SLC25A20 expression. In this study, we analyzed the promoter region of the human slc25a20 gene and showed that PPARα regulates the expression of human SLC25A20 via the peroxisome proliferator responsive element.

  7. A20 inhibits human salivary adenoid cystic carcinoma cells invasion via blocking nuclear factor-κB activation

    Institute of Scientific and Technical Information of China (English)

    ZHANG Bin; GUAN Cheng-chao; CHEN Wan-tao; ZHANG Ping; YAN Ming; SHI Jiu-hui; QIN Chun-lin; YANG Qian

    2007-01-01

    Background A20, also known as tumor necrosis factor α induced protein 3 (TNFaip3), is a cytoplasmic zinc finger protein that inhibits nuclear factor kappa-B (NF-κB) activity and prevents tumor necrosis factor (TNF)-mediated programmed cell death. NF-κB is a transcription factor that regulates expression of genes involved in cell proliferation, cell survival and anti-apoptosis. Several studies have suggested that the NF-κB signal pathway is associated with angiogenesis and the clinico-pathological process of adenoid cystic carcinoma (ACC) of the salivary glands. Methods The ability of overexpression of A20 to influence the biological behavior and invasion of ACC cells was examined. The cells were stably transfected with full-length A20 cDNA. Stable gene transfer was verified by real-time polymerase chain reaction (PCR) and Western blot analysis. The change in cell biological behavior was examined by methyl thiazolyl tetrazolium (MTT) and NF-κB luciferase reporter assays, and the invasion of the cells was examined in a Matrigel invasion chamber. Results The pEGPFN3-A20 gene was stably transferred into ACC-2 cells and overexpressed. When cells were treated with TNFα, the NF-κB activity of ACC-2-A20 cells was down-regulated by about 46.32% in contrast to ACC-2-GFP cells (P<0.05). A20 potently inhibited growth of the A20 transfectant ACC-2-A20 compared with the control vector transfected group and the ACC-2 empty control group (P<0.05). The ACC-2-A20 cells showed significantly reduced ability to invade through Matrigel-coated filters compared to ACC-2-GFP and ACC-2 cells. The inhibition rate was up to 71.05% (P<0.05). Conclusions A20 gene transfer is associated with decreased tumor invasion, in part via the down-regulation of NF-κB expression, providing evidence for a potential application of A20 in designing a treatment modality for salivary gland cancers such as ACC.

  8. Quality assurance for Chinese herbal formulae: standardization of IBS-20, a 20-herb preparation

    Directory of Open Access Journals (Sweden)

    Bensoussan Alan

    2010-02-01

    Full Text Available Abstract Background The employment of well characterized test samples prepared from authenticated, high quality medicinal plant materials is key to reproducible herbal research. The present study aims to demonstrate a quality assurance program covering the acquisition, botanical validation, chemical standardization and good manufacturing practices (GMP) production of IBS-20, a 20-herb Chinese herbal formula under study as a potential agent for the treatment of irritable bowel syndrome. Methods Purity and contaminant tests for the presence of toxic metals, pesticide residues, mycotoxins and microorganisms were performed. Qualitative chemical fingerprint analysis and quantitation of marker compounds of the herbs, as well as that of the IBS-20 formula, was carried out with high-performance liquid chromatography (HPLC). Extraction and manufacture of the 20-herb formula were carried out under GMP. Chemical standardization was performed with liquid chromatography-mass spectrometry (LC-MS) analysis. Stability of the formula was monitored with HPLC in real time. Results Quality component herbs, purchased from a GMP supplier, were botanically and chemically authenticated, and quantitative HPLC profiles (fingerprints) of each component herb and of the composite formula were established. An aqueous extract of the mixture of the 20 herbs was prepared and formulated into IBS-20, which was chemically standardized by LC-MS, with 20 chemical compounds serving as reference markers. The stability of the formula was monitored and shown to be stable at room temperature. Conclusion A quality assurance program has been developed for the preparation of a standardized 20-herb formulation for use in clinical studies for the treatment of irritable bowel syndrome (IBS). The procedures developed in the present study will serve as a protocol for other poly-herbal Chinese medicine studies.

  9. Ecosystem development after mangrove wetland creation: plant-soil change across a 20-year chronosequence

    Science.gov (United States)

    Osland, Michael J.; Spivak, Amanda C.; Nestlerode, Janet A.; Lessmann, Jeannine M.; Almario, Alejandro E.; Heitmuller, Paul T.; Russell, Marc J.; Krauss, Ken W.; Alvarez, Federico; Dantin, Darrin D.; Harvey, James E.; From, Andrew S.; Cormier, Nicole; Stagg, Camille L.

    2012-01-01

    Mangrove wetland restoration and creation efforts are increasingly proposed as mechanisms to compensate for mangrove wetland losses. However, ecosystem development and functional equivalence in restored and created mangrove wetlands are poorly understood. We compared a 20-year chronosequence of created tidal wetland sites in Tampa Bay, Florida (USA) to natural reference mangrove wetlands. Across the chronosequence, our sites represent the succession from salt marsh to mangrove forest communities. Our results identify important soil and plant structural differences between the created and natural reference wetland sites; however, they also depict a positive developmental trajectory for the created wetland sites that reflects tightly coupled plant-soil development. Because upland soils and/or dredge spoils were used to create the new mangrove habitats, the soils at younger created sites and at lower depths (10-30 cm) had higher bulk densities, higher sand content, lower soil organic matter (SOM), lower total carbon (TC), and lower total nitrogen (TN) than did natural reference wetland soils. However, in the upper soil layer (0-10 cm), SOM, TC, and TN increased with created wetland site age simultaneously with mangrove forest growth. The rate of created wetland soil C accumulation was comparable to literature values for natural mangrove wetlands. Notably, the time to equivalence for the upper soil layer of created mangrove wetlands appears to be faster than for many other wetland ecosystem types. Collectively, our findings characterize the rate and trajectory of above- and below-ground changes associated with ecosystem development in created mangrove wetlands; this is valuable information for environmental managers planning to sustain existing mangrove wetlands or mitigate for mangrove wetland losses.

  10. The psychological status of phonological analyses

    Directory of Open Access Journals (Sweden)

    David Eddington

    2015-09-01

    Full Text Available This paper casts doubt on the psychological relevance of many phonological analyses. There are four reasons for this: (1) theoretical adequacy does not necessarily imply psychological significance; (2) most approaches are nonempirical in that they are not subject to potential spatiotemporal falsification; (3) phonological analyses are established with little or no recourse to the speakers of the language via experimental psychology; (4) the limited base of evidence on which most analyses are founded is further cause for skepticism.

  11. L’Analyse de discours des Sociologues

    OpenAIRE

    Demailly, Lise

    2013-01-01

    Sociologists use discourse analysis as an analytical method. The research presented here examined this method, its specific features, and its contributions to training in expression techniques (T.E.). It emerges that the sociologist first produces discourse (through interviews and observation) and then analyses and processes it. Such discourse is difficult to use in T.E., saturated as it is with theoretical and even ideological stakes.

  12. Predictability of blocking

    International Nuclear Information System (INIS)

    Tibaldi and Molteni (1990, hereafter referred to as TM) had previously investigated operational blocking predictability by the ECMWF model and the possible relationships between model systematic error and blocking in the winter season of the Northern Hemisphere, using seven years of ECMWF operational archives of analyses and day 1 to 10 forecasts. They showed that fewer blocking episodes than in the real atmosphere were generally simulated by the model, and that this deficiency increased with increasing forecast time. As a consequence of this, a major contribution to the systematic error in the winter season was shown to derive from the inability of the model to properly forecast blocking. In this study, the analysis performed in TM for the first seven winter seasons of the ECMWF operational model is extended to the subsequent five winters, during which model development, reflecting both resolution increases and parametrisation modifications, continued unabated. In addition, the objective blocking index developed by TM has been applied to the observed data to study the natural low frequency variability of blocking. The ability to simulate blocking of some climate models has also been tested
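
    The TM objective blocking index referred to above is built from meridional gradients of the 500 hPa geopotential height field. A simplified, zero-offset sketch of the criterion (the published index also scans small latitude offsets around the reference latitudes, omitted here for brevity):

    ```python
    import numpy as np

    def tm_blocking_index(z500, lats, lons):
        """Simplified Tibaldi-Molteni blocking test at each longitude.
        z500 : 2-D array (lat, lon) of 500 hPa geopotential height [m].
        A longitude is 'blocked' when the southern height gradient
        reverses (GHGS > 0) while the northern gradient stays strongly
        negative (GHGN < -10 m per degree latitude)."""
        def row(phi):
            return z500[np.argmin(np.abs(lats - phi)), :]

        z_n, z_0, z_s = row(80.0), row(60.0), row(40.0)
        ghgs = (z_0 - z_s) / 20.0   # southern gradient [m / deg lat]
        ghgn = (z_n - z_0) / 20.0   # northern gradient [m / deg lat]
        return (ghgs > 0.0) & (ghgn < -10.0)
    ```

    Applied to daily analysis fields, the boolean output per longitude can then be aggregated into blocking frequency climatologies of the kind compared between model and observations above.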

  13. A20 is critical for the induction of Pam3CSK4-tolerance in monocytic THP-1 cells.

    Directory of Open Access Journals (Sweden)

    Jinyue Hu

    Full Text Available A20 functions to terminate Toll-like receptor (TLR)-induced immune responses, and plays important roles in the induction of lipopolysaccharide (LPS) tolerance. However, the molecular mechanism for Pam3CSK4-tolerance is uncertain. Here we report that the TLR1/2 ligand Pam3CSK4 induced tolerance in monocytic THP-1 cells. The pre-treatment of THP-1 cells with Pam3CSK4 down-regulated the induction of pro-inflammatory cytokines induced by Pam3CSK4 re-stimulation. Pam3CSK4 pre-treatment also down-regulated the signaling transduction of JNK, p38 and NF-κB induced by Pam3CSK4 re-stimulation. The activation of TLR1/2 induced a rapid and robust up-regulation of A20, suggesting that A20 may contribute to the induction of Pam3CSK4-tolerance. This hypothesis was supported by the observation that the over-expression of A20 by gene transfer down-regulated Pam3CSK4-induced inflammatory responses, and the down-regulation of A20 by RNA interference inhibited the induction of tolerance. Moreover, LPS induced a significant up-regulation of A20, which contributed to the induction of cross-tolerance between LPS and Pam3CSK4. A20 was also induced by the treatment of THP-1 cells with TNF-α and IL-1β. The pre-treatment with TNF-α and IL-1β partly down-regulated Pam3CSK4-induced activation of MAPKs. Furthermore, pharmacologic inhibition of GSK3 signaling down-regulated Pam3CSK4-induced A20 expression, up-regulated Pam3CSK4-induced inflammatory responses, and partly reversed Pam3CSK4 pre-treatment-induced tolerance, suggesting that GSK3 is involved in TLR1/2-induced tolerance by up-regulation of A20 expression. Taken together, these results indicated that A20 is a critical regulator of TLR1/2-induced pro-inflammatory responses.

  14. Anthocyanin analyses of Vaccinium fruit dietary supplements

    Science.gov (United States)

    Vaccinium fruit ingredients within dietary supplements were identified by comparisons with anthocyanin analyses of known Vaccinium profiles (demonstration of anthocyanin fingerprinting). Available Vaccinium supplements were purchased and analyzed; their anthocyanin profiles (based on HPLC separation...

  15. 49 CFR 1180.7 - Market analyses.

    Science.gov (United States)

    2010-10-01

    ... OF TRANSPORTATION RULES OF PRACTICE RAILROAD ACQUISITION, CONTROL, MERGER, CONSOLIDATION PROJECT, TRACKAGE RIGHTS, AND LEASE PROCEDURES General Acquisition Procedures § 1180.7 Market analyses. (a) For... company's marketing plan and existing and potential competitive alternatives (inter- as well as...

  16. Understanding Human Error Based on Automated Analyses

    Data.gov (United States)

    National Aeronautics and Space Administration — This is a report on a continuing study of automated analyses of experiential textual reports to gain insight into the causal factors of human errors in aviation...

  17. Random error in cardiovascular meta-analyses

    DEFF Research Database (Denmark)

    Albalawi, Zaina; McAlister, Finlay A; Thorlund, Kristian;

    2013-01-01

    BACKGROUND: Cochrane reviews are viewed as the gold standard in meta-analyses given their efforts to identify and limit systematic error which could cause spurious conclusions. The potential for random error to cause spurious conclusions in meta-analyses is less well appreciated. METHODS: We examined all reviews approved and published by the Cochrane Heart Group in the 2012 Cochrane Library that included at least one meta-analysis with 5 or more randomized trials. We used trial sequential analysis to classify statistically significant meta-analyses as true positives if their pooled sample size and/or their cumulative Z-curve crossed the O'Brien-Fleming monitoring boundaries for detecting a RRR of at least 25%. We classified meta-analyses that did not achieve statistical significance as true negatives if their pooled sample size was sufficient to reject a RRR of 25%. RESULTS: Twenty three
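
    Trial sequential analysis treats a cumulative meta-analysis like an interim-monitored trial: a required information size is computed for the target effect (here a 25% RRR) and the pooled Z-statistic is compared against an O'Brien-Fleming-type boundary. The sketch below uses textbook two-proportion formulas and the familiar z/sqrt(t) boundary approximation; it is illustrative only, not the TSA software's implementation.

    ```python
    import math
    from statistics import NormalDist

    def required_information_size(p_control, rrr=0.25, alpha=0.05, power=0.90):
        """Total patients needed to detect a relative risk reduction `rrr`
        against a control event rate, via the standard two-sided
        two-proportion sample-size formula (illustrative)."""
        p1, p2 = p_control, p_control * (1.0 - rrr)
        z_a = NormalDist().inv_cdf(1.0 - alpha / 2.0)
        z_b = NormalDist().inv_cdf(power)
        pbar = (p1 + p2) / 2.0
        n_arm = (z_a * math.sqrt(2.0 * pbar * (1.0 - pbar))
                 + z_b * math.sqrt(p1 * (1.0 - p1) + p2 * (1.0 - p2))) ** 2 \
                / (p1 - p2) ** 2
        return 2 * math.ceil(n_arm)

    def obrien_fleming_boundary(t, alpha=0.05):
        """Approximate O'Brien-Fleming critical Z at information fraction t
        (the familiar z_(alpha/2) / sqrt(t) shape)."""
        return NormalDist().inv_cdf(1.0 - alpha / 2.0) / math.sqrt(t)

    # Example: boundaries at 25%, 50% and 100% of the required
    # information size, for an assumed 10% control event rate
    n_required = required_information_size(0.10)
    bounds = [obrien_fleming_boundary(t) for t in (0.25, 0.5, 1.0)]
    ```

    Early in accumulation the boundary is far above 1.96, which is why a nominally significant pooled Z can still be a potential false positive.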

  18. A History of Rotorcraft Comprehensive Analyses

    Science.gov (United States)

    Johnson, Wayne

    2013-01-01

    A history of the development of rotorcraft comprehensive analyses is presented. Comprehensive analyses are digital computer programs that calculate the aeromechanical behavior of the rotor and aircraft, bringing together the most advanced models of the geometry, structure, dynamics, and aerodynamics available in rotary wing technology. The development of the major codes of the last five decades from industry, government, and universities is described. A number of common themes observed in this history are discussed.

  19. Thermal Analyses of Cross-Linked Polyethylene

    Directory of Open Access Journals (Sweden)

    Radek Polansky

    2007-01-01

    Full Text Available The paper summarizes results obtained from structural analysis measurements: Differential Scanning Calorimetry (DSC), Thermogravimetry (TG), Thermomechanical Analysis (TMA) and Fourier Transform Infrared spectroscopy (FT-IR). Samples of cross-linked polyethylene cable insulation were tested with these techniques. DSC and TG were carried out using a TA Instruments SDT Q600 simultaneous thermal analyzer coupled to a Nicolet 380 Fourier transform infrared spectrometer. Thermomechanical analysis was carried out with a TA Instruments TMA Q400EM apparatus.

  20. The molecular spectrum and distribution of haemoglobinopathies in Cyprus: a 20-year retrospective study

    Science.gov (United States)

    Kountouris, Petros; Kousiappa, Ioanna; Papasavva, Thessalia; Christopoulos, George; Pavlou, Eleni; Petrou, Miranda; Feleki, Xenia; Karitzie, Eleni; Phylactides, Marios; Fanis, Pavlos; Lederer, Carsten W.; Kyrri, Andreani R.; Kalogerou, Eleni; Makariou, Christiana; Ioannou, Christiana; Kythreotis, Loukas; Hadjilambi, Georgia; Andreou, Nicoletta; Pangalou, Evangelia; Savvidou, Irene; Angastiniotis, Michael; Hadjigavriel, Michael; Sitarou, Maria; Kolnagou, Annita; Kleanthous, Marina; Christou, Soteroula

    2016-01-01

    Haemoglobinopathies are the most common monogenic diseases, posing a major public health challenge worldwide. Cyprus has one of the highest prevalences of thalassaemia in the world and was the first country to introduce a successful population-wide prevention programme, based on premarital screening. In this study, we report the most significant and comprehensive update on the status of haemoglobinopathies in Cyprus for at least two decades. First, we identified and analysed all known 592 β-thalassaemia patients and 595 Hb H disease patients in Cyprus. Moreover, we report the molecular spectrum of α-, β- and δ-globin gene mutations in the population and their geographic distribution, using a set of 13824 carriers genotyped from 1995 to 2015, and estimate relative allele frequencies in carriers of β- and δ-globin gene mutations. Notably, several mutations are reported for the first time in the Cypriot population, whereas important differences are observed in the distribution of mutations across different districts of the island. PMID:27199182

  1. Insights into a 20-ha multi-contaminated brownfield megasite: An environmental forensics approach.

    Science.gov (United States)

    Gallego, J R; Rodríguez-Valdés, E; Esquinas, N; Fernández-Braña, A; Afif, E

    2016-09-01

    Here we addressed the contamination of soils in an abandoned brownfield located in an industrial area. Detailed soil and waste characterisation guided by historical information about the site revealed pyrite ashes (a residue derived from the roasting of pyrite ores) as the main environmental risk. In fact, the disposal of pyrite ashes and the mixing of these ashes with soils have affected a large area of the site, thereby causing heavy metal(loid) pollution (As and Pb levels reaching several thousands of ppm). A full characterisation of the pyrite ashes was thus performed. In this regard, we determined the bioavailable metal species present and their implications, grain-size distribution, mineralogy, and Pb isotopic signature in order to obtain an accurate conceptual model of the site. We also detected significant concentrations of pyrogenic benzo(a)pyrene and other PAHs, and studied the relation of these compounds with the pyrite ashes. In addition, we examined other waste and spills of minor importance within the study site. The information gathered offered an insight into pollution sources, unravelled evidence from the industrial processes that took place decades ago, and identified the co-occurrence of contaminants by means of multivariate statistics. The environmental forensics study carried out provided greater information than conventional analyses for risk assessment purposes and for the selection of clean-up strategies adapted to future land use. PMID:26475240

  2. Finite element analyses of CCAT preliminary design

    Science.gov (United States)

    Sarawit, Andrew T.; Kan, Frank W.

    2014-07-01

    This paper describes the development of the CCAT telescope finite element model (FEM) and the analyses performed to support the preliminary design work. CCAT will be a 25 m diameter telescope operating in the 0.2 to 2 mm wavelength range. It will be located at an elevation of 5600 m on Cerro Chajnantor in Northern Chile, near ALMA. The telescope will be equipped with wide-field cameras and spectrometers mounted at the two Nasmyth foci. The telescope will be inside an enclosure to protect it from wind buffeting, direct solar heating, and bad weather. The main structures of the telescope include a steel Mount and a carbon-fiber-reinforced-plastic (CFRP) primary truss. The finite element model developed in this study was used to perform modal, frequency response, seismic response spectrum, stress, and deflection analyses of the telescope. Modal analyses were performed to compute the structure's natural frequencies and mode shapes and to obtain reduced order modal output at selected locations in the telescope structure to support the design of the Mount control system. Modal frequency response analyses were also performed to compute transfer functions at these selected locations. Seismic response spectrum analyses of the telescope subject to the Maximum Likely Earthquake were performed to compute peak accelerations and seismic demand stresses. Stress analyses were performed for gravity load to obtain gravity demand stresses. Deflection analyses for gravity load, thermal load, and differential elevation drive torque were performed so that the CCAT Observatory can verify that the structures meet the stringent telescope surface and pointing error requirements.
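
    The modal analyses described reduce to generalized eigenproblems of the form K·phi = omega²·M·phi assembled from the FEM. A toy two-degree-of-freedom stand-in (masses and stiffnesses assumed purely for illustration, not CCAT values) shows the computation:

    ```python
    import numpy as np

    # Illustrative 2-DOF spring-mass chain (assumed values, not the CCAT FEM)
    m = 1000.0                      # lumped masses [kg]
    k = 4.0e6                       # spring stiffnesses [N/m]
    M = np.diag([m, m])             # mass matrix
    K = np.array([[2*k, -k],
                  [ -k,  k]])       # stiffness matrix

    # With diagonal M, reduce K*phi = w^2*M*phi to a standard symmetric
    # eigenproblem: (M^-1/2 K M^-1/2) y = w^2 y
    Mih = np.diag(1.0 / np.sqrt(np.diag(M)))
    w2, modes = np.linalg.eigh(Mih @ K @ Mih)
    freqs_hz = np.sqrt(w2) / (2.0 * np.pi)   # natural frequencies [Hz]
    ```

    The eigenvectors play the role of the mode shapes used to build reduced order modal output for the control-system design.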

  3. Nonparametric bootstrap prediction

    OpenAIRE

    Fushiki, Tadayoshi; Komaki, Fumiyasu; Aihara, Kazuyuki

    2005-01-01

    Ensemble learning has recently been intensively studied in the field of machine learning. `Bagging' is a method of ensemble learning and uses bootstrap data to construct various predictors. The required prediction is then obtained by averaging the predictors. Harris proposed using this technique with the parametric bootstrap predictive distribution to construct predictive distributions, and showed that the parametric bootstrap predictive distribution gives asymptotically better prediction tha...
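
    The bagging procedure summarized above — resample the training data with replacement, fit a base predictor to each bootstrap sample, and average the predictions — can be sketched with a least-squares line as the base predictor (the choice of base predictor here is illustrative only):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def bagged_predict(x_train, y_train, x_new, n_boot=200):
        """'Bagging' via the nonparametric bootstrap: resample the
        training pairs with replacement, fit a base predictor to each
        resample (here a degree-1 least-squares fit), and average the
        resulting predictions at the new inputs."""
        n = len(x_train)
        preds = []
        for _ in range(n_boot):
            idx = rng.integers(0, n, size=n)      # bootstrap resample
            coef = np.polyfit(x_train[idx], y_train[idx], 1)
            preds.append(np.polyval(coef, x_new))
        return np.mean(preds, axis=0)
    ```

    The parametric variant discussed by Harris differs only in how the resamples are drawn: from a fitted parametric model rather than from the empirical distribution of the data.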

  4. Integrative genomic analyses of a novel cytokine, interleukin-34 and its potential role in cancer prediction

    OpenAIRE

    Wang, Bo; Xu, Wenming; TAN, MIAOLIAN; Xiao, Yan; Yang, Haiwei; Xia, Tian-Song

    2014-01-01

    Interleukin-34 (IL-34) is a novel cytokine, which is composed of 222 amino acids and forms homodimers. It binds to the macrophage colony-stimulating factor (M-CSF) receptor and plays an important role in innate immunity and inflammatory processes. In the present study, we identified the completed IL-34 gene in 25 various mammalian genomes and found that IL-34 existed in all types of vertebrates, including fish, amphibians, birds and mammals. These species have a similar 7 exon/6 intron gene o...

  5. Use of CFD Analyses to Predict Disk Friction Loss of Centrifugal Compressor Impellers

    Science.gov (United States)

    Cho, Leesang; Lee, Seawook; Cho, Jinsoo

    To improve the total efficiency of centrifugal compressors, it is necessary to reduce disk friction loss, which appears as a power loss. In this study, the effects of axial clearance and surface roughness on disk friction loss are analyzed and methods to reduce the loss are proposed. The rotating reference frame technique in a commercial CFD tool (FLUENT) is used for steady-state analysis of the centrifugal compressor. Numerical results of the CFD analysis are compared with theoretical results from established experimental empirical equations. The disk friction loss of the impeller decreases with increasing axial clearance as long as the clearance between the impeller disk and the casing is smaller than the boundary layer thickness. In addition, the disk friction loss of the impeller increases with surface roughness, in a pattern similar to that of existing experimental empirical formulas. The disk friction loss of the impeller is affected more by surface roughness than by changes in axial clearance. To minimize disk friction loss on the centrifugal compressor impeller, the axial clearance should be designed to match the theoretical boundary layer thickness while surface roughness is minimized.
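
    Empirical comparisons of this kind are commonly made against torque-coefficient correlations of the Daily and Nece form. The sketch below uses one such correlation for turbulent flow with separate boundary layers; the coefficient values and the flow-regime choice are assumptions of this illustration, and a real comparison must select the correlation matching the actual Reynolds number and clearance regime.

    ```python
    import math

    def disk_friction_power(rho, mu, omega, r, s):
        """Disk friction (windage) power from an empirical
        torque-coefficient correlation of the Daily & Nece form for
        turbulent flow with separate boundary layers (illustrative):
            Cm = 0.0102 * (s/r)**0.1 / Re**0.2,  P = 0.5*Cm*rho*omega^3*r^5
        rho, mu : fluid density [kg/m^3] and dynamic viscosity [Pa s]
        omega   : shaft speed [rad/s]
        r, s    : disk radius and axial clearance to the casing [m]"""
        Re = rho * omega * r**2 / mu            # rotational Reynolds number
        Cm = 0.0102 * (s / r)**0.1 / Re**0.2    # torque coefficient
        return 0.5 * Cm * rho * omega**3 * r**5, Cm

    # Illustrative case: small air-filled cavity, r = 0.1 m, s = 2 mm
    P1, Cm = disk_friction_power(1.2, 1.8e-5, 1000.0, 0.1, 0.002)
    P2, _  = disk_friction_power(1.2, 1.8e-5, 2000.0, 0.1, 0.002)
    ```

    The strong omega³·r⁵ scaling is why disk friction dominates the loss budget of small, high-speed impellers.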

  6. Fertility prediction of frozen boar sperm using novel and conventional analyses

    Science.gov (United States)

    Frozen-thawed boar sperm is seldom used for artificial insemination (AI) because fertility is lower than fresh or cooled semen. Despite the many advantages of AI including reduced pathogen exposure and ease of semen transport, cryo-induced damage to sperm usually results in decreased litter sizes a...

  7. Analyses of the predicted changes of the global oceans under the increased greenhouse gases scenarios

    Institute of Scientific and Technical Information of China (English)

    MU Lin; WU Dexing; CHEN Xue'en; J Jungclaus

    2006-01-01

A new climate model (ECHAM5/MPIOM1), developed at the Max-Planck Institute for Meteorology for the fourth assessment report of the Intergovernmental Panel on Climate Change (IPCC), is used to study climate changes under different increased-CO2 scenarios (B1, A1B and A2). Based on the corresponding model results, the sea surface temperature and salinity structure, the variations of the thermohaline circulation (THC) and the changes of sea ice in the northern hemisphere are analyzed. It is concluded that from 2000 to 2100, under the B1, A1B and A2 scenarios, the global mean sea surface temperature (SST) would increase by 2.5℃, 3.5℃ and 4.0℃ respectively; in the Arctic region in particular, the SST increase would even exceed 10.0℃. The maximal negative value of the variation of the fresh water flux is located in the subtropical oceans, while the precipitation in the eastern tropical Pacific increases. The strength of the THC decreases under the B1, A1B and A2 scenarios, with reductions of about 20%, 25% and 25.1% of the present THC strength respectively. In the northern hemisphere, the area of sea ice cover would decrease by about 50% under the A1B scenario.

  8. ANALYSING URBAN EFFECTS IN BUDAPEST USING THE WRF NUMERICAL WEATHER PREDICTION MODEL

    Directory of Open Access Journals (Sweden)

    JÚLIA GÖNDÖCS

    2016-03-01

Continuously growing cities significantly modify their entire environment through air pollution and modification of the land surface, resulting in altered energy budget and land-atmosphere exchange processes over built-up areas. These effects appear mainly in cities and metropolitan areas, leading to the Urban Heat Island (UHI) phenomenon, which occurs due to the temperature difference between built-up areas and their cooler surroundings. The Weather Research and Forecasting (WRF) mesoscale model, coupled to a multilayer urban canopy parameterisation, is used to investigate this phenomenon for Budapest and its surroundings with actual land surface properties. In this paper the basic ideas of our research and the methodology are presented in brief. The simulation covers one week in summer 2015, with initial meteorological fields from Global Forecast System (GFS) outputs, under atmospheric conditions of weak wind and clear sky over the Pannonian Basin. Then, to improve the WRF model and its settings, the calculated skin temperature is compared to remotely sensed measurements from the Aqua and Terra satellites, and the temporal and spatial bias values are estimated.

  9. Behavioral and Physiological Neural Network Analyses: A Common Pathway toward Pattern Recognition and Prediction

    Science.gov (United States)

    Ninness, Chris; Lauter, Judy L.; Coffee, Michael; Clary, Logan; Kelly, Elizabeth; Rumph, Marilyn; Rumph, Robin; Kyle, Betty; Ninness, Sharon K.

    2012-01-01

    Using 3 diversified datasets, we explored the pattern-recognition ability of the Self-Organizing Map (SOM) artificial neural network as applied to diversified nonlinear data distributions in the areas of behavioral and physiological research. Experiment 1 employed a dataset obtained from the UCI Machine Learning Repository. Data for this study…
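The SOM technique referred to above can be illustrated with a minimal 1-D self-organizing map in plain Python. This is a generic textbook sketch, not the authors' implementation; the unit count, learning-rate schedule and neighborhood width are illustrative choices:

```python
import math
import random

def train_som(data, n_units=5, epochs=50, lr0=0.5, sigma0=2.0, seed=0):
    """Train a 1-D self-organizing map on a list of feature vectors."""
    rng = random.Random(seed)
    dim = len(data[0])
    lo = [min(v[d] for v in data) for d in range(dim)]
    hi = [max(v[d] for v in data) for d in range(dim)]
    # initialize unit weights uniformly within the data range
    w = [[rng.uniform(lo[d], hi[d]) for d in range(dim)] for _ in range(n_units)]
    for t in range(epochs):
        lr = lr0 * (1.0 - t / epochs)              # decaying learning rate
        sigma = sigma0 * (1.0 - t / epochs) + 0.5  # shrinking neighborhood
        for x in data:
            b = bmu(w, x)
            for i in range(n_units):
                # Gaussian neighborhood on the 1-D unit lattice
                h = math.exp(-((i - b) ** 2) / (2.0 * sigma ** 2))
                for d in range(dim):
                    w[i][d] += lr * h * (x[d] - w[i][d])
    return w

def bmu(w, x):
    """Index of the best-matching unit (closest weight vector)."""
    return min(range(len(w)),
               key=lambda i: sum((w[i][d] - x[d]) ** 2 for d in range(len(x))))
```

After training on two well-separated clusters, inputs from different clusters map to different best-matching units, which is the pattern-separation property such studies exploit.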

  10. Integrative genomic analyses of a novel cytokine, interleukin-34 and its potential role in cancer prediction.

    Science.gov (United States)

    Wang, Bo; Xu, Wenming; Tan, Miaolian; Xiao, Yan; Yang, Haiwei; Xia, Tian-Song

    2015-01-01

Interleukin-34 (IL-34) is a novel cytokine, which is composed of 222 amino acids and forms homodimers. It binds to the macrophage colony-stimulating factor (M-CSF) receptor and plays an important role in innate immunity and inflammatory processes. In the present study, we identified the complete IL-34 gene in 25 different mammalian genomes and found that IL-34 exists in all types of vertebrates, including fish, amphibians, birds and mammals. These species have a similar 7 exon/6 intron gene organization. The phylogenetic tree indicated that the IL-34 genes from the primate lineage, rodent lineage and teleost lineage form species-specific clusters. It was found that mammalian IL-34 was under positive selection pressure, with the identified positively selected site, 196Val. Fifty-five functionally relevant single nucleotide polymorphisms (SNPs), including 32 SNPs causing missense mutations, 3 exonic splicing enhancer SNPs and 20 SNPs causing nonsense mutations, were identified from 2,141 available SNPs in the human IL-34 gene. IL-34 was expressed in various types of cancer, including blood, brain, breast, colorectal, eye, head and neck, lung, ovarian and skin cancer. A total of 5 out of 40 tests (1 blood cancer, 1 brain cancer, 1 colorectal cancer and 2 lung cancer) revealed an association between IL-34 gene expression and cancer prognosis. The association between the expression of IL-34 and cancer prognosis varied in different types of cancer, and even in the same type of cancer across different databases, suggesting that the function of IL-34 in these tumors may be multidimensional. Upstream transcription factor 1 (USF1), regulatory factor X-1 (RFX1), Sp1 transcription factor, POU class 3 homeobox 2 (POU3F2) and forkhead box L1 (FOXL1) transcription factor binding sites were identified in the IL-34 upstream (promoter) region, which may be involved in the effects of IL-34 in tumors. PMID:25395235

  11. Gene expression array analyses predict increased proto-oncogene expression in MMTV induced mammary tumors.

    Science.gov (United States)

    Popken-Harris, Pamela; Kirchhof, Nicole; Harrison, Ben; Harris, Lester F

    2006-08-01

Exogenous infection by milk-borne mouse mammary tumor virus (MMTV) typically induces mouse mammary tumors in genetically susceptible mice at a rate of 90-95% by 1 year of age. In contrast to other transforming retroviruses, MMTV acts as an insertional mutagen and, under the influence of steroid hormones, induces oncogenic transformation after insertion into the host genome. As these events correspond with increases in adjacent proto-oncogene transcription, we used expression array profiling to determine which commonly associated MMTV insertion site proto-oncogenes were transcriptionally active in MMTV induced mouse mammary tumors. To verify our gene expression array results we developed real-time quantitative RT-PCR assays for the common MMTV insertion site genes found in RIII/Sa mice (int-1/wnt-1, int-2/fgf-3, int-3/Notch 4, and fgf8/AIGF), as well as two genes that were consistently upregulated (CCND1 and MAT-8) and two genes that were consistently downregulated (FN1 and MAT-8) in the MMTV induced tumors as compared to normal mammary gland. Finally, each tumor was also examined histopathologically. Our expression array findings support a model whereby just one or a few common MMTV insertions into the host genome set up a dominant cascade of events that leaves a characteristic molecular signature.

  12. Monitoring and prediction of natural disasters

    International Nuclear Information System (INIS)

The problems of predicting natural disasters, and of synthesizing environmental monitoring systems to collect, store and process the information relevant to their solution, are analysed. A three-level methodology is proposed for making decisions concerning natural disaster dynamics. The methodology is based on the assessment of environmental indicators and the use of numerical models of the environment

  13. TALC: a new deployable concept for a 20m far-infrared space telescope

    Science.gov (United States)

    Durand, Gilles; Sauvage, Marc; Bonnet, Aymeric; Rodriguez, Louis; Ronayette, Samuel; Chanial, Pierre; Scola, Loris; Révéret, Vincent; Aussel, Hervé; Carty, Michael; Durand, Matthis; Durand, Lancelot; Tremblin, Pascal; Pantin, Eric; Berthe, Michel; Martignac, Jérôme; Motte, Frédérique; Talvard, Michel; Minier, Vincent; Bultel, Pascal

    2014-08-01

TALC, the Thin Aperture Light Collector, is a 20 m space observatory project exploring unconventional optical solutions (between the single dish and the interferometer) that allow the resolving power of a classical 27 m telescope. With TALC, the principle is to remove the central part of the primary mirror dish, cut the remaining ring into 24 sectors and store them on top of one another. The aim of this far-infrared telescope is to explore the 100 μm to 600 μm region. With this approach we have shown that we can store a ring telescope of 20 m outer diameter and 3 m ring thickness inside the fairing of Ariane 5 or Ariane 6. The general structure is that of a bicycle wheel, in which the inner sides of the segments are in compression against each other and play the role of a rim. The segments are linked to each other by a pantograph scissor system that lets the segments extend from a pile of dishes into a parabolic ring while keeping high stiffness at all times during deployment. The inner corners of the segments are linked to a central axis using spokes, as in a bicycle wheel. The secondary mirror and the instrument box are built as a solid unit fixed at the extremity of the main axis. The tensegrity analysis of this structure shows a very high stiffness-to-mass ratio, resulting in a 3 Hz eigenfrequency. The segments will consist of two composite skins and a honeycomb CFRP structure built by a replica process. Solid segments will be compared to deformable segments using controlled shear of the rear surface. Adjusting the length of the spokes and the relative position of the sides of neighbouring segments allows control of the phasing of the entire primary mirror. The telescope is cooled by natural radiation. It is protected from solar radiation by a large inflatable sun screen, loosely linked to the telescope. Orientation is performed by inertia wheels. This telescope carries a wide-field bolometer camera using a 0.3 K cryocooler as one of the main instruments. This

  14. Alboran jets, gyres and eddies in a 20-year high resolution simulation

    Science.gov (United States)

    Peliz, A.

    2012-04-01

The circulation of the Alboran Sea has long been described as being in a quasi-steady state composed of the Atlantic Jet meandering along the northern bound of two conspicuous gyres: the Western Alboran Gyre and the Eastern Alboran Gyre (WAG and EAG). Changes to this 2-gyre flow system (transitions or transient events) are not yet well explored. Periodic disappearances of the WAG (collapses or migrations) have been reported, but only a single event of WAG migration, observed in fall 1996, is described in detail. These studies suggested that the WAG is more likely to disappear in winter after drastic changes in the inflow, and that 2-gyre steady states are essentially observed in summer. The transition periods and the occurrence of smaller eddies are episodically referred to in the literature but poorly known. Using a 20 yr, 2 km resolution Regional Ocean Modeling System simulation of the Gulf of Cadiz-Alboran Sea basins (from the "Inter-basin Exchange in a changing Mediterranean Sea" project MedEX), a classification of the circulation types and mesoscale structures in the Alboran Sea is conducted, characterizing their duration, frequency of occurrence and temporal evolution. The 2-gyre quasi-steady state (or blocking situation) is confirmed as the most common flow type in the Alboran Sea (occurring during about 42% of the simulation time), and it is more frequent in summer. However, periods of double-gyre flow in winter are also present, although the gyre organization is slightly different and this state is described as a 2-gyre winter type. Long stable periods of single-gyre blocking were also identified; they occupy about 17% of the 20-year period. This single gyre usually constitutes a larger version of the WAG, somewhat displaced to the east, and occurs all year round although it is more common in winter months. For the remaining time, the Alboran Sea is in relatively fast evolving flow transitions.
The transitions were classified into WAG migrations (when the WAG clearly

  15. Architecture, persistence and dissolution of a 20 to 45 year old trichloroethene DNAPL source zone

    Science.gov (United States)

    Rivett, Michael O.; Dearden, Rachel A.; Wealthall, Gary P.

    2014-12-01

    A detailed field-scale investigation of processes controlling the architecture, persistence and dissolution of a 20 to 45 year old trichloroethene (TCE) dense non-aqueous phase liquid (DNAPL) source zone located within a heterogeneous sand/gravel aquifer at a UK industrial site is presented. The source zone was partially enclosed by a 3-sided cell that allowed detailed longitudinal/fence transect monitoring along/across a controlled streamtube of flow induced by an extraction well positioned at the cell closed end. Integrated analysis of high-resolution DNAPL saturation (Sn) (from cores), dissolved-phase plume concentration (from multilevel samplers), tracer test and permeability datasets was undertaken. DNAPL architecture was determined from soil concentration data using partitioning calculations. DNAPL threshold soil concentrations and low Sn values calculated were sensitive to sorption assumptions. An outcome of this was the uncertainty in demarcation of secondary source zone diffused and sorbed mass that is distinct from trace amounts of low Sn DNAPL mass. The majority of source mass occurred within discrete lenses or pools of DNAPL associated with low permeability geological units. High residual saturation (Sn > 10-20%) and pools (Sn > 20%) together accounted for almost 40% of the DNAPL mass, but only 3% of the sampled source volume. High-saturation DNAPL lenses/pools were supported by lower permeability layers, but with DNAPL still primarily present within slightly more permeable overlying units. These lenses/pools exhibited approximately linearly declining Sn profiles with increasing elevation ascribed to preferential dissolution of the uppermost DNAPL. Bi-component partitioning calculations on soil samples confirmed that the dechlorination product cDCE (cis-dichloroethene) was accumulating in the TCE DNAPL. Estimated cDCE mole fractions in the DNAPL increased towards the DNAPL interface with the uppermost mole fraction of 0.04 comparable to literature

  16. A 20-yr Reanalysis Experiment in the Baltic Sea Using the Three Dimensional Variational (3DVAR) Method

    Directory of Open Access Journals (Sweden)

    W. Fu

    2012-05-01

A 20-year retrospective reanalysis of the ocean state in the Baltic Sea is constructed using three dimensional variational (3DVAR) data assimilation, combining an operational numerical model with available historical temperature (T) and salinity (S) profiles. To determine the accuracy of the reanalysis, the authors present a series of comparisons with independent observations on a monthly mean basis. The performance of the assimilation in deep and shallow waters is investigated.

With assimilation, temperature and salinity in the reanalysis fit the independent measurements at different depths better than the free run does. Overall, the mean biases of temperature and salinity are reduced by 0.32 °C and 0.34 psu, respectively. Similarly, the mean root mean square error (RMSE) of the reanalysis is decreased by 0.35 °C and 0.3 psu compared to the free run. In space, the model error is inhomogeneous and strongly steered by the model error dynamics. The seasonally varying error of the modelled sea surface temperature is mainly controlled by the weather forcing, and shows the least improvement owing to sparse observations. Deep layers, on the other hand, show significant and stable reductions in model error. In particular, the salinity related to saline water intrusions into the Baltic Proper is largely improved in the reanalysis. Major inflow events such as those in 1993 and 2003 are captured more accurately in the reanalysis, as the model salinity in the bottom layer is increased by 2–3 psu. Sea level is also improved owing to an improved density field. The correlation between model and observation is increased by 2–5 %, and the RMSE is generally reduced by 10 cm in the reanalysis compared to the free run. The reduction of RMSE is mainly due to the reduction of mean bias. Assimilation of T/S contributes little to the barotropic transport in the shallow Danish Transition zone.

    The mixed layer depth exhibits strong seasonal
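The 3DVAR analysis step underlying such a reanalysis can be illustrated in its simplest scalar form, where the analysis value minimizes a background-plus-observation cost function. This is a textbook sketch assuming a scalar state and an identity observation operator, not the operational Baltic system:

```python
def analysis_3dvar_scalar(xb, y, b_var, r_var):
    """Minimize J(x) = (x - xb)**2 / (2*b_var) + (y - x)**2 / (2*r_var).

    xb: background (model) value, y: observation, b_var/r_var: background
    and observation error variances. With an identity observation operator
    the minimizer has the closed form below (the optimal-interpolation gain).
    """
    k = b_var / (b_var + r_var)   # weight given to the observation
    return xb + k * (y - xb)
```

With equal background and observation error variances, a 10 °C model background and a 12 °C observation combine to an 11 °C analysis; shrinking the background variance pulls the analysis back toward the model value.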

  17. Architecture, persistence and dissolution of a 20 to 45 year old trichloroethene DNAPL source zone.

    Science.gov (United States)

    Rivett, Michael O; Dearden, Rachel A; Wealthall, Gary P

    2014-12-01

A detailed field-scale investigation of processes controlling the architecture, persistence and dissolution of a 20 to 45 year old trichloroethene (TCE) dense non-aqueous phase liquid (DNAPL) source zone located within a heterogeneous sand/gravel aquifer at a UK industrial site is presented. The source zone was partially enclosed by a 3-sided cell that allowed detailed longitudinal/fence transect monitoring along/across a controlled streamtube of flow induced by an extraction well positioned at the cell closed end. Integrated analysis of high-resolution DNAPL saturation (Sn) (from cores), dissolved-phase plume concentration (from multilevel samplers), tracer test and permeability datasets was undertaken. DNAPL architecture was determined from soil concentration data using partitioning calculations. DNAPL threshold soil concentrations and low Sn values calculated were sensitive to sorption assumptions. An outcome of this was the uncertainty in demarcation of secondary source zone diffused and sorbed mass that is distinct from trace amounts of low Sn DNAPL mass. The majority of source mass occurred within discrete lenses or pools of DNAPL associated with low permeability geological units. High residual saturation (Sn > 10-20%) and pools (Sn > 20%) together accounted for almost 40% of the DNAPL mass, but only 3% of the sampled source volume. High-saturation DNAPL lenses/pools were supported by lower permeability layers, but with DNAPL still primarily present within slightly more permeable overlying units. These lenses/pools exhibited approximately linearly declining Sn profiles with increasing elevation, ascribed to preferential dissolution of the uppermost DNAPL. Bi-component partitioning calculations on soil samples confirmed that the dechlorination product cDCE (cis-dichloroethene) was accumulating in the TCE DNAPL.
Estimated cDCE mole fractions in the DNAPL increased towards the DNAPL interface with the uppermost mole fraction of 0.04 comparable to literature laboratory

  18. Three distinct suppressors of RNA silencing encoded by a 20-kb viral RNA genome

    Science.gov (United States)

    Lu, Rui; Folimonov, Alexey; Shintaku, Michael; Li, Wan-Xiang; Falk, Bryce W.; Dawson, William O.; Ding, Shou-Wei

    2004-11-01

Viral infection in both plant and invertebrate hosts requires a virus-encoded function to block the RNA silencing antiviral defense. Here, we report the identification and characterization of three distinct suppressors of RNA silencing encoded by the 20-kb plus-strand RNA genome of citrus tristeza virus (CTV). When introduced by genetic crosses into plants carrying a silencing transgene, both p20 and p23, but not coat protein (CP), restored expression of the transgene. Although none of the CTV proteins prevented DNA methylation of the transgene, export of the silencing signal (capable of mediating intercellular silencing spread) was detected only from the F1 plants expressing p23 and not from the CP- or p20-expressing F1 plants, demonstrating suppression of intercellular silencing by CP and p20 but not by p23. Thus, intracellular and intercellular silencing are each targeted by a CTV protein, whereas the third, p20, inhibits silencing at both levels. Notably, CP suppresses intercellular silencing without interfering with intracellular silencing. This novel property of CP suggests a mechanism distinct from that of p20 and all other viral suppressors known to interfere with intercellular silencing, and implies that this class of viral suppressors may not be consistently identified by Agrobacterium coinfiltration, because coinfiltration also induces RNA silencing against the infiltrated suppressor transgene. Our analyses reveal a sophisticated viral counter-defense strategy that targets the silencing antiviral pathway at multiple steps and may be essential for protecting CTV, with such a large RNA genome, from antiviral silencing in the perennial tree host. Keywords: RNA interference; citrus tristeza virus; virus synergy; antiviral immunity

  19. PMA/IONO affects diffuse large B-cell lymphoma cell growth through upregulation of A20 expression.

    Science.gov (United States)

    Yang, Wenxiu; Li, Yi; Li, Pinhao; Wang, Lingling

    2016-08-01

Diffuse large B-cell lymphoma (DLBCL) is a common non-Hodgkin lymphoma. A20 and mucosa-associated lymphoid tissue lymphoma translocation gene 1 (MALT1) are known to be related to DLBCL pathogenesis and progression. This study aimed to assess the effects of phorbol myristate acetate/ionomycin (PMA/IONO) on the growth and apoptosis of the DLBCL cell line OCI-LY1, and their associations with A20, MALT1 and survivin levels. Cell viability was assessed by MTT assay. Cell cycle distribution and apoptosis were evaluated using flow cytometry after incubation with Annexin V-FITC/propidium iodide (PI) and RNase/PI, respectively. Gene and protein expression levels were determined by quantitative real-time PCR and western blotting, respectively. To further determine the role of A20, this gene was silenced in the OCI-LY1 cell line by specific siRNA transfection. A20 protein levels were higher in the OCI-LY1 cells treated with PMA/IONO compared with the controls, and were positively correlated with the concentration and treatment time of IONO, but not with changes of PMA and MALT1. Meanwhile, survivin expression was reduced in the OCI-LY1 cells after PMA/IONO treatment. In addition, OCI-LY1 proliferation was markedly inhibited, with a negative correlation between cell viability and IONO concentration. In concordance, apoptosis rates were higher in the OCI-LY1 cells after PMA/IONO treatment. Cell cycle distribution differed between the OCI-LY1 cells with and without PMA/IONO treatment only at 24 h, with increased cells in the G0/G1 stage after PMA/IONO treatment. These findings indicate that PMA/IONO promotes apoptosis and inhibits the growth of DLBCL cells, in association with A20 upregulation. Thus, A20 may be a potential therapeutic target for DLBCL. PMID:27349720

  20. Identifying, analysing and solving problems in practice.

    Science.gov (United States)

    Hewitt-Taylor, Jaqui

    When a problem is identified in practice, it is important to clarify exactly what it is and establish the cause before seeking a solution. This solution-seeking process should include input from those directly involved in the problematic situation, to enable individuals to contribute their perspective, appreciate why any change in practice is necessary and what will be achieved by the change. This article describes some approaches to identifying and analysing problems in practice so that effective solutions can be devised. It includes a case study and examples of how the Five Whys analysis, fishbone diagram, problem tree analysis, and Seven-S Model can be used to analyse a problem.

  1. TOGGLE: toolbox for generic NGS analyses

    OpenAIRE

    Monat, Cécile; Tranchant-Dubreuil, Christine; Kougbeadjo, Ayité; Farcy, Cédric; Ortega-Abboud, Enrique; Amanzougarene, Souhila; Ravel, Sébastien; Agbessi, Mawusse; Orjuela-Bouniol, Julie; Summo, Marilyne; Sabot, François

    2015-01-01

    Background: The explosion of NGS (Next Generation Sequencing) sequence data requires a huge effort in Bioinformatics methods and analyses. The creation of dedicated, robust and reliable pipelines able to handle dozens of samples from raw FASTQ data to relevant biological data is a time-consuming task in all projects relying on NGS. To address this, we created a generic and modular toolbox for developing such pipelines. Results: TOGGLE (TOolbox for Generic nGs anaLysEs) is a suite of tools abl...

  2. Analyse de discours et demande sociale

    OpenAIRE

    Cislaru, Georgeta; Garnier, Sylvie; Matras, Marie-Thérèse; Pugnière-Saavedra, Frédéric; Rousseau, Patrick; Sitri, Frédérique; Veniard, Marie

    2010-01-01

    What can discourse analysis reveal to us about societal practices and the discursive practices that underlie them? In questioning discourse, discourse analysis also questions the bodies that produce it: political, media and institutional bodies. In doing so it has engaged, over the past forty years, in a fruitful interdisciplinary dialogue. With five contributions from discourse analysts and two from child protection professionals, this issue of the Carnets du ...

  3. TOGGLE: toolbox for generic NGS analyses

    OpenAIRE

    Monat, Cécile; Tranchant-Dubreuil, Christine; Kougbeadjo, Ayité; Farcy, Cédric; Ortega-Abboud, Enrique; Amanzougarene, Souhila; Ravel, Sébastien; Agbessi, Mawusse; Orjuela-Bouniol, Julie; Summo, Marilyne; Sabot, François

    2015-01-01

    Background The explosion of NGS (Next Generation Sequencing) sequence data requires a huge effort in Bioinformatics methods and analyses. The creation of dedicated, robust and reliable pipelines able to handle dozens of samples from raw FASTQ data to relevant biological data is a time-consuming task in all projects relying on NGS. To address this, we created a generic and modular toolbox for developing such pipelines. Results TOGGLE (TOolbox for Generic nGs anaLysEs) is a suite of tools able ...

  4. Prosjektering og analyse av en spennarmert betongbru

    OpenAIRE

    Strand, Elin Holsten; Kaldbekkdalen, Ann-Kristin

    2014-01-01

    The purpose of this report is to carry out the analysis and design of a post-tensioned concrete bridge. Modelling and analysis were performed in NovaFrame 5. Part of the task was to determine the tendon system and the cross-section depth of the bridge. Six tendons were assumed in the span, and twelve over the support. The cross-section depth was set to 1.3 metres. The design was carried out in accordance with the applicable Eurocodes, relevant documents and Håndbok 185, which is prepared...

  5. A theoretical framework for analysing preschool teaching

    DEFF Research Database (Denmark)

    Chaiklin, Seth

    2014-01-01

    This article introduces a theoretical framework for analysing preschool teaching as a historically-grounded societal practice. The aim is to present a unified framework that can be used to analyse and compare both historical and contemporary examples of preschool teaching practice within and across… national traditions. The framework has two main components, an analysis of preschool teaching as a practice, formed in relation to societal needs, and an analysis of the categorical relations which necessarily must be addressed in preschool teaching activity. The framework is introduced and illustrated… teaching as a practice.

  6. Interferences in reactor neutron activation analyses

    International Nuclear Information System (INIS)

    It has been shown that interfering reactions may occur in neutron activation analyses of aluminium and zinc matrices, commonly used in nuclear areas. The interferences analysed were ²⁷Al(n,α)²⁴Na and ⁶⁴Zn(n,p)⁶⁴Cu. The method used was non-destructive neutron activation analysis, and the spectra were obtained in a 1024-channel multichannel system coupled to a Ge(Li) detector. Sodium was detected in aluminium samples from the reactor tank and pneumatic transfer system. The independence of the sodium concentration in samples in the range of 0-100 ppm is shown by the attenuation obtained with the samples encapsulated in cadmium. (Author)

  7. Predictive modeling of complications.

    Science.gov (United States)

    Osorio, Joseph A; Scheer, Justin K; Ames, Christopher P

    2016-09-01

    Predictive analytic algorithms are designed to identify patterns in the data that allow for accurate predictions without the need for a hypothesis. Therefore, predictive modeling can provide detailed and patient-specific information that can be readily applied when discussing the risks of surgery with a patient. There are few studies using predictive modeling techniques in the adult spine surgery literature. These types of studies represent the beginning of the use of predictive analytics in spine surgery outcomes. We will discuss the advancements in the field of spine surgery with respect to predictive analytics, the controversies surrounding the technique, and the future directions. PMID:27286683

  8. PREDICTING TURBINE STAGE PERFORMANCE

    Science.gov (United States)

    Boyle, R. J.

    1994-01-01

    This program was developed to predict turbine stage performance taking into account the effects of complex passage geometries. The method uses a quasi-3D inviscid-flow analysis iteratively coupled to calculated losses so that changes in losses result in changes in the flow distribution. In this manner the effects of both the geometry on the flow distribution and the flow distribution on losses are accounted for. The flow may be subsonic or shock-free transonic. The blade row may be fixed or rotating, and the blades may be twisted and leaned. This program has been applied to axial and radial turbines, and is helpful in the analysis of mixed flow machines. This program is a combination of the flow analysis programs MERIDL and TSONIC coupled to the boundary layer program BLAYER. The subsonic flow solution is obtained by a finite difference, stream function analysis. Transonic blade-to-blade solutions are obtained using information from the finite difference, stream function solution with a reduced flow factor. Upstream and downstream flow variables may vary from hub to shroud and provision is made to correct for loss of stagnation pressure. Boundary layer analyses are made to determine profile and end-wall friction losses. Empirical loss models are used to account for incidence, secondary flow, disc windage, and clearance losses. The total losses are then used to calculate stator, rotor, and stage efficiency. This program is written in FORTRAN IV for batch execution and has been implemented on an IBM 370/3033 under TSS with a central memory requirement of approximately 4.5 Megs of 8 bit bytes. This program was developed in 1985.

  9. The prediction of different experiences of longterm illness

    DEFF Research Database (Denmark)

    Blank, N; Diderichsen, Finn

    1996-01-01

To analyse the role played by socioeconomic factors and self rated general health in the prediction of the reporting of severe longterm illness, and the extent to which these factors explain social class differences in the reporting of such illness.

  10. Cosmetology: Task Analyses. Competency-Based Education.

    Science.gov (United States)

    Henrico County Public Schools, Glen Allen, VA. Virginia Vocational Curriculum Center.

    These task analyses are designed to be used in combination with the "Trade and Industrial Education Service Area Resource" in order to implement competency-based education in the cosmetology program in Virginia. The task analysis document contains the task inventory, suggested task sequence lists, and content outlines for the secondary courses…

  11. Chemical Analyses of Silicon Aerogel Samples

    CERN Document Server

    van der Werf, I; De Leo, R; Marrone, S

    2008-01-01

    After five years of operation, two aerogel counters, A1 and A2, taking data in Hall A at Jefferson Lab suffered a loss of performance. In this note possible causes of the degradation have been studied. In particular, various chemical and physical analyses have been carried out on several aerogel tiles and on adhesive tape in order to reveal the presence of contaminants.

  12. A gamma model for {DNA} mixture analyses

    OpenAIRE

    Cowell, R. G.; Lauritzen, S L; Mortera, J.

    2007-01-01

    We present a new methodology for analysing forensic identification problems involving DNA mixture traces where several individuals may have contributed to the trace. The model used for identification and separation of DNA mixtures is based on a gamma distribution for peak area values. In this paper we illustrate the gamma model and apply it on several real examples from forensic casework.
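
    As a hedged illustration of the gamma model's central idea: peak areas are gamma-distributed with a shape parameter proportional to the amount of DNA carrying each allele, so relative peak areas carry information about mixture proportions. The specific parameterization and numbers below are assumptions for the sketch, not the authors' exact likelihood.

```python
# Illustrative sketch (not the authors' code): simulate peak areas at
# one marker of a two-person mixture under a gamma peak-area model.
import random

random.seed(1)

def peak_area(shape, scale=1.0):
    return random.gammavariate(shape, scale)

# Assumed mixture: contributor A at 70%, B at 30%. At a marker where
# A is homozygous for allele 'a' (2 copies) and B for allele 'b':
gamma_per_copy = 10.0              # illustrative precision parameter
area_a = peak_area(2 * 0.7 * gamma_per_copy)
area_b = peak_area(2 * 0.3 * gamma_per_copy)
prop_a = area_a / (area_a + area_b)
print(round(prop_a, 3))            # tends to be near 0.7
```

    In casework the inference runs the other way: observed areas are used to estimate the contributors' proportions and to separate their genotypes.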

  13. Micromechanical photothermal analyser of microfluidic samples

    DEFF Research Database (Denmark)

    2014-01-01

    The present invention relates to a micromechanical photothermal analyser of microfluidic samples comprising an oblong micro-channel extending longitudinally from a support element, the micro-channel is made from at least two materials with different thermal expansion coefficients, wherein the...

  14. Amino acid analyses of Apollo 14 samples.

    Science.gov (United States)

    Gehrke, C. W.; Zumwalt, R. W.; Kuo, K.; Aue, W. A.; Stalling, D. L.; Kvenvolden, K. A.; Ponnamperuma, C.

    1972-01-01

    Detection limits were between 300 pg and 1 ng for different amino acids in an analysis by gas-liquid chromatography of water extracts from Apollo 14 lunar fines, in which amino acids were converted to their N-trifluoroacetyl-n-butyl esters. Initial analyses of water and HCl extracts of samples 14240 and 14298 showed no amino acids above background levels.

  15. Meta-analyses on viral hepatitis

    DEFF Research Database (Denmark)

    Gluud, Lise L; Gluud, Christian

    2009-01-01

    This article summarizes the meta-analyses of interventions for viral hepatitis A, B, and C. Some of the interventions assessed are described in small trials with unclear bias control. Other interventions are supported by large, high-quality trials. Although attempts have been made to adjust...

  16. Analyses of hydraulic performance of velocity caps

    DEFF Research Database (Denmark)

    Christensen, Erik Damgaard; Degn Eskesen, Mark Chr.; Buhrkall, Jeppe;

    2014-01-01

    The hydraulic performance of a velocity cap has been investigated. Velocity caps are often used in connection with offshore intakes. CFD (computational fluid dynamics) examined the flow through the cap openings and further down into the intake pipes. This was combined with dimension analyses...

  17. Comparing functional annotation analyses with Catmap

    Directory of Open Access Journals (Sweden)

    Krogh Morten

    2004-12-01

    Background: Ranked gene lists from microarray experiments are usually analysed by assigning significance to predefined gene categories, e.g., based on functional annotations. Tools performing such analyses are often restricted to a category score based on a cutoff in the ranked list and a significance calculation based on random gene permutations as the null hypothesis. Results: We analysed three publicly available data sets, in each of which samples were divided into two classes and genes ranked according to their correlation with the class labels. We developed a program, Catmap (available for download at http://bioinfo.thep.lu.se/Catmap), to compare different scores and null hypotheses in gene category analysis, using Gene Ontology annotations for category definition. When a cutoff-based score was used, results depended strongly on the choice of cutoff, introducing an arbitrariness into the analysis. Comparing results using random gene permutations and random sample permutations, respectively, we found that the assigned significance of a category depended strongly on the choice of null hypothesis. Compared to sample label permutations, gene permutations gave much smaller p-values for large categories with many coexpressed genes. Conclusions: In gene category analyses of ranked gene lists, a cutoff-independent score is preferable. The choice of null hypothesis is very important; random gene permutations do not work well as an approximation to sample label permutations.
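
    A minimal sketch of a cutoff-independent category score with a gene-permutation null, in the spirit of the comparison above. This is not Catmap itself; the list sizes and the category are invented for illustration.

```python
# Cutoff-free category score: the mean rank of a category's genes in
# the ranked list, with significance from random gene permutations.
import random

random.seed(0)
n_genes = 1000
category = set(random.sample(range(50), 10))   # a top-ranked category

def mean_rank(genes):
    return sum(genes) / len(genes)             # rank 0 = most correlated

observed = mean_rank(category)

# Gene-permutation null: category membership is random w.r.t. rank.
null = [mean_rank(random.sample(range(n_genes), len(category)))
        for _ in range(2000)]

p = sum(score <= observed for score in null) / len(null)
print(observed, p)   # a top-heavy category gets a small p-value
```

    The paper's point is that this null can mislead: under sample-label permutations, coexpression inflates the variance of category scores, so gene-permutation p-values like the one above can be far too small for large, coexpressed categories.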

  18. FAME: Software for analysing rock microstructures

    Science.gov (United States)

    Hammes, Daniel M.; Peternell, Mark

    2016-05-01

    Determination of rock microstructures leads to a better understanding of the formation and deformation of polycrystalline solids. Here, we present FAME (Fabric Analyser based Microstructure Evaluation), an easy-to-use MATLAB®-based software for processing datasets recorded by an automated fabric analyser microscope. FAME is provided as a MATLAB®-independent Windows® executable with an intuitive graphical user interface. Raw data from the fabric analyser microscope can be automatically loaded, filtered and cropped before analysis. Accurate and efficient rock microstructure analysis is based on an advanced user-controlled grain labelling algorithm. The preview and testing environments simplify the determination of appropriate analysis parameters. Various statistical and plotting tools allow a graphical visualisation of the results such as grain size, shape, c-axis orientation and misorientation. The FAME2elle algorithm exports fabric analyser data to an elle (modelling software)-supported format. FAME supports batch processing for multiple thin section analysis or large datasets that are generated, for example, during 2D in-situ deformation experiments. The use and versatility of FAME are demonstrated on quartz and deuterium ice samples.

  19. Uncertainty quantification approaches for advanced reactor analyses.

    Energy Technology Data Exchange (ETDEWEB)

    Briggs, L. L.; Nuclear Engineering Division

    2009-03-24

    The original approach to nuclear reactor design or safety analyses was to make very conservative modeling assumptions so as to ensure meeting the required safety margins. Traditional regulation, as established by the U.S. Nuclear Regulatory Commission, required conservatisms which have subsequently been shown to be excessive. The Commission has therefore moved away from excessively conservative evaluations and has determined best-estimate calculations to be an acceptable alternative to conservative models, provided the best-estimate results are accompanied by an uncertainty evaluation which can demonstrate that, when a set of analysis cases which statistically account for uncertainties of all types are generated, there is a 95% probability that at least 95% of the cases meet the safety margins. To date, nearly all published work addressing uncertainty evaluations of nuclear power plant calculations has focused on light water reactors and on large-break loss-of-coolant accident (LBLOCA) analyses. However, there is nothing in the uncertainty evaluation methodologies that is limited to a specific type of reactor or to specific types of plant scenarios. These same methodologies can be applied equally well to analyses for high-temperature gas-cooled reactors and liquid metal reactors, and to steady-state calculations, operational transients, or severe accident scenarios. This report reviews and compares both statistical and deterministic uncertainty evaluation approaches. Recommendations are given for selection of an uncertainty methodology and for considerations to be factored into the process of evaluating uncertainties for advanced reactor best-estimate analyses.
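
    The 95%/95% criterion mentioned above is commonly met with nonparametric (Wilks-type) tolerance limits. As an illustration of the standard first-order, one-sided calculation (shown here as background, not as a claim about this report's specific recommendation):

```python
# Wilks' formula, first order, one-sided: using the largest of n
# best-estimate runs as the tolerance limit, coverage beta is met
# with confidence gamma once 1 - beta**n >= gamma.
beta, gamma = 0.95, 0.95

n = 1
while 1 - beta**n < gamma:
    n += 1
print(n)   # classic result: 59 runs suffice for 95%/95%
```

    This is why 59 (or 93 for second order) code runs appear so often in best-estimate plus uncertainty analyses: the sample size depends only on the coverage and confidence targets, not on the number of uncertain inputs.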

  20. What's missing from avian global diversification analyses?

    Science.gov (United States)

    Reddy, Sushma

    2014-08-01

    The accumulation of vast numbers of molecular phylogenetic studies has contributed to huge knowledge gains in the evolutionary history of birds. This permits subsequent analyses of avian diversity, such as how and why diversification varies across the globe and among taxonomic groups. However, available genetic data for these meta-analyses are unevenly distributed across different geographic regions and taxonomic groups. To comprehend the impact of this variation on the interpretation of global diversity patterns, I examined the availability of genetic data for possible biases in geographic and taxonomic sampling of birds. I identified three main disparities of sampling that are geographically associated with latitude (temperate, tropical), hemispheres (East, West), and range size. Tropical regions, which host the vast majority of species, are substantially less studied. Moreover, Eastern regions, such as the Old World Tropics and Australasia, stand out as being disproportionately undersampled, with up to half of communities not being represented in recent studies. In terms of taxonomic discrepancies, a majority of genetically undersampled clades are exclusively found in tropical regions. My analysis identifies several disparities in the key regions of interest of global diversity analyses. Differential sampling can have considerable impacts on these global comparisons and call into question recent interpretations of latitudinal or hemispheric differences of diversification rates. Moreover, this review pinpoints understudied regions whose biota are in critical need of modern systematic analyses.

  1. The Economic Cost of Homosexuality: Multilevel Analyses

    Science.gov (United States)

    Baumle, Amanda K.; Poston, Dudley, Jr.

    2011-01-01

    This article builds on earlier studies that have examined "the economic cost of homosexuality," by using data from the 2000 U.S. Census and by employing multilevel analyses. Our findings indicate that partnered gay men experience a 12.5 percent earnings penalty compared to married heterosexual men, and a statistically insignificant earnings…

  2. Comparison of veterinary import risk analyses studies

    NARCIS (Netherlands)

    Vos-de Jong, de C.J.; Conraths, F.J.; Adkin, A.; Jones, E.M.; Hallgren, G.S.; Paisley, L.G.

    2011-01-01

    Twenty-two veterinary import risk analyses (IRAs) were audited: a) for inclusion of the main elements of risk analysis; b) between different types of IRAs; c) between reviewers' scores. No significant differences were detected between different types of IRAs, although quantitative IRAs and IRAs publ

  3. Xinjiang Hongze Mining Invests 1 billion-1.2 billion yuan in a 20,000-ton Copper Mining Project

    Institute of Scientific and Technical Information of China (English)

    2012-01-01

    Xinjiang Hongze Mining Co., Ltd. plans to invest 1 billion-1.2 billion yuan in a 20,000-ton copper mining project in Wuqia County. So far, it has completed registration, and completed consolidation of 7 mining rights of copper, lead

  4. Loss-of-function mutations in TNFAIP3 leading to A20 haploinsufficiency cause an early-onset autoinflammatory disease

    NARCIS (Netherlands)

    Zhou, Qing; Wang, Hongying; Schwartz, Daniella M; Stoffels, Monique; Park, Yong Hwan; Zhang, Yuan; Yang, Dan; Demirkaya, Erkan; Takeuchi, Masaki; Tsai, Wanxia Li; Lyons, Jonathan J; Yu, Xiaomin; Ouyang, Claudia; Chen, Celeste; Chin, David T; Zaal, Kristien; Chandrasekharappa, Settara C; P Hanson, Eric; Yu, Zhen; Mullikin, James C; Hasni, Sarfaraz A; Wertz, Ingrid E; Ombrello, Amanda K; Stone, Deborah L; Hoffmann, Patrycja; Jones, Anne; Barham, Beverly K; Leavis, Helen L; van Royen, Annet; Sibley, Cailin; Batu, Ezgi D; Gül, Ahmet; Siegel, Richard M; Boehm, Manfred; Milner, Joshua D; Ozen, Seza; Gadina, Massimo; Chae, JaeJin; Laxer, Ronald M; Kastner, Daniel L; Aksentijevich, Ivona

    2015-01-01

    Systemic autoinflammatory diseases are driven by abnormal activation of innate immunity. Herein we describe a new disease caused by high-penetrance heterozygous germline mutations in TNFAIP3, which encodes the NF-κB regulatory protein A20, in six unrelated families with early-onset systemic inflammation.

  5. Loss-of-function mutations in TNFAIP3 leading to A20 haploinsufficiency cause an early onset autoinflammatory syndrome

    Science.gov (United States)

    Zhou, Qing; Wang, Hongying; Schwartz, Daniella M.; Stoffels, Monique; Park, Yong Hwan; Zhang, Yuan; Yang, Dan; Demirkaya, Erkan; Takeuchi, Masaki; Tsai, Wanxia Li; Lyons, Jonathan J.; Yu, Xiaomin; Ouyang, Claudia; Chen, Celeste; Chin, David T.; Zaal, Kristien; Chandrasekharappa, Settara C.; Hanson, Eric P.; Yu, Zhen; Mullikin, James C.; Hasni, Sarfaraz A.; Wertz, Ingrid; Ombrello, Amanda K.; Stone, Deborah L.; Hoffmann, Patrycja; Jones, Anne; Barham, Beverly K.; Leavis, Helen L.; van Royen-Kerkof, Annet; Sibley, Cailin; Batu, Ezgi D.; Gül, Ahmet; Siegel, Richard M.; Boehm, Manfred; Milner, Joshua D.; Ozen, Seza; Gadina, Massimo; Chae, JaeJin; Laxer, Ronald M.; Kastner, Daniel L.; Aksentijevich, Ivona

    2016-01-01

    Systemic autoinflammatory diseases are driven by abnormal activation of innate immunity1. Herein we describe a new syndrome caused by high penetrance heterozygous germline mutations in the NFκB regulatory protein TNFAIP3 (A20) in six unrelated families with early onset systemic inflammation. The syndrome resembles Behçet’s disease (BD), which is typically considered a polygenic disorder with onset in early adulthood2. A20 is a potent inhibitor of the NFκB signaling pathway3. TNFAIP3 mutant truncated proteins are likely to act by haploinsufficiency since they do not exert a dominant-negative effect in overexpression experiments. Patients’ cells show increased degradation of IκBα and nuclear translocation of NFκB p65, and increased expression of NFκB-mediated proinflammatory cytokines. A20 restricts NFκB signals via deubiquitinating (DUB) activity. In cells expressing the mutant A20 protein, there is defective removal of K63-linked ubiquitin from TRAF6, NEMO, and RIP1 after TNF stimulation. NFκB-dependent pro-inflammatory cytokines are potential therapeutic targets for these patients. PMID:26642243

  6. Nonlinear Combustion Instability Prediction

    Science.gov (United States)

    Flandro, Gary

    2010-01-01

    The liquid rocket engine stability prediction software (LCI) predicts combustion stability of systems using LOX-LH2 propellants. Both longitudinal and transverse mode stability characteristics are calculated. This software has the unique feature of being able to predict system limit amplitude.

  7. A20 overexpression under control of mouse osteocalcin promoter in MC3T3-E1 cells inhibited tumor necrosis factor-alpha-induced apoptosis

    Institute of Scientific and Technical Information of China (English)

    Yue-juan QIN; Zhen-lin ZHANG; Lu-yang YU; Jin-wei HE; Ya-nan HOU; Tian-jin LIU; Jia-cai WU; Song-hua WU; Li-he GUO

    2006-01-01

    Aim: To construct an A20 expression vector under the control of the mouse osteocalcin promoter (OC-A20), and to investigate whether an osteoblastic MC3T3-E1 cell line stably overexpressing A20 protein was protected against tumor necrosis factor (TNF)-alpha-induced apoptosis. Methods: The OC-A20 vector was constructed by fusing a fragment of the mouse osteocalcin gene-2 promoter with human A20 complementary DNA. A mouse MC3T3-E1 cell line stably transfected with A20 was then established. The expression of A20 mRNA and A20 protein in the cells was detected by reverse transcription-polymerase chain reaction (RT-PCR) and Western blot analysis, respectively. To determine the osteoblast specificity of A20 expression, the mouse osteoblastic MC3T3-E1 cell line and the mouse embryo fibroblast NIH3T3 cell line were transiently transfected with OC-A20. The anti-apoptotic role of A20 in MC3T3-E1 cells was determined by flow cytometric analysis (FACS), terminal dUTP nick end-labeling (TUNEL) and DNA gel electrophoresis (DNA ladder), respectively. Results: Weak A20 expression was found in MC3T3-E1 cells with primers for mouse A20. A20 mRNA and A20 protein expression were identified by RT-PCR and Western blot analysis in MC3T3-E1 cells transfected with OC-A20. After MC3T3-E1 cells and NIH3T3 cells were transiently transfected with OC-A20, A20 mRNA expression was found only in MC3T3-E1 cells. The rate of apoptosis decreased markedly in the OC-A20 group compared with the empty vector (pcDNA3) group by FACS (P<0.001), and a significant increase in TUNEL-positive staining was found in the pcDNA3 group compared with the OC-A20 group (P<0.001). Similar effects were demonstrated by DNA gel electrophoresis. Conclusion: We constructed an osteoblast-specific expression vector that expresses A20 protein in MC3T3-E1 cells and confirmed that A20 protects osteoblasts against TNF-alpha-induced apoptosis.

  8. Spent fuel shipping costs for transportation logistics analyses

    International Nuclear Information System (INIS)

    Logistics analyses supplied to the nuclear waste management programs of the U.S. Department of Energy through the Transportation Technology Center (TTC) at Sandia National Laboratories are used to predict nuclear waste material logistics, transportation packaging demands, shipping and receiving rates and transportation-related costs for alternative strategies. This study is an in-depth analysis of the problems and contingencies associated with the costs of shipping irradiated reactor fuel. These costs are extremely variable, however, and have changed frequently (sometimes monthly) during the past few years due to changes in capital, fuel, and labor costs. All costs and charges reported in this study are based on January 1982 data using existing transport cask systems and should be used as relative indices only. Actual shipping costs would be negotiable for each origin-destination combination

  9. Monte Carlo uncertainty analyses for integral beryllium experiments

    CERN Document Server

    Fischer, U; Tsige-Tamirat, H

    2000-01-01

    The novel Monte Carlo technique for calculating point detector sensitivities has been applied to two representative beryllium transmission experiments with the objective to investigate the sensitivity of important responses such as the neutron multiplication and to assess the related uncertainties due to the underlying cross-section data uncertainties. As an important result, it has been revealed that the neutron multiplication power of beryllium can be predicted with good accuracy using state-of-the-art nuclear data evaluations. Severe discrepancies do exist for the spectral neutron flux distribution that would transmit into significant uncertainties of the calculated neutron spectra and of the nuclear blanket performance in blanket design calculations. With regard to this, it is suggested to re-analyse the secondary energy and angle distribution data of beryllium by means of Monte Carlo based sensitivity and uncertainty calculations. Related code development work is underway.

  10. Photovoltaic System Modeling. Uncertainty and Sensitivity Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Clifford W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Martin, Curtis E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-08-01

    We report an uncertainty and sensitivity analysis for modeling AC energy from photovoltaic systems. Output from a PV system is predicted by a sequence of models. We quantify uncertainty in the output of each model using empirical distributions of each model's residuals. We propagate uncertainty through the sequence of models by sampling these distributions to obtain an empirical distribution of a PV system's output. We consider models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance; (3) predict cell temperature; (4) estimate DC voltage, current and power; (5) reduce DC power for losses due to inefficient maximum power point tracking or mismatch among modules; and (6) convert DC to AC power. Our analysis considers a notional PV system comprising an array of FirstSolar FS-387 modules and a 250 kW AC inverter; we use measured irradiance and weather at Albuquerque, NM. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. We found the uncertainty in the models for POA irradiance and effective irradiance to be the dominant contributors to uncertainty in predicted daily energy. Our analysis indicates that efforts to reduce the uncertainty in PV system output predictions may yield the greatest improvements by focusing on the POA and effective irradiance models.
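
    The residual-sampling propagation described above can be sketched as follows. The model forms, coefficients, and residual values below are invented placeholders, not Sandia's models; only the structure (sample each model's empirical residuals, chain the models, collect an output distribution) follows the abstract.

```python
# Monte Carlo propagation of empirical residuals through a chain of
# models, yielding an empirical distribution of predicted power.
import random

random.seed(42)

# Stand-in empirical residuals for two chained models (analogous to
# the POA-irradiance and effective-irradiance steps), in W/m^2.
poa_residuals = [-20.0, -5.0, 0.0, 5.0, 20.0]
eff_residuals = [-10.0, -2.0, 0.0, 2.0, 10.0]

def predicted_power(ghi):
    poa = 1.1 * ghi + random.choice(poa_residuals)   # model 1 + residual
    eff = 0.95 * poa + random.choice(eff_residuals)  # model 2 + residual
    return 0.18 * eff                                # assumed DC W per m^2

samples = [predicted_power(800.0) for _ in range(10000)]
mean = sum(samples) / len(samples)
spread = (max(samples) - min(samples)) / mean
print(round(mean, 1), round(spread, 3))
```

    Because each step resamples its own residuals, the spread of `samples` reflects the combined uncertainty of the whole model chain, which is what the report summarizes as the ~1% daily-energy uncertainty.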

  11. Testing earthquake predictions

    Science.gov (United States)

    Luen, Brad; Stark, Philip B.

    2008-01-01

    Statistical tests of earthquake predictions require a null hypothesis to model occasional chance successes. To define and quantify 'chance success' is knotty. Some null hypotheses ascribe chance to the Earth: Seismicity is modeled as random. The null distribution of the number of successful predictions - or any other test statistic - is taken to be its distribution when the fixed set of predictions is applied to random seismicity. Such tests tacitly assume that the predictions do not depend on the observed seismicity. Conditioning on the predictions in this way sets a low hurdle for statistical significance. Consider this scheme: When an earthquake of magnitude 5.5 or greater occurs anywhere in the world, predict that an earthquake at least as large will occur within 21 days and within an epicentral distance of 50 km. We apply this rule to the Harvard centroid-moment-tensor (CMT) catalog for 2000-2004 to generate a set of predictions. The null hypothesis is that earthquake times are exchangeable conditional on their magnitudes and locations and on the predictions - a common "nonparametric" assumption in the literature. We generate random seismicity by permuting the times of events in the CMT catalog. We consider an event successfully predicted only if (i) it is predicted and (ii) there is no larger event within 50 km in the previous 21 days. The P-value for the observed success rate is <0.001: The method successfully predicts about 5% of earthquakes, far better than 'chance' because the predictor exploits the clustering of earthquakes - occasional foreshocks - which the null hypothesis lacks. Rather than condition on the predictions and use a stochastic model for seismicity, it is preferable to treat the observed seismicity as fixed, and to compare the success rate of the predictions to the success rate of simple-minded predictions like those just described. If the proffered predictions do no better than a simple scheme, they have little value.
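
    The permutation test described above can be sketched on a synthetic catalog. Everything below (catalog, rates, clustering) is invented for illustration; only the structure of the test, shuffling event times while holding magnitudes and locations fixed and recomputing the success count of the "repeat within 21 days and 50 km" rule, follows the text.

```python
# Toy permutation test: observed successes vs. time-shuffled catalogs.
import random

random.seed(7)

# Synthetic catalog of (day, x_km, y_km); ~30% of events spawn a
# nearby event within 21 days, mimicking foreshock/aftershock pairs.
events = []
day = 0.0
for _ in range(200):
    day += random.expovariate(1 / 10)
    x, y = random.uniform(0, 1000), random.uniform(0, 1000)
    events.append((day, x, y))
    if random.random() < 0.3:
        events.append((day + random.uniform(0, 21), x + 1.0, y))

def successes(evts):
    evts = sorted(evts)
    n = 0
    for i, (t, x, y) in enumerate(evts):
        for t2, x2, y2 in evts[i + 1:]:
            if t2 - t > 21:
                break
            if (x2 - x) ** 2 + (y2 - y) ** 2 <= 50 ** 2:
                n += 1
                break
    return n

obs = successes(events)

times = [e[0] for e in events]
null = []
for _ in range(200):
    random.shuffle(times)
    null.append(successes([(t, x, y) for t, (_, x, y) in zip(times, events)]))

p = (1 + sum(s >= obs for s in null)) / (1 + len(null))
print(obs, p)   # clustering makes obs exceed the permuted counts
```

    The small p-value here illustrates the article's caution: the rule "succeeds" because real (here, simulated) seismicity is clustered and the shuffled null is not, not because the rule has genuine predictive skill.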

  12. STACE: Source Term Analyses for Containment Evaluations of transport casks

    International Nuclear Information System (INIS)

    The development of the Source Term Analyses for Containment Evaluations (STACE) methodology provides a unique means for estimating the probability of cladding breach within transport casks, quantifying the amount of radioactive material released into the cask interior, and calculating the releasable radionuclide concentrations and corresponding maximum permissible leakage rates. Following the guidance of ANSI N14.5, the STACE methodology provides a technically defensible means for estimating maximum permissible leakage rates. These containment criteria attempt to reflect the true radiological hazard by performing a detailed examination of the spent fuel, CRUD, and residual contamination contributions to the releasable source term. The evaluation of the spent fuel contribution to the source term has been modeled fairly accurately using the STACE methodology. The structural model predicts the cask drop load history, the mechanical response of the fuel assembly, and the probability of cladding breach. These data are then used to predict the amount of fission gas, volatile species, and fuel fines that are releasable from the cask. There are some areas where data are sparse or lacking, for which experimental validation is planned. Finally, the ANSI N14.5 recommendation that 3% and 100% of the fuel rods fail during normal and hypothetical accident conditions of transport, respectively, has been shown to be overly conservative by several orders of magnitude for these example analyses. Furthermore, the maximum permissible leakage rates for this example assembly under normal and hypothetical accident conditions are significantly higher than the leaktight requirements. By relaxing the maximum permissible leakage rates, the source term methodology is expected to significantly improve cask economics and safety.

  13. Three-dimensional lake water quality modeling: sensitivity and uncertainty analyses.

    Science.gov (United States)

    Missaghi, Shahram; Hondzo, Miki; Melching, Charles

    2013-11-01

    Two sensitivity and uncertainty analysis methods are applied to a three-dimensional coupled hydrodynamic-ecological model (ELCOM-CAEDYM) of a morphologically complex lake. The primary goals of the analyses are to increase confidence in the model predictions, identify influential model parameters, quantify the uncertainty of model prediction, and explore the spatial and temporal variabilities of model predictions. The influence of model parameters on four model-predicted variables (model output) and the contributions of each of the model-predicted variables to the total variations in model output are presented. The contributions of predicted water temperature, dissolved oxygen, total phosphorus, and algal biomass contributed 3, 13, 26, and 58% of total model output variance, respectively. The fraction of variance resulting from model parameter uncertainty was calculated by two methods and used for evaluation and ranking of the most influential model parameters. Nine out of the top 10 parameters identified by each method agreed, but their ranks were different. Spatial and temporal changes of model uncertainty were investigated and visualized. Model uncertainty appeared to be concentrated around specific water depths and dates that corresponded to significant storm events. The results suggest that spatial and temporal variations in the predicted water quality variables are sensitive to the hydrodynamics of physical perturbations such as those caused by stream inflows generated by storm events. The sensitivity and uncertainty analyses identified the mineralization of dissolved organic carbon, sediment phosphorus release rate, algal metabolic loss rate, internal phosphorus concentration, and phosphorus uptake rate as the most influential model parameters.
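
    The parameter-ranking step described above can be illustrated with a first-order variance-based sensitivity estimate. The toy model and its parameters below are stand-ins, not ELCOM-CAEDYM; the point is the mechanics of attributing output variance to individual parameters.

```python
# First-order sensitivity indices estimated from the variance of
# conditional means: bin the samples on one parameter and measure
# how much the bin means of the output vary.
import random

random.seed(3)

def model(p_mineralization, p_release, p_loss):
    # Toy response with deliberately unequal parameter influence.
    return 3.0 * p_mineralization + 1.5 * p_release + 0.5 * p_loss

n, bins = 20000, 20
params = [[random.random() for _ in range(n)] for _ in range(3)]
y = [model(a, b, c) for a, b, c in zip(*params)]
mean_y = sum(y) / n
var_y = sum((v - mean_y) ** 2 for v in y) / n

def first_order(i):
    # Variance of per-bin means approximates V[E(Y | X_i)].
    sums, counts = [0.0] * bins, [0] * bins
    for p, v in zip(params[i], y):
        b = min(int(p * bins), bins - 1)
        sums[b] += v
        counts[b] += 1
    means = [s / c for s, c in zip(sums, counts)]
    return sum(c * (m - mean_y) ** 2 for m, c in zip(means, counts)) / n / var_y

indices = [first_order(i) for i in range(3)]
print([round(s, 2) for s in indices])  # largest weight -> largest index
```

    Ranking parameters by such indices is one common way to arrive at a "most influential parameters" list of the kind the abstract reports; the study itself used two methods and compared their rankings.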

  14. Albedo Pattern Recognition and Time-Series Analyses in Malaysia

    Science.gov (United States)

    Salleh, S. A.; Abd Latif, Z.; Mohd, W. M. N. Wan; Chan, A.

    2012-07-01

    Pattern recognition and time-series analyses enable the evaluation and prediction of specific phenomena. Albedo pattern and time-series analyses are particularly useful for climate-condition monitoring. This study was conducted to identify albedo pattern changes over Malaysia. The recognized patterns and changes will be useful for a variety of environmental and climate monitoring studies, such as carbon budgeting and aerosol mapping. Ten years (2000-2009) of MODIS satellite images were used for the analyses and interpretation. The images were processed using ERDAS Imagine remote sensing software, ArcGIS 9.3, the 6S code for atmospheric calibration, and several MODIS tools (MRT, HDF2GIS, Albedo tools). Several methods of time-series analysis were explored; this paper demonstrates trend and seasonal time-series analyses using the converted HDF-format MODIS MCD43A3 albedo land product. The results revealed significant changes in albedo percentages over the past 10 years, and patterns related to Malaysia's nebulosity index (NI) and aerosol optical depth (AOD). A noticeable trend can be identified in the maximum and minimum values of the albedo, and the rises and falls of the line graph follow a similar trend to the daily observations, differing only in the magnitude of the rises and falls in albedo. It can thus be concluded that the temporal behaviour of land surface albedo in Malaysia is uniform with respect to the local monsoons. However, although the average albedo shows a linear trend with the nebulosity index, the pattern of albedo changes with respect to the nebulosity index indicates that external factors also affect the albedo values, since the plotted sky conditions and diffusion do not follow a uniform trend over the years, especially when the trend over 5-year intervals is examined; 2000 shows a high negative linear…
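
    A sketch of the trend and seasonal time-series analysis of the kind mentioned above, on a synthetic monthly albedo series. The trend slope and seasonal amplitude are invented, and the moving-average decomposition is a generic method, not necessarily the one the authors applied to the MCD43A3 product.

```python
# Trend + seasonal decomposition of a synthetic monthly albedo series.
import math

months = 120                                        # 2000-2009
albedo = [0.14 + 0.0001 * m                         # weak trend (assumed)
          + 0.01 * math.sin(2 * math.pi * m / 12)   # monsoon seasonality
          for m in range(months)]

# A 12-month moving average isolates the trend component.
trend = [sum(albedo[m - 6:m + 6]) / 12 for m in range(6, months - 5)]

# Averaging the detrended values per calendar month gives the
# seasonal pattern.
seasonal, counts = [0.0] * 12, [0] * 12
for m in range(6, months - 5):
    seasonal[m % 12] += albedo[m] - trend[m - 6]
    counts[m % 12] += 1
seasonal = [s / c for s, c in zip(seasonal, counts)]

print(round(trend[-1] - trend[0], 4))   # positive -> rising albedo
print(round(max(seasonal) - min(seasonal), 3))
```

    Because the moving average spans a full seasonal cycle, the periodic component cancels out of the trend estimate, which is what lets a monsoon-driven cycle be separated from a decadal drift.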

  15. Identifying, analysing and solving problems in practice.

    Science.gov (United States)

    Hewitt-Taylor, Jaqui

    When a problem is identified in practice, it is important to clarify exactly what it is and establish the cause before seeking a solution. This solution-seeking process should include input from those directly involved in the problematic situation, to enable individuals to contribute their perspective, appreciate why any change in practice is necessary and what will be achieved by the change. This article describes some approaches to identifying and analysing problems in practice so that effective solutions can be devised. It includes a case study and examples of how the Five Whys analysis, fishbone diagram, problem tree analysis, and Seven-S Model can be used to analyse a problem. PMID:22848969

  16. Analyses of cavitation instabilities in ductile metals

    DEFF Research Database (Denmark)

    Tvergaard, Viggo

    2007-01-01

    …and also tests for a thin ductile metal layer bonding two ceramic blocks have indicated rapid void growth. Analyses for these material configurations are discussed here. When the void radius is very small, a nonlocal plasticity model is needed to account for observed size-effects, and recent analyses… analyzed for a material containing a periodic distribution of spherical voids with two different void sizes, where the stress fields around larger voids may accelerate the growth of smaller voids. Another approach has been an analysis of a unit cell model in which a central cavity is discretely represented, while the surrounding voids are represented by a porous ductile material model in terms of a field quantity that specifies the variation of the void volume fraction in the surrounding metal.

  17. Reliability of chemical analyses of water samples

    Energy Technology Data Exchange (ETDEWEB)

    Beardon, R.

    1989-11-01

    Ground-water quality investigations require reliable chemical analyses of water samples. Unfortunately, laboratory analytical results are often unreliable. The Uranium Mill Tailings Remedial Action (UMTRA) Project's solution to this problem was to establish a two-phase quality assurance program for the analysis of water samples. In the first phase, eight laboratories analyzed three solutions of known composition. The analytical accuracy of each laboratory was ranked and three laboratories were awarded contracts. The second phase consists of ongoing monitoring of the reliability of the selected laboratories. The following conclusions are based on two years of experience with the UMTRA Project's Quality Assurance Program. The reliability of laboratory analyses should not be taken for granted. Analytical reliability may be independent of the prices charged by laboratories. Quality assurance programs benefit both the customer and the laboratory.

  18. Pratique de l'analyse fonctionelle

    CERN Document Server

    Tassinari, Robert

    1997-01-01

    Developing a product or service that is perfectly matched to the customer's needs and requirements is essential for any company. To leave nothing to chance, a rigorous methodology must be followed: that of functional analysis. This book gives a precise definition of the method and of its fields of application. It describes the best-performing methods for product design and the pursuit of quality, and introduces the notion of internal functional analysis. A key work for optimising product design processes within a company. -- Key ideas, by Business Digest

  19. Mikromechanische Analyse der Wirkungsmechanismen elektrischer Dehnungsmessstreifen

    OpenAIRE

    Stockmann, Martin

    2000-01-01

    Electrical strain measurement based on discrete strain gauges (DMS) is today one of the most important methods of experimental stress analysis. Precise measurements outside the calibration conditions, in particular at large deformations or with high transverse-strain components, require the nonlinear relationships between the component strains to be determined and the resistance change of the measuring grid to be taken into account. ...

  20. El Cours d’Analyse de Cauchy

    OpenAIRE

    Pérez, Javier; Aizpuru, Antonio

    1999-01-01

    In this article we present a contextualised study of Cauchy's Cours d'Analyse, analysing its significance and importance. We pay special attention to the degree of theoretical elaboration of limits, continuity, series, real numbers, functions and complete series, relating Cauchy's contributions to the conceptual level that preceded them.

  1. Mass spectrometer for the analyses of gases

    International Nuclear Information System (INIS)

    A 6-in-radius, 60° magnetic-sector mass spectrometer (designated as the MS-200) has been constructed for the quantitative and qualitative analyses of fixed gases and volatile organics in the concentration range from 1 ppm (by volume) to 100%. A partial pressure of 1 × 10⁻⁶ torr in the inlet expansion volume is required to achieve a useful signal at an electron-multiplier gain of 10,000

  2. Ethics of cost analyses in medical education

    OpenAIRE

    Walsh, Kieran

    2013-01-01

    Cost analyses in medical education are rarely straightforward, and rarely lead to clear-cut conclusions. Occasionally they do lead to clear conclusions, but even when that happens, some stakeholders will ask difficult but valid questions about what to do following cost analyses, specifically about distributive justice in the allocation of resources. At present there are few or no debates about these issues, and rationing decisions that are taken in medical education are largely made subconsciously...

  3. En Billig GPS Data Analyse Platform

    DEFF Research Database (Denmark)

    Andersen, Ove; Christiansen, Nick; Larsen, Niels T.;

    2011-01-01

    This article presents a complete software platform for the analysis of GPS data. The platform is built entirely from open-source components. The individual components of the platform are described in detail. The advantages and disadvantages of using open source are discussed, including which IT-policy measures...... organisations with a digital road map and GPS data can begin to carry out traffic analyses on these data. It is a requirement that suitable IT competences are present in the organisation....

  4. Delvis drenert analyse av innvendig avstivet utgraving

    OpenAIRE

    Myhrvold, Michael F

    2013-01-01

    This master's thesis deals with analyses of the partially drained effects that can arise in internally braced excavations. The purpose of the thesis is to carry out a numerical study of the process that governs the time-dependent development of braced excavations in low-permeability soils. This makes it possible to assess the partially drained effects and the influence they have on this type of excavation. Since the behaviour of the soil at small ...

  5. ANALYSING SPACE: ADAPTING AND EXTENDING MULTIMODAL SEMIOTICS

    Directory of Open Access Journals (Sweden)

    Louise J. Ravelli

    2015-07-01

    Full Text Available In the field of multimodal discourse analysis, one of the most exciting sites of application is that of 3D space: examining aspects of the built environment for its meaning-making potential. For the built environment – homes, offices, public buildings, parks, etc. – does indeed make meaning. These are spaces which speak – often their meanings are so familiar, we no longer hear what they say; sometimes, new and unusual sites draw attention to their meanings, and they are hotly contested. This chapter will suggest ways of analyzing 3D texts, based on the framework of Kress and van Leeuwen (2006). This framework, developed primarily for the analysis of 2D images, has been successfully extended to a range of other multimodal texts. Extension to the built environment includes Pang (2004), O'Toole (1994), Ravelli (2006), Safeyton (2004), Stenglin (2004) and White (1994), whose studies will inform the analyses presented here. This article will identify some of the key theoretical principles which underlie this approach, including the notions of text, context and metafunction, and will describe some of the main areas of analysis for 3D texts. Also, ways of bringing the analyses together will be considered. The analyses will be demonstrated in relation to the Scientia building at the University of New South Wales, Australia.

  6. Sequencing and comparative analyses of the genomes of zoysiagrasses.

    Science.gov (United States)

    Tanaka, Hidenori; Hirakawa, Hideki; Kosugi, Shunichi; Nakayama, Shinobu; Ono, Akiko; Watanabe, Akiko; Hashiguchi, Masatsugu; Gondo, Takahiro; Ishigaki, Genki; Muguerza, Melody; Shimizu, Katsuya; Sawamura, Noriko; Inoue, Takayasu; Shigeki, Yuichi; Ohno, Naoki; Tabata, Satoshi; Akashi, Ryo; Sato, Shusei

    2016-04-01

    Zoysia is a warm-season turfgrass, which comprises 11 allotetraploid species (2n = 4x = 40), each possessing different morphological and physiological traits. To characterize the genetic systems of Zoysia plants and to analyse their structural and functional differences in individual species and accessions, we sequenced the genomes of Zoysia species using HiSeq and MiSeq platforms. As a reference sequence of Zoysia species, we generated a high-quality draft sequence of the genome of Z. japonica accession 'Nagirizaki' (334 Mb) in which 59,271 protein-coding genes were predicted. In parallel, draft genome sequences of Z. matrella 'Wakaba' and Z. pacifica 'Zanpa' were also generated for comparative analyses. To investigate the genetic diversity among the Zoysia species, genome sequence reads of three additional accessions, Z. japonica 'Kyoto', Z. japonica 'Miyagi' and Z. matrella 'Chiba Fair Green', were accumulated and aligned against the reference genome of 'Nagirizaki' along with those from 'Wakaba' and 'Zanpa'. As a result, we detected 7,424,163 single-nucleotide polymorphisms and 852,488 short indels among these species. The information obtained in this study will be valuable for basic studies on zoysiagrass evolution and genetics as well as for the breeding of zoysiagrasses, and is made available in the 'Zoysia Genome Database' at http://zoysia.kazusa.or.jp. PMID:26975196

  8. Consumer brand choice: individual and group analyses of demand elasticity.

    Science.gov (United States)

    Oliveira-Castro, Jorge M; Foxall, Gordon R; Schrezenmaier, Teresa C

    2006-03-01

    Following the behavior-analytic tradition of analyzing individual behavior, the present research investigated demand elasticity of individual consumers purchasing supermarket products, and compared individual and group analyses of elasticity. Panel data from 80 UK consumers purchasing 9 product categories (i.e., baked beans, biscuits, breakfast cereals, butter, cheese, fruit juice, instant coffee, margarine and tea) during a 16-week period were used. Elasticity coefficients were calculated for individual consumers with data from all or only 1 product category (intra-consumer elasticities), and for each product category using all data points from all consumers (overall product elasticity) or 1 average data point per consumer (inter-consumer elasticity). In addition to this, split-sample elasticity coefficients were obtained for each individual with data from all product categories purchased during weeks 1 to 8 and 9 to 16. The results suggest that: 1) demand elasticity coefficients calculated for individual consumers purchasing supermarket food products are compatible with predictions from economic theory and behavioral economics; 2) overall product elasticities, typically employed in marketing and econometric research, include effects of inter-consumer and intra-consumer elasticities; 3) when comparing demand elasticities of different product categories, group and individual analyses yield similar trends; and 4) individual differences in demand elasticity are relatively consistent across time, but do not seem to be consistent across products. These results demonstrate the theoretical, methodological, and managerial relevance of investigating the behavior of individual consumers.
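    The elasticity coefficients described in this abstract are conventionally estimated as the slope of a log-log regression of quantity on price. Below is a minimal sketch of that calculation on synthetic data; the function name and the ordinary-least-squares estimator are illustrative assumptions, not the authors' code:

```python
import numpy as np

def demand_elasticity(prices, quantities):
    """Constant demand elasticity, estimated as the slope of an
    ordinary-least-squares regression of log(quantity) on log(price)."""
    logp = np.log(np.asarray(prices, dtype=float))
    logq = np.log(np.asarray(quantities, dtype=float))
    # OLS slope = cov(log p, log q) / var(log p)
    return np.cov(logp, logq, bias=True)[0, 1] / np.var(logp)

# Synthetic purchases generated with a true elasticity of -1.5:
prices = [1.0, 1.2, 1.5, 2.0, 2.5]
quantities = [p ** -1.5 for p in prices]
print(round(demand_elasticity(prices, quantities), 3))  # -1.5
```

    A coefficient more negative than -1 marks elastic demand (consumption falls faster than price rises); the intra-consumer and inter-consumer coefficients contrasted in the abstract differ only in which data points enter the regression.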

  9. Department of Energy's team's analyses of Soviet designed VVERs

    Energy Technology Data Exchange (ETDEWEB)

    1989-09-01

    This document provides Appendices A through K of this report. The topics discussed respectively are: radiation induced embrittlement and annealing of reactor pressure vessel steels; loss of coolant accident blowdown analyses; LOCA blowdown response analyses; non-seismic structural response analyses; seismic analyses; "S" seal integrity; reactor transient analyses; fire protection; aircraft impacts; and boric acid induced corrosion. (FI).

  10. Predictive systems ecology.

    Science.gov (United States)

    Evans, Matthew R; Bithell, Mike; Cornell, Stephen J; Dall, Sasha R X; Díaz, Sandra; Emmott, Stephen; Ernande, Bruno; Grimm, Volker; Hodgson, David J; Lewis, Simon L; Mace, Georgina M; Morecroft, Michael; Moustakas, Aristides; Murphy, Eugene; Newbold, Tim; Norris, K J; Petchey, Owen; Smith, Matthew; Travis, Justin M J; Benton, Tim G

    2013-11-22

    Human societies, and their well-being, depend to a significant extent on the state of the ecosystems that surround them. These ecosystems are changing rapidly, usually in response to anthropogenic changes in the environment. To determine the likely impact of environmental change on ecosystems and the best ways to manage them, it would be desirable to be able to predict their future states. We present a proposal to develop the paradigm of predictive systems ecology, explicitly to understand and predict the properties and behaviour of ecological systems. We discuss the necessary and desirable features of predictive systems ecology models. There are places where predictive systems ecology is already being practised and we summarize a range of terrestrial and marine examples. Significant challenges remain, but we suggest that ecology would both benefit as a scientific discipline and increase its impact in society if it were to embrace the need to become more predictive.

  11. Predictability of conversation partners

    CERN Document Server

    Takaguchi, Taro; Sato, Nobuo; Yano, Kazuo; Masuda, Naoki

    2011-01-01

    Recent developments in sensing technologies have enabled us to examine the nature of human social behavior in greater detail. By applying an information theoretic method to the spatiotemporal data of cell-phone locations, Song et al. (2010) found that human mobility patterns are remarkably predictable. Inspired by their work, we address a similar predictability question in a different kind of human social activity: conversation events. The predictability in the sequence of one's conversation partners is defined as the degree to which one's next conversation partner can be predicted given the current partner. We quantify this predictability by using the mutual information. We examine the predictability of conversation events for each individual using the longitudinal data of face-to-face interactions collected from two company offices in Japan. Each subject wears a name tag equipped with an infrared sensor node, and conversation events are marked when signals are exchanged between close sensor nodes. We find t...

  12. Evaluation of Model Operational Analyses during DYNAMO

    Science.gov (United States)

    Ciesielski, Paul; Johnson, Richard

    2013-04-01

    A primary component of the observing system in the DYNAMO-CINDY2011-AMIE field campaign was an atmospheric sounding network comprised of two sounding quadrilaterals, one north and one south of the equator over the central Indian Ocean. During the experiment a major effort was undertaken to ensure the real-time transmission of these data onto the GTS (Global Telecommunication System) for dissemination to the operational centers (ECMWF, NCEP, JMA, etc.). Preliminary estimates indicate that ~95% of the soundings from the enhanced sounding network were successfully transmitted and potentially used in their data assimilation systems. Because of the wide use of operational and reanalysis products (e.g., in process studies, initializing numerical simulations, construction of large-scale forcing datasets for CRMs, etc.), their validity will be examined by comparing a variety of basic and diagnosed fields from two operational analyses (ECMWF and NCEP) to similar analyses based solely on sounding observations. Particular attention will be given to the vertical structures of apparent heating (Q1) and drying (Q2) from the operational analyses (OA), which are strongly influenced by cumulus parameterizations, a source of model infidelity. Preliminary results indicate that the OA products did a reasonable job at capturing the mean and temporal characteristics of convection during the DYNAMO enhanced observing period, which included the passage of two significant MJO events during the October-November 2011 period. For example, temporal correlations between Q2-budget derived rainfall from the OA products and that estimated from the TRMM satellite (i.e., the 3B42V7 product) were greater than 0.9 over the Northern Sounding Array of DYNAMO. However, closer inspection of the budget profiles shows notable differences between the OA products and the sounding-derived results in low-level (surface to 700 hPa) heating and drying structures. This presentation will examine these differences and

  13. Stable isotopic analyses in paleoclimatic reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Wigand, P.E. [Univ. and Community College System of Nevada, Reno, NV (United States)

    1995-09-01

    Most traditional paleoclimatic proxy data have inherent time lags between climatic input and system response that constrain their use in accurate reconstruction of paleoclimate chronology, scaling of its variability, and the elucidation of the processes that determine its impact on the biotic and abiotic environment. With the exception of dendroclimatology, and studies of short-lived organisms and pollen recovered from annually varved lacustrine sediments, significant periods of time ranging from years, to centuries, to millennia may intervene between climate change and its first manifestation in paleoclimatic proxy data records. Reconstruction of past climate through changes in plant community composition derived from pollen sequences and plant remains from ancient woodrat middens, wet environments and dry caves all suffer from these lags. However, stable isotopic analyses can provide more immediate indication of biotic response to climate change. Evidence of past physiological response of organisms to changes in effective precipitation as climate varies can be provided by analyses of the stable isotopic content of plant macrofossils from various contexts. These analyses consider variation in the stable isotopic (hydrogen, oxygen and carbon) content of plant tissues as it reflects (1) past global or local temperature through changes in meteoric (rainfall) water chemistry in the case of the first two isotopes, and (2) plant stress through changes in plant respiration/transpiration processes under differing water availability, and varying atmospheric CO2 composition (which itself may actually be a net result of biotic response to climate change). Studies currently being conducted in the Intermountain West indicate both long- and short-term responses that, when calibrated with modern analogue studies, have the potential of revealing not only the timing of climate events, but their direction, magnitude and rapidity.

  14. IDEA: Interactive Display for Evolutionary Analyses

    Directory of Open Access Journals (Sweden)

    Carlton Jane M

    2008-12-01

    Full Text Available Abstract Background The availability of complete genomic sequences for hundreds of organisms promises to make obtaining genome-wide estimates of substitution rates, selective constraints and other molecular evolution variables of interest an increasingly important approach to addressing broad evolutionary questions. Two of the programs most widely used for this purpose are codeml and baseml, parts of the PAML (Phylogenetic Analysis by Maximum Likelihood) suite. A significant drawback of these programs is their lack of a graphical user interface, which can limit their user base and considerably reduce their efficiency. Results We have developed IDEA (Interactive Display for Evolutionary Analyses), an intuitive graphical input and output interface which interacts with PHYLIP for phylogeny reconstruction and with codeml and baseml for molecular evolution analyses. IDEA's graphical input and visualization interfaces eliminate the need to edit and parse text input and output files, reducing the likelihood of errors and improving processing time. Further, its interactive output display gives the user immediate access to results. Finally, IDEA can process data in parallel on a local machine or computing grid, allowing genome-wide analyses to be completed quickly. Conclusion IDEA provides a graphical user interface that allows the user to follow a codeml or baseml analysis from parameter input through to the exploration of results. Novel options streamline the analysis process, and post-analysis visualization of phylogenies, evolutionary rates and selective constraint along protein sequences simplifies the interpretation of results. The integration of these functions into a single tool eliminates the need for lengthy data handling and parsing, significantly expediting access to global patterns in the data.

  15. Sivers function: SIDIS data, fits and predictions

    CERN Document Server

    Anselmino, M; D'Alesio, U; Kotzinian, A; Murgia, F; Prokudin, A

    2005-01-01

    The most recent data on the weighted transverse single spin asymmetry A_{UT}^{\\sin(\\phi_h-\\phi_S)} from HERMES and COMPASS collaborations are analysed within LO parton model; all transverse motions are taken into account. Extraction of the Sivers function for u and d quarks is performed. Based on the extracted Sivers functions, predictions for A_{UT}^{\\sin(\\phi_h-\\phi_S)} asymmetries at JLab are given; suggestions for further measurements at COMPASS, with a transversely polarized hydrogen target and selecting favourable kinematical ranges, are discussed. Predictions are also presented for Single Spin Asymmetries (SSA) in Drell-Yan processes at RHIC and GSI.

  16. FEM-ANALYSE AV INDUSTRIELL ALUMINIUMSPROFILEKSTRUDERING

    OpenAIRE

    Christenssen, Wenche

    2014-01-01

    This thesis was written to increase understanding and knowledge of material flow in the extrusion of complex, thin-walled aluminium profiles. It also reviews how uneven material flow out of a die can be balanced by the use of a pre-chamber. The report covers the construction of models and the simulation of two different profile geometries. The first profile is a U-profile that has previously been analysed using a model material. This was done in a Diplom...

  17. Erregerspektrum bei tiefen Halsinfektionen: Eine retrospektive Analyse

    OpenAIRE

    Sömmer, C; Haid, M; Hommerich, C; Laskawi, R; Canis, M; Matthias, C

    2014-01-01

    Introduction: Deep neck infections are among the most dangerous diseases in otorhinolaryngology. This analysis gives an overview of the microbiology of deep neck infections and of the factors that can lead to a change in the spectrum of pathogens. Methods: From January 2002 to December 2012, 63 patients with deep neck infections were treated at the ENT clinic of the University Medical Center Göttingen. Intraoperative swabs were taken. The incidence of the most common pathogens was ...

  18. Use of Geospatial Analyses for Semantic Reasoning

    OpenAIRE

    Karmacharya, Ashish; Cruz, Christophe; Boochs, Frank; Marzani, Franck

    2010-01-01

    This work focuses on the integration of spatial analyses into semantic reasoning in order to compute new axioms of an existing OWL ontology. To make this concrete, we have defined Spatial Built-ins, an extension of the existing Built-ins of the SWRL rule language. It permits deductive rules to be run with the help of a translation rule engine. Thus, the Spatial SWRL rules are translated to standard SWRL rules. Once the spatial functions of the Spatial SWRL rules are comput...

  19. Hybrid Logical Analyses of the Ambient Calculus

    DEFF Research Database (Denmark)

    Bolander, Thomas; Hansen, Rene Rydhof

    2010-01-01

    In this paper, hybrid logic is used to formulate three control flow analyses for Mobile Ambients, a process calculus designed for modelling mobility. We show that hybrid logic is very well-suited to express the semantic structure of the ambient calculus and how features of hybrid logic can be exploited to reduce the "administrative overhead" of the analysis specification and thus simplify it. Finally, we use HyLoTab, a fully automated theorem prover for hybrid logic, both as a convenient platform for a prototype implementation as well as to formally prove the correctness of the analysis.

  20. Analyses of containment structures with corrosion damage

    International Nuclear Information System (INIS)

    Corrosion damage to a nuclear power plant containment structure can degrade the pressure capacity of the vessel. For the low-carbon, low-strength steels used in containments, the effect of corrosion on material properties is discussed. Strain-to-failure tests, in uniaxial tension, have been performed on corroded material samples. Results were used to select strain-based failure criteria for corroded steel. Using the ABAQUS finite element analysis code, the capacity of a typical PWR Ice Condenser containment with corrosion damage has been studied. Multiple analyses were performed, with the locations of the corrosion in the containment and the amount of corrosion varied in each analysis.

  1. Rod Ellis, Gary Barkhuizen, Analysing Learner Language

    OpenAIRE

    Narcy-Combes, Marie-Françoise

    2014-01-01

    This book comes at just the right moment to round out the toolkit available to young researchers in applied linguistics and language didactics, as well as to practitioners in the field who wish to conduct action research. As is often the case with Rod Ellis's works, it is a comprehensive survey: a diachronic study of the tools used since the 1960s by researchers in language acquisition to analyse the written and oral productions of language learners. ...

  2. En analyse av Yoga-kundalini-upanisad

    OpenAIRE

    2006-01-01

    The thesis "En analyse av Yoga-kundalini-upanisad" is based on the English translation of the Yoga-kundalini-upanisad by the Indian ascetic Narayanaswamy Aiyer, published in Thirty Minor Upanisad-s, Including the Yoga Upanisad-s (Oklahoma, Santarasa Publications, 1980). This Hindu text is described as one of the 21 yoga upanishads, the eighty-sixth of the 108 classical upanishads, and forms part of the text corpus Krsna-Yajurveda. The text serves as a manual of exercises from the disciplines of hathayoga, ...

  3. Introduction: Analysing Emotion and Theorising Affect

    Directory of Open Access Journals (Sweden)

    Peta Tait

    2016-08-01

    Full Text Available This discussion introduces ideas of emotion and affect for a volume of articles demonstrating the scope of approaches used in their study within the humanities and creative arts. The volume offers multiple perspectives on emotion and affect within 20th-century and 21st-century texts, arts and organisations and their histories. The discussion explains how emotion encompasses the emotions, emotional feeling, sensation and mood and how these can be analysed particularly in relation to literature, art and performance. It briefly summarises concepts of affect theory within recent approaches before introducing the articles.

  4. Fully Coupled FE Analyses of Buried Structures

    Directory of Open Access Journals (Sweden)

    James T. Baylot

    1994-01-01

    Full Text Available Current procedures for determining the response of buried structures to the effects of the detonation of buried high explosives recommend decoupling the free-field stress analysis from the structure response analysis. A fully coupled (explosive–soil–structure) finite element analysis procedure was developed so that the accuracies of current decoupling procedures could be evaluated. Comparisons of the results of analyses performed using this procedure with scale-model experiments indicate that this finite element procedure can be used to effectively evaluate the accuracies of the methods currently being used to decouple the free-field stress analysis from the structure response analysis.

  5. Large scale breeder reactor pump dynamic analyses

    International Nuclear Information System (INIS)

    The lateral natural frequency and vibration response analyses of the Large Scale Breeder Reactor (LSBR) primary pump were performed as part of the total dynamic analysis effort to obtain the fabrication release. The special features of pump modeling are outlined in this paper. The analysis clearly demonstrates the method of increasing the system natural frequency by reducing the generalized mass without significantly changing the generalized stiffness of the structure. Also, a method of computing the maximum relative and absolute steady state responses and associated phase angles at given locations is provided. This type of information is very helpful in generating response versus frequency and phase angle versus frequency plots
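    The frequency-raising strategy described above follows directly from the modal relation f = (1/2π)·√(k/m): holding the generalized stiffness fixed while reducing the generalized mass raises the modal frequency. A minimal sketch with hypothetical numbers (not the report's actual pump model):

```python
import math

def natural_frequency_hz(k_gen, m_gen):
    """Undamped natural frequency (Hz) of one generalized (modal)
    degree of freedom: f = sqrt(k/m) / (2*pi)."""
    return math.sqrt(k_gen / m_gen) / (2.0 * math.pi)

k = 4.0e6                                   # generalized stiffness, N/m (assumed)
f_before = natural_frequency_hz(k, 1000.0)  # original generalized mass, kg (assumed)
f_after = natural_frequency_hz(k, 640.0)    # mass reduced, stiffness unchanged
print(f_after > f_before)  # True: lower generalized mass raises the frequency
```

    Because f scales with 1/√m, a 36% reduction in generalized mass raises the frequency by 25% without any change to the generalized stiffness, which is the effect the analysis exploits.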

  6. Visualizing Risk Prediction Models

    OpenAIRE

    Vanya Van Belle; Ben Van Calster

    2015-01-01

    Objective Risk prediction models can assist clinicians in making decisions. To boost the uptake of these models in clinical practice, it is important that end-users understand how the model works and can efficiently communicate its results. We introduce novel methods for interpretable model visualization. Methods The proposed visualization techniques are applied to two prediction models from the Framingham Heart Study for the prediction of intermittent claudication and stroke after atrial fibrillation...

  7. Pyroshock prediction procedures

    Science.gov (United States)

    Piersol, Allan G.

    2002-05-01

    Given sufficient effort, pyroshock loads can be predicted by direct analytical procedures using Hydrocodes that analytically model the details of the pyrotechnic explosion and its interaction with adjacent structures, including nonlinear effects. However, it is more common to predict pyroshock environments using empirical procedures based upon extensive studies of past pyroshock data. Various empirical pyroshock prediction procedures are discussed, including those developed by the Jet Propulsion Laboratory, Lockheed-Martin, and Boeing.

  8. Predictability of Conversation Partners

    Science.gov (United States)

    Takaguchi, Taro; Nakamura, Mitsuhiro; Sato, Nobuo; Yano, Kazuo; Masuda, Naoki

    2011-08-01

    Recent developments in sensing technologies have enabled us to examine the nature of human social behavior in greater detail. By applying an information-theoretic method to the spatiotemporal data of cell-phone locations, [C. Song et al., Science 327, 1018 (2010)] found that human mobility patterns are remarkably predictable. Inspired by their work, we address a similar predictability question in a different kind of human social activity: conversation events. The predictability in the sequence of one's conversation partners is defined as the degree to which one's next conversation partner can be predicted given the current partner. We quantify this predictability by using the mutual information. We examine the predictability of conversation events for each individual using the longitudinal data of face-to-face interactions collected from two company offices in Japan. Each subject wears a name tag equipped with an infrared sensor node, and conversation events are marked when signals are exchanged between sensor nodes in close proximity. We find that the conversation events are predictable to a certain extent; knowing the current partner decreases the uncertainty about the next partner by 28.4% on average. Much of the predictability is explained by long-tailed distributions of interevent intervals. However, a predictability also exists in the data, apart from the contribution of their long-tailed nature. In addition, an individual's predictability is correlated with the position of the individual in the static social network derived from the data. Individuals confined in a community—in the sense of an abundance of surrounding triangles—tend to have low predictability, and those bridging different communities tend to have high predictability.
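    The abstract quantifies predictability as the mutual information between the current and the next conversation partner. A plug-in estimate of that quantity can be sketched as follows; the function and the toy partner sequence are illustrative assumptions, not the authors' code:

```python
import math
from collections import Counter

def partner_predictability(sequence):
    """Plug-in estimate of I(current partner; next partner) in bits,
    plus the fraction of the next-partner entropy H(next) it removes."""
    pairs = list(zip(sequence, sequence[1:]))
    n = len(pairs)
    p_xy = Counter(pairs)                     # joint counts of (current, next)
    p_x = Counter(x for x, _ in pairs)        # marginal counts of current partner
    p_y = Counter(y for _, y in pairs)        # marginal counts of next partner
    mi = sum((c / n) * math.log2(n * c / (p_x[x] * p_y[y]))
             for (x, y), c in p_xy.items())
    h_y = -sum((c / n) * math.log2(c / n) for c in p_y.values())
    return mi, (mi / h_y if h_y else 0.0)

# Perfectly alternating partners: the next partner is fully determined.
mi, fraction = partner_predictability(list("ABABABABAB"))
print(round(fraction, 6))  # 1.0 -- all uncertainty about the next partner removed
```

    The 28.4% uncertainty reduction reported in the abstract corresponds to this fraction, averaged over individuals; real partner sequences fall between the fully determined case above and 0.0 for independent draws.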

  9. Is Time Predictability Quantifiable?

    DEFF Research Database (Denmark)

    Schoeberl, Martin

    2012-01-01

    Computer architects and researchers in the real-time domain have started to investigate processors and architectures optimized for real-time systems. Optimized for real-time systems means time predictable, i.e., architectures where it is possible to statically derive a tight bound of the worst-case execution time. To compare different approaches we would like to quantify time predictability; that means we need to measure time predictability. In this paper we discuss the different approaches for these measurements and conclude that time predictability is practically not quantifiable. We can only compare the worst-case execution time bounds of different architectures.

  10. Hitchhikers’ guide to analysing bird ringing data

    Directory of Open Access Journals (Sweden)

    Harnos Andrea

    2015-12-01

    Full Text Available Bird ringing datasets constitute possibly the largest source of temporal and spatial information on vertebrate taxa available on the globe. Initially, the method was invented to understand avian migration patterns. However, data deriving from bird ringing have since been used in an array of other disciplines, including population monitoring, changes in demography, conservation management, and studies of the effects of climate change, to name a few. Despite this widespread use and importance, there are no guidelines available that specifically describe the practice of data management, preparation and analysis of ringing datasets. Here, we present the first of a series of comprehensive tutorials that may help fill this gap. We describe in detail, and through a real-life example, the intricacies of data cleaning and how to create a data table ready for analysis from raw ringing data in the R software environment. Moreover, we created, and present here, the R package ringR, designed to carry out various specific tasks and plots related to bird ringing data. Most methods described here can also be applied to a wide range of capture-recapture type data based on individual marking, regardless of taxon or research question.

  11. Transportation systems analyses: Volume 1: Executive Summary

    Science.gov (United States)

    1993-05-01

    The principal objective of this study is to accomplish a systems engineering assessment of the nation's space transportation infrastructure. This analysis addresses the necessary elements to perform man delivery and return, cargo transfer, cargo delivery, payload servicing, and the exploration of the Moon and Mars. Specific elements analyzed, but not limited to, include the Space Exploration Initiative (SEI), the National Launch System (NLS), the current expendable launch vehicle (ELV) fleet, ground facilities, the Space Station Freedom (SSF), and other civil, military and commercial payloads. The performance of this study entails maintaining a broad perspective on the large number of transportation elements that could potentially comprise the U.S. space infrastructure over the next several decades. To perform this systems evaluation, top-level trade studies are conducted to enhance our understanding of the relationships between elements of the infrastructure. This broad 'infrastructure-level perspective' permits the identification of preferred infrastructures. Sensitivity analyses are performed to assure the credibility and usefulness of study results. This executive summary of the transportation systems analyses (TSM) semi-annual report addresses the SSF logistics resupply. Our analysis parallels the ongoing NASA SSF redesign effort. Therefore, there could be no SSF design to drive our logistics analysis. Consequently, the analysis attempted to bound the reasonable SSF design possibilities (and the subsequent transportation implications). No other strategy really exists until after a final decision is rendered on the SSF configuration.

  12. Hierarchical regression for analyses of multiple outcomes.

    Science.gov (United States)

    Richardson, David B; Hamra, Ghassan B; MacLehose, Richard F; Cole, Stephen R; Chu, Haitao

    2015-09-01

    In cohort mortality studies, there often is interest in associations between an exposure of primary interest and mortality due to a range of different causes. A standard approach to such analyses involves fitting a separate regression model for each type of outcome. However, the statistical precision of some estimated associations may be poor because of sparse data. In this paper, we describe a hierarchical regression model for estimation of parameters describing outcome-specific relative rate functions and associated credible intervals. The proposed model uses background stratification to provide flexible control for the outcome-specific associations of potential confounders, and it employs a hierarchical "shrinkage" approach to stabilize estimates of an exposure's associations with mortality due to different causes of death. The approach is illustrated in analyses of cancer mortality in 2 cohorts: a cohort of dioxin-exposed US chemical workers and a cohort of radiation-exposed Japanese atomic bomb survivors. Compared with standard regression estimates of associations, hierarchical regression yielded estimates with improved precision that tended to have less extreme values. The hierarchical regression approach also allowed the fitting of models with effect-measure modification. The proposed hierarchical approach can yield estimates of association that are more precise than conventional estimates when one wishes to estimate associations with multiple outcomes. PMID:26232395
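The shrinkage idea can be illustrated with a simple empirical-Bayes sketch (this is not the paper's full hierarchical model; the estimates, variances, and the moment estimator of the between-outcome variance below are illustrative assumptions):

```python
import numpy as np

# Hypothetical outcome-specific log rate ratios and their variances,
# e.g. one estimate per cause of death from separate regressions.
beta = np.array([0.90, 0.10, -0.30, 1.50, 0.20])
var = np.array([0.40, 0.05, 0.10, 0.60, 0.08])

# Precision-weighted common mean and a method-of-moments estimate of
# the between-outcome variance tau^2 (DerSimonian-Laird style).
w_fixed = 1.0 / var
mu = np.average(beta, weights=w_fixed)
q = np.sum(w_fixed * (beta - mu) ** 2)
denom = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (q - (len(beta) - 1)) / denom)

# Shrink each noisy estimate toward the common mean: the noisier the
# estimate (larger var), the harder it is pulled in.
w = tau2 / (tau2 + var)
beta_shrunk = w * beta + (1 - w) * mu
print(beta_shrunk)
```

The extreme, imprecise estimate (1.50 with variance 0.60) is pulled strongly toward the pooled mean, mirroring the "less extreme, more precise" behaviour described in the abstract.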

  13. Ethics of cost analyses in medical education.

    Science.gov (United States)

    Walsh, Kieran

    2013-11-01

    Cost analyses in medical education are rarely straightforward, and rarely lead to clear-cut conclusions. Occasionally they do lead to clear conclusions but even when that happens, some stakeholders will ask difficult but valid questions about what to do following cost analyses-specifically about distributive justice in the allocation of resources. At present there are few or no debates about these issues and rationing decisions that are taken in medical education are largely made subconsciously. Distributive justice 'concerns the nature of a socially just allocation of goods in a society'. Inevitably there is a large degree of subjectivity in the judgment as to whether an allocation is seen as socially just or ethical. There are different principles by which we can view distributive justice and which therefore affect the prism of subjectivity through which we see certain problems. For example, we might say that distributive justice at a certain institution or in a certain medical education system operates according to the principle that resources must be divided equally amongst learners. Another system may say that resources should be distributed according to the needs of learners or even of patients. No ethical system or model is inherently right or wrong, they depend on the context in which the educator is working. PMID:24203859

  14. Bioinformatics tools for analysing viral genomic data.

    Science.gov (United States)

    Orton, R J; Gu, Q; Hughes, J; Maabar, M; Modha, S; Vattipally, S B; Wilkie, G S; Davison, A J

    2016-04-01

    The field of viral genomics and bioinformatics is experiencing a strong resurgence due to high-throughput sequencing (HTS) technology, which enables the rapid and cost-effective sequencing and subsequent assembly of large numbers of viral genomes. In addition, the unprecedented power of HTS technologies has enabled the analysis of intra-host viral diversity and quasispecies dynamics in relation to important biological questions on viral transmission, vaccine resistance and host jumping. HTS also enables the rapid identification of both known and potentially new viruses from field and clinical samples, thus adding new tools to the fields of viral discovery and metagenomics. Bioinformatics has been central to the rise of HTS applications because new algorithms and software tools are continually needed to process and analyse the large, complex datasets generated in this rapidly evolving area. In this paper, the authors give a brief overview of the main bioinformatics tools available for viral genomic research, with a particular emphasis on HTS technologies and their main applications. They summarise the major steps in various HTS analyses, starting with quality control of raw reads and encompassing activities ranging from consensus and de novo genome assembly to variant calling and metagenomics, as well as RNA sequencing.
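As a flavour of the quality-control step with which such pipelines begin, here is a toy 3'-end quality trimmer (a deliberately minimal stand-in for real QC tools; the read and Phred scores are invented):

```python
def quality_trim(seq, quals, threshold=20):
    """Trim a read from the 3' end until the base quality meets the
    threshold. A toy stand-in for the QC step of an HTS pipeline."""
    end = len(seq)
    while end > 0 and quals[end - 1] < threshold:
        end -= 1
    return seq[:end], quals[:end]

# Hypothetical read with a typical low-quality 3' tail
read = "ACGTACGTTT"
phred = [38, 37, 36, 35, 30, 28, 25, 12, 8, 5]
trimmed, _ = quality_trim(read, phred)
print(trimmed)
```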

  15. CHEMICAL ANALYSES OF SODIUM SYSTEMS FINAL REPORT

    Energy Technology Data Exchange (ETDEWEB)

    Greenhalgh, W. O.; Yunker, W. H.; Scott, F. A.

    1970-06-01

    BNWL-1407 summarizes information gained from the Chemical Analyses of Sodium Systems Program pursued by Battelle-Northwest over the period from July 1967 through June 1969. Tasks included feasibility studies for performing coulometric titration and polarographic determinations of oxygen in sodium, and the development of new separation techniques for sodium impurities and their subsequent analyses. The program was terminated ahead of schedule, so firm conclusions were not obtained in all areas of the work. At least 40 coulometric titrations were carried out and special test cells were developed for coulometric application. Data indicated that polarographic measurements are theoretically feasible, but practical application of the method was not verified. An emission spectrographic procedure for trace metal impurities was developed and published. Trace metal analysis by a neutron activation technique was shown to be feasible; key to the success of the activation technique was the application of a new ion exchange resin which provided a sodium separation factor of 10^11. Preliminary studies on direct scavenging of trace metals produced no conclusive results.

  16. ANALYSES ON SYSTEMATIC CONFRONTATION OF FIGHTER AIRCRAFT

    Institute of Scientific and Technical Information of China (English)

    Huai Jinpeng; Wu Zhe; Huang Jun

    2002-01-01

    Analyses of the systematic confrontation between two military forces are the highest hierarchy in studies of the operational effectiveness of weapon systems. A physical model for tactical many-on-many engagements of aerial warfare with heterogeneous fighter aircraft is established. On the basis of the Lanchester multivariate equations of the square law, a mathematical model corresponding to the established physical model is given. A superiority parameter is then derived directly from the mathematical model. In view of the high-tech conditions of modern warfare, the concept of a superiority parameter that more truly reflects the essence of an air-to-air engagement is further formulated. The attrition coefficients, which are key to the differential equations, are determined using a tactic of random target assignment and the air-to-air capability index of the fighter aircraft. Taking the mathematical model and the superiority parameter as cores, calculations and analyses of complicated systemic problems, such as evaluation of battle superiority, prognostication of the combat process and optimization of force collocations, have been accomplished. Results indicate that classical combat theory, together with its recent developments, has found new applications in military operations research for complicated confrontation analysis.
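The Lanchester square-law dynamics at the core of such models can be sketched numerically; the force sizes and attrition coefficients below are hypothetical:

```python
def lanchester_square(r0, b0, alpha, beta, dt=0.001, t_max=100.0):
    """Integrate the Lanchester square-law equations
        dR/dt = -alpha * B,   dB/dt = -beta * R
    with a simple Euler scheme until one force is annihilated."""
    r, b = float(r0), float(b0)
    t = 0.0
    while r > 0 and b > 0 and t < t_max:
        r, b = r - alpha * b * dt, b - beta * r * dt
        t += dt
    return max(r, 0.0), max(b, 0.0)

# Under the square law, fighting strength scales with the *square* of
# force size: with equal effectiveness, the larger force wins and
# beta*R^2 - alpha*B^2 is (approximately) conserved.
r_end, b_end = lanchester_square(r0=100, b0=80, alpha=0.1, beta=0.1)
print(r_end, b_end)
```

With R0 = 100 vs B0 = 80 and equal attrition coefficients, the invariant predicts the survivor ends with about sqrt(100^2 - 80^2) = 60 units.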

  17. Sensitivity in risk analyses with uncertain numbers.

    Energy Technology Data Exchange (ETDEWEB)

    Tucker, W. Troy; Ferson, Scott

    2006-06-01

    Sensitivity analysis is a study of how changes in the inputs to a model influence the results of the model. Many techniques have recently been proposed for use when the model is probabilistic. This report considers the related problem of sensitivity analysis when the model includes uncertain numbers that can involve both aleatory and epistemic uncertainty and the method of calculation is Dempster-Shafer evidence theory or probability bounds analysis. Some traditional methods for sensitivity analysis generalize directly for use with uncertain numbers, but, in some respects, sensitivity analysis for these analyses differs from traditional deterministic or probabilistic sensitivity analyses. A case study of a dike reliability assessment illustrates several methods of sensitivity analysis, including traditional probabilistic assessment, local derivatives, and a ''pinching'' strategy that hypothetically reduces the epistemic uncertainty or aleatory uncertainty, or both, in an input variable to estimate the reduction of uncertainty in the outputs. The prospects for applying the methods to black box models are also considered.
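The "pinching" strategy can be sketched on a toy two-input Monte Carlo model (an illustration of the idea only, not the report's dike assessment or its uncertain-number machinery):

```python
import random

random.seed(1)

def model(x, y):
    """Toy response whose uncertainty is dominated by x."""
    return x * y + x

def output_variance(sample_x, sample_y, n=20000):
    """Monte Carlo estimate of the output variance of the model."""
    outs = [model(sample_x(), sample_y()) for _ in range(n)]
    m = sum(outs) / n
    return sum((o - m) ** 2 for o in outs) / n

x_dist = lambda: random.uniform(0.0, 2.0)   # uncertain input X
y_dist = lambda: random.uniform(1.0, 3.0)   # uncertain input Y
base = output_variance(x_dist, y_dist)

# "Pinch" one input to a point value and see how much the output
# uncertainty drops; the bigger the drop, the more that input matters.
pinched_x = output_variance(lambda: 1.0, y_dist)
pinched_y = output_variance(x_dist, lambda: 2.0)
print(f"X reduction: {1 - pinched_x / base:.2f}, "
      f"Y reduction: {1 - pinched_y / base:.2f}")
```

In the full methodology the same hypothetical reduction is applied separately to the epistemic and aleatory components of an uncertain number; here both inputs are treated as simple random variables.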

  18. Waste Stream Analyses for Nuclear Fuel Cycles

    Energy Technology Data Exchange (ETDEWEB)

    N. R. Soelberg

    2010-08-01

    A high-level study was performed in Fiscal Year 2009 for the U.S. Department of Energy (DOE) Office of Nuclear Energy (NE) Advanced Fuel Cycle Initiative (AFCI) to provide information for a range of nuclear fuel cycle options (Wigeland 2009). At that time, some fuel cycle options could not be adequately evaluated since they were not well defined and lacked sufficient information. As a result, five families of these fuel cycle options are being studied during Fiscal Year 2010 by the Systems Analysis Campaign for the DOE NE Fuel Cycle Research and Development (FCRD) program. The quality and completeness of data available to date for the fuel cycle options is insufficient to perform quantitative radioactive waste analyses using recommended metrics. This study has been limited thus far to qualitative analyses of waste streams from the candidate fuel cycle options, because quantitative data for wastes from the front end, fuel fabrication, reactor core structure, and used fuel for these options is generally not yet available.

  19. Structural and mutational analyses of cis-acting sequences in the 5'-untranslated region of satellite RNA of bamboo mosaic potexvirus

    International Nuclear Information System (INIS)

    The satellite RNA of Bamboo mosaic virus (satBaMV) contains an open reading frame for a 20-kDa protein that is flanked by a 5'-untranslated region (UTR) of 159 nucleotides (nt) and a 3'-UTR of 129 nt. A secondary structure was predicted for the 5'-UTR of satBaMV RNA, which folds into a large stem-loop (LSL) and a small stem-loop. Enzymatic probing confirmed the existence of the LSL (nt 8-138) in the 5'-UTR. The essential cis-acting sequences in the 5'-UTR required for satBaMV RNA replication were determined by deletion and substitution mutagenesis. Replication efficiencies were analyzed in Nicotiana benthamiana protoplasts and Chenopodium quinoa plants coinoculated with helper BaMV RNA. All deletion mutants abolished the replication of satBaMV RNA, whereas mutations introduced into most of the loop regions and stems showed either no replication or a decreased replication efficiency. Mutations that affected positive-strand satBaMV RNA accumulation also affected the accumulation of negative-strand RNA; however, the accumulation of genomic and subgenomic RNAs of BaMV was not affected. Moreover, covariation analyses of natural satBaMV variants provide substantial evidence that the secondary structure in the 5'-UTR of satBaMV is necessary for efficient replication.

  20. Les conditions de l’analyse qualitative

    Directory of Open Access Journals (Sweden)

    Pierre Paillé

    2011-07-01

    Full Text Available The methods of qualitative data analysis and the world of computing were bound to meet. And indeed, the question is topical, and the software tools numerous and advanced. This phenomenon will not fade, all the less so as secondary data analysis is itself undergoing important developments. But the attraction of analysis software can become such that we no longer quite see on what grounds, and for what reasons, we might do without it. This article attempts to outline a vision and a practice of qualitative analysis that, in its essence, does not lend itself to the use of specialized computer tools. It situates its reflection within the framework of qualitative methodology (the qualitative approach, qualitative research, qualitative analysis), and more particularly at the level of qualitative fieldwork.

  1. SRM Internal Flow Tests and Computational Fluid Dynamic Analysis. Volume 2; CFD RSRM Full-Scale Analyses

    Science.gov (United States)

    2001-01-01

    This document presents the full-scale analyses of the CFD RSRM. The RSRM model was developed with a 20 second burn time. The following are presented as part of the full-scale analyses: (1) RSRM embedded inclusion analysis; (2) RSRM igniter nozzle design analysis; (3) Nozzle Joint 4 erosion anomaly; (4) RSRM full motor port slag accumulation analysis; (5) RSRM motor analysis of two-phase flow in the aft segment/submerged nozzle region; (6) Completion of 3-D Analysis of the hot air nozzle manifold; (7) Bates Motor distributed combustion test case; and (8) Three Dimensional Polysulfide Bump Analysis.

  2. Machine learning algorithms for datasets popularity prediction

    CERN Document Server

    Kancys, Kipras

    2016-01-01

    This report represents a continued study in which ML algorithms were used to predict dataset popularity. Three topics were covered. First, there was a discrepancy between the old and new metadata collection procedures, so the reason for it had to be found. Second, different parameters were analysed and dropped to make the algorithms perform better. Third, it was decided to move the modelling part to Spark.

  3. Improved nonlinear prediction method

    Science.gov (United States)

    Adenan, Nur Hamiza; Md Noorani, Mohd Salmi

    2014-06-01

    The analysis and prediction of time series data have been widely addressed by researchers. Many techniques have been developed for application in various areas, such as weather forecasting, financial markets and hydrological phenomena, involving data that are contaminated by noise. Therefore, various refinements have been introduced to analyze and predict such time series data. Given the importance of the analysis and the accuracy of the prediction result, a study was undertaken to test the effectiveness of the improved nonlinear prediction method on data that contain noise. The improved nonlinear prediction method involves the formation of composite serial data based on the successive differences of the time series. Then, phase space reconstruction is performed on the composite (one-dimensional) data to reconstruct a number of space dimensions. Finally, the local linear approximation method is employed to make a prediction based on the phase space. This improved method was tested on logistic map data series containing 0%, 5%, 10%, 20% and 30% noise. The results show that, using the improved method, the predictions were in close agreement with the observed values. The correlation coefficient was close to one when the improved method was applied to data with up to 10% noise. Thus, an improved method to analyze and predict noisy time series data, without involving any noise reduction step, was introduced.
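The three steps, differencing, phase-space reconstruction, and local approximation, can be sketched as follows; the nearest-neighbour average below is a crude stand-in for the paper's local linear approximation, and the noise level, embedding dimension and neighbour count are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Logistic map series contaminated with observational noise
n = 600
x = np.empty(n)
x[0] = 0.4
for t in range(n - 1):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])
noisy = x + rng.normal(0.0, 0.01, n)

# Step 1: composite series of successive differences
d = np.diff(noisy)

def embed(series, dim):
    """Step 2: phase-space reconstruction; each row is a delay vector."""
    return np.array([series[i:i + dim] for i in range(len(series) - dim)])

dim = 3
vectors = embed(d, dim)   # delay vectors d[i], ..., d[i+dim-1]
targets = d[dim:]         # the difference that follows each vector

# Step 3: local approximation; predict the next (unseen) difference
# from the nearest neighbours of the most recent delay vector.
query = d[-dim:]
dist = np.linalg.norm(vectors - query, axis=1)
pred_diff = targets[np.argsort(dist)[:10]].mean()
pred_next = noisy[-1] + pred_diff
print(pred_next)
```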

  4. Predicting AD conversion

    DEFF Research Database (Denmark)

    Liu, Yawu; Mattila, Jussi; Ruiz, Miguel Ángel Muñoz;

    2013-01-01

    To compare the accuracies of predicting AD conversion by using a decision support system (PredictAD tool) and current research criteria of prodromal AD as identified by combinations of episodic memory impairment of hippocampal type and visual assessment of medial temporal lobe atrophy (MTA) on MRI...

  5. Prediction of Antibody Epitopes

    DEFF Research Database (Denmark)

    Nielsen, Morten; Marcatili, Paolo

    2015-01-01

    self-proteins. Given the sequence or the structure of a protein of interest, several methods exploit such features to predict the residues that are more likely to be recognized by an immunoglobulin.Here, we present two methods (BepiPred and DiscoTope) to predict linear and discontinuous antibody...

  6. Nuclear Analyses For ITER NB System

    International Nuclear Information System (INIS)

    Full text: Detailed nuclear analyses for the latest ITER NB system are required to ensure that the NB design conforms to the nuclear regulations and licensing. A variety of nuclear analyses was conducted for the NB system, both inside and outside the tokamak building, by using the Monte Carlo code MCNP5.14, the activation code ACT-4 and the Fusion Evaluated Nuclear Data Library FENDL-2.1. A special “Direct 1-step Monte Carlo” method is adopted for the shutdown dose rate calculation. The NB system and the tokamak building are very complicated, and it is practically impossible to make geometry input data manually. We used the automatic converter code GEOMIT to convert CAD data to MCNP geometry input data. GEOMIT was improved for these analyses, and the conversion performance was drastically enhanced. Void cells in the MCNP input data were generated by subtracting solid cell data from simple rectangular void cells. The CAD data were successfully converted to MCNP geometry input data, and void data were also adequately produced with GEOMIT. The effective dose rates at external zones (non-controlled areas) should be less than 80 μSv/month according to French regulations. Shielding structures are under analysis to reduce the radiation streaming through the openings. We are confirming that this criterion is satisfied for the NB system. The effective dose rate data in the NB cell after shutdown are necessary to check the dose rate during possible rad-works for maintenance. Dose rates for workers must be maintained as low as reasonably achievable, and locations where hands-on maintenance is performed should be below a target of 100 μSv/h at 12 days after shutdown. We are specifying the adequate zoning and the areas where hands-on maintenance can be allowed, based on the analysis results. The cask design for transporting activated NB components is an important issue, and we are calculating the effective dose rates. The target of the effective dose rate from the activated NB components is less

  7. Analysing lawyers’ attitude towards knowledge sharing

    Directory of Open Access Journals (Sweden)

    Wole M. Olatokun

    2012-02-01

    Full Text Available Objectives: The study examined and identified the factors that affect lawyers' attitudes to knowledge sharing, and their knowledge sharing behaviour. Specifically, it investigated the relationship between the salient beliefs affecting the knowledge sharing attitude of lawyers, and applied a modified version of the Theory of Reasoned Action (TRA) in the knowledge sharing context to predict how these factors affect their knowledge sharing behaviour. Method: A field survey of 273 lawyers was carried out, using a questionnaire for data collection. Collected data on all variables were structured into grouped frequency distributions. Principal Component Factor Analysis was applied to reduce the constructs, and Simple Regression was applied to test the hypotheses at the 0.05 level of significance. Results: Results showed that expected associations and contributions were the major determinants of lawyers' attitudes towards knowledge sharing. Expected reward was not significantly related to lawyers' attitudes towards knowledge sharing. A positive attitude towards knowledge sharing was found to lead to a positive intention to share knowledge, although a positive intention to share knowledge did not significantly predict positive knowledge sharing behaviour. The level of Information Technology (IT) usage was also found to significantly affect the knowledge sharing behaviour of lawyers. Conclusion: It was recommended that law firms in the study area should deploy more IT infrastructure and services that encourage effective knowledge sharing amongst lawyers.

  8. Castor-1C spent fuel storage cask decay heat, heat transfer, and shielding analyses

    International Nuclear Information System (INIS)

    This report documents the decay heat, heat transfer, and shielding analyses of the Gesellschaft fuer Nuklear Services (GNS) CASTOR-1C cask used in a spent fuel storage demonstration performed at Preussen Elektra's Wurgassen nuclear power plant. The demonstration was performed between March 1982 and January 1984, and resulted in cask and fuel temperature data and cask exterior surface gamma-ray and neutron radiation dose rate measurements. The purpose of the analyses reported here was to evaluate decay heat, heat transfer, and shielding computer codes. The analyses consisted of (1) performing pre-look predictions (predictions performed before the analysts were provided the test data), (2) comparing ORIGEN2 (decay heat), COBRA-SFS and HYDRA (heat transfer), and QAD and DOT (shielding) results to data, and (3) performing post-test analyses if appropriate. Even though two heat transfer codes were used to predict CASTOR-1C cask test data, no attempt was made to compare the two codes. The codes are being evaluated with other test data (single-assembly data and other cask data), and to compare the codes based on one set of data may be premature and lead to erroneous conclusions

  9. Evaluating prediction uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    McKay, M.D. [Los Alamos National Lab., NM (United States)

    1995-03-01

    The probability distribution of a model prediction is presented as a proper basis for evaluating the uncertainty in a model prediction that arises from uncertainty in input values. Determination of important model inputs and subsets of inputs is made through comparison of the prediction distribution with conditional prediction probability distributions. Replicated Latin hypercube sampling and variance ratios are used in estimation of the distributions and in construction of importance indicators. The assumption of a linear relation between model output and inputs is not necessary for the indicators to be effective. A sequential methodology which includes an independent validation step is applied in two analysis applications to select subsets of input variables which are the dominant causes of uncertainty in the model predictions. Comparison with results from methods which assume linearity shows how those methods may fail. Finally, suggestions for treating structural uncertainty for submodels are presented.
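A single-replicate sketch of the ingredients named above, Latin hypercube sampling and a variance-ratio importance indicator, using an invented two-input model (the real methodology adds replication and an independent validation step):

```python
import numpy as np

rng = np.random.default_rng(42)

def latin_hypercube(n, k):
    """n-point Latin hypercube sample on [0, 1]^k: one point per stratum
    in each coordinate, with the strata randomly paired across inputs."""
    cut = (np.arange(n) + rng.random((k, n))) / n   # jittered strata
    return np.array([rng.permutation(row) for row in cut]).T

def variance_ratio(x, y, bins=10):
    """Correlation ratio Var(E[Y|X]) / Var(Y), estimated by binning X.
    Values near 1 flag X as a dominant cause of output uncertainty."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1])
    idx = np.digitize(x, edges)
    means = np.array([y[idx == b].mean() for b in range(bins)])
    sizes = np.array([(idx == b).sum() for b in range(bins)])
    var_cond_mean = np.sum(sizes * (means - y.mean()) ** 2) / len(y)
    return var_cond_mean / y.var()

# Toy model: Y depends strongly on X1, weakly on X2.
sample = latin_hypercube(2000, 2)
x1, x2 = sample[:, 0], sample[:, 1]
y = 5.0 * x1 + 0.5 * x2 + rng.normal(0.0, 0.1, len(x1))

print(variance_ratio(x1, y), variance_ratio(x2, y))
```

Note that no linearity assumption is used: the indicator only compares conditional means of the output against the total variance.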

  10. Trend Analyses of Nitrate in Danish Groundwater

    DEFF Research Database (Denmark)

    Hansen, B.; Thorling, L.; Dalgaard, Tommy;

    2012-01-01

    This presentation assesses the long-term development in the oxic groundwater nitrate concentration and nitrogen (N) loss due to intensive farming in Denmark. Firstly, up to 20-year time-series from the national groundwater monitoring network enable a statistically systematic analysis...... two dataset. The development in the nitrate concentration of oxic groundwater clearly mirrors the development in the national agricultural N surplus, and a corresponding trend reversal is found in groundwater. Regulation and technical improvements in the intensive farming in Denmark have succeeded...... in decreasing the N surplus by 40% since the mid 1980s while at the same time maintaining crop yields and increasing the animal production of especially pigs. Trend analyses prove that the youngest (0-15 years old) oxic groundwater shows more pronounced significant downward nitrate trends (44%) than the oldest...
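Although the record does not name its exact statistical procedure, downward trends like those described are commonly tested with the nonparametric Mann-Kendall test; a minimal sketch with hypothetical annual nitrate concentrations:

```python
from itertools import combinations
import math

def mann_kendall(series):
    """Mann-Kendall S statistic and normal-approximation Z for a
    monotonic trend (no tie correction; a minimal sketch)."""
    s = sum((b > a) - (b < a) for a, b in combinations(series, 2))
    n = len(series)
    var_s = n * (n - 1) * (2 * n + 5) / 18
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

# Hypothetical annual nitrate concentrations (mg/l) with a downward trend
nitrate = [52, 50, 51, 48, 47, 45, 46, 43, 41, 40]
s, z = mann_kendall(nitrate)
print(s, z)
```

A strongly negative Z (here well below -1.96) would mark a significant downward nitrate trend of the kind reported for the youngest oxic groundwater.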

  11. Thermal hydraulic reactor safety analyses and experiments

    International Nuclear Information System (INIS)

    The report introduces the results of the thermal hydraulic reactor safety research performed in the Nuclear Engineering Laboratory of the Technical Research Centre of Finland (VTT) during the years 1972-1987. Practical applications, i.e. analyses for the safety authorities and power companies, are also presented. The emphasis is on describing the state-of-the-art know-how. The report describes VTT's most important computer codes, both those of foreign origin and those developed at VTT, and their assessment work; VTT's own experimental research; and the international experimental projects and other forms of cooperation VTT has participated in. Appendix 8 contains a comprehensive list of the most important publications and technical reports produced; they present the content and results of the research in detail. (orig.)

  12. Reliability and safety analyses under fuzziness

    International Nuclear Information System (INIS)

    Fuzzy theory, for example possibility theory, is compatible with probability theory. What has been shown so far is that probability theory need not be replaced by fuzzy theory, but rather that the former works much better in applications if it is combined with the latter. In fact, there are two essential uncertainties in the field of reliability and safety analyses: one is probabilistic uncertainty, which is more relevant for mechanical systems and the natural environment; the other is fuzziness (imprecision) caused by the presence of human beings in systems. Classical probability theory alone is therefore not sufficient to deal with uncertainties in humanistic systems. In this context, this collection of works sets a milestone in the debate between probability theory and fuzzy theory. The volume covers fault analysis, lifetime analysis, reliability, quality control, safety analysis and risk analysis. (orig./DG). 106 figs

  13. Anthocyanin analyses of Vaccinium fruit dietary supplements.

    Science.gov (United States)

    Lee, Jungmin

    2016-09-01

    Vaccinium fruit ingredients within dietary supplements were identified by comparison with anthocyanin analyses of known Vaccinium profiles (a demonstration of anthocyanin fingerprinting). Available Vaccinium supplements were purchased and analyzed; their anthocyanin profiles (based on high-performance liquid chromatography [HPLC] separation) indicated whether the products' listed fruit origins were authentic. Over 30% of the Vaccinium fruit (cranberry, lingonberry, bilberry, and blueberry; 14 of 45) products available as dietary supplements did not contain the fruit listed as ingredients. Six supplements contained no anthocyanins. Five others had contents differing from the labeled fruit (e.g., bilberry capsules containing Andean blueberry fruit). Of the samples that did contain the specified fruit (n = 27), anthocyanin content ranged from 0.04 to 14.37 mg per capsule, tablet, or teaspoon (5 g). Approaches to utilizing anthocyanins in the assessment of sample authenticity are presented, along with a discussion of the challenges of using anthocyanin profiles in quality control. PMID:27625778

  14. Feasibility Analyses of Integrated Broiler Production

    Directory of Open Access Journals (Sweden)

    L. Komalasari

    2010-12-01

    Full Text Available The major obstacles in the development of broiler raising are the high price of feed and the fluctuating price of day-old chicks (DOCs). The cheap price of imported leg quarters reduces the competitiveness of local broilers. Therefore, an effort to increase production efficiency is needed through integration between broiler raising, corn farmers and feed producers (integrated farming). The purpose of this study is to analyze the feasibility of integrating broiler raising with corn cultivation and feed production. In addition, a simulation was conducted to analyze the effects of changes in DOC price, broiler price and production capacity. The analyses showed that integrated farming, and a mere combination of broiler raising and a feed factory at a 10,000-bird capacity, is not financially feasible. Increasing production to 25,000 broiler chickens makes the integrated farming financially feasible. Unintegrated broiler raising is relatively sensitive to broiler price decreases and DOC price increases compared to integrated farming.

  15. Genetic Analyses of Meiotic Recombination in Arabidopsis

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Meiosis is essential for sexual reproduction, and recombination is a critical step required for normal meiosis. Understanding the underlying molecular mechanisms that regulate recombination is important for medical, agricultural and ecological reasons. Readily available molecular and cytological tools make Arabidopsis an excellent system for studying meiosis. Here we review recent developments in molecular genetic analyses of meiotic recombination. These include studies on plant homologs of yeast and animal genes, as well as novel genes that were first identified in plants. The characterization of these genes has demonstrated essential functions from the initiation of recombination by double-strand breaks to the repair of such breaks, and from the formation of double Holliday junctions to the possible resolution of these junctions, both of which are critical for crossover formation. These recent advances have ushered in a new era in plant meiosis, in which the combination of genetics, genomics, and molecular cytology can uncover important gene functions.

  16. Attitude stability analyses for small artificial satellites

    International Nuclear Information System (INIS)

    The objective of this paper is to analyze the stability of the rotational motion of a symmetrical spacecraft in a circular orbit. The equilibrium points and regions of stability are established when components of the gravity gradient torque acting on the spacecraft are included in the equations of rotational motion, which are described by the Andoyer variables. The nonlinear stability of the equilibrium points of the rotational motion is analysed here by the Kovalev-Savchenko theorem, with which it is possible to verify whether the equilibrium points remain stable under the influence of the higher-order terms of the normal Hamiltonian. In this paper, numerical simulations are made for a small hypothetical artificial satellite. Several stable equilibrium points were determined, and regions around these points have been established by variations in the orbital inclination and in the spacecraft's principal moments of inertia. The present analysis can contribute directly to the maintenance of the spacecraft's attitude

  17. Spatial Analyses of Harappan Urban Settlements

    Directory of Open Access Journals (Sweden)

    Hirofumi Teramura

    2006-12-01

    Full Text Available The Harappan Civilization occupies a unique place among the early civilizations of the world, with its well-planned urban settlements, advanced handicrafts and technology, and religious and trade activities. Using a Geographical Information System (GIS), this study presents spatial analyses that locate urban settlements on a digital elevation model (DEM) according to the three phases of early, mature and late. Understanding the relationship between the spatial distribution of Harappan sites and changes in factors such as topographic features, river passages or sea level will lead to an understanding of the dynamism of this civilization. It will also afford a glimpse of the factors behind the formation, development, and decline of the Harappan Civilization.

  18. Analysing weak orbital signals in Gaia data

    CERN Document Server

    Lucy, L B

    2014-01-01

    Anomalous orbits are found when minimum-chi^{2} estimation is applied to synthetic Gaia data for weak orbital signals - i.e., orbits whose astrometric signatures are comparable to the single-scan measurement error (Pourbaix 2002). These orbits are nearly parabolic, edge-on, and their major axes align with the line-of-sight to the observer. Such orbits violate the Copernican principle (CPr) and as such could be rejected. However, the preferred alternative is to develop a statistical technique that incorporates the CPr as a fundamental postulate. This can be achieved in the context of Bayesian estimation by defining a Copernican prior. With this development, Pourbaix's anomalous orbits no longer arise. Instead, orbits with a somewhat higher chi^{2} but which do not violate the CPr are selected. Other areas of astronomy where the investigator must analyse data from 'imperfect experiments' might similarly benefit from appropriately-defined Copernican priors.

  19. Modelling and Analysing Socio-Technical Systems

    DEFF Research Database (Denmark)

    Aslanyan, Zaruhi; Ivanova, Marieta Georgieva; Nielson, Flemming;

    2015-01-01

    Modern organisations are complex, socio-technical systems consisting of a mixture of physical infrastructure, human actors, policies and processes. An in-creasing number of attacks on these organisations exploits vulnerabilities on all different levels, for example combining a malware attack...... with social engineering. Due to this combination of attack steps on technical and social levels, risk assessment in socio-technical systems is complex. Therefore, established risk assessment methods often abstract away the internal structure of an organisation and ignore human factors when modelling...... and assessing attacks. In our work we model all relevant levels of socio-technical systems, and propose evaluation techniques for analysing the security properties of the model. Our approach simplifies the identification of possible attacks and provides qualified assessment and ranking of attacks based...

  20. Externalizing Behaviour for Analysing System Models

    DEFF Research Database (Denmark)

    Ivanova, Marieta Georgieva; Probst, Christian W.; Hansen, René Rydhof;

    2013-01-01

    System models have recently been introduced to model organisations and evaluate their vulnerability to threats and especially insider threats. Especially for the latter these models are very suitable, since insiders can be assumed to have more knowledge about the attacked organisation than outside...... attackers. Therefore, many attacks are considerably easier to be performed for insiders than for outsiders. However, current models do not support explicit specification of different behaviours. Instead, behaviour is deeply embedded in the analyses supported by the models, meaning that it is a complex......, if not impossible task to change behaviours. Especially when considering social engineering or the human factor in general, the ability to use different kinds of behaviours is essential. In this work we present an approach to make the behaviour a separate component in system models, and explore how to integrate...

  1. Deterministic analyses of severe accident issues

    International Nuclear Information System (INIS)

    Severe accidents in light water reactors involve complex physical phenomena. In the past there has been a heavy reliance on simple assumptions regarding physical phenomena, alongside probability methods, to evaluate the risks associated with severe accidents. Recently GE has developed realistic methodologies that permit deterministic evaluations of severe accident progression and of some of the associated phenomena in the case of Boiling Water Reactors (BWRs). These deterministic analyses indicate that with appropriate system modifications and operator actions, core damage can be prevented in most cases. Furthermore, in cases where core melt is postulated, containment failure can either be prevented or significantly delayed to allow sufficient time for recovery actions to mitigate severe accidents

  2. Analysing Medieval Urban Space; a methodology

    Directory of Open Access Journals (Sweden)

    Marlous L. Craane MA

    2007-08-01

    Full Text Available This article has been written in reaction to recent developments in medieval history and archaeology, to study not only the buildings in a town but also the spaces that hold them together. It discusses a more objective and interdisciplinary approach for analysing urban morphology and use of space. It proposes a 'new' methodology by combining town plan analysis and space syntax. This methodology was trialled on the city of Utrecht in the Netherlands. By comparing the results of this 'new' methodology with the results of previous, more conventional, research, this article shows that space syntax can be applied successfully to medieval urban contexts. It does this by demonstrating a strong correlation between medieval economic spaces and the most integrated spaces, just as is found in the study of modern urban environments. It thus provides a strong basis for the use of this technique in future research of medieval urban environments.

  3. Cointegration Approach to Analysing Inflation in Croatia

    Directory of Open Access Journals (Sweden)

    Lena Malešević-Perović

    2009-06-01

    Full Text Available The aim of this paper is to analyse the determinants of inflation in Croatia in the period 1994:6-2006:6. We use a cointegration approach and find that increases in wages positively influence inflation in the long-run. Furthermore, in the period from June 1994 onward, the depreciation of the currency also contributed to inflation. Money does not explain Croatian inflation. This irrelevance of the money supply is consistent with its endogeneity to exchange rate targeting, whereby the money supply is determined by developments in the foreign exchange market. The value of inflation in the previous period is also found to be significant, thus indicating some inflation inertia.

  4. Analysing Terrorism from a Systems Thinking Perspective

    Directory of Open Access Journals (Sweden)

    Lukas Schoenenberger

    2014-02-01

    Full Text Available Given the complexity of terrorism, solutions based on single factors are destined to fail. Systems thinking offers various tools for helping researchers and policy makers comprehend terrorism in its entirety. We have developed a semi-quantitative systems thinking approach for characterising relationships between variables critical to terrorism and their impact on the system as a whole. For a better understanding of the mechanisms underlying terrorism, we present a 16-variable model characterising the critical components of terrorism and perform a series of highly focused analyses. We show how to determine which variables are best suited for government intervention, describing in detail their effects on the key variable—the political influence of a terrorist network. We also offer insights into how to elicit variables that destabilise and ultimately break down these networks. Because we clarify our novel approach with fictional data, the primary importance of this paper lies in the new framework for reasoning that it provides.
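    A cross-impact style of analysis like the one described in this abstract can be illustrated in a few lines. The 4-variable matrix, scores and variable names below are invented stand-ins for illustration only, not the authors' 16-variable model.

    ```python
    # Toy sketch of a semi-quantitative cross-impact analysis: score each
    # variable's direct influence on the others, then rank variables by how
    # strongly they drive the system versus how strongly they are driven.
    # All variables and scores here are hypothetical.

    VARS = ["funding", "recruitment", "attacks", "political_influence"]
    # IMPACT[i][j] = strength of variable i's direct effect on variable j (0-3)
    IMPACT = [
        [0, 3, 2, 1],   # funding
        [1, 0, 3, 1],   # recruitment
        [0, 1, 0, 3],   # attacks
        [2, 2, 1, 0],   # political_influence
    ]

    # "Activity" = total outgoing influence; "passivity" = total incoming.
    activity = {v: sum(row) for v, row in zip(VARS, IMPACT)}
    passivity = {v: sum(col) for v, col in zip(VARS, zip(*IMPACT))}

    # High-activity, low-passivity variables are candidate levers for
    # government intervention: they shape the system more than it shapes them.
    levers = sorted(VARS, key=lambda v: passivity[v] - activity[v])
    ```

    With this invented matrix, "funding" comes out as the strongest lever, which mirrors the abstract's point that single-factor reasoning misses such system-level rankings.
    
    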

  5. First international intercomparison of image analysers

    CERN Document Server

    Pálfalvi, J; Eoerdoegh, I

    1999-01-01

    Image analyser systems used for evaluating solid state nuclear track detectors (SSNTDs) were compared in order to establish the minimum hardware and software requirements and methodology necessary in different fields of radiation dosimetry. For this purpose, CR-39 detectors (TASL, Bristol, U.K.) were irradiated with different (n,alpha) and (n,p) converters in a reference Pu-Be neutron field, in an underground laboratory with high radon concentration, and by different alpha sources at the Atomic Energy Research Institute (AERI) in Budapest, Hungary. Six sets of etched and pre-evaluated detectors, and a seventh without etching, were distributed among 14 laboratories from 11 countries. The participants measured the different track parameters and statistically evaluated the results to determine the performance of their systems. The statistical analysis of results showed high deviations from the mean values in many cases. As the conclusion of the intercomparison, recommendations were given to fulfill those requirements ...

  6. Accounting for demand failures in Markovian analyses

    Energy Technology Data Exchange (ETDEWEB)

    Wood, A.P.

    1980-01-03

    Reliability has become a fundamental concern in the development of nuclear power plants. The use of Markov analysis in reliability evaluations is progressing quickly, since it offers methods for dealing with repair considerations, common cause failures, time-dependent failure rates, cyclic failures, and availability calculations. A Markov process is a stochastic, time-dependent process, allowing time-dependent failure processes to be modelled easily. In redundant safety systems, however, there are often passive system components whose probability of failing to start on demand is higher than their probability of failing during the time for which the safety system must function. It may be impossible to model this failure process as a system with standby failure rates, because the actual failure may be caused by the demand itself. This paper deals with a method for extending Markov analyses to include demand failures.
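    The demand-failure idea in this abstract can be sketched in a few lines. This is not the paper's Markov formulation; the exponential running-failure form and all parameter values below are illustrative assumptions.

    ```python
    # Hypothetical sketch: a standby component can fail on the demand itself
    # (a per-demand probability, not a rate) or fail while running during the
    # mission. Combining the two shows why demand failures cannot be folded
    # into a standby failure rate alone.

    import math

    def mission_unreliability(lambda_run, t, p_demand):
        """Probability the component fails its mission: it fails on demand
        with probability p_demand, or starts successfully and then fails at
        rate lambda_run (per hour) over mission time t (exponential model)."""
        p_run_fail = 1.0 - math.exp(-lambda_run * t)
        return p_demand + (1.0 - p_demand) * p_run_fail

    # Example: 1e-4 /h running failure rate, 24 h mission, 1e-3 demand failure.
    u = mission_unreliability(1e-4, 24.0, 1e-3)
    # The demand term sets a floor: even a perfect running record cannot
    # push mission unreliability below p_demand.
    ```

    The design point is that `p_demand` enters as a probability per demand, so it survives even as the mission time goes to zero, which is exactly the behaviour a pure standby-failure-rate model cannot reproduce.
    
    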

  7. Communication analyses of plant operator crews

    International Nuclear Information System (INIS)

    Elucidation of crew communication aspects is required to improve the man-man interface which supports operators' diagnoses and decisions. Experiments to clarify operator performance under abnormal conditions were evaluated by protocol analyses, interviews, etc., using a training simulator. Based on experimental observations, we formed the working hypothesis that operator performance can be evaluated by analysis of crew communications. The following four approaches were tried to evaluate operator performance. (1) Crew performance was quantitatively evaluated by the number of tasks undertaken by an operator crew. (2) The group thinking process was clarified by cognition-communication flow. (3) The group response process was clarified by movement flow. (4) Quantitative indexes for evaluating crew performance were considered to be represented by the amount of information effectively exchanged among operators. (author)

  8. Digital analyses of cartometric Fruska Gora guidelines

    Directory of Open Access Journals (Sweden)

    Živković Dragica

    2013-01-01

    Full Text Available Modern geomorphological research has been using quantitative statistical and cartographic methods to analyse topographic relief features and their mutual connections on the basis of good-quality numeric parameters. Important morphological characteristics include slope angle, hypsometry and topographic exposition. Even small and little-known relief slopes can deeply affect land configuration, hypsometry and topographic exposition. Expositions modify light and heat through interconnected phenomena: soil and air temperature, soil disintegration, the length of the vegetation period, the complexity of photosynthesis, the yield of agricultural crops, the height of the snow limit, etc. [Projekat Ministarstva nauke Republike Srbije, br. 176008 i br. III44006

  9. Risques naturels en montagne et analyse spatiale (Natural hazards in mountain areas and spatial analysis)

    Directory of Open Access Journals (Sweden)

    Yannick Manche

    1999-06-01

    Full Text Available The concept of risk rests on two notions: the hazard, which represents the physical phenomenon through its magnitude and return period; and the vulnerability, which represents all the assets and people that may be affected by a natural phenomenon. Risk is then defined as the intersection of these two notions. This theoretical view makes it possible to model hazards and vulnerability independently. This work focuses mainly on taking vulnerability into account in the management of natural hazards. Its assessment necessarily involves some form of spatial analysis that takes into account human occupation and the different scales of land use. But spatial assessment, whether of assets and people or of indirect effects, runs into numerous problems. The extent of land occupation must be estimated. Moreover, processing the data implies constant changes of scale in order to move from point elements to surfaces, which geographic information systems do not handle perfectly. Risk management imposes strong planning constraints; taking vulnerability into account allows a better understanding and management of the spatial constraints implied by natural hazards. Keywords: hazard, spatial analysis, natural hazards, GIS, vulnerability

  10. The radiation analyses of ITER lower ports

    International Nuclear Information System (INIS)

    The ITER Vacuum Vessel has upper, equatorial, and lower ports used for equipment installation, diagnostics, heating and current drive systems, cryo-vacuum pumping, and access inside the vessel for maintenance. At the level of the divertor, the nine lower ports for remote handling, cryo-vacuum pumping and diagnostics are inclined downwards and toroidally located every 40°. The cryopump port additionally has a branch to accommodate a second cryopump. The ports, as openings in the Vacuum Vessel, permit radiation streaming out of the vessel, which affects the heating of components in the outer regions of the machine, inside and outside the ports. Safety concerns are also raised with respect to the dose after shutdown at the cryostat behind the ports: in such zones the radiation dose level must be kept below the regulatory limit to allow personnel access for maintenance purposes. Neutronic analyses have been required to qualify the ITER project in relation to the lower ports. A 3-D model was used to take into account full details of the ports and the lower machine surroundings. MCNP version 5 1.40 has been used with the FENDL 2.1 nuclear data library. The ITER 40° model distributed by the ITER Organization was developed in the lower part to include the relevant details. The results of a first analysis, focused on the cryopump system only, were recently published. In this paper more complete data on the cryopump port and analyses for the remote handling port and the diagnostic rack are presented; the results of both analyses give a complete map of the radiation loads in the outer divertor ports. Nuclear heating, dpa, tritium production, and dose rates after shutdown are provided, and the implications for the design are discussed.

  11. The radiation analyses of ITER lower ports

    Energy Technology Data Exchange (ETDEWEB)

    Petrizzi, L., E-mail: petrizzi@frascati.enea.it [Associazione EURATOM-ENEA sulla Fusione, Via Enrico Fermi 45, 00044 Frascati, Rome (Italy); Brolatti, G. [Associazione EURATOM-ENEA sulla Fusione, Via Enrico Fermi 45, 00044 Frascati, Rome (Italy); Martin, A.; Loughlin, M. [ITER Organization, Cadarache, 13108 St Paul-lez-Durance (France); Moro, F.; Villari, R. [Associazione EURATOM-ENEA sulla Fusione, Via Enrico Fermi 45, 00044 Frascati, Rome (Italy)

    2010-12-15

    The ITER Vacuum Vessel has upper, equatorial, and lower ports used for equipment installation, diagnostics, heating and current drive systems, cryo-vacuum pumping, and access inside the vessel for maintenance. At the level of the divertor, the nine lower ports for remote handling, cryo-vacuum pumping and diagnostics are inclined downwards and toroidally located every 40°. The cryopump port additionally has a branch to accommodate a second cryopump. The ports, as openings in the Vacuum Vessel, permit radiation streaming out of the vessel, which affects the heating of components in the outer regions of the machine, inside and outside the ports. Safety concerns are also raised with respect to the dose after shutdown at the cryostat behind the ports: in such zones the radiation dose level must be kept below the regulatory limit to allow personnel access for maintenance purposes. Neutronic analyses have been required to qualify the ITER project in relation to the lower ports. A 3-D model was used to take into account full details of the ports and the lower machine surroundings. MCNP version 5 1.40 has been used with the FENDL 2.1 nuclear data library. The ITER 40° model distributed by the ITER Organization was developed in the lower part to include the relevant details. The results of a first analysis, focused on the cryopump system only, were recently published. In this paper more complete data on the cryopump port and analyses for the remote handling port and the diagnostic rack are presented; the results of both analyses give a complete map of the radiation loads in the outer divertor ports. Nuclear heating, dpa, tritium production, and dose rates after shutdown are provided, and the implications for the design are discussed.

  12. Immunoregulatory effect of bifidobacteria strains in porcine intestinal epithelial cells through modulation of ubiquitin-editing enzyme A20 expression.

    Directory of Open Access Journals (Sweden)

    Yohsuke Tomosada

    Full Text Available BACKGROUND: We previously showed that evaluation of the anti-inflammatory activities of lactic acid bacteria in porcine intestinal epithelial (PIE) cells is useful for selecting potentially immunobiotic strains. OBJECTIVE: The aims of the present study were: (i) to select potentially immunomodulatory bifidobacteria that beneficially modulate the Toll-like receptor (TLR)-4-triggered inflammatory response in PIE cells and (ii) to gain insight into the molecular mechanisms involved in the anti-inflammatory effect of immunobiotics by evaluating the role of TLR2 and TLR negative regulators in the modulation of proinflammatory cytokine production and activation of mitogen-activated protein kinase (MAPK) and nuclear factor-κB (NF-κB) pathways in PIE cells. RESULTS: Bifidobacterium longum BB536 and B. breve M-16V strains significantly downregulated levels of interleukin (IL)-8, monocyte chemotactic protein (MCP)-1 and IL-6 in PIE cells challenged with heat-killed enterotoxigenic Escherichia coli. Moreover, the BB536 and M-16V strains attenuated the proinflammatory response by modulating the NF-κB and MAPK pathways. In addition, our findings provide evidence for a key role for the ubiquitin-editing enzyme A20 in the anti-inflammatory effect of immunobiotic bifidobacteria in PIE cells. CONCLUSIONS: We show new data regarding the mechanism involved in the anti-inflammatory effect of immunobiotics. Several strains with immunoregulatory capabilities used a common mechanism to induce tolerance in PIE cells. Immunoregulatory strains interacted with TLR2, upregulated the expression of A20 in PIE cells, and beneficially modulated the subsequent TLR4 activation by reducing the activation of the MAPK and NF-κB pathways and the production of proinflammatory cytokines. We also show that the combination of TLR2 activation and A20 induction can be used as a biomarker to screen and select potential immunoregulatory bifidobacteria strains.

  13. Structural prediction in aphasia

    Directory of Open Access Journals (Sweden)

    Tessa Warren

    2015-05-01

    Full Text Available There is considerable evidence that young healthy comprehenders predict the structure of upcoming material, and that their processing is facilitated when they encounter material matching those predictions (e.g., Staub & Clifton, 2006; Yoshida, Dickey & Sturt, 2013). However, less is known about structural prediction in aphasia. There is evidence that lexical prediction may be spared in aphasia (Dickey et al., 2014; Love & Webb, 1977; cf. Mack et al., 2013). However, predictive mechanisms supporting facilitated lexical access may not necessarily support structural facilitation. Given that many people with aphasia (PWA) exhibit syntactic deficits (e.g. Goodglass, 1993), PWA with such impairments may not engage in structural prediction. However, recent evidence suggests that some PWA may indeed predict upcoming structure (Hanne, Burchert, De Bleser, & Vasishth, 2015). Hanne et al. tracked the eyes of PWA (n=8) with sentence-comprehension deficits while they listened to reversible subject-verb-object (SVO) and object-verb-subject (OVS) sentences in German, in a sentence-picture matching task. Hanne et al. manipulated case and number marking to disambiguate the sentences' structure. Gazes to an OVS or SVO picture during the unfolding of a sentence were assumed to indicate prediction of the structure congruent with that picture. According to this measure, the PWA's structural prediction was impaired compared to controls, but they did successfully predict upcoming structure when morphosyntactic cues were strong and unambiguous. Hanne et al.'s visual-world evidence is suggestive, but their forced-choice sentence-picture matching task places tight constraints on possible structural predictions. Clearer evidence of structural prediction would come from paradigms where the content of upcoming material is not as constrained. The current study used a self-paced reading paradigm to examine structural prediction among PWA in less constrained contexts. PWA (n=17 who

  14. A 20 year review of punishment and alternative methods to treat problem behaviors in developmentally delayed persons.

    Science.gov (United States)

    Matson, J L; Taras, M E

    1989-01-01

    Relevant journals were reviewed (n = 23) for a 20 year period (1967 to 1987) to assess the status of treatments for severe behavior problems of developmentally delayed persons. A hand search of journals was made; 382 studies were identified. Procedures were analyzed by problem behaviors treated, side effects reported, whether the procedure involved painful stimuli, nonpainful stimuli, food satiation, positive procedures, extinction or combinations of methods. The number of studies reported yearly was also plotted. The implication of these data for federal and state policy makers and for treatment programs dealing with difficult to treat clients is discussed. PMID:2648505

  15. RAMA Surveillance Capsule and Component Activation Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Watkins, Kenneth E.; Jones, Eric N. [TransWare Enterprises Inc., 1565 Mediterranean Dr., Sycamore, IL 60178 (United States); Carter, Robert G. [Electric Power Research Institute, 1300 West W. T. Harris Blvd., Charlotte, NC 28262 (United States)

    2011-07-01

    This paper presents the calculated-to-measured ratios associated with the application of the RAMA Fluence Methodology software to light water reactor surveillance capsule and reactor component activation evaluations. Comparisons to measurements are performed for pressurized water reactor and boiling water reactor surveillance capsule activity specimens from seventeen operating light water reactors. Comparisons to measurements are also performed for samples removed from the core shroud, top guide, and jet pump brace pads from two reactors. In conclusion: The flexible geometry modeling capabilities provided by RAMA, combined with the detailed representation of operating reactor history and anisotropic scattering detail, produces accurate predictions of the fast neutron fluence and neutron activation for BWR and PWR surveillance capsule geometries. This allows best estimate RPV fluence to be determined without the need for multiplicative bias corrections. The three-dimensional modeling capability in RAMA provides an accurate estimate of the fast neutron fluence for regions far removed from the core mid-plane elevation. The comparisons to activation measurements for various core components indicate that the RAMA predictions are reasonable, and notably conservative (i.e., C/M ratios are consistently greater than unity). It should be noted that in the current evaluations, the top and bottom fuel regions are represented by six inch height nodes. As a result, the leakage-induced decrease in power near the upper and lower edges of the core are not well represented in the current models. More precise predictions of fluence for components that lie above and below the core boundaries could be obtained if the upper and lower fuel nodes were subdivided into multiple axial regions with assigned powers that reflect the neutron leakage at the top and bottom of the core. This use of additional axial sub-meshing at the top and bottom of the core is analogous to the use of pin

  16. Analysing CMS transfers using Machine Learning techniques

    CERN Document Server

    Diotalevi, Tommaso

    2016-01-01

    LHC experiments transfer more than 10 PB/week between all grid sites using the FTS transfer service. In particular, CMS manages almost 5 PB/week of FTS transfers with PhEDEx (Physics Experiment Data Export). FTS sends metrics about each transfer (e.g. transfer rate, duration, size) to a central HDFS storage at CERN. The work done during these three months as a Summer Student involved the use of ML techniques, within a CMS framework called DCAFPilot, to process this new data and generate predictions of transfer latencies on all links between Grid sites. This analysis will provide, as a future service, the information necessary to proactively identify and possibly fix latency issues in transfers over the WLCG.
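    The kind of latency prediction this abstract describes can be mocked up with a one-feature least-squares fit. This stands in for the DCAFPilot pipeline, which uses richer features and models; the link data below are invented.

    ```python
    # Minimal sketch: learn a duration predictor for one link from
    # per-transfer metrics (here just transfer size), then extrapolate.
    # The sizes/durations are hypothetical, not real FTS metrics.

    def fit_line(xs, ys):
        """Ordinary least squares for y ~ a + b*x."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
        return my - b * mx, b

    # Transfer size (GB) vs observed duration (s) on one hypothetical link.
    sizes = [1.0, 2.0, 4.0, 8.0]
    durations = [12.0, 21.0, 41.0, 80.0]

    a, b = fit_line(sizes, durations)
    predicted = a + b * 16.0   # expected duration of a 16 GB transfer
    ```

    A per-link model like this, refreshed from the HDFS metrics stream, is the simplest version of the "predict latency on all links" service the abstract sketches.
    
    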

  17. Failure analyses of composite bolted joints

    Science.gov (United States)

    Wilson, D. W.; Gillespie, J. W.; York, J. L.; Pipes, R. B.

    1980-01-01

    The complex failure behavior exhibited by bolted joints of graphite-epoxy (Hercules AS/3501) was investigated for the net tension, bearing and shearout failure modes using combined analytical and experimental techniques. Plane-stress, linear elastic, finite element methods were employed to determine the two-dimensional state of stress resulting from a loaded hole in a finite-width, semi-infinite strip. The stresses predicted by the finite element method were verified by experiment to lend credence to the analysis. The influence of joint geometric parameters on the state of stress and the resultant strength of the joint was also studied. The functional relationships found to exist between bolted joint strength and the geometric parameters were applied in the formulation of semiempirical strength models for the basic failure modes. A point-stress failure criterion was successfully applied as the failure criterion for the net tension and shearout failure modes.

  18. Bayesian Uncertainty Analyses Via Deterministic Model

    Science.gov (United States)

    Krzysztofowicz, R.

    2001-05-01

    Rational decision-making requires that the total uncertainty about a variate of interest (a predictand) be quantified in terms of a probability distribution, conditional on all available information and knowledge. Suppose the state-of-knowledge is embodied in a deterministic model, which is imperfect and outputs only an estimate of the predictand. Fundamentals are presented of three Bayesian approaches to producing a probability distribution of the predictand via any deterministic model. The Bayesian Processor of Output (BPO) quantifies the total uncertainty in terms of a posterior distribution, conditional on model output. The Bayesian Processor of Ensemble (BPE) quantifies the total uncertainty in terms of a posterior distribution, conditional on an ensemble of model output. The Bayesian Forecasting System (BFS) decomposes the total uncertainty into input uncertainty and model uncertainty, which are characterized independently and then integrated into a predictive distribution.
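    For intuition, the BPO idea of conditioning on a deterministic model's output can be sketched with a normal-normal conjugate update. The Gaussian assumptions here are mine for illustration; the actual BPO/BFS machinery uses more general (meta-Gaussian) forms.

    ```python
    # Illustrative sketch: turn a deterministic model's point estimate into a
    # posterior distribution for the predictand, assuming a Gaussian prior
    # and additive Gaussian model error (my simplifying assumptions).

    def bpo_posterior(prior_mean, prior_var, model_output, model_error_var):
        """Posterior N(mu, var) for the predictand given the model output."""
        k = prior_var / (prior_var + model_error_var)  # weight on model output
        mu = prior_mean + k * (model_output - prior_mean)
        var = (1.0 - k) * prior_var                    # always < prior_var
        return mu, var

    mu, var = bpo_posterior(prior_mean=10.0, prior_var=4.0,
                            model_output=12.0, model_error_var=1.0)
    ```

    The posterior variance is always smaller than the prior variance, which captures the key point of the abstract: even an imperfect deterministic model reduces, but never eliminates, the total uncertainty about the predictand.
    
    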

  19. Proposed Testing to Assess the Accuracy of Glass-To-Metal Seal Stress Analyses.

    Energy Technology Data Exchange (ETDEWEB)

    Chambers, Robert S.; Emery, John M; Tandon, Rajan; Antoun, Bonnie R.; Stavig, Mark E.; Newton, Clay S.; Gibson, Cory S; Bencoe, Denise N.

    2014-09-01

    The material characterization tests conducted on 304L VAR stainless steel and Schott 8061 glass have provided higher-fidelity data for calibration of material models used in glass-to-metal (GTM) seal analyses. Specifically, a thermo-multi-linear elastic-plastic (thermo-MLEP) material model has been defined for SS304L, and the Simplified Potential Energy Clock nonlinear viscoelastic model has been calibrated for the S8061 glass. To assess the accuracy of finite element stress analyses of GTM seals, a suite of tests is proposed to provide data for comparison to model predictions.

  20. Wind power prediction models

    Science.gov (United States)

    Levy, R.; Mcginness, H.

    1976-01-01

    Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.
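
    A stochastic speed-sampling model of this kind is often sketched with a Weibull distribution (an assumption here; the abstract does not name the distribution, and the shape and scale values below are illustrative, not the Goldstone fit). Inverse-transform sampling yields uncorrelated hourly samples:

```python
import math
import random


def sample_wind_speeds(n, shape=2.0, scale=5.1, seed=42):
    """Draw n uncorrelated wind-speed samples (m/s) from a Weibull
    distribution via inverse-transform sampling. The shape/scale values
    are hypothetical placeholders, not fitted site parameters."""
    rng = random.Random(seed)
    return [scale * (-math.log(1.0 - rng.random())) ** (1.0 / shape)
            for _ in range(n)]
```

A correlated simulation, as discussed in the abstract, would additionally filter these draws (e.g. through an autoregressive scheme) so that consecutive hourly samples are not independent.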

  1. Zephyr - The prediction models

    Energy Technology Data Exchange (ETDEWEB)

    Nielsen, T.S.; Madsen, H.; Nielsen, H.Aa. [Informatics and Mathematical Modelling - DTU, Kgs. Lyngby (Denmark); Landberg, L.; Giebel, G. [Risoe National Lab., Roskilde (Denmark)

    2006-07-01

    This paper briefly describes new models and methods for predicting the wind power output from wind farms. The system is being developed in a project which has the research organization Risoe and the department of Informatics and Mathematical Modelling (IMM) as the modelling team and all the Danish utilities as partners and users. The new models are evaluated for five wind farms in Denmark as well as one wind farm in Spain. It is shown that the predictions based on conditional parametric models are superior to the predictions obtained by state-of-the-art parametric models. (au)

  2. On Prediction of EOP

    CERN Document Server

    Malkin, Z

    2009-01-01

    Two methods of predicting the Pole coordinates and TAI-UTC were tested: extrapolation of the deterministic components, and ARIMA. It was found that each of these methods is most effective for a certain prediction horizon. For short-term prediction the ARIMA algorithm yields a more accurate prognosis, while for long-term prediction extrapolation is preferable. Consequently, the combined algorithm is used in practice by the IAA EOP Service. The accuracy of the prognosis is close to that of the IERS algorithms. For prediction of nutation the program KSV-1996-1 by T. Herring is used.
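
    The combined scheme can be caricatured in a few lines: fit the deterministic component (here simply a linear trend, an illustrative assumption), use it alone beyond a switch horizon, and add an AR(1) stand-in for the ARIMA model (another simplification) on the detrended residuals at short horizons:

```python
def combined_forecast(series, horizon, switch=10):
    """Toy version of a combined predictor: deterministic (linear-trend)
    extrapolation for long horizons, trend plus an AR(1) correction on
    the residuals for short horizons. The switch horizon and the AR(1)
    stand-in are illustrative assumptions, not the IAA algorithm."""
    n = len(series)
    t_mean = (n - 1) / 2.0
    x_mean = sum(series) / n
    slope = (sum((t - t_mean) * (x - x_mean) for t, x in enumerate(series))
             / sum((t - t_mean) ** 2 for t in range(n)))
    trend = lambda t: x_mean + slope * (t - t_mean)
    if horizon > switch:
        return trend(n - 1 + horizon)      # long-term: extrapolation only
    # short-term: AR(1) fitted to the detrended residuals
    r = [x - trend(t) for t, x in enumerate(series)]
    num = sum(r[t - 1] * r[t] for t in range(1, n))
    den = sum(r[t - 1] ** 2 for t in range(1, n)) or 1.0
    phi = num / den
    return trend(n - 1 + horizon) + (phi ** horizon) * r[-1]
```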

  3. Bond return predictability in expansions and recessions

    DEFF Research Database (Denmark)

    Engsted, Tom; Møller, Stig Vinther; Jensen, Magnus David Sander

    We document that over the period 1953-2011 US bond returns are predictable in expansionary periods but unpredictable during recessions. This result holds in both in-sample and out-of-sample analyses and using both univariate regressions and combination forecasting techniques. A simulation study...... shows that our tests have power to reject unpredictability in both expansions and recessions. To judge the economic significance of the results we compute utility gains for a mean-variance investor who takes the predictability patterns into account and show that utility gains are positive in expansions...... but negative in recessions. The results are also consistent with tests showing that the expectations hypothesis of the term structure holds in recessions but not in expansions. However, the results for bonds are in sharp contrast to results for stocks showing that stock returns are predictable in recessions...

  4. Differential AR algorithm for packet delay prediction

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Different delay prediction algorithms have been applied in multimedia communication, among which linear prediction is attractive because of its low complexity. The AR (auto-regressive) algorithm is a traditional one with low computation cost, while the NLMS (normalized least mean square) algorithm is more precise. In this paper, referring to the ARIMA (auto-regression integrated with moving averages) model, a differential AR algorithm (DIAR) is proposed based on analyses of both the AR and NLMS algorithms. The prediction precision of the new algorithm is about 5-10 dB higher than that of the AR algorithm without increasing the computational complexity. Compared with the NLMS algorithm, its precision improves slightly, by 0.1 dB on average, but the algorithm complexity is reduced by more than 90%. Our simulations and tests also demonstrate that this method significantly improves the average end-to-end delay and packet loss ratio.
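
    A minimal sketch of the differencing idea (the published DIAR is certainly more elaborate): fit an AR(1) model to the first-differenced delay series, then forecast the next delay as the last observation plus the predicted difference:

```python
def diar_predict(delays):
    """Illustrative differential-AR one-step prediction: model the
    first differences of the delay series with AR(1) (least-squares
    coefficient), then integrate back to the delay level."""
    d = [b - a for a, b in zip(delays, delays[1:])]   # first differences
    num = sum(d[t - 1] * d[t] for t in range(1, len(d)))
    den = sum(d[t - 1] ** 2 for t in range(1, len(d))) or 1.0
    phi = num / den                                    # AR(1) coefficient
    return delays[-1] + phi * d[-1]                    # last delay + predicted diff
```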

  5. Split Hopkinson pressure bar technique: Experiments, analyses and applications

    Science.gov (United States)

    Gama, Bazle Anwer

    A critical review of the Hopkinson bar experimental technique is performed to identify the validity and applicability of the classic one-dimensional theory. A finite element model of the Hopkinson bar experiment is developed in three-dimensions and is used in detailed numerical analyses. For a small diameter hard specimen, the bar-specimen interfaces are non-planar, which predicts higher specimen strain and, thus, lower initial modulus in the linear elastic phase of deformation. In such cases, the stress distribution in the specimen is not uni-axial and a chamfered specimen geometry is found to provide better uni-axial stress condition in the specimen. In addition, a new Hopkinson bar with transmission tube is found suitable for small strain measurement of small diameter specimens. A one-dimensional exact Hopkinson bar theory considering the stress wave propagation in an equal diameter specimen has been formulated which predicts physically meaningful results in all extreme cases as compared to classic theory. In light of the theoretical and numerical investigations, an experimental methodology for rate dependent modulus and strength is developed. Quasi-static and dynamic behavior of plain weave (15 x 15) S-2 glass/SC15 composites has been investigated. A new circular-rectangular prism specimen (C-RPS) geometry is found suitable for testing laminated composites in the in-plane directions. Rate sensitive strength, non-linear strain and elastic modulus parameters for plain-weave (15 x 15) S-2 glass/SC15 composites have been experimentally determined.

  6. An Illumination Modeling System for Human Factors Analyses

    Science.gov (United States)

    Huynh, Thong; Maida, James C.; Bond, Robert L. (Technical Monitor)

    2002-01-01

    Seeing is critical to human performance. Lighting is critical for seeing. Therefore, lighting is critical to human performance. This is common sense, and here on Earth it is easily taken for granted. However, on orbit, where the sun rises or sets every 45 minutes on average, humans working in space must cope with extremely dynamic lighting conditions. Contrast conditions of harsh shadowing and glare are also severe. The prediction of lighting conditions for critical operations is essential. Crew training can factor lighting into lesson plans when necessary. Mission planners can determine whether low-light video cameras are required or whether additional luminaires need to be flown. Optimizing the quantity and quality of light is necessary because of the effects on crew safety, on electrical power and on equipment maintainability. To address all of these issues, an illumination modeling system has been developed by the Graphics Research and Analyses Facility (GRAF) and the Lighting Environment Test Facility (LETF) in the Space Human Factors Laboratory at NASA Johnson Space Center. The system uses physically based ray-tracing software (Radiance) developed at Lawrence Berkeley Laboratories, a human-factors-oriented geometric modeling system (PLAID) and an extensive database of humans and environments. Material reflectivity properties of major and critical surfaces are measured using a gonio-reflectometer. Luminaires (lights) are measured for beam spread distribution, color and intensity. Video camera performance is measured for color and light sensitivity. 3D geometric models of humans and the environment are combined with the material and light models to form a system capable of predicting lighting and visibility conditions in space.

  7. High performance liquid chromatography in pharmaceutical analyses.

    Science.gov (United States)

    Nikolin, Branko; Imamović, Belma; Medanhodzić-Vuk, Saira; Sober, Miroslav

    2004-05-01

    In pre-market testing of drugs and their control over the last ten years, high performance liquid chromatography has replaced numerous spectroscopic methods and gas chromatography in quantitative and qualitative analysis. In the first period of HPLC application it was thought that it would become a method complementary to gas chromatography; today, however, it has nearly completely replaced gas chromatography in pharmaceutical analysis. The use of a liquid mobile phase, with the possibility of modifying its polarity during chromatography and all other modifications of the mobile phase depending upon the characteristics of the substance being tested, is a great advantage in the separation process compared with other methods. The wide choice of stationary phases is a further factor enabling good separation. The separation column, connected to specific and sensitive detector systems (spectrofluorimeter, diode-array detector, electrochemical detector) and to hyphenated systems such as HPLC-MS and HPLC-NMR, is among the basic elements on which the wide and effective application of the HPLC method is based. The purpose of high performance liquid chromatography (HPLC) analysis of any drug is to confirm the identity of the drug, provide quantitative results, and monitor the progress of therapy of a disease. Fig. 1 shows a chromatogram obtained for the plasma of depressed patients 12 h before oral administration of dexamethasone. HPLC may also be used to further our understanding of normal and disease processes in the human body through biomedical and therapeutic research during investigations prior to drug registration. The analysis of drugs and metabolites in biological fluids, particularly plasma, serum or urine, is one of the most demanding but most common uses of high performance liquid chromatography. Blood, plasma or serum contains numerous endogenous

  8. Hungarian approach to LOCA analyses for SARs

    International Nuclear Information System (INIS)

    The Hungarian AGNES project, carried out in the period 1992-94, reassessed the safety of the Paks NPP using state-of-the-art techniques. The project comprised, among others, a complete design basis accident (DBA) analysis. The major part of the thermal-hydraulic analyses was performed with the RELAP5/mod2.5/V251 code version using a conservative approach. In the medium-size LOCA calculations and the PTS studies, the six reactor cooling loops of the WWER-440/213 system were modelled by three loops (a single, a double and a triple loop). In the further developed version of the input model, used in small-break LOCA and other DBA analyses, the six loops were modelled separately. The nodalisation schemes of the reactor vessel and the pressurizer, as well as of the single primary loops, are identical in the two input models. For the six-loop input model the trip cards, general tables and control variables are generated using a RELAP5 object-oriented pre-processing interactive code, TROPIC 4.0, received from TRACTEBEL Belgium. The six-loop input model for the WWER-440/V213 system was verified against data from two operational transients measured at Paks NPP. The analysis of large-break LOCAs, where the combined simultaneous upper plenum and downcomer injection results in a rather complicated process during the reflooding phase, was carried out with the ATHLET mod 1.1 Cycle code version (developed by GRS) in the framework of a bilateral German-Hungarian cooperation agreement, using a two-loop (1+5) input model. Later on, best estimate methodology gained ground in our safety analysis activities. In recent years AEKI, in the framework of projects such as the US CAMP activity, the EU PHARE and 5th Framework Programmes, as well as national projects supporting plant operation, has also performed many LOCA analyses, including primary-to-secondary leakages and feedwater and steam line breaks. These can serve as preparation for a new DBA Analysis project

  9. Genomic Prediction in Barley

    DEFF Research Database (Denmark)

    Edriss, Vahid; Cericola, Fabio; Jensen, Jens D;

    Genomic prediction uses markers (SNPs) across the whole genome to predict individual breeding values at an early growth stage, potentially before large-scale phenotyping. One of the applications of genomic prediction in plant breeding is to identify the best individual candidate lines to contribute...... to the next generation. The main goal of this study was to assess the potential of using genomic prediction in a commercial barley breeding program. The data used in this study were from the Nordic Seed company, which is located in Denmark. Around 350 advanced lines were genotyped with the 9K Barley chip from...... Illumina. Traits used in this study were grain yield, plant height and heading date. Heading date is the number of days after 1st June it takes for the plant to head. Heritabilities were 0.33, 0.44 and 0.48 for yield, height and heading, respectively, for the average of nine plots. The GBLUP model was used for genomic...

  10. Genomic Prediction in Barley

    DEFF Research Database (Denmark)

    Edriss, Vahid; Cericola, Fabio; Jensen, Jens D;

    2015-01-01

    Genomic prediction uses markers (SNPs) across the whole genome to predict individual breeding values at an early growth stage, potentially before large-scale phenotyping. One of the applications of genomic prediction in plant breeding is to identify the best individual candidate lines to contribute...... to the next generation. The main goal of this study was to assess the potential of using genomic prediction in a commercial barley breeding program. The data used in this study were from the Nordic Seed company, which is located in Denmark. Around 350 advanced lines were genotyped with the 9K Barley chip from...... Illumina. Traits used in this study were grain yield, plant height and heading date. Heading date is the number of days after 1st June it takes for the plant to head. Heritabilities were 0.33, 0.44 and 0.48 for yield, height and heading, respectively, for the average of nine plots. The GBLUP model was used for genomic...
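
    A minimal GBLUP sketch, assuming numpy, a genomic relationship matrix built from centered marker codes, and a shrinkage parameter derived from heritability (the study's actual pipeline, marker coding and variance-component estimates are not given here, so every numeric choice below is illustrative):

```python
import numpy as np


def gblup(Z, y, h2=0.44):
    """Illustrative GBLUP: genomic relationship matrix G from centered
    genotype codes, then BLUP of genetic values u = G (G + lam I)^-1
    (y - mean(y)) with lam = (1 - h2) / h2. Not the study's pipeline."""
    Zc = Z - Z.mean(axis=0)            # center each SNP column
    G = Zc @ Zc.T / Z.shape[1]         # simple genomic relationship matrix
    lam = (1.0 - h2) / h2              # ratio of residual to genetic variance
    n = len(y)
    u = G @ np.linalg.solve(G + lam * np.eye(n), y - y.mean())
    return u
```

In practice the predicted breeding values `u` of genotyped-but-unphenotyped candidates are ranked to pick lines for the next generation.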

  11. Epitope prediction methods

    DEFF Research Database (Denmark)

    Karosiene, Edita

    Major histocompatibility complex (MHC) molecules, in humans called human leucocyte antigen (HLA) molecules, are encoded by extremely polymorphic genes on chromosome 6. Due to this polymorphism, thousands of different MHC molecules exist, making the experimental identification of peptide-MHC interactions a very costly procedure. This has primed the need for in silico peptide......-MHC prediction methods, and over the last decade several such methods have been successfully developed and used for epitope discovery purposes. My PhD project has been dedicated to improving methods for predicting peptide-MHC interactions by developing new strategies for training prediction algorithms based...... on machine learning techniques. Several MHC class I binding prediction algorithms have been developed, and due to their high accuracy they are used by many immunologists to facilitate the conventional experimental process of epitope discovery. However, the accuracy of these methods depends on data defining...

  12. Predictable grammatical constructions

    DEFF Research Database (Denmark)

    Lucas, Sandra

    2015-01-01

    My aim in this paper is to provide evidence from diachronic linguistics for the view that some predictable units are entrenched in grammar and consequently in human cognition, in a way that makes them functionally and structurally equal to nonpredictable grammatical units, suggesting...... that these predictable units should be considered grammatical constructions on a par with the nonpredictable constructions. Frequency has usually been seen as the only possible argument speaking in favor of viewing some formally and semantically fully predictable units as grammatical constructions. However, this paper...... semantically and formally predictable. Despite this difference, [méllo INF], like the other future periphrases, seems to be highly entrenched in the cognition (and grammar) of Early Medieval Greek language users, and consequently a grammatical construction. The syntactic evidence speaking in favor of [méllo...

  13. Predicting toxicity of nanoparticles

    OpenAIRE

    BURELLO ENRICO; Worth, Andrew

    2011-01-01

    A statistical model based on a quantitative structure–activity relationship accurately predicts the cytotoxicity of various metal oxide nanoparticles, thus offering a way to rapidly screen nanomaterials and prioritize testing.

  14. Robust Distributed Online Prediction

    CERN Document Server

    Dekel, Ofer; Shamir, Ohad; Xiao, Lin

    2010-01-01

    The standard model of online prediction deals with serial processing of inputs by a single processor. However, in large-scale online prediction problems, where inputs arrive at a high rate, an increasingly common necessity is to distribute the computation across several processors. A non-trivial challenge is to design distributed algorithms for online prediction, which maintain good regret guarantees. In \\cite{DMB}, we presented the DMB algorithm, which is a generic framework to convert any serial gradient-based online prediction algorithm into a distributed algorithm. Moreover, its regret guarantee is asymptotically optimal for smooth convex loss functions and stochastic inputs. On the flip side, it is fragile to many types of failures that are common in distributed environments. In this companion paper, we present variants of the DMB algorithm, which are resilient to many types of network failures, and tolerant to varying performance of the computing nodes.

  15. Trend Analyses of Nitrate in Danish Groundwater

    Science.gov (United States)

    Hansen, B.; Thorling, L.; Dalgaard, T.; Erlandsen, M.

    2012-04-01

    This presentation assesses the long-term development of the oxic groundwater nitrate concentration and nitrogen (N) loss due to intensive farming in Denmark. Firstly, up to 20-year time series from the national groundwater monitoring network enable a statistically systematic analysis of distribution, trends and trend reversals in the groundwater nitrate concentration. Secondly, knowledge about the N surplus in Danish agriculture since 1950 is used as an indicator of the potential loss of N. Thirdly, CFC (chlorofluorocarbon) age determination of groundwater recharge allows linking of the first two datasets. The development of the nitrate concentration in oxic groundwater clearly mirrors the development of the national agricultural N surplus, and a corresponding trend reversal is found in groundwater. Regulation and technical improvements in intensive farming in Denmark have succeeded in decreasing the N surplus by 40% since the mid 1980s, while at the same time maintaining crop yields and increasing animal production, especially of pigs. Trend analyses prove that the youngest (0-15 years old) oxic groundwater shows more pronounced significant downward nitrate trends (44%) than the oldest (25-50 years old) oxic groundwater (9%). This amounts to clear evidence of the effect of reduced nitrate leaching on groundwater nitrate concentrations in Denmark. Is the Danish groundwater monitoring strategy optimal for the detection of nitrate trends? Will the nitrate concentrations in Danish groundwater continue to decrease, or are the Danish nitrate concentration levels now appropriate according to the Water Framework Directive?

  16. Network-Based and Binless Frequency Analyses.

    Directory of Open Access Journals (Sweden)

    Sybil Derrible

    Full Text Available We introduce and develop a new network-based and binless methodology to perform frequency analyses and produce histograms. In contrast with traditional frequency analysis techniques that use fixed intervals to bin values, we place a range ±ζ around each individual value in a data set and count the number of values within that range, which allows us to compare every single value of a data set with one another. In essence, the methodology is identical to the construction of a network, where two values are connected if they lie within a given range (±ζ). The value with the highest degree (i.e., most connections) is therefore assimilated to the mode of the distribution. To select an optimal range, we look at the stability of the proportion of nodes in the largest cluster. The methodology is validated by sampling 12 typical distributions, and it is applied to a number of real-world data sets with both spatial and temporal components. The methodology can be applied to any data set and provides a robust means to uncover meaningful patterns and trends. A free Python script and a tutorial are also made available to facilitate the application of the method.
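
    The degree-counting step described above can be sketched directly (the published method additionally selects ζ from the stability of the largest cluster, which is omitted here):

```python
def binless_mode(values, zeta):
    """Binless 'degree' count: for each value, the number of other
    values within +/- zeta of it; the value with the highest degree
    plays the role of the mode (ties broken by first occurrence)."""
    degrees = []
    for i, v in enumerate(values):
        deg = sum(1 for j, w in enumerate(values)
                  if j != i and abs(w - v) <= zeta)
        degrees.append(deg)
    best = max(range(len(values)), key=degrees.__getitem__)
    return values[best], degrees[best]
```

Because every value is compared with every other, no bin boundaries need to be chosen, at the cost of O(n²) comparisons in this naive form.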

  17. Consumption patterns and perception analyses of hangwa.

    Science.gov (United States)

    Kwock, Chang Geun; Lee, Min A; Park, So Hyun

    2012-03-01

    Hangwa is a traditional food which, to match current consumption trends, needs marketing strategies to extend its consumption. Therefore, the purpose of this study was to analyze consumers' consumption patterns and perception of Hangwa to increase its consumption in the market. A questionnaire was sent to 250 consumers by e-mail from Oct 8∼23, 2009, and the data from 231 respondents were analyzed. Statistical, descriptive, paired-samples t-test, and importance-performance analyses were conducted using SPSS WIN 17.0. According to the results, Hangwa was purchased mainly 'as a present' (39.8%), and the main reasons for buying it were its 'traditional image' (33.3%) and 'taste' (22.5%). When the importance and performance of attributes considered in purchasing Hangwa were evaluated, performance was assessed to be lower than importance for all attributes. The attributes in the first quadrant, with high importance and high performance, were 'a sanitary process', 'a rigorous quality mark' and 'taste', all related to product quality. Those with high importance but low performance were 'popularization through advertisement', 'promotion through mass media', 'conversion of thought on traditional foods', 'a reasonable price' and 'a wide range of prices'. In conclusion, Hangwa manufacturers need to diversify products and extend the expiration date based on technology to promote consumption. In terms of price, Hangwa should become more accessible by lowering the price barrier for consumers who are sensitive to price. PMID:24471065

  18. Analyses of demand response in Denmark

    International Nuclear Information System (INIS)

    Due to characteristics of the power system, costs of producing electricity vary considerably over short time intervals. Yet, many consumers do not experience corresponding variations in the price they pay for consuming electricity. The topic of this report is: are consumers willing and able to respond to short-term variations in electricity prices, and if so, what is the social benefit of consumers doing so? Taking Denmark and the Nord Pool market as a case, the report focuses on what is known as short-term consumer flexibility or demand response in the electricity market. With focus on market efficiency, efficient allocation of resources and security of supply, the report describes demand response from a micro-economic perspective and provides empirical observations and case studies. The report aims at evaluating benefits from demand response. However, only elements contributing to an overall value are presented. In addition, the analyses are limited to benefits for society, and costs of obtaining demand response are not considered. (au)

  19. Evaluation of the Olympus AU-510 analyser.

    Science.gov (United States)

    Farré, C; Velasco, J; Ramón, F

    1991-01-01

    The selective multitest Olympus AU-510 analyser was evaluated according to the recommendations of the Comision de Instrumentacion de la Sociedad Española de Quimica Clinica and the European Committee for Clinical Laboratory Standards. The evaluation was carried out in two stages: an examination of the analytical units, followed by an evaluation under routine working conditions. The operational characteristics of the system were also studied. The first stage included a photometric study: depending on the absorbance, the inaccuracy varies from +0.5% to -0.6% at 405 nm and from -5.6% to 10.6% at 340 nm; the imprecision ranges between -0.22% and 0.56% at 405 nm and between 0.09% and 2.74% at 340 nm. Linearity was acceptable, apart from a very low absorbance for NADH at 340 nm, and the imprecision of the serum sample pipetter was satisfactory. Twelve serum analytes were studied under routine conditions: glucose, urea, urate, cholesterol, triglycerides, total bilirubin, creatinine, phosphate, iron, aspartate aminotransferase, alanine aminotransferase and gamma-glutamyl transferase. The within-run imprecision (CV%) ranged from 0.67% for phosphate to 2.89% for iron, and the between-run imprecision from 0.97% for total bilirubin to 7.06% for iron. There was no carry-over in a study of the serum sample pipetter. Carry-over studies with the reagent and sample pipetters showed some cross-contamination in the iron assay.

  20. Parametric analyses of fusion-fission systems

    International Nuclear Information System (INIS)

    After a short review of the nuclear reactions relevant to fusion-fission systems, the various types of blankets and characteristic model cases are presented. The fusion-fission system is modelled by its energy flow diagram. The system components and the system as a whole are characterized by 'component parameters' and 'system parameters', all of which are energy ratios. A cost estimate is given for the net energy delivered by the system, and a collection of formulas for the various energies flowing in the system, in terms of the thermal energy delivered by the fusion part, is presented. For sensitivity analysis four reference cases are defined, combining two plasma confinement schemes (mirror and tokamak) with two fissile fuel cycles (thorium-uranium and uranium-plutonium). The sensitivity of the critical plasma energy multiplication, of the circulating energy fraction, and of the energy cost with respect to changes of the component parameters is analysed. For the mirror case only superconducting magnets are considered, whereas the two tokamak cases take into account both superconducting and normal-conducting coils. A section presenting relations between the plasma energy multiplication and the confinement parameter nτ_E of driven tokamak plasmas is added for reference. The conclusions summarize the results which could be obtained within the framework of energy balances, cost estimates and their parametric sensitivities. This is supplemented by listing those issues which lie beyond this scope but have to be taken into account when assessments of fusion-fission systems are made. (orig.)

  1. WIND SPEED AND ENERGY POTENTIAL ANALYSES

    Directory of Open Access Journals (Sweden)

    A. TOKGÖZLÜ

    2013-01-01

    Full Text Available This paper provides a case study on the application of wavelet techniques to analyze wind speed and energy (a renewable and environmentally friendly energy source). Solar and wind are the main energy sources that allow farmers to use the kinetic energy captured by a wind mill for pumping water, drying crops, heating greenhouse systems, rural electrification or cooking. Larger wind turbines (over 1 MW) can pump enough water for small-scale irrigation. This study aimed to initiate the data gathering process for wavelet analyses of different scale effects and their role in wind speed and direction variations. The wind data gathering system is mounted at latitude 37° 50′ N, longitude 30° 33′ E, at a height of 1200 m above mean sea level, on a hill near the Süleyman Demirel University campus. Ten-minute average values of wind speed and direction at two levels (10 m and 30 m above ground level) were recorded by a data logger between July 2001 and February 2002. Wind speed values ranged between 0 m/s and 54 m/s. The annual mean speed value is 4.5 m/s at 10 m above ground level. Prevalent wind

  2. Comparative analyses of bidirectional promoters in vertebrates

    Directory of Open Access Journals (Sweden)

    Taylor James

    2008-05-01

    Full Text Available Abstract Background Orthologous genes with deep phylogenetic histories are likely to retain similar regulatory features. In this report we utilize orthology assignments for pairs of genes co-regulated by bidirectional promoters to map the ancestral history of the promoter regions. Results Our mapping of bidirectional promoters from humans to fish shows that many such promoters emerged after the divergence of chickens and fish. Furthermore, annotations of promoters in deep phylogenies enable detection of missing data or assembly problems present in higher vertebrates. The functional importance of bidirectional promoters is indicated by selective pressure to maintain the arrangement of genes regulated by the promoter over long evolutionary time spans. Characteristics unique to bidirectional promoters are further elucidated using a technique for unsupervised classification, known as ESPERR. Conclusion Results of these analyses will aid in our understanding of the evolution of bidirectional promoters, including whether the regulation of two genes evolved as a consequence of their proximity or if function dictated their co-regulation.

  3. Field analyses of tritium at environmental levels

    Energy Technology Data Exchange (ETDEWEB)

    Hofstetter, K.J.; Cable, P.R.; Beals, D.M

    1999-02-11

    An automated, remote system to analyze tritium in aqueous solutions at environmental levels has been tested and has demonstrated laboratory-quality tritium analysis capability in near real time. The field deployable tritium analysis system (FDTAS) consists of a novel multi-port autosampler, an on-line water purification system, and a prototype stop-flow liquid scintillation counter (LSC) which can be remotely controlled for unmanned operation. Backgrounds of ~1.5 counts/min in the tritium channel are routinely measured with a tritium detection efficiency of ~25% for the custom 11 ml cell. A detection limit of <0.3 pCi/ml has been achieved for 100-min counts using a 50:50 mixture of sample and cocktail. To assess the long-term performance characteristics of the FDTAS, a composite sampler was installed on the Savannah River, downstream of the Savannah River Site, and collected repetitive 12-hour composite samples over a 14-day period. The samples were analyzed using the FDTAS and in the laboratory using a standard bench-top LSC. The results of the tritium analyses by the FDTAS and by the laboratory LSC were consistent for comparable counting times at the typical river tritium background levels (~1 pCi/ml)
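
    The quoted detection limit is consistent with a standard Currie-style calculation (an assumption about their method; the abstract does not state how the limit was derived). Using the reported background of about 1.5 cpm, 100-min counts, about 25% efficiency, and a 5.5 ml sample aliquot (half of the 11 ml cell for a 50:50 mixture):

```python
import math


def currie_detection_limit(background_cpm, count_minutes, efficiency,
                           sample_ml, dpm_per_pci=2.22):
    """Currie detection limit in pCi/ml (illustrative check, not the
    authors' stated method): L_D = 2.71 + 4.65*sqrt(B) in counts, then
    converted via counting time, efficiency and sample volume."""
    b_counts = background_cpm * count_minutes        # expected background counts
    ld_counts = 2.71 + 4.65 * math.sqrt(b_counts)    # Currie detection limit
    rate_cpm = ld_counts / count_minutes             # net count rate
    dpm = rate_cpm / efficiency                      # disintegrations per minute
    return dpm / dpm_per_pci / sample_ml             # activity concentration
```

With these inputs the sketch returns roughly 0.2 pCi/ml, below the quoted <0.3 pCi/ml limit.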

  4. ANALYSES AND INFLUENCES OF GLAZED BUILDING ENVELOPES

    Directory of Open Access Journals (Sweden)

    Sabina Jordan

    2011-01-01

    The article presents the results of an analytical study of the functioning of glazing at two different yet interacting levels: at the level of the building as a whole, and at that of glazing as a building element. At the building level, analyses were performed on a sample of high-rise business buildings in Slovenia, where the glazing's share of the building envelope was calculated, and estimates of the proportion of shade provided by external blinds were made. It is shown that, especially in the case of modern buildings with large proportions of glazing and buildings with no shading devices, careful glazing design is needed, together with a sound knowledge of energy performance. In the second part of the article, the energy balance values relating to selected types of glazing are presented, including solar control glazing. The paper demonstrates the need for a holistic energy approach to glazing problems, as well as how different types of glazing can be methodically compared, thus improving the design of sustainability-orientated buildings.

  5. Statistical analyses of extreme food habits

    International Nuclear Information System (INIS)

    This report summarizes the results of the project ''Statistical analyses of extreme food habits'', commissioned by the National Office for Radiation Protection as a contribution to the amendment of the ''General Administrative Regulation to paragraph 45 of the Decree on Radiation Protection: determination of the radiation exposure caused by the emission of radioactive substances from facilities of nuclear technology''. Its aim is to determine whether the calculation of the radiation dose ingested via food by 95% of the population, as planned in a provisional draft, overestimates the true exposure and, if so, by how much. The existence of this overestimation could be demonstrated, but its magnitude could only be estimated roughly. Quantifying its real extent would require the specific activities of the nuclides, which were not available for this investigation. In addition, the report shows how the consumption amounts of different food groups influence each other and which relationships between these amounts should be taken into account in order to estimate the radiation exposure as precisely as possible. (orig.)
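    The overestimation the report describes can be reproduced with a toy calculation: summing the per-food-group 95th-percentile intakes typically exceeds the 95th percentile of the total intake any single individual actually consumes, because no one is an extreme consumer of every food group at once. A sketch with invented, independent lognormal intake distributions (all numbers hypothetical):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Hypothetical daily intakes (kg) of three food groups, sampled
    # independently per person.
    milk = rng.lognormal(mean=-1.0, sigma=0.5, size=n)
    fish = rng.lognormal(mean=-2.0, sigma=0.8, size=n)
    veg = rng.lognormal(mean=-0.5, sigma=0.4, size=n)

    total = milk + fish + veg

    # "Conservative" proxy: add up the per-group 95th percentiles.
    p95_sum_of_groups = sum(np.percentile(x, 95) for x in (milk, fish, veg))
    # Realistic proxy: 95th percentile of each person's actual total intake.
    p95_of_total = np.percentile(total, 95)

    print(p95_sum_of_groups > p95_of_total)  # → True
    ```

    The gap between the two figures is the kind of overestimation the report set out to quantify; its true size would, as noted, additionally depend on the nuclide-specific activities of each food group.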

  6. Kinematic gait analyses in healthy Golden Retrievers

    Directory of Open Access Journals (Sweden)

    Gabriela C.A. Silva

    2014-12-01

    Kinematic analysis describes the relative movement between rigid bodies and finds application in gait analysis and other body movements; interpretation of its data when changes occur guides the choice of treatment. The objective of this study was to establish normal gait parameters for healthy Golden Retrievers to assist in the diagnosis and treatment of musculoskeletal disorders. We used a kinematic analysis system to analyse the gait of seven clinically normal female Golden Retrievers, aged between 2 and 4 years and weighing 21.5 to 28 kg. Flexion and extension were described for the shoulder, elbow, carpal, hip, femorotibial and tarsal joints. The gait was characterized as lateral, and the hypothesis of normality was accepted for all variables except the stance phase of the hip and elbow, considering a confidence level of 95% and a significance level of α = 0.05. Variations were attributed to displacement of the stripes during movement and to the duplicated number of evaluations. Kinematic analysis proved to be a consistent method for evaluating movement during canine gait, and the data can be used in the diagnosis and evaluation of canine gait in comparison with other studies and in the treatment of dogs with musculoskeletal disorders.

  7. Reliability Analyses of Groundwater Pollutant Transport

    Energy Technology Data Exchange (ETDEWEB)

    Dimakis, Panagiotis

    1997-12-31

    This thesis develops a probabilistic finite element model for the analysis of groundwater pollution problems. Two computer codes were developed, (1) one using finite element technique to solve the two-dimensional steady state equations of groundwater flow and pollution transport, and (2) a first order reliability method code that can do a probabilistic analysis of any given analytical or numerical equation. The two codes were connected into one model, PAGAP (Probability Analysis of Groundwater And Pollution). PAGAP can be used to obtain (1) the probability that the concentration at a given point at a given time will exceed a specified value, (2) the probability that the maximum concentration at a given point will exceed a specified value and (3) the probability that the residence time at a given point will exceed a specified period. PAGAP could be used as a tool for assessment purposes and risk analyses, for instance the assessment of the efficiency of a proposed remediation technique or to study the effects of parameter distribution for a given problem (sensitivity study). The model has been applied to study the largest self-sustained, precipitation-controlled aquifer in northern Europe, which underlies Oslo's new major airport. 92 refs., 187 figs., 26 tabs.

  8. PRECLOSURE CONSEQUENCE ANALYSES FOR LICENSE APPLICATION

    Energy Technology Data Exchange (ETDEWEB)

    S. Tsai

    2005-01-12

    Radiological consequence analyses are performed for potential releases from normal operations in surface and subsurface facilities and from Category 1 and Category 2 event sequences during the preclosure period. Surface releases from normal repository operations are primarily from radionuclides released from opening a transportation cask during dry transfer operations of spent nuclear fuel (SNF) in Dry Transfer Facility 1 (DTF 1), Dry Transfer Facility 2 (DTF 2), the Canister Handling Facility (CHF), or the Fuel Handling Facility (FHF). Subsurface releases from normal repository operations are from resuspension of waste package surface contamination and neutron activation of ventilated air and silica dust from host rock in the emplacement drifts. The purpose of this calculation is to demonstrate that the preclosure performance objectives, specified in 10 CFR 63.111(a) and 10 CFR 63.111(b), have been met for the proposed design and operations in the geologic repository operations area. Preclosure performance objectives are discussed in Section 6.2.3 and are summarized in Tables 1 and 2.

  9. Research on the corrosion- and scale-inhibition properties of green water treatment agents for A20 steel

    Institute of Scientific and Technical Information of China (English)

    李素云; 王钢; 邵波; 梅其政; 袁曹龙

    2011-01-01

    In simulated circulating water with ρ(Ca2+) = ρ(HCO3-) = 250 mg/L, the scale-inhibition performance of the three agents PESA, HEDP and AA/AMPS on A20 steel ranks in the order PESA > HEDP > AA/AMPS. The optimum compound formulation is 2.5 mg/L HEDP, 1.5 mg/L PESA and 1.5 mg/L imidazoline. The compound formulation shows good corrosion- and scale-inhibition performance; the corrosion inhibition acts mainly by suppressing the anodic reaction, and the scale inhibition is achieved through the synergy of coordination (chelation) and lattice distortion.

  10. Seismic Soil-Structure Interaction Analyses of a Deeply Embedded Model Reactor – SASSI Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Nie J.; Braverman J.; Costantino, M.

    2013-10-31

    This report summarizes the SASSI analyses of a deeply embedded reactor model performed by BNL and CJC and Associates, as part of the seismic soil-structure interaction (SSI) simulation capability project for the NEAMS (Nuclear Energy Advanced Modeling and Simulation) Program of the Department of Energy. The SASSI analyses included three cases: 0.2 g, 0.5 g, and 0.9 g, all of which refer to nominal peak accelerations at the top of the bedrock. The analyses utilized the modified subtraction method (MSM) for performing the seismic SSI evaluations. Each case consisted of two analyses: input motion in one horizontal direction (X) and input motion in the vertical direction (Z), both of which utilized the same in-column input motion. Besides providing SASSI results for use in comparison with the time domain SSI results obtained using the DIABLO computer code, this study also leads to the recognition that the frequency-domain method should be modernized so that it can better serve its mission-critical role for analysis and design of nuclear power plants.

  11. 'Red Flag' Predictions

    DEFF Research Database (Denmark)

    Hallin, Carina Antonia; Andersen, Torben Juul; Tveterås, Sigbjørn

    -generation prediction markets and outline its unique features as a third-generation prediction market. It is argued that frontline employees gain deep insights when they execute operational activities on an ongoing basis in the organization. The experiential learning from close interaction with internal and external...... stakeholders provides unique insights not otherwise available to senior management. We outline a methodology to agglomerate these insights in a performance barometer as an important source for problem identification and innovation....

  12. Stuck pipe prediction

    KAUST Repository

    Alzahrani, Majed

    2016-03-10

    Disclosed are various embodiments for a prediction application to predict a stuck pipe. A linear regression model is generated from hook load readings at corresponding bit depths. A current hook load reading at a current bit depth is compared with a normal hook load reading from the linear regression model. A current hook load greater than a normal hook load for a given bit depth indicates the likelihood of a stuck pipe.
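    The disclosed check can be sketched in a few lines: fit the normal hook-load trend against bit depth, then flag readings that sit well above the trend. The hook-load values, units and warning margin below are invented for illustration and are not from the patent:

    ```python
    import numpy as np

    # Hypothetical drilling log: hook load (klbf) grows roughly linearly with depth.
    bit_depth = np.array([1000, 1500, 2000, 2500, 3000, 3500], dtype=float)
    hook_load = np.array([120.0, 131.0, 139.5, 150.2, 160.1, 169.8])

    # Fit the "normal" hook-load trend: load ≈ slope * depth + intercept.
    slope, intercept = np.polyfit(bit_depth, hook_load, 1)

    def stuck_pipe_warning(current_depth, current_load, margin=5.0):
        """Flag a likely stuck pipe when the measured hook load exceeds the
        regression-predicted normal load by more than `margin` (assumed klbf)."""
        normal_load = slope * current_depth + intercept
        return current_load > normal_load + margin

    print(stuck_pipe_warning(4000, 182.0))  # near the trend → False
    print(stuck_pipe_warning(4000, 195.0))  # well above the trend → True
    ```

    The fixed `margin` stands in for whatever exceedance criterion an implementation would tune; the patent abstract itself only states that a current hook load greater than the normal load indicates a likely stuck pipe.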

  13. Use of EBSD Data in Numerical Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Becker, R; Wiland, H

    2000-01-14

    obtained for comparison with the model predictions. More recent work has taken advantage of automated data collection on deformed specimens as a means of collecting detailed and spatially correlated data for model validation. Although it will not be discussed in detail here, another area in which EBSD data is having a great impact is recrystallization modeling. EBSD techniques can be used to collect data for quantitative microstructural analysis. This data can be used to infer growth kinetics of specific orientations, and this information can be synthesized into more accurate grain growth or recrystallization models. Another role which EBSD techniques may play is in determining initial structures for recrystallization models. A realistic starting structure is vital for evaluating the models, and attempts at predicting realistic structures with finite element simulations are not yet successful. As methodologies and equipment resolution continue to improve, it is possible that measured structures will serve as input for recrystallization models. Simulations have already been run using information obtained manually from a TEM.

  14. Operational Dust Prediction

    Science.gov (United States)

    Benedetti, Angela; Baldasano, Jose M.; Basart, Sara; Benincasa, Francesco; Boucher, Olivier; Brooks, Malcolm E.; Chen, Jen-Ping; Colarco, Peter R.; Gong, Sunlin; Huneeus, Nicolas; Jones, Luke; Lu, Sarah; Menut, Laurent; Morcrette, Jean-Jacques; Mulcahy, Jane; Nickovic, Slobodan; Garcia-Pando, Carlos P.; Reid, Jeffrey S.; Sekiyama, Thomas T.; Tanaka, Taichu Y.; Terradellas, Enric; Westphal, Douglas L.; Zhang, Xiao-Ye; Zhou, Chun-Hong

    2014-01-01

    Over the last few years, numerical prediction of dust aerosol concentration has become prominent at several research and operational weather centres due to growing interest from diverse stakeholders, such as solar energy plant managers, health professionals, aviation and military authorities and policymakers. Dust prediction in numerical weather prediction-type models faces a number of challenges owing to the complexity of the system. At the centre of the problem is the vast range of scales required to fully account for all of the physical processes related to dust. Another limiting factor is the paucity of suitable dust observations available for model evaluation and assimilation. This chapter discusses in detail numerical prediction of dust with examples from systems that are currently providing dust forecasts in near real-time or are part of international efforts to establish daily provision of dust forecasts based on multi-model ensembles. The various models are introduced and described along with an overview on the importance of dust prediction activities and a historical perspective. Assimilation and evaluation aspects in dust prediction are also discussed.

  15. Analyse textuelle des discours: Niveaux ou plans d´analyse

    Directory of Open Access Journals (Sweden)

    Jean-Michel Adam

    2012-12-01

    The article deals with the theory of Textual Discourse Analysis (ATD), revisiting the Brazilian translation of La linguistique textuelle: introduction à l'analyse textuelle des discours (Cortez, 2008). ATD is framed by three preliminary observations: text linguistics is one of the disciplines of discourse analysis; the text is the object of analysis of ATD; and as soon as there is a text, that is, recognition that a sequence of utterances forms a communicative whole, there is an effect of genericity, that is, the inscription of that sequence of utterances in a class of discourse. The theoretical model of ATD is clarified by revisiting its diagram 4, in which eight levels of analysis are represented. ATD is approached from a twofold requirement (the theoretical reasons and the methodological and didactic reasons that lead to these levels), and the five plans or levels of textual analysis are detailed and illustrated. Finally, parts of the work are taken up and expanded, with further analyses in which new theoretical aspects are detailed.

  16. Genome-wide analyses of small noncoding RNAs in streptococci

    Directory of Open Access Journals (Sweden)

    Nadja ePatenge

    2015-05-01

    Streptococci represent a diverse group of Gram-positive bacteria which colonize a wide range of animal and human hosts. Streptococcal species occur as commensal as well as pathogenic organisms. Many of the pathogenic species can cause severe, invasive infections in their hosts, leading to high morbidity and mortality. The consequence is tremendous suffering in humans and livestock, in addition to a significant financial burden in the agricultural and healthcare sectors. An environmentally stimulated and tightly controlled expression of virulence factor genes is of fundamental importance for streptococcal pathogenicity. Bacterial small noncoding RNAs (sRNAs) modulate the expression of genes involved in stress response, sugar metabolism, surface composition, and other properties that are related to bacterial virulence. Even though the regulatory character is shared by this class of RNAs, variation at the molecular level results in a high diversity of functional mechanisms. Knowledge about the role of sRNAs in streptococci is still limited, but in recent years, genome-wide screens for sRNAs have been conducted in an increasing number of species. Bioinformatics prediction approaches have been employed, as well as expression analyses by classical array techniques or next-generation sequencing. This review gives an overview of whole-genome screens for sRNAs in streptococci, with a focus on describing the different methods and comparing their outcomes with respect to sRNA conservation among species, functional similarities, and relevance for streptococcal infection.

  17. Rock penetration : finite element sensitivity and probabilistic modeling analyses.

    Energy Technology Data Exchange (ETDEWEB)

    Fossum, Arlo Frederick

    2004-08-01

    This report summarizes numerical analyses conducted to assess the relative importance on penetration depth calculations of rock constitutive model physics features representing the presence of microscale flaws such as porosity and networks of microcracks and rock mass structural features. Three-dimensional, nonlinear, transient dynamic finite element penetration simulations are made with a realistic geomaterial constitutive model to determine which features have the most influence on penetration depth calculations. A baseline penetration calculation is made with a representative set of material parameters evaluated from measurements made from laboratory experiments conducted on a familiar sedimentary rock. Then, a sequence of perturbations of various material parameters allows an assessment to be made of the main penetration effects. A cumulative probability distribution function is calculated with the use of an advanced reliability method that makes use of this sensitivity database, probability density functions, and coefficients of variation of the key controlling parameters for penetration depth predictions. Thus the variability of the calculated penetration depth is known as a function of the variability of the input parameters. This simulation modeling capability should impact significantly the tools that are needed to design enhanced penetrator systems, support weapons effects studies, and directly address proposed HDBT defeat scenarios.
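    PAGAP's first output type (the probability that a concentration at a point exceeds a specified value under uncertain inputs) can be illustrated with plain Monte Carlo sampling in place of the first-order reliability method; the toy dilution expression, parameter distributions and threshold below are all invented for the sketch:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 200_000

    # Hypothetical uncertain inputs for a toy steady-state dilution model:
    # concentration at the receptor = source / (1 + decay * travel_time).
    source = rng.normal(100.0, 15.0, n)        # mg/L at the source
    decay = rng.lognormal(-2.0, 0.3, n)        # 1/day effective decay rate
    travel_time = rng.normal(30.0, 5.0, n)     # days to the receptor

    conc = source / (1.0 + decay * travel_time)

    threshold = 25.0                           # mg/L limit (invented)
    p_exceed = np.mean(conc > threshold)       # fraction of samples exceeding
    print(p_exceed)
    ```

    A FORM code such as the one described above approximates this same probability from a linearization at the design point instead of brute-force sampling, which is why it can be wrapped around any analytical or numerical response equation.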

  18. Microstructural and compositional analyses of GaN-based nanostructures

    Energy Technology Data Exchange (ETDEWEB)

    Pretorius, Angelika; Mueller, Knut; Rosenauer, Andreas [Section Electron Microscopy, Institute of Solid State Physics, University of Bremen, Otto-Hahn-Allee 1, 28359 Bremen (Germany); Schmidt, Thomas; Falta, Jens [Section Surface Physics, Institute of Solid State Physics, University of Bremen, Otto-Hahn-Allee 1, 28359 Bremen (Germany); Aschenbrenner, Timo; Yamaguchi, Tomohiro; Dartsch, Heiko; Hommel, Detlef [Section Semiconductor Epitaxy, Institute of Solid State Physics, University of Bremen, Otto-Hahn-Allee 1, 28359 Bremen (Germany); Kuebel, Christian [Institute of Nanotechnology, Karlsruher Institute of Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany)

    2011-08-15

    Composition and microstructure of GaN-based island structures and distributed Bragg reflectors (DBRs) were investigated with transmission electron microscopy (TEM). We analysed free-standing InGaN islands and islands capped with GaN. Growth of the islands performed by molecular beam epitaxy (MBE) and metal organic vapour phase epitaxy (MOVPE) resulted in different microstructures. The islands grown by MBE were plastically relaxed. Cap layer deposition resulted in a rapid dissolution of the islands already at early stages of cap layer growth. These findings are confirmed by grazing-incidence X-ray diffraction (GIXRD). In contrast, the islands grown by MOVPE relax only elastically. Strain state analysis (SSA) revealed that the indium concentration increases towards the tips of the islands. For an application as quantum dots, the islands must be embedded into DBRs. Structure and composition of Al_yGa_(1-y)N/GaN Bragg reflectors on top of an AlGaN buffer layer and In_xAl_(1-x)N/GaN Bragg reflectors on top of a GaN buffer layer were investigated. Specifically, structural defects such as threading dislocations (TDs) and inversion domains (IDs) were studied, and we investigated thicknesses, interfaces and interface roughnesses of the layers. As the peak reflectivities of the investigated DBRs do not reach the theoretical predictions, possible reasons are discussed. (Copyright 2011 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  19. Review of Approximate Analyses of Sheet Forming Processes

    Science.gov (United States)

    Weiss, Matthias; Rolfe, Bernard; Yang, Chunhui; de Souza, Tim; Hodgson, Peter

    2011-08-01

    Approximate models are often used for the following purposes:
    • in on-line control systems of metal forming processes where calculation speed is critical;
    • to obtain quick, quantitative information on the magnitude of the main variables in the early stages of process design;
    • to illustrate the role of the major variables in the process;
    • as an initial check on numerical modelling; and
    • as a basis for quick calculations on processes in teaching and training packages.
    The models often share many similarities; for example, an arbitrary geometric assumption of deformation giving a simplified strain distribution, simple material property descriptions—such as an elastic, perfectly plastic law—and mathematical short cuts such as a linear approximation of a polynomial expression. In many cases, the output differs significantly from experiment and performance or efficiency factors are developed by experience to tune the models. In recent years, analytical models have been widely used at Deakin University in the design of experiments and equipment and as a pre-cursor to more detailed numerical analyses. Examples that are reviewed in this paper include deformation of sandwich material having a weak, elastic core, load prediction in deep drawing, bending of strip (particularly of ageing steel where kinking may occur), process analysis of low-pressure hydroforming of tubing, analysis of the rejection rates in stamping, and the determination of constitutive models by an inverse method applied to bending tests.

  20. Non-Statistical Methods of Analysing of Bankruptcy Risk

    Directory of Open Access Journals (Sweden)

    Pisula Tomasz

    2015-06-01

    The article focuses on assessing the effectiveness of a non-statistical approach to bankruptcy modelling in enterprises operating in the logistics sector. In order to describe the issue more comprehensively, the aforementioned prediction of the possible negative results of business operations was carried out for companies functioning in the Polish region of Podkarpacie, and in Slovakia. The bankruptcy predictors selected for the assessment of companies operating in the logistics sector included 28 financial indicators characterizing these enterprises in terms of their financial standing and management effectiveness. The purpose of the study was to identify factors (models) describing the bankruptcy risk in enterprises in the context of their forecasting effectiveness in a one-year and two-year time horizon. In order to assess their practical applicability the models were carefully analysed and validated. The usefulness of the models was assessed in terms of their classification properties, and the capacity to accurately identify enterprises at risk of bankruptcy and healthy companies as well as proper calibration of the models to the data from training sample sets.
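    The classification-property assessment described above reduces to sensitivity (at-risk firms correctly flagged), specificity (healthy firms correctly cleared) and overall accuracy; a minimal sketch with invented labels standing in for a validation sample:

    ```python
    # Hypothetical validation labels: 1 = bankrupt within the horizon, 0 = healthy.
    actual    = [1, 1, 1, 0, 0, 0, 0, 1, 0, 0]
    predicted = [1, 1, 0, 0, 0, 1, 0, 1, 0, 0]

    # Confusion-matrix cells.
    tp = sum(a == 1 and p == 1 for a, p in zip(actual, predicted))
    tn = sum(a == 0 and p == 0 for a, p in zip(actual, predicted))
    fp = sum(a == 0 and p == 1 for a, p in zip(actual, predicted))
    fn = sum(a == 1 and p == 0 for a, p in zip(actual, predicted))

    sensitivity = tp / (tp + fn)        # share of at-risk firms flagged
    specificity = tn / (tn + fp)        # share of healthy firms cleared
    accuracy = (tp + tn) / len(actual)  # overall hit rate
    print(sensitivity, specificity, accuracy)
    ```

    The same three quantities can be computed for one-year and two-year horizons separately, which is how the study compares forecasting effectiveness across horizons.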

  1. Quantifying Fire Cycle from Dendroecological Records Using Survival Analyses

    Directory of Open Access Journals (Sweden)

    Dominic Cyr

    2016-06-01

    Quantifying fire regimes in the boreal forest ecosystem is crucial for understanding the past and present dynamics, as well as for predicting its future dynamics. Survival analyses have often been used to estimate the fire cycle in eastern Canada because they make it possible to take into account the censored information that is made prevalent by the typically long fire return intervals and the limited scope of the dendroecological methods that are used to quantify them. Here, we assess how the true length of the fire cycle, the short-term temporal variations in fire activity, and the sampling effort affect the accuracy and precision of estimates obtained from two types of parametric survival models, the Weibull and the exponential models, and one non-parametric model obtained with the Cox regression. Then, we apply those results in a case area located in eastern Canada. Our simulation experiment confirms some documented concerns regarding the detrimental effects of temporal variations in fire activity on parametric estimation of the fire cycle. Cox regressions appear to provide the most accurate and robust estimator, being by far the least affected by temporal variations in fire activity. The Cox-based estimate of the fire cycle for the last 300 years in the case study area is 229 years (CI95: 162–407), compared with the likely overestimated 319 years obtained with the commonly used exponential model.
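    Under the exponential model the abstract compares against, the maximum-likelihood fire-cycle estimate reduces to total observed exposure divided by the number of dated fires, with censored records contributing exposure but no event; a sketch with invented interval data (not the study's):

    ```python
    # Exponential survival MLE: hazard = events / total exposure,
    # fire cycle = 1 / hazard. Censored observations (the dendroecological
    # record ends before a fire is dated) add exposure but no event.
    intervals = [
        (120, True),   # (years observed, fire dated at the end?)
        (85, True),
        (240, False),  # censored
        (160, True),
        (300, False),  # censored
    ]

    exposure = sum(t for t, _ in intervals)              # total tree-ring years
    events = sum(1 for _, observed in intervals if observed)
    fire_cycle = exposure / events                       # years per fire
    print(fire_cycle)
    ```

    This closed-form estimator is exactly what makes the exponential model attractive, but as the simulation experiment shows, it inherits a bias when fire activity varies over time, which is where the Cox regression proves more robust.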

  2. Taxometric analyses of paranoid and schizoid personality disorders.

    Science.gov (United States)

    Ahmed, Anthony Olufemi; Green, Bradley Andrew; Buckley, Peter Francis; McFarland, Megan Elizabeth

    2012-03-30

    There remains debate about whether personality disorders (PDs) are better conceptualized as categorical, reflecting discontinuity from normal personality; or dimensional, existing on a continuum of severity with normal personality traits. Evidence suggests that most PDs are dimensional but there is a lack of consensus about the structure of Cluster A disorders. Taxometric methods are adaptable to investigating the taxonic status of psychiatric disorders. The current study investigated the latent structure of paranoid and schizoid PDs in an epidemiological sample (N=43,093) drawn from the National Epidemiological Survey on Alcohol and Related Conditions (NESARC) using taxometric analyses. The current study used taxometric methods to analyze three indicators of paranoid PD - mistrust, resentment, and functional disturbance - and three indicators of schizoid PD - emotional detachment, social withdrawal, and functional disturbance - derived factor analytically. Overall, taxometrics supported a dimensional rather than taxonic structure for paranoid and schizoid PDs through examination of taxometric graphs and comparative curve fit indices. Dimensional models of paranoid and schizoid PDs better predicted social functioning, role-emotional, and mental health scales in the survey than categorical models. Evidence from the current study supports recent efforts to represent paranoid and schizoid PDs as well as other PDs along broad personality dimensions.

  3. Application of Polar Cap (PC) indices in analyses and forecasts of geophysical conditions

    Science.gov (United States)

    Stauning, Peter

    2016-07-01

    The Polar Cap (PC) indices could be considered to represent the input of power from the solar wind to the Earth's magnetosphere. The indices have been used to analyse interplanetary electric fields, effects of solar wind pressure pulses, cross polar cap voltages and polar cap diameter, ionospheric Joule heating, and other issues of polar cap dynamics. The PC indices have also been used to predict auroral electrojet intensities and global auroral power as well as ring current intensities. For specific space weather purposes the PC indices could be used to forecast substorm development and predict associated power line disturbances in the subauroral regions. The presentation shall outline the general background for applying the PC indices in analyses or forecasts of solar wind-magnetosphere-ionosphere interactions and provide illustrative examples of the use of the Polar Cap indices in specific cases.

  4. Integrated Field Analyses of Thermal Springs

    Science.gov (United States)

    Shervais, K.; Young, B.; Ponce-Zepeda, M. M.; Rosove, S.

    2011-12-01

    A group of undergraduate researchers through the SURE internship offered by the Southern California Earthquake Center (SCEC) have examined thermal springs in southern Idaho and northern Utah, as well as mud volcanoes in the Salton Sea, California. We used an integrated approach to estimate the setting and maximum temperature, including water chemistry, iPad-based image and database management, microbiology, and gas analyses with a modified Giggenbach sampler. All springs were characterized using GISRoam (tmCogent3D). We are performing geothermometry calculations as well as comparisons with temperature gradient data on the results while also analyzing biological samples. Analyses include water temperature, pH, electrical conductivity, and TDS measured in the field. Each sample is sealed and chilled and delivered to a water lab within 12 hours. Temperatures are continuously monitored with the use of Solinst Levelogger Juniors. Through partnership with a local community college geology club, we receive results on a monthly basis and are able to process initial data earlier in order to evaluate data over a longer time span. The springs and mudpots contained microbial organisms which were analyzed using methods of single colony isolation, polymerase chain reaction, and DNA sequencing, showing the impact of the organisms on the springs or vice versa. Soon we will collect gas samples at sites that show signs of gas, using a hybrid of the Giggenbach method and our own methods. Drawing gas samples has proven a challenge; however, we devised a method to draw out gas samples utilizing the Giggenbach flask, transferring samples to glass blood sample tubes, replacing NaOH in the Giggenbach flask, and evacuating it in the field for multiple samples using a vacuum pump. We also use a floating platform devised to carry and lower a levelogger, and an in-line fuel filter from a tractor in order to keep mud from contaminating the equipment. The use of raster

  5. On study design in neuroimaging heritability analyses

    Science.gov (United States)

    Koran, Mary Ellen; Li, Bo; Jahanshad, Neda; Thornton-Wells, Tricia A.; Glahn, David C.; Thompson, Paul M.; Blangero, John; Nichols, Thomas E.; Kochunov, Peter; Landman, Bennett A.

    2014-03-01

    Imaging genetics is an emerging methodology that combines genetic information with imaging-derived metrics to understand how genetic factors impact observable structural, functional, and quantitative phenotypes. Many of the most well-known genetic studies are based on Genome-Wide Association Studies (GWAS), which use large populations of related or unrelated individuals to associate traits and disorders with individual genetic factors. Merging imaging and genetics may potentially lead to improved power of association in GWAS because imaging traits may be more sensitive phenotypes, being closer to underlying genetic mechanisms, and their quantitative nature inherently increases power. We are developing SOLAR-ECLIPSE (SE) imaging genetics software which is capable of performing genetic analyses with both large-scale quantitative trait data and family structures of variable complexity. This program can estimate the contribution of genetic commonality among related subjects to a given phenotype, and essentially answer the question of whether or not the phenotype is heritable. This central factor of interest, heritability, offers bounds on the direct genetic influence over observed phenotypes. In order for a trait to be a good phenotype for GWAS, it must be heritable: at least some proportion of its variance must be due to genetic influences. A variety of family structures are commonly used for estimating heritability, yet the variability and biases for each as a function of the sample size are unknown. Herein, we investigate the ability of SOLAR to accurately estimate heritability models based on imaging data simulated using Monte Carlo methods implemented in R. We characterize the bias and the variability of heritability estimates from SOLAR as a function of sample size and pedigree structure (including twins, nuclear families, and nuclear families with grandparents).
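    The simulation approach described above can be miniaturized: generate twin phenotypes under an additive genes/common environment/unique environment (ACE) model and recover heritability with Falconer's classical estimate h2 = 2*(rMZ - rDZ). This sketch uses numpy in place of SOLAR or R, and every parameter value is invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def simulate_twin_pairs(n_pairs, h2, c2, shared_genes):
        """Phenotype = A + C + E, total variance 1.
        shared_genes = 1.0 for MZ pairs, 0.5 for DZ pairs."""
        e2 = 1.0 - h2 - c2
        # Additive genetic part, split into shared and twin-specific pieces.
        a_shared = rng.normal(size=n_pairs) * np.sqrt(h2 * shared_genes)
        a1 = a_shared + rng.normal(size=n_pairs) * np.sqrt(h2 * (1 - shared_genes))
        a2 = a_shared + rng.normal(size=n_pairs) * np.sqrt(h2 * (1 - shared_genes))
        c = rng.normal(size=n_pairs) * np.sqrt(c2)            # common environment
        t1 = a1 + c + rng.normal(size=n_pairs) * np.sqrt(e2)  # twin 1 phenotype
        t2 = a2 + c + rng.normal(size=n_pairs) * np.sqrt(e2)  # twin 2 phenotype
        return t1, t2

    n = 50_000
    r_mz = np.corrcoef(*simulate_twin_pairs(n, h2=0.6, c2=0.2, shared_genes=1.0))[0, 1]
    r_dz = np.corrcoef(*simulate_twin_pairs(n, h2=0.6, c2=0.2, shared_genes=0.5))[0, 1]

    h2_hat = 2 * (r_mz - r_dz)   # Falconer's estimate; true value here is 0.6
    print(round(h2_hat, 2))
    ```

    Repeating this at smaller pedigree sizes is exactly how the bias and variability of heritability estimates can be characterized as a function of sample size, which is the question the study poses for SOLAR's variance-component models.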

  6. Pipeline for macro- and microarray analyses

    Directory of Open Access Journals (Sweden)

    R. Vicentini

    2007-05-01

    The pipeline for macro- and microarray analyses (PMmA) is a set of scripts with a web interface developed to analyze DNA array data generated by array image quantification software. PMmA is designed for use with single- or double-color array data and to work as a pipeline in five classes (data format, normalization, data analysis, clustering, and array maps). It can also be used as a plugin in the BioArray Software Environment, an open-source database for array analysis, or used in a local version of the web service. All scripts in PMmA were developed in the PERL programming language and statistical analysis functions were implemented in the R statistical language. Consequently, our package is a platform-independent software. Our algorithms can correctly select almost 90% of the differentially expressed genes, showing a superior performance compared to other methods of analysis. The pipeline software has been applied to 1536 expressed sequence tags macroarray public data of sugarcane exposed to cold for 3 to 48 h. PMmA identified thirty cold-responsive genes previously unidentified in this public dataset. Fourteen genes were up-regulated, two had a variable expression and the other fourteen were down-regulated in the treatments. These new findings certainly were a consequence of using a superior statistical analysis approach, since the original study did not take into account the dependence of data variability on the average signal intensity of each gene. The web interface, supplementary information, and the package source code are available, free, to non-commercial users at http://ipe.cbmeg.unicamp.br/pub/PMmA.

  7. Pipeline for macro- and microarray analyses.

    Science.gov (United States)

    Vicentini, R; Menossi, M

    2007-05-01

    The pipeline for macro- and microarray analyses (PMmA) is a set of scripts with a web interface developed to analyze DNA array data generated by array image quantification software. PMmA is designed for use with single- or double-color array data and to work as a pipeline in five classes (data format, normalization, data analysis, clustering, and array maps). It can also be used as a plugin in the BioArray Software Environment, an open-source database for array analysis, or used in a local version of the web service. All scripts in PMmA were developed in the PERL programming language and statistical analysis functions were implemented in the R statistical language. Consequently, our package is platform-independent software. Our algorithms can correctly select almost 90% of the differentially expressed genes, showing superior performance compared to other methods of analysis. The pipeline software has been applied to public macroarray data of 1536 expressed sequence tags of sugarcane exposed to cold for 3 to 48 h. PMmA identified thirty cold-responsive genes previously unidentified in this public dataset. Fourteen genes were up-regulated, two had variable expression and the other fourteen were down-regulated in the treatments. These new findings certainly were a consequence of using a superior statistical analysis approach, since the original study did not take into account the dependence of data variability on the average signal intensity of each gene. The web interface, supplementary information, and the package source code are available, free, to non-commercial users at http://ipe.cbmeg.unicamp.br/pub/PMmA. PMID:17464422
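Both PMmA records attribute the pipeline's gains to modeling the dependence of data variability on average signal intensity. One standard way to do this (an intensity-dependent z-score, sketched here for illustration; this is not PMmA's actual algorithm and all names are hypothetical) is to bin genes by average intensity A and standardize the log-ratio M within each bin:

```python
import numpy as np

def intensity_dependent_z(M, A, n_bins=10):
    """Z-score each gene's log-ratio M within bins of average intensity A.

    Standardizing within equal-occupancy intensity bins accounts for the
    intensity dependence of the variance; genes with |z| above a cutoff
    are candidate differentially expressed genes.
    """
    order = np.argsort(A)
    bins = np.array_split(order, n_bins)   # equal-occupancy intensity bins
    z = np.empty_like(M, dtype=float)
    for idx in bins:
        mu, sd = M[idx].mean(), M[idx].std(ddof=1)
        z[idx] = (M[idx] - mu) / sd
    return z
```

A gene with a modest absolute log-ratio can thus be called significant at high intensity (where noise is small) while the same log-ratio is ignored at low intensity.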

  8. Reclaiming the individual from Hofstede's ecological analysis--a 20-year odyssey: comment on Oyserman et al. (2002).

    Science.gov (United States)

    Bond, Michael Harris

    2002-01-01

    D. Oyserman, H. M. Coon, and M. Kemmelmeier (2002) challenge the stereotype that European Americans are more individualistic and less collectivistic than persons from most other ethnic groups. The author contends that this stereotype took firm empirical root with G. Hofstede's (1980) monumental publication identifying the United States as the most individualistic of his then 40 nations. This empirical designation arose because of challengeable decisions Hofstede made about the analysis of his data and the labeling of his dimensions. The conflation of concepts under the rubric of cultural individualism plus psychologists' unwarranted psychologizing of the construct then combined with Hofstede's empirical location of America to set a 20-year agenda for data collection. Oyserman et al. disentangle and organize this mass of studies, enabling the discipline of cross-cultural psychology to forge ahead in more productive directions, less reliant on previous assumptions and measures. PMID:11843548

  9. Trichodysplasia Spinulosa in a 20-Month-Old Girl With a Good Response to Topical Cidofovir 1%.

    Science.gov (United States)

    Santesteban, Raquel; Feito, Marta; Mayor, Ander; Beato, María; Ramos, Esther; de Lucas, Raúl

    2015-12-01

    Trichodysplasia spinulosa (TS) is a rare entity, characterized by a follicular digitate keratosis predominantly affecting the face and variable degrees of hair loss, most severely facial hair, that occurs in immunosuppressed individuals, and is considered to be a viral infection caused by a human polyomavirus, the "TS-associated polyomavirus." Histologically it is characterized by hair follicles with excessive inner root-sheath differentiation and intraepithelial viral inclusions. Correlation of these findings with clinical features is required for diagnosis. Treatment with antiviral agents appears to be the most effective. We report the occurrence of TS in a 20-month-old girl with multivisceral transplantation due to short-bowel syndrome secondary to intestinal atresia and gastroschisis. The patient was treated with cidofovir 1% cream, with significant improvement and without any adverse effects. We describe the youngest patient, to our knowledge, with TS. PMID:26620059

  10. Reclaiming the individual from Hofstede's ecological analysis--a 20-year odyssey: comment on Oyserman et al. (2002).

    Science.gov (United States)

    Bond, Michael Harris

    2002-01-01

    D. Oyserman, H. M. Coon, and M. Kemmelmeier (2002) challenge the stereotype that European Americans are more individualistic and less collectivistic than persons from most other ethnic groups. The author contends that this stereotype took firm empirical root with G. Hofstede's (1980) monumental publication identifying the United States as the most individualistic of his then 40 nations. This empirical designation arose because of challengeable decisions Hofstede made about the analysis of his data and the labeling of his dimensions. The conflation of concepts under the rubric of cultural individualism plus psychologists' unwarranted psychologizing of the construct then combined with Hofstede's empirical location of America to set a 20-year agenda for data collection. Oyserman et al. disentangle and organize this mass of studies, enabling the discipline of cross-cultural psychology to forge ahead in more productive directions, less reliant on previous assumptions and measures.

  11. Seismic criteria studies and analyses. Quarterly progress report No. 3. [LMFBR

    Energy Technology Data Exchange (ETDEWEB)

    1975-01-03

    Information is presented concerning the extent to which vibratory motions at the subsurface foundation level might differ from motions at the ground surface and the effects of the various subsurface materials on the overall Clinch River Breeder Reactor site response; seismic analyses of LMFBR type reactors to establish analytical procedures for predicting structure stresses and deformations; and aspects of the current technology regarding the representation of energy losses in nuclear power plants as equivalent viscous damping.

  12. Evaluation of mixed dentition analyses in north Indian population: A comparative study

    OpenAIRE

    Ravi Kumar Goyal; Vijay P Sharma; Pradeep Tandon; Amit Nagar; Gyan P Singh

    2014-01-01

    Introduction: Mixed dentition regression equation analyses (Moyers, Tanaka-Johnston) are based on European populations, so the reliability of these methods in other populations is questionable. Materials and Methods: The present study was conducted on a total of 260 study models. This study was done in two phases. In the first phase, linear regression equations were made. In the second phase, comparison of actual values of the sum of mesiodistal widths of the canine, first and second premolars with the predicte...

  13. Analyses of hypomethylated oil palm gene space.

    Science.gov (United States)

    Low, Eng-Ti L; Rosli, Rozana; Jayanthi, Nagappan; Mohd-Amin, Ab Halim; Azizi, Norazah; Chan, Kuang-Lim; Maqbool, Nauman J; Maclean, Paul; Brauning, Rudi; McCulloch, Alan; Moraga, Roger; Ong-Abdullah, Meilina; Singh, Rajinder

    2014-01-01

    Demand for palm oil has been increasing by an average of ∼8% over the past decade and currently accounts for about 59% of the world's vegetable oil market. This drives the need to increase palm oil production. Nevertheless, due to the increasing need for sustainable production, it is imperative to increase productivity rather than the area cultivated. Studies on the oil palm genome are essential to help identify genes or markers that are associated with important processes or traits, such as flowering, yield and disease resistance. To achieve this, 294,115 and 150,744 sequences from the hypomethylated or gene-rich regions of the Elaeis guineensis and E. oleifera genomes were sequenced and assembled into contigs. An additional 16,427 shot-gun sequences and 176 bacterial artificial chromosomes (BAC) were also generated to check the quality of the libraries constructed. Comparison of these sequences revealed that although the methylation-filtered libraries were sequenced at low coverage, they still tagged at least 66% of the RefSeq-supported genes in the BAC and had a filtration power of at least 2.0. A total of 33,752 microsatellites and 40,820 high-quality single nucleotide polymorphism (SNP) markers were identified. These represent the most comprehensive collection of microsatellites and SNPs to date and would be an important resource for genetic mapping and association studies. The gene models predicted from the assembled contigs were mined for genes of interest, and 242, 65 and 14 oil palm transcription factors, resistance genes and miRNAs were identified, respectively. Examples of the transcription factors tagged include those associated with floral development and tissue culture, such as homeodomain proteins, MADS, Squamosa and Apetala2. The E. guineensis and E. oleifera hypomethylated sequences provide an important resource to understand the molecular mechanisms associated with important agronomic traits in oil palm. PMID:24497974

  14. Analyses of hypomethylated oil palm gene space.

    Directory of Open Access Journals (Sweden)

    Eng-Ti L Low

    Full Text Available Demand for palm oil has been increasing by an average of ∼8% over the past decade and currently accounts for about 59% of the world's vegetable oil market. This drives the need to increase palm oil production. Nevertheless, due to the increasing need for sustainable production, it is imperative to increase productivity rather than the area cultivated. Studies on the oil palm genome are essential to help identify genes or markers that are associated with important processes or traits, such as flowering, yield and disease resistance. To achieve this, 294,115 and 150,744 sequences from the hypomethylated or gene-rich regions of the Elaeis guineensis and E. oleifera genomes were sequenced and assembled into contigs. An additional 16,427 shot-gun sequences and 176 bacterial artificial chromosomes (BAC) were also generated to check the quality of the libraries constructed. Comparison of these sequences revealed that although the methylation-filtered libraries were sequenced at low coverage, they still tagged at least 66% of the RefSeq-supported genes in the BAC and had a filtration power of at least 2.0. A total of 33,752 microsatellites and 40,820 high-quality single nucleotide polymorphism (SNP) markers were identified. These represent the most comprehensive collection of microsatellites and SNPs to date and would be an important resource for genetic mapping and association studies. The gene models predicted from the assembled contigs were mined for genes of interest, and 242, 65 and 14 oil palm transcription factors, resistance genes and miRNAs were identified, respectively. Examples of the transcription factors tagged include those associated with floral development and tissue culture, such as homeodomain proteins, MADS, Squamosa and Apetala2. The E. guineensis and E. oleifera hypomethylated sequences provide an important resource to understand the molecular mechanisms associated with important agronomic traits in oil palm.

  15. A conceptual DFT approach towards analysing toxicity

    Indian Academy of Sciences (India)

    U Sarkar; D R Roy; P K Chattaraj; R Parthasarathi; J Padmanabhan; V Subramanian

    2005-09-01

    The applicability of DFT-based descriptors for the development of toxicological structure-activity relationships is assessed. Emphasis in the present study is on the quality of DFT-based descriptors for the development of toxicological QSARs and, more specifically, on the potential of the electrophilicity concept in predicting toxicity of benzidine derivatives and the series of polyaromatic hydrocarbons (PAH) expressed in terms of their biological activity data (pIC50). First, two benzidine derivatives, which act as electron-donating agents in their interactions with biomolecules, are considered. Overall toxicity in general and the most probable site of reactivity in particular are effectively described by the global and local electrophilicity parameters respectively. Interaction of two benzidine derivatives with nucleic acid (NA) bases/selected base pairs is determined using Parr’s charge transfer formula. The experimental biological activity data (pIC50) for the family of PAH, namely polychlorinated dibenzofurans (PCDF), polyhalogenated dibenzo-p-dioxins (PHDD) and polychlorinated biphenyls (PCB), are taken as dependent variables and the HF energy (E), along with DFT-based global and local descriptors, viz., the electrophilicity index (ω) and local electrophilic power (ω+) respectively, are taken as independent variables. Fairly good correlation is obtained, showing the significance of the selected descriptors in the QSAR on toxins that act as electron acceptors in the presence of biomolecules. Effects of population analysis schemes in the calculation of Fukui functions as well as that of solvation are probed. Similarly, some electron-donor aliphatic amines are studied in the present work. We see that global and local electrophilicities along with the HF energy are adequate in explaining the toxicity of several substances.

  16. Quantitative DNA Analyses for Airborne Birch Pollen.

    Directory of Open Access Journals (Sweden)

    Isabell Müller-Germann

    Full Text Available Birch trees produce large amounts of highly allergenic pollen grains that are distributed by wind and impact human health by causing seasonal hay fever, pollen-related asthma, and other allergic diseases. Traditionally, pollen forecasts are based on conventional microscopic counting techniques that are labor-intensive and limited in the reliable identification of species. Molecular biological techniques provide an alternative approach that is less labor-intensive and enables identification of any species by its genetic fingerprint. A particularly promising method is quantitative Real-Time polymerase chain reaction (qPCR), which can be used to determine the number of DNA copies and thus pollen grains in air filter samples. During the birch pollination season in 2010 in Mainz, Germany, we collected air filter samples of fine (<3 μm) and coarse air particulate matter. These were analyzed by qPCR using two different primer pairs: one for a single-copy gene (BP8) and the other for a multi-copy gene (ITS). The BP8 gene was better suited for reliable qPCR results, and the qPCR results obtained for coarse particulate matter were well correlated with the birch pollen forecasting results of the regional air quality model COSMO-ART. As expected due to the size of birch pollen grains (~23 μm), the concentration of DNA in fine particulate matter was lower than in the coarse particle fraction. For the ITS region the factor was 64, while for the single-copy gene BP8 it was only 51. The possible presence of so-called sub-pollen particles in the fine particle fraction is, however, interesting even in low concentrations. These particles are known to be highly allergenic, reach deep into airways and often cause severe health problems. In conclusion, the results of this exploratory study open up the possibility of predicting and quantifying the pollen concentration in the atmosphere more precisely in the future.
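qPCR converts a measured quantification cycle (Ct) into a copy number through a standard curve fitted to a dilution series, Ct = slope · log10(copies) + intercept. A generic sketch of that conversion (the slope and intercept values are illustrative defaults, not fitted values from this study):

```python
def copies_from_ct(ct, slope=-3.32, intercept=38.0):
    """Estimate DNA copy number from a qPCR Ct value via a standard curve.

    The curve Ct = slope * log10(copies) + intercept is fitted from a
    dilution series; a slope of -3.32 corresponds to 100 % efficiency.
    """
    return 10 ** ((ct - intercept) / slope)

def amplification_efficiency(slope):
    """PCR efficiency implied by the standard-curve slope (1.0 = 100 %)."""
    return 10 ** (-1.0 / slope) - 1.0
```

With these defaults, a Ct of 38 corresponds to a single copy, and each 3.32-cycle decrease corresponds to a ten-fold increase in template.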

  17. Aircraft noise prediction

    Science.gov (United States)

    Filippone, Antonio

    2014-07-01

    This contribution addresses the state-of-the-art in the field of aircraft noise prediction, simulation and minimisation. The point of view taken in this context is that of comprehensive models that couple the various aircraft systems with the acoustic sources, the propagation and the flight trajectories. After an exhaustive review of the present predictive technologies in the relevant fields (airframe, propulsion, propagation, aircraft operations, trajectory optimisation), the paper addresses items for further research and development. Examples are shown for several airplanes, including the Airbus A319-100 (CFM engines), the Bombardier Dash8-Q400 (PW150 engines, Dowty R408 propellers) and the Boeing B737-800 (CFM engines). Predictions are done with the flight mechanics code FLIGHT. The transfer function between flight mechanics and the noise prediction is discussed in some detail, along with the numerical procedures for validation and verification. Some code-to-code comparisons are shown. It is contended that the field of aircraft noise prediction has not yet reached a sufficient level of maturity. In particular, some parametric effects cannot be investigated, issues of accuracy are not currently addressed, and validation standards are still lacking.

  18. Solar Cycle Prediction

    CERN Document Server

    Petrovay, K

    2010-01-01

    A review of solar cycle prediction methods and their performance is given, including forecasts for cycle 24 and focusing on aspects of the solar cycle prediction problem that have a bearing on dynamo theory. The scope of the review is further restricted to the issue of predicting the amplitude (and optionally the epoch) of an upcoming solar maximum no later than right after the start of the given cycle. Prediction methods form three main groups. Precursor methods rely on the value of some measure of solar activity or magnetism at a specified time to predict the amplitude of the following solar maximum. Their implicit assumption is that each numbered solar cycle is a consistent unit in itself, while solar activity seems to consist of a series of much less tightly intercorrelated individual cycles. Extrapolation methods, in contrast, are based on the premise that the physical process giving rise to the sunspot number record is statistically homogeneous, i.e., the mathematical regularities underlying its variati...

  19. A combined computational-experimental analyses of selected metabolic enzymes in Pseudomonas species

    Directory of Open Access Journals (Sweden)

    Deepak Perumal, Chu Sing Lim, Vincent T.K. Chow, Kishore R. Sakharkar, Meena K. Sakharkar

    2008-01-01

    Full Text Available Comparative genomic analysis has revolutionized our ability to predict the metabolic subsystems that occur in newly sequenced genomes, and to explore the functional roles of the set of genes within each subsystem. These computational predictions can considerably reduce the volume of experimental studies required to assess basic metabolic properties of multiple bacterial species. However, experimental validations are still required to resolve the apparent inconsistencies in the predictions by multiple resources. Here, we present combined computational-experimental analyses on eight completely sequenced Pseudomonas species. Comparative pathway analyses reveal that several pathways within the Pseudomonas species show high plasticity and versatility. Potential bypasses in 11 metabolic pathways were identified. We further confirmed the presence of the enzyme O-acetylhomoserine (thiol)-lyase (EC 2.5.1.49) in P. syringae pv. tomato that revealed inconsistent annotations in KEGG and in the recently published SYSTOMONAS database. These analyses connect and integrate systematic data generation, computational data interpretation, and experimental validation and represent a synergistic and powerful means for conducting biological research.

  20. Summary Report of Laboratory Critical Experiment Analyses Performed for the Disposal Criticality Analysis Methodology

    Energy Technology Data Exchange (ETDEWEB)

    J. Scaglione

    1999-09-09

    This report, ''Summary Report of Laboratory Critical Experiment Analyses Performed for the Disposal Criticality Analysis Methodology'', contains a summary of the laboratory critical experiment (LCE) analyses used to support the validation of the disposal criticality analysis methodology. The objective of this report is to present a summary of the LCE analyses' results. These results demonstrate the ability of MCNP to accurately predict the critical multiplication factor (keff) for fuel with different configurations. Results from the LCE evaluations will support the development and validation of the criticality models used in the disposal criticality analysis methodology. These models and their validation have been discussed in the ''Disposal Criticality Analysis Methodology Topical Report'' (CRWMS M&O 1998a).

  1. Mortality of atomic bomb survivors predicted from laboratory animals

    Science.gov (United States)

    Carnes, Bruce A.; Grahn, Douglas; Hoel, David

    2003-01-01

    Exposure, pathology and mortality data for mice, dogs and humans were examined to determine whether accurate interspecies predictions of radiation-induced mortality could be achieved. The analyses revealed that (1) days of life lost per unit dose can be estimated for a species even without information on radiation effects in that species, and (2) accurate predictions of age-specific radiation-induced mortality in beagles and the atomic bomb survivors can be obtained from a dose-response model for comparably exposed mice. These findings illustrate the value of comparative mortality analyses and the relevance of animal data to the study of human health effects.

  2. First Super-Earth Atmosphere Analysed

    Science.gov (United States)

    2010-12-01

    The atmosphere around a super-Earth exoplanet has been analysed for the first time by an international team of astronomers using ESO's Very Large Telescope. The planet, which is known as GJ 1214b, was studied as it passed in front of its parent star and some of the starlight passed through the planet's atmosphere. We now know that the atmosphere is either mostly water in the form of steam or is dominated by thick clouds or hazes. The results will appear in the 2 December 2010 issue of the journal Nature. The planet GJ 1214b was confirmed in 2009 using the HARPS instrument on ESO's 3.6-metre telescope in Chile (eso0950) [1]. Initial findings suggested that this planet had an atmosphere, which has now been confirmed and studied in detail by an international team of astronomers, led by Jacob Bean (Harvard-Smithsonian Center for Astrophysics), using the FORS instrument on ESO's Very Large Telescope. "This is the first super-Earth to have its atmosphere analysed. We've reached a real milestone on the road toward characterising these worlds," said Bean. GJ 1214b has a radius of about 2.6 times that of the Earth and is about 6.5 times as massive, putting it squarely into the class of exoplanets known as super-Earths. Its host star lies about 40 light-years from Earth in the constellation of Ophiuchus (the Serpent Bearer). It is a faint star [2], but it is also small, which means that the size of the planet is large compared to the stellar disc, making it relatively easy to study [3]. The planet travels across the disc of its parent star once every 38 hours as it orbits at a distance of only two million kilometres: about seventy times closer than the Earth orbits the Sun. To study the atmosphere, the team observed the light coming from the star as the planet passed in front of it [4]. During these transits, some of the starlight passes through the planet's atmosphere and, depending on the chemical composition and weather on the planet, specific wavelengths of light are

  3. Prediction model Perla

    International Nuclear Information System (INIS)

    The prediction model Perla is a tool for evaluating the ecological status of streams. It enables comparison against a standard, which is formed by a dataset of sites from the whole area of the Czech Republic that were influenced by human activity as little as possible. Eight variables were used for prediction (distance from source, elevation, stream width and depth, slope, substrate roughness, longitude and latitude); all of them were statistically important for benthic communities. The results correspond not to ecoregions but rather to stream size (type). The indices EQItaxonu, EQISi, EQIASPT and EQIH appear applicable for assessment using the prediction model and for differentiating natural and human stress. Limiting values of the indices for good ecological status are suggested. In contrast, using the EQIEPT and EQIekoprof indices would be possible only with difficulty. (authors)

  4. Partially predictable chaos

    CERN Document Server

    Wernecke, Hendrik; Gros, Claudius

    2016-01-01

    For a chaotic system pairs of initially close-by trajectories become eventually fully uncorrelated on the attracting set. This process of decorrelation is split into an initial decrease characterized by the maximal Lyapunov exponent and a subsequent diffusive process on the chaotic attractor causing the final loss of predictability. The time scales of both processes can be either of the same or of very different orders of magnitude. In the latter case the two trajectories linger within a finite but small distance (with respect to the overall size of the attractor) for exceedingly long times and therefore remain partially predictable. We introduce a 0-1 indicator for chaos capable of describing this scenario, arguing, in addition, that the chaotic closed braids found close to a period-doubling transition are generically partially predictable.
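The initial decorrelation rate described here is set by the maximal Lyapunov exponent. As an illustrative aside (not code from the paper), the exponent of a one-dimensional chaotic map can be estimated by averaging the log of the local stretching rate |f'(x)| along an orbit; for the logistic map at r = 4 the exact value is known to be ln 2:

```python
import math

def logistic_lyapunov(r=4.0, x0=0.2, n_iter=100_000, n_discard=1_000):
    """Estimate the maximal Lyapunov exponent of the logistic map
    x -> r*x*(1 - x) by averaging log|f'(x)| = log|r*(1 - 2x)| along
    an orbit. For r = 4 the exact value is ln 2 ≈ 0.693.
    """
    x = x0
    for _ in range(n_discard):              # let transients die out
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n_iter):
        total += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
        if x <= 0.0 or x >= 1.0:            # guard against rare float collapse
            x = x0
    return total / n_iter
```

The diffusive, attractor-scale decorrelation the paper emphasizes is the part this single-exponent picture misses, which is why the authors need a separate 0-1 indicator.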

  5. Predicting the Sunspot Cycle

    Science.gov (United States)

    Hathaway, David H.

    2009-01-01

    The 11-year sunspot cycle was discovered by an amateur astronomer in 1844. Visual and photographic observations of sunspots have been made by both amateurs and professionals over the last 400 years. These observations provide key statistical information about the sunspot cycle that allows for predictions of future activity. However, sunspots and the sunspot cycle are magnetic in nature. For the last 100 years these magnetic measurements have been acquired and used exclusively by professional astronomers to gain new information about the nature of the solar activity cycle. Recently, magnetic dynamo models have evolved to the stage where they can assimilate past data and provide predictions. With the advent of the Internet and open data policies, amateurs now have equal access to the same data used by professionals and equal opportunities to contribute (but, alas, without pay). This talk will describe some of the more useful prediction techniques and reveal what they say about the intensity of the upcoming sunspot cycle.

  6. Predictive Techniques for Spacecraft Cabin Air Quality Control

    Science.gov (United States)

    Perry, J. L.; Cromes, Scott D. (Technical Monitor)

    2001-01-01

    As assembly of the International Space Station (ISS) proceeds, predictive techniques are used to determine the best approach for handling a variety of cabin air quality challenges. These techniques use equipment offgassing data collected from each ISS module before flight to characterize the trace chemical contaminant load. Combined with crew metabolic loads, these data serve as input to a predictive model for assessing the capability of the onboard atmosphere revitalization systems to handle the overall trace contaminant load as station assembly progresses. The techniques for predicting in-flight air quality are summarized along with results from early ISS mission analyses. Results from ground-based analyses of in-flight air quality samples are compared to the predictions to demonstrate the technique's relative conservatism.
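Trace-contaminant predictions of this kind are typically built on a well-mixed, single-compartment mass balance. The sketch below is a generic textbook model with hypothetical parameter values, not NASA's actual methodology:

```python
import math

def cabin_concentration(t_hours, gen_rate_mg_hr, cabin_vol_m3,
                        flow_m3_hr, efficiency):
    """Well-mixed single-compartment mass balance for one trace contaminant:

        dC/dt = G/V - (Q * eta / V) * C,   C(0) = 0

    where G is the generation (offgassing plus metabolic) rate, V the cabin
    free volume, Q the flow through the removal device, and eta its
    single-pass removal efficiency. Steady state is G / (Q * eta).
    """
    k = flow_m3_hr * efficiency / cabin_vol_m3          # removal rate, 1/hr
    c_ss = gen_rate_mg_hr / (flow_m3_hr * efficiency)   # steady state, mg/m^3
    return c_ss * (1.0 - math.exp(-k * t_hours))
```

Comparing the predicted steady-state concentration for each compound against its spacecraft maximum allowable concentration is the basic conservatism check the abstract alludes to.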

  7. It's difficult, but important, to make negative predictions.

    Science.gov (United States)

    Williams, Richard V; Amberg, Alexander; Brigo, Alessandro; Coquin, Laurence; Giddings, Amanda; Glowienke, Susanne; Greene, Nigel; Jolly, Robert; Kemper, Ray; O'Leary-Steele, Catherine; Parenty, Alexis; Spirkl, Hans-Peter; Stalford, Susanne A; Weiner, Sandy K; Wichard, Joerg

    2016-04-01

    At the confluence of predictive and regulatory toxicologies, negative predictions may be the thin green line that prevents populations from being exposed to harm. Here, two novel approaches to making confident and robust negative in silico predictions for mutagenicity (as defined by the Ames test) have been evaluated. Analyses of 12 data sets containing >13,000 compounds, showed that negative predictivity is high (∼90%) for the best approach and features that either reduce the accuracy or certainty of negative predictions are identified as misclassified or unclassified respectively. However, negative predictivity remains high (and in excess of the prevalence of non-mutagens) even in the presence of these features, indicating that they are not flags for mutagenicity. PMID:26785392
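The headline metric here, negative predictivity, is simply the fraction of predicted negatives that are truly negative, and it is only informative when it exceeds the prevalence of non-mutagens. A minimal sketch with hypothetical counts:

```python
def negative_predictivity(tn, fn):
    """Fraction of negative (non-mutagenic) predictions that are truly
    negative: NPV = TN / (TN + FN)."""
    return tn / (tn + fn)

def beats_prevalence(tn, fn, n_nonmutagens, n_total):
    """A negative call is only informative if NPV exceeds the baseline
    chance of drawing a non-mutagen at random (its prevalence)."""
    return negative_predictivity(tn, fn) > n_nonmutagens / n_total
```

The counts in the usage below are invented for illustration; the paper reports NPV around 90% across its 12 data sets.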

  8. Linguistic Structure Prediction

    CERN Document Server

    Smith, Noah A

    2011-01-01

    A major part of natural language processing now depends on the use of text data to build linguistic analyzers. We consider statistical, computational approaches to modeling linguistic structure. We seek to unify across many approaches and many kinds of linguistic structures. Assuming a basic understanding of natural language processing and/or machine learning, we seek to bridge the gap between the two fields. Approaches to decoding (i.e., carrying out linguistic structure prediction) and supervised and unsupervised learning of models that predict discrete structures as outputs are the focus.

  9. Atmospheric predictability revisited

    Directory of Open Access Journals (Sweden)

    Lizzie S. R. Froude

    2013-06-01

    Full Text Available This article examines the potential to improve numerical weather prediction (NWP) by estimating upper and lower bounds on predictability by re-visiting the original study of Lorenz (1982) but applied to the most recent version of the European Centre for Medium Range Weather Forecasts (ECMWF) forecast system, for both the deterministic and ensemble prediction systems (EPS). These bounds are contrasted with an older version of the same NWP system to see how they have changed with improvements to the NWP system. The computations were performed for the earlier seasons of DJF 1985/1986 and JJA 1986 and the later seasons of DJF 2010/2011 and JJA 2011 using the 500-hPa geopotential height field. Results indicate that for this field, we may be approaching the limit of deterministic forecasting so that further improvements might only be obtained by improving the initial state. The results also show that predictability calculations with earlier versions of the model may overestimate potential forecast skill, which may be due to insufficient internal variability in the model and because recent versions of the model are more realistic in representing the true atmospheric evolution. The same methodology is applied to the EPS to calculate upper and lower bounds of predictability of the ensemble mean forecast in order to explore how ensemble forecasting could extend the limits of the deterministic forecast. The results show that there is a large potential to improve the ensemble predictions, but for the increased predictability of the ensemble mean, there will be a trade-off in information as the forecasts will become increasingly smoothed with time. From around the 10-d forecast time, the ensemble mean begins to converge towards climatology. Until this point, the ensemble mean is able to predict the main features of the large-scale flow accurately and with high consistency from one forecast cycle to the next. By the 15-d forecast time, the ensemble mean has lost

  10. Zephyr - the prediction models

    DEFF Research Database (Denmark)

    Nielsen, Torben Skov; Madsen, Henrik; Nielsen, Henrik Aalborg;

    2001-01-01

    This paper briefly describes new models and methods for predicting the wind power output from wind farms. The system is being developed in a project which has the research organization Risø and the Department of Informatics and Mathematical Modelling (IMM) as the modelling team and all the Danish...... utilities as partners and users. The new models are evaluated for five wind farms in Denmark as well as one wind farm in Spain. It is shown that the predictions based on conditional parametric models are superior to the predictions obtained by state-of-the-art parametric models....

  11. RETAIL BANKRUPTCY PREDICTION

    Directory of Open Access Journals (Sweden)

    Johnny Pang

    2013-01-01

    Full Text Available This study reintroduces the famous discriminant functions from Edward Altman and from Begley, Ming and Watts (BMW) that were used to predict bankruptcies. We formulate three new discriminant functions which differ from Altman's models and from BMW's re-estimated Altman model. Altman's models, as well as Begley, Ming and Watts's re-estimated Altman model, apply to publicly traded firms, whereas the new models formulated in this study are based on retail companies. The three new functions provide better predictions of retail bankruptcy and minimize the chance of misclassification.

  12. The Effect of Scale Dependent Discretization on the Progressive Failure of Composite Materials Using Multiscale Analyses

    Science.gov (United States)

    Ricks, Trenton M.; Lacy, Thomas E., Jr.; Pineda, Evan J.; Bednarcyk, Brett A.; Arnold, Steven M.

    2013-01-01

    A multiscale modeling methodology, which incorporates a statistical distribution of fiber strengths into coupled micromechanics/finite element analyses, is applied to unidirectional polymer matrix composites (PMCs) to analyze the effect of mesh discretization at both the micro- and macroscales on the predicted ultimate tensile strength (UTS) and failure behavior. The NASA code FEAMAC and the ABAQUS finite element solver were used to analyze the progressive failure of a PMC tensile specimen that initiates at the repeating unit cell (RUC) level. Three different finite element mesh densities were employed, each coupled with an appropriate RUC. Multiple simulations were performed in order to assess the effect of a statistical distribution of fiber strengths on the bulk composite failure and predicted strength. The coupled effects of the micro- and macroscale discretizations were found to have a noticeable effect on the predicted UTS and the computational efficiency of the simulations.

  13. Prediction method abstracts

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-12-31

    This conference was held December 4-8, 1994 in Asilomar, California. The purpose of this meeting was to provide a forum for exchange of state-of-the-art information concerning the prediction of protein structure. Attention is focused on the following: comparative modeling; sequence-to-fold assignment; and ab initio folding.

  14. THE PREDICTION OF OVULATION

    Institute of Scientific and Technical Information of China (English)

    WANGXin-Xing; ZHAShu-Wei; WUZhou-Ya

    1989-01-01

    The authors present their work on the prediction of ovulation in forty-five women with normal menstrual cycles, for a total of 72 cycles, by several indices, including ultrasonography, BBT graph, cervical mucus, and mittelschmerz. LH peak values were also determined for reference in 20 cases (20 cycles). Results are as follows:

  15. Predicting coronary heart disease

    DEFF Research Database (Denmark)

    Sillesen, Henrik; Fuster, Valentin

    2012-01-01

    Atherosclerosis is the leading cause of death and disabling disease. Whereas risk factors are well known and constitute therapeutic targets, they are not useful for prediction of risk of future myocardial infarction, stroke, or death. Therefore, methods to identify atherosclerosis itself have bee...

  16. Predicting Lotto Numbers

    NARCIS (Netherlands)

    Jorgensen, C.B.; Suetens, S.; Tyran, J.R.

    2011-01-01

    We investigate the "law of small numbers" using a unique panel data set on lotto gambling. Because we can track individual players over time, we can measure how they react to outcomes of recent lotto drawings. We can therefore test whether they behave as if they believe they can predict lotto number

  17. Prediction of resonant oscillation

    DEFF Research Database (Denmark)

    2010-01-01

    The invention relates to methods for prediction of parametric rolling of vessels. The methods are based on frequency domain and time domain information in order to set up a detector able to trigger an alarm when parametric roll is likely to occur. The methods use measurements of e.g. pitch and roll...

  18. Predicting service life margins

    Science.gov (United States)

    Egan, G. F.

    1971-01-01

    Margins are developed for equipment susceptible to malfunction due to excessive time or operating cycles, and for identifying limited-life equipment so that monitoring and replacement are accomplished before hardware failure. The method applies to hardware where the design service life is established and where a reasonable prediction of expected usage can be made.

  19. Gate valve performance prediction

    International Nuclear Information System (INIS)

    The Electric Power Research Institute is carrying out a program to improve the performance prediction methods for motor-operated valves. As part of this program, an analytical method to predict the stem thrust required to stroke a gate valve has been developed and has been assessed against data from gate valve tests. The method accounts for the loads applied to the disc by fluid flow and for the detailed mechanical interaction of the stem, disc, guides, and seats. To support development of the method, two separate-effects test programs were carried out. One test program determined friction coefficients for contacts between gate valve parts by using material specimens in controlled environments. The other test program investigated the interaction of the stem, disc, guides, and seat using a special fixture with full-sized gate valve parts. The method has been assessed against flow-loop and in-plant test data. These tests include valve sizes from 3 to 18 in. and cover a considerable range of flow, temperature, and differential pressure. Stem thrust predictions from the method bound the measured results. In some cases, the bounding predictions are substantially higher than the stem loads required for valve operation, as a result of the bounding nature of the friction coefficients in the method.

  20. Predicting Classroom Success.

    Science.gov (United States)

    Kessler, Ronald P.

    A study was conducted at Rancho Santiago College (RSC) to identify personal and academic factors that are predictive of students' success in their courses. The study examined the following possible predictors of success: language and math test scores; background characteristics; length of time out of high school; high school background; college…

  1. Predicting Intrinsic Motivation

    Science.gov (United States)

    Martens, Rob; Kirschner, Paul A.

    2004-01-01

    Intrinsic motivation can be predicted from participants' perceptions of the social environment and the task environment (Ryan & Deci, 2000) in terms of control, relatedness and competence. To determine the degree of independence of these factors, 251 students in higher vocational education (physiotherapy and hotel management) indicated the extent to…

  2. Predictability of critical transitions

    Science.gov (United States)

    Zhang, Xiaozhu; Kuehn, Christian; Hallerberg, Sarah

    2015-11-01

    Critical transitions in multistable systems have been discussed as models for a variety of phenomena ranging from the extinctions of species to socioeconomic changes and climate transitions between ice ages and warm ages. From bifurcation theory we can expect certain critical transitions to be preceded by a decreased recovery from external perturbations. The consequences of this critical slowing down have been observed as an increase in variance and autocorrelation prior to the transition. However, especially in the presence of noise, it is not clear whether these changes in observation variables are statistically relevant such that they could be used as indicators for critical transitions. In this contribution we investigate the predictability of critical transitions in conceptual models. We study the quadratic integrate-and-fire model and the van der Pol model under the influence of external noise. We focus especially on the statistical analysis of the success of predictions and the overall predictability of the system. The performance of different indicator variables turns out to be dependent on the specific model under study and the conditions of accessing it. Furthermore, we study the influence of the magnitude of transitions on the predictive performance.
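The rising variance and lag-1 autocorrelation before a transition that the abstract describes can be sketched generically. The snippet below computes both rolling indicators on a toy AR(1) process whose coefficient drifts toward 1, an assumed stand-in for critical slowing down; it is not the quadratic integrate-and-fire or van der Pol models of the study.

```python
import numpy as np

def early_warning_indicators(x, window):
    """Rolling variance and lag-1 autocorrelation over a sliding window:
    the classic critical-slowing-down indicators."""
    n = len(x) - window + 1
    var = np.empty(n)
    ac1 = np.empty(n)
    for i in range(n):
        w = x[i:i + window]
        var[i] = np.var(w)
        d = w - w.mean()                       # demean the window
        denom = (d[:-1] ** 2).sum()
        ac1[i] = (d[:-1] * d[1:]).sum() / denom if denom > 0 else 0.0
    return var, ac1

# Toy example: an AR(1) process whose coefficient drifts toward 1
# mimics a system approaching a critical transition.
rng = np.random.default_rng(0)
x = np.zeros(2000)
for t in range(1, len(x)):
    phi = 0.2 + 0.7 * t / len(x)               # slowing down as phi -> 1
    x[t] = phi * x[t - 1] + rng.normal()
var, ac1 = early_warning_indicators(x, window=400)
```

In this toy run both indicators should rise toward the end of the series, which is the early-warning signature whose statistical reliability the study evaluates.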

  3. Towards Predictive Association Theories

    DEFF Research Database (Denmark)

    Kontogeorgis, Georgios; Tsivintzelis, Ioannis; Michelsen, Michael Locht;

    2011-01-01

    Association equations of state like SAFT, CPA and NRHB have been previously applied to many complex mixtures. In this work we focus on two of these models, the CPA and the NRHB equations of state and the emphasis is on the analysis of their predictive capabilities for a wide range of applications...

  4. PREDICTION OF OVULATION

    Institute of Scientific and Technical Information of China (English)

    LIUYong; CHENSu-Ru; ZHOUJin-Ting; LIUJi-Ying

    1989-01-01

    The purpose of this research is: 1) to observe the secretory pattern of five reproductive hormones in Chinese women with normal menstrual cycles, especially in the pre-ovulatory period; 2) to study whether urinary LH measurement could be used instead of serum LH measurement; 3) to evaluate the significance of the LH-EIA kit (Right-Day) for ovulation prediction.

  5. Prediction in OLAP Cube

    Directory of Open Access Journals (Sweden)

    Abdellah Sair

    2012-05-01

    Full Text Available Data warehouses now offer an adequate solution for managing large volumes of data. Online analysis (OLAP) supports data warehouses in the decision-support process, and visualization tools offer ways to structure and explore the warehouse. Data mining, on the other hand, allows the extraction of knowledge through description, classification, explanation and prediction techniques. It is therefore possible to understand the data better by coupling online analysis with data mining in a unified analysis process. Continuing the work of R. Ben Messaoud, in which the coupling of online analysis and data mining covers description, visualization, classification and explanation, we propose extending OLAP with prediction capabilities. To integrate prediction into the heart of OLAP, an approach based on machine learning with regression trees is proposed, in order to predict the value of an aggregate or a measure. We illustrate our approach with data from an exam management service, estimating the average grade students would obtain if a new module were opened, for a department under a certain criterion.
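The idea of predicting a cube measure with a regression tree can be sketched as follows. This is a minimal CART-style tree trained on hypothetical (department, module) dimension members and average grades; the dimension names, effect sizes, and the tree itself are illustrative assumptions, not the operator proposed in the paper.

```python
import numpy as np

def fit_tree(X, y, depth, min_leaf=10):
    """Tiny CART-style regression tree. A leaf stores the mean of the
    target; an internal node is (feature, threshold, left, right)."""
    if depth == 0 or len(y) < 2 * min_leaf:
        return float(y.mean())
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j])[:-1]:
            mask = X[:, j] <= t
            if mask.sum() < min_leaf or (~mask).sum() < min_leaf:
                continue
            # sum of squared errors after the candidate split
            sse = ((y[mask] - y[mask].mean()) ** 2).sum() \
                + ((y[~mask] - y[~mask].mean()) ** 2).sum()
            if best is None or sse < best[0]:
                best = (sse, j, t, mask)
    if best is None:
        return float(y.mean())
    _, j, t, mask = best
    return (j, t,
            fit_tree(X[mask], y[mask], depth - 1, min_leaf),
            fit_tree(X[~mask], y[~mask], depth - 1, min_leaf))

def predict(node, x):
    while isinstance(node, tuple):
        j, t, left, right = node
        node = left if x[j] <= t else right
    return node

# Hypothetical fact table: (department, module) -> average student grade.
rng = np.random.default_rng(42)
n = 300
dept = rng.integers(0, 5, n)
module = rng.integers(0, 10, n)
grade = 10.0 + 0.5 * dept + 0.1 * module + rng.normal(0.0, 0.1, n)

X = np.column_stack([dept, module]).astype(float)
tree = fit_tree(X, grade, depth=4)

# Predict the measure for a cube cell: department 4, module 9.
pred = predict(tree, np.array([4.0, 9.0]))
```

The tree learns the aggregate from existing cells and can then be queried for a cell that does not yet exist in the cube, which is the spirit of the "prediction in OLAP" coupling described above.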

  6. Can observers predict trustworthiness?

    NARCIS (Netherlands)

    M. Belot; V. Bhaskar; J. van de Ven

    2009-01-01

    We analyze experimental evidence on whether untrained subjects can predict how trustworthy an individual is. Two players on a TV show play a high stakes prisoner's dilemma with pre-play communication. Our subjects report probabilistic beliefs that each player cooperates, before and after communicati

  7. Chloride ingress prediction

    DEFF Research Database (Denmark)

    Frederiksen, Jens Mejer; Geiker, Mette Rica

    2008-01-01

    Prediction of chloride ingress into concrete is an important part of durability design of reinforced concrete structures exposed to chloride containing environment. This paper presents experimentally based design parameters for Portland cement concretes with and without silica fume and fly ash in...

  8. A case study of real-world tailpipe emissions for school buses using a 20% biodiesel blend.

    Science.gov (United States)

    Mazzoleni, Claudio; Kuhns, Hampden D; Moosmüller, Hans; Witt, Jay; Nussbaum, Nicholas J; Oliver Chang, M-C; Parthasarathy, Gayathri; Nathagoundenpalayam, Suresh Kumar K; Nikolich, George; Watson, John G

    2007-10-15

    Numerous laboratory studies report carbon monoxide, hydrocarbon, and particulate matter emission reductions with a slight nitrogen oxides emission increase from engines operating with biodiesel and biodiesel blends as compared to using petroleum diesel. We conducted a field study on a fleet of school buses to evaluate the effects of biodiesel use on gaseous and particulate matter fuel-based emission factors under real-world conditions. The field experiment was carried out in two phases during winter 2004. In January (phase I), emissions from approximately 200 school buses operating on petroleum diesel were measured. Immediately after the end of the first phase measurement period, the buses were switched to a 20% biodiesel blend. Emission factors were measured again in March 2004 (phase II) and compared with the January emission factors. To measure gaseous emission factors we used a commercial gaseous remote sensor. Particulate matter emission factors were determined with a combination of the gaseous remote sensor, a Lidar (light detection and ranging), and transmissometer system developed at the Desert Research Institute of Reno, NV, U.S.A. Particulate matter emissions from school buses significantly increased (up to a factor of 1.8) after the switch from petroleum diesel to a 20% biodiesel blend. The fuel used during this campaign was provided by a local distributor and was independently analyzed at the end of the on-road experiment. The analysis found high concentrations of free glycerin and reduced flash points in the B 100 parent fuel. Both measures indicate improper separation and processing of the biodiesel product during production. The biodiesel fuels used in the school buses were not in compliance with the U.S.A. ASTM D6751 biodiesel standard that was finalized in December of 2001. The U.S.A. National Biodiesel Board has formed a voluntary National Biodiesel Accreditation Program for producers and marketers of biodiesel to ensure product quality and
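Fuel-based emission factors of the kind reported above are commonly derived with a carbon-balance calculation: essentially all fuel carbon appears in the exhaust plume as CO2, CO, and HC, so the ratio of a pollutant to total plume carbon converts to grams of pollutant per kilogram of fuel burned. The sketch below assumes that standard approach and uses invented plume concentrations; it is not the paper's exact data reduction.

```python
W_C = 0.86   # carbon mass fraction of diesel fuel (typical value, assumed)
M_C = 12.0   # molar mass of carbon, g/mol

def fuel_based_ef(delta_p_ppm, molar_mass_p,
                  delta_co2_ppm, delta_co_ppm, delta_hc_ppmC):
    """Fuel-based emission factor (g pollutant per kg fuel) from
    above-background plume concentrations; HC is given in ppm carbon."""
    carbon_ppm = delta_co2_ppm + delta_co_ppm + delta_hc_ppmC
    grams_p_per_mol_c = delta_p_ppm * molar_mass_p / carbon_ppm
    # mol C per kg fuel = 1000 * W_C / M_C
    return grams_p_per_mol_c * (1000.0 * W_C / M_C)

# Illustrative plume: CO at 500 ppm above background against 5% (50,000 ppm)
# CO2 and 50 ppmC of hydrocarbons.
ef_co = fuel_based_ef(500.0, 28.0, 50000.0, 500.0, 50.0)
```

Because the calculation is a ratio of plume concentrations, it is insensitive to plume dilution, which is what makes it suitable for the remote-sensing measurements used in this campaign.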

  9. Analyses of the early stages of star formation

    Science.gov (United States)

    Lintott, Christopher John

    This thesis presents a study of the physical and chemical properties of star forming regions, both in the Milky Way and in the distant Universe, building on the existing astrochemical models developed by the group at UCL. Observations of the nearby star-forming region, L134A, which were carried out with the James Clerk Maxwell Telescope (JCMT) in Hawai'i, are compared to the predictions of a model of star formation from gas rich in atomic (rather than molecular) hydrogen. A similar model is used to investigate the effect of non-equilibrium chemistry on the derivation of the cosmic-ray ionization rate, an important parameter in controlling both the chemistry and the physics of star forming clumps. A collapse faster than free-fall is proposed as an explanation for differences between the distribution of CS and N2H+ in such regions. Moving beyond the Milky Way, JCMT observations of sulphur-bearing species in the nearby starburst galaxy, M82, are presented and compared with existing molecular observations of similar systems. M82 is a local analogue for star forming systems in the early Universe, many of which have star formation rates several thousand times that of the Milky Way. A model which treats the molecular gas in such systems as an assembly of 'hot cores' (protostellar cores which have a distinctive chemical signature) has been developed, and is used to predict the abundance of many species. An application of this model is used to explain the observed deviation in the early Universe from the otherwise tight relation between infrared and HCN luminosity via relatively recent star formation from near-primordial gas. Many of the stars formed in the early Universe must now be in massive elliptical systems, and work on the structure of these systems is presented. Data from the Sloan Digital Sky Survey are analysed to show that such galaxies have cores dominated by baryons rather than dark matter, and the dark matter profile is constrained by adiabatic contraction.

  10. L'analyse qualitative comme approche multiple

    Directory of Open Access Journals (Sweden)

    Roberto Cipriani

    2009-11-01

    Full Text Available The example of historical inquiry, which seeks to identify the characteristics of the birth and development of a science and of the readings it gives of social events, is among the most original. Any historical methodology not only produces a pure and simple mass of episodes and events, but is also a narration and a critical elaboration of those same facts. Michael Postan rightly writes that the complexity of historical data is nevertheless such, and the differences and similarities so difficult to pin down, that the efforts of historians and sociologists to construct explicit comparisons have, for the most part, ended in crude and naïve attempts. The lesson of the Annales has indeed contributed to building the idea of a history able to read and explain both what is uniform and what is singular. Nothing is more natural than the gathering of "psychical beings", like the assembly of cells into an organism, into a new and different "psychical being". A turn is therefore needed towards broader and more rigorous empirical experimentation, in order to have adequate instruments capable of guaranteeing sufficient reliability to micro, qualitative and biographical methodology. The historical approach offers a relevant contribution to finding the features of the birth and development of a science which analyses social events. Historical methodology produces not only a lot of data but also a narrative, and an interpretation of facts. According to Michael Postan, history and sociology have made many efforts to compare data that are complex but similar and different at the same time, and the results seem naïve. Thanks to the Annales' suggestion it is possible to read and to explain what is uniform and what is singular. To put together "psychical beings", like organic cells, in a new

  11. Reliability and validity of a 20-s alternative to the wingate anaerobic test in team sport male athletes.

    Directory of Open Access Journals (Sweden)

    Ahmed Attia

    Full Text Available The intent of this study was to evaluate the relative and absolute reliability of the 20-s anaerobic test (WAnT20) versus the WAnT30, and to verify how far the various indices of the 30-s Wingate anaerobic test (WAnT30) could be predicted from the WAnT20 data in male athletes. The participants were Exercise Science majors (age: 21.5±1.6 yrs, stature: 1.83±0.08 m, body mass: 81.2±10.9 kg) who participated regularly in team sports. In Phase I, 41 participants performed duplicate WAnT20 and WAnT30 tests to assess reliability. In Phase II, 31 participants performed one trial each of the WAnT20 and WAnT30 to determine the ability of the WAnT20 to predict components of the WAnT30. In Phase III, 31 participants were used to cross-validate the prediction equations developed in Phase II. Respective intra-class correlation coefficients (ICC) for peak power output (PPO) (ICC = 0.98 and 0.95) and mean power output (MPO) (ICC = 0.98 and 0.90) did not differ significantly between WAnT20 and WAnT30. ICCs for minimal power output (POmin) and fatigue index (FI) were poor for both tests (range 0.53 to 0.76). Standard errors of the means (SEM) for PPO and MPO were less than their smallest worthwhile changes (SWC) in both tests; however, POmin and FI values were "marginal," with SEM values greater than their respective SWCs for both tests. Stepwise regression analysis showed that MPO had the highest coefficient of predictability (R = 0.97), with POmin and FI considerably lower (R = 0.71 and 0.41, respectively). Cross-validation showed insignificant bias, with limits of agreement of 0.99±1.04, 6.5±92.7 W, and 1.6±9.8% between measured and predicted MPO, POmin, and FI, respectively. The WAnT20 offers a reliable and valid test of leg anaerobic power in male athletes and could replace the classic WAnT30.
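The cross-validation above is reported as bias with limits of agreement, i.e. a Bland-Altman analysis of measured versus predicted values. Below is a minimal sketch of that computation on synthetic data; the numbers are invented for illustration and are not the study's measurements.

```python
import numpy as np

def limits_of_agreement(measured, predicted):
    """Bland-Altman bias and 95% limits of agreement:
    bias ± 1.96 * SD of the individual differences."""
    diff = predicted - measured
    bias = diff.mean()
    sd = diff.std(ddof=1)            # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Synthetic stand-in for measured vs predicted mean power output (W),
# with 31 athletes as in Phase III of the study.
rng = np.random.default_rng(1)
measured = rng.normal(700.0, 60.0, 31)
predicted = measured + rng.normal(5.0, 45.0, 31)   # small bias, some scatter
bias, lo, hi = limits_of_agreement(measured, predicted)
```

A prediction equation is judged acceptable when the bias is near zero and the limits of agreement are narrow relative to a practically meaningful difference.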

  12. Trends in absolute socioeconomic inequalities in mortality in Sweden and New Zealand. A 20-year gender perspective

    Directory of Open Access Journals (Sweden)

    Blakely Tony

    2006-06-01

    Full Text Available Abstract Background Both trends in socioeconomic inequalities in mortality, and cross-country comparisons, may give more information about the causes of health inequalities. We analysed trends in socioeconomic differentials in mortality from the early 1980s to the late 1990s, comparing Sweden with New Zealand. Methods The New Zealand Census Mortality Study (NZCMS), consisting of over 2 million individuals, and the Swedish Survey of Living Conditions (ULF), comprising over 100,000 individuals, were used for the analyses. Education and household income were used as measures of socioeconomic position (SEP). The slope index of inequality (SII) was calculated to estimate absolute inequalities in mortality. Analyses were based on 3–5 year follow-up and limited to individuals aged 25–77 years. Age-standardised mortality rates were calculated using the European population standard. Results Absolute inequalities in mortality on average over the 1980s and 1990s for both men and women were similar in Sweden and New Zealand by education, but greater in Sweden by income. Comparing trends in absolute inequalities over the 1980s and 1990s, men's absolute inequalities by education decreased by 66% in Sweden and by 17% in New Zealand (p for trend Conclusion Trends in socioeconomic inequalities in mortality were clearly most favourable for men in Sweden. Trends also seemed to be more favourable for men than women in New Zealand. Assuming the trends in male inequalities in Sweden were not a statistical chance finding, it is not clear what the substantive reason(s) was for the pronounced decrease. Further gender comparisons are required.
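The slope index of inequality used in the Methods is conventionally computed as the weighted least-squares slope of group mortality rates on ridits, the midpoints of each group's cumulative population share. Below is a sketch with hypothetical education groups; the shares and rates are invented for illustration.

```python
import numpy as np

def slope_index_of_inequality(pop_shares, mortality_rates):
    """SII: weighted least-squares slope of group rates on ridits.
    It estimates the absolute rate difference between the (hypothetical)
    bottom and top of the socioeconomic ranking."""
    shares = np.asarray(pop_shares, dtype=float)
    shares = shares / shares.sum()
    rates = np.asarray(mortality_rates, dtype=float)
    cum = np.concatenate([[0.0], np.cumsum(shares)])
    ridit = (cum[:-1] + cum[1:]) / 2.0   # group midpoint ranks in [0, 1]
    xm = (shares * ridit).sum()          # share-weighted means
    ym = (shares * rates).sum()
    num = (shares * (ridit - xm) * (rates - ym)).sum()
    den = (shares * (ridit - xm) ** 2).sum()
    return num / den

# Hypothetical: three education groups ordered from highest to lowest
# education, with population shares and mortality per 100,000 person-years.
sii = slope_index_of_inequality([0.3, 0.5, 0.2], [300.0, 400.0, 600.0])
```

Because it uses the whole ranking weighted by group size, the SII is comparable across populations whose group sizes differ, which is what makes it suitable for the Sweden versus New Zealand comparison.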

  13. Exchange Rate Predictions

    OpenAIRE

    Yablonskyy, Karen

    2012-01-01

    The aim of this thesis is to analyze foreign exchange forecasting techniques. The central idea behind the topic is to develop a forecasting strategy by choosing indicators and techniques to make our own forecast for the currency pair EUR/USD. This thesis is a mixture of theoretical and practical analysis. The goal of this project was to study different types of forecasting techniques and make our own forecast, practicing forecasting and trading on a Forex platform, ba...

  14. Few-cycle, Broadband, Mid-infrared Optical Parametric Oscillator Pumped by a 20-fs Ti:sapphire Laser

    CERN Document Server

    Kumar, Suddapalli Chaitanya; Ideguchi, Takuro; Yan, Ming; Holzner, Simon; Hänsch, Theodor W; Picqué, Nathalie; Ebrahim-Zadeh, Majid

    2014-01-01

    We report a few-cycle, broadband, singly-resonant optical parametric oscillator (OPO) for the mid-infrared based on MgO-doped periodically-poled LiNbO3 (MgO:PPLN), synchronously pumped by a 20-fs Ti:sapphire laser. By using crystal interaction lengths as short as 250 μm, and careful dispersion management of the input pump pulses and the OPO resonator, near-transform-limited, few-cycle idler pulses tunable across the mid-infrared have been generated, with as few as 3.7 optical cycles at 2682 nm. The OPO can be continuously tuned over 2179-3732 nm by cavity delay tuning, providing up to 33 mW of output power at 3723 nm. The idler spectra exhibit stable broadband profiles, with bandwidths spanning over 422 nm (FWHM) recorded at 3732 nm. We investigate the effect of crystal length on spectral bandwidth and pulse duration at a fixed wavelength, confirming near-transform-limited idler pulses for all grating interaction lengths. By locking the repetition frequency of the pump laser to a radio-frequency reference, and with...

  15. Design of a cryogenic system for a 20m direct current superconducting MgB2 and YBCO power cable

    Science.gov (United States)

    Cheadle, Michael J.; Bromberg, Leslie; Jiang, Xiaohua; Glowacki, Bartek; Zeng, Rong; Minervini, Joseph; Brisson, John

    2014-01-01

    The Massachusetts Institute of Technology, the University of Cambridge in the United Kingdom, and Tsinghua University in Beijing, China, are collaborating to design, construct, and test a 20 m, direct current, superconducting MgB2 and YBCO power cable. The cable will be installed in the State Key Laboratory of Power Systems at Tsinghua University in Beijing beginning in 2013. In a previous paper [1], the cryogenic system was briefly discussed, focusing on the cryogenic issues for the superconducting cable. The current paper provides a detailed discussion of the design, construction, and assembly of the cryogenic system and its components. The two-stage system operates at nominally 80 K and 20 K with the primary cryogen being helium gas. The secondary cryogen, liquid nitrogen, is used to cool the warm stage of binary current leads. The helium gas provides cooling to both warm and cold stages of the rigid cryostat housing the MgB2 and YBCO conductors, as well as the terminations of the superconductors at the end of the current leads. A single cryofan drives the helium gas in both stages, which are thermally isolated with a high effectiveness recuperator. Refrigeration for the helium circuit is provided by a Sumitomo RDK415 cryocooler. This paper focuses on the design, construction, and assembly of the cryostat, the recuperator, and the current leads with associated superconducting cable terminations.

  16. NIR Color vs Launch Date: A 20-Year Analysis of Space Weathering Effects on the Boeing 376 Spacecraft

    Science.gov (United States)

    Frith, James; Anz-Meador, Philip; Lederer, Sue; Cowardin, Heather; Buckalew, Brent

    2015-01-01

    The Boeing HS-376 spin stabilized spacecraft was a popular design that was launched continuously into geosynchronous orbit starting in 1980 with the last launch occurring in 2002. Over 50 of the HS-376 buses were produced to fulfill a variety of different communication missions for countries all over the world. The design of the bus is easily approximated as a telescoping cylinder that is covered with solar cells and an Earth facing antenna that is despun at the top of the cylinder. The similarity in design and the number of spacecraft launched over a long period of time make the HS-376 a prime target for studying the effects of solar weathering on solar panels as a function of time. A selection of primarily non-operational HS-376 spacecraft launched over a 20 year time period were observed using the United Kingdom Infrared Telescope on Mauna Kea and multi-band near-infrared photometry produced. Each spacecraft was observed for an entire night cycling through ZYJHK filters and time-varying colors produced to compare near-infrared color as a function of launch date. The resulting analysis shown here may help in the future to set launch date constraints on the parent object of unidentified debris objects or other unknown spacecraft.

  17. Experimental analysis of a 20 kWe PEM fuel cell system in dynamic conditions representative of automotive applications

    International Nuclear Information System (INIS)

    The dynamic performance of a laboratory fuel cell system based on a 20 kW H2/air proton exchange membrane (PEM) stack was investigated on test cycles compatible with automotive applications, with particular reference to the effect of different air management strategies on cell voltage uniformity and fuel cell system efficiency. The air management strategies were varied by imposing different stoichiometric ratio values as a function of stack current, and were studied on two test cycles characterized by current variation rates ranging from 2 to 50 A/s, with a maximum stack current of 240 A. Stack temperature and reactant pressure during the tests were maintained below 330 K and 150 kPa, respectively. The best compromise between fuel cell system efficiency and dynamic response in terms of cell voltage regularity was obtained with an air management strategy characterized by stoichiometric ratio values slightly higher than those optimized for steady-state conditions. This management strategy resulted in an efficiency decrease of at most 3% in steady-state conditions for the sub-system stack + compressor in the range 0-200 A. The individual cell voltage uniformity was continuously monitored through a statistical indicator (the coefficient of variation, Cv), which always remained below 3%, even at 50 A/s, indicating satisfactory dynamic stack operation.

  18. Effects of hot rolling on microstructure and properties of a 20 vol.% SiCp/Al composite

    Institute of Scientific and Technical Information of China (English)

    QU Shoujiang; GENG Lin; MENG Qingchang; FENG Aihan; LEI Tingquan

    2005-01-01

    A 20 vol.% SiCp/Al composite was fabricated by squeeze casting, using a new preform-fabrication process in which Al powder was blended with SiC particulates with average diameters of 10 and 3.5 μm, respectively. The microstructures of the as-cast and the hot-rolled composite were investigated by using TEM, EDS, and SEM, and their tensile properties were measured at room temperature. The results show that the ultimate tensile strength and ultimate elongation of the hot-rolled composite are 80% and 140% higher than those of the as-cast one. The TEM observations indicate that there is a high density of dislocations and dislocation tangles in the hot-rolled composite. Al2O3 layers in the composite, resulting from the surface oxidation of the aluminum powders, were broken into spherical particles during hot rolling. All the results indicate that hot rolling can improve the mechanical properties of the composite and, therefore, engineering components of the 20 vol.% SiCp/Al composite can be produced by squeeze casting followed by hot rolling.

  19. NIR Color vs Launch Date: A 20-year Analysis of Space Weathering Effects on the Boeing 376 Spacecraft

    Science.gov (United States)

    Frith, J.; Anz-Meador, P.; Lederer, S.; Cowardin, H.; Buckalew, B.

    The Boeing HS-376 spin stabilized spacecraft was a popular design that was launched continuously into geosynchronous orbit starting in 1980 with the last launch occurring in 2002. Over 50 of the HS-376 buses were produced to fulfill a variety of different communication missions for countries all over the world. The design of the bus is easily approximated as a telescoping cylinder that is covered with solar cells and an Earth facing antenna that is despun at the top of the cylinder. The similarity in design and the number of spacecraft launched over a long period of time make the HS-376 a prime target for studying the effects of solar weathering on solar panels as a function of time. A selection of primarily non-operational HS-376 spacecraft launched over a 20 year time period were observed using the United Kingdom Infrared Telescope on Mauna Kea and multi-band near-infrared photometry produced. Each spacecraft was observed for an entire night cycling through ZYJHK filters and time-varying colors produced to compare near-infrared color as a function of launch date. The resulting analysis shown here may help in the future to set launch date constraints on the parent object of unidentified debris objects or other unknown spacecraft.

  20. Predictive role of the nighttime blood pressure

    DEFF Research Database (Denmark)

    Hansen, Tine W; Li, Yan; Boggia, José;

    2011-01-01

    Numerous studies addressed the predictive value of the nighttime blood pressure (BP) as captured by ambulatory monitoring. However, arbitrary cutoff limits in dichotomized analyses of continuous variables, data dredging across selected subgroups, extrapolation of cross-sectional studies...... of conclusive evidence proving that nondipping is a reversible risk factor, the option whether or not to restore the diurnal blood pressure profile to a normal pattern should be left to the clinical judgment of doctors and should be individualized for each patient. Current guidelines on the interpretation...... studies in hypertensive patients (n = 23 856) separately from those in individuals randomly recruited from populations (n = 9641). We pooled summary statistics and individual subject data, respectively. In both patients and populations, in analyses in which nighttime BP was additionally adjusted...

  1. Predicting Major Solar Eruptions

    Science.gov (United States)

    Kohler, Susanna

    2016-05-01

    Coronal mass ejections (CMEs) and solar flares are two examples of major explosions from the surface of the Sun, but they're not the same thing, and they don't have to happen at the same time. A recent study examines whether we can predict which solar flares will be closely followed by larger-scale CMEs. [Image: a solar flare from May 2013, as captured by NASA's Solar Dynamics Observatory. NASA/SDO] Flares as a Precursor? A solar flare is a localized burst of energy and X-rays, whereas a CME is an enormous cloud of magnetic flux and plasma released from the Sun. We know that some magnetic activity on the surface of the Sun triggers both a flare and a CME, whereas other activity only triggers a confined flare with no CME. But what makes the difference? Understanding this can help us learn about the underlying physical drivers of flares and CMEs. It also might help us to better predict when a CME, which can pose a risk to astronauts, disrupt radio transmissions, and cause damage to satellites, might occur. In a recent study, Monica Bobra and Stathis Ilonidis (Stanford University) attempt to improve our ability to make these predictions by using a machine-learning algorithm. Classification by Computer [Figure: using a combination of 6 or more features results in much better predictive success, measured by the True Skill Statistic (higher positive value = better prediction), for whether a flare will be accompanied by a CME. Bobra & Ilonidis 2016] Bobra and Ilonidis used magnetic-field data from an instrument on the Solar Dynamics Observatory to build a catalog of solar flares, 56 of which were accompanied by a CME and 364 of which were not. The catalog includes information about 18 different features associated with the photospheric magnetic field of each flaring active region (for example, the mean gradient of the horizontal magnetic field). The authors apply a machine-learning algorithm known as a binary classifier to this catalog. This algorithm tries to predict, given a set of features
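    The True Skill Statistic used to score such a binary classifier is a standard forecast-verification measure. A minimal sketch, with toy labels rather than the paper's flare catalog:

```python
import numpy as np

def true_skill_statistic(y_true, y_pred):
    """True Skill Statistic: TP/(TP+FN) - FP/(FP+TN).

    Ranges from -1 to +1; 0 means no skill, +1 a perfect forecast.
    Unlike raw accuracy, it is not inflated by a large majority class
    (here, the many flares with no CME).
    """
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)
    fp = np.sum(~y_true & y_pred)
    tn = np.sum(~y_true & ~y_pred)
    return tp / (tp + fn) - fp / (fp + tn)

# Toy labels: 1 = flare followed by a CME, 0 = confined flare.
truth = [1, 1, 1, 0, 0, 0, 0, 0]
pred  = [1, 1, 0, 0, 0, 0, 1, 0]
print(round(true_skill_statistic(truth, pred), 3))
```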

  2. Validation of HELIOS for ATR Core Follow Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Bays, Samuel E. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Swain, Emily T. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Crawford, Douglas S. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Nigg, David W. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-03-01

    This work summarizes the validation analyses for the HELIOS code to support core design and safety assurance calculations of the Advanced Test Reactor (ATR). Past and current core safety assurance is performed by the PDQ-7 diffusion code, a reactor physics simulation tool from the nuclear industry’s earlier days. Over the past twenty years, improvements in computational speed have enabled the use of modern neutron transport methodologies to replace the role of diffusion theory for simulation of complex systems such as the ATR. More exact methodologies have enabled a paradigm shift away from highly tuned codes that force compliance with a bounding safety envelope, and towards codes regularly validated against routine measurements. To validate HELIOS, the 16 ATR operational cycles from late 2009 to the present were modeled. The computed power distribution was compared against data collected by the ATR’s on-line power surveillance system. It was found that the ATR’s lobe powers could be determined with ±10% accuracy. Also, the ATR’s cold startup shim configuration for each of these 16 cycles was estimated and compared against the reported critical position from the reactor log-book. HELIOS successfully predicted criticality within the tolerance set by the ATR startup procedure for 13 of the 16 cycles, compared with 12 for PDQ (without empirical adjustment). These findings, as well as other insights discussed in this report, suggest that HELIOS is highly suited to replace PDQ for core safety assurance of the ATR. Furthermore, a modern verification and validation framework has been established that allows reactor and fuel performance data to be computed with a known degree of accuracy and stated uncertainty.

  3. Pan-cancer analyses of the nuclear receptor superfamily

    Science.gov (United States)

    Long, Mark D.; Campbell, Moray J.

    2016-01-01

    Nuclear receptors (NR) act as an integrated conduit for environmental and hormonal signals to govern genomic responses, which relate to cell fate decisions. We review how their integrated actions with each other, shared co-factors and other transcription factors are disrupted in cancer. Steroid hormone nuclear receptors are oncogenic drivers in breast and prostate cancer, and blockade of signaling is a major therapeutic goal. In contrast to blockade of receptors, in other cancers enhanced receptor function is attractive, as illustrated initially with targeting of retinoic acid receptors in leukemia. In the post-genomic era large consortia, such as The Cancer Genome Atlas, have developed a remarkable volume of genomic data with which to examine multiple aspects of nuclear receptor status in a pan-cancer manner. Therefore, to extend the review of NR function we have also undertaken bioinformatics analyses of NR expression in over 3000 tumors, spread across six different tumor types (bladder, breast, colon, head and neck, liver and prostate). Specifically, to ask how NR expression was distorted (altered expression, mutation and CNV), we applied bootstrapping approaches to simulate data for comparison, and also compared these NR findings to 12 other transcription factor families. Nuclear receptors were uniquely and uniformly downregulated across all six tumor types, more than predicted by chance. These approaches also revealed that each tumor type had a specific NR expression profile, but these profiles were most similar between breast and prostate cancer. Some NRs were down-regulated in at least five tumor types (e.g., NR3C2/MR and NR5A2/LRH-1), whereas others were uniquely down-regulated in one tumor (e.g., NR1B3/RARG). The downregulation was not driven by copy number variation or mutation, and epigenetic mechanisms may be responsible for the altered nuclear receptor expression. PMID:27200367
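    The bootstrapping comparison described above can be sketched as follows; the gene counts and the baseline down-regulation rate are invented for illustration and are not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: per-gene flags for "down-regulated in tumour vs normal"
# across a large background of transcription-factor genes (True = down).
background = rng.random(2000) < 0.3          # ~30% baseline down-regulation
n_nr = 48                                     # size of the NR family (illustrative)
nr_observed_down = 32                         # observed down-regulated NRs (made up)

# Bootstrap null: repeatedly draw 48 genes at random from the background and
# record how often that many (or more) are down-regulated purely by chance.
draws = rng.choice(background.astype(int), size=(10000, n_nr), replace=True)
null_counts = draws.sum(axis=1)
p_value = np.mean(null_counts >= nr_observed_down)
print(p_value < 0.05)   # down-regulation exceeds chance expectation
```

A small bootstrap p-value here plays the role of the study's "more than predicted by chance" statement.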

  4. Pan-Cancer Analyses of the Nuclear Receptor Superfamily

    Directory of Open Access Journals (Sweden)

    Mark D. Long

    2015-12-01

    Full Text Available Nuclear receptors (NR) act as an integrated conduit for environmental and hormonal signals to govern genomic responses, which relate to cell fate decisions. We review how their integrated actions with each other, shared co-factors and other transcription factors are disrupted in cancer. Steroid hormone nuclear receptors are oncogenic drivers in breast and prostate cancer, and blockade of signaling is a major therapeutic goal. In contrast to blockade of receptors, in other cancers enhanced receptor function is attractive, as illustrated initially with targeting of retinoic acid receptors in leukemia. In the post-genomic era large consortia, such as The Cancer Genome Atlas, have developed a remarkable volume of genomic data with which to examine multiple aspects of nuclear receptor status in a pan-cancer manner. Therefore, to extend the review of NR function we have also undertaken bioinformatics analyses of NR expression in over 3000 tumors, spread across six different tumor types (bladder, breast, colon, head and neck, liver and prostate). Specifically, to ask how NR expression was distorted (altered expression, mutation and CNV), we applied bootstrapping approaches to simulate data for comparison, and also compared these NR findings to 12 other transcription factor families. Nuclear receptors were uniquely and uniformly downregulated across all six tumor types, more than predicted by chance. These approaches also revealed that each tumor type had a specific NR expression profile, but these profiles were most similar between breast and prostate cancer. Some NRs were down-regulated in at least five tumor types (e.g., NR3C2/MR and NR5A2/LRH-1), whereas others were uniquely down-regulated in one tumor (e.g., NR1B3/RARG). The downregulation was not driven by copy number variation or mutation, and epigenetic mechanisms may be responsible for the altered nuclear receptor expression.

  5. Integrating and scheduling an open set of static analyses

    DEFF Research Database (Denmark)

    Eichberg, Michael; Mezini, Mira; Kloppenburg, Sven;

    2006-01-01

    To improve the productivity of the development process, more and more tools for static software analysis are tightly integrated into the incremental build process of an IDE. If multiple interdependent analyses are used simultaneously, the coordination between the analyses becomes a major obstacle...... to keep the set of analyses open. We propose an approach to integrating and scheduling an open set of static analyses which decouples the individual analyses and coordinates the analysis executions such that the overall time and space consumption is minimized. The approach has been implemented...

  6. Prediction of postoperative pain: a systematic review of predictive experimental pain studies

    DEFF Research Database (Denmark)

    Werner, Mads Utke; Mjöbo, Helena N; Nielsen, Per R;

    2010-01-01

    Quantitative testing of a patient's basal pain perception before surgery has the potential to be of clinical value if it can accurately predict the magnitude of pain and requirement of analgesics after surgery. This review includes 14 studies that have investigated the correlation between...... preoperative responses to experimental pain stimuli and clinical postoperative pain and demonstrates that the preoperative pain tests may predict 4-54% of the variance in postoperative pain experience depending on the stimulation methods and the test paradigm used. The predictive strength is much higher than...... previously reported for single factor analyses of demographics and psychologic factors. In addition, some of these studies indicate that an increase in preoperative pain sensitivity is associated with a high probability of development of sustained postsurgical pain....

  7. Integrated Waste Treatment Unit (IWTU) Input Coal Analyses and Off-Gas Filter (OGF) Content Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Jantzen, Carol M. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Missimer, David M. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Guenther, Chris P. [National Energy Technology Lab. (NETL), Morgantown, WV (United States); Shekhawat, Dushyant [National Energy Technology Lab. (NETL), Morgantown, WV (United States); VanEssendelft, Dirk T. [National Energy Technology Lab. (NETL), Morgantown, WV (United States); Means, Nicholas C. [AECOM Technology Corp., Oak Ridge, TN (United States)

    2015-04-23

    in process piping and materials, in excessive off-gas absorbent loading, and in undesired process emissions. The ash content of the coal is important as the ash adds to the DMR and other vessel products, which affects the final waste product mass and composition. The amount and composition of the ash also affect the reaction kinetics. Thus ash content and composition contribute to the mass balance. In addition, sodium, potassium, calcium, sulfur, and possibly silica and alumina in the ash may contribute to wall-scale formation. Sodium, potassium, and alumina in the ash will be overwhelmed by the sodium, potassium, and alumina from the feed, but the impact from the other ash components needs to be quantified. A maximum coal particle size is specified so the feed system does not plug, and a minimum particle size is specified to prevent excess elutriation from the DMR to the Process Gas Filter (PGF). A vendor specification was used to procure the calcined coal for IWTU processing. While the vendor supplied a composite analysis for the 22 tons of coal (Appendix A), this study compares independent analyses of the coal performed at the Savannah River National Laboratory (SRNL) and at the National Energy Technology Laboratory (NETL). Three supersacks were sampled at three different heights within each sack in order to determine within-bag and between-bag variability of the coal. These analyses were also compared to the vendor’s composite analyses and to the coal specification, as well as to historic data on Bestac coal analyses performed at Hazen Research Inc. (HRI) between 2004 and 2011.

  8. Refining intra-protein contact prediction by graph analysis

    Directory of Open Access Journals (Sweden)

    Eyal Eran

    2007-05-01

    Full Text Available Abstract Background Accurate prediction of intra-protein residue contacts from sequence information will allow the prediction of protein structures. Basic predictions of such specific contacts can be further refined by jointly analyzing predicted contacts, and by adding information on the relative positions of contacts in the protein primary sequence. Results We introduce a method for graph analysis refinement of intra-protein contacts, termed GARP. Our previously presented intra-contact prediction method by means of pair-to-pair substitution matrix (P2PConPred was used to test the GARP method. In our approach, the top contact predictions obtained by a basic prediction method were used as edges to create a weighted graph. The edges were scored by a mutual clustering coefficient that identifies highly connected graph regions, and by the density of edges between the sequence regions of the edge nodes. A test set of 57 proteins with known structures was used to determine contacts. GARP improves the accuracy of the P2PConPred basic prediction method in whole proteins from 12% to 18%. Conclusion Using a simple approach we increased the contact prediction accuracy of a basic method by 1.5 times. Our graph approach is simple to implement, can be used with various basic prediction methods, and can provide input for further downstream analyses.
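    The edge-scoring idea behind GARP, rating each predicted contact by how densely connected its endpoints' neighbourhoods are, can be sketched with a toy contact list. The Jaccard-style coefficient below is an illustrative stand-in; the exact mutual clustering coefficient used by the authors may differ:

```python
# Toy set of predicted residue contacts (edges) from a basic predictor.
predicted = [(1, 5), (1, 6), (5, 6), (2, 9), (5, 7), (1, 7), (3, 12)]

# Build an adjacency map from the basic predictions.
adj = {}
for u, v in predicted:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def mutual_clustering(u, v):
    """Jaccard-style coefficient: shared neighbours of u and v relative to
    all their neighbours. High values mark densely connected graph regions,
    where true contacts are expected to cluster."""
    shared = adj[u] & adj[v]
    union = (adj[u] | adj[v]) - {u, v}
    return len(shared) / len(union) if union else 0.0

# Re-rank the basic predictions by the graph score.
rescored = sorted(predicted, key=lambda e: mutual_clustering(*e), reverse=True)
print(rescored[0])   # (1, 5): both endpoints share neighbours 6 and 7
```

Contacts embedded in a dense cluster, such as (1, 5) here, are promoted, while isolated predictions like (2, 9) fall to the bottom of the ranking.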

  9. Final report on reliability and lifetime prediction.

    Energy Technology Data Exchange (ETDEWEB)

    Gillen, Kenneth T; Wise, Jonathan; Jones, Gary D.; Causa, Al G.; Terrill, Edward R.; Borowczak, Marc

    2012-12-01

    This document highlights the important results obtained from the subtask of the Goodyear CRADA devoted to better understanding reliability of tires and to developing better lifetime prediction methods. The overall objective was to establish the chemical and physical basis for the degradation of tires using standard as well as unique models and experimental techniques. Of particular interest was the potential application of our unique modulus profiling apparatus for assessing tire properties and for following tire degradation. During the course of this complex investigation, extensive relevant information was generated, including experimental results, data analyses and development of models and instruments. Detailed descriptions of the findings are included in this report.

  10. Educational Data Mining & Students’ Performance Prediction

    Directory of Open Access Journals (Sweden)

    Amjad Abu Saa

    2016-05-01

    Full Text Available It is important to study and analyse educational data, especially students’ performance. Educational Data Mining (EDM) is the field of study concerned with mining educational data to find out interesting patterns and knowledge in educational organizations. This study is equally concerned with this subject, specifically, the students’ performance. This study explores multiple factors theoretically assumed to affect students’ performance in higher education, and finds a qualitative model which best classifies and predicts the students’ performance based on related personal and social factors.

  11. Decline of semen quality among 10932 males consulting for couple infertility over a 20-year period in Marseille, France

    Institute of Scientific and Technical Information of China (English)

    Cendrine Geoffroy-Siraudin; Anderson Dieudonné Loundou; Fanny Romain; Vincent Achard; Blandine Courbière; Marie-Hélène Perrard; Philippe Durand; Marie-Roberte Guichaoua

    2012-01-01

    Semen from 10932 male partners of infertile couples was analysed and sperm parameter trends were evaluated at the Reproduction Biology Laboratory of the University Hospital of Marseille (France) between 1988 and 2007. After 3-6 days of abstinence, semen samples were collected. Measurements of seminal fluid volume, pH, sperm concentration, total sperm count, motility and detailed morphology of spermatozoa were performed. Sperm parameters were analysed for the entire population and for men with a normal total sperm count (≥40 million per ejaculate). The whole population demonstrated declining trends in sperm concentration (1.5% per year), total sperm count (1.6% per year), total motility (0.4% per year), rapid motility (5.5% per year) and normal morphology (2.2% per year). In the group of selected samples with a normal total sperm count, the same trends of sperm quality deterioration over time were observed. Our results clearly indicate that the quality of semen decreased in this population over the study period.
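    A per-year percentage decline like those reported above is typically obtained from a linear regression of yearly means against time, with the slope expressed relative to the fitted baseline value. A sketch on synthetic data (all numbers below are invented, not the Marseille measurements):

```python
import numpy as np

# Hypothetical yearly mean sperm concentration (million/ml), 1988-2007.
rng = np.random.default_rng(1)
years = np.arange(1988, 2008)
conc = 77.0 - 1.1 * (years - 1988) + rng.normal(0, 2, len(years))

# OLS slope in units per year, expressed as a percentage of the
# fitted value in the first year of the series.
slope, intercept = np.polyfit(years - 1988, conc, 1)
pct_per_year = 100 * slope / intercept
print(round(pct_per_year, 2))   # negative: a declining trend
```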

  12. Prediction of Wild-type Enzyme Characteristics

    DEFF Research Database (Denmark)

    Geertz-Hansen, Henrik Marcus

    of biotechnology, including enzyme discovery and characterization. This work presents two articles on sequence-based discovery and functional annotation of enzymes in environmental samples, and two articles on analysis and prediction of enzyme thermostability and cofactor requirements. The first article presents...... a sequence-based approach to discovery of proteolytic enzymes in metagenomes obtained from the Polar oceans. We show that microorganisms living in these extreme environments of constant low temperature harbour genes encoding novel proteolytic enzymes with potential industrial relevance. The second article...... presents a web server for the processing and annotation of functional metagenomics sequencing data, tailored to meet the requirements of non-bioinformaticians. The third article presents analyses of the molecular determinants of enzyme thermostability, and a feature-based prediction method of the melting...

  13. Comparing Spatial Predictions

    KAUST Repository

    Hering, Amanda S.

    2011-11-01

    Under a general loss function, we develop a hypothesis test to determine whether a significant difference in the spatial predictions produced by two competing models exists on average across the entire spatial domain of interest. The null hypothesis is that of no difference, and a spatial loss differential is created based on the observed data, the two sets of predictions, and the loss function chosen by the researcher. The test assumes only isotropy and short-range spatial dependence of the loss differential but does allow it to be non-Gaussian, non-zero-mean, and spatially correlated. Constant and nonconstant spatial trends in the loss differential are treated in two separate cases. Monte Carlo simulations illustrate the size and power properties of this test, and an example based on daily average wind speeds in Oklahoma is used for illustration. Supplemental results are available online. © 2011 American Statistical Association and the American Society for Quality.
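    The loss-differential idea above can be illustrated, in the much simpler one-dimensional time-series setting, with a Diebold-Mariano-style statistic whose variance estimate allows short-range correlation. The spatial test in the paper is more general; this is only a sketch on synthetic data:

```python
import numpy as np

def loss_differential_test(y, pred_a, pred_b, loss=lambda e: e**2, lag=3):
    """Test H0: equal expected loss for two sets of predictions.

    Forms the loss differential d_i = L(e_a,i) - L(e_b,i) and a t-type
    statistic using a Bartlett (Newey-West) variance estimate, so that
    short-range correlation in d does not invalidate the test.
    """
    d = loss(y - pred_a) - loss(y - pred_b)
    n = len(d)
    dbar = d.mean()
    dc = d - dbar
    var = dc @ dc / n
    for k in range(1, lag + 1):
        w = 1 - k / (lag + 1)               # Bartlett weight
        var += 2 * w * (dc[:-k] @ dc[k:]) / n
    return dbar / np.sqrt(var / n)          # compare to N(0,1) quantiles

rng = np.random.default_rng(2)
truth = rng.normal(size=500)
model_a = truth + rng.normal(0, 0.5, 500)   # better predictor
model_b = truth + rng.normal(0, 1.0, 500)   # worse predictor
t = loss_differential_test(truth, model_a, model_b)
print(t < -1.96)   # model A significantly better under squared loss
```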

  14. Predictive Hypothesis Identification

    CERN Document Server

    Hutter, Marcus

    2008-01-01

    While statistics focusses on hypothesis testing and on estimating (properties of) the true sampling distribution, in machine learning the performance of learning algorithms on future data is the primary issue. In this paper we bridge the gap with a general principle (PHI) that identifies hypotheses with best predictive performance. This includes predictive point and interval estimation, simple and composite hypothesis testing, (mixture) model selection, and others as special cases. For concrete instantiations we will recover well-known methods, variations thereof, and new ones. PHI nicely justifies, reconciles, and blends (a reparametrization invariant variation of) MAP, ML, MDL, and moment estimation. One particular feature of PHI is that it can genuinely deal with nested hypotheses.

  15. Predicting Lotto Numbers

    DEFF Research Database (Denmark)

    Jørgensen, Claus Bjørn; Suetens, Sigrid; Tyran, Jean-Robert

    numbers based on recent drawings. While most players pick the same set of numbers week after week without regards of numbers drawn or anything else, we find that those who do change, act on average in the way predicted by the law of small numbers as formalized in recent behavioral theory. In particular......We investigate the “law of small numbers” using a unique panel data set on lotto gambling. Because we can track individual players over time, we can measure how they react to outcomes of recent lotto drawings. We can therefore test whether they behave as if they believe they can predict lotto......, on average they move away from numbers that have recently been drawn, as suggested by the “gambler’s fallacy”, and move toward numbers that are on streak, i.e. have been drawn several weeks in a row, consistent with the “hot hand fallacy”....

  16. Predictability of Critical Transitions

    CERN Document Server

    Zhang, Xiaozhu; Hallerberg, Sarah

    2015-01-01

    Critical transitions in multistable systems have been discussed as models for a variety of phenomena ranging from the extinctions of species to socio-economic changes and climate transitions between ice ages and warm ages. From bifurcation theory we can expect certain critical transitions to be preceded by a decreased recovery from external perturbations. The consequences of this critical slowing down have been observed as an increase in variance and autocorrelation prior to the transition. However, especially in the presence of noise, it is not clear whether these changes in observation variables are statistically relevant such that they could be used as indicators for critical transitions. In this contribution we investigate the predictability of critical transitions in conceptual models. We study the quadratic integrate-and-fire model and the van der Pol model under the influence of external noise. We focus especially on the statistical analysis of the success of predictions and the overall predictabil...

  17. Predicting Bankruptcy in Pakistan

    Directory of Open Access Journals (Sweden)

    Abdul RASHID

    2011-09-01

    Full Text Available This paper aims to identify the financial ratios that are most significant in bankruptcy prediction for the non-financial sector of Pakistan, based on a sample of companies which became bankrupt over the period 1996-2006. Twenty-four financial ratios covering four important financial attributes, namely profitability, liquidity, leverage, and turnover ratios, were examined for a five-year period prior to bankruptcy. The discriminant analysis produced a parsimonious model of three variables: sales to total assets, EBIT to current liabilities, and cash flow ratio. Our estimates provide evidence that firms with a Z-value below zero fall into the “bankrupt” category, whereas firms with a Z-value above zero fall into the “non-bankrupt” category. The model achieved 76.9% prediction accuracy when applied to forecast bankruptcies on the underlying sample.
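    A discriminant model of the kind described classifies by the sign of a linear score over the selected ratios. The weights and intercept below are illustrative placeholders, not the fitted coefficients from the paper:

```python
# Hypothetical coefficients in the spirit of the paper's three-variable
# model (sales/total assets, EBIT/current liabilities, cash flow ratio).
WEIGHTS = {"sales_ta": 1.2, "ebit_cl": 0.8, "cash_flow": 1.5}
INTERCEPT = -1.0

def z_score(ratios):
    """Linear discriminant score; Z < 0 classifies the firm as 'bankrupt'."""
    return INTERCEPT + sum(WEIGHTS[k] * ratios[k] for k in WEIGHTS)

healthy = {"sales_ta": 1.1, "ebit_cl": 0.6, "cash_flow": 0.4}
distressed = {"sales_ta": 0.3, "ebit_cl": -0.2, "cash_flow": 0.05}
print(z_score(healthy) > 0, z_score(distressed) < 0)
```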

  18. Chaos detection and predictability

    CERN Document Server

    Gottwald, Georg; Laskar, Jacques

    2016-01-01

    Distinguishing chaoticity from regularity in deterministic dynamical systems and specifying the subspace of the phase space in which instabilities are expected to occur is of utmost importance in areas as disparate as astronomy, particle physics and climate dynamics. To address these issues there exists a plethora of methods for chaos detection and predictability. The most commonly employed technique for investigating chaotic dynamics, i.e. the computation of Lyapunov exponents, however, may suffer from a number of problems and drawbacks, for example when applied to noisy experimental data. In the last two decades, several novel methods have been developed for the fast and reliable determination of the regular or chaotic nature of orbits, aimed at overcoming the shortcomings of more traditional techniques. This set of lecture notes and tutorial reviews serves as an introduction to and overview of modern chaos detection and predictability techniques for graduate students and non-specialists. The book cover...

  19. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik;

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines the...

  20. Nominal model predictive control

    OpenAIRE

    Grüne, Lars

    2013-01-01

    5 p., to appear in Encyclopedia of Systems and Control, Tariq Samad, John Baillieul (eds.); International audience; Model Predictive Control is a controller design method which synthesizes a sampled data feedback controller from the iterative solution of open loop optimal control problems.We describe the basic functionality of MPC controllers, their properties regarding feasibility, stability and performance and the assumptions needed in order to rigorously ensure these properties in a nomina...

  1. Multivariate respiratory motion prediction

    International Nuclear Information System (INIS)

    In extracranial robotic radiotherapy, tumour motion is compensated by tracking external and internal surrogates. To compensate system specific time delays, time series prediction of the external optical surrogates is used. We investigate whether the prediction accuracy can be increased by expanding the current clinical setup by an accelerometer, a strain belt and a flow sensor. Four previously published prediction algorithms are adapted to multivariate inputs—normalized least mean squares (nLMS), wavelet-based least mean squares (wLMS), support vector regression (SVR) and relevance vector machines (RVM)—and evaluated for three different prediction horizons. The measurement involves 18 subjects and consists of two phases, focusing on long term trends (M1) and breathing artefacts (M2). To select the most relevant and least redundant sensors, a sequential forward selection (SFS) method is proposed. Using a multivariate setting, the results show that the clinically used nLMS algorithm is susceptible to large outliers. In the case of irregular breathing (M2), the mean root mean square error (RMSE) of a univariate nLMS algorithm is 0.66 mm and can be decreased to 0.46 mm by a multivariate RVM model (best algorithm on average). To investigate the full potential of this approach, the optimal sensor combination was also estimated on the complete test set. The results indicate that a further decrease in RMSE is possible for RVM (to 0.42 mm). This motivates further research about sensor selection methods. Besides the optical surrogates, the sensors most frequently selected by the algorithms are the accelerometer and the strain belt. These sensors could be easily integrated in the current clinical setup and would allow a more precise motion compensation. (paper)
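    The univariate nLMS baseline mentioned above is straightforward to sketch. The multivariate extension and the clinical surrogate data are beyond this toy example; the filter order, step size, and synthetic breathing signal below are all invented:

```python
import numpy as np

def nlms_predict(signal, order=4, mu=0.5, eps=1e-6):
    """One-step-ahead normalized LMS prediction of a 1-D signal.

    At each step the last `order` samples form the input vector; the weight
    update is normalized by the input energy, which keeps the effective step
    size stable for breathing signals of varying amplitude.
    """
    w = np.zeros(order)
    preds = np.zeros(len(signal))
    for t in range(order, len(signal)):
        x = signal[t - order:t][::-1]          # most recent sample first
        preds[t] = w @ x
        err = signal[t] - preds[t]
        w += mu * err * x / (eps + x @ x)      # normalized update
    return preds

# Synthetic breathing-like trace: a slow sinusoid with small noise.
rng = np.random.default_rng(3)
t = np.arange(2000)
sig = np.sin(2 * np.pi * t / 100) + 0.01 * rng.normal(size=2000)
pred = nlms_predict(sig)
rmse = np.sqrt(np.mean((sig[500:] - pred[500:]) ** 2))   # after convergence
print(rmse < 0.1)
```

For a clean periodic signal the filter converges to near the noise floor, which is why the abstract reports sub-millimetre RMSE on regular breathing and degradation on irregular traces.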

  2. Predictive Game Theory

    Science.gov (United States)

    Wolpert, David H.

    2005-01-01

    Probability theory governs the outcome of a game; there is a distribution over mixed strategies, not a single "equilibrium". To predict a single mixed strategy we must use a loss function (external to the game's players). This provides a quantification of any strategy's rationality. We prove that rationality falls as the cost of computation rises (for players who have not previously interacted). All of this extends to games with varying numbers of players.

  3. Time-predictable architectures

    CERN Document Server

    Rochange, Christine; Uhrig, Sascha

    2014-01-01

    Building computers that can be used to design embedded real-time systems is the subject of this title. Real-time embedded software requires increasingly higher performance. The authors therefore consider processors that implement advanced mechanisms such as pipelining, out-of-order execution, branch prediction, cache memories, multi-threading, multicore architectures, etc. The authors of this book investigate the time predictability of such schemes.

  4. Multivariate respiratory motion prediction

    Science.gov (United States)

    Dürichen, R.; Wissel, T.; Ernst, F.; Schlaefer, A.; Schweikard, A.

    2014-10-01

    In extracranial robotic radiotherapy, tumour motion is compensated by tracking external and internal surrogates. To compensate system specific time delays, time series prediction of the external optical surrogates is used. We investigate whether the prediction accuracy can be increased by expanding the current clinical setup by an accelerometer, a strain belt and a flow sensor. Four previously published prediction algorithms are adapted to multivariate inputs—normalized least mean squares (nLMS), wavelet-based least mean squares (wLMS), support vector regression (SVR) and relevance vector machines (RVM)—and evaluated for three different prediction horizons. The measurement involves 18 subjects and consists of two phases, focusing on long term trends (M1) and breathing artefacts (M2). To select the most relevant and least redundant sensors, a sequential forward selection (SFS) method is proposed. Using a multivariate setting, the results show that the clinically used nLMS algorithm is susceptible to large outliers. In the case of irregular breathing (M2), the mean root mean square error (RMSE) of a univariate nLMS algorithm is 0.66 mm and can be decreased to 0.46 mm by a multivariate RVM model (best algorithm on average). To investigate the full potential of this approach, the optimal sensor combination was also estimated on the complete test set. The results indicate that a further decrease in RMSE is possible for RVM (to 0.42 mm). This motivates further research about sensor selection methods. Besides the optical surrogates, the sensors most frequently selected by the algorithms are the accelerometer and the strain belt. These sensors could be easily integrated in the current clinical setup and would allow a more precise motion compensation.

  5. Thinking about Aid Predictability

    OpenAIRE

    Andrews, Matthew; Wilhelm, Vera

    2008-01-01

    Researchers are giving more attention to aid predictability. In part, this is because of increases in the number of aid agencies and aid dollars and the growing complexity of the aid community. A growing body of research is examining key questions: Is aid unpredictable? What causes unpredictability? What can be done about it? This note draws from a selection of recent literature to bring s...

  6. Predicting helpful product reviews

    OpenAIRE

    O'Mahony, Michael P.; Cunningham, Pádraig; Smyth, Barry

    2010-01-01

    Millions of users are today posting user-generated content online, expressing their opinions on all manner of goods and services, topics and social affairs. While undoubtedly useful, user-generated content presents consumers with significant challenges in terms of information overload and quality considerations. In this paper, we address these issues in the context of product reviews and present a brief survey of our work to date on predicting review helpfulness. In particular, the performa...

  7. The Predictive Audit Framework

    OpenAIRE

    Kuenkaikaew, Siripan; Vasarhelyi, Miklos A.

    2013-01-01

    Assurance is an essential part of the business process of the modern enterprise. Auditing is a widely used assurance method, made mandatory for public companies since 1934. The traditional (retroactive) audit provides after-the-fact audit reports and is of limited value in the ever-changing modern business environment because it is slow and backward-looking. Contemporary auditing and monitoring technologies could shorten the audit and assurance time frame. This paper proposes the predictive ...

  8. Predicting appointment breaking.

    Science.gov (United States)

    Bean, A G; Talaga, J

    1995-01-01

    The goal of physician referral services is to schedule appointments, but if too many patients fail to show up, the value of the service will be compromised. The authors found that appointment breaking can be predicted by the number of days to the scheduled appointment, the doctor's specialty, and the patient's age and gender. They also offer specific suggestions for modifying the marketing mix to reduce the incidence of no-shows. PMID:10142384

  9. Azimuthal angular distributions in EDDE as a spin-parity analyser and glueball filter for the LHC

    Energy Technology Data Exchange (ETDEWEB)

    Petrov, Vladimir Alexeevich [Institute for High Energy Physics, 142 281, Protvino (Russian Federation); Ryutin, Roman Anatolievich [Institute for High Energy Physics, 142 281, Protvino (Russian Federation); Sobol, Andrei E. [Institute for High Energy Physics, 142 281, Protvino (Russian Federation); Guillaud, Jean-Paul [LAPP, Annecy (France)

    2005-06-01

    Exclusive Double Diffractive Events (EDDE) are analysed as a source of information about the central system. The experimental possibilities for searches for exotic particles are considered. From the reggeized tensor-current picture, azimuthal-angle dependences were obtained to fit the data from the WA102 experiment and to make predictions for the LHC collider.

  10. Eclipse prediction in Mesopotamia.

    Science.gov (United States)

    Steele, J. M.

    2000-02-01

    Among the many celestial phenomena observed in ancient Mesopotamia, eclipses, particularly eclipses of the Moon, were considered to be among the astrologically most significant events. In Babylon, by at least the middle of the seventh century BC, and probably as early as the middle of the eighth century BC, astronomical observations were being systematically conducted and recorded in a group of texts which we have come to call Astronomical Diaries. These Diaries contain many observations and predictions of eclipses. The predictions generally include the expected time of the eclipse, apparently calculated quite precisely. By the last three centuries BC, the Babylonian astronomers had developed highly advanced mathematical theories of the Moon and planets. This paper outlines the various methods which appear to have been formulated by the Mesopotamian astronomers to predict eclipses of the Sun and the Moon. It also considers the question of which of these methods were actually used in compiling the Astronomical Diaries, and speculates why these particular methods were used.

  11. Is Suicide Predictable?

    Directory of Open Access Journals (Sweden)

    S Asmaee

    2012-04-01

    Full Text Available Background: The current study aimed to test the hypothesis "Is suicide predictable?" and to classify the predictive factors in multiple suicide attempts. Methods: A cross-sectional study was administered to 223 multiple attempters, women who came to a medical poison centre after a suicide attempt. The participants were young, poor, and single. A logistic regression analysis was used to classify the predictive factors of suicide. Results: Women who had multiple suicide attempts exhibited a significant tendency to attempt suicide again. They had a history of multiple suicide attempts over more than two years, from three to as many as 18 times, plus mental illnesses such as depression and substance abuse. They also had a positive family history of mental illness. Conclusion: Results indicate that contributing factors for another suicide attempt include previous suicide attempts, mental illness (depression) or a positive family history of mental illness affecting them at a young age, and substance abuse.

  12. Improvements in Hanford TRU Program Utilizing Systems Modeling and Analyses

    International Nuclear Information System (INIS)

    Hanford's Transuranic (TRU) Program is responsible for certifying contact-handled (CH) TRU waste and shipping the certified waste to the Waste Isolation Pilot Plant (WIPP). Hanford's CH TRU waste includes material that is in retrievable storage as well as above-ground storage, and newly generated waste. Certifying a typical container entails retrieving and then characterizing it (Non-Destructive Examination [NDE], Non-Destructive Assay [NDA], and Head Space Gas Sampling [HSG]), validating records (data review and reconciliation), and designating the container for a payload. The certified payload is then shipped to WIPP. Systems modeling and analysis techniques were applied to Hanford's TRU Program to help streamline the certification process and increase shipping rates. The modeling and analysis yields several benefits:
    - Maintains visibility on system performance and predicts downstream consequences of production issues.
    - Predicts future system performance with higher confidence, based on tracking past performance.
    - Applies speculation analyses to determine the impact of proposed changes (e.g., an apparent shortage of feed should not be used as a basis to reassign personnel if more feed is coming in the queue).
    - Positively identifies the appropriate queue for all containers (e.g., discovered several containers that were not actively being worked because they were in the wrong 'physical' location - the method used previously for queuing up containers).
    - Identifies anomalies with the various data systems used to track inventory (e.g., dimensional differences for Standard Waste Boxes).
    A model of the TRU Program certification process was created using custom queries of the multiple databases for managing waste containers. The model was developed using a simplified process chart based on the expected path for a typical container. The process chart was augmented with the remediation path for containers that do not meet acceptance criteria for WIPP. Containers are sorted

  13. Predicting Community Evolution in Social Networks

    Directory of Open Access Journals (Sweden)

    Stanisław Saganowski

    2015-05-01

    Full Text Available Nowadays, sustained development of different social media can be observed worldwide. One of the relevant research domains intensively explored recently is the analysis of social communities existing in social media, as well as the prediction of their future evolution taking into account collected historical evolution chains. The evolution chains proposed in the paper contain group states in the previous time frames and their historical transitions, identified using one of two methods: Stable Group Changes Identification (SGCI) and Group Evolution Discovery (GED). Based on the observed evolution chains of various lengths, structural network features are extracted, validated, selected, and used to learn classification models. The experimental studies were performed on three real datasets with different profiles: DBLP, Facebook and the Polish blogosphere. The process of group prediction was analysed with respect to different classifiers as well as various descriptive feature sets extracted from evolution chains of different lengths. The results revealed that, in general, the longer the evolution chains, the better the predictive abilities of the classification models. However, chains of length 3 to 7 enabled the GED-based method to almost reach its maximum possible prediction quality. For SGCI, this value was reached at chains covering the last 3–5 periods.
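The core idea of learning from historical transition chains can be illustrated with a toy frequency-count model. This is only a stand-in for intuition; the paper's GED/SGCI methods use structural network features and trained classifiers, not simple transition counts, and the event names below are hypothetical:

```python
from collections import Counter, defaultdict

def train_transitions(chains):
    """Count event-to-event transitions seen in historical chains."""
    model = defaultdict(Counter)
    for chain in chains:
        for prev, nxt in zip(chain, chain[1:]):
            model[prev][nxt] += 1
    return model

def predict_next(model, last_event):
    """Most frequent follow-up event, or None if the event is unseen."""
    if last_event not in model:
        return None
    return model[last_event].most_common(1)[0][0]

# Toy evolution chains over hypothetical group events.
chains = [
    ["form", "grow", "split"],
    ["form", "grow", "split"],
    ["form", "grow", "merge"],
    ["form", "shrink", "dissolve"],
]
model = train_transitions(chains)
```

Here `predict_next(model, "grow")` returns `"split"`, the most frequent transition observed after a growth event in the training chains.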

  14. Odor Impression Prediction from Mass Spectra

    Science.gov (United States)

    Nakamoto, Takamichi

    2016-01-01

    The sense of smell arises from the perception of odors from chemicals. However, the relationship between the impression of odor and the numerous physicochemical parameters has yet to be understood owing to its complexity. As such, there is no established general method for predicting the impression of odor of a chemical only from its physicochemical properties. In this study, we designed a novel predictive model based on an artificial neural network with a deep structure for predicting odor impression utilizing the mass spectra of chemicals, and we conducted a series of computational analyses to evaluate its performance. Feature vectors extracted from the original high-dimensional space using two autoencoders equipped with both input and output layers in the model are used to build a mapping function from the feature space of mass spectra to the feature space of sensory data. The results of predictions obtained by the proposed new method have notable accuracy (R≅0.76) in comparison with a conventional method (R≅0.61). PMID:27326765
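The record reports prediction accuracy as a correlation R between predicted and measured odor impressions. As a minimal illustration of that evaluation metric only (not of the paper's autoencoder pipeline), a pure-Python Pearson correlation:

```python
import math

def pearson_r(pred, actual):
    """Pearson correlation between predicted and measured values."""
    n = len(pred)
    mp, ma = sum(pred) / n, sum(actual) / n
    cov = sum((p - mp) * (a - ma) for p, a in zip(pred, actual))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    sa = math.sqrt(sum((a - ma) ** 2 for a in actual))
    return cov / (sp * sa)
```

A value of R≅0.76 means the model's predictions track about 58% of the variance (R²) in the sensory data, versus roughly 37% for the conventional method's R≅0.61.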

  15. Seasonal cycle of volume transport through Kerama Gap revealed by a 20-year global HYbrid Coordinate Ocean Model reanalysis

    Science.gov (United States)

    Yu, Zhitao; Metzger, E. Joseph; Thoppil, Prasad; Hurlburt, Harley E.; Zamudio, Luis; Smedstad, Ole Martin; Na, Hanna; Nakamura, Hirohiko; Park, Jae-Hun

    2015-12-01

    The temporal variability of volume transport from the North Pacific Ocean to the East China Sea (ECS) through Kerama Gap (between Okinawa Island and Miyakojima Island - a part of Ryukyu Islands Arc) is investigated using a 20-year global HYbrid Coordinate Ocean Model (HYCOM) reanalysis with the Navy Coupled Ocean Data Assimilation from 1993 to 2012. The HYCOM mean transport is 2.1 Sv (positive into the ECS, 1 Sv = 10^6 m^3/s) from June 2009 to June 2011, in good agreement with the observed 2.0 Sv transport during the same period. This is similar to the 20-year mean Kerama Gap transport of 1.95 ± 4.0 Sv. The 20-year monthly mean volume transport (transport seasonal cycle) is maximum in October (3.0 Sv) and minimum in November (0.5 Sv). The annual variation component (345-400 days), mesoscale eddy component (70-345 days), and Kuroshio meander component (< 70 days) are separated to determine their contributions to the transport seasonal cycle. The annual variation component has a close relation with the local wind field and increases (decreases) transport into the ECS through Kerama Gap in summer (winter). Most of the variations in the transport seasonal cycle come from the mesoscale eddy component. The impinging mesoscale eddies increase the transport into the ECS during January, February, May, and October, and decrease it in March, April, November, and December, but have little effect in summer (June-September). The Kuroshio meander component causes smaller transport variations in summer than in winter.
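The separation of a transport series into a slow annual band and faster residual variability can be illustrated with a simple moving-average low-pass split. This is a simplified stand-in, not the reanalysis method (which isolates the 345-400, 70-345, and < 70 day bands); the window length and synthetic series are assumptions for illustration:

```python
import math

def running_mean(x, window):
    """Centered moving average; shrinks the window at the series edges."""
    half = window // 2
    out = []
    for i in range(len(x)):
        lo, hi = max(0, i - half), min(len(x), i + half + 1)
        out.append(sum(x[lo:hi]) / (hi - lo))
    return out

# Synthetic daily transport: 2 Sv mean, a 1 Sv annual cycle, and a
# 0.5 Sv "eddy" oscillation with a 90-day period.
total = [2.0 + math.sin(2 * math.pi * d / 365)
         + 0.5 * math.sin(2 * math.pi * d / 90) for d in range(730)]
annual = running_mean(total, 121)              # low-pass: annual band
eddy = [t - a for t, a in zip(total, annual)]  # residual: shorter-period band
```

The 121-day window largely preserves the annual cycle while strongly attenuating the 90-day oscillation, so the residual isolates the eddy-band variability.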

  16. Mortality trends in women and men presenting with acute coronary syndrome: insights from a 20-year registry.

    Directory of Open Access Journals (Sweden)

    Ayman El-Menyar

    Full Text Available BACKGROUND: Coronary artery disease (CAD) is the leading cause of mortality worldwide. The present study evaluated the impact of gender in patients hospitalized with acute coronary syndromes (ACS) over a 20-year period in Qatar. METHODS: Data were collected retrospectively from the registry of the department of cardiology for all patients admitted with ACS during the study period (1991-2010) and were analyzed according to gender. RESULTS: Among 16,736 patients who were admitted with ACS, 14,262 (85%) were men and 2,474 (15%) were women. Cardiovascular risk factors were more prevalent among women in comparison to men. On admission, women presented mainly with non-ST-elevation ACS and were more likely to be undertreated with β-blockers (BB), antiplatelet agents and reperfusion therapy in comparison to men. However, from 1999 through 2010, the use of aspirin, angiotensin-converting enzyme inhibitors and BB increased from 66% to 79%, 27% to 41% and 17% to 49%, respectively, in women. In the same period, the relative risk reduction for mortality was 64% in women and 51% in men. Across the 20-year period, the mortality rate decreased from 27% to 7% among the Middle Eastern Arab women. Multivariate logistic regression analysis showed that female gender was an independent predictor of in-hospital mortality (odds ratio 1.51, 95% CI 1.27-1.79). CONCLUSIONS: Women presenting with ACS are a high-risk population and their in-hospital mortality remains higher for all age groups in comparison to men. Although substantial improvement in hospital outcomes has been observed, guideline adherence and in-hospital care have not yet been optimized.
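The record's headline statistic is an odds ratio with a 95% confidence interval from logistic regression. A minimal sketch of the closely related Woolf (log-odds) interval for a 2x2 table; the counts below are hypothetical illustrations, not the registry's data, and a single-covariate table omits the multivariate adjustment the study performed:

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio and Woolf 95% CI for a 2x2 table:
       a = group 1, event      b = group 1, no event
       c = group 2, event      d = group 2, no event"""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Hypothetical counts (NOT the registry's data): in-hospital deaths
# among 2,474 women vs 14,262 men.
or_, lo, hi = odds_ratio(300, 2174, 1100, 13162)
```

An odds ratio above 1 with a confidence interval excluding 1 (as in the study's 1.51, 95% CI 1.27-1.79) indicates a statistically significant association between female gender and in-hospital mortality.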

  17. Plasma antioxidant responses and oxidative stress following a 20 meter shuttle run test in female volleyball players

    Directory of Open Access Journals (Sweden)

    Emre Özgür Bulduk

    2011-08-01

    Full Text Available The effect of physical exercise on oxidative stress and antioxidants has been investigated extensively in the last twenty years. Cells continuously produce free radicals under normal conditions during the mitochondrial electron transport chain (ETC). Experimental studies have shown that the elevated metabolic rate induced by strenuous physical exercise causes oxidative stress and the production of excessive amounts of free radicals. Lipid peroxidation occurs when free radicals react with cellular components involving polyunsaturated fatty acid residues of phospholipids, which are very sensitive to oxidation. This study aimed to determine plasma antioxidant responses and oxidative stress following a 20 meter shuttle run test in female volleyball players. Ten female volleyball players from the same team and ten sedentary females aged between 18 and 24 years volunteered to participate in this study. They were in good health and had not received any drugs or alcohol in the 48 hours before the test. None of them had any endocrine or orthopedic problems. Before the study, informed written consent was obtained from all the participants after a full explanation of the procedures involved. All procedures were approved by the Selçuk University Meram Medical School Ethics Committee. The 20 meter shuttle run test was designed to estimate the maximal aerobic power of athletes performing in sports with frequent stops and starts (e.g., basketball, volleyball, fencing and so on). Findings of our study demonstrate that in both female groups the 20 meter shuttle run test leads to the production of more reactive oxygen species than the antioxidant systems can scavenge. The decrease in the activities of these antioxidant enzymes may be due to their inactivation caused by the higher production of free radicals. It seems that the vulnerability of the body to oxidative stress is significantly enhanced after a strenuous exercise test.

  18. Aeroacoustic Prediction Codes

    Science.gov (United States)

    Gliebe, P; Mani, R.; Shin, H.; Mitchell, B.; Ashford, G.; Salamah, S.; Connell, S.; Huff, Dennis (Technical Monitor)

    2000-01-01

    This report describes work performed on Contract NAS3-27720 AoI 13 as part of the NASA Advanced Subsonic Transport (AST) Noise Reduction Technology effort. Computer codes were developed to provide quantitative prediction, design, and analysis capability for several aircraft engine noise sources. The objective was to provide improved, physics-based tools for exploration of noise-reduction concepts and understanding of experimental results. Methods and codes focused on fan broadband and 'buzz saw' noise and on low-emissions combustor noise, and complement work done by other contractors under the NASA AST program to develop methods and codes for fan harmonic tone noise and jet noise. The methods and codes developed and reported herein employ a wide range of approaches, from the strictly empirical to the completely computational, with some being semi-empirical, analytical, and/or analytical/computational. Emphasis was on capturing the essential physics while still considering method or code utility as a practical design and analysis tool for everyday engineering use. Codes and prediction models were developed for: (1) an improved empirical correlation model for fan rotor exit flow mean and turbulence properties, for use in predicting broadband noise generated by rotor exit flow turbulence interaction with downstream stator vanes; (2) fan broadband noise models for rotor and stator/turbulence interaction sources including 3D effects, noncompact-source effects, directivity modeling, and extensions to the rotor supersonic tip-speed regime; (3) fan multiple-pure-tone in-duct sound pressure prediction methodology based on computational fluid dynamics (CFD) analysis; and (4) low-emissions combustor prediction methodology and computer code based on CFD and actuator disk theory. In addition, the relative importance of dipole and quadrupole source mechanisms was studied using direct CFD source computation for a simple cascade/gust interaction problem, and an empirical combustor

  19. Immunomodulatory Effects of Hemagglutinin (HA)-Modified A20 B-Cell Lymphoma Expanded as a Brain Tumor on Adoptively Transferred HA-Specific CD4+ T Cells

    Directory of Open Access Journals (Sweden)

    Valentin P. Shichkin

    2014-01-01

    Full Text Available Previously, the mouse A20 B-cell lymphoma engineered to express the hemagglutinin (HA) antigen (A20HA) was used as a systemic tumor model. In this work, we used the A20HA cells as a brain tumor. HA-specific CD4+ T cells were transferred intravenously in a tail vein 5 days after A20HA intracranial inoculation and analyzed on days 2, 9, and 16 after the adoptive transfer by different methods. The transferred cells demonstrated a state of activation as early as day 2 after the adoptive transfer, and most of the viable HA-specific cells became anergic by day 16. Additionally, symptoms of systemic immunosuppression were observed in mice with massive brain tumors at a late stage of brain tumor progression (days 20–24 after the A20HA inoculation). Despite that, a fraction of HA-specific CD4+ T cells retained functional activity even at the late stage of A20HA tumor growth. Activated HA-specific CD4+ T cells were also found in the brain of brain-tumor-bearing mice. These cells still responded to reactivation with HA-peptide in vitro. Our data support the idea that both tumor-specific and tumor-nonspecific mechanisms play a substantial role in inducing immunosuppression in cancer patients.

  20. Predicting Alcohol, Cigarette, and Marijuana Use from Preferential Music Consumption

    Science.gov (United States)

    Oberle, Crystal D.; Garcia, Javier A.

    2015-01-01

    This study investigated whether use of alcohol, cigarettes, and marijuana may be predicted from preferential consumption of particular music genres. Undergraduates (257 women and 78 men) completed a questionnaire assessing these variables. Partial correlation analyses, controlling for sensation-seeking tendencies and behaviors, revealed that…