WorldWideScience

Sample records for hard comparing predictive

  1. Comparative study of carp otolith hardness: lapillus and asteriscus.

    Science.gov (United States)

    Ren, Dongni; Meyers, Marc André; Zhou, Bo; Feng, Qingling

    2013-05-01

    Otoliths are calcium carbonate biominerals in the inner ear of vertebrates; they play a role in balance, movement, and sound perception. Two types of otoliths in freshwater carp are investigated using nano- and micro-indentation: asteriscus and lapillus. The hardness, modulus, and creep of asteriscus (vaterite crystals) and lapillus (aragonite crystals) are compared. The hardness and modulus of lapillus are higher than those of asteriscus in both nano- and micro-testing, which is attributed to the different crystal polymorphs. Both materials exhibit a certain degree of creep, which indicates some time dependence of the mechanical behavior and is attributed to the organic components. The nano-indentation hardnesses are higher than the micro-hardnesses for both otoliths, a direct result of the scale dependence of strength; fewer flaws are encountered by the nanoindenter than by the microindenter. Copyright © 2012 Elsevier B.V. All rights reserved.

  2. Comparative metallurgical study of thick hard coatings without cobalt

    International Nuclear Information System (INIS)

    Clemendot, F.; Van Duysen, J.C.; Champredonde, J.

    1992-07-01

    Wear and corrosion of Stellite-type hard coatings for valves of the PWR primary system raise important contamination problems. Substituting these alloys with cobalt-free hard coatings (Colmonoy 4 and 4.26, Cenium 36) should reduce this contamination. A comparative study (chemical, mechanical, thermal, metallurgical) and a corrosion study of these coatings were carried out. The results of this characterization show that none of the studied products has overall characteristics as good as those of the grade 6 Stellite currently in service.

  3. Comparing Spatial Predictions

    KAUST Repository

    Hering, Amanda S.; Genton, Marc G.

    2011-01-01

    Under a general loss function, we develop a hypothesis test to determine whether a significant difference in the spatial predictions produced by two competing models exists on average across the entire spatial domain of interest. The null hypothesis

  4. Prediction of hardness in pieces of quenched and tempered steel

    International Nuclear Information System (INIS)

    Yanzon, Rodolfo Carlos; Rodriguez, Augusto; Sanchez, Arlington Ricardo

    2006-01-01

    This presentation describes the first stage of a work plan to obtain simple software for predicting properties at certain points of a quenched and tempered piece. In this first stage, the prediction is limited to cylindrical pieces made of steels whose chemical composition is within a certain range. The methodology basically consists of obtaining, from experimental data, a mathematical tool able to predict the hardness value at the ends of Jominy test pieces made with this type of steel. This meant beginning with an analysis of the usual forms of theoretical calculation of Jominy curves of quenched samples, which resulted in a proposal to modify the Just equation. Two different mathematical methods were then developed to predict hardness values in tempered Jominy test pieces. One is based on the determination of polynomial equations that reproduce the loss of hardness at points along the test piece, based on the quenching value and as a function of the tempering temperature. The other, which uses linear multidimensional interpolation, has been selected as the mathematical tool for the software under development because of its ease of application. At this stage of the work, the relationship between points on the piece and points on the Jominy test piece is established by the Lamont method, and the representative variable of the temperature/time combination for the tempering process is obtained with software based on the Hollomon and Jaffe expression. Data are needed to define: a) chemical composition and grain size of the steel used, b) diameter of the piece, c) severity 'H' of the quenching medium, d) temperature and time of the tempering. The work's second stage continued with the addition of hardness values measured in Jominy test pieces made with other steels. The chemical composition and grain size data of each steel introduced are converted by the software into one more variable, using the concept of ideal critical
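
    The tempering temperature/time variable mentioned above is based on the Hollomon and Jaffe expression. A minimal sketch, assuming the common form HP = T(C + log10 t) with T in kelvin, t in hours, and the constant C ≈ 20 often quoted for low-alloy steels (the paper's software may use different constants and scaling):

```python
import math

def hollomon_jaffe(temp_celsius, time_hours, c=20.0):
    """Hollomon-Jaffe tempering parameter HP = T(C + log10 t) / 1000.

    T is in kelvin and t in hours; c ~ 20 is a commonly used value
    for low-alloy steels (an assumption, not taken from the paper).
    """
    t_kelvin = temp_celsius + 273.15
    return t_kelvin * (c + math.log10(time_hours)) / 1000.0

# Two tempering schedules with similar parameters are expected to
# produce similar softening:
hp_short = hollomon_jaffe(600, 1)   # 1 h at 600 C
hp_long = hollomon_jaffe(580, 4)    # 4 h at a slightly lower temperature
print(round(hp_short, 2), round(hp_long, 2))
```

    The single parameter lets hardness-versus-tempering data measured at one temperature/time be applied to other combinations with the same HP.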

  5. Luck is Hard to Beat: The Difficulty of Sports Prediction

    OpenAIRE

    Aoki, Raquel YS; Assuncao, Renato M; de Melo, Pedro OS Vaz

    2017-01-01

    Predicting the outcome of sports events is a hard task. We quantify this difficulty with a coefficient that measures the distance between the observed final results of sports leagues and idealized perfectly balanced competitions in terms of skill. This indicates the relative presence of luck and skill. We collected and analyzed all games from 198 sports leagues comprising 1503 seasons from 84 countries of 4 different sports: basketball, soccer, volleyball and handball. We measured the competi...

  6. Comparing Spatial Predictions

    KAUST Repository

    Hering, Amanda S.

    2011-11-01

    Under a general loss function, we develop a hypothesis test to determine whether a significant difference in the spatial predictions produced by two competing models exists on average across the entire spatial domain of interest. The null hypothesis is that of no difference, and a spatial loss differential is created based on the observed data, the two sets of predictions, and the loss function chosen by the researcher. The test assumes only isotropy and short-range spatial dependence of the loss differential but does allow it to be non-Gaussian, non-zero-mean, and spatially correlated. Constant and nonconstant spatial trends in the loss differential are treated in two separate cases. Monte Carlo simulations illustrate the size and power properties of this test, and an example based on daily average wind speeds in Oklahoma is used for illustration. Supplemental results are available online. © 2011 American Statistical Association and the American Society for Quality.
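
    The loss-differential construction can be illustrated with a small sketch: under squared-error loss the differential at each location is D_i = (y_i - p1_i)^2 - (y_i - p2_i)^2, and the null hypothesis is that its mean is zero. The naive t-statistic below ignores the spatial-correlation correction that the paper's test provides, and all data are synthetic:

```python
import math
import random
import statistics

random.seed(0)

# Synthetic observations and two competing sets of predictions:
# model 1 is unbiased, model 2 carries a systematic bias.
n = 200
y = [random.gauss(0.0, 1.0) for _ in range(n)]
pred1 = [v + random.gauss(0.0, 0.3) for v in y]
pred2 = [v + random.gauss(0.5, 0.3) for v in y]

# Squared-error loss differential at each location.
d = [(yi - p1) ** 2 - (yi - p2) ** 2 for yi, p1, p2 in zip(y, pred1, pred2)]
mean_d = statistics.fmean(d)
se_d = statistics.stdev(d) / math.sqrt(n)   # valid only for independent D_i
t_stat = mean_d / se_d
print(f"mean differential {mean_d:.3f}, t = {t_stat:.2f}")
```

    A negative mean differential favors model 1; with spatially correlated losses the standard error must be replaced by a spatial-dependence-aware estimate, which is the substance of the paper.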

  7. Radiation hardness of diamond and silicon sensors compared

    CERN Document Server

    de Boer, Wim; Furgeri, Alexander; Mueller, Steffen; Sander, Christian; Berdermann, Eleni; Pomorski, Michal; Huhtinen, Mika

    2007-01-01

    The radiation hardness of silicon charged particle sensors is compared with single crystal and polycrystalline diamond sensors, both experimentally and theoretically. It is shown that for Si- and C-sensors, the NIEL hypothesis, which states that the signal loss is proportional to the Non-Ionizing Energy Loss, is a good approximation to the present data. At incident proton and neutron energies well above 0.1 GeV the radiation damage is dominated by the inelastic cross section, while at non-relativistic energies the elastic cross section prevails. The smaller inelastic nucleon-Carbon cross section and the light nuclear fragments imply that at high energies diamond is an order of magnitude more radiation hard than silicon, while at energies below 0.1 GeV the difference becomes significantly smaller.

  8. A top-down approach for the prediction of hardness and toughness of hierarchical materials

    International Nuclear Information System (INIS)

    Carpinteri, Alberto; Paggi, Marco

    2009-01-01

    Many natural and man-made materials exhibit structure over more than one length scale. In this paper, we deal with hierarchical grained composite materials that have recently been designed to achieve superior hardness and toughness as compared to their traditional counterparts. Their nested structure, where meso-grains are recursively composed of smaller and smaller micro-grains at the different scales with a fractal-like topology, is herein studied from a hierarchical perspective. Considering a top-down approach, i.e. from the largest to the smallest scale, we propose a recursive micromechanical model coupled with a generalized fractal mixture rule for the prediction of hardness and toughness of a grained material with n hierarchical levels. A relationship between hardness and toughness is also derived and the analytical predictions are compared with experimental data.

  9. Using Neural Networks to Predict the Hardness of Aluminum Alloys

    Directory of Open Access Journals (Sweden)

    B. Zahran

    2015-02-01

    Aluminum alloys have gained significant industrial importance, being used in many light and heavy industries and especially in aerospace engineering. The mechanical properties of aluminum alloys are determined by a number of principal microstructural features. Conventional mathematical models of these properties are sometimes too complex to calculate analytically. In this paper, a neural network model is used to predict the hardness of aluminum alloys in relation to certain alloying elements. A backpropagation neural network is trained on a comprehensive dataset. The impact of certain elements is documented and an optimum network structure is proposed.
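
    A minimal from-scratch sketch of the idea: train a one-hidden-layer network by plain backpropagation to map alloying-element fractions to hardness. The composition-hardness relation, element choice, and network size below are invented for illustration and are not the paper's model or data:

```python
import math
import random

random.seed(1)

# Hypothetical smooth relation between two alloying-element fractions
# and hardness; invented for illustration only.
def target_hardness(cu, mg):
    return 60 + 40 * cu + 25 * mg - 30 * cu * mg

samples = []
for _ in range(200):
    cu, mg = random.random(), random.random()
    samples.append(((cu, mg), target_hardness(cu, mg)))

H, lr = 8, 0.01                       # hidden units, learning rate
w1 = [[random.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-0.5, 0.5) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [math.tanh(ws[0] * x[0] + ws[1] * x[1] + b)
         for ws, b in zip(w1, b1)]
    return h, sum(w * hi for w, hi in zip(w2, h)) + b2

for epoch in range(300):
    for x, y in samples:
        yn = (y - 80.0) / 20.0        # scale targets to roughly [-1, 1]
        h, out = forward(x)
        err = out - yn                # d(loss)/d(out) for 0.5*err^2 loss
        for j in range(H):
            dh = err * w2[j] * (1.0 - h[j] ** 2)   # backprop through tanh
            w2[j] -= lr * err * h[j]
            w1[j][0] -= lr * dh * x[0]
            w1[j][1] -= lr * dh * x[1]
            b1[j] -= lr * dh
        b2 -= lr * err

def predict(cu, mg):
    return forward((cu, mg))[1] * 20.0 + 80.0

print(round(predict(0.5, 0.5), 1))
```

    The target-scaling step matters: backpropagation with a tanh hidden layer trains far more reliably when outputs are normalized to the activation's working range.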

  10. A novel method to predict the highest hardness of plasma sprayed coating without micro-defects

    Science.gov (United States)

    Zhuo, Yukun; Ye, Fuxing; Wang, Feng

    2018-04-01

    Plasma-sprayed coatings are built up from splats, which are generally regarded as the elementary units of a coating. Many researchers have focused on the morphology and formation mechanism of splats. In this paper, a novel method is proposed to predict the highest hardness of a plasma-sprayed coating without micro-defects, based on the nanohardness of splats. The effectiveness of this method was examined by experiments. Firstly, the microstructure of the splats and the coating, as well as the 3D topography of the splats, was observed by SEM (SU1510) and video microscope (VHX-2000). Secondly, the nanohardness of splats was evaluated by nanoindentation (NHT) and compared with the microhardness of the coating measured by a microhardness tester (HV-1000A). The results show that the nanohardness of splats with diameters of 70 μm, 100 μm and 140 μm was in the range of 11-12 GPa, while the microhardness of the coating was in the range of 8-9 GPa. Because the splats contained no micro-defects such as pores and cracks in the nano-zones probed by nanoindentation, the nanohardness of the splats can be used to predict the highest hardness of a coating without micro-defects. This method indicates the maximum attainable hardness of a sprayed coating and will reduce the number of tests needed to obtain high-hardness coatings with better wear resistance.

  11. Selecting Optimal Random Forest Predictive Models: A Case Study on Predicting the Spatial Distribution of Seabed Hardness

    Science.gov (United States)

    Li, Jin; Tran, Maggie; Siwabessy, Justy

    2016-01-01

    Spatially continuous predictions of seabed hardness are important baseline environmental information for sustainable management of Australia’s marine jurisdiction. Seabed hardness is often inferred from multibeam backscatter data with unknown accuracy and can be inferred from underwater video footage at limited locations. In this study, we classified the seabed into four classes based on two new seabed hardness classification schemes (i.e., hard90 and hard70). We developed optimal predictive models to predict seabed hardness using random forest (RF) based on the point data of hardness classes and spatially continuous multibeam data. Five feature selection (FS) methods, namely variable importance (VI), averaged variable importance (AVI), knowledge-informed AVI (KIAVI), Boruta, and regularized RF (RRF), were tested based on predictive accuracy. Effects of highly correlated, important and unimportant predictors on the accuracy of RF predictive models were examined. Finally, spatial predictions generated using the most accurate models were visually examined and analysed. This study confirmed that: 1) hard90 and hard70 are effective seabed hardness classification schemes; 2) seabed hardness of four classes can be predicted with a high degree of accuracy; 3) the typical approach used to pre-select predictive variables by excluding highly correlated variables needs to be re-examined; 4) the identification of the important and unimportant predictors provides useful guidelines for further improving predictive models; 5) FS methods select the most accurate predictive model(s) instead of the most parsimonious ones, and AVI and Boruta are recommended for future studies; and 6) RF is an effective modelling method with high predictive accuracy for multi-level categorical data and can be applied to ‘small p and large n’ problems in environmental sciences. Additionally, automated computational programs for AVI need to be developed to increase its computational efficiency and
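
    The variable-importance idea behind the VI/AVI feature-selection methods can be sketched with permutation importance: permute one predictor at a time and measure the drop in accuracy. The sketch below uses a 1-nearest-neighbour classifier instead of a random forest, and two synthetic predictors (one informative, one irrelevant), so it illustrates the concept only, not the paper's models:

```python
import random

random.seed(2)

def make_point():
    informative = random.random()   # stands in for a predictor driving the class
    noise = random.random()         # irrelevant predictor
    label = 1 if informative > 0.5 else 0
    return [informative, noise], label

train = [make_point() for _ in range(200)]
test = [make_point() for _ in range(100)]

def predict_1nn(x):
    nearest = min(train,
                  key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))
    return nearest[1]

def accuracy(points):
    return sum(predict_1nn(x) == y for x, y in points) / len(points)

base = accuracy(test)

def permutation_importance(idx):
    # Shuffle one feature column in the test set and measure the accuracy drop.
    shuffled_col = [x[idx] for x, _ in test]
    random.shuffle(shuffled_col)
    permuted = []
    for (x, y), s in zip(test, shuffled_col):
        x2 = x[:]
        x2[idx] = s
        permuted.append((x2, y))
    return base - accuracy(permuted)

imp = [permutation_importance(i) for i in range(2)]
print(f"baseline {base:.2f}, importances {imp[0]:.2f} vs {imp[1]:.2f}")
```

    Permuting the informative predictor costs substantial accuracy while permuting the irrelevant one barely matters, which is the signal the FS methods in the study exploit when ranking predictors.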

  12. Comparison of first quadrant yield loci for Ti-6Al-4V with those predicted by Knoop hardness measurements

    International Nuclear Information System (INIS)

    Amateau, M.F.; Hanna, W.D.

    1975-01-01

    Knoop hardness impressions were used to construct biaxial yield loci in Ti-6Al-4V for a variety of textures. These results were compared with partial yield loci in the first quadrant determined from flow stress measurements at three stress ratios. In each case, the Knoop hardness technique was not sufficiently sensitive to predict the shape of the yield locus, with the largest discrepancy occurring for the most anisotropic sample. (U.S.)

  13. Prediction of hardness for Al-Cu-Zn alloys in as-cast and quenching conditions

    International Nuclear Information System (INIS)

    Villegas-Cardenas, J. D.; Saucedo-Munoz, M. L.; Lopez-Hirata, V. M.; Dorantes Rosales, H. J.

    2014-01-01

    This work presents a new experimental and numerical methodology to predict the hardness of as-cast, and solution-treated and quenched, Al-Cu-Zn alloys. The chemical compositions of the alloys lie along two straight lines represented by two equations, and eight different compositions were selected from each line. All the alloys were characterized by light microscopy, scanning electron microscopy, X-ray diffraction and Rockwell B hardness testing. The equilibrium phases at different temperatures were obtained with Thermo-Calc. The microstructural characterization and regression analysis made it possible to determine the phase transformations and two hardness-assessment equations. Combining the hardness equations with the composition-line equations permitted estimation of the hardness of any alloy composition inside this zone. This was verified by calculating the hardness for compositions reported in other works, with an error lower than 7% in the estimated hardness. (Author)

  14. Low empathy in deaf and hard of hearing (pre)adolescents compared to normal hearing controls.

    Directory of Open Access Journals (Sweden)

    Anouk P Netten

    The purpose of this study was to examine the level of empathy in deaf and hard of hearing (pre)adolescents compared to normal hearing controls and to define the influence of language and various hearing loss characteristics on the development of empathy. The study group (mean age 11.9 years) consisted of 122 deaf and hard of hearing children (52 children with cochlear implants and 70 children with conventional hearing aids) and 162 normal hearing children. The two groups were compared using self-reports, a parent-report and observation tasks to rate the children's level of empathy, their attendance to others' emotions, emotion recognition, and supportive behavior. Deaf and hard of hearing children reported lower levels of cognitive empathy and prosocial motivation than normal hearing children, regardless of their type of hearing device. The level of emotion recognition was equal in both groups. During observations, deaf and hard of hearing children showed more attention to the emotion evoking events but less supportive behavior compared to their normal hearing peers. Deaf and hard of hearing children attending mainstream education or using oral language show higher levels of cognitive empathy and prosocial motivation than deaf and hard of hearing children who use sign (supported) language or attend special education. However, they are still outperformed by normal hearing children. Deaf and hard of hearing children, especially those in special education, show lower levels of empathy than normal hearing children, which can have consequences for initiating and maintaining relationships.

  15. Low empathy in deaf and hard of hearing (pre)adolescents compared to normal hearing controls.

    Science.gov (United States)

    Netten, Anouk P; Rieffe, Carolien; Theunissen, Stephanie C P M; Soede, Wim; Dirks, Evelien; Briaire, Jeroen J; Frijns, Johan H M

    2015-01-01

    The purpose of this study was to examine the level of empathy in deaf and hard of hearing (pre)adolescents compared to normal hearing controls and to define the influence of language and various hearing loss characteristics on the development of empathy. The study group (mean age 11.9 years) consisted of 122 deaf and hard of hearing children (52 children with cochlear implants and 70 children with conventional hearing aids) and 162 normal hearing children. The two groups were compared using self-reports, a parent-report and observation tasks to rate the children's level of empathy, their attendance to others' emotions, emotion recognition, and supportive behavior. Deaf and hard of hearing children reported lower levels of cognitive empathy and prosocial motivation than normal hearing children, regardless of their type of hearing device. The level of emotion recognition was equal in both groups. During observations, deaf and hard of hearing children showed more attention to the emotion evoking events but less supportive behavior compared to their normal hearing peers. Deaf and hard of hearing children attending mainstream education or using oral language show higher levels of cognitive empathy and prosocial motivation than deaf and hard of hearing children who use sign (supported) language or attend special education. However, they are still outperformed by normal hearing children. Deaf and hard of hearing children, especially those in special education, show lower levels of empathy than normal hearing children, which can have consequences for initiating and maintaining relationships.

  16. Shared Task System Description: Frustratingly Hard Compositionality Prediction

    DEFF Research Database (Denmark)

    Johannsen, Anders Trærup; Martinez Alonso, Hector; Rishøj, Christian

    2011-01-01

    We considered a wide range of features for the DiSCo 2011 shared task about compositionality prediction for word pairs, including COALS-based endocentricity scores, compositionality scores based on distributional clusters, statistics about wordnet-induced paraphrases, hyphenation, and the likelihood of long translation equivalents in other languages. Many of the features we considered correlated significantly with human compositionality scores, but in support vector regression experiments we obtained the best results using only COALS-based endocentricity scores. Our system was nevertheless…

  17. Predicting the Performance of Chain Saw Machines Based on Shore Scleroscope Hardness

    Science.gov (United States)

    Tumac, Deniz

    2014-03-01

    Shore hardness has been used to estimate several physical and mechanical properties of rocks over the last few decades. However, the number of studies correlating Shore hardness with rock cutting performance is quite limited, and rather limited research has been carried out on predicting the performance of chain saw machines. This study differs from previous investigations in that Shore hardness values (SH1, SH2, and the deformation coefficient) are used to determine the field performance of chain saw machines. The measured Shore hardness values are correlated with the physical and mechanical properties of natural stone samples, with cutting parameters (normal force, cutting force, and specific energy) obtained from linear cutting tests in unrelieved cutting mode, and with the areal net cutting rate of chain saw machines. Two previously developed empirical models are improved for predicting the areal net cutting rate of chain saw machines. The first model is based on a revised chain saw penetration index, which uses SH1, machine weight, and useful arm cutting depth as predictors. The second model is based on the power consumed for cutting the stone alone, arm thickness, and specific energy as a function of the deformation coefficient. While cutting force has a strong relationship with Shore hardness values, normal force has a weak to moderate correlation. Uniaxial compressive strength, Cerchar abrasivity index, and density can also be predicted from Shore hardness values.

  18. Hardness prediction of HAZ in temper bead welding by non-consistent layer technique

    International Nuclear Information System (INIS)

    Yu, Lina; Saida, Kazuyoshi; Mochizuki, Masahito; Kameyama, Masashi; Chigusa, Naoki; Nishimoto, Kazutoshi

    2014-01-01

    Based on an experimentally obtained hardness database, the authors have previously constructed a neural network-based system for predicting the hardness of the heat-affected zone (HAZ) in temper bead welding by the Consistent Layer (CSL) technique. In practical operation, however, the CSL technique is sometimes difficult to apply because precise heat-input control is difficult, and in such cases non-CSL techniques are mainly used in the actual repair process. Therefore, in the present study, a neural network-based system for predicting HAZ hardness in temper bead welding by non-CSL techniques has been constructed through an engineering simplification of the thermal cycles. The hardness distribution in the HAZ with non-CSL techniques was calculated based on thermal cycles obtained numerically by the finite element method. The experimental results show that the predicted hardness is in good agreement with the measured values. It follows that the proposed method is effective for estimating the tempering effect during temper bead welding by non-CSL techniques. (author)

  19. Low Empathy in Deaf and Hard of Hearing (Pre)Adolescents Compared to Normal Hearing Controls

    Science.gov (United States)

    Netten, Anouk P.; Rieffe, Carolien; Theunissen, Stephanie C. P. M.; Soede, Wim; Dirks, Evelien; Briaire, Jeroen J.; Frijns, Johan H. M.

    2015-01-01

    Objective The purpose of this study was to examine the level of empathy in deaf and hard of hearing (pre)adolescents compared to normal hearing controls and to define the influence of language and various hearing loss characteristics on the development of empathy. Methods The study group (mean age 11.9 years) consisted of 122 deaf and hard of hearing children (52 children with cochlear implants and 70 children with conventional hearing aids) and 162 normal hearing children. The two groups were compared using self-reports, a parent-report and observation tasks to rate the children’s level of empathy, their attendance to others’ emotions, emotion recognition, and supportive behavior. Results Deaf and hard of hearing children reported lower levels of cognitive empathy and prosocial motivation than normal hearing children, regardless of their type of hearing device. The level of emotion recognition was equal in both groups. During observations, deaf and hard of hearing children showed more attention to the emotion evoking events but less supportive behavior compared to their normal hearing peers. Deaf and hard of hearing children attending mainstream education or using oral language show higher levels of cognitive empathy and prosocial motivation than deaf and hard of hearing children who use sign (supported) language or attend special education. However, they are still outperformed by normal hearing children. Conclusions Deaf and hard of hearing children, especially those in special education, show lower levels of empathy than normal hearing children, which can have consequences for initiating and maintaining relationships. PMID:25906365

  20. A comparative study of the effect of soft and hard cervical collars on static postural stability

    Directory of Open Access Journals (Sweden)

    Minoo Khalkhali Zavieh

    2013-01-01

    Background and Aim: Using cervical collars is one of the treatment methods for relieving cervical pain. An effect of limb orthoses on proprioception and postural stability has been suggested, but there are not sufficient studies on the effect of cervical collars on static and dynamic stability, and the effects of soft and hard collars have not been compared with one another. The objective of this study is to investigate and compare the immediate effect of soft and hard cervical collars on static postural stability in healthy young subjects. Methods and Materials: This quasi-experimental study with a repeated-measures design was conducted on 65 healthy young male and female college students. Static stability was evaluated by the modified Clinical Test for Sensory Interaction and Balance (CTSIB) without a collar and with soft and hard cervical collars, and the conditions were compared. Results: In the standing position on a firm surface with closed eyes, both soft and hard collars decreased stability, and there was no significant difference between the collars. In the standing positions on a soft surface with closed and open eyes, neither the soft nor the hard collar changed stability. Conclusion: Our results suggest that, in static conditions without vision, both collars decrease stability in healthy young subjects. Evaluating stability and preventing balance disturbance should therefore be considered when prescribing a collar.

  1. Low empathy in deaf and hard of hearing (pre)adolescents compared to normal hearing controls

    NARCIS (Netherlands)

    Netten, A.P.; Rieffe, C.; Theunissen, S.C.P.M.; Soede, W.; Dirks, E.; Briaire, J.J.; Frijns, J.H.M.

    2015-01-01

    Objective The purpose of this study was to examine the level of empathy in deaf and hard of hearing (pre)adolescents compared to normal hearing controls and to define the influence of language and various hearing loss characteristics on the development of empathy. Methods The study group (mean age

  2. Comparative Assessment of Cutting Inserts and Optimization during Hard Turning: Taguchi-Based Grey Relational Analysis

    Science.gov (United States)

    Venkata Subbaiah, K.; Raju, Ch.; Suresh, Ch.

    2017-08-01

    The present study aims to compare conventional cutting inserts with wiper cutting inserts during the hard turning of AISI 4340 steel at different workpiece hardness levels. Type of insert, hardness, cutting speed, feed, and depth of cut are taken as process parameters. Taguchi’s L18 orthogonal array was used to conduct the experimental tests. Parametric analysis was carried out to determine the influence of each process parameter on three important surface roughness characteristics (Ra, Rz, and Rt) and on material removal rate. Taguchi-based Grey Relational Analysis (GRA) was used to optimize the process parameters for individual-response and multi-response outputs. Additionally, analysis of variance (ANOVA) was applied to identify the most significant factor.
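
    A worked sketch of the grey relational analysis step, using invented response values and an assumed weighting of the two responses (the distinguishing coefficient ζ = 0.5 is the conventional choice; none of these numbers are the paper's data):

```python
# Invented responses for three cutting-parameter trials: surface
# roughness Ra is smaller-the-better, material removal rate MRR is
# larger-the-better.
trials = {
    "A": {"Ra": 0.8, "MRR": 120.0},
    "B": {"Ra": 1.2, "MRR": 200.0},
    "C": {"Ra": 0.6, "MRR": 90.0},
}
weights = {"Ra": 0.6, "MRR": 0.4}   # response weights, a modelling choice

def normalize(values, larger_better):
    lo, hi = min(values), max(values)
    if larger_better:
        return [(v - lo) / (hi - lo) for v in values]
    return [(hi - v) / (hi - lo) for v in values]

def grey_coeff(norm, zeta=0.5):
    delta = [1.0 - v for v in norm]          # deviation from the ideal (= 1)
    dmin, dmax = min(delta), max(delta)
    return [(dmin + zeta * dmax) / (d + zeta * dmax) for d in delta]

names = list(trials)
g_ra = grey_coeff(normalize([trials[n]["Ra"] for n in names], False))
g_mrr = grey_coeff(normalize([trials[n]["MRR"] for n in names], True))

# Weighted grey relational grade; the trial with the highest grade is
# the best compromise across both responses.
grades = {n: weights["Ra"] * a + weights["MRR"] * b
          for n, a, b in zip(names, g_ra, g_mrr)}
best = max(grades, key=grades.get)
print(best, {n: round(g, 3) for n, g in grades.items()})
```

    With equal weights, two trials that are each best on one response and worst on the other tie exactly; the weighting breaks such ties and encodes which response matters more.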

  3. Mathematical model of heat transfer to predict distribution of hardness through the Jominy bar

    International Nuclear Information System (INIS)

    Lopez, E.; Hernandez, J. B.; Solorio, G.; Vergara, H. J.; Vazquez, O.; Garnica, F.

    2013-01-01

    The heat transfer coefficient at the quenched end surface of a Jominy end-quench specimen was estimated by solving the inverse heat conduction problem. A mathematical model based on the finite-difference method was developed to predict thermal paths and the volume fraction of transformed phases. The model was coded in the commercial package Microsoft Visual Basic 6. The calculated thermal paths and final phase distribution were used to evaluate the hardness distribution along an AISI 4140 Jominy bar. (Author)
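
    The finite-difference approach can be sketched with an explicit 1-D scheme for a bar quenched at one end: a convective boundary at the quenched end, an insulated far end, and time-marching of the temperature field. The material properties, heat transfer coefficient, and grid below are illustrative round numbers, not the values of the paper's model (which also couples phase transformations):

```python
# Illustrative round-number properties for a steel bar.
L_bar = 0.10          # bar length, m
N = 50                # grid nodes along the bar
dx = L_bar / (N - 1)
alpha = 6.0e-6        # thermal diffusivity, m^2/s
k = 30.0              # conductivity, W/(m K)
h = 5000.0            # quench-end heat transfer coefficient, W/(m^2 K)
T_water, T0 = 25.0, 850.0

dt = 0.3 * dx * dx / alpha      # keeps the explicit scheme stable
fo = alpha * dt / (dx * dx)     # grid Fourier number (= 0.3)
bi = h * dx / k                 # grid Biot number

T = [float(T0)] * N

def step(T):
    Tn = T[:]
    # Convective boundary at the quenched end (node 0).
    Tn[0] = T[0] + 2.0 * fo * (T[1] - T[0] - bi * (T[0] - T_water))
    for i in range(1, N - 1):
        Tn[i] = T[i] + fo * (T[i - 1] - 2.0 * T[i] + T[i + 1])
    Tn[-1] = Tn[-2]             # insulated far end
    return Tn

t = 0.0
while T[0] > 400.0:             # march until the quenched end reaches 400 C
    T = step(T)
    t += dt

print(f"t = {t:.1f} s, quenched end {T[0]:.0f} C, ~25 mm in {T[12]:.0f} C")
```

    The thermal path at each node (its temperature history) is what a hardenability model would feed into a phase-transformation and hardness calculation, as the paper does for the AISI 4140 bar.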

  4. Comparative Study of Bankruptcy Prediction Models

    Directory of Open Access Journals (Sweden)

    Isye Arieshanti

    2013-09-01

    Early indication of bankruptcy is important for a company. If a company is aware of its potential bankruptcy, it can take preventive action. To detect potential bankruptcy, a company can utilize a bankruptcy prediction model. Such a model can be built using machine learning methods; however, the choice of method should be made carefully, because the suitability of a model depends on the specific problem. Therefore, in this paper we perform a comparative study of several machine learning methods for bankruptcy prediction. Comparing the performance of several models based on machine learning methods (k-NN, fuzzy k-NN, SVM, Bagging Nearest Neighbour SVM, Multilayer Perceptron (MLP), and a hybrid of MLP + Multiple Linear Regression), the study shows that the fuzzy k-NN method achieves the best performance, with an accuracy of 77.5%.
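
    The comparative-study workflow — train several simple classifiers on the same split and rank them by accuracy — can be sketched as follows. The two synthetic "financial ratio" features and the classifiers chosen (k-NN variants and a nearest-centroid baseline) are illustrative and much simpler than the paper's models:

```python
import random

random.seed(3)

# Synthetic firms: bankruptcy is driven by leverage minus liquidity
# plus noise; invented for illustration only.
def make_firm():
    liquidity = random.gauss(0.0, 1.0)
    leverage = random.gauss(0.0, 1.0)
    bankrupt = 1 if leverage - liquidity + random.gauss(0.0, 0.5) > 0 else 0
    return (liquidity, leverage), bankrupt

train = [make_firm() for _ in range(300)]
test = [make_firm() for _ in range(150)]

def knn_predict(x, k):
    neigh = sorted(train,
                   key=lambda p: (p[0][0] - x[0]) ** 2 + (p[0][1] - x[1]) ** 2)[:k]
    votes = sum(label for _, label in neigh)
    return 1 if votes * 2 >= k else 0

def centroid_predict(x):
    # Nearest-class-centroid baseline (a crude linear discriminant).
    def centroid(lbl):
        pts = [p for p, l in train if l == lbl]
        return (sum(a for a, _ in pts) / len(pts),
                sum(b for _, b in pts) / len(pts))
    c0, c1 = centroid(0), centroid(1)
    d0 = (x[0] - c0[0]) ** 2 + (x[1] - c0[1]) ** 2
    d1 = (x[0] - c1[0]) ** 2 + (x[1] - c1[1]) ** 2
    return 0 if d0 < d1 else 1

def accuracy(pred):
    return sum(pred(x) == y for x, y in test) / len(test)

scores = {f"{k}-NN": accuracy(lambda x, k=k: knn_predict(x, k)) for k in (1, 5, 15)}
scores["centroid"] = accuracy(centroid_predict)
print({name: round(s, 3) for name, s in scores.items()})
```

    Ranking the methods on a held-out set, as here, is the essence of the comparison; the paper does the same with a richer model set and real financial data.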

  5. Ridge regression for predicting elastic moduli and hardness of calcium aluminosilicate glasses

    Science.gov (United States)

    Deng, Yifan; Zeng, Huidan; Jiang, Yejia; Chen, Guorong; Chen, Jianding; Sun, Luyi

    2018-03-01

    It is of great significance to design glasses with satisfactory mechanical properties predictively through modeling. Among various modeling methods, data-driven modeling is a reliable approach that can dramatically shorten research duration, cut research costs, and accelerate the development of glass materials. In this work, ridge regression (RR) analysis was used to construct regression models for predicting the compositional dependence of the elastic moduli (shear, bulk, and Young’s moduli) and hardness of CaO-Al2O3-SiO2 glasses based on the ternary diagram of the compositions. Property prediction over a large glass composition space was accomplished with known experimental data for various compositions in the literature, and the simulated results are in good agreement with the measured ones. This regression model can serve as a facile and effective tool for studying the relationship between composition and properties, enabling highly efficient design of glasses to meet the requirements for specific elasticity and hardness.
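
    A minimal sketch of ridge regression on composition fractions, solving the penalized normal equations (XᵀX + λI)w = Xᵀy directly. The composition-modulus relation, its coefficients, and the penalty λ below are invented for illustration, not values from the literature:

```python
import random

random.seed(4)

# Invented linear composition-property relation plus noise; the true
# coefficients (80, 90, 70) let us check the fit below.
def synth_modulus(cao, al2o3, sio2):
    return 80.0 * cao + 90.0 * al2o3 + 70.0 * sio2 + random.gauss(0.0, 1.0)

rows, ys = [], []
for _ in range(60):
    u, v = random.random(), random.random()
    cao, al2o3 = u / (u + v + 1.0), v / (u + v + 1.0)  # crude simplex sampling
    sio2 = 1.0 - cao - al2o3
    rows.append([cao, al2o3, sio2])
    ys.append(synth_modulus(cao, al2o3, sio2))

lam = 0.1   # ridge penalty, a tuning choice
p = 3
A = [[sum(r[i] * r[j] for r in rows) + (lam if i == j else 0.0)
      for j in range(p)] for i in range(p)]
rhs = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(p)]

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

w = solve(A, rhs)
print([round(c, 1) for c in w])
```

    Because the three fractions sum to one, the columns are strongly correlated; the ridge penalty is what keeps the coefficient estimates stable, which is precisely why RR suits composition-property data.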

  6. Response surface and neural network based predictive models of cutting temperature in hard turning

    Directory of Open Access Journals (Sweden)

    Mozammel Mia

    2016-11-01

    The present study aimed to develop predictive models of the average tool-workpiece interface temperature in hard turning of AISI 1060 steel with a coated carbide insert. Response Surface Methodology (RSM) and an Artificial Neural Network (ANN) were employed to predict the temperature with respect to cutting speed, feed rate and material hardness. The number and orientation of the experimental trials, conducted in both dry and high pressure coolant (HPC) environments, were planned using a full factorial design. The temperature was measured using a tool-work thermocouple. In the RSM model, two quadratic equations for temperature were derived from the experimental data, and analysis of variance (ANOVA) and mean absolute percentage error (MAPE) were used to verify the adequacy of the models. In the ANN model, 80% of the data were used for training and 20% for testing, and a similar error analysis was conducted. The accuracy of the RSM and ANN models was found to be ⩾99%. The ANN model exhibits an error of ∼5% MAE for the testing data, and the regression coefficient was greater than 99.9% for both dry and HPC conditions. Both models are acceptable, although the ANN model demonstrated higher accuracy. These models, if employed, are expected to provide better control of the cutting temperature in turning of hardened steel.
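
    The MAPE adequacy check can be illustrated with a hypothetical fitted quadratic response surface T(v, f) and invented "measured" temperatures; neither the coefficients nor the data points are from the paper:

```python
# Hypothetical quadratic response-surface model of cutting temperature
# as a function of cutting speed v (m/min) and feed f (mm/rev);
# illustrative coefficients only.
def rsm_temp(v, f):
    return 150 + 2.1 * v + 900 * f - 0.004 * v * v - 1500 * f * f + 1.2 * v * f

# Invented (v, f, measured temperature) validation points.
measured = [
    (100, 0.10, 415),
    (150, 0.12, 475),
    (200, 0.14, 550),
    (250, 0.16, 570),
]

# Mean absolute percentage error: average of |predicted - measured| / measured.
errors = [abs(rsm_temp(v, f) - t) / t for v, f, t in measured]
mape = 100.0 * sum(errors) / len(errors)
print(f"MAPE = {mape:.1f}%")
```

    A MAPE of a couple of percent corresponds to the ⩾99% accuracy quoted in the abstract (accuracy ≈ 100% − MAPE).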

  7. Comparative face-shear piezoelectric properties of soft and hard PZT ceramics

    Science.gov (United States)

    Miao, Hongchen; Chen, Xi; Cai, Hairong; Li, Faxin

    2015-12-01

    The face-shear ( d 36 ) mode may be the most practical shear mode in piezoelectrics, although in theory this mode cannot appear in piezoelectric ceramics because of their transversely isotropic symmetry. Recently, we realized a piezoelectric coefficient d 36 of up to 206 pC/N in soft PbZr1-xTixO3 (PZT) ceramics via ferroelastic domain engineering [H. C. Miao and F. X. Li, Appl. Phys. Lett. 107, 122902 (2015)]. In this work, we further realized the face-shear mode in both hard and soft PZT ceramics, including PZT-4 (hard), PZT-51 (soft), and PZT-5H (soft), and investigated the electric properties systematically. Resonance methods are derived to measure the d 36 coefficients using both square patches and narrow bar samples, and the obtained values are consistent with those measured previously by a modified d 33 meter. For all samples, the pure d 36 mode can only appear near the resonance frequency, and the coupled d 36 - d 31 mode dominates off resonance. It is found that both the piezoelectric coefficient d 36 and the electromechanical coupling factor k 36 of the soft PZT ceramics (PZT-5H and PZT-51) are considerably larger than those of the hard PZT ceramics (PZT-4). The obtained d 36 of 160-275 pC/N, k 36 ˜ 0.24, and the mechanical quality factor Q 36 of 60-90 in soft PZT ceramics are comparable with the corresponding properties of the d 31 mode sample. Therefore, the d 36 mode in modified soft PZT ceramics is more promising for industrial applications such as face-shear resonators and shear horizontal wave generators.

  8. Influence of Subjectivity in Geological Mapping on the Net Penetration Rate Prediction for a Hard Rock TBM

    Science.gov (United States)

    Seo, Yongbeom; Macias, Francisco Javier; Jakobsen, Pål Drevland; Bruland, Amund

    2018-05-01

    The net penetration rate of hard rock tunnel boring machines (TBMs) is influenced by the degree of fracturing of the rock mass. This influence is taken into account in the NTNU prediction model by the rock mass fracturing factor (ks). ks is evaluated by geological mapping: measuring the orientation, spacing and type of fractures. Geological mapping is a subjective procedure, so mapping results can contain considerable uncertainty. To assess the influence of this subjectivity, the mapping data of a tunnel mapped by three researchers were compared, and the influence of the variation in geological mapping on the prediction was estimated. This study compares predicted net penetration rates and actual net penetration rates for TBM tunneling (from field data) and suggests mapping methods that can reduce the error related to subjectivity. The main findings of this paper are as follows: (1) the variation of mapping data between individuals; (2) the effect of the observed variation on uncertainty in predicted net penetration rates; and (3) the influence of mapping methods on the difference between predicted and actual net penetration rates.

  9. Hard-Wired Dopant Networks and the Prediction of High Transition Temperatures in Ceramic Superconductors

    International Nuclear Information System (INIS)

    Phillips, J.C.

    2010-01-01

    We review multiple successes of the discrete hard-wired dopant network model ZZIP, and comment on the equally numerous failures of continuum models, in describing and predicting the properties of ceramic superconductors. The prediction of transition temperatures can be regarded in several ways, either as an exacting test of theory, or as a tool for identifying theoretical rules for defining new homology models. Popular first-principles methods for predicting transition temperatures in conventional crystalline superconductors have failed for cuprate HTSC, as have parameterized models based on CuO2 planes (with or without apical oxygen). Following a path suggested by Bayesian probability, it was found that the glassy, self-organized dopant network percolative model is so successful that it defines a new homology class appropriate to ceramic superconductors. The reasons for this success in an exponentially complex (non-polynomial complete, NPC) problem are discussed, and a critical comparison is made with previous polynomial (PC) theories. The predictions are successful for the superfamily of all ceramics, including new non-cuprates based on FeAs in place of CuO2.

  10. Numerical simulation of continuous cooling of a low alloy steel to predict microstructure and hardness

    International Nuclear Information System (INIS)

    Kakhki, M Eshraghi; Kermanpur, A; Golozar, M A

    2009-01-01

    In this work, a numerical model was developed to simulate the continuous cooling of a low alloy steel. In order to simulate the kinetics of diffusional phase transformations, the Johnson–Mehl–Avrami–Kolmogorov (JMAK) equation and additivity rule were employed, while a new model was applied for martensitic transformation. In addition, a novel approach was applied for computing the actual phase fractions in the multiphase steel. Effects of latent heat release during phase transformations, temperature and phase fractions on the variation of thermo-physical properties were considered. The developed numerical model was applied to simulate the cooling process during the Jominy end quench test as well as the quenching of a steel gear in water and oil. In this respect, precise models were used to simulate the complex boundary conditions in the Jominy test and a stainless steel probe was used for determining the heat transfer coefficients of quenching media by an inverse method. The present model was validated against cooling curve measurements, metallographic analysis and hardness tests. Good agreement was found between the experimental and simulation results. This model is able to simulate the continuous cooling and kinetics of phase transformation and to predict the final distribution of microstructures and hardness in low alloy steels
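
    The two kinetic ingredients named above can be sketched compactly: the JMAK equation gives the isothermal fraction transformed, and the additivity rule treats a continuous cooling curve as a chain of short isothermal steps, starting the transformation when the summed fractional incubation times reach 1. The rate constants and incubation times below are illustrative, not values fitted to the steel in the paper:

```python
import math

# JMAK isothermal kinetics: X(t) = 1 - exp(-k * t**n).
def jmak_fraction(k, n, t):
    return 1.0 - math.exp(-k * t ** n)

# Scheil additivity: step down the cooling curve in increments dt; the
# transformation begins when sum(dt / tau_i) reaches 1, where tau_i is
# the isothermal incubation time at the temperature of step i.
def scheil_start_step(dt, incubation_times):
    total = 0.0
    for i, tau in enumerate(incubation_times):
        total += dt / tau  # fractional incubation consumed this step
        if total >= 1.0:
            return i  # transformation begins during this step
    return None  # cooling ended before transformation started

# Fraction transformed after 10 s at constant temperature (toy constants):
print(jmak_fraction(0.01, 3.0, 10.0))
# Step index where transformation starts on a toy cooling path:
print(scheil_start_step(1.0, [10.0, 5.0, 2.5, 2.0, 2.0]))
```

In a full model, k and n (and the incubation times) depend on temperature and are looked up from TTT-diagram data at every step.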

  11. Hard electronics

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    Hard material technologies were surveyed to establish hard electronic technology, which offers superior characteristics under hard operational or environmental conditions compared with conventional Si devices. The following technologies were surveyed separately: (1) the device and integration technologies of wide-gap hard semiconductors such as SiC, diamond and nitrides, (2) the technology of hard semiconductor devices for vacuum microelectronics, and (3) the technology of hard new-material devices based on oxides. The formation technology of oxide thin films made remarkable progress after the discovery of oxide superconductor materials, resulting in the development of an atomic layer growth method and a mist deposition method. This leading research is expected to solve issues difficult to realize with current Si technology, such as high-power, high-frequency and low-loss devices in power electronics, high-temperature-proof and radiation-proof devices in ultimate electronics, and high-speed, densely integrated devices in information electronics. 432 refs., 136 figs., 15 tabs.

  12. Hardness prediction for the repair welding of 2.25Cr-1Mo pressure vessels

    International Nuclear Information System (INIS)

    Oddy, A.S.; Chandel, R.S.

    1991-01-01

    Reactor vessels used for the hydrotreating of heavy oils and tar sand bitumen are frequently made of 2.25Cr-1Mo steel in thicknesses of 150 to 300 mm. Defects developed during installation or service are often repaired by welding. For practical reasons, postweld heat treatment of the repair welds is undesirable. This has led to a continued effort to develop weld repair techniques that do not involve postweld heat treatment. Recently a six-layer automatic gas tungsten arc welding (GTAW) technique has been proposed for the repair welding of nuclear reactor vessels made of SA508 Class 2 steel. In this technique, the second and third passes refine the microstructure of the first pass, and the last three passes temper the first pass. Alberry has developed a set of empirical rules predicting the hardness after each pass in multipass welds made in SA508 Class 2 steels. This algorithm has been used to predict the number of layers required to achieve a desired hardness. A transformation and tempering algorithm for 2.25Cr-1Mo, similar to that for the above steel, is presented. The tempering algorithm of Alberry suffers from several minor problems and can be improved. A mathematically correct method for the calculation of the tempering occurring in an anisothermal cycle is demonstrated. In addition, the rules used to describe the softening that occurs during tempering are heuristic. Separate rules are proposed for the kinetics of softening depending on the peak temperature. A re-examination of those rules reveals that they can be recast in the form of a single rule for the material examined. Reassessing the basic data presented by Alberry leads to a single softening rule with better theoretical justification.
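
    One standard way to handle tempering over an anisothermal cycle (the general idea, not Alberry's actual rule) is to convert each time step of the thermal cycle into an equivalent time at a reference temperature using an Arrhenius rate. The activation energy and thermal cycle below are illustrative assumptions:

```python
import math

R = 8.314  # gas constant, J/(mol K)

# Equivalent isothermal time at T_ref for an anisothermal cycle sampled
# at intervals dt: each step contributes dt scaled by the Arrhenius rate
# ratio exp(-Q/RT) / exp(-Q/RT_ref). Q and the cycle are hypothetical.
def equivalent_time(temps_K, dt, Q, T_ref):
    ref_rate = math.exp(-Q / (R * T_ref))
    return sum(dt * math.exp(-Q / (R * T)) / ref_rate for T in temps_K)

# Toy cooling leg of a weld thermal cycle: 900 K down to 700 K in 10 K steps.
cycle = [900.0 - 10.0 * i for i in range(21)]
teq = equivalent_time(cycle, dt=1.0, Q=250e3, T_ref=900.0)
print(f"equivalent time at 900 K: {teq:.2f} s")
```

The equivalent time can then be fed into an isothermal softening law, which is what makes the anisothermal calculation "mathematically correct" rather than a per-pass heuristic.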

  13. Predictive and comparative analysis of Ebolavirus proteins

    Science.gov (United States)

    Cong, Qian; Pei, Jimin; Grishin, Nick V

    2015-01-01

    Ebolavirus is the pathogen for Ebola Hemorrhagic Fever (EHF). This disease exhibits a high fatality rate and has recently reached a historically epidemic proportion in West Africa. Out of the 5 known Ebolavirus species, only Reston ebolavirus has lost human pathogenicity, while retaining the ability to cause EHF in long-tailed macaque. Significant efforts have been spent to determine the three-dimensional (3D) structures of Ebolavirus proteins, to study their interaction with host proteins, and to identify the functional motifs in these viral proteins. Here, in light of these experimental results, we apply computational analysis to predict the 3D structures and functional sites for Ebolavirus protein domains with unknown structure, including a zinc-finger domain of VP30, the RNA-dependent RNA polymerase catalytic domain and a methyltransferase domain of protein L. In addition, we compare sequences of proteins that interact with Ebolavirus proteins from RESTV-resistant primates with those from RESTV-susceptible monkeys. The host proteins that interact with GP and VP35 show an elevated level of sequence divergence between the RESTV-resistant and RESTV-susceptible species, suggesting that they may be responsible for host specificity. Meanwhile, we detect variable positions in protein sequences that are likely associated with the loss of human pathogenicity in RESTV, map them onto the 3D structures and compare their positions to known functional sites. VP35 and VP30 are significantly enriched in these potential pathogenicity determinants and the clustering of such positions on the surfaces of VP35 and GP suggests possible uncharacterized interaction sites with host proteins that contribute to the virulence of Ebolavirus. PMID:26158395

  15. A hard tissue cephalometric comparative study between hand tracing and computerized tracing

    Directory of Open Access Journals (Sweden)

    Ramachandra Prabhakar

    2014-01-01

    Full Text Available Aims: To analyze and compare the angular and linear hard tissue cephalometric measurements using hand tracing and computerized tracings with the Nemoceph and Dolphin software systems. Subjects and Methods: A total of 30 cephalograms were randomly chosen for the study with the following criteria: cephalograms of patients with good contrast, no distortion, and minimal radiographic artifacts, acquired using the digital method (Kodak 8000 C), with 12 angular and nine linear parameters selected for the study. Comparisons were determined by post-hoc test using the Tukey HSD method. The N-Par tests were performed using the Kruskal-Wallis method. Statistical Analysis Used: ANOVA and post-hoc. Results: The results of this study show that there is no significant difference in the angular and linear measurements recorded. The P values were significant at the 0.05 level for two parameters, Co-A and Co-Gn, with the hand-tracing method. This was significant in ANOVA and the post-hoc test by the Tukey HSD method. Conclusions: This comparative study provides support for the transition from hand tracing to computerized tracing methodology. In fact, digital computerized tracings were easier and less time consuming, with the same reliability irrespective of the method of tracing.

  16. Comparing two sampling methods to engage hard-to-reach communities in research priority setting.

    Science.gov (United States)

    Valerio, Melissa A; Rodriguez, Natalia; Winkler, Paula; Lopez, Jaime; Dennison, Meagen; Liang, Yuanyuan; Turner, Barbara J

    2016-10-28

    Effective community-partnered and patient-centered outcomes research needs to address community priorities. However, optimal sampling methods to engage stakeholders from hard-to-reach, vulnerable communities to generate research priorities have not been identified. In two similar rural, largely Hispanic communities, a community advisory board guided recruitment of stakeholders affected by chronic pain using a different method in each community: 1) snowball sampling, a chain-referral method, or 2) purposive sampling to recruit diverse stakeholders. In both communities, three groups of stakeholders attended a series of three facilitated meetings to orient, brainstorm, and prioritize ideas (9 meetings/community). Using mixed methods analysis, we compared stakeholder recruitment and retention as well as priorities from both communities' stakeholders on mean ratings of their ideas based on importance and feasibility for implementation in their community. Of 65 eligible stakeholders in one community recruited by snowball sampling, 55 (85 %) consented, 52 (95 %) attended the first meeting, and 36 (65 %) attended all 3 meetings. In the second community, the purposive sampling method was supplemented by convenience sampling to increase recruitment. Of 69 stakeholders recruited by this combined strategy, 62 (90 %) consented, 36 (58 %) attended the first meeting, and 26 (42 %) attended all 3 meetings. Snowball sampling recruited more Hispanics and disabled persons (all P < 0.05). Despite differing recruitment strategies, stakeholders from the two communities identified largely similar ideas for research, focusing on non-pharmacologic interventions for management of chronic pain. Ratings on importance and feasibility for community implementation differed only on the importance of massage services (P = 0.045), which was higher for the purposive/convenience sampling group, and for city improvements/transportation services (P = 0.004), which was higher for the snowball sampling group. In each of the two similar hard-to-reach communities, a community advisory board partnered with researchers

  17. Comparing two sampling methods to engage hard-to-reach communities in research priority setting

    Directory of Open Access Journals (Sweden)

    Melissa A. Valerio

    2016-10-01

    Full Text Available Abstract Background Effective community-partnered and patient-centered outcomes research needs to address community priorities. However, optimal sampling methods to engage stakeholders from hard-to-reach, vulnerable communities to generate research priorities have not been identified. Methods In two similar rural, largely Hispanic communities, a community advisory board guided recruitment of stakeholders affected by chronic pain using a different method in each community: 1) snowball sampling, a chain-referral method, or 2) purposive sampling to recruit diverse stakeholders. In both communities, three groups of stakeholders attended a series of three facilitated meetings to orient, brainstorm, and prioritize ideas (9 meetings/community). Using mixed methods analysis, we compared stakeholder recruitment and retention as well as priorities from both communities’ stakeholders on mean ratings of their ideas based on importance and feasibility for implementation in their community. Results Of 65 eligible stakeholders in one community recruited by snowball sampling, 55 (85 %) consented, 52 (95 %) attended the first meeting, and 36 (65 %) attended all 3 meetings. In the second community, the purposive sampling method was supplemented by convenience sampling to increase recruitment. Of 69 stakeholders recruited by this combined strategy, 62 (90 %) consented, 36 (58 %) attended the first meeting, and 26 (42 %) attended all 3 meetings. Snowball sampling recruited more Hispanics and disabled persons (all P < 0.05). Despite differing recruitment strategies, stakeholders from the two communities identified largely similar ideas for research, focusing on non-pharmacologic interventions for management of chronic pain. Ratings on importance and feasibility for community implementation differed only on the importance of massage services (P = 0.045), which was higher for the purposive/convenience sampling group, and for city improvements

  18. In Vitro Comparative Study of Two Different Bleaching Agents on Micro-hardness Dental Enamel.

    Science.gov (United States)

    Fatima, Nazish; Ali Abidi, Syed Yawar; Meo, Ashraf Ali

    2016-02-01

    To evaluate the effect of a home-use bleaching agent containing 16% Carbamide Peroxide (CP) and an in-office bleaching agent containing 38% Hydrogen Peroxide (HP) on enamel micro-hardness. An in vitro experimental study. Department of Operative Dentistry and Science of Dental Materials at Dr. Ishrat-ul-Ebad Khan Institute of Oral Health Sciences, Dow University of Health Sciences, and Material Engineering Department of NED University of Engineering and Technology, Karachi, from July to December 2014. A total of 90 enamel slabs from 45 sound human 3rd molars were randomly divided into 3 groups. Each group contained 30 specimens (n=30). Group 1 was kept in artificial saliva at 37°C in an incubator during the whole experiment. Groups 2 and 3 were treated with power whitening gel and tooth whitening pen, respectively. After each bleaching session, specimens were thoroughly rinsed with deionized water for 10 seconds and then stored in artificial saliva at 37°C in an incubator. Artificial saliva was changed every 2 days. The Vickers hardness tester (Wolpert 402 MVD, Germany) was adjusted to a load of 0.1 kg (100 gm) and a dwell time of 5 seconds. Three Vickers indentations were performed on each specimen using a hardness tester according to the ISO 6507-3:1998 specification. Micro-hardness measurements were performed before and after bleaching at days 1, 7 and 14. In the control group, the baseline micro-hardness was 181.1 ±9.3, which was reduced after storage on days 1, 7 and 14 (p = 0.104). In Group 2, baseline micro-hardness was 180.4 ±10.1, which was reduced to 179.79 ±10.0 units after day 1; on days 7 and 14, the values were 179.8 ±10 and 179.7 ±10.29, respectively (p=0.091). The baseline micro-hardness in Group 3 was 174.0 ±22.9 units, which was reduced to 173 ±23 on day 1, 170 ±30 on day 7 and 173 ±23 on day 14 (p = 0.256). A statistically insignificant difference was found among micro-hardness values of different bleaching
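
    A Vickers number such as those reported above comes from the standard geometric relation HV = 1.8544 F/d², with the load F in kgf and the mean indent diagonal d in mm. The diagonal value below is an illustrative assumption chosen to land near the reported enamel hardness range:

```python
# Vickers hardness from the standard relation HV = 1.8544 * F / d^2,
# F in kgf, d (mean indent diagonal) in mm.
def vickers_hv(load_kgf, mean_diagonal_mm):
    return 1.8544 * load_kgf / mean_diagonal_mm ** 2

# 0.1 kgf (100 gf) load as in the study; a ~32 micron mean diagonal is a
# hypothetical value giving a hardness near the reported enamel numbers.
print(round(vickers_hv(0.1, 0.032), 1))  # prints 181.1
```

The inverse relation also shows the tester's sensitivity: at 100 gf, a change of only a couple of microns in the diagonal shifts HV by several units.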

  19. In Vitro Comparative Study of Two Different Bleaching Agents on Micro-hardness Dental Enamel

    International Nuclear Information System (INIS)

    Fatima, N.; Abidi, S. Y. A.; Meo, A. A.

    2016-01-01

    Objective: To evaluate the effect of a home-use bleaching agent containing 16% Carbamide Peroxide (CP) and an in-office bleaching agent containing 38% Hydrogen Peroxide (HP) on enamel micro-hardness. Study Design: An in vitro experimental study. Place and Duration of Study: Department of Operative Dentistry and Science of Dental Materials at Dr. Ishrat-ul-Ebad Khan Institute of Oral Health Sciences, Dow University of Health Sciences and Material Engineering Department of NED University of Engineering and Technology, Karachi, from July to December 2014. Methodology: A total of 90 enamel slabs from 45 sound human 3rd molars were randomly divided into 3 groups. Each group contained 30 specimens (n=30). Group 1 was kept in artificial saliva at 37°C in an incubator during the whole experiment. Groups 2 and 3 were treated with power whitening gel and tooth whitening pen, respectively. After each bleaching session, specimens were thoroughly rinsed with deionized water for 10 seconds and then stored in artificial saliva at 37°C in an incubator. Artificial saliva was changed every 2 days. The Vickers hardness tester (Wolpert 402 MVD, Germany) was adjusted to a load of 0.1 kg (100 gm) and a dwell time of 5 seconds. Three Vickers indentations were performed on each specimen using a hardness tester according to the ISO 6507-3:1998 specification. Micro-hardness measurements were performed before and after bleaching at days 1, 7 and 14. Results: In the control group, the baseline micro-hardness was 181.1 ± 9.3, which was reduced after storage on days 1, 7 and 14 (p = 0.104). In Group 2, baseline micro-hardness was 180.4 ± 10.1, which was reduced to 179.79 ± 10.0 units after day 1; on days 7 and 14, the values were 179.8 ± 10 and 179.7 ± 10.29, respectively (p = 0.091). The baseline micro-hardness in Group 3 was 174.0 ± 22.9 units, which was reduced to 173 ± 23 on day 1, 170 ± 30 on day 7 and 173 ± 23 on day 14 (p = 0

  20. An Evolutionary Algorithm for Feature Subset Selection in Hard Disk Drive Failure Prediction

    Science.gov (United States)

    Bhasin, Harpreet

    2011-01-01

    Hard disk drives are used in everyday life to store critical data. Although they are reliable, failure of a hard disk drive can be catastrophic, especially in applications like medicine, banking, air traffic control systems, missile guidance systems, computer numerical controlled machines, and more. The use of Self-Monitoring, Analysis and…

  1. A comparative tribological study of chromium coatings with different specific hardness

    International Nuclear Information System (INIS)

    Darbeida, A.; Von Stebut, J.; Barthole, M.; Belliard, P.; Lelait, L.

    1995-06-01

    The wear resistance in dry friction of two electrolytic and two PVD hard chromium coatings deposited on construction steel substrates is studied by means of standard pin-on-disc multi-pass, unidirectional operation. For both of these friction modes, low-cycle, high-load operation with cemented carbide pins leads to abrasive wear controlled essentially by coating hardness. For these well-adhering commercial coatings, adhesion results (both for through-thickness cracking and for spalling failure) assessed by standard testing are inadequate for quality ranking with respect to wear resistance. Steady-state friction corresponds to a stabilised third body essentially composed of chromium oxide. (authors). 13 refs., 7 figs., 1 tab

  2. Comparing theories' performance in predicting violence.

    Science.gov (United States)

    Haas, Henriette; Cusson, Maurice

    2015-01-01

    The stakes of choosing the best theory as a basis for violence prevention and offender rehabilitation are high. However, no single theory of violence has ever been universally accepted by a majority of established researchers. Psychiatry, psychology and sociology are each subdivided into different schools relying upon different premises. All theories can produce empirical evidence for their validity, some of them stating the opposite of each other. Calculating different models with multivariate logistic regression on a dataset of N = 21,312 observations and ninety-two influences allowed a direct comparison of the performance of operationalizations of some of the most important schools. The psychopathology model ranked as the best model in terms of predicting violence, right after the comprehensive interdisciplinary model. Next came the rational choice and lifestyle model, and third the differential association and learning theory model. Other models, namely the control theory model, the childhood-trauma model and the social conflict and reaction model, turned out to have low sensitivities for predicting violence. Nevertheless, all models produced acceptable results in predicting a non-violent outcome. Copyright © 2015. Published by Elsevier Ltd.
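
    The comparison above turns on two complementary metrics: sensitivity (true-positive rate for violence) and specificity (true-negative rate for non-violence). A model can score low on the first while still handling the non-violent outcome acceptably, as several models did. A minimal sketch with invented labels and predictions:

```python
import numpy as np

# Sensitivity: fraction of actual violent cases the model flags.
def sensitivity(y_true, y_pred):
    pos = y_true == 1
    return np.mean(y_pred[pos] == 1)

# Specificity: fraction of actual non-violent cases correctly cleared.
def specificity(y_true, y_pred):
    neg = y_true == 0
    return np.mean(y_pred[neg] == 0)

# Illustrative data: 4 violent (1) and 6 non-violent (0) observations.
y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
y_model = np.array([1, 0, 0, 0, 0, 0, 0, 0, 0, 1])  # misses most violence

print(sensitivity(y_true, y_model), specificity(y_true, y_model))
```

In practice the predictions would come from the fitted logistic regression (predicted probability thresholded at some cutoff), and the metrics would be compared across the competing models.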

  3. Comparison of observed rheological properties of hard wheat flour dough with predictions of the Giesekus-Leonov, White-Metzner and Phan-Thien Tanner models

    Science.gov (United States)

    Dhanasekharan, M.; Huang, H.; Kokini, J. L.; Janes, H. W. (Principal Investigator)

    1999-01-01

    The measured rheological behavior of hard wheat flour dough was predicted using three nonlinear differential viscoelastic models. The Phan-Thien Tanner model gave good zero shear viscosity prediction, but overpredicted the shear viscosity at higher shear rates and the transient and extensional properties. The Giesekus-Leonov model gave similar predictions to the Phan-Thien Tanner model, but the extensional viscosity prediction showed extension thickening. Using high values of the mobility factor, extension thinning behavior was observed but the predictions were not satisfactory. The White-Metzner model gave good predictions of the steady shear viscosity and the first normal stress coefficient but it was unable to predict the uniaxial extensional viscosity as it exhibited asymptotic behavior in the tested extensional rates. It also predicted the transient shear properties with moderate accuracy in the transient phase, but very well at higher times, compared to the Phan-Thien Tanner model and the Giesekus-Leonov model. None of the models predicted all observed data consistently well. Overall the White-Metzner model appeared to make the best predictions of all the observed data.
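
    In the White-Metzner model the viscosity (and relaxation time) are taken as functions of the deformation rate; a Carreau form is one common choice for that function. The sketch below shows only the resulting shear-thinning viscosity curve, with parameter values that are illustrative, not fitted to wheat flour dough:

```python
# Carreau viscosity function, one common rate-dependent viscosity used
# inside the White-Metzner model: eta(g) = eta0 * (1 + (lam*g)^2)^((n-1)/2).
def carreau_viscosity(gamma_dot, eta0, lam, n):
    return eta0 * (1.0 + (lam * gamma_dot) ** 2) ** ((n - 1.0) / 2.0)

eta0, lam, n = 1.2e5, 50.0, 0.3  # Pa*s, s, power-law index (hypothetical)
for rate in (0.001, 0.1, 10.0):  # 1/s
    print(f"{rate:>7} 1/s -> {carreau_viscosity(rate, eta0, lam, n):.3e} Pa*s")
```

Below 1/lam the curve sits on the zero-shear plateau eta0; above it, viscosity falls with slope n-1 on a log-log plot, which is the shear-thinning behavior the models are asked to reproduce.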

  4. Impact of natural organic matter and increased water hardness on DGT prediction of copper bioaccumulation by yellow lampmussel (Lampsilis cariosa) and fathead minnow (Pimephales promelas).

    Science.gov (United States)

    Philipps, Rebecca R; Xu, Xiaoyu; Mills, Gary L; Bringolf, Robert B

    2018-06-01

    We conducted an exposure experiment with Diffusive Gradients in Thin-Films (DGT), fathead minnow (Pimephales promelas), and yellow lampmussel (Lampsilis cariosa) to estimate bioavailability and bioaccumulation of Cu. We hypothesized that Cu concentrations measured by DGT can be used to predict Cu accumulation in aquatic animals and that alterations of water chemistry can affect DGT's predictive ability. Three water chemistries (control soft water, hard water, and addition of natural organic matter (NOM)) and three Cu concentrations (0, 30, and 60 μg/L) were selected, giving nine Cu-water chemistry combinations. NOM addition treatments resulted in decreased concentrations of DGT-measured Cu and of free Cu ion predicted by the Biotic Ligand Model (BLM). Both hard water and NOM addition treatments had reduced concentrations of Cu ion and Cu-dissolved organic matter complexes compared to other treatments. DGT-measured Cu concentrations were linearly correlated with fish-accumulated Cu, but not with mussel-accumulated Cu. Concentrations of bioavailable Cu predicted by the BLM, i.e., the species complexed with the biotic ligands of aquatic organisms, were highly correlated with DGT-measured Cu. In general, DGT-measured Cu fit Cu accumulation in fish, and this passive sampling technique is acceptable for predicting Cu concentrations in fish in waters with low NOM concentrations. Copyright © 2018 Elsevier Ltd. All rights reserved.
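
    A DGT device infers the labile metal concentration from the mass accumulated on its binding resin via C = M·Δg / (D·A·t), where Δg is the diffusive-gel thickness, D the diffusion coefficient, A the exposure area and t the deployment time. The numbers below are plausible magnitudes for a Cu deployment, chosen purely for illustration:

```python
# DGT labile-metal concentration: C = M * dg / (D * A * t).
# Units: ng, cm, cm^2/s, cm^2, s -> ng/cm^3 (numerically equal to ug/L).
def dgt_concentration(mass_ng, dg_cm, D_cm2_s, area_cm2, time_s):
    return mass_ng * dg_cm / (D_cm2_s * area_cm2 * time_s)

# Hypothetical 24 h deployment accumulating 500 ng Cu.
c = dgt_concentration(mass_ng=500.0, dg_cm=0.078, D_cm2_s=6.0e-6,
                      area_cm2=3.14, time_s=24 * 3600.0)
print(f"{c:.1f} ug/L")
```

Because C scales inversely with deployment time, longer deployments accumulate more mass for the same labile concentration, which is what makes DGT a time-integrated rather than instantaneous measure.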

  5. Prediction of novel hard phases of Si{sub 3}N{sub 4}: First-principles calculations

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Lin; Hu, Meng; Wang, Qianqian; Xu, Bo; Yu, Dongli; Liu, Zhongyuan; He, Julong, E-mail: hjl@ysu.edu.cn

    2015-08-15

    Exploration of novel hard metastable phases of silicon nitride was performed using a recently developed particle-swarm optimization method within the CALYPSO software package. Three potential hard metastable phases, t-Si{sub 3}N{sub 4}, m-Si{sub 3}N{sub 4}, and o-Si{sub 3}N{sub 4}, were predicted. These phases are mechanically and dynamically stable at ambient pressure based on their elastic constants and phonon dispersions. t-Si{sub 3}N{sub 4} and m-Si{sub 3}N{sub 4} exhibit lower energies than γ-Si{sub 3}N{sub 4} at pressures below 2.5 GPa and 2.9 GPa, respectively, which promises that the former two could be obtained by quenching from γ-Si{sub 3}N{sub 4}. o-Si{sub 3}N{sub 4} is a better high-pressure metastable phase than the CaTi{sub 2}O{sub 4}-type Si{sub 3}N{sub 4} proposed by Tatsumi et al., and it can result from the transition of γ-Si{sub 3}N{sub 4} under 198 GPa. The theoretical band gaps of t-Si{sub 3}N{sub 4}, m-Si{sub 3}N{sub 4}, and o-Si{sub 3}N{sub 4} at ambient pressure were 3.15 eV, 3.90 eV, and 3.36 eV, respectively. At ambient pressure, the Vickers hardness values of t-Si{sub 3}N{sub 4} (32.6 GPa), m-Si{sub 3}N{sub 4} (31.5 GPa), and o-Si{sub 3}N{sub 4} (36.1 GPa) are comparable to those of β-Si{sub 3}N{sub 4} and γ-Si{sub 3}N{sub 4}. With increasing pressure, t-Si{sub 3}N{sub 4}, m-Si{sub 3}N{sub 4}, and o-Si{sub 3}N{sub 4} change from the brittle to the ductile state at about 15.7 GPa, 7.3 GPa and 28.9 GPa, respectively. - Graphical abstract: This figure shows the crystal structures of the three Si{sub 3}N{sub 4} phases predicted in this manuscript, left to right: t-Si{sub 3}N{sub 4}, m-Si{sub 3}N{sub 4} and o-Si{sub 3}N{sub 4}. - Highlights: • We explored three metastable phases of Si{sub 3}N{sub 4} — t-Si{sub 3}N{sub 4}, m-Si{sub 3}N{sub 4}, and o-Si{sub 3}N{sub 4}. • The enthalpies of t- and m- are much lower than that of γ- at ambient pressure. • o- is a further high-pressure phase beyond γ-. • o-Si{sub 3}N{sub 4} is the hardest of the predicted Si{sub 3}N{sub 4} phases.
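
    Vickers hardness and the brittle/ductile character of such predicted phases are commonly estimated from the computed elastic moduli. One widely used empirical route (not necessarily the exact scheme used in this paper) is Chen's hardness model together with Pugh's B/G criterion; the moduli below are hypothetical, not the paper's calculated values:

```python
# Chen's empirical hardness model: Hv = 2*(k^2 * G)**0.585 - 3, k = G/B,
# with B, G, Hv in GPa; Pugh's criterion: B/G > ~1.75 suggests ductility.
def chen_hardness(B, G):
    k = G / B
    return 2.0 * (k * k * G) ** 0.585 - 3.0

def is_ductile(B, G):
    return B / G > 1.75

B, G = 250.0, 180.0  # GPa, hypothetical moduli for a Si3N4-like phase
print(round(chen_hardness(B, G), 1), "GPa, ductile:", is_ductile(B, G))
```

A pressure-induced brittle-to-ductile transition like the ones quoted above corresponds to B/G crossing the ~1.75 threshold as the moduli evolve with pressure.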

  6. Breakup and then makeup: a predictive model of how cilia self-regulate hardness for posture control.

    Science.gov (United States)

    Bandyopadhyay, Promode R; Hansen, Joshua C

    2013-01-01

    Functioning as sensors and propulsors, cilia are evolutionarily conserved organelles having a highly organized internal structure. How a paramecium's cilium produces off-propulsion-plane curvature during its return stroke for symmetry breaking and drag reduction is not known. We explain these cilium deformations by developing a torsional pendulum model of beat frequency dependence on viscosity and an olivo-cerebellar model of self-regulation of posture control. The phase dependence of cilia torsion is determined, and a bio-physical model of hardness control with predictive features is offered. Crossbridge links between the central microtubule pair harden the cilium during the power stroke; this stroke's end is a critical phase during which ATP molecules soften the crossbridge-microtubule attachment at the cilium inflection point where torsion is at its maximum. A precipitous reduction in hardness ensues, signaling the start of ATP hydrolysis that re-hardens the cilium. The cilium attractor basin could be used as reference for perturbation sensing.

  7. A comprehensive comparison of comparative RNA structure prediction approaches

    DEFF Research Database (Denmark)

    Gardner, P. P.; Giegerich, R.

    2004-01-01

    Background: An increasing number of researchers have released novel RNA structure analysis and prediction algorithms for comparative approaches to structure prediction. Yet, independent benchmarking of these algorithms is rarely performed, as is now common practice for protein-folding, gene-finding and multiple-sequence-alignment algorithms. Results: Here we evaluate a number of RNA folding algorithms using reliable RNA data-sets and compare their relative performance. Conclusions: We conclude that comparative data can enhance structure prediction but structure-prediction algorithms vary widely in terms...

  8. Work Hard / Play Hard

    OpenAIRE

    Burrows, J.; Johnson, V.; Henckel, D.

    2016-01-01

    Work Hard / Play Hard was a participatory performance/workshop or CPD experience hosted by interdisciplinary arts atelier WeAreCodeX, in association with AntiUniversity.org. As a socially/economically engaged arts practice, Work Hard / Play Hard challenged employees/players to get playful, or go to work. 'The game changes you, you never change the game'. Employee PLAYER A 'The faster the better.' Employer PLAYER B

  9. HVM-TP: A Time Predictable, Portable Java Virtual Machine for Hard Real-Time Embedded Systems

    DEFF Research Database (Denmark)

    Luckow, Kasper Søe; Thomsen, Bent; Korsholm, Stephan Erbs

    2014-01-01

    We present HVMTIME, a portable and time predictable JVM implementation with applications in resource-constrained hard real-time embedded systems. In addition, it implements the Safety Critical Java (SCJ) Level 1 specification. Time predictability is achieved by a combination of time predictable algorithms, exploiting the programming model of the SCJ specification, and harnessing static knowledge of the hosted SCJ system. This paper presents HVMTIME in terms of its design and capabilities, and demonstrates how a complete timing model of the JVM represented as a Network of Timed Automata can be obtained using the tool TetaSARTSJVM. Further, using the timing model, we derive Worst Case Execution Times (WCETs) and Best Case Execution Times (BCETs) of the Java Bytecodes.

  10. Modelling of hardness prediction of magnesium alloys using artificial neural networks applications

    OpenAIRE

    L.A. Dobrzański; T. Tański; J. Trzaska; L. Čížek

    2008-01-01

    Purpose: In the following paper the optimisation of the heat treatment conditions and structure of the MCMgAl12Zn1, MCMgAl9Zn1, MCMgAl6Zn1 and MCMgAl3Zn1 magnesium cast alloys is presented, in the as-cast state and after heat treatment. Design/methodology/approach: Working out of a neural network model for simulating the influence of temperature, solution heat treatment and ageing time, and aluminium content on the hardness of the analysed magnesium cast alloys. Findings: The different heat treatment k...

  11. A comparative analysis of soft computing techniques for gene prediction.

    Science.gov (United States)

    Goel, Neelam; Singh, Shailendra; Aseri, Trilok Chand

    2013-07-01

    The rapid growth of genomic sequence data for both human and nonhuman species has made analyzing these sequences, and especially predicting genes in them, very important; this task is currently the focus of many research efforts. Besides its scientific interest to the molecular biology and genomics community, gene prediction is of considerable importance in human health and medicine. A variety of gene prediction techniques have been developed for eukaryotes over the past few years. This article reviews and analyzes the application of certain soft computing techniques in gene prediction. First, the problem of gene prediction and its challenges are described, followed by descriptions of different soft computing techniques along with their application to gene prediction. In addition, a comparative analysis of different soft computing techniques for gene prediction is given. Finally, some limitations of the current research activities and future research directions are provided. Copyright © 2013 Elsevier Inc. All rights reserved.

  12. Clinical outcome among HIV-infected patients starting saquinavir hard gel compared to ritonavir or indinavir

    DEFF Research Database (Denmark)

    Kirk, O; Mocroft, A; Pradier, C

    2001-01-01

    -up within the EuroSIDA study. METHODS: Changes in plasma viral load (pVL) and CD4 cell count from baseline were compared between treatment groups. Time to new AIDS-defining events and death were compared in Kaplan--Meier models, and Cox models were established to further assess differences in clinical...

  13. Clinical Prediction Models for Cardiovascular Disease: Tufts Predictive Analytics and Comparative Effectiveness Clinical Prediction Model Database.

    Science.gov (United States)

    Wessler, Benjamin S; Lai Yh, Lana; Kramer, Whitney; Cangelosi, Michael; Raman, Gowri; Lutz, Jennifer S; Kent, David M

    2015-07-01

    Clinical prediction models (CPMs) estimate the probability of clinical outcomes and hold the potential to improve decision making and individualize care. For patients with cardiovascular disease, there are numerous CPMs available, although the extent of this literature is not well described. We conducted a systematic review of articles containing CPMs for cardiovascular disease published between January 1990 and May 2012. Cardiovascular disease includes coronary heart disease, heart failure, arrhythmias, stroke, venous thromboembolism, and peripheral vascular disease. We created a novel database and characterized CPMs based on the stage of development, population under study, performance, covariates, and predicted outcomes. There are 796 models included in this database. The number of CPMs published each year has increased steadily over time. Seven hundred seventeen (90%) are de novo CPMs, 21 (3%) are CPM recalibrations, and 58 (7%) are CPM adaptations. This database contains CPMs for 31 index conditions, including 215 CPMs for patients with coronary artery disease, 168 CPMs for population samples, and 79 models for patients with heart failure. There are 77 distinct index/outcome pairings. Of the de novo models in this database, 450 (63%) report a c-statistic and 259 (36%) report some information on calibration. There is an abundance of CPMs available for a wide assortment of cardiovascular disease conditions, with substantial redundancy in the literature. The comparative performance of these models, the consistency of effects and risk estimates across models, and the actual and potential clinical impact of this body of literature are poorly understood. © 2015 American Heart Association, Inc.
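
The review notes that only 63% of de novo models report a c-statistic. For a binary outcome, the c-statistic is the probability that a randomly chosen case receives a higher predicted risk than a randomly chosen non-case. A small illustrative implementation (the risks and outcomes below are made-up numbers, not from the database):

```python
def c_statistic(risks, outcomes):
    """Concordance (c-statistic): the fraction of case/non-case pairs in
    which the case received the higher predicted risk; ties count 0.5."""
    cases = [r for r, y in zip(risks, outcomes) if y == 1]
    controls = [r for r, y in zip(risks, outcomes) if y == 0]
    concordant = 0.0
    for rc in cases:
        for rn in controls:
            if rc > rn:
                concordant += 1.0
            elif rc == rn:
                concordant += 0.5
    return concordant / (len(cases) * len(controls))

# Made-up predicted risks and observed events, purely for illustration:
print(c_statistic([0.9, 0.8, 0.3, 0.2], [1, 0, 1, 0]))  # 0.75
```

A value of 0.5 is chance-level discrimination and 1.0 is perfect rank separation of cases from non-cases.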

  14. Application of gamma-gamma logging in predicting anomalous geodynamic phenomena in hard coal mines

    International Nuclear Information System (INIS)

    Blaha, F.; Keclik, L.

    1980-01-01

    The application of gamma-gamma logging to the prediction of dynamic events such as coal and gas bursts, coal sliding in medium-dip seams, and rock bumps is discussed. The overall analysis of the applied rock characterization method and of the measurement results shows that the possibilities for predicting dynamic events are partly limited due to specific effects of the rock massif. In these cases, however, unambiguous results may be obtained by other geophysical methods such as seismoacoustic testing. In most cases the gamma-gamma logging results may be used for estimating the degree of dynamic events or checking the efficiency of preventive measures in the locality under investigation. (author)

  15. Why Gender and Age Prediction from Tweets is Hard : Lessons from a Crowdsourcing Experiment

    NARCIS (Netherlands)

    Nguyen, D.; Trieschnigg, D.; Dogruöz, A. Seza; Gravel, Rilana; Theune, Mariët; Meder, Theo; de Jong, Franciska

    2014-01-01

    There is a growing interest in automatically predicting the gender and age of authors from texts. However, most research so far ignores that language use is related to the social identity of speakers, which may be different from their biological identity. In this paper, we combine insights from

  16. Why Gender and Age Prediction from Tweets is Hard: Lessons from a Crowdsourcing Experiment

    NARCIS (Netherlands)

    Nguyen, Dong-Phuong; Trieschnigg, Rudolf Berend; Dogruoz, A. Seza; Gravel, Rilana; Theune, Mariet; Meder, Theo; de Jong, Franciska M.G.

    2014-01-01

    There is a growing interest in automatically predicting the gender and age of authors from texts. However, most research so far ignores that language use is related to the social identity of speakers, which may be different from their biological identity. In this paper, we combine insights from

  17. Evaluation of accelerated test parameters for CMOS IC total dose hardness prediction

    International Nuclear Information System (INIS)

    Sogoyan, A.V.; Nikiforov, A.Y.; Chumakov, A.I.

    1999-01-01

    An approach to accelerated test parameter evaluation is presented in order to predict CMOS IC total dose behavior in variable dose-rate environments. The technique is based on an analytical model of the total dose degradation of MOSFET parameters. A simple way to estimate the model parameters is proposed, using the IC's input-output MOSFET radiation test results. (authors)

  18. New tips for structure prediction by comparative modeling

    OpenAIRE

    Rayan, Anwar

    2009-01-01

    Comparative modelling is utilized to predict the 3-dimensional conformation of a given protein (target) based on its sequence alignment to an experimentally determined protein structure (template). The use of such a technique is already rewarding and increasingly widespread in biological research and drug development. The accuracy of the predictions, as commonly accepted, depends on the score of sequence identity of the target protein to the template. To assess the relationship between sequence iden...

  19. Non-isothermal kinetics model to predict accurate phase transformation and hardness of 22MnB5 boron steel

    Energy Technology Data Exchange (ETDEWEB)

    Bok, H.-H.; Kim, S.N.; Suh, D.W. [Graduate Institute of Ferrous Technology, POSTECH, San 31, Hyoja-dong, Nam-gu, Pohang, Gyeongsangbuk-do (Korea, Republic of); Barlat, F., E-mail: f.barlat@postech.ac.kr [Graduate Institute of Ferrous Technology, POSTECH, San 31, Hyoja-dong, Nam-gu, Pohang, Gyeongsangbuk-do (Korea, Republic of); Lee, M.-G., E-mail: myounglee@korea.ac.kr [Department of Materials Science and Engineering, Korea University, Anam-dong, Seongbuk-gu, Seoul (Korea, Republic of)

    2015-02-25

    A non-isothermal phase transformation kinetics model obtained by modifying the well-known JMAK approach is proposed for application to a low carbon boron steel (22MnB5) sheet. In the modified kinetics model, the parameters are functions of both temperature and cooling rate, and can be identified by a numerical optimization method. Moreover, in this approach the transformation start and finish temperatures are treated as variables instead of constants determined by the chemical composition. These variable reference temperatures are determined from the measured CCT diagram using dilatation experiments. The kinetics model developed in this work captures the complex transformation behavior of the boron steel sheet sample accurately. In particular, the predicted hardness and phase fractions in specimens subjected to a wide range of cooling rates were validated by experiments.
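
The record describes a modified JMAK model with temperature- and cooling-rate-dependent parameters. The unmodified building blocks can be sketched as follows: classic isothermal JMAK kinetics, X(t) = 1 − exp(−k tⁿ), stepped through a cooling path with Scheil's additivity rule. The rate-constant function and every number below are invented for illustration; they are not the paper's fitted parameters.

```python
import math

def jmak_isothermal(k, n, t):
    """Classic JMAK: transformed fraction X(t) = 1 - exp(-k * t**n)."""
    return 1.0 - math.exp(-k * t ** n)

def jmak_nonisothermal(k_of_T, n, T_start, cooling_rate, dt, T_end):
    """Non-isothermal extension via the additivity rule: at each step,
    find the fictitious time giving the current X at the new temperature,
    then advance it by dt with that temperature's rate constant."""
    X, T = 0.0, T_start
    while T > T_end:
        k = k_of_T(T)
        # fictitious time t* with jmak_isothermal(k, n, t*) == X
        t_star = (-math.log(max(1.0 - X, 1e-12)) / k) ** (1.0 / n)
        X = 1.0 - math.exp(-k * (t_star + dt) ** n)
        T -= cooling_rate * dt
    return X

# Hypothetical bell-shaped rate constant peaking at 600 K (not from the paper):
k_of_T = lambda T: 1e-3 * math.exp(-(T - 600.0) ** 2 / 2e4)
X_final = jmak_nonisothermal(k_of_T, n=2.5, T_start=800.0,
                             cooling_rate=10.0, dt=0.1, T_end=400.0)
print(0.0 < X_final < 1.0)
```

The paper's modification makes k and n depend on cooling rate as well and lets the start/finish temperatures vary; the skeleton above is only the standard scheme those changes build on.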

  20. Predicting reading ability in teenagers who are deaf or hard of hearing: A longitudinal analysis of language and reading.

    Science.gov (United States)

    Worsfold, Sarah; Mahon, Merle; Pimperton, Hannah; Stevenson, Jim; Kennedy, Colin

    2018-04-13

    Deaf and hard of hearing (D/HH) children and young people are known to show group-level deficits in spoken language and reading abilities relative to their hearing peers. However, there is little evidence on the longitudinal predictive relationships between language and reading in this population. Aims: To determine the extent to which differences in spoken language ability in childhood predict reading ability in D/HH adolescents. Methods and procedures: Participants were drawn from a population-based cohort study and comprised 53 D/HH teenagers, who used spoken language, and a comparison group of 38 normally hearing teenagers. All had completed standardised measures of spoken language (expression and comprehension) and reading (accuracy and comprehension) at 6-10 and 13-19 years of age. Outcomes and results: Forced entry stepwise regression showed that, after taking reading ability at age 8 years into account, language scores at age 8 years did not add significantly to the prediction of Reading Accuracy z-scores at age 17 years (change in R{sup 2} = 0.01, p = .459) but did make a significant contribution to the prediction of Reading Comprehension z-scores at age 17 years (change in R{sup 2} = 0.17). Language skills in middle childhood thus predict reading comprehension ability in adolescence. Continued intervention to support language development beyond primary school has the potential to benefit reading comprehension and hence educational access for D/HH adolescents. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  1. Comparative study on structure, corrosion and hardness of Zn-Ni alloy deposition on AISI 347 steel aircraft material

    Energy Technology Data Exchange (ETDEWEB)

    Gnanamuthu, RM. [Department of Chemical Engineering, College of Engineering, Kyung Hee University, 1732 Deogyeong-daero, Gihung, Yongin, Gyeonggi 446-701 (Korea, Republic of); Mohan, S., E-mail: sanjnamohan@yahoo.com [Central Electrochemical Research Institute, (CSIR), Karaikudi 630 006, Tamilnadu (India); Saravanan, G. [Central Electrochemical Research Institute, (CSIR), Karaikudi 630 006, Tamilnadu (India); Lee, Chang Woo, E-mail: cwlee@khu.ac.kr [Department of Chemical Engineering, College of Engineering, Kyung Hee University, 1732 Deogyeong-daero, Gihung, Yongin, Gyeonggi 446-701 (Korea, Republic of)

    2012-02-05

    Highlights: • Electrodeposition of Zn-Ni alloy on AISI 347 steel as an aircraft material has been carried out from various baths. • Thickness, current efficiency and hardness reached maximum values at 40% duty cycle and 50 Hz frequency with an average current density of 4 A dm{sup -2}. • XRF characterization of the 88:12% Zn-Ni alloy showed excellent corrosion resistance. • Zn-Ni alloy pulse-electrodeposited on AISI 347 aircraft material from electrolyte-4 has better structure and corrosion resistance. - Abstract: Zn-Ni alloys were electrodeposited on AISI 347 steel aircraft materials from various electrolytes under direct current (DCD) and pulsed electrodeposition (PED) techniques. The effects of pulse duty cycle on the thickness, current efficiency and hardness of the electrodeposits were studied. Alloy phases of the Zn-Ni were indexed by X-ray diffraction (XRD) techniques. Microstructural morphology, topography and elemental compositions were characterized using scanning electron microscopy (SEM), atomic force microscopy (AFM) and X-ray fluorescence spectroscopy (XRF). The corrosion resistance properties of the electrodeposited Zn-Ni alloy in 3.5% NaCl aqueous solution obtained by DCD and PED were compared using potentiodynamic polarization and electrochemical impedance spectroscopy (EIS) techniques. Elemental analysis showed that the deposit of 88% Zn and 12% Ni obtained from electrolyte-4 by the PED technique at 40% duty cycle and 50 Hz frequency had better corrosion resistance than the deposits obtained from the other electrolytes.

  2. Comparative assessment of the interfacial soft and hard tissues investing implants and natural teeth in the macaque mandible.

    Science.gov (United States)

    Siar, Chong Huat; Toh, Chooi Gait; Romanos, Georgios E; Ng, Kok Han

    2015-07-01

    The aim of this study was to conduct a comparative qualitative and quantitative assessment of the interfacial soft and hard tissues investing implants and natural teeth. The test sample consisted of six adult healthy male Macaca fascicularis with three-unit splinted crowns, each crown supported by an Ankylos screw-shaped titanium implant. These implants were placed in the mandibular premolar-second molar region, on one side by an immediate-loading (IL) and on the other by a delayed-loading (DL) protocol. The animals were sacrificed after 3 months of functional loading. Another two monkeys with natural dentition served as controls. Nondecalcified sections were prepared for assessment of optical intensities (OI) under a confocal laser scanning microscope. In both the test (IL and DL) and control groups, the soft tissue complexes demonstrated a highly fluorescent keratinized layer and diminished cytoplasmic and enhanced membranous fluorescence in the remaining epithelium. Peri-implant mucosa was further characterized by an intense fluorescence at the junctional epithelium-implant interface and in the stromal mononuclear infiltrate. Connective tissue contact and periodontal ligament were weakly fluorescent. In hard tissues, a high fluorescence was observed in peri-implant woven bone and along the implant-bone interface. Mean OI was significantly higher in peri-implant woven bone than around teeth (P < 0.05). Present findings suggest that peri-implant woven bone is highly mineralized, while the peri-implant and gingival mucosa share structural similarities. Optical intensities of interfacial tissues investing implants and teeth are related to their biological properties.

  3. Pedestrian Path Prediction with Recursive Bayesian Filters: A Comparative Study

    NARCIS (Netherlands)

    Schneider, N.; Gavrila, D.M.

    2013-01-01

    In the context of intelligent vehicles, we perform a comparative study on recursive Bayesian filters for pedestrian path prediction at short time horizons (< 2s). We consider Extended Kalman Filters (EKF) based on single dynamical models and Interacting Multiple Models (IMM) combining several such

  4. Assertiveness expectancies: how hard people push depends on the consequences they predict.

    Science.gov (United States)

    Ames, Daniel R

    2008-12-01

    The present article seeks to explain varying levels of assertiveness in interpersonal conflict and negotiations with assertiveness expectancies, idiosyncratic predictions people make about the social and instrumental consequences of assertive behavior. This account complements motivation-based models of assertiveness and competitiveness, suggesting that individuals may possess the same social values (e.g., concern for relationships) but show dramatically different assertiveness due to different assumptions about behavioral consequences. Results clarify the form of assertiveness expectancies, namely that most people assume increasing assertiveness can yield positive social and instrumental benefits up to a point, beyond which benefits decline. However, people vary in how assertive this perceived optimal point is. These individual differences in expectancies are linked in 4 studies to assertiveness, including self-reported assertiveness, rated behavioral preferences in assorted interpersonal conflict scenarios, partner ratings of participants' behavior in a face-to-face dyadic negotiation, and work colleague ratings of participants' assertiveness in the workplace. In each case, the link between expectancies and behavior remained after controlling for values. The results suggest a place for expectancies alongside values in psychological models of interpersonal assertiveness.

  5. Application of Semiempirical Methods to Transition Metal Complexes: Fast Results but Hard-to-Predict Accuracy.

    KAUST Repository

    Minenkov, Yury

    2018-05-22

    A series of semiempirical PM6* and PM7 methods has been tested for reproducing the relative conformational energies of 27 realistic-size complexes of 16 different transition metals (TMs). An analysis of relative energies derived from single-point energy evaluations on density functional theory (DFT) optimized conformers revealed pronounced deviations between semiempirical and DFT methods, indicating a fundamental difference in the potential energy surfaces (PES). To identify the origin of the deviation, we compared fully optimized PM7 and respective DFT conformers. For many complexes, differences in PM7 and DFT conformational energies were confirmed, often manifesting themselves in false coordination of some atoms (H, O) to TMs and in chemical transformation/distortion of the coordination center geometry in PM7 structures. Although geometry optimization with a fixed coordination center geometry leads to some improvement in conformational energies, the resulting accuracy is still too low to recommend the explored semiempirical methods for out-of-the-box conformational search/sampling: careful testing is always needed.

  6. Prediction of the hardness profile of an AISI 4340 steel cylinder heat-treated by laser - 3D and artificial neural networks modelling and experimental validation

    Energy Technology Data Exchange (ETDEWEB)

    Hadhri, Mahdi; Ouafi, Abderazzak El; Barka, Noureddine [University of Quebec, Rimouski (Canada)

    2017-02-15

    This paper presents a comprehensive approach developed to design an effective prediction model for the hardness profile in the laser surface transformation hardening process. Based on the finite element method and artificial neural networks, the proposed approach is built progressively by (i) examining the laser hardening parameters and conditions known to have an influence on the hardened surface attributes through a structured experimental investigation, (ii) investigating the effects of the laser hardening parameters on the hardness profile through extensive 3D modeling and simulation efforts, and (iii) integrating the hardening process parameters via a neural network model for hardness profile prediction. The experimental validation, conducted on AISI 4340 steel using a commercial 3 kW Nd:YAG laser, confirms the feasibility and efficiency of the proposed approach, leading to an accurate and reliable hardness profile prediction model. With a maximum relative error of about 10% under various practical conditions, the predictive model can be considered effective, especially in the case of a relatively complex system such as the laser surface transformation hardening process.

  7. Prediction of the hardness profile of an AISI 4340 steel cylinder heat-treated by laser - 3D and artificial neural networks modelling and experimental validation

    International Nuclear Information System (INIS)

    Hadhri, Mahdi; Ouafi, Abderazzak El; Barka, Noureddine

    2017-01-01

    This paper presents a comprehensive approach developed to design an effective prediction model for the hardness profile in the laser surface transformation hardening process. Based on the finite element method and artificial neural networks, the proposed approach is built progressively by (i) examining the laser hardening parameters and conditions known to have an influence on the hardened surface attributes through a structured experimental investigation, (ii) investigating the effects of the laser hardening parameters on the hardness profile through extensive 3D modeling and simulation efforts, and (iii) integrating the hardening process parameters via a neural network model for hardness profile prediction. The experimental validation, conducted on AISI 4340 steel using a commercial 3 kW Nd:YAG laser, confirms the feasibility and efficiency of the proposed approach, leading to an accurate and reliable hardness profile prediction model. With a maximum relative error of about 10% under various practical conditions, the predictive model can be considered effective, especially in the case of a relatively complex system such as the laser surface transformation hardening process.
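
Both records couple 3D finite element simulation with an artificial neural network that maps process parameters to hardness. As a minimal stand-in for the ANN step only, the sketch below trains a tiny two-input feed-forward regressor by full-batch gradient descent. The network size, the synthetic "process parameter to hardness" rule, and all parameter ranges are invented; the paper's FEM-generated training data is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: inputs are (laser power kW, scan speed mm/s);
# the linear-plus-noise target rule is invented purely to exercise the net.
X = rng.uniform([1.0, 5.0], [3.0, 25.0], size=(200, 2))
y = 40.0 + 8.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(0.0, 0.5, 200)

# Standardize inputs and target, then fit a tiny 2-8-1 tanh network.
Xn = (X - X.mean(0)) / X.std(0)
yn = (y - y.mean()) / y.std()
W1 = rng.normal(0.0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(3000):
    H = np.tanh(Xn @ W1 + b1)               # hidden activations
    pred = (H @ W2 + b2).ravel()            # network output
    err = pred - yn                         # gradient of 0.5*MSE wrt pred
    gW2 = H.T @ err[:, None] / len(yn)
    gb2 = np.array([err.mean()])
    dH = (err[:, None] @ W2.T) * (1.0 - H ** 2)   # backprop through tanh
    gW1 = Xn.T @ dH / len(yn); gb1 = dH.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
rmse = float(np.sqrt(np.mean((pred - yn) ** 2)))
print(rmse < 0.5)  # the net should capture most of the target variance
```

In the papers, such a network is trained on FEM-simulated hardness profiles rather than a synthetic rule, and validated against Nd:YAG hardening experiments.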

  8. Immediate postoperative outcome of orthognathic surgical planning, and prediction of positional changes in hard and soft tissue, independently of the extent and direction of the surgical corrections required

    DEFF Research Database (Denmark)

    Donatsky, Ole; Bjørn-Jørgensen, Jens; Hermund, Niels Ulrich

    2011-01-01

    orthognathic correction using the computerised, cephalometric, orthognathic, surgical planning system (TIOPS). Preoperative cephalograms were analysed and treatment plans and prediction tracings produced by computerised interactive simulation. The planned changes were transferred to models and finally...... with the presently included soft tissue algorithms, the current study shows relatively high mean predictability of the immediately postoperative hard and soft tissue outcome, independent of the extent and direction of required orthognathic correction. Because of the relatively high individual variability, caution...

  9. Scaling and predictability in stock markets: a comparative study.

    Directory of Open Access Journals (Sweden)

    Huishu Zhang

    Most people who invest in stock markets want to be rich, thus, many technical methods have been created to beat the market. If one knows the predictability of the price series in different markets, it would be easier for him/her to make the technical analysis, at least to some extent. Here we use one of the most basic sold-and-bought trading strategies to establish the profit landscape, and then calculate the parameters to characterize the strength of predictability. According to the analysis of scaling of the profit landscape, we find that the Chinese individual stocks are harder to predict than US ones, and the individual stocks are harder to predict than indexes in both the Chinese stock market and the US stock market. Since the Chinese (US) stock market is a representative of emerging (developed) markets, our comparative study on the markets of these two countries is of potential value not only for conducting technical analysis, but also for understanding the physical mechanisms of different kinds of markets in terms of scaling.

  10. Scaling and predictability in stock markets: a comparative study.

    Science.gov (United States)

    Zhang, Huishu; Wei, Jianrong; Huang, Jiping

    2014-01-01

    Most people who invest in stock markets want to be rich, thus, many technical methods have been created to beat the market. If one knows the predictability of the price series in different markets, it would be easier for him/her to make the technical analysis, at least to some extent. Here we use one of the most basic sold-and-bought trading strategies to establish the profit landscape, and then calculate the parameters to characterize the strength of predictability. According to the analysis of scaling of the profit landscape, we find that the Chinese individual stocks are harder to predict than US ones, and the individual stocks are harder to predict than indexes in both Chinese stock market and US stock market. Since the Chinese (US) stock market is a representative of emerging (developed) markets, our comparative study on the markets of these two countries is of potential value not only for conducting technical analysis, but also for understanding physical mechanisms of different kinds of markets in terms of scaling.
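
The "sold-and-bought" strategy and its profit landscape can be illustrated on synthetic data: buy when the price falls to a buy threshold, sell when it then rises to a sell threshold, and tabulate the realized profit over a grid of threshold pairs. The random-walk series and the threshold grid below are invented for this sketch; the papers use real Chinese and US quotes.

```python
import random

random.seed(1)

# Synthetic price series: a simple random walk standing in for real quotes.
price, prices = 100.0, []
for _ in range(5000):
    price += random.gauss(0.0, 1.0)
    prices.append(price)

def strategy_profit(prices, buy_level, sell_level):
    """Sold-and-bought rule: buy when the price drops to buy_level or below,
    sell when it then rises to sell_level or above; sum realized profits.
    Any position still open at the end of the series is ignored."""
    holding, profit = False, 0.0
    for p in prices:
        if not holding and p <= buy_level:
            holding, cost = True, p
        elif holding and p >= sell_level:
            holding, profit = False, profit + (p - cost)
    return profit

# Profit landscape over a small grid of (buy, sell) threshold pairs.
lo, hi = min(prices), max(prices)
levels = [lo + (hi - lo) * i / 9 for i in range(10)]
landscape = {(b, s): strategy_profit(prices, b, s)
             for b in levels for s in levels if s > b}
best = max(landscape, key=landscape.get)
print(landscape[best] >= 0.0)
```

Because every sell threshold sits above its buy threshold, each realized trade is non-negative; the papers' predictability measure comes from how the structure of this landscape scales, not from the raw profits themselves.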

  11. A Comparative Study Using CFD to Predict Iced Airfoil Aerodynamics

    Science.gov (United States)

    Chi, x.; Li, Y.; Chen, H.; Addy, H. E.; Choo, Y. K.; Shih, T. I-P.

    2005-01-01

    WIND, Fluent, and PowerFLOW were used to predict the lift, drag, and moment coefficients of a business-jet airfoil with a rime ice (rough and jagged, but no protruding horns) and with a glaze ice (rough and jagged and with two or more protruding horns) for angles of attack from zero to beyond stall. The performance of the following turbulence models was examined by comparing predictions with available experimental data: Spalart-Allmaras (S-A), RNG k-epsilon, shear-stress transport, v{sup 2}-f, and a differential Reynolds stress model with and without non-equilibrium wall functions. For steady RANS simulations, WIND and Fluent were found to give nearly identical results if the grid about the iced airfoil, the turbulence model, and the order of accuracy of the numerical schemes used are the same. The use of wall functions was found to be acceptable for the rime ice configuration and the flow conditions examined. For rime ice, the S-A model was found to predict accurately until near the stall angle. For glaze ice, the CFD predictions were much less satisfactory for all turbulence models and codes investigated because of the large separated region produced by the horns. For unsteady RANS, WIND and Fluent did not provide better results. PowerFLOW, based on the Lattice Boltzmann method, gave excellent results for the lift coefficient at and near stall for the rime ice, where the flow is inherently unsteady.

  12. Comparative and Predictive Multimedia Assessments Using Monte Carlo Uncertainty Analyses

    Science.gov (United States)

    Whelan, G.

    2002-05-01

    Multiple-pathway frameworks (sometimes referred to as multimedia models) provide a platform for combining medium-specific environmental models and databases, such that they can be utilized in a more holistic assessment of contaminant fate and transport in the environment. These frameworks provide a relatively seamless transfer of information from one model to the next and from databases to models. Within these frameworks, multiple models are linked, resulting in models that consume information from upstream models and produce information to be consumed by downstream models. The Framework for Risk Analysis in Multimedia Environmental Systems (FRAMES) is an example, which allows users to link their models to other models and databases. FRAMES is an icon-driven, open-architecture, object-oriented platform that interacts with environmental databases; helps the user construct a Conceptual Site Model that is real-world based; allows the user to choose the most appropriate models to solve simulation requirements; solves the standard risk paradigm of release, transport and fate, and exposure/risk assessment for people and ecology; and presents graphical packages for analyzing results. FRAMES is specifically designed to allow users to link their own models into a system that contains models developed by others. This paper will present the use of FRAMES to evaluate potential human health exposures using real site data and realistic assumptions from sources, through the vadose and saturated zones, to exposure and risk assessment at three real-world sites, using the Multimedia Environmental Pollutant Assessment System (MEPAS), a multimedia model contained within FRAMES. These real-world examples use predictive and comparative approaches coupled with a Monte Carlo analysis. A predictive analysis is where models are calibrated to monitored site data prior to the assessment, and a comparative analysis is where models are not calibrated but
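
Whether run predictively or comparatively, such assessments rest on the same Monte Carlo machinery: sample uncertain inputs, push each draw through the linked source-transport-exposure chain, and read percentiles off the output distribution. A toy sketch follows; the one-line "dose" model and every distribution in it are invented for illustration and are not MEPAS or FRAMES components.

```python
import math
import random

random.seed(3)

# Invented one-line stand-in for a linked source -> transport -> exposure chain.
def dose(source_kg, decay_per_m, distance_m, intake_factor):
    return source_kg * math.exp(-decay_per_m * distance_m) * intake_factor

# Sample each uncertain parameter, propagate, and collect the outputs.
samples = sorted(
    dose(source_kg=random.lognormvariate(0.0, 0.5),   # uncertain release
         decay_per_m=random.uniform(0.01, 0.03),      # attenuation rate
         distance_m=random.gauss(100.0, 10.0),        # receptor distance
         intake_factor=random.uniform(0.1, 0.3))      # exposure pathway
    for _ in range(10000))
p50, p95 = samples[len(samples) // 2], samples[int(len(samples) * 0.95)]
print(p50 < p95)  # the upper percentile bounds the median
```

A predictive run would first calibrate the model parameters (here, the distribution centers) against monitored site data; a comparative run would use them uncalibrated.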

  13. New tips for structure prediction by comparative modeling

    Science.gov (United States)

    Rayan, Anwar

    2009-01-01

    Comparative modelling is utilized to predict the 3-dimensional conformation of a given protein (target) based on its sequence alignment to an experimentally determined protein structure (template). The use of such a technique is already rewarding and increasingly widespread in biological research and drug development. The accuracy of the predictions, as commonly accepted, depends on the score of sequence identity of the target protein to the template. To assess the relationship between sequence identity and model quality, we carried out an analysis of a set of 4753 sequence and structure alignments. Throughout this research, model accuracy was measured by the root mean square deviation of Cα atoms of the target-template structures. Surprisingly, the results show that sequence identity of the target protein to the template is not a good descriptor for predicting the accuracy of the 3-D structure model. However, in a large number of cases, comparative modelling with lower sequence identity of target to template proteins led to more accurate 3-D structure models. As a consequence of this study, we suggest new tips for improving the quality of comparative models, particularly for models whose target-template sequence identity is below 50%. PMID:19255646
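
The abstract measures model quality as the root mean square deviation of Cα atoms between the target and template structures. Such RMSDs are conventionally computed after optimal superposition of the two coordinate sets, typically with the Kabsch algorithm; a self-contained sketch on toy coordinates (the random "Cα trace" below is invented for illustration):

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two Nx3 coordinate sets after optimal superposition
    (Kabsch algorithm): center both, rotate P onto Q, then compare."""
    P = P - P.mean(0)
    Q = Q - Q.mean(0)
    V, S, Wt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(V @ Wt))       # guard against reflections
    R = V @ np.diag([1.0, 1.0, d]) @ Wt      # optimal proper rotation
    diff = P @ R - Q
    return float(np.sqrt((diff ** 2).sum() / len(P)))

# Toy "Calpha trace": a rotated, translated copy should give RMSD ~ 0.
rng = np.random.default_rng(2)
P = rng.normal(size=(20, 3))
theta = 0.7
R0 = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
Q = P @ R0 + np.array([5.0, -2.0, 1.0])
print(kabsch_rmsd(P, Q) < 1e-8)
```

Superposing first matters: without it, a model that is internally perfect but merely shifted or rotated relative to the template would score an arbitrarily large RMSD.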

  14. Comparing 2 Whiplash Grading Systems to Predict Clinical Outcomes.

    Science.gov (United States)

    Croft, Arthur C; Bagherian, Alireza; Mickelsen, Patrick K; Wagner, Stephen

    2016-06-01

    Two whiplash severity grading systems have been developed: the Quebec Task Force on Whiplash-Associated Disorders (QTF-WAD) and the Croft grading system. The majority of clinical studies to date have used the modified grading system published by the QTF-WAD in 1995 and have demonstrated some ability to predict outcome. But most studies include only injuries of lower severity (grades 1 and 2), preventing a broader interpretation. The purpose of this study was to assess the ability of these grading systems to predict clinical outcome within the context of a broader injury spectrum. This study evaluated both grading systems for their ability to predict the bivalent outcome, recovery, within a sample of 118 whiplash patients who were part of a previous case-control designed study. Of these, 36% (controls) had recovered, and 64% (cases) had not recovered. The discrete bivariate distribution between recovery status and whiplash grade was analyzed using 2-tailed cross-tabulation statistics. Applying the criteria of the original 1993 Croft grading system, the subset comprised 1 grade 1 injury, 32 grade 2 injuries, 53 grade 3 injuries, and 32 grade 4 injuries. Applying the criteria of the modified (QTF-WAD) grading system, there were 1 grade 1 injury, 89 grade 2 injuries, and 28 grade 3 injuries. Both whiplash grading systems correlated negatively with recovery; that is, higher severity grades predicted a lower probability of recovery. Statistically significant correlations were observed for both, but the Croft grading system substantially outperformed the QTF-WAD system on this measure. The Croft grading system for whiplash injury severity was a better predictor of recovery status from whiplash injuries than the QTF-WAD grading system.

  15. Dinucleotide controlled null models for comparative RNA gene prediction

    Directory of Open Access Journals (Sweden)

    Gesell Tanja

    2008-05-01

    Full Text Available Abstract Background Comparative prediction of RNA structures can be used to identify functional noncoding RNAs in genomic screens. It was shown recently by Babak et al. [BMC Bioinformatics. 8:33] that RNA gene prediction programs can be biased by the genomic dinucleotide content, in particular those programs using a thermodynamic folding model including stacking energies. As a consequence, there is a need for dinucleotide-preserving control strategies to assess the significance of such predictions. While there have been randomization algorithms for single sequences for many years, the problem has remained challenging for multiple alignments and there is currently no algorithm available. Results We present a program called SISSIz that simulates multiple alignments of a given average dinucleotide content. Meeting additional requirements of an accurate null model, the randomized alignments are on average of the same sequence diversity and preserve local conservation and gap patterns. We make use of a phylogenetic substitution model that includes overlapping dependencies and site-specific rates. Using fast heuristics and a distance-based approach, a tree is estimated under this model which is used to guide the simulations. The new algorithm is tested on vertebrate genomic alignments and the effect on RNA structure predictions is studied. In addition, we directly combined the new null model with the RNAalifold consensus folding algorithm, giving a new variant of a thermodynamic structure based RNA gene finding program that is not biased by the dinucleotide content. Conclusion SISSIz implements an efficient algorithm to randomize multiple alignments preserving dinucleotide content. It can be used to get more accurate estimates of false positive rates of existing programs, to produce negative controls for the training of machine learning based programs, or as a standalone RNA gene finding program. Other applications in comparative genomics that require randomization of multiple alignments can be considered.

  16. Dinucleotide controlled null models for comparative RNA gene prediction.

    Science.gov (United States)

    Gesell, Tanja; Washietl, Stefan

    2008-05-27

    Comparative prediction of RNA structures can be used to identify functional noncoding RNAs in genomic screens. It was shown recently by Babak et al. [BMC Bioinformatics. 8:33] that RNA gene prediction programs can be biased by the genomic dinucleotide content, in particular those programs using a thermodynamic folding model including stacking energies. As a consequence, there is a need for dinucleotide-preserving control strategies to assess the significance of such predictions. While there have been randomization algorithms for single sequences for many years, the problem has remained challenging for multiple alignments and there is currently no algorithm available. We present a program called SISSIz that simulates multiple alignments of a given average dinucleotide content. Meeting additional requirements of an accurate null model, the randomized alignments are on average of the same sequence diversity and preserve local conservation and gap patterns. We make use of a phylogenetic substitution model that includes overlapping dependencies and site-specific rates. Using fast heuristics and a distance-based approach, a tree is estimated under this model which is used to guide the simulations. The new algorithm is tested on vertebrate genomic alignments and the effect on RNA structure predictions is studied. In addition, we directly combined the new null model with the RNAalifold consensus folding algorithm, giving a new variant of a thermodynamic structure based RNA gene finding program that is not biased by the dinucleotide content. SISSIz implements an efficient algorithm to randomize multiple alignments preserving dinucleotide content. It can be used to get more accurate estimates of false positive rates of existing programs, to produce negative controls for the training of machine learning based programs, or as a standalone RNA gene finding program. Other applications in comparative genomics that require randomization of multiple alignments can be considered.
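A minimal illustration of why dinucleotide content matters as a control statistic (this is not the SISSIz algorithm itself, which randomizes whole alignments under a phylogenetic model): mononucleotide shuffling preserves base composition but not, in general, the overlapping dinucleotide spectrum that stacking-energy-based folding scores respond to.

```python
from collections import Counter
import random

def dinucleotide_counts(seq):
    """Count overlapping dinucleotides, the statistic a dinucleotide-
    preserving null model is required to keep fixed."""
    return Counter(seq[i:i + 2] for i in range(len(seq) - 1))

seq = "GCGCGCATATGCGC"
mono_shuffled = "".join(random.sample(seq, len(seq)))

# Mononucleotide shuffling preserves base composition...
assert Counter(seq) == Counter(mono_shuffled)
# ...but generally distorts the dinucleotide spectrum, which is the bias
# the abstract describes; compare the two Counters to see the drift.
print(dinucleotide_counts(seq), dinucleotide_counts(mono_shuffled))
```

Exact dinucleotide-preserving shuffles of single sequences exist (the Altschul-Erickson Euler-path method); the contribution described above is extending such control to multiple alignments.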

  17. Comparing predicted estrogen concentrations with measurements in US waters

    International Nuclear Information System (INIS)

    Kostich, Mitch; Flick, Robert; Martinson, John

    2013-01-01

    The range of exposure rates to the steroidal estrogens estrone (E1), beta-estradiol (E2), estriol (E3), and ethinyl estradiol (EE2) in the aquatic environment was investigated by modeling estrogen introduction via municipal wastewater from sewage plants across the US. Model predictions were compared to published measured concentrations. Predictions were congruent with most of the measurements, but a few measurements of E2 and EE2 exceed those that would be expected from the model, despite very conservative model assumptions of no degradation or in-stream dilution. Although some extreme measurements for EE2 may reflect analytical artifacts, remaining data suggest concentrations of E2 and EE2 may reach twice the 99th percentile predicted from the model. The model and bulk of the measurement data both suggest that cumulative exposure rates to humans are consistently low relative to effect levels, but also suggest that fish exposures to E1, E2, and EE2 sometimes substantially exceed chronic no-effect levels. -- Highlights: •Conservatively modeled steroidal estrogen concentrations in ambient water. •Found reasonable agreement between model and published measurements. •Model and measurements agree that risks to humans are remote. •Model and measurements agree significant questions remain about risk to fish. •Need better understanding of temporal variations and their impact on fish. -- Our model and published measurements for estrogens suggest aquatic exposure rates for humans are below potential effect levels, but fish exposure sometimes exceeds published no-effect levels

  18. Atterberg Limits Prediction Comparing SVM with ANFIS Model

    Directory of Open Access Journals (Sweden)

    Mohammad Murtaza Sherzoy

    2017-03-01

    Full Text Available Support Vector Machines (SVM) and Adaptive Neuro-Fuzzy Inference Systems (ANFIS) are both analytical methods used to predict the values of Atterberg limits, such as the liquid limit, plastic limit and plasticity index. The main objective of this study is to make a comparison between the forecasts of the two methods (SVM & ANFIS). Data from 54 soil samples taken from the area of Peninsular Malaysia were used; the samples were tested for different parameters, including liquid limit, plastic limit, plasticity index and grain size distribution. The input parameters used in this case are the fractions of the grain size distribution, namely the percentages of silt, clay and sand. The actual and predicted values of the Atterberg limits obtained from the SVM and ANFIS models are compared using the correlation coefficient R2 and the root mean squared error (RMSE). The outcome of the study shows that the ANFIS model has higher accuracy than the SVM model for the liquid limit (R2 = 0.987), plastic limit (R2 = 0.949) and plasticity index (R2 = 0.966). The RMSE values obtained for both methods show that the ANFIS model performs better than the SVM model in predicting the Atterberg limits as a whole.
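The two reported goodness-of-fit measures can be computed directly as below; the liquid-limit values are hypothetical stand-ins, not the paper's Malaysian soil data.

```python
import math

def rmse(actual, predicted):
    """Root mean squared error between observed and predicted values."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted))
                     / len(actual))

def r_squared(actual, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_a = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    return 1.0 - ss_res / ss_tot

# Hypothetical liquid-limit values (%) for a handful of samples
actual    = [42.0, 55.0, 38.0, 61.0, 47.0]
predicted = [41.0, 56.5, 39.0, 59.0, 47.5]
print(rmse(actual, predicted), r_squared(actual, predicted))
```

A model comparison of the kind the abstract describes amounts to computing these two statistics on a held-out set for each model and preferring the one with higher R2 and lower RMSE.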

  19. Evaluating and comparing algorithms for respiratory motion prediction

    International Nuclear Information System (INIS)

    Ernst, F; Dürichen, R; Schlaefer, A; Schweikard, A

    2013-01-01

    In robotic radiosurgery, it is necessary to compensate for systematic latencies arising from target tracking and mechanical constraints. This compensation is usually achieved by means of an algorithm which computes the future target position. In most scientific works on respiratory motion prediction, only one or two algorithms are evaluated on a limited amount of very short motion traces. The purpose of this work is to gain more insight into the real world capabilities of respiratory motion prediction methods by evaluating many algorithms on an unprecedented amount of data. We have evaluated six algorithms, the normalized least mean squares (nLMS), recursive least squares (RLS), multi-step linear methods (MULIN), wavelet-based multiscale autoregression (wLMS), extended Kalman filtering, and ε-support vector regression (SVRpred) methods, on an extensive database of 304 respiratory motion traces. The traces were collected during treatment with the CyberKnife (Accuray, Inc., Sunnyvale, CA, USA) and feature an average length of 71 min. Evaluation was done using a graphical prediction toolkit, which is available to the general public, as is the data we used. The experiments show that the nLMS algorithm—which is one of the algorithms currently used in the CyberKnife—is outperformed by all other methods. This is especially true in the case of the wLMS, the SVRpred, and the MULIN algorithms, which perform much better. The nLMS algorithm produces a relative root mean square (RMS) error of 75% or less (i.e., a reduction in error of 25% or more when compared to not doing prediction) in only 38% of the test cases, whereas the MULIN and SVRpred methods reach this level in more than 77%, the wLMS algorithm in more than 84% of the test cases. Our work shows that the wLMS algorithm is the most accurate algorithm and does not require parameter tuning, making it an ideal candidate for clinical implementation. Additionally, we have seen that the structure of a patient
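A toy sketch of the normalized LMS idea, one of the algorithm classes evaluated above: the last few samples predict the next one, and the weight vector is updated with a step size normalized by the input power. This is not the CyberKnife implementation; the tap count and step size are illustrative, and a noise-free sinusoid stands in for a breathing trace.

```python
import math

def nlms_predict(signal, taps=4, mu=0.5, eps=1e-6):
    """One-step-ahead normalized LMS prediction (illustrative parameters)."""
    w = [0.0] * taps
    predictions = []
    for t in range(taps, len(signal)):
        x = signal[t - taps:t]                 # most recent history window
        y_hat = sum(wi * xi for wi, xi in zip(w, x))
        predictions.append(y_hat)
        e = signal[t] - y_hat                  # prediction error
        norm = eps + sum(xi * xi for xi in x)  # input power for normalization
        w = [wi + (mu / norm) * e * xi for wi, xi in zip(w, x)]
    return predictions

# Sinusoid as a stand-in for a (noise-free) breathing trace
trace = [math.sin(0.2 * t) for t in range(400)]
pred = nlms_predict(trace)
err = [abs(a - p) for a, p in zip(trace[len(trace) - len(pred):], pred)]
e_early = sum(err[:50]) / 50   # error while the filter is still adapting
e_late = sum(err[-50:]) / 50   # error after convergence
```

Real respiratory traces are noisy and nonstationary, which is exactly why the study compares nLMS against wLMS, MULIN, Kalman, and SVR variants on long recorded traces rather than on clean signals like this one.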

  20. Comparative of the Tribological Performance of Hydraulic Cylinders Coated by the Process of Thermal Spray HVOF and Hard Chrome Plating

    Directory of Open Access Journals (Sweden)

    R.M. Castro

    2014-03-01

    Full Text Available Due to the necessity of obtaining a surface that is resistant to wear and oxidation, hydraulic cylinders are typically coated with hard chrome through an electroplating process. However, this type of coating shows an increase of the area that supports the sealing elements, which interferes directly in the lubrication of the rod, causing damage to the seal components and leading to oil leakage. Another disadvantage of the electroplated hard chromium process is the presence of high levels of hexavalent chromium (Cr+6), which is not only carcinogenic but also extremely contaminating to the environment. Currently, the alternative process of high-velocity thermal spraying (HVOF - High Velocity Oxy-Fuel) uses composite (metal-ceramic) materials possessing low wear rates. Research has shown that some mechanical properties are changed positively by the thermal spray process in industrial applications. It is evident that a WC-based coating has superior characteristics, such as wear resistance and a low friction coefficient, with respect to hard chrome coatings. These characteristics were analyzed by optical microscopy, roughness measurements and wear testing.

  1. Prediction of Large Vessel Occlusions in Acute Stroke: National Institute of Health Stroke Scale Is Hard to Beat.

    Science.gov (United States)

    Vanacker, Peter; Heldner, Mirjam R; Amiguet, Michael; Faouzi, Mohamed; Cras, Patrick; Ntaios, George; Arnold, Marcel; Mattle, Heinrich P; Gralla, Jan; Fischer, Urs; Michel, Patrik

    2016-06-01

    provides minimal incremental predictive value compared with the National Institute of Health Stroke Scale alone.

  2. Prediction of Surface Roughness in End Milling Process Using Intelligent Systems: A Comparative Study

    Directory of Open Access Journals (Sweden)

    Abdel Badie Sharkawy

    2011-01-01

    Full Text Available A study is presented to model surface roughness in the end milling process. Three types of intelligent networks have been considered: (i) radial basis function neural networks (RBFNs), (ii) adaptive neuro-fuzzy inference systems (ANFISs), and (iii) genetically evolved fuzzy inference systems (G-FISs). The machining parameters, namely, the spindle speed, feed rate, and depth of cut, have been used as inputs to model the workpiece surface roughness. The goal is to get the best prediction accuracy. The procedure is illustrated using experimental data from end milling 6061 aluminum alloy. The three networks have been trained using experimental training data. After training, they have been examined using another set of data, that is, validation data. Results are compared with previously published results. It is concluded that ANFIS networks may suffer from the local minima problem, and genetic tuning of fuzzy networks cannot ensure perfect optimality unless suitable parameter settings (population size, number of generations, etc.) and tuning ranges for the FIS parameters are used, which can hardly be satisfied. It is shown that the RBFN model has the best performance (prediction accuracy) in this particular case.

  3. Statistical experiments using the multiple regression research for prediction of proper hardness in areas of phosphorus cast-iron brake shoes manufacturing

    Science.gov (United States)

    Kiss, I.; Cioată, V. G.; Ratiu, S. A.; Rackov, M.; Penčić, M.

    2018-01-01

    Multivariate research is important in areas of cast-iron brake shoes manufacturing, because many variables interact with each other simultaneously. This article focuses on expressing the multiple linear regression model relating hardness to the chemical composition of the phosphorous cast irons destined for the brake shoes, having in view that the regression coefficients will illustrate the separate contributions of each independent variable towards predicting the dependent variable. In order to settle the multiple correlations between the hardness of the cast-iron brake shoes and their chemical compositions, several regression equations have been proposed. A mathematical solution is sought that can determine the optimum chemical composition for the desired hardness values. Starting from the above-mentioned affirmations, two new statistical experiments were carried out related to the values of Phosphorus [P], Manganese [Mn] and Silicon [Si]. Therefore, the regression equations, which describe the mathematical dependency between the above-mentioned elements and the hardness, are determined. As a result, several correlation charts are presented.
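A least-squares fit of the kind described, e.g. HB ≈ b0 + b1·[P] + b2·[Mn] + b3·[Si], can be sketched by solving the normal equations. The composition and hardness numbers below are hypothetical, generated from assumed coefficients purely to show the fitting step; they are not the paper's data.

```python
def fit_linear(rows, y):
    """Ordinary least squares via the normal equations X^T X b = X^T y,
    solved by Gaussian elimination with partial pivoting."""
    X = [[1.0] + list(r) for r in rows]        # prepend intercept column
    n, m = len(X), len(X[0])
    # Build the augmented normal-equations matrix [X^T X | X^T y]
    A = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(m)]
         + [sum(X[k][i] * y[k] for k in range(n))] for i in range(m)]
    for i in range(m):                         # forward elimination
        p = max(range(i, m), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        for r in range(i + 1, m):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * b for a, b in zip(A[r], A[i])]
    b = [0.0] * m                              # back substitution
    for i in reversed(range(m)):
        b[i] = (A[i][m] - sum(A[i][j] * b[j] for j in range(i + 1, m))) / A[i][i]
    return b

# Hypothetical (P, Mn, Si) fractions and Brinell hardness values generated
# from assumed coefficients HB = 180 + 40*P + 10*Mn + 5*Si
comp = [(0.10, 0.60, 1.8), (0.25, 0.70, 2.1), (0.40, 0.90, 1.9),
        (0.55, 0.80, 2.4), (0.30, 0.75, 2.0)]
hb = [199.0, 207.5, 214.5, 222.0, 209.5]
coef = fit_linear(comp, hb)    # recovers [180, 40, 10, 5]
```

With real data the fit would not be exact; the residuals and the coefficient signs are what the regression analysis in the abstract interprets.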

  4. [Comparative study of expression of homeobox gene Msx-1, Msx-2 mRNA during the hard tissue formation of mouse tooth development].

    Science.gov (United States)

    Wang, Y; Wang, J; Gao, Y

    2001-07-01

    To observe and compare the expression patterns of Msx-1 and Msx-2 mRNA during the different stages of hard tissue formation in the first mandibular molar of the mouse and investigate the relationship between the two genes. First mandibular molar germs from 1-, 3-, 7- and 14-day-old mice were separated, and reverse transcription-polymerase chain reaction was performed on their total RNA using Msx-1 and Msx-2 specific primers separately. Expression of both genes was detected during the different stages of hard tissue formation in the mouse first mandibular molars, but there were some interesting differences in quantity between the two genes. Msx-1 transcripts appeared at 1 day postnatally and increased through 3 and 7 days, reaching maximal expression at 14 days postnatally, while Msx-2 mRNA was seen and expressed maximally at 3 days postnatally, followed by a gradual reduction at 7 and 14 days postnatally. The homeobox genes Msx-1 and Msx-2 may play a role in the events of hard tissue formation. Their complementary expression patterns during specific stages of hard tissue formation indicate that there may be some functional redundancy between them during biomineralization.

  5. Prediction of hardness for Al-Cu-Zn alloys in as-cast and quenching conditions; Prediccion de la dureza de aleaciones Al-Cu-Zn en estado de colada y templado

    Energy Technology Data Exchange (ETDEWEB)

    Villegas-Cardenas, J. D.; Saucedo-Munoz, M. L.; Lopez-Hirata, V. M.; Dorantes Rosales, H. J.

    2014-10-01

    This work presents a new experimental and numerical methodology to predict the hardness of as-cast, and solution-treated and quenched, Al-Cu-Zn alloys. The chemical compositions of the alloys lie along two straight lines represented by two equations. Eight different compositions were selected from each line. All the alloys were characterized by light microscopy, scanning electron microscopy, X-ray diffraction and Rockwell B hardness testing. The equilibrium phases were obtained at different temperatures with Thermo-Calc. The microstructure characterization and regression analysis made it possible to determine the phase transformations and two equations for hardness assessment. The combination of the hardness equations and the composition line equations permitted the estimation of the hardness of any alloy composition inside this zone. This was verified by calculating hardness with the information reported in other works, with an error lower than 7% in the estimated hardness. (Author)

  6. Comparative Evaluation of Some Crop Yield Prediction Models ...

    African Journals Online (AJOL)

    A computer program was adopted from the work of Hill et al. (1982) to calibrate and test three of the existing yield prediction models using tropical cowpea yield–weather data. The models tested were Hanks Model (first and second versions), Stewart Model (first and second versions) and Hall–Butcher Model. Three sets of ...

  7. Intertidal beach slope predictions compared to field data

    NARCIS (Netherlands)

    Madsen, A.J.; Plant, N.G.

    2001-01-01

    This paper presents a test of a very simple model for predicting beach slope changes. The model assumes that these changes are a function of both the incident wave conditions and the beach slope itself. Following other studies, we hypothesized that the beach slope evolves towards an equilibrium

  8. A comparative approach for the investigation of biological information processing: An examination of the structure and function of computer hard drives and DNA

    Science.gov (United States)

    2010-01-01

    Background The robust storage, updating and utilization of information are necessary for the maintenance and perpetuation of dynamic systems. These systems can exist as constructs of metal-oxide semiconductors and silicon, as in a digital computer, or in the "wetware" of organic compounds, proteins and nucleic acids that make up biological organisms. We propose that there are essential functional properties of centralized information-processing systems; for digital computers these properties reside in the computer's hard drive, and for eukaryotic cells they are manifest in the DNA and associated structures. Methods Presented herein is a descriptive framework that compares DNA and its associated proteins and sub-nuclear structure with the structure and function of the computer hard drive. We identify four essential properties of information for a centralized storage and processing system: (1) orthogonal uniqueness, (2) low level formatting, (3) high level formatting and (4) translation of stored to usable form. The corresponding aspects of the DNA complex and a computer hard drive are categorized using this classification. This is intended to demonstrate a functional equivalence between the components of the two systems, and thus the systems themselves. Results Both the DNA complex and the computer hard drive contain components that fulfill the essential properties of a centralized information storage and processing system. The functional equivalence of these components provides insight into both the design process of engineered systems and the evolved solutions addressing similar system requirements. However, there are points where the comparison breaks down, particularly when there are externally imposed information-organizing structures on the computer hard drive. A specific example of this is the imposition of the File Allocation Table (FAT) during high level formatting of the computer hard drive and the subsequent loading of an operating system (OS). 

  9. A comparative approach for the investigation of biological information processing: an examination of the structure and function of computer hard drives and DNA.

    Science.gov (United States)

    D'Onofrio, David J; An, Gary

    2010-01-21

    The robust storage, updating and utilization of information are necessary for the maintenance and perpetuation of dynamic systems. These systems can exist as constructs of metal-oxide semiconductors and silicon, as in a digital computer, or in the "wetware" of organic compounds, proteins and nucleic acids that make up biological organisms. We propose that there are essential functional properties of centralized information-processing systems; for digital computers these properties reside in the computer's hard drive, and for eukaryotic cells they are manifest in the DNA and associated structures. Presented herein is a descriptive framework that compares DNA and its associated proteins and sub-nuclear structure with the structure and function of the computer hard drive. We identify four essential properties of information for a centralized storage and processing system: (1) orthogonal uniqueness, (2) low level formatting, (3) high level formatting and (4) translation of stored to usable form. The corresponding aspects of the DNA complex and a computer hard drive are categorized using this classification. This is intended to demonstrate a functional equivalence between the components of the two systems, and thus the systems themselves. Both the DNA complex and the computer hard drive contain components that fulfill the essential properties of a centralized information storage and processing system. The functional equivalence of these components provides insight into both the design process of engineered systems and the evolved solutions addressing similar system requirements. However, there are points where the comparison breaks down, particularly when there are externally imposed information-organizing structures on the computer hard drive. A specific example of this is the imposition of the File Allocation Table (FAT) during high level formatting of the computer hard drive and the subsequent loading of an operating system (OS). Biological systems do not have an

  10. A comparative approach for the investigation of biological information processing: An examination of the structure and function of computer hard drives and DNA

    Directory of Open Access Journals (Sweden)

    D'Onofrio David J

    2010-01-01

    Full Text Available Abstract Background The robust storage, updating and utilization of information are necessary for the maintenance and perpetuation of dynamic systems. These systems can exist as constructs of metal-oxide semiconductors and silicon, as in a digital computer, or in the "wetware" of organic compounds, proteins and nucleic acids that make up biological organisms. We propose that there are essential functional properties of centralized information-processing systems; for digital computers these properties reside in the computer's hard drive, and for eukaryotic cells they are manifest in the DNA and associated structures. Methods Presented herein is a descriptive framework that compares DNA and its associated proteins and sub-nuclear structure with the structure and function of the computer hard drive. We identify four essential properties of information for a centralized storage and processing system: (1) orthogonal uniqueness, (2) low level formatting, (3) high level formatting and (4) translation of stored to usable form. The corresponding aspects of the DNA complex and a computer hard drive are categorized using this classification. This is intended to demonstrate a functional equivalence between the components of the two systems, and thus the systems themselves. Results Both the DNA complex and the computer hard drive contain components that fulfill the essential properties of a centralized information storage and processing system. The functional equivalence of these components provides insight into both the design process of engineered systems and the evolved solutions addressing similar system requirements. However, there are points where the comparison breaks down, particularly when there are externally imposed information-organizing structures on the computer hard drive.
A specific example of this is the imposition of the File Allocation Table (FAT) during high level formatting of the computer hard drive and the subsequent loading of an operating system (OS).

  11. Hard processes in hadronic interactions

    International Nuclear Information System (INIS)

    Satz, H.; Wang, X.N.

    1995-01-01

    Quantum chromodynamics is today accepted as the fundamental theory of strong interactions, even though most hadronic collisions lead to final states for which quantitative QCD predictions are still lacking. It therefore seems worthwhile to take stock of where we stand today and to what extent the presently available data on hard processes in hadronic collisions can be accounted for in terms of QCD. This is one reason for this work. The second reason - and in fact its original trigger - is the search for the quark-gluon plasma in high energy nuclear collisions. The hard processes to be considered here are the production of prompt photons, Drell-Yan dileptons, open charm, quarkonium states, and hard jets. For each of these, we discuss the present theoretical understanding, compare the resulting predictions to available data, and then show what behaviour they lead to at RHIC and LHC energies. All of these processes have the structure mentioned above: they contain a hard partonic interaction, calculable perturbatively, but also the non-perturbative parton distribution within a hadron. These parton distributions, however, can be studied theoretically in terms of counting rule arguments, and they can be checked independently by measurements of the parton structure functions in deep inelastic lepton-hadron scattering. The present volume is the work of the Hard Probe Collaboration, a group of theorists who are interested in the problem and were willing to dedicate a considerable amount of their time and work to it. The necessary preparation, planning and coordination of the project were carried out in two workshops of two weeks' duration each, in February 1994 at CERN in Geneva and in July 1994 at LBL in Berkeley

  12. Vis-NIR hyperspectral imaging and multivariate analysis for prediction of the moisture content and hardness of Pistachio kernels roasted in different conditions

    Directory of Open Access Journals (Sweden)

    T Mohammadi Moghaddam

    2015-09-01

    of determination (R2), the root mean square error of prediction (RMSEP) and the ratio of the standard deviation of the response variable to RMSEP (known as the relative performance determinant, RPD) were calculated. Results and discussion: Interpretation of hyperspectral data: The results showed that the spectra of the shell, the whole kernel and the internal part of the kernel have different patterns. The internal part of the kernel had 2 peaks, at 630 nm and 690 nm, while the shell and the whole kernel had 1 peak, at 670 nm and 720 nm, respectively, and the peak of the whole kernel was sharper than that of the shell. The highest and lowest intensities were for the internal part of the kernel and the whole kernel, respectively. The spectral slope of the internal part is higher than that of the shell and the whole kernel at 500-700 nm. The effect of different pre-processing techniques and analysis on prediction of pistachio kernel properties: In the absence of pre-processing techniques, low correlation coefficients were observed for prediction of moisture content and hardness. However, with the use of pre-processing techniques, in some models the correlation coefficient and RPD increased and the RMSEP decreased. The results revealed that ANN models predict the moisture content and textural characteristics of roasted pistachio kernels better than PLSR models. Moisture content: ANN models can predict the moisture content of roasted pistachio kernels better than PLSR models. In total, PLSR models showed low RPD and R2. For all samples, RPD was lower than 1.5, indicating that the developed models do not give an accurate prediction of moisture content. The best results with the ANN method were achieved using a combination of SNV, wavelet and D1 for predicting moisture content, with R2 = 0.907 and RMSEP = 0.179. Hardness: The results indicated that ANN models can predict the hardness better than PLSR models.
The best results with PLSR models were achieved using a combination of SNV, wavelet and
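Of the pre-processing steps named above, the standard normal variate (SNV) transform is the simplest to show: each spectrum is centred and scaled by its own mean and standard deviation, which suppresses multiplicative scatter effects. The reflectance values below are hypothetical.

```python
import math

def snv(spectrum):
    """Standard normal variate correction: centre and scale one spectrum
    by its own mean and sample standard deviation."""
    n = len(spectrum)
    mean = sum(spectrum) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in spectrum) / (n - 1))
    return [(v - mean) / sd for v in spectrum]

# Hypothetical reflectance values at six wavelengths for one sample
spec = [0.42, 0.47, 0.55, 0.61, 0.58, 0.50]
corrected = snv(spec)
```

After SNV each spectrum has mean 0 and unit sample variance, so differences between samples reflect spectral shape rather than overall offset or gain.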

  13. COMPARING FINANCIAL DISTRESS PREDICTION MODELS BEFORE AND DURING RECESSION

    Directory of Open Access Journals (Sweden)

    Nataša Šarlija

    2011-02-01

    Full Text Available The purpose of this paper is to design three separate financial distress prediction models that track the changes in the relative importance of financial ratios throughout three consecutive years. The models were based on the financial data of 2,000 privately-owned small and medium-sized enterprises in Croatia from 2006 to 2009, and were developed by means of logistic regression. Macroeconomic conditions as well as market dynamics changed over this period. Financial ratios that were less important in one period became more important in the next. The composition of the model starting in 2006 changed in the following years. This shows which financial ratios are more important during a time of economic downturn. Moreover, it helps us to understand the behavior of small and medium-sized enterprises in the pre-recession and recession periods.

  14. PREDICTING THE INTENTION TO USE INTERNET – A COMPARATIVE STUDY

    Directory of Open Access Journals (Sweden)

    Slaven Brumec

    2006-06-01

    Full Text Available This article focuses on an application of the Triandis Model in researching Internet usage and the intention to use the Internet. Unlike other TAM-based studies undertaken to date, the Triandis Model offers a sociological account of the interaction between various factors, particularly attitude, intention, and behavior. The technique of Structural Equation Modeling was used to assess the impact these factors have on the intention to use the Internet, in accordance with the relationships posited by the Triandis Model. The survey was administered to Croatian undergraduate students and to employed individuals. The survey results are compared to the results of a similar survey carried out by two universities in Hong Kong.

  15. SELF-ESTEEM OF DEAF AND HARD OF HEARING COMPARED WITH HEARING ADOLESCENTS IN SLOVENIA – THE CONTEXT OF SOCIAL AND COMMUNICATION FACTORS

    Directory of Open Access Journals (Sweden)

    Damjana KOGOVSEK

    2015-11-01

    Full Text Available Objective: The study focuses on the self-esteem of deaf and hard of hearing (D/HH) and hearing adolescents (HA) in Slovenia. The aim of this study is a comparison of self-esteem between D/HH and HA regarding hearing status, age and gender, and a comparison among D/HH adolescents regarding communication and education settings. It is hypothesized that deaf and hard of hearing adolescents have lower self-esteem than their hearing peers. Methods: The final sample included 130 adolescents who were split into two groups with the method of equal pairs: 65 D/HH adolescents and 65 HA, matched on the basis of gender, age, nationality, and educational programme of schooling. Self-esteem was measured with the Rosenberg Self-Esteem Scale, which was translated and adapted into the Slovenian Sign Language (SSL). Results: The results show significant differences in self-esteem between D/HH and HA adolescents. D/HH adolescents have, on average, lower self-esteem than HA. There are differences in self-esteem regarding gender and also between the ages of 16 and 20. D/HH adolescents who use speech or sign language in their communication have higher self-esteem than those who use mostly sign language. D/HH adolescents in mainstream schools have higher self-esteem than those included in a segregated form of schooling. Discussion: There are differences among adolescents in how they view themselves. Self-esteem can be a significant predictor of life satisfaction. Conclusion: D/HH adolescents experience lower self-esteem when compared with HA peers.

  16. Standard hardness conversion tables for metals relationship among brinell hardness, vickers hardness, rockwell hardness, superficial hardness, knoop hardness, and scleroscope hardness

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2007-01-01

    1.1 Conversion Table 1 presents data in the Rockwell C hardness range on the relationship among Brinell hardness, Vickers hardness, Rockwell hardness, Rockwell superficial hardness, Knoop hardness, and Scleroscope hardness of non-austenitic steels including carbon, alloy, and tool steels in the as-forged, annealed, normalized, and quenched and tempered conditions provided that they are homogeneous. 1.2 Conversion Table 2 presents data in the Rockwell B hardness range on the relationship among Brinell hardness, Vickers hardness, Rockwell hardness, Rockwell superficial hardness, Knoop hardness, and Scleroscope hardness of non-austenitic steels including carbon, alloy, and tool steels in the as-forged, annealed, normalized, and quenched and tempered conditions provided that they are homogeneous. 1.3 Conversion Table 3 presents data on the relationship among Brinell hardness, Vickers hardness, Rockwell hardness, Rockwell superficial hardness, and Knoop hardness of nickel and high-nickel alloys (nickel content o...

  17. Hard coatings

    International Nuclear Information System (INIS)

    Dan, J.P.; Boving, H.J.; Hintermann, H.E.

    1993-01-01

    Hard, wear resistant and low friction coatings are presently produced on a world-wide basis, by different processes such as electrochemical or electroless methods, spray technologies, thermochemical, CVD and PVD. Some of the most advanced processes, especially those dedicated to thin film depositions, basically belong to CVD or PVD technologies, and will be looked at in more detail. The hard coatings mainly consist of oxides, nitrides, carbides, borides or carbon. Over the years, many processes have been developed which are variations and/or combinations of the basic CVD and PVD methods. The main difference between these two families of deposition techniques is that the CVD is an elevated temperature process (≥ 700 C), while the PVD on the contrary, is rather a low temperature process (≤ 500 C); this of course influences the choice of substrates and properties of the coating/substrate systems. Fundamental aspects of the vapor phase deposition techniques and some of their influences on coating properties will be discussed, as well as the very important interactions between deposit and substrate: diffusions, internal stress, etc. Advantages and limitations of CVD and PVD respectively will briefly be reviewed and examples of applications of the layers will be given. Parallel to the development and permanent updating of surface modification technologies, an effort was made to create novel characterisation methods. A close look will be given to the coating adherence control by means of the scratch test, at the coating hardness measurement by means of nanoindentation, at the coating wear resistance by means of a pin-on-disc tribometer, and at the surface quality evaluation by Atomic Force Microscopy (AFM). Finally, main important trends will be highlighted. (orig.)

  18. Comparative evaluation of soft and hard tissue dimensions in the anterior maxilla using radiovisiography and cone beam computed tomography: A pilot study

    Directory of Open Access Journals (Sweden)

    Savita Mallikarjun

    2016-01-01

    Full Text Available Aims: To assess and compare the thickness of gingiva in the anterior maxilla using radiovisiography (RVG) and cone beam computed tomography (CBCT) and its correlation with the thickness of underlying alveolar bone. Settings and Design: This cross-sectional study included 10 male subjects in the age group of 20–45 years. Materials and Methods: After analyzing the width of keratinized gingiva of the maxillary right central incisor, the radiographic assessment was done using a modified technique for RVG and CBCT, to measure the thickness of both the labial gingiva and labial plate of alveolar bone at 4 predetermined locations along the length of the root in each case. Statistical Analysis Used: Statistical analysis was performed using Student's t-test and Pearson's correlation test, with the help of statistical software (SPSS V13). Results: No statistically significant differences were obtained in the measurements made using RVG and CBCT. The results of the present study also failed to reveal any significant correlation between the width of gingiva and the alveolar bone in the maxillary anterior region. Conclusions: Within the limitations of this study, it can be concluded that both CBCT and RVG can be used as valuable tools in the assessment of the soft and hard tissue dimensions.

  19. A Comparative Taxonomy of Parallel Algorithms for RNA Secondary Structure Prediction

    Science.gov (United States)

    Al-Khatib, Ra’ed M.; Abdullah, Rosni; Rashid, Nur’Aini Abdul

    2010-01-01

    RNA molecules have been discovered to play crucial roles in numerous biological and medical procedures and processes. RNA structure determination has become a major problem in biology. Recently, computer scientists have empowered biologists with RNA secondary structures that ease the understanding of RNA functions and roles. Detecting RNA secondary structure is an NP-hard problem, especially for pseudoknotted RNA structures. The detection process is also time-consuming; as a result, an alternative approach such as using parallel architectures is a desirable option. The main goal of this paper is an intensive investigation of the parallel methods used in the literature to solve the demanding issues related to RNA secondary structure prediction. We then introduce a new taxonomy for parallel RNA folding methods. Based on this proposed taxonomy, a systematic and scientific comparison is performed among the existing methods. PMID:20458364
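    As a concrete serial baseline for the folding methods surveyed, here is a minimal sketch of the classical Nussinov dynamic program (base-pair maximization, no pseudoknots). Real predictors use thermodynamic energy models; the parallel methods in the taxonomy parallelize recurrences of this shape:

```python
def nussinov_pairs(seq, min_loop=3):
    # Nussinov recurrence: dp[i][j] = max number of nested complementary
    # base pairs in seq[i..j], requiring at least min_loop unpaired bases
    # inside a hairpin. Didactic sketch, not a production folder.
    pair = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]          # j unpaired
            for k in range(i, j - min_loop):
                if (seq[k], seq[j]) in pair:   # pair k with j
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1] if n else 0
```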

  20. Hard-on-hard lubrication in the artificial hip under dynamic loading conditions.

    Directory of Open Access Journals (Sweden)

    Robert Sonntag

    Full Text Available The tribological performance of an artificial hip joint has a particularly strong influence on its success. The principal causes of failure are adverse short- and long-term reactions to wear debris and high frictional torque in the case of poor lubrication, which may cause loosening of the implant. Therefore, models using experimental and theoretical approaches have been developed to evaluate lubrication under standardized conditions. A steady-state numerical model has been extended with dynamic experimental data for hard-on-hard bearings used in total hip replacements to verify the tribological relevance of the ISO 14242-1 gait cycle in comparison to experimental data from the Orthoload database and instrumented gait analysis for three additional loading conditions: normal walking, climbing stairs and descending stairs. Ceramic-on-ceramic bearing partners show superior lubrication potential compared to hard-on-hard bearings that work with at least one articulating metal component. Lubrication regimes during the investigated activities are shown to depend strongly on the kinematics and loading conditions. The outcome from the ISO gait is not fully confirmed by the normal walking data, and more challenging conditions show evidence of inferior lubrication. These findings may help to explain the differences between the in vitro predictions using the ISO gait cycle and the clinical outcome of some hard-on-hard bearings, e.g., using metal-on-metal.

  1. A Comparative Study of Spectral Auroral Intensity Predictions From Multiple Electron Transport Models

    Science.gov (United States)

    Grubbs, Guy; Michell, Robert; Samara, Marilia; Hampton, Donald; Hecht, James; Solomon, Stanley; Jahn, Jorg-Micha

    2018-01-01

    It is important to routinely examine and update models used to predict auroral emissions resulting from precipitating electrons in Earth's magnetotail. These models are commonly used to invert spectral auroral ground-based images to infer characteristics about incident electron populations when in situ measurements are unavailable. In this work, we examine and compare auroral emission intensities predicted by three commonly used electron transport models using varying electron population characteristics. We then compare model predictions to same-volume in situ electron measurements and ground-based imaging to qualitatively examine modeling prediction error. Initial comparisons showed differences in predictions by the GLobal airglOW (GLOW) model and the other transport models examined. Chemical reaction rates and radiative rates in GLOW were updated using recent publications, and predictions showed better agreement with the other models and the same-volume data, stressing that these rates are important to consider when modeling auroral processes. Predictions by each model exhibit similar behavior for varying atmospheric constants, energies, and energy fluxes. Same-volume electron data and images are highly correlated with predictions by each model, showing that these models can be used to accurately derive electron characteristics and ionospheric parameters based solely on multispectral optical imaging data.

  2. On the application of response surface methodology for predicting and optimizing surface roughness and cutting forces in hard turning by PVD coated insert

    Directory of Open Access Journals (Sweden)

    Hessainia Zahia

    2015-04-01

    Full Text Available This paper focuses on the exploitation of the response surface methodology (RSM) to determine optimum cutting conditions leading to minimum surface roughness and cutting force components. The RSM technique helps to create an efficient statistical model for studying the evolution of surface roughness and cutting forces according to the cutting parameters: cutting speed, feed rate and depth of cut. For this purpose, turning tests of a hardened steel alloy, AISI 4140 (56 HRC), were carried out using a PVD-coated ceramic insert under different cutting conditions. The equations of surface roughness and cutting forces were obtained using the experimental data and the analysis of variance (ANOVA) technique. The obtained results are presented in terms of mean values and confidence levels. It is shown that feed rate and depth of cut are the most influential factors on surface roughness and cutting forces, respectively. In addition, it is underlined that the surface roughness is mainly related to the cutting speed, whereas depth of cut has the greatest effect on the evolution of cutting forces. The optimal machining parameters obtained in this study represent reductions of about 6.88%, 3.65% and 19.05% in the cutting force components (Fa, Fr, Ft), respectively. These are compared with the results of the initial cutting parameters for machining AISI 4140 steel in the hard turning process.
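    A response-surface model of the type fitted in such studies is a low-order polynomial in the cutting parameters. The sketch below uses a first-order form with invented coefficients for illustration only, not the ANOVA-fitted values for AISI 4140:

```python
def surface_roughness(v, f, d, coefs=(1.2, -0.004, 9.5, 0.3)):
    # Hypothetical first-order RSM model: Ra = b0 + b1*Vc + b2*f + b3*ap,
    # with cutting speed v (m/min), feed rate f (mm/rev), depth of cut d (mm).
    # Coefficients are illustrative placeholders; RSM studies typically fit
    # them (plus interaction and quadratic terms) to designed experiments.
    b0, b1, b2, b3 = coefs
    return b0 + b1 * v + b2 * f + b3 * d
```

    Minimizing such a fitted surface over the feasible parameter ranges is what yields the reported optimal cutting conditions.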

  3. Using data-driven approach for wind power prediction: A comparative study

    International Nuclear Information System (INIS)

    Taslimi Renani, Ehsan; Elias, Mohamad Fathi Mohamad; Rahim, Nasrudin Abd.

    2016-01-01

    Highlights: • Double exponential smoothing is the most accurate model in wind speed prediction. • A two-stage feature selection method is proposed to select the most important inputs. • Direct prediction illustrates better accuracy than indirect prediction. • Adaptive neuro fuzzy inference system outperforms data mining algorithms. • Random forest performs the worst compared to other data mining algorithms. - Abstract: Although wind energy is intermittent and stochastic in nature, it is increasingly important in power generation due to its sustainability and pollution-free nature. Increased utilization of wind energy sources calls for more robust and efficient prediction models to mitigate the uncertainties associated with wind power. This research compares two different approaches to wind power forecasting: indirect and direct prediction. In the indirect method, several time series models are applied to forecast the wind speed, and a logistic function with five parameters is then used to forecast the wind power. In this study, a backtracking search algorithm with novel crossover and mutation operators is employed to find the best parameters of the five-parameter logistic function. A new feature selection technique, combining mutual information and a neural network, is proposed in this paper to extract the most informative features with maximum relevancy and minimum redundancy. From the comparative study, the results demonstrate that, in the direct prediction approach where the historical weather data are used to predict the wind power generation directly, the adaptive neuro fuzzy inference system outperforms five data mining algorithms, namely random forest, M5Rules, k-nearest neighbor, support vector machine and multilayer perceptron. Moreover, it is also found that the mean absolute percentage error of the direct prediction method using adaptive neuro fuzzy inference system is 1.47% which is approximately less than half of the error obtained with the
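    A five-parameter logistic curve of the kind used in the indirect approach is commonly written as P(v) = d + (a − d) / (1 + (v/c)^b)^g. A minimal sketch, with the parameter roles assumed (a: output at zero wind, d: rated power), since the paper's own parameterization is not reproduced here:

```python
def logistic5(v, a, b, c, d, g):
    # Five-parameter logistic curve mapping wind speed v to power output.
    # a: lower asymptote, d: upper asymptote, c: inflection scale,
    # b: slope, g: asymmetry. The paper fits such parameters with a
    # backtracking search algorithm; values here are user-supplied.
    return d + (a - d) / (1.0 + (v / c) ** b) ** g
```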

  4. RegPredict: an integrated system for regulon inference in prokaryotes by comparative genomics approach

    Energy Technology Data Exchange (ETDEWEB)

    Novichkov, Pavel S.; Rodionov, Dmitry A.; Stavrovskaya, Elena D.; Novichkova, Elena S.; Kazakov, Alexey E.; Gelfand, Mikhail S.; Arkin, Adam P.; Mironov, Andrey A.; Dubchak, Inna

    2010-05-26

    RegPredict web server is designed to provide comparative genomics tools for reconstruction and analysis of microbial regulons using comparative genomics approach. The server allows the user to rapidly generate reference sets of regulons and regulatory motif profiles in a group of prokaryotic genomes. The new concept of a cluster of co-regulated orthologous operons allows the user to distribute the analysis of large regulons and to perform the comparative analysis of multiple clusters independently. Two major workflows currently implemented in RegPredict are: (i) regulon reconstruction for a known regulatory motif and (ii) ab initio inference of a novel regulon using several scenarios for the generation of starting gene sets. RegPredict provides a comprehensive collection of manually curated positional weight matrices of regulatory motifs. It is based on genomic sequences, ortholog and operon predictions from the MicrobesOnline. An interactive web interface of RegPredict integrates and presents diverse genomic and functional information about the candidate regulon members from several web resources. RegPredict is freely accessible at http://regpredict.lbl.gov.

  5. Mathematical model of heat transfer to predict distribution of hardness through the Jominy bar; Modelo matematico de la transferencia de calor para predecir el perfil de durezas en probetas Jominy

    Energy Technology Data Exchange (ETDEWEB)

    Lopez, E.; Hernandez, J. B.; Solorio, G.; Vergara, H. J.; Vazquez, O.; Garnica, F.

    2013-06-01

    The heat transfer coefficient at the bottom surface of the Jominy end-quench specimen was estimated by solving the inverse heat conduction problem. A mathematical model based on the finite-difference method was developed to predict thermal paths and the volume fraction of transformed phases. The mathematical model was codified in the commercial package Microsoft Visual Basic v. 6. The calculated thermal paths and final phase distribution were used to evaluate the hardness distribution along the AISI 4140 Jominy bar. (Author)

  6. A comparative study on prediction methods for China's medium- and long-term coal demand

    International Nuclear Information System (INIS)

    Li, Bing-Bing; Liang, Qiao-Mei; Wang, Jin-Cheng

    2015-01-01

    Given the dominant position of coal in China's energy structure and in order to ensure a safe and stable energy supply, it is essential to perform a scientific and effective prediction of China's medium- and long-term coal demand. Based on the historical data of coal consumption and related factors such as GDP (Gross domestic product), coal price, industrial structure, total population, energy structure, energy efficiency, coal production and urbanization rate from 1987 to 2012, this study compared the prediction effects of five types of models. These models include the VAR (vector autoregressive model), RBF (radial basis function) neural network model, GA-DEM (genetic algorithm demand estimation model), PSO-DEM (particle swarm optimization demand estimation model) and IO (input–output model). By comparing the results of different models with the corresponding actual coal consumption, it is concluded that with a testing period from 2006 to 2012, the PSO-DEM model has a relatively optimal predicted effect on China's total coal demand, where the MAPE (mean absolute percentage error) is close to or below 2%. - Highlights: • The prediction effects of five methods for China's coal demand were compared. • Each model has acceptable prediction results, with MAPE below 5%. • Particle swarm optimization demand estimation model has better forecast efficacy.
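    The MAPE criterion used above to rank the five demand models is straightforward to compute; a minimal sketch on illustrative numbers, not the Chinese coal-demand series:

```python
def mape(actual, forecast):
    # Mean absolute percentage error, in percent; lower is better.
    # Assumes no zero values in `actual`.
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)
```

    Computing this over the 2006–2012 testing window for each model is how the PSO-DEM's sub-2% figure is obtained.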

  7. Overview, comparative assessment and recommendations of forecasting models for short-term water demand prediction

    CSIR Research Space (South Africa)

    Anele, AO

    2017-11-01

    Full Text Available -term water demand (STWD) forecasts. In view of this, an overview of forecasting methods for STWD prediction is presented. Based on that, a comparative assessment of the performance of alternative forecasting models from the different methods is studied. Times...

  8. Comparative predictions of discharge from an artificial catchment (Chicken Creek) using sparse data

    Directory of Open Access Journals (Sweden)

    H. Flühler

    2009-11-01

    Full Text Available Ten conceptually different models were used in this study to predict discharge from the artificial Chicken Creek catchment in North-East Germany. Soil texture and topography data were given to the modellers, but discharge data were withheld. We compare the predictions with the measurements from the 6 ha catchment and discuss the conceptualization and parameterization of the models. The predictions vary over a wide range, e.g. with the predicted actual evapotranspiration ranging from 88 to 579 mm/y and the discharge from 19 to 346 mm/y. The predicted components of the hydrological cycle deviated systematically from the observations, which were not known to the modellers. Discharge was mainly predicted as subsurface discharge with little direct runoff. In reality, surface runoff was a major flow component despite the fairly coarse soil texture. The actual evapotranspiration (AET) and the ratio between actual and potential ET were systematically overestimated by nine of the ten models. None of the model simulations came even close to the observed water balance for the entire 3-year study period. The comparison indicates that the personal judgement of the modellers was a major source of the differences between the model results. The most important parameters to be presumed were the soil parameters and the initial soil-water content, while plant parameterization had, in this particular case of sparse vegetation, only a minor influence on the results.

  9. 20. Prediction of 10-year risk of hard coronary events among Saudi adults based on prevalence of heart disease risk factors

    Directory of Open Access Journals (Sweden)

    Muhammad Adil Soofi

    2015-10-01

    Conclusions: Our study is the first to estimate the 10-year risk of hard coronary events (HCE) among adults in an emerging country; it found that a significant proportion of the younger-aged population is at risk of developing hard coronary events. Public awareness programs to control risk factors are warranted.

  10. A Comparative Study to Predict Student’s Performance Using Educational Data Mining Techniques

    Science.gov (United States)

    Uswatun Khasanah, Annisa; Harwati

    2017-06-01

    Predicting student performance is essential for a university to prevent student failure. The number of student dropouts is one parameter that can be used to measure student performance, and it is an important point evaluated in Indonesian university accreditation. Data Mining has been widely used to predict student performance, and data mining applied in this field is usually called Educational Data Mining. This study conducted Feature Selection to select the attributes with the highest influence on student performance in the Department of Industrial Engineering, Universitas Islam Indonesia. Then, two popular classification algorithms, Bayesian Network and Decision Tree, were implemented and compared to find the best prediction result. The outcome showed that student attendance and GPA in the first semester were at the top rank across all Feature Selection methods, and that Bayesian Network outperformed Decision Tree since it had a higher accuracy rate.
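    Filter-style Feature Selection of the kind applied here often ranks attributes by information gain, the same quantity a Decision Tree uses to choose splits. A stdlib-only sketch on toy data (the paper's exact ranking method is not specified here):

```python
import math
from collections import Counter

def entropy(labels):
    # Shannon entropy of a label list, in bits
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    # Reduction in label entropy from knowing a discrete feature's value;
    # higher gain means the feature (e.g. attendance) is more predictive.
    n = len(labels)
    by_value = {}
    for f, y in zip(feature, labels):
        by_value.setdefault(f, []).append(y)
    remainder = sum(len(ys) / n * entropy(ys) for ys in by_value.values())
    return entropy(labels) - remainder
```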

  11. Comparing predictive models of glioblastoma multiforme built using multi-institutional and local data sources.

    Science.gov (United States)

    Singleton, Kyle W; Hsu, William; Bui, Alex A T

    2012-01-01

    The growing amount of electronic data collected from patient care and clinical trials is motivating the creation of national repositories where multiple institutions share data about their patient cohorts. Such efforts aim to provide sufficient sample sizes for data mining and predictive modeling, ultimately improving treatment recommendations and patient outcome prediction. While these repositories offer the potential to improve our understanding of a disease, potential issues need to be addressed to ensure that multi-site data and resultant predictive models are useful to non-contributing institutions. In this paper we examine the challenges of utilizing National Cancer Institute datasets for modeling glioblastoma multiforme. We created several types of prognostic models and compared their results against models generated using data solely from our institution. While overall model performance between the data sources was similar, different variables were selected during model generation, suggesting that mapping data resources between models is not a straightforward issue.

  12. Metal accumulation in the earthworm Lumbricus rubellus. Model predictions compared to field data

    Science.gov (United States)

    Veltman, K.; Huijbregts, M.A.J.; Vijver, M.G.; Peijnenburg, W.J.G.M.; Hobbelen, P.H.F.; Koolhaas, J.E.; van Gestel, C.A.M.; van Vliet, P.C.J.; Jan, Hendriks A.

    2007-01-01

    The mechanistic bioaccumulation model OMEGA (Optimal Modeling for Ecotoxicological Applications) is used to estimate accumulation of zinc (Zn), copper (Cu), cadmium (Cd) and lead (Pb) in the earthworm Lumbricus rubellus. Our validation against field accumulation data shows that the model accurately predicts internal cadmium concentrations. In addition, our results show that internal metal concentrations in the earthworm are less than linearly (slope < 1) related to the total concentration in soil, while risk assessment procedures often assume the biota-soil accumulation factor (BSAF) to be constant. Although predicted internal concentrations of all metals are generally within a factor of 5 of the field data, incorporation of regulation in the model is necessary to improve predictability of the essential metals such as zinc and copper. © 2006 Elsevier Ltd. All rights reserved.
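    The sublinear (slope < 1) relation reported above implies that the BSAF falls as soil concentration rises, contradicting the constant-BSAF assumption of many risk assessment procedures. A sketch with illustrative power-law parameters, not fitted OMEGA values:

```python
def internal_conc(c_soil, k=2.0, b=0.7):
    # Power-law accumulation: C_internal = k * C_soil**b, with b < 1
    # (sublinear). k and b are illustrative, not fitted to earthworm data.
    return k * c_soil ** b

def bsaf(c_soil, **kw):
    # Biota-soil accumulation factor C_internal / C_soil;
    # constant over soil concentration only when b == 1.
    return internal_conc(c_soil, **kw) / c_soil
```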

  13. Comparative assessment of predictions in ungauged basins – Part 3: Runoff signatures in Austria

    Directory of Open Access Journals (Sweden)

    A. Viglione

    2013-06-01

    Full Text Available This is the third of a three-part paper series through which we assess the performance of runoff predictions in ungauged basins in a comparative way. Whereas the two previous papers by Parajka et al. (2013) and Salinas et al. (2013) assess the regionalisation performance of hydrographs and hydrological extremes on the basis of a comprehensive literature review of thousands of case studies around the world, in this paper we jointly assess prediction performance of a range of runoff signatures for a consistent and rich dataset. Daily runoff time series are predicted for 213 catchments in Austria by a regionalised rainfall–runoff model and by Top-kriging, a geostatistical estimation method that accounts for the river network hierarchy. From the runoff time series, six runoff signatures are extracted: annual runoff, seasonal runoff, flow duration curves, low flows, high flows and runoff hydrographs. The predictive performance is assessed in terms of the bias, error spread and proportion of unexplained spatial variance of statistical measures of these signatures in cross-validation (blind testing) mode. Results of the comparative assessment show that, in Austria, the predictive performance increases with catchment area for both methods and for most signatures; it tends to increase with elevation for the regionalised rainfall–runoff model, while the dependence on climate characteristics is weaker. Annual and seasonal runoff can be predicted more accurately than all other signatures. The spatial variability of high flows in ungauged basins is the most difficult to estimate, followed by the low flows. It also turns out that in this data-rich study in Austria, the geostatistical approach (Top-kriging) generally outperforms the regionalised rainfall–runoff model.

  14. When Theory Meets Data: Comparing Model Predictions Of Hillslope Sediment Size With Field Measurements.

    Science.gov (United States)

    Mahmoudi, M.; Sklar, L. S.; Leclere, S.; Davis, J. D.; Stine, A.

    2017-12-01

    The size distributions of sediment produced on hillslopes and supplied to river channels influence a wide range of fluvial processes, from bedrock river incision to the creation of aquatic habitats. However, the factors that control hillslope sediment size are poorly understood, limiting our ability to predict sediment size and model the evolution of sediment size distributions across landscapes. Recently separate field and theoretical investigations have begun to address this knowledge gap. Here we compare the predictions of several emerging modeling approaches to landscapes where high quality field data are available. Our goals are to explore the sensitivity and applicability of the theoretical models in each field context, and ultimately to provide a foundation for incorporating hillslope sediment size into models of landscape evolution. The field data include published measurements of hillslope sediment size from the Kohala peninsula on the island of Hawaii and tributaries to the Feather River in the northern Sierra Nevada mountains of California, and an unpublished data set from the Inyo Creek catchment of the southern Sierra Nevada. These data are compared to predictions adapted from recently published modeling approaches that include elements of topography, geology, structure, climate and erosion rate. Predictive models for each site are built in ArcGIS using field condition datasets: DEM topography (slope, aspect, curvature), bedrock geology (lithology, mineralogy), structure (fault location, fracture density), climate data (mean annual precipitation and temperature), and estimates of erosion rates. Preliminary analysis suggests that models may be finely tuned to the calibration sites, particularly when field conditions most closely satisfy model assumptions, leading to unrealistic predictions from extrapolation. We suggest a path forward for developing a computationally tractable method for incorporating spatial variation in production of hillslope

  15. Prediction-error variance in Bayesian model updating: a comparative study

    Science.gov (United States)

    Asadollahi, Parisa; Li, Jian; Huang, Yong

    2017-04-01

    In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding, in which the maximum information entropy probability model of the prediction error variances plays an important role; it is a Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. Therefore, it is critical for robustness in the updating of the structural model, especially in the presence of modeling errors. To date, three ways of considering prediction error variances have been seen in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness-of-fit of the measured data, and 3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of different strategies for dealing with the prediction error variances on model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples of the posterior probability density function of the structural model parameters as well as the uncertain prediction variances. The different levels of modeling uncertainty and complexity are modeled through three FE models, including a true model, a model with more complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of the number of measurements on the model updating performance is also examined in the study. The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model

  16. Comparative values of medical school assessments in the prediction of internship performance.

    Science.gov (United States)

    Lee, Ming; Vermillion, Michelle

    2018-02-01

    Multiple undergraduate achievements have been used for graduate admission consideration. Their relative values in the prediction of residency performance are not clear. This study compared the contributions of major undergraduate assessments to the prediction of internship performance. Internship performance ratings of the graduates of a medical school were collected from 2012 to 2015. Hierarchical multiple regression analyses were used to examine the predictive values of undergraduate measures assessing basic and clinical sciences knowledge and clinical performances, after controlling for differences in the Medical College Admission Test (MCAT). Four hundred eighty (75%) graduates' archived data were used in the study. Analyses revealed that clinical competencies, assessed by the USMLE Step 2 CK, NBME medicine exam, and an eight-station objective structured clinical examination (OSCE), were strong predictors of internship performance. Neither the USMLE Step 1 nor the inpatient internal medicine clerkship evaluation predicted internship performance. The undergraduate assessments as a whole showed a significant collective relationship with internship performance (ΔR² = 0.12, p < 0.001). The study supports the use of clinical competency assessments, instead of pre-clinical measures, in graduate admission consideration. It also provides validity evidence for OSCE scores in the prediction of workplace performance.
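    The hierarchical-regression logic (enter the MCAT as a control block, then add the clinical assessments and read off ΔR²) can be sketched with ordinary least squares. The predictor names follow the abstract, but every number below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 480  # same cohort size as the study; the data here are synthetic

# Hypothetical predictors: MCAT (control), Step 2 CK, NBME medicine, OSCE
mcat = rng.normal(0, 1, n)
step2 = 0.4 * mcat + rng.normal(0, 1, n)
nbme = 0.3 * mcat + rng.normal(0, 1, n)
osce = rng.normal(0, 1, n)
intern = 0.2 * mcat + 0.3 * step2 + 0.2 * nbme + 0.25 * osce + rng.normal(0, 1, n)

def r_squared(X, y):
    """R^2 of an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_base = r_squared(np.column_stack([mcat]), intern)                     # step 1: MCAT only
r2_full = r_squared(np.column_stack([mcat, step2, nbme, osce]), intern)  # step 2: add assessments
delta_r2 = r2_full - r2_base
print(f"R2 base={r2_base:.3f} full={r2_full:.3f} deltaR2={delta_r2:.3f}")
```

    ΔR² is the incremental variance in internship ratings explained by the assessments over and above the admission-test control.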

  17. Comparative Risk Predictions of Second Cancers After Carbon-Ion Therapy Versus Proton Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Eley, John G., E-mail: jeley@som.umaryland.edu [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); University of Texas Graduate School of Biomedical Sciences, Houston, Texas (United States); Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, Maryland (United States); Friedrich, Thomas [GSI Helmholtzzentrum für Schwerionenforschung GmbH, Darmstadt (Germany); Homann, Kenneth L.; Howell, Rebecca M. [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); University of Texas Graduate School of Biomedical Sciences, Houston, Texas (United States); Scholz, Michael; Durante, Marco [GSI Helmholtzzentrum für Schwerionenforschung GmbH, Darmstadt (Germany); Newhauser, Wayne D. [Department of Physics and Astronomy, Louisiana State University and Agricultural and Mechanical College, Baton Rouge, Louisiana (United States); Mary Bird Perkins Cancer Center, Baton Rouge, Louisiana (United States)

    2016-05-01

    Purpose: This work proposes a theoretical framework that enables comparative risk predictions for second cancer incidence after particle beam therapy for different ion species for individual patients, accounting for differences in relative biological effectiveness (RBE) for the competing processes of tumor initiation and cell inactivation. Our working hypothesis was that use of carbon-ion therapy instead of proton therapy would show a difference in the predicted risk of second cancer incidence in the breast for a sample of Hodgkin lymphoma (HL) patients. Methods and Materials: We generated biologic treatment plans and calculated relative predicted risks of second cancer in the breast by using two proposed methods: a full model derived from the linear quadratic model and a simpler linear-no-threshold model. Results: For our reference calculation, we found the ratio of the predicted risk of breast cancer incidence for the carbon-ion plans to that for the proton plan to be 0.75 ± 0.07, which was not significantly smaller than 1 (P=.180). Conclusions: Our findings suggest that second cancer risks are, on average, comparable between proton therapy and carbon-ion therapy.

  18. Comparative study of biodegradability prediction of chemicals using decision trees, functional trees, and logistic regression.

    Science.gov (United States)

    Chen, Guangchao; Li, Xuehua; Chen, Jingwen; Zhang, Ya-Nan; Peijnenburg, Willie J G M

    2014-12-01

    Biodegradation is the principal environmental dissipation process of chemicals. As such, it is a dominant factor determining the persistence and fate of organic chemicals in the environment, and is therefore of critical importance to chemical management and regulation. In the present study, the authors developed in silico methods assessing biodegradability based on a large heterogeneous set of 825 organic compounds, using the techniques of the C4.5 decision tree, the functional inner regression tree, and logistic regression. External validation was subsequently carried out by 2 independent test sets of 777 and 27 chemicals. As a result, the functional inner regression tree exhibited the best predictability, with predictive accuracies of 81.5% and 81.0% on the training set (825 chemicals) and test set I (777 chemicals), respectively. Performance of the developed models on the 2 test sets was subsequently compared with that of the Estimation Program Interface (EPI) Suite Biowin 5 and Biowin 6 models; this comparison likewise showed better predictability for the functional inner regression tree model. The model built in the present study exhibits a reasonable predictability compared with existing models while possessing a transparent algorithm. Interpretation of the mechanisms of biodegradation was also carried out based on the models developed. © 2014 SETAC.
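    Of the three techniques compared, logistic regression is the simplest to reproduce. A minimal sketch on synthetic descriptor data (stand-ins for molecular descriptors, not the 825-compound set) using plain batch gradient descent:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins for molecular descriptors; label 1 = readily biodegradable
n = 600
X = rng.normal(0, 1, (n, 3))
logit = 1.5 * X[:, 0] - 1.0 * X[:, 1] + 0.5
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(float)

# Logistic regression fitted by batch gradient descent on the log-loss
Xb = np.column_stack([np.ones(n), X])
w = np.zeros(Xb.shape[1])
for _ in range(2000):
    p = 1 / (1 + np.exp(-Xb @ w))
    w -= 0.1 * Xb.T @ (p - y) / n

acc = np.mean((1 / (1 + np.exp(-Xb @ w)) > 0.5) == (y == 1))
print(f"training accuracy = {acc:.3f}")
```

    In practice the fitted coefficients are also inspected for mechanistic interpretation, which is the transparency advantage the abstract highlights.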

  19. Comparative study of the interface composition of TiN and TiCN hard coatings on high speed steel substrates obtained by arc discharge

    Energy Technology Data Exchange (ETDEWEB)

    Roman, E. (Lab. de Fisica de Superficies, Inst. de Ciencia de Materiales, CSIC, Madrid (Spain)); Segovia, J.L. de (Lab. de Fisica de Superficies, Inst. de Ciencia de Materiales, CSIC, Madrid (Spain)); Alberdi, A. (TEKNIKER, Asociacion de Investigacion Tecnologica, Eibar (Spain)); Calvo, J. (TEKNIKER, Asociacion de Investigacion Tecnologica, Eibar (Spain)); Laucirica, J. (TEKNIKER, Asociacion de Investigacion Tecnologica, Eibar (Spain))

    1993-05-15

    In this paper the composition of the interface of TiN and TiCN hard coatings deposited onto high speed steel substrates obtained by the arc discharge technique is studied using Auger electron spectroscopy at two different substrate temperatures, 520 K and 720 K. The low temperature (520 K) TiN coating developed an oxygen phase at the interface, producing a weak adherence of 40 N, while the high temperature coatings (720 K) had a less intense oxygen phase, giving a greater adherence to the substrate of 60 N. TiCN coatings at 520 K are characterized by a low oxygen intensity at the interface. However, their adherence of 50 N is lower than the value of 60 N for the high temperature TiN coatings and is independent of the substrate temperature. (orig.)

  20. Key role of chemical hardness to compare 2,2-diphenyl-1-picrylhydrazyl radical scavenging power of flavone and flavonol O-glycoside and C-glycoside derivatives.

    Science.gov (United States)

    Waki, Tsukasa; Nakanishi, Ikuo; Matsumoto, Ken-ichiro; Kitajima, Junichi; Chikuma, Toshiyuki; Kobayashi, Shigeki

    2012-01-01

    The antioxidant activities of flavonoids and their glycosides were measured with the 2,2-diphenyl-1-picrylhydrazyl radical (DPPH radical, DPPH(·)) scavenging method. The results show that free hydroxyl flavonoids are not necessarily more active than their O-glycosides. Quercetin and kaempferol showed higher activity than apigenin. The C- and O-glycosides of flavonoids generally showed higher radical scavenging activity than aglycones; however, kaempferol C3-O-glycoside (astragalin) showed higher activity than kaempferol. In the radical scavenging activity of flavonoids, it was expected that OH substitutions at C3 and C5 and catechol substitution at C2 of B ring and intramolecular hydrogen bonding between OH at C5 and ketone at C3 would increase the activity; however, the reasons have yet to be clarified. We here show that the radical scavenging activities of flavonoids are controlled by their absolute hardness (η) and absolute electronegativity (χ) as an electronic state. Kaempferol and quercetin provide high radical scavenging activity since (i) OH substitutions at C3 and C5 strikingly decrease η of flavones, (ii) OH substitutions at C3 and C7 decrease χ and η of flavones, and (iii) phenol or o-catechol substitution at C2 of B ring decreases χ of flavones. The coordinate r(χ, η) representing the electronic state must be small to increase the radical scavenging activity of flavonoids. The results show that chemically soft kaempferol and quercetin have higher DPPH radical scavenging activity than chemically hard genistein and daidzein.
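    The absolute hardness η and absolute electronegativity χ invoked here follow the Parr-Pearson definitions; computed from frontier-orbital energies they are η = (E_LUMO − E_HOMO)/2 and χ = −(E_HOMO + E_LUMO)/2. A sketch with placeholder orbital energies (not values for any of the flavonoids studied):

```python
# Absolute hardness and electronegativity (Parr-Pearson definitions) from
# frontier-orbital energies:
#   eta = (E_LUMO - E_HOMO) / 2,  chi = -(E_HOMO + E_LUMO) / 2
# The orbital energies below are illustrative placeholders, not measured values.

def hardness_electronegativity(e_homo_eV, e_lumo_eV):
    eta = (e_lumo_eV - e_homo_eV) / 2.0
    chi = -(e_homo_eV + e_lumo_eV) / 2.0
    return eta, chi

eta, chi = hardness_electronegativity(-5.8, -1.6)
print(f"eta = {eta:.2f} eV, chi = {chi:.2f} eV")  # eta = 2.10 eV, chi = 3.70 eV
```

    A smaller η (a softer molecule) and a smaller χ move the coordinate r(χ, η) toward the origin, which in the abstract's analysis corresponds to stronger radical scavenging.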

  1. Comparative Analysis of Soft Computing Models in Prediction of Bending Rigidity of Cotton Woven Fabrics

    Science.gov (United States)

    Guruprasad, R.; Behera, B. K.

    2015-10-01

    Quantitative prediction of fabric mechanical properties is an essential requirement for design engineering of textile and apparel products. In this work, the possibility of prediction of bending rigidity of cotton woven fabrics has been explored with the application of Artificial Neural Network (ANN) and two hybrid methodologies, namely Neuro-genetic modeling and Adaptive Neuro-Fuzzy Inference System (ANFIS) modeling. For this purpose, a set of cotton woven grey fabrics was desized, scoured and relaxed. The fabrics were then conditioned and tested for bending properties. With the database thus created, a neural network model was first developed using back propagation as the learning algorithm. The second model was developed by applying a hybrid learning strategy, in which a genetic algorithm was first used to optimize the number of neurons and connection weights of the neural network. The network structure optimized by the genetic algorithm was then further trained using the back propagation algorithm. In the third model, an ANFIS modeling approach was attempted to map the input-output data. The prediction performances of the models were compared and a sensitivity analysis was reported. The results show that the predictions by the neuro-genetic and ANFIS models were better than those of the back propagation neural network model.
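    A back-propagation network of the kind used as the baseline model can be sketched in a few lines. The toy data below stand in for the fabric-parameter/bending-rigidity pairs, and the topology is deliberately small:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in: predict a scalar "rigidity" from 4 fabric parameters
X = rng.uniform(0, 1, (200, 4))
y = (np.sin(X[:, 0]) + X[:, 1] * X[:, 2] + 0.5 * X[:, 3]).reshape(-1, 1)

# One hidden layer, tanh activation, plain batch gradient descent
W1 = rng.normal(0, 0.5, (4, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05
losses = []
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)              # forward pass
    pred = h @ W2 + b2
    err = pred - y
    losses.append(float(np.mean(err**2)))
    # back-propagation of the squared-error gradient
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

print(f"MSE: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

    The neuro-genetic variant in the abstract would wrap a genetic search around the hidden-layer size and initial weights before this gradient-descent phase.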

  2. Prediction of paddy drying kinetics: A comparative study between mathematical and artificial neural network modelling

    Directory of Open Access Journals (Sweden)

    Beigi Mohsen

    2017-01-01

    Full Text Available The present study aimed to investigate deep-bed drying of rough rice kernels at various thin layers at different drying air temperatures and flow rates. A comparative study was performed between mathematical thin layer models and artificial neural networks to estimate the drying curves of rough rice. The suitability of nine mathematical models in simulating the drying kinetics was examined and the Midilli model was determined as the best approach for describing drying curves. Different feed-forward back-propagation artificial neural networks were examined to predict the moisture content variations of the grains. The ANN with 4-18-18-1 topology, transfer function of hyperbolic tangent sigmoid and a Levenberg-Marquardt back propagation training algorithm provided the best results with the maximum correlation coefficient and the minimum mean square error values. Furthermore, it was revealed that ANN modeling had better performance in prediction of drying curves with lower root mean square error values.
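    The Midilli model selected as the best thin-layer description has the form MR = a·exp(−k·tⁿ) + b·t. A sketch of fitting it with nonlinear least squares on synthetic drying data (the true parameter values are chosen arbitrarily for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

def midilli(t, a, k, n, b):
    """Midilli thin-layer drying model: MR = a*exp(-k*t**n) + b*t."""
    return a * np.exp(-k * t**n) + b * t

# Synthetic moisture-ratio data (true parameters chosen for illustration)
rng = np.random.default_rng(4)
t = np.linspace(0.01, 5.0, 40)          # drying time, h
true = (1.0, 0.8, 1.2, -0.01)
mr = midilli(t, *true) + rng.normal(0, 0.005, t.size)

popt, _ = curve_fit(midilli, t, mr, p0=(1.0, 0.5, 1.0, 0.0), maxfev=10000)
print("fitted a, k, n, b:", np.round(popt, 3))
```

    Model selection among the nine candidates would then compare goodness-of-fit statistics (R², RMSE, chi-square) of each fitted form.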

  3. Hepatocellular carcinoma: IVIM diffusion quantification for prediction of tumor necrosis compared to enhancement ratios

    International Nuclear Information System (INIS)

    Kakite, Suguru; Dyvorne, Hadrien A.; Lee, Karen M.; Jajamovich, Guido H.; Knight-Greenfield, Ashley; Taouli, Bachir

    2015-01-01

    To correlate intravoxel incoherent motion (IVIM) diffusion parameters of liver parenchyma and hepatocellular carcinoma (HCC) with degree of liver/tumor enhancement and necrosis; and to assess the diagnostic performance of diffusion parameters vs. enhancement ratios (ER) for prediction of complete tumor necrosis. In this IRB approved HIPAA compliant study, we included 46 patients with HCC who underwent IVIM diffusion-weighted (DW) MRI in addition to routine sequences at 3.0 T. True diffusion coefficient (D), pseudo-diffusion coefficient (D*), perfusion fraction (PF) and apparent diffusion coefficient (ADC) were quantified in tumors and liver parenchyma. Tumor ER were calculated using contrast-enhanced imaging, and degree of tumor necrosis was assessed using post-contrast image subtraction. IVIM parameters and ER were compared between HCC and background liver and between necrotic and viable tumor components. ROC analysis for prediction of complete tumor necrosis was performed. 79 HCCs were assessed (mean size 2.5 cm). D, PF and ADC were significantly higher in HCC vs. liver (p < 0.0001). There were weak significant negative/positive correlations between D/PF and ER, and significant correlations between D/PF/ADC and tumor necrosis (for D, r = 0.452, p < 0.001). Among diffusion parameters, D had the highest area under the curve (AUC 0.811) for predicting complete tumor necrosis. ER outperformed diffusion parameters for prediction of complete tumor necrosis (AUC > 0.95, p < 0.002). D has a reasonable diagnostic performance for predicting complete tumor necrosis, however lower than that of contrast-enhanced imaging.
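    The IVIM parameters (PF, D, D*) come from fitting the bi-exponential signal model S(b)/S0 = PF·exp(−b·D*) + (1 − PF)·exp(−b·D) over the acquired b-values. A sketch on synthetic signals; the b-values and parameter values are illustrative, not the study's protocol:

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, f, D, Dstar):
    """Bi-exponential IVIM signal model (normalized to S0 = 1)."""
    return f * np.exp(-b * Dstar) + (1 - f) * np.exp(-b * D)

# b-values (s/mm^2) and a synthetic signal; parameter values are illustrative
b = np.array([0, 15, 30, 45, 60, 75, 90, 105, 120, 250, 400, 600, 800.0])
true_f, true_D, true_Dstar = 0.25, 1.0e-3, 50e-3
rng = np.random.default_rng(5)
sig = ivim(b, true_f, true_D, true_Dstar) + rng.normal(0, 0.003, b.size)

popt, _ = curve_fit(ivim, b, sig, p0=(0.2, 1.5e-3, 20e-3),
                    bounds=([0, 1e-4, 5e-3], [0.6, 3e-3, 0.2]), maxfev=10000)
f_fit, D_fit, Dstar_fit = popt
print(f"PF={f_fit:.2f}, D={D_fit*1e3:.2f}e-3, D*={Dstar_fit*1e3:.1f}e-3 mm^2/s")
```

    The bounds keep the perfusion (D*) and tissue (D) compartments from swapping roles, a common failure mode of unconstrained IVIM fits.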

  4. Studying Musical and Linguistic Prediction in Comparable Ways: The Melodic Cloze Probability Method.

    Science.gov (United States)

    Fogel, Allison R; Rosenberg, Jason C; Lehman, Frank M; Kuperberg, Gina R; Patel, Aniruddh D

    2015-01-01

    Prediction or expectancy is thought to play an important role in both music and language processing. However, prediction is currently studied independently in the two domains, limiting research on relations between predictive mechanisms in music and language. One limitation is a difference in how expectancy is quantified. In language, expectancy is typically measured using the cloze probability task, in which listeners are asked to complete a sentence fragment with the first word that comes to mind. In contrast, previous production-based studies of melodic expectancy have asked participants to sing continuations following only one to two notes. We have developed a melodic cloze probability task in which listeners are presented with the beginning of a novel tonal melody (5-9 notes) and are asked to sing the note they expect to come next. Half of the melodies had an underlying harmonic structure designed to constrain expectations for the next note, based on an implied authentic cadence (AC) within the melody. Each such 'authentic cadence' melody was matched to a 'non-cadential' (NC) melody matched in terms of length, rhythm and melodic contour, but differing in implied harmonic structure. Participants showed much greater consistency in the notes sung following AC vs. NC melodies on average. However, significant variation in degree of consistency was observed within both AC and NC melodies. Analysis of individual melodies suggests that pitch prediction in tonal melodies depends on the interplay of local factors just prior to the target note (e.g., local pitch interval patterns) and larger-scale structural relationships (e.g., melodic patterns and implied harmonic structure). We illustrate how the melodic cloze method can be used to test a computational model of melodic expectation. 
Future uses for the method include exploring the interplay of different factors shaping melodic expectation, and designing experiments that compare the cognitive mechanisms of prediction in

  5. Prediction of the hardness of Zn-Al-Cu alloys according to their composition by weight; Prediccion de la dureza de la aleacion Zn-Al-Cu de acuerdo a su composicion en peso

    Energy Technology Data Exchange (ETDEWEB)

    Villegas-Cardenas, Jose David; Camarillo-Villegas, Alejandra; Juanico-Loran, Antonio [Universidad Politecnica del Valle de Mexico, Tultitlan, Estado de Mexico (Mexico)]. E-mails: jdvc76@yahoo.com.mx; v_c_a_77@hotmail.com; ajuanico@yahoo.com.mx; Espinosa-Rojas, Raul [Universidad Autonoma Metropolitana, Unidad Azcapotzalco (Mexico)]. E-mail: rer21@hotmail.com; Camacho-Olguin, Carlos [Universidad Politecnica del Valle de Mexico, Tultitlan, Estado de Mexico (Mexico)]. E-mail: ccamacho@upvm.edu.mx

    2013-07-15

    Ten Zn-Al-Cu alloys were developed in two groups, corresponding to two zones of the isopleth diagrams (Villas et al., 1995), with the Cu and Al content increased systematically. Macrohardness measurements were then performed, and from this analysis two equations were established that predict the hardness of an alloy from the weight percentage of each element with an error below 5%. With these equations it is possible to design alloys of a specified hardness and to replace an Al-base alloy with a Zn-base alloy (or vice versa) of the same hardness, eliminating the problem of the volumetric change due to the presence of the ε phase.
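    The kind of composition-to-hardness equation described can be obtained by linear least squares. A sketch on invented (wt% Al, wt% Cu, hardness) data, not the alloys measured in the study:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical (wt% Al, wt% Cu) -> hardness pairs; the numbers are
# illustrative, not the measurements from the study.
comp = np.array([[10, 1], [15, 4], [20, 2], [25, 6], [30, 3],
                 [35, 1], [40, 5], [45, 2], [50, 6], [55, 3]], float)
hb = 60 + 1.2 * comp[:, 0] + 8.0 * comp[:, 1] + rng.normal(0, 1.0, 10)

# Fit H = c0 + c1*wAl + c2*wCu by least squares
A = np.column_stack([np.ones(len(hb)), comp])
c, *_ = np.linalg.lstsq(A, hb, rcond=None)
pred = A @ c
max_rel_err = np.max(np.abs(pred - hb) / hb)
print("coefficients:", np.round(c, 2), f"max relative error = {max_rel_err:.1%}")
```

    Inverting such an equation is what allows selecting a composition that matches a target hardness, e.g. when substituting a Zn-base alloy for an Al-base one.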

  6. Comparing frailty measures in their ability to predict adverse outcome among older residents of assisted living

    Directory of Open Access Journals (Sweden)

    Hogan David B

    2012-09-01

    Full Text Available Abstract Background Few studies have directly compared the competing approaches to identifying frailty in more vulnerable older populations. We examined the ability of two versions of a frailty index (43 vs. 83 items), the Cardiovascular Health Study (CHS) frailty criteria, and the CHESS scale to accurately predict the occurrence of three outcomes among Assisted Living (AL) residents followed over one year. Methods The three frailty measures and the CHESS scale were derived from assessment items completed among 1,066 AL residents (aged 65+) participating in the Alberta Continuing Care Epidemiological Studies (ACCES). Adjusted risks of one-year mortality, hospitalization and long-term care placement were estimated for those categorized as frail or pre-frail compared with non-frail (or at high/intermediate vs. low risk on CHESS). The area under the ROC curve (AUC) was calculated for select models to assess the predictive accuracy of the different frailty measures and CHESS scale in relation to the three outcomes examined. Results Frail subjects defined by the three approaches and those at high risk for decline on CHESS showed a statistically significant increased risk for death and long-term care placement compared with those categorized as either not frail or at low risk for decline. The risk estimates for hospitalization associated with the frailty measures and CHESS were generally weaker, with one of the frailty indices (43 items) showing no significant association. For death and long-term care placement, the addition of frailty (however derived) or CHESS significantly improved on the AUC obtained with a model including only age, sex and co-morbidity, though the magnitude of improvement was sometimes small.
The different frailty/risk models did not differ significantly from each other in predicting mortality or hospitalization; however, one of the frailty indices (83 items showed significantly better performance over the other measures in predicting long
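    The AUC used to compare the frailty measures can be computed directly from scores and outcomes via the rank-sum (Mann-Whitney) identity. A sketch on synthetic frailty-index scores, not the ACCES cohort data:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic frailty-index scores (0-1) for survivors vs. deceased residents;
# purely illustrative group sizes and distributions
scores_alive = rng.beta(2, 6, 300)     # lower frailty on average
scores_dead = rng.beta(4, 4, 60)       # higher frailty on average
scores = np.concatenate([scores_alive, scores_dead])
died = np.concatenate([np.zeros(300), np.ones(60)])

def auc(score, label):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    order = np.argsort(score)
    ranks = np.empty(len(score)); ranks[order] = np.arange(1, len(score) + 1)
    n1 = label.sum(); n0 = len(label) - n1
    return (ranks[label == 1].sum() - n1 * (n1 + 1) / 2) / (n0 * n1)

print(f"AUC = {auc(scores, died):.3f}")
```

    Comparing such AUCs between a covariates-only model and one augmented with a frailty measure is the incremental-accuracy test the abstract describes.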

  7. Comparative Genomics and Disorder Prediction Identify Biologically Relevant SH3 Protein Interactions.

    Directory of Open Access Journals (Sweden)

    Pedro Beltrao

    2005-08-01

    Full Text Available Protein interaction networks are an important part of the post-genomic effort to integrate a part-list view of the cell into system-level understanding. Using a set of 11 yeast genomes we show that combining comparative genomics and secondary structure information greatly increases consensus-based prediction of SH3 targets. Benchmarking of our method against positive and negative standards gave 83% accuracy with 26% coverage. The concept of an optimal divergence time for effective comparative genomics studies was analyzed, demonstrating that genomes of species that diverged very recently from Saccharomyces cerevisiae (S. mikatae, S. bayanus, and S. paradoxus), or a long time ago (Neurospora crassa and Schizosaccharomyces pombe), contain less information for accurate prediction of SH3 targets than species within the optimal divergence time proposed. We also show here that intrinsically disordered SH3 domain targets are more probable sites of interaction than equivalent sites within ordered regions. Our findings highlight several novel S. cerevisiae SH3 protein interactions, the value of selection of optimal divergence times in comparative genomics studies, and the importance of intrinsic disorder for protein interactions. Based on our results we propose novel roles for the S. cerevisiae proteins Abp1p in endocytosis and Hse1p in endosome protein sorting.

  9. Comparing Structural Identification Methodologies for Fatigue Life Prediction of a Highway Bridge

    Directory of Open Access Journals (Sweden)

    Sai G. S. Pai

    2018-01-01

    Full Text Available Accurate measurement-data interpretation leads to increased understanding of structural behavior and enhanced asset-management decision making. In this paper, four data-interpretation methodologies are compared: residual minimization, traditional Bayesian model updating, modified Bayesian model updating (with an L∞-norm-based Gaussian likelihood function), and error-domain model falsification (EDMF), a method that rejects models that have unlikely differences between predictions and measurements. In the modified Bayesian model updating methodology, a correction is used in the likelihood function to account for the effect of a finite number of measurements on posterior probability density functions. The application of these data-interpretation methodologies for condition assessment and fatigue life prediction is illustrated on a highway steel–concrete composite bridge having four spans with a total length of 219 m. A detailed 3D finite-element plate and beam model of the bridge and weigh-in-motion data are used to obtain the time–stress response at a fatigue-critical location along the bridge span. The time–stress response, presented as a histogram, is compared to measured strain responses either to update prior knowledge of model parameters using residual minimization and Bayesian methodologies or to obtain candidate model instances using the EDMF methodology. It is concluded that the EDMF and modified Bayesian model updating methodologies provide robust prediction of fatigue life compared with residual minimization and traditional Bayesian model updating in the presence of correlated non-Gaussian uncertainty. EDMF has additional advantages due to ease of understanding and applicability for practicing engineers, thus enabling incremental asset-management decision making over long service lives. Finally, parallel implementations of EDMF using grid sampling have lower computation times than implementations using adaptive sampling.
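    The core EDMF step, rejecting every model instance whose prediction-measurement residual falls outside combined uncertainty bounds, can be sketched on a one-parameter toy model (all numbers below are invented; a real application derives the thresholds from quantified model and measurement uncertainties):

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy model: a "deflection" prediction 10/k for stiffness-like parameter k
true_stiffness = 5.0
measurement = 10.0 / true_stiffness + rng.normal(0.0, 0.03)

candidates = np.linspace(2.0, 8.0, 121)        # grid of model instances
predictions = 10.0 / candidates
residuals = predictions - measurement

# Falsification threshold standing in for combined model + measurement uncertainty
threshold = 0.15
candidate_set = candidates[np.abs(residuals) <= threshold]
print(f"{len(candidate_set)} of {len(candidates)} instances remain; "
      f"range [{candidate_set.min():.2f}, {candidate_set.max():.2f}]")
```

    Predictions (here, of fatigue life) are then made with the whole surviving candidate set rather than a single best-fit model, which is what gives the method its robustness to correlated uncertainty.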

  10. A computational approach to compare regression modelling strategies in prediction research.

    Science.gov (United States)

    Pajouheshnia, Romin; Pestman, Wiebe R; Teerenstra, Steven; Groenwold, Rolf H H

    2016-08-25

    It is often unclear which approach to fit, assess and adjust a model will yield the most accurate prediction model. We present an extension of an approach for comparing modelling strategies in linear regression to the setting of logistic regression and demonstrate its application in clinical prediction research. A framework for comparing logistic regression modelling strategies by their likelihoods was formulated using a wrapper approach. Five different strategies for modelling, including simple shrinkage methods, were compared in four empirical data sets to illustrate the concept of a priori strategy comparison. Simulations were performed in both randomly generated data and empirical data to investigate the influence of data characteristics on strategy performance. We applied the comparison framework in a case study setting. Optimal strategies were selected based on the results of a priori comparisons in a clinical data set and the performance of models built according to each strategy was assessed using the Brier score and calibration plots. The performance of modelling strategies was highly dependent on the characteristics of the development data in both linear and logistic regression settings. A priori comparisons in four empirical data sets found that no strategy consistently outperformed the others. The percentage of times that a model adjustment strategy outperformed a logistic model ranged from 3.9% to 94.9%, depending on the strategy and data set. However, in our case study setting the a priori selection of optimal methods did not result in detectable improvement in model performance when assessed in an external data set. The performance of prediction modelling strategies is a data-dependent process and can be highly variable between data sets within the same clinical domain. A priori strategy comparison can be used to determine an optimal logistic regression modelling strategy for a given data set before selecting a final modelling approach.
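    The Brier score used to assess the final models is simply the mean squared difference between predicted probabilities and observed binary outcomes (lower is better). A sketch comparing two hypothetical strategies on invented data:

```python
import numpy as np

def brier_score(p, y):
    """Mean squared difference between predicted probability and outcome."""
    return float(np.mean((np.asarray(p) - np.asarray(y)) ** 2))

# Illustrative predictions from two hypothetical modelling strategies
y = np.array([1, 0, 1, 1, 0, 0, 1, 0])
p_a = np.array([0.9, 0.2, 0.8, 0.7, 0.1, 0.3, 0.6, 0.2])   # sharper, well calibrated
p_b = np.full(8, 0.5)                                       # uninformative baseline

print(f"strategy A: {brier_score(p_a, y):.3f}, strategy B: {brier_score(p_b, y):.3f}")
```

    The uninformative strategy scores 0.25 regardless of the outcomes, which is the usual reference point when reading Brier scores.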

  11. Comparative Study of Different Methods for the Prediction of Drug-Polymer Solubility

    DEFF Research Database (Denmark)

    Knopp, Matthias Manne; Tajber, Lidia; Tian, Yiwei

    2015-01-01

    monomer weight ratios. The drug–polymer solubility at 25 °C was predicted using the Flory–Huggins model, from data obtained at elevated temperature using thermal analysis methods based on the recrystallization of a supersaturated amorphous solid dispersion and two variations of the melting point......, which suggests that this method can be used as an initial screening tool if a liquid analogue is available. The learnings of this important comparative study provided general guidance for the selection of the most suitable method(s) for the screening of drug–polymer solubility....

  12. Non-invasively predicting differentiation of pancreatic cancer through comparative serum metabonomic profiling.

    Science.gov (United States)

    Wen, Shi; Zhan, Bohan; Feng, Jianghua; Hu, Weize; Lin, Xianchao; Bai, Jianxi; Huang, Heguang

    2017-11-02

    The differentiation of pancreatic ductal adenocarcinoma (PDAC) could be associated with prognosis and may influence the choices of clinical management. No applicable methods could reliably predict the tumor differentiation preoperatively. Thus, the aim of this study was to compare the metabonomic profiling of pancreatic ductal adenocarcinoma with different differentiations and assess the feasibility of predicting tumor differentiations through a metabonomic strategy based on nuclear magnetic resonance spectroscopy. By implanting pancreatic cancer cell strains Panc-1, Bxpc-3 and SW1990 in nude mice in situ, we successfully established orthotopic xenograft models of PDAC with different differentiations. The metabonomic profiling of serum from the different PDAC was achieved and analyzed by using ¹H nuclear magnetic resonance (NMR) spectroscopy combined with multivariate statistical analysis. Then, the differential metabolites acquired were used for enrichment analysis of metabolic pathways to get a deep insight. An obvious metabonomic difference was demonstrated between all groups and the pattern recognition models were established successfully. The higher concentrations of amino acids, glycolytic and glutaminolytic participators in SW1990 and choline-containing metabolites in Panc-1 relative to other PDAC cells were demonstrated, which may serve as potential indicators for tumor differentiation. The metabolic pathways and differential metabolites identified in the current study may be associated with specific pathways such as serine-glycine-one-carbon and glutaminolytic pathways, which can regulate tumorous proliferation and epigenetic regulation. The NMR-based metabonomic strategy may serve as a non-invasive detection method for predicting tumor differentiation preoperatively.
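    Multivariate pattern recognition of NMR profiles typically starts from principal-component analysis of the binned spectra. A sketch of SVD-based PCA on synthetic group-shifted "spectra" (invented data, not the study's serum profiles):

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic "spectra": 3 groups x 10 samples x 50 bins, each group shifted
# along a common direction, mimicking group-separated metabolic profiles
base = rng.normal(0, 0.2, (30, 50))
direction = rng.normal(0, 1, 50)
labels = np.repeat([0, 1, 2], 10)
X = base + np.outer(labels - 1, direction)

# PCA via SVD of the mean-centred matrix
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T                  # first two principal-component scores
explained = s**2 / np.sum(s**2)
print(f"PC1 explains {explained[0]:.0%} of variance")
```

    In metabonomics practice the unsupervised PCA scores plot is usually followed by a supervised model (e.g. PLS-DA) and by inspection of the loadings to identify the discriminating metabolites.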

  13. Living Donor Liver Transplantation for Acute Liver Failure : Comparing Guidelines on the Prediction of Liver Transplantation.

    Science.gov (United States)

    Yoshida, Kazuhiro; Umeda, Yuzo; Takaki, Akinobu; Nagasaka, Takeshi; Yoshida, Ryuichi; Nobuoka, Daisuke; Kuise, Takashi; Takagi, Kosei; Yasunaka, Tetsuya; Okada, Hiroyuki; Yagi, Takahito; Fujiwara, Toshiyoshi

    2017-10-01

    Determining the indications for and timing of liver transplantation (LT) for acute liver failure (ALF) is essential. The King's College Hospital (KCH) guidelines and Japanese guidelines are used to predict the need for LT and the outcomes in ALF. These guidelines' accuracy when applied to ALF in different regional and etiological backgrounds may differ. Here we compared the accuracy of new (2010) Japanese guidelines that use a simple scoring system with the 1996 Japanese guidelines and the KCH criteria for living donor liver transplantation (LDLT). We retrospectively analyzed 24 adult ALF patients (18 acute type, 6 sub-acute type) who underwent LDLT in 1998-2009 at our institution. We assessed the accuracies of the 3 guidelines' criteria for ALF. The overall 1-year survival rate was 87.5%. The new and previous Japanese guidelines were superior to the KCH criteria for accurately predicting LT for acute-type ALF (72% vs. 17%). The new Japanese guidelines could identify 13 acute-type ALF patients for LT, based on the timing of encephalopathy onset. Using the previous Japanese guidelines, although the same 13 acute-type ALF patients (72%) had indications for LT, only 4 patients were indicated at the 1st step, and it took an additional 5 days to decide the indication at the 2nd step in the other 9 cases. Our findings showed that the new Japanese guidelines can predict the indications for LT and provide a reliable alternative to the previous Japanese and KCH guidelines.

  14. Doses from aquatic pathways in CSA-N288.1: deterministic and stochastic predictions compared

    Energy Technology Data Exchange (ETDEWEB)

    Chouhan, S.L.; Davis, P

    2002-04-01

    The conservatism and uncertainty in the Canadian Standards Association (CSA) model for calculating derived release limits (DRLs) for aquatic emissions of radionuclides from nuclear facilities was investigated. The model was run deterministically using the recommended default values for its parameters, and its predictions were compared with the distributed doses obtained by running the model stochastically. Probability density functions (PDFs) for the model parameters for the stochastic runs were constructed using data reported in the literature and results from experimental work done by AECL. The default values recommended for the CSA model for some parameters were found to be lower than the central values of the PDFs in about half of the cases. Doses (ingestion, groundshine and immersion) calculated as the median of 400 stochastic runs were higher than the deterministic doses predicted using the CSA default values of the parameters for more than half (85 out of the 163) of the cases. Thus, the CSA model is not conservative for calculating DRLs for aquatic radionuclide emissions, as it was intended to be. The output of the stochastic runs was used to determine the uncertainty in the CSA model predictions. The uncertainty in the total dose was high, with the 95% confidence interval exceeding an order of magnitude for all radionuclides. A sensitivity study revealed that total ingestion doses to adults predicted by the CSA model are sensitive primarily to water intake rates, bioaccumulation factors for fish and marine biota, dietary intakes of fish and marine biota, the fraction of consumed food arising from contaminated sources, the irrigation rate, occupancy factors and the sediment solid/liquid distribution coefficient. To improve DRL models, further research into aquatic exposure pathways should concentrate on reducing the uncertainty in these parameters. The PDFs given here can be used by other modellers to test and improve their models and to ensure that DRLs
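    The deterministic-versus-stochastic comparison described above can be sketched with a toy dose model. All parameter values, distributions and the model form below are illustrative assumptions, not the CSA model's actual parameters:

```python
import math
import random
import statistics

def ingestion_dose(intake_rate, bioaccumulation, concentration):
    # Toy multiplicative dose model; real DRL models contain many more terms.
    return intake_rate * bioaccumulation * concentration

# Deterministic prediction using fixed "default" parameter values (illustrative).
deterministic = ingestion_dose(2.0, 50.0, 0.01)

# Stochastic prediction: sample each parameter from a lognormal PDF whose
# median equals the default, then take the median of 400 runs, mimicking
# the procedure described in the abstract.
random.seed(1)
runs = [
    ingestion_dose(
        random.lognormvariate(math.log(2.0), 0.4),
        random.lognormvariate(math.log(50.0), 0.6),
        random.lognormvariate(math.log(0.01), 0.3),
    )
    for _ in range(400)
]
stochastic_median = statistics.median(runs)
ratio = stochastic_median / deterministic  # ratio > 1: default is not conservative

# 95% confidence interval (2.5th to 97.5th percentile) as the uncertainty measure.
runs.sort()
ci_low, ci_high = runs[10], runs[390]
```

If the median of the stochastic runs exceeds the deterministic dose, the default parameter set understates the typical dose, which is the sense in which the study found the CSA defaults non-conservative.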

  15. Prediction of cause of death from forensic autopsy reports using text classification techniques: A comparative study.

    Science.gov (United States)

    Mujtaba, Ghulam; Shuib, Liyana; Raj, Ram Gopal; Rajandram, Retnagowri; Shaikh, Khairunisa

    2018-07-01

    Automatic text classification techniques are useful for classifying plaintext medical documents. This study aims to automatically predict the cause of death from free text forensic autopsy reports by comparing various schemes for feature extraction, term weighting or feature value representation, text classification, and feature reduction. For the experiments, autopsy reports belonging to eight different causes of death were collected, preprocessed and converted into 43 master feature vectors using various schemes for feature extraction, representation, and reduction. Six different text classification techniques were applied to these 43 master feature vectors to construct a classification model that can predict the cause of death. Finally, classification model performance was evaluated using four performance measures, i.e. overall accuracy, macro-precision, macro-F-measure, and macro-recall. From the experiments, it was found that unigram features obtained the highest performance compared to bigram, trigram, and hybrid-gram features. Furthermore, among the feature representation schemes, term frequency and term frequency with inverse document frequency obtained similar and better results when compared with binary frequency and normalized term frequency with inverse document frequency. The chi-square feature reduction approach outperformed the Pearson correlation and information gain approaches. Finally, among the text classification algorithms, the support vector machine classifier outperformed random forest, Naive Bayes, k-nearest neighbor, decision tree, and ensemble-voted classifiers. Our results and comparisons hold practical importance and serve as references for future works. Moreover, the comparison outputs will serve as a state-of-the-art baseline against which future proposals can be compared with existing automated text classification techniques. Copyright © 2017 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
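    A minimal sketch of one winning combination reported above (unigram TF-IDF features with a nearest-neighbour classifier) using only the standard library; the toy "reports" and cause-of-death labels are hypothetical:

```python
import math
from collections import Counter

# Toy "autopsy report" corpus with hypothetical cause-of-death labels.
train = [
    ("blunt force trauma to the head", "trauma"),
    ("myocardial infarction due to coronary occlusion", "cardiac"),
    ("trauma with multiple rib fractures", "trauma"),
    ("cardiac arrest following myocardial infarction", "cardiac"),
]

# Document frequency over unigrams (the feature type that performed best).
df = Counter()
for text, _ in train:
    df.update(set(text.split()))
n_docs = len(train)

def vectorize(text):
    # Term frequency weighted by inverse document frequency (TF-IDF);
    # words never seen in training are ignored.
    tf = Counter(text.split())
    return {w: c * math.log(n_docs / df[w]) for w, c in tf.items() if w in df}

def cosine(a, b):
    dot = sum(v * b.get(w, 0.0) for w, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def predict(text):
    # 1-nearest-neighbour classification, one of the six classifiers compared.
    query = vectorize(text)
    return max(train, key=lambda item: cosine(query, vectorize(item[0])))[1]
```

A real pipeline would use an SVM (the best performer in the study) and chi-square feature reduction, but the vectorization step is the same.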

  16. Scientific Instrument for a Controlled Choice of Optimal Photon Energy Spectrum: A Comparison Between Calculational Methods and Laboratory Irradiations of Comparable Hard Tissue Phantoms

    International Nuclear Information System (INIS)

    Helmrot, E.; Sandborg, M.; Eckerdal, O.; Alm Carlsson, G.

    1998-01-01

    Basic performance parameters are defined and analysed in order to optimise physical image quality in relation to the energy imparted to the patient in dental radiology. Air cavities were embedded in well-defined multi-material, hard tissue phantoms to represent various objects in dento-maxillo-facial examinations. The basic performance parameters were: object contrast (C), energy imparted (ε) to the patient, signal-to-noise ratio (SNR), and C²/ε (film) and (SNR)²/ε (digital imaging system) as functions of HVL (half-value layer), used to describe the photon energy spectrum. For the film receptor, the performance index C²/ε is maximum (optimal) at HVL values of 1.5-1.7 mm Al in the simulated Incisive, Premolar and Molar examinations. Other imaging tasks (examinations), not simulated here, may require other optimal HVL. For the digital imaging system (Digora) the theoretically calculated performance index (SNR)²/ε indicates that a lower HVL value is optimal than with film as receptor. However, due to the limited number of bits (8 bits) in the analogue to digital converter (ADC), contrast resolution is degraded, which calls for the use of higher photon energies (HVL). Customised optimisation with proper concern for patient category, type of examination and diagnostic task is the ultimate goal of this work. The conclusions stated above give some general advice on the appropriate choice of photon energy spectrum (HVL). In particular situations, it may be necessary to use more dose-demanding kV settings (lower HVL) in order to get sufficient image quality for the diagnostic task. (author)

  17. Comparing observed and predicted mortality among ICUs using different prognostic systems: why do performance assessments differ?

    Science.gov (United States)

    Kramer, Andrew A; Higgins, Thomas L; Zimmerman, Jack E

    2015-02-01

    To compare ICU performance using standardized mortality ratios generated by the Acute Physiology and Chronic Health Evaluation IVa and a National Quality Forum-endorsed methodology and examine potential reasons for model-based standardized mortality ratio differences. Retrospective analysis of day 1 hospital mortality predictions at the ICU level using Acute Physiology and Chronic Health Evaluation IVa and National Quality Forum models on the same patient cohort. Forty-seven ICUs at 36 U.S. hospitals from January 2008 to May 2013. Eighty-nine thousand three hundred fifty-three consecutive unselected ICU admissions. None. We assessed standardized mortality ratios for each ICU using data for patients eligible for Acute Physiology and Chronic Health Evaluation IVa and National Quality Forum predictions in order to compare unit-level model performance, differences in ICU rankings, and how case-mix adjustment might explain standardized mortality ratio differences. Hospital mortality was 11.5%. Overall standardized mortality ratio was 0.89 using Acute Physiology and Chronic Health Evaluation IVa and 1.07 using National Quality Forum, the latter having a widely dispersed and multimodal standardized mortality ratio distribution. Model exclusion criteria eliminated mortality predictions for 10.6% of patients for Acute Physiology and Chronic Health Evaluation IVa and 27.9% for National Quality Forum. The two models agreed on the significance and direction of standardized mortality ratio only 45% of the time. Four ICUs had standardized mortality ratios significantly less than 1.0 using Acute Physiology and Chronic Health Evaluation IVa, but significantly greater than 1.0 using National Quality Forum. Two ICUs had standardized mortality ratios exceeding 1.75 using National Quality Forum, but nonsignificant performance using Acute Physiology and Chronic Health Evaluation IVa. Stratification by patient and institutional characteristics indicated that units caring for more
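    A standardized mortality ratio of the kind compared above is the observed death count divided by the expected count, where the expected count is the sum of model-predicted death probabilities. The sketch below uses a Poisson-based confidence interval, a common approximation rather than necessarily the study's exact method, and hypothetical counts:

```python
import math

def smr_with_ci(observed_deaths, predicted_probabilities):
    # Standardized mortality ratio: observed deaths over expected deaths,
    # where expected deaths are the sum of model-predicted probabilities
    # (e.g. APACHE IVa day-1 predictions) over all eligible admissions.
    expected = sum(predicted_probabilities)
    ratio = observed_deaths / expected
    # Approximate 95% CI treating the observed count as Poisson.
    se_log = 1.0 / math.sqrt(observed_deaths)
    return ratio, ratio * math.exp(-1.96 * se_log), ratio * math.exp(1.96 * se_log)

# Hypothetical ICU: 95 observed deaths among 1000 admissions,
# each with a 10% predicted death risk (expected deaths = 100).
ratio, low, high = smr_with_ci(95, [0.10] * 1000)
significantly_different = not (low <= 1.0 <= high)
```

Two models can disagree on an ICU's standing, as in the abstract, because each model excludes different patients and produces different expected counts, shifting both the ratio and its confidence interval.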

  18. Cheiloscopy, Palatoscopy and Odontometrics in Sex Prediction and Discrimination - a Comparative Study.

    Science.gov (United States)

    V, Nagalaxmi; Ugrappa, Sridevi; M, Naga Jyothi; Ch, Lalitha; Maloth, Kotya Naik; Kodangal, Srikanth

    2014-01-01

    Human identification is the forensic odontologist's primary duty in fields like violent crime, child abuse, elder abuse, missing persons and mass disaster scenarios. In each context, dental traits may produce compelling evidence to aid victim identity, suspect identity and narrow down the outcome of investigative casework. Sometimes it becomes necessary to apply some lesser-known and less popular techniques in the identification procedure, where lip prints, rugae patterns and canine odontometrics can give comparatively valid conclusions pertaining to a person's identification. This study elucidates the significance of cheiloscopy, palatoscopy and canine odontometry in sex prediction and discrimination. A cross-sectional study involving a total of 60 subjects, 30 males and 30 females, was selected from the outpatient department of oral medicine and radiology. Lip prints were recorded using lipstick, palatal impressions were taken with alginate, and odontometric measurements were taken with digital vernier calipers from every subject. All the obtained records were analyzed by two observers. Reliability of lip prints was assessed using the Kappa coefficient. Comparison of rugae patterns was done using the Chi-square test. Mean canine and inter-canine width was compared using the t-test. A p-value of <0.05 was considered significant. A significant difference was found in the lip print patterns analyzed in males and females, while no significant difference was observed in the rugae patterns; however, a significant difference in the mesio-distal width of mandibular canines in males and females was found, with the right mandibular canine (3.73%) showing greater sexual dimorphism compared to the left mandibular canine (3.06%). This study shows the uniqueness of the lip prints and rugae patterns, with the lip prints showing a sensitivity of 81.7%, giving reliable prediction of sex over palatoscopy. Hence, cheiloscopy along with canine odontometrics aids in sex determination and can be considered as an ancillary forensic tool in identification.
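    The inter-observer reliability assessment mentioned above (the Kappa coefficient) corrects raw agreement for the agreement expected by chance. A minimal sketch, with hypothetical lip-print-type readings by two observers:

```python
def cohens_kappa(ratings_a, ratings_b):
    # Cohen's kappa: (observed agreement - chance agreement) / (1 - chance agreement).
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    p_observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement from each observer's marginal category frequencies.
    p_chance = sum(
        (ratings_a.count(c) / n) * (ratings_b.count(c) / n) for c in categories
    )
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical lip-print classifications (types I-V) from two observers.
obs1 = ["I", "II", "II", "III", "IV", "IV", "V", "I"]
obs2 = ["I", "II", "III", "III", "IV", "IV", "V", "II"]
kappa = cohens_kappa(obs1, obs2)
```

Values near 1 indicate near-perfect agreement; values near 0 indicate agreement no better than chance.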

  19. Impact of rotavirus vaccination on hospitalisations in Belgium: comparing model predictions with observed data.

    Directory of Open Access Journals (Sweden)

    Baudouin Standaert

    Full Text Available BACKGROUND: Published economic assessments of rotavirus vaccination typically use modelling, mainly static Markov cohort models with birth cohorts followed up to the age of 5 years. Rotavirus vaccination has now been available for several years in some countries, and data have been collected to evaluate the real-world impact of vaccination on rotavirus hospitalisations. This study compared the economic impact of vaccination between model estimates and observed data on disease-specific hospitalisation reductions in a country for which both modelled and observed datasets exist (Belgium. METHODS: A previously published Markov cohort model estimated the impact of rotavirus vaccination on the number of rotavirus hospitalisations in children aged <5 years in Belgium using vaccine efficacy data from clinical development trials. Data on the number of rotavirus-positive gastroenteritis hospitalisations in children aged <5 years between 1 June 2004 and 31 May 2006 (pre-vaccination study period or 1 June 2007 to 31 May 2010 (post-vaccination study period were analysed from nine hospitals in Belgium and compared with the modelled estimates. RESULTS: The model predicted a smaller decrease in hospitalisations over time, mainly explained by two factors. First, the observed data indicated indirect vaccine protection in children too old or too young for vaccination. This herd effect is difficult to capture in static Markov cohort models and therefore was not included in the model. Second, the model included a 'waning' effect, i.e. reduced vaccine effectiveness over time. The observed data suggested this waning effect did not occur during that period, and so the model systematically underestimated vaccine effectiveness during the first 4 years after vaccine implementation. CONCLUSIONS: Model predictions underestimated the direct medical economic value of rotavirus vaccination during the first 4 years of vaccination by approximately 10% when assessing

  20. In silico models for predicting ready biodegradability under REACH: a comparative study.

    Science.gov (United States)

    Pizzo, Fabiola; Lombardo, Anna; Manganaro, Alberto; Benfenati, Emilio

    2013-10-01

    REACH (Registration, Evaluation, Authorisation and Restriction of Chemicals) is a European law which aims to raise the level of protection of human health and the environment. Under REACH, all chemicals manufactured or imported in quantities above one tonne per year must be evaluated for their ready biodegradability. Ready biodegradability is also used as a screening test for persistent, bioaccumulative and toxic (PBT) substances. REACH encourages the use of non-testing methods such as QSAR (quantitative structure-activity relationship) models in order to save money and time and to reduce the number of animals used for scientific purposes. Some QSAR models are available for predicting ready biodegradability. We used a dataset of 722 compounds to test four models: VEGA, TOPKAT, BIOWIN (versions 5 and 6) and START, and compared their performance on the basis of the following parameters: accuracy, sensitivity, specificity and Matthews correlation coefficient (MCC). Performance was analyzed from different points of view. The first calculation was done on the whole dataset, and VEGA and TOPKAT gave the best accuracy (88% and 87% respectively). Then we considered the compounds inside and outside the training set: BIOWIN 6 and 5 gave the best accuracy (81%) outside the training set. Another analysis examined the applicability domain (AD). VEGA had the highest value for compounds inside the AD for all the parameters taken into account. Finally, compounds outside the training set and within the AD of the models were considered to assess predictive ability. VEGA gave the best accuracy results (99%) for this group of chemicals. Generally, the START model gave poor results. Since the BIOWIN, TOPKAT and VEGA models performed well, they may be used to predict ready biodegradability. Copyright © 2013 Elsevier B.V. All rights reserved.
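    The four evaluation parameters named above follow directly from a binary confusion matrix. A sketch with hypothetical counts (not the study's actual results):

```python
import math

def classification_metrics(tp, tn, fp, fn):
    # Accuracy, sensitivity, specificity and Matthews correlation
    # coefficient (MCC), the four parameters used in the comparison.
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)          # true-positive rate
    specificity = tn / (tn + fp)          # true-negative rate
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return accuracy, sensitivity, specificity, mcc

# Hypothetical confusion matrix for a ready-biodegradability model
# evaluated on 722 compounds (readily biodegradable = positive class).
acc, sens, spec, mcc = classification_metrics(tp=300, tn=330, fp=40, fn=52)
```

MCC is preferred over raw accuracy when the two classes are imbalanced, since it only approaches 1 when all four cells of the matrix are favourable.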

  1. An appraisal of wind speed distribution prediction by soft computing methodologies: A comparative study

    International Nuclear Information System (INIS)

    Petković, Dalibor; Shamshirband, Shahaboddin; Anuar, Nor Badrul; Saboohi, Hadi; Abdul Wahab, Ainuddin Wahid; Protić, Milan; Zalnezhad, Erfan; Mirhashemi, Seyed Mohammad Amin

    2014-01-01

    Highlights: • Probabilistic distribution functions of wind speed. • Two parameter Weibull probability distribution. • To build an effective prediction model of distribution of wind speed. • Support vector regression application as probability function for wind speed. - Abstract: The probabilistic distribution of wind speed is among the more significant wind characteristics in examining wind energy potential and the performance of wind energy conversion systems. When the wind speed probability distribution is known, the wind energy distribution can be easily obtained. Therefore, the probability distribution of wind speed is a very important piece of information required in assessing wind energy potential. For this reason, a large number of studies have been established concerning the use of a variety of probability density functions to describe wind speed frequency distributions. Although the two-parameter Weibull distribution comprises a widely used and accepted method, solving the function is very challenging. In this study, the polynomial and radial basis functions (RBF) are applied as the kernel function of support vector regression (SVR) to estimate two parameters of the Weibull distribution function according to previously established analytical methods. Rather than minimizing the observed training error, SVRpoly and SVRrbf attempt to minimize the generalization error bound, so as to achieve generalized performance. According to the experimental results, enhanced predictive accuracy and capability of generalization can be achieved using the SVR approach compared to other soft computing methodologies
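    For context, the two Weibull parameters can also be estimated by one of the established analytical methods the SVR models are benchmarked against, e.g. the empirical moment-based estimator (k from the coefficient of variation, c from the mean and the gamma function). The wind-speed samples below are hypothetical:

```python
import math
import statistics

def weibull_parameters(wind_speeds):
    # Empirical (moment-based) estimator: shape k from the coefficient of
    # variation, scale c from the mean; one of several analytical methods.
    mean = statistics.fmean(wind_speeds)
    std = statistics.stdev(wind_speeds)
    k = (std / mean) ** -1.086          # shape parameter (dimensionless)
    c = mean / math.gamma(1 + 1 / k)    # scale parameter (m/s)
    return k, c

# Hypothetical hourly wind-speed measurements (m/s).
speeds = [4.1, 5.3, 6.7, 3.2, 7.8, 5.5, 6.1, 4.9, 8.4, 5.0]
k, c = weibull_parameters(speeds)

def weibull_pdf(v, k, c):
    # Once k and c are known, the wind-speed PDF follows directly.
    return (k / c) * (v / c) ** (k - 1) * math.exp(-((v / c) ** k))
```

The paper's contribution is to learn k and c with SVR instead, which it reports generalizes better than such closed-form estimators.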

  2. Comparing predicted yield and yield stability of willow and Miscanthus across Denmark

    DEFF Research Database (Denmark)

    Larsen, Søren; Jaiswal, Deepak; Bentsen, Niclas Scott

    2016-01-01

    The semi-mechanistic crop model BioCro was used to simulate the production of both short rotation coppice (SRC) willow and Miscanthus across Denmark. Predictions were made from high spatial resolution soil data and weather records across this area for 1990-2010. The potential average, rain-fed mean yield was 12.1 Mg DM ha−1 yr−1 for willow and 10.2 Mg DM ha−1 yr−1 for Miscanthus. Coefficient of variation as a measure for yield stability was poorest on the sandy soils of northern and western Jutland, and the year-to-year variation in yield was greatest on these soils. Willow was predicted to outyield Miscanthus on poor, sandy soils whereas Miscanthus was higher yielding on clay-rich soils. The major driver of yield in both crops was variation in soil moisture, with radiation and precipitation exerting less influence. This is the first time these two major feedstocks for northern Europe have been compared.

  3. HeartCare+: A Smart Heart Care Mobile Application for Framingham-Based Early Risk Prediction of Hard Coronary Heart Diseases in Middle East

    Directory of Open Access Journals (Sweden)

    Hoda Ahmed Galal Elsayed

    2017-01-01

    Full Text Available Background. Healthcare is a challenging yet highly demanding sector to which developing countries have recently been paying more attention. Statistics show that rural areas are expected to develop a high rate of heart disease, a leading cause of sudden mortality, in the future. Thus, providing solutions that can assist rural people in detecting cardiac risks early will be vital for uncovering and even preventing the long-term complications of cardiac diseases. Methodology. Mobile technology can be effectively utilized to limit the prevalence of cardiac diseases in the rural Middle East. This paper proposes a smart mobile solution for early risk detection of hard coronary heart diseases that uses the Framingham scoring model. Results. The Smart HeartCare+ mobile app accurately estimates coronary heart disease risk over 10 years based on clinical and nonclinical data and classifies the patient's risk as low, moderate, or high. HeartCare+ also directs patients to further treatment recommendations. Conclusion. This work investigates the effectiveness of mobile technology in the early risk detection of coronary heart diseases. The HeartCare+ app strengthens the communication channel between lab workers and patients residing in rural areas and cardiologists and specialists residing in urban places.
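    The final classification step can be sketched as below. The 10%/20% cut-offs are the conventional Framingham 10-year risk thresholds, assumed here for illustration rather than quoted from the paper:

```python
def classify_risk(ten_year_risk_percent):
    # Map a Framingham-style 10-year hard-CHD risk estimate (in percent)
    # to the three categories the app reports. Thresholds are the
    # conventional Framingham cut-offs, assumed here.
    if ten_year_risk_percent < 10:
        return "low"
    if ten_year_risk_percent <= 20:
        return "moderate"
    return "high"
```

The risk percentage itself would come from the Framingham scoring model's points system applied to the patient's clinical and nonclinical inputs.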

  4. Soft And Hard Skills of Social Worker

    OpenAIRE

    HANTOVÁ, Libuše

    2011-01-01

    The work deals with soft and hard skills relevant to the profession of social worker. The theoretical part at first evaluates and analyzes important soft and hard skills necessary for people working in the field of social work. Then these skills are compared. The practical part illustrates the use of soft and hard skills in practice by means of model scenes and deals with the preferences in three groups of people: students of social work, social workers and people outside the sphere, namely ...

  5. Hard x-ray micro-tomography of a human head post-mortem as a gold standard to compare X-ray modalities

    DEFF Research Database (Denmark)

    Dalstra, Michel; Schulz, Georg; Dagassan-Berndt, Dorothea

    2016-01-01

    The micro-CT scans of the post-mortem human head served as the gold standard in a larger study comparing the image quality of various cone beam CT systems currently used in dentistry. The image quality of the micro-CT scans was indeed better than that of the clinical imaging modalities, both with regard to noise and to streak artifacts due to metal dental implants. Bony features ...

  6. Photon technology. Hard photon technology; Photon technology. Hard photon gijutsu

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    For the application of photons to industrial technologies, a hard photon technology was surveyed which uses photon beams of 0.1-200 nm in wavelength. Its features, such as selective atom reaction, dense inner shell excitation and spatial high resolution by quantum energy, are expected to provide innovative techniques for various fields such as fine machining, material synthesis and advanced inspection technology. This wavelength region has hardly been utilized in industrial fields because of the poor development of suitable photon sources and optical devices. The developmental significance, time to practical use and issues of hard photon reduction lithography were surveyed as a lithography for the ultra-fine region below 0.1 μm. On hard photon analysis/evaluation technology, the industrial use of analysis, measurement and evaluation technologies by micro-beam was reviewed, and optimum photon sources and optical systems were surveyed. Prediction of surface and surface layer modification by inner shell excitation, the future trend of this process, and the development of a vacuum ultraviolet light source were also surveyed. 383 refs., 153 figs., 17 tabs.

  7. Comparative dynamics, seasonality in transmission, and predictability of childhood infections in Mexico

    Science.gov (United States)

    Mahmud, A. S.; Metcalf, C. J. E.; Grenfell, B. T.

    2018-01-01

    The seasonality and periodicity of infections, and the mechanisms underlying observed dynamics, can have implications for control efforts. This is particularly true for acute childhood infections. Among these, the dynamics of measles is the best understood and has been extensively studied, most notably in the UK prior to the start of vaccination. Less is known about the dynamics of other childhood diseases, particularly outside Europe and the US. In this paper, we leverage a unique dataset to examine the epidemiology of six childhood infections (measles, mumps, rubella, varicella, scarlet fever and pertussis) across 32 states in Mexico from 1985 to 2007. This dataset provides us with a spatiotemporal probe into the dynamics of six common childhood infections, and allows us to compare them in the same setting over the same time period. We examine three key epidemiological characteristics of these infections (the age profile of infections, spatiotemporal dynamics, and seasonality in transmission) and compare with predictions from existing theory and past findings. Our analysis reveals interesting epidemiological differences between the six pathogens, and variations across space. We find signatures of term time forcing (reduced transmission during the summer) for measles, mumps, rubella, varicella, and scarlet fever; for pertussis, a lack of term time forcing could not be rejected. PMID:27873563

  8. Hardness of ion implanted ceramics

    International Nuclear Information System (INIS)

    Oliver, W.C.; McHargue, C.J.; Farlow, G.C.; White, C.W.

    1985-01-01

    It has been established that the wear behavior of ceramic materials can be modified through ion implantation. Studies have been done to characterize the effect of implantation on the structure and composition of ceramic surfaces. To understand how these changes affect the wear properties of the ceramic, other mechanical properties must be measured. To accomplish this, a commercially available ultra low load hardness tester has been used to characterize Al₂O₃ with different implanted species and doses. The hardness of the base material is compared with the highly damaged crystalline state as well as the amorphous material

  9. Prediction of clearance, volume of distribution and half-life by allometric scaling and by use of plasma concentrations predicted from pharmacokinetic constants: a comparative study.

    Science.gov (United States)

    Mahmood, I

    1999-08-01

    Pharmacokinetic parameters (clearance, CL, volume of distribution in the central compartment, VdC, and elimination half-life, t1/2β) predicted by an empirical allometric approach have been compared with parameters predicted from plasma concentrations calculated by use of the pharmacokinetic constants A, B, alpha and beta, where A and B are the intercepts on the Y axis of the plot of plasma concentration against time and alpha and beta are the rate constants, both pairs of constants being for the distribution and elimination phases, respectively. The pharmacokinetic parameters of cefpiramide, actisomide, troglitazone, procaterol, moxalactam and ciprofloxacin were scaled from animal data obtained from the literature. Three methods were used to generate plots for the prediction of clearance in man: dependence of clearance on body weight (simple allometric equation); dependence of the product of clearance and maximum life-span potential (MLP) on body weight; and dependence of the product of clearance and brain weight on body weight. Plasma concentrations of the drugs were predicted in man by use of A, B, alpha and beta obtained from animal data. The predicted plasma concentrations were then used to calculate CL, VdC and t1/2β. The pharmacokinetic parameters predicted by use of both approaches were compared with measured values. The results indicate that simple allometry did not predict clearance satisfactorily for actisomide, troglitazone, procaterol and ciprofloxacin. Use of MLP or the product of clearance and brain weight improved the prediction of clearance for these four drugs. Except for troglitazone, VdC and t1/2β predicted for man by use of the allometric approach were comparable with measured values for the drugs studied. CL, VdC and t1/2β predicted by use of pharmacokinetic constants were comparable with values predicted by simple allometry. Thus, if simple allometry failed to predict the clearance of a drug, so did the approach based on pharmacokinetic constants
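    The simple allometric equation underlying the first method, CL = a·W^b, is fitted by least squares on log-transformed data and then extrapolated to human body weight. In this sketch the species weights and clearances are synthetic, generated from a known power law so the fit is easy to verify:

```python
import math

def fit_allometric(weights_kg, clearances):
    # Least-squares fit of log(CL) = log(a) + b*log(W), i.e. CL = a * W**b,
    # the simple allometric equation.
    xs = [math.log(w) for w in weights_kg]
    ys = [math.log(cl) for cl in clearances]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    a = math.exp(my - b * mx)
    return a, b

# Synthetic four-species dataset (mouse, rat, rabbit, dog weights in kg),
# generated exactly as CL = 10 * W**0.75 (mL/min).
weights = [0.02, 0.25, 2.5, 10.0]
cl_values = [10 * w ** 0.75 for w in weights]
a, b = fit_allometric(weights, cl_values)
human_cl = a * 70 ** b  # extrapolate to a 70 kg human
```

The MLP and brain-weight corrections mentioned above simply replace CL on the left-hand side with CL·MLP or CL·(brain weight) before fitting the same power law.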

  10. Modeling of the flow stress for AISI H13 Tool Steel during Hard Machining Processes

    Science.gov (United States)

    Umbrello, Domenico; Rizzuti, Stefania; Outeiro, José C.; Shivpuri, Rajiv

    2007-04-01

    In general, the flow stress models used in computer simulation of machining processes are a function of the effective strain, effective strain rate and temperature developed during the cutting process. However, these models do not adequately describe the material behavior in hard machining, where a range of material hardness between 45 and 60 HRC is used. Thus, depending on the specific material hardness, different material models must be used in modeling the cutting process. This paper describes the development of hardness-based flow stress and fracture models for the AISI H13 tool steel, which can be applied for the range of material hardness mentioned above. These models were implemented in a non-isothermal viscoplastic numerical model to simulate the machining process for AISI H13 with various hardness values and applying different cutting regime parameters. Predicted results are validated by comparing them with experimental results found in the literature. They are found to predict reasonably well the cutting forces as well as the change in chip morphology from continuous to segmented chip as the material hardness changes.
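    A hardness-dependent flow stress of the general kind described can be sketched as a Johnson-Cook-type law with an added linear hardness scaling. The functional form and every coefficient below are illustrative assumptions for demonstration, not the model developed in the paper:

```python
import math

def flow_stress(strain, strain_rate, temp_c, hrc,
                A=500.0, B=400.0, n=0.3, C=0.02, m=1.0,
                strain_rate_ref=1.0, temp_room=20.0, temp_melt=1400.0):
    # Illustrative Johnson-Cook-type flow stress (MPa):
    # (strain hardening) * (strain-rate sensitivity) * (thermal softening),
    # with the strain term scaled linearly by workpiece hardness (HRC).
    hardness_factor = hrc / 45.0  # normalised to the low end of the 45-60 HRC range
    strain_term = (A + B * strain ** n) * hardness_factor
    rate_term = 1.0 + C * math.log(strain_rate / strain_rate_ref)
    t_star = (temp_c - temp_room) / (temp_melt - temp_room)
    thermal_term = 1.0 - t_star ** m
    return strain_term * rate_term * thermal_term

# At fixed cutting conditions, flow stress rises with workpiece hardness.
soft = flow_stress(strain=0.5, strain_rate=1e4, temp_c=500.0, hrc=45)
hard = flow_stress(strain=0.5, strain_rate=1e4, temp_c=500.0, hrc=60)
```

In a finite-element machining simulation such a law would be evaluated at every integration point, with strain, strain rate and temperature supplied by the solver.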

  11. Modeling of the flow stress for AISI H13 Tool Steel during Hard Machining Processes

    International Nuclear Information System (INIS)

    Umbrello, Domenico; Rizzuti, Stefania; Outeiro, Jose C.; Shivpuri, Rajiv

    2007-01-01

    In general, the flow stress models used in computer simulation of machining processes are a function of the effective strain, effective strain rate and temperature developed during the cutting process. However, these models do not adequately describe the material behavior in hard machining, where a range of material hardness between 45 and 60 HRC is used. Thus, depending on the specific material hardness, different material models must be used in modeling the cutting process. This paper describes the development of hardness-based flow stress and fracture models for the AISI H13 tool steel, which can be applied for the range of material hardness mentioned above. These models were implemented in a non-isothermal viscoplastic numerical model to simulate the machining process for AISI H13 with various hardness values and applying different cutting regime parameters. Predicted results are validated by comparing them with experimental results found in the literature. They are found to predict reasonably well the cutting forces as well as the change in chip morphology from continuous to segmented chip as the material hardness changes

  12. Comparing various artificial neural network types for water temperature prediction in rivers

    Science.gov (United States)

    Piotrowski, Adam P.; Napiorkowski, Maciej J.; Napiorkowski, Jaroslaw J.; Osuch, Marzena

    2015-10-01

    A number of methods have been proposed for the prediction of streamwater temperature based on various meteorological and hydrological variables. The present study shows a comparison of a few types of data-driven neural networks (multi-layer perceptron, product-units, adaptive-network-based fuzzy inference systems and wavelet neural networks) and a nearest neighbour approach for short-term streamwater temperature predictions in two natural catchments (mountainous and lowland) located in the temperate climate zone, with snowy winters and hot summers. To allow wide applicability of such models, autoregressive inputs are not used and only easily available measurements are considered. Each neural network type is calibrated independently 100 times, and the mean, median and standard deviation of the results are used for the comparison. Finally, the ensemble aggregation approach is tested. The results show that simple and popular multi-layer perceptron neural networks are in most cases not outperformed by more complex and advanced models. The choice of neural network is dependent on the way the models are compared. This may be a warning for anyone who wishes to promote their own models: their superiority should be verified in different ways. The best results are obtained when mean, maximum and minimum daily air temperatures from the previous days are used as inputs, together with the current runoff and the declination of the Sun from two recent days. The ensemble aggregation approach allows reducing the mean square error by up to several percent, depending on the case, and noticeably diminishes differences in modelling performance obtained by various neural network types.
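    The ensemble aggregation step can be illustrated with simulated data: averaging the predictions of several independently calibrated members whose errors are roughly independent reduces the mean square error, approximately in proportion to the ensemble size. Everything below is synthetic, not the study's data:

```python
import math
import random
import statistics

random.seed(0)
# Hypothetical daily water temperatures (deg C) over one year.
truth = [10 + 8 * math.sin(2 * math.pi * d / 365) for d in range(365)]

def mse(pred, obs):
    return statistics.fmean((p - o) ** 2 for p, o in zip(pred, obs))

# Ten "independently calibrated networks", simulated here as unbiased
# predictors with independent errors of standard deviation 1 deg C.
members = [[t + random.gauss(0, 1.0) for t in truth] for _ in range(10)]

single_mse = statistics.fmean(mse(m, truth) for m in members)

# Ensemble aggregation: average the members' predictions day by day.
ensemble = [statistics.fmean(day_predictions) for day_predictions in zip(*members)]
ensemble_mse = mse(ensemble, truth)
```

With perfectly independent errors the ensemble MSE is roughly single_mse / 10; real network ensembles share data and inputs, so their errors correlate and the gain is smaller, consistent with the "several percent" reported above.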

  13. Waist circumference as compared with body-mass index in predicting mortality from specific causes.

    Directory of Open Access Journals (Sweden)

    Michael F Leitzmann

    2011-04-01

    Full Text Available Whether waist circumference provides clinically meaningful information not delivered by body-mass index regarding prediction of cause-specific death is uncertain. We prospectively examined waist circumference (WC and body-mass index (BMI in relation to cause-specific death in 225,712 U.S. women and men. Cox regression was used to estimate relative risks and 95% confidence intervals (CI. Statistical analyses were conducted using SAS version 9.1. During follow-up from 1996 through 2005, we documented 20,977 deaths. Increased WC consistently predicted risk of death due to any cause as well as major causes of death, including deaths from cancer, cardiovascular disease, and non-cancer/non-cardiovascular diseases, independent of BMI, age, sex, race/ethnicity, smoking status, and alcohol intake. When WC and BMI were mutually adjusted in a model, WC was related to a 1.37-fold increased risk of death from any cancer and a 1.82-fold increased risk of death from cardiovascular disease, comparing the highest versus lowest WC categories. Importantly, WC, but not BMI, showed statistically significant positive associations with deaths from lung cancer and chronic respiratory disease. Participants in the highest versus lowest WC category had a relative risk of death from lung cancer of 1.77 (95% CI, 1.41 to 2.23 and of death from chronic respiratory disease of 2.77 (95% CI, 1.95 to 3.95. In contrast, subjects in the highest versus lowest BMI category had a relative risk of death from lung cancer of 0.94 (95% CI, 0.75 to 1.17 and of death from chronic respiratory disease of 1.18 (95% CI, 0.89 to 1.56. Increased abdominal fat measured by WC was related to a higher risk of deaths from major specific causes, including deaths from lung cancer and chronic respiratory disease, independent of BMI.
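    A relative risk with a 95% confidence interval of the kind reported above can be computed from category counts via the log-RR normal approximation; the study itself used Cox regression, so this is only a simplified sketch, and the counts below are hypothetical, chosen to roughly reproduce the lung-cancer figure:

```python
import math

def relative_risk(exposed_deaths, exposed_total, reference_deaths, reference_total):
    # Relative risk of death (highest vs lowest category) with a 95% CI
    # from the normal approximation on the log scale.
    risk_exposed = exposed_deaths / exposed_total
    risk_reference = reference_deaths / reference_total
    rr = risk_exposed / risk_reference
    se_log = math.sqrt(
        1 / exposed_deaths - 1 / exposed_total
        + 1 / reference_deaths - 1 / reference_total
    )
    return rr, rr * math.exp(-1.96 * se_log), rr * math.exp(1.96 * se_log)

# Hypothetical counts: lung-cancer deaths in the highest vs lowest
# waist-circumference category.
rr, ci_low, ci_high = relative_risk(177, 10000, 100, 10000)
```

A CI excluding 1.0, as here, indicates a statistically significant association at the 5% level.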

  14. A comparative study: classification vs. user-based collaborative filtering for clinical prediction

    Directory of Open Access Journals (Sweden)

    Fang Hao

    2016-12-01

Abstract Background Recommender systems have shown tremendous value for the prediction of personalized item recommendations for individuals in a variety of settings (e.g., marketing, e-commerce, etc.). User-based collaborative filtering is a popular recommender system, which leverages an individual's prior satisfaction with items, as well as the satisfaction of individuals that are "similar". Recently, there have been applications of collaborative filtering based recommender systems for clinical risk prediction. In these applications, individuals represent patients, and items represent clinical data, which includes an outcome. Methods Application of recommender systems to a problem of this type requires recasting a supervised learning problem as an unsupervised one. The rationale is that patients with similar clinical features carry a similar disease risk. As the "Big Data" era progresses, it is likely that approaches of this type will be reached for as biomedical data continues to grow in both size and complexity (e.g., electronic health records). In the present study, we set out to understand and assess the performance of recommender systems in a controlled yet realistic setting. User-based collaborative filtering recommender systems are compared to logistic regression and random forests with different types of imputation and varying amounts of missingness on four different publicly available medical data sets: National Health and Nutrition Examination Survey (NHANES, 2011-2012 on Obesity), Study to Understand Prognoses Preferences Outcomes and Risks of Treatment (SUPPORT), chronic kidney disease, and dermatology data. We also examined performance using simulated data with observations that are Missing At Random (MAR) or Missing Completely At Random (MCAR) under various degrees of missingness and levels of class imbalance in the response variable. Results Our results demonstrate that user-based collaborative filtering is consistently inferior to logistic regression and random forests.

  15. A comparative study: classification vs. user-based collaborative filtering for clinical prediction.

    Science.gov (United States)

    Hao, Fang; Blair, Rachael Hageman

    2016-12-08

Recommender systems have shown tremendous value for the prediction of personalized item recommendations for individuals in a variety of settings (e.g., marketing, e-commerce, etc.). User-based collaborative filtering is a popular recommender system, which leverages an individual's prior satisfaction with items, as well as the satisfaction of individuals that are "similar". Recently, there have been applications of collaborative filtering based recommender systems for clinical risk prediction. In these applications, individuals represent patients, and items represent clinical data, which includes an outcome. Application of recommender systems to a problem of this type requires recasting a supervised learning problem as an unsupervised one. The rationale is that patients with similar clinical features carry a similar disease risk. As the "Big Data" era progresses, it is likely that approaches of this type will be reached for as biomedical data continues to grow in both size and complexity (e.g., electronic health records). In the present study, we set out to understand and assess the performance of recommender systems in a controlled yet realistic setting. User-based collaborative filtering recommender systems are compared to logistic regression and random forests with different types of imputation and varying amounts of missingness on four different publicly available medical data sets: National Health and Nutrition Examination Survey (NHANES, 2011-2012 on Obesity), Study to Understand Prognoses Preferences Outcomes and Risks of Treatment (SUPPORT), chronic kidney disease, and dermatology data. We also examined performance using simulated data with observations that are Missing At Random (MAR) or Missing Completely At Random (MCAR) under various degrees of missingness and levels of class imbalance in the response variable. Our results demonstrate that user-based collaborative filtering is consistently inferior to logistic regression and random forests with different types of imputation and varying amounts of missingness.
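The user-based collaborative filtering scheme this record describes can be sketched in a few lines: patients are compared by a similarity measure over their shared clinical features, and the outcome of a new patient is predicted as a similarity-weighted average of the outcomes of the most similar patients. A minimal sketch in plain Python; the feature names and values below are illustrative, not from the study's data sets.

```python
import math

def cosine(u, v):
    """Cosine similarity over the features observed in both patients."""
    shared = [k for k in u if k in v]
    if not shared:
        return 0.0
    dot = sum(u[k] * v[k] for k in shared)
    nu = math.sqrt(sum(u[k] ** 2 for k in shared))
    nv = math.sqrt(sum(v[k] ** 2 for k in shared))
    return dot / (nu * nv) if nu and nv else 0.0

def predict_outcome(target, patients, outcomes, k=2):
    """Similarity-weighted average outcome of the k most similar patients."""
    sims = sorted(((cosine(target, p), o) for p, o in zip(patients, outcomes)),
                  reverse=True)[:k]
    wsum = sum(s for s, _ in sims)
    if wsum == 0:
        return 0.5  # no informative neighbours: fall back to indifference
    return sum(s * o for s, o in sims) / wsum

# Hypothetical cohort: two high-BMI patients with the outcome, one without.
patients = [{"bmi": 32.0, "age": 50.0}, {"bmi": 22.0, "age": 30.0},
            {"bmi": 31.0, "age": 55.0}]
outcomes = [1, 0, 1]
target = {"bmi": 33.0, "age": 52.0}
risk = predict_outcome(target, patients, outcomes)
```

Recasting the supervised problem as unsupervised, as the abstract notes, amounts to never fitting a model of the outcome at all: the outcome is just one more item filled in from the neighbourhood.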

  16. Comparing an Annual and a Daily Time-Step Model for Predicting Field-Scale Phosphorus Loss.

    Science.gov (United States)

    Bolster, Carl H; Forsberg, Adam; Mittelstet, Aaron; Radcliffe, David E; Storm, Daniel; Ramirez-Avila, John; Sharpley, Andrew N; Osmond, Deanna

    2017-11-01

A wide range of mathematical models are available for predicting phosphorus (P) losses from agricultural fields, ranging from simple, empirically based annual time-step models to more complex, process-based daily time-step models. In this study, we compare field-scale P-loss predictions between the Annual P Loss Estimator (APLE), an empirically based annual time-step model, and the Texas Best Management Practice Evaluation Tool (TBET), a process-based daily time-step model based on the Soil and Water Assessment Tool. We first compared predictions of field-scale P loss from both models using field and land management data collected from 11 research sites throughout the southern United States. We then compared predictions of P loss from both models with measured P-loss data from these sites. We observed a strong and statistically significant relationship between predictions of P loss from the two models; however, APLE predicted, on average, 44% greater dissolved P loss, whereas TBET predicted, on average, 105% greater particulate P loss for the conditions simulated in our study. When we compared model predictions with measured P-loss data, neither model consistently outperformed the other, indicating that more complex models do not necessarily produce better predictions of field-scale P loss. Our results also highlight limitations with both models and the need for continued efforts to improve their accuracy. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  17. Support vector regression for porosity prediction in a heterogeneous reservoir: A comparative study

    Science.gov (United States)

    Al-Anazi, A. F.; Gates, I. D.

    2010-12-01

In wells with limited log and core data, porosity, a fundamental and essential property for characterizing reservoirs, is challenging to estimate by conventional statistical methods from offset well log and core data in heterogeneous formations. Beyond simple regression, neural networks have been used to develop more accurate porosity correlations. Unfortunately, neural network-based correlations have limited generalization ability, and global correlations for a field are usually less accurate compared to local correlations for a sub-region of the reservoir. In this paper, support vector machines are explored as an intelligent technique to correlate porosity to well log data. Recently, support vector regression (SVR), based on statistical learning theory, has been proposed as a new intelligence technique for both prediction and classification tasks. The underlying formulation of support vector machines embodies the structural risk minimization (SRM) principle, which has been shown to be superior to the traditional empirical risk minimization (ERM) principle employed by conventional neural networks and classical statistical methods. This new formulation uses margin-based loss functions to control model complexity independently of the dimensionality of the input space, and kernel functions to project the estimation problem into a higher dimensional space, which enables the solution of more complex nonlinear problems; optimization methods exist for finding a globally optimal solution. SRM minimizes an upper bound on the expected risk using a margin-based loss function (the ɛ-insensitivity loss function for regression), in contrast to ERM, which minimizes the error on the training data. Unlike classical learning methods, SRM, indexed by a margin-based loss function, can also control model complexity independent of dimensionality. The SRM inductive principle is designed for statistical estimation with finite data, where the ERM inductive principle provides the optimal solution.
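The ɛ-insensitive loss the abstract contrasts with ERM is easy to state concretely: residuals inside a tube of half-width ɛ cost nothing, and larger residuals are penalized linearly, unlike squared error, which penalizes every residual. A minimal sketch (the numeric values are illustrative):

```python
def eps_insensitive_loss(y_true, y_pred, eps=0.1):
    """SVR margin loss: zero inside the eps-tube, linear outside it."""
    return max(0.0, abs(y_true - y_pred) - eps)

def squared_loss(y_true, y_pred):
    """ERM-style squared error, for contrast: every residual is penalized."""
    return (y_true - y_pred) ** 2

# A small residual inside the tube is ignored by the SVR loss but not by ERM.
small = eps_insensitive_loss(0.20, 0.25)   # |residual| = 0.05 < eps
large = eps_insensitive_loss(0.20, 0.55)   # |residual| = 0.35, charged 0.25
```

This tube is exactly what lets SVR trade data fit against model complexity independently of input dimensionality, as described above.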

  18. Hard breakup of two nucleons from the 3He nucleus

    International Nuclear Information System (INIS)

    Sargsian, Misak M.; Granados, Carlos

    2009-01-01

We investigate large angle photodisintegration of two nucleons from the 3He nucleus within the framework of the hard rescattering model (HRM). In the HRM, a quark of one nucleon, knocked out by the incoming photon, rescatters with a quark of the other nucleon, leading to the production of two nucleons with large relative momentum. Assuming the dominance of the quark-interchange mechanism in hard nucleon-nucleon scattering, the HRM allows the amplitude of a two-nucleon breakup reaction to be expressed through the convolution of the photon-quark scattering amplitude, the hard NN scattering amplitude, and the nuclear spectral function, which can be calculated using a nonrelativistic 3He wave function. The photon-quark scattering amplitude can be explicitly calculated in the high energy regime, whereas for NN scattering one uses a fit of the available experimental data. The HRM predicts several specific features for the hard breakup reaction. First, the cross section will approximately scale as s^-11. Second, the s^11-weighted cross section will have an energy dependence similar in shape to that of the s^10-weighted NN elastic scattering cross section. One also predicts an enhancement of the pp breakup cross section relative to the pn breakup cross section as compared to the results from low energy kinematics. Another result is the prediction of different spectator momentum dependencies of the pp and pn breakup cross sections. This is due to the fact that the same-helicity pp-component is strongly suppressed in the ground state wave function of 3He. Because of this suppression, the HRM predicts significantly different asymmetries for the cross sections of polarization-transfer NN breakup reactions with circularly polarized photons. For the pp breakup this asymmetry is predicted to be zero, while for the pn breakup it is close to 2/3.

  19. COMPARATIVE MODELLING AND LIGAND BINDING SITE PREDICTION OF A FAMILY 43 GLYCOSIDE HYDROLASE FROM Clostridium thermocellum

    Directory of Open Access Journals (Sweden)

    Shadab Ahmed

    2012-06-01

The phylogenetic analysis of Clostridium thermocellum family 43 glycoside hydrolase (CtGH43) showed close evolutionary relation with carbohydrate binding family 6 proteins from C. cellulolyticum, C. papyrosolvens, C. cellulyticum, and A. cellulyticum. Comparative modeling of CtGH43 was performed based on crystal structures with PDB IDs 3C7F, 1YIF, 1YRZ, 2EXH and 1WL7. The structure having the lowest MODELLER objective function was selected. The three-dimensional structure revealed typical 5-fold beta-propeller architecture. Energy minimization and validation of the predicted model with VERIFY 3D indicated acceptability of the proposed atomic structure. The Ramachandran plot analysis by RAMPAGE confirmed that family 43 glycoside hydrolase (CtGH43) contains little or negligible helical content. It also showed that out of 301 residues, 267 (89.3%) were in the most favoured region, 23 (7.7%) were in the allowed region and 9 (3.0%) were in the outlier region. IUPred analysis of CtGH43 showed no disordered region. Active site analysis showed the presence of two Asp and one Glu, assumed to form a catalytic triad. This study gives information about the three-dimensional structure and reaffirms that CtGH43 has the same core 5-fold beta-propeller architecture, and so probably the same inverting mechanism of action, with formation of the above-mentioned catalytic triad for catalysis of polysaccharides.

  20. Hard processes. Vol. 1

    International Nuclear Information System (INIS)

    Ioffe, B.L.; Khoze, V.A.; Lipatov, L.N.

    1984-01-01

    Deep inelastic (hard) processes are now at the epicenter of modern high-energy physics. These processes are governed by short-distance dynamics, which reveals the intrinsic structure of elementary particles. The theory of deep inelastic processes is now sufficiently well settled. The authors' aim was to give an effective tool to theoreticians and experimentalists who are engaged in high-energy physics. This book is intended primarily for physicists who are only beginning to study the field. To read the book, one should be acquainted with the Feynman diagram technique and with some particular topics from elementary particle theory (symmetries, dispersion relations, Regge pole theory, etc.). Theoretical consideration of deep inelastic processes is now based on quantum chromodynamics (QCD). At the same time, analysis of relevant physical phenomena demands a synthesis of QCD notions (quarks, gluons) with certain empirical characteristics. Therefore, the phenomenological approaches presented are a necessary stage in a study of this range of phenomena which should undoubtedly be followed by a detailed description based on QCD and electroweak theory. The authors were naturally unable to dwell on experimental data accumulated during the past decade of intensive investigations. Priority was given to results which allow a direct comparison with theoretical predictions. (Auth.)

  1. A comparative study of the radiation hardness of plastic scintillators for the upgrade of the Tile Calorimeter of the ATLAS detector

    Science.gov (United States)

    Liao, S.; Erasmus, R.; Jivan, H.; Pelwan, C.; Peters, G.; Sideras-Haddad, E.

    2015-10-01

The influence of radiation on the light transmittance of plastic scintillators was studied experimentally. The high optical transmittance of plastic scintillators makes them essential to the effective functioning of the Tile Calorimeter of the ATLAS detector at CERN. This significant role played by the scintillators makes this research imperative in the movement towards the upgrade of the Tile Calorimeter. The radiation damage of polyvinyl toluene (PVT) based plastic scintillators was studied, namely EJ-200, EJ-208 and EJ-260, all manufactured and provided to us by ELJEN Technology. In addition, in order to compare with scintillator brands currently in use at the ATLAS detector, two polystyrene (PS) based scintillators and an additional PVT based scintillator were also scrutinized in this study, namely Dubna, Protvino and Bicron, respectively. All the samples were irradiated using a 6 MeV proton beam at different doses at iThemba LABS Gauteng. The irradiation process was planned and mimicked through simulations using the SRIM program. In addition, transmission spectra for the irradiated and unirradiated samples of each grade were obtained and analyzed.

  2. Comprehensive hard materials

    CERN Document Server

    2014-01-01

Comprehensive Hard Materials deals with the production, uses and properties of the carbides, nitrides and borides of the refractory metals and those of titanium, as well as tools of ceramics, the superhard boron nitrides and diamond and related compounds. Articles include the technologies of powder production (including their precursor materials), milling, granulation, cold and hot compaction, sintering, hot isostatic pressing, hot-pressing, injection moulding, as well as the coating technologies for refractory metals, hard metals and hard materials. The characterization, testing, quality assurance and applications are also covered. Comprehensive Hard Materials provides meaningful insights on materials at the leading edge of technology. It aids continued research and development of these materials and as such it is a critical information resource to academics and industry professionals facing the technological challenges of the future. Hard materials operate at the leading edge of technology, and continued res...

  3. Comparing Fine-Grained Source Code Changes And Code Churn For Bug Prediction

    NARCIS (Netherlands)

    Giger, E.; Pinzger, M.; Gall, H.C.

    2011-01-01

    A significant amount of research effort has been dedicated to learning prediction models that allow project managers to efficiently allocate resources to those parts of a software system that most likely are bug-prone and therefore critical. Prominent measures for building bug prediction models are

  4. Long-Term Survival Prediction for Coronary Artery Bypass Grafting: Validation of the ASCERT Model Compared With The Society of Thoracic Surgeons Predicted Risk of Mortality.

    Science.gov (United States)

    Lancaster, Timothy S; Schill, Matthew R; Greenberg, Jason W; Ruaengsri, Chawannuch; Schuessler, Richard B; Lawton, Jennifer S; Maniar, Hersh S; Pasque, Michael K; Moon, Marc R; Damiano, Ralph J; Melby, Spencer J

    2018-05-01

The recently developed American College of Cardiology Foundation-Society of Thoracic Surgeons (STS) Collaboration on the Comparative Effectiveness of Revascularization Strategy (ASCERT) Long-Term Survival Probability Calculator is a valuable addition to existing short-term risk-prediction tools for cardiac surgical procedures but has yet to be externally validated. Institutional data of 654 patients aged 65 years or older undergoing isolated coronary artery bypass grafting between 2005 and 2010 were reviewed. Predicted survival probabilities were calculated using the ASCERT model. Survival data were collected using the Social Security Death Index and institutional medical records. Model calibration and discrimination were assessed for the overall sample and for risk-stratified subgroups based on (1) ASCERT 7-year survival probability and (2) the predicted risk of mortality (PROM) from the STS Short-Term Risk Calculator. Logistic regression analysis was performed to evaluate additional perioperative variables contributing to death. Overall survival was 92.1% (569 of 597) at 1 year and 50.5% (164 of 325) at 7 years. Calibration assessment found no significant differences between predicted and actual survival curves for the overall sample or for the risk-stratified subgroups, whether stratified by predicted 7-year survival or by PROM. Discriminative performance was comparable between the ASCERT and PROM models for 7-year survival prediction. The ASCERT model was thus validated for prediction of long-term survival after coronary artery bypass grafting in all risk groups. The widely used STS PROM performed comparably as a predictor of long-term survival. Both tools provide important information for preoperative decision making and patient counseling about potential outcomes after coronary artery bypass grafting. Copyright © 2018 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.

  5. Comparative Study of foF2 Measurements with IRI-2007 Model Predictions During Extended Solar Minimum

    Science.gov (United States)

    Zakharenkova, I. E.; Krankowski, A.; Bilitza, D.; Cherniak, Iu.V.; Shagimuratov, I.I.; Sieradzki, R.

    2013-01-01

The unusually deep and extended solar minimum of cycle 23/24 made it very difficult to predict the solar indices 1 or 2 years into the future. Most of the predictions were proven wrong by the actual observed indices. IRI gets its solar, magnetic, and ionospheric indices from an indices file that is updated twice a year. In recent years, due to the unusual solar minimum, predictions had to be corrected downward with every new indices update. In this paper we analyse how much the uncertainties in the predictability of solar activity indices affect the IRI outcome and how the IRI values calculated with predicted and observed indices compared to the actual measurements. Monthly median values of the F2 layer critical frequency (foF2) derived from the ionosonde measurements at the mid-latitude ionospheric station Juliusruh were compared with the International Reference Ionosphere (IRI-2007) model predictions. The analysis found that IRI provides reliable results that compare well with actual measurements when the definite (observed and adjusted) indices of solar activity are used, while IRI values based on earlier predictions of these indices noticeably overestimated the measurements during the solar minimum. One of the principal objectives of this paper is to direct attention of IRI users to update their solar activity indices files regularly. Use of an older index file can lead to serious IRI overestimations of F-region electron density during the recent extended solar minimum.

  6. Comparing the accuracy of perturbative and variational calculations for predicting fundamental vibrational frequencies of dihalomethanes

    Science.gov (United States)

    Krasnoshchekov, Sergey V.; Schutski, Roman S.; Craig, Norman C.; Sibaev, Marat; Crittenden, Deborah L.

    2018-02-01

Three dihalogenated methane derivatives (CH2F2, CH2FCl, and CH2Cl2) were used as model systems to compare and assess the accuracy of two different approaches for predicting observed fundamental frequencies: canonical operator Van Vleck vibrational perturbation theory (CVPT) and vibrational configuration interaction (VCI). For convenience and consistency, both methods employ the Watson Hamiltonian in rectilinear normal coordinates, expanding the potential energy surface (PES) as a Taylor series about equilibrium and constructing the wavefunction from a harmonic oscillator product basis. At the highest levels of theory considered here, fourth-order CVPT and VCI in a harmonic oscillator basis with up to 10 quanta of vibrational excitation, in conjunction with a 4-mode representation sextic force field (SFF-4MR) computed at MP2/cc-pVTZ with replacement CCSD(T)/aug-cc-pVQZ harmonic force constants, the agreement between computed fundamentals is close to 0.3 cm-1 on average, with a maximum difference of 1.7 cm-1. The major remaining accuracy-limiting factors are the accuracy of the underlying electronic structure model, followed by the incompleteness of the PES expansion. Nonetheless, computed and experimental fundamentals agree to within 5 cm-1, with an average difference of 2 cm-1, confirming the utility and accuracy of both theoretical models. One exception to this rule is the formally IR-inactive but weakly allowed (through Coriolis coupling) H-C-H out-of-plane twisting mode of dichloromethane, whose spectrum we therefore revisit and reassign. We also investigate convergence with respect to order of CVPT, VCI excitation level, and order of PES expansion, concluding that premature truncation substantially decreases accuracy, although VCI(6)/SFF-4MR results are still of acceptable accuracy, and some error cancellation is observed with CVPT2 using a quartic force field.

  7. Choosing algorithms for TB screening: a modelling study to compare yield, predictive value and diagnostic burden.

    Science.gov (United States)

    Van't Hoog, Anna H; Onozaki, Ikushi; Lonnroth, Knut

    2014-10-19

To inform the choice of an appropriate screening and diagnostic algorithm for tuberculosis (TB) screening initiatives in different epidemiological settings, we compare algorithms composed of currently available methods. For twelve algorithms composed of screening for symptoms (prolonged cough or any TB symptom) and/or chest radiography (CXR) abnormalities, with either sputum-smear microscopy (SSM) or Xpert MTB/RIF (XP) as the confirmatory test, we model algorithm outcomes and summarize the yield, number needed to screen (NNS) and positive predictive value (PPV) for different levels of TB prevalence. Screening for prolonged cough has low yield, 22% if confirmatory testing is by SSM and 32% if by XP, and a high NNS, exceeding 1000 if TB prevalence is ≤0.5%. Due to low specificity, the PPV of screening for any TB symptom followed by SSM is less than 50%, even if TB prevalence is 2%. CXR screening for TB abnormalities followed by XP has the highest case detection (87%) and lowest NNS, but is resource intensive. CXR as a second screen for symptom-screen positives improves efficiency. The ideal algorithm does not exist. The choice will be setting specific, for which this study provides guidance. Generally, an algorithm composed of CXR screening followed by confirmatory testing with XP can achieve the lowest NNS and highest PPV, and is the least amenable to setting-specific variation. However, resource requirements for tests and equipment may be prohibitive in some settings and a reason to opt for symptom screening and SSM. To better inform disease control programs we need empirical data to confirm the modeled yield, cost-effectiveness studies, transmission models and a better screening test.
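The yield, NNS and PPV quantities compared above follow directly from the sensitivity and specificity of a screening pathway and the TB prevalence of the setting. A minimal sketch of that arithmetic; the sensitivity and specificity values below are illustrative placeholders, not the paper's estimates.

```python
def screening_stats(sens, spec, prev, population=100_000):
    """True positives, positive predictive value, and number needed to
    screen to detect one true case, for one screening+confirmation pathway."""
    cases = population * prev
    true_pos = cases * sens
    false_pos = (population - cases) * (1 - spec)
    ppv = true_pos / (true_pos + false_pos)
    nns = population / true_pos
    return true_pos, ppv, nns

# Illustrative: a high-sensitivity pathway applied at 0.5% prevalence.
tp, ppv, nns = screening_stats(sens=0.87, spec=0.90, prev=0.005)
```

Running the same pathway at higher prevalence raises the PPV and lowers the NNS, which is why the abstract stresses that the best algorithm is setting specific.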

  8. A Multimethod Assessment of Juvenile Psychopathy: Comparing the Predictive Utility of the PCL:YV, YPI, and NEO PRI

    Science.gov (United States)

    Cauffman, Elizabeth; Kimonis, Eva R.; Dmitrieva, Julia; Monahan, Kathryn C.

    2009-01-01

    The current study compares 3 distinct approaches for measuring juvenile psychopathy and their utility for predicting short- and long-term recidivism among a sample of 1,170 serious male juvenile offenders. The assessment approaches compared a clinical interview method (the Psychopathy Checklist: Youth Version [PCL:YV]; Forth, Kosson, & Hare,…

  9. The Prediction of Consumer Buying Intentions: A Comparative Study of the Predictive Efficacy of Two Attitudinal Models. Faculty Working Paper No. 234.

    Science.gov (United States)

    Bhagat, Rabi S.; And Others

    The role of attitudes in the conduct of buyer behavior is examined in the context of two competitive models of attitude structure and attitude-behavior relationship. Specifically, the objectives of the study were to compare the Fishbein and Sheth models on the criteria of predictive as well as cross validities. Data on both the models were…

  10. Comparative analysis of modified PMV models and SET models to predict human thermal sensation in naturally ventilated buildings

    DEFF Research Database (Denmark)

    Gao, Jie; Wang, Yi; Wargocki, Pawel

    2015-01-01

In this paper, a comparative analysis was performed on the human thermal sensation estimated by modified predicted mean vote (PMV) models and modified standard effective temperature (SET) models in naturally ventilated buildings; the data were collected in a field study. These prediction models were... the expectancy factors for the extended PMV model and the extended SET model were from 0.770 to 0.974 and from 1.330 to 1.363, and the adaptive coefficients for the adaptive PMV model and the adaptive SET model were from 0.029 to 0.167 and from -0.213 to -0.195. In addition, the difference in thermal sensation... between the measured and predicted values using the modified PMV models exceeded 25%, while the difference between the measured thermal sensation and the predicted thermal sensation using modified SET models was approximately less than 25%. It is concluded that the modified SET models can predict human...

  11. Hard And Soft QCD Physics In ATLAS

    Directory of Open Access Journals (Sweden)

    Adomeit Stefanie

    2014-04-01

Hard and soft QCD results using proton-proton collisions recorded with the ATLAS detector at the LHC are reported. Charged-particle distributions and forward-backward correlations have been studied in low-luminosity minimum-bias data taken at centre-of-mass energies of √s = 0.9, 2.36 and 7 TeV. Recent measurements of underlying-event characteristics using charged-particle jets are also presented. The results are tested against various phenomenological soft QCD models implemented in Monte Carlo generators. A summary of hard QCD measurements involving high transverse momentum jets is also given. Inclusive jet and dijet cross-sections have been measured at a centre-of-mass energy of 7 TeV and are compared to expectations based on NLO pQCD calculations corrected for non-perturbative effects, as well as to NLO Monte Carlo predictions. Recent studies exploiting jet substructure techniques to identify hadronic decays of boosted massive particles are reported.

  12. Seismic signals hard clipping overcoming

    Science.gov (United States)

    Olszowa, Paula; Sokolowski, Jakub

    2018-01-01

In signal processing, clipping is understood as the phenomenon of limiting a signal beyond a certain threshold. It is often related to overloading of a sensor. Two particular types of clipping are recognized: soft and hard. Beyond the limiting value, soft clipping reduces the signal's real gain, while hard clipping stiffly sets the signal values at the limit. In both cases a certain amount of signal information is lost. Obviously, if one possesses a model which describes the considered signal and the threshold value (which might be slightly more difficult to obtain in the soft clipping case), an attempt to restore the signal can be made. Commonly it is assumed that seismic signals take the form of an impulse response of some specific system. This may lead to the belief that a sine wave may be the most appropriate function to fit in the clipped period. However, this should be tested. In this paper, the possibility of overcoming hard clipping in seismic signals originating from a geoseismic station belonging to an underground mine is considered. A set of raw signals is hard-clipped manually, and then several different functions are fitted and compared in terms of least squares. The results are then analysed.
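The restoration idea described above, fitting a candidate function to the surviving unclipped samples by least squares and using the fit to fill in the clipped segments, can be sketched for the sine-wave candidate. With a known angular frequency w, the model a*sin(wt) + b*cos(wt) is linear in a and b, so the fit reduces to 2x2 normal equations. The signal and threshold below are synthetic, not mine data.

```python
import math

def hard_clip(x, limit):
    """Stiffly pin every sample beyond +/-limit to the limit."""
    return [max(-limit, min(limit, v)) for v in x]

def fit_sine(t, y, w, limit):
    """Least-squares fit of a*sin(w t) + b*cos(w t) using only samples
    that survived clipping (|y| strictly below the threshold)."""
    pts = [(ti, yi) for ti, yi in zip(t, y) if abs(yi) < limit]
    s = [math.sin(w * ti) for ti, _ in pts]
    c = [math.cos(w * ti) for ti, _ in pts]
    ys = [yi for _, yi in pts]
    ss = sum(v * v for v in s)
    cc = sum(v * v for v in c)
    sc = sum(a * b for a, b in zip(s, c))
    sy = sum(a * b for a, b in zip(s, ys))
    cy = sum(a * b for a, b in zip(c, ys))
    det = ss * cc - sc * sc          # normal equations [[ss, sc], [sc, cc]]
    a = (sy * cc - cy * sc) / det
    b = (ss * cy - sc * sy) / det
    return a, b

# Synthetic 2*sin(5t) signal, hard-clipped at 1.5, then restored.
t = [0.01 * i for i in range(200)]
clean = [2.0 * math.sin(5.0 * ti) for ti in t]
clipped = hard_clip(clean, 1.5)
a, b = fit_sine(t, clipped, 5.0, 1.5)
restored = [a * math.sin(5.0 * ti) + b * math.cos(5.0 * ti) for ti in t]
```

With a noise-free synthetic signal the fit recovers the amplitude essentially exactly; on real seismic data the residual on the unclipped samples is what the paper's least-squares comparison of candidate functions would measure.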

  13. General Theory versus ENA Theory: Comparing Their Predictive Accuracy and Scope.

    Science.gov (United States)

    Ellis, Lee; Hoskin, Anthony; Hartley, Richard; Walsh, Anthony; Widmayer, Alan; Ratnasingam, Malini

    2015-12-01

    General theory attributes criminal behavior primarily to low self-control, whereas evolutionary neuroandrogenic (ENA) theory envisions criminality as being a crude form of status-striving promoted by high brain exposure to androgens. General theory predicts that self-control will be negatively correlated with risk-taking, while ENA theory implies that these two variables should actually be positively correlated. According to ENA theory, traits such as pain tolerance and muscularity will be positively associated with risk-taking and criminality while general theory makes no predictions concerning these relationships. Data from Malaysia and the United States are used to test 10 hypotheses derived from one or both of these theories. As predicted by both theories, risk-taking was positively correlated with criminality in both countries. However, contrary to general theory and consistent with ENA theory, the correlation between self-control and risk-taking was positive in both countries. General theory's prediction of an inverse correlation between low self-control and criminality was largely supported by the U.S. data but only weakly supported by the Malaysian data. ENA theory's predictions of positive correlations between pain tolerance, muscularity, and offending were largely confirmed. For the 10 hypotheses tested, ENA theory surpassed general theory in predictive scope and accuracy. © The Author(s) 2014.

  14. Influenza detection and prediction algorithms: comparative accuracy trial in Östergötland county, Sweden, 2008-2012.

    Science.gov (United States)

    Spreco, A; Eriksson, O; Dahlström, Ö; Timpka, T

    2017-07-01

Methods for the detection of influenza epidemics and prediction of their progress have seldom been comparatively evaluated using prospective designs. This study aimed to perform a prospective comparative trial of algorithms for the detection and prediction of increased local influenza activity. Data on clinical influenza diagnoses recorded by physicians and syndromic data from a telenursing service were used. Five detection and three prediction algorithms previously evaluated in public health settings were calibrated and then evaluated over 3 years. When applied to diagnostic data, only detection using the Serfling regression method and prediction using the non-adaptive log-linear regression method showed acceptable performance during winter influenza seasons. For the syndromic data, none of the detection algorithms displayed a satisfactory performance, while non-adaptive log-linear regression was the best performing prediction method. We conclude that evidence was found that the available algorithms for influenza detection and prediction display satisfactory performance when applied to local diagnostic data during winter influenza seasons. When applied to local syndromic data, the evaluated algorithms did not display consistent performance. Further evaluations and research on combining methods of these types in public health information infrastructures for 'nowcasting' (integrated detection and prediction) of influenza activity are warranted.
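As a much-simplified stand-in for the detection algorithms evaluated above (the actual Serfling method fits a cyclic regression baseline with harmonic terms to historical data), a threshold detector can be sketched: estimate a baseline from historical non-epidemic weekly counts and flag weeks whose counts exceed the mean plus 1.96 standard deviations. The counts below are invented for illustration.

```python
import statistics

def epidemic_threshold(history):
    """Alert threshold from historical non-epidemic weekly counts."""
    return statistics.mean(history) + 1.96 * statistics.stdev(history)

def detect(counts, history):
    """Indices of weeks whose counts exceed the alert threshold."""
    thr = epidemic_threshold(history)
    return [i for i, c in enumerate(counts) if c > thr]

history = [10, 12, 9, 11, 10, 13]      # calibration weeks (illustrative)
alerts = detect([11, 15, 30, 42], history)
```

Calibrating the threshold on past seasons and then scoring it prospectively on new weeks mirrors, in miniature, the calibrate-then-evaluate design of the trial.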

  15. Induced spherococcoid hard wheat

    International Nuclear Information System (INIS)

    Yanev, Sh.

    1981-01-01

    A mutant spherococcoid line has been obtained through irradiation of hard wheat seed with fast neutrons. It is distinguished by semispherical glumes and smaller grain; the plants have a low stem with erect leaves but shorter spikes and fewer spikelets than the initial cultivar. Good productive tillering and resistance to lodging contributed to a 23.5% higher yield. The line was superior to the standard and initial cultivars by 14.2% in protein content and by up to 22.8% in flour gluten. It has been successfully used in hybridization, producing high-yielding hard wheat lines resistant to lodging and with good technological and other indicators. This demonstrates the possibility of obtaining a spherococcoid mutant in tetraploid (hard) wheat outside the D-genome, as well as its suitability for hard wheat breeding to enhance protein content, resistance to lodging, etc. (author)

  16. Hard probes 2006 Asilomar

    CERN Multimedia

    2006-01-01

    "The second international conference on hard and electromagnetic probes of high-energy nuclear collisions was held June 9 to 16, 2006 at the Asilomar Conference grounds in Pacific Grove, California" (photo and 1/2 page)

  17. Comparing statistical and machine learning classifiers: alternatives for predictive modeling in human factors research.

    Science.gov (United States)

    Carnahan, Brian; Meyer, Gérard; Kuntz, Lois-Ann

    2003-01-01

    Multivariate classification models play an increasingly important role in human factors research. In the past, these models have been based primarily on discriminant analysis and logistic regression. Models developed from machine learning research offer the human factors professional a viable alternative to these traditional statistical classification methods. To illustrate this point, two machine learning approaches--genetic programming and decision tree induction--were used to construct classification models designed to predict whether or not a student truck driver would pass his or her commercial driver license (CDL) examination. The models were developed and validated using the curriculum scores and CDL exam performances of 37 student truck drivers who had completed a 320-hr driver training course. Results indicated that the machine learning classification models were superior to discriminant analysis and logistic regression in terms of predictive accuracy. Actual or potential applications of this research include the creation of models that more accurately predict human performance outcomes.
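    A minimal sketch of the kind of comparison the study describes: a decision tree versus logistic regression on pass/fail outcomes, scored by cross-validated accuracy. The data here are synthetic stand-ins for curriculum scores, not the study's 37-driver data set, and scikit-learn is assumed.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic curriculum scores and a noisy pass/fail label (illustrative only)
rng = np.random.default_rng(42)
scores = rng.uniform(50, 100, size=(200, 3))         # three curriculum scores
passed = (scores.mean(axis=1) + rng.normal(0, 5, 200) > 75).astype(int)

accuracy = {}
for model in (LogisticRegression(max_iter=1000),
              DecisionTreeClassifier(max_depth=3, random_state=0)):
    accuracy[type(model).__name__] = cross_val_score(
        model, scores, passed, cv=5).mean()
print({name: round(acc, 3) for name, acc in accuracy.items()})
```

    Which family wins depends on the data; the study found the machine learning models ahead on its particular CDL data set.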

  18. COMPAR

    International Nuclear Information System (INIS)

    Kuefner, K.

    1976-01-01

    COMPAR works on FORTRAN arrays with four indices: A = A(i,j,k,l) where, for each fixed k₀, l₀, only the 'plane' [A(i,j,k₀,l₀), i = 1..i_max, j = 1..j_max] is held in fast memory. Given two arrays A, B of this type, COMPAR has the capability to 1) re-norm A and B in different ways; 2) calculate the deviations epsilon defined as epsilon(i,j,k,l) := [A(i,j,k,l) - B(i,j,k,l)] / GEW(i,j,k,l), where GEW(i,j,k,l) may be chosen in three different ways; 3) calculate mean, standard deviation and maximum in the array epsilon (by several intermediate stages); 4) determine traverses in the array epsilon; 5) plot these traverses on a printer; 6) simplify plots of these traverses in the PLOTEASY system by creating input data blocks for that system. The main application of COMPAR is (so far) the comparison of two- and three-dimensional multigroup neutron flux fields. (orig.) [de
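    The core of steps 2) and 3) is compact in modern array notation. A sketch assuming NumPy in place of FORTRAN, with GEW chosen as the reference field B (i.e., relative deviations); the array shapes and values are illustrative.

```python
import numpy as np

def compare_fields(A, B, GEW):
    """Deviations between two 4-index arrays, normalized by a weight field,
    plus the summary statistics COMPAR reports."""
    eps = (A - B) / GEW
    return eps, eps.mean(), eps.std(), np.abs(eps).max()

A = np.full((4, 4, 2, 2), 1.00)        # e.g. one multigroup flux field
B = np.full((4, 4, 2, 2), 1.02)        # a second field to compare against
eps, mean, std, peak = compare_fields(A, B, GEW=B)   # relative deviation
print(round(mean, 4), round(peak, 4))
```

    A traverse (step 4) is then just a 1-D slice such as `eps[:, j0, k0, l0]`.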

  19. Hard coal; Steinkohle

    Energy Technology Data Exchange (ETDEWEB)

    Loo, Kai van de; Sitte, Andreas-Peter [Gesamtverband Steinkohle e.V., Herne (Germany)

    2013-04-01

    The year 2012 saw growth in the consumption of hard coal at both the national and international level. Worldwide, hard coal is still the number one energy source for power generation, which leads to an increasing demand for power plant coal. The conversion of hard coal into electricity also increased in this year. In contrast, the demand for coking coal and for coke from the steel industry continued to decline owing to market conditions. The increased use of coal for domestic power generation is due to the reduction of nuclear power, a relatively bad year for wind power, and reduced import prices together with low CO{sub 2} prices. These justified a significant price advantage for coal over natural gas in power plants. This was mainly due to the price erosion of inexpensive US coal, which was partly displaced on its domestic market by the expansion of shale gas and therefore sought an outlet for sales in Europe. The domestic hard coal industry has continued the process of adaptation and phase-out as scheduled. Two further hard coal mines were decommissioned in 2012. RAG Aktiengesellschaft (Herne, Federal Republic of Germany), which runs hard coal mining in Germany, is beginning preparations for the period after mining ends.

  20. Thermal spray coatings replace hard chrome

    International Nuclear Information System (INIS)

    Schroeder, M.; Unger, R.

    1997-01-01

    Hard chrome plating provides good wear and erosion resistance, as well as good corrosion protection and fine surface finishes. Until a few years ago, it could also be applied at a reasonable cost. However, because of the many environmental and financial sanctions that have been imposed on the process over the past several years, cost has been on a consistent upward trend, and is projected to continue to escalate. Therefore, it is very important to find a coating or a process that offers the same characteristics as hard chrome plating, but without the consequent risks. This article lists the benefits and limitations of hard chrome plating, and describes the performance of two thermal spray coatings (tungsten carbide and chromium carbide) that compared favorably with hard chrome plating in a series of tests. It also lists three criteria to determine whether plasma spray or hard chrome plating should be selected

  1. Physiologically-based, predictive analytics using the heart-rate-to-Systolic-Ratio significantly improves the timeliness and accuracy of sepsis prediction compared to SIRS.

    Science.gov (United States)

    Danner, Omar K; Hendren, Sandra; Santiago, Ethel; Nye, Brittany; Abraham, Prasad

    2017-04-01

    Enhancing the efficiency of diagnosis and treatment of severe sepsis by using physiologically-based, predictive analytical strategies has not been fully explored. We hypothesized that assessment of the heart-rate-to-systolic-ratio significantly increases the timeliness and accuracy of sepsis prediction after emergency department (ED) presentation. We evaluated the records of 53,313 ED patients from a large, urban teaching hospital between January and June 2015. The HR-to-systolic ratio was compared to SIRS criteria for sepsis prediction. There were 884 patients with discharge diagnoses of sepsis, severe sepsis, and/or septic shock. Variations in three presenting variables, heart rate, systolic BP and temperature, were determined to be primary early predictors of sepsis with a 74% (654/884) accuracy compared to 34% (304/884) using SIRS criteria (p < 0.0001) in confirmed septic patients. Physiologically-based predictive analytics improved the accuracy and expediency of sepsis identification via detection of variations in the HR-to-systolic ratio. This approach may lead to earlier sepsis workup and life-saving interventions. Copyright © 2017 Elsevier Inc. All rights reserved.
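    A hypothetical illustration of the two screens being compared: a heart-rate-to-systolic-ratio trigger versus the classic two-of-four SIRS criteria. The 0.9 ratio cutoff is an assumed value chosen for the sketch; the abstract does not state the threshold used.

```python
def hr_systolic_flag(hr, sbp, cutoff=0.9):
    """Flag possible sepsis when HR / systolic BP exceeds a cutoff (assumed 0.9)."""
    return hr / sbp > cutoff

def sirs_flag(temp_c, hr, rr, wbc):
    """Classic SIRS screen: positive when at least two of four criteria are met."""
    criteria = [temp_c > 38 or temp_c < 36,   # temperature derangement
                hr > 90,                      # tachycardia
                rr > 20,                      # tachypnea
                wbc > 12_000 or wbc < 4_000]  # leukocytosis / leukopenia
    return sum(criteria) >= 2

# A patient who trips the ratio screen but not SIRS (only HR is abnormal)
print(hr_systolic_flag(hr=118, sbp=105))                  # ratio ≈ 1.12 > 0.9
print(sirs_flag(temp_c=37.2, hr=118, rr=18, wbc=9_000))   # only 1 of 4 criteria
```

    Patients like this one are exactly where a ratio-based screen could flag sepsis earlier than SIRS, which is the pattern the study reports.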

  2. A behavioral economic reward index predicts drinking resolutions: moderation revisited and compared with other outcomes.

    Science.gov (United States)

    Tucker, Jalie A; Roth, David L; Vignolo, Mary J; Westfall, Andrew O

    2009-04-01

    Data were pooled from 3 studies of recently resolved community-dwelling problem drinkers to determine whether a behavioral economic index of the value of rewards available over different time horizons distinguished among moderation (n = 30), abstinent (n = 95), and unresolved (n = 77) outcomes. Moderation over 1- to 2-year prospective follow-up intervals was hypothesized to involve longer term behavior regulation processes than abstinence or relapse and to be predicted by more balanced preresolution monetary allocations between short-term and longer term objectives (i.e., drinking and saving for the future). Standardized odds ratios (ORs) based on changes in standard deviation units from a multinomial logistic regression indicated that increases on this "Alcohol-Savings Discretionary Expenditure" index predicted higher rates of abstinence (OR = 1.93, p = .004) and relapse (OR = 2.89) relative to moderation outcomes. The index had incremental utility in predicting moderation in complex models that included other established predictors. The study adds to evidence supporting a behavioral economic analysis of drinking resolutions and shows that a systematic analysis of preresolution spending patterns aids in predicting moderation.

  3. Comparing three attitude-behavior theories for predicting science teachers' intentions

    Science.gov (United States)

    Zint, Michaela

    2002-11-01

    Social psychologists' attitude-behavior theories can contribute to understanding science teachers' behaviors. Such understanding can, in turn, be used to improve professional development. This article describes leading attitude-behavior theories and summarizes results from past tests of these theories. A study predicting science teachers' intention to incorporate environmental risk education based on these theories is also reported. Data for that study were collected through a mail questionnaire (n = 1336, adjusted response rate = 80%) and analyzed using confirmatory factor and multiple regression analysis. All determinants of intention to act in the Theory of Reasoned Action and Theory of Planned Behavior and some determinants in the Theory of Trying predicted science teachers' environmental risk education intentions. Given the consistency of results across studies, the Theory of Planned Behavior augmented with past behavior is concluded to provide the best attitude-behavior model for predicting science teachers' intention to act. Thus, science teachers' attitude toward the behavior, perceived behavioral control, and subjective norm need to be enhanced to modify their behavior. Based on the Theory of Trying, improving their attitude toward the process and toward success, and expectations of success may also result in changes. Future research should focus on identifying determinants that can further enhance the ability of these theories to predict and explain science teachers' behaviors.

  4. Prediction of "BRCAness" in breast cancer by array comparative genomic hybridization

    NARCIS (Netherlands)

    Joosse, Simon Andreas

    2012-01-01

    Predicting the likelihood that an individual is a BRCA mutation carrier is the first step to genetic counseling, followed by germ-line mutation testing in many family cancer clinics. Individuals who have been diagnosed as BRCA mutation-positive are offered special medical care; however, clinical

  5. A comparative study of ANN and neuro-fuzzy for the prediction of ...

    Indian Academy of Sciences (India)

    Istanbul Technical University, Faculty of Civil Engineering, Hydraulics and Water Resources Division, Maslak 34469, Istanbul, Turkey. Singh et al (2005) examined the potential of ANN and neuro-fuzzy systems for the prediction of the dynamic constant of rockmass. However, the model proposed by them has ...

  6. Vaginal birth after caesarean section prediction models: a UK comparative observational study.

    Science.gov (United States)

    Mone, Fionnuala; Harrity, Conor; Mackie, Adam; Segurado, Ricardo; Toner, Brenda; McCormick, Timothy R; Currie, Aoife; McAuliffe, Fionnuala M

    2015-10-01

    Primarily, to assess the performance of three statistical models in predicting successful vaginal birth in patients attempting a trial of labour after one previous lower segment caesarean section (TOLAC). The statistically most reliable models were subsequently subjected to validation testing in a local antenatal population. A retrospective observational study was performed with study data collected from the Northern Ireland Maternity Service Database (NIMATs). The study population included all women who underwent a TOLAC (n=385) from 2010 to 2012 in a regional UK obstetric unit. Area under the curve (AUC) and correlation analysis were performed. Of the three prediction models evaluated, AUC calculations for the Smith et al., Grobman et al. and Troyer and Parisi models were 0.74, 0.72 and 0.65, respectively. Using the Smith et al. model, 52% of women had a low risk of caesarean section (CS) (predicted VBAC >72%) and 20% had a high risk of CS (predicted VBAC <60%), of whom 20% and 63% respectively had delivery by CS. The fit between observed and predicted outcomes in this study cohort was greatest for the Smith et al. and Grobman et al. models (Chi-square test, p=0.228 and 0.904), validating both within the population. The Smith et al. and Grobman et al. models could potentially be utilized within the UK to provide women with an informed choice when deciding on mode of delivery after a previous CS. Crown Copyright © 2015. Published by Elsevier Ireland Ltd. All rights reserved.

  7. A comparative analysis among computational intelligence techniques for dissolved oxygen prediction in Delaware River

    Directory of Open Access Journals (Sweden)

    Ehsan Olyaie

    2017-05-01

    Most of the water quality models previously developed and used for dissolved oxygen (DO) prediction are complex. Moreover, reliable data available to develop/calibrate new DO models are scarce. Therefore, there is a need to study and develop models that can handle easily measurable parameters of a particular site, even with short record lengths. In recent decades, computational intelligence techniques, as effective approaches for predicting complicated and significant indicators of the state of aquatic ecosystems such as DO, have brought about a great change in prediction. In this study, three different AI methods were used for DO prediction in the Delaware River at Trenton, USA: (1) two types of artificial neural networks (ANN), namely the multilayer perceptron (MLP) and radial basis function (RBF) networks; (2) an advancement of genetic programming, namely linear genetic programming (LGP); and (3) a support vector machine (SVM) technique. For evaluating the performance of the proposed models, the root mean square error (RMSE), Nash-Sutcliffe efficiency coefficient (NS), mean absolute relative error (MARE) and correlation coefficient statistics (R) were used to choose the best predictive model. The comparison of estimation accuracies of the various intelligence models illustrated that the SVM was able to develop the most accurate model for DO estimation. It was also found that the LGP model performs better than both ANN models. For example, the determination coefficient was 0.99 for the best SVM model, while it was 0.96, 0.91 and 0.81 for the best LGP, MLP and RBF models, respectively. In general, the results indicated that an SVM model can be employed satisfactorily for DO estimation.
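    A minimal sketch of this kind of model comparison, assuming scikit-learn and synthetic data standing in for easily measured water-quality inputs (the variables, their relationship to DO, and the SVM hyperparameters are all illustrative assumptions).

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.svm import SVR

# Synthetic site data: three measurable inputs and a nonlinear DO response
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(300, 3))
do = 8.0 - 3.0 * X[:, 0] + np.sin(4.0 * X[:, 1]) + rng.normal(0.0, 0.1, 300)

X_tr, X_te, y_tr, y_te = X[:200], X[200:], do[:200], do[200:]
rmse = {}
for model in (SVR(kernel="rbf", C=10.0), LinearRegression()):
    pred = model.fit(X_tr, y_tr).predict(X_te)
    rmse[type(model).__name__] = mean_squared_error(y_te, pred) ** 0.5
    print(type(model).__name__,
          round(rmse[type(model).__name__], 3),
          round(r2_score(y_te, pred), 3))
```

    On this nonlinear synthetic target the kernel SVM beats the linear baseline, mirroring the ranking the study found on real DO data.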

  8. Comparing human-Salmonella with plant-Salmonella protein-protein interaction predictions

    Directory of Open Access Journals (Sweden)

    Sylvia eSchleker

    2015-01-01

    Salmonellosis is the most frequent food-borne disease world-wide and can be transmitted to humans by a variety of routes, especially via animal and plant products. Salmonella bacteria are believed to use not only animal and human but also plant hosts, despite their evolutionary distance. This raises the question of whether Salmonella employs similar mechanisms in infecting these diverse hosts. Given that most of our understanding comes from its interaction with human hosts, we investigate here to what degree knowledge of Salmonella-human interactions can be transferred to the Salmonella-plant system. Recent publications on the analysis and prediction of Salmonella-host interactomes are reviewed. Putative protein-protein interactions (PPIs) between Salmonella and its human and Arabidopsis hosts were retrieved utilizing purely interolog-based approaches, in which predictions are inferred from available sequence and domain information of known PPIs, and machine learning approaches that integrate a larger set of useful information from different sources. Transfer learning is an especially suitable machine learning technique for predicting plant host targets from knowledge of human host targets. A comparison of the prediction results with transcriptomic data shows a clear overlap between the host proteins predicted to be targeted by PPIs and their gene ontology enrichment in both host species and regulation of gene expression. In particular, the cellular processes Salmonella interferes with in both plants and humans are catabolic processes. The details of how these processes are targeted, however, are quite different between the two organisms, as expected based on their evolutionary and habitat differences. Possible implications of this observation for the evolution of host-pathogen communication are discussed.
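    The interolog idea mentioned above is simple to sketch: a known pathogen-human interaction is transferred to the plant host when the human partner has a plant ortholog. All identifiers below are invented placeholders for the sketch, not real curated interactions or ortholog assignments.

```python
# Known pathogen-human PPIs (placeholder identifiers, not curated data)
known_ppis = {("SseF", "HsPROT1"), ("SopB", "HsPROT2")}

# Assumed human-to-Arabidopsis ortholog map (placeholder identifiers)
human_to_plant_orthologs = {"HsPROT1": "AtPROT1"}

# Interolog transfer: keep an interaction if the host partner has an ortholog
predicted_plant_ppis = {
    (pathogen, human_to_plant_orthologs[host])
    for pathogen, host in known_ppis
    if host in human_to_plant_orthologs
}
print(predicted_plant_ppis)
```

    Real pipelines add sequence- and domain-similarity thresholds before accepting a transfer; the machine learning approaches reviewed above go further by integrating additional evidence.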

  9. Comparative Analysis of Predictive Models for Liver Toxicity Using ToxCast Assays and Quantitative Structure-Activity Relationships (MCBIOS)

    Science.gov (United States)

    Comparative Analysis of Predictive Models for Liver Toxicity Using ToxCast Assays and Quantitative Structure-Activity Relationships Jie Liu1,2, Richard Judson1, Matthew T. Martin1, Huixiao Hong3, Imran Shah1 1National Center for Computational Toxicology (NCCT), US EPA, RTP, NC...

  10. Comparing the performance of 11 crop simulation models in predicting yield response to nitrogen fertilization

    DEFF Research Database (Denmark)

    Salo, T J; Palosuo, T; Kersebaum, K C

    2016-01-01

    Eleven widely used crop simulation models (APSIM, CERES, CROPSYST, COUP, DAISY, EPIC, FASSET, HERMES, MONICA, STICS and WOFOST) were tested using a spring barley (Hordeum vulgare L.) data set under varying nitrogen (N) fertilizer rates from three experimental years in the boreal climate of Jokioinen, Finland. This is the largest standardized crop model inter-comparison under different levels of N supply to date. The models were calibrated using data from 2002 and 2008, of which 2008 included six N rates ranging from 0 to 150 kg N/ha. Calibration data consisted of weather, soil, phenology, leaf area ... ranged from 170 to 870 kg/ha. During the test year 2009, most models failed to accurately reproduce the observed low yield without N fertilizer as well as the steep yield response to N applications. The multi-model predictions were closer to observations than most single-model predictions, but multi...

  11. Comparing methodologies for structural identification and fatigue life prediction of a highway bridge

    OpenAIRE

    Pai, Sai Ganesh Sarvotham; Nussbaumer, Alain; Smith, Ian F. C.

    2018-01-01

    Accurate measurement-data interpretation leads to increased understanding of structural behavior and enhanced asset-management decision making. In this paper, four data-interpretation methodologies, residual minimization, traditional Bayesian model updating, modified Bayesian model updating (with an L∞-norm-based Gaussian likelihood function), and error-domain model falsification (EDMF), a method that rejects models that have unlikely differences between predictions and measurements, are comp...

  12. Comparing Structural Identification Methodologies for Fatigue Life Prediction of a Highway Bridge

    OpenAIRE

    Pai, Sai G.S.; Nussbaumer, Alain; Smith, Ian F.C.

    2018-01-01

    Accurate measurement-data interpretation leads to increased understanding of structural behavior and enhanced asset-management decision making. In this paper, four data-interpretation methodologies, residual minimization, traditional Bayesian model updating, modified Bayesian model updating (with an L∞-norm-based Gaussian likelihood function), and error-domain model falsification (EDMF), a method that rejects models that have unlikely differences between predictions and measurements, are comp...

  13. Comparative analysis for the measured and the predicted relative sensitivity of rhodium In core detector

    International Nuclear Information System (INIS)

    Moon, Sang Rae; Cha, Kyoon Ho; Bae, Seong Man

    2012-01-01

    A self-powered neutron detector (SPND) is widely used for in-core flux monitoring in nuclear power plants. OPR1000 uses rhodium (Rh) as the emitter of the SPND. The SPND contains a neutron-sensitive metallic emitter surrounded by a ceramic insulator. On capturing a neutron, the Rh decays by emitting electrons which cross the sheath and produce a current. This current can be measured externally using a pico-ammeter. The sensitivity of the detectors is closely related to their geometry and material. The lifetime of an in-core detector is determined by calculating the relative sensitivity of the Rh detector. The Rh detector must be replaced before its burn-up has reached 66% of its original composition. To predict the Rh detector's relative sensitivity, the ANC code, an advanced nodal code capable of two-dimensional and three-dimensional calculations, is used. Rh detectors are replaced on the basis of the predicted sensitivity value calculated by the ANC code. When evaluating the life of Rh detectors using the ANC code, it is assumed that the uncertainty of the sensitivity calculation includes a measurement error of 5%. The analysis of measured and predicted data for the Rh detector's relative sensitivity shows that it is possible to reduce the assumed uncertainty

  14. A Kolmogorov-Smirnov Based Test for Comparing the Predictive Accuracy of Two Sets of Forecasts

    Directory of Open Access Journals (Sweden)

    Hossein Hassani

    2015-08-01

    This paper introduces a complementary statistical test for distinguishing between the predictive accuracy of two sets of forecasts. We propose a non-parametric test founded upon the principles of the Kolmogorov-Smirnov (KS) test, referred to as the KS Predictive Accuracy (KSPA) test. The KSPA test serves two distinct purposes. First, it seeks to determine whether there exists a statistically significant difference between the distributions of forecast errors; second, it exploits the principles of stochastic dominance to determine whether the forecasts with the lower error also report a stochastically smaller error than forecasts from a competing model, thereby enabling a distinction between the predictive accuracy of the forecasts. We perform a simulation study of the size and power of the proposed test and report results for different noise distributions, sample sizes and forecasting horizons. The simulation results indicate that the KSPA test is correctly sized and robust in the face of varying forecasting horizons and sample sizes, with significant accuracy gains reported especially in the case of small sample sizes. Real-world applications are also considered to illustrate the applicability of the proposed KSPA test in practice.
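    The first stage of the KSPA idea, testing whether two sets of forecast errors share a distribution, can be sketched directly with the two-sample KS test in SciPy. The error samples below are synthetic, and the published test's stochastic-dominance stage is not reproduced here.

```python
import numpy as np
from scipy.stats import ks_2samp

# Synthetic forecast errors from two competing models
rng = np.random.default_rng(7)
errors_model_a = rng.normal(0, 1.0, 500)   # smaller spread of errors
errors_model_b = rng.normal(0, 2.0, 500)   # larger spread of errors

# Two-sample KS test on the absolute errors of each model
stat, p = ks_2samp(np.abs(errors_model_a), np.abs(errors_model_b))
print(round(stat, 3), p < 0.01)
```

    A significant result here only says the error distributions differ; the KSPA test's second stage is what establishes which model's errors are stochastically smaller.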

  15. Comparative analysis for the measured and the predicted relative sensitivity of rhodium In core detector

    Energy Technology Data Exchange (ETDEWEB)

    Moon, Sang Rae; Cha, Kyoon Ho; Bae, Seong Man [Nuclear Reactor Safety Lab., KHNP Central Research Institute, Daejeon (Korea, Republic of)

    2012-10-15

    A self-powered neutron detector (SPND) is widely used for in-core flux monitoring in nuclear power plants. OPR1000 uses rhodium (Rh) as the emitter of the SPND. The SPND contains a neutron-sensitive metallic emitter surrounded by a ceramic insulator. On capturing a neutron, the Rh decays by emitting electrons which cross the sheath and produce a current. This current can be measured externally using a pico-ammeter. The sensitivity of the detectors is closely related to their geometry and material. The lifetime of an in-core detector is determined by calculating the relative sensitivity of the Rh detector. The Rh detector must be replaced before its burn-up has reached 66% of its original composition. To predict the Rh detector's relative sensitivity, the ANC code, an advanced nodal code capable of two-dimensional and three-dimensional calculations, is used. Rh detectors are replaced on the basis of the predicted sensitivity value calculated by the ANC code. When evaluating the life of Rh detectors using the ANC code, it is assumed that the uncertainty of the sensitivity calculation includes a measurement error of 5%. The analysis of measured and predicted data for the Rh detector's relative sensitivity shows that it is possible to reduce the assumed uncertainty.

  16. A Systematic Framework and Nanoperiodic Concept for Unifying Nanoscience: Hard/Soft Nanoelements, Superatoms, Meta-Atoms, New Emerging Properties, Periodic Property Patterns, and Predictive Mendeleev-like Nanoperiodic Tables.

    Science.gov (United States)

    Tomalia, Donald A; Khanna, Shiv N

    2016-02-24

    Development of a central paradigm is undoubtedly the single most influential force responsible for advancing Dalton's 19th century atomic/molecular chemistry concepts to the current maturity enjoyed by traditional chemistry. A similar central dogma for guiding and unifying nanoscience has been missing. This review traces the origins, evolution, and current status of such a critical nanoperiodic concept/framework for defining and unifying nanoscience. Based on parallel efforts and a mutual consensus now shared by both chemists and physicists, a nanoperiodic/systematic framework concept has emerged. This concept is based on the well-documented existence of discrete, nanoscale collections of traditional inorganic/organic atoms referred to as hard and soft superatoms (i.e., nanoelement categories). These nanometric entities are widely recognized to exhibit nanoscale atom mimicry features reminiscent of traditional picoscale atoms. All unique superatom/nanoelement physicochemical features are derived from quantized structural control defined by six critical nanoscale design parameters (CNDPs), namely, size, shape, surface chemistry, flexibility/rigidity, architecture, and elemental composition. These CNDPs determine all intrinsic superatom properties, their combining behavior to form stoichiometric nanocompounds/assemblies as well as to exhibit nanoperiodic properties leading to new nanoperiodic rules and predictive Mendeleev-like nanoperiodic tables, and they portend possible extension of these principles to larger quantized building blocks including meta-atoms.

  17. Hard breakup of the deuteron into two Δ isobars

    International Nuclear Information System (INIS)

    Granados, Carlos G.; Sargsian, Misak M.

    2011-01-01

    We study high-energy photodisintegration of the deuteron into two Δ isobars at large center-of-mass angles within the QCD hard rescattering model (HRM). According to the HRM, the process develops in three main steps: the photon knocks a quark from one of the nucleons in the deuteron; the struck quark rescatters off a quark from the other nucleon, sharing the high energy of the photon; then the energetic quarks recombine into two outgoing baryons which have large transverse momenta. Within the HRM, the cross section is expressed through the amplitude of pn→ΔΔ scattering, which we evaluated based on the quark-interchange model of hard hadronic scattering. Calculations show that the angular distribution and the strength of the photodisintegration are mainly determined by the properties of the pn→ΔΔ scattering. We predict that the cross section of the deuteron breakup to Δ⁺⁺Δ⁻ is 4-5 times larger than that of the breakup to the Δ⁺Δ⁰ channel. Also, the angular distributions for these two channels are markedly different. These can be compared with the predictions based on the assumption that the two hard Δ isobars result from the disintegration of preexisting ΔΔ components of the deuteron wave function, in which case one expects the angular distributions and cross sections of the breakup in both Δ⁺⁺Δ⁻ and Δ⁺Δ⁰ channels to be similar.

  18. Comparative Application of Radial Basis Function and Multilayer Perceptron Neural Networks to Predict Traffic Noise Pollution in Tehran Roads

    Directory of Open Access Journals (Sweden)

    Ali Mansourkhaki

    2018-01-01

    Noise pollution is a level of environmental noise that is considered a disturbing and annoying phenomenon for humans and wildlife. It is one of the environmental problems that has not been considered as harmful as air and water pollution. Compared with other pollutants, attempts to control noise pollution have largely been unsuccessful due to inadequate knowledge of its effects on humans, as well as the lack of clear standards in previous years. However, with an increase in traveling vehicles, the adverse impact of increasing noise pollution on human health is progressively emerging. Hence, investigators all around the world are seeking new approaches for predicting, estimating and controlling this problem, and various models have been proposed. Recently, the development of learning algorithms such as neural networks has led to novel solutions for this challenge. These algorithms provide intelligent performance based on the situations and input data, enabling the best result for predicting noise level to be obtained. In this study, two types of neural networks, multilayer perceptron and radial basis function, were developed for predicting the equivalent continuous sound level (LAeq) by measuring the traffic volume, average speed and percentage of heavy vehicles on some roads in west and northwest Tehran. Their prediction results were then compared based on the coefficient of determination (R2) and the Mean Squared Error (MSE). Although both networks predict the noise level with high accuracy, the multilayer perceptron neural network had a better performance based on the selected criteria.
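    The two ranking criteria named above, R² and MSE, can be computed directly. A small NumPy sketch with made-up LAeq values and hypothetical network predictions, purely to show how the two networks would be compared.

```python
import numpy as np

def mse(y, y_hat):
    """Mean squared error between observations and predictions."""
    return float(np.mean((y - y_hat) ** 2))

def r2(y, y_hat):
    """Coefficient of determination: 1 - SS_residual / SS_total."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return float(1 - ss_res / ss_tot)

y_obs = np.array([68.1, 70.4, 72.0, 66.5, 74.2])   # measured LAeq, dB(A) (made up)
y_mlp = np.array([68.0, 70.1, 71.6, 67.0, 73.8])   # hypothetical MLP predictions
y_rbf = np.array([67.2, 71.5, 70.9, 68.3, 72.9])   # hypothetical RBF predictions

print(mse(y_obs, y_mlp) < mse(y_obs, y_rbf))       # MLP fits better on this toy data
print(round(r2(y_obs, y_mlp), 3))
```

    The model with the lower MSE and higher R² on held-out data is preferred, which is how the study ranks the MLP ahead of the RBF network.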

  19. A Forging Hardness Dispersion Effect on the Energy Consumption of Machining

    Directory of Open Access Journals (Sweden)

    L. D. Mal'kova

    2015-01-01

    The aim of this work is to evaluate the hardness dispersion of forgings to be machined and to analyse the impact of this dispersion on the resulting power consumption of cutting. The paper studies the hardness values of three kinds of parts for automotive manufacturing, with a sample of n = 100 pieces of each part. Analysis of the measurements showed that 46%-93% of the parts meet the hardness range defined by the work-piece working drawing. It was found that the hardness within one batch of forgings is dispersed, with a distribution governed by the normal law. The work provides calculations for machining the external cylindrical surfaces of the considered parts, adopting the processing parameters used by the enterprise. Because of the dispersion of work-piece hardness values, the power consumption of machining is a function of the random hardness variable and is itself a random variable. Two types of samples are considered, namely the full sample and the sample of values that meet the hardness requirements. The coefficient of variation for the samples that meet the technical requirements for hardness is lower than for the full samples, so their average value is a more reliable characteristic of the set. It was also found that, to ensure a reliable prediction of power consumption when designing manufacturing processes, it is necessary to reduce the tolerance range of work-piece hardness to the limit. The work gives a comparative evaluation of the electric power consumed per unit cylindrical surface of the parts under consideration, using as a criterion the relative change in the electric power consumed at the minimum and maximum hardness values. It is found that, with hardness of machined work-pieces varying within the tolerance, the change in power consumption for machining a unit surface reaches 16%, while for hardness outside the specified range it reaches 47%.
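    The sampling comparison in the abstract, a lower coefficient of variation once out-of-tolerance forgings are excluded, is easy to illustrate. The hardness values and tolerance band below are invented for the sketch, not taken from the study.

```python
import numpy as np

def cv(x):
    """Coefficient of variation: standard deviation relative to the mean."""
    return float(np.std(x) / np.mean(x))

# Synthetic batch of 100 forgings with normally distributed hardness (HB units)
rng = np.random.default_rng(3)
hb_full = rng.normal(200, 15, 100)

lo, hi = 180, 220                                  # assumed tolerance band
hb_ok = hb_full[(hb_full >= lo) & (hb_full <= hi)] # forgings within tolerance

print(len(hb_ok), cv(hb_ok) < cv(hb_full))         # truncation reduces relative spread
```

    Because power consumption tracks hardness, the narrower the accepted hardness band, the more reliable the predicted average consumption, which is the study's practical recommendation.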

  20. Campylobacter fetus subspecies: Comparative genomics and prediction of potential virulence targets

    DEFF Research Database (Denmark)

    Ali, Amjad; Soares, Siomar C.; Santos, Anderson R.

    2012-01-01

    . The potential candidate factors identified for attenuation and/or subunit vaccine development against C. fetus subspecies contain: nucleoside diphosphate kinase (Ndk), type IV secretion systems (T4SS), outer membrane proteins (OMP), substrate binding proteins CjaA and CjaC, surface array proteins, sap gene......, and cytolethal distending toxin (CDT). Significantly, many of those genes were found in genomic regions with signals of horizontal gene transfer and, therefore, predicted as putative pathogenicity islands. We found CRISPR loci and dam genes in an island specific for C. fetus subsp. fetus, and T4SS and sap genes...

  1. A comparative study of various inflow boundary conditions and turbulence models for wind turbine wake predictions

    Science.gov (United States)

    Tian, Lin-Lin; Zhao, Ning; Song, Yi-Lei; Zhu, Chun-Ling

    2018-05-01

    This work is devoted to performing a systematic sensitivity analysis of different turbulence models and various inflow boundary conditions in predicting the wake flow behind a horizontal axis wind turbine represented by an actuator disc (AD). The tested turbulence models are the standard k-𝜀 model and the Reynolds Stress Model (RSM). A single wind turbine immersed in both uniform flows and in modeled atmospheric boundary layer (ABL) flows is studied. Simulation results are validated against the field experimental data in terms of wake velocity and turbulence intensity.

  2. Comparative analysis of methods for classification in predicting the quality of bread

    OpenAIRE

    E. A. Balashova; V. K. Bitjukov; E. A. Savvina

    2013-01-01

    A comparative analysis of classification methods (two-stage cluster analysis, discriminant analysis, and neural networks) was performed. A system of informative features that classifies with a minimum of errors has been proposed.

  3. Comparative analysis of methods for classification in predicting the quality of bread

    Directory of Open Access Journals (Sweden)

    E. A. Balashova

    2013-01-01

    Full Text Available A comparative analysis of classification methods (two-stage cluster analysis, discriminant analysis, and neural networks) was performed. A system of informative features that classifies with a minimum of errors has been proposed.

  4. Soft and hard pomerons

    International Nuclear Information System (INIS)

    Maor, Uri; Tel Aviv Univ.

    1995-09-01

    The role of s-channel unitarity screening corrections, calculated in the eikonal approximation, is investigated for soft Pomeron exchange responsible for elastic and diffractive hadron scattering in the high energy limit. We examine the differences between our results and those obtained from the supercritical Pomeron-Regge model with no such corrections. It is shown that screening saturation is attained at different scales for different channels. We then proceed to discuss the new HERA data on hard (PQCD) Pomeron diffractive channels and discuss the relationship between the soft and hard Pomerons and the relevance of our analysis to this problem. (author). 18 refs, 9 figs, 1 tab

  5. Hard exclusive QCD processes

    Energy Technology Data Exchange (ETDEWEB)

    Kugler, W.

    2007-01-15

    Hard exclusive processes in high energy electron-proton scattering offer the opportunity to access a new generation of parton distributions, the so-called generalized parton distributions (GPDs). These functions provide more detailed information about the structure of the nucleon than the usual PDFs obtained from DIS. In this work we present a detailed analysis of exclusive processes, especially of hard exclusive meson production. We investigated the influence of exclusively produced mesons on the semi-inclusive production of mesons at fixed-target experiments like HERMES. Furthermore, we give a detailed analysis of higher-order corrections (NLO) for the exclusive production of mesons over a very broad range of kinematics. (orig.)

  6. Hard-hat day

    CERN Multimedia

    2003-01-01

    CERN will be organizing a special information day on Friday, 27th June, designed to promote the wearing of hard hats and ensure that they are worn correctly. A new prevention campaign will also be launched.The event will take place in the hall of the Main Building from 11.30 a.m. to 2.00 p.m., when you will be able to come and try on various models of hard hat, including some of the very latest innovative designs, ask questions and pass on any comments and suggestions.

  7. Hard scattering in γp interactions

    International Nuclear Information System (INIS)

    Ahmed, T.; Andreev, V.; Andrieu, B.

    1992-10-01

    We report on the investigation of the final state in interactions of quasi-real photons with protons. The data were taken with the H1 detector at the HERA ep collider. Evidence for hard interactions is seen in both single particle spectra and jet formation. The data can best be described by inclusion of resolved photon processes as predicted by QCD. (orig.)

  8. Model predictions of metal speciation in freshwaters compared to measurements by in situ techniques.

    NARCIS (Netherlands)

    Unsworth, Emily R; Warnken, Kent W; Zhang, Hao; Davison, William; Black, Frank; Buffle, Jacques; Cao, Jun; Cleven, Rob; Galceran, Josep; Gunkel, Peggy; Kalis, Erwin; Kistler, David; Leeuwen, Herman P van; Martin, Michel; Noël, Stéphane; Nur, Yusuf; Odzak, Niksa; Puy, Jaume; Riemsdijk, Willem van; Sigg, Laura; Temminghoff, Erwin; Tercier-Waeber, Mary-Lou; Toepperwien, Stefanie; Town, Raewyn M; Weng, Liping; Xue, Hanbin

    2006-01-01

    Measurements of trace metal species in situ in a softwater river, a hardwater lake, and a hardwater stream were compared to the equilibrium distribution of species calculated using two models, WHAM 6, incorporating humic ion binding model VI and visual MINTEQ incorporating NICA-Donnan. Diffusive

  9. Predicting the Effects of Comparable Worth Programs on Female Labor Supply.

    Science.gov (United States)

    Nakamura, Alice; Nakamura, Masao

    1989-01-01

    Surveys theories in labor economics about how the female labor supply is affected by the wage offers that women receive. Summarizes the implications concerning expected effects of comparable worth wage adjustments on female labor supply. Examines empirical evidence pertaining to the theory of female labor supply. (JS)

  10. Reconciled Rat and Human Metabolic Networks for Comparative Toxicogenomics and Biomarker Predictions

    Science.gov (United States)

    2017-02-08


  11. Hard times; Schwere Zeiten

    Energy Technology Data Exchange (ETDEWEB)

    Grunwald, Markus

    2012-10-02

    The prices of silicon and solar wafers keep dropping. According to market research specialist IMS research, this is the result of weak traditional solar markets and global overcapacities. While many manufacturers are facing hard times, big producers of silicon are continuing to expand.

  12. Hardness of Clustering

    Indian Academy of Sciences (India)

    Both k-means and k-medians are intractable (when n and d are both inputs, even for k = 2). The best known deterministic algorithms are based on Voronoi partitioning. Hence the need for approximation: algorithms that get "close" to optimal.
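Since the exact problem is intractable, practice falls back on approximation heuristics. Below is a minimal sketch of one such heuristic, Lloyd's algorithm for 1-D points; it illustrates the approximation idea and is not one of the Voronoi-partitioning exact algorithms the note refers to.

```python
def lloyd_1d(points, centers, iters=50):
    """Alternate nearest-center assignment and mean-update steps."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            # assign p to its nearest center (its Voronoi cell)
            i = min(range(len(centers)), key=lambda j: abs(p - centers[j]))
            clusters[i].append(p)
        new_centers = [sum(c) / len(c) if c else centers[i]
                       for i, c in enumerate(clusters)]
        if new_centers == centers:  # converged to a local optimum
            break
        centers = new_centers
    return centers

centers = lloyd_1d([1.0, 2.0, 3.0, 10.0, 11.0, 12.0], centers=[0.0, 5.0])
# the two centers settle on the two obvious groups
```

Lloyd's algorithm only guarantees a local optimum, which is exactly the "close to optimal" compromise the note describes.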

  13. Rock-hard coatings

    OpenAIRE

    Muller, M.

    2007-01-01

    Aircraft jet engines have to be able to withstand infernal conditions. Extreme heat and bitter cold tax coatings to the limit. Materials expert Dr Ir. Wim Sloof fits atoms together to develop rock-hard coatings. The latest invention in this field is known as ceramic matrix composites. Sloof has signed an agreement with a number of parties to investigate this material further.

  14. Rock-hard coatings

    NARCIS (Netherlands)

    Muller, M.

    2007-01-01

    Aircraft jet engines have to be able to withstand infernal conditions. Extreme heat and bitter cold tax coatings to the limit. Materials expert Dr Ir. Wim Sloof fits atoms together to develop rock-hard coatings. The latest invention in this field is known as ceramic matrix composites. Sloof has

  15. Hardness and excitation energy

    Indian Academy of Sciences (India)

    It is shown that the first excitation energy can be given by the Kohn-Sham hardness (i.e. the energy difference of the ground-state lowest unoccupied and highest occupied levels) plus an extra term coming from the partial derivative of the ensemble exchange-correlation energy with respect to the weighting factor in the ...

  16. Comparative evaluation of urinary PCA3 and TMPRSS2: ERG scores and serum PHI in predicting prostate cancer aggressiveness.

    Science.gov (United States)

    Tallon, Lucile; Luangphakdy, Devillier; Ruffion, Alain; Colombel, Marc; Devonec, Marian; Champetier, Denis; Paparel, Philippe; Decaussin-Petrucci, Myriam; Perrin, Paul; Vlaeminck-Guillem, Virginie

    2014-07-30

    It has been suggested that urinary PCA3 and TMPRSS2:ERG fusion tests and serum PHI correlate to cancer aggressiveness-related pathological criteria at prostatectomy. To evaluate and compare their ability in predicting prostate cancer aggressiveness, PHI and urinary PCA3 and TMPRSS2:ERG (T2) scores were assessed in 154 patients who underwent radical prostatectomy for biopsy-proven prostate cancer. Univariate and multivariate analyses using logistic regression and decision curve analyses were performed. All three markers were predictors of a tumor volume ≥0.5 mL. Only PHI predicted Gleason score ≥7. T2 score and PHI were both independent predictors of extracapsular extension (≥pT3), while multifocality was only predicted by PCA3 score. Moreover, when compared to a base model (age, digital rectal examination, serum PSA, and Gleason sum at biopsy), the addition of both PCA3 score and PHI to the base model induced a significant increase (+12%) when predicting tumor volume >0.5 mL. PHI and urinary PCA3 and T2 scores can be considered as complementary predictors of cancer aggressiveness at prostatectomy.

  17. Comparative Evaluation of Urinary PCA3 and TMPRSS2: ERG Scores and Serum PHI in Predicting Prostate Cancer Aggressiveness

    Directory of Open Access Journals (Sweden)

    Lucile Tallon

    2014-07-01

    Full Text Available It has been suggested that urinary PCA3 and TMPRSS2:ERG fusion tests and serum PHI correlate to cancer aggressiveness-related pathological criteria at prostatectomy. To evaluate and compare their ability in predicting prostate cancer aggressiveness, PHI and urinary PCA3 and TMPRSS2:ERG (T2) scores were assessed in 154 patients who underwent radical prostatectomy for biopsy-proven prostate cancer. Univariate and multivariate analyses using logistic regression and decision curve analyses were performed. All three markers were predictors of a tumor volume ≥0.5 mL. Only PHI predicted Gleason score ≥7. T2 score and PHI were both independent predictors of extracapsular extension (≥pT3), while multifocality was only predicted by PCA3 score. Moreover, when compared to a base model (age, digital rectal examination, serum PSA, and Gleason sum at biopsy), the addition of both PCA3 score and PHI to the base model induced a significant increase (+12%) when predicting tumor volume >0.5 mL. PHI and urinary PCA3 and T2 scores can be considered as complementary predictors of cancer aggressiveness at prostatectomy.

  18. Application of analytical methods for determination of hardness distribution in welded joint made of S1100QL steel

    Directory of Open Access Journals (Sweden)

    Piekarska Wiesława

    2018-01-01

    Full Text Available The prediction of the hardness distribution in the cross-section of a welded joint made of S1100QL steel is performed in this study on the basis of analytical methods. An analytical CCT diagram and the volume fraction of each phase of S1100QL steel as a function of cooling time t8/5 are determined. A numerical simulation of the welding process is performed in ABAQUS. Thermal cycles and the temperature field in welded joints are determined. Prediction of the hardness distribution in the cross-section of the joint is performed on the basis of the obtained cooling times t8/5. Results of numerical simulations are compared with experimentally obtained results.
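The final analytical step, combining phase volume fractions into a hardness value, is commonly done with a rule of mixtures. A hedged sketch follows, with assumed phase hardnesses and fractions; the paper's actual CCT-derived values are not reproduced here.

```python
def mixture_hardness(fractions, phase_hardness):
    """Rule of mixtures: HV = sum_i f_i * HV_i, with fractions summing to 1."""
    assert abs(sum(fractions.values()) - 1.0) < 1e-9
    return sum(f * phase_hardness[phase] for phase, f in fractions.items())

# Assumed phase hardnesses (HV) and volume fractions at some cooling time t8/5:
phase_hv = {"martensite": 450.0, "bainite": 320.0, "ferrite_pearlite": 200.0}
fractions = {"martensite": 0.6, "bainite": 0.3, "ferrite_pearlite": 0.1}

hv = mixture_hardness(fractions, phase_hv)
# 0.6*450 + 0.3*320 + 0.1*200 = 386 HV
```

Shorter cooling times t8/5 shift the fractions toward martensite and thus raise the predicted hardness, which is the qualitative trend such analytical models capture.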

  19. A comparative study of finite element methodologies for the prediction of torsional response of bladed rotors

    International Nuclear Information System (INIS)

    Scheepers, R.; Heyns, P. S.

    2016-01-01

    The prevention of torsional vibration-induced fatigue damage to turbo-generators requires determining natural frequencies by either field testing or mathematical modelling. Torsional excitation methods, measurement techniques and mathematical modelling are active fields of research. However, these aspects are mostly considered in isolation and often without experimental verification. The objective of this work is to compare one-dimensional (1D), full three-dimensional (3D) and 3D cyclic symmetric (3DCS) finite element (FE) methodologies for torsional vibration response. Results are compared to experimental results for a small-scale test rotor. It is concluded that 3D approaches are feasible given current computing technology and require less simplification, with potentially increased accuracy. The accuracy of 1D models may be reduced due to simplifications, but faster solution times are obtained. For high levels of accuracy, model updating using field test results is recommended.

  20. Predicting transcription factor binding sites using local over-representation and comparative genomics

    Directory of Open Access Journals (Sweden)

    Touzet Hélène

    2006-08-01

    Full Text Available Abstract Background Identifying cis-regulatory elements is crucial to understanding gene expression, which highlights the importance of the computational detection of overrepresented transcription factor binding sites (TFBSs) in coexpressed or coregulated genes. However, this is a challenging problem, especially when considering higher eukaryotic organisms. Results We have developed a method, named TFM-Explorer, that searches for locally overrepresented TFBSs in a set of coregulated genes, which are modeled by profiles provided by a database of position weight matrices. The novelty of the method is that it takes advantage of spatial conservation in the sequence and supports multiple species. The efficiency of the underlying algorithm and its robustness to noise allow weak regulatory signals to be detected in large heterogeneous data sets. Conclusion TFM-Explorer provides an efficient way to predict TFBS overrepresentation in related sequences. Promising results were obtained in a variety of examples in human, mouse, and rat genomes. The software is publicly available at http://bioinfo.lifl.fr/TFM-Explorer.
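The profile matching that such tools build on can be sketched briefly: each sequence window is scored by summing per-position log-odds weights from a position weight matrix. The 3-column matrix and sequence below are invented toy data; this is not TFM-Explorer's actual scoring code.

```python
def pwm_score(window, pwm):
    """Sum the log-odds weight of the base observed at each position."""
    return sum(pwm[i][base] for i, base in enumerate(window))

def best_hit(sequence, pwm):
    """Slide the PWM along the sequence; return (best score, offset)."""
    w = len(pwm)
    scores = [(pwm_score(sequence[i:i + w], pwm), i)
              for i in range(len(sequence) - w + 1)]
    return max(scores)

# Made-up log-odds matrix strongly favoring the motif "TGA":
pwm = [
    {"A": -1.0, "C": -1.0, "G": -1.0, "T": 2.0},
    {"A": -1.0, "C": -1.0, "G": 2.0, "T": -1.0},
    {"A": 2.0, "C": -1.0, "G": -1.0, "T": -1.0},
]
score, offset = best_hit("CCTGACC", pwm)  # motif "TGA" starts at offset 2
```

Overrepresentation methods then ask whether such high-scoring windows occur more often in the coregulated set than chance would predict.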

  1. Do GCI indicators predict SME creation? A Western Balkans cross-country comparative analysis

    Directory of Open Access Journals (Sweden)

    Fëllënza Lushaku

    2016-03-01

    Full Text Available In their early stages, SMEs were seen as an insignificant supplement to the supply of large businesses, whereas today they play a very important social and economic role because of their contribution to job creation. These contributions are especially valuable in times of crisis and rising unemployment. In Kosovo and the Western Balkan countries, including Albania, Macedonia, Montenegro, Serbia, and Bosnia and Herzegovina, the development of SMEs can help in facing many challenges: the effects of inequality, high unemployment, and demographic pressures. In addition, SME development can strengthen competitiveness and productivity while also promoting growth in income per capita. Beyond the positive perception of the creation of small and medium enterprises, it is also indispensable to consider their extinction rate, since they are the category of businesses most affected, especially in the initial stages. It is argued that net SME creation, and cross-country differences in the relationship between new and extinct businesses, can serve as a recommendation for policy makers seeking to create a favorable climate for small and medium enterprises. GCI indicators, which measure global competitiveness, are used to determine whether the competitiveness climate predicts the development of SMEs.

  2. A comparative approach to predicting effective dielectric, piezoelectric and elastic properties of PZT/PVDF composites

    International Nuclear Information System (INIS)

    Ahmad, Zeeshan; Prasad, Ashutosh; Prasad, K.

    2009-01-01

    The present study addresses the problem of quantitative prediction of the effective relative permittivity, dielectric loss factor, piezoelectric charge coefficient, and Young's modulus of PZT/PVDF diphasic ceramic-polymer composites as a function of the volume fraction of PZT in the different compositions. Theoretical results for effective relative permittivity derived from several dielectric mixture equations, like those of Knott, Rother-Lichtenecker, Bruggeman, Maxwell-Wagner-Webmann-Skipetrov or Dias-Dasgupta, Furukawa, Lewin, Wiener, Jayasundere-Smith, modified Cule-Torquato, Taylor, Poon-Shin, and Rao et al., were fitted to the experimental data taken from previous works of Yamada et al. Similarly, the results for the effective piezoelectric coefficient and Young's modulus, derived from different appropriate equations, were fitted to the corresponding experimental data taken from the literature. The study revealed that only a few equations, such as the modified Rother-Lichtenecker, Dias-Dasgupta, and Rao equations for the dielectric and piezoelectric properties, together with the four new equations developed in the present study for the elastic property (Young's modulus), fitted the corresponding experimental results well. Further, regression analyses of the acceptable data showed that in most cases a third-order polynomial provided the best fit.
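As an illustration of the class of mixture equations compared, the logarithmic Lichtenecker rule for a two-phase composite can be sketched as follows; the PZT and PVDF permittivity values are assumed round numbers, not the experimental data of Yamada et al.

```python
import math

def lichtenecker(eps1, eps2, v1):
    """Logarithmic mixture rule for a diphasic composite (v1 + v2 = 1):
    ln(eps_eff) = v1 * ln(eps1) + v2 * ln(eps2)."""
    v2 = 1.0 - v1
    return math.exp(v1 * math.log(eps1) + v2 * math.log(eps2))

eps_pzt, eps_pvdf = 1200.0, 10.0   # assumed relative permittivities
eps_eff = lichtenecker(eps_pzt, eps_pvdf, v1=0.5)
# at equal fractions the rule reduces to the geometric mean, sqrt(1200 * 10)
```

The other equations in the list (Bruggeman, Maxwell-Wagner, etc.) differ in how they weight the two phases, which is exactly what the fitting exercise in the abstract discriminates between.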

  3. Predicting groundwater level fluctuations with meteorological effect implications—A comparative study among soft computing techniques

    Science.gov (United States)

    Shiri, Jalal; Kisi, Ozgur; Yoon, Heesung; Lee, Kang-Kun; Hossein Nazemi, Amir

    2013-07-01

    The knowledge of groundwater table fluctuations is important in agricultural lands as well as in studies related to groundwater utilization and management. This paper investigates the abilities of Gene Expression Programming (GEP), the Adaptive Neuro-Fuzzy Inference System (ANFIS), Artificial Neural Networks (ANN) and Support Vector Machine (SVM) techniques for groundwater level forecasting at lead times from the following day up to 7 days. Several input combinations comprising water table level, rainfall and evapotranspiration values from Hongcheon Well station (South Korea), covering a period of eight years (2001-2008), were used to develop and test the applied models. The data from the first six years were used for developing (training) the applied models and the last two years of data were reserved for testing. A comparison was also made between the forecasts provided by these models and the Auto-Regressive Moving Average (ARMA) technique. Based on the comparisons, it was found that the GEP models could be employed successfully in forecasting water table level fluctuations up to 7 days ahead.
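All the compared data-driven models consume the same kind of supervised pairs: lagged observations as inputs and the level several days ahead as the target. A sketch of that common preprocessing step, on an invented series (this is not the authors' code):

```python
def make_lagged_pairs(series, n_lags, horizon):
    """Return (inputs, targets): each input is the n_lags most recent values,
    each target is the value `horizon` steps ahead of the last lag."""
    X, y = [], []
    for t in range(n_lags - 1, len(series) - horizon):
        X.append(series[t - n_lags + 1: t + 1])
        y.append(series[t + horizon])
    return X, y

# Invented daily water-table levels (m):
levels = [10.0, 10.2, 10.1, 10.4, 10.6, 10.5, 10.8]
X, y = make_lagged_pairs(levels, n_lags=3, horizon=1)
# first pair: inputs [10.0, 10.2, 10.1] -> target 10.4
```

Changing `horizon` from 1 to 7 produces the 1-day- to 7-day-ahead forecasting tasks the study compares; only the downstream regressor (GEP, ANFIS, ANN, SVM, or ARMA) differs.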

  4. Vibrational dynamics of icosahedrally symmetric biomolecular assemblies compared with predictions based on continuum elasticity.

    Science.gov (United States)

    Yang, Zheng; Bahar, Ivet; Widom, Michael

    2009-06-03

    Coarse-grained elastic network models elucidate the fluctuation dynamics of proteins around their native conformations. Low-frequency collective motions derived by simplified normal mode analysis are usually involved in biological function, and these motions often possess noteworthy symmetries related to the overall shape of the molecule. Here, insights into these motions and their frequencies are sought by considering continuum models with appropriate symmetry and boundary conditions to approximately represent the true atomistic molecular structure. We solve the elastic wave equations analytically for the case of spherical symmetry, yielding a symmetry-based classification of molecular motions together with explicit predictions for their vibrational frequencies. We address the case of icosahedral symmetry as a perturbation to the spherical case. Applications to lumazine synthase, satellite tobacco mosaic virus, and brome mosaic virus show that the spherical elastic model efficiently provides insights on collective motions that are otherwise obtained by detailed elastic network models. A major utility of the continuum models is the possibility of estimating macroscopic material properties such as the Young's modulus or Poisson's ratio for different types of viruses.
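The continuum treatment described rests on the elastic wave equation for a homogeneous isotropic medium, whose standard (textbook) form is the Navier equation; the form below is a general reference, not reproduced from the paper:

```latex
% Navier equation for the displacement field u(r, t); rho is the density,
% lambda and mu are the Lame constants of the medium:
\rho \, \frac{\partial^2 \mathbf{u}}{\partial t^2}
  = (\lambda + \mu) \, \nabla (\nabla \cdot \mathbf{u})
  + \mu \, \nabla^2 \mathbf{u}
```

Solving it under spherical symmetry with free-surface boundary conditions yields the symmetry-classified modes and frequencies the abstract refers to.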

  5. Comparing the Effects of Negative and Mixed Emotional Messages on Predicted Occasional Excessive Drinking

    OpenAIRE

    Carrera, Pilar; Caballero, Amparo; Muñoz, Dolores

    2008-01-01

    In this work we present two types of emotional message, negative (sadness) versus mixed (joy and sadness), with the aim of studying their differential effect on attitude change and the probability estimated by participants of repeating the behavior of occasional excessive drinking in the near future. The results show that for the group of participants with moderate experience in this behavior the negative message, compared to the mixed one, is associated with higher probability of repeating t...

  6. Comparing the effects of negative and mixed emotional messages on predicted occasional excessive drinking

    OpenAIRE

    Carrera Levillain, Pilar; Caballero González, Amparo; Muñoz Cáceres, María Dolores

    2008-01-01

    In this work we present two types of emotional message, negative (sadness) versus mixed (joy and sadness), with the aim of studying their differential effect on attitude change and the probability estimated by participants of repeating the behavior of occasional excessive drinking in the near future. The results show that for the group of participants with moderate experience in this behavior the negative message, compared to the mixed one, is associated with higher probability of repeating t...

  7. Risk and protective factors for recreational and hard drug use among Malaysian adolescents and young adults.

    Science.gov (United States)

    Razali, Muzafar Mohd; Kliewer, Wendy

    2015-11-01

    This study investigated risk and protective factors for recreational and hard drug use in Malaysian adolescents and young adults. Participants (n = 859; M age = 17.24 years, SD = 2.75 years, range = 13-25 years; 59% male) were recruited from secondary schools, technical colleges, a juvenile detention center and a national training center in Malaysia. A version of the Communities That Care survey validated for use in Malaysia (Razali & Kliewer, 2015) was used to assess study constructs. One in 6 adolescents and 1 in 3 young adults reported lifetime recreational and hard drug use, with greater use reported by males across all drug categories. Structural equation modeling was used to determine the strongest risk and protective factors for recreational and hard drug use. The overall pattern of findings was similar for recreational and hard drug use. Shared risk factors for lifetime recreational and hard drug use included early initiation of antisocial behavior, peer antisocial behavior, and peer reinforcement for engaging in antisocial behavior; shared protective factors included religious practices and opportunities for prosocial school involvement. Multiple group analyses comparing adolescents and young adults indicated that patterns of risk and protective factors predicting drug use differed across these age groups. There were fewer significant predictors of either recreational or hard drug use for young adults relative to adolescents. Results suggest that interventions should target multiple microsystems (e.g., peer groups, family systems, school environments) and be tailored to the developmental stage of the individual. Copyright © 2015. Published by Elsevier Ltd.

  8. The diagnostic value of specific IgE to Ara h 2 to predict peanut allergy in children is comparable to a validated and updated diagnostic prediction model.

    Science.gov (United States)

    Klemans, Rob J B; Otte, Dianne; Knol, Mirjam; Knol, Edward F; Meijer, Yolanda; Gmelig-Meyling, Frits H J; Bruijnzeel-Koomen, Carla A F M; Knulst, André C; Pasmans, Suzanne G M A

    2013-01-01

    A diagnostic prediction model for peanut allergy in children was recently published, using 6 predictors: sex, age, history, skin prick test, peanut specific immunoglobulin E (sIgE), and total IgE minus peanut sIgE. To validate this model and update it by adding allergic rhinitis, atopic dermatitis, and sIgE to peanut components Ara h 1, 2, 3, and 8 as candidate predictors. To develop a new model based only on sIgE to peanut components. Validation was performed by testing discrimination (diagnostic value) with an area under the receiver operating characteristic curve and calibration (agreement between predicted and observed frequencies of peanut allergy) with the Hosmer-Lemeshow test and a calibration plot. The performance of the (updated) models was similarly analyzed. Validation of the model in 100 patients showed good discrimination (88%) but poor calibration. The updated model retained four predictors of the original model: sex, skin prick test, peanut sIgE, and total IgE minus sIgE. When building a model with sIgE to peanut components, Ara h 2 was the only predictor, with a discriminative ability of 90%. Cutoff values with 100% positive and negative predictive values could be calculated for both the updated model and sIgE to Ara h 2. In this way, the outcome of the food challenge could be predicted with 100% accuracy in 59% (updated model) and 50% (Ara h 2) of the patients. Discrimination of the validated model was good; however, calibration was poor. The discriminative ability of Ara h 2 was almost comparable to that of the updated model, containing 4 predictors. With both models, the need for peanut challenges could be reduced by at least 50%. Copyright © 2012 American Academy of Allergy, Asthma & Immunology. Published by Mosby, Inc. All rights reserved.
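The cutoff idea in the abstract, marker values below or above which the diagnosis is certain on the observed data, can be sketched with a toy example. The (sIgE, allergic) pairs below are invented; the logic simply finds the range outside of which the positive and negative predictive values are 100% for the given data set.

```python
def certainty_cutoffs(data):
    """data: list of (marker_value, is_allergic) pairs.
    Returns (lo, hi) such that every value < lo was non-allergic
    (100% NPV below lo) and every value > hi was allergic (100% PPV above hi)."""
    pos = [v for v, allergic in data if allergic]
    neg = [v for v, allergic in data if not allergic]
    return min(pos), max(neg)

# Invented sIgE values (kU/L) with food-challenge outcomes:
data = [(0.1, False), (0.3, False), (0.5, True),
        (2.0, False), (5.0, True), (12.0, True)]
lo, hi = certainty_cutoffs(data)  # lo = 0.5, hi = 2.0
# patients with sIgE < lo or > hi would need no food challenge in this toy set
```

Only patients falling between the two cutoffs still require a food challenge, which is how the abstract arrives at a 50-59% reduction in challenges.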

  9. Comparing the effects of negative and mixed emotional messages on predicted occasional excessive drinking.

    Science.gov (United States)

    Carrera, Pilar; Caballero, Amparo; Muñoz, Dolores

    2008-01-01

    In this work we present two types of emotional message, negative (sadness) versus mixed (joy and sadness), with the aim of studying their differential effect on attitude change and the probability estimated by participants of repeating the behavior of occasional excessive drinking in the near future. The results show that for the group of participants with moderate experience in this behavior the negative message, compared to the mixed one, is associated with higher probability of repeating the risk behavior and a less negative attitude toward it. These results suggest that mixed emotional messages (e.g. joy and sadness messages) could be more effective in campaigns for the prevention of this risk behavior.

  10. RNAspa: a shortest path approach for comparative prediction of the secondary structure of ncRNA molecules

    Directory of Open Access Journals (Sweden)

    Michaeli Shulamit

    2007-10-01

    Full Text Available Abstract Background In recent years, RNA molecules that are not translated into proteins (ncRNAs) have drawn a great deal of attention, as they were shown to be involved in many cellular functions. One of the most important computational problems regarding ncRNA is to predict the secondary structure of a molecule from its sequence. In particular, we attempted to predict the secondary structure for a set of unaligned ncRNA molecules that are taken from the same family, and thus presumably have a similar structure. Results We developed the RNAspa program, which comparatively predicts the secondary structure for a set of ncRNA molecules in linear time in the number of molecules. We observed that in a list of several hundred suboptimal minimal free energy (MFE) predictions, as provided by the RNAsubopt program of the Vienna package, it is likely that at least one suggested structure would be similar to the true, correct one. The suboptimal solutions of each molecule are represented as a layer of vertices in a graph. The shortest path in this graph is the basis for structural predictions for the molecule. We also show that RNA secondary structures can be compared very rapidly by a simple string Edit-Distance algorithm with a minimal loss of accuracy. We show that this approach allows us to more deeply explore the suboptimal structure space. Conclusion The algorithm was tested on three datasets which include several ncRNA families taken from the Rfam database. These datasets allowed for comparison of the algorithm with other methods. In these tests, RNAspa performed better than four other programs.
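The rapid structure comparison the abstract mentions reduces to plain string edit distance on dot-bracket notations. A standard Levenshtein sketch with unit costs (an assumption; the paper's exact cost scheme is not reproduced here), applied to two toy structures:

```python
def edit_distance(a, b):
    """Levenshtein distance with unit insert/delete/substitute costs,
    computed row by row in O(len(a) * len(b)) time."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # delete ca
                           cur[j - 1] + 1,              # insert cb
                           prev[j - 1] + (ca != cb)))   # substitute
        prev = cur
    return prev[-1]

# Two toy dot-bracket structures differing by one unpaired base:
d = edit_distance("((..))", "((...))")  # d == 1
```

In the layered-graph setting, such distances would weight the edges between the suboptimal structures of consecutive molecules, and the shortest path selects one structure per layer.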

  11. A New Framework to Compare Mass-Flux Schemes Within the AROME Numerical Weather Prediction Model

    Science.gov (United States)

    Riette, Sébastien; Lac, Christine

    2016-08-01

    In the Application of Research to Operations at Mesoscale (AROME) numerical weather forecast model used in operations at Météo-France, five mass-flux schemes are available to parametrize shallow convection at kilometre resolution. All but one are based on the eddy-diffusivity-mass-flux approach, and differ in entrainment/detrainment, the updraft vertical velocity equation and the closure assumption. The fifth is based on a more classical mass-flux approach. Screen-level scores obtained with these schemes show few discrepancies and are not sufficient to highlight behaviour differences. Here, we describe and use a new experimental framework, able to compare and discriminate among different schemes. For a year, daily forecast experiments were conducted over small domains centred on the five French metropolitan radio-sounding locations. Cloud base, planetary boundary-layer height and normalized vertical profiles of specific humidity, potential temperature, wind speed and cloud condensate were compared with observations, and with each other. The framework allowed the behaviour of the different schemes in and above the boundary layer to be characterized. In particular, the impact of the entrainment/detrainment formulation, closure assumption and cloud scheme were clearly visible. Differences mainly concerned the transport intensity thus allowing schemes to be separated into two groups, with stronger or weaker updrafts. In the AROME model (with all interactions and the possible existence of compensating errors), evaluation diagnostics gave the advantage to the first group.

  12. Comparing strategies for selection of low-density SNPs for imputation-mediated genomic prediction in U. S. Holsteins.

    Science.gov (United States)

    He, Jun; Xu, Jiaqi; Wu, Xiao-Lin; Bauck, Stewart; Lee, Jungjae; Morota, Gota; Kachman, Stephen D; Spangler, Matthew L

    2018-04-01

    SNP chips are commonly used for genotyping animals in genomic selection but strategies for selecting low-density (LD) SNPs for imputation-mediated genomic selection have not been addressed adequately. The main purpose of the present study was to compare the performance of eight LD (6K) SNP panels, each selected by a different strategy exploiting a combination of three major factors: evenly-spaced SNPs, increased minor allele frequencies, and SNP-trait associations either for single traits independently or for all the three traits jointly. The imputation accuracies from 6K to 80K SNP genotypes were between 96.2 and 98.2%. Genomic prediction accuracies obtained using imputed 80K genotypes were between 0.817 and 0.821 for daughter pregnancy rate, between 0.838 and 0.844 for fat yield, and between 0.850 and 0.863 for milk yield. The two SNP panels optimized on the three major factors had the highest genomic prediction accuracy (0.821-0.863), and these accuracies were very close to those obtained using observed 80K genotypes (0.825-0.868). Further exploration of the underlying relationships showed that genomic prediction accuracies did not respond linearly to imputation accuracies, but were significantly affected by genotype (imputation) errors of SNPs in association with the traits to be predicted. SNPs optimal for map coverage and MAF were favorable for obtaining accurate imputation of genotypes whereas trait-associated SNPs improved genomic prediction accuracies. Thus, optimal LD SNP panels were the ones that combined both strengths. The present results have practical implications on the design of LD SNP chips for imputation-enabled genomic prediction.
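One of the selection strategies compared, evenly spaced coverage combined with a minor-allele-frequency preference, can be sketched as picking the highest-MAF SNP per fixed-size bin. Positions and MAF values below are invented, and binning by position is an illustrative stand-in for the study's map-based spacing.

```python
def select_ld_panel(snps, bin_size):
    """snps: list of (position, maf) tuples on one chromosome.
    Keep, per evenly spaced bin, the SNP with the highest MAF."""
    best = {}
    for pos, maf in snps:
        b = pos // bin_size
        if b not in best or maf > best[b][1]:
            best[b] = (pos, maf)
    return [best[b] for b in sorted(best)]

# Invented (position in bp, minor allele frequency) pairs:
snps = [(100, 0.05), (900, 0.40), (1200, 0.25), (1800, 0.10), (2500, 0.30)]
panel = select_ld_panel(snps, bin_size=1000)
# one SNP per 1000-bp bin: (900, 0.40), (1200, 0.25), (2500, 0.30)
```

Trait-associated SNPs could be forced into the panel on top of this baseline, which corresponds to the combined strategies the study found to give the best genomic prediction accuracy.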

  13. Hard Copy Market Overview

    Science.gov (United States)

    Testan, Peter R.

    1987-04-01

    A number of Color Hard Copy (CHC) market drivers are currently indicating strong growth in the use of CHC technologies for the business graphics marketplace. These market drivers relate to product, software, color monitors and color copiers. The use of color in business graphics allows more information to be relayed than is normally the case in a monochrome format. The communicative powers of full-color computer generated output in the business graphics application area will continue to induce end users to desire and require color in their future applications. A number of color hard copy technologies will be utilized in the presentation graphics arena. Thermal transfer, ink jet, photographic and electrophotographic technologies are all expected to be utilized in the business graphics presentation application area in the future. Since the end of 1984, the availability of color application software packages has grown significantly. Sales revenue generated by business graphics software is expected to grow at a compound annual growth rate of just over 40 percent to 1990. Increased availability of packages to allow the integration of text and graphics is expected. Currently, the latest versions of page description languages such as PostScript, Interpress and DDL all support color output. The use of color monitors will also drive the demand for color hard copy in the business graphics marketplace. The availability of higher resolution screens is allowing color monitors to be easily used for both text and graphics applications in the office environment. During 1987, the sales of color monitors are expected to surpass the sales of monochrome monitors. Another major color hard copy market driver will be the color copier. In order to take advantage of the communications power of computer generated color output, multiple copies are required for distribution. Product introductions of a new generation of color copiers are now underway, with additional introductions expected

  14. Comparing the Effects of Negative and Mixed Emotional Messages on Predicted Occasional Excessive Drinking

    Directory of Open Access Journals (Sweden)

    Pilar Carrera

    2008-01-01

    In this work we present two types of emotional message, negative (sadness) versus mixed (joy and sadness), with the aim of studying their differential effect on attitude change and the probability estimated by participants of repeating the behavior of occasional excessive drinking in the near future. The results show that for the group of participants with moderate experience of this behavior the negative message, compared to the mixed one, is associated with a higher probability of repeating the risk behavior and a less negative attitude toward it. These results suggest that mixed emotional messages (e.g. combining joy and sadness) could be more effective in campaigns for the prevention of this risk behavior.

  15. Comparing artificial neural networks, general linear models and support vector machines in building predictive models for small interfering RNAs.

    Directory of Open Access Journals (Sweden)

    Kyle A McQuisten

    2009-10-01

    Exogenous short interfering RNAs (siRNAs) induce a gene knockdown effect in cells by interacting with naturally occurring RNA processing machinery. However, not all siRNAs induce this effect equally. Several heterogeneous kinds of machine learning techniques and feature sets have been applied to modeling siRNAs and their abilities to induce knockdown. There is some growing agreement about which techniques produce maximally predictive models, yet there is little consensus on methods to compare among predictive models. Also, there are few comparative studies that address what effect the choice of learning technique, feature set or cross-validation approach has on finding and discriminating among predictive models. Three learning techniques were used to develop predictive models for effective siRNA sequences: Artificial Neural Networks (ANNs), General Linear Models (GLMs) and Support Vector Machines (SVMs). Five feature mapping methods were also used to generate models of siRNA activities. The two factors of learning technique and feature mapping were evaluated by a complete 3x5 factorial ANOVA. Overall, both learning technique and feature mapping contributed significantly to the observed variance in predictive models, but to differing degrees for precision and accuracy, as well as across different kinds and levels of model cross-validation. The methods presented here provide a robust statistical framework to compare among models developed under distinct learning techniques and feature sets for siRNAs. Further comparisons among current or future modeling approaches should apply these or other suitable statistically equivalent methods to critically evaluate the performance of proposed models. ANN and GLM techniques tend to be more sensitive to the inclusion of noisy features, but the SVM technique is more robust under large numbers of features for measures of model precision and accuracy. Features found to result in maximally predictive models are
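
    The three-way comparison of learning techniques described above can be sketched with off-the-shelf estimators and cross-validation; the dataset, model settings and scoring below are illustrative stand-ins, not the study's siRNA feature mappings:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for siRNA feature/activity data (not the paper's dataset).
X, y = make_regression(n_samples=200, n_features=20, noise=10.0, random_state=0)

models = {
    "GLM (ridge)": Ridge(alpha=1.0),
    "SVM (RBF)": SVR(C=10.0),
    "ANN (MLP)": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
}

# 5-fold cross-validated R^2 for each technique on the same data.
scores = {name: cross_val_score(m, X, y, cv=5, scoring="r2")
          for name, m in models.items()}
for name, s in scores.items():
    print(f"{name}: R2 = {s.mean():.3f} +/- {s.std():.3f}")
```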

  16. A Behavioral Economic Reward Index Predicts Drinking Resolutions: Moderation Re-visited and Compared with Other Outcomes

    Science.gov (United States)

    Tucker, Jalie A.; Roth, David L.; Vignolo, Mary J.; Westfall, Andrew O.

    2014-01-01

    Data were pooled from three studies of recently resolved community-dwelling problem drinkers to determine whether a behavioral economic index of the value of rewards available over different time horizons distinguished among moderation (n = 30), abstinent (n = 95), and unresolved (n = 77) outcomes. Moderation over 1-2 year prospective follow-up intervals was hypothesized to involve longer term behavior regulation processes compared to abstinence or relapse and to be predicted by more balanced pre-resolution monetary allocations between short- and longer-term objectives (i.e., drinking and saving for the future). Standardized odds ratios (OR) based on changes in standard deviation units from a multinomial logistic regression indicated that increases on this “Alcohol-Savings Discretionary Expenditure” index predicted higher rates of both abstinence (OR = 1.93, p = .004) and relapse (OR = 2.89) relative to moderation outcomes. The index had incremental utility in predicting moderation in complex models that included other established predictors. The study adds to evidence supporting a behavioral economic analysis of drinking resolutions and shows that a systematic analysis of pre-resolution spending patterns aids in predicting moderation. PMID:19309182

  17. Wave packet autocorrelation functions for quantum hard-disk and hard-sphere billiards in the high-energy, diffraction regime.

    Science.gov (United States)

    Goussev, Arseni; Dorfman, J R

    2006-07-01

    We consider the time evolution of a wave packet representing a quantum particle moving in a geometrically open billiard that consists of a number of fixed hard-disk or hard-sphere scatterers. Using the technique of multiple collision expansions we provide a first-principle analytical calculation of the time-dependent autocorrelation function for the wave packet in the high-energy diffraction regime, in which the particle's de Broglie wavelength, while being small compared to the size of the scatterers, is large enough to prevent the formation of geometric shadow over distances of the order of the particle's free flight path. The hard-disk or hard-sphere scattering system must be sufficiently dilute in order for this high-energy diffraction regime to be achievable. Apart from the overall exponential decay, the autocorrelation function exhibits a generally complicated sequence of relatively strong peaks corresponding to partial revivals of the wave packet. Both the exponential decay (or escape) rate and the revival peak structure are predominantly determined by the underlying classical dynamics. A relation between the escape rate, and the Lyapunov exponents and Kolmogorov-Sinai entropy of the counterpart classical system, previously known for hard-disk billiards, is strengthened by generalization to three spatial dimensions. The results of the quantum mechanical calculation of the time-dependent autocorrelation function agree with predictions of the semiclassical periodic orbit theory.

  18. Hard Electromagnetic Processes

    International Nuclear Information System (INIS)

    Richard, F.

    1987-09-01

    Among hard electromagnetic processes, I will use the most recent data and focus on quantitative tests of QCD. More specifically, I will retain two items: - hadroproduction of direct photons, - Drell-Yan. In addition, I will briefly discuss a recent analysis of ISR data obtained with AFS (Axial Field Spectrometer) which sheds new light on the e/π puzzle at low p_T

  19. Mindfulness predicts student nurses' communication self-efficacy: A cross-national comparative study.

    Science.gov (United States)

    Sundling, Vibeke; Sundler, Annelie J; Holmström, Inger K; Kristensen, Dorte Vesterager; Eide, Hilde

    2017-08-01

    The aim of this study was to compare student nurses' communication self-efficacy, empathy, and mindfulness across two countries, and to analyse the relationship between these qualities. The study had a cross-sectional design. Data were collected from final year student nurses in Norway and Sweden. Communication self-efficacy, empathy, and mindfulness were reported by questionnaires: Clear-cut communication with patients, the Jefferson Scale of Empathy, and the Langer 14-item mindfulness scale. The study included 156 student nurses, of whom 94 (60%) were Swedish. The mean communication self-efficacy score was 119 (95% CI 116-122), the mean empathy score 115 (95% CI 113-117) and the mean mindfulness score 79 (95% CI 78-81). A Mann-Whitney test showed that Swedish students scored significantly higher on communication self-efficacy, empathy, and mindfulness than Norwegian students did. When adjusted for age, gender, and country in a multiple linear regression, mindfulness was the only independent predictor of communication self-efficacy. The Swedish student nurses in this study scored higher on communication self-efficacy, empathy, and mindfulness than the Norwegian students did. Student nurses scoring high on mindfulness rated their communication self-efficacy higher. A mindful learning approach may improve communication self-efficacy and possibly the effect of communication skills training. Copyright © 2017 Elsevier B.V. All rights reserved.
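
    The group comparison described above can be sketched with a Mann-Whitney U test; the score distributions below are hypothetical (the sample sizes echo the study, the values do not):

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
# Hypothetical communication self-efficacy scores for two cohorts
# (illustrative only, not the study's data).
swedish = rng.normal(121, 10, 94)
norwegian = rng.normal(116, 10, 62)

# Two-sided rank-based comparison of the two independent samples.
u_stat, p_value = mannwhitneyu(swedish, norwegian, alternative="two-sided")
print(f"U = {u_stat:.0f}, p = {p_value:.4f}")
```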

  20. Analytical Modeling of Hard-Coating Cantilever Composite Plate considering the Material Nonlinearity of Hard Coating

    Directory of Open Access Journals (Sweden)

    Wei Sun

    2015-01-01

    Due to the material nonlinearity of hard coating, the coated structure exhibits nonlinear dynamic behavior with variable stiffness and damping, which makes the modeling of hard-coating composite structures a challenging task. In this study, a polynomial was adopted to characterize this material nonlinearity and an analytical modeling method was developed for the hard-coating composite plate. Firstly, to relate the hard-coating material parameters obtained by test to the analytical model, the expression of the equivalent strain of the composite plate was derived. Then, the analytical model of the hard-coating composite plate was created by the energy method, considering the material nonlinearity of the hard coating. Next, the Newton-Raphson method was used to solve for the vibration response and resonant frequencies of the composite plate, and a specific calculation procedure was proposed. Finally, a cantilever plate coated with MgO + Al2O3 hard coating was chosen as a study case; the vibration response and resonant frequencies of the composite plate were calculated using the proposed method. The calculation results were compared with the experiment and a general linear calculation, and the correctness of the created model was verified. The study shows the proposed method can still maintain an acceptable precision when the material nonlinearity of the hard coating is strong.
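
    The Newton-Raphson step used to solve the nonlinear vibration response can be illustrated with a one-degree-of-freedom stand-in: a harmonic-balance amplitude equation with amplitude-dependent stiffness. All parameter values and the residual form below are hypothetical, not taken from the paper:

```python
import numpy as np

# One-DOF harmonic-balance residual for an oscillator with cubic stiffness,
# a stand-in for the amplitude-dependent stiffness of a coated plate.
m, c, k, k3, F, w = 1.0, 0.05, 1.0, 0.2, 0.1, 1.1  # all values hypothetical

def residual(a):
    keff = k + 0.75 * k3 * a**2          # amplitude-dependent stiffness
    return ((keff - m * w**2)**2 + (c * w)**2) * a**2 - F**2

def newton_raphson(g, x0, tol=1e-10, max_iter=50):
    """Scalar Newton-Raphson with a central-difference derivative."""
    x = x0
    for _ in range(max_iter):
        h = 1e-7
        dg = (g(x + h) - g(x - h)) / (2 * h)
        step = g(x) / dg
        x -= step
        if abs(step) < tol:
            break
    return x

amplitude = newton_raphson(residual, x0=0.5)
print(f"steady-state amplitude ~ {amplitude:.4f}")
```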

  1. From HERA to the Tevatron: A scaling law in hard diffraction

    International Nuclear Information System (INIS)

    Goulianos, K.

    1997-01-01

    Results on hard diffraction from CDF are reviewed and compared with predictions based on the diffractive structure function of the proton measured in deep inelastic scattering at HERA. The predictions are generally larger than the measured rates by a factor of ∼ 6, suggesting a breakdown of conventional factorization. Correct predictions are obtained by scaling the rapidity gap probability distribution of the diffractive structure function to the total integrated gap probability. The scaling of the gap probability is traced back to the pomeron flux renormalization hypothesis, which was introduced to unitarize the soft diffraction amplitude

  2. A comparative analysis of hazard models for predicting debris flows in Madison County, VA

    Science.gov (United States)

    Morrissey, Meghan M.; Wieczorek, Gerald F.; Morgan, Benjamin A.

    2001-01-01

    During the rainstorm of June 27, 1995, roughly 330-750 mm of rain fell within a sixteen-hour period, initiating floods and over 600 debris flows in a small area (130 km²) of Madison County, Virginia. Field studies showed that the majority (70%) of these debris flows initiated with a thickness of 0.5 to 3.0 m in colluvium on slopes from 17° to 41° (Wieczorek et al., 2000). This paper evaluated and compared the approaches of SINMAP, LISA, and Iverson's (2000) transient response model for slope stability analysis by applying each model to the landslide data from Madison County. Of these three stability models, only Iverson's transient response model evaluated stability conditions as a function of time and depth. Iverson's model would be the preferred method of the three to evaluate landslide hazards on a regional scale in areas prone to rain-induced landslides, as it considers both the transient and spatial response of pore pressure in its calculation of slope stability. The stability calculation used in SINMAP and LISA is similar and utilizes probability distribution functions for certain parameters. Unlike SINMAP, which only considers soil cohesion, internal friction angle and rainfall-rate distributions, LISA allows the use of distributed data for all parameters, so it is the preferred model of the two to evaluate slope stability. Results from all three models suggested similar soil and hydrologic properties for triggering the landslides that occurred during the 1995 storm in Madison County, Virginia. The colluvium probably had cohesion of less than 2 kPa. The root-soil system is above the failure plane and consequently root strength and tree surcharge had negligible effect on slope stability. The result that the final location of the water table was near the ground surface is supported by the water budget analysis of the rainstorm conducted by Smith et al. (1996).
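
    Models of this family build on an infinite-slope factor-of-safety calculation. A minimal sketch of that underlying formula, with illustrative parameter values (not the Madison County data):

```python
import math

def factor_of_safety(c, phi_deg, gamma, z, theta_deg, h, gamma_w=9.81):
    """Infinite-slope factor of safety with a water table at height h
    above the failure plane (the kind of index slope-stability models
    such as SINMAP and LISA build on).
    c [kPa], phi [deg], gamma/gamma_w [kN/m^3], z/h [m], theta [deg]."""
    phi = math.radians(phi_deg)
    theta = math.radians(theta_deg)
    resisting = c + (gamma * z - gamma_w * h) * math.cos(theta)**2 * math.tan(phi)
    driving = gamma * z * math.sin(theta) * math.cos(theta)
    return resisting / driving

# Cohesionless, dry sanity check: FS reduces to tan(phi)/tan(theta).
print(factor_of_safety(c=0.0, phi_deg=35, gamma=18.0, z=1.5, theta_deg=30, h=0.0))
```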

  3. How to estimate hardness of crystals on a pocket calculator

    International Nuclear Information System (INIS)

    Simunek, Antonin

    2007-01-01

    A generalization of the semiempirical microscopic model of hardness is presented and applied to currently studied borides, carbides, and nitrides of heavy transition metals. The hardness of OsB, OsC, OsN, PtN, RuC, RuB2, ReB2, OsB2, IrN2, PtN2, and OsN2 crystals in various structural phases is predicted. It is found that none of the transition metal crystals is superhard, i.e., with hardness greater than 40 GPa. The presented method provides materials researchers with a practical tool in the search for new hard materials

  4. Comparing predictive ability of Laser-Induced Breakdown Spectroscopy to Near Infrared Spectroscopy for soil texture and organic carbon determination

    DEFF Research Database (Denmark)

    Knadel, Maria; Peng, Yi; Gislum, René

    Soil organic carbon (SOC) and texture have a practical value for agronomy and the environment. Thus, alternative techniques to supplement or substitute for the expensive conventional analysis of soil are being developed. Here the feasibility of laser-induced breakdown spectroscopy (LIBS) to determine SOC and texture was tested and compared with the near infrared spectroscopy (NIRS) technique and traditional laboratory analysis. Calibration models were developed on 50 topsoil samples. For all properties except silt, higher predictive ability was obtained with the LIBS models than with the NIRS models. The successful calibrations indicate that LIBS can be used as a fast and reliable method for SOC and texture estimation.

  5. Endothelial cell loss and refractive predictability in femtosecond laser-assisted cataract surgery compared with conventional cataract surgery

    DEFF Research Database (Denmark)

    Krarup, Therese; Holm, Lars Morten; la Cour, Morten

    2014-01-01

    PURPOSE: To investigate the amount of endothelial cell loss (ECL) and refractive predictability by femtosecond laser-assisted cataract surgery (FLACS) compared to conventional phacoemulsification cataract surgery (CPS). METHODS: Forty-seven patients had one eye operated by FLACS and the contralateral eye operated by CPS (stop and chop technique). Both eyes had intraocular aspheric lenses implanted. Uncorrected distance visual acuity (UDVA), corrected distance visual acuity (CDVA), and central corneal endothelial cell count and hexagonality with a non-contact specular microscope were assessed

  6. Modelling the distribution of hard seabed using calibrated multibeam acoustic backscatter data in a tropical, macrotidal embayment: Darwin Harbour, Australia

    Science.gov (United States)

    Siwabessy, P. Justy W.; Tran, Maggie; Picard, Kim; Brooke, Brendan P.; Huang, Zhi; Smit, Neil; Williams, David K.; Nicholas, William A.; Nichol, Scott L.; Atkinson, Ian

    2018-06-01

    Spatial information on the distribution of seabed substrate types in high use coastal areas is essential to support their effective management and environmental monitoring. For Darwin Harbour, a rapidly developing port in northern Australia, the distribution of hard substrate is poorly documented but known to influence the location and composition of important benthic biological communities (corals, sponges). In this study, we use angular backscatter response curves to model the distribution of hard seabed in the subtidal areas of Darwin Harbour. The angular backscatter response curve data were extracted from multibeam sonar data and analysed against backscatter intensity for sites observed from seabed video to be representative of "hard" seabed. Data from these sites were consolidated into an "average curve", which became a reference curve that was in turn compared to all other angular backscatter response curves using the Kolmogorov-Smirnov goodness-of-fit test. The output was used to generate interpolated spatial predictions of the probability of hard seabed (p-hard) and derived hard seabed parameters for the mapped area of Darwin Harbour. The results agree well with the ground truth data with an overall classification accuracy of 75% and an area under curve measure of 0.79, and with modelled bed shear stress for the Harbour. Limitations of this technique are discussed with attention to discrepancies between the video and acoustic results, such as in areas where sediment forms a veneer over hard substrate.
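
    The curve-matching step can be sketched as a Kolmogorov-Smirnov-style distance between normalised cumulative angular response curves; the curves below are hypothetical, not Darwin Harbour data:

```python
import numpy as np

def ks_distance(curve, reference):
    """Kolmogorov-Smirnov-style distance between two angular backscatter
    response curves: the maximum gap between their normalised cumulative
    responses over the incidence-angle axis (both curves must be positive)."""
    c = np.cumsum(curve) / np.sum(curve)
    r = np.cumsum(reference) / np.sum(reference)
    return np.max(np.abs(c - r))

angles = np.arange(1, 61)                       # incidence angles, degrees
reference_hard = 10.0 * np.exp(-angles / 40.0)  # hypothetical "hard" curve
soft_like = 10.0 * np.exp(-angles / 15.0)       # steeper fall-off with angle

print(ks_distance(reference_hard, reference_hard))  # 0.0 for identical curves
print(f"{ks_distance(soft_like, reference_hard):.3f}")
```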

  7. Comparative prediction of nonepileptic events using MMPI-2 clinical scales, Harris Lingoes subscales, and restructured clinical scales.

    Science.gov (United States)

    Yamout, Karim Z; Heinrichs, Robin J; Baade, Lyle E; Soetaert, Dana K; Liow, Kore K

    2017-03-01

    The Minnesota Multiphasic Personality Inventory-2 (MMPI-2) is a psychological testing tool used to measure psychological and personality constructs. The MMPI-2 has proven helpful in identifying individuals with nonepileptic events/nonepileptic seizures. However, the MMPI-2 has had some updates that enhanced its original scales. The aim of this article was to test the utility of updated MMPI-2 scales in predicting the likelihood of non-epileptic seizures in individuals admitted to an EEG video monitoring unit. We compared sensitivity, specificity, and likelihood ratios of traditional MMPI-2 Clinical Scales against more homogenous MMPI-2 Harris-Lingoes subscales and the newer Restructured Clinical (RC) scales. Our results showed that the Restructured Scales did not show significant improvement over the original Clinical scales. However, one Harris-Lingoes subscale (HL4 of Clinical Scale 3) did show improved predictive utility over the original Clinical scales as well as over the newer Restructured Clinical scales. Our study suggests that the predictive utility of the MMPI-2 can be improved using already existing scales. This is particularly useful for those practitioners who are not invested in switching over to the newly developed MMPI-2 Restructured Form (MMPI-2 RF). Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Thermographic skin temperature measurement compared with cold sensation in predicting the efficacy and distribution of epidural anesthesia.

    Science.gov (United States)

    Bruins, Arnoud A; Kistemaker, Kay R J; Boom, Annemieke; Klaessens, John H G M; Verdaasdonk, Rudolf M; Boer, Christa

    2018-04-01

    Due to the high rates of epidural failure (3-32%), novel techniques are required to objectively assess the success of an epidural block. In this study we therefore investigated whether thermographic temperature measurements have a higher predictive value for a successful epidural block when compared to the cold sensation test as gold standard. Epidural anesthesia was induced in 61 patients undergoing elective abdominal, thoracic or orthopedic surgery. A thermographic picture was recorded at 5, 10 and 15 min following epidural anesthesia induction. After 15 min a cold sensation test was performed. Epidural anesthesia is associated with a decrease in skin temperature. Thermography predicted a successful epidural block with a sensitivity of 54%, a specificity of 67%, a PPV of 92% and an NPV of 17%. The cold sensation test showed a higher sensitivity and PPV than thermography (97 and 93%), but a lower specificity and NPV than thermography (25 and 50%). Thermographic temperature measurements can be used as an additional and objective method for the assessment of the effectiveness of an epidural block next to the cold sensation test, but have a low sensitivity and negative predictive value. The local decrease in temperature observed in our study during epidural anesthesia is mainly attributed to a core-to-peripheral redistribution of body heat and vasodilation.
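
    The four reported metrics follow directly from a 2x2 confusion table. A minimal sketch with hypothetical counts (chosen only to be plausible, not the study's raw data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts: 27 true positives, 2 false positives,
# 23 false negatives, 4 true negatives.
m = diagnostic_metrics(tp=27, fp=2, fn=23, tn=4)
for name, value in m.items():
    print(f"{name}: {value:.0%}")
```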

  9. A comparative analysis of primary and secondary Gleason pattern predictive ability for positive surgical margins after radical prostatectomy.

    Science.gov (United States)

    Sfoungaristos, S; Kavouras, A; Kanatas, P; Polimeros, N; Perimenis, P

    2011-01-01

    To compare the predictive ability of the primary and secondary Gleason pattern for positive surgical margins in patients with clinically localized prostate cancer and a preoperative Gleason score ≤ 6. A retrospective analysis of the medical records of patients who underwent radical prostatectomy between January 2005 and October 2010 was conducted. Patients' age, prostate volume, preoperative PSA, biopsy Gleason score, and the 1st and 2nd Gleason pattern were entered into univariate and multivariate analyses. The 1st and 2nd patterns were tested for their ability to predict positive surgical margins using receiver operating characteristic curves. Positive surgical margins were noticed in 56 cases (38.1%) of the 147 studied patients. The 2nd pattern was significantly greater in those with positive surgical margins while the 1st pattern was not significantly different between the 2 groups of patients. ROC analysis revealed that the area under the curve was 0.53 (p=0.538) for the 1st pattern and 0.60 (p=0.048) for the 2nd pattern. Concerning the cases with PSA <10 ng/ml, it was also found that only the 2nd pattern had predictive ability (p=0.050). When multiple logistic regression analysis was conducted, the 2nd pattern was the only independent predictor. The second Gleason pattern was found to be of higher value than the 1st for the prediction of positive surgical margins in patients with preoperative Gleason score ≤ 6, and this should be considered especially when a neurovascular bundle sparing radical prostatectomy is planned, in order not to harm the oncological outcome.
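
    The ROC comparison of the two patterns can be sketched as follows; the pattern values and margin statuses below are hypothetical, not the study's records:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical data: secondary Gleason pattern (3, 4 or 5) for 20 patients
# and whether surgical margins were positive (1) or negative (0).
pattern2 = np.array([3, 3, 4, 5, 3, 4, 4, 3, 5, 4, 3, 3, 4, 5, 4, 3, 3, 4, 4, 3])
margins  = np.array([0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0])

# Area under the ROC curve for the 2nd pattern as a predictor of margins.
auc = roc_auc_score(margins, pattern2)
print(f"AUC for 2nd pattern = {auc:.2f}")
```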

  10. Detailed comparative study regarding different formulae of predicting the iron losses in a machine excited by non-sinusoidal supply

    International Nuclear Information System (INIS)

    El-Kharashi, Eyhab

    2014-01-01

    Variable-speed drives provide accurate control and high energy efficiency, and machines are more and more often excited by non-sinusoidal voltages. Predicting the amount of iron losses under non-sinusoidal excitation is therefore important. The paper aims to achieve accurate efficiency estimation by presenting a new modified calculation method to predict the iron losses. In a switched reluctance motor, the iron losses cannot be ignored; they are of considerable magnitude. This paper presents conventional and modified Steinmetz formulae for the estimation of the iron losses. The conventional Steinmetz formula consists of three terms: hysteresis, eddy current and anomalous losses. The equations of hysteresis and eddy current losses depend mainly on the value of the peak flux density. The reason to modify the Steinmetz formula is to avoid the need of knowing the peak flux density and the anomalous losses in accurate figures. The paper also explains and clarifies the methods of using both the conventional and the modified Steinmetz formulae for accurate calculation of the iron losses in different sections of the magnetic circuit. For both formulae, a comparison is made between the distributions of the iron losses in different parts of the magnetic circuit and the efficiencies. - Highlights: • The paper aims to achieve accurate efficiency estimation. • The predicted iron loss by the conventional Steinmetz formula is inaccurate. • The modified Steinmetz formula is more accurate because it includes the minor-loop losses caused by each flux density. • The paper compares the losses predicted by the two different formulae to establish the degree of accuracy
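
    The conventional three-term Steinmetz structure described above (hysteresis + eddy-current + anomalous losses, driven by frequency and peak flux density) can be sketched directly; the coefficient values below are illustrative placeholders, not values from the paper:

```python
def steinmetz_losses(f, B_peak, kh=0.02, ke=5e-5, ka=1e-4, alpha=1.8):
    """Conventional three-term Steinmetz estimate of specific iron loss
    (W/kg): hysteresis + eddy-current + anomalous terms. All coefficients
    here are illustrative, not material data from the paper."""
    p_hyst = kh * f * B_peak**alpha          # hysteresis loss
    p_eddy = ke * (f * B_peak)**2            # classical eddy-current loss
    p_anom = ka * (f * B_peak)**1.5          # anomalous (excess) loss
    return p_hyst + p_eddy + p_anom

for f in (50, 200, 400):                     # fundamental frequencies, Hz
    print(f"{f} Hz, 1.5 T: {steinmetz_losses(f, 1.5):.2f} W/kg")
```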

  11. Anamnestic prediction of bucket handle compared to other tear patterns of the medial meniscus in stable knees.

    Science.gov (United States)

    Haviv, Barak; Bronak, Shlomo; Kosashvili, Yona; Thein, Rafael

    2016-12-01

    The aim of this study was to analyze and compare the preoperative anamnestic details between patients with an arthroscopic diagnosis of bucket handle and other tear patterns of the medial meniscus in stable knees. A total of 204 patients (mean age 49.3 ± 13 years) were included in the study. The study group included 65 patients (63 males, 2 females) with an arthroscopic diagnosis of bucket handle tear and the control group included 139 patients (90 males, 49 females) with non-bucket handle tear patterns. The preoperative clinical assessments of the two groups were analyzed retrospectively. Anamnestic prediction of the diagnosis of a bucket handle tear was based upon various medical history parameters. Multivariate logistic regression was carried out to identify independent anamnestic factors for predicting isolated bucket handle tears of the medial meniscus compared to non-bucket handle tears. The multivariate logistic regression yielded 3 statistically significant independent anamnestic risk factors for predicting isolated bucket handle tears of the medial meniscus: male gender (OR, 9.7; 95% CI, 1.1-37.6), locking events (OR, 4.6; 95% CI, 1.8-11.3) and pain in extension (OR, 6.9; 95% CI, 2.5-23.7). Other preoperative variables such as age, BMI, activity level, comorbidities, duration of symptoms, pain location, preceding injury and its mechanism had no significant effect on tear pattern. Preoperative strong clues for bucket handle tears of the medial meniscus in stable knees are male gender, locking events and limitation in extension. Level III, Diagnostic study. Copyright © 2016 Turkish Association of Orthopaedics and Traumatology. Production and hosting by Elsevier B.V. All rights reserved.
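
    Crude odds ratios of the kind underlying the analysis above come from a 2x2 table. A minimal sketch using the group counts given in the abstract (note the paper's ORs are adjusted from a multivariate model, so the crude value differs):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)

# Male gender vs. bucket-handle tear, using the abstract's group counts:
# 63 males / 2 females with bucket-handle tears, 90 males / 49 females without.
or_, (lo, hi) = odds_ratio_ci(a=63, b=90, c=2, d=49)
print(f"crude OR = {or_:.1f} (95% CI {lo:.1f}-{hi:.1f})")
```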

  12. Comparing the predictive capacity of observed in-session resistance to self-reported motivation in cognitive behavioral therapy.

    Science.gov (United States)

    Westra, Henny A

    2011-02-01

    Self-report measures of motivation for changing anxiety have been weakly and inconsistently related to outcome in cognitive behavioral therapy (CBT). While clients may not be able to accurately report their motivation, ambivalence about change may nonetheless be expressed in actual therapy sessions as opposition to the direction set by the therapist (i.e., resistance). In the context of CBT for generalized anxiety disorder, the present study compared the ability of observed in-session resistance in CBT session 1 and two self-report measures of motivation for changing anxiety (the Change Questionnaire & the Client Motivational for Therapy Scale) to (1) predict client and therapist rated homework compliance (2) predict post-CBT and one-year post-treatment worry reduction, and (3) differentiate those who received motivational interviewing prior to CBT from those who received no pre-treatment. Observed in-session resistance performed very well on each index, compared to the performance of self-reported motivation which was inconsistent and weaker relative to observed resistance. These findings strongly support both clinician sensitivity to moments of client resistance in actual therapy sessions as early as session 1, and the inclusion of observational process measures in CBT research. Crown Copyright © 2010. Published by Elsevier Ltd. All rights reserved.

  13. Revisiting the definition of local hardness and hardness kernel.

    Science.gov (United States)

    Polanco-Ramírez, Carlos A; Franco-Pérez, Marco; Carmona-Espíndola, Javier; Gázquez, José L; Ayers, Paul W

    2017-05-17

    An analysis of the hardness kernel and local hardness is performed to propose new definitions for these quantities that follow a similar pattern to the one that characterizes the quantities associated with softness, that is, we have derived new definitions for which the integral of the hardness kernel over the whole space of one of the variables leads to local hardness, and the integral of local hardness over the whole space leads to global hardness. A basic aspect of the present approach is that global hardness keeps its identity as the second derivative of energy with respect to the number of electrons. Local hardness thus obtained depends on the first and second derivatives of energy and electron density with respect to the number of electrons. When these derivatives are approximated by a smooth quadratic interpolation of energy, the expression for local hardness reduces to the one intuitively proposed by Meneses, Tiznado, Contreras and Fuentealba. However, when one combines the first directional derivatives with smooth second derivatives one finds additional terms that allow one to differentiate local hardness for electrophilic attack from the one for nucleophilic attack. Numerical results related to electrophilic attacks on substituted pyridines, substituted benzenes and substituted ethenes are presented to show the overall performance of the new definition.

  14. Biochemical methane potential prediction of plant biomasses: Comparing chemical composition versus near infrared methods and linear versus non-linear models.

    Science.gov (United States)

    Godin, Bruno; Mayer, Frédéric; Agneessens, Richard; Gerin, Patrick; Dardenne, Pierre; Delfosse, Philippe; Delcarte, Jérôme

    2015-01-01

    The reliability of different models to predict the biochemical methane potential (BMP) of various plant biomasses was compared using a multispecies dataset. The most reliable BMP prediction models were those based on the near infrared (NIR) spectrum rather than on chemical composition. The NIR predictions of local (specific regression and non-linear) models were able to estimate the BMP quantitatively, rapidly, cheaply and easily. Such a model could be further used for biomethanation plant management and optimization. The predictions of non-linear models were more reliable than those of linear models. The presentation form (green-dried, silage-dried and silage-wet) of biomasses to the NIR spectrometer did not influence the performance of the NIR prediction models. The accuracy of the BMP method should be improved to further enhance the BMP prediction models. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Comparative pharmacogenetic analysis of risk polymorphisms in Caucasian and Vietnamese children with acute lymphoblastic leukemia: prediction of therapeutic outcome?

    Science.gov (United States)

    Hoang, Phuong Thu Vu; Ambroise, Jérôme; Dekairelle, Anne-France; Durant, Jean-François; Butoescu, Valentina; Chi, Vu Luan Dang; Huynh, Nghia; Nguyen, Tan Binh; Robert, Annie; Vermylen, Christiane; Gala, Jean-Luc

    2015-03-01

    Acute lymphoblastic leukemia (ALL) is the most common of all paediatric cancers. Aside from predisposing to ALL, polymorphisms could also be associated with poor outcome. Indeed, genetic variations involved in drug metabolism could, at least partially, be responsible for heterogeneous responses to standardized leukemia treatments, hence requiring more personalized therapy. The aims of this study were (a) to determine the prevalence of seven common genetic polymorphisms, including those that affect the folate and/or thiopurine metabolic pathways, i.e. cyclin D1 (CCND1-G870A), γ-glutamyl hydrolase (GGH-C452T), methylenetetrahydrofolate reductase (MTHFR-C677T and MTHFR-A1298C), thymidylate synthase promoter (TYMS-TSER), thiopurine methyltransferase (TPMT*3A and TPMT*3C) and inosine triphosphate pyrophosphatase (ITPA-C94A), in Caucasian (n = 94) and Vietnamese (n = 141) children with ALL, and (b) to assess the association between these polymorphisms, combined into a multilocus genetic risk score (MGRS), and therapeutic outcome. The prevalence of two of the polymorphisms differed significantly between Caucasian and Vietnamese children (P < 0.001 and P = 0.02, respectively). Compared with children with a low MGRS (≤ 3), those with a high MGRS (≥ 4) were 2.06 (95% CI = 1.01, 4.22; P = 0.04) times more likely to relapse. Adding MGRS into a multivariate Cox regression model with race/ethnicity and four clinical variables improved the predictive accuracy of the model (AUC from 0.682 to 0.709 at 24 months). Including MGRS into a clinical model improved the predictive accuracy of short and medium term prognosis, hence confirming the association between well determined pharmacogenotypes and outcome of paediatric ALL. Whether variants on other genes associated with folate metabolism can substantially improve the predictive value of the current MGRS is not known but deserves further evaluation. © 2014 The British Pharmacological Society.

  16. Hard and Soft Governance

    DEFF Research Database (Denmark)

    Moos, Lejf

    2009-01-01

    The governance and leadership at transnational, national and school level seem to be converging into a number of isomorphic forms, as we see a tendency towards substituting 'hard' forms of governance, that are legally binding, with 'soft' forms based on persuasion and advice. This article analyses and discusses governance forms at several levels. The first layer is the global: the methods of 'soft governance' that are being utilised by transnational agencies. The second layer is the national and local: the shift in national and local governance seen in many countries, but here demonstrated in the case of Denmark. The third layer is the leadership used in Danish schools. The use of 'soft governance' is shifting the focus of governance and leadership from decisions towards influence and power, and thus shifting the focus of the processes from the decision-making itself towards more focus...

  17. Zirconium nitride hard coatings

    International Nuclear Information System (INIS)

    Roman, Daiane; Amorim, Cintia Lugnani Gomes de; Soares, Gabriel Vieira; Figueroa, Carlos Alejandro; Baumvol, Israel Jacob Rabin; Basso, Rodrigo Leonardo de Oliveira

    2010-01-01

    Zirconium nitride (ZrN) nanometric films were deposited onto different substrates in order to study the surface crystalline microstructure and to investigate the electrochemical behavior, with the aim of finding a composition that minimizes corrosion reactions. The coatings were produced by physical vapor deposition (PVD). The influence of the nitrogen partial pressure, deposition time and temperature on the surface properties was studied. Rutherford backscattering spectrometry (RBS), X-ray photoelectron spectroscopy (XPS), X-ray diffraction (XRD), scanning electron microscopy (SEM) and corrosion experiments were performed to characterize the ZrN hard coatings. The ZrN film properties and microstructure change according to the deposition parameters. The corrosion resistance increases with the deposition temperature. Corrosion tests show that a ZrN coating deposited by PVD onto a titanium substrate can improve the corrosion resistance. (author)

  18. Signalign: An Ontology of DNA as Signal for Comparative Gene Structure Prediction Using Information-Coding-and-Processing Techniques.

    Science.gov (United States)

    Yu, Ning; Guo, Xuan; Gu, Feng; Pan, Yi

    2016-03-01

    Conventional character-analysis-based techniques in genome analysis manifest three main shortcomings: inefficiency, inflexibility, and incompatibility. In our previous research, a general framework called DNA As X was proposed for character-analysis-free techniques to overcome these shortcomings, where X is an intermediate such as digit, code, signal, vector, tree, graph, network, and so on. In this paper, we further implement an ontology of DNA As Signal by designing a tool named Signalign for comparative gene structure analysis, in which DNA sequences are converted into signal series, processed by a modified method of dynamic time warping and measured by signal-to-noise ratio (SNR). The ontology of DNA As Signal integrates the principles and concepts of other disciplines, including information coding theory and signal processing, into sequence analysis and processing. Compared with conventional character-analysis-based methods, Signalign not only achieves equivalent or superior performance, but also enriches the tools and the knowledge library of computational biology by extending the domain from characters/strings to diverse areas. The evaluation results validate the success of the character-analysis-free technique for improved performances in comparative gene structure prediction.
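    The record above rests on converting DNA into a numeric series and aligning series with dynamic time warping. The sketch below illustrates the general idea only: the base-to-level mapping is hypothetical, and classic DTW is used rather than Signalign's modified variant, which is not specified here.

```python
import numpy as np

# Hypothetical base-to-signal mapping; Signalign's actual encoding is not
# described in this abstract.
BASE_LEVELS = {"A": 1.0, "C": 2.0, "G": 3.0, "T": 4.0}

def to_signal(seq):
    """Convert a DNA string into a numeric signal series."""
    return np.array([BASE_LEVELS[b] for b in seq], dtype=float)

def dtw_distance(x, y):
    """Classic dynamic time warping distance between two 1-D signals."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # Extend the cheapest of the three admissible alignment moves.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

d = dtw_distance(to_signal("ACGTACGT"), to_signal("ACGGTACT"))
```

    A lower DTW distance indicates that two sequences, viewed as signals, can be warped onto each other cheaply, which is the basis for comparing gene structures in this representation.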

  19. Thermodynamic perturbation theory for fused hard-sphere and hard-disk chain fluids

    International Nuclear Information System (INIS)

    Zhou, Y.; Hall, C.K.; Stell, G.

    1995-01-01

    We find that first-order thermodynamic perturbation theory (TPT1) which incorporates the reference monomer fluid used in the generalized Flory-AB (GF-AB) theory yields an equation of state for fused hard-sphere (FHS) chain fluids that has accuracy comparable to the GF-AB and GF-dimer-AC theories. The new TPT1 equation of state is significantly more accurate than other extensions of the TPT1 theory to FHS chain fluids. The TPT1 is also extended to two-dimensional fused hard-disk chain fluids. For the fused hard-disk dimer fluid, the extended TPT1 equation of state is found to be more accurate than the Boublik hard-disk dimer equation of state. copyright 1995 American Institute of Physics

  20. Estimating Janka hardness from specific gravity for tropical and temperate species

    Science.gov (United States)

    Michael C. Wiemann; David W. Green

    2007-01-01

    Using mean values for basic (green) specific gravity and Janka side hardness for individual species obtained from the world literature, regression equations were developed to predict side hardness from specific gravity. Statistical and graphical methods showed that the hardness–specific gravity relationship is the same for tropical and temperate hardwoods, but that the...
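    Hardness–specific gravity regressions of the kind described above are commonly fitted as power laws, H = a·G^b, estimated by least squares in log-log space. A minimal sketch of such a fit follows; the data points are hypothetical stand-ins, not the species means compiled by the authors.

```python
import numpy as np

# Hypothetical (green specific gravity, Janka side hardness in N) pairs,
# for illustration only; real values would come from the literature survey.
G = np.array([0.35, 0.45, 0.55, 0.65, 0.80])
H = np.array([1900.0, 3300.0, 5100.0, 7400.0, 11500.0])

# Fit H = a * G**b by ordinary least squares on log-transformed data.
b, log_a = np.polyfit(np.log(G), np.log(H), 1)
a = np.exp(log_a)

def predict_hardness(g):
    """Predicted Janka side hardness from green specific gravity."""
    return a * g ** b
```

    The finding that the hardness–specific gravity relationship is the same for tropical and temperate hardwoods means a single fitted (a, b) pair can serve both groups.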

  1. Towards Radiation Hard Sensor Materials for the CMS Tracker Upgrade

    CERN Document Server

    Steinbrueck, Georg

    2012-01-01

    Many measurements are described in literature, performed on a variety of silicon materials and technologies, but they are often hard to compare, because they were done under different conditions. To systematically compare the prope...

  2. Quantifying and comparing dynamic predictive accuracy of joint models for longitudinal marker and time-to-event in presence of censoring and competing risks.

    Science.gov (United States)

    Blanche, Paul; Proust-Lima, Cécile; Loubère, Lucie; Berr, Claudine; Dartigues, Jean-François; Jacqmin-Gadda, Hélène

    2015-03-01

    Thanks to the growing interest in personalized medicine, joint modeling of longitudinal marker and time-to-event data has recently started to be used to derive dynamic individual risk predictions. Individual predictions are called dynamic because they are updated when information on the subject's health profile grows with time. We focus in this work on statistical methods for quantifying and comparing dynamic predictive accuracy of this kind of prognostic models, accounting for right censoring and possibly competing events. Dynamic area under the ROC curve (AUC) and Brier Score (BS) are used to quantify predictive accuracy. Nonparametric inverse probability of censoring weighting is used to estimate dynamic curves of AUC and BS as functions of the time at which predictions are made. Asymptotic results are established and both pointwise confidence intervals and simultaneous confidence bands are derived. Tests are also proposed to compare the dynamic prediction accuracy curves of two prognostic models. The finite sample behavior of the inference procedures is assessed via simulations. We apply the proposed methodology to compare various prediction models using repeated measures of two psychometric tests to predict dementia in the elderly, accounting for the competing risk of death. Models are estimated on the French Paquid cohort and predictive accuracies are evaluated and compared on the French Three-City cohort. © 2014, The International Biometric Society.
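    The dynamic Brier score with inverse probability of censoring weighting (IPCW), as used above, weights each subject's squared prediction error by the inverse of the Kaplan-Meier estimate of the censoring survival function. The sketch below is an illustrative simplification of that estimator, not the authors' exact procedure (competing risks and confidence bands are omitted).

```python
import numpy as np

def km_censoring_survival(times, events, t):
    """Kaplan-Meier estimate G(t) of the censoring survival function,
    treating censorings (event == 0) as the 'events' of interest."""
    order = np.argsort(times)
    times, events = times[order], events[order]
    n = len(times)
    G = 1.0
    for i in range(n):
        if times[i] > t:
            break
        if events[i] == 0:          # a censoring occurs at times[i]
            at_risk = n - i
            G *= 1.0 - 1.0 / at_risk
    return G

def ipcw_brier(times, events, risk, t):
    """IPCW Brier score at horizon t; `risk` holds predicted event
    probabilities by time t for each subject."""
    n = len(times)
    score = 0.0
    for i in range(n):
        if times[i] <= t and events[i] == 1:        # event observed by t
            w = km_censoring_survival(times, events, times[i] - 1e-8)
            score += (1.0 - risk[i]) ** 2 / w
        elif times[i] > t:                          # event-free at t
            w = km_censoring_survival(times, events, t)
            score += risk[i] ** 2 / w
        # subjects censored before t contribute nothing directly;
        # the weights reallocate their mass to the observed subjects
    return score / n
```

    With no censoring the weights are all 1 and the estimator reduces to the ordinary Brier score at horizon t, which is a useful sanity check.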

  3. Janka hardness using nonstandard specimens

    Science.gov (United States)

    David W. Green; Marshall Begel; William Nelson

    2006-01-01

    Janka hardness determined on 1.5- by 3.5-in. specimens (2×4s) was found to be equivalent to that determined using the 2- by 2-in. specimen specified in ASTM D 143. Data are presented on the relationship between Janka hardness and the strength of clear wood. Analysis of historical data determined using standard specimens indicated no difference between side hardness...

  4. Dependence of Hardness of Silicate Glasses on Composition and Thermal History

    DEFF Research Database (Denmark)

    Jensen, Martin; Smedskjær, Morten Mattrup; Yue, Yuanzheng

    The prediction of hardness is possible for crystalline materials, but so far not possible for glasses. In this work, several important factors that should be used for predicting the hardness of glasses are discussed. To do so, we have studied the influences of thermal history and chemical composition on hardness of silicate glasses. E-glasses of different compositions are subjected to various degrees of annealing to obtain various fictive temperatures in the glasses. It is found that hardness decreases with the fictive temperature. Addition of Na2O to a SiO2-Al2O3-Na2O glass system causes a decrease in hardness. However, hardness cannot solely be determined from the degree of polymerisation of the glass network. It is also determined by the effect of ionic radius on hardness. However, this effect has an opposite trend for alkali and alkaline earth ions. The hardness increases with ionic radius...

  5. 2TB hard disk drive

    CERN Multimedia

    This particular object was used up until 2012 in the Data Centre. It slots into one of the Disk Server trays. Hard disks were invented in the 1950s. They started as large disks up to 20 inches in diameter holding just a few megabytes. They were originally called "fixed disks" or "Winchesters" (a code name used for a popular IBM product). They later became known as "hard disks" to distinguish them from "floppy disks." Hard disks have a hard platter that holds the magnetic medium, as opposed to the flexible plastic film found in tapes and floppies.

  6. Comparative transcriptome analyses of three medicinal Forsythia species and prediction of candidate genes involved in secondary metabolisms.

    Science.gov (United States)

    Sun, Luchao; Rai, Amit; Rai, Megha; Nakamura, Michimi; Kawano, Noriaki; Yoshimatsu, Kayo; Suzuki, Hideyuki; Kawahara, Nobuo; Saito, Kazuki; Yamazaki, Mami

    2018-05-07

    The three Forsythia species, F. suspensa, F. viridissima and F. koreana, have been used as herbal medicines in China, Japan and Korea for centuries, and they are known to be rich sources of numerous pharmaceutical metabolites: forsythin, forsythoside A, arctigenin, rutin and other phenolic compounds. In this study, de novo transcriptome sequencing and assembly were performed on these species. Using leaf and flower tissues of F. suspensa, F. viridissima and F. koreana, 1.28-2.45 Gbp of Illumina-based paired-end reads were obtained and assembled into 81,913, 88,491 and 69,458 unigenes, respectively. Classification of the annotated unigenes into gene ontology terms and KEGG pathways was used to compare the transcriptomes of the three Forsythia species. Expression analysis of orthologous genes across all three species showed that expression in leaf tissues was highly correlated. Candidate genes presumably involved in the biosynthetic pathways of lignans and phenylethanoid glycosides were screened as co-expressed genes; they are highly expressed in the leaves of F. viridissima and F. koreana. Furthermore, three unigenes annotated as acyltransferases were predicted to be associated with the biosynthesis of acteoside and forsythoside A, based on their expression patterns and phylogenetic analysis. This study is the first report of comparative transcriptome analyses of the medicinally important Forsythia genus and will serve as an important resource to facilitate further studies on the biosynthesis and regulation of therapeutic compounds in Forsythia species.

  7. Predicting human papillomavirus vaccine uptake in young adult women: Comparing the Health Belief Model and Theory of Planned Behavior

    Science.gov (United States)

    Gerend, Mary A.; Shepherd, Janet E.

    2012-01-01

    Background Although theories of health behavior have guided thousands of studies, relatively few studies have compared these theories against one another. Purpose The purpose of the current study was to compare two classic theories of health behavior—the Health Belief Model (HBM) and the Theory of Planned Behavior (TPB)—in their prediction of human papillomavirus (HPV) vaccination. Methods After watching a gain-framed, loss-framed, or control video, women (N=739) ages 18–26 completed a survey assessing HBM and TPB constructs. HPV vaccine uptake was assessed ten months later. Results Although the message framing intervention had no effect on vaccine uptake, support was observed for both the TPB and HBM. Nevertheless, the TPB consistently outperformed the HBM. Key predictors of uptake included subjective norms, self-efficacy, and vaccine cost. Conclusions Despite the observed advantage of the TPB, findings revealed considerable overlap between the two theories and highlighted the importance of proximal versus distal predictors of health behavior. PMID:22547155

  8. Particle production at large transverse momentum and hard collision models

    International Nuclear Information System (INIS)

    Ranft, G.; Ranft, J.

    1977-04-01

    The majority of the presently available experimental data is consistent with hard scattering models. Therefore the hard scattering model seems to be well established. There is good evidence for jets in large transverse momentum reactions as predicted by these models. The overall picture is however not yet well enough understood. We mention only the empirical hard scattering cross section introduced in most of the models, the lack of a deep theoretical understanding of the interplay between quark confinement and jet production, and the fact that we are not yet able to discriminate conclusively between the many proposed hard scattering models. The status of different hard collision models discussed in this paper is summarized. (author)

  9. Validation of the LOD score compared with APACHE II score in prediction of the hospital outcome in critically ill patients.

    Science.gov (United States)

    Khwannimit, Bodin

    2008-01-01

    The Logistic Organ Dysfunction score (LOD) is an organ dysfunction score that can predict hospital mortality. The aim of this study was to validate the performance of the LOD score compared with the Acute Physiology and Chronic Health Evaluation II (APACHE II) score in a mixed intensive care unit (ICU) at a tertiary referral university hospital in Thailand. The data were collected prospectively on consecutive ICU admissions over a 24 month period from July 1, 2004 until June 30, 2006. Discrimination was evaluated by the area under the receiver operating characteristic curve (AUROC). Calibration was assessed by the Hosmer-Lemeshow goodness-of-fit H statistic. The overall fit of each model was evaluated by the Brier score. Overall, 1,429 patients were enrolled during the study period. Mortality was 20.9% in the ICU and 27.9% in the hospital. The median ICU and hospital lengths of stay were 3 and 18 days, respectively, for all patients. Both models showed excellent discrimination. The AUROC for the LOD and APACHE II were 0.860 [95% confidence interval (CI) = 0.838-0.882] and 0.898 (95% CI = 0.879-0.917), respectively. The LOD score had good calibration, with a Hosmer-Lemeshow goodness-of-fit H chi-square of 10 (p = 0.44), whereas the APACHE II had poor calibration, with a Hosmer-Lemeshow goodness-of-fit H chi-square of 75.69 (p < 0.001). The Brier scores for overall fit were 0.123 (95% CI = 0.107-0.141) for the LOD and 0.114 (95% CI = 0.098-0.132) for the APACHE II. Thus, the LOD score was found to be accurate for predicting hospital mortality in general critically ill patients in Thailand.
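    The three performance measures used in this validation (AUROC for discrimination, the Hosmer-Lemeshow H statistic for calibration, and the Brier score for overall fit) each have simple computational forms. A minimal sketch follows; binning and tie handling are simplified relative to standard statistical packages.

```python
import numpy as np

def auroc(y, p):
    """AUC via the rank-sum (Mann-Whitney) identity: the probability that
    a random positive case receives a higher predicted risk than a random
    negative case, counting ties as half."""
    pos, neg = p[y == 1], p[y == 0]
    wins = 0.0
    for pp in pos:
        wins += np.sum(pp > neg) + 0.5 * np.sum(pp == neg)
    return wins / (len(pos) * len(neg))

def hosmer_lemeshow(y, p, groups=10):
    """Hosmer-Lemeshow goodness-of-fit chi-square over risk-ordered groups."""
    order = np.argsort(p)
    y, p = y[order], p[order]
    stat = 0.0
    for idx in np.array_split(np.arange(len(y)), groups):
        n = len(idx)
        if n == 0:
            continue
        obs, exp = y[idx].sum(), p[idx].sum()
        pbar = exp / n
        stat += (obs - exp) ** 2 / (n * pbar * (1 - pbar))
    return stat

def brier(y, p):
    """Mean squared difference between outcome and predicted probability."""
    return np.mean((y - p) ** 2)
```

    A large H statistic (small p-value, as for APACHE II here) signals that predicted and observed mortality diverge within risk strata even when discrimination is excellent.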

  10. Hard coal; Steinkohle

    Energy Technology Data Exchange (ETDEWEB)

    Loo, Kai van de; Sitte, Andreas-Peter [Gesamtverband Steinkohle e.V. (GVSt), Herne (Germany)

    2015-07-01

    Internationally, the hard coal market in 2014 was characterized by stagnation for the first time in a long while. In Germany, hard coal consumption even declined significantly, mainly because of the decrease in power generation; the national energy transition has now had a noticeable negative effect on hard coal use. The political course that has been set suggests a further substantial downward movement in the future. In the ongoing phase-out of the German hard coal industry, with three mines still active, there was no mine closure in 2014. However, the next closure is due at the end of 2015, and planning for the period after mining has continued.

  11. A novel transcriptomics based in vitro method to compare and predict hepatotoxicity based on mode of action

    International Nuclear Information System (INIS)

    De Abrew, K. Nadira; Overmann, Gary J.; Adams, Rachel L.; Tiesman, Jay P.; Dunavent, John; Shan, Yuqing K.; Carr, Gregory J.; Daston, George P.; Naciff, Jorge M.

    2015-01-01

    High-content data have the potential to inform mechanism of action for toxicants. However, most data to support this notion have been generated in vivo. Because many cell lines and primary cells maintain a differentiated cell phenotype, it is possible that cells grown in culture may also be useful in predictive toxicology via high-content approaches such as whole-genome microarray. We evaluated global changes in gene expression in primary rat hepatocytes exposed to two concentrations of ten hepatotoxicants: acetaminophen (APAP), β-naphthoflavone (BNF), chlorpromazine (CPZ), clofibrate (CLO), bis(2-ethylhexyl)phthalate (DEHP), diisononyl phthalate (DINP), methapyrilene (MP), valproic acid (VPA), phenobarbital (PB) and WY14643 at two separate time points. These compounds were selected to cover a range of mechanisms of toxicity, with some overlap in expected mechanism to address the question of how predictive gene expression analysis is, for a given mode of action. Gene expression microarray analysis was performed on cells after 24 h and 48 h of exposure to each chemical using Affymetrix microarrays. Cluster analysis suggests that the primary hepatocyte model was capable of responding to these hepatotoxicants, with changes in gene expression that appear to be mode of action-specific. Among the different methods used for analysis of the data, a combination method that used pathways (MOAs) to filter total probesets provided the most robust analysis. The analysis resulted in the phthalates clustering closely together, with the two other peroxisome proliferators, CLO and WY14643, eliciting similar responses at the whole-genome and pathway levels. The Cyp inducers PB, MP, CPZ and BNF also clustered together. VPA and APAP had profiles that were unique. A similar analysis was performed on externally available (TG-GATES) in vivo data for 6 of the chemicals (APAP, CLO, CPZ, MP, MP and WY14643) and compared to the in vitro result. These results indicate that transcription

  12. C-terminal motif prediction in eukaryotic proteomes using comparative genomics and statistical over-representation across protein families

    Directory of Open Access Journals (Sweden)

    Cutler Sean R

    2007-06-01

    Background: The carboxy termini of proteins are a frequent site of activity for a variety of biologically important functions, ranging from post-translational modification to protein targeting. Several short peptide motifs involved in protein sorting roles and dependent upon their proximity to the C-terminus for proper function have already been characterized. As a limited number of such motifs have been identified, the potential exists for genome-wide statistical analysis and comparative genomics to reveal novel peptide signatures functioning in a C-terminal dependent manner. We have applied a novel methodology to the prediction of C-terminal-anchored peptide motifs involving a simple z-statistic and several techniques for improving the signal-to-noise ratio. Results: We examined the statistical over-representation of position-specific C-terminal tripeptides in 7 eukaryotic proteomes. Sequence randomization models and simple-sequence masking were applied for the successful reduction of background noise. Similarly, as C-terminal homology among members of large protein families may artificially inflate tripeptide counts in an irrelevant and obfuscating manner, gene-family clustering was performed prior to the analysis in order to assess tripeptide over-representation across protein families as opposed to across all proteins. Finally, comparative genomics was used to identify tripeptides significantly occurring in multiple species. This approach has been able to predict, to our knowledge, all C-terminally anchored targeting motifs present in the literature. These include the PTS1 peroxisomal targeting signal (SKL*), the ER-retention signal (K/HDEL*), the ER-retrieval signal for membrane bound proteins (KKxx*), the prenylation signal (CC*) and the CaaX box prenylation motif. In addition to a high statistical over-representation of these known motifs, a collection of significant tripeptides with a high propensity for biological function exists...
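    The core of the method above, scoring each position-specific C-terminal tripeptide against a background expectation with a simple z-statistic, can be sketched as follows. This is a simplified illustration: the paper's randomization models, simple-sequence masking and gene-family clustering are omitted, and the background model here is just the tripeptide's frequency over all positions.

```python
import numpy as np
from collections import Counter

def cterm_tripeptide_z(proteins):
    """Z-statistic for over-representation of each observed C-terminal
    tripeptide, comparing its count at the C-terminus against the count
    expected from its frequency across whole sequences."""
    n = len(proteins)
    cterm = Counter(p[-3:] for p in proteins if len(p) >= 3)

    # Background: frequency of each tripeptide over all positions.
    background = Counter()
    total = 0
    for p in proteins:
        for i in range(len(p) - 2):
            background[p[i:i + 3]] += 1
            total += 1

    z = {}
    for tri, k in cterm.items():
        prob = background[tri] / total            # background probability
        mu = n * prob                             # expected C-terminal count
        sd = np.sqrt(n * prob * (1 - prob))       # binomial standard deviation
        z[tri] = (k - mu) / sd if sd > 0 else 0.0
    return z
```

    Tripeptides whose z-statistic clears a significance threshold in several proteomes would then be candidate C-terminally anchored motifs, in the spirit of the comparative step described above.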

  13. Melting of polydisperse hard disks

    NARCIS (Netherlands)

    Pronk, S.; Frenkel, D.

    2004-01-01

    The melting of a polydisperse hard-disk system is investigated by Monte Carlo simulations in the semigrand canonical ensemble. This is done in the context of possible continuous melting by a dislocation-unbinding mechanism, as an extension of the two-dimensional hard-disk melting problem. We find

  14. Estimated GFR (eGFR) by prediction equation in staging of chronic kidney disease compared to gamma camera GFR

    Directory of Open Access Journals (Sweden)

    Mohammad Masum Alam

    2016-07-01

    Background: Glomerular filtration rate (GFR) is an effective tool for the diagnosis and staging of chronic kidney disease (CKD), but how well different methods of measuring it perform in patients with renal insufficiency is controversial. Objective: The objective of this study was to evaluate the performance of eGFR in staging of CKD compared to gamma camera based GFR. Methods: This cross-sectional analytical study was conducted in the Department of Biochemistry, Bangabandhu Sheikh Mujib Medical University (BSMMU), in collaboration with the National Institute of Nuclear Medicine and Allied Sciences, BSMMU, from January 2011 to December 2012. Gamma camera based GFR was estimated from DTPA renograms, and eGFR was estimated by three prediction equations. The Bland-Altman agreement test was used to assess agreement between each equation-based eGFR method and the gamma camera based GFR, and agreement between the CKD stages identified by the different methods was assessed by Kappa analysis. Results: Bland-Altman analysis between gamma camera GFR and GFR estimated by the CG equation, the CG equation corrected for BSA, and the MDRD equation showed statistically significant agreement. CKD stages determined by CG GFR, CG GFR corrected for BSA, and MDRD GFR were compared with gamma camera based staging by Kappa analysis; the kappa values were 0.66, 0.77 and 0.79, respectively. Conclusions: These findings suggest that GFR estimated by the MDRD equation shows good agreement with gamma camera based GFR in CKD patients, and that eGFR by the MDRD formula may be a very effective tool for staging CKD in the Bangladeshi population.
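    For reference, the prediction equations compared in studies of this kind have standard closed forms. The sketch below gives the 4-variable MDRD study equation (with the 175 coefficient used for IDMS-traceable creatinine assays) and the Cockcroft-Gault formula; the BSA correction step applied to CG in this study is omitted.

```python
def egfr_mdrd(scr_mg_dl, age, female, black=False):
    """4-variable MDRD study equation, in mL/min/1.73 m^2
    (serum creatinine in mg/dL, age in years)."""
    egfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

def crcl_cockcroft_gault(scr_mg_dl, age, weight_kg, female):
    """Cockcroft-Gault creatinine clearance, in mL/min."""
    crcl = (140.0 - age) * weight_kg / (72.0 * scr_mg_dl)
    if female:
        crcl *= 0.85
    return crcl
```

    CKD staging then follows from the eGFR value (e.g. stage 3 for 30-59 mL/min/1.73 m^2 under the conventional cutoffs), which is what the kappa analysis above compares across methods.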

  15. Wind-Wave Effects on Vertical Mixing in Chesapeake Bay, USA: comparing observations to second-moment closure predictions.

    Science.gov (United States)

    Fisher, A. W.; Sanford, L. P.; Scully, M. E.

    2016-12-01

    Coherent wave-driven turbulence generated through wave breaking or nonlinear wave-current interactions, e.g. Langmuir turbulence (LT), can significantly enhance the downward transfer of momentum, kinetic energy, and dissolved gases in the oceanic surface layer. There are few observations of these processes in the estuarine or coastal environments, where wind-driven mixing may co-occur with energetic tidal mixing and strong density stratification. This presents a major challenge for evaluating vertical mixing parameterizations used in modeling estuarine and coastal dynamics. We carried out a large, multi-investigator study of wind-driven estuarine dynamics in the middle reaches of Chesapeake Bay, USA, during 2012-2013. The center of the observational array was an instrumented turbulence tower with both atmospheric and marine turbulence sensors as well as rapidly sampled temperature and conductivity sensors. For this paper, we examined the impacts of surface gravity waves on vertical profiles of turbulent mixing and compared our results to second-moment turbulence closure predictions. Wave and turbulence measurements collected from the vertical array of Acoustic Doppler Velocimeters (ADVs) provided direct estimates of the dominant terms in the TKE budget and the surface wave field. Observed dissipation rates, TKE levels, and turbulent length scales are compared to published scaling relations and used in the calculation of second-moment nonequilibrium stability functions. Results indicate that in the surface layer of the estuary, where elevated dissipation is balanced by vertical divergence in TKE flux, existing nonequilibrium stability functions underpredict observed eddy viscosities. The influences of wave breaking and coherent wave-driven turbulence on modeled and observed stability functions will be discussed further in the context of turbulent length scales, TKE and dissipation profiles, and the depth at which the wave-dominated turbulent transport layer

  16. Added value of CT perfusion compared to CT angiography in predicting clinical outcomes of stroke patients treated with mechanical thrombectomy

    Energy Technology Data Exchange (ETDEWEB)

    Tsogkas, Ioannis; Knauth, Michael; Schregel, Katharina; Behme, Daniel; Psychogios, Marios Nikos [University Medicine Goettingen, Department of Neuroradiology, Goettingen (Germany); Wasser, Katrin; Maier, Ilko; Liman, Jan [University Medicine Goettingen, Department of Neurology, Goettingen (Germany)

    2016-11-15

    CTP images analyzed with the Alberta stroke program early CT scale (ASPECTS) have been shown to be optimal predictors of clinical outcome. In this study we compared two biomarkers, the cerebral blood volume (CBV)-ASPECTS and the CTA-ASPECTS as predictors of clinical outcome after thrombectomy. Stroke patients with thrombosis of the M1 segment of the middle cerebral artery were included in our study. All patients underwent initial multimodal CT with CTP and CTA on a modern CT scanner. Treatment consisted of full dose intravenous tissue plasminogen activator, when applicable, and mechanical thrombectomy. Three neuroradiologists separately scored CTP and CTA images with the ASPECTS score. Sixty-five patients were included. Median baseline CBV-ASPECTS and CTA-ASPECTS for patients with favourable clinical outcome at follow-up were 8 [interquartile range (IQR) 8-9 and 7-9 respectively]. Patients with poor clinical outcome showed a median baseline CBV-ASPECTS of 6 (IQR 5-8, P < 0.0001) and a median baseline CTA-ASPECTS of 7 (IQR 7-8, P = 0.18). Using CBV-ASPECTS and CTA-ASPECTS raters predicted futile reperfusions in 96 % and 56 % of the cases, respectively. CBV-ASPECTS is a significant predictor of clinical outcome in patients with acute ischemic stroke treated with mechanical thrombectomy. (orig.)

  17. Prediction Model of Cutting Parameters for Turning High Strength Steel Grade-H: Comparative Study of Regression Model versus ANFIS

    Directory of Open Access Journals (Sweden)

    Adel T. Abbas

    2017-01-01

    The Grade-H high strength steel is used in the manufacturing of many civilian and military products. The procedures for manufacturing these parts include several turning operations. The key factors in the manufacturing of these parts are accuracy, surface roughness (Ra), and material removal rate (MRR). The production line for these parts contains many CNC turning machines to achieve good accuracy and repeatability. The manufacturing engineer must meet the surface roughness value required by the design drawing on the first trial, otherwise the parts are rejected, while also keeping the metal removal rate as high as possible. The rejection of these parts at any processing stage represents a huge problem for any factory, because the processing and raw material of these parts are very expensive. In this paper, an ANFIS (Adaptive Network-based Fuzzy Inference System) artificial neural network approach was used for predicting the surface roughness for different cutting parameters in CNC turning operations, and these parameters were investigated to obtain the minimum surface roughness. In addition, a mathematical model for surface roughness was obtained from the experimental data using a regression analysis method. The experimental data were then compared with both the regression analysis results and the ANFIS estimations.
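    A regression model of surface roughness on cutting parameters, of the kind compared against ANFIS in this paper, can be sketched with ordinary least squares. The data below are hypothetical and the model is first-order; the paper's actual experimental design and model form may differ.

```python
import numpy as np

# Hypothetical turning runs: (cutting speed m/min, feed mm/rev, depth of cut mm)
X = np.array([
    [100, 0.10, 0.5],
    [100, 0.20, 1.0],
    [150, 0.10, 1.0],
    [150, 0.20, 0.5],
    [200, 0.10, 0.5],
    [200, 0.20, 1.0],
])
Ra = np.array([1.1, 2.3, 0.9, 1.9, 0.8, 1.8])   # measured roughness, um

# First-order regression model with intercept, fitted by least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, Ra, rcond=None)

def predict_ra(speed, feed, depth):
    """Predicted surface roughness from the fitted linear model."""
    return coef @ np.array([1.0, speed, feed, depth])
```

    In turning, feed rate typically dominates Ra, so in a fit like this the feed coefficient is expected to be the largest positive term; minimizing predicted Ra over the feasible parameter ranges then mirrors the optimization described above.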

  18. Usefulness of semi-automatic volumetry compared to established linear measurements in predicting lymph node metastases in MSCT

    Energy Technology Data Exchange (ETDEWEB)

    Buerke, Boris; Puesken, Michael; Heindel, Walter; Wessling, Johannes (Dept. of Clinical Radiology, Univ. of Muenster (Germany)), email: buerkeb@uni-muenster.de; Gerss, Joachim (Dept. of Medical Informatics and Biomathematics, Univ. of Muenster (Germany)); Weckesser, Matthias (Dept. of Nuclear Medicine, Univ. of Muenster (Germany))

    2011-06-15

    Background Volumetry of lymph nodes potentially better reflects asymmetric size alterations independently of lymph node orientation in comparison to metric parameters (e.g. long-axis diameter). Purpose To distinguish between benign and malignant lymph nodes by comparing 2D and semi-automatic 3D measurements in MSCT. Material and Methods FDG-18 PET-CT was performed in 33 patients prior to therapy for malignant melanoma at stage III/IV. One hundred and eighty-six cervico-axillary, abdominal and inguinal lymph nodes were evaluated independently by two radiologists, both manually and with the use of semi-automatic segmentation software. Long axis (LAD), short axis (SAD), maximal 3D diameter, volume and elongation were obtained. PET-CT, PET-CT follow-up and/or histology served as a combined reference standard. Statistics encompassed intra-class correlation coefficients and ROC curves. Results Compared to manual assessment, semi-automatic inter-observer variability was found to be lower, e.g. at 2.4% (95% CI 0.05-4.8) for LAD. The standard of reference revealed metastases in 90 (48%) of 186 lymph nodes. Semi-automatic prediction of lymph node metastases revealed highest areas under the ROC curves for volume (reader 1 0.77, 95%CI 0.64-0.90; reader 2 0.76, 95%CI 0.59-0.86) and SAD (reader 1 0.76, 95%CI 0.64-0.88; reader 2 0.75, 95%CI 0.62-0.89). The findings for LAD (reader 1 0.73, 95%CI 0.60-0.86; reader 2 0.71, 95%CI 0.57-0.85) and maximal 3D diameter (reader 1 0.70, 95%CI 0.53-0.86; reader 2 0.76, 95%CI 0.50-0.80) were found substantially lower and for elongation (reader 1 0.65, 95%CI 0.50-0.79; reader 2 0.66, 95%CI 0.52-0.81) significantly lower (p < 0.05). Conclusion Semi-automatic analysis of lymph nodes in malignant melanoma is supported by high segmentation quality and reproducibility. As compared to established SAD, semi-automatic lymph node volumetry does not have an additive role for categorizing lymph nodes as normal or metastatic in malignant
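The areas under the ROC curves reported above can be computed from raw scores with the rank-statistic (Mann-Whitney) formulation of AUC. The node volumes and metastasis labels below are invented for illustration, not the study's data:

```python
def roc_auc(scores, labels):
    """AUC as the fraction of (positive, negative) pairs ranked correctly;
    ties count half. Equivalent to the area under the empirical ROC curve."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical node volumes (cm^3) and metastasis labels (1 = metastatic)
volumes = [0.2, 0.4, 0.5, 0.9, 1.1, 1.6]
labels  = [0,   0,   1,   0,   1,   1]
auc = roc_auc(volumes, labels)
```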

  19. Hardness variability in commercial technologies

    International Nuclear Information System (INIS)

    Shaneyfelt, M.R.; Winokur, P.S.; Meisenheimer, T.L.; Sexton, F.W.; Roeske, S.B.; Knoll, M.G.

    1994-01-01

    The radiation hardness of commercial Floating Gate 256K E²PROMs from a single diffusion lot was observed to vary between 5 and 25 krad(Si) when irradiated at a low dose rate of 64 mrad(Si)/s. Additional variations in E²PROM hardness were found to depend on bias condition and failure mode (i.e., inability to read or write the memory), as well as the foundry at which the part was manufactured. This variability is related to system requirements, and it is shown that hardness level and variability affect the allowable mode of operation for E²PROMs in space applications. The radiation hardness of commercial 1-Mbit CMOS SRAMs from Micron, Hitachi, and Sony irradiated at 147 rad(Si)/s was approximately 12, 13, and 19 krad(Si), respectively. These failure levels appear to be related to increases in leakage current during irradiation. Hardness of SRAMs from each manufacturer varied by less than 20%, but differences between manufacturers are significant. The Qualified Manufacturer's List approach to radiation hardness assurance is suggested as a way to reduce variability and to improve the hardness level of commercial technologies

  20. A multi-step system for screening and localization of hard exudates in retinal images

    Science.gov (United States)

    Bopardikar, Ajit S.; Bhola, Vishal; Raghavendra, B. S.; Narayanan, Rangavittal

    2012-03-01

    The number of people being affected by Diabetes mellitus worldwide is increasing at an alarming rate. Monitoring of the diabetic condition and its effects on the human body is therefore of great importance. Of particular interest is diabetic retinopathy (DR), which is a result of prolonged, unchecked diabetes and affects the visual system. DR is a leading cause of blindness throughout the world; at any point in time, 25-44% of people with diabetes are afflicted by DR. Automation of the screening and monitoring process for DR is therefore essential for efficient utilization of healthcare resources and optimal treatment of the affected individuals. Such automation would use retinal images and detect the presence of specific artifacts such as hard exudates, hemorrhages and soft exudates (that may appear in the image) to gauge the severity of DR. In this paper, we focus on the detection of hard exudates. We propose a two-step system that consists of a screening step that classifies retinal images as normal or abnormal based on the presence of hard exudates, and a detection stage that localizes these artifacts in an abnormal retinal image. The proposed screening step automatically detects the presence of hard exudates with a high sensitivity and positive predictive value (PPV). The detection/localization step uses a k-means based clustering approach to localize hard exudates in the retinal image. Suitable feature vectors are chosen based on their ability to isolate hard exudates while minimizing false detections. The algorithm was tested on a benchmark dataset (DIARETDB1) and was seen to provide a superior performance compared to existing methods. The two-step process described in this paper can be embedded in a tele-ophthalmology system to aid with speedy detection and diagnosis of the severity of DR.
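A minimal sketch of the k-means-based localization step, assuming (as a simplification) that pixels are clustered on a single normalized intensity feature rather than the paper's full feature vectors. The intensity values and two-cluster setup are illustrative assumptions:

```python
def kmeans_1d(values, centers, iters=10):
    """Lloyd's algorithm in one dimension with fixed initial centers."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for v in values:
            nearest = min(range(len(centers)), key=lambda k: abs(v - centers[k]))
            clusters[nearest].append(v)
        centers = [sum(c) / len(c) if c else centers[k]
                   for k, c in enumerate(clusters)]
    return centers

# Hypothetical normalized pixel intensities: dark background vs bright exudates
intensities = [0.10, 0.12, 0.11, 0.13, 0.90, 0.95, 0.92]
dark, bright = kmeans_1d(intensities, centers=[0.0, 1.0])
threshold = (dark + bright) / 2        # pixels above this are flagged as exudate
exudate_pixels = [v for v in intensities if v > threshold]
```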

  1. Comparative study of various grading systems in oral squamous cell carcinoma and their value in predicting lymph node metastasis

    Directory of Open Access Journals (Sweden)

    Saleha Jamadar

    2014-01-01

    Conclusion: The histopathological parameters that could help in predicting lymph node metastases (LNM are keratinization, nuclear pleomorphism (NP, and the pattern of invasion (POI when assessed at the invasive front. When the whole tumor was considered, histopathological parameters like NP and POI were significant in predicting LNM.

  2. On scale dependence of hardness

    International Nuclear Information System (INIS)

    Shorshorov, M.Kh.; Alekhin, V.P.; Bulychev, S.I.

    1977-01-01

    The concept of hardness as a structure-sensitive characteristic of a material is considered. It is shown that, under the decreasing stress field beneath the indenter, the hardness function is determined by the average distance, Lsub(a), between the stops (fixed and sessile dislocations, segregation particles, etc.). In the general case, Lsub(a) depends on the size of the impression, which explains the great diversity of hardness functions. The concept of the average true deformation rate during indentation is introduced

  3. Photon technology. Hard photon technology; Photon technology. Hard photon gijutsu

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-03-01

    Research results on hard photon technology have been summarized as part of a novel technology development effort highly utilizing the quantum nature of the photon. Hard photon technology refers to photon beam technologies using photons in the 0.1 to 200 nm wavelength region. Hard photons have not been used in industry due to the lack of suitable photon sources and optical devices. However, hard photons in this wavelength region are expected to bring about innovations in areas such as ultrafine processing and material synthesis due to their atom-selective reactions, inner-shell excitation reactions, and high spatial resolution. Technological themes and possibilities have therefore been surveyed. Although the individual technologies for hard photon generation, control, and utilization have been proposed in principle and verified, they are still far from practical application. For the photon source technology, the laser-diode-pumped driver laser technology, laser plasma photon source technology, synchrotron radiation photon source technology, and vacuum ultraviolet photon source technology are presented. For the optical device technology, the multi-layer film technology for beam mirrors and the aspherical lens processing technology are introduced. Reduction lithography technology, hard photon excitation processes, and methods of analysis and measurement are also described. 430 refs., 165 figs., 23 tabs.

  4. Shock outcome prediction before and after CPR: a comparative study of manual and automated active compression-decompression CPR.

    Science.gov (United States)

    Box, M S; Watson, J N; Addison, P S; Clegg, G R; Robertson, C E

    2008-09-01

    We report on a study designed to compare the relative efficacy of manual CPR (M-CPR) and automated mechanical CPR (ACD-CPR) provided by an active compression-decompression (ACD) device. The ECG signals of out-of-hospital cardiac arrest patients of cardiac aetiology were analysed just prior to, and immediately after, cardiopulmonary resuscitation (CPR) to assess the likelihood of successful defibrillation at these time points. The cardioversion outcome prediction (COP) measure previously developed by our group was used to quantify the probability of return of spontaneous circulation (ROSC) after counter-shock and was used as a measure of the efficacy of CPR. An initial validation study using COP to predict shock outcome from the patient data set resulted in a performance of 60% specificity achieved at 100% sensitivity on a blind test of the data. This is comparable with previous studies and provided confidence in the robustness of the technique across hardware platforms. Significantly, the COP marker also displayed an ability to stratify according to outcomes: asystole, ventricular fibrillation (VF), pulseless electrical activity (PEA), normal sinus rhythm (NSR). We then used the validated COP marker to analyse the ECG data record just prior to and immediately after the chest compression segments. This was initially performed for 87 CPR segments where VF was both the pre- and post-CPR waveform. An increase in the mean COP values was found for both CPR types. A signed rank sum test found the increase due to manual CPR not to be significant (p>0.05), whereas the increase due to automated CPR was found to be significant (p<0.05); the mean COP increase was larger for the automated CPR (1.26, p=0.024) than for the manual CPR (0.99, p=0.124). These results indicate that the application of CPR does indeed provide beneficial preparation of the heart prior to defibrillation therapy, whether manual or automated CPR is applied.
The COP marker shows promise as a definitive, quantitative determinant of the immediate positive effect of both types of CPR
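The reported operating point of 60% specificity at 100% sensitivity can be reproduced on toy data by setting the decision threshold at the lowest score among positive outcomes. The scores and outcomes below are hypothetical, not patient data:

```python
def spec_at_full_sens(scores, outcomes):
    """Highest specificity attainable while keeping sensitivity at 100%.
    outcomes: 1 = ROSC after shock, 0 = no ROSC; higher score = shock advised."""
    thr = min(s for s, o in zip(scores, outcomes) if o == 1)   # lowest positive score
    tn = sum(1 for s, o in zip(scores, outcomes) if o == 0 and s < thr)
    n_neg = sum(1 for o in outcomes if o == 0)
    return tn / n_neg

# Hypothetical COP-like scores and shock outcomes
scores   = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2]
outcomes = [1,   1,   0,   1,   0,   0,   0]
spec = spec_at_full_sens(scores, outcomes)   # specificity at 100% sensitivity
```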

  5. Stochastic continuum simulation of mass arrival using a synthetic data set. The effect of hard and soft conditioning

    International Nuclear Information System (INIS)

    Kung Chen Shan; Wen Xian Huan; Cvetkovic, V.; Winberg, A.

    1992-06-01

    The non-parametric and parametric stochastic continuum approaches were applied to a realistic synthetic exhaustive hydraulic conductivity field to study the effects of hard and soft conditioning. From the reference domain, a number of data points were selected, either in a random or designed fashion, to form sample data sets. Based on established experimental variograms and the conditioning data, 100 realizations each of the studied domain were generated. The flow field was calculated for each realization, and particle arrival time and arrival position along the discharge boundary were evaluated. It was shown that conditioning on soft data reduces the uncertainty of solute arrival time and suggests an improvement in characterizing channeling effects. It was found that the improvement in the prediction of the breakthrough was moderate when conditioning on 25 hard and 100 soft data compared to 25 hard data only. (au)

  6. Comparative Human and Automatic Evaluation of Glass-Box and Black-Box Approaches to Interactive Translation Prediction

    Directory of Open Access Journals (Sweden)

    Torregrosa Daniel

    2017-06-01

    Full Text Available Interactive translation prediction (ITP is a modality of computer-aided translation that assists professional translators by offering context-based computer-generated continuation suggestions as they type. While most state-of-the-art ITP systems follow a glass-box approach, meaning that they are tightly coupled to an adapted machine translation system, a black-box approach which does not need access to the inner workings of the bilingual resources used to generate the suggestions has been recently proposed in the literature: this new approach allows new sources of bilingual information to be included almost seamlessly. In this paper, we compare for the first time the glass-box and the black-box approaches by means of an automatic evaluation of translation tasks between related languages such as English–Spanish and unrelated ones such as Arabic–English and English–Chinese, showing that, with our setup, 20%–50% of keystrokes could be saved using either method and that the black-box approach outperformed the glass-box one in five out of six scenarios operating under similar conditions. We also performed a preliminary human evaluation of English to Spanish translation for both approaches. On average, the evaluators saved 10% keystrokes and were 4% faster with the black-box approach, and saved 15% keystrokes and were 12% slower with the glass-box one; but they could have saved 51% and 69% keystrokes respectively if they had used all the compatible suggestions. Users felt the suggestions helped them to translate faster and easier. All the tools used to perform the evaluation are available as free/open–source software.

  7. Comparative assessment for future prediction of urban water environment using WEAP model: A case study of Kathmandu, Manila and Jakarta

    Science.gov (United States)

    Kumar, Pankaj; Yoshifumi, Masago; Ammar, Rafieiemam; Mishra, Binaya; Fukushi, Ken

    2017-04-01

    Uncontrolled release of pollutants, increasingly extreme weather conditions, rapid urbanization and poor governance pose a serious threat to sustainable water resource management in developing urban spaces. Considering that half of the world's mega-cities are in Asia and the Pacific, where 1.7 billion people lack access to improved water and sanitation, water security through proper management is both an increasing concern and a critical need. This research work gives a brief glimpse of the predicted future water environment of the Bagmati, Pasig and Ciliwung rivers in three different cities, viz. Kathmandu, Manila and Jakarta respectively. A hydrological model is used here to foresee the collective impacts of rapid population growth due to urbanization, as well as climate change, on unmet demand and water quality by 2030. All three rivers are major sources of water for different uses (domestic, industrial, agricultural and recreational), but uncontrolled withdrawal and sewage disposal have caused deterioration of the water environment in the recent past. The Water Evaluation and Planning (WEAP) model was used to model future river water pollution scenarios using four indicators: Dissolved Oxygen (DO), Biochemical Oxygen Demand (BOD), Chemical Oxygen Demand (COD) and Nitrate (NO3). The simulated water quality and unmet demand for the year 2030, compared with the reference year, clearly indicate that water quality deteriorates and unmet demand increases over time. This suggests that current initiatives and policies for water resource management are not sufficient, and that immediate and inclusive action through transdisciplinary research is needed.

  8. Slow Slip Predictions Based on Gabbro Dehydration and Friction Data Compared to GPS Measurements in Northern Cascadia

    Science.gov (United States)

    Rice, J. R.; Liu, Y.

    2008-12-01

    For episodic slow slip transients in subduction zones, a large uncertainty in comparing surface deformations predicted by rate and state friction modeling [Liu and Rice, JGR, 2007] to GPS measurements lies in our limited knowledge of the frictional properties and fluid pore pressure along the fault. In this study, we apply petrological data [Peacock et al., USGS, 2002; Hacker et al., JGR 2003; Wada et al., JGR, 2008] and recently reported friction data [He et al., Tectonophys, 2006, 2007] for gabbro, as a reasonable representation of the seafloor, to a Cascadia-like 2D model in order to produce simulations which show spontaneous aseismic transients. We compare the resulting inter-transient and transient surface deformations to GPS observations along the northern Cascadia margin. An inferred region along dip of elevated fluid pressure is constrained by seismological observations where available, and by thermal and petrological models for the Cascadia and SW Japan subduction zones. For the assumed a and a-b profiles, we search the model parameter space, by varying the level of effective normal stress σ, characteristic slip distance L in the source areas of transients, and the fault width under that low σ, to identify simulation cases which produce transient aseismic slip and recurrence interval similar to the observed 20-30 mm and 14 months, respectively, in northern Cascadia. Using a simple planar fault geometry and extrapolating the 2D fault slip to a 3D distribution, we find that the gabbro gouge friction data allows a much better fit to GPS observations than is possible with the granite data [Blanpied et al., JGR, 1995, 1998] which, for lack of a suitable alternative, has been used as the basis for most previous subduction earthquake modeling, including ours. Nevertheless, the values of L required to reasonably fit the geodetic data during a transient event are somewhat larger than 100 microns, rather than in the range of 10 to a few 10s of microns as might be

  9. Time-Predictable Virtual Memory

    DEFF Research Database (Denmark)

    Puffitsch, Wolfgang; Schoeberl, Martin

    2016-01-01

    Virtual memory is an important feature of modern computer architectures. For hard real-time systems, memory protection is a particularly interesting feature of virtual memory. However, current memory management units are not designed for time-predictability and therefore cannot be used in such systems. This paper investigates the requirements on virtual memory from the perspective of hard real-time systems and presents the design of a time-predictable memory management unit. Our evaluation shows that the proposed design can be implemented efficiently. The design allows address translation and address range checking in constant time of two clock cycles on a cache miss. This constant time is in strong contrast to the possible cost of a miss in a translation look-aside buffer in traditional virtual memory organizations. Compared to a platform without a memory management unit, these two additional...
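A rough functional sketch of the constant-time translation idea, modeled here in software: a single direct-indexed segment-table lookup followed by an address range check, with no multi-level page-table walk. The table layout, sizes, and segment scheme are illustrative assumptions, not the paper's actual hardware design:

```python
PAGE_BITS = 12
PAGE_SIZE = 1 << PAGE_BITS

# Hypothetical segment table: segment id -> (base physical page, number of pages)
segments = {0: (0x100, 16), 1: (0x300, 4)}

def translate(vaddr, seg):
    """Virtual-to-physical translation in two fixed steps (no table walk)."""
    base, npages = segments[seg]              # step 1: single table lookup
    vpage = vaddr >> PAGE_BITS
    if vpage >= npages:                       # step 2: address range check
        raise MemoryError("address out of segment range")
    return ((base + vpage) << PAGE_BITS) | (vaddr & (PAGE_SIZE - 1))

paddr = translate(0x2ABC, 0)   # page 2 of segment 0 -> physical page 0x102
```

Because both steps are independent of the address's value and of table depth, the latency is fixed, which is the property the paper's two-cycle bound captures in hardware.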

  10. Comparative prediction of irradiation test of CNFT and Cise prototypes of CIRENE fuel pins, a prediction by transuranus M1V1J12 code

    International Nuclear Information System (INIS)

    Suwardi

    2014-01-01

    A prototype fuel pin designed for the CIRENE HWR has been realized by the Center for Nuclear Fuel Technology (CNFT-BATAN). The prototype will be irradiated in the Power Ramp Test Facility (PRTF), which has been installed inside the RSG-GA Siwabessy reactor at Serpong. The present paper reports the preparation of the experiment and the prediction of the irradiation test. One previous PCI test report, written by Lysell G and Valli G in 1973, was found. The CNFT fuel irradiation test parameters are adapted to both the PRTF and the power loop design for the RSG-GAS reactor in Serpong, mainly the maxima of rod length, neutron flux, total rod power, and power ramp rate. The CNFT CIRENE prototype design has been reported by Futichah et al. in 2007 and 2010. The AEC-India HWR fuel pin of 19/22 fuel bundle design has also been evaluated for comparison. The first PCI test prediction has an experimental comparison for the Cise pin. The second prediction will be used for optimizing the design of the ramp test for the CNFT CIRENE fuel pin prototype. (author)

  11. Hard Diffraction - from Blois 1985 to 2005

    Energy Technology Data Exchange (ETDEWEB)

    Gunnar, Ingelman [Uppsala Univ., High Energy Physics (Sweden)

    2005-07-01

    The idea of diffractive processes with a hard scale involved, to resolve the underlying parton dynamics, was presented at the first Blois conference in 1985 and experimentally verified a few years later. Today hard diffraction is an attractive research field with high-quality data and new theoretical models. The trend from Regge-based pomeron models to QCD-based parton-level models has given insights into QCD dynamics involving perturbative gluon exchange mechanisms. In the new QCD-based models, the pomeron is not part of the proton wave function; rather, diffraction is an effect of the scattering process. Models based on interactions with a colour background field provide an interesting approach which avoids conceptual problems of pomeron-based models, such as the pomeron flux, and provide a basis for a common theoretical framework for all final states, diffractive gap events as well as non-diffractive events. Finally, the new process of gaps between jets provides strong evidence for the BFKL dynamics long predicted by QCD but so far hard to establish experimentally.

  12. A new, bright and hard aluminum surface produced by anodization

    Science.gov (United States)

    Hou, Fengyan; Hu, Bo; Tay, See Leng; Wang, Yuxin; Xiong, Chao; Gao, Wei

    2017-07-01

    Anodized aluminum (Al) and Al alloys have a wide range of applications. However, certain anodized finishes have relatively low hardness, a dull appearance and/or poor corrosion resistance, which limits their applications. In this research, Al was first electropolished in a phosphoric acid-based solution, then anodized in a sulfuric acid-based solution under controlled processing parameters. The anodized specimen was then sealed by a two-step sealing method. A systematic study including microstructure, surface morphology, hardness and corrosion resistance of these anodized films has been conducted. Results show that the hardness of this new anodized film is increased by a factor of 10 compared with that of pure Al metal. Salt spray corrosion testing also demonstrated the greatly improved corrosion resistance. Unlike traditional hard anodized Al, which presents a dull-colored surface, this newly developed anodized Al alloy possesses a very bright and shiny surface with good hardness and corrosion resistance.

  13. Predictability and interpretability of hybrid link-level crash frequency models for urban arterials compared to cluster-based and general negative binomial regression models.

    Science.gov (United States)

    Najaf, Pooya; Duddu, Venkata R; Pulugurtha, Srinivas S

    2018-03-01

    Machine learning (ML) techniques have higher prediction accuracy compared to conventional statistical methods for crash frequency modelling. However, their black-box nature limits their interpretability. The objective of this research is to combine both ML and statistical methods to develop hybrid link-level crash frequency models with high predictability and interpretability. For this purpose, the M5' model trees method (M5') is introduced and applied to classify the crash data and then calibrate a model for each homogeneous class. The data for 1134 and 345 randomly selected links on urban arterials in the city of Charlotte, North Carolina were used to develop and validate the models, respectively. The outputs from the hybrid approach are compared with the outputs from cluster-based negative binomial regression (NBR) and general NBR models. Findings indicate that M5' has high predictability and is very reliable for interpreting the role of different attributes on crash frequency compared to the other developed models.
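The hybrid idea of classifying links first and then calibrating a model per homogeneous class can be sketched with a single M5'-style split. Here the per-class model is just the leaf mean crash frequency, standing in for a calibrated NBR model, and the AADT feature and crash counts are invented:

```python
def sse(vals):
    """Sum of squared errors around the mean of a leaf."""
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals)

def best_split(x, y):
    """Choose the threshold on x minimizing total within-leaf squared error."""
    best_thr, best_err = None, float("inf")
    for thr in sorted(set(x))[1:]:
        left  = [v for xi, v in zip(x, y) if xi <  thr]
        right = [v for xi, v in zip(x, y) if xi >= thr]
        err = sse(left) + sse(right)
        if err < best_err:
            best_thr, best_err = thr, err
    return best_thr

# Invented link data: traffic volume (AADT) and observed crash counts
aadt    = [5000, 6000, 7000, 20000, 22000, 25000]
crashes = [2, 3, 2, 10, 12, 11]
thr = best_split(aadt, crashes)
left_mean  = sum(c for a, c in zip(aadt, crashes) if a < thr)  / sum(a < thr  for a in aadt)
right_mean = sum(c for a, c in zip(aadt, crashes) if a >= thr) / sum(a >= thr for a in aadt)
```

The split itself stays interpretable (a plain AADT cutoff), while each leaf can carry an arbitrarily rich statistical model, which is the property the hybrid approach exploits.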

  14. Sequence-based prediction of protein-binding sites in DNA: comparative study of two SVM models.

    Science.gov (United States)

    Park, Byungkyu; Im, Jinyong; Tuvshinjargal, Narankhuu; Lee, Wook; Han, Kyungsook

    2014-11-01

    As many structures of protein-DNA complexes have become known in recent years, several computational methods have been developed to predict DNA-binding sites in proteins. However, the inverse problem (i.e., predicting protein-binding sites in DNA) has received much less attention. One of the reasons is that the differences between the interaction propensities of nucleotides are much smaller than those between amino acids. Another reason is that DNA exhibits less diverse sequence patterns than protein. Therefore, predicting protein-binding DNA nucleotides is much harder than predicting DNA-binding amino acids. We computed the interaction propensity (IP) of nucleotide triplets with amino acids using an extensive dataset of protein-DNA complexes, and developed two support vector machine (SVM) models that predict protein-binding nucleotides from sequence data alone. One SVM model predicts protein-binding nucleotides using DNA sequence data alone, and the other predicts protein-binding nucleotides using both DNA and protein sequences. In a 10-fold cross-validation with 1519 DNA sequences, the SVM model that uses DNA sequence data only predicted protein-binding nucleotides with an accuracy of 67.0%, an F-measure of 67.1%, and a Matthews correlation coefficient (MCC) of 0.340. With an independent dataset of 181 DNAs that were not used in training, it achieved an accuracy of 66.2%, an F-measure of 66.3% and an MCC of 0.324. The other SVM model, which uses both DNA and protein sequences, achieved an accuracy of 69.6%, an F-measure of 69.6%, and an MCC of 0.383 in a 10-fold cross-validation with 1519 DNA sequences and 859 protein sequences. With an independent dataset of 181 DNAs and 143 proteins, it showed an accuracy of 67.3%, an F-measure of 66.5% and an MCC of 0.329. Both in cross-validation and independent testing, the second SVM model that used both DNA and protein sequence data showed better performance than the first model that used DNA sequence data.
To the best of
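A toy version of the sequence-only model can be sketched with bag-of-triplets features, echoing the paper's use of nucleotide triplets. A perceptron is used below as a lightweight stand-in for the SVM, and the training sequences and labels are synthetic, not from the paper's dataset:

```python
from itertools import product

TRIPLETS = ["".join(t) for t in product("ACGT", repeat=3)]   # 64 features
IDX = {t: i for i, t in enumerate(TRIPLETS)}

def encode(seq):
    """Bag-of-triplets count vector over overlapping windows of a DNA sequence."""
    x = [0.0] * 64
    for i in range(len(seq) - 2):
        x[IDX[seq[i:i + 3]]] += 1.0
    return x

def train_perceptron(data, epochs=20):
    """Online perceptron: update weights on each misclassified example."""
    w, b = [0.0] * 64, 0.0
    for _ in range(epochs):
        for seq, label in data:               # label: +1 binding, -1 non-binding
            x = encode(seq)
            if label * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + label * xi for wi, xi in zip(w, x)]
                b += label
    return w, b

# Synthetic, linearly separable training set (GC-rich vs AT-rich)
data = [("GGGCGGGC", 1), ("GCGGGGCG", 1), ("ATATATAT", -1), ("TTTAATTA", -1)]
w, b = train_perceptron(data)
pred = lambda seq: 1 if sum(wi * xi for wi, xi in zip(w, encode(seq))) + b > 0 else -1
```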

  15. The underlying event in hard scattering processes

    International Nuclear Information System (INIS)

    Field, R.

    2002-01-01

    The authors study the behavior of the underlying event in hard scattering proton-antiproton collisions at 1.8 TeV and compare with the QCD Monte-Carlo models. The underlying event is everything except the two outgoing hard scattered jets and receives contributions from the beam-beam remnants plus initial and final-state radiation. The data indicate that neither ISAJET nor HERWIG produces enough charged particles (with p T > 0.5 GeV/c) from the beam-beam remnant component and that ISAJET produces too many charged particles from initial-state radiation. PYTHIA, which uses multiple parton scattering to enhance the underlying event, does the best job describing the data

  16. Somatic growth of mussels Mytilus edulis in field studies compared to predictions using BEG, DEB, and SFG models

    DEFF Research Database (Denmark)

    Larsen, Poul Scheel; Filgueira, Ramón; Riisgård, Hans Ulrik

    2014-01-01

    Prediction of somatic growth of blue mussels, Mytilus edulis, based on the data from 2 field-growth studies of mussels in suspended net-bags in Danish waters was made by 3 models: the bioenergetic growth (BEG), the dynamic energy budget (DEB), and the scope for growth (SFG). Here, the standard BEG... at nearly constant environmental conditions with a mean chl a concentration of C=2.7 μg L−1, and the observed monotonous growth in the dry weight of soft parts was best predicted by DEB while BEG and SFG models produced lower growth. The second 165-day field study was affected by large variations in chl a and temperature, and the observed growth varied accordingly, but nevertheless, DEB and SFG predicted monotonous growth in good agreement with the mean pattern while BEG mimicked the field data in response to observed changes in chl a concentration and temperature. The general features of the models were that DEB...

  17. Predicting Bank Financial Failures Using Discriminant Analysis And Support Vector Machines Methods A Comparative Analysis In Commercial Banks In Sudan 2006-2014

    Directory of Open Access Journals (Sweden)

    Mohammed A. SirElkhatim

    2017-04-01

    Full Text Available Bank failures threaten the economic system as a whole; therefore, predicting bank financial failures is crucial to prevent and/or lessen their negative effects on the economic system. Financial crises, affecting both emerging markets and advanced countries over the centuries, have severe economic consequences, but they can be hard to prevent and predict; "identifying financial crises' causes remains both science and art," said Stijn Claessens, assistant director of the International Monetary Fund. While it would be better to mitigate risks, financial crises will recur, often in waves, and better crisis management is therefore important. Analyses of recurrent causes suggest that, to prevent crises, governments should consider reforms in many underlying areas. That includes developing prudent fiscal and monetary policies, better regulating the financial sector (including reducing the problem of too-big-to-fail banks), and developing effective macro-prudential policies. Despite new regulations and better supervision, crises are likely to recur, in part because they can reflect deeper problems related to income inequality, the political economy and common human behavior. As such, improvements in crisis management are also needed. This is originally a classification problem: to categorize banks as healthy or non-healthy ones. This study aims to apply discriminant analysis and Support Vector Machines methods to the bank failure prediction problem in a Sudanese case and to present a comprehensive computational comparison of the classification performances of the techniques tested. Eleven financial and non-financial ratios with six feature groups, including capital adequacy, asset quality, earnings and liquidity (CAMELS), are selected as predictor variables in the study. Credit risk has also been evaluated using logistic analysis to study the effect of Islamic finance modes, sectors and payment types used by Sudanese banks with regard to their possibilities of failure.
Experimental results
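The discriminant-analysis step can be illustrated with Fisher's linear discriminant on two hypothetical CAMELS-style ratios. All figures are invented, and the two-feature setup is a simplification of the study's eleven ratios:

```python
def fisher_direction(X0, X1):
    """Fisher discriminant direction w = Sw^-1 (m1 - m0) for 2-D samples,
    with the 2x2 scatter-matrix inverse written out by hand."""
    def mean(X):
        return [sum(r[0] for r in X) / len(X), sum(r[1] for r in X) / len(X)]
    m0, m1 = mean(X0), mean(X1)
    S = [[0.0, 0.0], [0.0, 0.0]]              # pooled within-class scatter
    for X, m in ((X0, m0), (X1, m1)):
        for r in X:
            d = [r[0] - m[0], r[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    S[i][j] += d[i] * d[j]
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    dm = [m1[0] - m0[0], m1[1] - m0[1]]
    return [(S[1][1] * dm[0] - S[0][1] * dm[1]) / det,
            (-S[1][0] * dm[0] + S[0][0] * dm[1]) / det]

# Invented ratios per bank: [capital adequacy, liquidity]
failed  = [[0.04, 0.10], [0.05, 0.12], [0.03, 0.09]]
healthy = [[0.12, 0.30], [0.14, 0.33], [0.13, 0.28]]
w = fisher_direction(failed, healthy)
score = lambda x: w[0] * x[0] + w[1] * x[1]   # larger score -> healthier profile
```

Projecting each bank onto w gives a one-dimensional score on which a single cutoff separates the two groups, which is the essence of the discriminant-analysis classifier compared against the SVM in the study.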

  18. Soil erosion model predictions using parent material/soil texture-based parameters compared to using site-specific parameters

    Science.gov (United States)

    R. B. Foltz; W. J. Elliot; N. S. Wagenbrenner

    2011-01-01

    Forested areas disturbed by access roads produce large amounts of sediment. One method to predict erosion and, hence, manage forest roads is the use of physically based soil erosion models. A perceived advantage of a physically based model is that it can be parameterized at one location and applied at another location with similar soil texture or geological parent...

  19. Comparative analysis of evolutionarily conserved motifs of epidermal growth factor receptor 2 (HER2) predicts novel potential therapeutic epitopes

    DEFF Research Database (Denmark)

    Deng, Xiaohong; Zheng, Xuxu; Yang, Huanming

    2014-01-01

    druggable epitopes/targets. We employed the PROSITE Scan to detect structurally conserved motifs and PRINTS to search for linearly conserved motifs of ECD HER2. We found that the epitopes recognized by trastuzumab and pertuzumab are located in the predicted conserved motifs of ECD HER2, supporting our...

  20. Comparing aboveground biomass predictions for an uneven-aged pine-dominated stand using local, regional, and national models

    Science.gov (United States)

    D.C. Bragg; K.M. McElligott

    2013-01-01

    Sequestration by Arkansas forests removes carbon dioxide from the atmosphere, storing this carbon in biomass that fills a number of critical ecological and socioeconomic functions. We need a better understanding of the contribution of forests to the carbon cycle, including the accurate quantification of tree biomass. Models have long been developed to predict...

  1. Comparing direct image and wavelet transform-based approaches to analysing remote sensing imagery for predicting wildlife distribution

    NARCIS (Netherlands)

    Murwira, A.; Skidmore, A.K.

    2010-01-01

    In this study we tested the ability to predict the probability of elephant (Loxodonta africana) presence in an agricultural landscape of Zimbabwe based on three methods of measuring the spatial heterogeneity in vegetation cover, where vegetation cover was measured using the Landsat Thematic Mapper

  2. Comparing large-scale hydrological model predictions with observed streamflow in the Pacific Northwest: effects of climate and groundwater

    Science.gov (United States)

    Mohammad Safeeq; Guillaume S. Mauger; Gordon E. Grant; Ivan Arismendi; Alan F. Hamlet; Se-Yeun Lee

    2014-01-01

    Assessing uncertainties in hydrologic models can improve accuracy in predicting future streamflow. Here, simulated streamflows using the Variable Infiltration Capacity (VIC) model at coarse (1/16°) and fine (1/120°) spatial resolutions were evaluated against observed streamflows from 217 watersheds. In...

  3. Comparing the predictive abilities of phenotypic and marker-assisted selection methods in a biparental lettuce population

    Science.gov (United States)

    Breeding and selection for the traits with polygenic inheritance is a challenging task that can be done by phenotypic selection, by marker-assisted selection or by genome wide selection. We tested predictive ability of four selection models in a biparental population genotyped with 95 SNP markers an...

  4. Comparing the Factors That Predict Completion and Grades among For-Credit and Open/MOOC Students in Online Learning

    Science.gov (United States)

    Almeda, Ma. Victoria; Zuech, Joshua; Utz, Chris; Higgins, Greg; Reynolds, Rob; Baker, Ryan S.

    2018-01-01

    Online education continues to become an increasingly prominent part of higher education, but many students struggle in distance courses. For this reason, there has been considerable interest in predicting which students will succeed in online courses and which will receive poor grades or drop out prior to completion. Effective intervention depends…

  5. Comparing Weighted and Unweighted Grade Point Averages in Predicting College Success of Diverse and Low-Income College Students

    Science.gov (United States)

    Warne, Russell T.; Nagaishi, Chanel; Slade, Michael K.; Hermesmeyer, Paul; Peck, Elizabeth Kimberli

    2014-01-01

    While research has shown the statistical significance of high school grade point averages (HSGPAs) in predicting future academic outcomes, the systems with which HSGPAs are calculated vary drastically across schools. Some schools employ unweighted grades that carry the same point value regardless of the course in which they are earned; other…

  6. Somatic growth of mussels Mytilus edulis in field studies compared to predictions using BEG, DEB, and SFG models

    Science.gov (United States)

    Larsen, Poul S.; Filgueira, Ramón; Riisgård, Hans Ulrik

    2014-04-01

    Prediction of somatic growth of blue mussels, Mytilus edulis, based on the data from 2 field-growth studies of mussels in suspended net-bags in Danish waters was made by 3 models: the bioenergetic growth (BEG), the dynamic energy budget (DEB), and the scope for growth (SFG). Here, the standard BEG model has been expanded to include the temperature dependence of filtration rate and respiration and an ad hoc modification to ensure a smooth transition to zero ingestion as chlorophyll a (chl a) concentration approaches zero, both guided by published data. The first 21-day field study was conducted at nearly constant environmental conditions with a mean chl a concentration of C = 2.7 μg L-1, and the observed monotonous growth in the dry weight of soft parts was best predicted by DEB while BEG and SFG models produced lower growth. The second 165-day field study was affected by large variations in chl a and temperature, and the observed growth varied accordingly, but nevertheless, DEB and SFG predicted monotonous growth in good agreement with the mean pattern while BEG mimicked the field data in response to observed changes in chl a concentration and temperature. The general features of the models were that DEB produced the best average predictions, SFG mostly underestimated growth, whereas only BEG was sensitive to variations in chl a concentration and temperature. DEB and SFG models rely on the calibration of the half-saturation coefficient to optimize the food ingestion function term to that of observed growth, and BEG is independent of observed actual growth as its predictions solely rely on the time history of the local chl a concentration and temperature.

  7. Statistical theory of correlations in random packings of hard particles.

    Science.gov (United States)

    Jin, Yuliang; Puckett, James G; Makse, Hernán A

    2014-05-01

    A random packing of hard particles represents a fundamental model for granular matter. Despite its importance, analytical modeling of random packings remains difficult due to the existence of strong correlations which preclude the development of a simple theory. Here, we take inspiration from liquid theories for the n-particle angular correlation function to develop a formalism of random packings of hard particles from the bottom up. A progressive expansion into a shell of particles converges in the large layer limit under a Kirkwood-like approximation of higher-order correlations. We apply the formalism to hard disks and predict the density of two-dimensional random close packing (RCP), ϕ(rcp) = 0.85 ± 0.01, and random loose packing (RLP), ϕ(rlp) = 0.67 ± 0.01. Our theory also predicts a phase diagram and angular correlation functions that are in good agreement with experimental and numerical data.

  8. Hard diffraction and rapidity gaps

    International Nuclear Information System (INIS)

    Brandt, A.

    1995-09-01

    The field of hard diffraction, which studies events with a rapidity gap and a hard scattering, has expanded dramatically recently. A review of new results from CDF, D0, H1 and ZEUS will be given. These results include diffractive jet production, deep-inelastic scattering in large rapidity gap events, rapidity gaps between high transverse energy jets, and a search for diffractive W-boson production. The combination of these results gives new insight into the exchanged object, believed to be the pomeron. The results are consistent with factorization and with a hard pomeron that contains both quarks and gluons. There is also evidence for the exchange of a strongly interacting color singlet in high momentum transfer (36 2 ) events

  9. Initiative hard coal; Initiative Steinkohle

    Energy Technology Data Exchange (ETDEWEB)

    Leonhardt, J.

    2007-08-02

    In order to decrease the European Union's import dependence for hard coal, the author submitted suggestions to the director for conventional sources of energy (Directorate-General for Energy and Transport) of the European Community, which met with a positive response. These suggestions are summarized in an elaboration, 'Initiative Hard Coal'. After clarifying the starting situation and defining the target, the preconditions for a better use of hard coal deposits as a raw material in the European Union are pointed out. On that basis, concrete suggestions for measures are made. Apart from the conditions of the deposits, these also concern new mining techniques and mining-economic developments, together with tasks for the mining-machine industry. (orig.)

  10. Single and repeated GnRH agonist stimulation tests compared with basal markers of ovarian reserve in the prediction of outcome in IVF

    NARCIS (Netherlands)

    Hendriks, D.J.; Broekmans, F.J.M.; Bancsi, L.F.J.M.M.; Looman, C.W.N.; Jong, F.H. de; Velde, E.R. te

    Purpose: To study the value of a single or repeated GnRH agonist stimulation test (GAST) in predicting outcome in IVF compared to basal ovarian reserve tests. Methods: A total of 57 women were included. In a cycle prior to the IVF treatment, on day 3, an antral follicle count (AFC) was performed

  11. Fat-free mass prediction equations for bioelectric impedance analysis compared to dual energy X-ray absorptiometry in obese adolescents: a validation study

    NARCIS (Netherlands)

    Hofsteenge, G.H.; Chin A Paw, M.J.M.; Weijs, P.J.M.

    2015-01-01

    Background: In clinical practice, patient friendly methods to assess body composition in obese adolescents are needed. Therefore, the bioelectrical impedance analysis (BIA) related fat-free mass (FFM) prediction equations (FFM-BIA) were evaluated in obese adolescents (age 11-18 years) compared to

  12. Accuracy of Prediction Equations to Assess Percentage of Body Fat in Children and Adolescents with Down Syndrome Compared to Air Displacement Plethysmography

    Science.gov (United States)

    Gonzalez-Aguero, A.; Vicente-Rodriguez, G.; Ara, I.; Moreno, L. A.; Casajus, J. A.

    2011-01-01

    To determine the accuracy of the published percentage body fat (%BF) prediction equations (Durnin et al., Johnston et al., Brook and Slaughter et al.) from skinfold thickness compared to air displacement plethysmography (ADP) in children and adolescents with Down syndrome (DS). Twenty-eight children and adolescents with DS (10-20 years old; 12…

  13. Comparing deep neural network and other machine learning algorithms for stroke prediction in a large-scale population-based electronic medical claims database.

    Science.gov (United States)

    Chen-Ying Hung; Wei-Chen Chen; Po-Tsun Lai; Ching-Heng Lin; Chi-Chun Lee

    2017-07-01

    Electronic medical claims (EMCs) can be used to accurately predict the occurrence of a variety of diseases, which can contribute to precise medical interventions. While there is a growing interest in the application of machine learning (ML) techniques to address clinical problems, the use of deep learning in healthcare has only recently gained attention. Deep learning, such as the deep neural network (DNN), has achieved impressive results in the areas of speech recognition, computer vision, and natural language processing in recent years. However, deep learning is often difficult to comprehend due to the complexities in its framework. Furthermore, this method has not yet been demonstrated to achieve better performance compared to other conventional ML algorithms in disease prediction tasks using EMCs. In this study, we utilize a large population-based EMC database of around 800,000 patients to compare DNN with three other ML approaches for predicting 5-year stroke occurrence. The result shows that DNN and gradient boosting decision tree (GBDT) can result in similarly high prediction accuracies that are better compared to logistic regression (LR) and support vector machine (SVM) approaches. Meanwhile, DNN achieves optimal results using lesser amounts of patient data compared to the GBDT method.
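The evaluation pattern behind such comparisons — fit each classifier, then compare accuracy against a baseline — can be illustrated with one of the four methods, logistic regression, hand-rolled on invented data (the study's features, labels, and 800,000-patient database are not reproduced here):

```python
# Illustrative sketch: a tiny batch/stochastic gradient-descent logistic
# regression on invented (age decade, hypertension flag) features with an
# invented 5-year stroke label, compared against the majority-class baseline.
import math

def sigmoid(z):
    if z > 30:
        return 1.0
    if z < -30:
        return 0.0
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Per-sample gradient descent on the logistic loss; returns (weights, bias)."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            g = p - yi
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def accuracy(model, X, y):
    w, b = model
    preds = [1 if sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) >= 0.5 else 0
             for xi in X]
    return sum(p == t for p, t in zip(preds, y)) / len(y)

# Invented cohort: features (age decade, hypertension flag), label = stroke.
X = [[4, 0], [5, 0], [6, 1], [7, 1], [5, 1], [8, 1], [3, 0], [7, 0]]
y = [0, 0, 1, 1, 0, 1, 0, 1]

model = fit_logistic(X, y)
baseline = max(sum(y), len(y) - sum(y)) / len(y)  # majority-class accuracy
print(accuracy(model, X, y), baseline)
```

In practice each model (DNN, GBDT, LR, SVM) would be evaluated the same way on a held-out split rather than on the training data as done here.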

  14. Evaluation of hard fossil fuel

    International Nuclear Information System (INIS)

    Zivkovic, S.; Nuic, J.

    1999-01-01

    Because of its inexhaustible supplies, hard fossil fuel will represent the pillar of the power systems of the 21st century. Only high-calorie fossil fuels have market value and participate in world trade. Low-calorie fossil fuels (brown coal and lignite) are fuels spent on the spot, and their value is indirectly expressed through manufactured kWh. For the purpose of determining the real value of a tonne of low-calorie coal, the criteria that help establish the value of a tonne of hard coal have to be corrected, and the coal thus evaluated and assessed at the market. (author)

  15. Calorimeter triggers for hard collisions

    International Nuclear Information System (INIS)

    Landshoff, P.V.; Polkinghorne, J.C.

    1978-01-01

    We discuss the use of a forward calorimeter to trigger on hard hadron-hadron collisions. We give a derivation in the covariant parton model of the Ochs-Stodolsky scaling law for single-hard-scattering processes, and investigate the conditions under which a multiple-scattering mechanism might instead dominate. With a proton beam, this mechanism results in six transverse jets, with a total average multiplicity about twice that seen in ordinary events. We estimate that its cross section is likely to be experimentally accessible at values of the beam energy in the region of 100 GeV/c

  16. Diffusive gradient in thin FILMS (DGT) compared with soil solution and labile uranium fraction for predicting uranium bioavailability to ryegrass.

    Science.gov (United States)

    Duquène, L; Vandenhove, H; Tack, F; Van Hees, M; Wannijn, J

    2010-02-01

    The usefulness of uranium concentration in soil solution or recovered by selective extraction as unequivocal bioavailability indices for uranium uptake by plants is still unclear. The aim of the present study was to test if the uranium concentration measured by the diffusive gradient in thin films (DGT) technique is a relevant substitute for plant uranium availability in comparison to uranium concentration in the soil solution or uranium recovered by ammonium acetate. Ryegrass (Lolium perenne L. var. Melvina) was grown in a greenhouse on a range of uranium-spiked soils. The DGT-recovered uranium concentration (C(DGT)) was correlated with uranium concentration in the soil solution or with uranium recovered by ammonium acetate extraction. Plant uptake was better predicted by the summed soil solution concentrations of UO(2)(2+), uranyl carbonate complexes and UO(2)PO(4)(-). The DGT technique did not provide significant advantages over conventional methods to predict uranium uptake by plants. Copyright 2009 Elsevier Ltd. All rights reserved.
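The comparison at the heart of this record — how well a measured index (C(DGT), soil solution, or extractable fraction) tracks plant uptake — is typically scored by a correlation coefficient. A plain Pearson correlation on invented (C(DGT), uptake) pairs illustrates the arithmetic; no claim is made about the study's actual values.

```python
# Pearson correlation between an availability index and plant uptake.
# All data below are invented for illustration.
import math

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical DGT-measured U (ug/L) vs ryegrass U uptake (mg/kg DW).
c_dgt  = [0.5, 1.2, 2.0, 3.1, 4.4, 5.0]
uptake = [0.8, 1.5, 2.6, 3.0, 4.9, 5.2]
print(round(pearson_r(c_dgt, uptake), 3))
```

The study's conclusion that DGT offers no significant advantage amounts to saying this correlation is not materially higher for C(DGT) than for the conventional indices.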

  17. Diffusive gradient in thin FILMS (DGT) compared with soil solution and labile uranium fraction for predicting uranium bioavailability to ryegrass

    Energy Technology Data Exchange (ETDEWEB)

    Duquene, L. [SCK-CEN, Biosphere Impact Studies, Boeretang 200, B-2400 Mol (Belgium); Vandenhove, H., E-mail: hvandenh@sckcen.b [SCK-CEN, Biosphere Impact Studies, Boeretang 200, B-2400 Mol (Belgium); Tack, F. [Ghent University, Laboratory for Analytical Chemistry and Applied Ecochemistry, Coupure Links 653, B-9000 Gent (Belgium); Van Hees, M.; Wannijn, J. [SCK-CEN, Biosphere Impact Studies, Boeretang 200, B-2400 Mol (Belgium)

    2010-02-15

    The usefulness of uranium concentration in soil solution or recovered by selective extraction as unequivocal bioavailability indices for uranium uptake by plants is still unclear. The aim of the present study was to test if the uranium concentration measured by the diffusive gradient in thin films (DGT) technique is a relevant substitute for plant uranium availability in comparison to uranium concentration in the soil solution or uranium recovered by ammonium acetate. Ryegrass (Lolium perenne L. var. Melvina) is grown in greenhouse on a range of uranium spiked soils. The DGT-recovered uranium concentration (C{sub DGT}) was correlated with uranium concentration in the soil solution or with uranium recovered by ammonium acetate extraction. Plant uptake was better predicted by the summed soil solution concentrations of UO{sub 2}{sup 2+}, uranyl carbonate complexes and UO{sub 2}PO{sub 4}{sup -}. The DGT technique did not provide significant advantages over conventional methods to predict uranium uptake by plants.

  18. Diffusive gradient in thin FILMS (DGT) compared with soil solution and labile uranium fraction for predicting uranium bioavailability to ryegrass

    International Nuclear Information System (INIS)

    Duquene, L.; Vandenhove, H.; Tack, F.; Van Hees, M.; Wannijn, J.

    2010-01-01

    The usefulness of uranium concentration in soil solution or recovered by selective extraction as unequivocal bioavailability indices for uranium uptake by plants is still unclear. The aim of the present study was to test if the uranium concentration measured by the diffusive gradient in thin films (DGT) technique is a relevant substitute for plant uranium availability in comparison to uranium concentration in the soil solution or uranium recovered by ammonium acetate. Ryegrass (Lolium perenne L. var. Melvina) is grown in greenhouse on a range of uranium spiked soils. The DGT-recovered uranium concentration (C DGT ) was correlated with uranium concentration in the soil solution or with uranium recovered by ammonium acetate extraction. Plant uptake was better predicted by the summed soil solution concentrations of UO 2 2+ , uranyl carbonate complexes and UO 2 PO 4 - . The DGT technique did not provide significant advantages over conventional methods to predict uranium uptake by plants.

  19. Prediction of Salmonella carcass contamination by a comparative quantitative analysis of E. coli and Salmonella during pig slaughter

    DEFF Research Database (Denmark)

    Nauta, Maarten; Barfod, Kristen; Hald, Tine

    2013-01-01

    Salmonella concentrations. It is concluded that the faecal carriage of Salmonella together with the faecal contamination of carcasses, as predicted from E. coli data in the animal faeces and hygiene performance of the slaughterhouse, is not sufficient to explain carcass contamination with Salmonella. Our...... extensive data set showed that other factors than the observed faecal carriage of Salmonella by the individual animals brought to slaughter, play a more important role in the Salmonella carcass contamination of pork.......Faecal contamination of carcasses in the slaughterhouse is generally considered to be the source of Salmonella on pork. In this study the hygiene indicator Escherichia coli is used to quantify faecal contamination of carcasses and it is hypothesized that it can be used to predict the quantitative...

  20. What predicts inattention in adolescents? An experience-sampling study comparing chronotype, subjective, and objective sleep parameters.

    Science.gov (United States)

    Hennig, Timo; Krkovic, Katarina; Lincoln, Tania M

    2017-10-01

    Many adolescents sleep insufficiently, which may negatively affect their functioning during the day. To improve sleep interventions, we need a better understanding of the specific sleep-related parameters that predict poor functioning. We investigated to which extent subjective and objective parameters of sleep in the preceding night (state parameters) and the trait variable chronotype predict daytime inattention as an indicator of poor functioning. We conducted an experience-sampling study over one week with 61 adolescents (30 girls, 31 boys; mean age = 15.5 years, standard deviation = 1.1 years). Participants rated their inattention twice each day (morning, afternoon) on a smartphone. Subjective sleep parameters (feeling rested, positive affect upon awakening) were assessed each morning on the smartphone. Objective sleep parameters (total sleep time, sleep efficiency, wake after sleep onset) were assessed with a permanently worn actigraph. Chronotype was assessed with a self-rated questionnaire at baseline. We tested the effect of subjective and objective state parameters of sleep on daytime inattention, using multilevel multiple regressions. Then, we tested whether the putative effect of the trait parameter chronotype on inattention is mediated through state sleep parameters, again using multilevel regressions. We found that short sleep time, but no other state sleep parameter, predicted inattention, with a small effect size. As expected, the trait parameter chronotype also predicted inattention: morningness was associated with less inattention. However, this association was not mediated by state sleep parameters. Our results indicate that short sleep time causes inattention in adolescents. Extended sleep time might thus alleviate inattention to some extent. However, it cannot alleviate the effect of being an 'owl'. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Chemical hardness and density functional theory

    Indian Academy of Sciences (India)

    Unknown

    RALPH G PEARSON. Chemistry Department, University of California, Santa Barbara, CA 93106, USA. Abstract. The concept of chemical hardness is reviewed from a personal point of view. Keywords. Hardness; softness; hard & soft acids and bases (HSAB); principle of maximum hardness (PMH); density functional theory (DFT) ...

  2. Performance evaluation of an irreversible Miller cycle comparing FTT (finite-time thermodynamics) analysis and ANN (artificial neural network) prediction

    International Nuclear Information System (INIS)

    Mousapour, Ashkan; Hajipour, Alireza; Rashidi, Mohammad Mehdi; Freidoonimehr, Navid

    2016-01-01

    In this paper, the first and second-law efficiencies are applied to performance analysis of an irreversible Miller cycle. In the irreversible cycle, the linear relation between the specific heat of the working fluid and its temperature, the internal irreversibility described using the compression and expansion efficiencies, the friction loss computed according to the mean velocity of the piston and the heat-transfer loss are considered. The effects of various design parameters, such as the minimum and maximum temperatures of the working fluid and the compression ratio, on the power output and the first and second-law efficiencies of the cycle are discussed. In the following, a procedure named ANN is used for predicting the thermal efficiency values versus the compression ratio and the minimum and maximum temperatures of the Miller cycle. Nowadays, the Miller cycle is widely used in the automotive industry, and the obtained results of this study will provide some significant theoretical grounds for the design optimization of the Miller cycle. - Highlights: • The performance of an irreversible Miller cycle is investigated using FTT. • The effects of design parameters on the performance of the cycle are investigated. • ANN is applied to predict the thermal efficiency and the power output values. • There is an excellent correlation between FTT and ANN data. • ANN can be applied to predict data where FTT analysis has not been performed.

  3. Convergent RANK- and c-Met-mediated signaling components predict survival of patients with prostate cancer: an interracial comparative study.

    Science.gov (United States)

    Hu, Peizhen; Chung, Leland W K; Berel, Dror; Frierson, Henry F; Yang, Hua; Liu, Chunyan; Wang, Ruoxiang; Li, Qinlong; Rogatko, Andre; Zhau, Haiyen E

    2013-01-01

    We reported (PLoS One 6 (12):e28670, 2011) that the activation of c-Met signaling in RANKL-overexpressing bone metastatic LNCaP cell and xenograft models increased expression of RANK, RANKL, c-Met, and phosphorylated c-Met, and mediated downstream signaling. We confirmed the significance of the RANK-mediated signaling network in castration resistant clinical human prostate cancer (PC) tissues. In this report, we used a multispectral quantum dot labeling technique to label six RANK and c-Met convergent signaling pathway mediators simultaneously in formalin fixed paraffin embedded (FFPE) tissue specimens, quantify the intensity of each expression at the sub-cellular level, and investigated their potential utility as predictors of patient survival in Caucasian-American, African-American and Chinese men. We found that RANKL and neuropilin-1 (NRP-1) expression predicts survival of Caucasian-Americans with PC. A Gleason score ≥ 8 combined with nuclear p-c-Met expression predicts survival in African-American PC patients. Neuropilin-1, p-NF-κB p65 and VEGF are predictors for the overall survival of Chinese men with PC. These results collectively support interracial differences in cell signaling networks that can predict the survival of PC patients.

  4. Convergent RANK- and c-Met-mediated signaling components predict survival of patients with prostate cancer: an interracial comparative study.

    Directory of Open Access Journals (Sweden)

    Peizhen Hu

    Full Text Available We reported (PLoS One 6(12): e28670, 2011) that the activation of c-Met signaling in RANKL-overexpressing bone metastatic LNCaP cell and xenograft models increased expression of RANK, RANKL, c-Met, and phosphorylated c-Met, and mediated downstream signaling. We confirmed the significance of the RANK-mediated signaling network in castration resistant clinical human prostate cancer (PC) tissues. In this report, we used a multispectral quantum dot labeling technique to label six RANK and c-Met convergent signaling pathway mediators simultaneously in formalin fixed paraffin embedded (FFPE) tissue specimens, quantify the intensity of each expression at the sub-cellular level, and investigated their potential utility as predictors of patient survival in Caucasian-American, African-American and Chinese men. We found that RANKL and neuropilin-1 (NRP-1) expression predicts survival of Caucasian-Americans with PC. A Gleason score ≥ 8 combined with nuclear p-c-Met expression predicts survival in African-American PC patients. Neuropilin-1, p-NF-κB p65 and VEGF are predictors for the overall survival of Chinese men with PC. These results collectively support interracial differences in cell signaling networks that can predict the survival of PC patients.

  5. To compare the accuracy of Prayer's sign and Mallampatti test in predicting difficult intubation in Diabetic patients

    International Nuclear Information System (INIS)

    Baig, M. M. A.; Khan, F. H.

    2014-01-01

    Objective: To determine the accuracy of Prayer's sign and the Mallampatti test in predicting difficult endotracheal intubation in diabetic patients. Methods: The cross-sectional study was performed at Aga Khan University Hospital, Karachi, over a period from January 2009 to April 2010, and comprised 357 patients who required endotracheal intubation for elective surgical procedures. Prayer's sign and Mallampatti tests were performed for the assessment of the airway by trained observers. Ease or difficulty of laryngoscopy after the patient was fully anaesthetised with a standard technique was observed, and the laryngoscopic view on the first attempt was rated according to the Cormack-Lehane grade of intubation. SPSS 15 was used for statistical analysis. Results: Of the 357 patients, 125 (35%) were classified as difficult to intubate. Prayer's sign showed significantly lower accuracy, positive and negative predictive values than the Mallampatti test. The sensitivity of Prayer's sign (29.6%; 95% confidence interval, 21.9-38.5) was lower than that of the Mallampatti test (79.3%; 95% confidence interval, 70.8-85.7), while the specificity of the two tests was not significantly different. Conclusion: Prayer's sign is not acceptable as a single best bedside test for prediction of difficult intubation. (author)

  6. The Sensitivity, Specificity and Predictive Values of Snellen Chart Compared to the Diagnostic Test in Amblyopia Screening Program in Iran

    Directory of Open Access Journals (Sweden)

    Fatemeh Rivakani

    2015-12-01

    Full Text Available Introduction: Amblyopia is a leading cause of visual impairment in both childhood and adult populations. Our aim in this study was to assess the epidemiological characteristics of the amblyopia screening program in Iran. Materials and Methods: A cross-sectional study was done on a randomly selected sample of 4,636 Iranian children who were referred to the screening program in 2013; these children also participated in the validity study. From each province, the major city was selected. Screening and diagnostic tests were done by instructors in the first stage and by optometrists in the second stage, respectively. Finally, data were analyzed with Stata version 13. Results: Sensitivity ranged from 74% to 100% among the various provinces, such that Fars and Ardabil provinces had the maximum and minimum values, respectively. Specificity followed a different pattern, ranging from 44% to 84% among the provinces; Hormozgan and Fars had the maximum and minimum values, respectively. The positive predictive value ranged from 35% to 81%, assigned to Khuzestan and Ardabil provinces, respectively. The negative predictive value ranged from 61% to 100%, belonging to Ardabil and Fars provinces, respectively. Conclusion: The total sensitivity (89%) and negative predictive value (93%) of the screening test among children aged 3-6 years are acceptable, but only 51% of children referred to the second stage are true positives, and this imposes considerable cost on the health system.

  7. Isotropic-nematic transition in a mixture of hard spheres and hard spherocylinders: scaled particle theory description

    Directory of Open Access Journals (Sweden)

    M.F. Holovko

    2017-12-01

    Full Text Available The scaled particle theory is developed for the description of thermodynamical properties of a mixture of hard spheres and hard spherocylinders. Analytical expressions for free energy, pressure and chemical potentials are derived. From the minimization of free energy, a nonlinear integral equation for the orientational singlet distribution function is formulated. An isotropic-nematic phase transition in this mixture is investigated from the bifurcation analysis of this equation. It is shown that with an increase of concentration of hard spheres, the total packing fraction of a mixture on phase boundaries slightly increases. The obtained results are compared with computer simulations data.

  8. The background counting rates in a balloon borne hard X-ray telescope

    International Nuclear Information System (INIS)

    Dean, A.J.; Dipper, N.A.; Lewis, R.A.

    1986-01-01

    A detailed Monte Carlo model of a hard (20-300 keV) X-ray astronomical telescope has been developed in order to calculate the energy loss distribution of the unwanted background noise events in the prime detection elements. The spectral distributions of the background rates measured at balloon altitudes over Palestine, Texas are compared to the predictions of the theoretical model. Good agreement has been found in terms of both the overall intensity level as well as the spectral distribution. (orig.)

  9. Tests of hard and soft QCD with $e^{+}e^{-}$ Annihilation Data

    CERN Document Server

    Kluth, S

    2002-01-01

    Experimental tests of QCD predictions for event shape distributions combining contributions from hard and soft processes are discussed. The hard processes are predicted by perturbative QCD calculations. The soft processes cannot be calculated directly using perturbative QCD, they are treated by a power correction model based on the analysis of infrared renormalons. Furthermore, an analysis of the gauge structure of QCD is presented using fits of the colour factors within the same combined QCD predictions.

  10. Predictive value of casual ECG-based resting heart rate compared with resting heart rate obtained from Holter recording

    DEFF Research Database (Denmark)

    Carlson, Nicholas; Dixen, Ulrik; Marott, Jacob L

    2014-01-01

    BACKGROUND: Elevated resting heart rate (RHR) is associated with cardiovascular mortality and morbidity. Assessment of heart rate (HR) from Holter recording may afford a more precise estimate of the effect of RHR on cardiovascular risk, as compared to casual RHR. Comparative analysis was carried ...

  11. Comparative Analysis of Local Control Prediction Using Different Biophysical Models for Non-Small Cell Lung Cancer Patients Undergoing Stereotactic Body Radiotherapy

    Directory of Open Access Journals (Sweden)

    Bao-Tian Huang

    2017-01-01

    Full Text Available Purpose. The consistency of local control (LC) predictions from different biophysical models for stereotactic body radiotherapy (SBRT) of lung cancer is unclear. This study aims to compare the results calculated from different models using treatment planning data. Materials and Methods. Treatment plans were designed for 17 patients diagnosed with primary non-small cell lung cancer (NSCLC) using 5 different fraction schemes. The Martel model, the Ohri model, and the Tai model were used to predict the 2-year LC value. The Gucken model, the Santiago model, and the Tai model were employed to estimate the 3-year LC data. Results. We found that the employed models produced markedly different LC predictions, except for the Gucken and Santiago models, which exhibited quite similar 3-year LC data. The predicted 2-year and 3-year LC values depended not only on the dose normalization but also on the employed fraction schemes. The greatest difference predicted by different models was up to 15.0%. Conclusions. Our results show that different biophysical models influence the LC prediction and that the difference is correlated not only to the dose normalization but also to the employed fraction schemes.
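The Martel, Ohri, Tai, Gucken, and Santiago models themselves are not reproduced in the record; most such LC models, however, start from the linear-quadratic biologically effective dose (BED) of a fraction scheme, which can be sketched (the alpha/beta value is an assumption, not taken from the study):

```python
# Biologically effective dose for n fractions of d Gy each under the
# linear-quadratic model: BED = n * d * (1 + d / (alpha/beta)).
# alpha/beta = 10 Gy is a common assumption for tumour tissue.
def bed(n_fractions, dose_per_fraction_gy, alpha_beta_gy=10.0):
    d = dose_per_fraction_gy
    return n_fractions * d * (1 + d / alpha_beta_gy)

# Two SBRT-like schemes with similar physical dose differ markedly in BED,
# which is one reason LC predictions depend on the fraction scheme:
print(bed(3, 18))   # 54 Gy in 3 fractions
print(bed(5, 11))   # 55 Gy in 5 fractions
```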

  12. Hard equality constrained integer knapsacks

    NARCIS (Netherlands)

    Aardal, K.I.; Lenstra, A.K.; Cook, W.J.; Schulz, A.S.

    2002-01-01

    We consider the following integer feasibility problem: "Given positive integer numbers a₀, a₁,..., aₙ, with gcd(a₁,..., aₙ) = 1 and a = (a₁,..., aₙ), does there exist a nonnegative integer vector x satisfying ax = a₀?" Some instances of this type have been found to be extremely hard to solve.

  13. Stress in hard metal films

    NARCIS (Netherlands)

    Janssen, G.C.A.M.; Kamminga, J.D.

    2004-01-01

    In the absence of thermal stress, tensile stress in hard metal films is caused by grain boundary shrinkage and compressive stress is caused by ion peening. It is shown that the two contributions are additive. Moreover tensile stress generated at the grain boundaries does not relax by ion

  14. Incorporating High-Frequency Physiologic Data Using Computational Dictionary Learning Improves Prediction of Delayed Cerebral Ischemia Compared to Existing Methods.

    Science.gov (United States)

    Megjhani, Murad; Terilli, Kalijah; Frey, Hans-Peter; Velazquez, Angela G; Doyle, Kevin William; Connolly, Edward Sander; Roh, David Jinou; Agarwal, Sachin; Claassen, Jan; Elhadad, Noemie; Park, Soojin

    2018-01-01

    Accurate prediction of delayed cerebral ischemia (DCI) after subarachnoid hemorrhage (SAH) can be critical for planning interventions to prevent poor neurological outcome. This paper presents a model using convolution dictionary learning to extract features from physiological data available from bedside monitors. We develop and validate a prediction model for DCI after SAH, demonstrating improved precision over standard methods alone. 488 consecutive SAH admissions from 2006 to 2014 to a tertiary care hospital were included. Models were trained on 80%, while 20% were set aside for validation testing. Modified Fisher Scale was considered the standard grading scale in clinical use; baseline features also analyzed included age, sex, Hunt-Hess, and Glasgow Coma Scales. An unsupervised approach using convolution dictionary learning was used to extract features from physiological time series (systolic blood pressure and diastolic blood pressure, heart rate, respiratory rate, and oxygen saturation). Classifiers (partial least squares and linear and kernel support vector machines) were trained on feature subsets of the derivation dataset. Models were applied to the validation dataset. The performances of the best classifiers on the validation dataset are reported by feature subset. Standard grading scale (mFS): AUC 0.54. Combined demographics and grading scales (baseline features): AUC 0.63. Kernel derived physiologic features: AUC 0.66. Combined baseline and physiologic features with redundant feature reduction: AUC 0.71 on derivation dataset and 0.78 on validation dataset. Current DCI prediction tools rely on admission imaging and are advantageously simple to employ. However, using an agnostic and computationally inexpensive learning approach for high-frequency physiologic time series data, we demonstrated that we could incorporate individual physiologic data to achieve higher classification accuracy.
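The study's learned convolutional dictionary is not reproduced in the record; the underlying feature-extraction idea, correlating a physiologic time series against waveform atoms and keeping each atom's peak activation, can be sketched with fixed toy atoms (the signal and atoms below are invented; the paper learns its atoms from data):

```python
# Minimal sketch of convolutional-dictionary feature extraction: slide each
# atom over the signal (valid cross-correlation) and keep the maximum
# activation per atom as a feature. Toy data; not the paper's pipeline.
def atom_features(signal, atoms):
    feats = []
    for atom in atoms:
        w = len(atom)
        acts = [sum(a * signal[i + j] for j, a in enumerate(atom))
                for i in range(len(signal) - w + 1)]
        feats.append(max(acts))
    return feats

hr = [72, 74, 73, 80, 95, 94, 78, 75, 74, 73]   # toy heart-rate trace
atoms = [[-1, 0, 1],    # rising-edge atom
         [1, 0, -1]]    # falling-edge atom
print(atom_features(hr, atoms))   # peak rise and peak fall activations
```

Features like these would then feed a downstream classifier (the paper uses partial least squares and support vector machines).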

  15. Incorporating High-Frequency Physiologic Data Using Computational Dictionary Learning Improves Prediction of Delayed Cerebral Ischemia Compared to Existing Methods

    Directory of Open Access Journals (Sweden)

    Murad Megjhani

    2018-03-01

    Full Text Available Purpose. Accurate prediction of delayed cerebral ischemia (DCI) after subarachnoid hemorrhage (SAH) can be critical for planning interventions to prevent poor neurological outcome. This paper presents a model using convolution dictionary learning to extract features from physiological data available from bedside monitors. We develop and validate a prediction model for DCI after SAH, demonstrating improved precision over standard methods alone. Methods. 488 consecutive SAH admissions from 2006 to 2014 to a tertiary care hospital were included. Models were trained on 80%, while 20% were set aside for validation testing. Modified Fisher Scale was considered the standard grading scale in clinical use; baseline features also analyzed included age, sex, Hunt-Hess, and Glasgow Coma Scales. An unsupervised approach using convolution dictionary learning was used to extract features from physiological time series (systolic blood pressure and diastolic blood pressure, heart rate, respiratory rate, and oxygen saturation). Classifiers (partial least squares and linear and kernel support vector machines) were trained on feature subsets of the derivation dataset. Models were applied to the validation dataset. Results. The performances of the best classifiers on the validation dataset are reported by feature subset. Standard grading scale (mFS): AUC 0.54. Combined demographics and grading scales (baseline features): AUC 0.63. Kernel derived physiologic features: AUC 0.66. Combined baseline and physiologic features with redundant feature reduction: AUC 0.71 on derivation dataset and 0.78 on validation dataset. Conclusion. Current DCI prediction tools rely on admission imaging and are advantageously simple to employ. However, using an agnostic and computationally inexpensive learning approach for high-frequency physiologic time series data, we demonstrated that we could incorporate individual physiologic data to achieve higher classification accuracy.

  16. Comparing predictive abilities of three visible-near infrared spectrophotometers for soil organic carbon and clay determination

    DEFF Research Database (Denmark)

    Knadel, Maria; Stenberg, Bo; Deng, Fan

    2013-01-01

    carbon (SOC) and clay calibrations for 194 Danish top soils. Scanning procedures for the three spectrophotometers were done according to uniform laboratory protocols. SOC and clay calibrations were performed using PLS regression. One third of the data was used as an independent test set. A range...... of spectral preprocessing methods was applied in search of model improvement. Validation for SOC content using an independent data set derived from all three spectrophotometers provided values of RMSEP between 0.45 and 0.52 %, R2 = 0.44-0.58 and RPD = 1.3-1.5. Clay content was predicted with a higher precision......

  17. A comparative analysis of predictive models of morbidity in intensive care unit after cardiac surgery – Part I: model planning

    Directory of Open Access Journals (Sweden)

    Biagioli Bonizella

    2007-11-01

    Full Text Available Abstract Background Different methods have recently been proposed for predicting morbidity in intensive care units (ICU). The aim of the present study was to critically review a number of approaches for developing models capable of estimating the probability of morbidity in ICU after heart surgery. The study is divided into two parts. In this first part, popular models used to estimate the probability of class membership are grouped into distinct categories according to their underlying mathematical principles. Modelling techniques and intrinsic strengths and weaknesses of each model are analysed and discussed from a theoretical point of view, in consideration of clinical applications. Methods Models based on Bayes rule, the k-nearest neighbour algorithm, logistic regression, scoring systems and artificial neural networks are investigated. Key issues for model design are described. The mathematical treatment of some aspects of model structure is also included for readers interested in developing models, though a full understanding of mathematical relationships is not necessary if the reader is only interested in perceiving the practical meaning of model assumptions, weaknesses and strengths from a user point of view. Results Scoring systems are very attractive due to their simplicity of use, although this may undermine their predictive capacity. Logistic regression models are trustworthy tools, although they suffer from the principal limitations of most regression procedures. Bayesian models seem to be a good compromise between complexity and predictive performance, but model recalibration is generally necessary. k-nearest neighbour may be a valid non-parametric technique, though computational cost and the need for large data storage are major weaknesses of this approach. Artificial neural networks have intrinsic advantages with respect to common statistical models, though the training process may be problematical. Conclusion Knowledge of model
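Of the model families surveyed above, logistic regression is the most compact to illustrate: a linear risk score mapped through the logistic function to a morbidity probability. A minimal sketch (the coefficients and features below are invented, not fitted to any cohort):

```python
import math

# Hedged sketch of the logistic-regression family discussed above:
# P(morbidity) = 1 / (1 + exp(-(intercept + sum(coef_i * feature_i)))).
# Coefficients and feature encodings are illustrative assumptions.
def morbidity_prob(features, coefs, intercept):
    z = intercept + sum(c * x for c, x in zip(coefs, features))
    return 1.0 / (1.0 + math.exp(-z))

# e.g. features = [age_decades, low_ejection_fraction, emergency_surgery]
print(round(morbidity_prob([7.2, 1, 0], [0.30, 0.90, 1.10], -4.0), 3))
```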

  18. Fat-free mass prediction equations for bioelectric impedance analysis compared to dual energy X-ray absorptiometry in obese adolescents: a validation study.

    Science.gov (United States)

    Hofsteenge, Geesje H; Chinapaw, Mai J M; Weijs, Peter J M

    2015-10-15

    In clinical practice, patient friendly methods to assess body composition in obese adolescents are needed. Therefore, the bioelectrical impedance analysis (BIA) related fat-free mass (FFM) prediction equations (FFM-BIA) were evaluated in obese adolescents (age 11-18 years) compared to FFM measured by dual-energy X-ray absorptiometry (FFM-DXA), and a new population-specific FFM-BIA equation was developed. After an overnight fast, the subjects attended the outpatient clinic. After measuring height and weight, a full body scan by dual-energy X-ray absorptiometry (DXA) and a BIA measurement were performed. Thirteen predictive FFM-BIA equations based on weight, height, age, resistance, reactance and/or impedance were systematically selected and compared to FFM-DXA. Accuracy of FFM-BIA equations was evaluated by the percentage of adolescents predicted within 5% of FFM-DXA measured, the mean percentage difference between predicted and measured values (bias) and the Root Mean Squared prediction Error (RMSE). Multiple linear regression was conducted to develop a new BIA equation. Validation was based on 103 adolescents (60% girls), age 14.5 (SD 1.7) years, weight 94.1 (SD 15.6) kg and FFM-DXA of 56.1 (SD 9.8) kg. The percentage of accurate estimations varied between equations from 0 to 68%; bias ranged from -29.3 to +36.3% and RMSE ranged from 2.8 to 12.4 kg. An alternative prediction equation was developed: FFM = 0.527 * height(cm)²/impedance + 0.306 * weight - 1.862 (R² = 0.92, SEE = 2.85 kg). Percentage of accurate prediction was 76%. Compared to DXA, the Gray equation underestimated the FFM by 0.4 kg (55.7 ± 8.3), had an RMSE of 3.2 kg, 63% accurate prediction and the smallest bias (-0.1%). When split by sex, the Gray equation had the narrowest range in accurate predictions, bias, and RMSE. For the assessment of FFM with BIA, the Gray-FFM equation appears to be the most accurate, but 63% is still not at an acceptable accuracy level for obese adolescents. The new equation appears to
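The new population-specific equation reported in the abstract is simple enough to sketch directly (the example inputs below are invented, not taken from the study cohort):

```python
# Hedged sketch of the new BIA equation reported above:
# FFM (kg) = 0.527 * height(cm)^2 / impedance + 0.306 * weight(kg) - 1.862.
# Example height/impedance/weight values are illustrative assumptions.
def ffm_bia(height_cm, impedance_ohm, weight_kg):
    return 0.527 * height_cm**2 / impedance_ohm + 0.306 * weight_kg - 1.862

print(round(ffm_bia(170, 500, 94.1), 1))   # FFM estimate in kg
```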

  19. The prediction of the in-hospital mortality of acutely ill medical patients by electrocardiogram (ECG) dispersion mapping compared with established risk factors and predictive scores--a pilot study.

    LENUS (Irish Health Repository)

    Kellett, John

    2011-08-01

    ECG dispersion mapping (ECG-DM) is a novel technique that analyzes low amplitude ECG oscillations and reports them as the myocardial micro-alternation index (MMI). This study compared the ability of ECG-DM to predict in-hospital mortality with traditional risk factors such as age, vital signs and co-morbid diagnoses, as well as three predictive scores: the Simple Clinical Score (SCS)--based on clinical and ECG findings, and two Medical Admission Risk System scores--one based on vital signs and laboratory data (MARS), and one only on laboratory data (LD).

  20. A Comparative Study between SVM and Fuzzy Inference System for the Automatic Prediction of Sleep Stages and the Assessment of Sleep Quality

    Directory of Open Access Journals (Sweden)

    John Gialelis

    2015-11-01

    Full Text Available This paper compares two supervised learning algorithms for predicting sleep stages based on human brain activity. The first step of the presented work regards feature extraction from real human electroencephalography (EEG) data, together with the corresponding sleep stages, which are utilized for training a support vector machine (SVM) and a fuzzy inference system (FIS) algorithm. Then, the trained algorithms are used to predict the sleep stages of real human patients. Extensive comparison results are presented, which indicate that both classifiers could be utilized as a basis for an unobtrusive sleep quality assessment.
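Neither the paper's SVM nor its FIS is reproduced in the record; as a minimal stand-in for the supervised sleep-staging idea, toy EEG band-power feature vectors can be classified with a nearest-centroid rule (all data below are invented, and nearest-centroid is explicitly not the paper's method):

```python
# Nearest-centroid stand-in for supervised sleep staging on toy
# (delta-power, beta-power) features. Illustrative only; the study
# trains an SVM and a fuzzy inference system on real EEG features.
def centroids(X, y):
    by_label = {}
    for xi, yi in zip(X, y):
        by_label.setdefault(yi, []).append(xi)
    return {lab: [sum(col) / len(rows) for col in zip(*rows)]
            for lab, rows in by_label.items()}

def predict(x, cents):
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(cents, key=lambda lab: dist(x, cents[lab]))

X = [[0.8, 0.1], [0.7, 0.2], [0.1, 0.9], [0.2, 0.8]]  # toy band powers
y = ["deep", "deep", "wake", "wake"]
c = centroids(X, y)
print(predict([0.75, 0.15], c))   # high delta, low beta -> "deep"
```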

  1. Predictions of the thermomechanical code ''RESTA'' compared with fuel element examinations after irradiation in the BR3 reactor

    International Nuclear Information System (INIS)

    Petitgrand, S.

    1980-01-01

    A large number of fuel rods have been irradiated in the small power plant BR3. Many of them have been examined in hot cells after irradiation, giving thus valuable experimental information. On the other hand a thermomechanical code, named RESTA, has been developed by the C.E.A. to describe and predict the behaviour of a fuel pin in a PWR environment and in stationary conditions. The models used in that code derive chiefly from the C.E.A.'s own experience and are briefly reviewed in this paper. The comparison between prediction and experience has been performed for four power history classes: (1) moderate (average linear rating approximately equal to 20 kW m⁻¹) and short (approximately equal to 300 days) rating, (2) moderate (approximately equal to 20 kW m⁻¹) and long (approximately equal to 600 days) rating, (3) high (25-30 kW m⁻¹) and long (approximately equal to 600 days) rating and (4) very high (30-40 kW m⁻¹) and long (approximately equal to 600 days) rating. Satisfactory agreement has been found between experimental and calculated results in all cases, concerning fuel structural change, fission gas release, pellet-clad interaction as well as clad permanent strain. (author)

  2. Comparing natural and artificial carious lesions in human crowns by means of conventional hard x-ray micro-tomography and two-dimensional x-ray scattering with synchrotron radiation

    Science.gov (United States)

    Botta, Lea Maria; White, Shane N.; Deyhle, Hans; Dziadowiec, Iwona; Schulz, Georg; Thalmann, Peter; Müller, Bert

    2016-10-01

    Dental caries, one of the most prevalent infectious bacterial diseases in the world, is caused by specific types of acid-producing bacteria. Caries is a disease continuum resulting from the earliest loss of ions from apatite crystals through gross cavitation. Enamel dissolution starts when the pH-value drops below 5.5. Neutralizing the pH-value in the oral cavity opposes the process of demineralization, and so caries lesions occur in a dynamic cyclic de-mineralizing/remineralizing environment. Unfortunately, biomimetic regeneration of cavitated enamel is not yet possible, although remineralization of small carious lesions occurs under optimal conditions. Therefore, the development of methods that can regenerate carious lesions, and subsequently recover and retain teeth, is highly desirable. For the present proceedings we analyzed one naturally occurring sub-surface and one artificially produced lesion. For the characterization of artificial and natural lesions micro computed tomography is the method of choice when looking to determine three-dimensional mineral distribution and to quantify the degree of mineralization. In this pilot study we elucidate that the de-mineralized enamel in natural and artificially induced lesions shows comparable X-ray attenuation behavior, thereby implying that the study protocol employed herein seems to be appropriate. Once we know that the lesions are comparable, a series of well-reproducible in vitro experiments on enamel regeneration could be performed. In order to quantify further lesion morphology, the anisotropy of the enamel's nanostructure can be characterized by using spatially resolved, small-angle X-ray scattering. We wanted to demonstrate that the artificially induced defect fittingly resembles the natural carious lesion.

  3. The ABCD2 score is better for stroke risk prediction after anterior circulation TIA compared to posterior circulation TIA.

    Science.gov (United States)

    Wang, Junjun; Wu, Jimin; Liu, Rongyi; Gao, Feng; Hu, Haitao; Yin, Xinzhen

    2015-01-01

    Transient ischemic attacks (TIAs) are divided into anterior and posterior circulation types (AC-TIA, PC-TIA, respectively). In the present study, we sought to evaluate the ABCD2 score for predicting stroke in either AC-TIA or PC-TIA. We prospectively studied 369 consecutive patients who presented with TIA between June 2009 and December 2012. The 7 d occurrence of stroke after TIA was recorded and correlated with the ABCD2 score with regards to AC-TIA or PC-TIA. Overall, 273 AC-TIA and 96 PC-TIA patients were recruited. Twenty-one patients with AC-TIA and seven with PC-TIA developed a stroke within the subsequent 7 d (7.7% vs. 7.3%, p = 0.899). The ABCD2 score had a higher predictive value of stroke occurrence in AC-TIA (the AUC was 0.790; 95% CI, 0.677-0.903) than in PC-TIA (the AUC was 0.535; 95% CI, 0.350-0.727) and the z-value of two receiver operating characteristic (ROC) curves was 2.24 (p = 0.025). AC-TIA resulted in a higher incidence of both unilateral weakness and speech disturbance and longer durations of the symptoms. Inversely, PC-TIA was associated with a higher incidence of diabetes mellitus (19.8% vs. 10.6%, p = 0.022). Evaluating each component of scores, age ≥ 60 yr (OR = 7.010, 95% CI 1.599-30.743), unilateral weakness (OR = 3.455, 95% CI 1.131-10.559), and blood pressure (OR = 9.652, 95% CI 2.202-42.308) were associated with stroke in AC-TIA, while in PC-TIA, diabetes mellitus (OR = 9.990, 95% CI 1.895-52.650) was associated with stroke. In our study, the ABCD2 score could predict the short-term risk of stroke after AC-TIA, but might have limitation for PC-TIA.
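The AUC values above quantify how well the ABCD2 score discriminates stroke from non-stroke cases; as a minimal illustration, AUC can be computed directly from raw scores via the Mann-Whitney relation, i.e. the probability that a random positive case outranks a random negative one (the scores below are invented):

```python
# AUC via the Mann-Whitney relation: the fraction of positive/negative
# pairs in which the positive case has the higher score (ties count 0.5).
# Toy scores only; not the study's ABCD2 data.
def auc(scores_pos, scores_neg):
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

print(auc([5, 6, 4], [2, 3, 4]))   # toy ABCD2 scores, stroke vs no stroke
```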

  4. Prediction of monthly global solar radiation using adaptive neuro fuzzy inference system (ANFIS) technique over the state of Tamilnadu (India): a comparative study

    International Nuclear Information System (INIS)

    Sumithira, T. R.; Nirmal, Kumar A.

    2012-01-01

    The enormous potential of solar energy as a clean and pollution-free source enriches global power generation. India, being a tropical country, has high solar radiation; it lies north of the equator between 8°4' and 37°6' North latitude and 68°7' and 97°5' East longitude. In South India, Tamilnadu is located in the extreme south east with an average temperature greater than 27.5 °C (> 81.5 °F). In this study, an adaptive neuro-fuzzy inference system (ANFIS) based modelling approach to predict the monthly global solar radiation (MGSR) in Tamilnadu is presented using real meteorological solar radiation data from the 31 districts of Tamilnadu with different latitudes and longitudes. The purpose of the study is to compare the accuracy of ANFIS and other soft computing models found in the literature for assessing solar radiation. The performance of the proposed model was tested and compared with other earth regions in a case study. The statistical performance parameters such as root mean square error (RMSE), mean bias error (MBE), and coefficient of determination (R²) are presented and compared to validate the performance. The comparative test results prove that the ANFIS based predictions are better than other models and furthermore prove its prediction capability for any geographical area with changing meteorological conditions. (author)
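The three validation metrics named above (RMSE, MBE, R²) are standard and easy to sketch; the toy observed/predicted radiation values below are invented, not the study's data:

```python
import math

# Standard validation metrics used to compare solar-radiation models:
# RMSE (spread of errors), MBE (systematic over/under-prediction),
# and R2 (fraction of variance explained). Toy data only.
def rmse(obs, pred):
    return math.sqrt(sum((p - o) ** 2 for o, p in zip(obs, pred)) / len(obs))

def mbe(obs, pred):
    return sum(p - o for o, p in zip(obs, pred)) / len(obs)

def r2(obs, pred):
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1 - ss_res / ss_tot

obs = [5.1, 6.0, 5.5, 4.8]    # toy measured MGSR (kWh/m^2/day)
pred = [5.0, 6.2, 5.4, 5.0]   # toy model output
print(rmse(obs, pred), mbe(obs, pred), r2(obs, pred))
```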

  5. The correlation functions of hard-sphere chain fluids: Comparison of the Wertheim integral equation theory with the Monte Carlo simulation

    International Nuclear Information System (INIS)

    Chang, J.; Sandler, S.I.

    1995-01-01

    The correlation functions of homonuclear hard-sphere chain fluids are studied using the Wertheim integral equation theory for associating fluids and the Monte Carlo simulation method. The molecular model used in the simulations is the freely jointed hard-sphere chain with spheres that are tangentially connected. In the Wertheim theory, such a chain molecule is described by sticky hard spheres with two independent attraction sites on the surface of each sphere. The OZ-like equation for this associating fluid is analytically solved using the polymer-PY closure and by imposing a single bonding condition. By equating the mean chain length of this associating hard sphere fluid to the fixed length of the hard-sphere chains used in simulation, we find that the correlation functions for the chain fluids are accurately predicted. From the Wertheim theory we also obtain predictions for the overall correlation functions that include intramolecular correlations. In addition, the results for the average intermolecular correlation functions from the Wertheim theory and from the Chiew theory are compared with simulation results, and the differences between these theories are discussed

  6. Comparing the predictive value of the pelvic ring injury classification systems by Tile and by Young and Burgess.

    Science.gov (United States)

    Osterhoff, Georg; Scheyerer, Max J; Fritz, Yannick; Bouaicha, Samy; Wanner, Guido A; Simmen, Hans-Peter; Werner, Clément M L

    2014-04-01

    Radiology-based classifications of pelvic ring injuries and their relevance for the prognosis of morbidity and mortality are disputed in the literature. The purpose of this study was to evaluate potential differences between the pelvic ring injury classification systems by Tile and by Young and Burgess with regard to their predictive value on mortality, transfusion/infusion requirement and concomitant injuries. Two-hundred-and-eighty-five consecutive patients with pelvic ring fractures were analyzed for mortality within 30 days after admission, number of blood units and total volume of fluid infused during the first 24 h after trauma, and the Abbreviated Injury Severity (AIS) scores for head, chest, spine, abdomen and extremities as a function of the Tile and the Young-Burgess classifications. There was no significant relationship between occurrence of death and fracture pattern, but there was a significant relationship between fracture pattern and need for blood units/total fluid volume for both Tile (p<.001/p<.001) and Young-Burgess (p<.001/p<.001). In both classifications, open book fractures were associated with more fluid requirement and more severe injuries of the abdomen, spine and extremities (p<.05). When divided into the larger subgroups "partially stable" and "unstable", unstable fractures were associated with a higher mortality rate in the Young-Burgess system (p=.036). In both classifications, patients with unstable fractures required significantly more blood transfusions (p<.001) and total fluid infusion (p<.001) and had higher AIS scores. In this first direct comparison of both classifications, we found no clinically relevant differences with regard to their predictive value on mortality, transfusion/infusion requirement and concomitant injuries. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. Can Bcl-XL expression predict the radio sensitivity of Bilharzial-related squamous bladder carcinoma? a prospective comparative study

    Directory of Open Access Journals (Sweden)

    Kamel Nermen A

    2011-01-01

    Full Text Available Abstract Background Local pelvic recurrence after radical cystectomy for muscle invasive bilharzial-related squamous cell carcinoma accounts for 75% of treatment failures, even in organ confined tumors. Despite the proven value of lymphadenectomy, up to 60% of patients undergoing cystectomy do not have it. These factors are in favor of adjuvant radiotherapy reevaluation. Objectives: To evaluate the effect of adjuvant radiotherapy on disease-free survival in muscle invasive bilharzial-related squamous cell carcinoma of the urinary bladder and to test the predictability of radio-sensitivity using the anti-apoptotic protein Bcl-XL. Methods The study prospectively included 71 patients (47 males, 24 females) with muscle invasive bilharzial-related squamous cell carcinoma of the bladder (stage pT2a-T3N0-N3M0) who underwent radical cystectomy in Assiut university hospitals between January 2005 and December 2006. Thirty-eight patients received adjuvant radiotherapy to the pelvis at a dose of 50 Gy/25 fractions/5 weeks (group 1), while 33 patients did not receive adjuvant radiotherapy (group 2). Immunohistochemical characterization of Bcl-XL expression was done. Follow up was done every 3 months for 12 to 36 months, with a mean of 16 ± 10 months. All data were analyzed using SPSS version 16. Three-year cumulative disease-free survival was calculated and adjusted for Bcl-XL expression, and side effects of the treatment were recorded. Results The disease-free cumulative survival was 48% for group 1 and 29% for group 2 (log rank p value 0.03). The multivariate predictors of tumor recurrence were positive Bcl-XL expression (odds ratio 41.1, 95% CI 8.4 - 102.3, p Conclusions Adjuvant radiotherapy for muscle invasive bilharzial-related squamous cell carcinoma of the urinary bladder has potential effectiveness and minor side effects. Moreover, Bcl-XL expression is a valuable tool for predicting those who might not respond to this adjuvant treatment.

  8. Prepulse dependence in hard x-ray generation from microdroplets

    International Nuclear Information System (INIS)

    Anand, M.; Kahaly, S.; Kumar, G. Ravindra; Sandhu, A. S.; Gibbon, P.; Krishnamurthy, M.

    2006-01-01

    We report on experiments which show that liquid microdroplets are very efficient in hard X-ray generation. We make a comparative study of hard X-ray emission from 15 μm methanol microdroplets and a plain slab target of similar atomic composition at similar laser intensities. The hard X-ray yield from droplet plasmas is about 35 times more than that obtained from solid plasmas. A prepulse that is about 10 ns long and at least 2% in intensity of the main pulse is essential for hard X-ray generation from the droplets at about 10¹⁵ W cm⁻². A hot electron temperature of 36 keV is measured from the droplets at 8 × 10¹⁴ W cm⁻²; three times higher intensity is needed to obtain a similar hot electron temperature from solid plasmas that have similar atomic composition. We use 1D-PIC simulation to obtain qualitative correlation with the experimental observations.

  9. Hardness and Microstructure of Binary and Ternary Nitinol Compounds

    Science.gov (United States)

    Stanford, Malcolm K.

    2016-01-01

    The hardness and microstructure of twenty-six binary and ternary Nitinol (nickel-titanium, nickel-titanium-hafnium, nickel-titanium-zirconium and nickel-titanium-tantalum) compounds were studied. A small (50 g) ingot of each compound was produced by vacuum arc remelting. Each ingot was homogenized in vacuum for 48 hr followed by furnace cooling. Specimens from the ingots were then heat treated at 800, 900, 1000 or 1100 °C for 2 hr followed by water quenching. The hardness and microstructure of each specimen were compared to the baseline material (55-Nitinol, 55 at.% nickel - 45 at.% titanium, after heat treatment at 900 °C). The results show that eleven of the studied compounds had higher hardness values than the baseline material. Moreover, twelve of the studied compounds had measured hardness values greater than 600 HV at heat treatments from 800 to 900 °C.
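The HV values quoted above come from Vickers indentation, where hardness relates indenter load to the size of the impression; a minimal sketch of the standard conversion (the load and diagonal values below are invented):

```python
# Vickers hardness from the standard relation HV = 1.8544 * F / d^2,
# with F the load in kgf and d the mean indentation diagonal in mm.
# Example load/diagonal values are illustrative assumptions.
def vickers_hv(load_kgf, mean_diagonal_mm):
    return 1.8544 * load_kgf / mean_diagonal_mm**2

# A 1 kgf load leaving a 55 um (0.055 mm) mean diagonal:
print(round(vickers_hv(1.0, 0.055)))
```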

  10. Constraint satisfaction problems with isolated solutions are hard

    International Nuclear Information System (INIS)

    Zdeborová, Lenka; Mézard, Marc

    2008-01-01

    We study the phase diagram and the algorithmic hardness of the random 'locked' constraint satisfaction problems, and compare them to the commonly studied 'non-locked' problems like satisfiability of Boolean formulae or graph coloring. The special property of the locked problems is that clusters of solutions are isolated points. This simplifies significantly the determination of the phase diagram, which makes the locked problems particularly appealing from the mathematical point of view. On the other hand, we show empirically that the clustered phase of these problems is extremely hard from the algorithmic point of view: the best known algorithms all fail to find solutions. Our results suggest that the easy/hard transition (for currently known algorithms) in the locked problems coincides with the clustering transition. These should thus be regarded as new benchmarks of really hard constraint satisfaction problems

  11. Part III: Comparing observed growth of selected test organisms in food irradiation studies with growth predictions calculated by ComBase softwares

    International Nuclear Information System (INIS)

    Farkas, J.; Andrassy, E.; Meszaros, L.; Beczner, J.; Polyak-Feher, K.; Gaal, O.; Lebovics, V.K.; Lugasi, A.

    2009-01-01

    As a result of intensive predictive microbiological modelling activities, several computer programs have recently become available for facilitating microbiological risk assessment. Among these tools, the most important are ComBase, an international database, and its associated predictive modelling software: the Pathogen Modelling Program (PMP), set up by the USDA Eastern Regional Research Center, Wyndmore, PA, and the Food Micromodel/Growth Predictor by the United Kingdom's Institute of Food Research, Norwich. The authors used the PMP 6.1 software version of ComBase in a preliminary trial to compare the observed growth of selected test organisms from their food irradiation work of recent years within the FAO/IAEA Coordinated Food Irradiation Research Projects (D6.10.23 and D6.20.07) with the growth predicted by the models available in ComBase for the same species as the authors' test organisms. The results of challenge tests with a Listeria monocytogenes inoculum in untreated or irradiated experimental batches of semi-prepared breaded turkey meat steaks (cordon bleu), sliced tomato, sliced watermelon, sliced cantaloupe and sous vide processed mixed vegetables, as well as a Staphylococcus aureus inoculum of a pasta product, tortellini, were compared with their respective growth models under relevant environmental conditions. This comparison showed good fits in the case of non-irradiated and high moisture food samples, but the growth of radiation survivors lagged behind the predicted values. (author)
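ComBase/PMP growth predictions come from their own fitted models, which are not reproduced in the record; as a generic stand-in for the kind of sigmoidal growth curve such tools produce, the Zwietering-modified Gompertz model can be sketched (all parameter values below are invented):

```python
import math

# Zwietering-modified Gompertz curve for log10 cell count versus time,
# parameterized by initial count, asymptotic increase A, maximum specific
# growth rate, and lag time. A generic illustration, not ComBase's model
# (ComBase Predictor is based on the Baranyi family of models).
def gompertz_log10_count(t_h, log10_n0, a_log10, mu_max_per_h, lag_h):
    e = math.e
    return log10_n0 + a_log10 * math.exp(
        -math.exp(mu_max_per_h * e / a_log10 * (lag_h - t_h) + 1))

# Toy growth from ~3 toward ~9 log10 CFU/g after a 2 h lag:
for t in (0, 12, 48):
    print(t, round(gompertz_log10_count(t, 3.0, 6.0, 0.5, 2.0), 2))
```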

  12. Blazars in Hard X-rays

    Science.gov (United States)

    Ghisellini, Gabriele

    2009-05-01

    Although blazars are thought to emit most of their luminosity in the γ-ray band, some subclasses are very prominent in hard X-rays. These are the best candidates to be studied by Simbol-X. They lie at the extremes of the blazar sequence, having either very small or very high jet powers. The former are the class of TeV-emitting BL Lacs, whose synchrotron emission often peaks at tens of keV or more. The latter are the blazars with the most powerful jets, which have high black hole masses accreting at high (i.e. close to Eddington) rates. These sources are predicted to have their high energy peak even below the MeV band, and are therefore very promising candidates to be studied with Simbol-X.

  13. A comparative in silico linear B-cell epitope prediction and characterization for South American and African Trypanosoma vivax strains.

    Science.gov (United States)

    Guedes, Rafael Lucas Muniz; Rodrigues, Carla Monadeli Filgueira; Coatnoan, Nicolas; Cosson, Alain; Cadioli, Fabiano Antonio; Garcia, Herakles Antonio; Gerber, Alexandra Lehmkuhl; Machado, Rosangela Zacarias; Minoprio, Paola Marcella Camargo; Teixeira, Marta Maria Geraldes; de Vasconcelos, Ana Tereza Ribeiro

    2018-02-27

    Trypanosoma vivax is a parasite widespread across Africa and South America. Immunological methods using recombinant antigens have been developed aiming at specific and sensitive detection of infections caused by T. vivax. Here, we sequenced for the first time the transcriptome of a virulent T. vivax strain (Lins), isolated from an outbreak of severe disease in South America (Brazil) and performed a computational integrated analysis of genome, transcriptome and in silico predictions to identify and characterize putative linear B-cell epitopes from African and South American T. vivax. A total of 2278, 3936 and 4062 linear B-cell epitopes were respectively characterized for the transcriptomes of T. vivax LIEM-176 (Venezuela), T. vivax IL1392 (Nigeria) and T. vivax Lins (Brazil) and 4684 for the genome of T. vivax Y486 (Nigeria). The results presented are a valuable theoretical source that may pave the way for highly sensitive and specific diagnostic tools. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  14. Comparative analysis of codon usage patterns and identification of predicted highly expressed genes in five Salmonella genomes

    Directory of Open Access Journals (Sweden)

    Mondal U

    2008-01-01

    Full Text Available Purpose: To analyse codon usage patterns of five complete genomes of Salmonella, predict highly expressed genes, examine horizontally transferred pathogenicity-related genes to detect their presence in the strains, and scrutinize the nature of highly expressed genes to infer upon their lifestyle. Methods: Protein coding genes, ribosomal protein genes, and pathogenicity-related genes were analysed with CodonW and a CAI (codon adaptation index) calculator. Results: Translational efficiency plays a role in codon usage variation in Salmonella genes. Low bias was noticed in most of the genes. GC3 (guanine-cytosine content at the third codon position) does not influence codon usage variation in the genes of these Salmonella strains. Among the clusters of orthologous groups (COGs), translation, ribosomal structure and biogenesis [J] and energy production and conversion [C] contained the highest numbers of potentially highly expressed (PHX) genes. Correspondence analysis reveals the conserved nature of the genes. Highly expressed genes were detected. Conclusions: Selection for translational efficiency is the major source of variation of codon usage in the genes of Salmonella. The evolution of pathogenicity-related genes as a unit suggests their ability to infect and exist as a pathogen. The presence of many PHX genes in the information storage and processing category of COGs indicated their lifestyle and revealed that they were not subjected to genome reduction.
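The CAI used in such analyses is the geometric mean of each codon's relative adaptiveness w, where w = 1.0 for the optimal codon of each amino acid. A minimal sketch, with hypothetical w values rather than values from the Salmonella genomes:

```python
import math

def codon_adaptation_index(w_values):
    """CAI = geometric mean of the relative adaptiveness (w) of each codon
    of a gene; w = 1.0 denotes the optimal codon for an amino acid."""
    logs = [math.log(w) for w in w_values]
    return math.exp(sum(logs) / len(logs))

# Hypothetical w values for the codons of a short gene:
cai = codon_adaptation_index([1.0, 0.8, 0.5, 1.0])
```

A gene composed entirely of optimal codons scores 1.0; rarely used codons pull the index toward 0.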

  15. Comparing two remote video survey methods for spatial predictions of the distribution and environmental niche suitability of demersal fishes.

    Science.gov (United States)

    Galaiduk, Ronen; Radford, Ben T; Wilson, Shaun K; Harvey, Euan S

    2017-12-15

    Information on habitat associations from survey data, combined with spatial modelling, allows the development of more refined species distribution models which may identify areas of high conservation/fisheries value and consequently improve conservation efforts. Generalised additive models were used to model the probability of occurrence of six focal species after surveys that utilised two remote underwater video sampling methods (i.e. baited and towed video). Models developed for the towed video method had consistently better predictive performance for all but one study species, although only three models had a good to fair fit and the rest were poor fits, highlighting the challenges associated with modelling habitat associations of marine species in highly homogeneous, low relief environments. Models based on the baited video dataset regularly included large-scale measures of structural complexity, suggesting that the bait attracts fish to a single focus point. Conversely, models based on the towed video data often incorporated small-scale measures of habitat complexity and were more likely to reflect true species-habitat relationships. The cost associated with use of the towed video systems for surveying low-relief seascapes was also relatively low, providing additional support for considering this method for marine spatial ecological modelling.

  16. Hard-to-fill vacancies.

    Science.gov (United States)

    Williams, Ruth

    2010-09-29

    Skills for Health has launched a set of resources to help healthcare employers tackle hard-to-fill entry-level vacancies and provide sustainable employment for local unemployed people. The Sector Employability Toolkit aims to reduce recruitment and retention costs for entry-level posts, prepare people for employment through pre-job training programmes, and support employers in developing local partnerships to gain access to wider pools of candidates and funding streams.

  17. Pushing hard on the accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    1987-09-15

    The quest for new techniques to drive future generations of particle accelerators has been pushed hard in recent years, efforts having been highlighted by workshops in Europe, organized by the European Committee for Future Accelerators, and in the US. The latest ECFA Workshop on New Developments in Particle Acceleration Techniques, held at Orsay from 29 June to 4 July, showed how the initial frantic search for innovation is now maturing.

  18. CMS results on hard diffraction

    CERN Document Server

    INSPIRE-00107098

    2013-01-01

    In these proceedings we present CMS results on hard diffraction. Diffractive dijet production in pp collisions at $\\sqrt{s}$=7 TeV is discussed. The cross section for dijet production is presented as a function of $\\tilde{\\xi}$, representing the fractional momentum loss of the scattered proton in single-diffractive events. The observation of W and Z boson production in events with a large pseudo-rapidity gap is also presented.

  19. Polarization observables in hard rescattering mechanism of deuteron photodisintegration

    Energy Technology Data Exchange (ETDEWEB)

    Sargsian, Misak M

    2004-05-06

    Polarization properties of high energy photodisintegration of the deuteron are studied within the framework of the hard rescattering mechanism (HRM). In HRM, a quark of one nucleon, knocked out by the incoming photon, rescatters with a quark of the other nucleon, leading to the production of two nucleons with high relative momentum. Summation of all relevant quark rescattering amplitudes allows us to express the scattering amplitude of the reaction through the convolution of a hard photon-quark interaction vertex, the large angle p-n scattering amplitude and the low momentum deuteron wave function. Within HRM, it is demonstrated that the polarization observables in hard photodisintegration of the deuteron can be expressed through the five helicity amplitudes of NN scattering at high momentum transfer. At 90° CM scattering, HRM predicts the dominance of the isovector channel of hard pn rescattering, and it explains the observed smallness of the induced polarization P_y and the transferred polarization C_x without invoking the argument of helicity conservation: namely, HRM predicts that P_y and C_x are proportional to the φ_5 helicity amplitude, which vanishes at θ_cm = 90° for symmetry reasons. HRM also predicts a nonzero value for C_z in the helicity-conserving regime and a positive Σ asymmetry, which is related to the dominance of the isovector channel in the hard reinteraction. We extend our calculations to the region where large polarization effects are observed in pp scattering, and also give predictions for angular dependences.

  20. Playing Moderately Hard to Get

    Directory of Open Access Journals (Sweden)

    Stephen Reysen

    2013-12-01

    Full Text Available In two studies, we examined the effect of different degrees of attraction reciprocation on ratings of attraction toward a potential romantic partner. Undergraduate college student participants imagined a potential romantic partner who reciprocated a low (reciprocating attraction one day a week), moderate (reciprocating attraction three days a week), high (reciprocating attraction five days a week), or unspecified degree of attraction (no mention of reciprocation). Participants then rated their degree of attraction toward the potential partner. The results of Study 1 provided only partial support for Brehm's emotion intensity theory. However, after revising the high reciprocation condition vignette in Study 2, the results supported Brehm's emotion intensity theory, showing that a potential partner's display of reciprocation of attraction acted as a deterrent to participants' intensity of experienced attraction to the potential partner. The results support the notion that playing moderately hard to get elicits more intense feelings of attraction from potential suitors than playing too easy or too hard to get. Previous research examining playing hard to get is also re-examined through an emotion intensity theory lens.

  1. CMOS optimization for radiation hardness

    International Nuclear Information System (INIS)

    Derbenwick, G.F.; Fossum, J.G.

    1975-01-01

    Several approaches to the attainment of radiation-hardened MOS circuits have been investigated in the last few years. These have included implanting the SiO2 gate insulator with aluminum, using chrome-aluminum layered gate metallization, using Al2O3 as the gate insulator, and optimizing the MOS fabrication process. Earlier process optimization studies were restricted primarily to p-channel devices operating with negative gate biases. Since knowledge of the hardness dependence upon processing and design parameters is essential in producing hardened integrated circuits, a comprehensive investigation of the effects of both process and design optimization on radiation-hardened CMOS integrated circuits was undertaken. The goals are to define and establish a radiation-hardened processing sequence for CMOS integrated circuits and to formulate quantitative relationships between process and design parameters and the radiation hardness. Using these equations, the basic CMOS design can then be optimized for radiation hardness and some understanding of the basic physics responsible for the radiation damage can be gained. Results are presented

  2. A Comparative Study of Glasgow Coma Scale and Full Outline of Unresponsiveness Scores for Predicting Long-Term Outcome After Brain Injury.

    Science.gov (United States)

    McNett, Molly M; Amato, Shelly; Philippbar, Sue Ann

    2016-01-01

    The aim of this study was to compare predictive ability of hospital Glasgow Coma Scale (GCS) scores and scores obtained using a novel coma scoring tool (the Full Outline of Unresponsiveness [FOUR] scale) on long-term outcomes among patients with traumatic brain injury. Preliminary research of the FOUR scale suggests that it is comparable with GCS for predicting mortality and functional outcome at hospital discharge. No research has investigated relationships between coma scores and outcome 12 months postinjury. This is a prospective cohort study. Data were gathered on adult patients with traumatic brain injury admitted to an urban level I trauma center. GCS and FOUR scores were assigned at 24 and 72 hours and at hospital discharge. Glasgow Outcome Scale scores were assigned at 6 and 12 months. The sample size was n = 107. Mean age was 53.5 (SD = ±21, range = 18-91) years. Spearman correlations were comparable and strongest between discharge GCS and FOUR scores and 12-month outcome (r = .73). Discharge coma scores performed best for both tools, with GCS discharge scores predictive in multivariate models.

  3. Comparing the Effectiveness of Sagittal Balance, Foraminal Stenosis, and Preoperative Cord Rotation in Predicting Postoperative C5 Palsy.

    Science.gov (United States)

    Chugh, Arunit J S; Weinberg, Douglas S; Alonso, Fernando; Eubanks, Jason D

    2017-11-01

    Retrospective cohort review. To determine whether preoperative cord rotation is independently correlated with C5 palsy when analyzed alongside measures of sagittal balance and foraminal stenosis. Postoperative C5 palsy is a well-documented complication of cervical procedures with a prevalence of 4%-8%. Recent studies have shown a correlation with preoperative spinal cord rotation. Few studies, however, have examined the role of sagittal balance and foraminal stenosis in the development of C5 palsy. A total of 77 patients who underwent cervical decompression, 10 of whom developed C5 palsy, were reviewed. Sagittal balance was assessed using curvature angle and curvature index on radiographs and magnetic resonance imaging (MRI). Cord rotation was assessed on axial MRI. C4-C5 foraminal stenosis was assessed on sagittal MRI using area measurements and a grading scale. Demographics and information on surgical approach were gathered from chart review. Correlation with C5 palsy was assessed by point-biserial, χ² and regression analyses. Point-biserial analysis indicated that only cord rotation showed significance; sagittal balance did not correlate with the presence of C5 palsy. A logistic regression model yielded cord rotation as the only significant independent predictor of C5 palsy: for every degree of axial cord rotation, the likelihood ratio for suffering a C5 palsy was 3.93 (95% confidence interval, 2.01-8.66). This points to mechanisms other than direct compression as the etiology. In addition, the lack of correlation with postoperative changes in sagittal balance suggests that measures of curvature angle and curvature index may not accurately predict this complication. Level 3.
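In a logistic regression, the odds ratio for a one-unit increase in a predictor is the exponential of its coefficient, so the reported 3.93 per degree of cord rotation corresponds to a coefficient near ln(3.93) ≈ 1.37. The sketch below only illustrates that relationship, not the authors' fitted model:

```python
import math

# An odds ratio per one-degree increase implies the regression coefficient:
beta = math.log(3.93)  # log-odds change per degree, implied by the reported OR

# Log-odds are linear in the predictor, so odds multiply per unit:
or_per_degree = math.exp(beta)         # back to 3.93
or_three_degrees = math.exp(3 * beta)  # equals 3.93**3, i.e. odds compound
```

This compounding is why even modest per-degree odds ratios translate into large risk differences across the observed range of rotation.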

  4. Evaluation of Treatments to Reduce Hardness of Agave americana Core

    Directory of Open Access Journals (Sweden)

    José A. Ramírez

    2006-01-01

    Full Text Available Agave americana contains inulin as a storage carbohydrate. Therefore, agave is interesting as a source for the extraction of inulin by pressing. The yield of the process is low due to the high hardness of the core. The objective of this work was to evaluate pretreatments to reduce this hardness in the process of obtaining inulin by pressing. Treatments with water, sulphuric acid 1 % (by mass) or sodium hydroxide 1 % (by mass) were tested and optimized. Pretreatment of the core of A. americana with sulphuric acid 1 % allowed the reduction of hardness from 30 000 g to 2000 g of breaking force. The mathematical model obtained predicts optimum processing at 84 °C for 75 min. The treatment with sulphuric acid 1 % also yields a white core of A. americana, while the other treatments give a yellow core. These results open a good alternative for obtaining value-added products from this resource.

  5. Quantitative Trait Locus (QTL) meta-analysis and comparative genomics for candidate gene prediction in perennial ryegrass (Lolium perenne L.)

    Directory of Open Access Journals (Sweden)

    Shinozuka Hiroshi

    2012-11-01

    Full Text Available Background: In crop species, QTL analysis is commonly used for identification of factors contributing to variation of agronomically important traits. As an important pasture species, a large number of QTLs have been reported for perennial ryegrass based on analysis of biparental mapping populations. Further characterisation of those QTLs is, however, essential for utilisation in varietal improvement programs. Results: A bibliographic survey of perennial ryegrass trait-dissection studies identified a total of 560 QTLs from previously published papers, of which 189, 270 and 101 were classified as morphology-, physiology- and resistance/tolerance-related loci, respectively. The collected dataset permitted a subsequent meta-QTL study and implementation of a cross-species candidate gene identification approach. A meta-QTL analysis based on use of the BioMercator software was performed to identify two consensus regions for pathogen resistance traits. Genes that are candidates for causal polymorphism underpinning perennial ryegrass QTLs were identified through in silico comparative mapping using rice databases, and 7 genes were assigned to the p150/112 reference map. Markers linked to the LpDGL1, LpPh1 and LpPIPK1 genes were located close to plant size, leaf extension time and heading date-related QTLs, respectively, suggesting that these genes may be functionally associated with important agronomic traits in perennial ryegrass. Conclusions: Functional markers are valuable for QTL meta-analysis and comparative genomics. Enrichment of such genetic markers may permit further detailed characterisation of QTLs. The outcomes of QTL meta-analysis and comparative genomics studies may be useful for accelerated development of novel perennial ryegrass cultivars with desirable traits.

  6. Surface ECG and Fluoroscopy are Not Predictive of Right Ventricular Septal Lead Position Compared to Cardiac CT.

    Science.gov (United States)

    Rowe, Matthew K; Moore, Peter; Pratap, Jit; Coucher, John; Gould, Paul A; Kaye, Gerald C

    2017-05-01

    Controversy exists regarding the optimal lead position for chronic right ventricular (RV) pacing. Placing a lead at the RV septum relies upon fluoroscopy assisted by a surface 12-lead electrocardiogram (ECG). We compared the postimplant lead position determined by ECG-gated multidetector contrast-enhanced computed tomography (MDCT) with the position derived from the surface 12-lead ECG. Eighteen patients with permanent RV leads were prospectively enrolled. Leads were placed in the RV septum (RVS) in 10 and the RV apex (RVA) in eight using fluoroscopy with anteroposterior and left anterior oblique 30° views. All patients underwent MDCT imaging and paced ECG analysis. ECG criteria were: QRS duration; QRS axis; positive or negative net QRS amplitude in leads I, aVL, V1, and V6; presence of notching in the inferior leads; and transition point in the precordial leads at or after V4. Of the 10 leads implanted in the RVS, computed tomography (CT) imaging revealed seven to be at the anterior RV wall, two at the anteroseptal junction, and one in the true septum. For the eight RVA leads, four were anterior, two septal, and two anteroseptal. All leads implanted in the RVS met at least one ECG criterion (median 3, range 1-6). However, no criterion was specific for septal position as judged by MDCT. Mean QRS duration was 160 ± 24 ms in the RVS group compared with 168 ± 14 ms for RVA pacing (P = 0.38). We conclude that the surface ECG is not sufficiently accurate to determine RV septal lead tip position compared to cardiac CT. © 2017 Wiley Periodicals, Inc.

  7. Magnetically actuated bi-directional microactuators with permalloy and Fe/Pt hard magnet

    International Nuclear Information System (INIS)

    Pan, C.T.; Shen, S.C.

    2005-01-01

    Bi-directional polyimide (PI) electromagnetic microactuators with different geometries are designed, fabricated and tested. Fabrication of the electromagnetic microactuator consists of deposition of 10 μm thick Ni/Fe (80/20) permalloy on the PI diaphragm by electroplating, high-aspect-ratio electroplating of a 10 μm thick copper planar coil, bulk micromachining, and excimer laser selective ablation. The devices were fabricated by a novel concept avoiding the etching selectivity and residual stress problems during wafer etching. A mathematical model is created in ANSYS to analyze the microactuator: the external magnetic field intensity (H_ext) generated by the planar coil is simulated, and the deflection angle of the microactuator is predicted. In addition, to provide bi-directional actuation and a large deflection angle, hard magnet Fe/Pt is deposited at a low temperature of 300 °C by sputtering onto the PI diaphragm to produce a perpendicular magnetic anisotropy field. This field enhances the interaction with H_ext, inducing attractive and repulsive bi-directional forces to provide large displacement. The results for magnetic microactuators with and without hard magnets are compared and discussed. The preliminary result reveals that the electromagnetic microactuator with a hard magnet shows a greater deflection angle than that without one

  8. Hard coatings on magnesium alloys by sputter deposition using a pulsed d.c. bias voltage

    Energy Technology Data Exchange (ETDEWEB)

    Reiners, G. [Bundesanstalt fuer Materialforschung und -pruefung, Berlin (Germany); Griepentrog, M. [Bundesanstalt fuer Materialforschung und -pruefung, Berlin (Germany)

    1995-12-01

    An increasing use of magnesium-based light-metal alloys for various industrial applications was predicted in different technological studies. Companies in different branches have developed machine parts made of magnesium alloys (e.g. cars, car engines, sewing and knitting machines). Hence, this work was started to evaluate the ability of hard coatings obtained by physical vapour deposition (PVD), in combination with coatings obtained by electrochemical deposition, to protect magnesium alloys against wear and corrosion. TiN hard coatings were deposited onto magnesium alloys by unbalanced magnetron sputter deposition. A bipolar pulsed d.c. bias voltage was used to limit substrate temperatures to 180 °C during deposition without considerable loss of microhardness and adhesion. Adhesion, hardness and load-carrying capacity of TiN coatings deposited directly onto magnesium alloys are compared with the corresponding values of TiN coatings deposited onto substrates which had been electroless-plated with a Ni-P alloy interlayer prior to the PVD step. (orig.)

  9. MODELING THE THERMAL DIFFUSE SOFT AND HARD X-RAY EMISSION IN M17

    International Nuclear Information System (INIS)

    Velázquez, P. F.; Rodríguez-González, A.; Esquivel, A.; Rosado, M.; Reyes-Iturbide, J.

    2013-01-01

    We present numerical models of very young wind-driven superbubbles. The parameters chosen for the simulations correspond to the particular case of the M17 nebula, but are appropriate for any young superbubble in which the wind sources have not completely dispersed their parental cloud. From the simulations, we computed the diffuse emission in the soft ([0.5-1.5] keV) and hard ([1.5-5] keV) X-ray bands. The total luminosity in our simulations agrees with the observations of Hyodo et al., about two orders of magnitude below the prediction of the standard model of Weaver et al. The difference with respect to the standard (adiabatic) model is the inclusion of radiative cooling, which is still important in such young bubbles. We show that for this type of object the diffuse hard X-ray luminosity is significant compared to that of the soft X-rays, contributing as much as 10% of the total luminosity, in contrast with more evolved bubbles where the hard X-ray emission is indeed negligible, being at least four orders of magnitude lower than the soft X-ray emission.

  10. Accuracy of 'My Gut Feeling:' Comparing System 1 to System 2 Decision-Making for Acuity Prediction, Disposition and Diagnosis in an Academic Emergency Department.

    Science.gov (United States)

    Cabrera, Daniel; Thomas, Jonathan F; Wiswell, Jeffrey L; Walston, James M; Anderson, Joel R; Hess, Erik P; Bellolio, M Fernanda

    2015-09-01

    Current cognitive science describes decision-making using the dual-process theory, where System 1 is intuitive and System 2 is hypothetico-deductive. We aim to compare the performance of these systems in determining patient acuity, disposition and diagnosis. Prospective observational study of emergency physicians assessing patients in the emergency department of an academic center. Physicians were provided the patient's chief complaint and vital signs and allowed to observe the patient briefly. They were then asked to predict acuity, final disposition (home, intensive care unit (ICU), or non-ICU bed) and diagnosis. A patient was classified as sick by the investigators using previously published objective criteria. We obtained 662 observations from 289 patients. For acuity, the observers had a sensitivity of 73.9% (95% CI [67.7-79.5%]), specificity of 83.3% (95% CI [79.5-86.7%]), positive predictive value of 70.3% (95% CI [64.1-75.9%]) and negative predictive value of 85.7% (95% CI [82.0-88.9%]). For final disposition, the observers made a correct prediction in 80.8% (95% CI [76.1-85.0%]) of the cases. For ICU admission, emergency physicians had a sensitivity of 33.9% (95% CI [22.1-47.4%]) and a specificity of 96.9% (95% CI [94.0-98.7%]). The correct diagnosis was made 54% of the time with the limited data available. System 1 decision-making based on limited information had a sensitivity close to 80% for acuity and disposition prediction, but the performance was lower for predicting ICU admission and diagnosis. System 1 decision-making appears insufficient for final decisions in these domains but likely provides a cognitive framework for System 2 decision-making.
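The four reported accuracy metrics all follow from a 2x2 table of predictions against the sick/not-sick reference standard. A minimal sketch; the counts below are hypothetical numbers chosen only to roughly reproduce the reported percentages, not the study's raw data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),  # sick patients correctly flagged
        "specificity": tn / (tn + fp),  # well patients correctly cleared
        "ppv": tp / (tp + fp),          # flagged patients actually sick
        "npv": tn / (tn + fn),          # cleared patients actually well
    }

# Hypothetical counts (not the study's data) giving roughly the reported
# 73.9% sensitivity, 83.3% specificity, 70.3% PPV and 85.7% NPV:
m = diagnostic_metrics(tp=170, fp=72, fn=60, tn=360)
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on how many of the assessed patients were actually sick.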

  11. Impact of aging on radiation hardness

    International Nuclear Information System (INIS)

    Shaneyfelt, M.R.; Winokur, P.S.; Fleetwood, D.M.

    1997-01-01

    Burn-in effects are used to demonstrate the potential impact of thermally activated aging effects on functional and parametric radiation hardness. These results have implications on hardness assurance testing. Techniques for characterizing aging effects are proposed

  12. Why Are Drugs So Hard to Quit?

    Medline Plus

    Full Text Available ... Quitting drugs is hard because addiction is a brain disease. Your brain is like a control tower that sends out ... and choices. Addiction changes the signals in your brain and makes it hard to feel OK without ...

  13. Cheatgrass percent cover change: Comparing recent estimates to climate change-driven predictions in the Northern Great Basin

    Science.gov (United States)

    Boyte, Stephen P.; Wylie, Bruce K.; Major, Donald J.

    2016-01-01

    Cheatgrass (Bromus tectorum L.) is a highly invasive species in the Northern Great Basin that helps decrease fire return intervals. Fire fragments the shrub steppe and reduces its capacity to provide forage for livestock and wildlife and habitat critical to sagebrush obligates. Of particular interest is the greater sage grouse (Centrocercus urophasianus), an obligate whose populations have declined so severely due, in part, to increases in cheatgrass and fires that it was considered for inclusion as an endangered species. Remote sensing technologies and satellite archives help scientists monitor terrestrial vegetation globally, including cheatgrass in the Northern Great Basin. Along with geospatial analysis and advanced spatial modeling, these data and technologies can identify areas susceptible to increased cheatgrass cover and compare these with greater sage grouse priority areas for conservation (PAC). Future climate models forecast a warmer and wetter climate for the Northern Great Basin, which likely will force changing cheatgrass dynamics. Therefore, we examine potential climate-caused changes to cheatgrass. Our results indicate that future cheatgrass percent cover will remain stable over more than 80% of the study area when compared with recent estimates, and higher overall cheatgrass cover will occur with slightly more spatial variability. The land area projected to increase or decrease in cheatgrass cover equals 18% and 1%, respectively, making an increase in fire disturbances in greater sage grouse habitat likely. Relative susceptibility measures, created by integrating cheatgrass percent cover and temporal standard deviation datasets, show that potential increases in future cheatgrass cover match future projections. This discovery indicates that some greater sage grouse PACs for conservation could be at heightened risk of fire disturbance. Multiple factors will affect future cheatgrass cover including changes in precipitation timing and totals and

  14. Predicting inferior vena cava (IVC) filter retrievability using positional parameters: A comparative study of various filter types.

    Science.gov (United States)

    Gotra, A; Doucet, C; Delli Fraine, P; Bessissow, A; Dey, C; Gallix, B; Boucher, L-M; Valenti, D

    2018-05-14

    To compare changes in inferior vena cava (IVC) filter positional parameters from insertion to removal and examine how they affect retrievability amongst various filter types. A total of 447 patients (260 men, 187 women) with a mean age of 55 years (range: 13-91 years) who underwent IVC filter retrieval between 2007 and 2014 were retrospectively included. Post-insertion and pre-retrieval angiographic studies were assessed for filter tilt, migration, strut wall penetration and retrieval outcomes. ANCOVA and multiple logistic regression models were used to analyze factors affecting retrieval success. Pairwise comparisons between filter types were performed. Of 488 IVC filter retrieval attempts, 94.1% were ultimately successful. The ALN filter had the highest mean absolute value of tilt (5.6 degrees), the Optease filter demonstrated the largest mean migration (-8.0 mm) and the Bard G2 filter showed the highest mean penetration (5.2 mm). Dwell time of 0-90 days (OR, 11.1; P=0.01) or 90-180 days (OR, 2.6; P=0.02), net tilt of 10-15 degrees (OR, 8.9; P=0.05), caudal migration of -10 to 0 mm (OR, 3.46; P=0.03) and penetration of less than 3 mm (OR, 2.6; P=0.01) were positive predictors of successful retrievability. Higher odds of successful retrieval were obtained for the Bard G2X, Bard G2 and Cook Celect when compared to the ALN and Cordis Optease filters. Shorter dwell time, lower mean tilt, caudal migration and less caval wall penetration are positive predictors of successful IVC filter retrieval. Copyright © 2018 Société française de radiologie. Published by Elsevier Masson SAS. All rights reserved.

  15. Effect of QTc interval on prediction of hypotension following subarachnoid block in patients undergoing cesarean section: A comparative study

    Directory of Open Access Journals (Sweden)

    Sampa Dutta Gupta

    2012-01-01

    Full Text Available Background: Previous studies have revealed that the QTc interval is prolonged in pre-eclamptic parturients. Another study reflected the relationship between sympathetic block and the QTc interval. Subarachnoid block has been safely administered in patients with severe pre-eclampsia. It has also been noticed that hypotension in response to spinal anesthesia is relatively less frequent in pre-eclamptic patients than in normal parturients. Aim: To compare the QTc values in normal and pre-eclamptic term parturients and to establish whether any correlation exists between the QTc interval and systemic hypotension following subarachnoid block. Materials and Methods: Twenty-five pre-eclamptic patients (Group A) and 25 normotensive patients (Group B) were included in this study. The QTc interval was recorded for each patient before subarachnoid block for cesarean section. Changes in arterial blood pressure and heart rate were measured in both groups and compared. Results: Baseline QTc was significantly higher in the pre-eclamptic group (Group A: 0.47 ± 0.11) than in the control group (Group B: 0.36 ± 0.02). A significant fall in blood pressure was seen only in the subgroup of Group A with QTc between 0.38 and 0.39. Hypotension was significantly more frequent in normotensive mothers (Group B). However, no statistical correlation could be drawn from this study between the QTc interval and hypotension, although a trend toward increasing hypotension with decreasing QTc was present. Discussion: The prolonged QTc intervals seen in pre-eclamptic patients may be due to the contributory effects of sympathetic hyperactivity, hypertension, and hypocalcemia secondary to underlying vasoconstriction. Decreased vagal control of the heart in pre-eclampsia may have produced the difference in hemodynamic response between pre-eclamptic and normotensive parturients. Conclusion: No consistent correlation between QTc and hypotension following subarachnoid block could be derived from this study. To achieve a
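QTc is the QT interval corrected for heart rate; the most common correction is Bazett's formula, QTc = QT / sqrt(RR), with both intervals in seconds. A minimal sketch of that calculation (the study's thresholds, e.g. 0.38-0.39 s, are values of this corrected interval):

```python
import math

def qtc_bazett(qt_s, rr_s):
    """Bazett's correction: QTc = QT / sqrt(RR), intervals in seconds.
    RR is the interval between beats, i.e. 60 / heart rate in bpm."""
    return qt_s / math.sqrt(rr_s)

# At 60 bpm (RR = 1.0 s) the corrected value equals the measured QT:
qtc_rest = qtc_bazett(0.40, 1.0)
# At 80 bpm (RR = 0.75 s) the same measured QT corrects upward:
qtc_fast = qtc_bazett(0.36, 0.75)
```

Because RR appears under a square root, Bazett's correction is known to over-correct at high heart rates, which is one reason alternative formulas (Fridericia, Framingham) exist.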

  16. A comparative study between the use of artificial neural networks and multiple linear regression for caustic concentration prediction in a stage of alumina production

    Directory of Open Access Journals (Sweden)

    Giovanni Leopoldo Rozza

    2015-09-01

    Full Text Available With the world becoming a global village, enterprises continuously seek to optimize their internal processes to hold or improve their competitiveness and to make better use of natural resources. In this context, decision support tools are an underlying requirement. Such tools are helpful in predicting operational issues, avoiding cost increases, loss of productivity, work-related accident leaves or environmental disasters. This paper focuses on the prediction of the spent liquor caustic concentration in the Bayer process for alumina production. Measuring caustic concentration is essential to keep it at expected levels, otherwise quality issues might arise. The organization requests caustic concentration from the chemical analysis laboratory once a day; such information is not enough to issue preventive actions to handle process inefficiencies, which will be known only after a new measurement on the next day. This paper therefore proposes a mathematical model, built with Multiple Linear Regression and Artificial Neural Network techniques, to predict the spent liquor's caustic concentration, so that preventive actions can occur in real time. The models were built using a software tool for numerical computation (MATLAB) and a statistical analysis software package (SPSS). The models' output (predicted caustic concentration) was compared with the real lab data. We found evidence suggesting superior results with the use of Artificial Neural Networks over the Multiple Linear Regression model. The results demonstrate that replacing laboratory analysis by the forecasting model to support technical staff in decision making could be feasible.
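The record's core comparison — a linear regression versus a small neural network on the same prediction task — can be sketched in miniature. The data, network size, and initialization below are invented for illustration (the paper's plant data and MATLAB/SPSS models are not reproduced); the target is a nonlinear function that a straight line cannot capture, so the tiny network should achieve lower RMSE:

```python
import math

# Synthetic stand-in for the paper's MLR-vs-ANN comparison (hypothetical data).
# Target: y = x^2, which a linear model cannot fit over a symmetric interval.
xs = [-1.0 + 0.1 * i for i in range(21)]
ys = [x * x for x in xs]
n = len(xs)

# --- Linear regression (one predictor, closed-form least squares) ---
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx
lin_pred = [slope * x + intercept for x in xs]

# --- Tiny neural network: 1 input, 3 tanh hidden units, linear output ---
w1 = [0.5, -0.5, 1.0]   # deterministic init so the sketch is reproducible
b1 = [0.1, 0.1, -0.3]
w2 = [0.5, 0.5, 0.5]
b2 = 0.0
lr = 0.1
for _ in range(5000):   # full-batch gradient descent on the squared error
    gw1 = [0.0] * 3; gb1 = [0.0] * 3; gw2 = [0.0] * 3; gb2 = 0.0
    for x, y in zip(xs, ys):
        h = [math.tanh(w1[j] * x + b1[j]) for j in range(3)]
        out = sum(w2[j] * h[j] for j in range(3)) + b2
        err = out - y
        gb2 += err
        for j in range(3):
            gw2[j] += err * h[j]
            dh = err * w2[j] * (1 - h[j] ** 2)   # tanh derivative
            gw1[j] += dh * x
            gb1[j] += dh
    b2 -= lr * gb2 / n
    for j in range(3):
        w2[j] -= lr * gw2[j] / n
        w1[j] -= lr * gw1[j] / n
        b1[j] -= lr * gb1[j] / n

def net(x):
    return sum(w2[j] * math.tanh(w1[j] * x + b1[j]) for j in range(3)) + b2

def rmse(pred):
    return math.sqrt(sum((p - y) ** 2 for p, y in zip(pred, ys)) / n)

lin_rmse = rmse(lin_pred)
net_rmse = rmse([net(x) for x in xs])
```

On this deliberately nonlinear target the network's RMSE ends up below the linear model's, mirroring the paper's finding, though on real plant data the outcome depends on how nonlinear the process actually is.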

  17. Comparative Analysis of Predictive Models of Pain Level from Work-Related Musculoskeletal Disorders among Sewing Machine Operators in the Garments Industry

    Directory of Open Access Journals (Sweden)

    Carlos Ignacio P. Luga

    2017-02-01

    Full Text Available The Philippine garments industry has been experiencing a roller-coaster ride during the past decades, with much competition from its Asian neighbors, especially in the wake of the ASEAN 2015 Integration. One of the areas in the industry which can be looked into and possibly improved is the concern over Work-related Musculoskeletal Disorders (WMSDs). The literature has shown that pain from WMSDs among sewing machine operators in this industry is very prevalent and that its effects on these operators have been very costly. After identifying the risk factors which may cause pain from WMSDs, this study generated three models to predict the said pain level. These models were analyzed and compared, and the best model was identified to make the most accurate prediction of pain level. This predictive model would be helpful for the management of garment firms since, first, the risk factors have been identified and hence can be used as bases for proposed improvements. Second, the prediction of each operator's pain level would allow management to better assess its employees in terms of their sewing capacity vis-à-vis the company's production plans.

  18. Measuring treatment response to systemic therapy and predicting outcome in biliary tract cancer: comparing tumor size, volume, density, and metabolism.

    Science.gov (United States)

    Sahani, Dushyant V; Hayano, Koichi; Galluzzo, Anna; Zhu, Andrew X

    2015-04-01

    The purpose of this study was to evaluate the response of biliary tract cancer treated with multidrug chemotherapy using FDG PET in comparison with morphologic and density changes. In this phase II clinical trial, 28 patients with unresectable or metastatic biliary tract cancers treated with gemcitabine and oxaliplatin combined with bevacizumab (GEMOX-B) underwent FDG PET and contrast-enhanced CT at baseline and after the second cycle of the therapy (8 weeks). A single reviewer recorded tumor maximum standardized uptake value (SUVmax) along with size, volume (3D-sphere), and density. The percentage changes of the parameters were compared with progression-free survival at 7 months. Overall survival was compared with the percentage change of SUVmax. After 8 weeks, measurable reductions (±SD) in size (7.05±4.19 to 5.52±3.28 cm, -21.70%), volume (411.38±540.08 to 212.41±293.45 cm3, -48.36%), and density (60.76±20.65 to 50.68±16.89 HU, -15.59%) were noted along with a substantial drop in SUVmax (5.95±1.95 to 3.36±1.28, -43.52%). The SUVmax change showed positive correlations with tumor size change (R2=0.39, p=0.0004) and volumetric change (R2=0.34, p=0.001). Patients who showed a larger drop in SUVmax at 8 weeks correlated with favorable progression-free survival (p=0.02). ROC analysis showed that a 45% reduction in SUVmax was the best cutoff value to detect favorable progression-free survival patients. When we used this cutoff value, Kaplan-Meier analysis showed that patients with tumors showing greater reduction in SUVmax had favorable progression-free survival and overall survival (p=0.0009, p=0.03). In biliary tract cancers treated with GEMOX-B, the reduction of SUVmax after therapy is a better predictor for survival than morphologic and density changes.
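The 45% SUVmax-reduction cutoff in this record comes from an ROC analysis. A minimal sketch of how such a cutoff is typically selected — by maximizing Youden's index J = sensitivity + specificity − 1 over candidate thresholds — on made-up reduction values (the study's patient-level data are not given in the abstract):

```python
# Hypothetical SUVmax reductions (%) for the two outcome groups.
favorable = [60, 55, 50, 48, 46]      # patients with favorable PFS
unfavorable = [44, 40, 30, 20, 10]    # patients without

def youden(cutoff):
    # Classify "favorable" when the reduction meets or exceeds the cutoff.
    sens = sum(r >= cutoff for r in favorable) / len(favorable)
    spec = sum(r < cutoff for r in unfavorable) / len(unfavorable)
    return sens + spec - 1

# Scan every observed value as a candidate cutoff; keep the best.
cutoffs = sorted(set(favorable + unfavorable))
best_cutoff = max(cutoffs, key=youden)
```

With these invented values the scan picks 46% (both groups are perfectly separated there); on real data the optimum trades sensitivity against specificity rather than achieving both.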

  19. Comparative genomic analysis of carbon and nitrogen assimilation mechanisms in three indigenous bioleaching bacteria: predictions and validations

    Directory of Open Access Journals (Sweden)

    Ehrenfeld Nicole

    2008-12-01

    Full Text Available Abstract Background Carbon and nitrogen fixation are essential pathways for autotrophic bacteria living in extreme environments. These bacteria can use carbon dioxide directly from the air as their sole carbon source and can use different sources of nitrogen such as ammonia, nitrate, nitrite, or even nitrogen from the air. To have a better understanding of how these processes occur and to determine how we can make them more efficient, a comparative genomic analysis of three bioleaching bacteria isolated from mine sites in Chile was performed. This study demonstrated that there are important differences in the carbon dioxide and nitrogen fixation mechanisms among bioleaching bacteria that coexist in mining environments. Results In this study, we showed that both Acidithiobacillus ferrooxidans and Acidithiobacillus thiooxidans incorporate CO2 via the Calvin-Benson-Bassham cycle; however, the former bacterium has two copies of the Rubisco type I gene whereas the latter has only one copy. In contrast, we demonstrated that Leptospirillum ferriphilum utilizes the reductive tricarboxylic acid cycle for carbon fixation. Although all the species analyzed in our study can incorporate ammonia by an ammonia transporter, we demonstrated that Acidithiobacillus thiooxidans could also assimilate nitrate and nitrite but only Acidithiobacillus ferrooxidans could fix nitrogen directly from the air. Conclusion The current study utilized genomic and molecular evidence to verify carbon and nitrogen fixation mechanisms for three bioleaching bacteria and provided an analysis of the potential regulatory pathways and functional networks that control carbon and nitrogen fixation in these microorganisms.

  20. Comparative lethality kinetic curves and predictive models of F-value for Listeria monocytogenes using different sanitizers

    Science.gov (United States)

    Beltrame, Cezar A; Kubiak, Gabriela B; Rottava, Ieda; Toniazzo, Geciane; Cansian, Rogério L; Lerin, Lindomar A; de Oliveira, Débora; Treichel, Helen

    2013-01-01

    The objective of this work was to evaluate the kinetics of inactivation of Listeria monocytogenes using peracetic acid, chlorhexidine, and organic acids as active agents, determining the respective D-, Z-, and F-values. To our knowledge, these results, important from an industrial viewpoint, are not available in the current literature, mainly for organic acids, which is the main contribution of the present work. Lower D-values were obtained for peracetic acid and chlorhexidine compared with the organic acids. For a reduction of 6 log10 of L. monocytogenes using peracetic acid at 0.2, 0.1, and 0.05%, contact times of 7.08, 31.08, and 130.44 min are necessary, respectively. The mathematical models of F-values showed that at concentrations lower than 0.15% there is an exponential increase in F-values, for both chlorhexidine and peracetic acid. The organic acids presented a linear behavior, showing only slight variation in F-values even under lower dosage. The results obtained are of fundamental importance in terms of industrial strategy for sanitization procedures, permitting the choice of the best product concentration/exposure time relation, aiming at reducing costs without compromising disinfectant efficiency. PMID:24804011
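The arithmetic linking these quantities is simple: the D-value is the time for a 1-log10 reduction (the negative reciprocal of the slope of log10(count) versus time), and the contact time for a 6-log10 reduction is six times D. A sketch on made-up, perfectly log-linear survival data (not the paper's measurements):

```python
import math

# Hypothetical log-linear survival curve: log10 CFU/mL versus time (min).
times = [0.0, 5.0, 10.0, 15.0]
log_counts = [7.0, 6.0, 5.0, 4.0]

# Least-squares slope of log10(count) vs time.
n = len(times)
mt = sum(times) / n
ml = sum(log_counts) / n
slope = sum((t - mt) * (l - ml) for t, l in zip(times, log_counts)) / \
        sum((t - mt) ** 2 for t in times)

d_value = -1.0 / slope        # min per 1-log10 reduction
six_log_time = 6 * d_value    # contact time for a 6-log10 reduction
```

Here the slope is −0.2 log10/min, so D = 5 min and a 6-log reduction needs 30 min of contact; the paper's reported times for peracetic acid follow the same relation from its fitted D-values.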

  1. Comparative genomic analysis of carbon and nitrogen assimilation mechanisms in three indigenous bioleaching bacteria: predictions and validations

    Science.gov (United States)

    Levicán, Gloria; Ugalde, Juan A; Ehrenfeld, Nicole; Maass, Alejandro; Parada, Pilar

    2008-01-01

    Background Carbon and nitrogen fixation are essential pathways for autotrophic bacteria living in extreme environments. These bacteria can use carbon dioxide directly from the air as their sole carbon source and can use different sources of nitrogen such as ammonia, nitrate, nitrite, or even nitrogen from the air. To have a better understanding of how these processes occur and to determine how we can make them more efficient, a comparative genomic analysis of three bioleaching bacteria isolated from mine sites in Chile was performed. This study demonstrated that there are important differences in the carbon dioxide and nitrogen fixation mechanisms among bioleaching bacteria that coexist in mining environments. Results In this study, we showed that both Acidithiobacillus ferrooxidans and Acidithiobacillus thiooxidans incorporate CO2 via the Calvin-Benson-Bassham cycle; however, the former bacterium has two copies of the Rubisco type I gene whereas the latter has only one copy. In contrast, we demonstrated that Leptospirillum ferriphilum utilizes the reductive tricarboxylic acid cycle for carbon fixation. Although all the species analyzed in our study can incorporate ammonia by an ammonia transporter, we demonstrated that Acidithiobacillus thiooxidans could also assimilate nitrate and nitrite but only Acidithiobacillus ferrooxidans could fix nitrogen directly from the air. Conclusion The current study utilized genomic and molecular evidence to verify carbon and nitrogen fixation mechanisms for three bioleaching bacteria and provided an analysis of the potential regulatory pathways and functional networks that control carbon and nitrogen fixation in these microorganisms. PMID:19055775

  2. Studies of the underlying-event properties and of hard double parton scattering with the ATLAS detector

    CERN Document Server

    Kuprash, Oleg; The ATLAS collaboration

    2017-01-01

    A correct modelling of the underlying event in proton-proton collisions is important for the proper simulation of kinematic distributions of high-energy collisions. The ATLAS collaboration extended previous studies at 7 TeV with a leading track, jet or Z boson by a new study at 13 TeV, measuring the number and transverse-momentum sum of charged particles as a function of pseudorapidity and azimuthal angle with respect to the reconstructed leading track. These measurements are sensitive to the underlying event as well as the onset of hard emissions. The results are compared to predictions of several MC generators. Inclusive four-jet events produced in proton-proton collisions at a center-of-mass energy of 7 TeV, collected with the ATLAS detector, have been analyzed for the presence of hard double parton scattering. The contribution of hard double parton scattering to the production of four-jet events has been extracted using an artificial neural network. The assumption was made that hard double parton scat...

  3. Fused hard-sphere chain molecules: Comparison between Monte Carlo simulation for the bulk pressure and generalized Flory theories

    International Nuclear Information System (INIS)

    Costa, L.A.; Zhou, Y.; Hall, C.K.; Carra, S.

    1995-01-01

    We report Monte Carlo simulation results for the bulk pressure of fused-hard-sphere (FHS) chain fluids with bond-length-to-bead-diameter ratios ∼ 0.4 at chain lengths n=4, 8 and 16. We also report density profiles for FHS chain fluids at a hard wall. The results for the compressibility factor are compared to results from extensions of the Generalized Flory (GF) and Generalized Flory Dimer (GFD) theories proposed by Yethiraj et al. and by us. Our new GF theory, GF-AB, significantly improves the prediction of the bulk pressure of fused-hard-sphere chains over the GFD theories proposed by Yethiraj et al. and by us although the GFD theories give slightly better low-density results. The GFD-A theory, the GFD-B theory and the new theories (GF-AB, GFD-AB, and GFD-AC) satisfy the exact zero-bonding-length limit. All theories considered recover the GF or GFD theories at the tangent hard-sphere chain limit

  4. Accuracy of Demirjian's 8 teeth method for age prediction in South Indian children: A comparative study

    Directory of Open Access Journals (Sweden)

    Rezwana Begum Mohammed

    2015-01-01

    Full Text Available Introduction: Demirjian's method of tooth development is most commonly used to assess age in individuals with emerging teeth. However, its application to numerous populations has resulted in wide variations in age estimates and consequent suggestions for the method's adaptation to the local sample. The original Demirjian's method utilized seven mandibular teeth, to which the third molar was recently added so that the method can be applied to a wider age group. Furthermore, the revised method developed regression formulas for assessing age. In Indians, as these formulas resulted in underestimation, India-specific regression formulas were developed recently. The purpose of this cross-sectional study was to evaluate the accuracy and applicability of the original regression formulas (Chaillet and Demirjian 2004) and India-specific regression formulas (Acharya 2010) using Demirjian's 8 teeth method in South Indian children of age groups 9-20 years. Methods: The present study consisted of 660 randomly selected subjects (330 males and 330 females) aged from 9 to 20 years, divided into 11 groups according to their age. Demirjian's 8 teeth method was used for staging of teeth. Results: Demirjian's method underestimated the dental age (DA) by 1.66 years for boys, 1.55 years for girls and 1.61 years in total. Acharya's method overestimated DA by 0.21 years for boys, 0.85 years for girls and 0.53 years in total. The absolute accuracy was better for Acharya's method compared with Demirjian's method. Conclusion: This study concluded that both the Demirjian and Indian regression formulas were reliable in assessing age, making Demirjian's 8 teeth method applicable to South Indians.
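The accuracy comparison in this record reduces to two error statistics: the mean signed error (which reveals systematic under- or overestimation, as reported for each method) and the mean absolute error (the "absolute accuracy"). A minimal sketch on invented ages and estimates, not the study's 660 subjects:

```python
# Hypothetical chronological ages and two sets of dental-age estimates:
# method A systematically underestimates, method B slightly overestimates.
true_ages = [10.0, 12.0, 14.0, 16.0]
method_a = [8.5, 10.4, 12.3, 14.5]
method_b = [10.4, 12.6, 14.5, 16.6]

def mean_error(est):
    # Signed bias: negative = underestimation, positive = overestimation.
    return sum(e - t for e, t in zip(est, true_ages)) / len(true_ages)

def mean_abs_error(est):
    # Absolute accuracy: average magnitude of the estimation error.
    return sum(abs(e - t) for e, t in zip(est, true_ages)) / len(true_ages)

bias_a, bias_b = mean_error(method_a), mean_error(method_b)
mae_a, mae_b = mean_abs_error(method_a), mean_abs_error(method_b)
```

With these invented numbers method A's bias is −1.575 years and method B's is +0.525 years, and method B has the better absolute accuracy — the same pattern of comparison the study reports between the Demirjian and Acharya formulas.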

  5. The hard problem of cooperation.

    Directory of Open Access Journals (Sweden)

    Kimmo Eriksson

    Full Text Available Based on individual variation in cooperative inclinations, we define the "hard problem of cooperation" as that of achieving high levels of cooperation in a group of non-cooperative types. Can the hard problem be solved by institutions with monitoring and sanctions? In a laboratory experiment we find that the answer is affirmative if the institution is imposed on the group but negative if development of the institution is left to the group to vote on. In the experiment, participants were divided into groups of either cooperative types or non-cooperative types depending on their behavior in a public goods game. In these homogeneous groups they repeatedly played a public goods game regulated by an institution that incorporated several of the key properties identified by Ostrom: operational rules, monitoring, rewards, punishments, and (in one condition) change of rules. When change of rules was not possible and punishments were set to be high, groups of both types generally abided by operational rules demanding high contributions to the common good, and thereby achieved high levels of payoffs. Under less severe rules, both types of groups did worse but non-cooperative types did worst. Thus, non-cooperative groups profited the most from being governed by an institution demanding high contributions and employing high punishments. Nevertheless, in a condition where change of rules through voting was made possible, development of the institution in this direction was more often voted down in groups of non-cooperative types. We discuss the relevance of the hard problem and fit our results into a bigger picture of institutional and individual determinants of cooperative behavior.

  6. The hard problem of cooperation.

    Science.gov (United States)

    Eriksson, Kimmo; Strimling, Pontus

    2012-01-01

    Based on individual variation in cooperative inclinations, we define the "hard problem of cooperation" as that of achieving high levels of cooperation in a group of non-cooperative types. Can the hard problem be solved by institutions with monitoring and sanctions? In a laboratory experiment we find that the answer is affirmative if the institution is imposed on the group but negative if development of the institution is left to the group to vote on. In the experiment, participants were divided into groups of either cooperative types or non-cooperative types depending on their behavior in a public goods game. In these homogeneous groups they repeatedly played a public goods game regulated by an institution that incorporated several of the key properties identified by Ostrom: operational rules, monitoring, rewards, punishments, and (in one condition) change of rules. When change of rules was not possible and punishments were set to be high, groups of both types generally abided by operational rules demanding high contributions to the common good, and thereby achieved high levels of payoffs. Under less severe rules, both types of groups did worse but non-cooperative types did worst. Thus, non-cooperative groups profited the most from being governed by an institution demanding high contributions and employing high punishments. Nevertheless, in a condition where change of rules through voting was made possible, development of the institution in this direction was more often voted down in groups of non-cooperative types. We discuss the relevance of the hard problem and fit our results into a bigger picture of institutional and individual determinants of cooperative behavior.

  7. Hard electroproduction of hybrid mesons

    International Nuclear Information System (INIS)

    Anikin, I.V.; LPT Universite Paris-Sud, Orsay; Szymanowski, L.; Teryaev, O.V.; ); Wallon, S.

    2005-01-01

    We estimate the sizeable cross section for deep exclusive electroproduction of an exotic J^PC = 1^-+ hybrid meson in the Bjorken regime. The production amplitude scales like the one for usual meson electroproduction, i.e. as 1/Q^2. This is due to the non-vanishing leading-twist distribution amplitude for the hybrid meson, which may be normalized thanks to its relation to the energy-momentum tensor and to the QCD sum rules technique. The hard amplitude is considered up to next-to-leading order in α_s and we explore the consequences of fixing the renormalization scale ambiguity through the BLM procedure. (author)

  8. Hard Identity and Soft Identity

    Directory of Open Access Journals (Sweden)

    Hassan Rachik

    2006-04-01

    Full Text Available Often collective identities are classified depending on their contents and rarely depending on their forms. Differentiation between soft identity and hard identity is applied to diverse collective identities: religious, political, national, tribal ones, etc. This classification is made following the principal dimensions of collective identities: type of classification (univocal and exclusive, or relative and contextual), the absence or presence of conflicts of loyalty, selective or totalitarian, objective or subjective conception, among others. The different characteristics analysed contribute to outlining an increasingly frequent type of identity: the authoritarian identity.

  9. Accuracy and precision of the CKD-EPI and MDRD predictive equations compared with glomerular filtration rate measured by inulin clearance in a Saudi population.

    Science.gov (United States)

    Al-Wakeel, Jamal Saleh

    2016-01-01

    Predictive equations for estimating glomerular filtration rate (GFR) in different clinical conditions should be validated by comparing them with the measurement of GFR using inulin clearance, a highly accurate measure of GFR. Our aim was to validate the Chronic Kidney Disease-Epidemiology Collaboration (CKD-EPI) and Modification of Diet in Renal Disease (MDRD) equations by comparing them to the GFR measured using inulin clearance in chronic kidney disease (CKD) patients. Cross-sectional study performed in adult Saudi patients with CKD. King Saud University Affiliated Hospital, Riyadh, Saudi Arabia in 2014. We compared GFR measured by inulin clearance with the estimated GFR calculated using the CKD-EPI and MDRD predictive formulas. Correlation, bias, precision and accuracy between the estimated GFR and inulin clearance. Comparisons were made in 31 participants (23 CKD and 8 transplanted), including 19 males (mean age 42.2 [15] years and weight 68.7 [18] kg). GFR using inulin was 51.54 (33.8) mL/min/1.73 m2. In comparison to inulin clearance, the GFR by the predictive equations was: CKD-EPI creatinine 52.6 (34.4) mL/min/1.73 m2 (P=.490), CKD-EPI cystatin C 41.39 (30.30) mL/min/1.73 m2 (P=.002), CKD creatinine-cystatin C 45.03 (30.9) mL/min/1.73 m2 (P=.004) and MDRD 48.35 (31.5) mL/min/1.73 m2 (P=.028) (statistical comparisons vs inulin). Bland-Altman plots demonstrated that GFR estimated by the CKD-EPI creatinine was the most accurate compared with inulin clearance, having a mean difference (estimated bias) and limits of agreement of -1.1 (15.6, -17.7). By comparison, the mean differences for the other predictive equations were: CKD-EPI cystatin C 10.2 (43.7, -23.4), CKD creatinine-cystatin C 6.5 (29.3, -16.3) and MDRD 3.2 (18.3, -11.9). Except for CKD-EPI creatinine, all of the equations underestimated GFR in comparison with inulin clearance.
When compared with inulin clearance, the CKD-EPI creatinine equation is the most accurate, precise and least biased equation for estimation of GFR.
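A Bland-Altman comparison like the one above boils down to the mean difference (bias) and the 95% limits of agreement (bias ± 1.96 SD of the differences). A minimal sketch on invented GFR pairs — not the study's per-patient measurements; the population SD is used for determinism, though the sample SD is also common:

```python
import math

# Hypothetical estimated-vs-reference GFR values (mL/min/1.73 m^2).
estimated = [50.0, 41.0, 46.0, 53.0, 58.0]   # e.g., an equation-based estimate
reference = [53.0, 42.0, 46.0, 52.0, 55.0]   # e.g., inulin clearance

diffs = [e - r for e, r in zip(estimated, reference)]
bias = sum(diffs) / len(diffs)                       # mean difference
sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / len(diffs))
lower, upper = bias - 1.96 * sd, bias + 1.96 * sd    # 95% limits of agreement
```

For these invented pairs the bias is 0 with limits of agreement ±3.92; the study reports exactly this triple (bias, upper, lower) for each equation against inulin clearance.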

  10. A Comparative Performance Analysis of Multispectral and RGB Imaging on HER2 Status Evaluation for the Prediction of Breast Cancer Prognosis.

    Science.gov (United States)

    Liu, Wenlou; Wang, Linwei; Liu, Jiuyang; Yuan, Jingping; Chen, Jiamei; Wu, Han; Xiang, Qingming; Yang, Guifang; Li, Yan

    2016-12-01

    Despite the extensive application of multispectral imaging (MSI) in biomedical multidisciplinary research, there is a paucity of data available regarding the implication of MSI in tumor prognosis prediction. We compared the behaviors of multispectral (MS) and conventional red-green-blue (RGB) images on assessment of human epidermal growth factor receptor 2 (HER2) immunohistochemistry to explore their impact on outcome in patients with invasive breast cancer (BC). Tissue microarrays containing 240 BC patients were introduced to compare the performance of MS and RGB imaging methods on the quantitative assessment of HER2 status and the prognostic value of 5-year disease-free survival (5-DFS). Both the total and average signal optical density values of HER2 MS and RGB images were analyzed, and all patients were divided into two groups based on the different 5-DFS. The quantification of HER2 MS images was negatively correlated with 5-DFS in lymph node-negative and -positive patients (P < 0.05). Survival analysis indicated that the hazard ratio (HR) of HER2 MS was higher than that of HER2 RGB (HR=2.454; 95% confidence interval [CI], 1.636-3.681 vs HR=2.060; 95% CI, 1.361-3.119). Additionally, the area under the curve (AUC) by receiver operating characteristic analysis for HER2 MS was greater than that for HER2 RGB (AUC=0.649; 95% CI, 0.577-0.722 vs AUC=0.596; 95% CI, 0.522-0.670) in predicting the risk of recurrence. More importantly, the quantification of HER2 MS images had higher prediction accuracy than that of HER2 RGB images (69.6% vs 65.0%) on 5-DFS. Our study suggested that better information on BC prognosis could be obtained from the quantification of HER2 MS images and that MS images might perform better in predicting BC prognosis than conventional RGB images. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  11. Comparative assessment of 18F-fluorodeoxyglucose PET and 99mTc-tetrofosmin SPECT for the prediction of functional recovery in patients with reperfused acute myocardial infarction

    International Nuclear Information System (INIS)

    Shirasaki, Haruhisa; Nakano, Akira; Uzui, Hiroyasu; Ueda, Takanori; Lee, Jong-Dae; Yonekura, Yoshiharu; Okazawa, Hidehiko

    2006-01-01

    Purpose: Although preserved glucose metabolism is considered to be a marker of myocardial viability in the chronic stage, it has not been fully elucidated whether this is also true with regard to reperfused acute myocardial infarction (AMI). The aim of this study was to compare the diagnostic performance of 99mTc-tetrofosmin SPECT and 18F-fluorodeoxyglucose (FDG) PET for the prediction of functional recovery in reperfused AMI. Methods: The study population comprised 28 patients. Both tetrofosmin SPECT and FDG PET were performed in all 28 patients at ca. 2 weeks and in 23 at 6 months. The tetrofosmin and FDG findings in infarct-related segments were compared with the regional wall motion score assessed by left ventriculography over 6 months to determine the predictive value for functional recovery. Of 120 infarct-related segments, 83 had preserved flow (tetrofosmin uptake ≥50%) and 81 had preserved glucose metabolism (FDG uptake ≥40%). The sensitivity and specificity of tetrofosmin SPECT for the prediction of functional recovery tended to be superior to those of FDG PET (90.0% and 72.5% vs 85.0% and 67.5%, respectively). Thirteen segments with preserved flow and decreased glucose metabolism demonstrated marked recovery of contractile function from 2.5±1.0 to 1.4±1.4 (p<0.01), with restoration of glucose metabolism at 6 months. In contrast, 11 segments with decreased flow and preserved glucose metabolism demonstrated incomplete functional improvement from 3.0±0.8 to 2.2±1.2. In the subacute phase, preserved myocardial blood flow is more reliable than glucose metabolism in predicting functional recovery in reperfused myocardium. (orig.)

  12. "Refsdal" Meets Popper: Comparing Predictions of the Re-appearance of the Multiply Imaged Supernova Behind MACSJ1149.5+2223

    Science.gov (United States)

    Treu, T.; Brammer, G.; Diego, J. M.; Grillo, C.; Kelly, P. L.; Oguri, M.; Rodney, S. A.; Rosati, P.; Sharon, K.; Zitrin, A.; Balestra, I.; Bradač, M.; Broadhurst, T.; Caminha, G. B.; Halkola, A.; Hoag, A.; Ishigaki, M.; Johnson, T. L.; Karman, W.; Kawamata, R.; Mercurio, A.; Schmidt, K. B.; Strolger, L.-G.; Suyu, S. H.; Filippenko, A. V.; Foley, R. J.; Jha, S. W.; Patel, B.

    2016-01-01

    Supernova “Refsdal,” multiply imaged by cluster MACS1149.5+2223, represents a rare opportunity to make a true blind test of model predictions in extragalactic astronomy, on a timescale that is short compared to a human lifetime. In order to take advantage of this event, we produced seven gravitational lens models with five independent methods, based on Hubble Space Telescope (HST) Hubble Frontier Field images, along with extensive spectroscopic follow-up observations by HST, the Very Large Telescope, and the Keck Telescopes. We compare the model predictions and show that they agree reasonably well with the measured time delays and magnification ratios between the known images, even though these quantities were not used as input. This agreement is encouraging, considering that the models only provide statistical uncertainties, and do not include additional sources of uncertainties such as structure along the line of sight, cosmology, and the mass sheet degeneracy. We then present the model predictions for the other appearances of supernova “Refsdal.” A future image will reach its peak in the first half of 2016, while another image appeared between 1994 and 2004. The past image would have been too faint to be detected in existing archival images. The future image should be approximately one-third as bright as the brightest known image (i.e., H_AB ≈ 25.7 mag at peak and H_AB ≈ 26.7 mag six months before peak), and thus detectable in single-orbit HST images. We will find out soon whether our predictions are correct.

  13. Comparison of time-dependent changes in the surface hardness of different composite resins

    Science.gov (United States)

    Ozcan, Suat; Yikilgan, Ihsan; Uctasli, Mine Betul; Bala, Oya; Kurklu, Zeliha Gonca Bek

    2013-01-01

    Objective: The aim of this study was to evaluate the change in surface hardness of a silorane-based composite resin (Filtek Silorane) over time and compare the results with the surface hardness of two methacrylate-based resins (Filtek Supreme and Majesty Posterior). Materials and Methods: From each composite material, 18 wheel-shaped samples (5-mm diameter and 2-mm depth) were prepared. Top and bottom surface hardness of these samples was measured using a Vickers hardness tester. The samples were then stored at 37°C and 100% humidity. After 24 h and 7, 30 and 90 days, the top and bottom surface hardness of the samples was measured. At each measurement, the ratio between the hardness of the top and bottom surfaces was recorded as the hardness ratio. Statistical analysis was performed by one-way analysis of variance, multiple comparisons by Tukey's test and binary comparisons by t-test with a significance level of P = 0.05. Results: The highest hardness values were obtained from both surfaces of Majesty Posterior and the lowest from Filtek Silorane. Both the top and bottom surface hardness of the methacrylate-based composite resins was high, and there was a statistically significant difference between the top and bottom hardness values only for the silorane-based composite, Filtek Silorane (P < 0.05). Conclusion: Although the silorane-based composite resin Filtek Silorane showed an adequate hardness ratio, the use of an incremental technique during application is more important than with methacrylate-based composites. PMID:24966724

  14. [Computer-assisted phacoemulsification for hard cataracts].

    Science.gov (United States)

    Zemba, M; Papadatu, Adriana-Camelia; Sîrbu, Laura-Nicoleta; Avram, Corina

    2012-01-01

    To evaluate the efficiency of new torsional phacoemulsification software (Ozil IP system) in hard nucleus cataract extraction. 45 eyes with hard senile cataract (degrees III and IV) underwent phacoemulsification performed by the same surgeon, using the same technique (stop and chop). The Infiniti (Alcon) platform was used, with Ozil IP software and a Kelman mini-flared 45-degree phaco tip. The nucleus was split into two, and then the first half was phacoemulsified with IP on (group 1) and the second half with IP off (group 2). For every group we measured: cumulative dissipated energy (CDE), the number of tip occlusions that needed manual clearing, and the amount of BSS used. The mean CDE was the same in group 1 and in group 2 (between 6.2 and 14.9). The incidence of occlusions that needed manual clearing was lower in group 1 (5 times) than in group 2 (13 times). Group 2 used more BSS than group 1. The new torsional software (IP system) significantly decreased occlusion time and balanced salt solution use over standard torsional software, particularly with denser cataracts.

  15. A comparative study on the predictive ability of the decision tree, support vector machine and neuro-fuzzy models in landslide susceptibility mapping using GIS

    Science.gov (United States)

    Pradhan, Biswajeet

    2013-02-01

    The purpose of the present study is to compare the prediction performance of three different approaches, decision tree (DT), support vector machine (SVM) and adaptive neuro-fuzzy inference system (ANFIS), for landslide susceptibility mapping in the Penang Hill area, Malaysia. The necessary input parameters for the landslide susceptibility assessments were obtained from various sources. First, landslide locations were identified from aerial photographs and field surveys, and an inventory of 113 landslide locations was constructed. The study area contains 340,608 pixels, of which 8403 pixels contain landslides. The landslide inventory was randomly partitioned into two subsets: (1) part 1, containing 50% (4000 landslide grid cells), was used in the training phase of the models; (2) part 2, a validation dataset of the remaining 50% (4000 landslide grid cells), was used to validate the three models and confirm their accuracy. The digitally processed images of the input parameters were combined in GIS. Finally, landslide susceptibility maps were produced, and their performances were assessed and discussed. A total of fifteen landslide susceptibility maps were produced using the DT, SVM and ANFIS based models, and the resultant maps were validated using the landslide locations. Prediction performance of these maps was checked with receiver operating characteristic (ROC) analysis, using both the success rate curve and the prediction rate curve. The validation results showed that the area under the ROC curve for the fifteen models produced using DT, SVM and ANFIS varied from 0.8204 to 0.9421 for the success rate curves and from 0.7580 to 0.8307 for the prediction rate curves, respectively. Moreover, the prediction rate curves revealed that model 5 of DT has a slightly higher prediction performance (83.07), whereas the success rate showed that model 5 of ANFIS has the best prediction capability (94.21) among all models. The results of this study showed that landslide susceptibility mapping in the Penang Hill area using the three approaches (e
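    The area-under-the-ROC-curve validation described above can be sketched with the pairwise (Mann-Whitney) estimator of AUC; the susceptibility scores below are hypothetical toy values, not the paper's data:

    ```python
    def roc_auc(pos_scores, neg_scores):
        """AUC via the Mann-Whitney U statistic: the probability that a landslide
        pixel receives a higher susceptibility score than a stable pixel."""
        wins = 0.0
        for p in pos_scores:
            for n in neg_scores:
                if p > n:
                    wins += 1.0
                elif p == n:
                    wins += 0.5
        return wins / (len(pos_scores) * len(neg_scores))

    # Toy susceptibility scores (hypothetical, not the paper's data)
    landslide_pixels = [0.90, 0.80, 0.75, 0.60]
    stable_pixels = [0.70, 0.40, 0.30, 0.20, 0.10]
    auc = roc_auc(landslide_pixels, stable_pixels)
    ```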

  16. Arterial spin labeling-based Z-maps have high specificity and positive predictive value for neurodegenerative dementia compared to FDG-PET

    Energy Technology Data Exchange (ETDEWEB)

    Faellmar, David; Larsson, Elna-Marie [Uppsala University, Department of Surgical Sciences, Radiology, Uppsala (Sweden); Haller, Sven [Uppsala University, Department of Surgical Sciences, Radiology, Uppsala (Sweden); University Medical Center Freiburg, Department of Neuroradiology, Freiburg (Germany); University of Geneva, Faculty of Medicine, Geneva (Switzerland); Affidea CDRC - Centre Diagnostique Radiologique de Carouge, Carouge (Switzerland); Lilja, Johan [Uppsala University, Department of Surgical Sciences, Nuclear Medicine and PET, Uppsala (Sweden); Hermes Medical Solutions, Stockholm (Sweden); Danfors, Torsten [Uppsala University, Department of Surgical Sciences, Nuclear Medicine and PET, Uppsala (Sweden); Kilander, Lena [Uppsala University, Department of Public Health and Caring Sciences, Geriatrics, Uppsala (Sweden); Tolboom, Nelleke; Croon, Philip M.; Berckel, Bart N.M. van [VU University Medical Center, Department of Radiology and Nuclear Medicine, Amsterdam (Netherlands); Egger, Karl [University Medical Center Freiburg, Department of Neuroradiology, Freiburg (Germany); Kellner, Elias [Medical Center University of Freiburg, Department of Radiology, Medical Physics, Faculty of Medicine, Freiburg (Germany); Verfaillie, Sander C.J.; Ossenkoppele, Rik [VU University Medical Center, Department of Neurology, Alzheimer Center Amsterdam, Amsterdam (Netherlands); Barkhof, Frederik [VU University Medical Center, Department of Radiology and Nuclear Medicine, Amsterdam (Netherlands); UCL, Institutes of Neurology and Healthcare Engineering, London (United Kingdom)

    2017-10-15

    Cerebral perfusion analysis based on arterial spin labeling (ASL) MRI has been proposed as an alternative to FDG-PET in patients with neurodegenerative disease. Z-maps show normal distribution values relating an image to a database of controls. They are routinely used for FDG-PET to demonstrate disease-specific patterns of hypometabolism at the individual level. This study aimed to compare the performance of Z-maps based on ASL to FDG-PET. Data were combined from two separate sites, each cohort consisting of patients with Alzheimer's disease (n = 18 + 7), frontotemporal dementia (n = 12 + 8) and controls (n = 9 + 29). Subjects underwent pseudocontinuous ASL and FDG-PET. Z-maps were created for each subject and modality. Four experienced physicians visually assessed the 166 Z-maps in random order, blinded to modality and diagnosis. Discrimination of patients versus controls using ASL-based Z-maps yielded high specificity (84%) and positive predictive value (80%), but significantly lower sensitivity compared to FDG-PET-based Z-maps (53% vs. 96%, p < 0.001). Among true-positive cases, correct diagnoses were made in 76% (ASL) and 84% (FDG-PET) (p = 0.168). ASL-based Z-maps can be used for visual assessment of neurodegenerative dementia with high specificity and positive predictive value, but with inferior sensitivity compared to FDG-PET. (orig.)
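    A Z-map of the kind compared here is, at its core, a voxelwise z-score of the patient image against a database of controls. A minimal sketch with toy three-voxel images (sample standard deviation with n-1); the values are hypothetical:

    ```python
    import math

    def z_map(patient, controls):
        """Voxelwise z-scores of one image against a database of control images."""
        n = len(controls)
        z = []
        for v in range(len(patient)):
            vals = [img[v] for img in controls]
            mean = sum(vals) / n
            sd = math.sqrt(sum((x - mean) ** 2 for x in vals) / (n - 1))
            z.append((patient[v] - mean) / sd)
        return z

    controls = [[50.0, 52.0, 48.0],  # three control subjects,
                [54.0, 50.0, 50.0],  # three voxels each
                [52.0, 48.0, 52.0]]
    patient = [40.0, 50.0, 50.0]     # marked hypoperfusion in voxel 0
    z = z_map(patient, controls)
    ```

    Strongly negative z-values flag regional hypoperfusion (ASL) or hypometabolism (FDG-PET) relative to the control database.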

  17. Characterization and Tribological Properties of Hard Anodized and Micro Arc Oxidized 5754 Quality Aluminum Alloy

    Directory of Open Access Journals (Sweden)

    M. Ovundur

    2015-03-01

    Full Text Available This study was initiated to compare the tribological performances of a 5754 quality aluminum alloy after hard anodic oxidation and micro arc oxidation processes. The structural analyses of the coatings were performed using XRD and SEM techniques. The hardness of the coatings was determined using a Vickers micro-indentation tester. Tribological performances of the hard anodized and micro arc oxidized samples were compared on a reciprocating wear tester under dry sliding conditions. The dry sliding wear tests showed that the wear resistance of the oxide coating generated by micro arc oxidation is remarkably higher than that of the hard anodized alloy.

  18. Aespoe hard rock laboratory Sweden

    International Nuclear Information System (INIS)

    1992-01-01

    The aim of the new Aespoe hard rock laboratory is to demonstrate the state of the art of technology and evaluation methods before the start of actual construction work on the planned deep repository for spent nuclear fuel. The nine-country OECD/NEA project in the Stripa mine in Sweden has been an excellent example of high-quality international research co-operation. In Sweden the new Aespoe hard rock laboratory will gradually take over and finalize this work. SKB very much appreciates the continued international participation in Aespoe, which is of great value for the quality, efficiency, and confidence in this kind of work. We have invited a number of leading experts to this first international seminar to summarize the current state of a number of key questions. The contributions show the great progress that has taken place during the years. The results show that there is a solid scientific basis for using this knowledge in site-specific preparation and work on actual repositories. (au)

  19. Experimental investigation and modelling of surface roughness and resultant cutting force in hard turning of AISI H13 Steel

    Science.gov (United States)

    Boy, M.; Yaşar, N.; Çiftçi, İ.

    2016-11-01

    In recent years, turning of hardened steels has replaced grinding for finishing operations. Compared to grinding operations, hard turning offers higher material removal rates, greater process flexibility, lower equipment costs, and shorter setup time. CBN or ceramic cutting tools are widely used in hard part machining. For successful application of hard turning, selection of suitable cutting parameters for a given cutting tool is an important step. For this purpose, an experimental investigation was conducted to determine the effects of cutting tool edge geometry, feed rate and cutting speed on surface roughness and resultant cutting force in hard turning of AISI H13 steel with ceramic cutting tools. Machining experiments were conducted in a CNC lathe based on a Taguchi experimental design (L16) at different levels of the cutting parameters. In the experiments, a Kistler 9257B piezoelectric dynamometer was used to measure the three cutting force components (Fc, Ff and Fr). Surface roughness measurements were performed using a Mahrsurf PS1 device. For statistical analysis, analysis of variance was performed and mathematical models were developed for surface roughness and resultant cutting force. The analysis of variance results showed that cutting edge geometry, cutting speed and feed rate were the most significant factors for the resultant cutting force, while cutting edge geometry and feed rate were the most significant factors for surface roughness. Regression analysis was applied to predict the outcomes of the experiments; the predicted and measured values were very close to each other. Afterwards, confirmation tests were performed to compare the predicted results with the measured results. According to the confirmation test results, the measured values are within the 95% confidence interval.

  20. Modeling HAZ hardness and weld features with BPN technology

    International Nuclear Information System (INIS)

    Morinishi, S.; Bibby, M.J.; Chan, B.

    2000-01-01

    A BPN (back propagation network) system for predicting HAZ (heat-affected zone) hardnesses and GMAW (gas metal arc) weld features (size and shape) is described in this presentation. Among other things, issues of network structure, training and testing data selection, software efficiency and user interface are discussed. The system is evaluated by comparing network output with experimentally measured test data in the first instance, and with regression methods available for this purpose, thereafter. The potential of the web for exchanging weld process data and for accessing models generated with this system is addressed. In this regard the software has been made available on the Cambridge University 'steel' and 'neural' websites. In addition Java coded software has recently been generated to provide web flexibility and accessibility. Over and above this, the possibility of offering an on-line 'server' training service, arranged to capture user data (user identification, measured welding parameters and features) and trained models for the use of the entire welding community is described. While the possibility of such an exchange is attractive, there are several difficulties in designing such a system. Server software design, computing resources, data base and communications considerations are some of the issues that must be addressed with regard to a server centered training and database system before it becomes reality. (author)

  1. In Vivo Predictive Dissolution: Comparing the Effect of Bicarbonate and Phosphate Buffer on the Dissolution of Weak Acids and Weak Bases.

    Science.gov (United States)

    Krieg, Brian J; Taghavi, Seyed Mohammad; Amidon, Gordon L; Amidon, Gregory E

    2015-09-01

    Bicarbonate is the main buffer in the small intestine and it is well known that buffer properties such as pKa can affect the dissolution rate of ionizable drugs. However, bicarbonate buffer is complicated to work with experimentally. Finding a suitable substitute for bicarbonate buffer may provide a way to perform more physiologically relevant dissolution tests. The dissolution of weak acid and weak base drugs was conducted in bicarbonate and phosphate buffer using rotating disk dissolution methodology. Experimental results were compared with the predicted results using the film model approach of (Mooney K, Mintun M, Himmelstein K, Stella V. 1981. J Pharm Sci 70(1):22-32) based on equilibrium assumptions, as well as a model accounting for the slow hydration reaction, CO2 + H2O → H2CO3. Assuming carbonic acid is irreversible in the dehydration direction, CO2 + H2O ← H2CO3, the transport analysis can accurately predict rotating disk dissolution of weak acid and weak base drugs in bicarbonate buffer. The predictions show that matching the dissolution of weak acid and weak base drugs in phosphate and bicarbonate buffer is possible. The phosphate buffer concentration necessary to match physiologically relevant bicarbonate buffer [e.g., 10.5 mM HCO3(-), pH = 6.5] is typically in the range of 1-25 mM and is very dependent upon drug solubility and pKa. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.
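    The pH/pKa dependence underlying this kind of dissolution analysis follows from the Henderson-Hasselbalch relation. A sketch for a weak acid; the pKa of 4.5 is a hypothetical, ibuprofen-like value, not a parameter from this paper:

    ```python
    def ionized_fraction_weak_acid(pH, pKa):
        """Henderson-Hasselbalch: fraction of a weak acid present in ionized form."""
        return 1.0 / (1.0 + 10.0 ** (pKa - pH))

    # Hypothetical weak-acid pKa of 4.5, evaluated at intestinal pH 6.5
    # (the physiologically relevant condition quoted in the abstract)
    frac_intestine = ionized_fraction_weak_acid(6.5, 4.5)
    frac_stomach = ionized_fraction_weak_acid(2.0, 4.5)
    ```

    The ionized form is far more soluble, which is why buffer pH and capacity so strongly influence the dissolution rate at the solid-liquid film.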

  2. Tumor size evaluated by pelvic examination compared with 3-D MR quantitative analysis in the prediction of outcome for cervical cancer

    International Nuclear Information System (INIS)

    Mayr, Nina A.; Jie Zheng; Yuh, William T.C.; B-Chen, Wen; Ehrhardt, James C.; Sorosky, Joel I.; Pelsang, Retta E.; Hussey, David H.

    1996-01-01

    Purpose: Tumor size estimated by pelvic examination (PE) is an important prognostic factor in cervical cancer treated with radiation therapy (RT). Recent histologic correlation studies also showed that magnetic resonance imaging (MR) provides high accuracy in the measurement of the actual tumor volume. The purpose of this study was to: (a) compare the accuracy of PE and MR in predicting outcome, and (b) correlate tumor measurements by PE vs. MR. Materials and Methods: Tumor measurements were performed prospectively in 172 MR studies in 43 patients with advanced cervical cancer. MR and PE were performed at the same time intervals: exam 1 (start of RT), exam 2 (after 20-24 Gy/2-2.5 wks), exam 3 (after 40-50 Gy/4-5 wks), and exam 4 (1-2 months after RT). PE determined tumor diameters in the anteroposterior (ap), lateral (lat), and craniocaudal (cc) directions, and clinical tumor size was computed as maximum diameter, average diameter, and volume (ap × lat × cc × π/6). MR-derived tumor size was computed by summation of the tumor areas in each slice and multiplication by the slice thickness. Tumor regression during RT was calculated for each method as a percentage of the initial volume. The measurements were correlated with local recurrence and disease-free survival. Median follow-up was 18 months (range: 3-50 months). Results: Prediction of local control. Overall, the tumor regression rate (rapid vs. slow; Table 1) was more precise than the initial tumor size (Table 2) in the prediction of outcome. MR provided a significantly more accurate and earlier prediction of local control (exam 2 and 3 vs. exam 4; Table 1) and disease-free survival than PE. Based on the initial tumor size (Table 2), MR was also better than PE in predicting local control and disease-free survival, particularly in large (≥ 100 cm³) tumors. Size correlation. Tumor size (maximum diameter, average diameter, volume) by PE and MR did not correlate well (r² = 0.51, 0.61, 0.58, respectively). When using MR
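    The two volume estimates can be sketched directly from the formulas in the abstract: the clinical (PE) volume as an ellipsoid, ap × lat × cc × π/6, and the MR volume as the sum of per-slice tumor areas times slice thickness. The diameters and slice areas below are hypothetical:

    ```python
    import math

    def ellipsoid_volume(ap, lat, cc):
        """Clinical tumor volume from three orthogonal diameters: ap*lat*cc*pi/6."""
        return ap * lat * cc * math.pi / 6.0

    def mr_volume(slice_areas, thickness):
        """MR-derived volume: sum of per-slice tumor areas times slice thickness."""
        return sum(slice_areas) * thickness

    pe_vol = ellipsoid_volume(4.0, 5.0, 4.5)            # cm, hypothetical PE diameters
    mr_vol = mr_volume([2.1, 6.5, 9.8, 7.0, 2.4], 1.0)  # cm^2 areas, 1 cm slices
    ```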

  3. Study of the hard-disk system at high densities: the fluid-hexatic phase transition.

    Science.gov (United States)

    Mier-Y-Terán, Luis; Machorro-Martínez, Brian Ignacio; Chapela, Gustavo A; Del Río, Fernando

    2018-06-21

    Integral equations of uniform fluids have been considered unable to predict any characteristic feature of the fluid-solid phase transition, including the shoulder that arises in the second peak of the fluid-phase radial distribution function, RDF, of hard-core systems obtained by computer simulations, at fluid densities very close to the structural two-step phase transition. This reasoning is based on the results of traditional integral approximations, like Percus-Yevick, PY, which does not show such a shoulder in hard-core systems, neither in two nor three dimensions. In this work, we present results of three Ansätze, based on the PY theory, that were proposed to remedy the lack of PY analytical solutions in two dimensions. This comparative study shows that one of those Ansätze does develop a shoulder in the second peak of the RDF at densities very close to the phase transition, qualitatively describing this feature. Since the shoulder grows into a peak at still higher densities, this integral equation approach predicts the appearance of an orientational order characteristic of the hexatic phase in a continuous fluid-hexatic phase transition.

  4. A Comparative Analysis and Prediction of Traffic Accident Causalities in the Sultanate of Oman using Artificial Neural Networks and Statistical methods

    Directory of Open Access Journals (Sweden)

    Galal A. Ali

    1998-12-01

    Full Text Available Traffic accidents are among the major causes of death in the Sultanate of Oman. This is particularly the case in the age group of 16 to 25. Studies indicate that, in spite of Oman's high population-per-vehicle ratio, its fatality rate per 10,000 vehicles is one of the highest in the world. This alarming situation underlines the importance of analyzing traffic accident data and predicting accident casualties. Such steps will lead to understanding the underlying causes of traffic accidents, and thereby to devising appropriate measures to reduce the number of car accidents and enhance safety standards. In this paper, a comparative study of car accident casualties in Oman was undertaken. Artificial Neural Networks (ANNs) were used to analyze the data and make predictions of the number of accident casualties. The results were compared with those obtained from the analysis and predictions by regression techniques. Both approaches attempted to model accident casualties using historical data on related factors, such as population, number of cars on the road and so on, covering the period from 1976 to 1994. Forecasts for the years 1995 to 2000 were made using ANNs and regression equations. The results from ANNs provided the best fit for the data. However, it was found that ANNs gave lower forecasts relative to those obtained by the regression methods used, indicating that ANNs are suitable for interpolation but their use for extrapolation may be limited. Nevertheless, the study showed that ANNs provide a potentially powerful tool for analyzing and forecasting traffic accidents and casualties.

  5. Comparative studies of the ITU-T prediction model for radiofrequency radiation emission and real time measurements at some selected mobile base transceiver stations in Accra, Ghana

    International Nuclear Information System (INIS)

    Obeng, S. O

    2014-07-01

    Recent developments in the electronics industry have led to the widespread use of radiofrequency (RF) devices in various areas including telecommunications. The increasing number of mobile base transceiver stations (BTS) as well as their proximity to residential areas have been accompanied by public health concerns due to the radiation exposure. The main objective of this research was to compare and modify the ITU-T predictive model for radiofrequency radiation emission from BTS with measured data at some selected cell sites in Accra, Ghana. Theoretical and experimental assessments of radiofrequency exposure due to mobile base station antennas have been analysed. The maximum and minimum average power densities measured from individual base stations in the town were 1.86 µW/m² and 0.00961 µW/m², respectively. The ITU-T predictive model power density ranged between 6.40 mW/m² and 0.344 W/m². The results obtained showed a variation between the measured power density levels and the ITU-T predictive model. The ITU-T model power density levels decrease with increasing radial distance, while the real-time measurements do not, due to fluctuations during measurement. The ITU-T model overestimated the power density levels by a factor of 10⁵ compared to the real-time measurements, and the model was modified to reduce the level of overestimation. The results showed that the radiation intensity varies from one base station to another, even at the same distance. The occupational exposure quotient ranged between 5.43E-10 and 1.89E-08, whilst the general public exposure quotient ranged between 2.72E-09 and 9.44E-08. These results show that the RF exposure levels in Accra from these mobile phone base station antennas are below the permitted RF exposure limit for the general public recommended by the International Commission on Non-Ionizing Radiation Protection. (au)
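    Predictive models of this kind are typically built on the free-space far-field estimate S = EIRP/(4πd²), which is why the predicted power density falls off with radial distance. A sketch with a hypothetical 20 W EIRP sector antenna; this generic approximation stands in for, and is not, the exact ITU-T model:

    ```python
    import math

    def far_field_power_density(eirp_w, distance_m):
        """Free-space far-field estimate S = EIRP / (4*pi*d^2), in W/m^2.
        A generic approximation, not the exact ITU-T model."""
        return eirp_w / (4.0 * math.pi * distance_m ** 2)

    EIRP = 20.0  # W, hypothetical sector antenna
    s_50m = far_field_power_density(EIRP, 50.0)
    s_100m = far_field_power_density(EIRP, 100.0)
    ```

    Doubling the distance quarters the predicted density (inverse-square law); real measurements deviate from this because of multipath, antenna patterns, and traffic-dependent transmit power.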

  6. Comparing the measured basal metabolic rates in patients with chronic disorders of consciousness to the estimated basal metabolic rate calculated from common predictive equations.

    Science.gov (United States)

    Xiao, Guizhen; Xie, Qiuyou; He, Yanbin; Wang, Ziwen; Chen, Yan; Jiang, Mengliu; Ni, Xiaoxiao; Wang, Qinxian; Murong, Min; Guo, Yequn; Qiu, Xiaowen; Yu, Ronghao

    2017-10-01

    Accurately predicting the basal metabolic rate (BMR) of patients in a vegetative state (VS) or minimally conscious state (MCS) is critical to proper nutritional therapy, but commonly used equations have not been shown to be accurate. Therefore, we compared the BMR measured by indirect calorimetry (IC) to BMR values estimated using common predictive equations in VS and MCS patients. Body composition variables were measured using the bioelectric impedance analysis (BIA) technique. BMR was measured by IC in 82 patients (64 men and 18 women) with VS or MCS. Patients were classified by body mass index as underweight or normal weight. BMR was estimated for each group using the Harris-Benedict (H-B), Schofield, or Cunningham equations and compared to the measured BMR using Bland-Altman analyses. For the underweight group, there was a significant difference between the measured BMR values and the estimated BMR values calculated using the H-B, Schofield, and Cunningham equations (p < 0.05). The BMR values estimated using the H-B and Cunningham equations also differed significantly from the measured BMR in the normal-weight group (p < 0.05). The Schofield equation showed the best concordance (only 41.5%) with the BMR values measured by IC. None of the commonly used equations to estimate BMR were suitable for the VS or MCS populations. Indirect calorimetry is the preferred way to avoid either over- or underestimation of BMR values. Copyright © 2016. Published by Elsevier Ltd.
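    As an illustration of one of the predictive equations compared above, the original Harris-Benedict estimate can be sketched as follows; the coefficients vary slightly between published revisions of the equation, and the patient values are hypothetical:

    ```python
    def harris_benedict_bmr(sex, weight_kg, height_cm, age_y):
        """Original Harris-Benedict BMR estimate in kcal/day; coefficients
        vary slightly between published revisions of the equation."""
        if sex == "male":
            return 66.473 + 13.752 * weight_kg + 5.003 * height_cm - 6.755 * age_y
        return 655.096 + 9.563 * weight_kg + 1.850 * height_cm - 4.676 * age_y

    # Hypothetical patient: 60 kg, 170 cm, 45-year-old man
    bmr = harris_benedict_bmr("male", 60.0, 170.0, 45.0)
    ```

    The study's point is precisely that such population-derived fits can deviate substantially from calorimetry in VS/MCS patients, so the estimate should be treated as a starting value only.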

  7. Assessment of delta ferrite in multipass TIG welds of 40 mm thick SS 316L: A comparative study of ferrite number (FN) prediction and measurements

    Science.gov (United States)

    Buddu, Ramesh Kumar; Raole, P. M.; Sarkar, B.

    2017-04-01

    Austenitic stainless steels are widely used in the fabrication of major fusion reactor systems such as the vacuum vessel, divertor, cryostat and other structural components. Multipass welding is used to join the thick plates in structural component fabrication. Due to the repeated weld thermal cycles, the microstructure is adversely altered owing to the presence of complex phases like austenite, ferrite and delta ferrite, which subsequently influences mechanical properties such as the tensile and impact toughness of the joints. The present paper reports a detailed analysis of the delta ferrite phase in the welded region of 40 mm thick SS 316L plates welded by a specially designed multipass narrow-groove TIG welding process under three different heat input conditions. A correlation of the delta ferrite microstructure with the different structure types, acicular and vermicular, is observed. The chemical composition of the weld samples was used to predict the Ferrite Number (FN), a representative measure of delta ferrite in welds, with the Schaeffler, WRC-1992 diagram and DeLong techniques by calculating the Creq and Nieq ratios, and the predictions were compared with experimental FN data from Feritescope measurements. The low heat input condition (1.67 kJ/mm) produced the highest FN (7.28), the medium heat input (1.72 kJ/mm) gave an FN of 7.04, whereas the high heat input (1.87 kJ/mm) condition gave an FN of 6.68, a decreasing trend; these FN data are compared with the prediction methods.
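    The Creq/Nieq calculation mentioned above can be sketched for the WRC-1992 diagram, whose equivalents are Creq = Cr + Mo + 0.7·Nb and Nieq = Ni + 35·C + 20·N + 0.25·Cu; the FN is then read off the diagram at that (Creq, Nieq) point. The weld-metal composition below is a hypothetical SS 316L-like example, not the paper's measured values:

    ```python
    def wrc1992_equivalents(comp):
        """Chromium and nickel equivalents used with the WRC-1992 diagram:
        Creq = Cr + Mo + 0.7*Nb,  Nieq = Ni + 35*C + 20*N + 0.25*Cu.
        The Ferrite Number is then read off the diagram at (Creq, Nieq)."""
        creq = comp["Cr"] + comp["Mo"] + 0.7 * comp.get("Nb", 0.0)
        nieq = (comp["Ni"] + 35.0 * comp["C"]
                + 20.0 * comp.get("N", 0.0) + 0.25 * comp.get("Cu", 0.0))
        return creq, nieq

    # Hypothetical SS 316L weld-metal composition in wt% (not the paper's data)
    weld = {"Cr": 17.2, "Mo": 2.3, "Nb": 0.0, "Ni": 11.0,
            "C": 0.025, "N": 0.06, "Cu": 0.1}
    creq, nieq = wrc1992_equivalents(weld)
    ```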

  8. Comparing the applicability of some geostatistical methods to predict the spatial distribution of topsoil Calcium Carbonate in part of farmland of Zanjan Province

    Science.gov (United States)

    Sarmadian, Fereydoon; Keshavarzi, Ali

    2010-05-01

    Most soils in Iran are located in arid and semi-arid regions and have a high pH (more than 7) and a high amount of calcium carbonate, which leads to their calcification. In calcareous soils, plant growth and production are difficult. Much of this problem relates to the high pH and high concentration of calcium ions, which cause fixation and unavailability of pH-dependent elements, especially phosphorus and some micronutrients such as Fe, Zn, Mn and Cu. Prediction of soil calcium carbonate in non-sampled areas and mapping its variability for the sustainable management of soil fertility are therefore very important. This research was carried out to evaluate and analyze the spatial variability of topsoil calcium carbonate as an aspect of soil fertility and plant nutrition, to compare geostatistical methods such as kriging and co-kriging, and to map topsoil calcium carbonate. For the geostatistical analysis, sampling was done with a stratified random method, and soil samples from 0 to 15 cm depth were collected with an auger at 23 locations. In the co-kriging method, salinity data were used as the auxiliary variable. For comparing and evaluating the geostatistical methods, cross-validation was used with the RMSE statistic. The results showed that the co-kriging method has the highest correlation coefficient and lowest RMSE, and is therefore more accurate than kriging for predicting calcium carbonate content in non-sampled areas.
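    The cross-validation comparison reduces to computing the RMSE of leave-one-out predictions for each interpolator. A minimal sketch; the observed and predicted CaCO3 percentages below are hypothetical:

    ```python
    import math

    def rmse(observed, predicted):
        """Root-mean-square error, the cross-validation statistic used above."""
        n = len(observed)
        return math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)

    # Hypothetical leave-one-out predictions of topsoil CaCO3 (%) at five sites
    observed = [18.0, 22.0, 15.0, 30.0, 25.0]
    kriging_pred = [20.0, 19.0, 17.0, 26.0, 27.0]
    cokriging_pred = [18.5, 21.0, 15.5, 28.5, 25.5]
    cokriging_better = rmse(observed, cokriging_pred) < rmse(observed, kriging_pred)
    ```

    A lower RMSE for co-kriging, as in this toy data, mirrors the paper's finding that the auxiliary salinity variable improves the interpolation.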

  9. Optical fiber composition and radiation hardness

    International Nuclear Information System (INIS)

    Wall, J.A.; Loretz, T.J.

    1982-01-01

    Germanium phosphosilicate and germanium borosilicate fibers doped with cerium were fabricated and tested for their responses to steady-state Co-60 radiation at -55°C, +20°C and +125°C. A fiber with germanium, boron and phosphorus in the silicate core and doped with antimony in the core and clad was similarly tested. All of the fibers showed significant improvements in radiation hardness at +20°C compared to undoped fibers of the same base composition. At -55°C, however, all except the cerium-doped germanium phosphosilicate were very radiation sensitive and also showed increases in the rate of induced loss at +125°C. The cerium-doped germanium phosphosilicate fiber showed virtually no change in radiation sensitivity at the temperature extremes and could prove useful in applications requiring relatively short lengths of fiber.

  10. Controlling grass weeds on hard surfaces

    DEFF Research Database (Denmark)

    Rask, Anne Merete; Kristoffersen, Palle; Andreasen, Christian

    2012-01-01

    An experiment was conducted on a specially designed hard surface to study the impact of the time interval between flaming treatments on the regrowth and flower production of two grass weeds. The goal of this experiment was to optimize the control of annual bluegrass and perennial ryegrass, both species...... that are very difficult to control without herbicides. Aboveground biomass from 72 plants per treatment was harvested and dry weights were recorded at regular intervals to investigate how the plants responded to flaming. Regrowth of the grasses was measured by harvesting aboveground biomass 2 wk after......, as they did not increase the reduction of aboveground biomass compared with the 7-d treatment interval. Knowledge of the regrowth of grass weeds after flaming treatments provided by this study can help improve the recommendations given to road keepers and park managers for the management of these weeds. Nomenclature...

  11. Developmental Stuttering in Children Who Are Hard of Hearing

    Science.gov (United States)

    Arena, Richard M.; Walker, Elizabeth A.; Oleson, Jacob J.

    2017-01-01

    Purpose: A number of studies with large sample sizes have reported lower prevalence of stuttering in children with significant hearing loss compared to children without hearing loss. This study used a parent questionnaire to investigate the characteristics of stuttering (e.g., incidence, prevalence, and age of onset) in children who are hard of…

  12. Hardness ratio evolutionary curves of gamma-ray bursts expected by the curvature effect

    International Nuclear Information System (INIS)

    Qin, Y.-P.; Su, C.-Y.; Fan, J. H.; Gupta, A. C.

    2006-01-01

    We have investigated gamma-ray burst (GRB) pulses with a fast rise and an exponential decay phase, assumed to arise from relativistically expanding fireballs, and found that the curvature effect influences the evolutionary curve of the corresponding hardness ratio (hereafter HRC). We find that, due to the curvature effect, the evolutionary curve of the pure hardness ratio (when the background count is not included) would peak at the very beginning of the curve and then undergo a drop-to-rise-to-decay phase. In the case of the raw hardness ratio (when the background count is included), the curvature effect would give rise to several types of evolutionary curve, depending on the hardness of a burst. For a soft burst, an upside-down pulse of its raw HRC would be observed; for a hard burst, the raw HRC shows a pulselike profile with a sinkage in its decaying phase; for a very hard burst, the raw HRC possesses a pulselike profile without a sinkage in its decaying phase. For a pulselike raw HRC, as in the hard and very hard bursts, its peak would appear in advance of that of the corresponding light curve, which was observed previously in some GRBs. For illustration, we have studied the HRCs of GRB 920216, GRB 920830, and GRB 990816 in detail. The features of the raw HRC expected for a hard burst are observed in these bursts. A fit to the three bursts shows that the curvature effect alone could indeed account for the predicted characteristics of the HRCs. In addition, we find that the observed hardness ratio tends to be harder at the beginning of the pulses, and softer late in the pulses, than the curvature effect alone would predict. We believe this is evidence of intrinsic hard-to-soft radiation, which might be due to an acceleration-to-deceleration mode of the shocks.

  13. Development of radiation hard scintillators

    International Nuclear Information System (INIS)

    Markley, F.; Woods, D.; Pla-Dalmau, A.; Foster, G.; Blackburn, R.

    1992-05-01

    Substantial improvements have been made in the radiation hardness of plastic scintillators. Cylinders of scintillating materials 2.2 cm in diameter and 1 cm thick have been exposed to 10 Mrads of gamma rays at a dose rate of 1 Mrad/h in a nitrogen atmosphere. One of the formulations tested showed an immediate decrease in pulse height of only 4% and has remained stable for 12 days while annealing in air. By comparison a commercial PVT scintillator showed an immediate decrease of 58% and after 43 days of annealing in air it improved to a 14% loss. The formulated sample consisted of 70 parts by weight of Dow polystyrene, 30 pbw of pentaphenyltrimethyltrisiloxane (Dow Corning DC 705 oil), 2 pbw of p-terphenyl, 0.2 pbw of tetraphenylbutadiene, and 0.5 pbw of UVASIL299LM from Ferro

  14. Earthquake prediction by Kina Method

    International Nuclear Information System (INIS)

    Kianoosh, H.; Keypour, H.; Naderzadeh, A.; Motlagh, H.F.

    2005-01-01

    Earthquake prediction has been one of the earliest desires of man. Scientists have worked hard to predict earthquakes for a long time. The results of these efforts can generally be divided into two methods of prediction: 1) the statistical method, and 2) the empirical method. In the first method, earthquakes are predicted using statistics and probabilities, while the second method utilizes a variety of precursors for earthquake prediction. The latter method is time consuming and more costly. However, neither method has produced fully satisfactory results to date. In this paper a new method entitled the 'Kiana Method' is introduced for earthquake prediction. This method offers more accurate results at lower cost compared to other conventional methods. In the Kiana method the electrical and magnetic precursors are measured in an area. Then, the time and the magnitude of a future earthquake are calculated using electrical formulas, in particular those for electrical capacitors. In this method, daily measurement of the electrical resistance in an area indicates whether or not the area is susceptible to a future earthquake. If the result is positive, the occurrence time and the magnitude can be estimated from the measured quantities. This paper explains the procedure and details of this prediction method. (authors)

  15. Hard photoproduction: An analysis of the first ZEUS data

    International Nuclear Information System (INIS)

    Feld, L.W.

    1993-10-01

    The electron-proton storage ring HERA offers a unique opportunity to study photon-proton collisions at center-of-mass energies around 200 GeV. This analysis covers the extraction of hard photoproduction events from the data taken by the ZEUS detector in its first year of operation. It is shown that these events are well described by the Monte Carlo generators PYTHIA and HERWIG. A jet analysis allows the kinematics of the hard subprocess to be measured. Clear evidence for both direct and resolved photon processes is seen in the data. In detailed Monte Carlo studies different photon structure functions are compared to the data. (orig.)

  16. The effect of gamma radiation on hardness evolution in high density polyethylene at elevated temperatures

    International Nuclear Information System (INIS)

    Chen, Pei-Yun; Chen, C.C.; Harmon, Julie P.; Lee, Sanboh

    2014-01-01

    This research focuses on characterizing hardness evolution in irradiated high density polyethylene (HDPE) at elevated temperatures. Hardness increases with increasing gamma ray dose, annealing temperature and annealing time. The hardness change is attributed to the variation of defects in the microstructure and molecular structure. The kinetics of the defects that control hardness are assumed to follow first-order structure relaxation. The experimental data are in good agreement with the predicted model. The rate constant follows the Arrhenius equation, and the corresponding activation energy decreases with increasing dose. The defects that control hardness in post-annealed HDPE increase with increasing dose and annealing temperature. The structure relaxation of HDPE has a lower energy of mixing in crystalline regions than in amorphous regions. Further, the energy of mixing for defects that influence hardness in HDPE is lower than those observed in polycarbonate (PC), poly(methyl methacrylate) (PMMA) and poly(hydroxyethyl methacrylate) (PHEMA). This is because polyethylene is a semi-crystalline material, while PC, PMMA and PHEMA are amorphous. - Highlights: • Hardness of HDPE increases with increasing gamma ray dose, annealing time and temperature. • The hardness change arises from defects in the microstructure and molecular structure. • Defects affecting hardness follow structure-relaxation kinetics. • The structure relaxation has a low energy of mixing in the crystalline regime.

  17. The effect of gamma radiation on hardness evolution in high density polyethylene at elevated temperatures

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Pei-Yun [Department of Materials Science and Engineering, National Tsing Hua University, Hsinchu 300, Taiwan (China); Chen, C.C. [Institute of Nuclear Energy Research, Longtan, Taoyuan 325, Taiwan (China); Harmon, Julie P. [Department of Chemistry, University of South Florida, Tampa, FL 33620 (United States); Lee, Sanboh, E-mail: sblee@mx.nthu.edu.tw [Department of Materials Science and Engineering, National Tsing Hua University, Hsinchu 300, Taiwan (China)

    2014-08-01

    This research focuses on characterizing hardness evolution in irradiated high density polyethylene (HDPE) at elevated temperatures. Hardness increases with increasing gamma ray dose, annealing temperature and annealing time. The hardness change is attributed to the variation of defects in the microstructure and molecular structure. The kinetics of the defects that control hardness are assumed to follow first-order structure relaxation. The experimental data are in good agreement with the predicted model. The rate constant follows the Arrhenius equation, and the corresponding activation energy decreases with increasing dose. The defects that control hardness in post-annealed HDPE increase with increasing dose and annealing temperature. The structure relaxation of HDPE has a lower energy of mixing in crystalline regions than in amorphous regions. Further, the energy of mixing for defects that influence hardness in HDPE is lower than those observed in polycarbonate (PC), poly(methyl methacrylate) (PMMA) and poly(hydroxyethyl methacrylate) (PHEMA). This is because polyethylene is a semi-crystalline material, while PC, PMMA and PHEMA are amorphous. - Highlights: • Hardness of HDPE increases with increasing gamma ray dose, annealing time and temperature. • The hardness change arises from defects in the microstructure and molecular structure. • Defects affecting hardness follow structure-relaxation kinetics. • The structure relaxation has a low energy of mixing in the crystalline regime.
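The first-order relaxation kinetics with an Arrhenius rate constant described in the two records above can be sketched numerically; all parameter values here are illustrative placeholders, not fitted values from the paper:

```python
import math

def arrhenius_rate(A, Ea, T):
    """Rate constant k = A * exp(-Ea / (R*T)); T in kelvin, Ea in J/mol."""
    R = 8.314  # gas constant, J/(mol*K)
    return A * math.exp(-Ea / (R * T))

def hardness(t, H0, Hinf, k):
    """First-order structure relaxation: H(t) relaxes from H0 toward Hinf."""
    return Hinf - (Hinf - H0) * math.exp(-k * t)

# Illustrative values only (not from the study):
A, Ea = 1.0e6, 5.0e4          # pre-exponential (1/h), activation energy (J/mol)
H0, Hinf = 60.0, 75.0         # initial and saturation hardness (arbitrary units)
k = arrhenius_rate(A, Ea, 373.15)       # annealing at 100 degC
print(hardness(10.0, H0, Hinf, k))      # hardness after 10 h of annealing
```

Increasing the dose in the paper's picture lowers Ea, which raises k and speeds the approach to the saturation hardness.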

  18. Does trampoline or hard surface jumping influence lower extremity alignment?

    Science.gov (United States)

    Akasaka, Kiyokazu; Tamura, Akihiro; Katsuta, Aoi; Sagawa, Ayako; Otsudo, Takahiro; Okubo, Yu; Sawada, Yutaka; Hall, Toby

    2017-12-01

    [Purpose] To determine whether repetitive trampoline or hard surface jumping affects lower extremity alignment on jump landing. [Subjects and Methods] Twenty healthy females participated in this study. All subjects performed a drop vertical jump before and after repeated maximum effort trampoline or hard surface jumping. A three-dimensional motion analysis system and two force plates were used to record lower extremity angles, moments, and vertical ground reaction force during drop vertical jumps. [Results] Knee extensor moment after trampoline jumping was greater than that after hard surface jumping. There were no significant differences between trials in vertical ground reaction force and lower extremity joint angles following each form of exercise. Repeated jumping on a trampoline increased peak vertical ground reaction force, hip extensor, knee extensor moments, and hip adduction angle, while decreasing hip flexion angle during drop vertical jumps. In contrast, repeated jumping on a hard surface increased peak vertical ground reaction force, ankle dorsiflexion angle, and hip extensor moment during drop vertical jumps. [Conclusion] Repeated jumping on the trampoline compared to jumping on a hard surface has different effects on lower limb kinetics and kinematics. Knowledge of these effects may be useful in designing exercise programs for different clinical presentations.

  19. Hard decoding algorithm for optimizing thresholds under general Markovian noise

    Science.gov (United States)

    Chamberland, Christopher; Wallman, Joel; Beale, Stefanie; Laflamme, Raymond

    2017-04-01

    Quantum error correction is instrumental in protecting quantum systems from noise in quantum computing and communication settings. Pauli channels can be efficiently simulated and threshold values for Pauli error rates under a variety of error-correcting codes have been obtained. However, realistic quantum systems can undergo noise processes that differ significantly from Pauli noise. In this paper, we present an efficient hard decoding algorithm for optimizing thresholds and lowering failure rates of an error-correcting code under general completely positive and trace-preserving (i.e., Markovian) noise. We use our hard decoding algorithm to study the performance of several error-correcting codes under various non-Pauli noise models by computing threshold values and failure rates for these codes. We compare the performance of our hard decoding algorithm to decoders optimized for depolarizing noise and show improvements in thresholds and reductions in failure rates by several orders of magnitude. Our hard decoding algorithm can also be adapted to take advantage of a code's non-Pauli transversal gates to further suppress noise. For example, we show that using the transversal gates of the 5-qubit code allows arbitrary rotations around certain axes to be perfectly corrected. Furthermore, we show that Pauli twirling can increase or decrease the threshold depending upon the code properties. Lastly, we show that even if the physical noise model differs slightly from the hypothesized noise model used to determine an optimized decoder, failure rates can still be reduced by applying our hard decoding algorithm.
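The paper's decoder handles general CPTP noise; a standard point of comparison mentioned above is the Pauli twirl, which replaces a channel with Kraus operators {K_k} by the Pauli channel with probabilities p_P = (1/4)·Σ_k |Tr(P·K_k)|². A minimal single-qubit sketch for amplitude damping (the channel choice and γ value are our illustration, not the paper's codes or decoder):

```python
import numpy as np

# Single-qubit Pauli matrices
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli_twirl_probs(kraus_ops):
    """Probabilities of the Pauli-twirled channel:
    p_P = (1/4) * sum_k |Tr(P @ K_k)|^2 for each Pauli P (d = 2)."""
    return {name: sum(abs(np.trace(P @ K)) ** 2 for K in kraus_ops) / 4
            for name, P in (("I", I), ("X", X), ("Y", Y), ("Z", Z))}

# Amplitude damping channel with decay probability gamma (a non-Pauli channel)
gamma = 0.1
K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]], dtype=complex)
K1 = np.array([[0, np.sqrt(gamma)], [0, 0]], dtype=complex)

probs = pauli_twirl_probs([K0, K1])
print(probs)  # p_X == p_Y == gamma/4, and the probabilities sum to 1
```

The twirled channel discards the coherent part of the damping, which is why, as the abstract notes, decoding against the twirl can either over- or under-estimate the true threshold.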

  20. Using hardness to model yield and tensile strength

    Energy Technology Data Exchange (ETDEWEB)

    Hawk, Jeffrey A.; Dogan, Omer N.; Schrems, Karol K.

    2005-02-01

    The current direction in hardness research is towards smaller and smaller loads as nano-scale materials are developed. There remains, however, a need to investigate the mechanical behavior of complex alloys for severe-environment service. In many instances this entails casting large ingots and making numerous tensile samples as the bounds of the operating environment are explored. It is possible to gain an understanding of the tensile strength of these alloys using room and elevated temperature hardness in conjunction with selected tensile tests. The approach outlined here has its roots in the work done by Tabor for metals and low alloy and carbon steels. This research seeks to extend that work to elevated temperatures for multi-phase, complex alloys. A review of the approach will be given, after which the experimental data will be examined. In particular, the yield stress and tensile strength will be compared to their corresponding hardness-based values.
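Tabor's correlation, which the approach above builds on, treats hardness as roughly three times the flow stress; one common empirical refinement (Cahoon-type formulas) adds a strain-hardening correction. The sketch below uses these textbook forms with an assumed strain-hardening exponent n, not the alloy-specific fits of the study:

```python
def yield_strength_from_hv(hv_kgf_mm2, n=0.1):
    """Cahoon-type estimate: sigma_y ~ (H/3) * 0.1**n, with H converted
    from Vickers units (1 kgf/mm^2 = 9.807 MPa). Returns MPa."""
    h_mpa = hv_kgf_mm2 * 9.807
    return (h_mpa / 3.0) * 0.1 ** n

def tensile_strength_from_hv(hv_kgf_mm2, n=0.1):
    """Cahoon-type UTS estimate: UTS ~ (H/2.9) * (n/0.217)**n. Returns MPa."""
    h_mpa = hv_kgf_mm2 * 9.807
    return (h_mpa / 2.9) * (n / 0.217) ** n

# Example: a 300 HV alloy with an assumed strain-hardening exponent of 0.1
print(yield_strength_from_hv(300.0))
print(tensile_strength_from_hv(300.0))
```

These relations are calibrated on metals and simple steels; for the multi-phase alloys in the record above they would only anchor the selected tensile tests, not replace them.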

  1. Broad-band hard X-ray reflectors

    DEFF Research Database (Denmark)

    Joensen, K.D.; Gorenstein, P.; Hoghoj, P.

    1997-01-01

    Interest in optics for hard X-ray broad-band applications is growing. In this paper, we compare the hard X-ray (20-100 keV) reflectivity, obtained with an energy-dispersive reflectometer, of a standard commercial gold thin film with that of a 600-bilayer W/Si X-ray supermirror. The reflectivity of the multilayer is found to agree extraordinarily well with theory (assuming an interface roughness of 4.5 Angstrom), while the agreement for the gold film is less good. The overall performance of the supermirror is superior to that of gold, extending the band of reflection at least a factor of 2.8 beyond that of the gold. Various other design options are discussed, and we conclude that continued interest in the X-ray supermirror for broad-band hard X-ray applications is warranted.

  2. Nanostructural Evolution of Hard Turning Layers in Carburized Steel

    Science.gov (United States)

    Bedekar, Vikram

    The mechanisms of failure for components subjected to contact fatigue are sensitive to the structure and properties of the material surface. Although the bulk material properties are determined by steel making, forming and heat treatment, the near-surface material properties are altered during final material removal processes such as hard turning or grinding. Therefore, the ability to optimize, modulate and predict the near-surface properties during final metal removal operations would be extremely useful in enhancing the service life of a component. Hard machining is known to induce severely deformed layers, causing dramatic microstructural transformations. These transformations occur via grain refinement or thermal phenomena depending upon cutting conditions. The aim of this work is to engineer the near-surface nanoscale structure and properties during hard turning by altering strain, strain rate, temperature and incoming microstructure. The near-surface material transformations due to hard turning were studied on carburized SAE 8620 bearing steel. Variations in parent material microstructure were introduced by altering the retained austenite content. The strain, strain rate and temperature achieved during final metal cutting were altered by varying insert geometry, insert wear and cutting speed. The subsurface evolution was quantified by a series of advanced characterization techniques, such as transmission electron microscopy (TEM), glancing-angle X-ray diffraction (GAXRD), X-ray stress evaluation and nanoindentation, coupled with numerical modeling. Results showed that the grain size of the nanocrystalline near-surface microstructure can be effectively controlled by altering the insert geometry, insert wear, cutting speed and the incoming microstructure.
It was also evident that the near surface retained austenite decreased at lower cutting speed indicating transformation due to plastic deformation, while it increased at higher cutting

  3. Theory of hard diffraction and rapidity gaps

    International Nuclear Information System (INIS)

    Del Duca, V.

    1995-06-01

    In this talk we review the models describing the hard diffractive production of jets or, more generally, high-mass states in the presence of rapidity gaps in hadron-hadron and lepton-hadron collisions. By rapidity gaps we mean regions in the lego plot of (pseudo)rapidity and azimuthal angle where no hadrons are produced, either between the jet(s) and an elastically scattered hadron (single hard diffraction) or between two jets (double hard diffraction). (orig.)

  4. Theory of hard diffraction and rapidity gaps

    International Nuclear Information System (INIS)

    Del Duca, V.

    1996-01-01

    In this talk we review the models describing the hard diffractive production of jets or, more generally, high-mass states in the presence of rapidity gaps in hadron-hadron and lepton-hadron collisions. By rapidity gaps we mean regions in the lego plot of (pseudo)rapidity and azimuthal angle where no hadrons are produced, either between the jet(s) and an elastically scattered hadron (single hard diffraction) or between two jets (double hard diffraction). copyright 1996 American Institute of Physics

  5. Advances in hard nucleus cataract surgery

    Directory of Open Access Journals (Sweden)

    Wei Cui

    2013-11-01

    Safety, perfect vision, and fewer complications are our goals in cataract surgery, and hard nucleus cataract surgery has always been a difficult one. Many new studies indicate that micro-incision phacoemulsification is clearly effective in treating hard nucleus cataract. This article reviews the evolution of hard nucleus cataract surgery and new progress in research on intraocular lenses for micro-incision, and analyses the advantages and disadvantages of various surgical methods.

  6. Nano-hardness estimation by means of Ar+ ion etching

    International Nuclear Information System (INIS)

    Bartali, R.; Micheli, V.; Gottardi, G.; Vaccari, A.; Safeen, M.K.; Laidani, N.

    2015-01-01

    When coatings are at the nanoscale, their mechanical properties cannot be easily estimated by conventional methods because of tip shape, instrument resolution, roughness, and substrate effects. In this paper, we propose a semi-empirical method to evaluate the mechanical properties of thin films based on the sputtering rate induced by Ar+ ion bombardment. The Ar+ ion bombardment was produced by the ion gun of an Auger electron spectroscopy (AES) system. This procedure was applied to a series of coatings with different structures (carbon films) and a series of coatings with different densities (ZnO thin films). The coatings were deposited on silicon substrates by RF sputtering plasma. The results show that, as predicted by Insepov et al., there is a correlation between hardness and sputtering rate. Using reference materials and a simple power-law equation, estimation of the nano-hardness with an Ar+ beam is possible. - Highlights: • ZnO and carbon films were grown on silicon using PVD. • The growth temperature was room temperature. • The hardness of the coatings was estimated by means of nanoindentation. • The resistance of the materials to mechanical damage induced by an Ar+ ion gun (AES) was evaluated. • The hardness was studied and a power law relating it to the erosion rate was found
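The semi-empirical procedure above amounts to a power-law calibration: fit H = a·r^b through reference materials of known hardness, then invert the measured erosion rate r of an unknown film. The reference points and rates below are hypothetical placeholders:

```python
import math

def fit_power_law(r1, h1, r2, h2):
    """Solve H = a * r**b through two reference points (a line in log-log space)."""
    b = math.log(h2 / h1) / math.log(r2 / r1)
    a = h1 / r1 ** b
    return a, b

def hardness_from_rate(rate, a, b):
    """Estimate hardness of an unknown film from its measured erosion rate."""
    return a * rate ** b

# Hypothetical reference materials: (erosion rate in nm/min, hardness in GPa);
# a harder film erodes more slowly, so b comes out negative.
a, b = fit_power_law(10.0, 20.0, 40.0, 5.0)
print(hardness_from_rate(20.0, a, b))   # estimated hardness of an unknown film
```

In practice more than two reference materials would be used and the fit done by least squares on log-transformed data; two points are enough to show the inversion.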

  7. Use of mathematic modeling to compare and predict hemodynamic effects of the modified Blalock-Taussig and right ventricle-pulmonary artery shunts for hypoplastic left heart syndrome.

    Science.gov (United States)

    Bove, Edward L; Migliavacca, Francesco; de Leval, Marc R; Balossino, Rossella; Pennati, Giancarlo; Lloyd, Thomas R; Khambadkone, Sachin; Hsia, Tain-Yen; Dubini, Gabriele

    2008-08-01

    Stage one reconstruction (Norwood operation) for hypoplastic left heart syndrome can be performed with either a modified Blalock-Taussig shunt or a right ventricle-pulmonary artery shunt. Both methods have certain inherent characteristics. It is postulated that mathematic modeling could help elucidate these differences. Three-dimensional computer models of the Blalock-Taussig shunt and right ventricle-pulmonary artery shunt modifications of the Norwood operation were developed by using the finite volume method. Conduits of 3, 3.5, and 4 mm were used in the Blalock-Taussig shunt model, whereas conduits of 4, 5, and 6 mm were used in the right ventricle-pulmonary artery shunt model. The hydraulic nets (lumped resistances, compliances, inertances, and elastances) were identical in the 2 models. A multiscale approach was adopted to couple the 3-dimensional models with the circulation net. Computer simulations were compared with postoperative catheterization data. Good correlation was found between predicted and observed data. For the right ventricle-pulmonary artery shunt modification, there was higher aortic diastolic pressure, decreased pulmonary artery pressure, lower Qp/Qs ratio, and higher coronary perfusion pressure. Mathematic modeling predicted minimal regurgitant flow in the right ventricle-pulmonary artery shunt model, which correlated with postoperative Doppler measurements. The right ventricle-pulmonary artery shunt demonstrated lower stroke work and a higher mechanical efficiency (stroke work/total mechanical energy). The close correlation between predicted and observed data supports the use of mathematic modeling in the design and assessment of surgical procedures. The potentially damaging effects of a systemic ventriculotomy in the right ventricle-pulmonary artery shunt modification of the Norwood operation have not been analyzed.
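The "hydraulic net" of lumped resistances and compliances coupled to the 3-D shunt models above can be illustrated with its simplest element: a two-element Windkessel compartment obeying dP/dt = (Q_in − P/R)/C, integrated here by forward Euler. All parameter values are hypothetical placeholders, not those of the Norwood simulations:

```python
def simulate_windkessel(q_in, R, C, p0=60.0, dt=1e-3, steps=2000):
    """Two-element Windkessel: a compliance C (mL/mmHg) charged by a constant
    inflow q_in (mL/s) and drained through a resistance R (mmHg*s/mL).
    Returns the pressure trace (mmHg) over steps*dt seconds."""
    p = p0
    trace = []
    for _ in range(steps):
        dpdt = (q_in - p / R) / C   # mass balance on the compliant compartment
        p += dpdt * dt              # forward Euler step
        trace.append(p)
    return trace

# Hypothetical values: pressure relaxes toward the steady state q_in * R = 96 mmHg
trace = simulate_windkessel(q_in=80.0, R=1.2, C=1.5)
print(trace[-1])
```

The multiscale models in the record chain many such compartments (with inertances and time-varying elastances for the ventricle) to the 3-D finite-volume shunt domains.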

  8. Comparative utility of the BESTest, mini-BESTest, and brief-BESTest for predicting falls in individuals with Parkinson disease: a cohort study.

    Science.gov (United States)

    Duncan, Ryan P; Leddy, Abigail L; Cavanaugh, James T; Dibble, Leland E; Ellis, Terry D; Ford, Matthew P; Foreman, K Bo; Earhart, Gammon M

    2013-04-01

    The newly developed brief balance evaluation systems test (brief-BESTest) may be useful for measuring balance and predicting falls in individuals with Parkinson disease (PD). The purposes of this study were: (1) to describe the balance performance of those with PD using the brief-BESTest, (2) to determine the relationships among the scores derived from the 3 versions of the BESTest (i.e., full BESTest, mini-BESTest, and brief-BESTest), and (3) to compare the accuracy of the brief-BESTest with that of the mini-BESTest and BESTest in identifying recurrent fallers among people with PD. This was a prospective cohort study. Eighty participants with PD completed a baseline balance assessment. All participants reported a fall history during the previous 6 months. Fall history was again collected 6 months (n=51) and 12 months (n=40) later. At baseline, participants had varying levels of balance impairment, and brief-BESTest scores were significantly correlated with mini-BESTest and BESTest scores (r=.94, P<.001). Retrospective fall prediction accuracy of the brief-BESTest was moderately high (area under the curve [AUC]=0.82, sensitivity=0.76, and specificity=0.84). Prospective fall prediction accuracy over 6 months was similarly accurate (AUC=0.88, sensitivity=0.71, and specificity=0.87), but was less sensitive over 12 months (AUC=0.76, sensitivity=0.53, and specificity=0.93). The sample included primarily individuals with mild to moderate PD. Also, there was a moderate dropout rate at 6 and 12 months. All versions of the BESTest were reasonably accurate in identifying future recurrent fallers, especially during the 6 months following assessment. Clinicians can reasonably rely on the brief-BESTest for predicting falls, particularly when time and equipment constraints are of concern.

  9. Prediction of bending moment resistance of screw connected joints in plywood members using regression models and comparison with commercial medium density fiberboard (MDF) and particleboard

    Directory of Open Access Journals (Sweden)

    Sadegh Maleki

    2014-11-01

    The study aimed at predicting the bending moment resistance of screwed joints (coarse and fine thread) in plywood members using regression models. The member thickness was 19 mm, and results were compared with medium density fiberboard (MDF) and particleboard of 18 mm thickness. Two types of screws were used: coarse and fine thread drywall screws with nominal diameters of 6, 8 and 10 mm and lengths of 3.5, 4 and 5 cm, respectively, and sheet metal screws with diameters of 8 and 10 mm and a length of 4 cm. The results show that the bending moment resistance of the screwed joints increased with increasing screw diameter and penetration depth. Screw length was found to have a larger influence on bending moment resistance than screw diameter. Bending moment resistance with coarse thread drywall screws was higher than with fine thread drywall screws. The highest bending moment resistance (71.76 N·m) was observed in joints made with coarse screws of 5 mm diameter and 28 mm penetration depth. The lowest bending moment resistance (12.08 N·m) was observed in joints with fine screws of 3.5 mm diameter and 9 mm penetration. Furthermore, bending moment resistance in plywood was higher than in medium density fiberboard (MDF) and particleboard. Finally, it was found that the ultimate bending moment resistance of a plywood joint can be predicted by the formula Wc = 0.189×D^0.726×P^0.577 for coarse thread drywall screws and Wf = 0.086×D^0.942×P^0.704 for fine ones, where D is the diameter and P the penetration depth. Analysis of variance of the experimental and predicted data showed that the developed models provide a fair approximation of the actual experimental measurements.
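The reported regression equations can be transcribed directly; this sketch assumes D is the screw diameter and P the penetration depth in the units used by the study, and, being empirical fits, the formulas should not be extrapolated outside the tested ranges:

```python
def bending_moment_coarse(D, P):
    """Fitted model for coarse thread drywall screws: Wc = 0.189 * D^0.726 * P^0.577."""
    return 0.189 * D ** 0.726 * P ** 0.577

def bending_moment_fine(D, P):
    """Fitted model for fine thread drywall screws: Wf = 0.086 * D^0.942 * P^0.704."""
    return 0.086 * D ** 0.942 * P ** 0.704

# Example: the study's strongest configuration, a 5 mm coarse screw
# at 28 mm penetration depth
print(bending_moment_coarse(5.0, 28.0))
print(bending_moment_fine(5.0, 28.0))
```

Both fits are monotonically increasing in D and P, consistent with the study's observation that resistance grows with diameter and penetration depth.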

  10. Prediction of GWL with the help of GRACE TWS for unevenly spaced time series data in India : Analysis of comparative performances of SVR, ANN and LRM

    Science.gov (United States)

    Mukherjee, Amritendu; Ramachandran, Parthasarathy

    2018-03-01

    Prediction of Ground Water Level (GWL) is extremely important for sustainable use and management of ground water resources. The motivation for this work is to understand the relationship between Gravity Recovery and Climate Experiment (GRACE) derived terrestrial water storage change (ΔTWS) data and GWL, so that ΔTWS could be used as a proxy measurement for GWL. In our study, we selected five observation wells from different geographic regions in India. The datasets are unevenly spaced time series, which restricts us from applying standard time series methodologies; therefore, in order to model and predict GWL with the help of ΔTWS, we built a Linear Regression Model (LRM), Support Vector Regression (SVR) and an Artificial Neural Network (ANN). Comparative performances of LRM, SVR and ANN were evaluated with the help of the correlation coefficient (ρ) and Root Mean Square Error (RMSE) between the actual and fitted (for the training dataset) or predicted (for the test dataset) values of GWL. We observed that ΔTWS is a highly significant variable for modelling GWL, and the amount of total variation in GWL that could be explained with the help of ΔTWS varies from 36.48% to 74.28% (0.3648 ≤ R² ≤ 0.7428). We found that for the model GWL ~ ΔTWS, for both the training and test datasets, the performances of SVR and ANN are better than that of LRM in terms of ρ and RMSE. We also found that with the inclusion of meteorological variables along with ΔTWS as input parameters to model GWL, the performance of SVR improves and it performs better than ANN. These results imply that for modelling irregular time series of GWL data, ΔTWS could be very useful.
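The comparison criteria used above, the correlation coefficient ρ and RMSE between actual and predicted GWL, can be sketched in a few lines of pure Python; the well levels and model outputs below are invented illustrative numbers, not data from the study:

```python
import math

def rmse(actual, predicted):
    """Root mean square error between paired observations."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def pearson(actual, predicted):
    """Pearson correlation coefficient rho."""
    n = len(actual)
    ma, mp = sum(actual) / n, sum(predicted) / n
    cov = sum((a - ma) * (p - mp) for a, p in zip(actual, predicted))
    sa = math.sqrt(sum((a - ma) ** 2 for a in actual))
    sp = math.sqrt(sum((p - mp) ** 2 for p in predicted))
    return cov / (sa * sp)

# Illustrative GWL values (m) and two hypothetical model outputs
actual  = [5.1, 5.4, 6.0, 6.3, 5.8]
model_a = [5.0, 5.5, 5.9, 6.4, 5.7]   # tighter fit (SVR-like)
model_b = [5.6, 5.2, 6.1, 5.9, 6.2]   # looser fit (weak linear model)
print(rmse(actual, model_a), pearson(actual, model_a))
print(rmse(actual, model_b), pearson(actual, model_b))
```

A better model shows a lower RMSE together with a higher ρ, which is exactly how the study ranks LRM, SVR and ANN.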

  11. Comparing between predicted output temperature of flat-plate solar collector and experimental results: computational fluid dynamics and artificial neural network

    Directory of Open Access Journals (Sweden)

    F Nadi

    2017-05-01

    Introduction: Solar energy is of great importance as a renewable energy source, clean and without damage to the environment, for the production of electricity and heat. Furthermore, due to the oil crisis, as well as its potential to reduce home heating costs by 70%, solar energy has been a favorite of many researchers over the past two decades. Solar collectors are devices for collecting solar radiant energy, through which this energy is converted into heat that is then transferred to a fluid (usually air or water). Therefore, a key component in the performance improvement of a solar heating system is optimization of the solar collector under different testing conditions. However, estimation of output parameters under different testing conditions is costly, time consuming and often impossible. As a result, smart use of neural networks as well as CFD (computational fluid dynamics) to predict the properties that yield the desired output is valuable. To the best of our knowledge, there are no studies that compare experimental results with both CFD and ANN predictions. Materials and Methods: A corrugated galvanized iron sheet of 2 m length, 1 m width and 0.5 mm thickness was used as an absorber plate for absorbing the incident solar radiation (Figs. 1 and 2). Corrugations in the absorber caused turbulent air flow and improved the heat transfer coefficient. The K-ε turbulence model was used for the computational fluid dynamics simulation. The following assumptions are made in the analysis: (1) air is a continuous medium and incompressible; (2) the flow is steady and possesses turbulent flow characteristics, due to the high flow velocity; (3) the thermal-physical properties of the absorber sheet and the absorber tube are constant with respect to the operating temperature; (4) the bottom side of the absorber tube and the absorber plate are assumed to be adiabatic.
Artificial neural network In this research a one-hidden-layer feed-forward network based on the

  12. Comparative proteomics of cerebrospinal fluid reveals a predictive model for differential diagnosis of pneumococcal, meningococcal, and enteroviral meningitis, and novel putative therapeutic targets

    Science.gov (United States)

    2015-01-01

    Background Meningitis is the inflammation of the meninges in response to infection or chemical agents. While aseptic meningitis, most frequently caused by enteroviruses, is usually benign with a self-limiting course, bacterial meningitis remains associated with high morbidity and mortality rates, despite advances in antimicrobial therapy and intensive care. Fast and accurate differential diagnosis is crucial for assertive choice of the appropriate therapeutic approach for each form of meningitis. Methods We used 2D-PAGE and mass spectrometry to identify the cerebrospinal fluid proteome specifically related to the host response to pneumococcal, meningococcal, and enteroviral meningitis. The disease-specific proteome signatures were inspected by pathway analysis. Results Unique cerebrospinal fluid proteome signatures were found to the three aetiological forms of meningitis investigated, and a qualitative predictive model with four protein markers was developed for the differential diagnosis of these diseases. Nevertheless, pathway analysis of the disease-specific proteomes unveiled that Kallikrein-kinin system may play a crucial role in the pathophysiological mechanisms leading to brain damage in bacterial meningitis. Proteins taking part in this cellular process are proposed as putative targets to novel adjunctive therapies. Conclusions Comparative proteomics of cerebrospinal fluid disclosed candidate biomarkers, which were combined in a qualitative and sequential predictive model with potential to improve the differential diagnosis of pneumococcal, meningococcal and enteroviral meningitis. Moreover, we present the first evidence of the possible implication of Kallikrein-kinin system in the pathophysiology of bacterial meningitis. PMID:26040285

  13. The predictive value of mental health for long-term sickness absence: the Major Depression Inventory (MDI) and the Mental Health Inventory (MHI-5) compared.

    Science.gov (United States)

    Thorsen, Sannie Vester; Rugulies, Reiner; Hjarsbech, Pernille U; Bjorner, Jakob Bue

    2013-09-17

    Questionnaires are valuable for population surveys of mental health, but different survey instruments may give different results. The present study compares two mental health instruments, the Major Depression Inventory (MDI) and the Mental Health Inventory (MHI-5), with regard to their prediction of long-term sickness absence. Questionnaire data were collected from N = 4153 Danish employees. The questionnaire included the MDI and the MHI-5. Information on long-term sickness absence was obtained from a register. We used Cox regression to calculate covariate-adjusted hazard ratios for long-term sickness absence for both measures. Both the MDI and the MHI-5 were highly significant predictors of long-term sickness absence. A one standard deviation change in score was associated with an increased risk of long-term sickness absence of 27% for the MDI and 37% for the MHI-5. When both measures were included in the same analysis, the MHI-5 performed better. In general population surveys, the MHI-5 is a better predictor of long-term sickness absence than the MDI.
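Under the Cox proportional hazards model used above, a hazard ratio per one standard deviation scales multiplicatively with the size of the score difference: a k-SD difference corresponds to HR^k = exp(k·β). A small sketch using the study's point estimates (1.27 for the MDI, 1.37 for the MHI-5):

```python
import math

def hazard_ratio(hr_per_sd, delta_sd):
    """HR for a delta_sd change in score: exp(beta * delta_sd),
    where beta = ln(hr_per_sd) is the Cox regression coefficient."""
    return math.exp(math.log(hr_per_sd) * delta_sd)

# Point estimates from the study: +27% per SD (MDI), +37% per SD (MHI-5)
print(hazard_ratio(1.27, 2.0))  # hazard ratio for a 2-SD worse MDI score
print(hazard_ratio(1.37, 2.0))  # hazard ratio for a 2-SD worse MHI-5 score
```

The multiplicative scaling is a property of the model itself; whether it holds over large score differences depends on the proportional hazards assumption being satisfied.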

  14. Hard-hard coupling assisted anomalous magnetoresistance effect in amine-ended single-molecule magnetic junction

    Science.gov (United States)

    Tang, Y.-H.; Lin, C.-J.; Chiang, K.-R.

    2017-06-01

    We propose a single-molecule magnetic junction (SMMJ) composed of a dissociated amine-ended benzene molecule sandwiched between two Co tip-like nanowires. To better simulate the break-junction technique used for real SMMJs, first-principles calculations accounting for the hard-hard coupling between the amine linker and the Co tip atom are carried out for SMMJs under mechanical strain and an external bias. We predict an anomalous magnetoresistance (MR) effect, including strain-induced sign reversal and bias-induced enhancement of the MR value, in sharp contrast to the normal MR effect in conventional magnetic tunnel junctions. The underlying mechanism is the interplay between the four spin-polarized currents in the parallel and antiparallel magnetic configurations, originating from the pronounced spin-up transmission feature in the parallel case and spiky transmission peaks in the other three spin-polarized channels. These intriguing findings may open a new arena in which magnetotransport and hard-hard coupling are closely linked in SMMJs and can be dually controlled either via mechanical strain or by an external bias.
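For context, the MR value discussed above is conventionally computed from the junction currents in the parallel (P) and antiparallel (AP) configurations as MR = (I_P − I_AP)/I_AP, so a strain-induced sign reversal simply means I_P falls below I_AP. A minimal sketch with hypothetical spin-resolved currents (not values from the calculation):

```python
def mr_ratio(i_parallel, i_antiparallel):
    """Optimistic MR definition: (I_P - I_AP) / I_AP; negative => inverse MR."""
    return (i_parallel - i_antiparallel) / i_antiparallel

# Hypothetical spin-resolved currents (arbitrary units): I = I_up + I_down
i_p  = 0.9 + 0.1   # parallel: dominated by the spin-up channel
i_ap = 0.3 + 0.3   # antiparallel: two comparable, spiky channels
print(mr_ratio(i_p, i_ap))    # positive (normal) MR
print(mr_ratio(0.4, 0.6))     # strained case: sign-reversed (inverse) MR
```

The "pessimistic" variant divides by I_P instead; either way the sign flips when the parallel current drops below the antiparallel one.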

  15. Nucleon fragmentation into baryons in proton-nucleon interactions at 19 GeV/c compared with some quark-parton model predictions

    Energy Technology Data Exchange (ETDEWEB)

    Bakken, V.; Breivik, F.O.; Jacobsen, T. (Oslo Univ. (Norway). Fysisk Inst.)

    1983-06-21

    We present some new data on baryon production in pn interactions at 19 GeV/c obtained in a bubble chamber experiment. We determine the longitudinal-momentum spectra dσ/dx of the baryon in the reactions pn→p_F+X, pn→p_B+X, pn→Δ_F^++(1232)+X and pn→Δ_B^++(1232)+X, where F (B) labels the forward (backward) c.m. hemisphere. The spectra of p_F and p_B are also given with the effects of diffraction and Δ^++(1232) resonance production subtracted. These data, together with dσ/dx of pp→Λ^0+X at the same beam momentum, are compared with the predictions of some quark-parton models. Particle multiplicities of nucleons, Δ^++(1232) and hyperons are found to be incompatible with the probabilistic quark model of Van Hove.

  16. On Verifying Currents and Other Features in the Hawaiian Islands Region Using Fully Coupled Ocean/Atmosphere Mesoscale Prediction System Compared to Global Ocean Model and Ocean Observations

    Science.gov (United States)

    Jessen, P. G.; Chen, S.

    2014-12-01

    This poster introduces and evaluates ocean features in the Hawaii, USA region using the U.S. Navy's fully Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS-OS™) coupled to the Navy Coastal Ocean Model (NCOM). It also outlines some challenges in verifying ocean currents in the open ocean. The system is evaluated using in situ ocean data and initial forcing fields from the operational global Hybrid Coordinate Ocean Model (HYCOM). Verification shows difficulties in modelling the downstream currents off the Hawaiian islands (Hawaii's wake). Comparing HYCOM to NCOM current fields shows some displacement of small features such as eddies. Generally, there is fair agreement between HYCOM and NCOM in the salinity and temperature fields, and good agreement in the SSH fields.

  17. Radiation hardness studies for DEPFETs in Belle II

    International Nuclear Information System (INIS)

    Ritter, Andreas

    2014-01-01

    The study of CP violation requires dedicated detectors and accelerators. At KEK, the High Energy Accelerator Research Organization located in Tsukuba, Japan, an upgrade of the present accelerator KEKB and its detector is in progress. For this new Belle II detector, a new vertex system will be installed, consisting of a silicon strip detector (SVD) and a pixel detector (PXD). The PXD comprises eight million pixels, each of them a Depleted p-channel Field Effect Transistor (DEPFET). During the operation of Belle II, various machine- as well as luminosity-related background processes affect the device performance of the DEPFET through radiation damage. As a Metal-Oxide-Semiconductor (MOS) device, the DEPFET is affected by ionizing radiation damage as well as by damage to the silicon bulk itself. The major part of the radiation damage has its origin in the creation of electrons and positrons near the interaction point. Therefore, the hardness factor of electrons of the relevant energy was investigated in this work. With this quantity the damage caused by electrons could be compared to the damage inflicted by neutrons. Neutron irradiations were performed with DEPFETs and related silicon material. The effects of leakage current increase and type inversion were studied. As the electron hardness investigation indicates, the bulk damage done to the DEPFET is small in comparison to the impact on the silicon dioxide layer of the device. Ionizing radiation results in a build-up of oxide charge, thus changing the device characteristics. In particular, the threshold voltage of the DEPFET is shifted to more negative values. This shift has to be compensated during the operation of Belle II and is limited by device and system constraints, thus an overall small shift is desired. The changes in the device characteristics were investigated for the two gate electrodes of the DEPFET with respect to their biasing and production related issues. With an additional layer of silicon nitride and a

  18. 30 CFR 75.1720-1 - Distinctively colored hard hats, or hard caps; identification for newly employed, inexperienced...

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Distinctively colored hard hats, or hard caps... STANDARDS-UNDERGROUND COAL MINES Miscellaneous § 75.1720-1 Distinctively colored hard hats, or hard caps; identification for newly employed, inexperienced miners. Hard hats or hard caps distinctively different in color...

  19. 30 CFR 77.1710-1 - Distinctively colored hard hats or hard caps; identification for newly employed, inexperienced...

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Distinctively colored hard hats or hard caps... Distinctively colored hard hats or hard caps; identification for newly employed, inexperienced miners. Hard hats or hard caps distinctively different in color from those worn by experienced miners shall be worn at...

  20. Hard disks with SCSI interface

    CERN Document Server

    Denisov, O Yu

    1999-01-01

    The testing of 20 models of SCSI hard disks is carried out: the Fujitsu MAE3091LP; the IBM DDRS-39130, DGHS-318220, DNES-318350, DRHS-36V and DRVS-18V; the Quantum Atlas VI 18.2 and Viking II 9.1; the Seagate ST118202LW, ST118273LW, ST118273W, ST318203LW, ST318275LW, ST34520W, ST39140LW and ST39173W; and the Western Digital WDE9100-0007, WDE9100-AV0016, WDE9100-AV0030 and WDE9180-0048. All tests ran under the Windows NT 4.0 Workstation operating system with Service Pack 4, in a video mode with 1024x768 pixel resolution, 32-bit colour depth and a vertical refresh frequency of 85 Hz. A detailed description and the characteristics of the SCSI drives are presented. Test results (ZD Winstone 99 and ZD WinBench 99 tests) are given in both table and diagram (disk transfer rate) form. (0 refs).

  1. Development of a hard microcontroller

    International Nuclear Information System (INIS)

    Measel, P.R.; Sivo, L.L.; Quilitz, W.E.; Davidson, T.K.

    1976-01-01

    The applicability of commercially available microprocessors to certain systems requiring radiation survival was assessed. A microcontroller was designed and built to perform a monitor and control function for military operational ground equipment, and was demonstrated to exceed the radiation hardness goal. The preparation of the microcontroller module required hardware and software design, selection of LSI and other piece-part types, development of piece-part and module electrical and radiation test techniques, and the performance of radiation tests on the LSI piece parts and the completed module. The microcontroller has a 16-bit central processor unit, a 4096-word read-only memory, and a 256-word read-write memory. The module has circumvention circuitry, including a PIN diode radiation detector. The processor device used was the MMI 6701 T2L Schottky bipolar 4-bit slice. Electrical exerciser circuits were developed for in-situ electrical testing of microprocessors and memories during irradiation. A test program was developed for a Teradyne J283 microcircuit tester for more complete electrical characterization of the MMI 6701 microprocessor. A simple self-test algorithm was used in the microcontroller for performance testing during irradiation. For the operational demonstration of the microcontroller, a TI 960A minicomputer was used to provide the required complex inputs to the module and verify the module outputs.

  2. Risk prediction in stable cardiovascular disease using a high-sensitivity cardiac troponin T single biomarker strategy compared to the ESC-SCORE.

    Science.gov (United States)

    Biener, Moritz; Giannitsis, Evangelos; Kuhner, Manuel; Zelniker, Thomas; Mueller-Hennessen, Matthias; Vafaie, Mehrshad; Stoyanov, Kiril M; Neumann, Franz-Josef; Katus, Hugo A; Hochholzer, Willibald; Valina, Christian Marc

    2018-01-01

    To evaluate the prognostic performance of high-sensitivity cardiac troponin T (hs-cTnT) compared with the ESC-SCORE. We included low-risk outpatients with stable cardiovascular (CV) disease, categorised by the need for non-secondary or secondary prevention. The prognostication of hs-cTnT at the index visit was compared with the European Society of Cardiology-Systematic COronary Risk Evaluation (ESC-SCORE) with respect to all-cause mortality (ACM) and two composite endpoints (ACM, acute myocardial infarction (AMI) and stroke; and ACM, AMI, stroke and rehospitalisation for acute coronary syndrome (ACS) and decompensated heart failure (DHF)). Within a median follow-up of 796 days, a total of 16 deaths, 32 composite endpoints of ACM, AMI and stroke and 83 composite endpoints of ACM, AMI, stroke, rehospitalisation for ACS and DHF were observed among 693 stable low-risk outpatients. Using C-statistics, measurement of hs-cTnT alone outperformed the ESC-SCORE for the prediction of ACM in the entire study population (Δarea under the curve (AUC) 0.221, p=0.0039) and both prevention groups (non-secondary: ΔAUC 0.164, p=0.0208; secondary: ΔAUC 0.264, p=0.0134). For the prediction of all other secondary endpoints, hs-cTnT was at least as effective as the ESC-SCORE, both in secondary and non-secondary prevention. Using continuous and categorical net reclassification improvement and integrated discrimination improvement, hs-cTnT significantly improved reclassification regarding all endpoints in the entire population and in the secondary prevention cohort. In non-secondary prevention, hs-cTnT improved reclassification only for ACM. The results were confirmed in an independent external cohort of 2046 patients. Hs-cTnT is superior to the multivariable ESC-SCORE for the prediction of ACM and a composite endpoint in stable outpatients with and without relevant CV disease. NCT01954303; Pre-results.
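    The C-statistic comparison reported above can be reproduced for any single continuous marker via the Mann-Whitney formulation of the AUC: the probability that a randomly chosen patient with the event has the higher marker value. A stdlib-only sketch with hypothetical hs-cTnT values (the data are ours, not the study's):

    ```python
    def c_statistic(marker_events, marker_nonevents):
        """C-statistic (AUC) via the Mann-Whitney U statistic.

        Counts pairwise comparisons between event and non-event patients;
        ties contribute half a win.
        """
        wins = ties = 0
        for e in marker_events:
            for n in marker_nonevents:
                if e > n:
                    wins += 1
                elif e == n:
                    ties += 1
        return (wins + 0.5 * ties) / (len(marker_events) * len(marker_nonevents))

    # Hypothetical hs-cTnT values (ng/L) for patients with / without the endpoint.
    died = [25.0, 14.0, 31.0]
    survived = [5.0, 8.0, 14.0, 6.0]
    print(c_statistic(died, survived))  # ≈ 0.958
    ```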

  3. A production throughput forecasting system in an automated hard disk drive test operation using GRNN

    Energy Technology Data Exchange (ETDEWEB)

    Samattapapong, N.; Afzulpurkar, N.

    2016-07-01

    The goal of this paper is to develop a pragmatic production throughput forecasting system for an automated test operation in a hard drive manufacturing plant. Accurate forecasts are necessary for the management team to respond to any changes in the production processes and resource allocations. The proposed system consists of three main stages. In the first stage, a mutual information method is adopted to select the relevant inputs for the forecasting model. In the second stage, a generalized regression neural network (GRNN) is implemented in the forecasting model development phase. Finally, forecasting accuracy is improved by searching for the optimal smoothing parameter, selected by comparing three optimization algorithms: particle swarm optimization (PSO), unrestricted search optimization (USO) and interval halving optimization (IHO). The experimental results show that (1) the developed production throughput forecasting system using GRNN is able to provide forecasts close to actual values and to project future trends of production throughput in an automated hard disk drive test operation; (2) the IHO algorithm is the most appropriate optimization method of the three; and (3) compared with the current forecasting system in manufacturing, the proposed system is superior in prediction accuracy and suitable for real-world application. Production throughput volume is a key performance index of hard disk drive manufacturing systems that needs to be forecast, because the forecasting result is useful information for the management team in responding to any change in production processes and resource allocation. However, a practical forecasting system for
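    A GRNN is, in essence, Nadaraya-Watson kernel regression: the forecast is a distance-weighted average of the training targets, and the Gaussian kernel width is the smoothing parameter that the paper tunes with PSO, USO, or IHO. A stdlib-only sketch under that reading (the toy throughput data are hypothetical):

    ```python
    import math

    def grnn_predict(x_train, y_train, x_query, sigma):
        """GRNN forecast: Gaussian-kernel weighted average of training targets.

        sigma is the smoothing parameter; tuning it (e.g. by interval halving)
        trades off local fidelity against smoothness.
        """
        weights = [
            math.exp(-sum((a - b) ** 2 for a, b in zip(x, x_query)) / (2 * sigma ** 2))
            for x in x_train
        ]
        return sum(w * y for w, y in zip(weights, y_train)) / sum(weights)

    # Toy throughput history (hypothetical): feature = shift index, target = units tested.
    x_train = [(1.0,), (2.0,), (3.0,)]
    y_train = [100.0, 110.0, 120.0]
    print(grnn_predict(x_train, y_train, (2.0,), sigma=0.5))  # ≈ 110.0 by symmetry
    ```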

  4. [Hepatic transit times and liver elasticity compared with MELD in predicting a 1-year adverse clinical outcome of clinically diagnosed cirrhosis].

    Science.gov (United States)

    Koller, Tomáš; Piešťanská, Zuzana; Hlavatý, Tibor; Holomáň, Jozef; Glasa, Jozef; Payer, Juraj

    Hepatic transit times measured by contrast-enhanced ultrasonography and liver elasticity have been found to predict clinically significant portal hypertension. However, these modalities were not yet sufficiently evaluated for predicting an adverse clinical outcome in patients with clinically diagnosed cirrhosis (D'Amico stages > 1), who have clinically significant portal hypertension. The aim of our study was to assess the predictive power of the hepatic transit times and liver elasticity for an adverse clinical outcome of clinically diagnosed cirrhosis, compared with the MELD score. The study group included 48 consecutive outpatients with cirrhosis in D'Amico stages 2, 3 and 4. Patients in stage 4 could have jaundice; patients with other complications of portal hypertension were excluded. Transit times were measured in seconds from the intravenous administration of the contrast agent (SonoVue) to the signal appearance in a hepatic vein (hepatic vein arrival time, HVAT), or as the time difference between the contrast signal in the hepatic artery and the hepatic vein (hepatic transit time, HTT). Elasticity was measured using transient elastography (Fibroscan). The transit times and elasticity were measured at baseline and patients were followed up for 1 year. An adverse outcome of cirrhosis was defined as the appearance of clinically apparent ascites and/or hospitalization for liver disease and/or death within 1 year. The mean age was 61 years, with a female/male ratio of 23/25. At baseline, the median Child-Pugh score was 5 (IQR 5.0-6.0), MELD 9.5 (IQR 7.6 to 12.1), median HVAT 22 s (IQR 19-25) and HTT 6 s (IQR 5-9). HTT and HVAT negatively correlated with the Child-Pugh (-0.351 and -0.441, p = 0.002) and MELD (-0.479 and -0.388, p = 0.006) scores. The adverse outcome at 1 year was observed in 11 cases (22.9 %), including 6 deaths and 5 hospitalizations. Median HVAT in those with/without the adverse outcome was 20 s (IQR 19.3-23.5) compared with 22 s (IQR 19-26, p

  5. Hard QCD at hadron colliders

    Energy Technology Data Exchange (ETDEWEB)

    Moch, S

    2008-02-15

    We review the status of QCD at hadron colliders with emphasis on precision predictions and the latest theoretical developments in cross-section calculations to higher orders. We include an overview of our current information on parton distributions and discuss various Standard Model reactions such as W^±/Z-boson, Higgs boson or top quark production. (orig.)

  6. Hard QCD at hadron colliders

    International Nuclear Information System (INIS)

    Moch, S.

    2008-02-01

    We review the status of QCD at hadron colliders with emphasis on precision predictions and the latest theoretical developments in cross-section calculations to higher orders. We include an overview of our current information on parton distributions and discuss various Standard Model reactions such as W^±/Z-boson, Higgs boson or top quark production. (orig.)

  7. Nanomechanics of hard films on compliant substrates.

    Energy Technology Data Exchange (ETDEWEB)

    Reedy, Earl David, Jr. (Sandia National Laboratories, Albuquerque, NM); Emerson, John Allen (Sandia National Laboratories, Albuquerque, NM); Bahr, David F. (Washington State University, Pullman, WA); Moody, Neville Reid; Zhou, Xiao Wang; Hales, Lucas (University of Minnesota, Minneapolis, MN); Adams, David Price (Sandia National Laboratories, Albuquerque, NM); Yeager,John (Washington State University, Pullman, WA); Nyugen, Thao D. (Johns Hopkins University, Baltimore, MD); Corona, Edmundo (Sandia National Laboratories, Albuquerque, NM); Kennedy, Marian S. (Clemson University, Clemson, SC); Cordill, Megan J. (Erich Schmid Institute, Leoben, Austria)

    2009-09-01

    Development of flexible thin film systems for biomedical, homeland security and environmental sensing applications has increased dramatically in recent years [1,2,3,4]. These systems typically combine traditional semiconductor technology with new flexible substrates, allowing for both the high electron mobility of semiconductors and the flexibility of polymers. The devices have the ability to be easily integrated into components and show promise for advanced design concepts, ranging from innovative microelectronics to MEMS and NEMS devices. These devices often contain layers of thin polymer, ceramic and metallic films where differing properties can lead to large residual stresses [5]. As long as the films remain substrate-bonded, they may deform far beyond their freestanding counterpart. Once debonded, substrate constraint disappears, leading to film failure, where compressive stresses can lead to wrinkling, delamination, and buckling [6,7,8] while tensile stresses can lead to film fracture and decohesion [9,10,11]. In all cases, performance depends on film adhesion. Experimentally it is difficult to measure adhesion. It is often studied using tape [12], pull off [13,14,15], and peel tests [16,17]. More recent techniques for measuring adhesion include scratch testing [18,19,20,21], four point bending [22,23,24], indentation [25,26,27], spontaneous blisters [28,29] and stressed overlayers [7,26,30,31,32,33]. Nevertheless, sample design and test techniques must be tailored for each system. There is a large body of elastic thin film fracture and elastic contact mechanics solutions for elastic films on rigid substrates in the published literature [5,7,34,35,36]. More recent work has extended these solutions to films on compliant substrates and shows that increasing compliance markedly changes fracture energies compared with the rigid elastic solution results [37,38].
However, the introduction of inelastic substrate response significantly complicates the problem [10,39,40]. As

  8. Complex technique for materials hardness measurement

    Energy Technology Data Exchange (ETDEWEB)

    Krashchenko, V P; Oksametnaya, O B

    1984-01-01

    A review of existing methods of measuring material hardness in national and foreign practice is presented. The need to improve hardness-measurement techniques over a wide temperature range is noted, ensuring control of the load during indentation, continuity of imprint application, smooth variation of temperature along the sample length, and control of the deformation rate.

  9. Hard scattering and a diffractive trigger

    International Nuclear Information System (INIS)

    Berger, E.L.; Collins, J.C.; Soper, D.E.; Sterman, G.

    1986-02-01

    Conclusions concerning the properties of hard scattering in diffractively produced systems are summarized. One motivation for studying diffractive hard scattering is to investigate the interface between Regge theory and perturbative QCD. Another is to see whether diffractive triggering can result in an improvement in the signal-to-background ratio of measurements of production of very heavy quarks. 5 refs

  10. ERRATUM: Work smart, wear your hard hat

    CERN Multimedia

    2003-01-01

    An error appeared in the article «Work smart, wear your hard hat» published in Weekly Bulletin 27/2003, page 5. The impact which pierced a hole in the hard hat worn by Gerd Fetchenhauer was the equivalent of a box weighing 5 kg and not 50 kg.

  11. 7 CFR 201.57 - Hard seeds.

    Science.gov (United States)

    2010-01-01

    ... REGULATIONS Germination Tests in the Administration of the Act § 201.57 Hard seeds. Seeds which remain hard at the end of the prescribed test because they have not absorbed water, due to an impermeable seed coat... percentage of germination. For flatpea, continue the swollen seed in test for 14 days when germinating at 15...

  12. The influence of material hardness on liquid droplet impingement erosion

    International Nuclear Information System (INIS)

    Fujisawa, Nobuyuki; Yamagata, Takayuki; Takano, Shotaro; Saito, Kengo; Morita, Ryo; Fujiwara, Kazutoshi; Inada, Fumio

    2015-01-01

    Highlights: • Liquid droplet impingement erosion is studied for various metal materials. • Average power dependency on droplet velocity is found as 7. • Power dependency on Vickers hardness is found as −4.5. • An empirical formula is constructed for erosion rates of metal materials. • Predicted erosion rate is well correlated with experiment within a factor of 1.5. - Abstract: This paper describes the experimental study on the liquid droplet impingement erosion of metal materials to understand the influence of material hardness on the erosion rate. The experiment is carried out using a water spray jet apparatus with a condition of relatively thin liquid film thickness. The metal materials tested are pure aluminum, aluminum alloy, brass, mild steel, carbon steel and stainless steel. The liquid droplets considered are 30 ± 5 μm in volume average diameter of water, which is the same order of droplet diameter in the actual pipeline in nuclear/fossil power plants. In order to understand the influence of material hardness on the liquid droplet impingement erosion, the scanning electron microscope (SEM) observation on the eroded surface and the measurement of erosion rate are carried out in the terminal stage of erosion. The experimental results indicate that the erosion rates are expressed by the droplet velocity, volume flux, Vickers hardness and the liquid film thickness, which are fundamentals of the liquid droplet impingement erosion. The empirical formula shows that the power index for droplet velocity dependency is found to be 7 with a scattering from 5 to 9 depending on the materials, while the power index for Vickers hardness dependency is found as −4.5
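    The power laws reported above (velocity exponent about 7, Vickers hardness exponent about −4.5) imply a simple relative-scaling estimate of the erosion rate, E ∝ V^7 · HV^−4.5, with the prefactor, volume flux and film-thickness terms omitted. A sketch under that assumption (the function and the scaling scenario are ours, for illustration only):

    ```python
    def relative_erosion_rate(v_ratio: float, hv_ratio: float) -> float:
        """Factor by which the erosion rate changes when droplet velocity is
        scaled by v_ratio and Vickers hardness by hv_ratio (E ∝ V**7 * HV**-4.5).
        """
        return v_ratio ** 7 * hv_ratio ** (-4.5)

    # Doubling droplet velocity at fixed hardness multiplies erosion by 2**7.
    print(relative_erosion_rate(2.0, 1.0))  # → 128.0
    # Doubling Vickers hardness at fixed velocity divides it by 2**4.5 ≈ 22.6.
    print(relative_erosion_rate(1.0, 2.0))  # ≈ 0.044
    ```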

  13. Effort and Displeasure in People Who Are Hard of Hearing.

    Science.gov (United States)

    Matthen, Mohan

    2016-01-01

    Listening effort helps explain why people who are hard of hearing are prone to fatigue and social withdrawal. However, a one-factor model that cites only effort due to hardness of hearing is insufficient as there are many who lead happy lives despite their disability. This article explores other contributory factors, in particular motivational arousal and pleasure. The theory of rational motivational arousal predicts that some people forego listening comprehension because they believe it to be impossible and hence worth no effort at all. This is problematic. Why should the listening task be rated this way, given the availability of aids that reduce its difficulty? Two additional factors narrow the explanatory gap. First, we separate the listening task from the benefit derived as a consequence. The latter is temporally more distant, and is discounted as a result. The second factor is displeasure attributed to the listening task, which increases listening cost. Many who are hard of hearing enjoy social interaction. In such cases, the actual activity of listening is a benefit, not a cost. These people also reap the benefits of listening, but do not have to balance these against the displeasure of the task. It is suggested that if motivational harmony can be induced by training in somebody who is hard of hearing, then the obstacle to motivational arousal would be removed. This suggests a modified goal for health care professionals. Do not just teach those who are hard of hearing how to use hearing assistance devices. Teach them how to do so with pleasure and enjoyment.

  14. The influence of material hardness on liquid droplet impingement erosion

    Energy Technology Data Exchange (ETDEWEB)

    Fujisawa, Nobuyuki, E-mail: fujisawa@eng.niigata-u.ac.jp [Visualization Research Center, Niigata University, 8050, Ikarashi 2-Nocho, Nishi-ku, Niigata 950-2181 (Japan); Yamagata, Takayuki, E-mail: yamagata@eng.niigata-u.ac.jp [Visualization Research Center, Niigata University, 8050, Ikarashi 2-Nocho, Nishi-ku, Niigata 950-2181 (Japan); Takano, Shotaro; Saito, Kengo [Graduate School of Science and Technology, Niigata University, 8050, Ikarashi 2-Nocho, Nishi-ku, Niigata 950-2181 (Japan); Morita, Ryo; Fujiwara, Kazutoshi; Inada, Fumio [Central Research Institute of Electric Power Industry, 2-11-1, Iwatokita, Komae, Tokyo 201-8511 (Japan)

    2015-07-15

    Highlights: • Liquid droplet impingement erosion is studied for various metal materials. • Average power dependency on droplet velocity is found as 7. • Power dependency on Vickers hardness is found as −4.5. • An empirical formula is constructed for erosion rates of metal materials. • Predicted erosion rate is well correlated with experiment within a factor of 1.5. - Abstract: This paper describes the experimental study on the liquid droplet impingement erosion of metal materials to understand the influence of material hardness on the erosion rate. The experiment is carried out using a water spray jet apparatus with a condition of relatively thin liquid film thickness. The metal materials tested are pure aluminum, aluminum alloy, brass, mild steel, carbon steel and stainless steel. The liquid droplets considered are 30 ± 5 μm in volume average diameter of water, which is the same order of droplet diameter in the actual pipeline in nuclear/fossil power plants. In order to understand the influence of material hardness on the liquid droplet impingement erosion, the scanning electron microscope (SEM) observation on the eroded surface and the measurement of erosion rate are carried out in the terminal stage of erosion. The experimental results indicate that the erosion rates are expressed by the droplet velocity, volume flux, Vickers hardness and the liquid film thickness, which are fundamentals of the liquid droplet impingement erosion. The empirical formula shows that the power index for droplet velocity dependency is found to be 7 with a scattering from 5 to 9 depending on the materials, while the power index for Vickers hardness dependency is found as −4.5.

  15. Aespoe Hard Rock Laboratory. Annual report 1997

    International Nuclear Information System (INIS)

    1998-05-01

    The Aespoe Hard Rock Laboratory has been constructed as part of the preparations for the deep geological repository for spent nuclear fuel in Sweden. The surface and borehole investigations and the research work performed in parallel with construction have provided a thorough test of methods for investigation and evaluation of bedrock conditions for construction of a deep repository. The Tracer Retention Understanding Experiments are made to gain a better understanding of radionuclide retention in the rock and create confidence in the radionuclide transport models that are intended to be used in the licensing of a deep repository for spent fuel. The experimental results of the first tracer test with sorbing radioactive tracers have been obtained. These tests have been subject to blind predictions by the Aespoe Task Force on groundwater flow and transports of solutes. The manufacturing of the CHEMLAB probe was completed during 1996, and the first experiments were started early in 1997. During 1997 three experiments on diffusion in bentonite using 57 Co, 114 Cs, 85 Sr, 99 Tc, and 131 I were conducted. The Prototype Repository Test is focused on testing and demonstrating repository system function. A full scale prototype including six deposition holes with canisters with electric heaters surrounded by highly compacted bentonite will be built and instrumented. The characterization of the rock mass in the area of the prototype repository is in progress. The objectives of the Demonstration of Repository Technology are to develop, test, and demonstrate methodology and equipment for encapsulation and deposition of spent nuclear fuel. The demonstration of handling and deposition will be made in a new drift. The Backfill and Plug Test includes tests of backfill materials and emplacement methods and a test of a full scale plug. The backfill and rock will be instrumented with about 230 transducers for measuring the thermo-hydro-mechanical processes. The Retrieval Test is

  16. Aespoe Hard Rock Laboratory Annual Report 1999

    International Nuclear Information System (INIS)

    2000-08-01

    The Aespoe Hard Rock Laboratory has been constructed as part of the preparations for the deep geological repository for spent nuclear fuel in Sweden. The Tracer Retention Understanding Experiments are made to gain a better understanding of radionuclide retention in the rock and create confidence in the radionuclide transport models that are intended to be used in the licensing of a deep repository for spent fuel. The TRUE-1 experiment, including tests with sorbing radioactive tracers in a single fracture over a distance of about 5 m, has been completed. Diffusion and sorption in the rock matrix are the dominant retention mechanisms over the time scales of the experiments. The main objective of the TRUE Block Scale Experiment is to increase understanding and our ability to predict tracer transport in a fracture network over spatial scales of 10 to 50 m. In total, six boreholes have been drilled into the experimental volume located at the 450 m level. The Long-Term Diffusion Experiment is intended as a complement to the dynamic in-situ experiments and the laboratory experiments performed in the TRUE Programme. Diffusion from a fracture into the rock matrix will be studied in situ. The REX project focuses on the reduction of oxygen in a repository after closure due to reactions with rock minerals and microbial activity. Results show that oxygen is consumed within a few days both for the field and laboratory experiments. A new site for the CHEMLAB experiments was selected and prepared during 1999. All future experiments will be conducted in the J niche at 450 m depth. The Prototype Repository Test is focused on testing and demonstrating repository system function. A full-scale prototype including six deposition holes with canisters with electric heaters surrounded by highly compacted bentonite will be built and instrumented. Characterisation of the rock mass in the area of the Prototype repository is completed and the six deposition holes have been drilled. 
The Backfill and

  17. Aespoe Hard Rock Laboratory. Annual report 1997

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-05-01

    The Aespoe Hard Rock Laboratory has been constructed as part of the preparations for the deep geological repository for spent nuclear fuel in Sweden. The surface and borehole investigations and the research work performed in parallel with construction have provided a thorough test of methods for investigation and evaluation of bedrock conditions for construction of a deep repository. The Tracer Retention Understanding Experiments are made to gain a better understanding of radionuclide retention in the rock and create confidence in the radionuclide transport models that are intended to be used in the licensing of a deep repository for spent fuel. The experimental results of the first tracer test with sorbing radioactive tracers have been obtained. These tests have been subject to blind predictions by the Aespoe Task Force on groundwater flow and transports of solutes. The manufacturing of the CHEMLAB probe was completed during 1996, and the first experiments were started early in 1997. During 1997 three experiments on diffusion in bentonite using 57Co, 114Cs, 85Sr, 99Tc, and 131I were conducted. The Prototype Repository Test is focused on testing and demonstrating repository system function. A full scale prototype including six deposition holes with canisters with electric heaters surrounded by highly compacted bentonite will be built and instrumented. The characterization of the rock mass in the area of the prototype repository is in progress. The objectives of the Demonstration of Repository Technology are to develop, test, and demonstrate methodology and equipment for encapsulation and deposition of spent nuclear fuel. The demonstration of handling and deposition will be made in a new drift. The Backfill and Plug Test includes tests of backfill materials and emplacement methods and a test of a full scale plug. The backfill and rock will be instrumented with about 230 transducers for measuring the thermo-hydro-mechanical processes. 
The

  18. Aespoe Hard Rock Laboratory Annual Report 1999

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-08-01

    The Aespoe Hard Rock Laboratory has been constructed as part of the preparations for the deep geological repository for spent nuclear fuel in Sweden. The Tracer Retention Understanding Experiments are made to gain a better understanding of radionuclide retention in the rock and create confidence in the radionuclide transport models that are intended to be used in the licensing of a deep repository for spent fuel. The TRUE-1 experiment, including tests with sorbing radioactive tracers in a single fracture over a distance of about 5 m, has been completed. Diffusion and sorption in the rock matrix are the dominant retention mechanisms over the time scales of the experiments. The main objective of the TRUE Block Scale Experiment is to increase understanding and our ability to predict tracer transport in a fracture network over spatial scales of 10 to 50 m. In total, six boreholes have been drilled into the experimental volume located at the 450 m level. The Long-Term Diffusion Experiment is intended as a complement to the dynamic in-situ experiments and the laboratory experiments performed in the TRUE Programme. Diffusion from a fracture into the rock matrix will be studied in situ. The REX project focuses on the reduction of oxygen in a repository after closure due to reactions with rock minerals and microbial activity. Results show that oxygen is consumed within a few days both for the field and laboratory experiments. A new site for the CHEMLAB experiments was selected and prepared during 1999. All future experiments will be conducted in the J niche at 450 m depth. The Prototype Repository Test is focused on testing and demonstrating repository system function. A full-scale prototype including six deposition holes with canisters with electric heaters surrounded by highly compacted bentonite will be built and instrumented. Characterisation of the rock mass in the area of the Prototype repository is completed and the six deposition holes have been drilled. 
The Backfill and

  19. Correlating particle hardness with powder compaction performance.

    Science.gov (United States)

    Cao, Xiaoping; Morganti, Mikayla; Hancock, Bruno C; Masterson, Victoria M

    2010-10-01

    Assessing the particle mechanical properties of pharmaceutical materials quickly and with little material can be very important in the early stages of pharmaceutical research. In this study, a wide range of pharmaceutical materials was studied using atomic force microscopy (AFM) nanoindentation, providing a substantial set of particle hardness and elastic modulus data. Moreover, the powder compact mechanical properties of these materials were investigated to establish a correlation between particle hardness and powder compaction performance. It was found that materials with very low or very high particle hardness are likely to exhibit poor compaction performance, while materials with medium particle hardness usually show good compaction behavior. Additionally, the results from this study enrich Hiestand's special-case concept relating particle hardness to powder compaction performance. This study suggests that AFM nanoindentation can help to screen the mechanical properties of pharmaceutical materials at early development stages of pharmaceutical research.
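Hardness and elastic modulus are typically extracted from a nanoindentation load-displacement curve. The abstract does not name the analysis method, but a common choice is the Oliver-Pharr procedure; a minimal sketch with illustrative (not measured) numbers and an ideal Berkovich area function assumed:

```python
import math

def oliver_pharr(p_max, h_max, stiffness, epsilon=0.75):
    """Oliver-Pharr estimate of hardness and reduced modulus from one
    unloading curve. Units: load in mN, depth in um, stiffness in mN/um,
    so hardness and modulus come out directly in GPa. An ideal Berkovich
    tip is assumed (projected contact area A_c = 24.5 * h_c**2)."""
    h_c = h_max - epsilon * p_max / stiffness      # contact depth, um
    a_c = 24.5 * h_c ** 2                          # projected contact area, um^2
    hardness = p_max / a_c                         # GPa
    e_reduced = (math.sqrt(math.pi) / 2.0) * stiffness / math.sqrt(a_c)  # GPa
    return hardness, e_reduced

# illustrative values for a moderately hard particle (hypothetical)
h, e_r = oliver_pharr(p_max=10.0, h_max=0.30, stiffness=70.0)
```

In practice, AFM-based analyses calibrate the tip area function against a reference such as fused silica rather than assuming the ideal 24.5 h² geometry.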

  20. Hard Break-Up of Two-Nucleons and QCD Dynamics of NN Interaction

    International Nuclear Information System (INIS)

    Sargsian, Misak

    2008-01-01

    We discuss recent developments in the theory of high-energy two-body break-up of few-nucleon systems. The characteristics of these reactions are such that the hard two-body quasielastic subprocess can be clearly separated from the accompanying soft subprocesses. We discuss in detail the hard rescattering model (HRM), in which hard photodisintegration develops in two stages: first, the photon knocks out an energetic quark, which subsequently rescatters with a quark of the other nucleon. The latter provides a mechanism for sharing the initial high momentum of the photon between the two outgoing nucleons. This final-state hard rescattering can be expressed through the hard NN scattering amplitude. Within the HRM we discuss hard break-up reactions involving D and 3He targets and demonstrate how these reactions are sensitive to the dynamics of hard pn and pp interactions. Another development of the HRM is the prediction of a new helicity-selection mechanism for hard two-body reactions, which was apparently confirmed in a recent JLab experiment.

  1. [Determination of Hard Rate of Alfalfa (Medicago sativa L.) Seeds with Near Infrared Spectroscopy].

    Science.gov (United States)

    Wang, Xin-xun; Chen, Ling-ling; Zhang, Yun-wei; Mao, Pei-sheng

    2016-03-01

    Alfalfa (Medicago sativa L.) is the most commonly grown forage crop in China due to its quality characteristics and high adaptability. However, 20%-80% of alfalfa seeds are hard seeds, which cannot easily be distinguished from non-hard seeds and which cause a loss of seed utilization value and plant production. In this experiment, 121 alfalfa seed samples were collected across different regions, harvest years and varieties. Of these, 31 samples were artificially blended to hard-seed rates ranging from 20% to 80% in order to establish a model for hard-seed rate by near infrared spectroscopy (NIRS) with Partial Least Squares (PLS) regression. The objective of this study was to establish the model and to estimate the efficiency of NIRS for determining the hard-seed rate of alfalfa. The results showed that the correlation coefficient (R2cal) of the calibration model was 0.9816, the root mean square error of cross validation (RMSECV) was 5.32, and the ratio of prediction to deviation (RPD) was 3.58. The calibration model thus achieved satisfactory precision. The proposed NIRS method is feasible for identification and classification of hard seeds in alfalfa, providing a new, nondestructive approach and a theoretical basis for fast detection of hard-seed rates.
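The calibration statistics quoted above follow from simple formulas: RMSE is the root mean squared prediction error, and RPD is the standard deviation of the reference values divided by that error, with RPD > 3 usually taken as adequate for quantitative NIRS work. A minimal sketch with made-up hard-seed rates (not the paper's data):

```python
import math

def rmse(y_ref, y_pred):
    """Root mean squared error between reference and predicted values."""
    return math.sqrt(sum((r - p) ** 2 for r, p in zip(y_ref, y_pred)) / len(y_ref))

def rpd(y_ref, y_pred):
    """Ratio of prediction to deviation: sample standard deviation of the
    reference values divided by the prediction error."""
    mean = sum(y_ref) / len(y_ref)
    sd = math.sqrt(sum((r - mean) ** 2 for r in y_ref) / (len(y_ref) - 1))
    return sd / rmse(y_ref, y_pred)

# hypothetical hard-seed rates (%) and NIRS predictions for illustration
reference = [20, 30, 40, 50, 60, 70, 80]
predicted = [25, 35, 45, 55, 65, 75, 85]   # each prediction off by +5 points
```

With these synthetic values the RMSE is 5 percentage points and the RPD is about 4.3, i.e. in the same "satisfactory" range as the model reported above.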

  2. Efficacy of the semiempirical sparkle model as compared to ECP ab-initio calculations for the prediction of ligand field parameters of europium (III) complexes

    International Nuclear Information System (INIS)

    Freire, Ricardo O.; Rocha, Gerd B.; Albuquerque, Rodrigo Q.; Simas, Alfredo M.

    2005-01-01

    The second version of the sparkle model for the calculation of lanthanide complexes (SMLC II), as well as ab-initio calculations (HF/STO-3G and HF/3-21G), have been used to calculate the geometries of a series of europium(III) complexes with different coordination numbers (CN = 7, 8 and 9), ligating atoms (O and N) and ligands (mono-, bi- and polydentate). The so-called ligand field parameters, Bkq, have been calculated from both the SMLC II and ab-initio optimized structures and compared to those calculated from crystallographic data. The results show that the SMLC II model represents a significant improvement over the previous version (SMLC) and gives good results when compared to ab-initio methods, which demand a much higher computational effort; indeed, ab-initio methods take around a hundred times more computing time than SMLC. As such, our results indicate that the sparkle model can be a very useful and fast tool for the prediction of both ground-state geometries and ligand field parameters of europium(III) complexes.

  3. Molecular-scale hydrophobic interactions between hard-sphere reference solutes are attractive and endothermic.

    Science.gov (United States)

    Chaudhari, Mangesh I; Holleran, Sinead A; Ashbaugh, Henry S; Pratt, Lawrence R

    2013-12-17

    The osmotic second virial coefficients, B2, for atomic-sized hard spheres in water are attractive (B2 < 0) and become more attractive with increasing temperature (ΔB2/ΔT < 0); hydrophobic bonding between such solutes is therefore attractive and endothermic at moderate temperatures. Hydrophobic interactions between atomic-sized hard spheres in water are more attractive than predicted by the available statistical mechanical theory. These results constitute an initial step toward a detailed molecular theory of additional intermolecular interaction features, specifically, attractive interactions associated with hydrophobic solutes.
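For context, the osmotic second virial coefficient follows from the solute-solute potential of mean force W(r) via B2 = -2π ∫ (e^{-W(r)/kT} - 1) r² dr. A bare hard-sphere pair with no water-mediated attraction gives the positive (repulsive) baseline B2 = 2πσ³/3, against which the negative measured values signal net attraction. A minimal numerical check, with an assumed methane-like diameter:

```python
import math

def b2_hard_sphere(sigma, n=100_000):
    """Numerical B2 = -2*pi * integral of (exp(-W/kT) - 1) * r^2 dr for a
    hard-sphere potential of mean force: the Mayer function exp(-W/kT) - 1
    is -1 inside the core (W = +inf) and 0 outside (W = 0). Midpoint rule."""
    r_max = 2.0 * sigma
    dr = r_max / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr
        mayer = -1.0 if r < sigma else 0.0
        total += mayer * r * r * dr
    return -2.0 * math.pi * total

sigma = 0.33                                   # nm, roughly methane-sized (assumed)
b2_numeric = b2_hard_sphere(sigma)             # nm^3
b2_exact = 2.0 * math.pi * sigma ** 3 / 3.0    # analytic hard-sphere result
```

The positive sign of this reference value marks pure repulsion; the attractive (negative) B2 values reported above are the water-mediated correction to this baseline.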

  4. Stroke volume variation compared with pulse pressure variation and cardiac index changes for prediction of fluid responsiveness in mechanically ventilated patients

    Directory of Open Access Journals (Sweden)

    Randa Aly Soliman

    2015-04-01

    Conclusions: Baseline stroke volume variation ⩾8.15% predicted fluid responsiveness in mechanically ventilated patients with acute circulatory failure. The study also confirmed the ability of pulse pressure variation to predict fluid responsiveness.
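The 8.15% cutoff above refers to stroke volume variation as conventionally defined over a respiratory cycle: the difference between maximal and minimal stroke volume, normalised by their mean. A minimal sketch with illustrative values (not study data):

```python
def stroke_volume_variation(sv_max, sv_min):
    """SVV (%) over one respiratory cycle:
    100 * (SVmax - SVmin) / mean(SVmax, SVmin)."""
    return 100.0 * (sv_max - sv_min) / ((sv_max + sv_min) / 2.0)

# hypothetical beat-to-beat extremes (mL): SVV of roughly 9% exceeds the
# 8.15% cutoff, so this patient would be classed as a likely fluid responder
svv = stroke_volume_variation(sv_max=70.0, sv_min=64.0)
```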

  5. Influence of orthopedic treatment on hard and soft facial structures of individuals presenting with Class II, Division 1 malocclusion: a comparative study A influência do tratamento ortopédico nas estruturas faciais de indivíduos com má oclusão de Classe II, 1ª Divisão: um estudo comparativo

    Directory of Open Access Journals (Sweden)

    Liliana Ávila Maltagliati

    2004-06-01

    The purpose of this investigation was to comparatively evaluate the cephalometric changes in soft and hard tissues related to treatment of Class II, Division 1 malocclusion with activator-headgear and Bionator appliances. Twenty-four individuals formed the activator-headgear group and twenty-five comprised the Bionator group, while another twenty-four individuals presenting the same malocclusion received no intervention and served as controls. Lateral headfilms were taken at the beginning and at the end of the observation period and digitized; computerized cephalometric analysis was performed and the results were submitted to statistical testing. According to the methodology employed, our findings suggest that neither appliance significantly alters the growth path, nor were they able to modify the posterior inferior facial height or the sagittal and vertical position of the upper lip. The lower lip and the soft-tissue menton were only slightly modified by the orthopedic appliances, but the mentolabial sulcus showed a significant decrease in depth compared to the control group. Of statistical significance, only the anterior inferior hard- and soft-tissue facial heights and the lower lip height increased more in the treated groups.

  6. Ferroelectric Dipole Electrets Prepared from Soft and Hard PZT Ceramics in Electrostatic Vibration Energy Harvesters

    International Nuclear Information System (INIS)

    Asanuma, H; Oguchi, H; Hara, M; Kuwano, H

    2013-01-01

    Aiming at longer stability of the surface potential, we propose a ferroelectric dipole electret (FDE) prepared from a hard ferroelectric material. We compared the output power of electrostatic vibration energy harvesters and the surface-potential stability between FDEs prepared from soft and hard PZT ceramics, as well as a CYTOP polymer electret. The hard FDE showed a seven-fold increase in output power over the soft FDE and a nine-fold increase over the CYTOP polymer electret. The hard FDE also showed longer surface-potential stability than the soft FDE, although its stability was not yet comparable to that of the CYTOP polymer electret. An FDE prepared from a harder PZT ceramic (with higher coercive electric field and Curie temperature) may provide still better surface-potential stability.

  7. Depth-resolved X-ray residual stress analysis in PVD (Ti, Cr) N hard coatings

    CERN Document Server

    Genzel, C

    2003-01-01

    Physical vapour deposition (PVD) of thin hard coatings on a TiN basis is usually performed at rather low temperatures (T_D < 500 °C), far from thermal equilibrium, which leads to high intrinsic residual stresses in the growing film. In contrast to the extrinsic thermal residual stresses, which can easily be estimated from the difference between the coefficients of thermal expansion of the substrate and the coating, a theoretical prediction of the intrinsic residual stresses is difficult, because their magnitude as well as their distribution within the film depend in a very complex way on the deposition kinetics. Using strongly fibre-textured PVD (Ti, Cr)N coatings, prepared under defined variation of the deposition parameters in order to adjust the residual stress distribution within the coatings, as an example, the paper compares different X-ray diffraction techniques with respect to their applicability for detecting residual stresses that are non-uniform over the coating thickness. (orig.)
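The "easily estimated" extrinsic part mentioned above is the equibiaxial stress a thin coating acquires on cooling from the deposition temperature, because it is forced to follow the substrate's thermal contraction. A sketch with illustrative handbook-style values (assumed, not from the paper) for a TiN-based coating on a steel substrate:

```python
def thermal_stress_gpa(e_coat, nu_coat, alpha_sub, alpha_coat, t_dep, t_room):
    """Equibiaxial thermal residual stress in a thin coating on a thick
    substrate after cooling from t_dep to t_room, in GPa if e_coat is in
    GPa. Negative = compressive. sigma = E/(1-nu) * mismatch strain."""
    strain = (alpha_sub - alpha_coat) * (t_room - t_dep)
    return e_coat / (1.0 - nu_coat) * strain

# assumed values: E = 450 GPa, nu = 0.25, alpha_coating = 9.4e-6 /K,
# alpha_steel = 12e-6 /K, deposition at 450 C, room temperature 20 C
sigma_th = thermal_stress_gpa(450.0, 0.25, 12e-6, 9.4e-6, 450.0, 20.0)
```

With these numbers the faster-contracting steel puts the coating into modest compression (a few tenths of a GPa); the much larger stresses observed in PVD films are the intrinsic, growth-related contribution the paper discusses.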

  8. Collective modes in simple melts: Transition from soft spheres to the hard sphere limit.

    Science.gov (United States)

    Khrapak, Sergey; Klumov, Boris; Couëdel, Lénaïc

    2017-08-11

    We study collective modes in a classical system of particles with repulsive inverse-power-law (IPL) interactions in the fluid phase, near the fluid-solid coexistence (IPL melts). The IPL exponent is varied from n = 10 to n = 100 to mimic the transition from moderately soft to hard-sphere-like interactions. We compare the longitudinal dispersion relations obtained using molecular dynamics (MD) simulations with those calculated using the quasi-crystalline approximation (QCA) and find that this simple theoretical approach becomes grossly inaccurate for [Formula: see text]. Similarly, conventional expressions for high-frequency (instantaneous) elastic moduli, predicting their divergence as n increases, are meaningless in this regime. Relations of the longitudinal and transverse elastic velocities of the QCA model to the adiabatic sound velocity, measured in MD simulations, are discussed for the regime where QCA is applicable. Two potentially useful freezing indicators for classical particle systems with steep repulsive interactions are discussed.
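The soft-to-hard crossover here is controlled entirely by the exponent n in the IPL pair potential u(r) = ε(σ/r)^n. A quick sketch (in units of ε, with this standard form assumed) shows how steeply the repulsion switches on as n grows, which is why n = 100 is already effectively hard-sphere-like:

```python
def ipl_potential(r_over_sigma, n, eps=1.0):
    """Inverse-power-law (IPL) pair potential u(r) = eps * (sigma / r)**n,
    evaluated at the reduced separation r/sigma."""
    return eps * r_over_sigma ** (-n)

# just outside the nominal diameter the n = 10 potential is still sizeable,
# while the n = 100 potential has already dropped to nearly zero; just
# inside the diameter the n = 100 potential is enormous (a near-hard wall)
soft = ipl_potential(1.1, 10)     # moderately soft repulsion
hard = ipl_potential(1.1, 100)    # hard-sphere-like repulsion
```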

  9. Influence of H-C bonds on the stopping power of hard and soft carbonized layers

    International Nuclear Information System (INIS)

    Boutard, D.; Moeller, W.; Scherzer, B.M.U.

    1988-01-01

    Soft and hard carbon-hydrogen films were deposited in an rf glow discharge. Their stopping powers were deduced from depth-profile analysis by means of proton enhanced-cross-section scattering at around 1.5 MeV and 4He+ elastic-recoil detection at 2.6 MeV. In the case of soft films, ion-induced hydrogen depletion allowed study of the dependence of the stopping on hydrogen concentration. The presence of hydrogen increases the stopping power of the film by a factor of up to ∼2 compared to the predicted value for pure carbon. Moreover, Bragg's rule underestimates the total stopping considerably. However, good agreement is obtained with a recent theoretical model by Sabin et al. which takes into account the different C-C and C-H s

  10. Release consequence analysis for a hypothetical geologic radioactive waste repository in hard rock

    International Nuclear Information System (INIS)

    1979-12-01

    This report evaluates the long-term behaviour of wastes placed in a hard rock repository. Impacts were analyzed for the seven reference fuel cycles of WG 7. The reference repository for this study assumes granitic rock or gneiss as the host rock. The descriptions of waste packages and repository facilities used in this study represent only one of many possible designs based on the multiple-barrier concept. The repository's size is based on a nuclear economy producing 100 gigawatts of electricity per year for 1 year. The objective of the modeling efforts presented in this study is to predict the rate of transport of radioactive contaminants from the repository through the geosphere to the biosphere and thus estimate the potential dose to humans, so that the release consequence impacts of the various fuel cycles can be compared. Currently available hydrologic, leach, transport, and dose models were used in this study.

  11. Edwards' approach to horizontal and vertical segregation in a mixture of hard spheres under gravity

    International Nuclear Information System (INIS)

    Fierro, Annalisa; Nicodemi, Mario; Coniglio, Antonio

    2003-01-01

    We study the phenomenon of size segregation, observed in models of vibrated granular mixtures such as powders or sand. This consists of the de-mixing of the different components of the system under shaking. Several mechanisms have been proposed to explain this phenomenon. However, the criteria for predicting segregation in a mixture, an issue of great practical importance, are largely unknown. In the present paper we study a binary hard-sphere mixture under gravity on a three-dimensional lattice using Monte Carlo simulations. The vertical and horizontal segregation observed during the tap dynamics is interpreted in the framework of a statistical mechanics approach to granular media in the manner of Edwards. A phase diagram for the vertical segregation is derived, and compared with the simulation data

  12. Modeling Flare Hard X-ray Emission from Electrons in Contracting Magnetic Islands

    Science.gov (United States)

    Guidoni, Silvina E.; Allred, Joel C.; Alaoui, Meriem; Holman, Gordon D.; DeVore, C. Richard; Karpen, Judith T.

    2016-05-01

    The mechanism that accelerates particles to the energies required to produce the observed impulsive hard X-ray emission in solar flares is not well understood. It is generally accepted that this emission is produced by a non-thermal beam of electrons that collides with the ambient ions as the beam propagates from the top of a flare loop to its footpoints. Most current models that investigate this transport assume an injected beam with an initial energy spectrum inferred from observed hard X-ray spectra, usually a power law with a low-energy cutoff. In our previous work (Guidoni et al. 2016), we proposed an analytical method to estimate particle energy gain in contracting, large-scale, 2.5-dimensional magnetic islands, based on a kinetic model by Drake et al. (2010). We applied this method to sunward-moving islands formed high in the corona during fast reconnection in a simulated eruptive flare. The overarching purpose of the present work is to test this proposed acceleration model by estimating the hard X-ray flux resulting from its predicted accelerated-particle distribution functions. To do so, we have coupled our model to a unified computational framework that simulates the propagation of an injected beam as it deposits energy and momentum along its way (Allred et al. 2015). This framework includes the effects of radiative transfer and return currents, necessary to estimate flare emission that can be compared directly to observations. We will present preliminary results of the coupling between these models.

  13. Comparative analysis of the predicted secretomes of Rosaceae scab pathogens Venturia inaequalis and V. pirina reveals expanded effector families and putative determinants of host range.

    Science.gov (United States)

    Deng, Cecilia H; Plummer, Kim M; Jones, Darcy A B; Mesarich, Carl H; Shiller, Jason; Taranto, Adam P; Robinson, Andrew J; Kastner, Patrick; Hall, Nathan E; Templeton, Matthew D; Bowen, Joanna K

    2017-05-02

    Fungal plant pathogens belonging to the genus Venturia cause damaging scab diseases of members of the Rosaceae. In terms of economic impact, the most important of these are V. inaequalis, which infects apple, and V. pirina, which is a pathogen of European pear. Given that Venturia fungi colonise the sub-cuticular space without penetrating plant cells, it is assumed that effectors that contribute to virulence and determination of host range will be secreted into this plant-pathogen interface. Thus the predicted secretomes of a range of isolates of Venturia with distinct host-ranges were interrogated to reveal putative proteins involved in virulence and pathogenicity. Genomes of Venturia pirina (one European pear scab isolate) and Venturia inaequalis (three apple scab, and one loquat scab, isolates) were sequenced and the predicted secretomes of each isolate identified. RNA-Seq was conducted on the apple-specific V. inaequalis isolate Vi1 (in vitro and infected apple leaves) to highlight virulence and pathogenicity components of the secretome. Genes encoding over 600 small secreted proteins (candidate effectors) were identified, most of which are novel to Venturia, with expansion of putative effector families a feature of the genus. Numerous genes with similarity to Leptosphaeria maculans AvrLm6 and the Verticillium spp. Ave1 were identified. Candidates for avirulence effectors with cognate resistance genes involved in race-cultivar specificity were identified, as were putative proteins involved in host-species determination. Candidate effectors were found, on average, to be in regions of relatively low gene-density and in closer proximity to repeats (e.g. transposable elements), compared with core eukaryotic genes. Comparative secretomics has revealed candidate effectors from Venturia fungal plant pathogens that attack pome fruit. Effectors that are putative determinants of host range were identified; both those that may be involved in race-cultivar and host

  14. Comparative genomics and prediction of conditionally dispensable sequences in legume-infecting Fusarium oxysporum formae speciales facilitates identification of candidate effectors.

    Science.gov (United States)

    Williams, Angela H; Sharma, Mamta; Thatcher, Louise F; Azam, Sarwar; Hane, James K; Sperschneider, Jana; Kidd, Brendan N; Anderson, Jonathan P; Ghosh, Raju; Garg, Gagan; Lichtenzveig, Judith; Kistler, H Corby; Shea, Terrance; Young, Sarah; Buck, Sally-Anne G; Kamphuis, Lars G; Saxena, Rachit; Pande, Suresh; Ma, Li-Jun; Varshney, Rajeev K; Singh, Karam B

    2016-03-05

    Soil-borne fungi of the Fusarium oxysporum species complex cause devastating wilt disease on many crops including legumes that supply human dietary protein needs across many parts of the globe. We present and compare draft genome assemblies for three legume-infecting formae speciales (ff. spp.): F. oxysporum f. sp. ciceris (Foc-38-1) and f. sp. pisi (Fop-37622), significant pathogens of chickpea and pea respectively, the world's second and third most important grain legumes, and lastly f. sp. medicaginis (Fom-5190a) for which we developed a model legume pathosystem utilising Medicago truncatula. Focusing on the identification of pathogenicity gene content, we leveraged the reference genomes of Fusarium pathogens F. oxysporum f. sp. lycopersici (tomato-infecting) and F. solani (pea-infecting) and their well-characterised core and dispensable chromosomes to predict genomic organisation in the newly sequenced legume-infecting isolates. Dispensable chromosomes are not essential for growth and in Fusarium species are known to be enriched in host-specificity and pathogenicity-associated genes. Comparative genomics of the publicly available Fusarium species revealed differential patterns of sequence conservation across F. oxysporum formae speciales, with legume-pathogenic formae speciales not exhibiting greater sequence conservation between them relative to non-legume-infecting formae speciales, possibly indicating the lack of a common ancestral source for legume pathogenicity. Combining predicted dispensable gene content with in planta expression in the model legume-infecting isolate, we identified small conserved regions and candidate effectors, four of which shared greatest similarity to proteins from another legume-infecting ff. spp. We demonstrate that distinction of core and potential dispensable genomic regions of novel F. oxysporum genomes is an effective tool to facilitate effector discovery and the identification of gene content possibly linked to host

  15. Evaluating the Relationships Between NTNU/SINTEF Drillability Indices with Index Properties and Petrographic Data of Hard Igneous Rocks

    Science.gov (United States)

    Aligholi, Saeed; Lashkaripour, Gholam Reza; Ghafoori, Mohammad; Azali, Sadegh Tarigh

    2017-11-01

    Thorough and realistic performance predictions are among the main requisites for estimating the excavation costs and time of tunneling projects, and the NTNU/SINTEF rock drillability indices, including the Drilling Rate Index™ (DRI), Bit Wear Index™ (BWI), and Cutter Life Index™ (CLI), are among the most effective indices for determining rock drillability. In this study, brittleness value (S20), Sievers' J-Value (SJ), abrasion value (AV), and Abrasion Value Cutter Steel (AVS) tests were conducted to determine these indices for a wide range of Iranian hard igneous rocks. In addition, relationships between these drillability parameters and the petrographic features and index properties of the tested rocks were investigated. The results of multiple regression analysis revealed that models built on petrographic features provide a better estimate of drillability than those built on index properties, and that semiautomatic petrography combined with multiple regression analysis is a suitable complement for determining the drillability properties of igneous rocks. Based on the results of this study, AV correlates more strongly with the studied mineralogical indices than AVS. The results imply that, in general, the surface hardness of hard igneous rocks is very high, and that acidic igneous rocks have lower strength and density and higher S20 than basic rocks. Moreover, DRI is higher and BWI lower in acidic igneous rocks, suggesting that drill-and-blast tunneling is more convenient in these rocks than in basic rocks.

  16. Nanoindentation hardness of hot-pressed boron suboxide

    International Nuclear Information System (INIS)

    Machaka, Ronald; Derry, Trevor E.; Sigalas, Iakovos

    2011-01-01

    Highlights: → The load-displacement indentation response of hot-pressed B6O is measured and analysed. → The nanoindentation hardness of hot-pressed boron suboxide is reported. → An approach is developed to simulate multi-cycling loading load-displacement curves. → A comprehensive model inter-comparison study of the ISE in hot-pressed B6O is also presented. → The fractal dimension is a better measure of ISE than the Meyer's index. - Abstract: The existence of the indentation size effect (ISE) implies the absence of a single hardness value for the material under investigation, especially at low applied loads. In this paper we present an investigation of the indentation size dependence of nanoindentation hardness in boron suboxide (B6O) ceramic compacts prepared by uniaxial hot-pressing. Berkovich nanohardness indentations were conducted and analyzed accordingly. In addition to the ordinary Oliver and Pharr method of nanoindentation data analysis, a quantitative approach to loading-curve analysis is proposed. Using the proposed approach, the observed indentation size effect is described and characterized through the application of Meyer's law, the classical and modified proportional specimen resistance models, and the multi-fractal scaling law. The load-independent hardness values deduced from our quantitative approach are comparable to the results calculated with conventional methods, especially with the multi-fractal scaling law.
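The Meyer's-law description of the indentation size effect mentioned above fits P = A·d^n to the load-size data: n = 2 means load-independent hardness, while n < 2 signals a normal ISE (hardness falling with increasing load). A minimal log-log least-squares sketch on synthetic data (not the paper's measurements):

```python
import math

def meyer_fit(loads, sizes):
    """Fit Meyer's law P = A * d**n by linear least squares in log-log
    space. Returns (n, A); n < 2 indicates a normal indentation size
    effect, n > 2 a reverse one."""
    xs = [math.log(d) for d in sizes]
    ys = [math.log(p) for p in loads]
    xbar = sum(xs) / len(xs)
    ybar = sum(ys) / len(ys)
    n = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    a = math.exp(ybar - n * xbar)
    return n, a

# synthetic ISE data generated from n = 1.8, A = 0.01 (hypothetical units)
sizes = [1.0, 2.0, 4.0, 8.0]
loads = [0.01 * d ** 1.8 for d in sizes]
n, a = meyer_fit(loads, sizes)
```

On noiseless power-law data the fit recovers the generating exponent exactly, which makes this a convenient self-check before applying it to real indentation data.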

  17. Relationship between nickel and cobalt sensitization in hard metal workers

    Energy Technology Data Exchange (ETDEWEB)

    Rystedt, I; Fischer, T

    1983-05-01

    Eight hundred and fifty-three hard metal workers were examined and patch tested with 20 substances from their environment, including nickel and cobalt. Nickel sensitivity was found in 2 men and 38 women; 88% of the nickel-sensitive individuals had developed a jewelry dermatitis prior to employment in the hard metal industry or before the appearance of hand eczema. 29% of the hard metal workers gave a history of slight irritant dermatitis. In the nickel-sensitized group, 40% had had severe hand eczema, which generally appeared 6-12 months after starting employment. 25% of the nickel-sensitive individuals developed cobalt allergy, compared with 5% in the total population investigated. Most facts indicate that nickel sensitivity and irritant hand eczema precede cobalt sensitization. Hard metal workers with simultaneous nickel and cobalt sensitivity had more severe hand eczema than those with isolated cobalt or nickel sensitivity or only irritant dermatitis. 64% of the female population had pierced ear lobes; among the nickel-allergic women, 95% had pierced ear lobes. The use of nickel-containing earrings after piercing is strongly suspected of being the major cause of nickel sensitivity, and piercing at an early age seems to increase the risk of incurring it.

  19. Hard QCD Measurements at LHC

    CERN Document Server

    Pasztor, Gabriella

    2018-01-01

    The rich proton-proton collision data of the LHC allow QCD processes to be studied in a previously unexplored region with ever-improving precision. This paper summarises recent results of the ATLAS, CMS and LHCb Collaborations, using primarily multi-jet and vector-boson-plus-jet data collected at $\\sqrt s$ = 8 and 13 TeV. Comparisons to higher-order theoretical calculations and sophisticated Monte Carlo predictions are presented, as well as the impact of the data on the determination of the parton distribution functions and the measurement of the strong coupling constant, $\\alpha_s$.

  20. Hard evidence on soft skills.

    Science.gov (United States)

    Heckman, James J; Kautz, Tim

    2012-08-01

    This paper summarizes recent evidence on what achievement tests measure; how achievement tests relate to other measures of "cognitive ability" like IQ and grades; the important skills that achievement tests miss or mismeasure; and how much these skills matter in life. Achievement tests miss, or perhaps more accurately do not adequately capture, soft skills: personality traits, goals, motivations, and preferences that are valued in the labor market, in school, and in many other domains. The larger message of this paper is that soft skills predict success in life, that they causally produce that success, and that programs that enhance soft skills have an important place in an effective portfolio of public policies.

  1. Analysis of Hard Thin Film Coating

    Science.gov (United States)

    Shen, Dashen

    1998-01-01

    MSFC is interested in developing hard thin-film coatings for bearings. Bearing wear is an important problem for spaceflight engines, and a hard thin-film coating can drastically improve the bearing surface and its wear endurance. However, many fundamental problems in surface physics, plasma deposition, etc., need further research. The approach is to use electron cyclotron resonance chemical vapor deposition (ECRCVD) to deposit hard thin films on stainless steel bearings. The thin films under consideration include SiC, SiN and other materials. An ECRCVD deposition system is being assembled at MSFC.

  2. Aespoe Hard Rock Laboratory. Annual report 1998

    International Nuclear Information System (INIS)

    1999-05-01

    The Aespoe Hard Rock Laboratory has been constructed as part of the preparations for the deep geological repository for spent nuclear fuel in Sweden. The Tracer Retention Understanding Experiments are made to gain a better understanding of radionuclide retention in the rock and create confidence in the radionuclide transport models that are intended to be used in the licensing of a deep repository for spent fuel. Experiments with sorbing radioactive tracers have been completed in a single fracture over a distance of about 5 m. These tests have been subject to blind predictions by the Aespoe Task Force on groundwater flow and transports of solutes. Breakthrough of sorbing tracers in the TRUE-I tests is retarded more strongly than would be expected based on laboratory data alone. Results are consistent for all tracers and tracer tests. The main objective of the TRUE Block Scale Experiment is to increase understanding and our ability to predict tracer transport in a fracture network over spatial scales of 10 to 50 m. The total duration of the project is approximately 4.5 years with a scheduled finish at the end of the year 2000. The REX project focuses on the reduction of oxygen in a repository after closure due to reactions with rock minerals and microbial activity. Results show that oxygen is consumed within a few days both for the field and laboratory experiments. The project Degassing of groundwater and two phase flow was initiated to improve our understanding of observations of hydraulic conditions made in drifts and interpretation of experiments performed close to drifts. The analysis performed so far shows that the experimentally observed flow reductions indeed are consistent with the degassing hypothesis. The Prototype Repository Test is focused on testing and demonstrating repository system function. A full-scale prototype including six deposition holes with canisters with electric heaters surrounded by highly compacted bentonite will be built and

  4. SU-E-T-196: Comparative Analysis of Surface Dose Measurements Using MOSFET Detector and Dose Predicted by Eclipse - AAA with Varying Dose Calculation Grid Size

    Energy Technology Data Exchange (ETDEWEB)

    Badkul, R; Nejaiman, S; Pokhrel, D; Jiang, H; Kumar, P [University of Kansas Medical Center, Kansas City, KS (United States)

    2015-06-15

    Purpose: Skin dose can be the limiting factor and a fairly common reason to interrupt treatment, especially when treating head-and-neck with intensity-modulated radiation therapy (IMRT) or volumetrically-modulated arc-therapy (VMAT) and breast with tangentially-directed beams. The aim of this study was to investigate the accuracy of near-surface dose predicted by the Eclipse treatment-planning system (TPS) using the Anisotropic Analytic Algorithm (AAA) with varying calculation grid-size, comparing with metal-oxide-semiconductor field-effect-transistor (MOSFET) measurements for a range of clinical conditions (open-field, dynamic-wedge, physical-wedge, IMRT, VMAT). Methods: The QUASAR™ Body Phantom was used in this study, with oval curved surfaces to mimic breast, chest wall and head-and-neck sites. A CT scan was obtained with five radio-opaque markers (ROM) placed on the surface of the phantom to mimic the range of incident angles for measurements and dose prediction, using 2 mm slice thickness. At each ROM, small structures (1 mm x 2 mm) were contoured to obtain mean doses from the TPS. Calculations were performed for open-field, dynamic-wedge, physical-wedge, IMRT and VMAT using a Varian 21EX, 6 and 15 MV photons, and two grid-sizes: 2.5 mm and 1 mm. Calibration checks were performed to ensure that MOSFET responses were within ±5%. Surface doses were measured at five locations and compared with TPS calculations. Results: For 6 MV, 2.5 mm grid-size, mean calculated doses (MCD) were higher by 10%(±7.6), 10%(±7.6), 20%(±8.5), 40%(±7.5), 30%(±6.9), and for 1 mm grid-size MCD were higher by 0%(±5.7), 0%(±4.2), 0%(±5.5), 1.2%(±5.0), 1.1%(±7.8) for open-field, dynamic-wedge, physical-wedge, IMRT and VMAT respectively. For 15 MV, 2.5 mm grid-size, MCD were higher by 30%(±14.6), 30%(±14.6), 30%(±14.0), 40%(±11.0), 30%(±3.5), and for 1 mm grid-size MCD were higher by 10%(±10.6), 10%(±9.8), 10%(±8.0), 30%(±7.8), 10%(±3.8) for open-field, dynamic-wedge, physical-wedge, IMRT and VMAT respectively. For 6 MV, 86% and 56% of all measured values
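
The comparison in the abstract above reduces to a simple calculation: the relative difference between TPS-calculated and MOSFET-measured surface doses at each point. A minimal sketch is given below; the dose values are invented placeholders, not the study's data.

```python
# Hypothetical sketch: percent difference of TPS-calculated vs. measured
# surface dose, as compared in the study above. Values are illustrative.

def percent_diff(calculated, measured):
    """Relative difference of calculated dose vs. measured dose, in percent."""
    return 100.0 * (calculated - measured) / measured

# One hypothetical (calculated, measured) dose pair per beam setup, in cGy
setups = {
    "open-field":    (110.0, 100.0),
    "dynamic-wedge": (105.0, 100.0),
}

diffs = {name: percent_diff(c, m) for name, (c, m) in setups.items()}
```

A positive value means the TPS over-predicts the surface dose relative to the MOSFET, which is the direction of the discrepancies the abstract reports for the coarser 2.5 mm grid.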

  5. Comparative Analysis and Predictors of 10-year Tumor Necrosis Factor Inhibitors Drug Survival in Patients with Spondyloarthritis: First-year Response Predicts Longterm Drug Persistence.

    Science.gov (United States)

    Flouri, Irini D; Markatseli, Theodora E; Boki, Kyriaki A; Papadopoulos, Ioannis; Skopouli, Fotini N; Voulgari, Paraskevi V; Settas, Loukas; Zisopoulos, Dimitrios; Iliopoulos, Alexios; Geborek, Pierre; Drosos, Alexandros A; Boumpas, Dimitrios T; Sidiropoulos, Prodromos

    2018-04-01

    To evaluate the 10-year drug survival of the first tumor necrosis factor inhibitor (TNFi) administered to patients with spondyloarthritis (SpA) overall and comparatively between SpA subsets, and to identify predictors of drug retention. Patients with SpA in the Hellenic Registry of Biologic Therapies, a prospective multicenter observational cohort, starting their first TNFi between 2004 and 2014 were analyzed. Kaplan-Meier curves and Cox regression models were used. Overall, 404 out of 1077 patients (37.5%) discontinued treatment (followup: 4288 patient-yrs). Ten-year drug survival was 49%. In the unadjusted analyses, higher TNFi survival was observed in patients with ankylosing spondylitis (AS) compared to undifferentiated SpA and psoriatic arthritis [PsA; significant beyond the first 2.5 (p = 0.003) years and 7 years (p < 0.001), respectively], and in patients treated for isolated axial versus peripheral arthritis (p = 0.001). In all multivariable analyses, male sex was a predictor for longer TNFi survival. Use of methotrexate (MTX) was a predictor in PsA and in patients with peripheral arthritis. Absence of peripheral arthritis and use of a monoclonal antibody (as opposed to non-antibody TNFi) independently predicted longer TNFi survival in axial disease because of lower rates of inefficacy. Achievement of major responses during the first year in either axial or peripheral arthritis was the strongest predictor of longer therapy retention (HR 0.33, 95% CI 0.26-0.41 for Ankylosing Spondylitis Disease Activity Score inactive disease, and HR 0.35, 95% CI 0.24-0.50 for 28-joint Disease Activity Score remission). The longterm retention of the first TNFi administered to patients with SpA is high, especially for males with axial disease. The strongest predictor of longterm TNFi survival is a major response within the first year of treatment.

  6. Future Hard Disk Storage: Limits & Potential Solutions

    Science.gov (United States)

    Lambeth, David N.

    2000-03-01

    For several years the hard disk drive technology pace has raced along at 60-100% annual growth, with products this year and laboratory demonstrations approaching what has been estimated as a physical thermal stability limit of around 40 Gbit/in2. For some time now the data storage industry has recognized that doing business as usual will not be viable for long, and so both incremental evolutionary and revolutionary technologies are being explored. While new recording head materials or thermal recording techniques may allow higher coercivity materials to be recorded upon, and while high sensitivity spin transport transducer technology may provide sufficient signals to extend beyond the 100 Gigabit/in2 regime, conventional isotropic longitudinal media will show large data retention problems at less than 1/2 of this value. We have recently developed a simple model which indicates that while thermal instability issues may appear at different areal densities, they are non-discriminatory as to the magnetic recording modality: longitudinal, perpendicular, magneto-optic, near field, etc. The model indicates that a strong orientation of the media tends to abate the onset of the thermal limit. Hence, for the past few years we have taken an approach of controlled growth of the microstructure of thin film media. This knowledge has led us to believe that epitaxial growth of multiple thin film layers on single crystalline Si may provide a pathway to nearly perfect crystallites of various, highly oriented, thin film textures. Here we provide an overview of the recording system media challenges, which are useful for the development of a future media design philosophy, and then discuss materials issues and processing techniques for multi-layered thin film material structures which may be used to achieve media structures which can easily exceed the limits predicted for isotropic media.

  7. Exploring pyrazolo[3,4-d]pyrimidine phosphodiesterase 1 (PDE1) inhibitors: a predictive approach combining comparative validated multiple molecular modelling techniques.

    Science.gov (United States)

    Amin, Sk Abdul; Bhargava, Sonam; Adhikari, Nilanjan; Gayen, Shovanlal; Jha, Tarun

    2018-02-01

    Phosphodiesterase 1 (PDE1) is a potential target for a number of neurodegenerative disorders such as schizophrenia, Parkinson's and Alzheimer's diseases. A number of pyrazolo[3,4-d]pyrimidine PDE1 inhibitors were subjected to different molecular modelling techniques [such as regression-based quantitative structure-activity relationship (QSAR): multiple linear regression, support vector machine and artificial neural network; classification-based QSAR: Bayesian modelling and recursive partitioning; Monte Carlo based QSAR; Open3DQSAR; pharmacophore mapping and molecular docking analyses] to gain detailed knowledge of the physicochemical and structural requirements for higher inhibitory activity. The planarity of the pyrimidinone ring plays an important role in PDE1 inhibition. The N-methylated function at the 5th position of the pyrazolo[3,4-d]pyrimidine core is required for interacting with the PDE1 enzyme. The cyclopentyl ring fused with the parent scaffold is necessary for PDE1 binding potency. The phenylamino substitution at the 3rd position is crucial for PDE1 inhibition. The N2-substitution at the pyrazole moiety is important for PDE1 inhibition compared to the N1-substituted analogues. Moreover, the p-substituted benzyl side chain at the N2-position helps to enhance the PDE1 inhibitory profile. Based on these observations, some new molecules are predicted that may possess better PDE1 inhibition.

  8. Application of the Min-Projection and the Model Predictive Strategies for Current Control of Three-Phase Grid-Connected Converters: a Comparative Study

    Directory of Open Access Journals (Sweden)

    M. Oloumi

    2015-06-01

    This paper provides a detailed comparative study of the performance of the min-projection strategy (MPS) and model predictive control (MPC) for controlling three-phase grid-connected converters. To do so, first, the converter is modeled as a switched linear system. Then, the feasibility of the MPS technique is investigated and its stability criterion is derived as a lower limit on the DC link voltage. Next, the fundamental equations of the MPS for controlling a VSC are obtained in the stationary reference frame. The mathematical analysis reveals that the MPS is independent of the load, grid, filter and converter parameters. This feature is a great advantage of MPS over the MPC approach. However, the latter, a well-known model-based control technique, has already been developed for controlling the VSC in the stationary reference frame. To control the grid-connected VSC, both MPS and MPC approaches are simulated in the PSCAD/EMTDC environment. Simulation results illustrate that the MPS functions well and is less sensitive to grid and filter inductances as well as the DC link voltage level. However, the MPC approach renders a slightly better performance under steady-state conditions.
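
The MPC side of the comparison above is commonly implemented as finite-control-set MPC: at each sampling instant, predict the next filter current for every admissible converter voltage and apply the one minimizing a tracking cost. The sketch below illustrates that idea only; it is not the paper's controller, and all parameter values (filter, sampling period, DC link) are invented placeholders.

```python
# Minimal finite-control-set MPC sketch for a grid-connected converter,
# assuming a first-order L-R filter model: L di/dt = v_conv - v_grid - R i.
# Scalar single-phase illustration; all numbers are assumed placeholders.

L, R, Ts, Vdc = 10e-3, 0.1, 50e-6, 700.0   # filter, sampling period, DC link

def predict(i_meas, v_conv, v_grid):
    """One-step forward-Euler prediction of the filter current."""
    return i_meas + Ts * (v_conv - v_grid - R * i_meas) / L

def best_switch_state(i_meas, i_ref, v_grid):
    """Pick the converter voltage (finite candidate set) minimizing the
    absolute current-tracking error at the next sample."""
    candidates = (-Vdc / 2, 0.0, Vdc / 2)   # three illustrative voltage levels
    return min(candidates, key=lambda v: abs(i_ref - predict(i_meas, v, v_grid)))
```

Because the cost is evaluated through the plant model, this controller inherently depends on the filter and grid parameters, which is exactly the sensitivity the paper contrasts with the parameter-independent MPS.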

  9. Fisher: a program for the detection of H/ACA snoRNAs using MFE secondary structure prediction and comparative genomics - assessment and update.

    Science.gov (United States)

    Freyhult, Eva; Edvardsson, Sverker; Tamas, Ivica; Moulton, Vincent; Poole, Anthony M

    2008-07-21

    The H/ACA family of small nucleolar RNAs (snoRNAs) plays a central role in guiding the pseudouridylation of ribosomal RNA (rRNA). In an effort to systematically identify the complete set of rRNA-modifying H/ACA snoRNAs from the genome sequence of the budding yeast, Saccharomyces cerevisiae, we developed a program - Fisher - and previously presented several candidate snoRNAs based on our analysis [1]. In this report, we provide a brief update of this work, which was aborted after the publication of experimentally-identified snoRNAs [2] identical to candidates we had identified bioinformatically using Fisher. Our motivation for revisiting this work is to report on the status of the candidate snoRNAs described in [1], and secondly, to report that a modified version of Fisher together with the available multiple yeast genome sequences was able to correctly identify several H/ACA snoRNAs for modification sites not identified by the snoGPS program [3]. While we are no longer developing Fisher, we briefly consider the merits of the Fisher algorithm relative to snoGPS, which may be of use for workers considering pursuing a similar search strategy for the identification of small RNAs. The modified source code for Fisher is made available as supplementary material. Our results confirm the validity of using minimum free energy (MFE) secondary structure prediction to guide comparative genomic screening for RNA families with few sequence constraints.

  10. Motif-independent prediction of a secondary metabolism gene cluster using comparative genomics: application to sequenced genomes of Aspergillus and ten other filamentous fungal species.

    Science.gov (United States)

    Takeda, Itaru; Umemura, Myco; Koike, Hideaki; Asai, Kiyoshi; Machida, Masayuki

    2014-08-01

    Despite their biological importance, a significant number of genes for secondary metabolite biosynthesis (SMB) remain undetected due largely to the fact that they are highly diverse and are not expressed under a variety of cultivation conditions. Several software tools including SMURF and antiSMASH have been developed to predict fungal SMB gene clusters by finding core genes encoding polyketide synthase, nonribosomal peptide synthetase and dimethylallyltryptophan synthase as well as several others typically present in the cluster. In this work, we have devised a novel comparative genomics method to identify SMB gene clusters that is independent of motif information of the known SMB genes. The method detects SMB gene clusters by searching for a similar order of genes and their presence in nonsyntenic blocks. With this method, we were able to identify many known SMB gene clusters with the core genes in the genomic sequences of 10 filamentous fungi. Furthermore, we have also detected SMB gene clusters without core genes, including the kojic acid biosynthesis gene cluster of Aspergillus oryzae. By varying the detection parameters of the method, a significant difference in the sequence characteristics was detected between the genes residing inside the clusters and those outside the clusters. © The Author 2014. Published by Oxford University Press on behalf of Kazusa DNA Research Institute.
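
The core of the comparative method above is searching two genomes for genes that occur in the same order. A crude stand-in for that idea is finding shared contiguous gene-order runs, sketched below; the gene names are invented placeholders and the real method additionally restricts to nonsyntenic blocks, which this sketch omits.

```python
# Hypothetical sketch of gene-order-based cluster detection: report runs of
# genes (length >= min_len) that appear contiguously and in the same order
# in two genomes, modeled as lists of gene identifiers.

def shared_ordered_runs(genome_a, genome_b, min_len=3):
    """Return maximal contiguous gene runs common to both gene orders."""
    runs = set()
    for i in range(len(genome_a)):
        for j in range(len(genome_b)):
            # skip positions inside a run already found at its true start
            if i and j and genome_a[i - 1] == genome_b[j - 1]:
                continue
            k = 0
            while (i + k < len(genome_a) and j + k < len(genome_b)
                   and genome_a[i + k] == genome_b[j + k]):
                k += 1
            if k >= min_len:
                runs.add(tuple(genome_a[i:i + k]))
    return sorted(runs)

# Invented example: a four-gene candidate cluster shared by two species
genome_a = ["pksA", "tfA", "oxiA", "expA", "regX"]
genome_b = ["hkB", "pksA", "tfA", "oxiA", "expA", "mfsB"]
clusters = shared_ordered_runs(genome_a, genome_b)
```

In this toy example the run ("pksA", "tfA", "oxiA", "expA") is reported as a candidate cluster without requiring any motif information about the genes themselves, which mirrors the motif-independence the abstract emphasizes.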

  11. Unraveling Quantum Annealers using Classical Hardness

    Science.gov (United States)

    Martin-Mayor, Victor; Hen, Itay

    2015-01-01

    Recent advances in quantum technology have led to the development and manufacturing of experimental programmable quantum annealing optimizers that contain hundreds of quantum bits. These optimizers, commonly referred to as ‘D-Wave’ chips, promise to solve practical optimization problems potentially faster than conventional ‘classical’ computers. Attempts to quantify the quantum nature of these chips have been met with both excitement and skepticism but have also brought up numerous fundamental questions pertaining to the distinguishability of experimental quantum annealers from their classical thermal counterparts. Inspired by recent results in spin-glass theory that recognize ‘temperature chaos’ as the underlying mechanism responsible for the computational intractability of hard optimization problems, we devise a general method to quantify the performance of quantum annealers on optimization problems suffering from varying degrees of temperature chaos: A superior performance of quantum annealers over classical algorithms on these may allude to the role that quantum effects play in providing speedup. We utilize our method to experimentally study the D-Wave Two chip on different temperature-chaotic problems and find, surprisingly, that its performance scales unfavorably as compared to several analogous classical algorithms. We detect, quantify and discuss several purely classical effects that possibly mask the quantum behavior of the chip. PMID:26483257

  12. HARDNESS PHENOMENON IN BEACH PEA (Lathyrus maritimus L.)

    OpenAIRE

    U.D. Chavan; R. Amarowicz; F. Shahidi

    2013-01-01

    Beach pea is mostly grown on seashores and contains a higher amount of protein than other legumes. However, the pea has several undesirable attributes, such as a long cooking time and seeds that are hard to germinate (poor imbibition), that have limited its use as food. The present investigation aimed to study the physico-chemical properties, cooking characteristics and hull crude fibre structure of beach pea as compared to other similar legumes. Standard methods of processing pulses were used for the present study. Beach...

  13. Hard x- and gamma-rays from supernova 1987A

    International Nuclear Information System (INIS)

    Kumagai, S.; Shigeyama, T.; Nomoto, K.; Nishmura, J.; Itoh, M.

    1988-01-01

    The x-ray light curve and spectrum from SN 1987A due to Compton degradation of γ-rays from the 56Co decay are calculated and compared with the Ginga and Kvant observations. If mixing of 56Co into outer layers has taken place, the x-rays emerge much earlier than in the case without mixing, and the resulting hard x-rays are in reasonable agreement with observations

  14. Elastic constants of the hard disc system in the self-consistent free volume approximation

    International Nuclear Information System (INIS)

    Wojciechowski, K.W.

    1990-09-01

    Elastic moduli of the two dimensional hard disc crystal are determined exactly within the Kirkwood self-consistent free volume approximation and compared with the Monte Carlo simulation results. (author). 22 refs, 1 fig., 1 tab

  15. Non-hard sphere thermodynamic perturbation theory.

    Science.gov (United States)

    Zhou, Shiqi

    2011-08-21

    A non-hard sphere (HS) perturbation scheme, recently advanced by the present author, is elaborated on several technical matters, which are key mathematical details for implementation of the non-HS perturbation scheme in a coupling parameter expansion (CPE) thermodynamic perturbation framework. NVT Monte Carlo simulation is carried out for a generalized Lennard-Jones (LJ) 2n-n potential to obtain routine thermodynamic quantities such as excess internal energy, pressure, excess chemical potential, excess Helmholtz free energy, and excess constant volume heat capacity. Then these new simulation data, and available simulation data in the literature on a hard core attractive Yukawa fluid and a Sutherland fluid, are used to test the non-HS CPE 3rd-order thermodynamic perturbation theory (TPT) and give a comparison between the non-HS CPE 3rd-order TPT and other theoretical approaches. It is indicated that the non-HS CPE 3rd-order TPT is superior to other traditional TPT such as van der Waals/HS (vdW/HS), perturbation theory 2 (PT2)/HS, and vdW/Yukawa (vdW/Y) theory, or analytical equations of state such as the mean spherical approximation (MSA) equation of state, and is at least comparable to several of the currently most accurate Ornstein-Zernike integral equation theories. It is discovered that three technical issues, i.e., opening up new bridge fun
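
The coupling parameter expansion named above can be sketched generically (a textbook-style sketch under standard TPT conventions, not the paper's exact formulation): the pair potential is linearly switched from the reference to the full potential, and the excess Helmholtz free energy is Taylor-expanded about the reference fluid.

```latex
% Generic CPE sketch: u_ref is the (non-hard-sphere) reference potential,
% lambda the coupling parameter switching on the full potential u.
u_\lambda(r) = u_{\mathrm{ref}}(r) + \lambda \left[ u(r) - u_{\mathrm{ref}}(r) \right]

F_{\mathrm{ex}} \approx F_{\mathrm{ref}}
  + \sum_{n=1}^{3} \frac{1}{n!}
    \left. \frac{\partial^{n} F_{\mathrm{ex}}(\lambda)}{\partial \lambda^{n}} \right|_{\lambda = 0}
```

Truncating the sum at n = 3 yields a 3rd-order TPT of the kind the abstract tests against simulation data.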