WorldWideScience

Sample records for single-step most-probable-number method

  1. Rapid, single-step most-probable-number method for enumerating fecal coliforms in effluents from sewage treatment plants

    Science.gov (United States)

    Munoz, E. F.; Silverman, M. P.

    1979-01-01

    A single-step most-probable-number method for determining the number of fecal coliform bacteria present in sewage treatment plant effluents is discussed. A single growth medium based on that of Reasoner et al. (1976) and consisting of 5.0 g proteose peptone, 3.0 g yeast extract, 10.0 g lactose, 7.5 g NaCl, 0.2 g sodium lauryl sulfate, and 0.1 g sodium desoxycholate per liter is used. The pH is adjusted to 6.5, and samples are incubated at 44.5 deg C. Bacterial growth is detected either by measuring the increase with time in the electrical impedance ratio between the inoculated sample vial and an uninoculated reference vial or by visual examination for turbidity. Results obtained by the single-step method for chlorinated and unchlorinated effluent samples are in excellent agreement with those obtained by the standard method. It is suggested that in automated treatment plants impedance ratio data could be automatically matched by computer programs with the appropriate dilution factors and most-probable-number tables already in the computer memory, with the corresponding result displayed as fecal coliforms per 100 ml of effluent.
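
    A minimal sketch of how tube counts and dilution factors map to a most-probable-number estimate, assuming the usual Poisson "at least one organism per positive tube" model; the tube counts and volumes below are hypothetical, and published MPN tables (as referenced above) may differ slightly from this maximum-likelihood calculation.

```python
import math

def mpn_per_100ml(positives, tubes, sample_ml):
    """Maximum-likelihood MPN from a multiple-dilution tube assay.

    positives[i] : number of positive (turbid / impedance-change) tubes at dilution i
    tubes[i]     : total tubes inoculated at dilution i
    sample_ml[i] : volume of original sample (ml) delivered to each tube at dilution i
    Returns the MPN of organisms per 100 ml.
    """
    def score(lam):
        # Derivative of the log-likelihood of the Poisson "at least one organism" model
        s = 0.0
        for p, n, v in zip(positives, tubes, sample_ml):
            if p > 0:
                s += p * v * math.exp(-lam * v) / (1.0 - math.exp(-lam * v))
            s -= (n - p) * v
        return s

    lo, hi = 1e-9, 1e6            # organisms per ml, bracketing the root
    for _ in range(200):          # geometric bisection; score() is decreasing in lam
        mid = math.sqrt(lo * hi)
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 100.0 * math.sqrt(lo * hi)

# Example: 5 tubes per dilution receiving 1, 0.1 and 0.01 ml of effluent (hypothetical 5-3-1 pattern)
print(round(mpn_per_100ml([5, 3, 1], [5, 5, 5], [1.0, 0.1, 0.01])))
```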

  2. Miniaturized most probable number for the enumeration of Salmonella sp in artificially contaminated chicken meat

    Directory of Open Access Journals (Sweden)

    FL Colla

    2014-03-01

    Salmonella is traditionally identified by conventional microbiological tests, but the enumeration of this bacterium is not used on a routine basis. Methods such as the most probable number (MPN), which utilize an array of multiple tubes, are time-consuming and expensive, whereas miniaturized most probable number (mMPN) methods, which use microplates, can be adapted for the enumeration of bacteria, saving time and materials. The aim of the present paper is to assess two mMPN methods for the enumeration of Salmonella sp. in artificially contaminated chicken meat samples. Microplates containing 24 wells (method A) and 96 wells (method B), both with peptone water as pre-enrichment medium and modified semi-solid Rappaport-Vassiliadis (MSRV) as selective enrichment medium, were used. The meat matrix consisted of 25 g of autoclaved ground chicken breast contaminated with dilutions of up to 10−6 of Salmonella Typhimurium (ST) and Escherichia coli (EC). In method A, the 10−5 dilution of Salmonella Typhimurium corresponded to >57 MPN/mL and the 10−6 dilution was equal to 30 MPN/mL. There was a correlation between the counts used for the artificial contamination of the samples and those recovered by mMPN, indicating that method A was sensitive for the enumeration of different levels of contamination of the meat matrix. In method B, there was no correlation between the inoculated dilutions and the mMPN results.

  3. Comparison of Model Reliabilities from Single-Step and Bivariate Blending Methods

    DEFF Research Database (Denmark)

    Taskinen, Matti; Mäntysaari, Esa; Lidauer, Martin

    2013-01-01

    Model-based reliabilities in genetic evaluation are compared between three methods: animal model BLUP, single-step BLUP, and bivariate blending after genomic BLUP. The original bivariate blending is revised in this work to better account for animal models. The study data is extracted from...... be calculated. Model reliabilities by the single-step and the bivariate blending methods were higher than by the animal model due to genomic information. Compared to the single-step method, the bivariate blending method reliability estimates were, in general, lower. Computationally, the bivariate blending method was......, on the other hand, lighter than the single-step method....

  4. Most probable number methodology for quantifying dilute concentrations and fluxes of Escherichia coli O157:H7 in surface waters.

    Science.gov (United States)

    Jenkins, M B; Endale, D M; Fisher, D S; Gay, P A

    2009-02-01

    To better understand the transport and enumeration of dilute densities of Escherichia coli O157:H7 in agricultural watersheds, we developed a culture-based, five-tube, multiple-dilution most probable number (MPN) method. The MPN method combined a filtration technique for large volumes of surface water with standard selective media, biochemical and immunological tests, and a TaqMan confirmation step. This method determined E. coli O157:H7 concentrations as low as 0.1 MPN per litre, with a 95% confidence interval of 0.01-0.7 MPN per litre. Escherichia coli O157:H7 densities ranged from not detectable to 9 MPN per litre for pond inflow, from not detectable to 0.9 MPN per litre for pond outflow and from not detectable to 8.3 MPN per litre for within pond. The MPN methodology was extended to mass flux determinations. Fluxes of E. coli O157:H7 ranged up to 10⁴ MPN per hour. This culture-based method can detect small numbers of viable/culturable E. coli O157:H7 in surface waters of watersheds containing animal agriculture and wildlife. This MPN method will improve our understanding of the transport and fate of E. coli O157:H7 in agricultural watersheds, and can be the basis of collections of environmental E. coli O157:H7.
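
    The flux extension amounts to multiplying an MPN concentration by the water discharge; a small illustration with hypothetical discharge values (the study's actual flow data are not given here).

```python
def flux_mpn_per_hour(conc_mpn_per_litre, discharge_litres_per_second):
    """Organism flux: MPN concentration times water discharge, expressed per hour."""
    return conc_mpn_per_litre * discharge_litres_per_second * 3600.0

# Hypothetical pond-inflow case: 0.9 MPN per litre at a discharge of 5 L/s
print(f"{flux_mpn_per_hour(0.9, 5.0):.0f} MPN per hour")   # 0.9 * 5 * 3600 = 16200
```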

  5. A three-step Maximum-A-Posterior probability method for InSAR data inversion of coseismic rupture with application to four recent large earthquakes in Asia

    Science.gov (United States)

    Sun, J.; Shen, Z.; Burgmann, R.; Liang, F.

    2012-12-01

    We develop a three-step Maximum-A-Posterior probability (MAP) method for coseismic rupture inversion, which aims at maximizing the a posterior probability density function (PDF) of elastic solutions of earthquake rupture. The method originates from the Fully Bayesian Inversion (FBI) and the Mixed linear-nonlinear Bayesian inversion (MBI) methods, shares the same a posterior PDF with them and keeps most of their merits, while overcoming their convergence difficulty when large numbers of low-quality data are used and greatly improving the convergence rate using optimization procedures. A highly efficient global optimization algorithm, Adaptive Simulated Annealing (ASA), is used to search for the maximum posterior probability in the first step. The non-slip parameters are determined by the global optimization method, and the slip parameters are inverted for using the least squares method without positivity constraint initially, and then damped to a physically reasonable range. This first-step MAP inversion brings the inversion close to the 'true' solution quickly and jumps over local maximum regions in high-dimensional parameter space. The second-step inversion approaches the 'true' solution further with positivity constraints subsequently applied on slip parameters using the Monte Carlo Inversion (MCI) technique, with all parameters obtained from step one as the initial solution. Then the slip artifacts are eliminated from slip models in the third-step MAP inversion with fault geometry parameters fixed. We first used a designed model with a 45° dip angle and oblique slip, and corresponding synthetic InSAR data sets to validate the efficiency and accuracy of the method. We then applied the method on four recent large earthquakes in Asia, namely the 2010 Yushu, China earthquake, the 2011 Burma earthquake, the 2011 New Zealand earthquake and the 2008 Qinghai, China earthquake, and compared our results with those from other groups. Our results show the effectiveness of

  6. Development and application of a most probable number-PCR assay to quantify flagellate populations in soil samples

    DEFF Research Database (Denmark)

    Fredslund, Line; Ekelund, Flemming; Jacobsen, Carsten Suhr

    2001-01-01

    This paper reports on the first successful molecular detection and quantification of soil protozoa. Quantification of heterotrophic flagellates and naked amoebae in soil has traditionally relied on dilution culturing techniques, followed by most-probable-number (MPN) calculations. Such methods...... are biased by differences in the culturability of soil protozoa and are unable to quantify specific taxonomic groups, and the results are highly dependent on the choice of media and the skills of the microscopists. Successful detection of protozoa in soil by DNA techniques requires (i) the development...

  7. Comparing rapid methods for detecting Listeria in seafood and environmental samples using the most probable number (MPN) technique.

    Science.gov (United States)

    Cruz, Cristina D; Win, Jessicah K; Chantarachoti, Jiraporn; Mutukumira, Anthony N; Fletcher, Graham C

    2012-02-15

    The standard Bacteriological Analytical Manual (BAM) protocol for detecting Listeria in food and on environmental surfaces takes about 96 h. Some studies indicate that rapid methods, which produce results within 48 h, may be as sensitive and accurate as the culture protocol. As they only give presence/absence results, it can be difficult to compare the accuracy of results generated. We used the Most Probable Number (MPN) technique to evaluate the performance and detection limits of six rapid kits for detecting Listeria in seafood and on an environmental surface compared with the standard protocol. Three seafood products and an environmental surface were inoculated with similar known cell concentrations of Listeria and analyzed according to the manufacturers' instructions. The MPN was estimated using the MPN-BAM spreadsheet. For the seafood products no differences were observed among the rapid kits and efficiency was similar to the BAM method. On the environmental surface the BAM protocol had a higher recovery rate (sensitivity) than any of the rapid kits tested. Clearview™, Reveal®, TECRA® and VIDAS® LDUO detected the cells but only at high concentrations (>10² CFU/10 cm²). Two kits (VIP™ and Petrifilm™) failed to detect 10⁴ CFU/10 cm². The MPN method was a useful tool for comparing the results generated by these presence/absence test kits. There remains a need to develop a rapid and sensitive method for detecting Listeria in environmental samples that performs as well as the BAM protocol, since none of the rapid tests used in this study achieved a satisfactory result. Copyright © 2011 Elsevier B.V. All rights reserved.

  8. A single-step method for rapid extraction of total lipids from green microalgae.

    Directory of Open Access Journals (Sweden)

    Martin Axelsson

    Microalgae produce a wide range of lipid compounds of potential commercial interest. Total lipid extraction performed by conventional extraction methods, relying on the chloroform-methanol solvent system, is too laborious and time consuming for screening large numbers of samples. In this study, three previous extraction methods devised by Folch et al. (1957), Bligh and Dyer (1959), and Selstam and Öquist (1985) were compared and a faster single-step procedure was developed for extraction of total lipids from green microalgae. In the single-step procedure, 8 ml of a 2:1 chloroform-methanol (v/v) mixture was added to fresh or frozen microalgal paste or pulverized dry algal biomass contained in a glass centrifuge tube. The biomass was manually suspended by vigorously shaking the tube for a few seconds and 2 ml of a 0.73% NaCl water solution was added. Phase separation was facilitated by 2 min of centrifugation at 350 g and the lower phase was recovered for analysis. An uncharacterized microalgal polyculture and the green microalgae Scenedesmus dimorphus, Selenastrum minutum, and Chlorella protothecoides were subjected to the different extraction methods and various techniques of biomass homogenization. The less labour-intensive single-step procedure presented here allowed simultaneous recovery of total lipid extracts from multiple samples of green microalgae with quantitative yields and fatty acid profiles comparable to those of the previous methods. While the single-step procedure is highly correlated in lipid extractability (r² = 0.985) to the previous method of Folch et al. (1957), it allowed at least five times higher sample throughput.

  9. An automated technique for most-probable-number (MPN) analysis of densities of phagotrophic protists with lux-AB labelled bacteria as growth medium

    DEFF Research Database (Denmark)

    Ekelund, Flemming; Christensen, Søren; Rønn, Regin

    1999-01-01

    An automated modification of the most-probable-number (MPN) technique has been developed for enumeration of phagotrophic protozoa. The method is based on detection of prey depletion in microtitre plates rather than on the presence of protozoa. A transconjugant Pseudomonas fluorescens DR54 labelled w...

  10. Improving Genetic Evaluation of Litter Size Using a Single-step Model

    DEFF Research Database (Denmark)

    Guo, Xiangyu; Christensen, Ole Fredslund; Ostersen, Tage

    A recently developed single-step method allows genetic evaluation based on information from phenotypes, pedigree and markers simultaneously. This paper compared reliabilities of predicted breeding values obtained from the single-step method and the traditional pedigree-based method for two litter size...... traits, total number of piglets born (TNB), and litter size at five days after birth (Ls 5), in Danish Landrace and Yorkshire pigs. The results showed that the single-step method combining phenotypic and genotypic information provided more accurate predictions than the pedigree-based method, not only...

  11. Dug Well Water Quality in Lubuk Buaya Village, Koto Tangah District, Padang City, Based on the Most Probable Number (MPN) Index

    Directory of Open Access Journals (Sweden)

    Randa Novalino

    2016-09-01

    minister. The 15 samples were dug well water from households in several selected neighborhoods (RT). The research was conducted in two stages: collection of dug well water samples together with observation of the factors that affect water quality, and microbiological examination by the Most Probable Number (MPN) method. This test consists of presumptive and confirmative tests that were tailored to the regulation of the Indonesian health minister. The results showed that 73.33% of the wells tested did not meet the standards of the Indonesian health minister regulation, because they contained >50 coliforms in every 100 ml of water. Only 26.6% of the wells inspected met the standards set. Several factors can affect water quality: the location of pollution sources, parapet walls, drainage or sewer water, well covers, and water collection facilities. Keywords: dug well water quality, MPN, coliform

  12. Single-Camera-Based Method for Step Length Symmetry Measurement in Unconstrained Elderly Home Monitoring.

    Science.gov (United States)

    Cai, Xi; Han, Guang; Song, Xin; Wang, Jinkuan

    2017-11-01

    Single-camera-based gait monitoring is unobtrusive, inexpensive, and easy to use for monitoring the daily gait of seniors in their homes. However, most studies require subjects to walk perpendicularly to the camera's optical axis or along some specified routes, which limits its application in elderly home monitoring. To build unconstrained monitoring environments, we propose a method to measure the step length symmetry ratio (a useful gait parameter representing gait symmetry without a significant relationship with age) from unconstrained straight walking using a single camera, without strict restrictions on walking directions or routes. According to projective geometry theory, we first develop a calculation formula of the step length ratio for the case of unconstrained straight-line walking. Then, to adapt to general cases, we propose to modify noncollinear footprints, and accordingly provide a general procedure for step length ratio extraction from unconstrained straight walking. Our method achieves a mean absolute percentage error (MAPE) of 1.9547% for 15 subjects' normal and abnormal side-view gaits, and also obtains satisfactory MAPEs for non-side-view gaits (2.4026% for 45°-view gaits and 3.9721% for 30°-view gaits). The performance is much better than a well-established monocular gait measurement system suitable only for side-view gaits with a MAPE of 3.5538%. Independently of walking directions, our method can accurately estimate step length ratios from unconstrained straight walking. This demonstrates that our method is applicable for elders' daily gait monitoring to provide valuable information for elderly health care, such as abnormal gait recognition, fall risk assessment, etc.
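
    The reported MAPE figures are ordinary mean absolute percentage errors of the estimated step-length ratios against a reference; a small sketch with hypothetical ratio values:

```python
import numpy as np

def mape(estimated, reference):
    """Mean absolute percentage error between estimated and reference step-length ratios."""
    estimated = np.asarray(estimated, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return 100.0 * np.mean(np.abs(estimated - reference) / reference)

# Hypothetical step-length symmetry ratios from a few gait sequences
reference = [1.00, 0.95, 1.10, 1.02]   # from a reference measurement system
estimated = [1.02, 0.93, 1.12, 1.01]   # from the single-camera method
print(f"MAPE = {mape(estimated, reference):.4f}%")
```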

  13. A three-step maximum a posteriori probability method for InSAR data inversion of coseismic rupture with application to the 14 April 2010 Mw 6.9 Yushu, China, earthquake

    Science.gov (United States)

    Sun, Jianbao; Shen, Zheng-Kang; Bürgmann, Roland; Wang, Min; Chen, Lichun; Xu, Xiwei

    2013-08-01

    We develop a three-step maximum a posteriori probability method for coseismic rupture inversion, which aims at maximizing the a posteriori probability density function (PDF) of elastic deformation solutions of earthquake rupture. The method originates from the fully Bayesian inversion and mixed linear-nonlinear Bayesian inversion methods and shares the same posterior PDF with them, while overcoming difficulties with convergence when large numbers of low-quality data are used and greatly improving the convergence rate using optimization procedures. A highly efficient global optimization algorithm, adaptive simulated annealing, is used to search for the maximum of the posterior PDF ("mode" in statistics) in the first step. The second-step inversion approaches the "true" solution further using the Monte Carlo inversion technique with positivity constraints, with all parameters obtained from the first step as the initial solution. Then slip artifacts are eliminated from slip models in the third step using the same procedure as the second step, with fixed fault geometry parameters. We first design a fault model with 45° dip angle and oblique slip, and produce corresponding synthetic interferometric synthetic aperture radar (InSAR) data sets to validate the reliability and efficiency of the new method. We then apply this method to InSAR data inversion for the coseismic slip distribution of the 14 April 2010 Mw 6.9 Yushu, China earthquake. Our preferred slip model is composed of three segments with most of the slip occurring within 15 km depth and the maximum slip reaches 1.38 m at the surface. The seismic moment released is estimated to be 2.32 × 10¹⁹ Nm, consistent with the seismic estimate of 2.50 × 10¹⁹ Nm.
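
    A toy illustration of the first-step idea, under made-up assumptions: a single nonlinear "geometry" parameter is searched by plain (not adaptive) simulated annealing while the linear "slip" coefficients are profiled out by least squares at each proposal. This is only a sketch of the optimization pattern, not the authors' inversion code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model: data = G(theta) @ slip + noise, with one nonlinear parameter theta
# (standing in for fault geometry) and two linear parameters ("slip")
x = np.linspace(0.0, 1.0, 50)

def design(theta):
    return np.column_stack([np.exp(-theta * x), x])

true_theta = 3.0
data = design(true_theta) @ np.array([1.5, 0.7]) + 0.01 * rng.standard_normal(x.size)

def neg_log_posterior(theta):
    """Profile objective: for a candidate theta, solve the linear part by least squares
    (flat priors assumed in this toy) and return the misfit."""
    G = design(theta)
    slip, *_ = np.linalg.lstsq(G, data, rcond=None)
    r = data - G @ slip
    return 0.5 * float(r @ r)

# Plain simulated annealing over theta (the paper uses adaptive simulated annealing)
theta, f, T = 1.0, neg_log_posterior(1.0), 1.0
for _ in range(3000):
    cand = theta + T * rng.standard_normal()
    if cand <= 0.0:
        continue
    fc = neg_log_posterior(cand)
    if fc < f or rng.random() < np.exp((f - fc) / max(T, 1e-12)):
        theta, f = cand, fc
    T *= 0.997                          # geometric cooling schedule

print(f"recovered theta = {theta:.2f} (true value 3.0)")
```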

  14. Factors affecting GEBV accuracy with single-step Bayesian models.

    Science.gov (United States)

    Zhou, Lei; Mrode, Raphael; Zhang, Shengli; Zhang, Qin; Li, Bugao; Liu, Jian-Feng

    2018-01-01

    A single-step approach to obtain genomic prediction was first proposed in 2009. Many studies have investigated the components of GEBV accuracy in genomic selection. However, it is still unclear how the population structure and the relationships between training and validation populations influence GEBV accuracy in terms of single-step analysis. Here, we explored the components of GEBV accuracy in single-step Bayesian analysis with a simulation study. Three scenarios with various numbers of QTL (5, 50, and 500) were simulated. Three models were implemented to analyze the simulated data: single-step genomic best linear unbiased prediction (GBLUP; SSGBLUP), single-step BayesA (SS-BayesA), and single-step BayesB (SS-BayesB). According to our results, GEBV accuracy was influenced by the relationships between the training and validation populations more significantly for ungenotyped animals than for genotyped animals. SS-BayesA/BayesB showed an obvious advantage over SSGBLUP in the scenarios with 5 and 50 QTL. The SS-BayesB model obtained the lowest accuracy in the 500-QTL scenario. The SS-BayesA model was the most efficient and robust considering all QTL scenarios. Generally, both the relationships between training and validation populations and the LD between markers and QTL contributed to GEBV accuracy in the single-step analysis, and the advantages of single-step Bayesian models were more apparent when the trait is controlled by fewer QTL.

  15. Step-to-step spatiotemporal variables and ground reaction forces of intra-individual fastest sprinting in a single session.

    Science.gov (United States)

    Nagahara, Ryu; Mizutani, Mirai; Matsuo, Akifumi; Kanehisa, Hiroaki; Fukunaga, Tetsuo

    2018-06-01

    We aimed to investigate the step-to-step spatiotemporal variables and ground reaction forces during the acceleration phase for characterising intra-individual fastest sprinting within a single session. Step-to-step spatiotemporal variables and ground reaction forces produced by 15 male athletes were measured over a 50-m distance during repeated (three to five) 60-m sprints using a long force platform system. Differences in measured variables between the fastest and slowest trials were examined at each step until the 22nd step using a magnitude-based inferences approach. There were possibly-most likely higher running speed and step frequency (2nd to 22nd steps) and shorter support time (all steps) in the fastest trial than in the slowest trial. Moreover, for the fastest trial there were likely-very likely greater mean propulsive force during the initial four steps and possibly-very likely larger mean net anterior-posterior force until the 17th step. The current results demonstrate that better sprinting performance within a single session is probably achieved by 1) a high step frequency (except the initial step) with short support time at all steps, 2) exerting a greater mean propulsive force during initial acceleration, and 3) producing a greater mean net anterior-posterior force during initial and middle acceleration.

  16. Approximation methods in probability theory

    CERN Document Server

    Čekanavičius, Vydas

    2016-01-01

    This book presents a wide range of well-known and less common methods used for estimating the accuracy of probabilistic approximations, including the Esseen type inversion formulas, the Stein method as well as the methods of convolutions and triangle function. Emphasising the correct usage of the methods presented, each step required for the proofs is examined in detail. As a result, this textbook provides valuable tools for proving approximation theorems. While Approximation Methods in Probability Theory will appeal to everyone interested in limit theorems of probability theory, the book is particularly aimed at graduate students who have completed a standard intermediate course in probability theory. Furthermore, experienced researchers wanting to enlarge their toolkit will also find this book useful.

  17. Most probable dimension value and most flat interval methods for automatic estimation of dimension from time series

    International Nuclear Information System (INIS)

    Corana, A.; Bortolan, G.; Casaleggio, A.

    2004-01-01

    We present and compare two automatic methods for dimension estimation from time series. Both methods, based on conceptually different approaches, work on the derivative of the bi-logarithmic plot of the correlation integral versus the correlation length (log-log plot). The first method searches for the most probable dimension values (MPDV) and associates to each of them a possible scaling region. The second one searches for the most flat intervals (MFI) in the derivative of the log-log plot. The automatic procedures include the evaluation of the candidate scaling regions using two reliability indices. The data set used to test the methods consists of time series from known model attractors with and without the addition of noise, structured time series, and electrocardiographic signals from the MIT-BIH ECG database. Statistical analysis of results was carried out by means of paired t-test, and no statistically significant differences were found in the large majority of the trials. Consistent results are also obtained dealing with 'difficult' time series. In general for a more robust and reliable estimate, the use of both methods may represent a good solution when time series from complex systems are analyzed. Although we present results for the correlation dimension only, the procedures can also be used for the automatic estimation of generalized q-order dimensions and pointwise dimension. We think that the proposed methods, eliminating the need of operator intervention, allow a faster and more objective analysis, thus improving the usefulness of dimension analysis for the characterization of time series obtained from complex dynamical systems
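
    A compact sketch of the quantity both methods operate on, the local slope of the log-log plot of the correlation integral, together with a naive "most flat interval" pick (smallest slope spread over a sliding window). The data set and window length are illustrative, and the paper's reliability indices are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data set: points uniform on a unit square, so the expected correlation dimension is ~2
pts = rng.random((1200, 2))

# Correlation integral C(r): fraction of point pairs closer than r
d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
dist = d[np.triu_indices_from(d, k=1)]
r = np.logspace(-1.5, -0.45, 25)
C = np.array([(dist < ri).mean() for ri in r])

# Local slope of the log-log plot (the derivative both MPDV and MFI work on)
logr, logC = np.log(r), np.log(C)
slope = np.gradient(logC, logr)

# "Most flat interval": the window of the slope curve with the smallest spread
w = 6
flat = min(range(len(slope) - w), key=lambda i: np.std(slope[i:i + w]))
print(f"estimated dimension ~ {np.mean(slope[flat:flat + w]):.2f} (expected ~2)")
```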

  18. A high-order positivity-preserving single-stage single-step method for the ideal magnetohydrodynamic equations

    Science.gov (United States)

    Christlieb, Andrew J.; Feng, Xiao; Seal, David C.; Tang, Qi

    2016-07-01

    We propose a high-order finite difference weighted ENO (WENO) method for the ideal magnetohydrodynamics (MHD) equations. The proposed method is single-stage (i.e., it has no internal stages to store), single-step (i.e., it has no time history that needs to be stored), maintains a discrete divergence-free condition on the magnetic field, and has the capacity to preserve the positivity of the density and pressure. To accomplish this, we use a Taylor discretization of the Picard integral formulation (PIF) of the finite difference WENO method proposed in Christlieb et al. (2015) [23], where the focus is on a high-order discretization of the fluxes (as opposed to the conserved variables). We use the version where fluxes are expanded to third-order accuracy in time, and for the fluid variables space is discretized using the classical fifth-order finite difference WENO discretization. We use constrained transport in order to obtain divergence-free magnetic fields, which means that we simultaneously evolve the magnetohydrodynamic (that has an evolution equation for the magnetic field) and magnetic potential equations alongside each other, and set the magnetic field to be the (discrete) curl of the magnetic potential after each time step. In this work, we compute these derivatives to fourth-order accuracy. In order to retain a single-stage, single-step method, we develop a novel Lax-Wendroff discretization for the evolution of the magnetic potential, where we start with technology used for Hamilton-Jacobi equations in order to construct a non-oscillatory magnetic field. The end result is an algorithm that is similar to our previous work Christlieb et al. (2014) [8], but this time the time stepping is replaced through a Taylor method with the addition of a positivity-preserving limiter. Finally, positivity preservation is realized by introducing a parameterized flux limiter that considers a linear combination of high and low-order numerical fluxes. The choice of the free

  19. General methods for analysis of sequential "n-step" kinetic mechanisms: application to single turnover kinetics of helicase-catalyzed DNA unwinding.

    Science.gov (United States)

    Lucius, Aaron L; Maluf, Nasib K; Fischer, Christopher J; Lohman, Timothy M

    2003-10-01

    Helicase-catalyzed DNA unwinding is often studied using "all or none" assays that detect only the final product of fully unwound DNA. Even using these assays, quantitative analysis of DNA unwinding time courses for DNA duplexes of different lengths, L, using "n-step" sequential mechanisms, can reveal information about the number of intermediates in the unwinding reaction and the "kinetic step size", m, defined as the average number of basepairs unwound between two successive rate limiting steps in the unwinding cycle. Simultaneous nonlinear least-squares analysis using "n-step" sequential mechanisms has previously been limited by an inability to float the number of "unwinding steps", n, and m, in the fitting algorithm. Here we discuss the behavior of single turnover DNA unwinding time courses and describe novel methods for nonlinear least-squares analysis that overcome these problems. Analytic expressions for the time courses, f_ss(t), when obtainable, can be written using gamma and incomplete gamma functions. When analytic expressions are not obtainable, the numerical solution of the inverse Laplace transform can be used to obtain f_ss(t). Both methods allow n and m to be continuous fitting parameters. These approaches are generally applicable to enzymes that translocate along a lattice or require repetition of a series of steps before product formation.
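
    In the simplest limit (n identical rate-limiting steps with rate constant k and no dissociation), the "all or none" time course is the Erlang cumulative distribution, i.e. the regularized lower incomplete gamma function, with n = L/m. A short sketch under that assumption, with hypothetical lengths and rates:

```python
import numpy as np
from scipy.special import gammainc   # regularized lower incomplete gamma P(a, x)

def fraction_unwound(t, L, m, k):
    """Single-turnover time course for an n-step sequential mechanism with identical rate
    constants k and no dissociation: the fraction fully unwound by time t is the Erlang CDF,
    i.e. gammainc(n, k*t), with n = L / m (n may be non-integer when m is a fit parameter)."""
    n = L / m
    return gammainc(n, k * np.asarray(t, dtype=float))

t = np.linspace(0.0, 30.0, 7)
for L in (18, 30, 42):               # hypothetical duplex lengths (bp)
    print(L, np.round(fraction_unwound(t, L=L, m=3.0, k=1.0), 3))
```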

  20. Comparison on genomic predictions using GBLUP models and two single-step blending methods with different relationship matrices in the Nordic Holstein population

    DEFF Research Database (Denmark)

    Gao, Hongding; Christensen, Ole Fredslund; Madsen, Per

    2012-01-01

    Background A single-step blending approach allows genomic prediction using information of genotyped and non-genotyped animals simultaneously. However, the combined relationship matrix in a single-step method may need to be adjusted because marker-based and pedigree-based relationship matrices may...... not be on the same scale. The same may apply when a GBLUP model includes both genomic breeding values and residual polygenic effects. The objective of this study was to compare single-step blending methods and GBLUP methods with and without adjustment of the genomic relationship matrix for genomic prediction of 16...... 1) a simple GBLUP method, 2) a GBLUP method with a polygenic effect, 3) an adjusted GBLUP method with a polygenic effect, 4) a single-step blending method, and 5) an adjusted single-step blending method. In the adjusted GBLUP and single-step methods, the genomic relationship matrix was adjusted...
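
    A numerical sketch of the two ingredients named above, on toy data: rescaling G so its average diagonal and overall mean match the pedigree-based A22 (scale compatibility), and forming the combined single-step relationship matrix in inverse form. The pedigree, marker codes and the exact adjustment used in the study are assumptions made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy pedigree (IDs 1..5, 0 = unknown parent); animals 3, 4, 5 are "genotyped"
sires = [0, 0, 1, 1, 3]
dams  = [0, 0, 2, 2, 4]
n = len(sires)
A = np.zeros((n, n))
for i in range(n):                     # tabular method for the pedigree relationship matrix
    s, d = sires[i] - 1, dams[i] - 1
    A[i, i] = 1.0 + (0.5 * A[s, d] if s >= 0 and d >= 0 else 0.0)
    for j in range(i):
        aij = 0.0
        if s >= 0:
            aij += 0.5 * A[j, s]
        if d >= 0:
            aij += 0.5 * A[j, d]
        A[i, j] = A[j, i] = aij

geno = [2, 3, 4]                       # 0-based indices of the genotyped animals
A22 = A[np.ix_(geno, geno)]

# Toy genomic relationship matrix (VanRaden) from random marker codes 0/1/2
M = rng.integers(0, 3, size=(len(geno), 200)).astype(float)
p = M.mean(axis=0) / 2.0
Z = M - 2.0 * p
G = Z @ Z.T / (2.0 * np.sum(p * (1.0 - p))) + 0.01 * np.eye(len(geno))

# Adjust G so its average diagonal and overall average match A22 (scale compatibility)
b = (np.mean(np.diag(A22)) - np.mean(A22)) / (np.mean(np.diag(G)) - np.mean(G))
a = np.mean(A22) - b * np.mean(G)
G_adj = a + b * G

# Combined relationship matrix of single-step methods, directly in inverse form:
# H^-1 equals A^-1 with (G_adj^-1 - A22^-1) added to the genotyped block
H_inv = np.linalg.inv(A)
H_inv[np.ix_(geno, geno)] += np.linalg.inv(G_adj) - np.linalg.inv(A22)
print(np.round(H_inv, 2))
```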

  1. Assessment of the Effectiveness of Ectomycorrhizal Inocula to Promote Growth and Root Ectomycorrhizal Colonization in Pinus patula Seedlings Using the Most Probable Number Technique

    Directory of Open Access Journals (Sweden)

    Manuel Restrepo-Llano

    2014-01-01

    The aim of this study was to evaluate the response of Pinus patula seedlings to two inocula types: soil from a Pinus plantation (ES) and an in vitro produced inoculum (EM). The most probable number method (MPN) was used to quantify ectomycorrhizal propagule density (EPD) in both inocula in a 7-order dilution series ranging from 10⁰ (undiluted inoculum) to 10−6 (the most diluted inoculum). The MPN method allowed establishing differences in the infective ectomycorrhizal propagule density (EPD) between the inocula (ES = 34 per g; EM = 156 per g). The results suggest that the EPD of an inoculum may be a key factor that influences the success of the inoculation. The low EPD of the ES inoculum suggests that soil extracted from forest plantations had very low effectiveness for promoting root colonization and plant growth. In contrast, the high EPD found in the formulated inoculum (EM) reinforced the idea that it is better to use proven high-quality inocula for forest nurseries than to use soil from a forestry plantation.

  2. An efficient analysis for absorption and gain coefficients in single step-index waveguides by using the alpha method

    Directory of Open Access Journals (Sweden)

    Mustafa TEMİZ

    2008-02-01

    In this study, some design parameters such as the normalized frequency and especially the normalized propagation constant have been obtained, depending on parameters which are functions of the energy eigenvalues of the carriers, such as electrons and holes, confined in a single step-index waveguide laser (SSIWGL) or single step-index waveguide (SSIWG). Some optical expressions for the optical power and probability quantities in the active region and cladding layers of the SSIWG or SSIWGL have been investigated. Investigations have been undertaken in terms of these parameters, and the optical even and odd electric field waves with the lowest modes were individually computed theoretically. In particular, absorption coefficients and loss coefficients, in addition to some important quantities of single step-index waveguide lasers, are evaluated for the even and odd electric field waves.

  3. An innovative method for offshore wind farm site selection based on the interval number with probability distribution

    Science.gov (United States)

    Wu, Yunna; Chen, Kaifeng; Xu, Hu; Xu, Chuanbo; Zhang, Haobo; Yang, Meng

    2017-12-01

    There is insufficient research relating to offshore wind farm site selection in China. The current methods for site selection have some defects. First, information loss is caused by two aspects: the implicit assumption that the probability distribution on the interval number is uniform; and ignoring the value of decision makers' (DMs') common opinion on the criteria information evaluation. Secondly, the difference in DMs' utility function has failed to receive attention. An innovative method is proposed in this article to solve these drawbacks. First, a new form of interval number and its weighted operator are proposed to reflect the uncertainty and reduce information loss. Secondly, a new stochastic dominance degree is proposed to quantify the interval number with a probability distribution. Thirdly, a two-stage method integrating the weighted operator with stochastic dominance degree is proposed to evaluate the alternatives. Finally, a case from China proves the effectiveness of this method.

  4. Comparing the mannitol-egg yolk-polymyxin agar plating method with the three-tube most-probable-number method for enumeration of Bacillus cereus spores in raw and high-temperature, short-time pasteurized milk.

    Science.gov (United States)

    Harper, Nigel M; Getty, Kelly J K; Schmidt, Karen A; Nutsch, Abbey L; Linton, Richard H

    2011-03-01

    The U.S. Food and Drug Administration's Bacteriological Analytical Manual recommends two enumeration methods for Bacillus cereus: (i) a standard plate count method with mannitol-egg yolk-polymyxin (MYP) agar and (ii) a most-probable-number (MPN) method with tryptic soy broth (TSB) supplemented with 0.1% polymyxin sulfate. This study compared the effectiveness of the MYP and MPN methods for detecting and enumerating B. cereus in raw and high-temperature, short-time pasteurized skim (0.5%), 2%, and whole (3.5%) bovine milk stored at 4°C for 96 h. Each milk sample was inoculated with B. cereus EZ-Spores and sampled at 0, 48, and 96 h after inoculation. There were no differences (P > 0.05) in B. cereus populations among sampling times for all milk types, so data were pooled to obtain overall mean values for each treatment. The overall B. cereus population mean of pooled sampling times for the MPN method (2.59 log CFU/ml) was greater (P < 0.05) than that for the MYP plate count method. B. cereus populations in milk samples ranged from 2.36 to 3.46 and 2.66 to 3.58 log CFU/ml for inoculated milk treatments for the MYP plate count and MPN methods, respectively, which is below the level necessary for toxin production. The MPN method recovered more B. cereus, which makes it useful for validation research. However, the MYP plate count method for enumeration of B. cereus also had advantages, including its ease of use and faster time to results (2 versus 5 days for MPN).

  5. Accuracy of Single-Step versus 2-Step Double-Mix Impression Technique

    DEFF Research Database (Denmark)

    Franco, Eduardo Batista; da Cunha, Leonardo Fernandes; Herrera, Francyle Simões

    2011-01-01

    Objective. To investigate the accuracy of dies obtained from single-step and 2-step double-mix impressions. Material and Methods. Impressions (n = 10) of a stainless steel die simulating a complete crown preparation were performed using a polyether (Impregum Soft Heavy and Light body) and a vinyl...

  6. Two-step estimation in ratio-of-mediator-probability weighted causal mediation analysis.

    Science.gov (United States)

    Bein, Edward; Deutsch, Jonah; Hong, Guanglei; Porter, Kristin E; Qin, Xu; Yang, Cheng

    2018-04-15

    This study investigates appropriate estimation of estimator variability in the context of causal mediation analysis that employs propensity score-based weighting. Such an analysis decomposes the total effect of a treatment on the outcome into an indirect effect transmitted through a focal mediator and a direct effect bypassing the mediator. Ratio-of-mediator-probability weighting estimates these causal effects by adjusting for the confounding impact of a large number of pretreatment covariates through propensity score-based weighting. In step 1, a propensity score model is estimated. In step 2, the causal effects of interest are estimated using weights derived from the prior step's regression coefficient estimates. Statistical inferences obtained from this 2-step estimation procedure are potentially problematic if the estimated standard errors of the causal effect estimates do not reflect the sampling uncertainty in the estimation of the weights. This study extends to ratio-of-mediator-probability weighting analysis a solution to the 2-step estimation problem by stacking the score functions from both steps. We derive the asymptotic variance-covariance matrix for the indirect effect and direct effect 2-step estimators, provide simulation results, and illustrate with an application study. Our simulation results indicate that the sampling uncertainty in the estimated weights should not be ignored. The standard error estimation using the stacking procedure offers a viable alternative to bootstrap standard error estimation. We discuss broad implications of this approach for causal analysis involving propensity score-based weighting. Copyright © 2018 John Wiley & Sons, Ltd.

  7. Improvement of Source Number Estimation Method for Single Channel Signal.

    Directory of Open Access Journals (Sweden)

    Zhi Dong

    Source number estimation methods for single-channel signals have been investigated and improvements for each method are suggested in this work. Firstly, the single-channel data are converted to multi-channel form by a delay process. Then, algorithms used in array signal processing, such as Gerschgorin's disk estimation (GDE) and minimum description length (MDL), are introduced to estimate the source number of the received signal. Previous results have shown that MDL, based on information theoretic criteria (ITC), obtains superior performance to GDE at low SNR; however, it cannot handle signals containing colored noise. On the contrary, the GDE method can eliminate the influence of colored noise, but its performance at low SNR is not satisfactory. In order to solve these problems and contradictions, this work makes notable improvements to the two methods. A diagonal loading technique is employed to ameliorate the MDL method, and a jackknife technique is used to optimize the data covariance matrix in order to improve the performance of the GDE method. Simulation results illustrate that the performance of the original methods is improved considerably.
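
    As a baseline for the improvements described above, a sketch of the delay-embedding step followed by the standard (Wax-Kailath) MDL criterion on the eigenvalues of the resulting covariance matrix; the test signal is hypothetical, and note that each real sinusoid occupies two signal eigenvalues, so the expected estimate here is 4.

```python
import numpy as np

rng = np.random.default_rng(3)

# Single-channel mixture of two sinusoids in white noise
fs, N = 1000, 4000
t = np.arange(N) / fs
x = np.sin(2 * np.pi * 50 * t) + 0.7 * np.sin(2 * np.pi * 120 * t) + 0.3 * rng.standard_normal(N)

# Delay embedding converts the single channel to pseudo multi-channel snapshots
p = 8
X = np.array([x[i:i + p] for i in range(N - p)]).T           # p x (N-p) snapshot matrix
R = X @ X.T / X.shape[1]                                      # sample covariance
lam = np.sort(np.linalg.eigvalsh(R))[::-1]                    # eigenvalues, descending

def mdl(lam, n_snap):
    """Wax-Kailath MDL criterion; the estimated source number minimises it."""
    p = len(lam)
    crit = []
    for k in range(p):
        tail = lam[k:]
        geo, arith = np.exp(np.mean(np.log(tail))), np.mean(tail)
        crit.append(-n_snap * (p - k) * np.log(geo / arith)
                    + 0.5 * k * (2 * p - k) * np.log(n_snap))
    return np.array(crit)

print("estimated number of sources:", int(np.argmin(mdl(lam, X.shape[1]))))
```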

  8. Application of Probability Calculations to the Study of the Permissible Step and Touch Potentials to Ensure Personnel Safety

    International Nuclear Information System (INIS)

    Eisawy, E.A.

    2011-01-01

    The aim of this paper is to develop a practical method to evaluate the actual step and touch potential distributions in order to determine the risk of failure of the grounding system. The failure probability, indicating the safety level of the grounding system, is related to both applied (stress) and withstand (strength) step or touch potentials. The probability distributions of the applied step and touch potentials, as well as the corresponding withstand step and touch potentials which represent the capability of the human body to resist stress potentials, are presented. These two distributions are used to evaluate the failure probability of the grounding system, which denotes the probability that the applied potential exceeds the withstand potential. The method treats the resistance of the human body, the foot contact resistance, and the fault clearing time as independent random variables, rather than as fixed values as treated in previous analyses of the safety requirements for a given grounding system.
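
    The failure probability P(applied > withstand) can be estimated by Monte Carlo sampling of the random variables named above. All distributions and parameters in this sketch are illustrative; the withstand potential uses the common 0.116/sqrt(t) permissible-body-current rule for a 50 kg person, which may differ from the paper's criterion.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000

# Random variables treated in the abstract (distributions and parameters are illustrative)
Rb = rng.normal(1000.0, 150.0, n)            # human body resistance, ohm
Rf = rng.lognormal(np.log(3000.0), 0.4, n)   # foot contact resistance per foot, ohm
tc = rng.uniform(0.2, 0.8, n)                # fault clearing time, s

# Withstand (strength) touch potential: permissible body current ~ 0.116/sqrt(t) for a 50 kg
# body, flowing through the body in series with the two feet in parallel
V_withstand = (Rb + Rf / 2.0) * 0.116 / np.sqrt(tc)

# Applied (stress) touch potential during a fault: illustrative lognormal spread
V_applied = rng.lognormal(np.log(600.0), 0.5, n)

p_fail = np.mean(V_applied > V_withstand)
print(f"estimated probability that stress exceeds strength: {p_fail:.4f}")
```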

  9. A novel frequent probability pattern mining algorithm based on circuit simulation method in uncertain biological networks

    Science.gov (United States)

    2014-01-01

    Background Motif mining has always been a hot research topic in bioinformatics. Most current research on biological networks focuses on exact motif mining. However, due to inevitable experimental error and noisy data, biological network data represented as a probability model could better reflect the authenticity and biological significance; therefore, it is more biologically meaningful to discover probability motifs in uncertain biological networks. One of the key steps in probability motif mining is frequent pattern discovery, which is usually based on the possible world model and has relatively high computational complexity. Methods In this paper, we present a novel method for detecting frequent probability patterns based on circuit simulation in uncertain biological networks. First, a partition-based efficient search is applied to non-tree-like subgraph mining, where the probability of occurrence in random networks is small. Then, an algorithm for probability isomorphism based on circuit simulation is proposed. The probability isomorphism test combines the analysis of circuit topology structure with related physical properties of voltage in order to evaluate the probability isomorphism between probability subgraphs. The circuit-simulation-based probability isomorphism avoids using the traditional possible world model. Finally, based on the algorithm of probability subgraph isomorphism, a two-step hierarchical clustering method is used to cluster subgraphs and discover frequent probability patterns from the clusters. Results The experimental results on data sets of Protein-Protein Interaction (PPI) networks and the transcriptional regulatory networks of E. coli and S. cerevisiae show that the proposed method can efficiently discover the frequent probability subgraphs. The discovered subgraphs in our study contain all probability motifs reported in the experiments published in other related papers. Conclusions The algorithm of probability graph isomorphism

  10. The intensity detection of single-photon detectors based on photon counting probability density statistics

    International Nuclear Information System (INIS)

    Zhang Zijing; Song Jie; Zhao Yuan; Wu Long

    2017-01-01

    Single-photon detectors possess ultra-high sensitivity, but they cannot directly respond to signal intensity. Conventional methods adopt sampling gates with fixed width and count the triggered number of sampling gates, which makes it possible to obtain the photon counting probability and estimate the echo signal intensity. In this paper, we not only count the number of triggered sampling gates, but also record the triggered time positions of the photon counting pulses. The photon counting probability density distribution is obtained through the statistics of a series of the triggered time positions. Then the Minimum Variance Unbiased Estimation (MVUE) method is used to estimate the echo signal intensity. Compared with conventional methods, this method can improve the estimation accuracy of the echo signal intensity due to the acquisition of more detected information. Finally, a proof-of-principle laboratory system is established. The estimation accuracy of the echo signal intensity is discussed and a high-accuracy intensity image is acquired under low-light-level environments. (paper)
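
    For reference, the "conventional" estimate mentioned above inverts the per-gate click probability of a Poisson source; a minimal sketch, with dark counts and the paper's MVUE refinement omitted:

```python
import numpy as np

rng = np.random.default_rng(5)

def estimate_mean_photons(clicks, gates):
    """Invert the per-gate click probability of a Geiger-mode detector, P = 1 - exp(-mu),
    to estimate the mean detected photon number mu per gate (Poisson light, no dark counts)."""
    p_click = clicks / gates
    return -np.log(1.0 - p_click)

mu_true = 0.8                        # mean detected photons per gate (illustrative)
gates = 10_000
clicks = rng.binomial(gates, 1.0 - np.exp(-mu_true))
print(f"estimated mu = {estimate_mean_photons(clicks, gates):.3f} (true {mu_true})")
```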

  11. Single step fabrication method of fullerene/TiO2 composite photocatalyst for hydrogen production

    International Nuclear Information System (INIS)

    Kum, Jong Min; Cho, Sung Oh

    2011-01-01

    Hydrogen is one of the most promising alternative energy sources. Fossil fuel, which is the most widely used energy source, has two defects. One is CO2 emission, causing global warming. The other is exhaustion. On the other hand, hydrogen emits no CO2 and can be produced by splitting water, which is a renewable and easily obtainable source. However, about 95% of hydrogen is derived from fossil fuel, which limits the merits of hydrogen: hydrogen from fossil fuel is not a renewable energy anymore. To maximize the merits of hydrogen, renewability and no CO2 emission, unconventional hydrogen production methods without using fossil fuel are required. Photocatalytic water-splitting is one of these unconventional hydrogen production methods. Photocatalytic water-splitting, which uses the hole/electron pairs of a semiconductor, is a promising way to produce clean and renewable hydrogen from solar energy. TiO2 is the semiconductor material that has been most widely used as a photocatalyst. TiO2 shows high photocatalytic reactivity and stability in water. However, its wide band gap absorbs only UV light, which is only 5% of sunlight. To enhance the visible-light response, composition with fullerene-based materials has been investigated [1-2]. Methano-fullerene carboxylic acid (FCA) is one of the fullerene-based materials. We tried to fabricate an FCA/TiO2 composite using a UV-assisted single-step method. The method not only simplified the fabrication procedures, but enhanced the hydrogen production rate

  12. Single-step electrochemical method for producing very sharp Au scanning tunneling microscopy tips

    International Nuclear Information System (INIS)

    Gingery, David; Buehlmann, Philippe

    2007-01-01

    A single-step electrochemical method for making sharp gold scanning tunneling microscopy tips is described. 3.0 M NaCl in 1% perchloric acid is compared to several previously reported etchants. The addition of perchloric acid to sodium chloride solutions drastically shortens etching times and is shown by transmission electron microscopy to produce very sharp tips with a mean radius of curvature of 15 nm

  13. Implementation of genomic recursions in single-step genomic best linear unbiased predictor for US Holsteins with a large number of genotyped animals.

    Science.gov (United States)

    Masuda, Y; Misztal, I; Tsuruta, S; Legarra, A; Aguilar, I; Lourenco, D A L; Fragomeni, B O; Lawlor, T J

    2016-03-01

    The objectives of this study were to develop and evaluate an efficient implementation of the computation of the inverse of the genomic relationship matrix with the recursion algorithm, called the algorithm for proven and young (APY), in single-step genomic BLUP. We validated genomic predictions for young bulls with more than 500,000 genotyped animals in final score for US Holsteins. Phenotypic data included 11,626,576 final scores on 7,093,380 US Holstein cows, and genotypes were available for 569,404 animals. Daughter deviations for young bulls with no classified daughters in 2009, but at least 30 classified daughters in 2014, were computed using all the phenotypic data. Genomic predictions for the same bulls were calculated with single-step genomic BLUP using phenotypes up to 2009. We calculated the inverse of the genomic relationship matrix, G_APY^(-1), based on a direct inversion of the genomic relationship matrix for a small subset of genotyped animals (core animals) and extended that information to noncore animals by recursion. We tested several sets of core animals including 9,406 bulls with at least 1 classified daughter, 9,406 bulls and 1,052 classified dams of bulls, 9,406 bulls and 7,422 classified cows, and random samples of 5,000 to 30,000 animals. Validation reliability was assessed by the coefficient of determination from regression of daughter deviation on genomic predictions for the predicted young bulls. The reliabilities were 0.39 with 5,000 randomly chosen core animals, 0.45 with the 9,406 bulls and 7,422 cows as core animals, and 0.44 with the remaining sets. With phenotypes truncated in 2009 and the preconditioned conjugate gradient to solve mixed model equations, the number of rounds to convergence for core animals defined by bulls was 1,343; defined by bulls and cows, 2,066; and defined by 10,000 random animals, at most 1,629. With complete phenotype data, the number of rounds decreased to 858, 1,299, and at most 1,092, respectively. Setting up G_APY^(-1
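
    A small numpy sketch of the APY construction on a toy genomic relationship matrix: only the core block is inverted directly, and noncore animals enter through regression (recursion) coefficients and a diagonal conditional-variance block. Matrix sizes and marker codes are made up; the block expression follows the published APY formula.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy genomic relationship matrix G (VanRaden) from random marker codes
n, n_core, n_markers = 60, 20, 300
M = rng.integers(0, 3, size=(n, n_markers)).astype(float)
p = M.mean(axis=0) / 2.0
Z = M - 2.0 * p
G = Z @ Z.T / (2.0 * np.sum(p * (1.0 - p))) + 0.05 * np.eye(n)   # ridge keeps G invertible

core = np.arange(n_core)                      # "core" animals; the rest are noncore
nonc = np.arange(n_core, n)
Gcc, Gcn = G[np.ix_(core, core)], G[np.ix_(core, nonc)]
Gnc, Gnn = G[np.ix_(nonc, core)], G[np.ix_(nonc, nonc)]

# APY: noncore breeding values are regressions on the core plus an independent residual,
# so only the (small) core block and a diagonal block need to be inverted.
Gcc_inv = np.linalg.inv(Gcc)
P = Gnc @ Gcc_inv                                            # recursion coefficients
m_ii = np.diag(Gnn) - np.einsum('ij,ij->i', P, Gnc)          # conditional variances
Minv = np.diag(1.0 / m_ii)

G_apy_inv = np.zeros_like(G)
G_apy_inv[np.ix_(core, core)] = Gcc_inv + P.T @ Minv @ P
G_apy_inv[np.ix_(core, nonc)] = -P.T @ Minv
G_apy_inv[np.ix_(nonc, core)] = -Minv @ P
G_apy_inv[np.ix_(nonc, nonc)] = Minv

# APY is exact when the conditional covariance among noncore animals is diagonal;
# otherwise it approximates the direct inverse, improving as the core grows.
err = np.max(np.abs(G_apy_inv - np.linalg.inv(G)))
print(f"max elementwise difference from the direct inverse: {err:.3f}")
```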

  14. Thermodynamic approach and comparison of two-step and single step DME (dimethyl ether) syntheses with carbon dioxide utilization

    International Nuclear Information System (INIS)

    Chen, Wei-Hsin; Hsu, Chih-Liang; Wang, Xiao-Dong

    2016-01-01

    DME (dimethyl ether) synthesis from syngas with CO2 utilization through two-step and single-step processes is analyzed thermodynamically. The influences of reaction temperature, H2/CO molar ratio, and CO2/CO molar ratio on CO and CO2 conversions, DME selectivity and yield, and thermal behavior are evaluated. Particular attention is paid to the comparison of the performance of DME synthesis between the two different methods. In the two-step method, the addition of CO2 suppresses the CO conversion during methanol synthesis. An increase in the CO2/CO ratio decreases the CO2 conversion (negative effect), but increases the total consumption of CO2 (positive effect). At a given reaction temperature with H2/CO = 4, the maximum DME yield develops at CO2/CO = 1. In the single-step method, over 98% of CO can be converted and the DME yield can be as high as 0.52 mol (mol CO)^-1 at CO2/CO = 2. The comparison of the single-step and two-step processes indicates that the maximum CO conversion, DME selectivity, and DME yield in the former are higher than those in the latter, whereas an opposite result is observed for the maximum CO2 conversion. These results reveal that the single-step process has a lower thermodynamic limitation and is a better option for DME synthesis. From a CO2 utilization point of view, operation at low temperature, high H2/CO ratio, and low CO2/CO ratio results in higher CO2 conversion, irrespective of two-step or single-step DME synthesis. - Highlights: • DME (dimethyl ether) synthesis with CO2 utilization is analyzed thermodynamically. • Single-step and two-step DME syntheses are studied and compared with each other. • CO2 addition suppresses CO conversion in MeOH synthesis but increases MeOH yield. • The performance of the single-step DME synthesis is better than that of the two-step one. • Increasing the CO2/CO ratio decreases CO2 conversion but increases the CO2 consumption amount.

  15. A probability-based multi-cycle sorting method for 4D-MRI: A simulation study.

    Science.gov (United States)

    Liang, Xiao; Yin, Fang-Fang; Liu, Yilin; Cai, Jing

    2016-12-01

    To develop a novel probability-based sorting method capable of generating multiple breathing cycles of 4D-MRI images and to evaluate the performance of this new method by comparing with conventional phase-based methods in terms of image quality and tumor motion measurement. Based on previous findings that the breathing motion probability density function (PDF) of a single breathing cycle is dramatically different from the true stabilized PDF that results from many breathing cycles, it is expected that a probability-based sorting method capable of generating multiple breathing cycles of 4D images may capture breathing variation information missing from conventional single-cycle sorting methods. The overall idea is to identify a few main breathing cycles (and their corresponding weightings) that can best represent the main breathing patterns of the patient and then reconstruct a set of 4D images for each of the identified main breathing cycles. This method is implemented in three steps: (1) the breathing signal is decomposed into individual breathing cycles, characterized by amplitude and period; (2) individual breathing cycles are grouped based on amplitude and period to determine the main breathing cycles. If a group contains more than 10% of all breathing cycles in a breathing signal, it is determined as a main breathing pattern group and is represented by the average of the individual breathing cycles in the group; (3) for each main breathing cycle, a set of 4D images is reconstructed using a result-driven sorting method adapted from our previous study. The probability-based sorting method was first tested on 26 patients' breathing signals to evaluate its feasibility of improving the target motion PDF. The new method was subsequently tested for a sequential image acquisition scheme on the 4D digital extended cardiac torso (XCAT) phantom. Performance of the probability-based and conventional sorting methods was evaluated in terms of target volume precision and accuracy as measured
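
    Step (2) above is essentially a grouping of cycles by amplitude and period with a 10% size threshold; a sketch with made-up cycle parameters and a simple greedy grouping rule (the study's exact grouping criteria may differ):

```python
import numpy as np

def main_breathing_cycles(amplitudes, periods, amp_tol=0.15, per_tol=0.15, min_fraction=0.10):
    """Group individual breathing cycles by (amplitude, period) similarity and return the
    average parameters of every group containing at least min_fraction of all cycles."""
    cycles = list(zip(amplitudes, periods))
    groups = []                                   # each group: list of (amplitude, period)
    for a, p in cycles:
        for g in groups:
            a0, p0 = np.mean([c[0] for c in g]), np.mean([c[1] for c in g])
            if abs(a - a0) <= amp_tol * a0 and abs(p - p0) <= per_tol * p0:
                g.append((a, p))
                break
        else:
            groups.append([(a, p)])
    main = [g for g in groups if len(g) >= min_fraction * len(cycles)]
    return [(np.mean([c[0] for c in g]), np.mean([c[1] for c in g]), len(g)) for g in main]

# Hypothetical cycle-by-cycle amplitudes (cm) and periods (s) from a breathing trace
rng = np.random.default_rng(7)
amps = np.concatenate([rng.normal(1.0, 0.05, 40), rng.normal(1.6, 0.05, 12)])
pers = np.concatenate([rng.normal(4.0, 0.2, 40), rng.normal(5.0, 0.2, 12)])
for amp, per, count in main_breathing_cycles(amps, pers):
    print(f"main cycle: amplitude {amp:.2f} cm, period {per:.2f} s, {count} cycles")
```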

  16. Collision Probability Analysis

    DEFF Research Database (Denmark)

    Hansen, Peter Friis; Pedersen, Preben Terndrup

    1998-01-01

    It is the purpose of this report to apply a rational model for prediction of ship-ship collision probabilities as a function of the ship and the crew characteristics and the navigational environment for MS Dextra sailing on a route between Cadiz and the Canary Islands. The most important ship and crew...... characteristics are: ship speed, ship manoeuvrability, the layout of the navigational bridge, the radar system, the number and the training of navigators, the presence of a look-out etc. The main parameters affecting the navigational environment are ship traffic density, probability distributions of wind speeds...... probability, i.e. a study of the navigator's role in resolving critical situations, a causation factor is derived as a second step. The report documents the first step in a probabilistic collision damage analysis. Future work will include calculation of energy released for crushing of structures giving

  17. The transition probability and the probability for the left-most particle's position of the q-totally asymmetric zero range process

    Energy Technology Data Exchange (ETDEWEB)

    Korhonen, Marko [Department of Mathematics and Statistics, University of Helsinki, FIN-00014 (Finland); Lee, Eunghyun [Centre de Recherches Mathématiques (CRM), Université de Montréal, Quebec H3C 3J7 (Canada)

    2014-01-15

    We treat the N-particle zero range process whose jumping rates satisfy a certain condition. This condition is required to use the Bethe ansatz and the resulting model is the q-boson model by Sasamoto and Wadati [“Exact results for one-dimensional totally asymmetric diffusion models,” J. Phys. A 31, 6057–6071 (1998)] or the q-totally asymmetric zero range process (TAZRP) by Borodin and Corwin [“Macdonald processes,” Probab. Theory Relat. Fields (to be published)]. We find the explicit formula of the transition probability of the q-TAZRP via the Bethe ansatz. By using the transition probability we find the probability distribution of the left-most particle's position at time t. To find the probability for the left-most particle's position we find a new identity corresponding to identity for the asymmetric simple exclusion process by Tracy and Widom [“Integral formulas for the asymmetric simple exclusion process,” Commun. Math. Phys. 279, 815–844 (2008)]. For the initial state that all particles occupy a single site, the probability distribution of the left-most particle's position at time t is represented by the contour integral of a determinant.

  18. Optimizing the number of steps in learning tasks for complex skills.

    Science.gov (United States)

    Nadolski, Rob J; Kirschner, Paul A; van Merriënboer, Jeroen J G

    2005-06-01

    Carrying out whole tasks is often too difficult for novice learners attempting to acquire complex skills. The common solution is to split up the tasks into a number of smaller steps. The number of steps must be optimized for efficient and effective learning. The aim of the study is to investigate the relation between the number of steps provided to learners and the quality of their learning of complex skills. It is hypothesized that students receiving an optimized number of steps will learn better than those receiving either the whole task in only one step or those receiving a large number of steps. Participants were 35 sophomore law students studying at Dutch universities, mean age=22.8 years (SD=3.5), 63% were female. Participants were randomly assigned to 1 of 3 computer-delivered versions of a multimedia programme on how to prepare and carry out a law plea. The versions differed only in the number of learning steps provided. Videotaped plea-performance results were determined, various related learning measures were acquired and all computer actions were logged and analyzed. Participants exposed to an intermediate (i.e. optimized) number of steps outperformed all others on the compulsory learning task. No differences in performance on a transfer task were found. A high number of steps proved to be less efficient for carrying out the learning task. An intermediate number of steps is the most effective, proving that the number of steps can be optimized for improving learning.

  19. Tailoring single-photon and multiphoton probabilities of a single-photon on-demand source

    International Nuclear Information System (INIS)

    Migdall, A.L.; Branning, D.; Castelletto, S.

    2002-01-01

    As typically implemented, single-photon sources cannot be made to produce single photons with high probability, while simultaneously suppressing the probability of yielding two or more photons. Because of this, single-photon sources cannot really produce single photons on demand. We describe a multiplexed system that allows the probabilities of producing one and more photons to be adjusted independently, enabling a much better approximation of a source of single photons on demand

  20. Probability Maps for the Visualization of Assimilation Ensemble Flow Data

    KAUST Repository

    Hollt, Thomas

    2015-05-25

    Ocean forecasts nowadays are created by running ensemble simulations in combination with data assimilation techniques. Most of these techniques resample the ensemble members after each assimilation cycle. This means that in a time series, after resampling, every member can follow up on any of the members before resampling. Tracking behavior over time, such as all possible paths of a particle in an ensemble vector field, becomes very difficult, as the number of combinations rises exponentially with the number of assimilation cycles. In general, a single possible path is not of interest, but only the probabilities that any point in space might be reached by a particle at some point in time. In this work we present an approach using probability-weighted piecewise particle trajectories to allow such a mapping interactively, instead of tracing quadrillions of individual particles. We achieve interactive rates by binning the domain and splitting up the tracing process into the individual assimilation cycles, so that particles that fall into the same bin after a cycle can be treated as a single particle with a larger probability as input for the next time step. As a result we lose the ability to track individual particles, but we can create probability maps for any desired seed at interactive rates.
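    The binning idea described above can be sketched in a few lines. The following toy example (Python/NumPy) is a rough illustration only, assuming a periodic one-dimensional domain and representing each ensemble member by a fixed bin displacement per cycle; all values are invented rather than taken from the paper.

```python
import numpy as np

n_bins, n_members, n_cycles = 100, 5, 4
rng = np.random.default_rng(2)

# Hypothetical ensemble: each member displaces a particle by a fixed number of
# bins per assimilation cycle (a stand-in for tracing through its velocity field).
member_shift = rng.integers(-3, 8, size=n_members)
member_weight = np.full(n_members, 1.0 / n_members)

# Probability map: start with all mass in a single seed bin.
prob = np.zeros(n_bins)
prob[10] = 1.0

for _ in range(n_cycles):
    new_prob = np.zeros(n_bins)
    for shift, weight in zip(member_shift, member_weight):
        # All mass already collected in a bin is treated as one "particle";
        # each member moves it, and mass landing in the same bin simply adds up.
        new_prob += weight * np.roll(prob, shift)
    prob = new_prob

print("bins reachable after", n_cycles, "cycles:", np.count_nonzero(prob))
print("total probability (should remain 1):", prob.sum())
```

    Because mass that lands in the same bin is merged after every cycle, the work grows only with the number of bins and members, instead of exponentially with the number of assimilation cycles.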

  1. Accurate step-hold tracking of smoothly varying periodic and aperiodic probability.

    Science.gov (United States)

    Ricci, Matthew; Gallistel, Randy

    2017-07-01

    Subjects observing many samples from a Bernoulli distribution are able to perceive an estimate of the generating parameter. A question of fundamental importance is how the current percept (what we think the probability now is) depends on the sequence of observed samples. Answers to this question are strongly constrained by the manner in which the current percept changes in response to changes in the hidden parameter. Subjects do not update their percept trial-by-trial when the hidden probability undergoes unpredictable and unsignaled step changes; instead, they update it only intermittently in a step-hold pattern. It could be that the step-hold pattern is not essential to the perception of probability and is only an artifact of step changes in the hidden parameter. However, we now report that the step-hold pattern obtains even when the parameter varies slowly and smoothly. It obtains even when the smooth variation is periodic (sinusoidal) and perceived as such. We elaborate on a previously published theory that accounts for: (i) the quantitative properties of the step-hold update pattern; (ii) subjects' quick and accurate reporting of changes; (iii) subjects' second thoughts about previously reported changes; (iv) subjects' detection of higher-order structure in patterns of change. We also call attention to the challenges these results pose for trial-by-trial updating theories.
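    As a hedged illustration of what a step-hold update pattern looks like (this is not the authors' model; the sliding-window estimate and the decision threshold below are arbitrary choices), a short Python simulation with a sinusoidally varying hidden parameter could be written as:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hidden Bernoulli parameter varies slowly and sinusoidally over trials.
n_trials = 2000
t = np.arange(n_trials)
p_hidden = 0.5 + 0.3 * np.sin(2 * np.pi * t / 500)
samples = rng.random(n_trials) < p_hidden

# Step-hold observer: hold the current percept and revise it only when the
# recent evidence deviates strongly from it (purely illustrative rule).
window, threshold = 50, 0.12
current = 0.5
percept = np.empty(n_trials)
for i in range(n_trials):
    recent = samples[max(0, i - window + 1):i + 1].mean()
    if abs(recent - current) > threshold:
        current = recent          # "step": jump to the new local estimate
    percept[i] = current          # "hold": otherwise keep the previous percept

print("number of distinct held percepts (steps):", len(np.unique(percept)))
```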

  2. A technique of evaluating most probable stochastic variables from a small number of samples and their accuracies and degrees of confidence

    Energy Technology Data Exchange (ETDEWEB)

    Katoh, K. [Ibaraki Pref. Univ. Health Sci. (Japan)]

    1997-12-31

    A problem of estimating the stochastic characteristics of a population from a small number of samples is solved as an inverse problem, from the viewpoint of information theory and with Bayesian statistics. For both the Poisson process and the Bernoulli process, the most probable values of the characteristics of the mother population, together with their accuracies and degrees of confidence, are successfully obtained. Mathematical expressions are given for the general case, where a limited amount of information and/or knowledge of the stochastic characteristics is available, and for a special case where no a priori information or knowledge is available. Mathematical properties of the solutions obtained and their practical application to radiation measurement problems are also discussed.
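    The paper's information-theoretic derivation is not reproduced here, but its goal (a most probable value with an accuracy and a degree of confidence from very few counts) can be illustrated with a standard conjugate Bayesian sketch for the Poisson case (Python/SciPy); the counts and the nearly flat prior are invented for illustration:

```python
from scipy import stats

# Hypothetical data: three counts from a Poisson process (e.g. a short
# radiation measurement), far too few for asymptotic frequentist methods.
counts = [2, 0, 1]

# Conjugate Gamma prior; alpha = 1 with a tiny rate approximates "no prior
# knowledge" about the underlying rate.
alpha_prior, beta_prior = 1.0, 1e-6

alpha_post = alpha_prior + sum(counts)     # shape after observing the counts
beta_post = beta_prior + len(counts)       # rate after observing the counts

posterior = stats.gamma(a=alpha_post, scale=1.0 / beta_post)
most_probable_rate = (alpha_post - 1.0) / beta_post     # posterior mode
ci_low, ci_high = posterior.interval(0.95)              # 95% credible interval

print(f"most probable rate: {most_probable_rate:.2f} counts per measurement")
print(f"95% credible interval: ({ci_low:.2f}, {ci_high:.2f})")
```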

  3. Location Prediction Based on Transition Probability Matrices Constructing from Sequential Rules for Spatial-Temporal K-Anonymity Dataset

    Science.gov (United States)

    Liu, Zhao; Zhu, Yunhong; Wu, Chenxue

    2016-01-01

    Spatial-temporal k-anonymity has become a mainstream approach among techniques for protection of users' privacy in location-based services (LBS) applications, and has been applied to several variants such as LBS snapshot queries and continuous queries. Analyzing large-scale spatial-temporal anonymity sets may benefit several LBS applications. In this paper, we propose two location prediction methods based on transition probability matrices constructed from sequential rules for spatial-temporal k-anonymity datasets. First, we define single-step sequential rules mined from sequential spatial-temporal k-anonymity datasets generated from continuous LBS queries for multiple users. We then construct transition probability matrices from the mined single-step sequential rules, and normalize the transition probabilities in the transition matrices. Next, we regard a mobility model for an LBS requester as a stationary stochastic process and compute the n-step transition probability matrices by raising the normalized transition probability matrices to the power n. Furthermore, we propose two location prediction methods: rough prediction and accurate prediction. The former obtains the probabilities of arriving at target locations along simple paths that include only current locations, target locations and transition steps. By iteratively combining the probabilities for simple paths with n steps and the probabilities for detailed paths with n-1 steps, the latter method calculates transition probabilities for detailed paths with n steps from current locations to target locations. Finally, we conduct extensive experiments, and the correctness and flexibility of our proposed algorithm have been verified. PMID:27508502
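    The core computation, normalizing a single-step transition matrix and raising it to the power n to obtain n-step transition probabilities, can be sketched as follows (Python/NumPy); the three-location example and its counts are hypothetical, not from the paper's datasets:

```python
import numpy as np

# Hypothetical counts of single-step transitions mined from anonymized traces:
# rows are current locations, columns are next locations.
counts = np.array([
    [10.0,  5.0,  0.0],
    [ 2.0,  8.0,  6.0],
    [ 0.0,  4.0, 12.0],
])

# Normalize each row so it sums to 1: the single-step transition probabilities.
P = counts / counts.sum(axis=1, keepdims=True)

# Treating the requester's mobility as a stationary Markov chain, the n-step
# transition probabilities are obtained by raising P to the power n.
n = 3
P_n = np.linalg.matrix_power(P, n)

# "Rough prediction": probability of reaching each target location after
# n steps when the current location is location 0.
print("3-step transition probabilities from location 0:", P_n[0])
```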

  4. Space Object Collision Probability via Monte Carlo on the Graphics Processing Unit

    Science.gov (United States)

    Vittaldev, Vivek; Russell, Ryan P.

    2017-09-01

    Fast and accurate collision probability computations are essential for protecting space assets. Monte Carlo (MC) simulation is the most accurate but computationally intensive method. A Graphics Processing Unit (GPU) is used to parallelize the computation and reduce the overall runtime. Using MC techniques to compute the collision probability is common in the literature as the benchmark. An optimized implementation on the GPU, however, is a challenging problem and is the main focus of the current work. The MC simulation takes samples from the uncertainty distributions of the Resident Space Objects (RSOs) at any time during a time window of interest and outputs the separations at closest approach. Therefore, any uncertainty propagation method may be used and the collision probability is automatically computed as a function of RSO collision radii. Integration using a fixed time step and a quartic interpolation after every Runge-Kutta step ensures that no close approaches are missed. Two orders of magnitude speedups over a serial CPU implementation are shown, and speedups improve moderately with higher-fidelity dynamics. The tool makes the MC approach tractable on a single workstation, and can be used as a final product, or for verifying surrogate and analytical collision probability methods.
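    A serial, NumPy-based sketch of the underlying Monte Carlo estimate (not the GPU implementation, and using fixed Gaussian position uncertainties at the nominal closest approach rather than propagated dynamics; all numbers are hypothetical) might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-D position uncertainties (km) of two objects near closest approach.
mean_a, cov_a = np.array([0.00, 0.00, 0.00]), np.diag([0.04, 0.09, 0.01])
mean_b, cov_b = np.array([0.05, -0.02, 0.01]), np.diag([0.02, 0.05, 0.02])
combined_radius_km = 0.02        # sum of the two collision radii

n_samples = 1_000_000
pos_a = rng.multivariate_normal(mean_a, cov_a, n_samples)
pos_b = rng.multivariate_normal(mean_b, cov_b, n_samples)

# A "collision" is any sample whose separation falls below the combined radius.
miss_distance = np.linalg.norm(pos_a - pos_b, axis=1)
p_collision = np.mean(miss_distance < combined_radius_km)
print(f"Monte Carlo collision probability: {p_collision:.2e}")
```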

  5. Considering dominance in reduced single-step genomic evaluations.

    Science.gov (United States)

    Ertl, J; Edel, C; Pimentel, E C G; Emmerling, R; Götz, K-U

    2018-06-01

    Single-step models including dominance can be an enormous computational task and can even be prohibitive for practical application. In this study, we try to answer the question of whether a reduced single-step model is able to estimate breeding values of bulls, and breeding values, dominance deviations and total genetic values of cows, with acceptable quality. Genetic values and phenotypes were simulated (500 repetitions) for a small Fleckvieh pedigree consisting of 371 bulls (180 thereof genotyped) and 553 cows (40 thereof genotyped). This pedigree was virtually extended by 2,407 non-genotyped daughters. Genetic values were estimated with the single-step model and with different reduced single-step models. Including more relatives of genotyped cows in the reduced single-step model resulted in better agreement of results with the single-step model. Accuracies of genetic values were largest with the single-step model and smallest with the reduced single-step model in which only the genotyped cows were modelled. The results indicate that a reduced single-step model is suitable for estimating breeding values of bulls, and breeding values, dominance deviations and total genetic values of cows, with acceptable quality. © 2018 Blackwell Verlag GmbH.

  6. A novel single-step, multipoint calibration method for instrumented Lab-on-Chip systems

    DEFF Research Database (Denmark)

    Pfreundt, Andrea; Patou, François; Zulfiqar, Azeem

    2014-01-01

    for instrument-based PoC blood biomarker analysis systems. Motivated by the complexity of associating high-accuracy biosensing using silicon nanowire field effect transistors with ease of use for the PoC system user, we propose a novel one-step, multipoint calibration method for LoC-based systems. Our approach...... specifically addresses the important interfaces between a novel microfluidic unit to integrate the sensor array and a mobile-device hardware accessory. A multi-point calibration curve is obtained by generating a defined set of reference concentrations from a single input. By consecutively splitting the flow...

  7. Improving genetic evaluation of litter size and piglet mortality for both genotyped and nongenotyped individuals using a single-step method.

    Science.gov (United States)

    Guo, X; Christensen, O F; Ostersen, T; Wang, Y; Lund, M S; Su, G

    2015-02-01

    A single-step method allows genetic evaluation using information of phenotypes, pedigree, and markers from genotyped and nongenotyped individuals simultaneously. This paper compared genomic predictions obtained from a single-step BLUP (SSBLUP) method, a genomic BLUP (GBLUP) method, a selection index blending (SELIND) method, and a traditional pedigree-based method (BLUP) for total number of piglets born (TNB), litter size at d 5 after birth (LS5), and mortality rate before d 5 (Mort; including stillbirth) in Danish Landrace and Yorkshire pigs. Data sets of 778,095 litters from 309,362 Landrace sows and 472,001 litters from 190,760 Yorkshire sows were used for the analysis. There were 332,795 Landrace and 207,255 Yorkshire animals in the pedigree data, among which 3,445 Landrace pigs (1,366 boars and 2,079 sows) and 3,372 Yorkshire pigs (1,241 boars and 2,131 sows) were genotyped with the Illumina PorcineSNP60 BeadChip. The results showed that the 3 methods with marker information (SSBLUP, GBLUP, and SELIND) produced more accurate predictions for genotyped animals than the pedigree-based method. For genotyped animals, the average of reliabilities for all traits in both breeds using traditional BLUP was 0.091, which increased to 0.171 when using GBLUP and to 0.179 when using SELIND, and further increased to 0.209 when using SSBLUP. Furthermore, the average reliability of EBV for nongenotyped animals was increased from 0.091 for traditional BLUP to 0.105 for SSBLUP. The results indicate that SSBLUP is a good approach to practical genomic prediction of litter size and piglet mortality in Danish Landrace and Yorkshire populations.

  8. Compatibility of pedigree-based and marker-based relationship matrices for single-step genetic evaluation

    DEFF Research Database (Denmark)

    Christensen, Ole Fredslund

    2012-01-01

    Single-step methods for genomic prediction have recently become popular because they are conceptually simple and in practice such a method can completely replace a pedigree-based method for routine genetic evaluation. An issue with single-step methods is compatibility between the marker-based rel...

  9. Optimization of radiation therapy, III: a method of assessing complication probabilities from dose-volume histograms

    International Nuclear Information System (INIS)

    Lyman, J.T.; Wolbarst, A.B.

    1987-01-01

    To predict the likelihood of success of a therapeutic strategy, one must be able to assess the effects of the treatment upon both diseased and healthy tissues. This paper proposes a method for determining the probability that a healthy organ that receives a non-uniform distribution of X-irradiation, heat, chemotherapy, or other agent will escape complications. Starting with any given dose distribution, a dose-cumulative-volume histogram for the organ is generated. This is then reduced by an interpolation scheme (involving the volume-weighting of complication probabilities) to a slightly different histogram that corresponds to the same overall likelihood of complications, but which contains one less step. The procedure is repeated, one step at a time, until there remains a final, single-step histogram, for which the complication probability can be determined. The formalism makes use of a complication response function C(D, V) which, for the given treatment schedule, represents the probability of complications arising when the fraction V of the organ receives dose D and the rest of the organ gets none. Although the data required to generate this function are sparse at present, it should be possible to obtain the necessary information from in vivo and clinical studies. Volume effects are taken explicitly into account in two ways: the precise shape of the patient's histogram is employed in the calculation, and the complication response function is a function of the volume

  10. The transmission probability method in one-dimensional cylindrical geometry

    International Nuclear Information System (INIS)

    Rubin, I.E.

    1983-01-01

    The collision probability method widely used in solving problems of neutron transport in a reactor cell is reliable for simple cells with a small number of zones. Increasing the number of zones, and also taking into account the anisotropy of scattering, greatly increases the volume of calculation. In order to reduce the calculation time, the transmission probability method is suggested for flux calculation in one-dimensional cylindrical geometry, taking into account the scattering anisotropy. The efficiency of the suggested method is verified using one-group calculations for cylindrical cells. The use of the transmission probability method allows the angular and spatial dependences of the neutron distributions to be represented completely without increasing the volume of calculation. The method is especially effective in solving multi-group problems.

  11. Single-step affinity purification for fungal proteomics.

    Science.gov (United States)

    Liu, Hui-Lin; Osmani, Aysha H; Ukil, Leena; Son, Sunghun; Markossian, Sarine; Shen, Kuo-Fang; Govindaraghavan, Meera; Varadaraj, Archana; Hashmi, Shahr B; De Souza, Colin P; Osmani, Stephen A

    2010-05-01

    A single-step protein affinity purification protocol using Aspergillus nidulans is described. Detailed protocols for cell breakage, affinity purification, and depending on the application, methods for protein release from affinity beads are provided. Examples defining the utility of the approaches, which should be widely applicable, are included.

  12. Decrease in weekend number of steps in adolescents

    Directory of Open Access Journals (Sweden)

    Jana Vašíčková

    2013-03-01

    BACKGROUND: The activities with which young people spend their weekends do not sufficiently support an active and healthy lifestyle. Research evidence shows that physical activity performed during weekend days is lower than physical activity on weekdays. OBJECTIVE: This study aims to find out to what extent young people achieve recommended levels of physical activity on weekends and to identify possible differences with regard to nationality or gender. METHODS: The research was carried out between 2008 and 2011 at randomly selected schools in the Czech Republic and in Poland. The week-long step-count monitoring included 786 participants in the Czech Republic and 673 participants in Poland aged 15–16 years. The online system INDARES was used to answer questionnaires and gather data from pedometers. RESULTS: Results showed that young people on average record lower numbers of steps on weekends compared to schooldays (difference of 1,356 steps/day; F(1, 1458) = 147.61; p ≈ .000; ω² = .232***). The most critical day of the week is Sunday. The simplified recommended amount of 11,000 steps/day is achieved by 65.93% of Czech boys and 64.73% of Czech girls (49.28% of Polish boys and 42.92% of Polish girls) on schooldays, whereas only by 42.59% of Czech boys and 43.8% of Czech girls (40.1% of Polish boys and 39.27% of Polish girls) on weekends. There were no significant differences between boys and girls in terms of average number of steps per day. CONCLUSION: Shortening the weekend is obviously not the way to promote physical activity and a healthy lifestyle among young people; rather, a change in their values and the creation of a habit of spending weekends actively is imperative. Use of the internet, mainly among boys, and use of pedometers, mainly among girls, are some of the tools to stimulate physical activity and a healthy lifestyle in youth.

  13. Valuing Euro rating-triggered step-up telecom bonds

    NARCIS (Netherlands)

    P. Houweling (Patrick); A.A. Mentink; A.C.F. Vorst (Ton)

    2003-01-01

    We value rating-triggered step-up bonds with three methods: (i) the Jarrow, Lando and Turnbull (1997, JLT) framework, (ii) a similar framework using historical probabilities and (iii) as plain vanilla bonds. We find that the market seems to value single step-up bonds according to the JLT

  15. Development of a Single-Step Subtraction Method for Eukaryotic 18S and 28S Ribonucleic Acids

    Directory of Open Access Journals (Sweden)

    Marie J. Archer

    2011-01-01

    The abundance of mammalian 18S and 28S ribosomal RNA can decrease the detection sensitivity of bacterial or viral targets in complex host-pathogen mixtures. A method to capture human RNA in a single step was developed and characterized to address this issue. For this purpose, capture probes were covalently attached to magnetic microbeads using a dendrimer linker, and the solid phase was tested using rat thymus RNA (mammalian component) with Escherichia coli RNA (bacterial target) as a model system. Our results indicated that random capture probes demonstrated better performance than specific ones, presumably by increasing the number of possible binding sites, and the use of a tetramethylammonium chloride (TMA-Cl)-based buffer for the hybridization showed a beneficial effect on selectivity. The subtraction efficiency determined through real-time RT-PCR revealed capture efficiencies comparable with commercially available enrichment kits. The performance of the solid phase can be further fine-tuned by modifying the annealing time and temperature.

  16. Comparative evaluation of direct plating and most probable number for enumeration of low levels of Listeria monocytogenes in naturally contaminated ice cream products.

    Science.gov (United States)

    Chen, Yi; Pouillot, Régis; S Burall, Laurel; Strain, Errol A; Van Doren, Jane M; De Jesus, Antonio J; Laasri, Anna; Wang, Hua; Ali, Laila; Tatavarthy, Aparna; Zhang, Guodong; Hu, Lijun; Day, James; Sheth, Ishani; Kang, Jihun; Sahu, Surasri; Srinivasan, Devayani; Brown, Eric W; Parish, Mickey; Zink, Donald L; Datta, Atin R; Hammack, Thomas S; Macarisin, Dumitru

    2017-01-16

    A precise and accurate method for enumeration of low levels of Listeria monocytogenes in foods is critical to a variety of studies. In this study, a paired comparison of most probable number (MPN) and direct plating enumeration of L. monocytogenes was conducted on a total of 1730 outbreak-associated ice cream samples that were naturally contaminated with low levels of L. monocytogenes. MPN was performed on all 1730 samples. Direct plating was performed on all samples using the RAPID'L.mono (RLM) agar (1600 samples) and agar Listeria Ottaviani and Agosti (ALOA; 130 samples). A probabilistic analysis with a Bayesian inference model was used to compare paired direct plating and MPN estimates of L. monocytogenes in ice cream samples, because assumptions implicit in ordinary least squares (OLS) linear regression analyses were not met for such a comparison. The probabilistic analysis revealed good agreement between the MPN and direct plating estimates, and this agreement showed that the MPN schemes and the direct plating schemes using ALOA or RLM evaluated in the present study were suitable for enumerating low levels of L. monocytogenes in these ice cream samples. The statistical analysis further revealed that OLS linear regression analyses of direct plating and MPN data did introduce bias that incorrectly characterized systematic differences between estimates from the two methods. Published by Elsevier B.V.
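    For readers unfamiliar with how an MPN value is actually computed, the sketch below finds the maximum-likelihood MPN for a hypothetical three-dilution, three-tube series (Python/SciPy); the dilution volumes and tube counts are invented and are not the scheme used in the ice cream study:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical 3-dilution MPN series: sample mass (g) tested per tube,
# number of tubes per dilution, and number of positive tubes observed.
volumes  = np.array([0.1, 0.01, 0.001])
n_tubes  = np.array([3, 3, 3])
positive = np.array([3, 1, 0])

def neg_log_likelihood(log_conc):
    conc = np.exp(log_conc)                    # organisms per g
    p_pos = 1.0 - np.exp(-conc * volumes)      # probability a tube turns positive
    log_lik = (positive * np.log(p_pos)
               - (n_tubes - positive) * conc * volumes).sum()
    return -log_lik

res = minimize_scalar(neg_log_likelihood, bounds=(-5.0, 10.0), method="bounded")
print(f"maximum-likelihood MPN: {np.exp(res.x):.1f} organisms per g")
```

    Standard MPN tables are essentially precomputed solutions of this likelihood maximization for common tube layouts.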

  17. Single-step reinitialization and extending algorithms for level-set based multi-phase flow simulations

    Science.gov (United States)

    Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2017-12-01

    We propose efficient single-step formulations for reinitialization and extending algorithms, which are critical components of level-set based interface-tracking methods. The level-set field is reinitialized with a single-step (non-iterative) "forward tracing" algorithm. A minimum set of cells is defined that describes the interface, and reinitialization employs only data from these cells. Fluid states are extrapolated or extended across the interface by a single-step "backward tracing" algorithm. Both algorithms, which are motivated by analogy to ray-tracing, avoid multiple block-boundary data exchanges that are inevitable for iterative reinitialization and extending approaches within a parallel-computing environment. The single-step algorithms are combined with a multi-resolution conservative sharp-interface method and validated by a wide range of benchmark test cases. We demonstrate that the proposed reinitialization method achieves second-order accuracy in conserving the volume of each phase. The interface location is invariant to reapplication of the single-step reinitialization. Generally, we observe smaller absolute errors than for standard iterative reinitialization on the same grid. The computational efficiency is higher than for the standard and typical high-order iterative reinitialization methods. We observe a 2- to 6-times efficiency improvement over the standard method for serial execution. The proposed single-step extending algorithm, which is commonly employed for assigning data to ghost cells with ghost-fluid or conservative interface interaction methods, shows about a 10-times efficiency improvement over the standard method while maintaining the same accuracy. Despite their simplicity, the proposed algorithms offer an efficient and robust alternative to iterative reinitialization and extending methods for level-set based multi-phase simulations.
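    A greatly simplified one-dimensional analogue of the single-step reinitialization idea (not the authors' ray-tracing-motivated, multi-resolution implementation) is to locate the interface from the sign changes of the level-set field and then reset every cell to a signed distance in one pass, for example:

```python
import numpy as np

# One-dimensional grid and a distorted level-set field whose zero crossing
# (the "interface") sits somewhere between two grid points.
x = np.linspace(0.0, 1.0, 101)
phi = np.tanh(8.0 * (x - 0.403)) * (1.0 + 0.3 * np.sin(20.0 * x))

# Minimum set of cells describing the interface: pairs of cells whose values
# change sign; the crossing is located by linear interpolation.
idx = np.where(np.sign(phi[:-1]) != np.sign(phi[1:]))[0]
crossings = x[idx] - phi[idx] * (x[idx + 1] - x[idx]) / (phi[idx + 1] - phi[idx])

# Single-step reinitialization: every cell is reset to the signed distance
# to the nearest interface crossing, with no iteration.
distance = np.min(np.abs(x[:, None] - crossings[None, :]), axis=1)
phi_reinitialized = np.sign(phi) * distance

print("interface located at x =", crossings)
```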

  18. Most probable degree distribution at fixed structural entropy

    Indian Academy of Sciences (India)

    Here we derive the most probable degree distribution emerging ... the structural entropy of power-law networks is an increasing function of the expo- .... tition function Z of the network as the sum over all degree distributions, with given energy.

  19. Comparison study on mechanical properties single step and three step artificial aging on duralium

    Science.gov (United States)

    Tsamroh, Dewi Izzatus; Puspitasari, Poppy; Andoko, Sasongko, M. Ilman N.; Yazirin, Cepi

    2017-09-01

    Duralium is a kind of non-ferrous alloy that is widely used in industry because of properties such as its light weight, high ductility, and resistance to corrosion. This study aimed to determine the mechanical properties of duralium subjected to single-step and three-step artificial aging. The mechanical properties discussed in this study are the toughness, the tensile strength, and the microstructure of the duralium. The toughness value after single-step artificial aging was 0.082 J/mm², and the toughness value after three-step artificial aging was 0.0721 J/mm². The tensile strength of the duralium was 32.36 kgf/mm² after single-step artificial aging and 32.70 kgf/mm² after three-step artificial aging. Microstructure photographs of the duralium after single-step artificial aging showed that the precipitate (θ), visible as black spots, was not spread evenly, which increases the toughness of the material, while microstructure photographs of the duralium treated by three-step artificial aging showed more precipitate (θ) spread evenly compared with the single-step aged duralium.

  20. A Renormalisation Group Method. V. A Single Renormalisation Group Step

    Science.gov (United States)

    Brydges, David C.; Slade, Gordon

    2015-05-01

    This paper is the fifth in a series devoted to the development of a rigorous renormalisation group method applicable to lattice field theories containing boson and/or fermion fields, and comprises the core of the method. In the renormalisation group method, increasingly large scales are studied in a progressive manner, with an interaction parametrised by a field polynomial which evolves with the scale under the renormalisation group map. In our context, the progressive analysis is performed via a finite-range covariance decomposition. Perturbative calculations are used to track the flow of the coupling constants of the evolving polynomial, but on their own perturbative calculations are insufficient to control error terms and to obtain mathematically rigorous results. In this paper, we define an additional non-perturbative coordinate, which together with the flow of coupling constants defines the complete evolution of the renormalisation group map. We specify conditions under which the non-perturbative coordinate is contractive under a single renormalisation group step. Our framework is essentially combinatorial, but its implementation relies on analytic results developed earlier in the series of papers. The results of this paper are applied elsewhere to analyse the critical behaviour of the 4-dimensional continuous-time weakly self-avoiding walk and of the 4-dimensional n-component model. In particular, the existence of a logarithmic correction to mean-field scaling for the susceptibility can be proved for both models, together with other facts about critical exponents and critical behaviour.

  1. Single-step fabrication of quantum funnels via centrifugal colloidal casting of nanoparticle films

    Science.gov (United States)

    Kim, Jin Young; Adinolfi, Valerio; Sutherland, Brandon R.; Voznyy, Oleksandr; Kwon, S. Joon; Kim, Tae Wu; Kim, Jeongho; Ihee, Hyotcherl; Kemp, Kyle; Adachi, Michael; Yuan, Mingjian; Kramer, Illan; Zhitomirsky, David; Hoogland, Sjoerd; Sargent, Edward H.

    2015-01-01

    Centrifugal casting of composites and ceramics has been widely employed to improve the mechanical and thermal properties of functional materials. This powerful method has yet to be deployed in the context of nanoparticles—yet size-effect tuning of quantum dots is among their most distinctive and application-relevant features. Here we report the first gradient nanoparticle films to be constructed in a single step. By creating a stable colloid of nanoparticles that are capped with electronic-conduction-compatible ligands we were able to leverage centrifugal casting for thin-film devices. This new method, termed centrifugal colloidal casting, is demonstrated to form films in a bandgap-ordered manner with efficient carrier funnelling towards the lowest energy layer. We constructed the first quantum-gradient photodiode to be formed in a single deposition step and, as a result of the gradient-enhanced electric field, experimentally measured the highest normalized detectivity of any colloidal quantum dot photodetector. PMID:26165185

  4. Protein single-model quality assessment by feature-based probability density functions.

    Science.gov (United States)

    Cao, Renzhi; Cheng, Jianlin

    2016-04-04

    Protein quality assessment (QA) has played an important role in protein structure prediction. We developed a novel single-model quality assessment method, Qprob. Qprob calculates the absolute error for each protein feature value against the true quality scores (i.e. GDT-TS scores) of protein structural models, and uses them to estimate its probability density distribution for quality assessment. Qprob has been blindly tested on the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as the MULTICOM-NOVEL server. The official CASP result shows that Qprob ranks as one of the top single-model QA methods. In addition, Qprob makes contributions to our protein tertiary structure predictor MULTICOM, which is officially ranked 3rd out of 143 predictors. The good performance shows that Qprob is good at assessing the quality of models of hard targets. These results demonstrate that this new probability density distribution based method is effective for protein single-model quality assessment and is useful for protein structure prediction. The webserver of Qprob is available at: http://calla.rnet.missouri.edu/qprob/. The software is now freely available in the web server of Qprob.

  5. Number projection method

    International Nuclear Information System (INIS)

    Kaneko, K.

    1987-01-01

    A relationship between the number projection and the shell model methods is investigated in the case of a single-j shell. We can find a one-to-one correspondence between the number projected and the shell model states

  6. Predictive probability methods for interim monitoring in clinical trials with longitudinal outcomes.

    Science.gov (United States)

    Zhou, Ming; Tang, Qi; Lang, Lixin; Xing, Jun; Tatsuoka, Kay

    2018-04-17

    In clinical research and development, interim monitoring is critical for better decision-making and for minimizing the risk of exposing patients to possibly ineffective therapies. For interim futility or efficacy monitoring, predictive probability methods are widely adopted in practice. Those methods have been well studied for univariate variables. However, for longitudinal studies, predictive probability methods using univariate information from only completers may not be most efficient, and data from ongoing subjects can be utilized to improve efficiency. On the other hand, leveraging information from ongoing subjects could allow an interim analysis to be conducted once a sufficient number of subjects reach an earlier time point. For longitudinal outcomes, we derive closed-form formulas for predictive probabilities, including the Bayesian predictive probability, predictive power, and conditional power, and also give closed-form solutions for the predictive probability of success in a future trial and the predictive probability of success of the best dose. When predictive probabilities are used for interim monitoring, we study their distributions and discuss their analytical cutoff values or stopping boundaries that have desired operating characteristics. We show that predictive probabilities utilizing all longitudinal information are more efficient for interim monitoring than those using information from completers only. To illustrate their practical application for longitudinal data, we analyze 2 real data examples from clinical trials. Copyright © 2018 John Wiley & Sons, Ltd.
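    As a simplified illustration of a Bayesian predictive probability used for interim monitoring (here for a single-arm binary endpoint rather than the longitudinal setting treated in the paper, with invented numbers and a Beta(1, 1) prior), one could compute:

```python
from scipy import stats

# Interim data for a single-arm trial with a binary endpoint (invented numbers).
n_interim, responders = 20, 12
n_final = 40                         # planned total sample size
p_null = 0.30                        # response rate to beat at the final analysis
posterior_target = 0.95              # required posterior probability of p > p_null

# Beta(1, 1) prior, updated with the interim data.
a_post = 1 + responders
b_post = 1 + (n_interim - responders)

n_remaining = n_final - n_interim
predictive_prob = 0.0
for future_resp in range(n_remaining + 1):
    # Beta-binomial probability of seeing `future_resp` more responders.
    prob_future = stats.betabinom.pmf(future_resp, n_remaining, a_post, b_post)
    # Would the trial be declared a success with that final data set?
    a_final = a_post + future_resp
    b_final = b_post + (n_remaining - future_resp)
    if stats.beta.sf(p_null, a_final, b_final) > posterior_target:
        predictive_prob += prob_future

print(f"Bayesian predictive probability of success: {predictive_prob:.3f}")
```

    A low value of this quantity at the interim look argues for stopping for futility, while a high value supports continuing the trial.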

  7. Froude Number is the Single Most Important Hydraulic Parameter for Salmonid Spawning Habitat.

    Science.gov (United States)

    Gillies, E.; Moir, H. J.

    2015-12-01

    Many gravel-bed rivers exhibit historic straightening or embanking, reducing river complexity and the available habitat for key species such as salmon. A defensible method for predicting salmonid spawning habitat is an important tool for anyone engaged in assessing a river restoration. Most empirical methods to predict spawning habitat use lookup tables of depth, velocity and substrate. However, natural site selection is different: salmon must pick a location where they can successfully build a redd, and where eggs have a sufficient survival rate. Also, using dimensional variables, such as depth and velocity, is problematic: spawning occurs in rivers of differing size, depth and velocity range. Non-dimensional variables have proven useful in other branches of fluid dynamics, and instream habitat is no different. Empirical river data show a high correlation between observed salmon redds and Froude number, but offer no insight into why. Here we present a physics-based model of spawning and bedform evolution, which shows that the Froude number is indeed a rational choice for characterizing the bedform, substrate, and flow necessary for spawning. The Froude number is familiar as a characterization of surface waves, but it also characterizes the longitudinal bedform in a mobile-bed river. We postulate that these bedforms and their hydraulics perform two roles in salmonid spawning: allowing transport of clasts during redd building, and oxygenating eggs. We present an example of this Froude number and substrate based habitat characterization on a Scottish river for which we have detailed topography at several stages during river restoration and subsequent evolution of natural processes. We show changes to the channel Froude regime as a result of natural process and validate habitat predictions against redds observed during the 2014 and 2015 spawning seasons, also relating this data to the Froude regime in other, nearby, rivers. We discuss the use of the Froude spectrum in providing an indicator of
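    For reference, the Froude number used throughout is the ratio of inertial to gravitational forces, Fr = U / sqrt(g * d); the tiny sketch below evaluates it for hypothetical reach-averaged values (the depth and velocity are invented):

```python
import math

# Froude number Fr = U / sqrt(g * d) for hypothetical reach-averaged values.
g = 9.81           # gravitational acceleration, m/s^2
depth = 0.35       # reach-averaged flow depth, m
velocity = 0.90    # reach-averaged velocity, m/s

froude = velocity / math.sqrt(g * depth)
print(f"Froude number: {froude:.2f}")   # about 0.49, i.e. subcritical flow
```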

  8. Probabilities the little numbers that rule our lives

    CERN Document Server

    Olofsson, Peter

    2014-01-01

    Praise for the First Edition"If there is anything you want to know, or remind yourself, about probabilities, then look no further than this comprehensive, yet wittily written and enjoyable, compendium of how to apply probability calculations in real-world situations."- Keith Devlin, Stanford University, National Public Radio's "Math Guy" and author of The Math Gene and The Unfinished GameFrom probable improbabilities to regular irregularities, Probabilities: The Little Numbers That Rule Our Lives, Second Edition investigates the often surprising effects of risk and chance in our lives. Featur

  9. Calculation of transition probabilities using the multiconfiguration Dirac-Fock method

    International Nuclear Information System (INIS)

    Kim, Yong Ki; Desclaux, Jean Paul; Indelicato, Paul

    1998-01-01

    The performance of the multiconfiguration Dirac-Fock (MCDF) method in calculating transition probabilities of atoms is reviewed. In general, the MCDF wave functions will lead to transition probabilities accurate to ∼ 10% or better for strong, electric-dipole allowed transitions for small atoms. However, it is more difficult to get reliable transition probabilities for weak transitions. Also, some MCDF wave functions for a specific J quantum number may not reduce to the appropriate L and S quantum numbers in the nonrelativistic limit. Transition probabilities calculated from such MCDF wave functions for nonrelativistically forbidden transitions are unreliable. Remedies for such cases are discussed

  10. Medicine in words and numbers: a cross-sectional survey comparing probability assessment scales

    Directory of Open Access Journals (Sweden)

    Koele Pieter

    2007-06-01

    Background: In the complex domain of medical decision making, reasoning under uncertainty can benefit from supporting tools. Automated decision support tools often build upon mathematical models, such as Bayesian networks. These networks require probabilities which often have to be assessed by experts in the domain of application. Probability response scales can be used to support the assessment process. We compare assessments obtained with different types of response scale. Methods: General practitioners (GPs) gave assessments on, and preferences for, three different probability response scales: a numerical scale, a scale with only verbal labels, and a combined verbal-numerical scale we had designed ourselves. Standard analyses of variance were performed. Results: No differences in assessments over the three response scales were found. Preferences for type of scale differed: the less experienced GPs preferred the verbal scale, the most experienced preferred the numerical scale, with the groups in between having a preference for the combined verbal-numerical scale. Conclusion: We conclude that all three response scales are equally suitable for supporting probability assessment. The combined verbal-numerical scale is a good choice for aiding the process, since it offers numerical labels to those who prefer numbers and verbal labels to those who prefer words, and accommodates both more and less experienced professionals.

  11. Comparison of 10 single and stepped methods to identify frail older persons in primary care: diagnostic and prognostic accuracy.

    Science.gov (United States)

    Sutorius, Fleur L; Hoogendijk, Emiel O; Prins, Bernard A H; van Hout, Hein P J

    2016-08-03

    Many instruments have been developed to identify frail older adults in primary care. A direct comparison of the accuracy and prevalence of identification methods is rare, and most studies ignore the stepped selection typically employed in routine care practice. It is also unclear whether the various methods select persons with different characteristics. We aimed to estimate the accuracy of 10 single and stepped methods to identify frailty in older adults and to predict adverse health outcomes. In addition, the methods were compared on the prevalence of the frail persons they identified and on the characteristics of the persons identified. The Groningen Frailty Indicator (GFI), the PRISMA-7, polypharmacy, the clinical judgment of the general practitioner (GP), the self-rated health of the older adult, the Edmonton Frail Scale (EFS), the Identification of Seniors At Risk Primary Care (ISAR-PC), the Frailty Index (FI), the InterRAI screener and gait speed were compared to three measures: two reference standards (the clinical judgment of a multidisciplinary expert panel and Fried's frailty criteria) and 6-year mortality or long-term care admission. Data were used from the Dutch Identification of Frail Elderly Study, consisting of 102 people aged 65 and over from a primary care practice in Amsterdam. Frail older adults were oversampled. The accuracy of each instrument and several stepped strategies was estimated by calculating the area under the ROC curve. Prevalence rates of frailty ranged from 14.8 to 52.9%. The accuracy for recommended cut-off values ranged from poor (AUC = 0.556, ISAR-PC) to good (AUC = 0.865, gait speed). PRISMA-7 performed best against the two reference standards; the GP's judgment best predicted adverse outcomes. Stepped strategies resulted in lower prevalence rates and accuracy. Persons selected by the different instruments varied greatly in age, IADL dependency, receipt of home care and mood. We found huge differences between methods to identify frail persons in prevalence

  12. Time dependent and asymptotic neutron number probability distribution calculation using discrete Fourier transform

    International Nuclear Information System (INIS)

    Humbert, Ph.

    2005-01-01

    In this paper we consider the probability distribution of neutrons in a multiplying assembly. The problem is studied using a space independent one group neutron point reactor model without delayed neutrons. We recall the generating function methodology and analytical results obtained by G.I. Bell when the c 2 approximation is used and we present numerical solutions in the general case, without this approximation. The neutron source induced distribution is calculated using the single initial neutron distribution which satisfies a master (Kolmogorov backward) equation. This equation is solved using the generating function method. The generating function satisfies a differential equation and the probability distribution is derived by inversion of the generating function. Numerical results are obtained using the same methodology where the generating function is the Fourier transform of the probability distribution. Discrete Fourier transforms are used to calculate the discrete time dependent distributions and continuous Fourier transforms are used to calculate the asymptotic continuous probability distributions. Numerical applications are presented to illustrate the method. (author)
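    The inversion step, recovering a probability distribution from its generating function by evaluating the generating function at the roots of unity and applying an inverse discrete Fourier transform, can be illustrated with a known generating function standing in for the numerically computed one (Python/NumPy; the Poisson example is purely illustrative):

```python
import numpy as np

# Probability generating function G(z) = sum_n p_n z^n of the distribution we
# want to recover; a Poisson distribution with mean 3 stands in for the
# numerically computed generating function of the point-reactor model.
lam = 3.0
G = lambda z: np.exp(lam * (z - 1.0))

# Evaluate G at the N-th roots of unity and invert with a discrete Fourier
# transform; p[n] then approximates the probability of observing n neutrons.
N = 64
z = np.exp(2j * np.pi * np.arange(N) / N)
p = np.fft.ifft(G(z)).real

print("P(0), P(1), P(2):", p[0], p[1], p[2])
print("exact Poisson values:", np.exp(-lam), lam * np.exp(-lam), lam**2 / 2 * np.exp(-lam))
```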

  13. Characterizing single-molecule FRET dynamics with probability distribution analysis.

    Science.gov (United States)

    Santoso, Yusdi; Torella, Joseph P; Kapanidis, Achillefs N

    2010-07-12

    Probability distribution analysis (PDA) is a recently developed statistical tool for predicting the shapes of single-molecule fluorescence resonance energy transfer (smFRET) histograms, which allows the identification of single or multiple static molecular species within a single histogram. We used a generalized PDA method to predict the shapes of FRET histograms for molecules interconverting dynamically between multiple states. This method is tested on a series of model systems, including both static DNA fragments and dynamic DNA hairpins. By fitting the shape of this expected distribution to experimental data, the timescale of hairpin conformational fluctuations can be recovered, in good agreement with earlier published results obtained using different techniques. This method is also applied to studying the conformational fluctuations in the unliganded Klenow fragment (KF) of Escherichia coli DNA polymerase I, which allows both confirmation of the consistency of a simple, two-state kinetic model with the observed smFRET distribution of unliganded KF and extraction of a millisecond fluctuation timescale, in good agreement with rates reported elsewhere. We expect this method to be useful in extracting rates from processes exhibiting dynamic FRET, and in hypothesis-testing models of conformational dynamics against experimental data.

  14. Predicting the probability of slip in gait: methodology and distribution study.

    Science.gov (United States)

    Gragg, Jared; Yang, James

    2016-01-01

    The likelihood of a slip is related to the available and required friction for a certain activity, here gait. Classical slip and fall analysis presumed that a walking surface was safe if the difference between the mean available and required friction coefficients exceeded a certain threshold. Previous research was dedicated to reformulating the classical slip and fall theory to include the stochastic variation of the available and required friction when predicting the probability of slip in gait. However, when predicting the probability of a slip, previous researchers have either ignored the variation in the required friction or assumed the available and required friction to be normally distributed. Also, there are no published results that actually give the probability of slip for various combinations of required and available frictions. This study proposes a modification to the equation for predicting the probability of slip, reducing the previous equation from a double-integral to a more convenient single-integral form. Also, a simple numerical integration technique is provided to predict the probability of slip in gait: the trapezoidal method. The effect of the random variable distributions on the probability of slip is also studied. It is shown that both the required and available friction distributions cannot automatically be assumed as being normally distributed. The proposed methods allow for any combination of distributions for the available and required friction, and numerical results are compared to analytical solutions for an error analysis. The trapezoidal method is shown to be highly accurate and efficient. The probability of slip is also shown to be sensitive to the input distributions of the required and available friction. Lastly, a critical value for the probability of slip is proposed based on the number of steps taken by an average person in a single day.
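    A sketch of the single-integral formulation, P(slip) = integral of f_available(u) * (1 - F_required(u)) du evaluated with the trapezoidal rule, is given below (Python/SciPy); the lognormal and gamma friction distributions and their parameters are invented for illustration:

```python
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

# Hypothetical (deliberately non-normal) friction-coefficient distributions.
available = stats.lognorm(s=0.25, scale=0.45)   # available friction of the floor
required = stats.gamma(a=20.0, scale=0.01)      # friction required by the gait

# P(slip) = P(required > available)
#         = integral of f_available(u) * (1 - F_required(u)) du,
# evaluated with the trapezoidal rule on a single grid of friction values.
u = np.linspace(0.0, 1.5, 2001)
integrand = available.pdf(u) * required.sf(u)
p_slip = trapezoid(integrand, u)

print(f"probability of slip per step: {p_slip:.2e}")
```

    Because both distributions enter only through their density and survival functions, any combination of distributions can be substituted without changing the integration routine.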

  15. METHOD OF FOREST FIRES PROBABILITY ASSESSMENT WITH POISSON LAW

    Directory of Open Access Journals (Sweden)

    A. S. Plotnikova

    2016-01-01

    The article describes a method for estimating forest fire burn probability based on the Poisson distribution. The λ parameter is taken to be the mean daily number of fires detected for each Forest Fire Danger Index class within a specific period of time. Thus, λ was calculated separately for the spring, summer and autumn seasons. Multi-annual daily Forest Fire Danger Index values together with an EO-derived hot-spot map were the input data for the statistical analysis. The major result of the study is the generation of a database of forest fire burn probabilities. Results were validated against EO daily data on forest fires detected over Irkutsk oblast in 2013. The daily weighted average probability was shown to be linked with the daily number of detected forest fires. At the same time, a number of fires developed when the estimated probability was low. A possible explanation of this phenomenon was provided.
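    Under the Poisson law, the probability of at least one fire on a given day is 1 - exp(-λ). A minimal sketch, using invented λ values per danger class, is:

```python
import math

# Invented mean daily numbers of detected fires (lambda) for each
# Fire Danger Index class in a given season.
lam_by_class = {"I": 0.02, "II": 0.10, "III": 0.35, "IV": 0.90, "V": 2.40}

for danger_class, lam in lam_by_class.items():
    # Poisson law: probability of at least one fire during the day.
    p_at_least_one = 1.0 - math.exp(-lam)
    print(f"class {danger_class}: P(at least one fire) = {p_at_least_one:.3f}")
```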

  16. An Estimation Method for number of carrier frequency

    Directory of Open Access Journals (Sweden)

    Xiong Peng

    2015-01-01

    This paper proposes a method that uses AR-model power spectrum estimation based on the Burg algorithm to estimate the number of carrier frequencies in a single pulse. In modern electronic and information warfare, radar pulse waveforms are complex and changeable; the single pulse with multiple carrier frequencies is the most typical one, for example the frequency shift keying (FSK) signal, the frequency shift keying with linear frequency modulation (FSK-LFM) hybrid modulation signal and the frequency shift keying with bi-phase shift keying (FSK-BPSK) hybrid modulation signal. For this kind of single pulse with multiple carrier frequencies, the paper adopts a method that models the complex signal as an AR process and then computes its power spectrum with the Burg algorithm. Experimental results show that the estimation method can still determine the number of carrier frequencies accurately even when the signal-to-noise ratio (SNR) is very low.

  17. A quantitative comparison of single-cell whole genome amplification methods.

    Directory of Open Access Journals (Sweden)

    Charles F A de Bourcy

    Single-cell sequencing is emerging as an important tool for studies of genomic heterogeneity. Whole genome amplification (WGA) is a key step in single-cell sequencing workflows and a multitude of methods have been introduced. Here, we compare three state-of-the-art methods on both bulk and single-cell samples of E. coli DNA: Multiple Displacement Amplification (MDA), Multiple Annealing and Looping Based Amplification Cycles (MALBAC), and the PicoPLEX single-cell WGA kit (NEB-WGA). We considered the effects of reaction gain on coverage uniformity, error rates and the level of background contamination. We compared the suitability of the different WGA methods for the detection of copy-number variations, for the detection of single-nucleotide polymorphisms and for de-novo genome assembly. No single method performed best across all criteria and significant differences in characteristics were observed; the choice of which amplifier to use will depend strongly on the details of the type of question being asked in any given experiment.

  19. Free Modal Algebras Revisited: The Step-by-Step Method

    NARCIS (Netherlands)

    Bezhanishvili, N.; Ghilardi, Silvio; Jibladze, Mamuka

    2012-01-01

    We review the step-by-step method of constructing finitely generated free modal algebras. First we discuss the global step-by-step method, which works well for rank one modal logics. Next we refine the global step-by-step method to obtain the local step-by-step method, which is applicable beyond

  20. Properties of nano-structured Ni/YSZ anodes fabricated from plasma sprayable NiO/YSZ powder prepared by single step solution combustion method

    Energy Technology Data Exchange (ETDEWEB)

    Prakash, B. Shri; Balaji, N.; Kumar, S. Senthil; Aruna, S.T., E-mail: staruna194@gmail.com

    2016-12-15

    Highlights: • Preparation of plasma-grade NiO/YSZ powder in a single step. • Fabrication of a nano-structured Ni/YSZ coating. • Conductivity of 600 S/cm at 800 °C. - Abstract: NiO/YSZ anode coatings are fabricated by atmospheric plasma spraying at different plasma powers from plasma-grade NiO/YSZ powders that are prepared in a single step by the solution combustion method. The process adopted avoids the multiple steps that are generally involved in conventional spray drying or fusing-and-crushing methods. The density of the coating increased and the porosity decreased with increasing plasma power of deposition. An ideal nano-structured Ni/YSZ anode encompassing nano YSZ particles, nano Ni particles and nano pores is achieved on reducing the coating deposited at lower plasma powers. The coatings exhibit porosities in the range of 27%, sufficient for anode functional layers. The electronic conductivity of the coatings is in the range of 600 S/cm at 800 °C.

  1. The Fractional Step Method Applied to Simulations of Natural Convective Flows

    Science.gov (United States)

    Westra, Douglas G.; Heinrich, Juan C.; Saxon, Jeff (Technical Monitor)

    2002-01-01

    This paper describes research done to apply the Fractional Step Method to finite-element simulations of natural convective flows in pure liquids, permeable media, and in a directionally solidified metal alloy casting. The Fractional Step Method has been applied commonly to high-Reynolds-number flow simulations, but is less common for low-Reynolds-number flows, such as natural convection in liquids and in permeable media. The Fractional Step Method offers increased speed and reduced memory requirements by allowing non-coupled solution of the pressure and the velocity components. The Fractional Step Method has particular benefits for predicting flows in a directionally solidified alloy, since other methods presently employed are not very efficient. Previously, the most suitable method for predicting flows in a directionally solidified binary alloy was the penalty method. The penalty method requires direct matrix solvers, due to the penalty term. The Fractional Step Method allows iterative solution of the finite element stiffness matrices, thereby allowing more efficient solution of the matrices. The Fractional Step Method also lends itself to parallel processing, since the velocity component stiffness matrices can be built and solved independently of each other. The finite-element simulations of a directionally solidified casting are used to predict macrosegregation in directionally solidified castings. In particular, the finite-element simulations predict the existence of 'channels' within the processing mushy zone and subsequently 'freckles' within the fully processed solid, which are known to result from macrosegregation, or what is often referred to as thermo-solutal convection. These freckles cause material property non-uniformities in directionally solidified castings; therefore many of these castings are scrapped. The phenomenon of natural convection in an alloy undergoing directional solidification, or thermo-solutal convection, will be explained. The

  2. Evaluation of the probability distribution of intake from a single measurement on a personal air sampler

    International Nuclear Information System (INIS)

    Birchall, A.; Muirhead, C.R.; James, A.C.

    1988-01-01

    An analytical expression has been derived for the k-sum distribution, formed by summing k random variables from a lognormal population. Poisson statistics are used with this distribution to derive the distribution of intake when breathing an atmosphere with a constant particle number concentration. Bayesian inference is then used to calculate the posterior probability distribution of concentrations from a given measurement. This is combined with the above intake distribution to give the probability distribution of intake resulting from a single measurement of activity made by an ideal sampler. It is shown that the probability distribution of intake is very dependent on the prior distribution used in Bayes' theorem. The usual prior assumption, that all number concentrations are equally probable, leads to an imbalance in the posterior intake distribution. This can be resolved if a new prior proportional to w^(-2/3) is used, where w is the expected number of particles collected. (author)

  3. Percutaneous Cystgastrostomy as a Single-Step Procedure

    International Nuclear Information System (INIS)

    Curry, L.; Sookur, P.; Low, D.; Bhattacharya, S.; Fotheringham, T.

    2009-01-01

    The purpose of this study was to evaluate the success of percutaneous transgastric cystgastrostomy as a single-step procedure. We performed a retrospective analysis of single-step percutaneous transgastric cystgastrostomy carried out in 12 patients (8 male, 4 female; mean age 44 years; range 21-70 years), between 2002 and 2007, with large symptomatic pancreatic pseudocysts for whom up to 1-year follow-up data (mean 10 months) were available. All pseudocysts were drained by single-step percutaneous cystgastrostomy with the placement of either one or two stents. The procedure was completed successfully in all 12 patients. The pseudocysts showed complete resolution on further imaging in 7 of 12 patients with either enteric passage of the stent or stent removal by endoscopy. In 2 of 12 patients, the pseudocysts showed complete resolution on imaging, with the stents still noted in situ. In 2 of 12 patients, the pseudocysts became infected after 1 month and required surgical intervention. In 1 of 12 patients, the pseudocyst showed partial resolution on imaging, but subsequently reaccumulated and later required external drainage. In our experience, percutaneous cystgastrostomy as a single-step procedure has a high success rate and good short-term outcomes over 1-year follow-up and should be considered in the treatment of large symptomatic cysts.

  4. Quantification of diazotrophs bacteria isolated from cocoa soils (Theobroma cacao L., by the technique of Most Probable Number (MPN

    Directory of Open Access Journals (Sweden)

    Adriana Zulay Argüello Navarro

    2016-07-01

    Full Text Available The objective of this research was to quantify diazotrophic bacteria and to compare physicochemically the rhizospheric soils of three cocoa plantations (Theobroma cacao L.) in Norte de Santander Department, Colombia, which were characterized and differ in cultivated area, agronomic management and crop age. From serial dilutions of the samples and using the Most Probable Number (MPN) technique in semisolid culture media (NFb, JMV, LGI, JNFb), the diazotrophs were quantified, scoring as positive the formation of a subsurface film in the medium contained in sealed vials; parallel samples were sent to the Bioambiental laboratory (UNET) for physicochemical analyses. The evaluated samples showed deficiencies in the percentage of organic matter and in elements such as potassium, phosphorus and magnesium. Statistically highly significant differences in MPN were found. The highest quantification of diazotrophs was reported for the Florilandia farm, which was characterized by drip irrigation. The highest quantification of diazotrophs was recorded in the NFb and JMV media, indicating a greater presence of the presumed genera Azospirillum sp. and Burkholderia sp., which are easily isolated from rhizospheric soils, unlike the genera Herbaspirillum sp. and Gluconacetobacter sp., which, because of their endophytic character, tend to be less predominant in this type of sample. It is also concluded that the physicochemical characteristics of the soil and the humidity and climatic conditions at the moment of sampling determine the amount of root exudates and are therefore factors that conditioned the presence of diazotrophs in the samples.
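    The MPN figures reported above come from likelihood-based tables; the calculation itself is straightforward to reproduce. The sketch below is a generic maximum-likelihood MPN estimator for a dilution series, not the specific table or software used in the study; the volumes, tube counts and the scipy-based optimisation are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def mpn_estimate(volumes_ml, n_tubes, n_positive):
    """Maximum-likelihood MPN (organisms per ml) from a dilution series.

    volumes_ml : volume of original sample inoculated per tube at each dilution
    n_tubes    : tubes (or wells) inoculated at each dilution
    n_positive : tubes showing growth at each dilution
    """
    v = np.asarray(volumes_ml, float)
    n = np.asarray(n_tubes, float)
    p = np.asarray(n_positive, float)

    def neg_log_lik(lam):
        # P(tube positive) = 1 - exp(-lam * v); binomial log-likelihood per dilution
        prob_pos = np.clip(1.0 - np.exp(-lam * v), 1e-12, 1.0 - 1e-12)
        return -np.sum(p * np.log(prob_pos) + (n - p) * np.log(1.0 - prob_pos))

    return minimize_scalar(neg_log_lik, bounds=(1e-6, 1e6), method="bounded").x

# Hypothetical series: 3 tubes each inoculated with 0.1, 0.01 and 0.001 ml of sample
print(mpn_estimate([0.1, 0.01, 0.001], [3, 3, 3], [3, 2, 0]))  # MPN per ml
```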

  5. Accuracy of Dual-Energy Virtual Monochromatic CT Numbers: Comparison between the Single-Source Projection-Based and Dual-Source Image-Based Methods.

    Science.gov (United States)

    Ueguchi, Takashi; Ogihara, Ryota; Yamada, Sachiko

    2018-03-21

    To investigate the accuracy of dual-energy virtual monochromatic computed tomography (CT) numbers obtained by two typical hardware and software implementations: the single-source projection-based method and the dual-source image-based method. A phantom with different tissue equivalent inserts was scanned with both single-source and dual-source scanners. A fast kVp-switching feature was used on the single-source scanner, whereas a tin filter was used on the dual-source scanner. Virtual monochromatic CT images of the phantom at energy levels of 60, 100, and 140 keV were obtained by both projection-based (on the single-source scanner) and image-based (on the dual-source scanner) methods. The accuracy of virtual monochromatic CT numbers for all inserts was assessed by comparing measured values to their corresponding true values. Linear regression analysis was performed to evaluate the dependency of measured CT numbers on tissue attenuation, method, and their interaction. Root mean square values of systematic error over all inserts at 60, 100, and 140 keV were approximately 53, 21, and 29 Hounsfield units (HU) with the single-source projection-based method, and 46, 7, and 6 HU with the dual-source image-based method, respectively. Linear regression analysis revealed that the interaction between the attenuation and the method had a statistically significant effect on the measured CT numbers at 100 and 140 keV. There were attenuation-, method-, and energy level-dependent systematic errors in the measured virtual monochromatic CT numbers. CT number reproducibility was comparable between the two scanners, and CT numbers had better accuracy with the dual-source image-based method at 100 and 140 keV. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  6. SPAR-H Step-by-Step Guidance

    Energy Technology Data Exchange (ETDEWEB)

    W. J. Galyean; A. M. Whaley; D. L. Kelly; R. L. Boring

    2011-05-01

    This guide provides step-by-step guidance on the use of the SPAR-H method for quantifying Human Failure Events (HFEs). This guide is intended to be used with the worksheets provided in: 'The SPAR-H Human Reliability Analysis Method,' NUREG/CR-6883, dated August 2005. Each step in the process of producing a Human Error Probability (HEP) is discussed. These steps are: Step-1, Categorizing the HFE as Diagnosis and/or Action; Step-2, Rate the Performance Shaping Factors; Step-3, Calculate PSF-Modified HEP; Step-4, Accounting for Dependence, and; Step-5, Minimum Value Cutoff. The discussions on dependence are extensive and include an appendix that describes insights obtained from the psychology literature.
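    As a rough illustration of Steps 1-3, the sketch below multiplies a nominal HEP by the PSF multipliers and applies the adjustment formula used when several PSFs are negative. The nominal values (0.01 for diagnosis, 0.001 for action) and the adjustment formula follow commonly cited SPAR-H defaults, but this is a simplified sketch; the NUREG/CR-6883 worksheets, including the dependence and minimum-value steps, remain the authoritative procedure.

```python
def spar_h_hep(task_type, psf_multipliers):
    """Simplified SPAR-H Steps 1-3: nominal HEP adjusted by PSF multipliers."""
    nominal = {"diagnosis": 1e-2, "action": 1e-3}[task_type]   # assumed nominal HEPs
    composite = 1.0
    for m in psf_multipliers:
        composite *= m

    # With three or more negative (>1) PSFs, the adjustment formula keeps HEP below 1.
    if sum(1 for m in psf_multipliers if m > 1.0) >= 3:
        return nominal * composite / (nominal * (composite - 1.0) + 1.0)
    return min(nominal * composite, 1.0)

# Hypothetical action task with poor ergonomics (x10) and extreme stress (x5)
print(spar_h_hep("action", [10, 5, 1, 1, 1, 1, 1, 1]))   # 0.05
```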

  7. SPAR-H Step-by-Step Guidance

    International Nuclear Information System (INIS)

    Galyean, W.J.; Whaley, A.M.; Kelly, D.L.; Boring, R.L.

    2011-01-01

    This guide provides step-by-step guidance on the use of the SPAR-H method for quantifying Human Failure Events (HFEs). This guide is intended to be used with the worksheets provided in: 'The SPAR-H Human Reliability Analysis Method,' NUREG/CR-6883, dated August 2005. Each step in the process of producing a Human Error Probability (HEP) is discussed. These steps are: Step-1, Categorizing the HFE as Diagnosis and/or Action; Step-2, Rate the Performance Shaping Factors; Step-3, Calculate PSF-Modified HEP; Step-4, Accounting for Dependence, and; Step-5, Minimum Value Cutoff. The discussions on dependence are extensive and include an appendix that describes insights obtained from the psychology literature.

  8. Comparative analysis of single-step and two-step biodiesel production using supercritical methanol on laboratory-scale

    International Nuclear Information System (INIS)

    Micic, Radoslav D.; Tomić, Milan D.; Kiss, Ferenc E.; Martinovic, Ferenc L.; Simikić, Mirko Ð.; Molnar, Tibor T.

    2016-01-01

    Highlights: • Single-step supercritical transesterification compared to the two-step process. • Two-step process: oil hydrolysis and subsequent supercritical methyl esterification. • Experiments were conducted in a laboratory-scale batch reactor. • Higher biodiesel yields in two-step process at milder reaction conditions. • Two-step process has potential to be cost-competitive with the single-step process. - Abstract: Single-step supercritical transesterification and two-step biodiesel production process consisting of oil hydrolysis and subsequent supercritical methyl esterification were studied and compared. For this purpose, comparative experiments were conducted in a laboratory-scale batch reactor and optimal reaction conditions (temperature, pressure, molar ratio and time) were determined. Results indicate that in comparison to a single-step transesterification, methyl esterification (second step of the two-step process) produces higher biodiesel yields (95 wt% vs. 91 wt%) at lower temperatures (270 °C vs. 350 °C), pressures (8 MPa vs. 12 MPa) and methanol to oil molar ratios (1:20 vs. 1:42). This can be explained by the fact that the reaction system consisting of free fatty acid (FFA) and methanol achieves supercritical condition at milder reaction conditions. Furthermore, the dissolved FFA increases the acidity of supercritical methanol and acts as an acid catalyst that increases the reaction rate. There is a direct correlation between FFA content of the product obtained in hydrolysis and biodiesel yields in methyl esterification. Therefore, the reaction parameters of hydrolysis were optimized to yield the highest FFA content at 12 MPa, 250 °C and 1:20 oil to water molar ratio. Results of direct material and energy costs comparison suggest that the process based on the two-step reaction has the potential to be cost-competitive with the process based on single-step supercritical transesterification. Higher biodiesel yields, similar or lower energy

  9. Implementation of the probability table method in a continuous-energy Monte Carlo code system

    International Nuclear Information System (INIS)

    Sutton, T.M.; Brown, F.B.

    1998-10-01

    RACER is a particle-transport Monte Carlo code that utilizes a continuous-energy treatment for neutrons and neutron cross section data. Until recently, neutron cross sections in the unresolved resonance range (URR) have been treated in RACER using smooth, dilute-average representations. This paper describes how RACER has been modified to use probability tables to treat cross sections in the URR, and the computer codes that have been developed to compute the tables from the unresolved resonance parameters contained in ENDF/B data files. A companion paper presents results of Monte Carlo calculations that demonstrate the effect of the use of probability tables versus the use of dilute-average cross sections for the URR. The next section provides a brief review of the probability table method as implemented in the RACER system. The production of the probability tables for use by RACER takes place in two steps. The first step is the generation of probability tables from the nuclear parameters contained in the ENDF/B data files. This step, and the code written to perform it, are described in Section 3. The tables produced are at energy points determined by the ENDF/B parameters and/or accuracy considerations. The tables actually used in the RACER calculations are obtained in the second step from those produced in the first. These tables are generated at energy points specific to the RACER calculation. Section 4 describes this step and the code written to implement it, as well as modifications made to RACER to enable it to use the tables. Finally, some results and conclusions are presented in Section 5
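    The ENDF/B table format and RACER's internal data structures are not described here, but the sampling idea is simple: at a given energy the table gives band probabilities and a representative cross section for each band, and a single random number selects the band. The following sketch is a generic illustration with hypothetical values, not RACER code.

```python
import bisect
import random

def sample_cross_section(cum_band_probs, band_cross_sections, rng=random.random):
    """Pick a cross-section band from a probability table at one energy point.

    cum_band_probs      : cumulative band probabilities, ending at 1.0
    band_cross_sections : representative cross section for each band
    """
    return band_cross_sections[bisect.bisect_left(cum_band_probs, rng())]

# Hypothetical 4-band table for the total cross section in the URR (barns)
cum = [0.30, 0.65, 0.90, 1.00]
xs = [8.1, 11.4, 17.9, 35.2]
print(sample_cross_section(cum, xs))
```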

  10. Most probable mixing state of aerosols in Delhi NCR, northern India

    Science.gov (United States)

    Srivastava, Parul; Dey, Sagnik; Srivastava, Atul Kumar; Singh, Sachchidanand; Tiwari, Suresh

    2018-02-01

    Unknown mixing state is one of the major sources of uncertainty in estimating aerosol direct radiative forcing (DRF). Aerosol DRF in India is usually reported for external mixing and any deviation from this would lead to high bias and error. Limited information on aerosol composition hinders resolving this issue in India. Here we use two years of aerosol chemical composition data measured at megacity Delhi to examine the most probable aerosol mixing state by comparing the simulated clear-sky downward surface flux with the measured flux. We consider external, internal, and four combinations of core-shell (black carbon, BC over dust; water-soluble, WS over dust; WS over water-insoluble, WINS and BC over WINS) mixing. Our analysis reveals that choice of external mixing (usually considered in satellite retrievals and climate models) seems reasonable in Delhi only in the pre-monsoon (Mar-Jun) season. During the winter (Dec-Feb) and monsoon (Jul-Sep) seasons, 'WS coating over dust' externally mixed with BC and WINS appears to be the most probable mixing state; while 'WS coating over WINS' externally mixed with BC and dust seems to be the most probable mixing state in the post-monsoon (Oct-Nov) season. Mean seasonal TOA (surface) aerosol DRF for the most probable mixing states are 4.4 ± 3.9 (−25.9 ± 3.9), −16.3 ± 5.7 (−42.4 ± 10.5), 13.6 ± 11.4 (−76.6 ± 16.6) and −5.4 ± 7.7 (−80.0 ± 7.2) W m⁻², respectively, in the pre-monsoon, monsoon, post-monsoon and winter seasons. Our results highlight the importance of realistic mixing state treatment in estimating aerosol DRF to aid in policy making to combat climate change.

  11. Two Ranking Methods of Single Valued Triangular Neutrosophic Numbers to Rank and Evaluate Information Systems Quality

    Directory of Open Access Journals (Sweden)

    Samah Ibrahim Abdel Aal

    2018-03-01

    Full Text Available The concept of neutrosophic sets provides a generalization of fuzzy sets and intuitionistic fuzzy sets that makes it the best fit for representing indeterminacy and uncertainty. Single Valued Triangular Neutrosophic Numbers (SVTrN-numbers) are a special case of neutrosophic sets that can handle very difficult problems involving ill-known quantities. This work introduces a framework with two types of ranking methods. The results indicated that each ranking method has its own advantage. In this perspective, the weighted value and ambiguity based method gives more attention to uncertainty in ranking and evaluating ISQ, and it takes into account cut sets of SVTrN numbers that can reflect the information on the truth-membership degree, falsity-membership degree and indeterminacy-membership degree. The value index and ambiguity index method can reflect the decision maker's subjective attitude towards the SVTrN-numbers.

  12. Towards single step production of multi-layer inorganic hollow fibers

    NARCIS (Netherlands)

    de Jong, J.; Benes, Nieck Edwin; Koops, G.H.; Wessling, Matthias

    2004-01-01

    In this work we propose a generic synthesis route for the single step production of multi-layer inorganic hollow fibers, based on polymer wet spinning combined with a heat treatment. With this new method, membranes with a high surface area per unit volume ratio can be produced, while production time

  13. Talking probabilities: communicating probabilistic information with words and numbers

    NARCIS (Netherlands)

    Renooij, S.; Witteman, C.L.M.

    1999-01-01

    The number of knowledge-based systems that build on Bayesian belief networks is increasing. The construction of such a network however requires a large number of probabilities in numerical form. This is often considered a major obstacle, one of the reasons being that experts are reluctant to

  14. Genomic prediction in a nuclear population of layers using single-step models.

    Science.gov (United States)

    Yan, Yiyuan; Wu, Guiqin; Liu, Aiqiao; Sun, Congjiao; Han, Wenpeng; Li, Guangqi; Yang, Ning

    2018-02-01

    Single-step genomic prediction has been proposed to improve the accuracy of genomic prediction by incorporating information from both genotyped and ungenotyped animals. The objective of this study is to compare the prediction performance of single-step models with 2-step models and pedigree-based models in a nuclear population of layers. A total of 1,344 chickens across 4 generations were genotyped by a 600 K SNP chip. Four traits were analyzed, i.e., body weight at 28 wk (BW28), egg weight at 28 wk (EW28), laying rate at 38 wk (LR38), and Haugh unit at 36 wk (HU36). In predicting offspring, individuals from generations 1 to 3 were used as training data and females from generation 4 were used as the validation set. The accuracies of breeding values predicted by pedigree BLUP (PBLUP), genomic BLUP (GBLUP), SSGBLUP and single-step blending (SSBlending) were compared for both genotyped and ungenotyped individuals. For genotyped females, GBLUP performed no better than PBLUP because of the small size of the training data, while the 2 single-step models predicted more accurately than the PBLUP model. The average predictive abilities of SSGBLUP and SSBlending were 16.0% and 10.8% higher than the PBLUP model across traits, respectively. Furthermore, the predictive abilities for ungenotyped individuals were also enhanced. The average improvements in prediction ability were 5.9% and 1.5% for the SSGBLUP and SSBlending models, respectively. It was concluded that single-step models, especially the SSGBLUP model, can yield more accurate prediction of genetic merits and are preferable for practical implementation of genomic selection in layers. © 2017 Poultry Science Association Inc.
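    The element that distinguishes single-step models is the combined relationship matrix H, which merges the pedigree matrix A with the genomic matrix G of the genotyped subset. The sketch below assembles the standard H-inverse used in SSGBLUP; the matrices and indices are hypothetical, and in practice G is usually blended with the pedigree submatrix A22 and scaled before inversion.

```python
import numpy as np

def h_inverse(A, G, genotyped_idx):
    """H^-1 = A^-1 + [[0, 0], [0, G^-1 - A22^-1]] on the genotyped block (SSGBLUP)."""
    A_inv = np.linalg.inv(A)
    block = np.ix_(genotyped_idx, genotyped_idx)
    A22 = A[block]                      # pedigree relationships among genotyped animals
    H_inv = A_inv.copy()
    H_inv[block] += np.linalg.inv(G) - np.linalg.inv(A22)
    return H_inv
```

    The mixed-model equations are then solved with this H-inverse in place of the pedigree-only A-inverse, which is how information flows to ungenotyped relatives.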

  15. Talking probabilities: communicating probabilistic information with words and numbers

    NARCIS (Netherlands)

    Renooij, S.; Witteman, C.L.M.

    1999-01-01

    The number of knowledge-based systems that build on Bayesian belief networks is increasing. The construction of such a network however requires a large number of probabilities in numerical form. This is often considered a major obstacle, one of the reasons being that experts are reluctant to provide

  16. Peyton’s four-step approach: differential effects of single instructional steps on procedural and memory performance – a clarification study

    Directory of Open Access Journals (Sweden)

    Krautter M

    2015-05-01

    Full Text Available Background: Although Peyton's four-step approach is a widely used method for skills-lab training in undergraduate medical education and has been shown to be more effective than standard instruction, it is unclear whether its superiority can be attributed to a specific single step. Purpose: We conducted a randomized controlled trial to investigate the differential learning outcomes of the separate steps of Peyton's four-step approach. Methods: Volunteer medical students were randomly assigned to four different groups. Step-1 group received Peyton's Step 1, Step-2 group received Peyton's Steps 1 and 2, Step-3 group received Peyton's Steps 1, 2, and 3, and Step-3mod group received Peyton's Steps 1 and 2, followed by a repetition of Step 2. Following the training, the first independent performance of a central venous catheter (CVC) insertion using a manikin was video-recorded and scored by independent video assessors using binary checklists. The day after the training, memory performance during delayed recall was assessed with an incidental free recall test. Results: A total of 97 participants agreed to participate in the trial. There were no statistically significant group differences with regard to age, sex, completed education in a medical profession, completed medical clerkships, preliminary memory tests, or self-efficacy ratings. Regarding checklist ratings, Step-2 group showed a superior first independent performance of CVC placement compared to Step-1 group (P<0.001), and Step-3 group showed a superior performance to Step-2 group (P<0.009), while Step-2 group and Step-3mod group did not differ (P=0.055). The findings were similar in the incidental

  17. Multiple model cardinalized probability hypothesis density filter

    Science.gov (United States)

    Georgescu, Ramona; Willett, Peter

    2011-09-01

    The Probability Hypothesis Density (PHD) filter propagates the first-moment approximation to the multi-target Bayesian posterior distribution while the Cardinalized PHD (CPHD) filter propagates both the posterior likelihood of (an unlabeled) target state and the posterior probability mass function of the number of targets. Extensions of the PHD filter to the multiple model (MM) framework have been published and were implemented either with a Sequential Monte Carlo or a Gaussian Mixture approach. In this work, we introduce the multiple model version of the more elaborate CPHD filter. We present the derivation of the prediction and update steps of the MMCPHD particularized for the case of two target motion models and proceed to show that in the case of a single model, the new MMCPHD equations reduce to the original CPHD equations.

  18. Arbuscular mycorrhizal propagules in soils from a tropical forest and an abandoned cornfield in Quintana Roo, Mexico: visual comparison of most-probable-number estimates.

    Science.gov (United States)

    Ramos-Zapata, José A; Guadarrama, Patricia; Navarro-Alberto, Jorge; Orellana, Roger

    2011-02-01

    The present study was aimed at comparing the number of arbuscular mycorrhizal fungi (AMF) propagules found in soil from a mature tropical forest and that found in an abandoned cornfield in Noh-Bec Quintana Roo, Mexico, during three seasons. Agricultural practices can dramatically reduce the availability and viability of AMF propagules, and in this way delay the regeneration of tropical forests in abandoned agricultural areas. In addition, rainfall seasonality, which characterizes deciduous tropical forests, may strongly influence AMF propagule density. To compare AMF propagule numbers between sites and seasons (summer rainy, winter rainy and dry season), a "most probable number" (MPN) bioassay was conducted under greenhouse conditions employing Sorghum vulgare L. as the host plant. Results showed an average value of 3.5 ± 0.41 propagules in 50 ml of soil for the mature forest, while the abandoned cornfield had 15.4 ± 5.03 propagules in 50 ml of soil. Likelihood analysis showed no statistical differences in the MPN of propagules between seasons within each site, or between sites, except for the summer rainy season, for which soil from the abandoned cornfield had eight times as many propagules as soil from the mature forest site. Propagules of arbuscular mycorrhizal fungi remained viable throughout the sampling seasons at both sites. Abandoned areas resulting from traditional slash and burn agriculture practices involving maize did not show a lower number of AMF propagules, which should allow the establishment of mycotrophic plants, thus maintaining the AMF inoculum potential in these soils.

  19. A novel single-step synthesis of N-doped TiO2 via a sonochemical method

    International Nuclear Information System (INIS)

    Wang, Xi-Kui; Wang, Chen; Guo, Wei-Lin; Wang, Jin-Gang

    2011-01-01

    Graphical abstract: The N-doped anatase TiO2 nanoparticles were synthesized by a sonochemical method. The as-prepared sample is characterized by XRD, TEM, XPS and UV-Vis DRS. The photocatalytic activity of the photocatalyst was evaluated by the photodegradation of the azo dye direct sky blue 5B. Highlights: → A novel single-step sonochemical synthesis method for the preparation of anatase N-doped TiO2 nanocrystals at low temperature has been developed. → The as-prepared sample is characterized by XRD, TEM, XPS and UV-Vis DRS. → The photodegradation of the azo dye direct sky blue 5B showed that the N-doped TiO2 catalyst has high visible-light photocatalytic activity. -- Abstract: A novel single-step synthetic method for the preparation of anatase N-doped TiO2 nanocrystals at low temperature has been developed. The N-doped anatase TiO2 nanoparticles were synthesized by sonication of a solution of tetraisopropyl titanium and urea in water and isopropyl alcohol at 80 °C for 150 min. The as-prepared sample was characterized by X-ray diffraction, transmission electron microscopy, X-ray photoelectron spectroscopy and UV-vis absorption spectroscopy. The product structure depends on the reaction temperature and reaction time. The photocatalytic activity of the as-prepared photocatalyst was evaluated via the photodegradation of the azo dye direct sky blue 5B. The results show that the N-doped TiO2 nanocrystals prepared via sonication exhibit excellent photocatalytic activity under UV light and simulated sunlight.

  20. Comparison of single-step and two-step purified coagulants from Moringa oleifera seed for turbidity and DOC removal.

    Science.gov (United States)

    Sánchez-Martín, J; Ghebremichael, K; Beltrán-Heredia, J

    2010-08-01

    The coagulant proteins from Moringa oleifera purified with single-step and two-step ion-exchange processes were used for the coagulation of surface water from the Meuse river in The Netherlands. The performances of the two purified coagulants and the crude extract were assessed in terms of turbidity and DOC removal. The results indicated that the optimum dosage of the single-step purified coagulant was more than two times higher than that of the two-step purified coagulant in terms of turbidity removal, and the residual DOC with the two-step purified coagulant was lower than with the single-step purified coagulant or the crude extract. (c) 2010 Elsevier Ltd. All rights reserved.

  1. Assembly for the measurement of the most probable energy of directed electron radiation

    International Nuclear Information System (INIS)

    Geske, G.

    1987-01-01

    This invention relates to a setup for the measurement of the most probable energy of directed electron radiation up to 50 MeV. The known energy-range relationship for the absorption of electron radiation in matter is exploited by means of an absorber with two groups of interconnected radiation detectors embedded in it. The most probable electron beam energy is derived from the quotient of the two groups' signals.

  2. Strong Stability Preserving Two-step Runge–Kutta Methods

    KAUST Repository

    Ketcheson, David I.; Gottlieb, Sigal; Macdonald, Colin B.

    2011-01-01

    We investigate the strong stability preserving (SSP) property of two-step Runge–Kutta (TSRK) methods. We prove that all SSP TSRK methods belong to a particularly simple subclass of TSRK methods, in which stages from the previous step are not used. We derive simple order conditions for this subclass. Whereas explicit SSP Runge–Kutta methods have order at most four, we prove that explicit SSP TSRK methods have order at most eight. We present explicit TSRK methods of up to eighth order that were found by numerical search. These methods have larger SSP coefficients than any known methods of the same order of accuracy and may be implemented in a form with relatively modest storage requirements. The usefulness of the TSRK methods is demonstrated through numerical examples, including integration of very high order weighted essentially non-oscillatory discretizations.

  3. Strong Stability Preserving Two-step Runge–Kutta Methods

    KAUST Repository

    Ketcheson, David I.

    2011-12-22

    We investigate the strong stability preserving (SSP) property of two-step Runge–Kutta (TSRK) methods. We prove that all SSP TSRK methods belong to a particularly simple subclass of TSRK methods, in which stages from the previous step are not used. We derive simple order conditions for this subclass. Whereas explicit SSP Runge–Kutta methods have order at most four, we prove that explicit SSP TSRK methods have order at most eight. We present explicit TSRK methods of up to eighth order that were found by numerical search. These methods have larger SSP coefficients than any known methods of the same order of accuracy and may be implemented in a form with relatively modest storage requirements. The usefulness of the TSRK methods is demonstrated through numerical examples, including integration of very high order weighted essentially non-oscillatory discretizations.

  4. TODIM Method for Single-Valued Neutrosophic Multiple Attribute Decision Making

    Directory of Open Access Journals (Sweden)

    Dong-Sheng Xu

    2017-10-01

    Full Text Available Recently, the TODIM method has been used to solve multiple attribute decision making (MADM) problems. The single-valued neutrosophic sets (SVNSs) are useful tools to depict the uncertainty of the MADM. In this paper, we extend the TODIM method to MADM with single-valued neutrosophic numbers (SVNNs). Firstly, the definition, comparison, and distance of SVNNs are briefly presented, and the steps of the classical TODIM method for MADM problems are introduced. Then, the extended classical TODIM method is proposed to deal with MADM problems with SVNNs, and its significant characteristic is that it can fully consider the decision makers' bounded rationality, which is a real behavior in decision making. Furthermore, we extend the proposed model to interval neutrosophic sets (INSs). Finally, a numerical example is presented.

  5. EDF: Computing electron number probability distribution functions in real space from molecular wave functions

    Science.gov (United States)

    Francisco, E.; Pendás, A. Martín; Blanco, M. A.

    2008-04-01

    Computer: 2.80 GHz Intel Pentium IV CPU. Operating system: GNU/Linux. RAM: 55 992 KB. Word size: 32 bits. Classification: 2.7. External routines: Netlib. Nature of problem: Let us have an N-electron molecule and define an exhaustive partition of the physical space into m three-dimensional regions. The edf program computes the probabilities P(n_1, n_2, …, n_m) ≡ P({n}) of all possible allocations of n_1 electrons to Ω_1, n_2 electrons to Ω_2, …, and n_m electrons to Ω_m, the n_k being integers. Solution method: Let us assume that the N-electron molecular wave function, Ψ(1, …, N), is a linear combination of M Slater determinants, Ψ(1, …, N) = ∑_{r=1}^{M} C_r ψ_r(1, …, N). Calling S_{Ω_k}^{rs} the overlap matrix over the 3D region Ω_k between the (real) molecular spin-orbitals (MSOs) in ψ_r(χ_1^r, …, χ_N^r) and the MSOs in ψ_s(χ_1^s, …, χ_N^s), edf finds all the P({n})'s by solving the linear system ∑_{{n}} (∏_{k=1}^{m} t_k^{n_k}) P({n}) = ∑_{r,s=1}^{M} C_r C_s det[∑_{k=1}^{m} t_k S_{Ω_k}^{rs}]  (1), where t_1 = 1 and t_2, …, t_m are arbitrary real numbers. Restrictions: The number of {n} sets grows very fast with m and N, so that the dimension of the linear system (1) soon becomes very large. Moreover, the computer time required to obtain the determinants in the second member of Eq. (1) scales quadratically with M. These two facts limit the applicability of the method to relatively small molecules. Unusual features: Most of the real variables are of precision real*16. Running time: 0.030, 2.010, and 0.620 seconds for Test examples 1, 2, and 3, respectively. References: [1] A. Martín Pendás, E. Francisco, M.A. Blanco, Faraday Discuss. 135 (2007) 423-438. [2] A. Martín Pendás, E. Francisco, M.A. Blanco, J. Phys. Chem. A 111 (2007) 1084-1090. [3] A. Martín Pendás, E. Francisco, M.A. Blanco, Phys. Chem. Chem. Phys. 9 (2007) 1087-1092. [4] E. Francisco, A. Martín Pendás, M.A. Blanco, J. Chem. Phys. 126 (2007) 094102. [5] A. Martín Pendás, E. Francisco, M.A. Blanco, C. Gatti, Chemistry: A European Journal 113 (2007) 9362-9371.

  6. Voluntary stepping behavior under single- and dual-task conditions in chronic stroke survivors: A comparison between the involved and uninvolved legs.

    Science.gov (United States)

    Melzer, Itshak; Goldring, Melissa; Melzer, Yehudit; Green, Elad; Tzedek, Irit

    2010-12-01

    If balance is lost, quick step execution can prevent falls. Research has shown that speed of voluntary stepping was able to predict future falls in old adults. The aim of the study was to investigate voluntary stepping behavior, as well as to compare timing and leg push-off force-time relation parameters of involved and uninvolved legs in stroke survivors during single- and dual-task conditions. We also aimed to compare timing and leg push-off force-time relation parameters between stroke survivors and healthy individuals in both task conditions. Ten stroke survivors performed a voluntary step execution test with their involved and uninvolved legs under two conditions: while focusing only on the stepping task and while a separate attention-demanding task was performed simultaneously. Temporal parameters related to the step time were measured including the duration of the step initiation phase, the preparatory phase, the swing phase, and the total step time. In addition, force-time parameters representing the push-off power during stepping were calculated from ground reaction data and compared with 10 healthy controls. The involved legs of stroke survivors had a significantly slower stepping time than uninvolved legs due to increased swing phase duration during both single- and dual-task conditions. For dual compared to single task, the stepping time increased significantly due to a significant increase in the duration of step initiation. In general, the force time parameters were significantly different in both legs of stroke survivors as compared to healthy controls, with no significant effect of dual compared with single-task conditions in both groups. The inability of stroke survivors to swing the involved leg quickly may be the most significant factor contributing to the large number of falls to the paretic side. The results suggest that stroke survivors were unable to rapidly produce muscle force in fast actions. This may be the mechanism of delayed execution

  7. The optimal number of surveys when detectability varies.

    Directory of Open Access Journals (Sweden)

    Alana L Moore

    Full Text Available The survey of plant and animal populations is central to undertaking field ecology. However, detection is imperfect, so the absence of a species cannot be determined with certainty. Methods developed to account for imperfect detectability during surveys do not yet account for stochastic variation in detectability over time or space. When each survey entails a fixed cost that is not spent searching (e.g., time required to travel to the site), stochastic detection rates result in a trade-off between the number of surveys and the length of each survey when surveying a single site. We present a model that addresses this trade-off and use it to determine the number of surveys that: (1) maximizes the expected probability of detection over the entire survey period; and (2) is most likely to achieve a minimally-acceptable probability of detection. We illustrate the applicability of our approach using three practical examples (minimum survey effort protocols, number of frog surveys per season, and number of quadrats per site to detect a plant species) and test our model's predictions using data from experimental plant surveys. We find that when maximizing the expected probability of detection, the optimal survey design is most sensitive to the coefficient of variation in the rate of detection and the ratio of the search budget to the travel cost. When maximizing the likelihood of achieving a particular probability of detection, the optimal survey design is most sensitive to the required probability of detection, the expected number of detections if the budget were spent only on searching, and the expected number of detections that are missed due to travel costs. We find that accounting for stochasticity in detection rates is likely to be particularly important for designing surveys when detection rates are low. Our model provides a framework to do this.
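    The trade-off described above is easy to reproduce numerically. The sketch below is not the authors' model; it simply splits a fixed budget over n surveys, charges each survey a travel cost, draws each survey's detection rate from a lognormal distribution, and estimates the probability of at least one detection by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(1)

def prob_detect(n_surveys, budget=10.0, travel_cost=0.5,
                mean_rate=1.0, cv=1.5, n_sims=20000):
    """P(at least one detection) when the budget is split over n_surveys visits."""
    search_time = budget / n_surveys - travel_cost        # search time per survey
    if search_time <= 0:
        return 0.0
    sigma2 = np.log(1.0 + cv**2)                          # lognormal rate parameters
    mu = np.log(mean_rate) - sigma2 / 2.0
    rates = rng.lognormal(mu, np.sqrt(sigma2), size=(n_sims, n_surveys))
    return 1.0 - np.exp(-rates.sum(axis=1) * search_time).mean()

# More surveys average over rate variability but lose search time to travel
for n in range(1, 11):
    print(n, round(prob_detect(n), 3))
```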

  8. Learning Binomial Probability Concepts with Simulation, Random Numbers and a Spreadsheet

    Science.gov (United States)

    Rochowicz, John A., Jr.

    2005-01-01

    This paper introduces the reader to the concepts of binomial probability and simulation. A spreadsheet is used to illustrate these concepts. Random number generators are great technological tools for demonstrating the concepts of probability. Ideas of approximation, estimation, and mathematical usefulness provide numerous ways of learning…
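    The same exercise translates directly from a spreadsheet to a few lines of code; the sketch below (in Python rather than a spreadsheet, purely as an illustration) estimates binomial probabilities by repeated simulation with uniform random numbers.

```python
import random

def simulate_binomial(n_trials, p_success, n_experiments=100_000):
    """Estimate the distribution of the number of successes by simulation."""
    counts = [0] * (n_trials + 1)
    for _ in range(n_experiments):
        k = sum(1 for _ in range(n_trials) if random.random() < p_success)
        counts[k] += 1
    return [c / n_experiments for c in counts]

# Ten fair coin flips: P(5 heads) is close to the exact C(10,5)/2**10 = 0.246
print(round(simulate_binomial(10, 0.5)[5], 3))
```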

  9. A method for estimating failure rates for low probability events arising in PSA

    International Nuclear Information System (INIS)

    Thorne, M.C.; Williams, M.M.R.

    1995-01-01

    The authors develop a method for predicting failure rates and failure probabilities per event when, over a given test period or number of demands, no failures have occurred. A Bayesian approach is adopted to calculate a posterior probability distribution for the failure rate or failure probability per event subsequent to the test period. This posterior is then used to estimate effective failure rates or probabilities over a subsequent period of time or number of demands. In special circumstances, the authors' results reduce to the well-known rules of thumb, viz. 1/N and 1/T, where N is the number of demands during the test period with no failures and T is the test period with no failures. However, the authors are able to give strict conditions on the validity of these rules of thumb and to improve on them when necessary.
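    A minimal sketch of the zero-failure situation, using a conjugate gamma prior for a Poisson failure rate, shows how 1/T-type estimates arise; the paper's particular prior and the conditions it derives are not reproduced here.

```python
from scipy.stats import gamma

def zero_failure_rate(t_observed, failures=0, prior_shape=1.0, prior_rate=0.0):
    """Posterior mean and 95% upper bound for a failure rate with a gamma prior."""
    shape = prior_shape + failures          # gamma posterior: shape a + k
    rate = prior_rate + t_observed          # gamma posterior: rate b + T
    mean = shape / rate
    upper95 = gamma.ppf(0.95, a=shape, scale=1.0 / rate)
    return mean, upper95

# No failures in 2000 h of testing: posterior mean is 1/T = 5e-4 per hour
print(zero_failure_rate(2000.0))
```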

  10. Probability approaching method (PAM) and its application on fuel management optimization

    International Nuclear Information System (INIS)

    Liu, Z.; Hu, Y.; Shi, G.

    2004-01-01

    For the multi-cycle reloading optimization problem, a new solution scheme is presented. The multi-cycle problem is decoupled into a number of relatively independent mono-cycle problems, and this non-linear programming problem with complex constraints is then solved by a new algorithm, the probability approaching method (PAM), which is based on probability theory. Results on a simplified core model show the effectiveness of this new multi-cycle optimization scheme. (authors)

  11. Hotspots ampersand other hidden targets: Probability of detection, number, frequency and area

    International Nuclear Information System (INIS)

    Vita, C.L.

    1994-01-01

    Concepts and equations are presented for making probability-based estimates of the detection probability, and the number, frequency, and area of hidden targets, including hotspots, at a given site. Targets include hotspots, which are areas of extreme or particular contamination, and any object or feature that is hidden from direct visual observation--including buried objects and geologic or hydrologic details or anomalies. Being Bayesian, results are fundamentally consistent with observational methods. Results are tools for planning or interpreting exploration programs used in site investigation or characterization, remedial design, construction, or compliance monitoring, including site closure. Used skillfully and creatively, these tools can help streamline and expedite environmental restoration, reducing time and cost, making site exploration cost-effective, and providing acceptable risk at minimum cost. 14 refs., 4 figs

  12. Determination of beam intensity in a single step for IMRT inverse planning

    International Nuclear Information System (INIS)

    Chuang, Keh-Shih; Chen, Tzong-Jer; Kuo, Shan-Chi; Jan, Meei-Ling; Hwang, Ing-Ming; Chen, Sharon; Lin, Ying-Chuan; Wu, Jay

    2003-01-01

    In intensity modulated radiotherapy (IMRT), targets are treated by multiple beams at different orientations, each with spatially-modulated beam intensities. This approach spreads the normal tissue dose over a greater volume and produces a higher dose conformation to the target. In general, inverse planning is used for IMRT treatment planning. Inverse planning requires iterative calculation of the dose distribution in order to optimize the intensity profile for each beam and is very computation intensive. In this paper, we propose a single-step method utilizing a figure of merit (FoM) to estimate the beam intensities for IMRT treatment planning. The FoM of a ray is defined as the ratio between the delivered tumour dose and the normal tissue dose and is a good index of the dose efficacy of the ray. To maximize the beam utility, it is natural to irradiate the tumour with the intensity of each ray proportional to the value of the FoM. The nonuniform beam intensity profiles are then fixed and the weights of the beams are determined iteratively in order to yield a uniform tumour dose. In this study, beams are employed at equispaced angles around the patient. Each beam, with a field size that just covers the tumour, is divided into a fixed number of beamlets. The FoM is calculated for each beamlet and this value is assigned to be the beam intensity. Various weighting factors are incorporated in the FoM computation to accommodate different clinical considerations. Two clinical datasets are used to test the feasibility of the algorithm. The resultant dose-volume histograms of this method are presented and compared to those of conformal therapy. Preliminary results indicate that this method reduces the critical organ doses at a small expense of uniformity in the tumour dose distribution. This method estimates the beam intensity in one single step, and the computation is extremely fast, finishing in less than one minute on a regular PC.
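    A sketch of the figure-of-merit weighting described above: each beamlet's intensity is set proportional to its tumour-to-normal dose ratio, computed from unit-intensity dose contribution matrices. The matrices, weighting factors and normalization below are hypothetical placeholders, not the authors' implementation.

```python
import numpy as np

def fom_intensities(dose_to_tumour, dose_to_normal, normal_weights=None, eps=1e-6):
    """Beamlet intensities proportional to the figure of merit (tumour/normal dose).

    dose_to_tumour : (n_beamlets, n_tumour_voxels) dose per unit beamlet intensity
    dose_to_normal : (n_beamlets, n_normal_voxels) dose per unit beamlet intensity
    """
    tumour_dose = dose_to_tumour.sum(axis=1)
    if normal_weights is None:
        normal_weights = np.ones(dose_to_normal.shape[1])
    normal_dose = dose_to_normal @ normal_weights
    fom = tumour_dose / (normal_dose + eps)
    return fom / fom.max()                 # fixed intensity profile, scaled later

rng = np.random.default_rng(0)
print(fom_intensities(rng.random((5, 100)), rng.random((5, 400))))
```

    In the paper the per-beam weights are then iterated to flatten the tumour dose; that outer loop is omitted from the sketch.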

  13. "Silicon millefeuille": From a silicon wafer to multiple thin crystalline films in a single step

    Science.gov (United States)

    Hernández, David; Trifonov, Trifon; Garín, Moisés; Alcubilla, Ramon

    2013-04-01

    During the last years, many techniques have been developed to obtain thin crystalline films from commercial silicon ingots. Large market applications are foreseen in the photovoltaic field, where important cost reductions are predicted, and also in advanced microelectronics technologies as three-dimensional integration, system on foil, or silicon interposers [Dross et al., Prog. Photovoltaics 20, 770-784 (2012); R. Brendel, Thin Film Crystalline Silicon Solar Cells (Wiley-VCH, Weinheim, Germany 2003); J. N. Burghartz, Ultra-Thin Chip Technology and Applications (Springer Science + Business Media, NY, USA, 2010)]. Existing methods produce "one at a time" silicon layers, once one thin film is obtained, the complete process is repeated to obtain the next layer. Here, we describe a technology that, from a single crystalline silicon wafer, produces a large number of crystalline films with controlled thickness in a single technological step.

  14. Sensitivity of the probability of failure to probability of detection curve regions

    International Nuclear Information System (INIS)

    Garza, J.; Millwater, H.

    2016-01-01

    Non-destructive inspection (NDI) techniques have been shown to play a vital role in fracture control plans, structural health monitoring, and ensuring availability and reliability of piping, pressure vessels, mechanical and aerospace equipment. Probabilistic fatigue simulations are often used in order to determine the efficacy of an inspection procedure with the NDI method modeled as a probability of detection (POD) curve. These simulations can be used to determine the most advantageous NDI method for a given application. As an aid to this process, a first order sensitivity method of the probability-of-failure (POF) with respect to regions of the POD curve (lower tail, middle region, right tail) is developed and presented here. The sensitivity method computes the partial derivative of the POF with respect to a change in each region of a POD or multiple POD curves. The sensitivities are computed at no cost by reusing the samples from an existing Monte Carlo (MC) analysis. A numerical example is presented considering single and multiple inspections. - Highlights: • Sensitivities of probability-of-failure to a region of probability-of-detection curve. • The sensitivities are computed with negligible cost. • Sensitivities identify the important region of a POD curve. • Sensitivities can be used as a guide to selecting the optimal POD curve.

  15. The stepping behavior analysis of pedestrians from different age groups via a single-file experiment

    Science.gov (United States)

    Cao, Shuchao; Zhang, Jun; Song, Weiguo; Shi, Chang'an; Zhang, Ruifang

    2018-03-01

    The stepping behavior of pedestrians from groups with different age compositions in a single-file experiment is investigated in this paper. The relations between step length, step width and stepping time are analyzed using a step measurement method based on the curvature of the trajectory. The relations of velocity-step width, velocity-step length and velocity-stepping time for different age groups are discussed and compared with previous studies. Finally, the effects of pedestrian gender and height on stepping laws and fundamental diagrams are analyzed. The study is helpful for understanding the dynamics of pedestrian movement. Meanwhile, it offers experimental data for developing a microscopic model of pedestrian movement that considers stepping behavior.

  16. Comparison of single-entry and double-entry two-step couple screening for cystic fibrosis carriers

    NARCIS (Netherlands)

    tenKate, LP; Verheij, JBGM; Wildhagen, MF; Hilderink, HBM; Kooij, L; Verzijl, JG; Habbema, JDF

    1996-01-01

    Both single-entry two-step (SETS) couple screening and double-entry two-step (DETS) couple screening have been recommended as methods to screen for cystic fibrosis gene carriers. In this paper we compare the expected results from both types of screening. In general, DETS results in a higher

  17. Single-Stage Step up/down Driver for Permanent-Magnet Synchronous Machines

    Science.gov (United States)

    Chen, T. R.; Juan, Y. L.; Huang, C. Y.; Kuo, C. T.

    2017-11-01

    The two-stage circuit composed of a step up/down dc converter and a three-phase voltage source inverter is usually adopted as the motor driver in electric vehicles. This conventional topology is relatively complicated, and the additional power loss resulting from the two power conversion stages also lowers the efficiency. A single-stage step up/down driver for a permanent-magnet synchronous brushless DC (BLDC) motor is proposed in this study. The number of components and the circuit complexity are reduced. Low-frequency six-step square-wave control is used to reduce the switching losses. In the proposed topology, only one active switch is gated with a high frequency PWM signal for adjusting the rotation speed. The rotor position signals are fed back to calculate the motor speed for digital closed-loop control in an MCU. A 600 W prototype circuit is constructed to drive a BLDC motor with a rated speed of 3000 rpm, and can control the speed of six sections.

  18. Hide and vanish: data sets where the most parsimonious tree is known but hard to find, and their implications for tree search methods.

    Science.gov (United States)

    Goloboff, Pablo A

    2014-10-01

    Three different types of data sets, for which the uniquely most parsimonious tree can be known exactly but is hard to find with heuristic tree search methods, are studied. Tree searches are complicated more by the shape of the tree landscape (i.e. the distribution of homoplasy on different trees) than by the sheer abundance of homoplasy or character conflict. Data sets of Type 1 are those constructed by Radel et al. (2013). Data sets of Type 2 present a very rugged landscape, with narrow peaks and valleys, but relatively low amounts of homoplasy. For such a tree landscape, subjecting the trees to TBR and saving suboptimal trees produces much better results when the sequence of clipping for the tree branches is randomized instead of fixed. An unexpected finding for data sets of Types 1 and 2 is that starting a search from a random tree instead of a random addition sequence Wagner tree may increase the probability that the search finds the most parsimonious tree; a small artificial example where these probabilities can be calculated exactly is presented. Data sets of Type 3, the most difficult data sets studied here, comprise only congruent characters, and a single island with only one most parsimonious tree. Even if there is a single island, missing entries create a very flat landscape which is difficult to traverse with tree search algorithms because the number of equally parsimonious trees that need to be saved and swapped to effectively move around the plateaus is too large. Minor modifications of the parameters of tree drifting, ratchet, and sectorial searches allow travelling around these plateaus much more efficiently than saving and swapping large numbers of equally parsimonious trees with TBR. For these data sets, two new related criteria for selecting taxon addition sequences in Wagner trees (the "selected" and "informative" addition sequences) produce much better results than the standard random or closest addition sequences. These new methods for Wagner

  19. An Improved Split-Step Wavelet Transform Method for Anomalous Radio Wave Propagation Modelling

    Directory of Open Access Journals (Sweden)

    A. Iqbal

    2014-12-01

    Full Text Available Anomalous tropospheric propagation caused by the ducting phenomenon is a major problem in wireless communication. Thus, it is important to study the behavior of radio wave propagation in tropospheric ducts. The Parabolic Wave Equation (PWE) method is considered the most reliable way to model anomalous radio wave propagation. In this work, an improved Split-Step Wavelet transform Method (SSWM) is presented to solve the PWE for the modeling of tropospheric propagation over finite and infinite conductive surfaces. A large number of numerical experiments are carried out to validate the performance of the proposed algorithm. The developed algorithm is compared with previously published techniques: the Wavelet Galerkin Method (WGM) and the Split-Step Fourier transform Method (SSFM). A very good agreement is found between SSWM and the published techniques. It is also observed that the proposed algorithm is about 18 times faster than WGM and provides more details of propagation effects than SSFM.
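    The SSWM algorithm itself is not reproduced here, but the baseline it is compared against, the split-step Fourier march of the narrow-angle parabolic equation, can be sketched in a few lines. The grid sizes, refractive-index profile and first-order operator splitting below are illustrative assumptions.

```python
import numpy as np

def split_step_fourier_march(u, dz, dx, k0, n_refr):
    """One range step of the narrow-angle parabolic wave equation (SSFM baseline).

    u      : complex field on a vertical grid with spacing dz
    dx     : range step
    k0     : free-space wavenumber
    n_refr : refractive-index profile on the same vertical grid
    """
    p = 2.0 * np.pi * np.fft.fftfreq(u.size, d=dz)                  # vertical wavenumbers
    u_hat = np.fft.fft(u) * np.exp(-1j * p**2 * dx / (2.0 * k0))    # diffraction half-step
    u = np.fft.ifft(u_hat)
    return u * np.exp(1j * k0 * (n_refr**2 - 1.0) * dx / 2.0)       # refraction half-step
```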

  20. A method for generating skewed random numbers using two overlapping uniform distributions

    International Nuclear Information System (INIS)

    Ermak, D.L.; Nasstrom, J.S.

    1995-02-01

    The objective of this work was to implement and evaluate a method for generating skewed random numbers using a combination of uniform random numbers. The method provides a simple and accurate way of generating skewed random numbers from the specified first three moments without an a priori specification of the probability density function. We describe the procedure for generating skewed random numbers from uniform random numbers, and show that it accurately produces random numbers with the desired first three moments over a range of skewness values. We also show that in the limit of zero skewness, the distribution of random numbers is an accurate approximation to the Gaussian probability density function. Future work will use this method to provide skewed random numbers for a Langevin equation model for diffusion in skewed turbulence.
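    The report's procedure for choosing the two uniform distributions from the specified first three moments is not reproduced here; the sketch below only illustrates the underlying idea, that a mixture of two overlapping uniform distributions of different widths is skewed, and checks the skewness numerically.

```python
import numpy as np

def two_uniform_mixture(n, a1, b1, a2, b2, w1=0.5, seed=0):
    """Draw n samples from a mixture of two overlapping uniform distributions."""
    rng = np.random.default_rng(seed)
    pick_first = rng.random(n) < w1
    return np.where(pick_first, rng.uniform(a1, b1, n), rng.uniform(a2, b2, n))

x = two_uniform_mixture(200_000, 0.0, 1.0, 0.0, 4.0)       # unequal widths -> skew
m, s = x.mean(), x.std()
print(round(((x - m) ** 3).mean() / s**3, 2))               # positive skewness
```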

  1. Income Adequacy Among Canadian Seniors: Helping Singles Most

    Directory of Open Access Journals (Sweden)

    Philip Bazel

    2014-02-01

    Income Supplement (GIS) top-up strictly for elderly people living alone. Another would be to simply expand the CPP survivor benefit from 60 per cent of the deceased spouse's entitlement to 100 per cent. These policies are not without cost, of course. But the cost is not prohibitive. If the federal government were to allot $1.35 billion to these kinds of targeted policies, it could slash the number of single seniors living below the low income cut-off by half. With another $87 million, it could reduce the number by two-thirds. These amount, respectively, to just a 3.5 per cent and 5.8 per cent increase over current annual federal spending on elderly benefits. With Canadian policy-makers willing to spend resources and efforts on strengthening CPP benefits for relatively comfortable Canadians, it seems only appropriate that policies aimed at helping our most vulnerable seniors avoid poverty should come first.

  2. A Bayesian method for construction of Markov models to describe dynamics on various time-scales.

    Science.gov (United States)

    Rains, Emily K; Andersen, Hans C

    2010-10-14

    The dynamics of many biological processes of interest, such as the folding of a protein, are slow and complicated enough that a single molecular dynamics simulation trajectory of the entire process is difficult to obtain in any reasonable amount of time. Moreover, one such simulation may not be sufficient to develop an understanding of the mechanism of the process, and multiple simulations may be necessary. One approach to circumvent this computational barrier is the use of Markov state models. These models are useful because they can be constructed using data from a large number of shorter simulations instead of a single long simulation. This paper presents a new Bayesian method for the construction of Markov models from simulation data. A Markov model is specified by (τ,P,T), where τ is the mesoscopic time step, P is a partition of configuration space into mesostates, and T is an N(P)×N(P) transition rate matrix for transitions between the mesostates in one mesoscopic time step, where N(P) is the number of mesostates in P. The method presented here is different from previous Bayesian methods in several ways. (1) The method uses Bayesian analysis to determine the partition as well as the transition probabilities. (2) The method allows the construction of a Markov model for any chosen mesoscopic time-scale τ. (3) It constructs Markov models for which the diagonal elements of T are all equal to or greater than 0.5. Such a model will be called a "consistent mesoscopic Markov model" (CMMM). Such models have important advantages for providing an understanding of the dynamics on a mesoscopic time-scale. The Bayesian method uses simulation data to find a posterior probability distribution for (P,T) for any chosen τ. This distribution can be regarded as the Bayesian probability that the kinetics observed in the atomistic simulation data on the mesoscopic time-scale τ was generated by the CMMM specified by (P,T). An optimization algorithm is used to find the most
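    The Bayesian search over partitions and lag times is beyond a short example, but the count-based core of building a Markov state model, estimating T at a chosen mesoscopic time step from a discretized trajectory, can be sketched as below; the Dirichlet pseudo-counts stand in, loosely, for a prior over transition probabilities and are an illustrative simplification rather than the paper's method.

```python
import numpy as np

def transition_matrix(mesostate_traj, n_states, lag, prior_counts=1.0):
    """Posterior-mean transition matrix at lag `lag` with a symmetric Dirichlet prior."""
    counts = np.full((n_states, n_states), prior_counts)
    for i, j in zip(mesostate_traj[:-lag], mesostate_traj[lag:]):
        counts[i, j] += 1.0
    return counts / counts.sum(axis=1, keepdims=True)

# Hypothetical trajectory over 3 mesostates, mesoscopic time step = 10 frames
traj = np.random.default_rng(0).integers(0, 3, size=5000)
print(transition_matrix(traj, 3, lag=10))
```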

  3. A facile one-step fluorescence method for the quantitation of low-content single base deamination impurity in synthetic oligonucleotides.

    Science.gov (United States)

    Su, Xiaoye; Liang, Ruiting; Stolee, Jessica A

    2018-06-05

    Oligonucleotides are being researched and developed as potential drug candidates for the treatment of a broad spectrum of diseases. The characterization of antisense oligonucleotide (ASO) impurities caused by base mutations (e.g. deamination) which are closely related to the target ASO is a significant analytical challenge. Herein, we describe a novel one-step method, utilizing a strategy that combines fluorescence-ON detection with competitive hybridization, to achieve single base mutation quantitation in extensively modified synthetic ASOs. Given that this method is highly specific and sensitive (LoQ = 4 nM), we envision that it will find utility for screening other impurities as well as sequencing modified oligonucleotides. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. Approximations to the Probability of Failure in Random Vibration by Integral Equation Methods

    DEFF Research Database (Denmark)

    Nielsen, Søren R.K.; Sørensen, John Dalsgaard

    Close approximations to the first passage probability of failure in random vibration can be obtained by integral equation methods. A simple relation exists between the first passage probability density function and the distribution function for the time interval spent below a barrier before outcrossing. An integral equation for the probability density function of the time interval is formulated, and adequate approximations for the kernel are suggested. The kernel approximation results in approximate solutions for the probability density function of the time interval, and hence for the first passage probability density. The results of the theory agree well with simulation results for narrow banded processes dominated by a single frequency, as well as for bimodal processes with 2 dominating frequencies in the structural response.

  5. The maximum entropy method of moments and Bayesian probability theory

    Science.gov (United States)

    Bretthorst, G. Larry

    2013-08-01

    The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1 weighted image, and in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often these distributions can be characterized by a Gaussian, but just as often they are much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments will be reviewed. Some of its problems and conditions under which it fails will be discussed. Then in later sections, the functional form of the maximum entropy method of moments probability distribution will be incorporated into Bayesian probability theory. It will be shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments. One gets posterior probabilities for the Lagrange multipliers, and, finally, one can put error bars on the resulting estimated density function.
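
    A rough numerical sketch of the classical maximum entropy method of moments reviewed in the introduction, before the Bayesian treatment: the density p(x) ∝ exp(-Σ_k λ_k x^k) is found by minimizing the convex dual log Z(λ) + Σ_k λ_k μ_k for given sample moments μ_k. The gamma-distributed sample, the support grid, and the use of four moments are assumptions made for illustration; no posterior error bars are computed here.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    data = rng.gamma(shape=3.0, scale=1.0, size=5000)     # hypothetical intensity sample
    K = 4                                                 # number of moment constraints
    mu = np.array([np.mean(data**k) for k in range(1, K + 1)])

    x = np.linspace(0.0, data.max() * 1.5, 2000)          # assumed support grid
    powers = np.vstack([x**k for k in range(1, K + 1)])   # shape (K, len(x))

    def dual(lam):
        # convex dual of the maximum entropy problem: log Z(lam) + lam . mu
        logp = -lam @ powers
        m = logp.max()
        logZ = m + np.log(np.trapz(np.exp(logp - m), x))
        return logZ + lam @ mu

    res = minimize(dual, x0=np.zeros(K), method="Nelder-Mead",
                   options={"maxiter": 20000, "fatol": 1e-12, "xatol": 1e-10})
    logp = -res.x @ powers
    p = np.exp(logp - logp.max())
    p /= np.trapz(p, x)                                   # maximum entropy density estimate
    ```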

  6. Constructing inverse probability weights for continuous exposures: a comparison of methods.

    Science.gov (United States)

    Naimi, Ashley I; Moodie, Erica E M; Auger, Nathalie; Kaufman, Jay S

    2014-03-01

    Inverse probability-weighted marginal structural models with binary exposures are common in epidemiology. Constructing inverse probability weights for a continuous exposure can be complicated by the presence of outliers, and the need to identify a parametric form for the exposure and account for nonconstant exposure variance. We explored the performance of various methods to construct inverse probability weights for continuous exposures using Monte Carlo simulation. We generated two continuous exposures and binary outcomes using data sampled from a large empirical cohort. The first exposure followed a normal distribution with homoscedastic variance. The second exposure followed a contaminated Poisson distribution, with heteroscedastic variance equal to the conditional mean. We assessed six methods to construct inverse probability weights using: a normal distribution, a normal distribution with heteroscedastic variance, a truncated normal distribution with heteroscedastic variance, a gamma distribution, a t distribution (1, 3, and 5 degrees of freedom), and a quantile binning approach (based on 10, 15, and 20 exposure categories). We estimated the marginal odds ratio for a single-unit increase in each simulated exposure in a regression model weighted by the inverse probability weights constructed using each approach, and then computed the bias and mean squared error for each method. For the homoscedastic exposure, the standard normal, gamma, and quantile binning approaches performed best. For the heteroscedastic exposure, the quantile binning, gamma, and heteroscedastic normal approaches performed best. Our results suggest that the quantile binning approach is a simple and versatile way to construct inverse probability weights for continuous exposures.
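
    As a sketch of one of the compared approaches (stabilized weights built from a normal exposure model), not the authors' simulation code; the data-generating step and variable names below are hypothetical:

    ```python
    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import norm

    rng = np.random.default_rng(1)
    n = 2000
    conf = rng.normal(size=n)                       # hypothetical confounder
    expo = 0.5 * conf + rng.normal(size=n)          # continuous exposure
    outcome = rng.binomial(1, 1 / (1 + np.exp(-(0.3 * expo + 0.4 * conf))))

    # denominator: exposure model given the confounder (normal, homoscedastic)
    den_fit = sm.OLS(expo, sm.add_constant(conf)).fit()
    den = norm.pdf(expo, loc=den_fit.fittedvalues, scale=np.sqrt(den_fit.scale))

    # numerator: marginal exposure density (stabilizes the weights)
    num = norm.pdf(expo, loc=expo.mean(), scale=expo.std(ddof=1))

    weights = num / den

    # weighted marginal structural model for a one-unit increase in exposure
    msm = sm.GLM(outcome, sm.add_constant(expo),
                 family=sm.families.Binomial(), freq_weights=weights).fit()
    print(np.exp(msm.params[1]))                    # marginal odds ratio
    ```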

  7. Waste Package Misload Probability

    International Nuclear Information System (INIS)

    Knudsen, J.K.

    2001-01-01

    The objective of this calculation is to calculate the probability of occurrence for fuel assembly (FA) misloads (i.e., an FA placed in the wrong location) and FA damage during FA movements. The scope of this calculation is provided by the information obtained from the Framatome ANP 2001a report. The first step in this calculation is to categorize each fuel-handling event that occurred at nuclear power plants. The different categories are based on FAs being damaged or misloaded. The next step is to determine the total number of FAs involved in the event. Using this information, a probability of occurrence is calculated for FA misload and FA damage events. This calculation is an expansion of preliminary work performed by Framatome ANP 2001a

  8. Imprecise Probability Methods for Weapons UQ

    Energy Technology Data Exchange (ETDEWEB)

    Picard, Richard Roy [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Vander Wiel, Scott Alan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-13

    Building on recent work in uncertainty quantification, we examine the use of imprecise probability methods to better characterize expert knowledge and to improve on misleading aspects of Bayesian analysis with informative prior distributions. Quantitative approaches to incorporate uncertainties in weapons certification are subject to rigorous external peer review, and in this regard, certain imprecise probability methods are well established in the literature and attractive. These methods are illustrated using experimental data from LANL detonator impact testing.

  9. Evidence reasoning method for constructing conditional probability tables in a Bayesian network of multimorbidity.

    Science.gov (United States)

    Du, Yuanwei; Guo, Yubin

    2015-01-01

    The intrinsic mechanism of multimorbidity is difficult to recognize and prediction and diagnosis are difficult to carry out accordingly. Bayesian networks can help to diagnose multimorbidity in health care, but it is difficult to obtain the conditional probability table (CPT) because of the lack of clinically statistical data. Today, expert knowledge and experience are increasingly used in training Bayesian networks in order to help predict or diagnose diseases, but the CPT in Bayesian networks is usually irrational or ineffective for ignoring realistic constraints especially in multimorbidity. In order to solve these problems, an evidence reasoning (ER) approach is employed to extract and fuse inference data from experts using a belief distribution and recursive ER algorithm, based on which evidence reasoning method for constructing conditional probability tables in Bayesian network of multimorbidity is presented step by step. A multimorbidity numerical example is used to demonstrate the method and prove its feasibility and application. Bayesian network can be determined as long as the inference assessment is inferred by each expert according to his/her knowledge or experience. Our method is more effective than existing methods for extracting expert inference data accurately and is fused effectively for constructing CPTs in a Bayesian network of multimorbidity.

  10. Número mais provável de Salmonella isoladas de carcaças de frango resfriadas [Most probable number of Salmonella isolated from refrigerated broiler carcasses]

    Directory of Open Access Journals (Sweden)

    Anderlise Borsoi

    2010-11-01

    Full Text Available Salmonella remains an important problem in poultry production and, among foodborne pathogens, it appears as one of the main agents in outbreaks of foodborne disease. To help assess the risk of acquiring a foodborne infection via chicken meat that has undergone inadequate cooking, or through cross-contamination from these animals, it is important to determine the extent of pathogen contamination in raw poultry. In the present work, 180 refrigerated broiler carcasses, obtained from retail stores, were analyzed for Salmonella with determination of the number of bacterial cells. The most probable number (MPN) method was used with brilliant green agar with novobiocin (BGN) and xylose-lysine tergitol 4 (XLT4) agar for isolation. The results showed a 12.2% occurrence of Salmonella on the refrigerated broiler carcasses; the mean Salmonella MPN per mL was 2.7 cells read on XLT4 agar and 1.3 cells on BGN agar. The Salmonella serovars isolated from the broiler carcasses in this study were S. Enteritidis, S. Agona, S. Rissen, S. Heidelberg and S. Livingstone. Analysis of the results showed that a variable number of Salmonella cells contaminates the refrigerated broiler carcasses on sale to consumers.

  11. A robust method to analyze copy number alterations of less than 100 kb in single cells using oligonucleotide array CGH.

    Directory of Open Access Journals (Sweden)

    Birte Möhlendick

    Full Text Available Comprehensive genome-wide analyses of single cells have become increasingly important in cancer research, but remain a technically challenging task. Here, we provide a protocol for array comparative genomic hybridization (aCGH) of single cells. The protocol is based on an established adapter-linker PCR (WGAM) and allowed us to detect copy number alterations as small as 56 kb in single cells. In addition we report on factors influencing the success of single cell aCGH downstream of the amplification method, including the characteristics of the reference DNA, the labeling technique, the amount of input DNA, reamplification, the aCGH resolution, and data analysis. In comparison with two other commercially available non-linear single cell amplification methods, WGAM showed a very good performance in aCGH experiments. Finally, we demonstrate that cancer cells that were processed and identified by the CellSearch® System and that were subsequently isolated from the CellSearch® cartridge as single cells by fluorescence activated cell sorting (FACS) could be successfully analyzed using our WGAM-aCGH protocol. We believe that even in the era of next-generation sequencing, our single cell aCGH protocol will be a useful and (cost-)effective approach to study copy number alterations in single cells at a resolution comparable to that currently reported for single cell digital karyotyping based on next-generation sequencing data.

  12. Site-selective substitutional doping with atomic precision on stepped Al (111) surface by single-atom manipulation.

    Science.gov (United States)

    Chen, Chang; Zhang, Jinhu; Dong, Guofeng; Shao, Hezhu; Ning, Bo-Yuan; Zhao, Li; Ning, Xi-Jing; Zhuang, Jun

    2014-01-01

    In fabrication of nano- and quantum devices, it is sometimes critical to position individual dopants at certain sites precisely to obtain specific or enhanced functionalities. With first-principles simulations, we propose a method for substitutional doping of an individual atom at a certain position on a stepped metal surface by single-atom manipulation. A selected atom at the step of an Al (111) surface could be extracted vertically with an Al trimer-apex tip, and the dopant atom can then be positioned at this site. The details of the entire process, including potential energy curves, are given, which suggests the reliability of the proposed single-atom doping method.

  13. Single-step link of the superdeformed band in 143Eu

    International Nuclear Information System (INIS)

    Atac, A.; Bergstroem, M.H.; Nyberg, J.; Persson, J.; Herskind, B.; Joss, D.T.; Lipoglavsek, M.; Tucek, K.

    1996-01-01

    A discrete γ-ray transition with an energy of 3360.6 keV deexciting the second lowest SD state in 143Eu has been discovered. It carries 3.2% of the full intensity of the band and feeds into a nearly spherical state which is above the I = 35/2(+), Ex = 4947 keV level. The exact placement of the single-step link is, however, not established due to the especially complicated level scheme in the region of interest. The energy of the single-step link agrees well with the previously determined two-step links. (orig.)

  14. Optimizing the number of steps in learning tasks for complex skills.

    NARCIS (Netherlands)

    Nadolski, Rob; Kirschner, Paul A.; Van Merriënboer, Jeroen

    2007-01-01

    Background. Carrying out whole tasks is often too difficult for novice learners attempting to acquire complex skills. The common solution is to split up the tasks into a number of smaller steps. The number of steps must be optimised for efficient and effective learning. Aim. The aim of the study is

  15. Valve cam design using numerical step-by-step method

    OpenAIRE

    Vasilyev, Aleksandr; Bakhracheva, Yuliya; Kabore, Ousman; Zelenskiy, Yuriy

    2014-01-01

    This article studies the numerical step-by-step method of cam profile design. The results of the study are used for designing the internal combustion engine valve gear. This method makes it possible to design cam profiles of peak efficiency while respecting the many restrictions connected with valve gear serviceability and reliability.

  16. Linker-dependent Junction Formation Probability in Single-Molecule Junctions

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Pil Sun; Kim, Taekyeong [Hankuk University of Foreign Studies, Yongin (Korea, Republic of)

    2015-01-15

    We compare the junction formation probabilities of single-molecule junctions with different linker molecules by using a scanning tunneling microscope-based break-junction technique. We found that the junction formation probability varies as SH > SMe > NH2 for the benzene backbone molecule with different types of anchoring groups, through quantitative statistical analysis. These results are attributed to different bonding forces according to the linker groups formed with Au atoms in the electrodes, which is consistent with previous works. Our work allows a better understanding of the contact chemistry in the metal-molecule junction for future molecular electronic devices.

  17. Determination of critical nucleation number for a single nucleation amyloid-β aggregation model.

    Science.gov (United States)

    Ghosh, Preetam; Vaidya, Ashwin; Kumar, Amit; Rangachari, Vijayaraghavan

    2016-03-01

    Aggregates of amyloid-β (Aβ) peptide are known to be the key pathological agents in Alzheimer disease (AD). Aβ aggregates to form large, insoluble fibrils that deposit as senile plaques in AD brains. The process of aggregation is nucleation-dependent, in which the formation of a nucleus is the rate-limiting step and controls the physicochemical fate of the aggregates formed. Therefore, understanding the properties of the nucleus and pre-nucleation events will be significant in reducing the existing knowledge gap in AD pathogenesis. In this report, we have determined the plausible range of the critical nucleation number (n*), the number of monomers associated within the nucleus, for a homogeneous aggregation model with a single unique nucleation event, by two independent methods: a reduced-order stability analysis and an ordinary differential equation based numerical analysis, supported by experimental biophysics. The results establish that the most likely range of n* is between 7 and 14 and, within this range, n* = 12 closely supports the experimental data. These numbers are in agreement with those previously reported, and importantly, the report establishes a new modeling framework using two independent approaches towards a convergent solution in modeling complex aggregation reactions. Our model also suggests that the formation of large protofibrils is dependent on the nature of n*, further supporting the idea that pre-nucleation events are significant in controlling the fate of larger aggregates formed. This report has re-opened an old problem with a new perspective and holds promise towards revealing the molecular events in amyloid pathologies in the future. Copyright © 2015 Elsevier Inc. All rights reserved.
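
    A generic nucleation-elongation sketch in dimensionless units, not the authors' reduced-order stability analysis or their fitted parameters: a single nucleation event of size n* converts monomer into new fibrils, which then grow by monomer addition. The rate constants below are arbitrary illustrative values; only n* = 12 is taken from the abstract.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    n_star = 12        # critical nucleation number suggested in the abstract
    k_n = 1e-6         # hypothetical (dimensionless) nucleation rate constant
    k_e = 1e3          # hypothetical (dimensionless) elongation rate constant

    def rhs(t, y):
        m, P, M = y                          # free monomer, fibril number, fibril mass
        nucleation = k_n * max(m, 0.0) ** n_star
        elongation = k_e * max(m, 0.0) * P
        dM = n_star * nucleation + elongation
        return [-dM, nucleation, dM]

    sol = solve_ivp(rhs, (0.0, 100.0), [1.0, 0.0, 0.0], method="LSODA",
                    t_eval=np.linspace(0.0, 100.0, 201))
    print(sol.y[2, -1])                      # fraction of monomer converted into fibril mass
    ```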

  18. A Method of Erasing Data Using Random Number Generators

    OpenAIRE

    井上,正人

    2012-01-01

    Erasing data is an indispensable step in the disposal of computers or external storage media. Apart from physical destruction, erasing data means writing random information over entire disk drives or media. We propose a method which erases data safely using random number generators. These random number generators create true random numbers based on quantum processes.
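
    The record only sketches the idea; as a loose illustration (not the authors' quantum-RNG scheme), overwriting a file once with bytes from the operating system's cryptographic random source looks roughly like the following. Note that a single overwrite of one file is not a certified sanitization procedure for SSDs or journaling filesystems.

    ```python
    import os

    def overwrite_with_random(path, chunk_size=1 << 20):
        """Overwrite an existing file in place with cryptographically random bytes."""
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            remaining = size
            while remaining > 0:
                n = min(chunk_size, remaining)
                f.write(os.urandom(n))   # OS CSPRNG stands in for the paper's quantum RNG
                remaining -= n
            f.flush()
            os.fsync(f.fileno())

    # overwrite_with_random("/tmp/example.bin")  # hypothetical path
    ```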

  19. Out of the picture: a study of family drawings by children from step-, single-parent, and non-step families.

    Science.gov (United States)

    Dunn, Judy; O'Connor, Thomas G; Levy, Irit

    2002-12-01

    Investigated the family drawings of 180 children ages 5 to 7 years in various family settings, including stepfather, single-parent, complex stepfamilies, and 2-parent control families. The relations of family type and biological relatedness to omission of family members and grouping of parents were examined. Children from step- and single-parent families were more likely to exclude family members than children from "control" non-step families, and exclusion was predicted from biological relatedness. Children who were biologically related to both resident parents were also more likely to group their parents together. Omission of family members was found to be associated with children's adjustment (specifically more externalizing and internalizing behavior) as reported by teachers and parents. The results indicate that biological relatedness is a salient aspect of very young children's representations of their families. The association between adjustment and exclusion of family members and grouping of parents indicates that family drawings may be useful research and clinical tools, when used in combination with other methods of assessment.

  20. Composition of single-step media used for human embryo culture.

    Science.gov (United States)

    Morbeck, Dean E; Baumann, Nikola A; Oglesbee, Devin

    2017-04-01

    To determine the compositions of commercial single-step culture media and to test with a murine model whether differences in composition are biologically relevant. Experimental laboratory study. University-based laboratory. Inbred female mice were superovulated and mated with outbred male mice. Amino acid, organic acid, and ion content was determined for the single-step culture media CSC, Global, G-TL, and 1-Step. To determine whether differences in the composition of these media are biologically relevant, mouse one-cell embryos were cultured for 96 hours in each culture medium at 5% and 20% oxygen in a time-lapse incubator. Compositions of the four culture media were analyzed for concentrations of 30 amino acids, organic acids, and ions. Blastocyst rates at 96 hours of culture and cell cycle timings were calculated, and experiments were repeated in triplicate. Of the more than 30 analytes, glucose, lactate, pyruvate, amino acids, phosphate, calcium, and magnesium varied in concentration. Mouse embryos were differentially affected by oxygen in G-TL and 1-Step. The four single-step culture media have compositions that vary notably in pyruvate, lactate, and amino acids. Blastocyst development was affected by culture medium and its interaction with oxygen concentration. Copyright © 2017 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.

  1. Handbook of statistical methods single subject design

    CERN Document Server

    Satake, Eiki; Maxwell, David L

    2008-01-01

    This book is a practical guide to the most commonly used approaches in analyzing and interpreting single-subject data. It arranges the methodologies used in a logical sequence using an array of research studies from the existing published literature to illustrate specific applications. The book provides a brief discussion of each approach, such as visual, inferential, and probabilistic models, the applications for which it is intended, and a step-by-step illustration of the test as used in an actual research study.

  2. An evaluation of a single-step extraction chromatography separation method for Sm-Nd isotope analysis of micro-samples of silicate rocks by high-sensitivity thermal ionization mass spectrometry

    International Nuclear Information System (INIS)

    Li Chaofeng; Li Xianhua; Li Qiuli; Guo Jinghui; Li Xianghui; Liu Tao

    2011-01-01

    Graphical abstract: distribution curve of all eluting fractions for BCR-2 (1, 2, 3.5, and 7 mg) on the LN column using HCl and HF as eluting reagents. Highlights: This analytical protocol affords a simple and rapid analysis of Sm and Nd isotopes in minor rock samples. The single-step separation method exhibits a satisfactory separation for complex silicate samples. Corrected 143Nd/144Nd data show excellent accuracy even if the 140Ce16O+/144Nd16O+ ratio reaches 0.03. - Abstract: A single-step separation scheme is presented for the Sm-Nd radiogenic isotope system on very small samples (1-3 mg) of silicate rock. This method is based on Eichrom LN Spec chromatographic material and affords a straightforward separation of Sm-Nd from a complex matrix with good purity and satisfactory blank levels, suitable for thermal ionization mass spectrometry (TIMS). This technique, characterized by high efficiency (single-step Sm-Nd separation) and high sensitivity (TIMS on the NdO+ ion beam), is able to process samples rapidly (3-4 h) with low procedure blanks. 143Nd/144Nd ratios and Sm-Nd concentrations are presented for eleven international silicate rock reference materials, spanning a wide range of Sm-Nd contents and bulk compositions. The analytical results show good agreement with recommended values, within ±0.004% for the 143Nd/144Nd isotopic ratio and ±2% for Sm-Nd quantification at the 95% confidence level. It is noted that the uncertainty of this method is about 3 times larger than the typical precision achievable with two-stage full separation followed by state-of-the-art conventional TIMS using Nd+ ion beams, which requires much larger amounts of Nd. Hence, our single-step separation followed by the NdO+ ion beam technique is preferred for the analysis of microsamples.

  3. Most probable trajectory of a muon in a scattering medium, when input and output trajectories are known

    Energy Technology Data Exchange (ETDEWEB)

    Benton, Christopher J., E-mail: cjb30@bath.ac.uk [Department of Electronic and Electrical Engineering, University of Bath, Claverton Down, Bath BA2 7AY (United Kingdom); Smith, Nathan D. [Department of Electronic and Electrical Engineering, University of Bath, Claverton Down, Bath BA2 7AY (United Kingdom); Quillin, Stephen J.; Steer, Christopher A. [AWE, Aldermaston, Reading, Berkshire RG7 4PR (United Kingdom)

    2012-11-21

    Tomographic imaging using cosmic ray muons has a range of applications including homeland security and geological imaging. To this end, we have developed a technique to calculate the most probable muon trajectory through a scattering material, given its measured entry and exit trajectories. This method has the potential to improve tomographic algorithms, in particular by replacing the muon paths assumed by the Point Of Closest Approach (POCA) method with more realistic paths. These paths can be calculated for arbitrary matter distributions, rather than just the point scatterers assumed by POCA.

  4. Quartz-Seq2: a high-throughput single-cell RNA-sequencing method that effectively uses limited sequence reads.

    Science.gov (United States)

    Sasagawa, Yohei; Danno, Hiroki; Takada, Hitomi; Ebisawa, Masashi; Tanaka, Kaori; Hayashi, Tetsutaro; Kurisaki, Akira; Nikaido, Itoshi

    2018-03-09

    High-throughput single-cell RNA-seq methods assign limited unique molecular identifier (UMI) counts as gene expression values to single cells from shallow sequence reads and detect limited gene counts. We thus developed a high-throughput single-cell RNA-seq method, Quartz-Seq2, to overcome these issues. Our improvements in the reaction steps make it possible to effectively convert initial reads to UMI counts, at a rate of 30-50%, and detect more genes. To demonstrate the power of Quartz-Seq2, we analyzed approximately 10,000 transcriptomes from in vitro embryonic stem cells and an in vivo stromal vascular fraction with a limited number of reads.

  5. Measurements of excited-state-to-excited-state transition probabilities and photoionization cross-sections using laser-induced fluorescence and photoionization signals

    International Nuclear Information System (INIS)

    Shah, M.L.; Sahoo, A.C.; Pulhani, A.K.; Gupta, G.P.; Dikshit, B.; Bhatia, M.S.; Suri, B.M.

    2014-01-01

    Laser-induced photoionization and fluorescence signals were simultaneously observed in atomic samarium using Nd:YAG-pumped dye lasers. Two-color, three-photon photoionization and two-color fluorescence signals were recorded simultaneously as a function of the second-step laser power for two photoionization pathways. The density matrix formalism has been employed to analyze these signals. Two-color laser-induced fluorescence signal depends on the laser powers used for the first and second-step transitions as well as the first and second-step transition probability whereas two-color, three-photon photoionization signal depends on the third-step transition cross-section at the second-step laser wavelength along with the laser powers and transition probability for the first and second-step transitions. Two-color laser-induced fluorescence was used to measure the second-step transition probability. The second-step transition probability obtained was used to infer the photoionization cross-section. Thus, the methodology combining two-color, three-photon photoionization and two-color fluorescence signals in a single experiment has been established for the first time to measure the second-step transition probability as well as the photoionization cross-section. - Highlights: • Laser-induced photoionization and fluorescence signals have been simultaneously observed. • The density matrix formalism has been employed to analyze these signals. • Two-color laser-induced fluorescence was used to measure the second-step transition probability. • The second-step transition probability obtained was used to infer the photoionization cross-section. • Transition probability and photoionization cross-section have been measured in a single experiment

  6. Random Numbers and Monte Carlo Methods

    Science.gov (United States)

    Scherer, Philipp O. J.

    Many-body problems often involve the calculation of integrals of very high dimension which cannot be treated by standard methods. For the calculation of thermodynamic averages Monte Carlo methods are very useful which sample the integration volume at randomly chosen points. After summarizing some basic statistics, we discuss algorithms for the generation of pseudo-random numbers with given probability distribution which are essential for all Monte Carlo methods. We show how the efficiency of Monte Carlo integration can be improved by sampling preferentially the important configurations. Finally the famous Metropolis algorithm is applied to classical many-particle systems. Computer experiments visualize the central limit theorem and apply the Metropolis method to the traveling salesman problem.
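
    As a concrete instance of the chapter's closing example, a minimal Metropolis sampler for a one-dimensional Boltzmann-like density exp(-E(x)); the double-well energy and the proposal width are arbitrary choices for the sketch.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def log_target(x):
        # unnormalized log-density, here a double-well "energy": exp(-(x^2 - 1)^2)
        return -((x**2 - 1.0) ** 2)

    def metropolis(n_samples, step=0.5, x0=0.0):
        samples = np.empty(n_samples)
        x = x0
        for i in range(n_samples):
            proposal = x + step * rng.normal()
            # accept with probability min(1, p(proposal)/p(x))
            if np.log(rng.random()) < log_target(proposal) - log_target(x):
                x = proposal
            samples[i] = x
        return samples

    chain = metropolis(50_000)
    print(chain.mean(), chain.var())
    ```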

  7. Probability-neighbor method of accelerating geometry treatment in reactor Monte Carlo code RMC

    International Nuclear Information System (INIS)

    She, Ding; Li, Zeguang; Xu, Qi; Wang, Kan; Yu, Ganglin

    2011-01-01

    Probability neighbor method (PNM) is proposed in this paper to accelerate the geometry treatment of Monte Carlo (MC) simulation and validated in the self-developed reactor Monte Carlo code RMC. During MC simulation by either the ray-tracking or the delta-tracking method, large amounts of time are spent in finding out which cell a particle is located in. The traditional way is to search cells one by one in a certain sequence defined previously. However, this procedure becomes very time-consuming when the system contains a large number of cells. Considering that particles have different probabilities of entering different cells, the PNM method optimizes the searching sequence, i.e., the cells with larger probability are searched preferentially. The PNM method is implemented in the RMC code and the numerical results show that considerable time of geometry treatment in MC calculation for complicated systems is saved, especially in delta-tracking simulation. (author)
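
    A toy illustration of the idea (not the RMC implementation): record how often a particle leaving one cell is next found in each other cell, and search candidates in descending order of that empirical probability, so that likely destinations are tested first.

    ```python
    from collections import defaultdict

    class ProbabilityNeighborSearch:
        """Toy version of probability-neighbor ordering for cell lookup."""

        def __init__(self, cells):
            self.cells = cells                      # objects assumed to have a contains(point) method
            self.counts = defaultdict(lambda: defaultdict(int))

        def locate(self, point, previous_cell):
            # candidate cells sorted by how often they followed previous_cell so far
            ranked = sorted(self.cells,
                            key=lambda c: self.counts[previous_cell][id(c)],
                            reverse=True)
            for cell in ranked:
                if cell.contains(point):
                    self.counts[previous_cell][id(cell)] += 1
                    return cell
            raise LookupError("point is outside all cells")
    ```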

  8. The Point Zoro Symmetric Single-Step Procedure for Simultaneous Estimation of Polynomial Zeros

    Directory of Open Access Journals (Sweden)

    Mansor Monsi

    2012-01-01

    Full Text Available The point symmetric single step procedure PSS1 has R-order of convergence at least 3. This procedure is modified by adding another single-step, which is the third step in PSS1. This modified procedure is called the point zoro symmetric single-step PZSS1. It is proven that the R-order of convergence of PZSS1 is at least 4 which is higher than the R-order of convergence of PT1, PS1, and PSS1. Hence, computational time is reduced since this procedure is more efficient for bounding simple zeros simultaneously.
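
    PZSS1 itself is not reproduced here; for orientation, the classical Weierstrass (Durand-Kerner) iteration below is a well-known simultaneous procedure of the same family, refining all zero estimates at once from a set of starting points.

    ```python
    import numpy as np

    def durand_kerner(coeffs, tol=1e-12, max_iter=200):
        """Simultaneously approximate all zeros of a polynomial (highest degree first)."""
        coeffs = np.asarray(coeffs, dtype=complex)
        coeffs = coeffs / coeffs[0]                 # make the polynomial monic
        n = len(coeffs) - 1
        # standard starting guesses spread over the complex plane
        z = (0.4 + 0.9j) ** np.arange(n)
        for _ in range(max_iter):
            p = np.polyval(coeffs, z)
            corrections = np.empty(n, dtype=complex)
            for i in range(n):
                others = np.prod(z[i] - np.delete(z, i))
                corrections[i] = p[i] / others
            z = z - corrections
            if np.max(np.abs(corrections)) < tol:
                break
        return z

    print(np.sort_complex(durand_kerner([1, -6, 11, -6])))  # zeros of x^3 - 6x^2 + 11x - 6
    ```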

  9. Linear positivity and virtual probability

    International Nuclear Information System (INIS)

    Hartle, James B.

    2004-01-01

    We investigate the quantum theory of closed systems based on the linear positivity decoherence condition of Goldstein and Page. The objective of any quantum theory of a closed system, most generally the universe, is the prediction of probabilities for the individual members of sets of alternative coarse-grained histories of the system. Quantum interference between members of a set of alternative histories is an obstacle to assigning probabilities that are consistent with the rules of probability theory. A quantum theory of closed systems therefore requires two elements: (1) a condition specifying which sets of histories may be assigned probabilities and (2) a rule for those probabilities. The linear positivity condition of Goldstein and Page is the weakest of the general conditions proposed so far. Its general properties relating to exact probability sum rules, time neutrality, and conservation laws are explored. Its inconsistency with the usual notion of independent subsystems in quantum mechanics is reviewed. Its relation to the stronger condition of medium decoherence necessary for classicality is discussed. The linear positivity of histories in a number of simple model systems is investigated with the aim of exhibiting linearly positive sets of histories that are not decoherent. The utility of extending the notion of probability to include values outside the range of 0-1 is described. Alternatives with such virtual probabilities cannot be measured or recorded, but can be used in the intermediate steps of calculations of real probabilities. Extended probabilities give a simple and general way of formulating quantum theory. The various decoherence conditions are compared in terms of their utility for characterizing classicality and the role they might play in further generalizations of quantum mechanics

  10. Fabrication of Polydimethylsiloxane Microlenses Utilizing Hydrogel Shrinkage and a Single Molding Step

    Directory of Open Access Journals (Sweden)

    Bader Aldalali

    2014-05-01

    Full Text Available We report on polydimethylsiloxane (PDMS) microlenses and microlens arrays on flat and curved substrates fabricated via a relatively simple process combining liquid-phase photopolymerization and a single molding step. The mold for the formation of the PDMS lenses is fabricated by photopolymerizing a polyacrylamide (PAAm) pre-hydrogel. The shrinkage of PAAm after its polymerization forms concave lenses. The lenses are then transferred to PDMS by a single molding step to form a PDMS microlens array on a flat substrate. The PAAm concave lenses are also transferred to PDMS and another flexible polymer, Solaris, to realize artificial compound eyes. The resultant microlenses and microlens arrays possess good uniformity and optical properties. The focal length of the lenses is inversely proportional to the shrinkage time. The microlens mold can also be rehydrated to change the focal length of the ultimate PDMS microlenses. The spherical aberration is 2.85 μm and the surface roughness is on the order of 204 nm. The microlenses can resolve 10.10 line pairs per mm (lp/mm) and have an f-number range between f/2.9 and f/56.5. For the compound eye, the field of view is 113°.

  11. The Most Probable Limit of Detection (MPL) for rapid microbiological methods

    NARCIS (Netherlands)

    Verdonk, G.P.H.T.; Willemse, M.J.; Hoefs, S.G.G.; Cremers, G.; Heuvel, E.R. van den

    Classical microbiological methods have nowadays unacceptably long cycle times. Rapid methods, available on the market for decades, are already applied within the clinical and food industry, but the implementation in pharmaceutical industry is hampered by for instance stringent regulations on

  12. The most probable limit of detection (MPL) for rapid microbiological methods

    NARCIS (Netherlands)

    Verdonk, G.P.H.T.; Willemse, M.J.; Hoefs, S.G.G.; Cremers, G.; Heuvel, van den E.R.

    2010-01-01

    Classical microbiological methods have nowadays unacceptably long cycle times. Rapid methods, available on the market for decades, are already applied within the clinical and food industry, but the implementation in pharmaceutical industry is hampered by for instance stringent regulations on

  13. Probability-Based Recognition Framework for Underwater Landmarks Using Sonar Images.

    Science.gov (United States)

    Lee, Yeongjun; Choi, Jinwoo; Ko, Nak Yong; Choi, Hyun-Taek

    2017-08-24

    This paper proposes a probability-based framework for recognizing underwater landmarks using sonar images. Current recognition methods use a single image, which does not provide reliable results because of weaknesses of the sonar image such as an unstable acoustic source, speckle noise, low resolution, and a single channel. However, if consecutive sonar images are used and the status (i.e., the existence and identity, or name) of an object is continuously evaluated by a stochastic method, the result of the recognition method is available for calculating the uncertainty, and it is more suitable for various applications. Our proposed framework consists of three steps: (1) candidate selection, (2) continuity evaluation, and (3) Bayesian feature estimation. Two probability methods, particle filtering and Bayesian feature estimation, are used to repeatedly estimate the continuity and features of objects in consecutive images. Thus, the status of the object is repeatedly predicted and updated by a stochastic method. Furthermore, we develop an artificial landmark to increase detectability by an imaging sonar, exploiting characteristics of acoustic waves such as instability and reflection depending on the roughness of the reflector surface. The proposed method is verified by basin experiments, and the results are presented.
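
    A stripped-down illustration of the "continuously evaluated status" idea, not the paper's particle-filter pipeline: the posterior probability that a landmark exists is updated frame by frame from detections, with assumed detection and false-alarm rates.

    ```python
    def update_existence_probability(prior, detected, p_detect=0.8, p_false_alarm=0.1):
        """One Bayesian update of P(landmark exists) from a single sonar frame."""
        if detected:
            likelihood_exists, likelihood_not = p_detect, p_false_alarm
        else:
            likelihood_exists, likelihood_not = 1 - p_detect, 1 - p_false_alarm
        numerator = likelihood_exists * prior
        return numerator / (numerator + likelihood_not * (1 - prior))

    p = 0.5                                                   # uninformative prior
    for frame_detection in [True, True, False, True, True]:   # hypothetical frame results
        p = update_existence_probability(p, frame_detection)
    print(round(p, 3))
    ```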

  14. PDE-Foam - a probability-density estimation method using self-adapting phase-space binning

    CERN Document Server

    Dannheim, Dominik; Voigt, Alexander; Grahn, Karl-Johan; Speckmayer, Peter

    2009-01-01

    Probability-Density Estimation (PDE) is a multivariate discrimination technique based on sampling signal and background densities defined by event samples from data or Monte-Carlo (MC) simulations in a multi-dimensional phase space. To efficiently use large event samples to estimate the probability density, a binary search tree (range searching) is used in the PDE-RS implementation. It is a generalisation of standard likelihood methods and a powerful classification tool for problems with highly non-linearly correlated observables. In this paper, we present an innovative improvement of the PDE method that uses a self-adapting binning method to divide the multi-dimensional phase space into a finite number of hyper-rectangles (cells). The binning algorithm adjusts the size and position of a predefined number of cells inside the multidimensional phase space, minimizing the variance of the signal and background densities inside the cells. The binned density information is stored in binary trees, allowing for a very ...

  15. Electrofishing capture probability of smallmouth bass in streams

    Science.gov (United States)

    Dauwalter, D.C.; Fisher, W.L.

    2007-01-01

    Abundance estimation is an integral part of understanding the ecology and advancing the management of fish populations and communities. Mark-recapture and removal methods are commonly used to estimate the abundance of stream fishes. Alternatively, abundance can be estimated by dividing the number of individuals sampled by the probability of capture. We conducted a mark-recapture study and used multiple repeated-measures logistic regression to determine the influence of fish size, sampling procedures, and stream habitat variables on the cumulative capture probability for smallmouth bass Micropterus dolomieu in two eastern Oklahoma streams. The predicted capture probability was used to adjust the number of individuals sampled to obtain abundance estimates. The observed capture probabilities were higher for larger fish and decreased with successive electrofishing passes for larger fish only. Model selection suggested that the number of electrofishing passes, fish length, and mean thalweg depth affected capture probabilities the most; there was little evidence for any effect of electrofishing power density and woody debris density on capture probability. Leave-one-out cross validation showed that the cumulative capture probability model predicts smallmouth abundance accurately. © Copyright by the American Fisheries Society 2007.
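
    The estimator mentioned in the third sentence is simply the catch divided by the (cumulative) capture probability; a minimal sketch with made-up numbers:

    ```python
    # Horvitz-Thompson-style abundance estimate: catch / capture probability
    catch = 87                      # smallmouth bass collected in the reach (hypothetical)
    p_capture_per_pass = 0.45       # predicted per-pass capture probability (hypothetical)
    passes = 3

    # cumulative probability of being captured in at least one of the passes
    p_cumulative = 1 - (1 - p_capture_per_pass) ** passes
    abundance_estimate = catch / p_cumulative
    print(round(p_cumulative, 3), round(abundance_estimate, 1))
    ```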

  16. Probability and stochastic modeling

    CERN Document Server

    Rotar, Vladimir I

    2012-01-01

    Basic Notions; Sample Space and Events; Probabilities; Counting Techniques; Independence and Conditional Probability; Independence; Conditioning; The Borel-Cantelli Theorem; Discrete Random Variables; Random Variables and Vectors; Expected Value; Variance and Other Moments. Inequalities for Deviations; Some Basic Distributions; Convergence of Random Variables. The Law of Large Numbers; Conditional Expectation; Generating Functions. Branching Processes. Random Walk Revisited; Branching Processes; Generating Functions; Branching Processes Revisited; More on Random Walk; Markov Chains; Definitions and Examples. Probability Distributions of Markov Chains; The First Step Analysis. Passage Times; Variables Defined on a Markov Chain; Ergodicity and Stationary Distributions; A Classification of States and Ergodicity; Continuous Random Variables; Continuous Distributions; Some Basic Distributions; Continuous Multivariate Distributions; Sums of Independent Random Variables; Conditional Distributions and Expectations; Distributions in the General Case. Simulation; Distribution F...

  17. Monte Carlo methods to calculate impact probabilities

    Science.gov (United States)

    Rickman, H.; Wiśniowski, T.; Wajer, P.; Gabryszewski, R.; Valsecchi, G. B.

    2014-09-01

    Context. Unraveling the events that took place in the solar system during the period known as the late heavy bombardment requires the interpretation of the cratered surfaces of the Moon and terrestrial planets. This, in turn, requires good estimates of the statistical impact probabilities for different source populations of projectiles, a subject that has received relatively little attention, since the works of Öpik (1951, Proc. R. Irish Acad. Sect. A, 54, 165) and Wetherill (1967, J. Geophys. Res., 72, 2429). Aims: We aim to work around the limitations of the Öpik and Wetherill formulae, which are caused by singularities due to zero denominators under special circumstances. Using modern computers, it is possible to make good estimates of impact probabilities by means of Monte Carlo simulations, and in this work, we explore the available options. Methods: We describe three basic methods to derive the average impact probability for a projectile with a given semi-major axis, eccentricity, and inclination with respect to a target planet on an elliptic orbit. One is a numerical averaging of the Wetherill formula; the next is a Monte Carlo super-sizing method using the target's Hill sphere. The third uses extensive minimum orbit intersection distance (MOID) calculations for a Monte Carlo sampling of potentially impacting orbits, along with calculations of the relevant interval for the timing of the encounter allowing collision. Numerical experiments are carried out for an intercomparison of the methods and to scrutinize their behavior near the singularities (zero relative inclination and equal perihelion distances). Results: We find an excellent agreement between all methods in the general case, while there appear large differences in the immediate vicinity of the singularities. With respect to the MOID method, which is the only one that does not involve simplifying assumptions and approximations, the Wetherill averaging impact probability departs by diverging toward

  18. Method for manufacturing a single crystal nanowire

    NARCIS (Netherlands)

    van den Berg, Albert; Bomer, Johan G.; Carlen, Edwin; Chen, S.; Kraaijenhagen, Roderik Adriaan; Pinedo, Herbert Michael

    2013-01-01

    A method for manufacturing a single crystal nano-structure is provided comprising the steps of providing a device layer with a <100> structure on a substrate; providing a stress layer onto the device layer; patterning the stress layer along the <110> direction of the device layer; selectively removing

  19. Method for manufacturing a single crystal nanowire

    NARCIS (Netherlands)

    van den Berg, Albert; Bomer, Johan G.; Carlen, Edwin; Chen, S.; Kraaijenhagen, R.A.; Pinedo, Herbert Michael

    2010-01-01

    A method for manufacturing a single crystal nano-structure is provided comprising the steps of providing a device layer with a <100> structure on a substrate; providing a stress layer onto the device layer; patterning the stress layer along the <110> direction of the device layer; selectively removing

  20. A method for using unmanned aerial vehicles for emergency investigation of single geo-hazards and sample applications of this method

    Science.gov (United States)

    Huang, Haifeng; Long, Jingjing; Yi, Wu; Yi, Qinglin; Zhang, Guodong; Lei, Bangjun

    2017-11-01

    In recent years, unmanned aerial vehicles (UAVs) have become widely used in emergency investigations of major natural hazards over large areas; however, UAVs are less commonly employed to investigate single geo-hazards. Based on a number of successful investigations in the Three Gorges Reservoir area, China, a complete UAV-based method for performing emergency investigations of single geo-hazards is described. First, a customized UAV system that consists of a multi-rotor UAV subsystem, an aerial photography subsystem, a ground control subsystem and a ground surveillance subsystem is described in detail. The implementation process, which includes four steps, i.e., indoor preparation, site investigation, on-site fast processing and application, and indoor comprehensive processing and application, is then elaborated, and two investigation schemes, automatic and manual, that are used in the site investigation step are put forward. Moreover, some key techniques and methods - e.g., the layout and measurement of ground control points (GCPs), route planning, flight control and image collection, and the Structure from Motion (SfM) photogrammetry processing - are explained. Finally, three applications are given. Experience has shown that using UAVs for emergency investigation of single geo-hazards greatly reduces the time, intensity and risks associated with on-site work and provides valuable, high-accuracy, high-resolution information that supports emergency responses.

  1. Single-step solution processing of small-molecule organic semiconductor field-effect transistors at high yield

    NARCIS (Netherlands)

    Yu, Liyang; Li, X.; Pavlica, E.; Loth, M.A.; Anthony, J.E.; Bratina, G.; Kjellander, B.K.C.; Gelinck, G.H.; Stutzmann, N.

    2011-01-01

    Here, we report a simple, alternative route towards high-mobility structures of the small-molecular semiconductor 5,11-bis(triethyl silylethynyl) anthradithiophene that requires one single processing step without the need for any post-deposition processing. The method relies on careful control of

  2. Dynamic pressure sensitivity determination with Mach number method

    Science.gov (United States)

    Sarraf, Christophe; Damion, Jean-Pierre

    2018-05-01

    Measurements of pressure in fast transient conditions are often performed even if the dynamic characteristics of the transducer are not traceable to international standards. Moreover, the question of a primary standard in dynamic pressure is still open, especially for gaseous applications. The aim is to improve dynamic standards in order to respond to expressed industrial needs. In this paper, the method proposed in the EMRP IND09 'Dynamic' project, which can be called the 'ideal shock tube method', is compared with the 'collective standard method' currently used in the Laboratoire de Métrologie Dynamique (LNE/ENSAM). The input is a step of pressure generated by a shock tube. The transducer is a piezoelectric pressure sensor. With the 'ideal shock tube method' the sensitivity of a pressure sensor is first determined dynamically. This method requires a shock tube implemented with piezoelectric shock wave detectors. The measurement of the Mach number in the tube allows an evaluation of the incident pressure amplitude of the step using a theoretical 1D model of the shock tube. Heat transfer, other real effects and the effects of shock tube imperfections are not taken into account. The amplitude of the pressure step is then used to determine the sensitivity in dynamic conditions. The second method uses a frequency bandwidth comparison to determine pressure at frequencies from quasi-static conditions, traceable to static pressure standards, to higher frequencies (up to 10 kHz). The measurand is again a step of pressure generated by an assumed ideal shock tube or a fast-opening device. The results are provided as a transfer function with an uncertainty budget assigned to a frequency range, also deliverable frequency by frequency. The largest uncertainty in the bandwidth of comparison is used to trace the final pressure step level measured in dynamic conditions, given that this pressure is not measurable in a steady state on a shock tube. A reference
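
    The "evaluation of the incident pressure amplitude" from the measured Mach number can be sketched with the standard normal-shock relation for an ideal, calorically perfect gas; the driven-gas pressure and ratio of specific heats below are assumptions, and none of the real-tube corrections discussed in the abstract are included.

    ```python
    def pressure_step_from_mach(mach, p1=101_325.0, gamma=1.4):
        """Static pressure jump p2 - p1 across a normal shock of Mach number `mach`
        in the driven gas (ideal-gas normal-shock relation, no corrections)."""
        p2_over_p1 = 1.0 + 2.0 * gamma / (gamma + 1.0) * (mach**2 - 1.0)
        return p2_over_p1 * p1 - p1

    # e.g. a Mach 1.3 incident shock into air at ambient pressure
    print(round(pressure_step_from_mach(1.3)))   # pressure step amplitude in Pa
    ```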

  3. Single-cell mRNA transfection studies: delivery, kinetics and statistics by numbers.

    Science.gov (United States)

    Leonhardt, Carolin; Schwake, Gerlinde; Stögbauer, Tobias R; Rappl, Susanne; Kuhr, Jan-Timm; Ligon, Thomas S; Rädler, Joachim O

    2014-05-01

    In artificial gene delivery, messenger RNA (mRNA) is an attractive alternative to plasmid DNA (pDNA) since it does not require transfer into the cell nucleus. Here we show that, unlike for pDNA transfection, the delivery statistics and dynamics of mRNA-mediated expression are generic and predictable in terms of mathematical modeling. We measured the single-cell expression time-courses and levels of enhanced green fluorescent protein (eGFP) using time-lapse microscopy and flow cytometry (FC). The single-cell analysis provides direct access to the distribution of onset times, life times and expression rates of mRNA and eGFP. We introduce a two-step stochastic delivery model that reproduces the number distribution of successfully delivered and translated mRNA molecules and thereby the dose-response relation. Our results establish a statistical framework for mRNA transfection and as such should advance the development of RNA carriers and small interfering/micro RNA-based drugs. This team of authors established a statistical framework for mRNA transfection by using a two-step stochastic delivery model that reproduces the number distribution of successfully delivered and translated mRNA molecules and thereby their dose-response relation. This study establishes a nice connection between theory and experimental planning and will aid the cellular delivery of mRNA molecules. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
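
    A toy version of a two-step delivery picture, not the authors' fitted model: the number of mRNA complexes reaching a cell is Poisson distributed, each complex is released and translated with some probability, and a cell counts as "expressing" if at least one molecule survives both steps, which yields a simple dose-response curve.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def simulate_expressing_fraction(mean_complexes, p_release, n_cells=10_000):
        """Fraction of cells with at least one successfully delivered and translated mRNA."""
        arrived = rng.poisson(mean_complexes, size=n_cells)       # step 1: delivery
        survived = rng.binomial(arrived, p_release)               # step 2: release/translation
        return np.mean(survived > 0)

    for dose in [0.5, 1, 2, 5, 10]:                               # hypothetical mean complexes per cell
        simulated = simulate_expressing_fraction(dose, p_release=0.3)
        analytic = 1 - np.exp(-dose * 0.3)                        # Poisson thinning gives this closed form
        print(dose, round(simulated, 3), round(analytic, 3))
    ```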

  4. Structural Studies of Silver Nanoparticles Obtained Through Single-Step Green Synthesis

    Science.gov (United States)

    Prasad Peddi, Siva; Abdallah Sadeh, Bilal

    2015-10-01

    Green synthesis of silver nanoparticles (AgNPs) has been among the most prominent topics in metallic nanoparticle research for over a decade and a half, owing both to the simplicity of preparation and to the applicability of biological species, with extensive applications in medicine and biotechnology, for reducing and capping the particles. The current article uses Eclipta prostrata leaf extract as the biological species to cap the AgNPs through a single-step process. The characterization data obtained were used for the analysis of the sample structure. The article focuses on the analysis of particle shape, size, and lattice parameters, and proposes a general scheme and a mathematical model for the analysis of their dependence. The data for the synthesized AgNPs have been used to introduce a structural shape factor for the crystalline nanoparticles. The structural properties of the AgNPs proposed and evaluated through the theoretical model were consistent with the experimental results. This approach gives scope for structural studies of ultrafine particles prepared using biological methods.

  5. Benchmarking PARTISN with Analog Monte Carlo: Moments of the Neutron Number and the Cumulative Fission Number Probability Distributions

    Energy Technology Data Exchange (ETDEWEB)

    O' Rourke, Patrick Francis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-10-27

    The purpose of this report is to provide the reader with an understanding of how a Monte Carlo neutron transport code was written, developed, and evolved to calculate the probability distribution functions (PDFs) and their moments for the neutron number at a final time as well as the cumulative fission number, along with introducing several basic Monte Carlo concepts.

  6. A methodology for more efficient tail area sampling with discrete probability distribution

    International Nuclear Information System (INIS)

    Park, Sang Ryeol; Lee, Byung Ho; Kim, Tae Woon

    1988-01-01

    The Monte Carlo method is commonly used to observe the overall distribution and to determine the lower or upper bound value in a statistical approach when direct analytical calculation is unavailable. However, this method is not efficient when the tail area of a distribution is of concern. A new method entitled 'Two Step Tail Area Sampling' is developed, which assumes a discrete probability distribution and samples only the tail area without distorting the overall distribution. This method uses a two step sampling procedure. First, sampling is done at points separated by large intervals; second, sampling is done at points separated by small intervals around check points determined in the first step. Comparison with the Monte Carlo method shows that the results obtained from the new method converge to the analytic value faster than the Monte Carlo method for the same number of calculations. This new method is applied to the DNBR (Departure from Nucleate Boiling Ratio) prediction problem in the design of the pressurized light water nuclear reactor

  7. Proposal for a Five-Step Method to Elicit Expert Judgment

    Directory of Open Access Journals (Sweden)

    Duco Veen

    2017-12-01

    Full Text Available Elicitation is a commonly used tool to extract viable information from experts. The information that is held by the expert is extracted and a probabilistic representation of this knowledge is constructed. A promising avenue in psychological research is to incorporate experts' prior knowledge into the statistical analysis. Systematic reviews of the elicitation literature, however, suggest that it might be inappropriate to directly obtain distributional representations from experts. The literature qualifies experts' performance on estimating elements of a distribution as unsatisfactory, so reliably specifying the essential elements of the parameters of interest in one elicitation step seems implausible. Providing feedback within the elicitation process can enhance the quality of the elicitation, and interactive software can be used to facilitate the feedback. Therefore, we propose to decompose the elicitation procedure into smaller steps with adjustable outcomes. We represent the tacit knowledge of experts as a location parameter and their uncertainty concerning this knowledge by a scale and shape parameter. Using a feedback procedure, experts can accept the representation of their beliefs or adjust their input. We propose a Five-Step Method which consists of (1) eliciting the location parameter using the trial roulette method; (2) providing feedback on the location parameter and asking for confirmation or adjustment; (3) eliciting the scale and shape parameter; (4) providing feedback on the scale and shape parameter and asking for confirmation or adjustment; and (5) using the elicited and calibrated probability distribution in a statistical analysis and updating it with data or computing a prior-data conflict within a Bayesian framework. User feasibility and internal validity for the Five-Step Method are investigated using three elicitation studies.

  8. Process analysis and modeling of a single-step lutein extraction method for wet microalgae.

    Science.gov (United States)

    Gong, Mengyue; Wang, Yuruihan; Bassi, Amarjeet

    2017-11-01

    Lutein is a commercial carotenoid with potential health benefits. Microalgae are an alternative source for lutein production in comparison to conventional approaches using marigold flowers. In this study, a process analysis of a single-step simultaneous extraction, saponification, and primary purification process for free lutein production from wet microalgal biomass was carried out. The feasibility of binary solvent mixtures for wet biomass extraction was successfully demonstrated, and the extraction kinetics of lutein from the chloroplast in microalgae were evaluated for the first time. The effects of the type of organic solvent, solvent polarity, cell disruption method, and alkali and solvent usage on lutein yields were examined. A mathematical model based on Fick's second law of diffusion was applied to model the experimental data. The mass transfer coefficients were used to estimate the extraction rates. The extraction rate was found to be more strongly related to the alkali-to-solvent ratio than to the alkali-to-biomass ratio. The best conditions for extraction efficiency were found to be pre-treatment with ultrasonication at a 0.5 s working cycle per second, a reaction time of 0.5 h at a solvent-to-biomass ratio of 0.27 L/g, and 1:3 ether/ethanol (v/v) with 1.25 g KOH/L. The entire process can be completed within 1 h and yields over 8 mg/g lutein, which is more economical for scale-up.

  9. Step Sizes for Strong Stability Preservation with Downwind-Biased Operators

    KAUST Repository

    Ketcheson, David I.

    2011-08-04

    Strong stability preserving (SSP) integrators for initial value ODEs preserve temporal monotonicity solution properties in arbitrary norms. All existing SSP methods, including implicit methods, either require small step sizes or achieve only first order accuracy. It is possible to achieve more relaxed step size restrictions in the discretization of hyperbolic PDEs through the use of both upwind- and downwind-biased semidiscretizations. We investigate bounds on the maximum SSP step size for methods that include negative coefficients and downwind-biased semi-discretizations. We prove that the downwind SSP coefficient for linear multistep methods of order greater than one is at most equal to two, while the downwind SSP coefficient for explicit Runge–Kutta methods is at most equal to the number of stages of the method. In contrast, the maximal downwind SSP coefficient for second order Runge–Kutta methods is shown to be unbounded. We present a class of such methods with arbitrarily large SSP coefficient and demonstrate that they achieve second order accuracy for large CFL number.

  10. Response of single polymers to localized step strains

    NARCIS (Netherlands)

    Panja, D.

    2009-01-01

    In this paper, the response of single three-dimensional phantom and self-avoiding polymers to localized step strains is studied for two cases in the absence of hydrodynamic interactions: (i) polymers tethered at one end with the strain created at the point of tether, and (ii) free polymers with the

  11. Probability of Detection (POD) as a statistical model for the validation of qualitative methods.

    Science.gov (United States)

    Wehling, Paul; LaBudde, Robert A; Brunelle, Sharon L; Nelson, Maria T

    2011-01-01

    A statistical model is presented for use in validation of qualitative methods. This model, termed Probability of Detection (POD), harmonizes the statistical concepts and parameters between quantitative and qualitative method validation. POD characterizes method response with respect to concentration as a continuous variable. The POD model provides a tool for graphical representation of response curves for qualitative methods. In addition, the model allows comparisons between candidate and reference methods, and provides calculations of repeatability, reproducibility, and laboratory effects from collaborative study data. Single laboratory study and collaborative study examples are given.
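
    As a hedged illustration of such a response curve, the sketch below fits the observed proportion of positive replicates to a logistic POD curve in log concentration. The concentration levels, replicate counts, and the logistic form are assumptions for illustration, not the specific model mandated by the cited report.

```python
# Illustrative POD response curve: probability of detection vs. concentration,
# fitted with a logistic model in log10(concentration). Data are invented.
import numpy as np
from scipy.optimize import curve_fit

def pod(conc, log_c50, slope):
    """Probability of detection as a logistic function of log10 concentration."""
    return 1.0 / (1.0 + np.exp(-slope * (np.log10(conc) - log_c50)))

conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0])   # analyte level, e.g. CFU/g (assumed)
detected = np.array([2, 5, 14, 19, 20])        # positive replicates out of 20 (assumed)
observed_pod = detected / 20

(log_c50, slope), _ = curve_fit(pod, conc, observed_pod, p0=(0.0, 2.0))
print(f"estimated level with POD = 0.5 ~ {10 ** log_c50:.2f}, slope ~ {slope:.2f}")
```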

  12. Calculation of Friction Coefficient and Analysis of Fluid Flow in a Stepped Micro-Channel for Wide Range of Knudsen Number Using Lattice Boltzmann (MRT) Method

    Directory of Open Access Journals (Sweden)

    Y. Bakhshan

    2015-01-01

    Microscale gas flows have attracted significant research interest in the last two decades. In this research, the flow of gases in a stepped micro-channel over a wide range of Knudsen numbers has been analyzed using the Lattice Boltzmann (MRT) method. In the model, a modified second-order slip boundary condition and a Bosanquet-type effective viscosity are used to capture the velocity slip at the boundaries, to cover the slip and transition regimes of flow, and to obtain an accurate simulation of rarefied gases. Flow characteristics such as pressure loss, velocity profile, streamlines and friction coefficient at different conditions are presented. The results show good agreement with available experimental data. The calculations show that the friction coefficient decreases with increasing Knudsen number and that stepping the micro-channel has an inverse effect on the friction coefficient. Furthermore, a new correlation is suggested for calculating the friction coefficient in the stepped micro-channel: C_f Re = 3.113 + 2.915/(1 + 2 Kn) + 0.641 exp(3.203/(1 + 2 Kn)).
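
    The correlation quoted in the abstract (with the closing parentheses reconstructed as shown above) can be evaluated directly; the small helper below simply implements that formula and is not part of the original paper.

```python
# Friction coefficient correlation for the stepped micro-channel, as quoted in the
# abstract: Cf*Re = 3.113 + 2.915/(1 + 2*Kn) + 0.641*exp(3.203/(1 + 2*Kn)).
import math

def cf_re(kn: float) -> float:
    """Return the product Cf*Re for a given Knudsen number Kn."""
    return 3.113 + 2.915 / (1 + 2 * kn) + 0.641 * math.exp(3.203 / (1 + 2 * kn))

for kn in (0.01, 0.1, 1.0, 10.0):
    # Cf*Re decreases with increasing Kn, consistent with the reported trend.
    print(f"Kn = {kn:5.2f}  ->  Cf*Re = {cf_re(kn):6.2f}")
```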

  13. A Normalized Transfer Matrix Method for the Free Vibration of Stepped Beams: Comparison with Experimental and FE(3D) Methods

    Directory of Open Access Journals (Sweden)

    Tamer Ahmed El-Sayed

    2017-01-01

    The exact solution for a multi-stepped Timoshenko beam is derived using a set of fundamental solutions. This set of solutions is derived so as to normalize the solution at the origin of the coordinates. The start, end, and intermediate boundary conditions involve concentrated masses and linear and rotational elastic supports. The beam start, end, and intermediate equations are assembled using the present normalized transfer matrix (NTM). The advantage of this method is that it is quicker than the standard method because the size of the complete system coefficient matrix is 4 × 4. In addition, during the assembly of this matrix, no matrix inversion steps are required. The validity of this method is tested by comparing the results of the current method with the literature. The validity of the exact stepped analysis is then checked using experimental and FE(3D) methods. The experimental results for stepped beams with a single step and with two steps, for sixteen different test samples, are in excellent agreement with those of the three-dimensional finite element method FE(3D). The comparison between the NTM method and the finite element method results shows that the modal percentage deviation increases when a beam step location coincides with a peak point in the mode shape. Meanwhile, the deviation decreases when a beam step location coincides with a straight portion of the mode shape.

  14. A Comparative Study of Low-dose Step-up Versus Step-down in Polycystic Ovary Syndrome Resistant to Clomiphene

    Directory of Open Access Journals (Sweden)

    S Peivandi

    2010-03-01

    Introduction: Polycystic ovary syndrome (PCOS) is one of the most common causes of infertility in women. Clomiphene is the first line of treatment; however, 20% of patients are resistant to clomiphene. Because of follicular hypersensitivity to gonadotropins in PCOS, multiple follicular growth and development occur, which can cause ovarian hyperstimulation syndrome (OHSS) and multiple pregnancy. The aim of this randomized clinical study was to compare the step-down and low-dose step-up methods for ovulation induction in clomiphene-resistant patients. Methods: 60 cases were included: 30 women in the low-dose step-up group and 30 women in the step-down group. HMG was started on day 3 of the cycle at 75 U/day in the low-dose step-up group and at 225 U/day in the step-down group; monitoring with vaginal sonography was done on day 8 of the cycle. When a follicle >14 mm in diameter was seen, the HMG dose was continued in the low-dose step-up group and decreased in the step-down group. When a follicle reached 18 mm in diameter, 10,000 units of HCG were injected and IUI was performed 36 hours later. Results: The number of HMG ampoules, the number of follicles >14 mm on the day of HCG injection, and the serum estradiol level were greater in the low-dose step-up protocol than in the step-down protocol (p<0.0001). The ovulation rate and pregnancy rate were greater in the low-dose step-up group than in the step-down group, with a significant difference (p<0.0001). Conclusion: Our study showed that the low-dose step-up regimen with HMG is effective for stimulating ovulation and achieving clinical pregnancy, but in view of monofollicular growth, the step-down method was more effective and safer. In our study, multifollicular growth was higher with the step-up method than with the step-down method, which predicts the possibility of ovarian hyperstimulation syndrome in highly sensitive PCOS patients.

  15. Quantitative Single-letter Sequencing: a method for simultaneously monitoring numerous known allelic variants in single DNA samples

    Directory of Open Access Journals (Sweden)

    Duborjal Hervé

    2008-02-01

    Background: Pathogens such as fungi, bacteria and especially viruses are highly variable even within an individual host, intensifying the difficulty of distinguishing and accurately quantifying numerous allelic variants co-existing in a single nucleic acid sample. The majority of currently available techniques are based on real-time PCR or primer extension and often require multiplexing adjustments that impose a practical limitation on the number of alleles that can be monitored simultaneously at a single locus. Results: Here, we describe a novel method that allows the simultaneous quantification of numerous allelic variants in a single reaction tube and without multiplexing. Quantitative Single-letter Sequencing (QSS) begins with a single PCR amplification step using a pair of primers flanking the polymorphic region of interest. Next, PCR products are submitted to single-letter sequencing with a fluorescently-labelled primer located upstream of the polymorphic region. The resulting monochromatic electropherogram shows numerous specific diagnostic peaks, attributable to specific variants, signifying their presence/absence in the DNA sample. Moreover, peak fluorescence can be quantified and used to estimate the frequency of the corresponding variant in the DNA population. Using engineered allelic markers in the genome of Cauliflower mosaic virus, we reliably monitored six different viral genotypes in DNA extracted from infected plants. Evaluation of the intrinsic variance of this method, as applied to both artificial plasmid DNA mixes and viral genome populations, demonstrates that QSS is a robust and reliable method of detection and quantification for variants with a relative frequency of between 0.05 and 1. Conclusion: This simple method is easily transferable to many other biological systems and questions, including those involving high-throughput analysis, and can be performed in any laboratory since it does not require specialized equipment.

  16. Recurrence and Polya Number of General One-Dimensional Random Walks

    International Nuclear Information System (INIS)

    Zhang Xiaokun; Wan Jing; Lu Jingju; Xu Xinping

    2011-01-01

    The recurrence properties of random walks can be characterized by the Polya number, i.e., the probability that the walker has returned to the origin at least once. In this paper, we consider recurrence properties for a general 1D random walk on a line, in which at each time step the walker can move to the left or right with probabilities l and r, or remain at the same position with probability o (l + r + o = 1). We calculate the Polya number P of this model and find a simple expression, P = 1 - Δ, where Δ is the absolute difference of l and r (Δ = |l - r|). We prove this expression rigorously by the method of creative telescoping, and our result shows that the walk is recurrent if and only if the left-moving probability l equals the right-moving probability r. (general)
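
    The closed-form result P = 1 - |l - r| can be checked numerically. The sketch below is an illustrative Monte Carlo estimate with a finite horizon and a finite number of walkers, so it only approximates the infinite-time return probability; it is not the creative-telescoping proof used in the paper.

```python
# Monte Carlo check of the Polya number P = 1 - |l - r| for a lazy 1D random walk.
# The finite horizon and sample size give an approximation only.
import random

def estimate_polya(l, r, steps=5000, walkers=4000, seed=1):
    rng = random.Random(seed)
    returned = 0
    for _ in range(walkers):
        x = 0
        for _ in range(steps):
            u = rng.random()
            if u < l:
                x -= 1
            elif u < l + r:
                x += 1
            # otherwise stay put (probability o = 1 - l - r)
            if x == 0:          # walker occupies the origin again at some t >= 1
                returned += 1
                break
    return returned / walkers

l, r = 0.3, 0.4
print("simulated P ~", estimate_polya(l, r))
print("formula   P =", 1 - abs(l - r))
```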

  17. Single mode step-index polymer optical fiber for humidity insensitive high temperature fiber Bragg grating sensors

    DEFF Research Database (Denmark)

    Woyessa, Getinet; Fasano, Andrea; Stefani, Alessio

    2016-01-01

    We have fabricated the first single-mode step-index and humidity insensitive polymer optical fiber operating in the 850 nm wavelength range. The step-index preform is fabricated using injection molding, which is an efficient method for cost-effective, flexible and fast preparation of the fiber preform. The fabricated single-mode step-index (SI) polymer optical fiber (POF) has a 4.8 µm core made from TOPAS grade 5013S-04 with a glass transition temperature of 134°C and a 150 µm cladding made from ZEONEX grade 480R with a glass transition temperature of 138°C. The key advantages of the proposed SIPOF are low water absorption, high operating temperature and chemical inertness to acids, bases and many polar solvents as compared to the conventional poly-methyl-methacrylate (PMMA) and polystyrene based POFs. In addition, the fiber Bragg grating writing time is short compared to microstructured

  18. The Effects of Multiple-Step and Single-Step Directions on Fourth and Fifth Grade Students' Grammar Assessment Performance

    Science.gov (United States)

    Mazerik, Matthew B.

    2006-01-01

    The mean scores of English Language Learners (ELL) and English Only (EO) students in 4th and 5th grade (N = 110), across the teacher-administered Grammar Skills Test, were examined for differences in participants' scores on assessments containing single-step directions and assessments containing multiple-step directions. The results indicated no…

  19. A Geometry-Based Cycle Slip Detection and Repair Method with Time-Differenced Carrier Phase (TDCP) for a Single-Frequency Global Positioning System (GPS) + BeiDou Navigation Satellite System (BDS) Receiver

    Directory of Open Access Journals (Sweden)

    Chuang Qian

    2016-12-01

    As the field of high-precision applications based on carrier phase continues to expand, the development of low-cost, small, modular receivers and their application in diverse scenarios and situations with complex data quality have increased the requirements on carrier-phase data preprocessing. A new geometry-based cycle slip detection and repair method based on the Global Positioning System (GPS) + BeiDou Navigation Satellite System (BDS) is proposed. The method uses a Time-differenced Carrier Phase (TDCP) model, which eliminates the Inter-System Bias (ISB) between GPS and BDS and is conducive to the effective combination of GPS and BDS. It avoids the interference of pseudo-range noise with cycle slip detection, while the cycle slips are preserved as integers. The method does not limit the number of receiver frequencies and is applicable to single-frequency data. The process is divided into two steps to detect and repair cycle slips. The first step is cycle slip detection, using the Improved Local Analysis Method (ILAM) to find satellites that have cycle slips. The second step is to repair the cycle slips, including estimating the float solution of the changes in ambiguities at the satellites that have cycle slips with the least squares method and the integer solution of the cycle slips by rounding. In the process of rounding, in addition to the success probability, a decimal test is carried out to validate the result. Finally, experiments with field test data are carried out to prove the effectiveness of this method. The results show that the number of detectable cycle slips with GPS + BDS is much greater than that with GPS alone. The method can also detect non-integer outliers while fixing the cycle slips. The maximum decimal bias in repair is less than that with GPS. This implies that the method takes full advantage of the multi-system combination.

  20. Single-step digital backpropagation for nonlinearity mitigation

    DEFF Research Database (Denmark)

    Secondini, Marco; Rommel, Simon; Meloni, Gianluca

    2015-01-01

    Nonlinearity mitigation based on the enhanced split-step Fourier method (ESSFM) for the implementation of low-complexity digital backpropagation (DBP) is investigated and experimentally demonstrated. After reviewing the main computational aspects of DBP and of the conventional split-step Fourier...... in the computational complexity, power consumption, and latency with respect to a simple feed-forward equalizer for bulk dispersion compensation....

  1. METHOD FOR MANUFACTURING A SINGLE CRYSTAL NANO-WIRE.

    NARCIS (Netherlands)

    Van Den Berg, Albert; Bomer, Johan; Carlen Edwin, Thomas; Chen, Songyue; Kraaijenhagen Roderik, Adriaan; Pinedo Herbert, Michael

    2011-01-01

    A method for manufacturing a single crystal nano-structure is provided comprising the steps of providing a device layer with a 100 structure on a substrate; providing a stress layer onto the device layer; patterning the stress layer along the 110 direction of the device layer; selectively removing

  2. Method for making a single-step etch mask for 3D monolithic nanostructures

    International Nuclear Information System (INIS)

    Grishina, D A; Harteveld, C A M; Vos, W L; Woldering, L A

    2015-01-01

    Current nanostructure fabrication by etching is usually limited to planar structures as they are defined by a planar mask. The realization of three-dimensional (3D) nanostructures by etching requires technologies beyond planar masks. We present a method for fabricating a 3D mask that allows one to etch three-dimensional monolithic nanostructures using only CMOS-compatible processes. The mask is written in a hard-mask layer that is deposited on two adjacent inclined surfaces of a Si wafer. By projecting in a single step two different 2D patterns within one 3D mask on the two inclined surfaces, the mutual alignment between the patterns is ensured. Thereby after the mask pattern is defined, the etching of deep pores in two oblique directions yields a three-dimensional structure in Si. As a proof of concept we demonstrate 3D mask fabrication for three-dimensional diamond-like photonic band gap crystals in silicon. The fabricated crystals reveal a broad stop gap in optical reflectivity measurements. We propose how 3D nanostructures with five different Bravais lattices can be realized, namely cubic, tetragonal, orthorhombic, monoclinic and hexagonal, and demonstrate a mask for a 3D hexagonal crystal. We also demonstrate the mask for a diamond-structure crystal with a 3D array of cavities. In general, the 2D patterns on the different surfaces can be completely independently structured and still be in perfect mutual alignment. Indeed, we observe an alignment accuracy of better than 3.0 nm between the 2D mask patterns on the inclined surfaces, which permits one to etch well-defined monolithic 3D nanostructures. (paper)

  3. An optimized Line Sampling method for the estimation of the failure probability of nuclear passive systems

    International Nuclear Information System (INIS)

    Zio, E.; Pedroni, N.

    2010-01-01

    The quantitative reliability assessment of a thermal-hydraulic (T-H) passive safety system of a nuclear power plant can be obtained by (i) Monte Carlo (MC) sampling the uncertainties of the system model and parameters, (ii) computing, for each sample, the system response by a mechanistic T-H code and (iii) comparing the system response with pre-established safety thresholds, which define the success or failure of the safety function. The computational effort involved can be prohibitive because of the large number of (typically long) T-H code simulations that must be performed (one for each sample) for the statistical estimation of the probability of success or failure. In this work, Line Sampling (LS) is adopted for efficient MC sampling. In the LS method, an 'important direction' pointing towards the failure domain of interest is determined and a number of conditional one-dimensional problems are solved along this direction; this allows for a significant reduction of the variance of the failure probability estimator, with respect, for example, to standard random sampling. Two issues are still open with respect to LS: first, the method relies on the determination of the 'important direction', which requires additional runs of the T-H code; second, although the method has been shown to improve the computational efficiency by reducing the variance of the failure probability estimator, no evidence has been given yet that accurate and precise failure probability estimates can be obtained with a number of samples reduced to below a few hundred, which may be required in case of long-running models. The work presented in this paper addresses the first issue by (i) quantitatively comparing the efficiency of the methods proposed in the literature to determine the LS important direction; (ii) employing artificial neural network (ANN) regression models as fast-running surrogates of the original, long-running T-H code to reduce the computational cost associated with the
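
    A minimal sketch of the Line Sampling estimator in standard normal space is given below. The toy limit-state function, the assumed important direction, and the number of lines are illustrative choices standing in for the long-running T-H code and the direction-estimation step discussed in the paper.

```python
# Illustrative Line Sampling in standard normal space.
# g(x) < 0 denotes failure; alpha is the (assumed known) unit important direction.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

rng = np.random.default_rng(0)

def g(x):
    """Toy limit-state function (failure when negative); stands in for the T-H code."""
    return 3.5 - x[0] - 0.5 * x[1] ** 2

alpha = np.array([1.0, 0.0])      # assumed important direction (unit norm)
p_line = []

for _ in range(200):              # one 1D problem per sampled line
    x = rng.standard_normal(2)
    x_perp = x - np.dot(x, alpha) * alpha        # project onto plane orthogonal to alpha
    f = lambda c: g(x_perp + c * alpha)          # limit state along the line
    c_star = brentq(f, -20.0, 20.0)              # distance to the failure surface
    p_line.append(norm.sf(c_star))               # conditional failure probability on this line

print("Line Sampling failure probability estimate:", np.mean(p_line))
```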

  4. Probability of identification: a statistical model for the validation of qualitative botanical identification methods.

    Science.gov (United States)

    LaBudde, Robert A; Harnly, James M

    2012-01-01

    A qualitative botanical identification method (BIM) is an analytical procedure that returns a binary result (1 = Identified, 0 = Not Identified). A BIM may be used by a buyer, manufacturer, or regulator to determine whether a botanical material being tested is the same as the target (desired) material, or whether it contains excessive nontarget (undesirable) material. The report describes the development and validation of studies for a BIM based on the proportion of replicates identified, or probability of identification (POI), as the basic observed statistic. The statistical procedures proposed for data analysis follow closely those of the probability of detection, and harmonize the statistical concepts and parameters between quantitative and qualitative method validation. Use of POI statistics also harmonizes statistical concepts for botanical, microbiological, toxin, and other analyte identification methods that produce binary results. The POI statistical model provides a tool for graphical representation of response curves for qualitative methods, reporting of descriptive statistics, and application of performance requirements. Single collaborator and multicollaborative study examples are given.

  5. The effect of step thickness on the surface diffusion of a Pt adatom

    International Nuclear Information System (INIS)

    Yang, Jianyu; Deng, Yonghe; Xiao, Gang; Hu, Wangyu; Chen, Shuguang

    2009-01-01

    The diffusion of a single Pt adatom on the Pt(1 1 1) surface with {1 1 1}-faceted steps is studied using a combination of molecular dynamics and the nudged elastic band method. The interatomic interactions are described with the analytic embedded atom method. The simulation indicates that before diffusing across the descending step, the adatom becomes trapped at the step edge and has to overcome an energy barrier to return to the plane's center. The energy barrier for adatom migration to the step edge is almost independent of step thickness. In addition, the step thickness dependence of the diffusion energy barrier for the adatom over descending and ascending step edges is obtained. For a monolayer step, upward diffusion of the adatom to the {1 1 1}-faceted steps is very rare compared with downward diffusion. However, the probability of the adatom ascending the {1 1 1}-faceted steps increases with increasing step thickness. The calculated characteristic temperatures indicate that three-dimensional pyramidal islands on the clean Pt(1 1 1) surface can form at higher temperatures.

  6. On the Pricing of Step-Up Bonds in the European Telecom Sector

    DEFF Research Database (Denmark)

    Lando, David; Mortensen, Allan

    This paper investigates the pricing of step-up bonds, i.e. corporate bonds with provisions stating that the coupon payments increase as the credit rating level of the issuer declines. To assess the risk-neutral rating transition probabilities necessary to price these bonds, we introduce a new calibration method within the reduced-form rating-based model of Jarrow, Lando, and Turnbull (1997). We also treat split ratings and adjust for rating outlook. Step-up bonds have been issued in large amounts in the European telecom sector, and we find that, through most of the sample, step-up bonds issued

  7. Single-step controlled-NOT logic from any exchange interaction

    Science.gov (United States)

    Galiautdinov, Andrei

    2007-11-01

    A self-contained approach to studying the unitary evolution of coupled qubits is introduced, capable of addressing a variety of physical systems described by exchange Hamiltonians containing Rabi terms. The method automatically determines both the Weyl chamber steering trajectory and the accompanying local rotations. Particular attention is paid to the case of anisotropic exchange with tracking controls, which is solved analytically. It is shown that, if the computational subspace is well isolated, any exchange interaction can always generate high-fidelity, single-step controlled-NOT (CNOT) logic, provided that both qubits can be individually manipulated. The results are then applied to superconducting qubit architectures, for which several CNOT gate implementations are identified. The paper concludes with consideration of two CNOT gate designs having high efficiency and operating with no significant leakage to higher-lying noncomputational states.

  8. Characterization of cyclic deformation behaviour of tempered and quenched 42CrMoS4 at single step and variable amplitude loading

    International Nuclear Information System (INIS)

    Schelp, M.; Eifler, D.

    2000-01-01

    Cyclic single step tests were performed on tempered and quenched specimens of the steel 42CrMoS4. Strain, temperature and electrical resistance measurements yielded an empirical prediction of fatigue life according to Coffin, Manson and Morrow. All measured values are based on physical processes and therefore show a strong interaction. A new testing procedure was developed permitting hysteresis measurements to be used for the characterization and description of fatigue behaviour under variable amplitude loading. The basic idea is to combine fatigue tests with any kind of load spectrum with single step tests. This offers the possibility of applying lifetime prediction methods normally used for single step tests to tests with random or service loading. (orig.)

  9. Reduced probability of smoking cessation in men with increasing number of job losses and partnership breakdowns

    DEFF Research Database (Denmark)

    Kriegbaum, Margit; Larsen, Anne Mette; Christensen, Ulla

    2011-01-01

    and to study joint exposure to both. Methods: Birth cohort study of smoking cessation of 6232 Danish men born in 1953 with a follow-up at age 51 (response rate 66.2%). History of unemployment and cohabitation was measured annually using register data. Information on smoking cessation was obtained by a questionnaire. Results: The probability of smoking cessation decreased with the number of job losses (ranging from 1 OR 0.54 (95% CI 0.46 to 0.64) to 3+ OR 0.41 (95% CI 0.30 to 0.55)) and of broken partnerships (ranging from 1 OR 0.74 (95% CI 0.63 to 0.85) to 3+ OR 0.50 (95% CI 0.39 to 0.63)). Furthermore, ...–23 years (OR 0.44, 95% CI 0.37 to 0.52)). Those who never cohabited and experienced one or more job losses had a particularly low chance of smoking cessation (OR 0.19, 95% CI 0.12 to 0.30). Conclusion: The numbers of job losses and of broken partnerships were both inversely associated with probability

  10. s-Step Krylov Subspace Methods as Bottom Solvers for Geometric Multigrid

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Samuel [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Lijewski, Mike [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Almgren, Ann [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Straalen, Brian Van [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Carson, Erin [Univ. of California, Berkeley, CA (United States); Knight, Nicholas [Univ. of California, Berkeley, CA (United States); Demmel, James [Univ. of California, Berkeley, CA (United States)

    2014-08-14

    Geometric multigrid solvers within adaptive mesh refinement (AMR) applications often reach a point where further coarsening of the grid becomes impractical as individual subdomain sizes approach unity. At this point the most common solution is to use a bottom solver, such as BiCGStab, to reduce the residual by a fixed factor at the coarsest level. Each iteration of BiCGStab requires multiple global reductions (MPI collectives). As the number of BiCGStab iterations required for convergence grows with problem size, and the time for each collective operation increases with machine scale, bottom solves in large-scale applications can constitute a significant fraction of the overall multigrid solve time. In this paper, we implement, evaluate, and optimize a communication-avoiding s-step formulation of BiCGStab (CABiCGStab for short) as a high-performance, distributed-memory bottom solver for geometric multigrid solvers. This is the first time s-step Krylov subspace methods have been leveraged to improve multigrid bottom solver performance. We use a synthetic benchmark for detailed analysis and integrate the best implementation into BoxLib in order to evaluate the benefit of an s-step Krylov subspace method on the multigrid solves found in the applications LMC and Nyx on up to 32,768 cores on the Cray XE6 at NERSC. Overall, we see bottom solver improvements of up to 4.2x on synthetic problems and up to 2.7x in real applications. This results in as much as a 1.5x improvement in solver performance in real applications.

  11. A simple method for encapsulating single cells in alginate microspheres allows for direct PCR and whole genome amplification.

    Directory of Open Access Journals (Sweden)

    Saharnaz Bigdeli

    Microdroplets are an effective platform for segregating individual cells and amplifying DNA. However, a key challenge is to recover the contents of individual droplets for downstream analysis. This paper offers a method for embedding cells in alginate microspheres and performing multiple serial operations on the isolated cells. Rhodobacter sphaeroides cells were diluted in alginate polymer and sprayed into microdroplets using a fingertip aerosol sprayer. The encapsulated cells were lysed and subjected either to conventional PCR or to whole genome amplification using either multiple displacement amplification (MDA) or a two-step PCR protocol. Microscopic examination after PCR showed that the lumen of the occupied microspheres contained fluorescently stained DNA product, but multiple displacement amplification with phi29 produced only a small number of polymerase colonies. The 2-step WGA protocol was successful in generating fluorescent material, and quantitative PCR of DNA extracted from aliquots of microspheres suggested that the copy number inside the microspheres was amplified by up to 3 orders of magnitude. Microspheres containing fluorescent material were sorted by a dilution series and screened with a fluorescent plate reader to identify single microspheres. The DNA was extracted from individual isolates, re-amplified with full-length sequencing adapters, and then a single isolate was sequenced using the Illumina MiSeq platform. After filtering the reads, the only sequences that collectively matched a genome in the NCBI nucleotide database belonged to R. sphaeroides. This demonstrated that sequencing-ready DNA could be generated from the contents of a single microsphere without culturing. However, the 2-step WGA strategy showed limitations in terms of low genome coverage and an uneven frequency distribution of reads across the genome. This paper offers a simple method for embedding cells in alginate microspheres and performing PCR on isolated cells.

  12. Initialization and Restart in Stochastic Local Search: Computing a Most Probable Explanation in Bayesian Networks

    Science.gov (United States)

    Mengshoel, Ole J.; Wilkins, David C.; Roth, Dan

    2010-01-01

    For hard computational problems, stochastic local search has proven to be a competitive approach to finding optimal or approximately optimal problem solutions. Two key research questions for stochastic local search algorithms are: Which algorithms are effective for initialization? When should the search process be restarted? In the present work we investigate these research questions in the context of approximate computation of most probable explanations (MPEs) in Bayesian networks (BNs). We introduce a novel approach, based on the Viterbi algorithm, to explanation initialization in BNs. While the Viterbi algorithm works on sequences and trees, our approach works on BNs with arbitrary topologies. We also give a novel formalization of stochastic local search, with focus on initialization and restart, using probability theory and mixture models. Experimentally, we apply our methods to the problem of MPE computation, using a stochastic local search algorithm known as Stochastic Greedy Search. By carefully optimizing both initialization and restart, we reduce the MPE search time for application BNs by several orders of magnitude compared to using uniform at random initialization without restart. On several BNs from applications, the performance of Stochastic Greedy Search is competitive with clique tree clustering, a state-of-the-art exact algorithm used for MPE computation in BNs.
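
    As a toy illustration of stochastic local search for an MPE-style query, the sketch below runs greedy single-variable flips with uniform-random restarts on a tiny hand-made Bayesian network. The network, the flip neighbourhood, and the restart scheme are assumptions for illustration; they do not reproduce the Viterbi-based initialization or the Stochastic Greedy Search algorithm studied in the paper.

```python
# Toy stochastic local search for a most probable explanation (MPE) in a tiny BN:
# A -> B, A -> C, all binary. Evidence: C = 1. Search over assignments to A, B.
# The network and the flip/restart scheme are illustrative assumptions only.
import random

p_a = {0: 0.7, 1: 0.3}
p_b_given_a = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.4, 1: 0.6}}   # p_b_given_a[a][b]
p_c_given_a = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.1, 1: 0.9}}   # p_c_given_a[a][c]
evidence_c = 1

def joint(a, b):
    """Unnormalized probability of a full assignment consistent with the evidence."""
    return p_a[a] * p_b_given_a[a][b] * p_c_given_a[a][evidence_c]

def local_search(restarts=10, flips=20, seed=0):
    rng = random.Random(seed)
    best, best_p = None, -1.0
    for _ in range(restarts):                      # uniform-random restart
        a, b = rng.randint(0, 1), rng.randint(0, 1)
        for _ in range(flips):                     # greedy single-variable flips
            candidates = [(1 - a, b), (a, 1 - b), (a, b)]
            a, b = max(candidates, key=lambda ab: joint(*ab))
        if joint(a, b) > best_p:
            best, best_p = (a, b), joint(a, b)
    return best, best_p

print(local_search())   # exhaustive check: the MPE here is A=1, B=1 (prob 0.162)
```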

  13. Single step, pH induced gold nanoparticle chain formation in lecithin/water system.

    Science.gov (United States)

    Sharma, Damyanti

    2013-07-01

    Gold nanoparticle (AuNP) chains have been formed by a single-step method in a lecithin/water system, where lecithin itself plays the role of a reductant and a template for AuNP chain formation. Two preparative strategies were explored: (1) evaporating lecithin solution with aqueous gold chloride (HAuCl4) at different pHs and (2) dispersing lecithin vesicles in aqueous HAuCl4 solutions of various pHs in the range of 2.5-11.3. In method 1, at an initial pH of 2.5, 20-50 nm AuNPs are found attached to lecithin vesicles. When the pH is raised to 5.5, there are no vesicles present and 20 nm monodisperse particles are found aggregating. Chain formation of fine nanoparticles (3-5 nm) is observed from neutral to basic pH, between 6.5 and 10.3. The chains formed are hundreds of nanometers to micrometers long and are usually 2-3 nanoparticles wide. On further increasing the pH to 11.3, particles form disk-like or raft-like structures. When method 2 was used, only a little chain formation was observed. Most of the nanoparticles formed were found either sitting together as raft-like structures or scattered on lecithin structures. Copyright © 2013 Elsevier B.V. All rights reserved.

  14. A reliable method for the counting and control of single ions for single-dopant controlled devices

    International Nuclear Information System (INIS)

    Shinada, T; Kurosawa, T; Nakayama, H; Zhu, Y; Hori, M; Ohdomari, I

    2008-01-01

    By 2016, transistor device size will be just 10 nm. However, a transistor that is doped at a typical concentration of 10^18 atoms cm^-3 has only one dopant atom in the active channel region. Therefore, it can be predicted that conventional doping methods such as ion implantation and thermal diffusion will not be available ten years from now. We have been developing a single-ion implantation (SII) method that enables us to implant dopant ions one-by-one into semiconductors until the desired number is reached. Here we report a simple but reliable method to control the number of single-dopant atoms by detecting the change in drain current induced by single-ion implantation. The drain current decreases in a stepwise fashion as a result of the clusters of displaced Si atoms created by every single-ion incidence. This result indicates that the single-ion detection method we have developed is capable of detecting single-ion incidence with 100% efficiency. Our method potentially could pave the way to future single-atom devices, including a solid-state quantum computer

  15. Step by step parallel programming method for molecular dynamics code

    International Nuclear Information System (INIS)

    Orii, Shigeo; Ohta, Toshio

    1996-07-01

    Parallel programming for a numerical simulation program of molecular dynamics is carried out with a step-by-step programming technique using the two-phase method. As a result, within a certain range of computing parameters, parallel performance is obtained by using a level of parallel programming that decomposes the calculation across processors according to the indices of do-loops, on the vector parallel computer VPP500 and the scalar parallel computer Paragon. It is also found that the VPP500 shows parallel performance over a wider range of computing parameters. The reason is that the time cost of the program parts that cannot be reduced by do-loop-level parallelization can be reduced to a negligible level by vectorization. The time-consuming parts of the program are then concentrated in fewer sections, which can be accelerated by do-loop-level parallelization. This report shows the step-by-step parallel programming method and the parallel performance of the molecular dynamics code on the VPP500 and Paragon. (author)

  16. Probability evolution method for exit location distribution

    Science.gov (United States)

    Zhu, Jinjie; Chen, Zhen; Liu, Xianbin

    2018-03-01

    The exit problem in the framework of the large deviation theory has been a hot topic in the past few decades. The most probable escape path in the weak-noise limit has been clarified by the Freidlin-Wentzell action functional. However, noise in real physical systems cannot be arbitrarily small, while noise with finite strength may induce nontrivial phenomena, such as noise-induced shift and noise-induced saddle-point avoidance. Traditional Monte Carlo simulation of noise-induced escape takes an exponentially long time as the noise approaches zero. The majority of the time is wasted on uninteresting wandering around the attractors. In this paper, a new method is proposed to decrease the escape simulation time by an exponentially large factor by introducing a series of interfaces and applying reinjection on them. This method can be used to calculate the exit location distribution. It is verified by examining two classical examples and is compared with theoretical predictions. The results show that the method performs well for weak noise, while it may induce certain deviations for large noise. Finally, some possible ways to improve our method are discussed.

  17. A General Probability Formula of the Number of Location Areas' Boundaries Crossed by a Mobile Between Two Successive Call Arrivals

    Institute of Scientific and Technical Information of China (English)

    Yi-Hua Zhu; Ding-Hua Shi; Yong Xiong; Ji Gao; He-Zhi Luo

    2004-01-01

    Mobility management is a challenging topic in mobile computing environment. Studying the situation of mobiles crossing the boundaries of location areas is significant for evaluating the costs and performances of various location management strategies. Hitherto, several formulae were derived to describe the probability of the number of location areas' boundaries crossed by a mobile. Some of them were widely used in analyzing the costs and performances of mobility management strategies. Utilizing the density evolution method of vector Markov processes, we propose a general probability formula of the number of location areas' boundaries crossed by a mobile between two successive calls. Fortunately, several widely-used formulae are special cases of the proposed formula.

  18. Deriving the probability of a linear opinion pooling method being superior to a set of alternatives

    International Nuclear Information System (INIS)

    Bolger, Donnacha; Houlding, Brett

    2017-01-01

    Linear opinion pools are a common method for combining a set of distinct opinions into a single succinct opinion, often to be used in a decision making task. In this paper we consider a method, termed the Plug-in approach, for determining the weights to be assigned in this linear pool, in a manner that can be deemed as rational in some sense, while incorporating multiple forms of learning over time into its process. The environment that we consider is one in which every source in the pool is herself a decision maker (DM), in contrast to the more common setting in which expert judgments are amalgamated for use by a single DM. We discuss a simulation study that was conducted to show the merits of our technique, and demonstrate how theoretical probabilistic arguments can be used to exactly quantify the probability of this technique being superior (in terms of a probability density metric) to a set of alternatives. Illustrations are given of simulated proportions converging to these true probabilities in a range of commonly used distributional cases. - Highlights: • A novel context for combination of expert opinion is provided. • A dynamic reliability assessment method is stated, justified by properties and a data study. • The theoretical grounding underlying the data-driven justification is explored. • We conclude with areas for expansion and further relevant research.
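
    For concreteness, a linear opinion pool is simply a weight-averaged mixture of the sources' distributions. The sketch below shows this for three hypothetical normal opinions with fixed weights; the Plug-in weighting scheme itself is not reproduced here.

```python
# Linear opinion pool: the pooled density is a weighted mixture of the experts'
# densities. Experts' distributions and weights below are illustrative assumptions.
import numpy as np
from scipy import stats

experts = [stats.norm(10, 2), stats.norm(12, 1), stats.norm(9, 3)]
weights = np.array([0.5, 0.3, 0.2])          # must sum to 1

def pooled_pdf(x):
    return sum(w * e.pdf(x) for w, e in zip(weights, experts))

x = np.linspace(0, 20, 5)
print("pooled density at", x, "=", np.round(pooled_pdf(x), 4))
print("pooled mean =", sum(w * e.mean() for w, e in zip(weights, experts)))
```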

  19. Perturbed Strong Stability Preserving Time-Stepping Methods For Hyperbolic PDEs

    KAUST Repository

    Hadjimichael, Yiannis

    2017-09-30

    A plethora of physical phenomena are modelled by hyperbolic partial differential equations, for which the exact solution is usually not known. Numerical methods are employed to approximate the solution to hyperbolic problems; however, in many cases it is difficult to satisfy certain physical properties while maintaining high order of accuracy. In this thesis, we develop high-order time-stepping methods that are capable of maintaining stability constraints of the solution, when coupled with suitable spatial discretizations. Such methods are called strong stability preserving (SSP) time integrators, and we mainly focus on perturbed methods that use both upwind- and downwind-biased spatial discretizations. Firstly, we introduce a new family of third-order implicit Runge–Kutta methods with arbitrarily large SSP coefficient. We investigate the stability and accuracy of these methods and we show that they perform well on hyperbolic problems with large CFL numbers. Moreover, we extend the analysis of SSP linear multistep methods to semi-discretized problems for which different terms on the right-hand side of the initial value problem satisfy different forward Euler (or circle) conditions. Optimal perturbed and additive monotonicity-preserving linear multistep methods are studied in the context of such problems. Optimal perturbed methods attain augmented monotonicity-preserving step sizes when the different forward Euler conditions are taken into account. On the other hand, we show that optimal SSP additive methods achieve a monotonicity-preserving step-size restriction no better than that of the corresponding non-additive SSP linear multistep methods. Furthermore, we develop the first SSP linear multistep methods of order two and three with variable step size, and study their optimality. We describe an optimal step-size strategy and demonstrate the effectiveness of these methods on various one- and multi-dimensional problems. Finally, we establish necessary conditions

  20. Primitive polynomials selection method for pseudo-random number generator

    Science.gov (United States)

    Anikin, I. V.; Alnajjar, Kh

    2018-01-01

    In this paper we suggest a method for selecting primitive polynomials of a special type. This kind of polynomial can be efficiently used as the characteristic polynomial of a linear feedback shift register (LFSR) in pseudo-random number generators. The proposed method consists of two basic steps: finding minimum-cost irreducible polynomials of the desired degree and applying primitivity tests to obtain the primitive ones. Finally, two primitive polynomials found by the proposed method were used in the pseudo-random number generator based on fuzzy logic (FRNG) previously suggested by the authors. The sequences generated by the new version of the FRNG have low correlation magnitude, high linear complexity, lower power consumption, are better balanced and have better statistical properties.
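
    For context, the sketch below shows how a primitive characteristic polynomial drives a maximal-length LFSR. It uses the classic primitive polynomial x^4 + x + 1 rather than a polynomial selected by the proposed method, and it is not the authors' fuzzy-logic FRNG.

```python
# Fibonacci LFSR whose recurrence s[n+4] = s[n+1] XOR s[n] corresponds to the
# primitive polynomial x^4 + x + 1 over GF(2), so any nonzero 4-bit seed cycles
# through all 2^4 - 1 = 15 states (maximal period).
def lfsr_stream(state=0b1001, taps=(1, 0), nbits=4, steps=30):
    out = []
    for _ in range(steps):
        out.append(state & 1)                 # output the oldest bit
        feedback = 0
        for t in taps:                        # XOR of the tap bits
            feedback ^= (state >> t) & 1
        state = (state >> 1) | (feedback << (nbits - 1))
    return out

bits = lfsr_stream()
print("output bits:", bits)
print("period-15 check:", bits[:15] == bits[15:30])
```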

  1. Development of a method to extract and purify target compounds from medicinal plants in a single step: online hyphenation of expanded bed adsorption chromatography and countercurrent chromatography.

    Science.gov (United States)

    Li, Yang; Wang, Nan; Zhang, Min; Ito, Yoichiro; Zhang, Hongyang; Wang, Yuerong; Guo, Xin; Hu, Ping

    2014-04-01

    Pure compounds extracted and purified from natural sources are crucial to lead discovery and drug screening. This study presents a novel two-dimensional hyphenation of expanded bed adsorption chromatography (EBAC) and high-speed countercurrent chromatography (HSCCC) for extraction and purification of target compounds from medicinal plants in a single step. The EBAC and HSCCC were hyphenated via a six-port injection valve as an interface. Fractionation of ingredients of Salvia miltiorrhiza and Rhizoma coptidis was performed on the hyphenated system to verify its efficacy. Two compounds were harvested from Salvia miltiorrhiza: one was 52.9 mg of salvianolic acid B with over 95% purity and the other was 2.1 mg of rosmarinic acid with 74% purity. Another two components were purified from Rhizoma coptidis: one was 4.6 mg of coptisine with 98% purity and one was 4.1 mg of berberine with 82% purity. The processing time was nearly 50% that of the multistep method. The results indicate that the present method is a rapid and green way to harvest targets from medicinal plants in a single step.

  2. Virtual substitution scan via single-step free energy perturbation.

    Science.gov (United States)

    Chiang, Ying-Chih; Wang, Yi

    2016-02-05

    With the rapid expansion of our computing power, molecular dynamics (MD) simulations ranging from hundreds of nanoseconds to microseconds or even milliseconds have become increasingly common. The majority of these long trajectories are obtained from plain (vanilla) MD simulations, where no enhanced sampling or free energy calculation method is employed. To promote the 'recycling' of these trajectories, we developed the Virtual Substitution Scan (VSS) toolkit as a plugin of the open-source visualization and analysis software VMD. Based on the single-step free energy perturbation (sFEP) method, VSS enables the user to post-process a vanilla MD trajectory for a fast free energy scan of substituting aryl hydrogens by small functional groups. Dihedrals of the functional groups are sampled explicitly in VSS, which improves the performance of the calculation and is found particularly important for certain groups. As a proof-of-concept demonstration, we employ VSS to compute the solvation free energy change upon substituting the hydrogen of a benzene molecule by 12 small functional groups frequently considered in lead optimization. Additionally, VSS is used to compute the relative binding free energy of four selected ligands of the T4 lysozyme. Overall, the computational cost of VSS is only a fraction of the corresponding multi-step FEP (mFEP) calculation, while its results agree reasonably well with those of mFEP, indicating that VSS offers a promising tool for rapid free energy scan of small functional group substitutions. This article is protected by copyright. All rights reserved. © 2016 Wiley Periodicals, Inc.
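
    The single-step FEP estimator underlying this kind of trajectory post-processing is the standard one-sided exponential-average (Zwanzig) formula; in the notation below, U_0 and U_1 are the potential energies of the simulated reference state and the substituted state, and the average is taken over the stored vanilla trajectory of state 0 (the notation is ours, not taken from the article):

```latex
\Delta F_{0 \rightarrow 1} \;=\; -\,k_{\mathrm{B}} T \,
  \ln \left\langle \exp\!\left[ -\frac{U_1(\mathbf{x}) - U_0(\mathbf{x})}{k_{\mathrm{B}} T} \right] \right\rangle_{0}
```

    Because the average reuses configurations already stored in the vanilla trajectory, each candidate substitution costs essentially one re-evaluation pass over the trajectory, which is consistent with the reported fraction-of-mFEP computational cost.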

  3. An examination of the number of required apertures for step-and-shoot IMRT

    International Nuclear Information System (INIS)

    Jiang, Z; Earl, M A; Zhang, G W; Yu, C X; Shepard, D M

    2005-01-01

    We have examined the degree to which step-and-shoot IMRT treatment plans can be simplified (using a small number of apertures) without sacrificing the dosimetric quality of the plans. A key element of this study was the use of direct aperture optimization (DAO), an inverse planning technique where all of the multi-leaf collimator constraints are incorporated into the optimization. For seven cases (1 phantom, 1 prostate, 3 head-and-neck and 2 lung), DAO was used to perform a series of optimizations where the number of apertures per beam direction varied from 1 to 15. In this work, we attempt to provide general guidelines for how many apertures per beam direction are sufficient for various clinical cases using DAO. Analysis of the optimized treatment plans reveals that for most cases, only modest improvements in the objective function and the corresponding DVHs are seen beyond 5 apertures per beam direction. However, for more complex cases, some dosimetric gain can be achieved by increasing the number of apertures per beam direction beyond 5. Even in these cases, however, only modest improvements are observed beyond 9 apertures per beam direction. In our clinical experience, 38 out of the first 40 patients treated using IMRT plans produced using DAO were treated with 9 or fewer apertures per beam direction. The results indicate that many step-and-shoot IMRT treatment plans delivered today are more complex than necessary and can be simplified without sacrificing plan quality

  4. On the shake-off probability for atomic systems

    Energy Technology Data Exchange (ETDEWEB)

    Santos, A.C.F., E-mail: toniufrj@gmail.com [Instituto de Física, Universidade Federal do Rio de Janeiro, P.O. Box 68528, 21941-972 Rio de Janeiro, RJ (Brazil); Almeida, D.P. [Departamento de Física, Universidade Federal de Santa Catarina, 88040-900 Florianópolis (Brazil)

    2016-07-15

    Highlights: • The scope is to find the relationship among SO probabilities, Z and electron density. • A scaling law is suggested, allowing us to find the SO probabilities for atoms. • SO probabilities have been scaled as a function of target Z and polarizability. - Abstract: The main focus of this work has been the relationship between shake-off probabilities, target atomic number and electron density. By comparing the saturation values of measured double-to-single photoionization ratios from the literature, a simple scaling law has been found, which allows us to predict the shake-off probabilities for several elements up to Z = 54 within a factor of 2. The electron shake-off probabilities accompanying valence shell photoionization have been scaled as a function of the target atomic number, Z, and polarizability, α. This behavior is in qualitative agreement with the experimental results.

  5. An enhanced unified uncertainty analysis approach based on first order reliability method with single-level optimization

    International Nuclear Information System (INIS)

    Yao, Wen; Chen, Xiaoqian; Huang, Yiyong; Tooren, Michel van

    2013-01-01

    In engineering, there exist both aleatory uncertainties due to the inherent variation of the physical system and its operational environment, and epistemic uncertainties due to lack of knowledge, which can be reduced with the collection of more data. To analyze the uncertain distribution of the system performance under both aleatory and epistemic uncertainties, combined probability and evidence theory can be employed to quantify the compound effects of the mixed uncertainties. The existing First Order Reliability Method (FORM) based Unified Uncertainty Analysis (UUA) approach nests the optimization-based interval analysis in the improved Hasofer–Lind–Rackwitz–Fiessler (iHLRF) algorithm based Most Probable Point (MPP) searching procedure, which is computationally prohibitive for complex systems and may encounter convergence problems as well. Therefore, in this paper it is proposed to use general optimization solvers to search for the MPP in the outer loop and then reformulate the double-loop optimization problem into an equivalent single-level optimization (SLO) problem, so as to simplify the uncertainty analysis process, improve the robustness of the algorithm, and alleviate the computational complexity. The effectiveness and efficiency of the proposed method are demonstrated with two numerical examples and one practical satellite conceptual design problem. -- Highlights: ► Uncertainty analysis under mixed aleatory and epistemic uncertainties is studied. ► A unified uncertainty analysis method is proposed with combined probability and evidence theory. ► The traditional nested analysis method is converted to single-level optimization for efficiency. ► The effectiveness and efficiency of the proposed method are demonstrated with three examples

  6. Single step radiolytic synthesis of iridium nanoparticles onto graphene oxide

    International Nuclear Information System (INIS)

    Rojas, J.V.; Molina Higgins, M.C.; Toro Gonzalez, M.; Castano, C.E.

    2015-01-01

    Graphical abstract: - Highlights: • Ir nanoparticles were synthesized through a single step gamma irradiation process. • Homogeneously distributed Ir nanoparticles on graphene oxide are ∼2.3 nm in size. • Ir−O bonds evidenced the interaction of the nanoparticles with the support. - Abstract: In this work a new approach to synthesize iridium nanoparticles on reduced graphene oxide is presented. The nanoparticles were directly deposited and grown on the surface of the carbon-based support using a single step reduction method through gamma irradiation. In this process, an aqueous isopropanol solution containing the iridium precursor, graphene oxide, and sodium dodecyl sulfate was initially prepared and sonicated thoroughly to obtain a homogeneous dispersion. The samples were irradiated with gamma rays with energies of 1.17 and 1.33 MeV emitted from the spontaneous decay of the 60Co irradiator. The interaction of gamma rays with water in the presence of isopropanol generates highly reducing species homogeneously distributed in the solution that can reduce the Ir precursor down to a zero valence state. An absorbed dose of 60 kGy was used, which according to the yield of reducing species is sufficient to reduce the total amount of precursor present in the solution. This novel approach leads to the formation of 2.3 ± 0.5 nm Ir nanoparticles distributed along the surface of the support. The oxygenated functionalities of graphene oxide served as nucleation sites for the formation of Ir nuclei and their subsequent growth. XPS results revealed that the interaction of Ir with the support occurs through Ir−O bonds.

  7. The single most important education reform in developing country

    Science.gov (United States)

    Orija, O.

    2007-05-01

    I decided on teaching as a peer educator and working with NGOs in my country; as a method, it needs to consider students' background knowledge, environment, and learning goals, as well as the standardized curriculum determined by their school district. It means strengthening relationships among students and adults and improving the engagement, alignment and rigor of teaching and learning in every classroom, every day. The single most important reform achievement is the Rural School and Community Trust, a national non-profit organization addressing the crucial relationship between good schools and thriving communities. Our mission is to help rural schools and communities get better together. Working in some of the poorest, most challenging places, the Rural Trust involves young people in learning linked to their communities, improves the quality of teaching and school leadership, and advocates in a variety of ways for appropriate state educational policies, including the key issue of equity, and for a national agenda (in which I serve as a peer educator) where rural people and their issues are visible and credible for rural schools.

  8. Convergence of Transition Probability Matrix in CLV-Markov Models

    Science.gov (United States)

    Permana, D.; Pasaribu, U. S.; Indratno, S. W.; Suprayogi, S.

    2018-04-01

    A transition probability matrix is an arrangement of the transition probabilities from one state to another in a Markov chain model (MCM). One interesting aspect of the MCM is its behavior far into the future, which is derived from a property of the n-step transition probability matrix and is called the convergence of the n-step transition matrix as n goes to infinity. Mathematically, the convergence of the transition probability matrix amounts to finding the limit of the matrix raised to the power n as n goes to infinity. The convergent form of the transition probability matrix is very interesting, as it brings the matrix to its stationary form, which is useful for predicting the probability of transitions between states in the future. The method usually used to find the convergence of the transition probability matrix is the limiting distribution. In this paper, the convergence of the transition probability matrix is instead obtained using a simple concept of linear algebra, namely by diagonalizing the matrix. This method has a higher level of complexity because it has to perform the diagonalization of the matrix, but it has the advantage of providing a general form for the nth power of the transition probability matrix, which is useful for examining the transition matrix before it becomes stationary. Example cases are taken from a CLV model using the MCM, called the CLV-Markov model. Several transition probability matrices from this model are taken to find their convergence forms. The result is that the convergence of the transition probability matrix obtained through diagonalization agrees with the convergence obtained with the commonly used limiting distribution method.
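
    A minimal numerical illustration of the diagonalization argument is given below for an invented 2-state transition matrix: P is decomposed as V diag(eigenvalues) V^{-1}, so P^n = V diag(eigenvalues^n) V^{-1}, and as n grows the rows converge to the stationary distribution. The matrix is an assumption for illustration, not one of the CLV-Markov matrices from the paper.

```python
# Convergence of P^n via diagonalization: P = V diag(eigs) V^{-1}, hence
# P^n = V diag(eigs**n) V^{-1}. The 2-state matrix is an illustrative example.
import numpy as np

P = np.array([[0.9, 0.1],
              [0.4, 0.6]])          # rows sum to 1 (assumed transition matrix)

eigvals, V = np.linalg.eig(P)
V_inv = np.linalg.inv(V)

def P_power(n):
    return V @ np.diag(eigvals ** n) @ V_inv

for n in (1, 5, 50):
    print(f"P^{n} =\n{np.real_if_close(P_power(n)).round(4)}")

# Stationary distribution for comparison: pi P = pi gives pi = (0.8, 0.2) here,
# and both rows of P^50 are numerically equal to it.
print("stationary distribution:", np.array([0.8, 0.2]))
```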

  9. Inverse probability weighting in STI/HIV prevention research: methods for evaluating social and community interventions

    Science.gov (United States)

    Lippman, Sheri A.; Shade, Starley B.; Hubbard, Alan E.

    2011-01-01

    Background: Intervention effects estimated from non-randomized intervention studies are plagued by biases, yet social or structural intervention studies are rarely randomized. There are underutilized statistical methods available to mitigate biases due to self-selection, missing data, and confounding in longitudinal, observational data, permitting estimation of causal effects. We demonstrate the use of Inverse Probability Weighting (IPW) to evaluate the effect of participating in a combined clinical and social STI/HIV prevention intervention on reduction of incident chlamydia and gonorrhea infections among sex workers in Brazil. Methods: We demonstrate the step-by-step use of IPW, including presentation of the theoretical background, data setup, model selection for weighting, application of weights, estimation of effects using varied modeling procedures, and discussion of assumptions for use of IPW. Results: 420 sex workers contributed data on 840 incident chlamydia and gonorrhea infections. Participators were compared to non-participators following application of inverse probability weights to correct for differences in covariate patterns between exposed and unexposed participants and between those who remained in the intervention and those who were lost to follow-up. Estimators using four model selection procedures provided estimates of intervention effect between odds ratio (OR) 0.43 (95% CI: 0.22-0.85) and 0.53 (95% CI: 0.26-1.1). Conclusions: After correcting for selection bias, loss to follow-up, and confounding, our analysis suggests a protective effect of participating in the Encontros intervention. Evaluations of behavioral, social, and multi-level interventions to prevent STI can benefit from the introduction of weighting methods such as IPW. PMID:20375927
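
    A compact sketch of the IPW workflow on simulated data is shown below: simulate a confounder, exposure, and outcome; fit a propensity model; then weight each observation by the inverse of the estimated probability of its observed exposure. The data-generating values and the use of scikit-learn's logistic regression are illustrative assumptions, not the Encontros analysis.

```python
# Inverse probability weighting on simulated data: the confounder-induced bias in
# the naive comparison shrinks once observations are weighted by 1 / P(observed exposure).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000
confounder = rng.normal(size=n)
p_exposed = 1 / (1 + np.exp(-0.8 * confounder))               # participation depends on confounder
exposed = rng.binomial(1, p_exposed)
outcome = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 0.5 * confounder - 0.7 * exposed))))

# Propensity model and inverse probability weights.
X = confounder.reshape(-1, 1)
ps = LogisticRegression().fit(X, exposed).predict_proba(X)[:, 1]
weights = np.where(exposed == 1, 1 / ps, 1 / (1 - ps))

naive = outcome[exposed == 1].mean() - outcome[exposed == 0].mean()
ipw = (np.average(outcome[exposed == 1], weights=weights[exposed == 1])
       - np.average(outcome[exposed == 0], weights=weights[exposed == 0]))
print(f"naive risk difference: {naive:.3f}")
print(f"IPW-adjusted risk difference: {ipw:.3f}  (true simulated effect is protective)")
```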

  10. Influence of Coloured Correlated Noises on Probability Distribution and Mean of Tumour Cell Number in the Logistic Growth Model

    Institute of Scientific and Technical Information of China (English)

    HAN Li-Bo; GONG Xiao-Long; CAO Li; WU Da-Jin

    2007-01-01

    An approximate Fokker-Planck equation for the logistic growth model driven by coloured correlated noises is derived by applying the Novikov theorem and the Fox approximation. The steady-state probability distribution (SPD) and the mean of the tumour cell number are analysed. It is found that the SPD is a single-extremum configuration when the degree of correlation between the multiplicative and additive noises, λ, lies in -1 < λ ≤ 0, and can have double extrema for 0 < λ < 1. A configuration transition occurs as the noise parameters vary. A minimum appears in the curve of the mean of the steady-state tumour cell number, 〈x〉, versus λ. The position and the value of the minimum are controlled by the noise correlation times.

  11. Considerations for the independent reaction times and step-by-step methods for radiation chemistry simulations

    Science.gov (United States)

    Plante, Ianik; Devroye, Luc

    2017-10-01

    Ionizing radiation interacts with the water molecules of the tissues mostly by ionizations and excitations, which result in the formation of the radiation track structure and the creation of radiolytic species such as H·, ·OH, H2, H2O2, and e-aq. After their creation, these species diffuse and may chemically react with the neighboring species and with the molecules of the medium. Therefore radiation chemistry is of great importance in radiation biology. As the chemical species are not distributed homogeneously, the use of conventional models of homogeneous reactions cannot completely describe the reaction kinetics of the particles. Actually, many simulations of radiation chemistry are done using the Independent Reaction Time (IRT) method, which is a very fast technique to calculate radiochemical yields but which does not calculate the positions of the radiolytic species as a function of time. Step-by-step (SBS) methods, which are able to provide such information, have been used only sparsely because these are time-consuming in terms of calculation. Recent improvements in computer performance now allow the regular use of the SBS method in radiation chemistry. The SBS and IRT methods are both based on the Green's functions of the diffusion equation (GFDE). In this paper, several sampling algorithms of the GFDE and for the IRT method are presented. We show that the IRT and SBS methods are exactly equivalent for two-particle systems for diffusion and partially diffusion-controlled reactions between non-interacting particles. We also show that the results obtained with the SBS simulation method with periodic boundary conditions are in agreement with the predictions by classical reaction kinetics theory, which is an important step towards using this method for modelling of biochemical networks and metabolic pathways involved in oxidative stress. Finally, the first simulation results obtained with the code RITRACKS (Relativistic Ion Tracks) are presented.

  12. Evaluation of accuracy in implant site preparation performed in single- or multi-step drilling procedures.

    Science.gov (United States)

    Marheineke, Nadine; Scherer, Uta; Rücker, Martin; von See, Constantin; Rahlf, Björn; Gellrich, Nils-Claudius; Stoetzer, Marcus

    2018-06-01

    Dental implant failure and insufficient osseointegration are proven results of mechanical and thermal damage during the surgery process. We herein performed a comparative study of a less invasive single-step drilling preparation protocol and a conventional multiple drilling sequence. Accuracy of drilling holes was precisely analyzed, and the influence of different levels of expertise of the handlers and of additional drill template guidance was evaluated. Six experimental groups, deployed in an osseous study model, represented template-guided and freehand drilling actions in a stepwise drilling procedure in comparison to a single-drill protocol. Each experimental condition was studied through the drilling actions of three persons without surgical knowledge and three highly experienced oral surgeons. Drilling actions were performed and diameters were recorded with a precision measuring instrument. Less experienced operators were able to significantly increase the drilling accuracy using a guiding template, especially when multi-step preparations were performed. Improved accuracy without template guidance was observed when experienced operators executed the single-step rather than the multi-step technique. Single-step drilling protocols have been shown to produce more accurate results than multi-step procedures. The outcome of any protocol can be further improved by use of guiding templates. Operator experience can be a contributing factor. Single-step preparations are less invasive and promote osseointegration. Even highly experienced surgeons achieve higher levels of accuracy by combining this technique with template guidance. Template guidance thereby reduces hands-on time and side effects during surgery and leads to a more predictable clinical diameter.

  13. Model uncertainty: Probabilities for models?

    International Nuclear Information System (INIS)

    Winkler, R.L.

    1994-01-01

    Like any other type of uncertainty, model uncertainty should be treated in terms of probabilities. The question is how to do this. The most commonly-used approach has a drawback related to the interpretation of the probabilities assigned to the models. If we step back and look at the big picture, asking what the appropriate focus of the model uncertainty question should be in the context of risk and decision analysis, we see that a different probabilistic approach makes more sense, although it raises some implementation questions. Current work that is underway to address these questions looks very promising.

  14. Avoid the tsunami of the Dirac sea in the imaginary time step method

    International Nuclear Information System (INIS)

    Zhang, Ying; Liang, Haozhao; Meng, Jie

    2010-01-01

    The discrete single-particle spectra in both the Fermi and Dirac sea have been calculated by the imaginary time step (ITS) method for the Schroedinger-like equation after avoiding the "tsunami" of the Dirac sea, i.e. the diving behavior of the single-particle level into the Dirac sea in the direct application of the ITS method for the Dirac equation. It is found that by the transform from the Dirac equation to the Schroedinger-like equation, the single-particle spectra, which extend from the positive to the negative infinity, can be separately obtained by the ITS evolution in either the Fermi sea or the Dirac sea. Identical results with those in the conventional shooting method have been obtained via the ITS evolution for the equivalent Schroedinger-like equation, which demonstrates the feasibility, practicality and reliability of the present algorithm and dispels the doubts on the ITS method in the relativistic system. (author)
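
    For readers unfamiliar with ITS evolution, a minimal non-relativistic sketch is given below: imaginary-time propagation of a 1D harmonic oscillator, which relaxes an arbitrary trial state onto the ground state. It only illustrates the generic algorithm, not the Dirac/Schroedinger-like machinery of the paper.

```python
import numpy as np

# Imaginary-time step (ITS) evolution: psi(tau+dt) ~ exp(-H dt) psi(tau);
# repeated stepping plus renormalization relaxes any trial state onto the
# lowest eigenstate.  (Toy 1D harmonic oscillator, hbar = m = omega = 1.)
hbar = m = omega = 1.0
N, L = 400, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
V = 0.5 * m * omega**2 * x**2

# Finite-difference Hamiltonian H = -hbar^2/(2m) d^2/dx^2 + V.
lap = (np.diag(np.full(N - 1, 1.0), -1) - 2 * np.eye(N)
       + np.diag(np.full(N - 1, 1.0), 1)) / dx**2
H = -hbar**2 / (2 * m) * lap + np.diag(V)

psi = np.exp(-(x - 1.0)**2)             # arbitrary trial wave function
dt = 1e-3
for _ in range(20000):
    psi = psi - dt * (H @ psi)           # first-order Euler step in imaginary time
    psi /= np.sqrt(np.sum(psi**2) * dx)  # renormalize after every step

E0 = np.sum(psi * (H @ psi)) * dx
print(f"ITS ground-state energy: {E0:.4f}  (exact: {0.5 * hbar * omega})")
```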

  15. Rapid detection of coliforms in drinking water of Arak city using multiplex PCR method in comparison with the standard method of culture (Most Probable Number)

    Directory of Open Access Journals (Sweden)

    Dehghan fatemeh

    2014-05-01

    Conclusions: The multiplex PCR method, with its shortened operation time, was used for the simultaneous detection of total coliforms and Escherichia coli in the distribution system of Arak city. It is recommended at least as an initial screening test, after which the positive samples could be randomly tested by MPN.

  16. Incorporation of causative quantitative trait nucleotides in single-step GBLUP.

    Science.gov (United States)

    Fragomeni, Breno O; Lourenco, Daniela A L; Masuda, Yutaka; Legarra, Andres; Misztal, Ignacy

    2017-07-26

    Much effort is put into identifying causative quantitative trait nucleotides (QTN) in animal breeding, empowered by the availability of dense single nucleotide polymorphism (SNP) information. Genomic selection using traditional SNP information is easily implemented for any number of genotyped individuals using single-step genomic best linear unbiased predictor (ssGBLUP) with the algorithm for proven and young (APY). Our aim was to investigate whether ssGBLUP is useful for genomic prediction when some or all QTN are known. Simulations included 180,000 animals across 11 generations. Phenotypes were available for all animals in generations 6 to 10. Genotypes for 60,000 SNPs across 10 chromosomes were available for 29,000 individuals. The genetic variance was fully accounted for by 100 or 1000 biallelic QTN. Raw genomic relationship matrices (GRM) were computed from (a) unweighted SNPs, (b) unweighted SNPs and causative QTN, (c) SNPs and causative QTN weighted with results obtained with genome-wide association studies, (d) unweighted SNPs and causative QTN with simulated weights, (e) only unweighted causative QTN, (f-h) as in (b-d) but using only the top 10% causative QTN, and (i) using only causative QTN with simulated weight. Predictions were computed by pedigree-based BLUP (PBLUP) and ssGBLUP. Raw GRM were blended with 1 or 5% of the numerator relationship matrix, or 1% of the identity matrix. Inverses of GRM were obtained directly or with APY. Accuracy of breeding values for 5000 genotyped animals in the last generation with PBLUP was 0.32, and for ssGBLUP it increased to 0.49 with an unweighted GRM, 0.53 after adding unweighted QTN, 0.63 when QTN weights were estimated, and 0.89 when QTN weights were based on true effects known from the simulation. When the GRM was constructed from causative QTN only, accuracy was 0.95 and 0.99 with blending at 5 and 1%, respectively. Accuracies simulating 1000 QTN were generally lower, with a similar trend. Accuracies using the

  17. Space Situational Awareness of Large Numbers of Payloads From a Single Deployment

    Science.gov (United States)

    Segerman, A.; Byers, J.; Emmert, J.; Nicholas, A.

    2014-09-01

    The nearly simultaneous deployment of a large number of payloads from a single vehicle presents a new challenge for space object catalog maintenance and space situational awareness (SSA). Following two cubesat deployments last November, it took five weeks to catalog the resulting 64 orbits. The upcoming Kicksat mission will present an even greater SSA challenge, with its deployment of 128 chip-sized picosats. Although all of these deployments are in short-lived orbits, future deployments will inevitably occur at higher altitudes, with a longer term threat of collision with active spacecraft. With such deployments, individual scientific payload operators require rapid precise knowledge of their satellites' locations. Following the first November launch, the cataloguing did not initially associate a payload with each orbit, leaving this to the satellite operators. For short duration missions, the time required to identify an experiment's specific orbit may easily be a large fraction of the spacecraft's lifetime. For a Kicksat-type deployment, present tracking cannot collect enough observations to catalog each small object. The current approach is to treat the chip cloud as a single catalog object. However, the cloud dissipates into multiple subclouds and, ultimately, tiny groups of untrackable chips. One response to this challenge may be to mandate installation of a transponder on each spacecraft. Directional transponder transmission detections could be used as angle observations for orbit cataloguing. Of course, such an approach would only be employable with cooperative spacecraft. In other cases, a probabilistic association approach may be useful, with the goal being to establish the probability of an element being at a given point in space. This would permit more reliable assessment of the probability of collision of active spacecraft with any cloud element. This paper surveys the cataloguing challenges presented by large scale deployments of small spacecraft

  18. An Adjusted Probability Method for the Identification of Sociometric Status in Classrooms

    Directory of Open Access Journals (Sweden)

    Francisco J. García Bacete

    2017-10-01

    Objective: The aim of this study was to test the performance of an adjusted probability method for sociometric classification proposed by García Bacete (GB) in comparison with two previous methods. Specific goals were to examine the overall agreement between methods, the behavioral correlates of each sociometric group, the sources of discrepant classifications between methods, the behavioral profiles of discrepant and consistent cases between methods, and age differences. Method: We compared the GB adjusted probability method with the standard score model proposed by Coie and Dodge (CD) and the probability score model proposed by Newcomb and Bukowski (NB). The GB method is an adaptation of the NB method: cutoff scores are derived from the distribution of raw liked-most and liked-least scores in each classroom instead of the fixed and absolute scores used by the NB method, and the criteria for neglected status are also modified. Participants were 569 children (45% girls) from 23 elementary school classrooms (13 in Grades 1-2, 10 in Grades 5-6). Results: We found agreement as well as differences between the three methods. The CD method yielded discrepancies in the classifications because of its dependence on z-scores and composite dimensions. The NB method was less optimal in validating the behavioral characteristics of the sociometric groups because of its fixed cutoffs for identifying preferred, rejected, and controversial children and because it does not differentiate between positive and negative nominations for neglected children. The GB method addressed some of the limitations of the other two methods: it improved the classification of neglected students as well as of discrepant cases in the preferred, rejected, and controversial groups. Agreement between methods was higher with the oldest children. Conclusion: GB is a valid sociometric method, as evidenced by the behavior profiles of the sociometric status groups identified with this method.

  19. A framework to assess diagnosis error probabilities in the advanced MCR

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ar Ryum; Seong, Poong Hyun [KAIST, Daejeon (Korea, Republic of); Kim, Jong Hyun [Chosun University, Gwangju (Korea, Republic of); Jang, Inseok; Park, Jinkyun [Korea Atomic Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    The Institute of Nuclear Power Operations (INPO)'s operating experience database revealed that about 48% of the total events in world NPPs over two years (2010-2011) happened due to human errors. The purpose of human reliability analysis (HRA) methods is to evaluate the potential for, and mechanism of, human errors that may affect plant safety. Accordingly, various HRA methods have been developed, such as the technique for human error rate prediction (THERP), simplified plant analysis risk human reliability assessment (SPAR-H), the cognitive reliability and error analysis method (CREAM), and so on. Many researchers have asserted that procedure, alarm, and display are critical factors affecting operators' generic activities, especially diagnosis activities. None of the various HRA methods was explicitly designed to deal with digital systems. SCHEME (Soft Control Human error Evaluation MEthod) considers only the probability of soft control execution errors in the advanced MCR. The necessity of developing HRA methods for the various conditions of NPPs has therefore been raised. In this research, a framework to estimate diagnosis error probabilities in the advanced MCR was suggested. The assessment framework consists of three steps. The first step is to investigate diagnosis errors and calculate their probabilities. The second step is to quantitatively estimate the PSFs' weightings in the advanced MCR. The third step is to suggest the updated TRC model to assess the nominal diagnosis error probabilities. Additionally, the proposed framework was applied using full-scope simulation. Experiments conducted in a domestic full-scope simulator and in HAMMLAB were used as data sources. In total, eighteen tasks were analyzed and twenty-three crews participated.

  20. Probability-Based Determination Methods for Service Waiting in Service-Oriented Computing Environments

    Science.gov (United States)

    Zeng, Sen; Huang, Shuangxi; Liu, Yang

    Cooperative business process (CBP)-based service-oriented enterprise networks (SOEN) are emerging with the significant advances of enterprise integration and service-oriented architecture. Performance prediction and optimization for CBP-based SOEN is very complex. To meet these challenges, one of the key points is to reduce an abstract service's waiting number of physical services. This paper introduces a probability-based determination method (PBDM) for an abstract service's waiting number, M_i, and time span, τ_i, for its physical services. The determination of M_i and τ_i is based on the physical services' arrival rule and the distribution functions of their overall performance. In PBDM, the arrival probability of the physical services with the best overall performance value is set to a pre-defined reliability. PBDM makes thorough use of the information in the physical services' arrival rule and performance distribution functions, which improves the computational efficiency of scheme design and performance optimization for collaborative business processes in service-oriented computing environments.

  1. Reaction probability derived from an interpolation formula for diffusion processes with an absorptive boundary condition

    International Nuclear Information System (INIS)

    Misawa, T.; Itakura, H.

    1995-01-01

    The present article focuses on a dynamical simulation of molecular motion in liquids. In simulations of diffusion-controlled reactions with discrete time steps, the lack of information about the trajectory within a time step may result in a failure to count reactions that occur inside the step. In order to rectify this, an interpolated diffusion process is used. The process is derived from a stochastic interpolation formula recently developed by the first author [J. Math. Phys. 34, 775 (1993)]. In this method, the probability that a reaction has occurred during the time step, given the initial and final positions of the particles, is calculated. Some numerical examples confirm that the theoretical result represents an improvement over the Clifford-Green work [Mol. Phys. 57, 123 (1986)] on the same matter.
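
    The 1-D analogue of this idea is the textbook Brownian-bridge absorption probability; the sketch below (not the authors' interpolation formula itself) computes it for one particle near an absorbing wall and checks it by re-simulating the step with fine sub-steps:

```python
import numpy as np

rng = np.random.default_rng(1)
D, dt = 1.0, 0.01          # diffusion coefficient and discrete time step
x0, x1 = 0.15, 0.20        # distance from an absorbing wall at 0, before and
                           # after one step

# Standard Brownian-bridge result: probability that the path touched the wall
# somewhere inside the step, given the endpoints, so the "missed" reaction can
# still be counted.
p_hit = np.exp(-x0 * x1 / (D * dt))

# Brute-force check: re-simulate the step with many sub-steps, turning each
# free path into a bridge that ends exactly at x1.
n_sub, n_try = 200, 20000
t = np.linspace(0, dt, n_sub + 1)
dW = rng.normal(0.0, np.sqrt(2 * D * dt / n_sub), size=(n_try, n_sub))
paths = x0 + np.cumsum(dW, axis=1)
bridge = paths + (x1 - paths[:, -1:]) * (t[1:] / dt)
p_mc = np.mean(np.min(bridge, axis=1) <= 0.0)

print(f"analytical {p_hit:.4f} vs Monte Carlo {p_mc:.4f}")
```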

  2. Computing single step operators of logic programming in radial basis function neural networks

    Science.gov (United States)

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong

    2014-07-01

    Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of logic programming is a mapping from ground atoms to false or true. The single step operator of any logic programming is defined as a function (Tp:I→I). Logic programming is well-suited to building the artificial intelligence systems. In this study, we established a new technique to compute the single step operators of logic programming in the radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single step operators. The training data sets are used to build the neural networks. We used the recurrent radial basis function neural networks to get to the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.
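
    For concreteness, a tiny propositional sketch of the single step (immediate consequence) operator and its fixed point is given below; the program is made up and the neural-network encoding of the paper is not reproduced:

```python
# Single-step (immediate consequence) operator T_P for a small propositional
# logic program; iterating it from the empty interpretation reaches the fixed
# point that the recurrent network is trained to reproduce.
program = [
    ("a", []),            # a.            (fact)
    ("b", ["a"]),         # b :- a.
    ("c", ["a", "b"]),    # c :- a, b.
    ("d", ["e"]),         # d :- e.       (never fires: e is not derivable)
]

def T_P(I):
    """Map an interpretation (set of true atoms) to the heads of all clauses
    whose bodies are true in I."""
    return {head for head, body in program if all(atom in I for atom in body)}

I = set()
while True:
    nxt = T_P(I)
    if nxt == I:          # fixed point T_P(I) = I reached
        break
    I = nxt
print(sorted(I))          # -> ['a', 'b', 'c']
```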

  3. Computing single step operators of logic programming in radial basis function neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong [School of Mathematical Sciences, Universiti Sains Malaysia, 11800 USM, Penang (Malaysia)

    2014-07-10

    Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of logic programming is a mapping from ground atoms to false or true. The single step operator of any logic programming is defined as a function (Tp:I→I). Logic programming is well-suited to building the artificial intelligence systems. In this study, we established a new technique to compute the single step operators of logic programming in the radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single step operators. The training data sets are used to build the neural networks. We used the recurrent radial basis function neural networks to get to the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.

  4. Computing single step operators of logic programming in radial basis function neural networks

    International Nuclear Information System (INIS)

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong

    2014-01-01

    Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of logic programming is a mapping from ground atoms to false or true. The single step operator of any logic programming is defined as a function (Tp:I→I). Logic programming is well-suited to building the artificial intelligence systems. In this study, we established a new technique to compute the single step operators of logic programming in the radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single step operators. The training data sets are used to build the neural networks. We used the recurrent radial basis function neural networks to get to the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.

  5. BAYES-HEP: Bayesian belief networks for estimation of human error probability

    International Nuclear Information System (INIS)

    Karthick, M.; Senthil Kumar, C.; Paul, Robert T.

    2017-01-01

    Human errors contribute a significant portion of risk in safety critical applications and methods for estimation of human error probability have been a topic of research for over a decade. The scarce data available on human errors and large uncertainty involved in the prediction of human error probabilities make the task difficult. This paper presents a Bayesian belief network (BBN) model for human error probability estimation in safety critical functions of a nuclear power plant. The developed model using BBN would help to estimate HEP with limited human intervention. A step-by-step illustration of the application of the method and subsequent evaluation is provided with a relevant case study and the model is expected to provide useful insights into risk assessment studies

  6. Single-step colloidal quantum dot films for infrared solar harvesting

    KAUST Repository

    Kiani, Amirreza; Sutherland, Brandon R.; Kim, Younghoon; Ouellette, Olivier; Levina, Larissa; Walters, Grant; Dinh, Cao Thang; Liu, Mengxia; Voznyy, Oleksandr; Lan, Xinzheng; Labelle, Andre J.; Ip, Alexander H.; Proppe, Andrew; Ahmed, Ghada H.; Mohammed, Omar F.; Hoogland, Sjoerd; Sargent, Edward H.

    2016-01-01

    . To date, IR CQD solar cells have been made using a wasteful and complex sequential layer-by-layer process. Here, we demonstrate ∼1 eV bandgap solar-harvesting CQD films deposited in a single step. By engineering a fast-drying solvent mixture for metal

  7. Validation of DRAGON side-step method for Bruce-A restart Phase-B physics tests

    International Nuclear Information System (INIS)

    Shen, W.; Ngo-Trong, C.; Davis, R.S.

    2004-01-01

    The DRAGON side-step method, developed at AECL, has a number of advantages over the all-DRAGON method that was used before. It is now the qualified method for reactivity-device calculations. Although the side-step-method-generated incremental cross sections have been validated against those previously calculated with the all-DRAGON method, it is highly desirable to validate the side-step method against device-worth measurements in power reactors directly. In this paper, the DRAGON side-step method was validated by comparison with the device-calibration measurements made in Bruce-A NGS Unit 4 restart Phase-B commissioning in 2003. The validation exercise showed excellent results, with the DRAGON code overestimating the measured ZCR worth by ∼5%. A sensitivity study was also performed in this paper to assess the effect of various DRAGON modelling techniques on the incremental cross sections. The assessment shows that the refinement of meshes in 3-D and the use of the side-step method are two major reasons contributing to the improved agreement between the calculated ZCR worths and the measurements. Use of different DRAGON versions, DRAGON libraries, local-parameter core conditions, and weighting techniques for the homogenization of tube clusters inside the ZCR have a very small effect on the ZCR incremental thermal absorption cross section and ZCR reactivity worth. (author)

  8. Numerical simulation of 3D backward facing step flows at various Reynolds numbers

    Directory of Open Access Journals (Sweden)

    Louda Petr

    2015-01-01

    The work deals with the numerical simulation of 3D turbulent flow over a backward facing step in a narrow channel. The mathematical model is based on the RANS equations with an explicit algebraic Reynolds stress model (EARSM). The numerical method uses implicit finite volume upwind discretization. While the eddy viscosity models fail in predicting complex 3D flows, the EARSM model is shown to provide results which agree well with experimental PIV data. The reference experimental data provide the 3D flow field. The simulations are compared with experiment for 3 values of Reynolds number.

  9. Poisson Processes in Free Probability

    OpenAIRE

    An, Guimei; Gao, Mingchu

    2015-01-01

    We prove a multidimensional Poisson limit theorem in free probability, and define joint free Poisson distributions in a non-commutative probability space. We define (compound) free Poisson processes explicitly, similar to the definitions of (compound) Poisson processes in classical probability. We prove that the sum of finitely many freely independent compound free Poisson processes is a compound free Poisson process. We give a step by step procedure for constructing a (compound) free Poisson process.

  10. Single-step linking transition from superdeformed to spherical states in 143Eu

    Energy Technology Data Exchange (ETDEWEB)

    Atac, A.; Axelsson, A.; Persson, J. [Uppsala Univ. (Sweden)] [and others

    1996-12-31

    A discrete γ-ray transition which connects the second lowest SD state with a normally deformed one in 143Eu has been discovered. It has an energy of 3360.6 keV and carries 3.2% of the full intensity of the SD band. It feeds into a nearly spherical state which is above the I = 35/2+, E = 4947 keV level. The exact placement of the single-step link could, however, not be established due to the especially complicated level scheme in the region of interest. The angular correlation study favours a stretched dipole character for the 3360.6 keV transition. The single-step link agrees well with the previously determined two-step links, both with respect to energy and spin.

  11. Traffic simulation based ship collision probability modeling

    Energy Technology Data Exchange (ETDEWEB)

    Goerlandt, Floris, E-mail: floris.goerlandt@tkk.f [Aalto University, School of Science and Technology, Department of Applied Mechanics, Marine Technology, P.O. Box 15300, FI-00076 AALTO, Espoo (Finland); Kujala, Pentti [Aalto University, School of Science and Technology, Department of Applied Mechanics, Marine Technology, P.O. Box 15300, FI-00076 AALTO, Espoo (Finland)

    2011-01-15

    Maritime traffic poses various risks in terms of human, environmental and economic loss. In a risk analysis of ship collisions, it is important to get a reasonable estimate for the probability of such accidents and the consequences they lead to. In this paper, a method is proposed to assess the probability of vessels colliding with each other. The method is capable of determining the expected number of accidents, the locations where and the time when they are most likely to occur, while providing input for models concerned with the expected consequences. At the basis of the collision detection algorithm lies an extensive time domain micro-simulation of vessel traffic in the given area. The Monte Carlo simulation technique is applied to obtain a meaningful prediction of the relevant factors of the collision events. Data obtained through the Automatic Identification System is analyzed in detail to obtain realistic input data for the traffic simulation: traffic routes, the number of vessels on each route, the ship departure times, main dimensions and sailing speed. The results obtained by the proposed method for the studied case of the Gulf of Finland are presented, showing reasonable agreement with registered accident and near-miss data.

  12. Uncertainty about probability: a decision analysis perspective

    International Nuclear Information System (INIS)

    Howard, R.A.

    1988-01-01

    The issue of how to think about uncertainty about probability is framed and analyzed from the viewpoint of a decision analyst. The failure of nuclear power plants is used as an example. The key idea is to think of probability as describing a state of information on an uncertain event, and to pose the issue of uncertainty in this quantity as uncertainty about a number that would be definitive: it has the property that you would assign it as the probability if you knew it. Logical consistency requires that the probability to assign to a single occurrence in the absence of further information be the mean of the distribution of this definitive number, not the median as is sometimes suggested. Any decision that must be made without the benefit of further information must also be made using the mean of the definitive number's distribution. With this formulation, we find further that the probability of r occurrences in n exchangeable trials will depend on the first n moments of the definitive number's distribution. In making decisions, the expected value of clairvoyance on the occurrence of the event must be at least as great as that on the definitive number. If one of the events in question occurs, then the increase in probability of another such event is readily computed. This means, in terms of coin tossing, that unless one is absolutely sure of the fairness of a coin, seeing a head must increase the probability of heads, in distinction to usual thought. A numerical example for nuclear power shows that the failure of one plant of a group with a low probability of failure can significantly increase the probability that must be assigned to failure of a second plant in the group.
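
    A small numerical illustration of these two points, with an arbitrary Beta distribution standing in for the state of information about the definitive number:

```python
from scipy import stats

# Suppose our uncertainty about the coin's true (definitive) heads probability
# is Beta(2, 8) -- purely illustrative numbers.
prior = stats.beta(2, 8)

# The probability to assign a single toss is the MEAN of this distribution,
# not the median.
print(prior.mean(), prior.median())   # mean 0.200 vs median ~0.180

# After observing one head, the posterior is Beta(3, 8); the probability of a
# head on the next toss has increased, unlike for a coin known to be fair.
posterior = stats.beta(3, 8)
print(posterior.mean())               # ~0.273
```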

  13. Single-step fabrication of electrodes with controlled nanostructured surface roughness using optically-induced electrodeposition

    Science.gov (United States)

    Liu, N.; Li, M.; Liu, L.; Yang, Y.; Mai, J.; Pu, H.; Sun, Y.; Li, W. J.

    2018-02-01

    The customized fabrication of microelectrodes from gold nanoparticles (AuNPs) has attracted much attention due to their numerous applications in chemistry and biomedical engineering, such as for surface-enhanced Raman spectroscopy (SERS) and as catalyst sites for electrochemistry. Herein, we present a novel optically-induced electrodeposition (OED) method for rapidly fabricating gold electrodes which are also surface-modified with nanoparticles in one single step. The electrodeposition mechanism, with respect to the applied AC voltage signal and the elapsed deposition time, on the resulting morphology and particle sizes was investigated. The results from SEM and AFM analysis demonstrated that 80-200 nm gold particles can be formed on the surface of the gold electrodes. Simultaneously, both the size of the nanoparticles and the roughness of the fabricated electrodes can be regulated by the deposition time. Compared to state-of-the-art methods for fabricating microelectrodes with AuNPs, such as nano-seed-mediated growth and conventional electrodeposition, this OED technique has several advantages including: (1) electrode fabrication and surface modification using nanoparticles are completed in a single step, eliminating the need for prefabricating micro electrodes; (2) the patterning of electrodes is defined using a digitally-customized, projected optical image rather than using fixed physical masks; and (3) both the fabrication and surface modification processes are rapid, and the entire fabrication process only requires less than 6 s.

  14. A combined volume-of-fluid method and low-Mach-number approach for DNS of evaporating droplets in turbulence

    Science.gov (United States)

    Dodd, Michael; Ferrante, Antonino

    2017-11-01

    Our objective is to perform DNS of finite-size droplets that are evaporating in isotropic turbulence. This requires fully resolving the process of momentum, heat, and mass transfer between the droplets and surrounding gas. We developed a combined volume-of-fluid (VOF) method and low-Mach-number approach to simulate this flow. The two main novelties of the method are: (i) the VOF algorithm captures the motion of the liquid-gas interface in the presence of mass transfer due to evaporation and condensation without requiring a projection step for the liquid velocity, and (ii) the low-Mach-number approach allows for local volume changes caused by phase change while the total volume of the liquid-gas system is constant. The method is verified against an analytical solution for a Stefan flow problem, and the D2 law is verified for a single droplet in quiescent gas. We also demonstrate the scheme's robustness when performing DNS of an evaporating droplet in forced isotropic turbulence.

  15. Maximum parsimony, substitution model, and probability phylogenetic trees.

    Science.gov (United States)

    Weng, J F; Thomas, D A; Mareels, I

    2011-01-01

    The problem of inferring phylogenies (phylogenetic trees) is one of the main problems in computational biology. There are three main methods for inferring phylogenies: Maximum Parsimony (MP), Distance Matrix (DM) and Maximum Likelihood (ML), of which the MP method is the most well-studied and popular. In the MP method the optimization criterion is the number of substitutions of the nucleotides computed from the differences in the investigated nucleotide sequences. However, the MP method is often criticized because it only counts the substitutions observable at the current time, omitting all the unobservable substitutions that actually occurred in the evolutionary history. In order to take the unobservable substitutions into account, substitution models have been established and are now widely used in the DM and ML methods, but these substitution models cannot be used within the classical MP method. Recently the authors proposed a probability representation model for phylogenetic trees, and the reconstructed trees in this model are called probability phylogenetic trees. One of the advantages of the probability representation model is that it can include a substitution model to infer phylogenetic trees based on the MP principle. In this paper we explain how to use a substitution model in the reconstruction of probability phylogenetic trees and show the advantage of this approach with examples.
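
    As a reference point for the observable-substitution count that classical MP minimizes, here is the Fitch small-parsimony computation on a made-up four-taxon site (the probability representation model itself is not reproduced):

```python
# Fitch algorithm: minimum number of observable substitutions at one site of a
# fixed, made-up 4-taxon tree ((A,B),(C,D)); classical MP simply sums this over
# all sites and over candidate topologies.
def fitch(node, leaves):
    """Return (state set, substitution count) for a nested-tuple tree."""
    if isinstance(node, str):                  # leaf: its observed nucleotide
        return {leaves[node]}, 0
    (s_left, c_left), (s_right, c_right) = (fitch(ch, leaves) for ch in node)
    inter = s_left & s_right
    if inter:
        return inter, c_left + c_right
    return s_left | s_right, c_left + c_right + 1   # empty overlap forces one change

tree = (("A", "B"), ("C", "D"))
site = {"A": "G", "B": "G", "C": "T", "D": "A"}
print(fitch(tree, site)[1])                    # -> 2 substitutions at this site
```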

  16. Application of a Rapid Diagnosis Method for Avian Influenza H5N1 Virus Using the Single Step Multiplex RT-PCR Method

    Directory of Open Access Journals (Sweden)

    Aris Haryanto

    2010-12-01

    Avian influenza (AI) virus is a segmented single-stranded (ss) RNA virus with negative polarity belonging to the Orthomyxoviridae family. Diagnosis of AI virus can be performed using conventional methods, but these have low sensitivity and specificity. The objective of the research was to apply a rapid, precise, and accurate diagnostic method for AI virus and also to determine its type and subtype based on single step multiplex reverse transcriptase-polymerase chain reaction (RT-PCR) targeting the M, H5, and N1 genes. In this method the M, H5 and N1 genes are simultaneously amplified in one PCR tube. The steps of this research consisted of collecting viral RNAs from 10 different AI samples originating from the Maros Disease Investigation Center during 2007. DNA amplification was first conducted by simplex RT-PCR using the M primer set; then single step multiplex RT-PCR was conducted simultaneously using the M, H5 and N1 primer sets. The RT-PCR products were then separated on 1.5% agarose gel, stained with ethidium bromide and visualized under a UV transilluminator. Results showed that 8 of 10 RNA virus samples could be amplified by simplex RT-PCR for the M gene, generating a DNA fragment of 276 bp. Amplification using the multiplex RT-PCR method showed that two of the 10 samples were AI positive; three DNA fragments were generated, consisting of 276 bp for the M gene, 189 bp for the H5 gene, and 131 bp for N1. In this study, a rapid and effective diagnosis method for AI virus was achieved using simultaneous single step multiplex RT-PCR. By this technique the type and subtype of AI virus, especially H5N1, can also be determined.

  17. Method for Automatic Selection of Parameters in Normal Tissue Complication Probability Modeling.

    Science.gov (United States)

    Christophides, Damianos; Appelt, Ane L; Gusnanto, Arief; Lilley, John; Sebag-Montefiore, David

    2018-07-01

    To present a fully automatic method to generate multiparameter normal tissue complication probability (NTCP) models and compare its results with those of a published model, using the same patient cohort. Data were analyzed from 345 rectal cancer patients treated with external radiation therapy to predict the risk of patients developing grade 1 or ≥2 cystitis. In total, 23 clinical factors were included in the analysis as candidate predictors of cystitis. Principal component analysis was used to decompose the bladder dose-volume histogram into 8 principal components, explaining more than 95% of the variance. The data set of clinical factors and principal components was divided into training (70%) and test (30%) data sets, with the training data set used by the algorithm to compute an NTCP model. The first step of the algorithm was to obtain a bootstrap sample, followed by multicollinearity reduction using the variance inflation factor and genetic algorithm optimization to determine an ordinal logistic regression model that minimizes the Bayesian information criterion. The process was repeated 100 times, and the model with the minimum Bayesian information criterion was recorded on each iteration. The most frequent model was selected as the final "automatically generated model" (AGM). The published model and AGM were fitted on the training data sets, and the risk of cystitis was calculated. The 2 models had no significant differences in predictive performance, both for the training and test data sets (P value > .05) and found similar clinical and dosimetric factors as predictors. Both models exhibited good explanatory performance on the training data set (P values > .44), which was reduced on the test data sets (P values < .05). The predictive value of the AGM is equivalent to that of the expert-derived published model. It demonstrates potential in saving time, tackling problems with a large number of parameters, and standardizing variable selection in NTCP

  18. Pregnancy diagnosis in sheep: review of the most practical methods

    International Nuclear Information System (INIS)

    Karen, A.; Szenci, O.; Kovacs, P.; Beckers, J.F.

    2001-01-01

    Various practical methods have been used for pregnancy diagnosis in sheep: radiography, rectal abdominal palpation, assessment of progesterone, assessment of estrone sulphate, RIA assay of placental lactogen, assessment of pregnancy proteins or pregnancy-associated glycoproteins, A-mode ultrasound, Doppler ultrasound and real-time B-mode ultrasonography. Real-time, B-mode ultrasonography appears to be the most practical and accurate method for diagnosing pregnancy and determining fetal numbers in sheep. Transabdominal B-mode ultrasonography achieved high accuracy for pregnancy diagnosis (94-100 %) and the determination of fetal numbers (92-99 %) on d 29 to 106 of gestation

  19. Influence of application methods of one-step self-etching adhesives on microtensile bond strength

    Directory of Open Access Journals (Sweden)

    Chul-Kyu Choi,

    2011-05-01

    Objectives: The purpose of this study was to evaluate the effect of various application methods of one-step self-etch adhesives on microtensile resin-dentin bond strength. Materials and Methods: Thirty-six extracted human molars were used. The teeth were assigned randomly to twelve groups (n = 15), according to the three different adhesive systems (Clearfil Tri-S Bond, Adper Prompt L-Pop, G-Bond) and application methods. The adhesive systems were applied on the dentin as follows: (1) single coating, (2) double coating, (3) manual agitation, (4) ultrasonic agitation. Following the adhesive application, light-cure composite resin was built up. The restored teeth were stored in distilled water at room temperature for 24 hours, and 15 specimens were prepared per group. Then microtensile bond strength was measured and the failure mode was examined. Results: Manual agitation and ultrasonic agitation of the adhesive significantly increased the microtensile bond strength compared with single and double coating. Double coating of the adhesive significantly increased the microtensile bond strength compared with single coating, and there was no significant difference between the manual and ultrasonic agitation groups. There was a significant difference in microtensile bond strength among the adhesives, and Clearfil Tri-S Bond showed the highest bond strength. Conclusions: For one-step self-etching adhesives, there were significant differences according to application method and type of adhesive. Regardless of the material, manual or ultrasonic agitation of the adhesive produced significantly higher microtensile bond strength.

  20. Approaches to Evaluating Probability of Collision Uncertainty

    Science.gov (United States)

    Hejduk, Matthew D.; Johnson, Lauren C.

    2016-01-01

    While the two-dimensional probability of collision (Pc) calculation has served as the main input to conjunction analysis risk assessment for over a decade, it has done this mostly as a point estimate, with relatively little effort made to produce confidence intervals on the Pc value based on the uncertainties in the inputs. The present effort seeks to carry these uncertainties through the calculation in order to generate a probability density of Pc results rather than a single average value. Methods for assessing uncertainty in the primary and secondary objects' physical sizes and state estimate covariances, as well as a resampling approach to reveal the natural variability in the calculation, are presented; and an initial proposal for operationally-useful display and interpretation of these data for a particular conjunction is given.

  1. Probability Machines: Consistent Probability Estimation Using Nonparametric Learning Machines

    Science.gov (United States)

    Malley, J. D.; Kruppa, J.; Dasgupta, A.; Malley, K. G.; Ziegler, A.

    2011-01-01

    Background: Most machine learning approaches only provide a classification for binary responses. However, probabilities are required for risk estimation using individual patient characteristics. It has been shown recently that every statistical learning machine known to be consistent for a nonparametric regression problem is a probability machine that is provably consistent for this estimation problem. Objectives: The aim of this paper is to show how random forests and nearest neighbors can be used for consistent estimation of individual probabilities. Methods: Two random forest algorithms and two nearest neighbor algorithms are described in detail for estimation of individual probabilities. We discuss the consistency of random forests, nearest neighbors and other learning machines in detail. We conduct a simulation study to illustrate the validity of the methods. We exemplify the algorithms by analyzing two well-known data sets on the diagnosis of appendicitis and the diagnosis of diabetes in Pima Indians. Results: Simulations demonstrate the validity of the method. With the real data application, we show the accuracy and practicality of this approach. We provide sample code from R packages in which the probability estimation is already available. This means that all calculations can be performed using existing software. Conclusions: Random forest algorithms as well as nearest neighbor approaches are valid machine learning methods for estimating individual probabilities for binary responses. Freely available implementations are available in R and may be used for applications. PMID:21915433
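
    A short illustration of the idea with scikit-learn rather than the R packages cited in the abstract (synthetic data, arbitrary tuning values):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the clinical data sets cited in the abstract.
X, y = make_classification(n_samples=3000, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Both learners return individual probabilities, not just class labels.
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
knn = KNeighborsClassifier(n_neighbors=50).fit(X_tr, y_tr)

p_rf = rf.predict_proba(X_te)[:, 1]
p_knn = knn.predict_proba(X_te)[:, 1]
print("first five risk estimates:", np.round(p_rf[:5], 3), np.round(p_knn[:5], 3))
```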

  2. Failure Probability Calculation Method Using Kriging Metamodel-based Importance Sampling Method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seunggyu [Korea Aerospace Research Institue, Daejeon (Korea, Republic of); Kim, Jae Hoon [Chungnam Nat’l Univ., Daejeon (Korea, Republic of)

    2017-05-15

    The kernel density was determined based on sampling points obtained in a Markov chain simulation and was assumed to be the importance sampling function. A Kriging metamodel was constructed in more detail in the vicinity of a limit state. The failure probability was calculated based on importance sampling, which was performed for the Kriging metamodel. A pre-existing method was modified to obtain more sampling points for a kernel density in the vicinity of a limit state. A stable numerical method was proposed to find a parameter of the kernel density. To assess the completeness of the Kriging metamodel, the possibility of changes in the calculated failure probability due to the uncertainty of the Kriging metamodel was calculated.

  3. Calculating the albedo characteristics by the method of transmission probabilities

    International Nuclear Information System (INIS)

    Lukhvich, A.A.; Rakhno, I.L.; Rubin, I.E.

    1983-01-01

    The possibility of using the method of transmission probabilities for calculating the albedo characteristics of homogeneous and heterogeneous zones is studied. The transmission probabilities method is a numerical method for solving the transport equation in integral form. All calculations have been conducted in a one-group approximation for planes and rods with different optical thicknesses and capture-to-scattering ratios. The calculations for plane and cylindrical geometries have shown that the numerical method of transmission probabilities can be used to calculate the albedo characteristics of homogeneous and heterogeneous zones with high accuracy. In this case the computation time is minimal even for the cylindrical geometry, provided the interpolation calculation of characteristics is used for the neutrons of the first path.

  4. Fixation Probabilities of Evolutionary Graphs Based on the Positions of New Appearing Mutants

    Directory of Open Access Journals (Sweden)

    Pei-ai Zhang

    2014-01-01

    Evolutionary graph theory is a useful framework for implementing evolutionary dynamics on spatially structured populations. Calculating the fixation probability is usually treated as a Markov chain process, which is affected by the number of individuals, the fitness of the mutant, the game strategy, and the structure of the population. However, the position at which the new mutant appears is also important to its fixation probability, and that position is the emphasis here. A method is put forward to calculate the fixation probability of a single-level evolutionary graph (EG). Then, for a class of bilevel EGs, their fixation probabilities are calculated and some propositions are discussed. The conclusion is that the bilevel EG is more stable than the corresponding one-rooted EG.
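
    As a baseline for comparison, the sketch below estimates the fixation probability of a single mutant on a complete graph (well-mixed population), where the mutant's starting position does not matter, and checks it against the classical closed form; the bilevel graphs of the paper are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(2)

def fixation_prob_sim(N=20, r=1.5, trials=20000):
    """Moran process on a complete graph: one mutant of fitness r among N-1
    residents of fitness 1.  Only the jumps of the mutant count k are
    simulated; conditional on a change, k goes up with probability r/(1+r)."""
    p_up = r / (1.0 + r)
    fixed = 0
    for _ in range(trials):
        k = 1
        while 0 < k < N:
            k += 1 if rng.random() < p_up else -1
        fixed += (k == N)
    return fixed / trials

N, r = 20, 1.5
exact = (1 - 1 / r) / (1 - r**(-N))       # classical closed form for this graph
print(f"simulated {fixation_prob_sim(N, r):.4f} vs exact {exact:.4f}")
```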

  5. Two-step method for creating a gastric tube during laparoscopic-thoracoscopic Ivor-Lewis esophagectomy.

    Science.gov (United States)

    Liu, Yu; Li, Ji-Jia; Zu, Peng; Liu, Hong-Xu; Yu, Zhan-Wu; Ren, Yi

    2017-12-07

    To introduce a two-step method for creating a gastric tube during laparoscopic-thoracoscopic Ivor-Lewis esophagectomy and assess its clinical application. One hundred and twenty-two patients with middle or lower esophageal cancer who underwent laparoscopic-thoracoscopic Ivor-Lewis esophagectomy at Liaoning Cancer Hospital and Institute from March 2014 to March 2016 were included in this study and divided into two groups based on the procedure used for creating a gastric tube. One group used a two-step method for creating a gastric tube, and the other group used the conventional method. The two groups were compared regarding the operating time, surgical complications, and number of stapler cartridges used. The mean operating time was significantly shorter in the two-step method group than in the conventional method group [238 (179-293) min vs 272 (189-347) min]. The two-step method for creating a gastric tube during laparoscopic-thoracoscopic Ivor-Lewis esophagectomy has the advantages of simple operation, minimal damage to the tubular stomach, and reduced use of stapler cartridges.

  6. Nanopatterning of magnetic disks by single-step Ar+ Ion projection

    NARCIS (Netherlands)

    Dietzel, A.H.; Berger, R.; Loeschner, H.; Platzgummer, E.; Stengl, G.; Bruenger, W.H.; Letzkus, F.

    2003-01-01

    Large-area Ar+ projection has been used to generate planar magnetic nanostructures on a 1″-format hard disk in a single step (see Figure). The recording pattern was transferred to a Co/Pt multilayer without resist processes or any other contact to the delicate media surface. It is conceivable that

  7. A Novel Ship Detection Method Based on Gradient and Integral Feature for Single-Polarization Synthetic Aperture Radar Imagery

    Directory of Open Access Journals (Sweden)

    Hao Shi

    2018-02-01

    With the rapid development of remote sensing technologies, SAR satellites like China's Gaofen-3 satellite have more imaging modes and higher resolution. With the availability of high-resolution SAR images, automatic ship target detection has become an important topic in maritime research. In this paper, a novel ship detection method based on gradient and integral features is proposed. This method is mainly composed of three steps. First, in the preprocessing step, a filter is employed to smooth the clutter, and the smoothing effect can be adaptively adjusted according to the statistics of the sub-window; thus, it can retain details while achieving noise suppression. Second, in the candidate-area extraction step, a sea-land segmentation method based on gradient enhancement is presented. The integral image method is employed to accelerate computation. Finally, in the ship target identification step, a feature extraction strategy based on Haar-like gradient information and a Radon transform is proposed. This strategy decreases the number of templates used in traditional Haar-like methods. Experiments were performed using Gaofen-3 single-polarization SAR images, and the results showed that the proposed method has high detection accuracy and computational efficiency. In addition, this method has the potential for on-board processing.
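
    The integral-image trick mentioned above can be sketched in a few lines (random array standing in for a SAR image); any Haar-like box feature then reduces to a handful of table look-ups:

```python
import numpy as np

rng = np.random.default_rng(3)
img = rng.random((512, 512))                  # stand-in for a SAR amplitude image

# Integral image with a zero top row/left column so box sums need no edge cases.
ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)

def box_sum(r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] from four look-ups, independent of box size."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

# Any Haar-like gradient feature is then a difference of such box sums.
print(np.isclose(box_sum(10, 20, 200, 300), img[10:200, 20:300].sum()))
```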

  8. Estimating the concordance probability in a survival analysis with a discrete number of risk groups.

    Science.gov (United States)

    Heller, Glenn; Mo, Qianxing

    2016-04-01

    A clinical risk classification system is an important component of a treatment decision algorithm. A measure used to assess the strength of a risk classification system is discrimination, and when the outcome is survival time, the most commonly applied global measure of discrimination is the concordance probability. The concordance probability represents the pairwise probability of lower patient risk given longer survival time. The c-index and the concordance probability estimate have been used to estimate the concordance probability when patient-specific risk scores are continuous. In the current paper, the concordance probability estimate and an inverse probability censoring weighted c-index are modified to account for discrete risk scores. Simulations are generated to assess the finite sample properties of the concordance probability estimate and the weighted c-index. An application of these measures of discriminatory power to a metastatic prostate cancer risk classification system is examined.
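
    A toy version of the pairwise counting behind the concordance probability for discrete risk groups is sketched below; it ignores censoring, which the paper's estimators handle through inverse probability of censoring weights, and simply skips pairs with tied group scores:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)

# Synthetic cohort: three discrete risk groups (2 = highest risk) with shorter
# survival for higher groups; censoring is ignored in this toy version.
group = rng.integers(0, 3, size=300)
time = rng.exponential(scale=1.0 / (0.5 + group))

num = den = 0
for i, j in combinations(range(len(time)), 2):
    if group[i] == group[j]:
        continue                      # tied discrete scores carry no information
    den += 1
    hi, lo = (i, j) if group[i] > group[j] else (j, i)
    num += time[hi] < time[lo]        # higher risk group should fail earlier
print(f"concordance probability estimate: {num / den:.3f}")
```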

  9. Digital simulation of two-dimensional random fields with arbitrary power spectra and non-Gaussian probability distribution functions.

    Science.gov (United States)

    Yura, Harold T; Hanson, Steen G

    2012-04-01

    Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
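
    A compact sketch of the two-stage recipe (spectral shaping of white Gaussian noise, then a pointwise inverse-CDF transform); the Gaussian-shaped PSD and exponential target marginal are arbitrary choices, and, as the abstract notes, the transform slightly distorts the spectrum, so this is an engineering approximation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
N = 256

# Step 1: colour a white Gaussian field by filtering in the Fourier domain with
# the square root of the desired power spectral density.
fx = np.fft.fftfreq(N)
kx, ky = np.meshgrid(fx, fx, indexing="ij")
psd = np.exp(-(kx**2 + ky**2) / (2 * 0.05**2))
white = rng.normal(size=(N, N))
field = np.fft.ifft2(np.fft.fft2(white) * np.sqrt(psd)).real
field /= field.std()

# Step 2: map the coloured Gaussian marginal onto the desired amplitude PDF via
# the probability integral transform (unit exponential marginal as an example).
u = stats.norm.cdf(field)
target = stats.expon.ppf(u, scale=1.0)

print(target.mean(), target.std())    # both ≈ 1 for the unit exponential
```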

  10. Bayesian pedigree inference with small numbers of single nucleotide polymorphisms via a factor-graph representation.

    Science.gov (United States)

    Anderson, Eric C; Ng, Thomas C

    2016-02-01

    We develop a computational framework for addressing pedigree inference problems using small numbers (80-400) of single nucleotide polymorphisms (SNPs). Our approach relaxes the assumptions, which are commonly made, that sampling is complete with respect to the pedigree and that there is no genotyping error. It relies on representing the inferred pedigree as a factor graph and invoking the Sum-Product algorithm to compute and store quantities that allow the joint probability of the data to be rapidly computed under a large class of rearrangements of the pedigree structure. This allows efficient MCMC sampling over the space of pedigrees, and, hence, Bayesian inference of pedigree structure. In this paper we restrict ourselves to inference of pedigrees without loops using SNPs assumed to be unlinked. We present the methodology in general for multigenerational inference, and we illustrate the method by applying it to the inference of full sibling groups in a large sample (n=1157) of Chinook salmon typed at 95 SNPs. The results show that our method provides a better point estimate and estimate of uncertainty than the currently best-available maximum-likelihood sibling reconstruction method. Extensions of this work to more complex scenarios are briefly discussed. Published by Elsevier Inc.

  11. Two-step Raman spectroscopy method for tumor diagnosis

    Science.gov (United States)

    Zakharov, V. P.; Bratchenko, I. A.; Kozlov, S. V.; Moryatov, A. A.; Myakinin, O. O.; Artemyev, D. N.

    2014-05-01

    A two-step Raman spectroscopy phase method is proposed for the differential diagnosis of malignant tumors in skin and lung tissue. It includes detection of a malignant tumor in healthy tissue in the first step and identification of the specific cancer type in the second step. The proposed phase method analyzes spectral intensity alterations in the 1300-1340 and 1640-1680 cm-1 Raman bands relative to the intensity of the 1450 cm-1 band in the first step, and relative differences between Raman intensities of the tumor area and of healthy skin closely adjacent to the lesion in the second step. It was tested on more than 40 ex vivo samples of lung tissue and more than 50 in vivo skin tumors. Linear Discriminant Analysis, Quadratic Discriminant Analysis and Support Vector Machine were used for tumor type classification on the phase planes. It is shown that the two-step phase method reaches 88.9% sensitivity and 87.8% specificity for malignant melanoma diagnosis (skin cancer); 100% sensitivity and 81.5% specificity for adenocarcinoma diagnosis (lung cancer); and 90.9% sensitivity and 77.8% specificity for squamous cell carcinoma diagnosis (lung cancer).

  12. Quantifying the number of color centers in single fluorescent nanodiamonds by photon correlation spectroscopy and Monte Carlo simulation

    International Nuclear Information System (INIS)

    Hui, Y.Y.; Chang, Y.-R.; Lee, H.-Y.; Chang, H.-C.; Lim, T.-S.; Fann Wunshain

    2009-01-01

    The number of negatively charged nitrogen-vacancy centers (N-V)- in fluorescent nanodiamond (FND) has been determined by photon correlation spectroscopy and Monte Carlo simulations at the single-particle level. By taking account of the random dipole orientation of the multiple (N-V)- fluorophores and simulating the probability distribution of their effective numbers (Ne), we found that the actual number (Na) of the fluorophores is in linear correlation with Ne, with correction factors of 1.8 and 1.2 in measurements using linearly and circularly polarized light, respectively. We determined Na = 8 ± 1 for 28 nm FND particles prepared by 3 MeV proton irradiation.

  13. Structural comparison of anodic nanoporous-titania fabricated from single-step and three-step of anodization using two paralleled-electrodes anodizing cell

    Directory of Open Access Journals (Sweden)

    Mallika Thabuot

    2016-02-01

    Full Text Available Anodization of a Ti sheet in an ethylene glycol electrolyte containing 0.38 wt% NH4F with the addition of 1.79 wt% H2O at room temperature was studied. Applied potentials of 10-60 V and anodizing times of 1-3 h were used for single-step and three-step anodization within a two-parallel-electrode anodizing cell. The structural and textural properties were investigated by X-ray diffraction (XRD) and scanning electron microscopy (SEM). After annealing at 600°C in an air furnace for 3 h, the TiO2 nanotubes transformed to a higher proportion of the anatase crystal phase. Crystallization of the anatase phase was also enhanced as the duration of the final anodization step increased. Using single-step anodization, the pore texture of the oxide film began to appear at an applied potential of 30 V. A better-ordered arrangement of the TiO2-nanotube array with larger pore size was obtained as the applied potential increased. An applied potential of 60 V was selected for the three-step anodization with anodizing times of 1-3 h. The results showed that well-smoothed surface coverage with a higher density of porous TiO2 was achieved by prolonging the first and second steps; however, tubes discontinuous in length were produced instead of long vertical tubes. The thickness of the anodic oxide film depended on the anodizing time of the last anodization step. A better-ordered nanostructured TiO2 was produced using three-step anodization at 60 V with 3 h for each step.

  14. Cellular Analysis of Boltzmann Most Probable Ideal Gas Statistics

    Science.gov (United States)

    Cahill, Michael E.

    2018-04-01

    Exact treatment of Boltzmann's most probable statistics for an ideal gas of identical-mass particles having translational kinetic energy gives a distribution law for velocity phase-space cell j which relates the particle energy and the particle population according to B e(j) = A − Ψ(n(j) + 1), where A and B are the Lagrange multipliers and Ψ is the digamma function defined by Ψ(x + 1) = d/dx ln(x!). A useful, sufficiently accurate approximation for Ψ is Ψ(x + 1) ≈ ln(e^(−γ) + x), where γ is the Euler constant (≈ 0.5772156649), so the above distribution equation is approximately B e(j) = A − ln(e^(−γ) + n(j)), which can be inverted to solve for n(j), giving n(j) = (e^(B(eH − e(j))) − 1) e^(−γ), where B eH = A + γ and B eH is a unitless particle energy which replaces the parameter A. The two approximate distribution equations imply that eH is the highest particle energy and the highest particle population is nH = (e^(B eH) − 1) e^(−γ), since the population becomes negative if e(j) > eH and the kinetic energy becomes negative if n(j) > nH. An explicit construction of cells in velocity space which are equal in volume and homogeneous for almost all cells is shown to be useful in the analysis. Plots of sample distribution properties using e(j) as the independent variable are presented.
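
    A hedged numerical illustration of the inverted distribution law follows; the values of B and eH are arbitrary placeholders, not taken from the paper.

        import numpy as np

        EULER_GAMMA = 0.5772156649

        def cell_population(e, B, eH):
            # Approximate most-probable population of a velocity-space cell with
            # particle energy e, given Lagrange multiplier B and the highest
            # particle energy eH (with B*eH = A + gamma).
            return (np.exp(B * (eH - e)) - 1.0) * np.exp(-EULER_GAMMA)

        # Illustrative values (assumptions, not from the paper):
        B, eH = 1.0, 10.0
        energies = np.linspace(0.0, eH, 11)
        populations = cell_population(energies, B, eH)
        n_highest = (np.exp(B * eH) - 1.0) * np.exp(-EULER_GAMMA)  # population bound nH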

  15. Is the Number of Different MRI Findings More Strongly Associated with Low Back Pain Than Single MRI Findings?

    DEFF Research Database (Denmark)

    Hancock, Mark J; Kjaer, Per; Kent, Peter

    2017-01-01

    STUDY DESIGN: A cross-sectional and longitudinal analysis using 2 different data sets. OBJECTIVE: To investigate if the number of different MRI findings present is more strongly associated with low back pain (LBP) than single MRI findings. SUMMARY OF BACKGROUND DATA: Most previous studies have....The outcome for the cross-sectional study was presence of LBP during the last year. The outcome for the longitudinal study was days to recurrence of activity limiting LBP. In both data sets we created an aggregate score of the number of different MRI findings present in each individual and assessed...... investigated the associations between single MRI findings and back pain rather than investigating combinations of MRI findings. If different individuals have different pathoanatomic sources contributing to their pain, then combinations of MRI findings may be more strongly associated with LBP. METHODS...

  16. Differences in Lower Extremity and Trunk Kinematics between Single Leg Squat and Step Down Tasks.

    Directory of Open Access Journals (Sweden)

    Cara L Lewis

    Full Text Available The single leg squat and single leg step down are two commonly used functional tasks to assess movement patterns. It is unknown how kinematics compare between these tasks. The purpose of this study was to identify kinematic differences in the lower extremity, pelvis and trunk between the single leg squat and the step down. Fourteen healthy individuals participated in this research and performed the functional tasks while kinematic data were collected for the trunk, pelvis, and lower extremities using a motion capture system. For the single leg squat task, the participant was instructed to squat as low as possible. For the step down task, the participant was instructed to stand on top of a box, slowly lower him/herself until the non-stance heel touched the ground, and return to standing. This was done from two different heights (16 cm and 24 cm). The kinematics were evaluated at peak knee flexion as well as at 60° of knee flexion. Pearson correlation coefficients (r) between the angles at those two time points were also calculated to better understand the relationship between each task. The tasks resulted in kinematic differences at the knee, hip, pelvis, and trunk at both time points. The single leg squat was performed with less hip adduction (p ≤ 0.003), but more hip external rotation and knee abduction (p ≤ 0.030), than the step down tasks at 60° of knee flexion. These differences were maintained at peak knee flexion, except that hip external rotation was only significant in the 24 cm step down task (p ≤ 0.029). While there were multiple differences between the two step heights at peak knee flexion, the only difference at 60° of knee flexion was in trunk flexion (p < 0.001). Angles at the knee and hip had a moderate to excellent correlation (r = 0.51-0.98), but less consistently so at the pelvis and trunk (r = 0.21-0.96). The differences in movement patterns between the single leg squat and the step down should be considered when selecting a

  17. Increasing the number of steps walked each day improves physical fitness in Japanese community-dwelling adults.

    Science.gov (United States)

    Okamoto, N; Nakatani, T; Okamoto, Y; Iwamoto, J; Saeki, K; Kurumatani, N

    2010-04-01

    We aimed to investigate the effects of increasing the number of steps walked each day on physical fitness, and the change in physical fitness according to angiotensin-converting enzyme (ACE) genotype. A total of 174 participants were randomly assigned to two groups. Subjects in group A were instructed, over a 24-week trial, to increase the number of steps walked each day, while subjects in group B were instructed to engage in brisk walking, at a target heart rate, for 20 min or more a day on two or more days a week. The values of the 3-min shuttle stamina walk test (SSWT) and the 30-s chair-stand test (CS-30) significantly increased, but no differences in the increase were found between the groups. A significant relationship was found between the percentage increase in SSWT values and an increase in the number of steps walked of 1,500 or more per day over baseline. Our results suggest that increasing the number of steps walked daily improves physical fitness. No significant relationships were observed between the change in physical fitness and ACE genotype. Copyright Georg Thieme Verlag KG Stuttgart, New York.

  18. Strong Stability Preserving Explicit Linear Multistep Methods with Variable Step Size

    KAUST Repository

    Hadjimichael, Yiannis

    2016-09-08

    Strong stability preserving (SSP) methods are designed primarily for time integration of nonlinear hyperbolic PDEs, for which the permissible SSP step size varies from one step to the next. We develop the first SSP linear multistep methods (of order two and three) with variable step size, and prove their optimality, stability, and convergence. The choice of step size for multistep SSP methods is an interesting problem because the allowable step size depends on the SSP coefficient, which in turn depends on the chosen step sizes. The description of the methods includes an optimal step-size strategy. We prove sharp upper bounds on the allowable step size for explicit SSP linear multistep methods and show the existence of methods with arbitrarily high order of accuracy. The effectiveness of the methods is demonstrated through numerical examples.

  19. Further comments on the sequential probability ratio testing methods

    Energy Technology Data Exchange (ETDEWEB)

    Kulacsy, K. [Hungarian Academy of Sciences, Budapest (Hungary). Central Research Inst. for Physics

    1997-05-23

    The Bayesian method for belief updating proposed in Racz (1996) is examined. The interpretation of the belief function introduced therein is found, and the method is compared to the classical binary Sequential Probability Ratio Testing method (SPRT). (author).

  20. Antibacterial and cytocompatible nanotextured Ti surface incorporating silver via single step hydrothermal processing

    Energy Technology Data Exchange (ETDEWEB)

    Mohandas, Anu; Krishnan, Amit G.; Biswas, Raja; Menon, Deepthy, E-mail: deepthymenon@aims.amrita.edu; Nair, Manitha B., E-mail: manithanair@aims.amrita.edu

    2017-06-01

    Nanosurface modification of titanium (Ti) implants and prostheses has been shown to enhance osseointegration at the tissue–implant interface. However, many of these products lack adequate antibacterial capability, which leads to implant loosening. As a curative strategy, in this study, nanotextured Ti substrates embedded with silver nanoparticles were developed through single-step hydrothermal processing in an alkaline medium containing silver nitrate at different concentrations (15, 30 and 75 μM). Scanning electron micrographs revealed a non-periodically oriented nanoleafy structure on Ti (TNL) decorated with Ag nanoparticles (nanoAg), which was verified by XPS, XRD and EDS analysis. This TNLAg substrate proved to be mechanically stable in nanoindentation and nanoscratch tests. Silver ions at detectable levels were released for a period of ~28 days only from substrates incorporating higher nanoAg content. The samples demonstrated antibacterial activity towards both Escherichia coli and Staphylococcus aureus, with a more favorable response to the former. At the same time, Ti substrates incorporating nanoAg at all concentrations supported the viability, proliferation and osteogenic differentiation of mesenchymal stem cells. Overall, nanoAg incorporation into surface-modified Ti via a simple one-step thermochemical method is a favorable strategy for producing implants with the dual characteristics of antibacterial activity and cell compatibility. - Highlights: • Nanosilver was incorporated within Ti nanoleafy topography by a simple one-step thermochemical method. • The nanosurface demonstrated antibacterial activity against gram-positive and gram-negative bacteria. • The nanosurface promoted the viability, proliferation and osteogenic differentiation of mesenchymal stem cells.

  1. Balanced Photodetection in One-Step Liquid-Phase-Synthesized CsPbBr3 Micro-/Nanoflake Single Crystals.

    Science.gov (United States)

    Zheng, Wei; Xiong, Xufan; Lin, Richeng; Zhang, Zhaojun; Xu, Cunhua; Huang, Feng

    2018-01-17

    Here, we report a low-cost and high-compatibility one-step liquid-phase synthesis method for high-purity CsPbBr3 micro-/nanoflake single crystals. On the basis of the high-purity CsPbBr3, we further prepared a low-dimensional photodetector capable of balanced photodetection, involving both high external quantum efficiency and rapid temporal response, which is barely realized in previously reported low-dimensional photodetectors.

  2. Response and reliability analysis of nonlinear uncertain dynamical structures by the probability density evolution method

    DEFF Research Database (Denmark)

    Nielsen, Søren R. K.; Peng, Yongbo; Sichani, Mahdi Teimouri

    2016-01-01

    The paper deals with the response and reliability analysis of hysteretic or geometric nonlinear uncertain dynamical systems of arbitrary dimensionality driven by stochastic processes. The approach is based on the probability density evolution method proposed by Li and Chen (Stochastic dynamics...... of structures, 1st edn. Wiley, London, 2009; Probab Eng Mech 20(1):33–44, 2005), which circumvents the dimensional curse of traditional methods for the determination of non-stationary probability densities based on Markov process assumptions and the numerical solution of the related Fokker–Planck and Kolmogorov......–Feller equations. The main obstacle of the method is that a multi-dimensional convolution integral needs to be carried out over the sample space of a set of basic random variables, for which reason the number of these need to be relatively low. In order to handle this problem an approach is suggested, which...

  3. Single step synthesis, characterization and applications of curcumin functionalized iron oxide magnetic nanoparticles

    Energy Technology Data Exchange (ETDEWEB)

    Bhandari, Rohit; Gupta, Prachi; Dziubla, Thomas; Hilt, J. Zach, E-mail: zach.hilt@uky.edu

    2016-10-01

    Magnetic iron oxide nanoparticles have been well known for their applications in magnetic resonance imaging (MRI), hyperthermia, targeted drug delivery, etc. The surface modification of these magnetic nanoparticles has been explored extensively to achieve functionalized materials with potential applications in the biomedical, environmental and catalysis fields. Herein, we report a novel and versatile single-step methodology for developing curcumin-functionalized magnetic Fe3O4 nanoparticles without any additional linkers, using a simple coprecipitation technique. The magnetic nanoparticles (MNPs) were characterized using transmission electron microscopy, X-ray diffraction, Fourier transform infrared spectroscopy and thermogravimetric analysis. The developed MNPs were employed in a cellular application for protection against an inflammatory agent, a polychlorinated biphenyl (PCB) molecule. - Graphical abstract: Novel single-step curcumin-coated magnetic Fe3O4 nanoparticles without any additional linkers for medical, environmental, and other applications. - Highlights: • A novel and versatile single-step methodology for developing curcumin-functionalized magnetic Fe3O4 nanoparticles is reported. • The magnetic nanoparticles (MNPs) were characterized using TEM, XRD, FTIR and TGA. • The developed MNPs were employed in a cellular application for protection against an inflammatory agent, a polychlorinated biphenyl (PCB).

  4. Analytical models of probability distribution and excess noise factor of solid state photomultiplier signals with crosstalk

    International Nuclear Information System (INIS)

    Vinogradov, S.

    2012-01-01

    Silicon Photomultipliers (SiPM), also called Solid State Photomultipliers (SSPM), are based on Geiger mode avalanche breakdown that is limited by a strong negative feedback. An SSPM can detect and resolve single photons due to the high gain and ultra-low excess noise of avalanche multiplication in this mode. Crosstalk and afterpulsing processes associated with the high gain introduce specific excess noise and deteriorate the photon number resolution of the SSPM. The probabilistic features of these processes are widely studied because of their significance for SSPM design, characterization, optimization and application, but the process modeling is mostly based on Monte Carlo simulations and numerical methods. In this study, crosstalk is considered to be a branching Poisson process, and analytical models of the probability distribution and excess noise factor (ENF) of SSPM signals based on the Borel distribution, as an advance on the geometric distribution models, are presented and discussed. The models are found to be in good agreement with the experimental probability distributions for dark counts and few-photon spectra over a wide range of fired-pixel numbers, as well as with the observed super-linear behavior of the crosstalk ENF.
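
    For illustration, the Borel probability mass function underlying such a model — the distribution of the total number of fired pixels triggered by one primary avalanche in a branching Poisson crosstalk process — can be evaluated directly. The crosstalk parameter below is an assumed value, and the excess-noise-factor line uses the generic definition ENF = 1 + Var/Mean^2 rather than the paper's full signal model.

        import numpy as np
        from scipy.special import gammaln

        def borel_pmf(n, mu):
            # Borel distribution: P(n) = exp(-mu*n) * (mu*n)**(n-1) / n!,  n = 1, 2, ...
            # Total number of fired pixels started by one primary avalanche when each
            # avalanche independently triggers a Poisson(mu) number of crosstalk avalanches.
            n = np.asarray(n, dtype=float)
            return np.exp(-mu * n + (n - 1.0) * np.log(mu * n) - gammaln(n + 1.0))

        mu = 0.2                       # assumed mean number of crosstalk avalanches
        n = np.arange(1, 50)
        p = borel_pmf(n, mu)
        mean = np.sum(n * p)           # analytically 1 / (1 - mu)
        var = np.sum((n - mean) ** 2 * p)
        enf = 1.0 + var / mean ** 2    # generic excess-noise-factor definition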

  5. High throughput nonparametric probability density estimation.

    Science.gov (United States)

    Farmer, Jenny; Jacobs, Donald

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and over-fitting the data, as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.

  6. COMPARATIVE ANALYSIS OF ESTIMATION METHODS OF PHARMACY ORGANIZATION BANKRUPTCY PROBABILITY

    Directory of Open Access Journals (Sweden)

    V. L. Adzhienko

    2014-01-01

    Full Text Available The purpose of this study was to determine the probability of bankruptcy by various methods in order to predict a financial crisis in a pharmacy organization. The probability of pharmacy organization bankruptcy was estimated using W. Beaver's method as adopted in the Russian Federation, together with an integrated assessment of financial stability based on scoring analysis. The results obtained by the different methods are comparable and show that the risk of bankruptcy of the pharmacy organization is small.

  7. Production of the Q2 doubly excited states of the hydrogen molecule by electron impact in a single step

    Science.gov (United States)

    Santos, Leonardo O.; Rocha, Alexandre B.; Faria, Nelson Velho de Castro; Jalbert, Ginette

    2017-03-01

    We calculate the single-step cross sections for excitation of Q2 states of H2 and their subsequent dissociation. The cross section calculations were performed within the first Born approximation, and the electronic wave functions were obtained via State-Averaged Multiconfigurational Self-Consistent Field followed by Configuration Interaction. We have assumed autoionization is the only important process competing with dissociation into neutral atoms. We have estimated its probability through a semiclassical approach and compared it with results from the literature. Special attention was given to the Q2 1Σg+(1) state which, as has been shown in a previous work, may dissociate into H(2sσ) + H(2sσ) fragments.

  8. Thermodynamic optimization of ground heat exchangers with single U-tube by entropy generation minimization method

    International Nuclear Information System (INIS)

    Li Min; Lai, Alvin C.K.

    2013-01-01

    Highlights: ► A second-law-based analysis is performed for single U-tube ground heat exchangers. ► Two expressions for the optimal length and flow velocity are developed for GHEs. ► Empirical velocities of GHEs are large compared to thermodynamic optimum values. - Abstract: This paper investigates thermodynamic performance of borehole ground heat exchangers with a single U-tube by the entropy generation minimization method which requires information of heat transfer and fluid mechanics, in addition to thermodynamics analysis. This study first derives an expression for dimensionless entropy generation number, a function that consists of five dimensionless variables, including Reynolds number, dimensionless borehole length, scale factor of pressures, and two duty parameters of ground heat exchangers. The derivation combines a heat transfer model and a hydraulics model for borehole ground heat exchangers with the first law and the second law of thermodynamics. Next, the entropy generation number is minimized to produce two analytical expressions for the optimal length and the optimal flow velocity of ground heat exchangers. Then, this paper discusses and analyzes implications and applications of these optimization formulas with two case studies. An important finding from the case studies is that widely used empirical velocities of circulating fluid are too large to operate ground-coupled heat pump systems in a thermodynamic optimization way. This paper demonstrates that thermodynamic optimal parameters of ground heat exchangers can probably be determined by using the entropy generation minimization method.

  9. Optimum Inductive Methods. A study in Inductive Probability, Bayesian Statistics, and Verisimilitude.

    NARCIS (Netherlands)

    Festa, Roberto

    1992-01-01

    According to the Bayesian view, scientific hypotheses must be appraised in terms of their posterior probabilities relative to the available experimental data. Such posterior probabilities are derived from the prior probabilities of the hypotheses by applying Bayes'theorem. One of the most important

  10. A new method for explicit modelling of single failure event within different common cause failure groups

    International Nuclear Information System (INIS)

    Kančev, Duško; Čepin, Marko

    2012-01-01

    Redundancy and diversity are the main principles of the safety systems in the nuclear industry. Implementation of safety component redundancy has been acknowledged as an effective approach for assuring high levels of system reliability. The existence of redundant components, identical in most cases, implies a probability of their simultaneous failure due to a shared cause—a common cause failure. This paper presents a new method for explicit modelling of a single component failure event within multiple common cause failure groups simultaneously. The method is based on a modification of the frequently utilised Beta Factor parametric model. The motivation for the development of this method lies in the fact that one of the most widespread software packages for fault tree and event tree modelling within probabilistic safety assessment does not offer the option of simultaneously assigning a single failure event to multiple common cause failure groups. In that sense, the proposed method can be seen as an advantage of the explicit modelling of common cause failures. A standard standby safety system is selected as a case study for application and study of the proposed methodology. The results and insights indicate improved, more transparent and more comprehensive models within probabilistic safety assessment.
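
    As context, the standard Beta Factor split of a component's failure probability is sketched below; the fractional allocation across several CCF groups is only an illustrative assumption, not the paper's modified formulation.

        def beta_factor_split(q_total, betas):
            # Split a component's total failure probability q_total into an independent
            # contribution and one common-cause contribution per CCF group, using one
            # beta factor per group. Returns (q_independent, [q_ccf_group_1, ...]).
            # This fractional allocation across several groups is an illustrative
            # assumption; the paper modifies the Beta Factor model in its own way.
            beta_sum = sum(betas)
            if not 0.0 <= beta_sum < 1.0:
                raise ValueError("beta factors must sum to less than 1")
            q_independent = (1.0 - beta_sum) * q_total
            q_ccf = [b * q_total for b in betas]
            return q_independent, q_ccf

        # Example: a component with total failure probability 1e-3 shared between
        # two CCF groups (e.g., same design group and same maintenance-crew group).
        q_ind, q_ccf = beta_factor_split(1.0e-3, [0.05, 0.02])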

  11. Tumor control probability after a radiation of animal tumors

    International Nuclear Information System (INIS)

    Urano, Muneyasu; Ando, Koichi; Koike, Sachiko; Nesumi, Naofumi

    1975-01-01

    Tumor control and regrowth probability of animal tumors irradiated with a single x-ray dose were determined using a spontaneous C3H mouse mammary carcinoma. The cellular radiation sensitivity of the tumor cells and the tumor control probability of the tumor were examined by the TD50 and TCD50 assays, respectively. Tumor growth kinetics were measured by counting the percentage of labelled mitoses and by measuring the growth curve. A mathematical analysis of tumor control probability was made from these results. The proposed formula accounts for cell population kinetics (a division probability model), cell sensitivity to radiation and the number of tumor cells. (auth.)
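
    The abstract does not reproduce the formula itself; as a generic point of reference, the widely used Poisson form of tumor control probability with a linear-quadratic surviving fraction is sketched below. The parameter values are assumptions for illustration, not the authors' model or data.

        import numpy as np

        def tumor_control_probability(dose, n_cells, alpha, beta):
            # Poisson TCP with a linear-quadratic surviving fraction:
            #   SF(D) = exp(-(alpha*D + beta*D**2)),  TCP = exp(-N * SF(D))
            # Generic textbook form, used here only for illustration.
            surviving_fraction = np.exp(-(alpha * dose + beta * dose ** 2))
            return np.exp(-n_cells * surviving_fraction)

        # Illustrative single-dose values (assumptions): 1e7 clonogenic cells,
        # alpha = 0.3 / Gy, beta = 0.03 / Gy^2.
        tcp_curve = tumor_control_probability(np.linspace(0, 80, 81), 1e7, 0.3, 0.03)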

  12. Defining Probability in Sex Offender Risk Assessment.

    Science.gov (United States)

    Elwood, Richard W

    2016-12-01

    There is ongoing debate and confusion over using actuarial scales to predict individuals' risk of sexual recidivism. Much of the debate comes from not distinguishing Frequentist from Bayesian definitions of probability. Much of the confusion comes from applying Frequentist probability to individuals' risk. By definition, only Bayesian probability can be applied to the single case. The Bayesian concept of probability resolves most of the confusion and much of the debate in sex offender risk assessment. Although Bayesian probability is well accepted in risk assessment generally, it has not been widely used to assess the risk of sex offenders. I review the two concepts of probability and show how the Bayesian view alone provides a coherent scheme to conceptualize individuals' risk of sexual recidivism.

  13. Colloidal Quantum Dot Inks for Single-Step-Fabricated Field-Effect Transistors: The Importance of Postdeposition Ligand Removal.

    Science.gov (United States)

    Balazs, Daniel M; Rizkia, Nisrina; Fang, Hong-Hua; Dirin, Dmitry N; Momand, Jamo; Kooi, Bart J; Kovalenko, Maksym V; Loi, Maria Antonietta

    2018-02-14

    Colloidal quantum dots are a class of solution-processed semiconductors with good prospects for photovoltaic and optoelectronic applications. Removal of the surfactant, so-called ligand exchange, is a crucial step in making the solid films conductive, but performing it in solid state introduces surface defects and cracks in the films. Hence, the formation of thick, device-grade films has only been possible through layer-by-layer processing, limiting the technological interest for quantum dot solids. Solution-phase ligand exchange before the deposition allows for the direct deposition of thick, homogeneous films suitable for device applications. In this work, fabrication of field-effect transistors in a single step is reported using blade-coating, an upscalable, industrially relevant technique. Most importantly, a postdeposition washing step results in device properties comparable to the best layer-by-layer processed devices, opening the way for large-scale fabrication and further interest from the research community.

  14. Single step vacuum-free and hydrogen-free synthesis of graphene

    Directory of Open Access Journals (Sweden)

    Christian Orellana

    2017-08-01

    Full Text Available We report a modified method to grow graphene in a single-step process. It is based on chemical vapor deposition and considers the use of methane under extremely adverse synthesis conditions, namely in an open chamber without requiring the addition of gaseous hydrogen in any of the synthesis stages. The synthesis occurs between two parallel Cu plates, heated up via electromagnetic induction. The inductive heating yields a strong thermal gradient between the catalytic substrates and the surrounding environment, promoting the enrichment of hydrogen generated as fragments of the methane molecules within the volume confined by the Cu foils. This induced density gradient is due to thermo-diffusion, also known as the Soret effect. Hydrogen and other low mass molecular fractions produced during the process inhibit oxidative effects and simultaneously reduce the native oxide on the Cu surface. As a result, high quality graphene is obtained on the inner surfaces of the Cu sheets as confirmed by Raman spectroscopy.

  15. High-resolution wave-theory-based ultrasound reflection imaging using the split-step fourier and globally optimized fourier finite-difference methods

    Science.gov (United States)

    Huang, Lianjie

    2013-10-29

    Methods for enhancing ultrasonic reflection imaging are taught utilizing a split-step Fourier propagator in which the reconstruction is based on recursive inward continuation of ultrasonic wavefields in the frequency-space and frequency-wave number domains. The inward continuation within each extrapolation interval consists of two steps. In the first step, a phase-shift term is applied to the data in the frequency-wave number domain for propagation in a reference medium. The second step consists of applying another phase-shift term to data in the frequency-space domain to approximately compensate for ultrasonic scattering effects of heterogeneities within the tissue being imaged (e.g., breast tissue). Results from various data input to the method indicate significant improvements are provided in both image quality and resolution.
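
    One extrapolation interval of a split-step Fourier propagator, with the two phase-shift steps described above, might look like the following sketch for a 1-D monochromatic wavefield. Sign conventions and the handling of evanescent components are simplifying assumptions, not the patented implementation.

        import numpy as np

        def split_step_extrapolate(wavefield, dx, dz, freq, c_ref, c_local):
            # Advance a 1-D monochromatic wavefield one depth step dz.
            #   wavefield : complex array sampled along x at spacing dx
            #   c_ref     : reference (background) speed for this interval
            #   c_local   : array of local speeds along x within the interval
            omega = 2.0 * np.pi * freq
            kx = 2.0 * np.pi * np.fft.fftfreq(wavefield.size, d=dx)
            k_ref = omega / c_ref

            # Step 1: phase shift in the frequency-wavenumber domain (reference medium).
            kz = np.sqrt(np.maximum(k_ref**2 - kx**2, 0.0))   # evanescent parts dropped
            field = np.fft.ifft(np.fft.fft(wavefield) * np.exp(1j * kz * dz))

            # Step 2: phase-shift correction in the frequency-space domain to
            # approximately compensate for heterogeneities (slowness perturbation).
            field *= np.exp(1j * omega * (1.0 / c_local - 1.0 / c_ref) * dz)
            return field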

  16. Thermal disadvantage factor calculation by the multiregion collision probability method

    International Nuclear Information System (INIS)

    Ozgener, B.; Ozgener, H.A.

    2004-01-01

    A multi-region collision probability formulation that is capable of applying white boundary condition directly is presented and applied to thermal neutron transport problems. The disadvantage factors computed are compared with their counterparts calculated by S N methods with both direct and indirect application of white boundary condition. The results of the ABH and collision probability method with indirect application of white boundary condition are also considered and comparisons with benchmark Monte Carlo results are carried out. The studies show that the proposed formulation is capable of calculating thermal disadvantage factor with sufficient accuracy without resorting to the fictitious scattering outer shell approximation associated with the indirect application of the white boundary condition in collision probability solutions

  17. Probability

    CERN Document Server

    Shiryaev, A N

    1996-01-01

    This book contains a systematic treatment of probability from the ground up, starting with intuitive ideas and gradually developing more sophisticated subjects, such as random walks, martingales, Markov chains, ergodic theory, weak convergence of probability measures, stationary stochastic processes, and the Kalman-Bucy filter. Many examples are discussed in detail, and there are a large number of exercises. The book is accessible to advanced undergraduates and can be used as a text for self-study. This new edition contains substantial revisions and updated references. The reader will find a deeper study of topics such as the distance between probability measures, metrization of weak convergence, and contiguity of probability measures. Proofs for a number of important results which were merely stated in the first edition have been added. The author has included new material on the probability of large deviations, and on the central limit theorem for sums of dependent random variables.

  18. A dangerous hobby? Erysipelothrix rhusiopathiae bacteremia most probably acquired from freshwater aquarium fish handling.

    Science.gov (United States)

    Asimaki, E; Nolte, O; Overesch, G; Strahm, C

    2017-08-01

    Erysipelothrix rhusiopathiae is a facultative anaerobic Gram-positive rod that occurs widely in nature and is best known in veterinary medicine for causing swine erysipelas. In humans, infections are rare and mainly considered as occupationally acquired zoonosis. A case of E. rhusiopathiae bacteremia most likely associated with home freshwater aquarium handling is reported. The route of transmission was probably a cut with the dorsal fin of a dead pet fish. A short review of clinical presentations, therapeutic considerations and pitfalls of E. rhusiopathiae infections in humans is presented.

  19. A new method to determine the number of experimental data using statistical modeling methods

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Jung-Ho; Kang, Young-Jin; Lim, O-Kaung; Noh, Yoojeong [Pusan National University, Busan (Korea, Republic of)

    2017-06-15

    For analyzing the statistical performance of physical systems, statistical characteristics of physical parameters such as material properties need to be estimated by collecting experimental data. For accurate statistical modeling, many such experiments may be required, but data are usually quite limited owing to the cost and time constraints of experiments. In this study, a new method for determining a reasonable number of experimental data is proposed using an area metric, after obtaining statistical models using the information on the underlying distribution, the Sequential statistical modeling (SSM) approach, and the Kernel density estimation (KDE) approach. The area metric is used as a convergence criterion to determine the necessary and sufficient number of experimental data to be acquired. The proposed method is validated in simulations, using different statistical modeling methods, different true models, and different convergence criteria. An example data set with 29 data points describing the fatigue strength coefficient of SAE 950X is used for demonstrating the performance of the obtained statistical models that use a pre-determined number of experimental data in predicting the probability of failure for a target fatigue life.
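
    A sketch of an area-metric check between the empirical CDF of a data set and a fitted model CDF is given below; the normal model and the placeholder data values are assumptions for illustration only, not the SAE 950X data or the paper's exact convergence rule.

        import numpy as np
        from scipy import stats

        def area_metric(data, model_cdf, grid_size=2000):
            # Area between the empirical CDF of the data and a model CDF,
            # integrated over the observed data range.
            data = np.sort(np.asarray(data, dtype=float))
            x = np.linspace(data[0], data[-1], grid_size)
            ecdf = np.searchsorted(data, x, side="right") / data.size
            return np.trapz(np.abs(ecdf - model_cdf(x)), x)

        # Illustrative use with an assumed normal model and placeholder data:
        rng = np.random.default_rng(1)
        data = rng.normal(1200.0, 80.0, size=29)
        fitted = stats.norm(*stats.norm.fit(data))
        gap = area_metric(data, fitted.cdf)   # smaller gap -> better agreement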

  20. Stability of cell-free DNA from maternal plasma isolated following a single centrifugation step.

    Science.gov (United States)

    Barrett, Angela N; Thadani, Henna A; Laureano-Asibal, Cecille; Ponnusamy, Sukumar; Choolani, Mahesh

    2014-12-01

    Cell-free fetal DNA can be used for prenatal testing with no procedure-related risk to the fetus. However, yield of fetal DNA is low compared with maternal cell-free DNA fragments, resulting in technical challenges for some downstream applications. To maximize the fetal fraction, careful blood processing procedures are essential. We demonstrate that fetal fraction can be preserved using a single centrifugation step followed by postage of plasma to the laboratory for further processing. Digital PCR was used to quantify copies of total, maternal, and fetal DNA present in single-spun plasma at time points over a two-week period, compared with immediately processed double-spun plasma, with storage at room temperature, 4°C, and -80°C representing different postage scenarios. There was no significant change in total, maternal, or fetal DNA copy numbers when single-spun plasma samples were stored for up to 1 week at room temperature and 2 weeks at -80°C compared with plasma processed within 4 h. Following storage at 4°C no change in composition of cell-free DNA was observed. Single-spun plasma can be transported at room temperature if the journey is expected to take one week or less; shipping on dry ice is preferable for longer journeys. © 2014 John Wiley & Sons, Ltd.

  1. The method of modular characteristic direction probabilities in MPACT

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Z. [School of Nuclear Science and Technology, Xi' an Jiaotong University, No. 28 Xianning west road, Xi' an, Shaanxi 710049 (China); Kochunas, B.; Collins, B.; Downar, T. [Department of Nuclear Engineering and Radiological Sciences, University of Michigan, 2200 Bonisteel, Ann Arbor, MI 48109 (United States); Wu, H. [School of Nuclear Science and Technology, Xi' an Jiaotong University, No. 28 Xianning west road, Xi' an, Shaanxi 710049 (China)

    2013-07-01

    The method of characteristic direction probabilities (CDP) is based on a modular ray tracing technique which combines the benefits of the collision probability method (CPM) and the method of characteristics (MOC). This past year CDP was implemented in the transport code MPACT for 2-D and 3-D transport calculations. By only coupling the fine mesh regions passed by the characteristic rays in the particular direction, the scale of the probabilities matrix is much smaller compared to the CPM. At the same time, the CDP has the same capacity for dealing with complicated geometries as the MOC, because the same modular ray tracing techniques are used. Results from the C5G7 benchmark problems are given for different cases to show the accuracy and efficiency of the CDP compared to MOC. For the cases examined, the CDP and MOC methods were seen to differ in k-eff by about 1-20 pcm, and the computational efficiency of the CDP appears to be better than the MOC for some problems. However, in other problems, particularly when the CDP matrices have to be recomputed from changing cross sections, the CDP does not perform as well. This indicates an area of future work. (authors)

  2. LAMP-B: a Fortran program set for the lattice cell analysis by collision probability method

    International Nuclear Information System (INIS)

    Tsuchihashi, Keiichiro

    1979-02-01

    Nature of physical problem solved: LAMP-B solves an integral transport equation by the collision probability method for a wide variety of lattice cell geometries: spherical, plane and cylindrical lattice cells; square and hexagonal arrays of pin rods; annular clusters and square clusters. LAMP-B produces homogenized constants for multi- and/or few-group diffusion theory programs. Method of solution: LAMP-B performs an exact numerical integration to obtain the collision probabilities. Restrictions on the complexity of the problem: Not more than 68 groups in the fast group calculation, and not more than 20 regions in the resonance integral calculation. Typical running time: It varies with the number of energy groups and the selection of the geometry. Unusual features of the program: Any one, or any combination, of the constituent subprograms can be used, so that partial use of this program is available. (author)

  3. Separation of Be and Al for AMS using single-step column chromatography

    Energy Technology Data Exchange (ETDEWEB)

    Binnie, Steven A., E-mail: sbinnie@uni-koeln.de [Institute for Geology und Mineralogy, University of Cologne, 4-6 Greinstrasse, Cologne D-50939 (Germany); Dunai, Tibor J.; Voronina, Elena; Goral, Tomasz [Institute for Geology und Mineralogy, University of Cologne, 4-6 Greinstrasse, Cologne D-50939 (Germany); Heinze, Stefan; Dewald, Alfred [University of Cologne, Institut für Kernphysik, Zülpicher Str. 77, Cologne D-50937 (Germany)

    2015-10-15

    With the aim of simplifying AMS target preparation procedures for TCN measurements we tested a new extraction chromatography approach which couples an anion exchange resin (WBEC) to a chelating resin (Beryllium resin) to separate Be and Al from dissolved quartz samples. Results show that WBEC–Beryllium resin stacks can be used to provide high purity Be and Al separations using a combination of hydrochloric/oxalic and nitric acid elutions. 10Be and 26Al concentrations from quartz samples prepared using more standard procedures are compared with results from replicate samples prepared using the coupled WBEC–Beryllium resin approach and show good agreement. The new column procedure is performed in a single step, reducing sample preparation times relative to more traditional methods of TCN target production.

  4. Separation of Be and Al for AMS using single-step column chromatography

    Science.gov (United States)

    Binnie, Steven A.; Dunai, Tibor J.; Voronina, Elena; Goral, Tomasz; Heinze, Stefan; Dewald, Alfred

    2015-10-01

    With the aim of simplifying AMS target preparation procedures for TCN measurements we tested a new extraction chromatography approach which couples an anion exchange resin (WBEC) to a chelating resin (Beryllium resin) to separate Be and Al from dissolved quartz samples. Results show that WBEC-Beryllium resin stacks can be used to provide high purity Be and Al separations using a combination of hydrochloric/oxalic and nitric acid elutions. 10Be and 26Al concentrations from quartz samples prepared using more standard procedures are compared with results from replicate samples prepared using the coupled WBEC-Beryllium resin approach and show good agreement. The new column procedure is performed in a single step, reducing sample preparation times relative to more traditional methods of TCN target production.

  5. Physical method to assess a probable maximum precipitation, using CRCM datas

    International Nuclear Information System (INIS)

    Beauchamp, J.

    2009-01-01

    'Full text:' For Nordic hydropower facilities, spillways are designed with a peak discharge based on extreme conditions. This peak discharge is generally derived using the concept of a probable maximum flood (PMF), which results from the combined effect of abundant downpours (probable maximum precipitation - PMP) and rapid snowmelt. On a gauged basin, the weather data record allows for the computation of the PMF. However, uncertainty in the future climate raises questions as to the accuracy of current PMP estimates for existing and future hydropower facilities. This project looks at the potential use of the Canadian Regional Climate Model (CRCM) data to compute the PMF in ungauged basins and to assess potential changes to the PMF in a changing climate. Several steps will be needed to accomplish this task. This paper presents the first step that aims at applying/adapting to CRCM data the in situ moisture maximization technique developed by the World Meteorological Organization, in order to compute the PMP at the watershed scale. The CRCM provides output data on a 45km grid at a six hour time step. All of the needed atmospheric data is available at sixteen different pressure levels. The methodology consists in first identifying extreme precipitation events under current climate conditions. Then, a maximum persisting twelve hours dew point is determined at each grid point and pressure level for the storm duration. Afterwards, the maximization ratio is approximated by merging the effective temperature with dew point and relative humidity values. The variables and maximization ratio are four-dimensional (x, y, z, t) values. Consequently, two different approaches are explored: a partial ratio at each step and a global ratio for the storm duration. For every identified extreme precipitation event, a maximized hyetograph is computed from the application of this ratio, either partial or global, on CRCM precipitation rates. Ultimately, the PMP is the depth of the

  6. Physical method to assess a probable maximum precipitation, using CRCM datas

    Energy Technology Data Exchange (ETDEWEB)

    Beauchamp, J. [Univ. de Quebec, Ecole de technologie superior, Quebec (Canada)

    2009-07-01

    'Full text:' For Nordic hydropower facilities, spillways are designed with a peak discharge based on extreme conditions. This peak discharge is generally derived using the concept of a probable maximum flood (PMF), which results from the combined effect of abundant downpours (probable maximum precipitation - PMP) and rapid snowmelt. On a gauged basin, the weather data record allows for the computation of the PMF. However, uncertainty in the future climate raises questions as to the accuracy of current PMP estimates for existing and future hydropower facilities. This project looks at the potential use of the Canadian Regional Climate Model (CRCM) data to compute the PMF in ungauged basins and to assess potential changes to the PMF in a changing climate. Several steps will be needed to accomplish this task. This paper presents the first step that aims at applying/adapting to CRCM data the in situ moisture maximization technique developed by the World Meteorological Organization, in order to compute the PMP at the watershed scale. The CRCM provides output data on a 45km grid at a six hour time step. All of the needed atmospheric data is available at sixteen different pressure levels. The methodology consists in first identifying extreme precipitation events under current climate conditions. Then, a maximum persisting twelve hours dew point is determined at each grid point and pressure level for the storm duration. Afterwards, the maximization ratio is approximated by merging the effective temperature with dew point and relative humidity values. The variables and maximization ratio are four-dimensional (x, y, z, t) values. Consequently, two different approaches are explored: a partial ratio at each step and a global ratio for the storm duration. For every identified extreme precipitation event, a maximized hyetograph is computed from the application of this ratio, either partial or global, on CRCM precipitation rates. Ultimately, the PMP is the depth of the

  7. Time Dependence of Collision Probabilities During Satellite Conjunctions

    Science.gov (United States)

    Hall, Doyle T.; Hejduk, Matthew D.; Johnson, Lauren C.

    2017-01-01

    The NASA Conjunction Assessment Risk Analysis (CARA) team has recently implemented updated software to calculate the probability of collision (Pc) for Earth-orbiting satellites. The algorithm can employ complex dynamical models for orbital motion, and account for the effects of non-linear trajectories as well as both position and velocity uncertainties. This “3D Pc” method entails computing a 3-dimensional numerical integral for each estimated probability. Our analysis indicates that the 3D method provides several new insights over the traditional “2D Pc” method, even when approximating the orbital motion using the relatively simple Keplerian two-body dynamical model. First, the formulation provides the means to estimate variations in the time derivative of the collision probability, or the probability rate, Rc. For close-proximity satellites, such as those orbiting in formations or clusters, Rc variations can show multiple peaks that repeat or blend with one another, providing insight into the ongoing temporal distribution of risk. For single, isolated conjunctions, Rc analysis provides the means to identify and bound the times of peak collision risk. Additionally, analysis of multiple actual archived conjunctions demonstrates that the commonly used “2D Pc” approximation can occasionally provide inaccurate estimates. These include cases in which the 2D method yields negligibly small probabilities (e.g., Pc < 10^-10), but the 3D estimates are sufficiently large to prompt increased monitoring or collision mitigation (e.g., Pc ≥ 10^-5). Finally, the archive analysis indicates that a relatively efficient calculation can be used to identify which conjunctions will have negligibly small probabilities. This small-Pc screening test can significantly speed the overall risk analysis computation for large numbers of conjunctions.
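
    For reference, the traditional 2D Pc mentioned above integrates the projected relative-position Gaussian over a disk of combined hard-body radius in the encounter plane. The sketch below assumes the miss vector and covariance have already been projected onto that plane; the numerical values are placeholders, and the frame construction from state vectors is omitted.

        import numpy as np

        def pc_2d(miss_vec, cov_2x2, hard_body_radius, n_grid=400):
            # Short-encounter 2-D collision probability: integrate the relative-position
            # Gaussian (projected onto the encounter plane) over a disk whose radius is
            # the combined hard-body radius.
            r = hard_body_radius
            x = np.linspace(-r, r, n_grid)
            y = np.linspace(-r, r, n_grid)
            xx, yy = np.meshgrid(x, y)
            inside = xx**2 + yy**2 <= r**2
            d = np.stack([xx - miss_vec[0], yy - miss_vec[1]], axis=-1)
            inv = np.linalg.inv(cov_2x2)
            quad = np.einsum('...i,ij,...j->...', d, inv, d)
            pdf = np.exp(-0.5 * quad) / (2.0 * np.pi * np.sqrt(np.linalg.det(cov_2x2)))
            cell = (x[1] - x[0]) * (y[1] - y[0])
            return float(np.sum(pdf * inside) * cell)

        # Illustrative numbers (assumed): 500 m miss distance, 20 m combined radius.
        p_c = pc_2d(np.array([500.0, 0.0]), np.diag([200.0**2, 100.0**2]), 20.0)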

  8. Method to Calculate Accurate Top Event Probability in a Seismic PSA

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Woo Sik [Sejong Univ., Seoul (Korea, Republic of)

    2014-05-15

    ACUBE (Advanced Cutset Upper Bound Estimator) calculates the top event probability and importance measures from cutsets by dividing the cutsets into major and minor groups depending on their probability: cutsets with higher probability are included in the major group and the others in the minor group, with the major cutsets converted into a Binary Decision Diagram (BDD). ACUBE works by dividing the cutsets into these two groups, calculating the top event probability and importance measures in each group, and combining the two results. The top event probability and importance measures of the higher-probability group are calculated exactly, while those of the lower-probability group are calculated with an approximation such as MCUB. The ACUBE algorithm is useful for reducing the conservatism that is caused by approximating the top event probability and importance measure calculations from the given cutsets. By applying the ACUBE algorithm to the seismic PSA cutsets, the accuracy of the top event probability and importance measures can be significantly improved. This study shows that careful attention should be paid, and an appropriate method provided, in order to avoid significant overestimation in the top event probability calculation. Owing to the strengths explained in this study, ACUBE has become a vital tool for calculating a more accurate CDF from seismic PSA cutsets than the conventional probability calculation method.
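
    The MCUB approximation used for the lower-probability group is the min-cut upper bound, 1 − Π(1 − P_i). A sketch follows; the probability threshold separating major from minor cutsets is an assumed illustration, not ACUBE's actual rule, and the exact BDD treatment of the major group is not shown.

        def mcub(cutset_probs):
            # Min-cut upper bound for the top event probability:
            # 1 - prod(1 - P_i) over all cutsets (treats cutsets as independent).
            p = 1.0
            for q in cutset_probs:
                p *= (1.0 - q)
            return 1.0 - p

        # Illustrative split (assumed threshold):
        cutsets = [0.2, 0.15, 3e-3, 1e-4, 5e-5]
        major = [q for q in cutsets if q >= 1e-2]   # would be handled exactly via a BDD
        minor = [q for q in cutsets if q < 1e-2]    # approximated, e.g. with MCUB
        p_minor = mcub(minor)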

  9. SPAR-H Step-by-Step Guidance

    Energy Technology Data Exchange (ETDEWEB)

    April M. Whaley; Dana L. Kelly; Ronald L. Boring; William J. Galyean

    2012-06-01

    Step-by-step guidance was developed recently at Idaho National Laboratory for the US Nuclear Regulatory Commission on the use of the Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H) method for quantifying Human Failure Events (HFEs). This work was done to address SPAR-H user needs, specifically requests for additional guidance on the proper application of various aspects of the methodology. This paper overviews the steps of the SPAR-H analysis process and highlights some of the most important insights gained during the development of the step-by-step directions. This supplemental guidance for analysts is applicable when plant-specific information is available, and goes beyond the general guidance provided in existing SPAR-H documentation. The steps highlighted in this paper are: Step-1, Categorizing the HFE as Diagnosis and/or Action; Step-2, Rate the Performance Shaping Factors; Step-3, Calculate PSF-Modified HEP; Step-4, Accounting for Dependence, and; Step-5, Minimum Value Cutoff.

  10. Milestones of European Integration: Which matters most for Export Openness?

    DEFF Research Database (Denmark)

    Hiller, Sanne; Kruse, Robinson

    The European integration process has removed barriers to trade within Europe. We analyze which integration step has most profoundly influenced the trending behavior of export openness. We endogenously determine the single most decisive break in the trend, account for strong cross-country heterogeneity... and the Netherlands are the Euro introduction, the Maastricht Treaty, the Exchange Rate Mechanism I and the merger of EFTA and EEC into the European Economic Area, respectively. Our empirical results have important implications for inner-European economic development, as export openness feeds back into growth...

  11. Single-step generation of metal-plasma polymer multicore@shell nanoparticles from the gas phase.

    Science.gov (United States)

    Solař, Pavel; Polonskyi, Oleksandr; Olbricht, Ansgar; Hinz, Alexander; Shelemin, Artem; Kylián, Ondřej; Choukourov, Andrei; Faupel, Franz; Biederman, Hynek

    2017-08-17

    Nanoparticles composed of multiple silver cores and a plasma polymer shell (multicore@shell) were prepared in a single step with a gas aggregation cluster source operating with Ar/hexamethyldisiloxane mixtures and optionally oxygen. The size distribution of the metal inclusions as well as the chemical composition and the thickness of the shells were found to be controlled by the composition of the working gas mixture. Shell matrices ranging from organosilicon plasma polymer to nearly stoichiometric SiO2 were obtained. The method allows facile fabrication of multicore@shell nanoparticles with tailored functional properties, as demonstrated here with the optical response.

  12. Aesthetic rehabilitation of a patient with an anterior maxillectomy defect, using an innovative single-step, single unit, plastic-based hollow obturator

    Directory of Open Access Journals (Sweden)

    Vishwas Bhatia

    2015-06-01

    Full Text Available What could be better than improving the comfort and quality of life of a patient with a life-threatening disease? Maxillectomy, the partial or total removal of the maxilla in patients suffering from benign or malignant neoplasms, creates a challenging defect for the maxillofacial prosthodontist when attempting to provide an effective obturator. Although previous methods have been described for rehabilitation of such patients, our goal should be to devise one stage techniques that will allow the patient an improved quality of life as soon as possible. The present report describes the aesthetic rehabilitation of a maxillectomy patient by use of a hollow obturator. The obturator is fabricated through a processing technique which is a variation of other well-known techniques, consisting of the use of a single-step flasking procedure to fabricate a single-unit hollow obturator using the lost salt technique. As our aim is to aesthetically and functionally rehabilitate the patient as soon as possible, the present method of restoring the maxillectomy defect is cost-effective, time-saving and beneficial for the patient.

  13. A combined Importance Sampling and Kriging reliability method for small failure probabilities with time-demanding numerical models

    International Nuclear Information System (INIS)

    Echard, B.; Gayton, N.; Lemaire, M.; Relun, N.

    2013-01-01

    Applying reliability methods to a complex structure is often delicate for two main reasons. First, such a structure is fortunately designed with codified rules leading to a large safety margin which means that failure is a small probability event. Such a probability level is difficult to assess efficiently. Second, the structure mechanical behaviour is modelled numerically in an attempt to reproduce the real response and numerical model tends to be more and more time-demanding as its complexity is increased to improve accuracy and to consider particular mechanical behaviour. As a consequence, performing a large number of model computations cannot be considered in order to assess the failure probability. To overcome these issues, this paper proposes an original and easily implementable method called AK-IS for active learning and Kriging-based Importance Sampling. This new method is based on the AK-MCS algorithm previously published by Echard et al. [AK-MCS: an active learning reliability method combining Kriging and Monte Carlo simulation. Structural Safety 2011;33(2):145–54]. It associates the Kriging metamodel and its advantageous stochastic property with the Importance Sampling method to assess small failure probabilities. It enables the correction or validation of the FORM approximation with only a very few mechanical model computations. The efficiency of the method is, first, proved on two academic applications. It is then conducted for assessing the reliability of a challenging aerospace case study submitted to fatigue.
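
    The estimator that such a method ultimately evaluates is a standard-normal-space importance sampling estimate of the failure probability, with the sampling density centered on a FORM-style design point. The sketch below omits the Kriging surrogate and the active-learning loop; the limit-state function and design point are placeholders, not the paper's case studies.

        import numpy as np

        def importance_sampling_pf(g, u_star, n_samples=10_000, seed=None):
            # Estimate P[g(U) <= 0] in standard normal space by sampling from a
            # unit-variance Gaussian centered on the design point u_star.
            rng = np.random.default_rng(seed)
            dim = len(u_star)
            u = rng.standard_normal((n_samples, dim)) + u_star
            # Importance weights: standard normal pdf divided by the shifted pdf.
            log_w = -0.5 * np.sum(u**2, axis=1) + 0.5 * np.sum((u - u_star)**2, axis=1)
            indicator = (np.apply_along_axis(g, 1, u) <= 0.0)
            return np.mean(indicator * np.exp(log_w))

        # Placeholder linear limit state with reliability index 4 (P_f ~ 3.2e-5):
        g = lambda u: 4.0 - (u[0] + u[1]) / np.sqrt(2.0)
        u_star = np.array([4.0 / np.sqrt(2.0), 4.0 / np.sqrt(2.0)])
        p_f = importance_sampling_pf(g, u_star)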

  14. Improved perovskite phototransistor prepared using multi-step annealing method

    Science.gov (United States)

    Cao, Mingxuan; Zhang, Yating; Yu, Yu; Yao, Jianquan

    2018-02-01

    Organic-inorganic hybrid perovskites with good intrinsic physical properties have received substantial interest for solar cell and optoelectronic applications. However, perovskite films always suffer from low carrier mobility due to structural imperfections, including sharp grain boundaries and pinholes, which restrict their device performance and application potential. Here we demonstrate a straightforward strategy based on a multi-step annealing process to improve the performance of perovskite photodetectors. Annealing temperature and duration greatly affect the surface morphology and optoelectrical properties of the perovskite, which in turn determine the device performance of the phototransistor. Perovskite films treated with the multi-step annealing method tend to be highly uniform, well crystallized and of high surface coverage, and exhibit stronger ultraviolet-visible absorption and photoluminescence compared to perovskites prepared by the conventional one-step annealing process. The field-effect mobility of the perovskite photodetector treated by the one-step direct annealing method is 0.121 (0.062) cm² V⁻¹ s⁻¹ for holes (electrons), which increases to 1.01 (0.54) cm² V⁻¹ s⁻¹ for the device treated with the multi-step slow annealing method. Moreover, the perovskite phototransistors exhibit a fast photoresponse speed of 78 μs. In general, this work focuses on the influence of the annealing method on the perovskite phototransistor rather than on obtaining its best parameters. These findings show that multi-step annealing is a feasible route to high-performance perovskite-based photodetectors.

  15. General Methods for Analysis of Sequential “n-step” Kinetic Mechanisms: Application to Single Turnover Kinetics of Helicase-Catalyzed DNA Unwinding

    Science.gov (United States)

    Lucius, Aaron L.; Maluf, Nasib K.; Fischer, Christopher J.; Lohman, Timothy M.

    2003-01-01

    Helicase-catalyzed DNA unwinding is often studied using “all or none” assays that detect only the final product of fully unwound DNA. Even using these assays, quantitative analysis of DNA unwinding time courses for DNA duplexes of different lengths, L, using “n-step” sequential mechanisms, can reveal information about the number of intermediates in the unwinding reaction and the “kinetic step size”, m, defined as the average number of basepairs unwound between two successive rate limiting steps in the unwinding cycle. Simultaneous nonlinear least-squares analysis using “n-step” sequential mechanisms has previously been limited by an inability to float the number of “unwinding steps”, n, and m, in the fitting algorithm. Here we discuss the behavior of single turnover DNA unwinding time courses and describe novel methods for nonlinear least-squares analysis that overcome these problems. Analytic expressions for the time courses, fss(t), when obtainable, can be written using gamma and incomplete gamma functions. When analytic expressions are not obtainable, the numerical solution of the inverse Laplace transform can be used to obtain fss(t). Both methods allow n and m to be continuous fitting parameters. These approaches are generally applicable to enzymes that translocate along a lattice or require repetition of a series of steps before product formation. PMID:14507688
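
    For the simplest case of n identical rate-limiting steps with rate constant k_obs, the gamma-function form of the "all or none" time course mentioned above reduces to a regularized lower incomplete gamma function, which keeps n (and hence the kinetic step size m = L/n) available as a continuous fitting parameter. The values below are placeholders, not fitted helicase data (Python):

        import numpy as np
        from scipy.special import gammainc   # regularized lower incomplete gamma P(n, x)

        def f_ss(t, n, k_obs, amplitude=1.0):
            # fraction of fully unwound duplex: probability that all n steps are complete by time t
            return amplitude * gammainc(n, k_obs * np.asarray(t))

        t = np.linspace(0.0, 10.0, 6)
        print(f_ss(t, n=4.3, k_obs=1.5))     # n may be non-integer when floated in a fit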

  16. Probability estimation with machine learning methods for dichotomous and multicategory outcome: theory.

    Science.gov (United States)

    Kruppa, Jochen; Liu, Yufeng; Biau, Gérard; Kohler, Michael; König, Inke R; Malley, James D; Ziegler, Andreas

    2014-07-01

    Probability estimation for binary and multicategory outcomes using logistic and multinomial logistic regression has a long-standing tradition in biostatistics. However, biases may occur if the model is misspecified. In contrast, outcome probabilities for individuals can be estimated consistently with machine learning approaches, including k-nearest neighbors (k-NN), bagged nearest neighbors (b-NN), random forests (RF), and support vector machines (SVM). Because machine learning methods are rarely used by applied biostatisticians, the primary goal of this paper is to explain the concept of probability estimation with these methods and to summarize recent theoretical findings. Probability estimation in k-NN, b-NN, and RF can be embedded into the class of nonparametric regression learning machines; therefore, we start with the construction of nonparametric regression estimates and review results on consistency and rates of convergence. In SVMs, outcome probabilities for individuals are estimated consistently by repeatedly solving classification problems. For SVMs we first review the classification problem and then dichotomous probability estimation. Next we extend the algorithms for estimating probabilities using k-NN, b-NN, and RF to multicategory outcomes and discuss approaches for the multicategory probability estimation problem using SVM. In simulation studies for dichotomous and multicategory dependent variables we demonstrate the general validity of the machine learning methods and compare them with logistic regression. However, each method fails in at least one simulation scenario. We conclude with a discussion of the failures and give recommendations for selecting and tuning the methods. Applications to real data and example code are provided in a companion article (doi:10.1002/bimj.201300077). © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
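
    A minimal sketch of the kind of comparison described above, on simulated dichotomous data: class probabilities are estimated with a random forest and with logistic regression and scored with the Brier score. The dataset settings are assumptions, not those of the paper (Python):

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import brier_score_loss
        from sklearn.model_selection import train_test_split

        X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

        rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
        lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

        for name, model in [("random forest", rf), ("logistic regression", lr)]:
            p = model.predict_proba(X_te)[:, 1]          # estimated P(Y = 1 | x)
            print(name, "Brier score:", round(brier_score_loss(y_te, p), 4))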

  17. Estimation of functional failure probability of passive systems based on subset simulation method

    International Nuclear Information System (INIS)

    Wang Dongqing; Wang Baosheng; Zhang Jianmin; Jiang Jing

    2012-01-01

    In order to solve the problem of multi-dimensional epistemic uncertainties and the small functional failure probability of passive systems, an innovative reliability analysis algorithm called subset simulation based on Markov chain Monte Carlo was presented. The method is founded on the idea that a small failure probability can be expressed as a product of larger conditional failure probabilities by introducing a proper choice of intermediate failure events. Markov chain Monte Carlo simulation was implemented to efficiently generate conditional samples for estimating the conditional failure probabilities. Taking the AP1000 passive residual heat removal system as an example, the uncertainties related to the model of the passive system and the numerical values of its input parameters were considered in this paper, and the probability of functional failure was then estimated with the subset simulation method. The numerical results demonstrate that the subset simulation method has high computing efficiency and excellent computing accuracy compared with traditional probability analysis methods. (authors)
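
    The decomposition of a small failure probability into a product of larger conditional probabilities can be sketched with the bare-bones subset simulation below, working in standard normal space. The limit state, proposal scale and sample sizes are assumptions for illustration; a plain random-walk Metropolis resampler stands in for the modified Metropolis algorithm normally used, and the authors' AP1000 model is not reproduced (Python):

        import numpy as np

        def g(x):
            # hypothetical limit state: failure when g <= 0, exact pf = Phi(-4) ~ 3.2e-5
            return 4.0 - (x[..., 0] + x[..., 1]) / np.sqrt(2.0)

        def subset_simulation(g, dim=2, n=2000, p0=0.1, seed=0, max_levels=12):
            rng = np.random.default_rng(seed)
            x = rng.standard_normal((n, dim))          # level 0: crude Monte Carlo
            pf = 1.0
            for _ in range(max_levels):
                gx = g(x)
                thresh = np.quantile(gx, p0)           # intermediate failure threshold
                if thresh <= 0.0:                      # final level reached
                    return pf * np.mean(gx <= 0.0)
                pf *= p0                               # P(F_i | F_{i-1}) fixed at p0
                seeds = x[gx <= thresh]
                out = []
                per_seed = int(np.ceil(n / len(seeds)))
                for cur in seeds:                      # regenerate conditional samples by MCMC
                    cur = cur.copy()
                    for _ in range(per_seed):
                        cand = cur + rng.normal(scale=1.0, size=dim)
                        ratio = np.exp(0.5 * (cur @ cur - cand @ cand))
                        if rng.random() < min(1.0, ratio) and g(cand) <= thresh:
                            cur = cand
                        out.append(cur.copy())
                x = np.array(out[:n])
            return pf

        print(f"estimated failure probability: {subset_simulation(g):.1e}")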

  18. SYSTEMATIZATION OF THE BASIC STEPS OF THE STEP-AEROBICS

    Directory of Open Access Journals (Sweden)

    Darinka Korovljev

    2011-03-01

    Full Text Available Following the development of the powerful sport industry, many new opportunities have appeared for creating new exercise programmes that use specific equipment. One such programme is certainly step-aerobics. Step-aerobics can be defined as a type of aerobics consisting of the basic aerobic steps (basic steps) applied in exercising on a stepper (step bench) whose height can be adjusted. Step-aerobics itself can be divided into several groups, depending on the following: type of music, working methods and the adopted knowledge of the attendants. In this work, the systematization of the basic steps of step-aerobics was made on the basis of the following criteria: step origin, number of leg motions in stepping, and the body support at the end of the step. Systematization of the basic steps of step-aerobics is quite significant for providing a concrete review of the existing basic steps, thus making the creation of a step-aerobics lesson easier.

  19. Two-step single slope/SAR ADC with error correction for CMOS image sensor.

    Science.gov (United States)

    Tang, Fang; Bermak, Amine; Amira, Abbes; Amor Benammar, Mohieddine; He, Debiao; Zhao, Xiaojin

    2014-01-01

    Conventional two-step ADC for CMOS image sensor requires full resolution noise performance in the first stage single slope ADC, leading to high power consumption and large chip area. This paper presents an 11-bit two-step single slope/successive approximation register (SAR) ADC scheme for CMOS image sensor applications. The first stage single slope ADC generates a 3-bit data and 1 redundant bit. The redundant bit is combined with the following 8-bit SAR ADC output code using a proposed error correction algorithm. Instead of requiring full resolution noise performance, the first stage single slope circuit of the proposed ADC can tolerate up to 3.125% quantization noise. With the proposed error correction mechanism, the power consumption and chip area of the single slope ADC are significantly reduced. The prototype ADC is fabricated using 0.18 μm CMOS technology. The chip area of the proposed ADC is 7 μm × 500 μm. The measurement results show that the energy efficiency figure-of-merit (FOM) of the proposed ADC core is only 125 pJ/sample under 1.4 V power supply and the chip area efficiency is 84 k μm² · cycles/sample.

  20. Two-Step Single Slope/SAR ADC with Error Correction for CMOS Image Sensor

    Directory of Open Access Journals (Sweden)

    Fang Tang

    2014-01-01

    Full Text Available Conventional two-step ADC for CMOS image sensor requires full resolution noise performance in the first stage single slope ADC, leading to high power consumption and large chip area. This paper presents an 11-bit two-step single slope/successive approximation register (SAR ADC scheme for CMOS image sensor applications. The first stage single slope ADC generates a 3-bit data and 1 redundant bit. The redundant bit is combined with the following 8-bit SAR ADC output code using a proposed error correction algorithm. Instead of requiring full resolution noise performance, the first stage single slope circuit of the proposed ADC can tolerate up to 3.125% quantization noise. With the proposed error correction mechanism, the power consumption and chip area of the single slope ADC are significantly reduced. The prototype ADC is fabricated using 0.18 μm CMOS technology. The chip area of the proposed ADC is 7 μm × 500 μm. The measurement results show that the energy efficiency figure-of-merit (FOM of the proposed ADC core is only 125 pJ/sample under 1.4 V power supply and the chip area efficiency is 84 k μm2·cycles/sample.

  1. Solving point reactor kinetic equations by time step-size adaptable numerical methods

    International Nuclear Information System (INIS)

    Liao Chaqing

    2007-01-01

    Based on an analysis of the effect of the time step-size on numerical solutions, this paper shows the necessity of step-size adaptation. Based on the relationship between error and step-size, two step-size adaptation methods for solving initial value problems (IVPs) are introduced: the Two-Step Method and the Embedded Runge-Kutta Method. The PRKEs were solved by the implicit Euler method with step-sizes optimized by using the Two-Step Method. It was observed that the control error has an important influence on the step-size and the accuracy of the solutions. With suitable control errors, the solutions of the PRKEs computed by the above mentioned method are reasonably accurate. The accuracy and usage of the MATLAB built-in ODE solvers ode23 and ode45, both of which adopt the Runge-Kutta-Fehlberg method, were also studied and discussed. (authors)
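
    The error-based step-size control discussed above can be sketched with a generic step-doubling controller wrapped around an implicit Euler integrator: one full step is compared with two half steps, the difference serves as the local error estimate, and the step-size is scaled by the error/tolerance ratio. This is a generic illustration, not the paper's exact Two-Step Method, and the stiff toy matrix and tolerance are assumptions (Python):

        import numpy as np

        A = np.array([[-100.0, 0.1],
                      [ 100.0, -0.1]])                 # stiff linear toy system, not real PRKE data

        def implicit_euler(y, h):
            return np.linalg.solve(np.eye(2) - h * A, y)

        def integrate(y0, t_end, h=1e-3, tol=1e-6):
            t, y, hist = 0.0, np.array(y0, float), []
            while t_end - t > 1e-12:
                h = min(h, t_end - t)
                y_big = implicit_euler(y, h)                                  # one step of size h
                y_small = implicit_euler(implicit_euler(y, h / 2), h / 2)     # two half steps
                err = np.linalg.norm(y_big - y_small)                         # local error estimate
                if err <= tol:                                                # accept the step
                    t, y = t + h, y_small
                    hist.append((t, y.copy()))
                # first-order method: scale h by (tol/err)^(1/2), kept within [0.1, 2]
                h *= min(2.0, max(0.1, 0.9 * np.sqrt(tol / max(err, 1e-16))))
            return hist

        print(integrate([1.0, 0.0], t_end=1.0)[-1])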

  2. An evaluation method for tornado missile strike probability with stochastic correction

    Energy Technology Data Exchange (ETDEWEB)

    Eguchi, Yuzuru; Murakami, Takahiro; Hirakuchi, Hiromaru; Sugimoto, Soichiro; Hattori, Yasuo [Nuclear Risk Research Center (External Natural Event Research Team), Central Research Institute of Electric Power Industry, Abiko (Japan)

    2017-03-15

    An efficient evaluation method for the probability of a tornado missile strike without using the Monte Carlo method is proposed in this paper. A major part of the proposed probability evaluation is based on numerical results computed using an in-house code, Tornado-borne missile analysis code, which enables us to evaluate the liftoff and flight behaviors of unconstrained objects on the ground driven by a tornado. Using the Tornado-borne missile analysis code, we can obtain a stochastic correlation between local wind speed and flight distance of each object, and this stochastic correlation is used to evaluate the conditional strike probability, QV(r), of a missile located at position r, where the local wind speed is V. In contrast, the annual exceedance probability of local wind speed, which can be computed using a tornado hazard analysis code, is used to derive the probability density function, p(V). Then, we finally obtain the annual probability of tornado missile strike on a structure with the convolutional integration of product of QV(r) and p(V) over V. The evaluation method is applied to a simple problem to qualitatively confirm the validity, and to quantitatively verify the results for two extreme cases in which an object is located just in the vicinity of or far away from the structure.
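
    Only the final convolution step is sketched below: the annual strike probability is obtained by integrating the product of the conditional strike probability QV(r) and the wind-speed probability density p(V) over V. Both curves are made-up placeholders, not output of the Tornado-borne missile analysis code (Python):

        import numpy as np

        V = np.linspace(20.0, 100.0, 81)                      # local wind speed [m/s]

        # hypothetical conditional strike probability of the object at position r
        Q_V = 1e-3 / (1.0 + np.exp(-(V - 70.0) / 5.0))

        # hypothetical annual exceedance probability of V; density by differentiation
        exceed = 1e-4 * np.exp(-(V - 20.0) / 15.0)
        p_V = -np.gradient(exceed, V)                         # p(V) = -dF_exc/dV

        annual_strike_prob = np.trapz(Q_V * p_V, V)
        print(f"annual strike probability: {annual_strike_prob:.3e}")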

  3. An evaluation method for tornado missile strike probability with stochastic correction

    International Nuclear Information System (INIS)

    Eguchi, Yuzuru; Murakami, Takahiro; Hirakuchi, Hiromaru; Sugimoto, Soichiro; Hattori, Yasuo

    2017-01-01

    An efficient evaluation method for the probability of a tornado missile strike without using the Monte Carlo method is proposed in this paper. A major part of the proposed probability evaluation is based on numerical results computed using an in-house code, Tornado-borne missile analysis code, which enables us to evaluate the liftoff and flight behaviors of unconstrained objects on the ground driven by a tornado. Using the Tornado-borne missile analysis code, we can obtain a stochastic correlation between local wind speed and flight distance of each object, and this stochastic correlation is used to evaluate the conditional strike probability, QV(r), of a missile located at position r, where the local wind speed is V. In contrast, the annual exceedance probability of local wind speed, which can be computed using a tornado hazard analysis code, is used to derive the probability density function, p(V). Then, we finally obtain the annual probability of tornado missile strike on a structure with the convolutional integration of product of QV(r) and p(V) over V. The evaluation method is applied to a simple problem to qualitatively confirm the validity, and to quantitatively verify the results for two extreme cases in which an object is located just in the vicinity of or far away from the structure

  4. Modeling single-file diffusion with step fractional Brownian motion and a generalized fractional Langevin equation

    International Nuclear Information System (INIS)

    Lim, S C; Teo, L P

    2009-01-01

    Single-file diffusion behaves as normal diffusion at small time and as subdiffusion at large time. These properties can be described in terms of fractional Brownian motion with variable Hurst exponent or multifractional Brownian motion. We introduce a new stochastic process called Riemann–Liouville step fractional Brownian motion which can be regarded as a special case of multifractional Brownian motion with a step function type of Hurst exponent tailored for single-file diffusion. Such a step fractional Brownian motion can be obtained as a solution of the fractional Langevin equation with zero damping. Various kinds of fractional Langevin equations and their generalizations are then considered in order to decide whether their solutions provide the correct description of the long and short time behaviors of single-file diffusion. The cases where the dissipative memory kernel is a Dirac delta function, a power-law function and a combination of these functions are studied in detail. In addition to the case where the short time behavior of single-file diffusion behaves as normal diffusion, we also consider the possibility of a process that begins as ballistic motion

  5. Dispersed single-phase-step Michelson interferometer for Doppler imaging using sunlight.

    Science.gov (United States)

    Wan, Xiaoke; Ge, Jian

    2012-09-15

    A Michelson interferometer is dispersed with a fiber array-fed spectrograph, providing 59 Doppler sensing channels using sunlight in the 510-570 nm wavelength region. The interferometer operates at a single-phase-step mode, which is particularly advantageous in multiplexing and data processing compared to the phase-stepping mode of other interferometer spectrometer instruments. Spectral templates are prepared using a standard solar spectrum and simulated interferometer modulations, such that the correlation function with a measured 1D spectrum determines the Doppler shift. Doppler imaging of a rotating cylinder is demonstrated. The average Doppler sensitivity is ~12 m/s, with some channels reaching ~5 m/s.
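
    The template-correlation step can be sketched as follows: the measured 1D spectrum is correlated against templates pre-shifted by trial Doppler velocities and the best-matching velocity is kept. The three-Gaussian-line "solar" template, the noise-free measurement and the 1 m/s search grid are assumptions for illustration only (Python):

        import numpy as np

        c = 299_792_458.0                                    # speed of light [m/s]
        wl = np.linspace(510.0, 570.0, 6000)                 # wavelength grid [nm]
        lines = [520.0, 540.0, 560.0]
        template = 1.0 - sum(0.4 * np.exp(-0.5 * ((wl - l0) / 0.05) ** 2) for l0 in lines)

        def doppler_shift(spec, v):
            # resample the spectrum onto the fixed grid after a Doppler shift of v [m/s]
            return np.interp(wl, wl * (1.0 + v / c), spec)

        measured = doppler_shift(template, 12.0)             # "observed" spectrum

        trial_v = np.linspace(-50.0, 50.0, 101)              # 1 m/s search grid
        corr = [float(np.dot(measured, doppler_shift(template, v))) for v in trial_v]
        print("recovered velocity [m/s]:", trial_v[int(np.argmax(corr))])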

  6. Synthetic lethality between gene defects affecting a single non-essential molecular pathway with reversible steps.

    Directory of Open Access Journals (Sweden)

    Andrei Zinovyev

    2013-04-01

    Full Text Available Systematic analysis of synthetic lethality (SL constitutes a critical tool for systems biology to decipher molecular pathways. The most accepted mechanistic explanation of SL is that the two genes function in parallel, mutually compensatory pathways, known as between-pathway SL. However, recent genome-wide analyses in yeast identified a significant number of within-pathway negative genetic interactions. The molecular mechanisms leading to within-pathway SL are not fully understood. Here, we propose a novel mechanism leading to within-pathway SL involving two genes functioning in a single non-essential pathway. This type of SL termed within-reversible-pathway SL involves reversible pathway steps, catalyzed by different enzymes in the forward and backward directions, and kinetic trapping of a potentially toxic intermediate. Experimental data with recombinational DNA repair genes validate the concept. Mathematical modeling recapitulates the possibility of kinetic trapping and revealed the potential contributions of synthetic, dosage-lethal interactions in such a genetic system as well as the possibility of within-pathway positive masking interactions. Analysis of yeast gene interaction and pathway data suggests broad applicability of this novel concept. These observations extend the canonical interpretation of synthetic-lethal or synthetic-sick interactions with direct implications to reconstruct molecular pathways and improve therapeutic approaches to diseases such as cancer.

  7. Qualification of the calculational methods of the fluence in the pressurised water reactors. Improvement of the cross sections treatment by the probability table method

    International Nuclear Information System (INIS)

    Zheng, S.H.

    1994-01-01

    It is indispensable to know the fluence on the nuclear reactor pressure vessel, and the cross sections and their treatment play an important role in this problem. In this study, two ''benchmarks'' have been interpreted with the Monte Carlo transport program TRIPOLI to qualify the calculational method and the cross sections used in the calculations. For the treatment of the cross sections, the multigroup method is usually used, but it has some problems, such as the difficulty of choosing the weighting function and the need for a large number of energy groups to represent the cross-section fluctuations well. In this thesis, we propose a new method called the ''Probability Table Method'' to treat the neutron cross sections. For the qualification, a program simulating neutron transport by the Monte Carlo method in one dimension has been written; the comparison of the multigroup results and the probability table results shows the advantages of this new method. The probability table has also been introduced into the TRIPOLI program; the calculational results for the iron deep-penetration benchmark have been improved in comparison with the experimental results. So it is of interest to use this new method in shielding and neutronics calculations. (author). 42 refs., 109 figs., 36 tabs
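
    A minimal sketch of what a probability table supplies during Monte Carlo tracking: within one broad energy band the fluctuating cross section is drawn from a small table of (probability, value) pairs instead of being replaced by a single group-averaged constant. The band probabilities and cross-section values below are invented for illustration (Python):

        import numpy as np

        rng = np.random.default_rng(0)

        # hypothetical probability table for one energy band: band probabilities and total xs [barn]
        p_band  = np.array([0.20, 0.35, 0.30, 0.15])
        xs_band = np.array([ 2.0,  8.0, 25.0, 90.0])

        def sample_total_xs(n):
            return rng.choice(xs_band, size=n, p=p_band)

        samples = sample_total_xs(100_000)
        print("table mean xs :", np.dot(p_band, xs_band))
        print("sampled mean  :", samples.mean())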

  8. Improved method for estimating particle scattering probabilities to finite detectors for Monte Carlo simulation

    International Nuclear Information System (INIS)

    Mickael, M.; Gardner, R.P.; Verghese, K.

    1988-01-01

    An improved method for calculating the total probability of particle scattering within the solid angle subtended by finite detectors is developed, presented, and tested. The limiting polar and azimuthal angles subtended by the detector are measured from the direction that most simplifies their calculation rather than from the incident particle direction. A transformation of the particle scattering probability distribution function (pdf) is made to match the transformation of the direction from which the limiting angles are measured. The particle scattering probability to the detector is estimated by evaluating the integral of the transformed pdf over the range of the limiting angles measured from the preferred direction. A general formula for transforming the particle scattering pdf is derived from basic principles and applied to four important scattering pdf's; namely, isotropic scattering in the Lab system, isotropic neutron scattering in the center-of-mass system, thermal neutron scattering by the free gas model, and gamma-ray Klein-Nishina scattering. Some approximations have been made to these pdf's to enable analytical evaluations of the final integrals. These approximations are shown to be valid over a wide range of energies and for most elements. The particle scattering probability to spherical, planar circular, and right circular cylindrical detectors has been calculated using the new and previously reported direct approach. Results indicate that the new approach is valid and is computationally faster by orders of magnitude

  9. Determination of the number of radicals in the initial chain reactions by mathematical methods

    Directory of Open Access Journals (Sweden)

    Pejović Branko B.

    2009-01-01

    Full Text Available Starting from the fact that the real mechanism in a chemical equation takes place through a certain number of radicals which participate in simultaneous reactions and initiate chain reactions according to a particular pattern, the aim of this study is to determine their number in the first couple of steps of the reaction. Based on this, the numbers of radicals were determined in the general case, in the form of linear difference equations, which, by certain mathematical transformations, were reduced to one equation that satisfies a particular numeric series, entirely defined if its first members are known. The equation obtained was solved by a common method developed in the theory of numeric series, in which its solutions represent the number of radicals in an arbitrary step of the reaction observed, in analytical form. In the final part of the study, the method was tested and verified using two characteristic examples from general chemistry. The study also suggests a more efficient procedure obtained by reducing the difference equation to a lower order.

  10. A large number of stepping motor network construction by PLC

    Science.gov (United States)

    Mei, Lin; Zhang, Kai; Hongqiang, Guo

    2017-11-01

    In a flexible automated line the equipment is complex and the control modes are varied, so realizing orderly control and data exchange among a large number of stepping and servo motors becomes difficult. Based on an existing flexible production line, this paper makes a comparative study of its network strategy. After this research, an Ethernet + PROFIBUS communication configuration based on PROFINET IO and PROFIBUS is proposed, which can effectively improve the efficiency and stability of data exchange among the equipment.

  11. Single-Cell-Based Platform for Copy Number Variation Profiling through Digital Counting of Amplified Genomic DNA Fragments.

    Science.gov (United States)

    Li, Chunmei; Yu, Zhilong; Fu, Yusi; Pang, Yuhong; Huang, Yanyi

    2017-04-26

    We develop a novel single-cell-based platform through digital counting of amplified genomic DNA fragments, named multifraction amplification (mfA), to detect the copy number variations (CNVs) in a single cell. Amplification is required to acquire genomic information from a single cell, while introducing unavoidable bias. Unlike prevalent methods that directly infer CNV profiles from the pattern of sequencing depth, our mfA platform denatures and separates the DNA molecules from a single cell into multiple fractions of a reaction mix before amplification. By examining the sequencing result of each fraction for a specific fragment and applying a segment-merge maximum likelihood algorithm to the calculation of copy number, we digitize the sequencing-depth-based CNV identification and thus provide a method that is less sensitive to the amplification bias. In this paper, we demonstrate a mfA platform through multiple displacement amplification (MDA) chemistry. When performing the mfA platform, the noise of MDA is reduced; therefore, the resolution of single-cell CNV identification can be improved to 100 kb. We can also determine the genomic region free of allelic drop-out with mfA platform, which is impossible for conventional single-cell amplification methods.

  12. Impact of controlling the sum of error probability in the sequential probability ratio test

    Directory of Open Access Journals (Sweden)

    Bijoy Kumarr Pradhan

    2013-05-01

    Full Text Available A generalized modified method is proposed to control the sum of the error probabilities in the sequential probability ratio test so as to minimize the weighted average of the two average sample numbers under a simple null hypothesis and a simple alternative hypothesis, with the restriction that the sum of the error probabilities is a pre-assigned constant, in order to find the optimal sample size. Finally, a comparison is made with the optimal sample size found from the fixed-sample-size procedure. The results are applied to the cases in which the random variate follows a normal law as well as a Bernoulli law.
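
    For orientation, the classical Wald sequential probability ratio test that the paper modifies can be sketched as below for a Bernoulli parameter. Here the error probabilities alpha and beta are set separately to define the stopping thresholds, rather than through the constrained sum used in the paper (Python):

        import numpy as np

        def sprt_bernoulli(data, p0=0.3, p1=0.5, alpha=0.05, beta=0.05):
            upper = np.log((1.0 - beta) / alpha)        # accept H1 when the log-LR crosses this
            lower = np.log(beta / (1.0 - alpha))        # accept H0 when the log-LR crosses this
            llr = 0.0
            for n, x in enumerate(data, start=1):
                llr += x * np.log(p1 / p0) + (1 - x) * np.log((1 - p1) / (1 - p0))
                if llr >= upper:
                    return "accept H1", n
                if llr <= lower:
                    return "accept H0", n
            return "no decision", len(data)

        rng = np.random.default_rng(2)
        print(sprt_bernoulli(rng.binomial(1, 0.5, size=200)))   # data generated under H1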

  13. The Weibull probabilities analysis on the single kenaf fiber

    Science.gov (United States)

    Ibrahim, I.; Sarip, S.; Bani, N. A.; Ibrahim, M. H.; Hassan, M. Z.

    2018-05-01

    Kenaf fiber has great potential to replace synthetic fibers in composites due to advantages such as being environmentally friendly and offering outstanding performance. However, the main issue with using this natural fiber in structural composites is the inconsistency of its mechanical properties. Here, the influence of the gage length on the mechanical properties of single kenaf fibers was evaluated. The fibers were tested using a universal testing machine at a loading rate of 1 mm per min following the ASTM D3822 standard. In this study, treated fibers with gage lengths of 20, 30 and 40 mm were tested. Weibull probability analysis was then used to characterize the tensile strength and Young's modulus of the kenaf fiber. The predicted average tensile strength from this approach is in good agreement with the experimental results for the obtained parameters.
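
    A sketch of the two-parameter Weibull analysis referred to above, with invented strength values standing in for the kenaf measurements: the shape parameter (Weibull modulus) and scale parameter (characteristic strength) are fitted and then used to evaluate a survival probability (Python):

        import numpy as np
        from scipy import stats

        # made-up single-fibre tensile strengths [MPa], not measured kenaf data
        strength = np.array([310., 355., 402., 288., 430., 376., 341., 295., 388., 362.])

        shape, loc, scale = stats.weibull_min.fit(strength, floc=0.0)   # two-parameter fit
        print(f"Weibull modulus m = {shape:.2f}, characteristic strength = {scale:.1f} MPa")

        # survival probability at a given stress, P(strength > s)
        print("P(strength > 350 MPa) =", round(stats.weibull_min.sf(350.0, shape, loc, scale), 3))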

  14. A novel single-step procedure for the calibration of the mounting parameters of a multi-camera terrestrial mobile mapping system

    Science.gov (United States)

    Habib, A.; Kersting, P.; Bang, K.; Rau, J.

    2011-12-01

    Mobile Mapping Systems (MMS) can be defined as moving platforms which integrates a set of imaging sensors and a position and orientation system (POS) for the collection of geo-spatial information. In order to fully explore the potential accuracy of such systems and guarantee accurate multi-sensor integration, a careful system calibration must be carried out. System calibration involves individual sensor calibration as well as the estimation of the inter-sensor geometric relationship. This paper tackles a specific component of the system calibration process of a multi-camera MMS - the estimation of the relative orientation parameters among the cameras, i.e., the inter-camera geometric relationship (lever-arm offsets and boresight angles among the cameras). For that purpose, a novel single step procedure, which is easy to implement and not computationally intensive, will be introduced. The proposed method is implemented in such a way that it can also be used for the estimation of the mounting parameters among the cameras and the IMU body frame, in case of directly georeferenced systems. The performance of the proposed method is evaluated through experimental results using simulated data. A comparative analysis between the proposed single-step and the two-step, which makes use of the traditional bundle adjustment procedure, is demonstrated.

  15. New 'one-step' method for the simultaneous synthesis and anchoring of organic monolith inside COC microchip channels

    International Nuclear Information System (INIS)

    Ladner, Yoann; Cretier, Gerard; Dugas, Vincent; Randon, Jerome; Faure, Karine; Bruchet, Anthony

    2012-01-01

    A new method for monolith synthesis and anchoring inside cyclic olefin copolymer (COC) microchannels in a single step is proposed. It is shown that type I photo-initiators, typically used in a polymerization mixture to generate free radicals during monolith synthesis, can simultaneously act as type II photo-initiators and react with the plastic surface through hydrogen abstraction. This mechanism is used to 'photo-graft' poly(ethylene glycol) methacrylate (PEGMA) on COC surfaces. Contact angle measurements were used to observe the changes in surface hydrophilicity when increasing initiator concentration and irradiation duration. The ability of type I photo-initiators to synthesize and anchor a monolith inside COC microchannels in a single step was proved through SEM observations. Different concentrations of photo-initiators were tried. Finally, electro-chromatographic separations of polycyclic aromatic hydrocarbons were realized to illustrate the beneficial effect of anchoring on chromatographic performances. The versatility of the method was demonstrated with two widely used photo-initiators: benzoin methyl ether (BME) and azobisisobutyronitrile (AIBN). (authors)

  16. Comparison of the quantitative dry culture methods with both conventional media and most probable number method for the enumeration of coliforms and Escherichia coli/coliforms in food.

    Science.gov (United States)

    Teramura, H; Sota, K; Iwasaki, M; Ogihara, H

    2017-07-01

    Sanita-kun™ CC (coliform count) and EC (Escherichia coli/coliform count), sheet quantitative culture systems which can avoid chromogenic interference by lactase in food, were evaluated in comparison with conventional methods for these bacteria. Based on the results of inclusivity and exclusivity studies using 77 micro-organisms, sensitivity and specificity of both Sanita-kun™ met the criteria for ISO 16140. Both media were compared with deoxycholate agar, violet red bile agar, Merck Chromocult™ coliform agar (CCA), 3M Petrifilm™ CC and EC (PEC) and 3-tube MPN, as reference methods, in 100 naturally contaminated food samples. The correlation coefficients of both Sanita-kun™ for coliform detection were more than 0·95 for all comparisons. For E. coli detection, Sanita-kun™ EC was compared with CCA, PEC and MPN in 100 artificially contaminated food samples. The correlation coefficients for E. coli detection of Sanita-kun™ EC were more than 0·95 for all comparisons. There were no significant differences in all comparisons when conducting a one-way analysis of variance (anova). Both Sanita-kun™ significantly inhibited colour interference by lactase when inhibition of enzymatic staining was assessed using 40 natural cheese samples spiked with coliform. Our results demonstrated Sanita-kun™ CC and EC are suitable alternatives for the enumeration of coliforms and E. coli/coliforms, respectively, in a variety of foods, and specifically in fermented foods. Current chromogenic media for coliforms and Escherichia coli/coliforms have enzymatic coloration due to breaking down of chromogenic substrates by food lactase. The novel sheet culture media which have film layer to avoid coloration by food lactase have been developed for enumeration of coliforms and E. coli/coliforms respectively. In this study, we demonstrated these media had comparable performance with reference methods and less interference by food lactase. These media have a possibility not only
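
    Because the 3-tube MPN serves as a reference method here, the sketch below shows the maximum-likelihood calculation that underlies MPN tables for a three-dilution, three-tube series; the tube counts are an invented example (the classical tables give roughly 43/g for the 3-1-0 pattern) (Python):

        import numpy as np
        from scipy.optimize import minimize_scalar

        volumes   = np.array([0.1, 0.01, 0.001])   # sample mass or volume per tube [g or mL]
        tubes     = np.array([3, 3, 3])            # tubes inoculated at each dilution
        positives = np.array([3, 1, 0])            # positive tubes observed

        def neg_log_lik(lam):
            p = 1.0 - np.exp(-lam * volumes)                       # P(a tube turns positive)
            p = np.clip(p, 1e-12, 1.0 - 1e-12)
            return -np.sum(positives * np.log(p) + (tubes - positives) * np.log(1.0 - p))

        res = minimize_scalar(neg_log_lik, bounds=(1e-3, 1e5), method="bounded")
        print(f"MPN ~ {res.x:.0f} organisms per g (or mL)")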

  17. Jump probabilities in the non-Markovian quantum jump method

    International Nuclear Information System (INIS)

    Haerkoenen, Kari

    2010-01-01

    The dynamics of a non-Markovian open quantum system described by a general time-local master equation is studied. The propagation of the density operator is constructed in terms of two processes: (i) deterministic evolution and (ii) evolution of a probability density functional in the projective Hilbert space. The analysis provides a derivation for the jump probabilities used in the recently developed non-Markovian quantum jump (NMQJ) method (Piilo et al 2008 Phys. Rev. Lett. 100 180402).

  18. Genome-wide association mapping including phenotypes from relatives without genotypes in a single-step (ssGWAS for 6-week body weight in broiler chickens

    Directory of Open Access Journals (Sweden)

    Huiyu eWang

    2014-05-01

    Full Text Available The purpose of this study was to compare results obtained from various methodologies for genome-wide association studies, when applied to real data, in terms of the number and commonality of regions identified and their genetic variance explained, computational speed, and possible pitfalls in interpretations of results. Methodologies include: two iteratively reweighted single-step genomic BLUP procedures (ssGWAS1 and ssGWAS2), a single-marker model (CGWAS), and BayesB. The ssGWAS methods utilize genomic breeding values (GEBVs) based on combined pedigree, genomic and phenotypic information, while CGWAS and BayesB only utilize phenotypes from genotyped animals or pseudo-phenotypes. In this study, ssGWAS was performed by converting GEBVs to SNP marker effects. Unequal marker variances were incorporated as weights when calculating a new genomic relationship matrix. SNP weights were refined iteratively. The data were body weight at 6 weeks on 274,776 broiler chickens, of which 4553 were genotyped using a 60k SNP chip. Comparison of genomic regions was based on the genetic variances explained by local SNP regions (20 SNPs). After 3 iterations, the noise of ssGWAS1 was greatly reduced and its results were similar to those of CGWAS, with 4 out of the top 10 regions in common. In contrast, for BayesB, the plot was dominated by a single region explaining 23.1% of the genetic variance. This same region was found by ssGWAS1 with the same rank, but the amount of genetic variation attributed to the region was only 3%. These findings emphasize the need for caution when comparing and interpreting results from various methods, and highlight that detected associations, and the strength of association, strongly depend on the methodologies and details of implementation. BayesB appears to overly shrink regions to zero, while overestimating the amount of genetic variation attributed to the remaining SNP effects. The real world is most likely a compromise between methods and remains to

  19. Method and allocation device for allocating pending requests for data packet transmission at a number of inputs to a number of outputs of a packet switching device in successive time slots

    Science.gov (United States)

    Abel, Francois [Rueschlikon, CH; Iliadis, Ilias [Rueschlikon, CH; Minkenberg, Cyriel J. A. [Adliswil, CH

    2009-02-03

    A method for allocating pending requests for data packet transmission at a number of inputs to a number of outputs of a switching system in successive time slots, including a matching method including the steps of providing a first request information in a first time slot indicating data packets at the inputs requesting transmission to the outputs of the switching system, performing a first step in the first time slot depending on the first request information to obtain a first matching information, providing a last request information in a last time slot successive to the first time slot, performing a last step in the last time slot depending on the last request information and depending on the first matching information to obtain a final matching information, and assigning the pending data packets at the number of inputs to the number of outputs based on the final matching information.

  20. Developing a Mathematical Model for Scheduling and Determining Success Probability of Research Projects Considering Complex-Fuzzy Networks

    Directory of Open Access Journals (Sweden)

    Gholamreza Norouzi

    2015-01-01

    Full Text Available In the project management context, time management is one of the most important factors affecting project success. This paper proposes a new method to solve research project scheduling problems (RPSP) containing Fuzzy Graphical Evaluation and Review Technique (FGERT) networks. Through the deliverables of this method, a proper estimation of the project completion time (PCT) and success probability can be achieved. Algorithms were therefore developed to cover all features of the problem based on three main parameters: duration, occurrence probability, and success probability. These developed algorithms are known as PR-FGERT (Parallel and Reversible-Fuzzy GERT networks). The main provided framework includes simplifying the project network and taking regular steps to determine the PCT and success probability. Simplifications include (1) equivalent making of parallel and series branches in the fuzzy network considering the concepts of probabilistic nodes, (2) equivalent making of delay or reversible-to-itself branches and the impact of changing the parameters of time and probability based on removing related branches, (3) equivalent making of simple and complex loops, and (4) an algorithm provided to resolve the no-loop fuzzy network after equivalent making. Finally, the performance of the models was compared with existing methods. The results showed proper and realistic performance of the models in comparison with existing methods.

  1. Unconditional and Conditional QTL Mapping for Tiller Numbers at Various Stages with Single Segment Substitution Lines in Rice (Oryza sativa L.)

    Institute of Scientific and Technical Information of China (English)

    ZHAO Fang-ming; LIU Gui-fu; ZHU Hai-tao; DING Xiao-hua; ZENG Rui-zhen; ZHANG Ze-min; LI Wen-tao; ZHANG Gui-quan

    2008-01-01

    Tiller is one of the most important agronomic traits, influencing the quantity and quality of effective panicles and ultimately the yield of rice. It is important to understand both "static" and "dynamic" information on the QTLs for tillering in rice. This work was the first to simultaneously map unconditional and conditional QTLs for tiller number at various stages by using single segment substitution lines in rice. Fourteen QTLs for tiller number, distributed on the corresponding substitution segments of chromosomes 1, 2, 3, 4, 6, 7 and 8, were detected. Both the number and the effect of the QTLs for tiller number varied across stages, from 6 to 9 in number and from 1.49 to 3.49 in effect, respectively. Tiller number QTLs expressed in a temporal order, being mainly detected at the three stages of 0-7 d, 14-21 d and 35-42 d after transplanting, with 6 positively, 9 randomly and 6 negatively expressing QTLs, respectively. Each of the QTLs expressed at least once during the whole growth duration of rice. The tiller number at a specific stage was determined by the sum of the QTL effects estimated by the unconditional method, while the increase or decrease in a given time interval was controlled by the total of the QTL effects estimated by the conditional method. These results demonstrate that mapping QTLs with single segment substitution lines and the conditional analysis methodology is highly effective and accurate.

  2. A step-by-step workflow for low-level analysis of single-cell RNA-seq data with Bioconductor [version 2; referees: 1 approved, 4 approved with reservations

    Directory of Open Access Journals (Sweden)

    Aaron T.L. Lun

    2016-10-01

    Full Text Available Single-cell RNA sequencing (scRNA-seq is widely used to profile the transcriptome of individual cells. This provides biological resolution that cannot be matched by bulk RNA sequencing, at the cost of increased technical noise and data complexity. The differences between scRNA-seq and bulk RNA-seq data mean that the analysis of the former cannot be performed by recycling bioinformatics pipelines for the latter. Rather, dedicated single-cell methods are required at various steps to exploit the cellular resolution while accounting for technical noise. This article describes a computational workflow for low-level analyses of scRNA-seq data, based primarily on software packages from the open-source Bioconductor project. It covers basic steps including quality control, data exploration and normalization, as well as more complex procedures such as cell cycle phase assignment, identification of highly variable and correlated genes, clustering into subpopulations and marker gene detection. Analyses were demonstrated on gene-level count data from several publicly available datasets involving haematopoietic stem cells, brain-derived cells, T-helper cells and mouse embryonic stem cells. This will provide a range of usage scenarios from which readers can construct their own analysis pipelines.

  3. Evolvement simulation of the probability of neutron-initiating persistent fission chain

    International Nuclear Information System (INIS)

    Wang Zhe; Hong Zhenying

    2014-01-01

    Background: The probability of a neutron initiating a persistent fission chain, which has to be calculated in the analysis of criticality safety, reactor start-up, burst waiting time on a pulse reactor, bursting time on a pulse reactor, etc., is an inherent parameter of a multiplying assembly. Purpose: We aim to derive a time-dependent integro-differential equation for this probability in relative velocity space according to probability conservation, and to develop the deterministic code Dynamic Segment Number Probability (DSNP) based on the multi-group SN method. Methods: The reliable convergence of the dynamic calculation was analyzed, and numerical simulation of the evolution of the dynamic probability for varying concentration was performed under different initial conditions. Results: For Highly Enriched Uranium (HEU) bare spheres, when the time is long enough, the results of the dynamic calculation approach those of the static calculation. The maximum difference between the DSNP and Partisn results is less than 2%. For the Baker model, over the range of about 1 μs after the first criticality, the maximum difference between the dynamic and static calculations is about 300%. For a supercritical system, the finite fission chains decrease and the persistent fission chains increase as the reactivity increases, and the dynamic evolution curve of the initiation probability is close to the static curve within a difference of 5% when k_eff is more than 1.2. The cumulative probability curve also indicates that the difference in the integral results between the dynamic and the static calculation decreases from 35% to 5% as k_eff increases. This demonstrates that the ability to initiate a self-sustaining fission chain reaction approaches stabilization, while the former difference (35%) shows the important difference between the dynamic results near the first criticality and the static ones. The DSNP code agrees well with the Partisn code. Conclusions: There are large numbers of

  4. Recursive regularization step for high-order lattice Boltzmann methods

    Science.gov (United States)

    Coreixas, Christophe; Wissocq, Gauthier; Puigt, Guillaume; Boussuge, Jean-François; Sagaut, Pierre

    2017-09-01

    A lattice Boltzmann method (LBM) with enhanced stability and accuracy is presented for various Hermite tensor-based lattice structures. The collision operator relies on a regularization step, which is here improved through a recursive computation of nonequilibrium Hermite polynomial coefficients. In addition to the reduced computational cost of this procedure with respect to the standard one, the recursive step allows to considerably enhance the stability and accuracy of the numerical scheme by properly filtering out second- (and higher-) order nonhydrodynamic contributions in under-resolved conditions. This is first shown in the isothermal case where the simulation of the doubly periodic shear layer is performed with a Reynolds number ranging from 104 to 106, and where a thorough analysis of the case at Re=3 ×104 is conducted. In the latter, results obtained using both regularization steps are compared against the Bhatnagar-Gross-Krook LBM for standard (D2Q9) and high-order (D2V17 and D2V37) lattice structures, confirming the tremendous increase of stability range of the proposed approach. Further comparisons on thermal and fully compressible flows, using the general extension of this procedure, are then conducted through the numerical simulation of Sod shock tubes with the D2V37 lattice. They confirm the stability increase induced by the recursive approach as compared with the standard one.

  5. Traffic safety and step-by-step driving licence for young people

    DEFF Research Database (Denmark)

    Tønning, Charlotte; Agerholm, Niels

    2017-01-01

    presents a review of safety effects from step-by-step driving licence schemes. Most of the investigated schemes consist of a step-by-step driving licence with Step 1) various tests and education, Step 2) a period where driving is only allowed together with an experienced driver and Step 3) driving without...... companion is allowed but with various restrictions and, in some cases, additional driving education and tests. In general, a step-by-step driving licence improves traffic safety even though the young people are permitted to drive a car earlier on. The effects from driving with an experienced driver vary......Young novice car drivers are much more accident-prone than other drivers - up to 10 times that of their parents' generation. A central solution to improve the traffic safety for this group is implementation of a step-by-step driving licence. A number of countries have introduced a step...

  6. Spectral filtering modulation method for estimation of hemoglobin concentration and oxygenation based on a single fluorescence emission spectrum in tissue phantoms.

    Science.gov (United States)

    Liu, Quan; Vo-Dinh, Tuan

    2009-10-01

    Hemoglobin concentration and oxygenation in tissue are important biomarkers that are useful in both research and clinical diagnostics of a wide variety of diseases such as cancer. The authors aim to develop a simple ratiometric method based on the spectral filtering modulation (SFM) of fluorescence spectra to estimate the total hemoglobin concentration and oxygenation in tissue using only a single fluorescence emission spectrum, which will eliminate the need for diffuse reflectance measurements and prolonged data processing as required by most current methods, thus enabling rapid clinical measurements. The proposed method consists of two steps. In the first step, the total hemoglobin concentration is determined by comparing a ratio of fluorescence intensities at two emission wavelengths to a calibration curve. The second step is to estimate oxygen saturation by comparing a double ratio that involves three emission wavelengths to another calibration curve that is a function of oxygen saturation for a known total hemoglobin concentration. Theoretical derivation shows that the ratio in the first step is linearly proportional to the total hemoglobin concentration and the double ratio in the second step is related to both the total hemoglobin concentration and hemoglobin oxygenation for the chosen fiber-optic probe geometry. Experiments on synthetic fluorescent tissue phantoms, which included hemoglobin with both constant and varying oxygenation as the absorber, polystyrene spheres as scatterers, and flavin adenine dinucleotide as the fluorophore, were carried out to validate the theoretical prediction. Tissue phantom experiments confirm that the ratio in the first step is linearly proportional to the total hemoglobin concentration and the double ratio in the second step is related to both the total hemoglobin concentration and hemoglobin oxygenation. Furthermore, the relations between the two ratios and the total hemoglobin concentration and hemoglobin oxygenation are insensitive
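
    Only the first ratiometric step is sketched below: an intensity ratio at two emission wavelengths is inverted through a calibration curve to give the total hemoglobin concentration. The wavelengths, the calibration data and the toy spectrum are assumptions standing in for the phantom calibration (Python):

        import numpy as np

        def spectrum_ratio(spec, wl, w1, w2):
            return spec[np.argmin(np.abs(wl - w1))] / spec[np.argmin(np.abs(wl - w2))]

        # hypothetical calibration: ratio R1 measured at known total hemoglobin levels [uM]
        cal_thb   = np.array([5.0, 10.0, 15.0, 20.0, 25.0])
        cal_ratio = np.array([0.80, 1.05, 1.31, 1.55, 1.82])    # monotonic in [Hb]

        def estimate_total_hb(spec, wl):
            r1 = spectrum_ratio(spec, wl, w1=540.0, w2=580.0)   # step 1: two emission wavelengths
            return np.interp(r1, cal_ratio, cal_thb)            # invert the calibration curve

        # a made-up emission spectrum just to exercise the functions
        wl = np.linspace(500.0, 650.0, 151)
        spec = 1.0 + 0.004 * (wl - 500.0)
        print(f"estimated total hemoglobin: {estimate_total_hb(spec, wl):.1f} uM")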

  7. 8th International Conference on Soft Methods in Probability and Statistics

    CERN Document Server

    Giordani, Paolo; Vantaggi, Barbara; Gagolewski, Marek; Gil, María; Grzegorzewski, Przemysław; Hryniewicz, Olgierd

    2017-01-01

    This proceedings volume is a collection of peer reviewed papers presented at the 8th International Conference on Soft Methods in Probability and Statistics (SMPS 2016) held in Rome (Italy). The book is dedicated to Data science which aims at developing automated methods to analyze massive amounts of data and to extract knowledge from them. It shows how Data science employs various programming techniques and methods of data wrangling, data visualization, machine learning, probability and statistics. The soft methods proposed in this volume represent a collection of tools in these fields that can also be useful for data science.

  8. Bayesian optimization for computationally extensive probability distributions.

    Science.gov (United States)

    Tamura, Ryo; Hukushima, Koji

    2018-01-01

    An efficient method for finding a better maximizer of computationally extensive probability distributions is proposed on the basis of a Bayesian optimization technique. A key idea of the proposed method is to use extreme values of acquisition functions by Gaussian processes for the next training phase, which should be located near a local maximum or a global maximum of the probability distribution. Our Bayesian optimization technique is applied to the posterior distribution in the effective physical model estimation, which is a computationally extensive probability distribution. Even when the number of sampling points on the posterior distributions is fixed to be small, the Bayesian optimization provides a better maximizer of the posterior distributions in comparison to those by the random search method, the steepest descent method, or the Monte Carlo method. Furthermore, the Bayesian optimization improves the results efficiently by combining the steepest descent method and thus it is a powerful tool to search for a better maximizer of computationally extensive probability distributions.
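
    A minimal Bayesian-optimization loop in the spirit described above: a Gaussian-process surrogate is fitted to evaluations of an expensive one-dimensional "log-posterior", and the next sampling point maximizes the expected-improvement acquisition function. The target function, kernel and iteration counts are illustrative assumptions (Python):

        import numpy as np
        from scipy.stats import norm
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import Matern

        def log_posterior(x):            # stand-in for a costly probability evaluation
            return -0.5 * ((x - 0.7) / 0.1) ** 2 + 0.3 * np.sin(8.0 * x)

        rng = np.random.default_rng(3)
        X = rng.uniform(0.0, 1.0, size=(4, 1))                  # initial design points
        y = np.array([log_posterior(x[0]) for x in X])
        grid = np.linspace(0.0, 1.0, 501).reshape(-1, 1)

        for _ in range(15):
            gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
            mu, sigma = gp.predict(grid, return_std=True)
            imp = mu - y.max()
            z = imp / np.maximum(sigma, 1e-12)
            ei = imp * norm.cdf(z) + sigma * norm.pdf(z)        # expected improvement
            x_next = grid[int(np.argmax(ei))]
            X = np.vstack([X, x_next])
            y = np.append(y, log_posterior(x_next[0]))

        print("best maximizer found:", float(X[np.argmax(y), 0]), "value:", float(y.max()))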

  9. Information-theoretic methods for estimating of complicated probability distributions

    CERN Document Server

    Zong, Zhi

    2006-01-01

    Mixing up various disciplines frequently produces something profound and far-reaching. Cybernetics is such an often-quoted example. The mix of information theory, statistics and computing technology has proved to be very useful, and has led to the recent development of information-theory-based methods for estimating complicated probability distributions. Estimating the probability distribution of a random variable is a fundamental task in quite a few fields besides statistics, such as reliability, probabilistic risk analysis (PSA), machine learning, pattern recognition, image processing, neur

  10. Probabilities for profitable fungicide use against gray leaf spot in hybrid maize.

    Science.gov (United States)

    Munkvold, G P; Martinson, C A; Shriver, J M; Dixon, P M

    2001-05-01

    ABSTRACT Gray leaf spot, caused by the fungus Cercospora zeae-maydis, causes considerable yield losses in hybrid maize grown in the north-central United States and elsewhere. Nonchemical management tactics have not adequately prevented these losses. The probability of profitably using fungicide application as a management tool for gray leaf spot was evaluated in 10 field experiments under conditions of natural inoculum in Iowa. Gray leaf spot severity in untreated control plots ranged from 2.6 to 72.8% for the ear leaf and from 3.0 to 7.7 (1 to 9 scale) for whole-plot ratings. In each experiment, fungicide applications with propiconazole or mancozeb significantly reduced gray leaf spot severity. Fungicide treatment significantly (P single propiconazole application. There were significant (P < 0.05) correlations between gray leaf spot severity and yield. We used a Bayesian inference method to calculate for each experiment the probability of achieving a positive net return with one or two propiconazole applications, based on the mean yields and standard deviations for treated and untreated plots, the price of grain, and the costs of the fungicide applications. For one application, the probability ranged from approximately 0.06 to more than 0.99, and exceeded 0.50 in six of nine scenarios (specific experiment/hybrid). The highest probabilities occurred in the 1995 experiments with the most susceptible hybrid. Probabilities were almost always higher for a single application of propiconazole than for two applications. These results indicate that a single application of propiconazole frequently can be profitable for gray leaf spot management in Iowa, but the probability of a profitable application is strongly influenced by hybrid susceptibility. The calculation of probabilities for positive net returns was more informative than mean separation in terms of assessing the economic success of the fungicide applications.
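
    A heavily simplified version of the profitability calculation can be sketched as follows: if the treated-minus-untreated yield gain is approximated as normal, the probability of a positive net return is the probability that the gain exceeds the break-even gain implied by grain price and application cost. The paper's Bayesian formulation is not reproduced and all numbers are invented (Python):

        from scipy import stats

        def prob_positive_net_return(mean_gain, sd_gain, grain_price, app_cost):
            breakeven_gain = app_cost / grain_price          # yield gain that just pays for spraying
            z = (breakeven_gain - mean_gain) / sd_gain
            return stats.norm.sf(z)                          # P(gain > break-even gain)

        # illustrative numbers: gain in t/ha, price in $/t, cost in $/ha
        p = prob_positive_net_return(mean_gain=0.45, sd_gain=0.30, grain_price=90.0, app_cost=25.0)
        print(f"probability of a profitable single application: {p:.2f}")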

  11. Neutron transport by collision probability method in complicated geometries

    International Nuclear Information System (INIS)

    Constantin, Marin

    2000-01-01

    For the first flight collision probability (FFCP) method, the memory requirements and execution time increase rapidly with the number of discrete regions. Generally, the use of the method is restricted to the cell/supercell level. However, the amazing developments in both computer hardware and computer architecture allow a real extension of the problem domain and a more detailed treatment of the geometry. Two ways are discussed in the paper: the direct design of new codes and the improvement of old mainframe versions. The author's experience is focused on improving the performance of the 3D integral transport code PIJXYZ (from an old version to a modern one) and on the design and development of the 2D transport code CP2D in recent years. In the first case an optimization process was performed before the parallelization. In the second, a modular design and the newest techniques (factorization of the geometry, the macrobands method, the mobile set of chords, the automatic calculation of the integration error, optimal algorithms for the innermost programming level, the mixed method for the tracking process and CPs calculation, etc.) were adopted. In both cases the parallelization uses a PC network system. Some short examples of CP2D and PIJXYZ calculations are presented: the reactivity void effect in typical CANDU cells using a multistratified coolant model, a problem with some adjacent fuel assemblies, and a 3D simulation of CANDU reactivity devices. (author)

  12. Encounter Probability of Individual Wave Height

    DEFF Research Database (Denmark)

    Liu, Z.; Burcharth, H. F.

    1998-01-01

    wave height corresponding to a certain exceedence probability within a structure lifetime (encounter probability), based on the statistical analysis of long-term extreme significant wave height. Then the design individual wave height is calculated as the expected maximum individual wave height...... associated with the design significant wave height, with the assumption that the individual wave heights follow the Rayleigh distribution. However, the exceedence probability of such a design individual wave height within the structure lifetime is unknown. The paper presents a method for the determination...... of the design individual wave height corresponding to an exceedence probability within the structure lifetime, given the long-term extreme significant wave height. The method can also be applied for estimation of the number of relatively large waves for fatigue analysis of constructions....
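
    The Rayleigh-based step can be sketched as follows: given the design significant wave height Hs and the number of waves N in the sea state, the expected maximum individual wave height and the probability that any individual wave exceeds a given height follow from the Rayleigh exceedance formula. The sea-state values are illustrative assumptions (Python):

        import numpy as np

        def expected_hmax(hs, n_waves):
            # leading-order expected maximum of N Rayleigh-distributed wave heights
            return hs * np.sqrt(0.5 * np.log(n_waves))

        def exceedance_prob(h, hs, n_waves):
            p_single = np.exp(-2.0 * (h / hs) ** 2)          # Rayleigh exceedance for one wave
            return 1.0 - (1.0 - p_single) ** n_waves         # P(at least one of N waves exceeds h)

        hs, n = 8.0, 1000                                     # assumed design sea state
        print(f"expected Hmax ~ {expected_hmax(hs, n):.1f} m")
        print(f"P(any wave > 1.8*Hs) = {exceedance_prob(1.8 * hs, hs, n):.3f}")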

  13. Quantum dot single-photon switches of resonant tunneling current for discriminating-photon-number detection.

    Science.gov (United States)

    Weng, Qianchun; An, Zhenghua; Zhang, Bo; Chen, Pingping; Chen, Xiaoshuang; Zhu, Ziqiang; Lu, Wei

    2015-03-23

    Low-noise single-photon detectors that can resolve photon numbers are used to monitor the operation of quantum gates in linear-optical quantum computation. Exactly 0, 1 or 2 photons registered in a detector should be distinguished, especially in long-distance quantum communication and quantum computation. Here we demonstrate a photon-number-resolving detector based on quantum dot coupled resonant tunneling diodes (QD-cRTD). Individual quantum dots (QDs) coupled closely with the adjacent quantum well (QW) of the resonant tunneling diode operate as photon-gated switches, which turn on (off) the RTD tunneling current when they trap photon-generated holes (recombine with injected electrons). The proposed electron-injection operation fills the coupled QDs with electrons, which turns the "photon switches" to the "OFF" state and makes the detector ready for multiple-photon detection. With proper decision regions defined, 1-photon and 2-photon states are resolved at 4.2 K with excellent accuracies of 90% and 98%, respectively. Further, by identifying step-like photon responses, the photon-number-resolving capability is sustained up to 77 K, making the detector a promising candidate for advanced quantum information applications where photon-number states should be accurately distinguished.

  14. A Dimensionality Reduction-Based Multi-Step Clustering Method for Robust Vessel Trajectory Analysis

    Directory of Open Access Journals (Sweden)

    Huanhuan Li

    2017-08-01

    Full Text Available The Shipboard Automatic Identification System (AIS) is crucial for navigation safety and maritime surveillance; data mining and pattern analysis of AIS information have attracted considerable attention in terms of both basic research and practical applications. Clustering of spatio-temporal AIS trajectories can be used to identify abnormal patterns and mine customary route data for transportation safety. Thus, the capacities of navigation safety and maritime traffic monitoring could be enhanced correspondingly. However, trajectory clustering is often sensitive to undesirable outliers and is essentially more complex compared with traditional point clustering. To overcome this limitation, a multi-step trajectory clustering method is proposed in this paper for robust AIS trajectory clustering. In particular, Dynamic Time Warping (DTW), a similarity measurement method, is introduced in the first step to measure the distances between different trajectories. The calculated distances, inversely proportional to the similarities, constitute a distance matrix in the second step. Furthermore, as a widely used dimensionality reduction method, Principal Component Analysis (PCA) is exploited to decompose the obtained distance matrix. In particular, the top k principal components with above 95% cumulative contribution rate are extracted by PCA, and the number of centers k is chosen. The k centers are found by the improved automatic center selection algorithm. In the last step, the improved center clustering algorithm with k clusters is implemented on the distance matrix to achieve the final AIS trajectory clustering results. In order to improve the accuracy of the proposed multi-step clustering algorithm, an automatic algorithm for choosing the k clusters is developed according to the similarity distance. Numerous experiments on realistic AIS trajectory datasets in the bridge area waterway and Mississippi River have been implemented to compare our

  15. A Dimensionality Reduction-Based Multi-Step Clustering Method for Robust Vessel Trajectory Analysis.

    Science.gov (United States)

    Li, Huanhuan; Liu, Jingxian; Liu, Ryan Wen; Xiong, Naixue; Wu, Kefeng; Kim, Tai-Hoon

    2017-08-04

    The Shipboard Automatic Identification System (AIS) is crucial for navigation safety and maritime surveillance; data mining and pattern analysis of AIS information have attracted considerable attention in terms of both basic research and practical applications. Clustering of spatio-temporal AIS trajectories can be used to identify abnormal patterns and mine customary route data for transportation safety. Thus, the capacities of navigation safety and maritime traffic monitoring could be enhanced correspondingly. However, trajectory clustering is often sensitive to undesirable outliers and is essentially more complex compared with traditional point clustering. To overcome this limitation, a multi-step trajectory clustering method is proposed in this paper for robust AIS trajectory clustering. In particular, Dynamic Time Warping (DTW), a similarity measurement method, is introduced in the first step to measure the distances between different trajectories. The calculated distances, inversely proportional to the similarities, constitute a distance matrix in the second step. Furthermore, as a widely used dimensionality reduction method, Principal Component Analysis (PCA) is exploited to decompose the obtained distance matrix. In particular, the top k principal components with above 95% cumulative contribution rate are extracted by PCA, and the number of centers k is chosen. The k centers are found by the improved automatic center selection algorithm. In the last step, the improved center clustering algorithm with k clusters is implemented on the distance matrix to achieve the final AIS trajectory clustering results. In order to improve the accuracy of the proposed multi-step clustering algorithm, an automatic algorithm for choosing the k clusters is developed according to the similarity distance. Numerous experiments on realistic AIS trajectory datasets in the bridge area waterway and Mississippi River have been implemented to compare our proposed method with
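
    The first steps of the pipeline (DTW distances, the distance matrix, and its PCA decomposition) can be sketched as below. The improved automatic center selection and center clustering steps of the paper are not reproduced, and the trajectories are synthetic stand-ins for AIS tracks.

```python
import numpy as np
from sklearn.decomposition import PCA

def dtw_distance(a, b):
    """Classic dynamic-programming DTW between two sequences of 2-D points."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Synthetic trajectories of varying length standing in for AIS tracks.
rng = np.random.default_rng(0)
trajectories = [rng.normal(size=(rng.integers(20, 40), 2)).cumsum(axis=0)
                for _ in range(10)]

# Steps 1-2: pairwise DTW distances form a symmetric distance matrix.
n = len(trajectories)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = dtw_distance(trajectories[i], trajectories[j])

# Step 3: PCA keeps the top components covering 95% of the variance.
pca = PCA(n_components=0.95, svd_solver="full")
embedded = pca.fit_transform(dist)
print(embedded.shape, pca.explained_variance_ratio_)
```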

  16. Litigation-proof patents: avoiding the most common patent mistakes

    National Research Council Canada - National Science Library

    Goldstein, Larry M

    2014-01-01

    "Litigation-Proof Patents: Avoiding the Most Common Patent Mistakes explains the principles of excellent patents, presents the ten most common errors in patents, and details a step-by-step method for avoiding these common errors...

  17. Single Trial Probability Applications: Can Subjectivity Evade Frequency Limitations?

    Directory of Open Access Journals (Sweden)

    David Howden

    2009-10-01

    Full Text Available Frequency probability theorists define an event's probability distribution as the limit of a repeated set of trials belonging to a homogeneous collective. The subsets of this collective are events about which we have deficient knowledge on an individual level, although for the larger collective we have knowledge of its aggregate behavior. Hence, probabilities can only be obtained through repeated trials of these subsets, arriving at the established frequencies that define the probabilities. Crovelli (2009) argues that this is a mistaken approach, and that a subjective assessment of individual trials should be used instead. Bifurcating between the two concepts of risk and uncertainty, Crovelli first asserts that probability is the tool used to manage uncertain situations, and then attempts to rebuild a definition of probability theory with this in mind. We show that such an attempt has little to gain, and results in an indeterminate application of entrepreneurial forecasting to uncertain decisions, a process far removed from any application of probability theory.

  18. Approximation of the Monte Carlo Sampling Method for Reliability Analysis of Structures

    Directory of Open Access Journals (Sweden)

    Mahdi Shadab Far

    2016-01-01

    Full Text Available Structural load types, on the one hand, and structural capacity to withstand these loads, on the other hand, are of a probabilistic nature as they cannot be calculated and presented in a fully deterministic way. As such, the past few decades have witnessed the development of numerous probabilistic approaches towards the analysis and design of structures. Among the conventional methods used to assess structural reliability, the Monte Carlo sampling method has proved to be very convenient and efficient. However, it does suffer from certain disadvantages, the biggest one being the requirement of a very large number of samples to handle small probabilities, leading to a high computational cost. In this paper, a simple algorithm was proposed to estimate low failure probabilities using a small number of samples in conjunction with the Monte Carlo method. This revised approach was then presented in a step-by-step flowchart, for the purpose of easy programming and implementation.
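
    The sampling burden that motivates the paper can be seen in a plain Monte Carlo sketch: the coefficient of variation of the estimator blows up as the failure probability shrinks. This is a generic resistance-minus-load example with hypothetical distributions, not the paper's revised algorithm.

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples = 1_000_000

# Hypothetical capacity (R) and demand (S); failure occurs when R < S.
R = rng.normal(loc=12.0, scale=1.5, size=n_samples)
S = rng.normal(loc=7.0, scale=1.0, size=n_samples)

failures = np.count_nonzero(R - S < 0.0)
p_f = failures / n_samples

# Coefficient of variation of the crude MC estimator: grows as p_f shrinks,
# which is why small probabilities demand very large sample sizes.
cov = np.sqrt((1.0 - p_f) / (n_samples * p_f)) if failures else np.inf
print(f"estimated P_f = {p_f:.2e}, estimator c.o.v. = {cov:.2f}")
```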

  19. Genomic prediction by single-step genomic BLUP using cow reference population in Holstein crossbred cattle in India

    DEFF Research Database (Denmark)

    Nayee, Nilesh Kumar; Su, Guosheng; Gajjar, Swapnil

    2018-01-01

    Advantages of genomic selection in breeds with limited numbers of progeny tested bulls have been demonstrated by adding genotypes of females to the reference population (Thomasen et al., 2014). The current study was conducted to explore the feasibility of implementing genomic selection in a Holstein Friesian crossbred population with cows kept under smallholder conditions, using test day records and single-step genomic BLUP (ssGBLUP). Milk yield records from 10,797 daughters sired by 258 bulls were used. Of these, 2194 daughters and 109 sires were genotyped with a customized genotyping chip...

  20. Assigning probability gain for precursors of four large Chinese earthquakes

    Energy Technology Data Exchange (ETDEWEB)

    Cao, T.; Aki, K.

    1983-03-10

    We extend the concept of probability gain associated with a precursor (Aki, 1981) to a set of precursors which may be mutually dependent. Making use of a new formula, we derive a criterion for selecting precursors from a given data set in order to calculate the probability gain. The probabilities per unit time immediately before four large Chinese earthquakes are calculated. They are approximately 0.09, 0.09, 0.07 and 0.08 per day for 1975 Haicheng (M = 7.3), 1976 Tangshan (M = 7.8), 1976 Longling (M = 7.6), and Songpan (M = 7.2) earthquakes, respectively. These results are encouraging because they suggest that the investigated precursory phenomena may have included the complete information for earthquake prediction, at least for the above earthquakes. With this method, the step-by-step approach to prediction used in China may be quantified in terms of the probability of earthquake occurrence. The ln P versus t curve (where P is the probability of earthquake occurrence at time t) shows that ln P does not increase with t linearly but more rapidly as the time of earthquake approaches.
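
    In its simplest form, Aki's probability-gain idea multiplies a background occurrence rate by the gains of the observed precursors; the paper's contribution is a formula that also handles mutually dependent precursors, which the toy sketch below (with made-up numbers) does not attempt.

```python
# Illustrative only: independent-precursor approximation of probability gain.
background_rate = 1.0e-4        # earthquakes per day (hypothetical)
gains = [30.0, 10.0, 3.0]       # probability gains of individual precursors

p = background_rate
for g in gains:
    p *= g                      # multiply gains, assuming independence
print(f"probability per day just before the event: {p:.3f}")
```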

  1. Design of a Two-Step Calibration Method of Kinematic Parameters for Serial Robots

    Science.gov (United States)

    WANG, Wei; WANG, Lei; YUN, Chao

    2017-03-01

    Serial robots are used to handle workpieces with large dimensions, and calibrating kinematic parameters is one of the most efficient ways to improve their accuracy. Many models have been set up to investigate how many kinematic parameters can be identified to meet the minimal principle, but the base frame and the kinematic parameters are usually calibrated together in a single step without distinction. A two-step method of calibrating kinematic parameters is proposed to improve the accuracy of the robot's base frame and kinematic parameters. The forward kinematics, described with respect to the measuring coordinate frame, are established based on the product-of-exponentials (POE) formula. In the first step the robot's base coordinate frame is calibrated using the unit quaternion form. The errors of both the robot's reference configuration and the base coordinate frame's pose are equivalently transformed to the zero-position errors of the robot's joints. The simplified model of the robot's positioning error is established as explicit second-order expressions. The identification model is then completed with the least-squares method, requiring only measured position coordinates. The complete subtasks of calibrating the robot's 39 kinematic parameters are finished in the second step. A group of calibration experiments proves that with the proposed two-step calibration method the average absolute accuracy of industrial robots is improved to 0.23 mm. This paper shows that the robot's base frame should be calibrated before its kinematic parameters in order to improve its absolute positioning accuracy.

  2. A backward Monte Carlo method for efficient computation of runaway probabilities in runaway electron simulation

    Science.gov (United States)

    Zhang, Guannan; Del-Castillo-Negrete, Diego

    2017-10-01

    Kinetic descriptions of runaway electrons (RE) are usually based on the bounce-averaged Fokker-Planck model that determines the probability density functions (PDFs) of RE. Despite the simplification involved, the Fokker-Planck equation can rarely be solved analytically, and direct numerical approaches (e.g., continuum and particle-based Monte Carlo (MC)) can be time consuming, especially in the computation of asymptotic-type observables including the runaway probability, the slowing-down and runaway mean times, and the energy limit probability. Here we present a novel backward MC approach to these problems based on backward stochastic differential equations (BSDEs). The BSDE model can simultaneously describe the PDF of RE and the runaway probabilities by means of the well-known Feynman-Kac theory. The key ingredient of the backward MC algorithm is to place all the particles in a runaway state and simulate them backward from the terminal time to the initial time. As such, our approach can provide much faster convergence than brute-force MC methods, which can significantly reduce the number of particles required to achieve a prescribed accuracy. Moreover, our algorithm can be parallelized as easily as the direct MC code, which paves the way for conducting large-scale RE simulation. This work is supported by DOE FES and ASCR under the Contract Numbers ERKJ320 and ERAT377.

  3. Single step sequential polydimethylsiloxane wet etching to fabricate a microfluidic channel with various cross-sectional geometries

    Science.gov (United States)

    Wang, C.-K.; Liao, W.-H.; Wu, H.-M.; Lo, Y.-H.; Lin, T.-R.; Tung, Y.-C.

    2017-11-01

    Polydimethylsiloxane (PDMS) has become a widely used material to construct microfluidic devices for various biomedical and chemical applications due to its desirable material properties and manufacturability. PDMS microfluidic devices are usually fabricated using soft lithography replica molding methods with master molds made of photolithography-patterned photoresist layers on silicon wafers. The fabricated microfluidic channels often have rectangular cross-sectional geometries with single or multiple heights. In this paper, we develop a single-step sequential PDMS wet etching process that can be used to fabricate microfluidic channels with various cross-sectional geometries from single-layer PDMS microfluidic channels. The cross-sections of the fabricated channels can be non-rectangular and varied along the flow direction. Furthermore, the fabricated cross-sectional geometries can be numerically simulated beforehand. In the experiments, we fabricate microfluidic channels with various cross-sectional geometries using the developed technique. In addition, we fabricate a microfluidic mixer with alternately mirrored cross-sectional geometries along the flow direction to demonstrate the practical usage of the developed technique.

  4. Structural Reliability Using Probability Density Estimation Methods Within NESSUS

    Science.gov (United States)

    Chamis, Chrisos C. (Technical Monitor); Godines, Cody Ric

    2003-01-01

    A reliability analysis studies a mathematical model of a physical system taking into account uncertainties of design variables; common results are estimations of a response density, which also implies estimations of its parameters. Some common density parameters include the mean value, the standard deviation, and specific percentile(s) of the response, which are measures of central tendency, variation, and probability regions, respectively. Reliability analyses are important since the results can lead to different designs by calculating the probability of observing safe responses in each of the proposed designs. All of this is done at the expense of added computational time as compared to a single deterministic analysis, which results in one value of the response out of the many that make up the density of the response. Sampling methods, such as Monte Carlo (MC) and Latin hypercube sampling (LHS), can be used to perform reliability analyses and can compute nonlinear response density parameters even if the response depends on many random variables. Hence, both methods are very robust; however, they are computationally expensive to use in the estimation of the response density parameters. Both methods are two of the 13 stochastic methods contained within the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) program. NESSUS is a probabilistic finite element analysis (FEA) program that was developed through funding from NASA Glenn Research Center (GRC). It has the additional capability of being linked to other analysis programs; therefore, probabilistic fluid dynamics, fracture mechanics, and heat transfer are only a few of the analyses possible with this software. The LHS method is the newest addition to the stochastic methods within NESSUS. Part of this work was to enhance NESSUS with the LHS method. The new LHS module is complete, has been successfully integrated with NESSUS, and has been used to study four different test cases that have been
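
    A generic Latin hypercube sample can be drawn with a few lines of stratified sampling; this is only a sketch of the idea behind the new NESSUS module, not its implementation, and the input distributions below are hypothetical.

```python
import numpy as np
from scipy import stats

def latin_hypercube(n, d, rng):
    """n points on the d-dimensional unit cube, one per stratum in each dimension."""
    samples = np.empty((n, d))
    for k in range(d):
        samples[:, k] = (rng.permutation(n) + rng.random(n)) / n
    return samples

rng = np.random.default_rng(0)
u = latin_hypercube(1000, 2, rng)

# Map the uniform sample to hypothetical input distributions via inverse CDFs
# and estimate a percentile of the response (here a simple safety margin).
load = stats.norm.ppf(u[:, 0], loc=100.0, scale=10.0)
strength = stats.lognorm.ppf(u[:, 1], s=0.1, scale=150.0)
margin = strength - load
print(f"5th percentile of the margin: {np.percentile(margin, 5):.1f}")
```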

  5. Estimation of the number of physical flaws from periodic ISI data of SG tubes using effective POD

    International Nuclear Information System (INIS)

    Lee, Jae Bong; Park, Jai Hak; Kim, Hong Deok; Chung, Han Sub

    2008-01-01

    It is necessary to know the number of flaws and their size distribution in order to calculate the probability of failure or to estimate the amount of leakage through the tube wall of steam generators. However, in-service inspection (ISI) flaw data differ from the physical flaw data. In the case of a single inspection, it is easy to estimate the number of physical flaws using the POD curve. However, we may face some difficulties in obtaining the number of physical flaws from periodic in-service inspection data. In this study a method for estimating the number of physical flaws from periodic in-service inspection data is proposed. In order to calculate the number of physical flaws from periodic ISI data, both the probabilities of detecting and of missing flaws should be considered, and the flaw initiation and growth history must also be known. The flaw initiation and growth history can be inferred from an appropriate probabilistic flaw growth rate. Two inference methods are proposed and compared: one is a Monte Carlo simulation method and the other is a transition (stochastic) matrix method. The effective POD, the total probability of detection considering both the probabilities of detecting and of missing flaws for each flaw size, can be calculated using the above two inference methods. The two methods are compared, and their usefulness and convenience are evaluated in several applications
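
    The role of the effective POD can be illustrated with a deliberately simplified sketch: if each flaw in a size bin is detected with probability POD(a), the expected number of physical flaws behind the detections is roughly the detected count divided by that probability. The POD curve and counts below are hypothetical, and the paper's actual methods additionally model initiation and growth between periodic inspections.

```python
import numpy as np

def pod(a, a50=0.5, slope=4.0):
    """Hypothetical logistic POD curve; parameters are illustrative only."""
    return 1.0 / (1.0 + np.exp(-slope * (a - a50)))

sizes = np.array([0.3, 0.5, 0.8, 1.2])   # flaw depths in arbitrary units
detected = np.array([2, 5, 7, 4])        # ISI detections per size bin

# Naive single-inspection correction: expected physical flaws per bin.
estimated_physical = detected / pod(sizes)
print(np.round(estimated_physical, 1))
```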

  6. TumorBoost: Normalization of allele-specific tumor copy numbers from a single pair of tumor-normal genotyping microarrays

    Directory of Open Access Journals (Sweden)

    Neuvial Pierre

    2010-05-01

    Full Text Available Abstract Background High-throughput genotyping microarrays assess both total DNA copy number and allelic composition, which makes them a tool of choice for copy number studies in cancer, including total copy number and loss of heterozygosity (LOH) analyses. Even after state of the art preprocessing methods, allelic signal estimates from genotyping arrays still suffer from systematic effects that make them difficult to use effectively for such downstream analyses. Results We propose a method, TumorBoost, for normalizing allelic estimates of one tumor sample based on estimates from a single matched normal. The method applies to any paired tumor-normal estimates from any microarray-based technology, combined with any preprocessing method. We demonstrate that it increases the signal-to-noise ratio of allelic signals, making it significantly easier to detect allelic imbalances. Conclusions TumorBoost increases the power to detect somatic copy-number events (including copy-neutral LOH) in the tumor from allelic signals of Affymetrix or Illumina origin. We also conclude that high-precision allelic estimates can be obtained from a single pair of tumor-normal hybridizations, if TumorBoost is combined with single-array preprocessing methods such as (allele-specific) CRMA v2 for Affymetrix or BeadStudio's (proprietary) XY-normalization method for Illumina. A bounded-memory implementation is available in the open-source and cross-platform R package aroma.cn, which is part of the Aroma Project (http://www.aroma-project.org/).

  7. Development of a new Xe-133 single dose multi-step method (SDMM) for muscle blood flow measurement using gamma camera

    International Nuclear Information System (INIS)

    Bunko, Hisashi; Seto, Mikito; Taki, Junichi

    1985-01-01

    In order to measure muscle blood flow (MBF) during exercise (Ex), a new Xe-133 single dose multi-step method (SDMM) for leg MBF measurement before, during and after Ex using a gamma camera was developed. Theoretically, if the activities of Xe-133 in the muscle immediately before and after Ex are known, then the mean MBF during Ex can be calculated. In SDMM, these activities are corrected through a correction formula using the time delays between the end of data acquisition (DA) at rest (R1) and the beginning of Ex (TAB), and between the end of Ex and the beginning of the DA after Ex (R2) (TDA). The validity of the SDMM and the MBF response to mild and heavy Ex were evaluated in 11 normal volunteers. Ex MBF calculated from 5 and 2.5 min DA (5 sec/frame) both at R1 and R2 were highly correlated (r=.996). Ex MBF by SDMM and direct measurement by fixed-leg exercise were also highly correlated (r=.999). Reproducibility of the R1 and Ex MBF was excellent (r=.999). The highest MBF was seen in GCM on mild walking Ex and in VLM on heavy squatting Ex. After mild Ex, MBF rapidly returned to normal. After heavy Ex, MBF remained high in VLM. In conclusion, SDMM is a simple and accurate method for evaluating the dynamic MBF response to exercise. SDMM is also applicable to the field of sports medicine. (author)

  8. Optimization of Control Points Number at Coordinate Measurements based on the Monte-Carlo Method

    Science.gov (United States)

    Korolev, A. A.; Kochetkov, A. V.; Zakharov, O. V.

    2018-01-01

    Improving the quality of products increases the requirements for the accuracy of the dimensions and shape of workpiece surfaces. This, in turn, raises the requirements for the accuracy and productivity of workpiece measurement. Coordinate measuring machines are currently the most effective measuring tools for solving such problems. The article proposes a method for optimizing the number of control points using Monte Carlo simulation. Based on the measurement of a small sample from batches of workpieces, statistical modeling is performed, which allows one to obtain interval estimates of the measurement error. This approach is demonstrated by application examples for flatness, cylindricity and sphericity. Four options of uniform and non-uniform arrangement of control points are considered and compared. It is revealed that when the number of control points decreases, the arithmetic mean decreases, the standard deviation of the measurement error increases and the probability of the measurement α-error increases. In general, it has been established that the number of control points can be reduced severalfold while maintaining the required measurement accuracy.

  9. Comments on the sequential probability ratio testing methods

    Energy Technology Data Exchange (ETDEWEB)

    Racz, A. [Hungarian Academy of Sciences, Budapest (Hungary). Central Research Inst. for Physics

    1996-07-01

    In this paper the classical sequential probability ratio testing method (SPRT) is reconsidered. Every individual boundary crossing event of the SPRT is regarded as a new piece of evidence about the problem under hypothesis testing. The Bayes method is applied for belief updating, i.e. integrating these individual decisions. The procedure is recommended when the user (1) would like to be informed about the tested hypothesis continuously and (2) would like to reach a final conclusion with a high confidence level. (Author).
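
    For reference, the classical SPRT that the paper reconsiders can be sketched as a running log-likelihood ratio compared against two boundaries; the Bayesian belief-updating layer built on top of the boundary crossings is not reproduced here, and the Gaussian test problem is hypothetical.

```python
import numpy as np

def sprt(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
    """Wald's SPRT for the mean of Gaussian data: H0 mu=mu0 vs H1 mu=mu1."""
    lower = np.log(beta / (1.0 - alpha))     # accept H0 below this
    upper = np.log((1.0 - beta) / alpha)     # accept H1 above this
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        llr += (mu1 - mu0) * (x - 0.5 * (mu0 + mu1)) / sigma**2
        if llr <= lower:
            return "H0", n
        if llr >= upper:
            return "H1", n
    return "undecided", len(samples)

rng = np.random.default_rng(3)
decision, n_used = sprt(rng.normal(0.3, 1.0, size=500), mu0=0.0, mu1=0.5, sigma=1.0)
print(decision, n_used)
```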

  10. Method for assessing the probability of accumulated doses from an intermittent source using the convolution technique

    International Nuclear Information System (INIS)

    Coleman, J.H.

    1980-10-01

    A technique is discussed for computing the probability distribution of the accumulated dose received by an arbitrary receptor resulting from several single releases from an intermittent source. The probability density of the accumulated dose is the convolution of the probability densities of the doses from the individual releases. Emissions are not assumed to be constant over the brief release period. The fast Fourier transform is used in the calculation of the convolution.
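
    The convolution step lends itself to a compact numerical sketch: sample each single-release dose density on a common grid, multiply their FFTs, and transform back. The per-release densities below are hypothetical lognormals, not the source terms of the report.

```python
import numpy as np

dose = np.linspace(0.0, 50.0, 2048)
d_dose = dose[1] - dose[0]

def lognormal_pdf(x, mu, sigma):
    out = np.zeros_like(x)
    pos = x > 0
    out[pos] = np.exp(-(np.log(x[pos]) - mu) ** 2 / (2 * sigma ** 2)) / (
        x[pos] * sigma * np.sqrt(2 * np.pi))
    return out

# Hypothetical dose densities for three separate releases.
releases = [lognormal_pdf(dose, mu, 0.5) for mu in (0.0, 0.3, 0.8)]

# Convolve all single-release densities via the FFT (zero-padding avoids wrap-around).
n_fft = 1 << int(np.ceil(np.log2(len(dose) * len(releases))))
spectrum = np.ones(n_fft // 2 + 1, dtype=complex)
for pdf in releases:
    spectrum *= np.fft.rfft(pdf * d_dose, n=n_fft)     # probability masses
accumulated = np.fft.irfft(spectrum, n=n_fft)[: len(dose)] / d_dose

print(f"total probability on the grid: {(accumulated * d_dose).sum():.3f}")
```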

  11. Statistical tests for whether a given set of independent, identically distributed draws comes from a specified probability density.

    Science.gov (United States)

    Tygert, Mark

    2010-09-21

    We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).
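
    The classical starting point the paper complements is readily available in standard libraries; the snippet below runs a Kolmogorov-Smirnov test of synthetic draws against a hypothesized standard normal density. Kuiper's variant and the paper's own density-based tests are not shown here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
draws = rng.standard_t(df=5, size=500)    # heavier-tailed than a normal

# Test the i.i.d. draws against the hypothesized standard normal distribution.
ks_stat, ks_p = stats.kstest(draws, "norm")
print(f"Kolmogorov-Smirnov: D = {ks_stat:.3f}, p = {ks_p:.3f}")
```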

  12. A three-step vehicle detection framework for range estimation using a single camera

    CSIR Research Space (South Africa)

    Kanjee, R

    2015-12-01

    Full Text Available This paper proposes and validates a real-time on-road vehicle detection system, which uses a single camera for the purpose of intelligent driver assistance. A three-step vehicle detection framework is presented to detect and track the target vehicle...

  13. Study for Safeguards Challenges to the Most Probably First Indonesian Future Power Plant of the Pebble Bed Modular Reactor

    International Nuclear Information System (INIS)

    Susilowati, E.

    2015-01-01

    In the near future Indonesia, the fourth most populous country, plans to build a small-size power plant, most probably a Pebble Bed Modular Reactor (PBMR). This first nuclear power plant (NPP) is intended to provide society with a clear picture of the performance and safety of nuclear power plant operation. The selection of the PBMR is based on several factors, including the combination of the small size of the reactor and the type of fuel, which allows the use of passive safety systems, resulting in essential advantages in nuclear plant design and less dependence on plant operators for safety. From a safeguards perspective, this type of reactor is also quite different from previous light water reactor (LWR) designs. The presence of a large number of small fuel elements in the reactor, produced without individual serial numbers, combined with on-line refueling as in the CANDU reactor, poses a new challenge to the safeguards approach for this type of reactor. This paper discusses the safeguards measures that have to be prepared by the facility operator to successfully support international nuclear material and facility verification, including the elements of design relevant to safeguards that need to be settled in consultation with the regulatory body, the supplier or designer, and the Agency/IAEA, such as the nuclear material balance areas and key measurement points; possible diversion scenarios and the safeguards strategy; and the design features relevant to the IAEA equipment that has to be installed at the reactor facility. It is expected that the results of this discussion will support the Agency in approaching the safeguards measures that may be applied to the construction and operation of Indonesia's first power plant of the PBMR type. (author)

  14. An Improved Single-Step Cloning Strategy Simplifies the Agrobacterium tumefaciens-Mediated Transformation (ATMT)-Based Gene-Disruption Method for Verticillium dahliae.

    Science.gov (United States)

    Wang, Sheng; Xing, Haiying; Hua, Chenlei; Guo, Hui-Shan; Zhang, Jie

    2016-06-01

    The soilborne fungal pathogen Verticillium dahliae infects a broad range of plant species to cause severe diseases. The availability of Verticillium genome sequences has provided opportunities for large-scale investigations of individual gene function in Verticillium strains using Agrobacterium tumefaciens-mediated transformation (ATMT)-based gene-disruption strategies. Traditional ATMT vectors require multiple cloning steps and elaborate characterization procedures to achieve successful gene replacement; thus, these vectors are not suitable for high-throughput ATMT-based gene deletion. Several advancements have been made that either involve simplification of the steps required for gene-deletion vector construction or increase the efficiency of the technique for rapid recombinant characterization. However, an ATMT binary vector that is both simple and efficient is still lacking. Here, we generated a USER-ATMT dual-selection (DS) binary vector, which combines both the advantages of the USER single-step cloning technique and the efficiency of the herpes simplex virus thymidine kinase negative-selection marker. Highly efficient deletion of three different genes in V. dahliae using the USER-ATMT-DS vector enabled verification that this newly-generated vector not only facilitates the cloning process but also simplifies the subsequent identification of fungal homologous recombinants. The results suggest that the USER-ATMT-DS vector is applicable for efficient gene deletion and suitable for large-scale gene deletion in V. dahliae.

  15. Optimizing Probability of Detection Point Estimate Demonstration

    Science.gov (United States)

    Koshti, Ajay M.

    2017-01-01

    Probability of detection (POD) analysis is used in assessing the reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software give the most common methods of POD analysis. Real flaws such as cracks and crack-like flaws are desired to be detected using these NDE methods. A reliably detectable crack size is required for safe-life analysis of fracture-critical parts. The paper provides a discussion on optimizing probability of detection (POD) demonstration experiments using the point estimate method. The POD point estimate method is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size within some tolerance is used in the demonstration. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false (POF) calls, while keeping the flaw sizes in the set as small as possible.
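
    The binomial bookkeeping behind a point-estimate demonstration is short enough to show directly. The sketch below uses the familiar 29-of-29 design for illustration; the acceptance numbers and confidence levels are illustrative, not NASA requirements.

```python
from scipy import stats

n_flaws = 29

# Probability of passing the demonstration (all 29 detected) vs. the true POD.
for true_pod in (0.90, 0.95, 0.98):
    ppd = true_pod ** n_flaws
    print(f"true POD {true_pod:.2f}: P(pass 29/29) = {ppd:.3f}")

# Clopper-Pearson lower 95% confidence bound on POD after 29 detections in
# 29 trials; this is what makes 29/29 a '90/95' demonstration.
lower_bound = stats.beta.ppf(0.05, n_flaws, 1)
print(f"95% lower confidence bound on POD: {lower_bound:.3f}")
```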

  16. Probability 1/e

    Science.gov (United States)

    Koo, Reginald; Jones, Martin L.

    2011-01-01

    Quite a number of interesting problems in probability feature an event with probability equal to 1/e. This article discusses three such problems and attempts to explain why this probability occurs with such frequency.
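
    One classic example of this phenomenon is the derangement problem: the probability that a random permutation leaves no item in its original position tends to 1/e. A small simulation (illustrative, not taken from the article) makes the limit visible.

```python
import math
import random

def derangement_fraction(n, trials=100_000, seed=0):
    """Fraction of random permutations of n items with no fixed point."""
    rng = random.Random(seed)
    items = list(range(n))
    hits = 0
    for _ in range(trials):
        perm = items[:]
        rng.shuffle(perm)
        hits += all(p != i for i, p in enumerate(perm))
    return hits / trials

print(derangement_fraction(10), 1 / math.e)   # both close to 0.3679
```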

  17. Quickest single-step one pot mechanosynthesis and characterization of ZnTe quantum dots

    Energy Technology Data Exchange (ETDEWEB)

    Patra, S. [Dept of Physics, University of Burdwan, Golapbag, Burdwan, West Bengal 713104 (India); Pradhan, S.K., E-mail: skp_bu@yahoo.com [Dept of Physics, University of Burdwan, Golapbag, Burdwan, West Bengal 713104 (India)

    2011-05-05

    Research highlights: > First time quickest mechanosynthesis of ZnTe QDs starting from Zn and Te powders. > Cubic ZnTe QDs are formed in a single pot at RT in a single step within 1 h of milling. > The existence of stacking faults and twin faults is evident from HRTEM images. > A distinct blue shift has been observed in the UV-vis absorption spectra. > First report that ZnTe QDs with faults can also show the quantum size effect. - Abstract: ZnTe quantum dots (QDs) are synthesized at room temperature in a single step by mechanically alloying a stoichiometric equimolar mixture (1:1 mol) of Zn and Te powders under Ar within 1 h of milling. Both XRD and HRTEM characterizations reveal that these QDs, having sizes of ~5 nm, contain stacking faults of different kinds. A distinct blue shift in the absorption spectra with decreasing particle size of the QDs confirms the quantum size confinement effect (QSCE). It is observed for the first time that QDs with a considerable amount of faults can also show the QSCE. The optical band gaps of these QDs increase with increasing milling time, and their band gaps can be fine-tuned easily by varying the milling time of the QDs.

  18. Diagnosing developmental dyscalculia on the basis of reliable single case FMRI methods: promises and limitations.

    Directory of Open Access Journals (Sweden)

    Philipp Johannes Dinkel

    Full Text Available FMRI-studies are mostly based on a group study approach, either analyzing one group or comparing multiple groups, or on approaches that correlate brain activation with clinically relevant criteria or behavioral measures. In this study we investigate the potential of fMRI-techniques focusing on individual differences in brain activation within a test-retest reliability context. We employ a single-case analysis approach, which contrasts dyscalculic children with a control group of typically developing children. In a second step, a support-vector machine analysis and cluster analysis techniques served to investigate similarities in multivariate brain activation patterns. Children were confronted with a non-symbolic number comparison and a non-symbolic exact calculation task during fMRI acquisition. Conventional second level group comparison analysis only showed small differences around the angular gyrus bilaterally and the left parieto-occipital sulcus. Analyses based on single-case statistical procedures revealed that developmental dyscalculia is characterized by individual differences predominantly in visual processing areas. Dyscalculic children seemed to compensate for relative under-activation in the primary visual cortex through an upregulation in higher visual areas. However, overlap in deviant activation was low for the dyscalculic children, indicating that developmental dyscalculia is a disorder characterized by heterogeneous brain activation differences. Using support vector machine analysis and cluster analysis, we tried to group dyscalculic and typically developing children according to brain activation. Fronto-parietal systems seem to qualify for a distinction between the two groups. However, this was only effective when reliable brain activations of both tasks were employed simultaneously. Results suggest that deficits in number representation in the visual-parietal cortex get compensated for through finger related aspects of number

  19. Diagnosing developmental dyscalculia on the basis of reliable single case FMRI methods: promises and limitations.

    Science.gov (United States)

    Dinkel, Philipp Johannes; Willmes, Klaus; Krinzinger, Helga; Konrad, Kerstin; Koten, Jan Willem

    2013-01-01

    FMRI-studies are mostly based on a group study approach, either analyzing one group or comparing multiple groups, or on approaches that correlate brain activation with clinically relevant criteria or behavioral measures. In this study we investigate the potential of fMRI-techniques focusing on individual differences in brain activation within a test-retest reliability context. We employ a single-case analysis approach, which contrasts dyscalculic children with a control group of typically developing children. In a second step, a support-vector machine analysis and cluster analysis techniques served to investigate similarities in multivariate brain activation patterns. Children were confronted with a non-symbolic number comparison and a non-symbolic exact calculation task during fMRI acquisition. Conventional second level group comparison analysis only showed small differences around the angular gyrus bilaterally and the left parieto-occipital sulcus. Analyses based on single-case statistical procedures revealed that developmental dyscalculia is characterized by individual differences predominantly in visual processing areas. Dyscalculic children seemed to compensate for relative under-activation in the primary visual cortex through an upregulation in higher visual areas. However, overlap in deviant activation was low for the dyscalculic children, indicating that developmental dyscalculia is a disorder characterized by heterogeneous brain activation differences. Using support vector machine analysis and cluster analysis, we tried to group dyscalculic and typically developing children according to brain activation. Fronto-parietal systems seem to qualify for a distinction between the two groups. However, this was only effective when reliable brain activations of both tasks were employed simultaneously. Results suggest that deficits in number representation in the visual-parietal cortex get compensated for through finger related aspects of number representation in

  20. An improved 4-step commutation method application for matrix converter

    DEFF Research Database (Denmark)

    Guo, Yu; Guo, Yougui; Deng, Wenlang

    2014-01-01

    A novel four-step commutation method is proposed in this paper for a matrix converter cell with 3-phase input and 1-phase output, which is obtained from an analysis of published commutation methods for matrix converters. The first and fourth steps can be shorter than the second or third one. The discussed...... method here is implemented by programming in the VHDL language. Finally, the novel method in this paper is verified by experiments....

  1. Probability for statisticians

    CERN Document Server

    Shorack, Galen R

    2017-01-01

    This 2nd edition textbook offers a rigorous introduction to measure theoretic probability with particular attention to topics of interest to mathematical statisticians—a textbook for courses in probability for students in mathematical statistics. It is recommended to anyone interested in the probability underlying modern statistics, providing a solid grounding in the probabilistic tools and techniques necessary to do theoretical research in statistics. For the teaching of probability theory to post graduate statistics students, this is one of the most attractive books available. Of particular interest is a presentation of the major central limit theorems via Stein's method either prior to or alternative to a characteristic function presentation. Additionally, there is considerable emphasis placed on the quantile function as well as the distribution function. The bootstrap and trimming are both presented. Martingale coverage includes coverage of censored data martingales. The text includes measure theoretic...

  2. Magnetic Beads-Based Sensor with Tailored Sensitivity for Rapid and Single-Step Amperometric Determination of miRNAs

    Directory of Open Access Journals (Sweden)

    Eva Vargas

    2017-11-01

    Full Text Available This work describes a sensitive amperometric magneto-biosensor for single-step and rapid determination of microRNAs (miRNAs). The developed strategy involves the use of direct hybridization of the target miRNA (miRNA-21) with a specific biotinylated DNA probe immobilized on streptavidin-modified magnetic beads (MBs), and labeling of the resulting heteroduplexes with a specific DNA–RNA antibody and the bacterial protein A (ProtA) conjugated with a horseradish peroxidase (HRP) homopolymer (Poly-HRP40) as an enzymatic label for signal amplification. Amperometric detection is performed upon magnetic capture of the modified MBs onto the working electrode surface of disposable screen-printed carbon electrodes (SPCEs) using the H2O2/hydroquinone (HQ) system. The magnitude of the cathodic signal obtained at −0.20 V (vs. the Ag pseudo-reference electrode) demonstrated linear dependence with the concentration of the synthetic target miRNA over the 1.0 to 100 pM range. The method provided a detection limit (LOD) of 10 attomoles (in a 25 μL sample) without any target miRNA amplification in just 30 min (once the DNA capture probe-MBs were prepared). This approach shows improved sensitivity compared with that of biosensors constructed with the same anti-DNA–RNA Ab as capture instead of a detector antibody and further labeling with a Strep-HRP conjugate instead of the Poly-HRP40 homopolymer. The developed strategy involves a single-step working protocol, as well as the possibility to tailor the sensitivity by enlarging the length of the DNA/miRNA heteroduplexes using additional probes and/or performing the labeling with ProtA conjugated with homopolymers prepared with different numbers of HRP molecules. The practical usefulness was demonstrated by determination of the endogenous levels of the mature target miRNA in 250 ng of raw total RNA (RNAt) extracted from human mammary epithelial normal (MCF-10A) and cancer (MCF-7) cells and tumor tissues.

  3. PREDICTION OF RESERVOIR FLOW RATE OF DEZ DAM BY THE PROBABILITY MATRIX METHOD

    Directory of Open Access Journals (Sweden)

    Mohammad Hashem Kanani

    2012-12-01

    Full Text Available The data collected from the operation of existing storage reservoirs could offer valuable information for better allocation and management of fresh water for future use, to mitigate drought effects. In this paper, a long-term prediction of the water rate of the Dez reservoir (Iran) is presented using the probability matrix method. Data are analyzed to find the probability matrix of water rates in the Dez reservoir based on the history of annual water inflow during the past 40 years. The algorithm developed covers both the overflow and non-overflow conditions in the reservoir. The results of this study show that in non-overflow conditions the most critical case has a probability of 75%. This means that if the reservoir is empty (the stored water is less than 100 MCM) this year, it would also be empty with 75% probability next year. The stored water in the reservoir would be less than 300 MCM with 85% probability next year if the reservoir is empty this year. This percentage decreases to 70% next year if the water of the reservoir is less than 300 MCM this year. The percentage also decreases to 5% next year if the reservoir is full this year. In overflow conditions the most critical case again has a probability of 75%. The reservoir volume would be less than 150 MCM with 90% probability next year if it is empty this year. This percentage decreases to 70% if its water volume is less than 300 MCM and to 55% if the water volume is less than 500 MCM this year. The results also show that if the probability matrix of water rates to a reservoir is multiplied by itself repeatedly, it converges to a constant probability matrix, which can be used to predict the long-term water rate of the reservoir. In other words, the probability matrix of a series of water rates converges to a steady probability matrix in the course of time, which reflects the hydrological behavior of the watershed and can easily be used for the long-term prediction of water storage in the downstream reservoirs.
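
    The convergence property noted at the end of the abstract is easy to reproduce: raising a stochastic (probability) matrix to a high power makes its rows converge to the stationary distribution. The 3-state transition matrix below (empty / partly full / full) is hypothetical, not the matrix estimated for the Dez reservoir.

```python
import numpy as np

P = np.array([
    [0.75, 0.20, 0.05],   # from "empty"
    [0.30, 0.50, 0.20],   # from "partly full"
    [0.05, 0.40, 0.55],   # from "full"
])

# Repeated self-multiplication: rows become (nearly) identical.
Pk = np.linalg.matrix_power(P, 50)
print(np.round(Pk, 3))

# The common row is the stationary distribution, also obtainable as the
# left eigenvector of P associated with eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
stationary = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
stationary /= stationary.sum()
print(np.round(stationary, 3))
```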

  4. A rapid, ratiometric, enzyme-free, and sensitive single-step miRNA detection using three-way junction based FRET probes

    Science.gov (United States)

    Luo, Qingying; Liu, Lin; Yang, Cai; Yuan, Jing; Feng, Hongtao; Chen, Yan; Zhao, Peng; Yu, Zhiqiang; Jin, Zongwen

    2018-03-01

    MicroRNAs (miRNAs) are single stranded endogenous molecules composed of only 18-24 nucleotides which are critical for gene expression regulating the translation of messenger RNAs. Conventional methods based on enzyme-assisted nucleic acid amplification techniques have many problems, such as easy contamination, high cost, susceptibility to false amplification, and tendency to have sequence mismatches. Here we report a rapid, ratiometric, enzyme-free, sensitive, and highly selective single-step miRNA detection using three-way junction assembled (or self-assembled) FRET probes. The developed strategy can be operated within the linear range from subnanomolar to hundred nanomolar concentrations of miRNAs. In comparison with the traditional approaches, our method showed high sensitivity for the miRNA detection and extreme selectivity for the efficient discrimination of single-base mismatches. The results reveal that the strategy paved a new avenue for the design of novel highly specific probes applicable in diagnostics and potentially in microscopic imaging of miRNAs in real biological environments.

  5. Light Scattering of Rough Orthogonal Anisotropic Surfaces with Secondary Most Probable Slope Distributions

    International Nuclear Information System (INIS)

    Li Hai-Xia; Cheng Chuan-Fu

    2011-01-01

    We study the light scattering of an orthogonal anisotropic rough surface with secondary most probable slope distribution. It is found that the scattered intensity profiles have obvious secondary maxima, and in the direction perpendicular to the plane of incidence, the secondary maxima are oriented in a curve on the observation plane, which is called the orientation curve. By numerical calculation of the scattering wave fields with the height data of the sample, it is validated that the secondary maxima are induced by the side face element, which constitutes the prismoid structure of the anisotropic surface. We derive the equation of the quadratic orientation curve. Experimentally, we construct the system for light scattering measurement using a CCD. The scattered intensity profiles are extracted from the images at different angles of incidence along the orientation curves. The experimental results conform to the theory. (fundamental areas of phenomenology(including applications))

  6. Gallium arsenide single crystal solar cell structure and method of making

    Science.gov (United States)

    Stirn, Richard J. (Inventor)

    1983-01-01

    A production method and structure for a thin-film GaAs crystal for a solar cell on a single-crystal silicon substrate (10) comprising the steps of growing a single-crystal interlayer (12) of material having a closer match in lattice and thermal expansion with single-crystal GaAs than the single-crystal silicon of the substrate, and epitaxially growing a single-crystal film (14) on the interlayer. The material of the interlayer may be germanium or graded germanium-silicon alloy, with low germanium content at the silicon substrate interface, and high germanium content at the upper surface. The surface of the interface layer (12) is annealed for recrystallization by a pulsed beam of energy (laser or electron) prior to growing the interlayer. The solar cell structure may be grown as a single-crystal n.sup.+ /p shallow homojunction film or as a p/n or n/p junction film. A Ga(Al)AS heteroface film may be grown over the GaAs film.

  7. Practical implementation of optimal management strategies in conservation programmes: a mate selection method

    Directory of Open Access Journals (Sweden)

    Fernández, J.

    2001-12-01

    Full Text Available The maintenance of genetic diversity is, from a genetic point of view, a key objective of conservation programmes. The selection of individuals contributing offspring and the choice of the mating scheme are the steps through which managers can control genetic diversity, especially in ‘ex situ’ programmes. Previous studies have shown that the optimal management strategy is to look for the parents' contributions that yield minimum group coancestry (the overall probability of identity by descent in the population) and then to arrange mating couples following minimum pairwise coancestry. However, physiological constraints make it necessary to account for mating restrictions when deciding the contributions and, therefore, these should be implemented in a single step along with the mating plan. In the present paper, a single-step method is proposed to optimise the management of a conservation programme when restrictions on the mating scheme exist. The performance of the method is tested by computer simulation. The strategy turns out to be as efficient as the two-step method, regarding both the genetic diversity preserved and the fitness of the population.
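
    The first management step mentioned above, choosing parental contributions that minimize group coancestry, can be sketched as a small quadratic optimization. The coancestry matrix below is hypothetical, and the paper's single-step method additionally folds the mating restrictions into the same optimization.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical pairwise coancestry matrix F (diagonal = self-coancestry).
F = np.array([
    [0.50, 0.10, 0.05, 0.00],
    [0.10, 0.50, 0.00, 0.05],
    [0.05, 0.00, 0.55, 0.20],
    [0.00, 0.05, 0.20, 0.60],
])
n = F.shape[0]

# Minimize group coancestry c'Fc over non-negative contributions summing to 1.
res = minimize(lambda c: c @ F @ c,
               x0=np.full(n, 1.0 / n),
               bounds=[(0.0, 1.0)] * n,
               constraints=[{"type": "eq", "fun": lambda c: c.sum() - 1.0}])
print(np.round(res.x, 3), round(res.fun, 4))
```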

  8. EnsembleGASVR: A novel ensemble method for classifying missense single nucleotide polymorphisms

    KAUST Repository

    Rapakoulia, Trisevgeni

    2014-04-26

    Motivation: Single nucleotide polymorphisms (SNPs) are considered the most frequently occurring DNA sequence variations. Several computational methods have been proposed for the classification of missense SNPs into neutral and disease associated. However, existing computational approaches fail to select relevant features by choosing them arbitrarily without sufficient documentation. Moreover, they are limited by the problem of missing values and imbalance between the learning datasets, and most of them do not support their predictions with confidence scores. Results: To overcome these limitations, a novel ensemble computational methodology is proposed. EnsembleGASVR facilitates a two-step algorithm, which in its first step applies a novel evolutionary embedded algorithm to locate close-to-optimal Support Vector Regression models. In its second step, these models are combined to extract a universal predictor, which is less prone to overfitting issues, systematizes the rebalancing of the learning sets and uses an internal approach for solving the missing values problem without loss of information. Confidence scores support all the predictions, and the model becomes tunable by modifying the classification thresholds. An extensive study was performed to collect the most relevant features for the problem of classifying SNPs, and a superset of 88 features was constructed. Experimental results show that the proposed framework outperforms well-known algorithms in terms of classification performance in the examined datasets. Finally, the proposed algorithmic framework was able to uncover the significant role of certain features such as the solvent accessibility feature, and the top-scored predictions were further validated by linking them with disease phenotypes. © The Author 2014.

  9. MASS TRANSFER CONTROL OF A BACKWARD-FACING STEP FLOW BY LOCAL FORCING- EFFECT OF REYNOLDS NUMBER

    Directory of Open Access Journals (Sweden)

    Zouhaier MEHREZ

    2011-01-01

    Full Text Available The control of fluid mechanics and mass transfer in separated and reattaching flow over a backward-facing step by local forcing is studied using Large Eddy Simulation (LES). To control the flow, the local forcing is realized by a sinusoidally oscillating jet at the step edge. The Reynolds number is varied in the range 10000 ≤ Re ≤ 50000 and the Schmidt number is fixed at 1. The results show that the flow structure is modified and the local mass transfer is enhanced by the applied forcing. The observed changes depend on the Reynolds number and vary with the frequency and amplitude of the local forcing. For all Reynolds numbers, the largest reduction of the recirculation zone size is obtained at the optimum forcing frequency St = 0.25. At this frequency the local mass transfer enhancement reaches its maximum.

  10. Compendium of Experimental Cetane Numbers

    Energy Technology Data Exchange (ETDEWEB)

    Yanowitz, Janet [Ecoengineering, Sharonville, OH (United States); Ratcliff, Matthew A. [National Renewable Energy Lab. (NREL), Golden, CO (United States); McCormick, Robert L. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Taylor, J. D. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Murphy, M. J. [Battelle, Columbus, OH (United States)

    2017-02-22

    This report is an updated version of the 2014 Compendium of Experimental Cetane Number Data and presents a compilation of measured cetane numbers for pure chemical compounds. It includes all available single-compound cetane number data found in the scientific literature up until December 2016 as well as a number of previously unpublished values, most measured over the past decade at the National Renewable Energy Laboratory. This version of the compendium contains cetane values for 496 pure compounds, including 204 hydrocarbons and 292 oxygenates. 176 individual measurements are new to this version of the compendium, all of them collected using ASTM Method D6890, which utilizes an Ignition Quality Tester (IQT) a type of constant-volume combustion chamber. For many compounds, numerous measurements are included, often collected by different researchers using different methods. The text of this document is unchanged from the 2014 version, except for the numbers of compounds in Section 3.1, the Appendices, Table 1. Primary Cetane Number Data Sources and Table 2. Number of Measurements Included in Compendium. Cetane number is a relative ranking of a fuel's autoignition characteristics for use in compression ignition engines. It is based on the amount of time between fuel injection and ignition, also known as ignition delay. The cetane number is typically measured either in a single-cylinder engine or a constant-volume combustion chamber. Values in the previous compendium derived from octane numbers have been removed and replaced with a brief analysis of the correlation between cetane numbers and octane numbers. The discussion on the accuracy and precision of the most commonly used methods for measuring cetane number has been expanded, and the data have been annotated extensively to provide additional information that will help the reader judge the relative reliability of individual results.

  11. Modeling the radiation transfer of discontinuous canopies: results for gap probability and single-scattering contribution

    Science.gov (United States)

    Zhao, Feng; Zou, Kai; Shang, Hong; Ji, Zheng; Zhao, Huijie; Huang, Wenjiang; Li, Cunjun

    2010-10-01

    In this paper we present an analytical model for the computation of radiation transfer in discontinuous vegetation canopies. Some initial results for the gap probability and bidirectional gap probability of discontinuous vegetation canopies, which are important parameters determining the radiative environment of the canopies, are given and compared with a 3-D computer simulation model. In the model, negative exponential attenuation of light within individual plant canopies is assumed. The computation of the gap probability is then resolved by determining the entry and exit points of the ray with the individual plants via their equations in space. For the bidirectional gap probability, which determines the single-scattering contribution of the canopy, a gap statistical analysis based model was adopted to correct the dependence of gap probabilities for both solar and viewing directions. The model incorporates structural characteristics such as plant size, leaf size, row spacing, foliage density, planting density, and leaf inclination distribution. Available experimental data are inadequate for a complete validation of the model, so it was evaluated with a three-dimensional computer simulation model for 3D vegetative scenes, which shows good agreement between the two models' results. This model should be useful for the quantification of light interception and the modeling of bidirectional reflectance distributions of discontinuous canopies.
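
    A minimal sketch of the negative-exponential attenuation assumption: along a ray, the gap probability is exp(-G*u*s), where u is the foliage area volume density, G the mean projection of unit leaf area, and s the total path length through the crowns the ray intersects. The values below are hypothetical, and the bidirectional correction of the paper is not included.

```python
import numpy as np

def gap_probability(path_lengths_in_crowns, foliage_density=0.5, G=0.5):
    """Gap probability of a ray crossing one or more crowns (Beer-Lambert form)."""
    total_path = np.sum(path_lengths_in_crowns)
    return np.exp(-G * foliage_density * total_path)

# A ray that crosses two crowns with path lengths 1.2 m and 0.8 m.
print(f"gap probability: {gap_probability([1.2, 0.8]):.3f}")
```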

  12. Method of forming catalyst layer by single step infiltration

    Science.gov (United States)

    Gerdes, Kirk; Lee, Shiwoo; Dowd, Regis

    2018-05-01

    Provided herein is a method for electrocatalyst infiltration of a porous substrate, of particular use for preparation of a cathode for a solid oxide fuel cell. The method generally comprises preparing an electrocatalyst infiltration solution comprising an electrocatalyst, surfactant, chelating agent, and a solvent; pretreating a porous mixed ionic-electronic conductive substrate; and applying the electrocatalyst infiltration solution to the porous mixed ionic-electronic conductive substrate.

  13. Three-Step Predictor-Corrector of Exponential Fitting Method for Nonlinear Schroedinger Equations

    International Nuclear Information System (INIS)

    Tang Chen; Zhang Fang; Yan Haiqing; Luo Tao; Chen Zhanqing

    2005-01-01

    We develop three-step explicit and implicit schemes of the exponential fitting method. The explicit scheme is used to predict an approximation, and the implicit scheme is then used to correct this prediction; this combination is called the three-step predictor-corrector exponential fitting method. The method is applied to the numerical computation of the coupled nonlinear Schroedinger equation and the nonlinear Schroedinger equation with varying coefficients. The numerical results show that the scheme is highly accurate.
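
    To make the predictor-corrector structure concrete, here is a minimal sketch of a multistep predictor-corrector for an ODE u' = f(t, u), using a two-step Adams-Bashforth predictor and a trapezoidal (Adams-Moulton) corrector; it illustrates only the predict-then-correct pattern and is not the authors' exponential-fitting scheme, and the test equation and step size are arbitrary.

        import numpy as np

        def predictor_corrector(f, u0, u1, t0, dt, n_steps):
            # Two-step Adams-Bashforth predictor followed by one trapezoidal corrector
            # pass (a PECE scheme); requires the two starting values u0 and u1.
            t, u_prev, u = t0 + dt, u0, u1
            for _ in range(n_steps):
                f_prev, f_cur = f(t - dt, u_prev), f(t, u)
                u_pred = u + dt * (1.5 * f_cur - 0.5 * f_prev)       # predict (explicit)
                u_next = u + 0.5 * dt * (f_cur + f(t + dt, u_pred))  # correct (implicit form)
                u_prev, u, t = u, u_next, t + dt
            return u

        # Example: u' = i*u, a linear stand-in for a Schroedinger-type phase rotation.
        f = lambda t, u: 1j * u
        dt = 0.01
        u1 = np.exp(1j * dt)                      # exact value at t = dt for start-up
        print(predictor_corrector(f, 1.0 + 0j, u1, 0.0, dt, 100))
        print(np.exp(1j * 0.01 * 101))            # exact solution at the final time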

  14. Establishment probability in newly founded populations

    Directory of Open Access Journals (Sweden)

    Gusset Markus

    2012-06-01

    Full Text Available Abstract Background Establishment success in newly founded populations relies on reaching the established phase, which is defined by characteristic fluctuations of the population’s state variables. Stochastic population models can be used to quantify the establishment probability of newly founded populations; however, so far no simple but robust method for doing so existed. To determine a critical initial number of individuals that need to be released to reach the established phase, we used a novel application of the “Wissel plot”, where –ln(1 – P0(t)) is plotted against time t. This plot is based on the equation P0(t) = 1 – c1·e^(–ω1·t), which relates the probability of extinction by time t, P0(t), to two constants: c1 describes the probability of a newly founded population to reach the established phase, whereas ω1 describes the population’s probability of extinction per short time interval once established. Results For illustration, we applied the method to a previously developed stochastic population model of the endangered African wild dog (Lycaon pictus). A newly founded population reaches the established phase if the intercept of the (extrapolated) linear parts of the “Wissel plot” with the y-axis, which is –ln(c1), is negative. For wild dogs in our model, this is the case if a critical initial number of four packs, consisting of eight individuals each, are released. Conclusions The method we present to quantify the establishment probability of newly founded populations is generic and inferences thus are transferable to other systems across the field of conservation biology. In contrast to other methods, our approach disaggregates the components of a population’s viability by distinguishing establishment from persistence.
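
    A minimal sketch of the “Wissel plot” computation described above: given extinction probabilities P0(t) estimated from repeated runs of a stochastic population model, plot –ln(1 – P0(t)) against t, fit the linear part, and read ω1 from the slope and –ln(c1) from the intercept (a negative intercept indicating that the established phase is reached). The P0(t) values below are synthetic placeholders, not output of the wild dog model.

        import numpy as np

        # Hypothetical extinction probabilities P0(t) from many simulation runs.
        t = np.arange(10, 101, 10, dtype=float)
        P0 = 1.0 - 1.2 * np.exp(-0.03 * t)        # synthetic data with c1 = 1.2, omega1 = 0.03

        y = -np.log(1.0 - P0)                     # Wissel plot ordinate
        slope, intercept = np.polyfit(t, y, 1)    # fit the (extrapolated) linear part

        omega1 = slope                            # extinction rate once established
        c1 = np.exp(-intercept)                   # establishment-related constant
        print(omega1, c1, intercept < 0)          # negative intercept => establishment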

  15. New Systematic CFD Methods to Calculate Static and Single Dynamic Stability Derivatives of Aircraft

    Directory of Open Access Journals (Sweden)

    Bai-gang Mi

    2017-01-01

    Full Text Available Several new systematic methods for high-fidelity and reliable calculation of static and single dynamic derivatives are proposed in this paper. An angle-of-attack step response is used to obtain the static derivative directly; the translation acceleration dynamic derivative and the rotary dynamic derivative can then be calculated by employing the step-response motion of the rate of angle of attack and the unsteady motion of a pitching angular velocity step response, respectively. Longitudinal stability derivative calculations of the SACCON UCAV are taken as test cases for validation. Numerical results for all cases achieve good agreement with reference values or experimental data from wind tunnel tests, which indicates that the proposed methods, given their high efficiency and precision, can be considered new tools in the design and production of advanced aircraft.

  16. Optimizing the calculation of DM,CO and VC via the single breath single oxygen tension DLCO/NO method.

    Science.gov (United States)

    Coffman, Kirsten E; Taylor, Bryan J; Carlson, Alex R; Wentz, Robert J; Johnson, Bruce D

    2016-01-15

    Alveolar-capillary membrane conductance (D(M,CO)) and pulmonary-capillary blood volume (V(C)) are calculated via lung diffusing capacity for carbon monoxide (DL(CO)) and nitric oxide (DL(NO)) using the single breath, single oxygen tension (single-FiO2) method. However, two calculation parameters, the reaction rate of carbon monoxide with blood (θ(CO)) and the D(M,NO)/D(M,CO) ratio (α-ratio), are controversial. This study systematically determined optimal θ(CO) and α-ratio values to be used in the single-FiO2 method that yielded the most similar D(M,CO) and V(C) values compared to the 'gold-standard' multiple-FiO2 method. Eleven healthy subjects performed single breath DL(CO)/DL(NO) maneuvers at rest and during exercise. D(M,CO) and V(C) were calculated via the single-FiO2 and multiple-FiO2 methods by implementing seven θ(CO) equations and a range of previously reported α-ratios. The RP θ(CO) equation (Reeves, R.B., Park, H.K., 1992. Respiration Physiology 88, 1-21) and an α-ratio of 4.0-4.4 yielded D(M,CO) and V(C) values that were most similar between methods. The RP θ(CO) equation and an experimental α-ratio should be used in future studies. Copyright © 2015 Elsevier B.V. All rights reserved.
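
    A minimal sketch of the underlying Roughton-Forster style calculation in the single-FiO2 setting, solving 1/DL(CO) = 1/D(M,CO) + 1/(θ(CO)·V(C)) together with 1/DL(NO) = 1/(α·D(M,CO)) + 1/(θ(NO)·V(C)) for D(M,CO) and V(C). The DL values, the θ values, and the α-ratio below are illustrative placeholders, not the study's data or the RP θ(CO) equation.

        def dm_vc_single_fio2(dlco, dlno, alpha=4.2, theta_co=0.8, theta_no=4.5):
            # Solve the two Roughton-Forster equations (DL/DM in mL/min/mmHg,
            # theta in mL gas/(min*mmHg*mL blood), Vc in mL) for D(M,CO) and V(C).
            a, b = 1.0 / dlco, 1.0 / dlno
            inv_vc = (b - a / alpha) / (1.0 / theta_no - 1.0 / (alpha * theta_co))
            inv_dm = a - inv_vc / theta_co
            return 1.0 / inv_dm, 1.0 / inv_vc

        # Hypothetical resting values: DL(CO) = 30, DL(NO) = 140 mL/min/mmHg.
        dm_co, vc = dm_vc_single_fio2(30.0, 140.0)
        print(round(dm_co, 1), round(vc, 1))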

  17. Uncertainties and quantification of common cause failure rates and probabilities for system analyses

    International Nuclear Information System (INIS)

    Vaurio, Jussi K.

    2005-01-01

    Simultaneous failures of multiple components due to common causes at random times are modelled by constant multiple-failure rates. A procedure is described for quantification of common cause failure (CCF) basic event probabilities for system models using plant-specific and multiple-plant failure-event data. Methodology is presented for estimating CCF-rates from event data contaminated with assessment uncertainties. Generalised impact vectors determine the moments for the rates of individual systems or plants. These moments determine the effective numbers of events and observation times to be input to a Bayesian formalism to obtain plant-specific posterior CCF-rates. The rates are used to determine plant-specific common cause event probabilities for the basic events of explicit fault tree models depending on test intervals, test schedules and repair policies. Three methods are presented to determine these probabilities such that the correct time-average system unavailability can be obtained with single fault tree quantification. Recommended numerical values are given and examples illustrate different aspects of the methodology

  18. Thermal gradient migration of brine inclusions in synthetic alkali halide single crystals

    International Nuclear Information System (INIS)

    Olander, D.R.; Machiels, A.J.; Balooch, M.; Yagnik, S.K.

    1982-01-01

    An apparatus consisting of an optical microscope with a hot stage attachment capable of simultaneously nonuniformly heating and mechanically loading small single crystals of salt was used to measure the velocities of all-liquid inclusions in NaCl and KCl specimens under various conditions of temperature, temperature gradient, and uniaxial stress. The rate-controlling elementary step in the migration of the inclusions was found to be associated with interfacial processes, probably dissolution of the hot face. Dislocations are required for this step to take place. The small number of dislocation intersections with small inclusions in nearly perfect crystals causes substantial variations in the velocity, a sensitivity of the velocity to mechanical loading of the crystal, and a velocity which varies approximately as the second power of the temperature gradient.

  19. Development of single step RT-PCR for detection of Kyasanur forest disease virus from clinical samples

    Directory of Open Access Journals (Sweden)

    Gouri Chaubal

    2018-02-01

    Discussion and conclusion: The higher cost of the previously published sensitive real-time RT-PCR assay, in terms of reagents, machine setup, and technical expertise, was the primary reason for developing this assay. A single step RT-PCR is relatively easy to perform and more cost effective than real-time RT-PCR in smaller setups lacking a Biosafety Level-3 facility. This study reports the development and optimization of a single step RT-PCR assay which is more sensitive and less time-consuming than nested RT-PCR and cost effective for rapid diagnosis of KFD viral RNA.

  20. Probability and risk criteria for channel depth design and channel operation

    CSIR Research Space (South Africa)

    Moes, H

    2008-05-01

    Full Text Available The paper reviews the various levels of probability of bottom touching and risk criteria which are being used. This leads to a relationship between the statistically expected number of vertical ship motions in the channel during a single shipping...

  1. Electric-current-induced step bunching on Si(111)

    International Nuclear Information System (INIS)

    Homma, Yoshikazu; Aizawa, Noriyuki

    2000-01-01

    We experimentally investigated step bunching induced by direct current on vicinal Si(111)'1x1' surfaces using scanning electron microscopy and atomic force microscopy. The scaling relation between the average step spacing l_b and the number of steps N in a bunch, l_b ∼ N^(-α), was determined for four step-bunching temperature regimes above the 7x7-'1x1' transition temperature. The step-bunching rate and scaling exponent differ between neighboring step-bunching regimes. The exponent α is 0.7 for the two regimes where the step-down current induces step bunching (860-960 and 1210-1300 deg. C), and 0.6 for the two regimes where the step-up current induces step bunching (1060-1190 and >1320 deg. C). The number of single steps on terraces also differs in each of the four temperature regimes. For temperatures higher than 1280 deg. C, the prefactor of the scaling relation increases, indicating an increase in step-step repulsion. The scaling exponents obtained agree reasonably well with those predicted by theoretical models. However, they give unrealistic values for the effective charges of adatoms for step-up-current-induced step bunching when the 'transparent' step model is used
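
    A minimal sketch of how a scaling exponent of the form l_b ∼ N^(-α) can be extracted from (bunch size, step spacing) data by a straight-line fit in log-log space; the data points below are synthetic placeholders, not measurements from this study.

        import numpy as np

        # Synthetic (N, l_b) pairs following l_b ~ N^(-0.7) with multiplicative noise.
        rng = np.random.default_rng(0)
        N = np.array([5, 10, 20, 40, 80, 160], dtype=float)
        l_b = 50.0 * N ** -0.7 * rng.lognormal(sigma=0.05, size=N.size)

        slope, intercept = np.polyfit(np.log(N), np.log(l_b), 1)
        alpha = -slope                    # scaling exponent
        prefactor = np.exp(intercept)     # prefactor of the scaling relation
        print(round(alpha, 2), round(prefactor, 1))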

  2. A coupled weather generator - rainfall-runoff approach on hourly time steps for flood risk analysis

    Science.gov (United States)

    Winter, Benjamin; Schneeberger, Klaus; Dung Nguyen, Viet; Vorogushyn, Sergiy; Huttenlau, Matthias; Merz, Bruno; Stötter, Johann

    2017-04-01

    The evaluation of potential monetary damage of flooding is an essential part of flood risk management. One possibility to estimate the monetary risk is to analyze long time series of observed flood events and their corresponding damages. In reality, however, only a few flood events are documented. This limitation can be overcome by the generation of a set of synthetic, physically and spatially plausible flood events and subsequently the estimation of the resulting monetary damages. In the present work, a set of synthetic flood events is generated by a continuous rainfall-runoff simulation in combination with a coupled weather generator and temporal disaggregation procedure for the study area of Vorarlberg (Austria). Most flood risk studies focus on daily time steps; however, the mesoscale alpine study area is characterized by short concentration times, leading to large differences between daily mean and daily maximum discharge. Accordingly, an hourly time step is needed for the simulations. The hourly meteorological input for the rainfall-runoff model is generated in a two-step approach. A synthetic daily dataset is generated by a multivariate and multisite weather generator and subsequently disaggregated to hourly time steps with a k-Nearest-Neighbor model. Following the event generation procedure, the negative consequences of flooding are analyzed. The corresponding flood damage for each synthetic event is estimated by combining the synthetic discharge at representative points of the river network with a loss probability relation for each community in the study area. The loss probability relation is based on exposure and susceptibility analyses on a single object basis (residential buildings) for certain return periods. For these impact analyses official inundation maps of the study area are used. Finally, by analyzing the total event time series of damages, the expected annual damage or losses associated with a certain probability of occurrence can be estimated for

  3. A stabilized Runge–Kutta–Legendre method for explicit super-time-stepping of parabolic and mixed equations

    International Nuclear Information System (INIS)

    Meyer, Chad D.; Balsara, Dinshaw S.; Aslam, Tariq D.

    2014-01-01

    Parabolic partial differential equations appear in several physical problems, including problems that have a dominant hyperbolic part coupled to a sub-dominant parabolic component. Explicit methods for their solution are easy to implement but have very restrictive time step constraints. Implicit solution methods can be unconditionally stable but have the disadvantage of being computationally costly or difficult to implement. Super-time-stepping methods for treating parabolic terms in mixed type partial differential equations occupy an intermediate position. In such methods each superstep takes “s” explicit Runge–Kutta-like time-steps to advance the parabolic terms by a time-step that is s² times larger than a single explicit time-step. The expanded stability is usually obtained by mapping the short recursion relation of the explicit Runge–Kutta scheme to the recursion relation of some well-known, stable polynomial. Prior work has built temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Chebyshev polynomials. Since their stability is based on the boundedness of the Chebyshev polynomials, these methods have been called RKC1 and RKC2. In this work we build temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Legendre polynomials. We call these methods RKL1 and RKL2. The RKL1 method is first-order accurate in time; the RKL2 method is second-order accurate in time. We verify that the newly-designed RKL1 and RKL2 schemes have a very desirable monotonicity preserving property for one-dimensional problems – a solution that is monotone at the beginning of a time step retains that property at the end of that time step. It is shown that RKL1 and RKL2 methods are stable for all values of the diffusion coefficient up to the maximum value. We call this a convex monotonicity preserving property and show by examples that it is very useful
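
    A minimal sketch of a first-order super-time-stepping step in the RKL1 spirit for the 1D heat equation, using an s-stage recursion with the coefficients commonly quoted for RKL1 (treated here as an assumption): mu_j = (2j-1)/j, nu_j = (1-j)/j, with the diffusive term scaled by 2*mu_j/(s^2+s). The grid, diffusivity, stage count, and safety factor are arbitrary placeholders.

        import numpy as np

        def rkl1_step(u, dt, L, s):
            # One s-stage super-time-step of an RKL1-style scheme: advances u by dt,
            # where dt may be up to (s*s + s)/2 times the explicit stability limit.
            w1 = 2.0 / (s * s + s)
            y_prev, y = u, u + w1 * dt * L(u)                    # stages 0 and 1
            for j in range(2, s + 1):
                mu, nu = (2.0 * j - 1.0) / j, (1.0 - j) / j
                y_prev, y = y, mu * y + nu * y_prev + mu * w1 * dt * L(y)
            return y

        # 1D heat equation u_t = D u_xx on a periodic grid (placeholder setup).
        nx, D = 200, 1.0
        dx = 1.0 / nx
        x = np.arange(nx) * dx
        u = np.exp(-100.0 * (x - 0.5) ** 2)
        lap = lambda v: D * (np.roll(v, -1) - 2.0 * v + np.roll(v, 1)) / dx ** 2

        s = 5
        dt_explicit = 0.5 * dx ** 2 / D                          # explicit Euler limit
        dt_super = 0.9 * 0.5 * (s * s + s) * dt_explicit         # ~13.5 explicit steps at once
        u = rkl1_step(u, dt_super, lap, s)
        print(u.max())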

  4. A transmission probability method for calculation of neutron flux distributions in hexagonal geometry

    International Nuclear Information System (INIS)

    Wasastjerna, F.; Lux, I.

    1980-03-01

    A transmission probability method implemented in the program TPHEX is described. This program was developed for the calculation of neutron flux distributions in hexagonal light water reactor fuel assemblies. The accuracy appears to be superior to diffusion theory, and the computation time is shorter than that of the collision probability method. (author)

  5. Bayesian maximum posterior probability method for interpreting plutonium urinalysis data

    International Nuclear Information System (INIS)

    Miller, G.; Inkret, W.C.

    1996-01-01

    A new internal dosimetry code for interpreting urinalysis data in terms of radionuclide intakes is described for the case of plutonium. The mathematical method is to maximise the Bayesian posterior probability using an entropy function as the prior probability distribution. A software package (MEMSYS) developed for image reconstruction is used. Some advantages of the new code are that it ensures positive calculated dose, it smooths out fluctuating data, and it provides an estimate of the propagated uncertainty in the calculated doses. (author)

  6. Link importance incorporated failure probability measuring solution for multicast light-trees in elastic optical networks

    Science.gov (United States)

    Li, Xin; Zhang, Lu; Tang, Ying; Huang, Shanguo

    2018-03-01

    The light-tree-based optical multicasting (LT-OM) scheme provides a spectrum- and energy-efficient method to accommodate emerging multicast services. Some studies focus on the survivability technologies for LTs against a fixed number of link failures, such as single-link failure. However, a few studies involve failure probability constraints when building LTs. It is worth noting that each link of an LT plays a role of different importance under failure scenarios. When calculating the failure probability of an LT, the importance of each of its links should be considered. We design a link importance incorporated failure probability measuring solution (LIFPMS) for multicast LTs under the independent failure model and the shared risk link group failure model. Based on the LIFPMS, we put forward the minimum failure probability (MFP) problem for the LT-OM scheme. Heuristic approaches are developed to address the MFP problem in elastic optical networks. Numerical results show that the LIFPMS provides an accurate metric for calculating the failure probability of multicast LTs and enhances the reliability of the LT-OM scheme while accommodating multicast services.

  7. Probability of cavitation for single ultrasound pulses applied to tissues and tissue-mimicking materials.

    Science.gov (United States)

    Maxwell, Adam D; Cain, Charles A; Hall, Timothy L; Fowlkes, J Brian; Xu, Zhen

    2013-03-01

    In this study, the negative pressure values at which inertial cavitation consistently occurs in response to a single, two-cycle, focused ultrasound pulse were measured in several media relevant to cavitation-based ultrasound therapy. The pulse was focused into a chamber containing one of the media, which included liquids, tissue-mimicking materials, and ex vivo canine tissue. Focal waveforms were measured by two separate techniques using a fiber-optic hydrophone. Inertial cavitation was identified by high-speed photography in optically transparent media and an acoustic passive cavitation detector. The probability of cavitation (P(cav)) for a single pulse as a function of peak negative pressure (p(-)) followed a sigmoid curve, with the probability approaching one when the pressure amplitude was sufficient. The statistical threshold (defined as P(cav) = 0.5) was between p(-) = 26 and 30 MPa in all samples with high water content but varied between p(-) = 13.7 and >36 MPa in other media. A model for radial cavitation bubble dynamics was employed to evaluate the behavior of cavitation nuclei at these pressure levels. A single bubble nucleus with an inertial cavitation threshold of p(-) = 28.2 megapascals was estimated to have a 2.5 nm radius in distilled water. These data may be valuable for cavitation-based ultrasound therapy to predict the likelihood of cavitation at various pressure levels and dimensions of cavitation-induced lesions in tissue. Copyright © 2013 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
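
    The sigmoid pressure-probability relationship described above can be summarized by fitting a logistic curve to the fraction of pulses that produced cavitation at each peak negative pressure; the fitted 50% point then plays the role of the statistical threshold P(cav) = 0.5. The observed fractions below are synthetic placeholders, not the study's measurements.

        import numpy as np
        from scipy.optimize import curve_fit

        def sigmoid(p_neg, p50, k):
            # Logistic probability of cavitation vs peak negative pressure (MPa).
            return 1.0 / (1.0 + np.exp(-(p_neg - p50) / k))

        # Synthetic observed cavitation fractions at each peak negative pressure.
        p_neg = np.array([20, 22, 24, 26, 27, 28, 29, 30, 32, 34], dtype=float)
        frac = np.array([0.0, 0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0, 1.0, 1.0])

        (p50, k), _ = curve_fit(sigmoid, p_neg, frac, p0=(27.0, 1.0))
        print(p50, k)      # p50 ~ statistical cavitation threshold (P_cav = 0.5)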

  8. Unification of field theory and maximum entropy methods for learning probability densities

    Science.gov (United States)

    Kinney, Justin B.

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  10. A two-step FEM-SEM approach for wave propagation analysis in cable structures

    Science.gov (United States)

    Zhang, Songhan; Shen, Ruili; Wang, Tao; De Roeck, Guido; Lombaert, Geert

    2018-02-01

    Vibration-based methods are among the most widely studied in structural health monitoring (SHM). It is well known, however, that the low-order modes, characterizing the global dynamic behaviour of structures, are relatively insensitive to local damage. Such local damage may be easier to detect by methods based on wave propagation which involve local high frequency behaviour. The present work considers the numerical analysis of wave propagation in cables. A two-step approach is proposed which allows taking into account the cable sag and the distribution of the axial forces in the wave propagation analysis. In the first step, the static deformation and internal forces are obtained by the finite element method (FEM), taking into account geometric nonlinear effects. In the second step, the results from the static analysis are used to define the initial state of the dynamic analysis which is performed by means of the spectral element method (SEM). The use of the SEM in the second step of the analysis allows for a significant reduction in computational costs as compared to a FE analysis. This methodology is first verified by means of a full FE analysis for a single stretched cable. Next, simulations are made to study the effects of damage in a single stretched cable and a cable-supported truss. The results of the simulations show how damage significantly affects the high frequency response, confirming the potential of wave propagation based methods for SHM.

  11. A MISO-ARX-Based Method for Single-Trial Evoked Potential Extraction

    Directory of Open Access Journals (Sweden)

    Nannan Yu

    2017-01-01

    Full Text Available In this paper, we propose a novel method for solving the single-trial evoked potential (EP) estimation problem. In this method, the single-trial EP is considered as a complex containing many components, which may originate from different functional brain sites; these components can be distinguished according to their respective latencies and amplitudes and are extracted simultaneously by multiple-input single-output autoregressive modeling with exogenous input (MISO-ARX). The extraction process is performed in three stages: first, we use a reference EP as a template and decompose it into a set of components, which serve as subtemplates for the remaining steps. Then, a dictionary is constructed with these subtemplates, and EPs are preliminarily extracted by sparse coding in order to roughly estimate the latency of each component. Finally, the single-trial measurement is parametrically modeled by MISO-ARX while characterizing spontaneous electroencephalographic activity as an autoregression model driven by white noise and with each component of the EP modeled by autoregressive-moving-average filtering of the subtemplates. Once optimized, all components of the EP can be extracted. Compared with ARX, our method has greater tracking capabilities of specific components of the EP complex as each component is modeled individually in MISO-ARX. We provide exhaustive experimental results to show the effectiveness and feasibility of our method.

  12. Maximization of regional probabilities using Optimal Surface Graphs

    DEFF Research Database (Denmark)

    Arias Lorza, Andres M.; Van Engelen, Arna; Petersen, Jens

    2018-01-01

    Purpose: We present a segmentation method that maximizes regional probabilities enclosed by coupled surfaces using an Optimal Surface Graph (OSG) cut approach. This OSG cut determines the globally optimal solution given a graph constructed around an initial surface. While most methods for vessel...... wall segmentation only use edge information, we show that maximizing regional probabilities using an OSG improves the segmentation results. We applied this to automatically segment the vessel wall of the carotid artery in magnetic resonance images. Methods: First, voxel-wise regional probability maps...... were obtained using a Support Vector Machine classifier trained on local image features. Then, the OSG segments the regions which maximizes the regional probabilities considering smoothness and topological constraints. Results: The method was evaluated on 49 carotid arteries from 30 subjects...

  13. The perception of probability.

    Science.gov (United States)

    Gallistel, C R; Krishan, Monika; Liu, Ye; Miller, Reilly; Latham, Peter E

    2014-01-01

    We present a computational model to explain the results from experiments in which subjects estimate the hidden probability parameter of a stepwise nonstationary Bernoulli process outcome by outcome. The model captures the following results qualitatively and quantitatively, with only 2 free parameters: (a) Subjects do not update their estimate after each outcome; they step from one estimate to another at irregular intervals. (b) The joint distribution of step widths and heights cannot be explained on the assumption that a threshold amount of change must be exceeded in order for them to indicate a change in their perception. (c) The mapping of observed probability to the median perceived probability is the identity function over the full range of probabilities. (d) Precision (how close estimates are to the best possible estimate) is good and constant over the full range. (e) Subjects quickly detect substantial changes in the hidden probability parameter. (f) The perceived probability sometimes changes dramatically from one observation to the next. (g) Subjects sometimes have second thoughts about a previous change perception, after observing further outcomes. (h) The frequency with which they perceive changes moves in the direction of the true frequency over sessions. (Explaining this finding requires 2 additional parametric assumptions.) The model treats the perception of the current probability as a by-product of the construction of a compact encoding of the experienced sequence in terms of its change points. It illustrates the why and the how of intermittent Bayesian belief updating and retrospective revision in simple perception. It suggests a reinterpretation of findings in the recent literature on the neurobiology of decision making. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  14. When a Step Is Not a Step! Specificity Analysis of Five Physical Activity Monitors.

    Directory of Open Access Journals (Sweden)

    Sandra O'Connell

    Full Text Available Physical activity is an essential aspect of a healthy lifestyle for both physical and mental health states. As step count is one of the most utilized measures for quantifying physical activity, it is important that activity-monitoring devices be both sensitive and specific in recording actual steps taken and in disregarding non-stepping body movements. The objective of this study was to assess the specificity of five activity monitors during a variety of prescribed non-stepping activities. Participants wore five activity monitors simultaneously for a variety of prescribed activities including deskwork, taking an elevator, taking a bus journey, automobile driving, washing and drying dishes; functional reaching task; indoor cycling; outdoor cycling; and indoor rowing. Each task was carried out for either a specific duration of time or over a specific distance. Activity monitors tested were the ActivPAL micro™, NL-2000™ pedometer, Withings Smart Activity Monitor Tracker (Pulse O2™), Fitbit One™ and Jawbone UP™. Participants were video-recorded while carrying out the prescribed activities and the false positive step count registered on each activity monitor was obtained and compared to the video. All activity monitors registered a significant number of false positive steps per minute during one or more of the prescribed activities. The Withings™ activity monitor performed best, registering a significant number of false positive steps per minute during the outdoor cycling activity only (P = 0.025). The Jawbone™ registered a significant number of false positive steps during the functional reaching task and while washing and drying dishes, which involved arm and hand movement (P < 0.01 for both). The ActivPAL™ registered a significant number of false positive steps during the cycling exercises (P < 0.001 for both). As a number of false positive steps were registered on the activity monitors during the non-stepping activities, the authors conclude that non-stepping

  15. The Seven Step Strategy

    Science.gov (United States)

    Schaffer, Connie

    2017-01-01

    Many well-intended instructors use Socratic or leveled questioning to facilitate the discussion of an assigned reading. While this engages a few students, most can opt to remain silent. The seven step strategy described in this article provides an alternative to classroom silence and engages all students. Students discuss a single reading as they…

  16. Security and gain improvement of a practical quantum key distribution using a gated single-photon source and probabilistic photon-number resolution

    International Nuclear Information System (INIS)

    Horikiri, Tomoyuki; Sasaki, Hideki; Wang, Haibo; Kobayashi, Takayoshi

    2005-01-01

    We propose a high-security quantum key distribution (QKD) scheme utilizing one mode of spontaneous parametric downconversion gated by a photon-number-resolving detector. This photon number measurement is possible using single-photon detectors operating at room temperature and optical fibers. By post-selection, the multiphoton probability in this scheme can be reduced below that of a scheme using attenuated coherent light, resulting in improved security. Furthermore, if a distillation protocol (error correction and privacy amplification) is performed, the gain will be increased. Hence a QKD system with higher security and bit rate than laser-based QKD systems can be attained using presently available technologies.

  17. Multi-Level Wavelet Shannon Entropy-Based Method for Single-Sensor Fault Location

    Directory of Open Access Journals (Sweden)

    Qiaoning Yang

    2015-10-01

    Full Text Available In actual application, sensors are prone to failure because of harsh environments, battery drain, and sensor aging. Sensor fault location is an important step for follow-up sensor fault detection. In this paper, two new multi-level wavelet Shannon entropies (multi-level wavelet time Shannon entropy and multi-level wavelet time-energy Shannon entropy) are defined. They take full advantage of the sensor fault frequency distribution and energy distribution across multiple subbands in the wavelet domain. Based on the multi-level wavelet Shannon entropy, a method is proposed for single sensor fault location. The method first uses a criterion of maximum energy-to-Shannon entropy ratio to select the appropriate wavelet base for signal analysis. Then multi-level wavelet time Shannon entropy and multi-level wavelet time-energy Shannon entropy are used to locate the fault. The method is validated using practical chemical gas concentration data from a gas sensor array. Compared with wavelet time Shannon entropy and wavelet energy Shannon entropy, the experimental results demonstrate that the proposed method can achieve accurate location of a single sensor fault and has good anti-noise ability. The proposed method is feasible and effective for single-sensor fault location.
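
    As a rough illustration of entropy computed across wavelet decomposition levels, the sketch below decomposes a signal with PyWavelets and computes a Shannon entropy over the normalized energy distribution of each level's coefficients; this is a simplified stand-in, not the paper's exact multi-level wavelet time or time-energy entropy definitions, and the wavelet choice, level count, and simulated fault are arbitrary.

        import numpy as np
        import pywt

        def wavelet_level_entropies(signal, wavelet="db4", level=4):
            # Shannon entropy of the normalized squared-coefficient distribution
            # at each wavelet decomposition level (simplified illustration).
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            entropies = []
            for c in coeffs:
                p = c ** 2 / np.sum(c ** 2)          # energy distribution within the level
                p = p[p > 0]
                entropies.append(-np.sum(p * np.log(p)))
            return entropies

        # Hypothetical sensor signal: smooth baseline plus a noise burst (simulated fault).
        t = np.linspace(0, 1, 1024)
        x = np.sin(2 * np.pi * 5 * t)
        x[600:650] += np.random.default_rng(1).normal(scale=0.8, size=50)
        print(wavelet_level_entropies(x))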

  18. The estimation of collision probabilities in complicated geometries

    International Nuclear Information System (INIS)

    Roth, M.J.

    1969-04-01

    This paper demonstrates how collision probabilities in complicated geometries may be estimated. It is assumed that the reactor core may be divided into a number of cells each with simple geometry so that a collision probability matrix can be calculated for each cell by standard methods. It is then shown how these may be joined together. (author)

  19. Estimation of Extreme Response and Failure Probability of Wind Turbines under Normal Operation using Probability Density Evolution Method

    DEFF Research Database (Denmark)

    Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Liu, W. F.

    2013-01-01

    Estimation of extreme response and failure probability of structures subjected to ultimate design loads is essential for structural design of wind turbines according to the new standard IEC61400-1. This task is addressed in the present paper by means of the probability density evolution method (PDEM), which underlies the schemes of random vibration analysis and structural reliability assessment. The short-term rare failure probability of 5-megawatt wind turbines is investigated, for illustrative purposes, for given mean wind speeds and turbulence levels, through the scheme of the extreme value distribution instead of any other approximate schemes of fitted distribution currently used in statistical extrapolation techniques. Besides, comparative studies against the classical fitted distributions and the standard Monte Carlo techniques are carried out. Numerical results indicate that PDEM exhibits

  20. Improvement of Pulping Uniformity by Measurement of Single Fiber Kappa Number

    Energy Technology Data Exchange (ETDEWEB)

    Richard R. Gustafson; James B. Callis

    2001-11-20

    A method to measure the kappa of single fibers by staining with a fluorescent dye, Acridine Orange (AO), has been developed. This method is now applied to develop an automated flow-through instrument that permits routine kappa analysis on thousands of images of AO-stained fibers to give the fiber kappa number distribution of a pulp sample in a few minutes. The design and operation of the instrument are similar to those of a flow cytometer but with the addition of extensive fiber imaging capability. Fluorescence measurements in the flow-through instrument are found to be consistent with those made with a fluorescence microscope, provided the signal processing in the flow-through instrument is handled properly. The kappa distributions of pulps that were analyzed by means of a density gradient column are compared to those measured with the flow-through instrument with good results. The kappa distributions of various laboratory pulps and commercial pulps have been measured. It has been found that all pulps are non-uniform but that commercial pulps generally have broader kappa distributions than their laboratory counterparts. The effects of different pulping methods and chip pretreatments on pulp uniformity are discussed in the report. Finally, the application of flow-through fluorescence technology to other single fiber measurements is presented.

  1. Limited test data: The choice between confidence limits and inverse probability

    International Nuclear Information System (INIS)

    Nichols, P.

    1975-01-01

    For a unit which has been successfully designed to a high standard of reliability, any test programme of reasonable size will result in only a small number of failures. In these circumstances the failure rate estimated from the tests will depend on the statistical treatment applied. When a large number of units is to be manufactured, an unexpectedly high failure rate will certainly result in a large number of failures, so it is necessary to guard against optimistic, unrepresentative test results by using a confidence limit approach. If only a small number of production units is involved, failures may not occur even with a higher than expected failure rate, and so one may be able to accept a method which allows for the possibility of either optimistic or pessimistic test results, and in this case an inverse probability approach, based on Bayes' theorem, might be used. The paper first draws attention to an apparently significant difference in the numerical results from the two methods, particularly for the overall probability of several units arranged in redundant logic. It then discusses a possible objection to the inverse method, followed by a demonstration that, for a large population and a very reasonable choice of prior probability, the inverse probability and confidence limit methods give the same numerical result. Finally, it is argued that a confidence limit approach is overpessimistic when a small number of production units is involved, and that both methods give the same answer for a large population. (author)
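
    To make the contrast concrete, the sketch below compares a classical upper confidence limit on a failure rate (via the chi-squared relation for Poisson-distributed failures) with a Bayesian upper credible limit under a simple gamma conjugate prior; the test-data numbers and the choice of prior are illustrative assumptions, not values from the paper.

        from scipy import stats

        # Hypothetical test programme: k failures observed over T unit-hours.
        k, T = 1, 50000.0

        # Classical one-sided upper 95% confidence limit on the failure rate (per hour).
        lambda_ucl = stats.chi2.ppf(0.95, 2 * (k + 1)) / (2.0 * T)

        # Bayesian (inverse probability) 95% upper credible limit with a gamma posterior.
        # A Jeffreys-type Gamma(0.5) prior is used purely as an illustrative choice.
        posterior = stats.gamma(a=k + 0.5, scale=1.0 / T)
        lambda_bayes = posterior.ppf(0.95)

        print(lambda_ucl, lambda_bayes)   # the confidence-limit value is the more pessimistic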

  2. Home-based step training using videogame technology in people with Parkinson's disease: a single-blinded randomised controlled trial.

    Science.gov (United States)

    Song, Jooeun; Paul, Serene S; Caetano, Maria Joana D; Smith, Stuart; Dibble, Leland E; Love, Rachelle; Schoene, Daniel; Menant, Jasmine C; Sherrington, Cathie; Lord, Stephen R; Canning, Colleen G; Allen, Natalie E

    2018-03-01

    To determine whether 12-week home-based exergame step training can improve stepping performance, gait and complementary physical and neuropsychological measures associated with falls in Parkinson's disease. A single-blinded randomised controlled trial. Community (experimental intervention), university laboratory (outcome measures). Sixty community-dwelling people with Parkinson's disease. Home-based step training using videogame technology. The primary outcomes were the choice stepping reaction time test and Functional Gait Assessment. Secondary outcomes included physical and neuropsychological measures associated with falls in Parkinson's disease, number of falls over six months and self-reported mobility and balance. Post intervention, there were no differences between the intervention (n = 28) and control (n = 25) groups in the primary or secondary outcomes except for the Timed Up and Go test, where there was a significant difference in favour of the control group (P = 0.02). Intervention participants reported mobility improvement, whereas control participants reported mobility deterioration (between-group difference on an 11-point scale = 0.9; 95% confidence interval: -1.8 to -0.1, P = 0.03). Interaction effects between intervention and disease severity on physical function measures were observed (P = 0.01 to P = 0.08) with seemingly positive effects for the low-severity group and potentially negative effects for the high-severity group. Overall, home-based exergame step training was not effective in improving the outcomes assessed. However, the improved physical function in the lower disease severity intervention participants as well as the self-reported improved mobility in the intervention group suggest home-based exergame step training may have benefits for some people with Parkinson's disease.

  3. Strong Stability Preserving Explicit Linear Multistep Methods with Variable Step Size

    KAUST Repository

    Hadjimichael, Yiannis; Ketcheson, David I.; Loczi, Lajos; Né meth, Adriá n

    2016-01-01

    Strong stability preserving (SSP) methods are designed primarily for time integration of nonlinear hyperbolic PDEs, for which the permissible SSP step size varies from one step to the next. We develop the first SSP linear multistep methods (of order

  4. Pólya number and first return of bursty random walk: Rigorous solutions

    Science.gov (United States)

    Wan, J.; Xu, X. P.

    2012-03-01

    The recurrence properties of random walks can be characterized by the Pólya number, i.e., the probability that the walker has returned to the origin at least once. In this paper, we investigate the Pólya number and first return for a bursty random walk on a line, in which the walk has different step sizes and moving probabilities. Using the concept of the Catalan number, we obtain exact results for the first return probability, the average first return time and the Pólya number for the first time. We show that the Pólya number displays two different functional behaviors when the walk deviates from the recurrent point. By utilizing the Lagrange inversion formula, we interpret our findings by transferring the Pólya number to the closed-form solutions of an inverse function. We also calculate the Pólya number using another approach, which corroborates our results and conclusions. Finally, we consider the recurrence properties and Pólya number of two variations of the bursty random walk model.
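
    For orientation, the sketch below uses Catalan numbers to build the first-return probabilities of an ordinary (non-bursty) nearest-neighbour walk on a line with right/left probabilities p and q, and sums them to approximate the Pólya number, which for this simple case equals 1 - |p - q|; it is a baseline illustration, not the bursty-walk solution of the paper.

        def first_return_probs(p, n_max=2000):
            # f[n] = probability that a simple nearest-neighbour walk first returns to
            # the origin at step 2n; the number of such paths is 2 * Catalan(n - 1).
            q = 1.0 - p
            f = {1: 2.0 * p * q}
            term = f[1]
            for n in range(2, n_max + 1):
                term *= 2.0 * (2 * n - 3) / n * (p * q)   # Catalan(n-1)/Catalan(n-2) = 2(2n-3)/n
                f[n] = term
            return f

        p = 0.6
        f = first_return_probs(p)
        polya = sum(f.values())                          # probability of ever returning
        mean_return = sum(2 * n * fn for n, fn in f.items()) / polya
        print(round(polya, 4), 1.0 - abs(2 * p - 1.0))   # numeric vs closed form 1 - |p - q|
        print(round(mean_return, 2))                     # mean first-return time, given return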

  5. A simple method to calculate the influence of dose inhomogeneity and fractionation in normal tissue complication probability evaluation

    International Nuclear Information System (INIS)

    Begnozzi, L.; Gentile, F.P.; Di Nallo, A.M.; Chiatti, L.; Zicari, C.; Consorti, R.; Benassi, M.

    1994-01-01

    Since volumetric dose distributions are available with 3-dimensional radiotherapy treatment planning they can be used in statistical evaluation of response to radiation. This report presents a method to calculate the influence of dose inhomogeneity and fractionation in normal tissue complication probability evaluation. The mathematical expression for the calculation of normal tissue complication probability has been derived combining the Lyman model with the histogram reduction method of Kutcher et al. and using the normalized total dose (NTD) instead of the total dose. The fitting of published tolerance data, in case of homogeneous or partial brain irradiation, has been considered. For the same total or partial volume homogeneous irradiation of the brain, curves of normal tissue complication probability have been calculated with fraction size of 1.5 Gy and of 3 Gy instead of 2 Gy, to show the influence of fraction size. The influence of dose distribution inhomogeneity and α/β value has also been simulated: Considering α/β=1.6 Gy or α/β=4.1 Gy for kidney clinical nephritis, the calculated curves of normal tissue complication probability are shown. (orig.)
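
    A minimal sketch of the calculation chain described above, combining an NTD (2-Gy-equivalent) conversion of each dose bin with the Kutcher histogram reduction and the Lyman model; the DVH bins, α/β value, and the Lyman parameters n, m, TD50 below are illustrative placeholders, not the fitted values of the paper.

        import numpy as np
        from scipy.stats import norm

        def ntcp_lkb_ntd(doses, volumes, n_fractions, alpha_beta, n=0.7, m=0.28, td50=28.0):
            # Lyman-Kutcher-Burman NTCP from a differential DVH, with each dose bin first
            # converted to a normalized total dose (2-Gy-per-fraction equivalent).
            d = np.asarray(doses, dtype=float)
            v = np.asarray(volumes, dtype=float)
            v = v / v.sum()                                   # fractional volumes
            d_per_frac = d / n_fractions                      # dose per fraction in each bin
            ntd = d * (alpha_beta + d_per_frac) / (alpha_beta + 2.0)
            d_eff = np.sum(v * ntd ** (1.0 / n)) ** n         # Kutcher histogram reduction
            t = (d_eff - td50) / (m * td50)                   # Lyman probit model
            return norm.cdf(t)

        # Hypothetical kidney DVH: 30% of the organ at 10 Gy, 50% at 18 Gy, 20% at 26 Gy,
        # delivered in 15 fractions, with alpha/beta = 1.6 Gy.
        print(ntcp_lkb_ntd([10, 18, 26], [0.3, 0.5, 0.2], n_fractions=15, alpha_beta=1.6))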

  6. Sequential Probability Ratio Testing with Power Projective Base Method Improves Decision-Making for BCI

    Science.gov (United States)

    Liu, Rong

    2017-01-01

    Obtaining a fast and reliable decision is an important issue in brain-computer interfaces (BCI), particularly in practical real-time applications such as wheelchair or neuroprosthetic control. In this study, the EEG signals were first analyzed with a power projective base method. Then we applied a decision-making model, sequential probability ratio testing (SPRT), for single-trial classification of motor imagery movement events. The unique strength of this proposed classification method lies in its accumulative process, which increases the discriminative power as more and more evidence is observed over time. The properties of the method were illustrated on thirteen subjects' recordings from three datasets. Results showed that our proposed power projective method outperformed two benchmark methods for every subject. Moreover, with the sequential classifier, the accuracies across subjects were significantly higher than those with nonsequential ones. The average maximum accuracy of the SPRT method was 84.1%, as compared with 82.3% accuracy for the sequential Bayesian (SB) method. The proposed SPRT method provides an explicit relationship between stopping time, thresholds, and error, which is important for balancing the time-accuracy trade-off. These results suggest SPRT would be useful in speeding up decision-making while trading off errors in BCI. PMID:29348781
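
    A minimal sketch of the SPRT accumulation rule referenced above for a two-class decision: log-likelihood ratios of successive observations are summed until one of two thresholds (set by target error rates) is crossed. The Gaussian class models and error rates here are hypothetical and unrelated to the EEG features used in the study.

        import numpy as np
        from scipy.stats import norm

        def sprt(samples, dist0, dist1, alpha=0.05, beta=0.05):
            # Wald's SPRT: accumulate log-likelihood ratios until a threshold is hit.
            # Returns (decision, number of samples used); decision None = undecided.
            upper = np.log((1 - beta) / alpha)       # accept H1 above this
            lower = np.log(beta / (1 - alpha))       # accept H0 below this
            llr = 0.0
            for i, x in enumerate(samples, start=1):
                llr += dist1.logpdf(x) - dist0.logpdf(x)
                if llr >= upper:
                    return 1, i
                if llr <= lower:
                    return 0, i
            return None, len(samples)

        # Hypothetical single-trial feature stream drawn from class 1.
        stream = np.random.default_rng(0).normal(loc=0.5, scale=1.0, size=200)
        print(sprt(stream, norm(0.0, 1.0), norm(0.5, 1.0)))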

  7. Programming by Numbers -- A Programming Method for Complete Novices

    NARCIS (Netherlands)

    Glaser, Hugh; Hartel, Pieter H.

    2000-01-01

    Students often have difficulty with the minutiae of program construction. We introduce the idea of `Programming by Numbers', which breaks some of the programming process down into smaller steps, giving such students a way into the process of Programming in the Small. Programming by Numbers does not

  8. Uncertainty the soul of modeling, probability & statistics

    CERN Document Server

    Briggs, William

    2016-01-01

    This book presents a philosophical approach to probability and probabilistic thinking, considering the underpinnings of probabilistic reasoning and modeling, which effectively underlie everything in data science. The ultimate goal is to call into question many standard tenets and lay the philosophical and probabilistic groundwork and infrastructure for statistical modeling. It is the first book devoted to the philosophy of data aimed at working scientists and calls for a new consideration in the practice of probability and statistics to eliminate what has been referred to as the "Cult of Statistical Significance". The book explains the philosophy of these ideas and not the mathematics, though there are a handful of mathematical examples. The topics are logically laid out, starting with basic philosophy as related to probability, statistics, and science, and stepping through the key probabilistic ideas and concepts, and ending with statistical models. Its jargon-free approach asserts that standard methods, suc...

  9. Combining p-values in replicated single-case experiments with multivariate outcome.

    Science.gov (United States)

    Solmi, Francesca; Onghena, Patrick

    2014-01-01

    Interest in combining probabilities has a long history in the global statistical community. The first steps in this direction were taken by Ronald Fisher, who introduced the idea of combining p-values of independent tests to provide a global decision rule when multiple aspects of a given problem were of interest. An interesting approach to this idea of combining p-values is the one based on permutation theory. The methods belonging to this particular approach exploit the permutation distributions of the tests to be combined, and use a simple function to combine probabilities. Combining p-values finds a very interesting application in the analysis of replicated single-case experiments. In this field the focus, while comparing different treatments effects, is more articulated than when just looking at the means of the different populations. Moreover, it is often of interest to combine the results obtained on the single patients in order to get more global information about the phenomenon under study. This paper gives an overview of how the concept of combining p-values was conceived, and how it can be easily handled via permutation techniques. Finally, the method of combining p-values is applied to a simulated replicated single-case experiment, and a numerical illustration is presented.
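
    As a concrete baseline for the idea of combining p-values, the sketch below applies Fisher's combining function (-2 Σ ln p_i, referred to a chi-squared distribution with 2k degrees of freedom) to p-values from replicated single-case experiments; in the permutation-based approach described above, the same combining function would instead be referred to its permutation distribution. The p-values are placeholders.

        import numpy as np
        from scipy.stats import chi2, combine_pvalues

        # Hypothetical p-values from k replicated single-case experiments.
        p_values = np.array([0.08, 0.12, 0.04, 0.20, 0.09])

        # Fisher's combining function, referred to a chi-squared(2k) distribution.
        statistic = -2.0 * np.sum(np.log(p_values))
        p_combined = chi2.sf(statistic, df=2 * p_values.size)
        print(statistic, p_combined)

        # SciPy provides the same computation directly.
        print(combine_pvalues(p_values, method="fisher"))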

  10. Designed optimization of a single-step extraction of fucose-containing sulfated polysaccharides from Sargassum sp

    DEFF Research Database (Denmark)

    Ale, Marcel Tutor; Mikkelsen, Jørn Dalgaard; Meyer, Anne S.

    2012-01-01

    Fucose-containing sulfated polysaccharides can be extracted from the brown seaweed, Sargassum sp. It has been reported that fucose-rich sulfated polysaccharides from brown seaweeds exert different beneficial biological activities including anti-inflammatory, anticoagulant, and anti-viral effects.... Classical extraction of fucose-containing sulfated polysaccharides from brown seaweed species typically involves extended, multiple-step, hot acid, or CaCl2 treatments, each step lasting several hours. In this work, we systematically examined the influence of acid concentration (HCl), time, and temperature... on the yield of fucose-containing sulfated polysaccharides (FCSPs) in statistically designed two-step and single-step multifactorial extraction experiments. All extraction factors had significant effects on the fucose-containing sulfated polysaccharides yield, with the temperature and time exerting positive

  11. Single classifier, OvO, OvA and RCC multiclass classification method in handheld based smartphone gait identification

    Science.gov (United States)

    Raziff, Abdul Rafiez Abdul; Sulaiman, Md Nasir; Mustapha, Norwati; Perumal, Thinagaran

    2017-10-01

    Gait recognition is widely used in many applications. In gait identification applications involving people, the number of classes (people) can be large, often more than 20. Due to the large number of classes, the use of a single classification mapping (direct classification) may not be suitable, as most existing algorithms are designed for binary classification. Furthermore, having many classes in a dataset may result in a high degree of overlap between class boundaries. This paper discusses the application of multiclass classifier mappings such as one-vs-all (OvA), one-vs-one (OvO) and random correction code (RCC) to handheld-based smartphone gait signals for person identification. The results are then compared with a single J48 decision tree as a benchmark. From the results, it can be said that using a multiclass classification mapping method partially improved the overall accuracy, especially for OvO and for RCC with a width factor greater than 4. For OvA, the accuracy is worse than that of a single J48 due to the high number of classes.
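
    For reference, the sketch below shows how the three mapping strategies correspond to readily available wrappers in scikit-learn, with a decision tree as the base learner; scikit-learn's OutputCodeClassifier (a random error-correcting output code) is used here as a stand-in for RCC, and the dataset is a synthetic placeholder rather than smartphone gait features.

        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.multiclass import OneVsRestClassifier, OneVsOneClassifier, OutputCodeClassifier
        from sklearn.tree import DecisionTreeClassifier

        # Synthetic stand-in for a many-class gait dataset (20 classes).
        X, y = make_classification(n_samples=2000, n_features=30, n_informative=20,
                                   n_classes=20, n_clusters_per_class=1, random_state=0)

        base = DecisionTreeClassifier(random_state=0)
        schemes = {
            "direct": base,
            "OvA": OneVsRestClassifier(base),
            "OvO": OneVsOneClassifier(base),
            "random_code_4": OutputCodeClassifier(base, code_size=4, random_state=0),
        }
        for name, clf in schemes.items():
            print(name, cross_val_score(clf, X, y, cv=5).mean())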

  12. Three counting methods agree on cell and neuron number in chimpanzee primary visual cortex

    Directory of Open Access Journals (Sweden)

    Daniel James Miller

    2014-05-01

    Full Text Available Determining the cellular composition of specific brain regions is crucial to our understanding of the function of neurobiological systems. It is therefore useful to identify the extent to which different methods agree when estimating the same properties of brain circuitry. In this study, we estimated the number of neuronal and non-neuronal cells in the primary visual cortex (area 17 or V1) of both hemispheres from a single chimpanzee. Specifically, we processed samples distributed across V1 of the right hemisphere after the cortex was flattened into a sheet using two variations of the isotropic fractionator cell and neuron counting method. We processed the left hemisphere as serial brain slices for stereological investigation. The goal of this study was to evaluate the agreement between these methods in the most direct manner possible by comparing estimates of cell density across one brain region of interest in a single individual. In our hands, these methods produced similar estimates of the total cellular population (approximately 1 billion) as well as the number of neurons (approximately 675 million) in chimpanzee V1, providing evidence that both techniques estimate the same parameters of interest. In addition, our results indicate the strengths of each distinct tissue preparation procedure, highlighting the importance of attention to anatomical detail. In summary, we found that the isotropic fractionator and the stereological optical fractionator produced concordant estimates of the cellular composition of V1, and that this result supports the conclusion that chimpanzees conform to the primate pattern of exceptionally high packing density in V1. Ultimately, our data suggest that investigators can optimize their experimental approach by using any of these counting methods to obtain reliable cell and neuron counts.

  13. MIDPOINT TWO-STEPS RULE FOR THE SQUARE ROOT METHOD

    African Journals Online (AJOL)

    DR S.E UWAMUSI

    Aberth third order method for finding zeros of a polynomial in interval ... KEY WORDS: square root iteration, midpoint two-steps method, ... a new set of methods for the simultaneous determination of zeros of polynomial equations and iterative ...

  14. Effects of long-term moderate exercise and increase in number of daily steps on serum lipids in women: randomised controlled trial [ISRCTN21921919

    Directory of Open Access Journals (Sweden)

    Mirbod Seyed

    2002-01-01

    Full Text Available Abstract Background This study was designed to evaluate the effects of a 24-month period of moderate exercise on serum lipids in menopausal women. Methods The subjects (40–60 y) were randomly divided into an exercise group (n = 14) and a control group (n = 13). The women in the exercise group were asked to participate in a 90-minute physical education class once a week and to record their daily steps as measured by a pedometer for 24 months. Results The mean number of daily steps was significantly higher in the exercise group, increasing from about 6,800 to over 8,500 steps. Conclusions These results suggest that daily exercise as well as increasing the number of daily steps can improve the profile of serum lipids.

  15. Interior Point Method for Solving Fuzzy Number Linear Programming Problems Using Linear Ranking Function

    Directory of Open Access Journals (Sweden)

    Yi-hua Zhong

    2013-01-01

    Full Text Available Recently, various methods have been developed for solving linear programming problems with fuzzy numbers, such as the simplex method and the dual simplex method. But their computational complexities are exponential, which is not satisfactory for solving large-scale fuzzy linear programming problems, especially in the engineering field. A new method which can solve large-scale fuzzy number linear programming problems is presented in this paper, which is named a revised interior point method. Its idea is similar to that of the interior point method previously used for solving linear programming problems in a crisp environment, but its feasible direction and step size are chosen using trapezoidal fuzzy numbers, a linear ranking function, fuzzy vectors, and their operations, and its end condition is expressed in terms of the linear ranking function. Their correctness and rationality are proved. Moreover, the choice of the initial interior point and some factors influencing the results of this method are also discussed and analyzed. The algorithm analysis and example study show that a proper safety factor parameter, accuracy parameter, and initial interior point may reduce the number of iterations, and that they can be selected easily according to actual needs. Finally, the method proposed in this paper is an alternative method for solving fuzzy number linear programming problems.
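
    As a small illustration of the linear ranking function idea used to compare fuzzy quantities, the sketch below ranks trapezoidal fuzzy numbers (a, b, c, d) with the simple average-of-vertices ranking R = (a + b + c + d)/4; this particular ranking function is a common textbook choice and an assumption here, not necessarily the one used in the paper.

        from dataclasses import dataclass

        @dataclass
        class TrapezoidalFuzzyNumber:
            a: float  # left end of the support
            b: float  # left end of the core
            c: float  # right end of the core
            d: float  # right end of the support

            def rank(self) -> float:
                # A simple linear ranking function (average of the four vertices).
                return (self.a + self.b + self.c + self.d) / 4.0

        # Comparing two fuzzy objective coefficients via the ranking function.
        x = TrapezoidalFuzzyNumber(1.0, 2.0, 3.0, 4.0)
        y = TrapezoidalFuzzyNumber(1.5, 2.0, 2.5, 3.0)
        print(x.rank(), y.rank(), x.rank() >= y.rank())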

  16. Rational Design of High-Number dsDNA Fragments Based on Thermodynamics for the Construction of Full-Length Genes in a Single Reaction.

    Science.gov (United States)

    Birla, Bhagyashree S; Chou, Hui-Hsien

    2015-01-01

    Gene synthesis is frequently used in modern molecular biology research either to create novel genes or to obtain natural genes when the synthesis approach is more flexible and reliable than cloning. DNA chemical synthesis has limits on both its length and yield, thus full-length genes have to be hierarchically constructed from synthesized DNA fragments. Gibson Assembly and its derivatives are the simplest methods to assemble multiple double-stranded DNA fragments. Currently, up to 12 dsDNA fragments can be assembled at once with Gibson Assembly according to its vendor. In practice, the number of dsDNA fragments that can be assembled in a single reaction is much lower. We have developed a rational design method for gene construction that allows high-number dsDNA fragments to be assembled into full-length genes in a single reaction. Using this new design method and a modified version of the Gibson Assembly protocol, we have assembled 3 different genes from up to 45 dsDNA fragments at once. Our design method uses the thermodynamic analysis software Picky that identifies all unique junctions in a gene where consecutive DNA fragments are specifically made to connect to each other. Our novel method is generally applicable to most gene sequences, and can improve both the efficiency and cost of gene assembly.

  17. Rational Design of High-Number dsDNA Fragments Based on Thermodynamics for the Construction of Full-Length Genes in a Single Reaction.

    Directory of Open Access Journals (Sweden)

    Bhagyashree S Birla

    Full Text Available Gene synthesis is frequently used in modern molecular biology research either to create novel genes or to obtain natural genes when the synthesis approach is more flexible and reliable than cloning. DNA chemical synthesis has limits on both its length and yield, thus full-length genes have to be hierarchically constructed from synthesized DNA fragments. Gibson Assembly and its derivatives are the simplest methods to assemble multiple double-stranded DNA fragments. Currently, up to 12 dsDNA fragments can be assembled at once with Gibson Assembly according to its vendor. In practice, the number of dsDNA fragments that can be assembled in a single reaction is much lower. We have developed a rational design method for gene construction that allows high-number dsDNA fragments to be assembled into full-length genes in a single reaction. Using this new design method and a modified version of the Gibson Assembly protocol, we have assembled 3 different genes from up to 45 dsDNA fragments at once. Our design method uses the thermodynamic analysis software Picky that identifies all unique junctions in a gene where consecutive DNA fragments are specifically made to connect to each other. Our novel method is generally applicable to most gene sequences, and can improve both the efficiency and cost of gene assembly.
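
    The two records above describe a design in which consecutive fragments connect only at junctions that are unique across the whole gene. The toy sketch below checks junction uniqueness by exact sequence matching only; it is not the Picky software, which additionally performs thermodynamic analysis (melting temperatures, cross-hybridization), and the fragments and overlap length are invented for illustration.

```python
def junctions_are_unique(fragments, overlap=20):
    """Check that each junction overlap (the last `overlap` bases of one fragment,
    which must match the start of the next) occurs exactly once in the
    reassembled gene, so fragments can only anneal in the intended order."""
    gene = fragments[0] + "".join(f[overlap:] for f in fragments[1:])
    for left, right in zip(fragments, fragments[1:]):
        junction = left[-overlap:]
        if not right.startswith(junction):
            return False            # consecutive fragments do not actually overlap
        if gene.count(junction) != 1:
            return False            # overlap sequence is not unique in the gene
    return True

# Tiny illustrative fragments with 4-base overlaps (real designs use ~20-40 bp).
frags = ["ATGGCTAAGC", "AAGCTTGGACC", "GACCTAA"]
print(junctions_are_unique(frags, overlap=4))   # True
```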

  18. On the method of logarithmic cumulants for parametric probability density function estimation.

    Science.gov (United States)

    Krylov, Vladimir A; Moser, Gabriele; Serpico, Sebastiano B; Zerubia, Josiane

    2013-10-01

    Parameter estimation of probability density functions is one of the major steps in the area of statistical image and signal processing. In this paper we explore several properties and limitations of the recently proposed method of logarithmic cumulants (MoLC) parameter estimation approach which is an alternative to the classical maximum likelihood (ML) and method of moments (MoM) approaches. We derive the general sufficient condition for a strong consistency of the MoLC estimates which represents an important asymptotic property of any statistical estimator. This result enables the demonstration of the strong consistency of MoLC estimates for a selection of widely used distribution families originating from (but not restricted to) synthetic aperture radar image processing. We then derive the analytical conditions of applicability of MoLC to samples for the distribution families in our selection. Finally, we conduct various synthetic and real data experiments to assess the comparative properties, applicability and small sample performance of MoLC notably for the generalized gamma and K families of distributions. Supervised image classification experiments are considered for medical ultrasound and remote-sensing SAR imagery. The obtained results suggest that MoLC is a feasible and computationally fast yet not universally applicable alternative to MoM. MoLC becomes especially useful when the direct ML approach turns out to be unfeasible.
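
    As a concrete illustration of MoLC, the sketch below estimates the two parameters of a gamma distribution (one of the families mentioned) by matching the first two sample log-cumulants to their analytical expressions, mean(log x) = psi(k) + log(theta) and var(log x) = psi'(k). This is a minimal sketch, not the authors' code, and the simulated data are for demonstration only.

```python
import numpy as np
from scipy.special import polygamma, psi
from scipy.optimize import brentq

def molc_gamma(x):
    """Method-of-log-cumulants estimates (shape k, scale theta) for a gamma sample.
    c1 = mean(log x) = psi(k) + log(theta); c2 = var(log x) = psi'(k)."""
    logx = np.log(x)
    c1, c2 = logx.mean(), logx.var()
    # psi'(k) decreases monotonically in k, so the root is unique; bracket widely.
    k = brentq(lambda k: polygamma(1, k) - c2, 1e-6, 1e6)
    theta = np.exp(c1 - psi(k))
    return k, theta

rng = np.random.default_rng(0)
sample = rng.gamma(shape=3.0, scale=2.0, size=10_000)
print(molc_gamma(sample))   # close to (3.0, 2.0)
```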

  19. Disadvantage factors for square lattice cells using a collision probability method

    International Nuclear Information System (INIS)

    Raghav, H.P.

    1976-01-01

    The flux distribution in an infinite square lattice consisting of cylindrical fuel rods and moderator is calculated by using a collision probability method. Neutrons are assumed to be monoenergetic and the sources as well as scattering are assumed to be isotropic. Carlvik's method for the calculation of collision probability is used. The important features of the method are that the square boundary is treated exactly and the contribution of the surrounding cells is calculated explicitly. The method is programmed in a computer code CELLC. This carries out integration by Simpson's rule. The convergence and accuracy of CELLC are assessed by computing disadvantage factors for the well-known Thie lattices and comparing the results with Monte Carlo and other integral transport theory methods used elsewhere. It is demonstrated that it is not correct to apply the white boundary condition in the Wigner-Seitz cell for low pitch and low cross sections. (orig.) [de]

  20. A Group Contribution Method for Estimating Cetane and Octane Numbers

    Energy Technology Data Exchange (ETDEWEB)

    Kubic, William Louis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Process Modeling and Analysis Group

    2016-07-28

    Much of the research on advanced biofuels is devoted to the study of novel chemical pathways for converting nonfood biomass into liquid fuels that can be blended with existing transportation fuels. Many compounds under consideration are not found in the existing fuel supplies. Often, the physical properties needed to assess the viability of a potential biofuel are not available. The only reliable information available may be the molecular structure. Group contribution methods for estimating physical properties from molecular structure have been used for more than 60 years. The most common application is estimation of thermodynamic properties. More recently, group contribution methods have been developed for estimating rate dependent properties including cetane and octane numbers. Often, published group contribution methods are limited in terms of types of functional groups and range of applicability. In this study, a new, broadly-applicable group contribution method based on an artificial neural network was developed to estimate cetane number, research octane number, and motor octane number of hydrocarbons and oxygenated hydrocarbons. The new method is more accurate over a greater range of molecular weights and structural complexity than existing group contribution methods for estimating cetane and octane numbers.
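
    The sketch below illustrates the general group-contribution idea alongside the neural-network variant described in the abstract: a property is predicted from counts of functional groups, either as a linear sum of group contributions or by a small regressor. The group definitions, counts and target values are invented placeholders, not the groups or data of the cited report.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical feature vectors: counts of functional groups per molecule,
# e.g. [-CH3, -CH2-, -OH, -O- (ether), aromatic C-H]. Values are illustrative.
group_counts = np.array([
    [2, 6, 0, 0, 0],    # a paraffin
    [2, 4, 1, 0, 0],    # an alcohol
    [1, 2, 0, 1, 0],    # an ether
    [0, 0, 0, 0, 6],    # an aromatic
])
cetane_numbers = np.array([63.0, 40.0, 55.0, 5.0])   # made-up targets

# Classical group contribution: property ~ bias + sum_i count_i * contribution_i,
# i.e. ordinary least squares on the group counts.
X = np.hstack([np.ones((len(group_counts), 1)), group_counts])
coef, *_ = np.linalg.lstsq(X, cetane_numbers, rcond=None)

# ANN-based variant: a small multilayer perceptron learns a nonlinear mapping
# from group counts to the property, as in the cited study (the architecture
# here is arbitrary).
ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
ann.fit(group_counts, cetane_numbers)

print("linear prediction:", X @ coef)
print("ANN prediction:   ", ann.predict(group_counts))
```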

  1. A research on the importance function used in the calculation of the fracture probability through the optimum method

    International Nuclear Information System (INIS)

    Zegong, Zhou; Changhong, Liu

    1995-01-01

    On the basis of research into using the original distribution function, shifted by an appropriate distance, as the importance function, this paper takes the variation of the similarity ratio of the original function to the importance function as the objective function; the optimum shifting distance is obtained by use of an optimization method. The optimum importance function resulting from the optimization ensures that the number of Monte Carlo simulations is decreased while good estimates of the yearly failure probabilities are still obtained.
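
    A minimal sketch of the underlying idea, importance sampling with the original density shifted by an optimized distance, is given below for a scalar failure problem with a standard normal variable. The failure threshold, the choice of estimator variance as the objective, and all numbers are illustrative assumptions, not the paper's fracture model or its similarity-ratio objective.

```python
import numpy as np
from scipy import stats, optimize

threshold = 4.5                      # failure when the variable exceeds this value
f = stats.norm(0.0, 1.0)             # original distribution

def failure_prob_is(shift, n=20_000, seed=0):
    """Importance sampling with the original density shifted by `shift`."""
    g = stats.norm(shift, 1.0)                   # importance density
    x = g.rvs(size=n, random_state=seed)
    w = f.pdf(x) / g.pdf(x)                      # likelihood ratios
    contrib = (x > threshold) * w
    return contrib.mean(), contrib.var() / n     # estimate and its variance

# Choose the shift that minimizes the estimator variance (a simple stand-in
# for the paper's optimization of the shifting distance).
res = optimize.minimize_scalar(lambda s: failure_prob_is(s)[1],
                               bounds=(0.0, 8.0), method="bounded")
best_shift = res.x
print(best_shift, failure_prob_is(best_shift)[0], 1 - f.cdf(threshold))
```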

  2. Computation of the Complex Probability Function

    Energy Technology Data Exchange (ETDEWEB)

    Trainer, Amelia Jo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Ledwith, Patrick John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-22

    The complex probability function is important in many areas of physics and many techniques have been developed in an attempt to compute it for some z quickly and efficiently. Most prominent are the methods that use Gauss-Hermite quadrature, which uses the roots of the nth degree Hermite polynomial and corresponding weights to approximate the complex probability function. This document serves as an overview and discussion of the use, shortcomings, and potential improvements on the Gauss-Hermite quadrature for the complex probability function.
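
    A minimal sketch of the quadrature the report discusses: for Im(z) > 0 the complex probability (Faddeeva) function w(z) = (i/pi) * integral of exp(-t^2)/(z - t) dt can be approximated by absorbing the exp(-t^2) weight into Gauss-Hermite nodes and weights. The node count is arbitrary, and the accuracy degrades as z approaches the real axis, which is one of the shortcomings the report examines.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.special import wofz                    # reference implementation

def faddeeva_gauss_hermite(z, n=40):
    """Approximate w(z) = (i/pi) * Int exp(-t^2)/(z - t) dt for Im(z) > 0 using
    n-point Gauss-Hermite quadrature (nodes t_k, weights w_k)."""
    t, w = hermgauss(n)
    return (1j / np.pi) * np.sum(w / (z - t))

z = 1.5 + 0.8j
print(faddeeva_gauss_hermite(z))   # quadrature approximation
print(wofz(z))                     # reference value for comparison
```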

  3. Single ion implantation for single donor devices using Geiger mode detectors

    International Nuclear Information System (INIS)

    Bielejec, E; Seamons, J A; Carroll, M S

    2010-01-01

    Electronic devices that are designed to use the properties of single atoms such as donors or defects have become a reality with recent demonstrations of donor spectroscopy, single photon emission sources, and magnetic imaging using defect centers in diamond. Ion implantation, an industry standard for atom placement in materials, requires augmentation for single ion capability, including a method for detecting a single ion arrival. Integrating single ion detection techniques with the single donor device construction region allows single ion arrival to be assured. Improving detector sensitivity is linked to improving control over the straggle of the ion as well as providing more flexibility in lay-out integration with the active region of the single donor device construction zone by allowing ion sensing at potentially greater distances. Using a remotely located passively gated single ion Geiger mode avalanche diode (SIGMA) detector we have demonstrated 100% detection efficiency at a distance of >75 μm from the center of the collecting junction. This detection efficiency is achieved with sensitivity to ∼600 or fewer electron-hole pairs produced by the implanted ion. Ion detectors with this sensitivity, integrated with a thin dielectric (for example a 5 nm gate oxide) and using low energy Sb implantation, would have a small end-of-range straggle, with dark count probabilities of the order of 10⁻¹ and 10⁻⁴ for operation temperatures of ∼300 K and ∼77 K, respectively. Low temperature operation and reduced false ('dark') counts are critical to achieving high confidence in single ion arrival. For the device performance in this work, the confidence is calculated as a probability of >98% for counting one and only one ion, for a false count probability of 10⁻⁴ at an average ion number per gated window of 0.015.
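
    The quoted >98% confidence figure can be reproduced, at least approximately, with a simple Poisson calculation: ion arrivals with a mean of 0.015 per gated window competing against a 10⁻⁴ dark-count probability. The sketch below is a back-of-the-envelope check under those assumptions; the paper's exact statistical treatment may differ.

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    return lam**k * exp(-lam) / factorial(k)

lam = 0.015        # average number of ions per gated window
p_dark = 1e-4      # probability of a false ('dark') count in a window

p0 = poisson_pmf(0, lam)
p1 = poisson_pmf(1, lam)
p_ge2 = 1.0 - p0 - p1

# Given that a count was registered, probability it came from one and only one
# real ion (a dark count in an otherwise empty window is the competing event).
p_single = p1 / (p1 + p_ge2 + p_dark * p0)
print(f"confidence of exactly one ion: {p_single:.3f}")   # ~0.986, i.e. >98%
```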

  4. A GPU-accelerated semi-implicit fractional-step method for numerical solutions of incompressible Navier-Stokes equations

    Science.gov (United States)

    Ha, Sanghyun; Park, Junshin; You, Donghyun

    2018-01-01

    Utility of the computational power of Graphics Processing Units (GPUs) is elaborated for solutions of incompressible Navier-Stokes equations which are integrated using a semi-implicit fractional-step method. The Alternating Direction Implicit (ADI) and the Fourier-transform-based direct solution methods used in the semi-implicit fractional-step method take advantage of multiple tridiagonal matrices whose inversion is known as the major bottleneck for acceleration on a typical multi-core machine. A novel implementation of the semi-implicit fractional-step method designed for GPU acceleration of the incompressible Navier-Stokes equations is presented. Aspects of the programming model of Compute Unified Device Architecture (CUDA), which are critical to the bandwidth-bound nature of the present method, are discussed in detail. A data layout for efficient use of CUDA libraries is proposed for acceleration of tridiagonal matrix inversion and fast Fourier transform. OpenMP is employed for concurrent collection of turbulence statistics on a CPU while the Navier-Stokes equations are computed on a GPU. Performance of the present method using CUDA is assessed by comparing the speed of solving three tridiagonal matrices using ADI with the speed of solving one heptadiagonal matrix using a conjugate gradient method. An overall speedup of 20 times is achieved using a Tesla K40 GPU in comparison with a single-core Xeon E5-2660 v3 CPU in simulations of turbulent boundary-layer flow over a flat plate conducted on over 134 million grids. Enhanced performance of 48 times speedup is reached for the same problem using a Tesla P100 GPU.
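
    The tridiagonal systems mentioned above are the core kernel of the ADI sweeps. Below is a minimal NumPy sketch of the Thomas algorithm, the serial per-line solver that a GPU implementation would batch across many grid lines; it is not the authors' CUDA code, and the example system is an arbitrary diffusion-like stencil.

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b, super-diagonal c
    and right-hand side d (all length n; a[0] and c[-1] are unused)."""
    n = len(d)
    cp = np.empty(n)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                    # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):           # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: -x[i-1] + 4*x[i] - x[i+1] = 1 on 8 points.
n = 8
a, b, c, d = -np.ones(n), 4 * np.ones(n), -np.ones(n), np.ones(n)
print(thomas_solve(a, b, c, d))
```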

  5. Single-mismatch 2LSB embedding method of steganography

    OpenAIRE

    Khalind, Omed; Aziz, Benjamin

    2013-01-01

    This paper proposes a new method of 2LSB embedding steganography in still images. The proposed method considers a single mismatch in each 2LSB embedding between the 2LSB of the pixel value and the 2-bits of the secret message, while the 2LSB replacement overwrites the 2LSB of the image’s pixel value with 2-bits of the secret message. The number of bit-changes needed for the proposed method is 0.375 bits from the 2LSBs of the cover image, and is much less than the 2LSB replacement which is 0.5...
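
    For context, the sketch below shows plain 2LSB replacement, the baseline the proposed single-mismatch method is compared against; the proposed method modifies this embedding step to tolerate one mismatched bit per pixel, which is what lowers the expected number of bit changes. The pixel values and message are arbitrary.

```python
def embed_2lsb_replacement(pixels, message_bits):
    """Plain 2LSB replacement: overwrite the two least significant bits of each
    pixel with the next two message bits."""
    stego = list(pixels)
    for i in range(0, len(message_bits) - 1, 2):
        two_bits = (message_bits[i] << 1) | message_bits[i + 1]
        stego[i // 2] = (stego[i // 2] & ~0b11) | two_bits
    return stego

def extract_2lsb(stego, n_bits):
    bits = []
    for p in stego:
        bits.extend([(p >> 1) & 1, p & 1])
    return bits[:n_bits]

cover = [120, 37, 201, 66]
message = [1, 0, 0, 1, 1, 1, 0, 0]
stego = embed_2lsb_replacement(cover, message)
print(stego, extract_2lsb(stego, len(message)))   # message is recovered intact
```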

  6. Propensity, Probability, and Quantum Theory

    Science.gov (United States)

    Ballentine, Leslie E.

    2016-08-01

    Quantum mechanics and probability theory share one peculiarity. Both have well established mathematical formalisms, yet both are subject to controversy about the meaning and interpretation of their basic concepts. Since probability plays a fundamental role in QM, the conceptual problems of one theory can affect the other. We first classify the interpretations of probability into three major classes: (a) inferential probability, (b) ensemble probability, and (c) propensity. Class (a) is the basis of inductive logic; (b) deals with the frequencies of events in repeatable experiments; (c) describes a form of causality that is weaker than determinism. An important, but neglected, paper by P. Humphreys demonstrated that propensity must differ mathematically, as well as conceptually, from probability, but he did not develop a theory of propensity. Such a theory is developed in this paper. Propensity theory shares many, but not all, of the axioms of probability theory. As a consequence, propensity supports the Law of Large Numbers from probability theory, but does not support Bayes theorem. Although there are particular problems within QM to which any of the classes of probability may be applied, it is argued that the intrinsic quantum probabilities (calculated from a state vector or density matrix) are most naturally interpreted as quantum propensities. This does not alter the familiar statistical interpretation of QM. But the interpretation of quantum states as representing knowledge is untenable. Examples show that a density matrix fails to represent knowledge.

  7. Emission probability determination of {sup 133}Ba by the sum-peak method

    Energy Technology Data Exchange (ETDEWEB)

    Silva, R.L. da; Almeida, M.C.M. de; Delgado, J.U.; Poledna, R.; Araujo, M.T.F.; Trindade, O.L.; Veras, E.V. de; Santos, A.; Rangel, J.; Ferreira Filho, A.L., E-mail: ronaldo@ird.gov.br, E-mail: marcandida@yahoo.com.br [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Rio de Janeiro, RJ (Brazil)

    2016-07-01

    The National Laboratory of Ionizing Radiation Metrology (LNMRI/IRD/CNEN) employs several measurement methods in order to ensure low uncertainties in its results. Through gamma-ray spectrometry with the absolute sum-peak method, the {sup 133}Ba activity was standardized and its emission probabilities at different energies were determined with reduced uncertainties. The advantages of radionuclide calibration by an absolute method are accuracy, low uncertainties, and the fact that no radionuclide reference standards are required. {sup 133}Ba is used in research laboratories for detector calibration in different work areas. The uncertainties for the activity and for the emission probability results are lower than 1%. (author)

  8. A new four-step hierarchy method for combined assessment of groundwater quality and pollution.

    Science.gov (United States)

    Zhu, Henghua; Ren, Xiaohua; Liu, Zhizheng

    2017-12-28

    A new four-step hierarchy method was constructed and applied to evaluate the groundwater quality and pollution of the Dagujia River Basin. The assessment index system is divided into four types: field test indices, common inorganic chemical indices, inorganic toxicology indices, and trace organic indices. Background values of common inorganic chemical indices and inorganic toxicology indices were estimated with the cumulative-probability curve method, and the results showed that the background values of Mg²⁺ (51.1 mg L⁻¹), total hardness (TH) (509.4 mg L⁻¹), and NO₃⁻ (182.4 mg L⁻¹) are all higher than the corresponding grade III values of the Quality Standard for Groundwater, indicating that they were poor indicators and therefore were not included in the groundwater quality assessment. The quality assessment results displayed that the field test indices were mainly classified as grade II, accounting for 60.87% of wells sampled. The indices of common inorganic chemical and inorganic toxicology were both mostly in the range of grade III, whereas the trace organic indices were predominantly classified as grade I. The variabilities and excess ratios of the indices were also calculated and evaluated. Spatial distributions showed that the groundwater with poor quality indices was mainly located in the northeast of the basin, which was well-connected with seawater intrusion. Additionally, the pollution assessment revealed that groundwater in well 44 was classified as "moderately polluted," wells 5 and 8 were "lightly polluted," and other wells were classified as "unpolluted."

  9. Characteristic analysis of laser isotope separation process by two-step photodissociation method

    International Nuclear Information System (INIS)

    Okamoto, Tsuyoshi; Suzuki, Atsuyuki; Kiyose, Ryohei

    1981-01-01

    A large number of laser isotope separation experiments have been performed actively in many countries. In this paper, the selective two-step photodissociation method is chosen, and the simultaneous nonlinear differential equations that express the separation process are solved directly by computer. Predicted separation factors are investigated in relation to the incident pulse energy and the concentration of desired molecules. Furthermore, the concept of separative work is used to evaluate the results of separation for this method. It is shown from an example of numerical calculation that a very large separation factor can be obtained if the concentration of desired molecules is lowered, and that close synchronization of the two laser pulses is not always required for the photodissociation of the molecules. (author)
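
    A minimal rate-equation sketch of selective two-step photodissociation is given below: the first laser pumps the selected molecules to an excited state, which either decays or is dissociated by the second laser. The cross sections, photon fluxes, decay time and level scheme are illustrative assumptions, not the values or equations of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

sigma1, sigma2 = 1e-18, 5e-19   # absorption cross sections, cm^2 (illustrative)
I1, I2 = 1e24, 1e24             # photon fluxes, photons cm^-2 s^-1 (illustrative)
tau = 1e-7                      # spontaneous decay time of the excited state, s

def rates(t, y):
    ground, excited, dissociated = y
    pump = sigma1 * I1 * ground          # first-step excitation
    decay = excited / tau                # loss of selectivity by decay
    dissociate = sigma2 * I2 * excited   # second-step photodissociation
    return [-pump + decay, pump - decay - dissociate, dissociate]

sol = solve_ivp(rates, (0.0, 1e-6), [1.0, 0.0, 0.0])
print("dissociated fraction after 1 microsecond:", sol.y[2, -1])
```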

  10. A Spiral Step-by-Step Educational Method for Cultivating Competent Embedded System Engineers to Meet Industry Demands

    Science.gov (United States)

    Jing,Lei; Cheng, Zixue; Wang, Junbo; Zhou, Yinghui

    2011-01-01

    Embedded system technologies are undergoing dramatic change. Competent embedded system engineers are becoming a scarce resource in the industry. Given this, universities should revise their specialist education to meet industry demands. In this paper, a spirally tight-coupled step-by-step educational method, based on an analysis of industry…

  11. Single-Molecule Fluorescence Reveals the Oligomerization and Folding Steps Driving the Prion-like Behavior of ASC.

    Science.gov (United States)

    Gambin, Yann; Giles, Nichole; O'Carroll, Ailís; Polinkovsky, Mark; Hunter, Dominic; Sierecki, Emma

    2018-02-16

    Single-molecule fluorescence has the unique ability to quantify small oligomers and track conformational changes at a single-protein level. Here we tackled one of the most extreme protein behaviors, found recently in an inflammation pathway. Upon danger recognition in the cytosol, NLRP3 recruits its signaling adaptor, ASC. ASC starts polymerizing in a prion-like manner and the system goes into "overdrive" by producing a single micron-sized "speck." By precisely controlling protein expression levels in an in vitro translation system, we could trigger the polymerization of ASC and mimic formation of specks in the absence of inflammasome nucleators. We utilized single-molecule spectroscopy to fully characterize prion-like behaviors and self-propagation of ASC fibrils. We next used our controlled system to monitor the conformational changes of ASC upon fibrillation. Indeed, ASC consists of PYD and CARD domains, separated by a flexible linker. Individually, both domains have been found to form fibrils, but the structure of the polymers formed by the full-length ASC proteins remains elusive. For the first time, using single-molecule Förster resonance energy transfer, we studied the relative positions of the CARD and PYD domains of full-length ASC. An unexpectedly large conformational change occurred upon ASC fibrillation, suggesting that the CARD domain folds back onto the PYD domain. However, contradicting current models, the "prion-like" conformer was not initiated by binding of ASC to the NLRP3 platform. Rather, using a new method, a hybrid between Photon Counting Histogram and Number and Brightness analysis, we showed that NLRP3 forms hexamers with self-binding affinities around 300 nM. Overall, our data suggest a new mechanism, where NLRP3 can initiate ASC polymerization simply by increasing the local concentration of ASC above a supercritical level. Copyright © 2017. Published by Elsevier Ltd.

  12. Cellobiohydrolase 1 from Trichoderma reesei degrades cellulose in single cellobiose steps

    Science.gov (United States)

    Brady, Sonia K.; Sreelatha, Sarangapani; Feng, Yinnian; Chundawat, Shishir P. S.; Lang, Matthew J.

    2015-12-01

    Cellobiohydrolase 1 from Trichoderma reesei (TrCel7A) processively hydrolyses cellulose into cellobiose. Although enzymatic techniques have been established as promising tools in biofuel production, a clear understanding of the motor's mechanistic action has yet to be revealed. Here, we develop an optical tweezers-based single-molecule (SM) motility assay for precision tracking of TrCel7A. Direct observation of motility during degradation reveals processive runs and distinct steps on the scale of 1 nm. Our studies suggest TrCel7A is not mechanically limited, can work against 20 pN loads and speeds up when assisted. Temperature-dependent kinetic studies establish the energy requirements for the fundamental stepping cycle, which likely includes energy from glycosidic bonds and other sources. Through SM measurements of isolated TrCel7A domains, we determine that the catalytic domain alone is sufficient for processive motion, providing insight into TrCel7A's molecular motility mechanism.

  13. Measuring sensitivity in pharmacoeconomic studies. Refining point sensitivity and range sensitivity by incorporating probability distributions.

    Science.gov (United States)

    Nuijten, M J

    1999-07-01

    The aim of the present study is to describe a refinement of a previously presented method, based on the concept of point sensitivity, to deal with uncertainty in economic studies. The original method was refined by the incorporation of probability distributions which allow a more accurate assessment of the level of uncertainty in the model. In addition, a bootstrap method was used to create a probability distribution for a fixed input variable based on a limited number of data points. The original method was limited in that the sensitivity measurement was based on a uniform distribution of the variables and that the overall sensitivity measure was based on a subjectively chosen range which excludes the impact of values outside the range on the overall sensitivity. The concepts of the refined method were illustrated using a Markov model of depression. The application of the refined method substantially changed the ranking of the most sensitive variables compared with the original method. The response rate became the most sensitive variable instead of the 'per diem' for hospitalisation. The refinement of the original method yields sensitivity outcomes which better reflect the real uncertainty in economic studies.
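
    The two ingredients described above, replacing a uniform range by a probability distribution and bootstrapping a distribution for an input known only from a few data points, can be sketched as follows. The cost model, the Beta distribution for the response rate and the observed 'per diem' values are toy stand-ins for the paper's Markov model of depression.

```python
import numpy as np

rng = np.random.default_rng(1)

def total_cost(response_rate, per_diem):
    """Toy cost model standing in for the Markov model of depression."""
    hospital_days = 30.0 * (1.0 - response_rate)
    return hospital_days * per_diem

# Input 1: response rate described by a probability distribution (here a Beta).
response_rate = rng.beta(40, 60, size=10_000)

# Input 2: 'per diem' known only through a handful of observations; bootstrap
# resampling turns them into an empirical distribution of plausible means.
per_diem_obs = np.array([310.0, 290.0, 355.0, 400.0, 325.0])
per_diem = rng.choice(per_diem_obs, size=(10_000, per_diem_obs.size),
                      replace=True).mean(axis=1)

costs = total_cost(response_rate, per_diem)
print("mean cost:", costs.mean())
for name, x in [("response_rate", response_rate), ("per_diem", per_diem)]:
    print(f"sensitivity of cost to {name}: r = {np.corrcoef(x, costs)[0, 1]:+.2f}")
```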

  14. DEVELOPMENT OF THE PROBABLY-GEOGRAPHICAL FORECAST METHOD FOR DANGEROUS WEATHER PHENOMENA

    Directory of Open Access Journals (Sweden)

    Elena S. Popova

    2015-12-01

    Full Text Available This paper presents the scheme of a probably-geographical forecast method for dangerous weather phenomena. Two general stages in the realization of this method are discussed. It is emphasized that the method under development responds to current questions of modern weather forecasting and its associated phenomena: the forecast is carried out for a specific point in space and the appropriate moment of time.

  15. Sci-Thur PM - Colourful Interactions: Highlights 08: ARC TBI using Single-Step Optimized VMAT Fields

    International Nuclear Information System (INIS)

    Hudson, Alana; Gordon, Deborah; Moore, Roseanne; Balogh, Alex; Pierce, Greg

    2016-01-01

    Purpose: This work outlines a new TBI delivery technique to replace a lateral POP full bolus technique. The new technique is done with VMAT arc delivery, without bolus, treating the patient prone and supine. The benefits of the arc technique include: improved patient experience and safety, better dose conformity, better organ-at-risk sparing, decreased therapist time and reduction of therapist injuries. Methods: In this work we build on a technique developed by Jahnke et al. We use standard arc fields with gantry speeds corrected for varying distance to the patient, followed by a single-step VMAT optimization on a patient CT to increase dose homogeneity and to reduce dose to the lungs (vs. blocks). To compare the arc TBI technique to our full bolus technique, we produced plans on patient CTs for both techniques and evaluated several dosimetric parameters using an ANOVA test. Results and Conclusions: The arc technique is able to reduce both the hot areas to the body (D2% reduced from 122.2% to 111.8%, p<0.01) and the lungs (mean lung dose reduced from 107.5% to 99.1%, p<0.01), both statistically significant, while maintaining coverage (D98% = 97.8% vs. 94.6%, p=0.313, not statistically significant). We developed a more patient- and therapist-friendly TBI treatment technique that utilizes single-step optimized VMAT plans. It was found that this technique was dosimetrically equivalent to our previous lateral technique in terms of coverage and statistically superior in terms of reduced lung dose.

  16. Sci-Thur PM - Colourful Interactions: Highlights 08: ARC TBI using Single-Step Optimized VMAT Fields

    Energy Technology Data Exchange (ETDEWEB)

    Hudson, Alana; Gordon, Deborah; Moore, Roseanne; Balogh, Alex; Pierce, Greg [Tom Baker Cancer Centre (Canada)

    2016-08-15

    Purpose: This work outlines a new TBI delivery technique to replace a lateral POP full bolus technique. The new technique is done with VMAT arc delivery, without bolus, treating the patient prone and supine. The benefits of the arc technique include: improved patient experience and safety, better dose conformity, better organ-at-risk sparing, decreased therapist time and reduction of therapist injuries. Methods: In this work we build on a technique developed by Jahnke et al. We use standard arc fields with gantry speeds corrected for varying distance to the patient, followed by a single-step VMAT optimization on a patient CT to increase dose homogeneity and to reduce dose to the lungs (vs. blocks). To compare the arc TBI technique to our full bolus technique, we produced plans on patient CTs for both techniques and evaluated several dosimetric parameters using an ANOVA test. Results and Conclusions: The arc technique is able to reduce both the hot areas to the body (D2% reduced from 122.2% to 111.8%, p<0.01) and the lungs (mean lung dose reduced from 107.5% to 99.1%, p<0.01), both statistically significant, while maintaining coverage (D98% = 97.8% vs. 94.6%, p=0.313, not statistically significant). We developed a more patient- and therapist-friendly TBI treatment technique that utilizes single-step optimized VMAT plans. It was found that this technique was dosimetrically equivalent to our previous lateral technique in terms of coverage and statistically superior in terms of reduced lung dose.

  17. A prototype method for diagnosing high ice water content probability using satellite imager data

    Science.gov (United States)

    Yost, Christopher R.; Bedka, Kristopher M.; Minnis, Patrick; Nguyen, Louis; Strapp, J. Walter; Palikonda, Rabindra; Khlopenkov, Konstantin; Spangenberg, Douglas; Smith, William L., Jr.; Protat, Alain; Delanoe, Julien

    2018-03-01

    Recent studies have found that ingestion of high mass concentrations of ice particles in regions of deep convective storms, with radar reflectivity considered safe for aircraft penetration, can adversely impact aircraft engine performance. Previous aviation industry studies have used the term high ice water content (HIWC) to define such conditions. Three airborne field campaigns were conducted in 2014 and 2015 to better understand how HIWC is distributed in deep convection, both as a function of altitude and proximity to convective updraft regions, and to facilitate development of new methods for detecting HIWC conditions, in addition to many other research and regulatory goals. This paper describes a prototype method for detecting HIWC conditions using geostationary (GEO) satellite imager data coupled with in situ total water content (TWC) observations collected during the flight campaigns. Three satellite-derived parameters were determined to be most useful for determining HIWC probability: (1) the horizontal proximity of the aircraft to the nearest overshooting convective updraft or textured anvil cloud, (2) tropopause-relative infrared brightness temperature, and (3) daytime-only cloud optical depth. Statistical fits between collocated TWC and GEO satellite parameters were used to determine the membership functions for the fuzzy logic derivation of HIWC probability. The products were demonstrated using data from several campaign flights and validated using a subset of the satellite-aircraft collocation database. The daytime HIWC probability was found to agree quite well with TWC time trends and identified extreme TWC events with high probability. Discrimination of HIWC was more challenging at night with IR-only information. The products show the greatest capability for discriminating TWC ≥ 0.5 g m⁻³. Product validation remains challenging due to vertical TWC uncertainties and the typically coarse spatio-temporal resolution of the GEO data.
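
    The fuzzy-logic combination described above can be sketched as follows: each satellite parameter is mapped through a membership function and the memberships are combined into a single HIWC probability. The piecewise-linear membership shapes, thresholds and equal weighting below are invented placeholders, not the membership functions statistically fitted in the study.

```python
import numpy as np

def ramp(x, lo, hi):
    """Piecewise-linear membership: 0 below lo, 1 above hi, linear in between."""
    return float(np.clip((x - lo) / (hi - lo), 0.0, 1.0))

def hiwc_probability(distance_km, trop_rel_bt_k, optical_depth=None):
    """Combine memberships for (1) proximity to an overshooting top or textured
    anvil, (2) tropopause-relative IR brightness temperature, and (3) daytime
    cloud optical depth (omitted at night)."""
    m_proximity = 1.0 - ramp(distance_km, 10.0, 100.0)    # closer -> higher
    m_cold_top = 1.0 - ramp(trop_rel_bt_k, -10.0, 10.0)   # colder than tropopause -> higher
    memberships = [m_proximity, m_cold_top]
    if optical_depth is not None:                         # daytime-only parameter
        memberships.append(ramp(optical_depth, 20.0, 100.0))
    return sum(memberships) / len(memberships)

print(hiwc_probability(distance_km=15.0, trop_rel_bt_k=-8.0, optical_depth=80.0))
print(hiwc_probability(distance_km=150.0, trop_rel_bt_k=12.0))   # night, far from updraft
```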

  18. Refinement of a Method for Identifying Probable Archaeological Sites from Remotely Sensed Data

    Science.gov (United States)

    Tilton, James C.; Comer, Douglas C.; Priebe, Carey E.; Sussman, Daniel; Chen, Li

    2012-01-01

    To facilitate locating archaeological sites before they are compromised or destroyed, we are developing approaches for generating maps of probable archaeological sites, through detecting subtle anomalies in vegetative cover, soil chemistry, and soil moisture by analyzing remotely sensed data from multiple sources. We previously reported some success in this effort with a statistical analysis of slope, radar, and Ikonos data (including tasseled cap and NDVI transforms) with Student's t-test. We report here on new developments in our work, performing an analysis of 8-band multispectral Worldview-2 data. The Worldview-2 analysis begins by computing medians and median absolute deviations for the pixels in various annuli around each site of interest on the 28 band difference ratios. We then use principal components analysis followed by linear discriminant analysis to train a classifier which assigns a posterior probability that a location is an archaeological site. We tested the procedure using leave-one-out cross validation with a second leave-one-out step to choose parameters on a 9,859x23,000 subset of the WorldView-2 data over the western portion of Ft. Irwin, CA, USA. We used 100 known non-sites and trained one classifier for lithic sites (n=33) and one classifier for habitation sites (n=16). We then analyzed convex combinations of scores from the Archaeological Predictive Model (APM) and our scores. We found that the combined scores had a higher area under the ROC curve than either individual method, indicating that including WorldView-2 data in analysis improved the predictive power of the provided APM.
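
    The classification chain described above (per-annulus statistics, principal components analysis, then linear discriminant analysis with leave-one-out validation) can be sketched with scikit-learn as below. The feature matrix here is random placeholder data with the same rough shape as the study (33 lithic sites, 100 non-sites), not the WorldView-2 band-difference ratios.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)

# Placeholder features: one row per location; columns would be medians and median
# absolute deviations of band-difference ratios over annuli around the location.
X_sites = rng.normal(0.3, 1.0, size=(33, 56))       # known lithic sites
X_nonsites = rng.normal(0.0, 1.0, size=(100, 56))   # known non-sites
X = np.vstack([X_sites, X_nonsites])
y = np.array([1] * len(X_sites) + [0] * len(X_nonsites))

clf = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
print("leave-one-out accuracy:", cross_val_score(clf, X, y, cv=LeaveOneOut()).mean())

clf.fit(X, y)
print("posterior P(site) for first 3 locations:", clf.predict_proba(X[:3])[:, 1])
```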

  19. Reliable single chip genotyping with semi-parametric log-concave mixtures.

    Directory of Open Access Journals (Sweden)

    Ralph C A Rippe

    Full Text Available The common approach to SNP genotyping is to use (model-based) clustering per individual SNP, on a set of arrays. Genotyping all SNPs on a single array is much more attractive, in terms of flexibility, stability and applicability, when developing new chips. A new semi-parametric method, named SCALA, is proposed. It is based on a mixture model using semi-parametric log-concave densities. Instead of using the raw data, the mixture is fitted on a two-dimensional histogram, thereby making computation time almost independent of the number of SNPs. Furthermore, the algorithm is effective in low-MAF situations. Comparisons between SCALA and CRLMM on HapMap genotypes show very reliable calling of single arrays. Some heterozygous genotypes from HapMap are called homozygous by SCALA and to a lesser extent by CRLMM too. Furthermore, HapMap's NoCalls (NN) could be genotyped by SCALA, mostly with high probability. The software is available as R scripts from the website www.math.leidenuniv.nl/~rrippe.

  20. PROCOPE, Collision Probability in Pin Clusters and Infinite Rod Lattices

    International Nuclear Information System (INIS)

    Amyot, L.; Daolio, C.; Benoist, P.

    1984-01-01

    1 - Nature of physical problem solved: Calculation of directional collision probabilities in pin clusters and infinite rod lattices. 2 - Method of solution: a) Gauss integration of analytical expressions for collision probabilities. b) alternately, an approximate closed expression (not involving integrals) may be used for pin-to-pin interactions. 3 - Restrictions on the complexity of the problem: number of fuel pins must be smaller than 62; maximum number of groups of symmetry is 300

  1. Reliability analysis of reactor systems by applying probability method; Analiza pouzdanosti reaktorskih sistema primenom metoda verovatnoce

    Energy Technology Data Exchange (ETDEWEB)

    Milivojevic, S [Institute of Nuclear Sciences Boris Kidric, Vinca, Beograd (Serbia and Montenegro)

    1974-12-15

    The probability method chosen for analysing reactor system reliability is considered realistic since it is based on verified experimental data. In fact this is a statistical method. The probability method developed takes into account the probability distribution of permitted levels of relevant parameters and their particular influence on the reliability of the system as a whole. The proposed method is rather general, and was used for the problem of thermal safety analysis of a reactor system. This analysis makes it possible to analyze basic properties of the system under different operating conditions; expressed in the form of probabilities, the results show the reliability of the system as a whole as well as the reliability of each component.

  2. Single-Step Fabrication of High-Density Microdroplet Arrays of Low-Surface-Tension Liquids.

    Science.gov (United States)

    Feng, Wenqian; Li, Linxian; Du, Xin; Welle, Alexander; Levkin, Pavel A

    2016-04-01

    A facile approach for surface patterning that enables single-step fabrication of high-density arrays of low-surface-tension organic-liquid microdroplets is described. This approach enables miniaturized and parallel high-throughput screenings in organic solvents, formation of homogeneous arrays of hydrophobic nanoparticles, polymer micropads of specific shapes, and polymer microlens arrays. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Low-field multi-step magnetization of GaV4S8 single crystal

    Energy Technology Data Exchange (ETDEWEB)

    Nakamura, H; Kajinami, Y; Tabata, Y [Department of Materials Science and Engineering, Kyoto University, Kyoto 606-8501 (Japan); Ikeno, R; Motoyama, G; Kohara, T, E-mail: h.nakamura@ht8.ecs.kyoto-u.ac.j [Graduate School of Material Science, University of Hyogo, Kamigori, Hyogo 678-1297 (Japan)

    2009-01-01

    The magnetization process of single crystalline GaV4S8 including tetrahedral magnetic clusters was measured in the magnetically ordered state below T{sub C} {approx_equal} 13 K. Just below TC, steps were observed at very low fields of the order of 100 Oe, suggesting the competition of several intra- and inter-cluster interactions in a low energy range.

  4. Diffusion welding. [heat treatment of nickel alloys following single step vacuum welding process

    Science.gov (United States)

    Holko, K. H. (Inventor)

    1974-01-01

    Dispersion-strengthened nickel alloys are sanded on one side and chemically polished. This is followed by a single-step welding process wherein the polished surfaces are forced into intimate contact at 1,400 F for one hour in a vacuum. Diffusion, recrystallization, and grain growth across the original weld interface are obtained during postheating at 2,150 F for two hours in hydrogen.

  5. Estimating the empirical probability of submarine landslide occurrence

    Science.gov (United States)

    Geist, Eric L.; Parsons, Thomas E.; Mosher, David C.; Shipp, Craig; Moscardelli, Lorena; Chaytor, Jason D.; Baxter, Christopher D. P.; Lee, Homa J.; Urgeles, Roger

    2010-01-01

    The empirical probability for the occurrence of submarine landslides at a given location can be estimated from age dates of past landslides. In this study, tools developed to estimate earthquake probability from paleoseismic horizons are adapted to estimate submarine landslide probability. In both types of estimates, one has to account for the uncertainty associated with age-dating individual events as well as the open time intervals before and after the observed sequence of landslides. For observed sequences of submarine landslides, we typically only have the age date of the youngest event and possibly of a seismic horizon that lies below the oldest event in a landslide sequence. We use an empirical Bayes analysis based on the Poisson-Gamma conjugate prior model specifically applied to the landslide probability problem. This model assumes that landslide events as imaged in geophysical data are independent and occur in time according to a Poisson distribution characterized by a rate parameter λ. With this method, we are able to estimate the most likely value of λ and, importantly, the range of uncertainty in this estimate. Examples considered include landslide sequences observed in the Santa Barbara Channel, California, and in Port Valdez, Alaska. We confirm that, given the uncertainties of age dating, landslide complexes can be treated as single events by performing a statistical test of age dates representing the main failure episode of the Holocene Storegga landslide complex.
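
    The Poisson-Gamma conjugate model lends itself to a short sketch: with a Gamma prior on the rate λ and n dated landslides observed over a window T, the posterior is again Gamma, giving both a most likely rate and its uncertainty. The prior hyperparameters and the observed record below are illustrative, not the Santa Barbara Channel or Port Valdez data.

```python
import numpy as np
from scipy import stats

n_events = 4          # landslides imaged in the record (illustrative)
window_kyr = 10.0     # length of the observation window, kyr (illustrative)

# Gamma prior on the Poisson rate lambda (events per kyr); conjugacy gives a
# Gamma posterior with shape alpha0 + n and rate beta0 + T.
alpha0, beta0 = 1.0, 2.0
alpha_post, beta_post = alpha0 + n_events, beta0 + window_kyr

posterior = stats.gamma(a=alpha_post, scale=1.0 / beta_post)
lam_mode = (alpha_post - 1.0) / beta_post          # most likely rate
lo, hi = posterior.ppf([0.025, 0.975])             # 95% credible interval
print(f"rate mode {lam_mode:.3f} /kyr, 95% interval [{lo:.3f}, {hi:.3f}]")

# Posterior predictive probability of at least one landslide in the next 1 kyr,
# integrating over the uncertainty in lambda.
lam_samples = posterior.rvs(size=100_000, random_state=0)
print("P(>=1 event in next 1 kyr):", np.mean(1.0 - np.exp(-lam_samples)))
```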

  6. Principles of crystallization, and methods of single crystal growth

    International Nuclear Information System (INIS)

    Chacra, T.

    2010-01-01

    Most single crystals (monocrystals) have distinguished optical, electrical, or magnetic properties, which make single crystals key elements in most modern technical devices: they may be used as lenses, prisms, or gratings in optical devices, as filters in X-ray and spectrographic devices, or as conductors and semiconductors in the electronics and computer industries. Furthermore, single crystals are used in transducer devices. Moreover, they are indispensable elements in laser and maser emission technology. Crystal Growth Technology (CGT) has started and developed in international universities and scientific institutions, aiming at single crystals which may have significant properties and industrial applications that can attract the attention of international crystal growth centers, to adopt the industrial production and marketing of such crystals. Unfortunately, Arab universities generally, and Syrian universities specifically, do not give even the minimum interest to this field of science. The purpose of this work is to attract the attention of crystallographers, physicists and chemists in the Arab universities and research centers to the importance of crystal growth, and to work, in the first stage, to establish simple, uncomplicated laboratories for the growth of single crystals. Such laboratories can be supplied with equipment which is partly available or can be manufactured in the local market. Many references (articles, papers, diagrams, etc.) have been studied to conclude the most important theoretical principles of phase transitions, especially of crystallization. The conclusions of this study are summarized in three principles: thermodynamic, morphologic, and kinetic. The study is completed by a brief description of the main single crystal growth methods, with sketches of the equipment used in each method, which can be considered as primary designs for the equipment of a new crystal growth laboratory. (author)

  7. Single-step colloidal quantum dot films for infrared solar harvesting

    KAUST Repository

    Kiani, Amirreza

    2016-11-01

    Semiconductors with bandgaps in the near- to mid-infrared can harvest solar light that is otherwise wasted by conventional single-junction solar cell architectures. In particular, colloidal quantum dots (CQDs) are promising materials since they are cost-effective, processed from solution, and have a bandgap that can be tuned into the infrared (IR) via the quantum size effect. These characteristics enable them to harvest the infrared portion of the solar spectrum to which silicon is transparent. To date, IR CQD solar cells have been made using a wasteful and complex sequential layer-by-layer process. Here, we demonstrate ∼1 eV bandgap solar-harvesting CQD films deposited in a single step. By engineering a fast-drying solvent mixture for metal iodide-capped CQDs, we deposited active layers greater than 200 nm in thickness having a mean roughness less than 1 nm. We integrated these films into infrared solar cells that are stable in air and exhibit power conversion efficiencies of 3.5% under illumination by the full solar spectrum, and 0.4% through a simulated silicon solar cell filter.

  8. The Technique of Changing the Drive Method of Micro Step Drive and Sensorless Drive for Hybrid Stepping Motor

    Science.gov (United States)

    Yoneda, Makoto; Dohmeki, Hideo

    A position control system with the advantages of large torque, low vibration, and high resolution can be obtained with constant-current micro step drive applied to a hybrid stepping motor. However, the loss is large, because the current is controlled uniformly regardless of the load torque. As one technique for a position control system in which high efficiency is realizable, the same sensorless control as used for a permanent magnet motor is effective. But the control methods proposed until now control speed. This paper therefore proposes changing the drive method between micro step drive and sensorless drive. The change of drive method was verified by simulation and experiment. At no load, it was confirmed that no large speed change is produced at the time of a change, by setting the electrical angle and resetting the integrator to zero. Under load, it was found that a large speed change arose. The proposed system could change the drive method by setting the initial value of the integrator using the estimated result, without producing a speed change. With this technique, a low-loss position control system which employs the advantages of the hybrid stepping motor has been built.

  9. Bridge flap technique as a single-step solution to mucogingival problems: A case series

    Directory of Open Access Journals (Sweden)

    Vivek Gupta

    2011-01-01

    Full Text Available Shallow vestibule, gingival recession, inadequate width of attached gingiva (AG and aberrant frenum pull are an array of mucogingival problems for which several independent and effective surgical solutions are reported in the literature. This case series reports the effectiveness of the bridge flap technique as a single-step surgical entity for increasing the depth of the vestibule, root coverage, increasing the width of the AG and solving the problem of abnormal frenum pull. Eight patients with 18 teeth altogether having Millers class I, II or III recession along with problems of shallow vestibule, inadequate width of AG and with or without frenum pull underwent this surgical procedure and were followed-up till 9 months post-operatively. The mean root coverage obtained was 55% and the mean average gain in width of the AG was 3.5 mm. The mean percentage gain in clinical attachment level was 41%. The bridge flap technique can be an effective single-step solution for the aforementioned mucogingival problems if present simultaneously in any case, and offers considerable advantages over other mucogingival surgical techniques in terms of simplicity, limited chair-time for the patient and the operator, single surgical intervention for manifold mucogingival problems and low morbidity because of the absence of palatal donor tissue.

  10. Effects of phylogenetic reconstruction method on the robustness of species delimitation using single-locus data.

    Science.gov (United States)

    Tang, Cuong Q; Humphreys, Aelys M; Fontaneto, Diego; Barraclough, Timothy G; Paradis, Emmanuel

    2014-10-01

    Coalescent-based species delimitation methods combine population genetic and phylogenetic theory to provide an objective means for delineating evolutionarily significant units of diversity. The generalised mixed Yule coalescent (GMYC) and the Poisson tree process (PTP) are methods that use ultrametric (GMYC or PTP) or non-ultrametric (PTP) gene trees as input, intended for use mostly with single-locus data such as DNA barcodes. Here, we assess how robust the GMYC and PTP are to different phylogenetic reconstruction and branch smoothing methods. We reconstruct over 400 ultrametric trees using up to 30 different combinations of phylogenetic and smoothing methods and perform over 2000 separate species delimitation analyses across 16 empirical data sets. We then assess how variable diversity estimates are, in terms of richness and identity, with respect to species delimitation, phylogenetic and smoothing methods. The PTP method generally generates diversity estimates that are more robust to different phylogenetic methods. The GMYC is more sensitive, but provides consistent estimates for BEAST trees. The lower consistency of GMYC estimates is likely a result of differences among gene trees introduced by the smoothing step. Unresolved nodes (real anomalies or methodological artefacts) affect both GMYC and PTP estimates, but have a greater effect on GMYC estimates. Branch smoothing is a difficult step and perhaps an underappreciated source of bias that may be widespread among studies of diversity and diversification. Nevertheless, careful choice of phylogenetic method does produce equivalent PTP and GMYC diversity estimates. We recommend simultaneous use of the PTP model with any model-based gene tree (e.g. RAxML) and GMYC approaches with BEAST trees for obtaining species hypotheses.

  11. A simple and rapid method for high-resolution visualization of single-ion tracks

    Directory of Open Access Journals (Sweden)

    Masaaki Omichi

    2014-11-01

    Full Text Available Prompt determination of spatial points of single-ion tracks plays a key role in high-energy particle induced-cancer therapy and gene/plant mutations. In this study, a simple method for the high-resolution visualization of single-ion tracks without etching was developed through the use of polyacrylic acid (PAA)-N, N’-methylene bisacrylamide (MBAAm) blend films. One of the steps of the proposed method includes exposure of the irradiated films to water vapor for several minutes. Water vapor was found to promote the cross-linking reaction of PAA and MBAAm to form a bulky cross-linked structure; the ion-track scars were detectable at a nanometer scale by atomic force microscopy. This study demonstrated that each scar is easily distinguishable, and the amount of generated radicals of the ion tracks can be estimated by measuring the height of the scars, even in highly dense ion tracks. This method is suitable for the visualization of the penumbra region in a single-ion track with a high spatial resolution of 50 nm, which is sufficiently small to confirm that a single ion hits a cell nucleus with a size ranging between 5 and 20 μm.

  12. A simple and rapid method for high-resolution visualization of single-ion tracks

    Energy Technology Data Exchange (ETDEWEB)

    Omichi, Masaaki [Department of Applied Chemistry, Graduate School of Engineering, Osaka University, Osaka 565-0871 (Japan); Center for Collaborative Research, Anan National College of Technology, Anan, Tokushima 774-0017 (Japan); Choi, Wookjin; Sakamaki, Daisuke; Seki, Shu, E-mail: seki@chem.eng.osaka-u.ac.jp [Department of Applied Chemistry, Graduate School of Engineering, Osaka University, Osaka 565-0871 (Japan); Tsukuda, Satoshi [Institute of Multidisciplinary Research for Advanced Materials, Tohoku University, Sendai, Miyagi 980-8577 (Japan); Sugimoto, Masaki [Japan Atomic Energy Agency, Takasaki Advanced Radiation Research Institute, Gunma, Gunma 370-1292 (Japan)

    2014-11-15

    Prompt determination of spatial points of single-ion tracks plays a key role in high-energy particle induced-cancer therapy and gene/plant mutations. In this study, a simple method for the high-resolution visualization of single-ion tracks without etching was developed through the use of polyacrylic acid (PAA)-N, N’-methylene bisacrylamide (MBAAm) blend films. One of the steps of the proposed method includes exposure of the irradiated films to water vapor for several minutes. Water vapor was found to promote the cross-linking reaction of PAA and MBAAm to form a bulky cross-linked structure; the ion-track scars were detectable at a nanometer scale by atomic force microscopy. This study demonstrated that each scar is easily distinguishable, and the amount of generated radicals of the ion tracks can be estimated by measuring the height of the scars, even in highly dense ion tracks. This method is suitable for the visualization of the penumbra region in a single-ion track with a high spatial resolution of 50 nm, which is sufficiently small to confirm that a single ion hits a cell nucleus with a size ranging between 5 and 20 μm.

  13. Improved Full-Newton Step O(nL) Infeasible Interior-Point Method for Linear Optimization

    NARCIS (Netherlands)

    Gu, G.; Mansouri, H.; Zangiabadi, M.; Bai, Y.Q.; Roos, C.

    2009-01-01

    We present several improvements of the full-Newton step infeasible interior-point method for linear optimization introduced by Roos (SIAM J. Optim. 16(4):1110–1136, 2006). Each main step of the method consists of a feasibility step and several centering steps. We use a more natural feasibility step,

  14. Single-step preparation of TiO2/MWCNT Nanohybrid materials by laser pyrolysis and application to efficient photovoltaic energy conversion.

    Science.gov (United States)

    Wang, Jin; Lin, Yaochen; Pinault, Mathieu; Filoramo, Arianna; Fabert, Marc; Ratier, Bernard; Bouclé, Johann; Herlin-Boime, Nathalie

    2015-01-14

    This paper presents the continuous-flow and single-step synthesis of a TiO2/MWCNT (multiwall carbon nanotubes) nanohybrid material. The synthesis method allows achieving high coverage and an intimate interface between the TiO2 particles and MWCNTs, together with a highly homogeneous distribution of nanotubes within the oxide. Such materials used as the active layer in the porous photoelectrode of solid-state dye-sensitized solar cells lead to a substantial performance improvement (20%) as compared to reference devices.

  15. The Impact of Reconstruction Methods, Phylogenetic Uncertainty and Branch Lengths on Inference of Chromosome Number Evolution in American Daisies (Melampodium, Asteraceae).

    Science.gov (United States)

    McCann, Jamie; Schneeweiss, Gerald M; Stuessy, Tod F; Villaseñor, Jose L; Weiss-Schneeweiss, Hanna

    2016-01-01

    Chromosome number change (polyploidy and dysploidy) plays an important role in plant diversification and speciation. Investigating chromosome number evolution commonly entails ancestral state reconstruction performed within a phylogenetic framework, which is, however, prone to uncertainty, whose effects on evolutionary inferences are insufficiently understood. Using the chromosomally diverse plant genus Melampodium (Asteraceae) as model group, we assess the impact of reconstruction method (maximum parsimony, maximum likelihood, Bayesian methods), branch length model (phylograms versus chronograms) and phylogenetic uncertainty (topological and branch length uncertainty) on the inference of chromosome number evolution. We also address the suitability of the maximum clade credibility (MCC) tree as single representative topology for chromosome number reconstruction. Each of the listed factors causes considerable incongruence among chromosome number reconstructions. Discrepancies between inferences on the MCC tree from those made by integrating over a set of trees are moderate for ancestral chromosome numbers, but severe for the difference of chromosome gains and losses, a measure of the directionality of dysploidy. Therefore, reliance on single trees, such as the MCC tree, is strongly discouraged and model averaging, taking both phylogenetic and model uncertainty into account, is recommended. For studying chromosome number evolution, dedicated models implemented in the program ChromEvol and ordered maximum parsimony may be most appropriate. Chromosome number evolution in Melampodium follows a pattern of bidirectional dysploidy (starting from x = 11 to x = 9 and x = 14, respectively) with no prevailing direction.

  16. The Impact of Reconstruction Methods, Phylogenetic Uncertainty and Branch Lengths on Inference of Chromosome Number Evolution in American Daisies (Melampodium, Asteraceae).

    Directory of Open Access Journals (Sweden)

    Jamie McCann

    Full Text Available Chromosome number change (polyploidy and dysploidy) plays an important role in plant diversification and speciation. Investigating chromosome number evolution commonly entails ancestral state reconstruction performed within a phylogenetic framework, which is, however, prone to uncertainty, whose effects on evolutionary inferences are insufficiently understood. Using the chromosomally diverse plant genus Melampodium (Asteraceae) as model group, we assess the impact of reconstruction method (maximum parsimony, maximum likelihood, Bayesian methods), branch length model (phylograms versus chronograms) and phylogenetic uncertainty (topological and branch length uncertainty) on the inference of chromosome number evolution. We also address the suitability of the maximum clade credibility (MCC) tree as single representative topology for chromosome number reconstruction. Each of the listed factors causes considerable incongruence among chromosome number reconstructions. Discrepancies between inferences on the MCC tree from those made by integrating over a set of trees are moderate for ancestral chromosome numbers, but severe for the difference of chromosome gains and losses, a measure of the directionality of dysploidy. Therefore, reliance on single trees, such as the MCC tree, is strongly discouraged and model averaging, taking both phylogenetic and model uncertainty into account, is recommended. For studying chromosome number evolution, dedicated models implemented in the program ChromEvol and ordered maximum parsimony may be most appropriate. Chromosome number evolution in Melampodium follows a pattern of bidirectional dysploidy (starting from x = 11 to x = 9 and x = 14, respectively) with no prevailing direction.

  17. Rapid methods for detection of bacteria

    DEFF Research Database (Denmark)

    Corfitzen, Charlotte B.; Andersen, B.Ø.; Miller, M.

    2006-01-01

    Traditional methods for detection of bacteria in drinking water, e.g. Heterotrophic Plate Counts (HPC) or Most Probable Number (MPN), take 48-72 hours to give the result. New rapid methods for detection of bacteria are needed to protect the consumers against contamination. Two rapid methods...

  18. Tumour burden in early stage Hodgkin's disease: the single most important prognostic factor for outcome after radiotherapy

    DEFF Research Database (Denmark)

    Specht, L; Nordentoft, A M; Cold, Søren

    1987-01-01

    One hundred and forty-two patients with Hodgkin's disease PS I or II were treated with total or subtotal nodal irradiation as part of a prospective randomized trial in the Danish National Hodgkin Study during the period 1971-83. They were followed till death or--at the time of this analysis......--from 15 to 146 months after initiation of therapy. The initial tumour burden of each patient was assessed, combining tumour size of each involved region and number of regions involved. Tumour burden thus assessed proved to be the single most important prognostic factor with regard to disease free survival...

  19. When a Step Is Not a Step! Specificity Analysis of Five Physical Activity Monitors.

    Science.gov (United States)

    O'Connell, Sandra; ÓLaighin, Gearóid; Quinlan, Leo R

    2017-01-01

    Physical activity is an essential aspect of a healthy lifestyle for both physical and mental health states. As step count is one of the most utilized measures for quantifying physical activity, it is important that activity-monitoring devices be both sensitive and specific in recording actual steps taken and disregard non-stepping body movements. The objective of this study was to assess the specificity of five activity monitors during a variety of prescribed non-stepping activities. Participants wore five activity monitors simultaneously for a variety of prescribed activities including deskwork, taking an elevator, taking a bus journey, automobile driving, washing and drying dishes; functional reaching task; indoor cycling; outdoor cycling; and indoor rowing. Each task was carried out for either a specific duration of time or over a specific distance. Activity monitors tested were the ActivPAL micro™, NL-2000™ pedometer, Withings Smart Activity Monitor Tracker (Pulse O2)™, Fitbit One™ and Jawbone UP™. Participants were video-recorded while carrying out the prescribed activities and the false positive step count registered on each activity monitor was obtained and compared to the video. All activity monitors registered a significant number of false positive steps per minute during one or more of the prescribed activities. The Withings™ activity monitor performed best, registering a significant number of false positive steps per minute during the outdoor cycling activity only (P = 0.025). The Jawbone™ registered a significant number of false positive steps during the functional reaching task and while washing and drying dishes, which involved arm and hand movement, and significant numbers of false positive steps were also registered during the cycling exercises. As false positive steps were registered on the activity monitors during the non-stepping activities, the authors conclude that non-stepping physical activities can result in the false detection of steps. This can negatively affect the quantification of physical activity.

  20. Residue 182 influences the second step of protein-tyrosine phosphatase-mediated catalysis

    DEFF Research Database (Denmark)

    Pedersen, A.K.; Guo, X.; Møller, K.B.

    2004-01-01

    Previous enzyme kinetic and structural studies have revealed a critical role for Asp(181) (PTP1B numbering) in PTP (protein-tyrosine phosphatase)-mediated catalysis. In the E-P (phosphoenzyme) formation step, Asp(181) functions as a general acid, while in the E-P hydrolysis step it acts...... as a general base. Most of our understanding of the role of Asp(181) is derived from studies with the Yersinia PTP and the mammalian PTP1B, and to some extent also TC (T-cell)-PTP and the related PTPalpha and PTPepsilon. The neighbouring residue 182 is a phenylalanine in these four mammalian enzymes...... and a glutamine in Yersinia PTP. Surprisingly, little attention has been paid to the fact that this residue is a histidine in most other mammalian PTPs. Using a reciprocal single-point mutational approach with introduction of His(182) in PTP1B and Phe(182) in PTPH1, we demonstrate here that His(182)-PTPs...

  1. Comparing Multi-Step IMAC and Multi-Step TiO2 Methods for Phosphopeptide Enrichment

    Science.gov (United States)

    Yue, Xiaoshan; Schunter, Alissa; Hummon, Amanda B.

    2016-01-01

    Phosphopeptide enrichment from complicated peptide mixtures is an essential step for mass spectrometry-based phosphoproteomic studies to reduce sample complexity and ionization suppression effects. Typical methods for enriching phosphopeptides include immobilized metal affinity chromatography (IMAC) or titanium dioxide (TiO2) beads, which have selective affinity and can interact with phosphopeptides. In this study, the IMAC enrichment method was compared with the TiO2 enrichment method, using a multi-step enrichment strategy from whole cell lysate, to evaluate their abilities to enrich for different types of phosphopeptides. The peptide-to-beads ratios were optimized for both IMAC and TiO2 beads. Both IMAC and TiO2 enrichments were performed for three rounds to enable the maximum extraction of phosphopeptides from the whole cell lysates. The phosphopeptides that are unique to IMAC enrichment, unique to TiO2 enrichment, and identified with both IMAC and TiO2 enrichment were analyzed for their characteristics. Both IMAC and TiO2 enriched similar amounts of phosphopeptides with comparable enrichment efficiency. However, phosphopeptides that are unique to IMAC enrichment showed a higher percentage of multi-phosphopeptides, as well as a higher percentage of longer, basic, and hydrophilic phosphopeptides. Also, the IMAC and TiO2 procedures clearly enriched phosphopeptides with different motifs. Finally, further enriching with two rounds of TiO2 from the supernatant after IMAC enrichment, or further enriching with two rounds of IMAC from the supernatant after TiO2 enrichment, does not fully recover the phosphopeptides that are not identified with the corresponding multi-step enrichment. PMID:26237447

  2. An adjusted probability method for the identification of sociometric status in classrooms

    NARCIS (Netherlands)

    García Bacete, F.J.; Cillessen, A.H.N.

    2017-01-01

    Objective: The aim of this study was to test the performance of an adjusted probability method for sociometric classification proposed by García Bacete (GB) in comparison with two previous methods. Specific goals were to examine the overall agreement between methods, the behavioral correlates of

  3. Single-step syngas-to-distillates (S2D) process based on biomass-derived syngas--a techno-economic analysis.

    Science.gov (United States)

    Zhu, Yunhua; Jones, Susanne B; Biddy, Mary J; Dagle, Robert A; Palo, Daniel R

    2012-08-01

    This study compared biomass gasification-based syngas-to-distillate (S2D) systems using techno-economic analysis (TEA). Three cases, state of technology (SOT), goal, and conventional, were compared in terms of performance and cost. The SOT case represented the best available experimental results for a process starting with syngas using a single-step dual-catalyst reactor for distillate generation. The conventional case mirrored a conventional two-step S2D process consisting of separate syngas-to-methanol and methanol-to-gasoline (MTG) processes. The goal case assumed the same performance as the conventional, but with a single-step S2D technology. TEA results revealed that the SOT was more expensive than the conventional and goal cases. The SOT case suffers from low one-pass yield and high selectivity to light hydrocarbons, both of which drive up production cost. Sensitivity analysis indicated that light hydrocarbon yield and single-pass conversion efficiency were the key factors driving the high cost for the SOT case. Copyright © 2012 Elsevier Ltd. All rights reserved.

  4. Towards Single-Step Biofabrication of Organs on a Chip via 3D Printing.

    Science.gov (United States)

    Knowlton, Stephanie; Yenilmez, Bekir; Tasoglu, Savas

    2016-09-01

    Organ-on-a-chip engineering employs microfabrication of living tissues within microscale fluid channels to create constructs that closely mimic human organs. With the advent of 3D printing, we predict that single-step fabrication of these devices will enable rapid design and cost-effective iterations in the development stage, facilitating rapid innovation in this field. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. A two-step Hilbert transform method for 2D image reconstruction

    International Nuclear Information System (INIS)

    Noo, Frederic; Clackdoyle, Rolf; Pack, Jed D

    2004-01-01

    The paper describes a new accurate two-dimensional (2D) image reconstruction method consisting of two steps. In the first step, the backprojected image is formed after taking the derivative of the parallel projection data. In the second step, a Hilbert filtering is applied along certain lines in the differentiated backprojection (DBP) image. Formulae for performing the DBP step in fan-beam geometry are also presented. The advantage of this two-step Hilbert transform approach is that in certain situations, regions of interest (ROIs) can be reconstructed from truncated projection data. Simulation results are presented that illustrate very similar reconstructed image quality using the new method compared to standard filtered backprojection, and that show the capability to correctly handle truncated projections. In particular, a simulation is presented of a wide patient whose projections are truncated laterally yet for which highly accurate ROI reconstruction is obtained
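
    As a rough illustration of only the first of the two steps described above (the differentiated backprojection, DBP), the sketch below differentiates the parallel-beam projections of a uniform disc phantom, whose projections are known in closed form, and backprojects them onto an image grid. The second step, the Hilbert filtering along lines of the DBP image, is omitted, so this is not a complete reconstruction; the phantom, grid sizes and discretization are assumptions made for illustration only.

```python
import numpy as np

# Parallel-beam sinogram of a centred uniform disc of radius R:
# p(theta, s) = 2 * sqrt(R^2 - s^2), identical for every angle.
R = 0.5
n_angles, n_det = 180, 257
thetas = np.linspace(0.0, np.pi, n_angles, endpoint=False)
s = np.linspace(-1.0, 1.0, n_det)
row = 2.0 * np.sqrt(np.clip(R ** 2 - s ** 2, 0.0, None))
sino = np.tile(row, (n_angles, 1))

# Step 1a: derivative of the projection data along the detector coordinate s.
dsino = np.gradient(sino, s[1] - s[0], axis=1)

# Step 1b: backproject the differentiated data onto the image grid.
n_pix = 129
xs = np.linspace(-1.0, 1.0, n_pix)
X, Y = np.meshgrid(xs, xs, indexing="xy")
dbp = np.zeros_like(X)
for theta, drow in zip(thetas, dsino):
    t = X * np.cos(theta) + Y * np.sin(theta)     # detector coordinate hit by each pixel
    dbp += np.interp(t, s, drow)
dbp *= np.pi / n_angles                           # approximate the integral over theta

print("DBP image computed:", dbp.shape)
# The paper's second step would now apply Hilbert filtering along suitable lines
# of 'dbp' to recover the object; that step is omitted in this sketch.
```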

  6. Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods

    Science.gov (United States)

    Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.

    2014-12-01

    Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In a general procedure, MCMC simulations are first conducted for each individual model, and MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that the method of evaluating the geometric mean suffers from the numerical problem of a low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used in which multiple MCMC runs are performed with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the method of using the geometric mean. This is also demonstrated for a case of groundwater modeling with consideration of four alternative models postulated based on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general, and can be used for a wide range of environmental problems for model uncertainty quantification.
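
    The heated-chain idea can be sketched with a toy model for which the marginal likelihood is known in closed form: chains are run at several values of the heating coefficient β, the mean log-likelihood under each heated posterior is recorded, and the log marginal likelihood is obtained by integrating that curve over β. The model, sampler settings and β ladder below are illustrative assumptions, not the setup used in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: data y_i ~ N(theta, 1) with prior theta ~ N(0, 1).
y = rng.normal(1.0, 1.0, size=20)

def log_like(theta):
    return -0.5 * np.sum((y - theta) ** 2) - 0.5 * len(y) * np.log(2.0 * np.pi)

def log_prior(theta):
    return -0.5 * theta ** 2 - 0.5 * np.log(2.0 * np.pi)

def metropolis(log_target, n_iter=20_000, step=0.5):
    """Plain random-walk Metropolis sampler; returns the second half of the chain."""
    x, lp = 0.0, log_target(0.0)
    chain = np.empty(n_iter)
    for i in range(n_iter):
        prop = x + step * rng.normal()
        lp_prop = log_target(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain[i] = x
    return chain[n_iter // 2:]

# Heated chains: p_beta(theta) is proportional to prior(theta) * likelihood(theta)**beta.
betas = np.linspace(0.0, 1.0, 21) ** 3          # ladder denser near beta = 0
mean_loglike = []
for beta in betas:
    samples = metropolis(lambda t: log_prior(t) + beta * log_like(t))
    mean_loglike.append(np.mean([log_like(t) for t in samples]))

# Thermodynamic integration: log Z = integral over beta of E_beta[log likelihood].
log_Z_ti = np.trapz(mean_loglike, betas)

# Exact log marginal likelihood for this conjugate Gaussian model, for comparison.
n, s = len(y), np.sum(y)
log_Z_exact = (-0.5 * n * np.log(2.0 * np.pi) - 0.5 * np.log(n + 1.0)
               - 0.5 * (np.sum(y ** 2) - s ** 2 / (n + 1.0)))
print(f"thermodynamic integration: {log_Z_ti:.3f}, exact: {log_Z_exact:.3f}")
```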

  7. Scale-invariant transition probabilities in free word association trajectories

    Directory of Open Access Journals (Sweden)

    Martin Elias Costa

    2009-09-01

    Full Text Available Free-word association has been used as a vehicle to understand the organization of human thoughts. The original studies relied mainly on qualitative assertions, yielding the widely intuitive notion that trajectories of word associations are structured, yet considerably more random than organized linguistic text. Here we set out to determine a precise characterization of this space, generating a large number of word association trajectories in a web-implemented game. We embedded the trajectories in the graph of word co-occurrences from a linguistic corpus. To constrain possible transport models we measured the memory loss and the cycling probability. These two measures could not be reconciled by a bounded diffusive model since the cycling probability was very high (16% of order-2 cycles), implying a majority of short-range associations, whereas the memory loss was very rapid (converging to the asymptotic value in ∼7 steps), which, in turn, forced a high fraction of long-range associations. We show that memory loss and cycling probabilities of free word association trajectories can be simultaneously accounted for by a model in which transitions are determined by a scale-invariant probability distribution.

  8. A Two-Step Resume Information Extraction Algorithm

    Directory of Open Access Journals (Sweden)

    Jie Chen

    2018-01-01

    Full Text Available With the rapid growth of Internet-based recruiting, there are a great number of personal resumes in recruiting systems. To gain more attention from recruiters, most resumes are written in diverse formats, including varying font size, font colour, and table cells. However, the diversity of format is harmful to data mining, such as resume information extraction, automatic job matching, and candidate ranking. Supervised methods and rule-based methods have been proposed to extract facts from resumes, but they strongly rely on hierarchical structure information and large amounts of labelled data, which are hard to collect in reality. In this paper, we propose a two-step resume information extraction approach. In the first step, the raw text of a resume is segmented into different resume blocks. To achieve this goal, we design a novel feature, Writing Style, to model sentence syntax information. Besides word index and punctuation index, word lexical attributes and the prediction results of classifiers are included in Writing Style. In the second step, multiple classifiers are employed to identify different attributes of fact information in resumes. Experimental results on a real-world dataset show that the algorithm is feasible and effective.

  9. Radiometric method for the determination of uranium in soil and air: single-laboratory evaluation and interlaboratory collaborative study

    International Nuclear Information System (INIS)

    Casella, V.R.; Bishop, C.T.; Glosby, A.A.

    1980-02-01

    Results of a single-laboratory evaluation and an interlaboratory collaborative study of a method for determining uranium isotopes in soil and air samples are presented. The method is applicable to 10-gram soil samples and to both glass fiber and polystyrene (Microsorban) air filter samples. Sample decomposition is accomplished with a nitric-hydrofluoric acid dissolution. After a solvent extraction step to remove most of the iron present, the uranium is isolated by anion exchange chromatography and electrodeposition. Alpha spectrometry is used to measure the uranium isotopes. Two soil samples, a glass fiber air filter sample, and a polystyrene air filter sample were used to evaluate the method for uranium concentrations ranging from a few tenths to about one hundred disintegrations per minute per sample. Tracer recoveries for the single-laboratory evaluation averaged 78%, while the tracer recoveries for the collaborative study averaged 66%. Although the precision of the collaborative study results did not approach counting statistics errors, the measured uranium concentrations for these samples agreed to within 5% of the reference concentrations when the uranium concentration was greater than one disintegration per minute per gram of soil or one disintegration per minute per air filter

  10. Calibration of a single hexagonal NaI(Tl) detector using a new numerical method based on the efficiency transfer method

    Energy Technology Data Exchange (ETDEWEB)

    Abbas, Mahmoud I., E-mail: mabbas@physicist.net [Physics Department, Faculty of Science, Alexandria University, 21511 Alexandria (Egypt); Badawi, M.S. [Physics Department, Faculty of Science, Alexandria University, 21511 Alexandria (Egypt); Ruskov, I.N. [Frank Laboratory of Neutron Physics, Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation); Institute for Nuclear Research and Nuclear Energy, Bulgarian Academy of Sciences, 1784 Sofia (Bulgaria); El-Khatib, A.M. [Physics Department, Faculty of Science, Alexandria University, 21511 Alexandria (Egypt); Grozdanov, D.N. [Frank Laboratory of Neutron Physics, Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation); Institute for Nuclear Research and Nuclear Energy, Bulgarian Academy of Sciences, 1784 Sofia (Bulgaria); Thabet, A.A. [Department of Medical Equipment Technology, Faculty of Allied Medical Sciences, Pharos University in Alexandria (Egypt); Kopatch, Yu.N. [Frank Laboratory of Neutron Physics, Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation); Gouda, M.M. [Physics Department, Faculty of Science, Alexandria University, 21511 Alexandria (Egypt); Skoy, V.R. [Frank Laboratory of Neutron Physics, Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation)

    2015-01-21

    Gamma-ray detector systems are important instruments in a broad range of sciences, and new setups are continually being developed. The most recent step in the evolution of detectors for nuclear spectroscopy is the construction of large arrays of detectors of different forms (for example, conical, pentagonal, hexagonal, etc.) and sizes, where the performance and the efficiency can be increased. In this work, a new direct numerical method (NAM), in an integral form and based on the efficiency transfer (ET) method, is used to calculate the full-energy peak efficiency of a single hexagonal NaI(Tl) detector. The algorithms and the calculations of the effective solid angle ratios for a point (isotropically irradiating) gamma-source situated coaxially at different distances from the detector front-end surface, taking into account the attenuation of the gamma-rays in the detector's material, end-cap and the other materials in-between the gamma-source and the detector, are considered as the core of this ET method. The full-energy peak efficiency values calculated by the NAM are found to be in good agreement with the measured experimental data.
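
    The geometric core of the efficiency transfer idea can be sketched as follows: a full-energy peak efficiency measured at a reference source-detector distance is rescaled by the ratio of (effective) solid angles at the new and reference positions. The sketch below uses only the plain geometric solid angle of the detector face for an on-axis point source and omits the attenuation weighting described in the abstract; the detector radius and efficiency values are assumed numbers.

```python
import math

def solid_angle_on_axis(distance_cm: float, radius_cm: float) -> float:
    """Geometric solid angle of a disc of given radius seen from an on-axis point."""
    return 2.0 * math.pi * (1.0 - distance_cm / math.hypot(distance_cm, radius_cm))

radius = 3.5        # detector face radius (assumed), cm
eff_ref = 0.012     # reference full-energy peak efficiency at d_ref (assumed)
d_ref = 10.0        # reference source-detector distance, cm

for d in (5.0, 15.0, 25.0):
    eff = eff_ref * solid_angle_on_axis(d, radius) / solid_angle_on_axis(d_ref, radius)
    print(f"d = {d:5.1f} cm  ->  transferred efficiency ~ {eff:.4f}")
```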

  11. Stability of one-step methods in transient nonlinear heat conduction

    International Nuclear Information System (INIS)

    Hughes, J.R.

    1977-01-01

    The purpose of the present work is to ascertain practical stability conditions for one-step methods commonly used in transient nonlinear heat conduction analyses. The class of problems considered is governed by a temporally continuous, spatially discrete system involving the capacity matrix C, conductivity matrix K, heat supply vector, temperature vector and time differentiation. In the linear case, in which K and C are constant, the stability behavior of one-step methods is well known. But in this paper the concepts of stability, appropriate to the nonlinear problem, are thoroughly discussed. They of course reduce to the usual stability criterion for the linear, constant coefficient case. However, for nonlinear problems there are differences and these ideas are of key importance in obtaining practical stability conditions. Of particular importance is a recent result which indicates that, in a sense, the trapezoidal and midpoint families are equivalent. Thus, stability results for one family may be translated into a result for the other. The main results obtained are summarized as follows. The stability behavior of the explicit Euler method in the nonlinear regime is analogous to that for linear problems. In particular, an a priori step size restriction may be determined for each time step. The precise time step restriction on implicit conditionally stable members of the trapezoidal and midpoint families is shown not to be determinable a priori. Of considerable practical significance, unconditionally stable members of the trapezoidal and midpoint families are identified
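
    The linear, constant-coefficient case that the nonlinear analysis builds on can be demonstrated directly: for C dT/dt + K T = 0, explicit (forward) Euler is stable only when the step satisfies Δt ≤ 2/λ_max of C⁻¹K, whereas the trapezoidal rule (a member of the trapezoidal/midpoint families discussed above) remains stable for any step. The small 1D conduction system below is an assumed toy example, not taken from the paper.

```python
import numpy as np

n = 20
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)    # conductivity matrix (1D rod)
C = np.eye(n)                                             # lumped capacity matrix
A = np.linalg.solve(C, K)
dt_crit = 2.0 / max(np.linalg.eigvalsh(A))                # explicit Euler stability limit
print(f"critical explicit step: {dt_crit:.4f}")

def step_euler(T, dt):
    return T - dt * (A @ T)

def step_trapezoidal(T, dt):
    I = np.eye(n)
    return np.linalg.solve(I + 0.5 * dt * A, (I - 0.5 * dt * A) @ T)

T0 = np.random.default_rng(1).random(n)
for dt in (0.9 * dt_crit, 1.1 * dt_crit):
    Te, Tt = T0.copy(), T0.copy()
    for _ in range(500):
        Te = step_euler(Te, dt)
        Tt = step_trapezoidal(Tt, dt)
    print(f"dt = {dt:.4f}: |T| after 500 steps -> Euler {np.linalg.norm(Te):.2e}, "
          f"trapezoidal {np.linalg.norm(Tt):.2e}")
```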

  12. Decomposition of conditional probability for high-order symbolic Markov chains

    Science.gov (United States)

    Melnik, S. S.; Usatenko, O. V.

    2017-07-01

    The main goal of this paper is to develop an estimate for the conditional probability function of random stationary ergodic symbolic sequences with elements belonging to a finite alphabet. We elaborate on a decomposition procedure for the conditional probability function of sequences considered to be high-order Markov chains. We represent the conditional probability function as the sum of multilinear memory function monomials of different orders (from zero up to the chain order). This allows us to introduce a family of Markov chain models and to construct artificial sequences via a method of successive iterations, taking into account at each step increasingly high correlations among random elements. At weak correlations, the memory functions are uniquely expressed in terms of the high-order symbolic correlation functions. The proposed method fills the gap between two approaches, namely the likelihood estimation and the additive Markov chains. The obtained results may have applications for sequential approximation of artificial neural network training.
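
    A minimal sketch of an additive binary Markov chain of the kind referred to above is given below: the conditional probability of the next symbol is written as a baseline plus a sum of pairwise memory-function contributions from the previous N symbols, a sequence is generated from that rule, and the resulting two-point correlations are inspected. The particular memory function and parameters are arbitrary assumptions for the demonstration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 10                                     # chain order (memory length)
p_bar = 0.5                                # baseline probability of symbol 1
F = 0.3 * 0.7 ** np.arange(1, N + 1)       # assumed memory function F(r), r = 1..N

def next_symbol_prob(window):
    """P(a_n = 1 | a_{n-1}, ..., a_{n-N}) = p_bar + sum_r F(r) * (a_{n-r} - p_bar)."""
    past = np.asarray(window[::-1], dtype=float)     # most recent symbol first
    return p_bar + np.sum(F * (past - p_bar))

# Generate a long sequence from the additive rule.
seq = list(rng.integers(0, 2, size=N))
for _ in range(100_000):
    seq.append(int(rng.uniform() < next_symbol_prob(seq[-N:])))
seq = np.asarray(seq)

# Two-point correlations of the generated sequence decay with the distance r.
mean = seq.mean()
for r in (1, 2, 5, 10):
    corr = np.mean((seq[:-r] - mean) * (seq[r:] - mean))
    print(f"r = {r:2d}: correlation = {corr:+.4f}")
```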

  13. A two-step leaching method designed based on chemical fraction distribution of the heavy metals for selective leaching of Cd, Zn, Cu, and Pb from metallurgical sludge.

    Science.gov (United States)

    Wang, Fen; Yu, Junxia; Xiong, Wanli; Xu, Yuanlai; Chi, Ru-An

    2018-01-01

    For selective leaching and highly effective recovery of heavy metals from a metallurgical sludge, a two-step leaching method was designed based on the distribution analysis of the chemical fractions of the loaded heavy metal. Hydrochloric acid (HCl) was used as a leaching agent in the first step to leach the relatively labile heavy metals and then ethylenediamine tetraacetic acid (EDTA) was applied to leach the residual metals according to their different fractional distribution. Using the two-step leaching method, 82.89% of Cd, 55.73% of Zn, 10.85% of Cu, and 0.25% of Pb were leached in the first step by 0.7 M HCl at a contact time of 240 min, and the leaching efficiencies for Cd, Zn, Cu, and Pb were elevated up to 99.76, 91.41, 71.85, and 94.06%, by subsequent treatment with 0.2 M EDTA at 480 min, respectively. Furthermore, HCl leaching induced fractional redistribution, which might increase the mobility of the remaining metals and then facilitate the following metal removal by EDTA. The facilitation was further confirmed by the comparison to the one-step leaching method with single HCl or single EDTA, respectively. These results suggested that the designed two-step leaching method by HCl and EDTA could be used for selective leaching and effective recovery of heavy metals from the metallurgical sludge or heavy metal-contaminated solid media.

  14. Camera trap arrays improve detection probability of wildlife: Investigating study design considerations using an empirical dataset.

    Science.gov (United States)

    O'Connor, Kelly M; Nathan, Lucas R; Liberati, Marjorie R; Tingley, Morgan W; Vokoun, Jason C; Rittenhouse, Tracy A G

    2017-01-01

    Camera trapping is a standard tool in ecological research and wildlife conservation. Study designs, particularly for small-bodied or cryptic wildlife species often attempt to boost low detection probabilities by using non-random camera placement or baited cameras, which may bias data, or incorrectly estimate detection and occupancy. We investigated the ability of non-baited, multi-camera arrays to increase detection probabilities of wildlife. Study design components were evaluated for their influence on wildlife detectability by iteratively parsing an empirical dataset (1) by different sizes of camera arrays deployed (1-10 cameras), and (2) by total season length (1-365 days). Four species from our dataset that represented a range of body sizes and differing degrees of presumed detectability based on life history traits were investigated: white-tailed deer (Odocoileus virginianus), bobcat (Lynx rufus), raccoon (Procyon lotor), and Virginia opossum (Didelphis virginiana). For all species, increasing from a single camera to a multi-camera array significantly improved detection probability across the range of season lengths and number of study sites evaluated. The use of a two camera array increased survey detection an average of 80% (range 40-128%) from the detection probability of a single camera across the four species. Species that were detected infrequently benefited most from a multiple-camera array, where the addition of up to eight cameras produced significant increases in detectability. However, for species detected at high frequencies, single cameras produced a season-long (i.e, the length of time over which cameras are deployed and actively monitored) detectability greater than 0.75. These results highlight the need for researchers to be critical about camera trap study designs based on their intended target species, as detectability for each focal species responded differently to array size and season length. We suggest that researchers a priori identify
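
    A back-of-the-envelope view of why arrays help is that, if each camera were to detect a species present at a site independently with per-camera probability p, an array of k cameras would detect it with probability 1 - (1 - p)^k. This independence assumption is a simplification for illustration only and is not the occupancy analysis used in the study.

```python
def array_detection_probability(p_single: float, n_cameras: int) -> float:
    """Survey detection probability of an array of independently detecting cameras."""
    return 1.0 - (1.0 - p_single) ** n_cameras

for p in (0.10, 0.30, 0.75):           # rarely vs. frequently detected species
    row = " ".join(f"{array_detection_probability(p, k):.2f}" for k in range(1, 11))
    print(f"per-camera p = {p:.2f}: {row}")
```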

  15. Camera trap arrays improve detection probability of wildlife: Investigating study design considerations using an empirical dataset.

    Directory of Open Access Journals (Sweden)

    Kelly M O'Connor

    Full Text Available Camera trapping is a standard tool in ecological research and wildlife conservation. Study designs, particularly for small-bodied or cryptic wildlife species often attempt to boost low detection probabilities by using non-random camera placement or baited cameras, which may bias data, or incorrectly estimate detection and occupancy. We investigated the ability of non-baited, multi-camera arrays to increase detection probabilities of wildlife. Study design components were evaluated for their influence on wildlife detectability by iteratively parsing an empirical dataset (1) by different sizes of camera arrays deployed (1-10 cameras), and (2) by total season length (1-365 days). Four species from our dataset that represented a range of body sizes and differing degrees of presumed detectability based on life history traits were investigated: white-tailed deer (Odocoileus virginianus), bobcat (Lynx rufus), raccoon (Procyon lotor), and Virginia opossum (Didelphis virginiana). For all species, increasing from a single camera to a multi-camera array significantly improved detection probability across the range of season lengths and number of study sites evaluated. The use of a two camera array increased survey detection an average of 80% (range 40-128%) from the detection probability of a single camera across the four species. Species that were detected infrequently benefited most from a multiple-camera array, where the addition of up to eight cameras produced significant increases in detectability. However, for species detected at high frequencies, single cameras produced a season-long (i.e., the length of time over which cameras are deployed and actively monitored) detectability greater than 0.75. These results highlight the need for researchers to be critical about camera trap study designs based on their intended target species, as detectability for each focal species responded differently to array size and season length. We suggest that researchers a priori

  16. Platelet-rich plasma differs according to preparation method and human variability.

    Science.gov (United States)

    Mazzocca, Augustus D; McCarthy, Mary Beth R; Chowaniec, David M; Cote, Mark P; Romeo, Anthony A; Bradley, James P; Arciero, Robert A; Beitzel, Knut

    2012-02-15

    Varying concentrations of blood components in platelet-rich plasma preparations may contribute to the variable results seen in recently published clinical studies. The purposes of this investigation were (1) to quantify the level of platelets, growth factors, red blood cells, and white blood cells in so-called one-step (clinically used commercial devices) and two-step separation systems and (2) to determine the influence of three separate blood draws on the resulting components of platelet-rich plasma. Three different platelet-rich plasma (PRP) separation methods (on blood samples from eight subjects with a mean age [and standard deviation] of 31.6 ± 10.9 years) were used: two single-spin processes (PRPLP and PRPHP) and a double-spin process (PRPDS) were evaluated for concentrations of platelets, red and white blood cells, and growth factors. Additionally, the effect of three repetitive blood draws on platelet-rich plasma components was evaluated. The content and concentrations of platelets, white blood cells, and growth factors for each method of separation differed significantly. All separation techniques resulted in a significant increase in platelet concentration compared with native blood. Platelet and white blood-cell concentrations of the PRPHP procedure were significantly higher than platelet and white blood-cell concentrations produced by the so-called single-step PRPLP and the so-called two-step PRPDS procedures, although significant differences between PRPLP and PRPDS were not observed. Comparing the results of the three blood draws with regard to the reliability of platelet number and cell counts, wide variations of intra-individual numbers were observed. Single-step procedures are capable of producing sufficient amounts of platelets for clinical usage. Within the evaluated procedures, platelet numbers and numbers of white blood cells differ significantly. The intra-individual results of platelet-rich plasma separations showed wide variations in

  17. Which Fall Ascertainment Method Captures Most Falls in Pre-Frail and Frail Seniors?

    Science.gov (United States)

    Teister, Corina J; Chocano-Bedoya, Patricia O; Orav, Endel J; Dawson-Hughes, Bess; Meyer, Ursina; Meyer, Otto W; Freystaetter, Gregor; Gagesch, Michael; Rizzoli, Rene; Egli, Andreas; Theiler, Robert; Kanis, John A; Bischoff-Ferrari, Heike A

    2018-06-15

    There is no consensus on the most reliable falls ascertainment method. Therefore, we investigated which method captures the most falls among pre-frail and frail seniors from two randomized controlled trials conducted in Zurich, Switzerland: an 18-month trial (2009-2010) including 200 community-dwelling pre-frail seniors with a prior fall, and a 12-month trial (2005-2008) including 173 frail seniors with acute hip fracture. Both included the same fall ascertainment methods: monthly active-asking, daily self-report diary, and a call-in hotline. We compared the number of falls reported and estimated overall and positive percent agreement between methods. Pre-frail seniors reported 499 falls (rate = 2.5/year) and frail seniors reported 205 falls (rate = 1.4/year). Most falls were reported by active-asking: 81% of falls in pre-frail, and 78% in frail seniors. Among pre-frail seniors, diaries captured an additional 19% of falls, while the hotline added none. Among frail seniors, the hotline added 16% of falls, while diaries added 6%. The positive percent agreement between active-asking and diary was 100% among pre-frail and 88% among frail seniors. While monthly active-asking captures most falls in both groups, this method alone missed 19% of falls in pre-frail and 22% in frail seniors. Thus, a combination of active-asking and diaries for pre-frail, and active-asking and the hotline for frail seniors is warranted.

  18. Single-step preparation of selected biological fluids for the high performance liquid chromatographic analysis of fat-soluble vitamins and antioxidants.

    Science.gov (United States)

    Lazzarino, Giacomo; Longo, Salvatore; Amorini, Angela Maria; Di Pietro, Valentina; D'Urso, Serafina; Lazzarino, Giuseppe; Belli, Antonio; Tavazzi, Barbara

    2017-12-08

    Fat-soluble vitamins and antioxidants are of relevance in health and disease. Current methods for extracting these compounds from biological fluids generally require multiple steps and multiple organic solvents; they are time-consuming and difficult to apply to large numbers of samples simultaneously. We here describe a single-step, one-solvent extraction of fat-soluble vitamins and antioxidants from biological fluids, and the chromatographic separation of all-trans-retinoic acid, 25-hydroxycholecalciferol, all-trans-retinol, astaxanthin, lutein, zeaxanthin, trans-β-apo-8'-carotenal, γ-tocopherol, β-cryptoxanthin, α-tocopherol, phylloquinone, lycopene, α-carotene, β-carotene and coenzyme Q10. Extraction is achieved by adding one volume of biological fluid to two volumes of acetonitrile, vortexing for 60 s and incubating for 60 min at 37°C under agitation. HPLC separation occurs in 30 min using a Hypersil C18 column (100×4.6 mm, 5 μm particle size), a gradient from 70% methanol + 30% H2O to 100% acetonitrile, a flow rate of 1.0 ml/min and a column temperature of 37°C. Compounds are detected using a highly sensitive UV-VIS diode array detector. The suitability of the HPLC method was assessed in terms of sensitivity, reproducibility and recovery. Using the present extraction and chromatographic conditions we obtained values of the fat-soluble vitamins and antioxidants in serum from 50 healthy controls similar to those found in the literature. Additionally, the profile of these compounds was also measured in seminal plasma from 20 healthy fertile donors. The results indicate that this simple, rapid and low-cost sample processing is suitable for extracting fat-soluble vitamins and antioxidants from biological fluids and can be applied in clinical and nutritional studies. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Application of perturbation theory methods to nuclear data uncertainty propagation using the collision probability method

    International Nuclear Information System (INIS)

    Sabouri, Pouya

    2013-01-01

    This thesis presents a comprehensive study of sensitivity/uncertainty analysis for reactor performance parameters (e.g. the k-effective) to the base nuclear data from which they are computed. The analysis starts at the fundamental step, the Evaluated Nuclear Data File and the uncertainties inherently associated with the data they contain, available in the form of variance/covariance matrices. We show that when a methodical and consistent computation of sensitivity is performed, conventional deterministic formalisms can be sufficient to propagate nuclear data uncertainties with the level of accuracy obtained by the most advanced tools, such as state-of-the-art Monte Carlo codes. By applying our developed methodology to three exercises proposed by the OECD (Uncertainty Analysis for Criticality Safety Assessment Benchmarks), we provide insights of the underlying physical phenomena associated with the used formalisms. (author)

  20. A high-yield, one-step synthesis of surfactant-free gold nanostars and numerical study for single-molecule SERS application

    Energy Technology Data Exchange (ETDEWEB)

    Chatterjee, S.; Ringane, A. B.; Arya, A.; Das, G. M.; Dantham, V. R., E-mail: dantham@iitp.ac.in; Laha, R. [Indian Institute of Technology Patna, Department of Physics (India); Hussian, S. [Indian Institute of Technology Patna, Department of Chemistry (India)

    2016-08-15

    We report a high-yield synthesis of star-shaped gold nanostructures in one step, using a new surfactant-free wet chemistry method. Compared to the existing reports, these nanostars were found to have longer and sharper spikes anchored uniformly on the surface of the spherical core, allowing at least a few hot spots irrespective of the incident light polarization. The average experimental values of core radius and spike length were found to be 88.5 and 72 nm, respectively. Using these values in numerical simulations, the local electric field enhancement (η) and localized surface plasmon resonance (LSPR) spectrum were obtained. Moreover, the single-molecule surface-enhanced Raman scattering (SERS) enhancement factor was found to vary from 10^10 to 10^13 depending on the excitation wavelengths. Our theoretical calculations suggest that these nanostructures can be used to fabricate efficient SERS-based biosensors for the detection of single molecules in real time and for predicting structural information of single molecules.

  1. Estimating factors influencing the detection probability of semiaquatic freshwater snails using quadrat survey methods

    Science.gov (United States)

    Roesler, Elizabeth L.; Grabowski, Timothy B.

    2018-01-01

    Developing effective monitoring methods for elusive, rare, or patchily distributed species requires extra considerations, such as imperfect detection. Although detection is frequently modeled, the opportunity to assess it empirically is rare, particularly for imperiled species. We used Pecos assiminea (Assiminea pecos), an endangered semiaquatic snail, as a case study to test detection and accuracy issues surrounding quadrat searches. Quadrats (9 × 20 cm; n = 12) were placed in suitable Pecos assiminea habitat and randomly assigned a treatment, defined as the number of empty snail shells (0, 3, 6, or 9). Ten observers rotated through each quadrat, conducting 5-min visual searches for shells. The probability of detecting a shell when present was 67.4 ± 3.0%, but it decreased with increasing litter depth and with fewer shells present. The mean (± SE) observer accuracy was 25.5 ± 4.3%. Accuracy was positively correlated with the number of shells in the quadrat and negatively correlated with the number of times a quadrat was searched. The results indicate that quadrat surveys likely underrepresent true abundance but accurately determine presence or absence. Understanding the detection and accuracy of elusive, rare, or imperiled species improves density estimates and aids in monitoring and conservation efforts.

  2. Single-step laser-based fabrication and patterning of cell-encapsulated alginate microbeads

    International Nuclear Information System (INIS)

    Kingsley, D M; Dias, A D; Corr, D T; Chrisey, D B

    2013-01-01

    Alginate can be used to encapsulate mammalian cells and for the slow release of small molecules. Packaging alginate as microbead structures allows customizable delivery for tissue engineering, drug release, or contrast agents for imaging. However, state-of-the-art microbead fabrication has a limited range in achievable bead sizes, and poor control over bead placement, which may be desired to localize cellular signaling or delivery. Herein, we present a novel, laser-based method for single-step fabrication and precise planar placement of alginate microbeads. Our results show that bead size is controllable within 8%, and fabricated microbeads can remain immobilized within 2% of their target placement. Demonstration of this technique using human breast cancer cells shows that cells encapsulated within these microbeads survive at a rate of 89.6%, decreasing to 84.3% after five days in culture. Infusing rhodamine dye into microbeads prior to fluorescent microscopy shows their 3D spheroidal geometry and the ability to sequester small molecules. Microbead fabrication and patterning is compatible with conventional cellular transfer and patterning by laser direct-write, allowing location-based cellular studies. While this method can also be used to fabricate microbeads en masse for collection, the greatest value to tissue engineering and drug delivery studies and applications lies in the pattern registry of printed microbeads. (paper)

  3. Confirmed Datura poisoning in a horse most probably due to D. ferox in contaminated tef hay : clinical communication

    Directory of Open Access Journals (Sweden)

    R. Gerber

    2006-06-01

    Full Text Available Two out of a group of 23 mares exposed to tef hay contaminated with Datura ferox (and possibly D. stramonium) developed colic. The 1st animal was unresponsive to conservative treatment, underwent surgery for severe intestinal atony and had to be euthanased. The 2nd was less seriously affected, responded well to analgesics and made an uneventful recovery. This horse exhibited marked mydriasis on the first 2 days of being poisoned and showed protracted, milder mydriasis for a further 7 days. Scopolamine was chemically confirmed in urine from this horse for 3 days following the colic attack, while atropine could just be detected for 2 days. Scopolamine was also the main tropane alkaloid found in the contaminating plant material, confirming that this had most probably been a case of D. ferox poisoning. Although Datura intoxication of horses from contaminated hay was suspected previously, this is the 1st case where the intoxication could be confirmed by urine analysis for tropane alkaloids. Extraction and detection methods for atropine and scopolamine in urine are described, employing enzymatic hydrolysis followed by liquid-liquid extraction and liquid chromatography tandem mass spectrometry (LC/MS/MS).

  4. Introduction to the Interface of Probability and Algorithms

    OpenAIRE

    Aldous, David; Steele, J. Michael

    1993-01-01

    Probability and algorithms enjoy an almost boisterous interaction that has led to an active, extensive literature that touches fields as diverse as number theory and the design of computer hardware. This article offers a gentle introduction to the simplest, most basic ideas that underlie this development.

  5. Estimation of functional failure probability of passive systems based on adaptive importance sampling method

    International Nuclear Information System (INIS)

    Wang Baosheng; Wang Dongqing; Zhang Jianmin; Jiang Jing

    2012-01-01

    In order to estimate the functional failure probability of passive systems, an innovative adaptive importance sampling methodology is presented. In the proposed methodology, information on the variables is extracted with some pre-sampling of points in the failure region. An importance sampling density is then constructed from the sample distribution in the failure region. Taking the AP1000 passive residual heat removal system as an example, the uncertainties related to the model of a passive system and the numerical values of its input parameters are considered in this paper. The probability of functional failure is then estimated with the combination of the response surface method and the adaptive importance sampling method. The numerical results demonstrate the high computational efficiency and excellent accuracy of the methodology compared with traditional probability analysis methods. (authors)
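
    A generic sketch of the underlying importance-sampling step is shown below: crude pre-sampling locates points in the failure region, the sampling density is recentred there, and the small failure probability is then estimated from weighted samples. The limit-state function, distributions and sample sizes are illustrative assumptions and stand in for the adaptive construction and response surface used in the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
dim = 2

def g(x):
    """Assumed limit-state function: failure when g(x) < 0."""
    return 5.0 - x.sum(axis=-1)

# Pre-sampling to locate the failure region (standard-normal input variables).
pre = rng.standard_normal((200_000, dim))
failures = pre[g(pre) < 0.0]
mu_is = failures.mean(axis=0)                  # centre of the importance density

# Importance sampling from N(mu_is, I), reweighted back to the true density.
n_is = 50_000
x = mu_is + rng.standard_normal((n_is, dim))
log_w = -0.5 * (x ** 2).sum(axis=1) + 0.5 * ((x - mu_is) ** 2).sum(axis=1)
p_is = np.mean((g(x) < 0.0) * np.exp(log_w))

# References: crude Monte Carlo and the exact value for this linear limit state.
p_mc = np.mean(g(rng.standard_normal((1_000_000, dim))) < 0.0)
p_exact = norm.cdf(-5.0 / np.sqrt(dim))
print(f"importance sampling: {p_is:.2e}, crude MC: {p_mc:.2e}, exact: {p_exact:.2e}")
```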

  6. Paraconsistent Probabilities: Consistency, Contradictions and Bayes’ Theorem

    Directory of Open Access Journals (Sweden)

    Juliana Bueno-Soler

    2016-09-01

    Full Text Available This paper represents the first steps towards constructing a paraconsistent theory of probability based on the Logics of Formal Inconsistency (LFIs). We show that LFIs encode very naturally an extension of the notion of probability able to express sophisticated probabilistic reasoning under contradictions employing appropriate notions of conditional probability and paraconsistent updating, via a version of Bayes’ theorem for conditionalization. We argue that the dissimilarity between the notions of inconsistency and contradiction, one of the pillars of LFIs, plays a central role in our extended notion of probability. Some critical historical and conceptual points about probability theory are also reviewed.

  7. Investigating the Differences of Single-Vehicle and Multivehicle Accident Probability Using Mixed Logit Model

    Directory of Open Access Journals (Sweden)

    Bowen Dong

    2018-01-01

    Full Text Available Road traffic accidents are believed to be associated not only with road geometric features and traffic characteristics, but also with weather conditions. To address these safety issues, it is of paramount importance to understand how these factors affect the occurrences of crashes. Existing studies have suggested that the mechanisms of single-vehicle (SV) accidents and multivehicle (MV) accidents can be very different. Few studies have examined the differences in SV and MV accident probability while also addressing unobserved heterogeneity. To investigate the different contributing factors for SV and MV accidents, a mixed logit model is employed using disaggregated data with the response variable categorized as no accidents, SV accidents, and MV accidents. The results indicate that, in addition to speed gap, length of segment, and wet road surfaces, which are significant for both SV and MV accidents, most of the other variables are significant only for MV accidents. Traffic, road, and surface characteristics are the main factors influencing SV and MV accident probability. Hourly traffic volume, inside shoulder width, and wet road surface are found to produce statistically significant random parameters. Their effects on the probability of SV and MV accidents vary across different road segments.

  8. Superthermal photon bunching in terms of simple probability distributions

    Science.gov (United States)

    Lettau, T.; Leymann, H. A. M.; Melcher, B.; Wiersig, J.

    2018-05-01

    We analyze the second-order photon autocorrelation function g(2 ) with respect to the photon probability distribution and discuss the generic features of a distribution that results in superthermal photon bunching [g(2 )(0 ) >2 ]. Superthermal photon bunching has been reported for a number of optical microcavity systems that exhibit processes such as superradiance or mode competition. We show that a superthermal photon number distribution cannot be constructed from the principle of maximum entropy if only the intensity and the second-order autocorrelation are given. However, for bimodal systems, an unbiased superthermal distribution can be constructed from second-order correlations and the intensities alone. Our findings suggest modeling superthermal single-mode distributions by a mixture of a thermal and a lasinglike state and thus reveal a generic mechanism in the photon probability distribution responsible for creating superthermal photon bunching. We relate our general considerations to a physical system, i.e., a (single-emitter) bimodal laser, and show that its statistics can be approximated and understood within our proposed model. Furthermore, the excellent agreement of the statistics of the bimodal laser and our model reveals that the bimodal laser is an ideal source of bunched photons, in the sense that it can generate statistics that contain no other features but the superthermal bunching.
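
    The mixture picture suggested above is easy to verify numerically: for a photon-number distribution that mixes a thermal (Bose-Einstein) component with a Poissonian ("lasing-like") component, g(2)(0) = <n(n-1)>/<n>² can exceed 2 even though each component alone gives 2 or 1. The mean photon numbers and mixing weights below are assumptions chosen only to illustrate the effect.

```python
import numpy as np
from scipy.stats import poisson

n = np.arange(0, 400)

def thermal_pmf(nbar):
    """Bose-Einstein (thermal) photon-number distribution with mean nbar."""
    return (nbar / (1.0 + nbar)) ** n / (1.0 + nbar)

def g2_zero(pmf):
    mean = np.sum(n * pmf)
    return np.sum(n * (n - 1) * pmf) / mean ** 2

p_th = thermal_pmf(10.0)            # bright thermal component
p_coh = poisson.pmf(n, 1.0)         # dim Poissonian ("lasing-like") component

print(f"thermal alone:    g2(0) = {g2_zero(p_th):.2f}")   # close to 2
print(f"Poissonian alone: g2(0) = {g2_zero(p_coh):.2f}")  # close to 1
for a in (0.2, 0.5, 0.8):
    mix = a * p_th + (1.0 - a) * p_coh
    print(f"mixture, thermal weight {a}: g2(0) = {g2_zero(mix):.2f}")
```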

  9. A method for the calculation of the cumulative failure probability distribution of complex repairable systems

    International Nuclear Information System (INIS)

    Caldarola, L.

    1976-01-01

    A method is proposed for the analytical evaluation of the cumulative failure probability distribution of complex repairable systems. The method is based on a set of integral equations each one referring to a specific minimal cut set of the system. Each integral equation links the unavailability of a minimal cut set to its failure probability density distribution and to the probability that the minimal cut set is down at the time t under the condition that it was down at time t'(t'<=t). The limitations for the applicability of the method are also discussed. It has been concluded that the method is applicable if the process describing the failure of a minimal cut set is a 'delayed semi-regenerative process'. (Auth.)

  10. Comparison of the Screening Tests for Gestational Diabetes Mellitus between "One-Step" and "Two-Step" Methods among Thai Pregnant Women.

    Science.gov (United States)

    Luewan, Suchaya; Bootchaingam, Phenphan; Tongsong, Theera

    2018-01-01

    To compare the prevalence and pregnancy outcomes of GDM between those screened by the "one-step" (75 gm GTT) and "two-step" (100 gm GTT) methods. A prospective study was conducted on singleton pregnancies at low or average risk of GDM. All were screened between 24 and 28 weeks, using the one-step or two-step method based on patients' preference. The primary outcome was prevalence of GDM, and secondary outcomes included birthweight, gestational age, rates of preterm birth, small/large-for-gestational age, low Apgar scores, cesarean section, and pregnancy-induced hypertension. A total of 648 women were screened: 278 in the one-step group and 370 in the two-step group. The prevalence of GDM was significantly higher in the one-step group: 32.0% versus 10.3%. Baseline characteristics and pregnancy outcomes in both groups were comparable. However, mean birthweight was significantly higher among pregnancies with GDM diagnosed by the two-step approach (3204 ± 555 versus 3009 ± 666 g; p = 0.022). Likewise, the rate of large-for-date infants tended to be higher in the two-step group, but the difference was not significant. The one-step approach is associated with a very high prevalence of GDM among the Thai population, without clear evidence of better outcomes. Thus, this approach may not be appropriate for screening in a busy antenatal care clinic like our setting or other centers in developing countries.

  11. Improved Full-Newton Step O(nL) Infeasible Interior-Point Method for Linear Optimization

    OpenAIRE

    Gu, G.; Mansouri, H.; Zangiabadi, M.; Bai, Y.Q.; Roos, C.

    2009-01-01

    We present several improvements of the full-Newton step infeasible interior-point method for linear optimization introduced by Roos (SIAM J. Optim. 16(4):1110–1136, 2006). Each main step of the method consists of a feasibility step and several centering steps. We use a more natural feasibility step, which targets the μ+-center of the next pair of perturbed problems. As for the centering steps, we apply a sharper quadratic convergence result, which leads to a slightly wider neighborhood for th...

  12. Generation of pseudo-random numbers

    Science.gov (United States)

    Howell, L. W.; Rheinfurth, M. H.

    1982-01-01

    Practical methods for generating acceptable random numbers from a variety of probability distributions which are frequently encountered in engineering applications are described. The speed, accuracy, and guarantee of statistical randomness of the various methods are discussed.
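
    One of the standard techniques in this category is the inverse-transform method, sketched below for the exponential distribution: a uniform variate U is mapped through the inverse cumulative distribution function, here X = -ln(1 - U)/λ. This is a generic illustration rather than one of the report's specific recipes.

```python
import numpy as np

rng = np.random.default_rng(42)

def exponential_inverse_transform(rate: float, size: int) -> np.ndarray:
    """X = -ln(1 - U)/rate follows Exp(rate) when U ~ Uniform[0, 1)."""
    u = rng.uniform(size=size)
    return -np.log1p(-u) / rate

rate = 2.0
x = exponential_inverse_transform(rate, 100_000)
print(f"sample mean {x.mean():.4f} vs theoretical {1.0 / rate:.4f}")
print(f"sample var  {x.var():.4f} vs theoretical {1.0 / rate ** 2:.4f}")
```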

  13. Transmission probability method based on triangle meshes for solving unstructured geometry neutron transport problem

    Energy Technology Data Exchange (ETDEWEB)

    Wu Hongchun [Nuclear Engineering Department, Xi' an Jiaotong University, Xi' an 710049, Shaanxi (China)]. E-mail: hongchun@mail.xjtu.edu.cn; Liu Pingping [Nuclear Engineering Department, Xi' an Jiaotong University, Xi' an 710049, Shaanxi (China); Zhou Yongqiang [Nuclear Engineering Department, Xi' an Jiaotong University, Xi' an 710049, Shaanxi (China); Cao Liangzhi [Nuclear Engineering Department, Xi' an Jiaotong University, Xi' an 710049, Shaanxi (China)

    2007-01-15

    In advanced reactors, fuel assemblies or cores with unstructured geometry are frequently used, and the transmission probability method (TPM) has been widely used to calculate such fuel assemblies. However, rectangular or hexagonal meshes are mainly used in TPM codes for normal core structures, whereas triangular meshes are the most useful for representing complicated unstructured geometry. Although the finite element method and the Monte Carlo method are very good at solving unstructured geometry problems, they are very time-consuming. We therefore developed a TPM code based on triangular meshes. The TPM code based on triangular meshes was applied to the hybrid fuel geometry and compared with the results of the MCNP code and other codes. The results of the comparison were consistent with each other. The TPM with triangular meshes would thus be expected to be applicable to arbitrary two-dimensional fuel assemblies.

  14. Introduction to probability

    CERN Document Server

    Freund, John E

    1993-01-01

    Thorough, lucid coverage of permutations and factorials, probabilities and odds, frequency interpretation, mathematical expectation, decision making, postulates of probability, rule of elimination, binomial distribution, geometric distribution, standard deviation, law of large numbers, and much more. Exercises with some solutions. Summary. Bibliography. Includes 42 black-and-white illustrations. 1973 edition.

  15. Calculation of parameter failure probability of thermodynamic system by response surface and importance sampling method

    International Nuclear Information System (INIS)

    Shang Yanlong; Cai Qi; Chen Lisheng; Zhang Yangwei

    2012-01-01

    In this paper, the combined method of response surface and importance sampling was applied to the calculation of the parameter failure probability of a thermodynamic system. The mathematical model was presented for the parameter failure of the physical process in the thermodynamic system, from which the combined computational model of response surface and importance sampling was established; the performance degradation model of the components and the simulation process of parameter failure in the physical process of the thermodynamic system were also presented. The parameter failure probability of the purification water system in a nuclear reactor was obtained by the combined method. The results show that the combined method is effective for calculating the parameter failure probability of a thermodynamic system with high dimensionality and non-linear characteristics, because it achieves satisfactory precision with less computing time than the direct sampling method while avoiding the drawbacks of the response surface method. (authors)
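
    The response-surface ingredient of such a combined approach can be sketched as follows: a quadratic polynomial surrogate is fitted to a (notionally expensive) limit-state function from a small set of evaluations, and the failure probability is then estimated cheaply by sampling the surrogate. The limit-state function, design size and distributions below are assumptions for illustration, not the thermodynamic system model of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def g_true(x):
    """Stand-in for the expensive system model; failure when g < 0 (assumed)."""
    return 4.0 - x[:, 0] - 0.5 * x[:, 1] ** 2

def quadratic_features(x):
    x1, x2 = x[:, 0], x[:, 1]
    return np.column_stack([np.ones(len(x)), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

# Small design of experiments and least-squares fit of the quadratic surrogate.
x_doe = 2.0 * rng.standard_normal((50, 2))
coef, *_ = np.linalg.lstsq(quadratic_features(x_doe), g_true(x_doe), rcond=None)

# Cheap Monte Carlo on the surrogate, compared against sampling the true model.
x_mc = rng.standard_normal((1_000_000, 2))
p_surrogate = np.mean(quadratic_features(x_mc) @ coef < 0.0)
p_direct = np.mean(g_true(x_mc) < 0.0)
print(f"failure probability: surrogate {p_surrogate:.2e}, direct {p_direct:.2e}")
```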

  16. COVAL, Compound Probability Distribution for Function of Probability Distribution

    International Nuclear Information System (INIS)

    Astolfi, M.; Elbaz, J.

    1979-01-01

    1 - Nature of the physical problem solved: Computation of the probability distribution of a function of variables, given the probability distribution of the variables themselves. 'COVAL' has been applied to reliability analysis of a structure subject to random loads. 2 - Method of solution: Numerical transformation of probability distributions
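
    The problem COVAL addresses, obtaining the distribution of a function of random variables from the distributions of the variables themselves, can be illustrated by brute-force Monte Carlo on a toy load/resistance margin, as below. Note that COVAL itself performs a numerical transformation of the distributions rather than sampling; the variables and parameters here are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000

resistance = rng.lognormal(mean=np.log(50.0), sigma=0.10, size=n)   # structural capacity
load = rng.gumbel(loc=30.0, scale=4.0, size=n)                      # random load

margin = resistance - load            # the function of the random variables
print(f"P(margin < 0) ~ {np.mean(margin < 0.0):.2e}")
print("margin percentiles (5/50/95):", np.round(np.percentile(margin, [5, 50, 95]), 2))
```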

  17. Decision making with consonant belief functions: Discrepancy resulting with the probability transformation method used

    Directory of Open Access Journals (Sweden)

    Cinicioglu Esma Nur

    2014-01-01

    Full Text Available Dempster-Shafer belief function theory can address a wider class of uncertainty than standard probability theory does, and this fact appeals to researchers in the operations research community looking for potential application areas. However, the lack of a decision theory of belief functions gives rise to the need to use probability transformation methods for decision making. For the representation of statistical evidence, the class of consonant belief functions is used, which is not closed under Dempster’s rule of combination but is closed under Walley’s rule of combination. In this research, it is shown that the outcomes obtained using both Dempster’s and Walley’s rules do result in different probability distributions when the pignistic transformation is used. However, when the plausibility transformation is used, they do result in the same probability distribution. This result shows that the choice of the combination rule and probability transformation method may have a significant effect on decision making since it may change the decision alternative selected. This result is illustrated via an example of missile type identification.
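
    The two transformations compared in the abstract are easy to state concretely: the pignistic transformation spreads each mass m(A) uniformly over the elements of A, whereas the plausibility transformation normalizes the singleton plausibilities. The small consonant (nested-focal-element) mass function below is an assumed example, not one taken from the paper.

```python
frame = ["a", "b", "c"]
# Consonant body of evidence: nested focal elements {a} ⊂ {a, b} ⊂ {a, b, c} (assumed masses).
mass = {frozenset("a"): 0.5, frozenset("ab"): 0.3, frozenset("abc"): 0.2}

def pignistic(mass, frame):
    """BetP(x) = sum over focal sets A containing x of m(A) / |A|."""
    return {x: sum(m / len(A) for A, m in mass.items() if x in A) for x in frame}

def plausibility_transform(mass, frame):
    """Normalize the singleton plausibilities Pl({x}) = sum of m(A) over A containing x."""
    pl = {x: sum(m for A, m in mass.items() if x in A) for x in frame}
    total = sum(pl.values())
    return {x: v / total for x, v in pl.items()}

print("pignistic transformation:   ", pignistic(mass, frame))
print("plausibility transformation:", plausibility_transform(mass, frame))
```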

  18. Single Laboratory Validated Method for Determination of Cylindrospermopsin and Anatoxin-a in Ambient Water by Liquid Chromatography/ Tandem Mass Spectrometry (LC/MS/MS)

    Science.gov (United States)

    This product is an LC/MS/MS single laboratory validated method for the determination of cylindrospermopsin and anatoxin-a in ambient waters. The product contains step-by-step instructions for sample preparation, analyses, preservation, sample holding time and QC protocols to ensu...

  19. Q-Step methods for Newton-Jacobi operator equation | Uwasmusi ...

    African Journals Online (AJOL)

    The paper considers the Newton-Jacobi operator equation for the solution of nonlinear systems of equations. Special attention is paid to the computational part of this method with particular reference to the q-step methods. Journal of the Nigerian Association of Mathematical Physics Vol. 8 2004: pp. 237-241 ...

  20. Probabilistic risk assessment on maritime spent nuclear fuel transportation (Part II: Ship collision probability)

    International Nuclear Information System (INIS)

    Christian, Robby; Kang, Hyun Gook

    2017-01-01

    This paper proposes a methodology to assess and reduce risks of maritime spent nuclear fuel transportation with a probabilistic approach. Event trees detailing the progression of collisions leading to transport casks’ damage were constructed. Parallel and crossing collision probabilities were formulated based on the Poisson distribution. Automatic Identification System (AIS) data were processed with the Hough Transform algorithm to estimate possible intersections between the shipment route and the marine traffic. Monte Carlo simulations were done to compute collision probabilities and impact energies at each intersection. Possible safety improvement measures through a proper selection of operational transport parameters were investigated. These parameters include shipment routes, ship's cruise velocity, number of transport casks carried in a shipment, the casks’ stowage configuration and loading order on board the ship. A shipment case study is presented. Waters with high collision probabilities were identified. Effective range of cruising velocity to reduce collision risks were discovered. The number of casks in a shipment and their stowage method which gave low cask damage frequencies were obtained. The proposed methodology was successful in quantifying ship collision and cask damage frequency. It was effective in assisting decision making processes to minimize risks in maritime spent nuclear fuel transportation. - Highlights: • Proposes a probabilistic framework on the safety of spent nuclear fuel transportation by sea. • Developed a marine traffic simulation model using Generalized Hough Transform (GHT) algorithm. • A transportation case study on South Korean waters is presented. • Single-vessel risk reduction method is outlined by optimizing transport parameters.
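
    A Poisson-type collision probability calculation of the kind formulated in the paper can be sketched as follows: if geometric collision candidates along a transit arrive at rate λ and each becomes an actual collision with a causation (failure-to-avoid) probability Pc, the probability of at least one collision per transit is 1 - exp(-λ·Pc). The rates and causation factor below are assumed numbers, not values from the study.

```python
import math

def collision_probability(candidates_per_transit: float, causation_prob: float) -> float:
    """P(at least one collision per transit) under a Poisson encounter model."""
    return 1.0 - math.exp(-candidates_per_transit * causation_prob)

causation_prob = 1.0e-4                    # assumed per-encounter causation factor
for candidates in (5.0, 20.0, 80.0):       # assumed crossing-traffic intensities
    p = collision_probability(candidates, causation_prob)
    print(f"{candidates:5.1f} candidates/transit -> P(collision) = {p:.2e}")
```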